OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.
Building a trauma-informed algorithmic assessment toolkit
Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. Concurrently, if not well considered, automation also provides ways to generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, with concomitant emerging governmental and organisational regulatory settings and associated professionals.
The research outlined in this report took a different approach. Building on work in social services on trauma-informed practice, researchers identified key principles and a practical framework that framed AI design, development and deployment as a reflective, constructive exercise, resulting in algorithm-supported services that are cognisant and inclusive of the diversity of human experience, particularly of those who have experienced trauma. This study produced a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.
This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation, design, development, piloting, deployment, or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the Toolkit will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.
AI for social good: Improving lives and protecting the planet
McKinsey Report: “…Challenges in scaling AI for social-good initiatives are persistent and tough. Seventy-two percent of the respondents to our expert survey observed that most efforts to deploy AI for social good to date have focused on research and innovation rather than adoption and scaling. Fifty-five percent of grants for AI research and deployment across the SDGs are $250,000 or smaller, which is consistent with a focus on targeted research or smaller-scale deployment, rather than large-scale expansion. Aside from funding, the biggest barriers to scaling AI continue to be data availability, accessibility, and quality; AI talent availability and accessibility; organizational receptiveness; and change management. More on these topics can be found in the full report.
While overcoming these challenges, organizations should also be aware of strategies to address the range of risks, including inaccurate outputs, biases embedded in the underlying training data, the potential for large-scale misinformation, and malicious influence on politics and personal well-being. As we have noted in multiple recent articles, AI tools and techniques can be misused, even if the tools were originally designed for social good. Experts identified the top risks as impaired fairness, malicious use, and privacy and security concerns, followed by explainability (Exhibit 2). Respondents from not-for-profits expressed relatively more concern about misinformation, talent issues such as job displacement, and effects of AI on economic stability compared with their counterparts at for-profits, who were more often concerned with IP infringement…(More)”
Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing)
Book by Salman Khan: “…explores how artificial intelligence and GPT technology will transform learning, and offers a road map for teachers, parents, and students to navigate this exciting (and sometimes intimidating) new world.
A pioneer in the field of education technology, Khan examines the ins and outs of these cutting-edge tools and how they will revolutionize the way we learn and teach. For parents concerned about their children’s success, Khan illustrates how AI can personalize learning by adapting to each student’s individual pace and style, identifying strengths and areas for improvement, and offering tailored support and feedback to complement traditional classroom instruction. Khan emphasizes that embracing AI in education is not about replacing human interaction but enhancing it with customized and accessible learning tools that encourage creative problem-solving skills and prepare students for an increasingly digital world.
But Brave New Words is not just about technology—it’s about what this technology means for our society, and the practical implications for administrators, guidance counselors, and hiring managers who can harness the power of AI in education and the workplace. Khan also delves into the ethical and social implications of AI and large language models, offering thoughtful insights into how we can use these tools to build a more accessible education system for students around the world…(More)”.
US Senate AI Working Group Releases Policy Roadmap
Article by Gabby Miller: “On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY), Sen. Mike Rounds (R-SD), Sen. Martin Heinrich (D-NM), and Sen. Todd Young (R-IN) released a report titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.” The 31-page report follows a series of off-the-record “educational briefings,” including “the first ever all-senators classified briefing focused solely on AI,” and nine “AI Insight Forums” hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.
The report makes a number of recommendations on funding priorities, the development of new legislation, and areas that require further exploration. It also encourages the executive branch to share information “in a timely fashion and on an ongoing basis” about its AI priorities and “any AI-related Memorandums of Understanding with other countries and the results from any AI-related studies in order to better inform the legislative process.”…(More)”.
Artificial Intelligence and the Skill Premium
Paper by David E. Bloom et al: “How will the emergence of ChatGPT and other forms of artificial intelligence (AI) affect the skill premium? To address this question, we propose a nested constant elasticity of substitution production function that distinguishes among three types of capital: traditional physical capital (machines, assembly lines), industrial robots, and AI. Following the literature, we assume that industrial robots predominantly substitute for low-skill workers, whereas AI mainly helps to perform the tasks of high-skill workers. We show that AI reduces the skill premium as long as it is more substitutable for high-skill workers than low-skill workers are for high-skill workers…(More)”
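To make the production structure concrete, a minimal sketch of a nested CES specification of this kind follows; the notation and exact nesting are illustrative assumptions, not necessarily the paper's own:

```latex
% Illustrative nested CES with three capital types (assumed notation):
% L = low-skill labor, H = high-skill labor,
% K = traditional capital, R = industrial robots, A = AI.
% Robots bundle with low-skill labor; AI bundles with high-skill labor.
X_L = \left(\theta L^{\rho} + (1-\theta) R^{\rho}\right)^{1/\rho},
\qquad
X_H = \left(\phi H^{\sigma} + (1-\phi) A^{\sigma}\right)^{1/\sigma}
% The two bundles combine with traditional capital into final output:
Y = K^{\alpha}\,\left(\gamma X_L^{\eta} + (1-\gamma) X_H^{\eta}\right)^{(1-\alpha)/\eta}
% The skill premium is the ratio of marginal products:
w_H / w_L = \frac{\partial Y / \partial H}{\partial Y / \partial L}
```

In a structure like this, whether accumulating AI capital raises or lowers the premium turns on how the within-bundle elasticity between AI and high-skill labor compares with the elasticity between the two labor bundles, which is the comparison the authors' substitutability condition formalizes.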
Artificial intelligence and complex sustainability policy problems: translating promise into practice
Paper by Ruby O’Connor et al: “Addressing sustainability policy challenges requires tools that can navigate complexity for better policy processes and outcomes. Attention to Artificial Intelligence (AI) tools and expectations for their use by governments have dramatically increased over the past decade. We conducted a narrative review of academic and grey literature to investigate how AI tools are being used and adapted for policy and public sector decision-making. We found that academics, governments, and consultants expressed positive expectations about AI, arguing that AI could or should be used to address a wide range of policy challenges. However, there is much less evidence of how public decision makers are actually using AI tools or detailed insight into the outcomes of use. From our findings we draw four lessons for translating the promise of AI into practice: 1) Document and evaluate AI’s application to sustainability policy problems in the real world; 2) Focus on existing and mature AI technologies, not speculative promises or external pressures; 3) Start with the problem to be solved, not the technology to be applied; and 4) Anticipate and adapt to the complexity of sustainability policy problems…(More)”.
Automatic Generation of Model and Data Cards: A Step Towards Responsible AI
Paper by Jiarui Liu, Wenkai Li, Zhijing Jin, Mona Diab: “In an era of model and data proliferation in machine learning/AI, marked especially by the rapid advancement of open-source technologies, there arises a critical need for standardized, consistent documentation. Our work addresses the information incompleteness in current human-generated model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline, which comprises a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability…(More)”.
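As a rough illustration of what a two-step retrieve-then-generate pipeline can look like, here is a minimal Python sketch; all names and the retrieval method (TF-IDF) are assumptions for illustration, and the actual CardGen implementation may differ substantially:

```python
# Minimal sketch of a two-step retrieve-then-generate card pipeline.
# All names (SECTION_QUERIES, top_k, generate_card) are hypothetical;
# the actual CardGen pipeline may differ substantially.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Card sections to fill, each phrased as a retrieval query.
SECTION_QUERIES = {
    "Intended Use": "What tasks and users is this model intended for?",
    "Training Data": "What data was the model trained on?",
    "Limitations": "What are the model's known limitations and risks?",
}

def top_k(query: str, texts: list[str], k: int) -> list[str]:
    """Rank `texts` against `query` by TF-IDF cosine similarity."""
    vec = TfidfVectorizer().fit(texts + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    return [texts[i] for i in scores.argsort()[::-1][:k]]

def generate_card(repo_docs: dict[str, str]) -> dict[str, str]:
    """Build a card section by section from a repo's documentation files."""
    card = {}
    for section, query in SECTION_QUERIES.items():
        # Step 1: retrieve the most relevant documents (README, paper, configs).
        docs = top_k(query, list(repo_docs.values()), k=2)
        # Step 2: retrieve the most relevant passages within those documents.
        paragraphs = [p for d in docs for p in d.split("\n\n") if p.strip()]
        passages = top_k(query, paragraphs, k=3)
        # A real pipeline would now prompt an LLM with these passages;
        # here we just record the evidence that would ground generation.
        card[section] = " ".join(passages)
    return card

if __name__ == "__main__":
    demo = {
        "README.md": "This model classifies product images.\n\n"
                     "It was trained on a public image dataset.\n\n"
                     "It is not suitable for safety-critical use.",
    }
    print(generate_card(demo))
```

The design point such a two-step process suggests is grounding generation in evidence at two granularities (first documents, then passages), so each card section is anchored to concrete source text rather than to the LLM's parametric memory.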
We don’t need an AI manifesto — we need a constitution
Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…
In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.
People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.
Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.
A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI
Report by Hannah Chafetz, Sampriti Saxena, and Stefaan G. Verhulst: “Since late 2022, generative AI services and large language models (LLMs) have transformed how many individuals access and process information. However, how generative AI and LLMs can be augmented with open data from official sources, and how open data can be made more accessible with generative AI – potentially enabling a Fourth Wave of Open Data – remains an underexplored area.
For these reasons, The Open Data Policy Lab (a collaboration between The GovLab and Microsoft) decided to explore the possible intersections between open data from official sources and generative AI. Throughout the last year, the team has conducted a range of research initiatives about the potential of open data and generative AI, including a panel discussion, interviews, and Open Data Action Labs – a series of design sprints with a diverse group of industry experts.
These initiatives were used to inform our latest report, “A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI,” (May 2024) which provides a new framework and recommendations to support open data providers and other interested parties in making open data “ready” for generative AI…
The report outlines five scenarios in which open data from official sources (e.g. open government and open research data) and generative AI can intersect. Each of these scenarios includes case studies from the field and a specific set of requirements that open data providers can focus on to become ready for a scenario. These include…(More)” (arXiv).