Report by Danielle Goldfarb: “From collecting millions of online prices to measure inflation, to assessing the economic impact of the COVID-19 pandemic on low-income workers, digital data sets can be used to benefit the public interest. Using these and other examples, this special report explores how digital data sets and advances in artificial intelligence (AI) can provide timely, transparent and detailed insights into global challenges. These experiments illustrate how governments and civil society analysts can reuse digital data to spot emerging problems, analyze impacts on specific groups, complement traditional metrics or verify data that may be manipulated. AI and data governance should extend beyond addressing harms. International institutions and governments need to actively steward digital data and AI tools to support a step change in our understanding of society’s biggest challenges…(More)”
Recommendations for Better Sharing of Climate Data
Creative Commons: “…the culmination of a nine-month research initiative from our Open Climate Data project. These guidelines are a result of collaboration between Creative Commons, government agencies and intergovernmental organizations. They mark a significant milestone in our ongoing effort to enhance the accessibility, sharing, and reuse of open climate data to address the climate crisis. Our goal is to share strategies that align with existing data sharing principles and pave the way for a more interconnected and accessible future for climate data.
Our recommendations offer practical steps and best practices, crafted in collaboration with key stakeholders and organizations dedicated to advancing open practices in climate data. We provide recommendations for 1) legal and licensing terms, 2) using metadata values for attribution and provenance, and 3) management and governance for better sharing.
Opening climate data requires an examination of the public’s legal rights to access and use the climate data, often dictated by copyright and licensing. This legal detail is sometimes missing from conversations about climate data sharing and legal interoperability. Our recommendations suggest two options: Option A, CC0 plus an attribution request, which maximizes reuse by dedicating climate data to the public domain while still requesting attribution; and Option B, CC BY 4.0, which retains data ownership and allows legal enforcement of attribution. We address how to navigate license stacking and attribution stacking, both for climate data hosts and for users working with multiple climate data sources.
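To see why stacking matters in practice, here is a minimal sketch, with hypothetical dataset names and a hypothetical helper function (not drawn from the report): a derived product that combines a CC0 source and a CC BY 4.0 source must carry attribution for both, and the stricter CC BY 4.0 terms govern the combination.

```python
# Hypothetical sketch of attribution stacking: a product derived from several
# climate data sources carries forward each source's attribution, and the
# combined work must satisfy the most restrictive of the stacked licenses.
# Dataset names, licenses, and the helper below are illustrative assumptions,
# not taken from the Creative Commons recommendations.
sources = [
    {"name": "Dataset A", "license": "CC0-1.0", "attribution": "Agency A (requested)"},
    {"name": "Dataset B", "license": "CC-BY-4.0", "attribution": "Agency B (required)"},
]

def stacked_attribution(sources: list[dict]) -> str:
    """Collect every upstream attribution into a single notice for the derived work."""
    return "; ".join(f"{s['name']}: {s['attribution']}" for s in sources)

print(stacked_attribution(sources))
# -> Dataset A: Agency A (requested); Dataset B: Agency B (required)
```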
We also propose standardized human- and machine-readable metadata values that enhance transparency, reduce guesswork, and ensure broader accessibility to climate data. We built upon existing model metadata schemas and standards, including those that address license and attribution information. These recommendations address a gap by providing a metadata schema that standardizes the inclusion of upfront, clear values for attribution, licensing and provenance.
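As a rough illustration of what such upfront metadata values might look like, the record below is a hypothetical sketch; the field names are our own assumptions, not the schema the recommendations define. Stating license, attribution and provenance upfront in this way spares reusers the guesswork described above.

```python
# Hypothetical machine-readable metadata record for a climate data set.
# Field names are illustrative; the report's actual schema may differ.
dataset_metadata = {
    "title": "Example Gridded Temperature Anomalies",
    "publisher": "Example Climate Agency",
    "license": "CC0-1.0",                    # Option A: public-domain dedication
    "attribution_request": "Please credit Example Climate Agency",
    "provenance": {
        "derived_from": ["station-observations-v2"],  # upstream sources, for attribution stacking
        "date_published": "2024-01-15",
        "processing_notes": "Aggregated from station records; quality controlled",
    },
}
```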
Lastly, we highlight four key aspects of effective climate data management: designating a dedicated technical managing steward, designating a legal and/or policy steward, encouraging collaborative data sharing, and regularly revisiting and updating data sharing policies in accordance with parallel open data policies and standards…(More)”.
Network architecture for global AI policy
Article by Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff: “We see efforts to consolidate international AI governance as premature and ill-suited to the immense, complex, and novel challenges of governing advanced AI, and we see the current diverse and decentralized efforts as beneficial and the best fit for this complex and rapidly developing technology.
Exploring the vast terra incognita of AI, realizing its opportunities, and managing its risks requires governance that can adapt and respond rapidly to AI risks as they emerge, develop deep understanding of the technology and its implications, and mobilize diverse resources and initiatives to address the growing global demand for access to AI. No one government or body will have the capacity to take on these challenges without building multiple coalitions and working closely with experts and institutions in industry, philanthropy, civil society, and the academy.
A distributed network of networks can more effectively address the challenges and opportunities of AI governance than a centralized system. Like the architecture of the interconnected information technology systems on which AI depends, such a decentralized system can bring to bear redundancy, resiliency, and diversity by channeling the functions of AI governance toward the most timely and effective pathways in iterative and diversified processes, providing agility against setbacks or failures at any single point. These multiple centers of effort can harness the benefit of network effects and parallel processing.
We explore this model of distributed and iterative AI governance below…(More)”.
So You’ve Decided To Carry Your Brain Around
Article by Nicholas Clairmont: “If the worry during the Enlightenment, as mathematician Isaac Milner wrote in 1794, was that ‘the great and high’ have ‘forgotten that they have souls,’ then today the worry is that many of us have forgotten that we have bodies.” So writes Christine Rosen, senior fellow at the American Enterprise Institute and senior editor of this journal, in her new book, The Extinction of Experience: Being Human in a Disembodied World.
A sharp articulation of the problem, attributed to Thomas Edison, is that “the chief function of the body is to carry the brain around.” Today, the “brain” can be cast virtually into text or voice communication with just about anyone on Earth, and information and entertainment can be delivered almost immediately to wherever a brain happens to be carried around. But we forget how recently this became possible.
Can it really be less than two decades ago that life started to be revolutionized by the smartphone, the technology that made it possible for people of Edison’s persuasion to render the body seemingly redundant? The iPhone was released in 2007. But even by 2009, according to Pew Research, only a third of American adults “had at some point used the internet on their mobile device.” It wasn’t until 2012 that half did so at least occasionally. And then there is that other technology that took off over the same time period: Facebook and Twitter and Instagram and TikTok and the rest of the social networks that allow us to e-commune and that induce us to see everything we do in light of how it might look to others online.
For such a drastic and recent change, it is one we have largely accepted as just a fact. All the public hand-wringing about it has arguably not made a dent in our actual habits. And maybe that’s because we have underestimated the problem with how it has changed us…(More)”.
Public Policy Evaluation
Implementation Toolkit by the OECD: “…offers practical guidance for government officials and evaluators seeking to improve their evaluation capacities and systems, by enabling a deeper understanding of their strengths and weaknesses and learning from OECD member country experiences and trends. The toolkit supports the practical implementation of the principles contained in the 2022 OECD Recommendation on Public Policy Evaluation, which is the first international standard aimed at driving the establishment of robust institutions and practices that promote the use of public policy evaluations. Together, the Recommendation and this accompanying toolkit seek to help governments build a culture of continuous learning and evidence-informed policymaking, potentially leading to more impactful policies and greater trust in government action...(More)”.
The new politics of AI
Report by the IPPR: AI is fundamentally different from other technologies – it is set to unleash a vast number of highly sophisticated ‘artificial agents’ into the economy. AI systems that can take actions and make decisions are not just tools – they are actors. This can be a good thing. But it requires a novel type of policymaking and politics. Merely accelerating AI deployment and hoping it will deliver public value will likely be insufficient.
In this briefing, we outline how the summit constitutes the first event of a new era of AI policymaking that links AI policy to delivering public value. We argue that AI needs to be directed towards societies’ goals, via ‘mission-based policies’….(More)”.
Data Stewardship Decoded: Mapping Its Diverse Manifestations and Emerging Relevance at a time of AI
Paper by Stefaan Verhulst: “Data stewardship has become a critical component of modern data governance, especially with the growing use of artificial intelligence (AI). Despite its increasing importance, the concept of data stewardship remains ambiguous and varies in its application. This paper explores four distinct manifestations of data stewardship to clarify its emerging position in the data governance landscape. These manifestations include a) data stewardship as a set of competencies and skills, b) a function or role within organizations, c) an intermediary organization facilitating collaborations, and d) a set of guiding principles.
The paper subsequently outlines the core competencies required for effective data stewardship, explains the distinction between data stewards and Chief Data Officers (CDOs), and details the intermediary role of stewards in bridging gaps between data holders and external stakeholders. It also explores key principles aligned with the FAIR framework (Findable, Accessible, Interoperable, Reusable) and introduces the emerging principle of AI readiness to ensure data meets the ethical and technical requirements of AI systems.
The paper emphasizes the importance of data stewardship in enhancing data collaboration, fostering public value, and managing data reuse responsibly, particularly in the era of AI. It concludes by identifying challenges and opportunities for advancing data stewardship, including the need for standardized definitions, capacity building efforts, and the creation of a professional association for data stewardship…(More)”
Enhancing Access to and Sharing of Data in the Age of Artificial Intelligence
OECD Report: “Artificial intelligence (AI) is transforming economies and societies, but its full potential is hindered by poor access to quality data and models. Based on comprehensive country examples, the OECD report “Enhancing Access to and Sharing of Data in the Age of AI” shows how governments can enhance access to and sharing of data and certain AI models, while ensuring privacy and other rights and interests such as intellectual property rights. It draws on the OECD Recommendation on Enhancing Access to and Sharing of Data, which provides principles for balancing openness with effective legal, technical and organisational safeguards. This policy brief highlights the key findings of the report and their relevance for stakeholders seeking to promote trustworthy AI through better policies for data and AI models that drive trust, investment, innovation, and well-being….(More)”
Tech tycoons have got the economics of AI wrong
The Economist: “…The Jevons paradox—the idea that efficiency leads to more use of a resource, not less—has in recent days provided comfort to Silicon Valley titans worried about the impact of DeepSeek, the maker of a cheap and efficient Chinese chatbot, which threatens the more powerful but energy-guzzling American varieties. Satya Nadella, the boss of Microsoft, posted on X, a social-media platform, that “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” along with a link to the Wikipedia page for the economic principle. Under this logic, DeepSeek’s progress will mean more demand for data centres, Nvidia chips and even the nuclear reactors that the hyperscalers were, prior to the unveiling of DeepSeek, paying to restart. Nothing to worry about if the price falls: Microsoft can make it up on volume.
The logic, however self-serving, has a ring of truth to it. Jevons’s paradox is real and observable in a range of other markets. Consider the example of lighting. William Nordhaus, a Nobel-prizewinning economist, has calculated that a Babylonian oil lamp, powered by sesame oil, produced about 0.06 lumens of light per watt of energy. That compares with up to 110 lumens per watt for a modern light-emitting diode, an efficiency gain of roughly 1,800-fold. The world has not responded to this dramatic improvement in energy efficiency by enjoying the same amount of light as a Babylonian at lower cost. Instead, it has banished darkness completely, whether through more bedroom lamps than could have been imagined in ancient Mesopotamia or the Las Vegas sphere, which provides passersby with the chance to see a 112-metre-tall incandescent emoji. Urban light is now so cheap and so abundant that many consider it to be a pollutant.
Likewise, more efficient chatbots could mean that AI finds new uses (some no doubt similarly obnoxious). The ability of DeepSeek’s model to perform about as well as more compute-hungry American AI shows that data centres are more productive than previously thought, rather than less. Expect, the logic goes, more investment in data centres and so on than you did before.
Although this idea should provide tech tycoons with some solace, they still ought to worry. The Jevons paradox is a form of a broader phenomenon known as “rebound effects”. These are typically not large enough to fully offset savings from improved efficiency…. Basing the bull case for AI on the Jevons paradox is, therefore, a bet not on the efficiency of the technology but on the level of demand. If adoption is being held back by price, then efficiency gains will indeed lead to greater use. If technological progress raises expectations rather than reduces costs, as is typical in health care, then chatbots will make up an ever larger proportion of spending. At the moment, that looks unlikely. America’s Census Bureau finds that only 5% of American firms currently use AI and 7% have plans to adopt it in the future. Many others find the tech difficult to use or irrelevant to their line of business…(More)”.
Unlocking AI’s potential for the public sector
Article by Ruth Kelly: “…Government needs to work on its digital foundations. The extent of legacy IT systems across government is huge. Many were designed and built for a previous business era, and still rely on paper-based processes. Historic neglect and a lack of asset maintenance have added to the difficulty. Because many systems are not compatible, sharing data across systems requires manual extraction, which is risky and costly. All this adds to problems with data quality. Government suffers from data which is incomplete, inconsistent, inaccessible, difficult to process and not easily shareable. A lack of common data models, both between and within government departments, makes it difficult and costly to combine different sources of data, and significant manual effort is required to make data usable. Some departments have told us that they spend 60% to 80% of their time on cleaning data when carrying out analysis.
Why is this an issue for AI? Large volumes of good-quality data are important for training, testing and deploying AI models. Poor data leads to poor outcomes, especially where it involves personal data. Access to good-quality data was identified as a barrier to implementing AI by 62% of the 87 government bodies responding to our survey. Simple productivity improvements that integrate with routine administration (for example, summarising documents) are already possible, but integration with big, established legacy IT is a whole other long-term endeavour. Layering new technology on top of existing systems, and reusing poor-quality and ageing data, carries the risk of magnifying problems and further embedding reliance on legacy systems…(More)”