Press Release: “In May 2023, the United Nations System Chief Executives Board for Coordination endorsed International Data Governance – Pathways to Progress, developed through the High-level Committee on Programmes (HLCP), which approved the paper at its 45th session in March 2023. International Data Governance – Pathways to Progress and its addenda were developed by the HLCP Working Group on International Data Governance…(More)”. (See Annex 1: Mapping and Comparing Data Governance Frameworks).
Machines of mind: The case for an AI-powered productivity boom
Report by Martin Neil Baily, Erik Brynjolfsson, Anton Korinek: “Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general-purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society…(More)”.
Data portability and interoperability: A primer on two policy tools for regulation of digitized industries
Article by Sukhi Gulati-Gilbert and Robert Seamans: “…In this article we describe two other tools, data portability and interoperability, that may be particularly useful in technology-enabled sectors. Data portability allows users to move data from one company to another, helping to reduce switching costs and providing rival firms with access to valuable customer data. Interoperability allows two or more technical systems to exchange data interactively. Due to its interactive nature, interoperability can help prevent lock-in to a specific platform by allowing users to connect across platforms. Data portability and interoperability share some similarities; in addition to potential pro-competitive benefits, the tools promote values of openness, transparency, and consumer choice.
After providing an overview of these topics, we describe the tradeoffs involved in implementing data portability and interoperability. While these policy tools offer great promise, in practice there are many challenges in determining how to fund and design an implementation that is secure, intuitive, and accomplishes the intended result. These challenges require policymakers to think carefully about the initial implementation of data portability and interoperability. Finally, to better show how data portability and interoperability can increase competition in an industry, we discuss how they could be applied in the banking and social media sectors. These are just two examples of how data portability and interoperability policy could be applied across the many industries facing increased digitization. Our definitions and examples should be helpful to those interested in understanding the tradeoffs involved in using these tools to promote competition and innovation in the U.S. economy…(More)”. See also: Data to Go: The Value of Data Portability as a Means to Data Liquidity.
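The distinction the article draws can be sketched in code: portability is a one-off export in a common format that a rival firm can import, while interoperability is a standing connection over which systems exchange data on demand. The following is a minimal illustrative sketch only; the function names, the gateway class, and the schema label are hypothetical, not any regulator's mandated format or a real platform's API.

```python
import json

def export_user_data(user):
    """Portability: a one-time, user-initiated export in a structured,
    machine-readable format that a competing service can ingest."""
    return json.dumps({
        "schema": "example.org/portable-profile/v1",  # hypothetical schema tag
        "profile": {"name": user["name"], "email": user["email"]},
        "history": user.get("transactions", []),
    })

def import_user_data(blob):
    """The receiving platform imports the export once; after that,
    no ongoing link between the two services remains."""
    data = json.loads(blob)
    if data["schema"] != "example.org/portable-profile/v1":
        raise ValueError("unrecognized export format")
    return data["profile"], data["history"]

class InteropGateway:
    """Interoperability: platforms stay connected and serve each other's
    requests interactively -- the defining contrast with a one-off export."""
    def __init__(self):
        self.platforms = {}

    def register(self, name, fetch_fn):
        # Each platform exposes a fetch function conforming to a shared contract.
        self.platforms[name] = fetch_fn

    def fetch(self, platform, user_id):
        # A live, repeatable cross-platform query, not a static data dump.
        return self.platforms[platform](user_id)
```

In this toy framing, switching banks means running `export_user_data` once, while interoperable payments would mean both banks staying registered with a gateway and answering each other's queries indefinitely, which is why interoperability raises harder ongoing governance and security questions.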
Regulating Cross-Border Data Flows
Book by Bryan Mercurio and Ronald Yu: “Data is now one of the world’s most valuable resources, if not the most valuable. The adoption of data-driven applications across economic sectors has made data and the flow of data so pervasive that it has become integral to everything we as members of society do – from conducting our finances to operating businesses to powering the apps we use every day. For this reason, governing cross-border data flows is inherently difficult given the ubiquity and value of data, and the impact government policies can have on national competitiveness, business attractiveness and personal rights. The challenge for governments is to address in a coherent manner the broad range of data-related issues in the context of a global data-driven economy.
This book engages with the unexplored topic of why and how governments should develop a coherent and consistent strategic framework regulating cross-border data flows. The objective is to fill a very significant gap in the legal and policy setting by considering multiple perspectives in order to assist in the development of a jurisdiction’s coherent and strategic policy framework…(More)”.
3 barriers to successful data collaboratives
Article by Federico Bartolomucci: “Data collaboratives have proliferated in recent years as effective means of promoting the use of data for social good. This type of social partnership involves actors from the private, public, and not-for-profit sectors working together to leverage public or private data to enhance collective capacity to address societal and environmental challenges. The California Data Collaborative, for instance, combines the data of numerous Californian water managers to enhance data-informed policy and decision-making.
But, in my years as a researcher studying more than a hundred cases of data collaboratives, I have observed widespread feelings of isolation among collaborating partners due to the absence of success-proven reference models. …Below, I provide an overview of three governance challenges faced by practitioners, as well as recommendations for addressing them. In doing so, I encourage every practitioner embarking on a data collaborative initiative to reflect on these challenges and create ad-hoc strategies to address them…
1. Overly relying on grant funding limits a collaborative’s options.
Data collaboratives are typically conceived as not-for-profit projects, relying solely on grant funding from the founding partners. This is the case, for example, with T1D_Index, a global collaboration that seeks to gather data on Type 1 diabetes, raise awareness, and advance research on the topic. Although grant funding schemes work in some cases (as with T1D_Index), relying solely on grant funding makes a data collaborative heavily dependent on the willingness of one or more partners to sustain its activities and hinders its ability to achieve operational and decisional autonomy.
Operational and decisional autonomy indeed appears to be a beneficial condition for a collaborative to develop trust, involve other partners, and continuously adapt its activities and structure to external events—characteristics required for operating in a highly innovative sector.
Hybrid business models that combine grant funding with revenue-generating activities indicate a promising evolutionary path. The simplest way to do this is to monetize data analysis and data stewardship services. The ActNow Coalition, a U.S.-based not-for-profit organization, combines donations with client-funded initiatives in which the team provides data collection, analysis, and visualization services. Offering these types of services generates revenues for the collaborative and gaining access to them is among the most compelling incentives for partners to join the collaboration.
In studying data collaboratives around the world, two models emerge as most effective: (1) pay-per-use models, in which collaboration partners can access data-related services on demand (see Civity NL and their project Sniffer Bike) and (2) membership models, in which participation in the collaborative entitles partners to access certain services under predefined conditions (see the California Data Collaborative).
2. Demonstrating impact is key to a collaborative’s survival.
As partners’ participation in data collaboratives is primarily motivated by a shared social purpose, the collaborative’s ability to demonstrate its efficacy in achieving its purpose means being able to defend its raison d’être. Demonstrating impact enables collaboratives to retain existing partners, renew commitments, and recruit new partners…(More)”.
If good data is key to decarbonization, more than half of Asia’s economies are being locked out of progress, this report says
Blog by Ewan Thomson: “If measuring something is the first step towards understanding it, and understanding something is necessary to be able to improve it, then good data is the key to unlocking positive change. This is particularly true in the energy sector as it seeks to decarbonize.
But some countries have a data problem, according to energy think tank Ember and climate solutions enabler Subak’s Asia Data Transparency Report 2023, and this lack of open and reliable power-generation data is holding back the speed of the clean power transition in the region.
Asia is responsible for around 80% of global coal consumption, making it a big contributor to carbon emissions. Progress is being made on reducing these emissions, but without reliable data on power generation, measuring the rate of this progress will be challenging.
These charts show how different Asian economies are faring on data transparency on power generation and what can be done to improve both the quality and quantity of the data.
Over half of Asian countries lack reliable data in their power sectors, Ember says. Image: Ember
There are major data gaps in 24 out of the 39 Asian economies covered in the Ember research. This means it is unclear whether the energy needs of the nearly 700 million people in these 24 economies are being met with renewables or fossil fuels…(More)”.
AI Is Tearing Wikipedia Apart
Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed.
During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.
The concern is that machine-generated content would need to be balanced by extensive human review and could overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even to citing sources and academic papers that don’t exist. This often results in text summaries that seem accurate but, on closer inspection, are revealed to be completely fabricated.
“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” said Amy Bruckman, a professor at the Georgia Institute of Technology. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.”
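Part of that verification can be mechanized. As one hypothetical illustration (not a tool Wikipedia actually uses), a reviewer could at least confirm that a cited DOI is syntactically plausible and registered, using Crossref's public REST API, which returns HTTP 404 for unknown DOIs:

```python
import re
from urllib.request import urlopen
from urllib.error import HTTPError

# DOIs take the form "10.<4-9 digit prefix>/<suffix>".
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi):
    """Cheap syntactic check: many fabricated citations fail even this."""
    return bool(DOI_PATTERN.match(doi))

def doi_resolves(doi, timeout=10):
    """Ask Crossref whether the DOI is actually registered (network call)."""
    if not looks_like_doi(doi):
        return False
    try:
        with urlopen(f"https://api.crossref.org/works/{doi}", timeout=timeout):
            return True
    except HTTPError:
        return False  # Crossref answers 404 for unregistered DOIs
```

Of course, a DOI that resolves only proves the source exists, not that it supports the claim it is attached to, so this kind of check narrows, rather than replaces, the human review Bruckman describes.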
The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.
The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because doing so can expose the project to libel suits and copyright violations, risks from which the Wikimedia Foundation enjoys legal protections that individual Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.
The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.
Mapping the discourse on evidence-based policy, artificial intelligence, and the ethical practice of policy analysis
Paper by Joshua Newman and Michael Mintrom: “Scholarship on evidence-based policy, a subset of the policy analysis literature, largely assumes information is produced and consumed by humans. However, due to the expansion of artificial intelligence in the public sector, debates no longer capture the full range of concerns. Here, we derive a typology of arguments on evidence-based policy that performs two functions: taken separately, the categories serve as directions in which debates may proceed, in light of advances in technology; taken together, the categories act as a set of frames through which the use of evidence in policy making might be understood. Using a case of welfare fraud detection in the Netherlands, we show how the acknowledgement of divergent frames can enable a holistic analysis of evidence use in policy making that considers the ethical issues inherent in automated data processing. We argue that such an analysis will enhance the real-world relevance of the evidence-based policy paradigm….(More)”
The Ethics of Artificial Intelligence for the Sustainable Development Goals
Book by Francesca Mazzi and Luciano Floridi: “Artificial intelligence (AI) as a general-purpose technology has great potential for advancing the United Nations Sustainable Development Goals (SDGs). However, the AI×SDGs phenomenon is still in its infancy in terms of diffusion, analysis, and empirical evidence. Moreover, a scalable adoption of AI solutions to advance the achievement of the SDGs requires private and public actors to engage in coordinated actions that have been analysed only partially so far. This volume provides the first overview of the AI×SDGs phenomenon and its related challenges and opportunities. The first part of the book adopts a programmatic approach, discussing AI×SDGs at a theoretical level and from the perspectives of different stakeholders. The second part illustrates existing projects and potential new applications…(More)”.
Will A.I. Become the New McKinsey?
Essay by Ted Chiang: “When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America…(More)”.