Machines of mind: The case for an AI-powered productivity boom


Report by Martin Neil Baily, Erik Brynjolfsson, Anton Korinek: “Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general-purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society…(More)”.

Data portability and interoperability: A primer on two policy tools for regulation of digitized industries


Article by Sukhi Gulati-Gilbert and Robert Seamans: “…In this article we describe two other tools, data portability and interoperability, that may be particularly useful in technology-enabled sectors. Data portability allows users to move data from one company to another, helping to reduce switching costs and providing rival firms with access to valuable customer data. Interoperability allows two or more technical systems to exchange data interactively. Due to its interactive nature, interoperability can help prevent lock-in to a specific platform by allowing users to connect across platforms. Data portability and interoperability share some similarities; in addition to potential pro-competitive benefits, the tools promote values of openness, transparency, and consumer choice.

After providing an overview of these topics, we describe the tradeoffs involved with implementing data portability and interoperability. While these policy tools offer much promise, in practice there can be many challenges involved in determining how to fund and design an implementation that is secure and intuitive and accomplishes the intended result. These challenges require that policymakers think carefully about the initial implementation of data portability and interoperability. Finally, to better show how data portability and interoperability can increase competition in an industry, we discuss how they could be applied in the banking and social media sectors. These are just two examples of how data portability and interoperability policy could be applied to many different industries facing increased digitization. Our definitions and examples should be helpful to those interested in understanding the tradeoffs involved in using these tools to promote competition and innovation in the U.S. economy…(More)” See also: Data to Go: The Value of Data Portability as a Means to Data Liquidity.

Evidence Gap Maps as Critical Information Communication Devices for Evidence-based Public Policy


Paper by Esteban Villa-Turek et al: “The public policy cycle increasingly requires the use of evidence by policy makers. Evidence Gap Maps (EGMs) are a relatively new methodology that helps identify, process, and visualize the vast numbers of studies representing a rich source of evidence for better policy making. This document performs a methodological review of EGMs and presents the development of a working integrated system that automates several critical steps of EGM creation by means of applied computational and statistical methods. Above all, the proposed system encompasses all major steps of EGM creation in one place, namely inclusion criteria determination, processing of information, analysis, and user-friendly communication of synthesized relevant evidence. This tool represents a critical milestone in the efforts of implementing cutting-edge computational methods in usable systems. The contribution of the document is twofold. First, it presents the critical importance of EGMs in the public policy cycle; second, it justifies and explains the development of a usable tool that encompasses the methodological phases of creation of EGMs, while automating the most time-consuming stages of the process. The overarching goal is better and faster communication of information to relevant actors such as policy makers, thus promoting well-being through better and more efficient interventions based on more evidence-driven policy making…(More)”.

Regulating Cross-Border Data Flows


Book by Bryan Mercurio and Ronald Yu: “Data is now one of the world’s most valuable resources, if not the most valuable. The adoption of data-driven applications across economic sectors has made data and the flow of data so pervasive that it has become integral to everything we as members of society do – from conducting our finances to operating businesses to powering the apps we use every day. For this reason, governing cross-border data flows is inherently difficult given the ubiquity and value of data, and the impact government policies can have on national competitiveness, business attractiveness and personal rights. The challenge for governments is to address in a coherent manner the broad range of data-related issues in the context of a global data-driven economy.

This book engages with the unexplored topic of why and how governments should develop a coherent and consistent strategic framework regulating cross-border data flows. The objective is to fill a very significant gap in the legal and policy setting by considering multiple perspectives in order to assist in the development of a jurisdiction’s coherent and strategic policy framework…(More)”.

If good data is key to decarbonization, more than half of Asia’s economies are being locked out of progress, this report says


Blog by Ewan Thomson: “If measuring something is the first step towards understanding it, and understanding something is necessary to be able to improve it, then good data is the key to unlocking positive change. This is particularly true in the energy sector as it seeks to decarbonize.

But some countries have a data problem, according to energy think tank Ember and climate solutions enabler Subak’s Asia Data Transparency Report 2023, and this lack of open and reliable power-generation data is holding back the speed of the clean power transition in the region.

Asia is responsible for around 80% of global coal consumption, making it a big contributor to carbon emissions. Progress is being made on reducing these emissions, but without reliable data on power generation, measuring the rate of this progress will be challenging.

These charts show how different Asian economies are faring on data transparency on power generation and what can be done to improve both the quality and quantity of the data.

Infographic: number of economies by overall transparency score. Over half of Asian countries lack reliable data in their power sectors, Ember says. Image: Ember

There are major data gaps in 24 out of the 39 Asian economies covered in the Ember research. This means it is unclear whether the energy needs of the nearly 700 million people in these 24 economies are being met with renewables or fossil fuels…(More)”.

AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary.

The concern is that machine-generated content requires extensive human review to balance it and could overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even to citing sources and academic papers that don’t exist. This often results in text summaries that seem accurate but on closer inspection are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because doing so can open the Wikimedia Foundation up to libel suits and copyright violations—risks from which the nonprofit has legal protections but individual Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.

The Ethics of Artificial Intelligence for the Sustainable Development Goals


Book by Francesca Mazzi and Luciano Floridi: “Artificial intelligence (AI) as a general-purpose technology has great potential for advancing the United Nations Sustainable Development Goals (SDGs). However, the AI×SDGs phenomenon is still in its infancy in terms of diffusion, analysis, and empirical evidence. Moreover, a scalable adoption of AI solutions to advance the achievement of the SDGs requires private and public actors to engage in coordinated actions that have been analysed only partially so far. This volume provides the first overview of the AI×SDGs phenomenon and its related challenges and opportunities. The first part of the book adopts a programmatic approach, discussing AI×SDGs at a theoretical level and from the perspectives of different stakeholders. The second part illustrates existing projects and potential new applications…(More)”.

Spatial data trusts: an emerging governance framework for sharing spatial data


Paper by Nenad Radosevic et al: “Data Trusts are an important emerging approach to enabling the much wider sharing of data from many different sources and for many different purposes, backed by the confidence of clear and unambiguous data governance. Data Trusts combine the technical infrastructure for sharing data with the governance framework of a legal trust. The concept of a data Trust applied specifically to spatial data offers significant opportunities for new and future applications, addressing some longstanding barriers to data sharing, such as location privacy and data sovereignty. This paper introduces and explores the concept of a ‘spatial data Trust’ by identifying and explaining the key functions and characteristics required to underpin a data Trust for spatial data. The work identifies five key features of spatial data Trusts that demand specific attention and connects these features to a history of relevant work in the field, including spatial data infrastructures (SDIs), location privacy, and spatial data quality. The conclusions identify several key strands of research for the future development of this rapidly emerging framework for spatial data sharing…(More)”.

From Fragmentation to Coordination: The Case for an Institutional Mechanism for Cross-Border Data Flows


Report by the World Economic Forum: “Digital transformation of the global economy is bringing markets and people closer. Few conveniences of modern life – from international travel to online shopping to cross-border payments – would exist without the free flow of data.

Yet, impediments to free-flowing data are growing. The “Data Free Flow with Trust (DFFT)” concept is based on the idea that responsible data concerns, such as privacy and security, can be addressed without obstructing international data transfers. Policy-makers, trade negotiators and regulators are actively working on this, and while important progress has been made, an effective and trusted international cooperation mechanism would amplify their progress.

This white paper makes the case for establishing such a mechanism with a permanent secretariat, starting with the Group of Seven (G7) member countries, and ensuring participation of high-level representatives of multiple stakeholder groups, including the private sector, academia and civil society.

This new institution would go beyond short-term fixes and catalyse long-term thinking to operationalize DFFT…(More)”.

Unlocking the Power of Data Refineries for Social Impact


Essay by Jason Saul & Kriss Deiglmeier: “In 2021, US companies generated $2.77 trillion in profits—the largest ever recorded in history. This is a significant increase from 2000, when corporate profits totaled $786 billion. Social progress, on the other hand, shows a very different picture. From 2000 to 2021, progress on the United Nations Sustainable Development Goals has been anemic, registering less than 10 percent growth over 20 years.

What explains this massive split between the corporate and the social sectors? One explanation could be the role of data. In other words, companies are benefiting from a culture of using data to make decisions. Some refer to this as the “data divide”—the increasing gap between the use of data to maximize profit and the use of data to solve social problems…

Our theory is that there is something more systemic going on. Even if nonprofit practitioners and policy makers had the budget, capacity, and cultural appetite to use data, does the data they need even exist in the form they need it? We submit that the answer to this question is a resounding no. Usable data doesn’t yet exist for the sector because the sector lacks a fully functioning data ecosystem to create, analyze, and use data at the same level of effectiveness as the commercial sector…(More)”.