“Data Commons”: Under Threat by or The Solution for a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the result of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes in alignment with community values or interests such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading both to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), fostering innovation and streamlining decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models, and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

A Ten-Part Framework to Rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration of how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon be impacted, if it is not already feeling AI’s influence. A global race to put regulatory guardrails in place is taking shape even as companies and governments spend billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes, including politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that humans already do or could do, now replacing humans … There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models


Paper by Samuele Fratini et al: “Digital sovereignty is a popular yet still emerging concept. It is claimed by and related to various global actors, whose narratives are often competing and mutually inconsistent. Various scholars have proposed different descriptive approaches to make sense of the matter. We argue that existing works help advance our analytical understanding and that a critical assessment of existing forms of digital sovereignty is needed. Thus, the article offers an updated mapping of forms of digital sovereignty, while testing their effectiveness in response to radical changes and challenges. To do this, the article undertakes a systematic literature review, collecting 271 peer-reviewed articles from Google Scholar. They are used to identify descriptive features (how digital sovereignty is pursued) and value features (why digital sovereignty is pursued), which are then combined to produce four models: the rights-based model, market-oriented model, centralisation model, and state-based model. We evaluate their effectiveness within a framework of robust governance that accounts for the models’ ability to absorb the disruptions caused by technological advancements, geopolitical changes, and evolving societal norms. We find that none of the available models fully combines comprehensive regulations of digital technologies with a sufficient degree of responsiveness to fast-paced technological innovation and social and economic shifts. However, each offers valuable lessons to policymakers who wish to implement an effective and robust form of digital sovereignty…(More)”.

The Age of AI Nationalism and its Effects


Paper by Susan Ariel Aaronson: “This paper aims to illuminate how AI nationalistic policies may backfire. Over time, such actions and policies could alienate allies and prod other countries to adopt “beggar-thy-neighbor” approaches to AI (The Economist: 2023; Kim: 2023; Shivakumar et al. 2024). Moreover, AI nationalism could have additional negative spillovers over time. Many AI experts are optimistic about the benefits of AI, even when they are aware of its many risks to democracy, equity, and society. They understand that AI can be a public good when it is used to mitigate complex problems affecting society (Gopinath: 2023; Okolo: 2023). However, when policymakers take steps to advance AI within their borders, they may — perhaps without intending to do so — make it harder for other countries with less capital, expertise, infrastructure, and data prowess to develop AI systems that could meet the needs of their constituents. In so doing, these officials could undermine the potential of AI to enhance human welfare and impede the development of more trustworthy AI around the world (Slavkovik: 2024; Aaronson: 2023; Brynjolfsson and Unger: 2023; Agrawal et al. 2017).

Governments have many means of nurturing AI within their borders that do not necessarily discriminate between foreign and domestic producers of AI. Nevertheless, officials may be under pressure from local firms to limit the market power of foreign competitors. Officials may also want to use trade (for example, export controls) as a lever to prod other governments to change their behavior (Buchanan: 2020). Additionally, these officials may be acting in what they believe is the nation’s security interest, which may necessitate that officials rely solely on local suppliers and local control (GAO: 2021).

Herein the author attempts to illuminate AI nationalism and its consequences by answering three questions:
• What are nations doing to nurture AI capacity within their borders?
• Are some of these actions trade distorting?
• What are the implications of such trade-distorting actions?…(More)”

What Mission-Driven Government Means


Article by Mariana Mazzucato & Rainer Kattel: “The COVID-19 pandemic, inflation, and wars have alerted governments to the realities of what it takes to tackle massive crises. In extraordinary times, policymakers often rediscover their capacity for bold decision-making. The rapid speed of COVID-19 vaccine development and deployment was a case in point.

But preparing for other challenges requires more sustained efforts in “mission-driven government.” Recalling the successful language and strategies of the Cold War-era moonshot, governments around the world are experimenting with ambitious policy programs and public-private partnerships in pursuit of specific social, economic, and environmental goals. For example, in the United Kingdom, the Labour Party’s five-mission campaign platform has kicked off a vibrant debate about whether and how to create a “mission economy.”

Mission-driven government is not about achieving doctrinal adherence to some original set of ideas; it is about identifying the essential components of missions and accepting that different countries might need different approaches. As matters stand, the emerging landscape of public missions is characterized by a re-labeling or repurposing of existing institutions and policies, with more stuttering starts than rapid takeoffs. But that is okay. We should not expect a radical change in policymaking strategies to happen overnight, or even over one electoral cycle.

Particularly in liberal democracies, ambitious change requires engagement across a wide range of constituencies to secure public buy-in, and to ensure that the benefits will be widely shared. The paradox at the heart of mission-driven government is that it pursues ambitious, clearly articulated policy goals through myriad policies and programs based on experimentation.

This embrace of experimentation is what separates today’s missions from the missions of the moonshot era (though it does echo the Roosevelt administration’s experimental approach during the 1930s New Deal). Major societal challenges, such as the urgent need to create more equitable and sustainable food systems, cannot be tackled the same way as a moon landing. Such systems consist of multiple technological dimensions (in the case of food, these include everything from energy to waste management), and involve widespread and often disconnected agents and an array of cultural norms, values, and habits…(More)”.

First EU rulebook to protect media independence and pluralism enters into force


Press Release: “Today, the European Media Freedom Act, a new set of unprecedented rules to protect media independence and pluralism, enters into force.

This new legislation provides safeguards against political interference in editorial decisions and against surveillance of journalists. The Act guarantees that media can operate more easily in the internal market and online. The regulation also aims to secure the independence and stable funding of public service media, as well as the transparency of both media ownership and the allocation of state advertising.

Vice-President for Values and Transparency, Věra Jourová, said:

 “For the first time ever, the EU has a law to protect media freedom. The EU recognises that journalists play an essential role for democracy and should be protected. I call on Member States to implement the new rules as soon as possible.”

Commissioner for Internal Market, Thierry Breton, added:

“Media companies play a vital role in our democracies but are confronted with falling revenues, threats to media freedom and pluralism and a patchwork of different national rules. Thanks to the European Media Freedom Act, media companies will enjoy common safeguards at EU level to guarantee a plurality of voices and be able to better benefit from the opportunities of operating in our single market without any interference, be it private or public.”

Proposed by the Commission in September 2022, this Regulation puts in place several protections for the right to media plurality, becoming applicable within 6 months. More details on the timeline for its application are available in this infographic…(More)”.

The Human Rights Data Revolution


Briefing by Domenico Zipoli: “… explores the evolving landscape of digital human rights tracking tools and databases (DHRTTDs). It discusses their growing adoption for monitoring, reporting, and implementing human rights globally, while also pinpointing the challenge of insufficient coordination and knowledge sharing among these tools’ developers and users. Drawing on insights from over 50 experts across multiple sectors gathered during two pivotal roundtables organized by the GHRP in 2022 and 2023, this new publication critically evaluates the impact and future of DHRTTDs. It integrates lessons and challenges from these discussions, along with targeted research and interviews, to guide the human rights community in leveraging digital advancements effectively…(More)”.

Technology and the Transformation of U.S. Foreign Policy


Speech by Antony J. Blinken: “Today’s revolutions in technology are at the heart of our competition with geopolitical rivals. They pose a real test to our security. And they also represent an engine of historic possibility – for our economies, for our democracies, for our people, for our planet.

Put another way: Security, stability, prosperity – they are no longer solely analog matters.

The test before us is whether we can harness the power of this era of disruption and channel it into greater stability, greater prosperity, greater opportunity.

President Biden is determined not just to pass this “tech test,” but to ace it.

Our ability to design, to develop, to deploy technologies will determine our capacity to shape the tech future. And naturally, operating from a position of strength better positions us to set standards and advance norms around the world.

But our advantage comes not just from our domestic strength.

It comes from our solidarity with the majority of the world that shares our vision for a vibrant, open, and secure technological future, and from an unmatched network of allies and partners with whom we can work in common cause to pass the “tech test.”

We’re committed not to “digital sovereignty” but “digital solidarity.”

On May 6, the State Department unveiled the U.S. International Cyberspace and Digital Strategy, which treats digital solidarity as our North Star. Solidarity informs our approach not only to digital technologies, but to all key foundational technologies.

So what I’d like to do now is share with you five ways that we’re putting this into practice.

First, we’re harnessing technology for the betterment not just of our people and our friends, but of all humanity.

The United States believes emerging and foundational technologies can and should be used to drive development and prosperity, to promote respect for human rights, to solve shared global challenges.

Some of our strategic rivals are working toward a very different goal. They’re using digital technologies and genomic data collection to surveil their people, to repress human rights.

Pretty much everywhere I go, I hear from government officials and citizens alike about their concerns about these dystopian uses of technology. And I also hear an abiding commitment to our affirmative vision and to the embrace of technology as a pathway to modernization and opportunity.

Our job is to use diplomacy to try to grow this consensus even further – to internationalize and institutionalize our vision of “tech for good”…(More)”.

Complexity and the Global Governance of AI


Paper by Gordon LaForge et al: “In the coming years, advanced artificial intelligence (AI) systems are expected to bring significant benefits and risks for humanity. Many governments, companies, researchers, and civil society organizations are proposing, and in some cases building, global governance frameworks and institutions to promote AI safety and beneficial development. Complexity thinking, a way of viewing the world not just as discrete parts at the macro level but also in terms of bottom-up and interactive complex adaptive systems, can be a useful intellectual and scientific lens for shaping these endeavors. This paper details how insights from the science and theory of complexity can aid understanding of the challenges posed by AI and its potential impacts on society. Given the characteristics of complex adaptive systems, the paper recommends that global AI governance be based on providing a fit, adaptive response system that mitigates harmful outcomes of AI and enables positive aspects to flourish. The paper proposes components of such a system in three areas: access and power; international relations and global stability; and accountability and liability…(More)”.

The case for global governance of AI: arguments, counter-arguments, and challenges ahead


Paper by Mark Coeckelbergh: “But why, exactly, is global governance needed, and what form can and should it take? The main argument for the global governance of AI, which is also applicable to digital technologies in general, is essentially a moral one: as AI technologies become increasingly powerful and influential, we have the moral responsibility to ensure that they benefit humanity as a whole and that we deal with the global risks and the ethical and societal issues that arise from the technology, including privacy issues, security and military uses, bias and fairness, responsibility attribution, transparency, job displacement, safety, manipulation, and AI’s environmental impact. Since the effects of AI cross borders, so the argument continues, global cooperation and global governance are the only means to fully and effectively exercise that moral responsibility and ensure responsible innovation and use of technology to increase well-being for all and preserve peace; national regulation is not sufficient…(More)”.