We don’t need an AI manifesto — we need a constitution


Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…

In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.

People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.

A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI


Report by Hannah Chafetz, Sampriti Saxena, and Stefaan G. Verhulst: “Since late 2022, generative AI services and large language models (LLMs) have transformed how many individuals access and process information. However, how generative AI and LLMs can be augmented with open data from official sources, and how open data can be made more accessible with generative AI – potentially enabling a Fourth Wave of Open Data – remain underexplored questions.

For these reasons, The Open Data Policy Lab (a collaboration between The GovLab and Microsoft) decided to explore the possible intersections between open data from official sources and generative AI. Over the last year, the team has conducted a range of research initiatives on the potential of open data and generative AI, including a panel discussion, interviews, and Open Data Action Labs – a series of design sprints with a diverse group of industry experts.

These initiatives were used to inform our latest report, “A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI,” (May 2024) which provides a new framework and recommendations to support open data providers and other interested parties in making open data “ready” for generative AI…

The report outlines five scenarios in which open data from official sources (e.g. open government and open research data) and generative AI can intersect. Each of these scenarios includes case studies from the field and a specific set of requirements that open data providers can focus on to become ready for a scenario. These include…(More)” (arXiv).


“Data Commons”: Under Threat by, or the Solution for, a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the result of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes aligned with community values or interests, such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading both to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), foster innovation, and streamline decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models, and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

Ten-part framework to rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration on how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon be impacted, if it is not already feeling AI’s influence. A global race to put regulatory guardrails in place is taking shape even as companies and governments spend billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes: politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that are already, or could be, done by humans, and now replacing humans … There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models


Paper by Samuele Fratini et al: “Digital sovereignty is a popular yet still emerging concept. It is claimed by and related to various global actors, whose narratives are often competing and mutually inconsistent. Various scholars have proposed different descriptive approaches to make sense of the matter. We argue that existing works help advance our analytical understanding and that a critical assessment of existing forms of digital sovereignty is needed. Thus, the article offers an updated mapping of forms of digital sovereignty, while testing their effectiveness in response to radical changes and challenges. To do this, the article undertakes a systematic literature review, collecting 271 peer-reviewed articles from Google Scholar. They are used to identify descriptive features (how digital sovereignty is pursued) and value features (why digital sovereignty is pursued), which are then combined to produce four models: the rights-based model, market-oriented model, centralisation model, and state-based model. We evaluate their effectiveness within a framework of robust governance that accounts for the models’ ability to absorb the disruptions caused by technological advancements, geopolitical changes, and evolving societal norms. We find that none of the available models fully combines comprehensive regulations of digital technologies with a sufficient degree of responsiveness to fast-paced technological innovation and social and economic shifts. However, each offers valuable lessons to policymakers who wish to implement an effective and robust form of digital sovereignty…(More)”.

The Age of AI Nationalism and its Effects


Paper by Susan Ariel Aaronson: “This paper aims to illuminate how AI nationalistic policies may backfire. Over time, such actions and policies could alienate allies and prod other countries to adopt “beggar-thy-neighbor” approaches to AI (The Economist: 2023; Kim: 2023; Shivakumar et al. 2024). Moreover, AI nationalism could have additional negative spillovers over time. Many AI experts are optimistic about the benefits of AI, even as they are aware of its many risks to democracy, equity, and society. They understand that AI can be a public good when it is used to mitigate complex problems affecting society (Gopinath: 2023; Okolo: 2023). However, when policymakers take steps to advance AI within their borders, they may – perhaps without intending to do so – make it harder for other countries with less capital, expertise, infrastructure, and data prowess to develop AI systems that could meet the needs of their constituents. In so doing, these officials could undermine the potential of AI to enhance human welfare and impede the development of more trustworthy AI around the world (Slavkovik: 2024; Aaronson: 2023; Brynjolfsson and Unger: 2023; Agrawal et al. 2017).

Governments have many means of nurturing AI within their borders that do not necessarily discriminate between foreign and domestic producers of AI. Nevertheless, officials may be under pressure from local firms to limit the market power of foreign competitors. Officials may also want to use trade (for example, export controls) as a lever to prod other governments to change their behavior (Buchanan: 2020). Additionally, these officials may be acting in what they believe is the nation’s national security interest, which may necessitate that officials rely solely on local suppliers and local control. (GAO: 2021)

Herein the author attempts to illuminate AI nationalism and its consequences by answering three questions:
• What are nations doing to nurture AI capacity within their borders?
• Are some of these actions trade distorting?
• What are the implications of such trade-distorting actions?…(More)”

Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

Technology and the Transformation of U.S. Foreign Policy


Speech by Antony J. Blinken: “Today’s revolutions in technology are at the heart of our competition with geopolitical rivals. They pose a real test to our security. And they also represent an engine of historic possibility – for our economies, for our democracies, for our people, for our planet.

Put another way: Security, stability, prosperity – they are no longer solely analog matters.

The test before us is whether we can harness the power of this era of disruption and channel it into greater stability, greater prosperity, greater opportunity.

President Biden is determined not just to pass this “tech test,” but to ace it.

Our ability to design, to develop, to deploy technologies will determine our capacity to shape the tech future. And naturally, operating from a position of strength better positions us to set standards and advance norms around the world.

But our advantage comes not just from our domestic strength.

It comes from our solidarity with the majority of the world that shares our vision for a vibrant, open, and secure technological future, and from an unmatched network of allies and partners with whom we can work in common cause to pass the “tech test.”

We’re committed not to “digital sovereignty” but to “digital solidarity.”

On May 6, the State Department unveiled the U.S. International Cyberspace and Digital Strategy, which treats digital solidarity as our North Star. Solidarity informs our approach not only to digital technologies, but to all key foundational technologies.

So what I’d like to do now is share with you five ways that we’re putting this into practice.

First, we’re harnessing technology for the betterment not just of our people and our friends, but of all humanity.

The United States believes emerging and foundational technologies can and should be used to drive development and prosperity, to promote respect for human rights, to solve shared global challenges.

Some of our strategic rivals are working toward a very different goal. They’re using digital technologies and genomic data collection to surveil their people, to repress human rights.

Pretty much everywhere I go, I hear from government officials and citizens alike about their concerns about these dystopian uses of technology. And I also hear an abiding commitment to our affirmative vision and to the embrace of technology as a pathway to modernization and opportunity.

Our job is to use diplomacy to try to grow this consensus even further – to internationalize and institutionalize our vision of “tech for good”…(More)”.

Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges


Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.

These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or clinical trial can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be incomplete, biased, heterogeneous, or too noisy to analyze using traditional data science methods.

AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.

By combining AI with the other three research modes, the rate of scientific progress will be greatly accelerated, and researchers will be positioned to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”

Disfactory Project: How to Detect Illegal Factories by Open Source Technology and Crowdsourcing


Article by Peii Lai: “…building illegal factories on farmland is still a profitable business, because the factory owners thus obtain the means of production at a lower price and can easily evade penalties by simply ignoring their legal responsibility. Such conduct shifts the cost of production onto the environment in an irresponsible way. As we can imagine, such violations have been increasing year by year. On average, Taiwan loses 1,500 hectares of farmland each year to illegal use, which demonstrates that illegal factories are an ongoing and escalating problem that people cannot ignore.

Clearly, the problem of illegal factories is caused by the dysfunction of the previous land-management regulations. In response, Citizens of Earth Taiwan (CET) began seeking solutions to tackle the illegal factories. CET soon realized that the biggest obstacle it faced was that no one saw the violations as a big deal. Local governments avoided standing in opposition to the illegal factories; for them, imposing penalties is an arduous and thankless task…

Through the collaboration of CET and g0v-zero, the Disfactory project combines the knowledge CET has accumulated through advocacy with the diverse techniques brought by passionate civic contributors. In 2020, the Disfactory project team delivered its first product: disfactory.tw. They built a website with geographic information that whistleblowers can operate on the ground by themselves. In a few simple steps (identifying the location of the target illegal factory, taking a picture of it, and uploading the photos), any citizen can easily register the information on Disfactory’s website…(More)”