Why we’re fighting to make sure labor unions have a voice in how AI is implemented


Article by Liz Shuler and Mike Kubzansky: “Earlier this month, Google’s co-founder admitted that the company had ‘definitely messed up’ after its AI tool, Gemini, produced historically inaccurate images—including depictions of racially diverse Nazis. Sergey Brin cited a lack of ‘thorough testing’ of the AI tool, but the incident is a good reminder that, despite all the hype around generative AI replacing human output, the technology still has a long way to go.

Of course, that hasn’t stopped companies from deploying AI in the workplace. Some even use the technology as an excuse to lay workers off. Since last May, at least 4,000 people have lost their jobs to AI, and 70% of workers across the country live with the fear that AI is coming for theirs next. And while the technology may still be in its infancy, it’s developing fast. Earlier this year, AI pioneer Mustafa Suleyman said that “left completely to the market and to their own devices, [AI tools are] fundamentally labor-replacing.” Without intervention now, AI could end up replacing a great many jobs.

It doesn’t have to be this way. AI has enormous potential to build prosperity and unleash human creativity, but only if it also works for working people. Ensuring that happens requires giving the voice of workers—the people who will engage with these technologies every day, and whose lives, health, and livelihoods are increasingly affected by AI and automation—a seat at the decision-making table. 

As president of the AFL-CIO, representing 12.5 million working people across 60 unions, and CEO of Omidyar Network, a social change philanthropy that supports responsible technology, we believe that the single best movement to give everyone a voice is the labor movement. Empowering workers—from warehouse associates to software engineers—is the most powerful tactic we have to ensure that AI develops in the interests of the many, not the few…(More)”.

Commons-based Data Set: Governance for AI


Report by Open Future: “In this white paper, we propose an approach to sharing data sets for AI training as a public good governed as a commons. By adhering to the six principles of commons-based governance, data sets can be managed in a way that generates public value while making shared resources resilient to extraction or capture by commercial interests.

The purpose of defining these principles is two-fold:

First, we propose these principles as input into policy debates on data and AI governance. A commons-based approach can be introduced through regulatory means, funding and procurement rules, statements of principles, or data sharing frameworks. Second, these principles can serve as a blueprint for the design of data sets that are governed and shared as a commons. To this end, we also provide practical examples of how these principles are being brought to life. Projects like Big Science or Common Voice have demonstrated that commons-based data sets can be successfully built.

These principles, tailored for the governance of AI data sets, are built on our previous work on the Data Commons Primer. They are also the outcome of our research into the governance of AI data sets, including the AI_Commons case study. Finally, they are based on ongoing efforts to define how AI systems can be shared and made open, in which we have been participating – including the OSI-led process to define open-source AI systems, and the DPGA Community of Practice exploring AI systems as Digital Public Goods…(More)”.


Central banks use AI to assess climate-related risks


Article by Huw Jones: “Central bankers said on Tuesday they have broken new ground by using artificial intelligence to collect data for assessing climate-related financial risks, just as the volume of disclosures from banks and other companies is set to rise.

The Bank for International Settlements, a forum for central banks, the Bank of Spain, Germany’s Bundesbank and the European Central Bank said their experimental Gaia AI project was used to analyse company disclosures on carbon emissions, green bond issuance and voluntary net-zero commitments.

Regulators of banks, insurers and asset managers need high-quality data to assess the impact of climate change on financial institutions. However, the absence of a single reporting standard confronts them with a patchwork of public information spread across text, tables and footnotes in annual reports.

Gaia was able to overcome differences in definitions and disclosure frameworks across jurisdictions to offer much-needed transparency, and make it easier to compare indicators on climate-related financial risks, the central banks said in a joint statement.

Gaia copes with variations in how the same data is reported by different companies by focusing on the definition of each indicator, rather than on how the data is labelled.

Furthermore, with the traditional approach, each additional key performance indicator, or KPI, and each new institution requires the analyst to either search for the information in public corporate reports or contact the institution for information…(More)”.
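The matching step the central banks describe can be pictured as a small normalisation layer that maps heterogeneous disclosure labels onto canonical indicator definitions. The sketch below is purely illustrative — every label, company name, and figure is an invented assumption, not drawn from the Gaia project:

```python
# Hypothetical sketch: normalising differently-labelled climate disclosures
# to a canonical indicator definition. Labels and values are invented.

CANONICAL = {
    "scope_1_emissions_tco2e": {
        "Scope 1 GHG emissions (tCO2e)",
        "Direct emissions, tonnes CO2-equivalent",
        "Emissions directes (Scope 1)",
    },
}

def to_canonical(label: str):
    """Return the canonical indicator name for a reported label, if known."""
    for indicator, aliases in CANONICAL.items():
        if label in aliases:
            return indicator
    return None

# Toy disclosures: same indicator, three different labels across reports.
reports = [
    ("Bank A", "Scope 1 GHG emissions (tCO2e)", 120_500),
    ("Bank B", "Direct emissions, tonnes CO2-equivalent", 98_300),
]

normalised = {
    company: {to_canonical(label): value}
    for company, label, value in reports
}
print(normalised["Bank A"]["scope_1_emissions_tco2e"])  # 120500
```

Once labels are resolved to shared definitions, indicators become directly comparable across jurisdictions — which is the transparency gain the central banks report.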

God-like: A 500-Year History of Artificial Intelligence in Myths, Machines, Monsters


Book by Kester Brewin: “In the year 1600 a monk is burned at the stake for claiming to have built a device that will allow him to know all things.

350 years later, having witnessed ‘Trinity’ – the first test of the atomic bomb – America’s leading scientist outlines a memory machine that will help end war on earth.

25 years in the making, an ex-soldier finally unveils this ‘machine for augmenting human intellect’, dazzling as he stands ‘Zeus-like, dealing lightning with both hands.’

AI is both stunningly new and rooted in ancient desires. As we finally welcome this ‘god-like’ technology amongst us, what can we learn from the myths and monsters of the past about how to survive alongside our greatest ever invention?…(More)”.

A typology of artificial intelligence data work


Article by James Muldoon et al: “This article provides a new typology for understanding human labour integrated into the production of artificial intelligence systems through data preparation and model evaluation. We call these forms of labour ‘AI data work’ and show how they are an important and necessary element of the artificial intelligence production process. We draw on fieldwork with an artificial intelligence data business process outsourcing centre specialising in computer vision data, alongside a decade of fieldwork with microwork platforms, business process outsourcing, and artificial intelligence companies, to help dispel confusion around the multiple concepts and frames that encompass artificial intelligence data work, including ‘ghost work’, ‘microwork’, ‘crowdwork’ and ‘cloudwork’. We argue that these different frames of reference obscure important differences between how this labour is organised in different contexts. The article provides a conceptual division between the different types of artificial intelligence data work institutions and the different stages of what we call the artificial intelligence data pipeline. This article thus contributes to our understanding of how the practices of workers become a valuable commodity integrated into global artificial intelligence production networks…(More)”.

The New Fire: War, Peace, and Democracy in the Age of AI


Book by Ben Buchanan and Andrew Imbrie: “Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny…(More)”.

AI-Powered Urban Innovations Bring Promise, Risk to Future Cities


Article by Anthony Townsend and Hubert Beroche: “Red lights are obsolete. That seems to be the thinking behind Google’s latest fix for cities, which rolled out late last year in a dozen cities around the world, from Seattle to Jakarta. Most cities still collect the data to determine the timing of traffic signals by hand. But Project Green Light replaced clickers and clipboards with mountains of location data culled from smartphones. Artificial intelligence crunched the numbers, adjusting the signal pattern to smooth the flow of traffic. Motorists saw 30% fewer delays. There’s just one catch. Even as pedestrian deaths in the US reached a 40-year high in 2022, Google engineers omitted pedestrians and cyclists from their calculations.

Google’s oversight threatens to undo a decade of progress on safe streets and is a timely reminder of the risks in store when AI invades the city. Mayors across global cities have embraced Vision Zero pledges to eliminate pedestrian deaths. They are trying to slow traffic down, not speed it up. But Project Green Light’s website doesn’t even mention road safety. Still, the search giant’s experiment demonstrates AI’s potential to help cities. Tailpipe greenhouse gas emissions at intersections fell by 10%. Imagine what AI could do if we used it to empower people in cities rather than ignore them.

Take the technocratic task of urban planning and the many barriers to participation it creates. The same technology that powers chatbots and deepfakes is rapidly bringing down those barriers. Real estate developers have mastered the art of using glossy renderings to shape public opinion. But UrbanistAI, a tool developed by Helsinki-based startup SPIN Unit and the Milanese software company Toretei, puts that power in the hands of residents: It uses generative AI to transform text prompts into photorealistic images of alternative designs for controversial projects. Another startup, the Barcelona-based Aino, wraps a chatbot around a mapping tool. Using such computer aids, neighborhood activists no longer need to hire a data scientist to produce maps from census data to make their case…(More)”.

Artificial Intelligence: A Threat to Climate Change, Energy Usage and Disinformation


Press Release: “Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly. Estimates suggest a doubling in 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year where climate policy will be central to the debate. 
  • The current AI policy landscape reveals a concerning lack of regulation at the federal level and only minor progress at the state level, leaving oversight to voluntary, opaque and unenforceable pledges by industry to pause development or ensure the safety of its products…(More)”.

A World Divided Over Artificial Intelligence


Article by Aziz Huq: “…Through multinational communiqués and bilateral talks, an international framework for regulating AI does seem to be coalescing. Take a close look at U.S. President Joe Biden’s October 2023 executive order on AI; the EU’s AI Act, which passed the European Parliament in December 2023 and will likely be finalized later this year; or China’s slate of recent regulations on the topic, and a surprising degree of convergence appears. These regimes broadly share the goal of preventing AI’s misuse without restraining innovation in the process. Optimists have floated proposals for closer international management of AI, such as the ideas presented in Foreign Affairs by the geopolitical analyst Ian Bremmer and the entrepreneur Mustafa Suleyman and the plan offered by Suleyman and Eric Schmidt, the former CEO of Google, in the Financial Times in which they called for the creation of an international panel akin to the UN’s Intergovernmental Panel on Climate Change to “inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming.”

But these ambitious plans to forge a new global governance regime for AI may collide with an unfortunate obstacle: cold reality. The great powers, namely, China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions point toward a future of fragmentation and competition. Divergent legal regimes are emerging that will frustrate any cooperation when it comes to access to semiconductors, the setting of technical standards, and the regulation of data and algorithms. This path doesn’t lead to a coherent, contiguous global space for uniform AI-related rules but to a divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can be harnessed for the common good is dashed on the rocks of geopolitical tensions…(More)”.

Synthetic Data and the Future of AI


Paper by Peter Lee: “The future of artificial intelligence (AI) is synthetic. Several of the most prominent technical and legal challenges of AI derive from the need to amass huge amounts of real-world data to train machine learning (ML) models. Collecting such real-world data can be highly difficult and can threaten privacy, introduce bias in automated decision making, and infringe copyrights on a massive scale. This Article explores the emergence of a seemingly paradoxical technical creation that can mitigate—though not completely eliminate—these concerns: synthetic data. Increasingly, data scientists are using simulated driving environments, fabricated medical records, fake images, and other forms of synthetic data to train ML models. Artificial data, in other words, is being used to train artificial intelligence. Synthetic data offers a host of technical and legal benefits; it promises to radically decrease the cost of obtaining data, sidestep privacy issues, reduce automated discrimination, and avoid copyright infringement. Alongside such promise, however, synthetic data offers perils as well. Deficiencies in the development and deployment of synthetic data can exacerbate the dangers of AI and cause significant social harm.
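For a concrete sense of the basic idea, here is a toy sketch — our own illustration, not a method from the Article: fit simple statistics to a “real” column of records and sample synthetic values from the fit, so the synthetic data mimics the distribution without copying any original record.

```python
# Illustrative sketch (invented example): generate a synthetic column that
# preserves simple per-column statistics of toy "real" records, so a model
# could be trained without exposing the originals.
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

real_ages = [34, 41, 29, 52, 47, 38, 45, 31]  # toy "real" records

mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

# Sample synthetic ages from a normal distribution fit to the real column.
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# The synthetic column tracks the real distribution, yet no row is a copy.
print(abs(statistics.mean(synthetic_ages) - mu) < 3)  # True
```

Real synthetic-data pipelines are far more sophisticated (modelling joint distributions, adding privacy guarantees), but the trade-off the Article describes is already visible here: fidelity to the source distribution versus distance from any individual record.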

In light of the enormous value and importance of synthetic data, this Article sketches the contours of an innovation ecosystem to promote its robust and responsible development. It identifies three objectives that should guide legal and policy measures shaping the creation of synthetic data: provisioning, disclosure, and democratization. Ideally, such an ecosystem should incentivize the generation of high-quality synthetic data, encourage disclosure of both synthetic data and processes for generating it, and promote multiple sources of innovation. This Article then examines a suite of “innovation mechanisms” that can advance these objectives, ranging from open source production to proprietary approaches based on patents, trade secrets, and copyrights. Throughout, it suggests policy and doctrinal reforms to enhance innovation, transparency, and democratic access to synthetic data. Just as AI will have enormous legal implications, law and policy can play a central role in shaping the future of AI…(More)”.