United against algorithms: a primer on disability-led struggles against algorithmic injustice


Report by Georgia van Toorn: “Algorithmic decision-making (ADM) poses urgent concerns regarding the rights and entitlements of people with disability from all walks of life. As ADM systems become increasingly embedded in government decision-making processes, there is a heightened risk of harm, such as unjust denial of benefits or inadequate support, accentuated by the expanding reach of state surveillance.

ADM systems have far-reaching impacts on disabled lives and life chances. Despite this, they are often designed without the input of people with lived experience of disability, for purposes that do not align with the goals of full rights, participation, and justice for disabled people.

This primer explores how people with disability are collectively responding to the threats posed by algorithmic, data-driven systems – specifically their public sector applications. It provides an introductory overview of the topic, exploring the approaches, obstacles, and actions taken by people with disability in their ‘algoactivist’ struggles…(More)”.

Measuring the mobile body


Article by Laura Jung: “…While nation states have been collecting data on citizens for the purposes of taxation and military recruitment for centuries, the indexing of such data, their organization in databases and their classification for particular governmental purposes – such as controlling the mobility of ‘undesirable’ populations – is a nineteenth-century invention. The French historian and philosopher Michel Foucault describes how, in the context of growing urbanization and industrialization, states became increasingly preoccupied with the question of ‘circulation’. Persons and goods, as well as pathogens, circulated further than they had in the early modern period. While states didn’t seek to suppress or control these movements entirely, they sought means to increase what was seen as ‘positive’ circulation and minimize ‘negative’ circulation. They deployed the novel tools of a positivist social science for this purpose: statistical approaches were used in the field of demography to track and regulate phenomena such as births, accidents, illness and deaths. The emerging managerial nation state addressed the problem of circulation by developing a very particular toolkit: amassing detailed information about the population and developing standardized methods of storage and analysis.

One particularly vexing problem was the circulation of known criminals. In the nineteenth century, it was widely believed that if a person offended once, they would offend again. However, the systems available for criminal identification were woefully inadequate to the task.

As criminologist Simon Cole explains, identifying an unknown person requires a ‘truly unique body mark’. Yet before the advent of modern systems of identification, there were only two ways to do this: branding or personal recognition. While branding had been widely used in Europe and North America on convicts, prisoners and enslaved people, evolving ideas around criminality and punishment largely led to the abolition of physical marking in the early nineteenth century. The criminal record was established in its place: a written document cataloguing the convict’s name and a written description of their person, including identifying marks and scars…(More)”.

The False Choice Between Digital Regulation and Innovation


Paper by Anu Bradford: “This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. US lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy towards European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the US and China in the tech race. However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, US lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the US and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for US tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. 
It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive…(More)”.

The generation of public value through e-participation initiatives: A synthesis of the extant literature


Paper by Naci Karkin and Asunur Cezar: “The number of studies evaluating e-participation levels in e-government services has recently increased. These studies primarily examine stakeholders’ acceptance and adoption of e-government initiatives. However, it is equally important to understand whether and how value is generated through e-participation, regardless of whether the focus is on government efforts or user adoption/acceptance levels. There is a need in the literature for a synthesis focusing on e-participation’s connection with public value creation using a systematic and comprehensive approach. This study employs a systematic literature review to collect, examine, and synthesize prior findings, aiming to investigate public value creation through e-participation initiatives, including their facilitators and barriers. By reviewing sixty-four peer-reviewed studies indexed by Web of Science and Scopus, this research demonstrates that e-participation initiatives and efforts can generate public value. Nevertheless, several factors are pivotal for the success and sustainability of these initiatives. The study’s findings could guide researchers and practitioners in comprehending the determinants and barriers influencing the success and sustainability of e-participation initiatives in the public value creation process while highlighting potential future research opportunities in this domain…(More)”.

How Belgium is Giving Citizens a Say on AI


Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision on the future of AI, drawing on the views, concerns, and ideas of the public. 

They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the Council of the European Union. When Belgium assumed the presidency for six months beginning in January 2024, it announced it would place “special focus” on citizens’ participation. The citizen panel on AI is the largest of the scheduled participation projects. Over a total of three weekends, participants are deliberating on a range of topics, including the impact of AI on work, education, and democracy. 

The assembly comes amid rising calls for more public input on the topic of AI. Some big tech firms have begun to respond with participation projects of their own. But this is the first time an EU institution has launched a consultation on the topic. The organisers hope it will pave the way for more to come…(More)”.

The Global State of Social Connections


Gallup: “Social needs are universal, and the degree to which they are fulfilled — or not — impacts the health, well-being and resilience of people everywhere. With increasing global interest in understanding how social connections support or hinder health, policymakers worldwide may benefit from reliable data on the current state of social connectedness. Despite the critical role of social connectedness for communities and the people who live in them, little is known about the frequency or form of social connection in many — if not most — parts of the world.

Meta and Gallup have collaborated on two research studies to help fill this gap. In 2022, the Meta-Gallup State of Social Connections report revealed important variations in people’s sense of connectedness and loneliness across the seven countries studied. This report builds on that research by presenting data on connections and loneliness among people from 142 countries…(More)”.

The impact of generative artificial intelligence on socioeconomic inequalities and policy making


Paper by Valerio Capraro et al: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.

The tech industry can’t agree on what open-source AI means. That’s a problem.


Article by Edd Gent: “Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source. 

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI…(More)”.

New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including those of computer programmers, data scientists, graphic designers, writers, and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found an overwhelming majority of Americans (66%) believe that they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.

Could artificial intelligence benefit democracy?


Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.

And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.

Less attention has been given to the possible benefits of AI.

But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.

“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.

“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”

Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.

It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.

The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.

AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…

Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.

He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”

In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.

The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.

“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.