How Belgium is Giving Citizens a Say on AI


Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision for the future of AI, drawing on the views, concerns, and ideas of the public.

They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the Council of the European Union. When Belgium assumed the six-month rotating presidency in January 2024, it announced it would place a “special focus” on citizen participation. The citizen panel on AI is the largest of the scheduled participation projects. Over three weekends, participants are deliberating on a range of topics, including the impact of AI on work, education, and democracy.

The assembly comes amid rising calls for more public input on AI. Some big tech firms have begun to respond with participation projects of their own, but this is the first time an EU institution has launched such a consultation. The organisers hope it will pave the way for more to come…(More)”.

The Global State of Social Connections


Gallup: “Social needs are universal, and the degree to which they are fulfilled — or not — impacts the health, well-being and resilience of people everywhere. With increasing global interest in understanding how social connections support or hinder health, policymakers worldwide may benefit from reliable data on the current state of social connectedness. Despite the critical role of social connectedness for communities and the people who live in them, little is known about the frequency or form of social connection in many — if not most — parts of the world.

Meta and Gallup have collaborated on two research studies to help fill this gap. In 2022, the Meta-Gallup State of Social Connections report revealed important variations in people’s sense of connectedness and loneliness across the seven countries studied. This report builds on that research by presenting data on connections and loneliness among people from 142 countries…(More)”.

The impact of generative artificial intelligence on socioeconomic inequalities and policy making


Paper by Valerio Capraro et al: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.

The tech industry can’t agree on what open-source AI means. That’s a problem.


Article by Edd Gent: “Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source.

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI…(More)”.

New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including those of computer programmers, data scientists, graphic designers, writers, and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found that a substantial majority of Americans (66%) believe they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.
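The article does not describe how MyCareer's recommendation engine works internally. Purely as an illustration of one common approach to this kind of personalization, the hedged sketch below matches a worker's skill profile against job descriptions using TF-IDF vectors and cosine similarity; every job title and data value in it is hypothetical.

```python
# Illustrative sketch only: this is NOT MyCareer's actual system.
# It shows one standard content-based approach to career matching:
# represent jobs and a worker profile as TF-IDF vectors, then rank
# jobs by cosine similarity to the profile.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: short skill descriptions for open roles.
jobs = {
    "Wind turbine technician": "mechanical repair electrical troubleshooting safety",
    "Home health aide": "patient care communication scheduling empathy",
    "Data analyst": "sql statistics data visualization reporting",
}

# Hypothetical worker profile, e.g. gathered from a questionnaire.
worker_profile = "retail scheduling communication customer care"

vectorizer = TfidfVectorizer()
# Fit on jobs and profile together so they share one vocabulary.
matrix = vectorizer.fit_transform(list(jobs.values()) + [worker_profile])
job_vectors, profile_vector = matrix[:-1], matrix[-1]

# Rank roles by similarity to the worker's profile.
scores = cosine_similarity(profile_vector, job_vectors).ravel()
for title, score in sorted(zip(jobs, scores), key=lambda pair: -pair[1]):
    print(f"{title}: {score:.2f}")
```

A real system would layer on much more, such as labor-market demand data, training pathways, and the kind of accessibility testing with workers the article describes, but a matching step like this often sits at the core.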

Could artificial intelligence benefit democracy?


Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.

And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.

Less attention has been given to the possible benefits of AI.

But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.

“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.

“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”

Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.

It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.
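Campaign Lab's training centers on the ChatGPT web interface, but the same draft-then-edit workflow can be scripted. The sketch below is an illustration only, not the group's actual tooling: the model name and prompt wording are assumptions, and the mandatory human review step reflects the hallucination risk noted above.

```python
# Illustrative sketch, not Campaign Lab's tooling. Generates a first
# draft of an election leaflet via the OpenAI API; a human must
# fact-check and edit it before use, since LLMs can "hallucinate".
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[
        {"role": "system",
         "content": "You draft short, plain-English election leaflets. "
                    "Never invent statistics, quotes, or endorsements."},
        {"role": "user",
         "content": "Draft a one-page leaflet for a local candidate "
                    "campaigning on bus services and safer streets."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a first draft only: edit carefully before printing
```

Keeping the human edit as a required step, rather than an afterthought, is what separates this workflow from publishing model output directly.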

The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.

AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…

Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.

He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”

In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.

The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.

“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.

Data Rules: Reinventing the Market Economy


Book by Cristina Alaimo and Jannis Kallinikos: “Digital data have become the critical frontier where emerging economic practices and organizational forms confront the traditional economic order and its institutions. In Data Rules, Cristina Alaimo and Jannis Kallinikos establish a social science framework for analyzing the unprecedented social and economic restructuring brought about by data. Working at the intersection of information systems and organizational studies, they draw extensively on intellectual currents in sociology, semiotics, cognitive science and technology, and social theory. Making the case for turning “data-making” into an area of inquiry of its own, the authors uncover how data are deeply implicated in rewiring the institutions of the market economy.

The authors associate digital data with the decentering of organizations. As they point out, centered systems make sense only when firms (and formal organizations more broadly) can keep the external world at arm’s length and maintain relative operational independence from it. These patterns no longer hold. Data transform the production of goods and services into an endless series of exchanges and interactions that defeat the functional logics of markets and organizations. The diffusion of platforms and ecosystems is indicative of these broader transformations. Rather than viewing data as simply a force of surveillance and control, the authors place the transformative potential of data at the center of an emerging socioeconomic order that restructures society and its institutions…(More)”.

Global AI governance: barriers and pathways forward 


Paper by Huw Roberts, Emmie Hine, Mariarosaria Taddeo, Luciano Floridi: “This policy paper is a response to the growing calls for ambitious new international institutions for AI. It maps the geopolitical and institutional barriers to stronger global AI governance and considers potential pathways forward in light of these constraints. We argue that a promising foundation of international regimes focused on AI governance is emerging, but the centrality of AI to interstate competition, dysfunctional international institutions, and disagreement over policy priorities problematize substantive cooperation. We propose strengthening the existing weak ‘regime complex’ of international institutions as the most desirable and realistic path forward for global AI governance. Strengthening coordination between, and the capacities of, existing institutions supports mutually reinforcing policy change, which, if enacted properly, can lead to catalytic change across the various policy areas where AI has an impact. It also facilitates the flexible governance needed for rapidly evolving technologies.

To make this argument, we outline key global AI governance processes in the next section. In the third section, we analyse how first- and second-order cooperation problems in international relations apply to AI. In the fourth section, we assess potential routes for advancing global AI governance, and we conclude by providing recommendations on how to strengthen the weak AI regime complex…(More)”.

Synthetic Politics: Preparing democracy for Generative AI


Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it’s also possible that they will become more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.

This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:

  • In Action Plan 1, we consider the actions that should urgently be put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes curbing the production and dissemination of harmful synthetic content and empowering users, so that the harmful impacts of synthetic content are reduced in the immediate term.
  • In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.

Citizen scientists—practices, observations, and experience


Paper by Michael O’Grady & Eleni Mangina: “Citizen science has been studied intensively in recent years. Nonetheless, the voice of citizen scientists is often lost, despite their altruistic and indispensable role. To remedy this deficiency, a survey on the overall experiences of citizen scientists was undertaken. Dimensions investigated include activities, open science concepts, and data practices; the study prioritizes knowledge and practices around data and data management. When a broad understanding of data is lacking, the ability to make informed decisions about consent and data sharing, for example, is compromised, and the potential and impact of individual endeavors and collaborative projects are reduced. Findings indicate that understanding of data management principles is limited, and that awareness of common data and open science concepts is low. It is concluded that appropriate training and raised awareness of Responsible Research and Innovation concepts would benefit individual citizen scientists, their projects, and society…(More)”.