The False Choice Between Digital Regulation and Innovation


Paper by Anu Bradford: “This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. US lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy towards European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the US and China in the tech race. However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, US lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the US and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for US tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. 
It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive…(More)”.

The generation of public value through e-participation initiatives: A synthesis of the extant literature


Paper by Naci Karkin and Asunur Cezar: “The number of studies evaluating e-participation levels in e-government services has recently increased. These studies primarily examine stakeholders’ acceptance and adoption of e-government initiatives. However, it is equally important to understand whether and how value is generated through e-participation, regardless of whether the focus is on government efforts or user adoption/acceptance levels. There is a need in the literature for a synthesis focusing on e-participation’s connection with public value creation using a systematic and comprehensive approach. This study employs a systematic literature review to collect, examine, and synthesize prior findings, aiming to investigate public value creation through e-participation initiatives, including their facilitators and barriers. By reviewing sixty-four peer-reviewed studies indexed by Web of Science and Scopus, this research demonstrates that e-participation initiatives and efforts can generate public value. Nevertheless, several factors are pivotal for the success and sustainability of these initiatives. The study’s findings could guide researchers and practitioners in comprehending the determinants and barriers influencing the success and sustainability of e-participation initiatives in the public value creation process while highlighting potential future research opportunities in this domain…(More)”.

How Belgium is Giving Citizens a Say on AI


Article by Graham Wetherall-Grujić: “A few weeks before the European Parliament’s final debate on the AI Act, 60 randomly selected members of the Belgian public convened in Brussels for a discussion of their own. The aim was not to debate a particular piece of legislation, but to help shape a European vision on the future of AI, drawing on the views, concerns, and ideas of the public. 

They were taking part in a citizens’ assembly on AI, held as part of Belgium’s presidency of the European Council. When Belgium assumed the presidency for six months beginning in January 2024, it announced it would be placing “special focus” on citizens’ participation. The citizen panel on AI is the largest of the scheduled participation projects. Over a total of three weekends, participants are deliberating on a range of topics including the impact of AI on work, education, and democracy. 

The assembly comes at a time of rising calls for more public input on AI. Some big tech firms have begun to respond with participation projects of their own. But this is the first time an EU institution has launched a consultation on the topic. The organisers hope it will pave the way for more to come…(More)”.

The Global State of Social Connections


Gallup: “Social needs are universal, and the degree to which they are fulfilled — or not — impacts the health, well-being and resilience of people everywhere. With increasing global interest in understanding how social connections support or hinder health, policymakers worldwide may benefit from reliable data on the current state of social connectedness. Despite the critical role of social connectedness for communities and the people who live in them, little is known about the frequency or form of social connection in many — if not most — parts of the world.

Meta and Gallup have collaborated on two research studies to help fill this gap. In 2022, the Meta-Gallup State of Social Connections report revealed important variations in people’s sense of connectedness and loneliness across the seven countries studied. This report builds on that research by presenting data on connections and loneliness among people from 142 countries…(More)”.

The impact of generative artificial intelligence on socioeconomic inequalities and policy making


Paper by Valerio Capraro et al: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.

The tech industry can’t agree on what open-source AI means. That’s a problem.


Article by Edd Gent: “Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models.

Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. 

But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

On the face of it, open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it?

The answers could have significant ramifications for the future of the technology. Until the tech industry has settled on a definition, powerful companies can easily bend the concept to suit their own needs, and it could become a tool to entrench the dominance of today’s leading players.

Entering this fray is the Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source. Founded in 1998, the nonprofit is the custodian of the Open Source Definition, a widely accepted set of rules that determine whether a piece of software can be considered open source. 

Now, the organization has assembled a 70-strong group of researchers, lawyers, policymakers, activists, and representatives from big tech companies like Meta, Google, and Amazon to come up with a working definition of open-source AI…(More)”.

New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including those of computer programmers, data scientists, graphic designers, writers, and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found an overwhelming majority of Americans (66%) believe that they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects, and the training you will need to be ready for a high-growth, in-demand job…(More)”.

Could artificial intelligence benefit democracy?


Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.

And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.

Less attention has been given to the possible benefits of AI.

But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.

“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.

“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”

Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.

It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.

The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.

AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…

Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.

He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”

In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.

The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.

“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.

Data Rules: Reinventing the Market Economy


Book by Cristina Alaimo and Jannis Kallinikos: “Digital data have become the critical frontier where emerging economic practices and organizational forms confront the traditional economic order and its institutions. In Data Rules, Cristina Alaimo and Jannis Kallinikos establish a social science framework for analyzing the unprecedented social and economic restructuring brought about by data. Working at the intersection of information systems and organizational studies, they draw extensively on intellectual currents in sociology, semiotics, cognitive science and technology, and social theory. Making the case for turning “data-making” into an area of inquiry of its own, the authors uncover how data are deeply implicated in rewiring the institutions of the market economy.

The authors associate digital data with the decentering of organizations. As they point out, centered systems make sense only when firms (and formal organizations more broadly) can keep the external world at arm’s length and maintain relative operational independence from it. These patterns no longer hold. Data transform the production of goods and services into an endless series of exchanges and interactions that defeat the functional logics of markets and organizations. The diffusion of platforms and ecosystems is indicative of these broader transformations. Rather than viewing data as simply a force of surveillance and control, the authors place the transformative potential of data at the center of an emerging socioeconomic order that restructures society and its institutions…(More)”.

Global AI governance: barriers and pathways forward 


Paper by Huw Roberts, Emmie Hine, Mariarosaria Taddeo, Luciano Floridi: “This policy paper is a response to the growing calls for ambitious new international institutions for AI. It maps the geopolitical and institutional barriers to stronger global AI governance and considers potential pathways forward in light of these constraints. We argue that a promising foundation of international regimes focused on AI governance is emerging, but the centrality of AI to interstate competition, dysfunctional international institutions and disagreement over policy priorities problematizes substantive cooperation. We propose strengthening the existing weak ‘regime complex’ of international institutions as the most desirable and realistic path forward for global AI governance. Strengthening coordination between, and the capacities of, existing institutions supports mutually reinforcing policy change, which, if enacted properly, can lead to catalytic change across the various policy areas where AI has an impact. It also facilitates the flexible governance needed for rapidly evolving technologies.

To make this argument, we outline key global AI governance processes in the next section. In the third section, we analyse how first- and second-order cooperation problems in international relations apply to AI. In the fourth section we assess potential routes for advancing global AI governance, and we conclude by providing recommendations on how to strengthen the weak AI regime complex…(More)”.