New Jersey is turning to AI to improve the job search process


Article by Beth Simone Noveck: “Americans are experiencing some conflicting feelings about AI.

While people are flocking to new roles like prompt engineer and AI ethicist, the technology is also predicted to put many jobs at risk, including computer programmers, data scientists, graphic designers, writers, and lawyers.

Little wonder, then, that a national survey by the Heldrich Center for Workforce Development found an overwhelming majority of Americans (66%) believe that they “will need more technological skills to achieve their career goals.” One thing is certain: Workers will need to train for change. And in a world of misinformation-filled social media platforms, it is increasingly important for trusted public institutions to provide reliable, data-driven resources.

In New Jersey, we’ve tried doing just that by collaborating with workers, including many with disabilities, to design technology that will support better decision-making around training and career change. Investing in similar public AI-powered tools could help support better consumer choice across various domains. When a public entity designs, controls and implements AI, there is a far greater likelihood that this powerful technology will be used for good.

In New Jersey, the public can find reliable, independent, unbiased information about training and upskilling on the state’s new MyCareer website, which uses AI to make personalized recommendations about your career prospects and the training you will need to be ready for a high-growth, in-demand job…(More)”.
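
The state has not published MyCareer's underlying models, but the core idea of matching a worker's skills against in-demand jobs and surfacing the training gap can be sketched in a few lines. The job titles and skill lists below are invented for illustration and are not New Jersey's actual data:

```python
def recommend_training(worker_skills, jobs):
    """Rank jobs by how much of their skill profile the worker already
    covers; the uncovered remainder is the suggested training gap."""
    worker = set(worker_skills)
    ranked = []
    for job, required in jobs.items():
        required = set(required)
        coverage = len(required & worker) / len(required)
        gap = sorted(required - worker)  # skills to train for
        ranked.append((job, round(coverage, 2), gap))
    # Most reachable jobs (highest existing coverage) first
    ranked.sort(key=lambda r: r[1], reverse=True)
    return ranked

# Example: a worker with spreadsheet and SQL experience
matches = recommend_training(
    ["sql", "excel"],
    {"data analyst": ["sql", "python", "excel"],
     "web developer": ["javascript", "html", "css"]},
)
```

A production system would draw on far richer signals (labor-market statistics, wage data, semantic matching of job descriptions), but the recommend-then-show-the-gap structure is the same.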

Could artificial intelligence benefit democracy?


Article by Brian Wheeler: “Each week sees a new set of warnings about the potential impact of AI-generated deepfakes – realistic video and audio of politicians saying things they never said – spreading confusion and mistrust among the voting public.

And in the UK, regulators, security services and government are battling to protect this year’s general election from malign foreign interference.

Less attention has been given to the possible benefits of AI.

But a lot of work is going on, often below the radar, to try to harness its power in ways that might enhance democracy rather than destroy it.

“While this technology does pose some important risks in terms of disinformation, it also offers some significant opportunities for campaigns, which we can’t ignore,” Hannah O’Rourke, co-founder of Campaign Lab, a left-leaning network of tech volunteers, says.

“Like all technology, what matters is how AI is actually implemented. Its impact will be felt in the way campaigners actually use it.”

Among other things, Campaign Lab runs training courses for Labour and Liberal Democrat campaigners on how to use ChatGPT (Chat Generative Pre-trained Transformer) to create the first draft of election leaflets.

It reminds them to edit the final product carefully, though, as large language models (LLMs) such as ChatGPT have a worrying tendency to “hallucinate” or make things up.
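The workflow described above, generate a first draft by machine, then edit it by hand, maps onto a short script. This sketch uses the OpenAI Python client; the model name, prompt wording, and helper names are placeholders rather than Campaign Lab's actual materials, and the human fact-checking step deliberately stays outside the code:

```python
def build_prompt(candidate, constituency, issues):
    """Assemble a leaflet-drafting prompt from facts the campaigner
    supplies, so the model has as little as possible to invent."""
    return (
        f"Draft a one-page election leaflet for {candidate}, "
        f"standing in {constituency}. Focus on: {', '.join(issues)}. "
        "Use only the facts given; do not invent statistics or quotes."
    )

def draft_leaflet(candidate, constituency, issues, model="gpt-4o-mini"):
    """Return a FIRST DRAFT only -- LLMs can hallucinate, so every
    claim must be checked by a human before anything is printed."""
    # Deferred import so the prompt helper above works offline
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": build_prompt(candidate, constituency, issues)}],
    )
    return resp.choices[0].message.content
```

The editing reminder is the important part: nothing returned by `draft_leaflet` should reach a printer without human review.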

The group is also experimenting with chatbots to help train canvassers to have more engaging conversations on the doorstep.

AI is already embedded in everyday programs, from Microsoft Outlook to Adobe Photoshop, Ms O’Rourke says, so why not use it in a responsible way to free up time for more face-to-face campaigning?…

Conservative-supporting AI expert Joe Reeve is another young political campaigner convinced the new technology can transform things for the better.

He runs Future London, a community of “techno optimists” who use AI to seek answers to big questions such as “Why can’t I buy a house?” and, crucially, “Where’s my robot butler?”

In 2020, Mr Reeve founded Tory Techs, partly as a right-wing response to Campaign Lab.

The group has run programming sessions and explored how to use AI to hone Tory campaign messages but, Mr Reeve says, it now “mostly focuses on speaking with MPs in more private and safe spaces to help coach politicians on what AI means and how it can be a positive force”.

“Technology has an opportunity to make the world a lot better for a lot of people and that is regardless of politics,” he tells BBC News…(More)”.

Data Rules: Reinventing the Market Economy


Book by Cristina Alaimo and Jannis Kallinikos: “Digital data have become the critical frontier where emerging economic practices and organizational forms confront the traditional economic order and its institutions. In Data Rules, Cristina Alaimo and Jannis Kallinikos establish a social science framework for analyzing the unprecedented social and economic restructuring brought about by data. Working at the intersection of information systems and organizational studies, they draw extensively on intellectual currents in sociology, semiotics, cognitive science and technology, and social theory. Making the case for turning “data-making” into an area of inquiry of its own, the authors uncover how data are deeply implicated in rewiring the institutions of the market economy.

The authors associate digital data with the decentering of organizations. As they point out, centered systems make sense only when firms (and formal organizations more broadly) can keep the external world at arm’s length and maintain relative operational independence from it. These patterns no longer hold. Data transform the production of goods and services into an endless series of exchanges and interactions that defeat the functional logics of markets and organizations. The diffusion of platforms and ecosystems is indicative of these broader transformations. Rather than viewing data as simply a force of surveillance and control, the authors place the transformative potential of data at the center of an emerging socioeconomic order that restructures society and its institutions…(More)”.

Global AI governance: barriers and pathways forward 


Paper by Huw Roberts, Emmie Hine, Mariarosaria Taddeo, Luciano Floridi: “This policy paper is a response to the growing calls for ambitious new international institutions for AI. It maps the geopolitical and institutional barriers to stronger global AI governance and considers potential pathways forward in light of these constraints. We argue that a promising foundation of international regimes focused on AI governance is emerging, but the centrality of AI to interstate competition, dysfunctional international institutions and disagreement over policy priorities problematizes substantive cooperation. We propose strengthening the existing weak ‘regime complex’ of international institutions as the most desirable and realistic path forward for global AI governance. Strengthening coordination between, and the capacities of, existing institutions supports mutually reinforcing policy change, which, if enacted properly, can lead to catalytic change across the various policy areas where AI has an impact. It also facilitates the flexible governance needed for rapidly evolving technologies.

To make this argument, we outline key global AI governance processes in the next section. In the third section, we analyse how first- and second-order cooperation problems in international relations apply to AI. In the fourth section we assess potential routes for advancing global AI governance, and we conclude by providing recommendations on how to strengthen the weak AI regime complex…(More)”.

Synthetic Politics: Preparing democracy for Generative AI


Report by Demos: “This year is a politically momentous one, with almost half the world voting in elections. Generative AI may revolutionise our political information environments by making them more effective, relevant, and participatory. But it’s also possible that they will become more manipulative, confusing, and dangerous. We’ve already seen AI-generated audio of politicians going viral and chatbots offering incorrect information about elections.

This report, produced in partnership with University College London, explores how synthetic content produced by generative AI poses risks to the core democratic values of truth, equality, and non-violence. It proposes two action plans for what private and public decision-makers should be doing to safeguard democratic integrity immediately and in the long run:

  • In Action Plan 1, we consider the actions that should urgently be put in place to reduce the acute risks to democratic integrity presented by generative AI tools. This includes curbing the production and dissemination of harmful synthetic content and empowering users, so that its harmful impacts are limited in the immediate term.
  • In Action Plan 2, we set out a longer-term vision for how the fundamental risks to democratic integrity should be addressed. We explore the ways in which generative AI tools can help bolster equality, truth and non-violence, from enabling greater democratic participation to improving how key information institutions operate…(More)”.

Citizen scientists—practices, observations, and experience


Paper by Michael O’Grady & Eleni Mangina: “Citizen science has been studied intensively in recent years. Nonetheless, the voice of citizen scientists is often lost despite their altruistic and indispensable role. To remedy this deficiency, a survey on the overall experiences of citizen scientists was undertaken. Dimensions investigated include activities, open science concepts, and data practices. However, the study prioritizes knowledge and practices of data and data management. When a broad understanding of data is lacking, the ability to make informed decisions about consent and data sharing, for example, is compromised. Furthermore, the potential and impact of individual endeavors and collaborative projects are reduced. Findings indicate that understanding of data management principles is limited. Furthermore, an unawareness of common data and open science concepts was observed. It is concluded that appropriate training and a raised awareness of Responsible Research and Innovation concepts would benefit individual citizen scientists, their projects, and society…(More)”.

Evidence Ecosystems and the Challenge of Humanising and Normalising Evidence


Article by Geoff Mulgan: “It is reasonable to assume that the work of governments, businesses and civil society goes better if the people making decisions are well-informed, using reliable facts and strong evidence rather than only hunch and anecdote.  The term ‘evidence ecosystem’ is a useful shorthand for the results of systematic attempts to make this easier, enabling decision makers, particularly in governments, to access the best available evidence, in easily digestible forms and when it’s needed.  

…This sounds simple.  But these ecosystems are as varied as ecosystems in nature.  How they work depends on many factors, including how political or technical the issues are; the presence or absence of confident, well-organised professions; the availability of good quality evidence; whether there is a political culture that values research; and much more.

In particular, the paper argues that the next generation of evidence ecosystems need a sharper understanding of how the supply of evidence meets demand, and the human dimension of evidence.  That means cultivating lasting relationships rather than relying too much on a linear flow of evidence from researchers to decision-makers; it means using conversation as much as prose reports to ensure evidence is understood and acted on; and it means making use of stories as well as dry analysis.  It depends, in other words, on recognising that the users of evidence are humans.

In terms of prescription the paper emphasises:

  • Sustainability/normalisation: the best approaches are embedded, part of the daily life of decision-making rather than depending on one-off projects and programmes.  This applies both to evidence and to data.  Yet embeddedness is the exception rather than the rule.
  • Multiplicity: multiple types of knowledge, and logics, are relevant to decisions, which is why people and institutions that understand these different logics are so vital.  
  • Credibility and relationships: the intermediaries who connect the supply and demand of knowledge need to be credible, with both depth of knowledge and an ability to interpret it for diverse audiences, and they need to be able to create and maintain relationships, which will usually be either place or topic based, and will take time to develop, with the communication of evidence often done best in conversation.
  • Stories: influencing decision-makers depends on indirect as well as direct communication, since the media in all their forms play a crucial role in validating evidence and evidence travels best with stories, vignettes and anecdotes.

In short, while evidence is founded on rigorous analysis, good data and robust methods, it also needs to be humanised – embedded in relationships, brought alive in conversations and vivid, human stories – and normalised, becoming part of everyday work…(More)”.

Mechanisms for Researcher Access to Online Platform Data


Status Report by the EU/USA: “Academic and civil society research on prominent online platforms has become a crucial way to understand the information environment and its impact on our societies. Scholars across the globe have leveraged application programming interfaces (APIs) and web crawlers to collect public user-generated content and advertising content on online platforms to study societal issues ranging from technology-facilitated gender-based violence to the impact of media on mental health for children and youth. Yet, a changing landscape of platforms’ data access mechanisms and policies has created uncertainty and difficulty for critical research projects.

The United States and the European Union have a shared commitment to advance data access for researchers, in line with the high-level principles on access to data from online platforms for researchers announced at the EU-U.S. Trade and Technology Council (TTC) Ministerial Meeting in May 2023. Since the launch of the TTC, the EU Digital Services Act (DSA) has gone into effect, requiring providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide increased transparency into their services. The DSA includes provisions on transparency reports, terms and conditions, and explanations for content moderation decisions. Among those, two provisions provide important access to publicly available content on platforms:

• DSA Article 40.12 requires providers of VLOPs/VLOSEs to provide academic and civil society researchers with data that is “publicly accessible in their online interface.”
• DSA Article 39 requires providers of VLOPs/VLOSEs to maintain a public repository of advertisements.
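
In practice, the public repositories and data-access mechanisms described above are typically exposed as cursor-paginated APIs. The endpoint shape below is hypothetical (each VLOP documents its own), but the collection loop a researcher writes around any of them looks roughly like this:

```python
def collect_all(fetch_page, max_pages=1000):
    """Walk a cursor-paginated public repository and accumulate records.

    `fetch_page(cursor)` is whatever HTTP call the platform's API
    requires; it is assumed to return {"records": [...], "next_cursor": ...},
    with next_cursor None on the last page.
    """
    records, cursor = [], None
    for _ in range(max_pages):  # hard cap avoids endless crawls
        page = fetch_page(cursor)
        records.extend(page["records"])
        cursor = page.get("next_cursor")
        if cursor is None:
            break
    return records

# Offline demo with a fake two-page repository
_pages = {None: {"records": [{"id": 1}, {"id": 2}], "next_cursor": "p2"},
          "p2": {"records": [{"id": 3}], "next_cursor": None}}
ads = collect_all(lambda c: _pages[c])
```

With a real endpoint, `fetch_page` would wrap an authenticated HTTP request and respect the platform's rate limits and terms of access.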

The announcements related to new researcher access mechanisms mark an important development and opportunity to better understand the information environment. This status report summarizes a subset of mechanisms that are available to European and/or United States researchers today, following, in part, measures taken by VLOPs and VLOSEs to comply with the DSA. The report aims to showcase the existing access modalities and to encourage the use of these mechanisms to study the impact of online platforms’ design and decisions on society. The list of mechanisms reviewed is included in the Appendix…(More)”

The Potential of Artificial Intelligence for the SDGs and Official Statistics


Report by Paris21: “Artificial Intelligence (AI) and its impact on people’s lives is growing rapidly. AI is already leading to significant developments from healthcare to education, which can contribute to the efficient monitoring and achievement of the Sustainable Development Goals (SDGs), a call to action to address the world’s greatest challenges. AI is also raising concerns because, if not addressed carefully, its risks may outweigh its benefits. As a result, AI is garnering increasing attention from National Statistical Offices (NSOs) and the official statistics community as they are challenged to produce more comprehensive, timely, and high-quality data for decision-making with limited resources in a rapidly changing world of data and technologies and in light of complex and converging global issues from pandemics to climate change. This paper has been prepared as an input to the “Data and AI for Sustainable Development: Building a Smarter Future” Conference, organized in partnership with The Partnership in Statistics for Development in the 21st Century (PARIS21), the World Bank and the International Monetary Fund (IMF). Building on case studies that examine the use of AI by NSOs, the paper presents the benefits and risks of AI with a focus on NSO operations related to sustainable development. The objective is to spark discussions and to initiate a dialogue around how AI can be leveraged to inform decisions and take action to better monitor and achieve sustainable development, while mitigating its risks…(More)”.

Counting Feminicide: Data Feminism in Action


Book by Catherine D’Ignazio: “What isn’t counted doesn’t count. And mainstream institutions systematically fail to account for feminicide, the gender-related killing of women and girls, including cisgender and transgender women. Against this failure, Counting Feminicide brings to the fore the work of data activists across the Americas who are documenting such murders—and challenging the reigning logic of data science by centering care, memory, and justice in their work. Drawing on Data Against Feminicide, a large-scale collaborative research project, Catherine D’Ignazio describes the creative, intellectual, and emotional labor of feminicide data activists who are at the forefront of a data ethics that rigorously and consistently takes power and people into account.

Individuals, researchers, and journalists—these data activists scour news sources to assemble spreadsheets and databases of women killed by gender-related violence, then circulate those data in a variety of creative and political forms. Their work reveals the potential of restorative/transformative data science—the use of systematic information to, first, heal communities from the violence and trauma produced by structural inequality and, second, envision and work toward the world in which such violence has been eliminated. Specifically, D’Ignazio explores the possibilities and limitations of counting and quantification—reducing complex social phenomena to convenient, sortable, aggregable forms—when the goal is nothing short of the elimination of gender-related violence.

Counting Feminicide showcases the incredible power of data feminism in practice, in which each murdered woman or girl counts, and, in being counted, joins a collective demand for the restoration of rights and a transformation of the gendered order of the world…(More)”.