Information Ecosystems and Troubled Democracy


Report by the Observatory on Information and Democracy: “This inaugural meta-analysis provides a critical assessment of the role of information ecosystems in the Global North and Global Majority World, focusing on their relationship with information integrity (the quality of public discourse), the fairness of political processes, the protection of media freedoms, and the resilience of public institutions.

The report addresses three thematic areas with a cross-cutting theme of mis- and disinformation:

  • Media, Politics and Trust;
  • Artificial Intelligence, Information Ecosystems and Democracy;
  • Data Governance and Democracy.

The analysis is based mainly on academic publications, supplemented by reports and other materials from different disciplines and regions (1,664 citations selected from a total corpus of more than 2,700 aggregated resources). The report showcases what we can learn from landmark research on often intractable challenges posed by rapid changes in information and communication spaces…(More)”.

What’s a Fact, Anyway?


Essay by Fergus McIntosh: “…For journalists, as for anyone, there are certain shortcuts to trustworthiness, including reputation, expertise, and transparency—the sharing of sources, for example, or the prompt correction of errors. Some of these shortcuts are more perilous than others. Various outfits, positioning themselves as neutral guides to the marketplace of ideas, now tout evaluations of news organizations’ trustworthiness, but relying on these requires trusting in the quality and objectivity of the evaluation. Official data is often taken at face value, but numbers can conceal motives: think of the dispute over how to count casualties in recent conflicts. Governments, meanwhile, may use their powers over information to suppress unfavorable narratives: laws originally aimed at misinformation, many enacted during the COVID-19 pandemic, can hinder free expression. The spectre of this phenomenon is fuelling a growing backlash in America and elsewhere.

Although some categories of information may come to be considered inherently trustworthy, these, too, are in flux. For decades, the technical difficulty of editing photographs and videos allowed them to be treated, by most people, as essentially incontrovertible. With the advent of A.I.-based editing software, footage and imagery have swiftly become much harder to credit. Similar tools are already used to spoof voices based on only seconds of recorded audio. For anyone, this might manifest in scams (your grandmother calls, but it’s not Grandma on the other end), but for a journalist it also puts source calls into question. Technologies of deception tend to be accompanied by ones of detection or verification—a battery of companies, for example, already promise that they can spot A.I.-manipulated imagery—but they’re often locked in an arms race, and they never achieve total accuracy. Though chatbots and A.I.-enabled search engines promise to help us with research (when a colleague “interviewed” ChatGPT, it told him, “I aim to provide information that is as neutral and unbiased as possible”), their inability to provide sourcing, and their tendency to hallucinate, look more like a shortcut to nowhere, at least for now. The resulting problems extend far beyond media: election campaigns, in which subtle impressions can lead to big differences in voting behavior, feel increasingly vulnerable to deepfakes and other manipulations by inscrutable algorithms. Like everyone else, journalists have only just begun to grapple with the implications.

In such circumstances, it becomes difficult to know what is true, and, consequently, to make decisions. Good journalism offers a way through, but only if readers are willing to follow: trust and naïveté can feel uncomfortably close. Gaining and holding that trust is hard. But failure—the end point of the story of generational decay, of gold exchanged for dross—is not inevitable. Fact checking of the sort practiced at The New Yorker is highly specific and resource-intensive, and it’s only one potential solution. But any solution must acknowledge the messiness of truth, the requirements of attention, the way we squint to see more clearly. It must tell you to say what you mean, and know that you mean it…(More)”.

AI for Social Good


Essay by Iqbal Dhaliwal: “Artificial intelligence (AI) has the potential to transform our lives. Like the internet, it’s a general-purpose technology that spans sectors, is widely accessible, has a low marginal cost of adding users, and is constantly improving. Tech companies are rapidly deploying more capable AI models that are seeping into our personal lives and work.

AI is also swiftly penetrating the social sector. Governments, social enterprises, and NGOs are infusing AI into programs, while public treasuries and donors are working hard to understand where to invest. For example, AI is being deployed to improve health diagnostics, map flood-prone areas for better relief targeting, grade students’ essays to free up teachers’ time for student interaction, assist governments in detecting tax fraud, and enable agricultural extension workers to customize advice.

But the social sector is also rife with examples over the past two decades of technologies touted as silver bullets that fell short of expectations, including One Laptop Per Child, SMS reminders to take medication, and smokeless stoves to reduce indoor air pollution. To avoid a similar fate, AI-infused programs must incorporate insights from years of evidence generated by rigorous impact evaluations and be scaled in an informed way through concurrent evaluations.

Specifically, implementers of such programs must pay attention to three elements. First, they must use research insights on where AI is likely to have the greatest social impact. Decades of research using randomized controlled trials and other exacting empirical work provide us with insights across sectors on where and how AI can play the most effective role in social programs.

Second, they must incorporate research lessons on how to effectively infuse AI into existing social programs. We have decades of research on when and why technologies succeed or fail in the social sector that can help guide AI adopters (governments, social enterprises, NGOs), tech companies, and donors to avoid pitfalls and design effective programs that work in the field.

Third, we must promote the rigorous evaluation of AI in the social sector so that we disseminate trustworthy information about what works and what does not. We must motivate adopters, tech companies, and donors to conduct independent, rigorous, concurrent impact evaluations of promising AI applications across social sectors (including impact on workers themselves); draw insights emerging across multiple studies; and disseminate those insights widely so that the benefits of AI can be maximized and its harms understood and minimized. Taking these steps can also help build trust in AI among social sector players and program participants more broadly…(More)”.
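
As a purely illustrative aside, the core calculation behind such a randomized impact evaluation is a comparison of outcomes between randomly assigned treatment and control groups. The minimal sketch below uses simulated outcomes and an assumed effect size; the variable names and numbers are hypothetical and are not drawn from the essay or from any actual evaluation.

```python
# Illustrative sketch: a simple randomized-evaluation comparison --
# difference in mean outcomes between treatment and control, with a
# confidence interval. Simulated data and effect size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=50.0, scale=10.0, size=1_000)    # e.g. outcomes without the AI-assisted program
treatment = rng.normal(loc=52.0, scale=10.0, size=1_000)  # outcomes with the (hypothetical) program

effect = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / treatment.size + control.var(ddof=1) / control.size)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated effect: {effect:.2f} (95% CI ± {1.96 * se:.2f}), p = {p_value:.4f}")
```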

Facing & mitigating common challenges when working with real-world data: The Data Learning Paradigm


Paper by Jake Lever et al: “The rapid growth of data-driven applications is ubiquitous across virtually all scientific domains, and has led to an increasing demand for effective methods to handle data deficiencies and mitigate the effects of imperfect data. This paper presents a guide for researchers encountering real-world data-driven applications and the challenges associated with them. This article proposes the concept of the Data Learning Paradigm, combining the principles of machine learning, data science and data assimilation to tackle real-world challenges in data-driven applications. Models are a product of the data upon which they are trained, and no data collected from real-world scenarios is perfect due to natural limitations of sensing and collection. Thus, computational modelling of real-world systems is intrinsically limited by the various deficiencies encountered in real data. The Data Learning Paradigm aims to leverage the strengths of data improvement to enhance the accuracy, reliability, and interpretability of data-driven models. We outline a range of methods which are currently being implemented in the field of Data Learning involving machine learning and data science methods, and discuss how these mitigate the various problems associated with data-driven models, illustrating improved results in a multitude of real-world applications. We highlight examples where these methods have led to significant advancements in fields such as environmental monitoring, planetary exploration, healthcare analytics, linguistic analysis, social networks, and smart manufacturing. We offer a guide to how these methods may be implemented to deal with general types of limitations in data, alongside their current and potential applications…(More)”.
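
As an illustrative aside, one of the simplest data-improvement steps this family of methods draws on is imputing missing values before fitting a model. The sketch below is a minimal, hypothetical example: the synthetic data, column layout, and choice of imputer are assumptions for demonstration, not the paper’s pipeline.

```python
# Illustrative sketch only: filling gaps in imperfect sensor data before modelling.
# Synthetic data and the mean-imputation strategy are assumptions for demonstration.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four hypothetical sensor channels
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=500)

# Simulate a real-world deficiency: ~20% of readings missing at random.
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X_missing, y, random_state=0)

# "Data improvement" step: fill gaps with per-channel means before fitting.
imputer = SimpleImputer(strategy="mean")
model = Ridge().fit(imputer.fit_transform(X_train), y_train)
pred = model.predict(imputer.transform(X_test))
print(f"MAE with imputed data: {mean_absolute_error(y_test, pred):.3f}")
```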

Sortition: Past and Present


Introduction to the Journal of Sortition: “Since ancient times sortition (random selection by lot) has been used both to distribute political office and as a general prophylactic against factionalism and corruption in societies as diverse as classical-era Athens and the Most Serene Republic of Venice. Lotteries have also been employed for the allocation of scarce goods such as social housing and school places to eliminate bias and ensure just distribution, along with drawing lots in circumstances where unpopular tasks or tragic choices are involved (as some situations are beyond rational human decision-making). More recently, developments in public opinion polling using random sampling have led to the proliferation of citizens’ assemblies selected by lot. Some activists have even proposed such bodies as an alternative to elected representatives. The Journal of Sortition benefits from an editorial board with a wide range of expertise and perspectives in this area. In this introduction to the first issue, we have invited our editors to explain why they are interested in sortition, and to outline the benefits (and pitfalls) of the recent explosion of interest in the topic…(More)”.
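
For readers unfamiliar with the mechanics, selection by lot is computationally trivial. The minimal sketch below draws a stratified citizens’ assembly at random; the volunteer pool, strata, and quotas are invented for illustration and do not describe any particular assembly’s procedure.

```python
# Illustrative sketch: drawing a citizens' assembly by lot, with simple stratification
# so the sample roughly mirrors population shares. Pool, strata, and quotas are assumed.
import random

random.seed(42)

# Hypothetical pool of volunteers, each tagged with an age band.
pool = [{"id": i, "age_band": random.choice(["18-34", "35-54", "55+"])} for i in range(10_000)]

# Assumed target seats per age band for a 100-member assembly.
quotas = {"18-34": 30, "35-54": 40, "55+": 30}

assembly = []
for band, seats in quotas.items():
    eligible = [p for p in pool if p["age_band"] == band]
    assembly.extend(random.sample(eligible, seats))  # the lot: a uniform random draw

print(len(assembly), "members selected by lot")
```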

Digitalizing sewage: The politics of producing, sharing, and operationalizing data from wastewater-based surveillance


Paper by Josie Wittmer, Carolyn Prouse, and Mohammed Rafi Arefin: “Expanded during the COVID-19 pandemic, Wastewater-Based Surveillance (WBS) is now heralded by scientists and policy makers alike as the future of monitoring and governing urban health. The expansion of WBS reflects larger neoliberal governance trends whereby digitalizing states increasingly rely on producing big data as a ‘best practice’ to surveil various aspects of everyday life. With a focus on three South Asian cities, our paper investigates the transnational pathways through which WBS data is produced, made known, and operationalized in ‘evidence-based’ decision-making in a time of crisis. We argue that in South Asia, wastewater surveillance data is actively produced through fragile but power-laden networks of transnational and local knowledge, funding, and practices. Using mixed qualitative methods, we found these networks produced artifacts like dashboards to communicate data to the public in ways that enabled claims to objectivity, ethical interventions, and transparency. Interrogating these representations, we demonstrate how these artifacts open up messy spaces of translation that trouble linear notions of objective data informing accountable, transparent, and evidence-based decision-making for diverse urban actors. By thinking through the production of precarious biosurveillance infrastructures, we respond to calls for more robust ethical and legal frameworks for the field and suggest that the fragility of WBS infrastructures has important implications for the long-term trajectories of urban public health governance in the global South…(More)”

Will Artificial Intelligence Replace Us or Empower Us?


Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.

Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.

Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”

Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.

Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.

Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.

People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.

Theorizing the functions and patterns of agency in the policymaking process


Paper by Giliberto Capano, et al: “Theories of the policy process understand the dynamics of policymaking as the result of the interaction of structural and agency variables. While these theories tend to conceptualize structural variables in a careful manner, agency (i.e. the actions of individual agents, like policy entrepreneurs, policy leaders, policy brokers, and policy experts) is left as a residual piece in the puzzle of the causality of change and stability. This treatment of agency leaves room for conceptual overlaps, analytical confusion and empirical shortcomings that can complicate the life of the empirical researcher and, most importantly, hinder the ability of theories of the policy process to fully address the drivers of variation in policy dynamics. Drawing on Merton’s concept of function, this article presents a novel theorization of agency in the policy process. We start from the assumption that agency functions are a necessary component through which policy dynamics evolve. We then theorise that agency can fulfil four main functions – steering, innovation, intermediation and intelligence – that need to be performed, by individual agents, in any policy process through four patterns of action – leadership, entrepreneurship, brokerage and knowledge accumulation – and we provide a roadmap for operationalising and measuring these concepts. We then demonstrate what can be achieved in terms of analytical clarity and potential theoretical leverage by applying this novel conceptualisation to two major policy process theories: the Multiple Streams Framework (MSF) and the Advocacy Coalition Framework (ACF)…(More)”.

The Access to Public Information: A Fundamental Right


Book by Alejandra Soriano Diaz: “Information is not only a fundamental human right; it has also been shaped as a pillar for the exercise of other human rights around the world. It is the path for bringing authorities and other powerful actors to account before the people, who are, for all intents and purposes, the actual owners of public data.

Providing information about public decisions that have the potential to significantly impact a community is vital to modern democracy. This book explores the forms in which individuals and collectives are able to voice their opinions and participate in public decision-making when long-lasting effects are at stake, on present and future generations. The strong correlation between the right to access public information and the enjoyment of civil and political rights, as well as economic and environmental rights, emphasizes their interdependence.

This study raises a number of important questions about how to mobilize toward openness and to empower people’s ownership of their public information…(More)”.

Big brother: the effects of surveillance on fundamental aspects of social vision


Paper by Kiley Seymour et al: “Despite the dramatic rise of surveillance in our societies, only limited research has examined its effects on humans. While most research has focused on voluntary behaviour, no study has examined the effects of surveillance on more fundamental and automatic aspects of human perceptual awareness and cognition. Here, we show that being watched on CCTV markedly impacts a hardwired and involuntary function of human sensory perception—the ability to consciously detect faces. Using the method of continuous flash suppression (CFS), we show that when people are surveilled (N = 24), they are quicker than controls (N = 30) to detect faces. An independent control experiment (N = 42) ruled out an explanation based on demand characteristics and social desirability biases. These findings show that being watched impacts not only consciously controlled behaviours but also unconscious, involuntary visual processing. Our results have implications concerning the impacts of surveillance on basic human cognition as well as public mental health…(More)”.
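
As an illustrative aside, the group comparison at the heart of such an experiment can be sketched in a few lines. The simulated breakthrough latencies and the choice of a non-parametric test below are assumptions for demonstration, not the study’s data or analysis; only the group sizes echo those reported.

```python
# Illustrative sketch: comparing face-detection latencies between a surveilled group
# and a control group, as one might for a CFS breakthrough-time experiment.
# Simulated latencies and the test choice are assumptions, not the study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
surveilled = rng.lognormal(mean=0.9, sigma=0.3, size=24)   # faster breakthrough (seconds)
control = rng.lognormal(mean=1.0, sigma=0.3, size=30)

# Breakthrough times are typically skewed, so a non-parametric test is a common choice.
u_stat, p_value = stats.mannwhitneyu(surveilled, control, alternative="less")
print(f"Median surveilled: {np.median(surveilled):.2f}s, "
      f"median control: {np.median(control):.2f}s, p = {p_value:.4f}")
```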