Making data for good better


Article by Caroline Buckee, Satchit Balsari, and Andrew Schroeder: “…Despite the long-standing excitement about the potential for digital tools, Big Data, and AI to transform our lives, these innovations, with some exceptions, have so far had little impact on the greatest public health emergency of our time.

Attempts to use digital data streams to rapidly produce public health insights that were not only relevant for local contexts in cities and countries around the world, but also available to the decision makers who needed them, exposed enormous gaps across the translational pipeline. The insights from novel data streams that could help drive precise, impactful health programs and bring effective aid to communities found limited use among public health and emergency response systems. We share here our experience from the COVID-19 Mobility Data Network (CMDN), now Crisis Ready (crisisready.io), a global collaboration of researchers, mostly infectious disease epidemiologists and data scientists, who served as trusted intermediaries between technology companies willing to share vast amounts of digital data and policy makers struggling to incorporate insights from these novel data streams into their decision making. Through our experience with the Network, and using human mobility data as an illustrative example, we recognize three sets of barriers to the successful application of large digital datasets for public good.

First, in the absence of pre-established working relationships with technology companies and data brokers, the data remain primarily confined within private circuits of ownership and control. During the pandemic, data sharing agreements between large technology companies and researchers were hastily cobbled together, often without the right kind of domain expertise in the mix. Second, the lack of standardization, interoperability, and information on the uncertainty and biases associated with these data necessitated complex analytical processing by highly specialized domain experts. And finally, local public health departments, understandably unfamiliar with these novel data streams, had neither the bandwidth nor the expertise to sift signal from noise. Ultimately, most efforts did not yield consistently useful information for decision making, particularly in low-resource settings, where capacity limitations in the public sector are most acute…(More)”.

‘Sharing Is Caring’: Creative Commons, Transformative Culture, and Moral Rights Protection


Paper by Alexandra Giannopoulou: “The practice of sharing works free from traditional legal reservations aims to mark both ideological and systemic distance from the exclusive proprietary regime of copyright. The positive involvement of the public in acts of creativity is a defining feature of transformative culture in the digital sphere, which encourages creative collaborations between several people, without any limitation in space or time. Moral rights regimes are antithetical to these practices. This chapter will explore the moral rights challenges emerging from transformative culture. We will take the example of Creative Commons licenses and their interaction with internationally recognized moral rights. We conclude that the chilling effects of the legal uncertainty linked to moral rights enforcement could hurt copyright as a whole, but that moral rights can still constitute a strong defence mechanism against modern risks related to digital transformative creativity…(More)”.

The 2022 Edelman Trust Barometer


Edelman: “The world is failing to meet the unprecedented challenges of our time because it is ensnared in a vicious cycle of distrust. Four interlocking forces drive this cycle, thwarting progress on climate change, global pandemic management, racism, and mounting tensions between China and the U.S. Left unchecked, the following four forces, evident in the 2022 Edelman Trust Barometer, will undermine institutions and further destabilize society:

  • Government-media distrust spiral. Two institutions people rely on for truth are doing a dangerous tango of short-term mutual advantage, with exaggeration and division to gain clicks and votes.
  • Excessive reliance on business. Government failure has created an over-reliance on business to fill the void, a job that private enterprise was not designed to deliver.
  • Mass-class divide. The global pandemic has widened the fissure that surfaced in the wake of the Great Recession. High-income earners have become more trusting of institutions, while lower-income earners remain wary.
  • Failure of leadership. Classic societal leaders in government, the media and business have been discredited. Trust, once hierarchical, has become local and dispersed as people rely on my employer, my colleagues, my family. Coinciding with this upheaval is a collapse of trust within democracies and a trust surge within autocracies.

The media business model has become dependent on generating partisan outrage, while the political model has become dependent on exploiting it. Whatever short-term benefits either institution derives, it is a long-term catastrophe for society. Distrust is now society’s default emotion, with nearly 60 percent inclined to distrust…(More)”.

From Poisons to Antidotes: Algorithms as Democracy Boosters


Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything similar to what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making…(More)”.

What Works? Developing a global evidence base for public engagement


Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.

The key themes that emerged from the stakeholder interviews and workshops were that, for evidence about public engagement to help inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences; broaden out how ‘evidence’ is understood and generated; think strategically about how evidence affects and informs practice; and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates…(More)”.

Trove of unique health data sets could help AI predict medical conditions earlier


Madhumita Murgia at the Financial Times: “…Ziad Obermeyer, a physician and machine learning scientist at the University of California, Berkeley, launched Nightingale Open Science last month — a treasure trove of unique medical data sets, each curated around an unsolved medical mystery that artificial intelligence could help to solve.

The data sets, released after the project received $2m of funding from former Google chief executive Eric Schmidt, could help to train computer algorithms to predict medical conditions earlier, triage better and save lives.

The data include 40 terabytes of medical imagery, such as X-rays, electrocardiogram waveforms and pathology specimens, from patients with a range of conditions, including high-risk breast cancer, sudden cardiac arrest, fractures and Covid-19. Each image is labelled with the patient’s medical outcomes, such as the stage of breast cancer and whether it resulted in death, or whether a Covid patient needed a ventilator.

Obermeyer has made the data sets free to use and mainly worked with hospitals in the US and Taiwan to build them over two years. He plans to expand this to Kenya and Lebanon in the coming months to reflect as much medical diversity as possible.

“Nothing exists like it,” said Obermeyer, who announced the new project in December alongside colleagues at NeurIPS, the global academic conference for artificial intelligence. “What sets this apart from anything available online is the data sets are labelled with the ‘ground truth’, which means with what really happened to a patient and not just a doctor’s opinion.”…

The Nightingale data sets were among dozens proposed this year at NeurIPS.

Other projects included a speech data set of Mandarin and eight subdialects recorded by 27,000 speakers in 34 cities in China; the largest audio data set of Covid respiratory sounds, such as breathing, coughing and voice recordings, from more than 36,000 participants to help screen for the disease; and a data set of satellite images covering the entire country of South Africa from 2006 to 2017, divided and labelled by neighbourhood, to study the social effects of spatial apartheid.

Elaine Nsoesie, a computational epidemiologist at the Boston University School of Public Health, said new types of data could also help with studying the spread of diseases in diverse locations, as people from different cultures react differently to illnesses.

She said her grandmother in Cameroon, for example, might think differently than Americans do about health. “If someone had an influenza-like illness in Cameroon, they may be looking for traditional, herbal treatments or home remedies, compared to drugs or different home remedies in the US.”

Computer scientists Serena Yeung and Joaquin Vanschoren, who proposed that research to build new data sets should be exchanged at NeurIPS, pointed out that the vast majority of the AI community still cannot find good data sets to evaluate their algorithms. This meant that AI researchers were still turning to data that were potentially “plagued with bias”, they said. “There are no good models without good data.”…(More)”.

Deliberate Ignorance: Choosing Not to Know


Book edited by Ralph Hertwig and Christoph Engel: “The history of intellectual thought abounds with claims that knowledge is valued and sought, yet individuals and groups often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. When is this a virtue, when is it a vice, and what can be learned from formally modeling the underlying motives? On which normative grounds can it be judged? Which institutional interventions can promote or prevent it? In this book, psychologists, economists, historians, computer scientists, sociologists, philosophers, and legal scholars explore the scope of deliberate ignorance.

Drawing from multiple examples, including the right not to know in genetic testing, collective amnesia in transformational societies, blind orchestral auditions, and “don’t ask, don’t tell” policies, the contributors offer novel insights and outline avenues for future research into this elusive yet fascinating aspect of human nature…(More)”.

The new machinery of government: using machine technology in administrative decision-making


Report by New South Wales Ombudsman: “There are many situations in which government agencies could use appropriately designed machine technologies to assist in the exercise of their functions, in ways compatible with lawful and appropriate conduct. Indeed, in some instances machine technology may improve aspects of good administrative conduct, such as accuracy and consistency in decision-making, as well as mitigating the risk of individual human bias.

However, if machine technology is designed and used in a way that does not accord with administrative law and associated principles of good administrative practice, then its use could constitute or involve maladministration. It could also result in legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid.

The New South Wales Ombudsman was prompted to prepare this report after becoming aware of one agency (Revenue NSW) using machine technology for the performance of a discretionary statutory function (the garnisheeing of unpaid fine debts from individuals’ bank accounts) in a way that was having a significant impact on individuals, many of whom were already in situations of financial vulnerability.

The Ombudsman’s experience with Revenue NSW, and a scan of the government’s published policies on the use of artificial intelligence and other digital technologies, suggest that inadequate attention is being given to fundamental aspects of public law that are relevant to machine technology adoption…(More)”.

The Government of Emergency: Vital Systems, Expertise, and the Politics of Security


Book by Stephen J. Collier and Andrew Lakoff: “From pandemic disease, to the disasters associated with global warming, to cyberattacks, today we face an increasing array of catastrophic threats. It is striking that, despite the diversity of these threats, experts and officials approach them in common terms: as future events that threaten to disrupt the vital, vulnerable systems upon which modern life depends.

The Government of Emergency tells the story of how this now taken-for-granted way of understanding and managing emergencies arose. Amid the Great Depression, World War II, and the Cold War, an array of experts and officials working in obscure government offices developed a new understanding of the nation as a complex of vital, vulnerable systems. They invented technical and administrative devices to mitigate the nation’s vulnerability, and organized a distinctive form of emergency government that would make it possible to prepare for and manage potentially catastrophic events.

Through these conceptual and technical inventions, Stephen Collier and Andrew Lakoff argue, vulnerability was defined as a particular kind of problem, one that continues to structure the approach of experts, officials, and policymakers to future emergencies…(More)”.

Navigating Trust in Society


Report by Coeuraj: “This report provides empirical evidence of existing levels of trust among the US population with regard to institutions and philanthropy, all shaped during a time of deep polarization and a global pandemic.

The data come from two sources: first, a year-over-year analysis of institutional trust, as measured by Global Web Index USA from more than 20,000 respondents; and second, an ad hoc, nationally representative survey conducted by one of Coeuraj’s data partners, AudienceNet, in the two weeks immediately preceding the 2021 United Nations General Assembly. This report presents the core findings that emerged from both research initiatives…(More)”.