How spooks are turning to superforecasting in the Cosmic Bazaar


The Economist: “Every morning for the past year, a group of British civil servants, diplomats, police officers and spies have woken up, logged onto a slick website and offered their best guess as to whether China will invade Taiwan by a particular date. Or whether Arctic sea ice will retrench by a certain amount. Or how far covid-19 infection rates will fall. These imponderables are part of Cosmic Bazaar, a forecasting tournament created by the British government to improve its intelligence analysis.

Since the website was launched in April 2020, more than 10,000 forecasts have been made by 1,300 forecasters, from 41 government departments and several allied countries. The site has around 200 regular forecasters, who must use only publicly available information to tackle the 30-40 questions that are live at any time. Cosmic Bazaar represents the gamification of intelligence. Users are ranked by a single, brutally simple measure: the accuracy of their predictions.
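Tournaments in this tradition typically score probabilistic forecasts with the Brier score, the squared error between a stated probability and the 0/1 outcome. Whether Cosmic Bazaar uses exactly this metric is an assumption here, not a detail from the article; a minimal sketch:

```python
# Hedged sketch of tournament-style accuracy scoring. The Brier score is the
# standard metric in forecasting research; its use by Cosmic Bazaar is an
# assumption, not something the article confirms.
def brier(forecast_prob: float, outcome: int) -> float:
    """Squared error between a probability and a 0/1 outcome; lower is better."""
    return (forecast_prob - outcome) ** 2

# Two forecasters on the same resolved question (the event happened, outcome = 1).
print(brier(0.8, 1))  # 0.04: confident and right
print(brier(0.3, 1))  # 0.49: leaned the wrong way
```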

Forecasting tournaments like Cosmic Bazaar draw on a handful of basic ideas. One of them, as seen in this case, is the “wisdom of crowds”, a concept first illustrated by Francis Galton, a statistician, in 1907. Galton observed that in a contest to estimate the weight of an ox at a county fair, the median guess of nearly 800 people came to within 1% of the true figure.
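A simulated re-run of Galton’s contest makes the aggregation concrete (the guesses below are randomly generated; only the ox’s roughly 1,198 lb weight comes from Galton’s account):

```python
# Wisdom-of-crowds toy example: the median of many noisy individual guesses
# lands close to the truth. Simulated data, not Galton's original figures.
import numpy as np

rng = np.random.default_rng(42)
true_weight = 1198                           # pounds, the ox's actual weight
guesses = rng.normal(true_weight, 80, 800)   # ~800 fair-goers, each guessing noisily

crowd_estimate = np.median(guesses)
error_pct = abs(crowd_estimate - true_weight) / true_weight * 100
print(f"median guess: {crowd_estimate:.0f} lb ({error_pct:.2f}% off)")
```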

Crowdsourcing, as this idea is now called, has been augmented by more recent research into whether and how people make good judgments. Experiments by Philip Tetlock of the University of Pennsylvania, and others, show that experts’ predictions are often no better than chance. Yet some people, dubbed “superforecasters”, often do make accurate predictions, largely because of the way they form judgments—such as having a commitment to revising predictions in light of new data, and being aware of typical human biases. Dr Tetlock’s ideas received publicity last year when Dominic Cummings, then an adviser to Boris Johnson, Britain’s prime minister, endorsed his book and hired a controversial superforecaster to work at Mr Johnson’s office in Downing Street….(More)”.

Digital Inclusion is a Social Determinant of Health


Paper by Jill Castek et al.: “Efforts to improve digital literacies and internet access are valuable tools to reduce health disparities. The costs of equipping a person to use the internet are substantially lower than treating health conditions, and the benefits are multiple….

Those who do not have access to affordable broadband internet services, digital devices, digital literacies training, and technical support face numerous challenges when video-conferencing with their doctor, checking test results, filling prescriptions, and much more. Many individuals require significant support in developing the digital literacies needed to engage in telehealth, with the greatest need among older individuals, racial/ethnic minorities, and low-income communities. Taken in context, the costs of equipping a person to use the internet are substantially lower than treating health conditions, and the benefits are both persistent and significant.

“Super” Social Determinants of Health

Digital literacies and internet connectivity have been called the “super social determinants of health” because they encompass all other social determinants of health (SDOH). Information, supports, and services are increasingly, and sometimes exclusively, available only online.

The social determinants of health shown in Figure 1 (Digital Literacies & Access) include the neighborhood and physical environment, economic sustainability, the healthcare system, community and social context, food, and education. Together these factors impact an individual’s ability to access healthcare services, education, housing, transportation, and online banking, and to sustain relationships with family members and friends. Digital literacies and access impact all facets of a person’s life and affect behavioral and environmental outcomes such as shopping choices, housing, support systems, and health coverage….(More)”

[Figure 1: Digital Literacies & Access]

Power to the Public: The Promise of Public Interest Technology


Book by Tara Dawson McGuinness and Hana Schank: “As the speed and complexity of the world increases, governments and nonprofit organizations need new ways to effectively tackle the critical challenges of our time—from pandemics and global warming to social media warfare. In Power to the Public, Tara Dawson McGuinness and Hana Schank describe a revolutionary new approach—public interest technology—that has the potential to transform the way governments and nonprofits around the world solve problems. Through inspiring stories about successful projects ranging from a texting service for teenagers in crisis to a streamlined foster care system, the authors show how public interest technology can make the delivery of services to the public more effective and efficient.

At its heart, public interest technology means putting users at the center of the policymaking process, using data and metrics in a smart way, and running small experiments and pilot programs before scaling up. And while this approach may well involve the innovative use of digital technology, technology alone is no panacea—and some of the best solutions may even be decidedly low-tech.

Clear-eyed yet profoundly optimistic, Power to the Public presents a powerful blueprint for how government and nonprofits can help solve society’s most serious problems….(More)”.

Administrative Law in the Automated State


Paper by Cary Coglianese: “In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to government by algorithm will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one….(More)”.

Data Brokers Are a Threat to Democracy


Justin Sherman at Wired: “Enter the data brokerage industry, the multibillion dollar economy of selling consumers’ and citizens’ intimate details. Much of the privacy discourse has rightly pointed fingers at Facebook, Twitter, YouTube, and TikTok, which collect users’ information directly. But a far broader ecosystem of buying up, licensing, selling, and sharing data exists around those platforms. Data brokerage firms are middlemen of surveillance capitalism—purchasing, aggregating, and repackaging data from a variety of other companies, all with the aim of selling or further distributing it.

Data brokerage is a threat to democracy. Without robust national privacy safeguards, entire databases of citizen information are ready for purchase, whether to predatory loan companies, law enforcement agencies, or even malicious foreign actors. Federal privacy bills that don’t give sufficient attention to data brokerage will therefore fail to tackle an enormous portion of the data surveillance economy, and will leave civil rights, national security, and public-private boundaries vulnerable in the process.

Large data brokers—like Acxiom, CoreLogic, and Epsilon—tout the detail of their data on millions or even billions of people. CoreLogic, for instance, advertises its real estate and property information on 99.9 percent of the US population. Acxiom promotes 11,000-plus “data attributes,” from auto loan information to travel preferences, on 2.5 billion people (all to help brands connect with people “ethically,” it adds). This level of data collection and aggregation enables remarkably specific profiling.

Need to run ads targeting poor families in rural areas? Check out one data broker’s “Rural and Barely Making It” data set. Or how about racially profiling financial vulnerability? Buy another company’s “Ethnic Second-City Strugglers” data set. These are just some of the disturbing titles captured in a 2013 Senate report on the industry’s data products, which have only expanded since. Many other brokers advertise their ability to identify subgroups upon subgroups of individuals through criteria like race, gender, marital status, and income level, all sensitive characteristics that citizens likely didn’t know would end up in a database—let alone up for sale….(More)”.

Advancing data literacy in the post-pandemic world


Paper by Archita Misra (PARIS21): “The COVID-19 crisis presents a monumental opportunity to engender a widespread data culture in our societies. Since early 2020, the emergence of popular data sites like Worldometer has promoted interest and attention in data-driven tracking of the pandemic. “R values”, “flattening the curve” and “exponential increase” have seeped into everyday lexicon. Social media and news outlets have filled the public consciousness with trends, rankings and graphs throughout multiple waves of COVID-19.
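For readers who met these terms for the first time during the pandemic, a toy sketch (all numbers invented) of what an “R value” above 1 implies:

```python
# Toy illustration of "R values" and "exponential increase": with R > 1,
# each infection generation is R times larger than the last, so case counts
# grow exponentially. Both parameters below are invented for illustration.
R = 1.4                  # assumed effective reproduction number
generation_days = 5      # assumed days between infection generations
cases = 100.0

for gen in range(5):
    print(f"day {gen * generation_days:2d}: ~{cases:.0f} cases")
    cases *= R
```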

Yet, the crisis also reveals a critical lack of data literacy amongst citizens in many parts of the world. The lack of a data literate culture predates the pandemic. The supply of statistics and information has significantly outpaced the ability of lay citizens to make informed choices about their lives in the digital data age.

Today’s fragmented datafied information landscape is also susceptible to the pitfalls of misinformation, post-truth politics and societal polarisation – all of which demand a critical thinking lens towards data. There is an urgent need to develop data literacy at the level of citizens, organisations and society – such that all actors are empowered to navigate the complexity of modern data ecosystems.

The paper identifies three key take-aways. It is crucial to

  • forge a common language around data literacy
  • adopt a demand-driven and participatory approach to data literacy
  • move from ad-hoc programming towards sustained policy, investment and impact…(More)”.

Regulating Personal Data: Data Models and Digital Services Trade


Report by Martina Francesca Ferracane and Erik van der Marel: “While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second model based on conditional transfers and processing, and a third model based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data.

The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model they adhere to for the two components of data regulation (i.e. cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade compared with countries with different regulatory data models. The results show that sharing the open data model for cross-border data transfers is positively associated with trade in digital services, while sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country-pairs sharing the limited model, instead, face a double whammy: they show negative trade correlations across both components of data regulation. Robustness checks control for restrictions in digital services, the quality of digital infrastructure, as well as for the use of alternative data sources….(More)”.
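As a sketch of what such a gravity analysis might look like in code (synthetic data and variable names are ours, not the study’s):

```python
# Hedged sketch of a gravity-style regression on synthetic country-pair data.
# The shared-model dummies play the role of the study's regulatory-similarity
# variables; coefficients on them capture whether sharing a data model is
# associated with more (or less) digital services trade.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_trade": rng.normal(10, 2, n),           # log bilateral digital services trade
    "log_gdp_i": rng.normal(27, 1, n),           # log GDP, exporter
    "log_gdp_j": rng.normal(27, 1, n),           # log GDP, importer
    "log_dist": rng.normal(8, 0.5, n),           # log bilateral distance
    "same_open_model": rng.integers(0, 2, n),    # 1 if both countries use the open model
    "same_limited_model": rng.integers(0, 2, n), # 1 if both use the limited model
})

# OLS gravity equation with heteroskedasticity-robust standard errors.
model = smf.ols(
    "log_trade ~ log_gdp_i + log_gdp_j + log_dist"
    " + same_open_model + same_limited_model",
    data=df,
).fit(cov_type="HC1")
print(model.summary())
```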

Combining Racial Groups in Data Analysis Can Mask Important Differences in Communities


Blog by Jonathan Schwabish and Alice Feng: “Surveys, datasets, and published research often lump together racial and ethnic groups, which can erase the experiences of certain communities. Combining groups with different experiences can mask how specific groups and communities are faring and, in turn, affect how government funds are distributed, how services are provided, and how groups are perceived.

Large surveys that collect data on race and ethnicity are used to disburse government funds and services in a number of ways. The US Department of Housing and Urban Development, for instance, distributes millions of dollars annually to Native American tribes through the Indian Housing Block Grant. And statistics on race and ethnicity are used as evidence in employment discrimination lawsuits and to help determine whether banks are discriminating against people and communities of color.

Despite the potentially large effects these data can have, researchers don’t always disaggregate their analysis into more detailed racial groups. Many point to small sample sizes as a limitation for including more race and ethnicity categories in their analysis, but efforts to gather more specific data and disaggregate available survey results are critical to creating better policy for everyone.

To illustrate how aggregating racial groups can mask important variation, we looked at the 2019 poverty rate across 139 detailed race categories in the Census Bureau’s annual American Community Survey (ACS). The ACS provides information that helps determine how more than $675 billion in government funds is distributed each year.

The official poverty rate in the United States stood at 10.5 percent in 2019, with significant variation across racial and ethnic groups. The primary question in the ACS concerning race includes 15 separate checkboxes, with space to print additional names or races for some options (a separate question refers to Hispanic or Latino origin).

[Image: screenshot of the American Community Survey’s race question]

Although the survey offers ample latitude for interviewees to respond with their race, researchers have a tendency to aggregate racial categories. People who identify as Asian or Pacific Islander (API), for example, are often combined in economic analyses.

This aggregation can mask variation within racial or ethnic categories. As an example, one analysis that used the ACS showed 11 percent of children in the API group are in poverty, relative to 18 percent of the overall population. But that estimate could understate the poverty rate among children who identify as Pacific Islanders and could overstate the poverty rate among children who identify as Asian, which itself is a broad grouping that encompasses many different communities with various experiences. Similar aggregating can be found across the economic literature, including on education, immigration (PDF), and wealth….(More)”.
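A small pandas sketch (weights and poverty flags invented, not ACS estimates) shows how a pooled API figure can hide exactly this kind of gap:

```python
# Illustrative only: pooling "Asian or Pacific Islander" into one group can
# mask very different subgroup poverty rates. All numbers are invented.
import pandas as pd

children = pd.DataFrame({
    "race_detail": ["Asian", "Asian", "Pacific Islander", "Pacific Islander"],
    "in_poverty":  [True,    False,   True,               False],
    "weight":      [900,     8100,    250,                750],  # survey weights
})

def poverty_rate(g):
    """Weighted share of a group flagged as in poverty."""
    return (g["weight"] * g["in_poverty"]).sum() / g["weight"].sum()

# Aggregated: one rate for the combined API group (11.5% here).
print("API combined:", poverty_rate(children))

# Disaggregated: separate rates (10% vs 25%) reveal the gap the pooled figure masks.
print(children.groupby("race_detail").apply(poverty_rate))
```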

Averting Catastrophe


Book by Cass Sunstein on “Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds…The world is increasingly confronted with new challenges related to climate change, globalization, disease, and technology. Governments are faced with having to decide how much risk is worth taking, how much destruction and death can be tolerated, and how much money should be invested in the hopes of avoiding catastrophe. Lacking full information, should decision-makers focus on avoiding the most catastrophic outcomes? When should extreme measures be taken to prevent as much destruction as possible?

Averting Catastrophe explores how governments ought to make decisions in times of imminent disaster. Cass R. Sunstein argues that using the “maximin rule,” which calls for choosing the approach that eliminates the worst of the worst-case scenarios, may be necessary when public officials lack important information, and when the worst-case scenario is too disastrous to contemplate. He underscores this argument by emphasizing the reality of “Knightian uncertainty,” found in circumstances in which it is not possible to assign probabilities to various outcomes. Sunstein brings foundational issues in decision theory into close contact with real problems in regulation, law, and daily life, and considers other potential future risks. At once an approachable introduction to decision theory and a provocative argument for how governments ought to handle risk, Averting Catastrophe offers a definitive path forward in a world rife with uncertainty….(More)”.
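In concrete terms (payoff numbers invented for illustration), the maximin rule ranks each option by its worst-case outcome and picks the option whose worst case is least bad:

```python
# Toy maximin example. Each action maps to its payoffs across possible states
# of the world; under Knightian uncertainty we cannot weight the states by
# probability, so maximin compares actions only by their worst-case payoff.
payoffs = {
    "do nothing":        [0, -100],   # fine if no disaster, terrible if one hits
    "moderate measures": [-10, -40],
    "extreme measures":  [-30, -30],
}

best = max(payoffs, key=lambda action: min(payoffs[action]))
print(best)  # "extreme measures": its worst case (-30) beats -100 and -40
```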

Democratic institutions and prosperity: The benefits of an open society


Paper by the European Parliamentary Research Service: “The ongoing structural transformation and the rapid spread of the technologies of the fourth industrial revolution are challenging current democratic institutions and their established forms of governance and regulation. At the same time, these changes offer vast opportunities to enhance, strengthen and expand the existing democratic framework to reflect a more complex and interdependent world. This process has already begun in many democratic societies but further progress is needed.
Examining these issues involves looking at the impact of ongoing complex and simultaneous changes on the theoretical framework underpinning beneficial democratic regulation. More specifically, combining economic, legal and political perspectives, it is necessary to explore how some adaptations to existing democratic institutions could further improve the functioning of democracies while also delivering additional economic benefits to citizens and society as a whole. The introduction of a series of promising new tools could offer a potential way to support democratic decision-makers in regulating complexity and tackling ongoing and future challenges. The first of these tools is to use strategic foresight to anticipate and control future events; the second is collective intelligence, following the idea that citizens are collectively capable of providing better solutions to regulatory problems than are public administrations; the third and fourth are concerned with design-thinking and algorithmic regulation respectively. Design-based approaches are credited with opening up innovative options for policy-makers, while algorithms hold the promise of enabling decision-making to handle complex issues while remaining participatory….(More)”.