Citizen science is booming during the pandemic


Sigal Samuel at Vox: “…The pandemic has driven a huge increase in participation in citizen science, where people without specialized training collect data out in the world or perform simple analyses of data online to help out scientists.

Stuck at home with time on their hands, millions of amateurs around the world are gathering information on everything from birds to plants to Covid-19 at the request of institutional researchers. And while quarantine is mostly a nightmare for us, it’s been a great accelerant for science.

Early in the pandemic, a firehose of data started gushing forth on citizen science platforms like Zooniverse and SciStarter, where scientists ask the public to analyze their data online. It’s a form of crowdsourcing that has the added bonus of giving volunteers a real sense of community; each project has a discussion forum where participants can pose questions to each other (and often to the scientists behind the projects) and forge friendly connections.

“There’s a wonderful project called Rainfall Rescue that’s transcribing historical weather records. It’s a climate change project to understand how weather has changed over the past few centuries,” Laura Trouille, vice president of citizen science at the Adler Planetarium in Chicago and co-lead of Zooniverse, told me. “They uploaded a dataset of 10,000 weather logs that needed transcribing — and that was completed in one day!”

Some Zooniverse projects, like Snapshot Safari, ask participants to classify animals in images from wildlife cameras. That project saw classifications jump from 25,000 to 200,000 per day in the initial days of lockdown. And across all its projects, Zooniverse reported that 200,000 participants contributed more than 5 million classifications of images in one week alone — the equivalent of 48 years of research. Although participation has slowed a bit since the spring, it’s still four times what it was pre-pandemic.

Many people are particularly eager to help tackle Covid-19, and scientists have harnessed their energy. Carnegie Mellon University’s Roni Rosenfeld set up a platform where volunteers can help artificial intelligence predict the spread of the coronavirus, even if they know nothing about AI. Researchers at the University of Washington invited people to contribute to Covid-19 drug discovery using a computer game called Foldit; they experimented with designing proteins that could attach to the virus that causes Covid-19 and prevent it from entering cells….(More)”.

In AI We Trust: Power, Illusion and Control of Predictive Algorithms


Book by Helga Nowotny: “One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.

At the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.

As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how their predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future….(More)”.

Leave No Migrant Behind: The 2030 Agenda and Data Disaggregation


Guide by the International Organization for Migration (IOM): “To date, disaggregation of global development data by migratory status remains low. Migrants are largely invisible in official SDG data. As the global community approaches 2030, very little is known about the impact of the 2030 Agenda on migrants. Despite a growing focus worldwide on data disaggregation, namely the breaking down of data into smaller sub-categories, there is a lack of practical guidance on the topic that can be tailored to the individual needs and capacities of countries.

Developed by IOM’s Global Migration Data Analysis Centre (GMDAC), the guide titled ‘Leave No Migrant Behind: The 2030 Agenda and Data Disaggregation’ centres on nine SDGs focusing on hunger, education, and gender equality among others. The document is the first of its kind, in that it seeks to address a range of different categorization interests and needs related to international migrants and suggests practical steps that practitioners can tailor to best fit their context…The guide also highlights the key role disaggregation plays in understanding the many positive links between migration and the SDGs, highlighting migrants’ contributions to the 2030 Agenda.

The guide outlines key steps for actors to plan and implement initiatives by looking at sex, gender, age and disability, in addition to migratory status. These steps include raising awareness, identifying priority indicators, conducting data mapping, and more….Read more about the importance of data disaggregation for SDG indicators here….(More)”
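The kind of disaggregation the guide describes can be sketched in a few lines of code. This is a hypothetical illustration, not IOM's methodology: the dataset, column names, and indicator below are all made up for the example.

```python
import pandas as pd

# Hypothetical survey microdata: disaggregating an SDG-style schooling
# indicator by migratory status and sex (all names and values are illustrative)
df = pd.DataFrame({
    "migratory_status": ["migrant", "non-migrant", "migrant", "non-migrant", "migrant"],
    "sex": ["F", "M", "M", "F", "F"],
    "in_school": [1, 1, 0, 1, 1],
})

# "Breaking down" the aggregate indicator into smaller sub-categories,
# so that migrants become visible in the resulting statistics
rates = df.groupby(["migratory_status", "sex"])["in_school"].mean()
print(rates)
```

The same groupby pattern extends to age bands and disability status, which is essentially what the guide's "identifying priority indicators" and "data mapping" steps prepare countries to do.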

How spooks are turning to superforecasting in the Cosmic Bazaar


The Economist: “Every morning for the past year, a group of British civil servants, diplomats, police officers and spies have woken up, logged onto a slick website and offered their best guess as to whether China will invade Taiwan by a particular date. Or whether Arctic sea ice will retrench by a certain amount. Or how far covid-19 infection rates will fall. These imponderables are part of Cosmic Bazaar, a forecasting tournament created by the British government to improve its intelligence analysis.

Since the website was launched in April 2020, more than 10,000 forecasts have been made by 1,300 forecasters, from 41 government departments and several allied countries. The site has around 200 regular forecasters, who must use only publicly available information to tackle the 30-40 questions that are live at any time. Cosmic Bazaar represents the gamification of intelligence. Users are ranked by a single, brutally simple measure: the accuracy of their predictions.
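The article does not say which scoring rule Cosmic Bazaar uses to rank users, but forecasting tournaments commonly use the Brier score. A minimal sketch, with a made-up forecaster's track record:

```python
# The Brier score: mean squared difference between probability forecasts
# and binary outcomes. 0 is perfect; always guessing 50% scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities assigned to events that did (1) or did not (0) occur
forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes = [1, 0, 1, 0]
print(brier_score(forecasts, outcomes))  # 0.0375
```

Because the score is "brutally simple" in exactly the sense the article describes, a forecaster cannot game it: the only way to improve is to assign well-calibrated probabilities.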

Forecasting tournaments like Cosmic Bazaar draw on a handful of basic ideas. One of them, as seen in this case, is the “wisdom of crowds”, a concept first illustrated by Francis Galton, a statistician, in 1907. Galton observed that in a contest to estimate the weight of an ox at a county fair, the median guess of nearly 800 people came within 1% of the true figure.
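Galton's observation reduces to taking a median. A toy sketch (the guesses and the true weight below are made up, not Galton's actual 1907 data):

```python
import statistics

# Hypothetical crowd guesses at an ox's weight, in pounds, and an assumed true weight
guesses = [1100, 1150, 1180, 1190, 1200, 1205, 1215, 1230, 1250, 1260]
true_weight = 1198

# The "wisdom of crowds": the median guess lands close to the truth
# even though individual guesses scatter widely
crowd_estimate = statistics.median(guesses)
error_pct = abs(crowd_estimate - true_weight) / true_weight * 100
print(f"crowd median = {crowd_estimate}, error = {error_pct:.2f}%")
```

The median is preferred over the mean here because it is robust to a few wildly wrong guesses, which real crowds reliably produce.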

Crowdsourcing, as this idea is now called, has been augmented by more recent research into whether and how people make good judgments. Experiments by Philip Tetlock of the University of Pennsylvania, and others, show that experts’ predictions are often no better than chance. Yet some people, dubbed “superforecasters”, often do make accurate predictions, largely because of the way they form judgments—such as having a commitment to revising predictions in light of new data, and being aware of typical human biases. Dr Tetlock’s ideas received publicity last year when Dominic Cummings, then an adviser to Boris Johnson, Britain’s prime minister, endorsed his book and hired a controversial superforecaster to work at Mr Johnson’s office in Downing Street….(More)”.

Digital Inclusion is a Social Determinant of Health


Paper by Jill Castek et al: “Efforts to improve digital literacies and internet access are valuable tools to reduce health disparities. The costs of equipping a person to use the internet are substantially lower than treating health conditions, and the benefits are multiple….

Those who do not have access to affordable broadband internet services, digital devices, digital literacies training, and technical support face numerous challenges video-conferencing with their doctor, checking test results, filling prescriptions, and much more. Many individuals require significant support developing the digital literacies needed to engage in telehealth, with the greatest need among older individuals, racial/ethnic minorities, and low-income communities. Taken in context, the costs of equipping a person to use the internet are substantially lower than treating health conditions, and the benefits are both persistent and significant.

“Super” Social Determinants of Health

Digital literacies and internet connectivity have been called the “super social determinants of health” because they encompass all other social determinants of health (SDOH). Access to information, supports, and services is increasingly, and sometimes exclusively, available only online.

The social determinants of health shown in Figure 1, Digital Literacies & Access, include the neighborhood and physical environment, economic sustainability, the healthcare system, community and social context, food, and education. Together these factors impact an individual’s ability to access healthcare services, education, housing, transportation, and online banking, and to sustain relationships with family members and friends. Digital literacies and access impact all facets of a person’s life and affect behavioral and environmental outcomes such as shopping choices, housing, support systems, and health coverage….(More)”

Figure 1. Digital Literacies & Access. 

Power to the Public: The Promise of Public Interest Technology


Book by Tara Dawson McGuinness and Hana Schank: “As the speed and complexity of the world increases, governments and nonprofit organizations need new ways to effectively tackle the critical challenges of our time—from pandemics and global warming to social media warfare. In Power to the Public, Tara Dawson McGuinness and Hana Schank describe a revolutionary new approach—public interest technology—that has the potential to transform the way governments and nonprofits around the world solve problems. Through inspiring stories about successful projects ranging from a texting service for teenagers in crisis to a streamlined foster care system, the authors show how public interest technology can make the delivery of services to the public more effective and efficient.

At its heart, public interest technology means putting users at the center of the policymaking process, using data and metrics in a smart way, and running small experiments and pilot programs before scaling up. And while this approach may well involve the innovative use of digital technology, technology alone is no panacea—and some of the best solutions may even be decidedly low-tech.

Clear-eyed yet profoundly optimistic, Power to the Public presents a powerful blueprint for how government and nonprofits can help solve society’s most serious problems….(More)”.

Administrative Law in the Automated State


Paper by Cary Coglianese: “In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to government by algorithm will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one….(More)”.

Data Brokers Are a Threat to Democracy


Justin Sherman at Wired: “Enter the data brokerage industry, the multibillion dollar economy of selling consumers’ and citizens’ intimate details. Much of the privacy discourse has rightly pointed fingers at Facebook, Twitter, YouTube, and TikTok, which collect users’ information directly. But a far broader ecosystem of buying up, licensing, selling, and sharing data exists around those platforms. Data brokerage firms are middlemen of surveillance capitalism—purchasing, aggregating, and repackaging data from a variety of other companies, all with the aim of selling or further distributing it.

Data brokerage is a threat to democracy. Without robust national privacy safeguards, entire databases of citizen information are ready for purchase, whether to predatory loan companies, law enforcement agencies, or even malicious foreign actors. Federal privacy bills that don’t give sufficient attention to data brokerage will therefore fail to tackle an enormous portion of the data surveillance economy, and will leave civil rights, national security, and public-private boundaries vulnerable in the process.

Large data brokers—like Acxiom, CoreLogic, and Epsilon—tout the detail of their data on millions or even billions of people. CoreLogic, for instance, advertises its real estate and property information on 99.9 percent of the US population. Acxiom promotes 11,000-plus “data attributes,” from auto loan information to travel preferences, on 2.5 billion people (all to help brands connect with people “ethically,” it adds). This level of data collection and aggregation enables remarkably specific profiling.

Need to run ads targeting poor families in rural areas? Check out one data broker’s “Rural and Barely Making It” data set. Or how about racially profiling financial vulnerability? Buy another company’s “Ethnic Second-City Strugglers” data set. These are just some of the disturbing titles captured in a 2013 Senate report on the industry’s data products, which have only expanded since. Many other brokers advertise their ability to identify subgroups upon subgroups of individuals through criteria like race, gender, marital status, and income level, all sensitive characteristics that citizens likely didn’t know would end up in a database—let alone up for sale….(More)”.

Advancing data literacy in the post-pandemic world


Paper by Archita Misra (PARIS21): “The COVID-19 crisis presents a monumental opportunity to engender a widespread data culture in our societies. Since early 2020, the emergence of popular data sites like Worldometer has promoted interest and attention in data-driven tracking of the pandemic. “R values”, “flattening the curve” and “exponential increase” have seeped into everyday lexicon. Social media and news outlets have filled the public consciousness with trends, rankings and graphs throughout multiple waves of COVID-19.

Yet, the crisis also reveals a critical lack of data literacy amongst citizens in many parts of the world. The lack of a data literate culture predates the pandemic. The supply of statistics and information has significantly outpaced the ability of lay citizens to make informed choices about their lives in the digital data age.

Today’s fragmented datafied information landscape is also susceptible to the pitfalls of misinformation, post-truth politics and societal polarisation – all of which demand a critical thinking lens towards data. There is an urgent need to develop data literacy at the level of citizens, organisations and society – such that all actors are empowered to navigate the complexity of modern data ecosystems.

The paper identifies three key take-aways. It is crucial to

  • forge a common language around data literacy
  • adopt a demand-driven and participatory approach to doing data literacy
  • move from ad-hoc programming towards sustained policy, investment and impact…(More)”.

Regulating Personal Data : Data Models and Digital Services Trade


Report by Martina Francesca Ferracane and Erik van der Marel: “While regulations on personal data diverge widely between countries, it is nonetheless possible to identify three main models based on their distinctive features: one model based on open transfers and processing of data, a second model based on conditional transfers and processing, and a third model based on limited transfers and processing. These three data models have become a reference for many other countries when defining their rules on the cross-border transfer and domestic processing of personal data.

The study reviews their main characteristics and systematically identifies, for 116 countries worldwide, which model each adheres to for the two components of data regulation (i.e. cross-border transfers and domestic processing of data). In a second step, using gravity analysis, the study estimates whether countries sharing the same data model exhibit higher or lower digital services trade than countries with different regulatory data models. The results show that sharing the open data model for cross-border data transfers is positively associated with trade in digital services, and that sharing the conditional model for domestic data processing is also positively correlated with trade in digital services. Country-pairs sharing the limited model, instead, suffer a double whammy: they show negative trade correlations across both components of data regulation. Robustness checks control for restrictions in digital services, the quality of digital infrastructure, as well as for the use of alternative data sources….(More)”.
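A gravity analysis of this kind boils down to regressing (log) bilateral trade on a dummy for whether a country pair shares a data model, alongside standard gravity controls. The sketch below uses simulated data and illustrative variable names, not the authors' actual specification or dataset:

```python
import numpy as np

# Simulated country-pair data: does sharing the "open" data model
# correlate with higher digital services trade?
rng = np.random.default_rng(0)
n = 500
same_open_model = rng.integers(0, 2, n)   # 1 if the pair shares the open model
log_gdp_product = rng.normal(10, 1, n)    # standard gravity control (log GDP product)

# Generate trade with a built-in positive association of 0.5 for the dummy
log_trade = 0.5 * same_open_model + 0.8 * log_gdp_product + rng.normal(0, 1, n)

# OLS via least squares: the coefficient on the dummy is the trade association
X = np.column_stack([np.ones(n), same_open_model, log_gdp_product])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(beta)  # beta[1] should recover a value near the simulated 0.5
```

In the actual study the regression additionally includes the pair-level controls mentioned above (services restrictions, digital infrastructure quality), but the logic of reading the sign on the shared-model dummy is the same.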