A Worldwide Assessment of COVID-19 Pandemic-Policy Fatigue


Paper by Anna Petherick et al: “As the COVID-19 pandemic lingers, signs of “pandemic-policy fatigue” have raised worldwide concerns. But the phenomenon itself is yet to be thoroughly defined, documented, and delved into. Based on self-reported behaviours from samples of 238,797 respondents, representative of the populations of 14 countries, as well as global mobility and policy data, we systematically examine the prevalence and shape of people’s alleged gradual reduction in adherence to governments’ protective-behaviour policies against COVID-19. Our results show that from March through December 2020, pandemic-policy fatigue was empirically meaningful and geographically widespread. It emerged for high-cost and sensitising behaviours (physical distancing) but not for low-cost and habituating ones (mask wearing), and was less intense among retired people, people with chronic diseases, and in countries with high interpersonal trust. Particularly due to fatigue reversal patterns in high- and upper-middle-income countries, we observe an arch rather than a monotonic decline in global pandemic-policy fatigue….(More)”.

Give more data, awareness and control to individual citizens, and they will help COVID-19 containment


Paper by Mirco Nanni et al: “The rapid dynamics of COVID-19 calls for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in “phase 2” of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where data sensed by the app are all sent to a nation-wide server, raises concerns about citizens’ privacy and needlessly strong digital surveillance, thus alerting us to the need to minimize personal data collection and to avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens’ “personal data stores”, to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering about infected people in a privacy-preserving fashion; and, in turn, this enables both contact tracing and the early detection of outbreak hotspots on a more finely grained geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is twofold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and allow the user to share spatio-temporal aggregates—if and when they want and for specific aims—with health authorities, for instance. Second, we favour a longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good in the measure they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society….(More)”.
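
The decentralized design the authors describe lends itself to a simple illustration. The sketch below is not the paper’s implementation — the class and method names are hypothetical — but it captures the core idea: contact and location events stay on the device, and only coarse spatio-temporal aggregates are released, voluntarily, after a positive test.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class PersonalDataStore:
    """On-device store (hypothetical): raw events never leave the phone."""
    contacts: list = field(default_factory=list)   # (timestamp, rotating anonymous contact id)
    locations: list = field(default_factory=list)  # (timestamp, coarse grid cell / geohash)

    def record_contact(self, when: datetime, contact_id: str) -> None:
        self.contacts.append((when, contact_id))

    def record_location(self, when: datetime, cell: str) -> None:
        self.locations.append((when, cell))

    def share_after_positive_test(self, tested_positive: bool, days: int = 14):
        """Release coarse spatio-temporal aggregates only on a positive test and with
        consent; per-cell counts support hotspot detection but are too coarse to
        reconstruct an individual trajectory."""
        if not tested_positive:
            return None
        cutoff = datetime.now() - timedelta(days=days)
        recent_cells = [cell for when, cell in self.locations if when >= cutoff]
        return dict(Counter(recent_cells))


# Example: everything stays local until the user tests positive and opts to share.
store = PersonalDataStore()
store.record_location(datetime.now(), "u0v9")           # hypothetical coarse cell id
store.record_contact(datetime.now(), "ephemeral-id-42")
print(store.share_after_positive_test(tested_positive=True))
```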

From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm


Paper by Thea Snow at Data & Policy: “Algorithmic decision tools (ADTs) are being introduced into public sector organizations to support more accurate and consistent decision-making. Whether they succeed turns, in large part, on how administrators use these tools. This is one of the first empirical studies to explore how ADTs are being used by Street Level Bureaucrats (SLBs). The author develops an original conceptual framework and uses in-depth interviews to explore whether SLBs are ignoring ADTs (algorithm aversion); deferring to ADTs (automation bias); or using ADTs together with their own judgment (an approach the author calls “artificing”). Interviews reveal that artificing is the most common use-type, followed by aversion, while deference is rare. Five conditions appear to influence how practitioners use ADTs: (a) understanding of the tool (b) perception of human judgment (c) seeing value in the tool (d) being offered opportunities to modify the tool (e) alignment of tool with expectations….(More)”.

The Coup We Are Not Talking About


Shoshana Zuboff in the New York Times: “Two decades ago, the American government left democracy’s front door open to California’s fledgling internet companies, a cozy fire lit in welcome. In the years that followed, a surveillance society flourished in those rooms, a social vision born in the distinct but reciprocal needs of public intelligence agencies and private internet companies, both spellbound by a dream of total information awareness. Twenty years later, the fire has jumped the screen, and on Jan. 6, it threatened to burn down democracy’s house.

I have spent exactly 42 years studying the rise of the digital as an economic force driving our transformation into an information civilization. Over the last two decades, I’ve observed the consequences of this surprising political-economic fraternity as those young companies morphed into surveillance empires powered by global architectures of behavioral monitoring, analysis, targeting and prediction that I have called surveillance capitalism. On the strength of their surveillance capabilities and for the sake of their surveillance profits, the new empires engineered a fundamentally anti-democratic epistemic coup marked by unprecedented concentrations of knowledge about us and the unaccountable power that accrues to such knowledge.

In an information civilization, societies are defined by questions of knowledge — how it is distributed, the authority that governs its distribution and the power that protects that authority. Who knows? Who decides who knows? Who decides who decides who knows? Surveillance capitalists now hold the answers to each question, though we never elected them to govern. This is the essence of the epistemic coup. They claim the authority to decide who knows by asserting ownership rights over our personal information and defend that authority with the power to control critical information systems and infrastructures….(More)”.

Using “Big Data” to forecast migration


Blog Post by Jasper Tjaden, Andres Arau, Muertizha Nuermaimaiti, Imge Cetin, Eduardo Acostamadiedo, Marzia Rango: Act 1 — High Expectations

“Data is the new oil,” they say. ‘Big Data’ is even bigger than that. The “data revolution” will contribute to solving societies’ problems and help governments adopt better policies and run more effective programs. In the migration field, digital trace data are seen as a potentially powerful tool to improve migration management processes (visa applications, asylum decisions and geographic allocation of asylum seekers, facilitating integration, “smart borders”, etc.).1

Forecasting migration is one particular area where big data seems to excite data nerds (like us) and policymakers alike. If there is one way big data has already made a difference, it is its ability to bring different actors together — data scientists, business people and policy makers — to sit through countless slides with numbers, tables and graphs. Traditional migration data sources, like censuses, administrative data and surveys, have never quite managed to generate the same level of excitement.

Many EU countries are currently heavily investing in new ways to forecast migration. Relatively large numbers of asylum seekers in 2014, 2015 and 2016 strained the capacity of many EU governments. Better forecasting tools are meant to help governments prepare in advance.

In a recent European Migration Network study, 10 out of the 22 EU governments surveyed said they make use of forecasting methods, many using open source data for “early warning and risk analysis” purposes. The 2020 European Migration Network conference was dedicated entirely to the theme of forecasting migration, hosting more than 15 expert presentations on the topic. The recently proposed EU Pact on Migration and Asylum outlines a “Migration Preparedness and Crisis Blueprint” which “should provide timely and adequate information in order to establish the updated migration situational awareness and provide for early warning/forecasting, as well as increase resilience to efficiently deal with any type of migration crisis.” (p. 4) The European Commission is currently finalizing a feasibility study on the use of artificial intelligence for predicting migration to the EU; Frontex — the EU Border Agency — is scaling up efforts to forecast irregular border crossings; EASO — the European Asylum Support Office — is devising a composite “push-factor index” and experimenting with forecasting asylum-related migration flows using machine learning and data at scale. In Fall 2020, during Germany’s EU Council Presidency, the German Interior Ministry organized a workshop series around Migration 4.0 highlighting the benefits of various ways to “digitalize” migration management. At the same time, the EU is investing substantial resources in migration forecasting research under its Horizon 2020 programme, including QuantMig, ITFLOWS, and HumMingBird.
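
As a rough illustration of what “forecasting asylum-related migration flows using machine learning and data at scale” can look like in its simplest form, the sketch below fits an ordinary least-squares model on hypothetical monthly application counts plus a lagged digital-trace signal. It is not any agency’s actual model; all numbers and variable names are invented for illustration.

```python
import numpy as np

# Hypothetical monthly asylum-application counts and a "digital trace" signal
# (e.g., normalized search-engine interest for a destination country).
applications = np.array([1200, 1350, 1500, 1800, 2400, 3100,
                         2900, 2600, 2300, 2100, 1900, 1850], dtype=float)
search_interest = np.array([0.30, 0.34, 0.41, 0.55, 0.72, 0.88,
                            0.80, 0.69, 0.58, 0.50, 0.45, 0.44])

# Design matrix: intercept, last month's applications, last month's search interest.
X = np.column_stack([np.ones(len(applications) - 1),
                     applications[:-1],
                     search_interest[:-1]])
y = applications[1:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares

# One-step-ahead forecast for the next month.
next_month = coef @ np.array([1.0, applications[-1], search_interest[-1]])
print(f"Forecast for next month: {next_month:.0f} applications")
```

Real systems combine many more sources (administrative data, mobility data, conflict indicators) and far more sophisticated models, but the basic pattern — lagged signals in, a forecast out — is the same.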

Is all this excitement warranted?

Yes, it is….(More)” See also: Big Data for Migration Alliance

Ten computer codes that transformed science


Jeffrey M. Perkel at Nature: “From Fortran to arXiv.org, these advances in programming and platforms sent biology, climate science and physics into warp speed….In 2019, the Event Horizon Telescope team gave the world the first glimpse of what a black hole actually looks like. But the image of a glowing, ring-shaped object that the group unveiled wasn’t a conventional photograph. It was computed — a mathematical transformation of data captured by radio telescopes in the United States, Mexico, Chile, Spain and the South Pole1. The team released the programming code it used to accomplish that feat alongside the articles that documented its findings, so the scientific community could see — and build on — what it had done.
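
The ring image was computed rather than photographed: an interferometer samples the Fourier transform of the sky sparsely, and software must turn those samples back into a picture. The toy sketch below is not the EHT pipeline — real imaging adds calibration and regularized reconstruction — but it conveys the idea with a synthetic ring and a random 5% sample of its Fourier coefficients.

```python
import numpy as np

# Toy aperture-synthesis illustration: a sparse sample of Fourier coefficients
# is all the "telescope" measures; the image has to be computed from it.
n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
radius = np.hypot(xx, yy)
sky = ((radius > 18) & (radius < 26)).astype(float)  # a hypothetical ring-shaped source

visibilities = np.fft.fftshift(np.fft.fft2(sky))      # what the array "measures"

# Keep only a sparse, random subset of Fourier samples, mimicking the
# incomplete coverage of a handful of telescopes on Earth.
rng = np.random.default_rng(0)
mask = rng.random(visibilities.shape) < 0.05
sampled = np.where(mask, visibilities, 0)

# The "dirty image": a direct inverse transform of the sparse data.
dirty_image = np.abs(np.fft.ifft2(np.fft.ifftshift(sampled)))
print("Peak brightness in reconstructed image:", round(float(dirty_image.max()), 3))
```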

It’s an increasingly common pattern. From astronomy to zoology, behind every great scientific finding of the modern age, there is a computer. Michael Levitt, a computational biologist at Stanford University in California who won a share of the 2013 Nobel Prize in Chemistry for his work on computational strategies for modelling chemical structure, notes that today’s laptops have about 10,000 times the memory and clock speed that his lab-built computer had in 1967, when he began his prizewinning work. “We really do have quite phenomenal amounts of computing at our hands today,” he says. “Trouble is, it still requires thinking.”

Enter the scientist-coder. A powerful computer is useless without software capable of tackling research questions — and researchers who know how to write it and use it. “Research is now fundamentally connected to software,” says Neil Chue Hong, director of the Software Sustainability Institute, headquartered in Edinburgh, UK, an organization dedicated to improving the development and use of software in science. “It permeates every aspect of the conduct of research.”

Scientific discoveries rightly get top billing in the media. But Nature this week looks behind the scenes, at the key pieces of code that have transformed research over the past few decades.

Although no list like this can be definitive, we polled dozens of researchers over the past year to develop a diverse line-up of ten software tools that have had a big impact on the world of science. You can weigh in on our choices at the end of the story….(More)”.

Facebook Data for Good


Foreword by Sheryl Sandberg: “When Facebook launched the Data for Good program in 2017, we never imagined it would play a role so soon in response to a truly global emergency. The COVID-19 pandemic is not just a public health crisis, but also a social and economic one. It has caused hardship in every part of the world, but its impact hasn’t been felt equally. It has hit women and the most disadvantaged communities the hardest – something this work has helped shine a light on.

In response to the pandemic, Facebook has been part of an unprecedented collaboration between technology companies, the public sector, universities, nonprofits and others. Our partners operate in some of the most challenging environments in the world, where lengthy analysis and debate is often a luxury they don’t have. The policies that govern delivery of vaccines, masks, and financial support can mean the difference between life and death. By sharing tools that provide real-time insights, Facebook can make decision-making on the ground just a little bit easier and more effective.

This report highlights some of the ways Facebook data – shared in a way that protects the privacy of individuals – assisted the response efforts to the pandemic and other major crises in 2020. I hope the examples included help illustrate what successful data sharing projects can look like, and how future projects can be improved. Above all, I hope we can continue to work together in 2021 and beyond to save lives and mitigate the damage caused by the pandemic and any crises that may follow….(More)”.

Enabling the future of academic research with the Twitter API


Twitter Developer Blog: “When we introduced the next generation of the Twitter API in July 2020, we also shared our plans to invest in the success of the academic research community with tailored solutions that better serve their goals. Today, we’re excited to launch the Academic Research product track on the new Twitter API. 

Why we’re launching this & how we got here

Since the Twitter API was first introduced in 2006, academic researchers have used data from the public conversation to study topics as diverse as the conversation on Twitter itself – from state-backed efforts to disrupt the public conversation to floods and climate change, from attitudes and perceptions about COVID-19 to efforts to promote healthy conversation online. Today, academic researchers are one of the largest groups of people using the Twitter API. 

Our developer platform hasn’t always made it easy for researchers to access the data they need, and many have had to rely on their own resourcefulness to find the right information. Despite this, for over a decade, academic researchers have used Twitter data for discoveries and innovations that help make the world a better place.

Over the past couple of years, we’ve taken iterative steps to improve the experience for researchers, like when we launched a webpage dedicated to Academic Research, and updated our Twitter Developer Policy to make it easier to validate or reproduce others’ research using Twitter data.

We’ve also made improvements to help academic researchers use Twitter data to advance their disciplines, answer urgent questions during crises, and even help us improve Twitter. For example, in April 2020, we released the COVID-19 stream endpoint – the first free, topic-based stream built solely for researchers to use data from the global conversation for the public good. Researchers from around the world continue to use this endpoint for a number of projects.

Over two years ago, we started our own extensive research to better understand the needs, constraints and challenges that researchers have when studying the public conversation. In October 2020, we tested this product track in a private beta program where we gathered additional feedback. This gave us a glimpse into some of the important work that the free Academic Research product track we’re launching today can now enable….(More)”.
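
For readers curious what access under the new track looks like in practice, here is a minimal sketch of a call to the v2 full-archive search endpoint that the Academic Research track opens up. The query, time window and fields are illustrative choices, not recommendations from the announcement, and a valid bearer token is assumed to be available in the environment.

```python
import os

import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # assumed to be set beforehand

response = requests.get(
    "https://api.twitter.com/2/tweets/search/all",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "covid19 lang:en -is:retweet",  # hypothetical research query
        "start_time": "2020-04-01T00:00:00Z",
        "end_time": "2020-04-02T00:00:00Z",
        "max_results": 100,
        "tweet.fields": "created_at,public_metrics",
    },
)
response.raise_for_status()

# Print a short preview of each returned tweet.
for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```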

Facebook will let researchers study how advertisers targeted users with political ads prior to Election Day


Nick Statt at The Verge: “Facebook is aiming to improve transparency around political advertising on its platform by opening up more data to independent researchers, including targeting information on more than 1.3 million ads that ran in the three months prior to the US election on November 3rd of last year. Researchers interested in studying the ads can apply for access to the Facebook Open Research and Transparency (FORT) platform here.

The move is significant because Facebook has long resisted willfully allowing access to data around political advertising, often citing user privacy. The company has gone so far as to even disable third-party web plugins, like ProPublica’s Facebook Political Ad Collector tool, that collect such data without Facebook’s express consent.

Numerous research groups around the globe have spent years now studying Facebook’s impact on everything from democratic elections to news dissemination, but sometimes without full access to all the desired data. Only last year, after partnering with Harvard University’s Social Science One (the group overseeing applications for the new political ad targeting initiative), did Facebook better formalize the process of granting anonymized user data for research studies.

In the past, Facebook has made some crucial political ad information in its Ad Library available to the public, including the amount spent on certain ads and demographic information about who saw those ads. But now the company says it wants to do more to improve transparency, specifically around how advertisers target certain subsets of users with political advertising….(More)”.
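
For context on the public side of this, the sketch below shows roughly how researchers already query the public Ad Library through the Graph API’s ads_archive endpoint for spend and demographic data. The FORT targeting data discussed in the article is accessed through an approved application instead, and the parameter and field names here are assumptions that may vary by API version.

```python
import os

import requests

ACCESS_TOKEN = os.environ["FB_ACCESS_TOKEN"]  # assumed to be provisioned already

response = requests.get(
    "https://graph.facebook.com/v9.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",
        "search_terms": "election",  # hypothetical search term
        "fields": "page_name,spend,demographic_distribution",
        "limit": 25,
    },
)
response.raise_for_status()

# Print the advertiser and reported spend range for each returned ad.
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```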

Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
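
The mechanics described above — contributors write notes, other contributors rate them, and highly rated notes surface — can be illustrated with a toy ranking scheme. The sketch below is not Twitter’s actual Birdwatch algorithm; it simply ranks hypothetical notes by the share of “helpful” ratings, with a minimum-rating floor before a note becomes visible.

```python
from collections import defaultdict

# Hypothetical notes attached to one tweet.
notes = {
    "note-1": "This claim is contradicted by the linked official statistics.",
    "note-2": "Misleading: the quote is taken out of context.",
}
# (note_id, rater_id, helpful?) triples from other contributors -- hypothetical data.
ratings = [
    ("note-1", "rater-a", True), ("note-1", "rater-b", True), ("note-1", "rater-c", False),
    ("note-2", "rater-a", False), ("note-2", "rater-d", True),
]

MIN_RATINGS = 3  # a note needs a few independent ratings before it can surface

tally = defaultdict(lambda: [0, 0])  # note_id -> [helpful count, total ratings]
for note_id, _rater, helpful in ratings:
    tally[note_id][1] += 1
    if helpful:
        tally[note_id][0] += 1

# Surface only sufficiently rated notes, ordered by helpfulness share.
visible = sorted(
    (nid for nid, (h, t) in tally.items() if t >= MIN_RATINGS),
    key=lambda nid: tally[nid][0] / tally[nid][1],
    reverse=True,
)
for note_id in visible:
    helpful_count, total = tally[note_id]
    print(f"{note_id}: {helpful_count}/{total} helpful -> {notes[note_id]}")
```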