How can stakeholder engagement and mini-publics better inform the use of data for pandemic response?


Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst at the OECD Participo Blog Series:


“What does the public expect from data-driven responses to the COVID-19 pandemic? And under what conditions?” These are the motivating questions behind The Data Assembly, a recent initiative by The GovLab at New York University Tandon School of Engineering — an action research center that aims to help institutions work more openly, collaboratively, effectively, and legitimately.

Launched with support from The Henry Luce Foundation, The Data Assembly solicited diverse, actionable public input on data re-use for crisis response in the United States. In particular, we sought to engage the public on how to facilitate, if deemed acceptable, the use of data that was collected for one purpose to inform the COVID-19 response. One additional objective was to inform the broader emergence of data collaboration — through formal and ad hoc arrangements between the public sector, civil society, and the private sector — by evaluating public expectations of, and concerns about, the current institutional, contractual, and technical structures and instruments that may underpin these partnerships.

The Data Assembly used a new methodology that re-imagines how organisations can engage with society to better understand local expectations regarding data re-use and related issues. This work goes beyond soliciting input from just the “usual suspects”. Instead, data assemblies provide a forum for a much more diverse set of participants to share their insights and voice their concerns.

This article is informed by our experience piloting The Data Assembly in New York City in summer 2020. It provides an overview of The Data Assembly’s methodology and outcomes and describes major elements of the effort to support organisations working on similar issues in other cities, regions, and countries….(More)”.

As Jakarta floods again, humanitarian chatbots on social media support community-led disaster response


Blog by Petabencana: “On February 20th, #banjir and #JakartaBanjir were the highest trending topics on Twitter Indonesia, as the capital city was inundated for the third major time this year, following particularly heavy rainfall from Friday night (19/2/2021) to Saturday morning (20/02/2021). As Jakarta residents turned to social media to share updates about the flood, they were greeted by “Disaster Bot” – a novel AI-assisted chatbot that monitors social media for posts about disasters and automatically invites users to submit more detailed disaster reports. These crowd-sourced reports are used to map disasters in real-time, on a free and open source website, PetaBencana.id.
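The monitor-and-invite loop that "Disaster Bot" performs can be sketched in a few lines. This is a hypothetical illustration, not PetaBencana's actual code: the keywords, function names, and invitation text are all assumptions.

```python
import re

# Hypothetical keyword monitor: flags social-media posts that mention
# flooding and drafts a reply inviting a structured disaster report.
FLOOD_KEYWORDS = re.compile(r"\b(banjir|flood|flooding)\b", re.IGNORECASE)

def should_invite(post_text: str) -> bool:
    """Return True if the post looks like a flood mention."""
    return bool(FLOOD_KEYWORDS.search(post_text))

def draft_invitation(username: str) -> str:
    """Compose the reply inviting a detailed, geotagged report."""
    return (f"@{username} Hi! To help map the flood in real time, please "
            "submit a report (water level, location, photo) at "
            "https://petabencana.id")

# Illustrative stream of posts: (username, text)
posts = [
    ("warga1", "Banjir lagi di Kemang, air sudah selutut"),
    ("warga2", "Traffic is fine on the toll road today"),
]
for user, text in posts:
    if should_invite(text):
        print(draft_invitation(user))
```

The crowd-sourced reports gathered this way can then be geocoded and plotted on the shared map, which is what turns scattered social-media chatter into a real-time situational picture.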

As flooding blocked major thoroughfares and toll roads, disrupted commuter lines, and cut off electricity to over 60,000 homes, residents continued to share updates about the flood situation in order to stay alert and make timely decisions about safety and response. Hundreds of residents submitted flood reports to PetaBencana.id, alerting each other about water levels, broken infrastructure and road accessibility. The Jakarta Emergency Management Agency also updated the map with official information about flood-affected areas, and monitored the map to respond to resident needs. PetaBencana.id experienced a 2,000% increase in activity in under 12 hours as residents actively checked the map to understand the flooding situation, avoid flooded areas, and make decisions about safety and response.

Residents share updates about flood-affected road access through the open source information sharing platform, PetaBencana.id. Thousands of residents used the map to navigate safely as heavy rainfall inundated the city for the third major time this year.

As flooding incidents continue to occur with increasing intensity across the country, community-led information sharing is once again proving its significance in supporting response and planning at multiple scales. …(More)”.

A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific and often remain confined to the particular context they target. For example, an inoculation message might forewarn people that false information is circulating that encourages them to drink bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.

The (il)logic of legibility – Why governments should stop simplifying complex systems


Thea Snow at LSE Blog: “Sometimes, you learn about an idea that really sticks with you. This happened to me recently when I learnt about “legibility” — a concept which James C Scott introduces in his book Seeing like a State.

Just last week, I was involved in two conversations which highlighted how pervasive the logic of legibility continues to be in influencing how governments think and act. But first, what is legibility?

Defining Legibility

Legibility describes the very human tendency to simplify complex systems in order to exert control over them.

In this blog, Venkatesh Rao offers a recipe for legibility:

  • Look at a complex and confusing reality…
  • Fail to understand all the subtleties of how the complex reality works
  • Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
  • Come up with an idealized blank-slate vision of what that reality ought to look like
  • Argue that the relative simplicity and platonic orderliness of the vision represents rationality
  • Use power to impose that vision, by demolishing the old reality if necessary.

Rao explains: “The big mistake in this pattern of failure is projecting your subjective lack of comprehension onto the object you are looking at, as “irrationality.” We make this mistake because we are tempted by a desire for legibility.”

Scott uses modern forestry as an example of legibility in practice. Hundreds of years ago, forests acted as many things — they were places people harvested wood, but also places where locals went foraging and hunting, as well as an ecosystem for animals and plants. According to the logic of scientific forestry, forests would be much more valuable if they just produced timber. To achieve this, they had to be made legible.

So, modern agriculturalists decided to clear-cut forests and plant perfectly straight rows of a particular species of fast-growing tree. It was assumed this would be more efficient. Planting just one species meant the quality of timber would be predictable. In addition, the straight rows would make it easy to know exactly how much timber was there, and would mean timber production could be easily monitored and controlled.

 Reproduced from https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/

For the first generation of trees, the agriculturalists achieved higher yields, and there was much celebration and self-congratulation. But after about a century, the problems of ecosystem collapse started to reveal themselves. In imposing a logic of order and control, scientific forestry destroyed the complex, invisible, and unknowable network of relationships between plants, animals and people that is necessary for a forest to thrive.

After a century it became apparent that relationships between plants and animals were so distorted that pests were destroying crops. The nutrient balance of the soil was disrupted. And after the first generation of trees, the forest was not thriving at all….(More)”.

Robot census: Gathering data to improve policymaking on new technologies


Essay by Robert Seamans: There is understandable excitement about the impact that new technologies like artificial intelligence (AI) and robotics will have on our economy. In our everyday lives, we already see the benefits of these technologies: when we use our smartphones to navigate from one location to another using the fastest available route or when a predictive typing algorithm helps us finish a sentence in our email. At the same time, there are concerns about possible negative effects of these new technologies on labor. The Councils of Economic Advisers of the past two Administrations have addressed these issues in the annual Economic Report of the President (ERP). For example, the 2016 ERP included a chapter on technology and innovation that linked robotics to productivity and growth, and the 2019 ERP included a chapter on artificial intelligence that discussed the uneven effects of technological change. Both these chapters used data at highly aggregated levels, in part because that is the data that is available. As I’ve noted elsewhere, AI and robots are everywhere, except, as it turns out, in the data.

To date, there have been no large-scale, systematic studies in the U.S. on how robots and AI affect productivity and labor in individual firms or establishments (a firm could own one or more establishments, which for example could be a plant in a manufacturing setting or a storefront in a retail setting). This is because the data are scarce. Academic researchers interested in the effects of AI and robotics on economic outcomes have mostly used aggregate country and industry-level data. Very recently, some have studied these issues at the firm level using data on robot imports to France, Spain, and other countries. I review a few of these academic papers in both categories below, which provide early findings on the nuanced effects these new technologies have on labor. Thanks to some excellent work being done by the U.S. Census Bureau, however, we may soon have more data to work with. This includes new questions on robot purchases in the Annual Survey of Manufacturers and Annual Capital Expenditures Survey and new questions on other technologies including cloud computing and machine learning in the Annual Business Survey….(More)”.

Democratizing data in a 5G world


Blog by Dimitrios Dosis at Mastercard: “The next generation of mobile technology has arrived, and it’s more powerful than anything we’ve experienced before. 5G can move data faster, with little delay — in fact, with 5G, you could’ve downloaded a movie in the time you’ve read this far. 5G will also create a vast network of connected machines. The Internet of Things will finally deliver on its promise to fuse all our smart products — vehicles, appliances, personal devices — into a single streamlined ecosystem.

My smartwatch could monitor my blood pressure and schedule a doctor’s appointment, while my car could collect data on how I drive and how much gas I use while behind the wheel. In some cities, petrol trucks already act as roving gas stations, receiving pings when cars are low on gas and refueling them as needed, wherever they are.

This amounts to an incredible proliferation of data. By 2025, every connected person will conduct nearly 5,000 data interactions every day — one every 18 seconds — whether they know it or not. 
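A quick back-of-the-envelope check shows the two figures quoted above are mutually consistent:

```python
# Sanity check on the projection above: "nearly 5,000 data interactions
# every day — one every 18 seconds".
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# One interaction every 18 seconds over a full day:
interactions_per_day = SECONDS_PER_DAY / 18
print(round(interactions_per_day))  # 4800 -> "nearly 5,000"

# Conversely, 5,000 interactions spread evenly across a day:
seconds_per_interaction = SECONDS_PER_DAY / 5000
print(round(seconds_per_interaction, 2))  # 17.28 -> roughly every 18 seconds
```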

Enticing and convenient as new 5G-powered developments may be, they also raise complex questions about data. Namely, who is privy to our personal information? As your smart refrigerator records the foods you buy, will the refrigerator’s manufacturer be able to see your eating habits? Could it sell that information to a consumer food product company for market research without your knowledge? And where would the information go from there?

People are already asking critical questions about data privacy. In fact, 72% of them say they are paying attention to how companies collect and use their data, according to a global survey released last year by the Harvard Business Review Analytic Services. The survey, sponsored by Mastercard, also found that while 60% of executives believed consumers think the value they get in exchange for sharing their data is worthwhile, only 44% of consumers actually felt that way.

There are many reasons for this data disconnect, including the lack of transparency that currently exists in data sharing and the tension between an individual’s need for privacy and his or her desire for personalization.

This paradox can be solved by putting data in the hands of the people who create it — giving consumers the ability to manage, control and share their own personal information when they want to, with whom they want to, and in a way that benefits them.

That’s the basis of Mastercard’s core set of principles regarding data responsibility – and in this 5G world, it’s more important than ever. We will be able to gain from these new technologies, but this change must come with trust and user control at its core. The data ecosystem needs to evolve from schemes dominated by third parties, where some data brokers collect inferred, often unreliable and inaccurate data, then share it without the consumer’s knowledge….(More)”.

Using “Big Data” to forecast migration


Blog Post by Jasper Tjaden, Andres Arau, Muertizha Nuermaimaiti, Imge Cetin, Eduardo Acostamadiedo, Marzia Rango: Act 1 — High Expectations

“Data is the new oil,” they say. ‘Big Data’ is even bigger than that. The “data revolution” will contribute to solving societies’ problems and help governments adopt better policies and run more effective programs. In the migration field, digital trace data are seen as a potentially powerful tool to improve migration management processes (visa applications, asylum decisions and the geographic allocation of asylum seekers, facilitating integration, “smart borders”, etc.).1

Forecasting migration is one particular area where big data seems to excite data nerds (like us) and policymakers alike. If there is one way big data has already made a difference, it is its ability to bring different actors together — data scientists, business people and policy makers — to sit through countless slides with numbers, tables and graphs. Traditional migration data sources, like censuses, administrative data and surveys, have never quite managed to generate the same level of excitement.

Many EU countries are currently heavily investing in new ways to forecast migration. Relatively large numbers of asylum seekers in 2014, 2015 and 2016 strained the capacity of many EU governments. Better forecasting tools are meant to help governments prepare in advance.

In a recent European Migration Network study, 10 out of the 22 EU governments surveyed said they make use of forecasting methods, many using open source data for “early warning and risk analysis” purposes. The 2020 European Migration Network conference was dedicated entirely to the theme of forecasting migration, hosting more than 15 expert presentations on the topic. The recently proposed EU Pact on Migration and Asylum outlines a “Migration Preparedness and Crisis Blueprint” which “should provide timely and adequate information in order to establish the updated migration situational awareness and provide for early warning/forecasting, as well as increase resilience to efficiently deal with any type of migration crisis.” (p. 4) The European Commission is currently finalizing a feasibility study on the use of artificial intelligence for predicting migration to the EU; Frontex — the EU Border Agency — is scaling up efforts to forecast irregular border crossings; EASO — the European Asylum Support Office — is devising a composite “push-factor index” and experimenting with forecasting asylum-related migration flows using machine learning and data at scale. In Fall 2020, during Germany’s EU Council Presidency, the German Interior Ministry organized a workshop series around Migration 4.0 highlighting the benefits of various ways to “digitalize” migration management. At the same time, the EU is investing substantial resources in migration forecasting research under its Horizon2020 programme, including QuantMig, ITFLOWS, and HumMingBird.
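To give a sense of the baseline these forecasting systems must outperform: the simplest possible benchmark is a linear trend extrapolated from a short historical series. The sketch below uses made-up annual figures for illustration; the EU systems named above combine far richer data sources and machine-learning models.

```python
# Illustrative baseline forecast: fit an ordinary-least-squares linear
# trend to a short annual series and extrapolate one step ahead.
# The numbers are hypothetical, not real asylum statistics.

def linear_trend_forecast(series: list[float], steps: int = 1) -> float:
    """Fit y = a + b*t by least squares, then extrapolate `steps` ahead."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a + b * (n - 1 + steps)

# Hypothetical annual asylum applications for five years:
applications = [60_000, 65_000, 90_000, 120_000, 110_000]
print(f"next-year forecast: {linear_trend_forecast(applications):,.0f}")
```

Any model built on digital trace data, push-factor indices, or machine learning earns its keep only if it beats naive baselines like this one out of sample — a point the evaluation literature on early-warning systems stresses repeatedly.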

Is all this excitement warranted?

Yes, it is….(More)” See also: Big Data for Migration Alliance

The High Price of Mistrust


fs.blog: “There are costs to falling community participation. Rather than simply lamenting the loss of a past golden era (as people have done in every era), Harvard political scientist Robert D. Putnam explains these costs, as well as how we might bring community participation back.

First published twenty years ago, Bowling Alone is an exhaustive, hefty work. In its 544 pages, Putnam marshalled mountains of data to support his thesis that the previous few decades had seen Americans retreat en masse from public life. Putnam argued Americans had become disconnected from their wider communities, as evidenced by changes such as a decline in civic engagement and dwindling membership rates for groups such as bowling leagues and PTAs.

Though aspects of Bowling Alone are a little dated today (“computer-mediated communication” isn’t a phrase you’re likely to have heard recently), a quick glance at 2021’s social landscape would suggest many of the trends Putnam described have only continued and apply in other parts of the world too.

Right now, polarization and social distancing have forced us apart from any sense of community to a degree that can seem irresolvable.

Will we ever bowl in leagues alongside near strangers and turn them into friends again? Will we ever bowl again at all, even if alone, or will those gleaming aisles, too-tight shoes, and overpriced sodas fade into a distant memory we recount to our children?

The idea of going into a public space for a non-essential reason can feel incredibly out of reach for many of us right now. And who knows how spaces like bowling alleys will survive in the long run without the social scenes that fuelled them. Now is a perfect time to revisit Bowling Alone to see what it can still teach us, because many of its warnings and lessons are perhaps more relevant now than at its time of publication.

One key lesson we can derive from Bowling Alone is that the less we trust each other—something which is both a cause and consequence of declining community engagement—the more it costs us. Mistrust is expensive.…(More)”

The Rise of Urban Commons


Blogpost by Alessandra Quarta and Antonio Vercellone: “In the last ten years, the concept of the commons has become popular in social studies and political activism, and in some countries domestic lawyers have shared an interest in this notion. Even if an (existing or proposed) statutory definition of the commons is still very rare, lawyers become familiar with the concept of the commons through the filter of property law, where such a concept has been quite discredited. In fact, approaching property law, many students of different legal traditions learn that the origins of property rights revolve around the “tragedy of the commons”, the “parable” made famous by Garrett Hardin in the late nineteen-sixties. According to this widespread narrative, the impossibility of avoiding the over-exploitation of resources managed through an open-access regime makes it necessary to allocate private property rights. In this classic argument, the commons appear in a negative light: they represent the impossibility for a community to manage shared resources without concentrating all decision-making powers in the hands of a single owner or a central government. Moreover, they represent the wasteful inefficiency of the feudal world.

This vision dominated social and economic studies until Elinor Ostrom published her famous book Governing the Commons in 1990, offering the results of her research on resources managed by communities in different parts of the world. Ostrom, awarded the Nobel Prize in 2009, demonstrated that the commons are not necessarily a tragedy or a lawless space. In fact, local communities generally define principles for governing and sharing them in a resilient way that prevents the tragedy from occurring. Moreover, Ostrom defined a set of principles for assessing whether the commons are managed efficiently and can compete with both private and public arrangements of resource management.

Later on, under an institutional perspective, the commons became a tool for contesting mainstream political and economic dogmas, including the supposedly unquestionable efficiency of both the market and private property in allocating resources. The search for new tools for managing resources has been carried out in several experiments, generally at the local and urban level: scholars and practitioners describe these experiences as ‘urban commons’….(More)”.

Improved targeting for mobile phone surveys: A public-private data collaboration


Blogpost by Kristen Himelein and Lorna McPherson: “Mobile phone surveys have been rapidly deployed by the World Bank to measure the impact of COVID-19 in nearly 100 countries across the world. Previous posts on this blog have discussed the sampling and implementation challenges associated with these efforts, and coverage errors are an inherent problem of the approach. The survey methodology literature has shown that mobile phone survey respondents in the poorest countries are more likely to be male, urban, wealthier, and more highly educated. This bias can stem from phone ownership, as mobile phone surveys are at best representative of mobile phone owners, a group which, particularly in poor countries, may differ from the overall population; or from differential response rates among these owners, with some groups more or less likely to respond to a call from an unknown number. In this post, we share our experiences in trying to improve representativeness and boost sample sizes for the poor in Papua New Guinea (PNG)….(More)”.
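One standard remedy for the coverage bias described in this excerpt is post-stratification: reweighting respondents so the weighted sample matches known population shares (e.g. from a census). The sketch below uses hypothetical strata and shares, not the actual design of the PNG survey.

```python
# Sketch of post-stratification weighting for a phone survey.
# Strata definitions and all numbers are hypothetical illustrations.

def poststratification_weights(sample_counts: dict, population_shares: dict) -> dict:
    """Weight for each stratum = population share / sample share."""
    n = sum(sample_counts.values())
    return {
        stratum: population_shares[stratum] / (count / n)
        for stratum, count in sample_counts.items()
    }

# Phone surveys tend to over-represent urban men relative to the census:
sample = {"urban_male": 450, "urban_female": 250,
          "rural_male": 200, "rural_female": 100}
census = {"urban_male": 0.12, "urban_female": 0.12,
          "rural_male": 0.38, "rural_female": 0.38}

weights = poststratification_weights(sample, census)
for stratum, w in sorted(weights.items()):
    print(f"{stratum}: weight {w:.2f}")
```

Over-represented strata get weights below 1 and under-represented strata weights above 1, so weighted estimates lean less on the urban, male respondents who are easiest to reach by phone. This corrects for differential response only along the stratifying variables; bias from phone ownership itself, among people with no phone at all, cannot be weighted away.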