Legislative Performance Futures


Article by Ben Podgursky on “Incentivize Good Laws by Monetizing the Verdict of History”: “…There are net-positive legislative policies which legislators won’t enact, because they only help people in the medium to far future.  For example:

  • Climate change policy
  • Infrastructure investments and mass-transit projects
  • Debt control and social security reform
  • Child tax credits

On the infrequent occasions when reforms on these issues are legislated — rarely, relative to their future value — they are passed not because of the value provided to future generations, but because of the immediate benefit to voters today:

  • Infrastructure investment goes to “shovel-ready” projects, with an emphasis on short-term job creation, even when the prime benefit is to future GDP.  For example, dams constructed in the 1930s (the Hoover Dam, the TVA) provide immense value today, but the projects only happened in order to create tens of thousands of jobs.
  • Climate change legislation is usually weakly directed.  Instead of policies which deliver significant long-term benefits but incur short-term costs (i.e., carbon taxes), “green legislation” aims to create green jobs and incentivize rooftop solar (reducing power bills today).
  • (Small) child tax credits are passed to help parents today, even though the vastly larger benefit accrues to children who exist only because the marginal extra cash helped their parents afford an extra child.

On the other hand, reforms which provide no benefit to today’s voter do not happen; this is why the upcoming Social Security Trust Fund shortfall will likely not be fixed until benefits are reduced and voters are directly impacted.

The issue is that while the future reaps the benefits or failures of today’s laws, people of the future cannot vote in today’s elections.  In fact, in almost no circumstances does the future have any ability to meaningfully reward or punish past lawmakers; there are debates today about whether to remove statues and rename buildings dedicated to those on the wrong side of history, actions which even proponents acknowledge as entirely symbolic….(More)”.

Policy 2.0 in the Pandemic World: What Worked, What Didn’t, and Why


Blog by David Osimo: “…So how, then, did these new tools perform when confronted with the once-in-a-lifetime crisis of a vast global pandemic?

It turns out, some things worked. Others didn’t. And the question of how these new policymaking tools functioned in the heat of battle is already generating valuable ammunition for future crises.

So what worked?

Policy modelling – an analytical framework designed to anticipate the impact of decisions by simulating the interaction of multiple agents in a system, rather than just the independent actions of atomised and rational humans – took centre stage in the pandemic and emerged with reinforced importance in policymaking. Notably, it helped governments predict how and when to introduce lockdowns or open up. But even there uptake was limited: a recent survey showed that most of the 28 models used in different countries to fight the pandemic were traditional, not the modern “agent-based” or “system dynamics” models supposed to deal best with uncertainty. Meanwhile, the concepts of system science became prominent and widely communicated. It quickly became clear in the course of the crisis that social distancing was more a method to reduce the systemic pressure on the health services than a way to avoid individual contagion (the so-called “flatten the curve” approach).
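
To make the distinction concrete, here is a minimal agent-based sketch in Python (the parameters and structure are illustrative assumptions, not any model actually used by a government): each infected agent meets a fixed number of random contacts per day, and lowering that contact rate reduces the peak number of simultaneous infections, which is precisely the “flatten the curve” logic.

  import random

  def simulate(contact_rate, n_agents=1000, p_transmit=0.05, days=120, recovery=10):
      # Toy agent-based epidemic: each agent is susceptible "S", infected "I" or recovered "R".
      state = ["S"] * n_agents
      days_sick = [0] * n_agents
      state[0] = "I"                     # seed a single infection
      peak = 0
      for _ in range(days):
          infected = [i for i, s in enumerate(state) if s == "I"]
          for i in infected:
              # each infected agent meets `contact_rate` randomly chosen agents today
              for j in random.sample(range(n_agents), contact_rate):
                  if state[j] == "S" and random.random() < p_transmit:
                      state[j] = "I"
              days_sick[i] += 1
              if days_sick[i] >= recovery:
                  state[i] = "R"
          peak = max(peak, state.count("I"))
      return peak

  random.seed(1)
  print("peak simultaneous infections, normal mixing:    ", simulate(contact_rate=10))
  print("peak simultaneous infections, social distancing:", simulate(contact_rate=4))

Real policy models add far more structure (age groups, geography, hospital capacity), but the systemic intuition is the same.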

Open government data has long promised to allow citizens and businesses to build new services at scale and to make government accountable. The pandemic largely confirmed how important this data could be in allowing citizens to analyse things independently. Hundreds of analysts from all walks of life and disciplines used social media to discuss their analyses and predictions, many becoming household names and go-to people in their countries and regions. Yes, this led to noise and a so-called “infodemic,” but overall it served as a fundamental tool to increase confidence and consensus behind the policy measures and to make governments accountable for their actions. For instance, one Catalan analyst demonstrated that vaccines were not administered during weekends and forced the government to change its stance. Yet it is also clear that not all went well, most notably on the supply side. Governments published data of low quality: in PDF, with delays, or with missing values due to spreadsheet abuse.
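
The supply-side problems described here were easy for independent analysts to surface programmatically. A rough sketch of such checks, assuming a hypothetical daily-cases CSV with “date” and “new_cases” columns (the file and column names are invented for illustration):

  import pandas as pd

  # hypothetical daily case file published on a government open-data portal
  df = pd.read_csv("daily_cases.csv", parse_dates=["date"])

  # 1. reporting gaps: calendar days with no row at all (e.g. weekends never published)
  expected = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
  missing_days = expected.difference(df["date"])

  # 2. missing values: rows that were published but left blank
  blank_cells = df.isna().sum()

  # 3. spreadsheet abuse: case counts stored as text rather than as numbers
  as_numbers = pd.to_numeric(df["new_cases"], errors="coerce")
  non_numeric = as_numbers.isna() & df["new_cases"].notna()

  print(f"{len(missing_days)} calendar days never published")
  print(blank_cells)
  print(f"{int(non_numeric.sum())} rows with non-numeric case counts")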

In most cases, there was little demand for sophisticated data publishing solutions such as “linked” or “FAIR” data, although the uptake of these kinds of solutions was particularly significant when it came time to share crucial research data. Experts argue that the trend towards open science has accelerated dramatically and irreversibly in the last year, as shown by the portal https://www.covid19dataportal.org/, which allowed the sharing of high-quality data for scientific research….

But other new policy tools proved less easy to use and ultimately ineffective. Collaborative governance, for one, promised to leverage the knowledge of thousands of citizens to improve public policies and services. In practice, methodologies aimed at involving citizens in decision-making and service design were of little use. Decisions related to lockdown and opening up were taken in closed committees, in top-down mode. Individual exceptions certainly exist: Milan, one of the cities worst hit by the pandemic, launched a co-created strategy for opening up after the lockdown, receiving almost 3,000 contributions to the consultation. But overall, such initiatives had limited impact and visibility. With regard to co-design of public services, in times of emergency there was no time for prototyping or focus groups. Services such as emergency financial relief had to be launched in a hurry and “just work.”

Citizen science promised to make every citizen a consensual data source for monitoring complex phenomena in real time through apps and Internet-of-Things sensors. In the pandemic, there were initially great expectations for digital contact tracing apps to allow real-time monitoring of contagion, most notably through Bluetooth connections on the phone. However, they were mostly a disappointment. Citizens were reluctant to install them. And contact tracing soon appeared to be much more complicated – and human-intensive – than originally thought. The huge debate over technology and privacy was followed by very limited impact. Much ado about nothing.
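
For context, the decentralised, Bluetooth-based designs generally worked along these lines: phones broadcast rotating random identifiers, record the identifiers they overhear nearby, and later compare them against identifiers uploaded by people who test positive. The sketch below is a drastic simplification of that idea (the class, token format and matching logic are illustrative assumptions; real protocols add key-rotation schedules, distance and duration estimation, and stronger privacy protections).

  import secrets

  class Phone:
      def __init__(self):
          self.broadcast_tokens = []   # rotating identifiers this phone has broadcast
          self.heard_tokens = []       # identifiers overheard from nearby phones

      def broadcast(self):
          token = secrets.token_hex(8)         # fresh random identifier
          self.broadcast_tokens.append(token)
          return token

      def hear(self, token):
          self.heard_tokens.append(token)

      def exposed(self, published_tokens):
          # compare locally stored contacts with tokens uploaded by confirmed cases
          return any(t in published_tokens for t in self.heard_tokens)

  alice, bob, carol = Phone(), Phone(), Phone()
  bob.hear(alice.broadcast())              # Alice and Bob were in Bluetooth range
  carol.hear(bob.broadcast())              # Bob and Carol met, but Carol never met Alice

  published = set(alice.broadcast_tokens)  # Alice tests positive and uploads her tokens
  print("Bob notified:  ", bob.exposed(published))    # True
  print("Carol notified:", carol.exposed(published))  # False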

Behavioural economics (commonly known as nudge theory) is probably the most visible failure of the pandemic. It promised to move beyond traditional carrots (public funding) and sticks (regulation) in delivering policy objectives by adopting an experimental method to influence or “nudge” human behaviour towards desired outcomes. The reality is that soft nudges proved an ineffective alternative to hard lockdown choices. What makes it uniquely negative is that such methods took centre stage in the initial phase of the pandemic and particularly informed the United Kingdom’s lax approach in the first months, on the basis of a hypothetical and unproven “behavioural fatigue.” This attracted heavy criticism of the United Kingdom government’s excessive reliance on nudges, a legacy of Prime Minister David Cameron’s administration. The origin of such criticism seems to lie not in the method’s shortcomings per se (it had previously enjoyed success in more specific cases) but in the backlash from excessive expectations and promises, epitomised in the quote of a prominent behavioural economist: “It’s no longer a matter of supposition as it was in 2010 […] we can now say with a high degree of confidence these models give you best policy.”

Three factors emerge as the key determinants behind success and failure: maturity, institutions and leadership….(More)”.

Open Data Day 2021: How to unlock its potential moving forward?


Stefaan Verhulst, Andrew Young, and Andrew Zahuranec at Data and Policy: “For over a decade, data advocates have reserved one day out of the year to celebrate open data. Open Data Day 2021 comes at a time of unprecedented upheaval. As the world remains in the grip of COVID-19, open data researchers and practitioners must confront the challenge of how to use open data to address the types of complex, emergent challenges that are likely to define the rest of this century (and beyond). Amid threats like the ongoing pandemic, climate change, and systemic poverty, there is renewed pressure to find ways that open data can solve complex social, cultural, economic and political problems.

Over the past year, the Open Data Policy Lab, an initiative of The GovLab at NYU’s Tandon School of Engineering, held several sessions with leaders of open data from around the world. Over the course of these sessions, which we called the Summer of Open Data, we studied various strategies and trends, and identified future pathways for open data leaders to pursue. The results of this research suggest an emergent Third Wave of Open Data — one that offers a clear pathway for stakeholders of all types to achieve Open Data Day’s goal of “showing the benefits of open data and encouraging the adoption of open data policies in government, business, and civil society.”

The Third Wave of Open Data is central to how data is being collected, stored, shared, used, and reused around the world. In what follows, we explain this notion further, and argue that it offers a useful rubric through which to take stock of where we are — and to consider future goals — as we mark this latest iteration of Open Data Day.

The Past and Present of Open Data

The history of open data can be divided into several waves, each reflecting the priorities and values of the era in which they emerged….(More)”.

[Figure: The Three Waves of Open Data]

Improving Governance by Asking Questions that Matter


Fiona Cece, Nicola Nixon and Stefaan Verhulst at the Open Government Partnership:

“You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions” – Naguib Mahfouz

Data is at the heart of every dimension of the COVID-19 challenge. It’s been vital in the monitoring of daily rates, track-and-trace technologies, doctors’ appointments, and the vaccine roll-out. Yet our daily diet of brightly coloured graphs of global trends masks the maelstrom of inaccuracies, gaps and guesswork that underlies the ramshackle numbers on which they are so often based. Governments are unable to address their citizens’ needs in an informed way when the data itself is partial, incomplete or simply biased. And citizens in turn are unable to contribute to collective decision-making that impacts their lives when the channels for doing so in meaningful ways are largely non-existent. 

There is an irony here. We live in an era in which there are an unprecedented number of methods for collecting data. Even in the poorest countries with weak or largely non-existent government systems, anyone with a mobile phone or who accesses the internet is using and producing data. Yet a chasm exists between the potential of data to contribute to better governance and what it is actually collected and used for.

Even where data accuracy can be relied upon, the practice of effective, efficient and equitable data governance requires much more than its collection and dissemination.

And although governments will play a vital role, combatting the pandemic and its associated socio-economic challenges will require the combined efforts of non-government organizations (NGOs), civil society organizations (CSOs), citizens’ associations, healthcare companies and providers, universities, think tanks and so many others. Collaboration is key.

There is a need to collectively move beyond solution-driven thinking. One initiative working toward this end is The 100 Questions Initiative by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering. In partnership with The Asia Foundation, the Centre for Strategic and International Studies in Indonesia, and the BRAC Institute of Governance and Development, the Initiative is launching a Governance domain. Collectively we will draw on the expertise of over 100 “bilinguals” – experts in both data science and governance – to identify the 10 most pressing questions on a variety of issues that can be addressed using data and data science. The cohort for this domain is multi-sectoral and geographically varied, and will provide diverse input on these governance challenges. 

Once the questions have been identified and prioritized, and we have engaged with a broader public through a voting campaign, the ultimate goal is to establish one or more data collaboratives that can generate answers to the questions at hand. Data collaboratives are an emerging structure that allows the pooling of data and expertise across sectors, often resulting in new insights and public sector innovations.  Data collaboratives are fundamentally about sharing and cross-sectoral engagement. They have been deployed across countries and sectoral contexts, and their relative success shows that in the twenty-first century no single actor can solve vexing public problems. The route to success lies through broad-based collaboration. 

Multi-sectoral and geographically diverse insight is needed to address the governance challenges we are living through, especially during the time of COVID-19. The pandemic has exposed weak governance practices globally, and collectively we need to craft a better response. As an open governance and data-for-development community, we have not yet leveraged the best insight available to inform an effective, evidence-based response to the pandemic. It is time to leverage more data and technology to enable citizen-centrism in our service delivery and decision-making processes, to contribute to overcoming the pandemic and to building our governance systems, institutions and structures back better. Together with over 130 “bilinguals” – experts in both governance and data – we have set about identifying the priority questions that data can answer to improve governance. Join us on this journey. Stay tuned for our public voting campaign in a couple of months’ time, when we will crowdsource your views on which of these questions really matter….(More)”.

Why Transparency Won’t Save Us


Essay by Sun-ha Hong: “In a society beset with black-boxed algorithms and vast surveillance systems, transparency is often hailed as liberal democracy’s superhero. It’s a familiar story: inject the public with information to digest, then await their rational deliberation and improved decision making. Whether in discussions of facial recognition software or platform moderation, we run into the argument that transparency will correct the harmful effects of algorithmic systems. The trouble is that in our movies and comic books, superheroes are themselves deus ex machina: black boxes designed to make complex problems disappear so that the good guys can win. Too often, transparency is asked to save the day on its own, under the assumption that disinformation or abuse of power can be shamed away with information.

Transparency without adequate support, however, can quickly become fuel for speculation and misunderstanding….

All this is part of a broader pattern in which the very groups who should be held accountable by the data tend to be its gatekeepers. Facebook is notorious for transparency-washing strategies, in which it dangles data access like a carrot but rarely follows through in actually delivering it. When researchers worked to create more independent means of holding Facebook accountable — as New York University’s Ad Observatory did last year, using volunteer researchers to build a public database of ads on the platform — Facebook threatened to sue them. Despite the lofty rhetoric around Facebook’s Oversight Board (often described as a “Supreme Court” for the platform), it falls into the same trap of transparency without power: the scope is limited to individual cases of content moderation, with no binding authority over the company’s business strategy, algorithmic design, or even similar moderation cases in the future.

Here, too, the real bottleneck is not information or technology, but power: the legal, political and economic pressure necessary to compel companies like Facebook to produce information and to act on it. We see this all too clearly when ordinary people do take up this labour of transparency, and attempt to hold technological systems accountable. In August 2020, Facebook users reported the Kenosha Guard group more than 400 times for incitement of violence. But Facebook declined to take any action until an armed shooter travelled to Kenosha, Wisconsin, and killed two protesters. When transparency is compromised by the concentration of power, it is often the vulnerable who are asked to make up the difference — and then to pay the price.

Transparency cannot solve our problems on its own. In his book The Rise of the Right to Know, journalism scholar Michael Schudson argues that transparency is better understood as a “secondary or procedural morality”: a tool that only becomes effective by other means. We must move beyond the pernicious myth of transparency as a universal solution, and address the distribution of economic and political power that is the root cause of technologically amplified irrationality and injustice….(More)”.

How can stakeholder engagement and mini-publics better inform the use of data for pandemic response?


Andrew Zahuranec, Andrew Young and Stefaan G. Verhulst at the OECD Participo Blog Series:


“What does the public expect from data-driven responses to the COVID-19 pandemic? And under what conditions?” These are the motivating questions behind The Data Assembly, a recent initiative by The GovLab at New York University Tandon School of Engineering — an action research center that aims to help institutions work more openly, collaboratively, effectively, and legitimately.

Launched with support from The Henry Luce Foundation, The Data Assembly solicited diverse, actionable public input on data re-use for crisis response in the United States. In particular, we sought to engage the public on how to facilitate, where deemed acceptable, the re-use of data collected for other purposes to inform the COVID-19 response. One additional objective was to inform the broader emergence of data collaboration — through formal and ad hoc arrangements between the public sector, civil society, and those in the private sector — by evaluating public expectations of and concerns about current institutional, contractual, and technical structures and instruments that may underpin these partnerships.

The Data Assembly used a new methodology that re-imagines how organisations can engage with society to better understand local expectations regarding data re-use and related issues. This work goes beyond soliciting input from just the “usual suspects”. Instead, data assemblies provide a forum for a much more diverse set of participants to share their insights and voice their concerns.

This article is informed by our experience piloting The Data Assembly in New York City in summer 2020. It provides an overview of The Data Assembly’s methodology and outcomes and describes major elements of the effort to support organisations working on similar issues in other cities, regions, and countries….(More)”.

As Jakarta floods again, humanitarian chatbots on social media support community-led disaster response


Blog by Petabencana: “On February 20th, #banjir and #JakartaBanjir were the highest trending topics on Twitter Indonesia, as the capital city was inundated for the third major time this year, following particularly heavy rainfall from Friday night (19/2/2021) to Saturday morning (20/02/2021). As Jakarta residents turned to social media to share updates about the flood, they were greeted by “Disaster Bot” – a novel AI-assisted chatbot that monitors social media for posts about disasters and automatically invites users to submit more detailed disaster reports. These crowd-sourced reports are used to map disasters in real-time, on a free and open source website, PetaBencana.id.
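
The core pattern behind such a chatbot can be sketched in a few lines (the keyword list, reply text and data structures below are illustrative assumptions, not PetaBencana’s actual implementation): watch the public stream for flood-related terms and reply with an invitation to file a structured report.

  FLOOD_KEYWORDS = {"banjir", "flood", "#jakartabanjir"}

  def mentions_flood(text):
      lowered = text.lower()
      return any(keyword in lowered for keyword in FLOOD_KEYWORDS)

  def invitation_for(post):
      # reply only to posts that look like first-hand flood reports
      if mentions_flood(post["text"]):
          return (f"Hi @{post['user']}! To map this flood in real time, please share "
                  "your location and water depth at https://petabencana.id")
      return None

  incoming_posts = [
      {"user": "warga_jkt", "text": "Banjir parah di Kemang pagi ini #JakartaBanjir"},
      {"user": "foodie99", "text": "Best nasi goreng in town, no contest"},
  ]
  for post in incoming_posts:
      reply = invitation_for(post)
      if reply:
          print(reply)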

As flooding blocked major thoroughfares and toll roads, disrupted commuter lines, and cut off electricity to over 60,000 homes, residents continued to share updates about the flood situation in order to stay alert and make timely decisions about safety and response. Hundreds of residents submitted flood reports to PetaBencana.id, alerting each other about water levels, broken infrastructure and road accessibility. The Jakarta Emergency Management Agency also updated the map with official information about flood-affected areas, and monitored the map to respond to residents’ needs. PetaBencana.id experienced a 2,000% increase in activity in under 12 hours as residents actively checked the map to understand the flooding situation, avoid flooded areas, and make decisions about safety and response. 

Residents share updates about flood-affected road access through the open source information sharing platform, PetaBencana.id. Thousands of residents used the map to navigate safely as heavy rainfall inundated the city for the third major time this year.

As flooding incidents continue to occur with increasing intensity across the country, community-led information sharing is once again proving its significance in supporting response and planning at multiple scales. …(More)”.

A New Way to Inoculate People Against Misinformation


Article by Jon Roozenbeek, Melisa Basol, and Sander van der Linden: “From setting mobile phone towers on fire to refusing critical vaccinations, we know the proliferation of misinformation online can have massive, real-world consequences.

For those who want to avert those consequences, it makes sense to try and correct misinformation. But as we now know, misinformation—both intentional and unintentional—is difficult to fight once it’s out in the digital wild. The pace at which unverified (and often false) information travels makes any attempt to catch up to, retrieve, and correct it an ambitious endeavour. We also know that viral information tends to stick, that repeated misinformation is more likely to be judged as true, and that people often continue to believe falsehoods even after they have been debunked.

Instead of fighting misinformation after it’s already spread, some researchers have shifted their strategy: they’re trying to prevent it from going viral in the first place, an approach known as “prebunking.” Prebunking attempts to explain how people can resist persuasion by misinformation. Grounded in inoculation theory, the approach uses the analogy of biological immunization. Just as weakened exposure to a pathogen triggers antibody production, inoculation theory posits that pre-emptively exposing people to a weakened persuasive argument builds people’s resistance against future manipulation.

But while inoculation is a promising approach, it has its limitations. Traditional inoculation messages are issue-specific, and have often remained confined to the particular context that you want to inoculate people against. For example, an inoculation message might forewarn people that false information is circulating which encourages them to drink bleach as a cure for the coronavirus. Although that may help stop bleach drinking, this messaging doesn’t pre-empt misinformation about other fake cures. As a result, prebunking approaches haven’t easily adapted to the changing misinformation landscape, making them difficult to scale.

However, our research suggests that there may be another way to inoculate people that preserves the benefits of prebunking: it may be possible to build resistance against misinformation in general, rather than fighting it one piece at a time….(More)”.

The (il)logic of legibility – Why governments should stop simplifying complex systems


Thea Snow at LSE Blog: “Sometimes, you learn about an idea that really sticks with you. This happened to me recently when I learnt about “legibility” — a concept which James C. Scott introduces in his book Seeing Like a State.

Just last week, I was involved in two conversations which highlighted how pervasive the logic of legibility continues to be in influencing how governments think and act. But first, what is legibility?

Defining Legibility

Legibility describes the very human tendency to simplify complex systems in order to exert control over them.

In this blog, Venkatesh Rao offers a recipe for legibility:

  • Look at a complex and confusing reality…
  • Fail to understand all the subtleties of how the complex reality works
  • Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
  • Come up with an idealized blank-slate vision of what that reality ought to look like
  • Argue that the relative simplicity and platonic orderliness of the vision represents rationality
  • Use power to impose that vision, by demolishing the old reality if necessary.

Rao explains: “The big mistake in this pattern of failure is projecting your subjective lack of comprehension onto the object you are looking at, as “irrationality.” We make this mistake because we are tempted by a desire for legibility.”

Scott uses modern forestry practices as an example of the practice of legibility. Hundreds of years ago, forests acted as many things — they were places people harvested wood, but also places where locals went foraging and hunting, as well as an ecosystem for animals and plants. According to the logic of scientific forestry practices, forests would be much more valuable if they just produced timber. To achieve this, they had to be made legible.

So, modern agriculturalists decided to clear-cut forests and plant perfectly straight rows of a particular species of fast-growing tree. It was assumed this would be more efficient. Planting just one species meant the quality of timber would be predictable. In addition, the straight rows would make it easy to know exactly how much timber was there, and would mean timber production could be easily monitored and controlled.

[Image reproduced from https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/]

For the first generation of trees, the agriculturalists achieved higher yields, and there was much celebration and self-congratulation. But, after about a century, the problems of ecosystem collapse started to reveal themselves. In imposing a logic of order and control, scientific forestry destroyed the complex, invisible, and unknowable network of relationships between plants, animals and people that is necessary for a forest to thrive.

After a century it became apparent that relationships between plants and animals were so distorted that pests were destroying crops. The nutrient balance of the soil was disrupted. And after the first generation of trees, the forest was not thriving at all….(More)”.

Robot census: Gathering data to improve policymaking on new technologies


Essay by Robert Seamans: “There is understandable excitement about the impact that new technologies like artificial intelligence (AI) and robotics will have on our economy. In our everyday lives, we already see the benefits of these technologies: when we use our smartphones to navigate from one location to another using the fastest available route, or when a predictive typing algorithm helps us finish a sentence in our email. At the same time, there are concerns about possible negative effects of these new technologies on labor. The Councils of Economic Advisers of the past two Administrations have addressed these issues in the annual Economic Report of the President (ERP). For example, the 2016 ERP included a chapter on technology and innovation that linked robotics to productivity and growth, and the 2019 ERP included a chapter on artificial intelligence that discussed the uneven effects of technological change. Both chapters used data at highly aggregated levels, in part because that is the data that is available. As I’ve noted elsewhere, AI and robots are everywhere, except, as it turns out, in the data.

To date, there have been no large-scale, systematic studies in the U.S. on how robots and AI affect productivity and labor in individual firms or establishments (a firm could own one or more establishments, which, for example, could be a plant in a manufacturing setting or a storefront in a retail setting). This is because the data are scarce. Academic researchers interested in the effects of AI and robotics on economic outcomes have mostly used aggregate country- and industry-level data. Very recently, some have studied these issues at the firm level using data on robot imports to France, Spain, and other countries. I review a few of these academic papers in both categories below, which provide early findings on the nuanced effects these new technologies have on labor. Thanks to some excellent work being done by the U.S. Census Bureau, however, we may soon have more data to work with. This includes new questions on robot purchases in the Annual Survey of Manufactures and the Annual Capital Expenditures Survey, and new questions on other technologies, including cloud computing and machine learning, in the Annual Business Survey….(More)”.