Why bad times call for good data


Tim Harford in the Financial Times: “Watching the Ever Given wedge itself across the Suez Canal, it would have taken a heart of stone not to laugh. But it was yet another unpleasant reminder that the unseen gears in our global economy can all too easily grind or stick.

From the shutdown of Texas’s plastic polymer manufacturing to a threat to vaccine production from a shortage of giant plastic bags, we keep finding out the hard way that modern life relies on weak links in surprising places.

So where else is infrastructure fragile and taken for granted? I worry about statistical infrastructure — the standards and systems we rely on to collect, store and analyse our data.

Statistical infrastructure sounds less important than a bridge or a power line, but it can mean the difference between life and death for millions. Consider Recovery (Randomised Evaluations of Covid-19 Therapy). Set up in a matter of days by two Oxford academics, Martin Landray and Peter Horby, over the past year Recovery has enlisted hospitals across the UK to run randomised trials of treatments such as the antimalarial drug hydroxychloroquine and the cheap steroid dexamethasone.

With minimal expense and paperwork, it turned the guesses of physicians into simple but rigorous clinical trials. The project quickly found that dexamethasone was highly effective as a treatment for severe Covid-19, thereby saving a million lives.

Recovery relied on data accumulated as hospitals treated patients and updated their records. It wasn’t always easy to reconcile the different sources — some patients were dead according to one database and alive on another. But such data problems are solvable and were solved. A modest amount of forethought about collecting the right data in the right way has produced enormous benefits….

But it isn’t just poor countries that have suffered. In the US, data about Covid-19 testing was collected haphazardly by states. This left the federal government flying blind, unable to see where and how quickly the virus was spreading. Eventually volunteers, led by the journalists Robinson Meyer and Alexis Madrigal of the Covid Tracking Project, put together a serviceable data dashboard. “We have come to see the government’s initial failure here as the fault on which the entire catastrophe pivots,” wrote Meyer and Madrigal in The Atlantic. They are right.

What is more striking is that the weakness was there in plain sight. Madrigal recently told me that the government’s plan for dealing with a pandemic assumed that good data would be available — but did not build the systems to create them. It is hard to imagine a starker example of taking good statistical infrastructure for granted….(More)”.

Global inequality remotely sensed


Paper by M. Usman Mirza et al: “Economic inequality is notoriously difficult to quantify as reliable data on household incomes are missing for most of the world. Here, we show that a proxy for inequality based on remotely sensed nighttime light data may help fill this gap. Individual households cannot be remotely sensed. However, as households tend to segregate into richer and poorer neighborhoods, the correlation between light emission and economic thriving shown in earlier studies suggests that spatial variance of remotely sensed light per person might carry a signal of economic inequality.

To test this hypothesis, we quantified Gini coefficients of the spatial variation in average nighttime light emitted per person. We found a significant relationship between the resulting light-based inequality indicator and existing estimates of net income inequality. This correlation between light-based Gini coefficients and traditional estimates exists not only across countries, but also on a smaller spatial scale comparing the 50 states within the United States. The remotely sensed character makes it possible to produce high-resolution global maps of estimated inequality. The inequality proxy is entirely independent from traditional estimates as it is based on observed light emission rather than self-reported household incomes. Both are imperfect estimates of true inequality. However, their independent nature implies that the light-based proxy could be used to constrain uncertainty in traditional estimates. More importantly, the light-based Gini maps may provide an estimate of inequality where previously no data were available at all….(More)”.
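The core of the method is an ordinary Gini coefficient, computed over per-capita light emission across spatial units rather than over household incomes. A minimal sketch, using made-up per-cell light and population figures purely for illustration (the paper works with actual satellite nighttime-light rasters and gridded population data):

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values
    (0 = perfect equality, approaching 1 = maximal inequality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard formula based on the rank-weighted sum of sorted values
    return float(2 * (ranks * x).sum() / (n * x.sum()) - (n + 1) / n)

# Hypothetical grid cells: total light emission and resident population
light = np.array([5.0, 40.0, 120.0, 8.0, 60.0])
population = np.array([1000, 2000, 1500, 3000, 2500])

# The proxy: inequality of light emitted *per person* across cells
light_per_person = light / population
print(round(gini(light_per_person), 3))  # ≈ 0.528
```

Because neighbourhoods sort by wealth, dispersion in per-person light across cells carries the inequality signal even though individual households are invisible to the satellite.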

Ideology and Performance in Public Organizations


NBER Working Paper by Jorg L. Spenkuch, Edoardo Teso & Guo Xu: “We combine personnel records of the United States federal bureaucracy from 1997-2019 with administrative voter registration data to study how ideological alignment between politicians and bureaucrats affects the personnel policies and performance of public organizations. We present four results. (i) Consistent with the use of the spoils system to align ideology at the highest levels of government, we document significant partisan cycles and substantial turnover among political appointees. (ii) By contrast, we find virtually no political cycles in the civil service. The lower levels of the federal government resemble a “Weberian” bureaucracy that appears to be largely protected from political interference. (iii) Democrats make up the plurality of civil servants. Overrepresentation of Democrats increases with seniority, with the difference in career progression being largely explained by positive selection on observables. (iv) Political misalignment carries a sizeable performance penalty. Exploiting presidential transitions as a source of “within-bureaucrat” variation in the political alignment of procurement officers over time, we find that contracts overseen by a misaligned officer exhibit cost overruns that are, on average, 8% higher than the mean overrun. We provide evidence that is consistent with a general “morale effect,” whereby misaligned bureaucrats are less motivated….(More)”

Digitally Kind


Report by Anna Grant with Cliff Manning and Ben Thurman: “Over the past decade, and particularly since the outbreak of the COVID-19 pandemic, we have seen increasing use of digital technology in service provision by third and public sector organisations. But this increasing use brings challenges. The development and use of these technologies often outpace the organisational structures put in place to improve delivery and protect both individuals and organisations.

Digitally Kind is devised to help bridge the gaps between digital policy, process and practice to improve outcomes, introducing kindness as a value to underpin an organisational approach.

Based on workshops with over 40 practitioners and frontline staff, the report has been designed as a starting point to support organisations in opening up conversations around their use of digital in delivering services. Digitally Kind explores a range of technical, social and cultural considerations around the use of tech when working with individuals, covering values and governance; access; safety and wellbeing; knowledge and skills; and participation.

While the project predominantly focused on the experiences of practitioners and organisations working with young people, many of the principles hold true for other sectors. The research also highlights a short set of considerations for funders, policymakers (including regulators) and online platforms….(More)”.

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
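The four tiers can be read as a simple classification scheme. The sketch below is purely illustrative: the proposed Regulation classifies systems by detailed legal criteria and intended purpose, not by keyword lookup, and the mapping here is distilled only from the examples given in this press release.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical keyword map, assembled from the press release's own examples.
RISK_BY_USE = {
    "social scoring by governments": Risk.UNACCEPTABLE,
    "toys encouraging dangerous behaviour": Risk.UNACCEPTABLE,
    "cv-sorting for recruitment": Risk.HIGH,
    "exam scoring": Risk.HIGH,
    "credit scoring": Risk.HIGH,
    "remote biometric identification": Risk.HIGH,
    "chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
    "video game": Risk.MINIMAL,
}

def classify(use_case: str) -> Risk:
    """Look up a use case; unknown uses default to the minimal tier,
    mirroring the proposal's statement that most AI systems fall there."""
    return RISK_BY_USE.get(use_case.lower(), Risk.MINIMAL)
```

The default-to-minimal choice reflects the press release's claim that the vast majority of AI systems fall outside the regulated tiers; a real compliance assessment would of course work the other way, checking each high-risk criterion explicitly.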

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.

Tech tools help deepen citizen input in drafting laws abroad and in U.S. states


Gopal Ratnam at RollCall: “Earlier this month, New Jersey’s Department of Education launched a citizen engagement process asking students, teachers and parents to vote on ideas for changes that officials should consider as the state reopens its schools after the pandemic closed classrooms for a year. 

The project, managed by The Governance Lab at New York University’s Tandon School of Engineering, is part of a monthlong nationwide effort using an online survey tool called All Our Ideas to help state education officials prioritize policymaking based on ideas solicited from those who are directly affected by the policies.

Among the thousands of votes cast for various ideas nationwide, teachers and parents backed changes that would teach more problem-solving skills to kids. But students backed a different idea as the most important: making sure that kids have social and emotional skills, as well as “self-awareness and empathy.” 
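All Our Ideas is a pairwise "wiki survey": respondents repeatedly pick between two ideas, and ideas are ranked by how they fare head-to-head. A minimal sketch of that tallying logic, with hypothetical vote data echoing the ideas above (a simplification: the real tool estimates scores with a statistical model rather than raw win rates):

```python
from collections import defaultdict

def win_rates(pairwise_votes):
    """Rank ideas by share of head-to-head contests won.

    `pairwise_votes` is a list of (winner, loser) tuples, the raw
    record of each respondent's choice between two ideas."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in pairwise_votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    rates = {idea: wins[idea] / appearances[idea] for idea in appearances}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ballots: each tuple is (chosen idea, rejected idea)
votes = [
    ("social-emotional skills", "problem-solving"),
    ("social-emotional skills", "longer school day"),
    ("problem-solving", "longer school day"),
]
print(win_rates(votes))  # top-ranked: "social-emotional skills"
```

Because every ballot is a forced choice between two concrete options, the method surfaces priorities without requiring respondents to score or rank the full list of ideas.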

A government body soliciting ideas from those who are directly affected, via online technology, is one small example of greater citizen participation in governance that advocates hope can grow at both state and federal levels….

Taiwan has taken crowdsourcing legislative ideas to new heights.

Using a variety of open-source engagement and consultation tools that are collectively known as the vTaiwan process, government ministries, elected representatives, experts, civil society groups, businesses and ordinary citizens come together to produce legislation. 

The need for an open consultation process stemmed from the 2014 Sunflower Student Movement, when groups of students and others occupied the Taiwanese parliament to protest the fast-tracking of a trade agreement with China with little public review.  

After the country’s parliament acceded to the demands, the “consensus opinion was that instead of people having to occupy the parliament every time there’s a controversial, emergent issue, it might actually work better if we have a consultation mechanism in the very beginning of the issue rather than at the end,” said Audrey Tang, Taiwan’s digital minister. …

At about the same time that Taiwan’s Sunflower movement was unfolding, in Brazil then-President Dilma Rousseff signed into law the country’s internet bill of rights in April 2014. 

The bill was drafted and refined through a consultative process that included not only legal and technical experts but average citizens as well, said Debora Albu, program coordinator at the Institute for Technology and Society of Rio, also known as ITS. 

The institute was involved in designing the platform for seeking public participation, Albu said. 

“From then onwards, we wanted to continue developing projects that incorporated this idea of collective intelligence built into the development of legislation or public policies,” Albu said….(More)”.

We’re Beating Systems Change to Death


Essay by Kevin Starr: “Systems change! Just saying the words aloud makes me feel like one of the cognoscenti, one of the elite who has transcended the ways of old-school philanthropy. Those two words capture our aspirations of lasting impact at scale: systems are big, and if you manage to change them, they’ll keep spinning out impact forever. Why would you want to do anything else?

There’s a problem, though. “Systems analysis” is an elegant and useful way to think about problems and get ideas for solutions, but “systems change” is accelerating toward buzzword purgatory. It’s so sexy that everyone wants to use it for everything. …

But when you rummage through the growing literature on systems change thinking, there are in fact a few recurring themes. One is the need to tackle the root causes of any problem you take on. Another is that a broad coalition must be assembled ASAP. Finally, the most salient theme is the notion that the systems involved are transformed as a result of the work (although in many of the examples I read about, it’s not articulated clearly just what system is being changed).

Taken individually or as a whole, these themes point to some of the ways in which systems change is a less-than-ideal paradigm for the work we need to get done:

1. It’s too hard to know to what degree systems change is or isn’t happening. It may be the case that “not everything that matters can be counted,” but most of the stuff that matters can, and it’s hard to get better at something if you’re unable to measure it. But these words of a so-called expert on systems change measurement are typical of what I’ve seen in the literature: “Measuring systems change is about detecting patterns in the connections between the parts. It is about qualitative changes in the structure of the system, about its adaptiveness and resilience, about synergies emerging from collective efforts—and more…”

Like I said, it’s too hard to know what is or isn’t happening.

2. “Root cause” thinking can—paradoxically—bog down progress. “Root cause” analysis is a common feature of most systems change discussions, and it’s a wonderful tool to generate ideas and avoid unintended consequences. However, broad efforts to tackle all of a problem’s root causes can turn anything into a complicated, hard-to-replicate project. It can also make things look so overwhelming as to result in a kind of paralysis. And however successful a systems change effort might be, that complication makes it hard to replicate, and you’re often stuck with a one-off project….(More)”.

Digital Identity, Virtual Borders and Social Media: A Panacea for Migration Governance?


Book edited by Emre Eren Korkmaz: “…discusses how states deploy frontier and digital technologies to manage and control migratory movements. Assessing blockchain technologies for digital identities and cash transfers; artificial intelligence for smart borders, the resettlement of refugees and the assessment of asylum applications; and social media and mobile phone applications used to track and surveil migrants, it critically examines the consequences of new technological developments and evaluates their impact on the rights of migrants and refugees.

Chapters evaluate the technology-based public-private projects that govern migration globally and illustrate the political implications of these virtual borders. International contributors compare and contrast different forms of political expression, both in personal technologies, such as social media used by refugees and smugglers, and in the automated decision-making algorithms that states use to enable migration governance. This timely book challenges the hegemonic approach to migration governance and provides cases demonstrating the dangers of employing frontier technologies that deny the basic rights, liberties and agency of migrants and refugees.

Stepping into a contentious political climate for migrants and refugees, this provocative book is ideal reading for scholars and researchers of political science and public policy, particularly those focusing on migration and refugee studies. It will also benefit policymakers and practitioners dealing with migration, such as humanitarian NGOs, UN agencies and local authorities….(More)”.

Re-Thinking Think Tanks: Differentiating Knowledge-Based Policy Influence Organizations


Paper by Adam Wellstead and Michael P. Howlett: “The idea of “think tanks” is one of the oldest in the policy sciences. While the topic has been studied for decades, however, recent work dealing with advocacy groups, policy and Behavioural Insight labs, and the activities of think tanks themselves has led to discontent with the definitions used in the field, and especially with the way the term may obfuscate rather than clarify important distinctions between different kinds of knowledge-based policy influence organizations (KBPIO). In this paper, we examine the traditional and current definitions of think tanks utilized in the discipline and point out their weaknesses. We then develop a new framework to better capture the variation in such organizations which operate in many sectors….(More)”.

Knowledge Assets in Government


Draft Guidance by HM Treasury (UK): “Embracing innovation is critical to the future of the UK’s economy, society and its place in the world. However, one of the key findings of HM Treasury’s knowledge assets report, published at Budget 2018, was that there was little clear strategic guidance on how to realise value from intangibles or knowledge assets such as intellectual property, research & development, and data, which are pivotal for innovation.

This new draft guidance establishes the concept of managing knowledge assets in government and the public sector. It focuses on how to identify, protect and support their exploitation to help maximise the social, economic and financial value they generate.

The guidance provided in this document is intended to advise and support organisations in scope with their knowledge asset management and, in turn, fulfil their responsibilities as set out in MPM. While the guidance clarifies best practice and provides recommendations, these should not be interpreted as additional rules. The draft guidance recommends that organisations:

  • develop a strategy for managing their knowledge assets, as part of their wider asset management strategy (a requirement of MPM)
  • appoint a Senior Responsible Owner (SRO) for knowledge assets who has clear responsibility for the organisation’s knowledge asset management strategy…(More)“.