Bridging the data-policy gap in Africa


Report by PARIS21 and the Mo Ibrahim Foundation (MIF): “National statistics are an essential component of policymaking: they provide the evidence required to design policies that address the needs of citizens, to monitor results and hold governments to account. Data and policy are closely linked. As Mo Ibrahim puts it: ‘without data, governments drive blind’. However, there is evidence that the capacity of African governments for data-driven policymaking remains limited by a wide data-policy gap.

What is the data-policy gap?
On the data side, statistical capacity across the continent has improved in recent decades. However, it remains low compared to other world regions and is hindered by several challenges. African national statistical offices (NSOs) often lack adequate financial and human resources as well as the capacity to provide accessible and available data. On the policy side, data literacy as well as a culture of placing data first in policy design and monitoring are still not widespread. Thus, investing in the basic building blocks of national statistics, such as civil registration, is often not a key priority.

At the same time, international development frameworks, such as the United Nations 2030 Agenda for Sustainable Development and the African Union Agenda 2063, require that every signatory country produce and use high-quality, timely and disaggregated data in order to shape development policies that leave no one behind and to fulfil reporting commitments.

Also, the new data ecosystem linked to digital technologies is providing an explosion of data sourced from non-state providers. Within this changing data landscape, African NSOs, like those in many other parts of the world, are confronted with a new data stewardship role. This will add further pressure on the capacity of NSOs, and presents additional challenges in terms of navigating issues of governance and use…

Recommendations as part of a six-point roadmap for bridging the data-policy gap include:

  1. Creating a statistical capacity strategy to raise funds
  2. Connecting to knowledge banks to hire and retain talent
  3. Building good narratives for better data use
  4. Recognising the power of foundational data
  5. Strengthening statistical laws to harness the data revolution
  6. Encouraging data use in policy design and implementation…(More)”

Why bad times call for good data


Tim Harford in the Financial Times: “Watching the Ever Given wedge itself across the Suez Canal, it would have taken a heart of stone not to laugh. But it was yet another unpleasant reminder that the unseen gears in our global economy can all too easily grind or stick.

From the shutdown of Texas’s plastic polymer manufacturing to a threat to vaccine production from a shortage of giant plastic bags, we keep finding out the hard way that modern life relies on weak links in surprising places.

So where else is infrastructure fragile and taken for granted? I worry about statistical infrastructure — the standards and systems we rely on to collect, store and analyse our data.

Statistical infrastructure sounds less important than a bridge or a power line, but it can mean the difference between life and death for millions. Consider Recovery (Randomised Evaluation of Covid-19 Therapy). Set up in a matter of days by two Oxford academics, Martin Landray and Peter Horby, over the past year Recovery has enlisted hospitals across the UK to run randomised trials of treatments such as the antimalarial drug hydroxychloroquine and the cheap steroid dexamethasone.

With minimal expense and paperwork, it turned the guesses of physicians into simple but rigorous clinical trials. The project quickly found that dexamethasone was highly effective as a treatment for severe Covid-19, thereby saving a million lives.

Recovery relied on data accumulated as hospitals treated patients and updated their records. It wasn’t always easy to reconcile the different sources — some patients were dead according to one database and alive on another. But such data problems are solvable and were solved. A modest amount of forethought about collecting the right data in the right way has produced enormous benefits….
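
Reconciling records like these typically comes down to simple cross-source consistency checks. The snippet below is a hypothetical, minimal sketch of such a check, not the Recovery trial's actual pipeline; the table names and fields are illustrative assumptions.

```python
import pandas as pd

# Hypothetical extracts from two hospital data sources (illustrative schema, not Recovery's)
registry = pd.DataFrame({"patient_id": [101, 102, 103],
                         "vital_status": ["alive", "dead", "alive"]})
episodes = pd.DataFrame({"patient_id": [101, 102, 103],
                         "vital_status": ["alive", "alive", "dead"]})

merged = registry.merge(episodes, on="patient_id", suffixes=("_registry", "_episodes"))
# Flag patients whose recorded status disagrees between the two sources for manual review
conflicts = merged[merged["vital_status_registry"] != merged["vital_status_episodes"]]
print(conflicts)
```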

But it isn’t just poor countries that have suffered. In the US, data about Covid-19 testing was collected haphazardly by states. This left the federal government flying blind, unable to see where and how quickly the virus was spreading. Eventually volunteers, led by the journalists Robinson Meyer and Alexis Madrigal of the Covid Tracking Project, put together a serviceable data dashboard. “We have come to see the government’s initial failure here as the fault on which the entire catastrophe pivots,” wrote Meyer and Madrigal in The Atlantic. They are right.

What is more striking is that the weakness was there in plain sight. Madrigal recently told me that the government’s plan for dealing with a pandemic assumed that good data would be available — but did not build the systems to create them. It is hard to imagine a starker example of taking good statistical infrastructure for granted….(More)”.

Global inequality remotely sensed


Paper by M. Usman Mirza et al: “Economic inequality is notoriously difficult to quantify as reliable data on household incomes are missing for most of the world. Here, we show that a proxy for inequality based on remotely sensed nighttime light data may help fill this gap. Individual households cannot be remotely sensed. However, as households tend to segregate into richer and poorer neighborhoods, the correlation between light emission and economic thriving shown in earlier studies suggests that spatial variance of remotely sensed light per person might carry a signal of economic inequality.

To test this hypothesis, we quantified Gini coefficients of the spatial variation in average nighttime light emitted per person. We found a significant relationship between the resulting light-based inequality indicator and existing estimates of net income inequality. This correlation between light-based Gini coefficients and traditional estimates exists not only across countries, but also on a smaller spatial scale comparing the 50 states within the United States. The remotely sensed character makes it possible to produce high-resolution global maps of estimated inequality. The inequality proxy is entirely independent from traditional estimates as it is based on observed light emission rather than self-reported household incomes. Both are imperfect estimates of true inequality. However, their independent nature implies that the light-based proxy could be used to constrain uncertainty in traditional estimates. More importantly, the light-based Gini maps may provide an estimate of inequality where previously no data were available at all….(More)”.
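
The core of the method is standard: compute the Gini coefficient of light emitted per person across spatial units (e.g. grid cells) and compare it with income-based estimates of inequality. The sketch below is a minimal illustration of that calculation, assuming hypothetical per-cell light and population arrays; the paper's actual preprocessing, cell definitions and weighting may differ.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values; 0 = perfect equality, 1 = maximal inequality."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    total = v.sum()
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum_i(i * v_i)) / (n * sum_i(v_i)) - (n + 1) / n, with v sorted ascending
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * v)) / (n * total) - (n + 1.0) / n

# Hypothetical per-cell values for one region: total night-light radiance and population
light = np.array([12.0, 3.5, 0.8, 25.0, 1.2])
population = np.array([10_000, 8_000, 5_000, 20_000, 4_000])
light_per_person = light / population   # average light emitted per person in each cell
print(round(gini(light_per_person), 3))
```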

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence


Press Release: “The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation….(More)”.

We’re Beating Systems Change to Death


Essay by Kevin Starr: “Systems change! Just saying the words aloud makes me feel like one of the cognoscenti, one of the elite who has transcended the ways of old-school philanthropy. Those two words capture our aspirations of lasting impact at scale: systems are big, and if you manage to change them, they’ll keep spinning out impact forever. Why would you want to do anything else?

There’s a problem, though. “Systems analysis” is an elegant and useful way to think about problems and get ideas for solutions, but “systems change” is accelerating toward buzzword purgatory. It’s so sexy that everyone wants to use it for everything. …

But when you rummage through the growing literature on systems change thinking, there are in fact a few recurring themes. One is the need to tackle the root causes of any problem you take on. Another is that a broad coalition must be assembled ASAP. Finally, the most salient theme is the notion that the systems involved are transformed as a result of the work (although in many of the examples I read about, it’s not articulated clearly just what system is being changed).

Taken individually or as a whole, these themes point to some of the ways in which systems change is a less-than-ideal paradigm for the work we need to get done:

1. It’s too hard to know to what degree systems change is or isn’t happening. It may be the case that “not everything that matters can be counted,” but most of the stuff that matters can, and it’s hard to get better at something if you’re unable to measure it. But these words of a so-called expert on systems change measurement are typical of what I’ve seen in the literature: “Measuring systems change is about detecting patterns in the connections between the parts. It is about qualitative changes in the structure of the system, about its adaptiveness and resilience, about synergies emerging from collective efforts—and more…”

Like I said, it’s too hard to know what is or isn’t happening.

2. “Root cause” thinking can—paradoxically—bog down progress. “Root cause” analysis is a common feature of most systems change discussions, and it’s a wonderful tool to generate ideas and avoid unintended consequences. However, broad efforts to tackle all of a problem’s root causes can turn anything into a complicated, hard-to-replicate project. It can also make things look so overwhelming as to result in a kind of paralysis. And however successful a systems change effort might be, that complication makes it hard to replicate, and you’re often stuck with a one-off project….(More)”.

Re-Thinking Think Tanks: Differentiating Knowledge-Based Policy Influence Organizations


Paper by Adam Wellstead and Michael P. Howlett: “The idea of ‘think tanks’ is one of the oldest in the policy sciences. While the topic has been studied for decades, however, recent work dealing with advocacy groups, policy and Behavioural Insight labs, and with the activities of think tanks themselves has led to discontent with the definitions used in the field, and especially with the way the term may obfuscate rather than clarify important distinctions between different kinds of knowledge-based policy influence organizations (KBPIO). In this paper, we examine the traditional and current definitions of think tanks utilized in the discipline and point out their weaknesses. We then develop a new framework to better capture the variation in such organizations which operate in many sectors….(More)”.

Knowledge Assets in Government


Draft Guidance by HM Treasury (UK): “Embracing innovation is critical to the future of the UK’s economy, society and its place in the world. However, one of the key findings of HM Treasury’s knowledge assets report, published at Budget 2018, was that there was little clear strategic guidance on how to realise value from intangibles or knowledge assets such as intellectual property, research & development, and data, which are pivotal for innovation.

This new draft guidance establishes the concept of managing knowledge assets in government and the public sector. It focuses on how to identify, protect and support their exploitation to help maximise the social, economic and financial value they generate.

The guidance provided in this document is intended to advise and support organisations in scope with their knowledge asset management and, in turn, fulfil their responsibilities as set out in Managing Public Money (MPM). While the guidance clarifies best practice and provides recommendations, these should not be interpreted as additional rules. The draft guidance recommends that organisations:

  • develop a strategy for managing their knowledge assets, as part of their wider asset management strategy (a requirement of MPM)
  • appoint a Senior Responsible Owner (SRO) for knowledge assets who has clear responsibility for the organisation’s knowledge asset management strategy…(More)“.

Innovation in Real Places: Strategies for Prosperity in an Unforgiving World



Book by Dan Breznitz: “Across the world, cities and regions have wasted trillions of dollars blindly copying the Silicon Valley model of growth creation. We have lived with this system for decades, and the result is clear: a small number of regions and cities are at the top of the high-tech industry, but many more are fighting a losing battle to retain economic dynamism. But, as this book details, there are other models for innovation-based growth that don’t rely on a flourishing high-tech industry. Breznitz argues that the purveyors of the dominant ideas on innovation have a feeble understanding of the big picture on global production and innovation.

They conflate innovation with invention and suffer from techno-fetishism. In their devotion to start-ups, they refuse to admit that the real obstacle to growth for most cities is the overwhelming power of the real hubs, which siphon up vast amounts of talent and money. Communities waste time, money, and energy pursuing this road to nowhere. Instead, Breznitz proposes that communities focus on where they fit within the four stages in the global production process. Success lies in understanding the changed structure of the global system of production and then using those insights to enable communities to recognize their own advantages, which in turn allows them to foster surprising forms of specialized innovation. All localities have certain advantages relative to at least one stage of the global production process, and the trick is in recognizing it….(More)”.

The Co-Creation Compass: From Research to Action.


Policy Brief by Jill Dixon et al: “Modern public administrations face a wider range of challenges than in the past, from designing effective social services that help vulnerable citizens, to regulating data sharing between banks and fintech startups to ensure competition and growth, to mainstreaming gender policies effectively across the departments of a large public administration.

These very different goals have one thing in common. To be solved, they require collaboration with other entities – citizens, companies and other public administrations and departments. The buy-in of these entities is the factor determining success or failure in achieving the goals. To help resolve this problem, social scientists, researchers and students of public administration have devised several novel tools, some of which draw heavily on the most advanced management thinking of the last decade.

First and foremost is co-creation – an awkward-sounding word for a relatively simple idea: the notion that better services can be designed and delivered by listening to users, by creating feedback loops where their success (or failure) can be studied, by frequently innovating and iterating incremental improvements through small-scale experimentation so they can deliver large-scale learnings, and by ultimately involving users themselves in designing the way these services can be made most effective and best be delivered.

Co-creation tools and methods provide a structured manner for involving users, thereby maximising the probability of satisfaction, buy-in and adoption. As such, co-creation is not a digital tool; it is a governance tool. There is little doubt that working with citizens in re-designing the online service for school registration will boost the usefulness and effectiveness of the service. And failing to do so will result in yet another digital service struggling to gain adoption….(More)”

Data Is Power: Washington Needs to Craft New Rules for the Digital Age


Matthew Slaughter and David McCormick at Foreign Affairs: “…Working with all willing and like-minded nations, it should seek a structure for data that maximizes its immense economic potential without sacrificing privacy and individual liberty. This framework should take the form of a treaty that has two main parts.

First would be a set of binding principles that would foster the cross-border flow of data in the most data-intensive sectors—such as energy, transportation, and health care. One set of principles concerns how to value data and determine where it was generated. Just as traditional trade regimes require goods and services to be priced and their origins defined, so, too, must this framework create a taxonomy to classify data flows by value and source. Another set of principles would set forth the privacy standards that governments and companies would have to follow to use data. (Anonymizing data, made easier by advances in encryption and quantum computing, will be critical to this step.) A final principle, which would be conditional on achieving the other two, would be to promote as much cross-border and open flow of data as possible. Consistent with the long-established value of free trade, the parties should, for example, agree to not levy taxes on data flows—and diligently enforce that rule. And they would be wise to ensure that any negative impacts of open data flows, such as job losses or reduced wages, are offset through strong programs to help affected workers adapt to the digital economy.

Such standards would benefit every sector they applied to. Envision, for example, dozens of nations with data-sharing arrangements for autonomous vehicles, oncology treatments, and clean-tech batteries. Relative to their experience in today’s Balkanized world, researchers would be able to discover more data-driven innovations—and in more countries, rather than just in those that already have a large presence in these industries.

The second part of the framework would be free-trade agreements regulating the capital goods, intermediate inputs, and final goods and services of the targeted sectors, all in an effort to maximize the gains that might arise from data-driven innovations. Thus would the traditional forces of comparative advantage and global competition help bring new self-driving vehicles, new lifesaving chemotherapy compounds, and new sources of renewable energy to participating countries around the world. 

There is already a powerful example of such agreements. In 1996, dozens of countries accounting for nearly 95 percent of world trade in information technology ratified the Information Technology Agreement, a multilateral trade deal under the WTO. The agreement ultimately eliminated all tariffs for hundreds of IT-related capital goods, intermediate inputs, and final products—from machine tools to motherboards to personal computers. The agreement proved to be an important impetus for the subsequent wave of the IT revolution, a competitive spur that led to productivity gains for firms and price declines for consumers….(More)”.