Whatever happened to evidence-based policy making?


Speech by Professor Gary Banks: “One of the challenges in talking about EBPM (evidence-based policy making), which I had not fully appreciated last time, was that it means different things to different people, especially academics. As a result, disagreements, misunderstandings and controversies (or faux controversies) have abounded. And these may have contributed to the demise of the expression, if not the concept.

For example, some have interpreted the term EBPM so literally as to insist that the word “based” be replaced by “influenced”, arguing that policy decisions are rarely based on evidence alone. That of course is true, but few using the term (myself included) would have thought otherwise. And I am sure no-one in an audience such as this, especially in our nation’s capital, believes policy decisions could derive solely from evidence — or even rational analysis!

If you’ll pardon a quotation from my earlier address: “Values, interests, personalities, timing, circumstance and happenstance – in short, democracy – determine what actually happens” (EBPM: What is it? How do we get it?). Indeed it is precisely because of such multiple influences, that “evidence” has a potentially significant role to play.

So, adopting the position from Alice in Wonderland, I am inclined to stick with the term EBPM, which I choose to mean an approach to policy-making that makes systematic provision for evidence and analysis. Far from the deterministic straw man depicted in certain academic articles, it is an approach that seeks to achieve policy decisions that are better informed in a substantive sense, accepting that they will nevertheless ultimately be – and in a democracy need to be — political in nature.

A second and more significant area of debate concerns the meaning and value of “evidence” itself. There are a number of strands involved.

Evidentiary elitism?

One relates to methodology, and can be likened to the differences between the thresholds for a finding of guilt under civil and criminal law (“balance of probabilities” versus “beyond reasonable doubt”).

Some analysts have argued that, to be useful for policy, evidence must involve rigorous unbiased research techniques, the “gold standard” for which are “randomized controlled trials”. The “randomistas”, to use the term which headlines Andrew Leigh’s new book (Leigh, 2018), claim that only such a methodology is able to truly tell us “what works”.

However, adopting this exacting standard from the medical research world would leave policy makers with an excellent tool of limited application. Its forte is testing a specific policy or program relative to business as usual, akin to drug tests involving a placebo for a control group. And there are some inspiring examples of insights gained. But for many areas of public policy the technique is not practicable. Even where it is, it requires that a case has to some extent already been made. And while it can identify the extent to which a particular program “works”, it is less useful for understanding why, or whether something else might work even better.
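
To make the logic concrete, here is a minimal sketch of the arithmetic behind an RCT: a difference in means between randomly assigned treatment and control groups, with a confidence interval around it. It is not drawn from Banks’s speech; the program, effect size, and outcome numbers are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical trial: a training program (treatment) versus business as
# usual (control). Outcomes are simulated here; in a real trial they would
# be measured on randomly assigned participants.
control = [random.gauss(100, 15) for _ in range(500)]
treated = [random.gauss(104, 15) for _ in range(500)]  # assumed true effect of +4

effect = statistics.mean(treated) - statistics.mean(control)

# Standard error of a difference in means, then an approximate 95% interval.
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5
low, high = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated effect: {effect:+.2f} (95% CI {low:+.2f} to {high:+.2f})")
# This answers whether THIS program beat business as usual; it says little
# about why it worked, or whether an alternative would have worked better.
```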

That is not to say that any evidence will do. Setting the quality bar too low is the bigger problem in practice, and the notion of a hierarchy of methodologies is helpful. However, no such analytical tools are self-sufficient for policy-making purposes; in my view they are best thought of as components of a “cost-benefit framework” – one that enables comparisons of different options, employing those estimation techniques that are most fit for purpose. Though challenging to populate fully with monetized data, CBA provides a coherent conceptual basis for assessing the net social impacts of different policy choices – which is what EBPM must aspire to as its contribution to (political) policy decisions….(More)”.
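
As an illustrative aside, not from the speech: in its simplest form, a cost-benefit comparison discounts each option’s stream of monetized net social benefits to a present value at a common rate, so that otherwise incommensurable options can be ranked on one yardstick. The sketch below uses invented figures, and the npv helper is hypothetical.

```python
# Two hypothetical policy options compared on one yardstick: the net
# present value of their monetized net social benefits. All figures and
# the 5% discount rate are invented for illustration.

def npv(net_benefits, rate=0.05):
    """Discount a stream of yearly net benefits (year 0 first) to today."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

options = {
    "option_a": [-50, 10, 20, 30, 30],  # large upfront cost, larger payoff
    "option_b": [-20, 8, 10, 12, 12],   # cheaper, smaller returns
}

for name, stream in options.items():
    print(f"{name}: NPV = {npv(stream):.1f}")
```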

Blockchain’s Occam problem


Report by Matt Higginson, Marie-Claude Nadeau, and Kausik Rajgopal: “Blockchain has yet to become the game-changer some expected. A key to finding the value is to apply the technology only when it is the simplest solution available.

Blockchain over recent years has been extolled as a revolution in business technology. In the nine years since its launch, companies, regulators, and financial technologists have spent countless hours exploring its potential. The resulting innovations have started to reshape business processes, particularly in accounting and transactions.

Amid intense experimentation, industries from financial services to healthcare and the arts have identified more than 100 blockchain use cases. These range from new land registries to KYC applications and smart contracts that enable actions from product processing to share trading. The most impressive results have seen blockchains used to store information, cut out intermediaries, and enable greater coordination between companies, for example in relation to data standards….

There is a clear sense that blockchain is a potential game-changer. However, there are also emerging doubts. A particular concern, given the amount of money and time spent, is that little of substance has been achieved. Of the many use cases, a large number are still at the idea stage, while others are in development but with no output. The bottom line is that despite billions of dollars of investment, and nearly as many headlines, evidence for a practical scalable use for blockchain is thin on the ground.

Infant technology

From an economic theory perspective, the stuttering blockchain development path is not entirely surprising. It is an infant technology that is relatively unstable, expensive, and complex. It is also unregulated and selectively distrusted. Classic lifecycle theory suggests the evolution of any industry or product can be divided into four stages: pioneering, growth, maturity, and decline (exhibit). Stage 1 is when the industry is getting started, or a particular product is brought to market. This is ahead of proven demand and often before the technology has been fully tested. Sales tend to be low and return on investment is negative. Stage 2 is when demand begins to accelerate, the market expands and the industry or product “takes off.”

Exhibit: Blockchain is struggling to emerge from the pioneering stage.

Across its many applications, blockchain arguably remains stuck at stage 1 in the lifecycle (with a few exceptions). The vast majority of proofs of concept (POCs) are in pioneering mode (or being wound up) and many projects have failed to get to Series C funding rounds.

One reason for the lack of progress is the emergence of competing technologies. In payments, for example, it makes sense that a shared ledger could replace the current highly intermediated system. However, blockchains are not the only game in town. Numerous fintechs are disrupting the value chain. Of nearly $12 billion invested in US fintechs last year, 60 percent was focused on payments and lending. SWIFT’s global payments innovation initiative (GPI), meanwhile, is addressing initial pain points through higher transaction speeds and increased transparency, building on bank collaboration….(More)” (See also: Blockchange)

A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework


Report by Karen Yeung: “This study was commissioned by the Council of Europe’s Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). It was prompted by concerns about the potential adverse consequences of advanced digital technologies (including artificial intelligence (‘AI’)), particularly their impact on the enjoyment of human rights and fundamental freedoms. This draft report seeks to examine the implications of these technologies for the concept of responsibility, and this includes investigating where responsibility should lie for their adverse consequences. In so doing, it seeks to understand (a) how human rights and fundamental freedoms protected under the ECHR may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated. 

Its methodological approach is interdisciplinary, drawing on concepts and academic scholarship from the humanities, the social sciences and, to a more limited extent, from computer science. It concludes that, if we are to take human rights seriously in a hyperconnected digital age, we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. Nations committed to protecting human rights must therefore ensure that those who wield and derive benefits from developing and deploying these technologies are held responsible for their risks and consequences. This includes obligations to ensure that there are effective and legitimate mechanisms that will operate to prevent and forestall violations of human rights which these technologies may threaten, and to attend to the health of the larger collective and shared socio-technical environment in which human rights and the rule of law are anchored….(More)”.

Data Policy in the Fourth Industrial Revolution: Insights on personal data


Report by the World Economic Forum: “Development of comprehensive data policy necessarily involves trade-offs. Cross-border data flows are crucial to the digital economy. The use of data is critical to innovation and technology. However, to engender trust, we need to have appropriate levels of protection in place to ensure privacy, security and safety. Over 120 laws in effect across the globe today provide differing levels of protection for data, but few anticipated…

Data Policy in the Fourth Industrial Revolution: Insights on personal data, a paper by the World Economic Forum in collaboration with the Ministry of Cabinet Affairs and the Future, United Arab Emirates, examines the relationship between risk and benefit, recognizing the impact of culture, values and social norms. This work is a start toward developing a comprehensive data policy toolkit and knowledge repository of case studies for policy makers and data policy leaders globally….(More)”.

Citizen science for environmental policy: Development of an EU-wide inventory and analysis of selected practices


EU Science Hub: “Citizen science is the non-professional involvement of volunteers in the scientific process, whether in the data collection phase or in other phases of the research.

It can be a powerful tool for environmental management that has the potential to inform an increasingly complex environmental policy landscape and to meet the growing demands from society for more participatory decision-making.

While there is growing interest from international bodies and national governments in citizen science, the evidence that it can successfully contribute to environmental policy development, implementation, evaluation or compliance remains scant.

Central to elucidating this question is a better understanding of the benefits delivered by citizen science: that is, to determine to what extent these benefits can contribute to environmental policy, and to establish whether projects that provide policy support also co-benefit science and encourage meaningful citizen engagement.

EU-wide inventory 

In order to build an evidence base of citizen science activities that can support environmental policies in the European Union (EU), the European Commission (DG ENV, with the support of DG JRC) contracted Bio Innovation Service (FR), in association with Fundacion Ibercivis (ES) and The Natural History Museum (UK), to perform a “Study on an inventory of citizen science activities for environmental policies”.

The first objective was to develop an inventory of citizen science projects relevant for environmental policy and assess how these projects contribute to the Sustainable Development Goals (SDGs) set by the United Nations (UN) General Assembly.

To this end, desk research and an EU-wide survey were used to identify 503 citizen science projects of relevance to environmental policy.

The study demonstrates the breadth of citizen science that can be of relevance to environmental policy…. Three salient features were found:

  • Government support, not only in funding but also through active participation in the design and implementation of the project, appears to be a key factor for the successful uptake of citizen science in environmental policy.
  • An easy engagement process for citizens, that is, projects requiring limited effort and few a priori skills, facilitates policy uptake.
  • Scientific aspects, on the other hand, did not appear to affect the policy uptake of the analysed projects, but they were a strong determinant of how well a project could serve policy: projects with high scientific standards and endorsed by scientists served more phases of the environmental policy cycle.

In conclusion, this study demonstrates that citizen science has the potential to be a cost-effective way to contribute to policy and highlights the importance of fostering a diversity of citizen science activities and their innovativeness….(More)”.

Data scores


Data-scores.org: “Data scores that combine data from a variety of both online and offline activities are becoming a way to categorize citizens, allocate services, and predict future behavior. Yet little is known about the implementation of data-driven systems and algorithmic processes in public services, and how citizens are increasingly ‘scored’ based on the collection and combination of data.

As part of our project ‘Data Scores as Governance’ we have developed a tool to map and investigate the uses of data analytics and algorithms in public services in the UK. This tool is designed to facilitate further research and investigation into this topic and to advance public knowledge and understanding.

The tool is made up of a collection of documents from different sources that can be searched and mapped according to different categories. The database consists of more than 5,300 unverified documents that have been scraped based on a number of search terms relating to data systems in government. This is an incomplete and ongoing dataset. You can read more in our Methodology section….(More)”.
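
As an illustrative aside, the core of such a tool is keyword-and-category filtering over scraped document records. The sketch below is not data-scores.org’s actual code or schema; the records, field names, and search helper are invented.

```python
# Invented records and field names, not data-scores.org's actual schema.
documents = [
    {"title": "Council risk-scoring pilot", "source": "local_gov",
     "text": "Predictive analytics used to flag households for early help."},
    {"title": "Police data dashboard", "source": "police",
     "text": "An algorithm combines offence records with demographic data."},
]

def search(docs, term, source=None):
    """Return documents mentioning the term, optionally filtered by source."""
    term = term.lower()
    return [d for d in docs
            if term in d["text"].lower()
            and (source is None or d["source"] == source)]

for hit in search(documents, "predictive"):
    print(hit["title"])  # -> Council risk-scoring pilot
```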

A Grand Challenges-Based Research Agenda for Scholarly Communication and Information Science


Report by Micah Altman and Chris Bourg: “…The overarching question these problems pose is how to create a global scholarly knowledge ecosystem that supports participation; ensures agency, equitable access, trustworthiness, and integrity; and is legally, economically, institutionally, technically, and socially sustainable. The aim of the Grand Challenges Summit and this report is to identify broad research areas and questions to be explored in order to provide an evidence base from which to answer specific aspects of that broad question.

Reaching this future state requires exploring a set of interrelated anthropological, behavioral, computational, economic, legal, policy, organizational, sociological, and technological areas. The extent of these areas of research is illustrated by the following exemplars:

What is necessary to develop coherent, comprehensive, and empirically testable theories of the value of scholarly knowledge to society? What is the best current evidence of this value, and what does it elide? How should the measures of use and utility of scholarly outputs be adapted for different communities of use, disciplines, theories, and cultures? What methods will improve our predictions of the future value of collections of information, or enable the selection and construction of collections that will be likely to be of value in the future?…

What parts of the scholarly knowledge ecosystem promote the values of transparency, individual agency, participation, accountability, and fairness? How can these values be reflected in the algorithms, information architecture, and technological systems supporting the scholarly knowledge ecosystem? What principles of design and governance would be effective for embedding these values?…

The list above provides a partial outline of research areas that will need to be addressed in order to overcome the major barriers to a better future for scholarly communication and information science. As the field progresses in exploring these areas and attempting to address the barriers, new areas are likely to be identified. Even within this initial list of research areas, there are many pressing questions ripe for exploration….

Research on open scholarship solutions is needed to assess the scale and breadth of access,[68] the costs to actors and stakeholders at all levels, and the effects of openness on perceptions of trust and confidence in research and research organizations. Research is also needed in the intersection between open scholarship and participation, new forms of scholarship, information integrity, information durability, and information agency (see section 3.1.). This will require an assessment of the costs and returns of open scholarship at a systemic level, rather than at the level of individual institutions or actors. We also need to assess whether and under what conditions interventions directed at removing reputation and institutional barriers to collaboration promote open scholarship. Research is likewise required to document the conditions under which open scholarship reduces duplication and inefficiency, and promotes equity in the creation and use of knowledge. In addition, research should address the permeability of open scholarship systems to researchers across multiple scientific fields, and whether—and under what conditions—open scholarship enhances interdisciplinary collaboration….(More)”.

Draft Ethics guidelines for trustworthy AI


Working document by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG): “…Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure that we follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as an end in itself, but as having the goal of increasing human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

These Guidelines therefore set out a framework for Trustworthy AI:

  • Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
  • From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
  • Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases. …(More)”

A People’s Guide to AI


Booklet by Mimi Onuoha and Diana Nucera: “…this booklet aims to fill the gaps in information about AI by creating accessible materials that inform communities and allow them to identify what their ideal futures with AI can look like. Although the contents of this booklet focus on demystifying AI, we find it important to state that the benefits of any technology should be felt by all of us. Too often, the challenges presented by new technology spell out yet another tale of racism, sexism, gender inequality, ableism, and lack of consent within digital culture.

The path to a fair future starts with the humans behind the machines, not the machines themselves. Self-reflection and a radical transformation of our relationships to our environment and each other are at the heart of combating structural inequality. But understanding what it takes to create a fair and just society is the first step. In creating this booklet, we start from the belief that equity begins with education…For those who wish to learn more about specific topics, we recommend looking at the table of contents and choosing sections to read. For more hands-on learners, we have also included a number of workbook activities that allow the material to be explored in a more active fashion.

We hope that this booklet inspires and informs those who are developing emerging technologies to reflect on how these technologies can impact our societies. We also hope that this booklet inspires and informs black, brown, indigenous, and immigrant communities to reclaim technology as a tool of liberation…(More)”.

Abandoning Silos: How innovative governments are collaborating horizontally to solve complex problems


Report by Michael Crawford Urban: “The complex challenges that governments at all levels are facing today cut across long-standing and well-defined government boundaries and organizational structures. Solving these problems therefore requires a horizontal approach. This report looks at how such an approach can be successfully implemented. There are a number of key obstacles to effective horizontal collaboration in government, ranging from misaligned professional incentive structures to incompatible computer systems. But a number of governments – Estonia, the UK, and New Zealand – have all recently introduced innovative initiatives that are succeeding in creatively tackling these complex horizontal challenges. In each case, this is delivering critical benefits – reduced government costs and regulatory burdens, getting more out of existing personnel while recruiting more high-quality professionals, or providing new and impactful data-driven insights that are helping improve the quality of human services.

How are they achieving this? We answer this question by using an analytical framework organized along three fundamental dimensions: governance (structuring accountability and responsibility), people (managing culture and personnel), and data (collecting, transmitting and using information). In each of our three cases, we show how specific steps taken along one of these dimensions can help overcome important obstacles that commonly arise and, in so doing, enable successful horizontal collaboration….(More)”.