What’s Wrong with Public Policy Education


Francis Fukuyama at the American Interest: “Most programs train students to become capable policy analysts, but with no understanding of how to implement those policies in the real world…Public policy education is ripe for an overhaul…

Public policy education in most American universities today reflects a broader problem in the social sciences, which is the dominance of economics. Most programs center on teaching students a battery of quantitative methods that are useful in policy analysis: applied econometrics, cost-benefit analysis, decision analysis, and, most recently, use of randomized experiments for program evaluation. Many schools build their curricula around these methods rather than the substantive areas of policy such as health, education, defense, criminal justice, or foreign policy. Students come out of these programs qualified to be policy analysts: They know how to gather data, analyze it rigorously, and evaluate the effectiveness of different public policy interventions. Historically, this approach started with the Rand Graduate School in the 1970s (which has subsequently undergone a major re-thinking of its approach).

There is no question that these skills are valuable and should be part of a public policy education.  The world has undergone a revolution in recent decades in terms of the role of evidence-based policy analysis, where policymakers can rely not just on anecdotes and seat-of-the-pants assessments, but statistically valid inferences that intervention X is likely to result in outcome Y, or that the millions of dollars spent on policy Z has actually had no measurable impact. Evidence-based policymaking is particularly necessary in the age of Donald Trump, amid the broad denigration of inconvenient facts that do not suit politicians’ prior preferences.

But being skilled in policy analysis is woefully inadequate to bring about policy change in the real world. Policy analysis will tell you what the optimal policy should be, but it does not tell you how to achieve that outcome.

The world is littered with optimal policies that don’t have a snowball’s chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies to truckers and cabbies and ordinary drivers who do not want to pay more for the gas they use to commute to work, or as inputs to their industrial processes. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

It is entirely possible that an analysis of the implementation strategy, rather than analysis of the underlying policy, will tell you that the goal is unachievable absent an external shock, which might then mean changing the scope of the policy, rethinking its objectives, or even deciding that you are pursuing the wrong objective.

Public policy education that sought to produce change-makers rather than policy analysts would therefore have to be different.  It would continue to teach policy analysis, but the latter would be a small component embedded in a broader set of skills.

The first set of skills would involve problem definition. A change-maker needs to query stakeholders about what they see as the policy problem, understand the local history, culture, and political system, and define a problem that is sufficiently narrow in scope that it can plausibly be solved.

At times reformers start with a favored solution without defining the right problem. A student I know spent a summer working at an NGO in India advocating use of electric cars in the interest of carbon abatement. It turns out, however, that India’s reliance on coal for marginal electricity generation means that more carbon would be put in the air if the country were to switch to electric vehicles, not less, so the group was actually contributing to the problem they were trying to solve….

The second set of skills concerns solutions development. This is where traditional policy analysis comes in: It is important to generate data, come up with a theory of change, and posit plausible options by which reformers can solve the problem they have set for themselves. This is where some ideas from product design, like rapid prototyping and testing, may be relevant.

The third and perhaps most important set of skills has to do with implementation. This begins necessarily with stakeholder analysis: that is, mapping the actors who are concerned with the particular policy problem, either as supporters of a solution or as opponents who want to maintain the status quo. From an analysis of the power and interests of the different stakeholders, one can begin to build coalitions of proponents, and think about strategies for expanding the coalition and neutralizing those who are opposed. A reformer needs to think about where resources can be obtained, and, very critically, how to communicate one’s goals to the stakeholder audiences involved. Finally comes testing and evaluation—in the expectation that there will be a continuous and rapid iterative process by which solutions are tried, evaluated, and modified. Randomized experiments have become the gold standard for program evaluation in recent years, but their cost and length of time to completion are often the enemies of rapid iteration and experimentation….(More) (see also http://canvas.govlabacademy.org/).
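
The stakeholder mapping described here is often formalized as a power–interest grid. A minimal sketch of that idea (the stakeholders, scores, and threshold are hypothetical illustrations, not part of the original argument):

```python
# Toy power-interest grid: classify stakeholders by how much power they
# hold over a policy and how interested they are in its outcome.
# All scores and the threshold are illustrative assumptions.

def classify(power, interest, threshold=0.5):
    """Return the standard power-interest quadrant for one stakeholder."""
    if power >= threshold and interest >= threshold:
        return "manage closely"   # core coalition members or die-hard blockers
    if power >= threshold:
        return "keep satisfied"   # powerful but currently indifferent
    if interest >= threshold:
        return "keep informed"    # allies to mobilize into the coalition
    return "monitor"

# Hypothetical stakeholders for a carbon-tax campaign (scores in [0, 1]).
stakeholders = {
    "oil industry lobby": (0.9, 0.9),
    "environmental NGOs": (0.4, 0.95),
    "finance ministry": (0.8, 0.3),
    "general public": (0.3, 0.2),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {classify(power, interest)}")
```

In the carbon-tax example above, the same grid would distinguish the die-hard opponents to neutralize from the fence-sitters to persuade.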

Open Data Use Case: Using data to improve public health


Chris Willsher at ODX: “Studies have shown that a large majority of Canadians spend too much time in sedentary activities. According to the Health Status of Canadians report in 2016, only 2 out of 10 Canadian adults met the Canadian Physical Activity Guidelines. Increasing physical activity and healthy lifestyle behaviours can reduce the risk of chronic illnesses, which can decrease pressures on our health care system. And data can play a role in improving public health.

We are already seeing examples of a push to augment the role of data, with programs recently being launched at home and abroad. Canada and the US established an initiative in the spring of 2017 called the Healthy Behaviour Data Challenge. The goal of the initiative is to open up new methods for generating and using data to monitor health, specifically in the areas of physical activity, sleep, sedentary behaviour, or nutrition. The challenge recently wrapped up with winners being announced in late April 2018. Programs such as this provide incentive to the private sector to explore data’s role in measuring healthy lifestyles and raise awareness of the importance of finding new solutions.

In the UK, Sport England and the Open Data Institute (ODI) have collaborated to create the OpenActive initiative. It has set out to encourage both government and private sector entities to unlock data around physical activities so that others can utilize this information to ease the process of engaging in an active lifestyle. The goal is to “make it as easy to find and book a badminton court as it is to book a hotel room.” As of last fall, OpenActive counted more than 76,000 activities across 1,000 locations from their partner organizations. They have also developed a standard for activity data to ensure consistency among data sources, which makes it easier for developers to work with the data. Again, this initiative serves as a mechanism for open data to help address public health issues.

In Canada, we are seeing more open datasets that could be utilized to devise new solutions for generating higher rates of physical activity. A lot of useful information is available at the municipal level that can provide specifics around local infrastructure. Plus, there is data at the provincial and federal level that can provide higher-level insights useful to developing methods for promoting healthier lifestyles.

Information about cycling infrastructure seems to be relatively widespread among municipalities with a robust open data platform. As an example, the City of Toronto publishes map data of bicycle routes around the city. This information could be used to help citizens find the best bike route between two points. In addition, the city also publishes data on indoor, outdoor, and post-and-ring bicycle parking facilities that can identify where to securely lock your bike. Exploring data from proprietary sources, such as Strava, could further enhance an application by layering on popular cycling routes or allowing users to integrate their personal information. And algorithms could allow for the inclusion of data on comparable driving times, projected health benefits, or savings on automotive maintenance.
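
In outline, the route-finding application suggested above would treat the published bike-route map as a weighted graph and run a shortest-path search over it. A minimal sketch with an invented toy network (the intersections and distances are made up for illustration; a real application would build the graph from the city’s map data):

```python
import heapq

# Toy bike-route network: intersections as nodes, route segments as
# weighted edges (weights in km). The network itself is invented; a real
# app would construct it from the city's published bicycle-route data.
network = {
    "A": [("B", 1.2), ("C", 2.5)],
    "B": [("A", 1.2), ("C", 0.8), ("D", 3.0)],
    "C": [("A", 2.5), ("B", 0.8), ("D", 1.1)],
    "D": [("B", 3.0), ("C", 1.1)],
}

def best_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_km, node_list) for the shortest
    path from start to goal, or (inf, []) if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, km in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (dist + km, neighbour, path + [neighbour]))
    return float("inf"), []

print(best_route(network, "A", "D"))  # shortest A-to-D path over the toy network
```

Layering in Strava popularity data or driving-time comparisons would amount to adjusting the edge weights or annotating the returned route.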

The City of Calgary publishes data on park sports surfaces and recreation facilities that could potentially be incorporated into sports league applications. This would make it easier to display locations for upcoming games or to arrange pick-up games. Knowing where there are fields nearby that may be available for a last minute soccer game could be useful in encouraging use of the facilities and generating more physical activity. Again, other data sources, such as weather, could be integrated with this information to provide a planning tool for organizing these activities….(More)”.

Under what conditions is information empowering?


FeedbackLabs: “A 72% increase in students ceasing to abuse drugs. A 57 percentage point jump in vaccination rates. Fourteen percent higher odds of adults quitting smoking. The improvements in outcomes that people can achieve for themselves when armed with information can be striking.

Yet the above examples and many more show that information alone rarely empowers people to make their lives better. Information empowers when social and emotional factors induce people to reinterpret that information, and act on it. In this report, we draw on 44 real-life examples and 168 research papers from 10 fields to develop 7 general principles that seem to underlie information initiatives that successfully empower people. Principles 1, 2, and 3 speak to how information empowers through reinterpretation, and Principles 4 to 7 speak to how we can support that reinterpretation—and get people to act. Based on the 7 principles, we then provide a checklist of questions a team can use to increase the likelihood that their initiative will empower the people they seek to serve.

Throughout, we provide concrete illustrations from a wide range of fields to show how applying these principles in practice has led to substantially better outcomes. We also consider examples with outcomes we might consider to be negative. The 7 principles are broadly applicable to how information empowers people to perceive, make and act on choices—but they are agnostic about whether the outcomes of those choices are positive or negative.

The way that the principles are applied in one context may not always work in another. But from the context-specific evidence summarized in this report we have extrapolated a framework that can be applied more broadly—in both theory and practice, for both funders and implementers. Although many of the in-depth case studies presented stem from the US, the principles are based on a wide range of examples and evidence from around the world. We believe the framework we construct here is powerful and can be applied globally; but it’s also clear that much more remains to be understood, so we hope it also sparks ideas, experimentation, and new discoveries….(More)”.

We Need Transparency in Algorithms, But Too Much Can Backfire


Kartik Hosanagar and Vivian Jair at Harvard Business Review: “In 2013, Stanford professor Clifford Nass faced a student revolt. Nass’s students claimed that those in one section of his technology interface course received higher grades on the final exam than counterparts in another. Unfortunately, they were right: two different teaching assistants had graded the two different sections’ exams, and one had been more lenient than the other. Students with similar answers had ended up with different grades.

Nass, a computer scientist, recognized the unfairness and created a technical fix: a simple statistical model to adjust scores, where students got a certain percentage boost on their final mark when graded by a TA known to give grades that percentage lower than average. In the spirit of openness, Nass sent out emails to the class with a full explanation of his algorithm. Further complaints poured in, some even angrier than before. Where had he gone wrong?…
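
Nass’s adjustment can be sketched in a few lines. One plausible reading of the scheme (the TA averages and scores below are invented for illustration, not Nass’s actual model):

```python
# Sketch of a leniency-adjusted grading scheme: each student's mark is
# shifted by the gap between the class-wide average and their own TA's
# section average. All numbers below are invented.

def adjusted_score(raw, ta_avg, class_avg):
    """Boost (or reduce) a raw mark by the difference between the overall
    class average and the grading TA's section average."""
    return raw + (class_avg - ta_avg)

class_avg = 80.0
section_avgs = {"lenient TA": 85.0, "strict TA": 75.0}

# Two students with identical answers, graded by different TAs:
print(adjusted_score(88.0, section_avgs["lenient TA"], class_avg))  # 83.0
print(adjusted_score(78.0, section_avgs["strict TA"], class_avg))   # 83.0
```

With the adjustment applied, the two identically answered exams receive the same final mark regardless of who graded them.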

Kizilcec had in fact tested three levels of transparency: low and medium but also high, where the students got not only a paragraph explaining the grading process but also their raw peer-graded scores and how these were each precisely adjusted by the algorithm to get to a final grade. And this is where the results got more interesting. In the experiment, while medium transparency increased trust significantly, high transparency eroded it completely, to the point where trust levels were either equal to or lower than among students experiencing low transparency.

Making Modern AI Transparent: A Fool’s Errand?

 What are businesses to take home from this experiment?  It suggests that technical transparency – revealing the source code, inputs, and outputs of the algorithm – can build trust in many situations. But most algorithms in the world today are created and managed by for-profit companies, and many businesses regard their algorithms as highly valuable forms of intellectual property that must remain in a “black box.” Some lawmakers have proposed a compromise, suggesting that the source code be revealed to regulators or auditors in the event of a serious problem, and this adjudicator will assure consumers that the process is fair.

This approach merely shifts the burden of belief from the algorithm itself to the regulators. This may be a palatable solution in many arenas: for example, few of us fully understand financial markets, so we trust the SEC to take on oversight. But in a world where decisions large and small, personal and societal, are being handed over to algorithms, this becomes less acceptable.

Another problem with technical transparency is that it makes algorithms vulnerable to gaming. If an instructor releases the complete source code for an algorithm grading student essays, it becomes easy for students to exploit loopholes in the code:  maybe, for example, the algorithm seeks evidence that the students have done research by looking for phrases such as “according to published research.” A student might then deliberately use this language at the start of every paragraph in her essay.

But the biggest problem is that modern AI is making source code – transparent or not – less relevant compared with other factors in algorithmic functioning. Specifically, machine learning algorithms – and deep learning algorithms in particular – are usually built on just a few hundred lines of code. The algorithm’s logic is mostly learned from training data and is rarely reflected in its source code. Which is to say, some of today’s best-performing algorithms are often the most opaque. High transparency might involve getting our heads around reams and reams of data – and then still only being able to guess at what lessons the algorithm has learned from it.
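
The point that the logic lives in the data rather than the code can be made concrete with a toy example: the same few lines of learning code, fed two different training sets, produce opposite decisions, and nothing in the source reveals which. (A hypothetical nearest-neighbour illustration, not drawn from the article.)

```python
# The same source code, trained on different data, encodes different logic.
# A toy 1-nearest-neighbour "model": its decision rule lives entirely in
# the training examples, not in the code below.

def predict(training_data, x):
    """Return the label of the training point closest to x."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Two hypothetical training sets for the same task:
data_a = [(1.0, "approve"), (9.0, "reject")]
data_b = [(1.0, "reject"), (9.0, "approve")]

# Identical code, identical input, opposite outputs:
print(predict(data_a, 2.0))  # approve
print(predict(data_b, 2.0))  # reject
```

Reading the source of `predict` tells you nothing about whether it approves or rejects; that logic sits entirely in the training set.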

This is where Kizilcec’s work becomes relevant – a way to embrace rather than despair over deep learning’s impenetrability. His work shows that users will not trust black box models, but they don’t need – or even want – extremely high levels of transparency. That means responsible companies need not fret over what percentage of source code to reveal, or how to help users “read” massive datasets. Instead, they should work to provide basic insights on the factors driving algorithmic decisions….(More)”

What top technologies should the next generation know how to use?


Lottie Waters at Devex: “Technology provides some great opportunities for global development, and a promising future. But for the next generation of professionals to succeed, it’s vital they stay up to date with the latest tech, innovations, and tools.

In a recent report produced by Devex in collaboration with the United States Agency for International Development and DAI, some 86 percent of survey respondents believe the technology, skills, and approaches development professionals will be using in 10 years’ time will be significantly different to today’s.

In fact, “technology for development” is regarded as the sector that will see the most development progress, but is also cited as the one that will see the biggest changes in skills required, according to the survey.

“As different technologies develop, new possibilities will open up that we may not even be aware of yet. These opportunities will bring new people into the development sector and require those in it to be more agile in adapting technologies to meet development challenges,” said one survey respondent.

While “blockchain,” “artificial intelligence,” and “drones” may be the current buzzwords surrounding tech in global development, geographical information systems, or GIS, and big data are actually the top technologies respondents believe the next generation of development professionals should learn how to utilize.

So, how are these technologies currently being used in development, how might this change in the near future, and what will their impact be in the next 10 years? Devex spoke with experts in the field who are already integrating these technologies into their work to find out….(More)”

How games can help craft better policy


Shrabonti Bagchi at LiveMint: “I have never seen economists having fun!” Anantha K. Duraiappah, director of Unesco-MGIEP (Mahatma Gandhi Institute of Education for Peace and Sustainable Development), was heard exclaiming during a recent conference. The academics in question were a group of environmental economists at an Indian Society for Ecological Economics conference in Thrissur, Kerala, and they were playing a game called Cantor’s World, in which each player assumes the role of the supreme leader of a country and gets to decide the fate of his or her nation.

Well, it’s not quite as simple as that (this is not Settlers Of Catan!). Players have to take decisions on long-term goals like education and industrialization based on data such as GDP, produced capital, human capital, and natural resources while adhering to the UN’s sustainable development goals. The game is probably the most accessible and enjoyable way of seeing how long-term policy decisions change and impact the future of countries.

That’s what Fields Of View does. The Bengaluru-based non-profit creates games, simulations and learning tools for the better understanding of policy and its impact. Essentially, their work is to make sure economists like the ones at the Thrissur conference actually have some fun while thrashing out crucial issues of public policy.

A screen grab from ‘Cantor’s World’.


Can policymaking be made more relevant to the lives of people affected by it? Can policymaking be more responsive to a dynamic social-economic-environmental context? Can we reduce the time taken for a policy to go from the drawing board to implementation? These were some of the questions the founders of Fields Of View, Sruthi Krishnan and Bharath M. Palavalli, set out to answer. “There are no binaries in policymaking. There are an infinite set of possibilities,” says Palavalli, who was named an Ashoka fellow in May for his work at the intersection of technology, social sciences and design.

Earlier this year, Fields Of View organized a session of one of its earliest games, City Game, for a group of 300 female college students in Mangaluru. City Game is a multiplayer offline game designed to explore urban infrastructure and help groups and individuals understand the dynamics of urban governance…(More)”.

On the Bumpy Road Towards Open Government: The Not-Invented-Here Syndrome as a Major Pothole


Paper by Lisa Schmidthuber, David Antons and Dennis Hilgers: “This paper investigates the role of public employees in absorbing external knowledge. Triggered by open government initiatives and open calls for participation, external actors are invited to integrate ideas, solutions, or experience into public organizations. Such exploitation of valuable external knowledge across organizational interfaces might, however, be hindered by negative attitudes of public employees towards external input. The rejection of outside knowledge by internal actors is known as the Not-Invented-Here syndrome. This paper sheds light on NIH attitudes in public organizations. After reviewing the state of the art of research on NIH, it emphasizes rethinking NIH in the public sector. The in-depth discussion of previous work thus serves to derive an extensive agenda for future research….(More)”

The Case for Accountability: How it Enables Effective Data Protection and Trust in the Digital Society


Centre for Information Policy Leadership: “Accountability now has broad international support and has been adopted in many laws, including in the EU General Data Protection Regulation (GDPR), regulatory policies and organisational practices. It is essential that there is consensus and clarity on the precise meaning and application of organisational accountability among all stakeholders, including organisations implementing accountability and data protection authorities (DPAs) overseeing accountability.

Without such consensus, organisations will not know what DPAs expect of them and DPAs will not know how to assess organisations’ accountability-based privacy programs with any degree of consistency and predictability. Thus, drawing from the global experience with accountability to date and from the Centre for Information Policy Leadership’s (CIPL) own extensive prior work on accountability, this paper seeks to explain the following issues:

  • The concept of organisational accountability and how it is reflected in the GDPR;
  • The essential elements of accountability and how the requirements of the GDPR (and of other normative frameworks) map to these elements;
  • Global acceptance and adoption of accountability;
  • How organisations can implement accountability (including by and between controllers and processors) through comprehensive internal privacy programs that implement external rules or the organisation’s own data protection policies and goals, or through verified or certified accountability mechanisms, such as Binding Corporate Rules (BCR), APEC Cross-Border Privacy Rules (CBPR), APEC Privacy Recognition for Processors (PRP), other seals and certifications, including future GDPR certifications and codes of conduct; and
  • The benefits that accountability can deliver to each stakeholder group.

In addition, the paper argues that accountability exists along a spectrum, ranging from basic accountability requirements required by law (such as under the GDPR) to stronger and more granular accountability measures that may not be required by law but that organisations may nevertheless want to implement because they convey substantial benefits….(More)”.

Collective Awareness


J. Doyne Farmer at the Edge: “Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level that take advantage of all the data that computer technologies and the Internet provide us with. We need new technologies of economic prediction that take advantage of the tools we have in the 21st century.

Places like the US Federal Reserve Bank make predictions using a system that has been developed over the last eighty years or so. This line of effort goes back to the middle of the 20th century, when people realized that we needed to keep track of the economy. They began to gather data and set up a procedure for having firms fill out surveys, for having the census take data, for collecting a lot of data on economic activity and processing that data. This system is called “national accounting,” and it produces numbers like GDP, unemployment, and so on. The numbers arrive at a very slow timescale. Some of the numbers come out once a quarter, some of the numbers come out once a year. The numbers are typically lagged because it takes a lot of time to process the data, and the numbers are often revised as much as a year or two later. That system has been built to work in tandem with the models that have been built, which also process very aggregated, high-level summaries of what the economy is doing. The data is old fashioned and the models are old fashioned.

It’s a 20th-century technology that’s been refined in the 21st century. It’s very useful, and it represents a high level of achievement, but it is now outdated. The Internet and computers have changed things. With the Internet, we can gather rich, detailed data about what the economy is doing at the level of individuals. We don’t have to rely on surveys; we can just grab the data. Furthermore, with modern computer technology we could simulate what 300 million agents are doing, simulate the economy at the level of the individuals. We can simulate what every company is doing and what every bank is doing in the United States. The model we could build could be much, much better than what we have now. This is an achievable goal.
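
What Farmer describes is agent-based simulation at national scale. In miniature, such a model is just a loop in which individual agents update their own state and the aggregates emerge from their interactions. A deliberately tiny sketch (the agents, behavioural rule, and parameters are invented, not calibrated to any data):

```python
# Minimal agent-based economy: each agent spends a fixed fraction of its
# income; aggregate spending becomes next period's income, shared equally.
# All parameters are illustrative assumptions, not calibrated estimates.

NUM_AGENTS = 1000
SPEND_RATE = 0.9

incomes = [100.0] * NUM_AGENTS

def step(incomes):
    """One period: every agent spends a fraction of income; aggregate
    spending is redistributed as next period's income."""
    total_spending = sum(income * SPEND_RATE for income in incomes)
    return [total_spending / len(incomes)] * len(incomes)

for period in range(5):
    incomes = step(incomes)
    print(f"period {period}: mean income {sum(incomes) / len(incomes):.2f}")
```

Even this toy shows aggregate behaviour (a geometric decline in mean income at the spend rate) emerging from an individual-level rule rather than from an aggregate equation.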

But we’re not doing that, nothing close to that. We could achieve what I just said with a technological system that’s simpler than Google search. But we’re not doing that. We need to do it. We need to start creating a new technology for economic prediction that runs side-by-side with the old one, that makes its predictions in a very different way. This could give us a lot more guidance about where we’re going and help keep the economic shit from hitting the fan as often as it does….(More)”.

A model to help tech companies make responsible technology a reality


Sam Brown at DotEveryone: “…adopting a Responsible Technology approach isn’t straightforward. There’s currently no roadmap, or even any common language, about how to embed responsible technology practices in practical and tangible ways.

That’s why Doteveryone has spent the last year researching the issues organisations face and we’re now developing a model that will help organisations do just that.

The 3C model helps to guide organisations on how to assess the level of responsibility of their technology products or services as they develop them.

It’s not an ethical bible which dictates right from wrong, but a framework which gives teams space and parameters to foresee the potential impacts their technologies could have and to consider how to handle them.

Our 3C Model of Responsible Technology considers:

  1. the Context of the wider world a technology product or service exists within
  2. the potential ways technology can have unintended Consequences
  3. the different Contributions people make to a technology — how value is given and received

We are developing a number of assessment tools which product teams can work through to help them examine and evaluate each of these areas in real time during the development cycle. The form of the assessments range from checklists to step-by-step information mapping to team board games….(More)”.