America’s Problem Isn’t Too Little Democracy. It’s Too Much.


Joshua A. Geltzer at Politico Magazine: “Democracy’s lamentations sometimes seem deafening these days. “Democracy is dying,” proclaimed a recent article in Foreign Policy—and another in the Guardian, and yet another in Quartz. We’ve reached “the end of democracy,” avows a new book—as well as an op-ed in the Washington Post.

But what if these perspectives have it all backwards? What if our problem isn’t too little democracy, but too much?

There’s no doubt that democracy in the United States appears on shaky ground. That’s not because 2016 marked the first time in American history that the presidency was captured by a candidate with no political or military experience. It’s not even because Donald Trump did so despite losing the popular vote by almost 3 million ballots, with his adversary garnering the most votes ever cast for a losing presidential candidate.

It’s because the 2016 election revealed new vulnerabilities in our democracy, generated by social media’s explosion and utilized by Russia and Russian-linked actors, possibly including Trump’s team itself. And it’s also because the aftermath of that election has laid bare a Congress so polarized, gridlocked and downright incapacitated that it has proved unable even to keep our government from shutting down and has consistently failed to fulfill its responsibility to exercise meaningful oversight of the executive branch.

What ails us? The current vogue is to place the blame on the inadequacies of our incarnation of democracy. The brilliant Yascha Mounk, for example, argues that the American people may think they’re living in a democracy, but—unbeknownst to them—it’s really all a charade. On Mounk’s account, Americans speak at town halls, organize on behalf of candidates and cast ballots; but, because the game’s been rigged by the powerful, all of that activity doesn’t really matter compared to the influence of the well-placed and well-heeled. In the words of two political scientists quoted favorably by Mounk, what we think of as democracy in action really amounts to “a minuscule, near-zero, statistically non-significant impact upon public policy.”

Some suggest that democracy’s insufficiencies are global, and the defining problem of our times. In his magisterial account of democracy’s fading allure in Hungary and Poland, Roger Cohen echoes earlier scholars in seeing democracy now eclipsed by “competitive authoritarianism, a form of European single-party rule that retains a veneer of democracy while skewing the contest sufficiently to ensure it is likely to yield only one result.”

But while these commentators are right that the cracks are there, the cause is the very opposite of what they claim, at least when it comes to America. The problem isn’t that democracy is in short supply in the United States. It’s that technology has helped to unleash hyper-democratization—a shift away from the mediated, checked republic that America’s founders carefully crafted toward an impulsive, unleashed direct democracy that’s indulging the worst impulses of our most extreme elements.

To put it bluntly, we’re increasingly ruled by an online mob. And it’s a mob getting besieged with misinformation…(More)”.

The Role of Behavioral Economics in Evidence-Based Policymaking


William J. Congdon and Maya Shankar in Special Issue of The ANNALS of the American Academy of Political and Social Science on Evidence Based Policy Making: “Behavioral economics has come to play an important role in evidence-based policymaking. In September 2015, President Obama signed an executive order directing federal agencies to incorporate insights from behavioral science into federal policies and programs. The order also charged the White House Social and Behavioral Sciences Team (SBST) with supporting this directive. In this article, we briefly trace the history of behavioral economics in public policy. We then turn to a discussion of what the SBST was, how it was built, and the lessons we draw from its experience and achievements. We conclude with a discussion of prospects for the future, arguing that even as SBST is currently lying fallow, behavioral economics continues to gain currency and show promise as an essential element of evidence-based policy….(More)”.

Big Data and AI – A transformational shift for government: So, what next for research?


Irina Pencheva, Marc Esteve and Slava Jankin Mikhaylov in Public Policy and Administration: “Big Data and artificial intelligence will have a profound transformational impact on governments around the world. Thus, it is important for scholars to provide a useful analysis on the topic to public managers and policymakers. This study offers an in-depth review of the Policy and Administration literature on the role of Big Data and advanced analytics in the public sector. It provides an overview of the key themes in the research field, namely the application and benefits of Big Data throughout the policy process, and challenges to its adoption and the resulting implications for the public sector. It is argued that research on the subject is still nascent and more should be done to ensure that the theory adds real value to practitioners. A critical assessment of the strengths and limitations of the existing literature is developed, and a future research agenda to address these gaps and enrich our understanding of the topic is proposed…(More)”.

Public Policy in an AI Economy


NBER Working Paper by Austan Goolsbee: “This paper considers the role of policy in an AI-intensive economy (interpreting AI broadly). It emphasizes the speed of adoption of the technology for the impact on the job market and the implications for inequality across people and across places. It also discusses the challenges of enacting a Universal Basic Income as a response to widespread AI adoption; pricing, privacy and competition policy; and the question of whether AI could improve policy making itself….(More)”.

4 reasons why Data Collaboratives are key to addressing migration


Stefaan Verhulst and Andrew Young at the Migration Data Portal: “If every era poses its dilemmas, then our current decade will surely be defined by questions over the challenges and opportunities of a surge in migration. The issues in addressing migration safely, humanely, and for the benefit of communities of origin and destination are varied and complex, and today’s public policy practices and tools are not adequate. Increasingly, it is clear, we need not only new solutions but also new, more agile, methods for arriving at solutions.

Data are central to meeting these challenges and to enabling public policy innovation in a variety of ways. Yet, for all of data’s potential to address public challenges, the truth remains that most data generated today are in fact collected by the private sector. These data contain tremendous possible insights and avenues for innovation in how we solve public problems. But because of access restrictions, privacy concerns and often limited data science capacity, their vast potential often goes untapped.

Data Collaboratives offer a way around this limitation.

Data Collaboratives: A new form of Public-Private Partnership for a Data Age

Data Collaboratives are an emerging form of partnership, typically between the private and public sectors, but often also involving civil society groups and the education sector. Now in use across various countries and sectors, from health to agriculture to economic development, they allow for the opening and sharing of information held in the private sector, in the process freeing up data silos to serve public ends.

Although still fledgling, we have begun to see instances of Data Collaboratives implemented toward solving specific challenges within the broad and complex refugee and migrant space. As the examples we describe below suggest (and which we examine in more detail in the Stanford Social Innovation Review), the use of such Collaboratives is geographically dispersed and diffuse; there is an urgent need to pull together a cohesive body of knowledge to more systematically analyze what works, and what doesn’t.

This is something we have started to do at the GovLab. We have analyzed a wide variety of Data Collaborative efforts, across geographies and sectors, with a goal of understanding when and how they are most effective.

The benefits of Data Collaboratives in the migration field

As part of our research, we have identified four main value propositions for the use of Data Collaboratives in addressing different elements of the multi-faceted migration issue. …(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
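
To make the comparison concrete, here is a minimal sketch, in Python, of the kind of calculation involved: inflation-adjust each intervention’s cost, then compare outcome units gained per dollar spent. All figures below are invented for illustration and are not numbers from the study.

```python
# Hypothetical cost-effectiveness comparison of a nudge vs. a traditional
# intervention, mirroring the general approach of inflation-adjusting costs
# and comparing impact per dollar. All figures are invented for illustration.

def adjust_for_inflation(cost, year, base_year=2015, annual_rate=0.02):
    """Express a past cost in base-year dollars, assuming a flat annual rate."""
    return cost * (1 + annual_rate) ** (base_year - year)

def impact_per_dollar(extra_outcomes, cost):
    """Outcome units gained (e.g., additional enrollees) per dollar spent."""
    return extra_outcomes / cost

# Invented retirement-savings example: a default-enrollment nudge vs. a
# financial-incentive campaign, both run in 2010.
nudge_cost = adjust_for_inflation(2_500, year=2010)
incentive_cost = adjust_for_inflation(60_000, year=2010)

print(f"Nudge:     {impact_per_dollar(85, nudge_cost):.4f} enrollees per dollar")
print(f"Incentive: {impact_per_dollar(120, incentive_cost):.4f} enrollees per dollar")
```

Run as-is, the sketch shows how a cheap nudge can beat a costlier mandate on a per-dollar basis even when its absolute effect is smaller, which is the comparison the researchers formalize.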

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Practical approaches to big data privacy over time


Micah Altman, Alexandra Wood, David R O’Brien and Urs Gasser in International Data Privacy Law: “

  • Governments and businesses are increasingly collecting, analysing, and sharing detailed information about individuals over long periods of time.
  • Vast quantities of data from new sources and novel methods for large-scale data analysis promise to yield deeper understanding of human characteristics, behaviour, and relationships and advance the state of science, public policy, and innovation.
  • At the same time, the collection and use of fine-grained personal data over time is associated with significant risks to individuals, groups, and society at large.
  • This article examines a range of long-term research studies in order to identify the characteristics that drive their unique sets of risks and benefits and the practices established to protect research data subjects from long-term privacy risks.
  • We find that many big data activities in government and industry settings have characteristics and risks similar to those of long-term research studies, but are subject to less oversight and control.
  • We argue that the risks posed by big data over time can best be understood as a function of temporal factors comprising age, period, and frequency, and non-temporal factors such as population diversity, sample size, dimensionality, and intended analytic use (see the sketch after this list).
  • Increasing complexity in any of these factors, individually or in combination, creates heightened risks that are not readily addressable through traditional de-identification and process controls.
  • We provide practical recommendations for big data privacy controls based on the risk factors present in a specific case and informed by recent insights from the state of the art and practice….(More)”.
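
To illustrate that framing (risk as a function that grows with both temporal and non-temporal factors), here is a minimal Python sketch. The factors come from the article; the scoring function and its weights are invented for illustration and are not proposed by the authors.

```python
# Toy illustration of the article's framing: long-term privacy risk grows with
# temporal factors (age, period, frequency) and non-temporal factors
# (e.g., sample size, dimensionality). The scoring function is invented here.

from dataclasses import dataclass

@dataclass
class DataActivity:
    age_years: float             # how old the oldest records are
    period_years: float          # span of time the collection covers
    collections_per_year: float  # frequency of data collection
    sample_size: int             # number of data subjects
    dimensionality: int          # attributes recorded per subject

def relative_risk(a: DataActivity) -> float:
    """Invented monotone score: increasing any factor increases the score."""
    temporal = a.age_years + a.period_years + a.collections_per_year
    non_temporal = a.dimensionality * a.sample_size ** 0.5
    return temporal * non_temporal

one_off_survey = DataActivity(1, 1, 1, 500, 20)
longitudinal_tracking = DataActivity(15, 15, 365, 100_000, 200)
assert relative_risk(longitudinal_tracking) > relative_risk(one_off_survey)
```

The point of the sketch is only the monotone relationship the authors describe: as any of these factors increases, traditional de-identification and process controls become less adequate on their own.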

Citizen Sensing: A Toolkit


Book from Making Sense: “Collaboration using open-source technologies makes it possible to create new and powerful forms of community action, social learning and citizenship. There are now widely accessible platforms through which we can come together to make sense of urgent challenges, and discover ways to address these. Together we can shape our streets, neighbourhoods, cities and countries – and in turn, shape our future. You can join with others to become the solution to challenges in our environment, in our communities and in the way we live together.

In this book, there are ideas and ways of working that can help you build collective understanding and inspire others to take action. By coming together with others on issues you identify and define yourselves, and by designing and using the right tools collaboratively, both your awareness and ability to act will be improved. In the process, everyone involved will have better insights, better arguments and better discussions; sometimes to astonishing effect!

We hope this book will help you engage people to learn more about an issue that concerns you, support you to take action, and change the world for the better. This resource will teach you how to scope your questions, identify and nurture relevant communities, and plan an effective campaign. It will then help you gather data and evidence, interpret your findings, build awareness and achieve tangible outcomes. Finally, it will show you how to reflect on these outcomes, and offers suggestions on how you can leave a lasting legacy.

This book is intended to help community activists who are curious or concerned about one or more issues, whether local or global, and are motivated to take action. This resource can also be of value to professionals in organisations which support community actions and activists. Finally, this book will be of interest to researchers in the fields of citizen science, community activism and participatory sensing, government officials and other public policy actors who wish to include citizens’ voices in the decision-making process…(More)”.

Replicating the Justice Data Lab in the USA: Key Considerations


Blog by Tracey Gyateng and Tris Lumley: “Since 2011, NPC has researched, supported and advocated for the development of impact-focussed Data Labs in the UK. The goal has been to unlock government administrative data so that organisations (primarily nonprofits) that provide a social service can understand the impact of their services on the people who use them.

So far, one of these Data Labs has been developed to measure re-offending outcomes (the Justice Data Lab), and others are currently being piloted for employment and education. Given our seven years of work in this area, we at NPC have decided to reflect on the key factors needed to create a Data Lab in our report: How to Create an Impact Data Lab. This blog outlines these factors, examines whether they are present in the USA, and asks what the next steps should be, drawing on the research undertaken with the Governance Lab….Below we examine the key factors and to what extent they appear to be present within the USA.

Environment: A broad culture that supports impact measurement. Similar to the UK, nonprofits in the USA are increasingly measuring the impact of their services on participants and sharing the difficulties of undertaking robust, high-quality evaluations.

Data: Individual person-level administrative data. A key difference between the two countries is that, in the USA, personal data on social services tends to be held at a local, rather than central, level. In the UK, social services data such as reoffending, education and employment are collated into a central database. In the USA, the federal government has limited centrally collated personal data; instead, this data can be found at the state/city level….

A leading advocate: A Data Lab project team, and strong networks. Data Labs do not manifest by themselves. They require a lead agency to campaign with, and on behalf of, nonprofits to set out a persuasive case for their development. In the USA, we have developed a partnership with the Governance Lab to seek out opportunities where Data Labs can be established, but given the size of the country, there is scope for further collaborations and/or advocates to be identified and supported.

Customers: Identifiable organisations that would use the Data Lab. Initial discussions with several US nonprofits and academics indicate support for a Data Lab in their context. Broad consultation based on an agreed region and outcome(s) will be needed to fully assess the potential customer base.

Data owners: Engaged civil servants. Generating buy-in and persuading various stakeholders, including data owners, analysts and politicians, is a critical part of setting up a Data Lab. While the exact profiles of the right people to approach can only be assessed once a region and outcome(s) of interest have been chosen, there are encouraging signs, such as the passing of the Foundations for Evidence-Based Policymaking Act of 2017 in the House of Representatives which, among other things, mandates the appointment of “Chief Evaluation Officers” in government departments, suggesting that there is bipartisan support for increased data-driven policy evaluation.

Legal and ethical governance: A legal framework for sharing data. In the UK, all personal data is subject to data protection legislation, which provides standardised governance for how personal data can be processed across the country and within the European Union. A universal data protection framework does not exist within the USA, therefore data sharing agreements between customers and government data-owners will need to be designed for the purposes of Data Labs, unless there are existing agreements that enable data sharing for research purposes. This will need to be investigated at the state/city level of a desired Data Lab.

Funding: Resource and support for driving the set-up of the Data Lab. Most of our policy lab case studies were funded by a mixture of philanthropy and government grants. It is expected that a similar mixed funding model will need to be created to establish Data Labs. One alternative is the model adopted by the Washington State Institute for Public Policy (WSIPP), which was created by the Washington State Legislature and is funded on a project basis, primarily by the state. Additionally, funding will be needed to enable advocates of a Data Lab to campaign for the service….(More)”.