Participatory Budgeting: A Step Toward Building Active Citizenship or a Distraction from Democratic Backsliding?


David Sasaki: “Is there any there there? That’s what we wanted to uncover beneath the hype and skepticism surrounding participatory budgeting, an innovation in democracy that began in Brazil in 1989 and has quickly spread to nearly every corner of the world like a viral hashtag….We ended up selecting two groups of consultants for two phases of work. The first phase was led by three academic researchers — Brian Wampler, Mike Touchton and Stephanie McNulty — to synthesize what we know broadly about PB’s impact and where there are gaps in the evidence. mySociety led the second phase, which was originally intended to identify the opportunities and challenges faced by civil society organizations and public officials that implement participatory budgeting. However, a number of unforeseen circumstances, including contested elections in Kenya and a major earthquake in Mexico, shifted mySociety’s focus to take a global, field-wide perspective.

In the end, we were left with two reports that are similar in scope but differ in perspective. Together they make for compelling reading, and despite their different vantage points they settle on similar recommendations. I’ll focus on just three: 1) the need for better research, 2) the lack of global coordination, and 3) the emerging opportunity to link natural resource governance with participatory budgeting….

As we consider some preliminary opportunities to advance participatory budgeting, we are clear-eyed about the risks and challenges. In the face of democratic backsliding and the concern that liberal democracy may not survive the 21st century, are these efforts to deepen local democracy merely a distraction from a larger threat, or is this a way to build active citizenship? Also, implementing PB is expensive — both in terms of money and time; is it worth the investment? Is PB just the latest checkbox for governments that want a reputation for supporting citizen participation without investing in the values and process it entails? Just like the proliferation of fake “consultation meetings,” fake PB could merely exacerbate our disappointment with democracy. What should we make of the rise of participatory budgeting in quasi-authoritarian contexts like China and Russia? Is PB a tool for undemocratic central governments to keep local governments in check while giving citizens a simulacrum of democratic participation? Crucially, without intentional efforts to be inclusive like we’ve seen in Boston, PB could merely direct public resources to those neighborhoods with the most outspoken and powerful residents.

On the other hand, we don’t want to dismiss the significant opportunities that come with PB’s rapid global expansion. For example, what happens when social movements lose their momentum between election cycles? Participatory budgeting could create a civic space for social movements to pursue concrete outcomes while engaging with neighbors and public officials. (In China, it has even helped address the urban-rural divide in perspectives on development policy.) Meanwhile, social media have exacerbated our human tendency to complain, but participatory budgeting requires us to shift from complaining to engaging with others on solutions. It could even serve as a gateway to deeper forms of democratic participation and increased trust between governments, civil society organizations, and citizens. Perhaps participatory budgeting is the first step we need to rebuild our civic infrastructure and make space for more diverse voices to steer our complex public institutions.

Until we have more research and evidence, however, these possibilities remain speculative….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
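In essence, the comparison reduces to an effectiveness-per-dollar ratio computed on a common, inflation-adjusted basis. Here is a minimal sketch of that arithmetic with made-up figures (the function names, CPI values and dollar amounts below are illustrative assumptions, not the study’s data):

```python
# Hypothetical cost-effectiveness comparison (illustrative figures only,
# not the study's data). Effectiveness is the outcome gained per dollar spent.

def inflation_adjust(cost: float, cpi_then: float, cpi_now: float) -> float:
    """Restate a historical program cost in current dollars."""
    return cost * (cpi_now / cpi_then)

def effectiveness_per_dollar(outcome_gain: float, program_cost: float) -> float:
    """Ratio of the policy outcome (e.g., extra dollars saved for
    retirement per person) to what the intervention cost to run."""
    return outcome_gain / program_cost

# A nudge (say, an enrollment-prompt mailing) vs. a traditional financial
# incentive, with costs from different years restated on one price basis.
nudge_cost = inflation_adjust(cost=1.00, cpi_then=218.1, cpi_now=237.0)
incentive_cost = inflation_adjust(cost=8.00, cpi_then=172.2, cpi_now=237.0)

print(effectiveness_per_dollar(outcome_gain=100.0, program_cost=nudge_cost))
print(effectiveness_per_dollar(outcome_gain=100.0, program_cost=incentive_cost))
```

Putting every study on this common footing is what lets nudges and traditional mandates be ranked against each other at all.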

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

A New Model for Industry-Academic Partnerships


Working Paper by Gary King and Nathaniel Persily: “The mission of the academic social sciences is to understand and ameliorate society’s greatest challenges. The data held by private companies hold vast potential to further this mission. Yet, because of their entanglement with highly politicized issues, customer privacy, proprietary content, and the differing goals of firms and academics, these data are often inaccessible to university researchers.

We propose here a new model for industry-academic partnerships that addresses these problems via a novel organizational structure: Respected scholars form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.

We also report on a partnership we helped forge under this model to make data available about the extremely visible and highly politicized issues surrounding the impact of social media on elections and democracy. In our partnership, Facebook will provide privacy-preserving data and access; seven major politically and substantively diverse nonprofit foundations will fund the research; and the Social Science Research Council will oversee the peer review process for funding and data access….(More)”.

Democracy is in danger when the census undercounts vulnerable populations


Emily Klancher Merchant at The Conversation: “The 2020 U.S. Census is still two years away, but experts and civil rights groups are already disputing the results. At issue is whether the census will fulfill the Census Bureau’s mandate to “count everyone once, only once, and in the right place.”

The task is hardly as simple as it seems and has serious political consequences. Recent changes to the 2020 census, such as asking about citizenship status, will make populations already vulnerable to undercounting even more likely to be missed. These vulnerable populations include the young, poor, nonwhite, non-English-speaking, foreign-born and transient.

An accurate count is critical to the functioning of the U.S. government. Census data determine how the power and resources of the federal government are distributed across the 50 states. This includes seats in the House, votes in the Electoral College and funds for federal programs. Census data also guide the drawing of congressional and other voting districts and the enforcement of civil and voting rights laws.
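The mechanics make the stakes concrete: House seats are allocated by the Huntington-Hill method (the method of equal proportions, used since 1941), so an undercount in one state directly weakens its claim to the last seats assigned. Below is a minimal sketch with made-up state populations, not census data:

```python
# Huntington-Hill apportionment sketch (illustrative populations only).
# After each state's guaranteed first seat, remaining seats go to the
# state with the highest priority value pop / sqrt(n * (n + 1)),
# where n is that state's current seat count.
import heapq
import math

def apportion(populations: dict[str, int], seats: int) -> dict[str, int]:
    alloc = {state: 1 for state in populations}  # one seat guaranteed each
    # Max-heap via negated priorities; initial priority is pop / sqrt(2).
    heap = [(-pop / math.sqrt(2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        n = alloc[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
    return alloc

states = {"A": 5_000_000, "B": 3_000_000, "C": 1_000_000}
print(apportion(states, 10))
# Undercount state C by 10% and its priority for the final seats drops.
states["C"] = 900_000
print(apportion(states, 10))
```

Running the same allocation before and after the undercount shows how missed residents translate directly into lost representation.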

Places where large numbers of people go uncounted get less than their fair share of political representation and federal resources. When specific racial and ethnic groups are undercounted, it is harder to identify and rectify violations of their civil rights. My research on the international history of demography demonstrates that the question of how to equitably count the population is not new, nor is it unique to the United States. The experience of the United States and other countries may hold important lessons as the Census Bureau finalizes its plans for the 2020 count.

Let’s take a look at that history….

In 1790, the United States became the first country to take a regular census. Following World War II, the U.S. government began to promote census-taking in other countries. U.S. leaders believed data about the size and location of populations throughout the Western Hemisphere could help the government plan defense. What’s more, U.S. businesses could also use the data to identify potential markets and labor forces in nearby countries.

The U.S. government began investing in a program called the Census of the Americas. Through this program, the State Department provided financial support and the Census Bureau provided technical assistance to Western Hemisphere countries taking censuses in 1950.

United Nations demographers also viewed the Census of the Americas as an opportunity. Data that were standardized across countries could serve as the basis for projections of world population growth and the calculation of social and economic indicators. They also hoped that censuses would provide useful information to newly established governments. The U.N. turned the Census of the Americas into a global affair, recommending that “all Member States planning population censuses about 1950 use comparable schedules so far as possible.” Since 1960, the U.N. has sponsored a World Census Program every 10 years. The 2020 World Census Program will be the seventh round….

Not all countries went along with the program. For example, Lebanon’s Christian rulers feared that a census would show Christians to be a minority, undermining the legitimacy of their government. However, for the 65 sovereign countries taking censuses between 1945 and 1954, leaders faced the same question the U.S. faces today: How can we make sure that everyone has an equal chance of being counted?…(More)”.

Algorithmic Sovereignty


Thesis by Denis Roio: “This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, highlighting the governance implications of the design choices made in them. The research provides answers and documents methodologies to address the urgent need for more awareness of the decisions algorithms make about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are, and what agency is left to the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted.

This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, led to the realisation of a new operating system that is now known worldwide. Another project, “sup”, consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. A third project, D-CENT, consisted of a three-year path of cutting-edge research, funded by the European Commission, on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organizations.

My original contribution to knowledge lies in how the research underpinning these projects improves our understanding of the sociopolitical aspects of designing and managing algorithms. It suggests that we can improve the design and regulation of future public, private and common spaces, which are increasingly governed by algorithms, by understanding not only the economic and legal implications, but also the connections between design choices and the sociopolitical context of their development and execution….(More)”.

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the Stanford-Binet I.Q. test) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.

Reapplying behavioral symmetry: public choice and choice architecture


Michael David Thomas in Public Choice: “New justifications for government intervention based on behavioral psychology rely on a behavioral asymmetry between expert policymakers and market participants. Public choice theory applied the behavioral symmetry assumption to policymaking in order to illustrate how special interests corrupt the supposition of benevolence on the part of policymakers. Cognitive problems associated with market choices have been used to argue for even more intervention.

If behavioral symmetry is applied to the experts and not just to market participants, problems with this approach to public policy formation become clear. Manipulation, cognitive capture, and expert bias are among the problems associated with a behavioral theory of market failure. The application of behavioral symmetry to the expanding role of choice architecture will help to limit the bias in behavioral policy. Since experts are also subject to cognitive failures, policy must include an evaluation of expert error. Like the rent-seeking literature before it, a theory of cognitive capture points out the systematic problems with a theory of asymmetry between policy experts and citizens when it comes to policy making….(More)”.

The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It


Book by Yascha Mounk: “The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.

Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”

The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.

The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy….(More)”

Psychographics: the behavioural analysis that helped Cambridge Analytica know voters’ minds


Michael Wade at The Conversation: “Much of the discussion has been on how Cambridge Analytica was able to obtain data on more than 50m Facebook users – and how it allegedly failed to delete this data when told to do so. But there is also the matter of what Cambridge Analytica actually did with the data. In fact the data crunching company’s approach represents a step change in how analytics can today be used as a tool to generate insights – and to exert influence.

For example, pollsters have long used segmentation to target particular groups of voters, such as through categorising audiences by gender, age, income, education and family size. Segments can also be created around political affiliation or purchase preferences. The data analytics machine that presidential candidate Hillary Clinton used in her 2016 campaign – named Ada after the 19th-century mathematician and early computing pioneer – used state-of-the-art segmentation techniques to target groups of eligible voters in the same way that Barack Obama had done four years previously.

Cambridge Analytica was contracted to the Trump campaign and provided an entirely new weapon for the election machine. While it also used demographic segments to identify groups of voters, as Clinton’s campaign had, Cambridge Analytica also segmented using psychographics. As definitions of class, education, employment, age and so on, demographics are informational. Psychographics are behavioural – a means to segment by personality.

This makes a lot of sense. It’s obvious that two people with the same demographic profile (for example, white, middle-aged, employed, married men) can have markedly different personalities and opinions. We also know that adapting a message to a person’s personality – whether they are open, introverted, argumentative, and so on – goes a long way toward getting that message across….

There have traditionally been two routes to ascertaining someone’s personality: you can either get to know them really well – usually over an extended period – or you can get them to take a personality test and ask them to share the results with you. Neither of these methods is realistically open to pollsters. Cambridge Analytica found a third way, with the assistance of two University of Cambridge academics.

The first, Aleksandr Kogan, sold them access to 270,000 personality tests completed by Facebook users through an online app he had created for research purposes. Providing the data to Cambridge Analytica was, it seems, against Facebook’s internal code of conduct, but only now in March 2018 has Kogan been banned by Facebook from the platform. In addition, Kogan’s data also came with a bonus: he had reportedly collected Facebook data from the test-takers’ friends – and, at an average of 200 friends per person, that added up to some 50m people.

However, these 50m people had not all taken personality tests. This is where the second Cambridge academic, Michal Kosinski, came in. Kosinski – who is said to believe that micro-targeting based on online data could strengthen democracy – had figured out a way to reverse engineer a personality profile from Facebook activity such as likes. Whether you choose to like pictures of sunsets, puppies or people apparently says a lot about your personality. So much, in fact, that on the basis of 300 likes, Kosinski’s model is able to predict someone’s personality profile with the same accuracy as a spouse….(More)”
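In outline, this kind of model treats the sparse user-by-like matrix as input features for a supervised predictor trained on the users who did take the test, then scores everyone else. Here is a minimal sketch of one plausible approach, using synthetic data; it is an assumption-laden illustration, not Kosinski’s actual pipeline, data or parameters:

```python
# Illustrative likes-to-personality sketch (synthetic data only).
# Compress the binary user-by-like matrix into latent dimensions,
# then regress a personality trait score onto those dimensions.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 500))  # 1,000 users x 500 page likes
y = rng.normal(size=1000)                 # stand-in "openness" scores

model = make_pipeline(
    TruncatedSVD(n_components=40, random_state=0),  # latent like-patterns
    Ridge(alpha=1.0),                               # trait regression
)
model.fit(X[:800], y[:800])        # train on users who took the test...
predicted = model.predict(X[800:])  # ...predict traits for those who didn't
print(predicted[:5])
```

The key point is the second step: once a mapping from like patterns to traits exists, it can be applied to millions of users who never consented to a personality test.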

Co-creation in Urban Governance: From Inclusion to Innovation


Paper by Dorthe Hedensted Lund: “This article sets out to establish what we mean by the recent buzzword ‘co-creation’ and what practical implications the concept has for democracy in urban governance, in both theory and practice.

The rise of the concept points to a shift in how public participation is understood. Whereas from the 1970s onwards, following Sherry Arnstein, discussions of participation centred on rights and power, participation conceptualised as co-creation instead focuses on including diverse forms of knowledge in urban processes in order to create innovative solutions to complex problems.

Consequently, democratic legitimacy now relies to a much greater extent on output rather than input legitimacy. Rather than the provision of inclusive spaces for democratic debate and the empowerment of the deprived, which have been the goals of numerous urban participatory efforts in the past, it is the ability to solve complex problems that has become the main criterion for evaluating co-creation. Furthermore, conceptualising participation as co-creation has consequences for the roles available to both citizens and public administrators in urban processes, which has implications for urban governance. An explicit debate, both in academia and in practice, about the normative content and implications of conceptualising participation as co-creation is therefore salient and necessary….(More)”.