Can Technology Support Democracy?


Essay by Douglas Schuler: “The utopian optimism about democracy and the internet has given way to disillusionment. At the same time, given the complexity of today’s wicked problems, the need for democracy is critical. Unfortunately, democracy is under attack around the world, and there are ominous signs of its retreat.

How does democracy fare when digital technology is added to the picture? Weaving technology and democracy together is risky, and technologists who begin any digital project with the conviction that technology can and will solve “problems” of democracy are likely to be disappointed. Technology can be a boon to democracy if it is informed technology.

The goal in writing this essay was to encourage people to help develop and cultivate a rich democratic sphere. Democracy has great potential that it rarely achieves. It is radical, critical, complex, and fragile. It takes different forms in different contexts. These forms are complex, and the solutionism promoted by the computer industry and others is not appropriate for democracies. The primary aim of technology in the service of democracy is not merely to make it easier or more convenient but to improve society’s civic intelligence, its ability to address the problems it faces effectively and equitably….(More)”.

Who will benefit most from the data economy?


Special Report by The Economist: “The data economy is a work in progress. Its economics still have to be worked out; its infrastructure and its businesses need to be fully built; geopolitical arrangements must be found. But there is one final major tension: between the wealth the data economy will create and how it will be distributed. The data economy—or the “second economy”, as Brian Arthur of the Santa Fe Institute terms it—will make the world a more productive place no matter what, he predicts. But who gets what and how is less clear. “We will move from an economy where the main challenge is to produce more and more efficiently,” says Mr Arthur, “to one where distribution of the wealth produced becomes the biggest issue.”

The data economy as it exists today is already very unequal. It is dominated by a few big platforms. In the most recent quarter, Amazon, Apple, Alphabet, Microsoft and Facebook made a combined profit of $55bn, more than the next five most valuable American tech firms made over the past 12 months. This corporate inequality is largely the result of network effects—economic forces that mean size begets size. A firm that can collect a lot of data, for instance, can make better use of artificial intelligence and attract more users, who in turn supply more data. Such firms can also recruit the best data scientists and have the cash to buy the best AI startups.
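
The “size begets size” dynamic is easy to see in a toy model. The sketch below invents all of its numbers and describes no real firm; it assumes service quality grows superlinearly with accumulated data (the usual network-effect assumption) and lets new users choose between two firms in proportion to quality. A modest head start then compounds into dominance.

```python
# Toy simulation of the network-effect loop described above: users
# generate data, data improves the service, and the better service
# attracts the next wave of users. All parameters are invented; the
# superlinear returns to data (quality ~ data**2) are an assumption.

def simulate(periods: int = 20, new_users: float = 1000.0):
    data = [100.0, 90.0]  # firm A starts with a modest 10% lead
    for _ in range(periods):
        quality = [d ** 2 for d in data]              # assumed superlinear
        share_a = quality[0] / (quality[0] + quality[1])
        data[0] += new_users * share_a                # new users bring data
        data[1] += new_users * (1.0 - share_a)
    return data

a, b = simulate()
print(f"after 20 periods, firm A holds {a / (a + b):.0%} of all data")
```

With linear returns the initial ratio between the two firms merely persists; it is the superlinear term that produces the winner-take-all outcome.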

It is also becoming clear that, as the data economy expands, these sorts of dynamics will increasingly apply to non-tech companies and even countries. In many sectors, the race to become a dominant data platform is on. This is the mission of Compass, a startup, in residential property. It is one goal of Tesla in self-driving cars. And Apple and Google hope to repeat the trick in health care. As for countries, America and China account for 90% of the market capitalisation of the world’s 70 largest platforms, Africa and Latin America for just 1%. Economies on both continents risk “becoming mere providers of raw data…while having to pay for the digital intelligence produced,” the United Nations Conference on Trade and Development recently warned.

Yet it is the skewed distribution of income between capital and labour that may turn out to be the most pressing problem of the data economy. As it grows, more labour will migrate into the mirror worlds, just as other economic activity will. It is not only that people will do more digitally, but that they will perform actual “data work”: generating the digital information needed to train and improve AI services. This can mean simply moving about online and providing feedback, as most people already do. But it will increasingly include more active tasks, such as labelling pictures, driving data-gathering vehicles and perhaps, one day, putting one’s digital twin through its paces. This is why some say AI should actually be called “collective intelligence”: it takes in a lot of human input—something big tech firms hate to admit….(More)”.

Beyond Takedown: Expanding the Toolkit for Responding to Online Hate


Paper by Molly K. Land and Rebecca J. Hamilton: “The current preoccupation with ‘fake news’ has spurred a renewed emphasis in popular discourse on the potential harms of speech. In the world of international law, however, ‘fake news’ is far from new. Propaganda of various sorts is a well-worn tactic of governments, and in its most insidious form, it has played an instrumental role in inciting and enabling some of the worst atrocities of our time. Yet as familiar as propaganda might be in theory, it is raising new issues as it has migrated to the digital realm. Technological developments have largely outpaced existing legal and political tools for responding to the use of mass communications devices to instigate or perpetrate human rights violations.

This chapter evaluates the current practices of social media companies for responding to online hate, arguing that they are inevitably both overbroad and under-inclusive. Using the example of the role played by Facebook in the recent genocide against the minority Muslim Rohingya population in Myanmar, the chapter illustrates the failure of platform hate speech policies to address pervasive and coordinated online speech, often state-sponsored or state-aligned, that denigrates a particular group and is used to justify or foster impunity for violence against that group. Addressing this “conditioning speech” requires a more tailored response that includes remedies other than content removal and account suspensions. The chapter concludes by surveying a range of innovative responses to harmful online content that would give social media platforms the flexibility to intervene earlier, but with a much lighter touch….(More)”.

We All Wear Tinfoil Hats Now


Article by Geoff Shullenberger on “How fears of mind control went from paranoid delusion to conventional wisdom”: “In early 2017, after the double shock of Brexit and the election of Donald Trump, the British data-mining firm Cambridge Analytica gained sudden notoriety. The previously little-known company, reporters claimed, had used behavioral influencing techniques to turn out social media users to vote in both elections. By its own account, Cambridge Analytica had worked with both campaigns to produce customized propaganda for targeting individuals on Facebook likely to be swept up in the tide of anti-immigrant populism. Its methods, some news sources suggested, might have sent enough previously disengaged voters to the polls to have tipped the scales in favor of the surprise victors. To a certain segment of the public, this story seemed to answer the question raised by both upsets: How was it possible that the seemingly solid establishment consensus had been rejected? What’s more, the explanation confirmed everything that seemed creepy about the Internet, evoking a sci-fi vision of social media users turned into an army of political zombies, mobilized through subliminal manipulation.

Cambridge Analytica’s violations of Facebook users’ privacy have made it an enduring symbol of the dark side of social media. However, the more dramatic claims about the extent of the company’s political impact collapse under closer scrutiny, mainly because its much-hyped “psychographic targeting” methods probably don’t work. As former Facebook product manager Antonio García Martínez noted in a 2018 Wired article, “the public, with no small help from the media sniffing a great story, is ready to believe in the supernatural powers of a mostly unproven targeting strategy,” but “most ad insiders express skepticism about Cambridge Analytica’s claims of having influenced the election, and stress the real-world difficulty of changing anyone’s mind about anything with mere Facebook ads, least of all deeply ingrained political views.” According to García Martínez, the entire affair merely confirms a well-established truth: “In the ads world, just because a product doesn’t work doesn’t mean you can’t sell it.”…(More)”.

Nudge Theory and Decision Making: Enabling People to Make Better Choices


Chapter by Vikramsinh Amarsinh Patil: “This chapter examines the theoretical underpinnings of nudge theory and makes a case for incorporating nudging into the decision-making process in corporate contexts. Nudging, and more broadly behavioural economics, have become buzzwords on account of the seminal work that has been done by economists and the highly publicized interventions employed by governments to support national priorities. Firms are not to be left behind, however. What follows is extensive documentation of firms that have successfully employed nudging techniques. The examples are segmented by the nudge recipient, namely managers, employees, and consumers. Firms can guide managers to become better leaders, employees to become more productive, and consumers to stay loyal. However, nudging is not without its pitfalls. It can be put to nefarious ends and can be notoriously difficult to implement and execute. Therefore, nudges should be rigorously tested via experimentation and should be ethically sound….(More)”.
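
The closing prescription, that nudges be rigorously tested via experimentation, can be made concrete with a minimal sketch: randomize who receives the nudge, then compare outcome rates between the groups. Everything below is invented for illustration; the test is a standard two-proportion z-test.

```python
# Minimal sketch of the experimentation the chapter calls for: randomize
# who receives a nudge, then compare outcome rates between groups with a
# two-proportion z-test. All figures are invented for illustration.
from math import erf, sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical trial: 1,200 employees emailed a savings-plan reminder
# framed as a default (nudge) vs. 1,200 sent the standard email (control).
z, p = two_proportion_z(312, 1200, 264, 1200)
print(f"enrollment: nudge 26.0% vs control 22.0%, z={z:.2f}, p={p:.3f}")
# -> z=2.29, p=0.022: the difference is unlikely to be chance alone
```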

Collaborative e-Rulemaking, Democratic Bots, and the Future of Digital Democracy


Paper by Oren Perez: “… focuses on “deliberative e-rulemaking”: digital consultation processes that seek to facilitate public deliberation over policy or regulatory proposals. The main challenge of e-rulemaking platforms is to support an “intelligent” deliberative process that enables decision makers to identify a wide range of options, weigh the relevant considerations, and develop epistemically responsible solutions. This article discusses and critiques two approaches to this challenge: the Cornell Regulation Room project and the model of computationally assisted regulatory participation proposed by Livermore et al. It then proceeds to explore two alternative approaches to e-rulemaking. The first is based on the implementation of collaborative, wiki-styled tools; the article discusses the findings of an experiment, conducted at Bar-Ilan University, that explored various aspects of a wiki-based collaborative e-rulemaking system. The second is more futuristic, focusing on the potential development of autonomous, artificial democratic agents. This article critically discusses this alternative, also in view of the recent debate regarding the idea of “augmented democracy.”…(More)”.

Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies


The Administrative Conference of the United States: “Artificial intelligence (AI) promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.

These are important issues for public debate and academic inquiry. Yet little is known about how agencies are currently using AI systems beyond a few headline-grabbing examples or surface-level descriptions. Moreover, even amidst growing public and scholarly discussion about how society might regulate government use of AI, little attention has been devoted to how agencies acquire such tools in the first place or oversee their use. In an effort to fill these gaps, the Administrative Conference of the United States (ACUS) commissioned this report from researchers at Stanford University and New York University. The research team included a diverse set of lawyers, law students, computer scientists, and social scientists with the capacity to analyze these cutting-edge issues from technical, legal, and policy angles. The resulting report offers three cuts at federal agency use of AI:

  • a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies (Part I);
  • a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks (Part II); and
  • a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI (Part III)….(More)”

Digital tools can be a useful bolster to democracy


Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Ms Tang views as a circular phenomenon.

When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.

The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.

Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.

For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
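
The mechanism is simple enough to sketch in a few lines. In the toy implementation below, with invented issues and budgets rather than anything from the Colorado exercise, each voter receives a fixed budget of voice credits, and casting n votes on one issue costs n² credits, which makes expressing intensity quadratically more expensive.

```python
# Minimal sketch of quadratic voting: casting n votes on an issue costs
# n**2 voice credits. Issues, budgets, and ballots are illustrative,
# not drawn from any real platform or vote.

BUDGET = 100  # voice credits per voter (assumed)

def cost(votes: int) -> int:
    """Quadratic cost rule: n votes cost n**2 credits."""
    return votes ** 2

def tally(ballots: list[dict[str, int]]) -> dict[str, int]:
    """Sum votes per issue, rejecting any ballot that overspends."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        spent = sum(cost(v) for v in ballot.values())
        if spent > BUDGET:
            raise ValueError(f"ballot exceeds budget: {spent} > {BUDGET}")
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# Three voters spread 100 credits across proposals; voter A cares
# intensely about one issue, the others spread their credits out.
ballots = [
    {"parks": 9, "transit": 4},                 # 81 + 16 = 97 credits
    {"parks": 2, "transit": 7, "housing": 5},   # 4 + 49 + 25 = 78 credits
    {"housing": 8, "transit": 6},               # 64 + 36 = 100 credits
]
print(tally(ballots))  # {'parks': 11, 'transit': 17, 'housing': 13}
```

The quadratic cost is the point of the design: moving from 8 to 9 votes on “parks” costs the first voter 17 extra credits (81 versus 64), so strong preferences register, but dominating a single issue quickly exhausts the budget.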

Behavioral Public Administration: Past, Present, and Future


Essay by Syon P. Bhanot and Elizabeth Linos: “The last decade has seen remarkable growth in the field of behavioral public administration, both in practice and in academia. In both domains, applications of behavioral science to policy problems have moved forward at breakneck speed; researchers are increasingly pursuing randomized behavioral interventions in public administration contexts, editors of peer‐reviewed academic journals are showing greater interest in publishing this work, and policy makers at all levels are creating new initiatives to bring behavioral science into the public sector.

However, because the expansion of the field has been so rapid, there has been relatively little time to step back and reflect on the work that has been done and to assess where the field is going in the future. It is high time for such reflection: where is the field currently on track, and where might it need course correction?…(More)”.

Dark Data: Why What You Don’t Know Matters


Book by David J. Hand: “In the era of big data, it is easy to imagine that we have all the information we need to make good decisions. But in fact the data we have are never complete, and may be only the tip of the iceberg. Just as much of the universe is composed of dark matter, invisible to us but nonetheless present, the universe of information is full of dark data that we overlook at our peril. In Dark Data, data expert David Hand takes us on a fascinating and enlightening journey into the world of the data we don’t see.

Dark Data explores the many ways in which we can be blind to missing data and how that can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous. Examining a wealth of real-life examples, from the Challenger shuttle explosion to complex financial frauds, Hand gives us a practical taxonomy of the types of dark data that exist and the situations in which they can arise, so that we can learn to recognize and control for them. In doing so, he not only teaches us to be alert to the problems presented by the things we don’t know but also shows how dark data can be used to our advantage, leading to greater understanding and better decisions.
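
A toy example, not drawn from the book, shows how invisible this kind of blindness can be: if a survey’s response rate rises with satisfaction, the average computed from respondents alone drifts upward, and nothing in the observed data signals the distortion.

```python
# Toy illustration (invented numbers, not from the book) of data that
# are "missing not at random": response probability rises with
# satisfaction, so the silent, dissatisfied customers become dark data.
import random

random.seed(42)
population = [random.gauss(5.0, 2.0) for _ in range(100_000)]  # true scores

# Each customer responds with probability roughly proportional to their
# score; the unhappy ones mostly never appear in the dataset.
observed = [s for s in population
            if random.random() < min(max(s / 10.0, 0.0), 1.0)]

true_mean = sum(population) / len(population)
seen_mean = sum(observed) / len(observed)
print(f"true mean of all customers: {true_mean:.2f}")  # about 5.0
print(f"mean of survey responses:   {seen_mean:.2f}")  # biased upward
```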

Today, we all make decisions using data. Dark Data shows us all how to reduce the risk of making bad ones….(More)”.