The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It


Book by Yascha Mounk: “The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.

Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”

The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.

The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy….(More)”

Law, Metaphor, and the Encrypted Machine


Paper by Lex Gill: “The metaphors we use to imagine, describe and regulate new technologies have profound legal implications. This paper offers a critical examination of the metaphors we choose to describe encryption technology in particular, and aims to uncover some of the normative and legal implications of those choices.

Part I provides a basic description of encryption as a mathematical and technical process. At the heart of this paper is a question about what encryption is to the law. It is therefore fundamental that readers have a shared understanding of the basic scientific concepts at stake. This technical description will then serve to illustrate the host of legal and political problems arising from encryption technology, the most important of which are addressed in Part II. That section also provides a brief history of various legislative and judicial responses to the encryption “problem,” mapping out some of the major challenges still faced by jurists, policymakers and activists. While this paper draws largely upon common law sources from the United States and Canada, metaphor provides a core form of cognitive scaffolding across legal traditions. Part III explores the relationship between metaphor and the law, demonstrating the ways in which it may shape, distort or transform the structure of legal reasoning. Part IV demonstrates that the function served by legal metaphor is particularly determinative wherever the law seeks to integrate novel technologies into old legal frameworks. Strong, ubiquitous commercial encryption has created a range of legal problems for which the appropriate metaphors remain unfixed. Part V establishes a loose framework for thinking about how encryption has been described by courts and lawmakers — and how it could be. What does it mean to describe the encrypted machine as a locked container or building? As a combination safe? As a form of speech? As an untranslatable library or an unsolvable puzzle? What is captured by each of these cognitive models, and what is lost? This section explores both the technological accuracy and the legal implications of each choice. 
Finally, the paper offers a few concluding thoughts about the utility and risk of metaphor in the law, reaffirming the need for a critical, transparent and lucid appreciation of language and the power it wields….(More)”.

Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life


Report by Jennifer Kavanagh and Michael D. Rich: “Over the past two decades, national political and civil discourse in the United States has been characterized by “Truth Decay,” defined as a set of four interrelated trends: an increasing disagreement about facts and analytical interpretations of facts and data; a blurring of the line between opinion and fact; an increase in the relative volume, and resulting influence, of opinion and personal experience over fact; and lowered trust in formerly respected sources of factual information. These trends have many causes, but this report focuses on four: characteristics of human cognitive processing, such as cognitive bias; changes in the information system, including social media and the 24-hour news cycle; competing demands on the education system that diminish time spent on media literacy and critical thinking; and polarization, both political and demographic. The most damaging consequences of Truth Decay include the erosion of civil discourse, political paralysis, alienation and disengagement of individuals from political and civic institutions, and uncertainty over national policy.

This report explores the causes and consequences of Truth Decay and how they are interrelated, and examines past eras of U.S. history to identify evidence of Truth Decay’s four trends and observe similarities with and differences from the current period. It also outlines a research agenda, a strategy for investigating the causes of Truth Decay and determining what can be done to address its causes and consequences….(More)”.

Issuing Bonds to Invest in People


Tina Rosenberg at the New York Times: “The first social impact bond began in 2010 in Peterborough, England. Investors funded a program aimed at keeping newly released short-term inmates out of prison. It reduced reoffending by 9 percent compared to a control group, exceeding its target. So investors got their money back, plus interest.

Seldom has a policy idea gone viral so fast. There are now 108 such bonds, in 24 countries. The United States has 20, leveraging $211 million in investment capital, and at least 50 more are on the way. These bonds fund programs to reduce Oklahoma’s population of women in prison, help low-income mothers to have healthy pregnancies in South Carolina, teach refugees and immigrants English and job skills in Boston, house the homeless in Denver, and reduce storm water runoff in the District of Columbia. There’s a Forest Resilience Bond underway that seeks to finance desperately needed wildfire prevention.

Here’s how social impact bonds differ from standard social programs:

They raise upfront money to do prevention. Everyone knows most prevention is a great investment. But politicians don’t do “think ahead” very well. They hate to spend money now to create savings their successors will reap. Issuing a social impact bond means they don’t have to.

They concentrate resources on what works. Bonds build market discipline, since investors demand evidence of success.

They focus attention on outcomes rather than outputs. “Take work-force training,” said David Wilkinson, commissioner of Connecticut’s Office of Early Childhood. “We tend to pay for how many people receive training. We’re less likely to pay for — or even look at — how many people get good jobs.” Providers, he said, were best recognized for their work “when we reward them for outcomes they want to see and families they are serving want to achieve.”

They improve incentives. Focusing on outcomes changes the way social service providers think. In Connecticut, said Duryea, they now have a financial incentive to keep children out of foster care, rather than bring more in.

They force decision makers to look at data. Programs start with great fanfare, but often nobody examines how they are doing afterward. With a bond, evaluation is essential.

They build in flexibility. “It’s a big advantage that they don’t prescribe what needs to be done,” said Cohen. The people on the ground choose the strategy, and can change it if necessary. “Innovators can think outside the box and tackle health or education in revolutionary ways,” he said.

…In the United States, social impact bonds have become synonymous with “pay for success” programs. But there are other ways to pay for success. For example, Wilkinson, the Connecticut official, has just started an Outcomes Rate Card — a way for a government to pay for home visits for vulnerable families. The social service agencies get base pay, but also bonuses. If a client has a full-term birth, the agency gets an extra $135 for a low-risk family, $170 for a hard-to-help one. A client who finds stable housing brings $150 or $220 to the agency, depending on the family’s situation….(More)”.
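The rate-card mechanics described above amount to a simple payment schedule: base pay plus a per-outcome bonus that depends on the family's risk tier. A minimal sketch, using the bonus figures quoted in the article but an invented base payment:

```python
# Hypothetical sketch of the Outcomes Rate Card logic described above.
# The outcomes and risk tiers mirror the article's examples; the base
# payment amount is an assumption chosen for illustration.

BONUS_SCHEDULE = {
    # outcome: {risk tier: bonus in dollars}
    "full_term_birth": {"low_risk": 135, "hard_to_help": 170},
    "stable_housing": {"low_risk": 150, "hard_to_help": 220},
}

def agency_payment(base_pay, outcomes_achieved, risk_tier):
    """Base pay plus a bonus for each outcome the client achieves."""
    bonus = sum(BONUS_SCHEDULE[o][risk_tier] for o in outcomes_achieved)
    return base_pay + bonus

# A hard-to-help family achieving a full-term birth and stable housing:
# 1000 + 170 + 220 = 1390
print(agency_payment(1000, ["full_term_birth", "stable_housing"], "hard_to_help"))
```

The design point is that the schedule, not the contract negotiation, encodes the government's valuation of each outcome, so providers can see exactly what success pays.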

How to Make A.I. That’s Good for People


Fei-Fei Li in the New York Times: “For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast. Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for a researcher like me. When I was a graduate student in computer science in the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

First, A.I. needs to reflect more of the depth that characterizes our own intelligence….

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility….(More).

Ostrom in the City: Design Principles and Practices for the Urban Commons


Chapter by Sheila Foster and Christian Iaione in Routledge Handbook of the Study of the Commons (Dan Cole, Blake Hudson, Jonathan Rosenbloom eds.): “If cities are the places where most of the world’s population will be living in the next century, as is predicted, it is not surprising that they have become sites of contestation over use and access to urban land, open space, infrastructure, and culture. The question posed by Saskia Sassen in a recent essay—who owns the city?—is arguably at the root of these contestations and of social movements that resist the enclosure of cities by economic elites (Sassen 2015). One answer to the question of who owns the city is that we all do. In our work we argue that the city is a common good or a “commons”—a shared resource that belongs to all of its inhabitants, and to the public more generally.

We have been writing about the urban commons for the last decade, very much inspired by the work of Jane Jacobs and Elinor Ostrom. The idea of the urban commons captures the ecological view of the city that characterizes Jane Jacobs’s classic work, The Death and Life of Great American Cities. (Foster 2006) It also builds on Elinor Ostrom’s finding that common resources are capable of being collectively managed by users in ways that support their needs yet sustain the resource over the long run (Ostrom 1990).

Jacobs analyzed cities as complex, organic systems and observed the activity within them at the neighborhood and street level, much like an ecologist would study natural habitats and the species interacting within them. She emphasized the diversity of land use, of people and neighborhoods, and the interaction among them as important to maintaining the ecological balance of urban life in great cities like New York. Jacobs’s critique of the urban renewal slum clearance programs of the 1940s and 50s in the United States was focused not just on the destruction of physical neighborhoods, but also on the destruction of the “irreplaceable social capital”—the networks of residents who build and strengthen working relationships over time through trust and voluntary cooperation—necessary for “self-governance” of urban neighborhoods. (Jacobs 1961) As political scientist Douglas Rae has written, this social capital is the “civic fauna” of urbanism (Rae 2003)…(More)”.

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm


Matthew Hutson at Science: “When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used….

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
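The core classification task described above — predicting a gang-related label from four features — can be sketched with a toy model. This plain logistic regression is not the paper's "partially generative neural network," and the category lists and training examples below are invented purely for illustration:

```python
# Toy, hypothetical sketch: classify a crime as gang-related from the four
# features the article names (primary weapon, number of suspects,
# neighborhood, location type). All data below is invented; this is NOT
# the LAPD dataset or the paper's model.
import math

WEAPONS = ["handgun", "knife", "none"]
NEIGHBORHOODS = ["A", "B", "C"]
LOCATIONS = ["alley", "street_corner", "store"]

def encode(weapon, n_suspects, neighborhood, location):
    """One-hot encode the categorical fields; scale suspect count to [0, 1]."""
    return ([1.0 if weapon == w else 0.0 for w in WEAPONS]
            + [min(n_suspects, 10) / 10]
            + [1.0 if neighborhood == n else 0.0 for n in NEIGHBORHOODS]
            + [1.0 if location == l else 0.0 for l in LOCATIONS])

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=500):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
            b -= lr * (p - y)
    return w, b

# Invented training examples: label 1 = gang-related, 0 = not.
samples = [encode("handgun", 4, "A", "alley"),
           encode("handgun", 3, "A", "street_corner"),
           encode("none", 1, "C", "store"),
           encode("knife", 1, "B", "store")]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)

def predict(weapon, n_suspects, neighborhood, location):
    """Estimated probability that a crime with these features is gang-related."""
    x = encode(weapon, n_suspects, neighborhood, location)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

What makes the actual system "partially generative" is that it also learns to reconstruct the missing officer narrative before classifying; nothing in this sketch attempts that step.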

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”…(More).

Infection forecasts powered by big data


Michael Eisenstein at Nature: “…The good news is that the present era of widespread access to the Internet and digital health has created a rich reservoir of valuable data for researchers to dive into….By harvesting and combining these streams of big data with conventional ways of monitoring infectious diseases, the public-health community could gain fresh powers to catch and curb emerging outbreaks before they rage out of control.

Going viral

Data scientists at Google were the first to make a major splash using data gathered online to track infectious diseases. The Google Flu Trends algorithm, launched in November 2008, combed through hundreds of billions of users’ queries on the popular search engine to look for small increases in flu-related terms such as symptoms or vaccine availability. Initial data suggested that Google Flu Trends could accurately map the incidence of flu with a lag of roughly one day. “It was a very exciting use of these data for the purpose of public health,” says Brownstein. “It really did start a whole revolution and new field of work in query data.”

Unfortunately, Google Flu Trends faltered when it mattered the most, completely missing the onset in April 2009 of the H1N1 pandemic. The algorithm also ran into trouble later on in the pandemic. It had been trained against seasonal fluctuations of flu, says Viboud, but people’s behaviour changed in the wake of panic fuelled by media reports — and that threw off Google’s data. …

Nevertheless, its work with Internet usage data was inspirational for infectious-disease researchers. A subsequent study from a team led by Cecilia Marques-Toledo at the Federal University of Minas Gerais in Belo Horizonte, Brazil, used Twitter to get high-resolution data on the spread of dengue fever in the country. The researchers could quickly map new cases to specific cities and even predict where the disease might spread to next (C. A. Marques-Toledo et al. PLoS Negl. Trop. Dis. 11, e0005729; 2017). Similarly, Brownstein and his colleagues were able to use search data from Google and Twitter to project the spread of Zika virus in Latin America several weeks before formal outbreak declarations were made by public-health officials. Both Internet services are used widely, which makes them data-rich resources. But they are also proprietary systems for which access to data is controlled by a third party; for that reason, Generous and his colleagues have opted instead to make use of search data from Wikipedia, which is open source. “You can get the access logs, and how many people are viewing articles, which serves as a pretty good proxy for search interest,” he says.
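The access-log idea above boils down to checking whether article-view counts track the epidemiological signal. A minimal sketch with invented numbers (real counts would come from Wikipedia's public pageview logs):

```python
# Illustrative sketch of the proxy idea: if daily views of a disease's
# Wikipedia article rise and fall with reported case counts, the freely
# available access logs can stand in for proprietary search data. Both
# series below are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

article_views = [120, 150, 400, 900, 1400, 1100, 600]  # hypothetical daily views
reported_cases = [2, 3, 9, 22, 35, 28, 14]             # hypothetical daily cases

print(pearson(article_views, reported_cases))  # close to 1: views track cases
```

A correlation near 1 on historical data is what justifies using the open view logs as a leading indicator when official case reports lag.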

However, the problems that sank Google Flu Trends still exist….Additionally, online activity differs for infectious conditions with a social stigma such as syphilis or AIDS, because people who are or might be affected are more likely to be concerned about privacy. Appropriate search-term selection is essential: Generous notes that initial attempts to track flu on Twitter were confounded by irrelevant tweets about ‘Bieber fever’ — a decidedly non-fatal condition affecting fans of Canadian pop star Justin Bieber.

Alternatively, researchers can go straight to the source — by using smartphone apps to ask people directly about their health. Brownstein’s team has partnered with the Skoll Global Threats Fund to develop an app called Flu Near You, through which users can voluntarily report symptoms of infection and other information. “You get more detailed demographics about age and gender and vaccination status — things that you can’t get from other sources,” says Brownstein. Ten European Union member states are involved in a similar surveillance programme known as Influenzanet, which has generally maintained 30,000–40,000 active users for seven consecutive flu seasons. These voluntary reporting systems are particularly useful for diseases such as flu, for which many people do not bother going to the doctor — although it can be hard to persuade people to participate for no immediate benefit, says Brownstein. “But we still get a good signal from the people that are willing to be a part of this.”…(More)”.

Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid for It?


The New York Times: “The idea has been around for a bit. Jaron Lanier, the tech philosopher and virtual-reality pioneer who now works for Microsoft Research, proposed it in his 2013 book, “Who Owns the Future?,” as a needed corrective to an online economy mostly financed by advertisers’ covert manipulation of users’ consumer choices.

It is being picked up in “Radical Markets,” a book due out shortly from Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft. And it is playing into European efforts to collect tax revenue from American internet giants.

In a report obtained last month by Politico, the European Commission proposes to impose a tax on the revenue of digital companies based on their users’ location, on the grounds that “a significant part of the value of a business is created where the users are based and data is collected and processed.”

Users’ data is a valuable commodity. Facebook offers advertisers precisely targeted audiences based on user profiles. YouTube, too, tailors its feed to users’ preferences. Still, this pales in comparison with how valuable data is about to become, as the footprint of artificial intelligence extends across the economy.

Data is the crucial ingredient of the A.I. revolution. Training systems to perform even relatively straightforward tasks like voice translation, voice transcription or image recognition requires vast amounts of data — like tagged photos, to identify their content, or recordings with transcriptions.

“Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”

We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free….

The big question, of course, is how we get there from here. My guess is that it would be naïve to expect Google and Facebook to start paying for user data of their own accord, even if that improved the quality of the information. Could policymakers step in, somewhat the way the European Commission did, demanding that technology companies compute the value of consumer data?…(More)”.

Building Democratic Infrastructure


Hollie Russon Gilman, K. Sabeel Rahman, & Elena Souris in Stanford Social Innovation Review: “How can civic engagement be effective in fostering an accountable, inclusive, and responsive American democracy? This question has gained new relevance under the Trump administration, where a sense of escalating democratic crises risks obscuring any nascent grassroots activism. Since the 2016 election, the twin problems of authoritarianism and insufficient political accountability have attracted much attention, as has the need to mobilize for near-future elections. These things are critical for the long-term health of American democracy, but at the same time, it’s not enough to focus solely on Washington or to rely on electoral campaigns to salvage our democracy.

Conventional civic-engagement activities such as canvassing, registering voters, signing petitions, and voting are largely transient experiences, offering little opportunity for civic participation once the election is over. And such tactics often do little to address the background conditions that make participation more difficult for marginalized communities.

To address these issues, civil society organizations and local governments should build more long-term and durable democratic infrastructure, with the aim of empowering constituencies to participate in meaningful and concrete ways, overcoming division within our societies, and addressing a general distrust of government by enhancing accountability.

In our work with groups like the Center for Rural Strategies in Appalachia and the Chicago-based Inner-City Muslim Action Network, as well as with local government officials in Eau Claire, Wis. and Boston, Mass., we identify two areas where we can help build a broader democratic infrastructure for the long haul. First, we need to support and radically expand efforts by local-level government officials to innovate more participatory and accountable forms of policymaking. And then we need to continue developing new methods of diverse, cross-constituency organizing that can help build more inclusive identities and narratives. Achieving this more-robust form of democracy will require that many different communities—including organizers and advocacy groups, policymakers and public officials, technologists, and funders—combine their efforts….(More)”.