Democracy is in danger when the census undercounts vulnerable populations


Emily Klancher Merchant at The Conversation: “The 2020 U.S. Census is still two years away, but experts and civil rights groups are already disputing the results. At issue is whether the census will fulfill the Census Bureau’s mandate to “count everyone once, only once, and in the right place.”

The task is hardly as simple as it seems and has serious political consequences. Recent changes to the 2020 census, such as asking about citizenship status, will make populations already vulnerable to undercounting even more likely to be missed. These vulnerable populations include the young, poor, nonwhite, non-English-speaking, foreign-born and transient.

An accurate count is critical to the functioning of the U.S. government. Census data determine how the power and resources of the federal government are distributed across the 50 states. This includes seats in the House, votes in the Electoral College and funds for federal programs. Census data also guide the drawing of congressional and other voting districts and the enforcement of civil and voting rights laws.

Places where large numbers of people go uncounted get less than their fair share of political representation and federal resources. When specific racial and ethnic groups are undercounted, it is harder to identify and rectify violations of their civil rights. My research on the international history of demography demonstrates that the question of how to equitably count the population is not new, nor is it unique to the United States. The experience of the United States and other countries may hold important lessons as the Census Bureau finalizes its plans for the 2020 count.

Let’s take a look at that history….

In 1790, the United States became the first country to take a regular census. Following World War II, the U.S. government began to promote census-taking in other countries. U.S. leaders believed data about the size and location of populations throughout the Western Hemisphere could help the government plan defense. What’s more, U.S. businesses could also use the data to identify potential markets and labor forces in nearby countries.

The U.S. government began investing in a program called the Census of the Americas. Through this program, the State Department provided financial support and the Census Bureau provided technical assistance to Western Hemisphere countries taking censuses in 1950.

United Nations demographers also viewed the Census of the Americas as an opportunity. Data that were standardized across countries could serve as the basis for projections of world population growth and the calculation of social and economic indicators. They also hoped that censuses would provide useful information to newly established governments. The U.N. turned the Census of the Americas into a global affair, recommending that “all Member States planning population censuses about 1950 use comparable schedules so far as possible.” Since 1960, the U.N. has sponsored a World Census Program every 10 years. The 2020 World Census Program will be the seventh round….

Not all countries went along with the program. For example, Lebanon’s Christian rulers feared that a census would show Christians to be a minority, undermining the legitimacy of their government. However, for the 65 sovereign countries taking censuses between 1945 and 1954, leaders faced the same question the U.S. faces today: How can we make sure that everyone has an equal chance of being counted?…(More)”.

Replicating the Justice Data Lab in the USA: Key Considerations


Blog by Tracey Gyateng and Tris Lumley: “Since 2011, NPC has researched, supported and advocated for the development of impact-focussed Data Labs in the UK. The goal has been to unlock government administrative data so that organisations (primarily nonprofits) who provide a social service can understand the impact of their services on the people who use them.

So far, one of these Data Labs has been developed to measure re-offending outcomes (the Justice Data Lab), and others are currently being piloted for employment and education. Given our seven years of work in this area, we at NPC have decided to reflect on the key factors needed to create a Data Lab with our report: How to Create an Impact Data Lab. This blog outlines these factors, examines whether they are present in the USA, and asks what the next steps should be — drawing on the research undertaken with the Governance Lab….Below we examine the key factors and to what extent they appear to be present within the USA.

Environment: A broad culture that supports impact measurement. Similar to the UK, nonprofits in the USA are increasingly measuring the impact they have had on the participants of their service and sharing the difficulties of undertaking robust, high quality evaluations.

Data: Individual person-level administrative data. A key difference between the two countries is that, in the USA, personal data on social services tends to be held at a local, rather than central, level. In the UK, social services data such as reoffending, education and employment are collated into a central database. In the USA, the federal government has limited centrally collated personal data; instead, this data can be found at state/city level….

A leading advocate: A Data Lab project team, and strong networks. Data Labs do not manifest by themselves. They require a lead agency to campaign with, and on behalf of, nonprofits to set out a persuasive case for their development. In the USA, we have developed a partnership with the Governance Lab to seek out opportunities where Data Labs can be established, but given the size of the country, there is scope for further collaborations and/or advocates to be identified and supported.

Customers: Identifiable organisations that would use the Data Lab. Initial discussions with several US nonprofits and academics indicate support for a Data Lab in their context. Broad consultation based on an agreed region and outcome(s) will be needed to fully assess the potential customer base.

Data owners: Engaged civil servants. Generating buy-in and persuading various stakeholders, including data owners, analysts and politicians, is a critical part of setting up a Data Lab. While the exact profiles of the right people to approach can only be assessed once a region and outcome(s) of interest have been chosen, there are encouraging signs, such as the passing of the Foundations for Evidence-Based Policy Making Act of 2017 in the House of Representatives, which, among other things, mandates the appointment of “Chief Evaluation Officers” in government departments, suggesting that there is bipartisan support for increased data-driven policy evaluation.

Legal and ethical governance: A legal framework for sharing data. In the UK, all personal data is subject to data protection legislation, which provides standardised governance for how personal data can be processed across the country and within the European Union. A universal data protection framework does not exist within the USA; therefore, data sharing agreements between customers and government data-owners will need to be designed for the purposes of Data Labs, unless there are existing agreements that enable data sharing for research purposes. This will need to be investigated at the state/city level of a desired Data Lab.

Funding: Resource and support for driving the set-up of the Data Lab. Most of our policy lab case studies were funded by a mixture of philanthropy and government grants. It is expected that a similar mixed funding model will need to be created to establish Data Labs. One alternative is the model adopted by the Washington State Institute for Public Policy (WSIPP), which was created by the Washington State Legislature and is funded on a project basis, primarily by the state. Additionally, funding will be needed to enable advocates of a Data Lab to campaign for the service….(More)”.

AI And Open Data Show Just How Often Cars Block Bus And Bike Lanes


Eillie Anzilotti in Fast Company: “…While anyone who bikes or rides a bus in New York City knows intuitively that the lanes are often blocked, there’s been little data to back up that feeling apart from the fact that last year, the NYPD issued 24,000 tickets for vehicles blocking bus lanes, and around 79,000 to cars in the bike lane. By building the algorithm, Bell exemplifies what engaged citizenship and productive use of open data looks like. The New York City Department of Transportation maintains several hundred video cameras throughout the city; those cameras feed images in real time to the DOT’s open-data portal. Bell downloaded a week’s worth of footage from that portal to analyze.

To build his computer algorithm to do the analysis, he fed around 2,000 images of buses, cars, pedestrians, and vehicles like UPS trucks into TensorFlow, Google’s open-source framework that the tech giant is using to train autonomous vehicles to recognize other road users. “Because of the push into AVs, machine learning in general and neural networks have made lots of progress, because they have to answer the same questions of: What is this vehicle, and what is it going to do?” Bell says. After several rounds of processing, Bell was able to come up with an algorithm that could fairly faultlessly determine whether a vehicle at the bus stop was, in fact, a bus, or something else that wasn’t supposed to be there.
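
The article doesn’t include Bell’s code, but the kind of classifier it describes can be sketched with TensorFlow’s Keras API. The sketch below is a simplified illustration, not Bell’s pipeline: it reuses a pretrained MobileNetV2 network and trains a binary “bus vs. not bus” head on labeled camera frames, and the directory layout, model choice and training settings are all assumptions.

```python
# A minimal transfer-learning sketch, not Bell's actual code. Assumes labeled
# camera frames are sorted into data/train/bus and data/train/other (with the
# same layout under data/val).
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Reuse a network pretrained on ImageNet and train only a small binary head
# that answers: is the vehicle at the bus stop a bus, or something else?
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the frame shows a bus
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In practice Bell’s task runs closer to object detection on continuous video, so a sketch like this captures only the “is this a bus?” classification step.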

As cities and governments, spurred by organizations like OpenGov, have moved to embrace transparency and open data, the question remains: So, what do you do with it?

For Bell, the answer is that citizens can use it to empower themselves. “I’m a little uncomfortable with cameras and surveillance in cities,” Bell says. “But agencies like the NYPD and DOT have already made the decision to put the cameras up. We don’t know the positive and negative outcomes if more and more data from cameras is opened to the public, but if the cameras are going in, we should know what data they’re collecting and be able to access it,” he says. He’s made his algorithm publicly available in the hopes that more people will use data to investigate the issue on their own streets, and perhaps in other cities….Bell is optimistic that open data can empower more citizens to identify issues in their own cities and bring a case for why they need to be addressed….(More)”.

The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It


Book by Yascha Mounk: “The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.

Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”

The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.

The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy….(More)”

Law, Metaphor, and the Encrypted Machine


Paper by Lex Gill: “The metaphors we use to imagine, describe and regulate new technologies have profound legal implications. This paper offers a critical examination of the metaphors we choose to describe encryption technology in particular, and aims to uncover some of the normative and legal implications of those choices.

Part I provides a basic description of encryption as a mathematical and technical process. At the heart of this paper is a question about what encryption is to the law. It is therefore fundamental that readers have a shared understanding of the basic scientific concepts at stake. This technical description will then serve to illustrate the host of legal and political problems arising from encryption technology, the most important of which are addressed in Part II. That section also provides a brief history of various legislative and judicial responses to the encryption “problem,” mapping out some of the major challenges still faced by jurists, policymakers and activists. While this paper draws largely upon common law sources from the United States and Canada, metaphor provides a core form of cognitive scaffolding across legal traditions. Part III explores the relationship between metaphor and the law, demonstrating the ways in which it may shape, distort or transform the structure of legal reasoning. Part IV demonstrates that the function served by legal metaphor is particularly determinative wherever the law seeks to integrate novel technologies into old legal frameworks. Strong, ubiquitous commercial encryption has created a range of legal problems for which the appropriate metaphors remain unfixed. Part V establishes a loose framework for thinking about how encryption has been described by courts and lawmakers — and how it could be. What does it mean to describe the encrypted machine as a locked container or building? As a combination safe? As a form of speech? As an untranslatable library or an unsolvable puzzle? What is captured by each of these cognitive models, and what is lost? This section explores both the technological accuracy and the legal implications of each choice. Finally, the paper offers a few concluding thoughts about the utility and risk of metaphor in the law, reaffirming the need for a critical, transparent and lucid appreciation of language and the power it wields….(More)”.

Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life


Report by Jennifer Kavanagh and Michael D. Rich: “Over the past two decades, national political and civil discourse in the United States has been characterized by “Truth Decay,” defined as a set of four interrelated trends: an increasing disagreement about facts and analytical interpretations of facts and data; a blurring of the line between opinion and fact; an increase in the relative volume, and resulting influence, of opinion and personal experience over fact; and lowered trust in formerly respected sources of factual information. These trends have many causes, but this report focuses on four: characteristics of human cognitive processing, such as cognitive bias; changes in the information system, including social media and the 24-hour news cycle; competing demands on the education system that diminish time spent on media literacy and critical thinking; and polarization, both political and demographic. The most damaging consequences of Truth Decay include the erosion of civil discourse, political paralysis, alienation and disengagement of individuals from political and civic institutions, and uncertainty over national policy.

This report explores the causes and consequences of Truth Decay and how they are interrelated, and examines past eras of U.S. history to identify evidence of Truth Decay’s four trends and observe similarities with and differences from the current period. It also outlines a research agenda, a strategy for investigating the causes of Truth Decay and determining what can be done to address its causes and consequences….(More)”.

Issuing Bonds to Invest in People


Tina Rosenberg at the New York Times: “The first social impact bond began in 2010 in Peterborough, England. Investors funded a program aimed at keeping newly released short-term inmates out of prison. It reduced reoffending by 9 percent compared to a control group, exceeding its target. So investors got their money back, plus interest.

Seldom has a policy idea gone viral so fast. There are now 108 such bonds, in 24 countries. The United States has 20, leveraging $211 million in investment capital, and at least 50 more are on the way. These bonds fund programs to reduce Oklahoma’s population of women in prison, help low-income mothers to have healthy pregnancies in South Carolina, teach refugees and immigrants English and job skills in Boston, house the homeless in Denver, and reduce storm water runoff in the District of Columbia. There’s a Forest Resilience Bond underway that seeks to finance desperately needed wildfire prevention.

Here’s how social impact bonds differ from standard social programs:

They raise upfront money to do prevention. Everyone knows most prevention is a great investment. But politicians don’t do “think ahead” very well. They hate to spend money now to create savings their successors will reap. Issuing a social impact bond means they don’t have to.

They concentrate resources on what works. Bonds build market discipline, since investors demand evidence of success.

They focus attention on outcomes rather than outputs. “Take work-force training,” said David Wilkinson, commissioner of Connecticut’s Office of Early Childhood. “We tend to pay for how many people receive training. We’re less likely to pay for — or even look at — how many people get good jobs.” Providers, he said, were best recognized for their work “when we reward them for outcomes they want to see and families they are serving want to achieve.”

They improve incentives. Focusing on outcomes changes the way social service providers think. In Connecticut, said Duryea, they now have a financial incentive to keep children out of foster care, rather than bring more in.

They force decision makers to look at data. Programs start with great fanfare, but often nobody then examines how they are doing. But with a bond, evaluation is essential.

They build in flexibility. “It’s a big advantage that they don’t prescribe what needs to be done,” said Cohen. The people on the ground choose the strategy, and can change it if necessary. “Innovators can think outside the box and tackle health or education in revolutionary ways,” he said.

…In the United States, social impact bonds have become synonymous with “pay for success” programs. But there are other ways to pay for success. For example, Wilkinson, the Connecticut official, has just started an Outcomes Rate Card — a way for a government to pay for home visits for vulnerable families. The social service agencies get base pay, but also bonuses. If a client has a full-term birth, the agency gets an extra $135 for a low-risk family, $170 for a hard-to-help one. A client who finds stable housing brings $150 or $220 to the agency, depending on the family’s situation….(More)”.
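
The mechanics of such a rate card are simple enough to sketch in code. The snippet below is only an illustration: the dollar figures are the ones quoted above, while the base-pay amount, data structure and function name are invented for the example.

```python
# A toy sketch of an outcomes rate card: (outcome, family situation) -> bonus
# paid to the agency, in dollars. Figures are from the article; everything else
# is an assumption.
RATE_CARD = {
    ("full_term_birth", "low_risk"): 135,
    ("full_term_birth", "hard_to_help"): 170,
    ("stable_housing", "low_risk"): 150,
    ("stable_housing", "hard_to_help"): 220,
}

def payment(base_pay, achieved_outcomes, family_situation):
    """Base pay plus a bonus for each outcome a client actually achieves."""
    bonus = sum(RATE_CARD.get((o, family_situation), 0) for o in achieved_outcomes)
    return base_pay + bonus

# A hard-to-help family that reaches both outcomes earns the agency an assumed
# $1,000 base payment plus $170 + $220 in outcome bonuses.
print(payment(1000, ["full_term_birth", "stable_housing"], "hard_to_help"))  # 1390
```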

How to Make A.I. That’s Good for People


Fei-Fei Li in the New York Times: “For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast. Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for a researcher like me. When I was a graduate student in computer science in the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

First, A.I. needs to reflect more of the depth that characterizes our own intelligence….

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility….(More).

Ostrom in the City: Design Principles and Practices for the Urban Commons


Chapter by Sheila Foster and Christian Iaione in Routledge Handbook of the Study of the Commons (Dan Cole, Blake Hudson, Jonathan Rosenbloom eds.): “If cities are the places where most of the world’s population will be living in the next century, as is predicted, it is not surprising that they have become sites of contestation over use and access to urban land, open space, infrastructure, and culture. The question posed by Saskia Sassen in a recent essay—who owns the city?—is arguably at the root of these contestations and of social movements that resist the enclosure of cities by economic elites (Sassen 2015). One answer to the question of who owns the city is that we all do. In our work we argue that the city is a common good or a “commons”—a shared resource that belongs to all of its inhabitants, and to the public more generally.

We have been writing about the urban commons for the last decade, very much inspired by the work of Jane Jacobs and Elinor Ostrom. The idea of the urban commons captures the ecological view of the city that characterizes Jane Jacobs’s classic work, The Death and Life of Great American Cities (Foster 2006). It also builds on Elinor Ostrom’s finding that common resources are capable of being collectively managed by users in ways that support their needs yet sustain the resource over the long run (Ostrom 1990).

Jacobs analyzed cities as complex, organic systems and observed the activity within them at the neighborhood and street level, much like an ecologist would study natural habitats and the species interacting within them. She emphasized the diversity of land use, of people and neighborhoods, and the interaction among them as important to maintaining the ecological balance of urban life in great cities like New York. Jacobs’s critique of the urban renewal slum clearance programs of the 1940s and 50s in the United States was focused not just on the destruction of physical neighborhoods, but also on the destruction of the “irreplaceable social capital”—the networks of residents who build and strengthen working relationships over time through trust and voluntary cooperation—necessary for “self-governance” of urban neighborhoods (Jacobs 1961). As political scientist Douglas Rae has written, this social capital is the “civic fauna” of urbanism (Rae 2003)…(More)”.

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm


Matthew Hutson at Science: “When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used….

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
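
The paper’s “partially generative” architecture is not reproduced in the excerpt, and the sketch below does not attempt it either; it is only a rough illustration of a neural-network classifier over the four structured features named above. The feature encodings, layer sizes and randomly generated stand-in data are all assumptions, not the LAPD data or the published model.

```python
# An illustrative classifier over four structured features, not the paper's
# partially generative network. The data below are random stand-ins.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 10, 1000),   # primary weapon, as an integer code
    rng.integers(1, 6, 1000),    # number of suspects
    rng.integers(0, 100, 1000),  # neighborhood code
    rng.integers(0, 5, 1000),    # location type (alley, street corner, ...)
]).astype("float32")
y = rng.integers(0, 2, 1000).astype("float32")  # 1 = labeled gang-related

# A real model would one-hot or embed the categorical codes and fold in the
# officer's narrative text; this sketch just normalizes the raw numbers.
norm = tf.keras.layers.Normalization()
norm.adapt(X)

model = tf.keras.Sequential([
    norm,
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(crime is gang-related)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)
```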

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”…(More).