A.I. and Big Data Could Power a New War on Poverty


Elisabeth A. Mason in The New York Times: “When it comes to artificial intelligence and jobs, the prognostications are grim. The conventional wisdom is that A.I. might soon put millions of people out of work — that it stands poised to do to clerical and white-collar workers over the next two decades what mechanization did to factory workers over the past two. And that is to say nothing of the truckers and taxi drivers who will find themselves unemployed or underemployed as self-driving cars take over our roads.

But it’s time we start thinking about A.I.’s potential benefits for society as well as its drawbacks. The big-data and A.I. revolutions could also help fight poverty and promote economic stability.

Poverty, of course, is a multifaceted phenomenon. But the condition of poverty often entails one or more of these realities: a lack of income (joblessness); a lack of preparedness (education); and a dependency on government services (welfare). A.I. can address all three.

First, even as A.I. threatens to put people out of work, it can simultaneously be used to match them to good middle-class jobs that are going unfilled. Today there are millions of such jobs in the United States. This is precisely the kind of matching problem at which A.I. excels. Likewise, A.I. can predict where the job openings of tomorrow will lie, and which skills and training will be needed for them….

Second, we can bring what is known as differentiated education — based on the idea that students master skills in different ways and at different speeds — to every student in the country. A 2013 study by the National Institutes of Health found that nearly 40 percent of medical students held a strong preference for one mode of learning: Some were listeners; others were visual learners; still others learned best by doing….

Third, a concerted effort to drag education and job training and matching into the 21st century ought to remove the reliance of a substantial portion of the population on government programs designed to assist struggling Americans. With 21st-century technology, we could plausibly reduce the use of government assistance services to levels where they serve the function for which they were originally intended…(More)”.
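
To make the first point concrete, here is a minimal sketch of skills-based job matching: rank open positions by how much of each job's required skill set a candidate already covers. Everything in it (the jobs, the skills, the scoring rule) is invented for illustration and is not a description of any system the op-ed discusses.

```python
# Minimal sketch of skills-based job matching: rank openings by the
# fraction of each job's required skills the candidate already has.
# Jobs, skills, and the scoring rule are hypothetical.

def match_score(candidate_skills: set, required_skills: set) -> float:
    """Fraction of a job's required skills the candidate covers."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

openings = {
    "wind-turbine technician": {"electrical basics", "mechanical repair", "safety certification"},
    "medical coder": {"medical terminology", "icd-10", "data entry"},
    "cnc machinist": {"blueprint reading", "mechanical repair", "metrology"},
}

candidate = {"mechanical repair", "electrical basics", "data entry"}

# Rank the openings for this candidate, best match first.
for job in sorted(openings, key=lambda j: match_score(candidate, openings[j]), reverse=True):
    print(f"{match_score(candidate, openings[job]):.2f}  {job}")
```

A production matcher would weight skills, model training pathways, and learn from placement outcomes, but the ranking-by-fit core is the same.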

Liberal Democracy and the Unraveling of the Enlightenment Project


James Davison Hunter in The Hedgehog Review: “…while institutions tend to be stable and enduring, even as they evolve, no institution is permanent or indefinitely fixable. The question now is whether contemporary American democracy can even be fixed. What if the political problems we are rightly worried about are actually symptoms of a deeper problem for which there is no easy or obvious remedy?

These are necessarily historical questions. The democratic revolutions of the eighteenth and nineteenth centuries in Europe and North America were largely products of the Enlightenment project, reflecting all of its highest ideals, contradictions, hopes, and inconsistencies. It underwrote the project of modern liberalism, which, for all of its flaws and failures, can still boast of some of the greatest achievements in human history. As the first president of Czechoslovakia, Tomáš Garrigue Masaryk, observed, democracy is the political form of the humane ideal.

Yet with the advantage of twenty-first-century hindsight, we can now see that the Enlightenment project has been unraveling for some time, and that what we are witnessing today are likely the political consequences of that unraveling. Any possibility of “fixing” what ails late-modern American democracy has to take the full measure of this transformation in the deep structures of American and Western political culture. While politics can give expression to and defend a particular social order, it cannot direct it. As Michael Oakeshott famously said, “Political activity may have given us Magna Carta and the Bill of Rights, but it did not give us the contents of these documents, which came from a stratum of social thought far too deep to be influenced by the actions of politicians.”

What I am driving at is made clearer by the distinction between the politics of culture and the culture of politics. The politics of culture refers to the contestation of power over cultural issues. This would include the mobilization of parties and rank-and-file support, the organization of leadership, the formation of special-interest coalitions, and the manipulation of public rhetoric on matters reflecting the symbols or ideals at the heart of a group’s collective identity. This is what most people think about when they use the term culture war. In this case, culture war is the accumulation of political conflicts over issues like abortion, gay rights, or federal funding of the humanities and arts. Though culture is implicated at every level, the politics of culture is primarily about politics.

The culture of politics, by contrast, refers to the symbolic environment in which political institutions are embedded and political action occurs. This symbolic environment is constituted by the basic frameworks of implicit meaning that make particular political arrangements understandable or incomprehensible, desirable or reprehensible. These frameworks constitute a culture’s “deep structure.” Absent a deep structure, certain political institutions and practices simply do not make any sense.

This distinction is essential to making sense of our political moment….(More)”.

A Guide to Chicago’s Array of Things Initiative


Sean Thornton at Data-Smart City Solutions: “The 606, Chicago’s rails-to-trails project that stretches for 4.2 miles on the city’s northwest side, has been popular with residents and visitors ever since its launch last year. The trail recently added a new art installation, Blue Sky, that will greet visitors over the next five years with an array of lights and colors. Less noticed, but no less important, will be another array on display near the trail: a sensor node from Chicago’s Array of Things initiative.

If you’re a frequent reader of all things civic tech, then you may have already come across the Array of Things (AoT). Launched in 2016, the project, which consists of a network of sensor boxes mounted on light posts, has now begun collecting a host of real-time data on Chicago’s environmental surroundings and urban activity. After installing a small number of sensors downtown and elsewhere in 2016, Chicago is now adding sensors across the city, and the city’s data portal currently lists locations for all of AoT’s active and yet-to-be-installed sensors. This year, data collected from AoT will be accessible online, providing valuable information for researchers, urban planners, and the general public.

AoT’s public engagement campaign has been picking up steam as well, with a recent community event held this fall. As a non-proprietary project, AoT is being implemented as a tool to improve not just urban planning and sustainability efforts, but quality of life for residents and communities. To engage with the public, project leaders have held meetings and workshops to build relationships with residents and identify community priorities. Those priorities, which vary from community to community, could range from monitoring traffic congestion around specific intersections to addressing air quality concerns at local parks and schoolyards.

The AoT project is a leading example of how new technology—and the Internet of Things (IoT) in particular—is transforming efforts for sustainable urban growth and “smart” city planning.  AoT’s truly multi-dimensional character sets it apart from other smart city efforts: complementing environmental sensor data collection, the initiative includes educational programming, community outreach, and R&D opportunities for academics, startups, corporations, and other organizations that could stand to benefit.

Launching a project like AoT, of course, isn’t as simple as installing sensor nodes and flipping on a switch. AoT has been in the works for years, and its recent launch marks a milestone event for its developers, the City of Chicago, and smart city technologies. AoT has frequently appeared in the press – yet often, coverage loses sight of the many facets of this unique project. How did AoT get to where it is today? What is the project’s significance outside of Chicago? What are AoT’s implications for cities? Consider this article as your primer for all things AoT….(More)”.

Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.
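
The scores the article describes are often simple, checklist-style instruments rather than opaque black boxes. As a hedged illustration only — the factors and weights below are invented for this sketch, loosely in the spirit of published pretrial tools, and are not the formula any jurisdiction actually uses — a points-based risk score might look like this:

```python
# Illustrative points-style pretrial risk score. The factors and weights
# are invented for this sketch (loosely in the spirit of published
# pretrial instruments); no real jurisdiction uses this formula.

def risk_score(age_under_23: bool, prior_failures_to_appear: int,
               pending_charge: bool, prior_violent_conviction: bool) -> int:
    score = 0
    score += 1 if age_under_23 else 0
    score += min(prior_failures_to_appear, 2)  # contribution capped at 2
    score += 1 if pending_charge else 0
    score += 2 if prior_violent_conviction else 0
    return score  # ranges from 0 (lowest risk) to 6 (highest)

# Identical inputs always yield identical scores -- the consistency that
# hasty, intuition-driven bail hearings lack.
print(risk_score(age_under_23=False, prior_failures_to_appear=1,
                 pending_charge=True, prior_violent_conviction=False))  # 2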

New York City moves to create accountability for algorithms


Lauren Kirchner at ArsTechnica: “The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination—the first measure of its kind in the country.

The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers’ lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation, or citizenship status. The task force’s report will also explore how to make these decision-making processes understandable to the public.

The bill’s sponsor, Council Member James Vacca, said he was inspired by ProPublica’s investigation into racially biased algorithms used to assess the criminal risk of defendants….

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for “targeting services” or “imposing penalties upon persons or policing” and make them available for “self-testing” by the public. At a hearing at City Hall in October, representatives from the mayor’s office expressed concerns that this mandate would threaten New Yorkers’ privacy and the government’s cybersecurity.

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city’s forensic methods, including controversial tools that the chief medical examiner’s office crime lab has used for difficult-to-analyze samples of DNA.

As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.

The software, called the Forensic Statistical Tool, or FST, has never been adopted by any other lab in the country….(More)”.

Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions


Paper by Katz, Daniel Martin and Bommarito, Michael James and Blackman, Josh: “Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts.

In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly accepted null model, yielding the highest-known performance for this context at 80.8% case-level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method….(More)”.
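
The paper's crowd-construction framework is considerably more elaborate, but the core mechanic can be sketched simply: aggregate individual forecasts by majority vote and compare against the null model (for the Supreme Court, the usual baseline of always predicting that the Court reverses the lower court, since it reverses in most cases it hears). The predictions and outcomes below are fabricated for illustration.

```python
# Sketch of crowd aggregation by majority vote, compared with a null
# model ("always predict reverse"). All predictions and outcomes here
# are fabricated for illustration.
from collections import Counter

def majority_vote(predictions: list) -> str:
    """Return the most common prediction among crowd members."""
    return Counter(predictions).most_common(1)[0][0]

# Each case: (crowd members' predictions, actual outcome)
cases = {
    "case_1": (["reverse", "reverse", "affirm", "reverse"], "reverse"),
    "case_2": (["affirm", "affirm", "reverse", "affirm"], "affirm"),
    "case_3": (["reverse", "affirm", "affirm", "affirm"], "reverse"),
}

crowd_correct = sum(majority_vote(p) == outcome for p, outcome in cases.values())
null_correct = sum(outcome == "reverse" for _, outcome in cases.values())

print(f"crowd:      {crowd_correct}/{len(cases)} correct")
print(f"null model: {null_correct}/{len(cases)} correct")
```

The study's contribution is in how the crowds are constructed and evaluated at scale (275,000 crowd models over historical simulations), not in the vote-counting itself.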

Disrupting Democracy: Point. Click. Transform.


Book edited by Anthony T. Silberfeld: “In January 2017, the Bertelsmann Foundation embarked on a nine-month journey to explore how digital innovation impacts democracies and societies around the world. This voyage included more than 40,000 miles in the air, thousands of miles on the ground and hundreds of interviews.

From the rival capitals of Washington and Havana to the bustling streets of New Delhi, and from the dynamic tech startups of Tel Aviv to the efficient order of Berlin, this book focuses on key challenges that have emerged as a result of technological disruption and offers potential lessons to other nations situated at various points along the technological and democratic spectra.

Divided into six chapters, this book provides two perspectives on each of our five case studies (India, Cuba, the United States, Israel and Germany), followed by polling data on demographics, digital access and political engagement collected from four of these countries.

The global political environment is constantly evolving, and it is clear that technology is accelerating that process for better and, in some cases, for worse. Disrupting Democracy attempts to sort through these changes to give policymakers and citizens information that will help them navigate this increasingly volatile world….(More)”.

The Engineers and the Political System


Aaron Timms at the Los Angeles Review of Books: “Engineers enjoy a prestige in China that connects them to political power far more directly than in the United States. ….America, by contrast, has historically been governed by lawyers. That remains true today: there are 218 lawyers in Congress and 208 former businesspeople, according to the Congressional Research Service, but only eight engineers. (Science is even more severely underrepresented, with just three members in the House.) It’s unlikely that balance will tilt meaningfully in favor of STEM-ers in the near term. But in another sense, the growing cultural capital of the engineers will inevitably translate to political power, whatever its form.

The engineering profession today is broad, much broader than it was in 1921 when Thorstein Veblen published The Engineers and the Price System, his classic pamphlet on industrial sabotage and government by technocrats. Engineering has outgrown the four traditional branches (chemical, civil, electrical, mechanical) to include all the professions in which the laws of mathematics and science are applied to real-world problems…..In a way that was never the case for previous generations, engineering today is politics, and politics engineering. Power is coming for the engineers, but are the engineers ready for power?

…tech smarts do not port easily to politics. However violently Silicon Valley pushes the story that it’s here to fix things for all of us, building an algorithm and coming up with intelligent ways to improve society are not the same thing. The triumph of the engineers is that they’ve managed to convince so many people otherwise.

This victory is more than simply economic or mechanical; engineering has also come to permeate the language of politics itself. Zuckerberg’s doe-eyed both-sidesism is the latest expression of the idea, nourished through the Clinton years and the height of the evidence-based policy movement, that facts offer the surest solution to knotty political problems. This is, we already know, a temple built on sand, ignoring as it does the intractably political nature of politics; hence the failure of “figures” and “facts” and “evidence” to do anything to shift positions on gun reform or voter fraud. But it’s a temple with enduring bipartisan appeal, and the engineers have come along at the right moment to give it a fresh lick of paint. If thinking like an engineer is the new way to do business, engineerialism, in politics, is the new centrism — rule by experts remarketed for the innovation age. It might be generations before a Veblenian technocrat calls the White House home, but no presidency can match the power engineers already have — a power to define progress, a power without check….(More)”.

The citizen in the smart city. How the smart city could transform citizenship


Paper by Martijn de Waal and Marloes Dignum: “Smart-city policy makers and technology vendors increasingly state that they want to bring about citizen-centered smart cities. Yet it often remains unclear what exactly that means, and how citizens are envisaged as actors in smart cities. This article wants to contribute to this discussion by exploring the relation between smart cities and citizenship. It aims to do this by introducing a heuristic scheme that brings out the implied notions of citizenship in three distinct sets of smart city visions and practices: The Control Room envisages the city as a collection of infrastructures and services; The Creative City views the city from the perspective of (economic) geography and ponders local and regional systems of innovation; The Smart Citizens discourse addresses the city as a political and civic community. These smart city discourses are mapped against two visions of citizenship and governance taken from political philosophy. A ‘republican’ perspective with a strong presence in social-democratic countries is contrasted with a libertarian one, most prominent in Silicon Valley approaches to smart city technologies. This provides a scheme to reflect on potential benefits and downsides if a specific smart city discourse were to develop. Instances of smart cities may promote notions of citizenship that are based on consumer choice and individual responsibility; alternatively, they could also reinforce collective responsibilities toward the common good of society…(More)”.

Scientists can now figure out detailed, accurate neighborhood demographics using Google Street View photos


Christopher Ingraham at the Washington Post: “A team of computer scientists has derived accurate, neighborhood-level estimates of the racial, economic and political characteristics of 200 U.S. cities using an unlikely data source — Google Street View images of people’s cars.

Published this week in the Proceedings of the National Academy of Sciences, the report details how the scientists extracted 50 million photographs of street scenes captured by Google’s Street View cars in 2013 and 2014. They then trained a computer algorithm to identify the make, model and year of 22 million automobiles appearing in neighborhoods in those images, parked outside homes or driving down the street.

The vehicles seen in Street View images are often small or blurry, making precise identification a challenge. So the researchers had human experts identify a small subsample of the vehicles and compare those to the results churned out by their algorithm. They found that the algorithm correctly identified whether a vehicle was U.S.- or foreign-made roughly 88 percent of the time, got the manufacturer right 66 percent of the time and nailed the exact model 52 percent of the time.

While far from perfect, the sheer size of the vehicle database means those numbers are still useful for real-world statistical applications, like drawing connections between vehicle preferences and demographic data. The 22 million vehicles in the database comprise roughly 8 percent of all vehicles in the United States. By comparison, the U.S. Census Bureau’s massive American Community Survey reaches only about 1.6 percent of American households each year, while the typical 1,000-person opinion poll includes just 0.0004 percent of American adults.

To test what this data set could be capable of, the researchers first paired the Zip code-level vehicle data with numbers on race, income and education from the American Community Survey. They did this for a random 15 percent of the Zip codes in their data set to create a “training set.” They then created another algorithm to go through the training set to see how vehicle characteristics correlated with neighborhood characteristics: What kinds of vehicles are disproportionately likely to appear in white neighborhoods, or black ones? Low-income vs. high-income? Highly educated areas vs. less-educated ones?

That yielded a number of reliable correlations….(More)”.
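
In spirit, the training step described above is a standard supervised-learning setup: learn a mapping from Zip-code-level vehicle features to an American Community Survey target on a small training split, then predict the held-out Zip codes. Below is a toy sketch on fabricated data; the real study used 22 million vehicles, far richer features, and its own modeling pipeline.

```python
# Toy sketch of the training stage: fit a model mapping Zip-level
# vehicle features to a demographic target on a 15% training split,
# then score predictions on the held-out Zips. All data fabricated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_zips = 200
# Hypothetical features per Zip: share of pickups, share of
# foreign-made cars, mean vehicle-age offset.
X = rng.random((n_zips, 3))
# Fabricated "median income" that correlates with the features.
y = 40_000 + 25_000 * X[:, 1] - 15_000 * X[:, 0] + rng.normal(0, 2_000, n_zips)

# Mirror the paper's split: train on 15% of Zips, predict the rest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.15, random_state=0)

model = Ridge().fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```

The striking part of the study is not the modeling technique but the feature source: a passively collected, near-census-scale snapshot of what Americans park outside their homes.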