Google, T-Mobile Tackle 911 Call Problem


Sarah Krouse at the Wall Street Journal: “Emergency call operators will soon have an easier time pinpointing the whereabouts of Android phone users.

Google has struck a deal with T-Mobile US to pipe location data from cellphones with Android operating systems in the U.S. to emergency call centers, said Fiona Lee, who works on global partnerships for Android emergency location services.

The move is a sign that smartphone operating system providers and carriers are taking steps to improve the quality of location data they send when customers call 911. Locating callers has become a growing problem for 911 operators as cellphone usage has proliferated. Wireless devices now make 80% or more of the 911 calls placed in some parts of the U.S., according to the trade group National Emergency Number Association. There are roughly 240 million calls made to 911 annually.

While landlines deliver an exact address, cellphones typically register only an estimated location provided by wireless carriers that can be as wide as a few hundred yards and imprecise indoors.

That has meant that while many popular applications like Uber can pinpoint users, 911 call takers can’t always do so. Technology giants such as Google and Apple Inc. that run phone operating systems need a direct link to the technology used within emergency call centers to transmit precise location data….

Google currently offers emergency location services in 14 countries around the world by partnering with carriers and companies that are part of local emergency communications infrastructure. Its location data is based on a combination of inputs from Wi-Fi to sensors, GPS, and mobile network information.
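The article doesn't describe how these inputs are combined. Purely as a hypothetical illustration of the general idea, the sketch below fuses candidate position fixes from different sources with an accuracy-weighted average; the weighting heuristic, coordinates, and accuracy figures are all assumptions for this sketch, not Google's actual emergency location services algorithm.

```python
from dataclasses import dataclass

@dataclass
class Fix:
    """One candidate position estimate and its accuracy radius in meters."""
    source: str
    lat: float
    lon: float
    accuracy_m: float  # smaller means more precise

def fuse_fixes(fixes):
    """Accuracy-weighted average of candidate fixes (weight ~ 1 / accuracy^2).

    Illustrative heuristic only; real systems use far more sophisticated fusion.
    """
    weights = [1.0 / f.accuracy_m ** 2 for f in fixes]
    total = sum(weights)
    lat = sum(w * f.lat for w, f in zip(weights, fixes)) / total
    lon = sum(w * f.lon for w, f in zip(weights, fixes)) / total
    # Report the best single-source accuracy as the estimate's radius.
    return lat, lon, min(f.accuracy_m for f in fixes)

# Hypothetical inputs: a coarse cell-tower fix, a Wi-Fi fix, and a GPS fix.
estimate = fuse_fixes([
    Fix("cell", 32.7770, -79.9310, accuracy_m=300.0),
    Fix("wifi", 32.7766, -79.9309, accuracy_m=40.0),
    Fix("gps",  32.7765, -79.9311, accuracy_m=8.0),
])
print(estimate)
```

Because the weights fall off with the square of the accuracy radius, the most precise source dominates, which is why an indoor Wi-Fi fix can shrink a several-hundred-yard cell-tower estimate to tens of meters.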

Jim Lake, director at the Charleston County Consolidated 9-1-1 Center, participated in a pilot of Google’s emergency location services and said it made it easier to find people who didn’t know their location, particularly because the area draws tourists.

“On a day-to-day basis, most people know where they are, but when they don’t, usually those are the most horrifying calls and we need to know right away,” Mr. Lake said.

In June, Apple said it had partnered with RapidSOS to send iPhone users’ location information to 911 call centers….(More)”

One of New York City’s most urgent design challenges is invisible


Diana Budds at Curbed: “Algorithms are invisible, but they already play a large role in shaping New York City’s built environment, schooling, public resources, and criminal justice system. Earlier this year, the City Council and Mayor Bill de Blasio formed the Automated Decision Systems Task Force, the first of its kind in the country, to analyze how NYC deploys automated systems to ensure fairness, equity, and accountability are upheld.

This week, 20 experts in the field of civil rights and artificial intelligence co-signed a letter to the task force to help influence its official report, which is scheduled to be published in December 2019.

The letter’s recommendations include creating a publicly accessible list of all the automated decision systems in use; consulting with experts before adopting an automated decision system; creating a permanent government body to oversee the procurement and regulation of automated decision systems; and upholding civil liberties in all matters related to automation. This could lay the groundwork for future legislation around automation in the city….Read the full letter here.”

What’s Wrong with Public Policy Education


Francis Fukuyama at the American Interest: “Most programs train students to become capable policy analysts, but with no understanding of how to implement those policies in the real world…Public policy education is ripe for an overhaul…

Public policy education in most American universities today reflects a broader problem in the social sciences, which is the dominance of economics. Most programs center on teaching students a battery of quantitative methods that are useful in policy analysis: applied econometrics, cost-benefit analysis, decision analysis, and, most recently, use of randomized experiments for program evaluation. Many schools build their curricula around these methods rather than the substantive areas of policy such as health, education, defense, criminal justice, or foreign policy. Students come out of these programs qualified to be policy analysts: They know how to gather data, analyze it rigorously, and evaluate the effectiveness of different public policy interventions. Historically, this approach started with the Rand Graduate School in the 1970s (which has subsequently undergone a major re-thinking of its approach).

There is no question that these skills are valuable and should be part of a public policy education. The world has undergone a revolution in recent decades in terms of the role of evidence-based policy analysis, where policymakers can rely not just on anecdotes and seat-of-the-pants assessments, but on statistically valid inferences that intervention X is likely to result in outcome Y, or that the millions of dollars spent on policy Z have actually had no measurable impact. Evidence-based policymaking is particularly necessary in the age of Donald Trump, amid the broad denigration of inconvenient facts that do not suit politicians’ prior preferences.
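As a minimal, hypothetical sketch of the kind of inference Fukuyama has in mind, the snippet below estimates an intervention's average effect from a randomized evaluation: the difference in mean outcomes between treatment and control groups, with an approximate 95% confidence interval. The outcome, sample sizes, and numbers are simulated for illustration only.

```python
import math
import random

random.seed(0)

# Hypothetical randomized evaluation: an outcome Y (say, a test score) for
# participants randomly assigned to a program (treatment) or not (control).
control   = [random.gauss(70, 10) for _ in range(500)]
treatment = [random.gauss(72, 10) for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Under random assignment, the difference in means estimates the average effect.
effect = mean(treatment) - mean(control)
se = math.sqrt(variance(treatment) / len(treatment) + variance(control) / len(control))
lo, hi = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated effect: {effect:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval that straddles zero is the statistical footing for the "no measurable impact" verdict on policy Z mentioned above.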

But being skilled in policy analysis is woefully inadequate to bring about policy change in the real world. Policy analysis will tell you what the optimal policy should be, but it does not tell you how to achieve that outcome.

The world is littered with optimal policies that don’t have a snowball’s chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies to truckers and cabbies and ordinary drivers who do not want to pay more for the gas they use to commute to work, or as inputs to their industrial processes. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

It is entirely possible that an analysis of the implementation strategy, rather than analysis of the underlying policy, will tell you that the goal is unachievable absent an external shock, which might then mean changing the scope of the policy, rethinking its objectives, or even deciding that you are pursuing the wrong objective.

Public policy education that sought to produce change-makers rather than policy analysts would therefore have to be different.  It would continue to teach policy analysis, but the latter would be a small component embedded in a broader set of skills.

The first set of skills would involve problem definition. A change-maker needs to query stakeholders about what they see as the policy problem, understand the local history, culture, and political system, and define a problem that is sufficiently narrow in scope that it can plausibly be solved.

At times reformers start with a favored solution without defining the right problem. A student I know spent a summer working at an NGO in India advocating use of electric cars in the interest of carbon abatement. It turns out, however, that India’s reliance on coal for marginal electricity generation means that more carbon would be put in the air if the country were to switch to electric vehicles, not less, so the group was actually contributing to the problem they were trying to solve….

The second set of skills concerns solutions development. This is where traditional policy analysis comes in: It is important to generate data, come up with a theory of change, and posit plausible options by which reformers can solve the problem they have set for themselves. This is where some ideas from product design, like rapid prototyping and testing, may be relevant.

The third and perhaps most important set of skills has to do with implementation. This begins necessarily with stakeholder analysis: that is, mapping of actors who are concerned with the particular policy problem, either as supporters of a solution, or opponents who want to maintain the status quo. From an analysis of the power and interests of the different stakeholders, one can begin to build coalitions of proponents, and think about strategies for expanding the coalition and neutralizing those who are opposed. A reformer needs to think about where resources can be obtained, and, very critically, how to communicate one’s goals to the stakeholder audiences involved. Finally comes testing and evaluation—in the expectation that there will be a continuous and rapid iterative process by which solutions are tried, evaluated, and modified. Randomized experiments have become the gold standard for program evaluation in recent years, but their cost and length of time to completion are often the enemies of rapid iteration and experimentation….(More) (see also http://canvas.govlabacademy.org/).
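Stakeholder mapping of the kind described here is usually done qualitatively, but the logic can be sketched as data: score each actor's power over the outcome and support for the reform, then sort them into coalition-building buckets. The actors, scores, and thresholds below are hypothetical placeholders, not a prescribed method.

```python
# Hypothetical power/support mapping for a reform coalition.
# power: 0-1 influence over the outcome; support: -1 (opposed) to +1 (supportive).
stakeholders = [
    {"name": "Finance ministry", "power": 0.9, "support": -0.2},
    {"name": "Industry group",   "power": 0.7, "support": -0.8},
    {"name": "City governments", "power": 0.5, "support": 0.6},
    {"name": "Advocacy NGOs",    "power": 0.3, "support": 0.9},
]

def classify(s):
    """Rough strategy buckets from a power/support grid (thresholds are arbitrary)."""
    if s["support"] > 0.3:
        return "core coalition" if s["power"] >= 0.5 else "mobilize"
    if s["support"] < -0.3:
        return "neutralize" if s["power"] >= 0.5 else "monitor"
    return "persuade (on the fence)"

for s in sorted(stakeholders, key=lambda s: -s["power"]):
    print(f'{s["name"]:<16} -> {classify(s)}')
```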

NZ to perform urgent algorithm ‘stocktake’ fearing data misuse within government


Asha McLean at ZDNet: “The New Zealand government has announced it will be assessing how government agencies are using algorithms to analyse data, hoping to ensure transparency and fairness in decisions that affect citizens.

A joint statement from Minister for Government Digital Services Clare Curran and Minister of Statistics James Shaw said the algorithm “stocktake” will be conducted with urgency, but cites only the growing interest in data analytics as the reason for the probe.

“The government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as artificial intelligence,” Curran said.

It was revealed in April that Immigration New Zealand may have been using citizen data for less than desirable purposes, with claims that data collected through the country’s visa application process to identify those in breach of their visa conditions was in fact filtering people based on their age, gender, and ethnicity.

Rejecting the idea that the data-collection project was racial profiling, Immigration Minister Iain Lees-Galloway told Radio New Zealand that Immigration looks at a range of issues, including people who have made — and have had rejected — multiple visa applications.

“It looks at people who place the greatest burden on the health system, people who place the greatest burden on the criminal justice system, and uses that data to prioritise those people,” he said.

“It is important that we protect the integrity of our immigration system and that we use the resources that immigration has as effectively as we can — I do support them using good data to make good decisions about where best to deploy their resources.”

In the statement on Wednesday, Shaw pointed to two further data-modelling projects the government had embarked on, with one from the Ministry of Health looking into the probability of five-year post-transplant survival in New Zealand.

“Using existing data to help model possible outcomes is an important part of modern government decision-making,” Shaw said….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite its widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Algorithmic Impact Assessment (AIA) framework


Report by the AI Now Institute: “Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Researchers, advocates, and policymakers are debating when and where automated decision systems are appropriate, including whether they are appropriate at all in particularly sensitive domains.

Questions are being raised about how to fully assess the short and long term impacts of these systems, whose interests they serve, and if they are sufficiently sophisticated to contend with complex social and historical contexts. These questions are essential, and developing strong answers has been hampered in part by a lack of information and access to the systems under deliberation. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. This is concerning, since an informed policy debate is impossible without the ability to understand which existing systems are being used, how they are employed, and whether these systems cause unintended consequences. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable….

KEY ELEMENTS OF A PUBLIC AGENCY ALGORITHMIC IMPACT ASSESSMENT

1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;

2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;

3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;

4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and

5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct….(More)”.

Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.
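The op-ed does not say how these scores are computed. Many pretrial tools amount to a simple statistical model over a defendant's history, with the resulting probability collapsed into a few risk levels for the judge. The sketch below is a hypothetical logistic-regression illustration of that idea; the features, coefficients, and cutoffs are invented and do not come from any actual jurisdiction's tool.

```python
import math

# Hypothetical, hand-set coefficients for an illustrative pretrial risk model.
# Real tools are fit to historical data and validated before deployment.
COEFFS = {
    "intercept": -2.0,
    "prior_failures_to_appear": 0.6,
    "prior_violent_convictions": 0.8,
    "age_under_23": 0.4,
    "pending_charge": 0.5,
}

def risk_probability(defendant):
    """Logistic model: probability of failure to appear or rearrest if released."""
    z = COEFFS["intercept"] + sum(
        COEFFS[k] * v for k, v in defendant.items() if k in COEFFS
    )
    return 1.0 / (1.0 + math.exp(-z))

def risk_level(p):
    """Collapse the probability into the coarse score a judge would see."""
    return "low" if p < 0.2 else "moderate" if p < 0.5 else "high"

example = {"prior_failures_to_appear": 1, "prior_violent_convictions": 0,
           "age_under_23": 1, "pending_charge": 0}
p = risk_probability(example)
print(risk_level(p), round(p, 2))
```

The simplicity is deliberate: a handful of weighted, documented factors is far easier to validate and audit for bias than an opaque model.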

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.

The Challenges of Prediction: Lessons from Criminal Justice


Paper by David G. Robinson: “Government authorities at all levels increasingly rely on automated predictions, grounded in statistical patterns, to shape people’s lives. Software that wields government power deserves special attention, particularly when it uses historical data to decide automatically what ought to happen next.

In this article, I draw examples primarily from the domain of criminal justice — and in particular, the intersection of civil rights and criminal justice — to illustrate three structural challenges that can arise whenever law or public policy contemplates adopting predictive analytics as a tool:

1) What matters versus what the data measure;
2) Current goals versus historical patterns; and
3) Public authority versus private expertise.

After explaining each of these challenges and illustrating each with concrete examples, I describe feasible ways to avoid these problems and to do prediction more successfully…(More)”

Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing


Priscilla Guo, Danielle Kehl, and Sam Kessler at Responsive Communities (Harvard): “In the summer of 2016, some unusual headlines began appearing in news outlets across the United States. “Secret Algorithms That Predict Future Criminals Get a Thumbs Up From the Wisconsin Supreme Court,” read one. Another declared: “There’s software used across the country to predict future criminals. And it’s biased against blacks.” These news stories (and others like them) drew attention to a previously obscure but fast-growing area in the field of criminal justice: the use of risk assessment software, powered by sophisticated and sometimes proprietary algorithms, to predict whether individual criminals are likely candidates for recidivism. In recent years, these programs have spread like wildfire throughout the American judicial system. They are now being used in a broad capacity, in areas ranging from pre-trial risk assessment to sentencing and probation hearings.

This paper focuses on the latest—and perhaps most concerning—use of these risk assessment tools: their incorporation into the criminal sentencing process, a development which raises fundamental legal and ethical questions about fairness, accountability, and transparency. The goal is to provide an overview of these issues and offer a set of key considerations and questions for further research that can help local policymakers who are currently implementing or considering implementing similar systems.

We start by putting this trend in context: the history of actuarial risk in the American legal system and the evolution of algorithmic risk assessments as the latest incarnation of a much broader trend. We go on to discuss how these tools are used in sentencing specifically and how that differs from other contexts like pre-trial risk assessment. We then delve into the legal and policy questions raised by the use of risk assessment software in sentencing decisions, including the potential for constitutional challenges under the Due Process and Equal Protection clauses of the Fourteenth Amendment. Finally, we summarize the challenges that these systems create for law and policymakers in the United States, and outline a series of possible best practices to ensure that these systems are deployed in a manner that promotes fairness, transparency, and accountability in the criminal justice system….(More)”.

Bridging Governments’ Borders


Robyn Scott & Lisa Witter at SSIR: “…Our research found that “disconnection” falls into five, negatively reinforcing categories in the public sector; a closer look at these categories may help policy makers see the challenge before them more clearly:

1. Disconnected Governments

There is a truism in politics and government that all policy is local and context-dependent. Whether this was ever an accurate statement is questionable; it is certainly no longer. While all policy must ultimately be customized for local conditions, it is absurd to assume there is little or nothing to learn from other countries. Three trends, in fact, indicate that solutions will become increasingly fungible between countries…..

2. Disconnected Issues

What climate change policy can endure without a job-creation strategy? What sensible criminal justice reform does not consider education? Yet even within countries, departments and their employees often remain as foreign to each other as do nations….

3. Disconnected Public Servants

The isolation of governments, and of government departments, is caused by and reinforces the isolation of people working in government, who have few incentives—and plenty of disincentives—to share what they are working on…..

4. Disconnected Citizens

…There are areas of increasingly visible progress in bridging the disconnections of government, citizen engagement being one. We’re still in the early stages, but private sector fashions such as human-centered design and design thinking have become government buzzwords. And platforms enabling new types of citizen engagement—from participatory budgeting to apps that people use to report potholes—are increasingly popping up around the world…..

5. Disconnected Ideas

According to the World Bank’s own data, one third of its reports are never read, even once. Foundations and academia pour tens of millions of dollars into policy research with few targeted channels to reach policymakers; they also tend to produce and deliver information in formats that policymakers don’t find useful. People in government, like everyone else, are frequently on their mobile phones, and short of time….(More)”