Regulating the Regulators: Tracing the Emergence of the Political Transparency Laws in Chile


Conference Paper by Bettina Schorr: “Due to high social inequalities and weak public institutions, political corruption and the influence of business elites on policy-makers are widespread in the Andean region. The consequences for sustainable development are serious: regulations limiting harmful business activities and (re-)distributive reforms are difficult to achieve, and public resources often end up as private gains instead of serving development purposes.

Given international and domestic pressures, political corruption has reached the top of the political agenda in many countries. However, transparency goals frequently fail to materialize into new binding policies, or, when reforms are enacted, they suffer from severe implementation gaps.

The paper analyses transparency politics in Chile, where a series of political transparency reforms has been implemented since 2014; Chile thus counts among the few successful cases in the region. By tracing the process that led to the emergence of these new transparency policies, the paper elaborates an analytical framework for explaining institutional innovation in the case of political transparency. In particular, the study emphasizes the importance of civil society actors’ involvement throughout the policy cycle, especially in the stages of formulation, implementation, and evaluation….(More)”.

Reach is crowdsourcing street crime incidents to reduce crime in Lima


Michael Krumholtz at LATAM Tech: “Unfortunately, in Latin America and many other places around the world, robberies are a part of urban life. Moisés Salazar of Lima has been a victim of petty crime in the streets, which is what led him to create Reach.

The application, which markets itself as a kind of Waze for street crime, alerts users through a map of incident and crime reports displayed on their phones….

Salazar said that Reach helps users before, during and after incidents that could victimize them. That’s because the map allows users to avoid certain areas where a crime may have just happened or is being carried out.
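How Reach decides what counts as "nearby" isn't documented; as a rough sketch, an app like this could filter crowdsourced reports by distance and recency before plotting them. The helper names below (`haversine_m`, `nearby_recent`) are hypothetical, not Reach's actual API:

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_recent(incidents, lat, lon, radius_m=500, window=timedelta(hours=2), now=None):
    """Keep only incidents reported within `window` and within `radius_m` of the user."""
    now = now or datetime.utcnow()
    return [
        i for i in incidents
        if now - i["reported_at"] <= window
        and haversine_m(lat, lon, i["lat"], i["lon"]) <= radius_m
    ]
```

A map layer would then render only the returned subset, which is what lets a user route around areas with fresh reports.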

In addition, there is a panic button that users can push if they find themselves in danger or in need of authorities. After the fact, that data then gets made public and can be analyzed by expert users or authorities wanting to see which incidents occur most commonly and where they occur.

Reach is very similar to the U.S. application Citizen, which is a crime avoidance tool used in major metropolitan areas in the U.S. like New York. That application alerts users to crime reports in their neighborhoods and gives them a forum to either record anything they witness or talk about it with other users….(More)”.

Computers Can Solve Your Problem. You May Not Like The Answer


David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.

Until the computers came along.

Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….

The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….

While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.

Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.
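The MIT model itself isn't public, but the equity audit described here is simple to sketch: for each group of schools, compute the share with start times in the prized 8-to-9 a.m. band. A minimal illustration (the `group` and `start_hour` fields are assumptions, not the district's actual schema):

```python
from collections import defaultdict

def desirable_start_share(schools, lo=8, hi=9):
    """Fraction of schools in each group whose start hour falls in [lo, hi)."""
    totals = defaultdict(int)
    desirable = defaultdict(int)
    for s in schools:
        totals[s["group"]] += 1
        if lo <= s["start_hour"] < hi:
            desirable[s["group"]] += 1
    return {g: desirable[g] / totals[g] for g in totals}
```

Comparing the resulting shares across neighborhoods is what revealed the disparity the article describes.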

Or, the whole thing could turn into a political disaster.

District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement it in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.

But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.

It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”

Google, T-Mobile Tackle 911 Call Problem


Sarah Krouse at the Wall Street Journal: “Emergency call operators will soon have an easier time pinpointing the whereabouts of Android phone users.

Google has struck a deal with T-Mobile US to pipe location data from cellphones with Android operating systems in the U.S. to emergency call centers, said Fiona Lee, who works on global partnerships for Android emergency location services.

The move is a sign that smartphone operating system providers and carriers are taking steps to improve the quality of location data they send when customers call 911. Locating callers has become a growing problem for 911 operators as cellphone usage has proliferated. Wireless devices now make 80% or more of the 911 calls placed in some parts of the U.S., according to the trade group National Emergency Number Association. There are roughly 240 million calls made to 911 annually.

While landlines deliver an exact address, cellphones typically register only an estimated location provided by wireless carriers that can be as wide as a few hundred yards and imprecise indoors.

That has meant that while many popular applications like Uber can pinpoint users, 911 call takers can’t always do so. Technology giants such as Google and Apple Inc. that run phone operating systems need a direct link to the technology used within emergency call centers to transmit precise location data….

Google currently offers emergency location services in 14 countries around the world by partnering with carriers and companies that are part of local emergency communications infrastructure. Its location data is based on a combination of inputs, including Wi-Fi, device sensors, GPS, and mobile network information.
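Google hasn't published how those inputs are combined; one simple, purely hypothetical policy is to prefer whichever source currently reports the smallest uncertainty radius:

```python
def best_fix(fixes):
    """Pick the location estimate with the smallest uncertainty radius.

    fixes: dict mapping a source name ("gps", "wifi", "cell") to a
    (lat, lon, accuracy_m) tuple; sources without a current fix are absent.
    Returns (source, fix) or None if no source has a fix.
    """
    if not fixes:
        return None
    source = min(fixes, key=lambda s: fixes[s][2])
    return source, fixes[source]
```

Under this policy a 25-meter Wi-Fi fix would beat a several-hundred-meter cell-tower estimate, which matches the article's contrast between carrier-provided locations and app-grade positioning.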

Jim Lake, director at the Charleston County Consolidated 9-1-1 Center, participated in a pilot of Google’s emergency location services and said it made it easier to find people who didn’t know their location, particularly because the area draws tourists.

“On a day-to-day basis, most people know where they are, but when they don’t, usually those are the most horrifying calls and we need to know right away,” Mr. Lake said.

In June, Apple said it had partnered with RapidSOS to send iPhone users’ location information to 911 call centers….(More)”

One of New York City’s most urgent design challenges is invisible


Diana Budds at Curbed: “Algorithms are invisible, but they already play a large role in shaping New York City’s built environment, schooling, public resources, and criminal justice system. Earlier this year, the City Council and Mayor Bill de Blasio formed the Automated Decision Systems Task Force, the first of its kind in the country, to analyze how NYC deploys automated systems to ensure fairness, equity, and accountability are upheld.

This week, 20 experts in the field of civil rights and artificial intelligence co-signed a letter to the task force to help influence its official report, which is scheduled to be published in December 2019.

The letter’s recommendations include creating a publicly accessible list of all the automated decision systems in use; consulting with experts before adopting an automated decision system; creating a permanent government body to oversee the procurement and regulation of automated decision systems; and upholding civil liberties in all matters related to automation. This could lay the groundwork for future legislation around automation in the city….Read the full letter here.”

What’s Wrong with Public Policy Education


Francis Fukuyama at the American Interest: “Most programs train students to become capable policy analysts, but with no understanding of how to implement those policies in the real world…Public policy education is ripe for an overhaul…

Public policy education in most American universities today reflects a broader problem in the social sciences, which is the dominance of economics. Most programs center on teaching students a battery of quantitative methods that are useful in policy analysis: applied econometrics, cost-benefit analysis, decision analysis, and, most recently, the use of randomized experiments for program evaluation. Many schools build their curricula around these methods rather than the substantive areas of policy such as health, education, defense, criminal justice, or foreign policy. Students come out of these programs qualified to be policy analysts: They know how to gather data, analyze it rigorously, and evaluate the effectiveness of different public policy interventions. Historically, this approach started with the RAND Graduate School in the 1970s (which has subsequently undergone a major re-thinking of its approach).

There is no question that these skills are valuable and should be part of a public policy education. The world has undergone a revolution in recent decades in terms of the role of evidence-based policy analysis, where policymakers can rely not just on anecdotes and seat-of-the-pants assessments, but on statistically valid inferences that intervention X is likely to result in outcome Y, or that the millions of dollars spent on policy Z have actually had no measurable impact. Evidence-based policymaking is particularly necessary in the age of Donald Trump, amid the broad denigration of inconvenient facts that do not suit politicians’ prior preferences.

But being skilled in policy analysis is woefully inadequate to bring about policy change in the real world. Policy analysis will tell you what the optimal policy should be, but it does not tell you how to achieve that outcome.

The world is littered with optimal policies that don’t have a snowball’s chance in hell of being adopted. Take for example a carbon tax, which a wide range of economists and policy analysts will tell you is the most efficient way to abate carbon emissions, reduce fossil fuel dependence, and achieve a host of other desired objectives. A carbon tax has been a nonstarter for years due to the protestations of a range of interest groups, from oil and chemical companies to truckers and cabbies and ordinary drivers who do not want to pay more for the gas they use to commute to work, or as inputs to their industrial processes. Implementing a carbon tax would require a complex strategy bringing together a coalition of groups that are willing to support it, figuring out how to neutralize the die-hard opponents, and convincing those on the fence that the policy would be a good, or at least a tolerable, thing. How to organize such a coalition, how to communicate a winning message, and how to manage the politics on a state and federal level would all be part of a necessary implementation strategy.

It is entirely possible that an analysis of the implementation strategy, rather than analysis of the underlying policy, will tell you that the goal is unachievable absent an external shock, which might then mean changing the scope of the policy, rethinking its objectives, or even deciding that you are pursuing the wrong objective.

Public policy education that sought to produce change-makers rather than policy analysts would therefore have to be different.  It would continue to teach policy analysis, but the latter would be a small component embedded in a broader set of skills.

The first set of skills would involve problem definition. A change-maker needs to query stakeholders about what they see as the policy problem, understand the local history, culture, and political system, and define a problem that is sufficiently narrow in scope that it can plausibly be solved.

At times reformers start with a favored solution without defining the right problem. A student I know spent a summer working at an NGO in India advocating use of electric cars in the interest of carbon abatement. It turns out, however, that India’s reliance on coal for marginal electricity generation means that more carbon would be put in the air if the country were to switch to electric vehicles, not less, so the group was actually contributing to the problem they were trying to solve….

The second set of skills concerns solutions development. This is where traditional policy analysis comes in: It is important to generate data, come up with a theory of change, and posit plausible options by which reformers can solve the problem they have set for themselves. This is where some ideas from product design, like rapid prototyping and testing, may be relevant.

The third and perhaps most important set of skills has to do with implementation. This begins necessarily with stakeholder analysis: that is, mapping of actors who are concerned with the particular policy problem, either as supporters of a solution, or opponents who want to maintain the status quo. From an analysis of the power and interests of the different stakeholders, one can begin to build coalitions of proponents, and think about strategies for expanding the coalition and neutralizing those who are opposed. A reformer needs to think about where resources can be obtained, and, very critically, how to communicate one’s goals to the stakeholder audiences involved. Finally comes testing and evaluation, in the expectation that there will be a continuous and rapid iterative process by which solutions are tried, evaluated, and modified. Randomized experiments have become the gold standard for program evaluation in recent years, but their cost and length of time to completion are often the enemies of rapid iteration and experimentation….(More) (see also http://canvas.govlabacademy.org/).
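The stakeholder mapping Fukuyama describes is often structured as a power/interest grid. A toy sketch of that classification step, with scores and quadrant labels that are illustrative rather than from the article:

```python
def classify(stakeholders, threshold=0.5):
    """Place stakeholders on a power/interest grid (scores in [0, 1]).

    stakeholders: dict mapping a name to a (power, interest) pair.
    Returns a dict mapping each name to a recommended engagement strategy.
    """
    quadrant = {
        (True, True): "manage closely",
        (True, False): "keep satisfied",
        (False, True): "keep informed",
        (False, False): "monitor",
    }
    return {
        name: quadrant[(power >= threshold, interest >= threshold)]
        for name, (power, interest) in stakeholders.items()
    }
```

In the carbon-tax example above, a high-power, high-interest opponent (the oil industry) demands a very different strategy than a low-power, high-interest group of ordinary drivers.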

NZ to perform urgent algorithm ‘stocktake’ fearing data misuse within government


Asha McLean at ZDNet: “The New Zealand government has announced it will be assessing how government agencies are using algorithms to analyse data, hoping to ensure transparency and fairness in decisions that affect citizens.

A joint statement from Minister for Government Digital Services Clare Curran and Minister of Statistics James Shaw said the algorithm “stocktake” will be conducted with urgency, but cited only the growing interest in data analytics as the reason for the probe.

“The government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as artificial intelligence,” Curran said.

It was revealed in April that Immigration New Zealand may have been using citizen data for less than desirable purposes, amid claims that data collected through the country’s visa application process, ostensibly used to identify people in breach of their visa conditions, was in fact filtering applicants by age, gender, and ethnicity.

Rejecting the idea the data-collection project was racial profiling, Immigration Minister Iain Lees-Galloway told Radio New Zealand that Immigration looks at a range of issues, including at those who have made — and have had rejected — multiple visa applications.

“It looks at people who place the greatest burden on the health system, people who place the greatest burden on the criminal justice system, and uses that data to prioritise those people,” he said.

“It is important that we protect the integrity of our immigration system and that we use the resources that immigration has as effectively as we can — I do support them using good data to make good decisions about where best to deploy their resources.”

In the statement on Wednesday, Shaw pointed to two further data-modelling projects the government had embarked on, with one from the Ministry of Health looking into the probability of five-year post-transplant survival in New Zealand.

“Using existing data to help model possible outcomes is an important part of modern government decision-making,” Shaw said….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Algorithmic Impact Assessment (AIA) framework


Report by the AI Now Institute: “Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Researchers, advocates, and policymakers are debating when and where automated decision systems are appropriate, including whether they are appropriate at all in particularly sensitive domains.

Questions are being raised about how to fully assess the short and long term impacts of these systems, whose interests they serve, and if they are sufficiently sophisticated to contend with complex social and historical contexts. These questions are essential, and developing strong answers has been hampered in part by a lack of information and access to the systems under deliberation. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. This is concerning, since an informed policy debate is impossible without the ability to understand which existing systems are being used, how they are employed, and whether these systems cause unintended consequences. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable….

KEY ELEMENTS OF A PUBLIC AGENCY ALGORITHMIC IMPACT ASSESSMENT

1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;

2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;

3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;

4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and

5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct….(More)”.

Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.
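Real pretrial tools (such as the Arnold Foundation's Public Safety Assessment) typically use point-based scales; purely as an illustration, a risk score can be sketched as a logistic function of case features, with entirely hypothetical feature names and weights:

```python
import math

def risk_score(features, weights, bias):
    """Toy logistic risk score: a probability in (0, 1) from weighted features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A judge-facing tool would then bucket the probability into a small number of risk tiers; the consistency benefit the authors describe comes from every defendant being scored by the same function.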

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.