The Merit Principle in Crisis


Commentary in Governance: “In the United States, the presidential race is heating up, and one result is an increasing number of assaults on century-old ideas about the merit-based civil service. “The merit principle is under fierce attack,” says Donald Kettl in a new commentary for Governance. Kettl outlines five “tough questions” raised by attacks on the civil service system, and says that the US research community “has been largely asleep at the switch” on all of them. Within major public policy schools, courses on the public service have been “pushed to the side.” A century ago, American academics helped to build the American state; Kettl warns that “scholarly neglect in the 2000s could undermine it.” Read the commentary.”

Innovative Study Supports Asteroid Initiative, Journey To Mars


David Steitz at NASA: “Innovation is a primary tool for problem solving at NASA. Whether creating new robotic spacecraft to explore asteroids or developing space habitats for our journey to Mars, innovative thinking is key to our success. NASA leads the federal government in cutting edge methods for conceptualizing and then executing America’s space exploration goals.

One example of NASA innovation is the agency’s work with the Expert and Citizen Assessment of Science and Technology (ECAST) Network. The ECAST group provided a citizen-focused, participatory technology assessment of NASA’s Asteroid Initiative, increasing public understanding of and engagement in the initiative while also providing the agency with new knowledge for use in planning our future missions.

“Participatory Exploration includes public engagement as we chart the course for future NASA activities, ranging from planetary defense to boots on Mars,” said Jason Kessler, program executive for NASA’s Asteroid Grand Challenge within the Office of the Chief Technologist at NASA Headquarters in Washington. “The innovative methodology for public engagement that the ECAST has given us opens new avenues for dialog directly with stakeholders across the nation, Americans who have and want to share their ideas with NASA on activities the agency is executing, now and in the future.”

In addition to formal “requests for information” or forums with industry for ideas, NASA employed ECAST to engage in a “participatory technology assessment,” an engagement model that seeks to improve the outcomes of science and technology decision-making through dialog with informed citizens. Participatory technology assessment involves engaging a group of non-experts who are representative of the general population but who—unlike political, academic, and industry stakeholders—are often underrepresented in technology-related policymaking….(More)”

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots


Book description: “Robots are poised to transform today’s society as completely as the Internet did twenty years ago. Pulitzer Prize-winning New York Times science writer John Markoff argues that we must decide to design ourselves into our future, or risk being excluded from it altogether.

In the past decade, Google introduced us to driverless cars; Apple debuted Siri, a personal assistant that we keep in our pockets; and an Internet of Things connected the smaller tasks of everyday life to the farthest reaches of the Web. Robots have become an integral part of society on the battlefield and the road; in business, education, and health care. Cheap sensors and powerful computers will ensure that in the coming years, these robots will act on their own. This new era offers the promise of immensely powerful machines, but it also reframes a question first raised more than half a century ago, when the intelligent machine was born. Will we control these systems, or will they control us?

In Machines of Loving Grace, John Markoff offers a sweeping history of the complicated and evolving relationship between humans and computers. In recent years, the pace of technological change has accelerated dramatically, posing an ethical quandary. If humans delegate decisions to machines, who will be responsible for the consequences? As Markoff chronicles the history of automation, from the birth of the artificial intelligence and intelligence augmentation communities in the 1950s and 1960s, to the modern-day brain trusts at Google and Apple in Silicon Valley, and on to the expanding robotics economy around Boston, he traces the different ways developers have addressed this fundamental problem and urges them to carefully consider the consequences of their work. We are on the brink of the next stage of the computer revolution, Markoff argues, and robots will profoundly transform modern life. Yet it remains for us to determine whether this new world will be a utopia. Moreover, it is now incumbent upon the designers of these robots to draw a bright line between what is human and what is machine.

After nearly forty years covering the tech industry, Markoff offers an unmatched perspective on the most drastic technology-driven societal shifts since the introduction of the Internet. Machines of Loving Grace draws on an extensive array of research and interviews to present an eye-opening history of one of the most pressing questions of our time, and urges us to remember that we still have the opportunity to design ourselves into the future—before it’s too late….(More)”

How Africa can benefit from the data revolution


In The Guardian: “….The modern information infrastructure is about movement of data. From data we derive information and knowledge, and that knowledge can be propagated rapidly across the country and throughout the world. Facebook and Google have both made massive investments in machine learning, the mainstay technology for converting data into knowledge. But the potential for these technologies in Africa is much larger: instead of simply advertising products to people, we can imagine modern distributed health systems, distributed markets, knowledge systems for disease intervention. The modern infrastructure should be data driven and deployed across the mobile network. A single good idea can then be rapidly implemented and distributed via the mobile phone app ecosystems.

The information infrastructure does not require large scale thinking and investment to deliver. In fact, it requires just the reverse. It requires agility and innovation. Larger companies cannot react quickly enough to exploit technological advances. Small companies with a good idea can grow quickly. From IBM to Microsoft, to Google and now Facebook, all these companies agree on one thing: data is where the value lies. Modern internet companies are data-driven from the ground up. Could the same thing happen in Africa’s economies? Can entire countries reformulate their infrastructures to be data-driven from the ground up?

Maybe, or maybe not, but it isn’t necessary to have a grand plan to give it a go. It is already natural to use data and communication to solve real world problems. In Silicon Valley these are the challenges of getting a taxi or reserving a restaurant. In Africa they are often more fundamental. John Quinn has been at Makerere University in Kampala, Uganda, for eight years now, targeting these challenges. In June this year, John and other researchers from across the region came together for Africa’s first workshop on data science at Dedan Kimathi University of Technology. The objective was to spread knowledge of technologies, ideas and solutions. For the modern information infrastructure to be successful, software solutions need to be locally generated. African apps to solve African problems. With this in mind the workshop began with a three day summer school on data science, which was then followed by two days of talks on challenges in African data science.

The ideas and solutions presented were cutting edge. The Umati project uses social media to understand the use of ethnic hate speech in Kenya (Sidney Ochieng, iHub, Nairobi). The use of social media for monitoring the evolution and effects of Ebola in west Africa (Nuri Pashwani, IBM Research Africa). The Kudu system for market making in Ugandan farm produce distribution via SMS messages (Kenneth Bwire, Makerere University, Kampala). Telecommunications data for inferring the source and spread of a typhoid outbreak in Kampala (UN Pulse Lab, Kampala). The Punya system for prototyping and deployment of mobile phone apps to deal with emerging crises or market opportunities (Julius Adebayor, MIT), and large scale systems for collating and sharing data resources, such as Open Data Kenya and the UN OCHA Humanitarian Data Exchange….(More)”

Index: Crime and Criminal Justice Data


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on crime and criminal justice data and was originally published in 2015.

This index provides information about the type of crime and criminal justice data collected, shared and used in the United States. Because data related to the criminal justice system is often unreliable, or just plain missing, this index also highlights some of the issues that stand in the way of accessing useful and in-demand statistics.

Data Collections: National Crime Statistics

  • Number of incident-based crime datasets created by the Federal Bureau of Investigation (FBI): 2
  • Number of U.S. statistical agencies: 13
    • How many of those are focused on criminal justice: 1, the Bureau of Justice Statistics (BJS)
    • Number of data collections focused on criminal justice the BJS produces: 61
  • Number of federal-level APIs available for crime or criminal justice data: 1, the National Crime Victimization Survey (NCVS)
    • Frequency of the NCVS: annually
  • Number of Statistical Analysis Centers (SACs), organizations that are essentially clearinghouses for crime and criminal justice data for each state, the District of Columbia, Puerto Rico and the Northern Mariana Islands: 53

Open data, data use and the impact of those efforts

  • Number of datasets that are returned when “criminal justice” is searched for on Data.gov: 417, including federal-, state- and city-level datasets
  • Number of datasets that are returned when “crime” is searched for on Data.gov: 281 (both counts can be reproduced programmatically; see the sketch after this list)
  • The percentage by which public complaints dropped after officers started wearing body cameras, according to a study done in Rialto, Calif.: 88
  • The percentage by which reported incidents of officer use of force fell after officers started wearing body cameras, according to a study done in Rialto, Calif.: 5
  • The percentage by which crime decreased during an experiment in predictive policing in Shreveport, LA: 35
  • Number of crime datasets made available by the Seattle Police Department – generally seen as a leader in police data innovation – on the Seattle.gov website: 4
    • Major crime stats by category in aggregate
    • Crime trend reports
    • Precinct data by beat
    • State sex offender database
  • Number of datasets mapped by the Seattle Police Department: 2
    • 911 incidents
    • Police reports
  • Number of states where risk assessment tools must be used in pretrial proceedings to help determine whether an offender is released from jail before a trial: at least 11
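
The Data.gov counts above change as datasets are added and removed, and they can be re-checked programmatically. Below is a minimal sketch, assuming the standard CKAN search endpoint that catalog.data.gov exposes (not an official client):

```python
# Minimal sketch: count Data.gov datasets matching a search term via the CKAN
# action API exposed by catalog.data.gov (endpoint assumed; counts will drift).
import requests

def count_datasets(query: str) -> int:
    """Return the number of Data.gov datasets matching a search term."""
    resp = requests.get(
        "https://catalog.data.gov/api/3/action/package_search",
        params={"q": query, "rows": 0},  # rows=0: we only need the total count
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["count"]

if __name__ == "__main__":
    for term in ("criminal justice", "crime"):
        print(f"{term}: {count_datasets(term)} datasets")
```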

Police Data

  • Number of federally mandated databases that collect information about officer use of force or officer-involved shootings, nationwide: 0
  • The year a crime bill was passed that called for data on excessive force to be collected for research and statistical purposes, but has never been funded: 1994
  • Number of police departments that committed to being a part of the White House’s Police Data Initiative: 21
  • Percentage of police departments surveyed in 2013 by the Office of Community Oriented Policing within the Department of Justice that are not using body cameras, and therefore not collecting body camera data: 75

The criminal justice system

  • Parts of the criminal justice system where data about an individual can be created or collected: at least 6
    • Entry into the system (arrest)
    • Prosecution and pretrial
    • Sentencing
    • Corrections
    • Probation/parole
    • Recidivism


The Silo Effect – The Peril of Expertise and the Promise of Breaking Down Barriers


Book by Gillian Tett: “From award-winning columnist and journalist Gillian Tett comes a brilliant examination of how our tendency to create functional departments—silos—hinders our work…and how some people and organizations can break those silos down to unleash innovation.

One of the characteristics of industrial-age enterprises is that they are organized around functional departments. This organizational structure results in both limited information and restricted thinking. The Silo Effect asks these basic questions: Why do humans working in modern institutions collectively act in ways that sometimes seem stupid? Why do normally clever people fail to see risks and opportunities that later seem blindingly obvious? Why, as psychologist Daniel Kahneman put it, are we sometimes so “blind to our own blindness”?

Gillian Tett, journalist and senior editor for the Financial Times, answers these questions by plumbing her background as an anthropologist and her experience reporting on the financial crisis in 2008. In The Silo Effect, she shares eight different tales of the silo syndrome, spanning Bloomberg’s City Hall in New York, the Bank of England in London, Cleveland Clinic hospital in Ohio, UBS bank in Switzerland, Facebook in San Francisco, Sony in Tokyo, the BlueMountain hedge fund, and the Chicago police. Some of these narratives illustrate how foolishly people can behave when they are mastered by silos. Others, however, show how institutions and individuals can master their silos instead. These are stories of failure and success.

From ideas about how to organize office spaces and lead teams of people with disparate expertise, Tett lays bare the silo effect and explains how the ways people organize themselves, interact with each other, and imagine the world can take hold of an organization and lead it from institutional blindness to 20/20 vision. – (More)”

Ethics in Public Policy and Management: A global research companion


New book edited by Alan Lawton, Zeger van der Wal, and Leo Huberts: “Ethics in Public Policy and Management: A global research companion showcases the latest research from established and newly emerging scholars in the fields of public management and ethics. This collection examines the profound changes of the last 25 years, including the rise of New Public Management, New Public Governance and Public Value; how these have altered practitioners’ delivery of public services; and how academics think about those services.

Drawing on research from a broad range of disciplines, Ethics in Public Policy and Management looks to reflect on this changing landscape. With contributions from Asia, Australasia, Europe and the USA, the collection is grouped into five main themes:

  • theorising the practice of ethics;
  • understanding and combating corruption;
  • managing integrity;
  • ethics across boundaries;
  • expanding ethical policy domains.

This volume will prove thought-provoking for educators, administrators, policy makers and researchers across the fields of public management, public administration and ethics….(More)”

Review Federal Agencies on Yelp…and Maybe Get a Response


Yelp Official Blog: “We are excited to announce that Yelp has concluded an agreement with the federal government that will allow federal agencies and offices to claim their Yelp pages, read and respond to reviews, and incorporate that feedback into service improvements.

We encourage Yelpers to review any of the thousands of agency field offices, TSA checkpoints, national parks, Social Security Administration offices, landmarks and other places already listed on Yelp if you have good or bad feedback to share about your experiences. Not only is it helpful to others who are looking for information on these services, but you can actually make an impact by sharing your feedback directly with the source.

It’s clear Washington is eager to engage with people directly through social media. Earlier this year a group of 46 lawmakers called for the creation of a “Yelp for Government” in order to boost transparency and accountability, and Representative Ron Kind reiterated this call in a letter to the General Services Administration (GSA). Luckily for them, there’s no need to create a new platform now that government agencies can engage directly on Yelp.

As this agreement is fully implemented in the weeks and months ahead, we’re excited to help the federal government more directly interact with and respond to the needs of citizens and to further empower the millions of Americans who use Yelp every day.

In addition to working with the federal government, last week we announced our partnership with ProPublica to incorporate health care statistics and consumer opinion survey data onto the Yelp business pages of more than 25,000 medical treatment facilities. We’ve also partnered with local governments in expanding the LIVES open data standard to show restaurant health scores on Yelp….(More)”

Can big databases be kept both anonymous and useful?


The Economist: “….The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

But the ability to compare databases threatens to make a mockery of such protections. Participants in genomics projects, promised anonymity in exchange for their DNA, have been identified by simple comparison with electoral rolls and other publicly available information. The health records of a governor of Massachusetts were plucked from a database, again supposedly anonymous, of state-employee hospital visits using the same trick. Reporters sifting through a public database of web searches were able to correlate them in order to track down one, rather embarrassed, woman who had been idly searching for single men. And so on.
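
The mechanism behind several of these re-identifications is a plain database join: a supposedly anonymous table and a public record such as an electoral roll often share quasi-identifiers (postcode, birth date, sex), and matching on them can single a person out. A minimal illustrative sketch, with hypothetical column names, is below:

```python
# Illustrative sketch of a linkage attack: join a supposedly anonymous table
# with a public register on shared quasi-identifiers. The column names
# ("zip_code", "birth_date", "sex", "name") are hypothetical.
import pandas as pd

QUASI_IDENTIFIERS = ["zip_code", "birth_date", "sex"]

def link(anonymous: pd.DataFrame, public_register: pd.DataFrame) -> pd.DataFrame:
    """Return anonymous rows that match exactly one named person in the register."""
    merged = anonymous.merge(public_register, on=QUASI_IDENTIFIERS, how="inner")
    # A quasi-identifier combination that maps to a single name in the register
    # re-identifies the corresponding "anonymous" record with high confidence.
    unique_names = merged.groupby(QUASI_IDENTIFIERS)["name"].transform("nunique")
    return merged[unique_names == 1]
```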

Each of these headline-generating stories creates a demand for more controls. But that, in turn, deals a blow to the idea of open data—that the electronic “data exhaust” people exhale more or less every time they do anything in the modern world is actually useful stuff which, were it freely available for analysis, might make that world a better place.

Of cake, and eating it

Modern cars, for example, record in their computers much about how, when and where the vehicle has been used. Comparing the records of many vehicles, says Viktor Mayer-Schönberger of the Oxford Internet Institute, could provide a solid basis for, say, spotting dangerous stretches of road. Similarly, an opening of health records, particularly in a country like Britain, which has a national health service, and cross-fertilising them with other personal data, might help reveal the multifarious causes of diseases like Alzheimer’s.

This is a true dilemma. People want both perfect privacy and all the benefits of openness. But they cannot have both. The stripping of a few details as the only means of assuring anonymity, in a world choked with data exhaust, cannot work. Poorly anonymised data are only part of the problem. What may be worse is that there is no standard for anonymisation. Every American state, for example, has its own prescription for what constitutes an adequate standard.

Worse still, devising a comprehensive standard may be impossible. Paul Ohm of Georgetown University, in Washington, DC, thinks that this is partly because the availability of new data constantly shifts the goalposts. “If we could pick an industry standard today, it would be obsolete in short order,” he says. Some data, such as those about medical conditions, are more sensitive than others. Some data sets provide great precision in time or place, others merely a year or a postcode. Each set presents its own dangers and requirements.

Fortunately, there are a few easy fixes. Thanks in part to the headlines, many now agree that public release of anonymised data is a bad move. Data could instead be released piecemeal, or kept in-house and accessible by researchers through a question-and-answer mechanism. Or some users could be granted access to raw data, but only in strictly controlled conditions.
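
A sketch of what such a question-and-answer mechanism might look like: researchers submit aggregate queries and never see raw rows, with small groups refused and answers perturbed. The threshold and noise scale below are illustrative assumptions, not a formal privacy guarantee:

```python
# Sketch of an in-house "question-and-answer" interface: researchers ask for
# aggregate counts rather than raw records. The minimum group size and noise
# scale are illustrative; this is access control, not provable anonymity.
import random

MIN_GROUP_SIZE = 10   # refuse to answer questions about very small groups
NOISE_SCALE = 2.0     # larger values favour privacy over accuracy

def noisy_count(records, predicate):
    """Answer 'how many records satisfy predicate?' without releasing the rows."""
    true_count = sum(1 for record in records if predicate(record))
    if true_count < MIN_GROUP_SIZE:
        raise ValueError("Query refused: the matching group is too small to release")
    return round(true_count + random.gauss(0, NOISE_SCALE))
```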

All these approaches, though, are anathema to the open-data movement, because they limit the scope of studies. “If we’re making it so hard to share that only a few have access,” says Tim Althoff, a data scientist at Stanford University, “that has profound implications for science, for people being able to replicate and advance your work.”

Purely legal approaches might mitigate that. Data might come with what have been called “downstream contractual obligations”, outlining what can be done with a given data set and holding any onward recipients to the same standards. One perhaps draconian idea, suggested by Daniel Barth-Jones, an epidemiologist at Columbia University, in New York, is to make it illegal even to attempt re-identification….(More).”

Big data algorithms can discriminate, and it’s not clear what to do about it


At The Conversation: “This program had absolutely nothing to do with race…but multi-variable equations.”

That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.
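
The basic construction is straightforward to sketch: bin historical arrest locations into grid cells and flag the densest cells. The sketch below is illustrative, not the CPD’s actual model, and it makes the underlying caveat visible: the score reflects where arrests were recorded, not necessarily where crime occurs.

```python
# Illustrative sketch of an arrest-based "heat map" (not the CPD algorithm):
# past arrest coordinates are binned into grid cells and the densest cells are
# flagged. Because the input is arrest records, the output reflects past
# enforcement patterns as much as underlying crime.
from collections import Counter

CELL_SIZE_DEGREES = 0.01  # grid resolution, roughly one kilometre (illustrative)

def hottest_cells(arrest_coordinates, top_n=10):
    """arrest_coordinates: iterable of (latitude, longitude) pairs from past records."""
    cells = Counter(
        (round(lat / CELL_SIZE_DEGREES), round(lon / CELL_SIZE_DEGREES))
        for lat, lon in arrest_coordinates
    )
    return cells.most_common(top_n)  # the top_n "hottest" grid cells
```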

Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.
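
One simple quantitative check researchers apply is the disparate impact ratio: the rate of favorable outcomes an algorithm gives a protected group divided by the rate it gives a reference group, with values below roughly 0.8 commonly treated as a warning sign (the “four-fifths rule” used in US employment law). A minimal sketch:

```python
# Sketch of a common fairness check, the disparate impact ratio: the rate of
# favorable outcomes for a protected group divided by the rate for a reference
# group. Values below ~0.8 are often treated as a red flag (the "four-fifths
# rule"). This is a diagnostic heuristic, not a legal determination.
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """outcomes: booleans (True = favorable decision), aligned with group labels."""
    def favorable_rate(label):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        if not decisions:
            raise ValueError(f"no records for group {label!r}")
        return sum(decisions) / len(decisions)

    return favorable_rate(protected) / favorable_rate(reference)

# Toy example: group "b" receives favorable decisions far less often than "a".
ratio = disparate_impact_ratio(
    outcomes=[True, True, False, True, False, False, False, True],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    protected="b",
    reference="a",
)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 here, well below 0.8
```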

There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that not only is this possible, but it’s happening already. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate….

While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.

Even when the Chicago Police Department was hit with a Freedom of Information Act request, they did not release their algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

Legal hurdles

Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argued together that US hiring law fails to address claims about discriminatory algorithms in hiring.

The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance….(More)”