Meaningful Consent: The Economics of Privity in Networked Environments


Paper by Jonathan Cave: “Recent work on privacy (e.g. WEIS 2013/4, Meaningful Consent in the Digital Economy project) recognises the unanticipated consequences of data-centred legal protections in a world of shifting relations between data and human actors. But the rules have not caught up with these changes, and the irreversible consequences of ‘make do and mend’ are not often taken into account when changing policy.

Many of the most-protected ‘personal’ data are not personal at all, but are created to facilitate the operation of larger (e.g. administrative, economic, transport) systems or inadvertently generated by using such systems. The protection given to such data typically rests on notions of informed consent even in circumstances where such consent may be difficult to define, harder to give and nearly impossible to certify in meaningful ways. Such protections typically involve a mix of data collection, access and processing rules that are either imposed on behalf of individuals or are to be exercised by them. This approach adequately protects some personal interests, but not all – and is definitely not future-proof. Boundaries between allowing individuals to discover and pursue their interests on one side and behavioural manipulation on the other are often blurred. The costs (psychological and behavioural as well as economic and practical) of exercising control over one’s data are rarely taken into account as some instances of the Right to be Forgotten illustrate. The purposes for which privacy rights were constructed are often forgotten, or have not been reinterpreted in a world of ubiquitous monitoring data, multi-person ‘private exchanges,’ and multiple pathways through which data can be used to create and to capture value. Moreover, the parties who should be involved in making decisions – those connected by a network of informational relationships – are often not in contractual, practical or legal contact. These developments, associated with e.g. the Internet of Things, Cloud computing and big data analytics, should be recognised as challenging privacy rules and, more fundamentally, the adequacy of informed consent (e.g. to access specified data for specified purposes) as a means of managing innovative, flexible, and complex informational architectures.

This paper presents a framework for organising these challenges and using them to evaluate proposed policies, specifically in relation to complex, automated, automatic or autonomous data collection, processing and use. It argues for a movement away from a system of property rights based on individual consent to a values-based ‘privity’ regime – a collection of differentiated (relational as well as property) rights and consents that may be better able to accommodate innovations. Privity regimes (see deFillipis 2006) bundle together rights regarding e.g. confidential disclosure with ‘standing’ or voice options in relation to informational linkages.

The impacts are examined through a game-theoretic comparison between the proposed privity regime and existing privacy rights in personal data markets that include: conventional ‘behavioural profiling’ and search; situations where third parties may have complementary roles or conflicting interests in such data and where data have value in relation both to specific individuals and to larger groups (e.g. ‘real-world’ health data); n-sided markets on data platforms (including social and crowd-sourcing platforms with long and short memories); and the use of ‘privity-like’ rights inherited by data objects and by autonomous systems whose ownership may be shared among many people….(More)”

The Future of the Professions: How Technology Will Transform the Work of Human Experts


New book by Richard Susskind and Daniel Susskind: “This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want doctors, teachers, accountants, architects, the clergy, consultants, lawyers, and many others, to work as they did in the 20th century.

The Future of the Professions explains how ‘increasingly capable systems’ – from telepresence to artificial intelligence – will bring fundamental change in the way that the ‘practical expertise’ of specialists is made available in society.

The authors challenge the ‘grand bargain’ – the arrangement that grants various monopolies to today’s professionals. They argue that our current professions are antiquated, opaque and no longer affordable, and that the expertise of the best is enjoyed only by a few. In their place, they propose six new models for producing and distributing expertise in society.

The book raises important practical and moral questions. In an era when machines can out-perform human beings at most tasks, what are the prospects for employment, who should own and control online expertise, and what tasks should be reserved exclusively for people?

Based on the authors’ in-depth research of more than ten professions, and illustrated by numerous examples from each, this is the first book to assess and question the relevance of the professions in the 21st century. (Chapter 1)”

Citizen Urban Science


New report by Anthony Townsend and Alissa Chisholm at the Cities of Data Project: “Over the coming decades, the world will continue to urbanize rapidly amidst an historic migration of computing power off the desktop, unleashing new opportunities for data collection that reveal how cities function. In a recent report, Making Sense of the Science of Cities (bit.ly/sciencecities), we described an emerging global research movement that seeks to establish a new urban science built atop this new infrastructure of instruments. But will this new intellectual venture be an inclusive endeavor? What role is there for the growing ranks of increasingly well-equipped and well-informed citizen volunteers and amateur investigators to work alongside professional scientists? How are researchers, activists and city governments exploring that potential today? Finally, what can be done to encourage and accelerate experimentation?

This report examines three case studies that provide insight into emerging models of citizen science, highlighting the possibilities of citizen-university-government collaborative research, and the important role of open data platforms to enable these partnerships….(More)”

Journal of Technology Science


Technology Science is an open access forum for any original material dealing primarily with a social, political, personal, or organizational benefit or adverse consequence of technology. Studies that characterize a technology-society clash or present an approach to better harmonize technology and society are especially welcomed. Papers can come from anywhere in the world.

Technology Science is interested in reviews of research, experiments, surveys, tutorials, and analyses. Writings may propose solutions or describe unsolved problems. Technology Science may also publish letters, short communications, and relevant news items. All submissions are peer-reviewed.

The scientific study of technology-society clashes is a cross-disciplinary pursuit, so papers in Technology Science may come from any of many possible disciplinary traditions, including but not limited to social science, computer science, political science, law, economics, policy, or statistics.

The Data Privacy Lab at Harvard University publishes Technology Science and its affiliated subset of papers called the Journal of Technology Science and maintains them online at techscience.org and at jots.pub. Technology Science is available free of charge over the Internet. While it is possible that bound paper copies of Technology Science content may be produced for a fee, all content will continue to be offered online at no charge….(More)”

Index: Crime and Criminal Justice Data


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on crime and criminal justice data and was originally published in 2015.

This index provides information about the type of crime and criminal justice data collected, shared and used in the United States. Because data related to the criminal justice system are often unreliable, or just plain missing, this index also highlights some of the issues that stand in the way of accessing useful and in-demand statistics.

Data Collections: National Crime Statistics

  • Number of incident-based crime datasets created by the Federal Bureau of Investigation (FBI): 2
  • Number of U.S. Statistical Agencies: 13
    • How many of those are focused on criminal justice: 1, the Bureau of Justice Statistics (BJS)
    • Number of data collections focused on criminal justice that the BJS produces: 61
  • Number of federal-level APIs available for crime or criminal justice data: 1, the National Crime Victimization Survey (NCVS)
    • Frequency of the NCVS: annually
  • Number of Statistical Analysis Centers (SACs), organizations that are essentially clearinghouses for crime and criminal justice data for each state, the District of Columbia, Puerto Rico and the Northern Mariana Islands: 53

Open data, data use and the impact of those efforts

  • Number of datasets returned when “criminal justice” is searched for on Data.gov: 417, including federal-, state- and city-level datasets
  • Number of datasets returned when “crime” is searched for on Data.gov: 281 (2015 snapshots; both counts can be re-checked against the catalog’s search API, as shown in the sketch after this list)
  • The percentage by which public complaints dropped after officers started wearing body cameras, according to a study done in Rialto, Calif.: 88
  • The percentage by which reported incidents of officer use of force fell after officers started wearing body cameras, according to the same Rialto study: 5
  • The percentage by which crime decreased during an experiment in predictive policing in Shreveport, LA: 35
  • Number of crime data sets made available by the Seattle Police Department – generally seen as a leader in police data innovation – on the Seattle.gov website: 4
    • Major crime stats by category in aggregate
    • Crime trend reports
    • Precinct data by beat
    • State sex offender database
  • Number of datasets mapped by the Seattle Police Department: 2
    • 911 incidents
    • Police reports
  • Number of states where risk assessment tools must be used in pretrial proceedings to help determine whether an offender is released from jail before a trial: at least 11.
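
For readers who want to reproduce counts like those above, Data.gov’s catalog runs on CKAN, whose public search API reports a match count for any query. A minimal sketch follows, assuming the standard CKAN `package_search` action exposed at catalog.data.gov; today’s counts will have drifted from the 2015 snapshot:

```python
# A sketch of reproducing a Data.gov dataset count via the CKAN search API.
# Assumes catalog.data.gov's standard `package_search` action; the numbers
# returned today will differ from the 2015 snapshot in the index above.
import json
import urllib.parse
import urllib.request

def count_datasets(query: str) -> int:
    """Return how many catalog.data.gov datasets match a phrase query."""
    params = urllib.parse.urlencode({"q": f'"{query}"', "rows": 0})
    url = f"https://catalog.data.gov/api/3/action/package_search?{params}"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return payload["result"]["count"]

if __name__ == "__main__":
    for term in ("criminal justice", "crime"):
        print(f"{term}: {count_datasets(term)} datasets")
```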

Police Data

  • Number of federally mandated databases that collect information about officer use of force or officer-involved shootings, nationwide: 0
  • The year a crime bill was passed calling for data on excessive force to be collected for research and statistical purposes – a provision that has never been funded: 1994
  • Number of police departments that committed to being a part of the White House’s Police Data Initiative: 21
  • Percentage of police departments surveyed in 2013 by the Office of Community Oriented Policing Services within the Department of Justice that are not using body cameras, and are therefore not collecting body camera data: 75

The criminal justice system

  • Parts of the criminal justice system where data about an individual can be created or collected: at least 6
    • Entry into the system (arrest)
    • Prosecution and pretrial
    • Sentencing
    • Corrections
    • Probation/parole
    • Recidivism


The Silo Effect – The Peril of Expertise and the Promise of Breaking Down Barriers


Book by Gillian Tett: “From award-winning columnist and journalist Gillian Tett comes a brilliant examination of how our tendency to create functional departments—silos—hinders our work…and how some people and organizations can break those silos down to unleash innovation.

One of the characteristics of industrial age enterprises is that they are organized around functional departments. This organizational structure results in both limited information and restricted thinking. The Silo Effect asks these basic questions: why do humans working in modern institutions collectively act in ways that sometimes seem stupid? Why do normally clever people fail to see risks and opportunities that later seem blindingly obvious? Why, as psychologist Daniel Kahneman put it, are we sometimes so “blind to our own blindness”?

Gillian Tett, journalist and senior editor for the Financial Times, answers these questions by plumbing her background as an anthropologist and her experience reporting on the financial crisis in 2008. In The Silo Effect, she shares eight different tales of the silo syndrome, spanning Bloomberg’s City Hall in New York, the Bank of England in London, Cleveland Clinic hospital in Ohio, UBS bank in Switzerland, Facebook in San Francisco, Sony in Tokyo, the BlueMountain hedge fund, and the Chicago police. Some of these narratives illustrate how foolishly people can behave when they are mastered by silos. Others, however, show how institutions and individuals can master their silos instead. These are stories of failure and success.

From ideas about how to organize office spaces and lead teams of people with disparate expertise, Tett lays bare the silo effect and explains how the ways people organize themselves, interact with each other and imagine the world can take hold of an organization and lead it from institutional blindness to 20/20 vision…(More)”

Social Media and Local Governments


Book edited by Mehmet Zahid Sobaci: “Today, social media have attracted the attention of political actors and administrative institutions to inform citizens as a prerequisite of open and transparent administration, deliver public services, contact stakeholders, revitalize democracy, encourage cross-agency cooperation, and contribute to knowledge management. In this context, social media tools can contribute to the emergence of citizen-oriented, open, transparent and participatory public administration. Taking advantage of the opportunities offered by social media is not limited to central government. Local governments deploy internet-based innovative technologies that complement traditional methods in implementing different functions. This book focuses on the relationship between local governments and social media, deals with the changes that social media have caused in the organization of local governments, in their understanding of service provision and performance, and in their relationships with their partners, and aims to advance our theoretical and empirical understanding of the growing use of social media by local governments. This book will be of interest to researchers and students in e-government, public administration, political science, communication, information science, and social media. Government officials and public managers will also find practical recommendations for the use of social media in several aspects of local governance…(More)”

Review Federal Agencies on Yelp…and Maybe Get a Response


Yelp Official Blog: “We are excited to announce that Yelp has concluded an agreement with the federal government that will allow federal agencies and offices to claim their Yelp pages, read and respond to reviews, and incorporate that feedback into service improvements.

We encourage Yelpers to review any of the thousands of agency field offices, TSA checkpoints, national parks, Social Security Administration offices, landmarks and other places already listed on Yelp if you have good or bad feedback to share about your experiences. Not only is it helpful to others who are looking for information on these services, but you can actually make an impact by sharing your feedback directly with the source.

It’s clear Washington is eager to engage with people directly through social media. Earlier this year a group of 46 lawmakers called for the creation of a “Yelp for Government” in order to boost transparency and accountability, and Representative Ron Kind reiterated this call in a letter to the General Services Administration (GSA). Luckily for them, there’s no need to create a new platform now that government agencies can engage directly on Yelp.

As this agreement is fully implemented in the weeks and months ahead, we’re excited to help the federal government more directly interact with and respond to the needs of citizens and to further empower the millions of Americans who use Yelp every day.

In addition to working with the federal government, last week we announced our partnership with ProPublica to incorporate health care statistics and consumer opinion survey data onto the Yelp business pages of more than 25,000 medical treatment facilities. We’ve also partnered with local governments in expanding the LIVES open data standard to show restaurant health scores on Yelp….(More)”

Open Data: A 21st Century Asset for Small and Medium Sized Enterprises


“The economic and social potential of open data is widely acknowledged. In particular, the business opportunities have received much attention. But for all the excitement, we still know very little about how and under what conditions open data really works.

To broaden our understanding of the use and impact of open data, the GovLab has a variety of initiatives and studies underway. Today, we share publicly our findings on how Small and Medium Sized Enterprises (SMEs) are leveraging open data for a variety of purposes. Our paper “Open Data: A 21st Century Asset for Small and Medium Sized Enterprises” seeks to build a portrait of the lifecycle of open data—how it is collected, stored and used. It outlines some of the most important parameters of an open data business model for SMEs….

The paper analyzes ten aspects of open data and establishes ten principles for its effective use by SMEs. Taken together, these offer a roadmap for any SME considering greater use or adoption of open data in its business.

Among the key findings included in the paper:

  • SMEs, which often lack access to data or sophisticated analytical tools to process large datasets, are likely to be among the chief beneficiaries of open data.
  • Government data is the main category of open data being used by SMEs. A number of SMEs are also using open scientific and shared corporate data.
  • Open data is used primarily to serve the Business-to-Business (B2B) markets, followed by the Business-to-Consumer (B2C) markets. A number of the companies studied serve two or three market segments simultaneously.
  • Open data is usually a free resource, but SMEs are monetizing their open-data-driven services to build viable businesses. The most common revenue models include subscription-based services, advertising, fees for products and services, freemium models, licensing fees, lead generation and philanthropic grants.
  • The most significant challenges SMEs face in using open data include those concerning data quality and consistency, insufficient financial and human resources, and issues surrounding privacy.

This is just a sampling of findings and observations. The paper includes a number of additional observations concerning business and revenue models, product development, customer acquisition, and other subjects of relevance to any company considering an open data strategy.”

Can big databases be kept both anonymous and useful?


The Economist: “….The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

But the ability to compare databases threatens to make a mockery of such protections. Participants in genomics projects, promised anonymity in exchange for their DNA, have been identified by simple comparison with electoral rolls and other publicly available information. The health records of a governor of Massachusetts were plucked from a database, again supposedly anonymous, of state-employee hospital visits using the same trick. Reporters sifting through a public database of web searches were able to correlate them in order to track down one, rather embarrassed, woman who had been idly searching for single men. And so on.
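
The mechanics behind such linkage attacks are easy to make concrete. Here is a minimal sketch (all records, names and field choices invented for illustration): direct identifiers are stripped from a table of hospital visits, yet identities fall out of a simple join between the remaining quasi-identifiers – ZIP code, birth date and sex – and a public voter roll, essentially the trick used against the Massachusetts records:

```python
# Illustrative linkage attack on an "anonymised" dataset (all data invented).
# Names and phone numbers have been removed, but the quasi-identifiers that
# remain (ZIP, birth date, sex) also appear in public records.

hospital_visits = [  # "anonymised": no direct identifiers
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "F", "diagnosis": "X"},
    {"zip": "02139", "birth_date": "1972-01-02", "sex": "M", "diagnosis": "Y"},
]

voter_roll = [  # public record: names alongside the same attributes
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1972-01-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(private_rows, public_rows):
    """Re-attach names by joining the two tables on quasi-identifiers."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"]
             for p in public_rows}
    for row in private_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:  # a unique match re-identifies the record
            yield index[key], row["diagnosis"]

for name, diagnosis in link(hospital_visits, voter_roll):
    print(f"{name} -> diagnosis {diagnosis}")
```

Latanya Sweeney famously estimated that this particular triple – ZIP code, birth date and sex – is unique for most of the American population, which is why removing names alone offers so little protection.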

Each of these headline-generating stories creates a demand for more controls. But that, in turn, deals a blow to the idea of open data—that the electronic “data exhaust” people exhale more or less every time they do anything in the modern world is actually useful stuff which, were it freely available for analysis, might make that world a better place.

Of cake, and eating it

Modern cars, for example, record in their computers much about how, when and where the vehicle has been used. Comparing the records of many vehicles, says Viktor Mayer-Schönberger of the Oxford Internet Institute, could provide a solid basis for, say, spotting dangerous stretches of road. Similarly, opening up health records, particularly in a country like Britain, which has a national health service, and cross-fertilising them with other personal data might help reveal the multifarious causes of diseases like Alzheimer’s.

This is a true dilemma. People want both perfect privacy and all the benefits of openness. But they cannot have both. The stripping of a few details as the only means of assuring anonymity, in a world choked with data exhaust, cannot work. Poorly anonymised data are only part of the problem. What may be worse is that there is no standard for anonymisation. Every American state, for example, has its own prescription for what constitutes an adequate standard.

Worse still, devising a comprehensive standard may be impossible. Paul Ohm of Georgetown University, in Washington, DC, thinks that this is partly because the availability of new data constantly shifts the goalposts. “If we could pick an industry standard today, it would be obsolete in short order,” he says. Some data, such as those about medical conditions, are more sensitive than others. Some data sets provide great precision in time or place, others merely a year or a postcode. Each set presents its own dangers and requirements.

Fortunately, there are a few easy fixes. Thanks in part to the headlines, many now agree that public release of anonymised data is a bad move. Data could instead be released piecemeal, or kept in-house and accessible by researchers through a question-and-answer mechanism. Or some users could be granted access to raw data, but only in strictly controlled conditions.
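
One way such a question-and-answer mechanism can limit disclosure is to keep raw records in-house and return only aggregate answers perturbed with calibrated random noise, the idea formalised as differential privacy. A minimal sketch, with an invented dataset and an illustrative privacy parameter:

```python
# A sketch of an in-house question-and-answer mechanism: raw data are never
# released; researchers receive noisy aggregate answers instead. This is the
# Laplace mechanism from differential privacy; epsilon and the records are
# illustrative assumptions, not a production design.
import random

class PrivateCounter:
    def __init__(self, records, epsilon=0.5):
        self._records = records  # raw data, kept in-house
        self._epsilon = epsilon  # privacy budget per query (smaller = noisier)

    def noisy_count(self, predicate):
        """Answer 'how many records satisfy predicate?' with Laplace noise.

        Adding or removing one person changes a count by at most 1
        (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
        """
        true_count = sum(1 for r in self._records if predicate(r))
        # Difference of two exponentials ~ Laplace(scale=1/epsilon).
        noise = (random.expovariate(self._epsilon)
                 - random.expovariate(self._epsilon))
        return true_count + noise

db = PrivateCounter([{"age": a} for a in (23, 35, 41, 67, 70)])
print(db.noisy_count(lambda r: r["age"] >= 65))  # about 2, plus noise
```

Over many queries a researcher learns reliable aggregates, while the noise masks whether any single individual is in the data at all.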

All these approaches, though, are anathema to the open-data movement, because they limit the scope of studies. “If we’re making it so hard to share that only a few have access,” says Tim Althoff, a data scientist at Stanford University, “that has profound implications for science, for people being able to replicate and advance your work.”

Purely legal approaches might mitigate that. Data might come with what have been called “downstream contractual obligations”, outlining what can be done with a given data set and holding any onward recipients to the same standards. One perhaps draconian idea, suggested by Daniel Barth-Jones, an epidemiologist at Columbia University, in New York, is to make it illegal even to attempt re-identification….(More).”