Restoring Confidence in Open, Shared and Personal Data


Report of the UK Digital Government Review: “It is obvious that government needs to be able to use data both to deliver services and to present information to public view. How else would government know which bank account to place a pension payment into, or a citizen know the results of an election or how to contact their elected representatives?

As more and more data is created, preserved and shared in ever-increasing volumes, a number of urgent questions arise: over opportunities and hazards; over the importance of using best-practice techniques, insights and technologies developed in the private sector, academia and elsewhere; over the promises and limitations of openness; and over how all this might be articulated and made accessible to the public.

Government has already adopted “open data” (we will discuss this more in the next section) and there are now increasing calls for government to pay more attention to data analytics and so-called “big data” – although the first faltering steps to unlock benefits here have often ended in the discovery that using large-scale data is a far more nuanced business than was initially assumed.

Debates around government and data have often been extremely high-profile – the NHS care.data [27] debate was raging while this review was in progress – but they are also shrouded in terms that can generate confusion and complexities that are not easily summarized.

In this chapter we will unpick some of these terms and some parts of the debate. This is a detailed and complex area and there is much more that could have been included [28]. This is not an area that can easily be summarized into a simple bullet-pointed list of policies.

Within this report we will use the following terms and definitions, proceeding to a detailed analysis of each in turn:

1. Open Data
Definition [29]: Data that can be freely used, reused and redistributed by anyone – subject only, at most, to the requirement to attribute and share alike.
Examples: insolvency notices in the London Gazette; government spending information; public transport information; Official National Statistics.

2. Shared Data
Definition: Restricted data provided to restricted organisations or individuals for restricted purposes.
Examples: the National Pupil Database; NHS care.data; integrated health and social care; individual census returns.

3. Personal Data
Definition: Data that relate to a living individual who can be identified from those data (for the full legal definition see [30]).
Examples: health records; individual tax records; insolvency notices in the London Gazette; the National Pupil Database.

NB: These definitions overlap. Personal data can exist in both open and shared data.

This social productivity will help build future economic productivity; in the meantime it will improve people’s lives and it will enhance our democracy. From our analysis it was clear that there was room for improvement…”

White House: Help Shape Public Participation


Corinna Zarek and Justin Herman at the White House Blog: “Public participation — where citizens help shape and implement government programs — is a foundation of open, transparent, and engaging government services. From emergency management and regulatory development to science and education, better and more meaningful engagement with those who use public services can measurably improve government for everyone.
A team across the government is now working side-by-side with civil society organizations to deliver the first U.S. Public Participation Playbook, dedicated to providing best practices for how agencies can better design public participation programs, and suggested performance metrics for evaluating their effectiveness.
Developing a U.S. Public Participation Playbook has been an open government priority, and was included in both the first and second U.S. Open Government National Action Plans as part of the United States effort to increase public integrity in government programs. This resource reflects the commitment of the government and civic partners to measurably improve participation programs, and is designed using the same inclusive principles that it champions.
More than 30 Federal leaders from across diverse missions in public service have collaborated on draft best practices, or “plays,” led by the General Services Administration’s inter-agency SocialGov Community. The playbook is not limited to digital participation, and is designed to address needs from the full spectrum of public participation programs.
The plays are structured to provide best practices, tangible examples, and suggested performance metrics for government activities that already exist or are under development. Categories covered by the plays include encouraging community development and outreach, empowering participants through public/private partnerships, using data to drive decisions, and designing for inclusiveness and accessibility.
In developing this new resource, the team has been reaching out to more than a dozen civil society organizations and stakeholders, asking them to contribute as the Playbook is created. The team would like your input as well! Over the next month, contribute your ideas to the playbook using Madison, an easy-to-use, open source platform that allows for accountable review of each contribution.
Through this process, the team will work together to ensure that the Playbook reflects the best ideas and examples for agencies to use in developing and implementing their programs with public participation in mind. This resource will be a living document, and stakeholders from inside or outside of government should continually offer new insights — whether new plays, the latest case studies, or the most current performance metrics — to the playbook.
We look forward to seeing the public participate in the creation and evolution of the Public Participation Playbook!”

Understanding "New Power"


Article by Jeremy Heimans and Henry Timms in Harvard Business Review: “We all sense that power is shifting in the world. We see increasing political protest, a crisis in representation and governance, and upstart businesses upending traditional industries. But the nature of this shift tends to be either wildly romanticized or dangerously underestimated.
There are those who cherish giddy visions of a new techno-utopia in which increased connectivity yields instant democratization and prosperity. The corporate and bureaucratic giants will be felled and the crowds coronated, each of us wearing our own 3D-printed crown. There are also those who have seen this all before. Things aren’t really changing that much, they say. Twitter supposedly toppled a dictator in Egypt, but another simply popped up in his place. We gush over the latest sharing-economy start-up, but the most powerful companies and people seem only to get more powerful.
Both views are wrong. They confine us to a narrow debate about technology in which either everything is changing or nothing is. In reality, a much more interesting and complex transformation is just beginning, one driven by a growing tension between two distinct forces: old power and new power.
Old power works like a currency. It is held by few. Once gained, it is jealously guarded, and the powerful have a substantial store of it to spend. It is closed, inaccessible, and leader-driven. It downloads, and it captures.
New power operates differently, like a current. It is made by many. It is open, participatory, and peer-driven. It uploads, and it distributes. Like water or electricity, it’s most forceful when it surges. The goal with new power is not to hoard it but to channel it.

The battle and the balancing between old and new power will be a defining feature of society and business in the coming years. In this article, we lay out a simple framework for understanding the underlying dynamics at work and how power is really shifting: who has it, how it is distributed, and where it is heading….”

Smart cities: the state-of-the-art and governance challenge


New Paper by Mark Deakin in Triple Helix – A Journal of University-Industry-Government Innovation and Entrepreneurship: “Reflecting on the governance of smart cities, the state-of-the-art this paper advances offers a critique of recent city ranking and future Internet accounts of their development. Armed with these critical insights, it goes on to explain smart cities in terms of the social networks, cultural attributes and environmental capacities, vis-a-vis, vital ecologies of the intellectual capital, wealth creation and standards of participatory governance regulating their development. The Triple Helix model which the paper advances to explain these performances in turn suggests that cities are smart when the ICTs of future Internet developments successfully embed the networks society needs for them to not only generate intellectual capital, or create wealth, but also cultivate the environmental capacity, ecology and vitality of those spaces which the direct democracy of their participatory governance open up, add value to and construct.”

Crowdsourcing and Humanitarian Action: Analysis of the Literature


Patrick Meier:  “Raphael Hörler from Zurich’s ETH University has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).”

Challenging Critics of Transparency in Government


at Brookings’s FIXGOV: “Brookings today published my paper, “Why Critics of Transparency Are Wrong.” It describes and subsequently challenges a school of thinkers who in various ways object to government openness and transparency. They include some very distinguished scholars and practitioners from Francis Fukuyama to Brookings’ own Jonathan Rauch. My co-authors, Gary Bass and Danielle Brian, and I explain why they get it wrong—government needs more transparency, not less.

“Critics like these assert that transparency results in government indecision, poor performance, and stalemate. Their arguments are striking because they attack a widely-cherished value, openness, attempting to connect it to an unrelated malady, gridlock. But when you hold the ‘transparency is the problem’ hypothesis up to the sunlight, its gaping holes quickly become visible.”

There is no doubt that gridlock, government dysfunction, polarization and other suboptimal aspects of the current policy environment are frustrating. However, proposed solutions must factor in both the benefits and the expected negative consequences of such changes. Less openness and transparency may ameliorate some current challenges while returning the American political system to a pre-progressive reform era in which corruption precipitated serious social and political costs.

“Simply put, information is power, and keeping information secret only serves to keep power in the hands of a few. This is a key reason the latest group of transparency critics should not be shrugged off: if left unaddressed, their arguments will give those who want to operate in the shadows new excuses.”

It is difficult to imagine a context in which honest graft is not paired with dishonest graft. It is even harder to foresee a government that is effective at distinguishing between the two and rooting out the latter.

“Rather than demonizing transparency for today’s problems, we should look to factors such as political parties and congressional leadership, partisan groups, and social (and mainstream) media, all of which thrive on the gridlock and dysfunction in Washington.”….

Click to read “Why Critics of Transparency Are Wrong.”

Look to Government—Yes, Government—for New Social Innovations


Paper by Christian Bason and Philip Colligan: “If asked to identify the hotbed of social innovation right now, many people would likely point to the new philanthropy of Silicon Valley or the social entrepreneurship efforts supported by Ashoka, Echoing Green, and Skoll Foundation. Very few people, if any, would mention their state capital or Capitol Hill. While local and national governments may have promulgated some of the greatest advances in human history — from public education to putting a man on the moon — public bureaucracies are more commonly known to stifle innovation.
Yet, around the world, there are local, regional, and national government innovators who are challenging this paradigm. They are pioneering a new form of experimental government — bringing new knowledge and practices to the craft of governing and policy making; drawing on human-centered design, user engagement, open innovation, and cross-sector collaboration; and using data, evidence, and insights in new ways.
Earlier this year, Nesta, the UK’s innovation foundation (which Philip helps run), teamed up with Bloomberg Philanthropies to publish i-teams, the first global review of public innovation teams set up by national and city governments. The study profiled 20 of the most established i-teams from around the world, including:

  • French Experimental Fund for Youth, which has supported more than 554 experimental projects (such as one that reduces school drop-out rates) that have benefited over 480,000 young people;
  • Nesta’s Innovation Lab, which has run 70 open innovation challenges and programs supporting over 750 innovators working in fields as diverse as energy efficiency, healthcare, and digital education;
  • New Orleans’ Innovation and Delivery team, which achieved a 19% reduction in the number of murders in the city in 2013 compared to the previous year.

How are i-teams achieving these results? The most effective ones are explicit about the goal they seek – be it creating a solution to a specific policy challenge, engaging citizenry in behaviors that help the commonweal, or transforming the way government behaves. Importantly, these teams are also able to deploy the right skills, capabilities, and methods for the job.
In addition, i-teams have a strong bias toward action. They apply academic research in behavioral economics and psychology to public policy and services, focusing on rapid experimentation and iteration. The approach stands in stark contrast to the normal routines of government.
Take, for example, the UK’s Behavioural Insights Team (BIT), often called the Nudge Unit. It sets clear goals, engages the right expertise to prototype means to the end, tests innovations rapidly in the field to learn what’s not working, and rapidly scales what is.
One of BIT’s most famous projects changed taxpayer behavior. BIT’s team of economists, behavioral psychologists, and seasoned government staffers came up with minor changes to tax letters, sent out by the UK Government, that subtly introduced positive peer pressure. By simply altering the letters to say that most people in their local area had already paid their taxes, BIT was able to boost repayment rates by around 5%. This trial was part of a range of interventions which have helped bring forward over £200 million in additional tax revenue to HM Revenue & Customs, the UK’s tax authority.
The Danish government’s internal i-team, MindLab (which Christian ran for 8 years) has likewise influenced citizen behavior….”
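As a concrete illustration of the kind of trial described above, here is a minimal, hypothetical Python sketch of how a letter experiment like BIT’s is typically evaluated: compare payment rates between a control group that received the standard letter and a treatment group that received the social-norm wording, then look at the uplift. The group sizes and payment counts below are invented for illustration and are not BIT’s figures.

```python
# Hypothetical sketch of evaluating a social-norm letter trial.
# These figures are invented; they are not BIT's or HMRC's data.

control = {"letters_sent": 10_000, "paid": 6_500}    # standard reminder letter
treatment = {"letters_sent": 10_000, "paid": 6_850}  # letter noting most local people have paid

control_rate = control["paid"] / control["letters_sent"]
treatment_rate = treatment["paid"] / treatment["letters_sent"]
uplift = treatment_rate - control_rate  # percentage-point difference in payment rates

extra_payments = uplift * treatment["letters_sent"]  # extra payments per 10,000 letters

print(f"control payment rate:   {control_rate:.1%}")
print(f"treatment payment rate: {treatment_rate:.1%}")
print(f"uplift: {uplift:.1%} ({extra_payments:.0f} extra payments per 10,000 letters)")
```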

Smarter Than Us: The Rise of Machine Intelligence


Book by Stuart Armstrong at the Machine Intelligence Research Institute: “What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.”

Linguistic Mapping Reveals How Word Meanings Sometimes Change Overnight


Emerging Technology From the arXiv: “In October 2012, Hurricane Sandy approached the eastern coast of the United States. At the same time, the English language was undergoing a small earthquake of its own. Just months before, the word “sandy” was an adjective meaning “covered in or consisting mostly of sand” or “having light yellowish brown color.” Almost overnight, this word gained an additional meaning as a proper noun for one of the costliest storms in U.S. history.
A similar change occurred to the word “mouse” in the early 1970s when it gained the new meaning of “computer input device.” In the 1980s, the word “apple” became a proper noun synonymous with the computer company. And later, the word “windows” followed a similar course after the release of the Microsoft operating system.
All this serves to show how language constantly evolves, often slowly but at other times almost overnight. Keeping track of these new senses and meanings has always been hard. But not anymore.
Today, Vivek Kulkarni at Stony Brook University in New York and a few pals show how they have tracked these linguistic changes by mining the corpus of words stored in databases such as Google Books, movie reviews from Amazon, and of course the microblogging site Twitter.
These guys have developed three ways to spot changes in the language. The first is a simple count of how often words are used, using tools such as Google Trends. For example, in October 2012, the frequencies of the words “Sandy” and “hurricane” both spiked in the run-up to the storm. However, only one of these words changed its meaning, something that a frequency count cannot spot.
So Kulkarni and co have a second method in which they label all of the words in the databases according to their parts of speech, whether a noun, a proper noun, a verb, an adjective and so on. This clearly reveals a change in the way the word “Sandy” was used, from adjective to proper noun, while also showing that the word “hurricane” had not changed.
The parts-of-speech technique is useful but not infallible. It cannot pick up the change in meaning of the word “mouse,” since both senses are nouns. So the team has a third approach.
This maps the linguistic vector space in which words are embedded. The idea is that words in this space are close to other words that appear in similar contexts. For example, the word “big” is close to words such as “large,” “huge,” “enormous,” and so on.
By examining the linguistic space at different points in history, it is possible to see how meanings have changed. For example, in the 1950s, the word “gay” was close to words such as “cheerful” and “dapper.” Today, however, it has moved significantly to be closer to words such as “lesbian,” “homosexual,” and so on.
Kulkarni and co examine three different databases to see how words have changed: the set of five-word sequences that appear in the Google Books corpus, Amazon movie reviews since 2000, and messages posted on Twitter between September 2011 and October 2013.
Their results reveal not only which words have changed in meaning, but when the change occurred and how quickly. For example, before the 1970s, the word “tape” was used almost exclusively to describe adhesive tape but then gained an additional meaning of “cassette tape.”…”
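As a rough sketch of the third approach, the vector-space comparison, here is a small Python example (not the authors’ code) that lists a word’s nearest neighbours in two embedding spaces built from different periods. The tiny three-dimensional vectors are invented placeholders; in practice they would come from a model such as word2vec trained on period-specific corpora, with drift measured across the full vocabulary.

```python
# Toy sketch of detecting semantic change by comparing a word's nearest
# neighbours in embeddings from two time periods. The vectors below are
# invented placeholders, not vectors from the paper's Google Books,
# Amazon, or Twitter corpora.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest_neighbours(word, embeddings, k=3):
    """Words whose vectors lie closest to `word` within one period's space."""
    target = embeddings[word]
    scores = {w: cosine(target, vec) for w, vec in embeddings.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

space_1950s = {
    "gay":      np.array([0.9, 0.1, 0.0]),
    "cheerful": np.array([0.8, 0.2, 0.1]),
    "dapper":   np.array([0.9, 0.0, 0.1]),
    "lesbian":  np.array([0.1, 0.9, 0.2]),
}
space_today = {
    "gay":      np.array([0.1, 0.9, 0.1]),
    "cheerful": np.array([0.8, 0.2, 0.1]),
    "dapper":   np.array([0.9, 0.0, 0.1]),
    "lesbian":  np.array([0.1, 0.9, 0.2]),
}

print(nearest_neighbours("gay", space_1950s))  # ['dapper', 'cheerful', ...]
print(nearest_neighbours("gay", space_today))  # ['lesbian', ...]

# A word whose neighbourhood shifts sharply between periods (or whose aligned
# vectors have low cross-period similarity) is flagged as having changed meaning.
```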

Activists Wield Search Data to Challenge and Change Police Policy


at the New York Times: “One month after a Latino youth died from a gunshot as he sat handcuffed in the back of a police cruiser here last year, 150 demonstrators converged on Police Headquarters, some shouting “murderers” as baton-wielding officers in riot gear fired tear gas.

The police say the youth shot himself with a hidden gun. But to many residents of this city, which is 40 percent black, the incident fit a pattern of abuse and bias against minorities that includes frequent searches of cars and use of excessive force. In one case, a black female Navy veteran said she was beaten by an officer after telling a friend she was visiting that the friend did not have to let the police search her home.

Yet if it sounds as if Durham might have become a harbinger of Ferguson, Mo. — where the fatal shooting of an unarmed black teenager by a white police officer led to weeks of protests this summer — things took a very different turn. Rather than relying on demonstrations to force change, a coalition of ministers, lawyers and community and political activists turned instead to numbers. They used an analysis of state data from 2002 to 2013 that showed that the Durham police searched black male motorists at more than twice the rate of white males during stops. Drugs and other illicit materials were found no more often on blacks….
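The arithmetic behind such an analysis is straightforward, and a short, hypothetical Python sketch may help make it concrete: from per-stop records, compute each group’s search rate (searches per stop) and contraband “hit rate” (finds per search). The toy records below are invented and are not the North Carolina data.

```python
# Hypothetical sketch of a traffic-stop disparity analysis: search rates and
# contraband hit rates by group. The records below are invented, not the
# 2002-2013 North Carolina stop data.

from collections import defaultdict

stops = [
    # (group, was_searched, contraband_found)
    ("black", True,  False),
    ("black", True,  True),
    ("black", False, False),
    ("white", True,  True),
    ("white", False, False),
    ("white", False, False),
]

counts = defaultdict(lambda: {"stops": 0, "searches": 0, "hits": 0})
for group, searched, found in stops:
    counts[group]["stops"] += 1
    counts[group]["searches"] += int(searched)
    counts[group]["hits"] += int(found)

for group, c in counts.items():
    search_rate = c["searches"] / c["stops"]                        # searches per stop
    hit_rate = c["hits"] / c["searches"] if c["searches"] else 0.0  # finds per search
    print(f"{group}: search rate {search_rate:.0%}, hit rate {hit_rate:.0%}")

# If one group is searched far more often per stop while contraband is found
# no more often per search, the extra searches are hard to justify on
# effectiveness grounds -- the pattern the Durham analysis highlighted.
```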

The use of statistics is gaining traction not only in North Carolina, where data on police stops is collected under a 15-year-old law, but in other cities around the country.

Austin, Tex., began requiring written consent for searches without probable cause two years ago, after its independent police monitor reported that whites stopped by the police were searched one in every 28 times, while blacks were searched one in eight times.

In Kalamazoo, Mich., a city-funded study last year found that black drivers were nearly twice as likely to be stopped, and then “much more likely to be asked to exit their vehicle, to be handcuffed, searched and arrested.”

As a result, Jeff Hadley, the public safety chief of Kalamazoo, imposed new rules requiring officers to explain to supervisors what “reasonable suspicion” they had each time they sought a driver’s consent to a search. Traffic stops have declined 42 percent amid a drop of more than 7 percent in the crime rate, he said.

“It really stops the fishing expeditions,” Chief Hadley said of the new rules. Though the findings demoralized his officers, he said, the reaction from the African-American community stunned him. “I thought they would be up in arms, but they said: ‘You’re not telling us anything we didn’t already know. How can we help?’ ”

The School of Government at the University of North Carolina at Chapel Hill has a new manual for defense lawyers, prosecutors and judges, with a chapter that shows how stop and search data can be used by the defense to raise challenges in cases where race may have played a role…”