Reclaiming the Smart City: Personal Data, Trust and the New Commons


Report by Theo Bass, Emma Sutherland and Tom Symons: “Cities are becoming a major focal point in the personal data economy. In city governments, there is a clamour for data-informed approaches to everything from waste management and public transport through to policing and emergency response.

This is a triumph for advocates of the better use of data in how we run cities. After years of making the case, there is now a general acceptance that social, economic and environmental pressures can be met more effectively by harnessing data.

But as that argument is won, a fresh debate is bubbling up under the surface of the glossy prospectus of the smart city: who decides what we do with all this data, and how do we ensure that its generation and use does not result in discrimination, exclusion and the erosion of privacy for citizens?

This report brings together a range of case studies featuring cities which have pioneered innovative practices and policies around the responsible use of data about people. Our methods combined desk research with more than 20 interviews with city administrators in cities across the world.

Recommendations

Based on our case studies, we also compile a range of lessons that policymakers can use to build an alternative vision of the smart city – one which promotes ethical data collection practices and responsible innovation with new technologies:

  1. Build consensus around clear ethical principles, and translate them into practical policies.
  2. Train public sector staff in how to assess the benefits and risks of smart technologies.
  3. Look outside the council for expertise and partnerships, including with other city governments.
  4. Find and articulate the benefits of privacy and digital ethics to multiple stakeholders.
  5. Become a test-bed for new services that give people more privacy and control.
  6. Make time and resources available for genuine public engagement on the use of surveillance technologies.
  7. Build digital literacy and make complex or opaque systems more understandable and accountable.
  8. Find opportunities to involve citizens in the process of data collection and analysis from start to finish….(More)”.

Informational Autocrats


Paper by Sergei M. Guriev and Daniel Treisman: “In recent decades, dictatorships based on mass repression have largely given way to a new model based on the manipulation of information. Instead of terrorizing citizens into submission, “informational autocrats” artificially boost their popularity by convincing the public they are competent.

To do so, they use propaganda and silence informed members of the elite by co-optation or censorship.

Using several sources – including a newly created dataset of authoritarian control techniques – we document a range of trends in recent autocracies that fit the theory: a decline in violence, efforts to conceal state repression, rejection of official ideologies, imitation of democracy, a perceptions gap between masses and elite, and the adoption by leaders of a rhetoric of performance rather than one aimed at inspiring fear….(More)”

Cloud Communities: The Dawn of Global Citizenship?


Robert Schuman Centre for Advanced Studies Research Paper by Liav Orgad and Rainer Baubock: “New digital technologies are rapidly changing the global economy and have connected billions of people in deterritorialised social networks. Will they also create new opportunities for global citizenship and alternatives to state-based political communities?

In his kick-off essay, Liav Orgad takes an optimistic view. Blockchain technology makes it possible to give every human being a unique legal persona and allows individuals to associate in ‘cloud communities’ that may take on several functions of territorial states. Fourteen commentators discuss this vision.
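
For readers unfamiliar with the mechanism, here is a minimal sketch of a hash-chained registry that admits one persona per unique identity proof. It is a hypothetical simplification, not Orgad’s actual proposal; a real system would be a distributed blockchain with consensus rather than an in-memory list.

```python
import hashlib
import json
import time

class PersonaLedger:
    """Toy append-only ledger: one legal persona per unique identity proof.
    Purely illustrative -- a real system would run on a distributed
    blockchain with consensus, not an in-memory list."""

    def __init__(self):
        self.blocks = []  # each block records one persona registration

    def register(self, identity_proof: str) -> str:
        # Derive a stable persona ID from the identity proof
        # (in practice, e.g., a biometric or document hash).
        persona_id = hashlib.sha256(identity_proof.encode()).hexdigest()
        if any(b["persona_id"] == persona_id for b in self.blocks):
            raise ValueError("persona already registered")  # one person, one persona
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"persona_id": persona_id, "prev": prev_hash, "ts": time.time()}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)
        return persona_id
```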

Sceptics assume that states or business corporations have always found ways to capture and use new technologies for their purposes. They emphasise that the political functions of states, including their task to protect human rights, require territorial monopolies of legitimate coercion that cannot be provided by cloud communities.

Others point out that individuals would sort themselves into cloud communities that are internally homogeneous, which risks deepening political cleavages within territorial societies.

Finally, some authors are concerned that digital political communities will deepen global social inequalities by excluding from access those who are already worse off in the birthright lottery of territorial citizenship.

Optimists, by contrast, see great potential in blockchain technology to overcome exclusion and marginalisation based on statelessness or the sheer lack of civil registries; they regard it as a tool for enhancing individual freedom, since people are self-sovereign in controlling their personal data; and they emphasise the possibilities for emancipatory movements to mobilise for global justice across territorial borders or to create their own internally democratic political utopias.

In the boldest vision, the deficits of cloud communities as voluntary political associations with limited scope of power could be overcome in a global cryptodemocracy that lets all individuals participate on a one-person-one-vote basis in global political decisions….(More)”.

What if people were paid for their data?


The Economist: “Data slavery. Jennifer Lyn Morone, an American artist, thinks this is the state in which most people now live. To get free online services, she laments, they hand over intimate information to technology firms. “Personal data are much more valuable than you think,” she says. To highlight this sorry state of affairs, Ms Morone has resorted to what she calls “extreme capitalism”: she registered herself as a company in Delaware in an effort to exploit her personal data for financial gain. She created dossiers containing different subsets of data, which she displayed in a London gallery in 2016 and offered for sale, starting at £100 ($135). The entire collection, including her health data and social-security number, can be had for £7,000.

Only a few buyers have taken her up on this offer and she finds “the whole thing really absurd”…. Given the current state of digital affairs, in which the collection and exploitation of personal data is dominated by big tech firms, Ms Morone’s approach, in which individuals offer their data for sale, seems unlikely to catch on. But what if people really controlled their data—and the tech giants were required to pay for access? What would such a data economy look like?…

Labour, like data, is a resource that is hard to pin down. Workers were not properly compensated for labour for most of human history. Even once people were free to sell their labour, it took decades for wages to reach liveable levels on average. History won’t repeat itself, but chances are that it will rhyme, predicts Glen Weyl in “Radical Markets”, a provocative new book he has co-written with Eric Posner of the University of Chicago. He argues that in the age of artificial intelligence, it makes sense to treat data as a form of labour.

To understand why, it helps to keep in mind that “artificial intelligence” is something of a misnomer. Messrs Weyl and Posner call it “collective intelligence”: most AI algorithms need to be trained using reams of human-generated examples, in a process called machine learning. Unless they know what the right answers (provided by humans) are meant to be, algorithms cannot translate languages, understand speech or recognise objects in images. Data provided by humans can thus be seen as a form of labour which powers AI. As the data economy grows up, such data work will take many forms. Much of it will be passive, as people engage in all kinds of activities—liking social-media posts, listening to music, recommending restaurants—that generate the data needed to power new services. But some people’s data work will be more active, as they make decisions (such as labelling images or steering a car through a busy city) that can be used as the basis for training AI systems….
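
To make the “data as labour” point concrete, here is a minimal sketch using scikit-learn: the classifier learns nothing until humans supply labelled examples. The toy dataset and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human "data work": each text comes with a judgment only a person could give.
texts = ["loved this restaurant", "terrible service",
         "great food", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative, provided by human labellers

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the model only "knows" what the labellers taught it

print(model.predict(["the food was great"]))  # -> [1]
```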

But much still needs to happen for personal data to be widely considered as labour, and paid for as such. For one thing, the right legal framework will be needed to encourage the emergence of a new data economy. The European Union’s new General Data Protection Regulation, which came into effect in May, already gives people extensive rights to check, download and even delete personal data held by companies. Second, the technology to keep track of data flows needs to become much more capable. Research to calculate the value of particular data to an AI service is in its infancy.
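
One direction such valuation research takes can be illustrated with leave-one-out valuation, where a training point is worth the test accuracy the model loses without it. The sketch below assumes that definition and uses synthetic data; deployed schemes (such as Shapley-value approximations) are considerably more involved.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def loo_value(i):
    """Value of training point i = full-data accuracy minus accuracy without it."""
    full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    mask = [j for j in range(len(X_tr)) if j != i]
    ablated = LogisticRegression(max_iter=1000).fit(
        X_tr[mask], y_tr[mask]).score(X_te, y_te)
    return full - ablated

print(loo_value(0))  # positive if point 0 helped the model, negative if it hurt
```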

Third, and most important, people will have to develop a “class consciousness” as data workers. Most people say they want their personal information to be protected, but then trade it away for nearly nothing, something known as the “privacy paradox”. Yet things may be changing: more than 90% of Americans think being in control of who can get data on them is important, according to the Pew Research Centre, a think-tank….(More)”.

Balancing Act: Innovation vs. Privacy in the Age of Data Portability


Thursday, July 12, 2018 @ 2 MetroTech Center, Brooklyn, NY 11201

RSVP here.

The ability of people to move or copy data about themselves from one service to another — data portability — has been hailed as a way of increasing competition and driving innovation. In many areas, such as through the Open Banking initiative in the United Kingdom, the practice of data portability is well underway and spreading. The launch of GDPR in Europe has also elevated the issue among companies and individuals alike. But recent online security breaches and other cases of personal data being transferred surreptitiously from private companies (e.g., Cambridge Analytica’s appropriation of Facebook data) highlight how data portability can also undermine people’s privacy.

The GovLab at the NYU Tandon School of Engineering is pleased to present Jeni Tennison, CEO of the Open Data Institute, for its next Ideas Lunch, where she will discuss how data portability has been regulated in the UK and Europe, and what governments, businesses and people need to do to strike the balance between its risks and benefits.

Jeni Tennison is the CEO of the Open Data Institute. She gained her PhD from the University of Nottingham, then worked as an independent consultant specialising in open data publishing and consumption, before joining the ODI in 2012. Jeni was awarded an OBE for services to technology and open data in the 2014 New Year Honours.

Before joining the ODI, Jeni was the technical architect and lead developer for legislation.gov.uk. She worked on the early linked data work on data.gov.uk, including helping to engineer new standards for publishing statistics as linked data. She continues her work within the UK’s public sector as a member of the Open Standards Board.

Jeni also works on international web standards. She was appointed to serve on the W3C’s Technical Architecture Group from 2011 to 2015 and in 2014 she started to co-chair the W3C’s CSV on the Web Working Group. She also sits on the Advisory Boards for Open Contracting Partnership and the Data Transparency Lab.

Twitter handle: @JeniT

Personal Data v. Big Data: Challenges of Commodification of Personal Data


Maria Bottis and George Bouchagiar in the Open Journal of Philosophy: “Any firm today may, at little or no cost, build its own infrastructure to process personal data for commercial, economic, political, technological or any other purposes. Society has, therefore, turned into a privacy-unfriendly environment. The processing of personal data is essential for multiple economically and socially useful purposes, such as health care, education or terrorism prevention. But firms view personal data as a commodity, as a valuable asset, and invest heavily in processing it for private gain. This article studies the potential to subject personal data to trade secret rules, so as to ensure users’ control over their data without limiting the data’s free movement, and examines some positive scenarios of attributing commercial value to personal data….(More)”.

Data Protection and e-Privacy: From Spam and Cookies to Big Data, Machine Learning and Profiling


Chapter by Lilian Edwards in L Edwards ed, Law, Policy and the Internet (Hart, 2018): “In this chapter, I examine in detail how data subjects are tracked, profiled and targeted by their activities online and, increasingly, in the “offline” world as well. Tracking is part of both commercial and state surveillance, but in this chapter I concentrate on the former. The European law relating to spam, cookies, online behavioural advertising (OBA), machine learning (ML) and the Internet of Things (IoT) is examined in detail, using both the GDPR and the forthcoming draft ePrivacy Regulation. The chapter concludes by examining both code and law solutions which might find a way forward to protect user privacy and still enable innovation, by looking to paradigms not based around consent, and less likely to rely on a “transparency fallacy”. Particular attention is drawn to the new work around Personal Data Containers (PDCs) and distributed ML analytics….(More)”.
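
The intuition behind Personal Data Containers and distributed ML analytics can be sketched as federated averaging: raw data stays in each user’s container, and only aggregate model updates are shared. This is a minimal illustration under those assumptions, not the chapter’s actual design; all names are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a user's own device (linear model, squared loss)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, containers):
    """Server averages updates; it never sees any container's raw (X, y)."""
    updates = [local_update(weights, X, y) for X, y in containers]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
containers = []
for _ in range(5):  # five users, each holding their own data locally
    X = rng.normal(size=(20, 2))
    containers.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, containers)
print(w)  # approaches [2.0, -1.0] without ever pooling the raw data
```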

I want your (anonymized) social media data


Anthony Sanford at The Conversation: “Social media sites’ responses to the Facebook-Cambridge Analytica scandal and new European privacy regulations have given users much more control over who can access their data, and for what purposes. To me, as a social media user, these are positive developments: It’s scary to think what these platforms could do with the troves of data available about me. But as a researcher, increased restrictions on data sharing worry me.

I am among the many scholars who depend on data from social media to gain insights into people’s actions. In a rush to protect individuals’ privacy, I worry that an unintended casualty could be knowledge about human nature. My most recent work, for example, analyzes feelings people express on Twitter to explain why the stock market fluctuates so much over the course of a single day. There are applications well beyond finance. Other scholars have studied mass transit rider satisfaction, emergency alert systems’ function during natural disasters and how online interactions influence people’s desire to lead healthy lifestyles.

This poses a dilemma – not just for me personally, but for society as a whole. Most people don’t want social media platforms to share or sell their personal information, unless specifically authorized by the individual user. But as members of a collective society, it’s useful to understand the social forces at work influencing everyday life and long-term trends. Before the recent crises, Facebook and other companies had already been making it hard for legitimate researchers to use their data, including by making it more difficult and more expensive to download and access data for analysis. The renewed public pressure for privacy means it’s likely to get even tougher….

It’s true – and concerning – that some presumably unethical people have tried to use social media data for their own benefit. But the data are not the actual problem, and cutting researchers’ access to data is not the solution. Doing so would also deprive society of the benefits of social media analysis.

Fortunately, there is a way to resolve this dilemma. Anonymization of data can keep people’s individual privacy intact, while giving researchers access to collective data that can yield important insights.

There’s even a strong model for how to strike that balance efficiently: the U.S. Census Bureau. For decades, that government agency has collected extremely personal data from households all across the country: ages, employment status, income levels, Social Security numbers and political affiliations. The results it publishes are very rich, but also not traceable to any individual.

It often is technically possible to reverse anonymity protections on data, using multiple pieces of anonymized information to identify the person they all relate to. The Census Bureau takes steps to prevent this.

For instance, when members of the public access census data, the Census Bureau restricts information that is likely to identify specific individuals, such as reporting there is just one person in a community with a particularly high- or low-income level.
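
A toy Python sketch of that small-cell suppression idea; the threshold and records below are invented for illustration.

```python
from collections import Counter

MIN_CELL_SIZE = 5  # hypothetical disclosure threshold

records = [("Springfield", "high income"), ("Springfield", "high income"),
           ("Springfield", "low income"), ("Shelbyville", "high income")] * 3
records.append(("Shelbyville", "low income"))  # one such person -> identifiable

counts = Counter(records)
# Publish a count only when enough people share it to stay anonymous.
published = {cell: n if n >= MIN_CELL_SIZE else "suppressed"
             for cell, n in counts.items()}
print(published)
```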

For researchers the process is somewhat different, but provides significant protections both in law and in practice. Scholars have to pass the Census Bureau’s vetting process to make sure they are legitimate, and must undergo training about what they can and cannot do with the data. The penalties for violating the rules include not only being barred from using census data in the future, but also civil fines and even criminal prosecution.

Even then, what researchers get comes without a name or Social Security number. Instead, the Census Bureau uses what it calls “protected identification keys,” a random number that replaces data that would allow researchers to identify individuals.

Each person’s data is labeled with his or her own identification key, allowing researchers to link information of different types. For instance, a researcher wanting to track how long it takes people to complete a college degree could follow individuals’ education levels over time, thanks to the identification keys.
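
The “protected identification keys” mechanism can be sketched in a few lines of Python. The field names and data below are hypothetical; the point is that a random key replaces the direct identifier yet still links one person’s records across datasets.

```python
import secrets

def assign_keys(people):
    """Map each real identifier (e.g. an SSN) to a random, meaningless key."""
    return {ssn: secrets.token_hex(8) for ssn in people}

education_2010 = {"123-45-6789": "enrolled", "987-65-4321": "enrolled"}
education_2016 = {"123-45-6789": "degree completed", "987-65-4321": "dropped out"}

keys = assign_keys(education_2010)  # held by the agency, never released
released_2010 = {keys[ssn]: status for ssn, status in education_2010.items()}
released_2016 = {keys[ssn]: status for ssn, status in education_2016.items()}

# A researcher can follow the same (anonymous) person over time...
for k in released_2010:
    print(k, released_2010[k], "->", released_2016[k])
# ...but nothing in the released files reveals who that person is.
```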

Social media platforms could implement a similar anonymization process instead of increasing hurdles – and cost – to access their data…(More)”.

Free Speech is a Triangle


Essay by Jack Balkin: “The vision of free expression that characterized much of the twentieth century is inadequate to protect free expression today.

The twentieth century featured a dyadic or dualist model of speech regulation with two basic kinds of players: territorial governments on the one hand, and speakers on the other. The twenty-first century model is pluralist, with multiple players. It is easiest to think of it as a triangle. On one corner are nation states and the European Union. On the second corner are privately-owned Internet infrastructure companies, including social media companies, search engines, broadband providers, and electronic payment systems. On the third corner are many different kinds of speakers: legacy media, civil society organizations, hackers, and trolls.

Territorial governments continue to regulate speakers and legacy media through traditional or “old-school” speech regulation. But nation states and the European Union also now employ “new-school” speech regulation that is aimed at Internet infrastructure owners and designed to get these private companies to surveil, censor, and regulate speakers for them. Finally, infrastructure companies like Facebook also regulate and govern speakers through techniques of private governance and surveillance.

The practical ability to speak in the digital world emerges from the struggle for power between these various forces, with old-school, new-school and private regulation directed at speakers, and both nation states and civil society organizations pressuring infrastructure owners to regulate speech.

If the characteristic feature of free speech regulation in our time is a triangle that combines new school speech regulation with private governance, then the best way to protect free speech values today is to combat and compensate for that triangle’s evolving logic of public and private regulation. The first goal is to prevent or ameliorate as much as possible collateral censorship and new forms of digital prior restraint. The second goal is to protect people from new methods of digital surveillance and manipulation—methods that emerged from the rise of large multinational companies that depend on data collection, surveillance, analysis, control, and distribution of personal data.

This essay describes how nation states should and should not regulate the digital infrastructure consistent with the values of freedom of speech and press; it emphasizes that different models of regulation are appropriate for different parts of the digital infrastructure. Some parts of the digital infrastructure are best regulated along the lines of common carriers or places of public accommodation. But governments should not impose First Amendment-style or common carriage obligations on social media and search engines. Rather, governments should require these companies to provide due process toward their end-users. Governments should also treat these companies as information fiduciaries who have duties of good faith and non-manipulation toward their end-users. Governments can implement all of these reforms—properly designed—consistent with constitutional guarantees of free speech and free press….(More)”.

Optimal Scope for Free Flow of Non-Personal Data in Europe


Paper by Simon Forge for the European Parliament Think Tank: “Data is not static in a personal/non-personal classification: with modern analytic methods, certain non-personal data can help to generate personal data, so the distinction may become blurred. De-anonymisation techniques, aided by advances in artificial intelligence (AI) and the manipulation of large datasets, will therefore become a major issue. In some new applications, such as smart cities and connected cars, the enormous volumes of data gathered may be used for personal information as well as for non-personal functions, so such data may cross over from the technical and non-personal into the personal domain.

A debate is taking place on whether current EU restrictions on the confidentiality of personal private information should be relaxed so as to include personal information in free and open data flows. However, it is unlikely that a loosening of such rules will be positive for the growth of open data. Public distrust of open data flows may be exacerbated by fears of potential commercial misuse of such data, as well as of leakages, cyberattacks, and so on.

The proposed recommendations are to:

  1. promote the use of open data licences to build trust and openness;
  2. promote sharing of private enterprises’ data within vertical sectors and across sectors, through incentive programmes, to increase the volume of open data;
  3. support testing for contamination of open data mixed with personal data, to ensure open data is scrubbed clean and so reinforce public confidence;
  4. ensure anti-competitive behaviour does not compromise the open data initiative….(More)”.
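
The de-anonymisation risk the paper highlights can be illustrated with a toy linkage attack: two datasets that each look non-personal are joined on shared quasi-identifiers to re-identify an individual. All data below are invented.

```python
# "Anonymous" mobility data: no names, just a sensor trace per commuter.
trips = [{"home_area": "NW1", "work_area": "EC2", "car": "EV"},
         {"home_area": "SE5", "work_area": "W1", "car": "diesel"}]

# A public, seemingly harmless registry that does carry names.
registry = [{"name": "A. Jones", "home_area": "NW1", "car": "EV"},
            {"name": "B. Smith", "home_area": "SE5", "car": "diesel"}]

# Linkage attack: join the datasets on the quasi-identifiers (home_area, car).
for trip in trips:
    for person in registry:
        if (person["home_area"], person["car"]) == (trip["home_area"], trip["car"]):
            print(person["name"], "commutes to", trip["work_area"])
```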