Watchdog to launch inquiry into misuse of data in politics


…and Alice Gibbs in The Guardian: “The UK’s privacy watchdog is launching an inquiry into how voters’ personal data is being captured and exploited in political campaigns, cited as a key factor in both the Brexit and Trump victories last year.

The intervention by the Information Commissioner’s Office (ICO) follows revelations in last week’s Observer that a technology company part-owned by a US billionaire played a key role in the campaign to persuade Britons to vote to leave the European Union.

It comes as privacy campaigners, lawyers, politicians and technology experts express fears that electoral laws are not keeping up with the pace of technological change.

“We are conducting a wide assessment of the data-protection risks arising from the use of data analytics, including for political purposes, and will be contacting a range of organisations,” an ICO spokeswoman confirmed. “We intend to publicise our findings later this year.”

The ICO spokeswoman confirmed that it had approached Cambridge Analytica over its apparent use of data following the story in the Observer. “We have concerns about Cambridge Analytica’s reported use of personal data and we are in contact with the organisation,” she said….

In the US, companies are free to use third-party data without seeking consent. But Gavin Millar QC, of Matrix Chambers, said this was not the case in Europe. “The position in law is exactly the same as when people would go canvassing from door to door,” Millar said. “They have to say who they are, and if you don’t want to talk to them you can shut the door in their face. That’s the same principle behind the Data Protection Act. It’s why if telephone canvassers ring you, they have to say that whole long speech. You have to identify yourself explicitly.”…

Dr Simon Moores, visiting lecturer in the applied sciences and computing department at Canterbury Christ Church University and a technology ambassador under the Blair government, said the ICO’s decision to shine a light on the use of big data in politics was timely.

“A rapid convergence in the data mining, algorithmic and granular analytics capabilities of companies like Cambridge Analytica and Facebook is creating powerful, unregulated and opaque ‘intelligence platforms’. In turn, these can have enormous influence to affect what we learn, how we feel, and how we vote. The algorithms they may produce are frequently hidden from scrutiny and we see only the results of any insights they might choose to publish.” …(More)”

AI, machine learning and personal data


Jo Pedder at the Information Commissioner’s Office Blog: “Today sees the publication of the ICO’s updated paper on big data and data protection.

But why now? What’s changed in the two and a half years since we first visited this topic? Well, quite a lot actually:

  • big data is becoming the norm for many organisations, using it to profile people and inform their decision-making processes, whether that’s to determine your car insurance premium or to accept/reject your job application;
  • artificial intelligence (AI) is stepping out of the world of science-fiction and into real life, providing the ‘thinking’ power behind virtual personal assistants and smart cars; and
  • machine learning algorithms are discovering patterns in data that traditional data analysis couldn’t hope to find, helping to detect fraud and diagnose diseases.
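
To make that last bullet concrete, here is a minimal, generic sketch of the kind of pattern-finding involved: an unsupervised model flags transactions that sit far outside the learned shape of the data. It is purely illustrative and not from the ICO paper; the features, figures and contamination rate are all invented.

```python
# A toy anomaly-detection sketch (invented data, not from the ICO paper):
# an IsolationForest learns the shape of "normal" transactions and flags
# the ones that fall far outside it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated features per transaction: [amount in GBP, hours since last payment]
normal = rng.normal(loc=[50.0, 24.0], scale=[20.0, 6.0], size=(500, 2))
odd = np.array([[5000.0, 0.1], [4200.0, 0.2]])  # sudden bursts of large payments
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomalous, 1 = looks normal
print(transactions[flags == -1])     # the injected outliers should be among these
```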

The complexity and opacity of these types of processing operations mean that it’s often hard to know what’s going on behind the scenes. This can be problematic when personal data is involved, especially when decisions are made that have significant effects on people’s lives. The combination of these factors has led some to call for new regulation of big data, AI and machine learning, to increase transparency and ensure accountability.

In our view though, whilst the means by which personal data is processed are changing, the underlying issues remain the same. Are people being treated fairly? Are decisions accurate and free from bias? Is there a legal basis for the processing? These are issues that the ICO has been addressing for many years, through oversight of existing European data protection legislation….(More)”

Will Democracy Survive Big Data and Artificial Intelligence?


Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter in Scientific American: “….In summary, it can be said that we are now at a crossroads (see Fig. 2). Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at a historic moment where we have to decide on the right path—a path that allows us all to benefit from the digital revolution. Therefore, we urge adherence to the following fundamental principles:

1. to increasingly decentralize the function of information systems;

2. to support informational self-determination and participation;

3. to improve transparency in order to achieve greater trust;

4. to reduce the distortion and pollution of information;

5. to enable user-controlled information filters;

6. to support social and economic diversity;

7. to improve interoperability and collaborative opportunities;

8. to create digital assistants and coordination tools;

9. to support collective intelligence, and

10. to promote responsible behavior of citizens in the digital world through digital literacy and enlightenment.

Following this digital agenda, we would all benefit from the fruits of the digital revolution: the economy, government and citizens alike. What are we waiting for?

A strategy for the digital age

Big data and artificial intelligence are undoubtedly important innovations. They have an enormous potential to catalyze economic value and social progress, from personalized healthcare to sustainable cities. It is totally unacceptable, however, to use these technologies to incapacitate the citizen. Big nudging and citizen scores abuse centrally collected personal data for behavioral control in ways that are totalitarian in nature. This is not only incompatible with human rights and democratic principles, but also inappropriate to manage modern, innovative societies. In order to solve the genuine problems of the world, far better approaches in the fields of information and risk management are required. The research area of responsible innovation and the initiative “Data for Humanity” (see “Big Data for the benefit of society and humanity”) provide guidance as to how big data and artificial intelligence should be used for the benefit of society….(More)”

Public services and the new age of data


…at Civil Service Quarterly: “Government holds massive amounts of data. The potential in that data for transforming the way government makes policy and delivers public services is equally huge. So, getting data right is the next phase of public service reform. And the UK Government has a strong foundation on which to build this future.

Public services have a long and proud relationship with data. In 1858, more than 50 years before the creation of the Cabinet Office, Florence Nightingale produced her famous ‘Diagram of the causes of mortality in the army in the east’ during the Crimean War. The modern era of statistics in government was born at the height of the Second World War with the creation of the Central Statistical Office in 1941.

How data can help

However, the huge advances we’ve seen in technology mean there are significant new opportunities to use data to improve public services. It can help us:

  • understand what works and what doesn’t, through data science techniques, so we can make better decisions: improving the way government works and saving money
  • change the way that citizens interact with government through new, better digital services built on reliable data;
  • boost the UK economy by opening and sharing better quality data, in a secure and sensitive way, to stimulate new data-based businesses
  • demonstrate a trustworthy approach to data, so citizens know more about the information held about them and how and why it’s being used

In 2011 the Government embarked upon a radical improvement in its digital capability with the creation of the Government Digital Service, and over the last few years we have seen a similar revolution begin on data. Although there is much more to do, in areas like open data, the UK is already seen as world-leading.

…But if government is going to seize this opportunity, it needs to make some changes in:

  • infrastructure – data is too often hard to find, hard to access, and hard to work with; so government is introducing developer-friendly open registers of trusted core data, such as countries and local authorities, and better tools to find and access personal data where appropriate through APIs for transformative digital services (see the sketch after this list);
  • approach – we need the right policies in place to enable us to get the most out of data for citizens and ensure we’re acting appropriately; and the introduction of new legislation on data access will ensure government is doing the right thing – for example, through the data science code of ethics;
  • data science skills – those working in government need the skills to be confident with data; that means recruiting more data scientists, developing data science skills across government, and using those skills on transformative projects….(More)”.
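
As promised above, here is a rough sketch of what consuming such an open register through an API could look like. The endpoint URL and the JSON response shape are assumptions made for illustration, loosely modelled on the GOV.UK registers of the time; they are not a documented contract.

```python
# A minimal sketch of consuming an open register over HTTP.
# The URL and response shape are illustrative assumptions, loosely
# modelled on the GOV.UK country register; they are not a documented API.
import requests

REGISTER_URL = "https://country.register.gov.uk/records.json"  # illustrative

def fetch_register(url: str = REGISTER_URL) -> dict:
    """Fetch every record in the register, keyed by record ID."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = fetch_register()
    for record_id, record in sorted(records.items())[:5]:
        print(record_id, record)
```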

Montreal monitoring city traffic via drivers’ Bluetooth


Springwise: “Rather than rely on once-yearly spot checks of traffic throughout the city, Montreal, Canada, decided to build a more comprehensive picture of what was and wasn’t working around the city. Working with traffic management company Orange Traffic, the city installed more than 100 sensors along the busiest vehicular routes. The sensors pick up mobile phone Bluetooth signals, making the system inexpensive to install and use, as no additional hardware or devices are needed.

Once the sensors pick up a Bluetooth signal, they track it through several measurement points to get an idea of how fast or slow traffic is moving. The data is sent to the city’s Urban Mobility Management Center. City officials are keen to emphasize that no personal data is recorded as Bluetooth signals cannot be linked to individuals. Traffic management and urban planning teams will be able to use the data to redesign problematic intersections and improve the overall mobility of the city’s streets and transport facilities.
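
To make the mechanics concrete, the sketch below shows one way travel times can be derived from Bluetooth detections at two sensors. It is an assumption-laden illustration, not Orange Traffic’s actual system; salted, rotating hashes are one common way to achieve the kind of unlinkability the city describes.

```python
# A minimal sketch (not Orange Traffic's actual system) of estimating
# travel times from Bluetooth detections at two measurement points.
# Device addresses are replaced by salted one-way hashes, with the salt
# rotated periodically, so no raw identifier is ever stored.
import hashlib

def anonymise(mac: str, salt: str) -> str:
    """Replace a Bluetooth MAC address with a salted one-way hash."""
    return hashlib.sha256((salt + mac).encode()).hexdigest()

def travel_times(upstream: dict, downstream: dict) -> list:
    """Match hashed IDs seen at both sensors; return travel times in seconds."""
    return [downstream[h] - upstream[h] for h in upstream.keys() & downstream.keys()]

# Invented detections: timestamps in seconds, keyed by hashed device ID.
salt = "2017-03-04T10:00"  # rotating the salt prevents long-term tracking
seen_at_a = {anonymise(m, salt): t for m, t in [("AA:BB:CC:01", 0.0), ("AA:BB:CC:02", 5.0)]}
seen_at_b = {anonymise(m, salt): t for m, t in [("AA:BB:CC:01", 90.0), ("AA:BB:CC:03", 7.0)]}

SEGMENT_LENGTH_M = 1000  # assumed distance between the two sensors
for dt in travel_times(seen_at_a, seen_at_b):
    print(f"travel time {dt:.0f}s -> average speed {SEGMENT_LENGTH_M / dt * 3.6:.1f} km/h")
```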

Smart cities are those making safety and efficiency a priority, from providing digital driver licenses in India to crowdsourcing a map of cars in bike lanes in New York City….(More)”

New UN resolution on the right to privacy in the digital age: crucial and timely


Deborah Brown at the Internet Policy Review: “The rapid pace of technological development enables individuals all over the world to use new information and communications technologies (ICTs) to improve their lives. At the same time, technology is enhancing the capacity of governments, companies and individuals to undertake surveillance, interception and data collection, which may violate or abuse human rights, in particular the right to privacy. In this context, the UN General Assembly Third Committee’s adoption on 21 November of a new resolution on the right to privacy in the digital age is timely and crucial for protecting the right to privacy in light of new challenges.

As with previous UN resolutions on this topic, the resolution adopted on 21 November 2016 recognises the importance of respecting international commitments in relation to the right to privacy. It underscores that any legitimate concerns states may have with regard to their security can and should be addressed in a manner consistent with obligations under international human rights law.

Recognising that more and more personal data is being collected, processed, and shared, this year’s resolution expresses concern about the sale or multiple re-sales of personal data, which often happens without the individual’s free, explicit and informed consent. It calls for the strengthening of prevention of and protection against such violations, and calls on states to develop preventative measures, sanctions, and remedies.

This year, the resolution more explicitly acknowledges the role of the private sector. It calls on states to put in place (or maintain) effective sanctions and remedies to prevent the private sector from committing violations and abuses of the right to privacy. This is in line with states’ obligations under the UN Guiding Principles on Business and Human Rights, which require states to protect against abuses by businesses within their territories or jurisdictions. The resolution specifically calls on states to refrain from requiring companies to take steps that interfere with the right to privacy in an arbitrary or unlawful way. With respect to companies, it recalls the responsibility of the private sector to respect human rights, and specifically calls on them to inform users about company policies that may impact their right to privacy….(More)”

Reframing Data Transparency


“Recently, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP, a privacy and information policy think tank based in Brussels, London and Washington, D.C., and Telefónica, one of the largest telecommunications companies in the world, issued a joint white paper on Reframing Data Transparency (the “white paper”). The white paper was the outcome of a June 2016 roundtable held by the two organizations in London, in which senior business leaders, Data Privacy Officers, lawyers and academics discussed the importance of user-centric transparency to the data-driven economy….The issues explored during the roundtable and in the white paper include the following:

  • The transparency deficit in the digital age. There is a growing gap between traditional, legal privacy notices and user-centric transparency that is capable of delivering understandable and actionable information concerning an organization’s data use policies and practices, including why it processes data, what the benefits are to individuals and society, how it protects the data and how users can manage and control the use of their data.
  • The impact of the transparency deficit. The transparency deficit undermines customer trust and customers’ ability to participate more effectively in the digital economy.
  • Challenges of delivering user-centric transparency. In a connected world where there may be no direct relationship between companies and their end users, both transparency and consent as a basis for processing are particularly challenging.
  • Transparency as a multistakeholder challenge. Transparency is not solely a legal issue, but a multistakeholder challenge, which requires engagement of regulators, companies, individuals, behavioral economists, social scientists, psychologists and user experience specialists.
  • The role of data protection authorities (“DPAs”). DPAs play a key role in promoting and incentivizing effective data transparency approaches and tools.
  • The role of companies. Data transparency is a critical business issue because transparency drives digital trust as well as business opportunities. Organizations must innovate on how to deliver user-centric transparency. Data-driven companies must research and develop new approaches to transparency that explain the value exchange between customers and companies and the companies’ data practices, and create tools that enable their customers to exercise effective engagement and control.
  • The importance of empowering individuals. It is crucial to support and enhance individuals’ digital literacy, which includes an understanding of the uses of personal data and the benefits of data processing, as well as knowledge of relevant privacy rights and the data management tools that are available to them. Government bodies, regulators and industry should be involved in educating the public regarding digital literacy. Such education should take place in schools and universities, and through consumer education campaigns. Transparency is the foundation and sine qua non of individual empowerment.
  • The role of behavioral economists, social scientists, psychologists and user experience specialists. Experts from these disciplines will be crucial in developing user-centric transparency and controls….(More)”.

How Big Data Analytics is Changing Legal Ethics


Renee Knake at Bloomberg Law: “Big data analytics are changing how lawyers find clients, conduct legal research and discovery, draft contracts and court papers, manage billing and performance, predict the outcome of a matter, select juries, and more. Ninety percent of corporate legal departments, law firms, and government lawyers note that data analytics are applied in their organizations, albeit in limited ways, according to a 2015 survey. The Legal Services Corporation, the largest funder of civil legal aid for low-income individuals in the United States, recommended in 2012 that all states collect and assess data on case progress/outcomes to improve the delivery of legal services. Lawyers across all sectors of the market increasingly recognize how big data tools can enhance their work.

A growing literature advocates for businesses and governmental bodies to adopt data ethics policies, and many have done so. It is not uncommon to find data-use policies prominently displayed on company or government websites, or required as part of a click-through consent before gaining access to a mobile app or webpage. Data ethics guidelines can help avoid controversies, especially when analytics are used in potentially manipulative or exploitive ways. Consider, for example, Target’s data analytics that uncovered a teen’s pregnancy before her father did, or Orbitz’s data analytics that offered pricier hotels to Mac users. These are just two of numerous examples in recent years where companies faced criticism for how they used data analytics.

While some law firms and legal services organizations follow data-use policies or codes of conduct, many do not. Perhaps this is because the legal profession was not transformed as early or rapidly as other industries, or because until now, big data in legal was largely limited to e-discovery, where the data use is confined to the litigation and is subject to judicial oversight. Another reason may be that lawyers believe their rules of professional conduct provide sufficient guidance and protection. Unlike other industries, lawyers are governed by a special code of ethical obligations to clients, the justice system, and the public. In most states, this code is based in part upon the American Bar Association (ABA) Model Rules of Professional Conduct, though rules often vary from jurisdiction to jurisdiction. Several of the Model Rules are relevant to big data use. That said, the Model Rules are insufficient for addressing a number of fundamental ethical concerns.

At the moment, legal ethics for big data analytics is at best an incomplete mix of professional conduct rules and informal policies adopted by some, but not all law practices. Given the increasing prevalence of data analytics in legal services, lawyers and law students should be familiar not only with the relevant professional conduct rules, but also the ethical questions left unanswered. Listed below is a brief summary of both, followed by a proposed legal ethics agenda for data analytics. …

Questions Unanswered by Lawyer Ethics Rules 

Access/Ownership. Who owns the original data — the individual source or the holder of the pooled information? Who owns the insights drawn from its analysis? Who should receive access to the data compilation and the results?

Anonymity/Identity. Should all personally identifiable or sensitive information be removed from the data? What protections are necessary to respect individual autonomy? How should individuals be able to control and shape their electronic identity?

Consent. Should individuals affirmatively consent to use of their personal data? Or is it sufficient to provide notice, perhaps with an opt-out provision?

Privacy/Security. Should privacy be protected beyond the professional obligation of client confidentiality? How should data be secured? The ABA called upon private and public sector lawyers to implement cyber-security policies, including data use, in a 2012 resolution and produced a cyber-security handbook in 2013.

Process. How involved should lawyers be in the process of data collection and analysis? In the context of e-discovery, for example, a lawyer is expected to understand how documents are collected, produced, and preserved, or to work with a specialist. Should a similar level of knowledge be required for all forms of data analytics use?

Purpose. Why was the data first collected from individuals? What is the purpose for the current use? Is there a significant divergence between the original and secondary purposes? If so, is it necessary for the individuals to consent to the secondary purpose? How will unintended consequences be addressed?

Source. What is the source of the data? Did the lawyer collect it directly from clients, or is the lawyer relying upon a third-party source? Client-based data is, of course, subject to the lawyer’s professional conduct rules. Data from any source should be trustworthy, reasonable, timely, complete, and verifiable….(More)”

Stop the privatization of health data


John T. Wilbanks & Eric J. Topol in Nature: “Over the past year, technology titans including Google, Apple, Microsoft and IBM have been hiring leaders in biomedical research to bolster their efforts to change medicine….

In many ways, the migration of clinical scientists into technology corporations that are focused on gathering, analysing and storing information is long overdue. Because of the costs and difficulties of obtaining data about health and disease, scientists conducting clinical or population studies have rarely been able to track sufficient numbers of patients closely enough to make anything other than coarse predictions. Given such limitations, who wouldn’t want access to Internet-scale, multidimensional health data; teams of engineers who can build sensors for data collection and algorithms for analysis; and the resources to conduct projects at scales and speeds unthinkable in the public sector?

Yet there is a major downside to monoliths such as Google or smaller companies such as consumer-genetics firm 23andMe owning health data — or indeed, controlling the tools and methods used to match people’s digital health profiles to specific services.

Digital profiling in other contexts is already creating what has been termed a ‘black box’ society. Online adverts are tailored to people’s age, location, spending and browsing habits. Certain retail services have preferentially been made available only to particular groups of people. And law enforcers are being given tools to help them make sentencing decisions that cannot be openly assessed (see go.nature.com/29umpu1). This is all thanks to the deliberately hidden collection and manipulation of personal data.

If undisclosed algorithmic decision-making starts to incorporate health data, the ability of black-box calculations to accentuate pre-existing biases in society could greatly increase. Crucially, if the citizens being profiled are not given their data and allowed to share the information with others, they will not know about incorrect or discriminatory health actions — much less be able to challenge them. And most researchers won’t have access to such health data either, or to the insights gleaned from them….(More)”

Privacy concerns in smart cities


Liesbet van Zoonen in Government Information Quarterly: “In this paper a framework is constructed to hypothesize if and how smart city technologies and urban big data produce privacy concerns among the people in these cities (as inhabitants, workers, visitors, and otherwise). The framework is built on the basis of two recurring dimensions in research about people’s concerns about privacy: one dimension represents that people perceive particular data as more personal and sensitive than others; the other represents that people’s privacy concerns differ according to the purpose for which data is collected, with the contrast between service and surveillance purposes most paramount. These two dimensions produce a 2 × 2 framework that hypothesizes which technologies and data-applications in smart cities are likely to raise people’s privacy concerns, ranging from raising hardly any concern (impersonal data, service purpose) to raising controversy (personal data, surveillance purpose). Specific examples from the city of Rotterdam are used to further explore and illustrate the academic and practical usefulness of the framework. It is argued that the general hypothesis of the framework offers clear directions for further empirical research and theory building about privacy concerns in smart cities, and that it provides a sensitizing instrument for local governments to identify the absence, presence, or emergence of privacy concerns among their citizens….(More)”
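
For readers who prefer a data structure to prose, the 2 × 2 hypothesis can be written down as a simple lookup. Only the two corner labels come from the abstract quoted above; the two mixed quadrants are left as placeholders because the excerpt does not name them.

```python
# A toy encoding of van Zoonen's 2x2 framework. The two corner labels
# are from the paper's abstract; the "intermediate" entries are
# placeholders, since the excerpt does not label those quadrants.
from enum import Enum

class Data(Enum):
    IMPERSONAL = "impersonal"
    PERSONAL = "personal"

class Purpose(Enum):
    SERVICE = "service"
    SURVEILLANCE = "surveillance"

CONCERN = {
    (Data.IMPERSONAL, Purpose.SERVICE): "hardly any concern",   # per the abstract
    (Data.PERSONAL, Purpose.SURVEILLANCE): "controversy",       # per the abstract
    (Data.PERSONAL, Purpose.SERVICE): "intermediate (unlabelled in excerpt)",
    (Data.IMPERSONAL, Purpose.SURVEILLANCE): "intermediate (unlabelled in excerpt)",
}

print(CONCERN[(Data.PERSONAL, Purpose.SURVEILLANCE)])  # -> controversy
```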