The Crowd & the Cloud


The Crowd & the Cloud (TV series): “Are you interested in birds, fish, the oceans or streams in your community? Are you concerned about fracking, air quality, extreme weather, asthma, Alzheimer’s disease, Zika or other epidemics? Now you can do more than read about these issues. You can be part of the solution.

Smartphones, computers and mobile technology are enabling regular citizens to become part of a 21st century way of doing science. By observing their environments, monitoring neighborhoods, collecting information about the world and the things they care about, so-called “citizen scientists” are helping professional scientists to advance knowledge while speeding up new discoveries and innovations.

The results are improving health and welfare, assisting in wildlife conservation, and giving communities the power to create needed change and help themselves.

Citizen science has amazing promise, but also raises questions about data quality and privacy. Its potential and challenges are explored in THE CROWD & THE CLOUD, a 4-part public television series premiering in April 2017. Hosted by former NASA Chief Scientist Waleed Abdalati, each episode takes viewers on a global tour of the projects and people on the front lines of this disruptive transformation in how science is done, and shows how anyone, anywhere can participate….(More)”

 

Migration tracking is a mess


Huub Dijstelbloem in Nature: “As debates over migration, refugees and freedom of movement intensify, technologies are increasingly monitoring the movements of people. Biometric passports and databases containing iris scans or fingerprints are being used to check a person’s right to travel through or stay within a territory. India, for example, is establishing biometric identification for its 1.3 billion citizens.

But technologies are spreading beyond borders. Security policies and humanitarian considerations have broadened the landscape. Drones and satellite images inform policies and direct aid to refugees. For instance, the United Nations Institute for Training and Research (UNITAR) maps refugee camps in Jordan and elsewhere with its Operational Satellite Applications Programme (UNOSAT; see www.unitar.org/unosat/map/1928).

Three areas are in need of research, in my view: the difficulties of joining up disparate monitoring systems; privacy issues and concerns over the inviolability of the human body; and ‘counter-surveillance’ deployed by non-state actors to highlight emergencies or contest claims that governments make.

Ideally, state monitoring of human mobility would be bound by ethical principles, solid legislation, periodic evaluations and the checks and balances of experts and political and public debates. In reality, it is ad hoc. Responses are arbitrary, fuelled by the crisis management of governments that have failed to anticipate global and regional migration patterns. Too often, this results in what the late sociologist Ulrich Beck called organized irresponsibility: situations of inadequacy in which it is hard to blame a single actor.

Non-governmental organizations, activists and migrant groups are using technologies to register incidents and to blame and shame states. For example, the Forensic Architecture research agency at Goldsmiths, University of London, has used satellite imagery and other evidence to reconstruct the journey of a boat that left Tripoli on 27 March 2011 with 72 passengers. A fortnight later, it returned to the Libyan coast with just 9 survivors. Although the boat had been spotted by several aircraft and vessels, no rescue operation had been mounted (go.nature.com/2mbwvxi). Whether the states involved can be held accountable is still being considered.

In the end, technologies to monitor mobility are political tools. Their aims, design, use, costs and consequences should be developed and evaluated accordingly….(More)”.

Bit By Bit: Social Research in the Digital Age


Open Review of Book by Matthew J. Salganik: “In the summer of 2009, mobile phones were ringing all across Rwanda. In addition to the millions of calls between family, friends, and business associates, about 1,000 Rwandans received a call from Joshua Blumenstock and his colleagues. The researchers were studying wealth and poverty by conducting a survey of people who had been randomly sampled from a database of 1.5 million customers from Rwanda’s largest mobile phone provider. Blumenstock and colleagues asked the participants if they wanted to participate in a survey, explained the nature of the research to them, and then asked a series of questions about their demographic, social, and economic characteristics.

Everything I have said up until now makes this sound like a traditional social science survey. But, what comes next is not traditional, at least not yet. They used the survey data to train a machine learning model to predict someone’s wealth from their call data, and then they used this model to estimate the wealth of all 1.5 million customers. Next, they estimated the place of residence of all 1.5 million customers by using the geographic information embedded in the call logs. Putting these two estimates together—the estimated wealth and the estimated place of residence—Blumenstock and colleagues were able to produce high-resolution estimates of the geographic distribution of wealth across Rwanda. In particular, they could produce an estimated wealth for each of Rwanda’s 2,148 cells, the smallest administrative unit in the country.

It was impossible to validate these estimates because no one had ever produced estimates for such small geographic areas in Rwanda. But, when Blumenstock and colleagues aggregated their estimates to Rwanda’s 30 districts, they found that their estimates were similar to estimates from the Demographic and Health Survey, the gold standard of surveys in developing countries. Although these two approaches produced similar estimates in this case, the approach of Blumenstock and colleagues was about 10 times faster and 50 times cheaper than the traditional Demographic and Health Surveys. These dramatically faster and lower cost estimates create new possibilities for researchers, governments, and companies (Blumenstock, Cadamuro, and On 2015).
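To make the mechanics concrete, the pipeline can be pictured as a small supervised-learning script: train a model on the surveyed subscribers, predict a wealth index for everyone else, then aggregate the predictions geographically. The sketch below is illustrative only; the file names, feature columns, and model choice are assumptions, not the code Blumenstock and colleagues actually used.

```python
# Illustrative sketch of a Blumenstock-style pipeline (hypothetical files and
# column names; the actual study used richer call-detail-record features).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# ~1,000 surveyed subscribers: call-log features plus a surveyed wealth index
survey = pd.read_csv("survey_respondents.csv")
# All subscribers: the same call-log features plus an inferred home cell/district
subscribers = pd.read_csv("all_subscribers.csv")

features = ["calls_per_day", "n_contacts", "avg_call_length", "share_international"]

# 1) Train a model to predict wealth from calling behaviour
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(survey[features], survey["wealth_index"])

# 2) Predict a wealth index for every subscriber
subscribers["predicted_wealth"] = model.predict(subscribers[features])

# 3) Aggregate to small administrative units for a high-resolution wealth map
wealth_by_cell = subscribers.groupby("home_cell")["predicted_wealth"].mean()

# 4) Validate at a coarser level, e.g. correlate district averages with DHS estimates
dhs = pd.read_csv("dhs_district_wealth.csv")  # columns: district, dhs_wealth_index
district_avg = (subscribers.groupby("district")["predicted_wealth"]
                .mean().rename("model_wealth").reset_index())
comparison = dhs.merge(district_avg, on="district")
print(comparison[["dhs_wealth_index", "model_wealth"]].corr())
```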

In addition to developing a new methodology, this study is kind of like a Rorschach inkblot test; what people see depends on their background. Many social scientists see a new measurement tool that can be used to test theories about economic development. Many data scientists see a cool new machine learning problem. Many business people see a powerful approach for unlocking value in the digital trace data that they have already collected. Many privacy advocates see a scary reminder that we live in a time of mass surveillance. Many policy makers see a way that new technology can help create a better world. In fact, this study is all of those things, and that is why it is a window into the future of social research….(More)”

Watchdog to launch inquiry into misuse of data in politics


…and Alice Gibbs in The Guardian: “The UK’s privacy watchdog is launching an inquiry into how voters’ personal data is being captured and exploited in political campaigns, cited as a key factor in both the Brexit and Trump victories last year.

The intervention by the Information Commissioner’s Office (ICO) follows revelations in last week’s Observer that a technology company part-owned by a US billionaire played a key role in the campaign to persuade Britons to vote to leave the European Union.

It comes as privacy campaigners, lawyers, politicians and technology experts express fears that electoral laws are not keeping up with the pace of technological change.

“We are conducting a wide assessment of the data-protection risks arising from the use of data analytics, including for political purposes, and will be contacting a range of organisations,” an ICO spokeswoman confirmed. “We intend to publicise our findings later this year.”

The ICO spokeswoman confirmed that it had approached Cambridge Analytica over its apparent use of data following the story in the Observer. “We have concerns about Cambridge Analytica’s reported use of personal data and we are in contact with the organisation,” she said….

In the US, companies are free to use third-party data without seeking consent. But Gavin Millar QC, of Matrix Chambers, said this was not the case in Europe. “The position in law is exactly the same as when people would go canvassing from door to door,” Millar said. “They have to say who they are, and if you don’t want to talk to them you can shut the door in their face. That’s the same principle behind the data protection act. It’s why if telephone canvassers ring you, they have to say that whole long speech. You have to identify yourself explicitly.”…

Dr Simon Moores, visiting lecturer in the applied sciences and computing department at Canterbury Christ Church University and a technology ambassador under the Blair government, said the ICO’s decision to shine a light on the use of big data in politics was timely.

“A rapid convergence in the data mining, algorithmic and granular analytics capabilities of companies like Cambridge Analytica and Facebook is creating powerful, unregulated and opaque ‘intelligence platforms’. In turn, these can have enormous influence to affect what we learn, how we feel, and how we vote. The algorithms they may produce are frequently hidden from scrutiny and we see only the results of any insights they might choose to publish.” …(More)”

Open Data Privacy Playbook


A data privacy playbook by Ben Green, Gabe Cunningham, Ariel Ekblaw, Paul Kominers, Andrew Linzer, and Susan Crawford: “Cities today collect and store a wide range of data that may contain sensitive or identifiable information about residents. As cities embrace open data initiatives, more of this information is available to the public. While releasing data has many important benefits, sharing data comes with inherent risks to individual privacy: released data can reveal information about individuals that would otherwise not be public knowledge. In recent years, open data such as taxi trips, voter registration files, and police records have revealed information that many believe should not be released.

Effective data governance is a prerequisite for successful open data programs. The goal of this document is to codify responsible privacy-protective approaches and processes that could be adopted by cities and other government organizations that are publicly releasing data. Our report is organized around four recommendations:

  • Conduct risk-benefit analyses to inform the design and implementation of open data programs.
  • Consider privacy at each stage of the data lifecycle: collect, maintain, release, delete.
  • Develop operational structures and processes that codify privacy management widely throughout the City.
  • Emphasize public engagement and public priorities as essential aspects of data management programs.

Each chapter of this report is dedicated to one of these four recommendations, and provides fundamental context along with specific suggestions to carry them out. In particular, we provide case studies of best practices from numerous cities and a set of forms and tactics for cities to implement our recommendations. The Appendix synthesizes key elements of the report into an Open Data Privacy Toolkit that cities can use to manage privacy when releasing data….(More)”
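One concrete check a city might run at the “release” stage of the data lifecycle is a simple k-anonymity screen on the columns that could identify residents. The sketch below is a hypothetical illustration (the dataset, threshold, and quasi-identifier columns are assumptions); the playbook itself describes processes and governance rather than specific code.

```python
# Minimal k-anonymity screen for a dataset before open release
# (hypothetical file and quasi-identifier columns; illustrative only).
import pandas as pd

K = 5  # require at least 5 records per quasi-identifier combination

records = pd.read_csv("service_requests.csv")
quasi_identifiers = ["zip_code", "age_bracket", "request_type"]

group_sizes = records.groupby(quasi_identifiers).size()
risky_groups = group_sizes[group_sizes < K]

if risky_groups.empty:
    print(f"Dataset satisfies {K}-anonymity on {quasi_identifiers}; candidate for release.")
else:
    print(f"{len(risky_groups)} quasi-identifier combinations have fewer than {K} records:")
    print(risky_groups.head())
```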

Denmark is appointing an ambassador to big tech


Matthew Hughes in The Next Web: “Question: Is Facebook a country? It sounds silly, but when you think about it, it does have many attributes in common with nation states. For starters, it’s got a population that’s bigger than that of India, and its 2016 revenue wasn’t too far from Estonia’s GDP. It also has a ‘national ethos’. If America’s philosophy is capitalism, Cuba’s is communism, and Sweden’s is social democracy, Facebook’s is ‘togetherness’, as corny as that may sound.

 Given all of the above, is it really any surprise that Denmark is considering appointing a ‘big tech ambassador’ whose job is to establish and manage the country’s relationship with the world’s most powerful tech companies?

Denmark’s “digital ambassador” is a first. No country has ever created such a role. Their job will be to liaise with the likes of Google, Twitter, and Facebook.

Given the fraught relationship many European countries have with American big-tech – especially on issues of taxation, privacy, and national security – Denmark’s decision to extend an olive branch seems sensible.

Speaking with the Washington Post, Danish Foreign Minister Anders Samuelsen said, “Just as we engage in a diplomatic dialogue with countries, we also need to establish and prioritize comprehensive relations with tech actors, such as Google, Facebook, Apple and so on. The idea is, we see a lot of companies and new technologies that will in many ways involve and be part of everyday life of citizens in Denmark.”….(More)”

Data Disrupts Corruption


Carlos Santiso & Ben Roseth at Stanford Social Innovation Review: “…The Panama Papers scandal demonstrates the power of data analytics to uncover corruption in a world flooded with terabytes needing only the computing capacity to make sense of it all. The Rousseff impeachment illustrates how open data can be used to bring leaders to account. Together, these stories show how data, both “big” and “open,” is driving the fight against corruption with fast-paced, evidence-driven, crowd-sourced efforts. Open data can put vast quantities of information into the hands of countless watchdogs and whistleblowers. Big data can turn that information into insight, making corruption easier to identify, trace, and predict. To realize the movement’s full potential, technologists, activists, officials, and citizens must redouble their efforts to integrate data analytics into policy making and government institutions….

Making big data open cannot, in itself, drive anticorruption efforts. “Without analytics,” a 2014 White House report on big data and individual privacy underscored, “big datasets could be stored, and they could be retrieved, wholly or selectively. But what comes out would be exactly what went in.”

In this context, it is useful to distinguish the four main stages of data analytics to illustrate its potential in the global fight against corruption: Descriptive analytics uses data to describe what has happened in analyzing complex policy issues; diagnostic analytics goes a step further by mining and triangulating data to explain why a specific policy problem has happened, identify its root causes, and decipher underlying structural trends; predictive analytics uses data and algorithms to predict what is most likely to occur, by utilizing machine learning; and prescriptive analytics proposes what should be done to cause or prevent something from happening….
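To make the distinction between the stages concrete, a predictive-analytics step might look like the toy sketch below: a classifier trained on audited procurement records is used to score unaudited contracts for risk of irregularities. The data, columns, and model are hypothetical; the article describes the stages conceptually rather than any particular implementation.

```python
# Toy "predictive analytics" example: flag procurement contracts at risk of irregularities.
# Hypothetical data and column names; illustrative of the predictive stage only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

contracts = pd.read_csv("procurement_contracts.csv")
features = ["single_bidder", "contract_value", "days_to_award", "supplier_prior_awards"]

# Historical, audited cases with a 0/1 label for confirmed irregularities
labelled = contracts.dropna(subset=["confirmed_irregularity"])
model = LogisticRegression(max_iter=1000)
model.fit(labelled[features], labelled["confirmed_irregularity"])

# Score new, unaudited contracts and surface the riskiest for human review
new_contracts = contracts[contracts["confirmed_irregularity"].isna()].copy()
new_contracts["risk_score"] = model.predict_proba(new_contracts[features])[:, 1]
print(new_contracts.sort_values("risk_score", ascending=False).head(10))
```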

Despite the big data movement’s promise for fighting corruption, many challenges remain. The smart use of open and big data should focus not only on uncovering corruption, but also on better understanding its underlying causes and preventing its recurrence. Anticorruption analytics cannot exist in a vacuum; it must fit in a strategic institutional framework that starts with quality information and leads to reform. Even the most sophisticated technologies and data innovations cannot prevent what French novelist Théophile Gautier described as the “inexplicable attraction of corruption, even amongst the most honest souls.” Unless it is harnessed for improvements in governance and institutions, data analytics will not have the impact that it could, nor be sustainable in the long run…(More)”.

Social Media for Government


Book by Gohar Feroz Khan: “This book provides practical know-how on understanding, implementing, and managing mainstream social media tools (e.g., blogs and micro-blogs, social network sites, and content communities) from a public sector perspective. Through social media, government organizations can inform citizens, promote their services, seek public views and feedback, and monitor satisfaction with the services they offer so as to improve their quality. Given the exponential growth of social media in contemporary society, it has become an essential tool for communication, content sharing, and collaboration. This growth and these tools also present an unparalleled opportunity to implement a transparent, open, and collaborative government.  However, many government organizations, particularly those in the developing world, are still somewhat reluctant to leverage social media, as it requires significant policy and governance changes, as well as specific know-how, skills and resources to plan, implement and manage social media tools. As a result, governments around the world ignore or mishandle the opportunities and threats presented by social media. To help policy makers and governments implement a social media driven government, this book provides guidance in developing an effective social media policy and strategy. It also addresses issues such as those related to security and privacy….(More)”

Protecting One’s Own Privacy in a Big Data Economy


Anita L. Allen in the Harvard Law Review Forum: “Big Data is the vast quantities of information amenable to large-scale collection, storage, and analysis. Using such data, companies and researchers can deploy complex algorithms and artificial intelligence technologies to reveal otherwise unascertained patterns, links, behaviors, trends, identities, and practical knowledge. The information that comprises Big Data arises from government and business practices, consumer transactions, and the digital applications sometimes referred to as the “Internet of Things.” Individuals invisibly contribute to Big Data whenever they live digital lifestyles or otherwise participate in the digital economy, such as when they shop with a credit card, get treated at a hospital, apply for a job online, research a topic on Google, or post on Facebook.

Privacy advocates and civil libertarians say Big Data amounts to digital surveillance that potentially results in unwanted personal disclosures, identity theft, and discrimination in contexts such as employment, housing, and financial services. These advocates and activists say typical consumers and internet users do not understand the extent to which their activities generate data that is being collected, analyzed, and put to use for varied governmental and business purposes.

I have argued elsewhere that individuals have a moral obligation to respect not only other people’s privacy but also their own. Here, I wish to comment first on whether the notion that individuals have a moral obligation to protect their own information privacy is rendered utterly implausible by current and likely future Big Data practices; and on whether a conception of an ethical duty to self-help in the Big Data context may be more pragmatically framed as a duty to be part of collective actions encouraging business and government to adopt more robust privacy protections and data security measures….(More)”

Empirical data on the privacy paradox


Benjamin Wittes and Emma Kohse at Brookings: “The contemporary debate about the effects of new technology on individual privacy centers on the idea that privacy is an eroding value. The erosion is ongoing and takes place because of the government and big corporations that collect data on us all: In the consumer space, technology and the companies that create it erode privacy, as consumers trade away their solitude either unknowingly or in exchange for convenience and efficiency.

On January 13, we released a Brookings paper that challenges this idea. In the paper, entitled “The Privacy Paradox II: Measuring the Privacy Benefits of Privacy Threats,” we try to measure the extent to which this focus ignores the significant privacy benefits of the technologies that concern privacy advocates. And we conclude that quantifiable effects in consumer behavior strongly support the reality of these benefits.

In 2015, one of us, writing with Jodie Liu, laid out the basic idea in a paper published by Brookings called “The Privacy Paradox: the Privacy Benefits of Privacy Threats.” (The title, incidentally, became the name of Lawfare’s privacy-oriented subsidiary page.) Individuals, we argued, might be more concerned with keeping private information from specific people—friends, neighbors, parents, or even store clerks—than from large, remote corporations, and they might actively prefer to give information to remote corporations by way of shielding it from those immediately around them. By failing to associate this concern with the concept of privacy, academic and public debates tend to ignore countervailing privacy benefits associated with privacy threats, and thereby keep score in a way biased toward the threats side of the ledger.

To cite a few examples, an individual may choose to use a Kindle e-reader to read Fifty Shades of Grey precisely because she values the privacy benefit of hiding her book choice from the eyes of people on the bus or the store clerk at the book store, rather than for reasons of mere convenience. This privacy benefit, for many consumers, can outweigh the privacy concern presented by Amazon’s data mining. At the very least, the privacy benefits of the Kindle should enter into the discussion.

In this paper, we tried to begin the task of measuring the effect and the reasoning that supported the thesis in the “Privacy Paradox,” using Google Surveys, an online survey tool….(More)”.