The Privacy Project


The New York Times: “Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this monthslong project to explore the technology and where it’s taking us, and to convene debate about how it can best help realize human potential….(More)”

Does Privacy Matter?

What Do They Know, and How Do They Know It?

What Should Be Done About This?

What Can I Do?

(View all Privacy articles…)

Filling a gap: the clandestine gang fixing Rome illegally


Giorgio Ghiglione in The Guardian: “It is 6am on a Sunday and the streets of the Ostiense neighbourhood in southern Rome are empty. The metro has just opened and nearby cafes still await their first customers.

Seven men and women are working hard, their faces obscured by scarves and hoodies as they unload bags of cement and sand from a car near the Basilica of St Paul Outside the Walls.

They are not criminals. Members of the secret Gap organisation, they hide their identities because what they are doing – fixing a broken pavement without official permission – is technically illegal.

City maintenance – or the lack of it – has long been a hot-button issue in Italy’s capital. There are an estimated 10,000 potholes in the city – a source of frustration for the many Romans who travel by scooter. Garbage collection has also become a major problem since the city’s landfill was closed in 2013, with periodic “waste crises” where trash piles up in the streets. Cases of exploding buses and the collapse of a metro escalator made international headlines.

The seven clandestine pavement-fixers are part of a network of about 20 activists quietly doing the work that the city authorities have failed to do. Gap stands for Gruppi Artigiani Pronto Intervento (“groups of artisan emergency services”), but is also a tribute to the partisans of Gruppi di Azione Patriottica, who fought the fascists during the second world war.

“We chose this name because many of our parents or grandparents were partisans and we liked the idea of honouring their memory,” says one of the activists, a fiftysomething architect who goes by the pseudonym Renato. While the modern-day Gap aren’t risking their lives, their modus operandi is inspired by resistance saboteurs: they identify a target, strike and disappear unseen into the city streets.

Gap have been busy over the past few months. In December they repaired the fountain, built in the 1940s, of the Principe di Piemonte primary school. In January they painted a pedestrian crossing on a dangerous major road. Their latest work, the pavement fixing in Ostiense, involved filling a deep hole that regularly filled with water when it rained….(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

Artists as ‘Creative Problem-Solvers’ at City Agencies


Sophie Haigney at The New York Times: “Taja Lindley, a Brooklyn-based interdisciplinary artist and activist, will spend the next year doing an unconventional residency — she’ll be collaborating with the New York City Department of Health and Mental Hygiene, working on a project that deals with unequal birth outcomes and maternal mortality for pregnant and parenting black people in the Bronx.

Ms. Lindley is one of four artists who were selected this year for the City’s Public Artists in Residence program, or PAIR, which is managed by New York City’s Department of Cultural Affairs. The program, which began in 2015, matches artists and public agencies, and the artists are tasked with developing creative projects around social issues.

Ms. Lindley will be working with the Tremont Neighborhood Health Action Center, part of the department of health, in the Bronx. “People who are black are met with skepticism, minimized and dismissed when they seek health care,” Ms. Lindley said, “and the voices of black people can really shift medical practices and city practices, so I’ll really be centering those voices.” She said that performance, film and storytelling are likely to be incorporated in her project.

The other three artists selected this year are the artist Laura Nova, who will be in residence with the Department for the Aging; the artist Julia Weist, who will be in residence with the Department of Records and Information Services; and the artist Janet Zweig, who will be in residence with the Mayor’s Office of Sustainability. Each will receive $40,000. There is a three-month-long research phase and then the artists will spend a minimum of nine months creating and producing their work….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
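The crowd-detection half of the approach can be sketched roughly as follows. This is an illustrative toy, not EMSC’s actual pipeline: the `Report` structure, the two-minute window, and the five-report threshold are all assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str       # e.g. "app", "tweet", or "web_visit"
    timestamp: float  # seconds since epoch
    lat: float
    lon: float

def crowd_detection(reports, window_s=120, min_reports=5):
    """Flag a suspected earthquake when enough independent crowd
    reports cluster inside a short time window."""
    reports = sorted(reports, key=lambda r: r.timestamp)
    for i, first in enumerate(reports):
        window = [r for r in reports[i:]
                  if r.timestamp - first.timestamp <= window_s]
        if len(window) >= min_reports:
            # Crude epicentre estimate: centroid of the clustered reports.
            lat = sum(r.lat for r in window) / len(window)
            lon = sum(r.lon for r in window) / len(window)
            return {"detected": True, "lat": lat, "lon": lon,
                    "time": first.timestamp}
    return {"detected": False}
```

In the published system, a detection like this would then be cross-checked against traditional seismic picks, which is what lifts the reported accuracy from 85% to 97%.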

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help first responders assess the level of damage….(More)”.

Nudging the dead: How behavioural psychology inspired Nova Scotia’s organ donation scheme


Joseph Brean at National Post: “Nova Scotia’s decision to presume people’s consent to donating their organs after death is not just a North American first. It is also the latest example of how deeply behavioural psychology has changed policy debates.

That is a rare achievement for science. Governments used to appeal to people’s sense of reason, religion, civic duty, or fear of consequences. Today, when they want to change how their citizens behave, they use psychological tricks to hack their minds.

Nudge politics, as it came to be known, has been an intellectual hit among wonks and technocrats ever since Daniel Kahneman won the Nobel Prize in 2002 for destroying the belief that people make decisions based on good information and reasonable expectations. Not so, he showed. Not even close. Human decision-making is an organic process, all but immune to reason, but strangely susceptible to simple environmental cues, just waiting to be exploited by a clever policymaker….

Organ donation is a natural fit. Nova Scotia’s experiment aims to solve a policy problem by getting people to do what they always tend to do about government requests — nothing.

The cleverness is evident in the N.S. government’s own words, which play on the meaning of “opportunity”: “Every Nova Scotian will have the opportunity to be an organ and tissue donor unless they opt out.” The policy applies to kidneys, pancreas, heart, liver, lungs, small bowel, cornea, sclera, skin, bones, tendons and heart valves.

It is so clever it aims to make progress as people ignore it. The default position is a positive for the policy. It assumes poor pickup. You can opt out of organ donation if you want. Nova Scotia is simply taking the informed gamble that you probably won’t. That is the goal, and it will make for a revealing case study.

Organ donation is an important question, and chronically low donation rates can reasonably be called a crisis. But most people make their personal choice “thoughtlessly,” as Kahneman wrote in the 2011 book Thinking, Fast and Slow.

He referred to European statistics which showed vast differences in organ donation rates between neighbouring and culturally similar countries, such as Sweden and Denmark, or Germany and Austria. The key difference, he noted, was what he called “framing effects,” or how the question was asked….(More)”.
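The arithmetic behind the default effect is easy to sketch. Assuming, purely hypothetically, that only 20% of people ever act on a donation form and that half of those who act choose to donate, the registered-donor share flips dramatically with the default:

```python
def registered_donor_share(default_is_donor, action_rate, pct_choosing_donate):
    """Share of the population registered as donors, given that only
    `action_rate` of people ever override the default.

    All numbers here are illustrative assumptions, not real statistics."""
    active_donors = action_rate * pct_choosing_donate
    passive = 1 - action_rate  # everyone who keeps the default
    return active_donors + (passive if default_is_donor else 0.0)

# Opt-in (donation requires action) vs. opt-out (Nova Scotia's model):
opt_in  = registered_donor_share(default_is_donor=False,
                                 action_rate=0.20, pct_choosing_donate=0.5)
opt_out = registered_donor_share(default_is_donor=True,
                                 action_rate=0.20, pct_choosing_donate=0.5)
# opt_in is 0.10; opt_out is 0.90 — same population, same preferences.
```

The identical population produces a 10% donor pool under one framing and a 90% pool under the other, which is the pattern Kahneman observed in the European data.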

Fearful of fake news blitz, U.S. Census enlists help of tech giants


Nick Brown at Reuters: “The U.S. Census Bureau has asked tech giants Google, Facebook and Twitter to help it fend off “fake news” campaigns it fears could disrupt the upcoming 2020 count, according to Census officials and multiple sources briefed on the matter.

The push, the details of which have not been previously reported, follows warnings from data and cybersecurity experts dating back to 2016 that right-wing groups and foreign actors may borrow the “fake news” playbook from the last presidential election to dissuade immigrants from participating in the decennial count, the officials and sources told Reuters.

The sources, who asked not to be named, said evidence included increasing chatter on platforms like “4chan” by domestic and foreign networks keen to undermine the survey. The census, they said, is a powerful target because it shapes U.S. election districts and the allocation of more than $800 billion a year in federal spending.

Ron Jarmin, the Deputy Director of the Census Bureau, confirmed the bureau was anticipating disinformation campaigns, and was enlisting the help of big tech companies to fend off the threat.

“We expect that (the census) will be a target for those sorts of efforts in 2020,” he said.

Census Bureau officials have held multiple meetings with tech companies since 2017 to discuss ways they could help, including as recently as last week, Jarmin said.

So far, the bureau has gotten initial commitments from Alphabet Inc’s Google, Twitter Inc and Facebook Inc to help quash disinformation campaigns online, according to documents summarizing some of those meetings reviewed by Reuters.

But neither Census nor the companies have said how advanced any of the efforts are….(More)”.

How the NYPD is using machine learning to spot crime patterns


Colin Wood at StateScoop: “Civilian analysts and officers within the New York City Police Department are using a unique computational tool to spot patterns in crime data, and it is helping them close cases.

A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics. Drawing on 10 years of historical data about burglary, robbery and grand larceny, the tool is the first of its kind to be used by law enforcement, the developers wrote.

The NYPD hired 100 civilian analysts in 2017 to use Patternizr. It’s also available to all officers through the department’s Domain Awareness System, a citywide network of sensors, databases, devices, software and other technical infrastructure. Researchers told StateScoop the tool has generated leads on several cases that traditionally would have stretched officers’ memories and traditional evidence-gathering abilities.

Connecting similar crimes into patterns is a crucial part of gathering evidence and eventually closing in on an arrest, said Evan Levine, the NYPD’s assistant commissioner of data analytics and one of Patternizr’s developers. Taken independently, each crime in a string of crimes may not yield enough evidence to identify a perpetrator, but the work of finding patterns is slow and each officer only has a limited amount of working knowledge surrounding an incident, he said.

“The goal here is to alleviate all that kind of busywork you might have to do to find hits on a pattern,” said Alex Chohlas-Wood, a Patternizr researcher and deputy director of the Computational Policy Lab at Stanford University.

The knowledge of individual officers is limited in scope by dint of the NYPD’s organizational structure. The department divides New York into 77 precincts, and a person who commits crimes across precincts, which often have arbitrary boundaries, is often more difficult to catch because individual beat officers are typically focused on a single neighborhood.

There’s also a lot of data to sift through. In 2016 alone, about 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies were reported across the five boroughs.

Levine said that last month, police used Patternizr to spot a pattern of three knife-point robberies around a Bronx subway station. It would have taken police much longer to connect those crimes manually, Levine said.

The software works by an analyst feeding it a “seed” case, which is then compared against a database of hundreds of thousands of crime records that Patternizr has already processed. The tool generates a “similarity score” and returns a rank-ordered list and a map. Analysts can read a few details of each complaint before examining the seed complaint and similar complaints in a detailed side-by-side view or filtering results….(More)”.
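The seed-and-rank step can be illustrated with a toy similarity function. Patternizr itself uses models trained per crime type on NYPD data; the features, weights, and decay constants below are invented for illustration only.

```python
import math

def similarity(seed, other, w_dist=0.5, w_method=0.3, w_time=0.2):
    """Toy similarity score in [0, 1] between two complaint records.
    Weights and decay scales are illustrative assumptions."""
    # Rough km distance from lat/lon degrees (1 degree ~ 111 km).
    d_km = math.dist((seed["lat"], seed["lon"]),
                     (other["lat"], other["lon"])) * 111
    dist_score = math.exp(-d_km / 2.0)        # decays over ~2 km
    method_score = 1.0 if seed["method"] == other["method"] else 0.0
    dt_days = abs(seed["day"] - other["day"])
    time_score = math.exp(-dt_days / 14.0)    # decays over ~2 weeks
    return w_dist * dist_score + w_method * method_score + w_time * time_score

def rank_candidates(seed, database, top_k=10):
    """Return the top_k records most similar to the seed, highest first."""
    scored = [(similarity(seed, c), c) for c in database]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

Run against a database of past complaints, a nearby knife-point robbery a day later scores far higher than an unrelated incident across town, which is the kind of lead the Bronx subway case describes.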

What you don’t know about your health data will make you sick


Jeanette Beebe at Fast Company: “Every time you shuffle through a line at the pharmacy, every time you try to get comfortable in those awkward doctor’s office chairs, every time you scroll through the web while you’re put on hold with a question about your medical bill, take a second to think about the person ahead of you and behind you.

Chances are, at least one of you is being monitored by a third party like data analytics giant Optum, which is owned by UnitedHealth Group, Inc. Since 1993, it’s captured medical data—lab results, diagnoses, prescriptions, and more—from 150 million Americans. That’s almost half of the U.S. population.

“They’re the ones that are tapping the data. They’re in there. I can’t remove them from my own health insurance contracts. So I’m stuck. It’s just part of the system,” says Joel Winston, an attorney who specializes in privacy and data protection law.

Healthcare providers can legally sell their data to a now-dizzyingly vast spread of companies, which can use it to make decisions, from designing new drugs to pricing your insurance rates to developing highly targeted advertising.

It’s written in the fine print: You don’t own your medical records. Well, except if you live in New Hampshire. It’s the only state that mandates its residents own their medical data. In 21 states, the law explicitly says that healthcare providers own these records, not patients. In the rest of the country, it’s up in the air.

Every time you visit a doctor or a pharmacy, your record grows. The details can be colorful. Using sources like Milliman’s IntelliScript and ExamOne’s ScriptCheck, a fuller picture of you emerges: your interactions with the health care system, your medical payments, your prescription drug purchase history. And the market for the data is surging.

Its buyers and sharers—pharma giants, insurers, credit reporting agencies, and other data-hungry companies or “fourth parties” (like Facebook)—say that these massive health data sets can improve healthcare delivery and fuel advances in so-called “precision medicine.”

Still, this glut of health data has raised alarms among privacy advocates, who say many consumers are in the dark about how much of their health-related info is being gathered and mined….

Gardner argued that traditional health data systems—electronic health records and electronic medical records—are less than ideal, given the “rigidity of the vendors and the products” and the way our data is owned and secured. Don’t count on them being around much longer, she predicted, “beyond the next few years.”

The future, Gardner suggested, is a system that runs on blockchain, which she defined for the committee as “basically a secure, visible, irrefutable ledger of transactions and ownership.” Still, a recent analysis of over 150 white papers revealed most healthcare blockchain projects “fall somewhere between half-baked and overly optimistic.”

As larger companies like IBM sign on, the technology may be edging closer to reality. Last year, Proof Work outlined a HIPAA-compliant system that manages patients’ medical histories over time, from acute care in the hospital to preventative checkups. The goal is to give these records to patients on their phones, and to create a “democratized ecosystem” to solve interoperability between patients, healthcare providers, insurance companies, and researchers. Similar proposals from blockchain-focused startups like Health Bank and Humanity.co would help patients store and share their health information securely—and sell it to researchers, too….(More)”.
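The “secure, visible, irrefutable ledger” Gardner describes can be sketched as a minimal hash chain: each entry commits to the hash of the one before it, so any tampering with history is detectable. This is a bare-bones illustration, not the design of any of the named companies; real systems add signatures, consensus, and HIPAA-grade access control.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain, record):
    """Append a record, committing to the hash of the previous block."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash})
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True
```

Because each block's hash covers the previous block's hash, rewriting one record silently would require rewriting every later block too, which is what makes the ledger "irrefutable" in the sense quoted above.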

Technology and political will can create better governance


Darshana Narayanan at The Economist: “Current forms of democracy exclude most people from political decision-making. We elect representatives and participate in the occasional referendums, but we mainly remain on the outside. The result is that a handful of people in power dictate what ought to be collective decisions. What we have now is hardly a democracy, or at least, not a democracy that we should settle for.

To design a truer form of democracy—that is, fair representation and an outcome determined by a plurality—we might draw some lessons from the collective behaviour of other social animals: schools of fish, for example. Schooling fish self-organise for the benefit of the group and are rarely in a fracas. Individuals in the group may not be associated and yet they reach consensus. A study in 2011 led by Iain Couzin found that “uninformed” fish—in that case, ones that had not been trained to have a preference to move towards a particular target—can dilute the influence of a powerful minority group which did have such preferences. 

Of course fish are not the same as humans. But that study does suggest a way of thinking about decision-making. Instead of limiting influence to experts and strongly motivated interest groups, we should actively work to broaden participation to ensure that we include people lacking strong preferences or prior knowledge of an issue. In other words, we need to go against the ingrained thinking that non-experts should be excluded from decision-making. Inclusivity might just improve our chances of reaching a real, democratic consensus.

How can our political institutions facilitate this? In my work over the past several years I have tried to apply findings from behavioural science into institutions and into code to create better systems of governance. In the course of my work, I have found some promising experiments taking place around the world that harness new digital tools. They point the way to how democracy can be practiced in the 21st century….(More)”.