The Internet of Bodies: A Convenient—and, Yes, Creepy—New Platform for Data Discovery


David Horrigan at ALM: “In the era of the Internet of Things, we’ve become (at least somewhat) comfortable with our refrigerators knowing more about us than we know about ourselves and our Apple Watches transmitting our every movement. The Internet of Things has even made it into the courtroom in cases such as the hot tub saga of Amazon Echo’s Alexa in State v. Bates and an unfortunate wife’s Fitbit in State v. Dabate.

But the Internet of Bodies?…

“The Internet of Bodies refers to the legal and policy implications of using the human body as a technology platform,” said Northeastern University law professor Andrea Matwyshyn, who also serves as co-director of Northeastern’s Center for Law, Innovation, and Creativity (CLIC).

“In brief, the Internet of Things (IoT) is moving onto and inside the human body, becoming the Internet of Bodies (IoB),” Matwyshyn added….


The Internet of Bodies is not merely a theoretical discussion of what might happen in the future. It’s happening already.

Former U.S. Vice President Dick Cheney revealed in 2013 that his physicians had ordered the wireless capabilities of his heart implant disabled out of concern that hackers might attempt an assassination, and in 2017, the U.S. Food and Drug Administration recalled almost half a million pacemakers over security issues requiring a firmware update.

It’s not just former vice presidents and heart patients becoming part of the Internet of Bodies. Northeastern’s Matwyshyn notes that so-called “smart pills” with sensors can report back health data from your stomach to smartphones, and a self-tuning brain implant is being tested to treat Alzheimer’s and Parkinson’s.

So, what’s not to like?

Better with Bacon?

“We are attaching everything to the Internet whether we need to or not,” Matwyshyn said, calling it the “Better with Bacon” problem, noting that—as bacon has become a popular condiment in restaurants—chefs are putting it on everything from drinks to cupcakes.

“It’s great if you love bacon, but not if you’re a vegetarian or if you just don’t like bacon. It’s not a bonus,” Matwyshyn added.

Matwyshyn’s bacon analogy raises interesting questions: Do we really need to connect everything to the Internet? Do the data privacy and data protection risks outweigh the benefits?

The Northeastern Law professor divides these IoB devices into three generations: 1) “body external” devices, such as Fitbits and Apple Watches; 2) “body internal” devices, including Internet-connected pacemakers, cochlear implants, and digital pills; and 3) “body embedded” devices, hardwired technology in which the human brain melds with external machines, giving the body a real-time connection to a remote machine with live updates.
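Matwyshyn’s taxonomy can be expressed as a simple classification. The sketch below is purely illustrative—the enum and the device-to-generation assignments are assumptions for exposition, not code or categories from the article:

```python
from enum import Enum

class IoBGeneration(Enum):
    """Matwyshyn's three generations of Internet of Bodies devices."""
    BODY_EXTERNAL = 1  # worn on the body: Fitbits, Apple Watches
    BODY_INTERNAL = 2  # inside the body: connected pacemakers, cochlear implants, digital pills
    BODY_EMBEDDED = 3  # melded with the body: brain implants with live remote links

# Illustrative assignments of devices mentioned in the article
devices = {
    "Fitbit": IoBGeneration.BODY_EXTERNAL,
    "Internet-connected pacemaker": IoBGeneration.BODY_INTERNAL,
    "self-tuning brain implant": IoBGeneration.BODY_EMBEDDED,
}

for name, gen in devices.items():
    print(f"{name}: generation {gen.value} ({gen.name})")
```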

Chip Party for Chipped Employees

A Wisconsin company, Three Square Market, made headlines in 2017—including an appearance on The Today Show—when the company microchipped its employees, not unlike what veterinarians do with the family pet. Not surprisingly, the company touted the benefits of implanting microchips under the skin of employees, including being able to wave one’s hand at a door instead of having to carry a badge or use a password….(More)”.

The Age of Surveillance Capitalism


Book by Shoshana Zuboff: “The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior.

Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or their slaves. …(More)”.

Your old tweets give away more location data than you think


Issie Lapowsky at Wired: “An international group of researchers has developed an algorithmic tool that uses Twitter to automatically predict exactly where you live in a matter of minutes, with more than 90 percent accuracy. It can also predict where you work, where you pray, and other information you might rather keep private, like, say, whether you’ve frequented a certain strip club or gone to rehab.

The tool, called LPAuditor (short for Location Privacy Auditor), exploits what the researchers call an “invasive policy” Twitter deployed after it introduced the ability to tag tweets with a location in 2009. For years, users who chose to geotag tweets with any location, even something as geographically broad as “New York City,” also automatically gave their precise GPS coordinates. Users wouldn’t see the coordinates displayed on Twitter. Nor would their followers. But the GPS information would still be included in the tweet’s metadata and accessible through Twitter’s API.

Twitter didn’t change this policy across its apps until April of 2015. Now, users must opt in to share their precise location—and, according to a Twitter spokesperson, a very small percentage of people do. But the GPS data people shared before the update remains available through the API to this day.

The researchers developed LPAuditor to analyze those geotagged tweets and infer detailed information about people’s most sensitive locations. They outline this process in a new, peer-reviewed paper that will be presented at the Network and Distributed System Security Symposium next month. By analyzing clusters of coordinates, as well as timestamps on the tweets, LPAuditor was able to suss out where tens of thousands of people lived, worked, and spent their private time…(More)”.
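Wired doesn’t publish LPAuditor’s internals, but the core idea—cluster a user’s historical GPS coordinates, then label clusters by when the tweets in them were posted—can be sketched in a few lines. The grid-cell clustering and the nighttime/working-hours heuristics below are simplifying assumptions for illustration, not the paper’s actual method:

```python
from collections import defaultdict
from datetime import datetime

def cluster_and_label(tweets, cell=0.001):
    """Group geotagged tweets into coarse grid cells (~100 m across)
    and guess which cluster is 'home' and which is 'work' from tweet
    timestamps. A toy heuristic, not LPAuditor's actual algorithm.
    `tweets` is a list of (lat, lon, datetime) tuples."""
    clusters = defaultdict(list)
    for lat, lon, ts in tweets:
        # Snap coordinates to a grid cell so nearby points group together
        clusters[(round(lat / cell), round(lon / cell))].append(ts)

    def hits(stamps, hours):
        return sum(ts.hour in hours for ts in stamps)

    night = set(range(7)) | {22, 23}   # tweets at these hours suggest home
    workday = set(range(9, 18))        # tweets at these hours suggest work

    home = max(clusters, key=lambda k: hits(clusters[k], night))
    work = max(clusters, key=lambda k: hits(clusters[k], workday))
    return home, work

tweets = [
    (40.7130, -74.0060, datetime(2014, 5, 1, 23, 15)),  # late night
    (40.7131, -74.0061, datetime(2014, 5, 2, 6, 40)),   # early morning
    (40.7580, -73.9855, datetime(2014, 5, 2, 11, 5)),   # midday
    (40.7581, -73.9856, datetime(2014, 5, 3, 14, 30)),  # afternoon
]
home_cell, work_cell = cluster_and_label(tweets)
print("likely home cell:", home_cell, "likely work cell:", work_cell)
```

Even this crude heuristic shows why coordinates plus timestamps are enough to expose sensitive places once years of metadata are available through the API.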

Paying Users for Their Data Would Exacerbate Digital Inequality


Blog post by Eline Chivot: “Writing ever more complicated and intrusive rules about data processing and data use has become the new fad in policymaking. Many are lending an ear to tempting yet ill-advised proposals to treat personal data as a traditional finite resource. The latest example can be found in an article, A Blueprint for a Better Digital Society, by Glen Weyl, an economist at Microsoft Research, and Jaron Lanier, a computer scientist and writer. Not content with Internet users being able to access many online services like Bing and Twitter for free, they want online users to be paid in cash for the data they provide. To say that this proposal is flawed is an understatement. It’s flawed for three main reasons: 1) consumers would lose significant shared value in exchange for minimal cash compensation; 2) higher-income individuals would benefit at the expense of the poor; and 3) transaction costs would increase substantially, further reducing value for consumers and limiting opportunities for businesses to innovate with the data.

Weyl and Lanier’s argument is motivated by the belief that because Internet users are getting so many valuable services—like search, email, maps, and social networking—for free, they must be paying with their data. Therefore, they argue, if users are paying with their data, they should get something in return. Never mind that they do get something in return: valuable digital services that they do not pay for monetarily. But Weyl and Lanier say this is not enough, and consumers should get more.

While this idea may sound good on paper, in practice, it would be a disaster.

…Weyl and Lanier’s self-declared objective is to ensure digital dignity, but in practice this proposal would disrupt the equal treatment users receive from digital services today by valuing users based on their net worth. In this techno-socialist nirvana, to paraphrase Orwell, some pigs would be more equal than others. The French Data Protection Authority, CNIL, itself raised concerns about treating data as a commodity, warning that doing so would jeopardize society’s humanist values and fundamental rights, which are, in essence, priceless.

To ensure “a better digital society,” companies should continue to be allowed to decide the best Internet business models based on what consumers demand. Data is neither cash nor a commodity, and pursuing policies based on this misconception will damage the digital economy and make the lives of digital consumers considerably worse….(More)”.

Digital rights as a security objective: New gateways for attacks


Yannic Blaschke at EDRI: “Violations of human rights online, most notably the right to data protection, can pose a real threat to electoral security and societal polarisation. In this series of blogposts, we’ll explain how and why digital rights must be treated as a security objective instead. The second part of the series explains how encroaching on digital rights could create new gateways for attacks against our security.

In the first part of this series, we analysed the failure of the Council of the European Union to connect the obvious dots between ePrivacy and disinformation online, leaving open a security vulnerability through a lack of protection of citizens. However, a failure to act is not the only front on which the EU is potentially weakening our security on- and offline: on the contrary, some of the EU’s more actively pursued digital policies could have unintended, yet serious consequences in the future. Nowhere is this trend more visible than in the recent trust in filtering algorithms, which seem to be the new “censorship machine” that is proposed as a solution for almost everything, from copyright infringements to terrorist content online.

Article 13 of the Copyright Directive proposal and the Terrorist Content Regulation proposal are two examples of the attempt to regulate the online world via algorithms. While having different motivations, both share the logic of outsourcing accountability and enforcement of public rules to private entities, who will be the ones deciding on the availability of speech online. They, explicitly or implicitly, advocate for the introduction of technologies that detect and remove certain types of content: upload filters. They empower internet companies to decide which content will stay online, based on their terms of service (and not law). In a nutshell, public institutions are encouraging Google, Facebook and other platform giants to become the judge and the police of the internet. In turn, they undermine the presumption that it should be democratically legitimised states, not private entities, that are tasked with the heavy burden of balancing the right to freedom of expression.

Even more chilling is the outlook of upload filters creating new entry points for forces that seek to influence societal debates in their favour. If algorithms become the judges of what can or cannot be published, they could become the target of the next wave of election interference campaigns, with attackers manipulating them into taking down critical or liberal voices to influence debates on the internet. Despite continuous warnings about the misuse of personal data on Facebook, it only took us a few years to arrive at the point of Cambridge Analytica. How long will it take us to arrive at a similar point of election interference through upload filters in online platforms?

If we let this pre-emptive and extra-judicial censorship happen, it would likely result in severe detriments to the freedom of speech and right to information of European citizens, and the free flow of information would, in consequence, be stifled. The societal effects of this could be further aggravated by the introduction of a press publishers’ right (Article 11 of the Copyright Directive), which is vigorously opposed by the academic world, as it will concentrate the power over what appears in the news in ever fewer hands. Especially in Member States where media plurality and independence of bigger outlets from state authorities are no longer guaranteed, a decline in societal resilience to authoritarian tendencies is unfortunately easy to imagine.

We have to be very clear about what machines are good at and what they are bad at: algorithms are incredibly well suited to detecting patterns and trends, but they cannot and will not be able to perform the delicate act of balancing our rights and freedoms in accordance with the law any time soon….(More)”

Los Angeles Accuses Weather Channel App of Covertly Mining User Data


Jennifer Valentino-DeVries and Natasha Singer in The New York Times: “The Weather Channel app deceptively collected, shared and profited from the location information of millions of American consumers, the city attorney of Los Angeles said in a lawsuit filed on Thursday.

One of the most popular online weather services in the United States, the Weather Channel app has been downloaded more than 100 million times and has 45 million active users monthly.

The government said the Weather Company, the business behind the app, unfairly manipulated users into turning on location tracking by implying that the information would be used only to localize weather reports. Yet the company, which is owned by IBM, also used the data for unrelated commercial purposes, like targeted marketing and analysis for hedge funds, according to the lawsuit.

In the complaint, the city attorney excoriated the Weather Company, saying it unfairly took advantage of its app’s popularity and the fact that consumers were likely to give their location data to get local weather alerts. The city said that the company failed to sufficiently disclose its data practices when it got users’ permission to track their location and that it obscured other tracking details in its privacy policy.

“These issues certainly aren’t limited to our state,” said Mike Feuer, the Los Angeles city attorney. “Ideally this litigation will be the catalyst for other action — either litigation or legislative activity — to protect consumers’ ability to assure their private information remains just that, unless they speak clearly in advance.”…(More)”.

Can a set of equations keep U.S. census data private?


Jeffrey Mervis at Science: “The U.S. Census Bureau is making waves among social scientists with what it calls a “sea change” in how it plans to safeguard the confidentiality of data it releases from the decennial census.

The agency announced in September 2018 that it will apply a mathematical concept called differential privacy to its release of 2020 census data after conducting experiments that suggest current approaches can’t assure confidentiality. But critics of the new policy believe the Census Bureau is moving too quickly to fix a system that isn’t broken. They also fear the changes will degrade the quality of the information used by thousands of researchers, businesses, and government agencies.

The move has implications that extend far beyond the research community. Proponents of differential privacy say a fierce, ongoing legal battle over plans to add a citizenship question to the 2020 census has only underscored the need to assure people that the government will protect their privacy....

Differential privacy, first described in 2006, isn’t a substitute for swapping and other ways to perturb the data. Rather, it allows someone—in this case, the Census Bureau—to measure the likelihood that enough information will “leak” from a public data set to open the door to reconstruction.

“Any time you release a statistic, you’re leaking something,” explains Jerry Reiter, a professor of statistics at Duke University in Durham, North Carolina, who has worked on differential privacy as a consultant with the Census Bureau. “The only way to absolutely ensure confidentiality is to release no data. So the question is, how much risk is OK? Differential privacy allows you to put a boundary” on that risk....
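Reiter’s “boundary” on risk is what the privacy-loss parameter ε formalizes. The canonical mechanism adds Laplace noise calibrated to a query’s sensitivity; the sketch below is a minimal textbook illustration, not the Census Bureau’s production system:

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two
    exponential draws (a standard identity)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.
    Adding or removing one person changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices;
    smaller epsilon means more noise and less leakage."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy example: a noisy count of people over 65 on a census block.
ages = [23, 41, 67, 70, 35, 82]
print(dp_count(ages, lambda age: age > 65, epsilon=0.5))
```

Because the guarantee composes, every released statistic spends part of a fixed ε budget, which is what lets an agency put a hard bound on cumulative leakage across everything it publishes.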

In the case of census data, however, the agency has already decided what information it will release, and the number of queries is unlimited. So its challenge is to calculate how much the data must be perturbed to prevent reconstruction....

John Abowd, the Census Bureau’s chief scientist and a professor of labor economics at Cornell University, first learned that traditional procedures to limit disclosure were vulnerable—and that algorithms existed to quantify the risk—at a 2005 conference on privacy attended mainly by cryptographers and computer scientists. “We were speaking different languages, and there was no Rosetta Stone,” he says.

He took on the challenge of finding common ground. In 2008, building on a long relationship with the Census Bureau, he and a team at Cornell created the first application of differential privacy to a census product. It is a web-based tool, called OnTheMap, that shows where people work and live….

The three-step process required substantial computing power. First, the researchers reconstructed records for individuals—say, a 55-year-old Hispanic woman—by mining the aggregated census tables. Then, they tried to match the reconstructed individuals to even more detailed census block records (that still lacked names or addresses); they found “putative matches” about half the time.

Finally, they compared the putative matches to commercially available credit databases in hopes of attaching a name to a particular record. Even if they could, however, the team didn’t know whether they had actually found the right person.
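Step one of that experiment amounts to solving the published tables as a system of constraints; steps two and three are essentially a database join on quasi-identifiers. The toy sketch below illustrates only the linkage step, with invented records and naive exact matching:

```python
# Reconstructed records (step 1 output): no names, no addresses.
reconstructed = [
    {"block": "1001", "age": 55, "sex": "F", "ethnicity": "Hispanic"},
    {"block": "1001", "age": 32, "sex": "M", "ethnicity": "White"},
]

# A commercial database that does carry identities (all data invented).
credit_db = [
    {"name": "J. Doe", "block": "1001", "age": 55, "sex": "F", "ethnicity": "Hispanic"},
    {"name": "A. Roe", "block": "1002", "age": 32, "sex": "M", "ethnicity": "White"},
]

QUASI_IDS = ("block", "age", "sex", "ethnicity")

def link(recon, external):
    """Return putative matches: external rows that agree with a
    reconstructed record on every quasi-identifier."""
    index = {tuple(row[k] for k in QUASI_IDS): row["name"] for row in external}
    matches = []
    for rec in recon:
        key = tuple(rec[k] for k in QUASI_IDS)
        if key in index:
            matches.append((index[key], rec))
    return matches

for name, rec in link(reconstructed, credit_db):
    print(f"putative match: {name} <- {rec}")
```

As in the bureau’s experiment, a putative match is only putative: agreement on quasi-identifiers does not prove the attacker found the right person.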

Abowd won’t say what proportion of the putative matches appeared to be correct. (He says a forthcoming paper will contain the ratio, which he calls “the amount of uncertainty an attacker would have once they claim to have reidentified a person from the public data.”) Although one of Abowd’s recent papers notes that “the risk of re-identification is small,” he believes the experiment proved reidentification “can be done.” And that, he says, “is a strong motivation for moving to differential privacy.”…

Such arguments haven’t convinced Steven Ruggles, a historical demographer at the University of Minnesota, and other social scientists opposed to applying differential privacy to the 2020 census. They are circulating manuscripts that question the significance of the census reconstruction exercise and that call on the agency to delay and change its plan....

Ruggles, meanwhile, has spent a lot of time thinking about the kinds of problems differential privacy might create. His Minnesota institute, for instance, disseminates data from the Census Bureau and 105 other national statistical agencies to 176,000 users. And he fears differential privacy will put a serious crimp in that flow of information…

There are also questions of capacity and accessibility. The centers require users to do all their work onsite, so researchers would have to travel, and the centers offer fewer than 300 workstations in total....

Abowd has said, “The deployment of differential privacy within the Census Bureau marks a sea change for the way that official statistics are produced and published.” And Ruggles agrees. But he says the agency hasn’t done enough to equip researchers with the maps and tools needed to navigate the uncharted waters….(More)”.

Data Policy in the Fourth Industrial Revolution: Insights on personal data


Report by the World Economic Forum: “Development of comprehensive data policy necessarily involves trade-offs. Cross-border data flows are crucial to the digital economy. The use of data is critical to innovation and technology. However, to engender trust, we need to have appropriate levels of protection in place to ensure privacy, security and safety. Over 120 laws in effect across the globe today provide differing levels of protection for data, but few anticipated…

Data Policy in the Fourth Industrial Revolution: Insights on personal data, a paper by the World Economic Forum in collaboration with the Ministry of Cabinet Affairs and the Future, United Arab Emirates, examines the relationship between risk and benefit, recognizing the impact of culture, values and social norms. This work is a start toward developing a comprehensive data policy toolkit and knowledge repository of case studies for policy makers and data policy leaders globally….(More)”.

The UN Principles on Personal Data Protection and Privacy


United Nations System: “The Principles on Personal Data Protection and Privacy set out a basic framework for the processing of personal data by, or on behalf of, the United Nations System Organizations in carrying out their mandated activities.

The Principles aim to: (i) harmonize standards for the protection of personal data across the UN System; (ii) facilitate the accountable processing of personal data; and (iii) ensure respect for the human rights and fundamental freedoms of individuals, in particular the right to privacy. These Principles apply to personal data, contained in any form, and processed in any manner. Where appropriate, they may also be used as a benchmark for the processing of non-personal data, in a sensitive context that may put certain individuals or groups of individuals at risk of harms. 
 
The High Level Committee on Management (HLCM) formally adopted the Principles at its 36th Meeting on 11 October 2018. The adoption followed the HLCM’s decision at its 35th Meeting in April 2018 to engage with the UN Data Privacy Policy Group (UN PPG) in developing a set of high-level principles on the cross-cutting issue of data privacy. Preceding the 36th HLCM meeting in October, the Principles were developed and unanimously endorsed by the organizations represented on the UN PPG….(More) (Download the Personal Data Protection and Privacy Principles)

In High-Tech Cities, No More Potholes, but What About Privacy?


Timothy Williams in The New York Times: “Hundreds of cities, large and small, have adopted or begun planning smart cities projects. But the risks are daunting. Experts say cities frequently lack the expertise to understand privacy, security and financial implications of such arrangements. Some mayors acknowledge that they have yet to master the responsibilities that go along with collecting billions of bits of data from residents….

Supporters of “smart cities” say that the potential is enormous and that some projects could go beyond creating efficiencies and actually save lives. Among the plans under development are augmented reality programs that could help firefighters find people trapped in burning buildings and the collection of sewer samples by robots to determine opioid use so that city services could be aimed at neighborhoods most in need.

The hazards are also clear.

“Cities don’t know enough about data, privacy or security,” said Lee Tien, a lawyer at the Electronic Frontier Foundation, a nonprofit organization focused on digital rights. “Local governments bear the brunt of so many duties — and in a lot of these cases, they are often too stupid or too lazy to talk to people who know.”

Cities habitually feel compelled to outdo each other, but the competition has now been intensified by lobbying from tech companies and federal inducements to modernize.

“There is incredible pressure on an unenlightened city to be a ‘smart city,’” said Ben Levine, executive director at MetroLab Network, a nonprofit organization that helps cities adapt to technology change.

That has left Washington, D.C., and dozens of other cities testing self-driving cars and Orlando trying to harness its sunshine to power electric vehicles. San Francisco has a system that tracks bicycle traffic, while Palm Beach, Fla., uses cycling data to decide where to send street sweepers. Boise, Idaho, monitors its trash dumps with drones. Arlington, Tex., is looking at creating a transit system based on data from ride-sharing apps….(More)”.