Privacy and digital ethics after the pandemic


Carissa Véliz at Nature: “The coronavirus pandemic has permanently changed our relationship with technology, accelerating the drive towards digitization. While this change has brought advantages, such as increased opportunities to work from home and innovations in e-commerce, it has also been accompanied by steep drawbacks, which include an increase in inequality and undesirable power dynamics.

Power asymmetries in the digital age have been a worry since big tech became big. Technophiles have often argued that if users are unhappy about online services, they can always opt out. But opting out has not felt like a meaningful alternative for years, for at least two reasons.

First, the cost of not using certain services can amount to a competitive disadvantage — from not seeing a job advert to not having access to useful tools being used by colleagues. When a platform becomes too dominant, asking people not to use it is like asking them to refrain from being full participants in society. Second, platforms such as Facebook and Google are unavoidable — no one who has an online life can realistically steer clear of them. Google ads and their trackers creep throughout much of the Internet [1], and Facebook has shadow profiles on netizens even when they have never had an account on the platform [2].

Citizens have responded to the countless data abuses of the past few years with what has been described as a ‘techlash’ [3]. Tech companies whose business model is based on surveillance ceased to be perceived as good guys in hoodies offering services to make our lives better; they came to be seen instead as data predators jeopardizing not only users’ privacy and security but also democracy itself. During lockdown, communication apps became necessary for any and all social interaction beyond our homes. People have had to use online tools to work, get an education, receive medical attention, and enjoy much-needed entertainment. Gratitude for technology that allows us to stay in contact under such circumstances has thus watered down the general techlash. Big tech’s stocks have risen consistently during the pandemic, in line with its accumulating power.

As a result of the pandemic, however, any lingering illusion of voluntariness in the use of technology has disappeared. It is not only citizens who rely on big tech to perform their jobs: businesses, universities, health services, and governments need the platforms to carry out their everyday functions. All over the world, governmental and diplomatic meetings are being carried out on platforms such as Zoom and Teams. Since governments do not have full control over the platforms they use, confidentiality is uncertain.

Enhanced power asymmetries have also worsened the vulnerability of ordinary citizens in areas that range from interacting with government to ordering food online, and almost everything in between. The pandemic has, for example, led to an increase in the surveillance of employees as they work from home [4]. Students are likewise being subjected to more scrutiny: by their schools and teachers and, above all, by the companies on which they depend [5]. Surveillance for public health purposes has increased as well. Privacy losses disempower citizens and often lead to further abuses of power. In the UK, for example, companies collecting data for pubs and restaurants for contact-tracing purposes have sold that information on [6].

Such abuses are not isolated events. For the past two decades, we have allowed an unethical business model that depends on the systematic violation of the right to privacy to run amok. As long as we treat personal data as a commodity, there will be a high risk of it being misused — by being stolen in a hack or by being sold to the highest bidder (which often includes nefarious agents)….(More)”.

Sharing Student Health Data with Health Agencies: Considerations and Recommendations


Memo by the Center for Democracy and Technology: “As schools respond to COVID-19 on their campuses, some have shared student information with state and local health agencies, often to aid in contact tracing or to provide services to students. Federal and state student privacy laws, however, do not necessarily permit that sharing, and schools should seek to protect both student health and student privacy.

How Are Schools Sharing COVID-Related Student Data?

When it comes to sharing student data, schools’ practices vary widely. For example, the New York City Department of Education provides a consent form for sharing COVID-related student data. Other schools do not have consent forms but instead share COVID-related data as required by local or state health agencies. For instance, Orange County Public Schools in Florida assists the local health agency in contact tracing by collecting information such as students’ names and dates of birth. Some districts, such as the Dallas Independent School District in Texas, report positive cases to the county but do not publicly specify what information is reported. Many schools, however, do not publicly disclose their collection and sharing of COVID-related student data….(More)”

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Westerlund, Mika; Isabelle, Diane A; Leminen, Seppo: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public-private collaboration such as the launch of contact tracing apps to tackle COVID-19. Such apps thus also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data, and to the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, the government’s intelligence activities, and the security of growing volumes of biodata on their trust in, and acceptance of, the government’s use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of the government’s use of personal data, but not necessarily on their trust that the government will respect privacy. Authorities should be more transparent about the collection and uses of data….(More)”

A Premature Eulogy for Privacy


Review Article by Evan Selinger: “Every so often a sound critical thinker and superb writer asks the wrong questions of a wheel-spinning topic like privacy and then draws the wrong conclusions. This is the case with Firmin DeBrabander in his Life After Privacy: Reclaiming Democracy in a Surveillance Society. A Professor of Philosophy at the Maryland Institute College of Art, DeBrabander has a gift for clearly expressing complex ideas and explaining why underappreciated moments in the history of ideas have contemporary relevance. In this book, he aims “to understand the prospects and future of democracy without privacy, or very little of it.” That attempt necessarily leads him both to undervalue privacy and to make a case for accepting a severely weakened democracy.

To be sure, DeBrabander doesn’t dismiss privacy with any sort of enthusiasm. On the contrary, he loves his privacy, depicting himself as someone who has to block his “beloved” but overly disclosive students on Facebook. “If I had my druthers, my personal data would be sacrosanct,” he writes. But he is convinced privacy is a lost cause. He makes what are essentially six claims about privacy — some of them seemingly obvious and others more startling — to buttress his eulogy.

Prosecuting Privacy

It’s worth listing DeBrabander’s six propositions here before rebutting, or at least complicating, them. The first privacy proposition repeats what has become a seeming truism: we’re living in a “confessional culture” that normalizes oversharing. His second has two components, both much discussed in the last several years: companies participating in the “surveillance economy” have an insatiable appetite for our personal information, and consumers don’t fully comprehend just how much value these companies are able to extract from it through data analytics. He points to Charles Duhigg’s widely cited 2012 New York Times article about Target using predictive analytics on big data to identify pregnant customers and present them with relevant coupons. Consumers, he contends, will continue giving away massive amounts of personal information to make their cars, homes, cities, and even bodies smarter; it’s likely they won’t be any more equipped in the future to assess tradeoffs and determine when they’re being exploited….(More)”.
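To see how little machinery such an inference requires, here is a minimal, hypothetical sketch in Python: a logistic-regression classifier over purchase-history features. The product categories, data, and coefficients are all invented for illustration; Duhigg’s reporting describes Target’s actual model only in general terms.

```python
# Hypothetical sketch of purchase-based prediction in the spirit of the
# Target example. All categories, data, and coefficients are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row: a shopper's purchase counts over 90 days in four categories:
# [unscented lotion, vitamin supplements, cotton balls, scented candles]
n = 1000
X = rng.poisson(lam=[1.0, 0.5, 0.8, 1.2], size=(n, 4))

# Invented ground truth mimicking the reported pattern: unscented lotion
# and supplement purchases raise the probability of the target label.
logits = 0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.3 * X[:, 2] - 0.4 * X[:, 3] - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new shopper's basket to decide whether to mail a coupon.
basket = np.array([[3, 2, 1, 0]])
print(f"predicted probability: {model.predict_proba(basket)[0, 1]:.2f}")
```

The point is not this toy model but how readily ordinary purchase records convert into sensitive inferences at scale.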

COVID-19 Tests Gone Rogue: Privacy, Efficacy, Mismanagement and Misunderstandings


Paper by Manuel Morales et al: “COVID-19 testing, the cornerstone for effective screening and identification of COVID-19 cases, remains paramount as an intervention tool to curb the spread of COVID-19 at both local and national levels. However, the speed at which the pandemic struck and the response was rolled out, the widespread impact on healthcare infrastructure, the lack of sufficient preparation within the public health system, and the complexity of the crisis led to utter confusion among test-takers. Invasion of privacy remains a crucial concern, and the user experience of test-takers remains poor. User friction affects user behavior and discourages participation in testing programs. Test efficacy has been overstated, and test results are poorly understood, resulting in inappropriate follow-up recommendations. Herein, we review the current landscape of COVID-19 testing, identify four key challenges complicating the existing testing ecosystem, and discuss the consequences of failing to address them. The current infrastructure around testing and information propagation is highly privacy-invasive and does not leverage scalable digital components. We highlight the need to improve the testing experience for the user and to reduce privacy invasions; digital tools will play a critical role in resolving these challenges….(More)”.

Augmented Reality and the Surveillance Society


Mark Pesce at IEEE Spectrum: “First articulated in a 1965 white paper by Ivan Sutherland, titled “The Ultimate Display,” augmented reality (AR) lay beyond our technical capacities for 50 years. That changed when smartphones began providing people with a combination of cheap sensors, powerful processors, and high-bandwidth networking—the trifecta needed for AR to generate its spatial illusions. Among today’s emerging technologies, AR stands out as particularly demanding—for computational power, for sensed data, and, I’d argue, for attention to the danger it poses.

Unlike virtual-reality (VR) gear, which creates a completely synthetic experience for the user, AR gear adds to the user’s perception of her environment. To do that effectively, AR systems need to know where in space the user is located. VR systems originally relied on expensive and fragile outside-in tracking, which often required external sensors to be set up in the room. But the new generation of VR gear accomplishes this through a set of techniques collectively known as simultaneous localization and mapping (SLAM). These systems harvest a rich stream of observational data—mostly from cameras affixed to the user’s headgear, but sometimes also from sonar, lidar, structured light, and time-of-flight sensors—and use those measurements to update a continuously evolving model of the user’s spatial environment.
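As a rough illustration of the predict-and-update loop SLAM performs, consider the toy Python sketch below. Production systems use probabilistic filters or pose-graph optimization over camera features; the landmark observations and fixed correction gain here are simplifying assumptions made purely for readability.

```python
# Toy illustration of the SLAM loop: predict the pose from motion, observe
# landmarks, and refine both the pose estimate and the map. Real systems
# (e.g., EKF- or graph-based SLAM) are far more elaborate.
import math

class TinySlam:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0  # estimated pose
        self.landmarks = {}  # landmark id -> estimated (x, y) position

    def predict(self, forward, turn):
        """Dead-reckon the pose from odometry or inertial sensing."""
        self.theta += turn
        self.x += forward * math.cos(self.theta)
        self.y += forward * math.sin(self.theta)

    def update(self, observations, gain=0.3):
        """Fuse sightings; each observation is (id, range, bearing)."""
        for lid, dist, bearing in observations:
            # Landmark position implied by the current pose estimate.
            lx = self.x + dist * math.cos(self.theta + bearing)
            ly = self.y + dist * math.sin(self.theta + bearing)
            if lid not in self.landmarks:
                self.landmarks[lid] = (lx, ly)  # first sighting: extend map
            else:
                # Re-observation: nudge the map point and the pose toward
                # mutual consistency (a crude stand-in for filtering).
                mx, my = self.landmarks[lid]
                self.landmarks[lid] = (mx + gain * (lx - mx),
                                       my + gain * (ly - my))
                self.x -= gain * (lx - mx)
                self.y -= gain * (ly - my)

slam = TinySlam()
slam.predict(forward=1.0, turn=0.0)  # move one meter ahead
slam.update([("door", 2.0, 0.1), ("lamp", 1.5, -0.4)])
print(slam.landmarks)
```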

For safety’s sake, VR systems must be restricted to certain tightly constrained areas, lest someone blinded by VR goggles tumble down a staircase. AR doesn’t hide the real world, though, so people can use it anywhere. That’s important because the purpose of AR is to add helpful (or perhaps just entertaining) digital illusions to the user’s perceptions. But AR has a second, less appreciated, facet: It also functions as a sophisticated mobile surveillance system.

This second quality is what makes Facebook’s recent Project Aria experiment so unnerving. Nearly four years ago, Mark Zuckerberg announced Facebook’s goal to create AR “spectacles”—consumer-grade devices that could one day rival the smartphone in utility and ubiquity. That’s a substantial technical ask, so Facebook’s research team has taken an incremental approach. Project Aria packs the sensors necessary for SLAM within a form factor that resembles a pair of sunglasses. Wearers collect copious amounts of data, which is fed back to Facebook for analysis. This information will presumably help the company to refine the design of an eventual Facebook AR product.

The concern here is obvious: When it comes to market in a few years, these glasses will transform their users into data-gathering minions for Facebook. Tens, then hundreds of millions of these AR spectacles will be mapping the contours of the world, along with all of its people, pets, possessions, and peccadilloes. The prospect of such intensive surveillance at planetary scale poses some tough questions about who will be doing all this watching and why….(More)”.

Inside India’s booming dark data economy


Snigdha Poonam and Samarth Bansal at Rest of World: “…The black market for data, as it exists online in India, resembles those for wholesale vegetables or smuggled goods. Customers are encouraged to buy in bulk, and the variety of what’s on offer is mind-boggling: There are databases about parents, cable customers, pregnant women, pizza eaters, mutual funds investors, and almost any niche group one can imagine. A typical database consists of a spreadsheet with row after row of names and key details: Sheila Gupta, 35, lives in Kolkata, runs a travel agency, and owns a BMW; Irfaan Khan, 52, lives in Greater Noida, and has a son who just applied to engineering college. The databases are usually updated every three months (the older a database is, the less it is worth), and if you buy several at the same time, you’ll get a discount. Business is always brisk, and transactions are conducted quickly. No one will ask you for your name, let alone inquire why you want the phone numbers of five million people who have applied for bank loans.

There isn’t a reliable estimate of the size of India’s data economy or of how much money it generates annually. Regarding the former, each broker we spoke to had a different guess: One said only about one or two hundred professionals make up the top tier, another that every big Indian city has at least a thousand people trading data. To find them, potential customers need only look for their ads on social media or run searches with industry keywords and hashtags — “data,” “leads,” “database” — combined with detailed information about the kind of data they want and the city they want it from.

Privacy experts believe that the data-brokering industry has existed since the early days of the internet’s arrival in India. “Databases have been bought and sold in India for at least 15 years now. I remember a case from way back in 2006 of leaked employee data from Naukri.com (one of India’s first online job portals) being sold on CDs,” says Nikhil Pahwa, the editor and publisher of MediaNama, which covers technology policy. By 2009, data brokers were running SMS-marketing companies that offered complementary services: procuring targeted data and sending text messages in bulk. Back then, there was simply less data, “and those who had it could sell it at whatever price,” says Himanshu Bhatt, a data broker who claims to be retired. That is no longer the case: “Today, everyone has every kind of data,” he said.

No broker we contacted would openly discuss their methods of hunting, harvesting, and selling data. But the day-to-day work generally consists of following the trails that people leave during their travels around the internet. Brokers trawl data storage websites armed with a digital fishing net. “I was shocked when I was surfing [cloud-hosted data sites] one day and came across Aadhaar cards,” Bhatt remarked, referring to India’s state-issued biometric ID cards. Images of them were available to download in bulk, alongside completed loan applications and salary sheets.

Again, the legal boundaries here are far from clear. Anybody who has ever filled out a form on a coupon website or requested a refund for a movie ticket has effectively entered their information into a database that can be sold without their consent by the company it belongs to. A neighborhood cell phone store can sell demographic information to a political party for hyperlocal campaigning, and a fintech company can stealthily transfer an individual’s details from an astrology app onto its own server, to gauge that person’s creditworthiness. When somebody shares employment history on LinkedIn or contact details on a public directory, brokers can use basic software such as web scrapers to extract that data.
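The “basic software” in question can be only a few lines long. Here is a minimal sketch using Python’s requests and BeautifulSoup libraries; the URL and the page’s CSS classes are hypothetical stand-ins for whatever public directory a scraper might target (and many sites’ terms of service forbid exactly this):

```python
# Minimal scraper sketch. The URL and CSS selectors are hypothetical;
# real pages differ, and scraping may violate a site's terms of service.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/public-directory", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

records = []
for entry in soup.select("div.listing"):  # hypothetical page markup
    records.append({
        "name": entry.select_one(".name").get_text(strip=True),
        "phone": entry.select_one(".phone").get_text(strip=True),
    })

print(f"extracted {len(records)} records")  # ready to bundle and resell
```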

But why bother hacking into a database when you can buy it outright? More often, “brokers will directly approach a bank employee and tell them, ‘I need the high-end database’,” Bhatt said. And as demand for information increases, so, too, does data vulnerability. A 2019 survey found that 69% of Indian companies haven’t set up reliable data security systems; 44% have experienced at least one breach already. “In the past 12 months, we have seen an increasing trend of Indians’ data [appearing] on the dark web,” says Beenu Arora, the CEO of the global cyberintelligence firm Cyble….(More)”.

Consumer Bureau To Decide Who Owns Your Financial Data


Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.

The Consumer Financial Protection Bureau (CFPB) is looking to implement the provision of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to a consumer’s rights to his or her own financial data, detailed in Section 1033.

The agency has been laying the groundwork on this move for years, from requesting information in 2016 from financial institutions to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.

Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.

With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices to ensure consumers have access to their financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.

Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.

Evaluating Identity Disclosure Risk in Fully Synthetic Health Data: Model Development and Validation


Paper by Khaled El Emam et al: “There has been growing interest in data synthesis for enabling the sharing of data for secondary analysis; however, there is a need for a comprehensive privacy risk model for fully synthetic data: If the generative models have been overfit, then it is possible to identify individuals from synthetic data and learn something new about them.

Objective: The purpose of this study is to develop and apply a methodology for evaluating the identity disclosure risks of fully synthetic data.

Methods: A full risk model is presented, which evaluates both identity disclosure and the ability of an adversary to learn something new if there is a match between a synthetic record and a real person. We term this “meaningful identity disclosure risk.” The model is applied on samples from the Washington State Hospital discharge database (2007) and the Canadian COVID-19 cases database. Both of these datasets were synthesized using a sequential decision tree process commonly used to synthesize health and social science data.
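To convey the flavor of such a model, here is a deliberately simplified Python sketch. It is not the paper’s actual estimator, which adjusts for population sampling and adversary behavior; the records, quasi-identifiers, and matching rule below are invented for illustration.

```python
# Simplified illustration of match-based disclosure risk, NOT the paper's
# estimator: real models adjust for population sampling, adversary power,
# and more. All records below are invented.
QUASI_IDS = ("age", "sex", "zip")  # attributes an adversary might know

real = [
    {"age": 35, "sex": "F", "zip": "98101", "diagnosis": "flu"},
    {"age": 52, "sex": "M", "zip": "98052", "diagnosis": "covid"},
]
synthetic = [
    # An overfit generator can emit near-copies of training records:
    {"age": 35, "sex": "F", "zip": "98101", "diagnosis": "flu"},
    {"age": 41, "sex": "M", "zip": "98004", "diagnosis": "asthma"},
]

def meaningful_disclosure_risk(synthetic, real, sensitive="diagnosis"):
    """Fraction of synthetic records that match a real person on the
    quasi-identifiers AND reveal that person's true sensitive value."""
    hits = 0
    for s in synthetic:
        for r in real:
            if all(s[q] == r[q] for q in QUASI_IDS):  # identity match
                if s[sensitive] == r[sensitive]:      # adversary learns truth
                    hits += 1
                break
    return hits / len(synthetic)

risk = meaningful_disclosure_risk(synthetic, real)
print(f"risk = {risk:.2f} (commonly used threshold: 0.09)")
```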

Results: The meaningful identity disclosure risk for both of these synthesized samples was below the commonly used 0.09 risk threshold (0.0198 and 0.0086, respectively), and 4 times and 5 times lower than the risk values for the original datasets, respectively.

Conclusions: We have presented a comprehensive identity disclosure risk model for fully synthetic data. The results for this synthesis method on 2 datasets demonstrate that synthesis can reduce meaningful identity disclosure risks considerably. The risk model can be applied in the future to evaluate the privacy of fully synthetic data….(More)”.

How the U.S. Military Buys Location Data from Ordinary Apps


Joseph Cox at Vice: “The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

The apps sending data to X-Mode include Muslim Pro, an app that reminds users when to pray and what direction Mecca is in relation to the user’s current location. The app has been downloaded over 50 million times on Android, according to the Google Play Store, and over 98 million in total across other platforms including iOS, according to Muslim Pro’s website….(More)”.