Congress should designate an entity to oversee data security, GAO says


Article by Matt Bracken: “Federal agencies may need to rethink how they handle individuals’ personal data to protect their civil rights and civil liberties, a congressional watchdog said in a new report Tuesday.

Without federal guidance governing the protection of the public’s civil rights and liberties, agencies have pursued a patchwork system of policies tied to the collection, sharing and use of data, the Government Accountability Office said.

To address that problem head-on, the GAO is recommending that Congress select “an appropriate federal entity” to produce guidance or regulations regarding data protection that would apply to all agencies, giving that entity “the explicit authority to make needed technical and policy choices or explicitly stating Congress’s own choices.”

That recommendation was formed after the GAO sent a questionnaire to all 24 Chief Financial Officers Act agencies asking for information about their use of emerging technologies and data capabilities and how they’re ensuring that personally identifiable information is safeguarded.

The GAO found that 16 of those CFO Act agencies have policies or procedures in place to protect civil rights and civil liberties with regard to data use, while the other eight have not taken steps to do the same.

The most commonly cited issues for agencies in their efforts to protect the civil rights and civil liberties of the public were “complexities in handling protections associated with new and emerging technologies” and “a lack of qualified staff possessing needed skills in civil rights, civil liberties, and emerging technologies.”

“Further, eight of the 24 agencies believed that additional government-wide law or guidance would strengthen consistency in addressing civil rights and civil liberties protections,” the GAO wrote. “One agency noted that such guidance could eliminate the hodge-podge approach to the governance of data and technology.”

All 24 CFO Act agencies have internal offices to “handle the protection of the public’s civil rights as identified in federal laws,” with much of that work centered on the handling of civil rights violations and related complaints. Four agencies — the departments of Defense, Homeland Security, Justice and Education — have offices to specifically manage civil liberty protections across their entire agencies. The other 20 agencies have mostly adopted a “decentralized approach to protecting civil liberties, including when collecting, sharing, and using data,” the GAO noted…(More)”.

Privacy during pandemics: Attitudes to public use of personal data


Paper by Eleonora Freddi and Ole Christian Wasenden: “In this paper we investigate people’s attitudes to privacy and sharing of personal data when used to help society combat a contagious disease, such as COVID-19. Through a two-wave survey, we investigate the role of personal characteristics, and the effect of information, in shaping privacy attitudes. By conducting the survey in Norway and Sweden, which adopted very different strategies to handle the COVID-19 pandemic, we analyze potential differences in privacy attitudes due to policy changes. We find that privacy concern is negatively correlated with allowing public use of personal data. Trust in the entity collecting data and collectivist preferences are positively correlated with this type of data usage. Providing more information about the public benefit of sharing personal data makes respondents more positive about the use of their data, while providing additional information about the costs associated with data sharing does not change attitudes. The analysis suggests that stating a clear purpose and benefit for the data collection makes respondents more positive about sharing. Despite very different policy approaches, we do not find any major differences in privacy attitudes between Norway and Sweden. Findings are also similar between the two survey waves, suggesting a minor role for contextual changes…(More)”

What’s the Value of Privacy?


Brief by New America: “On a day-to-day basis, people make decisions about what information to share and what information to keep to themselves—guided by an inner privacy compass. Privacy is a concept that is both evocative and broad, often possessing different meanings for different people. The term eludes a common, static definition, though it is now inextricably linked to technology and a growing sense that individuals do not have control over their personal information. If privacy still, at its core, encompasses “the right to be left alone,” then that right is increasingly difficult to exercise in the modern era.

The inability to meaningfully choose privacy is not an accident—in fact, it’s often by design. Society runs on data. Whether it is data about people’s personal attributes, preferences, or actions, all that data can be linked together, becoming greater than the sum of its parts. If data is now the world’s most valuable resource, then the companies that are making record profits off that data are highly incentivized to keep accessing it and obfuscating the externalities of data sharing. In brief, data use and privacy are “economically significant.”

And yet, despite the pervasive nature of data collection, much of the public lacks a nuanced understanding of the true costs and benefits of sharing their data—for themselves and for society as a whole. People who have made billions by collecting and re-selling individual user data will continue to claim that it has little value. Still, there are legitimate reasons why data should be shared—without a clear understanding of an issue, it is impossible to address it…(More)”.

Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers


Article by Joseph Cox: “A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members. 

The project is designed to raise awareness of what is possible with this technology, and the pair are not releasing their code, AnhPhu Nguyen, one of the creators, told 404 Media. But the experiment, tested in some cases on unsuspecting people in the real world according to a demo video, still shows the razor-thin line between a world in which people can move around with relative anonymity and one where their identity and personal information can be pulled up in an instant by strangers.

Nguyen and co-creator Caine Ardayfio call the project I-XRAY. It uses a pair of Meta’s commercially available Ray-Ban smart glasses and allows a user to “just go from face to name,” Nguyen said…(More)”.

Data Privacy for Record Linkage and Beyond


Paper by Shurong Lin & Eric Kolaczyk: “In a data-driven world, record linkage and data privacy are among the most prominent research problems. Record linkage is essential for improving decision-making by integrating information about the same entities from different sources. On the other hand, data privacy research seeks to balance the need to extract accurate insights from data with the imperative to protect the privacy of the entities involved. Inevitably, data privacy issues arise in the context of record linkage. This article identifies two complementary aspects at the intersection of these two fields: (1) how to ensure privacy during record linkage and (2) how to mitigate privacy risks when releasing the analysis results after record linkage. We specifically discuss privacy-preserving record linkage, differentially private regression, and related topics…(More)”.
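To make the first of those aspects concrete, here is a minimal sketch of privacy-preserving record linkage: two parties match records on a keyed hash (HMAC) of normalized quasi-identifiers, so raw names never leave either site. The field names and key handling are hypothetical, and this is not the authors’ method; production PPRL schemes typically use more error-tolerant encodings such as Bloom filters.

```python
import hashlib
import hmac

# Hypothetical shared secret, agreed on by the data holders out of band.
SECRET_KEY = b"example-linkage-key"

QUASI_IDENTIFIERS = ("first_name", "last_name", "dob")

def normalize(record: dict) -> str:
    """Canonicalize quasi-identifiers so trivial formatting
    differences (case, stray whitespace) do not break a match."""
    return "|".join(str(record.get(f, "")).strip().lower() for f in QUASI_IDENTIFIERS)

def pseudonym(record: dict) -> str:
    """Keyed hash of the quasi-identifiers; parties exchange these
    tokens instead of the raw identifying fields."""
    return hmac.new(SECRET_KEY, normalize(record).encode(), hashlib.sha256).hexdigest()

# Each party hashes locally; only the tokens are compared.
source_a = [{"first_name": "Ada", "last_name": "Lovelace", "dob": "1815-12-10", "a_id": 1}]
source_b = [{"first_name": "ADA ", "last_name": "Lovelace", "dob": "1815-12-10", "b_id": 7}]

tokens_a = {pseudonym(r): r["a_id"] for r in source_a}
links = [(tokens_a[t], r["b_id"]) for r in source_b if (t := pseudonym(r)) in tokens_a]
print(links)  # [(1, 7)]: records linked without exchanging names in the clear
```

Because the hash is keyed, an outsider without the shared secret cannot mount a dictionary attack on the exchanged tokens; tolerating typos and nickname variants is what the Bloom-filter-style encodings add.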

The Complexities of Differential Privacy for Survey Data


Paper by Jörg Drechsler & James Bailie: “The concept of differential privacy (DP) has gained substantial attention in recent years, most notably since the U.S. Census Bureau announced the adoption of the concept for its 2020 Decennial Census. However, despite its attractive theoretical properties, implementing DP in practice remains challenging, especially when it comes to survey data. In this paper we present some results from an ongoing project funded by the U.S. Census Bureau that is exploring the possibilities and limitations of DP for survey data. Specifically, we identify five aspects that need to be considered when adopting DP in the survey context: the multi-staged nature of data production; the limited privacy amplification from complex sampling designs; the implications of survey-weighted estimates; the weighting adjustments for nonresponse and other data deficiencies; and the imputation of missing values. We summarize the project’s key findings with respect to each of these aspects and also discuss some of the challenges that still need to be addressed before DP could become the new data protection standard at statistical agencies…(More)”.
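To see why survey weights complicate DP, consider a toy Laplace mechanism for a survey-weighted total. This is a hedged sketch, not the paper’s method: it assumes values and design weights are clipped to known bounds, so one respondent’s contribution (and hence the L1 sensitivity) is at most the product of the two bounds. All names and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_weighted_total(values, weights, epsilon, value_bound, weight_bound):
    """Release a survey-weighted total via the Laplace mechanism.

    Clipping each value to [0, value_bound] and each design weight to
    [0, weight_bound] caps one respondent's contribution at
    value_bound * weight_bound, which is then the L1 sensitivity.
    """
    values = np.clip(values, 0, value_bound)
    weights = np.clip(weights, 0, weight_bound)
    sensitivity = value_bound * weight_bound
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.dot(weights, values)) + noise

# Toy survey: a binary outcome with design weights between 1 and 50.
y = rng.integers(0, 2, size=1_000)
w = rng.uniform(1, 50, size=1_000)
print(dp_weighted_total(y, w, epsilon=1.0, value_bound=1, weight_bound=50))
```

The sketch makes the paper’s point tangible: large design weights inflate sensitivity, so at a fixed epsilon a weighted estimate must absorb far more noise than its unweighted counterpart.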

Data Protection Law and Emotion


Book by Damian Clifford: “Data protection law is often positioned as a regulatory solution to the risks posed by computational systems. Despite the widespread adoption of data protection laws, however, there are those who remain sceptical as to their capacity to engender change. Much of this criticism focuses on our role as ‘data subjects’. It has been demonstrated repeatedly that we lack the capacity to act in our own best interests and, what is more, that our decisions have negative impacts on others. Our decision-making limitations seem to be the inevitable by-product of the technological, social, and economic reality. Data protection law bakes in these limitations by providing frameworks for notions such as consent and subjective control rights and by relying on those who process our data to do so fairly.

Despite these valid concerns, Data Protection Law and Emotion argues that the (in)effectiveness of these laws is often more difficult to discern than the critical literature would suggest, while also emphasizing the conceptual value of subjective control. These points are explored (and indeed, exposed) by investigating data protection law through the lens of the insights provided by law and emotion scholarship and demonstrating the role emotions play in our decision-making. The book uses the development of Emotional Artificial Intelligence, a particularly controversial technology, as a case study to analyse these issues.

Original and insightful, Data Protection Law and Emotion offers a unique contribution to a contentious debate that will appeal to students and academics in data protection and privacy, policymakers, practitioners, and regulators…(More)”.

Anonymization: The imperfect science of using data while preserving privacy


Paper by Andrea Gadotti et al: “Information about us, our actions, and our preferences is created at scale through surveys or scientific studies or as a result of our interaction with digital devices such as smartphones and fitness trackers. The ability to safely share and analyze such data is key for scientific and societal progress. Anonymization is considered by scientists and policy-makers as one of the main ways to share data while minimizing privacy risks. In this review, we offer a pragmatic perspective on the modern literature on privacy attacks and anonymization techniques. We discuss traditional de-identification techniques and their strong limitations in the age of big data. We then turn our attention to modern approaches to share anonymous aggregate data, such as data query systems, synthetic data, and differential privacy. We find that, although no perfect solution exists, applying modern techniques while auditing their guarantees against attacks is the best approach to safely use and share data today…(More)”.
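As a hedged illustration of the “query system plus differential privacy” approach the review surveys, the toy server below answers COUNT queries with Laplace noise and refuses queries once a total privacy budget is spent. It is a sketch under simplifying assumptions (basic sequential composition only, no floating-point or side-channel defenses), not a production design, and all names are invented.

```python
import numpy as np

class NoisyCountServer:
    """Toy aggregate-query interface: answers COUNT queries with
    Laplace noise and enforces a total privacy budget. Illustrative
    only; real systems need far more careful accounting."""

    def __init__(self, records, total_epsilon=1.0, seed=0):
        self.records = records
        self.budget = total_epsilon
        self.rng = np.random.default_rng(seed)

    def count(self, predicate, epsilon=0.1):
        if epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.budget -= epsilon  # basic (sequential) composition accounting
        true_count = sum(1 for r in self.records if predicate(r))
        # COUNT has L1 sensitivity 1: adding or removing one person
        # changes the answer by at most 1.
        return true_count + self.rng.laplace(scale=1.0 / epsilon)

people = [{"age": 34, "smoker": True},
          {"age": 51, "smoker": False},
          {"age": 29, "smoker": True}]
server = NoisyCountServer(people, total_epsilon=1.0)
print(server.count(lambda r: r["smoker"], epsilon=0.5))  # noisy answer near 2
```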

AI mass surveillance at Paris Olympics


Article by Anne Toomey McKenna: “The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France. It’s not just the eyes of the world that will be watching. Artificial intelligence systems will be watching, too.

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose increased security risks so significant that in recent years authorities and critics have described the Olympics as the “world’s largest security operations outside of war.”

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data-gathering tools. Its surveillance plans to meet those risks, including the controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister’s office has negotiated a classified provisional decree permitting the government to significantly ramp up traditional, surreptitious surveillance and information-gathering tools for the duration of the Games. These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data…(More)”.

Community consent: neither a ceiling nor a floor


Article by Jasmine McNealy: “The 23andMe breach and the Golden State Killer case are two of the more “flashy” cases, but questions of consent, especially the consent of all of those affected by biodata collection and analysis in more mundane or routine health and medical research projects, are just as important. The communities of people affected have expectations about their privacy and the possible impacts of inferences that could be made about them in data processing systems. Researchers must, then, acquire community consent when attempting to work with networked biodata. 

Several benefits of community consent exist, especially for marginalized and vulnerable populations. These benefits include:

  • Ensuring that information about the research project spreads throughout the community,
  • Removing potential barriers that might be created by resistance from community members,
  • Alleviating the possible concerns of individuals about the perspectives of community leaders, and 
  • Allowing the recruitment of participants using methods most salient to the community.

But community consent does not replace individual consent and limits exist for both community and individual consent. Therefore, within the context of a biorepository, understanding whether community consent might be a ceiling or a floor requires examining governance and autonomy…(More)”.