Paper by Susan Ariel Aaronson: “This paper argues that if trade policymakers truly want to achieve data free flow with trust, they must address user concerns beyond privacy. Survey data reveals that users are also anxious about online harassment, malware, censorship and disinformation. The paper focuses on three such problems, specifically, internet shutdowns, censorship and ransomware (a form of malware), each of which can distort trade and make users feel less secure online. Finally, the author concludes that trade policymakers will need to rethink how they involve the broad public in digital trade policymaking if they want digital trade agreements to facilitate trust….(More)”.
UN urges moratorium on use of AI that imperils human rights
Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.
Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.
AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.
Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”
Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).
The Secret Bias Hidden in Mortgage-Approval Algorithms
An investigation by The Markup: “…has found that lenders in 2019 were more likely to deny home loans to people of color than to White people with similar financial characteristics—even when we controlled for newly available financial factors that the mortgage industry for years has said would explain racial disparities in lending.
Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates.
In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.

The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score.
The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials; it highlighted new, devastating ones.
We found that lenders gave fewer loans to Black applicants than White applicants even when their incomes were high—$100,000 a year or more—and their debt ratios were the same. In fact, high-earning Black applicants with less debt were rejected more often than high-earning White applicants who had more debt….(More)”
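The regression The Markup describes is, in outline, reproducible: model the denial decision as a function of race while holding the financial controls constant, then read the disparities off the exponentiated coefficients (odds ratios). Below is a minimal sketch in Python, assuming a hypothetical HMDA extract; the file and column names are illustrative, and the actual study controlled for 17 factors rather than the handful shown here.

```python
# Minimal sketch of the style of analysis described above, assuming a
# hypothetical CSV extract of 2019 HMDA records. Column names ("denied",
# "race", "dti", "ltv", "income") are illustrative, not The Markup's schema.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hmda_2019_conventional_purchase.csv")

# Logistic regression: denial as a function of race, holding debt-to-income
# (dti) and loan-to-value (ltv) ratios and income constant. White applicants
# are the reference category.
res = smf.logit(
    "denied ~ C(race, Treatment(reference='White')) + dti + ltv + np.log(income)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: a value of 1.8 on the
# Black-applicant term would mean 80 percent higher odds of denial than a
# financially similar White applicant faces.
print(np.exp(res.params))
print(np.exp(res.conf_int()))
```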
Perspectives on Digital Humanism
Open Access Book edited by Hannes Werthner, Erich Prem, Edward A. Lee, and Carlo Ghezzi: “Digital Humanism is young; it has evolved from an unease about the consequences of a digitized world for human beings, into an internationally connected community that aims at developing concepts to provide a positive and constructive response. Following up on several successful workshops and a lecture series that bring together authorities of the various disciplines, this book is our latest contribution to the ongoing international discussions and developments. We have compiled a collection of 46 articles from experts with different disciplinary and institutional backgrounds, who provide their view on the interplay of human and machine.
Please note our open access publishing strategy for this book to enable widespread circulation and accessibility. This means that you can make use of the content freely, as long as you ensure appropriate referencing. At the same time, the book is also published in printed and online versions by Springer….(More)”.
Privacy Tradeoffs: Who Should Make Them, and How?
Paper by Jane R. Bambauer: “Privacy debates are contentious in part because we have not reached a broadly recognized cultural consensus about whether interests in privacy are like most other interests that can be traded off in utilitarian, cost-benefit terms, or if instead privacy is different—fundamental to conceptions of dignity and personal liberty. Thus, at the heart of privacy debates is an unresolved question: is privacy just another interest that can and should be bartered, mined, and used in the economy, or is it different?
This question identifies and isolates a wedge between those who hold essentially utilitarian views of ethics (and who would see many data practices as acceptable) and those who hold views of natural and fundamental rights (for whom common data mining practices are either never acceptable or, at the very least, never acceptable without significant participation and consent of the subject).
This essay provides an intervention of a purely descriptive sort. First, I lay out several candidates for ethical guidelines that might legitimately undergird privacy law and policy. Only one of the ethical models (the natural right to sanctuary) can track the full scope and implications of fundamental rights-based privacy laws like the GDPR.
Second, the project contributes to the field of descriptive ethics by using a vignette experiment to discover which of the various ethical models people actually do seem to hold and abide by. The vignette study uses a factorial design to help isolate the roles of various factors that may contribute to the respondents’ gauge of what an ethical firm should or should not do in the context of personal data use as well as two other non-privacy-related contexts. The results can shed light on whether privacy-related ethics are different and distinct from business ethics more generally. They also illuminate which version(s) of “good” and “bad” share broad support and deserve to be reflected in privacy law or business practice.
The results of the vignette experiment show that on balance, Americans subscribe to some form of utilitarianism, although a substantial minority subscribe to a natural right to sanctuary approach. Thus, consent and prohibitions of data practices are appropriate where the likely risks to some groups (most importantly, data subjects, but also firms and third parties) outweigh the benefits….(More)”
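For readers unfamiliar with the method, a factorial vignette design crosses every level of every factor so that each respondent can be randomly assigned one fully specified scenario, which is what lets the analysis isolate each factor's contribution. A minimal sketch of the mechanics follows; the factors and levels are invented for illustration and are not those of Bambauer's study.

```python
# Sketch of a factorial vignette design. The factors and levels are invented
# for illustration; they are not the ones used in the actual experiment.
import itertools
import random

factors = {
    "context": ["personal data use", "product safety", "worker pay"],
    "consent": ["with consent", "without consent"],
    "beneficiary": ["benefits the firm", "benefits the customer"],
    "risk": ["low risk of harm", "high risk of harm"],
}

# Full factorial: 3 x 2 x 2 x 2 = 24 distinct vignette conditions.
cells = list(itertools.product(*factors.values()))
print(len(cells), "vignette conditions")

def assign_vignette() -> dict:
    """Randomly assign one respondent to one cell of the design."""
    return dict(zip(factors, random.choice(cells)))

print(assign_vignette())
```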
Indigenous Peoples Rise Up: The Global Ascendancy of Social Media Activism
Book edited by Bronwyn Carlson and Jeff Berglund: “…illustrates the impact of social media in expanding the nature of Indigenous communities and social movements. Social media has bridged distance, time, and nation states to mobilize Indigenous peoples to build coalitions across the globe and to stand in solidarity with one another. These movements have succeeded and gained momentum and traction precisely because of the strategic use of social media. Social media—Twitter and Facebook in particular—has also served as a platform for fostering health, well-being, and resilience, recognizing Indigenous strength and talent, and sustaining and transforming cultural practices when great distances divide members of the same community.
Including a range of international Indigenous voices from the US, Canada, Australia, Aotearoa (New Zealand) and Africa, the book takes an interdisciplinary approach, bridging Indigenous studies, media studies, and social justice studies. With examples like Idle No More in Canada, the Australian Recognise! campaign, and social media campaigns to maintain the Māori language, Indigenous Peoples Rise Up serves as one of the first studies of Indigenous social media use and activism…(More)”.
Human Rights Are Not A Bug: Upgrading Governance for an Equitable Internet
Report by Niels ten Oever: “COVID-19 showed how essential the Internet is, as people around the globe searched for critical health information, kept up with loved ones and worked remotely. All of this relied on an often unseen Internet infrastructure, consisting of myriad devices, institutions, and standards that kept them connected.
But who governs the patchwork that enables this essential utility? Internet governance organizations like the Internet Engineering Task Force develop the technical foundations of the Internet. Their decisions are high stakes, and impact security, access to information, freedom of expression and other human rights. Yet they can only set voluntary norms and protocols for industry behavior, and there is no central authority to ensure that standards are implemented correctly. Further, while Internet governance bodies are open to all sectors, they are dominated by the transnational corporations that own and operate much of the infrastructure. Thus our increasingly digital daily lives are shaped by corporate interests, not the public interest….
In this comprehensive, field-setting report published with the support of the Ford Foundation, Niels ten Oever, a postdoctoral researcher in Internet infrastructure at the University of Amsterdam, unpacks the human consequences of these governance flaws, from speed and access to the security and privacy of online information. The report details how these flaws especially impact those who are already subject to surveillance or structural inequities, such as an activist texting meeting times on WhatsApp, or a low-income senior looking for a vaccine appointment….(More)”.
A framework for assessing intergenerational fairness
About: “Concerns about intergenerational fairness have steadily climbed up the priority ladder over the past decade. The 2020 OECD report on Governance for Youth, Trust and Intergenerational Justice outlines the intergenerational issues underlying many of today’s most urgent political debates, and we believe these questions will only intensify in coming years.
Ensuring effective long-term decision-making is hard. It requires leaders and decision-makers across the public, private and civil society sectors to be incentivised, and all citizens to be empowered to have a say about the future. Doing this will require change in our culture, behaviours, processes and systems….
The School of International Futures and the Calouste Gulbenkian Foundation have created a methodology to assess whether a decision is fair to different generations, now and in the future.
It can be applied by national and local governments, independent institutions, international organisations, foundations, businesses and special interest groups to evaluate the impact of decisions on present and future generations.
The policy assessment methodology is freely available under a Creative Commons license for non-commercial use….
Our work on the Framework for Assessing Intergenerational Fairness and the Intergenerational Fairness Observatory are practical first steps to creating this change….(More)”.
What Should Happen to Our Data When We Die?
Adrienne Matei at the New York Times: “The new Anthony Bourdain documentary, “Roadrunner,” is one of many projects dedicated to the larger-than-life chef, writer and television personality. But the film has drawn outsize attention, in part because of its subtle reliance on artificial intelligence technology.
Using several hours of Mr. Bourdain’s voice recordings, a software company created 45 seconds of new audio for the documentary. The A.I. voice sounds just like Mr. Bourdain speaking from the great beyond; at one point in the movie, it reads an email he sent before his death by suicide in 2018.
“If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Morgan Neville, the director, said in an interview with The New Yorker. “We can have a documentary-ethics panel about it later.”
The time for that panel may be now. The dead are being digitally resurrected with growing frequency: as 2-D projections, 3-D holograms, C.G.I. renderings and A.I. chat bots….(More)”.
The Inevitable Weaponization of App Data Is Here
Joseph Cox at VICE: “…After years of warning from researchers, journalists, and even governments, someone used highly sensitive location data from a smartphone app to track and publicly harass a specific person. In this case, Catholic Substack publication The Pillar said it used location data ultimately tied to Grindr to trace the movements of a priest, and then outed him publicly as potentially gay without his consent. The Washington Post reported on Tuesday that the outing led to his resignation….
The data itself didn’t contain each mobile phone user’s real name, but The Pillar and its partner were able to pinpoint which device belonged to Burrill by observing one that appeared at the USCCB staff residence and headquarters, at the locations of meetings he attended, and at his family lake house and an apartment that lists him as a resident. In other words, they managed to do what experts have long said is easy to do: unmask a specific person and their movements across time from a supposedly anonymous dataset.
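What the excerpt describes is, in effect, an intersection attack: each location publicly tied to the target filters the pool of pseudonymous advertising IDs, and a few anchors are usually enough to leave a single device. A conceptual sketch on invented data follows; the IDs, coordinates, and crude bounding-box match are all hypothetical, and this is not The Pillar's actual tooling.

```python
# Conceptual sketch of the intersection attack described above, on invented
# data. Real ad-tech feeds have millions of rows, timestamps, and proper
# geodesy; the IDs, coordinates, and matching radius here are hypothetical.
import pandas as pd

pings = pd.DataFrame({
    "ad_id": ["a1", "a1", "a1", "b2", "b2", "c3"],
    "lat":   [38.933, 38.934, 43.100, 38.933, 40.700, 43.100],
    "lon":   [-77.072, -77.071, -89.500, -77.072, -74.000, -89.500],
})

# Places publicly associated with the target (residence, workplace, etc.).
anchors = [
    (38.933, -77.072),  # staff residence
    (38.934, -77.071),  # headquarters
    (43.100, -89.500),  # family lake house
]
RADIUS = 0.001  # ~100 m; a crude lat/lon box stands in for real distance math

def seen_near(lat: float, lon: float) -> set:
    """Ad IDs with at least one ping inside the box around this point."""
    near = (pings.lat.sub(lat).abs() < RADIUS) & (pings.lon.sub(lon).abs() < RADIUS)
    return set(pings.loc[near, "ad_id"])

# Each anchor shrinks the candidate set; a device seen at every anchor
# almost certainly belongs to the target.
candidates = set(pings["ad_id"])
for lat, lon in anchors:
    candidates &= seen_near(lat, lon)
print(candidates)  # -> {'a1'}
```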
A Grindr spokesperson told Motherboard in an emailed statement that “Grindr’s response is aligned with the editorial story published by the Washington Post which describes the original blog post from The Pillar as homophobic and full of unsubstantiated innuendo. The alleged activities listed in that unattributed blog post are infeasible from a technical standpoint and incredibly unlikely to occur. There is absolutely no evidence supporting the allegations of improper data collection or usage related to the Grindr app as purported.”…
“The research from The Pillar aligns to the reality that Grindr has historically treated user data with almost no care or concern, and dozens of potential ad tech vendors could have ingested the data that led to the doxxing,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “No one should be doxxed and outed for adult consenting relationships, but Grindr never treated their own users with the respect they deserve, and the Grindr app has shared user data to dozens of ad tech and analytics vendors for years.”…(More)”.