Perspectives on Digital Humanism


Open Access Book edited by Hannes Werthner, Erich Prem, Edward A. Lee, and Carlo Ghezzi: “Digital Humanism is young; it has evolved from an unease about the consequences of a digitized world for human beings into an internationally connected community that aims to develop concepts that provide a positive and constructive response. Following up on several successful workshops and a lecture series that brought together authorities from various disciplines, this book is our latest contribution to the ongoing international discussions and developments. We have compiled a collection of 46 articles from experts with different disciplinary and institutional backgrounds, who share their views on the interplay of human and machine.

Please note our open access publishing strategy for this book to enable widespread circulation and accessibility. This means that you can make use of the content freely, as long as you ensure appropriate referencing. At the same time, the book is also published in printed and online versions by Springer….(More)”.

Privacy Tradeoffs: Who Should Make Them, and How?


Paper by Jane R. Bambauer: “Privacy debates are contentious in part because we have not reached a broadly recognized cultural consensus about whether interests in privacy are like most other interests that can be traded off in utilitarian, cost-benefit terms, or if instead privacy is different—fundamental to conceptions of dignity and personal liberty. Thus, at the heart of privacy debates is an unresolved question: is privacy just another interest that can and should be bartered, mined, and used in the economy, or is it different?

This question identifies and isolates a wedge between those who hold essentially utilitarian views of ethics (and who would see many data practices as acceptable) and those who hold views of natural and fundamental rights (for whom common data mining practices are either never acceptable or, at the very least, never acceptable without significant participation and consent of the subject).

This essay provides an intervention of a purely descriptive sort. First, I lay out several candidates for ethical guidelines that might legitimately undergird privacy law and policy. Only one of the ethical models (the natural right to sanctuary) can track the full scope and implications of fundamental rights-based privacy laws like the GDPR.

Second, the project contributes to the field of descriptive ethics by using a vignette experiment to discover which of the various ethical models people actually seem to hold and abide by. The vignette study uses a factorial design to isolate the roles of the various factors that may contribute to respondents’ judgments of what an ethical firm should or should not do in the context of personal data use, as well as in two other non-privacy-related contexts. The results can shed light on whether privacy-related ethics are distinct from business ethics more generally. They also illuminate which version(s) of “good” and “bad” command broad support and deserve to be reflected in privacy law or business practice.
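
The factorial logic is worth making concrete. As a purely illustrative sketch, assuming hypothetical factor names and levels (the excerpt does not specify the study’s actual ones), a full factorial vignette design crosses every level of every factor so that each factor’s contribution can be isolated:

```python
import itertools
import random

# Hypothetical factors and levels for illustration only; the paper's
# actual vignette factors are not specified in this excerpt.
FACTORS = {
    "context": ["personal data use", "hiring", "product safety"],
    "harm_likelihood": ["low", "high"],
    "consent_obtained": ["yes", "no"],
    "main_beneficiary": ["data subject", "firm", "third parties"],
}

# Full factorial design: one vignette condition per combination of levels.
CONDITIONS = [dict(zip(FACTORS, combo))
              for combo in itertools.product(*FACTORS.values())]

def assign_condition(respondent_id: int) -> dict:
    """Deterministically randomize a respondent into one condition."""
    return random.Random(respondent_id).choice(CONDITIONS)

print(len(CONDITIONS), "conditions")  # 3 * 2 * 2 * 3 = 36
print(assign_condition(42))
```

Because every combination appears, differences in respondents’ ethical judgments can be attributed to individual factors (say, consent) rather than to the scenario as a whole.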

The results of the vignette experiment show that, on balance, Americans subscribe to some form of utilitarianism, although a substantial minority subscribe to a “natural right to sanctuary” approach. Thus, consent requirements and prohibitions on data practices are appropriate where the likely risks to some groups (most importantly data subjects, but also firms and third parties) outweigh the benefits….(More)”

Indigenous Peoples Rise Up: The Global Ascendency of Social Media Activism


Book edited by Bronwyn Carlson and Jeff Berglund: “…illustrates the impact of social media in expanding the nature of Indigenous communities and social movements. Social media has bridged distance, time, and nation states to mobilize Indigenous peoples to build coalitions across the globe and to stand in solidarity with one another. These movements have succeeded and gained momentum and traction precisely because of the strategic use of social media. Social media—Twitter and Facebook in particular—has also served as a platform for fostering health, well-being, and resilience, recognizing Indigenous strength and talent, and sustaining and transforming cultural practices when great distances divide members of the same community.
 
Including a range of international Indigenous voices from the US, Canada, Australia, Aotearoa (New Zealand) and Africa, the book takes an interdisciplinary approach, bridging Indigenous studies, media studies, and social justice studies. With examples like Idle No More in Canada, the Australian Recognise! campaign, and social media campaigns to maintain the Māori language, Indigenous Peoples Rise Up serves as one of the first studies of Indigenous social media use and activism…(More)”.

Human Rights Are Not A Bug: Upgrading Governance for an Equitable Internet


Report by Niels ten Oever: “COVID-19 showed how essential the Internet is, as people around the globe searched for critical health information, kept up with loved ones and worked remotely. All of this relied on an often unseen Internet infrastructure, consisting of myriad devices, institutions, and standards that kept them connected.

But who governs the patchwork that enables this essential utility? Internet governance organizations like the Internet Engineering Task Force develop the technical foundations of the Internet. Their decisions are high stakes and impact security, access to information, freedom of expression, and other human rights. Yet they can only set voluntary norms and protocols for industry behavior, and there is no central authority to ensure that standards are implemented correctly. Further, while Internet governance bodies are open to all sectors, they are dominated by the transnational corporations that own and operate much of the infrastructure. Thus our increasingly digital daily lives are shaped by corporate interests rather than the public interest….

In this comprehensive, field-setting report published with the support of the Ford Foundation, Niels ten Oever, a postdoctoral researcher in Internet infrastructure at the University of Amsterdam, unpacks these governance flaws and examines their human consequences, from speed and access to the security and privacy of online information. The report details how these flaws especially impact those who are already subject to surveillance or structural inequities, such as an activist texting meeting times on WhatsApp, or a low-income senior looking for a vaccine appointment….(More)”.

A framework for assessing intergenerational fairness


About: “Concerns about intergenerational fairness have steadily climbed the priority ladder over the past decade. The 2020 OECD report Governance for Youth, Trust and Intergenerational Justice outlines the intergenerational issues underlying many of today’s most urgent political debates, and we believe these questions will only intensify in the coming years.

Ensuring effective long-term decision-making is hard. It requires leaders and decision-makers across the public, private and civil society sectors to be incentivised, and all citizens to be empowered, to have a say about the future. Doing this will require change in our culture, behaviours, processes and systems….

The School of International Futures and the Calouste Gulbenkian Foundation have created a methodology to assess whether a decision is fair to different generations, now and in the future.

It can be applied by national and local governments, independent institutions, international organisations, foundations, businesses and special interest groups to evaluate the impact of decisions on present and future generations.

The policy assessment methodology is freely available under a Creative Commons license for non-commercial use….

Our work on the Framework for Assessing Intergenerational Fairness and the Intergenerational Fairness Observatory are practical first steps to creating this change….(More)“.

What Should Happen to Our Data When We Die?


Adrienne Matei at the New York Times: “The new Anthony Bourdain documentary, “Roadrunner,” is one of many projects dedicated to the larger-than-life chef, writer and television personality. But the film has drawn outsize attention, in part because of its subtle reliance on artificial intelligence technology.

Using several hours of Mr. Bourdain’s voice recordings, a software company created 45 seconds of new audio for the documentary. The A.I. voice sounds just like Mr. Bourdain speaking from the great beyond; at one point in the movie, it reads an email he sent before his death by suicide in 2018.

“If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Morgan Neville, the director, said in an interview with The New Yorker. “We can have a documentary-ethics panel about it later.”

The time for that panel may be now. The dead are being digitally resurrected with growing frequency: as 2-D projections, 3-D holograms, C.G.I. renderings and A.I. chat bots….(More)”.

The Inevitable Weaponization of App Data Is Here


Joseph Cox at VICE: “…After years of warning from researchers, journalists, and even governments, someone used highly sensitive location data from a smartphone app to track and publicly harass a specific person. In this case, Catholic Substack publication The Pillar said it used location data ultimately tied to Grindr to trace the movements of a priest, and then outed him publicly as potentially gay without his consent. The Washington Post reported on Tuesday that the outing led to his resignation….

The data itself didn’t contain each mobile phone user’s real name, but The Pillar and its partner were able to pinpoint which device belonged to Burrill by observing one that appeared at the USCCB staff residence and headquarters, at the locations of meetings that he was in, as well as at his family lake house and an apartment that lists him as a resident. In other words, they managed to do what experts have long said is easy to do: unmask this specific person and his movements across time within a supposedly anonymous dataset.
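
In principle, this kind of unmasking is just a set intersection over a geocoded dataset. Here is a minimal sketch of the technique, using entirely hypothetical records, coordinates, and field names (not The Pillar’s actual data or method):

```python
from datetime import datetime

# Hypothetical rows standing in for a commercial location dataset: the
# advertising ID is "anonymous", but coordinates and timestamps are real.
pings = [
    {"ad_id": "a1b2", "lat": 38.9339, "lon": -76.9894, "ts": datetime(2020, 5, 1, 8, 0)},
    {"ad_id": "a1b2", "lat": 44.7606, "lon": -89.5550, "ts": datetime(2020, 7, 4, 12, 0)},
    {"ad_id": "x9y8", "lat": 38.9339, "lon": -76.9894, "ts": datetime(2020, 5, 1, 9, 0)},
    # ...a real dataset would hold millions more rows...
]

# Places publicly known to be associated with the target.
known_places = [
    {"name": "staff residence", "lat": 38.9339, "lon": -76.9894},
    {"name": "lake house",      "lat": 44.7606, "lon": -89.5550},
]

def near(ping, place, tol=0.001):
    """Crude proximity test: roughly 100 m, expressed in degrees."""
    return (abs(ping["lat"] - place["lat"]) < tol
            and abs(ping["lon"] - place["lon"]) < tol)

# Intersect the device IDs seen at every known place; with enough distinct
# places, the "anonymous" dataset usually narrows to a single device.
candidates = None
for place in known_places:
    ids_here = {p["ad_id"] for p in pings if near(p, place)}
    candidates = ids_here if candidates is None else candidates & ids_here

print(candidates)  # -> {'a1b2'}: one pseudonymous ID, i.e. one person
```

Each additional known location shrinks the candidate set, which is why researchers have long warned that location traces are effectively impossible to anonymize.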

A Grindr spokesperson told Motherboard in an emailed statement that “Grindr’s response is aligned with the editorial story published by the Washington Post which describes the original blog post from The Pillar as homophobic and full of unsubstantiated innuendo. The alleged activities listed in that unattributed blog post are infeasible from a technical standpoint and incredibly unlikely to occur. There is absolutely no evidence supporting the allegations of improper data collection or usage related to the Grindr app as purported.”…

“The research from The Pillar aligns to the reality that Grindr has historically treated user data with almost no care or concern, and dozens of potential ad tech vendors could have ingested the data that led to the doxxing,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “No one should be doxxed and outed for adult consenting relationships, but Grindr never treated their own users with the respect they deserve, and the Grindr app has shared user data to dozens of ad tech and analytics vendors for years.”…(More)”.

Government algorithms are out of control and ruin lives



Nani Jansen Reventlow at Open Democracy: “Government services are increasingly being automated, and technology is relied on more and more to make crucial decisions about our lives and livelihoods. This includes decisions about what type of support we can access in times of need: welfare benefits and other government services.

Technology has the potential to not only reproduce but amplify structural inequalities in our societies. If you combine this drive for automation with a broader context of criminalising poverty and systemic racism, this can have disastrous effects.

A recent example is the ‘child benefits scandal’ that brought down the Dutch government at the start of 2021. In the Netherlands, working parents are eligible for a government contribution toward the costs of daycare. This can run up to 90% of the actual costs for those on a low income. While contributions are often paid directly to childcare providers, parents are responsible for them. This means that, if the tax authorities determine that an allowance was wrongfully paid out, the parents are liable for repaying it.

To detect cases of fraud, the Dutch tax authorities used a system that was outright discriminatory. An investigation by the Dutch Data Protection Authority last year showed that parents were singled out for special scrutiny because of their ethnic origin or dual nationality.  “The whole system was organised in a discriminatory manner and was also used as such,” it stated.

The fallout of these ‘fraud detection’ efforts was enormous. It is currently estimated that 46,000 parents were wrongly accused of having fraudulently claimed childcare allowances. Families were forced to repay tens of thousands of euros, leading to financial hardship and the loss of livelihoods, homes and, in one case, even a life – one parent died by suicide. While we can still hope that justice for these families won’t be denied, it will certainly be delayed: this weekend, it became clear that it could take up to ten years to handle all claims – an unacceptable timeline, given how precarious the situation will be for many of those affected….(More)”.

Luxury Surveillance


Essay by Chris Gilliard and David Golumbia: “One of the most troubling features of the digital revolution is that some people pay to subject themselves to surveillance that others are forced to endure and would, if anything, pay to be free of.

Consider a GPS tracker you can wear around one of your arms or legs. Make it sleek and cool — think the Apple Watch or FitBit —  and some will pay hundreds or even thousands of dollars for the privilege of wearing it. Make it bulky and obtrusive, and others, as a condition of release from jail or prison, being on probation, or awaiting an immigration hearing, will be forced to wear one — and forced to pay for it too.

In each case, the device collects intimate and detailed biometric information about its wearer and uploads that data to servers, communities, and repositories. To the providers of the devices, this data and the subsequent processing of it are the main reasons the devices exist. They are means of extraction: That data enables further study, prediction, and control of human beings and populations. While some providers certainly profit from the sale of devices, this secondary market for behavioral control and prediction is where the real money is — the heart of what Shoshana Zuboff rightly calls surveillance capitalism.

The formerly incarcerated person knows that their ankle monitor exists for that purpose: to predict and control their behavior. But the Apple Watch wearer likely thinks about it little, if at all — despite the fact that the watch has the potential to collect and analyze much more data about its user (e.g., health metrics like blood pressure, blood glucose levels, ECG data) than parole or probation officers are even allowed to gather about their “clients” without a specific warrant. Fitness-tracker wearers are effectively putting themselves on parole and paying for the privilege.

Both the Apple Watch and the FitBit can be understood as examples of luxury surveillance: surveillance that people pay for and whose tracking, monitoring, and quantification features are understood by the user as benefits they are likely to celebrate. Google, which has recently acquired FitBit, is seemingly leaning into the category, launching a more expensive version of the device named the “Luxe.” Only certain people can afford luxury surveillance, but that is not necessarily a matter of money: In general terms, consumers of luxury surveillance see themselves as powerful and sovereign, and perhaps even immune from unwelcome monitoring and control. They see self-quantification and tracking not as disciplinary or coercive, but as a kind of care or empowerment. They understand it as something extra, something “smart.”…(More)”.

Google launches new search tool to help combat food insecurity


Article by Andrew J. Hawkins: “Google announced a new website designed to be a “one-stop shop” for people experiencing food insecurity. The “Find Food Support” site includes a food locator tool powered by Google Maps, which people can use to search for their nearest food bank, food pantry, or school lunch program pickup site in their community.
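
Under the hood, a locator like this amounts to a nearest-neighbour search over a geocoded directory. A minimal sketch of that idea with toy data (the production tool is built on Google Maps, not on an index like this, and all names below are invented):

```python
import math

# Toy stand-in for an aggregated directory of free-food sites.
SITES = [
    {"name": "Downtown Food Pantry", "lat": 40.7128, "lon": -74.0060, "kind": "food pantry"},
    {"name": "Northside Food Bank",  "lat": 40.7831, "lon": -73.9712, "kind": "food bank"},
    {"name": "PS 42 Lunch Pickup",   "lat": 40.7180, "lon": -73.9857, "kind": "school lunch"},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_sites(lat, lon, k=3):
    """Return the k sites closest to the user's coordinates."""
    return sorted(SITES, key=lambda s: haversine_km(lat, lon, s["lat"], s["lon"]))[:k]

for site in nearest_sites(40.7300, -73.9950):
    print(site["name"], "-", site["kind"])
```

The hard part of such a service is not the distance math but the data: keeping tens of thousands of entries accurate and current, which is why the partnerships described below matter.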

Google is working with non-profit groups like No Kid Hungry and FoodFinder, as well as the US Department of Agriculture, to aggregate 90,000 locations with free food support across all 50 states — with more locations to come.

The new site is a product of Google’s newly formed Food for Good team, formerly known as Project Delta when it was headquartered at Alphabet’s X moonshot division. Project Delta’s mission is to “create a smarter food system,” which includes standardizing data to improve communication between food distributors to curb food waste….(More)”.