Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it


Article by Tahira Reid and James Gibert: “People interact with machines in countless ways every day. In some cases, they actively control a device, like driving a car or using an app on a smartphone. Sometimes people passively interact with a device, like being imaged by an MRI machine. And sometimes they interact with machines without consent or even knowing about the interaction, like being scanned by a law enforcement facial recognition system.

Human-Machine Interaction (HMI) is an umbrella term that describes the ways people interact with machines. HMI is a key aspect of researching, designing and building new technologies, and also studying how people use and are affected by technologies.

Researchers, especially those traditionally trained in engineering, are increasingly taking a human-centered approach when developing systems and devices. This means striving to make technology that works as expected for the people who will use it by taking into account what’s known about the people and by testing the technology with them. But even as engineering researchers increasingly prioritize these considerations, some in the field have a blind spot: diversity.

As an interdisciplinary researcher who thinks holistically about engineering and design and an expert in dynamics and smart materials with interests in policy, we have examined the lack of inclusion in technology design, the negative consequences and possible solutions….

It is possible to use a homogeneous sample of people in publishing a research paper that adds to a field’s body of knowledge. And some researchers who conduct studies this way acknowledge the limitations of homogeneous study populations. However, when it comes to developing systems that rely on algorithms, such oversights can cause real-world problems. Algorithms are only as good as the data used to build them.

Algorithms are often based on mathematical models that capture patterns and then inform a computer about those patterns to perform a given task. Imagine an algorithm designed to detect when colors appear on a clear surface. If the set of images used to train that algorithm consists of mostly shades of red, the algorithm might not detect when a shade of blue or yellow is present…(More)”.
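
To make that failure mode concrete, here is a minimal sketch — our illustration, not code from the article — of a toy detector that learns what “color” looks like only from red training pixels. Every name and value below is hypothetical.

```python
# A toy "color detector" that learns from its training pixels only.
# Because the training set contains nothing but shades of red, the
# detector treats anything that does not resemble red as "no color".

def centroid(points):
    """Component-wise mean of a list of RGB triples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data: almost entirely shades of red.
training_pixels = [(200, 20, 20), (230, 40, 30), (180, 10, 25), (250, 60, 50)]

color_model = centroid(training_pixels)
# Accept pixels no farther from the centroid than the worst training example.
radius = max(distance(p, color_model) for p in training_pixels)

def detects_color(pixel):
    """Report 'color present' only if the pixel resembles the training data."""
    return distance(pixel, color_model) <= radius

print(detects_color((220, 30, 35)))   # red    -> True  (seen in training)
print(detects_color((30, 30, 220)))   # blue   -> False (missed entirely)
print(detects_color((240, 230, 40)))  # yellow -> False (missed entirely)
```

The model is not “wrong” about red; it simply never saw anything else — the same mechanism by which a homogeneous study population becomes an algorithmic blind spot.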

Why Privacy Matters


Book by Neil Richards: “Many people tell us that privacy is dead, or that it is dying, but such talk is a dangerous fallacy. This book explains what privacy is, what privacy isn’t, and why privacy matters. Privacy is the extent to which human information is known or used, and it is fundamentally about the social power that human information provides over other people. The best way to ensure that power is checked and channeled in ways that benefit humans and their society is through rules—rules about human information. And because human information rules of some sort are inevitable, we should craft our privacy rules to promote human values. The book suggests three such values that our human information rules should promote: identity, freedom, and protection. Identity allows us to be thinking, self-defining humans; freedom lets us be citizens; while protection safeguards our roles as situated consumers and workers, allowing us, as members of society, to trust and rely on other people so that we can live our lives and hopefully build a better future together…(More)”.

‘Sharing Is Caring’: Creative Commons, Transformative Culture, and Moral Rights Protection


Paper by Alexandra Giannopoulou: “The practice of sharing works free from traditional legal reservations aims to mark both ideological and systemic distance from the exclusive proprietary regime of copyright. The positive involvement of the public in creativity acts is a defining feature of transformative culture in the digital sphere, which encourages creative collaborations between several people, without any limitation in space or time. Moral rights regimes are antithetical to these practices. This chapter will explore the moral rights challenges emerging from transformative culture. We will take the example of Creative Commons licenses and their interaction with internationally recognized moral rights. We conclude that the chilling effects of this legal uncertainty linked to moral rights enforcement could hurt copyright as a whole, but that moral rights can still constitute a strong defence mechanism against modern risks related to digital transformative creativity…(More)”.

Legal study on Government access to data in third countries


Report commissioned by the European Data Protection Board: “The present report is part of a study analysing the implications for the work of the European Union (EU)/European Economic Area (EEA) data protection supervisory authorities (SAs) in relation to transfers of personal data to third countries after the Court of Justice of the European Union (CJEU) judgment C-311/18 on Data Protection Commissioner v. Facebook Ireland Ltd, Maximilian Schrems (Schrems II). Data controllers and processors may transfer personal data to third countries or international organisations only if the controller or processor has provided appropriate safeguards, and on the condition that enforceable data subject rights and effective legal remedies for data subjects are available.

Whereas it is the primary responsibility of data exporters and data importers to assess that the legislation of the country of destination enables the data importer to comply with any of the appropriate safeguards, SAs will play a key role when issuing further decisions on transfers to third countries. Hence, this report provides the European Data Protection Board (EDPB) and the SAs in the EEA/EU with information on the legislation and practice in China, India and Russia on their governments’ access to personal data processed by economic operators. The report contains an overview of the relevant information in order for the SAs to assess whether and to what extent legislation and practices in the abovementioned countries imply massive and/or indiscriminate access to personal data processed by economic operators…(More)”.

The Tech That Comes Next


Book by Amy Sample Ward and Afua Bruce: “Who is part of technology development, who funds that development, and how we put technology to use all influence the outcomes that are possible. To change those outcomes, we must – all of us – shift our relationship to technology, how we use it, build it, fund it, and more. In The Tech That Comes Next, Amy Sample Ward and Afua Bruce – two leaders in equitable design and use of new technologies – invite you to join them in asking big questions and making change from wherever you are today.

This book connects ideas and conversations across sectors from artificial intelligence to data collection, community-centered design to collaborative funding, and social media to digital divides. Technology and equity are inextricably connected, and The Tech That Comes Next helps you accelerate change for the better…(More)”.

For Queer Communities, Being Counted Has Downsides


Article by Kevin Guyan: “Next March, for the first time, Scotland’s census will ask all residents 16 and over to share information about their sexual orientation and whether they identify as trans. These new questions, whose addition follows similar developments in other parts of the United Kingdom and Malta, invite people to “come out” on their census return. Proposals to add more questions about gender, sex, and sexuality to national censuses are at various stages of discussion in countries outside of Europe, including New Zealand, Canada, Australia, and the United States.

The idea of being counted in a census feels good. Perhaps it’s my passion for data, but I feel recognized when I tick the response option “gay” in a survey that previously pretended I did not exist or was not important enough to count. If you identify with descriptors less commonly listed in drop-down boxes, seeing yourself reflected in a survey can change how you relate to wider communities that go beyond individual experiences. It therefore makes sense that many bottom-up queer rights groups and top-down government agencies frame the counting of queer communities in a positive light and position expanded data collection as a step toward greater inclusion.

There is great historical significance in increased visibility for many queer communities. But an over-focus on the benefits of being counted distracts from the potential harms for queer communities that come with participation in data collection activities….

The limits of inclusion became apparent to me as I observed the design process for Scotland’s 2022 census. While researching my book Queer Data, I sat through committee meetings at the Scottish Parliament, digested lengthy reports, submitted evidence, and participated in stakeholder engagement sessions. As many months of disagreement over how to count and who to count progressed, it grew more and more obvious that the design of a census is never exclusively about the collection of accurate data.

I grew ambivalent about what “being counted” actually meant for queer communities and concerned that the expansion of the census to include some queer people further erased those who did not match the government’s narrow understanding of gender, sex, and sexuality. Most notably, Scotland’s 2022 census does not count nonbinary people, who are required to identify their sex as either male or female. In another example, trans-exclusionary campaign groups requested that the census remove the “other” write-in box and limit response options for sexual orientation to “gay or lesbian,” “bisexual,” and “straight/heterosexual.” Reproducing the idea that sexual orientation is based on a fixed, binary notion of sex and restricting the question to just three options would effectively delete those who identify as queer, pansexual, asexual, and other sexualities from the count. Although the final version of the sexual orientation question includes an “other” write-in box for sexuality, collecting data about the lives of some queer people can push those who fall outside these expectations further into the shadows…(More)”.

Are Your Data Visualizations Racist?


Article by Alice Feng & Jonathan Schwabish: “Through rigorous, data-based analysis, researchers and analysts can add to our understanding of societal shortcomings and point toward evidence-based solutions. But carelessly collecting and communicating data can lead to analyses and visualizations that have an outsized capacity to mislead, misrepresent, and harm communities already experiencing inequity and discrimination.

To unlock the full potential of data, researchers and analysts must consider and apply equity at every step of the research process. Ensuring responsible data collection, representing the communities surveyed accurately, and incorporating community input whenever possible will lead to more equitable data analyses and visualizations. Although there is no one-size-fits-all approach to working with data, for researchers to truly do no harm, they must build their work on a foundation of empathy.

In our recent report, Do No Harm Guide: Applying Equity Awareness in Data Visualization, we focus on how data practitioners can approach their work through a lens of diversity, equity, and inclusion. To create this report, we conducted more than a dozen interviews with nearly 20 people who work with data to hear how they approach inclusivity. In those interviews, we heard time and time again that demonstrating empathy for the people and communities you are focusing on and communicating with should be the guiding light for those working with data. Journalist Kim Bui succinctly captured how researchers and analysts can apply empathy, saying: “If I were one of the data points on this visualization, would I feel offended?”…(More)”.

The Birth of Digital Human Rights


Book by Rebekah Dowd on “Digitized Data Governance as a Human Rights Issue in the EU”: “…This book considers contested responsibilities between the public and private sectors over the use of online data, detailing exactly how digital human rights evolved in specific European states and gradually became a part of the European Union framework of legal protections. The author uniquely examines why and how European lawmakers linked digital data protection to fundamental human rights, something heretofore not explained in other works on general data governance and data privacy. In particular, this work examines the utilization of national and European Union institutional arrangements as a location for activism by legal and academic consultants and by first-mover states who legislated digital human rights beginning in the 1970s. By tracing the way that EU Member States and non-state actors utilized the structure of EU bodies to create the new norm of digital human rights, readers will learn about the process of expanding the scope of human rights protections within multiple dimensions of European political space. The project will be informative to scholars, students, and laypeople, as it examines a new and evolving area of technology governance – the human rights of digital data use by the public and private sectors…(More)”.

‘Is it OK to …’: the bot that gives you an instant moral judgment


Article by Poppy Noor: “Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That’s why the same ethical questions constantly resurface in TV, films and literature.

But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that’s been fed more than 1.7m examples of people’s ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible.

Anyone can use Delphi. Users just put a question to the bot on its website, and see what it comes up with.

The AI is fed a vast number of scenarios – including ones from the popular Am I The Asshole subreddit, where Reddit users post dilemmas from their personal lives and get an audience to judge who the asshole in the situation was.

Then, people are recruited from Mechanical Turk – a marketplace where researchers find paid participants for studies – to say whether they agree with the AI’s answers. Each answer is put to three arbiters, with the majority or average conclusion used to decide right from wrong. The process is selective – participants have to score well on a test to qualify as a moral arbiter, and the researchers don’t recruit people who show signs of racism or sexism.
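
As a rough sketch of that aggregation step — our illustration; the function name and labels are hypothetical, not from the Delphi project — majority voting over three arbiter judgments looks like this:

```python
# Hypothetical sketch: aggregate three arbiter judgments by majority vote.
from collections import Counter

def aggregate_judgments(labels):
    """Return the most common label among arbiter judgments.

    With three arbiters and two competing labels, a strict majority
    always exists; otherwise the plurality label is returned.
    """
    counts = Counter(labels)
    label, _ = counts.most_common(1)[0]
    return label

# Three arbiters judge the bot's answer to one scenario.
print(aggregate_judgments(["it's wrong", "it's wrong", "it's okay"]))
# -> "it's wrong"
```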

The arbiters agree with the bot’s ethical judgments 92% of the time (although that could say as much about their ethics as it does about the bot’s)…(More)”.

22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by The GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. The principles act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.