For Queer Communities, Being Counted Has Downsides


Article by Kevin Guyan: “Next March, for the first time, Scotland’s census will ask all residents 16 and over to share information about their sexual orientation and whether they identify as trans. These new questions, whose addition follows similar developments in other parts of the United Kingdom and Malta, invite people to “come out” on their census return. Proposals to add more questions about gender, sex, and sexuality to national censuses are at various stages of discussion in countries outside of Europe, including New Zealand, Canada, Australia, and the United States.

The idea of being counted in a census feels good. Perhaps it’s my passion for data, but I feel recognized when I tick the response option “gay” in a survey that previously pretended I did not exist or was not important enough to count. If you identify with descriptors less commonly listed in drop-down boxes, seeing yourself reflected in a survey can change how you relate to wider communities that go beyond individual experiences. It therefore makes sense that many bottom-up queer rights groups and top-down government agencies frame the counting of queer communities in a positive light and position expanded data collection as a step toward greater inclusion.

There is great historical significance in increased visibility for many queer communities. But an over-focus on the benefits of being counted distracts from the potential harms for queer communities that come with participation in data collection activities….

The limits of inclusion became apparent to me as I observed the design process for Scotland’s 2022 census. While researching my book Queer Data, I sat through committee meetings at the Scottish Parliament, digested lengthy reports, submitted evidence, and participated in stakeholder engagement sessions. As many months of disagreement over how to count and who to count progressed, it grew more and more obvious that the design of a census is never exclusively about the collection of accurate data.

I grew ambivalent about what “being counted” actually meant for queer communities and concerned that the expansion of the census to include some queer people further erased those who did not match the government’s narrow understanding of gender, sex, and sexuality. Most notably, Scotland’s 2022 census does not count nonbinary people, who are required to identify their sex as either male or female. In another example, trans-exclusionary campaign groups requested that the census remove the “other” write-in box and limit response options for sexual orientation to “gay or lesbian,” “bisexual,” and “straight/heterosexual.” Reproducing the idea that sexual orientation is based on a fixed, binary notion of sex and restricting the question to just three options would effectively delete those who identify as queer, pansexual, asexual, and other sexualities from the count. Although the final version of the sexual orientation question includes an “other” write-in box for sexuality, collecting data about the lives of some queer people can push those who fall outside these expectations further into the shadows…(More)”.

Are Your Data Visualizations Racist?


Article by Alice Feng & Jonathan Schwabish: “Through rigorous, data-based analysis, researchers and analysts can add to our understanding of societal shortcomings and point toward evidence-based solutions. But carelessly collecting and communicating data can lead to analyses and visualizations that have an outsized capacity to mislead, misrepresent, and harm communities already experiencing inequity and discrimination.

To unlock the full potential of data, researchers and analysts must consider and apply equity at every step of the research process. Ensuring responsible data collection, representing the communities surveyed accurately, and incorporating community input whenever possible will lead to more equitable data analyses and visualizations. Although there is no one-size-fits-all approach to working with data, for researchers to truly do no harm, they must build their work on a foundation of empathy.

In our recent report, Do No Harm Guide: Applying Equity Awareness in Data Visualization, we focus on how data practitioners can approach their work through a lens of diversity, equity, and inclusion. To create this report, we conducted more than a dozen interviews with nearly 20 people who work with data to hear how they approach inclusivity. In those interviews, we heard time and time again that demonstrating empathy for the people and communities you are focusing on and communicating with should be the guiding light for those working with data. Journalist Kim Bui succinctly captured how researchers and analysts can apply empathy, saying: “If I were one of the data points on this visualization, would I feel offended?”…(More)”.

The Birth of Digital Human Rights


Book by Rebekah Dowd on “Digitized Data Governance as a Human Rights Issue in the EU”: “…This book considers contested responsibilities between the public and private sectors over the use of online data, detailing exactly how digital human rights evolved in specific European states and gradually became a part of the European Union framework of legal protections. The author uniquely examines why and how European lawmakers linked digital data protection to fundamental human rights, something heretofore not explained in other works on general data governance and data privacy. In particular, this work examines the utilization of national and European Union institutional arrangements as a location for activism by legal and academic consultants and by first-mover states who legislated digital human rights beginning in the 1970s. By tracing the way that EU Member States and non-state actors utilized the structure of EU bodies to create the new norm of digital human rights, the book shows how the scope of human rights protections expanded within multiple dimensions of European political space. The project will be informative to scholar, student, and layperson, as it examines a new and evolving area of technology governance – the human rights of digital data use by the public and private sectors….(More)”.

‘Is it OK to …’: the bot that gives you an instant moral judgment


Article by Poppy Noor: “Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That’s why the same ethical questions constantly resurface in TV, films and literature.

But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that’s been fed more than 1.7m examples of people’s ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible.

Anyone can use Delphi. Users just put a question to the bot on its website, and see what it comes up with.

The AI is fed a vast number of scenarios – including ones from the popular Am I The Asshole sub-Reddit, where Reddit users post dilemmas from their personal lives and get an audience to judge who the asshole in the situation was.

Then, people are recruited from Mechanical Turk – a marketplace where researchers find paid participants for studies – to say whether they agree with the AI’s answers. Each answer is put to three arbiters, with the majority or average conclusion used to decide right from wrong. The process is selective – participants have to score well on a test to qualify to be a moral arbiter, and the researchers don’t recruit people who show signs of racism or sexism.

The arbiters agree with the bot’s ethical judgments 92% of the time (although that could say as much about their ethics as it does the bot’s)…(More)”.
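The arbitration step described above (each answer reviewed by three paid arbiters, with the majority view treated as the verdict) is, at its core, a simple voting rule. Below is a minimal sketch of that aggregation step, purely illustrative; the labels and function name are assumptions, not the Delphi team’s actual pipeline.

```python
from collections import Counter

def majority_verdict(judgments):
    """Return the most common label among arbiter judgments.

    `judgments` is a list of labels such as "agree" / "disagree".
    With three arbiters and two labels a strict majority always exists;
    with more labels or an even panel, a real pipeline would need an
    explicit tie-breaking rule.
    """
    counts = Counter(judgments)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical example: three arbiters review one of the bot's answers.
print(majority_verdict(["agree", "agree", "disagree"]))  # -> "agree"
```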

22 Questions to Assess Responsible Data for Children (RD4C)


An Audit Tool by The GovLab and UNICEF: “Around the world and across domains, institutions are using data to improve service delivery for children. Data for and about children can, however, pose risks of misuse, such as unauthorized access or data breaches, as well as missed use of data that could have improved children’s lives if harnessed effectively. 

The RD4C Principles — Participatory; Professionally Accountable; People-Centric; Prevention of Harms Across the Data Life Cycle; Proportional; Protective of Children’s Rights; and Purpose-Driven — were developed by The GovLab and UNICEF to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence. The principles act as a north star, guiding practitioners toward more responsible data practices.

Today, The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to launch a new tool that aims to put the principles into practice. 22 Questions to Assess Responsible Data for Children (RD4C) is an audit tool to help stakeholders involved in the administration of data systems that handle data for and about children align their practices with the RD4C Principles. 

The tool encourages users to reflect on their data handling practices and strategy by posing questions regarding: 

  • Why: the purpose and rationale for the data system;
  • What: the data handled through the system; 
  • Who: the stakeholders involved in the system’s use, including data subjects;
  • How: the presence of operations, policies, and procedures; and 
  • When and where: temporal and place-based considerations….(More)”.
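For teams that want to track their own answers to an assessment like this programmatically, the five question categories map naturally onto a small checklist structure. The sketch below is hypothetical: the prompts paraphrase the category descriptions above and are not the tool’s actual 22 questions.

```python
# Hypothetical checklist structure; the prompts paraphrase the five
# categories above and are not the RD4C tool's actual 22 questions.
AUDIT_CATEGORIES = {
    "Why": "What is the purpose and rationale for the data system?",
    "What": "What data is handled through the system?",
    "Who": "Which stakeholders, including data subjects, are involved in the system's use?",
    "How": "What operations, policies, and procedures are in place?",
    "When and where": "What temporal and place-based considerations apply?",
}

def unanswered(responses):
    """Return the categories that still lack a written response."""
    return [category for category in AUDIT_CATEGORIES if not responses.get(category)]

# Example: an assessment that has only documented its purpose so far.
print(unanswered({"Why": "Improve delivery of school health services for children."}))
```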

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at a level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives….(More)”.

Can data die?


Article by Jennifer Ding: “…To me, the crux of the Lenna story is how little power we have over our data and how it is used and abused. This threat seems disproportionately high for women, who are often overrepresented in internet content but underrepresented in internet company leadership and decision making. Given this reality, engineering and product decisions will continue to consciously (and unconsciously) exclude our needs and concerns.

While social norms are shifting against non-consensual data collection and data exploitation, digital norms seem to be moving in the opposite direction. Advancements in machine learning algorithms and data storage capabilities are only making data misuse easier. Whether the outcome is revenge porn or targeted ads, surveillance or discriminatory AI, if we want a world where our data can retire when it’s outlived its time, or when it’s directly harming our lives, we must create the tools and policies that empower data subjects to have a say in what happens to their data… including allowing their data to die…(More)”

International Network on Digital Self Determination


About: “Data is changing how we live and engage with and within our societies and our economies. As our digital footprints grow, how do we re-imagine ourselves in the digital world? How will we be able to determine the data-driven decisions that impact us?

Digital self-determination offers a unique way of understanding where we (can) live in the digital space – how we manage our social media environments, our interaction with Artificial Intelligence (AI) and other technologies, how we access and operate our personal data, and the ways in which we can have a say about mass data sharing.

Through this network, we aim to study and design ways to engage in trustworthy data spaces and ensure human centric approaches. We recognize an urgent need to ensure people’s digital self-determination so that ‘humans in the loop’ is not just a catch-phrase but a lived experience both at the individual and societal level….(More)”.

Nonprofit Websites Are Riddled With Ad Trackers


Article by Alfred Ng and Maddy Varner: “Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data.

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits.

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties.

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.”

Skibinski did not dispute that the organization shares data with third parties, including data brokers.

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results.

Planned Parenthood is not alone when it comes to nonprofits, some operating in sensitive areas like mental health and addiction, gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking.

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature.

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues…(More)”.
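The Markup’s methodology turns on detecting third-party cookies and tracking network requests triggered when a page loads. As a rough illustration of the simplest form of such a check, the standard-library sketch below lists externally hosted scripts whose domain differs from the page’s own; it is not The Markup’s Blacklight tool, which drives a real browser and can therefore also observe cookies, tracking pixels, and session recorders that a static scan like this would miss.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptSrcParser(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.script_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.script_srcs.append(src)

def third_party_script_hosts(page_url):
    """Return hosts of externally loaded scripts that differ from the page's own host."""
    page_host = urlparse(page_url).netloc
    html = urlopen(page_url, timeout=10).read().decode("utf-8", errors="replace")
    parser = ScriptSrcParser()
    parser.feed(html)
    hosts = set()
    for src in parser.script_srcs:
        host = urlparse(src).netloc
        if host and host != page_host:  # relative (first-party) srcs have no host
            hosts.add(host)
    return sorted(hosts)

if __name__ == "__main__":
    # Hypothetical usage; the URL is a placeholder, not one of the sites scanned above.
    for host in third_party_script_hosts("https://example.org"):
        print(host)
```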

Data Science for Social Good: Philanthropy and Social Impact in a Complex World


Book edited by Ciro Cattuto and Massimo Lapucci: “This book is a collection of insights by thought leaders at first-mover organizations in the emerging field of “Data Science for Social Good”. It examines the application of knowledge from computer science, complex systems, and computational social science to challenges such as humanitarian response, public health, and sustainable development. The book provides an overview of scientific approaches to social impact – identifying a social need, targeting an intervention, measuring impact – and the complementary perspective of funders and philanthropies pushing forward this new sector.

TABLE OF CONTENTS


Introduction; By Massimo Lapucci

The Value of Data and Data Collaboratives for Good: A Roadmap for Philanthropies to Facilitate Systems Change Through Data; By Stefaan G. Verhulst

UN Global Pulse: A UN Innovation Initiative with a Multiplier Effect; By Dr. Paula Hidalgo-Sanchis

Building the Field of Data for Good; By Claudia Juech

When Philanthropy Meets Data Science: A Framework for Governance to Achieve Data-Driven Decision-Making for Public Good; By Nuria Oliver

Data for Good: Unlocking Privately-Held Data to the Benefit of the Many; By Alberto Alemanno

Building a Funding Data Ecosystem: Grantmaking in the UK; By Rachel Rank

A Reflection on the Role of Data for Health: COVID-19 and Beyond; By Stefan E. Germann and Ursula Jasper….(More)”