Using Collaborative Crowdsourcing to Give Voice to Diverse Communities


Dennis Di Lorenzo at Campus Technology: “Universities face many critical challenges — student retention, campus safety, curriculum development priorities, alumni engagement and fundraising, and inclusion of diverse populations. In my role as dean of the New York University School of Professional Studies (NYUSPS) for the past four years, and in my prior 20 years of employment in senior-level positions within the school and at NYU, I have become intimately familiar with the complexities and the nuances of such multifaceted challenges.

For the past two years, one of our top priorities at NYUSPS has been striving to address sensitive issues regarding diversity and inclusion….

To identify and address the issues we saw arising from the shifting dynamics in our classrooms, my team initially set about gathering feedback from NYUSPS faculty members and students through roundtable discussions. Though many individuals participated, we sensed that some were anxious and unwilling to share their experiences fully. We were able to initiate some productive conversations; however, we found they weren’t getting to the heart of the matter. To provide a sense of anonymity that would allow members of the NYUSPS community to express their concerns more freely, we identified a collaboration tool called POPin and used it to conduct a series of crowdsourcing campaigns that began with faculty members and then extended to students.

Fostering Vital Conversations

Using POPin’s online discussion tool, we were able to scale an intimate and sensitive conversation up to include more than 4,500 students and 2,100 faculty members from a wide variety of countries, cultural and religious backgrounds, gender and sexual identities, economic classes and life stages. Because the tool’s feedback mechanism is both anonymous and interactive, the scope and quality of the conversations increased dramatically….(More)”.

Data Violence and How Bad Engineering Choices Can Damage Society


Blog by Anna Lauren Hoffmann: “…In 2015, a black developer in New York discovered that Google’s algorithmic photo recognition software had tagged pictures of him and his friends as gorillas.

The same year, Facebook auto-suspended Native Americans for using their real names, and in 2016, facial recognition was found to struggle to read black faces.

Software in airport body scanners has flagged transgender bodies as threats for years. In 2017, Google Translate took gender-neutral pronouns in Turkish and converted them to gendered pronouns in English — with startlingly biased results.

“Violence” might seem like a dramatic way to talk about these accidents of engineering and the processes of gathering data and using algorithms to interpret it. Yet just like physical violence in the real world, this kind of “data violence” (a term inspired by Dean Spade’s concept of administrative violence) occurs as the result of choices that implicitly and explicitly lead to harmful or even fatal outcomes.

Those choices are built on assumptions and prejudices about people, intimately weaving them into processes and results that reinforce biases and, worse, make them seem natural or given.

Take the experience of being a woman and having to constantly push back against rigid stereotypes and aggressive objectification.

Writer and novelist Kate Zambreno describes these biases as “ghosts,” a violent haunting of our true reality. “A return to these old roles that we play, that we didn’t even originate. All the ghosts of the past. Ghosts that aren’t even our ghosts.”

Structural bias, Zambreno writes, is reinforced by the stereotypes fed to us in novels, films, and a pervasive cultural narrative, and it shapes the lives of real women every day. This extends to the data and automated systems that now mediate our lives as well. Our viewing and shopping habits, our health and fitness tracking, and our financial information all conspire to create a “data double” of ourselves, produced about us by third parties and standing in for us on data-driven systems and platforms.

These fabrications don’t emerge de novo, disconnected from history or social context. Rather, they often pick up and unwittingly spit out a tangled mess of historical conditions and current realities.

Search engines are a prime example of how data and algorithms can conspire to amplify racist and sexist biases. The academic Safiya Umoja Noble threw these messy entanglements into sharp relief in her book Algorithms of Oppression. Google Search, she explains, has a history of offering up pages of porn for women from particular racial or ethnic groups, and especially black women. Google has also served up ads for criminal background checks alongside search results for African American–sounding names, as former Federal Trade Commission CTO Latanya Sweeney discovered.

“These search engine results for women whose identities are already maligned in the media, such as Black women and girls, only further debase and erode efforts for social, political, and economic recognition and justice,” Noble says.

These kinds of cultural harms go well beyond search results. Sociologist Rena Bivens has shown how the gender categories employed by platforms like Facebook can inflict symbolic violence against transgender and nonbinary users in ways that may never be made obvious to them….(More)”.

Gender is personal – not computational


Foad Hamidi, Morgan Scheuerman and Stacy Branham in the Conversation: “Efforts at automatic gender recognition – using algorithms to guess a person’s gender based on images, video or audio – raise significant social and ethical concerns that are not yet fully explored. Most current research on automatic gender recognition technologies focuses instead on technological details.

Our recent research found that people with diverse gender identities, including those identifying as transgender or gender nonbinary, are particularly concerned that these systems could miscategorize them. People who express their gender differently from stereotypical male and female norms already experience discrimination and harm as a result of being miscategorized or misunderstood. Ideally, technology designers should develop systems to make these problems less common, not more so.

As digital technologies become more powerful and sophisticated, their designers are trying to use them to identify and categorize complex human characteristics, such as sexual orientation, gender and ethnicity. The idea is that with enough training on abundant user data, algorithms can learn to analyze people’s appearance and behavior – and perhaps one day characterize people as well as, or even better than, other humans do.

Gender is a hard topic for people to handle. It’s a complex concept with important roles both as a cultural construct and a core aspect of an individual’s identity. Researchers, scholars and activists are increasingly revealing the diverse, fluid and multifaceted aspects of gender. In the process, they find that ignoring this diversity can lead to both harmful experiences and social injustice. For example, according to the 2016 National Transgender Survey, 47 percent of transgender participants stated that they had experienced some form of discrimination at their workplace due to their gender identity. More than half of transgender people who were harassed, assaulted or expelled because of their gender identity had attempted suicide….(More)”.

Introducing Sourcelist: Promoting diversity in technology policy


Susan Hennessey at Brookings: “…delighted to announce the launch of Sourcelist, a database of experts in technology policy from diverse backgrounds.

Here at Brookings, we built Sourcelist on the principle that technology policymaking stands to benefit from the inclusion of the voices of a broader diversity of people. It aims to help journalists, conference planners, and others to identify and connect with experts outside of their usual sources and panelists. Sourcelist’s purpose is to facilitate more diverse representation by leveraging technology to create a user-friendly resource for people whose decisions can make a difference. We hope that Sourcelist will take away the excuse that diverse experts couldn’t be found to comment on a story or participate on a panel.

Our first database is devoted to Women+. Countless organizations now recognize the institutional barriers that women and underrepresented gender identities face in tech policy. Sourcelist is a resource for those hoping to put recognition into practice.

I want to take the opportunity to personally thank the incredible team at Objectively that took an idea and turned it into the remarkable resource we’re launching today….(More)”.

The global identification challenge: Who are the 1 billion people without proof of identity?


Vyjayanti Desai at the World Bank: “…Using a combination of self-reported figures from country authorities, birth registration, and other proxy data, the 2018 ID4D Global Dataset suggests that as many as 1 billion people struggle to prove who they are. The data also revealed that of the 1 billion people without an official proof of identity:

  • 81% live in Sub-Saharan Africa and South Asia, indicating the need to scale up efforts in these regions;
  • 47% are below the national ID age of their country, highlighting the importance of strengthening birth registration efforts and creating a unique, lifetime identity;
  • 63% live in lower-middle-income economies, while 28% live in low-income economies, reinforcing that lack of identification is a critical concern for the global poor….

In addition, to further strengthen understanding of who the undocumented are and the barriers they face, ID4D partnered with the 2017 Global Findex to gather, for the first time this year, nationally representative survey data from 99 countries on foundational ID coverage, use, and barriers to access. Early findings suggest that residents of low-income countries, particularly women and the poorest 40%, are the most affected by a lack of ID. The survey data (albeit limited in coverage to people aged 15 and older) confirm that the coverage gap is largest in low-income countries (LICs), where 38% of the surveyed population does not have a foundational ID. Regionally, Sub-Saharan Africa shows the largest coverage gap: close to one in three people in surveyed countries lack a foundational ID.

Although global gender gaps in foundational ID coverage are relatively small, there is a large gender gap among the unregistered population in low-income countries, where over 45% of women lack a foundational ID, compared to 30% of men. The countries with the greatest gender gaps in foundational ID coverage also tend to be those with legal barriers to women’s access to identity documents….(More)”.

Using Data to Inform the Science of Broadening Participation


Donna K. Ginther at the American Behavioral Scientist: “In this article, I describe how data and econometric methods can be used to study the science of broadening participation. I start by showing that theory can be used to structure the approach to using data to investigate gender and race/ethnicity differences in career outcomes. I also illustrate this process by examining whether women of color who apply for National Institutes of Health research funding are confronted with a double bind, where race and gender compound their disadvantage relative to Whites. Although high-quality data are needed for understanding the barriers to broadening participation in science careers, data alone cannot fully explain why women and underrepresented minorities are less likely to become scientists or tend to have less productive science careers. As researchers, it is important to use all forms of data—quantitative, experimental, and qualitative—to deepen our understanding of the barriers to broadening participation….(More)”.

From Texts to Tweets to Satellites: The Power of Big Data to Fill Gender Data Gaps


… at the UN Foundation Blog: “Twitter posts, credit card purchases, phone calls, and satellites are all part of our day-to-day digital landscape.

Detailed data, known broadly as “big data” because of the massive amounts of passively collected and high-frequency information that such interactions generate, are produced every time we use one of these technologies. These digital traces have great potential and have already developed a track record for application in global development and humanitarian response.

Data2X has focused particularly on what big data can tell us about the lives of women and girls in resource-poor settings. Our research, released today in a new report, Big Data and the Well-Being of Women and Girls, demonstrates how four big data sources can be harnessed to fill gender data gaps and inform policy aimed at mitigating global gender inequality. Big data can complement traditional surveys and other data sources, offering a glimpse into dimensions of girls’ and women’s lives that have otherwise been overlooked and providing a level of precision and timeliness that policymakers need to make actionable decisions.

Here are three findings from our report that underscore the power and potential offered by big data to fill gender data gaps:

  1. Social media data can improve understanding of the mental health of girls and women.

Mental health conditions, from anxiety to depression, are thought to be significant contributors to the global burden of disease, particularly for young women, though precise data on mental health are sparse in most countries. However, research by the Georgia Institute of Technology, commissioned by Data2X, finds that social media provides an accurate barometer of mental health status….
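
To make the underlying technique concrete: studies in this vein typically train a text classifier on posts from users whose mental-health status is known, then use its scores as a population-level signal. Below is a minimal, purely illustrative sketch in Python with scikit-learn; the tiny corpus, labels, and model choice are invented for exposition and are not the Georgia Tech team’s data or method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example posts; a real study would use thousands of posts from
# users with self-reported or clinically assessed mental-health status.
posts = [
    "feeling hopeless and exhausted again",
    "can't sleep, everything feels pointless",
    "great run this morning, feeling strong",
    "lovely dinner with friends tonight",
]
labels = [1, 1, 0, 0]  # 1 = language associated with distress, 0 = baseline

# TF-IDF word/bigram features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Probability that a new post resembles the distress-associated language.
print(model.predict_proba(["so tired and hopeless lately"])[0, 1])
```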

  2. Cell phone and credit card records can illustrate women’s economic and social patterns – and track impacts of shocks in the economy.

Our spending priorities and social habits often indicate economic status, and these activities can also expose economic disparities between women and men.

By compiling cell phone and credit card records, our research partners at MIT traced patterns of women’s expenditures, spending priorities, and physical mobility. The research found that women have less mobility diversity than men, live farther from city centers, and report less total expenditure per capita….
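
The report’s exact metric is not spelled out in this excerpt, but “mobility diversity” is commonly operationalized as the Shannon entropy of a person’s visits across locations, such as cell towers. Here is a minimal sketch under that assumption, with hypothetical call-detail records; the field names and values are invented:

```python
import math
from collections import Counter

# Hypothetical call-detail records: (user_id, gender, cell_tower_id), one row
# per observed event. Real CDR data would have millions of rows.
records = [
    ("u1", "F", "tower_a"), ("u1", "F", "tower_a"), ("u1", "F", "tower_b"),
    ("u2", "M", "tower_a"), ("u2", "M", "tower_b"),
    ("u2", "M", "tower_c"), ("u2", "M", "tower_d"),
]

def mobility_diversity(visited_towers):
    """Shannon entropy of one user's visit distribution over locations.

    Higher entropy means visits are spread over more places more evenly,
    i.e. greater mobility diversity."""
    counts = Counter(visited_towers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Collect each user's visits and gender, then average entropy by gender.
visits, gender_of = {}, {}
for user, gender, tower in records:
    visits.setdefault(user, []).append(tower)
    gender_of[user] = gender

scores_by_gender = {}
for user, towers in visits.items():
    scores_by_gender.setdefault(gender_of[user], []).append(mobility_diversity(towers))

for gender, scores in sorted(scores_by_gender.items()):
    print(gender, round(sum(scores) / len(scores), 3))
```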

  3. Satellite imagery can map rivers and roads, but it can also measure gender inequality.

Satellite imagery has the power to capture high-resolution, real-time data on everything from natural landscape features, like vegetation and river flows, to human infrastructure, like roads and schools. Research by our partners at the Flowminder Foundation finds that it is also able to measure gender inequality….(More)”.

Who Maps the World?


Sarah Holder at CityLab: “For most of human history, maps have been very exclusive,” said Marie Price, the first woman president of the American Geographical Society, appointed 165 years into its 167-year history. “Only a few people got to make maps, and they were carefully guarded, and they were not participatory.” That’s slowly changing, she said, thanks to democratizing projects like OpenStreetMap (OSM)….

But despite OSM’s democratic aims, and despite the long (albeit mostly hidden) history of lady cartographers, the OSM volunteer community is still composed overwhelmingly of men. A comprehensive statistical breakdown of gender equity in the OSM space has not yet been conducted, but Rachel Levine, a GIS operations and training coordinator with the American Red Cross, said experts estimate that only 2 to 5 percent of OSMers are women. The professional field of cartography is also male-dominated, as is the smaller subset of GIS professionals. The numbers of mappers of color and of LGBTQ and gender-nonconforming mappers are likely similarly small, but those statistics have gone largely unexamined….

When it comes to increasing access to health services, safety, and education—things women in many developing countries disproportionately lack—equitable cartographic representation matters. It’s the people who make the map who shape what shows up. On OSM, buildings aren’t just identified as buildings; they’re “tagged” with specifics according to mappers’ and editors’ preferences. “If two to five percent of our mappers are women, that means only a subset of that get[s] to decide what tags are important, and what tags get our attention,” said Levine.

Sports arenas? Lots of those. Strip clubs? Cities contain multitudes. Bars? More than one could possibly comprehend.

Meanwhile, childcare centers, health clinics, abortion clinics, and specialty clinics that deal with women’s health are vastly underrepresented. In 2011, the OSM community rejected an appeal to add the “childcare” tag at all. It was finally approved in 2013, and in the time since, it’s been used more than 12,000 times.

Doctors have been tagged more than 80,000 times, while healthcare facilities that specialize in abortion have been tagged only 10 times; gynecology, nearly 1,500; midwife, 233; fertility clinics, none. Only one building has been tagged as a domestic violence facility, and 15 as gender-based violence facilities. That’s not because these facilities don’t exist—it’s because the men mapping them don’t know they do, or don’t care enough to notice.
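
Counts like these can be checked against the live OSM database through the Overpass API. The sketch below is a rough illustration in Python using the requests library; the tags queried follow common OSM tagging conventions, and the endpoint is one of several public community servers, so exact numbers will differ from the 2018 figures above.

```python
import requests

# One of several public Overpass API servers; rate limits apply.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def count_tag(key, value):
    """Count OSM nodes, ways, and relations tagged key=value worldwide."""
    query = f"""
    [out:json][timeout:180];
    ( node["{key}"="{value}"];
      way["{key}"="{value}"];
      relation["{key}"="{value}"]; );
    out count;
    """
    response = requests.post(OVERPASS_URL, data={"data": query})
    response.raise_for_status()
    # 'out count' returns a single element whose tags hold the totals.
    return int(response.json()["elements"][0]["tags"]["total"])

for key, value in [("amenity", "childcare"),
                   ("healthcare", "midwife"),
                   ("healthcare:speciality", "abortion")]:
    print(f"{key}={value}: {count_tag(key, value)}")
```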

So much of the importance of mapping is about navigating the world safely. For women, especially women in less developed countries, that safety is harder to secure. “If we tag something as a public toilet, does that mean it has facilities for women? Does it mean the facilities are safe?” asked Levine. “When we’re tagging specifically, ‘This is a female toilet,’ that means somebody has gone in and said, ‘This is accessible to me.’ When women aren’t doing the tagging, we just get the toilet tag.”

“Women’s geography,” Price tells her students, is made up of more than bridges and tunnels. It’s shaped by asking things like: Where on the map do you feel safe? How would you walk from A to B in the city without having to look over your shoulder? It’s hard to map these intangibles—but not impossible….(More).

Empowerment tool for women maps cases of harassment


Springwise: “We have previously written about innovations that promote inclusion and equal rights, such as edible pie charts that highlight gender inequality. Another example is a predictive text app that finds alternative words for gendered language. Now NINA, created in Brazil, is an app that empowers women to report violence that occurs in public spaces. The project was shared on Red Bull Amaphiko, a platform for social entrepreneurs to share their work and stories.

A 2016 survey released by ActionAid and conducted by YouGov found that 86 percent of Brazilian women had been victims of harassment in public spaces. Responding to these statistics, Simony César created NINA two years ago to help tackle gender-based violence. The app collects data in real time, mapping locations where cases of harassment have taken place. The app was launched and tested on public transport, reaching 76,000 users per day across 17 bus lines serving the Federal University of Pernambuco (UFPE).

César states, “The premise of NINA is to empower women through an application that denounces the types of violence they suffer within public spaces.” The app combats violence against women by making cases of harassment locatable on a city map. NINA can use this data to identify which bus lines have the highest rates of harassment, record the times when cases most commonly occur, and store photographic records and short videos of harassers.
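
NINA’s internal data model has not been published, but the aggregation described here is straightforward once each report carries a bus line, a timestamp, and a location. A sketch with invented fields and data, using pandas:

```python
import pandas as pd

# Hypothetical harassment reports; NINA's actual schema is not public.
reports = pd.DataFrame({
    "bus_line": ["042", "042", "117", "042", "117", "205"],
    "timestamp": pd.to_datetime([
        "2018-03-01 08:10", "2018-03-01 18:40", "2018-03-02 08:05",
        "2018-03-02 18:55", "2018-03-03 07:50", "2018-03-03 22:15",
    ]),
    "lat": [-8.05, -8.06, -8.04, -8.05, -8.03, -8.07],
    "lon": [-34.95, -34.94, -34.96, -34.95, -34.97, -34.93],
})

# Bus lines ranked by number of reported incidents.
print(reports["bus_line"].value_counts())

# Hours of day when reports cluster (e.g. rush hour).
print(reports["timestamp"].dt.hour.value_counts().sort_index())
```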

Another survey by ActionAid, in March 2018, revealed that 64 percent of Brazilian women surveyed had been victims of sexual harassment. These results demonstrate the continuing need for empowerment tools such as NINA. The exposure of women to violence in public city spaces is a global issue, and as a result, accessibility within cities is unequal based on gender….(More)”.

Psychographics: the behavioural analysis that helped Cambridge Analytica know voters’ minds


Michael Wade at The Conversation: “Much of the discussion has been on how Cambridge Analytica was able to obtain data on more than 50m Facebook users – and how it allegedly failed to delete this data when told to do so. But there is also the matter of what Cambridge Analytica actually did with the data. In fact, the data-crunching company’s approach represents a step change in how analytics can today be used as a tool to generate insights – and to exert influence.

For example, pollsters have long used segmentation to target particular groups of voters, such as by categorising audiences by gender, age, income, education and family size. Segments can also be created around political affiliation or purchase preferences. The data analytics machine that presidential candidate Hillary Clinton used in her 2016 campaign – named Ada after the 19th-century mathematician and early computing pioneer Ada Lovelace – used state-of-the-art segmentation techniques to target groups of eligible voters in the same way that Barack Obama had done four years previously.

Cambridge Analytica was contracted to the Trump campaign and provided an entirely new weapon for the election machine. While it used demographic segments to identify groups of voters, as Clinton’s campaign had, Cambridge Analytica also segmented using psychographics. Where definitions of class, education, employment, age and so on make demographics informational, psychographics are behavioural – a means to segment by personality.

This makes a lot of sense. It’s obvious that two people with the same demographic profile (for example, white, middle-aged, employed, married men) can have markedly different personalities and opinions. We also know that adapting a message to a person’s personality – whether they are open, introverted, argumentative, and so on – goes a long way toward getting that message across….

There have traditionally been two routes to ascertaining someone’s personality. You can either get to know them really well, usually over an extended time, or you can have them take a personality test and ask them to share the results with you. Neither of these methods is realistically open to pollsters. Cambridge Analytica found a third way, with the assistance of two University of Cambridge academics.

The first, Aleksandr Kogan, sold them access to 270,000 personality tests completed by Facebook users through an online app he had created for research purposes. Providing the data to Cambridge Analytica was, it seems, against Facebook’s internal code of conduct, but only now, in March 2018, has Kogan been banned from the platform. Kogan’s data also came with a bonus: he had reportedly collected Facebook data from the test-takers’ friends – and, at an average of 200 friends per person, that added up to some 50m people.

However, these 50m people had not all taken personality tests. This is where the second Cambridge academic, Michal Kosinski, came in. Kosinski – who is said to believe that micro-targeting based on online data could strengthen democracy – had figured out a way to reverse engineer a personality profile from Facebook activity such as likes. Whether you choose to like pictures of sunsets, puppies or people apparently says a lot about your personality. So much, in fact, that on the basis of 300 likes, Kosinski’s model is able to predict someone’s personality profile with the same accuracy as a spouse….(More)”
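
The modeling approach behind that result, described in Kosinski, Stillwell and Graepel’s 2013 PNAS paper, represents each user as a binary vector over the pages they liked, compresses that matrix with singular value decomposition, and fits a linear model to a known trait score. A runnable sketch on synthetic data, assuming scikit-learn; the component count and the way the trait is constructed are illustrative only:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 500

# Synthetic user x page "like" matrix: 1 if the user liked the page.
likes = (rng.random((n_users, n_pages)) < 0.05).astype(float)

# Synthetic ground-truth trait (e.g. openness) linearly related to likes.
trait = likes @ rng.normal(size=n_pages) + rng.normal(scale=0.5, size=n_users)

# Compress the like matrix with SVD, then fit a linear model to the trait.
model = make_pipeline(
    TruncatedSVD(n_components=50, random_state=0),
    LinearRegression(),
)
train, test = slice(0, 800), slice(800, None)
model.fit(likes[train], trait[train])
print("R^2 on held-out users:", round(model.score(likes[test], trait[test]), 3))
```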