Sharing Private Data for Public Good


Stefaan G. Verhulst at Project Syndicate: “After Hurricane Katrina struck New Orleans in 2005, the direct-mail marketing company Valassis shared its database with emergency agencies and volunteers to help improve aid delivery. In Santiago, Chile, analysts from Universidad del Desarrollo, ISI Foundation, UNICEF, and the GovLab collaborated with Telefónica, the city’s largest mobile operator, to study gender-based mobility patterns in order to design a more equitable transportation policy. And as part of the Yale University Open Data Access project, health-care companies Johnson & Johnson, Medtronic, and SI-BONE give researchers access to previously walled-off data from 333 clinical trials, opening the door to possible new innovations in medicine.

These are just three examples of “data collaboratives,” an emerging form of partnership in which participants exchange data for the public good. Such tie-ups typically involve public bodies using data from corporations and other private-sector entities to benefit society. But data collaboratives can help companies, too – pharmaceutical firms share data on biomarkers to accelerate their own drug-research efforts, for example. Data-sharing initiatives also have huge potential to improve artificial intelligence (AI). But they must be designed responsibly and take data-privacy concerns into account.

Understanding the societal and business case for data collaboratives, as well as the forms they can take, is critical to gaining a deeper appreciation of the potential and limitations of such ventures. The GovLab has identified over 150 data collaboratives spanning continents and sectors; they include companies such as Air France, Zillow, and Facebook. Our research suggests that such partnerships can create value in three main ways….(More)”.

Journalism Initiative Crowdsources Feedback on Failed Foreign Aid Projects


Abigail Higgins at SSIR: “It isn’t unusual that a girl raped in northeastern Kenya would be ignored by law enforcement. But for Mary, whose name has been changed to protect her identity, it should have been different—NGOs had established a hotline to report sexual violence just a few years earlier to help girls like her get justice. Even though the hotline was backed by major aid institutions like Mercy Corps and the British government, calls to it regularly went unanswered.

“That was the story that really affected me. It touched me in terms of how aid failures could impact someone,” says Anthony Langat, a Nairobi-based reporter who investigated the hotline as part of a citizen journalism initiative called What Went Wrong? that examines failed foreign aid projects.

Over six months in 2018, What Went Wrong? collected 142 reports of failed aid projects in Kenya, each submitted over the phone or via social media by the very people the project was supposed to benefit. It’s a move intended to help upend the way foreign aid is disbursed and debated. Although aid organizations spend significant time evaluating whether or not aid works, beneficiaries are often excluded from that process.

“There’s a serious power imbalance,” says Peter DiCampo, the photojournalist behind the initiative. “The people receiving foreign aid generally do not have much say. They don’t get to choose which intervention they want, which one would feel most beneficial for them. Our goal is to help these conversations happen … to put power into the hands of the people receiving foreign aid.”

What Went Wrong? documented eight failed projects in an investigative series published by Devex in March. In Kibera, one of Kenya’s largest slums, public restrooms meant to improve sanitation failed to connect to water and sewage infrastructure and were later repurposed as churches. In another story, the World Bank and local thugs struggled for control over the slum’s electrical grid….(More)”

Invisible Women: Exposing Data Bias in a World Designed for Men


Book by Caroline Criado Perez: “Imagine a world where your phone is too big for your hand, where your doctor prescribes a drug that is wrong for your body, where in a car accident you are 47% more likely to be seriously injured, where every week the countless hours of work you do are not recognised or valued. If any of this sounds familiar, chances are that you’re a woman.

Invisible Women shows us how, in a world largely built for and by men, we are systematically ignoring half the population. It exposes the gender data gap – a gap in our knowledge that is at the root of perpetual, systemic discrimination against women, and that has created a pervasive but invisible bias with a profound effect on women’s lives.

Award-winning campaigner and writer Caroline Criado Perez brings together for the first time an impressive range of case studies, stories and new research from across the world that illustrate the hidden ways in which women are forgotten, and the impact this has on their health and well-being. From government policy and medical research, to technology, workplaces, urban planning and the media, Invisible Women reveals the biased data that excludes women. In making the case for change, this powerful and provocative book will make you see the world anew….(More)”

Using Artificial Intelligence to Promote Diversity


Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review:  “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.

But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?

AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
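
To make that concrete, the snippet below is a minimal sketch of the simplest version of this idea (sometimes called "fairness through unawareness"): withholding protected attributes from the training data. The dataset, column names, and model choice are assumptions for illustration, and dropping columns alone is rarely enough in practice, because remaining features can still act as proxies.

```python
# A hedged sketch, not a production fairness pipeline: train a model without
# access to protected attributes. All file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

PROTECTED = ["race", "gender", "sexual_orientation"]   # columns withheld from the model

applicants = pd.read_csv("applicants.csv")             # hypothetical hiring dataset
X = applicants.drop(columns=PROTECTED + ["hired"])     # features the model is allowed to see
y = applicants["hired"]                                 # historical outcome label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Caveat: withholding columns is only a first step; features such as zip code
# or school attended can act as proxies and need a separate fairness audit.
```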

Design for Inclusion

Software development remains the province of males — only about one-quarter of computer scientists in the United States are women — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.

Crowdsourced data informs women which streets are safe


Springwise: “Safe & the City is a free app designed to help users identify which streets are safe for them. Sexual harassment and violent crimes against women in particular are a big problem in many urban environments. This app uses crowdsourced data and crime statistics to help female pedestrians stay safe.

It is a development of traditional navigation apps, but instead of simply providing the fastest route, it also offers information on the safest one. The Live Map relies on user data: victims can report harassment or assault in the app, and the information then becomes available to other users to warn them of a potential threat in the area. Incidents can be ranked from a feeling of discomfort or threat, to verbal harassment, to physical assault. Whilst navigating, the Live Map can also alert users to potentially dangerous intersections ahead, reminding them to stay alert rather than focusing only on their phone while walking.
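
The article does not describe the app's internals, but as a rough sketch of how severity-ranked reports and a proximity alert might be modelled, consider the following; every class name, coordinate, and threshold here is an assumption for illustration, not a detail of Safe & the City.

```python
# Illustrative only: crowd-sourced incident reports with a severity ranking
# and a simple "is anything reported nearby?" check.
from dataclasses import dataclass
from enum import IntEnum
from math import asin, cos, radians, sin, sqrt

class Severity(IntEnum):
    DISCOMFORT = 1
    VERBAL_HARASSMENT = 2
    PHYSICAL_ASSAULT = 3

@dataclass
class IncidentReport:
    lat: float
    lon: float
    severity: Severity

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearby_alerts(reports, lat, lon, radius_m=150):
    """Return reports within radius_m of the walker, most severe first."""
    hits = [r for r in reports if distance_m(r.lat, r.lon, lat, lon) <= radius_m]
    return sorted(hits, key=lambda r: r.severity, reverse=True)

reports = [IncidentReport(51.5145, -0.0883, Severity.VERBAL_HARASSMENT)]
print(nearby_alerts(reports, lat=51.5149, lon=-0.0890))   # one alert within ~70 m
```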

The Safe Sites feature is also a way of incorporating the community. Businesses and organisations can register to be Safe Sites. They will then receive training from SafeSeekers in how to provide the best support and assistance in emergency situations. The locations of such Sites will be available on the app, should a user need one.

The iOS app launched in March 2018 on International Women’s Day. It is currently only available in London…(More)”

Tricky Design: The Ethics of Things


Book edited by Tom Fisher and Lorraine Gamman: “Tricky Things responds to the burgeoning of scholarly interest in the cultural meanings of objects, by addressing the moral complexity of certain designed objects and systems.

The volume brings together leading international designers, scholars and critics to explore some of the ways in which the practice of design and its outcomes can have a dark side, even when the intention is to design for the public good. Considering a range of designed objects and relationships, including guns, eyewear, assisted suicide kits, anti-rape devices, passports and prisons, the contributors offer a view of design as both progressive and problematic, able to propose new material and human relationships, yet also constrained by social norms and ideology. 

This contradictory, tricky quality of design is explored in the editors’ introduction, which positions the objects, systems, services and ‘things’ discussed in the book in relation to the idea of the trickster that occurs in anthropological literature, as well as in classical thought, discussing design interventions that have positive and negative ethical consequences. These will include objects, both material and ‘immaterial’, systems with both local and global scope, and also different processes of designing. 

This important new volume brings a fresh perspective to the complex nature of ‘things‘, and makes a truly original contribution to debates in design ethics, design philosophy and material culture….(More)”

Crowd-mapping gender equality – a powerful tool for shaping a better city launches in Melbourne


Nicole Kalms at The Conversation: “Inequity in cities has a long history. The importance of social and community planning to meet the challenge of creating people-centred cities looms large. While planners, government and designers have long understood the problem, uncovering the many important marginalised stories is an enormous task.

Technology – so often bemoaned – has provided an unexpected and powerful primary tool for designers and makers of cities. Crowd-mapping asks the community to anonymously engage and map their experiences using their smartphones via a web app. The focus of the new Gender Equality Map, launched today in two pilot locations in Melbourne, is on participants’ experiences of equality or inequality in their neighbourhood.

How does it work?

[Figure: Participants can map their experience of equality or inequality in their neighbourhood using locator pins.]

Crowd-mapping generates geolocative data. This is made up of points “dropped” to a precise geographical location. The data can then be analysed and synthesised for insights, tendencies and “hotspots”.
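
As a hedged illustration of that analysis step, a minimal hotspot routine could bin the dropped points onto a coarse grid and surface the densest cells. The cell size, threshold, and input format below are assumptions, not details of the Gender Equality Map.

```python
# Minimal sketch: count crowd-mapped pins per grid cell to find "hotspots".
from collections import Counter

def hotspots(points, cell_deg=0.005, min_reports=5):
    """points: iterable of (lat, lon) pairs dropped by participants.
    Returns grid cells with at least min_reports pins, densest first."""
    bins = Counter(
        (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)
        for lat, lon in points
    )
    return [(cell, n) for cell, n in bins.most_common() if n >= min_reports]

# Three pins dropped near the same Melbourne corner form one candidate hotspot.
pins = [(-37.8136, 144.9631), (-37.8137, 144.9629), (-37.8139, 144.9634)]
print(hotspots(pins, min_reports=3))
```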

The diversity of its applications shows the adaptability of the method. The digital, community-based method of crowd-mapping has been used across the globe. Under-represented citizens have embraced the opportunity to tell their stories as a way to engage with and change their experience of cities….(More)”

When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.
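
One way to see why “seemingly innocuous” factors matter is to audit how strongly each feature tracks a protected attribute before any model is trained. The sketch below is a deliberately simple correlation check; the column names, threshold, and the assumption that the protected attribute is numerically encoded are all illustrative.

```python
# Hedged sketch of a proxy-variable audit, not a complete fairness toolkit.
import pandas as pd

def proxy_candidates(df: pd.DataFrame, protected_col: str, threshold: float = 0.4) -> pd.Series:
    """Flag numeric columns whose correlation with a (numerically encoded)
    protected attribute exceeds the threshold; such columns deserve scrutiny
    before they are fed to a model or shared as training data."""
    corr = df.corr(numeric_only=True)[protected_col].drop(protected_col)
    return corr[corr.abs() >= threshold].sort_values(key=abs, ascending=False)
```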

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, national security threats from foreign actors could use deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.

This is how computers “predict the future”


Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
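
For readers curious what such a pipeline looks like in practice, here is a minimal sketch of training a random forest on a wide table of patient characteristics and scoring held-out patients for risk. It is not the study’s actual code: the file name, column names, and settings are assumptions, and scikit-learn simply stands in for whatever tooling the researchers used.

```python
# A rough, hypothetical sketch of a random-forest risk model on tabular data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

records = pd.read_csv("patient_features.csv")   # hypothetical file with ~1,300 feature columns
X = records.drop(columns=["outcome"])           # age, gender, medical-history features
y = records["outcome"]                          # retrospective label: 1 = later died by suicide

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=42
)

forest = RandomForestClassifier(n_estimators=500, random_state=42)
forest.fit(X_train, y_train)

risk = forest.predict_proba(X_test)[:, 1]       # per-patient risk scores between 0 and 1
print("held-out AUC:", roc_auc_score(y_test, risk))
```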

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

Tech Was Supposed to Be Society’s Great Equalizer. What Happened?


Derek Thompson at The Atlantic: “Historians may look back at the early 21st century as the Gilded Age 2.0. Not since the late 1800s has the U.S. been so defined by the triad of rapid technological change, gaping economic inequality, and sudden social upheaval.

Ironically, the digital revolution was supposed to be an equalizer. The early boosters of the Internet sprang from the counterculture of the 1960s and the New Communalist movement. Some of them, like Stewart Brand, hoped to spread the sensibilities of hippie communes throughout the wilderness of the web. Others saw the internet more broadly as an opportunity to build a society that amended the failures of the physical world.

But in the last few years, the most successful tech companies have built a new economy that often accentuates the worst parts of the old world they were bent on replacing. Facebook’s platform amplifies preexisting biases—both of ideology and race—and political propaganda. Amazon’s dominion over online retail has allowed it to squash competition, not unlike the railroad monopolies of the 19th century. And Apple, in designing the most profitable product in modern history, has also designed another instrument of harmful behavioral addiction….

The only way to make technology that helps a broad array of people is to consult a broad array of people to make that technology. But the computer industry has a multi-decade history of gender discrimination. It is, perhaps, the industry’s original sin. After World War II, Great Britain was the world’s leader in computing. Its efforts to decipher Nazi codes led to the creation of the world’s first programmable digital computer. But within 30 years, the British advantage in computing and software had withered, in part due to explicit efforts to push women out of the computer-science workforce, according to Marie Hicks’ history, Programmed Inequality.

The tech industry isn’t a digital hippie commune anymore. It’s the new aristocracy. The largest and fastest-growing companies in the world, in both the U.S. and China, are tech giants. It’s our responsibility, as users and voters, to urge these companies to use their political and social power responsibly. “I think absolute power corrupts absolutely,” Broussard said. “In the history of America, we’ve had gilded ages before and we’ve had companies that have had giant monopolies over industries and it hasn’t worked out so great. So I think that one of the things that we need to do as a society is we need to take off our blinders when it comes to technology and we need to kind of examine our techno-chauvinist beliefs and say what kind of a world do we want?”…(More)”.