AI Ethics: The Future of Humanity 


Report by sparks & honey: “Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.

This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.

Infused with technology, we’re asking: what does it mean to be human?

Our report examines:

• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another (Download for free)”

 

Obama Brought Silicon Valley to Washington


Jenna Wortham at The New York Times: “…“Fixing” problems with technology often just creates more problems, largely because technology is never developed in a neutral way: It embodies the values and biases of the people who create it. Crime-predicting software, celebrated when it was introduced in police departments around the country, turned out to reinforce discriminatory policing. Facebook was recently accused of suppressing conservative news from its trending topics. (The company denied a bias, but announced plans to train employees to neutralize political, racial, gender and age biases that could influence what it shows its user base.) Several studies have found that Airbnb has worsened the housing crises in some cities where it operates. In January, a report from the World Bank declared that tech companies were widening income inequality and wealth disparities, not improving them….

None of this was mentioned at South by South Lawn. Instead, speakers heralded the power of the tech community. John Lewis, the congressman and civil rights leader, gave a rousing talk that implored listeners to “get in trouble. Good trouble. Get in the way and make some noise.” Clay Dumas, chief of staff for the Office of Digital Strategy at the White House, told me in an email that the event could be considered part of a legacy to inspire social change and activism through technology. “In his final months in office,” he wrote, “President Obama wants to empower the generation of people that helped launch his candidacy and whose efforts carried him into office.”

…But a few days later, during a speech at Carnegie Mellon, Obama seemed to reckon with his feelings about the potential — and limits — of the tech world. The White House can’t be as freewheeling as a start-up, he said, because “by definition, democracy is messy. And part of government’s job is dealing with problems that nobody else wants to deal with.” But he added that he didn’t want people to become “discouraged and say, ‘I’m just not going to deal with government.’ ” Obama was the first American president to see technology as an engine to improve lives and accelerate society more quickly than any government body could. That lesson was apparent on the lawn. While I still don’t believe that technology is a panacea for society’s problems, I will always appreciate the first president who tried to bring what’s best about Silicon Valley to Washington, even if some of the bad came with it….(More)”

One Crucial Thing Can Help End Violence Against Girls


Eleanor Goldberg at The Huffington Post: “…There are statistics that demonstrate how many girls are in school, for example. But there’s a glaring lack of information on how many of them have dropped out ― and why ― concluded a new study, “Counting the Invisible Girls,” published this month by Plan International.

Why Data On Women And Girls Is Crucial

Without accurate information about the struggles girls face, such as abuse, child marriage, and dropout rates, governments and nonprofit groups can’t develop programs that cater to the specific needs of underserved girls. As a result, struggling girls across the globe have little chance of escaping the problems that prevent them from pursuing an education and becoming economically independent.

“If data used for policy-making is incomplete, we have a real challenge. Current data is not telling the full story,” Emily Courey Pryor, senior director of Data2X, said at the Social Good Summit in New York City last month. Data2X is a U.N.-led group that works with data collectors and policymakers to identify gender data issues and to help bring about solutions.

Plan International released its report to coincide with a number of major recent events….

How Data Helps Improve The Lives Of Women And Girls 

While data isn’t a panacea, it has proven in a number of instances to help marginalized groups.

Until last year, it was legal in Guatemala for a girl to marry at age 14 ― despite the numerous health risks associated with the practice. Young brides are more vulnerable to sexual abuse and more likely to face fatal complications related to pregnancy and childbirth than those who marry later.

To urge lawmakers to raise the minimum age of marriage, Plan International partnered with advocates and civil society groups to launch its “Because I am a Girl” initiative. It analyzed traditional Mayan laws and gathered evidence about the prevalence of child marriage and its impact on children’s lives. The group presented the information before Guatemala’s Congress and in August of last year, the minimum age for marriage was raised to 18.

A number of groups are heeding the call to continue to amass better data.

In May, the Bill and Melinda Gates Foundation pledged $80 million over the next three years to gather robust and reliable data.

In September, UN Women announced “Making Every Woman and Girl Count,” a public-private partnership that’s working to tackle the data issue. The program was unveiled at the U.N. General Assembly, and is working with the Gates Foundation, Data2X and a number of world leaders…(More)”

A cautionary tale about humans creating biased AI models


At TechCrunch: “Most artificial intelligence models are built and trained by humans, and therefore have the potential to learn, perpetuate and massively scale the human trainers’ biases. This is the word of warning put forth in two illuminating articles published earlier this year by Jack Clark at Bloomberg and Kate Crawford at The New York Times.

Tl;dr: The AI field lacks diversity — even more spectacularly than most of our software industry. When an AI practitioner builds a data set on which to train his or her algorithm, it is likely that the data set will only represent one worldview: the practitioner’s. The resulting AI model demonstrates a non-diverse “intelligence” at best, and a biased or even offensive one at worst….

So what happens when you don’t consider carefully who is annotating the data? What happens when you don’t account for the differing preferences, tendencies and biases among varying humans? We ran a fun experiment to find out…. Actually, we didn’t set out to run an experiment. We just wanted to create something fun that we thought our awesome tasking community would enjoy. The idea? Give people the chance to rate puppies’ cuteness in their spare time… There was a clear gender gap — a very consistent pattern of women rating the puppies as cuter than the men did. The gap between women’s and men’s ratings was narrower for the “less-cute” (ouch!) dogs, and wider for the cuter ones. Fascinating.
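For readers who want to run a similar check on their own labelling projects, here is a minimal, hypothetical sketch of the kind of analysis described above: group subjective ratings by annotator demographic and look for a consistent gap. The data and column names are invented for illustration.

```python
# Hypothetical sketch: checking whether annotator demographics shift a
# subjective label, in the spirit of the puppy-rating experiment above.
# The data and column names are invented for illustration.
import pandas as pd

ratings = pd.DataFrame({
    "image_id":         [1, 1, 2, 2, 3, 3, 4, 4],
    "annotator_gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "cuteness":         [4.8, 4.1, 3.0, 2.9, 4.5, 3.9, 2.2, 2.1],
})

# Mean rating per annotator group.
print(ratings.groupby("annotator_gender")["cuteness"].mean())

# Per-image gap between groups: a consistent sign across images would
# suggest a systematic annotator bias that a trained model will inherit.
gap = (ratings.pivot_table(index="image_id",
                           columns="annotator_gender",
                           values="cuteness")
              .assign(gap=lambda t: t["f"] - t["m"]))
print(gap)
```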

I won’t even try to unpack the societal implications of these findings, but the lesson here is this: If you’re training an artificial intelligence model — especially one that you want to be able to perform subjective tasks — there are three areas in which you must evaluate and consider demographics and diversity:

  • yourself
  • your data
  • your annotators

This was a simple example: binary gender differences explaining one subjective numeric measure of an image. Yet it was unexpected and significant. As our industry deploys incredibly complex models that push chip sets, algorithms and scientists to the limit, we risk reinforcing subtle biases, powerfully and at a previously unimaginable scale. Even more pernicious, many AIs reinforce their own learning, so we need to carefully consider “supervised” (aka human) re-training over time.

Artificial intelligence promises to change all of our lives — and it already subtly guides the way we shop, date, navigate, invest and more. But to make sure that it does so for the better, all of us practitioners need to go out of our way to be inclusive. We need to remain keenly aware of what makes us all, well… human. Especially the subtle, hidden stuff….(More)”

The risks of relying on robots for fairer staff recruitment


Sarah O’Connor at the Financial Times: “Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate. Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them….(More)”
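As a concrete illustration of the safeguard Lee suggests, the hypothetical sketch below drops explicitly protected attributes from application data before fitting a screening model. The column names and data loader are invented, and, as the excerpt implies, removing these columns does not remove proxies for them (postcodes, graduation years, career gaps), so this is a first step rather than a fix.

```python
# Minimal sketch of stripping protected attributes from CV data before
# training a screening model. Column names are hypothetical; dropping
# these columns does not remove proxy variables for them.
import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED = ["gender", "age", "ethnicity"]

def prepare_features(cvs: pd.DataFrame) -> pd.DataFrame:
    """Drop explicitly protected attributes before model fitting."""
    return cvs.drop(columns=[c for c in PROTECTED if c in cvs.columns])

# Hypothetical usage (loader and labels invented for illustration):
# cvs, invited = load_applications()
# model = LogisticRegression(max_iter=1000).fit(prepare_features(cvs), invited)
```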

The Racist Algorithm?


Anupam Chander in the Michigan Law Review (2017 Forthcoming): “Are we on the verge of an apartheid by algorithm? Will the age of big data lead to decisions that unfairly favor one race over others, or men over women? At the dawn of the Information Age, legal scholars are sounding warnings about the ubiquity of automated algorithms that increasingly govern our lives. In his new book, The Black Box Society: The Hidden Algorithms Behind Money and Information, Frank Pasquale forcefully argues that human beings are increasingly relying on computerized algorithms that make decisions about what information we receive, how much we can borrow, where we go for dinner, or even whom we date. Pasquale’s central claim is that these algorithms will mask invidious discrimination, undermining democracy and worsening inequality. In this review, I rebut this prominent claim. I argue that any fair assessment of algorithms must be made against their alternative. Algorithms are certainly obscure and mysterious, but often no more so than the committees or individuals they replace. The ultimate black box is the human mind. Relying on contemporary theories of unconscious discrimination, I show that the consciously racist or sexist algorithm is less likely than the consciously or unconsciously racist or sexist human decision-maker it replaces. The principal problem of algorithmic discrimination lies elsewhere, in a process I label viral discrimination: algorithms trained or operated on a world pervaded by discriminatory effects are likely to reproduce that discrimination.

I argue that the solution to this problem lies in a kind of algorithmic affirmative action. This would require training algorithms on data that includes diverse communities and continually assessing the results for disparate impacts. Instead of insisting on race or gender neutrality and blindness, this would require decision-makers to approach algorithmic design and assessment in a race- and gender-conscious manner….(More)”
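Chander’s call to continually assess results for disparate impacts can be made concrete with a simple selection-rate comparison. The sketch below, with invented group labels and numbers, applies the common “four-fifths rule” heuristic; it is offered only as an illustration of that kind of audit, not as his proposal.

```python
# Hedged sketch of a disparate-impact check in the spirit of the proposal
# above: compare selection rates across groups after the model runs.
# The 0.8 threshold follows the common "four-fifths rule" heuristic.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Invented example data: group "a" is selected 2/3 of the time, "b" 1/3.
sample = [("a", True), ("a", True), ("a", False),
          ("b", True), ("b", False), ("b", False)]
print(disparate_impact_ratio(sample))  # 0.5, below the 0.8 heuristic
```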

The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking


Book by Sally Engle Merry: “We live in a world where seemingly everything can be measured. We rely on indicators to translate social phenomena into simple, quantified terms, which in turn can be used to guide individuals, organizations, and governments in establishing policy. Yet counting things requires finding a way to make them comparable. And in the process of translating the confusion of social life into neat categories, we inevitably strip it of context and meaning—and risk hiding or distorting as much as we reveal.

With The Seductions of Quantification, leading legal anthropologist Sally Engle Merry investigates the techniques by which information is gathered and analyzed in the production of global indicators on human rights, gender violence, and sex trafficking. Although such numbers convey an aura of objective truth and scientific validity, Merry argues persuasively that measurement systems constitute a form of power by incorporating theories about social change in their design but rarely explicitly acknowledging them. For instance, the US State Department’s Trafficking in Persons Report, which ranks countries in terms of their compliance with antitrafficking activities, assumes that prosecuting traffickers as criminals is an effective corrective strategy—overlooking cultures where women and children are frequently sold by their own families. As Merry shows, indicators are indeed seductive in their promise of providing concrete knowledge about how the world works, but they are implemented most successfully when paired with context-rich qualitative accounts grounded in local knowledge….(More)”.

Transparency reports make AI decision-making accountable


Phys.org: “Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University researchers could provide important insights into this process.

 Was it a person’s age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU’s Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.

“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” Datta said.

“Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited,” he continued. “Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports.”

These reports might be generated in response to a particular incident—why an individual’s loan application was rejected, or why police targeted an individual for scrutiny or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people….(More)”
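The CMU researchers define their QII measures formally; as a rough, hypothetical illustration of the underlying intervention idea only, the sketch below re-samples a single input feature from the data and measures how often a model’s decision flips. The function and variable names are invented and this is not the researchers’ code.

```python
# Rough illustration of intervention-based influence, in the spirit of
# (but much simpler than) the QII measures described above: re-sample one
# feature from the dataset and see how often the model's decision changes.
import numpy as np

def flip_influence(model_predict, X: np.ndarray, feature: int,
                   n_samples: int = 1000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, len(X), size=n_samples)
    baseline = model_predict(X[rows])

    X_intervened = X[rows].copy()
    # Replace the chosen feature with values drawn from its marginal.
    X_intervened[:, feature] = X[rng.integers(0, len(X), size=n_samples), feature]
    intervened = model_predict(X_intervened)

    # Share of decisions that flipped under the intervention.
    return float(np.mean(baseline != intervened))

# Hypothetical usage: influence = flip_influence(clf.predict, X_train, feature=2)
```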

City planners tap into wealth of cycling data from Strava tracking app


Peter Walker in The Guardian: “Sheila Lyons recalls the way Oregon used to collect data on how many people rode bikes. “It was very haphazard, two-hour counts done once a year,” said the woman in charge of cycling policy for the state government. “Volunteers, sitting on the street corner because they wanted better bike facilities. Pathetic, really.”

But in 2013 a colleague had an idea. She recorded her own bike rides using an app called Strava, and thought: why not ask the company to share its data? And so was born Strava Metro, both an inadvertent tech business spinoff and a similarly accidental urban planning tool, one that is now quietly helping to reshape streets in more than 70 places around the world and counting.

Using the GPS tracking capability of a smartphone and similar devices, Strava allows people to plot how far and fast they go and compare themselves against other riders. Users create designated route segments, which each have leaderboards ranked by speed.

Originally aimed just at cyclists, Strava soon incorporated running and now has options for more than two dozen pursuits. But cycling remains the most popular, and while the company is coy about overall figures, it says it adds 1 million new members every two months, and has more than six million uploads a week.

For city planners like Lyons, used to very occasional single-street bike counts, this is a near-unimaginable wealth of data. While individual details are anonymised, it still shows how many Strava-using cyclists, plus their age and gender, ride down any street at any time of the day, and the entire route they take.
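For a sense of what such aggregated data might look like, here is a toy sketch that rolls individual ride records up into anonymised counts per street segment and hour. The field names are invented; this is not Strava Metro’s actual schema.

```python
# Toy aggregation along the lines the article describes: roll individual
# GPS-matched rides up into anonymised counts per segment, hour and group.
# Field names are invented for illustration only.
import pandas as pd

rides = pd.DataFrame({
    "segment_id": ["A", "A", "B", "A", "B"],
    "hour":       [8, 8, 8, 17, 17],
    "age_band":   ["25-34", "35-44", "25-34", "25-34", "45-54"],
    "gender":     ["f", "m", "f", "m", "f"],
})

counts = (rides.groupby(["segment_id", "hour", "age_band", "gender"])
               .size()
               .reset_index(name="ride_count"))
print(counts)
```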

The company says it initially had no idea how useful the information could be, and only began visualising data on heatmaps as a fun project for its engineers. “We’re not city planners,” said Michael Horvath, one of two former Harvard University rowers and relatively veteran 40-something tech entrepreneurs who co-founded Strava in 2009.

“One of the things that we learned early on is that these people just don’t have very much data to begin with. Not only is ours a novel dataset, in many cases it’s the only dataset that speaks to the behaviour of cyclists and pedestrians in that city or region.”…(More)”

Crowdsourcing corruption in India’s maternal health services


Joan Okitoi-Heisig at DW Akademie: “…The Mera Swasthya Meri Aawaz (MSMA) project is the first of its kind in India to track illicit maternal fees demanded in government hospitals located in the northern state of Uttar Pradesh.

MSMA (“My Health, My Voice”) is part of SAHAYOG, a non-governmental umbrella organization that helped launch the project. MSMA uses an Ushahidi platform to map and collect data on unofficial fees that plague India’s ostensibly “free” maternal health services. It is one of the many projects showcased in DW Akademie’s recently launched Digital Innovation Library. SAHAYOG works closely with grassroots organizations to promote gender equality and women’s health issues from a human rights perspective…

SAHAYOG sees women’s maternal health as a human rights issue. Key to the MSMA project is exposing government facilities that extort bribes from among the poorest and most vulnerable in society.

Sandhya and her colleagues are convinced that promoting transparency and accountability through the data collected can empower the women. If they’re aware of their entitlements, she says, they can demand their rights and in the process hold leaders accountable.

“Information is power,” Sandhya explains. Without this information, she says, “they aren’t in a position to demand what is rightly theirs.”

Health care providers hold a certain degree of power when entrusted with taking care of expectant mothers. Many women give in to bribe demands for fear of being otherwise neglected or abused.

With the MSMA project, however, poor rural women have technology that is easy to use and accessible on their mobile phones, and that empowers them to make complaints and report bribes for services that are supposed to be free.

MSMA is an innovative data-driven platform that combines a toll free number, an interactive voice response system (IVRS) and a website that contains accessible reports. In addition to enabling poor women to air their frustrations anonymously, the project aggregates actionable data which can then be used by the NGO as well as the government to work towards improving the situation for mothers in India….(More)”
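As a purely hypothetical illustration of the aggregation step described above, turning individual IVRS reports into facility-level summaries that can be published or handed to officials, a sketch follows. The field names and figures are invented and this is not MSMA’s code.

```python
# Hypothetical illustration of aggregating crowdsourced bribe reports
# into per-facility summaries. Field names and values are invented.
from collections import defaultdict

reports = [
    {"facility": "District Hospital A", "amount_inr": 500},
    {"facility": "District Hospital A", "amount_inr": 200},
    {"facility": "Community Health Centre B", "amount_inr": 300},
]

summary = defaultdict(lambda: {"reports": 0, "total_inr": 0})
for r in reports:
    summary[r["facility"]]["reports"] += 1
    summary[r["facility"]]["total_inr"] += r["amount_inr"]

for facility, s in summary.items():
    print(f"{facility}: {s['reports']} reports, {s['total_inr']} INR demanded")
```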