When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
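
To make the mechanics concrete, here is a minimal, hypothetical sketch (not from the article) of one such attack, often called label flipping: an attacker corrupts the outcome labels for a single subgroup in the training data, and a model trained on it quietly learns the injected bias. All names and numbers below are illustrative.

```python
# Illustrative sketch only: a toy label-flipping "data poisoning" attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0/1 protected attribute
score = rng.normal(0, 1, n)          # a legitimate predictor
y = (score > 0).astype(int)          # ground truth ignores group entirely
X = np.column_stack([score, group])

# Poison the data: flip 30% of positive labels for group 1 before training
y_poisoned = y.copy()
mask = (group == 1) & (y == 1) & (rng.random(n) < 0.3)
y_poisoned[mask] = 0

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X, y_poisoned)

# The poisoned model now predicts fewer positives for group 1
for name, model in [("clean", clean), ("poisoned", poisoned)]:
    rate0 = model.predict(X[group == 0]).mean()
    rate1 = model.predict(X[group == 1]).mean()
    print(f"{name}: positive rate group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
```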

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors could use deliberate bias attacks as a national security threat, destabilizing societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.

This is how computers “predict the future”


Dan Kopf at Quartz: “The poetically named “random forest” is one of data science’s most-loved prediction algorithms. Developed primarily by statistician Leo Breiman in the 1990s, the random forest is cherished for its simplicity. Though it is not always the most accurate prediction method for a given problem, it holds a special place in machine learning because even those new to data science can implement and understand this powerful algorithm.

This was the algorithm used in an exciting 2017 study on suicide predictions, conducted by biomedical-informatics specialist Colin Walsh of Vanderbilt University and psychologists Jessica Ribeiro and Joseph Franklin of Florida State University. Their goal was to take what they knew about a set of 5,000 patients with a history of self-injury, and see if they could use those data to predict the likelihood that those patients would commit suicide. The study was done retrospectively. Sadly, almost 2,000 of these patients had killed themselves by the time the research was underway.

Altogether, the researchers had over 1,300 different characteristics they could use to make their predictions, including age, gender, and various aspects of the individuals’ medical histories. If the predictions from the algorithm proved to be accurate, the algorithm could theoretically be used in the future to identify people at high risk of suicide, and deliver targeted programs to them. That would be a very good thing.
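
For illustration only — using synthetic data at roughly the scale described above, not the study's records — a random forest risk model of this kind can be sketched in a few lines of scikit-learn:

```python
# A minimal sketch of the kind of model the study used, on fake data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients, n_features = 5000, 1300   # mirrors the scale described above
X = rng.normal(size=(n_patients, n_features))
# Fabricated outcome, loosely driven by a handful of features plus noise
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Each tree votes; the forest averages the votes into a risk probability
risk = model.predict_proba(X_test)[:, 1]
print("held-out accuracy:", model.score(X_test, y_test))
```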

Predictive algorithms are everywhere. In an age when data are plentiful and computing power is mighty and cheap, data scientists increasingly take information on people, companies, and markets—whether given willingly or harvested surreptitiously—and use it to guess the future. Algorithms predict what movie we might want to watch next, which stocks will increase in value, and which advertisement we’re most likely to respond to on social media. Artificial-intelligence tools, like those used for self-driving cars, often rely on predictive algorithms for decision making….(More)”.

Tech Was Supposed to Be Society’s Great Equalizer. What Happened?


Derek Thompson at The Atlantic: “Historians may look back at the early 21st century as the Gilded Age 2.0. Not since the late 1800s has the U.S. been so defined by the triad of rapid technological change, gaping economic inequality, and sudden social upheaval.

Ironically, the digital revolution was supposed to be an equalizer. The early boosters of the internet sprang from the counterculture of the 1960s and the New Communalist movement. Some of them, like Stewart Brand, hoped to spread the sensibilities of hippie communes throughout the wilderness of the web. Others saw the internet more broadly as an opportunity to build a society that amended the failures of the physical world.

But in the last few years, the most successful tech companies have built a new economy that often accentuates the worst parts of the old world they were bent on replacing. Facebook’s platform amplifies preexisting biases—both of ideology and race—and political propaganda. Amazon’s dominion over online retail has allowed it to squash competition, not unlike the railroad monopolies of the 19th century. And Apple, in designing the most profitable product in modern history, has also designed another instrument of harmful behavioral addiction….

The only way to make technology that helps a broad array of people is to consult a broad array of people to make that technology. But the computer industry has a multi-decade history of gender discrimination. It is, perhaps, the industry’s original sin. After World War II, Great Britain was the world’s leader in computing. Its efforts to decipher Nazi codes led to the creation of the world’s first programmable digital computer. But within 30 years, the British advantage in computing and software had withered, in part due to explicit efforts to push women out of the computer-science workforce, according to Marie Hicks’ history, Programmed Inequality.

The tech industry isn’t a digital hippie commune anymore. It’s the new aristocracy. The largest and fastest-growing companies in the world, in both the U.S. and China, are tech giants. It’s our responsibility, as users and voters, to urge these companies to use their political and social power responsibly. “I think absolute power corrupts absolutely,” Broussard said. “In the history of America, we’ve had gilded ages before and we’ve had companies that have had giant monopolies over industries and it hasn’t worked out so great. So I think that one of the things that we need to do as a society is we need to take off our blinders when it comes to technology and we need to kind of examine our techno-chauvinist beliefs and say what kind of a world do we want?”…(More)”.

Direct Democracy and Political Engagement of the Marginalized


Dissertation by Jeong Hyun Kim: “…examines direct democracy’s implications for political equality by focusing on how it influences and modifies political attitudes and behaviors of marginalized groups. Using cases and data from Sweden, Switzerland, and the United States, I provide a comprehensive, global examination of how direct democratic institutions affect political participation, especially of political minority or marginalized groups.

In the first paper, I examine whether the practice of direct democracy supports women’s political participation. I theorize that the use of direct democracy enhances women’s sense of political efficacy, thereby promoting their participation in the political process. I test this argument by leveraging a quasi-experiment in Sweden from 1921 to 1944, wherein the use of direct democratic institutions was determined by a population threshold. Findings from a regression discontinuity analysis lend strong support to the positive effect of direct democracy on women’s political participation. Using web documents of minutes from direct democratic meetings, I further show that women’s participation in direct democracy is positively associated with their subsequent participation in parliamentary elections.
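
As a stylized sketch of that design (with fabricated data, not the dissertation's Swedish records): a regression discontinuity estimate compares outcomes just above and just below the population threshold, and the "jump" at the cutoff is the estimated effect. Variable names and magnitudes here are invented.

```python
# Hypothetical regression discontinuity sketch: treatment (direct democracy)
# is assigned by a population threshold; we estimate the jump at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
pop = rng.uniform(500, 1500, 2000)      # running variable: municipal population
cutoff = 1000
treated = (pop >= cutoff).astype(float)
# Fabricated turnout with a 5-point jump at the cutoff
turnout = 50 + 0.01 * (pop - cutoff) + 5 * treated + rng.normal(0, 3, 2000)

# Local linear regression within a bandwidth around the cutoff,
# allowing separate slopes on each side
bw = 200
near = np.abs(pop - cutoff) < bw
centered = pop[near] - cutoff
X = sm.add_constant(
    np.column_stack([treated[near], centered, treated[near] * centered]))
model = sm.OLS(turnout[near], X).fit()
print("estimated jump at the cutoff:", round(model.params[1], 2))
```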

The second paper expands on the first paper by examining an individual-level mechanism linking experience with direct democracy and feelings of political efficacy. Using panel survey data from Switzerland, I examine the relationship between individuals’ exposure to direct democracy and the gender gap in political efficacy. I find that direct democracy increases women’s sense of political efficacy, while it has no significant effect on men. This finding confirms that the opportunity for direct legislation leads women to feel more efficacious in politics, suggesting its further implications for the gender gap in political engagement.

In the third and final paper, I examine how direct democratic votes targeting ethnic minorities influence political mobilization of minority groups. I theorize that targeted popular votes intensify the general public’s hostility towards minority groups, thereby enhancing group members’ perceptions of being stigmatized. Consequently, this creates a greater incentive for minorities to actively engage in politics. Using survey data from the United States, combined with information about state-level direct democracy, I find that direct democratic votes targeting the rights of immigrants lead to greater political activism among ethnic minorities with immigrant backgrounds. These studies contribute to the existing scholarship on women and minority politics by illuminating new mechanisms underlying the mobilization of women and minorities and clarifying the causal effect of the type of government on political equality….(More)”.

How AI Addresses Unconscious Bias in the Talent Economy


Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.

Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited and has the potential to create unconscious bias in hiring.

That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest ever AI toolset release, tailor-made for nine industries and professions where AI will play a transformational role….(More)”.
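
IBM has not published the tool's internals, so the following is only a guess at the general shape of such a check: a standard adverse-impact test, the "four-fifths rule," flags any group whose selection rate falls below 80 percent of the highest group's rate. The hiring records here are hypothetical.

```python
# Illustrative four-fifths (80%) rule check on hypothetical hiring data.
from collections import defaultdict

hires = [  # hypothetical historical records: (group, was_hired)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])   # group -> [hired, total]
for group, hired in hires:
    counts[group][0] += int(hired)
    counts[group][1] += 1

rates = {g: h / t for g, (h, t) in counts.items()}
best = max(rates.values())
for g, r in rates.items():
    ratio = r / best
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"group {g}: selection rate {r:.2f}, ratio {ratio:.2f} -> {flag}")
```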

The UK’s Gender Pay Gap Open Data Law Has Flaws, But Is A Positive Step Forward


Article by Michael McLaughlin: “Last year, the United Kingdom enacted a new regulation requiring companies to report information about their gender pay gap—a measure of the difference in average pay between men and women. The new rules are a good example of how open data can drive social change. However, the regulations have produced some misleading statistics, highlighting the importance of carefully crafting reporting requirements to ensure that they produce useful data.

In the UK, nearly 11,000 companies have filed gender pay gap reports, which include both the difference between the mean and median hourly pay rates for men and women as well as the difference in bonuses. And the initial data reveals several interesting findings. Median pay for men is 11.8 percent higher than for women, on average, and nearly 87 percent of companies pay men more than women on average. In addition, over 1,000 firms had a median pay gap greater than 30 percent. The sectors with the highest pay gaps—construction, finance, and insurance—each pay men at least 20 percent more than women. A major reason for the gap is a lack of women in senior positions—UK women actually make more than men between the ages of 22 and 29. The total pay gap is also a result of more women holding part-time jobs.
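
For readers unfamiliar with the calculation, the reported figures reduce to a simple formula: the gap is the difference between men's and women's mean (or median) hourly pay, expressed as a percentage of men's pay. A minimal sketch on made-up numbers:

```python
# Gender pay gap as reported under the UK rules, on hypothetical pay data.
import statistics

men_hourly = [14.0, 18.5, 22.0, 35.0, 51.0]     # invented hourly rates
women_hourly = [13.5, 15.0, 19.0, 24.0, 40.0]

def gap(men, women, stat):
    # (men's pay - women's pay) as a percentage of men's pay
    return 100 * (stat(men) - stat(women)) / stat(men)

print(f"mean pay gap:   {gap(men_hourly, women_hourly, statistics.mean):.1f}%")
print(f"median pay gap: {gap(men_hourly, women_hourly, statistics.median):.1f}%")
```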

However, as detractors note, the UK’s data can be misleading. For example, the data overstates the pay gap on bonuses because it does not adjust these figures for hours worked. More women work part-time than men, so it makes sense that women would receive less in bonus pay when they work less. The data also understates the pay gap because it excludes the high compensation of partners in organizations such as law firms, a group that includes few women. And it is important to note that—by definition—the pay gap data does not compare the wages of men and women working the same jobs, so the data says nothing about whether women receive equal pay for equal work.

Still, publication of the data has sparked an important national conversation. Google searches in the UK for the phrase “gender pay gap” experienced a 12-month high the week the regulations began enforcement, and major news sites like the Financial Times have provided significant coverage of the issue by analyzing the reported data. While it is too soon to tell if the law will change employer behavior, such as businesses hiring more female executives, or employee behavior, such as women leaving companies or fields that pay less, countries with similar reporting requirements, such as Belgium, have seen the pay gap narrow following implementation of their rules.

Requiring companies to report this data to the government may be the only way to obtain gender pay gap data, because evidence suggests that the private sector will not produce this data on its own. Only 300 UK organizations joined a voluntary government program to report their gender pay gap in 2011, and as few as 11 actually published the data. Crowdsourced efforts, where women voluntarily report their pay, have also suffered from incomplete data. And even complete data does not illuminate variables such as why women may work in a field that pays less….(More)”.

Biometric Mirror


University of Melbourne: “Biometric Mirror exposes the possibilities of artificial intelligence and facial analysis in public space. The aim is to investigate the attitudes that emerge as people are presented with different perspectives on their own, anonymised biometric data distilled from a single photograph of their face. It sheds light on the specific data that people oppose or approve of, the sentiments it evokes, and the underlying reasoning. Biometric Mirror also presents an opportunity to reflect on whether the plausible future of artificial intelligence is a future we want to see take shape.

Big data and artificial intelligence are some of today’s most popular buzzwords. Both promise to deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analyses, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. And the common view is that these developments will have a positive impact on society.

Within the realm of artificial intelligence (AI), facial analysis is gaining popularity. Today, CCTV cameras and advertising screens increasingly link with analysis systems that are able to detect emotions, age, gender and demographic information of people passing by. This has proven to increase advertising effectiveness in retail environments, since campaigns can now be tailored to specific audience profiles and situations. But facial analysis models are also being developed to predict your aggression level, sexual preference, life expectancy and likelihood of being a terrorist (or an academic) by simply monitoring surveillance camera footage or analysing a single photograph. Some of these developments have gained widespread media coverage for their innovative nature, but often the ethical and social impact is only a side thought.
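
As a rough sketch of the first step in such a pipeline — locating faces in a frame, here with OpenCV's stock Haar cascade — note that the attribute predictions described above (emotion, age, gender) would each require separately trained models, represented below only by a hypothetical placeholder:

```python
# Face detection with OpenCV's bundled Haar cascade; the filename and the
# attribute classifier referenced in the comment are hypothetical.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("street_scene.jpg")   # e.g. one CCTV frame on disk
assert frame is not None, "image not found"
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face = frame[y:y + h, x:x + w]
    # attributes = attribute_model.predict(face)  # hypothetical classifier
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"detected {len(faces)} face(s)")
```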

Current technological developments approach the ethical boundaries of the artificial intelligence age. Facial recognition and analysis in public space raise concerns as people are photographed without prior consent, and their photos disappear into a commercial operator’s infrastructure. It remains unclear how the data is processed, how the data is tailored for specific purposes and how the data is retained or disposed of. People also do not have the opportunity to review or amend their facial recognition data. Perhaps most worryingly, artificial intelligence systems may make decisions or deliver feedback based on the data, regardless of its accuracy or completeness. While facial recognition and analysis may be harmless for tailored advertising in retail environments or for unlocking your phone, they quickly push ethical boundaries when the general purpose is to more closely monitor society… (More).

The Diversity Dashboard


Engaging Local Government Leaders: “The Diversity Dashboard is a crowd-funded data collection effort managed by ELGL and hosted on the OpenGov platform. The data collection includes the self-reported gender, race, age, and veteran status of Chief Administrative Officers and Assistant Chief Administrative Officers in local governments of all sizes and forms.

This link includes background information about the Diversity Dashboard, and access to the “Stories” module where we highlight some key findings.

From there, you can drill down into the data, looking at pre-formatted reports and creating your own reports using the submitted data.

The more local government leaders who take the survey, the bigger the dataset, the better our understanding of what the local government leadership landscape looks like. If your local government hasn’t yet completed the survey, please take the survey!…(More)”.

Is Open Data Working for Women in Africa?


Web Foundation: “Open data has the potential to change politics, economies and societies for the better by giving people more opportunities to engage in the decisions that affect their lives. But to reach the full potential of open data, it must be available to and used by all. Yet, across the globe — and in Africa in particular — there is a significant data gap.

This report — Is open data working for women in Africa — maps the current state of open data for women across Africa, with insights from country-specific research in Nigeria, Cameroon, Uganda and South Africa, and additional data from a survey of experts in 12 countries across the continent.

Our findings show that, despite the potential for open data to empower people, it has so far changed little for women living in Africa.

Key findings

  • There is a closed data culture in Africa — Most countries lack an open culture and have legislation and processes that are not gender-responsive. Institutional resistance to disclosing data means few countries have open data policies and initiatives at the national level. In addition, gender equality legislation and policies are incomplete and failing to reduce gender inequalities. And overall, Africa lacks the cross-organisational collaboration needed to strengthen the open data movement.
  • There are barriers preventing women from using the data that is available — Cultural and social realities create additional challenges for women to engage with data and participate in the technology sector. 1GB of mobile data in Africa costs, on average, 10% of average monthly income. This high cost keeps women, who generally earn less than men, offline. Moreover, time poverty, the gender pay gap and unpaid labour create economic obstacles for women to engage with digital technology.
  • Key datasets to support the advocacy objectives of women’s groups are missing — Data on budgets, health and crime are largely absent as open data. Nearly all datasets in sub-Saharan Africa (373 out of 375) are closed, and sex-disaggregated data, when available online, is often not published as open data. There are few open data policies to support the opening of key datasets, and even when they do exist, they largely remain in draft form. With little investment in open data initiatives, good data management practices or the implementation of Right to Information (RTI) reforms, improvement is unlikely.
  • There is no strong base of research on women’s access to and use of open data — There is a lack of funding, little collaboration and few open data champions. Women’s groups, digital rights groups and gender experts rarely collaborate on open data and gender issues. To overcome this barrier, multi-stakeholder collaborations are essential to develop effective solutions….(More)”.

Migration Data using Social Media


European Commission JRC Technical Report: “Migration is a top political priority for the European Union (EU). Data on international migrant stocks and flows are essential for effective migration management. In this report, we estimated the number of expatriates in 17 EU countries based on the number of Facebook Network users who are classified by Facebook as “expats”. To this end, we proposed a method for correcting the over- or under-representativeness of Facebook Network users compared to countries’ actual population.

This method uses Facebook penetration rates by age group and gender in the country of previous residence and country of destination of a Facebook expat. The purpose of Facebook Network expat estimations is not to reproduce migration statistics, but rather to generate separate estimates of expatriates, since migration statistics and Facebook Network expats estimates do not measure the same quantities of interest.
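
The report's actual estimator is more involved, but a simplified, hypothetical illustration of a penetration-rate correction looks like this: if Facebook reaches only a fraction of each age-gender group, scale the observed "expat" counts up by the inverse of that fraction, stratum by stratum. All figures below are invented.

```python
# Simplified, hypothetical penetration-rate correction (not the report's
# exact estimator). All counts and rates are invented for illustration.
fb_expats = {          # observed Facebook "expats" by (age group, gender)
    ("18-29", "F"): 1200, ("18-29", "M"): 1500,
    ("30-49", "F"): 800,  ("30-49", "M"): 950,
}
penetration = {        # share of each stratum that uses Facebook
    ("18-29", "F"): 0.80, ("18-29", "M"): 0.75,
    ("30-49", "F"): 0.55, ("30-49", "M"): 0.50,
}

# Scale each stratum's count by the inverse of its penetration rate
estimated = {k: n / penetration[k] for k, n in fb_expats.items()}
print({k: round(v) for k, v in estimated.items()})
print("estimated expat stock:", round(sum(estimated.values())))
```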

Estimates of social media application users who are classified as expats can be a timely, low-cost, and almost globally available source of information for estimating stocks of international migrants. Our methodology allowed for the timely capture of the increase in Venezuelan migrants in Spain. However, there are important methodological and data integrity issues with using social media data sources for studying migration-related phenomena. For example, our methodology significantly overestimated the number of expats from the Philippines in Spain and in Italy, with no evidence that these higher estimates are valid. While research on the use of big data sources for migration is in its infancy, and the diffusion of internet technologies in less developed countries is still limited, the use of big data sources can unveil useful insights on quantitative and qualitative characteristics of migration….(More)”.