The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking


Book by Sally Engle Merry: “We live in a world where seemingly everything can be measured. We rely on indicators to translate social phenomena into simple, quantified terms, which in turn can be used to guide individuals, organizations, and governments in establishing policy. Yet counting things requires finding a way to make them comparable. And in the process of translating the confusion of social life into neat categories, we inevitably strip it of context and meaning—and risk hiding or distorting as much as we reveal.

With The Seductions of Quantification, leading legal anthropologist Sally Engle Merry investigates the techniques by which information is gathered and analyzed in the production of global indicators on human rights, gender violence, and sex trafficking. Although such numbers convey an aura of objective truth and scientific validity, Merry argues persuasively that measurement systems constitute a form of power by incorporating theories about social change in their design but rarely explicitly acknowledging them. For instance, the US State Department’s Trafficking in Persons Report, which ranks countries in terms of their compliance with antitrafficking activities, assumes that prosecuting traffickers as criminals is an effective corrective strategy—overlooking cultures where women and children are frequently sold by their own families. As Merry shows, indicators are indeed seductive in their promise of providing concrete knowledge about how the world works, but they are implemented most successfully when paired with context-rich qualitative accounts grounded in local knowledge….(More)”.

Transparency reports make AI decision-making accountable


Phys.org: “Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University researchers could provide important insights into this process.

Was it a person’s age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU’s Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.

“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” Datta said.

“Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited,” he continued. “Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports.”

These reports might be generated in response to a particular incident—why an individual’s loan application was rejected, why police targeted an individual for scrutiny, or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people….(More)”
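The excerpt stays at the conceptual level, but the core intuition behind influence measures of this kind can be sketched: intervene on one input at a time and see how often the system’s decision changes. The snippet below is a minimal illustration of that idea on a hypothetical loan-approval rule; it is a simplified stand-in, not the published QII method, and all names and data are invented.

```python
import numpy as np

def unary_influence(model, X, feature, n_samples=1000, rng=None):
    """Fraction of decisions that flip when one feature is replaced by values
    drawn from its marginal distribution -- an illustrative stand-in for an
    'input influence' measure, not the published QII definition."""
    rng = rng or np.random.default_rng(0)
    rows = rng.integers(0, len(X), size=n_samples)
    baseline = model(X[rows])                         # decisions on sampled records
    intervened_X = X[rows].copy()
    intervened_X[:, feature] = X[rng.integers(0, len(X), size=n_samples), feature]
    return np.mean(baseline != model(intervened_X))   # how often the decision changes

# Hypothetical loan-approval rule: approve when income exceeds debt.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                         # columns: age, income, debt
model = lambda rows: (rows[:, 1] - rows[:, 2] > 0).astype(int)

for i, name in enumerate(["age", "income", "debt"]):
    print(f"influence of {name}: {unary_influence(model, X, i):.2f}")
```

On this toy rule, age shows zero influence while income and debt do not, which is the kind of breakdown a transparency report would surface.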

City planners tap into wealth of cycling data from Strava tracking app


Peter Walker in The Guardian: “Sheila Lyons recalls the way Oregon used to collect data on how many people rode bikes. “It was very haphazard, two-hour counts done once a year,” said the woman in charge of cycling policy for the state government. “Volunteers, sitting on the street corner because they wanted better bike facilities. Pathetic, really.”

But in 2013 a colleague had an idea. She recorded her own bike rides using an app called Strava, and thought: why not ask the company to share its data? And so was born Strava Metro, both an inadvertent tech business spinoff and a similarly accidental urban planning tool, one that is now quietly helping to reshape streets in more than 70 places around the world and counting.

Using the GPS tracking capability of a smartphone and similar devices, Strava allows people to plot how far and fast they go and compare themselves against other riders. Users create designated route segments, which each have leaderboards ranked by speed.

Originally aimed just at cyclists, Strava soon incorporated running and now has options for more than two dozen pursuits. But cycling remains the most popular, and while the company is coy about overall figures, it says it adds 1 million new members every two months, and has more than six million uploads a week.

For city planners like Lyons, used to very occasional single-street bike counts, this is a near-unimaginable wealth of data. While individual details are anonymised, it still shows how many Strava-using cyclists, plus their age and gender, ride down any street at any time of the day, and the entire route they take.
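The article describes the aggregates rather than any specific pipeline, but the kind of table a planner gets from this data can be sketched in a few lines: group anonymised ride records by street segment and hour and count them. The column names and records below are invented for illustration and are not Strava Metro’s actual schema.

```python
import pandas as pd

# Hypothetical anonymised ride records: one row per cyclist passing a street segment.
rides = pd.DataFrame({
    "segment_id": ["A12", "A12", "B03", "A12", "B03"],
    "timestamp": pd.to_datetime([
        "2016-05-02 08:05", "2016-05-02 08:40", "2016-05-02 08:50",
        "2016-05-02 17:20", "2016-05-02 17:45",
    ]),
    "age_band": ["25-34", "35-44", "25-34", "45-54", "35-44"],
    "gender": ["F", "M", "F", "M", "F"],
})

# Riders per segment per hour of day -- the kind of count a planner could
# set against an occasional two-hour manual street count.
counts = (rides
          .assign(hour=rides["timestamp"].dt.hour)
          .groupby(["segment_id", "hour"])
          .size()
          .rename("riders")
          .reset_index())
print(counts)
```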

The company says it initially had no idea how useful the information could be, and only began visualising data on heatmaps as a fun project for its engineers. “We’re not city planners,” said Michael Horvath, one of two former Harvard University rowers and relatively veteran 40-something tech entrepreneurs who co-founded Strava in 2009.

“One of the things that we learned early on is that these people just don’t have very much data to begin with. Not only is ours a novel dataset, in many cases it’s the only dataset that speaks to the behaviour of cyclists and pedestrians in that city or region.”…(More)”

Crowdsourcing corruption in India’s maternal health services


Joan Okitoi-Heisig at DW Akademie: “…The Mera Swasthya Meri Aawaz (MSMA) project is the first of its kind in India to track illicit maternal fees demanded in government hospitals located in the northern state of Uttar Pradesh.

MSMA (“My Health, My Voice”) is part of SAHAYOG, a non-governmental umbrella organization that helped launch the project. MSMA uses an Ushahidi platform to map and collect data on unofficial fees that plague India’s ostensibly “free” maternal health services. It is one of the many projects showcased in DW Akademie’s recently launched Digital Innovation Library. SAHAYOG works closely with grassroots organizations to promote gender equality and women’s health issues from a human rights perspective…

SAHAYOG sees women’s maternal health as a human rights issue. Key to the MSMA project is exposing government facilities that extort bribes from among the poorest and most vulnerable in society.

Sandhya and her colleagues are convinced that promoting transparency and accountability through the data collected can empower the women. If they’re aware of their entitlements, she says, they can demand their rights and in the process hold leaders accountable.

“Information is power,” Sandhya explains. Without this information, she says, “they aren’t in a position to demand what is rightly theirs.”

Health care providers hold a certain degree of power when entrusted with taking care of expectant mothers. Many women give in to demands for bribes for fear of being otherwise neglected or abused.

With the MSMA project, however, poor rural women have technology that is easy to use and accessible on their mobile phones, and that empowers them to make complaints and report bribes for services that are supposed to be free.

MSMA is an innovative data-driven platform that combines a toll free number, an interactive voice response system (IVRS) and a website that contains accessible reports. In addition to enabling poor women to air their frustrations anonymously, the project aggregates actionable data which can then be used by the NGO as well as the government to work towards improving the situation for mothers in India….(More)”
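The article describes the platform at project level; the aggregation step it refers to can be sketched simply as collecting anonymous reports (facility, amount demanded, date) and summarising them per facility so the NGO or government can see where bribes cluster. The field names and data below are invented for illustration, not the actual MSMA schema.

```python
import pandas as pd

# Hypothetical anonymous reports phoned in via the toll-free IVRS line.
reports = pd.DataFrame({
    "facility": ["District Hospital A", "PHC B", "District Hospital A", "PHC C"],
    "amount_demanded_inr": [500, 200, 1000, 300],
    "reported_on": pd.to_datetime(["2016-01-05", "2016-01-12", "2016-02-02", "2016-02-20"]),
})

# Complaints and total amount demanded per facility -- the kind of table
# a public reporting website could be built from.
summary = (reports.groupby("facility")["amount_demanded_inr"]
           .agg(complaints="count", total_demanded="sum")
           .sort_values("complaints", ascending=False))
print(summary)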

Global governance and ICTs: exploring online governance networks around gender and media


Claudia Padovani and Elena Pavan in the journal “Global Networks“: In this article, we address transformations in global governance brought about by information and communication technologies (ICTs). Focusing on the specific domain of ‘gender-oriented communication governance’, we investigate online interactions among different kinds of actors active in promoting gender equity in and through the media. By tracing and analysing online issue networks, we investigate which actors are capable of influencing the framing of issues and of structuring discursive practices. From the analysis, different forms of power emerge, reflecting diverse modes of engaging in online interactions, where actors can operate as network ‘programmers’, ‘mobilizers’, or ‘switchers’. Our case study suggests that, often, old ways of conceiving actors’ interactions accompany the implementation of new communication tools, while the availability of a pervasive networked infrastructure does not automatically translate into meaningful interactions among all relevant actors in a specific domain….(More)”
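The authors’ categories of ‘programmers’, ‘mobilizers’ and ‘switchers’ come from their own qualitative and network analysis; as a rough computational analogue, one can compute standard centrality measures over a hyperlink or mention network, a common first step in tracing online issue networks. The sketch below uses networkx on an invented toy network and should be read as an approximation of that first step, not as the authors’ method.

```python
import networkx as nx

# Hypothetical issue network: directed edges are hyperlinks or mentions
# between actors active on gender and media issues.
G = nx.DiGraph([
    ("UNESCO", "GenderLinks"), ("GenderLinks", "WACC"), ("WACC", "UNESCO"),
    ("BlogA", "GenderLinks"), ("BlogB", "GenderLinks"), ("WACC", "BlogA"),
])

# Rough proxies: in-degree for who attracts attention in the network,
# betweenness for who bridges otherwise separate clusters.
in_degree = nx.in_degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for actor in G.nodes:
    print(f"{actor:12s} in-degree={in_degree[actor]:.2f} betweenness={betweenness[actor]:.2f}")
```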

Opening up census data for research


Economic and Social Research Council (UK): “InFuse, an online search facility for census data, is enabling tailored search and investigation of UK census statistics – opening new opportunities for aggregating and comparing population counts.

Impacts

  • InFuse data were used for the ‘Smarter Travel’ research project studying how ‘smart choices’ for sustainable travel could be implemented and supported in transport planning. The research directly influenced UK climate-change agendas and policy, including:
    • the UK Committee on Climate Change recommendations on cost-effective-emission reductions
    • the Scottish Government’s targets and household advice for smarter travel
    • the UK Government’s Local Sustainable Transport Fund supporting 96 projects across England
    • evaluations for numerous Local Authority Transport Plans across the UK.
  • The Integration Hub, a web resource that was launched by Demos in 2015 to provide data about ethnic integration in England and Wales, uses data from InFuse to populate its interactive maps of the UK.
  • Census data downloaded from InFuse informed the Welsh Government for policies to engage Gypsy and Traveller families in education, showing that over 60 per cent aged over 16 from these communities had no qualifications.
  • Executive recruitment firm Sapphire Partners used census data from InFuse in a report on female representation on boards, revealing that 77 per cent of FTSE board members are men, and 70 per cent of new board appointments go to men.
  • A study by the Marie Curie charity into the differing needs of Black, Asian and minority ethnic groups in Scotland for end-of-life care used InFuse to determine that the minority ethnic population in Scotland has doubled since 2001 from 100,000 to 200,000 – highlighting the need for greater and more appropriate provision.
  • A Knowledge Transfer Partnership between homelessness charity Llamau and Cardiff University used InFuse data to show that Welsh young homeless people participating in the study were over twice as likely to have left school with no qualifications compared to UK-wide figures for their age group and gender….(More)”

 

Website Seeks to Make Government Data Easier to Sift Through


Steve Lohr at the New York Times: “For years, the federal government, states and some cities have enthusiastically made vast troves of data open to the public. Acres of paper records on demographics, public health, traffic patterns, energy consumption, family incomes and many other topics have been digitized and posted on the web.

This abundance of data can be a gold mine for discovery and insights, but finding the nuggets can be arduous, requiring special skills.

A project coming out of the M.I.T. Media Lab on Monday seeks to ease that challenge and to make the value of government data available to a wider audience. The project, called Data USA, bills itself as “the most comprehensive visualization of U.S. public data.” It is free, and its software code is open source, meaning that developers can build custom applications by adding other data.
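Because the underlying data is also exposed programmatically, developers can pull it into their own applications. A minimal sketch of such a query is below; the endpoint URL, parameter names and response fields are assumptions based on Data USA’s publicly documented interface and may have changed, so treat it as illustrative only.

```python
import requests

# Assumed Data USA endpoint and parameters; verify against the current docs.
resp = requests.get(
    "https://datausa.io/api/data",
    params={"drilldowns": "Nation", "measures": "Population"},
    timeout=30,
)
resp.raise_for_status()

# Assumed response shape: a "data" list of rows keyed by Year and Population.
for row in resp.json().get("data", []):
    print(row.get("Year"), row.get("Population"))
```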

Cesar A. Hidalgo, an assistant professor of media arts and sciences at the M.I.T. Media Lab who led the development of Data USA, said the website was devised to “transform data into stories.” Those stories are typically presented as graphics, charts and written summaries….Type “New York” into the Data USA search box, and a drop-down menu presents choices — the city, the metropolitan area, the state and other options. Select the city, and the page displays an aerial shot of Manhattan with three basic statistics: population (8.49 million), median household income ($52,996) and median age (35.8).

Lower on the page are six icons for related subject categories, including economy, demographics and education. If you click on demographics, one of the so-called data stories appears, based largely on data from the American Community Survey of the United States Census Bureau.

Using colorful graphics and short sentences, it shows the median age of foreign-born residents of New York (44.7) and of residents born in the United States (28.6); the most common countries of origin for immigrants (the Dominican Republic, China and Mexico); and the percentage of residents who are American citizens (82.8 percent, compared with a national average of 93 percent).

Data USA features a selection of data results on its home page. They include the gender wage gap in Connecticut; the racial breakdown of poverty in Flint, Mich.; the wages of physicians and surgeons across the United States; and the institutions that award the most computer science degrees….(More)

Accountable machines: bureaucratic cybernetics?


Alison Powell at LSE Media Policy Project Blog: “Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make oversight of processes that regulators and activists argue ought to be transparent more difficult than in the past. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how algorithms discussed by scholars are most commonly those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that impact on our collective participation and belonging as citizenship. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what makes it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly-updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The consequence of perpetual production of data by individuals, and the increasing capacity to analyse it even when it doesn’t appear to be related, has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’ when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins you don’t need to even remember to take them out. With pothole reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community, for example through library services?…

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers….(More)”

It’s not big data that discriminates – it’s the people that use it


 in the Conversation: “Data can’t be racist or sexist, but the way it is used can help reinforce discrimination. The internet means more data is collected about us than ever before and it is used to make automatic decisions that can hugely affect our lives, from our credit scores to our employment opportunities.

If that data reflects unfair social biases against sensitive attributes, such as our race or gender, the conclusions drawn from that data might also be based on those biases.

But this era of “big data” doesn’t need to entrench inequality in this way. If we build smarter algorithms to analyse our information and ensure we’re aware of how discrimination and injustice may be at work, we can actually use big data to counter our human prejudices.

This kind of problem can arise when computer models are used to make predictions in areas such as insurance, financial loans and policing. If members of a certain racial group have historically been more likely to default on their loans, or been more likely to be convicted of a crime, then the model can deem these people more risky. That doesn’t necessarily mean that these people actually engage in more criminal behaviour or are worse at managing their money. They may just be disproportionately targeted by police and sub-prime mortgage salesmen.

Excluding sensitive attributes

Data scientist Cathy O’Neil has written about her experience of developing models for homeless services in New York City. The models were used to predict how long homeless clients would be in the system and to match them with appropriate services. She argues that including race in the analysis would have been unethical.

If the data showed white clients were more likely to find a job than black ones, the argument goes, then staff might focus their limited resources on those white clients that would more likely have a positive outcome. While sociological research has unveiled the ways that racial disparities in homelessness and unemployment are the result of unjust discrimination, algorithms can’t tell the difference between just and unjust patterns. And so datasets should exclude characteristics that may be used to reinforce the bias, such as race.

But this simple response isn’t necessarily the answer. For one thing, machine learning algorithms can often infer sensitive attributes from a combination of other, non-sensitive facts. People of a particular race may be more likely to live in a certain area, for example. So excluding those attributes may not be enough to remove the bias….
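The proxy problem described here is easy to demonstrate: a dataset that excludes the sensitive attribute can still let a model reconstruct it from correlated features such as neighbourhood. The sketch below uses synthetic data and scikit-learn purely to illustrate that point; the features and correlations are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a sensitive attribute strongly correlated with
# neighbourhood, which is itself a seemingly "non-sensitive" feature.
sensitive = rng.integers(0, 2, size=n)                    # e.g. a protected group flag
neighbourhood = (sensitive + (rng.random(n) < 0.2)) % 2   # noisy proxy for the group
income = rng.normal(loc=30 + 10 * neighbourhood, scale=5) # another correlated feature

X = np.column_stack([neighbourhood, income])              # sensitive attribute excluded
X_tr, X_te, s_tr, s_te = train_test_split(X, sensitive, random_state=0)

# Even without the sensitive attribute in X, it can be recovered from the
# remaining features -- so simply dropping it does not remove the bias.
clf = LogisticRegression().fit(X_tr, s_tr)
print("accuracy recovering the sensitive attribute:", clf.score(X_te, s_te))
```

The same kind of check, run the other way, is also how collecting sensitive attributes can help: only by knowing group membership can an organisation measure whether outcomes differ unjustly across groups.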

An enlightened service provider might, upon seeing the results of the analysis, investigate whether and how racism is a barrier to their black clients getting hired. Equipped with this knowledge they could begin to do something about it. For instance, they could ensure that local employers’ hiring practices are fair and provide additional help to those applicants more likely to face discrimination. The moral responsibility lies with those responsible for interpreting and acting on the model, not the model itself.

So the argument that sensitive attributes should be stripped from the datasets we use to train predictive models is too simple. Of course, collecting sensitive data should be carefully regulated because it can easily be misused. But misuse is not inevitable, and in some cases, collecting sensitive attributes could prove absolutely essential in uncovering, predicting, and correcting unjust discrimination. For example, in the case of homeless services discussed above, the city would need to collect data on ethnicity in order to discover potential biases in employment practices….(More)

Participatory Budgeting