Discrimination by algorithm: scientists devise test to detect AI bias


At the Guardian: “There was the voice recognition software that struggled to understand women, the crime prediction algorithm that targeted black neighbourhoods and the online ad platform which was more likely to show men highly paid executive jobs.

Concerns have been growing about AI’s so-called “white guy problem” and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making.

Moritz Hardt, a senior research scientist at Google and a co-author of the paper, said: “Decisions based on machine learning can be both incredibly useful and have a profound impact on our lives … Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking.”

The paper was one of several on detecting discrimination by algorithms to be presented at the Neural Information Processing Systems (NIPS) conference in Barcelona this month, indicating a growing recognition of the problem.

Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago and co-author, said: “We are trying to enforce that you will not have inappropriate bias in the statistical prediction.”

The test is aimed at machine learning programs, which learn to make predictions about the future by crunching through vast quantities of existing data. Since the decision-making criteria are essentially learnt by the computer, rather than being pre-programmed by humans, the exact logic behind decisions is often opaque, even to the scientists who wrote the software….“Our criteria does not look at the innards of the learning algorithm,” said Srebro. “It just looks at the predictions it makes.”

Their approach, called Equality of Opportunity in Supervised Learning, works on the basic principle that when an algorithm makes a decision about an individual – be it to show them an online ad or award them parole – the decision should not reveal anything about the individual’s race or gender beyond what might be gleaned from the data itself.

For instance, if men were on average twice as likely to default on bank loans as women, and if you knew that a particular individual in a dataset had defaulted on a loan, you could reasonably conclude they were more likely (but not certain) to be male.

However, if an algorithm calculated that the most profitable strategy for a lender was to reject all loan applications from men and accept all female applications, the decision would precisely confirm a person’s gender.

“This can be interpreted as inappropriate discrimination,” said Srebro….(More)”.
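
A rough way to see what a test like Hardt and Srebro’s checks for: under “equality of opportunity”, applicants who genuinely qualify should be approved at similar rates regardless of group. The Python sketch below computes per-group true positive rates on hypothetical data; the function and data are illustrative assumptions, not the authors’ actual method or code.

```python
from collections import defaultdict

def true_positive_rates(y_true, y_pred, groups):
    """Compute the true positive rate separately for each group.

    y_true : list of 0/1 actual outcomes (e.g. repaid the loan = 1)
    y_pred : list of 0/1 model decisions (e.g. loan approved = 1)
    groups : list of group labels (e.g. gender) for each individual
    """
    tp = defaultdict(int)   # qualified individuals who were approved
    pos = defaultdict(int)  # qualified individuals in total
    for actual, predicted, group in zip(y_true, y_pred, groups):
        if actual == 1:
            pos[group] += 1
            if predicted == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Hypothetical toy data: a fair predictor should give qualified applicants
# a similar chance of approval in every group.
rates = true_positive_rates(
    y_true=[1, 1, 0, 1, 1, 0, 1, 0],
    y_pred=[1, 0, 0, 1, 1, 0, 0, 0],
    groups=["m", "m", "m", "m", "f", "f", "f", "f"],
)
print(rates)  # a large gap between groups is the kind of signal such a test flags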

Science Can Restore America’s Faith in Democracy


Ariel Procaccia in Wired: “…Like most other countries, individual states in the US employ the antiquated plurality voting system, in which each voter casts a vote for a single candidate, and the person who amasses the largest number of votes is declared the winner. If there is one thing that voting experts unanimously agree on, it is that plurality voting is a bad idea, or at least a badly outdated one…. Maine recently became the first US state to adopt instant-runoff voting; the approach will be used for choosing the governor and members of Congress and the state legislature….
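
To make the contrast with plurality concrete, here is a minimal sketch of instant-runoff counting, assuming each ballot is simply a ranked list of candidates; it illustrates the general idea, not Maine’s statutory procedure (ties here are broken arbitrarily).

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice votes
    until one candidate holds a majority of the remaining ballots."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's highest-ranked candidate still in the race.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the weakest candidate and let those ballots transfer.
        candidates.remove(min(tallies, key=tallies.get))

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # "C" wins once "B" is eliminated and transfers
```

Under plurality, the same ballots would be a dead heat between A and C; the runoff lets lower preferences decide.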

So why aren’t we already using cutting-edge voting systems in national elections? Perhaps because changing election systems usually itself requires an election, where short-term political considerations may trump long-term, scientifically grounded reasoning….Despite these difficulties, in the last few years state-of-the-art voting systems have made the transition from theory to practice, through not-for-profit online platforms that focus on facilitating elections in cities and organizations, or even just on helping a group of friends decide where to go to dinner. For example, the Stanford Crowdsourced Democracy Team has created an online tool whereby residents of a city can vote on how to allocate the city’s budget for public projects such as parks and roads. This tool has been used by New York City, Boston, Chicago, and Seattle to allocate millions of dollars. Building on this success, the Stanford team is experimenting with groundbreaking methods, inspired by computational thinking, to elicit and aggregate the preferences of residents.

The Princeton-based project All Our Ideas asks voters to compare pairs of ideas, and then aggregates these comparisons via statistical methods, ultimately providing a ranking of all the ideas. To date, roughly 14 million votes have been cast using this system, and it has been employed by major cities and organizations. Among its more whimsical use cases is the Washington Post’s 2010 holiday gift guide, where the question was “what gift would you like to receive this holiday season”; the disappointingly uncreative top idea, based on tens of thousands of votes, was “money”.
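
The statistical aggregation All Our Ideas performs is more sophisticated than this, but the core move — turning many “which do you prefer?” votes into a single ranking — can be sketched with a simple win-rate tally; the items and votes below are invented for illustration.

```python
from collections import defaultdict

def rank_from_pairwise(votes):
    """Rank ideas by the share of pairwise comparisons they win.

    votes: iterable of (winner, loser) pairs. A crude stand-in for the
    statistical models a platform like All Our Ideas actually fits.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return sorted(appearances, key=lambda idea: wins[idea] / appearances[idea], reverse=True)

votes = [("money", "gift card"), ("money", "book"), ("book", "gift card"), ("money", "book")]
print(rank_from_pairwise(votes))  # ['money', 'book', 'gift card']
```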

Finally, the recently launched website RoboVote (which I created with collaborators at Carnegie Mellon and Harvard) offers AI-driven voting methods to help groups of people make smart collective decisions. Applications range from selecting a spot for a family vacation or a class president, to potentially high-stakes choices such as which product prototype to develop or which movie script to produce.

These examples show that centuries of research on voting can, at long last, make a societal impact in the internet age. They demonstrate what science can do for democracy, albeit on a relatively small scale, for now….(More)”.

How Artificial Intelligence Will Usher in the Next Stage of E-Government


Daniel Castro at GovTech: “Since the earliest days of the Internet, most government agencies have eagerly explored how to use technology to better deliver services to citizens, businesses and other public-sector organizations. Early on, observers recognized that these efforts often varied widely in their implementation, and so researchers developed various frameworks to describe the different stages of growth and development of e-government. While each model is different, they all identify the same general progression from the informational, for example websites that make government facts available online, to the interactive, such as two-way communication between government officials and users, to the transactional, like applications that allow users to access government services completely online.

However, we will soon see a new stage of e-government: the perceptive.

The defining feature of the perceptive stage will be that the work involved in interacting with government will be significantly reduced and automated for all parties involved. This will come about principally from the integration of artificial intelligence (AI) — computer systems that can learn, reason and decide at levels similar to that of a human — into government services to make them more insightful and intelligent.

Consider the evolution of the Department of Motor Vehicles. The informational stage made it possible for users to find the hours for the local office; the interactive stage made it possible to ask the agency a question by email; and the transactional stage made it possible to renew a driver’s license online.

In the perceptive stage, the user will simply say, “Siri, I need a driver’s license,” and the individual’s virtual assistant will take over — collecting any additional information from the user, coordinating with the government’s system and scheduling any in-person meetings automatically. That’s right: AI might finally end your wait at the DMV.

In general, there are at least three ways that AI will impact government agencies. First, it will enable government workers to be more productive since the technology can be used to automate many tasks. …

Second, AI will create a faster, more responsive government. AI enables the creation of autonomous, intelligent agents — think online chatbots that answer citizens’ questions, real-time fraud detection systems that constantly monitor government expenditures and virtual legislative assistants that quickly synthesize feedback from citizens to lawmakers.

Third, AI will allow people to interact more naturally with digital government services…(More)”

Artificial Intelligence Could Help Colleges Better Plan What Courses They Should Offer


Jeffrey R. Young at EdSurge: “Big data could help community colleges better predict how industries are changing so they can tailor their IT courses and other programs. After all, if Amazon can forecast what consumers will buy and prestock items in their warehouses to meet the expected demand, why can’t colleges do the same thing when planning their curricula, using predictive analytics to make sure new degree or certificate programs are started just in time for expanding job opportunities?

That’s the argument made by Gordon Freedman, president of the nonprofit National Laboratory for Education Transformation. He’s part of a new center that will do just that, by building a data warehouse that brings together up-to-date information on what skills employers need and what colleges currently offer—and then applying artificial intelligence to attempt to predict when sectors or certain employment needs might be expanding.

He calls the approach “opportunity engineering,” and the center boasts some heavy-hitting players to assist in the efforts, including the University of Chicago, the San Diego Supercomputing Center and Argonne National Laboratory. It’s called the National Center for Opportunity Engineering & Analysis.

Ian Roark, vice president of workforce development at Pima Community College in Arizona, is among those eager for this kind of “opportunity engineering” to emerge.

He explains when colleges want to start new programs, they face a long haul—it takes time to develop a new curriculum, put it through an internal review, and then send it through an accreditor….

Other players are already trying to translate the job market into a giant data set to spot trends. LinkedIn sits on one of the biggest troves of data, with hundreds of millions of job profiles, and ambitions to create what it calls the “economic graph” of the economy. But not everyone is on LinkedIn, which attracts mainly those in white-collar jobs. And companies such as Burning Glass Technologies have scanned hundreds of thousands of job listings and attempt to provide real-time intelligence on what employers say they’re looking for. Those still don’t paint the full picture, Freedman argues; they miss, for example, what jobs are forming at companies.

“We need better information from the employer, better information from the job seeker and better information from the college, and that’s what we’re going after,” Freedman says…(More)”.

Four steps to precision public health


Scott F. Dowell, David Blazes & Susan Desmond-Hellmann at Nature: “When domestic transmission of Zika virus was confirmed in the United States in July 2016, the entire country was not declared at risk — nor even the entire state of Florida. Instead, precise surveillance defined two at-risk areas of Miami-Dade County, neighbourhoods measuring just 2.6 and 3.9 square kilometres. Travel advisories and mosquito control focused on those regions. Six weeks later, ongoing surveillance convinced officials to lift restrictions in one area and expand the other.

By contrast, a campaign against yellow fever launched this year in sub-Saharan Africa defines risk at the level of entire nations, often hundreds of thousands of square kilometres. More granular assessments have been deemed too complex.

The use of data to guide interventions that benefit populations more efficiently is a strategy we call precision public health. It requires robust primary surveillance data, rapid application of sophisticated analytics to track the geographical distribution of disease, and the capacity to act on such information [1].

The availability and use of precise data are becoming the norm in wealthy countries. But large swathes of the developing world are not reaping its advantages. In Guinea, it took months to assemble enough data to clearly identify the start of the largest Ebola outbreak in history. This should take days. Sub-Saharan Africa has the highest rates of childhood mortality in the world; it is also where we know the least about causes of death…..

The value of precise disease tracking was baked into epidemiology from the start. In 1854, John Snow famously located cholera cases in London. His mapping of the spread of infection through contaminated water dealt a blow to the idea that the disease was caused by bad air. These days, people and pathogens move across the globe swiftly and in great numbers. In 2009, the H1N1 ‘swine flu’ influenza virus took just 35 days to spread from Mexico and the United States to China, South Korea and 12 other countries…

The public-health community is sharing more data faster; expectations are higher than ever that data will be available from clinical trials and from disease surveillance. In the past two years, the US National Institutes of Health, the Wellcome Trust in London and the Gates Foundation have all instituted open data policies for their grant recipients, and leading journals have declared that sharing data during disease emergencies will not impede later publication.

Meanwhile, improved analysis, data visualization and machine learning have expanded our ability to use disparate data sources to decide what to do. A study published last year [4] used precise geospatial modelling to infer that insecticide-treated bed nets were the single most influential intervention in the rapid decline of malaria.

However, in many parts of the developing world, there are still hurdles to the collection, analysis and use of more precise public-health data. Work towards malaria elimination in South Africa, for example, has depended largely on paper reporting forms, which are collected and entered manually each week by dozens of subdistricts, and eventually analysed at the province level. This process would be much faster if field workers filed reports from mobile phones.

…Frontline workers should not find themselves frustrated by global programmes that fail to take into account data on local circumstances. Wherever they live — in a village, city or country, in the global south or north — people have the right to public-health decisions that are based on the best data and science possible, that minimize risk and cost, and maximize health in their communities…(More)”

21st Century Enlightenment Revisited


Matthew Taylor at the RSA: “The French historian Tzvetan Todorov describes the three essential ideas of the Enlightenment as ‘autonomy’, ‘universalism’ and ‘humanism’. The ideal of autonomy speaks to every individual’s right to self-determination. Universalism asserts that all human beings equally deserve basic rights and dignity (although, of course, in the 18th and 19th century most thinkers restricted this ambition to educated white men). The idea of humanism is that it is up to the people – not Gods or monarchs – through the use of rational inquiry to determine the path to greater human fulfilment….

21st Century Enlightenment 

Take autonomy; too often today we think of freedom either as a shrill demand to be able to turn our backs on wider society or in the narrow possessive terms of consumerism. Yet, brain and behavioural science have confirmed the intuition of philosophers through the ages that genuine autonomy is something we only attain when we become aware of our human frailties and understand our truly social nature. Of course, freedom from oppression is the base line, but true autonomy is not a right to be granted but a goal to be pursued through self-awareness and engagement in society.

What of universalism, or social justice as we now tend to think of it? In most parts of the world and certainly in the West there have been incredible advances in equal rights. Discrimination and injustice still exist, but through struggle and reform huge strides have been made in widening the Enlightenment brotherhood of rich white men to women, people of different ethnicity, homosexuals and people with disabilities. Indeed the progress in legal equality over recent decades stands in contrast to the stubborn persistence, and even worsening, of social inequality, particularly based on class.

But the rationalist universalism of human rights needs an emotional corollary. People may be careful not to use the wrong words, but they still harbour resentment and suspicion towards other groups. …

Finally, humanism or the call of progress. The utilitarian philosophy that arose from the Enlightenment spoke to the idea that, free from religious or autocratic dogma, the best routes to human fulfilment could be identified and should be pursued. The great motors of human progress – markets, science and technology, the modern state – shifted into gear and started to accelerate. Aspects of all these phenomena, indeed of Enlightenment ideas themselves, could be found at earlier stages of human history – what was different was the way they fed off each other and became dominant. Yet, in the process, the idea that these forces could deliver progress often became elided with the assumption that their development was the same as human progress.

Today this danger of letting the engines of progress determine the direction of the human journey feels particularly acute in relation to markets and technology. There is, for example, more discussion of how humans should best adapt to AI and robots than about how technological inquiry might be aligned with human fulfilment. The hollowing out of democratic institutions has diminished the space for public debate about what progress should comprise at just the time when the pace and scale of change makes those debates particularly vital.

A twenty-first century enlightenment reinstates true autonomy over narrow ideas of freedom; it asserts a universalism based not just on legal status but on empathy and social connection, and reminds us that humanism should lie at the heart of progress.

Think like a system, act like an entrepreneur

There is one new strand I want to add to the 2010 account. In the face of many defeats, we must care as much about how we achieve change as about the goals we pursue. At the RSA we talk about ‘thinking like a system and acting like an entrepreneur’, a method which seeks to avoid the narrowness and path dependency of so many unsuccessful models of change. To alter the course our society is now on we need more fully to understand the high barriers to change but then to act more creatively and adaptively when we spot opportunities to take a different path….(More)”

The Government Isn’t Doing Enough to Solve Big Problems with AI


Mike Orcutt at MIT Technology Review: “The government should play a bigger role in developing new tools based on artificial intelligence, or we could miss out on revolutionary applications because they don’t have obvious commercial upside.

That was the message from prominent AI technologists and researchers at a Senate committee hearing last week. They agreed that AI is in a crucial developmental moment, and that government has a unique opportunity to shape its future. They also said that the government is in a better position than technology companies to invest in AI applications aimed at broad societal problems.

Today just a few companies, led by Google and Facebook, account for the lion’s share of AI R&D in the U.S. But Eric Horvitz, technical fellow and managing director of Microsoft Research, told the committee members that there are important areas that are rich and ripe for AI innovation, such as homelessness and addiction, where the industry isn’t making big investments. The government could help support those pursuits, Horvitz said.

For a more specific example, take the plight of a veteran seeking information online about medical options, says Andrew Moore, dean of the school of computer science at Carnegie Mellon University. If an application that could respond to freeform questions, search multiple government data sets at once, and provide helpful information about a veteran’s health care options were commercially attractive, it might be available already, he says.

There is a “real hunger for basic research,” says Greg Brockman, cofounder and chief technology officer of the nonprofit research company OpenAI, because technologists understand that they haven’t made the most important advances yet. If we continue to leave the bulk of it to industry, not only could we miss out on useful applications, but also on the chance to adequately explore urgent scientific questions about ethics, safety, and security while the technology is still young, says Brockman. Since the field of AI is growing “exponentially,” it’s important to study these things now, he says, and the government could make that a “top line thing that they are trying to get done.”….(More)”.

From policing to news, how algorithms are changing our lives


Carl Miller at The National: “First, write out the numbers one to 100 in 10 rows. Cross out the one. Then circle the two, and cross out all of the multiples of two. Circle the three, and do likewise. Follow those instructions, and you’ve just completed the first three steps of an algorithm, and an incredibly ancient one. Twenty-three centuries ago, Eratosthenes sat in the great library of Alexandria, using this process (it is called Eratosthenes’ Sieve) to find and separate prime numbers. Algorithms are nothing new, indeed even the word itself is old. Fifteen centuries after Eratosthenes, Algoritmi de numero Indorum appeared on the bookshelves of European monks, and with it, the word to describe something very simple in essence: follow a series of fixed steps, in order, to achieve a given answer to a given problem. That’s it, that’s an algorithm. Simple.
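
For readers who want to see it in code, here is a minimal Python sketch of the same pen-and-paper procedure the author describes (the function name and the limit of 100 are just illustrative choices):

```python
def eratosthenes_sieve(limit=100):
    """Return the primes up to `limit` by repeatedly crossing out multiples,
    as in the pen-and-paper procedure described above."""
    crossed_out = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not crossed_out[n]:
            primes.append(n)                      # "circle" the number...
            for multiple in range(n * n, limit + 1, n):
                crossed_out[multiple] = True      # ...and cross out its multiples
    return primes

print(eratosthenes_sieve(100))  # [2, 3, 5, 7, 11, ...]
```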

Apart from the fact, of course, that the story of algorithms is not so simple, nor so humble. In the shocked wake of Donald Trump’s victory in the United States presidential election, a culprit needed to be found to explain what had happened. What had, against the odds, and in the face of thousands of polls, caused this tectonic shift in US political opinion? Soon the finger was pointed. On social media, and especially on Facebook, it was alleged that pro-Trump stories, based on inaccurate information, had spread like wildfire, often eclipsing real news and honestly-checked facts.
But no human editor was thrust into the spotlight. What took centre stage was an algorithm: Facebook’s news algorithm. It was this, critics said, that was responsible for allowing the “fake news” to circulate. This algorithm wasn’t humbly finding prime numbers; it was responsible for the news that you saw (and of course didn’t see) on the largest source of news in the world. This algorithm had somehow risen to become more powerful than any newspaper editor in the world, powerful enough to possibly throw an election.
So why all the fuss? Something is now happening in society that is throwing algorithms into the spotlight. They have taken on a new significance, even an allure and mystique. Algorithms are simply tools, but a web of new technologies is vastly increasing the power that these tools have over our lives. The startling leaps forward in artificial intelligence have meant that algorithms have learned how to learn, and to become capable of accomplishing tasks and tackling problems that they were never able to achieve before. Their learning is fuelled with more data than ever before, collected, stored and connected with the constellations of sensors, data farms and services that have ushered in the age of big data.

Algorithms are also doing more things; whether welding, driving or cooking, thanks to robotics. Wherever there is some kind of exciting innovation happening, algorithms are rarely far away. They are being used in more fields, for more things, than ever before and are incomparably, incomprehensibly more capable than the algorithms recognisable to Eratosthenes….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model, is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
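
Computing the quantities Christian mentions is the easy part; choosing between them is the civic question. As a minimal illustration (with hypothetical labels and groups, not any particular deployed model), per-group false positive and false negative rates can be tallied like this:

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Per-group false positive rate and false negative rate for a binary model.
    Illustrative only: real fairness audits also weigh base rates and the
    costs attached to each kind of error."""
    stats = {}
    for actual, predicted, group in zip(y_true, y_pred, groups):
        s = stats.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if actual == 0:
            s["neg"] += 1
            s["fp"] += predicted == 1   # flagged despite a negative outcome
        else:
            s["pos"] += 1
            s["fn"] += predicted == 0   # missed despite a positive outcome
    return {
        g: {
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else None,
        }
        for g, s in stats.items()
    }

# Toy data: equalising one rate across groups can force the other apart --
# that is exactly the tradeoff the "polis" has to decide on.
print(error_rates_by_group(
    y_true=[1, 0, 1, 0], y_pred=[1, 1, 0, 0], groups=["a", "a", "b", "b"]))
```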

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report: ): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries about protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates see this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)