Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for


Paper by Lilian Edwards and Michael Veale: “Algorithms, particularly of the machine learning (ML) variety, are increasingly consequential to individuals’ lives but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively presents as a means to “open the black box”, hence allowing individual challenge and redress, as well as possibilities to foster accountability of ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as “meaningful information about the logic of processing” — is unlikely to be provided by the kind of ML “explanations” computer scientists have been developing. ML explanations are restricted by the type of explanation sought, the multi-dimensionality of the domain, and the type of user seeking an explanation. However (section 5), “subject-centric” explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers’ worries of IP or trade secrets disclosure.

As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy”. However, in our final section, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (“right to be forgotten”) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds of building a better, more respectful and more user-friendly algorithmic society….(More)”
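
To make the paper’s “subject-centric” idea concrete, here is a minimal sketch of a pedagogical, query-local explanation. The black-box model, the features and the perturbation scale below are illustrative assumptions, not the authors’ method; the point is only that the explainer queries the model’s inputs and outputs in a small region around one individual’s case, without opening up its internals.

```python
# Sketch of a "subject-centric", pedagogical explanation: rather than
# decomposing the model's internals, probe its input-output behaviour
# around one query point and fit a simple local surrogate.
# Model, data, and perturbation scale are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

# A stand-in "black box" decision model.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, query, n_samples=500, scale=0.3):
    """Fit a linear surrogate to the model's behaviour near `query`."""
    rng = np.random.default_rng(0)
    # Perturb the query point to explore only its local neighbourhood.
    neighbourhood = query + rng.normal(0.0, scale, size=(n_samples, query.shape[0]))
    preds = model.predict_proba(neighbourhood)[:, 1]
    surrogate = LinearRegression().fit(neighbourhood, preds)
    # The surrogate's coefficients act as local feature weights.
    return surrogate.coef_

query = X[0]  # one individual's case
for i, w in enumerate(explain_locally(black_box, query)):
    print(f"feature {i}: local weight {w:+.3f}")
```

Because a surrogate along these lines only ever queries inputs and outputs, nothing about the model’s structure is disclosed, which is what lets pedagogical explanations sidestep the IP and trade-secret worries the authors mention.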

Could Big Data Help End Hunger in Africa?


Lenny Ruvaga at VOA News: “Computer algorithms power much of modern life, from our Facebook feeds to international stock exchanges. Could they help end malnutrition and hunger in Africa? The International Center for Tropical Agriculture thinks so.

The International Center for Tropical Agriculture has spent the past four years developing the Nutrition Early Warning System, or NEWS.

The goal is to catch the subtle signs of a hunger crisis brewing in Africa as much as a year in advance.

CIAT says the system uses machine learning. As more information is fed into the system, the algorithms will get better at identifying patterns and trends. The system will get smarter.

Information Technology expert Andy Jarvis leads the project.

“The cutting edge side of this is really about bringing in streams of information from multiple sources and making sense of it. … But it is a huge volume of information and what it does, the novelty then, is making sense of that using things like artificial intelligence, machine learning, and condensing it into simple messages,” he said.

Other nutrition surveillance systems exist, like FEWS NET, the Famine Early Warning Systems Network, which was created in the mid-1980s.

But CIAT says NEWS will be able to draw insights from a massive amount of diverse data enabling it to identify hunger risks faster than traditional methods.

“What is different about NEWS is that it pays attention to malnutrition, not just drought or famine, but the nutrition outcome that really matters, malnutrition especially in women and children. For the first time, we are saying these are the options way ahead of time. That gives policy makers an opportunity to really do what they intend to do which is make the lives of women and children better in Africa,” said Dr. Mercy Lung’aho, a CIAT nutrition expert.

While food emergencies like famine and drought grab headlines, the International Center for Tropical Agriculture says chronic malnutrition affects one in four people in Africa, taking a serious toll on economic growth and leaving them especially vulnerable in times of crisis….(More)”.
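
The article does not describe NEWS’s actual architecture, but the claim that “the system will get smarter” as more information is fed in maps onto a standard technique, incremental (online) learning. A minimal sketch under that assumption, with invented indicator features and a toy risk rule:

```python
# Illustrative only: the article does not describe NEWS's internals.
# This sketches the general idea of a model that improves as new
# batches of indicator data arrive, via incremental learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = no crisis, 1 = emerging malnutrition risk

def monthly_update(model, features, outcomes):
    """Fold one month's indicators into the model without retraining from scratch."""
    model.partial_fit(features, outcomes, classes=classes)
    return model

rng = np.random.default_rng(0)
for month in range(24):  # two years of simulated reporting
    # Invented features: e.g. rainfall, food prices, crop yield, conflict events.
    X_batch = rng.normal(size=(50, 4))
    y_batch = (X_batch[:, 1] - X_batch[:, 0] > 0.5).astype(int)  # toy risk rule
    monthly_update(model, X_batch, y_batch)

print("risk score for a new district:",
      model.predict_proba(rng.normal(size=(1, 4)))[0, 1])
```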

The Way Ahead


Transcript of lecture delivered by Stephen Fry on the 28th May 2017 • Hay Festival, Hay-on-Wye: “Peter Florence, the supremo of this great literary festival, asked me some months ago if I might, as part of Hay’s celebration of the five hundredth anniversary of Martin Luther’s kickstarting of the reformation, suggest a reform of the internet…

You will be relieved to know, that unlike Martin Luther, I do not have a full 95 theses to nail to the door, or in Hay’s case, to the tent flap. It might be worth reminding ourselves perhaps, however, of the great excitements of the early 16th century. I do not think it is a coincidence that Luther grew up as one of the very first generation to have access to printed books, much as some of you may have children who were the first to grow up with access to e-books, to iPads and to the internet….

The next big step for AI is the inevitable achievement of Artificial General Intelligence, or AGI, sometimes called ‘full artificial intelligence’, the point at which machines really do think like humans. In 2013, hundreds of experts were asked when they thought AGI may arise and the median prediction was the year 2040. After that the probability, most would say the certainty, is artificial super-intelligence and the possibility of reaching what is called the Technological Singularity – what computer pioneer John von Neumann described as the point “…beyond which human affairs, as we know them, could not continue.” I don’t think I have to worry about that. Plenty of you in this tent have cause to, and your children beyond question will certainly know all about it. Unless of course the climate causes such havoc that we reach a Meteorological Singularity. Or the nuclear codes are penetrated by a self-teaching algorithm whose only purpose is to find a way to launch…

It’s clear that, while it is hard to calculate the cascade upon cascade of new developments and their positive effects, we already know the dire consequences and frightening scenarios that threaten to engulf us. We know them because science fiction writers and dystopians in all media have got there before us and laid the nightmare visions out. Their imaginations have seen it all coming. So whether you believe Ray Bradbury, George Orwell, Aldous Huxley, Isaac Asimov, Margaret Atwood, Ridley Scott, Anthony Burgess, H. G. Wells, Stanley Kubrick, Kazuo Ishiguro, Philip K. Dick, William Gibson, John Wyndham, James Cameron, the Wachowskis or the scores and scores of other authors and film-makers who have painted scenarios of chaos and doom, you can certainly believe that a great transformation of human society is under way, greater than Gutenberg’s revolution – greater I would submit than the Industrial Revolution (though clearly dependent on it) – the greatest change to our ways of living since we moved from hunting and gathering to settling down in farms, villages and seaports and started to trade and form civilisations. Whether it will alter the behaviour, cognition and identity of the individual in the same way it is certain to alter the behaviour, cognition and identity of the group, well that is a hard question to answer.

But believe me when I say that it is happening. To be frank it has happened. The unimaginably colossal sums of money that have flowed to the first two generations of Silicon Valley pioneers have filled their coffers, their war chests, and they are all investing in autonomous cars, biotech, the IoT, robotics, Artificial Intelligence and their convergence. None more so than the outlier, the front-runner Mr Elon Musk, whose Neuralink system is well worth your reading about online on the great waitbutwhy.com website. Its author Tim Urban is a paid consultant of Elon Musk’s, so he has the advantage of knowing what he is writing about but the potential disadvantage of being parti pris and lacking in objectivity. Elon Musk made enough money from his part in the founding and running of PayPal to fund his manifold exploits. The Neuralink project joins his Tesla automobile company and subsidiary battery and solar power businesses, his SpaceX reusable spacecraft group, his OpenAI initiative and Hyperloop transport system. The 1950s and 60s Space Race was funded by sovereign governments; this race is funded by private equity, by the original investors in Google, Apple, Facebook and so on. Nation states and their agencies are not major players in this game, least of all poor old Britain. Even if our politicians were across this issue, and they absolutely are not, our votes would still be an irrelevance….

So one thesis I would have to nail up to the tent is to clamour for government to bring all this deeper into schools and colleges. The subject of the next technological wave, I mean, not pornography and prostitution. Get people working at the leading edge of AI and robotics to come into the classrooms. But more importantly listen to them – even if what they say is unpalatable, our masters must have the intellectual courage and honesty to say if they don’t understand and ask for repetition and clarification. This time, in other words, we mustn’t let the wave engulf us, we must ride its crest. It’s not quite too late to re-gear governmental and educational planning and thinking….

The witlessness of our leaders and of ourselves is indeed a problem. The real danger surely is not technology but technophobic Canute-ism, a belief that we can control, change or stem the technological tide instead of understanding that we need to learn how to harness it. Driving cars is dangerous, but we developed driving lesson requirements, traffic controls, seat-belts, maintenance protocols, proximity sensors, emission standards – all kinds of ways of mitigating the danger so as not to deny ourselves the life-changing benefits of motoring.

We understand why angry Ned Ludd destroyed the weaving machines that were threatening his occupation (Luddites were prophetic in their way, it was weaving machines that first used the punched cards on which computers relied right up to the 1970s). We understand too why French workers took their clogs, their sabots as they were called, and threw them into the machinery to jam it up, giving us the word sabotage. But we know that they were in the end, if you’ll pardon the phrase, pissing into the wind. No technology has ever been stopped.

So what is the thesis I am nailing up? Well, there is no authority for me to protest to, no equivalent of Pope Leo X for it to be delivered to, and I am certainly no Martin Luther. The only thesis I can think worth nailing up is absurdly simple. It is a cry as much from the heart as from the head and it is just one word – Prepare. We have an advantage over our hunter gatherer and farming ancestors, for whether it is Winter that is coming, or a new Spring, is entirely in our hands, so long as we prepare….(More)”.

Eliminating the Human


I suspect that we almost don’t notice this pattern because it’s hard to imagine what an alternative focus of tech development might be. Most of the news we get barraged with is about algorithms, AI, robots and self-driving cars, all of which fit this pattern, though there are indeed many technological innovations underway that have nothing to do with eliminating human interaction from our lives. CRISPR-Cas9 in genetics, new films that can efficiently and cheaply cool houses, and quantum computing, to name a few, but what we read about most and what touches us daily is the trajectory towards less human involvement. Note: I don’t consider chat rooms and product reviews as “human interaction”; they’re mediated and filtered by a screen.

I am not saying these developments are not efficient and convenient; this is not a judgement regarding the services and technology. I am simply noticing a pattern and wondering if that pattern means there are other possible roads we could be going down, and that the way we’re going is not in fact inevitable, but is (possibly unconsciously) chosen.

Here are some examples of tech that allows for less human interaction…

Lastly, “Social” media: social “interaction” that isn’t really social.

While the appearance on social networks is one of connection—as Facebook and others frequently claim—the fact is a lot of social media is a simulation of real social connection. As has been in evidence recently, social media actually increases divisions amongst us by amplifying echo effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our similarly inclined friends like… or more likely now what someone has paid for us to see in an ad that mimics content. In this way, we actually become less connected except to those in our group….

Many transformative movements in the past succeeded based on leaders, agreed-upon principles and organization. Although social media is a great tool for rallying people and bypassing government channels, it does not guarantee eventual success.

Social media is not really social—ticking boxes and having followers and getting feeds is NOT being social—it’s a screen simulation of human interaction. Human interaction is much more nuanced and complicated than what happens online. Engineers like things that are quantifiable. Smells, gestures, expression, tone of voice, etc. etc.—in short, all the various ways we communicate are VERY hard to quantify, and those are often how we tell if someone likes us or not….

To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that our responses, often prodded by an emotion, will more likely than not offer the best way to deal with a situation….

Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we as individuals are inhabitants of networks, we are relationships. That is how we prosper and thrive….(More)”.

A DARPA Perspective on Artificial Intelligence


DARPA: “What’s the ground truth on artificial intelligence (AI)? In this video, John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), attempts to demystify AI: what it can do, what it can’t do, and where it is headed. Through a discussion of the “three waves of AI” and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI already has played, does play now, and could play in the future. (Slides can be downloaded here)….”

An AI Ally to Combat Bullying in Virtual Worlds


Simon Parkin at MIT Technology Review: “In any fictionalized universe, the distinction between playful antagonism and earnest harassment can be difficult to discern. Name-calling between friends playing a video game together is often a form of camaraderie. Between strangers, however, similar words assume a different, more troublesome quality. Being able to distinguish between the two is crucial for any video-game maker that wants to foster a welcoming community.

Spirit AI hopes to help developers support players and discourage bullying behavior with an abuse detection and intervention system called Ally. The software monitors interactions between players—what people are saying to each other and how they are behaving—through the available actions within a game or social platform. It’s able to detect verbal harassment and also nonverbal provocation—for example, one player stalking another’s avatar or abusing reporting tools.

“We’re looking at interaction patterns, combined with natural-language classifiers, rather than relying on a list of individual keywords,” explains Ruxandra Dariescu, one of Ally’s developers. “Harassment is a nuanced problem.”

When Ally identifies potentially abusive behavior, it checks to see if the potential abuser and the other player have had previous interactions. Where Ally differs from existing moderation software is that rather than simply send an alert to the game’s developers, it is able to send a computer-controlled virtual character to check in with the player—one that, through Spirit AI’s natural-language tools, is able to converse in the game’s tone and style (see “A Video-Game Algorithm to Solve Online Abuse”)….(More)”.
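
Spirit AI has not published Ally’s internals, but the approach Dariescu describes, blending natural-language signals with interaction patterns rather than relying on keywords, can be sketched in toy form. All features, weights and thresholds below are assumptions for illustration, not Spirit AI’s implementation:

```python
# Toy sketch of the pattern described above: combine a text-classifier
# signal with behavioural features instead of keyword matching.
# Every feature, weight and threshold here is an invented assumption.
from dataclasses import dataclass

@dataclass
class Interaction:
    toxicity: float         # score from some text classifier, 0..1
    prior_messages: int     # how often these two players have talked before
    recent_reports: int     # reports filed against the sender recently
    proximity_follows: int  # times the sender's avatar shadowed the target

def harassment_score(event: Interaction) -> float:
    """Weighted blend of language and behaviour signals (toy weights)."""
    # Banter between long-time acquaintances is discounted.
    familiarity_discount = min(event.prior_messages / 50.0, 1.0)
    behaviour = 0.1 * event.recent_reports + 0.05 * event.proximity_follows
    return event.toxicity * (1.0 - 0.5 * familiarity_discount) + behaviour

def should_intervene(event: Interaction, threshold: float = 0.7) -> bool:
    """Above threshold, dispatch the check-in character rather than an alert."""
    return harassment_score(event) >= threshold

# Same words, different relationships:
strangers = Interaction(toxicity=0.8, prior_messages=0, recent_reports=2, proximity_follows=3)
friends = Interaction(toxicity=0.8, prior_messages=200, recent_reports=0, proximity_follows=0)
print(should_intervene(strangers), should_intervene(friends))  # True False
```

The design point is in the last two lines: identical words score differently depending on the relationship between the players, which is precisely what a list of individual keywords cannot capture.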

How Technology Can Help Solve Societal Problems


Barry Libert, Megan Beck, Brian Komar and Josue Estrada at Knowledge@Wharton: “…nonprofit groups, academic institutions and philanthropic organizations engaged in social change are struggling to adapt to the new global, technological and virtual landscape.

Legacy modes of operation, governance and leadership competencies rooted in the age of physical realities continue to dominate the space. Further, organizations still operate in internal and external silos — far from crossing industry lines, which are blurring. And their ability to lead in a world that is changing at an exponential rate seems hampered by their mental models and therefore their business models of creating and sustaining value as well.

If civil society is not to get drenched and sink like a stone, it must start swimming in a new direction. This new direction starts with social organizations fundamentally rethinking the core assumptions driving their attitudes, behaviors and beliefs about creating long-term sustainable value for their constituencies in an exponentially networked world. Rather than using an organization-centric model, the nonprofit sector and related organizations need to adopt a mental model based on scaling relationships in a whole new way using today’s technologies — the “social change as a platform” (SCaaP) model.

Embracing social change as a platform is more than a theory of change, it is a theory of being — one that places a virtual network of individuals seeking social change at the center of everything and leverages today’s digital platforms (such as social media, mobile, big data and machine learning) to help stakeholders (contributors and consumers) connect, collaborate, and interact with each other, exchanging value to effectuate exponential social change and impact.

SCaaP builds on the government as a platform movement (Gov 2.0) launched by technologist Tim O’Reilly and many others. Just as Gov 2.0 was not about a new kind of government but rather, as O’Reilly notes, “government stripped down to its core, rediscovered and reimagined as if for the first time,” so it is with social change as a platform. Civil society is the primary location for collective action and SCaaP helps to rebuild the kind of participatory community celebrated by 19th century French historian Alexis de Tocqueville when he observed that Americans’ propensity for civic association is central to making our democratic experiment work. “Americans of all ages, all stations in life, and all types of disposition,” he noted, “are forever forming associations.”

But SCaaP represents a fundamental shift in how civil society operates. It is grounded in exploiting new digital technologies, but extends well beyond them to focus on how organizations think about advancing their core mission — do they go at it alone or do they collaborate as part of a network? SCaaP requires thinking and operating, in all things, as a network. It requires updating the core DNA that runs through social change organizations to put relationships in service of a cause at the center, not the institution. When implemented correctly, SCaaP will impact everything — from the way an organization allocates resources to how value is captured and measured to helping individuals achieve their full potential….(More)”.

Artificial intelligence prevails at predicting Supreme Court decisions


Matthew Hutson at Science: “See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard….

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse”: the court has reversed the lower court in 63% of cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago….Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says….(More)”.
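
The study’s general recipe can be sketched roughly as follows: encode case-level features, train only on earlier terms, predict forward in time, and compare against the “always guess reverse” baseline. The synthetic data, feature encoding and the random-forest classifier here are stand-ins, not the authors’ exact configuration:

```python
# Rough sketch of the recipe described above, on synthetic data.
# Features, encoding and model settings are placeholder assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_votes = 5000

# Toy stand-ins for features like justice, term, issue area, court of
# origin, and whether oral argument was heard.
X = np.column_stack([
    rng.integers(0, 9, n_votes),        # justice id
    rng.integers(1946, 2016, n_votes),  # term (also used for the time split)
    rng.integers(0, 14, n_votes),       # issue area
    rng.integers(0, 13, n_votes),       # circuit of origin
    rng.integers(0, 2, n_votes),        # oral argument heard?
])
y = rng.integers(0, 2, n_votes)         # 1 = vote to reverse, 0 = affirm

# Walk-forward evaluation: for each term, train only on earlier terms.
terms = X[:, 1]
correct = total = 0
for term in range(1960, 2016):
    train, test = terms < term, terms == term
    if train.sum() == 0 or test.sum() == 0:
        continue
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[train], y[train])
    correct += (clf.predict(X[test]) == y[test]).sum()
    total += test.sum()

print(f"out-of-sample accuracy: {correct / total:.1%}")
# Compare against the baseline of always guessing "reverse":
print(f"always-reverse baseline: {(y[terms >= 1960] == 1).mean():.1%}")
```

On real Supreme Court Database records rather than random labels, the walk-forward split is what keeps the reported accuracy honest: every prediction is made using only decisions that preceded it.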

The rise of the algorithm need not be bad news for humans


At the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly apparent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which are regularly taking decisions instead of us. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility when things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable….(More)”.
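
One concrete reading of the third and fourth points is a human-in-the-loop design: let the algorithm decide only where it is confident, and route uncertain cases, with an audit trail, to a person. A minimal sketch, in which the model, data and confidence threshold are all illustrative assumptions:

```python
# Minimal human-in-the-loop sketch of the oversight described above:
# the model decides only where it is confident; uncertain cases are
# deferred to a human reviewer, and the deferral itself is recorded.
# Model, data and threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

CONFIDENCE_THRESHOLD = 0.9  # below this, a human takes the decision

def decide(model, case):
    proba = model.predict_proba(case.reshape(1, -1))[0]
    confidence = float(proba.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": int(proba.argmax()), "by": "algorithm",
                "confidence": round(confidence, 3)}
    # Accountability: the deferral is logged for later audit.
    return {"decision": None, "by": "human review",
            "confidence": round(confidence, 3)}

for case in X[400:405]:
    print(decide(model, case))
```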

The Next Great Experiment


A collection of essays from technologists and scholars about how machines are reshaping civil society, in the Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”