The rise of the algorithm need not be bad news for humans


 at the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on ‘algorithms in decision-making’ on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly pervasive, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which routinely make decisions on our behalf. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility in the event that things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable…(More)”.

The Next Great Experiment


A collection of essays from technologists and scholars about how machines are reshaping civil society, in the Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”

How maps and machine learning are helping to eliminate malaria


Allie Lieber at The Keyword: “Today is World Malaria Day, a moment dedicated to raising awareness and improving access to tools to prevent malaria. The World Health Organization says nearly half of the world’s population is at risk for malaria, and estimates that in 2015 there were 212 million malaria cases resulting in 429,000 deaths. In places with high transmission rates, children under five account for 70 percent of malaria deaths.

DiSARM (Disease Surveillance and Risk Monitoring), a project led by the Malaria Elimination Initiative and supported by the Bill and Melinda Gates Foundation and Clinton Health Access Initiative, is fighting the spread of malaria by mapping the places where malaria could occur. With the help of Google Earth Engine, DiSARM creates high resolution “risk maps” that help malaria control programs identify the areas where they should direct resources for prevention and treatment.

We sat down with Hugh Sturrock, who leads the DiSARM project and is an Assistant Professor of Epidemiology and Biostatistics in the University of California, San Francisco’s Global Health Group, to learn more about DiSARM’s fight against malaria, and how Google fits in….

How does DiSARM use Google Earth Engine to help fight malaria?

If we map where malaria is most likely to occur, we can target those areas for action. Every time someone is diagnosed with malaria in Swaziland and Zimbabwe, a team goes to the village where the infection occurred and collects a GPS point with the precise infection location. Just looking at these points won’t allow you to accurately determine the risk of malaria, though. You also need satellite imagery of conditions like rainfall, temperature, slope and elevation, which affect mosquito breeding and parasite development.

To determine the risk of malaria, DiSARM combines the precise location of the malaria infection with satellite data on conditions like rainfall, temperature, vegetation and elevation, which affect mosquito breeding. DiSARM’s mobile app can be used by malaria programs and field teams to target interventions.

Google Earth Engine collects and organizes the public satellite imagery data we need. In the past we had to obtain those images from a range of sources: NASA, USGS and different universities around the world. But with Google Earth Engine, it’s all in one place and can be processed using Google computers. We combine satellite imagery data from Google Earth Engine with the locations of malaria cases collected by a country’s national malaria control program, and create models that let us generate maps identifying areas at greatest risk.
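The workflow described here (case GPS points combined with environmental covariates to predict risk) can be sketched in miniature. The snippet below is a toy illustration using simulated data and a plain logistic regression fit by gradient descent; DiSARM's actual pipeline draws real imagery from Google Earth Engine and uses richer geostatistical models.

```python
import numpy as np

# Hypothetical covariates for 500 grid cells: rainfall (mm),
# temperature (°C) and elevation (m), standing in for the satellite
# layers described above.
rng = np.random.default_rng(0)
n = 500
rainfall = rng.uniform(0, 300, n)
temperature = rng.uniform(15, 35, n)
elevation = rng.uniform(0, 2000, n)

# Standardise covariates and add an intercept column.
X = np.column_stack([rainfall, temperature, elevation])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])

# Simulated case labels: wetter, warmer, lower-lying cells are more
# likely to record an infection (a stand-in for field-collected GPS
# points, not real epidemiological data).
logit_true = -0.5 + 1.2 * X[:, 1] + 0.8 * X[:, 2] - 1.0 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Fit a logistic risk model by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

# Predicted "risk surface": one probability per cell; the highest-risk
# cells are the candidates for targeted intervention.
risk = 1 / (1 + np.exp(-X @ w))
high_risk = np.argsort(risk)[-10:]
print(round(float(risk[high_risk].min()), 2))
```

In a real deployment the covariate rasters would come from Earth Engine and the model would account for spatial correlation, but the shape of the computation (covariates in, per-location risk out) is the same.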

The DiSARM interface gives malaria programs a near real-time view of malaria and predicts risk at specific locations, such as health facility service areas, villages and schools. Overlaying data allows malaria control programs to identify high-risk areas that have insufficient levels of protection and better distribute their interventions….(More)”

The U.S. Federal AI Personal Assistant Pilot


/AI-Assistant-Pilot: “Welcome to GSA’s Emerging Citizen Technology program’s pilot for the effective, efficient and accountable introduction and benchmarking of public service information in consumer-available AI Personal Assistants (IPAs), including Amazon Alexa, Google Assistant, Microsoft Cortana, and Facebook Messenger’s chatbot service — and, in the process, to lay a strong foundation for opening our programs to self-service applications in the home, on mobile devices, in automobiles and beyond.

This pilot will require rapid development and will result in public service concepts reviewed by the platforms of your choosing, as well as the creation of a new field of shared resources and recommendations that any organization can use to deliver our program data into these emerging services.
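As a rough sketch of what delivering program data into these services can look like at the code level, here is a minimal, platform-neutral intent handler. The intent names and answers are invented for illustration; real integrations go through each platform's own skill or action APIs and request formats.

```python
# Hypothetical mapping from a recognised voice intent to a spoken
# answer drawn from public program data. These intents and facts are
# made up for illustration only.
PROGRAM_FACTS = {
    "PassportRenewalIntent": "Routine passport renewal typically takes several weeks.",
    "ParkHoursIntent": "Most visitor centers open in the morning; check your park's page.",
}

def handle_request(event: dict) -> dict:
    """Return a speech response for an assistant-style intent request."""
    intent = event.get("intent", {}).get("name", "")
    speech = PROGRAM_FACTS.get(intent, "Sorry, I don't know about that yet.")
    return {"response": {"outputSpeech": {"type": "PlainText", "text": speech}}}

reply = handle_request({"intent": {"name": "ParkHoursIntent"}})
print(reply["response"]["outputSpeech"]["text"])
```

The point of the shared-resources approach described above is that the same program data and handler logic can be reused across Alexa, Assistant, Cortana and chatbot front ends, with only a thin adapter per platform.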

Principles

The demand for more automated, self-service access to United States public services, when and where citizens need them, grows each day—and so do advances in the consumer technologies like Intelligent Personal Assistants designed to meet those challenges.

The U.S. General Services Administration’s (GSA) Emerging Citizen Technology program, part of the Technology Transformation Service’s Innovation Portfolio, launched an open-sourced pilot to guide dozens of federal programs in making public service information available to consumer Intelligent Personal Assistants (IPAs) for the home and office, such as Amazon Alexa, Microsoft Cortana, Google Assistant, and Facebook Messenger.

These same services that help power our homes today will empower the self-driving cars of tomorrow, fuel the Internet of Things, and more. As such, the Emerging Citizen Technology program is working with federal agencies to prepare a solid understanding of the business cases and impact of these advances.

From privacy, security, accessibility, and performance to how citizens can benefit from more efficient and open access to federal services, the program is working with federal agencies to consider all aspects of its implementation. Additionally, by sharing openly with private-sector innovators, small businesses, and new entries into the field, the tech industry will gain increased transparency into working with the federal government….(More)”.

Incorporating Ethics into Artificial Intelligence


Amitai Etzioni and Oren Etzioni in the Journal of Ethics: “This article reviews the reasons scholars hold that driverless cars and many other AI-equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand…(More)”.

Confused by data visualisation? Here’s how to cope in a world of many features


 in The Conversation: “The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

The first tools that come to mind in considering how to best communicate data – especially statistics – are graphs and scatter plots. These simple visuals help us understand elementary causes and consequences, trends and so on. They are invaluable and have an important role in disseminating knowledge.

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Everyone has the potential to better explore data sets and provide more thorough, yet simple, representations of facts. But how do we do this when faced with daunting levels of complex data?

A world of too many features

We can start by breaking the data down. Any data set consists of two main elements: samples and features. The former correspond to individual elements in a group; the latter are the characteristics they share….
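The samples-and-features split is easy to make concrete. Below is a minimal sketch with invented figures, loosely in the spirit of Rosling's global health data: each row of the array is a sample (a country), each column a feature (a characteristic all the samples share).

```python
import numpy as np

# Toy data set: rows are samples (countries), columns are features.
# All figures are invented for illustration.
features = ["life_expectancy", "gdp_per_capita", "fertility_rate"]
data = np.array([
    [82.8, 57410.0, 1.5],   # sample 1
    [64.1,  1970.0, 4.6],   # sample 2
    [75.7,  8830.0, 1.6],   # sample 3
])

n_samples, n_features = data.shape
print(n_samples, n_features)  # 3 samples, 3 features

# A per-feature summary treats each column as one shared characteristic.
means = data.mean(axis=0)
print(dict(zip(features, np.round(means, 1))))
```

With only three features a scatter plot suffices; the dimensionality-reduction and network techniques discussed next matter once the feature count grows into the dozens or hundreds.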

Venturing into network analysis is easier than undertaking dimensionality reduction, since usually a high level of programming skills is not required. Widely available user-friendly software and tutorials allow people new to data visualisation to explore several aspects of network science.
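For a taste of how little machinery basic network analysis needs, here is a minimal sketch using only the Python standard library, with an invented topic co-occurrence network; dedicated tools go much further, but the core objects (nodes, edges, degree) are this simple.

```python
from collections import defaultdict

# Hypothetical network: which topics co-occur across a set of articles.
edges = [("health", "data"), ("data", "maps"), ("maps", "health"),
         ("data", "AI"), ("AI", "ethics")]

# Build an undirected adjacency structure from the edge list.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree (number of connections) is the simplest measure of how
# central a node is in the network.
degree = {node: len(neighbours) for node, neighbours in adjacency.items()}
print(max(degree, key=degree.get))  # → data
```

Here "data" emerges as the most connected topic, a small-scale version of the centrality insights network visualisations are built to surface.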

The world of data visualisation is vast and it goes way beyond what has been introduced here, but those who actually reap its benefits, garnering new insights and becoming agents of positive and efficient change, are few. In an age of overwhelming information, knowing how to communicate data can make a difference – and it can help keep data’s relevance in check…(More)”

The law is adapting to a software-driven world


 in the Financial Times: “When the investor Marc Andreessen wrote in 2011 that “software is eating the world,” his point was a contentious one. He argued that the boundary between technology companies and the rest of industry was becoming blurred, and that the “information economy” would supplant the physical economy in ways that were not entirely obvious. Six years later, software’s dominance is a fact of life. What it has yet to eat, however, is the law. If almost every sector of society has been exposed to the headwinds of the digital revolution, governments and the legal profession have not. But that is about to change. The rise of complex software systems has led to new legal challenges. Take, for example, the artificial intelligence systems used in self-driving cars. Last year, the US Department of Transportation wrote to Google stating that the government would “interpret ‘driver’ in the context of Google’s described motor-vehicle design” as referring to the car’s artificial intelligence. So what does this mean for the future of law?

It means that regulations traditionally meant to govern the way that humans interact are adapting to a world that has been eaten by software, as Mr Andreessen predicted. And this is about much more than self-driving cars. Complex algorithms are used in mortgage and credit decisions, in the criminal justice and immigration systems and in the realm of national security, to name just a few areas. The outcome of this shift is unlikely to be more lawyers writing more memos. Rather, new laws will start to become more like software — embedded within applications as computer code. As technology evolves, interpreting the law itself will become more like programming software.
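The idea of law embedded in applications as computer code can be made concrete with a toy rule. The eligibility thresholds below are invented for illustration and not drawn from any real statute; the point is only that a rule expressed this way is executable, testable, and exactly as transparent as its source code.

```python
# A toy illustration of "law as code": an eligibility rule written as
# an executable function rather than a legal memo. Thresholds are
# hypothetical, not taken from any actual regulation.
MIN_AGE = 65
INCOME_CEILING = 20_000

def eligible_for_benefit(age: int, annual_income: float) -> bool:
    """Return True if the applicant meets the (invented) statutory test."""
    return age >= MIN_AGE and annual_income < INCOME_CEILING

print(eligible_for_benefit(70, 15_000))  # → True
print(eligible_for_benefit(40, 15_000))  # → False
```

Interpreting such a rule really is closer to reading a program than parsing prose, which is precisely the shift the article describes.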

But there is more to this shift than technology alone. The fact is that law is both deeply opaque and unevenly accessible. The legal advice required to understand both what our governments are doing, and what our rights are, is only accessible to a select few. Studies suggest, for example, that an estimated 80 per cent of the legal needs of the poor in the US go unmet. To the average citizen, the inner workings of government have become more impenetrable over time. Granted, laws have been murky to average citizens for as long as governments have been around. But the level of disenchantment with institutions and the experts who run them is placing new pressures on governments to change their ways. The relationship between citizens and professionals — from lawyers to bureaucrats to climatologists — has become tinged with scepticism and suspicion. This mistrust is driven by the sense that society is stacked against those at the bottom — that knowledge is power, but that power costs money only a few can afford….(More)”.

Google DeepMind and healthcare in an age of algorithms


Julia Powles and Hal Hodson in Health and Technology: “Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age….(More)”

Regulating by Robot: Administrative Decision Making in the Machine-Learning Era


Paper by Cary Coglianese and David Lehr: “Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere.

A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications?

We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency.

We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just….(More)”