How maps and machine learning are helping to eliminate malaria


Allie Lieber at The Keyword: “Today is World Malaria Day, a moment dedicated to raising awareness and improving access to tools to prevent malaria. The World Health Organization says nearly half of the world’s population is at risk for malaria, and estimates that in 2015 there were 212 million malaria cases resulting in 429,000 deaths. In places with high transmission rates, children under five account for 70 percent of malaria deaths.

DiSARM (Disease Surveillance and Risk Monitoring), a project led by the Malaria Elimination Initiative and supported by the Bill and Melinda Gates Foundation and the Clinton Health Access Initiative, is fighting the spread of malaria by mapping the places where malaria could occur. With the help of Google Earth Engine, DiSARM creates high-resolution “risk maps” that help malaria control programs identify the areas where they should direct resources for prevention and treatment.

We sat down with Hugh Sturrock, who leads the DiSARM project and is an Assistant Professor of Epidemiology and Biostatistics in the University of California, San Francisco’s Global Health Group, to learn more about DiSARM’s fight against malaria, and how Google fits in….

How does DiSARM use Google Earth Engine to help fight malaria?

If we map where malaria is most likely to occur, we can target those areas for action. Every time someone is diagnosed with malaria in Swaziland and Zimbabwe, a team goes to the village where the infection occurred and collects a GPS point with the precise infection location. Just looking at these points won’t allow you to accurately determine the risk of malaria, though. You also need satellite imagery of conditions like rainfall, temperature, slope and elevation, which affect mosquito breeding and parasite development.

To determine the risk of malaria, DiSARM combines the precise locations of malaria infections with satellite data on conditions like rainfall, temperature, vegetation and elevation, which affect mosquito breeding. DiSARM’s mobile app can then be used by malaria programs and field teams to target interventions.

Google Earth Engine collects and organizes the public satellite imagery data we need. In the past we had to obtain those images from a range of sources: NASA, USGS and different universities around the world. But with Google Earth Engine, it’s all in one place and can be processed using Google computers. We combine satellite imagery data from Google Earth Engine with the locations of malaria cases collected by a country’s national malaria control program, and create models that let us generate maps identifying areas at greatest risk.
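
To make the pipeline concrete, here is a minimal sketch of the approach described above, using the Earth Engine Python API together with scikit-learn. The dataset IDs, dates, sample points and the choice of logistic regression are illustrative assumptions, not DiSARM’s actual configuration:

```python
# A sketch of a DiSARM-style workflow: sample environmental covariates
# from Earth Engine at surveyed case/non-case points, then fit a simple
# risk model. All IDs, dates and coordinates below are illustrative.
import ee
from sklearn.linear_model import LogisticRegression

ee.Initialize()

# Covariates: annual rainfall, daytime land-surface temperature, elevation.
rain = (ee.ImageCollection('UCSB-CHG/CHIRPS/DAILY')
        .filterDate('2016-01-01', '2016-12-31').sum().rename('rainfall'))
temp = (ee.ImageCollection('MODIS/006/MOD11A1')
        .filterDate('2016-01-01', '2016-12-31')
        .select('LST_Day_1km').mean().rename('temperature'))
elev = ee.Image('USGS/SRTMGL1_003').rename('elevation')
covariates = rain.addBands(temp).addBands(elev)

# GPS points from case investigations (case=1) and background sites (case=0).
points = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([31.14, -26.31]), {'case': 1}),
    ee.Feature(ee.Geometry.Point([31.95, -25.96]), {'case': 0}),
])

# Extract covariate values at each point and pull them to the client.
samples = covariates.sampleRegions(collection=points, scale=1000).getInfo()
bands = ('rainfall', 'temperature', 'elevation')
X = [[f['properties'][b] for b in bands] for f in samples['features']]
y = [f['properties']['case'] for f in samples['features']]

# A simple model; predicting over a covariate grid yields the risk map.
model = LogisticRegression().fit(X, y)
```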

The DiSARM interface gives malaria programs a near real-time view of malaria and predicts risk at specific locations, such as health facility service areas, villages and schools. Overlaying data allows malaria control programs to identify high-risk areas that have insufficient levels of protection and better distribute their interventions….(More)”

The U.S. Federal AI Personal Assistant Pilot


/AI-Assistant-Pilot: “Welcome to GSA’s Emerging Citizen Technology program’s pilot for the effective, efficient and accountable introduction and benchmarking of public service information integration into consumer-available Intelligent Personal Assistants (IPAs), including Amazon Alexa, Google Assistant, Microsoft Cortana, and Facebook Messenger’s chatbot service — and, in the process, to lay a strong foundation for opening our programs to self-service use in the home, on mobile devices, in automobiles and beyond.

This pilot will require rapid development and will result in public service concepts reviewed by the platforms of your choosing, as well as the creation of a new field of shared resources and recommendations that any organization can use to deliver our program data into these emerging services.

Principles

The demand for more automated, self-service access to United States public services, when and where citizens need them, grows each day—and so do advances in the consumer technologies like Intelligent Personal Assistants designed to meet those challenges.

The U.S. General Services Administration’s (GSA) Emerging Citizen Technology program, part of the Technology Transformation Service’s Innovation Portfolio, launched an open-source pilot to guide dozens of federal programs in making public service information available to consumer Intelligent Personal Assistants (IPAs) for the home and office, such as Amazon Alexa, Microsoft Cortana, Google Assistant, and Facebook Messenger.
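
As a rough illustration of what delivering program data into one of these services can look like, here is a minimal webhook sketch for an Alexa-style intent, written in Python with Flask. The intent name, answer text and endpoint are hypothetical, and a production skill would also need to verify the platform’s request signatures:

```python
# A hypothetical webhook that answers a public-service question posed
# to an Alexa-style assistant. Intent names and answer text are
# invented; a real skill must also verify the request signature.
from flask import Flask, request, jsonify

app = Flask(__name__)

ANSWERS = {
    'GetPassportInfoIntent':
        'Routine passport processing currently takes six to eight weeks.',
}

@app.route('/assistant', methods=['POST'])
def assistant_webhook():
    body = request.get_json()
    intent = body.get('request', {}).get('intent', {}).get('name', '')
    text = ANSWERS.get(intent, 'Sorry, I do not have that information yet.')
    # Standard Alexa skill response envelope.
    return jsonify({
        'version': '1.0',
        'response': {
            'outputSpeech': {'type': 'PlainText', 'text': text},
            'shouldEndSession': True,
        },
    })
```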

These same services that help power our homes today will empower the self-driving cars of tomorrow, fuel the Internet of Things, and more. As such, the Emerging Citizen Technology program is working with federal agencies to prepare a solid understanding of the business cases and impact of these advances.

From privacy, security, accessibility, and performance to how citizens can benefit from more efficient and open access to federal services, the program is working with federal agencies to consider all aspects of its implementation. Additionally, by sharing openly with private-sector innovators, small businesses, and new entries into the field, the tech industry will gain increased transparency into working with the federal government….(More)”.

Incorporating Ethics into Artificial Intelligence


Amitai Etzioni and Oren Etzioni in the Journal of Ethics: “This article reviews the reasons scholars hold that driverless cars and many other AI-equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand…(More)”.

Confused by data visualisation? Here’s how to cope in a world of many features


In The Conversation: “The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

The first tools that come to mind in considering how to best communicate data – especially statistics – are graphs and scatter plots. These simple visuals help us understand elementary causes and consequences, trends and so on. They are invaluable and have an important role in disseminating knowledge.
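
For instance, a few lines of Python with matplotlib produce the kind of elementary visual described here; the data below are randomly generated purely for illustration:

```python
# A toy scatter plot: fifty random points with a noisy linear trend
# that the eye picks up immediately.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 50)
y = 2 * x + rng.normal(0, 2, 50)

plt.scatter(x, y)
plt.xlabel('Predictor')
plt.ylabel('Outcome')
plt.title('A simple scatter plot makes the trend visible')
plt.show()
```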

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Everyone has the potential to better explore data sets and provide more thorough, yet simple, representations of facts. But how do we do this when faced with daunting levels of complex data?

A world of too many features

We can start by breaking the data down. Any data set consists of two main elements: samples and features. The former correspond to individual elements in a group; the latter are the characteristics they share….

Venturing into network analysis is easier than undertaking dimensionality reduction, since it usually does not require a high level of programming skill. Widely available user-friendly software and tutorials allow people new to data visualisation to explore several aspects of network science.
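
As a taste of how low that barrier is, here is a short, illustrative Python sketch with networkx that builds a toy graph, reports its most central node and draws the result (the edge list is invented):

```python
# A toy network: build a graph from an invented edge list, find the
# most central node, and draw it.
import networkx as nx
import matplotlib.pyplot as plt

edges = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D'), ('D', 'E')]
g = nx.Graph(edges)

# Degree centrality: the share of other nodes each node is linked to.
centrality = nx.degree_centrality(g)
print(max(centrality, key=centrality.get))  # 'C' is the hub here

nx.draw(g, with_labels=True)
plt.show()
```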

The world of data visualisation is vast and it goes way beyond what has been introduced here, but those who actually reap its benefits, garnering new insights and becoming agents of positive and efficient change, are few. In an age of overwhelming information, knowing how to communicate data can make a difference – and it can help keep data’s relevance in check…(More)”

The law is adapting to a software-driven world


In the Financial Times: “When the investor Marc Andreessen wrote in 2011 that “software is eating the world,” his point was a contentious one. He argued that the boundary between technology companies and the rest of industry was becoming blurred, and that the “information economy” would supplant the physical economy in ways that were not entirely obvious. Six years later, software’s dominance is a fact of life. What it has yet to eat, however, is the law. If almost every sector of society has been exposed to the headwinds of the digital revolution, governments and the legal profession have not. But that is about to change. The rise of complex software systems has led to new legal challenges. Take, for example, the artificial intelligence systems used in self-driving cars. Last year, the US Department of Transportation wrote to Google stating that the government would “interpret ‘driver’ in the context of Google’s described motor-vehicle design” as referring to the car’s artificial intelligence. So what does this mean for the future of law?

It means that regulations traditionally meant to govern the way that humans interact are adapting to a world that has been eaten by software, as Mr Andreessen predicted. And this is about much more than self-driving cars. Complex algorithms are used in mortgage and credit decisions, in the criminal justice and immigration systems and in the realm of national security, to name just a few areas. The outcome of this shift is unlikely to be more lawyers writing more memos. Rather, new laws will start to become more like software — embedded within applications as computer code. As technology evolves, interpreting the law itself will become more like programming software.
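
One way to picture law embedded as computer code is a statutory eligibility test expressed as an executable rule. The following Python sketch is purely hypothetical; the criteria and thresholds are invented for illustration:

```python
# A hypothetical statute rendered as an executable rule: eligibility
# depends on income relative to a household-size-adjusted threshold.
# The numbers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float
    household_size: int

def benefit_eligible(a: Applicant) -> bool:
    threshold = 15000 + 5000 * (a.household_size - 1)
    return a.annual_income < threshold

print(benefit_eligible(Applicant(annual_income=22000, household_size=3)))  # True
```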

But there is more to this shift than technology alone. The fact is that law is both deeply opaque and unevenly accessible. The legal advice required to understand both what our governments are doing, and what our rights are, is only accessible to a select few. Studies suggest, for example, that an estimated 80 per cent of the legal needs of the poor in the US go unmet. To the average citizen, the inner workings of government have become more impenetrable over time. Granted, laws have been murky to average citizens for as long as governments have been around. But the level of disenchantment with institutions and the experts who run them is placing new pressures on governments to change their ways. The relationship between citizens and professionals — from lawyers to bureaucrats to climatologists — has become tinged with scepticism and suspicion. This mistrust is driven by the sense that society is stacked against those at the bottom — that knowledge is power, but that power costs money only a few can afford….(More)”.

Google DeepMind and healthcare in an age of algorithms


Julia Powles and Hal Hodson in Health and Technology: “Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age….(More)”

Regulating by Robot: Administrative Decision Making in the Machine-Learning Era


Paper by Cary Coglianese and David Lehr: “Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere.

A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications?

We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency.

We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just….(More)”

Did artificial intelligence deny you credit?


In The Conversation: “People who apply for a loan from a bank or credit card company, and are turned down, are owed an explanation of why that happened. It’s a good idea – because it can help teach people how to repair their damaged credit – and it’s a federal law, the Equal Credit Opportunity Act. Getting an answer wasn’t much of a problem in years past, when humans made those decisions. But today, as artificial intelligence systems increasingly assist or replace people making credit decisions, getting those explanations has become much more difficult.

Traditionally, a loan officer who rejected an application could tell a would-be borrower there was a problem with their income level, or employment history, or whatever the issue was. But computerized systems that use complex machine learning models are difficult to explain, even for experts.

Consumer credit decisions are just one way this problem arises. Similar concerns exist in health care, online marketing and even criminal justice. My own interest in this area began when a research group I was part of discovered gender bias in how online ads were targeted, but could not explain why it happened.

All those industries, and many others, who use machine learning to analyze processes and make decisions have a little over a year to get a lot better at explaining how their systems work. In May 2018, the new European Union General Data Protection Regulation takes effect, including a section giving people a right to get an explanation for automated decisions that affect their lives. What shape should these explanations take, and can we actually provide them?

Identifying key reasons

One way to describe why an automated decision came out the way it did is to identify the factors that were most influential in the decision. How much of a credit denial decision was because the applicant didn’t make enough money, or because he had failed to repay loans in the past?

My research group at Carnegie Mellon University, including PhD student Shayak Sen and then-postdoc Yair Zick, created a way to measure the relative influence of each factor. We call it Quantitative Input Influence.
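
The following toy Python sketch reconstructs the spirit of a unary version of this measure: the influence of a single feature on an individual’s decision, estimated by how often the model’s output changes when that feature alone is resampled from the population. It is a simplified illustration, not the authors’ implementation:

```python
# Toy reconstruction of a unary influence measure: resample one feature
# from the population while holding the rest of the individual's record
# fixed, and report how often the model's decision flips.
import numpy as np

def unary_influence(predict, X_population, x_individual, feature, n=1000):
    rng = np.random.default_rng(0)
    baseline = predict(x_individual.reshape(1, -1))[0]
    perturbed = np.tile(x_individual, (n, 1))
    perturbed[:, feature] = rng.choice(X_population[:, feature], size=n)
    # Fraction of interventions that change the decision.
    return np.mean(predict(perturbed) != baseline)
```

Ranking an applicant’s features by this score yields the kind of explanation described here: the factors whose resampling most often changes the outcome are the most influential.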

In addition to giving better understanding of an individual decision, the measurement can also shed light on a group of decisions: Did an algorithm deny credit primarily because of financial concerns, such as how much an applicant already owes on other debts? Or was the applicant’s ZIP code more important – suggesting more basic demographics such as race might have come into play?…(More)”

Analytics Tools Could Be the Key to Effective Message-Driven Nudging


In Government Technology: “Appealing to the nuances of the human mind has been a feature of effective governance for as long as governance has existed, appearing prominently in the prescriptions of every great political theorist from Plato to Machiavelli. The most recent and informed iteration of this practice is nudging: leveraging insights about how humans think from behavioral science to create initiatives that encourage desirable behaviors.

Public officials nudge in many ways. Some seek to modify people’s behavior by changing the environments in which they make decisions, for instance moving vegetables to the front of a grocery store to promote healthy eating. Others try to make desirable behaviors easier, like streamlining a city website to make it simpler to sign up for a service. Still others use prompts like email reminders of a deadline to receive a free checkup to nudge people to act wisely by providing useful information.

Thus far, examples of the third type of nudging — direct messaging that prompts behavior — have been decidedly low tech. Typical initiatives have included sending behaviorally informed letters to residents who have not complied with a city code or mailing out postcard reminders to renew license plates. Governments have been attracted to these initiatives for their low cost and proven effectiveness.

While these low-tech nudges should certainly continue, cities’ recent adoption of tools that can mine and analyze data instantaneously has the potential to greatly increase the scope and effectiveness of message-driven nudging.

For one, using Internet of Things (IoT) ecosystems, cities can provide residents with real-time information so that they may make better-informed decisions. For example, cities could connect traffic sensors to messaging systems and send subscribers text messages at times of high congestion, encouraging them to take public transportation. This real-time information, paired with other nudges, could increase transit use, easing traffic and bettering the environment…
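
A sketch of what such a congestion nudge might look like in Python, assuming a city sensor feed and SMS delivery through Twilio (the endpoint, threshold, credentials and phone numbers are all hypothetical placeholders):

```python
# A hypothetical congestion nudge: poll a city sensor feed and text
# subscribers when occupancy crosses a threshold. The endpoint, SID,
# token and phone numbers are placeholders, not real values.
import requests
from twilio.rest import Client

CONGESTION_THRESHOLD = 0.8  # fraction of road capacity in use

def check_and_nudge(subscribers):
    reading = requests.get('https://city.example/api/traffic/downtown').json()
    if reading['occupancy'] > CONGESTION_THRESHOLD:
        client = Client('ACCOUNT_SID', 'AUTH_TOKEN')
        for number in subscribers:
            client.messages.create(
                to=number,
                from_='+15550100',
                body='Heavy traffic downtown right now. The Green Line '
                     'runs every ten minutes and skips the jam.')
```
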
Instantaneous data-mining tools may also prove useful for nudging citizens in real time, at the moments they are most likely to partake in detrimental behavior. Tools like machine learning can analyze users’ behavior and determine if they are likely to make a suboptimal choice, like leaving the website for a city service without enrolling. Using clickstream data, the site could determine if a user is likely to leave and deliver a nudge, for example sending a message explaining that most residents enroll in the service. This strategy provides another layer of nudging, catching residents who may have been influenced by an initial nudge — like a reminder to sign up for a service or streamlined website — but may need an extra prod to follow through….(More)”
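
A minimal sketch of this clickstream idea in Python with scikit-learn, training a toy classifier on invented session features and attaching a social-norm nudge to predicted abandonment:

```python
# A toy abandonment model: train on invented session features (seconds
# on page, fields completed, validation errors) and return a social-
# norm nudge when abandonment is predicted.
from sklearn.ensemble import GradientBoostingClassifier

X_train = [[12, 0, 1], [240, 5, 0], [30, 1, 2], [300, 6, 0]]
y_train = [1, 0, 1, 0]  # 1 = left the site without enrolling

model = GradientBoostingClassifier().fit(X_train, y_train)

def maybe_nudge(session_features):
    if model.predict([session_features])[0] == 1:
        return 'Most residents in your neighborhood have already enrolled.'
    return None
```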