Could Big Data Help End Hunger in Africa?


Lenny Ruvaga at VOA News: “Computer algorithms power much of modern life, from our Facebook feeds to international stock exchanges. Could they help end malnutrition and hunger in Africa? The International Center for Tropical Agriculture thinks so.

The International Center for Tropical Agriculture (CIAT) has spent the past four years developing the Nutrition Early Warning System, or NEWS.

The goal is to catch the subtle signs of a hunger crisis brewing in Africa as much as a year in advance.

CIAT says the system uses machine learning. As more information is fed into the system, the algorithms will get better at identifying patterns and trends. The system will get smarter.

Information Technology expert Andy Jarvis leads the project.

“The cutting edge side of this is really about bringing in streams of information from multiple sources and making sense of it. … But it is a huge volume of information and what it does, the novelty then, is making sense of that using things like artificial intelligence, machine learning, and condensing it into simple messages,” he said.

Other nutrition surveillance systems exist, like FEWSnet, the Famine Early Warning System Network which was created in the mid-1980s.

But CIAT says NEWS will be able to draw insights from a massive amount of diverse data enabling it to identify hunger risks faster than traditional methods.

“What is different about NEWS is that it pays attention to malnutrition, not just drought or famine, but the nutrition outcome that really matters, malnutrition especially in women and children. For the first time, we are saying these are the options way ahead of time. That gives policy makers an opportunity to really do what they intend to do which is make the lives of women and children better in Africa,” said Dr. Mercy Lung’aho, a CIAT nutrition expert.

While food emergencies like famine and drought grab headlines, the International Center for Tropical Agriculture says chronic malnutrition affects one in four people in Africa, taking a serious toll on economic growth and leaving those affected especially vulnerable in times of crisis….(More)”.

How Technology Can Help Solve Societal Problems


Barry Libert, Megan Beck, Brian Komar and Josue Estrada at Knowledge@Wharton: “…nonprofit groups, academic institutions and philanthropic organizations engaged in social change are struggling to adapt to the new global, technological and virtual landscape.

Legacy modes of operation, governance and leadership competencies rooted in the age of physical realities continue to dominate the space. Further, organizations still operate in internal and external silos — far from crossing industry lines, which are blurring. And their ability to lead in a world that is changing at an exponential rate seems hampered by their mental models and therefore their business models of creating and sustaining value as well.

If civil society is not to get drenched and sink like a stone, it must start swimming in a new direction. This new direction starts with social organizations fundamentally rethinking the core assumptions driving their attitudes, behaviors and beliefs about creating long-term sustainable value for their constituencies in an exponentially networked world. Rather than using an organization-centric model, the nonprofit sector and related organizations need to adopt a mental model based on scaling relationships in a whole new way using today’s technologies — the SCaaP model.

Embracing social change as a platform is more than a theory of change; it is a theory of being — one that places a virtual network of individuals seeking social change at the center of everything and leverages today’s digital platforms (such as social media, mobile, big data and machine learning) so that stakeholders (contributors and consumers) can connect, collaborate, and exchange value with one another, effecting exponential social change and impact.

SCaaP builds on the government as a platform movement (Gov 2.0) launched by technologist Tim O’Reilly and many others. Just as Gov 2.0 was not about a new kind of government but rather, as O’Reilly notes, “government stripped down to its core, rediscovered and reimagined as if for the first time,” so it is with social change as a platform. Civil society is the primary location for collective action and SCaaP helps to rebuild the kind of participatory community celebrated by 19th century French historian Alexis de Tocqueville when he observed that Americans’ propensity for civic association is central to making our democratic experiment work. “Americans of all ages, all stations in life, and all types of disposition,” he noted, “are forever forming associations.”

But SCaaP represents a fundamental shift in how civil society operates. It is grounded in exploiting new digital technologies, but extends well beyond them to focus on how organizations think about advancing their core mission: do they go it alone, or do they collaborate as part of a network? SCaaP requires thinking and operating, in all things, as a network. It requires updating the core DNA that runs through social change organizations to put relationships in service of a cause at the center, not the institution. When implemented correctly, SCaaP will affect everything — from the way an organization allocates resources to how value is captured and measured to helping individuals achieve their full potential….(More)”.

The Next Great Experiment


A collection of essays from technologists and scholars about how machines are reshaping civil society, in The Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”

How maps and machine learning are helping to eliminate malaria


Allie Lieber at The Keyword: “Today is World Malaria Day, a moment dedicated to raising awareness and improving access to tools to prevent malaria. The World Health Organization says nearly half of the world’s population is at risk for malaria, and estimates that in 2015 there were 212 million malaria cases resulting in 429,000 deaths. In places with high transmission rates, children under five account for 70 percent of malaria deaths.

DiSARM (Disease Surveillance and Risk Monitoring), a project led by the Malaria Elimination Initiative and supported by the Bill and Melinda Gates Foundation and the Clinton Health Access Initiative, is fighting the spread of malaria by mapping the places where malaria could occur. With the help of Google Earth Engine, DiSARM creates high resolution “risk maps” that help malaria control programs identify the areas where they should direct resources for prevention and treatment.

We sat down with Hugh Sturrock, who leads the DiSARM project and is an Assistant Professor of Epidemiology and Biostatistics in the University of California, San Francisco’s Global Health Group, to learn more about DiSARM’s fight against malaria, and how Google fits in….

How does DiSARM use Google Earth Engine to help fight malaria?

If we map where malaria is most likely to occur, we can target those areas for action. Every time someone is diagnosed with malaria in Swaziland and Zimbabwe, a team goes to the village where the infection occurred and collects a GPS point with the precise infection location. Just looking at these points won’t allow you to accurately determine the risk of malaria, though. You also need satellite imagery of conditions like rainfall, temperature, slope and elevation, which affect mosquito breeding and parasite development.

To determine the risk of malaria, DiSARM combines the precise locations of malaria infections with satellite data on conditions like rainfall, temperature, vegetation and elevation, which affect mosquito breeding. DiSARM’s mobile app can be used by malaria programs and field teams to target interventions.

Google Earth Engine collects and organizes the public satellite imagery data we need. In the past we had to obtain those images from a range of sources: NASA, USGS and different universities around the world. But with Google Earth Engine, it’s all in one place and can be processed using Google computers. We combine satellite imagery data from Google Earth Engine with the locations of malaria cases collected by a country’s national malaria control program, and create models that let us generate maps identifying areas at greatest risk.
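The modelling step described above can be sketched in a few lines. The code below is purely illustrative and is not DiSARM’s actual model: it fits a simple logistic regression relating invented environmental covariates at surveyed locations to synthetic case labels, then predicts a risk value for a grid of unsurveyed cells.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: one row per surveyed location.
# Covariates: rainfall (mm), mean temperature (C), elevation (m).
X = rng.uniform([0, 15, 0], [300, 35, 2500], size=(500, 3))
# Synthetic labels: cases more likely where it is wet, warm and low-lying.
score = 0.01 * X[:, 0] + 0.1 * X[:, 1] - 0.002 * X[:, 2] - 3.0
y = (score + rng.normal(0, 0.5, 500) > 0).astype(float)

# Standardize covariates and fit logistic regression by gradient descent.
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = np.hstack([np.ones((500, 1)), (X - mu) / sd])
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xs @ w))       # predicted case probability
    w -= 0.1 * Xs.T @ (p - y) / len(y)  # gradient step on log-loss

# Predict a risk surface over unsurveyed grid cells.
X_grid = rng.uniform([0, 15, 0], [300, 35, 2500], size=(1000, 3))
Xg = np.hstack([np.ones((1000, 1)), (X_grid - mu) / sd])
risk_map = 1 / (1 + np.exp(-Xg @ w))  # malaria risk per cell, in [0, 1]
```

In a real system the grid covariates would come from satellite imagery (e.g. via Google Earth Engine) and the labels from geolocated case reports; the output array is what gets rendered as a risk map.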

The DiSARM interface gives malaria programs a near real-time view of malaria and predicts risk at specific locations, such as health facility service areas, villages and schools. Overlaying data allows malaria control programs to identify high-risk areas that have insufficient levels of protection and better distribute their interventions….(More)”

Google DeepMind and healthcare in an age of algorithms


Julia Powles and Hal Hodson in Health and Technology: “Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age….(More)”

Regulating by Robot: Administrative Decision Making in the Machine-Learning Era


Paper by Cary Coglianese and David Lehr: “Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere.

A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications?

We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency.

We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just….(More)”

Did artificial intelligence deny you credit?


In The Conversation: “People who apply for a loan from a bank or credit card company, and are turned down, are owed an explanation of why that happened. It’s a good idea – because it can help teach people how to repair their damaged credit – and it’s a federal law, the Equal Credit Opportunity Act. Getting an answer wasn’t much of a problem in years past, when humans made those decisions. But today, as artificial intelligence systems increasingly assist or replace people making credit decisions, getting those explanations has become much more difficult.

Traditionally, a loan officer who rejected an application could tell a would-be borrower there was a problem with their income level, or employment history, or whatever the issue was. But computerized systems that use complex machine learning models are difficult to explain, even for experts.

Consumer credit decisions are just one way this problem arises. Similar concerns exist in health care, online marketing and even criminal justice. My own interest in this area began when a research group I was part of discovered gender bias in how online ads were targeted, but could not explain why it happened.

All those industries, and many others, who use machine learning to analyze processes and make decisions have a little over a year to get a lot better at explaining how their systems work. In May 2018, the new European Union General Data Protection Regulation takes effect, including a section giving people a right to get an explanation for automated decisions that affect their lives. What shape should these explanations take, and can we actually provide them?

Identifying key reasons

One way to describe why an automated decision came out the way it did is to identify the factors that were most influential in the decision. How much of a credit denial decision was because the applicant didn’t make enough money, or because he had failed to repay loans in the past?

My research group at Carnegie Mellon University, including PhD student Shayak Sen and then-postdoc Yair Zick, created a way to measure the relative influence of each factor. We call it Quantitative Input Influence.

In addition to giving better understanding of an individual decision, the measurement can also shed light on a group of decisions: Did an algorithm deny credit primarily because of financial concerns, such as how much an applicant already owes on other debts? Or was the applicant’s ZIP code more important – suggesting more basic demographics such as race might have come into play?…(More)”
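The intuition behind this kind of influence measurement can be illustrated with a toy model (the credit rule, applicant pool and feature names below are invented): intervene on one input by resampling it from the population while holding the others fixed, and measure how often the decision flips.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(income, past_defaults):
    """Toy credit rule: approve when income is high and defaults are few."""
    return (income > 50_000) & (past_defaults < 2)

# A pool of hypothetical applicants, used as the resampling distribution.
incomes = rng.uniform(20_000, 120_000, size=5000)
defaults = rng.integers(0, 5, size=5000)

def influence_of_income(applicant_income, applicant_defaults):
    base = model(applicant_income, applicant_defaults)
    # Intervene: swap in incomes drawn from the population, keep defaults fixed.
    flipped = model(incomes, applicant_defaults) != base
    return flipped.mean()  # fraction of interventions that change the decision

def influence_of_defaults(applicant_income, applicant_defaults):
    base = model(applicant_income, applicant_defaults)
    flipped = model(applicant_income, defaults) != base
    return flipped.mean()

# A denied applicant: low income, clean repayment history.
print(influence_of_income(30_000, 0))    # high: income drove the denial
print(influence_of_defaults(30_000, 0))  # low: defaults played no role
```

For this applicant, randomizing income frequently flips the decision while randomizing repayment history never does, which is exactly the kind of per-decision explanation the article describes.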

Analytics Tools Could Be the Key to Effective Message-Driven Nudging


In Government Technology: “Appealing to the nuances of the human mind has been a feature of effective governance for as long as governance has existed, appearing prominently in the prescriptions of every great political theorist from Plato to Machiavelli. The most recent and informed iteration of this practice is nudging: leveraging insights about how humans think from behavioral science to create initiatives that encourage desirable behaviors.

Public officials nudge in many ways. Some seek to modify people’s behavior by changing the environments in which they make decisions, for instance moving vegetables to the front of a grocery store to promote healthy eating. Others try to make desirable behaviors easier, like streamlining a city website to make it simpler to sign up for a service. Still others use prompts like email reminders of a deadline to receive a free checkup to nudge people to act wisely by providing useful information.

Thus far, examples of the third type of nudging — direct messaging that prompts behavior — have been decidedly low tech. Typical initiatives have included sending behaviorally informed letters to residents who have not complied with a city code or mailing out postcard reminders to renew license plates. Governments have been attracted to these initiatives for their low cost and proven effectiveness.

While these low-tech nudges should certainly continue, cities’ recent adoption of tools that can mine and analyze data instantaneously has the potential to greatly increase the scope and effectiveness of message-driven nudging.

For one, using Internet of Things (IoT) ecosystems, cities can provide residents with real-time information so that they may make better-informed decisions. For example, cities could connect traffic sensors to messaging systems and send subscribers text messages at times of high congestion, encouraging them to take public transportation. This real-time information, paired with other nudges, could increase transit use, easing traffic and bettering the environment…
Instantaneous data-mining tools may also prove useful for nudging citizens in real time, at the moments they are most likely to partake in detrimental behavior. Tools like machine learning can analyze users’ behavior and determine if they are likely to make a suboptimal choice, like leaving the website for a city service without enrolling. Using clickstream data, the site could determine if a user is likely to leave and deliver a nudge, for example sending a message explaining that most residents enroll in the service. This strategy provides another layer of nudging, catching residents who may have been influenced by an initial nudge — like a reminder to sign up for a service or streamlined website — but may need an extra prod to follow through….(More)”
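A minimal sketch of what such a clickstream-triggered nudge might look like, with invented features, hand-set model weights and a hypothetical message (a real system would learn the weights from historical sessions labeled enrolled or abandoned):

```python
import math

# Assumed pre-trained logistic-model coefficients (illustrative only).
WEIGHTS = {"seconds_idle": 0.02, "backtracks": 0.5, "form_fields_filled": -0.8}
BIAS = -1.0
THRESHOLD = 0.7

def abandonment_risk(session):
    """Score the risk that this visitor leaves without enrolling."""
    z = BIAS + sum(WEIGHTS[k] * session[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic score in [0, 1]

def maybe_nudge(session):
    """Deliver a social-norm message only when risk crosses the threshold."""
    if abandonment_risk(session) > THRESHOLD:
        return "Most residents in your neighborhood have already enrolled."
    return None

hesitant = {"seconds_idle": 90, "backtracks": 2, "form_fields_filled": 1}
engaged = {"seconds_idle": 5, "backtracks": 0, "form_fields_filled": 4}
print(maybe_nudge(hesitant))  # nudge delivered
print(maybe_nudge(engaged))   # no nudge
```

The thresholding step matters for the layered strategy the article describes: only visitors predicted to drop out get the extra prod, so residents who are already following through are left alone.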

Bit By Bit: Social Research in the Digital Age


Open Review of Book by Matthew J. Salganik: “In the summer of 2009, mobile phones were ringing all across Rwanda. In addition to the millions of calls between family, friends, and business associates, about 1,000 Rwandans received a call from Joshua Blumenstock and his colleagues. The researchers were studying wealth and poverty by conducting a survey of people who had been randomly sampled from a database of 1.5 million customers from Rwanda’s largest mobile phone provider. Blumenstock and colleagues asked the participants if they wanted to participate in a survey, explained the nature of the research to them, and then asked a series of questions about their demographic, social, and economic characteristics.

Everything I have said up until now makes this sound like a traditional social science survey. But, what comes next is not traditional, at least not yet. They used the survey data to train a machine learning model to predict someone’s wealth from their call data, and then they used this model to estimate the wealth of all 1.5 million customers. Next, they estimated the place of residence of all 1.5 million customers by using the geographic information embedded in the call logs. Putting these two estimates together—the estimated wealth and the estimated place of residence—Blumenstock and colleagues were able to produce high-resolution estimates of the geographic distribution of wealth across Rwanda. In particular, they could produce an estimated wealth for each of Rwanda’s 2,148 cells, the smallest administrative unit in the country.

It was impossible to validate these estimates because no one had ever produced estimates for such small geographic areas in Rwanda. But, when Blumenstock and colleagues aggregated their estimates to Rwanda’s 30 districts, they found that their estimates were similar to estimates from the Demographic and Health Survey, the gold standard of surveys in developing countries. Although these two approaches produced similar estimates in this case, the approach of Blumenstock and colleagues was about 10 times faster and 50 times cheaper than the traditional Demographic and Health Surveys. These dramatically faster and lower cost estimates create new possibilities for researchers, governments, and companies (Blumenstock, Cadamuro, and On 2015).
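The two-step approach Blumenstock and colleagues describe can be sketched as follows; every number, feature name and coefficient here is invented for illustration (and the population is scaled down from 1.5 million to 15,000):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical call-record features for 15,000 customers:
# calls per day, distinct contacts, share of international calls.
features = rng.uniform([0, 1, 0], [30, 200, 0.2], size=(15_000, 3))
district = rng.integers(0, 30, size=15_000)  # each customer's district

# Step 1: only a small random sample answers the wealth survey.
survey_idx = rng.choice(15_000, size=1_000, replace=False)
true_wealth = features[survey_idx] @ [0.5, 0.1, 40.0] + rng.normal(0, 2, 1_000)

# Fit a predictive model (here, ordinary least squares) on the surveyed sample.
coef, *_ = np.linalg.lstsq(features[survey_idx], true_wealth, rcond=None)

# Step 2: predict wealth for every customer, then average by district.
predicted = features @ coef
district_wealth = np.bincount(district, weights=predicted) / np.bincount(district)
print(district_wealth.shape)  # one wealth estimate per district
```

The survey is expensive, but it only has to cover the training sample; the model then extends the estimate to the full customer base for free, which is where the speed and cost advantages over a traditional survey come from.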

In addition to developing a new methodology, this study is kind of like a Rorschach inkblot test; what people see depends on their background. Many social scientists see a new measurement tool that can be used to test theories about economic development. Many data scientists see a cool new machine learning problem. Many business people see a powerful approach for unlocking value in the digital trace data that they have already collected. Many privacy advocates see a scary reminder that we live in a time of mass surveillance. Many policy makers see a way that new technology can help create a better world. In fact, this study is all of those things, and that is why it is a window into the future of social research….(More)”

AI, machine learning and personal data


Jo Pedder at the Information Commissioner’s Office Blog: “Today sees the publication of the ICO’s updated paper on big data and data protection.

But why now? What’s changed in the two and a half years since we first visited this topic? Well, quite a lot actually:

  • big data is becoming the norm for many organisations, using it to profile people and inform their decision-making processes, whether that’s to determine your car insurance premium or to accept/reject your job application;
  • artificial intelligence (AI) is stepping out of the world of science-fiction and into real life, providing the ‘thinking’ power behind virtual personal assistants and smart cars; and
  • machine learning algorithms are discovering patterns in data that traditional data analysis couldn’t hope to find, helping to detect fraud and diagnose diseases.

The complexity and opacity of these types of processing operations mean that it’s often hard to know what’s going on behind the scenes. This can be problematic when personal data is involved, especially when decisions are made that have significant effects on people’s lives. The combination of these factors has led some to call for new regulation of big data, AI and machine learning, to increase transparency and ensure accountability.

In our view though, whilst the means by which the processing of personal data are changing, the underlying issues remain the same. Are people being treated fairly? Are decisions accurate and free from bias? Is there a legal basis for the processing? These are issues that the ICO has been addressing for many years, through oversight of existing European data protection legislation….(More)”