A DARPA Perspective on Artificial Intelligence


DARPA: “What’s the ground truth on artificial intelligence (AI)? In this video, John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), attempts to demystify AI–what it can do, what it can’t do, and where it is headed. Through a discussion of the “three waves of AI” and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI already has played, does play now, and could play in the future. (Slides can be downloaded here)….”

An AI Ally to Combat Bullying in Virtual Worlds


Simon Parkin at MIT Technology Review: “In any fictionalized universe, the distinction between playful antagonism and earnest harassment can be difficult to discern. Name-calling between friends playing a video game together is often a form of camaraderie. Between strangers, however, similar words assume a different, more troublesome quality. Being able to distinguish between the two is crucial for any video-game maker that wants to foster a welcoming community.

Spirit AI hopes to help developers support players and discourage bullying behavior with an abuse detection and intervention system called Ally. The software monitors interactions between players—what people are saying to each other and how they are behaving—through the available actions within a game or social platform. It’s able to detect verbal harassment and also nonverbal provocation—for example, one player stalking another’s avatar or abusing reporting tools.

“We’re looking at interaction patterns, combined with natural-language classifiers, rather than relying on a list of individual keywords,” explains Ruxandra Dariescu, one of Ally’s developers. “Harassment is a nuanced problem.”

When Ally identifies potentially abusive behavior, it checks to see if the potential abuser and the other player have had previous interactions. Where Ally differs from existing moderation software is that rather than simply send an alert to the game’s developers, it is able to send a computer-controlled virtual character to check in with the player—one that, through Spirit AI’s natural-language tools, is able to converse in the game’s tone and style (see “A Video-Game Algorithm to Solve Online Abuse”)….(More)”.
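The approach Dariescu describes, interaction patterns combined with natural-language classifiers, can be illustrated with a deliberately simplified sketch. This is hypothetical code, not Spirit AI's system: the term list, the rapport discount, and the weights are all invented. The point it demonstrates is that the same phrase scores differently depending on the players' history.

```python
# Illustrative sketch (not Spirit AI's actual system): blend a toy
# text-toxicity score with interaction context, so the same phrase
# between long-time friends scores lower than between strangers.

HOSTILE_TERMS = {"idiot", "loser", "trash"}  # invented toy lexicon


def text_score(message: str) -> float:
    """Fraction of tokens matching the (toy) hostile-term list."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    return sum(t in HOSTILE_TERMS for t in tokens) / len(tokens)


def harassment_score(message: str, prior_friendly_interactions: int,
                     nonverbal_flags: int) -> float:
    """Combine verbal and nonverbal signals; a history of friendly
    interactions discounts the verbal component."""
    verbal = text_score(message)
    rapport_discount = 1.0 / (1.0 + prior_friendly_interactions)
    return verbal * rapport_discount + 0.2 * nonverbal_flags


# Same words, different context:
strangers = harassment_score("you total loser", 0, nonverbal_flags=1)
friends = harassment_score("you total loser", 25, nonverbal_flags=0)
```

Under this invented scoring, the stranger interaction scores far higher than the identical message between friends, which is the nuance a keyword list alone cannot capture.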

How Technology Can Help Solve Societal Problems


Barry Libert, Megan Beck, Brian Komar and Josue Estrada at Knowledge@Wharton: “…nonprofit groups, academic institutions and philanthropic organizations engaged in social change are struggling to adapt to the new global, technological and virtual landscape.


Legacy modes of operation, governance and leadership competencies rooted in the age of physical realities continue to dominate the space. Further, organizations still operate in internal and external silos — far from crossing industry lines, which are blurring. And their ability to lead in a world that is changing at an exponential rate is hampered by their mental models, and therefore by their business models for creating and sustaining value.

If civil society is not to get drenched and sink like a stone, it must start swimming in a new direction. This new direction starts with social organizations fundamentally rethinking the core assumptions driving their attitudes, behaviors and beliefs about creating long-term sustainable value for their constituencies in an exponentially networked world. Rather than using an organization-centric model, the nonprofit sector and related organizations need to adopt a mental model based on scaling relationships in a whole new way using today’s technologies — the SCaaP model.

Embracing social change as a platform is more than a theory of change; it is a theory of being, one that places a virtual network of individuals seeking social change at the center of everything and leverages today’s digital platforms (such as social media, mobile, big data and machine learning) to let stakeholders (contributors and consumers) connect, collaborate, and exchange value with one another to effect exponential social change and impact.

SCaaP builds on the government as a platform movement (Gov 2.0) launched by technologist Tim O’Reilly and many others. Just as Gov 2.0 was not about a new kind of government but rather, as O’Reilly notes, “government stripped down to its core, rediscovered and reimagined as if for the first time,” so it is with social change as a platform. Civil society is the primary location for collective action and SCaaP helps to rebuild the kind of participatory community celebrated by 19th century French historian Alexis de Tocqueville when he observed that Americans’ propensity for civic association is central to making our democratic experiment work. “Americans of all ages, all stations in life, and all types of disposition,” he noted, “are forever forming associations.”

But SCaaP represents a fundamental shift in how civil society operates. It is grounded in exploiting new digital technologies, but extends well beyond them to focus on how organizations think about advancing their core mission — do they go it alone or do they collaborate as part of a network? SCaaP requires thinking and operating, in all things, as a network. It requires updating the core DNA that runs through social change organizations to put relationships in service of a cause at the center, not the institution. When implemented correctly, SCaaP will impact everything — from the way an organization allocates resources to how value is captured and measured to helping individuals achieve their full potential….(More)”.

Artificial intelligence prevails at predicting Supreme Court decisions


Matthew Hutson at Science: “See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard….

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” since the court has reversed lower-court rulings in 63% of its cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago…. Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says….(More)”.
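The evaluation idea described above, pitting a feature-based predictor against the "always guess reverse" baseline, can be sketched on fabricated toy data. Everything below is invented for illustration; the real study drew on the Supreme Court Database and a far richer feature set (justice, term, issue, court of origin, and more).

```python
# Toy comparison (fabricated data, not the study's): a per-issue
# majority-vote predictor versus the "always guess reverse" baseline.
from collections import Counter, defaultdict

# Each toy case: (issue_area, outcome).
train = [("tax", "affirm"), ("tax", "affirm"), ("speech", "reverse"),
         ("speech", "reverse"), ("speech", "affirm"), ("crime", "reverse")]
test_cases = [("tax", "affirm"), ("tax", "affirm"),
              ("speech", "reverse"), ("crime", "reverse")]


def baseline_accuracy(cases):
    """Accuracy of the 'always guess reverse' betting strategy."""
    return sum(outcome == "reverse" for _, outcome in cases) / len(cases)


def majority_by_issue(train_cases):
    """Learn the majority outcome seen for each issue area."""
    votes = defaultdict(Counter)
    for issue, outcome in train_cases:
        votes[issue][outcome] += 1
    return {issue: c.most_common(1)[0][0] for issue, c in votes.items()}


model = majority_by_issue(train)
learned_accuracy = sum(model.get(issue, "reverse") == outcome
                       for issue, outcome in test_cases) / len(test_cases)
```

On this contrived split the learned predictor beats the baseline; the study's contribution was showing a comparable (if smaller) edge on two centuries of real votes.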

The rise of the algorithm need not be bad news for humans


At the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly apparent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which are regularly taking decisions instead of us. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility when things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable….(More).

The Next Great Experiment


A collection of essays from technologists and scholars about how machines are reshaping civil society, in the Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”

How maps and machine learning are helping to eliminate malaria


Allie Lieber at The Keyword: “Today is World Malaria Day, a moment dedicated to raising awareness and improving access to tools to prevent malaria. The World Health Organization says nearly half of the world’s population is at risk for malaria, and estimates that in 2015 there were 212 million malaria cases resulting in 429,000 deaths. In places with high transmission rates, children under five account for 70 percent of malaria deaths.

DiSARM (Disease Surveillance and Risk Monitoring), a project led by the Malaria Elimination Initiative and supported by the Bill and Melinda Gates Foundation and Clinton Health Access Initiative, is fighting the spread of malaria by mapping the places where malaria could occur. With the help of Google Earth Engine, DiSARM creates high resolution “risk maps” that help malaria control programs identify the areas where they should direct resources for prevention and treatment.

We sat down with Hugh Sturrock, who leads the DiSARM project and is an Assistant Professor of Epidemiology and Biostatistics in the University of California, San Francisco’s Global Health Group, to learn more about DiSARM’s fight against malaria, and how Google fits in….

How does DiSARM use Google Earth Engine to help fight malaria?

If we map where malaria is most likely to occur, we can target those areas for action. Every time someone is diagnosed with malaria in Swaziland and Zimbabwe, a team goes to the village where the infection occurred and collects a GPS point with the precise infection location. Just looking at these points won’t allow you to accurately determine the risk of malaria, though. You also need satellite imagery of conditions like rainfall, temperature, slope and elevation, which affect mosquito breeding and parasite development.

To determine the risk of malaria, DiSARM combines the precise location of the malaria infection with satellite data on conditions like rainfall, temperature, vegetation, and elevation, which affect mosquito breeding. DiSARM’s mobile app can be used by malaria programs and field teams to target interventions.

Google Earth Engine collects and organizes the public satellite imagery data we need. In the past we had to obtain those images from a range of sources: NASA, USGS and different universities around the world. But with Google Earth Engine, it’s all in one place and can be processed using Google computers. We combine satellite imagery data from Google Earth Engine with the locations of malaria cases collected by a country’s national malaria control program, and create models that let us generate maps identifying areas at greatest risk.

The DiSARM interface gives malaria programs a near real-time view of malaria and predicts risk at specific locations, such as health facility service areas, villages and schools. Overlaying data allows malaria control programs to identify high-risk areas that have insufficient levels of protection and better distribute their interventions….(More)”
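The modeling idea Sturrock describes, turning per-location environmental covariates into a risk score and then ranking locations for intervention, can be sketched very roughly. This is not DiSARM's model: the coefficients, village names, and covariate values below are all invented, and a real system would fit its parameters to confirmed case locations rather than hard-code them.

```python
# Hedged sketch of covariate-based risk ranking (invented numbers,
# not DiSARM's fitted model).
import math


def risk(rainfall_mm: float, temp_c: float, elevation_m: float) -> float:
    """Logistic risk score from environmental covariates.
    Invented coefficients: risk rises with rainfall and warmth,
    and falls with elevation (cooler, fewer breeding sites)."""
    z = 0.004 * rainfall_mm + 0.12 * (temp_c - 18) - 0.002 * elevation_m
    return 1 / (1 + math.exp(-z))


# Hypothetical villages: (annual rainfall mm, mean temp C, elevation m).
villages = {
    "lowland_wet": (900, 27, 150),
    "highland_dry": (300, 16, 1800),
    "midland": (600, 22, 900),
}

# Rank villages by predicted risk to prioritise interventions.
ranked = sorted(villages, key=lambda v: risk(*villages[v]), reverse=True)
```

The same ranking logic, applied over a fine raster of satellite-derived covariates instead of three toy villages, is what produces a "risk map."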

The U.S. Federal AI Personal Assistant Pilot


/AI-Assistant-Pilot: “Welcome to GSA’s Emerging Citizen Technology program’s pilot for the effective, efficient and accountable introduction and benchmarking of public service information integration into consumer-available AI Personal Assistants (IPAs), including Amazon Alexa, Google Assistant, Microsoft Cortana, and Facebook Messenger’s chatbot service — and in the process lay a strong foundation for opening our programs to self-service programs in the home, mobile devices, automobiles and beyond.

This pilot will require rapid development and will result in public service concepts reviewed by the platforms of your choosing, as well as the creation of a new field of shared resources and recommendations that any organization can use to deliver our program data into these emerging services.

Principles

The demand for more automated, self-service access to United States public services, when and where citizens need them, grows each day—and so do advances in the consumer technologies like Intelligent Personal Assistants designed to meet those challenges.

The U.S. General Services Administration’s (GSA) Emerging Citizen Technology program, part of the Technology Transformation Service’s Innovation Portfolio, launched an open-sourced pilot to guide dozens of federal programs in making public service information available to consumer Intelligent Personal Assistants (IPAs) for the home and office, such as Amazon Alexa, Microsoft Cortana, Google Assistant, and Facebook Messenger.

These same services that help power our homes today will empower the self-driving cars of tomorrow, fuel the Internet of Things, and more. As such, the Emerging Citizen Technology program is working with federal agencies to prepare a solid understanding of the business cases and impact of these advances.

From privacy, security, accessibility, and performance to how citizens can benefit from more efficient and open access to federal services, the program is working with federal agencies to consider all aspects of its implementation. Additionally, by sharing openly with private-sector innovators, small businesses, and new entries into the field, the tech industry will gain increased transparency into working with the federal government….(More)”.
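The basic shape of such an integration, in which an assistant platform hands a recognized intent and its slots to a handler that returns a public-service answer, might look like the sketch below. The intent names, slot names, and response wording are all invented for illustration, not GSA's actual code or any platform's real API.

```python
# Hypothetical voice-assistant "skill" handler (invented intents,
# not GSA code): map a recognized intent plus slots to an answer.

PASSPORT_INFO = "Routine passport processing currently takes several weeks."


def handle_intent(intent: str, slots: dict) -> str:
    """Return a spoken-response string for a recognized intent."""
    if intent == "GetPassportProcessingTime":
        return PASSPORT_INFO
    if intent == "FindFieldOffice":
        city = slots.get("city", "your area")
        return f"Looking up federal field offices near {city}."
    # Fallback for intents the skill does not yet cover.
    return "Sorry, I don't have information on that yet."
```

Whatever the platform (Alexa, Assistant, Cortana, Messenger), the pilot's shared-resources goal amounts to standardising how program data flows into handlers of this shape.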

Incorporating Ethics into Artificial Intelligence


Amitai Etzioni and Oren Etzioni in the Journal of Ethics: “This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand…(More)”.

Confused by data visualisation? Here’s how to cope in a world of many features


In The Conversation: “The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

The first tools that come to mind in considering how to best communicate data – especially statistics – are graphs and scatter plots. These simple visuals help us understand elementary causes and consequences, trends and so on. They are invaluable and have an important role in disseminating knowledge.

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Everyone has the potential to better explore data sets and provide more thorough, yet simple, representations of facts. But how do we do this when faced with daunting levels of complex data?

A world of too many features

We can start by breaking the data down. Any data set consists of two main elements: samples and features. The former correspond to individual elements in a group; the latter are the characteristics they share….
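A concrete toy example of the samples/features split (the people and measurements below are invented): each row of a data set is one sample, each column one shared feature, and the simplest summaries operate column by column.

```python
# Toy data set: rows are samples (people), columns are features.
features = ["age", "height_cm", "steps_per_day"]
samples = [
    [34, 171, 6200],   # person A
    [29, 165, 9800],   # person B
    [41, 180, 4100],   # person C
]

n_samples, n_features = len(samples), len(features)

# A per-feature summary: the mean of each column.
column_means = [sum(row[j] for row in samples) / n_samples
                for j in range(n_features)]
```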

Venturing into network analysis is easier than undertaking dimensionality reduction, since usually a high level of programming skills is not required. Widely available user-friendly software and tutorials allow people new to data visualisation to explore several aspects of network science.
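As the passage suggests, basic network exploration needs very little programming. A minimal sketch using only the standard library (the names and edges are invented): build an undirected graph as adjacency sets and find its best-connected node.

```python
# Minimal network exploration without specialised libraries:
# an undirected graph as adjacency sets, plus a degree query.
from collections import defaultdict

edges = [("alice", "bob"), ("alice", "carol"),
         ("bob", "carol"), ("carol", "dave")]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# The "hub" is the node with the most connections (highest degree).
hub = max(graph, key=lambda node: len(graph[node]))
```

Dedicated tools (Gephi, NetworkX and the like) add layout and richer metrics, but the underlying representation is no more complicated than this.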

The world of data visualisation is vast and it goes way beyond what has been introduced here, but those who actually reap its benefits, garnering new insights and becoming agents of positive and efficient change, are few. In an age of overwhelming information, knowing how to communicate data can make a difference – and it can help keep data’s relevance in check…(More)”