The Government Isn’t Doing Enough to Solve Big Problems with AI


Mike Orcutt at MIT Technology Review: “The government should play a bigger role in developing new tools based on artificial intelligence, or we could miss out on revolutionary applications because they don’t have obvious commercial upside.

That was the message from prominent AI technologists and researchers at a Senate committee hearing last week. They agreed that AI is in a crucial developmental moment, and that government has a unique opportunity to shape its future. They also said that the government is in a better position than technology companies to invest in AI applications aimed at broad societal problems.

Today just a few companies, led by Google and Facebook, account for the lion’s share of AI R&D in the U.S. But Eric Horvitz, technical fellow and managing director of Microsoft Research, told the committee members that there are important areas that are rich and ripe for AI innovation, such as homelessness and addiction, where the industry isn’t making big investments. The government could help support those pursuits, Horvitz said.

For a more specific example, take the plight of a veteran seeking information online about medical options, says Andrew Moore, dean of the school of computer science at Carnegie Mellon University. If an application that could respond to freeform questions, search multiple government data sets at once, and provide helpful information about a veteran’s health care options were commercially attractive, it might be available already, he says.

There is a “real hunger for basic research,” says Greg Brockman, cofounder and chief technology officer of the nonprofit research company OpenAI, because technologists understand that they haven’t made the most important advances yet. If we continue to leave the bulk of it to industry, we could miss out not only on useful applications but also on the chance to adequately explore urgent scientific questions about ethics, safety, and security while the technology is still young, says Brockman. Since the field of AI is growing “exponentially,” it’s important to study these things now, he says, and the government could make that a “top line thing that they are trying to get done.”….(More)”.

Saving Science


Daniel Sarewitz at the New Atlantis: “Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial. But much of the problem can be traced back to a bald-faced but beautiful lie upon which rests the political and cultural power of science. This lie received its most compelling articulation just as America was about to embark on an extended period of extraordinary scientific, technological, and economic growth. It goes like this:

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.

 

….The fruits of curiosity-driven scientific exploration into the unknown have often been magnificent. The recent discovery of gravitational waves — an experimental confirmation of Einstein’s theoretical work from a century earlier — provided a high-publicity culmination of billions of dollars of public spending and decades of research by large teams of scientists. Multi-billion dollar investments in space exploration have yielded similarly startling knowledge about our solar system, such as the recent evidence of flowing water on Mars. And, speaking of startling, anthropologists and geneticists have used genome-sequencing technologies to offer evidence that early humans interbred with two other hominin species, Neanderthals and Denisovans. Such discoveries heighten our sense of wonder about the universe and about ourselves.

And somehow, it would seem, even as scientific curiosity stokes ever-deepening insight about the fundamental workings of our world, science managed simultaneously to deliver a cornucopia of miracles on the practical side of the equation, just as Bush predicted: digital computers, jet aircraft, cell phones, the Internet, lasers, satellites, GPS, digital imagery, nuclear and solar power. When Bush wrote his report, nothing made by humans was orbiting the earth; software didn’t exist; smallpox still did.

So one might be forgiven for believing that this amazing effusion of technological change truly was the product of “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” But one would be mostly wrong.

Science has been important for technological development, of course. Scientists have discovered and probed phenomena that turned out to have enormously broad technological applications. But the miracles of modernity in the above list came not from “the free play of free intellects,” but from the leashing of scientific creativity to the technological needs of the U.S. Department of Defense (DOD).

The story of how DOD mobilized science to help create our world exposes the lie for what it is and provides three difficult lessons that have to be learned if science is to evade the calamity it now faces.

First, scientific knowledge advances most rapidly, and is of most value to society, not when its course is determined by the “free play of free intellects” but when it is steered to solve problems — especially those related to technological innovation.

Second, when science is not steered to solve such problems, it tends to go off half-cocked in ways that can be highly detrimental to science itself.

Third — and this is the hardest and scariest lesson — science will be made more reliable and more valuable for society today not by being protected from societal influences but instead by being brought, carefully and appropriately, into a direct, open, and intimate relationship with those influences….(More)”

Making the Case for Evidence-Based Decision-Making


Jennifer Brooks in Stanford Social Innovation Review: “After 15 years of building linkages between evidence, policy, and practice in social programs for children and families, I have one thing to say about our efforts to promote evidence-based decision-making: We have failed to capture the hearts and minds of the majority of decision-makers in the United States.

I’ve worked with state and federal leadership, as well as program administrators in the public and nonprofit spheres. Most of them just aren’t with us. They aren’t convinced that the payoffs of evidence-based practice (the method that uses rigorous tests to assess the efficacy of a given intervention) are worth the extra difficulty or expense of implementing those practices.

Why haven’t we gotten more traction for evidence-based decision-making? Three key reasons: 1) we have wasted time debating whether randomized control trials are the optimal approach, rather than building demand for more data-based decision-making; 2) we oversold the availability of evidence-based practices and underestimated what it takes to scale them; and 3) we did all this without ever asking what problems decision-makers are trying to solve.

If we want to gain momentum for evidence-based practice, we need to focus more on figuring out how to implement such approaches on a larger scale, in a way that uses data to improve programs on an ongoing basis….

We must start by understanding and analyzing the problem the decision-maker wants to solve. We need to offer more than lists of evidence-based strategies or interventions. What outcomes do the decision-makers want to achieve? And what do data tell us about why we aren’t getting those outcomes with current methods?…

None of the following ideas is rocket science, nor am I the first person to say them, but they do suggest ways that we can move beyond our current approaches in promoting evidence-based practice.

1. We need better data.

As Michele Jolin pointed out recently, few federal programs have sufficient resources to build or use evidence. There are limited resources for evaluation and other evidence-building activities, which too often are seen as “extras.” Moreover, many programs at the local, state, and national level have minimal information to use for program management and even fewer staff with the skills required to use it effectively…

 

2. We should attend equally to practices and to the systems in which they sit.

Systems improvements without changes in practice won’t get outcomes, but without systems reforms, evidence-based practices will have difficulty scaling up. …

3. You get what you pay for.

One fear I have is that we don’t actually know whether we can get better outcomes in our public systems without spending more money. And yet cost-savings seem to be what we promise when we sell the idea of evidence-based practice to legislatures and budget directors….

4. We need to hold people accountable for program results and promote ongoing improvement.

There is an inherent tension between using data for accountability and using it for program improvement….(More)”

How the Circle Line rogue train was caught with data


Daniel Sim at the Data.gov.sg Blog: “Singapore’s MRT Circle Line was hit by a spate of mysterious disruptions in recent months, causing much confusion and distress to thousands of commuters.

Like most of my colleagues, I take a train on the Circle Line to my office at one-north every morning. So on November 5, when my team was given the chance to investigate the cause, I volunteered without hesitation.

From prior investigations by train operator SMRT and the Land Transport Authority (LTA), we already knew that the incidents were caused by some form of signal interference, which led to loss of signals in some trains. The signal loss would trigger the emergency brake safety feature in those trains and cause them to stop randomly along the tracks.

But the incidents — which first happened in August — seemed to occur at random, making it difficult for the investigation team to pinpoint the exact cause.

We were given a dataset compiled by SMRT that contained the following information:

  • Date and time of each incident
  • Location of incident
  • ID of train involved
  • Direction of train…

LTA and SMRT eventually published a joint press release on November 11 to share the findings with the public….

When we first started, my colleagues and I were hoping to find patterns that may be of interest to the cross-agency investigation team, which included many officers at LTA, SMRT and DSTA. The tidy incident logs provided by SMRT and LTA were instrumental in getting us off to a good start, as minimal cleaning up was required before we could import and analyse the data. We were also gratified by the effective follow-up investigations by LTA and DSTA that confirmed the hardware problems on PV46.

From the data science perspective, we were lucky that incidents happened so close to one another. That allowed us to identify both the problem and the culprit in such a short time. If the incidents were more isolated, the zigzag pattern would have been less apparent, and it would have taken us more time — and data — to solve the mystery….(More).”
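The zigzag pattern described above can be illustrated with a minimal pandas sketch. The column names, stations, times, and train IDs below are illustrative assumptions, not the actual SMRT data: the idea is simply that plotting each incident's time against its position along the line (a Marey-style chart) makes a rogue train's back-and-forth trajectory visible.

```python
import pandas as pd

# Hypothetical incident log with the four fields listed above
# (date/time, location, train ID, direction) -- all values invented.
incidents = pd.DataFrame({
    "datetime": pd.to_datetime([
        "2016-09-01 08:01", "2016-09-01 08:05", "2016-09-01 08:09",
        "2016-09-01 08:14", "2016-09-01 08:18",
    ]),
    "station": ["one-north", "Kent Ridge", "Haw Par Villa",
                "Kent Ridge", "one-north"],
    "train_id": ["PV12", "PV30", "PV22", "PV41", "PV08"],
    "direction": ["CW", "CW", "CW", "CCW", "CCW"],
})

# Map stations to positions along the line so that time-vs-position
# can be plotted or inspected directly.
station_order = {"one-north": 0, "Kent Ridge": 1, "Haw Par Villa": 2}
incidents["position"] = incidents["station"].map(station_order)

# Sort chronologically: if a single interfering train is the source,
# the affected trains line up along its trajectory -- up the line and
# back down again, i.e. the zigzag pattern.
trail = incidents.sort_values("datetime")["position"].tolist()
print(trail)  # → [0, 1, 2, 1, 0]
```

In the real investigation the equivalent chart, drawn over many incidents, traced out the path of a single train, which follow-up hardware checks confirmed was PV46.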

Using Cloud Computing to Untangle How Trees Can Cool Cities


 at CoolGreenScience: “We’ve all used Google Earth — to explore remote destinations around the world or to check out our house from above. But Google Earth Engine is a valuable tool for conservationists and geographers like myself that allows us to tackle some tricky remote-sensing analysis.

After having completed a few smaller spatial science projects in the cloud (mostly on the Google Earth Engine, or GEE, platform), I decided to give it a real workout — by analyzing more than 300 gigabytes of data across 28 United States and seven Chinese cities.

This project was part of a larger study looking at trees in cities. Why trees? Trees provide numerous valuable ecosystem services to communities: benefits associated with air and water quality, energy conservation, cooler air temperatures, and many other environmental and social benefits.

It’s easy to understand the benefits of trees: stand outside on a hot sunny day and you immediately feel cooler in the shade of a tree. But what’s not as obvious as the cooling effect is trees’ ability to remove particulate matter (PM2.5) floating around in the air we breathe. And this is important, as this type of air pollution is implicated in the deaths of roughly 3 million people per year.

The Conservancy researched the relationship between city air quality and the cooling effects of trees. Results of this study will inform the Global Cities Program initiative on Planting Healthy Air for cities ­­— the objective being to show how much trees can clean and cool, how much it will cost, and so forth….(More)”

New Institute Pushes the Boundaries of Big Data


Press Release: “Each year thousands of genomes are sequenced, millions of neuronal activity traces are recorded, and light from hundreds of millions of galaxies is captured by our newest telescopes, all creating datasets of staggering size. These complex datasets are then stored for analysis.

Ongoing analysis of these information streams has illuminated a problem, however: Scientists’ standard methodologies are inadequate to the task of analyzing massive quantities of data. The development of new methods and software to learn from data and to model — at sufficient resolution — the complex processes they reflect is now a pressing concern in the scientific community.

To address these challenges, the Simons Foundation has launched a substantial new internal research group called the Flatiron Institute (FI). The FI is the first multidisciplinary institute focused entirely on computation. It is also the first center of its kind to be wholly supported by private philanthropy, providing a permanent home for up to 250 scientists and collaborating expert programmers all working together to create, deploy and support new state-of-the-art computational methods. Few existing institutions support the combination of scientists and programmers, instead leaving programming to relatively impermanent graduate students and postdoctoral fellows, and none have done so at the scale of the Flatiron Institute or with such a broad scope, at a single location.

The institute will hold conferences and meetings and serve as a focal point for computational science around the world….(More)”.

Playing politics: exposing the flaws of nudge thinking


Book Review by Pat Kane in The New Scientist: “The cover of this book echoes its core anxiety. A giant foot presses down on a sullen, Michael Jackson-like figure – a besuited citizen coolly holding off its massive weight. This is a sinister image to associate with a volume (and its author, Cass Sunstein) that should be able to proclaim a decade of success in the government’s use of “behavioural science”, or nudge theory. But doubts are brewing about its long-term effectiveness in changing public behaviour – as well as about its selective account of evolved human nature.


Nudging has had a strong and illustrious run at the highest level. Outgoing US President Barack Obama and former UK Prime Minister David Cameron both set up behavioural science units at the heart of their administrations (Sunstein was the administrator of the White House Office of Information and Regulatory Affairs from 2009 to 2012).

Sunstein insists that the powers that be cannot avoid nudging us. Every shop floor plan, every new office design, every commercial marketing campaign, every public information campaign, is an “architecting of choices”. As anyone who ever tried to leave IKEA quickly will suspect, that endless, furniture-strewn path to the exit is no accident.

Nudges “steer people in particular directions, but also allow them to go their own way”. They are entreaties to change our habits, to accept old or new norms, but they presume that we are ultimately free to refuse the request.

However, our freedom is easily constrained by “cognitive biases”. Our brains, say the nudgers, are lazy, energy-conserving mechanisms, often overwhelmed by information. So a good way to ensure that people pay into their pensions, for example, is to set payment as a “default” in employment contracts, so the employee has to actively untick the box. Defaults of all kinds exploit our preference for inertia and the status quo in order to increase future security….
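The pension-default mechanism can be made concrete with a toy simulation. All the numbers below (the fraction of people who stick with whatever the default is, and the fraction who would actively choose a pension) are illustrative assumptions, not figures from the book; the point is only that flipping the default moves outcomes dramatically when inertia is high.

```python
import random

random.seed(0)

def enrollment_rate(default_enrolled, n=100_000,
                    inertia=0.7, prefers_pension=0.6):
    # Toy model: a fraction `inertia` of people keep the default as-is;
    # the rest act on their own preference. Parameters are assumed.
    enrolled = 0
    for _ in range(n):
        if random.random() < inertia:
            enrolled += default_enrolled
        else:
            enrolled += random.random() < prefers_pension
    return enrolled / n

print(enrollment_rate(default_enrolled=True))   # high, roughly 0.88
print(enrollment_rate(default_enrolled=False))  # low, roughly 0.18
```

Under these assumptions the same population ends up with ~88% enrollment when payment is the default and ~18% when it is not, which is why the nudgers treat the choice of default as anything but neutral.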

Sunstein makes useful distinctions between nudges and the other things governments and enterprises can do. Nudges are not “mandates” (laws, regulations, punishments). A mandate would be, for example, a rigorous and well-administered carbon tax, secured through a democratic or representative process. A “nudge” puts smiley faces on your energy bill, and compares your usage to that of the eco-efficient Joneses next door (nudgers like to game our herd-like social impulses).

In a fascinating survey section, which asks Americans and others what they actually think about being the subjects of the “architecting” of their choices, Sunstein discovers that “if people are told that they are being nudged, they will react adversely and resist”.

This is why nudge thinking may be faltering – its understanding of human nature unnecessarily (and perhaps expediently) downgrades our powers of conscious thought….(More)

See The Ethics of Influence: Government in the Age of Behavioral Science, by Cass R. Sunstein, Cambridge University Press.

Embracing Digital Democracy: A Call for Building an Online Civic Commons


John Gastil and Robert C. Richards Jr. in Political Science & Politics (Forthcoming): “Recent advances in online civic engagement tools have created a digital civic space replete with opportunities to craft and critique laws and rules or evaluate candidates, ballot measures, and policy ideas. These civic spaces, however, remain largely disconnected from one another, such that tremendous energy dissipates from each civic portal. Long-term feedback loops also remain rare. We propose addressing these limitations by building an integrative online commons that links together the best existing tools by making them components in a larger “Democracy Machine.” Drawing on gamification principles, this integrative platform would provide incentives for drawing new people into the civic sphere, encouraging more sustained and deliberative engagement, and feeding back to government and citizens alike to improve how the public interfaces with the public sector. After describing this proposed platform, we consider the most challenging problems it faces and how to address them….(More)”

Who Is Doing Computational Social Science?


Trends in Big Data Research, a Sage Whitepaper: “Information of all kinds is now being produced, collected, and analyzed at unprecedented speed, breadth, depth, and scale. The capacity to collect and analyze massive data sets has already transformed fields such as biology, astronomy, and physics, but the social sciences have been comparatively slower to adapt, and the path forward is less certain. For many, the big data revolution promises to ask, and answer, fundamental questions about individuals and collectives, but large data sets alone will not solve major social or scientific problems. New paradigms being developed by the emerging field of “computational social science” will be needed not only for research methodology, but also for study design and interpretation, cross-disciplinary collaboration, data curation and dissemination, visualization, replication, and research ethics (Lazer et al., 2009). SAGE Publishing conducted a survey with social scientists around the world to learn more about researchers engaged in big data research and the challenges they face, as well as the barriers to entry for those looking to engage in this kind of research in the future. We were also interested in the challenges of teaching computational social science methods to students. The survey was fully completed by 9412 respondents, indicating strong interest in this topic among our social science contacts. Of respondents, 33 percent had been involved in big data research of some kind and, of those who have not yet engaged in big data research, 49 percent (3057 respondents) said that they are either “definitely planning on doing so in the future” or “might do so in the future.”…(More)”
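The survey figures quoted above can be checked for internal consistency with a few lines of arithmetic. The rounding in the sketch below is an assumption (the whitepaper reports whole-number percentages), but the quoted count of 3,057 is close to what the percentages imply.

```python
# Consistency check of the quoted figures: 9,412 completed surveys,
# 33% engaged in big data research, and 49% of the remainder
# (reported as 3,057 people) planning or considering it.
total = 9412
engaged = round(total * 0.33)          # ≈ 3,106 respondents
not_engaged = total - engaged          # ≈ 6,306
planning = round(not_engaged * 0.49)   # ≈ 3,090
print(engaged, not_engaged, planning)
```

The implied 3,090 differs from the reported 3,057 by about 1%, which is what one would expect when the underlying percentages have been rounded to whole numbers.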

The ethical impact of data science


Theme issue of Phil. Trans. R. Soc. A compiled and edited by Mariarosaria Taddeo and Luciano Floridi: “This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments….(More)”

Table of Contents:

  • The dynamics of big data and human rights: the case of scientific research; Effy Vayena, John Tasioulas
  • Facilitating the ethical use of health data for the benefit of society: electronic health records, consent and the duty of easy rescue; Sebastian Porsdam Mann, Julian Savulescu, Barbara J. Sahakian
  • Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions; Luciano Floridi
  • Compelling truth: legal protection of the infosphere against big data spills; Burkhard Schafer
  • Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems; Sabina Leonelli
  • Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy; Deirdre K. Mulligan, Colin Koopman, Nick Doty
  • Beyond privacy and exposure: ethical issues within citizen-facing analytics; Peter Grindrod
  • The ethics of smart cities and urban science; Rob Kitchin
  • The ethics of big data as a public good: which public? Whose good?; Linnet Taylor
  • Data philanthropy and the design of the infraethics for information societies; Mariarosaria Taddeo
  • The opportunities and ethics of big data: practical priorities for a national Council of Data Ethics; Olivia Varley-Winter, Hetan Shah
  • Data science ethics in government; Cat Drew
  • The ethics of data and of data science: an economist’s perspective; Jonathan Cave
  • What’s the good of a science platform?; John Gallacher