
Stefaan Verhulst

RAND Corporation: “The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”…(More)”.

How Artificial Intelligence Could Increase the Risk of Nuclear War

Matteo DeSanti: “Our lives are becoming more and more digital and we expect the public services we use every day to be digital as well: booking a medical examination, receiving a pension, paying the waste tax, obtaining an authorization or a document. Moreover, we would like for all digital public services to have standards of quality comparable to the best private services we use to inform ourselves, make purchases or reservations. When using a digital public service, we would like to have concrete advantages, in particular: higher quality and ease of use, better accessibility, more flexibility and speed.

As the Three-Year Plan for Digital Transformation explains, this is a unique opportunity to design a new generation of public services making citizens and businesses the starting point rather than simply complying with rules and ordinances. We need the right professionalism, the right skills and the right tools: this is why we created Designers Italia and it is also why today we are launching the new design system.

The Public Service Design Kits introduce a method of work based on user research, the rapid exploration of solutions and the development of effective and sustainable products. The kits also push strongly towards higher standards, providing interface components and code so that the country’s thousands of administrations don’t have to waste time “reinventing the wheel every time.”

The fourteen kits we provide cover all aspects of a service design process, from research to user interface, from prototyping to development, and each kit offers different advantages….(More)”.

Towards a new generation of public services: Designers Italia’s design kits

Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

How artificial intelligence is transforming the world

Book by Meredith Broussard: “A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right.

In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.

Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it’s just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone…(More)”.

Artificial Unintelligence

Donna K. Ginther at the American Behavioral Scientist: “In this article, I describe how data and econometric methods can be used to study the science of broadening participation. I start by showing that theory can be used to structure the approach to using data to investigate gender and race/ethnicity differences in career outcomes. I also illustrate this process by examining whether women of color who apply for National Institutes of Health research funding are confronted with a double bind where race and gender compound their disadvantage relative to Whites. Although high-quality data are needed for understanding the barriers to broadening participation in science careers, they cannot fully explain why women and underrepresented minorities are less likely to be scientists or have less productive science careers. As researchers, it is important to use all forms of data—quantitative, experimental, and qualitative—to deepen our understanding of the barriers to broadening participation….(More)”.

Using Data to Inform the Science of Broadening Participation

Conor Muldoon, Michael J. O’Grady and Gregory M. P. O’Hare in the Knowledge Engineering Review: “With the growth of the Internet, crowdsourcing has become a popular way to perform intelligence tasks that hitherto would be either performed internally within an organization or not undertaken due to prohibitive costs and the lack of an appropriate communications infrastructure.

In crowdsourcing systems, where multiple agents are not under the direct control of a system designer, it cannot be assumed that agents will act in a manner that is consistent with the objectives of the system designer or principal agent. Where agents’ goals are to maximize their return in crowdsourcing systems that offer financial or other rewards, agents will adopt strategies to game the system if appropriate mitigating measures are not put in place.

The motivational and incentivization research space is quite large; it incorporates diverse techniques from a variety of different disciplines including behavioural economics, incentive theory, and game theory. This paper specifically focusses on game theoretic approaches to the problem in the crowdsourcing domain and places it in the context of the wider research landscape. It provides a survey of incentive engineering techniques that enable the creation of apt incentive structures in a range of different scenarios….(More)”.

A survey of incentive engineering for crowdsourcing

Book by Jannick Schou and Morten Hjelholt: “This book provides a study of governmental digitalization, an increasingly important area of policymaking within advanced capitalist states. It dives into a case study of digitalization efforts in Denmark, fusing a national policy study with local institutional analysis. Denmark is often framed as an international forerunner in terms of digitalizing its public sector and thus provides a particularly instructive setting for understanding this new political instrument.

Advancing a cultural political economic approach, Schou and Hjelholt argue that digitalization is far from a quick technological fix. Instead, this area must be located against wider transformations within the political economy of capitalist states. Doing so, the book excavates the political roots of digitalization and reveals its institutional consequences. It shows how new relations are being formed between the state and its citizens.

Digitalization and Public Sector Transformations pushes for a renewed approach to governmental digitalization and will be of interest to scholars working in the intersections of critical political economy, state theory and policy studies…(More)”.

Digitalization and Public Sector Transformations

Essay by Sarah Lamdan at the University of Pennsylvania Law Review: “Shortly after Donald Trump’s victory in the 2016 Presidential election, but before his inauguration, a group of concerned scholars organized in cities and college campuses across the United States, starting with the University of Pennsylvania, to prevent climate change data from disappearing from government websites. The move was led by Michelle Murphy, a scholar who had previously observed the destruction of climate change data and muzzling of government employees in Canadian Prime Minister Stephen Harper’s administration. The “guerrilla archiving” project soon swept the nation, drawing media attention as its volunteers scraped and preserved terabytes of climate change and other environmental data and materials from .gov websites. The archiving project felt urgent and necessary, as the federal government is the largest collector and archive of U.S. environmental data and information.

As it progressed, the guerrilla archiving movement became more defined: two organizations developed, the DataRefuge at the University of Pennsylvania, and the Environmental Data & Governance Initiative (EDGI), which was a national collection of academics and non-profits. These groups co-hosted data gathering sessions called DataRescue events. I joined EDGI to help members work through administrative law concepts and file Freedom of Information Act (FOIA) requests. The day-long archiving events were immensely popular and widely covered by media outlets. Each weekend, hundreds of volunteers would gather to participate in DataRescue events in U.S. cities. I helped organize the New York DataRescue event, which was held less than a month after the initial event in Pennsylvania. We had to turn people away as hundreds of local volunteers lined up to help and dozens more arrived in buses and cars, exceeding the space constraints of NYU’s cavernous MakerSpace engineering facility. Despite the popularity of the project, however, DataRescue’s goals seemed far-fetched: how could thousands of private citizens learn the contours of multitudes of federal environmental information warehouses, gather the data from all of them, and then re-post the materials in a publicly accessible format?…(More)”.

Lessons from DataRescue: The Limits of Grassroots Climate Change Data Preservation and the Need for Federal Records Law Reform

Paper by Dan Honig and Catherine Weaver: “Recent studies on global performance indicators (GPIs) reveal the distinct power that non-state actors can accrue and exercise in world politics. How and when does this happen? Using a mixed-methods approach, we examine the impact of the Aid Transparency Index (ATI), an annual rating and rankings index produced by the small UK-based NGO Publish What You Fund.

The ATI seeks to shape development aid donors’ behavior with respect to their transparency – the quality and kind of information they publicly disclose. To investigate the ATI’s effect, we construct an original panel dataset of donor transparency performance before and after ATI inclusion (2006-2013) to test whether, and which, donors alter their behavior in response to inclusion in the ATI. To further probe the causal mechanisms that explain variations in donor behavior, we use qualitative research, including over 150 key informant interviews conducted between 2010 and 2017.

Our analysis uncovers the conditions under which the ATI influences powerful aid donors. Moreover, our mixed methods evidence reveals how this happens. Consistent with Kelley & Simmons’ central argument that GPIs exercise influence via social pressure, we find that the ATI shapes donor behavior primarily via direct effects on elites: the diffusion of professional norms, organizational learning, and peer pressure….(More)”.

A Race to the Top? The Aid Transparency Index and the Social Power of Global Performance Indicators

Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost…. But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensure the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).

Use our personal data for the common good
