Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD), like the defense establishments of other nations, is developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Steering AI and Advanced ICTs for Knowledge Societies: a Rights, Openness, Access, and Multi-stakeholder Perspective


Report by Unesco: “Artificial Intelligence (AI) is increasingly becoming the veiled decision-maker of our times. The diverse technical applications loosely associated with this label drive more and more of our lives. They scan billions of web pages, digital trails and sensor-derived data within microseconds, using algorithms to prepare and produce significant decisions.

AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of Information and Communications Technology (ICT). This is a major opportunity for Sustainable Development, although risks also need to be addressed.

It should be noted that the development of AI technology is part of the wider ecosystem of the Internet and other advanced ICTs, including big data, the Internet of Things, blockchains, etc. To assess AI and other advanced ICTs’ benefits and challenges – particularly for communications and information – a useful approach is UNESCO’s Internet Universality ROAM principles. These principles urge that digital development be aligned with human Rights, Openness, Accessibility and Multi-stakeholder governance to guide the ensemble of values, norms, policies, regulations, codes and ethics that govern the development and use of AI….(More)”

Defining concepts of the digital society


A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section, Defining concepts of the digital society, in Internet Policy Review, we seek to foster a platform that provides and validates exactly these overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises a concept’s origin and academic traditions, analyses its contemporary usage in different research approaches, and discusses its social, political, cultural, ethical or economic relevance and impact as well as its analytical value. With this, the authors are building bridges between the disciplines, between research and practice, as well as between innovative explanations and their conceptual heritage….(More)”

Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center

Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science

Filter bubble
Axel Bruns, Queensland University of Technology

Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University

Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel

Causal Inference: What If


Book by Miguel A. Hernán, James M. Robins: “Causal Inference is an admittedly pretentious title for a book. Causal inference is a complex scientific task that relies on triangulating evidence from multiple sources and on the application of a variety of methodological approaches. No book can possibly provide a comprehensive description of methodologies for causal inference across the sciences. The authors of any Causal Inference book will have to choose which aspects of causal inference methodology they want to emphasize.

The title of this introduction reflects our own choices: a book that helps scientists, especially health and social scientists, generate and analyze data to make causal inferences that are explicit about both the causal question and the assumptions underlying the data analysis. Unfortunately, the scientific literature is plagued by studies in which the causal question is not explicitly stated and the investigators’ unverifiable assumptions are not declared. This casual attitude towards causal inference has led to a great deal of confusion. For example, it is not uncommon to find studies in which the effect estimates are hard to interpret because the data analysis methods cannot appropriately answer the causal question (were it explicitly stated) under the investigators’ assumptions (were they declared).

In this book, we stress the need to take the causal question seriously enough to articulate it, and to delineate the separate roles of data and assumptions for causal inference. Once these foundations are in place, causal inferences become necessarily less casual, which helps prevent confusion. The book describes various data analysis approaches that can be used to estimate the causal effect of interest under a particular set of assumptions when data are collected on each individual in a population. A key message of the book is that causal inference cannot be reduced to a collection of recipes for data analysis.

The book is divided into three parts of increasing difficulty: Part I is about causal inference without models (i.e., nonparametric identification of causal effects), Part II is about causal inference with models (i.e., estimation of causal effects with parametric models), and Part III is about causal inference from complex longitudinal data (i.e., estimation of causal effects of time-varying treatments)….(More) (Additional Material)”.
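To make the separation of assumptions and analysis concrete, here is a minimal sketch in Python of standardization (the g-formula), one of the data analysis approaches the book covers in Part I, assuming exchangeability, positivity, and consistency given a single binary confounder L; the toy data and function are illustrative, not taken from the book.

```python
# A minimal sketch of standardization (the g-formula) for a point treatment A
# on outcome Y, assuming exchangeability given a single binary confounder L.
import pandas as pd

# Hypothetical toy data; each row is one individual.
data = pd.DataFrame({
    "L": [0, 0, 0, 0, 1, 1, 1, 1],   # confounder
    "A": [0, 0, 1, 1, 0, 0, 1, 1],   # treatment
    "Y": [0, 1, 0, 1, 0, 1, 1, 1],   # outcome
})

def standardized_mean(df: pd.DataFrame, treatment: int) -> float:
    """E[Y^a] = sum over l of E[Y | A=a, L=l] * Pr[L=l]."""
    p_l = df["L"].value_counts(normalize=True)                   # Pr[L=l]
    mean_y = df[df["A"] == treatment].groupby("L")["Y"].mean()   # E[Y | A=a, L=l]
    return sum(mean_y[l] * p_l[l] for l in p_l.index)

# Causal risk difference E[Y^(a=1)] - E[Y^(a=0)] under the stated assumptions.
print(standardized_mean(data, 1) - standardized_mean(data, 0))
```

The exercise mirrors the book’s central message: the code is trivial, but the causal interpretation of its output rests entirely on assumptions that the data alone cannot verify.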

Contract for the Web


About: “The Web was designed to bring people together and make knowledge freely available. It has changed the world for good and improved the lives of billions. Yet, many people are still unable to access its benefits and, for others, the Web comes with too many unacceptable costs.

Everyone has a role to play in safeguarding the future of the Web. The Contract for the Web was created by representatives from over 80 organizations across governments, companies and civil society, and it sets out commitments to guide digital policy agendas. To achieve the Contract’s goals, governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract’s text…(More)”.

State Capabilities for Problem-Oriented Governance


Paper by Quinton Mayne, Jorrit De Jong, and Fernando Fernandez-Monge: “Governments around the world are increasingly recognizing the power of problem-oriented governance as a way to address complex public problems. As an approach to policy design and implementation, problem-oriented governance radically emphasizes the need for organizations to continuously learn and adapt. Scholars of public management, public administration, policy studies, international development, and political science have made important contributions to this problem-orientation turn; however, little systematic attention has been paid to the question of the state capabilities that underpin problem-oriented governance. In this article, we address this gap in the literature.

We argue that three core capabilities are structurally conducive to problem-oriented governance: a reflective-improvement capability, a collaborative capability, and a data-analytic capability. The article presents a conceptual framework for understanding each of these capabilities, including their chief constituent elements. It ends with a discussion of how the framework can advance empirical research as well as public-sector reform….(More)”.

Rosie the Robot: Social accountability one tweet at a time


Blogpost by Yasodara Cordova and Eduardo Vicente Goncalvese: “Every month in Brazil, the government team in charge of processing reimbursement expenses incurred by congresspeople receives more than 20,000 claims. This is a manually intensive process that is prone to error and susceptible to corruption. Under Brazilian law, this information is available to the public, making it possible to check the accuracy of this data with further scrutiny. But it’s hard to sift through so many transactions. Fortunately, Rosie, a robot built to analyze the expenses of the country’s congress members, is helping out.

Rosie was born from Operação Serenata de Amor, a flagship project we helped create with other civic hackers. We suspected that data provided by members of Congress, especially regarding work-related reimbursements, might not always be accurate. There were clear, straightforward reimbursement regulations, but we wondered how easily individuals could maneuver around them. 

Furthermore, we believed that transparency portals and the public data weren’t realizing their full potential for accountability. Citizens struggled to understand public sector jargon and make sense of the extensive volume of data. We thought data science could help make better sense of the open data provided by the Brazilian government.

Using agile methods, specifically Domain-Driven Design, a flexible and adaptive process framework for solving complex problems, our group started studying the regulations and converting them into software code. We did this by reverse-engineering the legal documents: understanding the reimbursement rules and brainstorming ways to circumvent them. Next, we thought about the traces this circumvention would leave in the databases and developed a way to identify these traces using the existing data. The public expenses database included the images of the receipts used to claim reimbursements, and we could see evidence of expenses, such as alcohol, which weren’t allowed to be paid for with public money. We named our creation Rosie.

Used for complex systems, Domain-Driven Design analyzes the data and the sector as an ecosystem, and then uses observations and rapid prototyping to generate and test an evolving model. This is how Rosie works: she sifts through the reported data and flags specific expenses made by representatives as “suspicious.” An example could be purchases indicating that a Congress member was in two locations on the same day and time.
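To make the flagging step concrete, here is a minimal sketch in Python of the kind of rule just described; the column names, toy records, and function are illustrative assumptions, not Rosie’s actual schema or code.

```python
# A hedged sketch of one rule-based check: flag a congressperson whose
# receipts place them in more than one city on the same day.
import pandas as pd

# Hypothetical reimbursement records (illustrative fields, not Rosie's schema).
expenses = pd.DataFrame({
    "congressperson": ["A", "A", "B", "B"],
    "date": ["2019-03-01", "2019-03-01", "2019-03-02", "2019-03-03"],
    "city": ["Brasília", "São Paulo", "Rio de Janeiro", "Rio de Janeiro"],
})

def flag_two_cities_same_day(df: pd.DataFrame) -> pd.DataFrame:
    """Return (congressperson, date) pairs whose receipts span multiple cities."""
    cities_per_day = df.groupby(["congressperson", "date"])["city"].nunique()
    return cities_per_day[cities_per_day > 1].reset_index(name="n_cities")

print(flag_two_cities_same_day(expenses))
# Congressperson "A" is flagged: receipts in two cities on 2019-03-01.
```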

After finding a suspicious transaction, Rosie automatically tweets the results to both citizens and congress members. She invites citizens to corroborate or dismiss the suspicions, while also inviting congress members to justify themselves.

Rosie isn’t working alone. Beyond translating the law into computer code, the group also created new interfaces to help citizens check up on Rosie’s suspicions. The same information that was spread across different official government websites was brought together in a more intuitive, indexed and machine-readable platform. This platform is called Jarbas – its name was inspired by J.A.R.V.I.S., the AI system that controls Tony Stark’s mansion in Iron Man (which has origins in the human “Jarbas”) – and it is a website and API (application programming interface) that helps citizens more easily navigate and browse data from different sources.

Together, Rosie and Jarbas help citizens use and interpret the data to decide whether there was a misuse of public funds. So far, Rosie has tweeted 967 times. She is particularly good at detecting overpriced meals. According to open research by the group, members of Congress have reduced spending on meals by about ten percent since her introduction….(More)”.

The Challenges of Sharing Data in an Era of Politicized Science


Editorial by Howard Bauchner in JAMA: “The goal of making science more transparent—sharing data, posting results on trial registries, use of preprint servers, and open access publishing—may enhance scientific discovery and improve individual and population health, but it also comes with substantial challenges in an era of politicized science, enhanced skepticism, and the ubiquitous world of social media. The recent announcement by the Trump administration of plans to proceed with an updated version of the proposed rule “Strengthening Transparency in Regulatory Science,” stipulating that all underlying data from studies that underpin public health regulations from the US Environmental Protection Agency (EPA) must be made publicly available so that those data can be independently validated, epitomizes some of these challenges. According to EPA Administrator Andrew Wheeler: “Good science is science that can be replicated and independently validated, science that can hold up to scrutiny. That is why we’re moving forward to ensure that the science supporting agency decisions is transparent and available for evaluation by the public and stakeholders.”

Virtually every time JAMA publishes an article on the effects of pollution or climate change on health, the journal immediately receives demands from critics to retract the article for various reasons. Some individuals and groups simply do not believe that pollution or climate change affects human health. Research on climate change, and the effects of climate change on the health of the planet and human beings, if made available to anyone for reanalysis, could be manipulated to find a different outcome than initially reported. In an age of skepticism about many issues, including science, with the ability to use social media to disseminate unfounded and at times potentially harmful ideas, it is challenging to balance the potential benefits of sharing data with the harms that could be done by reanalysis.

Can the experience of sharing data derived from randomized clinical trials (RCTs)—either as mandated by some funders and journals or as supported by individual investigators—serve as an example of a way to safeguard “truth” in science….

Although the sharing of data may have numerous benefits, it also comes with substantial challenges particularly in highly contentious and politicized areas, such as the effects of climate change and pollution on health, in which the public dialogue appears to be based on as much fiction as fact. The sharing of data, whether mandated by funders, including foundations and government, or volunteered by scientists who believe in the principle of data transparency, is a complicated issue in the evolving world of science, analysis, skepticism, and communication. Above all, the scientific process—including original research and reanalysis of shared data—must prevail, and the inherent search for evidence, facts, and truth must not be compromised by special interests, coercive influences, or politicized perspectives. There are no simple answers, just words of caution and concern….(More)”.

Access My Info (AMI)


About: “What do companies know about you? How do they handle your data? And who do they share it with?

Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand how companies respond to these requests. Past AMI projects have shed light on how companies treat user data and have contributed to digital privacy reforms around the world.

What are data access requests?

A data access request is a letter you can send to any company with products/services that you use. The request asks that the company disclose all the information it has about you and whether or not it has shared your data with any third parties. If the place where you live has data protection laws that include the right to data access, then companies may be legally obligated to respond…
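As a rough illustration of what such a letter automates, the hypothetical Python helper below generates a bare-bones request; the wording, field names, and the example law in the usage line are illustrative assumptions, not AMI’s actual template.

```python
# A minimal, hypothetical template for a data access request letter.
def data_access_request(company: str, requester: str, law: str) -> str:
    return (
        f"Dear {company} Privacy Officer,\n\n"
        f"Under {law}, I request a copy of all personal data you hold about "
        f"me, and disclosure of whether my data has been shared with any "
        f"third parties, and if so, with whom.\n\n"
        f"Sincerely,\n{requester}\n"
    )

# Example usage; PIPEDA is Canada's federal data protection law.
print(data_access_request("ExampleCo", "A. Citizen", "PIPEDA"))
```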

AMI has made personal data requests in jurisdictions around the world and found common patterns:

  1. There are significant gaps between data access laws on paper and the law in practice;
  2. People have consistently encountered barriers to accessing their data.

Together with our partners in each jurisdiction, we have used Access My Info to set off a dialogue among users, civil society, regulators, and companies…(More)”

A New Wave of Deliberative Democracy


Essay by Claudia Chwalisz: “….Deliberative bodies such as citizens’ councils, assemblies, and juries are often called “deliberative mini-publics” in academic literature. They are just one aspect of deliberative democracy and involve randomly selected citizens spending a significant period of time developing informed recommendations for public authorities. Many scholars emphasize two core defining featuresdeliberation (careful and open discussion to weigh the evidence about an issue) and representativeness, achieved through sortition (random selection).

Of course, the principles of deliberation and sortition are not new. Rooted in ancient Athenian democracy, they were used throughout various points of history until around two to three centuries ago. Evoked by the Greek statesman Pericles in 431 BCE, the ideas—that “ordinary citizens, though occupied with the pursuits of industry, are still fair judges of public matters” and that instead of being a “stumbling block in the way of action . . . [discussion] is an indispensable preliminary to any wise action at all”—faded into the background when elections came to dominate the contemporary notion of democracy.

But the belief in the ability of ordinary citizens to deliberate and participate in public decisionmaking has come back into vogue over the past several decades. And it is modern applications of the principles of sortition and deliberation, meaning their adaptation in the context of liberal representative democratic institutions, that make them “democratic innovations” today. This is not to say that there are no longer proponents who claim that governance should be the domain of “experts” who are committed to governing for the general good and have superior knowledge to do it. Originally espoused by Plato, the argument in favor of epistocracy—rule by experts—continues to be reiterated, such as in Jason Brennan’s 2016 book Against Democracy. It is a reminder that the battle of ideas for democracy’s future is nothing new and requires constant engagement.

Today’s political context—characterized by political polarization; mistrust in politicians, governments, and fellow citizens; voter apathy; increasing political protests; and a new context of misinformation and disinformation—has prompted politicians, policymakers, civil society organizations, and citizens to reflect on how collective public decisions are being made in the twenty-first century. In particular, political tensions have raised the need for new ways of achieving consensus and taking action on issues that require long-term solutions, such as climate change and technology use. Assembling ordinary citizens from all parts of society to deliberate on a complex political issue has thus become even more appealing.

Some discussions have returned to exploring democracy’s deliberative roots. An ongoing study by the Organization for Economic Co-operation and Development (OECD) is analyzing over 700 cases of deliberative mini-publics commissioned by public authorities to inform their decisionmaking. The forthcoming report assesses the mini-publics’ use, principles of good practice, and routes to institutionalization. This new area of work stems from the 2017 OECD Recommendation of the Council on Open Government, which recommends that adherents (OECD members and some nonmembers) grant all stakeholders, including citizens, “equal and fair opportunities to be informed and consulted and actively engage them in all phases of the policy-cycle” and “promote innovative ways to effectively engage with stakeholders to source ideas and co-create solutions.” A better understanding of how public authorities have been using deliberative mini-publics to inform their decisionmaking around the world, not just in OECD countries, should provide a richer understanding of what works and what does not. It should also reveal the design principles needed for mini-publics to effectively function, deliver strong recommendations, increase legitimacy of the decisionmaking process, and possibly even improve public trust….(More)”.