Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD), along with the militaries of other nations, is developing AI applications for a range of military functions. AI research is underway in intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification before they are functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 
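
To make the distinction concrete, here is a minimal Python sketch under toy assumptions (the stock size and data profile are invented): using oil depletes a shared stock, while "paying" with data only hands over a copy.

```python
# Toy model of rivalry, with invented numbers: oil is drawn down from a shared
# stock, while "paying" with data transfers a copy and the original remains.

oil_barrels = 100
data_profile = {"age": 34, "interests": ["cycling", "jazz"]}  # invented profile

def consume_oil(stock: int, barrels: int) -> int:
    """Using oil removes it from the stock: rivalrous."""
    return stock - barrels

def pay_with_data(profile: dict) -> dict:
    """'Paying' with data hands over a copy; the user keeps the original: non-rivalrous."""
    return dict(profile)

oil_barrels = consume_oil(oil_barrels, 1)
website_copy = pay_with_data(data_profile)

print(oil_barrels)                    # 99: one barrel is gone for good
print(data_profile == website_copy)   # True: the user still holds all their data
```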

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing data can be a large fixed cost). Under perfect competition, the market-clearing price is equal to the marginal cost of production (which is why data is traded for free services while oil still requires cold, hard cash)….(More)”.
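
A small Python sketch of this pricing logic, using the per-barrel figures quoted above and treating the marginal cost of data as zero (a stylized assumption, not a measurement):

```python
# Per-barrel production costs are the figures quoted above; the data-side cost
# is a stylized assumption (copying bits is treated as free).

def competitive_price(marginal_cost: float) -> float:
    """Under perfect competition, the market-clearing price equals marginal cost."""
    return marginal_cost

oil_marginal_cost = {"Saudi Arabia": 5.49, "U.K.": 21.66}  # USD per barrel
data_marginal_cost = 0.0  # gathering/transferring an extra unit is nearly costless

for region, mc in oil_marginal_cost.items():
    print(f"Oil ({region}): price ~ ${competitive_price(mc):.2f} per barrel")

# A zero marginal cost implies a zero competitive price for the marginal unit,
# which is why data tends to be exchanged for free services rather than cash.
print(f"Data: price ~ ${competitive_price(data_marginal_cost):.2f} per marginal unit")
```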

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.
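
As a rough illustration of why bundling helps, here is a minimal Python sketch of an ensemble forecast; the component models, their outputs, and the weights are hypothetical, not those of the Early Warning Project:

```python
import statistics

# Hypothetical risk estimates (probability of a new mass killing) for one country,
# from three stylized component models. None of these numbers or model names come
# from the Early Warning Project; they only illustrate the ensembling step.
component_forecasts = {
    "statistical_model": 0.12,  # e.g., regression on structural indicators
    "machine_learning":  0.20,  # e.g., random forest on event data
    "expert_survey":     0.10,  # e.g., aggregated expert opinion
}

# The simplest ensemble is an unweighted average: one badly miscalibrated model
# is diluted by the others, which is how bundling "weeds out weaknesses".
ensemble_risk = statistics.mean(component_forecasts.values())
print(f"Unweighted ensemble: {ensemble_risk:.1%} risk")

# A weighted average lets historically better-calibrated models count for more.
weights = {"statistical_model": 0.5, "machine_learning": 0.3, "expert_survey": 0.2}
weighted_risk = sum(weights[m] * p for m, p in component_forecasts.items())
print(f"Weighted ensemble:   {weighted_risk:.1%} risk")
```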

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernity. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry, and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever-growing threat to democracy….(More)”.

The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation


Report by Philip Howard and Samantha Bradshaw: “…The report explores the tools, capacities, strategies and resources employed by global ‘cyber troops’, typically government agencies and political parties, to influence public opinion in 70 countries.

Key findings include:

  • Organized social media manipulation has more than doubled since 2017, with 70 countries using computational propaganda to manipulate public opinion.
  • In 45 democracies, politicians and political parties have used computational propaganda tools by amassing fake followers or spreading manipulated media to garner voter support.
  • In 26 authoritarian states, government entities have used computational propaganda as a tool of information control to suppress public opinion and press freedom, discredit criticism and oppositional voices, and drown out political dissent.
  • Foreign influence operations, primarily over Facebook and Twitter, have been attributed to cyber troop activities in seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia and Venezuela.
  • China has now emerged as a major player in the global disinformation order, using social media platforms to target international audiences with disinformation.
  • 25 countries are working with private companies or strategic communications firms offering computational propaganda as a service.
  • Facebook remains the platform of choice for social media manipulation, with evidence of formally organised campaigns taking place in 56 countries….

The report explores the tools and techniques of computational propaganda, including the use of fake accounts – bots, humans, cyborgs and hacked accounts – to spread disinformation. The report finds:

  • 87% of countries used human accounts
  • 80% of countries used bot accounts
  • 11% of countries used cyborg accounts
  • 7% of countries used hacked or stolen accounts…(More)”.

Real-time flu tracking: by monitoring social media, scientists can detect outbreaks as they happen.


Charles Schmidt at Nature: “Conventional influenza surveillance describes outbreaks of flu that have already happened. It is based on reports from doctors, and produces data that take weeks to process — often leaving the health authorities to chase the virus around, rather than get on top of it.

But every day, thousands of unwell people pour details of their symptoms and, perhaps unknowingly, locations into search engines and social media, creating a trove of real-time flu data. If such data could be used to monitor flu outbreaks as they happen and to make accurate predictions about their spread, that could transform public-health surveillance.

Powerful computational tools such as machine learning and a growing diversity of data streams — not just search queries and social media, but also cloud-based electronic health records and human mobility patterns inferred from census information — are making it increasingly possible to monitor the spread of flu through the population by following its digital signal. Now, models that track flu in real time and forecast flu trends are making inroads into public-health practice.
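
As an illustration of the underlying idea, here is a minimal Python sketch of a query-based flu nowcast in the spirit of these systems, using synthetic data and an L1-penalized regression (one plausible model choice among many; nothing here reflects any agency's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic training data: weekly volumes for eight flu-related search queries
# (columns) over two seasons (rows), plus the official influenza-like-illness
# (ILI) rate that clinical surveillance reports weeks later. All numbers are
# invented; real systems draw on many more signals than this.
n_weeks, n_queries = 104, 8
query_volume = rng.poisson(lam=100, size=(n_weeks, n_queries)).astype(float)
true_weights = np.array([0.02, 0.015, 0.0, 0.01, 0.0, 0.0, 0.005, 0.0])
ili_rate = query_volume @ true_weights + rng.normal(0.0, 0.2, n_weeks)

# An L1-penalized regression keeps only the queries that actually track flu,
# guarding against the spurious correlates that plagued early systems.
model = Lasso(alpha=0.1).fit(query_volume, ili_rate)

# The "nowcast": estimate this week's flu activity from today's query volumes,
# well before the corresponding clinical reports are processed.
this_week = rng.poisson(lam=100, size=(1, n_queries)).astype(float)
print(f"Estimated current ILI rate: {model.predict(this_week)[0]:.2f}")
```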

“We’re becoming much more comfortable with how these models perform,” says Matthew Biggerstaff, an epidemiologist who works on flu preparedness at the US Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.

In 2013–14, the CDC launched the FluSight Network, a website informed by digital modelling that predicts the timing, peak and short-term intensity of the flu season in ten regions of the United States and across the whole country. According to Biggerstaff, flu forecasting helps responders to plan ahead, so they can be ready with vaccinations and communication strategies to limit the effects of the virus. Encouraged by progress in the field, the CDC announced in January 2019 that it will spend US$17.5 million to create a network of influenza-forecasting centres of excellence, each tasked with improving the accuracy and communication of real-time forecasts.

The CDC is leading the way on digital flu surveillance, but health agencies elsewhere are following suit. “We’ve been working to develop and apply these models with collaborators using a range of data sources,” says Richard Pebody, a consultant epidemiologist at Public Health England in London. The capacity to predict flu trajectories two to three weeks in advance, Pebody says, “will be very valuable for health-service planning.”…(More)”.

Guide to Mobile Data Analytics in Refugee Scenarios


Book edited by Albert Ali Salah, Alex Pentland, Bruno Lepri and Emmanuel Letouzé: “After the start of the Syrian Civil War in 2011–12, increasing numbers of civilians sought refuge in neighboring countries. By May 2017, Turkey had received over 3 million refugees — the largest refugee population in the world. Some lived in government-run camps near the Syrian border, but many have moved to cities looking for work and better living conditions. They faced problems of integration, income, welfare, employment, health, education, language, social tension, and discrimination. In order to develop sound policies to solve these interlinked problems, a good understanding of refugee dynamics is necessary.

This book summarizes the most important findings of the Data for Refugees (D4R) Challenge, which was a non-profit project initiated to improve the conditions of the Syrian refugees in Turkey by providing a database for the scientific community to enable research on urgent problems concerning refugees. The database, based on anonymized mobile call detail records (CDRs) of phone calls and SMS messages of one million Turk Telekom customers, indicates the broad activity and mobility patterns of refugees and citizens in Turkey for the period 1 January to 31 December 2017. Over 100 teams from around the globe applied to take part in the challenge, and 61 teams were granted access to the data.
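
To give a flavor of what CDR-based analysis involves, here is a minimal Python sketch that aggregates call records into a simple daily mobility indicator; the record layout and identifiers are invented for illustration, not the actual D4R schema:

```python
from collections import defaultdict
from datetime import datetime

# Invented record layout: (anonymized user id, timestamp, antenna site id).
# The real D4R schema and identifiers differ.
records = [
    ("u1", "2017-03-01T08:12:00", "site_17"),
    ("u1", "2017-03-01T12:40:00", "site_09"),
    ("u1", "2017-03-01T19:05:00", "site_17"),
    ("u2", "2017-03-01T09:30:00", "site_03"),
]

# Count distinct antenna sites per user per day, a crude proxy for daily
# mobility that can then be aggregated and compared across groups or regions.
sites_per_user_day = defaultdict(set)
for user, ts, site in records:
    day = datetime.fromisoformat(ts).date()
    sites_per_user_day[(user, day)].add(site)

for (user, day), sites in sorted(sites_per_user_day.items()):
    print(f"{user} on {day}: {len(sites)} distinct site(s)")
```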

This book describes the challenge, and presents selected and revised project reports on the five major themes: unemployment, health, education, social integration, and safety. These are complemented by additional invited chapters describing related projects from international governmental organizations, the underlying technological infrastructure, and ethical aspects. The last chapter includes policy recommendations, based on the lessons learned.

The book will serve as a guideline for creating innovative data-centered collaborations between industry, academia, government, and non-profit humanitarian agencies to deal with complex problems in refugee scenarios. It illustrates the possibilities of big data analytics in coping with refugee crises and humanitarian responses, by showcasing innovative approaches drawing on multiple data sources, information visualization, pattern analysis, and statistical analysis. It will also provide researchers and students working with mobility data with excellent coverage across data science, economics, sociology, urban computing, education, migration studies, and more….(More)”.

To Regain Policy Competence: The Software of American Public Problem-Solving


Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.

Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon or norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.

My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.

I am not alone in my observations. Francis Fukuyama recently concluded that “[T]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”1 This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.2 The 2003 National Commission on Public Service observed,

The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.3

And they wrote that as the American occupation of Iraq was just beginning.

In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.

I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.

This Is Not an Atlas.


Book by kollektiv orangotango: “This Is Not an Atlas gathers more than 40 counter-cartographies from all over the world. This collection shows how maps are created and transformed as a part of political struggle, for critical research or in art and education: from indigenous territories in the Amazon to the anti-eviction movement in San Francisco; from defending commons in Mexico to mapping refugee camps with balloons in Lebanon; from slums in Nairobi to squats in Berlin; from supporting communities in the Philippines to reporting sexual harassment in Cairo. This Is Not an Atlas seeks to inspire, to document the underrepresented, and to be a useful companion when becoming a counter-cartographer yourself….(More)”.

Stop Surveillance Humanitarianism


Mark Latonero at The New York Times: “A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.

Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow it to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.

The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data of aid beneficiaries. The impasse led the aid organization last month to suspend food aid to parts of the starving population — a step once thought of as a last resort — unless the Houthis allow biometrics.

With program officials saying that their staff are prevented from doing their essential jobs, turning to a technological solution is tempting. But biometrics deployed in crises can lead to a form of surveillance humanitarianism that can exacerbate risks to privacy and security.

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need….(More)”.