Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD), like its counterparts in other nations, is developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Steering AI and Advanced ICTs for Knowledge Societies: a Rights, Openness, Access, and Multi-stakeholder Perspective


Report by UNESCO: “Artificial Intelligence (AI) is increasingly becoming the veiled decision-maker of our times. The diverse technical applications loosely associated with this label drive more and more of our lives. They scan billions of web pages, digital trails and sensor-derived data within micro-seconds, using algorithms to prepare and produce significant decisions.

AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of Information and Communications Technology (ICT). This is a major opportunity for Sustainable Development, although risks also need to be addressed.

It should be noted that the development of AI technology is part of the wider ecosystem of the Internet and other advanced ICTs, including big data, the Internet of Things, blockchains, etc. To assess the benefits and challenges of AI and other advanced ICTs – particularly for communications and information – a useful approach is UNESCO’s Internet Universality ROAM principles. These principles urge that digital development be aligned with human Rights, Openness, Accessibility and Multi-stakeholder governance to guide the ensemble of values, norms, policies, regulations, codes and ethics that govern the development and use of AI….(More)”

Rosie the Robot: Social accountability one tweet at a time


Blogpost by Yasodara Cordova and Eduardo Vicente Gonçalves: “Every month in Brazil, the government team in charge of processing reimbursement expenses incurred by congresspeople receives more than 20,000 claims. This is a manually intensive process that is prone to error and susceptible to corruption. Under Brazilian law, this information is available to the public, making it possible to check the accuracy of this data with further scrutiny. But it’s hard to sift through so many transactions. Fortunately, Rosie, a robot built to analyze the expenses of the country’s congress members, is helping out.

Rosie was born from Operação Serenata de Amor, a flagship project we helped create with other civic hackers. We suspected that data provided by members of Congress, especially regarding work-related reimbursements, might not always be accurate. There were clear, straightforward reimbursement regulations, but we wondered how easily individuals could maneuver around them. 

Furthermore, we believed that transparency portals and the public data weren’t realizing their full potential for accountability. Citizens struggled to understand public sector jargon and make sense of the extensive volume of data. We thought data science could help make better sense of the open data  provided by the Brazilian government.

Using agile methods, specifically Domain-Driven Design, a flexible and adaptive process framework for solving complex problems, our group started studying the regulations and converting them into software code. We did this by reverse-engineering the legal documents: understanding the reimbursement rules and brainstorming ways to circumvent them. Next, we thought about the traces this circumvention would leave in the databases and developed a way to identify these traces using the existing data. The public expenses database included the images of the receipts used to claim reimbursements, and we could see evidence of expenses, such as alcohol, which weren’t allowed to be paid for with public money. We named our creation Rosie.

This approach, used for complex systems, analyzes the data and the sector as an ecosystem, and then uses observations and rapid prototyping to generate and test an evolving model. This is how Rosie works: she sifts through the reported data and flags specific expenses made by representatives as “suspicious.” An example could be purchases that indicate the Congress member was in two locations on the same day and time.
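To make that kind of rule concrete, here is a minimal sketch, in Python, of the same-day/two-locations cross-check described above. It is only an illustration: the field names, record layout, and sample values are invented and far simpler than the actual public expenses database Rosie works with.

```python
from collections import defaultdict

# Hypothetical, simplified expense records; the real database has many more
# fields (receipt images, suppliers, amounts) and different column names.
expenses = [
    {"member": "Deputy A", "date": "2019-03-02", "city": "Brasília"},
    {"member": "Deputy A", "date": "2019-03-02", "city": "São Paulo"},
    {"member": "Deputy B", "date": "2019-03-02", "city": "Recife"},
]

def flag_same_day_different_cities(records):
    """Return (member, date) pairs whose receipts place the member in more than one city."""
    cities_seen = defaultdict(set)
    for record in records:
        cities_seen[(record["member"], record["date"])].add(record["city"])
    return [key for key, cities in cities_seen.items() if len(cities) > 1]

for member, date in flag_same_day_different_cities(expenses):
    print(f"Suspicious: {member} filed receipts in two different cities on {date}")
```

In the real project, a flag like this is only the starting point: the suspicious claim is then published for humans to confirm or dismiss, as described next.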

After finding a suspicious transaction, Rosie then automatically tweets the results to both citizens and congress members.  She invites citizens to corroborate or dismiss the suspicions, while also inviting congress members to justify themselves.

Rosie isn’t working alone. Beyond translating the law into computer code, the group also created new interfaces to help citizens check up on Rosie’s suspicions. The same information that was spread across different official government websites was put together in a more intuitive, indexed and machine-readable platform. This platform is called Jarbas – its name was inspired by the AI system that controls Tony Stark’s mansion in Iron Man, J.A.R.V.I.S. (which has origins in the human “Jarbas”) – and it is a website and API (application programming interface) that helps citizens more easily navigate and browse data from different sources. Together, Rosie and Jarbas help citizens use and interpret the data to decide whether there was a misuse of public funds. So far, Rosie has tweeted 967 times. She is particularly good at detecting overpriced meals. According to open research conducted by the group, since her introduction, members of Congress have reduced spending on meals by about ten percent….(More)”.

The Challenges of Sharing Data in an Era of Politicized Science


Editorial by Howard Bauchner in JAMA: “The goal of making science more transparent—sharing data, posting results on trial registries, use of preprint servers, and open access publishing—may enhance scientific discovery and improve individual and population health, but it also comes with substantial challenges in an era of politicized science, enhanced skepticism, and the ubiquitous world of social media. The recent announcement by the Trump administration of plans to proceed with an updated version of the proposed rule “Strengthening Transparency in Regulatory Science,” stipulating that all underlying data from studies that underpin public health regulations from the US Environmental Protection Agency (EPA) must be made publicly available so that those data can be independently validated, epitomizes some of these challenges. According to EPA Administrator Andrew Wheeler: “Good science is science that can be replicated and independently validated, science that can hold up to scrutiny. That is why we’re moving forward to ensure that the science supporting agency decisions is transparent and available for evaluation by the public and stakeholders.”

Virtually every time JAMA publishes an article on the effects of pollution or climate change on health, the journal immediately receives demands from critics to retract the article for various reasons. Some individuals and groups simply do not believe that pollution or climate change affects human health. If research on climate change, and on the effects of climate change on the health of the planet and human beings, were made available to anyone for reanalysis, it could be manipulated to find a different outcome than initially reported. In an age of skepticism about many issues, including science, and with the ability to use social media to disseminate unfounded and at times potentially harmful ideas, it is challenging to balance the potential benefits of sharing data with the harms that could be done by reanalysis.

Can the experience of sharing data derived from randomized clinical trials (RCTs)—either as mandated by some funders and journals or as supported by individual investigators—serve as an example of how to safeguard “truth” in science?…

Although the sharing of data may have numerous benefits, it also comes with substantial challenges particularly in highly contentious and politicized areas, such as the effects of climate change and pollution on health, in which the public dialogue appears to be based on as much fiction as fact. The sharing of data, whether mandated by funders, including foundations and government, or volunteered by scientists who believe in the principle of data transparency, is a complicated issue in the evolving world of science, analysis, skepticism, and communication. Above all, the scientific process—including original research and reanalysis of shared data—must prevail, and the inherent search for evidence, facts, and truth must not be compromised by special interests, coercive influences, or politicized perspectives. There are no simple answers, just words of caution and concern….(More)”.

Access My Info (AMI)


About: “What do companies know about you? How do they handle your data? And who do they share it with?

Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand how companies respond to these requests. Past AMI projects have shed light on how companies treat user data and have contributed to digital privacy reforms around the world.

What are data access requests?

A data access request is a letter you can send to any company whose products or services you use. The request asks that the company disclose all the information it has about you and whether or not it has shared your data with any third parties. If the place where you live has data protection laws that include the right to data access, then companies may be legally obligated to respond…

AMI has made personal data requests in jurisdictions around the world and found common patterns.

  1. There are significant gaps between data access laws on paper and the law in practice;
  2. People have consistently encountered barriers to accessing their data.

Together with our partners in each jurisdiction, we have used Access My Info to open a dialogue among users, civil society, regulators, and companies…(More)”

Technology & the Law of Corporate Responsibility – The Impact of Blockchain


Blogpost by Elizabeth Boomer: “Blockchain, a technology regularly associated with digital currency, is increasingly being utilized as a corporate social responsibility tool in major international corporations. This intersection of law, technology, and corporate responsibility was addressed earlier this month at the World Bank Law, Justice, and Development Week 2019, where the theme was Rights, Technology and Development. The law related to corporate responsibility for sustainable development is increasingly visible due in part to several lawsuits against large international corporations, alleging the use of child and forced labor. In addition, the United Nations has been working for some time on a treaty on business and human rights to encourage corporations to avoid “causing or contributing to adverse human rights impacts through their own activities and [to] address such impacts when they occur.”

De Beers, Volvo, and Coca-Cola, among other industry leaders, are using blockchain, a technology that allows digital information to be distributed and analyzed, but not copied or manipulated, to trace the source of materials and better manage their supply chains. These initiatives have come as welcome news in industries where child or forced labor in the supply chain can be hard to detect, e.g., conflict minerals, sugar, tobacco, and cacao. The issue is especially difficult when trying to trace the mining of cobalt for lithium-ion batteries, increasingly used in electric cars, because the final product is not directly traceable to a single source.

While nongovernmental organizations (NGOs) have been advocating for improved corporate performance in supply chains regarding labor and environmental standards for years, blockchain may be a technological tool that could reliably trace information regarding various products – from food to minerals – that go through several layers of suppliers before being certified as slave- or child-labor-free.
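The traceability argument rests on a basic property of a blockchain ledger: each entry commits to a cryptographic hash of the previous entry, so tampering with an earlier record becomes detectable later. The sketch below is a deliberately simplified, single-party illustration of that idea, not a description of any company’s actual system; the class name, fields, and sample batch are invented, and a real deployment would add distribution across parties, consensus, and digital signatures.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a provenance record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Toy append-only ledger: each entry stores the hash of the previous entry,
    so altering any earlier record invalidates every record after it."""

    def __init__(self):
        self.entries = []

    def add(self, batch_id: str, handler: str, attestation: str) -> None:
        entry = {
            "batch_id": batch_id,
            "handler": handler,
            "attestation": attestation,
            "timestamp": time.time(),
            "prev_hash": record_hash(self.entries[-1]) if self.entries else None,
        }
        self.entries.append(entry)

    def verify(self) -> bool:
        # Every entry must reference the hash of the entry before it.
        return all(
            self.entries[i]["prev_hash"] == record_hash(self.entries[i - 1])
            for i in range(1, len(self.entries))
        )

chain = ProvenanceChain()
chain.add("cobalt-lot-17", "mining cooperative", "audited: no child or forced labor")
chain.add("cobalt-lot-17", "smelter", "chain of custody confirmed")
chain.add("cobalt-lot-17", "battery manufacturer", "received sealed lot")
print(chain.verify())  # True; becomes False if any earlier entry is tampered with
```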

Child labor and forced labor are still common in some countries. The majority of countries worldwide have ratified International Labour Organization (ILO) Convention No. 182, prohibiting the worst forms of child labor (186 ratifications), as well as the ILO Convention prohibiting forced labor (No. 29, with 178 ratifications), and the abolition of forced labor (Convention No. 105, with 175 ratifications). However, the ILO estimates that approximately 40 million men and women are engaged in modern day slavery and 152 million children are subject to child labor, 38% of whom are working in hazardous conditions. The enduring existence of forced labor and child labor raises difficult ethical questions, because in many contexts, the victim does not have a viable alternative livelihood….(More)”.

Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (for example, by singling out particular groups regarded as problematic for extra police attention, making them more likely to be arrested, and so on), the bias may feed upon itself.
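To make the feedback mechanism concrete, here is a toy simulation (not drawn from the essay): attention is allocated in proportion to past recorded incidents, so an initial skew in the records reproduces itself even when the two groups’ true behaviour is identical. All names and numbers below are invented for illustration.

```python
import random

random.seed(0)

# Two groups with the same true incident rate, but the allocation of attention
# is driven by *recorded* incidents, which start out skewed.
TRUE_RATE = 0.05                               # identical underlying rate for both groups
recorded = {"group_a": 60, "group_b": 40}      # biased historical records
PATROLS_PER_ROUND = 200

for round_number in range(20):
    total = sum(recorded.values())
    for group, count in list(recorded.items()):
        patrols = round(PATROLS_PER_ROUND * count / total)   # data-driven allocation
        detections = sum(random.random() < TRUE_RATE for _ in range(patrols))
        recorded[group] += detections           # detections feed back into the records

share_a = recorded["group_a"] / sum(recorded.values())
print(f"Recorded share attributed to group_a after 20 rounds: {share_a:.2f}")
# The initial 60/40 skew persists even though true behaviour is identical:
# the system only ever sees the data its own allocation decisions generated.
```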

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against it (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision-making and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighurs today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but which may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”

How Data Can Help in the Fight Against the Opioid Epidemic in the United States


Report by Joshua New: “The United States is in the midst of an opioid epidemic 20 years in the making….

One of the most pernicious obstacles in the fight against the opioid epidemic is that, until relatively recently, it was difficult to measure the epidemic in any comprehensive capacity beyond such high-level statistics. A lack of granular data and authorities’ inability to use data to inform response efforts allowed the epidemic to grow to devastating proportions. The maxim “you can’t manage what you can’t measure” has never been so relevant, and this failure to effectively leverage data has undoubtedly cost many lives and caused severe social and economic damage to communities ravaged by opioid addiction, with authorities limited in their ability to fight back.

Many factors contributed to the opioid epidemic, including healthcare providers not fully understanding the potential ramifications of prescribing opioids, socioeconomic conditions that make addiction more likely, and drug distributors turning a blind eye to likely criminal behavior, such as pharmacy workers illegally selling opioids on the black market. Data will not be able to solve these problems, but it can make public health officials and other stakeholders more effective at responding to them. Fortunately, recent efforts to better leverage data in the fight against the opioid epidemic have demonstrated the potential for data to be an invaluable and effective tool to inform decision-making and guide response efforts. Policymakers should aggressively pursue more data-driven strategies to combat the opioid epidemic while learning from past mistakes that helped contribute to the epidemic to prevent similar situations in the future.

The scope of this paper is limited to opportunities to better leverage data to help address problems primarily related to the abuse of prescription opioids, rather than the abuse of illicitly manufactured opioids such as heroin and fentanyl. While these issues may overlap, such as when a person develops an opioid use disorder from prescribed opioids and then seeks heroin when they are unable to obtain more from their doctor, the opportunities to address the abuse of prescription opioids are more clear-cut….(More)”.

Manual of Digital Earth


Book by Huadong Guo, Michael F. Goodchild and Alessandro Annoni: “This open access book offers a summary of the development of Digital Earth over the past twenty years. By reviewing the initial vision of Digital Earth, the evolution of that vision, the relevant key technologies, and the role of Digital Earth in helping people respond to global challenges, this publication reveals how and why Digital Earth is becoming vital for acquiring, processing, analysing and mining the rapidly growing volume of global data sets about the Earth.

The main aspects of Digital Earth covered here include: Digital Earth platforms, remote sensing and navigation satellites, processing and visualizing geospatial information, geospatial information infrastructures, big data and cloud computing, transformation and zooming, artificial intelligence, the Internet of Things, and social media. Moreover, the book covers in detail the multi-layered/multi-faceted roles of Digital Earth in response to sustainable development goals, climate change, and disaster mitigation; the applications of Digital Earth (such as the digital city and digital heritage); citizen science in support of Digital Earth; the economic value of Digital Earth; and so on. This book also reviews the regional and national development of Digital Earth around the world, and discusses the role and effect of education and ethics. Lastly, it concludes with a summary of the challenges and forecasts the future trends of Digital Earth. By sharing case studies and a broad range of general and scientific insights into the science and technology of Digital Earth, this book offers an essential introduction for an ever-growing international audience….(More)”.

The Right to Be Seen


Anne-Marie Slaughter and Yuliya Panfil at Project Syndicate: “While much of the developed world is properly worried about myriad privacy outrages at the hands of Big Tech and demanding – and securing – for individuals a “right to be forgotten,” many around the world are posing a very different question: What about the right to be seen?

Just ask the billion people who are locked out of services we take for granted – things like a bank account, a deed to a house, or even a mobile phone account – because they lack identity documents and thus can’t prove who they are. They are effectively invisible as a result of poor data.

The ability to exercise many of our most basic rights and privileges – such as the right to vote, drive, own property, and travel internationally – is determined by large administrative agencies that rely on standardized information to determine who is eligible for what. For example, to obtain a passport it is typically necessary to present a birth certificate. But what if you do not have a birth certificate? To open a bank account requires proof of address. But what if your house doesn’t have an address?

The inability to provide such basic information is a barrier to stability, prosperity, and opportunity. Invisible people are locked out of the formal economy, unable to vote, travel, or access medical and education benefits. It’s not that they are undeserving or unqualified; it’s that they are data-poor.

In this context, the rich digital record provided by our smartphones and other sensors could become a powerful tool for good, so long as the risks are acknowledged. These gadgets, which have become central to our social and economic lives, leave a data trail that for many of us is the raw material that fuels what Harvard’s Shoshana Zuboff calls “surveillance capitalism.” Our Google location history shows exactly where we live and work. Our email activity reveals our social networks. Even the way we hold our smartphone can give away early signs of Parkinson’s.

But what if citizens could harness the power of these data for themselves, to become visible to administrative gatekeepers and access the rights and privileges to which they are entitled? Their virtual trail could then be converted into proof of physical facts.

That is beginning to happen. In India, slum dwellers are using smartphone location data to put themselves on city maps for the first time and register for addresses that they can then use to receive mail and register for government IDs. In Tanzania, citizens are using their mobile payment histories to build their credit scores and access more traditional financial services. And in Europe and the United States, Uber drivers are fighting for their rideshare data to advocate for employment benefits….(More)”.