Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI, and recommendations on how to mitigate these risks.”

“AI promises the world greater efficiency, automation and autonomy. At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules

The three organizations conclude the report with several recommendations.

How the U.S. Military Buys Location Data from Ordinary Apps


Joseph Cox at Vice: “The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

The apps sending data to X-Mode include Muslim Pro, an app that reminds users when to pray and what direction Mecca is in relation to the user’s current location. The app has been downloaded over 50 million times on Android, according to the Google Play Store, and over 98 million in total across other platforms including iOS, according to Muslim Pro’s website….(More)”.

Synthetic data: Unlocking the power of data and skills for machine learning


Karen Walker at Gov.UK: “Defence generates and holds a lot of data. We want to be able to get the best out of it, unlocking new insights that aren’t currently visible, through the use of innovative data science and analytics techniques tailored to defence’s specific needs. But this can be difficult because our data is often sensitive for a variety of reasons. For example, this might include information about the performance of particular vehicles, or personnel’s operational deployment details.

It is therefore often challenging to share data with experts who sit outside the Ministry of Defence, particularly amongst the wider data science community in government, small companies and academia. The use of synthetic data gives us a way to address this challenge and to benefit from the expertise of a wider range of people by creating datasets which aren’t sensitive. We have recently published a report from this work….(More)”.

[Figure: original data and synthetic data shown side by side in a 2D chart; the two look almost identical]
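As a rough illustration of the idea (a minimal sketch, not the Ministry of Defence's actual method), one simple approach fits a statistical model to the sensitive data and releases samples drawn from the model rather than the real records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a sensitive 2-D dataset (hypothetical values).
real = rng.multivariate_normal([10.0, 5.0], [[2.0, 1.2], [1.2, 1.5]], size=1000)

# Fit a simple model: the empirical mean and covariance of the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Release samples drawn from the fitted model, never the real records.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# The synthetic dataset mirrors the broad statistics of the original,
# but no synthetic row corresponds to any real record.
print(np.round(mean, 2))
print(np.round(synthetic.mean(axis=0), 2))
```

Real synthetic-data pipelines use far richer generative models and must guard against the model memorizing individual records, but the principle of "share the model's output, not the data" is the same.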

Why Personal Data Is a National Security Issue


Article by Susan Ariel Aaronson: “…Concerns about the national security threat from personal data held by foreigners first emerged in 2013. Several U.S. entities, including Target, J.P. Morgan, and the U.S. Office of Personnel Management, were hacked. Many attributed the hacking to Chinese entities. Administration officials concluded that the Chinese government could cross-reference legally obtained and hacked data sets to reveal information about U.S. objectives and strategy. 

Personal data troves can also be cross-referenced to identify individuals, putting both personal security as well as national security at risk. Even U.S. firms pose a direct and indirect security threat to individuals and the nation because of their failure to adequately protect personal data. For example, Facebook has a disturbing history of sharing personal data without consent and allowing its clients to use that data to manipulate users. Some app designers have enabled functionality unnecessary for their software’s operation, while others, like Anomaly 6, embedded their software in mobile apps without the permission of users or firms. Other companies use personal data without user permission to create new products. Clearview AI scraped billions of images from major web services such as Facebook, Google, and YouTube, and sold these images to law enforcement agencies around the world. 

Firms can also inadvertently aggregate personal data and in so doing threaten national security. Strava, an athletes’ social network, released a heat map of its global users’ activities in 2018. Savvy analysts were able to use the heat map to reveal secret military bases and patrol routes. Chinese-owned data firms could be a threat to national security if they share data with the Chinese government. But the problem lies in the U.S.’s failure to adequately protect personal data and police the misuse of data collected without the permission of users….(More)”.

The Atlas of Surveillance


Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.

IoT Security Is a Mess. Privacy ‘Nutrition’ Labels Could Help


Lily Hay Newman at Wired: “…Given that IoT security seems unlikely to magically improve anytime soon, researchers and regulators are rallying behind a new approach to managing IoT risk. Think of it as nutrition labels for embedded devices.

At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is not only to shed light on a device’s security posture but also to explain how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties.

“In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous,” says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. “It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers.”

Nutrition labels on packaged foods have a certain amount of standardization around the world, but they’re still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber. So the CMU researchers focused a lot of their efforts on making their IoT label as transparent and accessible as possible. To that end, they gave the label both a primary and a secondary layer. The primary label is what would be printed on device boxes. To access the secondary label, you could follow a URL or scan a QR code to see more granular information about a device….(More)”.
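To make the two-layer idea concrete, here is a hedged sketch of what such a label might look like as a data structure. The field names and the example device are illustrative assumptions, not the CMU prototype's actual schema; they simply mirror the categories the article mentions (update support, sensors, data collection, third-party sharing, and a URL to the secondary layer):

```python
from dataclasses import dataclass

@dataclass
class IoTPrivacyLabel:
    """Sketch of a two-layer IoT label; field names are illustrative."""
    device: str
    security_updates: bool           # can the device receive security updates?
    support_until: str               # how long the vendor pledges support
    sensors: list                    # e.g. ["microphone", "camera"]
    data_collected: list             # what the sensors record
    shared_with_third_parties: bool  # does the company share that data?
    detail_url: str                  # secondary layer, reachable via URL/QR code

    def primary_layer(self) -> str:
        """Compact summary, as might be printed on the device box."""
        return (f"{self.device}: updates={'yes' if self.security_updates else 'no'} "
                f"until {self.support_until}; sensors: {', '.join(self.sensors)}; "
                f"third-party sharing: {'yes' if self.shared_with_third_parties else 'no'}; "
                f"details: {self.detail_url}")

# Hypothetical device, for illustration only.
label = IoTPrivacyLabel(
    device="Smart Thermostat X",
    security_updates=True,
    support_until="2025-01",
    sensors=["temperature", "humidity"],
    data_collected=["temperature readings", "usage schedule"],
    shared_with_third_parties=False,
    detail_url="https://example.com/label/thermostat-x",
)
print(label.primary_layer())
```

The short string is the "front of the box"; the URL points at the richer secondary layer, much as a QR code would on the researchers' printed label.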

COVID-19 Highlights Need for Public Intelligence


Blog by Steven Aftergood: “Hobbled by secrecy and timidity, the U.S. intelligence community has been conspicuously absent from efforts to combat the COVID-19 pandemic, the most serious national and global security challenge of our time.

The silence of intelligence today represents a departure from the straightforward approach of then-Director of National Intelligence Dan Coats who offered the clearest public warning of the risk of a pandemic at the annual threat hearing of the Senate Intelligence Committee in January 2019:

“We assess that the United States and the world will remain vulnerable to the next flu pandemic or large-scale outbreak of a contagious disease that could lead to massive rates of death and disability, severely affect the world economy, strain international resources, and increase calls on the United States for support,” DNI Coats testified.

But this year, for the first time in recent memory, the annual threat hearing was canceled, reportedly to avoid conflict between intelligence testimony and White House messaging. Though that seems humiliating to everyone involved, no satisfactory alternative explanation has been provided. The 2020 worldwide threat statement remains classified, according to an ODNI denial of a Freedom of Information Act request for a copy. And intelligence agencies have been reduced to recirculating reminders from the Centers for Disease Control to wash your hands and practice social distancing.

The US intelligence community evidently has nothing useful to say to the nation about the origins of the COVID-19 pandemic, its current spread or anticipated development, its likely impact on other security challenges, its effect on regional conflicts, or its long-term implications for global health.

These are all topics perfectly suited to open source intelligence collection and analysis. But the intelligence community disabled its open source portal last year. And the general public was barred even from that.

It didn’t — and doesn’t — have to be that way.

In 1993, the Federation of American Scientists created an international email network called ProMED — Program for Monitoring Emerging Diseases — which was intended to help discover and provide early warning about new infectious diseases.

Run on a shoestring budget and led by Stephen S. Morse, Barbara Hatch Rosenberg, Jack Woodall and Dorothy Preslar, ProMED was based on the notion that “public intelligence” is not an oxymoron. That is to say, physicians, scientists, researchers, and other members of the public — not just governments — have the need for current threat assessments that can be readily shared, consumed and analyzed. The initiative quickly proved its worth….(More)”.

We Have the Power to Destroy Ourselves Without the Wisdom to Ensure That We Don’t


EdgeCast by Toby Ord: “Lately, I’ve been asking myself questions about the future of humanity, not just about the next five years or even the next hundred years, but about everything humanity might be able to achieve in the time to come.

The past of humanity is about 200,000 years. That’s how long Homo sapiens have been around according to our current best guess (it might be a little bit longer). Maybe we should even include some of our other hominid ancestors and think about humanity somewhat more broadly. If we play our cards right, we could live hundreds of thousands of years more. In fact, there’s not much stopping us from living millions of years. The typical species lives about a million years. Our 200,000 years so far would put us about in our adolescence, just old enough to be getting ourselves in trouble, but not wise enough to have thought through how we should act.

But a million years isn’t an upper bound for how long we could live. The horseshoe crab, for example, has lived for 450 million years so far. The Earth should remain habitable for at least that long. So, if we can survive as long as the horseshoe crab, we could have a future stretching millions of centuries from now. That’s millions of centuries of human progress, human achievement, and human flourishing. And if we could learn over that time how to reach out a little bit further into the cosmos to get to the planets around other stars, then we could have longer yet. If we could make jumps of about seven light-years at a time, we could reach almost every star in the galaxy by continually spreading out from each new location. There are already plans in progress to send spacecraft these types of distances. If we could do that, the whole galaxy would open up to us….

Humanity is not a typical species. One of the things that most worries me is the way in which our technology might put us at risk. If we look back at the history of humanity these 2000 centuries, we see this initially gradual accumulation of knowledge and power. If you think back to the earliest humans, they weren’t that remarkable compared to the other species around them. An individual human is not that remarkable on the Savanna compared to a cheetah, or lion, or gazelle, but what set us apart was our ability to work together, to cooperate with other humans to form something greater than ourselves. It was teamwork, the ability to work together with those of us in the same tribe that let us expand to dozens of humans working together in cooperation. But much more important than that was our ability to cooperate across time, across the generations. By making small innovations and passing them on to our children, we were able to set a chain in motion wherein generations of people worked across time, slowly building up these innovations and technologies and accumulating power….(More)”.

Online collective intelligence course aims to improve responses to COVID-19 and other crises


PressRelease: “Working with 11 partner institutions around the world,  The Governance Lab (The GovLab) at the New York University Tandon School of Engineering today launches a massive open online course (MOOC) on “Collective Crisis Intelligence.” The course is free, open to anyone, and designed to help institutions improve disaster response through the use of data and volunteer participation. 

Thirteen modules have been created by leading global experts in major disasters such as the post-election violence in Kenya in 2008, the Fukushima nuclear plant disaster in 2011, the Ebola crisis in 2014, the Zika outbreak in 2016, and the current coronavirus. The course is designed to help those responding to coronavirus make use of volunteerism. 

As the COVID-19 pandemic reaches unprecedented proportions and spreads to more than 150 countries on six continents, policymakers are struggling to answer questions such as “How do we predict how the virus will spread?,” “How do we help the elderly and the homebound?,” “How do we provide economic assistance to those affected by business closures?,” and more. 

In each mini-lecture, those who have learned how to mobilize groups of people online to manage in a crisis present the basic concepts and tools to learn, analyze, and implement a crowdsourced public response. Lectures include:

  • Introduction: Why Collective Intelligence Matters in a Crisis
  • Defining Actionable Problems (led by Matt Andrews, Harvard Kennedy School)
  • Three Day Evidence Review (led by Peter Bragge, Monash University, Australia)
  • Priorities for Collective Intelligence (led by Geoff Mulgan, University College London)
  • Smarter Crowdsourcing (led by Beth Simone Noveck, The GovLab)
  • Crowdfunding (led by Peter Baeck, Nesta, United Kingdom)
  • Secondary Fall Out (led by Azby Brown, Safecast, Japan)
  • Crowdsourcing Surveillance (led by Tolbert Nyenswah, Johns Hopkins Bloomberg School of Public Health, United States/Liberia)
  • Crowdsourcing Data (led by Angela Oduor Lungati and Juliana Rotich, Ushahidi, Kenya)
  • Mobilizing a Network (led by Sean Bonner, Safecast, Japan)
  • Crowdsourcing Scientific Expertise (led by Ali Nouri, Federation of American Scientists)
  • Chatbots and Social Media Strategies for Crisis (led by Nashin Mahtani, PetaBencana.id, Indonesia)
  • Conclusion: Lessons Learned

The course explores such innovative uses of crowdsourcing as Safecast’s implementation of citizen science to gather information about environmental conditions after the meltdown of the Fukushima nuclear plant; Ushahidi, an online platform in Kenya for crowdsourcing data for crisis relief, human rights advocacy, transparency, and accountability campaigns; and “Ask a Scientist,” an interactive tool developed by The GovLab with the Federation of American Scientists and the New Jersey Office of Innovation, in which a network of scientists answer citizens’ questions about COVID-19.

More information on the course is available at https://covidcourse.thegovlab.org

The 9/11 Playbook for Protecting Privacy


Adam Klein and Edward Felten at Politico: “Geolocation data—precise GPS coordinates or records of proximity to other devices, often collected by smartphone apps—is emerging as a critical tool for tracking potential spread. But other, more novel types of surveillance are already being contemplated for this first pandemic of the digital age. Body temperature readings from internet-connected thermometers are already being used at scale, but there are more exotic possibilities. Could smart-home devices be used to identify coughs of a timbre associated with Covid-19? Can facial recognition and remote temperature sensing be harnessed to identify likely carriers at a distance?

Weigh the benefits of each collection and use of data against the risks.

Each scenario will present a different level of privacy sensitivity, different collection mechanisms, different technical options affecting privacy, and varying potential value to health professionals, meaning there is no substitute for case-by-case judgment about whether the benefits of a particular use of data outweigh the risks.

The various ways to use location data, for example, present vastly different levels of concern for privacy. Aggregated location data, which combines many individualized location trails to show broader trends, can be published with few privacy risks, using methods that ensure no individual’s location trail is reconstructable from the released data. For that reason, governments should not seek individualized location trails for any application where aggregated data would suffice—for example, analyzing travel trends to predict future epidemic hotspots.
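One hedged sketch of how such aggregation can work (a simplified illustration, not any specific government system): snap each location fix to a coarse grid cell, count distinct users per cell, and suppress cells visited by too few people, so the released heat map never exposes a lone individual's trail. The cell size and threshold below are arbitrary assumptions:

```python
from collections import defaultdict

def aggregate_heatmap(pings, cell=0.01, min_users=5):
    """Aggregate (user_id, lat, lon) pings into grid-cell user counts,
    suppressing cells visited by fewer than min_users distinct users."""
    users_per_cell = defaultdict(set)
    for user_id, lat, lon in pings:
        # Snap coordinates to a coarse grid (~1 km at this cell size).
        key = (round(lat / cell) * cell, round(lon / cell) * cell)
        users_per_cell[key].add(user_id)
    # Only cells with enough distinct users appear in the released data.
    return {k: len(v) for k, v in users_per_cell.items() if len(v) >= min_users}

# Ten users in one busy cell, plus one user alone somewhere else.
pings = [("u%d" % i, 40.7128, -74.0060) for i in range(10)]
pings += [("u0", 51.5074, -0.1278)]
heatmap = aggregate_heatmap(pings)
# The crowded cell is reported; the lone user's location is suppressed.
```

Production systems add stronger protections (noise injection, differential privacy), but even this simple suppression rule shows how trend data can be released without individual trails.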

If authorities need to trace the movements of identifiable people, their location trails should be obtained on the basis of an individualized showing. Gathering from companies the location trails for all users—as the Israeli government does, according to news reports—would raise far greater privacy concerns.

Establish clear rules for how data can be used, retained, and shared.

Once data is collected, the focus shifts to what the government can do with it. In counterterrorism programs, detailed rules seek to reduce the effect on individual privacy by limiting how different types of data can be used, stored, and shared.

The most basic safeguard is deleting data when it is no longer needed. Keeping data longer than needed unnecessarily exposes it to data breaches, leaks, and other potential privacy harms. Any individualized location tracking should cease, and the data should be deleted, once the individual no longer presents a danger to public health.
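A minimal sketch of that safeguard, with an assumed 14-day retention window (the window length and record format are hypothetical, chosen only to illustrate the purge rule):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # hypothetical public-health retention window

def purge_expired(records, now):
    """Drop location records older than the retention window.
    Each record is (device_id, timestamp, location)."""
    return [r for r in records if now - r[1] <= RETENTION]

now = datetime(2020, 4, 15)
records = [
    ("a", datetime(2020, 4, 10), "cell-1"),  # 5 days old: retained
    ("b", datetime(2020, 3, 1), "cell-2"),   # 45 days old: deleted
]
kept = purge_expired(records, now)
# Only the record inside the retention window survives the purge.
```

The point is that deletion is a routine, automatic operation, not an afterthought: data that has outlived its public-health purpose should never sit waiting for a breach.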

Poland’s new tracking app for those exposed to the coronavirus illustrates why reasonable limits are essential. The Polish government plans to retain location data collected by the app for six years. It is hard to see a public-health justification for keeping the data that long. But the story also illustrates well how a failure to consider users’ privacy can undermine a program’s efficacy: the app’s onerous terms led at least one Polish citizen to refuse to download it….(More)”.