Power to the People


Book by Audrey Kurth Cronin on “How Open Technological Innovation is Arming Tomorrow’s Terrorists”: “Never have so many possessed the means to be so lethal. The diffusion of modern technology (robotics, cyber weapons, 3-D printing, autonomous systems, and artificial intelligence) to ordinary people has given them access to weapons of mass violence previously monopolized by the state. In recent years, states have attempted to stem the flow of such weapons to individuals and non-state groups, but their efforts are failing.

As Audrey Kurth Cronin explains in Power to the People, what we are seeing now is an exacerbation of an age-old trend. Over the centuries, the most surprising developments in warfare have occurred because of advances in technologies combined with changes in who can use them. Indeed, accessible innovations in destructive force have long driven new patterns of political violence. When Nobel invented dynamite and Kalashnikov designed the AK-47, each inadvertently spurred terrorist and insurgent movements that killed millions and upended the international system.

That history illuminates our own situation, in which emerging technologies are altering society and redistributing power. The twenty-first century “sharing economy” has already disrupted every institution, including the armed forces. New “open” technologies are transforming access to the means of violence. Just as importantly, higher-order functions that previously had been exclusively under state military control – mass mobilization, force projection, and systems integration – are being harnessed by non-state actors. Cronin closes by focusing on how to respond so that we preserve the benefits of emerging technologies while reducing the risks. Power, in the form of lethal technology, is flowing to the people, but the same technologies that empower can imperil global security – unless we act strategically….(More)”.

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Westerlund, Mika; Isabelle, Diane A; Leminen, Seppo: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public-private collaboration, such as the launch of contact tracing apps to tackle COVID-19. Thus, such apps also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data and to the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, the government’s intelligence activities, and the security of growing amounts of biodata on their trust in and acceptance of the government’s use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of the government’s use of personal data, but not necessarily on trust in the government being respectful of privacy. Authorities should be more transparent about the collection and uses of data….(More)”

The Nudge Puzzle: Matching Nudge Interventions to Cybersecurity Decisions


Paper by Verena Zimmermann and Karen Renaud: “Nudging is a promising approach to influencing people to make advisable choices in a range of domains, including cybersecurity. However, the processes underlying the concept, and the nudge’s effectiveness in different contexts and in the long term, are still poorly understood. Our research thus first reviewed the nudge concept and differentiated it from other interventions before applying it to the cybersecurity area. We then carried out an empirical study to assess the effectiveness of three different nudge-related interventions on four types of cybersecurity-specific decisions. Our study demonstrated that the combination of a simple nudge and information provision, termed a “hybrid nudge,” was at least as effective as the simple nudge on its own in encouraging secure choices, and in some decision contexts even more so. This indicates that including information when deploying a nudge, thereby increasing the intervention’s transparency, does not necessarily diminish its effectiveness.

A follow-up study explored the educational and long-term impact of our tested nudge interventions to encourage secure choices. The results indicate that the impact of the initial nudges, of all kinds, did not endure. We conclude by discussing our findings and their implications for research and practice….(More)”.

Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules

The three organizations conclude the report with several recommendations….(More)”.

How the U.S. Military Buys Location Data from Ordinary Apps


Joseph Cox at Vice: “The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

The apps sending data to X-Mode include Muslim Pro, an app that reminds users when to pray and what direction Mecca is in relation to the user’s current location. The app has been downloaded over 50 million times on Android, according to the Google Play Store, and over 98 million in total across other platforms including iOS, according to Muslim Pro’s website….(More)”.

Synthetic data: Unlocking the power of data and skills for machine learning


Karen Walker at Gov.UK: “Defence generates and holds a lot of data. We want to be able to get the best out of it, unlocking new insights that aren’t currently visible, through the use of innovative data science and analytics techniques tailored to defence’s specific needs. But this can be difficult because our data is often sensitive for a variety of reasons. For example, this might include information about the performance of particular vehicles, or personnel’s operational deployment details.

It is therefore often challenging to share data with experts who sit outside the Ministry of Defence, particularly amongst the wider data science community in government, small companies and academia. The use of synthetic data gives us a way to address this challenge and to benefit from the expertise of a wider range of people by creating datasets which aren’t sensitive. We have recently published a report from this work….(More)”.
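The core idea can be illustrated with a toy sketch: fit a simple statistical model to a sensitive numeric column, then sample fresh records from that model so the synthetic set preserves aggregate structure without reproducing any real row. This is a minimal illustration of the general principle only, not the method used in the report, and the data and names below are invented.

```python
import random
import statistics

def synthesize(column, n, seed=0):
    """Sample n synthetic values from a Gaussian fitted to a sensitive column.

    Real-world generators (copulas, Bayesian networks, GANs) model joint
    structure across columns too; this toy version preserves only the
    mean and standard deviation of a single column.
    """
    mu = statistics.mean(column)
    sigma = statistics.stdev(column)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical "sensitive" source data, e.g. vehicle performance figures
# that could not be shared outside the department.
sensitive = [102.3, 98.7, 110.5, 95.2, 104.8, 99.9, 107.1, 101.4]

synthetic = synthesize(sensitive, n=1000)

# The synthetic set mirrors the aggregate statistics of the original,
# but contains none of the original records.
print(statistics.mean(synthetic), statistics.stdev(synthetic))
```

A shareable dataset built this way lets outside data scientists develop and test methods against realistic structure, with results that can later be re-run on the real data inside the department.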

[Figure: original data and synthetic data side by side in a 2D chart; the two look almost identical]

Why Personal Data Is a National Security Issue


Article by Susan Ariel Aaronson: “…Concerns about the national security threat from personal data held by foreigners first emerged in 2013. Several U.S. entities, including Target, J.P. Morgan, and the U.S. Office of Personnel Management, were hacked. Many attributed the hacking to Chinese entities. Administration officials concluded that the Chinese government could cross-reference legally obtained and hacked data sets to reveal information about U.S. objectives and strategy.

Personal data troves can also be cross-referenced to identify individuals, putting both personal security as well as national security at risk. Even U.S. firms pose a direct and indirect security threat to individuals and the nation because of their failure to adequately protect personal data. For example, Facebook has a disturbing history of sharing personal data without consent and allowing its clients to use that data to manipulate users. Some app designers have enabled functionality unnecessary for their software’s operation, while others, like Anomaly 6, embedded their software in mobile apps without the permission of users or firms. Other companies use personal data without user permission to create new products. Clearview AI scraped billions of images from major web services such as Facebook, Google, and YouTube, and sold these images to law enforcement agencies around the world. 

Firms can also inadvertently aggregate personal data and in so doing threaten national security. Strava, an athletes’ social network, released a heat map of its global users’ activities in 2018. Savvy analysts were able to use the heat map to reveal secret military bases and patrol routes. Chinese-owned data firms could be a threat to national security if they share data with the Chinese government. But the problem lies in the U.S.’s failure to adequately protect personal data and police the misuse of data collected without the permission of users….(More)”.

The Atlas of Surveillance


Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.

IoT Security Is a Mess. Privacy ‘Nutrition’ Labels Could Help


Lily Hay Newman at Wired: “…Given that IoT security seems unlikely to magically improve anytime soon, researchers and regulators are rallying behind a new approach to managing IoT risk. Think of it as nutrition labels for embedded devices.

At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is not only to shed light on a device’s security posture but also to explain how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties.

“In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous,” says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. “It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers.”

Nutrition labels on packaged foods have a certain amount of standardization around the world, but they’re still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber. So the CMU researchers focused a lot of their efforts on making their IoT label as transparent and accessible as possible. To that end, they included both a primary and secondary layer to the label. The primary label is what would be printed on device boxes. To access the secondary label, you could follow a URL or scan a QR code to see more granular information about a device….(More)”.
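The two-layer scheme described above can be sketched as structured data: a compact primary layer for the device box, plus a pointer (the URL behind the QR code) to a fuller secondary layer. The field names below are illustrative guesses based on the attributes the article mentions, not the actual CMU label specification.

```python
import json

def make_label(device, updates_until, sensors, shares_with_third_parties,
               details_url):
    """Build a hypothetical two-layer IoT security/privacy label.

    Field names are assumptions for illustration; the real CMU label
    defines its own schema and attribute set.
    """
    primary = {  # compact layer, printed on the device box
        "device": device,
        "security_updates_until": updates_until,
        "sensors": sensors,
        "data_shared_with_third_parties": shares_with_third_parties,
        "more_info": details_url,  # rendered as a QR code / URL on the box
    }
    secondary = {  # granular layer, hosted at details_url
        "device": device,
        "data_collected": [f"{s} readings" for s in sensors],
        "retention_policy": "unspecified",
        "privacy_controls": ["opt out of sharing", "delete account data"],
    }
    return primary, secondary

primary, secondary = make_label(
    device="Example Smart Camera",       # hypothetical product
    updates_until="2027-01",
    sensors=["camera", "microphone"],
    shares_with_third_parties=True,
    details_url="https://example.com/label/smart-camera",
)
print(json.dumps(primary, indent=2))
```

Keeping the label machine-readable like this would also let retailers and comparison sites surface the same facts a shopper sees on the box.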

COVID-19 Highlights Need for Public Intelligence


Blog by Steven Aftergood: “Hobbled by secrecy and timidity, the U.S. intelligence community has been conspicuously absent from efforts to combat the COVID-19 pandemic, the most serious national and global security challenge of our time.

The silence of intelligence today represents a departure from the straightforward approach of then-Director of National Intelligence Dan Coats who offered the clearest public warning of the risk of a pandemic at the annual threat hearing of the Senate Intelligence Committee in January 2019:

“We assess that the United States and the world will remain vulnerable to the next flu pandemic or large-scale outbreak of a contagious disease that could lead to massive rates of death and disability, severely affect the world economy, strain international resources, and increase calls on the United States for support,” DNI Coats testified.

But this year, for the first time in recent memory, the annual threat hearing was canceled, reportedly to avoid conflict between intelligence testimony and White House messaging. Though that seems humiliating to everyone involved, no satisfactory alternative explanation has been provided. The 2020 worldwide threat statement remains classified, according to an ODNI denial of a Freedom of Information Act request for a copy. And intelligence agencies have been reduced to recirculating reminders from the Centers for Disease Control to wash your hands and practice social distancing.

The US intelligence community evidently has nothing useful to say to the nation about the origins of the COVID-19 pandemic, its current spread or anticipated development, its likely impact on other security challenges, its effect on regional conflicts, or its long-term implications for global health.

These are all topics perfectly suited to open source intelligence collection and analysis. But the intelligence community disabled its open source portal last year. And the general public was barred even from that.

It didn’t — and doesn’t — have to be that way.

In 1993, the Federation of American Scientists created an international email network called ProMED — Program for Monitoring Emerging Diseases — which was intended to help discover and provide early warning about new infectious diseases.

Run on a shoestring budget and led by Stephen S. Morse, Barbara Hatch Rosenberg, Jack Woodall and Dorothy Preslar, ProMED was based on the notion that “public intelligence” is not an oxymoron. That is to say, physicians, scientists, researchers, and other members of the public — not just governments — have the need for current threat assessments that can be readily shared, consumed and analyzed. The initiative quickly proved its worth….(More)”.