Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks


Report by the U.S. Government Accountability Office: “GAO surveyed 42 federal agencies that employ law enforcement officers about their use of facial recognition technology. Twenty reported owning systems with facial recognition technology or using systems owned by other entities, such as other federal, state, local, and non-government entities (see figure).

Ownership and Use of Facial Recognition Technology Reported by Federal Agencies that Employ Law Enforcement Officers


Note: For more details, see figure 2 in GAO-21-518.

Agencies reported using the technology to support several activities (e.g., criminal investigations) and in response to COVID-19 (e.g., to verify an individual’s identity remotely). Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020. Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity.

All fourteen agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities. However, only one was aware of which non-federal systems its employees used. By having a mechanism to track which non-federal systems employees use and assessing related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public….GAO is making two recommendations to each of 13 federal agencies: implement a mechanism to track which non-federal systems are used by employees, and assess the risks of using these systems. Twelve agencies concurred with both recommendations. The U.S. Postal Service concurred with one and partially concurred with the other. GAO continues to believe the recommendation is valid, as described in the report….(More)”.

Spies Like Us: The Promise and Peril of Crowdsourced Intelligence


Book Review by Amy Zegart of “We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News” by Eliot Higgins: “On January 6, throngs of supporters of U.S. President Donald Trump rampaged through the U.S. Capitol in an attempt to derail Congress’s certification of the 2020 presidential election results. The mob threatened lawmakers, destroyed property, and injured more than 100 police officers; five people, including one officer, died in circumstances surrounding the assault. It was the first attack on the Capitol since the War of 1812 and the first violent transfer of presidential power in American history.

Only a handful of the rioters were arrested immediately. Most simply left the Capitol complex and disappeared into the streets of Washington. But they did not get away for long. It turns out that the insurrectionists were fond of taking selfies. Many of them posted photos and videos documenting their role in the assault on Facebook, Instagram, Parler, and other social media platforms. Some even earned money live-streaming the event and chatting with extremist fans on a site called DLive. 

Amateur sleuths immediately took to Twitter, self-organizing to help law enforcement agencies identify and charge the rioters. Their investigation was impromptu, not orchestrated, and open to anyone, not just experts. Participants didn’t need a badge or a security clearance—just an Internet connection….(More)”.

A growing problem of ‘deepfake geography’: How AI falsifies satellite images


Kim Eckart at UW News: “A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.

Both images exemplify what a new University of Washington-led study calls “location spoofing.” The photos — created by different people, for different purposes — are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such “deepfake geography” could become a growing problem.

So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which was published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”
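The study’s own detection experiments are described in the paper itself. Purely as an illustration of the general approach (training a binary classifier to separate authentic satellite tiles from simulated ones), the sketch below shows what a minimal version could look like in PyTorch. The architecture, the FakeTileDetector name, and the random stand-in tensors are all invented for this example; none of it is the authors’ actual pipeline.

```python
# Minimal sketch of a patch-based real-vs-fake satellite tile classifier.
# Random tensors stand in for labeled image tiles; every detail here is
# illustrative, not the method used in the UW study.
import torch
import torch.nn as nn

class FakeTileDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),          # one logit: real (0) vs fake (1)
        )

    def forward(self, x):
        return self.net(x)

model = FakeTileDetector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 RGB tiles of 64x64 pixels with random real/fake labels.
tiles = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
logits = model(tiles)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss on stand-in batch: {loss.item():.4f}")
```

In practice such a detector would need large collections of labeled real and simulated tiles, which is part of why the authors call for a system of geographic fact-checking rather than relying on classifiers alone.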

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map….(More)”.

The Ease of Tracking Mobile Phones of U.S. Soldiers in Hot Spots


Byron Tau at the Wall Street Journal: “In 2016, a U.S. defense contractor named PlanetRisk Inc. was working on a software prototype when its employees discovered they could track U.S. military operations through the data generated by the apps on the mobile phones of American soldiers.

At the time, the company was using location data drawn from apps such as weather, games and dating services to build a surveillance tool that could monitor the travel of refugees from Syria to Europe and the U.S., according to interviews with former employees. The company’s goal was to sell the tool to U.S. counterterrorism and intelligence officials.

But buried in the data was evidence of sensitive U.S. military operations by American special-operations forces in Syria. The company’s analysts could see phones that had come from military facilities in the U.S., traveled through countries like Canada or Turkey and were clustered at the abandoned Lafarge Cement Factory in northern Syria, a staging area at the time for U.S. special-operations and allied forces.
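PlanetRisk’s actual tooling is not public, but the underlying mechanics (grouping repeated pings into co-located clusters) are standard. A minimal sketch, assuming scikit-learn’s DBSCAN with a haversine distance metric, might look like the following; the coordinates, radius, and sample counts are all invented for illustration.

```python
# Minimal sketch of clustering location pings to reveal a shared site.
# All points and thresholds below are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

# Invented (lat, lon) pings in degrees: a tight cluster plus two stray points.
pings_deg = np.array([
    [36.590, 38.010], [36.591, 38.011], [36.590, 38.012],
    [36.592, 38.010],                     # four pings within a few hundred meters
    [41.015, 28.979], [39.933, 32.866],   # two isolated pings far away
])

# The haversine metric expects radians; eps below is ~1 km (Earth radius ~6371 km).
pings_rad = np.radians(pings_deg)
db = DBSCAN(eps=1.0 / 6371.0, min_samples=3, metric="haversine").fit(pings_rad)

for label in set(db.labels_):
    members = pings_deg[db.labels_ == label]
    name = "noise" if label == -1 else f"cluster {label}"
    print(name, "->", len(members), "pings, centroid", members.mean(axis=0).round(3))
```

With a roughly one-kilometer radius and a minimum cluster size of three pings, the tightly grouped points surface as a single cluster while the isolated ones are discarded as noise, which is essentially how a recurring staging area would stand out in a large commercial location feed.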

The discovery was an early look at what today has become a significant challenge for the U.S. armed forces: how to protect service members, intelligence officers and security personnel in an age when highly revealing commercial data generated by mobile phones and other digital services is bought and sold in bulk, and available for purchase by America’s adversaries….(More)”.

Power to the People


Book by Audrey Kurth Cronin on “How Open Technological Innovation is Arming Tomorrow’s Terrorists…Never have so many possessed the means to be so lethal. The diffusion of modern technology (robotics, cyber weapons, 3-D printing, autonomous systems, and artificial intelligence) to ordinary people has given them access to weapons of mass violence previously monopolized by the state. In recent years, states have attempted to stem the flow of such weapons to individuals and non-state groups, but their efforts are failing.

As Audrey Kurth Cronin explains in Power to the People, what we are seeing now is an exacerbation of an age-old trend. Over the centuries, the most surprising developments in warfare have occurred because of advances in technologies combined with changes in who can use them. Indeed, accessible innovations in destructive force have long driven new patterns of political violence. When Nobel invented dynamite and Kalashnikov designed the AK-47, each inadvertently spurred terrorist and insurgent movements that killed millions and upended the international system.

That history illuminates our own situation, in which emerging technologies are altering society and redistributing power. The twenty-first century “sharing economy” has already disrupted every institution, including the armed forces. New “open” technologies are transforming access to the means of violence. Just as importantly, higher-order functions that previously had been exclusively under state military control – mass mobilization, force projection, and systems integration – are being harnessed by non-state actors. Cronin closes by focusing on how to respond so that we preserve the benefits of emerging technologies while reducing the risks. Power, in the form of lethal technology, is flowing to the people, but the same technologies that empower can imperil global security – unless we act strategically….(More)”.

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Mika Westerlund, Diane A. Isabelle, and Seppo Leminen: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public-private collaboration, such as the launch of contact tracing apps to tackle COVID-19. Thus, such apps also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data and the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, government’s intelligence activities, and security of the increasing biodata on their trust in and acceptance of government’s use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of government’s use of personal data, but not necessarily on trust in the government being respectful of privacy. Authorities should be more transparent about the collection and uses of data….(More)”.

The Nudge Puzzle: Matching Nudge Interventions to Cybersecurity Decisions


Paper by Verena Zimmermann and Karen Renaud: “Nudging is a promising approach for influencing people to make advisable choices in a range of domains, including cybersecurity. However, the processes underlying the concept, and the nudge’s effectiveness in different contexts and in the long term, are still poorly understood. Our research thus first reviewed the nudge concept and differentiated it from other interventions before applying it to the cybersecurity area. We then carried out an empirical study to assess the effectiveness of three different nudge-related interventions on four types of cybersecurity-specific decisions. Our study demonstrated that the combination of a simple nudge and information provision, termed a “hybrid nudge,” was at least as effective in encouraging secure choices as the simple nudge on its own, and in some decision contexts even more so. This indicates that the inclusion of information when deploying a nudge, thereby increasing the intervention’s transparency, does not necessarily diminish its effectiveness.

A follow-up study explored the educational and long-term impact of our tested nudge interventions to encourage secure choices. The results indicate that the impact of the initial nudges, of all kinds, did not endure. We conclude by discussing our findings and their implications for research and practice….(More)”.

Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time when the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology,” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules.

The three organizations make several recommendations to conclude the report….(More)”.

How the U.S. Military Buys Location Data from Ordinary Apps


Joseph Cox at Vice: “The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

Through public records, interviews with developers, and technical analysis, Motherboard uncovered two separate, parallel data streams that the U.S. military uses, or has used, to obtain location data. One relies on a company called Babel Street, which creates a product called Locate X. U.S. Special Operations Command (USSOCOM), a branch of the military tasked with counterterrorism, counterinsurgency, and special reconnaissance, bought access to Locate X to assist on overseas special forces operations. The other stream is through a company called X-Mode, which obtains location data directly from apps, then sells that data to contractors, and by extension, the military.

The news highlights the opaque location data industry and the fact that the U.S. military, which has infamously used other location data to target drone strikes, is purchasing access to sensitive data. Many of the users of apps involved in the data supply chain are Muslim, which is notable considering that the United States has waged a decades-long war on predominantly Muslim terror groups in the Middle East, and has killed hundreds of thousands of civilians during its military operations in Pakistan, Afghanistan, and Iraq. Motherboard does not know of any specific operations in which this type of app-based location data has been used by the U.S. military.

The apps sending data to X-Mode include Muslim Pro, an app that reminds users when to pray and what direction Mecca is in relation to the user’s current location. The app has been downloaded over 50 million times on Android, according to the Google Play Store, and over 98 million in total across other platforms including iOS, according to Muslim Pro’s website….(More)”.

Synthetic data: Unlocking the power of data and skills for machine learning


Karen Walker at Gov.UK: “Defence generates and holds a lot of data. We want to be able to get the best out of it, unlocking new insights that aren’t currently visible, through the use of innovative data science and analytics techniques tailored to defence’s specific needs. But this can be difficult because our data is often sensitive for a variety of reasons. For example, this might include information about the performance of particular vehicles, or personnel’s operational deployment details.

It is therefore often challenging to share data with experts who sit outside the Ministry of Defence, particularly amongst the wider data science community in government, small companies and academia. The use of synthetic data gives us a way to address this challenge and to benefit from the expertise of a wider range of people by creating datasets which aren’t sensitive. We have recently published a report from this work….(More)”.
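The report itself covers the techniques in depth. As a minimal sketch of the core idea (a stand-in dataset that reproduces the statistics of a sensitive one without containing any real record), the example below fits a multivariate normal to an invented “sensitive” table and samples synthetic rows from it. This is a deliberately naive method, not the one in the report, and every column and number here is made up; production synthetic-data generators are considerably more sophisticated.

```python
# Minimal sketch of one naive way to make a non-sensitive stand-in dataset:
# fit a multivariate normal to the sensitive table's mean and covariance and
# sample fresh rows. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in "sensitive" table: 500 rows of three correlated numeric columns.
sensitive = rng.multivariate_normal(
    mean=[50.0, 120.0, 7.5],
    cov=[[25.0, 12.0, 1.0],
         [12.0, 100.0, 3.0],
         [1.0, 3.0, 2.25]],
    size=500,
)

# Fit summary statistics, then sample synthetic rows from the fitted model.
mu = sensitive.mean(axis=0)
cov = np.cov(sensitive, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=500)

# Aggregate structure survives even though no synthetic row is a real record.
print("original means :", sensitive.mean(axis=0).round(2))
print("synthetic means:", synthetic.mean(axis=0).round(2))
print("max correlation gap:", np.abs(
    np.corrcoef(sensitive, rowvar=False) - np.corrcoef(synthetic, rowvar=False)
).max().round(3))
```

Aggregate properties such as means and correlations carry over to the synthetic table, which is what lets outside data scientists work usefully with data that discloses nothing about any individual record.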

Figure: Original data and synthetic data shown side by side in a 2D chart; the two images look almost identical.