Real-Time Incident Data Could Change Road Safety Forever


Skip Descant at GovTech: “Data collected from connected vehicles can offer near real-time insights into highway safety problem areas, identifying near-misses, troublesome intersections and other roadway dangers.

New research from Michigan State University and Ford Mobility, which tracked driving incidents on Ford vehicles outfitted with connected vehicle technology, points to a future of greatly expanded understanding of roadway events, far beyond simply reading crash data.

“Connected vehicle data allows us to know what’s happening now. And that’s a huge thing. And I think that’s where a lot of the potential is, to allow us to actively monitor the roadways,” said Meredith Nelson, connected and automated vehicles analyst with the Michigan Department of Transportation.

The research looked at data collected from Ford vehicles in the Detroit metro region equipped with connected vehicle technology from January 2020 to June 2020, drawing on data collected by Ford’s Safety Insights platform in partnership with StreetLight Data. The data offers insights into near-miss events like hard braking, hard acceleration and hard cornering. In 2020 alone, Ford measured more than a half-billion events from tens of millions of trips.

Traditionally, researchers relied on police-reported crash data, which has its drawbacks, in part because of delays in reporting, said Peter Savolainen, an engineering professor in the Department of Civil and Environmental Engineering at Michigan State University, whose research focuses on road user behavior….(More)”.
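The near-miss events the article mentions (hard braking, hard acceleration, hard cornering) are typically derived from accelerometer thresholds. As a minimal sketch of the idea: the thresholds, field names and aggregation below are hypothetical illustrations, not Ford's or StreetLight's actual method.

```python
from dataclasses import dataclass

# Illustrative thresholds in units of g; real platforms calibrate these empirically.
HARD_BRAKE_G = -0.40   # longitudinal deceleration at or below this flags a hard brake
HARD_ACCEL_G = 0.35    # longitudinal acceleration at or above this flags a hard accel
HARD_CORNER_G = 0.40   # lateral acceleration magnitude at or above this flags a hard corner

@dataclass
class Sample:
    lon_g: float  # longitudinal acceleration, g (negative = braking)
    lat_g: float  # lateral acceleration, g

def classify(sample: Sample) -> list[str]:
    """Label one telemetry sample with any near-miss event types it triggers."""
    events = []
    if sample.lon_g <= HARD_BRAKE_G:
        events.append("hard_brake")
    if sample.lon_g >= HARD_ACCEL_G:
        events.append("hard_accel")
    if abs(sample.lat_g) >= HARD_CORNER_G:
        events.append("hard_corner")
    return events

def count_events(trip: list[Sample]) -> dict[str, int]:
    """Aggregate near-miss event counts over a trip (a sequence of samples)."""
    counts: dict[str, int] = {}
    for sample in trip:
        for event in classify(sample):
            counts[event] = counts.get(event, 0) + 1
    return counts
```

Aggregating such counts per road segment, rather than per trip, is what lets agencies spot troublesome intersections before crashes are ever reported.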

Sovereignty and Data Localization


Paper by Emily Wu: “Data localization policies impose obligations on businesses to store and process data locally, rather than in servers located overseas. The adoption of data localization laws has been increasing, driven by the fear that a nation’s sovereignty will be threatened by its inability to exert full control over data stored outside its borders. This is particularly relevant to the US given its dominance in many areas of the digital ecosystem including artificial intelligence and cloud computing.

Unfortunately, data localization policies are causing more harm than good. They are ineffective at improving security, do little to simplify the regulatory landscape, and are causing economic harms to the markets where they are imposed. In order to move away from these policies, the fear of sovereignty dilution must be addressed by alternative means. This will be achieved most effectively by focusing on both technical concerns and value concerns.

To address technical concerns, the US should:

1. Enact a federal national privacy law to reduce the fears that foreign nations have about the power of US tech companies.

2. Mandate privacy and security frameworks by industry to demonstrate the importance that US industry places on privacy and security, recognizing it as fundamental to their business success.

3. Increase investment in cybersecurity to ensure that in a competitive market, the US has the best offering in both customer experience and security assurance.

4. Expand multi-lateral agreements under the CLOUD Act to help alleviate the concerns that data stored by US companies will be inaccessible to foreign governments when relevant to a criminal investigation…(More)”

The real-life plan to use novels to predict the next war


Philip Oltermann at The Guardian: “…The name of the initiative was Project Cassandra: for the next two years, university researchers would use their expertise to help the German defence ministry predict the future.

The academics weren’t AI specialists, or scientists, or political analysts. Instead, the people the colonels had sought out in a stuffy top-floor room were a small team of literary scholars led by Jürgen Wertheimer, a professor of comparative literature with wild curls and a penchant for black roll-necks….

But Wertheimer says great writers have a “sensory talent”. Literature, he reasons, has a tendency to channel social trends, moods and especially conflicts that politicians prefer to remain undiscussed until they break out into the open.

“Writers represent reality in such a way that their readers can instantly visualise a world and recognise themselves inside it. They operate on a plane that is both objective and subjective, creating inventories of the emotional interiors of individual lives throughout history.”…

In its bid for further government funding, Wertheimer’s team was up against Berlin’s Fraunhofer Institute, Europe’s largest organisation for applied research and development services, which had been asked to run the same pilot project with a data-led approach. Cassandra was simply better, says the defence ministry official, who asked to remain anonymous.

“Predicting a conflict a year, or a year and a half in advance, that’s something our systems were already capable of. Cassandra promised to register disturbances five to seven years in advance – that was something new.”

The German defence ministry decided to extend Project Cassandra’s funding by two years. It wanted Wertheimer’s team to develop a method for converting literary insights into hard facts that could be used by military strategists or operatives: “emotional maps” of crisis regions, especially in Africa and the Middle East, that measured “the rise of violent language in chronological order”….(More)

Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks


Report by the U.S. Government Accountability Office: “GAO surveyed 42 federal agencies that employ law enforcement officers about their use of facial recognition technology. Twenty reported owning systems with facial recognition technology or using systems owned by other entities, such as other federal, state, local, and non-government entities (see figure).

Ownership and Use of Facial Recognition Technology Reported by Federal Agencies that Employ Law Enforcement Officers


Note: For more details, see figure 2 in GAO-21-518.

Agencies reported using the technology to support several activities (e.g., criminal investigations) and in response to COVID-19 (e.g., verify an individual’s identity remotely). Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020. Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity.

All fourteen agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities. However, only one was aware of which non-federal systems its employees used. By having a mechanism to track what non-federal systems are used by employees and assessing related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public….GAO is making two recommendations to each of 13 federal agencies: implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems. Twelve agencies concurred with both recommendations. The U.S. Postal Service concurred with one and partially concurred with the other. GAO continues to believe the recommendation is valid, as described in the report….(More)”.

Spies Like Us: The Promise and Peril of Crowdsourced Intelligence


Book Review by Amy Zegart of “We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News” by Eliot Higgins: “On January 6, throngs of supporters of U.S. President Donald Trump rampaged through the U.S. Capitol in an attempt to derail Congress’s certification of the 2020 presidential election results. The mob threatened lawmakers, destroyed property, and injured more than 100 police officers; five people, including one officer, died in circumstances surrounding the assault. It was the first attack on the Capitol since the War of 1812 and the first violent transfer of presidential power in American history.

Only a handful of the rioters were arrested immediately. Most simply left the Capitol complex and disappeared into the streets of Washington. But they did not get away for long. It turns out that the insurrectionists were fond of taking selfies. Many of them posted photos and videos documenting their role in the assault on Facebook, Instagram, Parler, and other social media platforms. Some even earned money live-streaming the event and chatting with extremist fans on a site called DLive. 

Amateur sleuths immediately took to Twitter, self-organizing to help law enforcement agencies identify and charge the rioters. Their investigation was impromptu, not orchestrated, and open to anyone, not just experts. Participants didn’t need a badge or a security clearance—just an Internet connection….(More)”.

A growing problem of ‘deepfake geography’: How AI falsifies satellite images


Kim Eckart at UW News: “A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.

Both images exemplify what a new University of Washington-led study calls “location spoofing.” The photos — created by different people, for different purposes — are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such “deepfake geography” could become a growing problem.

So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study, which published April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That’s due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term “paper towns” describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of “Beatosu” and “Goblu,” a play on “Beat OSU” and “Go Blue,” because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map….(More)”.

The Ease of Tracking Mobile Phones of U.S. Soldiers in Hot Spots


Byron Tau at the Wall Street Journal: “In 2016, a U.S. defense contractor named PlanetRisk Inc. was working on a software prototype when its employees discovered they could track U.S. military operations through the data generated by the apps on the mobile phones of American soldiers.

At the time, the company was using location data drawn from apps such as weather, games and dating services to build a surveillance tool that could monitor the travel of refugees from Syria to Europe and the U.S., according to interviews with former employees. The company’s goal was to sell the tool to U.S. counterterrorism and intelligence officials.

But buried in the data was evidence of sensitive U.S. military operations by American special-operations forces in Syria. The company’s analysts could see phones that had come from military facilities in the U.S., traveled through countries like Canada or Turkey and were clustered at the abandoned Lafarge Cement Factory in northern Syria, a staging area at the time for U.S. special-operations and allied forces.

The discovery was an early look at what today has become a significant challenge for the U.S. armed forces: how to protect service members, intelligence officers and security personnel in an age where highly revealing commercial data being generated by mobile phones and other digital services is bought and sold in bulk, and available for purchase by America’s adversaries….(More)“.
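The pattern PlanetRisk’s analysts describe, phones first observed near military facilities that later cluster at a distant site, can be approximated with a simple geofence check over commercial location pings. The sketch below is illustrative only: the coordinates, radius and data layout are hypothetical, not the company’s actual pipeline.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_devices(pings, origin_fences, target_fence, radius_km=5.0):
    """Return device IDs seen inside any origin geofence and later inside the target.

    pings: iterable of (device_id, timestamp, lat, lon) tuples.
    origin_fences: list of (lat, lon) centres, e.g. known military facilities.
    target_fence: (lat, lon) centre of the site of interest.
    """
    flagged = set()
    seen_at_origin = set()
    tlat, tlon = target_fence
    for device_id, ts, lat, lon in sorted(pings, key=lambda p: p[1]):
        if any(haversine_km(lat, lon, flat, flon) <= radius_km
               for flat, flon in origin_fences):
            seen_at_origin.add(device_id)
        if device_id in seen_at_origin and haversine_km(lat, lon, tlat, tlon) <= radius_km:
            flagged.add(device_id)
    return flagged
```

The unsettling point of the article is that nothing in this sketch requires privileged access: the inputs are the kind of app-generated location data that is bought and sold in bulk.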

Power to the People


Book by Audrey Kurth Cronin on “How Open Technological Innovation is Arming Tomorrow’s Terrorists…Never have so many possessed the means to be so lethal. The diffusion of modern technology (robotics, cyber weapons, 3-D printing, autonomous systems, and artificial intelligence) to ordinary people has given them access to weapons of mass violence previously monopolized by the state. In recent years, states have attempted to stem the flow of such weapons to individuals and non-state groups, but their efforts are failing.

As Audrey Kurth Cronin explains in Power to the People, what we are seeing now is an exacerbation of an age-old trend. Over the centuries, the most surprising developments in warfare have occurred because of advances in technologies combined with changes in who can use them. Indeed, accessible innovations in destructive force have long driven new patterns of political violence. When Nobel invented dynamite and Kalashnikov designed the AK-47, each inadvertently spurred terrorist and insurgent movements that killed millions and upended the international system.

That history illuminates our own situation, in which emerging technologies are altering society and redistributing power. The twenty-first century “sharing economy” has already disrupted every institution, including the armed forces. New “open” technologies are transforming access to the means of violence. Just as importantly, higher-order functions that previously had been exclusively under state military control – mass mobilization, force projection, and systems integration – are being harnessed by non-state actors. Cronin closes by focusing on how to respond so that we both preserve the benefits of emerging technologies and reduce the risks. Power, in the form of lethal technology, is flowing to the people, but the same technologies that empower can imperil global security – unless we act strategically….(More)”.

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Mika Westerlund, Diane A. Isabelle and Seppo Leminen: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public-private collaboration such as the launch of contact tracing apps to tackle COVID-19. Thus, such apps also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data, and to the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, government intelligence activities, and the security of increasing amounts of biodata on their trust in and acceptance of government use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of government use of personal data, but not necessarily on trust in the government being respectful of privacy. Authorities should be more transparent about the collection and uses of data….(More)”

The Nudge Puzzle: Matching Nudge Interventions to Cybersecurity Decisions


Paper by Verena Zimmermann and Karen Renaud: “Nudging is a promising approach for influencing people to make advisable choices in a range of domains, including cybersecurity. However, the processes underlying the concept, and the nudge’s effectiveness in different contexts and in the long term, are still poorly understood. Our research thus first reviewed the nudge concept and differentiated it from other interventions before applying it to the cybersecurity area. We then carried out an empirical study to assess the effectiveness of three different nudge-related interventions on four types of cybersecurity-specific decisions. Our study demonstrated that the combination of a simple nudge and information provision, termed a “hybrid nudge,” was at least as effective as the simple nudge on its own in encouraging secure choices, and in some decision contexts even more effective. This indicates that the inclusion of information when deploying a nudge, thereby increasing the intervention’s transparency, does not necessarily diminish its effectiveness.

A follow-up study explored the educational and long-term impact of our tested nudge interventions to encourage secure choices. The results indicate that the impact of the initial nudges, of all kinds, did not endure. We conclude by discussing our findings and their implications for research and practice….(More)”.