How digital sleuths unravelled the mystery of Iran’s plane crash


Chris Stokel-Walker at Wired: “The video shows a faint glow in the distance, zig-zagging like a piece of paper caught in an updraft, slowly meandering towards the horizon. Then there’s a bright flash and the trees in the foreground are thrown into shadow as Ukraine International Airlines flight PS752 hits the ground early on the morning of January 8, killing all 176 people on board.

At first, it seemed like an accident – engine failure was fingered as the cause – until the first video surfaced, showing the plane seemingly on fire as it weaved to the ground. United States officials started to investigate, and a more complicated picture emerged. It appeared that the plane had been hit by a missile, a conclusion corroborated by a second video that appears to show the moment the missile ploughs into the Boeing 737-800. While military and intelligence officials at governments around the world conducted their inquiries in secret, a team of investigators was using open-source intelligence (OSINT) techniques to piece together the puzzle of flight PS752.

It’s not unusual nowadays for OSINT to lead the way in decoding key news events. When Sergei Skripal was poisoned, Bellingcat, an open-source intelligence website, tracked and identified his would-be killers as they traipsed across London and Salisbury, delving into military records to blow the cover of the agents sent to kill him. And in the days after the Ukraine International Airlines plane crashed into the ground outside Tehran, Bellingcat and The New York Times blew a hole in the supposition that the aircraft had been downed by engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane “in error”.

So how do they do it? “You can think of OSINT as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech Against Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane, was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University….(More)”.

Lack of guidance leaves public services in limbo on AI, says watchdog


Dan Sabbagh at the Guardian: “Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.

“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?

“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”

The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.

Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.

Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate….(More)”.

The Economics of Violence: How Behavioral Science Can Transform our View of Crime, Insurgency, and Terrorism


Book by Gary M. Shiffman: “How do we understand illicit violence? Can we prevent it? Building on behavioral science and economics, this book begins with the idea that humans are more predictable than we like to believe, and this ability to model human behavior applies equally well to leaders of violent and coercive organizations as it does to everyday people. Humans ultimately seek survival for themselves and their communities in a world of competition. While the dynamics of ‘us vs. them’ are divisive, they also help us to survive. Access to increasingly larger markets, facilitated through digital communications and social media, creates more transnational opportunities for deception, coercion, and violence. If the economist’s perspective helps to explain violence, then it must also facilitate insights into promoting peace and security. If we can approach violence as behavioral scientists, then we can also better structure our institutions to create policies that make the world a more secure place, for us and for future generations….(More)”.

New Orleans has declared a state of emergency after a cyberattack


MIT Technology Review: “The city told its employees to shut down their computers as a precaution this weekend after an attempted cyberattack on Friday.

The news: New Orleans spotted suspicious activity in its networks at around 5 a.m. on Friday, with a spike in the attempted attacks at 8 a.m. It detected phishing attempts and ransomware, Kim LaGrue, the city’s head of IT, later told reporters. Once they were confident the city was under attack, the team shut down its servers and computers. City authorities then filed a declaration of a state of emergency with the Civil District Court, and pulled local, state, and federal authorities into a (still pending) investigation of the incident. The city is still working to recover data from the attack but will be open as usual from this morning, Mayor LaToya Cantrell said on Twitter.

Was it ransomware? The nature of the attack is still something of a mystery. Cantrell confirmed that ransomware had been detected, but the city hasn’t received any demands for ransom money.

The positives: New Orleans was at least fairly well prepared for this attack, thanks to training for this scenario and its ability to operate many of its services without internet access, officials told reporters.

A familiar story: New Orleans is just the latest government to face ransomware attacks, after nearly two dozen cities in Texas were targeted in August, plus Louisiana in November (causing the governor to declare a state of emergency). The phenomenon goes beyond the US, too: in October Johannesburg became the biggest city yet to face a ransomware attack….(More)”.

A World With a Billion Cameras Watching You Is Just Around the Corner


Liza Lin and Newley Purnell at the Wall Street Journal: “As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report.

The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total.
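The growth figure quoted from the IHS Markit report checks out arithmetically; a quick sanity check using only the numbers given in the text:

```python
# Figures as quoted from the IHS Markit report:
# roughly 770 million surveillance cameras today,
# more than 1 billion projected by the end of 2021.
cameras_today = 770_000_000
cameras_2021 = 1_000_000_000

growth = (cameras_2021 - cameras_today) / cameras_today
print(f"{growth:.1%}")  # → 29.9%, i.e. the "almost 30% increase" cited
```
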

Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. The number of surveillance cameras in the U.S. would grow to 85 million by 2021, from 70 million last year, as American schools, malls and offices seek to tighten security on their premises, IHS analyst Oliver Philippou said.

Mr. Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

“It’s a public-safety issue,” Mr. Philippou said in an interview. “There is a big focus on crime and terrorism in recent years.”

The global security-camera industry has been energized by breakthroughs in image quality and artificial intelligence. These allow better and faster facial recognition and video analytics, which governments are using to do everything from managing traffic to predicting crimes.

China leads the world in the rollout of this kind of technology. It is home to the world’s largest camera makers, with its cameras on street corners, along busy roads and in residential neighborhoods….(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Cyber Influence and Cognitive Threats


Open Access book by Vladlena Benson and John Mcalaney: “In the wake of fresh allegations that personal data of Facebook users have been illegally used to influence the outcome of the US general election and the Brexit vote, the debate over manipulation of social big data continues to gain more momentum. Cyber Influence and Cognitive Threats addresses various emerging challenges in response to cyber security, examining cognitive applications in decision making, behaviour and basic human interaction. The book examines the role of psychology in cybersecurity by addressing each factor involved in the process: hackers, targets, cybersecurity practitioners, and the wider social context in which these groups operate.

Cyber Influence and Cognitive Threats covers a variety of topics including information systems, psychology, sociology, human resources, leadership, strategy, innovation, law, finance and others….(More)”.

AI Global Surveillance Technology


Carnegie Endowment: “Artificial intelligence (AI) technology is rapidly proliferating around the world. A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.

In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used.

To provide greater clarity, Carnegie presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:

  • Which countries are adopting AI surveillance technology?
  • What specific types of AI surveillance are governments deploying?
  • Which countries and companies are supplying this technology?

Learn more about our findings and how AI surveillance technology is spreading rapidly around the globe….(More)”.

Weaponized Interdependence: How Global Economic Networks Shape State Coercion


Henry Farrell and Abraham L. Newman in International Security: “Liberals claim that globalization has led to fragmentation and decentralized networks of power relations. This does not explain how states increasingly “weaponize interdependence” by leveraging global networks of informational and financial exchange for strategic advantage. The theoretical literature on network topography shows how standard models predict that many networks grow asymmetrically so that some nodes are far more connected than others. This model nicely describes several key global economic networks, centering on the United States and a few other states. Highly asymmetric networks allow states with (1) effective jurisdiction over the central economic nodes and (2) appropriate domestic institutions and norms to weaponize these structural advantages for coercive ends. In particular, two mechanisms can be identified. First, states can employ the “panopticon effect” to gather strategically valuable information. Second, they can employ the “chokepoint effect” to deny network access to adversaries. Tests of the plausibility of these arguments across two extended case studies that provide variation both in the extent of U.S. jurisdiction and in the presence of domestic institutions—the SWIFT financial messaging system and the internet—confirm the framework’s expectations. A better understanding of the policy implications of the use and potential overuse of these tools, as well as the response strategies of targeted states, will recast scholarly debates on the relationship between economic globalization and state coercion….(More)”
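The abstract’s claim that standard network-growth models produce highly asymmetric topologies, with a few nodes far more connected than the rest, can be illustrated with a minimal preferential-attachment simulation (a sketch only; the function name and parameters are illustrative and not drawn from the article):

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network where each new node links to one existing node,
    chosen with probability proportional to that node's current degree
    (the mechanism behind 'rich get richer' hub formation)."""
    rng = random.Random(seed)
    degrees = [1, 1]   # start with two nodes joined by one edge
    targets = [0, 1]   # each node index appears once per unit of degree
    for new_node in range(2, n_nodes):
        hub = rng.choice(targets)      # degree-proportional selection
        degrees.append(1)
        degrees[hub] += 1
        targets.extend([hub, new_node])
    return degrees

degrees = preferential_attachment(2000)
# A handful of hubs accumulate most links while the typical node has one:
print("max degree:", max(degrees))
print("median degree:", sorted(degrees)[len(degrees) // 2])
```

Running this, the maximum degree dwarfs the median: the structural asymmetry that, in Farrell and Newman’s argument, gives states with jurisdiction over central nodes their panopticon and chokepoint leverage.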

Government wants access to personal data while it pushes privacy


Sara Fischer and Scott Rosenberg at Axios: “Over the past two years, the U.S. government has tried to rein in how major tech companies use the personal data they’ve gathered on their customers. At the same time, government agencies are themselves seeking to harness those troves of data.

Why it matters: Tech platforms use personal information to target ads, whereas the government can use it to prevent and solve crimes, deliver benefits to citizens — or (illegally) target political dissent.

Driving the news: A new report from the Wall Street Journal details the ways in which family DNA testing sites like FamilyTreeDNA are pressured by the FBI to hand over customer data to help solve criminal cases using DNA.

  • The trend has privacy experts worried about the potential implications of the government having access to large pools of genetic data, even though many people whose data is included never agreed to its use for that purpose.

The FBI has particular interest in data from genetic and social media sites, because it could help solve crimes and protect the public.

  • For example, the FBI is “soliciting proposals from outside vendors for a contract to pull vast quantities of public data” from Facebook, Twitter Inc. and other social media companies, the Wall Street Journal reports.
  • The request is meant to help the agency surveil social behavior to “mitigate multifaceted threats, while ensuring all privacy and civil liberties compliance requirements are met.”
  • Meanwhile, the Trump administration has also urged social media platforms to cooperate with the government in efforts to flag individual users as potential mass shooters.

Other agencies have their eyes on big data troves as well.

  • Earlier this year, settlement talks between Facebook and the Department of Housing and Urban Development broke down over an advertising discrimination lawsuit when, according to a Facebook spokesperson, HUD “insisted on access to sensitive information — like user data — without adequate safeguards.”
  • HUD presumably wanted access to the data to ensure advertising discrimination wasn’t occurring on the platform, but it’s unclear whether the agency needed user data to be able to support that investigation….(More)”.