We Have the Power to Destroy Ourselves Without the Wisdom to Ensure That We Don’t


EdgeCast by Toby Ord: “Lately, I’ve been asking myself questions about the future of humanity, not just about the next five years or even the next hundred years, but about everything humanity might be able to achieve in the time to come.

The past of humanity is about 200,000 years. That’s how long Homo sapiens have been around, according to our current best guess (it might be a little bit longer). Maybe we should even include some of our other hominid ancestors and think about humanity somewhat more broadly. If we play our cards right, we could live hundreds of thousands of years more. In fact, there’s not much stopping us living millions of years. The typical species lives about a million years. Our 200,000 years so far would put us roughly in our adolescence: just old enough to be getting ourselves into trouble, but not wise enough to have thought through how we should act.

But a million years isn’t an upper bound for how long we could live. The horseshoe crab, for example, has lived for 450 million years so far. The Earth should remain habitable for at least that long. So, if we can survive as long as the horseshoe crab, we could have a future stretching millions of centuries from now. That’s millions of centuries of human progress, human achievement, and human flourishing. And if we could learn over that time how to reach out a little bit further into the cosmos to get to the planets around other stars, then we could have longer yet. If we made jumps of just seven light-years at a time, we could reach almost every star in the galaxy by continually spreading out from each new location. There are already plans in progress to send spacecraft across distances of this kind. If we could do that, the whole galaxy would open up to us….

Humanity is not a typical species. One of the things that most worries me is the way in which our technology might put us at risk. If we look back at the history of humanity these 2,000 centuries, we see this initially gradual accumulation of knowledge and power. If you think back to the earliest humans, they weren’t that remarkable compared to the other species around them. An individual human is not that remarkable on the savanna compared to a cheetah, a lion, or a gazelle, but what set us apart was our ability to work together, to cooperate with other humans to form something greater than ourselves. It was teamwork, the ability to work together with those in our tribe, that let us expand to dozens of humans working in cooperation. But much more important than that was our ability to cooperate across time, across the generations. By making small innovations and passing them on to our children, we were able to set a chain in motion wherein generations of people worked across time, slowly building up these innovations and technologies and accumulating power….(More)”.

Online collective intelligence course aims to improve responses to COVID-19 and other crises


PressRelease: “Working with 11 partner institutions around the world, The Governance Lab (The GovLab) at the New York University Tandon School of Engineering today launches a massive open online course (MOOC) on “Collective Crisis Intelligence.” The course is free, open to anyone, and designed to help institutions improve disaster response through the use of data and volunteer participation.

Thirteen modules have been created by leading global experts on major disasters such as the post-election violence in Kenya in 2008, the Fukushima nuclear plant disaster in 2011, the Ebola crisis in 2014, the Zika outbreak in 2016, and the current coronavirus. The course is designed to help those responding to the coronavirus make use of volunteerism.

As the COVID-19 pandemic reaches unprecedented proportions and spreads to more than 150 countries on six continents, policymakers are struggling to answer questions such as “How do we predict how the virus will spread?,” “How do we help the elderly and the homebound?,” “How do we provide economic assistance to those affected by business closures?,” and more. 

In each mini-lecture, those who have learned how to mobilize groups of people online to manage a crisis present the basic concepts and tools to learn, analyze, and implement a crowdsourced public response. Lectures include:

  • Introduction: Why Collective Intelligence Matters in a Crisis
  • Defining Actionable Problems (led by Matt Andrews, Harvard Kennedy School)
  • Three Day Evidence Review (led by Peter Bragge, Monash University, Australia)
  • Priorities for Collective Intelligence (led by Geoff Mulgan, University College London)
  • Smarter Crowdsourcing (led by Beth Simone Noveck, The GovLab)
  • Crowdfunding (led by Peter Baeck, Nesta, United Kingdom)
  • Secondary Fall Out (led by Azby Brown, Safecast, Japan)
  • Crowdsourcing Surveillance (led by Tolbert Nyenswah, Johns Hopkins Bloomberg School of Public Health, United States/Liberia)
  • Crowdsourcing Data (led by Angela Oduor Lungati and Juliana Rotich, Ushahidi, Kenya)
  • Mobilizing a Network (led by Sean Bonner, Safecast, Japan)
  • Crowdsourcing Scientific Expertise (led by Ali Nouri, Federation of American Scientists)
  • Chatbots and Social Media Strategies for Crisis (led by Nashin Mahtani, PetaBencana.id, Indonesia)
  • Conclusion: Lessons Learned

The course explores such innovative uses of crowdsourcing as Safecast’s implementation of citizen science to gather information about environmental conditions after the meltdown of the Fukushima nuclear plant; Ushahidi, an online platform in Kenya for crowdsourcing data for crisis relief, human rights advocacy, transparency, and accountability campaigns; and “Ask a Scientist,” an interactive tool developed by The GovLab with the Federation of American Scientists and the New Jersey Office of Innovation, in which a network of scientists answer citizens’ questions about COVID-19.

More information on the course is available at https://covidcourse.thegovlab.org

The 9/11 Playbook for Protecting Privacy


Adam Klein and Edward Felten at Politico: “Geolocation data—precise GPS coordinates or records of proximity to other devices, often collected by smartphone apps—is emerging as a critical tool for tracking potential spread. But other, more novel types of surveillance are already being contemplated for this first pandemic of the digital age. Body temperature readings from internet-connected thermometers are already being used at scale, but there are more exotic possibilities. Could smart-home devices be used to identify coughs of a timbre associated with Covid-19? Can facial recognition and remote temperature sensing be harnessed to identify likely carriers at a distance?

Weigh the benefits of each collection and use of data against the risks.

Each scenario will present a different level of privacy sensitivity, different collection mechanisms, different technical options affecting privacy, and varying potential value to health professionals, meaning there is no substitute for case-by-case judgment about whether the benefits of a particular use of data outweigh the risks.

The various ways to use location data, for example, present vastly different levels of concern for privacy. Aggregated location data, which combines many individualized location trails to show broader trends, is possible with few privacy risks, using methods that ensure no individual’s location trail is reconstructable from released data. For that reason, governments should not seek individualized location trails for any application where aggregated data would suffice—for example, analyzing travel trends to predict future epidemic hotspots.
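To make the distinction concrete, here is a minimal sketch of the kind of aggregation the paragraph describes: individual location pings are collapsed into coarse grid-cell device counts, and cells with too few devices are suppressed so that no individual trail can be reconstructed from the released figures. The grid size and the suppression threshold below are illustrative assumptions, not values drawn from the article.

```python
# Minimal sketch (not from the article): aggregate individual location pings
# into coarse grid-cell device counts and suppress small cells. The 0.1-degree
# grid (~11 km at the equator) and the 10-device threshold are assumptions
# chosen only for illustration.
MIN_DEVICES = 10  # cells reporting fewer devices than this are withheld

def to_cell(lat, lon, size=0.1):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat // size * size, 4), round(lon // size * size, 4))

def aggregate(pings):
    """pings: iterable of (device_id, lat, lon) tuples.

    Returns per-cell device counts, releasing only cells above the threshold,
    so no individual location trail is recoverable from the output.
    """
    devices_per_cell = {}
    for device_id, lat, lon in pings:
        devices_per_cell.setdefault(to_cell(lat, lon), set()).add(device_id)
    return {cell: len(devices) for cell, devices in devices_per_cell.items()
            if len(devices) >= MIN_DEVICES}

if __name__ == "__main__":
    sample = [("a", 40.71, -74.00), ("b", 40.72, -74.01), ("c", 40.71, -74.02)]
    print(aggregate(sample))  # {} -- every cell here falls below the threshold
```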

If authorities need to trace the movements of identifiable people, their location trails should be obtained on the basis of an individualized showing. Gathering from companies the location trails for all users—as the Israeli government does, according to news reports—would raise far greater privacy concerns.

Establish clear rules for how data can be used, retained, and shared.

Once data is collected, the focus shifts to what the government can do with it. In counterterrorism programs, detailed rules seek to reduce the effect on individual privacy by limiting how different types of data can be used, stored, and shared.

The most basic safeguard is deleting data when it is no longer needed. Keeping data longer than needed unnecessarily exposes it to data breaches, leaks, and other potential privacy harms. Any individualized location tracking should cease, and the data should be deleted, once the individual no longer presents a danger to public health.

Poland’s new tracking app for those exposed to the coronavirus illustrates why reasonable limits are essential. The Polish government plans to retain location data collected by the app for six years. It is hard to see a public-health justification for keeping the data that long. But the story also illustrates well how a failure to consider users’ privacy can undermine a program’s efficacy: the app’s onerous terms led at least one Polish citizen to refuse to download it….(More)”.

Location Surveillance to Counter COVID-19: Efficacy Is What Matters


Susan Landau at Lawfare: “…Some government officials believe that the location information that phones can provide will be useful in the current crisis. After all, if cellphone location information can be used to track terrorists and discover who robbed a bank, perhaps it can be used to determine whether you rubbed shoulders yesterday with someone who today was diagnosed as having COVID-19, the respiratory disease that the novel coronavirus causes. But such thinking ignores the reality of how phone-tracking technology works.

Let’s look at the details of what we can glean from cellphone location information. Cell towers track which phones are in their locale—but that is a very rough measure, useful perhaps for tracking bank robbers, but not for the six-foot proximity one wants in order to determine who might have been infected by the coronavirus.

Finer precision comes from GPS signals, but these can only work outside. That means the location information supplied by your phone—if your phone and that of another person are both on—can tell you if you both went into the same subway stop around the same time. But it won’t tell you whether you rode the same subway car. And the location information from your phone isn’t fully precise. So not only can’t it reveal if, for example, you were in the same aisle in the supermarket as the ill person, but sometimes it will make errors about whether you made it into the store, as opposed to just sitting on a bench outside. What’s more, many people won’t have the location information available because GPS drains the battery, so they’ll shut it off when they’re not using it. Their phones don’t have the location information—and neither do the providers, at least not at the granularity to determine coronavirus exposure.

GPS is not the only way that cellphones can collect location information. Various other ways exist, including through the WiFi network to which a phone is connected. But while two individuals using the same WiFi network are likely to be close together inside a building, the WiFi data would typically not be able to determine whether they were in that important six-foot proximity range.

Other devices can also get within that range, including Bluetooth beacons. These are used within stores, seeking to determine precisely what people are—and aren’t—buying; they track people’s locations indoors to within inches. But like WiFi, they’re not ubiquitous, so their ability to track exposure will be limited.
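For readers unfamiliar with how radio signals translate into distance, the following minimal sketch (an illustration under stated assumptions, not code from the article) shows the standard log-distance path-loss estimate applied to a Bluetooth received-signal-strength reading. The calibration constants are hypothetical; in practice walls, bodies, and differing phone hardware swing the readings widely, which is exactly why reliably establishing six-foot proximity is hard.

```python
# Minimal sketch, not from the article: turn a Bluetooth RSSI reading into a
# rough distance estimate using a log-distance path-loss model. tx_power_dbm
# (the expected RSSI at 1 m) and the path-loss exponent are hypothetical
# calibration values; real readings vary widely indoors.
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance in meters from a received signal strength (dBm)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

if __name__ == "__main__":
    for rssi in (-55, -65, -75):
        d = estimate_distance_m(rssi)
        verdict = "within" if d <= 1.8 else "beyond"
        print(f"RSSI {rssi} dBm -> ~{d:.1f} m ({verdict} ~6 ft under these assumptions)")
```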

If the apps lead to the government’s dogging people’s whereabouts at work, at school, in the supermarket, and at church, will people still be willing to download the tracking apps that get them discounts when they’re passing the beer aisle? China follows this kind of surveillance model, but such a surveillance-state solution is highly unlikely to be acceptable in the United States. Yet anything less is unlikely to pinpoint individuals exposed to the virus.

South Korea took a different route. In precisely tracking coronavirus exposure, the country used additional digital records, including documentation of medical and pharmacy visits, history of credit card transactions, and CCTV videos, to determine where potentially exposed people had been—then followed up with interviews not just of infected people but also of their acquaintances, to determine where they had traveled.

Validating such records is labor-intensive. And for the United States, it may not be the best use of resources at this time. There’s an even more critical reason that the Korean solution won’t work for the U.S.: South Korea was able to test exposed people. The U.S. can’t do this. Currently the country has a critical shortage of test kits; patients who are not sufficiently ill to be hospitalized are not being tested. The shortage of test kits is sufficiently acute that in New York City, the current epicenter of the pandemic, the rule is, “unless you are hospitalized and a diagnosis will impact your care, you will not be tested.” With this in mind, moving to the South Korean model of tracking potentially exposed individuals won’t change the advice from federal and state governments that everyone should engage in social distancing—but employing such tracking would divert government resources and thus be counterproductive.

Currently, phone tracking in the United States is not efficacious. It cannot be unless all people are required to carry such location-tracking devices at all times; have location tracking on; and other forms of information tracking, including much wider use of CCTV cameras, Bluetooth beacons, and the like, are also in use. There are societies like this. But so far, even in the current crisis, no one is seriously contemplating the U.S. heading in that direction….(More)”.

Milwaukee’s Amani Neighborhood Uses Data to Target Traffic Safety and Build Trust


Article by Kassie Scott: “People in Milwaukee’s Amani neighborhood are using data to identify safety issues and build relationships with the police. It’s a story of community-engaged research at its best.

In 2017, the Milwaukee Police Department received a grant under the federal Byrne Criminal Justice Innovation program, now called the Community Based Crime Reduction Program, whose purpose is to bridge the gap between practitioners and researchers and advance the use of data in making communities safer. Because of its close ties in the Amani neighborhood, the Dominican Center was selected to lead this initiative, known as the Amani Safety Initiative, and they partnered with local churches, the district attorney’s office, LISC-Milwaukee, and others. To support the effort with data and coaching, the police department contracted with Data You Can Use.

Together with Data You Can Use, the Amani Safety Initiative team first implemented a survey to gauge perceptions of public safety and police legitimacy. Neighborhood ambassadors were trained (and paid) to conduct the survey themselves, going door to door to gather the information from nearly 300 of their neighbors. The ambassadors shared these results with their neighborhood during what they called “data chats.” They also printed summary survey results on door hangers, which they distributed throughout the neighborhood.

Neighbors and community organizations were surprised by the survey results. Though violent crime and mistrust in the police were commonly thought to be the biggest issues, the data showed that residents were most concerned about traffic safety. Ultimately, residents decided to post slow-down signs at intersections.

This project stands out for letting the people in the neighborhood lead the way. Neighbors collected data, shared results, and took action. The partnership between neighbors, police, and local organizations shows how people can drive decision-making for their neighborhood.

The larger story is one of social cohesion and mutual trust. Through participating in the initiative and learning more about their neighborhood, Amani neighbors built stronger relationships with the police. The police began coming to neighborhood community meetings, which helped them build relationships with people in the community and understand the challenges they face….(More)”.

Facial Recognition Software Requires Checks and Balances


David Eaves and Naeha Rashid in Policy Options: “A few weeks ago, members of the Nexus traveller identification program were notified that Canadian Border Services is upgrading its automated system from iris scanners to facial recognition technology. This is meant to simplify identification and increase efficiency without compromising security. But it also raises profound questions concerning how we discuss and develop public policies around such technology – questions that may not be receiving sufficiently open debate in the rush toward promised greater security.

Analogous to the U.S. Customs and Border Protection (CBP) program Global Entry, Nexus is a joint Canada-US border control system designed for low-risk, pre-approved travellers. Nexus does provide a public good, and there are valid reasons to improve surveillance at airports. Even before 9/11, border surveillance was an accepted annoyance, and since then checkpoint operations have become more vigilant and complex in response to the public demand for safety.

Nexus is one of the first North American government-sponsored services to adopt facial recognition, and as such it could be a pilot program that other services will follow. Left unchecked, the technology will likely become ubiquitous at North American border crossings within the next decade, and it will probably be adopted by governments to solve domestic policy challenges.

Facial recognition software is imperfect and has documented bias, but it will continue to improve and become superior to humans in identifying individuals. Given this, questions arise such as: What policies guide the use of this technology? What policies should inform future government use? In our headlong rush toward enhanced security, we risk replicating the justifications used by the private sector in its attempts to balance effectiveness, efficiency and privacy.

One key question involves citizens’ capacity to consent. Previously, Nexus members submitted to fingerprint and retinal scans – biometric markers that are relatively unique and enable government to verify identity at the border. Facial recognition technology uses visual data and seeks, analyzes, and stores identifying facial information in a database, which is then used to compare with new images and video….(More)”.
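As a rough illustration of the comparison step described above, the sketch below matches a new face "embedding" (a numeric feature vector assumed to come from a separate face-recognition model) against a stored database by cosine similarity. The 128-dimensional vectors, the 0.8 threshold, and the traveller identifiers are all hypothetical, not details of the Nexus system.

```python
import numpy as np

# Minimal sketch, for illustration only: compare a probe face embedding against
# stored embeddings by cosine similarity. How the embeddings are produced (a
# separate face-recognition model) and the 0.8 match threshold are assumptions.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, database, threshold=0.8):
    """database: dict mapping traveller_id -> stored 1-D embedding array."""
    best_id, best_score = None, -1.0
    for traveller_id, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = traveller_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = {f"traveller_{i}": rng.normal(size=128) for i in range(3)}
    probe = db["traveller_1"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
    print(best_match(probe, db))  # expected to match traveller_1 with a high score
```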

Federal Agencies Use Cellphone Location Data for Immigration Enforcement


Byron Tau and Michelle Hackman at the Wall Street Journal: “The Trump administration has bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

The location data is drawn from ordinary cellphone apps, including those for games, weather and e-commerce, for which the user has granted permission to log the phone’s location.

The Department of Homeland Security has used the information to detect undocumented immigrants and others who may be entering the U.S. unlawfully, according to these people and documents.

U.S. Immigration and Customs Enforcement, a division of DHS, has used the data to help identify immigrants who were later arrested, these people said. U.S. Customs and Border Protection, another agency under DHS, uses the information to look for cellphone activity in unusual places, such as remote stretches of desert that straddle the Mexican border, the people said.

The federal government’s use of such data for law enforcement purposes hasn’t previously been reported.

Experts say the information amounts to one of the largest known troves of bulk data being deployed by law enforcement in the U.S.—and that the use appears to be on firm legal footing because the government buys access to it from a commercial vendor, just as a private company could, though its use hasn’t been tested in court.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” said Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws.

According to federal spending contracts, a division of DHS that creates experimental products began buying location data in 2017 from Venntel Inc. of Herndon, Va., a small company that shares several executives and patents with Gravy Analytics, a major player in the mobile-advertising world.

In 2018, ICE bought $190,000 worth of Venntel licenses. Last September, CBP bought $1.1 million in licenses for three kinds of software, including Venntel subscriptions for location data. 

The Department of Homeland Security and its components acknowledged buying access to the data, but wouldn’t discuss details about how they are using it in law-enforcement operations. People familiar with some of the efforts say it is used to generate investigative leads about possible illegal border crossings and for detecting or tracking migrant groups.

CBP has said it has privacy protections and limits on how it uses the location information. The agency says that it accesses only a small amount of the location data and that the data it does use is anonymized to protect the privacy of Americans….(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.
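To make the "detection" side concrete, here is a minimal sketch of a supervised text classifier that flags likely misinformation. The toy dataset, the labels, and the TF-IDF plus logistic-regression pipeline are assumptions for illustration; the report does not prescribe any particular model, and production systems rely on far larger corpora and richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Minimal sketch, not from the report: a supervised classifier that scores
# text for likely misinformation. The tiny hand-labeled dataset and the
# TF-IDF + logistic-regression pipeline are illustrative assumptions.
train_texts = [
    "Officials confirm vaccine trial results passed peer review",
    "Health ministry publishes daily case counts with its methodology",
    "Miracle cure suppressed by doctors, share before it is deleted",
    "Secret memo proves the outbreak is a hoax the media is hiding",
]
train_labels = [0, 0, 1, 1]  # 0 = credible, 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

for text in ["New study released alongside peer-reviewed data",
             "Share this banned miracle cure before it is deleted"]:
    score = model.predict_proba([text])[0][1]
    print(f"{score:.2f} estimated misinformation probability: {text}")
```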

The Gray Spectrum: Ethical Decision Making with Geospatial and Open Source Analysis


Report by The Stanley Center for Peace and Security: “Geospatial and open source analysts face decisions in their work that can directly or indirectly cause harm to individuals, organizations, institutions, and society. Though analysts may try to do the right thing, such ethically informed decisions can be complex. This is particularly true for analysts working on issues related to nuclear nonproliferation or international security, analysts whose decisions on whether to publish certain findings could have far-reaching consequences.

The Stanley Center for Peace and Security and the Open Nuclear Network (ONN) program of One Earth Future Foundation convened a workshop to explore these ethical challenges, identify resources, and consider options for enhancing the ethical practices of geospatial and open source analysis communities.

This Readout & Recommendations brings forward observations from that workshop. It describes ethical challenges that stakeholders from relevant communities face. It concludes with a list of needs participants identified, along with possible strategies for promoting sustaining behaviors that could enhance the ethical conduct of the community of nonproliferation analysts working with geospatial and open source data.

Some Key Findings

  • A code of ethics could serve important functions for the community, including giving moral guidance to practitioners, enhancing public trust in their work, and deterring unethical behavior. Participants in the workshop saw a significant value in such a code and offered ideas for developing one.
  • Awareness of ethical dilemmas and strong ethical reasoning skills are essential for sustaining ethical practices, yet professionals in this field might not have easy access to such training. Several approaches could improve ethics education for the field overall, including starting a body of literature, developing model curricula, and offering training for students and professionals.
  • Other stakeholders—governments, commercial providers, funders, organizations, management teams, etc.—should contribute to the discussion on ethics in the community and reinforce sustaining behaviors….(More)”.

Predictive Policing Theory


Paper by Andrew Guthrie Ferguson: “Predictive policing is changing law enforcement. New place-based predictive analytic technologies allow police to predict where and when a crime might occur. Data-driven insights have been operationalized into concrete decisions about police priorities and resource allocation. In the last few years, place-based predictive policing has spread quickly across the nation, offering police administrators the ability to identify higher crime locations, to restructure patrol routes, and to develop crime suppression strategies based on the new data.

This chapter suggests that the debate about technology is better thought about as a choice of policing theory. In other words, when purchasing a particular predictive technology, police should be doing more than simply choosing the most sophisticated predictive model; instead they must first make a decision about the type of policing response that makes sense in their community. Foundational questions about whether we want police officers to be agents of social control, civic problem-solvers, or community partners lie at the heart of any choice of which predictive technology might work best for any given jurisdiction.

This chapter then examines predictive policing technology as a choice about policing theory and how the purchase of a particular predictive tool becomes – intentionally or unintentionally – a statement about police role. Interestingly, these strategic choices map on to existing policing theories. Three of the traditional policing philosophies – hot spot policing, problem-oriented policing, and community-based policing – have loose parallels with new place-based predictive policing technologies like PredPol, Risk Terrain Modeling (RTM), and HunchLab. This chapter discusses these leading predictive policing technologies as illustrative examples of how police can choose between prioritizing additional police presence, targeting environmental vulnerabilities, and/or establishing a community problem-solving approach as a different means of achieving crime reduction….(More)”.
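As a generic illustration of the place-based idea, and emphatically not the actual PredPol, RTM, or HunchLab model, the sketch below scores grid cells by a time-decayed count of past incidents and ranks them as candidate locations for extra attention. The cell size and the 30-day half-life are assumed values chosen only to make the concept concrete.

```python
from collections import defaultdict

# Minimal generic sketch of place-based scoring -- not the PredPol, RTM, or
# HunchLab algorithm. Each grid cell receives a time-decayed count of past
# incidents; higher scores suggest candidate locations for extra attention.
CELL_SIZE = 0.005       # degrees, roughly a few city blocks (assumed)
HALF_LIFE_DAYS = 30.0   # an incident's weight halves every 30 days (assumed)

def to_cell(lat, lon):
    return (round(lat // CELL_SIZE * CELL_SIZE, 4),
            round(lon // CELL_SIZE * CELL_SIZE, 4))

def hotspot_scores(incidents):
    """incidents: iterable of (lat, lon, days_ago). Returns cells ranked by score."""
    scores = defaultdict(float)
    for lat, lon, days_ago in incidents:
        scores[to_cell(lat, lon)] += 0.5 ** (days_ago / HALF_LIFE_DAYS)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    incidents = [(43.055, -87.950, 2), (43.0551, -87.9502, 10), (43.060, -87.940, 90)]
    for cell, score in hotspot_scores(incidents):
        print(cell, round(score, 2))
```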