NBER Paper by Wenzhi Ding et al: Since social distancing is the primary strategy for slowing the spread of many diseases, understanding why U.S. counties respond differently to COVID-19 is critical for designing effective public policies. Using daily data from about 45 million mobile phones to measure social distancing, we examine how counties responded to both local COVID-19 cases and statewide shelter-in-place orders. We find that social distancing increases more in response to cases and official orders in counties where individuals historically (1) engaged less in community activities and (2) demonstrated greater willingness to incur individual costs to contribute to social objectives. Our work highlights the importance of these two features of social capital—community engagement and individual commitment to societal institutions—in formulating public health policies….(More)”
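To make the paper’s empirical design concrete, here is a stylized sketch of the kind of county-by-day panel regression the abstract describes. It is an illustration under assumptions, not the authors’ actual specification: the dataset, the column names (distancing, log_cases, order, social_capital), and the functional form are all invented.

```python
# A stylized sketch (not the authors' code) of a county-by-day panel regression:
# does the social-distancing response to local cases and statewide orders vary
# with pre-existing social capital? All file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_daily_mobility.csv")  # hypothetical county-day panel

model = smf.ols(
    "distancing ~ log_cases * social_capital + order * social_capital"
    " + C(county) + C(date)",  # county and day fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})

# The interaction coefficients capture whether high-social-capital counties
# respond more strongly to cases and to shelter-in-place orders.
print(model.params.filter(like="social_capital"))
```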
What Nobel Laureate Elinor Ostrom’s early work tells us about defunding the police
Blog by Aaron Vansintjan: “…As she concluded in her autobiographical reflections published two years before she died in 2012, “For policing, increasing the size of governmental units consistently had a negative impact on the level of output generated as well as on efficiency of service provision… smaller police departments… consistently outperformed their better trained and better financed larger neighbors.”
But why did this happen? To explain this, Ostrom showed how, in small communities with small police forces, citizens are more active in monitoring their neighborhoods. Officers in smaller police forces also have more knowledge of the local area and better connections with the community.
She also found that larger, more centralized police forces have a negative effect on other public services. With a larger police bureaucracy, other local frontline professionals with less funding — social workers, mental health support centers, clinics, youth support services — have less of a say in how to respond to a community’s issues such as drug use or domestic violence. The bigger the police department, the less say citizens — especially those who are already marginalized, like migrants or Black communities — have in how policing should be conducted.
This finding became a crucial step in Ostrom’s groundbreaking work on how communities manage their resources sustainably without outside help — through deliberation, resolving conflict and setting clear community agreements. This is what she ended up becoming famous for, and what won her the Nobel Memorial Prize in Economic Sciences, placing her next to some of the foremost economists in the world.
But her research on policing shouldn’t be forgotten: It shows that, when it comes to safer communities, what matters is not more funding or bigger departments but the connections and trust between a community and its service providers….(More)”.
IRS Used Cellphone Location Data to Try to Find Suspects
Byron Tau at the Wall Street Journal: “The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones.
The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden’s (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator.
IRS CI officials told Mr. Wyden’s office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden’s aide.
Justin Cole, a spokesman for IRS CI, said it entered into a “limited contract with Venntel to test their services against the law enforcement requirements of our agency.” IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in “significant money-laundering, cyber, drug and organized-crime cases.”
The episode demonstrates a growing law enforcement interest in reams of anonymized cellphone movement data collected by the marketing industry. Government entities can try to use the data to identify individuals—which in many cases isn’t difficult with such databases.
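How easy is that identification in practice? The toy example below sketches the standard intuition: a device’s most frequent nighttime and daytime coordinates usually reveal a probable home and workplace, which together can narrow an “anonymous” advertising identifier to a single person. Everything here (the pings, the heuristic thresholds) is invented for illustration.

```python
# A minimal sketch of why "anonymized" location data is often re-identifiable.
# The ping data and the hour thresholds below are hypothetical.
from collections import Counter

pings = [  # (device_id, hour_of_day, rounded_lat, rounded_lon)
    ("abc123", 2, 38.891, -77.037),
    ("abc123", 3, 38.891, -77.037),
    ("abc123", 14, 38.899, -77.021),
    ("abc123", 15, 38.899, -77.021),
]

def likely_home_and_work(device_pings):
    """Most frequent nighttime and daytime coordinates for one device."""
    night = Counter((lat, lon) for _, h, lat, lon in device_pings if h < 6 or h > 21)
    day = Counter((lat, lon) for _, h, lat, lon in device_pings if 9 <= h <= 17)
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

home, work = likely_home_and_work(pings)
print("probable home:", home, "probable work:", work)
# Joining these two coordinates against property or employment records is
# often enough to put a name on an "anonymous" advertising identifier.
```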
It also shows that data from the marketing industry can be used as an alternative to obtaining data from cellphone carriers, a process that requires a court order. Until 2018, prosecutors needed “reasonable grounds” to seek cell tower records from a carrier. In June 2018, the U.S. Supreme Court strengthened the requirement to show probable cause that a crime has been committed before such data can be obtained from carriers….(More)”
The Bigot in the Machine: Bias in Algorithmic Systems
Article by Barbara Fister: “We are living in an “age of algorithms.” Vast quantities of information are collected, sorted, shared, combined, and acted on by proprietary black boxes. These systems use machine learning to build models and make predictions from data sets that may be out of date, incomplete, and biased. We will explore the ways bias creeps into information systems, take a look at how “big data,” artificial intelligence, and machine learning often amplify bias unwittingly, and consider how these systems can be deliberately exploited by actors for whom bias is a feature, not a bug. Finally, we’ll discuss ways we can work with our communities to create a more fair and just information environment….(More)”.
Scraping Court Records Data to Find Dirty Cops
Article by Lawsuit.org: “In the 2002 dystopian sci-fi film “Minority Report,” law enforcement can manage crime by “predicting” illegal behavior before it happens. Though fictional, the premise is intriguing and contributes to the conversation on advanced crime-fighting technology. However, today’s world may not be far off.
Data’s role in our lives and greater access to artificial intelligence are changing the way we approach topics such as research, real estate, and law enforcement. In fact, recent investigative reporting has shown that “dozens of [American] cities” are now experimenting with predictive policing technology.
Despite the current controversy surrounding predictive policing, it seems to be a growing trend that has been met with little real resistance. We may be closer to policing that mirrors the frightening depictions in “Minority Report” than we ever thought possible.
Fighting Fire With Fire
In its current state, predictive policing is defined as:
“The usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators’ identities, and methods for predicting victims of crime.”
While it might not be possible to prevent predictive policing from being employed by the criminal justice system, perhaps there are ways we can create a more level playing field: one where the powers of big data analysis aren’t just used to predict crime but also to police law enforcement itself.
Below, we’ve provided a detailed breakdown of what this potential reality could look like when applied to one South Florida county’s public databases, along with information on how citizens and communities can use public data to better understand the behaviors of local law enforcement and even individual police officers….(More)”.
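The machinery such an effort relies on is modest. Below is a hedged sketch of a court-records scraper, assuming a hypothetical county portal: the URL, query parameters, and HTML structure are placeholders rather than any real site, and any real scraping should respect a portal’s terms of use and rate limits.

```python
# A hedged sketch of the kind of scraping the article describes. The URL,
# query parameter, and HTML markup below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://courts.example-county.gov/records/search"  # hypothetical

def fetch_cases_naming_officer(officer_name):
    """Return (case_number, case_type) pairs for filings naming an officer."""
    resp = requests.get(SEARCH_URL, params={"party": officer_name}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = soup.select("table.results tr")[1:]  # skip the header row (assumed markup)
    cases = []
    for row in rows:
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 2:
            cases.append((cells[0], cells[1]))
    return cases

# Repeating this over a roster of officers and counting civil-rights or
# misconduct filings per name is one crude way to surface outliers.
```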
Defining a ‘new normal’ for data privacy in the wake of COVID-19
Jack Dunn at IAPP: “…It is revealing that our relationship with privacy is amorphous and requires additional context in light of transformative technologies, new economic realities and public health emergencies. How can we reasonably evaluate the costs and benefits of Google or Facebook sharing location data with the federal government when it has been perfectly legal for Walgreens to share access to customer data with pharmaceutical advertisers? How does aggregating and anonymizing data safeguard privacy when a user’s personal data can be revealed through other data points?
The pandemic is only revealing that we’ve yet to reach a consensus on privacy norms that will come to define the digital age.
This isn’t the first time that technology confounded notions of privacy and consumer protection. In fact, the constitutional right to privacy was born out of another public health crisis. Before 1965, 32 women died in childbirth per 100,000 live births, and 25 infants died per 1,000 live births. As a result, medical professionals and women’s rights advocates began arguing for greater access to birth control. When state legislatures sought to minimize access, birth control advocates filed lawsuits that eventually led to the Supreme Court’s seminal case regarding the right to privacy, Griswold v. Connecticut.…
Today, there is growing public concern over the way in which consumer data is used to consolidate economic gain among the few while steering public perception among the many — particularly at a time when privacy seems to be the price for ending public health emergencies.
But the COVID-19 outbreak is also highlighting how user data has the capacity to improve consumer well-being and public health. While strict adherence to traditional notions of privacy may be ineffectual in a time of exponential technological growth, the history of our relationship to privacy and technology suggests regulatory policies can strike a balance between otherwise competing interests….(More)”.
How to Sustain Your Activism Against Police Brutality Beyond this Moment
Article by Bethany Gordon: “…Despite the haunting nature of these details and the different features of this moment, I am worried that empathetic voices lifting up this cause will quiet too soon for lasting change to occur. But it doesn’t have to happen this way. Gaining a better understanding of the empathy we feel in these moments of awareness and advocacy can help us take a more behaviorally sustainable approach.
Empathy is a complex psychological phenomenon, describing eight distinct ways that we respond to one another’s experiences and emotions, but most commonly defined in the dictionary as “the ability to understand and share the feelings of another.” Using this broader definition, scholars and activists have debated how effective empathy is as a tool for behavior change—particularly when it comes to fighting racism. Paul Bloom argues that empathy allows our bias to drive our decision-making, bell hooks states that empathy is not a promising avenue to systemic racial change, and Alisha Gaines analyzes how an overemphasis on racial empathy in a 1944 landmark study, “An American Dilemma,” led to a blindness about the impact of systemic and institutional racial barriers. This more general understanding and application of empathy has not been an effective aid to fighting systemic oppression and has led to a lot of (well-meaning?) blackface.
A more nuanced understanding of empathy—and its related concepts—may help us use it more effectively in the fight against racism. There are two strains of empathy that are relevant to the George Floyd protests and can help us better understand (and possibly change) our response: empathic distress and empathic concern, also known as compassion.
Empathic distress is a type of empathy we feel when we are disturbed by witnessing another’s suffering. Empathic distress is an egocentric response—a reaction that places our own well-being at its center. When we’re motivated to act through empathic distress, our ultimate goal is to alleviate our own suffering. This may mean we take action to help another person, but it could also mean we distract ourselves from their suffering.
Compassion is a type of empathy that is other-oriented. Compassion operates when you feel for another person rather than being distressed by their suffering, thereby making your ultimate goal about fixing the actual problem….(More)”.
IoT Security Is a Mess. Privacy ‘Nutrition’ Labels Could Help
Lily Hay Newman at Wired: “…Given that IoT security seems unlikely to magically improve anytime soon, researchers and regulators are rallying behind a new approach to managing IoT risk. Think of it as nutrition labels for embedded devices.
At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is to shed light on a device’s security posture but also explain how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties.
“In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous,” says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. “It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers.”
Nutrition labels on packaged foods have a certain amount of standardization around the world, but they’re still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber. So the CMU researchers focused a lot of their efforts on making their IoT label as transparent and accessible as possible. To that end, they included both a primary and secondary layer to the label. The primary label is what would be printed on device boxes. To access the secondary label, you could follow a URL or scan a QR code to see more granular information about a device….(More)”.
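One way to picture the two-layer design is as structured data behind the printed label. The sketch below is our own illustration, not the CMU team’s published schema (their label generator defines the real format); every field name is an assumption.

```python
# A rough sketch of the two-layer label idea as data. Field names are our own
# illustration, not the CMU team's actual schema.
import json

primary_label = {  # printed on the device box
    "security_updates_until": "2025-06",
    "sensors": ["microphone", "motion"],
    "data_shared_with_third_parties": True,
    "details_url": "https://example.com/iot-label/cam-01",  # or a QR code
}

secondary_label = {  # behind the URL/QR code: more granular disclosures
    "data_collected": ["audio clips", "device logs"],
    "retention": "90 days",
    "purposes": ["functionality", "analytics"],
    "user_controls": ["delete recordings", "opt out of analytics"],
}

print(json.dumps({"primary": primary_label, "secondary": secondary_label}, indent=2))
```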
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly)
Reflection Document by The GovLab: “Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive influence has been made starkly visible, especially on Black people. Many people are hurting. Their rage and suffering stem from centuries of exclusion and from being subject to repeated bias and violence. Across the country, there have been protests decrying racial injustice. Activists have called upon the government to condemn bigotry and racism, to act against injustice, to address systemic and growing inequality.
Institutions need to take meaningful action to address such demands. Though racism is not experienced in the same way by all communities of color, policymakers must respond to the anxieties and apprehensions of Black people as well as those of communities of color more generally. This work will require institutions and individuals to reflect on how they may be complicit in perpetuating structural and systematic inequalities and harm and to ask better questions about the inequities that exist in society (laid bare in both recent acts of violence and in racial disadvantages in health outcomes during the ongoing COVID-19 crisis). This work is necessary but unlikely to be easy. As Rashida Richardson, Director of Policy Research at the AI Now Institute at NYU, notes:
“Social and political stratifications also persist and worsen because they are embedded into our social and legal systems and structures. Thus, it is difficult for most people to see and understand how bias and inequalities have been automated or operationalized over time.”
We believe progress can be made, at least in part, through responsible data access and analysis, including increased availability of (disaggregated) data through data collaboration. Of course, data is only one part of the overall picture, and we make no claims that data alone can solve such deeply entrenched problems. Nonetheless, data can have an impact by making inequalities resulting from racism more quantifiable and inaction less excusable.
…Prioritizing any of these topics will also require increased community engagement and participatory agenda setting. Likewise, we are deeply conscious that data can have a negative as well as positive impact and that technology can perpetuate racism when designed and implemented without the input and participation of minority communities and organizations. While our report here focuses on the promise of data, we need to remain aware of the potential to weaponize data against vulnerable and already disenfranchised communities. In addition, (hidden) biases in data collected and used in AI algorithms, as well as in a host of other areas across the data life cycle, will only exacerbate racial inequalities if not addressed….(More)”
ALSO: The piece is supplemented by a crowdsourced listing of Data-Driven Efforts to Address Racial Inequality.
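As a minimal illustration of what “making inequality quantifiable” can mean in practice, disaggregation can be as simple as reporting an outcome by group rather than as a single overall rate. The toy data and column names below are invented:

```python
# A minimal sketch of disaggregation: the aggregate rate can hide a gap
# that a group-level breakdown exposes. The dataset here is invented.
import pandas as pd

df = pd.DataFrame({
    "race": ["Black", "Black", "white", "white", "Latino", "Latino"],
    "approved": [0, 1, 1, 1, 0, 1],  # e.g., loan or benefit decisions
})

overall = df["approved"].mean()
by_group = df.groupby("race")["approved"].mean()

print(f"overall approval rate: {overall:.0%}")
print(by_group)  # the single overall rate obscures group-level disparities
```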
Terms of Disservice: How Silicon Valley is Destructive by Design
Book by Dipayan Ghosh on “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.
Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.
In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?
In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how we ensure that the Internet works for everyone, not just Silicon Valley….(More)”.