Paper by Cass Sunstein: “Do people benefit from food labels? When? By how much? Public officials face persistent challenges in answering these questions. In various nations, they use four different approaches: they refuse to do so on the ground that quantification is not feasible; they engage in breakeven analysis; they project end-states, such as economic savings or health outcomes; and they estimate willingness-to-pay for the relevant information. Each of these approaches runs into strong objections. In principle, the willingness-to-pay question has important advantages. But for those who ask that question, there is a serious problem. In practice, people often lack enough information to give a sensible answer to the question of how much they would be willing to pay for (more) information. People might also suffer from behavioral biases (including present bias and optimistic bias). And when preferences are labile or endogenous, even an informed and unbiased answer to the willingness-to-pay question may fail to capture the welfare consequences, because people may develop new tastes and values as a result of information….(More)”.
Paper by Tammy M. Frisby and Jorge L. Contreras: “Since 2013, federal research-funding agencies have been required to develop and implement broad data sharing policies. Yet agencies today continue to grapple with the mechanisms necessary to enable the sharing of a wide range of data types, from genomic and other -omics data to clinical and pharmacological data to survey and qualitative data. In 2016, the National Cancer Institute (NCI) launched the ambitious $1.8 billion Cancer Moonshot Program, which included a new Public Access and Data Sharing (PADS) Policy applicable to funding applications submitted on or after October 1, 2017. The PADS Policy encourages the immediate public release of published research results and data and requires all Cancer Moonshot grant applicants to submit a PADS plan describing how they will meet these goals. We reviewed the PADS plans submitted with approximately half of all funded Cancer Moonshot grant applications in fiscal year 2018, and found that a majority did not address one or more elements required by the PADS Policy. Many such plans made no reference to the PADS Policy at all, and several referenced obsolete or outdated National Institutes of Health (NIH) policies instead. We believe that these omissions arose from a combination of insufficient education and outreach by NCI concerning its PADS Policy, directed both at potential grant applicants and at NCI’s own program staff and external grant reviewers. We recommend that other research funding agencies heed these findings as they develop and roll out new data sharing policies….(More)”.
Podcast Episode by Jill Lepore: “In 1966, just as the foundations of the Internet were being imagined, the federal government considered building a National Data Center. It would be a centralized federal facility to hold computer records from each federal agency, in the same way that the Library of Congress holds books and the National Archives holds manuscripts. Proponents argued that it would help regulate and compile the vast quantities of data the government was collecting. Quickly, though, fears about privacy, government conspiracies, and government ineptitude buried the idea. But now, that National Data Center looks like a missed opportunity to create rules about data and privacy before the Internet took off. And in the absence of government action, corporations have made those rules themselves….(More)”.
Press Release: “The Governance Lab (The GovLab), an action research center at New York University Tandon School of Engineering, with the support of the Henry Luce Foundation, announced the creation of The Data Assembly. Beginning in New York City, the effort will explore how communities perceive the risks and benefits of data re-use for COVID-19. Understanding that policymakers often lack information about the concerns of different stakeholders, The Data Assembly’s deliberations will inform the creation of a responsible data re-use framework to guide the use of data and technology at the city and state level to fight COVID-19’s many consequences.
The Data Assembly will hold deliberations with civil rights organizations, key data holders and policymakers, and the public at large. Consultations with these stakeholders will take place through a series of remote engagements, including surveys and an online town hall meeting. This work will allow the project to consider the perspectives of people from different strata of society and how they might exercise some control over the flow of data.
After the completion of these data re-use deliberations, The Data Assembly will create a path forward for using data responsibly to solve public challenges. The first phases of the project will commence in New York City, seeking to engage with city residents and their leaders on data governance issues.
“Data is increasingly the primary format for sharing information to understand crises and plan recovery efforts; empowering everyone to better understand how data is collected and how it should be used is paramount,” said Adrienne Schmoeker, Director of Civic Engagement & Strategy and Deputy Chief Analytics Officer at the NYC Mayor’s Office of Data Analytics. “We look forward to learning from the insights gathered by the GovLab through The Data Assembly work they are conducting in New York City.”…(More)”.
Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….
The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.
Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.
The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.
When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.
“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.
In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces….(More)“.
About: “The Data Dividend Project is a movement dedicated to taking back control of our personal data: our data is our property, and if we allow companies to use it, we should get paid for it. The DDP is the brainchild of former presidential candidate Andrew Yang. Its primary objective is to establish and enforce data property rights under laws such as the California Consumer Privacy Act (CCPA), which went into effect on January 1, 2020.
Every day, people generate data simply by going about the business of living in an ever-connected, digital world. Unbeknownst to most people, technology companies are tracking their every move online, extracting this data, and then buying and selling it for big money. The sale and resale of consumer data is called data brokering, which is itself a $200 billion industry.
For example, technology companies can extract location data from your mobile phone and sell it to advertisers who can then turn around and post local ads to you in real time. Until recently, the data collector – in this case, the technology company – was deemed to own the data. As the owner, the technology company could sell that data and profit handsomely. Meanwhile, you generated the data but received no share of those profits. DDP plans to change that.
Until this year, you, as the American consumer, had little recourse against technology companies who were profiting off your data without your consent or knowledge. Now, under the CCPA, Californians are endowed with a collection of unalienable data rights: the right to know what information is being collected on you, the right to delete that information, and the right to opt-out from technology companies collecting your data. These rights, however, are ignored and abused by technology companies. And unfortunately, individual consumers don’t have the leverage to be able to go up against these companies. That’s where DDP comes in….(More)“
Press Release: “The Network Advertising Initiative (NAI) released privacy Best Practices for its members to follow if they use data collected for Tailored Advertising or Ad Delivery and Reporting for non-marketing purposes, such as sharing with research institutions, public health agencies, or law enforcement entities.
“Ad tech companies have data that can be a powerful resource for the public good if they follow this set of best practices for consumer privacy,” said Leigh Freund, NAI President and CEO. “During the COVID-19 pandemic, we’ve seen the opportunity for substantial public health benefits from sharing aggregate and de-identified location data.”
The NAI Code of Conduct – the industry’s premier self-regulatory framework for privacy, transparency, and consumer choice – covers data collected and used for Tailored Advertising or Ad Delivery and Reporting. The NAI Code has long addressed certain non-marketing uses of data collected for Tailored Advertising and Ad Delivery and Reporting by prohibiting any eligibility uses of such data, including uses for credit, insurance, healthcare, and employment decisions.
The NAI has always firmly believed that data collected for advertising purposes should not have a negative effect on consumers in their daily lives. However, over the past year, novel data uses have been introduced, especially during the recent health crisis. In the case of opted-in data such as Precise Location Information, a company may determine a user would benefit from more detailed disclosure in a just-in-time notice about non-marketing uses of the data being collected….(More)”.
NBER Paper by Wenzhi Ding et al: “Since social distancing is the primary strategy for slowing the spread of many diseases, understanding why U.S. counties respond differently to COVID-19 is critical for designing effective public policies. Using daily data from about 45 million mobile phones to measure social distancing, we examine how counties responded to both local COVID-19 cases and statewide shelter-in-place orders. We find that social distancing increases more in response to cases and official orders in counties where individuals historically (1) engaged less in community activities and (2) demonstrated greater willingness to incur individual costs to contribute to social objectives. Our work highlights the importance of these two features of social capital—community engagement and individual commitment to societal institutions—in formulating public health policies….(More)”
Blog by Aaron Vansintjan: “…As she concluded in her autobiographical reflections published two years before she died in 2012, “For policing, increasing the size of governmental units consistently had a negative impact on the level of output generated as well as on efficiency of service provision… smaller police departments… consistently outperformed their better trained and better financed larger neighbors.”
But why did this happen? To explain this, Ostrom showed how, in small communities with small police forces, citizens are more active in monitoring their neighborhoods. Officers in smaller police forces also have more knowledge of the local area and better connections with the community.
She also found that larger, more centralized police forces also have a negative effect on other public services. With a larger police bureaucracy, other local frontline professionals with less funding — social workers, mental health support centers, clinics, youth support services — have less of a say in how to respond to a community’s issues such as drug use or domestic violence. The bigger the police department, the less citizens — especially those that are already marginalized, like migrants or Black communities — have a say in how policing should be conducted.
This finding became a crucial step in Ostrom’s groundbreaking work on how communities manage their resources sustainably without outside help — through deliberation, resolving conflict and setting clear community agreements. This is what she ended up becoming famous for, and what won her the Nobel Memorial Prize in Economic Sciences, placing her next to some of the foremost economists in the world.
But her research on policing shouldn’t be forgotten: It shows that, when it comes to safer communities, what matters is not more funding or larger services, but the connections and trust between the community and the service provider….(More)”.
Byron Tau at the Wall Street Journal: “The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones.
The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden’s (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator.
IRS CI officials told Mr. Wyden’s office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden’s aide.
Justin Cole, a spokesman for IRS CI, said it entered into a “limited contract with Venntel to test their services against the law enforcement requirements of our agency.” IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in “significant money-laundering, cyber, drug and organized-crime cases.”
The episode demonstrates a growing law enforcement interest in reams of anonymized cellphone movement data collected by the marketing industry. Government entities can try to use the data to identify individuals—which in many cases isn’t difficult with such databases.
It also shows that data from the marketing industry can be used as an alternative to obtaining data from cellphone carriers, a process that requires a court order. Until 2018, prosecutors needed “reasonable grounds” to seek cell tower records from a carrier. In June 2018, the U.S. Supreme Court strengthened the requirement to show probable cause a crime has been committed before such data can be obtained from carriers….(More)”