Paper by Eva M. Krockow et al: “Antibiotic overprescribing is a global challenge contributing to rising levels of antibiotic resistance and mortality. We test a novel approach to antibiotic stewardship. Capitalising on the concept of “wisdom of crowds”, which states that a group’s collective judgement often outperforms the average individual, we test whether pooling treatment durations recommended by different prescribers can improve antibiotic prescribing. Using international survey data from 787 expert antibiotic prescribers, we run computer simulations to test the performance of the wisdom of crowds by comparing three data aggregation rules across different clinical cases and group sizes. We also identify patterns of prescribing bias in recommendations about antibiotic treatment durations to quantify current levels of overprescribing. Our results suggest that pooling the treatment recommendations (using the median) could improve guideline compliance in groups of three or more prescribers. Implications for antibiotic stewardship and the general improvement of medical decision making are discussed. Clinical applicability is likely to be greatest in the context of hospital ward rounds and larger, multidisciplinary team meetings, where complex patient cases are discussed and existing guidelines provide limited guidance….(More)”.
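The median-pooling rule described in the abstract can be illustrated with a toy simulation. This sketch uses invented numbers, not the paper's survey data: each prescriber's recommended duration is drawn with an upward (overprescribing) bias around a hypothetical 7-day guideline, and groups of increasing size are pooled with the median. Larger groups land closer to the guideline on average.

```python
import random
import statistics

random.seed(0)
GUIDELINE = 7.0  # hypothetical guideline duration, in days (invented)

def recommendation():
    # One prescriber's recommendation: noisy, biased toward longer courses
    return max(1.0, random.gauss(GUIDELINE + 1.5, 3.0))

def trial(group_size, n=5_000):
    # Mean absolute deviation from the guideline when a group's
    # recommendations are pooled with the median
    errs = []
    for _ in range(n):
        group = [recommendation() for _ in range(group_size)]
        errs.append(abs(statistics.median(group) - GUIDELINE))
    return statistics.mean(errs)

for k in (1, 3, 5, 9):
    print(f"group of {k}: mean deviation from guideline {trial(k):.2f} days")
```

Note that median pooling mainly averages away idiosyncratic noise; a shared systematic bias (here, +1.5 days) survives pooling, which is consistent with the paper's point that aggregation improves compliance rather than eliminating overprescribing outright.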
Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities
Paper by Lydia X. Z. Brown, Michelle Richardson, Ridhi Shetty, and Andrew Crawford: “Governments are increasingly turning to algorithms to determine whether and to what extent people should receive crucial benefits for programs like Medicaid, Medicare, unemployment, and Social Security Disability. Billed as a way to increase efficiency and root out fraud, these algorithm-driven decision-making tools are often implemented without much public debate and are incredibly difficult to understand once underway. Reports from people on the ground confirm that the tools are frequently reducing and denying benefits, often with unfair and inhumane results.
Benefits recipients are challenging these tools in court, arguing that flaws in the programs’ design or execution violate their due process rights, among other claims. These cases are some of the few active courtroom challenges to algorithm-driven decision-making, producing important precedent about people’s right to notice, explanation, and other procedural due process safeguards when algorithm-driven decisions are made about them. As the legal and policy world continues to recognize the outsized impact of algorithm-driven decision-making in various aspects of our lives, public benefits cases provide important insights into how such tools can operate; the risks of errors in design and execution; and the devastating human toll when tools are adopted without effective notice, input, oversight, and accountability.
This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court. It draws on direct interviews with attorneys who have litigated these cases and plaintiffs who sought to vindicate their rights in court – in some instances suing not only for themselves, but on behalf of similarly situated people. The attorneys work in legal aid offices, civil rights litigation shops, law school clinics, and disability protection and advocacy offices. The cases cover a range of benefits issues and have netted mixed results.
People with disabilities experience disproportionate and particular harm because of unjust algorithm-driven decision-making, and we have attempted to center disabled people’s stories and cases in this paper. As disabled people fight for rights inside and outside the courtroom on a wide range of issues, we focus on litigation and highlight the major legal theories for challenging improper algorithm-driven benefit denials in the U.S.
The good news is that in some cases, plaintiffs are successfully challenging improper adverse benefits decisions with Constitutional, statutory, and administrative claims. But like other forms of civil rights and impact litigation, the bad news is that relief can be temporary and is almost always delayed. Litigation must therefore work in tandem with the development of new processes driven by people who require access to public assistance and whose needs are centered in these processes. We hope this contribution informs not only the development of effective litigation, but a broader public conversation about the thoughtful design, use, and oversight of algorithm-driven decision-making systems….(More)”.
Open data in public libraries: Gauging activities and supporting ambitions
Paper by Kaitlin Fender Throgmorton, Bree Norlander and Carole L. Palmer: “As the open data movement grows, public libraries must assess if and how to invest resources in this new service area. This paper reports on a recent survey on open data in public libraries across Washington state, conducted by the Open Data Literacy project (ODL) in collaboration with the Washington State Library. Results document interests and activity in open data across small, medium, and large libraries in relation to traditional library services and priorities. Libraries are particularly active in open data through reference services and are beginning to release their own library data to the public. While capacity and resource challenges hinder progress for some, many libraries, large and small, are making progress on new initiatives, including strategic collaborations with local government agencies. Overall, the level and range of activity suggest that Washington state public libraries of all sizes recognize the value of open data for their communities, with a groundswell of libraries moving beyond ambition to action as they develop new services through evolution and innovation….(More)”.
A qualitative study of big data and the opioid epidemic: recommendations for data governance
Paper by Elizabeth A. Evans, Elizabeth Delorme, Karl Cyr & Daniel M. Goldstein: “The opioid epidemic has enabled rapid and unsurpassed use of big data on people with opioid use disorder to design initiatives to battle the public health crisis, generally without adequate input from impacted communities. Efforts informed by big data are saving lives, yielding significant benefits. Uses of big data may also undermine public trust in government and cause other unintended harms….
We conducted focus groups and interviews in 2019 with 39 big data stakeholders (gatekeepers, researchers, patient advocates) who had interest in or knowledge of the Public Health Data Warehouse maintained by the Massachusetts Department of Public Health.
Concerns regarding big data on opioid use are rooted in potential privacy infringements due to linkage of previously distinct data systems, increased profiling and surveillance capabilities, limitless lifespan, and lack of explicit informed consent. Also problematic is the inability of affected groups to control how big data are used, the potential of big data to increase stigmatization and discrimination of those affected despite data anonymization, and uses that ignore or perpetuate biases. Participants support big data processes that protect and respect patients and society, ensure justice, and foster patient and public trust in public institutions. Recommendations for ethical big data governance offer ways to narrow the big data divide (e.g., prioritize health equity, set off-limits topics/methods, recognize blind spots), enact shared data governance (e.g., establish community advisory boards), cultivate public trust and earn social license for big data uses (e.g., institute safeguards and other stewardship responsibilities, engage the public, communicate the greater good), and refocus ethical approaches.
Using big data to address the opioid epidemic poses ethical concerns which, if unaddressed, may undermine its benefits. Findings can inform guidelines on how to conduct ethical big data governance and in ways that protect and respect patients and society, ensure justice, and foster patient and public trust in public institutions….(More)”
Consumer Reports Study Finds Marketplace Demand for Privacy and Security
Press Release: “American consumers are increasingly concerned about privacy and data security when purchasing new products and services, which may be a competitive advantage to companies that take action towards these consumer values, a new Consumer Reports study finds.
The new study, “Privacy Front and Center” from CR’s Digital Lab with support from Omidyar Network, looks at the commercial benefits for companies that differentiate their products based on privacy and data security. The study draws from a nationally representative CR survey of 5,085 adult U.S. residents conducted in February 2020, a meta-analysis of 25 years of public opinion studies, and a conjoint analysis that seeks to quantify how consumers weigh privacy and security in their hardware and software purchasing decisions.
“This study shows that raising the standard for privacy and security is a win-win for consumers and the companies,” said Ben Moskowitz, the director of the Digital Lab at Consumer Reports. “Given the rapid proliferation of internet connected devices, the rise in data breaches and cyber attacks, and the demand from consumers for heightened privacy and security measures, there’s an undeniable business case for companies to invest in creating more private and secure products.”
Here are some of the key findings from the study:
- According to CR’s February 2020 nationally representative survey, 74% of consumers are at least moderately concerned about the privacy of their personal data.
- Nearly all Americans (96%) agree that more should be done to ensure that companies protect the privacy of consumers.
- A majority of smart product owners (62%) worry about potential loss of privacy when buying them for their home or family.
- The privacy- and security-conscious consumer class seems to include more men and people of color.
- Experiencing a data breach correlates with a higher willingness to pay for privacy, and 30% of Americans have experienced one.
- Of the Android users who switched to iPhones, 32% indicated doing so because of Apple’s perceived privacy or security benefits relative to Android….(More)”.
The State of Digital Democracy Isn’t As Dire As It Seems
Richard Gibson at the Hedgehog Review: “American society is prone, political theorist Langdon Winner wrote in 2005, to “technological euphoria,” each bout of which is inevitably followed by a period of letdown and reassessment. Perhaps in part for this reason, reviewing the history of digital democracy feels like watching the same movie over and over again. Even Winner’s point has that quality: He first made it in the mid-eighties and has repeated it in every decade since. In the same vein, Warren Yoder, longtime director of the Public Policy Center of Mississippi, responded to the Pew survey by arguing that we have reached the inevitable “low point” with digital technology—as “has happened many times in the past with pamphleteers, muckraking newspapers, radio, deregulated television.” (“Things will get better,” Yoder cheekily adds, “just in time for a new generational crisis beginning soon after 2030.”)
So one threat the present techlash poses is to obscure the ways that digital technology in fact serves many of the functions the visionaries imagined. We now take for granted the vast array of “Gov Tech”—meaning internal government digital upgrades—that makes our democracy go. We have become accustomed to the numerous government services that citizens can avail themselves of with a few clicks, a process spearheaded by the Clinton-Gore administration. We forget how revolutionary the “Internet campaign” of Howard Dean was at the 2004 Democratic primaries, establishing the Internet-based model of campaigning that all presidential candidates use to coordinate volunteer efforts and conduct fundraising, in both cases pulling new participants into the democratic process.
An honest assessment of the current state of digital democracy would acknowledge that the good jostles with the bad and the ugly. Social media has become the new hotspot for Rheingold’s “disinformocracy.” The president’s toxic tweeting continues, though Twitter has attempted recently to provide more oversight. At the same time, digital media have played a conspicuous role in the protests following George Floyd’s death, from the phone used to record his murder to the apps and Google docs used by the organizers of protests. The protests, too, have sparked fresh debate about facial recognition software (rightly one of the major concerns in the Pew report), leading Amazon to announce in June that it was “pausing” police use of its facial recognition software for one year. The city of Boston has made a similar move. Senator Sherrod Brown’s Data Accountability and Transparency Act of 2020, now circulating in draft form, would also limit the federal government’s use of “facial surveillance technology.”
We thus need to avoid summary judgments at this still-early date in the ongoing history of digital democracy. In a superb research paper on “The Internet and Engaged Citizenship” commissioned by the American Academy of Arts and Sciences last year, the political scientist David Karpf wisely concludes that the incredible velocity of “Internet Time” befuddles our attempts to state flatly what has or hasn’t happened to democratic practices and participation in our times. The 2016 election has rightly put many observers on guard. Yet there is a danger in living headline-by-headline. We must not forget how volatile the tech scene remains. That fact leads to Karpf’s hopeful conclusion: “The Internet of 2019 is not a finished product. The choices made by technologists, investors, policy-makers, lawyers, and engaged citizens will all shape what the medium becomes next.” The same can be said about digital technology in 2020: The landscape is still evolving….(More)”.
The ambitious effort to piece together America’s fragmented health data
Nicole Wetsman at The Verge: “From the early days of the COVID-19 pandemic, epidemiologist Melissa Haendel knew that the United States was going to have a data problem. There didn’t seem to be a national strategy to control the virus, and cases were springing up in sporadic hotspots around the country. With such a patchwork response, nationwide information about the people who got sick would probably be hard to come by.
Other researchers around the country were pinpointing similar problems. In Seattle, Adam Wilcox, the chief analytics officer at UW Medicine, was reaching out to colleagues. The city was the first US COVID-19 hotspot. “We had 10 times the data, in terms of just raw testing, than other areas,” he says. He wanted to share that data with other hospitals, so they would have that information on hand before COVID-19 cases started to climb in their area. Everyone wanted to get as much data as possible in the hands of as many people as possible, so they could start to understand the virus.
Haendel was in a good position to help make that happen. She’s the chair of the National Center for Data to Health (CD2H), a National Institutes of Health program that works to improve collaboration and data sharing within the medical research community. So one week in March, just after she’d started working from home and pulled her 10th grader out of school, she started trying to figure out how to use existing data-sharing projects to help fight this new disease.
The solution Haendel and CD2H landed on sounds simple: a centralized, anonymous database of health records from people who tested positive for COVID-19. Researchers could use the data to figure out why some people get very sick and others don’t, how conditions like cancer and asthma interact with the disease, and which treatments end up being effective.
But in the United States, building that type of resource isn’t easy. “The US healthcare system is very fragmented,” Haendel says. “And because we have no centralized healthcare, that makes it also the case that we have no centralized healthcare data.” Hospitals, citing privacy concerns, don’t like to give out their patients’ health data. Even if hospitals agree to share, they all use different ways of storing information. At one institution, the classification “female” could go into a record as one, and “male” could go in as two — and at the next, they’d be reversed….(More)”.
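The coding mismatch Haendel describes is the kind of problem a shared data model solves: each site's local codes are translated into one common vocabulary before records are pooled. A minimal sketch, with invented site names, codebooks, and records (real harmonization efforts rely on standardized common data models rather than hand-written maps like this):

```python
# Hypothetical per-site codebooks: the two sites use opposite numeric
# codes for sex, as in the example above. All values here are invented.
SITE_CODEBOOKS = {
    "hospital_a": {1: "female", 2: "male"},
    "hospital_b": {1: "male", 2: "female"},  # reversed relative to A
}

def harmonize(record, site):
    """Translate a site-local record into the shared representation."""
    codebook = SITE_CODEBOOKS[site]
    out = dict(record)  # copy, leaving the site-local record untouched
    out["sex"] = codebook[record["sex"]]
    return out

pooled = [
    harmonize({"patient": "a-001", "sex": 1}, "hospital_a"),
    harmonize({"patient": "b-042", "sex": 1}, "hospital_b"),
]
# Both records now carry shared string labels instead of site-local codes
print(pooled)
```

The same local code (`1`) correctly maps to "female" at one site and "male" at the other, which is exactly what naive pooling without the translation step would get wrong.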
Science Philanthropy and Societal Responsibility: A Match Made for the 21st Century
Blog by Evan S. Michelson: “The overlapping crises the world has experienced in 2020 make clear that resources from multiple sectors — government, private sector, and philanthropy — need to be deployed at multiple scales to better address societal challenges. In particular, science philanthropy has stepped up, helping to advance COVID-19 vaccine development, identify solutions to climate change, and make the tools of scientific inquiry more widely available.
As I write in my recently published book, Philanthropy and the Future of Science and Technology (Routledge, 2020), this linkage between science philanthropy and societal responsibility is one that needs to be continually strengthened and advanced as global challenges become more intertwined and as the relationship between science and society becomes more complex. In fact, science philanthropies have an important, yet often overlooked, role in raising the profile of the societal responsibility of research. One way to better understand the role science philanthropies can and should play in society is to draw on the responsible research and innovation (RRI) framework, a concept developed by scholars from fields such as science & technology policy and science & technology studies. Depending on its configuration, the RRI framework has roughly three core dimensions: anticipatory research that is forward-looking and in search of new discoveries, deliberative and inclusive approaches that better engage and integrate members of the public with the research process, and the adoption of reflexive and responsive dispositions by funders (along with those conducting research) to ensure that societal and public values are accounted for and integrated at the outset of a research effort.
Philanthropies that fund research can more explicitly consider this perspective — even just a little bit — when making their funding decisions, thereby helping to better infuse whatever support they provide for individuals, institutions, and networks with attention to broader societal concerns. For instance, doing so not only highlights the need for science philanthropies to identify and support high-quality early career researchers who are pursuing new avenues of science and technology research, but it also raises considerations of diversity, equity, and inclusion as equally important decision-making criteria for funding. The RRI framework also suggests that foundations working in science and technology should not only help to bring together networks of individual scholars and their host institutions, but that the horizon of such collaborations should be actively extended to include practitioners, decision-makers, users, and communities affected by such investigations. Philanthropies can take a further step and reflexively apply these perspectives to how they operate, how they set their strategies and grantmaking priorities, or even in how they directly manage scientific research infrastructure, as some philanthropic institutions have already begun to do….(More)”.
Evaluating the fake news problem at the scale of the information ecosystem
Paper by Jennifer Allen, Baird Howland, Markus Mobius, David Rothschild and Duncan J. Watts: “Fake news,” broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online with serious consequences for democracy. Using a unique multimode dataset that comprises a nationally representative sample of mobile, desktop, and television consumption, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online. Third, fake news comprises only 0.15% of Americans’ daily media diet. Our results suggest that the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news or the avoidance of news altogether than in overt fakery….(More)”.
Behavioral nudges reduce failure to appear for court
Paper by Alissa Fishbane, Aurelie Ouss and Anuj K. Shah: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses….(More)”