Paper by Satchit Balsari, Mathew V. Kiang, and Caroline O. Buckee: “…In recent years, large-scale streams of digital data on medical needs, population vulnerabilities, physical and medical infrastructure, human mobility, and environmental conditions have become available in near-real time. Sophisticated analytic methods for combining them meaningfully are being developed and are rapidly evolving. However, the translation of these data and methods into improved disaster response faces substantial challenges. The data exist but are not readily accessible to hospitals and response agencies. The analytic pipelines to rapidly translate them into policy-relevant insights are lacking, and there is no clear designation of responsibility or mandate to integrate them into disaster-mitigation or disaster-response strategies. Building these integrated translational pipelines that use data rapidly and effectively to address the health effects of natural disasters will require substantial investments, and these investments will, in turn, rely on clear evidence of which approaches actually improve outcomes. Public health institutions face some ongoing barriers to achieving this goal, but promising solutions are available….(More)”
The U.S. Is Getting a Crash Course in Scientific Uncertainty
Apoorva Mandavilli at the New York Times: “When the coronavirus surfaced last year, no one was prepared for it to invade every aspect of daily life for so long, so insidiously. The pandemic has forced Americans to wrestle with life-or-death choices every day of the past 18 months — and there’s no end in sight.
Scientific understanding of the virus changes by the hour, it seems. The virus spreads only by close contact or on contaminated surfaces, then turns out to be airborne. The virus mutates slowly, but then emerges in a series of dangerous new forms. Americans don’t need to wear masks. Wait, they do.
At no point in this ordeal has the ground beneath our feet seemed so uncertain. In just the past week, federal health officials said they would begin offering booster shots to all Americans in the coming months. Days earlier, those officials had assured the public that the vaccines were holding strong against the Delta variant of the virus, and that boosters would not be necessary.
As early as Monday, the Food and Drug Administration is expected to formally approve the Pfizer-BioNTech vaccine, which has already been given to scores of millions of Americans. Some holdouts found it suspicious that the vaccine was not formally approved yet somehow widely dispensed. For them, “emergency authorization” has never seemed quite enough.
Americans are living with science as it unfolds in real time. The process has always been fluid, unpredictable. But rarely has it moved at this speed, leaving citizens to confront research findings as soon as they land at the front door, a stream of deliveries that no one ordered and no one wants.
Is a visit to my ailing parent too dangerous? Do the benefits of in-person schooling outweigh the possibility of physical harm to my child? Will our family gathering turn into a superspreader event?
Living with a capricious enemy has been unsettling even for researchers, public health officials and journalists who are used to the mutable nature of science. They, too, have frequently agonized over the best way to keep themselves and their loved ones safe.
But to frustrated Americans unfamiliar with the circuitous and often contentious path to scientific discovery, public health officials have seemed at times to be moving the goal posts and flip-flopping, or misleading, even lying to, the country.
Most of the time, scientists are “edging forward in a very incremental way,” said Richard Sever, assistant director of Cold Spring Harbor Laboratory Press and a co-founder of two popular websites, bioRxiv and medRxiv, where scientists post new research.
“There are blind alleys that people go down, and a lot of the time you kind of don’t know what you don’t know.”
Biology and medicine are particularly demanding fields. Ideas are evaluated for years, sometimes decades, before they are accepted….(More)”.
The Secret Bias Hidden in Mortgage-Approval Algorithms
An investigation by The Markup: “…has found that lenders in 2019 were more likely to deny home loans to people of color than to White people with similar financial characteristics—even when we controlled for newly available financial factors that the mortgage industry for years has said would explain racial disparities in lending.
Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates.
In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.
The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score.
The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials, it highlighted new, devastating ones.
We found that lenders gave fewer loans to Black applicants than White applicants even when their incomes were high—$100,000 a year or more—and their debt ratios were the same. In fact, high-earning Black applicants with less debt were rejected more often than high-earning White applicants with more debt….(More)”
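The Markup's actual analysis is a regression over millions of HMDA records holding 17 factors constant; as a loose, self-contained illustration of the underlying idea, here is a toy sketch (all numbers invented, not The Markup's data) of comparing denial odds between two groups while holding one financial factor, debt-to-income ratio, constant by stratifying applicants into matched buckets:

```python
# Toy sketch (hypothetical data, not The Markup's): compare denial odds
# for two groups while holding a financial factor constant by stratifying
# applicants into matched debt-to-income (DTI) buckets.

def odds(denied, total):
    """Odds of denial: denied / approved."""
    approved = total - denied
    return denied / approved

# (group, dti_bucket): (denials, applications) -- invented numbers
data = {
    ("white", "low_dti"):  (50, 1000),
    ("black", "low_dti"):  (90, 1000),
    ("white", "high_dti"): (200, 1000),
    ("black", "high_dti"): (320, 1000),
}

def stratified_odds_ratios(data, group, baseline):
    """Odds ratio of denial (group vs. baseline) within each stratum."""
    buckets = {b for (_, b) in data}
    return {b: odds(*data[(group, b)]) / odds(*data[(baseline, b)])
            for b in sorted(buckets)}

ratios = stratified_odds_ratios(data, "black", "white")
for bucket, r in ratios.items():
    print(f"{bucket}: Black applicants face {r:.2f}x the denial odds of White applicants")
```

An odds ratio above 1 within a stratum means the disparity persists even among applicants with comparable finances, which is the pattern the investigation reports surviving its controls.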
How local governments are scaring tech companies
Ben Brody at Protocol: “Congress has failed to regulate tech, so states and cities are stepping in with their own approaches to food delivery apps, AI regulation and, yes, privacy. Tech doesn’t like what it sees….
Andrew Rigie said it isn’t worth waiting around for tech regulation in Washington.
“New York City is a restaurant capital of the world,” Rigie told Protocol. “We need to lead on these issues.”
Rigie, executive director of the New York City Hospitality Alliance, has pushed for New York City’s new laws on food delivery apps such as Uber Eats. His group supported measures to make permanent a cap on the service fees the apps charge to restaurants, to ban the apps from listing eateries without permission and to require them to share customer information with restaurants that ask for it.
While Rigie’s official purview is dining in the Big Apple, his belief that local government should lead on regulating tech companies where Washington hasn’t is increasingly common.
“It wouldn’t be a surprise if lawmakers elsewhere seek to implement similar policies,” Rigie said. “Some of it could potentially come from the federal government, but New York City can’t wait for the federal government to maybe act.”
New York is not the only city to take action. While the Federal Trade Commission has faced calls to regulate third-party food delivery apps at a national level, San Francisco was first to pass a permanent fee cap for them in June.
Food apps are just a microcosm highlighting the patchworks of local-level regulation that are developing, or are already a fact of life, for tech. These regulatory patchworks occur when state and local governments move ahead of Congress to pass their own, often divergent, laws and rules. So far, states and municipalities are racing ahead of the feds on issues such as cybersecurity, municipal broadband, content moderation, gig work, the use of facial recognition, digital taxes, mobile app store fees and consumer rights to repair their own devices, among others.
Many in tech became familiar with the idea when the California Consumer Privacy Act passed in 2018, making it clear more states would follow suit, although the possibility has popped up throughout modern tech policy history on issues such as privacy requirements on ISPs, net neutrality and even cybersecurity breach notification.
Many patchworks reflect the stance of advocates, consumers and legislators that Washington has simply failed to do its job on tech. The resulting uncompromising or inconsistent approaches by local governments also have tech companies worried enough to push Congress to overrule states and establish one uniform U.S. standard.
“With a bit of a vacuum at the federal level, states are looking to step in, whether that’s on content moderation, whether that’s on speech on platforms, antitrust and anticompetitive conduct regulation, data privacy,” said April Doss, executive director of Georgetown University’s Institute for Technology Law and Policy. “It is the whole bundle of issues.”…(More)”
Abundance: On the Experience of Living in a World of Information Plenty
Book by Pablo J. Boczkowski: “The book examines the experience of living in a society that has more information available to the public than ever before. It focuses on the interpretations, emotions, and practices of dealing with this abundance in everyday life. Drawing upon extensive fieldwork and survey research conducted in Argentina, the book inquires into the role of cultural and structural factors that mediate between the availability of information and the actual consequences for individuals, media, politics, and society. Providing the first book-length account of the topic in the Global South, it concludes that the experience of information abundance is tied to an overall unsettling of society, a reconstitution of how we understand and perform our relationships with others, and a twin depreciation of facts and appreciation of fictions….(More)”.
The Future of Digital Surveillance
Book by Yong Jin Park: “Are humans hard-wired to make good decisions about managing their privacy in an increasingly public world? Or are we helpless victims of surveillance through our use of invasive digital media? Exploring the chasm between the tyranny of surveillance and the ideal of privacy, this book traces the origins of personal data collection in digital technologies including artificial intelligence (AI) embedded in social network sites, search engines, mobile apps, the web, and email. The Future of Digital Surveillance argues against a technologically deterministic view—digital technologies by nature do not cause surveillance. Instead, the shaping of surveillance technologies is embedded in a complex set of individual psychology, institutional behaviors, and policy principles….(More)”
Mathematicians are deploying algorithms to stop gerrymandering
Article by Siobhan Roberts: “The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.
It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.
In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022….(More)”.
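The open-source tools the article refers to typically work by ensemble analysis: generate a large collection of neutral district maps, score the same votes under each, and flag an enacted plan whose outcome sits far in the tail of that distribution. Real implementations use MCMC samplers that respect contiguity and population balance; the following is only a toy sketch with invented numbers, where "maps" are random equal-size shuffles of precincts:

```python
import random

# Toy ensemble/outlier sketch (invented data): score the same votes under
# many randomly drawn district maps to get a distribution of seat outcomes.
# An enacted plan far in the tail of that distribution is a red flag for
# gerrymandering. Real tools sample contiguous, population-balanced maps
# via MCMC; here we simply shuffle precincts into equal-size districts.

random.seed(0)

# Party A's vote share in each of 12 equal-population precincts (invented).
precincts = [0.62, 0.58, 0.55, 0.53, 0.51, 0.50,
             0.49, 0.47, 0.45, 0.42, 0.40, 0.38]
DISTRICTS = 4  # three precincts per district

def seats_won(assignment):
    """Districts where party A's mean vote share exceeds 50%."""
    seats = 0
    for d in range(DISTRICTS):
        shares = [precincts[i] for i, dist in enumerate(assignment) if dist == d]
        if sum(shares) / len(shares) > 0.5:
            seats += 1
    return seats

def random_plan():
    """A random equal-size partition of precincts into districts."""
    order = list(range(len(precincts)))
    random.shuffle(order)
    plan = [0] * len(precincts)
    for rank, i in enumerate(order):
        plan[i] = rank % DISTRICTS
    return plan

ensemble = [seats_won(random_plan()) for _ in range(2000)]
proposed = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]  # a hypothetical enacted plan
share = sum(s == seats_won(proposed) for s in ensemble) / len(ensemble)
print(f"Enacted plan wins {seats_won(proposed)} seats for party A;"
      f" {share:.1%} of random maps produce the same seat count")
```

The key point the excerpt makes survives even in this toy: the vote totals are fixed, yet the seat outcome varies with the map, and the ensemble supplies a baseline for judging whether a particular map is an outlier.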
Privacy Tradeoffs: Who Should Make Them, and How?
Paper by Jane R. Bambauer: “Privacy debates are contentious in part because we have not reached a broadly recognized cultural consensus about whether interests in privacy are like most other interests that can be traded off in utilitarian, cost-benefit terms, or if instead privacy is different—fundamental to conceptions of dignity and personal liberty. Thus, at the heart of privacy debates is an unresolved question: is privacy just another interest that can and should be bartered, mined, and used in the economy, or is it different?
This question identifies and isolates a wedge between those who hold essentially utilitarian views of ethics (and who would see many data practices as acceptable) and those who hold views of natural and fundamental rights (for whom common data mining practices are either never acceptable or, at the very least, never acceptable without significant participation and consent of the subject).
This essay provides an intervention of a purely descriptive sort. First, I lay out several candidates for ethical guidelines that might legitimately undergird privacy law and policy. Only one of the ethical models (the natural right to sanctuary) can track the full scope and implications of fundamental rights-based privacy laws like the GDPR.
Second, the project contributes to the field of descriptive ethics by using a vignette experiment to discover which of the various ethical models people actually do seem to hold and abide by. The vignette study uses a factorial design to help isolate the roles of various factors that may contribute to the respondents’ gauge of what an ethical firm should or should not do in the context of personal data use as well as two other non-privacy-related contexts. The results can shed light on whether privacy-related ethics are different and distinct from business ethics more generally. They also illuminate which version(s) of “good” and “bad” share broad support and deserve to be reflected in privacy law or business practice.
The results of the vignette experiment show that on balance, Americans subscribe to some form of utilitarianism, although a substantial minority subscribe to a natural right to sanctuary approach. Thus, consent and prohibitions of data practices are appropriate where the likely risks to some groups (most importantly, data subjects, but also firms and third parties) outweigh the benefits….(More)”
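A factorial vignette design of the kind the paper describes crosses every level of every factor to produce the full set of scenario variants, then randomizes which variant each respondent sees, so each factor's effect on ethical judgments can be isolated. The factors and levels below are invented for illustration, not the paper's actual instrument:

```python
import itertools
import random

# Sketch of a factorial vignette design (factors and levels invented,
# not the paper's actual instrument). Crossing every level of every
# factor yields the full vignette set; each respondent is shown one
# variant at random, letting the analysis isolate each factor's effect.

factors = {
    "data_type":   ["browsing history", "location data", "purchase records"],
    "consent":     ["explicit opt-in", "buried in terms of service"],
    "beneficiary": ["the data subject", "the firm", "third parties"],
}

# Full factorial: one vignette per combination of levels (3 x 2 x 3 = 18).
vignettes = [dict(zip(factors, combo))
             for combo in itertools.product(*factors.values())]
print(f"{len(vignettes)} vignettes in the full design")

random.seed(1)
respondent_vignette = random.choice(vignettes)
print("One respondent sees:", respondent_vignette)
```

Because levels vary independently across vignettes, differences in respondents' judgments can be attributed to individual factors rather than to any single scenario as a whole.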
The Myth of the Laboratories of Democracy
Paper by Charles Tyler and Heather Gerken: “A classic constitutional parable teaches that our federal system of government allows the American states to function as “laboratories of democracy.” This tale has been passed down from generation to generation, often to justify constitutional protections for state autonomy from the federal government. But scholars have failed to explain how state governments manage to overcome numerous impediments to experimentation, including resource scarcity, free-rider problems, and misaligned incentives.
This Article maintains that the laboratories account is missing a proper appreciation for the coordinated networks of third-party organizations (such as interest groups, activists, and funders) that often fuel policy innovation. These groups are the real laboratories of democracy today, as they perform the lion’s share of tasks necessary to enact new policies; they create incentives that motivate elected officials to support their preferred policies; and they mobilize the power of the federal government to change the landscape against which state experimentation occurs. If our federal system of government seeks to encourage policy experimentation, this insight has several implications for legal doctrine. At a high level of generality, courts should endeavor to create ground rules for regulating competition between political networks, rather than continuing futile efforts to protect state autonomy. The Article concludes by sketching the outlines of this approach in several areas of legal doctrine, including federal preemption of state law, conditional spending, and the anti-commandeering principle….(More)”
Philanthropy Can Help Communities Weed Out Inequity in Automated Decision Making Tools
Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants using housing vouchers.
The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.
Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.
How these tools are built, used, and governed will help shape the opportunities of everyday citizens, for good or ill.
Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn more about what needs to be done to weed out bias and inequities in automated decision-making tools — and recently produced a report about how to harness their potential to promote equity and social good.
Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:
Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess their fairness….(More)”