Forecasting hospital-level COVID-19 admissions using real-time mobility data


Paper by Brennan Klein et al.: “For each of the COVID-19 pandemic waves, hospitals have had to plan for deploying surge capacity and resources to manage large but transient increases in COVID-19 admissions. While considerable effort has gone into predicting regional trends in COVID-19 cases and hospitalizations, there are far fewer successful tools for producing accurate hospital-level forecasts. At the same time, anonymized mobility data collected from mobile phones proved to correlate well with case counts during the first two waves of the pandemic (spring 2020, and fall 2020 through winter 2021). In this work, we show how mobility data can bolster hospital-specific COVID-19 admission forecasts for five hospitals in Massachusetts during the initial COVID-19 surge. The model’s high predictive capability was achieved by combining anonymized, aggregated mobile-device data on users’ contact patterns, commuting volume, and mobility range with COVID-19 hospitalization and test-positivity data. We conclude that mobility-informed forecasting models can increase the lead time of accurate predictions for individual hospitals, giving managers valuable time to strategize how best to allocate resources to manage forthcoming surges…(More)”.
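The paper's actual models are more sophisticated, but the core idea, letting a lagged mobility signal inform an admissions forecast, can be sketched with an ordinary least-squares fit. Everything below is illustrative: the 14-day lag, 7-day horizon, and single mobility feature are assumptions for the sketch, not the authors' specification.

```python
import numpy as np

def forecast_admissions(admissions, mobility, lag=14, horizon=7):
    """Predict admissions `horizon` days ahead from today's admissions
    and a mobility signal observed `lag` days earlier (least squares)."""
    X, y = [], []
    for t in range(lag, len(admissions) - horizon):
        X.append([1.0, admissions[t], mobility[t - lag]])  # intercept + features
        y.append(admissions[t + horizon])
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    t = len(admissions) - 1
    return coef @ np.array([1.0, admissions[t], mobility[t - lag]])
```

In practice one would use several mobility features (contact patterns, commuting volume, mobility range) alongside test-positivity data, and validate the forecasts out of sample.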

The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”
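The kind of query the judge ran, pulling every dated example of a word from a corpus and reading it in context, is a standard keyword-in-context (KWIC) search. A minimal sketch, assuming the corpus is available as simple (year, text) pairs (COHA itself is a licensed resource with its own search interface):

```python
def kwic(corpus, term, start_year, end_year, width=30):
    """Return (year, snippet) keyword-in-context hits for `term`
    within a year range. `corpus` is an iterable of (year, text) pairs."""
    hits = []
    for year, text in corpus:
        if not (start_year <= year <= end_year):
            continue
        lower = text.lower()
        i = lower.find(term)
        while i != -1:
            # keep `width` characters of context on each side of the hit
            hits.append((year, text[max(0, i - width): i + len(term) + width]))
            i = lower.find(term, i + 1)
    return hits
```

A researcher (or judge) would then read the snippets to classify each use, e.g. "actively making clean" versus "keeping clean".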

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.

Americans’ Views of Government: Decades of Distrust, Enduring Support for Its Role


Pew Research: “Americans remain deeply distrustful of and dissatisfied with their government. Just 20% say they trust the government in Washington to do the right thing just about always or most of the time – a sentiment that has changed very little since former President George W. Bush’s second term in office.

[Chart: Low public trust in federal government has persisted for nearly two decades]

The public’s criticisms of the federal government are many and varied. Some are familiar: Just 6% say the phrase “careful with taxpayer money” describes the federal government extremely or very well; another 21% say this describes the government somewhat well. A comparably small share (only 8%) describes the government as being responsive to the needs of ordinary Americans.

The federal government gets mixed ratings for its handling of specific issues. Evaluations are highly positive in some respects, including for responding to natural disasters (70% say the government does a good job of this) and keeping the country safe from terrorism (68%). However, only about a quarter of Americans say the government has done a good job managing the immigration system and helping people get out of poverty (24% each). And the share giving the government a positive rating for strengthening the economy has declined 17 percentage points since 2020, from 54% to 37%.

Yet Americans’ unhappiness with government has long coexisted with their continued support for government having a substantial role in many realms. And when asked how much the federal government does to address the concerns of various groups in the United States, there is a widespread belief that it does too little on issues affecting many of the groups asked about, including middle-income people (69%), those with lower incomes (66%) and retired people (65%)…(More)”.

Aligning Artificial Intelligence with Humans through Public Policy



Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially.
As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
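The Essay's case-study system (predicting a bill's relevance to a company) is not publicly specified here. As a loose illustration of the general technique, and not the authors' method, a bag-of-words cosine similarity gives a crude relevance score between a bill's text and a company's business description:

```python
import math
from collections import Counter

def cosine_relevance(bill_text, company_text):
    """Bag-of-words cosine similarity between two texts, in [0, 1]."""
    a = Counter(bill_text.lower().split())
    b = Counter(company_text.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

A real system would rely on learned text representations and firm-specific training data rather than raw word overlap.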

In this small Va. town, citizens review police like Uber drivers


Article by Emily Davies: “Chris Ford stepped on the gas in his police cruiser and rolled down Gold Cup Drive to catch the SUV pushing 30 mph in a 15 mph zone. Eleven hours and 37 minutes into his shift, the corporal was ready for his first traffic stop of the day.

“Look at him being sneaky,” Ford said, his blue lights flashing on a quiet road in this small town where a busy day could mean animals escaped from a local slaughterhouse.

Ford parked, walked toward the SUV and greeted the man who had ignored the speed limit at exactly the wrong time.

“I was doing 15,” said the driver, a Black man in a mostly White neighborhood of a mostly White town.

The officer took his license and registration back to the cruiser.

“Every time I pull over someone of color, they’re standoffish with me. Like, ‘Here’s a White police officer, here we go again,’ ” Ford, 56, said. “So I just try to be nice.”

Ford knew the stop would be scrutinized — and not just by the reporter who was allowed to ride along on his shift.

After every significant encounter with residents, officers in Warrenton are required to hand out a QR code, which is on the back of their business card, asking for feedback on the interaction. Through a series of questions, citizens can use a star-based system to rate officers on their communication, listening skills and fairness. The responses are anonymous and can be completed any time after the interaction to encourage people to give honest assessments. The program, called Guardian Score, is supposed to give power to those stopped by police in a relationship that has historically felt one-sided — and to give police departments a tool to evaluate their force on more than arrests and tickets.
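Guardian Score's internals are not public, but the mechanics described (anonymous, star-based ratings of communication, listening, and fairness, aggregated per officer) can be sketched as follows. The class and field names here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

DIMENSIONS = ("communication", "listening", "fairness")  # assumed categories

class FeedbackStore:
    """Anonymous 1-5 star ratings keyed by officer badge ID."""

    def __init__(self):
        self._ratings = defaultdict(list)

    def submit(self, badge_id, **stars):
        entry = {d: stars[d] for d in DIMENSIONS}
        if any(not 1 <= v <= 5 for v in entry.values()):
            raise ValueError("stars must be 1-5")
        # no respondent identity is stored, preserving anonymity
        self._ratings[badge_id].append(entry)

    def summary(self, badge_id):
        """Average score per dimension, for use in officer evaluations."""
        entries = self._ratings[badge_id]
        return {d: round(mean(e[d] for e in entries), 2) for d in DIMENSIONS}
```

A department could then fold these per-dimension averages into evaluations alongside the traditional arrest and ticket counts.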

“If we started to measure how officers are treating community members, we realized we could actually infuse this into the overall evaluation process of individual officers,” said Burke Brownfeld, a founder of Guardian Score and a former police officer in Alexandria. “The definition of doing a good job could change. It would also include: How are your listening skills? How fairly are you treating people based on their perception?”…(More)”.

How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.

How science could aid the US quest for environmental justice


Jeff Tollefson at Nature: “…The network of US monitoring stations that detect air pollution catches only broad trends across cities and regions, and isn’t equipped for assessing air quality at the level of streets and neighbourhoods. So environmental scientists are exploring ways to fill the gaps.

In one project funded by NASA, researchers are developing methods to assess street-level pollution using measurements of aerosols and other contaminants from space. When the team trained its tools on Washington DC, the scientists found [1] that sections in the city’s southeast, which have a larger share of Black residents, are exposed to much higher levels of fine-soot pollution than wealthier — and whiter — areas in the northwest of the city, primarily because of the presence of major roads and bus depots in the southeast.

[Chart: Cumulative burden. Air-pollution levels tend to be higher in poorer and predominantly Black neighbourhoods of Washington DC. Source: Ref. 1]

The detailed pollution data painted a more accurate picture of the burden on a community that also lacks access to high-quality medical facilities and has high rates of cardiovascular disorders and other diseases. The results help to explain a more than 15-year difference in life expectancy between predominantly white neighbourhoods and some predominantly Black ones.

The analysis underscores the need to consider pollution and socio-economic data in parallel, says Susan Anenberg, director of the Climate and Health Institute at the George Washington University in Washington DC and co-leader of the project. “We can actually get neighbourhood-scale observations from space, which is quite incredible,” she says, “but if you don’t have the demographic, economic and health data as well, you’re missing a very important piece of the puzzle.”

Other projects, including one from technology company Aclima, in San Francisco, California, are focusing on ubiquitous, low-cost sensors that measure air pollution at the street level. Over the past few years, Aclima has deployed a fleet of vehicles to collect street-level data on air pollutants such as soot and greenhouse gases across 101 municipalities in the San Francisco Bay area. Their data have shown that air-pollution levels can vary as much as 800% from one neighbourhood block to the next.

Working directly with disadvantaged communities and environmental regulators in California, as well as with other states and localities, the company provides pollution monitoring on a subscription basis. It also offers the use of its screening tool, which integrates a suite of socio-economic data and can be used to assess cumulative impacts…(More)”.

I tried to read all my app privacy policies. It was 1 million words.


Article by Geoffrey A. Fowler: “…So here’s an idea: Let’s abolish the notion that we’re supposed to read privacy policies.

I’m not suggesting companies shouldn’t have to explain what they’re up to. Maybe we call them “data disclosures” for the regulators, lawyers, investigative journalists and curious consumers to pore over.

But to protect our privacy, the best place to start is for companies to simply collect less data. “Maybe don’t do things that need a million words of explanation? Do it differently,” said Slaughter. “You can’t abuse, misuse, leverage data that you haven’t collected in the first place.”

Apps and services should only collect the information they really need to provide that service — unless we opt in to let them collect more, and it’s truly an option.

I’m not holding my breath that companies will do that voluntarily, but a federal privacy law would help. While we wait for one, Slaughter said the FTC (where Democratic commissioners recently gained a majority) is thinking about how to use its existing authority “to pursue practices — including data collection, use and misuse — that are unfair to users.”

Second, we need to replace the theater of pressing “agree” with real choices about our privacy.

Today, when we do have choices to make, companies often present them in ways that pressure us into making the worst decisions for ourselves.

Apps and websites should give us the relevant information and our choices in the moment when it matters. Twitter actually handles this kind of just-in-time notice better than many other apps and websites: by default, it doesn’t collect your exact location, and it only prompts you to share it when you ask to tag your location in a tweet.

Even better, technology could help us manage our choices. Cranor suggests that data disclosures could be coded to be read by machines. Companies already do this for financial information, and the TLDR Act would require consistent tags on privacy information, too. Then your computer could act kind of like a butler, interacting with apps and websites on your behalf.
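No standard schema for such machine-readable disclosures exists yet; the TLDR Act only proposes consistent tagging. As a hypothetical illustration (every field name below is invented), a "privacy butler" might compare a tagged disclosure against a user's stated preferences like this:

```python
import json

# Hypothetical machine-readable "data disclosure" for one app.
# The tag names are illustrative, not a published standard.
DISCLOSURE = json.loads("""
{
  "collects": ["email", "precise_location", "contacts"],
  "retention_days": 365,
  "sold_to_third_parties": false
}
""")

def violates_preferences(disclosure, user_prefs):
    """Return the collected data types the user has not allowed."""
    allowed = set(user_prefs.get("allow", []))
    return [d for d in disclosure["collects"] if d not in allowed]
```

The butler could then surface only the mismatches ("this app wants your contacts") instead of a million words of legalese.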

Picture Siri as a butler who quizzes you briefly about your preferences and then does your bidding. The privacy settings on an iPhone already let you tell all the different apps on your phone not to collect your location. For the past year, they’ve also allowed you to ask apps not to track you.

Web browsers could serve as privacy butlers, too. Mozilla’s Firefox already lets you block certain kinds of privacy invasions. Now a new technology called the Global Privacy Control is emerging that would interact with websites and instruct them not to “sell” our data. It’s grounded in California’s privacy law, which is among the toughest in the nation, though it remains to be seen how the state will enforce GPC…(More)”.
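Unlike the hypothetical butler above, Global Privacy Control is concretely specified: a browser with the signal enabled sends the HTTP request header `Sec-GPC: 1` (and exposes it to scripts as `navigator.globalPrivacyControl`). A server-side check might look like this sketch:

```python
def honors_gpc(headers):
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC proposal, the signal is the request header `Sec-GPC: 1`;
    a site subject to laws like California's would treat it as a
    do-not-sell instruction. `headers` is a dict of header name -> value.
    """
    value = {k.lower(): v for k, v in headers.items()}.get("sec-gpc")
    return value is not None and value.strip() == "1"
```

Header-name matching is case-insensitive here because HTTP header names are case-insensitive on the wire.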

Seeking data sovereignty, a First Nation introduces its own licence


Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers. 

The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.

LKFN says the current NWT-wide licensing system will still stand, but a separate system addressing specific concerns was urgently required.

In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory. 

LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form. 

“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….

Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.
The ARI application form asks researchers to:

  • share how they plan to release data, to ensure confidentiality;
  • describe their methodology; and
  • indicate which communities they expect to be affected by their work.

The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:

  • explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
  • disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
  • explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.

Cazon says the new approach will work in tandem with ARI’s system…(More)”.

The Future of Open Data: Law, Technology and Media


Book edited by Pamela Robinson and Teresa Scassa: “The Future of Open Data flows from a multi-year Social Sciences and Humanities Research Council (SSHRC) Partnership Grant project that set out to explore open government geospatial data from an interdisciplinary perspective. Researchers on the grant adopted a critical social science perspective grounded in the imperative that the research should be relevant to government and civil society partners in the field.

This book builds on the knowledge developed during the course of the grant and asks the question, “What is the future of open data?” The contributors’ insights into the future of open data combine observations from five years of research about the Canadian open data community with a critical perspective on what could and should happen as open data efforts evolve.

Each of the chapters in this book addresses different issues and each is grounded in distinct disciplinary or interdisciplinary perspectives. The opening chapter reflects on the origins of open data in Canada and how it has progressed to the present day, taking into account how the Indigenous data sovereignty movement intersects with open data. A series of chapters addresses some of the pitfalls and opportunities of open data and considers how the changing data context may impact sources of open data, limits on open data, and even liability for open data. Another group of chapters considers new landscapes for open data, including open data in the global South, the data priorities of local governments, and the emerging context for rural open data…(More)”.