Paper by Xiaobin Shen, Natasha Zhang Foutz, and Beibei Li: “Infodemics impede the efficacy of business and public policies, particularly in disastrous times when high-quality information is in the greatest demand. This research proposes a multi-faceted conceptual framework to characterize an infodemic and then empirically assesses its impact on the core mitigation policy of the latest prominent disaster, the COVID-19 pandemic. Analyzing a half million records of COVID-related news media and social media, as well as .2 billion records of location data, via a multitude of methodologies, including text mining and spatio-temporal analytics, we uncover a number of interesting findings. First, the volume of COVID information exerts an inverted-U-shaped impact on individuals’ compliance with the lockdown policy. That is, a smaller volume encourages policy compliance, whereas an overwhelming volume discourages compliance, revealing the negative ramifications of excessive information about a disaster. Second, novel information boosts policy compliance, signifying the value of offering original and distinctive, rather than redundant, information to the public during a disaster. Third, misinformation exhibits a U-shaped influence unexplored by the literature, deterring policy compliance until a larger amount surfaces, diminishing its informational value and escalating public uncertainty. Overall, these findings demonstrate the power of information technology, such as media analytics and location sensing, in disaster management. They also illuminate the significance of strategic information management during disasters and the imperative need for cohesive efforts across governments, media, technology platforms, and the general public to curb future infodemics…(More)”.
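The inverted-U and U-shaped effects described above are the kind of non-linear relationships typically tested by adding a quadratic term to a regression. As a purely illustrative sketch with simulated data (not the authors' actual specification), the curvature can be read off the sign of the squared-term coefficient:

```python
# Illustrative sketch only, with simulated data -- not the paper's actual model.
# An inverted-U relationship between information volume and policy compliance
# shows up as a negative coefficient on the squared term; a U-shape (as
# reported for misinformation) shows up as a positive one.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
volume = rng.uniform(0, 10, 500)                                      # hypothetical information volume
compliance = 2.0 * volume - 0.2 * volume**2 + rng.normal(0, 1, 500)   # simulated inverted-U outcome
df = pd.DataFrame({"volume": volume, "compliance": compliance})

model = smf.ols("compliance ~ volume + I(volume ** 2)", data=df).fit()
print(model.params)   # positive 'volume', negative 'I(volume ** 2)' => inverted U
```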
How can data stop homelessness before it starts?
Article by Andrea Danes and Jessica Chamba: “When homelessness in Maidstone, England, soared by 58% over just five years, the Borough Council sought to shift its focus from crisis response to building early-intervention and prevention capacity. Working with EY teams and our UK technology partner, Xantura, the council created and implemented a data-focused tool — called OneView — that enabled it to tackle these challenges in a new way.
Specifically, OneView’s predictive analytics and natural language generation capabilities enabled participating agencies in Maidstone to bring together their data to identify residents who were at risk of homelessness, and then to intervene before they were actually living on the street. In the initial pilot year, almost 100 households were prevented from becoming homeless, even as the COVID-19 pandemic took hold and grew. And, overall, the rate of homelessness fell by 40%.
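EY and Xantura have not published OneView's internals, so the following is only a minimal sketch, under assumed feature names and toy data, of how a household-level early-warning score of the kind described above could be built:

```python
# Minimal illustrative sketch with invented features and toy data -- not
# OneView's actual model. The idea: score households by homelessness risk so
# agencies can intervene before a crisis.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

households = pd.DataFrame({
    "rent_arrears_months":   [0, 3, 6, 1, 8, 0, 2, 5],
    "benefit_sanctions_12m": [0, 1, 2, 0, 3, 0, 0, 2],
    "prior_evictions":       [0, 0, 1, 0, 2, 0, 0, 1],
    "became_homeless":       [0, 0, 1, 0, 1, 0, 0, 1],   # historical outcome label
})

X = households.drop(columns="became_homeless")
y = households["became_homeless"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
# Rank unseen households by predicted risk so caseworkers can prioritise outreach
print(model.predict_proba(X_test)[:, 1])
```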
As evidenced by the Maidstone model, data analytics and predictive modeling will play an indispensable role in enabling us to realize a very big vision — a world in which everyone has a reliable roof over their heads.
Against that backdrop, it’s important to stress that the roadmap for preventing homelessness has to contain components beyond just better avenues for using data. It must also include shrewd approaches for dealing with complex issues such as funding, standards, governance, cultural differences and informed consent to permit the exchange of personal information, among others. Perhaps most importantly, the work needs to be championed by organizational and governmental leaders who believe transformative, systemic change is possible and are committed to achieving it.
Introducing the Smart Safety Net
To move forward, human services organizations need to look beyond modernizing service delivery to transforming it, and to evolve from integration to intuitive design. New technologies provide opportunities to truly rethink and redesign in ways that would have been impossible in the past.
A Smart Safety Net can shape a bold new future for social care. Doing so will require broad, fundamental changes at an organizational level, more collaboration across agencies, data integration and greater care co-ordination. At its heart, a Smart Safety Net entails:
- A system-wide approach to addressing the needs of each individual and family, including pooled funding that supports coordination so that, for example, users in one program are automatically enrolled in other programs for which they are eligible.
- Human-centered design that genuinely integrates the recipients of services (patients, clients, customers, etc.), as well as their experiences and insights, into the creation and implementation of policies, systems and services that affect them.
- Data-driven policy, services, workflows, automation and security to improve processes, save money and facilitate accurate, real-time decision-making, especially to advance the overarching priority of nearly every program and service: early intervention and prevention.
- Frontline case workers who are supported and empowered to focus on their core purpose. With a lower administrative burden, they are able to invest more time in building relationships with vulnerable constituents and act as “coaches” to improve people’s lives.
- Outcomes-based commissioning of services, measured against a more holistic wellbeing framework, from an ecosystem of public, private and not-for-profit providers, with government acting as system stewards and service integrators…(More)”.
Use of science in public policy: Lessons from the COVID-19 pandemic efforts to ‘Follow the Science’
Paper by Barry Bozeman: “The paper asks: ‘What can we learn from the COVID-19 pandemic about effective use of scientific and technical information (STI) in policymaking and how might the lessons be put to use?’ The paper employs the political rhetoric of ‘follow the science’ as a lens for examining contemporary concerns in the use of STI, including (1) ‘Breadth of Science Products’, the necessity of a broader concept of STI that includes the by-products of science, (2) ‘Science Dynamism’, emphasizing the uncertainty and impeachability of science, (3) ‘STI Urgency’, suggesting that STI use during widespread calamities differs from more routine applications, and (4) ‘Hyper-politicization of Science’, arguing that a step-change in the contentiousness of politics affects uses and misuses of STI. The paper concludes with a discussion of STI curation as a possible ingredient for improving effective use. With more attention to the credibility and trust of STI and to the institutional legitimacy of curators, it should prove possible to improve the effective use of STI in public policy….(More)”.
How science could aid the US quest for environmental justice
Jeff Tollefson at Nature: “…The network of US monitoring stations that detect air pollution catches only broad trends across cities and regions, and isn’t equipped for assessing air quality at the level of streets and neighbourhoods. So environmental scientists are exploring ways to fill the gaps.
In one project funded by NASA, researchers are developing methods to assess street-level pollution using measurements of aerosols and other contaminants from space. When the team trained its tools on Washington DC, the scientists found that sections in the city’s southeast, which have a larger share of Black residents, are exposed to much higher levels of fine-soot pollution than wealthier — and whiter — areas in the northwest of the city, primarily because of the presence of major roads and bus depots in the southeast.

The detailed pollution data painted a more accurate picture of the burden on a community that also lacks access to high-quality medical facilities and has high rates of cardiovascular disorders and other diseases. The results help to explain a more than 15-year difference in life expectancy between predominantly white neighbourhoods and some predominantly Black ones.
The analysis underscores the need to consider pollution and socio-economic data in parallel, says Susan Anenberg, director of the Climate and Health Institute at the George Washington University in Washington DC and co-leader of the project. “We can actually get neighbourhood-scale observations from space, which is quite incredible,” she says, “but if you don’t have the demographic, economic and health data as well, you’re missing a very important piece of the puzzle.”
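As a simple illustration of Anenberg's point (with invented numbers and column names, not the project's data), the environmental-justice signal only appears once the neighbourhood-scale pollution estimates are joined to demographic and health data:

```python
# Toy illustration with invented values -- not the NASA project's data.
import pandas as pd

pollution = pd.DataFrame({
    "tract":      ["NW-1", "NW-2", "SE-1", "SE-2"],
    "pm25_ugm3":  [6.1, 6.8, 11.9, 13.4],        # satellite-derived fine-soot estimates
})
demographics = pd.DataFrame({
    "tract":                  ["NW-1", "NW-2", "SE-1", "SE-2"],
    "pct_black_residents":    [15.0, 22.0, 78.0, 86.0],
    "life_expectancy_years":  [84.0, 83.0, 70.0, 68.0],
})

merged = pollution.merge(demographics, on="tract")
# Exposure-demographics correlations surface the disparity described above
print(merged.corr(numeric_only=True)["pm25_ugm3"])
```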
Other projects, including one from technology company Aclima, in San Francisco, California, are focusing on ubiquitous, low-cost sensors that measure air pollution at the street level. Over the past few years, Aclima has deployed a fleet of vehicles to collect street-level data on air pollutants such as soot and greenhouse gases across 101 municipalities in the San Francisco Bay area. Their data have shown that air-pollution levels can vary by as much as 800% from one neighbourhood block to the next.
Working directly with disadvantaged communities and environmental regulators in California, as well as with other states and localities, the company provides pollution monitoring on a subscription basis. It also offers the use of its screening tool, which integrates a suite of socio-economic data and can be used to assess cumulative impacts…(More)”.
Stories to Work By
Essay by William E. Spriggs: “In Charlie Chaplin’s 1936 film Modern Times, humans in a factory are reduced to adjuncts to a massive series of cogs and belts. Overlords bark commands from afar to a servant class, and Chaplin’s hapless hero is literally consumed by the machine … and then spit out by it. In the film, the bosses have all the power, and machines keep workers in check.
Modern Times’s dystopian narrative remains with us today. In particular, it is still held by many policymakers who assume that increasing technological progress, whether mechanical or informational, inevitably means that ordinary workers will lose. This view perpetuates itself when policies that could give workers more power in times of technological change are overlooked, while those that disempower workers are adopted. If we are to truly consider science policy for the future, we need to understand how this narrative about workers and technology functions, where it is misleading, and how deliberate policies can build a better world for all….
Today’s tales of pending technological dystopia—echoed in economics papers as well as in movies and news reports—blind us to the lessons we could glean from the massive disruptions of earlier periods of even greater change. Today the threat of AI is portrayed as revolutionary, and previous technological change as slow and inconsequential—but this was never the case. These narratives of technological inevitability limit the tools we have at our disposal to promote equality and opportunity.
The challenges we face today are far from insurmountable: technology is not destiny. Workers are not doomed to be Chaplin’s victim of technology with one toe caught in the gears of progress. We have choices, and the central challenge of science and technology policy for the next century will be confronting those choices head on. Policymakers should focus on the fundamental tasks of shaping how technology is deployed and enacting the economic rules we need to ensure that technology works for us all, rather than only the few….(More)”.
I tried to read all my app privacy policies. It was 1 million words.
Article by Geoffrey A. Fowler: “…So here’s an idea: Let’s abolish the notion that we’re supposed to read privacy policies.
I’m not suggesting companies shouldn’t have to explain what they’re up to. Maybe we call them “data disclosures” for the regulators, lawyers, investigative journalists and curious consumers to pore over.
But to protect our privacy, the best place to start is for companies to simply collect less data. “Maybe don’t do things that need a million words of explanation? Do it differently,” said Slaughter. “You can’t abuse, misuse, leverage data that you haven’t collected in the first place.”
Apps and services should only collect the information they really need to provide that service — unless we opt in to let them collect more, and it’s truly an option.
I’m not holding my breath that companies will do that voluntarily, but a federal privacy law would help. While we wait for one, Slaughter said the FTC (where Democratic commissioners recently gained a majority) is thinking about how to use its existing authority “to pursue practices — including data collection, use and misuse — that are unfair to users.”
Second, we need to replace the theater of pressing “agree” with real choices about our privacy.
Today, when we do have choices to make, companies often present them in ways that pressure us into making the worst decisions for ourselves.
Apps and websites should give us the relevant information and our choices in the moment when it matters. Twitter actually handles this kind of just-in-time notice better than many other apps and websites: by default, it doesn’t collect your exact location, and it prompts you to share it only when you ask to tag your location in a tweet.
Even better, technology could help us manage our choices. Cranor suggests that data disclosures could be coded to be read by machines. Companies already do this for financial information, and the TLDR Act would require consistent tags on privacy information, too. Then your computer could act kind of like a butler, interacting with apps and websites on your behalf.
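To make the "butler" idea concrete, here is a purely hypothetical sketch (the tag names are invented; the TLDR Act does not define this schema) of a machine-readable disclosure being checked against a user's preferences:

```python
# Hypothetical sketch of the "privacy butler" idea: a machine-readable
# disclosure (invented tag names, not a real standard) checked against a
# user's stated preferences before an app is allowed through.
disclosure = {
    "app": "ExampleWeather",
    "collects": ["precise_location", "device_id"],
    "shares_with_third_parties": True,
    "retention_days": 365,
}

user_preferences = {
    "never_allow": {"precise_location"},
    "allow_third_party_sharing": False,
}

def butler_verdict(disclosure: dict, prefs: dict) -> str:
    """Return 'block', 'ask', or 'allow' based on the user's preferences."""
    if set(disclosure["collects"]) & prefs["never_allow"]:
        return "block"
    if disclosure["shares_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        return "ask"
    return "allow"

print(butler_verdict(disclosure, user_preferences))  # -> "block"
```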
Picture Siri as a butler who quizzes you briefly about your preferences and then does your bidding. The privacy settings on an iPhone already let you tell all the different apps on your phone not to collect your location. For the past year, they’ve also allowed you to ask apps not to track you.
Web browsers could serve as privacy butlers, too. Mozilla’s Firefox already lets you block certain kinds of privacy invasions. Now a new technology called the Global Privacy Control is emerging that would interact with websites and instruct them not to “sell” our data. It’s grounded in California’s privacy law, which is among the toughest in the nation, though it remains to be seen how the state will enforce GPC…(More)”.
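On the receiving end, GPC reaches a site as a simple signal: the browser sends a `Sec-GPC: 1` request header (and exposes a navigator.globalPrivacyControl flag to scripts). Below is a minimal sketch of a site honouring that header; what a site is legally required to do with the signal depends on laws such as California's:

```python
# Minimal sketch of a server honouring the Global Privacy Control signal.
# The browser sends the opt-out as a `Sec-GPC: 1` request header; how a site
# must respond is governed by law (e.g. California's), not by this snippet.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        # Treat the visitor as having opted out of the sale/sharing of their data
        return "GPC received: your data will not be sold or shared."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run()
```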
Facial Expressions Do Not Reveal Emotions
Lisa Feldman Barrett at Scientific American: “Do your facial movements broadcast your emotions to other people? If you think the answer is yes, think again. This question is under contentious debate. Some experts maintain that people around the world make specific, recognizable faces that express certain emotions, such as smiling in happiness, scowling in anger and gasping with widened eyes in fear. They point to hundreds of studies that appear to demonstrate that smiles, frowns, and so on are universal facial expressions of emotion. They also often cite Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals to support the claim that universal expressions evolved by natural selection.
Other scientists point to a mountain of counterevidence showing that facial movements during emotions vary too widely to be universal beacons of emotional meaning. People may smile in hatred when plotting their enemy’s downfall and scowl in delight when they hear a bad pun. In Melanesian culture, a wide-eyed gasping face is a symbol of aggression, not fear. These experts say the alleged universal expressions just represent cultural stereotypes. To be clear, both sides in the debate acknowledge that facial movements vary for a given emotion; the disagreement is about whether there is enough uniformity to detect what someone is feeling.
This debate is not just academic; the outcome has serious consequences. Today you can be turned down for a job because a so-called emotion-reading system watching you on camera applied artificial intelligence to evaluate your facial movements unfavorably during an interview. In a U.S. court of law, a judge or jury may sometimes hand down a harsher sentence, even death, if they think a defendant’s face showed a lack of remorse. Children in preschools across the country are taught to recognize smiles as happiness, scowls as anger and other expressive stereotypes from books, games and posters of disembodied faces. And for children on the autism spectrum, some of whom have difficulty perceiving emotion in others, these teachings do not translate to better communication….Emotion AI systems, therefore, do not detect emotions. They detect physical signals, such as facial muscle movements, not the psychological meaning of those signals. The conflation of movement and meaning is deeply embedded in Western culture and in science. An example is a recent high-profile study that applied machine learning to more than six million internet videos of faces. The human raters, who trained the AI system, were asked to label facial movements in the videos, but the only labels they were given to use were emotion words, such as “angry,” rather than physical descriptions, such as “scowling.” Moreover there was no objective way to confirm what, if anything, the anonymous people in the videos were feeling in those moments…(More)”.
Citizen power mobilized to fight against mosquito borne diseases
GigaBlog: “Just out in GigaByte is the latest data release from Mosquito Alert, a citizen science system for investigating and managing disease-carrying mosquitoes; the release is part of our WHO-sponsored series on vector-borne human diseases. It presents 13,700 new database records in the Global Biodiversity Information Facility (GBIF) repository, all linked to photographs submitted by citizen volunteers and validated by entomological experts to determine whether they provide evidence of the presence of any of the mosquito vectors of top concern in Europe. This is the latest paper in a new special issue presenting biodiversity data for research on human diseases and health, incentivising data sharing to fill important species and geographic gaps. As big fans of citizen science (and Mosquito Alert), it’s great to see this new data showcased in the series.
Vector-borne diseases account for more than 17% of all infectious diseases in humans. There are large gaps in knowledge related to these vectors, and data mobilization campaigns are required to improve data coverage to help research on vector-borne diseases and human health. As part of these efforts, GigaScience Press has partnered with GBIF, supported by TDR, the Special Programme for Research and Training in Tropical Diseases hosted at the World Health Organization, to launch this “Vectors of human disease” thematic series. To incentivise the sharing of this extremely important data, Article Processing Charges have been waived to assist with the global call for novel data. This effort has already led to the release of newly digitised location data for over 600,000 vector specimens observed across the Americas and Europe.
Beyond paying credit to such a large number of volunteers, creating such a large public collection of validated mosquito images allows the dataset to be used to train machine-learning models for vector detection and classification. Sharing the data in this novel manner meant the authors of these papers had to set up a new credit system to evaluate contributions from multiple and diverse collaborators, including university researchers, entomologists, and non-academics such as independent researchers and citizen scientists. In the GigaByte paper these contributions are acknowledged through collaborative authorship for the Mosquito Alert Digital Entomology Network and the Mosquito Alert Community…(More)”.
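For readers who want to work with the records themselves, GBIF exposes them through its public occurrence API; the sketch below uses a placeholder datasetKey rather than the real identifier of the Mosquito Alert release:

```python
# Illustrative sketch: fetching occurrence records from GBIF's public API.
# The datasetKey is a placeholder, not the Mosquito Alert release's identifier.
import requests

resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={
        "datasetKey": "00000000-0000-0000-0000-000000000000",  # placeholder
        "limit": 50,
    },
    timeout=30,
)
resp.raise_for_status()

for rec in resp.json().get("results", []):
    # Each record pairs an expert-validated species name with coordinates --
    # the combination that makes the dataset useful for training vector models.
    print(rec.get("species"), rec.get("decimalLatitude"), rec.get("decimalLongitude"))
```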
Seeking data sovereignty, a First Nation introduces its own licence
Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers.
The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.
LKFN says the current NWT-wide licensing system will still stand, but that a separate system addressing its specific concerns was urgently required.
In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory.
LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form.
“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….
Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.
The ARI application form asks researchers to:
- share how they plan to release data, to ensure confidentiality;
- describe their methodology; and
- indicate which communities they expect to be affected by their work.
The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:
- explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
- disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
- explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.
Cazon says the new approach will work in tandem with ARI’s system…(More)”.
The Future of Open Data: Law, Technology and Media
Book edited by Pamela Robinson and Teresa Scassa: “The Future of Open Data flows from a multi-year Social Sciences and Humanities Research Council (SSHRC) Partnership Grant project that set out to explore open government geospatial data from an interdisciplinary perspective. Researchers on the grant adopted a critical social science perspective grounded in the imperative that the research should be relevant to government and civil society partners in the field.
This book builds on the knowledge developed during the course of the grant and asks the question, “What is the future of open data?” The contributors’ insights into the future of open data combine observations from five years of research about the Canadian open data community with a critical perspective on what could and should happen as open data efforts evolve.
Each of the chapters in this book addresses different issues and each is grounded in distinct disciplinary or interdisciplinary perspectives. The opening chapter reflects on the origins of open data in Canada and how it has progressed to the present date, taking into account how the Indigenous data sovereignty movement intersects with open data. A series of chapters address some of the pitfalls and opportunities of open data and consider how the changing data context may impact sources of open data, limits on open data, and even liability for open data. Another group of chapters considers new landscapes for open data, including open data in the global South, the data priorities of local governments, and the emerging context for rural open data…(More)”.