The Future of Nudging Will Be Personal


Essay by Stuart Mills: “Nudging, now more than a decade old as an intervention tool, has become something of a poster child for the behavioral sciences. We know that people don’t always act in their own best interest—sometimes spectacularly so—and nudges have emerged as a noncoercive way to live better in a world shaped by our behavioral foibles.

But with nudging’s maturity, we’ve also begun to understand some of the ways that it falls short. Take, for instance, research by Linda Thunström and her colleagues. They found that “successful” nudges can actually harm subgroups of a population. In their research, spendthrifts (those who spend freely) spent less when nudged, bringing them closer to optimal spending. But when given the same nudge, tightwads also spent less, taking them further from their optimal level of spending.

While a nudge might appear effective because a population benefited on average, at the individual level the story could be different. Should nudging penalize people who differ from the average just because, on the whole, a policy would benefit the population? Though individual versus population trade-offs are part and parcel of policymaking, as our ability to personalize advances through technology and data, these trade-offs seem less and less appealing….(More)”.

Building Digital Worlds: Where does GIS data come from?


Julie Stoner at Library of Congress: “Whether you’ve used an online map to check traffic conditions, a fitness app to track your jogging route, or found photos tagged by location on social media, many of us rely on geospatial data more and more each day. So what are the most common ways geospatial data is created and stored, and how do they differ from the ways we stored geographic information in the past?

A primary method for creating geospatial data is to digitize directly from scanned analog maps. After maps are georeferenced, GIS software allows a data creator to manually digitize boundaries, place points, or define areas using the georeferenced map image as a reference layer. The goal of digitization is to capture information carefully stored in the original map and translate it into a digital format. As an example, let’s explore and then digitize a section of this 1914 Sanborn Fire Insurance Map from Eatonville, Washington.

Sanborn Fire Insurance Map from Eatonville, Pierce County, Washington. Sanborn Map Company, October 1914. Geography & Map Division, Library of Congress.
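Georeferencing is the step that ties the scanned image to real-world coordinates so it can sit underneath the new vector layers being traced. A minimal sketch of that step using GDAL’s Python bindings is below; the file names, ground control points and coordinate system are hypothetical placeholders, not values from the actual Eatonville sheet.

```python
# A minimal sketch (not from the original post) of georeferencing a scanned map
# sheet with GDAL's Python bindings. File names, ground control points and the
# EPSG code are hypothetical placeholders.
from osgeo import gdal

# Ground control points: map x/y (here lon/lat in EPSG:4326), elevation, and the
# pixel/line position of the same spot on the scanned image.
gcps = [
    gdal.GCP(-122.2675, 46.8670, 0, 512, 384),
    gdal.GCP(-122.2630, 46.8670, 0, 3110, 390),
    gdal.GCP(-122.2630, 46.8640, 0, 3105, 2980),
    gdal.GCP(-122.2675, 46.8640, 0, 508, 2975),
]

# Attach the GCPs to the scan, then warp it into a georeferenced GeoTIFF that GIS
# software can display as a reference layer during digitization.
gdal.Translate("eatonville_gcps.tif", "eatonville_scan.tif",
               GCPs=gcps, outputSRS="EPSG:4326")
gdal.Warp("eatonville_georef.tif", "eatonville_gcps.tif", dstSRS="EPSG:4326")
```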

Sanborn Fire Insurance Maps were created to detail the built environment of American towns and cities through the late 19th and early 20th centuries. These information-dense maps allowed fire insurance underwriters to assess risk and write policies without needing to inspect each building in person. Sanborn maps have become incredibly valuable sources of historic information because of the rich geographic detail they store on each page.

When extracting information from analog maps, the digitizer must decide which features will be digitized and how information about those features will be stored. Behind the geometric features created through the digitization process, a table is used to store information about each feature on the map. Using the table, we can store information gleaned from the analog map, such as the name of a road or the purpose of a building. We can also quickly calculate new data, such as the length of a road segment. The data in the table can then be put to work in the visual display of the newly created digital information. This is often done through symbolization and map labels….(More)”.
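A minimal sketch of what a couple of digitized features and their attribute table might look like in GeoPandas; the street names, coordinates and projected CRS below are hypothetical stand-ins rather than values traced from the Eatonville sheet.

```python
# A minimal sketch (not from the original post) of two digitized road segments
# and their attribute table in GeoPandas. Names, coordinates and CRS are
# hypothetical; a real workflow would trace geometries over the georeferenced
# Sanborn sheet in GIS software.
import geopandas as gpd
from shapely.geometry import LineString

roads = gpd.GeoDataFrame(
    {
        "name": ["Main St", "Mill Rd"],      # attributes read off the map
        "surface": ["unpaved", "unpaved"],
        "geometry": [
            LineString([(593200, 5188400), (593450, 5188400)]),
            LineString([(593300, 5188300), (593300, 5188550)]),
        ],
    },
    geometry="geometry",
    crs="EPSG:32610",  # a projected CRS, so lengths come out in metres
)

# New data can be calculated directly from the geometry,
# such as the length of each road segment.
roads["length_m"] = roads.geometry.length
print(roads[["name", "surface", "length_m"]])
```

Because the geometries live in a projected CRS, the length calculation comes straight out of the geometry column in metres, which is the kind of new data the post describes deriving from the table.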

How Digital Trust Varies Around the World


Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi at Harvard Business Review: “As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics…(More)”.
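The excerpt does not spell out how the roughly 200 indicators are aggregated; the sketch below is only a generic illustration of how raw indicators could be normalized, grouped into the four components and rolled up into an overall ranking, with made-up economies, values and equal weights.

```python
# Generic sketch of building a composite digital-trust ranking. This is NOT the
# authors' actual method; economy names, indicator values and equal weighting
# are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame(
    {
        "security_breach_rate": [0.12, 0.30, 0.05],   # lower is better
        "page_load_seconds": [1.8, 3.2, 1.2],          # lower is better
        "survey_trust_score": [62, 48, 75],            # higher is better
        "digital_payment_usage": [0.71, 0.44, 0.88],   # higher is better
    },
    index=["Economy A", "Economy B", "Economy C"],
)

# Min-max normalize each indicator to 0-1, flipping those where lower is better.
norm = (raw - raw.min()) / (raw.max() - raw.min())
for col in ["security_breach_rate", "page_load_seconds"]:
    norm[col] = 1 - norm[col]

# Map indicators onto the four components described in the article.
components = pd.DataFrame(
    {
        "environment": norm["security_breach_rate"],
        "experience": norm["page_load_seconds"],
        "attitudes": norm["survey_trust_score"],
        "behavior": norm["digital_payment_usage"],
    }
)

# Equal-weight the components into an overall score and rank the economies.
components["overall"] = components.mean(axis=1)
print(components.sort_values("overall", ascending=False))
```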

DNA databases are too white, so genetics doesn’t help everyone. How do we fix that?


Tina Hesman Saey at ScienceNews: “It’s been two decades since the Human Genome Project first unveiled a rough draft of our genetic instruction book. The promise of that medical moon shot was that doctors would soon be able to look at an individual’s DNA and prescribe the right medicines for that person’s illness or even prevent certain diseases.

That promise, known as precision medicine, has yet to be fulfilled in any widespread way. True, researchers are getting clues about some genetic variants linked to certain conditions and some that affect how drugs work in the body. But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people.

Instead of a truly human genome that represents everyone, “what we have is essentially a European genome,” says Constance Hilliard, an evolutionary historian at the University of North Texas in Denton. “That data doesn’t work for anybody apart from people of European ancestry.”

She’s talking about more than the Human Genome Project’s reference genome. That database is just one of many that researchers are using to develop precision medicine strategies. Often those genetic databases draw on data mainly from white participants. But race isn’t the issue. The problem is that collectively, those data add up to a catalog of genetic variants that don’t represent the full range of human genetic diversity.

When people of African, Asian, Native American or Pacific Island ancestry get a DNA test to determine if they inherited a variant that may cause cancer or if a particular drug will work for them, they’re often left with more questions than answers. The results often reveal “variants of uncertain significance,” leaving doctors with too little useful information. This happens less often for people of European descent. That disparity could change if genetics included a more diverse group of participants, researchers agree (SN: 9/17/16, p. 8).

One solution is to make customized reference genomes for populations whose members die from cancer or heart disease at higher rates than other groups, for example, or who face other worse health outcomes, Hilliard suggests….(More)”.

Machine Learning Shows Social Media Greatly Affects COVID-19 Beliefs


Jessica Kent at HealthITAnalytics: “Using machine learning, researchers found that people’s biases about COVID-19 and its treatments are exacerbated when they read tweets from other users, a study published in JMIR showed.

The analysis also revealed that scientific events, like scientific publications, and non-scientific events, like speeches from politicians, equally influence health belief trends on social media.
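The excerpt does not detail the study’s machine-learning pipeline, but the general approach of classifying tweet text by the belief it expresses and then tracking how the daily mix shifts can be sketched with toy data and a simple text classifier; the tweets, labels and model choice below are illustrative assumptions.

```python
# Toy sketch of tracking health-belief trends from tweet text. Not the JMIR
# study's pipeline; tweets, labels and the classifier are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = pd.DataFrame({
    "text": [
        "masks and vaccines are backed by solid evidence",
        "this treatment is a hoax pushed by the media",
        "clinical trials show the vaccine works",
        "don't trust the official numbers, it's all exaggerated",
    ],
    "belief": ["science-aligned", "skeptical", "science-aligned", "skeptical"],
})

# Train a simple text classifier on the hand-labeled examples.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(labeled["text"], labeled["belief"])

# Apply it to a stream of new tweets with timestamps.
stream = pd.DataFrame({
    "date": pd.to_datetime(["2020-04-01", "2020-04-01", "2020-04-02", "2020-04-02"]),
    "text": [
        "new study confirms the drug helps patients",
        "the pandemic is overblown, ignore the experts",
        "trial data released today looks promising",
        "vaccines are a scam",
    ],
})
stream["belief"] = clf.predict(stream["text"])

# Daily share of each predicted belief: a simple "health belief trend".
trend = stream.groupby(["date", "belief"]).size().unstack(fill_value=0)
print(trend.div(trend.sum(axis=1), axis=0))
```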

The rapid spread of COVID-19 has resulted in an explosion of accurate and inaccurate information related to the pandemic – mainly across social media platforms, researchers noted.

“In the pandemic, social media has contributed to much of the information and misinformation and bias of the public’s attitude toward the disease, treatment and policy,” said corresponding study author Yuan Luo, chief Artificial Intelligence officer at the Institute for Augmented Intelligence in Medicine at Northwestern University Feinberg School of Medicine.

“Our study helps people to realize and re-think the personal decisions that they make when facing the pandemic. The study sends an ‘alert’ to the audience that the information they encounter daily might be right or wrong, and guides them to pick the information endorsed by solid scientific evidence. We also wanted to provide useful insight for scientists or healthcare providers, so that they can more effectively broadcast their voice to targeted audiences.”…(More)”.

How to Put Out Democracy’s Dumpster Fire


Anne Applebaum and Peter Pomerantsev in The Atlantic: “…With the wholesale transfer of so much entertainment, social interaction, education, commerce, and politics from the real world to the virtual world—a process recently accelerated by the coronavirus pandemic—many Americans have come to live in a nightmarish inversion of the Tocquevillian dream, a new sort of wilderness. Many modern Americans now seek camaraderie online, in a world defined not by friendship but by anomie and alienation. Instead of participating in civic organizations that give them a sense of community as well as practical experience in tolerance and consensus-building, Americans join internet mobs, in which they are submerged in the logic of the crowd, clicking Like or Share and then moving on. Instead of entering a real-life public square, they drift anonymously into digital spaces where they rarely meet opponents; when they do, it is only to vilify them.

Conversation in this new American public sphere is governed not by established customs and traditions in service of democracy but by rules set by a few for-profit companies in service of their needs and revenues. Instead of the procedural regulations that guide a real-life town meeting, conversation is ruled by algorithms that are designed to capture attention, harvest data, and sell advertising. The voices of the angriest, most emotional, most divisive—and often the most duplicitous—participants are amplified. Reasonable, rational, and nuanced voices are much harder to hear; radicalization spreads quickly. Americans feel powerless because they are.

In this new wilderness, democracy is becoming impossible. If one half of the country can’t hear the other, then Americans can no longer have shared institutions, apolitical courts, a professional civil service, or a bipartisan foreign policy. We can’t compromise. We can’t make collective decisions—we can’t even agree on what we’re deciding. No wonder millions of Americans refuse to accept the results of the most recent presidential election, despite the verdicts of state electoral committees, elected Republican officials, courts, and Congress. We no longer are the America Tocqueville admired, but have become the enfeebled democracy he feared, a place where each person,…(More)”.

Smart weather app helps Kenya’s herders brace for drought


Thomson Reuters Foundation: “Sitting under a low tree to escape the blazing Kenyan sun, Kaltuma Milkalkona and two young men hunch intently over the older woman’s smartphone – but they are not transfixed by the latest sports scores or a trending internet meme.

The men instead are looking at a weather alert for their village in the country’s north, sent through an app that uses weather station data to help pastoralists prepare for drought.

The myAnga app on Milkalkona’s phone showed that Merille would continue facing dry weather and that “pasture conditions (were) expected to be very poor with no grass and browse availability.”

One of the young men said he would warn his older brother, who had taken the family’s livestock to another area where there was water and pasture, not to come home yet.

Milkalkona, 42, who lives and sells clothing in the neighbouring town of Laisamis, said she often shared data from her phone with others who did not have smartphones.

“When I get the weather alerts, I usually show the people who are close to me,” she said, as well as calling others in more distant villages.

Extreme and erratic weather linked to a warming climate can be devastating for Kenya’s pastoralists, with prolonged droughts making it difficult to find enough pasture for their animals.

But armed with up-to-date weather information and advice, herders can plan ahead to ensure their livestock make it through the region’s frequent dry spells, said Frankline Agolla, co-founder of Amfratech, a Nairobi-based social enterprise that developed the myAnga app.

The app – its name means “my weather” – goes further than the weather reports anyone can get from the meteorological department by interpreting them and making recommendations to herders on the best way to protect their livelihoods.

“If there is an imminent drought, we advise them to sell their livestock early to reduce their losses,” said Agolla in an interview with the Thomson Reuters Foundation….
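The app’s actual decision logic is not described beyond this example, but a hypothetical rule-of-thumb sketch of turning a forecast and a pasture rating into an advisory might look like the following; the thresholds, condition labels and wording are assumptions for illustration, not myAnga’s real rules.

```python
# Hypothetical sketch of forecast-to-advice rules, loosely inspired by the quote
# above. Thresholds, labels and messages are illustrative assumptions only.
def advisory(rain_forecast_mm: float, pasture_condition: str) -> str:
    """Return a simple recommendation from a weekly rainfall forecast and pasture rating."""
    if rain_forecast_mm < 5 and pasture_condition == "very poor":
        return ("Imminent drought: consider selling some livestock early "
                "to reduce losses, and conserve remaining water and fodder.")
    if rain_forecast_mm < 20 and pasture_condition in ("poor", "very poor"):
        return ("Dry spell likely: move herds toward areas with better pasture "
                "and delay returning home until conditions improve.")
    return "Conditions adequate: normal grazing can continue."

# Example alert for a village like Merille facing continued dry weather.
print(advisory(rain_forecast_mm=2, pasture_condition="very poor"))
```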

The app is part of Amfratech’s Climate Livestock and Markets (CLIMARK) project, which the company aims to roll out to more than 300,000 pastoralists in Kenya over the next five years, with funding and other help from partners including the Technical Centre for Agricultural and Rural Cooperation and the Kenya Livestock Marketing Council.

The app sends out weekly weather information in English, Swahili and other languages used in northern Kenya, and users can see forecasts for areas as small as a single village, Agolla said….(More)”.

Building trust in AI systems is essential


Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.

A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology. 

Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts. 

The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.

The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”

COVID vaccination studies: plan now to pool data, or be bogged down in confusion


Natalie Dean at Nature: “More and more COVID-19 vaccines are rolling out safely around the world; just last month, the United States authorized one produced by Johnson & Johnson. But there is still much to be learnt. How long does protection last? How much does it vary by age? How well do vaccines work against various circulating variants, and how well will they work against future ones? Do vaccinated people transmit less of the virus?

Answers to these questions will help regulators to set the best policies. Now is the time to make sure that those answers are as reliable as possible, and I worry that we are not laying the essential groundwork. Our current trajectory has us on course for confusion: we must plan ahead to pool data.

Many questions remain after vaccines are approved. Randomized trials generate the best evidence to answer targeted questions, such as how effective booster doses are. But for others, randomized trials will become too difficult as more and more people are vaccinated. To fill in our knowledge gaps, observational studies of the millions of vaccinated people worldwide will be essential….

Perhaps most importantly, we must coordinate now on plans to combine data. We must take measures to counter the long-standing siloed approach to research. Investigators should be discouraged from setting up single-site studies and encouraged to contribute to a larger effort. Funding agencies should favour studies with plans for collaborating or for sharing de-identified individual-level data.

Even when studies do not officially pool data, they should make their designs compatible with others. That means up-front discussions about standardization and data-quality thresholds. Ideally, this will lead to a minimum common set of variables to be collected, which the WHO has already hammered out for COVID-19 clinical outcomes. Categories include clinical severity (such as all infections, symptomatic disease or critical/fatal disease) and patient characteristics, such as comorbidities. This will help researchers to conduct meta-analyses of even narrow subgroups. Efforts are under way to develop reporting guidelines for test-negative studies, but these will be most successful when there is broad engagement.
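A hypothetical sketch of what a minimum common record for such pooled observational studies could look like, using the severity categories mentioned above; the field names are illustrative assumptions, and a real schema would follow the WHO’s agreed definitions.

```python
# Hypothetical sketch of a harmonized participant record for pooling vaccine
# observational studies. Field names and values are illustrative, not the WHO's
# actual specification.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    INFECTION = "any infection"
    SYMPTOMATIC = "symptomatic disease"
    CRITICAL_OR_FATAL = "critical/fatal disease"

@dataclass
class ParticipantRecord:
    study_id: str                    # which contributing study the record comes from
    age_group: str                   # e.g. "65-74", harmonized across studies
    comorbidities: list[str]         # patient characteristics, standardized codes
    vaccine_product: Optional[str]   # None for unvaccinated comparators
    doses: int
    outcome: Optional[Severity]      # None if no infection was observed
    variant: Optional[str]           # circulating variant, if sequenced

# Records structured this way can be de-identified and combined across sites
# for meta-analyses of even narrow subgroups.
example = ParticipantRecord(
    study_id="site-A", age_group="65-74", comorbidities=["diabetes"],
    vaccine_product="vaccine-X", doses=2,
    outcome=Severity.SYMPTOMATIC, variant="B.1.1.7",
)
print(example)
```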

There are many important questions that will be addressed only by observational studies, and data that can be combined are much more powerful than lone results. We need to plan these studies with as much care and intentionality as we would for randomized trials….(More)”.

How One State Managed to Actually Write Rules on Facial Recognition


Kashmir Hill at The New York Times: “Though police have been using facial recognition technology for the last two decades to try to identify unknown people in their investigations, the practice of putting the majority of Americans into a perpetual photo lineup has gotten surprisingly little attention from lawmakers and regulators. Until now.

Lawmakers, civil liberties advocates and police chiefs have debated whether and how to use the technology because of concerns about both privacy and accuracy. But figuring out how to regulate it is tricky. So far, that has meant an all-or-nothing approach. City Councils in Oakland, Portland, San Francisco, Minneapolis and elsewhere have banned police use of the technology, largely because of bias in how it works. Studies in recent years by MIT researchers and the federal government found that many facial recognition algorithms are most accurate for white men, but less so for everyone else.

At the same time, automated facial recognition has become a powerful investigative tool, helping to identify child molesters and, in a recent high-profile example, people who participated in the Jan. 6 riot at the Capitol. Law enforcement officials in Vermont want the state’s ban lifted because there “could be hundreds of kids waiting to be saved.”

That’s why a new law in Massachusetts is so interesting: It’s not all or nothing. The state managed to strike a balance on regulating the technology, allowing law enforcement to harness the benefits of the tool, while building in protections that might prevent the false arrests that have happened before….(More)”.