A growing number of governments hope to clone America’s DARPA


The Economist: “Using messenger RNA to make vaccines was an unproven idea. But if it worked, the technique would revolutionise medicine, not least by providing protection against infectious diseases and biological weapons. So in 2013 America’s Defence Advanced Research Projects Agency (DARPA) gambled. It awarded a small, new firm called Moderna $25m to develop the idea. Eight years, and more than 175m doses later, Moderna’s covid-19 vaccine sits on the list of innovations for which DARPA can claim at least partial credit, alongside weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet.

It is the agency that shaped the modern world, and this success has spurred imitators. In America there are ARPAs for homeland security, intelligence and energy, as well as the original defence one. President Joe Biden has asked Congress for $6.5bn to set up a health version, which will, the president vows, “end cancer as we know it”. His administration also has plans for another, to tackle climate change. Germany has recently established two such agencies: one civilian (the Federal Agency for Disruptive Innovation, or SPRIN-D) and another military (the Cybersecurity Innovation Agency). Japan’s version is called Moonshot R&D. In Britain, a bill for an Advanced Research and Invention Agency—often referred to as UK ARPA—is making its way through parliament….(More)”.

Investing in Data Saves Lives


Mark Lowcock and Raj Shah at Project Syndicate: “…Our experience of building a predictive model, and its use by public-health officials in these countries, showed that this approach could lead to better humanitarian outcomes. But it was also a reminder that significant data challenges, regarding both gaps and quality, limit the viability and accuracy of such models for the world’s most vulnerable countries. For example, data on the prevalence of cardiovascular diseases was 4-7 years old in several poorer countries, and not available at all for Sudan and South Sudan.

Globally, we are still missing about 50% of the data needed to respond effectively in countries experiencing humanitarian emergencies. OCHA and The Rockefeller Foundation are cooperating to provide early insight into crises, during and beyond the COVID-19 pandemic. But realizing the full potential of our approach depends on the contributions of others.

So, as governments, development banks, and major humanitarian and development agencies reflect on the first year of the pandemic response, as well as on discussions at the recent World Bank Spring Meetings, they must recognize the crucial role data will play in recovering from this crisis and preventing future ones. Filling gaps in critical data should be a top priority for all humanitarian and development actors.

Governments, humanitarian organizations, and regional development banks thus need to invest in data collection, data-sharing infrastructure, and the people who manage these processes. Likewise, these stakeholders must become more adept at responsibly sharing their data through open data platforms that maintain rigorous interoperability standards.

Where data are not available, the private sector should develop new sources of information through innovative methods such as using anonymized social-media data or call records to understand population movement patterns….(More)”.

We Need to Reimagine the Modern Think Tank


Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendations—need to change their approach as well.

Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people with those degrees (lower than the rates among white people (10.5 percent) or Asian/Pacific Islanders (17 percent)). And in specific fields like Economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.

Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.

Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve would make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it’s common understanding now that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”

What Robots Can — And Can’t — Do For the Old and Lonely


Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….

So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.
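The excerpt does not spell out how such a scale is tallied. As a minimal, hypothetical sketch, the snippet below scores a six-item agree/disagree instrument by counting answers given in the "lonely" direction; the response patterns are invented for illustration, and the state study's actual instrument and scoring rules may differ.

```python
# Hypothetical sketch of scoring a six-item agree/disagree loneliness scale.
# The excerpt quotes only one statement ("I experience a general sense of
# emptiness") and does not give the study's scoring rules; here, each item
# answered in the "lonely" direction adds one point, so totals run from
# 0 (least lonely) to 6 (most lonely), and a lower follow-up total is read
# as feeling less lonely.

def loneliness_score(agree_in_lonely_direction):
    """One point per item answered in the lonely direction (list of booleans)."""
    return sum(agree_in_lonely_direction)

baseline  = [True, True, True, False, True, False]    # invented responses
follow_up = [True, False, False, False, True, False]  # invented responses
print(loneliness_score(follow_up) < loneliness_score(baseline))  # True -> "less lonely"
```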

In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.

How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.
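Epic has not published the index's internals, but the description above (a machine-learned score driven by chart variables such as breathing rate and blood potassium) follows a familiar pattern: a weighted combination of inputs mapped to a probability of deterioration. The sketch below is a minimal, hypothetical illustration of that pattern; the features, weights, and escalation threshold are assumptions made for illustration, not Epic's model.

```python
# A minimal, hypothetical sketch of a deterioration-style risk score.
# Epic's actual Deterioration Index is proprietary; the features, weights,
# and threshold below are illustrative assumptions, not the real model.
import math

# Illustrative logistic-model coefficients (assumed, not Epic's).
WEIGHTS = {
    "respiratory_rate": 0.12,   # breaths per minute
    "potassium": 0.45,          # mmol/L
    "age": 0.03,                # years
}
INTERCEPT = -8.0                # assumed baseline log-odds

def deterioration_score(chart: dict) -> float:
    """Map patient measurements to a 0-100 risk score via a logistic function."""
    log_odds = INTERCEPT + sum(WEIGHTS[k] * chart[k] for k in WEIGHTS)
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    return round(100 * probability, 1)

# Example: a hypothetical patient chart.
patient = {"respiratory_rate": 28, "potassium": 5.2, "age": 67}
score = deterioration_score(patient)
print(f"Deterioration-style score: {score}")
if score > 60:  # assumed escalation threshold
    print("Flag for ICU review")
```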

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.

Deepfake Maps Could Really Mess With Your Sense of the World


Will Knight at Wired: “Satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, triggering international condemnation and sanctions.

Other aerial images—of nuclear installations in Iran and missile sites in North Korea, for example—have had a similar impact on world events. Now, image-manipulation tools made possible by artificial intelligence may make it harder to accept such images at face value.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. Another algorithm then helps refine the performance of the first by trying to detect when an image has been manipulated….(More)”.
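The passage above describes the standard adversarial setup: one network learns to produce images with the target domain's characteristics while a second network tries to detect the fakes, and each improves against the other. The PyTorch sketch below is a hypothetical toy version of that generator/discriminator loop, not the Berkeley CycleGAN implementation; the tiny networks, the random tensors standing in for satellite tiles, and the training settings are all assumptions. A full CycleGAN adds a second generator/discriminator pair and a cycle-consistency loss so that translating an image to the other domain and back reproduces the original.

```python
# Toy sketch of the generator/discriminator idea behind CycleGAN-style image
# translation. Real CycleGAN uses two generator/discriminator pairs, deeper
# convolutional networks, and a cycle-consistency loss; everything here is
# simplified for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an image from domain A (e.g., Seattle tiles) toward domain B (e.g., Beijing tiles)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Tries to tell real domain-B images from the generator's fakes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),  # 64x64 -> 32x32
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 1),  # one real-vs-fake logit per image
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_a = torch.rand(4, 3, 64, 64)   # stand-ins for satellite tiles of city A
real_b = torch.rand(4, 3, 64, 64)   # stand-ins for satellite tiles of city B

for step in range(5):
    # 1) Discriminator learns to separate real B tiles from generated ones.
    fake_b = G(real_a).detach()
    d_loss = bce(D(real_b), torch.ones(4, 1)) + bce(D(fake_b), torch.zeros(4, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator learns to fool the discriminator.
    fake_b = G(real_a)
    g_loss = bce(D(fake_b), torch.ones(4, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```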

How European Governments Can Help Spur Innovations for the Public Good


Essay by Marieke Huysentruyt: “…The stakes are high. In many OECD countries, inequality is at its highest levels in decades, and people are taking to the streets to express their discontent and demand change (in some cases at great personal risk). Only governments—with their uniquely broad scope of functions and mandates—can spur innovations for the public good in so many different domains simultaneously. Ideally, governments will step up and act collectively. After all, so many of today’s most pressing societal problems are global problems, beyond the scope of any single nation.

We Need a New Kind of Legal Framework to Activate and Transform Dormant Knowledge Into Innovations for the Public Good

Tremendous untapped potential lies dormant in knowledge and technology that is currently used only for commercial purposes but could be put to significant social use. Consider the example of a cooling system being developed by Colruyt Group, a large Belgian retail group, to keep produce cool for up to three days without consuming electricity: such a technology could be applied elsewhere to great effect, for example to help African farmers transport their milk or distribute vaccines over long (unelectrified) distances. Colruyt Group is therefore always looking for use cases for its technology, so that it does not become dormant knowledge.

To facilitate the activation of dormant knowledge like this, we need a legal framework encouraging the development of “social impact licenses.” Such a license would allow, for example, a technology holder to grant time-bounded permission to bring a piece of intellectual property, a technology, a product, or a service to a predefined market for societal value creation at preferred rates or reduced costs. Another important step would be for EU governments to require that recipients of their innovation grants give others access to their research, so it can be leveraged for practical, social purposes. Putting these sorts of measures in place would not only influence the next generation of researchers but could also encourage businesses (which hold a great deal of intellectual property) to think more ambitiously about the positive societal impact they can make.

We Need Better Information to Activate People to Search for the Public Good

Most of us lack a clear understanding of the societal problems at hand or have flawed mental models of pressing societal issues. Complexity and ambiguity tend to put people off, so governments must provide citizens with better and more reliable information about today’s most pressing societal challenges and solutions. The circular economy, greenhouse effects, the ecological transition, the global refugee problem, for example, can be difficult to grasp, and for this reason, access to non-partisan information is all the more important.

Sharing information about feasible solutions (as well as about solutions that have been tested and abandoned) can hugely accelerate discovery, as demonstrated by the joint efforts to develop a COVID-19 vaccine, shared across many labs. And just as they have played a key role in the development of the Internet and aviation technologies, governments can and should play a major role in building the technological and data infrastructure for sharing information about what works and what doesn’t. Again, because the problems are global, coordinating efforts across national boundaries could help reduce the costs and increase the benefits of such knowledge infrastructure.

Another essential tool in governments’ toolbox is fostering the development of other-regarding preferences: the more people care about others’ well-being, the more willing they are to contribute to search for the public good. For example, in a recent large-scale experiment in Germany, second-grade children were matched with mentors—potential prosocial role models—who spent one afternoon per week in one-to-one interactions with the children, doing things like visiting a zoo, museum, or playground, cooking, ice-skating, or simply having a conversation. After two years, the kids who had been assigned to mentors revealed a significant and persistent increase in prosociality, as captured through choice experiments and survey measures. Evaluations of this large-scale experiment suggest that prosociality is malleable, and that early childhood interventions of this type have the potential to systematically affect character formation, with possible long-term benefits….(More)”.

New York vs Big Tech: Lawmakers Float Data Tax in Privacy Push


GovTech article: “While New York is not the first state to propose data privacy legislation, it is the first to propose a data privacy bill that would implement a tax on big tech companies that benefit from the sale of New Yorkers’ consumer data.

Known as the Data Economy Labor Compensation and Accountability Act, the bill looks to enact a 2 percent tax on annual receipts earned off New York residents’ data. This tax and other rules and regulations aimed at safeguarding citizens’ data will be enforced by a newly created Office of Consumer Data Protection outlined in the bill.
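As a rough, hypothetical illustration of the levy's arithmetic (the receipts figure below is invented, and the bill itself would define what counts as taxable data revenue), a 2 percent tax on annual receipts works out as follows.

```python
# Hypothetical illustration of the proposed 2 percent levy on annual receipts
# earned from New York residents' data. The receipts figure is invented; the
# bill would define what actually counts as taxable data revenue.
DATA_TAX_RATE = 0.02

def annual_data_tax(ny_data_receipts: float) -> float:
    """Return the levy owed on receipts attributable to New Yorkers' data."""
    return DATA_TAX_RATE * ny_data_receipts

# Example: a platform booking an assumed $500 million from New Yorkers' data.
receipts = 500_000_000
print(f"Levy owed: ${annual_data_tax(receipts):,.0f}")  # Levy owed: $10,000,000
```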

The office would require all data controllers and processors to register annually in order to meet state compliance requirements. Failure to do so, the bill states, would result in fines.

As for the tax, all funds will be put toward improving education and closing the digital divide.

“The revenue from the tax will be put towards digital literacy, workforce redevelopment, STEAM education (science, technology, engineering, arts and mathematics), K-12 education, workforce reskilling and retraining,” said Sen. Andrew Gounardes, D-22.

As for why the bill is being proposed now, Gounardes said, “Every day, big tech companies like Amazon, Apple, Facebook and Google capitalize on the unpaid labor of billions of people to create their products and services through targeted advertising and artificial intelligence.”…(More)”

The Ancient Imagination Behind China’s AI Ambition


Essay by Jennifer Bourne: “Artificial intelligence is a modern technology, but in both the West and the East the aspiration to invent autonomous tools and robots that can think for themselves can be traced back to ancient times. Adrienne Mayor, a historian of science at Stanford, has noted that ancient Greece had myths about tools that helped men become godlike, such as the wings the legendary inventor Daedalus fabricated for himself and his son to escape from prison.

Similar myths and stories are to be found in China too, where aspirations for advanced robots also appeared thousands of years ago. In a tale that appears in the Taoist text “Liezi,” which is attributed to the 5th-century BCE philosopher Lie Yukou, a technician named Yan Shi made a humanlike robot that could dance and sing and even dared to flirt with the king’s concubines. The king, angry and fearful, ordered the robot to be dismantled. 

In the Three Kingdoms era (220-280), a politician named Zhuge Liang invented a “fully automated” wheelbarrow (the translation from the Chinese is roughly “wooden ox”) that could reportedly carry over 200 pounds of food supplies and walk 20 miles a day without needing any fuel or manpower. Later, Zhang Zhuo, a scholar who died around 730, wrote a story about a robot that was obedient, polite and could pour wine for guests at parties. In the same collection of stories, Zhang also mentioned a robot monk who wandered around town, asking for alms and bowing to those who gave him something. And in “Extensive Records of the Taiping Era,” published in 978, a technician called Ma Daifeng is said to have invented a robot maid who did household chores for her master.

Imaginative narratives of intelligent robots or autonomous tools can be found throughout agriculture-dominated ancient China, where wealth flowed from a higher capacity for labor. These stories reflect ancient people’s desire to get more artificial hands on deck and to free themselves from intensive farm work….(More)”.

‘Belonging Is Stronger Than Facts’: The Age of Misinformation



Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems….(More)”.