COVID’s lesson for governments? Don’t cherry-pick advice, synthesize it


Essay by Geoff Mulgan: “Too many national leaders get good guidance yet make poor decisions…Handling complex scientific issues in government is never easy — especially during a crisis, when uncertainty is high, stakes are huge and information is changing fast. But for some of the nations that have fared the worst in the COVID-19 pandemic, there’s a striking imbalance between the scientific advice available and the capacity to make sense of it. Some advice is ignored because it’s politically infeasible or unpragmatic. Nonetheless, much good scientific input has fallen aside because there’s no means to pick it up.

Part of the problem has been a failure of synthesis — the ability to combine insights and transcend disciplinary boundaries. Creating better syntheses should be a governmental priority as the crisis moves into a new phase….

Input from evidence synthesis is crucial for policymaking. But the capacity of governments to absorb such evidence is limited, and syntheses for decisions must go much further in terms of transparently incorporating assessments of political or practical feasibility, implementation, benefits and cost, among many other factors. The gap between input and absorption is glaring.

I’ve addressed teams in the UK prime minister’s office, the European Commission and the German Chancellery about this issue. In responding to the pandemic, some countries (including France and the United Kingdom) have tried to look at epidemiological models alongside economic ones, but none has modelled the social or psychological effects of different policy choices, and none would claim to have achieved a truly synthetic approach.

There are dozens of good examples of holistic thinking and action: programmes to improve public health in Finland, cut UK street homelessness, reduce poverty in China. But for many governments, the capacity to see things in the round has waned over the past decade. The financial crisis of 2007 and then populism both shortened governments’ time horizons for planning and policy in the United States and Europe….

The worst governments rely on intuition. But even the best resort to simple heuristics — for example, that it’s best to act fast, or that prioritizing health is also good for the economy. This was certainly true in 2020 and 2021. But that might change with higher vaccination and immunity rates.

What would it mean to transcend simple heuristics and achieve a truly synthetic approach? It would involve mapping and ranking relevant factors (from potential impacts on hospital capacity to the long-run effects of isolation); using formal and informal models to capture feedbacks, trade-offs and synergies; and more creative work to shape options.

Usually, such work is best done by teams that encompass breadth and depth, disparate disciplines, diverse perspectives and both officials and outsiders. Good examples include Singapore’s Strategy Group (and Centre for Strategic Futures), which helps the country to execute sophisticated plans on anything from cybercrime to climate resilience. But most big countries, despite having large bureaucracies, lack comparable teams…(More)”.
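To make the mechanics of that mapping-and-ranking exercise concrete, here is a minimal sketch (ours, not Mulgan's); every policy option, factor and weight below is hypothetical, and a real synthesis would also need models of feedbacks and trade-offs rather than a single weighted sum.

```python
# Hypothetical sketch: ranking policy options against mapped factors.
# All options, factors, weights and scores are invented for illustration.

factors = {                       # weights sum to 1.0
    "hospital_capacity_impact": 0.35,
    "economic_cost": 0.25,
    "long_run_isolation_harm": 0.20,
    "political_feasibility": 0.20,
}

# Each option scored 0-10 per factor (higher = better outcome on that factor).
options = {
    "act_fast_short_lockdown": {
        "hospital_capacity_impact": 8, "economic_cost": 5,
        "long_run_isolation_harm": 6, "political_feasibility": 5,
    },
    "targeted_restrictions": {
        "hospital_capacity_impact": 6, "economic_cost": 7,
        "long_run_isolation_harm": 7, "political_feasibility": 7,
    },
    "voluntary_guidance_only": {
        "hospital_capacity_impact": 3, "economic_cost": 8,
        "long_run_isolation_harm": 8, "political_feasibility": 8,
    },
}

def synthetic_score(scores):
    """Collapse per-factor scores into one weighted score."""
    return sum(weight * scores[factor] for factor, weight in factors.items())

for name in sorted(options, key=lambda n: synthetic_score(options[n]), reverse=True):
    print(f"{name}: {synthetic_score(options[name]):.2f}")
```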

Building with and for the Community: New Resource Library and Data Maturity Assessment Tool Now Live


Perry Hewitt and Ginger Zielinskie at Data.org: “At data.org we have heard (and personally experienced) the challenge of needing to get smarter about data, and the frustration of wading through a trove of search engine results. It takes not only time and effort, but also field experience and subject matter expertise for social impact leaders to determine if a resource is from a trustworthy source, current enough to be relevant, and appropriate for their stage of data strategy. To solve this challenge, we have built two new elements into our data.org digital platform: a Resource Library and a Data Maturity Assessment Tool….We are delighted to be launching the Data Maturity Assessment Tool. This project, too, began with the community: an early alpha co-developed in the spring of 2021 with DataKind was tested with ten organizations, and in-depth interviews yielded insights about the data topics needing investigation. With this experience and extensive desk research in hand, we sought to create a solution that was short enough to be taken online, but substantive enough to identify areas of opportunity. Our goal was to provide organizations with a pulse check, helping them measure and understand where they stand today on their data journey.  

Mindful of organizations’ need to act on an assessment, we ensured the results page offers not only a benchmark score, but also specific resources aligned with areas for growth. Integration of the Tool with our Library of guides and resources via a shared taxonomy on the backend ensures that organizations receive results with specific, vetted resources for delving more deeply into content. We also heard from social impact organizations that a significant obstacle in launching and sustaining a data competency is developing and communicating a shared understanding of areas for opportunity and growth. With this challenge in mind, the results page is mindfully designed to be sharable with organizational leadership, boards, or funders to provide clarity, and to set the stage for an ongoing data conversation….(More)”.
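As a purely hypothetical illustration of the shared-taxonomy idea (the tag names, scores and resource titles below are invented, not data.org's actual schema), matching low-scoring assessment areas to tagged Library resources might look something like this:

```python
# Hypothetical sketch of results-to-resources matching via a shared taxonomy.
# Tag names, scores and resource titles are invented for illustration only.

resources = [
    {"title": "Getting Started with Data Governance", "tags": {"governance"}},
    {"title": "Building a Data Culture",              "tags": {"culture", "leadership"}},
    {"title": "Choosing Analytics Tools",             "tags": {"tools", "analysis"}},
]

# Assessment scores per taxonomy area (0-100); lower scores signal growth areas.
assessment = {"governance": 35, "culture": 80, "tools": 50, "analysis": 45, "leadership": 70}

GROWTH_THRESHOLD = 60
growth_areas = {area for area, score in assessment.items() if score < GROWTH_THRESHOLD}

# Surface resources whose tags overlap the organization's growth areas.
recommended = [r["title"] for r in resources if r["tags"] & growth_areas]
print("Growth areas:", sorted(growth_areas))
print("Recommended resources:", recommended)
```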

Sample Truths


Christopher Beha at Harper’s Magazine: “…How did we ever come to believe that surveys of this kind could tell us something significant about ourselves?

One version of the story begins in the middle of the seventeenth century, after the Thirty Years’ War left the Holy Roman Empire a patchwork of sovereign territories with uncertain borders, contentious relationships, and varied legal conventions. The resulting “weakness and need for self-definition,” the French researcher Alain Desrosières writes, created a demand among local rulers for “systematic cataloging.” This generally took the form of descriptive reports. Over time the proper methods and parameters of these reports became codified, and thus was born the discipline of Statistik: the systematic study of the attributes of a state.

As Germany was being consolidated in the nineteenth century, “certain officials proposed using the formal, detailed framework of descriptive statistics to present comparisons between the states” by way of tables in which “the countries appeared in rows, and different (literary) elements of the description appeared in columns.” In this way, a single feature, such as population or climate, could be easily removed from its context. Statistics went from being a method for creating a holistic description of one place to what Desrosières calls a “cognitive space of equivalence.” Once this change occurred, it was only a matter of time before the descriptions themselves were put into the language of equivalence, which is to say, numbers.

The development of statistical reasoning was central to the “project of legibility,” as the anthropologist James C. Scott calls it, ushered in by the rise of nation-states. Strong centralized governments, Scott writes in Seeing Like a State, required that local communities be made “legible,” their features abstracted to enable management by distant authorities. In some cases, such “state simplifications” occurred at the level of observation. Cadastral maps, for example, ignored local land-use customs, focusing instead on the points relevant to the state: How big was each plot, and who was responsible for paying taxes on it?

But legibility inevitably requires simplifying the underlying facts, often through coercion. The paradigmatic example here is postrevolutionary France. For administrative purposes, the country was divided into dozens of “departments” of roughly equal size whose boundaries were drawn to break up culturally cohesive regions such as Normandy and Provence. Local dialects were effectively banned, and use of the new, highly rational metric system was required. (As many commentators have noted, this work was a kind of domestic trial run for colonialism.)

One thing these centralized states did not need to make legible was their citizens’ opinions—on the state itself, or anything else for that matter. This was just as true of democratic regimes as authoritarian ones. What eventually helped bring about opinion polling was the rise of consumer capitalism, which created the need for market research.

But expanding the opinion poll beyond questions like “Pepsi or Coke?” required working out a few kinks. As the historian Theodore M. Porter notes, pollsters quickly learned that “logically equivalent forms of the same question produce quite different distributions of responses.” This fact might have led them to doubt the whole undertaking. Instead, they “enforced a strict discipline on employees and respondents,” instructing pollsters to “recite each question with exactly the same wording and in a specified order.” Subjects were then made “to choose one of a small number of packaged statements as the best expression of their opinions.”

This approach has become so familiar that it may be worth noting how odd it is to record people’s opinions on complex matters by asking them to choose among prefabricated options. Yet the method has its advantages. What it sacrifices in accuracy it makes up in pseudoscientific precision and quantifiability. Above all, the results are legible: the easiest way to be sure you understand what a person is telling you is to put your own words in his mouth.

Scott notes a kind of Heisenberg principle to state simplifications: “They frequently have the power to transform the facts they take note of.” This is another advantage to multiple-choice polling. If people are given a narrow range of opinions, they may well think that those are the only options available, and in choosing one, they may well accept it as wholly their own. Even those of us who reject the stricture of these options for ourselves are apt to believe that they fairly represent the opinions of others. One doesn’t have to be a postmodern relativist to suspect that what’s going on here is as much the construction of a reality as the depiction of one….(More)”.

A paradigm shift in lending to smallholder farmers: the potential of geomapping technology


A new report by Small Foundation and Palladium: “… looks at the viability of geomapping as a tool to close the smallholder farmers’ financing gap and improve their livelihoods.

Geomapping is the process of collecting location information, typically with a GPS system, and using it to assemble a map. For a technology provider like SyeComp, geomapping means sending field personnel out to map boundaries using a rugged, handheld GPS and then generating detailed maps. The report examines how companies like SyeComp use geomapping data to assess smallholder farmers’ risk and offers recommendations for scaling its use, with the ultimate goal of increasing smallholder farmers’ access to finance and creating pathways out of poverty.

The newly published research also indicates that geomapping technology providers within the agriculture sector are most differentiated by their specific customer segment, offering services directly to smallholder farmers or indirectly through financial institutions (FIs) or agribusinesses.

However, no matter their business model, most offer value to many stakeholders in a given value chain, either through geomapping information for FIs, market pricing information for farmers, or yield estimations for cooperatives. “Because geomapping providers are able to generate value for multiple stakeholders, their use offers a real opportunity to transform the financing landscape for smallholder farmers,” explains Eduardo Tugendhat, Palladium Director of Thought Leadership.

The report highlights how geomapping technology providers add value to the operations of financial institutions, agribusinesses, and cooperatives, and most importantly to the farmers themselves. For FIs, geomapping provides a critical, yet missing piece of the puzzle in a credit assessment—farm size and location. This information allows FIs to better understand potential yield, which they can use to modify a loan value and repayment terms. When providers overlay location information with climate risk maps, even more opportunities open for climate financing.

For agribusinesses such as product buyers, food processors and input suppliers, geomapping offers the added benefits of understanding where a farmer is located to make product collection more efficient, reduce the pestilence risk of certain farms to avoid product loss, and ensure product traceability.

Most importantly, geomapping providers deliver benefits to smallholder farmers by giving them access to locally tailored weather information, market and pricing data, and crop advice that assists farmers in achieving higher yields and getting their crops to the right buyers….(More)”.
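The report's point about farm size being the missing piece for lenders can be made concrete with an illustrative sketch (ours, not the report's): given a plot boundary captured as GPS points, the area a credit assessment needs follows from a little geometry. The coordinates below are invented.

```python
import math

# Hypothetical sketch: estimate a plot's area from GPS boundary points.
# Coordinates are invented; real providers work from surveyed boundaries.

def plot_area_hectares(boundary):
    """Approximate area of a small polygon given as (lat, lon) pairs in degrees.

    Uses a local equirectangular projection plus the shoelace formula,
    which is adequate for field-sized plots.
    """
    R = 6_371_000  # mean Earth radius in metres
    lat0 = math.radians(sum(lat for lat, _ in boundary) / len(boundary))
    # Project each vertex to local x/y metres around the plot's mean latitude.
    pts = [(math.radians(lon) * R * math.cos(lat0), math.radians(lat) * R)
           for lat, lon in boundary]
    area_m2 = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area_m2 += x1 * y2 - x2 * y1          # shoelace accumulation
    return abs(area_m2) / 2 / 10_000          # m^2 -> hectares

# Roughly a 200 m x 100 m field near the equator (illustrative values).
boundary = [(5.6000, -0.2000), (5.6000, -0.1982), (5.6009, -0.1982), (5.6009, -0.2000)]
print(f"Estimated plot size: {plot_area_hectares(boundary):.2f} ha")
```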

The UN is testing technology that processes data confidentially


The Economist: “Reasons of confidentiality mean that many medical, financial, educational and other personal records, from the analysis of which much public good could be derived, are in practice unavailable. A lot of commercial data are similarly sequestered. For example, firms have more granular and timely information on the economy than governments can obtain from surveys. But such intelligence would be useful to rivals. If companies could be certain it would remain secret, they might be more willing to make it available to officialdom.

A range of novel data-processing techniques might make such sharing possible. These so-called privacy-enhancing technologies (PETs) are still in the early stages of development. But they are about to get a boost from a project launched by the United Nations’ statistics division. The UN PETs Lab, which opened for business officially on January 25th, enables national statistics offices, academic researchers and companies to collaborate to carry out projects which will test various PETs, permitting technical and administrative hiccups to be identified and overcome.

The first such effort, which actually began last summer, before the PETs Lab’s formal inauguration, analysed import and export data from national statistical offices in America, Britain, Canada, Italy and the Netherlands, to look for anomalies. Those could be a result of fraud, of faulty record keeping or of innocuous re-exporting.

For the pilot scheme, the researchers used categories already in the public domain—in this case international trade in things such as wood pulp and clocks. They thus hoped to show that the system would work, before applying it to information where confidentiality matters.

They put several kinds of PETs through their paces. In one trial, OpenMined, a charity based in Oxford, tested a technique called secure multiparty computation (SMPC). This approach involves the data to be analysed being encrypted by their keeper and staying on the premises. The organisation running the analysis (in this case OpenMined) sends its algorithm to the keeper, who runs it on the encrypted data. That is mathematically complex, but possible. The findings are then sent back to the original inquirer…(More)”.
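The article stays non-technical, but the core idea behind one standard SMPC building block, additive secret sharing, can be sketched in a few lines; the party names and trade figures below are invented, and real deployments such as the one OpenMined tested involve far more machinery.

```python
import random

# Toy sketch of secure multiparty computation via additive secret sharing.
# Party names and trade values are invented for illustration only.

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value: int, n_parties: int):
    """Split a secret into n random shares that sum to the secret mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Each office's confidential export figure (illustrative numbers).
secrets = {"office_A": 1_250, "office_B": 3_400, "office_C": 820}
n = len(secrets)

# Every office splits its value; party k receives one share from each office.
received = [[] for _ in range(n)]
for value in secrets.values():
    for k, s in enumerate(share(value, n)):
        received[k].append(s)

# Each party publishes only the sum of the shares it holds...
partial_sums = [sum(shares) % MODULUS for shares in received]
# ...and the combined total is revealed without exposing any single office's figure.
total = sum(partial_sums) % MODULUS
print("Joint total:", total)
assert total == sum(secrets.values())
```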

The West already monopolized scientific publishing. Covid made it worse.


Samanth Subramanian at Quartz: “For nearly a decade, Jorge Contreras has been railing against the broken system of scientific publishing. Academic journals are dominated by Western scientists, who not only fill their pages but also work for institutions that can afford the hefty subscription fees to these journals. “These issues have been brewing for decades,” said Contreras, a professor at the University of Utah’s College of Law who specializes in intellectual property in the sciences. “The covid crisis has certainly exacerbated things, though.”

The coronavirus pandemic triggered a torrent of academic papers. By August 2021, at least 210,000 new papers on covid-19 had been published, according to a Royal Society study. Of the 720,000-odd authors of these papers, nearly 270,000 were from the US, the UK, Italy or Spain.

These papers carry research forward, of course—but they also advance their authors’ careers, and earn them grants and patents. But many of these papers are based on data gathered in the global south, by scientists who perhaps don’t have the resources to expand on their research and publish. Such scientists aren’t always credited in the papers their data give rise to; to make things worse, the papers appear in journals that are out of the financial reach of these scientists and their institutes.

These imbalances have, as Contreras said, been a part of the publishing landscape for years. (And this doesn’t occur just in the sciences; economists from the US or the UK, for instance, tend to study countries where English is the most common language.) But the pace and pressures of covid-19 have rendered these inequities especially stark.

Scientists have paid to publish their covid-19 research—sometimes as much as $5,200 per article. Subscriber-only journals maintain their high fees, running into thousands of dollars a year; in 2020, the Dutch publishing house Elsevier, which puts out journals such as Cell and Gene, reported a profit of nearly $1 billion, at a margin higher than that of Apple or Amazon. And Western scientists are pressing to keep data out of GISAID, a genome database that compels users to acknowledge or collaborate with anyone who deposits the data…(More)”

The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence


Paper by Erik Brynjolfsson: “In 1950, Alan Turing proposed an “imitation game” as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions are indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like—in fact, many of the most powerful systems are very different from humans—and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers…(More)”

UN chief calls for action to put out ‘5-alarm global fire’


UN Affairs: “At a time when “the only certainty is more uncertainty”, countries must unite to forge a new, more hopeful and equal path, UN Secretary-General António Guterres told the General Assembly on Friday, laying out his priorities for 2022.

“We face a five-alarm global fire that requires the full mobilization of all countries,” he said, referring to the raging COVID-19 pandemic, a morally bankrupt global financial system, the climate crisis, lawlessness in cyberspace, and diminished peace and security. 

He stressed that countries “must go into emergency mode”, and now is the time to act as the response will determine global outcomes for decades ahead…. 

Alarm four: Technology and cyberspace 

While technology offers extraordinary possibilities for humanity, Mr. Guterres warned that “growing digital chaos is benefiting the most destructive forces and denying opportunities to ordinary people.” 

He spoke of the need to both expand internet access to the nearly three billion people still offline, and to address risks such as data misuse, misinformation and cyber-crime. 

“Our personal information is being exploited to control or manipulate us, change our behaviours, violate our human rights, and undermine democratic institutions. Our choices are taken away from us without us even knowing it”, he said. 

The UN chief called for strong regulatory frameworks to change the business models of social media companies which “profit from algorithms that prioritize addiction, outrage and anxiety at the cost of public safety”. 

He has proposed the establishment of a Global Digital Compact, bringing together governments, the private sector and civil society, to agree on key principles underpinning global digital cooperation. 

Another proposal is for a Global Code of Conduct to end the infodemic and the war on science, and promote integrity in public information, including online.  

Countries are also encouraged to step up work on banning lethal autonomous weapons, or “killer robots” as headline writers may prefer, and to begin considering new governance frameworks for biotechnology and neurotechnology…(More)”.

Building machines that work for everyone – how diversity of test subjects is a technology blind spot, and what to do about it


Article by Tahira Reid and James Gibert: “People interact with machines in countless ways every day. In some cases, they actively control a device, like driving a car or using an app on a smartphone. Sometimes people passively interact with a device, like being imaged by an MRI machine. And sometimes they interact with machines without consent or even knowing about the interaction, like being scanned by a law enforcement facial recognition system.

Human-Machine Interaction (HMI) is an umbrella term that describes the ways people interact with machines. HMI is a key aspect of researching, designing and building new technologies, and also studying how people use and are affected by technologies.

Researchers, especially those traditionally trained in engineering, are increasingly taking a human-centered approach when developing systems and devices. This means striving to make technology that works as expected for the people who will use it by taking into account what’s known about the people and by testing the technology with them. But even as engineering researchers increasingly prioritize these considerations, some in the field have a blind spot: diversity.

As an interdisciplinary researcher who thinks holistically about engineering and design, and an expert in dynamics and smart materials with interests in policy, we have examined the lack of inclusion in technology design, the negative consequences and possible solutions….

It is possible to use a homogenous sample of people in publishing a research paper that adds to a field’s body of knowledge. And some researchers who conduct studies this way acknowledge the limitations of homogenous study populations. However, when it comes to developing systems that rely on algorithms, such oversights can cause real-world problems. Algorithms are only as good as the data that is used to build them.

Algorithms are often based on mathematical models that capture patterns and then inform a computer about those patterns to perform a given task. Imagine an algorithm designed to detect when colors appear on a clear surface. If the set of images used to train that algorithm consists of mostly shades of red, the algorithm might not detect when a shade of blue or yellow is present…(More)”.
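As a toy sketch of that colour-detection example (ours, not the authors'; the RGB values are invented), a detector calibrated only on red training samples flags new reds but misses blue and yellow entirely:

```python
import math

# Illustrative sketch of the colour-detection example: a detector calibrated
# only on red training samples fails on colours it never saw.

# Training set: shades of red only (R, G, B in 0-255).
training_colours = [(220, 30, 25), (200, 15, 40), (240, 60, 50), (180, 20, 20)]

def centroid(colours):
    """Mean colour of the training samples."""
    n = len(colours)
    return tuple(sum(c[i] for c in colours) / n for i in range(3))

def is_detected(pixel, model, threshold=80.0):
    """Flag a pixel as 'colour present' only if it is close to the learned centroid."""
    return math.dist(pixel, model) < threshold

model = centroid(training_colours)

print(is_detected((210, 35, 30), model))   # True  - another red, close to training data
print(is_detected((30, 60, 220), model))   # False - blue never appeared in training
print(is_detected((240, 220, 40), model))  # False - yellow is also missed
```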

Why Privacy Matters


Book by Neil Richards: “Many people tell us that privacy is dead, or that it is dying, but such talk is a dangerous fallacy. This book explains what privacy is, what privacy isn’t, and why privacy matters. Privacy is the extent to which human information is known or used, and it is fundamentally about the social power that human information provides over other people. The best way to ensure that power is checked and channeled in ways that benefit humans and their society is through rules—rules about human information. And because human information rules of some sort are inevitable, we should craft our privacy rules to promote human values. The book suggests three such values that our human information rules should promote: identity, freedom, and protection. Identity allows us to be thinking, self-defining humans; freedom lets us be citizens; while protection safeguards our roles as situated consumers and workers, allowing us, as members of society, to trust and rely on other people so that we can live our lives and hopefully build a better future together…(More)”.