How privacy’s past may shape its future


Essay by Alessandro Acquisti, Laura Brandimarte and Jeff Hancock: “Continued expansion of human activities into digital realms gives rise to concerns about digital privacy and its invasions, often expressed in terms of data rights and internet surveillance. It may thus be tempting to construe privacy as a modern phenomenon—something our ancestors lacked and technological innovation and urban growth made possible. Research from history, anthropology, and ethnography suggests otherwise. The evidence for peoples seeking to manage the boundaries of private and public spans time and space, social class, and degree of technological sophistication. Privacy—not merely hiding of data, but the selective opening and closing of the self to others—appears to be both culturally specific and culturally universal. But what could explain the simultaneous universality and diversity of a human drive for privacy? An account of the evolutionary roots of privacy may offer an answer and teach us about privacy’s digital future and how to manage it….(More)”.

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, just as the capability to dig a mine doesn’t mean a community will accept the mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”

The chronic growing pains of communicating science online


Dominique Brossard and Dietram A. Scheufele at Science: “Almost a decade ago, we wrote, “Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate”. Since then, the footprint of subscription-based news content has slowly shrunk. Meanwhile, microtargeted information increasingly dominates social media, curated and prioritized algorithmically on the basis of audience demographics, an abundance of digital trace data, and other consumer information. Partly as a result, hyperpolarized public attitudes on issues such as COVID-19 vaccines or climate change emerge and grow in separate echo chambers.

Scientists have been slow to adapt to a shift in power in the science information ecosystem—changes that are not likely to reverse. The business-as-usual response to this challenge from many parts of the scientific community—especially in science, technology, engineering, and mathematics fields—has been frustrating to those who conduct research on science communication. Many scientists-turned-communicators continue to see online communication environments mostly as tools for resolving information asymmetries between experts and lay audiences. As a result, they blog, tweet, and post podcasts and videos to promote public understanding and excitement about science. To be fair, this has been driven most recently by a demand from policy-makers and from audiences interested in policy and decision-relevant science during the COVID-19 pandemic.

Unfortunately, social science research suggests that rapidly evolving online information ecologies are likely to be minimally responsive to scientists who upload content—however engaging it may seem— to TikTok or YouTube. In highly contested national and global information environments, the scientific community is just one of many voices competing for attention and public buy-in about a range of issues, from COVID-19 to artificial intelligence to genetic engineering, among other topics. This competition for public attention has produced at least three urgent lessons that the scientific community must face as online information environments rapidly displace traditional, mainstream media….(More)”.

Bringing Open Source to the Global Lab Bench


Article by Julieta Arancio and Shannon Dosemagen: “In 2015, Richard Bowman, an optics scientist, began experimenting with 3D printing a microscope as a single piece in order to reduce the time and effort of reproducing the design. Soon after, he started the OpenFlexure project, an open-license 3D-printed microscope. The project quickly took over his research agenda and grew into a global community of hundreds of users and developers, including professional scientists, hobbyists, community scientists, clinical researchers, and teachers. Anyone with access to a 3D printer can download open-source files from the internet to create microscopes that can be used for doing soil science research, detecting diseases such as malaria, or teaching microbiology, among other things. Today, the project is supported by a core team at the Universities of Bath and Cambridge in the United Kingdom, as well as in Tanzania by the Ifakara Health Institute and Bongo Tech & Research Labs, an engineering company. 

OpenFlexure is one of many open science hardware projects that are championed by the Gathering for Open Science Hardware (GOSH), a transnational network of open science hardware advocates. Although there are differences in practice, open hardware projects operate on similar principles to open-source software, and they span disciplines ranging from nanotechnology to environmental monitoring. GOSH defines the field as “any piece of hardware used for scientific investigations that can be obtained, assembled, used, studied, modified, shared, and sold by anyone. It includes standard lab equipment as well as auxiliary materials, such as sensors, biological reagents, analog and digital electronic components.” Compared to an off-the-shelf microscope, which may cost thousands of dollars, an OpenFlexure microscope may cost a few hundred. By being significantly cheaper and easier to maintain, open hardware enables more people in more places to do science….(More)”.

Japan to pitch data-sharing framework to bolster Asia supply chains


Nikkei coverage: “The Japanese government is set to propose a scheme to promote data-sharing among companies in Asia to strengthen supply chains in the region, Nikkei has learned.

The Ministry of Economy, Trade and Industry (METI) hopes that a secure data-sharing framework like the one developed in Europe will enable companies in Asia to smoothly exchange data, such as inventory information on products and parts, as well as information on potential disruptions in procurement.

The ministry will propose the idea as a key part of Japan’s digital trade policy at an expert panel meeting on Friday. The meeting will propose a major review of industrial policy to emphasize digitization and a decarbonized economy.

It sees Europe’s efforts as a role model in terms of information-sharing. The European Union is building a data distribution infrastructure, Gaia-X, to let companies in the region share information on supply chains.

The goal is to counter the monopoly on data held by large technology companies in the U.S. and China. The EU is promoting the sharing of data by connecting different cloud services among companies. Under Gaia-X, companies can limit the scope of data disclosure and the use of data provided to others, based on the concept of data sovereignty.

The scheme envisioned by METI will also allow companies to decide what type of data they share and how much. The infrastructure will be developed on a regional basis, with the participation of various countries.
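The core mechanism described here, letting a data owner decide which fields are disclosed and for which purposes, can be sketched in a few lines. The following is a minimal illustration only; the class, field names, and policy values are hypothetical and not drawn from the Gaia-X or METI specifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharingPolicy:
    """Hypothetical sketch of a data-sovereignty style sharing rule."""
    owner: str
    dataset: str
    allowed_fields: frozenset  # scope of disclosure chosen by the owner
    allowed_uses: frozenset    # purposes the recipient may use the data for

    def filter_record(self, record: dict, use: str) -> dict:
        # Refuse disclosure outright if the stated purpose is not permitted.
        if use not in self.allowed_uses:
            raise PermissionError(f"use '{use}' not permitted by {self.owner}")
        # Otherwise disclose only the fields within the agreed scope.
        return {k: v for k, v in record.items() if k in self.allowed_fields}

# A supplier shares inventory levels for planning, but not its costs.
policy = SharingPolicy(
    owner="supplier-jp-01",
    dataset="inventory",
    allowed_fields=frozenset({"part_id", "stock_level"}),
    allowed_uses=frozenset({"supply-chain-planning"}),
)
shared = policy.filter_record(
    {"part_id": "A-100", "stock_level": 250, "unit_cost": 12.5},
    use="supply-chain-planning",
)
# unit_cost lies outside the allowed scope and is not disclosed
assert shared == {"part_id": "A-100", "stock_level": 250}
```

The design point is that the filtering happens on the owner's side of the exchange: recipients never see fields or purposes outside the policy, which is what distinguishes this model from simply uploading data to a shared cloud.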

Google and China’s Alibaba Group Holding offer data-sharing services for supply chains, but the Japanese government is concerned that it will be difficult to protect Japanese companies’ industrial secrets unless Japan develops its own data infrastructure….(More)”

On Attentional Norms: The inevitability of Zooming while distracted.


Essay by Alan Jacobs: “Every medium of communication has its own attentional norms. Like all tacit rules that govern behavior, they get violated, but the violators typically act deliberately. For instance, the people who talk aloud in the movie theater typically aren’t ignorant of the norms; they transgress them for the lulz. Human beings are extremely skilled at recognizing and internalizing the norms of any given medium or environment.

Such norms are not set in stone but rather can alter over time….

It has been interesting to watch over the last two pandemic years as the norms associated with videoconferencing have coalesced. My experience strongly suggests that the attention level expected on Zoom (and other videoconferencing platforms) is quite remarkably low—medieval-churchgoing low. Obviously, there will be exceptions to this norm—no one feels free to look away when the Boss is giving a speech—but I can’t remember the last time I was on a Zoom call in which participants were not regularly cutting their video and audio, or just their audio, to talk to people in the room with them. Or they just walk out of frame for a few minutes. Or they type away furiously on Slack or email or WhatsApp or iMessage. And no one who does this acts inappropriately, because such fidgeting and alternations of attention are permitted by the norms that have emerged.

The primary exceptions to these rules, aside from the etiquette demanded of those who must listen to the Boss, occur when there are fewer than four people involved in a conversation. If there are just two or three of you, people know that before stepping away from the conversation they need to (a) inform their interlocutors of what they’re about to do, and then (b) apologize when they return. But as long as the person speaking has an audience of more than two, all bets are off. Each of us can come and go at need, or at impulse….

Distractions come in many varieties, and some apparent distractions aren’t really distractions at all. But Zoom, it seems to me, is a medium that offers constant permission to be distracted. And while the norms of any particular moment are in a sense not objectively good or bad, they can be good or bad in relation to certain human purposes. The purposes I have in my classes are not compatible with the attentional norms that we’ve learned to employ in our teleconferencing pandemic…(More)”

How NFTs could transform health information exchange


Paper by Kristin Kostick-Quenet et al: “Personal (sometimes called “protected”) health information (PHI) is highly valued and will become centrally important as big data and machine learning move to the forefront of health care and translational research. The current health information exchange (HIE) market is dominated by commercial and (to a lesser extent) not-for-profit entities and typically excludes patients. This can serve to undermine trust and weaken incentives for sharing data. Patients have limited agency in deciding which of their data is shared, with whom, and under what conditions. Within this context, new forms of digital ownership can inspire a digital marketplace for patient-controlled health data. We argue that nonfungible tokens (NFTs) or NFT-like frameworks can help incentivize a more democratized, transparent, and efficient system for HIE in which patients participate in decisions about how and with whom their PHI is shared…(More)”.
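One way to picture what an "NFT-like framework" for HIE might record is a unique, patient-issued consent token that points to (rather than contains) the health data. The sketch below is purely illustrative and not the authors' design; all names and fields are hypothetical, and a real system would anchor such tokens on a ledger rather than in memory.

```python
from dataclasses import dataclass
import hashlib
import time

@dataclass
class ConsentToken:
    """Hypothetical NFT-like record of a patient's data-sharing grant."""
    patient_id: str
    data_ref: str    # a hash pointer to the PHI, never the PHI itself
    grantee: str
    purpose: str
    expires_at: float

    def token_id(self) -> str:
        # A deterministic, unique identifier derived from the grant's terms.
        payload = f"{self.patient_id}|{self.data_ref}|{self.grantee}|{self.purpose}"
        return hashlib.sha256(payload.encode()).hexdigest()

    def is_valid(self, now=None) -> bool:
        return (now if now is not None else time.time()) < self.expires_at

# A patient grants a research lab time-limited access to one record.
grant = ConsentToken(
    patient_id="patient-123",
    data_ref=hashlib.sha256(b"ehr-record-42").hexdigest(),
    grantee="research-lab-A",
    purpose="diabetes-study",
    expires_at=time.time() + 3600,  # one hour from now
)
assert grant.is_valid()
```

Because the token carries only a hash pointer and the terms of the grant, it can be transferred, audited, or revoked without the PHI itself ever moving, which is the property that makes the approach attractive for patient-controlled exchange.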

Facial Recognition Plan from IRS Raises Big Concerns


Article by James Hendler: “The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Rescue Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, both its applications and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well….(More)”.

COVID’s lesson for governments? Don’t cherry-pick advice, synthesize it


Essay by Geoff Mulgan: “Too many national leaders get good guidance yet make poor decisions…Handling complex scientific issues in government is never easy — especially during a crisis, when uncertainty is high, stakes are huge and information is changing fast. But for some of the nations that have fared the worst in the COVID-19 pandemic, there’s a striking imbalance between the scientific advice available and the capacity to make sense of it. Some advice is ignored because it’s politically infeasible or impractical. Nonetheless, much good scientific input has fallen by the wayside because there’s no means to pick it up.

Part of the problem has been a failure of synthesis — the ability to combine insights and transcend disciplinary boundaries. Creating better syntheses should be a governmental priority as the crisis moves into a new phase….

Input from evidence synthesis is crucial for policymaking. But the capacity of governments to absorb such evidence is limited, and syntheses for decisions must go much further in terms of transparently incorporating assessments of political or practical feasibility, implementation, benefits and cost, among many other factors. The gap between input and absorption is glaring.

I’ve addressed teams in the UK prime minister’s office, the European Commission and the German Chancellery about this issue. In responding to the pandemic, some countries (including France and the United Kingdom) have tried to look at epidemiological models alongside economic ones, but none has modelled the social or psychological effects of different policy choices, and none would claim to have achieved a truly synthetic approach.

There are dozens of good examples of holistic thinking and action: programmes to improve public health in Finland, cut UK street homelessness, reduce poverty in China. But for many governments, the capacity to see things in the round has waned over the past decade. The financial crisis of 2007 and then populism both shortened governments’ time horizons for planning and policy in the United States and Europe….

The worst governments rely on intuition. But even the best resort to simple heuristics — for example, that it’s best to act fast, or that prioritizing health is also good for the economy. This was certainly true in 2020 and 2021. But that might change with higher vaccination and immunity rates.

What would it mean to transcend simple heuristics and achieve a truly synthetic approach? It would involve mapping and ranking relevant factors (from potential impacts on hospital capacity to the long-run effects of isolation); using formal and informal models to capture feedbacks, trade-offs and synergies; and more creative work to shape options.
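The "mapping and ranking relevant factors" step can be made concrete with even a toy weighted-scoring exercise. The factors, policy options, weights, and scores below are entirely hypothetical illustrations, not Mulgan's, and a genuine synthesis would also need models of the feedbacks and trade-offs between factors rather than a static sum.

```python
# Toy sketch of the "map and rank relevant factors" step.
# Weights reflect a (hypothetical) judgment of each factor's importance.
factors = {"hospital_capacity": 0.4, "economic_cost": 0.3, "isolation_harm": 0.3}

# Each policy option is scored 0-10 per factor (higher = better outcome).
options = {
    "strict_lockdown":   {"hospital_capacity": 9, "economic_cost": 2, "isolation_harm": 2},
    "targeted_measures": {"hospital_capacity": 6, "economic_cost": 6, "isolation_harm": 6},
    "no_restrictions":   {"hospital_capacity": 1, "economic_cost": 8, "isolation_harm": 9},
}

def score(option_scores: dict) -> float:
    # Weighted sum across all mapped factors.
    return sum(weight * option_scores[f] for f, weight in factors.items())

# Rank options from best to worst under these (illustrative) weights.
ranked = sorted(options, key=lambda o: score(options[o]), reverse=True)
```

Even this crude exercise shows why synthesis beats single-factor heuristics: an option that dominates on one dimension (here, hospital capacity) can still rank below a balanced option once the other mapped factors are weighed in.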

Usually, such work is best done by teams that encompass breadth and depth, disparate disciplines, diverse perspectives and both officials and outsiders. Good examples include Singapore’s Strategy Group (and Centre for Strategic Futures), which helps the country to execute sophisticated plans on anything from cybercrime to climate resilience. But most big countries, despite having large bureaucracies, lack comparable teams…(More)”.

Sample Truths


Christopher Beha at Harper’s Magazine: “…How did we ever come to believe that surveys of this kind could tell us something significant about ourselves?

One version of the story begins in the middle of the seventeenth century, after the Thirty Years’ War left the Holy Roman Empire a patchwork of sovereign territories with uncertain borders, contentious relationships, and varied legal conventions. The resulting “weakness and need for self-definition,” the French researcher Alain Desrosières writes, created a demand among local rulers for “systematic cataloging.” This generally took the form of descriptive reports. Over time the proper methods and parameters of these reports became codified, and thus was born the discipline of Statistik: the systematic study of the attributes of a state.

As Germany was being consolidated in the nineteenth century, “certain officials proposed using the formal, detailed framework of descriptive statistics to present comparisons between the states” by way of tables in which “the countries appeared in rows, and different (literary) elements of the description appeared in columns.” In this way, a single feature, such as population or climate, could be easily removed from its context. Statistics went from being a method for creating a holistic description of one place to what Desrosières calls a “cognitive space of equivalence.” Once this change occurred, it was only a matter of time before the descriptions themselves were put into the language of equivalence, which is to say, numbers.

The development of statistical reasoning was central to the “project of legibility,” as the anthropologist James C. Scott calls it, ushered in by the rise of nation-states. Strong centralized governments, Scott writes in Seeing Like a State, required that local communities be made “legible,” their features abstracted to enable management by distant authorities. In some cases, such “state simplifications” occurred at the level of observation. Cadastral maps, for example, ignored local land-use customs, focusing instead on the points relevant to the state: How big was each plot, and who was responsible for paying taxes on it?

But legibility inevitably requires simplifying the underlying facts, often through coercion. The paradigmatic example here is postrevolutionary France. For administrative purposes, the country was divided into dozens of “departments” of roughly equal size whose boundaries were drawn to break up culturally cohesive regions such as Normandy and Provence. Local dialects were effectively banned, and use of the new, highly rational metric system was required. (As many commentators have noted, this work was a kind of domestic trial run for colonialism.)

One thing these centralized states did not need to make legible was their citizens’ opinions—on the state itself, or anything else for that matter. This was just as true of democratic regimes as authoritarian ones. What eventually helped bring about opinion polling was the rise of consumer capitalism, which created the need for market research.

But expanding the opinion poll beyond questions like “Pepsi or Coke?” required working out a few kinks. As the historian Theodore M. Porter notes, pollsters quickly learned that “logically equivalent forms of the same question produce quite different distributions of responses.” This fact might have led them to doubt the whole undertaking. Instead, they “enforced a strict discipline on employees and respondents,” instructing pollsters to “recite each question with exactly the same wording and in a specified order.” Subjects were then made “to choose one of a small number of packaged statements as the best expression of their opinions.”

This approach has become so familiar that it may be worth noting how odd it is to record people’s opinions on complex matters by asking them to choose among prefabricated options. Yet the method has its advantages. What it sacrifices in accuracy it makes up in pseudoscientific precision and quantifiability. Above all, the results are legible: the easiest way to be sure you understand what a person is telling you is to put your own words in his mouth.

Scott notes a kind of Heisenberg principle to state simplifications: “They frequently have the power to transform the facts they take note of.” This is another advantage to multiple-choice polling. If people are given a narrow range of opinions, they may well think that those are the only options available, and in choosing one, they may well accept it as wholly their own. Even those of us who reject the stricture of these options for ourselves are apt to believe that they fairly represent the opinions of others. One doesn’t have to be a postmodern relativist to suspect that what’s going on here is as much the construction of a reality as the depiction of one….(More)”.