The Myth of Tech Exceptionalism


Essay by Yael Eisenstat and Nils Gilman: “…What has come to be known as “tech” presents a two-faced image. On the one hand, tech represents (and especially presents itself as) all that is good about contemporary capitalism: it produces delightful new products, generates vast new troves of wealth and inspires us quite literally to reach for the heavens. On the other hand, the harms caused by “tech” have become all too familiar: facial recognition technology disproportionately misidentifying people of color, Google reinforcing racist stereotypes, Facebook stoking political polarization, Airbnb hollowing out city centers, smartphones harming mental health and on and on. Some go so far as to claim that tech is depriving us of the very essence of our humanity.

Despite these critiques, Silicon Valley in recent decades has managed to build an anti-regulatory fortress around itself by promoting the myth — rarely stated plainly, but widely believed by tech practitioners — that “tech” is somehow fundamentally different from every other industry that has come before. It is different, the myth says, because it is inherently well-intentioned and will produce not just new but previously unthinkable products. Any micro-level harm — whether to an individual, a vulnerable community, even an entire country — is by this logic deemed a worthwhile trade-off for the society-shifting, macro-level “good.”

This argument, properly labelled “tech exceptionalism,” is rooted in tech leaders’ ideological view of both themselves and government. This ideology contributes to the belief that those who choose to classify themselves as “tech companies” deserve a different set of rules and responsibilities than the rest of private industry.

For tech evangelists, “disruption” as such has become a kind of holy grail, with “unintended consequences” treated as an acceptable by-product of innovation. “Move fast and break things” was Facebook’s original motto — and if a little thing like democracy got broken in the process, well, someone could clean that up later. An entire generation of “innovators” has grown up believing that technology is the key to making the world better, that founders’ visions for how to do so are unquestionably true and that government intervention will only stymie this engine of growth and prosperity, or even worse, their aspirational future innovations…(More)”.

Technology is revolutionizing how intelligence is gathered and analyzed – and opening a window onto Russian military activity around Ukraine


Craig Nazareth at The Conversation: “…Through information captured by commercial companies and individuals, the realities of Russia’s military posturing are accessible to anyone via internet search or news feed. Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia’s military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on their way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.

This democratization of intelligence collection in most cases is a boon for intelligence professionals. Government analysts are filling the need for intelligence assessments using information sourced from across the internet instead of primarily relying on classified systems or expensive sensors high in the sky or arrayed on the planet.

However, sifting through terabytes of publicly available data for relevant information is difficult. Knowing that much of the data could be intentionally manipulated to deceive complicates the task.

Enter the practice of open-source intelligence. The U.S. director of national intelligence defines Open-Source Intelligence, or OSINT, as the collection, evaluation and analysis of publicly available information. The information sources include news reports, social media posts, YouTube videos and satellite imagery from commercial satellite operators.

OSINT communities and government agencies have developed best practices for OSINT, and there are numerous free tools. Analysts can use the tools to develop network charts of, for example, criminal organizations by scouring publicly available financial records for criminal activity.
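The network-charting workflow described above can be sketched with nothing more than standard-library Python. The entity names, payment amounts, and the degree-based “hub” heuristic below are invented for illustration; real OSINT tooling is considerably richer.

```python
from collections import defaultdict

# Hypothetical transaction records gleaned from public financial filings;
# entity names and amounts are invented for illustration.
transactions = [
    ("Shell Co A", "Holding B", 250_000),
    ("Holding B", "Offshore C", 240_000),
    ("Shell Co A", "Offshore C", 80_000),
    ("Vendor D", "Holding B", 15_000),
]

# Build an adjacency map (who pays whom, and how much in total)
# and a simple degree count per entity.
graph = defaultdict(dict)
degree = defaultdict(int)
for payer, payee, amount in transactions:
    graph[payer][payee] = graph[payer].get(payee, 0) + amount
    degree[payer] += 1
    degree[payee] += 1

# The highest-degree node is a candidate hub worth closer scrutiny.
hub = max(degree, key=degree.get)
print(hub)  # "Holding B" sits on the most payment edges
```

An analyst would of course feed such a chart into dedicated graph tools; the point is only that the raw material is public and the first pass is mechanical.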

Private investigators are using OSINT methods to support law enforcement, corporate and government needs. Armchair sleuths have used OSINT to expose corruption and criminal activity to authorities. In short, the majority of intelligence needs can be met through OSINT…

Even with OSINT best practices and tools, OSINT contributes to the information overload intelligence analysts have to contend with. The intelligence analyst is typically in a reactive mode trying to make sense of a constant stream of ambiguous raw data and information.

Machine learning, a set of techniques that allows computers to identify patterns in large amounts of data, is proving invaluable for processing OSINT information, particularly photos and videos. Computers are much faster at sifting through large datasets, so adopting machine learning tools and techniques to optimize the OSINT process is a necessity.

Identifying patterns makes it possible for computers to evaluate information for deception and credibility and predict future trends. For example, machine learning can be used to help determine whether information was produced by a human or by a bot or other computer program and whether a piece of data is authentic or fraudulent…(More)”.
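As a toy illustration of the bot-detection idea, one machine-learnable signal is how regularly an account posts: automated accounts often post at near-constant intervals, while human activity comes in irregular bursts. The timestamps below are invented, and a production system would learn many such features from labeled data rather than rely on a single hand-picked one.

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between posts.
    Values near zero suggest machine-like regularity."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

# Hypothetical posting times in seconds since some epoch.
bot_like = [0, 600, 1200, 1800, 2400]    # exactly every 10 minutes
human_like = [0, 140, 1900, 2300, 9000]  # irregular bursts

print(interval_regularity(bot_like))    # 0.0
print(interval_regularity(human_like))  # well above 1
```

Real classifiers combine dozens of such signals (timing, language, network position) and let the model weight them.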

How privacy’s past may shape its future


Essay by Alessandro Acquisti, Laura Brandimarte and Jeff Hancock: “Continued expansion of human activities into digital realms gives rise to concerns about digital privacy and its invasions, often expressed in terms of data rights and internet surveillance. It may thus be tempting to construe privacy as a modern phenomenon—something our ancestors lacked and technological innovation and urban growth made possible. Research from history, anthropology, and ethnography suggests otherwise. The evidence for peoples seeking to manage the boundaries of private and public spans time and space, social class, and degree of technological sophistication. Privacy—not merely hiding of data, but the selective opening and closing of the self to others—appears to be both culturally specific and culturally universal. But what could explain the simultaneous universality and diversity of a human drive for privacy? An account of the evolutionary roots of privacy may offer an answer and teach us about privacy’s digital future and how to manage it….(More)”.

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, just as with the capability to dig a mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”

The chronic growing pains of communicating science online


Dominique Brossard and Dietram A. Scheufele at Science: “Almost a decade ago, we wrote, “Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate”. Since then, the footprint of subscription-based news content has slowly shrunk. Meanwhile, microtargeted information increasingly dominates social media, curated and prioritized algorithmically on the basis of audience demographics, an abundance of digital trace data, and other consumer information. Partly as a result, hyperpolarized public attitudes on issues such as COVID-19 vaccines or climate change emerge and grow in separate echo chambers.

Scientists have been slow to adapt to a shift in power in the science information ecosystem—changes that are not likely to reverse. The business-as-usual response to this challenge from many parts of the scientific community—especially in science, technology, engineering, and mathematics fields—has been frustrating to those who conduct research on science communication. Many scientists-turned-communicators continue to see online communication environments mostly as tools for resolving information asymmetries between experts and lay audiences. As a result, they blog, tweet, and post podcasts and videos to promote public understanding and excitement about science. To be fair, this has been driven most recently by a demand from policy-makers and from audiences interested in policy and decision-relevant science during the COVID-19 pandemic.

Unfortunately, social science research suggests that rapidly evolving online information ecologies are likely to be minimally responsive to scientists who upload content—however engaging it may seem—to TikTok or YouTube. In highly contested national and global information environments, the scientific community is just one of many voices competing for attention and public buy-in about a range of issues, from COVID-19 to artificial intelligence to genetic engineering, among other topics. This competition for public attention has produced at least three urgent lessons that the scientific community must face as online information environments rapidly displace traditional, mainstream media….(More)”.

Bringing Open Source to the Global Lab Bench


Article by Julieta Arancio and Shannon Dosemagen: “In 2015, Richard Bowman, an optics scientist, began experimenting with 3D printing a microscope as a single piece in order to reduce the time and effort of reproducing the design. Soon after, he started the OpenFlexure project, an open-license 3D-printed microscope. The project quickly took over his research agenda and grew into a global community of hundreds of users and developers, including professional scientists, hobbyists, community scientists, clinical researchers, and teachers. Anyone with access to a 3D printer can download open-source files from the internet to create microscopes that can be used for doing soil science research, detecting diseases such as malaria, or teaching microbiology, among other things. Today, the project is supported by a core team at the Universities of Bath and Cambridge in the United Kingdom, as well as in Tanzania by the Ifakara Health Institute and Bongo Tech & Research Labs, an engineering company. 

OpenFlexure is one of many open science hardware projects that are championed by the Gathering for Open Science Hardware (GOSH), a transnational network of open science hardware advocates. Although there are differences in practice, open hardware projects operate on similar principles to open-source software, and they span disciplines ranging from nanotechnology to environmental monitoring. GOSH defines the field as “any piece of hardware used for scientific investigations that can be obtained, assembled, used, studied, modified, shared, and sold by anyone. It includes standard lab equipment as well as auxiliary materials, such as sensors, biological reagents, analog and digital electronic components.” Compared to an off-the-shelf microscope, which may cost thousands of dollars, an OpenFlexure microscope may cost a few hundred. By being significantly cheaper and easier to maintain, open hardware enables more people in more places to do science….(More)”.

Japan to pitch data-sharing framework to bolster Asia supply chains


Nikkei coverage: “The Japanese government is set to propose a scheme to promote data-sharing among companies in Asia to strengthen supply chains in the region, Nikkei has learned.

The Ministry of Economy, Trade and Industry (METI) hopes that a secure data-sharing framework like the one developed in Europe will enable companies in Asia to smoothly exchange data, such as inventory information on products and parts, as well as information on potential disruptions in procurement.

The ministry will propose the idea as a key part of Japan’s digital trade policy at an expert panel meeting on Friday. The meeting will also discuss a major review of industrial policy to emphasize digitization and a decarbonized economy.

It sees Europe’s efforts as a role model in terms of information-sharing. The European Union is building a data distribution infrastructure, Gaia-X, to let companies in the region share information on supply chains.

The goal is to counter the monopoly on data held by large technology companies in the U.S. and China. The EU is promoting the sharing of data by connecting different cloud services among companies. Under Gaia-X, companies can limit the scope of data disclosure and the use of data provided to others, based on the concept of data sovereignty.

The scheme envisioned by METI will also allow companies to decide what type of data they share and how much. The infrastructure will be developed on a regional basis, with the participation of various countries.

Google and China’s Alibaba Group Holding offer data-sharing services for supply chains, but the Japanese government is concerned that it will be difficult to protect Japanese companies’ industrial secrets unless it develops its own data infrastructure….(More)”
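The data-sovereignty idea at the heart of both Gaia-X and METI’s proposal (providers deciding what they share, with whom, and for what purpose) can be sketched as a simple policy check. The field names and entities below are invented for illustration and are not drawn from any Gaia-X or METI specification.

```python
from dataclasses import dataclass

# Toy model: the data provider, not the platform, declares which
# consumers may access a dataset and for which declared purposes.
@dataclass(frozen=True)
class SharingPolicy:
    dataset: str
    allowed_consumers: frozenset
    allowed_purposes: frozenset

def may_access(policy, consumer, purpose):
    """A request is honored only if both the consumer and the
    stated purpose appear in the provider's policy."""
    return consumer in policy.allowed_consumers and purpose in policy.allowed_purposes

# Hypothetical policy for a parts-inventory dataset.
inventory_policy = SharingPolicy(
    dataset="parts_inventory",
    allowed_consumers=frozenset({"assembler_jp", "logistics_sg"}),
    allowed_purposes=frozenset({"shortage_forecast"}),
)

print(may_access(inventory_policy, "assembler_jp", "shortage_forecast"))  # True
print(may_access(inventory_policy, "assembler_jp", "marketing"))          # False
```

The real engineering challenge, of course, is enforcing such declarations across organizations, which is what the shared infrastructure is meant to provide.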

On Attentional Norms: The inevitability of Zooming while distracted.


Essay by Alan Jacobs: “Every medium of communication has its own attentional norms. Like all tacit rules that govern behavior, they get violated, but the violators typically act deliberately. For instance, the people who talk aloud in the movie theater typically aren’t ignorant of the norms; they transgress them for the lulz. Human beings are extremely skilled at recognizing and internalizing the norms of any given medium or environment.

Such norms are not set in stone but rather can alter over time….

It has been interesting to watch over the last two pandemic years as the norms associated with videoconferencing have coalesced. My experience strongly suggests that the attention level expected on Zoom (and other videoconferencing platforms) is quite remarkably low—medieval-churchgoing low. Obviously, there will be exceptions to this norm—no one feels free to look away when the Boss is giving a speech—but I can’t remember the last time I was on a Zoom call in which participants were not regularly cutting their video and audio, or just their audio, to talk to people in the room with them. Or they just walk out of frame for a few minutes. Or they type away furiously on Slack or email or WhatsApp or iMessage. And no one who does this acts inappropriately, because such fidgeting and alternations of attention are permitted by the norms that have emerged.

The primary exceptions to these rules, aside from the etiquette demanded of those who must listen to the Boss, occur when there are fewer than four people involved in a conversation. If there are just two or three of you, people know that before stepping away from the conversation they need to (a) inform their interlocutors of what they’re about to do, and then (b) apologize when they return. But as long as the person speaking has an audience of more than two, all bets are off. Each of us can come and go at need, or at impulse….

Distractions come in many varieties, and some apparent distractions aren’t really distractions at all. But Zoom, it seems to me, is a medium that offers constant permission to be distracted. And while the norms of any particular moment are in a sense not objectively good or bad, they can be good or bad in relation to certain human purposes. The purposes I have in my classes are not compatible with the attentional norms that we’ve learned to employ in our teleconferencing pandemic…(More)”

How NFTs could transform health information exchange


Paper by Kristin Kostick-Quenet et al: “Personal (sometimes called “protected”) health information (PHI) is highly valued and will become centrally important as big data and machine learning move to the forefront of health care and translational research. The current health information exchange (HIE) market is dominated by commercial and (to a lesser extent) not-for-profit entities and typically excludes patients. This can serve to undermine trust and create disincentives for sharing data. Patients have limited agency in deciding which of their data is shared, with whom, and under what conditions. Within this context, new forms of digital ownership can inspire a digital marketplace for patient-controlled health data. We argue that nonfungible tokens (NFTs) or NFT-like frameworks can help incentivize a more democratized, transparent, and efficient system for HIE in which patients participate in decisions about how and with whom their PHI is shared…(More)”.
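A minimal sketch of the NFT-like framework the authors describe, assuming a toy hash-chained ledger rather than any real blockchain: only a hash of the PHI is tokenized, the data itself stays off-chain, and each consent grant is recorded as an appended, tamper-evident event. All names and event fields are invented for illustration.

```python
import hashlib
import json

# Append-only ledger; each entry's hash covers the previous entry's
# hash, so rewriting history invalidates everything after it.
ledger = []

def append_event(event):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry = {"event": event,
             "hash": hashlib.sha256((prev + payload).encode()).hexdigest()}
    ledger.append(entry)
    return entry["hash"]

# The patient "mints" a token over a hash of their encrypted record,
# then grants a scoped consent; the PHI itself never touches the ledger.
record_hash = hashlib.sha256(b"<encrypted PHI blob>").hexdigest()
append_event({"type": "mint", "owner": "patient_1", "phi_hash": record_hash})
append_event({"type": "grant", "to": "research_lab_A", "scope": "genomics"})

print(len(ledger))  # 2 events, each chained to the one before it
```

Revocation, transfer, and payment would be further event types; the authors’ point is that the patient, not an intermediary, authors these entries.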

Facial Recognition Plan from IRS Raises Big Concerns


Article by James Hendler: “The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services.

The IRS’s move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs that were administered as part of the American Rescue Plan has been a major concern to the government.

The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits. The reaction has prompted the IRS to revisit its decision.

As a computer science researcher and the chair of the Global Technology Policy Council of the Association for Computing Machinery, I have been involved in exploring some of the issues with government use of facial recognition technology, both its uses and its potential flaws. There have been a great number of concerns raised over the general use of this technology in policing and other government functions, often focused on whether the accuracy of these algorithms can have discriminatory effects. In the case of ID.me, there are other issues involved as well….(More)”.