Unmet Desire


Essay by Adam Seth Levine: “I vividly remember March 2020, the month the United States shut down as COVID-19 spread uncontrollably and upended daily life. At the time, I worked at Cornell University in upstate New York. As we adjusted to a new normal, my Cornell colleague Elizabeth Day and I suspected that local leaders were facing unprecedented policy challenges that were not making the major headlines.

We decided to reach out to county policymakers throughout upstate New York, inviting them to share challenges they were facing. We offered to discuss research that might prove helpful. Responses soon poured in.

One county executive was trying to figure out how to provide childcare for first responders. Childcare centers were ordered closed, but first responders could not stay home to watch their kids. The executive needed systematic research on other options. A second local policymaker watched as her county’s offices shuttered and work moved online; she needed research on how other local leaders had used mobile vans to provide necessary services to rural residents without internet. Another county official sought to design a high-quality survey to elicit frank responses from municipal leaders about COVID-related challenges. In this case, she needed to discuss the fundamentals of survey design and implementation with an expert.

These responses led us to engage in an informal collaboration with each of these policymakers. By informal collaboration, I mean a collaborative exchange in which people with diverse forms of knowledge, expertise, and lived experience share what they know with the goal of developing an expanded understanding of a problem—yet still remain autonomous decisionmakers. In these cases, we as researchers brought knowledge about policy analysis and survey fundamentals, and the policymakers brought detailed knowledge about their present needs, local context, and historical challenges. All this diverse information was crucial to chart a way forward that was informed by evidence.

Yet it turns out our interactions were highly unusual. During our conversations, all the policymakers revealed that researchers from colleges and universities in their immediate area had never reached out in this way, and that they had no regular communication with local researchers.

This disconnect is a problem. Local policymakers are responsible for almost $2 trillion of spending annually, and they oversee many areas in which technical knowledge is essential, such as promoting economic development, building and maintaining roads, educating children, policing, fighting fires, determining acceptable land use, and providing public transportation…(More)”.

Opening Up to Open Science


Essay by Chelle Gentemann, Christopher Erdmann and Caitlin Kroeger: “The modern Hippocratic Oath outlines ethical standards that physicians worldwide swear to uphold. “I will respect the hard-won scientific gains of those physicians in whose steps I walk,” one of its tenets reads, “and gladly share such knowledge as is mine with those who are to follow.”

But what form, exactly, should knowledge-sharing take? In the practice of modern science, knowledge in most scientific disciplines is generally shared through peer-reviewed publications at the end of a project. Although publication is both expected and incentivized—it plays a key role in career advancement, for example—many scientists do not take the extra step of sharing data, detailed methods, or code, making it more difficult for others to replicate, verify, and build on their results. Even beyond that, professional science today is full of personal and institutional incentives to hold information closely to retain a competitive advantage.

This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason: technology has improved the speed at which science can be carried out, and many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS outbreak, sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration.

Open science is a path toward a collaborative culture that, enabled by a range of technologies, empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. Yet despite its benefits, open science has not been widely embraced…(More)”.

Beyond ‘X Number Served’


Essay by Mona Mourshed: “Metrics matter, but they should always be plural. Focus on the speedometer, ignore the gas gauge, and you’re sure to stop short of your destination. But while the plague of metric monomania can occasionally be an issue in business, it’s an even bigger problem within the social sector. After all, market discipline forces business leaders to weigh tradeoffs between costs and sales, or between product quality and service level speed. Multiple metrics help executives get the balance right, even as they scale.

By contrast, nonprofits too often receive (well-intended) guidance from stakeholders like funders and board members to disproportionately zero in on a single goal: serving the maximum number of beneficiaries. That’s a perfectly understandable impulse, of course. But it confuses scale with just one impact dimension, reach. “We have to recognize that a higher number does not necessarily indicate transformation,” says Lisha McCormick, CEO of Last Mile Health, which supports countries in building strong community health systems. “Higher reach alone does not equate to impact.”

This is a problem because excessively defining and valuing programs by the number of people they serve can give rise to unintended consequences. Nonprofit leaders can find themselves discussing how to serve more people through “lighter touch” models or debating ambiguous metrics like “reached” or “touched” to expand participant numbers (while fighting uneasiness about the potential adverse implications for program quality)…(More)”.

How Tech Despair Can Set You Free


Essay by Samuel Matlack: “One way to look at the twentieth century is to say that nations may rise and fall but technical progress remains forever. Its sun rises on the evil and on the good, and its rain falls on the just and on the unjust. Its sun can be brighter than a thousand suns, scorching our enemies, but, with some time and ingenuity, it can also power air conditioners and 5G. One needs to look on the bright side, living by faith and not by sight.

The century’s inquiring minds wished to know whether this faith in progress is meaningfully different from blindness. Ranking high among those minds was the French historian, sociologist, and lay theologian Jacques Ellul, and his answer was simple: No.

In America, Ellul became best known for his book The Technological Society. The book’s signature term was “technique,” an idea he developed throughout his vast body of writing. Technique is the social structure on which modern life is built. It is the consciousness that has come to govern all human affairs, suppressing questions of ultimate human purposes and meaning. Our society no longer asks why we should do anything. All that matters anymore, Ellul argued, is how to do it — to which the canned answer is always: More efficiently! Much as a modern machine can be said to run on its own, so does the technological society. Human control of it is an illusion, which means we are on a path to self-destruction — not because the social machine will necessarily kill us (although it might), but because we are fast becoming soulless creatures.

While tech pessimists celebrated Ellul’s book as an urgent warning of impending doom, tech optimists dismissed it as alarmist exaggeration. Beneath this mixed reception lies a more difficult truth, because what on the surface looks like plain old doomsaying is in fact a highly unusual project….

But looking back on that era, optimists might think they are justified in claiming that the doomsaying was overblown. The Soviet Union fell without the bomb being dropped. No third world war has materialized, and while the world remains a dangerous place, the good guys are still winning, thanks in large part to massively efficient economies and technological supremacy. China may have more steel, but we have more guns. (Let’s not talk about the germs.) And the digital revolution, despite collateral damage, has brought a bounty of benefits we largely take for granted. So to the optimist, Ellul’s talk some seventy years ago about how we were facing a choice between suicide and freedom sounds antiquated. He was a man of his time.

So why bother? What use can we make of Ellul’s vision? Because even if we believe that our world’s most dehumanizing technological projects — from Beijing to Silicon Valley — demand a fierce defense of human dignity, why look to Ellul when we have our own productive cottage industry of critics, ethicists, theorists, and prophets? Why put up with Ellul’s abstract style and the bizarre structure of his gigantic output — the fact that one may find in any given text only half of what he actually thought about the subject, thanks to what he called his dialectical approach?…(More)”.

Shadowbanning Is Big Tech’s Big Problem


Essay by Gabriel Nicholas: “Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned. Republican politicians have been accusing Twitter of shadowbanning—that is, quietly suppressing their activity on the site—since at least 2018, when for a brief period, the service stopped autofilling the usernames of Representatives Jim Jordan, Mark Meadows, and Matt Gaetz, as well as other prominent Republicans, in its search bar. Black Lives Matter activists have been accusing TikTok of shadowbanning since 2020, when, at the height of the George Floyd protests, it sharply reduced how frequently their videos appeared on users’ “For You” pages. …When the word shadowban first appeared in the web-forum backwaters of the early 2000s, it meant something more specific. It was a way for online-community moderators to deal with trolls, shitposters, spam bots, and anyone else they deemed harmful: by making their posts invisible to everyone but the posters themselves. But throughout the 2010s, as the social web grew into the world’s primary means of sharing information and as content moderation became infinitely more complicated, the word became more common, and much more muddled. Today, people use shadowban to refer to the wide range of ways platforms may remove or reduce the visibility of their content without telling them….
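In its original, narrow sense, the mechanism is simple to express in code. Here is a minimal sketch (hypothetical names and data, not any real platform's implementation) of why the tactic worked against trolls and spam bots: the banned author still sees their own posts, so nothing looks amiss, while everyone else's feed silently omits them.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    author: str
    text: str

def visible_posts(posts: list[Post], viewer: str, shadowbanned: set[str]) -> list[Post]:
    """Return the feed as a given viewer sees it: a shadowbanned author's
    posts stay visible to that author but are hidden from everyone else."""
    return [p for p in posts if p.author not in shadowbanned or p.author == viewer]

feed = [Post(1, "alice", "hello"), Post(2, "spambot", "buy now!!!"), Post(3, "bob", "nice day")]
banned = {"spambot"}

print([p.post_id for p in visible_posts(feed, "bob", banned)])      # -> [1, 3]
print([p.post_id for p in visible_posts(feed, "spambot", banned)])  # -> [1, 2, 3]
```

Because the banned user is shown an unaltered view of their own activity, they get no signal that anything has changed, which is precisely what makes the modern, muddier usage of the word so corrosive to trust.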

According to new research I conducted at the Center for Democracy and Technology (CDT), nearly one in 10 U.S. social-media users believes they have been shadowbanned, and most often they believe it is for their political beliefs or their views on social issues. In two dozen interviews I held with people who thought they had been shadowbanned or worked with people who thought they had, I repeatedly heard users say that shadowbanning made them feel not just isolated from online discourse, but targeted, by a sort of mysterious cabal, for breaking a rule they didn’t know existed. It’s not hard to imagine what happens when social-media users believe they are victims of conspiracy…(More)”.

Governance of the Inconceivable


Essay by Lisa Margonelli: “How do scientists and policymakers work together to design governance for technologies that come with evolving and unknown risks? In the Winter 1985 Issues, seven experts reflected on the possibility of a large nuclear conflict triggering a “nuclear winter.” These experts agreed that the consequences would be horrifying: even beyond radiation effects, for example, burning cities could put enough smoke in the atmosphere to block sunlight, lowering ground temperatures and threatening people, crops, and other living things. In the same issue, former astronaut and then senator John Glenn wrote about the prospects for several nuclear nonproliferation agreements he was involved in negotiating. This broad discussion of nuclear weapons governance in Issues—involving legislators Glenn and then senator Al Gore as well as scientists, Department of Defense officials, and weapons designers—reflected the discourse of the time. In the culture at large, fears of nuclear annihilation became ubiquitous, and today you can easily find danceable playlists containing “38 Essential ’80s Songs About Nuclear Anxiety.”

But with the end of the Cold War, the breakup of the Soviet Union, and the rapid growth of a globalized economy and culture, these conversations receded from public consciousness. Issues has not run an article on nuclear weapons since 2010, when an essay argued that exaggerated fear of nuclear weapons had led to poor policy decisions. “Albert Einstein memorably proclaimed that nuclear weapons ‘have changed everything except our way of thinking,’” wrote political scientist John Mueller. “But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.”

All these old conversations suddenly became relevant again as our editorial team worked on this issue. On February 27, when Vladimir Putin ordered Russia’s nuclear weapons put on “high alert” after invading Ukraine, United Nations Secretary-General António Guterres declared that “the mere idea of a nuclear conflict is simply inconceivable.” But, in the space of a day, what had long seemed inconceivable was suddenly being very actively conceived…(More)”.

The challenges of protecting data and rights in the metaverse


Article by Urvashi Aneja: “Virtual reality systems work by capturing extensive biological data about a user’s body, including pupil dilation, eye movement, facial expressions, skin temperature, and emotional responses to stimuli. Spending just 20 minutes in a VR simulation leaves nearly 2 million unique recordings of body language.

Existing data protection frameworks are woefully inadequate for dealing with the privacy implications of these technologies. Data collection is involuntary and continuous, rendering the notion of consent almost impossible. Research also shows that users can be correctly re-identified from just five minutes of VR motion data, with all personally identifiable information stripped, by a machine learning algorithm with 95% accuracy. This type of data isn’t covered by most biometrics laws.
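The re-identification finding is easy to build intuition for: motion telemetry behaves like a fingerprint. Below is a toy reconstruction on synthetic data (invented features, not the study's actual method or data set), showing how a standard classifier can match anonymized per-session motion summaries back to the users who produced them.

```python
# Toy sketch: re-identifying users from anonymized motion summaries.
# Synthetic stand-in for VR telemetry: each user has a stable physical
# baseline (head height, arm span, head-sway amplitude) that leaks
# through session-to-session noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, sessions = 20, 30

# One stable 3-feature baseline per user, then noisy per-session samples.
baselines = rng.normal([1.60, 0.70, 0.05], [0.08, 0.05, 0.02], size=(n_users, 3))
X = np.repeat(baselines, sessions, axis=0) + rng.normal(0, 0.01, size=(n_users * sessions, 3))
y = np.repeat(np.arange(n_users), sessions)  # the "identities" to recover

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"re-identification accuracy: {clf.score(X_test, y_test):.0%}")
```

No names or account details appear anywhere in the data; the body itself is the identifier, which is why stripping "personally identifiable information" offers so little protection here.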

But a lot more than individual privacy is at stake. Such data will enable what human rights lawyer Brittan Heller has called “biometric psychography,” referring to the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences, and interests. In VR experiences, it is not only a user’s outward behavior that is captured, but also their emotional reactions to specific situations, through features such as pupil dilation or change in facial expressions…(More)”.

Time to recognize authorship of open data


Nature Editorial: “At times, it seems there’s an unstoppable momentum towards the principle that data sets should be made widely available for research purposes (also called open data). Research funders all over the world are endorsing the open data-management standards known as the FAIR principles (which ensure data are findable, accessible, interoperable and reusable). Journals are increasingly asking authors to make the underlying data behind papers accessible to their peers. Data sets are accompanied by a digital object identifier (DOI) so they can be easily found. And this citability helps researchers to get credit for the data they generate.
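That citability is already concrete: a data set's DOI resolves just like a paper's, so standard DOI content negotiation (supported by DataCite and Crossref) can turn the identifier into a formatted reference. A minimal sketch follows; the DOI in it is a placeholder, not a real data set.

```python
# Minimal sketch: resolve a dataset DOI to a formatted citation via
# DOI content negotiation.
import requests

def cite_dataset(doi: str, style: str = "apa") -> str:
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text.strip()

print(cite_dataset("10.5281/zenodo.1234567"))  # placeholder DOI
```

Requesting a machine-readable metadata type instead of a formatted bibliography entry is what makes data sets findable at scale, per the FAIR principles the editorial describes.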

But reality sometimes tells a different story. The world’s systems for evaluating science do not (yet) value openly shared data in the same way that they value outputs such as journal articles or books. Funders and research leaders who design these systems accept that there are many kinds of scientific output, but many reject the idea that there is a hierarchy among them.

In practice, those in powerful positions in science tend not to regard open data sets in the same way as publications when it comes to making hiring and promotion decisions or awarding memberships to important committees, or in national evaluation systems. The open-data revolution will stall unless this changes….

Universities, research groups, funding agencies and publishers should, together, start to consider how they could better recognize open data in their evaluation systems. They need to ask: how can those who have gone the extra mile on open data be credited appropriately?

There will always be instances in which researchers cannot be given access to human data. Data from infants, for example, are highly sensitive and need to pass stringent privacy and other tests. Moreover, making data sets accessible takes time and funding that researchers don’t always have. And researchers in low- and middle-income countries have concerns that their data could be used by researchers or businesses in high-income countries in ways that they have not consented to.

But crediting all those who contribute their knowledge to a research output is a cornerstone of science. The prevailing convention — whereby those who make their data open for researchers to use make do with acknowledgement and a citation — needs a rethink. As long as authorship on a paper is significantly more valued than data generation, this will disincentivize making data sets open. The sooner we change this, the better….(More)”.

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group based at Goldsmiths, University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.