The Modem World: A Prehistory of Social Media


Book by Kevin Driscoll: “Fifteen years before the commercialization of the internet, millions of amateurs across North America created more than 100,000 small-scale computer networks. The people who built and maintained these dial-up bulletin board systems (BBSs) in the 1980s laid the groundwork for millions of others who would bring their lives online in the 1990s and beyond. From ham radio operators to HIV/AIDS activists, these modem enthusiasts developed novel forms of community moderation, governance, and commercialization. The Modem World tells an alternative origin story for social media, centered not in the office parks of Silicon Valley or the meeting rooms of military contractors, but rather on the online communities of hobbyists, activists, and entrepreneurs. Over time, countless social media platforms have appropriated the social and technical innovations of the BBS community. How can these untold stories from the internet’s past inspire more inclusive visions of its future?…(More)”.

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be published throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group based at Goldsmiths, University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Why AI Failed to Live Up to Its Potential During the Pandemic


Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad, specific ways that underscore where this technology is still weak: bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.
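Lesson 1 is more concrete than it may sound: during the pandemic, sources reporting the same epidemic often used incompatible column names and date conventions. As a hedged illustration (all column names and figures below are invented), a minimal pandas sketch of harmonizing two such feeds before merging:

```python
import pandas as pd

# Hedged sketch of lesson 1: two invented sources report the same
# epidemic, but with different column names and date conventions.
hospital = pd.DataFrame({
    "report_date": ["2021-03-01", "2021-03-02"],
    "confirmed": [120, 135],
})
lab = pd.DataFrame({
    "date": ["01/03/2021", "02/03/2021"],  # day-first format
    "positive_tests": [98, 110],
})

# Harmonize the schemas before merging.
hospital["date"] = pd.to_datetime(hospital["report_date"])
lab["date"] = pd.to_datetime(lab["date"], dayfirst=True)

# An outer join keeps days that only one source reported.
merged = hospital[["date", "confirmed"]].merge(
    lab[["date", "positive_tests"]], on="date", how="outer"
)
print(merged)
```

Multiply this by hundreds of jurisdictions and reporting formats, and the scale of the data-assembly problem the essay describes becomes clear.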

Research Handbook of Policy Design


Handbook edited by B. G. Peters and Guillaume Fontaine: “…The difference between policy design and policy making lies in the degree of encompassing consciousness involved in designing, which includes policy formulation, implementation and evaluation. Consequently there are differences in degrees of consciousness within the same kind of activity, from the simplest expression of “non-design”, which refers to the absence of clear intention or purpose, to “re-design”, which is the most common, incremental way to proceed, to “full design”, which suggests an attempt by government or some other controlling actor to control the entire process. There are also differences in kind, from program design (at the micro-level of intervention) to singular policy design, to meta-design when dealing with complex problems that require cross-sectoral coordination. Finally, there are different forms or expressions (technical, political, ideological) and different patterns (transfer, innovation, accident or experiment) of policy design.
Unlike other forms of design, such as engineering or architecture, policy design exhibits specific features because of the social nature of policy targeting and modulation, which involves humans as objects and subjects with their values, conflicts, and other characteristics (Peters, 2018, p. 5). Thus, policy design is the attempt to integrate different understandings of a policy problem with different conceptions of the policy instruments to be utilized, and the different values according to which a government assesses the outcomes pursued by this policy as expected, satisfactory, acceptable, and so forth. Those three components of design – causation, instruments and values – must then be combined to create a coherent plan for intervention. We will define this fourth component of design as “intervention”, meaning that there must be some strategic sense of how to make the newly designed policy work. This component requires not only an understanding of the specific policy being designed but also of how that policy will mesh with the array of policies already operating. Thus, there is the need to think about some “meta-design” issues about coordination and coherence, as well as the usual challenges of implementation…(More)”.

How Smart Tech Tried to Solve the Mental Health Crisis and Only Made It Worse


Article by Emma Bedor Hiland: “Crisis Text Line was supposed to be the exception. Skyrocketing rates of depression, anxiety, and mental distress over the last decade demanded new, innovative solutions. The non-profit organization was founded in 2013 with the mission of providing free mental health text messaging services and crisis intervention tools. It seemed like the right moment to use technology to make the world a better place. Over the following years, the accolades and praise the platform received reflected its success. But its sterling reputation was tarnished overnight at the beginning of 2022, when Politico published an investigation into the way Crisis Text Line had handled and shared user data. The problem with the organization, however, goes well beyond its alleged mishandling of user information.

Despite Crisis Text Line’s assurance that its platform was anonymous, Politico’s January report showed that the company’s private messaging sessions were not actually anonymous. Data about users, including what they shared with Crisis Text Line’s volunteers, had been provided and sold to an entirely different company called Loris.ai, a tech startup that specializes in artificial intelligence software for human resources and customer service. The report brought to light a troubling relationship between the two organizations. Both had previously been headed by the same CEO, Nancy Lublin. In 2019, however, Lublin had stepped down from Loris, and in 2020 Crisis Text Line’s board ousted her following allegations that she had engaged in workplace racism.

But the troubles that enveloped Crisis Text Line can’t be blamed on one bad apple. Crisis Text Line’s board of directors had approved the relationship between the entities. In the technology and big data sectors, commodification of user data is fundamental to a platform or toolset’s economic survival, and by sharing data with Loris.ai, Crisis Text Line was able to provide needed services. The harsh reality revealed by the Politico report was that even mental healthcare is not immune from commodification, despite the risks of aggregating and sharing information about experiences and topics which continue to be stigmatized.

In the case of the Crisis Text Line-Loris.ai partnership, Loris used the nonprofit’s data to improve its own, for-profit development of machine learning algorithms sold to corporations and governments. Although Crisis Text Line maintains that all of the data shared with Loris was anonymized, the transactional nature of the relationship between the two was still fundamentally an economic one. As the Loris.ai website states, “Crisis Text Line is a Loris shareholder. Our success offers material benefit to CTL, helping this non-profit organization continue its important work. We believe this model is a blueprint for ways for-profit companies can infuse social good into their culture and operations, and for nonprofits to prosper.”…(More)”.

Better data for better therapies: The case for building health data platforms


Paper by Matthias Evers, Lucy Pérez, Lucas Robke, and Katarzyna Smietana: “Despite expanding development pipelines, many pharmaceutical companies find themselves focusing on the same limited number of derisked areas and mechanisms of action in, for example, immuno-oncology. This “herding” reflects the challenges of advancing understanding of disease and hence of developing novel therapeutic approaches. The full promise of innovation from data, AI, and ML has not yet materialized.

It is increasingly evident that one of the main reasons for this is insufficient high-quality, interconnected human data that go beyond just genes and corresponding phenotypes—the data needed by scientists to form concepts and hypotheses and by computing systems to uncover patterns too complex for scientists to understand. Only such high-quality human data would allow deployment of AI and ML, combined with human ingenuity, to unravel disease biology and open up new frontiers to prevention and cure. Here, therefore, we suggest a way of overcoming the data impediment and moving toward a systematic, nonreductionist approach to disease understanding and drug development: the establishment of trusted, large-scale platforms that collect and store the health data of volunteering participants. Importantly, such platforms would allow participants to make informed decisions about who could access and use their information to improve the understanding of disease….(More)”.
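What “participants decide who may access their data” could mean in software terms: a minimal, hypothetical sketch of consent-gated queries (all class and method names are invented; a real platform would add de-identification, audit trails, and revocation workflows on top of this):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of consent-gated access on a health data
# platform: participants, not the platform, decide which requesters
# may use their records. All names here are invented.

@dataclass
class Participant:
    participant_id: str
    records: dict
    consented_to: set = field(default_factory=set)

    def grant(self, requester: str) -> None:
        self.consented_to.add(requester)

    def revoke(self, requester: str) -> None:
        self.consented_to.discard(requester)

class HealthDataPlatform:
    def __init__(self) -> None:
        self._participants: dict[str, Participant] = {}

    def enroll(self, participant: Participant) -> None:
        self._participants[participant.participant_id] = participant

    def query(self, requester: str) -> list:
        # Return only records whose owners consented to this requester.
        return [
            p.records
            for p in self._participants.values()
            if requester in p.consented_to
        ]

platform = HealthDataPlatform()
alice = Participant("p-001", {"hba1c": 5.4})
alice.grant("diabetes-study-2024")
platform.enroll(alice)
print(platform.query("diabetes-study-2024"))  # [{'hba1c': 5.4}]
print(platform.query("ad-network"))           # []
```

The design choice the paper argues for is visible even in this caricature: access control lives with the participant’s record, not in a bilateral contract between the platform and whoever is buying the data.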

From “democratic erosion” to “a conversation among equals”


Paper by Roberto Gargarella: “In recent years, legal and political doctrinaires have been confusing the democratic crisis that is affecting most of our countries with a mere crisis of constitutionalism (i.e., a crisis in the way our system of “checks and balances” works). Predictably, the result of this “diagnostic error” is that legal and political doctrinaires began to propose the wrong remedies for the democratic crisis. Usually, they began advocating for the “restoration” of the old system of “internal controls” or “checks and balances”, without paying attention to the democratic aspects of the crisis that would require, instead, the strengthening of “popular” controls and participatory mechanisms that favor the gradual emergence of a “conversation among equals”. In this work, I focus my attention on certain institutional alternatives – citizens’ assemblies and the like – that may help us overcome the present democratic crisis. In particular, I examine the recent practice of citizens’ assemblies and evaluate their functioning…(More)”.

A.I. Is Mastering Language. Should We Trust What It Says?


Steven Johnson at the New York Times: “You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. The title suggested a story about a promising — but also potentially dangerous — new technology on the cusp of becoming mainstream, and after reading only a few sentences, you find yourself pulled into the story. A revolution is coming in machine intelligence, the author argues, and we need, as a society, to get better at anticipating its consequences. But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first .

The missing word jumps into your consciousness almost unbidden: ‘‘the very last word of the first paragraph.’’ There’s no sense of an internal search query in your mind; the word ‘‘paragraph’’ just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.

Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like ‘‘stochastic gradient descent’’ and ‘‘convolutional neural networks.’’ The whole system is believed to be one of the most powerful supercomputers on the planet.

And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.…(More)”.
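The “game” the supercomputer is playing is next-word prediction. A toy sketch of the same objective, built from bigram counts over an invented corpus (nothing like the neural networks the article describes, but the task, guess the missing word, is identical):

```python
from collections import Counter, defaultdict

# Toy version of the game: learn bigram counts from a tiny invented
# corpus, then "guess the missing word" given the word before it.
# Real models learn billions of parameters by stochastic gradient
# descent; the objective, however, is this same fill-in-the-blank.
corpus = (
    "you omit the very last word of the first paragraph "
    "and the reader fills in the missing word of the paragraph"
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def guess_missing_word(prev_word: str) -> str:
    """Return the continuation seen most often after prev_word."""
    candidates = bigrams.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(guess_missing_word("first"))  # -> 'paragraph'
print(guess_missing_word("last"))   # -> 'word'
```

Scale the lookup table up to hundreds of billions of learned weights and a web-sized corpus, and you have, in caricature, the system the article describes.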

Access Rules: Freeing Data from Big Tech for a Better Future


Book by Viktor Mayer-Schönberger and Thomas Ramge: “Information is power, and the time is now for digital liberation. Access Rules mounts a strong and hopeful argument for how informational tools at present in the hands of a few could instead become empowering machines for everyone. By forcing data-hoarding companies to open access to their data, we can reinvigorate both our economy and our society. Authors Viktor Mayer-Schönberger and Thomas Ramge contend that if we disrupt monopoly power and create a level playing field, digital innovations can emerge to benefit us all.

Over the past twenty years, Big Tech has managed to centralize the most relevant data on their servers, as data has become the most important raw material for innovation. However, dominant oligopolists like Facebook, Amazon, and Google, in contrast with their reputation as digital pioneers, are actually slowing down innovation and progress by withholding data for the benefit of their shareholders––at the expense of customers, the economy, and society. As Access Rules compellingly argues, ultimately it is up to us to force information giants, wherever they are located, to open their treasure troves of data to others. In order for us to limit global warming, contain a virus like COVID-19, or successfully fight poverty, everyone—including citizens and scientists, start-ups and established companies, as well as the public sector and NGOs—must have access to data. When everyone has access to the informational riches of the data age, the nature of digital power will change. Information technology will find its way back to its original purpose: empowering all of us to use information so we can thrive as individuals and as societies….(More)”.