AI Is a Hall of Mirrors


Essay by Meghan Houser: “Here is the paradox… First: Everything is for you. TikTok’s signature page says it, and so, in their own way, do the recommendation engines of all social media. Streaming platforms triangulate your tastes, brand “engagements” solicit feedback for a better experience next time, Google Maps asks where you want to go, Siri and Alexa wait in limbo for a reply. Dating apps present our most “compatible” matches. Sacrifices in personal data pay (at least some) dividends in closer tailoring. Our phones fit our palms like lovers’ hands. Consumer goods reach us in two days or less, or, if we prefer, our mobile orders are ready when we walk into our local franchise. Touchless, frictionless, we move toward perfect inertia, skimming engineered curves in the direction of our anticipated desires.

Second: Nothing is for you. That is, you specifically, you as an individual human person, with three dimensions and password-retrieval answers that actually mean something. We all know by now that “the algorithm,” that godlike personification, is fickle. Targeted ads follow you after you buy the product. Spotify thinks lullabies are your jam because for a couple weeks one put your child to sleep. Watch a political video, get invited down the primrose path to conspiracy. The truth of aggregation, of metadata, is that the for you of it all gets its power from modeling everyone who is not, in fact, you. You are typological, a predictable deviation from the mean. The “you” that your devices know is a shadow of where your data-peers have been. Worse, the “you” that your doctor, your insurance company, or your banker knows is a shadow of your demographic peers. And sometimes the model is arrayed against you. A 2016 ProPublica investigation found that if you are Black and coming up for sentencing before a judge who relies on a criminal sentencing algorithm, you are twice as likely to be mistakenly deemed at high risk for reoffending as your white counterpart…(More)”.

Whoever you are, the algorithms’ for you promise at some point rings hollow. The simple math of automation is that the more the machines are there to talk to us, the less someone else will. Get told how important your call is to us, in endless perfect repetition. Prove you’re a person to Captcha, and (if you’re like me) sometimes fail. Post a comment on TikTok or YouTube knowing that it will be swallowed by its only likely reader, the optimizing feed.

Offline, the shadow of depersonalization follows. Physical spaces are atomized and standardized into what we have long been calling brick and mortar. QR, a language readable only to the machines, proliferates. The world becomes a little less legible. Want to order at this restaurant? You need your phone as translator, as intermediary, in this its newly native land…(More)”.

Cities Are at the Forefront of AI and Civic Engagement


Article by Hollie Russon Gilman and Sarah Jacob: “…cities worldwide are already adopting AI for everyday governance needs. Buenos Aires is integrating communication with residents through Boti, an AI chatbot accessible via WhatsApp. Over 5 million residents use the chatbot every month, with some months seeing upwards of 11 million users. Boti connects residents with city services such as bike sharing or social care programs, or lets them file reports. Unlike other AI systems with a closed loop, Boti can connect externally to help residents with other government services. For more sensitive issues, such as domestic abuse, Boti can connect residents with a human operator. AI, in this context, offers residents a convenient means to efficiently engage with city resources and communicate with city employees.

Another example of AI improving people’s everyday lives is SomosUna, a partnership between the Inter-American Development Bank and Next2MyLife that aims to address gender-based violence in Uruguay. In response to the rise in gender-based violence during and after Covid, the initiative seeks to prevent violence through a network of support and “helpers” that includes 1) training, 2) technology and 3) a community of volunteers. This initiative will leverage AI technology to enhance its support network, advancing preventative measures and providing immediate assistance.

While AI can foster engagement, local government officials recognize that they must pre-engage the public to determine the role that AI should play in civic life across diverse cities. This pre-engagement and education will inform the ethical standards and considerations against which AI will be assessed.

The EU’s ITHACA project, for example, explores the application of AI in civic participation and local governance…(More)”… See also: AI Localism.

First post: A history of online public messaging


Article by Jeremy Reimer: “From BBS to Facebook, here’s how messaging platforms have changed over the years…

People have been leaving public messages since the first artists painted hunting scenes on cave walls. But it was the invention of electricity that forever changed the way we talked to each other. In 1844, the first message was sent via telegraph. Samuel Morse, who created the binary Morse Code decades before electronic computers were even possible, tapped out, “What hath God wrought?” It was a prophetic first post.

World War II accelerated the invention of digital computers, but they were primarily single-use machines, designed to calculate artillery firing tables or solve scientific problems. As computers got more powerful, the idea of time-sharing became attractive. Computers were expensive, and they spent most of their time idle, waiting for a user to enter keystrokes at a terminal. Time-sharing allowed many people to interact with a single computer at the same time…(More)”.

Debugging Tech Journalism


Essay by Timothy B. Lee: “A huge proportion of tech journalism is characterized by scandals, sensationalism, and shoddy research. Can we fix it?

In November, a few days after Sam Altman was fired — and then rehired — as CEO of OpenAI, Reuters reported on a letter that may have played a role in Altman’s ouster. Several staffers reportedly wrote to the board of directors warning about “a powerful artificial intelligence discovery that they said could threaten humanity.”

The discovery: an AI system called Q* that can solve grade-school math problems.

“Researchers consider math to be a frontier of generative AI development,” the Reuters journalists wrote. Large language models are “good at writing and language translation,” but “conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.”

This was a bit of a head-scratcher. Computers have been able to perform arithmetic at superhuman levels for decades. The Q* project was reportedly focused on word problems, which have historically been harder than arithmetic for computers to solve. Still, it’s not obvious that solving them would unlock human-level intelligence.

The Reuters article left readers with a vague impression that Q* could be a huge breakthrough in AI — one that might even “threaten humanity.” But it didn’t provide readers with the context to understand what Q* actually was — or to evaluate whether feverish speculation about it was justified.

For example, the Reuters article didn’t mention research OpenAI published last May describing a technique for solving math problems by breaking them down into small steps. In a December article, I dug into this and other recent research to help illuminate what OpenAI is likely working on: a framework that would enable AI systems to search through a large space of possible solutions to a problem…(More)”.
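The technique described above is easier to grasp with a toy model. The sketch below is purely illustrative: the function names, the scoring rule, and the example problem are my own assumptions, not OpenAI’s method. It scores candidate multi-step solutions by verifying each small arithmetic step, then keeps the best-scoring chain, a miniature version of searching a space of possible solutions:

```python
# Toy sketch of "break a math problem into small, checkable steps."
# Not OpenAI's actual system: a hypothetical verifier plus a tiny search
# over candidate solution chains.
import re

def step_is_valid(step: str) -> bool:
    """Verify one step of the form '<expr> = <number>' (+, -, * only)."""
    m = re.fullmatch(r"([\d\s+\-*]+)=\s*(-?\d+)\s*", step)
    if not m:
        return False
    try:
        # eval is safe here: the regex admits only digits, spaces, + - *
        return eval(m.group(1)) == int(m.group(2))
    except SyntaxError:
        return False

def solution_score(steps: list[str]) -> float:
    """Fraction of steps that verify; 1.0 means the whole chain checks out."""
    return sum(step_is_valid(s) for s in steps) / len(steps)

# Two candidate solutions to: "Ann has 3 apples, buys 4 more, eats 2."
candidates = [
    ["3 + 4 = 7", "7 - 2 = 5"],  # every step verifies
    ["3 + 4 = 8", "8 - 2 = 6"],  # wrong first step
]
best = max(candidates, key=solution_score)
print(best)  # the fully verified chain wins
```

A real system would replace the regex verifier with a learned reward model and the two hand-written candidates with thousands of sampled solution paths, but the shape of the search is the same.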

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


Article by Jordi Calvet-Bademunt and Jacob Mchangama: “Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?…In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times…(More)”.

‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute


Article by Andrew Anthony: “Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief statement on its website stating it had closed and that its research may continue elsewhere within and outside the university.

The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support.

Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argued that over time humans were likely to develop the ability to make simulations that were indistinguishable from reality, and if this was the case, it was possible that it had already happened and that we are the simulations….

Among the other ideas and movements that have emerged from the FHI are longtermism – the notion that humanity should prioritise the needs of the distant future because it theoretically contains hugely more lives than the present – and effective altruism (EA), a utilitarian approach to maximising global good.

These philosophies, which have intermarried, inspired something of a cult-like following,…

Torres has come to believe that the work of the FHI and its offshoots amounts to what they call a “noxious ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as poorly worded juvenilia, seeing them instead as indicative of a brutal utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” – dysgenic is the opposite of eugenic. Bostrom wrote:

“Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).”…(More)”.

Lethal AI weapons are here: how can we control them?


Article by David Adam: “The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm…(More)”

What is ‘lived experience’?


Article by Patrick J Casey: “Everywhere you turn, there is talk of lived experience. But there is little consensus about what the phrase ‘lived experience’ means, where it came from, and whether it has any value. Although long used by academics, it has become ubiquitous, leaping out of the ivory tower and showing up in activism, government, consulting, as well as popular culture. The Lived Experience Leaders Movement explains that those who have lived experiences have ‘[d]irect, first-hand experience, past or present, of a social issue(s) and/or injustice(s)’. A recent brief from the US Department of Health and Human Services suggests that those who have lived experience have ‘valuable and unique expertise’ that should be consulted in policy work, since engaging those with ‘knowledge based on [their] perspective, personal identities, and history’ can ‘help break down power dynamics’ and advance equity. A search of Twitter reveals a constant stream of use, from assertions like ‘Your research doesn’t override my lived experience,’ to ‘I’m pretty sure you’re not allowed to question someone’s lived experience.’

A recurring theme is a connection between lived experience and identity. A recent nominee for the US Secretary of Labor, Julie Su, is lauded as someone who will ‘bring her lived experience as a daughter of immigrants, a woman of color, and an Asian American to the role’. The Human Rights Campaign asserts that ‘[l]aws and legislation must reflect the lived experiences of LGBTQ people’. An editorial in Nature Mental Health notes that incorporation of ‘people with lived experience’ has ‘taken on the status of a movement’ in the field.

Carried a step further, the notion of lived experience is bound up with what is often called identity politics, as when one claims to be speaking from the standpoint of an identity group – ‘in my lived experience as a…’ or, simply, ‘speaking as a…’ Here, lived experience is often invoked to establish authority and prompt deference from others since, purportedly, only members of a shared identity know what it’s like to have certain kinds of experience or to be a member of that group. Outsiders sense that they shouldn’t criticise what is said because, grounded in lived experience, ‘people’s spoken truths are, in and of themselves, truths.’ Criticism of lived experience might be taken to invalidate or dehumanise others or make them feel unsafe.

So, what is lived experience? Where did it come from? And what does it have to do with identity politics?…(More)”.

AI-Powered World Health Chatbot Is Flubbing Some Answers


Article by Jessica Nix: “The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.

(Image: The WHO’s artificial intelligence tool provides public health information via a lifelike avatar. Source: Bloomberg)

SARAH doesn’t have a diagnostic feature like WebMD or Google. In fact, the bot is programmed to not talk about anything outside of the WHO’s purview, including questions on specific drugs. So SARAH often sends people to a WHO website or says that users should “consult with your health-care provider.”

“It lacks depth,” Ramin Javan, a radiologist and researcher at George Washington University, said. “But I think it’s because they just don’t want to overstep their boundaries and this is just the first step.”…(More)”.

We Need To Rewild The Internet


Article by Maria Farrell and Robin Berjon: “In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.

So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.

Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know what happened next.

It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered…(More)”.