The Battle for Attention


Article by Nathan Heller: “…For years, we have heard a litany of reasons why our capacity to pay attention is disturbingly on the wane. Technology—the buzzing, blinking pageant on our screens and in our pockets—hounds us. Modern life, forever quicker and more scattered, drives concentration away. For just as long, concerns of this variety could be put aside. Television was described as a force against attention even in the nineteen-forties. A lot of focussed, worthwhile work has taken place since then.

But alarms of late have grown more urgent. Last year, the Organization for Economic Cooperation and Development reported a huge ten-year decline in reading, math, and science performance among fifteen-year-olds globally, a third of whom cited digital distraction as an issue. Clinical presentations of attention problems have climbed (a recent study of data from the medical-software company Epic found an over-all tripling of A.D.H.D. diagnoses between 2010 and 2022, with the steepest uptick among elementary-school-age children), and college students increasingly struggle to get through books, according to their teachers, many of whom confess to feeling the same way. Film pacing has accelerated, with the average length of a shot decreasing; in music, the mean length of top-performing pop songs declined by more than a minute between 1990 and 2020. A study conducted in 2004 by the psychologist Gloria Mark found that participants kept their attention on a single screen for an average of two and a half minutes before turning it elsewhere. These days, she writes, people can pay attention to one screen for an average of only forty-seven seconds.

“Attention as a category isn’t that salient for younger folks,” Jac Mullen, a writer and a high-school teacher in New Haven, told me recently. “It takes a lot to show that how you pay attention affects the outcome—that if you focus your attention on one thing, rather than dispersing it across many things, the one thing you think is hard will become easier—but that’s a level of instruction I often find myself giving.” It’s not the students’ fault, he thinks; multitasking and its euphemism, “time management,” have become goals across the pedagogic field. The SAT was redesigned this spring to be forty-five minutes shorter, with many reading-comprehension passages trimmed to two or three sentences. Some Ivy League professors report being counselled to switch up what they’re doing every ten minutes or so to avoid falling behind their students’ churn. What appears at first to be a crisis of attention may be a narrowing of the way we interpret its value: an emergency about where—and with what goal—we look.

“In many ways, it’s the oldest question in advertising: how to get attention,” an executive named Joanne Leong told me one afternoon, in a conference room on the thirteenth floor of the midtown office of the Dentsu agency. We were speaking about a new attention market. Slides were projected on the wall, and bits of conversation rattled like half-melted ice cubes in the corridor outside. For decades, what was going on between an advertisement and its viewers was unclear: there was no consensus about what attention was or how to quantify it. “The difference now is that there’s better tech to measure it,” Leong said…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures are often, like their counterparts in Europe, contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws contain crucial loopholes that could shield companies from liability for harm caused by AI decisions — or from simply having to disclose that AI was used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by falsely accusing them of child care benefits fraud after an algorithm used to detect such fraud went awry…

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”

Russia Clones Wikipedia, Censors It, Bans Original


Article by Jules Roscoe: “Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia but which conveniently has been edited to omit things that could cast the Russian government in poor light. Real Russian Wikipedia editors used to refer to the real Wikipedia as Ruwiki; the new one is called Ruviki, has “ruwiki” in its URL, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws. 

The new articles exclude mentions of “foreign agents,” the Russian government’s designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. Prominent “foreign agents” have included a foundation created by Alexei Navalny, a famed Russian opposition leader who died in prison in February, and Memorial, an organization dedicated to preserving the memory of Soviet terror victims, which was liquidated in 2022. The news was first reported by Novaya Gazeta, an independent Russian news outlet that relocated to Latvia after Russia invaded Ukraine in 2022. It was also picked up by Signpost, a publication that follows Wikimedia goings-on.

Both Ruviki articles about these agents include disclaimers about their status as foreign agents. Navalny’s article states he is a “video blogger” known for “involvement in extremist activity or terrorism.” It is worth mentioning that his wife, Yulia Navalnaya, firmly believes he was killed. …(More)”.

AI Is a Hall of Mirrors


Essay by Meghan Houser: “Here is the paradox… First: Everything is for you. TikTok’s signature page says it, and so, in their own way, do the recommendation engines of all social media. Streaming platforms triangulate your tastes, brand “engagements” solicit feedback for a better experience next time, Google Maps asks where you want to go, Siri and Alexa wait in limbo for reply. Dating apps present our most “compatible” matches. Sacrifices in personal data pay (at least some) dividends in closer tailoring. Our phones fit our palms like lovers’ hands. Consumer goods reach us in two days or less, or, if we prefer, our mobile orders are ready when we walk into our local franchise. Touchless, frictionless, we move toward perfect inertia, skimming engineered curves in the direction of our anticipated desires.

Second: Nothing is for you. That is, you specifically, you as an individual human person, with three dimensions and password-retrieval answers that actually mean something. We all know by now that “the algorithm,” that godlike personification, is fickle. Targeted ads follow you after you buy the product. Spotify thinks lullabies are your jam because for a couple weeks one put your child to sleep. Watch a political video, get invited down the primrose path to conspiracy. The truth of aggregation, of metadata, is that the for you of it all gets its power from modeling everyone who is not, in fact, you. You are typological, a predictable deviation from the mean. The “you” that your devices know is a shadow of where your data-peers have been. Worse, the “you” that your doctor, your insurance company, or your banker knows is a shadow of your demographic peers. And sometimes the model is arrayed against you. A 2016 ProPublica investigation found that if you are Black and coming up for sentencing before a judge who relies on a criminal sentencing algorithm, you are twice as likely to be mistakenly deemed at high risk for reoffending as your white counterpart….

Whoever you are, the algorithms’ for you promise at some point rings hollow. The simple math of automation is that the more the machines are there to talk to us, the less someone else will. Get told how important your call is to us, in endless perfect repetition. Prove you’re a person to Captcha, and (if you’re like me) sometimes fail. Post a comment on TikTok or YouTube knowing that it will be swallowed by its only likely reader, the optimizing feed.

Offline, the shadow of depersonalization follows. Physical spaces are atomized and standardized into what we have long been calling brick and mortar. QR, a language readable only to the machines, proliferates. The world becomes a little less legible. Want to order at this restaurant? You need your phone as translator, as intermediary, in this its newly native land…(More)”.

Cities Are at the Forefront of AI and Civic Engagement


Article by Hollie Russon Gilman and Sarah Jacob: “…cities worldwide are already adopting AI for everyday governance needs. Buenos Aires is integrating communication with residents through Boti, an AI chatbot accessible via WhatsApp. Over 5 million residents use the chatbot every month, with some months seeing upwards of 11 million users. Boti connects residents with city services such as bike sharing, social care programs, or filing reports. Unlike other AI systems with a closed loop, Boti can connect externally to help residents with other government services. For more sensitive issues, such as domestic abuse, Boti can connect residents with a human operator. AI, in this context, offers residents a convenient means to efficiently engage with city resources and communicate with city employees.

Another example of AI improving people’s everyday lives is SomosUna, a partnership between the Inter-American Development Bank and Next2MyLife that aims to address gender-based violence in Uruguay. In response to the rise in gender-based violence during and after Covid, this initiative aims to prevent violence through a network of support and “helpers” that includes 1) training, 2) technology, and 3) a community of volunteers. The initiative will leverage AI technology to enhance its support network, advancing preventative measures and providing immediate assistance.

While AI can foster engagement, local government officials recognize that they must pre-engage the public to determine the role that AI should play in civic life across diverse cities. This pre-engagement and education will inform the ethical standards and considerations against which AI will be assessed.

The EU’s ITHACA project, for example, explores the application of AI in civic participation and local governance…(More)”… See also: AI Localism.

First post: A history of online public messaging


Article by Jeremy Reimer: “From BBS to Facebook, here’s how messaging platforms have changed over the years…

People have been leaving public messages since the first artists painted hunting scenes on cave walls. But it was the invention of electricity that forever changed the way we talked to each other. In 1844, the first message was sent via telegraph. Samuel Morse, who created the binary Morse Code decades before electronic computers were even possible, tapped out, “What hath God wrought?” It was a prophetic first post.

World War II accelerated the invention of digital computers, but they were primarily single-use machines, designed to calculate artillery firing tables or solve scientific problems. As computers got more powerful, the idea of time-sharing became attractive. Computers were expensive, and they spent most of their time idle, waiting for a user to enter keystrokes at a terminal. Time-sharing allowed many people to interact with a single computer at the same time…(More)”.

Debugging Tech Journalism


Essay by Timothy B. Lee: “A huge proportion of tech journalism is characterized by scandals, sensationalism, and shoddy research. Can we fix it?

In November, a few days after Sam Altman was fired — and then rehired — as CEO of OpenAI, Reuters reported on a letter that may have played a role in Altman’s ouster. Several staffers reportedly wrote to the board of directors warning about “a powerful artificial intelligence discovery that they said could threaten humanity.”

The discovery: an AI system called Q* that can solve grade-school math problems.

“Researchers consider math to be a frontier of generative AI development,” the Reuters journalists wrote. Large language models are “good at writing and language translation,” but “conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.”

This was a bit of a head-scratcher. Computers have been able to perform arithmetic at superhuman levels for decades. The Q* project was reportedly focused on word problems, which have historically been harder than arithmetic for computers to solve. Still, it’s not obvious that solving them would unlock human-level intelligence.

The Reuters article left readers with a vague impression that Q* could be a huge breakthrough in AI — one that might even “threaten humanity.” But it didn’t provide readers with the context to understand what Q* actually was — or to evaluate whether feverish speculation about it was justified.

For example, the Reuters article didn’t mention research OpenAI published last May describing a technique for solving math problems by breaking them down into small steps. In a December article, I dug into this and other recent research to help illuminate what OpenAI is likely working on: a framework that would enable AI systems to search through a large space of possible solutions to a problem…(More)”.

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


Article by Jordi Calvet-Bademunt and Jacob Mchangama: “Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?…In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times…(More)”.

‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute


Article by Andrew Anthony: “Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief statement on its website stating it had closed and that its research may continue elsewhere within and outside the university.

The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support.

Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argued that over time humans were likely to develop the ability to make simulations that were indistinguishable from reality, and if this was the case, it was possible that it had already happened and that we are the simulations….

Among the other ideas and movements that have emerged from the FHI are longtermism – the notion that humanity should prioritise the needs of the distant future because it theoretically contains hugely more lives than the present – and effective altruism (EA), a utilitarian approach to maximising global good.

These philosophies, which have intermarried, inspired something of a cult-like following…

Torres has come to believe that the work of the FHI and its offshoots amounts to what they call a “noxious ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as poorly worded juvenilia, regarding them instead as indicative of a brutal utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” – dysgenic is the opposite of eugenic. Bostrom wrote:

“Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).”…(More)”.

Lethal AI weapons are here: how can we control them?


Article by David Adam: “The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm…(More)”