Water Shortages in Latin America: How Can Behavioral Science Help?


Article by Juan Roa Duarte: “Today in 2024, one of Latin America’s largest cities, Bogota, is facing significant challenges due to prolonged droughts exacerbated by El Niño. As reservoir levels plummet, local governments have implemented water rationing measures to manage the crisis. However, these rationing measures have proven unsuccessful after one month of implementation—in fact, water usage increased during the first week. But why? What solution can finally help solve this crisis?

In this article, we will explore how behavioral science can help Latin American cities mitigate their water shortages—and how, surprisingly, a method my hometown Bogota used back in the ‘90s can shed some light on this current issue. We’ll also explore some modern behavioral science strategies that can be used in parallel…(More)”

The tools of global spycraft have changed


The Economist: “A few years ago intelligence analysts observed that internet-connected CCTV cameras in Taiwan and South Korea were inexplicably talking to vital parts of the Indian power grid. The strange connection turned out to be a deliberately circuitous route by which Chinese spies were communicating with malware they had previously buried deep inside crucial parts of the Indian grid (presumably to enable future sabotage). The analysts spotted it because they were scanning the internet to look for “command and control” (c2) nodes—such as cameras—that hackers use as stepping stones to their victims.

The attack was not revealed by an Indian or Western intelligence agency, but by Recorded Future, a firm in Somerville, Massachusetts. Christopher Ahlberg, its boss, claims the company has knowledge of more c2 nodes than anyone in the world. “We use that to bust Chinese and Russian intel operations constantly.” It also has billions of stolen log-in details found on the dark web (a hard-to-access part of the internet) and collects millions of images daily. “We know every UK company, every Chinese company, every Indian company,” says Mr Ahlberg. Recorded Future has 1,700 clients in 75 countries, including 47 governments.

The Chinese intrusion and its discovery were a microcosm of modern intelligence. The internet, and the devices connected to it, are everywhere, offering opportunities galore for surveillance, entrapment and covert operations. The entities monitoring it, and acting on it, are often private firms, not government agencies…(More)” See Special Issue on Watching the Watchers
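
The Economist does not detail Recorded Future’s methods, but the kind of internet-wide scanning for command-and-control infrastructure described above can be sketched in miniature. In the hypothetical Python sketch below, the hosts, ports, and “suspicious banner” heuristic are illustrative assumptions, not the firm’s actual tooling:

```python
# Hypothetical sketch: banner-grabbing scan of a list of hosts to flag
# services (such as internet-connected cameras) that could be co-opted
# as command-and-control (C2) nodes. Hosts, ports, and heuristics are
# illustrative assumptions, not Recorded Future's actual methodology.
import socket

CANDIDATE_PORTS = [80, 554, 8080]  # common web and CCTV service ports

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str | None:
    """Open a TCP connection and return whatever the service sends first."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return sock.recv(256).decode(errors="replace")
    except OSError:
        return None  # closed port, timeout, or unreachable host

def scan(hosts: list[str]) -> list[tuple[str, int, str]]:
    """Return (host, port, first banner line) for every responsive service."""
    findings = []
    for host in hosts:
        for port in CANDIDATE_PORTS:
            banner = grab_banner(host, port)
            if banner:
                findings.append((host, port, banner.splitlines()[0]))
    return findings

if __name__ == "__main__":
    # Analysts would then correlate who these devices talk to: a CCTV
    # camera exchanging traffic with a power grid is the anomaly.
    for host, port, banner in scan(["192.0.2.10", "192.0.2.11"]):  # test IPs
        print(f"{host}:{port} -> {banner}")
```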

UN adopts Chinese resolution with US support on closing the gap in access to artificial intelligence


Article by Edith Lederer: “The U.N. General Assembly adopted a Chinese-sponsored resolution with U.S. support urging wealthy developed nations to close the widening gap with poorer developing countries and ensure that they have equal opportunities to use and benefit from artificial intelligence.

The resolution approved Monday follows the March 21 adoption of the first U.N. resolution on artificial intelligence spearheaded by the United States and co-sponsored by 123 countries including China. It gave global support to the international effort to ensure that AI is “safe, secure and trustworthy” and that all nations can take advantage of it.

Adoption of the two nonbinding resolutions shows that the United States and China, rivals in many areas, are both determined to be key players in shaping the future of the powerful new technology — and have been cooperating on the first important international steps.

The adoption of both resolutions by consensus by the 193-member General Assembly shows widespread global support for their leadership on the issue.

Fu Cong, China’s U.N. ambassador, told reporters Monday that the two resolutions are complementary, with the U.S. measure being “more general” and the just-adopted one focusing on “capacity building.”

He called the Chinese resolution, which had more than 140 sponsors, “great and far-reaching,” and said, “We’re very appreciative of the positive role that the U.S. has played in this whole process.”

Nate Evans, spokesperson for the U.S. mission to the United Nations, said Tuesday that the Chinese-sponsored resolution “was negotiated so it would further the vision and approach the U.S. set out in March.”

“We worked diligently and in good faith with developing and developed countries to strengthen the text, ensuring it reaffirms safe, secure, and trustworthy AI that respects human rights, commits to digital inclusion, and advances sustainable development,” Evans said.

Fu said that AI technology is advancing extremely fast and the issue has been discussed at very senior levels, including by the U.S. and Chinese leaders.

“We do look forward to intensifying our cooperation with the United States and for that matter with all countries in the world on this issue, which … will have far-reaching implications in all dimensions,” he said…(More)”.

A lack of data hampers efforts to fix racial disparities in utility cutoffs


Article by Akielly Hu: “Each year, nearly 1.3 million households across the United States have their electricity shut off because they cannot pay their bill. Losing power inconveniences people in myriad ways, risks the health, or even the lives, of those who rely on electricity for medical devices, and poses a grave threat during a heat wave or cold snap.

Such disruptions tend to disproportionately impact Black and Hispanic families, a point underscored by a recent study that found customers of Minnesota’s largest electricity utility who live in communities of color were more than three times as likely to experience a shutoff as those in predominantly white neighborhoods. The finding, by University of Minnesota researchers, held even when accounting for income, poverty level, and homeownership.

Energy policy researchers say they consistently see similar racial disparities nationwide, but a lack of empirical data to illustrate the problem is hindering efforts to address it. Only 30 states require utilities to report disconnections, and of those, only a handful provide data revealing where they happen. As climate change brings hotter temperatures, more frequent cold snaps, and other extremes in weather, energy analysts and advocates for disadvantaged communities say understanding these disparities and providing equitable access to reliable power will become ever more important…(More)”.
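
The Minnesota finding (that the disparity held even when accounting for income, poverty level, and homeownership) is the kind of claim a regression on disconnection records can test. A minimal sketch, assuming hypothetical data and column names rather than the study’s actual dataset or specification:

```python
# Minimal sketch of the kind of analysis behind the Minnesota finding:
# a logistic regression of shutoffs on neighborhood composition while
# controlling for income, poverty level, and homeownership. All data and
# column names are simulated for illustration; the study's actual data
# and model specification may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "community_of_color": rng.integers(0, 2, n),  # 1 = majority community of color
    "income": rng.normal(55_000, 15_000, n),      # household income, USD
    "poverty_rate": rng.uniform(0.02, 0.40, n),
    "homeowner": rng.integers(0, 2, n),           # 1 = owner-occupied
})

# Simulate shutoffs with a built-in disparity (illustration only).
logit_p = (-3.0
           + 1.1 * df.community_of_color
           - 0.00001 * df.income
           + 2.0 * df.poverty_rate
           - 0.4 * df.homeowner)
df["shutoff"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "shutoff ~ community_of_color + income + poverty_rate + homeowner",
    data=df,
).fit(disp=False)

# The exponentiated coefficient is an odds ratio: a value near 3 would
# echo the "three times as likely" headline result, net of the controls.
print(np.exp(model.params["community_of_color"]))
```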

Oracles in the Machine


Essay by Zora Che: “…In sociologist Charles Cooley’s theory of the “looking-glass self,” we understand ourselves through the perceptions of others. Online, models perceive us, responding to and reinforcing the versions of ourselves that they glean from our behaviors. They sense my finger lingering, my invisible gaze made apparent by the gaps in my movements. My understanding of my digital self and my digital reality becomes a feedback loop churned by models I cannot see. Moreover, the model only “sees” me as data that can be optimized for objectives that I cannot uncover. That objective is something closer to maximizing my time spent on the digital product than to meeting my deepest needs; the latter perhaps was never a mathematical question to begin with.

Divination and algorithmic opacity both appear to bring us what we cannot see. Diviners see through what is obscure and beyond our comprehension: it may be incomprehensible pain and grief, vertiginous lack of control, and/or the unwarranted future. The opacity of divination comes from the limitations of our own knowledge. But the opacity of algorithms comes from both the algorithm itself and the socio-technical infrastructure that it was built around. Jenna Burrell writes of three layers of opacity in models: “(1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully.” As consumers of models, we interact with the first and third layers of opacity—that of platforms hiding models from us, and that of the gap between what the model is optimizing for and what may be explainable. The black-box model is an alluring oracle, interacting with us in inexplicable ways: no explanation for the daily laconic message Co-Star pushes to its users, no logic behind why you received this tarot reading while scrolling, no insight into the models behind these oracles and their objectives…(More)”.
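
The feedback loop Che describes can be made concrete with a toy model. The sketch below assumes a single invented objective, time spent on the product; it reflects no real platform’s system:

```python
# Toy illustration (not any platform's real system) of the feedback loop
# the essay describes: the model's only objective is time-on-product, so
# whatever the user lingers on is shown more, which in turn shapes what
# the user lingers on next.
import random

TOPICS = ["tarot", "astrology", "news", "cooking"]
weights = {t: 1.0 for t in TOPICS}  # the model's picture of "you"

def recommend() -> str:
    """Sample a topic in proportion to current engagement weights."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def observe(topic: str, seconds_lingered: float) -> None:
    """The finger lingering IS the signal: longer dwell, bigger weight."""
    weights[topic] += 0.1 * seconds_lingered

for step in range(100):
    shown = recommend()
    # Assume dwell time itself drifts toward what is shown most often,
    # closing the loop between the model's perception and my behavior.
    dwell = random.uniform(0, 2) * weights[shown]
    observe(shown, dwell)

print(max(weights, key=weights.get))  # the "self" the model has settled on
```

After a few iterations the loop converges on whichever topic happened to gather early dwell time, which is the essay’s point: the “self” the model reflects back is partly the model’s own artifact.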

How Philanthropy Can Make Sure Data Is Used to Help — Not Harm


Article by Ryan Merkley: “We are living in an extractive data economy. Every day, people generate a firehose of new data on hundreds of apps and services. These data are often sold by data brokers indiscriminately, embedded into user profiles for ad targeting, and used to train large language models such as ChatGPT. Communities and individuals should benefit from data made by and about them, but they don’t.

That needs to change. A report released last month by the Aspen Institute, where I work, calls on foundations and other donors to lead the way in addressing these disparities and promoting responsible uses of data in their own practices and in the work of grantees. Among other things, it suggests that funders encourage grantees to make sure their data accurately represents the communities they serve and support their efforts to make that data available and accessible to constituents…(More)”.

Preparing Researchers for an Era of Freer Information


Article by Peter W.B. Phillips: “If you Google my name along with “Monsanto,” you will find a series of allegations from 2013 that my scholarly work at the University of Saskatchewan, focused on technological change in the global food system, had been unduly influenced by corporations. The allegations made use of seven freedom of information (FOI) requests. Although leadership at my university determined that my publications were consistent with university policy, the ensuing media attention, I feel, has led some colleagues, students, and partners to distance themselves to avoid being implicated by association.

In the years since, I’ve realized that my experience is not unique. I have communicated with other academics who have experienced similar FOI requests related to genetically modified organisms in the United States, Canada, England, the Netherlands, and Brazil. And my field is not the only one affected: a 2015 Union of Concerned Scientists report documented requests in multiple states and disciplines—from history to climate science to epidemiology—as well as across ideologies. In the University of California system alone, researchers have received open records requests related to research on the health effects of toxic chemicals, the safety of abortions performed by clinicians rather than doctors, and green energy production infrastructure. These requests are made possible by laws that permit anyone, for any reason, to gain access to public agencies’ records.

These open records campaigns, which are conducted by individuals and groups across the political spectrum, arise in part from the confluence of two unrelated phenomena: the changing nature of academic research toward more translational, interdisciplinary, and team-based investigations, and the push for more transparency in taxpayer-funded institutions. Neither phenomenon is inherently negative; in fact, there are strong advantages for science and society in both trends. But problems arise when scholars are caught between them—affecting the individuals involved and potentially influencing the ongoing conduct of research…(More)”

Not all ‘open source’ AI models are actually open: here’s a ranking


Article by Elizabeth Gibney: “Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models.

The definition of open source when it comes to AI models is not yet agreed, but advocates say that ’full’ openness boosts science, and is crucial for efforts to make AI accountable. What counts as open source is likely to take on increased importance when the European Union’s Artificial Intelligence Act comes into force. The legislation will apply less strict regulations to models that are classed as open.

Some big firms are reaping the benefits of claiming to have open-source models, while trying “to get away with disclosing as little as possible”, says Mark Dingemanse, a language scientist at Radboud University in Nijmegen, the Netherlands. This practice is known as open-washing.

“To our surprise, it was the small players, with relatively few resources, that go the extra mile,” says Dingemanse, who together with his colleague Andreas Liesenfeld, a computational linguist, created a league table that identifies the most and least open models (see table). They published their findings on 5 June in the conference proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency…(More)”.
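
Dingemanse and Liesenfeld’s league table scores models along multiple dimensions of openness. The sketch below illustrates the general scoring idea only; the dimensions, scores, and model entries are invented, not the paper’s actual rubric or findings:

```python
# Hypothetical sketch of how a league table of model openness could be
# scored. The dimensions and model entries below are illustrative
# assumptions, not the paper's actual rubric or results.
DIMENSIONS = ["source_code", "model_weights", "training_data",
              "preprint_or_paper", "license_terms"]

# 1.0 = fully open, 0.5 = partially open, 0.0 = closed
models = {
    "SmallLab-LM": {"source_code": 1.0, "model_weights": 1.0,
                    "training_data": 1.0, "preprint_or_paper": 1.0,
                    "license_terms": 1.0},
    "BigCorp-Chat": {"source_code": 0.5, "model_weights": 1.0,
                     "training_data": 0.0, "preprint_or_paper": 0.5,
                     "license_terms": 0.5},
}

def openness_score(scores: dict[str, float]) -> float:
    """Average per-dimension scores into a single 0-1 openness index."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Rank models from most to least open, as a league table would.
for name, scores in sorted(models.items(),
                           key=lambda kv: openness_score(kv[1]),
                           reverse=True):
    print(f"{name:12s} {openness_score(scores):.2f}")
```

A claim of “open source” that releases weights but withholds training data and code would score poorly on such an index, which is the open-washing pattern the authors describe.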

Artificial Intelligence Is Making The Housing Crisis Worse


Article by Rebecca Burns: “When Chris Robinson applied to move into a California senior living community five years ago, the property manager ran his name through an automated screening program that reportedly used artificial intelligence to detect “higher-risk renters.” Robinson, then 75, was denied after the program assigned him a low score — one that he later learned was based on a past conviction for littering.

Not only did the crime have little bearing on whether Robinson would be a good tenant, it wasn’t even one that he’d committed. The program had turned up the case of a 33-year-old man with the same name in Texas — where Robinson had never lived. He eventually corrected the error but lost the apartment and his application fee nonetheless, according to a federal class-action lawsuit that moved towards settlement this month. The credit bureau TransUnion, one of the largest actors in the multi-billion-dollar tenant screening industry, agreed to pay $11.5 million to resolve claims that its programs violated fair credit reporting laws.

Landlords are increasingly turning to private equity-backed artificial intelligence (AI) screening programs to help them select tenants, and resulting cases like Robinson’s are just the tip of the iceberg. The prevalence of incorrect, outdated, or misleading information in such reports is increasing costs and barriers to housing, according to a recent report from federal consumer regulators.

Even when screening programs turn up real data, housing and privacy advocates warn that opaque algorithms are enshrining high-tech discrimination in an already unequal housing market — the latest example of how AI can end up amplifying existing biases…(More)”.
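
The failure at the heart of Robinson’s case, matching court records on name alone, is simple to reproduce. A minimal sketch with fabricated records (not TransUnion’s actual matching logic):

```python
# Illustration of the failure mode in Robinson's case: matching public
# records on name alone conflates different people. All records here
# are fabricated for the example.
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    birth_year: int
    state: str
    offense: str

applicant = Record("Chris Robinson", 1944, "CA", offense="")
court_records = [
    Record("Chris Robinson", 1986, "TX", "littering"),  # a different person
    Record("Dana Smith", 1971, "CA", "parking violation"),
]

def match_name_only(applicant: Record, records: list[Record]) -> list[Record]:
    """Naive screening: any record with the same name counts as a hit."""
    return [r for r in records if r.name == applicant.name]

def match_strict(applicant: Record, records: list[Record]) -> list[Record]:
    """Safer screening: require name, birth year, and state to agree."""
    return [r for r in records
            if (r.name, r.birth_year, r.state)
            == (applicant.name, applicant.birth_year, applicant.state)]

print(match_name_only(applicant, court_records))  # false positive from Texas
print(match_strict(applicant, court_records))     # [] means no true match
```

Requiring additional identifiers such as birth year and jurisdiction eliminates the false hit, which is why consumer advocates press screening firms to match on more than a name.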

What the Arrival of A.I. Phones and Computers Means for Our Data


Article by Brian X. Chen: “Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.

But to make that work, these companies need something from you: more data.

In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.

Is this information you are willing to share?

This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.

“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.

All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.

The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.

The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers…(More)”.
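
The tradeoff the article describes, on-device processing versus the cloud, amounts to a routing decision. A hypothetical sketch with invented compute budgets and task costs:

```python
# Hypothetical sketch of the routing decision the article describes:
# if a request exceeds what the device can compute locally, the data
# leaves the phone for a cloud server. Budgets and task costs are
# invented for illustration.
from dataclasses import dataclass

ON_DEVICE_BUDGET = 4.0  # arbitrary units of available local compute

@dataclass
class Task:
    name: str
    compute_cost: float  # estimated cost in the same arbitrary units
    payload: str         # the personal data the task needs

def route(task: Task) -> str:
    """Run locally when possible; otherwise ship the payload to the cloud."""
    if task.compute_cost <= ON_DEVICE_BUDGET:
        return f"{task.name}: processed on device; payload never leaves phone"
    # Once the payload is sent, it is visible to the server operator and
    # subject to that operator's retention and access policies.
    return f"{task.name}: sent to cloud ({len(task.payload)} bytes exposed)"

tasks = [
    Task("autocorrect", 0.5, "typed sentence"),
    Task("scrub object from photo", 12.0, "full-resolution photo"),
]
for t in tasks:
    print(route(t))
```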