A lack of data hampers efforts to fix racial disparities in utility cutoffs


Article by Akielly Hu: “Each year, nearly 1.3 million households across the country have their electricity shut off because they cannot pay their bill. Beyond inconveniencing people in myriad ways and risking the health, or even lives, of those who need that energy to power medical devices, losing power poses a grave threat during a heat wave or cold snap.

Such disruptions tend to disproportionately impact Black and Hispanic families, a point underscored by a recent study that found customers of Minnesota’s largest electricity utility who live in communities of color were more than three times as likely to experience a shutoff as those in predominantly white neighborhoods. The finding, by University of Minnesota researchers, held even when accounting for income, poverty level, and homeownership.

Energy policy researchers say they consistently see similar racial disparities nationwide, but a lack of empirical data to illustrate the problem is hindering efforts to address it. Only 30 states require utilities to report disconnections, and of those, only a handful provide data revealing where they happen. As climate change brings hotter temperatures, more frequent cold snaps, and other extremes in weather, energy analysts and advocates for disadvantaged communities say understanding these disparities and providing equitable access to reliable power will become ever more important…(More)”.

Oracles in the Machine


Essay by Zora Che: “…In sociologist Charles Cooley’s theory of the “looking-glass self,” we understand ourselves through the perceptions of others. Online, models perceive us, responding to and reinforcing the versions of ourselves that they glean from our behaviors. They sense my finger lingering, my invisible gaze made apparent by the gaps in my movements. My understanding of my digital self and my digital reality becomes a feedback loop churned by models I cannot see. Moreover, the model only “sees” me as data that can be optimized for objectives that I cannot uncover. That objective is something closer to maximizing my time spent on the digital product than to holding my deepest needs; the latter perhaps was never a mathematical question to begin with.

Divination and algorithmic opacity both appear to bring us what we cannot see. Diviners see through what is obscure and beyond our comprehension: it may be incomprehensible pain and grief, a vertiginous lack of control, and/or an unwarranted future. The opacity of divination comes from the limitations of our own knowledge. But the opacity of algorithms comes from both the algorithm itself and the socio-technical infrastructure that it was built around. Jenna Burrell writes of three layers of opacity in models: “(1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully.” As consumers of models, we interact with the first and third layers of opacity―that of platforms hiding models from us, and that of the gap between what the model is optimizing for and what may be explainable. The black-box model is an alluring oracle, interacting with us in inexplicable ways: no explanation for the daily laconic message Co-Star pushes to its users, no logic behind why you received this tarot reading while scrolling, no insight into the models behind these oracles and their objectives…(More)”.

How Philanthropy Can Make Sure Data Is Used to Help — Not Harm


Article by Ryan Merkley: “We are living in an extractive data economy. Every day, people generate a firehose of new data on hundreds of apps and services. These data are often sold by data brokers indiscriminately, embedded into user profiles for ad targeting, and used to train large language models such as ChatGPT. Communities and individuals should benefit from data made by and about them, but they don’t.

That needs to change. A report released last month by the Aspen Institute, where I work, calls on foundations and other donors to lead the way in addressing these disparities and promoting responsible uses of data in their own practices and in the work of grantees. Among other things, it suggests that funders encourage grantees to make sure their data accurately represents the communities they serve and support their efforts to make that data available and accessible to constituents…(More)”.

Preparing Researchers for an Era of Freer Information


Article by Peter W.B. Phillips: “If you Google my name along with “Monsanto,” you will find a series of allegations from 2013 that my scholarly work at the University of Saskatchewan, focused on technological change in the global food system, had been unduly influenced by corporations. The allegations made use of seven freedom of information (FOI) requests. Although leadership at my university determined that my publications were consistent with university policy, the ensuing media attention, I feel, has led some colleagues, students, and partners to distance themselves to avoid being implicated by association.

In the years since, I’ve realized that my experience is not unique. I have communicated with other academics who have experienced similar FOI requests related to genetically modified organisms in the United States, Canada, England, the Netherlands, and Brazil. And my field is not the only one affected: a 2015 Union of Concerned Scientists report documented requests in multiple states and disciplines—from history to climate science to epidemiology—as well as across ideologies. In the University of California system alone, researchers have received open records requests related to research on the health effects of toxic chemicals, the safety of abortions performed by clinicians rather than doctors, and green energy production infrastructure. These requests are made possible by laws that permit anyone, for any reason, to gain access to public agencies’ records.

These open records campaigns, which are conducted by individuals and groups across the political spectrum, arise in part from the confluence of two unrelated phenomena: the changing nature of academic research toward more translational, interdisciplinary, and/or team-based investigations and the push for more transparency in taxpayer-funded institutions. Neither phenomenon is inherently negative; in fact, there are strong advantages for science and society in both trends. But problems arise when scholars are caught between them—affecting the individuals involved and potentially influencing the ongoing conduct of research…(More)”

Not all ‘open source’ AI models are actually open: here’s a ranking


Article by Elizabeth Gibney: “Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models.

The definition of open source when it comes to AI models is not yet agreed, but advocates say that ‘full’ openness boosts science, and is crucial for efforts to make AI accountable. What counts as open source is likely to take on increased importance when the European Union’s Artificial Intelligence Act comes into force. The legislation will apply less strict regulations to models that are classed as open.

Some big firms are reaping the benefits of claiming to have open-source models, while trying “to get away with disclosing as little as possible”, says Mark Dingemanse, a language scientist at Radboud University in Nijmegen, the Netherlands. This practice is known as open-washing.

“To our surprise, it was the small players, with relatively few resources, that go the extra mile,” says Dingemanse, who together with his colleague Andreas Liesenfeld, a computational linguist, created a league table that identifies the most and least open models (see table). They published their findings on 5 June in the conference proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency…(More)”.

Artificial Intelligence Is Making The Housing Crisis Worse


Article by Rebecca Burns: “When Chris Robinson applied to move into a California senior living community five years ago, the property manager ran his name through an automated screening program that reportedly used artificial intelligence to detect “higher-risk renters.” Robinson, then 75, was denied after the program assigned him a low score — one that he later learned was based on a past conviction for littering.

Not only did the crime have little bearing on whether Robinson would be a good tenant, it wasn’t even one that he’d committed. The program had turned up the case of a 33-year-old man with the same name in Texas — where Robinson had never lived. He eventually corrected the error but lost the apartment and his application fee nonetheless, according to a federal class-action lawsuit that moved towards settlement this month. The credit bureau TransUnion, one of the largest actors in the multi-billion-dollar tenant screening industry, agreed to pay $11.5 million to resolve claims that its programs violated fair credit reporting laws.

Landlords are increasingly turning to private equity-backed artificial intelligence (AI) screening programs to help them select tenants, and resulting cases like Robinson’s are just the tip of the iceberg. The prevalence of incorrect, outdated, or misleading information in such reports is increasing costs and barriers to housing, according to a recent report from federal consumer regulators.

Even when screening programs turn up real data, housing and privacy advocates warn that opaque algorithms are enshrining high-tech discrimination in an already unequal housing market — the latest example of how AI can end up amplifying existing biases…(More)”.

What the Arrival of A.I. Phones and Computers Means for Our Data


Article by Brian X. Chen: “Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.

But to make that work, these companies need something from you: more data.

In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.

Is this information you are willing to share?

This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.

“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.

All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.

The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.

The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers…(More)”.

This free app is the experts’ choice for wildfire information


Article by Shira Ovide: “One of the most trusted sources of information about wildfires is an app that’s mostly run by volunteers and on a shoestring budget.

It’s called Watch Duty, and it started in 2021 as a passion project of a Silicon Valley start-up founder, John Mills. He moved to a wildfire-prone area in Northern California and felt terrified by how difficult it was to find reliable information about fire dangers.

One expert after another said Watch Duty is their go-to resource for information, including maps of wildfires, the activities of firefighting crews, air-quality alerts and official evacuation orders…

More than a decade ago, Mills started a software company that helped chain restaurants with tasks such as food safety checklists. In 2019, Mills bought property north of San Francisco that he expected to be a future home. He stayed there when the pandemic hit in 2020.

During wildfires that year, Mills said he didn’t have enough information about what was happening and what to do. He found himself glued to social media posts from hobbyists who compiled wildfire information from public safety communications that are streamed online.

Mills said the idea for Watch Duty came from his experiences, his discussions with community groups and local officials — and watching an emergency services center struggle with clunky software for dispatching help.

He put in $1 million of his money to start Watch Duty and persuaded people he knew in Silicon Valley to help him write the app’s computer code. Mills also recruited some of the people who had built social media followings for their wildfire posts.

In the first week that Watch Duty was available in three California counties, Mills said, the app had tens of thousands of users. In the past month, he said, Watch Duty has had roughly 1.1 million users.

Watch Duty is a nonprofit. Members who pay $25 a year have access to extra features such as flight tracking for firefighting aircraft.

Mills wants to expand Watch Duty to cover other types of natural disasters. “I can’t think of anything better I can do with my life than this,” he said…(More)”.

The Behavioral Scientists Working Toward a More Peaceful World


Interview by Heather Graci: “…Nation-level data doesn’t help us understand community-level conflict. Without understanding community-level conflict, it becomes much harder to design policies to prevent it.

Cikara: “So much of the data that we have is at the level of the nation, when our effects are all happening at very local levels. You see these reports that say, “In Germany, 14 percent of the population is immigrants.” That doesn’t matter at the national level, because immigrants are not distributed evenly across the geography. That means that some communities are going to be at greater risk for conflict than others. But that sort of local variation, and sensitivity to it, at least heretofore, has really been missing from the conversation on the research side. Even when you’re in the same place, in the same country, within the same state, the same canton, there can still be a ton of variation from neighborhood to neighborhood.

“The other thing that we know matters a lot is not just the diversity of these neighborhoods but the segregation of them. It turns out that these kinds of prejudices and violence are less likely to break out in places that are both diverse and where people are interdigitated in how they live. So it’s not just the numbers, it’s also the spatial organization.

“For example, in Singapore, because so much of the real estate is state-owned, they make it so that people who are coming from different countries can’t cluster together because they assign them to live separate from one another in order to prevent these sorts of enclaves. All these structural and meta-level organizational features have really, really important inputs for intergroup dynamics and psychology.”..(More)”.

Embracing the Social in Social Science


Article by Jay Lloyd: “In a world where science is inextricably intermixed with society, the social sciences are essential to building trust in the scientific enterprise.

To begin thinking about why all the sciences should embrace the social in social science, I would like to start with cupcakes.

In my research, context is a recurring theme, so let me give you some context for cupcakes as metaphor. A few months ago, when I was asked to respond to an article in this magazine, I wrote: “In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the ‘cake’ in the cupcake.”

In writing these sentences, I was, and still am, hopeful that all kinds of future scientific research will make social science a key component of the scientific “batter” and bake social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

But there are tensions and power differentials in the ways interdisciplinary science can be done. Most importantly, the formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not. And some of those knowledge systems do not embrace the importance of insights from the social sciences because many institutions of science work hard to insulate the practice of science from the contingencies of society.

Moving forward, how do we, as researchers, develop questions that not only welcome intellectual variety within the sciences but also embrace the diversity represented in societies? As science continues to more powerfully blend, overlap, and intermix with society, embracing what social science can bring to the entire scientific enterprise is necessary. In order to accomplish these important goals, social concerns must be a key ingredient of the whole cupcake—not an afterthought, or decoration, but among the first thoughts…(More)”