The Behavioral Scientists Working Toward a More Peaceful World


Interview by Heather Graci: “…Nation-level data doesn’t help us understand community-level conflict. Without understanding community-level conflict, it becomes much harder to design policies to prevent it.

Cikara: “So much of the data that we have is at the level of the nation, when our effects are all happening at very local levels. You see these reports that say, ‘In Germany, 14 percent of the population is immigrants.’ It doesn’t matter at the national level, because they’re not distributed evenly across the geography. That means that some communities are going to be at greater risk for conflict than others. But that sort of local variation and sensitivity to it, at least heretofore, has really been missing from the conversation on the research side. Even when you’re in the same place, in the same country, within the same state, the same canton, there can still be a ton of variation from neighborhood to neighborhood.

“The other thing that we know matters a lot is not just the diversity of these neighborhoods but the segregation of them. It turns out that these kinds of prejudices and violence are less likely to break out in those places where it’s both diverse and people are interdigitated with how they live. So it’s not just the numbers, it’s also the spatial organization. 

“For example, in Singapore, because so much of the real estate is state-owned, they make it so that people who are coming from different countries can’t cluster together because they assign them to live separate from one another in order to prevent these sorts of enclaves. All these structural and meta-level organizational features have really, really important inputs for intergroup dynamics and psychology.”…(More)”.
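Cikara’s point that it’s not just the numbers but the spatial organization maps onto a standard distinction in segregation research: a group’s overall share of the population versus how unevenly it is distributed across neighborhoods. A minimal sketch of one common measure, the index of dissimilarity (the neighborhood counts below are invented for illustration and are not from the interview):

```python
# Index of dissimilarity: D = 0.5 * sum_i |a_i/A - b_i/B|,
# where a_i, b_i are the two groups' counts in neighborhood i and
# A, B are their citywide totals. D ranges from 0 (each neighborhood
# mirrors the citywide mix) to 1 (complete spatial separation).

def dissimilarity(group_a, group_b):
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Two hypothetical four-neighborhood cities, each 20 percent minority
# overall (320 majority, 80 minority), differing only in layout:
mixed = dissimilarity([80, 80, 80, 80], [20, 20, 20, 20])       # evenly spread
clustered = dissimilarity([100, 100, 100, 20], [0, 0, 0, 80])   # one enclave

print(mixed)      # 0.0
print(clustered)  # 0.9375
```

Both hypothetical cities have identical “numbers” (20 percent minority citywide), yet the index separates the evenly mixed city (D = 0) from the one organized into an enclave (D = 0.9375); composition alone cannot make that distinction.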

Embracing the Social in Social Science


Article by Jay Lloyd: “In a world where science is inextricably intermixed with society, the social sciences are essential to building trust in the scientific enterprise.

To begin thinking about why all the sciences should embrace the social in social science, I would like to start with cupcakes.

In my research, context is a recurring theme, so let me give you some context for cupcakes as metaphor. A few months ago, when I was asked to respond to an article in this magazine, I wrote: “In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the ‘cake’ in the cupcake.”

In writing these sentences, I was, and still am, hopeful that all kinds of future scientific research will make social science a key component of the scientific “batter” and bake social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

But there are tensions and power differentials in the ways interdisciplinary science can be done. Most importantly, the formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not. And some of those knowledge systems do not embrace the importance of insights from the social sciences because many institutions of science work hard to insulate the practice of science from the contingencies of society.

Moving forward, how do we, as researchers, develop questions that not only welcome intellectual variety within the sciences but also embrace the diversity represented in societies? As science continues to more powerfully blend, overlap, and intermix with society, embracing what social science can bring to the entire scientific enterprise is necessary. To accomplish these goals, social concerns must be a key ingredient of the whole cupcake—not an afterthought, or decoration, but among the first thoughts…(More)”.

A Generation of AI Guinea Pigs


Article by Caroline Mimbs Nyce: “This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.

Ed is, according to three first graders I spoke with this week at Alta Loma Elementary School, very good. They especially like it when Ed awards them gold stars for completing exercises. But even as they use the program, they don’t quite understand it. When I asked them if they know what AI is, they demurred. One asked me if it was a supersmart robot…(More)”.

Cryptographers Discover a New Foundation for Quantum Secrecy


Article by Ben Brubaker: “…Say you want to send a private message, cast a secret vote or sign a document securely. If you do any of these tasks on a computer, you’re relying on encryption to keep your data safe. That encryption needs to withstand attacks from codebreakers with their own computers, so modern encryption methods rely on assumptions about what mathematical problems are hard for computers to solve.

But as cryptographers laid the mathematical foundations for this approach to information security in the 1980s, a few researchers discovered that computational hardness wasn’t the only way to safeguard secrets. Quantum theory, originally developed to understand the physics of atoms, turned out to have deep connections to information and cryptography. Researchers found ways to base the security of a few specific cryptographic tasks directly on the laws of physics. But these tasks were strange outliers — for all others, there seemed to be no alternative to the classical computational approach.

By the end of the millennium, quantum cryptography researchers thought that was the end of the story. But in just the past few years, the field has undergone another seismic shift.

“There’s been this rearrangement of what we believe is possible with quantum cryptography,” said Henry Yuen, a quantum information theorist at Columbia University.

In a string of recent papers, researchers have shown that most cryptographic tasks could still be accomplished securely even in hypothetical worlds where practically all computation is easy. All that matters is the difficulty of a special computational problem about quantum theory itself.

“The assumptions you need can be way, way, way weaker,” said Fermi Ma, a quantum cryptographer at the Simons Institute for the Theory of Computing in Berkeley, California. “This is giving us new insights into computational hardness itself.”…(More)”.

We need a social science of data


Article by Cristina Alaimo and Jannis Kallinikos: “The practical and technical knowledge of data science must be complemented by a scientific field that can respond to these challenges and trace their implications for social practice and institutions.

Determining how such a field will look is not the job of two people but, rather, that of a whole scientific and social discourse that we as a society have the obligation to develop and maintain. Students and data users must know the power and subtlety of the artefacts they study and employ.

Such a scientific field should also provide the basis for analysing the social relations and economic dynamics of data generation and use, which are closely associated with several social groups, professions, communities and firms….(More)”.

Brazil hires OpenAI to cut costs of court battles


Article by Marcela Ayres and Bernardo Caram: “Brazil’s government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence (AI), trying to avoid costly court losses that have weighed on the federal budget.

The AI service will flag lawsuits that require government action before final decisions are issued, mapping trends and potential action areas for the solicitor general’s office (AGU).

AGU told Reuters that Microsoft would provide the artificial intelligence services from ChatGPT creator OpenAI through its Azure cloud-computing platform. It did not say how much Brazil will pay for the services.

Court-ordered debt payments have consumed a growing share of Brazil’s federal budget. The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal. The figure does not include small-value claims, which historically amount to around 30 billion reais annually.

The combined amount of over 100 billion reais represents a sharp increase from 37.3 billion reais in 2015. It is equivalent to about 1% of gross domestic product, or 15% more than the government expects to spend on unemployment insurance and wage bonuses to low-income workers next year.

AGU did not provide a reason for Brazil’s rising court costs…(More)”.

The revolution shall not be automated: On the political possibilities of activism through data & AI


Article by Isadora Cruxên: “Every other day now, there are headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting or warning against AI’s transformative potential. Some headlines promise that AI will fundamentally change how we work and learn or help us tackle critical challenges such as biodiversity conservation and climate change. Others question its intelligence, point to its embedded biases, and draw attention to its extractive labour record and high environmental costs.

Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us — or perhaps blowing past us at speed — while we are enticed to take the backseat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.

If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”

Sure, this might seem easier said than done. Most AI research and development is being driven by big tech corporations and start-ups. As Lauren Klein and Catherine D’Ignazio discuss in “Data Feminism for AI” (see “Further reading” at the end for all works cited), the results are models, tools, and platforms that are opaque to users, and that cater to the tech ambitions and profit motives of private actors, with broader societal needs and concerns becoming afterthoughts. There is excellent critical work that explores the extractive practices and unequal power relations that underpin AI production, including its relationship to processes of datafication, colonial data epistemologies, and surveillance capitalism (to link but a few). Interrogating, illuminating, and challenging these dynamics is paramount if we are to take the driver’s seat and find alternative paths…(More)”.

Why the future of democracy could depend on your group chats


Article by Nathan Schneider: “I became newly worried about the state of democracy when, a few years ago, my mother was elected president of her neighborhood garden club.

Her election wasn’t my worry – far from it. At the time, I was trying to resolve a conflict on a large email group I had created. Someone, inevitably, was being a jerk on the internet. I had the power to remove them, but did I have the right? I realized that the garden club had in its bylaws something I had never seen in nearly all the online communities I had been part of: basic procedures to hold people with power accountable to everyone else.

The internet has yet to catch up to my mother’s garden club.

When Alexis de Tocqueville toured the United States in the early 1830s, he made an observation that social scientists have seen over and over since: Democracy at the state and national levels depends on everyday organizations like that garden club. He called them “schools” for practicing the “general theory of association.” As members of small democracies, people were learning to be citizens of a democratic country.

How many people experience those kinds of schools today?

People interact online more than offline nowadays. Rather than practicing democracy, people most likely find themselves getting suspended from a Facebook group they rely on with no reason given or option to appeal. Or a group of friends join a chat together, but only one of them has the ability to change its settings. Or people see posts from Elon Musk inserted into their mentions on X, which he owns. All of these situations are examples of what I call “implicit feudalism.”…(More)”.

Uganda’s Sweeping Surveillance State Is Built on National ID Cards


Article by Olivia Solon: “Uganda has spent hundreds of millions of dollars in the past decade on biometric tools that document a person’s unique physical characteristics, such as their face, fingerprints and irises, to form the basis of a comprehensive identification system. While the system is central to many of the state’s everyday functions, as Museveni has grown increasingly authoritarian over nearly four decades in power, it has also become a powerful mechanism for surveilling politicians, journalists, human rights advocates and ordinary citizens, according to dozens of interviews and hundreds of pages of documents obtained and analyzed by Bloomberg and nonprofit investigative newsroom Lighthouse Reports.

It’s a cautionary tale for any country considering establishing a biometric identity system without rigorous checks and balances and input from civil society. Dozens of global south countries have adopted this approach as part of an effort to meet sustainable development goals from the UN, which considers having a legal identity to be a fundamental human right. But, despite billions of dollars of investment, with backing from organizations including the World Bank, those identity systems haven’t always lived up to expectations. In many cases, the key problem is the failure to register large swathes of the population, leading to exclusion from public services. But in other places, like Uganda, inclusion in the system has been weaponized for surveillance purposes.

A year-long investigation by Bloomberg and Lighthouse Reports sheds new light on the ways in which Museveni’s regime has built and deployed this system to target opponents and consolidate power. It shows how the underlying software and data sets are easily accessed by individuals at all levels of law enforcement, despite official claims to the contrary. It also highlights, in some cases for the first time, how senior government and law enforcement officials have used these tools to target individuals deemed to pose a political threat…(More)”.

How this mental health care app is using generative AI to improve its chatbot


Interview by Daniela Dib: “Andrea Campos struggled with depression for years before founding Yana, a mental health care app, in 2017. The app’s chatbot provides users emotional companionship in Spanish. Although she was reluctant at first, Campos began using generative artificial intelligence for the Yana chatbot after ChatGPT launched in 2022. Yana, which recently launched its English-language version, has 15 million users, and is available in Latin America and the U.S.

This interview has been edited for clarity and brevity.

How has your product evolved since you introduced generative AI to it?

At first, we didn’t use generative AI because we believed it was far from ready for mental health support. We designed and guardrailed our chatbot’s responses with decision trees. But when ChatGPT launched and we saw what it could do, it wasn’t a question of whether to use generative AI or not, but how soon — we’d fall behind otherwise. It’s been a challenge because everyone quickly began developing with generative AI, but our advantage was that, having operated our chatbot for a while, we had gathered over 2 billion data points that have been invaluable for our app’s fine-tuning. One thing is clear: It’s crucial to have a model tailored to the specific needs of our product…(More)”.