Automated Social Science: Language Models as Scientist and Subjects


Paper by Benjamin S. Manning, Kehang Zhu & John J. Horton: “We present an approach for automatically generating and testing, in silico, social scientific hypotheses. This automation is made possible by recent advances in large language models (LLMs), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language to state hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or the planning of follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are both proposed and tested by the system, finding evidence for some and not others. We provide evidence that the insights from these simulations of social interactions are not available to the LLM purely through direct elicitation. When given its proposed structural causal model for each scenario, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict the magnitudes of those estimates. In the auction experiment, the in silico simulation results closely match the predictions of auction theory, but elicited predictions of the clearing prices from the LLM are inaccurate. However, the LLM’s predictions are dramatically improved if the model can condition on the fitted structural causal model. In short, the LLM knows more than it can (immediately) tell…(More)”.
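
To make the workflow concrete, here is a minimal, self-contained Python sketch of the loop the abstract describes: stating a causal hypothesis, running a designed experiment over simulated agents, and estimating the causal effect. The toy negotiation scenario, the stand-in for the LLM agents, and all numbers are invented for illustration; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of an SCM-driven experiment.
# A real system would replace `simulate_negotiation` with LLM-based agents
# playing out the scenario; here a hidden linear process stands in for them.
import random

def simulate_negotiation(buyer_budget: float) -> float:
    """Stand-in for an LLM-simulated negotiation; returns the agreed price."""
    return 0.6 * buyer_budget + 20 + random.gauss(0, 5)  # unknown to the 'scientist'

# Experimental design derived from the SCM: vary the hypothesized cause.
treatments = [50, 75, 100, 125, 150]
data = [(b, simulate_negotiation(b)) for b in treatments for _ in range(30)]

# Analysis plan derived from the SCM: estimate the edge budget -> price by OLS.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
beta = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
alpha = my - beta * mx
print(f"Fitted SCM edge: price = {alpha:.1f} + {beta:.2f} * buyer_budget")
```

The fitted coefficient is the estimated causal effect; per the abstract, the fitted model then becomes an object available for prediction or for planning follow-on experiments.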

Shaping the Future of Learning: The Role of AI in Education 4.0


WEF Report: “This report explores the potential for artificial intelligence to benefit educators and students. Case studies show how AI can personalize learning experiences, streamline administrative tasks, and integrate into curricula.

The report stresses the importance of responsible deployment, addressing issues like data privacy and equitable access. Aimed at policymakers and educators, it urges stakeholders to collaborate to ensure that AI’s integration into education systems worldwide leads to improved outcomes for all…(More)”

The Secret Life of Data


Book by Aram Sinnreich and Jesse Gilbert: “…explore the many unpredictable, and often surprising, ways in which data surveillance, AI, and the constant presence of algorithms impact our culture and society in the age of global networks. The authors build on this basic premise: no matter what form data takes, and what purpose we think it’s being used for, data will always have a secret life. How this data will be used, by other people in other times and places, has profound implications for every aspect of our lives—from our intimate relationships to our professional lives to our political systems.

With the secret uses of data in mind, Sinnreich and Gilbert interview dozens of experts to explore a broad range of scenarios and contexts—from the playful to the profound to the problematic. Unlike most books about data and society that focus on the short-term effects of our immense data usage, The Secret Life of Data focuses primarily on the long-term consequences of humanity’s recent rush toward digitizing, storing, and analyzing every piece of data about ourselves and the world we live in. The authors advocate for “slow fixes” regarding our relationship to data, such as creating new laws and regulations, ethics and aesthetics, and models of production for our datafied society.

Cutting through the hype and hopelessness that so often inform discussions of data and society, The Secret Life of Data clearly and straightforwardly demonstrates how readers can play an active part in shaping how digital technology influences their lives and the world at large…(More)”

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


Article by Jordi Calvet-Bademunt and Jacob Mchangama: “Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?…In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times…(More)”.

‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute


Article by Andrew Anthony: “Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief notice on its website stating that it had closed and that its research may continue elsewhere within and outside the university.

The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support.

Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper “Are You Living in a Computer Simulation?”. The paper argued that over time humans were likely to develop the ability to make simulations that were indistinguishable from reality, and if this was the case, it was possible that it had already happened and that we are the simulations…

Among the other ideas and movements that have emerged from the FHI are longtermism – the notion that humanity should prioritise the needs of the distant future because it theoretically contains hugely more lives than the present – and effective altruism (EA), a utilitarian approach to maximising global good.

These philosophies, which have intermarried, have inspired something of a cult-like following…

Torres has come to believe that the work of the FHI and its offshoots amounts to what they call a “noxious ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as poorly worded juvenilia, regarding them instead as indicative of a brutal utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” – dysgenic being the opposite of eugenic. Bostrom wrote:

“Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).”…(More)”.

Lethal AI weapons are here: how can we control them?


Article by David Adam: “The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm…(More)”

The Future Data Economy


Report by the IE University’s Center for the Governance of Change: “…summarizes the ideas and recommendations of a year of research, carried out together with experts in the field such as Andrea Renda and Stefaan Verhulst, into the possibilities of creating a data economy that is fair, competitive and secure.

According to the report, the data economy represents “a fundamental reconfiguration of how value is generated, exchanged, and understood in our world today” but it remains deeply misunderstood:

  • The authors argue that data’s particular characteristics make it different from other commodities and therefore more difficult to regulate.
  • Optimizing data flows defies the sort of one-size-fits-all solutions that policymakers tend to search for in other domains, requiring instead a more nuanced, case-by-case approach. 
  • Policymakers need to strike a delicate balance between making data sufficiently accessible to foster innovation, competition, and economic growth, while regulating its access and use to protect privacy, security, and consumer rights.

The report identifies additional overarching principles that lay the groundwork for a more coherent regulatory framework and a more robust social contract in the future data economy:

  • A paradigm shift towards greater collaboration on all fronts to address the challenges and harness the opportunities of the data economy.
  • Greater data literacy at all levels of society to make better decisions, manage risks more effectively, and harness the potential of data responsibly.
  • Regaining social trust, not only a moral imperative but also a prerequisite for the long-term sustainability and viability of data governance models.

To realize this vision, the report advances 15 specific recommendations for policymakers, including:

  • Enshrining people’s digital rights through robust regulatory measures that empower them with genuine control over their digital experiences.
  • Investing in data stewards to increase companies’ ability to recognize opportunities for collaboration and respond to external data requests. 
  • Designing liability frameworks to properly identify responsibility in cases of data misuse…(More)”

The Open Data Maturity Ranking is shoddy – it badly needs to be re-thought


Article by Olesya Grabova: “Digitalising government is essential for Europe’s future innovation and economic growth, and one of the keys to achieving this is open data – information that public entities gather, create, or fund, and that is freely accessible to all.

This includes everything from public budget details to transport schedules. Open data’s benefits are vast — it fuels research, boosts innovation, and can even save lives in wartime, for instance through chatbots that share bomb shelter locations. Its economic value for EU countries and the UK is estimated to reach EUR 194 billion by 2030.

This is why correctly measuring European countries’ progress in open data is so important. And that’s why the European Commission developed the Open Data Maturity (ODM) ranking, which annually measures open data quality, policies, online portals, and impact across 35 European countries.
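
As a rough illustration of how a composite index of this kind behaves, the sketch below aggregates four dimension scores with equal weights. The weights and scores are invented for the example and are not the ODM's actual methodology.

```python
# Toy composite maturity index (invented weights and scores; not the real ODM).
DIMENSIONS = ("policy", "portal", "quality", "impact")

def composite(scores: dict) -> float:
    """Equal-weight mean of the four dimension scores (each 0-100)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# A single strong dimension - e.g. self-reported impact - can lift the
# composite sharply even when the other dimensions barely move.
year_1 = {"policy": 55, "portal": 50, "quality": 45, "impact": 40}
year_2 = {"policy": 57, "portal": 52, "quality": 46, "impact": 95}
print(composite(year_1), composite(year_2))  # 47.5 -> 62.5
```

A large year-on-year jump can therefore reflect a few heavily weighted answers rather than broad, verifiable progress, which is the kind of gap between measured and real maturity the article goes on to describe.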

Alas, it doesn’t work as well as it should, and this needs to be addressed.

A closer look at the report’s overall approach reveals that the ranking hardly reflects countries’ real progress on open data. This flawed system, rather than guiding countries towards genuine improvement, risks misrepresenting their actual progress and misleading citizens about their country’s advancements, further stalling opportunities for innovation.

Take Slovakia. It’s apparently the biggest climber, leaping from 29th to 10th place in just over a year. One would expect that the country has made significant progress in making public sector information available and stimulating its reuse – one of the ODM assessment’s key elements.

A deeper examination reveals that this isn’t the case. Looking at the ODM’s methodology highlights where it falls short… and how it can be fixed…(More)”.

AI-Powered World Health Chatbot Is Flubbing Some Answers


Article by Jessica Nix: “The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.

SARAH doesn’t have a diagnostic feature like WebMD or Google. In fact, the bot is programmed not to talk about anything outside the WHO’s purview, including questions about specific drugs. So SARAH often sends people to a WHO website or says that users should “consult with your health-care provider.”

“It lacks depth,” Ramin Javan, a radiologist and researcher at George Washington University, said. “But I think it’s because they just don’t want to overstep their boundaries and this is just the first step.”…(More)”

Using Artificial Intelligence to Map the Earth’s Forests


Article from Meta and World Resources Institute: “Forests harbor most of Earth’s terrestrial biodiversity and play a critical role in the uptake of carbon dioxide from the atmosphere. Ecosystem services provided by forests underpin an essential defense against the climate and biodiversity crises. However, critical gaps remain in the scientific understanding of the structure and extent of global forests. Because the vast majority of existing data on global forests is derived from low- to medium-resolution satellite imagery (10 or 30 meters), there is a gap in the scientific understanding of dynamic and more dispersed forest systems such as agroforestry, drylands forests, and alpine forests, which together constitute more than a third of the world’s forests.

Today, Meta and World Resources Institute are launching a global map of tree canopy height at a 1-meter resolution, allowing the detection of single trees at a global scale. In an effort to advance open source forest monitoring, all canopy height data and artificial intelligence models are free and publicly available…(More)”.
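
For readers who want to explore the released data, a hedged sketch of how a single downloaded canopy-height tile might be inspected follows; the file name is hypothetical, and the rasterio and numpy packages are assumed to be installed. This is illustrative only, not code from Meta or WRI.

```python
# Hedged sketch: inspect a 1 m canopy-height raster tile (hypothetical file).
import numpy as np
import rasterio

with rasterio.open("canopy_height_tile.tif") as src:
    heights = src.read(1).astype(float)  # band 1: canopy height in meters

trees = heights >= 5.0  # a common minimum height for counting 'tree' cover
print(f"Tree cover in tile: {trees.mean():.1%}")
print(f"Mean canopy height where treed: {heights[trees].mean():.1f} m")
```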