How should a robot explore the Moon? A simple question shows the limits of current AI systems


Article by Sally Cripps, Edward Santow, Nicholas Davis, Alex Fischer and Hadi Mohasel Afshar: “…Ultimately, AI systems should help humans make better, more accurate decisions. Yet even the most impressive and flexible of today’s AI tools – such as the large language models behind the likes of ChatGPT – can have the opposite effect.

Why? They have two crucial weaknesses. They do not help decision-makers understand causation or uncertainty. And they create incentives to collect huge amounts of data and may encourage a lax attitude to privacy, legal and ethical questions and risks…

ChatGPT and other “foundation models” use an approach called deep learning to trawl through enormous datasets and identify associations between factors contained in that data, such as the patterns of language or links between images and descriptions. Consequently, they are great at interpolating – that is, predicting or filling in the gaps between known values.

Interpolation is not the same as creation. It does not generate knowledge, nor the insights necessary for decision-makers operating in complex environments.
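
To make the distinction concrete, here is a minimal sketch (our illustration, not the authors’): a flexible model fitted to data on a known range predicts well inside that range, and can fail badly outside it.

```python
import numpy as np

# Toy illustration: a flexible model fitted to data on [0, 5]
# interpolates well but extrapolates poorly.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)

model = np.poly1d(np.polyfit(x, y, deg=7))  # stand-in for a flexible learner

print(model(2.5), np.sin(2.5))    # inside the data range: close to the truth
print(model(10.0), np.sin(10.0))  # outside it: typically far off
```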

However, these approaches require huge amounts of data. As a result, they encourage organisations to assemble enormous repositories of data – or trawl through existing datasets collected for other purposes. Dealing with “big data” brings considerable risks around security, privacy, legality and ethics.

In low-stakes situations, predictions based on “what the data suggest will happen” can be incredibly useful. But when the stakes are higher, there are two more questions we need to answer.

The first is about how the world works: “what is driving this outcome?” The second is about our knowledge of the world: “how confident are we about this?”…(More)”.
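
The second question can be made concrete by reporting a distribution rather than a single number. A minimal Bayesian sketch (our illustration, assuming a uniform prior; not drawn from the article):

```python
from scipy import stats

# "How confident are we about this?" -- after observing 7 successes
# in 10 trials, report a posterior distribution, not just a point.
successes, trials = 7, 10
posterior = stats.beta(1 + successes, 1 + trials - successes)  # uniform prior

print(posterior.mean())          # point estimate, about 0.67
print(posterior.interval(0.95))  # 95% credible interval, roughly (0.39, 0.89)
```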

Diversity of Expertise is Key to Scientific Impact


Paper by Angelo Salatino, Simone Angioni, Francesco Osborne, Diego Reforgiato Recupero, Enrico Motta: “Understanding the relationship between the composition of a research team and the potential impact of their research papers is crucial as it can steer the development of new science policies for improving the research enterprise. Numerous studies assess how the characteristics and diversity of research teams can influence their performance across several dimensions: ethnicity, internationality, size, and others. In this paper, we explore the impact of diversity in terms of the authors’ expertise. To this end, we retrieved 114K papers in the field of Computer Science and analysed how the diversity of research fields within a research team relates to the number of citations their papers received over the following five years. The results show that two different metrics we defined, reflecting the diversity of expertise, are significantly associated with the number of citations. This suggests that, at least in Computer Science, diversity of expertise is key to scientific impact…(More)”.
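
The excerpt does not spell out the two metrics, so the sketch below is only an illustration of the general idea, not the paper’s definition: one common way to score a team’s diversity of expertise is the Shannon entropy of its authors’ research fields.

```python
import math
from collections import Counter

def field_diversity(author_fields):
    """Shannon entropy (in bits) of research fields across a team.

    Illustrative only -- not the metric defined in the paper.
    """
    counts = Counter(author_fields)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A team spanning three fields scores higher than a single-field team.
print(field_diversity(["ML", "HCI", "databases"]))  # ~1.58 bits
print(field_diversity(["ML", "ML", "ML"]))          # 0.0 bits
```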

Index, A History of the


Book by Dennis Duncan, subtitled “A Bookish Adventure from Medieval Manuscripts to the Digital Age”: “Most of us give little thought to the back of the book—it’s just where you go to look things up. But as Dennis Duncan reveals in this delightful and witty history, hiding in plain sight is an unlikely realm of ambition and obsession, sparring and politicking, pleasure and play. In the pages of the index, we might find Butchers, to be avoided, or Cows that sh-te Fire, or even catch Calvin in his chamber with a Nonne. Here, for the first time, is the secret world of the index: an unsung but extraordinary everyday tool, with an illustrious but little-known past.

Charting its curious path from the monasteries and universities of thirteenth-century Europe to Silicon Valley in the twenty-first, Duncan uncovers how it has saved heretics from the stake, kept politicians from high office, and made us all into the readers we are today. We follow it through German print shops and Enlightenment coffee houses, novelists’ living rooms and university laboratories, encountering emperors and popes, philosophers and prime ministers, poets, librarians and—of course—indexers along the way. Revealing its vast role in our evolving literary and intellectual culture, Duncan shows that, for all our anxieties about the Age of Search, we are all index-rakers at heart—and we have been for eight hundred years…(More)”.

Harvard fraud claims fuel doubts over science of behaviour


Article by Andrew Hill and Andrew Jack: “Claims that fraudulent data was used in papers co-authored by a star Harvard Business School ethics expert have fuelled a growing controversy about the validity of behavioural science, whose findings are routinely taught in business schools and applied within companies.

While the professor has not yet responded to details of the claims, the episode is the latest blow to a field that has risen to prominence over the past 15 years and whose findings in areas such as decision-making and team-building are widely put into practice.

Companies from Coca-Cola to JPMorgan Chase have executives dedicated to behavioural science, while governments around the world have also embraced its findings. But well-known principles in the field such as “nudge theory” are now being called into question.

The Harvard episode “is topic number one in business school circles”, said André Spicer, executive dean of London’s Bayes Business School. “There has been a large-scale replication crisis in psychology — lots of the results can’t be reproduced and some of the underlying data has been found to be faked.”…

That cast a shadow over the use of behavioural science by government-linked “nudge units” such as the UK’s Behavioural Insights Team, which was spun off into a company in 2014, and the US Office of Evaluation Sciences.

However, David Halpern, now president of BIT, countered that publication bias is not unique to the field. He said he and his peers use far larger-scale, more representative and robust testing than academic research.

Halpern argued that behavioural research can help to effectively deploy government budgets. “The dirty secret of most governments and organisations is that they spend a lot of money, but have no idea if they are spending in ways that make things better.”

Academics point out that testing others’ results is part of normal scientific practice. The difference with behavioural science is that initial results that have not yet been replicated are often quickly recycled into sensational headlines, popular self-help books and business practice.

“Scientists should be better at pointing out when non-scientists over-exaggerate these things and extrapolate, but they are worried that if they do this they will ruin the positive trend [towards their field],” said Pelle Guldborg Hansen, chief executive of iNudgeyou, a centre for applied behavioural research.

Many consultancies have sprung up to cater to corporate demand for behavioural insights. “What I found was that almost anyone who had read Nudge had a licence to set up as a behavioural scientist,” said Nuala Walsh, who formed the Global Association of Applied Behavioural Scientists in 2020 to try to set some standards…(More)”.

Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research


OECD Report: “The rapid advances of artificial intelligence (AI) in recent years have led to numerous creative applications in science. Accelerating the productivity of science could be the most economically and socially valuable of all the uses of AI. Utilising AI to accelerate scientific productivity will support the ability of OECD countries to grow, innovate and meet global challenges, from climate change to new contagions. This publication is aimed at a broad readership, including policy makers, the public, and stakeholders in all areas of science. It is written in non-technical language and gathers the perspectives of prominent researchers and practitioners. The book examines various topics, including the current, emerging, and potential future uses of AI in science, where progress is needed to better serve scientific advancements, and changes in scientific productivity. Additionally, it explores measures to expedite the integration of AI into research in developing countries. A distinctive contribution is the book’s examination of policies for AI in science. Policy makers and actors across research systems can do much to deepen AI’s use in science, magnifying its positive effects, while adapting to the fast-changing implications of AI for research governance…(More)”.

Use of AI in social sciences could mean humans will no longer be needed in data collection


Article by Michael Lee: “A team of researchers from four Canadian and American universities say artificial intelligence could replace humans when it comes to collecting data for social science research.

Researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania published an article in the journal Science on June 15 about how AI, specifically large language models (LLMs), could affect their work.

“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.

Philip Tetlock, a psychology professor at UPenn and article co-author, goes so far as to say that LLMs will “revolutionize human-based forecasting” in just three years.

In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”

The authors say the social sciences have traditionally relied on methods such as questionnaires and observational studies.

But with the ability of LLMs to pore over vast amounts of text data and generate human-like responses, the authors say this presents a “novel” opportunity for researchers to test theories about human behaviour at a faster rate and on a much larger scale.

Scientists could use LLMs to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
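
As a rough sketch of what such simulated participants might look like in practice (our illustration; the model name, prompts and the use of the OpenAI Python client are assumptions, not details from the article):

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = ["a retired farmer", "a first-year city nurse", "a small-business owner"]
QUESTION = "Should your city invest more in public transit? Answer in one sentence."

# Pose the same survey question "in character" for each persona --
# a toy version of the simulated-participant idea described above.
for persona in PERSONAS:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {persona}. Answer as yourself."},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(persona, "->", reply.choices[0].message.content)
```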

“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 per cent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”

One issue the authors identified, however, is that LLMs are often trained to exclude sociocultural biases, raising the question of whether the models correctly reflect the populations they study…(More)”.

How Does Data Access Shape Science?


Paper by Abhishek Nagaraj & Matteo Tranchero: “This study examines the impact of access to confidential administrative data on the rate, direction, and policy relevance of economics research. To study this question, we exploit the progressive geographic expansion of the U.S. Census Bureau’s Federal Statistical Research Data Centers (FSRDCs). FSRDCs boost data diffusion, help empirical researchers publish more articles in top outlets, and increase citation-weighted publications. Besides direct data usage, spillovers to non-adopters also drive this effect. Further, citations to exposed researchers in policy documents increase significantly. Our findings underscore the importance of data access for scientific progress and evidence-based policy formulation…(More)”.
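
The excerpt does not describe the estimation itself; as a generic illustration of how a staggered geographic rollout like the FSRDC expansion can be analysed, here is a two-way fixed-effects sketch on synthetic data (all names and numbers are our assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: institutions observed over years, with data centers
# opening at different times (9999 = never treated).
rng = np.random.default_rng(1)
rows = []
for inst in range(200):
    open_year = rng.choice([2005, 2010, 2015, 9999])
    for year in range(2000, 2020):
        treated = int(year >= open_year)
        pubs = 5 + 0.1 * (year - 2000) + 1.5 * treated + rng.normal()
        rows.append({"inst": inst, "year": year, "treated": treated, "pubs": pubs})
df = pd.DataFrame(rows)

# Two-way fixed effects: institution and year dummies absorb level
# differences, leaving the post-opening shift in output.
fit = smf.ols("pubs ~ treated + C(inst) + C(year)", data=df).fit()
print(fit.params["treated"])  # recovers the simulated effect of ~1.5
```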

Politicians love to appeal to common sense – but does it trump expertise?


Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.

We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.

Experts have gained knowledge and experience in a given speciality. By that definition, politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).

Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often used to justify that what we believe is right or wrong, without coming up with evidence.

But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technological discoveries…

The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.

Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured nothing gained”…(More)”

How Would You Defend the Planet From Asteroids? 


Article by Mahmud Farooque and Jason L. Kessler: “On September 26, 2022, NASA successfully smashed a spacecraft into a tiny asteroid named Dimorphos, altering its orbit. Although it was 6.8 million miles from Earth, the Double Asteroid Redirection Test (DART) was broadcast in real time, turning the impact into a rare pan-planetary moment accessible from smartphones around the world.
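
For a sense of scale, a back-of-the-envelope momentum-transfer estimate (the figures below are rough public values and assumptions, not from the article):

```python
# Rough estimate of DART's push on Dimorphos via momentum conservation.
# beta captures extra momentum carried away by impact ejecta.
m_spacecraft = 570.0   # kg, approximate DART mass at impact
v_impact = 6.1e3       # m/s, approximate impact speed
m_asteroid = 4.3e9     # kg, rough estimate of Dimorphos's mass
beta = 3.0             # momentum-enhancement factor, order of magnitude

delta_v = beta * m_spacecraft * v_impact / m_asteroid
print(f"delta-v ~ {delta_v * 1000:.1f} mm/s")  # a few mm/s shifts the orbit
```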

For most people, the DART mission was the first glimmer—outside of the movies—that NASA was seriously exploring how to protect Earth from asteroids. Rightly famous for its technological prowess, NASA is less recognized for its social innovations. But nearly a decade before DART, the agency had launched the Asteroid Grand Challenge. In a pioneering approach to public engagement, the challenge brought citizens together to weigh in on how the taxpayer-funded agency might approach some technical decisions involving asteroids. 

The following account of how citizens came to engage with strategies for planetary defense—and the unexpected conclusions they reached—is based on the experiences of NASA employees, members of the Expert and Citizen Assessment of Science and Technology (ECAST) network, and forum participants…(More)”.