Stefaan Verhulst
Paper by Joseph Bak-Coleman, et al: “Emerging information technologies like social media, search engines, and AI can have a broad impact on public health, political institutions, social dynamics, and the natural world. It is critical to develop a scientific understanding of these impacts to inform evidence-based technology policy that minimizes harm and maximizes benefits. Unlike most other global-scale scientific challenges, however, the data necessary for scientific progress are generated and controlled by the same industry that might be subject to evidence-based regulation. Moreover, technology companies historically have been, and continue to be, a major source of funding for this field. These asymmetries in information and funding raise significant concerns about the potential for undue industry influence on the scientific record. In this Perspective, we explore how technology companies can influence our scientific understanding of their products. We argue that science faces unique challenges in the context of technology research that will require strengthening existing safeguards and constructing wholly new ones…(More)”.
Article by Yale Insights: “Let’s say you see an interesting headline about new research, suggesting that a ride-share service reduced traffic jams in cities. You click the link, scroll down the article, and then discover that the study authors based their results on private data from the ride-share company itself. How much would that disclosure dampen your trust in the finding?
Quite a bit, according to a new study co-authored by Prof. John Barrios. On average, the use of private data decreased people’s trust in economic results by one-fifth. And that’s a problem for economists, who often rely on these data sets to fuel their studies.
“That’s the lifeblood of modern economic research,” Barrios says. “But it also creates a tension: the very data that let us answer new questions can make our work seem less independent.” His team’s study suggested that other conflicts, such as accepting consulting fees from companies and ideological biases, also substantially lowered trust.
When conflicts undermine credibility, studies lose some of their power to shape public policy decisions. “We engage in creating knowledge, and that marketplace of ideas runs on the currency of trust,” he says. “Once that trust erodes, so too does our research in impacting the world around us.”…(More)”. See also: “The Conflict-of-Interest Discount in the Marketplace of Ideas”
Article by Daniel Björkegren: “The cost of using artificial intelligence (AI) has plummeted. Almost every smartphone in the world can access AI chatbots for free for several queries a day, through ChatGPT, WhatsApp, Google, or other providers. However, these services have yet to reach most mobile phones that lack active internet service, which number around 1.7 billion based on statistics from the International Telecommunication Union, or ITU. Those phones are in the hands of roughly a billion people who are excluded from AI. They include some of the world’s poorest and most remote residents, who might use the technology to obtain information of all sorts, including advice on business, education, health, and agriculture.
Basic mobile phones could theoretically access AI services through text messages, through a technology developed in the 1990s called short message service or SMS. SMS allows the sending and receiving of messages of up to 160 characters of text. It is extremely efficient: SMS uses the leftover capacity of a cellular network, so the marginal resource use per message is tiny. And it is popular: over 6 trillion SMS messages were sent in the most recent year for which data are available.
However, in many countries mobile phone operators charge application providers extremely high prices for SMS, which limit the viability of digital services for basic phones. SMS pricing in different countries can be opaque, so I gathered prices from several reputable international platforms and, when available, from major operators themselves. The prices I could find vary dramatically: for example, it would cost $0.08 to send 100 SMS messages to users in Ghana (through an operator), but $44.75 to send the same messages to users in Pakistan (through the platform Vonage Nexmo; a major operator did not provide prices in response to my request). Overall, prices are high: for the median phone without internet, the price I found is $5.33 per 100 messages sent. These prices represent the lowest bulk or application-to-person (A2P) SMS rate that would be paid by an organization sending messages from a digital service (the end of the post describes the methodology). Consumers are typically also charged separate retail prices for any messages they send, though message allowances may be included in their plans.

While it may be possible for organizations to negotiate lower prices, A2P SMS is expensive any way you look at it. The median A2P SMS price corresponds to $380,616 per gigabyte of data transmitted. In comparison, when using mobile data on a smartphone the median price of bandwidth is only $6 per gigabyte. For the median price of sending one message via SMS, you could send 58,775 similarly sized messages via mobile data on a smartphone. SMS is expensive relative to mobile internet around the world, as shown in Table 1, which reports statistics by country…(More)”.
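As a back-of-envelope check, the sketch below reproduces the arithmetic behind these comparisons in Python. It assumes each SMS carries 140 bytes of payload (160 seven-bit GSM characters) and that the unrounded median data price sits near $6.48 per gigabyte; the small gaps between its outputs and the quoted figures come from rounding in the published numbers.

```python
# A back-of-envelope check of the comparison above (an editorial sketch,
# not the author's code). Assumes each SMS carries 140 bytes of payload
# (160 seven-bit GSM characters); the $6.48/GB data price is an assumed
# unrounded value behind the post's "$6 per gigabyte".

SMS_PRICE_PER_100 = 5.33      # median A2P price, USD per 100 messages
DATA_PRICE_PER_GB = 6.48      # assumed median smartphone data price, USD/GB
BYTES_PER_SMS = 140           # 160 chars x 7 bits / 8 = 140 bytes
GB = 1_000_000_000

price_per_message = SMS_PRICE_PER_100 / 100   # $0.0533
messages_per_gb = GB / BYTES_PER_SMS          # ~7.14 million messages

# Cost of moving one gigabyte via A2P SMS at the median rate
sms_price_per_gb = price_per_message * messages_per_gb
print(f"SMS: ${sms_price_per_gb:,.0f} per GB")   # ~$380,714 (post: $380,616)

# How many 140-byte messages one SMS's price buys over mobile data
data_price_per_message = DATA_PRICE_PER_GB * BYTES_PER_SMS / GB
print(f"Ratio: {price_per_message / data_price_per_message:,.0f}x")  # ~58,752 (post: 58,775)
```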
Blog by Laura Tripaldi: “…Over the past 20 years, robotics has undergone a significant transition. A field once dominated by anthropomorphic bodies and rigid materials has begun to embrace a much wider range of possible incarnations, starting with the use of plastic and flexible materials to replace steel and hard polymers. Cecilia Laschi, one of the most authoritative figures in the field of robotics, has frequently emphasized how this transition from “rigid” to “soft” robots goes far beyond a simple change in the choice of materials, reflecting instead a broader shift in the entire anatomy of the automata, in the strategies used to control them, and in the philosophy that drives their construction. The most notable engineering achievement by Laschi and her colleagues at the Sant’Anna Institute in Pisa is a robotic arm originally designed in 2011 and inspired by the muscular hydrostatics of the octopus. In octopuses, limbs are capable of complex behaviors such as swimming and manipulating objects through coordinated deformations of muscle tissue, without any need for rigid components. In the robot designed by Laschi and her colleagues, the robotic limb’s movement is achieved similarly through deformable smart materials known as “shape memory alloys.”
Unlike a conventional robot’s, these movements are not pre-programmed but emerge from the material’s response to external forces. This new engineering logic is part of what Laschi describes as embodied intelligence, an approach in which the robot’s behavior emerges from integrating its physical structure and interaction with the world. The concept of “embodiment” challenges the hierarchical separation between body and mind, representation and experience. Rather than conceiving of intelligence as the product of an active mind controlling a passive body, embodiment emphasizes the relationship between cognition and corporeality. Originating in philosophy, this concept has over the last 20 years begun to spread and establish itself in the field of engineering, opening up new avenues for the design of versatile and adaptive robots. “The octopus is a biological demonstration of how effective behavior in the real world is closely linked to body morphology,” Laschi and her co-workers explain, “a good example of embodied intelligence, whose principles derive from the observation in nature that adaptive behavior emerges from the complex and dynamic interaction between body morphology, sensorimotor control, and the environment.”…(More)”.
Blog by Sonia Livingstone: “Our RIGHTS.AI research in four global South countries (Brazil, India, Kenya and Thailand) shows that children are encountering generative AI at home and school, in the platforms they use daily. From chatbots like ChatGPT and Copilot to AI embedded in Snapchat, WhatsApp and Google Search, generative AI now powers learning tools, entertainment apps, and even experimental mental health services. It answers questions and creates content – sometimes problematically – and quietly shapes what children see and do online.
Children are fast adopters of AI technologies, but they often lack sufficient understanding of how these systems work, the capacity to ensure their beneficial use, and the competence to understand the implications. Can education about AI facilitate generative AI opportunities for children and mitigate its challenges and risks?
Through collaboration with partners in Brazil, India, Kenya and Thailand, our research identified three pressing challenges for AI literacy policies and programmes, which complement findings from UNICEF’s research into how AI relates to the best interests of the child. Media and digital literacy practitioners will find them familiar!…(More)”.
Paper by Anu Ramachandran, Akash Yadav, and Andrew Schroeder: “Earthquakes and other disasters can cause significant damage to health facilities. Understanding the scale of impact is important to plan for disaster response efforts and long-term health system rebuilding. Current manual methods of assessing health facility damage, however, can take several weeks to complete. Many research teams have worked to develop artificial intelligence models that use satellite imagery to detect damage to buildings, but there is still limited understanding of how these models perform in real-world settings to identify damage to healthcare facilities. Here, we take two models developed after the February 2023 earthquake in Turkey and overlay their findings with the locations of three types of health facilities: hospitals, dialysis centers, and pharmacies. We examine the accuracy and agreement between the two models and explore sources of error and uncertainty. We found that it was feasible to overlay these data to yield rapid health facility damage reports, but that the sensitivity of the models was low for the health facilities evaluated. We discuss the key sources of error and ways to improve the accuracy and usability of these models for real-world health facility analysis…(More)”.
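The overlay itself amounts to a spatial join between facility locations and per-building damage predictions, followed by a sensitivity calculation. A minimal sketch of that step follows, assuming a geopandas workflow; the file names and the `damaged`/`damaged_true` columns are hypothetical placeholders, not the paper’s actual data or code.

```python
# A minimal sketch of the overlay described above, under assumed inputs:
# a polygon layer of model-predicted building damage and a point layer of
# health facilities. File names and columns are hypothetical.
import geopandas as gpd

damage = gpd.read_file("model_damage.geojson")           # building footprints + 'damaged' flag
facilities = gpd.read_file("health_facilities.geojson")  # hospitals, dialysis centers, pharmacies

# Attach each facility to the damage prediction for the building it falls within
overlay = gpd.sjoin(
    facilities,
    damage[["geometry", "damaged"]],
    how="left",
    predicate="within",
)

# Sensitivity against an assumed ground-truth column on the facility layer
tp = ((overlay["damaged"] == 1) & (overlay["damaged_true"] == 1)).sum()
fn = ((overlay["damaged"] != 1) & (overlay["damaged_true"] == 1)).sum()
print(f"Sensitivity for damaged facilities: {tp / (tp + fn):.2f}")
```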
Guidebook by Australian Government: “… is a comprehensive resource for designing, implementing, and evaluating deliberative engagement. It was developed to assist government agencies across Australia in selecting the most appropriate methods for deliberative engagement and putting them into practice effectively. The Guidebook is informed by decades of research and practical experience in the field of deliberative democracy. The advice presented in the Guidebook has been grounded in practice by a series of conversations led by the Centre for Deliberative Democracy with stakeholders from across NSW Government agencies in 2024…(More)”.
Paper by Steven David Pickering, Martin Ejnar Hansen, Yosuke Sunahara: “Parliaments are beginning to experiment with artificial intelligence (AI), but public acceptance remains uncertain. We examine attitudes to AI in two parliamentary democracies: the UK (n = 990) and Japan (n = 2117). We look at two key issues: AI helping Members of Parliament (MPs) make better decisions and AI or robots making decisions instead of MPs. Using original surveys, we test the roles of demographics, institutional trust, ideology, and attitudes toward AI. In both countries, respondents are broadly cautious: support is higher for AI that assists representatives than for delegating decisions, with especially strong resistance to delegation in the UK. Trust in government (and general social trust in Japan) increases acceptance; women and older respondents are more sceptical. In the UK, right-leaning respondents are more supportive, whereas ideology’s effect is weak or negative in Japan. Perceptions of AI dominate: seeing AI as beneficial and feeling able to use it raise support, while fear lowers it. We find that legitimacy for parliamentary AI hinges not only on safeguards but also on alignment with expectations of representation and accountability…(More)”.
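For readers curious what testing “the roles of demographics, institutional trust, ideology, and attitudes toward AI” typically looks like in practice, here is an illustrative regression sketch, not the authors’ actual specification; the data file and variable names are hypothetical.

```python
# An illustrative sketch of the kind of model such survey analyses run,
# not the authors' actual code. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_uk.csv")  # one row per respondent

# Support for AI assisting MPs, regressed on the predictor groups the
# paper names: demographics, trust, ideology, and AI perceptions
model = smf.ols(
    "support_ai_assist ~ age + female + trust_government"
    " + ideology_right + ai_beneficial + ai_self_efficacy + ai_fear",
    data=df,
).fit()
print(model.summary())
```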
Paper by Robin Mansell: “This paper examines whether artificial intelligence industry developers of large language models should be permitted to use copyrighted works to train their models without permission and compensation to creative industries rightsholders. This is examined in the UK context by contrasting a dominant social imaginary that prioritises market-driven growth of generative artificial intelligence applications that require text and data mining, and an alternative imaginary emphasising equity and non-market values. Policy proposals, including licensing, are discussed. It is argued that current debates privilege the interests of Big Tech in exploiting online data for profit, neglecting policies that could help to ensure that technology innovation and creative labour both contribute to the public good…(More)”.
Paper by Jacob Taylor and Joshua Tan: “In the 20th century, international cooperation became practically synonymous with the rules-based multilateral order, underpinned by treaty-based institutions such as the United Nations, the World Bank, and the World Trade Organization. But great‑power rivalries and structural inequities have eroded the functioning of these institutions, entrenching paralysis and facilitating coercion of the weak by the strong. Development finance and humanitarian aid are declining as basic principles like compromise, reciprocity, and the pursuit of mutually beneficial outcomes are called into question.
The retreat from cooperation by national governments has increased the space for other actors – including cities, firms, philanthropies, and standards bodies – to shape outcomes. In the AI sector, a handful of private companies in Shenzhen and Silicon Valley are racing to consolidate their dominance over the infrastructure and operating systems that will form the foundations of tomorrow’s economy.
If these firms are allowed to succeed unchecked, virtually everyone else will be left to choose between dependency and irrelevance. Governments and others working in the public interest will not only be highly vulnerable to geopolitical bullying and vendor lock-in; they will also have few options for capturing and redistributing AI’s benefits, or for managing the technology’s negative environmental and social externalities.
But as the coalition behind Apertus showed, a new kind of international cooperation is possible, grounded not in painstaking negotiations and intricate treaties, but in shared infrastructure for problem-solving. Regardless of which AI scenario unfolds in the coming years – technological plateau, slow diffusion, artificial general intelligence, or a collapsing bubble – middle powers’ best chance of keeping pace with the United States and China, and increasing their autonomy and resilience, lies in collaboration.
Improving the distribution of AI products is essential. To this end, middle powers, and their AI labs and firms, should scale up initiatives like the Public AI Inference Utility, the nonprofit responsible for the provision of global, web-based access to Apertus and other open-source models. But these countries will also have to close the capability gap with frontier models like GPT-5 or DeepSeek-V3.1 – and this will require bolder action. Only by coordinating energy, compute, data pipelines, and talent can middle powers co-develop a world-class AI stack…(More)”.