Stefaan Verhulst
Article by Daniel Björkegren: “The cost of using artificial intelligence (AI) has plummeted. Almost every smartphone in the world can access AI chatbots for free for several queries a day, through ChatGPT, WhatsApp, Google, or other providers. However, these services have yet to reach most mobile phones that lack active internet service, which number around 1.7 billion according to statistics from the International Telecommunication Union (ITU). Those phones are in the hands of roughly a billion people who are excluded from AI. They include some of the world’s poorest and most remote residents, who might use the technology to obtain information of all sorts, including advice on business, education, health, and agriculture.
Basic mobile phones could theoretically access AI services through text messages, through a technology developed in the 1990s called short message service or SMS. SMS allows the sending and receiving of messages of up to 160 characters of text. It is extremely efficient: SMS uses the leftover capacity of a cellular network, so the marginal resource use per message is tiny. And it is popular: over 6 trillion SMS messages were sent in the latest year in which data is available.
However, in many countries mobile phone operators charge application providers extremely high prices for SMS, which limit the viability of digital services for basic phones. SMS pricing in different countries can be opaque, so I gathered prices from several reputable international platforms and, when available, from major operators themselves. The prices I could find vary dramatically: for example, it would cost $0.08 to send 100 SMS messages to users in Ghana (through an operator)—but $44.75 to send the same messages to users in Pakistan (through the platform Vonage Nexmo; a major operator did not provide prices in response to my request). Overall, prices are high: for the median phone without internet, the price I found is $5.33 per 100 messages sent. These prices represent the lowest bulk or application-to-person (A2P) SMS rate that would be paid by an organization sending messages from a digital service (the end of the post describes the methodology). Consumers are typically also charged separate retail prices for any messages they send, though message allowances may be included in their plans. While it may be possible for organizations to negotiate lower prices, A2P SMS is expensive any way you look at it. The median A2P SMS price corresponds to $380,616 per gigabyte of data transmitted. In comparison, when using mobile data on a smartphone the median price of bandwidth is only $6 per gigabyte. For the median price of sending one message via SMS, you could send 58,775 similarly sized messages via mobile data on a smartphone. SMS is expensive relative to mobile internet around the world, as shown in Table 1, which reports statistics by country…(More)”.
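The per-gigabyte comparison follows from simple arithmetic, reproduced in the Python sketch below. The payload and gigabyte conventions are my assumptions (one SMS carries 160 GSM-7 characters, i.e. 140 bytes of payload, and 1 GB = 10^9 bytes); the post's exact figures imply slightly different underlying choices.

```python
# Back-of-the-envelope reproduction of the post's cost comparison.
# Assumptions (mine, not the post's methodology): one SMS carries
# 160 GSM-7 characters = 1,120 bits = 140 bytes; 1 GB = 10^9 bytes.

SMS_PAYLOAD_BYTES = 140
GB = 1_000_000_000

median_price_per_100_sms = 5.33                     # USD, from the post
price_per_sms = median_price_per_100_sms / 100      # $0.0533 per message

# Implied price of moving one gigabyte over A2P SMS
sms_price_per_gb = price_per_sms * (GB / SMS_PAYLOAD_BYTES)
print(f"A2P SMS: ${sms_price_per_gb:,.0f} per GB")  # ~$381k, close to the
                                                    # post's $380,616

# The same payload sent as mobile data at the median $6/GB
data_price_per_gb = 6.0                             # USD, from the post
data_price_per_sms_payload = data_price_per_gb * SMS_PAYLOAD_BYTES / GB
print(f"Equivalent messages: {price_per_sms / data_price_per_sms_payload:,.0f}")
# ~63,000 here vs. the post's 58,775; the gap suggests the post used a
# slightly larger per-message size or slightly different medians.
```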
Blog by Laura Tripaldi: “…Over the past 20 years, robotics has undergone a significant transition. A field once dominated by anthropomorphic bodies and rigid materials has begun to embrace a much wider range of possible incarnations, starting with the use of plastic and flexible materials to replace steel and hard polymers. Cecilia Laschi, one of the most authoritative figures in the field of robotics, has frequently emphasized how this transition from “rigid” to “soft” robots goes far beyond a simple change in the choice of materials, reflecting instead a broader shift in the entire anatomy of the automata, in the strategies used to control them, and in the philosophy that drives their construction. The most notable engineering achievement by Laschi and her colleagues at the Sant’Anna Institute in Pisa is a robotic arm originally designed in 2011 and inspired by the muscular hydrostatics of the octopus. In octopuses, limbs are capable of complex behaviors such as swimming and manipulating objects through coordinated deformations of muscle tissue, without any need for rigid components. In the robot designed by Laschi and her colleagues, the limb’s movement is achieved similarly, through deformable smart materials known as “shape memory alloys.”
Unlike those of a conventional robot, these movements are not pre-programmed, but emerge from the material’s response to external forces. This new engineering logic is part of what Laschi describes as embodied intelligence, an approach in which the robot’s behavior emerges from the interplay between its physical structure and its interaction with the world. The concept of “embodiment” challenges the hierarchical separation between body and mind, representation and experience. Rather than conceiving of intelligence as the product of an active mind controlling a passive body, embodiment emphasizes the relationship between cognition and corporeality. Originating in philosophy, this concept has over the last 20 years begun to spread and establish itself in the field of engineering, opening up new avenues for the design of versatile and adaptive robots. “The octopus is a biological demonstration of how effective behavior in the real world is closely linked to body morphology,” Laschi and her co-workers explain, “a good example of embodied intelligence, whose principles derive from the observation in nature that adaptive behavior emerges from the complex and dynamic interaction between body morphology, sensorimotor control, and the environment.”…(More)”.
Blog by Sonia Livingstone: “Our RIGHTS.AI research in four global South countries (Brazil, India, Kenya and Thailand) shows that children are encountering generative AI at home and school, in the platforms they use daily. From chatbots like ChatGPT and Copilot to AI embedded in Snapchat, WhatsApp and Google Search, generative AI now powers learning tools, entertainment apps, and even experimental mental health services. It answers questions and creates content – sometimes problematically – and quietly shapes what children see and do online.
Children are fast adopters of AI technologies, but they often lack sufficient understanding of how these systems work, the capacity to ensure their beneficial use, and the competence to grasp the implications. Can education about AI facilitate generative AI opportunities for children and mitigate its challenges and risks?
Through collaboration with partners in Brazil, India, Kenya and Thailand, our research identified three pressing challenges for AI literacy policies and programmes, which complement findings from UNICEF’s research into how AI relates to the best interests of the child. Media and digital literacy practitioners will find them familiar!…(More)”.
Paper by Anu Ramachandran, Akash Yadav, and Andrew Schroeder: “Earthquakes and other disasters can cause significant damage to health facilities. Understanding the scale of impact is important to plan for disaster response efforts and long-term health system rebuilding. Current manual methods of assessing health facility damage, however, can take several weeks to complete. Many research teams have worked to develop artificial intelligence models that use satellite imagery to detect damage to buildings, but there is still limited understanding of how these models perform in real-world settings to identify damage to healthcare facilities. Here, we take two models developed after the February 2023 earthquake in Turkey and overlay their findings with the locations of three types of health facilities: hospitals, dialysis centers, and pharmacies. We examine the accuracy and agreement between the two models and explore sources of error and uncertainty. We found that it was feasible to overlay these data to yield rapid health facility damage reports, but that the sensitivity of the models was low for the health facilities evaluated. We discuss the key sources of error and ways to improve the accuracy and usability of these models for real-world health facility analysis…(More)”.
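To make the overlay step concrete, below is a minimal sketch of how such a facility-level damage report could be assembled, assuming (hypothetically) that one model's building-damage predictions and the facility locations are available as GeoJSON layers; the paper's actual pipeline, file names, and column names may differ.

```python
# A minimal sketch (not the paper's code) of overlaying model-predicted
# building damage with health facility locations. File and column names
# ("damage_class", "facility_type") are hypothetical.
import geopandas as gpd

damage = gpd.read_file("model_a_damage.geojson")         # predicted footprints
facilities = gpd.read_file("health_facilities.geojson")  # hospitals, dialysis
                                                         # centers, pharmacies

# Flag each facility whose location falls within a predicted-damaged footprint
joined = gpd.sjoin(
    facilities.to_crs(damage.crs),                # align coordinate systems
    damage[damage["damage_class"] == "damaged"],
    how="left",
    predicate="within",
)
joined["flagged_damaged"] = joined["index_right"].notna()

# Rapid report: share of each facility type flagged as damaged by this model
print(joined.groupby("facility_type")["flagged_damaged"].mean())
```

Running the same join against a second model's predictions and comparing the flags facility by facility would yield the kind of agreement analysis the paper describes.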
Guidebook by Australian Government: “… is a comprehensive resource for designing, implementing, and evaluating deliberative engagement. It is designed to assist government agencies across Australia in selecting the most appropriate methods for deliberative engagement and effectively putting them into practice. The Guidebook is informed by decades of research and practical experience in the field of deliberative democracy. The advice presented in the Guidebook has been grounded in practice by a series of conversations led by the Centre for Deliberative Democracy with stakeholders from across NSW Government agencies in 2024…(More)”.
Paper by Steven David Pickering, Martin Ejnar Hansen, Yosuke Sunahara: “Parliaments are beginning to experiment with artificial intelligence (AI), but public acceptance remains uncertain. We examine attitudes to AI in two parliamentary democracies: the UK (n = 990) and Japan (n = 2117). We look at two key issues: AI helping Members of Parliament (MPs) make better decisions and AI or robots making decisions instead of MPs. Using original surveys, we test the roles of demographics, institutional trust, ideology, and attitudes toward AI. In both countries, respondents are broadly cautious: support is higher for AI that assists representatives than for delegating decisions, with especially strong resistance to delegation in the UK. Trust in government (and general social trust in Japan) increases acceptance; women and older respondents are more sceptical. In the UK, right-leaning respondents are more supportive, whereas the effect of ideology is weak or negative in Japan. Perceptions of AI dominate: seeing AI as beneficial and feeling able to use it raises support, while fear lowers it. We find that legitimacy for parliamentary AI hinges not only on safeguards but also on alignment with expectations of representation and accountability…(More)”.
Paper by Robin Mansell: “This paper examines whether artificial intelligence industry developers of large language models should be permitted to use copyrighted works to train their models without permission and compensation to creative industries rightsholders. This is examined in the UK context by contrasting a dominant social imaginary that prioritises market-driven growth of generative artificial intelligence applications that require text and data mining, and an alternative imaginary emphasising equity and non-market values. Policy proposals, including licensing, are discussed. It is argued that current debates privilege the interests of Big Tech in exploiting online data for profit, neglecting policies that could help to ensure that technology innovation and creative labour both contribute to the public good…(More)”.
Paper by Jacob Taylor and Joshua Tan: “In the 20th century, international cooperation became practically synonymous with the rules-based multilateral order, underpinned by treaty-based institutions such as the United Nations, the World Bank, and the World Trade Organization. But great‑power rivalries and structural inequities have eroded the functioning of these institutions, entrenching paralysis and facilitating coercion of the weak by the strong. Development finance and humanitarian aid are declining as basic principles like compromise, reciprocity, and the pursuit of mutually beneficial outcomes are called into question.
The retreat from cooperation by national governments has increased the space for other actors – including cities, firms, philanthropies, and standards bodies – to shape outcomes. In the AI sector, a handful of private companies in Shenzhen and Silicon Valley are racing to consolidate their dominance over the infrastructure and operating systems that will form the foundations of tomorrow’s economy.
If these firms are allowed to succeed unchecked, virtually everyone else will be left to choose between dependency and irrelevance. Governments and others working in the public interest will not only be highly vulnerable to geopolitical bullying and vendor lock-in; they will also have few options for capturing and redistributing AI’s benefits, or for managing the technology’s negative environmental and social externalities.
But as the coalition behind Apertus showed, a new kind of international cooperation is possible, grounded not in painstaking negotiations and intricate treaties, but in shared infrastructure for problem-solving. Regardless of which AI scenario unfolds in the coming years – technological plateau, slow diffusion, artificial general intelligence, or a collapsing bubble – middle powers’ best chance of keeping pace with the United States and China, and increasing their autonomy and resilience, lies in collaboration.
Improving the distribution of AI products is essential. To this end, middle powers, and their AI labs and firms, should scale up initiatives like the Public AI Inference Utility, the nonprofit responsible for the provision of global, web-based access to Apertus and other open-source models. But these countries will also have to close the capability gap with frontier models like GPT-5 or DeepSeek-V3.1 – and this will require bolder action. Only by coordinating energy, compute, data pipelines, and talent can middle powers co-develop a world-class AI stack…(More)”.
Article by Philip Ball: “…Economic growth at a rate of 1–2% annually is the norm for industrialized nations today. But such growth rates did not happen in pre-industrial times, despite technological innovations such as the windmill and the printing press.
Mokyr showed that the key difference between now and then was what he calls “useful knowledge”, or innovations based on scientific understanding [1]. One example is the advances during the Industrial Revolution, beginning in the eighteenth century, when improvements in steam engines could be made systematically rather than by trial and error.
Aghion and Howitt, for their part, clarified the market mechanisms behind sustained growth. In 1992, they presented a model showing how competition between companies selling new products allows innovations to enter the marketplace and displace older products: a process they called creative destruction [2].
Underlying growth, in other words, is a steady churn of businesses and products. The researchers showed how companies invest in research and development (R&D) to improve their chances of finding a new product, and predicted the optimal level of such investment…
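For readers who want the formal core of that result, a standard textbook statement of the Aghion–Howitt growth equation is sketched below; the notation follows common expositions of the 1992 model, not this article.

```latex
% Sketch in standard notation (not reproduced from this article).
% Research employment n generates innovations via a Poisson process with
% arrival rate \lambda n; each innovation multiplies productivity by a
% factor \gamma > 1 and destroys the previous incumbent's monopoly rents.
% With steady-state research employment \hat{n}, average growth is
\[
  g = \lambda \, \hat{n} \, \ln \gamma ,
\]
% rising with research effort \hat{n}, research productivity \lambda,
% and the size \gamma of each innovative step.
```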
According to Ufuk Akcigit, an economist at the University of Chicago in Illinois, Aghion and Howitt highlight an important aspect of economic growth, which is that spending on R&D does not by itself guarantee higher rates of growth: “Unless we replace inefficient firms from the economy, we cannot make space for newcomers with new ideas and better technologies.”
“When a new entrepreneur emerges, they have every incentive to come up with a radical new technology,” Akcigit says. “As soon as they become an incumbent, their incentive vanishes” and they no longer invest in R&D to drive innovation.
Thus, because companies cannot expect to remain at the forefront of innovation indefinitely, the incentive for investing in R&D coming from market forces alone declines as a company’s market share grows. To guarantee the societal benefits of constant innovation, the model suggests that it is in society’s interests for the state to subsidize R&D, so long as the return is more than merely incremental improvement.
The work of all three laureates also acknowledges the complex social consequences of growth. In the early days of the Industrial Revolution, there were concerns about how mechanization would cause unemployment among manual workers — a worry echoed today with the increasing use of AI in place of human labour. But Mokyr showed that, in fact, early mechanization led to the creation of jobs.
Creative destruction, meanwhile, leads to companies failing and jobs being lost. Aghion and Howitt emphasized that society needs safety nets and constructive negotiation of conflicts to navigate such problems.
Their model “recognizes the messiness and complexity of how innovation happens in real economies”, says Coyle. “The idea that a country’s productivity level increases by companies going bust and new ones coming in is a difficult sell, but the evidence that that’s part of the mechanism is pretty strong.”…(More)”.
Article by Stefaan G. Verhulst: “…For years, public interest advocates and other defenders of freedom on the Internet used “open” as a rallying cry. Open data. Open science. Open government. The idea was simple and noble: Knowledge should be shared freely, accessibly, and transparently to empower citizens, accelerate discovery, and improve governance.

For a time, this vision made sense, even if it was imperfectly implemented. But as with many well-intentioned revolutions, openness has more recently been weaponised. What began as a movement to democratise knowledge has instead become justification for a new kind of extraction — this time not of oil or minerals, but of meaning. This phenomenon has become especially evident with the rise of generative AI, which has a voracious appetite for public data to train its models and refine its predictions. In the process, the very datasets, research repositories, and public web archives that were designed to serve the public interest have been harvested to train the large language models now controlled by a few corporations in a handful of countries.
The situation is dire but it is not hopeless. In what follows, we describe the problem in greater detail, outline the insufficiency of current mechanisms, and then discuss some possible mitigating responses…(More)”.
