
Stefaan Verhulst

Paper by Santiago Cueto, Diether W. Beuermann, Julian Cristia, Ofer Malamud & Francisco Pardo: “This paper examines a large-scale randomized evaluation of the One Laptop Per Child (OLPC) program in 531 Peruvian rural primary schools. We use administrative data on academic performance and grade progression over 10 years to estimate the long-run effects of increased computer access on (i) school performance over time and (ii) students’ educational trajectories. Following schools over time, we find no significant effects on academic performance but some evidence of negative effects on grade progression. Following students over time, we find no significant effects on primary and secondary completion, academic performance in secondary school, or university enrollment. Survey data indicate that computer access significantly improved students’ computer skills but not their cognitive skills; treated teachers received some training but did not improve their digital skills and showed limited use of technology in classrooms, suggesting the need for additional pedagogical support…(More)”.

Laptops in the Long Run: Evidence from the One Laptop per Child Program in Rural Peru

Article by Sam Peters: “How do you teach somebody to read a language if there’s nothing for them to read? This is the problem facing developers across the African continent who are trying to train AI to understand and respond to prompts in local languages.

To train a language model, you need data. For a language like English, the easily accessible articles, books and manuals on the internet give developers a ready supply. But for most of Africa’s languages — of which there are estimated to be between 1,500 and 3,000 — there are few written resources available. Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa, uses the number of available Wikipedia articles to illustrate the amount of available data. For English, there are over 7 million articles. Tigrinya, spoken by around 9 million people in Ethiopia and Eritrea, has 335. For Akan, the most widely spoken native language in Ghana, there are none.

Of those thousands of languages, only 42 are currently supported on a language model. Of Africa’s 23 scripts and alphabets, only three — Latin, Arabic and Ge’Ez (used in the Horn of Africa) — are available. This underdevelopment “comes from a financial standpoint,” says Chinasa T. Okolo, the founder of Technēculturǎ, a research institute working to advance global equity in AI. “Even though there are more Swahili speakers than Finnish speakers, Finland is a better market for companies like Apple and Google.”

If more language models are not developed, the impact across the continent could be dire, Okolo warns. “We’re going to continue to see people locked out of opportunity,” she told CNN. As the continent looks to develop its own AI infrastructure and capabilities, those who do not speak one of these 42 languages risk being left behind…(More)”

Africa has thousands of languages. Can AI be trained on all of them?

Report by the Metagov community: “First, it identifies distinct layers of the AI stack that can be named and reimagined. Second, for each layer, it points to potential strategies, grounded in existing projects, that could steer that layer toward meaningful collective governance.

We understand collective governance as an emergent and context-sensitive practice that makes structures of power accountable to those affected by them. It can take many forms—sometimes highly participatory, and sometimes more representative. It might mean voting on members of a board, proposing a policy, submitting a code improvement, organizing a union, holding a potluck, or many other things. Governance is not only something that humans do; we (and our AIs) are part of broader ecosystems that might be part of governance processes as well. In that sense, a drought caused by AI-accelerated climate change is an input to governance. A bee dance and a village assembly could both be part of AI alignment protocols.

The idea of “points of intervention” here comes from the systems thinker Donella Meadows—especially her essay “Leverage Points: Places to Intervene in a System.” One idea that she stresses there is the power of feedback loops, which is when change in one part of a system produces change in another, and that in turn creates further change in the first, and so on. Collective governance is a way of introducing powerful feedback loops that draw on diverse knowledge and experience.

We recognize that not everyone is comfortable referring to these technologies as “intelligence.” We use the term “AI” most of all because it is now familiar to most people, as a shorthand for a set of technologies that are rapidly growing in adoption and hype. But a fundamental premise of ours is that this technology should enable, inspire, and augment human intelligence, not replace it. The best way to ensure that is to cultivate spaces of creative, collective governance.

These points of intervention do not focus on asserting ethical best practices for AI, or on defining what AI should look like or how it should work. We hope that, in the struggle to cultivate self-governance, healthy norms will evolve and sharpen in ways that we cannot now anticipate. But democracy is an opportunity, never a guarantee…(More)”

Collective Governance for AI: Points of Intervention

Book by Tom Williams: “… explores critical questions at the intersection of robotics and social justice. He considers the ways in which roboticists design their robots’ appearance, how robots think and act, how robots perceive people, and the domains into which robots are deployed. The book highlights not only the ways roboticists tend to reinforce white patriarchal power structures, but also how roboticists might instead subvert those power structures by applying theories and methods from a diverse range of fields.

Drawing on computer science; history and politics; law, criminology, and sociology; feminist, ethnic, and Black studies; literary and media studies; and social, moral, and cognitive psychology, the book connects questions of robot design with larger abolitionist movements by presenting a vision for a more socially just future of robotics…(More)”.

Degrees of Freedom: On Robotics and Social Justice

Interview by Margo Anderson: “For years, Gwen Shaffer has been leading Long Beach, Calif. residents on “data walks,” pointing out public Wi-Fi routers, security cameras, smart water meters, and parking kiosks. The goal, according to the professor of journalism and public relations at California State University, Long Beach, was to learn how residents felt about the ways in which their city collected data on them.

She also identified a critical gap in smart city design today: While cities may disclose how they collect data, they rarely offer ways to opt out. Shaffer spoke with IEEE Spectrum about the experience of leading data walks, and about her research team’s efforts to give citizens more control over the data collected by public technologies…Residents want agency. So that’s what led my research team to connect with privacy engineers at Carnegie Mellon University, in Pittsburgh. Norman Sadeh and his team had developed what they called the IoT Assistant. So I told them about our project, and proposed adapting their app for city-deployed technologies. Our plan is to give residents the opportunity to exercise their rights under the California Consumer Privacy Act with this app. So they could say, “Passport Parking app, delete all the data you’ve already collected on me. And don’t collect any more in the future.”…(More)”

Citizens of Smart Cities Need a Way to Opt Out

Book edited by Luca Belli and Walter Britto Gaspar: “This book provides a comprehensive analysis of personal data protection frameworks within the BRICS nations—Brazil, Russia, India, China, and South Africa—and explores the potential for enhanced cooperation on the management and regulation of international data flows amongst the growing number of group members. This study is particularly relevant in light of the recent BRICS commitment, enshrined in the grouping’s 2024 Declaration, to jointly promote ‘a global framework for data governance’. The ways in which this policy objective can be achieved are explored in the conclusion of this volume, highlighting what concrete path might realistically be followed by the group.

Drawing on the pioneering research of the CyberBRICS project, each chapter delves into the unique legislative landscapes of the member countries, highlighting significant regulatory developments such as Brazil’s General Data Protection Law (LGPD), Russia’s evolving privacy and data localization regulations, India’s Digital Personal Data Protection Act 2023 and its Data Empowerment and Protection Architecture, China’s Personal Information Protection Law (PIPL), and South Africa’s Protection of Personal Information Act (POPIA). The authors examine the complexities and challenges each nation faces in harmonizing data protection with economic growth and technological innovation, while also addressing issues of national sovereignty, cybersecurity, regulatory compliance, and international coordination.

A comparative analysis of the BRICS personal data architectures underscores the distinctive approaches and institutional frameworks adopted by BRICS countries and how this unusual grouping is growing, influencing an increasing number of countries with its policy and governance choices.
The concluding chapter synthesizes these insights to offer concrete solutions and mechanisms for sustainable transborder data transfers and digital trade, emphasizing the importance of fostering legal interoperability and shared governance principles. By proposing model contractual clauses and strategic cooperation pathways, the book advocates for a shared BRICS stance on personal data protection, aiming to balance data subject rights with the imperatives of cybersecurity and digital sovereignty in a connected digital economy. This volume is an essential resource for policymakers, legal practitioners, and scholars interested in understanding a future where emerging economies are increasingly shaping the dynamics of data governance and digital cooperation…(More)”.

Personal Data Architectures in the BRICS Countries

Article by John Thornhill: “…Trump’s Maga movement has also found its natural home on social media, with many thousands of accounts amplifying his messages. That makes it all the more jarring to discover that some of the most active “America First” accounts are run from abroad. 

In a move to secure “the integrity of the global town square”, the social media platform X last Friday began posting user location data. As a result, it emerged that dozens of influential Maga accounts are run out of foreign countries, including Russia, India and Nigeria. 

For example, the MAGA NATION account, which claims to be a “Patriot Voice for We The People” with more than 393,000 followers, is based in eastern Europe (Non-EU), X revealed.

Malign foreign actors are known to use imposter accounts — either to manipulate political debate or to generate traffic and make money. That is just one of the ways in which our infosphere is being deliberately degraded. 

There are three other types of social media deformities, too. Call them the four horsemen of the infocalypse. Unchecked, they will surely destroy our trust in almost anything we read online.

The second corrosive influence is how extremist views, once confined to the darker corners of the web, have seeped into mainstream debate, as documented by Julia Ebner, a researcher at Oxford university and author of Going Mainstream…(More)”

Taming the four horsemen of the infocalypse

Article by Stefaan Verhulst and Friederike Schüür: “As governments and international bodies race to establish guardrails for AI, most of the global agenda still focuses on managing what AI systems produce—their outputs. This article argues that such an approach is incomplete. The real foundations of safe, rights-respecting, and equitable AI lie upstream in how data is collected, governed, shared, and stewarded. Without integrating mature data governance practices, such as data stewardship and data commons, into AI governance, countries will struggle to protect fundamental rights or ensure that AI’s economic and social benefits are distributed fairly. A future-ready AI governance framework must therefore unite input and output governance into a single, coherent system…(More)”.

Toward AI Governance That Works: Examining the Building Blocks of AI and the Impacts

Article by Davidson Heath: “A century ago, two oddly domestic puzzles helped set the rules for what modern science treats as “real”: a Guinness brewer charged with quality control and a British lady insisting she can taste whether milk or tea was poured first.

Those stories sound quaint, but the machinery they inspired now decides which findings get published, promoted, and believed—and which get waved away as “not significant.” Instead of recognizing the limitations of statistical significance, fields including economics and medicine ossified around it, with dire consequences for science. In the 21st century, an obsession with statistical significance led to overprescription of both antidepressant drugs and a headache remedy with lethal side effects. There was another path we could have taken.

Sir Ronald Fisher succeeded 100 years ago in making statistical significance central to scientific investigation. Some scientists have argued for decades that blindly following his approach has led the scientific method down the wrong path. Today, statistical significance has brought many branches of science to a crisis of false-positive findings and bias.

At the beginning of the 20th century, the young science of statistics was blooming. One of the key innovations at this time was small-sample statistics—a toolkit for working with data that contain only a small number of observations. That method was championed by the great data scientist William S. Gosset. His ideas were largely ignored in favor of Fisher’s, and our ability to reach accurate and useful conclusions from data was harmed. It’s time to revive Gosset’s approach to experimentation and estimation…(More)”.
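The contrast the excerpt draws, between a bare significant/not-significant verdict and Gosset-style small-sample estimation of an effect size with its uncertainty, can be sketched in a few lines. This is an illustrative example only: the two samples are invented, and the t critical value for 18 degrees of freedom is taken from standard tables.

```python
import statistics

# Hypothetical small samples (n = 10 each), e.g. a control and a treated group.
control = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9, 5.1, 5.0]
treated = [5.0, 5.4, 5.1, 5.6, 5.2, 4.9, 5.5, 5.1, 5.3, 5.2]

n1, n2 = len(control), len(treated)
m1, m2 = statistics.mean(control), statistics.mean(treated)
v1, v2 = statistics.variance(control), statistics.variance(treated)  # sample variance

# Standard error of the difference in means (two-sample t, equal n).
se = (v1 / n1 + v2 / n2) ** 0.5
t_stat = (m2 - m1) / se  # the significance-test summary: one number, one verdict

# Gosset-style estimation: report the effect size with a 95% confidence
# interval instead of a significant/not-significant dichotomy.
t_crit = 2.101  # 95% two-sided critical value, 18 degrees of freedom (from tables)
diff = m2 - m1
ci = (diff - t_crit * se, diff + t_crit * se)
print(f"estimated effect: {diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
# → estimated effect: 0.23, 95% CI: (0.04, 0.42)
```

The interval carries strictly more information than the p-value alone: it says not only that the effect is unlikely to be zero, but how large it plausibly is, which is the question a practitioner like Gosset, controlling beer quality at Guinness, actually needed answered.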

Our Obsession With Statistical Significance Is Ruining Science

Article by Claire Brown: “Zillow, the country’s largest real estate listings site, has quietly removed a feature that showed the risks from extreme weather for more than one million home sale listings on its site.

The website began publishing climate risk ratings last year using data from the risk-modeling company First Street. The scores aimed to quantify each home’s risk from floods, wildfires, wind, extreme heat and poor air quality.

But real estate agents complained they hurt sales. Some homeowners protested the scores and found there was no way to challenge the ratings.

Earlier this month Zillow stopped displaying the scores after complaints from the California Regional Multiple Listing Service, which operates a private database funded by real estate brokers and agents. Zillow relies on that listing service and others around the country for its real estate data. The California listing service, one of the largest in the country, raised concerns about the accuracy of First Street’s flood risk models.

“Displaying the probability of a specific home flooding this year or within the next five years can have a significant impact on the perceived desirability of that property,” said Art Carter, California Regional Multiple Listing Service’s chief executive officer.

In a statement, Zillow spokeswoman Claire Carroll said the company remains committed to providing consumers with information that helps them make informed decisions. Real estate listings on Zillow now display hyperlinks to First Street’s website, and users can click through to view climate risk scores for a specific property.

The development highlights a growing tension within the real estate industry. Fires, floods and other disasters are posing more risks to homes as the planet warms, but forecasting exactly which houses are most vulnerable — and might sell for less — has proved fraught.

First Street models have shown that millions more properties are at risk of flooding than government estimates suggest.

Other real estate sites, including Redfin, Realtor.com and Homes.com, display similar First Street data alongside ratings for factors like walkability, public transportation and school quality…(More)”.

Zillow Removes Climate Risk Scores From Home Listings
