
Stefaan Verhulst

Article by Damilare Dosunmu: “A battle for data sovereignty is brewing from Africa to Asia.

Developing nations are challenging Big Tech’s decades-long hold on global data by demanding that their citizens’ information be stored locally. The move is driven by the realization that countries have been giving away their most valuable resource for tech giants to build a trillion-dollar market capitalization.

In April, Nigeria asked Google, Microsoft, and Amazon to set concrete deadlines for opening data centers in the country. Nigeria has been making this demand for about four years, but the companies have so far failed to fulfill their promises. Now, Nigeria has set up a working group with the companies to ensure that data is stored within its shores.

“We told them no more waivers — that we need a road map for when they are coming to Nigeria,” Kashifu Inuwa Abdullahi, director-general of Nigeria’s technology regulator, the National Information Technology Development Agency, told Rest of World.

Other developing countries, including India, South Africa, and Vietnam, have also implemented similar rules demanding that companies store data locally. India’s central bank requires payment companies to host financial data within the country, while Vietnam mandates that foreign telecommunications, e-commerce, and online payments providers establish local offices and keep user data within its shores for at least 24 months…(More)”.

Why Big Tech is threatened by a global push for data sovereignty

Article by Maddy Crowell: “…Most of St. Lucia, which sits at the southern end of an archipelago stretching from Trinidad and Tobago to the Bahamas, is poorly mapped. Aside from strips of sandy white beaches that hug the coastline, the island is draped with dense rainforest. A few green signs hang limp and faded from utility poles like an afterthought, identifying streets named during more than a century of dueling British and French colonial rule. One major road, Micoud Highway, runs like a vein from north to south, carting tourists from the airport to beachfront resorts. Little of this is accurately represented on Google Maps. Almost nobody uses, or has, a conventional address. Locals orient one another with landmarks: the red house on the hill, the cottage next to the church, the park across from Care Growell School.

Our van wound off Micoud Highway into an empty lot beneath the shade of a banana tree. A dog panted, belly up, under the hot November sun. The group had been recruited by the Humanitarian OpenStreetMap Team, or HOT, a nonprofit that uses an open-source data platform called OpenStreetMap to create a map of the world that resembles Google’s with one key exception: Anyone can edit it, making it a sort of Wikipedia for cartographers.

The organization has an ambitious goal: Map the world’s unmapped places to help relief workers reach people when the next hurricane, fire, or other crisis strikes. Since its founding in 2010, some 340,000 volunteers around the world have been remotely editing OpenStreetMap to better represent the Caribbean, Southeast Asia, parts of Africa and other regions prone to natural disasters or humanitarian emergencies. In that time, they have mapped more than 2.1 million miles of roads and 156 million buildings. They use aerial imagery captured by drones, aircraft, or satellites to help trace unmarked roads, waterways, buildings, and critical infrastructure. Once this digital chart is more clearly defined, field-mapping expeditions like the one we were taking add the names of every road, house, church, or business represented by gray silhouettes on their paper maps. The effort fine-tunes the places that bigger players like Google Maps get wrong — or don’t get at all…(More)”

Mapping the Unmapped

Article by James Plunkett: “…Unlike many political soundbites, however, missions have a strong academic heritage, drawing on years of work from Mariana Mazzucato and others. They gained support as a way for governments to be less agnostic about the direction of economic growth and its social implications, most obviously on issues like climate change, while still avoiding old-school statism. The idea is to pursue big goals not with top-down planning but with what Mazzucato calls ‘orchestration’, using the power of the state to drive innovation and shape markets to an outcome.

For these reasons, missions have proven increasingly popular with governments. They have been used by administrations from the EU to South Korea and Finland, and even in Britain under Theresa May, although she didn’t have time to make them stick.

Despite these good intentions and heritage, however, missions are proving difficult. Some say the UK government is “mission-washing” – using the word, but not really adopting the ways of working. And although missions were mentioned in the spending review, their role was notably muted when compared with the central position they had in Labour’s manifesto.

Still, it would seem a shame to let missions falter without interrogating the reasons. So why are missions so difficult? And what, if anything, could be done to strengthen them as Labour moves into year two? I’ll touch on four characteristics of missions that jar with Whitehall’s natural instincts, and in each case I’ll ask how it’s going, and how Labour could be bolder…(More)”.

Why are “missions” proving so difficult?

Article by Brian Johnston: “Scientists are always short of research funds, but the boom in the popularity of expedition cruising has given them an unexpected opportunity to access remote places.

Instead of making single, expensive visits to Antarctica, for example, scientists hitch rides on cruise ships that make repeat visits and provide the opportunity for data collection over an entire season.

Meanwhile, cruise passengers’ willingness to get involved in a “citizen science” capacity is proving invaluable for crowdsourcing data on everything from whale migration and microplastics to seabird populations. And it isn’t only the scientists who benefit. Guests get a better insight into the environments in which they sail, and feel that they’re doing their bit to understand and preserve the wildlife and landscapes around them.

Citizen-science projects produce tangible results, among them that ships in Antarctica now sail under 10 knots after a study showed that, at that speed, whales have a far greater chance of avoiding or surviving ship strikes. In 2023 Viking Cruises encountered rare giant phantom jellyfish in Antarctica, and in 2024 discovered a new chinstrap penguin colony near Antarctica’s Astrolabe Island.

Viking’s expedition ships have a Science Lab and the company works with prestigious partners such as the Cornell Lab of Ornithology and Norwegian Polar Institute. Expedition lines with visiting scientist programs include Chimu Adventures, Lindblad Expeditions and Quark Expeditions, which works with Penguin Watch to study the impact of avian flu…(More)”.

This new cruise-ship activity is surprisingly popular

UNESCO Report: “Generative Artificial Intelligence (Gen AI) has become an integral part of our digital landscape and daily life. Understanding its risks and participating in solutions is crucial to ensuring that it works for the overall social good. This PLAYBOOK introduces Red Teaming as an accessible tool for testing and evaluating AI systems for social good, exposing stereotypes, bias and potential harms. As a way of illustrating harms, practical examples of Red Teaming for social good are provided, building on the collaborative work carried out by UNESCO and Humane Intelligence. The results demonstrate forms of technology-facilitated gender-based violence (TFGBV) enabled by Gen AI and provide practical actions and recommendations on how to address these growing concerns.

Red Teaming — the practice of intentionally testing Gen AI models to expose vulnerabilities — has traditionally been used by major tech companies and AI labs. One tech company surveyed 1,000 machine learning engineers and found that 89% reported vulnerabilities (Aporia, 2024). This PLAYBOOK provides access to these critical testing methods, enabling organizations and communities to actively participate. Through the structured exercises and real-world scenarios provided, participants can systematically evaluate how Gen AI models may perpetuate, either intentionally or unintentionally, stereotypes or enable gender-based violence. By providing organizations with this easy-to-use tool to conduct their own Red Teaming exercises, participants can select their own thematic area of concern, enabling evidence-based advocacy for more equitable AI for social good…(More)”.

Red Teaming Artificial Intelligence for Social Good

Paper by Warren Liang et al: “In the age of ubiquitous computing, the convergence of wearable technologies and social sentiment analysis has opened new frontiers in both consumer engagement and patient care. These technologies generate continuous, high-frequency, multimodal data streams that are increasingly being leveraged by artificial intelligence (AI) systems for predictive analytics and adaptive interventions. This article explores a unified, integrated framework that combines physiological data from wearables and behavioral insights from social media sentiment to drive proactive engagement strategies. By embedding AI-driven systems into these intersecting data domains, healthcare organizations, consumer brands, and public institutions can offer hyper-personalized experiences, predictive health alerts, emotional wellness interventions, and behaviorally aligned communication.

This paper critically evaluates how machine learning models, natural language processing, and real-time stream analytics can synthesize structured and unstructured data for longitudinal engagement, while also exploring the ethical, privacy, and infrastructural implications of such integration. Through cross-sectoral analysis across healthcare, retail, and public health, we illustrate scalable architectures and case studies where real-world deployment of such systems has yielded measurable improvements in satisfaction, retention, and health outcomes. Ultimately, the synthesis of wearable telemetry and social context data through AI systems represents a new paradigm in engagement science — moving from passive data collection to anticipatory, context-aware engagement ecosystems…(More)”.

Harnessing Wearable Data and Social Sentiment: Designing Proactive Consumer and Patient Engagement Strategies through Integrated AI Systems

Conference Proceedings edited by Josef Drexl, Moritz Hennemann, Patricia Boshe, and Klaus Wiedemann: “The increasing relevance of data is now recognized all over the world. The large number of regulatory acts and proposals in the field of data law serves as a testament to the significance of data processing for the economies of the world. The European Union’s Data Strategy, the African Union’s Data Policy Framework and the Australian Data Strategy only serve as examples within a plethora of regulatory actions. Yet, the purposeful and sensible use of data does not only play a role in economic terms, e.g. regarding the welfare or competitiveness of economies. The implications for society and the common good are at least equally relevant. For instance, data processing is an integral part of modern research methodology and can thus help to address the problems the world is facing today, such as climate change.

The conference was the third and final event of the Global Data Law Conference Series. Legal scholars from all over the world met, presented and exchanged their experiences on different data-related regulatory approaches. Various instruments and approaches to the regulation of data – personal or non-personal – were discussed, without losing sight of the global effects going hand-in-hand with different kinds of regulation.

In compiling the conference proceedings, this book does not only aim at providing a critical and analytical assessment of the status quo of data law in different countries today, it also aims at providing a forward-looking perspective on the pressing issues of our time, such as: How to promote sensible data sharing and purposeful data governance? Under which circumstances, if ever, do data localisation requirements make sense? How – and by whom – should international regulation be put in place? The proceedings engage in a discussion on future-oriented ideas and actions, thereby promoting a constructive and sensible approach to data law around the world…(More)”.

Comparative Data Law

The Economist: “CHINA’S 1.1BN internet users churn out more data than anyone else on Earth. So does the country’s vast network of facial-recognition cameras. As autonomous cars speed down roads and flying ones criss-cross the skies, the quality and value of the information flowing from emerging technologies will soar. Yet the volume of data is not the only thing setting China apart. The government is also embedding data management into the economy and national security. That has implications for China, and holds lessons for democracies.

China’s planners see data as a factor of production, alongside labour, capital and land. Xi Jinping, the president, has called data a foundational resource “with a revolutionary impact” on international competition. The scope of this vision is unparalleled, affecting everything from civil liberties to the profits of internet firms and China’s pursuit of the lead in artificial intelligence.

Mr Xi’s vision is being enacted fast. In 2021 China released rules modelled on Europe’s General Data Protection Regulation (GDPR). Now it is diverging quickly from Western norms. All levels of government are to marshal the data resources they have. A sweeping project to assess the data piles at state-owned firms is under way. The idea is to value them as assets, and add them to balance-sheets or trade them on state-run exchanges. On June 3rd the State Council released new rules to compel all levels of government to share data.

Another big step is a digital ID, due to be launched on July 15th. Under this, the central authorities could control a ledger of every person’s websites and apps. Connecting someone’s name with their online activity will become harder for the big tech firms which used to run the system. They will see only an anonymised stream of digits and letters. Chillingly, however, the ledger may one day act as a panopticon for the state.

China’s ultimate goal appears to be to create an integrated national data ocean, covering not just consumers but industrial and state activity, too. The advantages are obvious, and include economies of scale for training AI models and lower barriers to entry for small new firms…(More)”.

China is building an entire empire on data

Article by Blake Montgomery: “…tech companies notched several victories in the fight over their use of copyrighted text to create artificial intelligence products.

Anthropic: A US judge has ruled that Anthropic, maker of the Claude chatbot, did not breach copyright law by using books to train its artificial intelligence system without the authors’ permission. Judge William Alsup compared the Anthropic model’s use of books to a “reader aspiring to be a writer.”

And the next day, Meta: The US district judge Vince Chhabria, in San Francisco, said in his decision on the Meta case that the authors had not presented enough evidence that the technology company’s AI would cause “market dilution” by flooding the market with work similar to theirs.

The same day that Meta received its favorable ruling, a group of writers sued Microsoft, alleging copyright infringement in the creation of that company’s Megatron text generator. Judging by the rulings in favor of Meta and Anthropic, the authors are facing an uphill battle.

These three cases are skirmishes in the wider legal war over copyrighted media, which rages on. Three weeks ago, Disney and NBCUniversal sued Midjourney, alleging that the company’s namesake AI image generator and forthcoming video generator made illegal use of the studios’ iconic characters like Darth Vader and the Simpson family. The world’s biggest record labels – Sony, Universal and Warner – have sued two companies that make AI-powered music generators, Suno and Udio. On the textual front, the New York Times’ suit against OpenAI and Microsoft is ongoing.

The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”…(More)”.

AI companies start winning the copyright fight

Article by Georges-Simon Ulrich: “When the UN created a Statistical Commission in 1946, the world was still recovering from the devastation of the second world war. Then, there was broad consensus that only reliable, internationally comparable data could prevent conflict, combat poverty and anchor global co-operation. Nearly 80 years later, this insight remains just as relevant, but the context has changed dramatically…

This erosion of institutional capacity could not come at a more critical moment. The UN is unable to respond adequately as it is facing a staffing shortfall itself. Due to ongoing austerity measures at the UN, many senior positions remain vacant, and the director of the UN Statistics Division has retired, with no successor appointed. This comes at a time when bold and innovative initiatives — such as a newly envisioned Trusted Data Observatory — are urgently needed to make official statistics more accessible and machine-readable.

Meanwhile, the threat of targeted disinformation is growing. On social media, distorted or manipulated content spreads at unprecedented speed. Emerging tools like AI chatbots exacerbate the problem. These systems rely on web content, not verified data, and are not built to separate truth from falsehood. Making matters worse, many governments cannot currently make their data usable for AI because it is not standardised, not machine-readable, or not openly accessible. The space for sober, evidence-based discourse is shrinking.

This trend undermines public trust in institutions, strips policymaking of its legitimacy, and jeopardises the UN Sustainable Development Goals (SDGs). Without reliable data, governments will be flying blind — or worse: they will be deliberately misled.

When countries lose control of their own data, or cannot integrate it into global decision-making processes, they become bystanders to their own development. Decisions about their economies, societies and environments are then outsourced to AI systems trained on skewed, unrepresentative data. The global south is particularly at risk, with many countries lacking access to quality data infrastructures. In countries such as Ethiopia, unverified information spreading rapidly on social media has fuelled misinformation-driven violence.

The Covid-19 pandemic demonstrated that strong data systems enable better crisis response. To counter these risks, the creation of a global Trusted Data Observatory (TDO) is essential. This UN co-ordinated, democratically governed platform would help catalogue and make accessible trusted data around the world — while fully respecting national sovereignty…(More)”

Bad data leads to bad policy
