
Stefaan Verhulst

UNESCO Issue Brief by Andrea Millwood-Hargrave: “This brief delineates key principles for enhancing information accessibility and meaningful access in digital realms, addressing diverse facets such as multilingualism, metadata, and interoperability. Emphasizing the critical role of transparent, trustworthy information ecosystems, the Issue Brief advocates for inclusive design, transparency, data integrity, legal conformity, efficiency, and flexibility. By sharing good practices ranging from Canada’s accessibility tools to Chile’s initiatives in open justice and AI policy, the document articulates actionable principles for policymakers to foster equitable access to information, thereby fortifying societal resilience and informed decision-making…(More)”.

Global challenges for Information accessibility: key principles and good practices in the digital age

Handbook by Denis Kierans and Albert Kraler: “…brings together concepts, findings, methods, and case studies to offer a clear, practical understanding of irregular migration data. It addresses the challenges of conceptualising, measuring, interrogating, and using data on one of Europe’s most politically sensitive migration issues. Drawing on examples from across Europe and beyond, it provides guidance on concepts and definitions, ethics, estimation methods, data innovation, and policy application. It is designed to support policymakers, practitioners, and researchers seeking more informed, transparent, and coordinated approaches to irregular migration data…(More)”.

Handbook on Irregular Migration Data: Concepts, Methods and Practices

A landscape review by UNICEF: “Education technology (EdTech) offers powerful opportunities to improve learning outcomes, personalize instruction, and expand access to quality education, particularly in low-resource settings and for children with disabilities. At the same time, the collection and use of student data present significant risks, including privacy violations, biased profiling, and the commercial exploitation of children’s information.

To help address these challenges, UNICEF partnered with UNESCO and the Global Privacy Assembly to produce a global landscape review on data governance in EdTech. The paper identifies the key stakeholders in EdTech data governance and the obstacles they face in protecting children’s rights. It also examines existing multi-stakeholder governance mechanisms across countries, highlighting the respective roles of governments, data protection authorities, and EdTech companies.

The landscape review is accompanied by policy recommendations that demonstrate how sound data governance principles can be applied within the EdTech sector. Developed through a global consultation process with data protection authorities, civil society organizations, academics, and EdTech companies across five regions, the recommendations include strengthening legal and regulatory frameworks, embracing anticipatory governance, promoting rights-based business models, and fostering both multi-stakeholder and multilateral collaboration.

By adopting these recommendations, stakeholders can help ensure that EdTech not only drives innovation in education but also safeguards the rights and well-being of every child…(More)”.

Data Governance for EdTech

Article by Bitange Ndemo and Marine Ragnet: “Kenya has become a technological centre for East Africa and is often referred to as the “Silicon Savannah.” Its AI-focused startups are turning the country into a poster child for developmental leapfrogging in the Global South. A more fundamental concern, however, is ensuring that future AI advancements in the country are aimed toward local inclusivity and local ownership, rather than outsourced to foreign actors.

AI has the potential to transform agriculture, finance, healthcare, and education. From yield-enhancing precision farming tools to AI-powered credit scoring for underserved populations, AI is transforming the social and economic landscape in Kenya. The problem, however, is that this progress is unevenly distributed: rural populations lag behind their urban counterparts, widening inequality. Gender inequality, coupled with a lack of skilled professionals, makes building a domestic AI ecosystem very difficult.

Existing AI systems are often made with foreign cultures and values embedded in their code, and because of this they rarely work seamlessly in the local context. Moreover, control and monetization of the vast amounts of data that Kenyan citizens generate often rest with foreign companies. This form of “digital colonization” is a major threat to Kenya’s autonomy.

Increasingly, AI sovereignty is becoming a major issue, prompting calls for a strategy to protect the nation’s “technological growth,” ensure inclusivity, and effectively serve local needs. This requires a comprehensive approach that combines community innovation, skill-building, governance, and investment in infrastructure…(More)”.

AI sovereignty in Kenya: Building a future rooted in local ownership and innovation

Book by Steven Pinker: “Common knowledge is necessary for coordination, for making arbitrary but complementary choices like driving on the right, using paper currency, and coalescing behind a political leader or movement. It’s also necessary for social coordination: everything from rendezvousing at a time and place to speaking the same language to forming enduring relationships of friendship, romance, or authority. Humans have a sixth sense for common knowledge, and we create it with signals like laughter, tears, blushing, eye contact, and blunt speech.

But people also go to great lengths to avoid common knowledge—to ensure that even if everyone knows something, they can’t know that everyone else knows they know it. And so we get rituals like benign hypocrisy, veiled bribes and threats, sexual innuendo, and pretending not to see the elephant in the room.

Pinker shows how the hidden logic of common knowledge can make sense of many of life’s enigmas: financial bubbles and crashes, revolutions that come out of nowhere, the posturing and pretense of diplomacy, the eruption of social media shaming mobs and academic cancel culture, the awkwardness of a first date. Artists and humorists have long mined the intrigues of common knowledge, and Pinker liberally uses their novels, jokes, cartoons, films, and sitcom dialogues to illuminate social life’s tragedies and comedies. Along the way he answers questions like:

  • Why do people hoard toilet paper at the first sign of an emergency?
  • Why are Super Bowl broadcasts filled with ads for crypto?
  • Why, in American presidential primary voting, do citizens typically select the candidate they believe is preferred by others rather than their favorite?
  • Why did Russian authorities arrest a protester who carried a blank sign?
  • Why is it so hard for nervous lovers to say goodbye at the end of a phone call?
  • Why does everyone agree that if we were completely honest all the time, life would be unbearable?

Consistently riveting in explaining the paradoxes of human behavior, When Everyone Knows That Everyone Knows… invites us to understand the ways we try to get into each other’s heads and the harmonies, hypocrisies, and outrages that result…(More)”.

When Everyone Knows That Everyone Knows . . .

Paper by Andrei Richter: “Since the end of the Second World War, freedom of expression has become internationally recognized as a fundamental human right, important in itself but also for all other rights, as well as for democracy. The opportunities to defend the existence and flourishing of human rights, to expose and challenge flaws in their implementation, and to make petitions and demands on their observance are only possible through freedom of expression, thus placing it at the heart of the modern human rights system worldwide.

Freedom of expression consists of freedom of information, freedom of political debate, freedom of the media, freedom of artistic expression, and freedom of cultural expression, as well as academic freedom of expression. It is, first and foremost, a legal right with sources in the modern international human rights laws and standards that are set by international organizations and international courts. The international organizations in this context are more accurately called intergovernmental organizations, as they are created by states through multilateral treaties to work in good faith on issues of common interest.

In addition to the global United Nations human rights system, there are three generally recognized regional systems or models for the protection of human rights—in Africa, in the Americas, and in Europe. They might vary in terms of their nature, scope of competence and powers, normative underpinnings, and so on, though each of them is based on a regional human rights treaty, and all three also specifically guarantee the right to freedom of expression. Under international law, the right to freedom of expression belongs to everyone…(More)”.

Freedom of Expression and International Organizations

Paper by Ida Kubiszewski: “To achieve sustainable wellbeing for both humanity and the rest of nature, we must shift from a narrow focus on Gross Domestic Product (GDP) to a broader understanding and measurement of sustainable wellbeing and prosperity within the planetary boundaries. Several hundred alternative indicators have been proposed to replace GDP, but their variety and lack of consensus have allowed GDP to retain its privileged status. What is needed now is broad agreement on shifting beyond GDP. We conducted a systematic literature review of existing alternative indicators and identified over 200 across multiple spatial scales. Using these indicators, we built a database to compare their similarities and differences. While the terminology for describing the components of wellbeing varied greatly, there was a surprising degree of agreement on the core concepts and elements. We applied semantic modelling to estimate the degree of similarity among the indicators’ components and identified those that represented a broad synthesis. Results show that indicators with around 20 components capture a large share of the overall similarity across the indicators in the dataset. Beyond 20 components, adding additional components yielded diminishing returns in similarity. Based on this, we created a 20-component indicator to serve as a model for building consensus and mapped its relationship to several well-known alternative indicators. We aim for this database and synthesis to support broad stakeholder engagement toward the consensus we need to move beyond GDP…(More)”.
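The semantic-similarity step described above can be illustrated with a simplified sketch. This is a lexical stand-in for the paper’s semantic modelling (which presumably uses richer language models); the `cosine_similarity` helper and the sample component names below are illustrative assumptions, not the authors’ actual method:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts treated as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical wellbeing-indicator components, compared pairwise
components = ["physical health", "mental health and wellbeing", "income and wealth"]
similarity = [[cosine_similarity(x, y) for y in components] for x in components]
```

Components whose pairwise similarity clears a chosen threshold could then be grouped together, roughly the kind of clustering that would yield a compact synthesis like the paper’s 20-component indicator.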

Building consensus on societal wellbeing: a semantic synthesis of indicators to move beyond GDP

Paper by Chiara Gallese et al: “Research has shown how data sets convey social bias in Artificial Intelligence systems, especially those based on machine learning. A biased data set is not representative of reality and might contribute to perpetuate societal biases within the model. To tackle this problem, it is important to understand how to avoid biases, errors, and unethical practices while creating the data sets. In order to provide guidance for the use of data sets in contexts of critical decision-making, such as health decisions, we identified six fundamental data set features (balance, numerosity, unevenness, compliance, quality, incompleteness) that could affect model fairness. These features were the foundation for the FanFAIR framework.

We extended the FanFAIR framework for the semi-automated evaluation of fairness in data sets, by combining statistical information on data with qualitative features. In particular, we present an improved version of FanFAIR which introduces novel outlier detection capabilities working in multivariate fashion, using two state-of-the-art methods: the Empirical Cumulative-distribution Outlier Detection (ECOD) and Isolation Forest. We also introduce a novel metric for data set balance, based on an entropy measure.

We addressed the issue of how much (un)fairness can be included in a data set used for machine learning research, focusing on classification issues. We developed a rule-based approach based on fuzzy logic that combines these characteristics into a single score and enables a semi-automatic evaluation of a data set in algorithmic fairness research. Our tool produces a detailed visual report about the fairness of the data set. We show the effectiveness of FanFAIR by applying the method on two open data sets…(More)”.
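The entropy-based balance metric mentioned above can be sketched as the normalized Shannon entropy of the class-label distribution, where 1.0 indicates perfectly balanced classes and 0.0 a single-class data set. This is a minimal sketch under that assumption; FanFAIR’s exact formulation may differ, and `balance_score` is an illustrative name:

```python
import math
from collections import Counter

def balance_score(labels) -> float:
    """Normalized Shannon entropy of the label distribution:
    1.0 = perfectly balanced classes, 0.0 = all samples in one class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    if k <= 1:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)  # divide by the maximum entropy for k classes
```

A fuzzy rule base could then combine a score like this with the other five data set features into the single fairness score the paper describes.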

FanFAIR: sensitive data sets semi-automatic fairness assessment

Report by CIPL: “Without these data categories, organizations may be unable to uncover disparities in how AI models perform across different demographic groups, making it impossible to ensure fairness and equal benefits of AI across all communities. For instance, to ensure a bank’s AI system does not assess customers’ creditworthiness for a mortgage in a way that disproportionately denies mortgages to people of a certain ethnicity, the developer of the AI system needs to be able to distinguish the ethnicity of the people about whom its AI system makes decisions. Regulators such as the UK’s ICO acknowledge that sensitive data may be necessary to assess discrimination risks, evaluate model performance, and retrain models accordingly. The categorical restrictions many data protection laws place on sensitive data processing, such as requiring specific consent, coupled with an increasingly broad interpretation of the concept of sensitive data, can leave organizations unable to include sensitive data in AI training datasets where such consent is not obtainable, to the detriment of the model’s performance…(More)”.

Rethinking Sensitive Data in the Age of AI

Article by Sara Frueh: “State and local governments around the U.S. are harnessing AI for a range of applications — such as translating public meetings into multiple languages in real time to allow broader participation, or using chatbots to deliver services to the public.

While AI systems can offer benefits to agencies and the people they serve, the technology can also be harmful if misapplied. In one high-profile example, around 40,000 Michigan residents were wrongly accused of unemployment insurance fraud based on a state AI system with a faulty algorithm and inadequate human oversight.

“We have to think about a lot of AI systems as potentially useful and quite often unreliable, and treat them as such,” said Suresh Venkatasubramanian of Brown University, co-author of a recent National Academies rapid expert consultation on AI use by state and local governments.

He urged state and local leaders to avoid extreme hype about AI, both its promise and dangers, and instead to use a careful, experimental approach. “We have to embrace an ethos of experimentation and sandboxing, where we can understand how they work in our specific contexts.”

Venkatasubramanian spoke at a National Academies webinar that explored the report’s recommendations and other AI-related resources for state and local governments. He was joined by fellow co-author Nathan McNeese of Clemson University, and Leila Doty, a privacy and AI analyst for the city of San José, California, along with Kate Stoll of the American Association for the Advancement of Science, who moderated the session.

In considering whether to implement AI, McNeese advised state and city agencies to start by asking, “What’s the problem?” or “What’s the aspect of the organization that we want to enhance?”

“You do not want to introduce AI if there is not a specific need,” said McNeese. “You don’t want to implement AI because everyone else is.”

The point was seconded by Venkatasubramanian. “If you have a problem that needs to be solved, figure out what people need to solve it,” he said. “Maybe AI can be a part of it, maybe not. Don’t start by asking, ‘How can we bring AI to this?’ That way leads to problems.”

When AI is used, the report urges a human-centered approach to designing it — one that takes people’s needs, wants, and motivations into account, explained McNeese.

Those who have domain expertise — employees who provide services of value to the public — should be involved in determining where AI tools might and might not be useful, said Venkatasubramanian. “It is really, really important to empower the people who have the expertise to understand the domain,” he stressed…(More)”.

Improving How State and Local Governments Use AI
