Mapping and Comparing Data Governance Frameworks: A benchmarking exercise to inform global data governance deliberations


Paper by Sara Marcucci, Natalia Gonzalez Alarcon, Stefaan G. Verhulst, Elena Wullhorst: “Data has become a critical resource for organizations and society. Yet, it is not always as valuable as it could be since there is no well-defined approach to managing and using it. This article explores the increasing importance of global data governance due to the rapid growth of data and the need for responsible data use and protection. While historically associated with private organizational governance, data governance has evolved to include governmental and institutional bodies. However, the lack of a global consensus and fragmentation in policies and practices pose challenges to the development of a common framework. The purpose of this report is to compare approaches and identify patterns in the emergent and fragmented data governance ecosystem within sectors close to the international development field, ultimately presenting key takeaways and reflections on when and why a global data governance framework may be needed. Overall, the report highlights the need for a more holistic, coordinated transnational approach to data governance to manage the global flow of data responsibly and in the public interest. The article begins with an overview of the current fragmented data governance ecology, then illustrates the methodology used. Subsequently, the paper presents the most relevant findings stemming from the research. These are organized according to six key elements: (a) purpose, (b) principles, (c) anchoring documents, (d) data description and lifecycle, (e) processes, and (f) practices. Finally, the article closes with a series of key takeaways and final reflections…(More)”.

The Future of Human Agency


Report by Pew Research Center: “Advances in the internet, artificial intelligence (AI) and online applications have allowed humans to vastly expand their capabilities and increase their capacity to tackle complex problems. These advances have given people the ability to instantly access and share knowledge, and have amplified their personal and collective power to understand and shape their surroundings. Today there is general agreement that smart machines, bots and systems powered mostly by machine learning and artificial intelligence will quickly increase in speed and sophistication between now and 2035.

As individuals more deeply embrace these technologies to augment, improve and streamline their lives, they are continuously invited to outsource more decision-making and personal autonomy to digital tools.

Some analysts have concerns about how business, government and social systems are becoming more automated. They fear humans are losing the ability to exercise judgment and make decisions independent of these systems.

Others optimistically assert that throughout history humans have generally benefited from technological advances. They say that when problems arise, new regulations, norms and literacies help ameliorate the technology’s shortcomings. And they believe these harnessing forces will take hold, even as automated digital systems become more deeply woven into daily life.

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

  • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
  • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

It should be noted that in explaining their answers, many of these experts said the future of these technologies will have both positive and negative consequences for human agency. They also noted that through the ages, people have either allowed other entities to make decisions for them or have been forced to do so by tribal and national authorities, religious leaders, government bureaucrats, experts and even technology tools themselves…(More)”.

ChatGPT reminds us why good questions matter


Article by Stefaan Verhulst and Anil Ananthaswamy: “Over 100 million people used ChatGPT in January alone, according to one estimate, making it the fastest-growing consumer application in history. By producing resumes, essays, jokes and even poetry in response to prompts, the software brings into focus not just language models’ arresting power, but the importance of framing our questions correctly.

To that end, a few years ago I initiated the 100 Questions Initiative, which seeks to catalyse a cultural shift in the way we leverage data and develop scientific insights. The project aims not only to generate new questions, but also to reimagine the process of asking them…

As a species and a society, we tend to look for answers. Answers appear to provide a sense of clarity and certainty, and can help guide our actions and policy decisions. Yet any answer represents a provisional end-stage of a process that begins with questions – and often can generate more questions. Einstein drew attention to the critical importance of how questions are framed, which can often determine (or at least play a significant role in determining) the answers we ultimately reach. Frame a question differently and one might reach a different answer. Yet as a society we undervalue the act of questioning – who formulates questions, how they do so, and the impact they have on what we investigate and on the decisions we make. Nor do we pay sufficient attention to whether the answers are in fact addressing the questions initially posed…(More)”.

‘There is no standard’: investigation finds AI algorithms objectify women’s bodies


Article by Hilke Schellmann: “Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been censoring and suppressing the reach of countless photos featuring women’s bodies.

These AI tools, developed by large technology companies, including Google and Microsoft, are meant to protect users by identifying violent or pornographic visuals so that social media companies can block such content before anyone sees it. The companies claim that their AI tools can also detect “raciness”, or how sexually suggestive an image is. With this classification, platforms – including Instagram and LinkedIn – may suppress contentious imagery.

Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out, and undergoing medical tests involving partial nudity, and found evidence that the tools tag photos of women in everyday situations as sexually suggestive. The tools also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies and hurt female-led businesses – further amplifying societal disparities.

Even medical pictures are affected by the issue. The AI algorithms were tested on images released by the US National Cancer Institute demonstrating how to do a clinical breast examination. Google’s AI gave one such photo the highest score for raciness, Microsoft’s AI was 82% confident that the image was “explicitly sexual in nature”, and Amazon classified it as representing “explicit nudity”…(More)”.
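
The kind of audit the Guardian describes is straightforward to reproduce. Below is a minimal sketch, assuming Google Cloud Vision’s SafeSearch feature (via the google-cloud-vision Python package) and placeholder image paths; Microsoft and Amazon expose comparable moderation scores through their own APIs, so paired photos of men and women can be scored and compared the same way.

```python
# A minimal sketch of a Guardian-style probe using Google Cloud
# Vision's SafeSearch annotation. Assumes the google-cloud-vision
# package is installed and API credentials are configured; the image
# paths below are placeholders, not files from the investigation.
from google.cloud import vision


def racy_likelihood(path: str) -> str:
    """Return the Vision API's 'racy' likelihood label for a local image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    # Likelihood is an enum ranging from VERY_UNLIKELY to VERY_LIKELY.
    return vision.Likelihood(annotation.racy).name


# Scoring paired, otherwise-comparable photos surfaces any gender skew:
# print(racy_likelihood("woman_workout.jpg"), racy_likelihood("man_workout.jpg"))
```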

Privacy


Book edited by Carissa Veliz and Steven M. Cahn: “Companies collect and share data about much of your daily life, from your location and search history to your likes, habits, and relationships. As more and more of our personal data is collected, analyzed, and distributed, we need to think carefully about what we might be losing when we give up our privacy.

Privacy is a thought-provoking collection of philosophical essays on privacy, offering deep insights into the nature of privacy, its value, and the consequences of its loss. Bringing together both classic and contemporary work, this timely volume explores the theories, issues, debates, and applications of the philosophical study of privacy. The essays address concealment and exposure, the liberal value of privacy, privacy in social media, privacy rights and public information, privacy and the limits of law, and more…(More)”.

AI governance and human rights: Resetting the relationship


Paper by Kate Jones: “Governments and companies are already deploying AI to assist in making decisions that can have major consequences for the lives of individual citizens and societies. AI offers far-reaching benefits for human development but also presents risks. These include, among others, further division between the privileged and the unprivileged; erosion of individual freedoms through surveillance; and the replacement of independent thought and judgement with automated control.

Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity. AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organizations do not mention human rights at all. This is an error that requires urgent correction.

This research paper aims to dispel myths about human rights; outline the central importance of human rights for AI governance; and recommend actions that governments, organizations, companies and individuals can take to ensure that human rights are the foundation for AI governance in future…(More)”.

The Signal App and the Danger of Privacy at All Costs


Article by Reid Blackman: “…One should always worry when a person or an organization places one value above all. The moral fabric of our world is complex. It’s nuanced. Sensitivity to moral nuance is difficult, but unwavering support of one principle to rule them all is morally dangerous.

The way Signal wields the word “surveillance” reflects its coarse-grained understanding of morality. To the company, surveillance covers everything from a server holding encrypted data that no one looks at, to a law enforcement agent reading data after obtaining a warrant, to East Germany randomly tapping citizens’ phones. One cannot think carefully about the value of privacy — including its relative importance to other values in particular contexts — with such a broad brush.

What’s more, the company’s proposition that if anyone has access to data, then many unauthorized people probably will have access to that data is false. This response reflects a lack of faith in good governance, which is essential to any well-functioning organization or community seeking to keep its members and society at large safe from bad actors. There are some people who have access to the nuclear launch codes, but “Mission Impossible” movies aside, we’re not particularly worried about a slippery slope leading to lots of unauthorized people having access to those codes.

I am drawing attention to Signal, but there’s a bigger issue here: Small groups of technologists are developing and deploying applications of their technologies for explicitly ideological reasons, with those ideologies baked into the technologies. To use those technologies is to use a tool that comes with an ethical or political bent.

Signal is pushing against businesses like Meta that turn users of their social media platforms into the product by selling user data. But Signal embeds within itself a rather extreme conception of privacy, and scaling its technology is scaling its ideology. Signal’s users may not be the product, but they are the witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.

There’s something somewhat sneaky in all this (though I don’t think the owners of Signal intend to be sneaky). Usually advocates know that they’re advocates. They engage in some level of deliberation and reach the conclusion that a set of beliefs is for them…(More)”.

The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective


Paper by Dorine Eva van Norren: “This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics…

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely reject transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) formulated principles as input, which are analyzed from an African ontological point of view. COMEST takes as its starting point “universal” concepts of individual human rights, sustainability and good governance, which are not necessarily fully compatible with relatedness, including to future and past generations. Alongside rules-based approaches, which may hamper diversity, bottom-up approaches with intercultural deep learning algorithms are needed…(More)”.

Filling Public Data Gaps


Report by Judah Axelrod, Karolina Ramos, and Rebecca Bullied: “Data are central to understanding the lived experiences of different people and communities and can serve as a powerful force for promoting racial equity. Although public data, including foundational sources for policymaking such as the US Census Bureau’s American Community Survey (ACS), offer accessible information on a range of topics, challenges of timeliness, granularity, representativeness, and degrees of disaggregation can limit those data’s utility for real-time analysis. Private data—data produced by private-sector organizations either in the course of standard business operations or to be marketed as an asset for purchase—can serve as a richer, more granular, and higher-frequency supplement or alternative to public data sources. This raises the question of how well private data assets can offer race-disaggregated insights to inform policymaking.

In this report, we explore the current landscape of public-private data-sharing partnerships that address topic areas where racial equity research faces data gaps: wealth and assets, financial well-being and income, and employment and job quality. We held 20 semi-structured interviews with current producers and users of private-sector data and with subject matter experts in data-sharing models and ethical data usage. Our findings are divided into five key themes:

  • Incentives and disincentives, benefits, and risks to public-private data sharing
    Agreements with prestigious public partners can bolster credibility for private firms and broaden their customer base, while public partners benefit from access to real-time, granular, rich data sources. But data sharing is often time- and labor-intensive, and firms may worry about conflicting business interests or about diluting the value of proprietary data assets.
  • Availability of race-disaggregated data sources
    We found no examples in our interviews of externally available race-disaggregated data sources related to our thematic focus areas. However, there are promising methods for data imputation, linkage, and augmentation through internal surveys.
  • Data collaboratives in practice
    Most public-private data sharing agreements we learned about are between two parties and entail free or “freemium” access. However, we found promising examples of multilateral agreements that diversify the data-sharing landscape.
  • From data champions to data stewards
    We found many examples of informal data champions who bear responsibility for relationship building and for securing data partnerships. This role has yet to mature into an institutionalized data-steward function within the private firms we interviewed, which can make data sharing a fickle process.
  • Considerations for ethical data usage
    Data privacy and transparency about how data are accessed and used are prominent concerns among prospective data users. Interviewees also stressed that existing quantitative data should not be privileged above qualitative insights where communities have offered long-standing feedback and narratives about their own experiences of racial inequity, and that policymakers should not use the need to collect more data as an excuse to delay policy action.

Our research yielded several recommendations for data producers and users that engage in data sharing, and for funders seeking to advance data-sharing efforts and promote racial equity…(More)”.

Operationalizing Digital Self-Determination


Paper by Stefaan G. Verhulst: “We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and a psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods of limiting asymmetries (e.g., consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), are inadequate to address the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required.

DSD is based on existing concepts of self-determination, as articulated in sources as varied as Kantian philosophy and the 1966 International Covenant on Economic, Social and Cultural Rights. Updated for the digital age, DSD contains several key characteristics, including the fact that it has both an individual and a collective dimension; is designed especially to benefit vulnerable and marginalized groups; and is context-specific (yet also enforceable). Operationalizing DSD in this (and other) contexts so as to maximize the potential of data while limiting its harms requires a number of steps. In particular, a responsible operationalization of DSD would consider four key prongs or categories of action: processes, people and organizations, policies, and products and technologies…(More)”.