Data Reboot: 10 Reasons why we need to change how we approach data in today’s society


Article by Stefaan Verhulst and Julia Stamm: “…In what follows, we consider 10 reasons why we need to reboot the data conversation and change our approach to data governance…

1. Data is not the new oil: This phrase, sometimes attributed to Clive Humby in 2006, has become a staple of media and other commentaries. In fact, the analogy is flawed in many ways. As Mathias Risse, from the Carr Center for Human Rights Policy at Harvard, points out, oil is scarce, fungible, and rivalrous (its use by one party precludes its use by another). Data, by contrast, possesses none of these properties. In particular, as we explain further below, data is shareable (i.e., non-rivalrous); its societal and economic value also greatly increases through sharing. The data-as-oil analogy should thus be discarded, both because it is inaccurate and because it artificially inhibits the potential of data.

2. Not all data is equal: Assessing the value of data can be challenging, leading many organizations to treat all data equally (e.g., collecting and storing everything indiscriminately). The value of data varies widely, however, depending on context, use case, and the underlying properties of the data (the information it contains, its quality, etc.). Establishing metrics or processes to value data accurately is therefore essential. This is particularly true as the amount of data continues to explode, potentially exceeding stakeholders’ ability to store or process all that is generated.

3. Weighing risks and benefits of data use: Following a string of high-profile privacy violations in recent years, public and regulatory attention has largely focused on the risks associated with data and the steps required to minimize those risks. Such concerns are, of course, valid and important. At the same time, a sole focus on preventing harms has placed artificial limits on the potential benefits of data — or, put another way, has ignored the risks of not using data. It is time to apply a more balanced approach, one that weighs risks against benefits. By freeing up large amounts of currently siloed and unused data, such a responsible data framework could unleash huge amounts of social innovation and public benefit….

7. From individual consent to a social license: Social license refers to the informal demands or expectations set by society on how data may be used, reused, and shared. The notion, which originates in the field of environmental resource management, recognizes that social license may not overlap perfectly with legal or regulatory license. In some cases, it may exceed formal approvals for how data can be used, and in others, it may be more limited. Either way, public trust is as essential as legal compliance — a thriving data ecology can only exist if data holders and other stakeholders operate within the boundaries of community norms and expectations.

8. From data ownership to data stewardship: Many of the above propositions add up to an implicit recognition that we need to move beyond notions of ownership when it comes to data. As a non-rivalrous resource, data offers massive potential for the public good and social transformation. That potential varies by context and use case; sharing and collaboration are essential to ensuring that the right data is brought to bear on the most relevant social problems. A notion of stewardship — which recognizes that data is held in public trust, available to be shared in a responsible manner — is thus more helpful (and socially beneficial) than outdated notions of ownership. A number of tools and mechanisms exist to encourage stewardship and sharing. As we have written elsewhere, data collaboratives are among the most promising.

9. Data asymmetries: Data, it was often proclaimed, would be a harbinger of greater societal prosperity and well-being. The era of big data was to usher in a new tide of innovation and economic growth that would lift all boats. The reality has been somewhat different. Instead, the era of big data has been characterized by persistent, and in many ways worsening, asymmetries. These manifest in inequalities in access to data itself and, more problematically, inequalities in the way the social and economic fruits of data are distributed. We thus need to reconceptualize our approach to data, ensuring that its benefits are more equitably spread and that data does not in fact end up exacerbating the widespread and systemic inequalities that characterize our times.

10. Reconceptualizing self-determination…(More)” (First published as Data Reboot: 10 Gründe, warum wir unseren Umgang mit Daten ändern müssen at 1E9).

The Case for Including Data Stewardship in ESG


Article by Stefaan Verhulst: “Amid all the attention to environmental, social, and governance factors in investing, better known as ESG, there has been relatively little emphasis on governance, and even less on data governance. This is a significant oversight that needs to be addressed, as data governance has a crucial role to play in achieving environmental and social goals. 

Data stewardship in particular should be considered an important ESG practice. Making data accessible for reuse in the public interest can promote social and environmental goals while boosting a company’s efficiency and profitability. And investing in companies with data-stewardship capabilities makes good sense. But first, we need to move beyond current debates on data and ESG.

Several initiatives have begun to focus on data as it relates to ESG. For example, a recent McKinsey report on ESG governance within the banking sector argues that banks “will need to adjust their data architecture, define a data collection strategy, and reorganize their data governance model to successfully manage and report ESG data.” Deloitte recognizes the need for “a robust ESG data strategy.” PepsiCo likewise highlights its ESG Data Governance Program, and Maersk emphasizes data ethics as a key component in its ESG priorities.

These efforts are meaningful, but they are largely geared toward using data to measure compliance with environmental and social commitments. They don’t do much to help us understand how companies are leveraging data as an asset to achieve environmental and social goals. In particular, as I’ve written elsewhere, data stewardship, by which privately held data is reused for public-interest purposes, is an important new component of corporate social responsibility, as well as a key tool in data governance. Too many data-governance efforts are focused simply on using data to measure compliance or impact. We need to move beyond that mindset. Instead, we should adopt a data stewardship approach, where data is made accessible for the public good. There are promising signs of change in this direction…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives to peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

Responding to the coronavirus disease-2019 pandemic with innovative data use: The role of data challenges


Paper by Jamie Danemayer, Andrew Young, Siobhan Green, Lydia Ezenwa and Michael Klein: “Innovative, responsible data use is a critical need in the global response to the coronavirus disease-2019 (COVID-19) pandemic. Yet potentially impactful data are often unavailable to those who could utilize them, particularly in data-poor settings, posing a serious barrier to effective pandemic mitigation. Data challenges, a public call-to-action for innovative data use projects, can identify and address these specific barriers. To understand gaps and progress relevant to effective data use in this context, this study thematically analyses three sets of qualitative data focused on/based in low/middle-income countries: (a) a survey of innovators responding to a data challenge, (b) a survey of organizers of data challenges, and (c) a focus group discussion with professionals using COVID-19 data for evidence-based decision-making. Data quality and accessibility, along with human resources and institutional capacity, were the limitations to effective data use most frequently reported by innovators. New fit-for-purpose tools and the expansion of partnerships were the most frequently noted areas of progress. Discussion participants identified that building the capacity of external/national actors to understand the needs of local communities can address a lack of partnerships while de-siloing information. A synthesis of themes demonstrated that the gaps, progress, and needs commonly identified by these groups are relevant beyond COVID-19, highlighting the importance of a healthy data ecosystem for addressing emerging threats. Such an ecosystem is supported by data holders prioritizing the availability and accessibility of their data without causing harm; by funders and policymakers committed to integrating innovations with existing physical, data, and policy infrastructure; and by innovators designing sustainable, multi-use solutions based on principles of good data governance…(More)”.

Eye of the Beholder: Defining AI Bias Depends on Your Perspective


Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases – and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…
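
The feedback loop Chubinidze describes is easy to reproduce in a few lines of code. Below is a minimal, hypothetical simulation (ours, not from the article; the update rule and every parameter are illustrative assumptions): difficulty rises after a correct answer and falls after a miss, so a transient slump early in the course changes which questions a student is served long afterward.

```python
import random


def adaptive_quiz(ability, early_penalty, rounds=40, step=0.05, seed=7):
    """Toy adaptive quiz: difficulty rises after a correct answer and
    falls after a miss, so the system's past choices feed back into
    the evidence it later collects about each student."""
    rng = random.Random(seed)
    difficulty = 0.5
    served = []
    for t in range(rounds):
        # Transient factors (hunger, fatigue, anxiety) depress early answers.
        penalty = early_penalty if t < 8 else 0.0
        p_correct = min(1.0, max(0.0, 0.5 + ability - difficulty - penalty))
        served.append(difficulty)
        if rng.random() < p_correct:
            difficulty = min(1.0, difficulty + step)
        else:
            difficulty = max(0.0, difficulty - step)
    return sum(served) / len(served)


# Two students with identical underlying ability; one had a rough first week.
print("mean difficulty served, rested:", round(adaptive_quiz(0.6, 0.0), 2))
print("mean difficulty served, tired: ", round(adaptive_quiz(0.6, 0.4), 2))
```

In this sketch the two students are equally able, yet the system serves the “tired” student easier questions for much of the run; if that record follows them into the next class, the early noise has hardened into exactly the kind of invisible bias the article warns about.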

As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.
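
To make Stoyanovich’s point concrete, here is a small sketch with hypothetical numbers (our illustration, not from the article) showing how two common formalizations of fairness, demographic parity and equal opportunity, can disagree about the same classifier:

```python
from dataclasses import dataclass


@dataclass
class GroupOutcomes:
    """Confusion counts for one demographic group under a classifier."""
    tp: int  # qualified and selected
    fn: int  # qualified but rejected
    fp: int  # unqualified but selected
    tn: int  # unqualified and rejected

    def selection_rate(self) -> float:
        # Demographic parity compares this rate across groups.
        return (self.tp + self.fp) / (self.tp + self.fn + self.fp + self.tn)

    def true_positive_rate(self) -> float:
        # Equal opportunity compares this rate across groups.
        return self.tp / (self.tp + self.fn)


# A hypothetical hiring classifier evaluated separately on two groups.
groups = {"A": GroupOutcomes(tp=40, fn=10, fp=10, tn=40),
          "B": GroupOutcomes(tp=25, fn=25, fp=25, tn=25)}

for name, g in groups.items():
    print(f"group {name}: selection rate {g.selection_rate():.2f}, "
          f"true positive rate {g.true_positive_rate():.2f}")
```

Both groups are selected at the same rate (0.50), so the classifier satisfies demographic parity; but qualified candidates in group B are selected far less often (a true positive rate of 0.50 versus 0.80 in group A), so it fails equal opportunity. Which definition should govern is not a mathematical question but a domain one: whom does the AI impact, who is harmed, and who benefits.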

AI Ethics


Textbook by Paula Boddington: “This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice.

All topics are explored in depth, with clear explanations of relevant debates in ethics and philosophy, drawing on both historical and current sources. Questions in AI ethics are explored in the context of related issues in technology, regulation, society, religion, and culture, to help readers gain a nuanced understanding of the scope of AI ethics within broader debates and concerns…(More)”

Data and Democracy at Work: Advanced Information Technologies, Labor Law, and the New Working Class


Book by Brishen Rogers: “As our economy has shifted away from industrial production and service industries have become dominant, many of the nation’s largest employers are now in fields like retail, food service, logistics, and hospitality. These companies have turned to data-driven surveillance technologies that operate over a vast distance, enabling cheaper oversight of massive numbers of workers. Data and Democracy at Work argues that companies often use new data-driven technologies as a power resource—or even a tool of class domination—and that our labor laws allow them to do so.

Employers have established broad rights to use technology to gather data on workers and their performance, to exclude others from accessing that data, and to use that data to refine their managerial strategies. Through these means, companies have suppressed workers’ ability to organize and unionize, thereby driving down wages and eroding working conditions. Labor law today encourages employer dominance in many ways—but labor law can also be reformed to become a tool for increased equity. The COVID-19 pandemic and subsequent Great Resignation have revealed an increased political mobilization of the pandemic’s so-called essential workers, many of them service industry workers. This book describes the necessary legal reforms to increase workers’ associational power and democratize workplace data, establishing more balanced relationships between workers and employers and ensuring a brighter and more equitable future for us all…(More)”.

Am I Normal? The 200-Year Search for Normal People (and Why They Don’t Exist)


Book by Sarah Chaney: “Before the 19th century, the term ‘normal’ was rarely associated with human behaviour. Normal was a term used in maths, for right angles. People weren’t normal; triangles were.

But from the 1830s, the science of the normal really took off across Europe and North America, with a proliferation of IQ tests, sex studies, a census of hallucinations – even a UK beauty map (which concluded the women in Aberdeen were “the most repellent”). This book tells the surprising history of how the very notion of the normal came about and how it shaped us all, often while entrenching oppressive values.

Sarah Chaney looks at why we’re still asking the internet: Do I have a normal body? Is my sex life normal? Are my kids normal? And along the way, she challenges why we ever thought it might be a desirable thing to be…(More)”.

The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards


Paper by Ugo Pagallo, Eleonora Bassi & Massimo Durante: “The paper examines the open problems that experts in space law will increasingly have to address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new in today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper insists, however, on a further class of legal issues that AI systems raise only in outer space. We should never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide thresholds of evaluation according to which courts and legislators weigh the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law (stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards) will follow as a result of this twofold uniqueness of AI and of outer space…(More)”.