Data Reboot: 10 Reasons why we need to change how we approach data in today’s society


Article by Stefaan Verhulst and Julia Stamm: "…In the below, we consider 10 reasons why we need to reboot the data conversations and change our approach to data governance…

1. Data is not the new oil: This phrase, sometimes attributed to Clive Humby in 2006, has become a staple of media and other commentaries. In fact, the analogy is flawed in many ways. As Mathias Risse, from the Carr Center for Human Rights Policy at Harvard, points out, oil is scarce, fungible, and rivalrous (can be used and owned by a single entity). Data, by contrast, possesses none of these properties. In particular, as we explain further below, data is shareable (i.e., non-rivalrous); its societal and economic value also greatly increases through sharing. The data-as-oil analogy should thus be discarded, both because it is inaccurate and because it artificially inhibits the potential of data.

2. Not all data is equal: Assessing the value of data can be challenging, leading many organizations to treat (e.g., collect and store) all data equally. The value of data varies widely, however, depending on context, use case, and the underlying properties of the data (the information it contains, its quality, etc.). Establishing metrics or processes to accurately value data is therefore essential. This is particularly true as the amount of data continues to explode, potentially exceeding stakeholders’ ability to store or process all generated data.

3. Weighing Risks and Benefits of data use: Following a string of high-profile privacy violations in recent years, public and regulatory attention has largely focused on the risks associated with data, and the steps required to minimize those risks. Such concerns are, of course, valid and important. At the same time, a sole focus on preventing harms has led to artificial limits on maximizing the potential benefits of data — or, put another way, to insufficient attention to the risks of not using data. It is time to apply a more balanced approach, one that weighs risks against benefits. By freeing up large amounts of currently siloed and unused data, such a responsible data framework could unleash huge amounts of social innovation and public benefit….

7. From individual consent to a social license: Social license refers to the informal demands or expectations set by society on how data may be used, reused, and shared. The notion, which originates in the field of environmental resource management, recognizes that social license may not overlap perfectly with legal or regulatory license. In some cases, it may exceed formal approvals for how data can be used, and in others, it may be more limited. Either way, public trust is as essential as legal compliance — a thriving data ecology can only exist if data holders and other stakeholders operate within the boundaries of community norms and expectations.

8. From data ownership to data stewardship: Many of the above propositions add up to an implicit recognition that we need to move beyond notions of ownership when it comes to data. As a non-rivalrous public good, data offers massive potential for the public good and social transformation. That potential varies by context and use case; sharing and collaboration are essential to ensuring that the right data is brought to bear on the most relevant social problems. A notion of stewardship — which recognizes that data is held in public trust, available to be shared in a responsible manner — is thus more helpful (and socially beneficial) than outdated notions of ownership. A number of tools and mechanisms exist to encourage stewardship and sharing. As we have written elsewhere, data collaboratives are among the most promising.

9. Data Asymmetries: Data, it was often proclaimed, would be a harbinger of greater societal prosperity and well-being. The era of big data was to usher in a new tide of innovation and economic growth that would lift all boats. The reality has been somewhat different. The era of big data has rather been characterized by persistent, and in many ways worsening, asymmetries. These manifest in inequalities in access to data itself, and, more problematically, inequalities in the way the social and economic fruits of data are being distributed. We thus need to reconceptualize our approach to data, ensuring that its benefits are more equitably spread, and that it does not in fact end up exacerbating the widespread and systematic inequalities that characterize our times.

10. Reconceptualizing self-determination…(More)” (First published as Data Reboot: 10 Gründe, warum wir unseren Umgang mit Daten ändern müssen at 1E9).

The Case for Including Data Stewardship in ESG


Article by Stefaan Verhulst: “Amid all the attention to environmental, social, and governance factors in investing, better known as ESG, there has been relatively little emphasis on governance, and even less on data governance. This is a significant oversight that needs to be addressed, as data governance has a crucial role to play in achieving environmental and social goals. 

Data stewardship in particular should be considered an important ESG practice. Making data accessible for reuse in the public interest can promote social and environmental goals while boosting a company’s efficiency and profitability. And investing in companies with data-stewardship capabilities makes good sense. But first, we need to move beyond current debates on data and ESG.

Several initiatives have begun to focus on data as it relates to ESG. For example, a recent McKinsey report on ESG governance within the banking sector argues that banks “will need to adjust their data architecture, define a data collection strategy, and reorganize their data governance model to successfully manage and report ESG data.” Deloitte recognizes the need for “a robust ESG data strategy.” PepsiCo likewise highlights its ESG Data Governance Program, and Maersk emphasizes data ethics as a key component in its ESG priorities.

These efforts are meaningful, but they are largely geared toward using data to measure compliance with environmental and social commitments. They don’t do much to help us understand how companies are leveraging data as an asset to achieve environmental and social goals. In particular, as I've written elsewhere, data stewardship, by which privately held data is reused for public interest purposes, is an important new component of corporate social responsibility, as well as a key tool in data governance. Too many data-governance efforts are focused simply on using data to measure compliance or impact. We need to move beyond that mindset. Instead, we should adopt a data stewardship approach, where data is made accessible for the public good. There are promising signs of change in this direction…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

Can A.I. and Democracy Fix Each Other?


Peter Coy at The New York Times: “Democracy isn’t working very well these days, and artificial intelligence is scaring the daylights out of people. Some creative people are looking at those two problems and envisioning a solution: Democracy fixes A.I., and A.I. fixes democracy.

Attitudes about A.I. are polarized, with some focusing on its promise to amplify human potential and others dwelling on what could go wrong (and what has already gone wrong). We need to find a way out of the impasse, and leaving it to the tech bros isn’t the answer. Democracy — giving everyone a voice on policy — is clearly the way to go.

Democracy can be taken hostage by partisans, though. That’s where artificial intelligence has a role to play. It can make democracy work better by surfacing ideas from everyone, not just the loudest. It can find surprising points of agreement among seeming antagonists and summarize and digest public opinion in a way that’s useful to government officials. Assisting democracy is a more socially valuable function for large language models than, say, writing commercials for Spam in iambic pentameter. The goal, according to the people I spoke to, is to make A.I. part of the solution, not just part of the problem…(More)” (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern…).

The secrets of cooperation


Article by Bob Holmes: “People stop their cars simply because a little light turns from green to red. They crowd onto buses, trains and planes with complete strangers, yet fights seldom break out. Large, strong men routinely walk right past smaller, weaker ones without demanding their valuables. People pay their taxes and donate to food banks and other charities.

Most of us give little thought to these everyday examples of cooperation. But to biologists, they’re remarkable — most animals don’t behave that way.

“Even the least cooperative human groups are more cooperative than our closest cousins, chimpanzees and bonobos,” says Michael Muthukrishna, a behavioral scientist at the London School of Economics. Chimps don’t tolerate strangers, Muthukrishna says, and even young children are a lot more generous than a chimp.

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation…(More)”.

How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives to peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

Eye of the Beholder: Defining AI Bias Depends on Your Perspective


Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases, and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…
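
The feedback loop Chubinidze describes can be made concrete with a few lines of code. The toy simulation below is purely illustrative, not drawn from the article or any real edtech system; the Student class, the 0.05 update step, and the distraction parameter are all hypothetical. It updates a stored "ability" score from answer correctness alone, so a student whose wrong answers stem from fatigue or distraction ends the semester with a lower label than an equally skilled but fresher peer:

```python
import random

# A minimal sketch of a correctness-only adaptive quiz loop (hypothetical names).
class Student:
    def __init__(self, skill, distraction=0.0):
        self.skill = skill              # true ability, never observed by the system
        self.distraction = distraction  # hunger/fatigue/anxiety, also unobserved
        self.ability = 0.5              # the system's stored estimate (the "smart" label)

    def answer(self, difficulty):
        # Chance of a correct answer depends on skill AND on transient factors
        # the system cannot see.
        p_correct = max(0.0, self.skill - self.distraction - 0.3 * difficulty)
        return random.random() < p_correct


def run_semester(student, rounds=50):
    difficulty = 0.5
    for _ in range(rounds):
        correct = student.answer(difficulty)
        # Correctness-only update: the stored estimate and the question
        # difficulty drift together, so early wrong answers lock in easier
        # questions and a lower label.
        student.ability += 0.05 if correct else -0.05
        student.ability = min(max(student.ability, 0.0), 1.0)
        difficulty = student.ability
    return student.ability


random.seed(42)
focused = Student(skill=0.9)
tired = Student(skill=0.9, distraction=0.4)  # same skill, worse conditions
print("stored label, focused student:", round(run_semester(focused), 2))
print("stored label, tired student:  ", round(run_semester(tired), 2))
```

If that stored score is written to the system’s database, the gap persists into the next class even though both students have identical underlying skill, which is exactly the bias Chubinidze warns about.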

As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.
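
Stoyanovich’s point that there is no single mathematical definition of fairness can be illustrated with a small sketch. The example below is not from the article; the groups, labels, and predictions are invented. It computes two widely used group-fairness measures, selection rate (demographic parity) and true positive rate (equal opportunity), on the same toy predictions; by one measure the outcome looks even-handed, by the other it does not:

```python
# Illustrative only: two fairness metrics can disagree on the same predictions.
def rates(y_true, y_pred):
    selection_rate = sum(y_pred) / len(y_pred)
    true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    true_positive_rate = true_pos / actual_pos if actual_pos else 0.0
    return selection_rate, true_positive_rate


# Hypothetical groups: A has 4 of 5 truly qualified people, B has 2 of 5.
y_true_a, y_pred_a = [1, 1, 1, 1, 0], [1, 1, 1, 0, 0]
y_true_b, y_pred_b = [1, 1, 0, 0, 0], [1, 1, 1, 0, 0]

sr_a, tpr_a = rates(y_true_a, y_pred_a)
sr_b, tpr_b = rates(y_true_b, y_pred_b)

print("selection rates:     ", sr_a, sr_b)    # 0.6 vs 0.6  -> demographic parity holds
print("true positive rates: ", tpr_a, tpr_b)  # 0.75 vs 1.0 -> equal opportunity does not
```

Which of these (if either) counts as "fair" depends on the domain, on who is harmed, and on who benefits, which is the grounding Stoyanovich calls for.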

Prediction Fiction


Essay by Madeline Ashby: “…This contributes to what my colleague Scott Smith calls “flat-pack futures”, or what the Canadian scholar Sun-ha Hong calls “technofutures”, which “preach revolutionary change while practicing a politics of inertia”. These visions of possible future realities possess a mass-market sameness. They look like what happens when you tell an AI image generator to draw the future: just a slurry of genuine human creativity machined into a fine paste. Drone delivery, driverless cars, blockchain this, alt-currency that, smart mirrors, smart everything, and not a speck of dirt or illness or poverty or protest anywhere. Bloodless, bland, boring, banal. It is like ordering your future from the kids’ menu.

When we cannot acknowledge how bad things are, we cannot imagine how to improve them. As with so many challenges, the first step is admitting there is a problem. But if you are isolated, ignored, or ridiculed at work or at home for acknowledging that problem, the problem becomes impossible to deal with. How we treat existential threats to the planet today is how doctors treated women’s cancers until the latter half of the 20th century: by refusing to tell the patient she was dying.

But the issue is not just toxic positivity. Remember those myths about the warnings that go unheeded? The moral of those stories is not that some people are doomed never to be listened to. The moral of those stories is that people in power do not want to hear how they might lose it. It is not that the predictions were wrong, but that they were simply not what people wanted to hear. To work in futures, you have to tell people things they don’t want to hear. And this is when it is useful to tell a story….(More)”

To Tackle Climate Change, We Need To Update Democracy


Article by Mark Baldassare and Cheryl Katz: “…Engaging the public through direct democracy can provide an antidote to the widespread government distrust and extreme political polarization that is currently paralyzing the nation. As shown by the overwhelming and bipartisan support for the outcome of a ballot measure such as Proposition 20’s Coastal Commission, statutes enacted through the initiative process have the potential to stand the test of time. State lawmakers, in turn, feel the weight of public opinion and are loath to tinker with laws that have received majority endorsement. 

The seeming intractability of citizens’ initiatives could be seen as an argument against direct democracy. This was exemplified by recent failed propositions aimed at changing the low commercial property tax rates set by the 1978 Proposition 13 (i.e. 2020 Proposition 15) and at ending the ban on affirmative action programs established by the 1996 Proposition 209 (i.e. 2020 Proposition 16). One reason these efforts were doomed is that proponents failed to engage with the public on such controversial policy issues and did not overcome voters’ inherent skepticism. When voters are dubious about a measure’s intentions or outcome, the default is to say “no” — shown by the historical initiative pass rate of 35%.            

Another form of direct democracy is citizens assemblies, in which a large group of randomly selected members of the public engage in guided discussions and make policy recommendations. When applied to climate change, giving citizens agency in tackling the planet’s most pressing issue stands to motivate them to adopt difficult measures and make the lifestyle changes required. For example, political scientist Carsten Berg’s analysis of the citizens’ assemblies convened for the European Union’s Conference on the Future of Europe in 2022 describes how participation engendered a sense of group purpose and spurred collaboration toward a common goal. 

Direct democracy tools can help overcome the public’s feelings of helplessness in the face of the climate crisis and generate a shared sense of responsibility for mitigation. A 2022 research report examined the emotional experiences of participants in a 2020-21 Scottish citizens’ assembly convened to address the question of how Scotland could “tackle the climate emergency in an effective and fair way.” Compared to the general population, writes Lancaster University researcher Nadine Andrews, assembly members had “higher levels of hopefulness and optimism, lower levels of worry and overwhelm, and a lower proportion reporting that their emotions about climate change were having a negative impact on their mental health,” while participating in the process. Participants told Andrews they felt a sense of agency and empowerment to change their behavior and take “urgent climate action.”  

While invaluable for promoting climate justice, however, citizens’ assemblies have lacked the authority to create policy. As Berg points out, the outcome of the Future of Europe deliberations was non-binding, had a small reach and received little public attention. And Andrews found that participants’ hope and optimism about tackling climate change dropped in the wake of the Scottish government’s lackluster response to the panel’s report. The outcome of any such effort in California will need to be much more results-oriented…(More)”.

Can Cities Be the Source of Scalable Innovations?


Article by Christof Brandtner: “Systems change to address complex problems, including climate change, is hard to achieve. What little optimism remains to tackle such complex challenges is mostly placed in supranational schemes, such as the COP climate change conferences, or transformational national policy, such as the Green New Deal in the US. Solutions of grand design regularly disappoint, however, because of their high costs, the challenges of translating big plans to local needs, and ongoing disagreement and polarization about what works and what is detrimental.

There is hope on the skyline though. Urban innovation ecosystems can provide an alternative to grand schemes, and cities’ social sectors provide a source of ongoing innovation. Companies like Sidewalk Labs, a subsidiary of Alphabet that develops technologies for sustainable urban design, are transforming business as usual to solve complex urban problems. Social enterprises such as car-sharing programs are changing the nature of urban transportation and providing alternative options to individual car ownership. Through its iconic mobile showers, the San Francisco nonprofit LavaMae has found new ways to serve the homeless in the absence of more radical reforms of affordable housing. And the US Green Building Council (USGBC), an intermediary promoting energy-efficient construction, developed guidelines and rating systems for sustainable cities and neighborhoods.

Promising ideas are in ample supply, but the crucial question is: How can social innovators scale such innovations so that their local impact adds up to big solutions?…(More)”.