A case for democracy’s digital playground


Article by Petr Špecián: “Institutions are societies’ building blocks. Their role in shaping and channelling human potential is crucial. Yet the vast space of possible institutional designs remains largely unexplored…In the institutional landscape, there are plenty of alternative designs to explore. Some of them, such as replacing elected representation with sortition, look promising. But if they appear only faintly through the mist of uncertainty, their implementation would be an overly risky endeavour. We need more data to get a better idea of our options.


Currently, the multitude of reform proposals overwhelms the modest capacities available for their empirical testing. Only those most prominent — such as deliberative democracy — command enough resources to enable serious examination.

And the stakes are momentous. What if a radical reform of the political institutions proves disastrous? Clever speculations combined with scant experimental evidence cannot dispel reasonable doubts.

This is where my proposal for democracy’s digital playground comes in… Democracy’s digital playground is an artificial world in which institutional mechanisms are tested and compete against each other.

In some ways, it resembles massive multiplayer online games that emulate many of the real world’s crucial features. These games encourage people to work together to overcome challenges, which then motivates them to create political institutions conducive to their efforts. They can also migrate between communities, revealing their preference for alternative modes of governance.


That said, digital game-worlds in their current form have limited use for democratic experimentation. Their institution-building tools are crude, since much of the cooperation and conflict resolution happens outside the game environment itself, through forums and chats. Nor do these communities accurately represent the diversity of populations in real-world democracies. Players are predominantly young males with ample free time. And the games’ commercial purpose hinders the researchers’ quest for knowledge, too.

But perhaps these digital worlds can be adapted. Compared with the current methods used to test institutional mechanisms, they offer many advantages. Transparency is one such advantage: a human-designed world is less opaque than the natural world. Easy participation is another: regardless of location or resources, diverse people may join the community.

However, most important of all is the opportunity to calibrate the digital worlds as an optimum risk environment…(More)”.
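To give a concrete sense of what such a playground could measure, here is a minimal, hypothetical sketch in Python (every name and parameter below is our own illustrative assumption, not part of Špecián's proposal): two toy communities aggregate their members' preferences under different decision rules, and agents occasionally migrate toward the community whose decisions sit closer to their own ideals, so population shares become a rough revealed preference over institutional designs.

```python
import random

random.seed(42)

# Two toy institutional designs: each community turns its members'
# proposals into a collective decision in a different way.
def majority_vote(proposals):
    # The decision follows the median proposal (a crude stand-in for elections).
    return sorted(proposals)[len(proposals) // 2]

def sortition(proposals):
    # The decision is delegated to a randomly drawn panel of members.
    panel = random.sample(proposals, k=max(1, len(proposals) // 5))
    return sum(panel) / len(panel)

communities = {"elected": majority_vote, "sortition": sortition}

# Agents hold an ideal policy in [0, 1] and start in a random community.
agents = [{"ideal": random.random(), "home": random.choice(list(communities))}
          for _ in range(200)]

for _ in range(50):
    decisions = {}
    for name, rule in communities.items():
        members = [a["ideal"] for a in agents if a["home"] == name]
        if members:
            decisions[name] = rule(members)
    # Migration: occasionally an agent moves to whichever community's
    # latest decision lies closest to its own ideal point.
    for agent in agents:
        if random.random() < 0.1:
            agent["home"] = min(decisions,
                                key=lambda c: abs(decisions[c] - agent["ideal"]))

# Population shares act as a revealed preference over institutional designs.
for name in communities:
    share = sum(a["home"] == name for a in agents) / len(agents)
    print(f"{name}: {share:.0%} of agents")
```

A real playground would need far richer payoffs, migration costs, and decision rules, as well as a representative pool of participants; the point of the toy is only that migration and retention are measurable signals of institutional performance.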

Design for a Better World


Book by Don Norman: “The world is a mess. Our dire predicament, from collapsing social structures to the climate crisis, has been millennia in the making and can be traced back to the erroneous belief that the earth’s resources are infinite. The key to change, says Don Norman, is human behavior, covered in the book’s three major themes: meaning, sustainability, and humanity-centeredness. Emphasize quality of life, not monetary rewards; restructure how we live to better protect the environment; and focus on all of humanity. Design for a Better World presents an eye-opening diagnosis of where we’ve gone wrong and a clear prescription for making things better.

Norman proposes a new way of thinking, one that recognizes our place in a complex global system where even simple behaviors affect the entire world. He identifies the economic metrics that contribute to the harmful effects of commerce and manufacturing and proposes a recalibration of what we consider important in life. His experience as both a scientist and business executive gives him the perspective to show how to make these changes while maintaining a thriving economy. Let the change begin with this book before it’s too late…(More)”

Outsourcing Virtue


Essay by L. M. Sacasas: “To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:

Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.

This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it…(More)”.

Data Cooperatives as Catalysts for Collaboration, Data Sharing, and the (Trans)Formation of the Digital Commons


Paper by Michael Max Bühler et al: “Network effects, economies of scale, and lock-in-effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship (SDG9), new skills, and jobs (SDG8), especially in small communities (SDG11) and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being (SDG3), and protect digital rights, we propose data cooperatives [1,2] as a vehicle for secure, trusted, and sovereign data exchange [3,4]. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized [5]. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted Application Programming Interfaces (APIs) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This policy paper presents and discusses several transformative use cases for cooperative data governance. The use cases demonstrate how platform/data-cooperatives, and their novel value creation can be leveraged to take digital commons and value chains to a new level of collaboration while addressing the most pressing community issues. The proposed framework for a digital federated and sovereign reference architecture will create a blueprint for sustainable development both in the Global South and North…(More)”
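To make the paper's appeal to open trusted APIs slightly more tangible, here is a minimal, purely illustrative Python sketch (the class and method names are our own assumptions, not an existing standard or anything specified by the authors): a cooperative registry that releases a member's data only for purposes that member has consented to, and logs every access for collective oversight.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    member_id: str
    record: dict
    allowed_purposes: set = field(default_factory=set)  # e.g. {"research"}

class DataCooperative:
    """Toy registry: data leaves the cooperative only under member consent."""

    def __init__(self):
        self._members = {}    # member_id -> Member
        self._audit_log = []  # (requester, member_id, purpose, granted)

    def join(self, member):
        self._members[member.member_id] = member

    def request(self, requester, purpose):
        """Return records of consenting members only, logging every check."""
        released = []
        for member in self._members.values():
            granted = purpose in member.allowed_purposes
            self._audit_log.append((requester, member.member_id, purpose, granted))
            if granted:
                released.append(member.record)
        return released

# Usage: two SMEs share sales data for benchmarking; only one consents to research.
coop = DataCooperative()
coop.join(Member("sme-001", {"monthly_sales": 120}, {"benchmarking"}))
coop.join(Member("sme-002", {"monthly_sales": 95}, {"benchmarking", "research"}))

print(coop.request("university-lab", "research"))          # sme-002's record only
print(coop.request("sector-association", "benchmarking"))  # both records
```

An operational cooperative API would add authentication, purpose taxonomies, consent revocation, and federation across cooperatives, but the consent check plus audit trail captures the governance kernel the paper describes.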

Knowledge monopolies and the innovation divide: A governance perspective


Paper by Hani Safadi and Richard Thomas Watson: “The rise of digital platforms creates knowledge monopolies that threaten innovation. Their power derives from imposing data obligations and persistent coupling as conditions of platform participation, and from usurping the rights to data created by other participants in order to facilitate information asymmetries. Knowledge monopolies can use machine learning to develop competitive insights unavailable to every other platform participant. This information asymmetry stifles innovation, stokes the growth of the monopoly, and reinforces its ascendancy. National or regional governance structures, such as laws and regulatory authorities, constrain economic monopolies deemed not in the public interest. We argue the need for legislation and an associated regulatory mechanism to curtail coercive data obligations and control, eliminate data rights exploitation, and prevent mergers and acquisitions that could create or extend knowledge monopolies…(More)”.
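The asymmetry described above can be illustrated with a toy experiment of our own (not from the paper): a "platform" that pools all participants' data can usually train a more accurate model than any single participant can from its own slice, which is precisely the kind of competitive insight that remains unavailable to everyone else. A sketch, assuming scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "platform" data: observations contributed by 20 participants.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# The platform operator sees the pooled training data of all participants.
platform_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each participant can only learn from its own slice of the training data.
slices = np.array_split(np.arange(len(X_train)), 20)
participant_accuracy = [
    LogisticRegression(max_iter=1000)
    .fit(X_train[idx], y_train[idx])
    .score(X_test, y_test)
    for idx in slices
]

print(f"platform accuracy:            {platform_model.score(X_test, y_test):.3f}")
print(f"average participant accuracy: {np.mean(participant_accuracy):.3f}")
```

The size of the gap in this toy is beside the point; what matters is that the pooled position structurally dominates, which is the informational advantage the proposed governance mechanisms aim to constrain.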

Towards Responsible Quantum Technology


Paper by Mauritz Kop et al: “The expected societal impact of quantum technologies (QT) urges us to proceed and innovate responsibly. This article proposes a conceptual framework for Responsible QT that seeks to integrate considerations about ethical, legal, social, and policy implications (ELSPI) into quantum R&D, while responding to the Responsible Research and Innovation dimensions of anticipation, inclusion, reflection and responsiveness. After examining what makes QT unique, we argue that quantum innovation should be guided by a methodological framework for Responsible QT, aimed at jointly safeguarding against risks by proactively addressing them, engaging stakeholders in the innovation process, and continuing to advance QT (‘SEA’). We further suggest operationalizing the SEA-framework by establishing quantum-specific guiding principles. The impact of quantum computing on information security is used as a case study to illustrate (1) the need for a framework that guides Responsible QT, and (2) the usefulness of the SEA-framework for QT generally. Additionally, we examine how our proposed SEA-framework for responsible innovation can inform the emergent regulatory landscape affecting QT, and provide an outlook on how regulatory interventions for QT as a base-layer technology could be designed, contextualized, and tailored to their exceptional nature in order to reduce the risk of unintended counterproductive effects of policy interventions.

To lay the groundwork for a responsible quantum ecosystem, the research community and other stakeholders are called upon to further develop the recommended guiding principles and discuss their operationalization into best practices and real-world applications. Our proposed framework should be considered a starting point for these much-needed, highly interdisciplinary efforts…(More)”.

Data Reboot: 10 Reasons why we need to change how we approach data in today’s society


Article by Stefaan Verhulst and Julia Stamm: “…In the below, we consider 10 reasons why we need to reboot the data conversations and change our approach to data governance…

1. Data is not the new oil: This phrase, sometimes attributed to Clive Humby in 2006, has become a staple of media and other commentaries. In fact, the analogy is flawed in many ways. As Mathias Risse, from the Carr Center for Human Rights Policy at Harvard, points out, oil is scarce, fungible, and rivalrous (can be used and owned by a single entity). Data, by contrast, possesses none of these properties. In particular, as we explain further below, data is shareable (i.e., non-rivalrous); its societal and economic value also greatly increases through sharing. The data-as-oil analogy should thus be discarded, both because it is inaccurate and because it artificially inhibits the potential of data.

2. Not all data is equal: Assessing the value of data can be challenging, leading many organizations to treat (e.g., collect and store) all data equally. The value of data varies widely, however, depending on context, use case, and the underlying properties of the data (the information it contains, its quality, etc.). Establishing metrics or processes to accurately value data is therefore essential. This is particularly true as the amount of data continues to explode, potentially exceeding stakeholders’ ability to store or process all generated data.

3. Weighing Risks and Benefits of data use: Following a string of high-profile privacy violations in recent years, public and regulatory attention has largely focused on the risks associated with data, and steps required to minimize those risks. Such concerns are, of course, valid and important. At the same time, a sole focus on preventing harms has led to artificial limits on maximizing the potential benefits of data — or, put another way, on the risks of not using data. It is time to apply a more balanced approach, one that weighs risks against benefits. By freeing up large amounts of currently siloed and unused data, such a responsible data framework could unleash huge amounts of social innovation and public benefit….

7. From individual consent to a social license: Social license refers to the informal demands or expectations set by society on how data may be used, reused, and shared. The notion, which originates in the field of environmental resource management, recognizes that social license may not overlap perfectly with legal or regulatory license. In some cases, it may exceed formal approvals for how data can be used, and in others, it may be more limited. Either way, public trust is as essential as legal compliance — a thriving data ecology can only exist if data holders and other stakeholders operate within the boundaries of community norms and expectations.

8. From data ownership to data stewardship: Many of the above propositions add up to an implicit recognition that we need to move beyond notions of ownership when it comes to data. As a non-rivalrous public good, data offers massive potential for the public good and social transformation. That potential varies by context and use case; sharing and collaboration are essential to ensuring that the right data is brought to bear on the most relevant social problems. A notion of stewardship — which recognizes that data is held in public trust, available to be shared in a responsible manner — is thus more helpful (and socially beneficial) than outdated notions of ownership. A number of tools and mechanisms exist to encourage stewardship and sharing. As we have elsewhere written, data collaboratives are among the most promising.

9. Data Asymmetries: Data, it was often proclaimed, would be a harbinger of greater societal prosperity and well-being. The era of big data was to usher in a new tide of innovation and economic growth that would lift all boats. The reality has been somewhat different. The era of big data has rather been characterized by persistent, and in many ways worsening, asymmetries. These manifest in inequalities in access to data itself, and, more problematically, inequalities in the way the social and economic fruits of data are being distributed. We thus need to reconceptualize our approach to data, ensuring that its benefits are more equitably spread, and that it does not in fact end up exacerbating the widespread and systematic inequalities that characterize our times.

10. Reconceptualizing self-determination…(More)” (First published as Data Reboot: 10 Gründe, warum wir unseren Umgang mit Daten ändern müssen at 1E9).

The Case for Including Data Stewardship in ESG


Article by Stefaan Verhulst: “Amid all the attention to environmental, social, and governance factors in investing, better known as ESG, there has been relatively little emphasis on governance, and even less on data governance. This is a significant oversight that needs to be addressed, as data governance has a crucial role to play in achieving environmental and social goals. 

Data stewardship in particular should be considered an important ESG practice. Making data accessible for reuse in the public interest can promote social and environmental goals while boosting a company’s efficiency and profitability. And investing in companies with data-stewardship capabilities makes good sense. But first, we need to move beyond current debates on data and ESG.

Several initiatives have begun to focus on data as it relates to ESG. For example, a recent McKinsey report on ESG governance within the banking sector argues that banks “will need to adjust their data architecture, define a data collection strategy, and reorganize their data governance model to successfully manage and report ESG data.” Deloitte recognizes the need for “a robust ESG data strategy.” PepsiCo likewise highlights its ESG Data Governance Program, and Maersk emphasizes data ethics as a key component in its ESG priorities.

These efforts are meaningful, but they are largely geared toward using data to measure compliance with environmental and social commitments. They don’t do much to help us understand how companies are leveraging data as an asset to achieve environmental and social goals. In particular, as I’ve written elsewhere, data stewardship, by which privately held data is reused for public-interest purposes, is an important new component of corporate social responsibility, as well as a key tool in data governance. Too many data-governance efforts are focused simply on using data to measure compliance or impact. We need to move beyond that mindset. Instead, we should adopt a data stewardship approach, where data is made accessible for the public good. There are promising signs of change in this direction…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

Can A.I. and Democracy Fix Each Other?


Peter Coy at The New York Times: “Democracy isn’t working very well these days, and artificial intelligence is scaring the daylights out of people. Some creative people are looking at those two problems and envisioning a solution: Democracy fixes A.I., and A.I. fixes democracy.

Attitudes about A.I. are polarized, with some focusing on its promise to amplify human potential and others dwelling on what could go wrong (and what has already gone wrong). We need to find a way out of the impasse, and leaving it to the tech bros isn’t the answer. Democracy — giving everyone a voice on policy — is clearly the way to go.

Democracy can be taken hostage by partisans, though. That’s where artificial intelligence has a role to play. It can make democracy work better by surfacing ideas from everyone, not just the loudest. It can find surprising points of agreement among seeming antagonists and summarize and digest public opinion in a way that’s useful to government officials. Assisting democracy is a more socially valuable function for large language models than, say, writing commercials for Spam in iambic pentameter. The goal, according to the people I spoke to, is to make A.I. part of the solution, not just part of the problem…(More)”. (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern…)
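As a rough illustration of how software can "find surprising points of agreement among seeming antagonists", the sketch below is a toy of our own, loosely inspired by consensus-finding tools such as Pol.is rather than anything described in the article: it clusters participants by their votes on a handful of statements and then surfaces the statements that every opinion cluster tends to endorse.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

statements = [
    "Budget decisions should be published in full",
    "Only experts should set technical standards",
    "Public consultations should run online and offline",
    "Voting should be mandatory",
]

# votes[i, j] = participant i's vote on statement j (+1 agree, -1 disagree).
# Two loose opinion camps that disagree with each other on statements 1 and 3
# but both lean positive on statements 0 and 2.
camp_a = rng.choice([1, -1], size=(30, 4), p=[0.8, 0.2]) * np.array([1, 1, 1, -1])
camp_b = rng.choice([1, -1], size=(30, 4), p=[0.8, 0.2]) * np.array([1, -1, 1, 1])
votes = np.vstack([camp_a, camp_b])

# Group participants into opinion clusters based on their voting patterns.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is a consensus candidate if every cluster agrees with it on average.
print("Cross-cluster consensus candidates:")
for j, text in enumerate(statements):
    cluster_means = [votes[labels == k, j].mean() for k in np.unique(labels)]
    if min(cluster_means) > 0.3:
        agreement = ", ".join(f"{m:+.2f}" for m in cluster_means)
        print(f"  - {text} (per-cluster mean agreement: {agreement})")
```

Production systems handle missing votes, weighting, and summarization far more carefully, and large language models can layer natural-language summaries on top; the toy only shows that agreement cutting across opinion camps can be surfaced mechanically.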