Paper by Jeroen van der Heijden: “This research paper presents findings from a broad scoping of the international academic literature on the use of systems thinking and systems science in regulatory governance and practice. It builds on a systematic review of peer-reviewed articles published in the top 15 journals for regulatory scholarship. The aim of the research paper is to introduce those working in a regulatory environment to the key concepts of systems thinking and systems science, and to discuss the state of the art of regulatory knowledge on these topics.
It addresses five themes:
(1) the evolution of systems thinking,
(2) examples of systems thinking from the academic literature,
(3) evidence of how systems thinking helps improve regulatory governance, and
(4) the epistemic challenges and
(5) ethical challenges that come with applying systems thinking to regulatory governance and practice….(More)”.
Stefaan G. Verhulst, Andrew Zahuranec, Andrew Young and Michelle Winowatan at Data & Policy: “As data grows increasingly prevalent in our economy, it is increasingly clear, too, that tremendous societal value can be derived from reusing and combining previously separate datasets. One avenue that holds particular promise is data collaboratives. Data collaboratives are a new form of partnership in which data (such as data owned by corporations) or data expertise is made accessible to external parties (such as academics or statistical offices) working in the public interest. By bringing a wide range of inter-sectoral expertise to bear on the data, collaboration can result in new insights and innovations, and can help unlock the public good potential of previously siloed data or expertise.
Yet, not all data collaboratives are successful or go beyond pilots. Based on research and analysis of hundreds of data collaboratives, one factor seems to stand out as determinative of success above all others — whether there exist individuals or teams within data-holding organizations who are empowered to proactively initiate, facilitate and coordinate data collaboratives toward the public interest. We call these individuals and teams “data stewards.”
They systematize the process of partnering, and help scale efforts when there are fledgling signs of success. Data stewards are essential for accelerating the re-use of data in the public interest by providing functional access and, more generally, for unlocking the potential of our data age. Data stewards form an important — and new — link in the data value chain.
In its final report, the European Commission’s High-Level Expert Group on Business-to-Government (B2G) Data Sharing also noted the need for data stewards to enable responsible, accountable data sharing for the public interest. In their report, they write:
“A key success factor in setting up sustainable and responsible B2G partnerships is the existence, within both public- and private-sector organisations, of individuals or teams that are empowered to proactively initiate, facilitate and coordinate B2G data sharing when necessary. As such, ‘data stewards’ should become a recognised function.”
The report goes on to acknowledge the need to scope, design, and establish a network or a community of practice around data stewardship.
Moreover, it addresses the tendency to conflate the roles of data stewards with those of individuals or groups who might better be described as chief privacy, chief data or chief security officers. This slippage is perhaps understandable, but the role we envision is somewhat broader. While data management, privacy and security are key components of trusted and effective data collaboratives, the real goal is to re-use data for broader social goals (while preventing any potential harms that may result from sharing).
In particular, the position paper, which captures the lived experience of numerous data stewards, seeks to provide more clarity on how data stewards can accomplish these duties by:
Defining the responsibilities of a data steward; and
Identifying the roles which a data steward must fill to achieve these responsibilities…(More)”.
Report by Freedom House: “Democracy and pluralism are under assault. Dictators are toiling to stamp out the last vestiges of domestic dissent and spread their harmful influence to new corners of the world. At the same time, many freely elected leaders are dramatically narrowing their concerns to a blinkered interpretation of the national interest. In fact, such leaders—including the chief executives of the United States and India, the world’s two largest democracies—are increasingly willing to break down institutional safeguards and disregard the rights of critics and minorities as they pursue their populist agendas. As a result of these and other trends, Freedom House found that 2019 was the 14th consecutive year of decline in global freedom.
The gap between setbacks and gains widened compared with 2018, as individuals in 64 countries experienced deterioration in their political rights and civil liberties while those in just 37 experienced improvements. The negative pattern affected all regime types, but the impact was most visible near the top and the bottom of the scale. More than half of the countries that were rated Free or Not Free in 2009 have suffered a net decline in the past decade…The unchecked brutality of autocratic regimes and the ethical decay of democratic powers are combining to make the world increasingly hostile to fresh demands for better governance. A striking number of new citizen protest movements have emerged over the past year, reflecting the inexhaustible and universal desire for fundamental rights. However, these movements have in many cases confronted deeply entrenched interests that are able to endure considerable pressure and are willing to use deadly force to maintain power…(More)”.
Iqbal Dhaliwal, John Floretta & Sam Friedlander at SSIR: “…In its post-Nobel phase, one of J-PAL’s priorities is to unleash the treasure troves of big digital data in the hands of governments, nonprofits, and private firms. Primary data collection is by far the most time-, money-, and labor-intensive component of the vast majority of experiments that evaluate social policies. Randomized evaluations have been constrained by simple numbers: Some questions are just too big or expensive to answer. Leveraging administrative data has the potential to dramatically expand the types of questions we can ask and the experiments we can run, as well as to implement quicker, less expensive, larger, and more reliable RCTs: an invaluable opportunity to massively scale up evidence-informed policymaking without dramatically increasing evaluation budgets.
Although administrative data hasn’t always been of the highest quality, recent advances have significantly increased the reliability and accuracy of GPS coordinates, biometrics, and digital methods of collection. But despite good intentions, many implementers—governments, businesses, and big NGOs—aren’t currently using the data they already collect on program participants and outcomes to improve anti-poverty programs and policies. This may be because they aren’t aware of its potential, don’t have the in-house technical capacity necessary to create use and privacy guidelines or analyze the data, or don’t have established partnerships with researchers who can collaborate to design innovative programs and run rigorous experiments to determine which are the most impactful.
At J-PAL, we are leveraging this opportunity through a new global research initiative we are calling the “Innovations in Data and Experiments for Action” Initiative (IDEA). IDEA supports implementers to make their administrative data accessible, analyze it to improve decision-making, and partner with researchers in using this data to design innovative programs, evaluate impact through RCTs, and scale up successful ideas. IDEA will also build the capacity of governments and NGOs to conduct these types of activities with their own data in the future….(More)”.
Michael A. Johansson & Daniela Saderi in Nature: “The public call for rapid sharing of research data relevant to the COVID-19 outbreak (see go.nature.com/2t1lyp6) is driving an unprecedented surge in (unrefereed) preprints. To help pinpoint the most important research, we have launched Outbreak Science Rapid PREreview, with support from the London-based charity Wellcome. This is an open-source platform for rapid review of preprints related to emerging outbreaks (see https://outbreaksci.prereview.org).
These reviews comprise responses to short, yes-or-no questions, with optional commenting. The questions are designed to capture structured, high-level input on the importance and quality of the research, which can be aggregated across several reviews. Scientists who have ORCID IDs can submit their reviews as they read the preprints (currently limited to the medRxiv, bioRxiv and arXiv repositories). The reviews are open and can be submitted anonymously.
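The value of this format is that many quick reviews can be combined into a single structured signal per preprint. As a rough illustration (not the platform’s actual data model; the question keys, the review format and the “share of yes answers” rule below are assumptions), aggregation can be as simple as:

```python
# Illustrative sketch: aggregate structured yes/no review responses for one
# preprint. Question keys and the scoring rule are assumptions, not the
# Outbreak Science Rapid PREreview schema.
from collections import Counter
from typing import Dict, List

QUESTIONS = ["data_available", "methods_sound", "important_for_outbreak_response"]

def aggregate(reviews: List[Dict[str, bool]]) -> Dict[str, float]:
    """Return, per question, the share of reviewers who answered 'yes'."""
    tallies: Dict[str, Counter] = {q: Counter() for q in QUESTIONS}
    for review in reviews:
        for question, answer in review.items():
            if question in tallies:
                tallies[question][answer] += 1
    return {
        q: (c[True] / sum(c.values())) if c else float("nan")
        for q, c in tallies.items()
    }

# Three hypothetical reviews of the same preprint (one reviewer skipped a question).
print(aggregate([
    {"data_available": True, "methods_sound": True, "important_for_outbreak_response": True},
    {"data_available": False, "methods_sound": True},
    {"data_available": True, "methods_sound": False, "important_for_outbreak_response": True},
]))
```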
Outbreaks of pathogens such as the SARS-CoV-2 coronavirus that is responsible for COVID-19 move fast and can affect anyone. Research to support outbreak response needs to be fast and open, too, as do mechanisms to review outbreak-related research. Help other scientists, as well as the media, journals and public-health officials, to find the most important COVID-19 preprints now….(More)”.
Paper by Shweta Suran, Vishwajeet Pattanaik, and Dirk Draheim: “Over the last few years, Collective Intelligence (CI) platforms have become a vital resource for learning, problem solving, decision-making, and predictions. This rising interest in the topic has led to the development of several models and frameworks available in published literature.
Unfortunately, most of these models are built around domain-specific requirements, i.e., they are often based on the intuitions of their domain experts and developers. This has created a gap in our knowledge of the theoretical foundations of CI systems and models, in general. In this article, we attempt to fill this gap by conducting a systematic review of CI models and frameworks, identified from a collection of 9,418 scholarly articles published since 2000. Ultimately, we contribute by aggregating the available knowledge from 12 CI models into one novel framework and present a generic model that describes CI systems irrespective of their domains. We add to the previously available CI models by providing a more granular view of how different components of CI systems interact. We evaluate the proposed model by examining it with respect to six popular, ongoing CI initiatives available on the Web….(More)”.
Michael Mandiberg at The Atlantic: “Wikipedia matters. In a time of extreme political polarization, algorithmically enforced filter bubbles, and fact patterns dismissed as fake news, Wikipedia has become one of the few places where we can meet to write a shared reality. We treat it like a utility, and the U.S. and U.K. trust it about as much as the news.
But we know very little about who is writing the world’s encyclopedia. We do know that just because anyone can edit, doesn’t mean that everyone does: The site’s editors are disproportionately cis white men from the global North. We also know that, as with most of the internet, a small number of the editors do a large amount of the editing. But that’s basically it: In the interest of improving retention, the Wikimedia Foundation’s own research focuses on the motivations of people who do edit, not on those who don’t. The media, meanwhile, frequently focus on Wikipedia’s personality stories, even when covering the bigger questions. And Wikipedia’s own culture pushes back against granular data harvesting: The Wikimedia Foundation’s strong data-privacy rules guarantee users’ anonymity and limit the modes and duration of their own use of editor data.
But as part of my research in producing Print Wikipedia, I discovered a data set that can offer an entry point into the geography of Wikipedia’s contributors. Every time anyone edits Wikipedia, the software records the text added or removed, the time of the edit, and the username of the editor. (This edit history is part of Wikipedia’s ethos of radical transparency: Everyone is anonymous, and you can see what everyone is doing.) When an editor isn’t logged in with a username, the software records that user’s IP address. I parsed all of the 884 million edits to English Wikipedia to collect and geolocate the 43 million IP addresses that have edited English Wikipedia. I also counted 8.6 million username editors who have made at least one edit to an article.
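The edit history behind this analysis is available as public XML dumps, in which anonymous edits record an <ip> element in place of a username. A minimal sketch of how such a geolocation pass might look (an illustration only, not Mandiberg’s actual pipeline; the file paths, the GeoLite2 database and the country-level roll-up are assumptions):

```python
# Illustrative sketch (not the author's actual pipeline): stream a MediaWiki
# XML history dump, pull out the <ip> element recorded for each anonymous edit,
# and roll the edits up by country with the MaxMind GeoLite2 database.
# The file names and the country-level aggregation are assumptions.
import xml.etree.ElementTree as ET
from collections import Counter

import geoip2.database  # pip install geoip2; the GeoLite2-City.mmdb file is downloaded separately

def country_counts(dump_path: str, mmdb_path: str) -> Counter:
    counts: Counter = Counter()
    with geoip2.database.Reader(mmdb_path) as reader:
        # iterparse keeps memory flat even on multi-gigabyte dumps
        for _, elem in ET.iterparse(dump_path, events=("end",)):
            tag = elem.tag.rsplit("}", 1)[-1]  # strip the XML namespace
            if tag == "ip" and elem.text:
                try:
                    counts[reader.city(elem.text).country.iso_code or "unknown"] += 1
                except Exception:  # unroutable or unmapped addresses
                    counts["unknown"] += 1
            elif tag == "page":
                elem.clear()  # discard processed pages so the tree stays small
    return counts

if __name__ == "__main__":
    print(country_counts("enwiki-pages-meta-history.xml", "GeoLite2-City.mmdb").most_common(10))
```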
The result is a set of maps that offer, for the first time, insight into where the millions of volunteer editors who build and maintain English Wikipedia’s 5 million pages are—and, maybe more important, where they aren’t….
Like the Enlightenment itself, the modern encyclopedia has a history entwined with colonialism. The Encyclopédie aimed to collect and disseminate all the world’s knowledge—but in the end, it could not escape the biases of its colonial context. Likewise, Napoleon’s Description de l’Égypte augmented an imperial military campaign with a purportedly objective study of the nation, which was itself an additional form of conquest. If Wikipedia wants to break from the past and truly live up to its goal to compile the sum of all human knowledge, it requires the whole world’s participation….(More)”.
Essay by Douglas Schuler: “The utopian optimism about democracy and the internet has given way to disillusionment. At the same time, given the complexity of today’s wicked problems, the need for democracy is critical. Unfortunately democracy is under attack around the world, and there are ominous signs of its retreat.
How does democracy fare when digital technology is added to the picture? Weaving technology and democracy together is risky, and technologists who begin any digital project with the conviction that technology can and will solve “problems” of democracy are likely to be disappointed. Technology can be a boon to democracy if it is informed technology.
The goal in writing this essay was to encourage people to help develop and cultivate a rich democratic sphere. Democracy has great potential that it rarely achieves. It is radical, critical, complex, and fragile. It takes different forms in different contexts. These forms are complex and the solutionism promoted by the computer industry and others is not appropriate in the case of democracies. The primary aim of technology in the service of democracy is not merely to make it easier or more convenient but to improve society’s civic intelligence, its ability to address the problems it faces effectively and equitably….(More)”.
Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Ms Tang views as a circular phenomenon.
When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.
The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.
Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.
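In the standard formulation popularised by Weyl, casting v votes for or against a single issue costs v² credits from a voter’s fixed budget, so expressing a strong preference on one issue means forgoing influence on others. A minimal sketch under those assumptions (the budget size, issue names and ballot format are illustrative, not drawn from the article):

```python
# Minimal sketch of the quadratic-voting rule: casting v votes on an issue
# costs v**2 credits from a fixed budget. Budget size, issue names and the
# ballot format are illustrative assumptions, not taken from the article.
from typing import Dict, List

BUDGET = 100  # voice credits per voter

def cost(ballot: Dict[str, int]) -> int:
    """Total credits spent: each issue's (signed) vote count costs votes**2."""
    return sum(v * v for v in ballot.values())

def tally(ballots: List[Dict[str, int]]) -> Dict[str, int]:
    """Sum signed votes per issue, skipping ballots that exceed the budget."""
    totals: Dict[str, int] = {}
    for ballot in ballots:
        if cost(ballot) > BUDGET:
            continue  # over-budget ballots are rejected in this sketch
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# With 100 credits a voter can cast 9 votes on one issue (81 credits) plus
# 4 on another (16 credits), but not 10 on one issue and anything elsewhere.
print(tally([{"transit": 9, "parks": 4}, {"transit": -3, "parks": 5}]))
```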
For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies; with inevitable regularity we observe the ascent of new technologies to the peak of inflated expectations they will not be able to fulfil, before dooming them to a period languishing in the trough of disillusionment.[1]
There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).
In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.