Paper by Feijuan He et al: “Collective intelligence (CI) refers to the intelligence that emerges at the macro-level of a collection and transcends that of the individuals. CI is a continuously popular research topic studied by researchers in different areas, such as sociology, economics, biology, and artificial intelligence. In this survey, we summarize work on CI in various fields. First, according to the existence of interactions between individuals and the feedback mechanism in the aggregation process, we establish a CI taxonomy that includes three paradigms: isolation, collaboration, and feedback. We then conduct a statistical literature analysis to explain the differences among the three paradigms and their development in recent years. Second, we elaborate on the types of CI under each paradigm and discuss the generation mechanism or theoretical basis of the different types of CI. Third, we describe certain CI-related applications in 2019, which can be appropriately categorized by our proposed taxonomy. Finally, we summarize future research directions for CI under each paradigm. We hope that this survey helps researchers understand the current state of CI and clarifies the directions of future research….(More)”
The Crowd and the Cosmos: Adventures in the Zooniverse
Book by Chris Lintott: “The world of science has been transformed. Where once astronomers sat at the controls of giant telescopes in remote locations, praying for clear skies, now they have no need to budge from their desks, as data arrives in their inbox. And what they receive is overwhelming; projects now being built provide more data in a few nights than in the whole of humanity’s history of observing the Universe. It’s not just astronomy either – dealing with this deluge of data is the major challenge for scientists at CERN, and for biologists who use automated cameras to spy on animals in their natural habitats. Artificial intelligence is one part of the solution – but will it spell the end of human involvement in scientific discovery?
No, argues Chris Lintott. We humans still have unique capabilities to bring to bear – our curiosity, our capacity for wonder, and, most importantly, our capacity for surprise. It seems that humans and computers working together do better than computers can on their own. But with so much scientific data, you need a lot of scientists – a crowd, in fact. Lintott found such a crowd in the Zooniverse, the web-based project that allows hundreds of thousands of enthusiastic volunteers to contribute to science.
In this book, Lintott describes the exciting discoveries that people all over the world have made, from galaxies to pulsars, exoplanets to moons, and from penguin behavior to old ships’ logs. This approach builds on a long history of so-called “citizen science,” given new power by fast internet and distributed data. Discovery is no longer the remit only of scientists in specialist labs or academics in ivory towers. It’s something we can all take part in. As Lintott shows, it’s a wonderful way to engage with science, yielding new insights daily. You, too, can help explore the Universe in your lunch hour…(More)”.
Defining concepts of the digital society
A special section of Internet Policy Review edited by Christian Katzenbach and Thomas Christian Bächle: “With this new special section Defining concepts of the digital society in Internet Policy Review, we seek to foster a platform that provides and validates such overarching frameworks and theories. Based on the latest research, yet broad in scope, the contributions offer effective tools to analyse the digital society. Their authors offer concise articles that portray and critically discuss individual concepts with an interdisciplinary mindset. Each article contextualises a concept’s origin and academic traditions, analyses its contemporary usage in different research approaches, and discusses its social, political, cultural, ethical or economic relevance and impact as well as its analytical value. With this, the authors are building bridges between the disciplines, between research and practice, as well as between innovative explanations and their conceptual heritage….(More)”
Algorithmic governance
Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center
Datafication
Ulises A. Mejias, State University of New York at Oswego
Nick Couldry, London School of Economics & Political Science
Filter bubble
Axel Bruns, Queensland University of Technology
Platformisation
Thomas Poell, University of Amsterdam
David Nieborg, University of Toronto
José van Dijck, Utrecht University
Privacy
Tobias Matzner, University of Paderborn
Carsten Ochs, University of Kassel
Causal Inference: What If
Book by Miguel A. Hernán, James M. Robins: “Causal Inference is an admittedly pretentious title for a book. Causal inference is a complex scientific task that relies on triangulating evidence from multiple sources and on the application of a variety of methodological approaches. No book can possibly provide a comprehensive description of methodologies for causal inference across the sciences. The authors of any Causal Inference book will have to choose which aspects of causal inference methodology they want to emphasize.
The title of this introduction reflects our own choices: a book that helps scientists–especially health and social scientists–generate and analyze data to make causal inferences that are explicit about both the causal question and the assumptions underlying the data analysis. Unfortunately, the scientific literature is plagued by studies in which the causal question is not explicitly stated and the investigators’ unverifiable assumptions are not declared. This casual attitude towards causal inference has led to a great deal of confusion. For example, it is not uncommon to find studies in which the effect estimates are hard to interpret because the data analysis methods cannot appropriately answer the causal question (were it explicitly stated) under the investigators’ assumptions (were they declared).
In this book, we stress the need to take the causal question seriously enough to articulate it, and to delineate the separate roles of data and assumptions for causal inference. Once these foundations are in place, causal inferences become necessarily less casual, which helps prevent confusion. The book describes various data analysis approaches that can be used to estimate the causal effect of interest under a particular set of assumptions when data are collected on each individual in a population. A key message of the book is that causal inference cannot be reduced to a collection of recipes for data analysis.
The book is divided into three parts of increasing difficulty: Part I is about causal inference without models (i.e., nonparametric identification of causal effects), Part II is about causal inference with models (i.e., estimation of causal effects with parametric models), and Part III is about causal inference from complex longitudinal data (i.e., estimation of causal effects of time-varying treatments)….(More) (Additional Material)”.
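To give a flavour of what “causal inference without models” in Part I looks like in practice, the sketch below shows nonparametric standardization over a single binary confounder. All variable names and figures are hypothetical and are not taken from the book; it is only a minimal illustration of the technique.

```python
# Toy sketch of nonparametric standardization for a point treatment A on an
# outcome Y, adjusting for one binary confounder L. All numbers are made up.
# Under consistency, positivity, and exchangeability given L, the standardized
# risks can be read as estimates of the counterfactual risks under treatment
# and under no treatment.

# Stratum-specific risks Pr[Y=1 | A=a, L=l] (hypothetical)
risk = {
    (1, 0): 0.20,  # treated,   L=0
    (1, 1): 0.50,  # treated,   L=1
    (0, 0): 0.10,  # untreated, L=0
    (0, 1): 0.40,  # untreated, L=1
}

# Marginal distribution of the confounder L in the population (hypothetical)
pr_l = {0: 0.6, 1: 0.4}

def standardized_risk(a: int) -> float:
    """Average the stratum-specific risks over the distribution of L."""
    return sum(risk[(a, l)] * pr_l[l] for l in pr_l)

risk_if_treated = standardized_risk(1)
risk_if_untreated = standardized_risk(0)

print(f"standardized risk if treated:   {risk_if_treated:.2f}")
print(f"standardized risk if untreated: {risk_if_untreated:.2f}")
print(f"causal risk difference:         {risk_if_treated - risk_if_untreated:.2f}")
```

The same quantity could equally be estimated by other approaches such as inverse probability weighting; the book’s point is that the causal interpretation rests on the stated assumptions, not on the arithmetic.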
Contract for the Web
About: “The Web was designed to bring people together and make knowledge freely available. It has changed the world for good and improved the lives of billions. Yet, many people are still unable to access its benefits and, for others, the Web comes with too many unacceptable costs.
Everyone has a role to play in safeguarding the future of the Web. The Contract for the Web was created by representatives from over 80 organizations, representing governments, companies and civil society, and sets out commitments to guide digital policy agendas. To achieve the Contract’s goals, governments, companies, civil society and individuals must commit to sustained policy development, advocacy, and implementation of the Contract’s text…(More)”.
Access My Info (AMI)
About: “What do companies know about you? How do they handle your data? And who do they share it with?
Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand the responses companies make to these requests. Past AMI projects have shed light on how companies treat user data and contribute to digital privacy reforms around the world.
What are data access requests?
A data access request is a letter you can send to any company with products/services that you use. The request asks that the company disclose all the information it has about you and whether or not it has shared your data with any third parties. If the place where you live has data protection laws that include the right to data access, then companies may be legally obligated to respond…
AMI has made personal data requests in jurisdictions around the world and found common patterns.
- There are significant gaps between data access laws on paper and the law in practice;
- People have consistently encountered barriers to accessing their data.
Together with our partners in each jurisdiction, we have used Access My Info to set off a dialog between users, civil society, regulators, and companies…(More)”
A New Wave of Deliberative Democracy
Essay by Claudia Chwalisz: “….Deliberative bodies such as citizens’ councils, assemblies, and juries are often called “deliberative mini-publics” in academic literature. They are just one aspect of deliberative democracy and involve randomly selected citizens spending a significant period of time developing informed recommendations for public authorities. Many scholars emphasize two core defining features: deliberation (careful and open discussion to weigh the evidence about an issue) and representativeness, achieved through sortition (random selection).
Of course, the principles of deliberation and sortition are not new. Rooted in ancient Athenian democracy, they were used at various points in history until around two to three centuries ago. Evoked by the Greek statesman Pericles in 431 BCE, the ideas—that “ordinary citizens, though occupied with the pursuits of industry, are still fair judges of public matters” and that instead of being a “stumbling block in the way of action . . . [discussion] is an indispensable preliminary to any wise action at all”—faded to the background when elections came to dominate the contemporary notion of democracy.
But the belief in the ability of ordinary citizens to deliberate and participate in public decisionmaking has come back into vogue over the past several decades. And it is modern applications of the principles of sortition and deliberation, meaning their adaptation in the context of liberal representative democratic institutions, that make them “democratic innovations” today. This is not to say that there are no longer proponents who claim that governance should be the domain of “experts” who are committed to governing for the general good and have superior knowledge to do it. Originally espoused by Plato, the argument in favor of epistocracy—rule by experts—continues to be reiterated, such as in Jason Brennan’s 2016 book Against Democracy. It is a reminder that the battle of ideas for democracy’s future is nothing new and requires constant engagement.
Today’s political context—characterized by political polarization; mistrust in politicians, governments, and fellow citizens; voter apathy; increasing political protests; and a new context of misinformation and disinformation—has prompted politicians, policymakers, civil society organizations, and citizens to reflect on how collective public decisions are being made in the twenty-first century. In particular, political tensions have raised the need for new ways of achieving consensus and taking action on issues that require long-term solutions, such as climate change and technology use. Assembling ordinary citizens from all parts of society to deliberate on a complex political issue has thus become even more appealing.
Some discussions have returned to exploring democracy’s deliberative roots. An ongoing study by the Organization for Economic Co-operation and Development (OECD) is analyzing over 700 cases of deliberative mini-publics commissioned by public authorities to inform their decisionmaking. The forthcoming report assesses the mini-publics’ use, principles of good practice, and routes to institutionalization. This new area of work stems from the 2017 OECD Recommendation of the Council on Open Government, which recommends that adherents (OECD members and some nonmembers) grant all stakeholders, including citizens, “equal and fair opportunities to be informed and consulted and actively engage them in all phases of the policy-cycle” and “promote innovative ways to effectively engage with stakeholders to source ideas and co-create solutions.” A better understanding of how public authorities have been using deliberative mini-publics to inform their decisionmaking around the world, not just in OECD countries, should provide a richer understanding of what works and what does not. It should also reveal the design principles needed for mini-publics to effectively function, deliver strong recommendations, increase legitimacy of the decisionmaking process, and possibly even improve public trust….(More)”.
Seeing Like a Finite State Machine
Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (for example, by singling out particular groups regarded as problematic for special police attention, making them more likely to be arrested, and so on), the bias may feed upon itself.
This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.
These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, and no ready ways to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.
In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and further reducing the possibility of negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason….(More)”
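To make the self-reinforcing feedback loop concrete, here is a toy simulation in the spirit of Farrell’s argument. All parameters are invented; it illustrates the mechanism only, and nothing in it comes from the essay itself:

```python
# Toy simulation of a runaway bias loop: two districts with identical true
# incident rates, but one starts with more *recorded* incidents. Each round,
# most attention goes wherever past records are highest, and only what is
# observed gets recorded, so the recorded gap grows even though the
# underlying reality is the same. All numbers are hypothetical.

TRUE_INCIDENTS_PER_ROUND = 10                        # identical in both districts
records = {"district_a": 55.0, "district_b": 45.0}   # initial biased records

for round_num in range(1, 6):
    leader = max(records, key=records.get)
    # 80% of attention follows the district with the larger record count
    attention = {d: (0.8 if d == leader else 0.2) for d in records}
    for district in records:
        records[district] += attention[district] * TRUE_INCIDENTS_PER_ROUND
    gap = records["district_a"] - records["district_b"]
    print(f"round {round_num}: recorded-incident gap = {gap:.0f}")
```

In a democratic setting, the district on the losing end of this loop can contest the numbers; Farrell’s point is that in an authoritarian setting those correctives are largely absent, so the loop keeps running.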
Manual of Digital Earth
Book by Huadong Guo, Michael F. Goodchild and Alessandro Annoni: “This open access book offers a summary of the development of Digital Earth over the past twenty years. By reviewing the initial vision of Digital Earth, the evolution of that vision, the relevant key technologies, and the role of Digital Earth in helping people respond to global challenges, this publication reveals how and why Digital Earth is becoming vital for acquiring, processing, analysing and mining the rapidly growing volume of global data sets about the Earth.
The main aspects of Digital Earth covered here include: Digital Earth platforms, remote sensing and navigation satellites, processing and visualizing geospatial information, geospatial information infrastructures, big data and cloud computing, transformation and zooming, artificial intelligence, Internet of Things, and social media. Moreover, the book covers in detail the multi-layered/multi-faceted roles of Digital Earth in response to sustainable development goals, climate change, and disaster mitigation, the applications of Digital Earth (such as the digital city and digital heritage), citizen science in support of Digital Earth, the economic value of Digital Earth, and so on. This book also reviews the regional and national development of Digital Earth around the world, and discusses the role and effect of education and ethics. Lastly, it concludes with a summary of the challenges and forecasts the future trends of Digital Earth. By sharing case studies and a broad range of general and scientific insights into the science and technology of Digital Earth, this book offers an essential introduction for an ever-growing international audience….(More)”.
The Right to Be Seen
Anne-Marie Slaughter and Yuliya Panfil at Project Syndicate: “While much of the developed world is properly worried about myriad privacy outrages at the hands of Big Tech and demanding – and securing – for individuals a “right to be forgotten,” many around the world are posing a very different question: What about the right to be seen?
Just ask the billion people who are locked out of services we take for granted – things like a bank account, a deed to a house, or even a mobile phone account – because they lack identity documents and thus can’t prove who they are. They are effectively invisible as a result of poor data.
The ability to exercise many of our most basic rights and privileges – such as the right to vote, drive, own property, and travel internationally – is determined by large administrative agencies that rely on standardized information to determine who is eligible for what. For example, to obtain a passport it is typically necessary to present a birth certificate. But what if you do not have a birth certificate? To open a bank account requires proof of address. But what if your house doesn’t have an address?
The inability to provide such basic information is a barrier to stability, prosperity, and opportunity. Invisible people are locked out of the formal economy, unable to vote, travel, or access medical and education benefits. It’s not that they are undeserving or unqualified, it’s that they are data poor.
In this context, the rich digital record provided by our smartphones and other sensors could become a powerful tool for good, so long as the risks are acknowledged. These gadgets, which have become central to our social and economic lives, leave a data trail that for many of us is the raw material that fuels what Harvard’s Shoshana Zuboff calls “surveillance capitalism.” Our Google location history shows exactly where we live and work. Our email activity reveals our social networks. Even the way we hold our smartphone can give away early signs of Parkinson’s.
But what if citizens could harness the power of these data for themselves, to become visible to administrative gatekeepers and access the rights and privileges to which they are entitled? Their virtual trail could then be converted into proof of physical facts.
That is beginning to happen. In India, slum dwellers are using smartphone location data to put themselves on city maps for the first time and register for addresses that they can then use to receive mail and register for government IDs. In Tanzania, citizens are using their mobile payment histories to build their credit scores and access more traditional financial services. And in Europe and the United States, Uber drivers are fighting for their rideshare data to advocate for employment benefits….(More)”.