Constructing Digital Democracies: Facebook, Arendt, and the Politics of Design


Paper by Jennifer Forestal: “Deliberative democracy requires both equality and difference, with structures that organize a cohesive public while still accommodating the unique perspectives of each participant. While institutions like laws and norms can help to provide this balance, the built environment also plays a role in supporting democratic politics—both on- and off-line.

In this article, I use the work of Hannah Arendt to articulate two characteristics the built environment needs in order to support democratic politics: it must (1) serve as a common world, drawing users together and emphasizing their common interests, and (2) preserve spaces of appearance, accommodating diverse perspectives and inviting disagreement. I then turn to the example of Facebook to show how these characteristics can be used as criteria for evaluating how well a particular digital platform supports democratic politics, and to provide alternative mechanisms these sites might use to fulfill their role as a public realm…(More)”.

Characterizing Disinformation Risk to Open Data in the Post-Truth Era


Paper by Adrienne Colborne and Michael Smit: “Curated, labeled, high-quality data is a valuable commodity for tasks such as business analytics and machine learning. Open data is a common source of such data—for example, retail analytics draws on open demographic data, and weather forecast systems draw on open atmospheric and ocean data. Open data is released openly by governments to achieve various objectives, such as transparency, informing citizen engagement, or supporting private enterprise.

Critical examination of ongoing social changes, including the post-truth phenomenon, suggests the quality, integrity, and authenticity of open data may be at risk. We introduce this risk through various lenses, describe some of the types of risk we expect, using a threat-model approach, identify approaches to mitigate each risk, and present real-world examples of cases where the risk has already caused harm. As an initial assessment of awareness of this disinformation risk, we compare our analysis to perspectives captured during open data stakeholder consultations in Canada…(More)”.

Sharing Health Data and Biospecimens with Industry — A Principle-Driven, Practical Approach


Kayte Spector-Bagdady et al. in the New England Journal of Medicine: “The advent of standardized electronic health records, sustainable biobanks, consumer-wellness applications, and advanced diagnostics has resulted in new health information repositories. As highlighted by the Covid-19 pandemic, these repositories create an opportunity for advancing health research by means of secondary use of data and biospecimens. Current regulations in this space give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens. But some recent examples of health care institutions sharing individual-level data and specimens with companies have generated controversy. Academic medical centers are therefore both practically and ethically compelled to establish best practices for governing the sharing of such contributions with outside entities. We believe that the approach we have taken at Michigan Medicine could help inform the national conversation on this issue.

The Federal Policy for the Protection of Human Subjects offers some safeguards for research participants from whom data and specimens have been collected. For example, researchers must notify participants if commercial use of their specimens is a possibility. These regulations generally cover only federally funded work, however, and they don’t apply to deidentified data or specimens. Because participants value transparency regarding industry access to their data and biospecimens, our institution set out to create standards that would better reflect participants’ expectations and honor their trust. Using a principlist approach that balances beneficence and nonmaleficence, respect for persons, and justice, buttressed by recent analyses and findings regarding contributors’ preferences, Michigan Medicine established a formal process to guide our approach…(More)”.

Dynamic Networks Improve Remote Decision-Making


Article by Abdullah Almaatouq and Alex “Sandy” Pentland: “The idea of collective intelligence is not new. Research has long shown that in a wide range of settings, groups of people working together outperform individuals toiling alone. But how do drastic shifts in circumstances, such as people working mostly at a distance during the COVID-19 pandemic, affect the quality of collective decision-making? After all, public health decisions can be a matter of life and death, and business decisions in crisis periods can have lasting effects on the economy.

During a crisis, it’s crucial to manage the flow of ideas deliberately and strategically so that communication pathways and decision-making are optimized. Our recently published research shows that optimal communication networks can emerge from within an organization when decision makers interact dynamically and receive frequent performance feedback. The results have practical implications for effective decision-making in times of dramatic change….

Our experiments illustrate the importance of dynamically configuring network structures and enabling decision makers to obtain useful, recurring feedback. But how do you apply such findings to real-world decision-making, whether remote or face to face, when constrained by a worldwide pandemic? In such an environment, connections among individuals, teams, and networks of teams must be continually reorganized in response to shifting circumstances and challenges. No single network structure is optimal for every decision, a fact that is clear in a variety of organizational contexts.
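To make the mechanism concrete, the following toy simulation (a sketch under invented assumptions, not the authors’ experimental platform) shows dynamic rewiring with performance feedback: agents make noisy estimates, learn each round how far off everyone is, and re-link to the currently best-performing peers before revising their answers.

```python
import numpy as np

# Illustrative toy model of dynamic rewiring with feedback; all
# parameters (N, K, ROUNDS, noise level) are invented for the sketch.
rng = np.random.default_rng(42)
TRUTH = 100.0              # quantity the group tries to estimate
N, K, ROUNDS = 20, 3, 10   # agents, links per agent, feedback rounds

estimates = TRUTH + rng.normal(0, 30, size=N)  # noisy initial guesses

for _ in range(ROUNDS):
    errors = np.abs(estimates - TRUTH)         # performance feedback
    new_estimates = estimates.copy()
    for i in range(N):
        # Dynamic rewiring: each agent links to the K currently
        # best-performing other agents.
        peers = [j for j in np.argsort(errors) if j != i][:K]
        # Revise halfway toward the average of the chosen peers.
        new_estimates[i] = 0.5 * estimates[i] + 0.5 * estimates[peers].mean()
    estimates = new_estimates

print(f"mean abs error after rewiring: {np.abs(estimates - TRUTH).mean():.2f}")
```

With a static network the same agents would keep averaging with the same, possibly poorly informed, neighbors; the rewiring step is what lets accurate information propagate.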

Public sector. Consider the teams of advisers working with governments in creating guidelines to flatten the curve and help restart national economies. The teams are frequently reconfigured to leverage pertinent expertise and integrate data from many domains. They get timely feedback on how decisions affect daily realities (rates of infection, hospitalization, death) — and then adjust recommended public health protocols accordingly. Some team members move between levels, perhaps being part of a state-level team for a while, then federal, and then back to state. This flexibility ensures that people making big-picture decisions have input from those closer to the front lines.

Witness how Germany considered putting a brake on some of its reopening measures in response to a substantial, unexpected uptick in COVID-19 infections. Such time-sensitive decisions are not made effectively without a dynamic exchange of ideas and data. Decision makers must quickly adapt to facts reported by subject-area experts and regional officials who have the relevant information and analyses at a given moment…(More)”.

Saving Our Oceans: Scaling the Impact of Robust Action Through Crowdsourcing


Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.

We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop them into sustainable solutions. Yet we also observed that the momentum and novelty generated were at risk of getting lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.

Eye-catching advances in some AI fields are not real


Matthew Hutson at Science: “Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim—and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks—software architectures that loosely mimic the brain. “Fifty papers in,” he says, “it became clear that it wasn’t obvious what the state of the art even was.”

The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly—and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock’s Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. “It’s the old saw, right?” Guttag said. “If you can’t measure something, it’s hard to make it better.”
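For readers unfamiliar with the technique, here is a minimal sketch of magnitude pruning, the simplest member of the family (in Python/NumPy; an illustration, not one of the 81 surveyed algorithms): connections whose weights have the smallest absolute values are zeroed out.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly `sparsity` (a fraction in [0, 1]) of the
    smallest-magnitude entries; ties at the threshold may prune a
    few more than requested."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    return weights * (np.abs(weights) > threshold)

# Example: remove ~90% of a random weight matrix's connections.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(w_pruned) / w.size:.2f}")
```

As the survey suggests, the hard part is not writing such a routine but comparing its many variants fairly, on the same architectures, datasets, and sparsity levels.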

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. “There’s always been these waves of hype,” Musgrave says….(More)”.
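To make “loss function” concrete: one classic pairwise metric-learning objective of the kind Musgrave benchmarked is the contrastive loss (Hadsell et al., 2006). The sketch below is illustrative, not code from his paper; the margin value is an arbitrary assumption.

```python
import numpy as np

def contrastive_loss(a: np.ndarray, b: np.ndarray,
                     same_class: bool, margin: float = 1.0) -> float:
    """Pull same-class embeddings together; push different-class
    embeddings at least `margin` apart."""
    d = np.linalg.norm(a - b)
    if same_class:
        return float(d ** 2)                  # penalize any separation
    return float(max(0.0, margin - d) ** 2)  # penalize only if too close

# Toy usage with 2-D embeddings of two images.
x, y = np.array([0.0, 0.0]), np.array([0.1, 0.0])
print(contrastive_loss(x, y, same_class=True))   # small: pair agrees
print(contrastive_loss(x, y, same_class=False))  # large: pair too close
```

Newer losses refine this recipe in many ways; Musgrave’s point is that, once everything else is held equal, the refinements buy less than their papers claim.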

More ethical, more innovative? The effects of ethical culture and ethical leadership on realized innovation


Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business‐like behaviour and positive deviance, rather than on how ethical behaviour and culture may contribute to innovation.

In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australian Public Service Commission (N = 80,316). Our findings show that both ethical culture (at the working-group and agency levels) and ethical leadership have significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender, and across different samples. We conclude our paper with theoretical and practical implications of our research findings…(More)”.

UK parliamentary select committees: crowdsourcing for evidence-based policy or grandstanding?


Paper by the LSE GV314 Group: “In the United Kingdom, the influence of parliamentary select committees on policy depends substantially on the ‘seriousness’ with which they approach the task of gathering and evaluating a wide range of evidence and producing reports and recommendations based on it. However, select committees are often charged with being concerned with ‘political theatre’ and ‘grandstanding’ rather than producing evidence-based policy recommendations. This study, based on a survey of 919 ‘discretionary’ witnesses, including those submitting written and oral evidence, examines the case for arguing that there is political bias and grandstanding in the way select committees go about selecting witnesses, interrogating them, and using their evidence to put reports together. While the research finds some evidence of such ‘grandstanding’, it does not appear strong enough to suggest that the role of select committees as crowdsourcers of evidence is compromised…(More)”.

AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings


Paper by Maciej Kuziemski and Gianluca Misuraca: “The rush to understand new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector’s predicament is a tragic double bind: its obligations to protect citizens from potential algorithmic harms are at odds with the temptation to increase its own efficiency – or in other words – to govern algorithms, while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms’ intrinsic properties: unlike other digital solutions long embraced by governments, they create externalities that rule-based programming lacks.

As the pressure to deploy automated decision-making systems in the public sector grows, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, may intensify existing power asymmetries. To this end, investigating the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, “optimising” employment services in Poland, and personalising the digital service experience in Finland, the paper argues for a common framework to evaluate the potential impact of the use of AI in the public sector.

In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prevalent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided. This is of particular importance in light of the current COVID-19 emergency, in which AI and the regulatory frameworks underpinning data ecosystems have become crucial policy issues: as more and more innovations draw on large-scale data collection from digital devices and on the real-time accessibility of information and services, the contact and relationships between institutions and citizens could strengthen – or undermine – trust in governance systems and democracy…(More)”.

Digital Identity and the Blockchain: Universal Identity Management and the Concept of the “Self-Sovereign” Individual


Paper by Andrej J. Zwitter, Oskar J. Gstrein and Evan Yap: “While “classical” human identity has kept philosophers busy for millennia, “Digital Identity” seems primarily machine related. Telephone numbers, email inboxes, or Internet Protocol (IP) addresses seem, at first glance, irrelevant to defining us as human beings. However, with the omnipresence of digital space, the digital aspects of identity gain importance.

In this submission, we aim to put recent developments in context and provide a categorization to frame the landscape as developments proceed rapidly. First, we present selected philosophical perspectives on identity. Second, we explore how the legal landscape approaches identity from a traditional dogmatic perspective, in both national and international law. After blending the insights from those sections together in a third step, we describe and discuss current developments driven by the emergence of new tools such as “Distributed Ledger Technology” and “Zero Knowledge Proof.”

One of our main findings is that the management of digital identity is transforming from a purpose-driven necessity toward a self-standing activity that becomes a resource for many digital applications. In other words, whereas traditionally identity is addressed in a predominantly sectoral fashion whenever necessary, new technologies transform digital identity management into a basic infrastructural service, sometimes even a commodity. This coincides with a trend to take “control” over identity away from governmental institutions and corporate actors and hand it to “self-sovereign individuals,” who now have the opportunity to manage their digital selves autonomously.
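A minimal sketch of the cryptographic kernel behind that shift, assuming Python and the widely used `cryptography` package: the individual generates and holds the key pair, signs claims about themselves, and anyone can verify the claims against the public key without consulting a central registry. Production self-sovereign identity systems (e.g., W3C Decentralized Identifiers and verifiable credentials) add resolution, revocation, and zero-knowledge proofs on top of this kernel.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The holder generates and keeps the key pair; no government registry
# or corporate account issues the identifier.
holder_key = Ed25519PrivateKey.generate()
public_key = holder_key.public_key()

# The holder signs a claim about themselves. (The claim format here is
# an invented placeholder, not a standard credential schema.)
claim = b'{"name": "alice", "over_18": true}'
signature = holder_key.sign(claim)

# Any verifier with the public key can check the claim offline.
try:
    public_key.verify(signature, claim)
    print("claim verified against holder-controlled key")
except InvalidSignature:
    print("verification failed")
```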

To make our conceptual statements more relevant, we present several existing use cases in the public and private sector. Subsequently, we discuss potential risks that should be mitigated in order to create a desirable relationship between the individual, public institutions, and the private sector in a world where self-sovereign identity management has become the norm. We illustrate these issues through the discussion around privacy, as well as the development of backup mechanisms for digital identities. Despite the undeniable potential of this approach to identity management, we suggest that particularly at this point in time there is a clear need to make detailed (non-technological) governance decisions impacting the general design and implementation of self-sovereign identity systems…(More)” – See also Field Report: On the Emergent Use of Distributed Ledger Technologies for Identity Management.