Good process is vital for good government


Andrea Siodmok and Matthew Taylor at the RSA: “…‘Bad’ process is time wasting and energy sapping. It can reinforce barriers to collaboration, solidify hierarchies and hamper adaptiveness.

‘Good process’ energises people, creates spaces for different ideas to emerge, builds trust and collective capacity.

The bad and good could be distinguished along several dimensions. Here are some:

Bad process:

  • Routine/happens because it happens            
  • Limited preparation and follow through         
  • Little or no facilitation            
  • Reinforces hierarchies, excludes key voices  
  • Rigid accountability focussed on blame           
  • Always formal and mandated           
  • Low trust/transactional       

Good process:

  • Mission/goal oriented – happens because it makes a difference
  • Sees process as part of a flow of change – clear accountability
  • Facilitated by people with necessary skills and techniques 
  • Inclusive, what matters is the quality of contributions not their source
  • Collective accountability focussed on learning 
  • Mixes formal and informal settings and methods, often voluntary
  • Trust enhancing/collaborative

Why is bad process so prevalent and good process so rare?

Because bad process is often the default. In the short term, bad process is easier, less resource-intensive, and less risky than good process.

Bringing people together in inclusive processes

Bringing key actors together in inclusive processes helps us both understand the system that is maintaining the status quo and build a joint sense of mission for a new status quo.

It also helps people start to identify and organise around key opportunities for change. 

One of the most positive developments to have occurred in and around Whitehall in recent years is the emergence of informal, system spanning networks of public officials animated by shared values and goals such as One Team Gov and a whole host of bottom up networks on topics as diverse as wellbeing, inclusion, and climate change….(More)”.

Decide Madrid: A Critical Analysis of an Award-Winning e-Participation Initiative


Paper by Sonia Royo, Vicente Pina and Jaime Garcia-Rayado: “This paper analyzes the award-winning e-participation initiative of the city council of Madrid, Decide Madrid, to identify the critical success factors and the main barriers that are conditioning its performance. An exploratory case study is used as a research technique, including desk research and semi-structured interviews. The analysis distinguishes contextual, organizational and individual level factors; it considers whether the factors or barriers are more related to the information and communication technology (ICT) component, public sector context or democratic participation; it also differentiates among the different stages of the development of the initiative. Results show that individual and organizational factors related to the public sector context and democratic participation are the most relevant success factors.

The high expectations of citizens explain the high levels of participation in the initial stages of Decide Madrid. However, the lack of transparency and poor functioning of some of its participatory activities (organizational factors related to the ICT and democratic dimensions) are negatively affecting its performance. The software created for this platform, Consul, has been adopted, or is in the process of being implemented, in more than 100 institutions in 33 countries. Therefore, the findings of this research can potentially be useful to improve the performance and sustainability of e-participation platforms worldwide…(More)”.

The Future of Democracy in Europe: Technology and the Evolution of Representation


Report by Chatham House: “There is a widespread sense that liberal democracy is in crisis, but little consensus exists on the specific nature and causes of the crisis. In particular, there are three prisms through which the crisis is usually seen: the rise of ‘populism’, ‘democratic deconsolidation’, and a ‘hollowing out’ of democracy. Each reflects normative assumptions about democracy.

The exact role of digital technology in the crisis is disputed. Despite the widely held perception that social media is undermining democracy, the evidence for this is limited. Over the longer term, the further development of digital technology could undermine the fundamental preconditions for democracy – though the pace and breadth of technological change make predictions about its future impact difficult.

Democracy functions in different ways in different European countries, with political systems on the continent ranging from ‘majoritarian democracies’ such as the UK to ‘consensual democracies’ such as Belgium and Switzerland. However, no type seems to be immune from the crisis. The political systems of EU member states also interact in diverse ways with the EU’s own structure, which is problematic for representative democracy as conventionally understood, but difficult to reform.

Political parties, central to the model of representative democracy that emerged in the late 18th century, have long seemed to be in decline. Recently there have been some signs of a reversal of this trend, with the emergence of parties that have used digital technology in innovative ways to reconnect with citizens. Traditional parties can learn from these new ‘digital parties’.

Recent years have also seen a proliferation of experiments in direct and deliberative democracy. There is a need for more experimentation in these alternative forms of democracy, and for further evaluation of how they can be integrated into the existing institutions and processes of representative democracy at the local, regional, national and EU levels.

We should not think of democracy in a static way – that is, as a system that can be perfected once and for all and then simply maintained and defended against threats. Democracy has continually evolved and now needs to evolve further. The solution to the crisis will not be to attempt to limit democracy in response to pressure from ‘populism’ but to deepen it further as part of a ‘democratization of democracy’….(More)”.

Imagining Regulation Differently: Co-creating for Engagement


Book edited by Morag McDermont, Tim Cole, Janet Newman and Angela Piccini: “There is an urgent need to rethink relationships between systems of government and those who are ‘governed’. This book explores ways of rethinking those relationships by bringing communities normally excluded from decision-making to centre stage to experiment with new methods of regulating for engagement.

Using original, co-produced research, it innovatively shows how we can better use a ‘bottom-up’ approach to design regulatory regimes that recognise the capabilities of communities at the margins and powerfully support the knowledge, passions and creativity of citizens. The authors provide essential guidance for all those working on co-produced research to make impactful change…(More)”.

Many Tech Experts Say Digital Disruption Will Hurt Democracy


Lee Rainie and Janna Anderson at Pew Research Center: “The years of almost unfettered enthusiasm about the benefits of the internet have been followed by a period of techlash as users worry about the actors who exploit the speed, reach and complexity of the internet for harmful purposes. Over the past four years – a time of the Brexit decision in the United Kingdom, the American presidential election and a variety of other elections – the digital disruption of democracy has been a leading concern.

The hunt for remedies is at an early stage. Resistance to American-based big tech firms is increasingly evident, and some tech pioneers have joined the chorus. Governments are actively investigating technology firms, and some tech firms themselves are requesting government regulation. Additionally, nonprofit organizations and foundations are directing resources toward finding the best strategies for coping with the harmful effects of disruption. For example, the Knight Foundation announced in 2019 that it is awarding $50 million in grants to encourage the development of a new field of research centered on technology’s impact on democracy.

In light of this furor, Pew Research Center and Elon University’s Imagining the Internet Center canvassed technology experts in the summer of 2019 to gain their insights about the potential future effects of people’s use of technology on democracy….

The main themes found in an analysis of the experts’ comments are outlined in the next two tables….(More)”.

Can Technology Support Democracy?


Essay by Douglas Schuler: “The utopian optimism about democracy and the internet has given way to disillusionment. At the same time, given the complexity of today’s wicked problems, the need for democracy is critical. Unfortunately democracy is under attack around the world, and there are ominous signs of its retreat.

How does democracy fare when digital technology is added to the picture? Weaving technology and democracy together is risky, and technologists who begin any digital project with the conviction that technology can and will solve “problems” of democracy are likely to be disappointed. Technology can be a boon to democracy if it is informed technology.

The goal in writing this essay was to encourage people to help develop and cultivate a rich democratic sphere. Democracy has great potential that it rarely achieves. It is radical, critical, complex, and fragile. It takes different forms in different contexts. These forms are complex and the solutionism promoted by the computer industry and others is not appropriate in the case of democracies. The primary aim of technology in the service of democracy is not merely to make it easier or more convenient but to improve society’s civic intelligence, its ability to address the problems it faces effectively and equitably….(More)”.

We All Wear Tinfoil Hats Now


Article by Geoff Shullenberger on “How fears of mind control went from paranoid delusion to conventional wisdom”: “In early 2017, after the double shock of Brexit and the election of Donald Trump, the British data-mining firm Cambridge Analytica gained sudden notoriety. The previously little-known company, reporters claimed, had used behavioral influencing techniques to turn out social media users to vote in both elections. By its own account, Cambridge Analytica had worked with both campaigns to produce customized propaganda for targeting individuals on Facebook likely to be swept up in the tide of anti-immigrant populism. Its methods, some news sources suggested, might have sent enough previously disengaged voters to the polls to have tipped the scales in favor of the surprise victors. To a certain segment of the public, this story seemed to answer the question raised by both upsets: How was it possible that the seemingly solid establishment consensus had been rejected? What’s more, the explanation confirmed everything that seemed creepy about the Internet, evoking a sci-fi vision of social media users turned into an army of political zombies, mobilized through subliminal manipulation.

Cambridge Analytica’s violations of Facebook users’ privacy have made it an enduring symbol of the dark side of social media. However, the more dramatic claims about the extent of the company’s political impact collapse under closer scrutiny, mainly because its much-hyped “psychographic targeting” methods probably don’t work. As former Facebook product manager Antonio García Martínez noted in a 2018 Wired article, “the public, with no small help from the media sniffing a great story, is ready to believe in the supernatural powers of a mostly unproven targeting strategy,” but “most ad insiders express skepticism about Cambridge Analytica’s claims of having influenced the election, and stress the real-world difficulty of changing anyone’s mind about anything with mere Facebook ads, least of all deeply ingrained political views.” According to García Martínez, the entire affair merely confirms a well-established truth: “In the ads world, just because a product doesn’t work doesn’t mean you can’t sell it….(More)”.

Collaborative е-Rulemaking, Democratic Bots, and the Future of Digital Democracy


Paper by Oren Perez: “… focuses on “deliberative e-rulemaking”: digital consultation processes that seek to facilitate public deliberation over policy or regulatory proposals. The main challenge of е-rulemaking platforms is to support an “intelligent” deliberative process that enables decision makers to identify a wide range of options, weigh the relevant considerations, and develop epistemically responsible solutions. This article discusses and critiques two approaches to this challenge: the Cornell Regulation Room project and the model of computationally assisted regulatory participation proposed by Livermore et al. It then proceeds to explore two alternative approaches to e-rulemaking. One is based on the implementation of collaborative, wiki-styled tools; this article discusses the findings of an experiment, conducted at Bar-Ilan University, that explored various aspects of a wiki-based collaborative е-rulemaking system. The second approach is more futuristic, focusing on the potential development of autonomous, artificial democratic agents. This article critically discusses this alternative, also in view of the recent debate regarding the idea of “augmented democracy.”…(More)”.

Digital tools can be a useful bolster to democracy


Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Ms Tang views as a circular phenomenon.

When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.

The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.

Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.

For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
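The core mechanic of quadratic voting is simple to express in code: casting n votes on an issue costs n² "voice credits" from a fixed budget, so expressing twice the intensity costs four times as much. The sketch below is a minimal hypothetical illustration of that costing rule (not the RadicalxChange implementation or the tool used in Colorado), with invented helper names:

```python
# Quadratic voting sketch: each voter spends "voice credits" across issues.
# Casting n votes on one issue costs n**2 credits, so strong preferences
# are expressible but increasingly expensive.

def credit_cost(ballot):
    """Total credits a ballot spends: n votes on an issue cost n squared."""
    return sum(votes ** 2 for votes in ballot.values())

def tally(ballots, budget):
    """Sum valid ballots per issue, rejecting any that overspend the budget."""
    totals = {}
    for ballot in ballots:
        if credit_cost(ballot) > budget:
            continue  # invalid ballot: exceeds the voter's credit budget
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

ballots = [
    {"parks": 3, "transit": 1},   # costs 9 + 1 = 10 credits
    {"parks": -2, "transit": 2},  # costs 4 + 4 = 8 credits
    {"transit": 5},               # costs 25 credits: rejected at budget 10
]
print(tally(ballots, budget=10))  # {'parks': 1, 'transit': 3}
```

The quadratic cost is what distinguishes this from ordinary cumulative voting: a voter can still signal intensity, but piling all credits onto a single issue buys far fewer votes than spreading them out.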

The Future of Minds and Machines


Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies; with inevitable regularity we observe the ascent of new technologies to the peak of inflated expectations they will not be able to fulfil, before dooming them to a period languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.