EU Policy Lab: “The Future of Government scenarios were developed through a bottom-up process on the basis of open dialogue workshops in Europe with about 130 citizens and 25 civil society and think tank representatives. The Joint Research Centre then reviewed these discussions and…(More)”.
The Yellow Vests movement and the urge to update democracy
Paula Forteza at OGP: “…The Yellow Vests movement in France is a complex social movement that points to social injustices arising from a political system that has excluded voices for decades. The movement shows the negative effects of the lack of participatory mechanisms in our institutional architecture. If the Yellow Vests are protesting in the streets today, it is certainly because an institutional dialogue was not possible, because their claims did not find an official channel of communication to reach the decision makers.
The inception of this movement is also symptomatic of the need to update our democracies. Organized through Facebook groups, the Yellow Vests is a leaderless movement that is challenging the hierarchical and vertical organization of the decision-making process. We need a more horizontal, agile and decentralized democracy to match the way civil society is getting organized on the internet. Social media platforms are not made for political mobilisation, as the rise of fake news, polarisation and foreign intervention has shown. Learning from these social media flaws, we can back an institutional change with the creation of dedicated platforms for political expression that are transparent, accountable and democratically governed.
Our reaction to this crisis needs to match the expectations. It is urgent to revitalise our democracies through a robust and impactful set of participatory initiatives. We have in our hands the future of the social contract and, in a way, the future of our democracy. Some initiatives have emerged in France: citizen questions to the government, legislative consultations, a collaborative space in the Parliament, more than 80 local participatory budgets and dozens of participatory experiments. We need to scale up many local initiatives and incorporate impactful and continuous participatory mechanisms into the institutional decision-making process.
Implementing Public Policy: Is it possible to escape the ‘Public Policy Futility’ trap?
Blogpost by Matt Andrews:

“Polls suggest that governments across the world face high levels of citizen dissatisfaction, and low levels of citizen trust. The 2017 Edelman Trust Barometer found, for instance, that only 43% of those surveyed trust Canada’s government. Only 15% of those surveyed trust government in South Africa, and levels are low in other countries too—including Brazil (at 24%), South Korea (28%), the United Kingdom (36%), Australia, Japan, and Malaysia (37%), Germany (38%), Russia (45%), and the United States (47%). Similar surveys find trust in government averaging only 40-45% across member countries of the Organisation for Economic Co-operation and Development (OECD), and suggest that as few as 31% of Nigerians and 32% of Liberians trust government.
There are many reasons why trust in government is deficient in so many countries, and these reasons differ from place to place. One common factor across many contexts, however, is a lack of confidence that governments can or will address key policy challenges faced by citizens.
Studies show that this confidence deficiency stems from citizen observations or experiences with past public policy failures, which promote jaundiced views of public officials’ capabilities to deliver. Put simply, citizens lose faith in government when they observe government failing to deliver on policy promises, or to ‘get things done’. Incidentally, studies show that public officials also often lose faith in their own capabilities (and those of their organizations) when they observe, experience or participate in repeated policy implementation failures. Put simply, again, these public officials lose confidence in themselves when they repeatedly fail to ‘get things done’.
I call this the ‘public policy futility’ trap—where past public policy failure leads to a lack of confidence in the potential of future policy success, which feeds actual public policy failure, which generates more questions of confidence, in a vicious self-fulfilling prophecy. I believe that many governments—and public policy practitioners working within governments—are caught in this trap, and just don’t believe that they can muster the kind of public policy responses needed by their citizens.
Along with my colleagues at the Building State Capability (BSC) program, I believe that many policy communities are caught in this trap, to some degree or another. Policymakers in these communities keep coming up with ideas, and political leaders keep making policy promises, but no one really believes the ideas will solve the problems that need solving or produce the outcomes and impacts that citizens need. Policy promises under such circumstances center on doing what policymakers are confident they can actually implement: like producing research and position papers and plans, or allocating inputs toward the problem (in a budget, for instance), or sponsoring visible activities (holding meetings or engaging high profile ‘experts’ for advice), or producing technical outputs (like new organizations, or laws). But they hold back from promising real solutions to real problems, as they know they cannot really implement them (given past political opposition, perhaps, or the experience of seemingly intractable coordination challenges, or cultural pushback, and more)….(More)”.
The Rise of Knowledge Economics
Cesar Hidalgo at Scientific American: “Nearly 30 years ago, Paul Romer published a paper exploring the economic value of knowledge. In that paper, he argued that, unlike the classical factors of production (capital and labor), knowledge was a “non-rival good.” This meant that it could be shared infinitely, and thus, it was the only thing that could grow in per-capita terms.
Romer’s work was recently recognized with the Nobel Prize, even though it was just the beginning of a longer story. Knowledge could be infinitely shared, but did that mean it could go everywhere? Soon after Romer’s seminal paper, Adam Jaffe, Manuel Trajtenberg and Rebecca Henderson published a paper on the geographic diffusion of knowledge. Using a statistical technique called matching, they identified a “twin” for each patent (that is, a patent filed at the same time and making similar technological claims).
Then, they compared the citations received by each patent and its twin. Compared to their twins, patents received almost four times more citations from other patents originating in the same city than from those originating elsewhere. Romer was right that knowledge could be infinitely shared, but knowledge also had difficulty travelling: it diffused far more readily within cities than between them.
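The matched-pair design described above can be sketched in a few lines of code. This is only an illustration, not the authors’ actual method: the `Patent` record, the matching rule (same filing year and technology class, different city), and the citation-share measure are all simplifying assumptions made here.

```python
from dataclasses import dataclass

@dataclass
class Patent:
    pid: str
    year: int          # filing year
    tech_class: str    # technology classification code
    city: str          # city of the filing inventor

def find_twin(patent, candidates):
    """Pick a control 'twin': a different patent filed in the same year
    and technology class, but originating in a different city
    (a hypothetical, simplified matching rule)."""
    for c in candidates:
        if (c.pid != patent.pid and c.year == patent.year
                and c.tech_class == patent.tech_class
                and c.city != patent.city):
            return c
    return None

def local_citation_share(patent, citations):
    """Fraction of citing patents that come from the same city as the
    cited patent. `citations` maps patent id -> list of citing Patents."""
    citing = citations.get(patent.pid, [])
    if not citing:
        return 0.0
    local = sum(1 for c in citing if c.city == patent.city)
    return local / len(citing)
```

Comparing `local_citation_share` between each patent and its twin is the core of the geographic-diffusion test: if knowledge flowed frictionlessly, the shares would be indistinguishable.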
What will the study of knowledge bring us next? Will we get to a point at which we will measure Gross Domestic Knowledge as accurately as we measure Gross Domestic Product? Will we learn how to engineer knowledge diffusion? Will knowledge continue to concentrate in cities? Or will it finally break the shackles of society and spread to every corner of the world? The only thing we know for sure is that the study of knowledge is an exciting journey. The lowest hanging fruit may have already been picked, but the tree is still filled with fruits and flavors. Let’s climb it and explore…
The global race is on to build ‘City Brains’
Prediction by Geoff Mulgan, Eva Grobbink and Vincent Straub: “The USSR’s launch of the Sputnik 1 satellite in 1957 was a major psychological blow to the United States. The US had believed it was technologically far ahead of its rival.
In 2019, China’s success in smart cities could prompt a similar “Sputnik Moment” for the rest of the world. It may not be as dramatic as that of 1957. But unlike beeping satellites and Moon landings, it could be coming to a town near you.
The concept of a “smart city” has been around for several decades, often associated with hype, grandiose failures, and an overemphasis on hardware rather than people (Nesta has previously written on how we can rethink smart cities and ensure digital innovation realises the potential of technology and people). But various technologies are now coming of age which bring the vision of a smart city closer to fruition. China is at the forefront, investing heavily in sensors and infrastructures, and its ET City Brain project shows just how far the country’s thinking has progressed.
First launched in September 2016, ET City Brain is a collaboration between Chinese technology giant Alibaba and several cities. It was first trialled in Hangzhou, the hometown of Alibaba’s executive chairman, Jack Ma, but has since expanded to other Chinese cities. Earlier this year, Kuala Lumpur became the first city outside of China to import the ET City Brain model.
The ET City Brain system gathers large amounts of data (including logs, videos, and data streams) from sensors. These are then processed by algorithms in supercomputers and fed back into control centres around the city for administrators to act on—in some cases, automation means the system works without any human intervention at all.
So far, the project has been used to monitor congestion in Hangzhou, improve the response of emergency services in Guangzhou, and detect traffic accidents in Suzhou. In Hangzhou, Alibaba was given control of 104 traffic light junctions in the city’s Xiaoshan district and tasked with managing traffic flows. By combining mass video surveillance with live data from public transportation systems, ET City Brain was able to autonomously change traffic lights so that emergency vehicles could travel to accident scenes without interruption. As a result, arrival times for ambulances improved by 49 percent….(More)”.
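The emergency-vehicle priority loop described above can be caricatured as a tiny planning function. This is purely a sketch: ET City Brain’s real interfaces are not public, and the junction identifiers, data shapes, and “hold green” rule here are invented for illustration.

```python
def plan_green_wave(route, junctions):
    """Return a new signal plan that holds every junction on an
    emergency vehicle's route at green.

    `route`     -- ordered list of junction ids the vehicle will pass
    `junctions` -- dict mapping junction id -> current signal phase
    (both entirely hypothetical stand-ins for the real system's data).
    """
    plan = dict(junctions)          # never mutate the live state
    for j in route:
        if j in plan:
            plan[j] = "green"       # hold green for the approaching vehicle
    return plan
```

In a real deployment the interesting part is upstream of this function: fusing video surveillance and transit feeds to detect the vehicle and predict its route. The planning step itself, as the excerpt notes, can then run without human intervention.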
Innovations In The Fight Against Corruption In Latin America
Blog Post by Beth Noveck: “…The Inter-American Development Bank (IADB) has published an important, practical and prescriptive report with recommendations for every sector of society from government to individuals on innovative and effective approaches to combatting corruption. While focused on Latin America, the report’s proposals, especially those on the application of new technology in the fight against corruption, are relevant around the world….

The recommendations about the use of new technologies, including big data, blockchain and collective intelligence, are drawn from an effort undertaken last year by the Governance Lab at New York University’s Tandon School of Engineering to crowdsource such solutions and advice on how to implement them from a hundred global experts. (See the Smarter Crowdsourcing against Corruption report here.)…
Big data, when published as open data (that is, in a machine-readable format that computers can analyze and that can be re-used without legal or technical restriction) is another tool in the fight against corruption. With big, open, machine-readable data, those outside of government can pinpoint and measure irregularities in government contracting, as Instituto Observ is doing in Brazil.
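A toy example of the kind of check machine-readable procurement data makes possible: flag line items priced far above the median for comparable purchases. The field names (`item`, `unit_price`) and the 2x threshold are invented here for illustration; real red-flag analytics are considerably more sophisticated.

```python
from statistics import median

def flag_overpriced(contracts, threshold=2.0):
    """Return contract line items whose unit price exceeds `threshold`
    times the median unit price for the same item description.

    `contracts` is a list of dicts with 'item' and 'unit_price' keys,
    a stand-in for machine-readable open procurement records."""
    by_item = {}
    for c in contracts:
        by_item.setdefault(c["item"], []).append(c["unit_price"])
    medians = {item: median(prices) for item, prices in by_item.items()}
    return [c for c in contracts
            if c["unit_price"] > threshold * medians[c["item"]]]
```

The point of open data is that anyone, not just an inspector general, can run this kind of comparison across an entire country’s contracts.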
Opening up judicial data, such as information about case processing times, judges’ and prosecutors’ salaries, and selection processes (CVs, professional and academic backgrounds, and written and oral exam scores), provides activists and reformers with the tools to fight judicial corruption. The Civil Association for Equality and Justice (ACIJ) (a non-profit advocacy group) in Argentina uses such open justice data in its Concursos Transparentes (Transparent Contests) to fight judicial corruption. Jusbrasil is a private open justice company also using open data to reform the courts in Brazil….(More)”
Long Live the Human Network Effect
Julia Hobsbawm at Strategy + Business: “Picture the scene. The eyes of the world are on the Tham Luang cave system in Thailand, near the border with Myanmar. Trapped on a rock ledge deep inside is the Wild Boars soccer team of 12 boys and their coach, who had ventured into the caves about two weeks earlier. It is monsoon season. Water is rising and oxygen levels are falling. Not all of the boys can even swim. Time is running out.
Elon Musk proposes building a “kid-sized submarine” to assist the rescue effort. Musk’s solution is politely declined by Thai authorities as “not practical.” In fact, by the time Musk’s sub arrives, most of the boys are already out, alive. One of the most audacious, moving, complex, and successful rescue operations in history relied not on a single technology or hero but on the collaboration of many people, working together in a spontaneous network.
This web of connections came together organically and quickly, unassisted by algorithms, in a unique collaboration led by humans. It was a stunning example of what physicist Albert-László Barabási calls “scale-free networks”: networks that reproduce exponentially by their very nature. The exact same network effects that can be lethal in spreading a virus can be productive — beautiful, even — in creating a web of diverse human skills quickly. Networks, as Barabási puts it, “are everywhere. You just have to look for them.”…
Networks that come together like this and use technology, community, and communications in a timely manner are an example of what the U.N. calls its “leave no one behind” strategy for achieving sustainable development goals. I consider them an example of social health in action: the kinds of collaborations that help us live full and productive lives. And in business, there is an exciting opportunity to harness social health and the power of networks to help solve problems.
This kind of social health network, perhaps unsurprisingly, is very visible in innovations in the healthcare sector. A digital health community called The Mighty, for example, is a forum to find information about rare illnesses and connect people facing similar challenges, so that they might learn from the experiences of others. It now has 90 million engagements on its website per month and a new member joins every 20 seconds….(More)”.
We Need an FDA For Algorithms
Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal: “…Why do we need an FDA for algorithms?
It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.
Why is the regulation of medicine an appropriate comparison?
If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look at code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.
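One concrete check a regulator might run when asking whether an algorithm “is doing something fairly” is a demographic parity audit: compare approval rates across groups. This is a sketch under strong assumptions (a binary decision, labelled group membership, parity as the chosen fairness notion, which is only one of several competing definitions), and the 0.1 audit threshold is arbitrary.

```python
def demographic_parity_gap(decisions):
    """Largest absolute difference in approval rates between groups.
    `decisions` is a list of (group, approved) pairs."""
    rates = {}
    for group, approved in decisions:
        rates.setdefault(group, []).append(1 if approved else 0)
    per_group = [sum(v) / len(v) for v in rates.values()]
    return max(per_group) - min(per_group)

def passes_audit(decisions, max_gap=0.1):
    """A hypothetical regulatory test: the gap in approval rates must
    stay below `max_gap` (the threshold here is illustrative only)."""
    return demographic_parity_gap(decisions) <= max_gap
```

The simplicity is the point: a statistical audit of outcomes does not require reading the model’s code at all, which is one reason an external body could plausibly perform it.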
How did you come to write a book about algorithms?
Back in 2011 in London, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how could the police have done better. I went to go and give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.
The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence
Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness were restricted to engineers, it would be one thing. But given our contemporary exaltation and deference to technologists, it has limited the entire imagination of ethics, law, and the media as well.
There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.
Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.
When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.
Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.
What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.
The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.
In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.
Prototyping for policy
Camilla Buchanan at Policy Lab Blog: “…Prototyping is common in the product and industrial design process, and it has also extended to less tangible design sub-specialisms like service design. Prototypes are low-fidelity mock-ups of an imagined idea or solution, and they allow for testing before implementation. A product can be tested in cardboard form; a website can be tested through a clickable mock-up.
Policy is a hazier concept: it implies a message or statement of intent that sets a direction for work. Before a policy statement is made there will be some form of strategic conversation. In governments this usually takes place at the political level amongst ministers or within political parties, and there is little scope for outsiders to enter these spaces. Policies set by elected officials tend to be high-level statements – as short as a line or two in a manifesto – expressed through speeches or other policy documents like White Papers.
A policy statement therefore expresses a goal and it sets in motion realisations of that goal through laws, programmes or other activities. A short policy statement can determine major programmes of government work for many years. Policy programmes have their own problem spaces to define and there is much to do in order to translate a policy goal into practical activities. Whether consciously or not, policy programmes touch the lives of millions of people and the unintended consequences or conflicting results from the enactment of poor policies can be extremely harmful. The potential benefits of testing policy goals before they are put in place are therefore huge.
The idea of design interacting directly with policymaking in this way is still relatively new.
It is still early days for articulating exactly how and why the “physical making” aspect of design is so important in government contexts, but almost all designers working in this way will emphasise it. An obvious benefit of building something real is that operational errors become more evident. And because prototypes make ideas manifest, they can help to build consensus or reveal where it is absent. They are also a way of asking questions, and the presence of a prototype often prompts discussion of broader issues.
As an example, the picture below shows staff from the Service Design team at the consultancy OpenRoad in Vancouver considering advanced prototypes of changes to transit fare policy for the city for their client TransLink….(More).
