Future Government 2030+: Policy Implications and Recommendations


European Commission: “This report provides follow-up insights into the policy implications and offers a set of 54 recommendations, organised in nine policy areas. These stem from a process based on interviews with 20 stakeholders. The recommendations include a series of policy options and actions that could be implemented at different levels of governance systems.

The Future of Government project started in autumn 2017 as a research project of the Joint Research Centre in collaboration with the Directorate General for Communications Networks, Content and Technology. It explored how we can rethink the social contract according to the needs of today’s society, what elements need to be adjusted to deliver value and good to people and society, what values we need to improve society, and how we can obtain a new sense of responsibility.

Following “The Future of Government 2030+: A Citizen-Centric Perspective on New Government Models” report, published on 6 March, the present follow-up report provides insights into the policy implications and offers a set of 54 recommendations, organised in nine policy areas.

The recommendations of this report include a series of policy options and actions that could be implemented at different levels of governance systems. Most importantly, they include essential elements to help us build our future actions on digital government and address foundational governance challenges of the modern online world (e.g. regulation of AI) in the following nine axes:

  1. Democracy and power relations: creating clear strategies towards full adoption of open government
  2. Participatory culture and deliberation: skilled and equipped public administration and allocation of resources to include citizens in decision-making
  3. Political trust: new participatory governance mechanisms to raise citizens’ trust
  4. Regulation: regulation on technology should follow discussion on values with full observance of fundamental rights
  5. Public-Private relationship: better synergies between public and private sectors, collaboration with young social entrepreneurs to face forthcoming challenges
  6. Public services: modular and adaptable public services, support Member States in ensuring equal access to technology
  7. Education and literacy: increase digital data literacy, critical thinking and education reforms in accordance with the needs of job markets
  8. Big data and artificial intelligence: ensure ethical use of technology, focus on technologies’ public value, explore ways to use technology for more efficient policy-making
  9. Redesign and new skills for public administration: constant re-evaluation of public servants’ skills, foresight development, modernisation of recruitment processes, more agile forms of working.

As these recommendations show, collaboration is needed across different policy fields, and they should be acted upon as an integrated package. The majority of recommendations are intended for EU policymakers, but their implementation could be more effective if carried out through lower levels of governance, e.g. local, regional or even national. (Read full text)…(More)”.

Digital dystopia: how algorithms punish the poor


Ed Pilkington at The Guardian: “All around the world, from small-town Illinois in the US to Rochdale in England, from Perth, Australia, to Dumka in northern India, a revolution is under way in how governments treat the poor.

You can’t see it happening, and may have heard nothing about it. It’s being planned by engineers and coders behind closed doors, in secure government locations far from public view.

Only mathematicians and computer scientists fully understand the sea change, powered as it is by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics. But if you are one of the millions of vulnerable people at the receiving end of the radical reshaping of welfare benefits, you know it is real and that its consequences can be serious – even deadly.

The Guardian has spent the past three months investigating how billions are being poured into AI innovations that are explosively recasting how low-income people interact with the state. Together, our reporters in the US, Britain, India and Australia have explored what amounts to the birth of the digital welfare state.

Their dispatches reveal how unemployment benefits, child support, housing and food subsidies and much more are being scrambled online. Vast sums are being spent by governments across the industrialized and developing worlds on automating poverty and in the process, turning the needs of vulnerable citizens into numbers, replacing the judgment of human caseworkers with the cold, bloodless decision-making of machines.

At its most forbidding, Guardian reporters paint a picture of a 21st-century Dickensian dystopia that is taking shape with breakneck speed…(More)”.

Timing Technology


Blog by Gwern Branwen: “Technological forecasts are often surprisingly prescient in predicting that something is possible & desirable, and what they predict eventually happens; but they are far less successful at predicting the timing, and almost always fail, with the success (and riches) going to another.

Why is their knowledge so useless? The right moment cannot be known exactly in advance, so attempts to forecast will typically be off by years or worse. For many claims, there is no way to invest in an idea except by going all in and launching a company, resulting in extreme variance in outcomes, even when the idea is good and the forecasts correct about the (eventual) outcome.

Progress can happen and can be foreseen long before, but the details and exact timing due to bottlenecks are too difficult to get right. Launching too early means failure, but being conservative & launching later is just as bad because regardless of forecasting, a good idea will draw overly-optimistic researchers or entrepreneurs to it like moths to a flame: all get immolated but the one with the dumb luck to kiss the flame at the perfect instant, who then wins everything, at which point everyone can see that the optimal time is past. All major success stories overshadow their long list of predecessors who did the same thing, but got unlucky. So, ideas can be divided into the overly-optimistic & likely doomed, or the fait accompli. On an individual level, ideas are worthless because so many others have them too—‘multiple invention’ is the rule, and not the exception.

This overall problem falls under the reinforcement learning paradigm, and successful approaches are analogous to Thompson sampling/posterior sampling: even an informed strategy can’t reliably beat random exploration which gradually shifts towards successful areas while continuing to take occasional long shots. Since people tend to systematically over-exploit, how is this implemented? Apparently by individuals acting suboptimally on the personal level, but optimally on the societal level by serving as random exploration.
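The Thompson sampling strategy the essay invokes is easy to make concrete. A minimal sketch (not from the essay; arm payoffs, names, and parameters are illustrative) for Bernoulli-reward “ideas”: sample a plausible success rate for each arm from its Beta posterior, play the arm whose sample is highest, and update. Uncertain arms occasionally draw high samples, so the strategy keeps taking the “occasional long shots” described above while drifting toward what works.

```python
import random

def thompson_sample(successes, failures, rounds, pull):
    """Bernoulli Thompson sampling over len(successes) arms.

    successes/failures: per-arm Beta posterior counts (lists of ints).
    pull(arm) -> 0 or 1, the observed reward for playing that arm.
    """
    for _ in range(rounds):
        # Draw one plausible success rate per arm from its Beta posterior
        # (Beta(s+1, f+1) with a uniform prior), then play the argmax draw.
        draws = [random.betavariate(s + 1, f + 1)
                 for s, f in zip(successes, failures)]
        arm = draws.index(max(draws))
        if pull(arm):
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Example: three "ideas" with hidden success rates; pulls concentrate on the best.
random.seed(0)
rates = [0.2, 0.5, 0.8]
s, f = [0, 0, 0], [0, 0, 0]
thompson_sample(s, f, 500, lambda a: 1 if random.random() < rates[a] else 0)
```

The exploration never fully stops: even the worst arm retains some probability of being sampled highest, which is the formal analogue of doomed founders continuing to probe an idea before its ‘ripe time’.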

A major benefit of R&D, then, is in ideas laying fallow until the ‘ripe time’ when they can be immediately exploited in previously-unpredictable ways; applied R&D or VC strategies should focus on maintaining a diversity of investments, while continuing to flexibly revisit previous failures which forecasts indicate may have reached ‘ripe time’. This balances overall exploitation & exploration to progress as fast as possible, showing the usefulness of technological forecasting on a global level despite its uselessness to individuals….(More)”.

Robotic Bureaucracy: Administrative Burden and Red Tape in University Research


Essay by Barry Bozeman and Jan Youtie: “…examines university research administration and the use of software systems that automate university research grants and contract administration, including the automatic sending of emails for reporting and compliance purposes. These systems are described as “robotic bureaucracy.” The rise of regulations and their contribution to administrative burden on university research have led university administrators to increasingly rely on robotic bureaucracy to handle compliance. This article draws on the administrative burden, behavioral public administration, and electronic communications and management literatures, which are increasingly focused on the psychological and cognitive bases of behavior. These literatures suggest that the assumptions behind robotic bureaucracy ignore the extent to which these systems shift the burden of compliance from administrators to researchers….(More)”.

Three Big Things: The Most Important Forces Shaping the World


Essay by Morgan Housel: “An irony of studying history is that we often know exactly how a story ends, but have no idea where it began…

Nothing is as influential as World War II has been. But there are a few other Big Things worth paying attention to, because they’re the root influencer of so many other topics.

The three big ones that stick out are demographics, inequality, and access to information.

There are hundreds of forces shaping the world not mentioned here. But I’d argue that many, even most, are derivatives of those three.

Each of these Big Things will have a profound impact on the coming decades because they’re both transformational and ubiquitous. They impact nearly everyone, albeit in different ways. With that comes the reality that we don’t know exactly how their influence will unfold. No one in 1945 knew exactly how World War II would go on to shape the world, only that it would in extreme ways. But we can guess some of the likeliest changes.

3. Access to information closes gaps that used to create a social shield of ignorance.

Carole Cole disappeared in 1970 after running away from a juvenile detention center in Texas. She was 17.

A year later the body of an unidentified murder victim was found in Louisiana. It was Carole, but Louisiana police had no idea; they couldn’t identify her. Carole’s disappearance went cold, as did the case of the unidentified body.

Thirty-four years later Carole’s sister posted messages on Craigslist asking for clues into her sister’s disappearance. At nearly the same time, a sheriff’s department in Louisiana made a Facebook page asking for help identifying the Jane Doe body found 34 years before.

Six days later, someone connected the dots between the two posts.

What stumped detectives for almost four decades was solved by Facebook and Craigslist in less than a week.

This kind of stuff didn’t happen even 10 years ago. And we probably haven’t awoken to its full potential – good and bad.

The greatest innovation of the last generation has been the destruction of information barriers that used to keep strangers isolated from one another…(More)”

Why Trust Science?


Book by Naomi Oreskes: “Do doctors really know what they are talking about when they tell us vaccines are safe? Should we take climate experts at their word when they warn us about the perils of global warming? Why should we trust science when our own politicians don’t? In this landmark book, Naomi Oreskes offers a bold and compelling defense of science, revealing why the social character of scientific knowledge is its greatest strength—and the greatest reason we can trust it.

Tracing the history and philosophy of science from the late nineteenth century to today, Oreskes explains that, contrary to popular belief, there is no single scientific method. Rather, the trustworthiness of scientific claims derives from the social process by which they are rigorously vetted. This process is not perfect—nothing ever is when humans are involved—but she draws vital lessons from cases where scientists got it wrong. Oreskes shows how consensus is a crucial indicator of when a scientific matter has been settled, and when the knowledge produced is likely to be trustworthy.

Based on the Tanner Lectures on Human Values at Princeton University, this timely and provocative book features critical responses by climate experts Ottmar Edenhofer and Martin Kowarsch, political scientist Jon Krosnick, philosopher of science Marc Lange, and science historian Susan Lindee, as well as a foreword by political theorist Stephen Macedo….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernism. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever growing threat to democracy….(More)”.

Democratic Transparency in the Platform Society


Chapter by Robert Gorwa and Timothy Garton Ash: “Following a host of major scandals, transparency has emerged in recent years as one of the leading accountability mechanisms through which the companies operating global platforms for user-generated content have attempted to regain the trust of the public, politicians, and regulatory authorities. Ranging from Facebook’s efforts to partner with academics and create a reputable mechanism for third-party data access and independent research to the expanded advertising disclosure tools being built for elections around the world, transparency is playing a major role in current governance debates around free expression, social media, and democracy.

This article thus seeks to (a) contextualize the recent implementation of transparency as enacted by platform companies with an overview of the ample relevant literature on digital transparency in both theory and practice; (b) consider the potential positive governance impacts of transparency as a form of accountability in the current political moment; and (c) reflect upon the potential shortfalls of transparency that should be considered by legislators, academics, and funding bodies weighing the relative benefits of policy or research dealing with transparency in this area…(More)”.

Nudging the Nudger: Toward a Choice Architecture for Regulators


Working Paper by Susan E. Dudley and Zhoudan Xie: “Behavioral research has shown that individuals do not always behave in ways that match textbook definitions of rationality. Recognizing that “bounded rationality” also occurs in the regulatory process and building on public choice insights that focus on how institutional incentives affect behavior, this article explores the interaction between the institutions in which regulators operate and their cognitive biases. It attempts to understand the extent to which the “choice architecture” regulators face reinforces or counteracts predictable cognitive biases. Just as behavioral insights are increasingly used to design choice architecture to frame individual decisions in ways that encourage welfare-enhancing choices, consciously designing the institutions that influence regulators’ policy decisions with behavioral insights in mind could lead to more public-welfare-enhancing policies. The article concludes with some modest ideas for improving regulators’ choice architecture and suggestions for further research….(More)”.

Lessons Learned for New Office of Innovation


Blog by Catherine Tkachyk: “I have worked in a government innovation office for the last eight years in four different roles and two different communities.  In that time, I’ve had numerous conversations on what works and doesn’t work for innovation in local government.  Here’s what I’ve learned: starting an innovation office in government is hard.  That is not a complaint; I love the work I do, but it comes with its own challenges.  When you think about many of the services government provides: Police; Fire; Health and Human Services; Information Technology; Human Resources; Finance; etc., very few people question whether government should provide those services.  They may question how they are provided, who is providing them, or how much they cost, but they don’t question the service.  That’s not true for innovation offices.  One of the first questions I get from people when they hear what I do is, “Why does government need an Office of Innovation?”  My first answer is, “Do you like how government works?  If not, then maybe there should be a group of people focused on fixing it.” 

Over my career, I have come across a few lessons on how to start up an innovation office to give you the best chance for success. Some of these lessons come from listening to others, but many (probably too many) come from my own mistakes….(More)”.