Public Brainpower: Civil Society and Natural Resource Management


Book edited by Indra Øverland: “…examines how civil society, public debate and freedom of speech affect natural resource governance. Drawing on the theories of Robert Dahl, Jürgen Habermas and Robert Putnam, the book introduces the concept of ‘public brainpower’, proposing that good institutions require: fertile public debate involving many and varied contributors to provide a broad base for conceiving new institutions; checks and balances on existing institutions; and the continuous dynamic evolution of institutions as the needs of society change.

The book explores the strength of these ideas through case studies of 18 oil and gas-producing countries: Algeria, Angola, Azerbaijan, Canada, Colombia, Egypt, Iraq, Kazakhstan, Libya, Netherlands, Nigeria, Norway, Qatar, Russia, Saudi Arabia, UAE, UK and Venezuela. The concluding chapter includes 10 tenets on how states can maximize their public brainpower, and a ranking of 33 resource-rich countries according to the degree to which they succeed in doing so.

The Introduction and the chapters ‘Norway: Public Debate and the Management of Petroleum Resources and Revenues’, ‘Kazakhstan: Civil Society and Natural-Resource Policy in Kazakhstan’, and ‘Russia: Public Debate and the Petroleum Sector’ of this book are available open access under a CC BY 4.0 license at link.springer.com….(More)”.

Growing government innovation labs: an insider’s guide


Report by UNDP and Futurgov: “Effective and inspirational labs exist in many highly developed countries. In Western Europe, MindLab (Denmark) and The Behavioural Insights Team (UK) push their governments to re-imagine public services. In Asia, the Innovation Bureau in Seoul, South Korea, co-designs better services with citizens.

However, this guide is aimed at those working in the development context. The authors believe their collective experience of running labs in Eurasia, Asia and the Middle East is directly transferable to other regions that face similar challenges, for example, moving from poverty to inequality, or from a recent history of democratisation towards more open government.

This report does not offer a “how-to” of innovation techniques — there are plenty of guides out there. Instead, we give the real story of how government innovation labs develop in regions like ours: organic and people-driven, often operating under the radar until it is safe to emerge. We share a truthful examination of the twists and turns of seeding, starting up and scaling labs, covering the challenges we faced and our failures as much as our successes. …(More)”.

Better Data for Better Policy: Accessing New Data Sources for Statistics Through Data Collaboratives


Medium Blog by Stefaan Verhulst: “We live in an increasingly quantified world, one where data is driving key business decisions. Data is claimed to be the new competitive advantage. Yet, paradoxically, even as our reliance on data increases and the call for agile, data-driven policy making becomes more pronounced, many Statistical Offices are confronted with shrinking budgets and an increased demand to adjust their practices to a data age. If Statistical Offices fail to find new ways to deliver “evidence of tomorrow” by leveraging new data sources, public policy may be formed without access to the full range of available and relevant intelligence that most business leaders now have. At worst, a thinning evidence base and a lack of rigorous data foundations could lead to errors and more “fake news,” with possibly harmful public policy implications.

While my talk was focused on the key ways data can inform and ultimately transform the full policy cycle (see full presentation here), a key premise I examined was the need to access, utilize and find insight in the vast reams of data and data expertise that exist in private hands through the creation of new kinds of public and private partnerships or “data collaboratives” to establish more agile and data-driven policy making.

Applied to statistics, such approaches have already shown promise in a number of settings and countries. Eurostat, for instance, has experimented together with Statistics Belgium with using call detail records provided by Proximus to document population density. Statistics Netherlands (CBS) recently launched a Center for Big Data Statistics (CBDS) in partnership with companies like Dell-EMC and Microsoft. Other National Statistics Offices (NSOs) are considering using scanner data for monitoring consumer prices (Austria), leveraging smart-meter data (Canada), or using telecom data to complement transportation statistics (Belgium). We are now living, undeniably, in an era of data. Much of this data is held by private corporations. The key task is thus to find a way of utilizing this data for the greater public good.
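
As a purely illustrative aside, the sketch below shows the kind of aggregation a statistical office might run on anonymized call detail records to derive a relative population-density indicator. The file names, column names and the night-time heuristic are assumptions made for this sketch, not the actual Proximus data or Eurostat methodology.

```python
# Illustrative only: a toy aggregation of anonymized call detail records (CDRs)
# into a relative population-density indicator per cell tower. File names,
# column names and the night-time heuristic are assumptions for illustration.
import pandas as pd

# One row per call event: which tower handled it, which (pseudonymous)
# subscriber made it, and when.
cdr = pd.read_csv("cdr_sample.csv", parse_dates=["timestamp"])

# Keep night-time events, when most subscribers are at their home location.
hours = cdr["timestamp"].dt.hour
night = cdr[(hours >= 22) | (hours < 6)]

# Count distinct subscribers per tower as a proxy for resident population.
per_tower = (
    night.groupby("tower_id")["subscriber_id"]
    .nunique()
    .rename("night_subscribers")
    .reset_index()
)

# Join tower coverage areas and convert counts to a density measure.
towers = pd.read_csv("tower_locations.csv")  # tower_id, lat, lon, cell_area_km2
density = per_tower.merge(towers, on="tower_id")
density["subscribers_per_km2"] = (
    density["night_subscribers"] / density["cell_area_km2"]
)

print(density.sort_values("subscribers_per_km2", ascending=False).head())
```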

Value Proposition — and Challenges

There are several reasons to believe that public policy making and official statistics could indeed benefit from access to privately collected and held data. Among the value propositions:

  • Using private data can increase the scope and breadth of available evidence, and thus the insights it offers policymakers;
  • Using private data can increase the quality and credibility of existing data sets (for instance, by complementing or validating them);
  • Private data can increase the timeliness and thus relevance of often-outdated information held by statistical agencies (social media streams, for example, can provide real-time insights into public behavior); and
  • Private data can lower costs and increase other efficiencies (for example, through more sophisticated analytical methods) for statistical organizations….(More)”.

“Nudge units” – where they came from and what they can do


Zeina Afif at the World Bank: “You could say that the first one began in 2009, when the US government recruited Cass Sunstein to head the Office of Information and Regulatory Affairs (OIRA) to streamline regulations. In 2010, the UK established the first Behavioural Insights Team (BIT) on a trial basis, under the Cabinet Office. Other countries followed suit, including the US, Australia, Canada, the Netherlands, and Germany. Shortly after, countries such as India, Indonesia, Peru, Singapore, and many others started exploring the application of behavioral insights to their policies and programs. International institutions such as the World Bank, UN agencies, OECD, and EU have also established behavioral insights units to support their programs. And just this month, the Sustainable Energy Authority of Ireland launched its own Behavioral Economics Unit.

The Future
As eMBeD, the behavioral science unit at the World Bank, continues to support governments across the globe in the implementation of their units, here are some of the questions we are most often asked.

What are the models for a Behavioral Insights Unit in Government?
As of today, over a dozen countries have integrated behavioral insights into their operations. While there is no single model to prescribe, setups range from centralized to decentralized to networked….

In some countries, the units were first established at the ministerial level. One example is MineduLab in Peru, which was set up with eMBeD’s help. The unit works as an innovation lab, testing rigorous and leading research in education and behavioral science to address issues such as teacher absenteeism and motivation, parents’ engagement, and student performance….

What should be the structure of the team?
Most units start with two to four full-time staff. Profiles include policy advisors, social psychologists, experimental economists, and behavioral scientists. Experience in the public sector is essential for navigating government and building support. It is also important to have staff familiar with designing and running experiments. Other important skills include psychology, social psychology, anthropology, design thinking, and marketing. While these skills are not always readily available in the public sector, it is worth noting that all behavioral insights units have partnered with academics and experts in the field.

The U.S. team, originally called the Social and Behavioral Sciences Team, is staffed mostly by seconded academic faculty, researchers, and other departmental staff. MineduLab in Peru partnered with leading experts, including the Abdul Latif Jameel Poverty Action Lab (J-PAL), Fortalecimiento de la Gestión de la Educación (FORGE), Innovations for Poverty Action (IPA), and the World Bank….(More)”

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
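
To make that last step concrete, here is a minimal, purely illustrative sketch of learning a preference model from crowd-labelled dilemma choices and using it to predict the preferred outcome in an unseen scenario. The feature encoding, the tiny dataset and the choice of logistic regression are assumptions made for this sketch, not the researchers’ actual method or data.

```python
# Toy illustration, not the researchers' actual method: fit a simple preference
# model on crowd-labelled dilemma choices, then predict which outcome people
# would prefer in an unseen scenario. Features and data are invented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes a dilemma as the *difference* in simple features between
# outcome A and outcome B: (children spared, adults spared, passengers spared).
# Label 1 means the crowd chose outcome A, 0 means it chose outcome B.
X = np.array([
    [ 2, -3,  0],
    [-1,  1,  1],
    [ 0,  2, -1],
    [ 3, -1,  0],
    [-2,  0,  1],
    [ 1,  1, -2],
])
y = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Predict the crowd's likely preference for a previously unseen trade-off:
# outcome A spares one more child but two fewer adults than outcome B.
new_scenario = np.array([[1, -2, 0]])
print(model.predict_proba(new_scenario))  # [P(prefer B), P(prefer A)]
```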

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.

UN Opens New Office to Monitor AI Development and Predict Possible Threats


Interesting Engineering: “The United Nations has created a new office in the Netherlands dedicated to monitoring and researching Artificial Intelligence (AI) technologies. The new office will collect information about the ways in which AI is impacting the world. Researchers will have a particular focus on how AI relates to global security, but will also monitor the effects of AI and automation on job losses.

Irakli Beridze, a UN senior strategic adviser, will head the office. He described the new office, saying, “A number of UN organisations operate projects involving robots and AI, such as the group of experts studying the role of autonomous military robots in the realm of conventional weapons. These are temporary measures. Ours is the first permanent UN office on this subject. We are looking at the risks as well as the advantages.”… He suggests that the speed of AI technology development is of primary concern. He explains, “This can make for instability if society does not adapt quickly enough. One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organisations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this, we want to start concrete projects. We will not be a talking club.”…(More).

Voice or chatter? Making ICTs work for transformative citizen engagement


Research Report Summary by Making All Voices Count: “What are the conditions in democratic governance that make information and communication technology (ICT)-mediated citizen engagement transformative? While substantial scholarship exists on the role of the Internet and digital technologies in triggering moments of political disruption and cascading upheavals, academic interest in the sort of deep change that transforms institutional cultures of democratic governance, occurring in ‘slow time’, has been relatively muted.

This study attempts to fill this gap. It is inspired by the idea of participation in everyday democracy and seeks to explore how ICT-mediated citizen engagement can promote democratic governance and amplify citizen voice.

ICT-mediated citizen engagement is defined by this study as comprising digitally-mediated information outreach, dialogue, consultation, collaboration and decision-making, initiated either by government or by citizens, towards greater government accountability and responsiveness.

The study involved empirical explorations of citizen engagement initiatives in eight sites – two in Asia (India and Philippines), one in Africa (South Africa), three in South America (Brazil, Colombia, Uruguay) and two in Europe (Netherlands and Spain).

This summary of the larger Research Report presents recommendations for how public policies and programmes can promote ICTs for citizen engagement and transformative citizenship. In doing so, it provides an overview of the discussion the authors undertake on three inter-related dimensions, namely:

  • calibrating digitally mediated citizen participation as a measure of political empowerment and equality
  • designing techno-public spaces as bastions of inclusive democracy
  • ensuring that the rule of law upholds democratic principles in digitally mediated governance…(More. Full research report)

Systems Approaches to Public Sector Challenges


New Report by the OECD: “Complexity is a core feature of most policy issues today and in this context traditional analytical tools and problem-solving methods no longer work. This report, produced by the OECD Observatory of Public Sector Innovation, explores how systems approaches can be used in the public sector to solve complex or “wicked” problems. Consisting of three parts, the report discusses the need for systems thinking in the public sector; identifies tactics that can be employed by government agencies to work towards systems change; and provides an in-depth examination of how systems approaches have been applied in practice. Four cases of applied systems approaches are presented and analysed: preventing domestic violence (Iceland), protecting children (the Netherlands), regulating the sharing economy (Canada) and designing a policy framework to conduct experiments in government (Finland). The report highlights the need for a new approach to policy making that accounts for complexity and allows for new responses and more systemic change that deliver greater value, effectiveness and public satisfaction….(More)”.

Gaming for Infrastructure


Nilmini Rubin & Jennifer Hara  at the Stanford Social Innovation Review: “…the American Society of Civil Engineers (ASCE) estimates that the United States needs $4.56 trillion to keep its deteriorating infrastructure current but only has funding to cover less than half of necessary infrastructure spending—leaving the at least country $2.0 trillion short through the next decade. Globally, the picture is bleak as well: World Economic Forum estimates that the infrastructure gap is $1 trillion each year.

What can be done? Some argue that public-private partnerships (PPPs or P3s) are the answer. We agree that they can play an important role—if done well. In a PPP, a private party provides a public asset or service for a government entity, bears significant risk, and is paid on performance. The upside for governments and their citizens is that the private sector can be incentivized to deliver projects on time, within budget, and with reduced construction risk. The private sector can benefit by earning a steady stream of income from a long-term investment from a secure client. From the Grand Parkway Project in Texas to the Queen Alia International Airport in Jordan, PPPs have succeeded domestically and internationally.

The problem is that PPPs can be very hard to design and implement. And since they can involve commitments of millions or even billions of dollars, a PPP failure can be awful. For example, the Berlin Airport is a PPP that is six years behind schedule, and its cost overruns total roughly $3.8 billion to date.

In our experience, it can be useful for would-be partners to practice engaging in a PPP before they dive into a live project. At our organization, Tetra Tech’s Institute for Public-Private Partnerships, for example, we use an online and multiplayer game—the P3 Game—to help make PPPs work.

The game is played with 12 to 16 people who are divided into two teams: a Consortium and a Contracting Authority. In each of four rounds, players mimic the activities they would engage in during the course of a real PPP, and as in real life, they are confronted with unexpected events. The Consortium fails to comply with a routine road inspection: how should the Contracting Authority team respond? The cost of materials skyrockets: how should the Consortium team manage when it has a fixed-price contract?

Players from government ministries, legislatures, construction companies, financial institutions, and other entities get to swap roles and experience a PPP from different vantage points. They think through challenges and solve problems together—practicing, failing, learning, and growing—within the confines of the game and with no real-world cost.

More than 1,000 people have participated to date, including representatives of the US Army Corps of Engineers, the World Bank, and Johns Hopkins University, using a variety of scenarios. PPP team members who work on part of the Schiphol-Amsterdam-Almere Project, a $5.6-billion road project in the Netherlands, played the game using their actual contract document….(More)”.

Journal tries crowdsourcing peer reviews, sees excellent results


Chris Lee at Ars Technica: “Peer review is supposed to act as a sanity check on science. A few learned scientists take a look at your work, and if it withstands their objective and entirely neutral scrutiny, a journal will happily publish your work. As those links indicate, however, there are some issues with peer review as it is currently practiced. Recently, Benjamin List, a researcher and journal editor in Germany, and his graduate assistant, Denis Höfler, have come up with a genius idea for improving matters: something called selected crowd-sourced peer review….

My central point: peer review is burdensome and sometimes barely functional. So how do we improve it? The main way is to experiment with different approaches to the reviewing process, which many journals have tried, albeit with limited success. Post-publication peer review, when scientists look over papers after they’ve been published, is also an option but depends on community engagement.

But if your paper is uninteresting, no one will comment on it after it is published. Pre-publication peer review is the only moment where we can be certain that someone will read the paper.

So, List (an editor for Synlett) and Höfler recruited 100 referees. For their trial, a forum-style commenting system was set up that allowed referees to comment anonymously on submitted papers and on each other’s comments. To provide a comparison, the papers that went through this process also went through the traditional peer review process. The authors and editors compared comments and (subjectively) evaluated the pros and cons. The 100-person crowd of researchers was deemed the more effective of the two.

The editors found that it took a bit more time to read and collate all the comments into a reviewers’ report. But it was still faster, which the authors loved. Typically, it took the crowd just a few days to complete their review, which compares very nicely to the usual four to six weeks of the traditional route (I’ve had papers languish for six months in peer review). And, perhaps most important, the responses were more substantive and useful compared to the typical two-to-four-person review.

So far, List has not published the trial results formally. Despite that, Synlett is moving to the new system for all its papers.

Why does crowdsourcing work?

Here we get back to something more editorial. I’d suggest that there is a physical analog to traditional peer review, called noise. Noise is not just a constant background that must be overcome. Noise is also generated by the very process that creates a signal. The difference is how the amplitude of noise grows compared to the amplitude of signal. For very low-amplitude signals, all you measure is noise, while for very high-intensity signals, the noise is vanishingly small compared to the signal, even though it’s huge compared to the noise of the low-amplitude signal.

Our esteemed peers, I would argue, are somewhat random in their response, but weighted toward objectivity. Using this inappropriate physics model, a review conducted by four reviewers can be expected (on average) to contain two responses that are, basically, noise. By contrast, a review by 100 reviewers may only have 10 responses that are noise. Overall, a substantial improvement. So, adding the responses of a large number of peers together should produce a better picture of a scientific paper’s strengths and weaknesses.
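
The averaging argument is easy to check with a toy simulation. The sketch below assumes a simple model in which each reviewer’s score is the paper’s true quality plus independent noise; the model and its parameters are illustrative assumptions, not anything measured in the Synlett trial.

```python
# Toy simulation of the averaging argument above. The model (true quality plus
# independent Gaussian reviewer noise) is an illustrative assumption only.
import numpy as np

rng = np.random.default_rng(0)
true_quality = 7.0   # hypothetical "true" score of a paper
noise_sd = 2.0       # spread of individual reviewer judgements
trials = 10_000      # number of simulated review panels

for n_reviewers in (4, 100):
    scores = true_quality + noise_sd * rng.standard_normal((trials, n_reviewers))
    panel_means = scores.mean(axis=1)
    error = np.abs(panel_means - true_quality).mean()
    print(f"{n_reviewers:>3} reviewers: mean absolute error {error:.2f}")

# The 100-person crowd's error is roughly 1/5 of the 4-person panel's,
# because the standard error of the mean shrinks as 1/sqrt(n).
```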

Didn’t I just say that reviewers are overloaded? Doesn’t it seem that this will make the problem worse?

Well, no, as it turns out. When this approach was tested (with consent) on papers submitted to Synlett, it was discovered that review times went way down—from weeks to days. And authors reported getting more useful comments from their reviewers….(More)”.