Defining Public Engagement: A Four-Level Approach


Della Rucker’s Chapter 2 for an Online Public Engagement Book: “….public engagement typically means presenting information on a project or draft plan and addressing questions or comments. For planners working on long-range issues, such as a comprehensive plan, typical public engagement actions may include feedback questions, such as “what should this area look like?” or “what is your vision for the future of the neighborhood?” Such questions, while inviting participants to take a more active role in community decision-making than the largely passive viewer/commenter in the first example, still place the resident in a peripheral role: that of an information source, functionally similar to the demographic data and GIS map layers that the professionals use to develop plans.

In a relatively small number of cases, planners and community advocates have found more robust and more direct means of engaging residents in decision-making around the future of their communities. Public engagement specialists, often originating from a community development or academic background, have developed a variety of methods, such as World Cafe and the Fishbowl, that are designed to facilitate more meaningful sharing of information among community residents, often as much with the intent of building connectivity and mutual understanding among residents of different backgrounds as for the purpose of making policy decisions.

Finally, a small but growing number of strategies have begun to emerge that place the work of making community decisions directly in the hands of private residents. Participatory budgeting allocates the decision about how to use a portion of a community’s budget to a citizen-based process, and participants work collaboratively through a process that determines what projects or initiatives will be funded in the coming budget cycle. And in the collection of tactics generally known as tactical urbanism or [other names], residents directly intervene in the physical appearance or function of the community by building and placing street furniture, changing parking spaces or driving lanes to pedestrian use, creating and installing new signs, or making other kinds of physical, typically temporary, changes — sometimes with, and sometimes without, the approval of the local government. The purposes of tactical urbanist interventions are twofold: they physically demonstrate the potential impact that more permanent features would have on the community’s transportation and quality of life, and they give residents a concrete and immediate opportunity to impact their environs.

The direct impacts of either participatory budgeting or tactical urbanism initiatives tend to be limited — the amount of budget available for a participatory budgeting initiative is usually a fraction of the total budget, and the physical area impacted by a tactical urbanism event is generally limited to a few blocks. Anecdotal evidence from both types of activity, however, seems to indicate an increased understanding of community needs and an increased sense of agency, of having the power to influence one’s community’s future, among participants.

Online public engagement methods have the potential to facilitate a wide variety of public engagement, from making detailed project information more readily available to enabling crowdsourced decision-making around budget and policy choices. However, any discussion of online public engagement methods will soon run up against the same basic challenge: when we use that term, what kind of engagement — what kind of participant experience — are we talking about?

We could divide public participation tasks according to one of several existing classification systems, or taxonomies. The two most commonly used in public engagement theory and practice derive from Sherry R. Arnstein’s 1969 academic paper, “A Ladder of Citizen Participation,” and the International Association for Public Participation’s Public Participation Spectrum.

Although these two taxonomies reflect the same basic idea — that one’s options in selecting public engagement activities range along a spectrum from generally less to more active engagement on the part of the public — they divide and label the classifications differently. …From my perspective, both of these frameworks capture the central issue of recognizing more to less intensive public engagement options, but the number of divisions and the sometimes abstract wording appears to have made it difficult for these insights to find widespread use outside of an academic context. Practitioners who need to think through these options tend to become tangled in the fine-grained differentiations, and the terminology can both make these distinctions harder to think about and lead to the mistaken assumption that one is doing higher-level engagement than is actually the case. Among commercial online public engagement platform providers, blog posts claiming that their tool addresses the whole Spectrum appear on a relatively regular basis, even when the tool in question is designed for feedback, not decision-making.

For these reasons, this book will use the following framework of engagement types, which is detailed enough to demarcate what I think are the most crucial differentiations while remaining simple enough to use in routine process planning.

The four engagement types we will talk about are: Telling; Asking; Discussing; Deciding…(More)”

Handbook: How to Catalyze Humanitarian Innovation in Computing Research Institutes


Patrick Meier: “The handbook below provides practical collaboration guidelines for both humanitarian organizations & computing research institutes on how to catalyze humanitarian innovation through successful partnerships. These actionable guidelines are directly applicable now and draw on extensive interviews with leading humanitarian groups and CRIs including the International Committee of the Red Cross (ICRC), United Nations Office for the Coordination of Humanitarian Affairs (OCHA), United Nations Children’s Fund (UNICEF), United Nations High Commissioner for Refugees (UNHCR), UN Global Pulse, Carnegie Mellon University (CMU), International Business Machines (IBM), Microsoft Research, the Data Science for Social Good Program at the University of Chicago and others.

This handbook, which is the first of its kind, also draws directly on years of experience and lessons learned from the Qatar Computing Research Institute’s (QCRI) active collaboration and unique partnerships with multiple international humanitarian organizations. The aim of this blog post is to actively solicit feedback on this first, complete working draft, which is available here as an open and editable Google Doc. …(More)”

How Crowdsourcing Can Help Us Fight ISIS


 at the Huffington Post: “There’s no question that ISIS is gaining ground. …So how else can we fight ISIS? By crowdsourcing data – i.e. asking a relevant group of people for their input via text or the Internet on specific ISIS-related issues. In fact, ISIS has been using crowdsourcing to enhance its operations since last year in two significant ways. Why shouldn’t we?

First, ISIS is using its crowd of supporters in Syria, Iraq and elsewhere to help strategize new policies. Last December, the extremist group leveraged its global crowd via social media to brainstorm ideas on how to kill 26-year-old Jordanian coalition fighter pilot Moaz al-Kasasba. ISIS supporters used the hashtags “Suggest a Way to Kill the Jordanian Pilot Pig” and “We All Want to Slaughter Moaz” to make their disturbing suggestions, which included decapitation, running al-Kasasba over with a bulldozer and burning him alive (which was the winner). Yes, this sounds absurd and was partly a publicity stunt to boost ISIS’ image. But the underlying approach of crowdsourcing new strategies makes complete sense for ISIS as it continues to evolve — which is what the US government should consider as well.

In fact, in February, the US government tried to crowdsource more counterterrorism strategies. Via its official blog, DipNote, the State Department asked the crowd – in this case, US citizens – for their suggestions for solutions to fight violent extremism. This inclusive approach to policymaking was obviously important for strengthening democracy, with more than 180 entries posted over two months from citizens across the US. But did this crowdsourcing exercise actually improve US strategy against ISIS? Not really. What might help is if the US government asked a crowd of experts across varied disciplines and industries about counterterrorism strategies specifically against ISIS, also giving these experts the opportunity to critique each other’s suggestions to reach one optimal strategy. This additional, collaborative, competitive and interdisciplinary expert insight can only help President Obama and his national security team to enhance their anti-ISIS strategy.

Second, ISIS has been using its crowd of supporters to collect intelligence information to better execute its strategies. Since last August, the extremist group has crowdsourced data via a Twitter campaign specifically on Saudi Arabia’s intelligence officials, including names and other personal details. This apparently helped ISIS in its two suicide bombing attacks during prayers at a Shiite mosque last month; it also presumably helped ISIS infiltrate a Saudi Arabian border town via Iraq in January. A similarly collaborative approach to intelligence collection can only help President Obama and his national security team to enhance their anti-ISIS strategy.

In fact, last year, the FBI used crowdsourcing to spot individuals who might be travelling abroad to join terrorist groups. But what if we asked the crowd of US citizens and residents to give us information specifically on where they’ve seen individuals get lured by ISIS in the country, as well as on specific recruitment strategies they may have noted? This might also lead to more real-time data points on ISIS defectors returning to the US – who are they, why did they defect and what can they tell us about their experience in Syria or Iraq? Overall, crowdsourcing such data (if verifiable) would quickly create a clearer picture of trends in recruitment and defectors across the country, which can only help the US enhance its anti-ISIS strategies.

This collaborative approach to data collection could also be used in Syria and Iraq with texts and online contributions from locals helping us to map ISIS’ movements….(More)”

Field experimenting in economics: Lessons learned for public policy


Robert Metcalfe at OUP Blog: “Do neighbourhoods matter to outcomes? Which classroom interventions improve educational attainment? How should we raise money to provide important and valued public goods? Do energy prices affect energy demand? How can we motivate people to become healthier, greener, and more cooperative? These are some of the most challenging questions policy-makers face. Academics have been trying to understand and uncover these important relationships for decades.

Many of the empirical tools available to economists to answer these questions do not allow causal relationships to be detected. Field experiments represent a relatively new methodological approach capable of measuring the causal links between variables. By overlaying carefully designed experimental treatments on real people performing tasks common to their daily lives, economists are able to answer interesting and policy-relevant questions that were previously intractable. Manipulation of market environments allows these economists to uncover the hidden motivations behind economic behaviour more generally. A central tenet of field experiments in the policy world is that governments should understand the actual behavioural responses of their citizens to changes in policies or interventions.
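
To make the mechanics concrete, here is a minimal sketch, not taken from Metcalfe’s post, of the logic a field experiment relies on: units (say, households) are randomly assigned to a treatment, and because assignment is random, a simple difference in means between the treated and control groups estimates the causal effect of the intervention. Every number in it (sample size, baseline outcome, assumed effect) is hypothetical.

```python
# Illustrative sketch of a randomized field experiment (hypothetical numbers).
# Random assignment is what licenses the causal reading of the difference in means.
import random

random.seed(42)

n = 1000  # hypothetical number of households enrolled in the trial

# Randomly assign each household to treatment (e.g. a new energy tariff) or control.
treatment = [random.random() < 0.5 for _ in range(n)]

# Simulate observed outcomes: baseline energy use plus noise, with an assumed
# true reduction of 1.5 units for treated households.
outcomes = []
for treated in treatment:
    baseline = random.gauss(20.0, 3.0)   # hypothetical baseline consumption
    effect = -1.5 if treated else 0.0    # assumed true causal effect
    outcomes.append(baseline + effect)

treated_mean = sum(y for y, t in zip(outcomes, treatment) if t) / sum(treatment)
control_mean = sum(y for y, t in zip(outcomes, treatment) if not t) / (n - sum(treatment))

# With random assignment, this difference is an unbiased estimate of the
# average causal effect of the intervention on the outcome.
print(f"Estimated treatment effect: {treated_mean - control_mean:.2f}")
```

In a real field experiment the outcome is an actual behavioural response observed in the market rather than a simulated one, but the estimation logic is the same.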

Field experiments represent a departure from laboratory experiments. Traditionally, laboratory experiments create experimental settings with tight control over the decision environment of undergraduate students. While these studies also allow researchers to make causal statements, policy-makers are often concerned that subjects in these experiments may behave differently in settings where they know they are being observed or when they are permitted to sort out of the market.

For example, you might expect a college student to contribute more to charity when she is scrutinized in a professor’s lab than when she can avoid the ask altogether. Field experiments allow researchers to make these causal statements in a setting that is more generalizable to the behaviour policy-makers are directly interested in.

To date, policy-makers have traditionally gathered relevant information and data by using focus groups, qualitative evidence, or observational data, without a way to identify causal mechanisms. It is quite easy to elicit people’s intentions about how they would behave with respect to a new policy or intervention, but there is increasing evidence that people’s intentions are a poor guide to predicting their behaviour.

However, we are starting to see a small change in how governments seek to answer pertinent questions. For instance, the UK tax office (Her Majesty’s Revenue and Customs) now uses field experiments across some of its services to improve the efficacy of scarce taxpayer money. In the US, there are movements toward gathering more evidence from field experiments.

In the corporate world, experimenting is not new. Many of the current large online companies—such as Amazon, Facebook, Google, and Microsoft—are constantly using field experiments matched with big data to improve their products and deliver better services to their customers. More and more companies will use field experiments over time to help them better set prices, tailor advertising, provide a better customer journey to increase welfare, and employ more productive workers…(More).

See also Field Experiments in the Developed World: An Introduction (Oxford Review of Economic Policy)

Five Headlines from a Big Month for the Data Revolution


Sarah T. Lucas at Post2015.org: “If the history of the data revolution were written today, it would include three major dates. May 2013, when the High Level Panel on the Post-2015 Development Agenda first coined the phrase “data revolution.” November 2014, when the UN Secretary-General’s Independent Expert Advisory Group (IEAG) set a vision for it. And April 2015, when five headliner stories pushed the data revolution from great idea to a concrete roadmap for action.

The April 2015 Data Revolution Headlines

1. The African Data Consensus puts Africa in the lead on bringing the data revolution to the regional level. The Africa Data Consensus (ADC) envisions “a profound shift in the way that data is harnessed to impact on development decision-making, with a particular emphasis on building a culture of usage.” The ADC finds consensus across 15 “data communities”, ranging from open data to official statistics to geospatial data, and is endorsed by Africa’s ministers of finance. The ADC gets top billing in my book, as the first contribution that truly reflects a large diversity of voices and creates a political hook for action. (Stay tuned for a blog from my colleague Rachel Quint on the ADC).

2. The Sustainable Development Solutions Network (SDSN) gets our minds (and wallets) around the data needed to measure the SDGs. The SDSN Needs Assessment for SDG Monitoring and Statistical Capacity Development maps the investments needed to improve official statistics. My favorite parts are the clear typology of data (see pg. 12), and that the authors are very open about the methods, assumptions, and leaps of faith they had to take in the costing exercise. They also start an important discussion about how advances in information and communications technology, satellite imagery, and other new technologies have the potential to expand coverage, increase analytic capacity, and reduce the cost of data systems.

3. The Overseas Development Institute (ODI) calls on us to find the “missing millions.” ODI’s The Data Revolution: Finding the Missing Millions presents the stark reality of data gaps and what they mean for understanding and addressing development challenges. The authors highlight that even that most fundamental of measures—of poverty levels—could be understated by as much as a quarter. And that’s just the beginning. The report also pushes us to think beyond the costs of data, and focus on how much good data can save. With examples of data lowering the cost of doing government business, the authors remind us to think about data as an investment with real economic and social returns.

4. Paris21 offers a roadmap for putting national statistics offices (NSOs) at the heart of the data revolution. Paris21’s Roadmap for a Country-Led Data Revolution does not mince words. It calls on the data revolution to “turn a vicious cycle of [NSO] underperformance and inadequate resources into a virtuous one where increased demand leads to improved performance and an increase in resources and capacity.” It makes the case for why NSOs are central and need more support, while also pushing them to modernize, innovate, and open up. The roadmap gets my vote for best design. This ain’t your grandfather’s statistics report!

5. The Cartagena Data Festival features real-live data heroes and fosters new partnerships. The Festival featured data innovators (such as terra-i using satellite data to track deforestation), NSOs on the leading edge of modernization and reform (such as Colombia and the Philippines), traditional actors using old data in new ways (such as the Inter-American Development Bank’s fantastic energy database), groups focused on citizen-generated data (such as The Data Shift and UN My World), private firms working with big data for social good (such as Telefónica), and many others—all reminding us that the data revolution is well underway and will not be stopped. Most importantly, it brought these actors together in one place. You could see the sparks flying as folks learned from each other and hatched plans together. The Festival gets my vote for best conference of a lifetime, with the perfect blend of substantive sessions, intense debate, learning, inspiration, new connections, and a lot of fun. (Stay tuned for a post from my colleague Kristen Stelljes and me for more on Cartagena).

This month full of headlines leaves no room for doubt—momentum is building fast on the data revolution. And just in time.

With the Financing for Development (FFD) conference in Addis Ababa in July, the agreement of Sustainable Development Goals in New York in September, and the Climate Summit in Paris in December, this is a big political year for global development. Data revolutionaries must seize this moment to push past vision, past roadmaps, to actual action and results…..(More)”

21st-Century Public Servants: Using Prizes and Challenges to Spur Innovation


Jenn Gustetic at the Open Government Initiative Blog: “Thousands of Federal employees across the government are using a variety of modern tools and techniques to deliver services more effectively and efficiently, and to solve problems that relate to the missions of their Agencies. These 21st-century public servants are accomplishing meaningful results by applying new tools and techniques to their programs and projects, such as prizes and challenges, citizen science and crowdsourcing, open data, and human-centered design.

Prizes and challenges have been a particularly popular tool at Federal agencies. With 397 prizes and challenges posted on challenge.gov since September 2010, there are hundreds of examples of the many different ways these tools can be designed for a variety of goals. For example:

  • NASA’s Mars Balance Mass Challenge: When NASA’s Curiosity rover plummeted through the Martian atmosphere and came to rest on the surface of Mars in 2012, about 300 kilograms of solid tungsten mass had to be jettisoned to ensure the spacecraft was in a safe orientation for landing. In an effort to seek creative concepts for small science and technology payloads that could potentially replace a portion of such jettisoned mass on future missions, NASA released the Mars Balance Mass Challenge. In only two months, over 200 concepts were submitted by over 2,100 individuals from 43 different countries for NASA to review. Proposed concepts ranged from small drones and 3D printers to radiation detectors and pre-positioning supplies for future human missions to the planet’s surface. NASA awarded the $20,000 prize to Ted Ground of Rising Star, Texas for his idea to use the jettisoned payload to investigate the Mars atmosphere in a way similar to how NASA uses sounding rockets to study Earth’s atmosphere. This was the first time Ted worked with NASA, and NASA was impressed by the novelty and elegance of his proposal: a proposal that NASA likely would not have received through a traditional contract or grant because individuals, as opposed to organizations, are generally not eligible to participate in those types of competitions.
  • National Institutes of Health (NIH) Breast Cancer Startup Challenge (BCSC): The primary goals of the BCSC were to accelerate the process of bringing emerging breast cancer technologies to market, and to stimulate the creation of start-up businesses around nine federally conceived and owned inventions, and one invention from an Avon Foundation for Women portfolio grantee.  While NIH has the capacity to enable collaborative research or to license technology to existing businesses, many technologies are at an early stage and are ideally suited for licensing by startup companies to further develop them into commercial products. This challenge established 11 new startups that have the potential to create new jobs and help promising NIH cancer inventions support the fight against breast cancer. The BCSC turned the traditional business plan competition model on its head to create a new channel to license inventions by crowdsourcing talent to create new startups.

These two examples of challenges are very different, in terms of their purpose and the process used to design and implement them. The success they have demonstrated shouldn’t be taken for granted. It takes access to resources (both information and people), mentoring, and practical experience both to understand how to identify opportunities for innovation tools, like prizes and challenges, and to use them to achieve a desired outcome….

Last month, the Challenge.gov program at the General Services Administration (GSA), the Office of Personnel Management (OPM)’s Innovation Lab, the White House Office of Science and Technology Policy (OSTP), and a core team of Federal leaders in the prize-practitioner community began collaborating with the Federal Community of Practice for Challenges and Prizes to develop the other half of the open innovation toolkit, the prizes and challenges toolkit. In developing this toolkit, OSTP and GSA are thinking not only about the information and process resources that would be helpful to empower 21st-century public servants using these tools, but also how we help connect these people to one another to add another meaningful layer to the learning environment…..

An inventory of skills and knowledge across the 600-person (and growing!) Federal community of practice in prizes and challenges will likely be an important resource in support of a useful toolkit. Prize design and implementation can involve tricky questions, such as:

  • Do I have the authority to conduct a prize or challenge?
  • How should I approach problem definition and prize design?
  • Can agencies own solutions that come out of challenges?
  • How should I engage the public in developing a prize concept or rules?
  • What types of incentives work best to motivate participation in challenges?
  • What legal requirements apply to my prize competition?
  • Can non-Federal employees be included as judges for my prizes?
  • How objective do the judging criteria need to be?
  • Can I partner to conduct a challenge? What’s the right agreement to use in a partnership?
  • Who can win prize money and who is eligible to compete? …(More)

Solving the obesity crisis: knowledge, nudge or nanny?


BioMedCentral Blog: “The 5th Annual Oxford London Lecture (17 March 2015) was delivered by Professor Susan Jebb from Oxford University. The presentation was titled: ‘Knowledge, nudge and nanny: Opportunities to improve the nation’s diet’. In this guest blog Dr Helen Walls, Research Fellow at the London School of Hygiene and Tropical Medicine, covers key themes from this presentation.

“Obesity and related non-communicable diseases such as diabetes, heart disease and cancer pose a significant health, social and economic burden in countries worldwide, including the United Kingdom. Whilst the need for action is clear, the nutrition policy response is a highly controversial topic. Professor Jebb raised the question of how best to achieve dietary change: through ‘knowledge, nudge or nanny’?

Education regarding healthy nutrition is an important strategy, but insufficient. People are notoriously bad at putting their knowledge to work. The inclination to overemphasise the importance of knowledge, whilst ignoring the influence of environmental factors on human behaviours, is termed the ‘fundamental attribution error’. Education may also contribute to widening inequities.

Our choices are strongly shaped by the environments in which we live. So if ‘knowledge’ is not enough, what sort of interventions are appropriate? This raises questions regarding individual choice and the role of government. Here, Professor Jebb introduced the Nuffield Intervention Ladder.

 

Nuffield Intervention Ladder (figure). Source: Nuffield Council on Bioethics. Public health: ethical issues. London: Nuffield Council on Bioethics, 2007.

The Nuffield Intervention Ladder, or what I will refer to as ‘the ladder’, describes intervention types from least to most intrusive on personal choice. To address diets and obesity, Professor Jebb believes we need a range of policy types, across the range of rungs on the ladder.

Less intrusive measures on the ladder could include provision of information about healthy and unhealthy foods, and provision of nutritional information on products (which helps knowledge be put into action). More effective than labelling is the signposting of healthier choices.

Taking a few steps up the ladder brings in ‘nudge’, a concept from behavioural economics. A nudge is any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding options or significantly changing economic incentives. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.

The in-store environment has a huge influence over our choices, and many nudge options would fit here. For example, gondola-end (end of aisle) promotions create a huge up-lift in sales. Removing unhealthy products from this position could make a considerable difference to the contents of supermarket baskets.

Nudge could be used to assist people make better nutritional choices, but it’s also unlikely to be enough. We celebrate the achievement we have made with tobacco control policies and smoking reduction. Here, we use a range of intervention types, including many legislative measures – the ‘nanny’ aspect of the title of this presentation….(More)”

New surveys reveal dynamism, challenges of open data-driven businesses in developing countries


Alla Morrison at World Bank Open Data blog: “Was there a class of entrepreneurs emerging to take advantage of the economic possibilities offered by open data, were investors keen to back such companies, were governments tuned to and responsive to the demands of such companies, and what were some of the key financing challenges and opportunities in emerging markets? As we began our work on the concept of an Open Fund, we partnered with Ennovent (India), MDIF (East Asia and Latin America) and Digital Data Divide (Africa) to conduct short market surveys to answer these questions, with a focus on trying to understand whether a financing gap truly existed in these markets. The studies were fairly quick (4-6 weeks) and reached only a small number of companies (193 in India, 70 in Latin America, 63 in South East Asia, and 41 in Africa – and not everybody responded) but the findings were fairly consistent.

  • Open data is still a very nascent concept in emerging markets, and there’s only a small class of entrepreneurs/investors that is aware of the economic possibilities; there’s a lot of work to do in the ‘enabling environment’
    • In many regions the distinction between open data, big data, and private sector generated/scraped/collected data was blurry at best among entrepreneurs and investors (some of our findings consequently are better indicators of  data-driven rather than open data-driven businesses)
  • There’s a small but growing number of open data-driven companies in all the markets we surveyed and these companies target a wide range of consumers/users and are active in multiple sectors
    • A large percentage of identified companies operate in sectors with high social impact – health and wellness, environment, agriculture, transport. For instance, in India, after excluding business analytics companies, a third of data companies seeking financing are in healthcare and a fifth in food and agriculture, and some of them have the low-income population or the rural segment of India as an intended beneficiary segment. In Latin America, the number of companies in business services, research and analytics was closely followed by health, environment and agriculture. In Southeast Asia, business, consumer services, and transport came out in the lead.
    • We found the highest number of companies in Latin America and Asia with the following countries leading the way – Mexico, Chile, and Brazil, with Colombia and Argentina closely behind in Latin America; and India, Indonesia, Philippines, and Malaysia in Asia
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia
    • We heard demand for different kinds of financing (equity, debt, working capital) but the majority of the need was for equity and quasi-equity in amounts ranging from $100,000 to $5 million USD, with averages of between $2 and $3 million USD depending on the region.
  • There’s a significant financing gap in all the markets
    • The investment sizes required, while they range up to several million dollars, are generally small. Analysis of more than 300 data companies in Latin America and Asia indicates a total estimated need for financing of more than $400 million
  • Venture capital firms generally don’t recognize data as a separate sector and club data-driven companies with their standard information communication technology (ICT) investments
    • Interviews with founders suggest that moving beyond seed stage is particularly difficult for data-driven startups. While many companies are able to cobble together an initial seed round augmented by bootstrapping to get their idea off the ground, they face a great deal of difficulty when trying to raise a second, larger seed round or Series A investment.
    • From the perspective of startups, investors favor banal e-commerce (e.g., according to Tech in Asia, out of the $645 million in technology investments made public across the region in 2013, 92% were related to fashion and online retail) or consumer service startups and ignore open data-focused startups even if they have a strong business model and solid key performance indicators. The space is ripe for a long-term investor with a generous risk appetite and multiple bottom line goals.
  • Poor data quality was the number one issue these companies reported.
    • Companies reported significant waste and inefficiency in accessing/scraping/cleaning data.

The analysis below borrows heavily from the work done by the partners. We should of course mention that the findings are provisional and should not be considered authoritative (please see the section on methodology for more details)….(More).”

Public interest models: a powerful tool for the advocacy agenda


at Open Oil: “Open financial models can clearly put analysis into a genuinely independent public space, and also trigger a rise in public understanding which could enrich the governance debate in many countries.

But there is a third function public models can serve: that of advocacy for targeted disclosure of information.

The stress here is on “targeted”. A lot of transparency debates are generic – the need to disclose data as a matter of principle.

It is striking that as the transparency agenda has advanced, and won many battles, so has a debate about whether it is contributing to an increase in accountability. As Paul Collier said: “transparency has to lead to accountability otherwise we’re just ticking loads of boxes”.

We need all these campaigns to continue, and we need to pursue maximum disclosure. Because while transparency does not guarantee accountability, it is its essential prerequisite. Necessary but not sufficient.

But here’s where modeling can help to provide some examples of how data can be used, in a very specific way, to advance accountability.

Let’s take the example of an oil project in Africa. A financial model has to deal with uncertainty and so provides three scenarios for future production and prices, which all have a radical impact on the revenues the government could expect to see. That’s unavoidable. Under the “God, Exxon and everyone else” principle, future price and to some extent production are hard to foresee.
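
As a rough illustration of how much those scenarios matter, the sketch below works through the kind of revenue arithmetic such a model performs. Every figure in it (prices, production volumes, costs, and the royalty and profit-share terms) is an invented assumption for the example, not data from any real contract or project.

```python
# Illustrative sketch only: three price/production scenarios and the government
# revenue they imply under an invented fiscal regime (10% royalty, 30% profit share).
scenarios = {
    # name: (oil price in $/barrel, annual production in million barrels)
    "low":  (45.0, 8.0),
    "mid":  (70.0, 10.0),
    "high": (95.0, 12.0),
}

operating_cost_per_barrel = 25.0  # assumed, e.g. estimated from public domain sources
royalty_rate = 0.10               # assumed fiscal term
profit_share = 0.30               # assumed government share of profit

for name, (price, production) in scenarios.items():
    gross_revenue = price * production                 # $ million per year
    royalty = gross_revenue * royalty_rate
    costs = operating_cost_per_barrel * production
    profit = max(gross_revenue - royalty - costs, 0.0)
    government_take = royalty + profit * profit_share  # $ million per year
    print(f"{name:>4} scenario: government revenue ~ ${government_take:,.0f}m per year")
```

Even with this toy fiscal regime, the projected government take varies several-fold between the low and high scenarios; that is the first, unavoidable layer of uncertainty.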

But then there is a second layer of uncertainty caused specifically by the model having to use public domain data. The company (and the government, if it exercised its rights of access to information) does not face this second layer because it has access to real data, whereas the public interest model must use estimates and extrapolations. These can be justified, written out and explained – they can be well-informed guesses, in other words, and in the blog on the analytical power of public models, we argue that you can still arrive at useful analysis and conclusions despite this handicap.

Nevertheless, they are guesses. And unlike the first layer of uncertainty, relating to future prices and the ever-changing global market, this second layer can be directly addressed by information the government already has to hand – or could get under its contractual right of access to information….(More)”

Design in policy making


at the Open Policy Making Blog: “….In recent years, notable policy and business experts have been discussing the value of design and ‘design thinking’ as an approach to improving the way Government delivers services in one form or another for (and with) citizens.  Examples include Roger Martin from Rotman Business School, Christian Bason formerly of Mindlab, Marco Steinberg of Sitra, Hilary Cottam of Participle, and many more who have been promoting the use of design as a tool for service transformation.

So what is design and how is it being applied in government?  This is the question that has been posed this week at the Service Design in Government conference in London.  This week is also the launch of some of the Policy Lab tools in the Policy Toolkit.

The Policy Lab have produced a short introduction to design, service design and design thinking. It serves to explain how we are defining and using the term design in various ways in a policy context, as well as providing practical tools and examples of design being used in policy making.

We tend to spot design when it goes wrong: badly laid out forms, websites we can’t navigate, confusing signage, transport links that don’t join together, queues for services that are in demand. Bad design is a time thief.  We can also spot good design when we see it, but how is it achieved?…(More)”