Serious Gaming Takes Flight


Dennis Glenn at “Chief Learning Officer” Media: “Gamification is one of the hottest topics in corporate learning today, yet we don’t entirely trust it. So before delving into how leaders can take a reasoned, serious approach to using games in learning environments, let’s get one thing straight: Gamification is different from serious gaming.

Gamification places nongame experiences into a gamelike environment. Serious games are educational experiences specifically designed to deliver formative or summative assessments based on predetermined learning objectives. Gamification creates an experience; serious games promote task or concept mastery. The underlying aim of serious games concentrates the user’s effort on mastery of a specific task, with a feedback loop to inform users of their progress toward that goal….

In addition to simulations and gamification, many corporate learning leaders are turning to serious games, which demand social engagement. For instance, consider the World of Warcraft wiki, which has more than 101,000 players and contributors helping others master the online game.

Some of the most important benefits of gaming:

  • Accepting failure, which is seen as a benefit to mastery.
  • Rewarding players with appropriate and timely feedback.
  • Making social connections and feeling part of something bigger.

In serious games, frequent feedback — when accompanied by specific instruction — can dramatically reduce the time to mastery. Because the computer will record all data during the assessment, learning leaders can identify specific pathways to mastery and offer them to learners.

This feedback loop leads to self-reflection, which can be translated into learning, according to the 2014 working paper “Learning by Thinking: How Reflection Aids Performance.” Authors Giada Di Stefano, Francesca Gino, Gary Pisano and Bradley Staats found that individuals performed significantly better on subsequent tasks when thinking about what they learned from the previously completed task.

Social learning is the final link to understanding mastery learning. In a recent massive open online course, “Design and Development of Educational Technology” (MITx: 11.132x), instructor Scot Osterweil said our understanding of literacy is rooted in a social environment and in interactions with other people and the world. But again, engagement is key. Gaming provides the structure needed to engage with peers, often irrespective of cultural and language differences….(More)”.
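
Building on the feedback-loop point in the excerpt above — that a serious game records every assessment event, letting learning leaders surface the routes that actually lead to mastery — a small log-mining exercise conveys the idea. The sketch below is hypothetical: the log format and activity names are invented for illustration, not taken from the article.

```python
# Minimal, hypothetical sketch: mining recorded assessment logs to surface the
# most common learning paths that end in mastery. The log format and step names
# are invented for illustration; this is not the system the article describes.
from collections import Counter

# Each attempt: (learner_id, sequence of activities taken, did the learner reach mastery?)
attempts = [
    ("a1", ("intro", "practice", "quiz"), True),
    ("a2", ("intro", "quiz"), False),
    ("a3", ("intro", "practice", "quiz"), True),
    ("a4", ("practice", "quiz"), True),
]

# Count the paths taken by attempts that reached mastery.
mastery_paths = Counter(path for _, path, mastered in attempts if mastered)

# The most frequent successful paths are candidates to offer to struggling learners.
for path, count in mastery_paths.most_common(3):
    print(" -> ".join(path), f"({count} successful attempt(s))")
```

At scale the same idea can be handled with sequence-mining or learner-modelling techniques, but the core is simply counting which recorded paths end in mastery.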

The road to better data


Johannes Jütting at OECD Insights: “Tradition tells us that more than 3,000 years ago, Moses went to the top of Mount Sinai and came back down with 10 commandments. When the world’s presidents and prime ministers go to the top of the Sustainable Development Goals (SDGs) mountain in New York late this summer they will come down with not 10 commandments but 169. Too many?

Some people certainly think so. “Stupid development goals,” The Economist said recently. It argued that the 17 SDGs and roughly 169 targets should “honour Moses and be pruned to ten goals”. Others disagree. In a report for the Overseas Development Institute, May Miller-Dawkins warned of the dangers of letting practicality “blunt ambition”. She backed SDGs with “high ambition”.

The debate over the “right” number of goals and targets is interesting, important even. But it misses a key point: No matter how many goals and targets are finally agreed, if we can’t measure their real impact on people’s lives, on our societies and on the environment, then they risk becoming irrelevant.

Unfortunately, we already know that many developing countries have problems compiling even basic social and economic statistics, never mind the complex web of data that will be needed to monitor the SDGs. A few examples: In 2013, about 35% of all live births were not officially registered worldwide, rising to two-thirds in developing countries. In Africa, just seven countries have data on their total number of landholders and women landholders, and none have data from before 2004. Last but not least, fast-changing economies and associated measurement challenges mean we are not sure today whether, worldwide, a billion people are living in extreme poverty, half a billion, or more than a billion.

Why does this matter? Without adequate data, we cannot identify the problems that planning and policymaking need to address. We also cannot judge if governments and others are meeting their commitments. As a report from the Center for Global Development notes, “Data […] serve as a ‘currency’ for accountability among and within governments, citizens, and civil society at large, and they can be used to hold development agencies accountable.”…(More)”

The extreme poverty of data


In the Financial Times: “As finance ministers gather this week in Washington DC they cannot but agree and commit to fighting extreme poverty. All of us must rejoice in the fact that over the past 15 years, the world has reportedly already “halved the number of poor people living on the planet”.

But none of us really knows it for sure. It could be less, it could be more. In fact, for every crucial issue related to human development, whether it is poverty, inequality, employment, environment or urbanization, there is a seminal crisis at the heart of global decision making – the crisis of poor data.

Because the challenges are huge and the resources scarce, on these issues perhaps more than anywhere else we need data to monitor results and adapt strategies whenever needed. Bad data feed bad management, weak accountability, loss of resources and, of course, corruption.

It is rather bewildering that while we live in this technology-driven age, the development communities and many of our African governments are relying too much on guesswork. Our friends in the development sector and our African leaders would not dream of driving their cars or flying without instruments. But somehow they pretend they can manage and develop countries without reliable data.

The development community must admit it has a big problem. The sector is relying on dodgy data sets. Take the data on extreme poverty. The data we have are mainly extrapolations of estimates from years back – even up to a decade or more ago. For 38 out of 54 African countries, data on poverty and inequality are either out-dated or non-existent. How can we measure progress with such a shaky baseline? To make things worse we also don’t know how much countries spend on fighting poverty. Only 3 per cent of African citizens live in countries where governmental budgets and expenditures are made open, according to the Open Budget Index. We will never end extreme poverty if we don’t know who or where the poor are, or how much is being spent to help them.

Our African countries have all fought and won their political independence. They should now consider the battle for economic sovereignty, which begins with the ownership of sound and robust national data: how many citizens, living where, and how, to begin with.

There are three levels of intervention required.

First, a significant increase in resources for credible, independent, national statistical institutions. Establishing a statistical office is less eye-catching than building a hospital or school, but data-driven policy will ensure that more hospitals and schools are delivered more effectively and efficiently. We urgently need these boring statistical offices. In 2013, out of a total aid budget of $134.8bn, a mere $280m went in support of statistics. Governments must also increase the resources they put into data.

Second, innovative means of collecting data. Mobile phones, geocoding, satellites and the civic engagement of young, tech-savvy citizens can all, if harnessed, secure rapid improvements in baseline data.

Third, everyone must take on the challenge of the global public good dimension of high-quality open data. Public registers of the ownership of companies, global standards on publishing payments and contracts in the extractives sector, and a global charter for open data standards will help media and citizens to track corruption and expose mismanagement. Proposals for a new world statistics body – “Worldstat” – should be developed and implemented….(More)”

How Digital Transparency Became a Force of Nature


Daniel C. Dennett and Deb Roy in Scientific American: “More than half a billion years ago a spectacularly creative burst of biological innovation called the Cambrian explosion occurred. In a geologic “instant” of several million years, organisms developed strikingly new body shapes, new organs, and new predation strategies and defenses against them. Evolutionary biologists disagree about what triggered this prodigious wave of novelty, but a particularly compelling hypothesis, advanced by University of Oxford zoologist Andrew Parker, is that light was the trigger. Parker proposes that around 543 million years ago, the chemistry of the shallow oceans and the atmosphere suddenly changed to become much more transparent. At the time, all animal life was confined to the oceans, and as soon as the daylight flooded in, eyesight became the best trick in the sea. As eyes rapidly evolved, so did the behaviors and equipment that responded to them.

Whereas before all perception was proximal — by contact or by sensed differences in chemical concentration or pressure waves — now animals could identify and track things at a distance. Predators could home in on their prey; prey could see the predators coming and take evasive action. Locomotion is a slow and stupid business until you have eyes to guide you, and eyes are useless if you cannot engage in locomotion, so perception and action evolved together in an arms race. This arms race drove much of the basic diversification of the tree of life we have today.

Parker’s hypothesis about the Cambrian explosion provides an excellent parallel for understanding a new, seemingly unrelated phenomenon: the spread of digital technology. Although advances in communications technology have transformed our world many times in the past — the invention of writing signaled the end of prehistory; the printing press sent waves of change through all the major institutions of society — digital technology could have a greater impact than anything that has come before. It will enhance the powers of some individuals and organizations while subverting the powers of others, creating both opportunities and risks that could scarcely have been imagined a generation ago.

Through social media, the Internet has put global-scale communications tools in the hands of individuals. A wild new frontier has burst open. Services such as YouTube, Facebook, Twitter, Tumblr, Instagram, WhatsApp and Snapchat generate new media on a par with the telephone or television — and the speed with which these media are emerging is truly disruptive. It took decades for engineers to develop and deploy telephone and television networks, so organizations had some time to adapt. Today a social-media service can be developed in weeks, and hundreds of millions of people can be using it within months. This intense pace of innovation gives organizations no time to adapt to one medium before the arrival of the next.

The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before — and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.

The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness….(More at Medium)”

21st-Century Public Servants: Using Prizes and Challenges to Spur Innovation


Jenn Gustetic at the Open Government Initiative Blog: “Thousands of Federal employees across the government are using a variety of modern tools and techniques to deliver services more effectively and efficiently, and to solve problems that relate to the missions of their Agencies. These 21st-century public servants are accomplishing meaningful results by applying new tools and techniques to their programs and projects, such as prizes and challenges, citizen science and crowdsourcing, open data, and human-centered design.

Prizes and challenges have been a particularly popular tool at Federal agencies. With 397 prizes and challenges posted on challenge.gov since September 2010, there are hundreds of examples of the many different ways these tools can be designed for a variety of goals. For example:

  • NASA’s Mars Balance Mass Challenge: When NASA’s Curiosity rover plummeted through the Martian atmosphere and came to rest on the surface of Mars in 2012, about 300 kilograms of solid tungsten mass had to be jettisoned to ensure the spacecraft was in a safe orientation for landing. In an effort to seek creative concepts for small science and technology payloads that could potentially replace a portion of such jettisoned mass on future missions, NASA released the Mars Balance Mass Challenge. In only two months, over 200 concepts were submitted by over 2,100 individuals from 43 different countries for NASA to review. Proposed concepts ranged from small drones and 3D printers to radiation detectors and pre-positioning supplies for future human missions to the planet’s surface. NASA awarded the $20,000 prize to Ted Ground of Rising Star, Texas for his idea to use the jettisoned payload to investigate the Mars atmosphere in a way similar to how NASA uses sounding rockets to study Earth’s atmosphere. This was the first time Ted worked with NASA, and NASA was impressed by the novelty and elegance of his proposal: a proposal that NASA likely would not have received through a traditional contract or grant because individuals, as opposed to organizations, are generally not eligible to participate in those types of competitions.
  • National Institutes of Health (NIH) Breast Cancer Startup Challenge (BCSC): The primary goals of the BCSC were to accelerate the process of bringing emerging breast cancer technologies to market, and to stimulate the creation of start-up businesses around nine federally conceived and owned inventions, and one invention from an Avon Foundation for Women portfolio grantee.  While NIH has the capacity to enable collaborative research or to license technology to existing businesses, many technologies are at an early stage and are ideally suited for licensing by startup companies to further develop them into commercial products. This challenge established 11 new startups that have the potential to create new jobs and help promising NIH cancer inventions support the fight against breast cancer. The BCSC turned the traditional business plan competition model on its head to create a new channel to license inventions by crowdsourcing talent to create new startups.

These two examples of challenges are very different, in terms of their purpose and the process used to design and implement them. The success they have demonstrated shouldn’t be taken for granted. It takes access to resources (both information and people), mentoring, and practical experience to understand both how to identify opportunities for innovation tools, like prizes and challenges, and how to use them to achieve a desired outcome….

Last month, the Challenge.gov program at the General Services Administration (GSA), the Office of Personnel Management (OPM)’s Innovation Lab, the White House Office of Science and Technology Policy (OSTP), and a core team of Federal leaders in the prize-practitioner community began collaborating with the Federal Community of Practice for Challenges and Prizes to develop the other half of the open innovation toolkit, the prizes and challenges toolkit. In developing this toolkit, OSTP and GSA are thinking not only about the information and process resources that would be helpful to empower 21st-century public servants using these tools, but also about how to help connect these people to one another to add another meaningful layer to the learning environment…..

An inventory of skills and knowledge across the 600-person (and growing!) Federal community of practice in prizes and challenges will likely be an important resource in support of a useful toolkit. Prize design and implementation can involve tricky questions, such as:

  • Do I have the authority to conduct a prize or challenge?
  • How should I approach problem definition and prize design?
  • Can agencies own solutions that come out of challenges?
  • How should I engage the public in developing a prize concept or rules?
  • What types of incentives work best to motivate participation in challenges?
  • What legal requirements apply to my prize competition?
  • Can non-Federal employees be included as judges for my prizes?
  • How objective do the judging criteria need to be?
  • Can I partner to conduct a challenge? What’s the right agreement to use in a partnership?
  • Who can win prize money and who is eligible to compete? …(More)

Solving the obesity crisis: knowledge, nudge or nanny?


BioMed Central Blog: “The 5th Annual Oxford London Lecture (17 March 2015) was delivered by Professor Susan Jebb from Oxford University. The presentation was titled: ‘Knowledge, nudge and nanny: Opportunities to improve the nation’s diet’. In this guest blog Dr Helen Walls, Research Fellow at the London School of Hygiene and Tropical Medicine, covers key themes from this presentation.

“Obesity and related non-communicable diseases such as diabetes, heart disease and cancer pose a significant health, social and economic burden in countries worldwide, including the United Kingdom. Whilst the need for action is clear, the nutrition policy response is a highly controversial topic. Professor Jebb raised the question of how best to achieve dietary change: through ‘knowledge, nudge or nanny’?

Education regarding healthy nutrition is an important strategy, but insufficient. People are notoriously bad at putting their knowledge to work. The inclination to overemphasise the importance of knowledge, whilst ignoring the influence of environmental factors on human behaviours, is termed the ‘fundamental attribution error’. Education may also contribute to widening inequities.

Our choices are strongly shaped by the environments in which we live. So if ‘knowledge’ is not enough, what sort of interventions are appropriate? This raises questions regarding individual choice and the role of government. Here, Professor Jebb introduced the Nuffield Intervention Ladder.

 

Nuffield Intervention Ladder (Source: Nuffield Council on Bioethics. Public health: ethical issues. London: Nuffield Council on Bioethics, 2007.)

The Nuffield Intervention Ladder, or what I will refer to as ‘the ladder’, describes intervention types from least to most intrusive on personal choice. In addressing diets and obesity, Professor Jebb believes we need a range of policy types, across the range of rungs on the ladder.

Less intrusive measures on the ladder could include provision of information about healthy and unhealthy foods, and provision of nutritional information on products (which helps knowledge be put into action). More effective than labelling is the signposting of healthier choices.

Taking a few steps up the ladder brings in ‘nudge’, a concept from behavioural economics. A nudge is any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding options or significantly changing economic incentives. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.

The in-store environment has a huge influence over our choices, and many nudge options would fit here. For example, gondola-end (end of aisle) promotions create a huge uplift in sales. Removing unhealthy products from this position could make a considerable difference to the contents of supermarket baskets.

Nudge could be used to help people make better nutritional choices, but it’s also unlikely to be enough. We celebrate the achievement we have made with tobacco control policies and smoking reduction. Here, we have used a range of intervention types, including many legislative measures – the ‘nanny’ aspect of the title of this presentation….(More)”

Citizen Science for Citizen Access to Law


Paper by Michael Curtotti, Wayne Weibel, Eric McCreath, Nicolas Ceynowa, Sara Frug, and Tom R Bruce: “This paper sits at the intersection of citizen access to law, legal informatics and plain language. The paper reports the results of a joint project of the Cornell University Legal Information Institute and the Australian National University which collected thousands of crowdsourced assessments of the readability of law through the Cornell LII site. The aim of the project is to enhance accuracy in the prediction of the readability of legal sentences. The study requested readers on legislative pages of the LII site to rate passages from the United States Code and the Code of Federal Regulations and other texts for readability and other characteristics. The research provides insight into who uses legal rules and how they do so. The study enables conclusions to be drawn as to the current readability of law and spread of readability among legal rules. The research is intended to enable the creation of a dataset of legal rules labelled by human judges as to readability. Such a dataset, in combination with machine learning, will assist in identifying factors in legal language which impede readability and access for citizens. As far as we are aware, this research is the largest ever study of readability and usability of legal language and the first research which has applied crowdsourcing to such an investigation. The research is an example of the possibilities open for enhancing access to law through engagement of end users in the online legal publishing environment for enhancement of legal accessibility and through collaboration between legal publishers and researchers….(More)”
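
To make the paper’s proposal concrete — pairing crowdsourced readability labels with machine learning to identify what makes legal language hard to read — here is a minimal Python sketch. The features, model choice and example sentences are illustrative assumptions, not the authors’ actual pipeline.

```python
# Minimal sketch (illustrative assumptions, not the authors' pipeline): learn to
# predict crowdsourced readability ratings for legal sentences, so that factors
# impeding readability can later be inspected. Sentences and ratings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

sentences = [
    "The Secretary shall promulgate such regulations as may be necessary.",
    "You must file the form by April 15.",
    "Notwithstanding any other provision of law, the amounts otherwise appropriated shall be reduced pro rata.",
    "Keep a copy of your return for your records.",
]
ratings = [2.1, 4.6, 1.7, 4.8]  # hypothetical mean crowd ratings (1 = hard, 5 = easy)

# Lexical features plus a simple linear regressor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))

X_train, X_test, y_train, y_test = train_test_split(
    sentences, ratings, test_size=0.5, random_state=0
)
model.fit(X_train, y_train)
print("MAE on held-out sentences:", mean_absolute_error(y_test, model.predict(X_test)))
```

With a large labelled dataset, inspecting the learned weights — or swapping in interpretable features such as sentence length and word frequency — is one way to point at the specific characteristics of legal language that impede readability.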

New surveys reveal dynamism, challenges of open data-driven businesses in developing countries


Alla Morrison at World Bank Open Data blog: “Was there a class of entrepreneurs emerging to take advantage of the economic possibilities offered by open data, were investors keen to back such companies, were governments tuned to and responsive to the demands of such companies, and what were some of the key financing challenges and opportunities in emerging markets? As we began our work on the concept of an Open Fund, we partnered with Ennovent (India), MDIF (East Asia and Latin America) and Digital Data Divide (Africa) to conduct short market surveys to answer these questions, with a focus on trying to understand whether a financing gap truly existed in these markets. The studies were fairly quick (4-6 weeks) and reached only a small number of companies (193 in India, 70 in Latin America, 63 in South East Asia, and 41 in Africa – and not everybody responded) but the findings were fairly consistent.

  • Open data is still a very nascent concept in emerging markets, and there’s only a small class of entrepreneurs/investors that is aware of the economic possibilities; there’s a lot of work to do in the ‘enabling environment’
    • In many regions the distinction between open data, big data, and private sector generated/scraped/collected data was blurry at best among entrepreneurs and investors (some of our findings consequently are better indicators of  data-driven rather than open data-driven businesses)
  • There’s a small but growing number of open data-driven companies in all the markets we surveyed and these companies target a wide range of consumers/users and are active in multiple sectors
    • A large percentage of identified companies operate in sectors with high social impact – health and wellness, environment, agriculture, transport. For instance, in India, after excluding business analytics companies, a third of data companies seeking financing are in healthcare and a fifth in food and agriculture, and some of them have the low-income population or the rural segment of India as an intended beneficiary segment. In Latin America, the number of companies in business services, research and analytics was closely followed by health, environment and agriculture. In Southeast Asia, business, consumer services, and transport came out in the lead.
    • We found the highest number of companies in Latin America and Asia with the following countries leading the way – Mexico, Chile, and Brazil, with Colombia and Argentina closely behind in Latin America; and India, Indonesia, Philippines, and Malaysia in Asia
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia
    • We heard demand for different kinds of financing (equity, debt, working capital) but the majority of the need was for equity and quasi-equity in amounts ranging from $100,000 to $5 million USD, with averages of between $2 and $3 million USD depending on the region.
  • There’s a significant financing gap in all the markets
    • The investment sizes required, while they range up to several million dollars, are generally small. Analysis of more than 300 data companies in Latin America and Asia indicates a total estimated need for financing of more than $400 million
  • Venture capital firms generally don’t recognize data as a separate sector and club data-driven companies with their standard information communication technology (ICT) investments
    • Interviews with founders suggest that moving beyond seed stage is particularly difficult for data-driven startups. While many companies are able to cobble together an initial seed round augmented by bootstrapping to get their idea off the ground, they face a great deal of difficulty when trying to raise a second, larger seed round or Series A investment.
    • From the perspective of startups, investors favor banal e-commerce (e.g., according to Tech in Asia, out of the $645 million in technology investments made public across the region in 2013, 92% were related to fashion and online retail) or consumer service startups and ignore open data-focused startups even if they have a strong business model and solid key performance indicators. The space is ripe for a long-term investor with a generous risk appetite and multiple bottom line goals.
  • Poor data quality was the number one issue these companies reported.
    • Companies reported significant waste and inefficiency in accessing/scraping/cleaning data.

The analysis below borrows heavily from the work done by the partners. We should of course mention that the findings are provisional and should not be considered authoritative (please see the section on methodology for more details)….(More).”

Sensor Law


Paper by Sandra Braman: “For over two decades, information policy-making for human society has been increasingly supplemented, supplanted, and/or superseded by machinic decision-making; it has been over three decades since legal decision-making was explicitly put in place to serve machinic rather than social systems, and over four decades since designers of the Internet took the position that they were serving non-human (machinic, or daemon) users in addition to humans. As the “Internet of Things” becomes more and more of a reality, these developments increasingly shape the nature of governance itself. This paper’s discussion of contemporary trends in these diverse modes of human-computer interaction at the system level — interactions between social systems and technological systems — introduces the changing nature of the law as a sociotechnical problem in itself. In such an environment, technological innovations are often also legal innovations, and legal developments require socio-technical analysis as well as social, legal, political, and cultural approaches.

Examples of areas in which sensors are already receiving legal attention are rife. A non-comprehensive listing includes privacy concerns beginning but not ending with those raised by sensors embedded in phones and geolocation devices, which are the most widely discussed and those of which the public is most aware. Sensor issues arise in environmental law, health law, marine law, intellectual property law, and as they are raised by new technologies in use for national security purposes that include those confidence- and security-building measures intended for peacekeeping. They are raised by liability issues for objects that range from cars to ovens. And sensor issues are at the core of concerns about “telemetric policing,” as that is coming into use not only in North America and Europe, but in societies such as that of Brazil as well.

Sensors are involved in every stage of legal processes, from identification of persons of interest to determination of judgments and consequences of judgments. Their use significantly alters the historically-developed distinction among types of decision-making meant to come into use at different stages of the process, raising new questions about when, and how, human decision-making needs to dominate and when, and how, technological innovation might need to be shaped by the needs of social rather than human systems.

This paper will focus on the legal dimensions of sensors used in ubiquitous embedded computing….(More)”

Eight ways to make government more experimental


Jonathan Breckon et al. at Nesta: “When the banners and bunting have been tidied away after the May election, and a new bunch of ministers sit at their Whitehall desks, could they embrace a more experimental approach to government?

Such an approach requires a degree of humility.  Facing up to the fact that we don’t have all the answers for the next five years.  We need to test things out, evaluate new ways of doing things with the best of social science, and grow what works.  And drop policies that fail.

But how best to go about it? Here are our eight ways to make it a reality:

  1. Make failure OK. A more benign attitude to risk is central to experimentation.  As a 2003 Cabinet Office review entitled Trying it Out said, a pilot that reveals a policy to be flawed should be ‘viewed as a success rather than a failure, having potentially helped to avert a potentially larger political and/or financial embarrassment’. Pilots are particularly important in fast moving areas such as technology to try promising fresh ideas in real-time. Our ‘Visible Classroom’ pilot tried an innovative approach to teacher CPD developed from technology for television subtitling.
  2. Avoid making policies that are set in stone. Allowing policy to be more project-based, flexible and time-limited could encourage room for manoeuvre, according to a previous Nesta report, State of Uncertainty: Innovation policy through experimentation. The Department for Work and Pensions’ Employment Retention and Advancement pilot scheme to help people back to work was designed to influence the shape of legislation. It allowed for amendments and learning as it was rolled out. We need more policy experiments like this.
  3. Work with the grain of the current policy environment. Experimenters need to be opportunists. We need to be nimble and flexible, ready to seize windows of opportunity to experiment. Some services have to be rolled out in stages due to budget constraints. This offers opportunities to try things out before going national. For instance, the Mexican Oportunidades anti-poverty experiments, which eventually reached 5.8 million households in all Mexican states, had to be trialled first in a handful of areas. Greater devolution is creating a patchwork of different policy priorities, funding and delivery models – so-called ‘natural experiments’. Let’s seize the opportunity to deliberately test and compare across different jurisdictions. What about a trial of basic income in Northern Ireland, for example, along the lines of recent Finnish proposals, or universal free childcare in Scotland?
  4. Experiments need the most robust and appropriate evaluation methods, such as randomised controlled trials where appropriate (a minimal sketch of a simple trial analysis follows this list). Other methods, such as qualitative research, may be needed to pry open the ‘black box’ of policies – to learn about why and how things are working. Civil servants should use the government trial advice panel as a source of expertise when setting up experiments.
  5. Grow the public debate about the importance of experimentation. Facebook had to apologise after a global backlash to psychological experiments on 689,000 of its web users. Approval by ethics committees – normal practice for trials in hospitals and universities – is essential, but we can’t just rely on experts. We need dedicated public understanding of experimentation programmes, perhaps run by the Evidence Matters or Ask for Evidence campaigns at Sense about Science. Taking part in an experiment can in itself be a learning opportunity, creating an appetite amongst the public – something we have found from running an RCT with schools.
  6. Create ‘Skunkworks’ institutions. New or improved institutional structures within government can also help with experimentation. The Behavioural Insights Team, located in Nesta, operates a classic ‘skunkworks’ model, semi-detached from day-to-day bureaucracy. The nine UK What Works Centres help try things out semi-detached from central power – such as the Education Endowment Foundation, which sources innovations widely from across the public and private sectors, including Nesta, rather than generating ideas exclusively in house or in government.
  7. Find low-cost ways to experiment. People sometimes worry that trials are expensive and complicated. This does not have to be the case. Experiments to encourage organ donation by the Government Digital Service and Behavioural Insights Team involved an estimated cost of £20,000. This was because the digital experiments didn’t involve setting up expensive new interventions – just changing messages on web pages for existing services. Some programmes do, however, need significant funding to evaluate, and budgets need to be found for it. A memo from the White House Office of Management and Budget has asked new Government schemes seeking funding to allocate a proportion of their budgets to ‘randomized controlled trials or carefully designed quasi-experimental techniques’.
  8. Be bold. A criticism of some experiments is that they only deal with the margins of policy and delivery. Government officials and researchers should set up more ambitious experiments on nationally important big-ticket issues, from counter-terrorism to innovation in jobs and housing….(More)
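
As a companion to point 4 on evaluation methods, the sketch below shows the bare mechanics of a randomised controlled trial: random assignment of units to arms, followed by a simple comparison of outcomes. Everything in it is invented for illustration; a real government trial would involve pre-registration, power calculations and carefully defined outcome measures.

```python
# Illustrative sketch of the mechanics behind point 4: randomly assign units to a
# pilot, then compare outcomes between arms. The units, outcome measure and
# numbers are invented; a real trial would add pre-registration and power analysis.
import random
from statistics import mean

from scipy import stats  # Welch's two-sample t-test

random.seed(0)

units = [f"site_{i}" for i in range(40)]
random.shuffle(units)
treatment, control = units[:20], units[20:]

# Hypothetical outcomes collected after the pilot (e.g., % of clients back in work).
outcomes = {u: random.gauss(52 if u in treatment else 50, 5) for u in units}

t_out = [outcomes[u] for u in treatment]
c_out = [outcomes[u] for u in control]

t_stat, p_value = stats.ttest_ind(t_out, c_out, equal_var=False)
print(f"treatment mean = {mean(t_out):.1f}, control mean = {mean(c_out):.1f}, p = {p_value:.3f}")
```

The analysis itself is cheap; as points 1–7 argue, the harder work is institutional.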