Civic Crowdfunding: Participatory Communities, Entrepreneurs and the Political Economy of Place


Rodrigo Davies: “Today I’m capping two years of studying the emergence of civic crowdfunding by submitting my master’s thesis to the MIT archives…You can read Civic Crowdfunding: Participatory Communities, Entrepreneurs and the Political Economy of Place in its entirety (173 pages) now,…
Crowdfunding is everywhere. People are using it to fund watches and comic books; even famous film directors are doing it. In what is now a $6 billion industry globally, I think the most interesting, disruptive and exciting work that’s happening is in donation-based crowdfunding. That’s worth, very roughly, $1.2 billion a year worldwide. Within that subset, I’ve been looking at civic projects: people producing shared goods for a community or broader public. These projects build on histories of community fundraising and resource pooling that long predate the Internet; what’s changed is that we’ve created a scalable, portable platform model to carry out these existing practices.
So how is civic crowdfunding doing? When I started this project, very few people were using that term. No one had done any aggregated data collection and published it. So I decided to take on that task. I collected data on 1,224 projects between 2010 and March 2014, which raised $10.74 million in just over three years. I focused on seven platforms: Catarse (Brazil), Citizinvestor (US), Goteo (Spain), IOBY (US), Kickstarter (US), Neighbor.ly (US) and Spacehive (UK). I didn’t collect everything. …
Here are four things I found out about civic crowdfunding.

  1. Civic crowdfunding is small-scale but relatively successful, and it has big ambitions. Currently the average civic crowdfunding project is small in scale: $6,357 is the median amount raised. But these civic projects seem to be doing pretty well. Projects tagged ‘civic’ on Kickstarter, for instance, succeed 81% of the time. If Civic were a separate category, it would be Kickstarter’s most successful category. Meanwhile, most platform owners and some incumbent institutions see civic crowdfunding as a new mechanism for public-private partnerships capable of realizing large-scale projects. In a small minority of cases, such as the three edge-case projects I explored in Chapter 3 of my thesis, civic crowdfunding has begun to fulfill some of those ambitions. For the center of gravity to shift further in the direction of these potential outcomes, though, existing institutions, including government, large non-profits and the for-profit sector, will need to engage more comprehensively with the process.
  2. Civic crowdfunding started as a hobby for green space projects by local non-profits, but larger organizations are getting involved. Almost a third of campaigners are using civic crowdfunding platforms for park and garden-related projects (29%). Event-based projects, and education and training, are also popular. Sports and mobility projects are pretty uncommon. Garden and park projects are common partly because they are not capital intensive and they’re uncontroversial. That’s also changing. Organizations, from governments to corporations and large foundations, are exploring ways to support crowdfunding for a much wider range of community-facing activities. Their modes of engagement include publicizing campaigns, match-funding campaigns on an ad-hoc basis, running their own campaigns and even building new platforms from the ground up.
  3. Civic crowdfunding is concentrated in cities (especially those where platforms are based). The genre is too new to have spread very effectively, it seems. Five states account for 80% of the projects, and this is partly a function of where the platforms are located. New York and California are the top two, followed by Illinois and Oregon. We know there’s a strong trend towards big cities. It’s hard work for communities to use crowdfunding to get projects off the ground, especially when it’s an unfamiliar process. The platforms have played a critical role in building participants’ understanding of crowdfunding and supporting them through the process.
  4. Civic crowdfunding has the same highly unequal distributional tendencies as other crowd markets. When we look at the size distribution of projects, the first thing we notice is something close to a Pareto distribution, or Long Tail. Most projects are small-scale, but a small number of high-value projects have taken a large share of the total revenue raised by civic crowdfunding. We shouldn’t be surprised by this. On Kickstarter most successful projects raise between $5,000 and $10,000, and 47% of the civic projects I studied are in the same bracket. The problem is that we tend to remember the outliers, such as Veronica Mars and Spike Lee, because they show what’s possible. But they are still the outliers.

Now, here are two things we don’t know.

  1. Will civic crowdfunding deter public investment or encourage it?
  2. Will civic crowdfunding widen wealth gaps?”
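The long-tail pattern described in point 4 above is easy to illustrate. The sketch below uses invented project amounts drawn from a heavy-tailed distribution — not Davies’s actual dataset — to show how a handful of large campaigns can dominate total revenue even while the typical project stays in the low thousands of dollars.

```python
# Illustrative only: synthetic project sizes, not the thesis dataset.
import random

random.seed(42)

# Simulate 1,000 campaign totals from a Pareto-like (heavy-tailed) distribution,
# scaled so typical projects land in the low thousands of dollars.
projects = sorted((round(3000 * random.paretovariate(1.2)) for _ in range(1000)),
                  reverse=True)

total = sum(projects)
median = projects[len(projects) // 2]
top_5_percent_share = sum(projects[: len(projects) // 20]) / total

print(f"median raise:          ${median:,}")
print(f"total raised:          ${total:,}")
print(f"revenue share, top 5%: {top_5_percent_share:.0%}")
```

With a tail this heavy, a few Veronica Mars-scale outliers account for much of the headline revenue, which is why aggregate totals say little about what the typical campaign can expect to raise.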

Conceptualizing Open Data ecosystems: A timeline analysis of Open Data development in the UK


New paper by Tom Heath et al: “In this paper, we conceptualize Open Data ecosystems by analysing the major stakeholders in the UK. The conceptualization is based on a review of popular Open Data definitions and business ecosystem theories, which we applied to empirical data using a timeline analysis. Our work is informed by a combination of discourse analysis and in-depth interviews, undertaken during the summer of 2013. Drawing on the UK as a best practice example, we identify a set of structural business ecosystem properties: circular flow of resources, sustainability, demand that encourages supply, and dependence developing between suppliers, intermediaries, and users. However, significant gaps and shortcomings are found to remain. Most prominently, demand is not yet fully encouraging supply and actors have yet to experience fully mutual interdependence.”

Rethinking Personal Data: A New Lens for Strengthening Trust


New report from the World Economic Forum: “As we look at the dynamic change shaping today’s data-driven world, one thing is becoming increasingly clear. We really do not know that much about it. Polarized along competing but fundamental principles, the global dialogue on personal data is inchoate and pulled in a variety of directions. It is complicated, conflated and often fueled by emotional reactions more than informed understandings.
The World Economic Forum’s global dialogue on personal data seeks to cut through this complexity. A multi-year initiative with global insights from the highest levels of leadership from industry, governments, civil society and academia, this work aims to articulate an ascendant vision of the value a balanced and human-centred personal data ecosystem can create.
Yet despite these aspirations, there is a crisis in trust. Concerns are voiced from a variety of viewpoints at a variety of scales. Industry, government and civil society are all uncertain on how to create a personal data ecosystem that is adaptive, reliable, trustworthy and fair.
The shared anxieties stem from the overwhelming challenge of transitioning into a hyperconnected world. The growth of data, the sophistication of ubiquitous computing and the borderless flow of data are all outstripping the ability to effectively govern on a global basis. We need the means to effectively uphold fundamental principles in ways fit for today’s world.
Yet despite the size and scope of the complexity, it cannot become a reason for inaction. The need for pragmatic and scalable approaches which strengthen transparency, accountability and the empowerment of individuals has become a global priority.
Tools are needed to answer fundamental questions: Who has the data? Where is the data? What is being done with it? All of these uncertainties need to be addressed for meaningful progress to occur.
Objectives need to be set. The benefits and harms of using personal data need to be more precisely defined. The ambiguity surrounding privacy needs to be demystified and placed into a real-world context.
Individuals need to be meaningfully empowered. Better engagement over how data is used by third parties is one opportunity for strengthening trust. Supporting the ability for individuals to use personal data for their own purposes is another area for innovation and growth. But combined, the overall lack of engagement is undermining trust.
Collaboration is essential. The need for interdisciplinary collaboration between technologists, business leaders, social scientists, economists and policy-makers is vital. The complexities for delivering a sustainable and balanced personal data ecosystem require that these multifaceted perspectives are all taken into consideration.
With a new lens for using personal data, progress can occur.

Figure 1: A new lens for strengthening trust
Source: World Economic Forum

The solutions to all our problems may be buried in PDFs that nobody reads


Christopher Ingraham at the Washington Post: “What if someone had already figured out the answers to the world’s most pressing policy problems, but those solutions were buried deep in a PDF, somewhere nobody will ever read them?
According to a recent report by the World Bank, that scenario is not so far-fetched. The bank is one of those high-minded organizations — Washington is full of them — that release hundreds, maybe thousands, of reports a year on policy issues big and small. Many of these reports are long and highly technical, and just about all of them get released to the world as a PDF report posted to the organization’s Web site.
The World Bank recently decided to ask an important question: Is anyone actually reading these things? They dug into their Web site traffic data and came to the following conclusions: Nearly one-third of their PDF reports had never been downloaded, not even once. Another 40 percent of their reports had been downloaded fewer than 100 times. Only 13 percent had seen more than 250 downloads in their lifetimes. Since most World Bank reports have a stated objective of informing public debate or government policy, this seems like a pretty lousy track record.
Now, granted, the bank isn’t Buzzfeed. It wouldn’t be reasonable to expect thousands of downloads for reports with titles like “Detecting Urban Expansion and Land Tenure Security Assessment: The Case of Bahir Dar and Debre Markos Peri-Urban Areas of Ethiopia.” Moreover, downloads aren’t the be-all and end-all of information dissemination; many of these reports probably get some distribution by e-mail, or are printed and handed out at conferences. Still, it’s fair to assume that many big-idea reports with lofty goals to elevate the public discourse never get read by anyone other than the report writer and maybe an editor or two. Maybe the author’s spouse. Or mom.
I’m not picking on the World Bank here. In fact, they’re to be commended, strongly, for not only taking a serious look at the question but making their findings public for the rest of us to learn from. And don’t think for a second that this is just a World Bank problem. PDF reports are basically the bread and butter of Washington’s huge think tank industry, for instance. Every single one of these groups should be taking a serious look at their own PDF analytics the way the bank has.
Government agencies are also addicted to the PDF. As The Washington Post’s David Fahrenthold reported this week, federal agencies spend thousands of dollars and employee-hours each year producing Congressionally-mandated reports that nobody reads. And let’s not even get started on the situation in academia, where the country’s best and brightest compete for the honor of seeing their life’s work locked away behind some publisher’s paywall.
Not every policy report is going to be a game-changer, of course. But the sheer numbers dictate that there are probably a lot of really, really good ideas out there that never see the light of day. This seems like an inefficient way for the policy community to do business, but what’s the alternative?
One final irony to ponder: You know that World Bank report, about how nobody reads its PDFs? It’s only available as a PDF. Given the attention it’s receiving, it may also be one of their most-downloaded reports ever.”

Working Together in a Networked Economy


Yochai Benkler at MIT Technology Review on Distributed Innovation and Creativity, Peer Production, and Commons in a Networked Economy: “A decade ago, Wikipedia and open-source software were treated as mere curiosities in business circles. Today, these innovations represent a core challenge to how we have thought about property and contract, organization theory and management, over the past 150 years.
For the first time since before the Industrial Revolution, the most important inputs into some of the most important economic sectors are radically distributed in the population, and the core capital resources necessary for these economic activities have become widely available in wealthy countries and among the wealthier populations of emerging economies. This technological feasibility of social production generally, and peer production — the kind of network collaboration of which Wikipedia is the most prominent example — more specifically, is interacting with the high rate of change and the escalating complexity of global innovation and production systems.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation models that allow more fluid flows of information, talent, and projects across organizations.
Peer production, the most significant organizational innovation that has emerged from Internet-mediated social practice, is large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have produced on their own. Organizationally, it combines three core characteristics: decentralization of conception and execution of problems and solutions; harnessing of diverse motivations; and separation of governance and management from property and contract.
These characteristics make peer production highly adept at experimentation, innovation, and adaptation in changing and complex environments. If the Web was innovation on a commons-based model — allocating access and use rights in resources without giving anyone exclusive rights to exclude anyone else — Wikipedia’s organizational innovation is in problem-solving.
Wikipedia’s user-generated content model incorporates knowledge that simply cannot be managed well, either because it is tacit knowledge (possessed by individuals but difficult to communicate to others) or because it is spread among too many people to contract for. The user-generated content model also permits organizations to explore a space of highly diverse interests and tastes that was too costly for traditional organizations to explore.
Peer production allows a diverse range of people, regardless of affiliation, to dynamically assess and reassess available resources, projects, and potential collaborators and to self-assign to projects and collaborations. By leaving these elements to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, and its benefits are sufficient that the practice has been widely adopted by firms and even governments.
In a networked information economy, commons-based practices and open innovation provide an evolutionary model typified by repeated experimentation and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom of these two approaches and the manageability and appropriability that many more-traditional organizations seek to preserve. Some firms are using competitions and prizes to diversify the range of people who work on their problems, without ceding contractual control over the project. Many corporations are participating in networks of firms engaging in a range of open collaborative innovation practices with a more manageable set of people, resources, and projects to work with than a fully open-to-the-world project. And the innovation clusters anchored around universities represent an entrepreneurial model at the edge of academia and business, in which academia allows for investment in highly uncertain innovation, and the firms allow for high-risk, high-reward investment models.

To read the full article, click here.

Continued Progress and Plans for Open Government Data


Steve VanRoekel and Todd Park at the White House: “One year ago today, President Obama signed an executive order that made open and machine-readable data the new default for government information. This historic step is helping to make government-held data more accessible to the public and to entrepreneurs while appropriately safeguarding sensitive information and rigorously protecting privacy.
Freely available data from the U.S. government is an important national resource, serving as fuel for entrepreneurship, innovation, scientific discovery, and economic growth. Making information about government operations more readily available and useful is also core to the promise of a more efficient and transparent government. This initiative is a key component of the President’s Management Agenda and our efforts to ensure the government is acting as an engine to expand economic growth and opportunity for all Americans. The Administration is committed to driving further progress in this area, including by designating Open Data as one of our key Cross-Agency Priority Goals.
Over the past few years, the Administration has launched a number of Open Data Initiatives aimed at scaling up open data efforts across the Health, Energy, Climate, Education, Finance, Public Safety, and Global Development sectors. The White House has also launched Project Open Data, designed to share best practices, examples, and software code to assist federal agencies with opening data. These efforts have helped unlock troves of valuable data—that taxpayers have already paid for—and are making these resources more open and accessible to innovators and the public.
Other countries are also opening up their data. In June 2013, President Obama and other G7 leaders endorsed the Open Data Charter, in which the United States committed to publish a roadmap for our nation’s approach to releasing and improving government data for the public.
Building upon the Administration’s Open Data progress, and in fulfillment of the Open Data Charter, today we are excited to release the U.S. Open Data Action Plan. The plan includes a number of exciting enhancements and new data releases planned in 2014 and 2015, including:

  • Small Business Data: The Small Business Administration’s (SBA) database of small business suppliers will be enhanced so that software developers can create tools to help manufacturers more easily find qualified U.S. suppliers, ultimately reducing the transaction costs to source products and manufacture domestically.
  • Smithsonian American Art Museum Collection: The Smithsonian American Art Museum’s entire digitized collection will be opened to software developers to make educational apps and tools. Today, even museum curators do not have easily accessible information about their art collections. This information will soon be available to everyone.
  • FDA Adverse Drug Event Data: Each year, healthcare professionals and consumers submit millions of individual reports on drug safety to the Food and Drug Administration (FDA). These anonymous reports are a critical tool to support drug safety surveillance. Today, this data is only available through limited quarterly reports. But the Administration will soon be making these reports available in their entirety so that software developers can build tools to help pull potentially dangerous drugs off shelves faster than ever before.

We look forward to implementing the U.S. Open Data Action Plan, and to continuing to work with our partner countries in the G7 to take the open data movement global”.

Can Big Data Stop Wars Before They Happen?


Foreign Policy: “It has been almost exactly two decades since conflict prevention shot to the top of the peace-building agenda, as large-scale killings shifted from interstate wars to intrastate and intergroup conflicts. What could we have done to anticipate and prevent the 100 days of genocidal killing in Rwanda that began in April 1994 or the massacre of thousands of Bosnian Muslims at Srebrenica just over a year later? The international community recognized that conflict prevention could no longer be limited to diplomatic and military initiatives, but that it also requires earlier intervention to address the causes of violence between nonstate actors, including tribal, religious, economic, and resource-based tensions.
For years, even as it was pursued as doggedly as personnel and funding allowed, early intervention remained elusive, a kind of Holy Grail for peace-builders. This might finally be changing. The rise of data on social dynamics and what people think and feel — obtained through social media, SMS questionnaires, increasingly comprehensive satellite information, news-scraping apps, and more — has given the peace-building field hope of harnessing a new vision of the world. But to cash in on that hope, we first need to figure out how to understand all the numbers and charts and figures now available to us. Only then can we expect to predict and prevent events like the recent massacres in South Sudan or the ongoing violence in the Central African Republic.
A growing number of initiatives have tried to make it across the bridge between data and understanding. They’ve ranged from small nonprofit shops of a few people to massive government-funded institutions, and they’ve been moving forward in fits and starts. Few of these initiatives have been successful in documenting incidents of violence actually averted or stopped. Sometimes that’s simply because violence or absence of it isn’t verifiable. The growing literature on big data and conflict prevention today is replete with caveats about “overpromising and underdelivering” and the persistent gap between early warning and early action. In the case of the Conflict Early Warning and Response Mechanism (CEWARN) system in central Africa — one of the earlier and most prominent attempts at early intervention — it is widely accepted that the project largely failed to use the data it retrieved for effective conflict management. It relied heavily on technology to produce large databases, while lacking the personnel to effectively analyze them or take meaningful early action.
To be sure, disappointments are to be expected when breaking new ground. But they don’t have to continue forever. This pioneering work demands not just data and technology expertise. Also critical is cross-discipline collaboration between the data experts and the conflict experts, who know intimately the social, political, and geographic terrain of different locations. What was once a clash of cultures over the value and meaning of metrics when it comes to complex human dynamics needs to morph into collaboration. This is still pretty rare, but if the past decade’s innovations are any prologue, we are hopefully headed in the right direction.
* * *
Over the last three years, the U.S. Defense Department, the United Nations, and the CIA have all launched programs to parse the masses of public data now available, scraping and analyzing details from social media, blogs, market data, and myriad other sources to achieve variations of the same goal: anticipating when and where conflict might arise. The Defense Department’s Information Volume and Velocity program is designed to use “pattern recognition to detect trends in a sea of unstructured data” that would point to growing instability. The U.N.’s Global Pulse initiative’s stated goal is to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.” The Open Source Indicators program at the CIA’s Intelligence Advanced Research Projects Activity aims to anticipate “political crises, disease outbreaks, economic instability, resource shortages, and natural disasters.” Each looks to the growing stream of public data to detect significant population-level changes.
Large institutions with deep pockets have always been at the forefront of efforts in the international security field to design systems for improving data-driven decision-making. They’ve followed the lead of large private-sector organizations where data and analytics rose to the top of the corporate agenda. (In that sector, the data revolution is promising “to transform the way many companies do business, delivering performance improvements not seen since the redesign of core processes in the 1990s,” as David Court, a director at consulting firm McKinsey, has put it.)
What really defines the recent data revolution in peace-building, however, is that it is transcending size and resource limitations. It is finding its way to small organizations operating at local levels and using knowledge and subject experts to parse information from the ground. It is transforming the way peace-builders do business, delivering data-led programs and evidence-based decision-making not seen since the field’s inception in the latter half of the 20th century.
One of the most famous recent examples is the 2013 Kenyan presidential election.
In March 2013, the world was watching and waiting to see whether the vote would produce more of the violence that had left at least 1,300 people dead and 600,000 homeless during and after the 2007 elections. In the intervening years, a web of NGOs worked to set up early-warning and early-response mechanisms to defuse tribal rivalries, party passions, and rumor-mongering. Many of the projects were technology-based initiatives trying to leverage data sources in new ways — including a collaborative effort spearheaded and facilitated by a Kenyan nonprofit called Ushahidi (“witness” in Swahili) that designs open-source data collection and mapping software. The Umati (meaning “crowd”) project used an Ushahidi program to monitor media reports, tweets, and blog posts to detect rising tensions, frustration, calls to violence, and hate speech — and then sorted and categorized it all on one central platform. The information fed into election-monitoring maps built by the Ushahidi team, while mobile-phone provider Safaricom donated 50 million text messages to a local peace-building organization, Sisi ni Amani (“We are Peace”), so that it could act on the information by sending texts — which had been used to incite and fuel violence during the 2007 elections — aimed at preventing violence and quelling rumors.
The first challenges came around 10 a.m. on the opening day of voting. “Rowdy youth overpowered police at a polling station in Dandora Phase 4,” one of the informal settlements in Nairobi that had been a site of violence in 2007, wrote Neelam Verjee, programs manager at Sisi ni Amani. The young men were blocking others from voting, and “the situation was tense.”
Sisi ni Amani sent a text blast to its subscribers: “When we maintain peace, we will have joy & be happy to spend time with friends & family but violence spoils all these good things. Tudumishe amani [“Maintain the peace”] Phase 4.” Meanwhile, security officers, who had been called separately, arrived at the scene and took control of the polling station. Voting resumed with little violence. According to interviews collected by Sisi ni Amani after the vote, the message “was sent at the right time” and “helped to calm down the situation.”
In many ways, Kenya’s experience is the story of peace-building today: Data is changing the way professionals in the field think about anticipating events, planning interventions, and assessing what worked and what didn’t. But it also underscores the possibility that we might be edging closer to a time when peace-builders at every level and in all sectors — international, state, and local, governmental and not — will have mechanisms both to know about brewing violence and to save lives by acting on that knowledge.
Three important trends underlie the optimism. The first is the sheer amount of data that we’re generating. In 2012, humans plugged into digital devices managed to generate more data in a single year than over the course of world history — and that rate more than doubles every year. As of 2012, 2.4 billion people — 34 percent of the world’s population — had a direct Internet connection. The growth is most stunning in regions like the Middle East and Africa where conflict abounds; access has grown 2,634 percent and 3,607 percent, respectively, in the last decade.
The growth of mobile-phone subscriptions, which allow their owners to be part of new data sources without a direct Internet connection, is also staggering. In 2013, there were almost as many cell-phone subscriptions in the world as there were people. In Africa, there were 63 subscriptions per 100 people, and there were 105 per 100 people in the Arab states.
The second trend has to do with our expanded capacity to collect and crunch data. Not only do we have more computing power enabling us to produce enormous new data sets — such as the Global Database of Events, Language, and Tone (GDELT) project, which tracks almost 300 million conflict-relevant events reported in the media between 1979 and today — but we are also developing more-sophisticated methodological approaches to using these data as raw material for conflict prediction. New machine-learning methodologies, which use algorithms to make predictions (like a spam filter, but much, much more advanced), can provide “substantial improvements in accuracy and performance” in anticipating violent outbreaks, according to Chris Perry, a data scientist at the International Peace Institute.
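To make the spam-filter analogy concrete, here is a minimal sketch of the kind of supervised classifier Perry is describing: a model trained on event-count and tone features (the sort of signals a GDELT-style feed provides) to score the risk of escalation. The features, data and model choice are illustrative assumptions, not the methodology of any of the programs named above.

```python
# Toy escalation classifier -- an illustration, not any agency's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic weekly features per region: protest counts, violent-event counts,
# and average media tone (more negative = more hostile coverage).
protests = rng.poisson(5, n)
violent_events = rng.poisson(2, n)
media_tone = rng.normal(-1.0, 2.0, n)
X = np.column_stack([protests, violent_events, media_tone])

# Synthetic label: did violence escalate in the following week?
risk = 0.3 * protests + 0.8 * violent_events - 0.5 * media_tone
escalated = (risk + rng.normal(0, 2, n) > 6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, escalated, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("escalation risk for a quiet week:",
      round(model.predict_proba([[2, 0, 1.0]])[0, 1], 2))
```

In practice the hard part is not fitting the model but labelling the outcomes: violence that was averted is rarely observable, which is exactly the verification gap the article describes.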
This brings us to the third trend: the nature of the data itself. When it comes to conflict prevention and peace-building, progress is not simply a question of “more” data, but also different data. For the first time, digital media — user-generated content and online social networks in particular — tell us not just what is going on, but also what people think about the things that are going on. Excitement in the peace-building field centers on the possibility that we can tap into data sets to understand, and preempt, the human sentiment that underlies violent conflict.
Realizing the full potential of these three trends means figuring out how to distinguish between the information, which abounds, and the insights, which are actionable. It is a distinction that is especially hard to make because it requires cross-discipline expertise that combines the wherewithal of data scientists with that of social scientists and the knowledge of technologists with the insights of conflict experts.

How Britain’s Getting Public Policy Down to a Science


In Governing: “Britain has a bold yet simple plan to do something few U.S. governments do: test the effectiveness of multiple policies before rolling them out. But are American lawmakers willing to listen to facts more than money or politics?

In medicine they do clinical trials to determine whether a new drug works. In business they use focus groups to help with product development. In Hollywood they field test various endings for movies in order to pick the one audiences like best. In the world of public policy? Well, to hear members of the United Kingdom’s Behavioural Insights Team (BIT) characterize it, those making laws and policies in the public sector tend to operate on some well-meaning mix of whim, hunch and dice roll, which all too often leads to expensive and ineffective (if not downright harmful) policy decisions.

….One of the prime BIT examples for why facts and not intuition ought to drive policy hails from the U.S. The much-vaunted “Scared Straight” program that swept the U.S. in the 1990s involved shepherding at-risk youth into maximum security prisons. There, they would be confronted by inmates who, presumably, would do the scaring while the visiting juveniles would do the straightening out. Scared Straight seemed like a good idea — let at-risk youth see up close and personal what was in store for them if they continued their wayward ways. Initially the results reported seemed not just good, but great. Programs were reporting “success rates” as high as 94 percent, which inspired other countries, including the U.K., to adopt Scared Straight-like programs.

The problem was that none of the program evaluations included a control group — a group of kids in similar circumstances with similar backgrounds who didn’t go through a Scared Straight program. There was no way to see how they would fare absent the experience. Eventually, a more scientific analysis of seven U.S. Scared Straight programs was conducted. Half of the at-risk youth in the study were left to their own devices and half were put through the program. This led to an alarming discovery: Kids who went through Scared Straight were more likely to offend than kids who skipped it — or, more precisely, who were spared it. The BIT concluded that “the costs associated with the programme (largely related to the increase in reoffending rates) were over 30 times higher than the benefits, meaning that ‘Scared Straight’ programmes cost the taxpayer a significant amount of money and actively increased crime.”
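The arithmetic behind that finding is worth spelling out. The sketch below uses invented reoffending rates — not the figures from the actual meta-analysis — to show why the control group matters: the program’s effect is simply the gap between the two randomly assigned arms, and a program can report an impressive “success rate” in isolation while still doing worse than doing nothing.

```python
# Hypothetical numbers for illustration -- not the actual evaluation data.
from math import sqrt

# Reoffending counts in each randomly assigned arm.
treated_n, treated_reoffend = 400, 168   # went through the program
control_n, control_reoffend = 400, 132   # left to their own devices

p_t = treated_reoffend / treated_n
p_c = control_reoffend / control_n
effect = p_t - p_c

# Two-proportion z-test: is the gap larger than chance alone would explain?
p_pool = (treated_reoffend + control_reoffend) / (treated_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
z = effect / se

print(f"reoffending, program group: {p_t:.0%}")
print(f"reoffending, control group: {p_c:.0%}")
print(f"estimated effect: {effect:+.0%} (z = {z:.1f})")
```

Without the control arm, the baseline reoffending rate is invisible, which is how a headline “success rate” can coexist with a program that actively increases crime.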

It was witnessing such random acts of policymaking that in 2010 inspired a small group of political and social scientists to set up the Behavioural Insights Team. Originally a small “skunk works” tucked away in the U.K. Treasury Department, the team gained traction under Prime Minister David Cameron, who took office evincing a keen interest in both “nonregulatory solutions to policy problems” and in spending public money efficiently, Service says. By way of example, he points to a business support program in the U.K. that would give small and medium-sized businesses up to £3,000 to subsidize advice from professionals. “But there was no proven link between receiving that money and improving business. We thought, ‘Wouldn’t it be better if you could first test the efficacy of some million-pound program or other, rather than just roll it out?’”

The BIT was set up as something of a policy research lab that would scientifically test multiple approaches to a public policy problem on a limited, controlled basis through “randomized controlled trials.” That is, it would look at multiple ways to skin the cat before writing the final cat-skinning manual. By comparing the results of various approaches — efforts to boost tax compliance, say, or to move people from welfare to work — policymakers could use the results of the trials to home in on the most effective practices before full-scale rollout.

The various program and policy options that are field tested by the BIT aren’t pie-in-the-sky surmises, which is where the “behavioural” piece of the equation comes in. Before settling on what options to test, the BIT takes into account basic human behavior — what motivates us and what turns us off — and then develops several approaches to a policy problem based on actual social science and psychology.

The approach seems to work. Take, for example, the issue of recruiting organ donors. It can be a touchy topic, suggesting one’s own mortality while also conjuring up unsettling images of getting carved up and parceled out by surgeons. It’s no wonder, then, that while nine out of 10 people in England profess to support organ donations, fewer than one in three are officially registered as donors. To increase the U.K.’s ratio, the BIT decided to play around with the standard recruitment message posted on a high-traffic gov.uk website that encourages people to sign up with the national Organ Donor Register (see “‘Please Help Others,’” page 18). Seven different messages that varied in approach and tone were tested, and at the end of the trial, one message emerged clearly as the most effective — so effective, in fact, that the BIT concluded that “if the best-performing message were to be used over the whole year, it would lead to approximately 96,000 extra registrations completed.”

According to the BIT there are nine key steps to a defensible randomized controlled trial, the first and second — and the two most obvious — being that there must be at least two policy interventions to compare and that the outcome the policies are meant to influence must be clear. But the “randomized” factor in the equation is critical, and it’s not necessarily easy to achieve.

In BIT-speak, “randomization units” can range from individuals (randomly chosen clients) entering the same welfare office but experiencing different interventions, to different groups of clientele or even different institutions like schools or congregate care facilities. The important point is to be sure that the groups or institutions chosen for comparison are operating in circumstances and with clientele similar enough so that researchers can confidently say that any differences in outcomes are due to different policy interventions and not other socioeconomic or cultural exigencies. There are also minimum sampling sizes that ensure legitimacy — essentially, the more the merrier.
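The “more the merrier” point can be made precise with a standard sample-size calculation. The function below uses the generic two-proportion formula — an illustration, not BIT’s own procedure — to estimate how many randomization units each arm needs before a few-percentage-point lift, of the kind the tax-compliance trials produced, can be reliably detected.

```python
# Generic two-proportion sample-size formula -- illustrative, not BIT's method.
from math import ceil

def n_per_arm(p_control: float, p_treatment: float) -> int:
    """Units needed in each arm to detect the gap between two response rates
    at 5% significance (two-sided) with 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2)

# Example: 60% of taxpayers already pay on time; a new letter hopes to lift that to 63%.
print(n_per_arm(0.60, 0.63))   # about 4,100 letters in each arm
```

Detecting small effects takes thousands of units per arm, which is one reason these trials tend to run on high-volume administrative processes such as tax letters rather than on rare events.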

As a matter of popular political culture, the BIT’s approach is known as “nudge theory,” a strand of behavioral economics based on the notion that the economic decisions that human beings make are just that — human — and that by tuning into what motivates and appeals to people we can much better understand why those economic decisions are made. In market economics, of course, nudge theory helps businesses tune into customer motivation. In public policy, nudge theory involves figuring out ways to motivate people to do what’s best for themselves, their families, their neighborhoods and society.

When the BIT started playing around with ways to improve tax compliance, for example, the group discovered a range of strategies to do that, from the very obvious approach — make compliance easy — to the more behaviorally complex. The idea was to key in on the sorts of messages to send to taxpayers that will resonate and improve voluntary compliance. The results can be impressive. “If you just tell taxpayers that the majority of folks in their area pay their taxes on time [versus sending out dunning letters],” says the BIT’s Service, “that adds 3 percent more people who pay, bringing in millions of pounds.” Another randomized controlled trial showed that in pestering citizens to pay various fines, personal text messages were more effective than letters.

There has been pushback on using randomized controlled trials to develop policy. Some see it as a nefarious attempt at mind control on the part of government. “Nudge” to some seems to mean “manipulate.” Service bridles at the criticism. “We’re sometimes referred to as ‘the Nudge Team,’ but we’re the ‘Behavioural Insights Team’ because we’re interested in human behavior, not mind control.”

The essence of the philosophy, Service adds, is “leading people to do the right thing.” For those interested in launching BIT-like efforts without engendering immediate ideological resistance, he suggests focusing first on “non-headline-grabbing” policy areas such as tax collection or organ donation that can be launched through administrative fiat.”

Is Your City’s Crime Data Private Property?


Adam Wisnieski at the Crime Report: “In February, the Minneapolis Police Department (MPD) announced it was moving into a new era of transparency and openness with the launch of a new public crime map.
“Crime analysis and mapping data is now in the hands of the city’s citizens,” reads the first line of the press release.
According to the release, the MPD will feed incident report data to RAIDS (Regional Analysis and Information Data Sharing) Online, a nationwide crime map operated by crime analysis software company BAIR Analytics.
Since the announcement, Minneapolis residents have used RAIDS to look at reports of murder, robbery, burglary, assault, rape and other crimes reported in their neighborhoods on a sleek, easy-to-use map, which includes data as recent as yesterday.
On the surface, it’s a major leap forward for transparency in Minneapolis. But some question why the data feed is given exclusively to a single private company.
Transparency advocates argue in fact that the data is not truly in the hands of the city’s residents until citizens can download the raw data so they can analyze, chart or map it on their own.
“For it to actually be open data, it needs to be available to the public in machine readable format,” said Lauren Reid, senior public affairs manager for Code for America, a national non-profit that promotes participation in government through technology.
“Anybody should be able to go download it and read it if they want. That’s open data.”
The Open Knowledge Foundation, a national non-profit that advocates for more government openness, argues open data is important so citizens can participate and engage government in a way that was not possible before.
“Much of the time, citizens are only able to engage with their own governance sporadically — maybe just at an election every 4 or 5 years,” reads the Open Knowledge website. “By opening up data, citizens are enabled to be much more directly informed and involved in decision-making.
“This is more than transparency: it’s about making a full ‘read/write’ society — not just about knowing what is happening in the process of governance, but being able to contribute to it.”
Minneapolis is not alone.
As Americans demand more information on criminal activity from the government, police departments are flocking to private companies to help them get the information into the public domain.
For many U.S. cities, hooking up with these third-party mapping vendors is the most public their police department has ever been. But the trend has started a messy debate about how “public” the public data actually is.
Outsourcing Makes It Easy
For police departments, outsourcing the presentation of their crime data to a private firm is an easy decision.
Most of the crime mapping sites are free or cost very little. (The Omega Group’s CrimeMapping.com charges between $600 and $2,400 per year, depending on the size of the agency.)
The department chooses what information it wants to provide. Once the system is set up, the data flows to the companies and then to the public without a lot of effort on behalf of the department.
For the most part, the move doesn’t need legislative approval, just a memorandum of understanding. A police department can even fulfill a new law requiring a public crime map by releasing report data through one of these vendors.
Commander Scott Gerlicher of the MPD’s Strategic Information and Crime Analysis Division says the software has saved the department time.
“I don’t think we are entertaining quite as many requests from the media or the public,” he told The Crime Report. “Plus the price was right: it was free.”
The companies that run some of the most popular sites — The Omega Group’s CrimeMapping.com, Public Engines’ CrimeReports and BAIR Analytics’ RAIDS — are in the business of selling crime analysis and mapping software to police departments.
Some departments buy internal software from these companies, though some cities, like Minneapolis, just use RAIDS’ free map and have no contracts with BAIR for internal software.
Susan Smith, director of operations at BAIR Analytics, said the goal of RAIDS is to create one national map that includes all crime reports from across all jurisdictions and departments (state and local police).
For people who live near or at the edge of a city line, finding relevant crime data can be hard.
The MPD’s Gerlicher said that was one reason his department chose RAIDS — because many police agencies in the Minneapolis area had already hooked up with the firm.
The operators of these crime maps say they provide a community service.
“We try to get as many agencies as we possibly can. We truly believe this is a good service for the community,” says Gabriela Coverdale, a marketing director at the Omega Group.
Raw Data ‘Off Limits’
However, the sites do not allow the public to download any of the raw data and prohibit anyone from “scraping,” using a program to automatically pull the data from their maps.
In Minneapolis, the police department continues to post PDFs and Excel spreadsheets with data, but only RAIDS gets a feed with the most recent data.
Alan Palazzolo, a Code for America fellow who works as an interactive developer for the online non-profit newspaper MinnPost, used monthly reports from the MPD to build a crime application with a map and geographic-oriented chart of crime in Minneapolis.
Nevertheless, he finds the new tool limiting.
“[The MPD’s] ability to actually put out more data, and more timely data, really opens things up,” he said. “It’s great, but they are not doing that with us.”
According to Palazzolo, the arrangement gives BAIR a market advantage and effectively prevents the data from being used for purposes the company cannot control.
“Having granular, complete, and historical data would allow us to do more in-depth analysis,” wrote Palazzolo and Kaeti Hinck in an article in MinnPost last year.
“Granular data would allow us to look at smaller areas,” reads the article. “[N]eighborhoods are a somewhat arbitrary boundary when it comes to crime. Often high crime is isolated to a couple of blocks, but aggregated data does not allow us to explore this.
“More complete data would allow us to look at factors like exact locations, time of day, demographic issues, and detailed categories (like bike theft).”
The question of preference gets even messier when looking at another national crime mapping website called SpotCrime.
Unlike the other third-party mapping sites, SpotCrime is not in the business of selling crime analysis software to police departments. It operates more like a newspaper — a newspaper focused solely on the police blotter pages — and makes money off advertising.
Years ago, SpotCrime requested and received crime report data via e-mail from the Minneapolis Police Department and mapped the data on its website. According to SpotCrime owner Colin Drane, the MPD stopped sending e-mails when terminals were set up in the police department for the public to access the data.
So he instead started going through the painstaking process of transferring data from PDFs the MPD posted online and mapping them.
When the MPD hooked up with RAIDS in February, Drane asked for the same feed and was denied. He says more and more police departments around the country are hooking up with one of his competitors and not giving him the same timely data.
The MPD said it prefers RAIDS over SpotCrime and criticized some of the advertisements on SpotCrime.
“We’re not about supporting ad money,” said Gerlicher.
Drane believes all crime data in every city should be open to everyone, in order to prevent any single firm from monopolizing how the information is presented and used.
“The onus needs to be on the public agencies,” he adds. “They need to be fair with the data and they need to be fair with the public.”
Transparency advocates worry that the trend is going in the opposite direction.
Ohio’s Columbus Police Department, for example, recently discontinued its public crime statistic feed and started giving the data exclusively to RAIDS.
The Columbus Dispatch wrote that the new system had less information than the old…”

Sharing in a Changing Climate


Helen Goulden in the Huffington Post: “Every month, a social research agency conducts a public opinion survey of 30,000 UK households. As part of this, households are asked what issues they think are the most important: things such as crime, unemployment, inequality, public health etc. Climate change has ranked so consistently low on these surveys that they don’t bother asking any more.
On first glance, it would appear that most people don’t care about a changing climate.
Yet, that’s simply not true. Many people care deeply, but fleetingly – in the same way they may consider their own mortality before getting back to thinking about what to have for tea. And others care, but fail to change their behaviour in a way that’s proportionate to their concerns. Certainly that’s my unhappy stomping ground.
Besides, what choices do we really have? Even the most progressive, large organisations have been glacially slow to move towards any real form of sustainability. For many years we have struggled with the Frankenstein-like task of stitching ‘sustainability’ onto existing business and economic models and the results, I think, speak for themselves.
That the Collaborative Economy presents us with an opportunity – in Napster-like ways – to disrupt and evolve toward something more sustainable is a compelling idea. It invites us to look out to a future filled with opportunities to reconfigure how we produce, consume and dispose of the things we want and need to live, work and play.
Whether the journey toward sustainability is short or long, it will be punctuated with a good degree of turbulence, disruption and some largely unpredictable events. How we deal with those events and what role communities, collaboration and technology play may set the framework and tone for how that future evolves. Crises and disruption to our entrenched living patterns present ripe opportunities for innovation and space for adopting new behaviours and practices.
No-one is immune from the impact of erratic and extreme weather events. And if we accept that these events are going to increase in frequency, we must draw the conclusion that emergency state and government resources may be drawn more thinly over time.
Across the world, there is a fairly well organised state and international infrastructure for dealing with emergencies, involving everyone from the Disaster Emergency Committee, the UN, central and local government and municipalities, not for profit organisations and of course, the military. There is a clear reason why we need this kind of state emergency response; I’m not suggesting that we don’t.
But through the rise of open data and mass participation in platforms that share location, identity and inventory, we are creating a new kind of mesh; a social and technological infrastructure that could considerably strengthen our ability to respond to unpredictable events.
In the last few years we have seen a sharp rise in the number of tools and crowdsourcing platforms and open source sensor networks that are focused on observing, predicting or responding to extreme events:
• Apps like Shake Alert, which give a minute’s warning that an earthquake is coming
• Rio’s sensor network, which measures rainfall outside the city and can predict flooding
• Open Source sensor software Arduino which is being used to crowd-source weather and pollution data
• Propeller Health, which is using asthma sensors on inhalers to crowd-source pollution hotspots
• Safecast, which was developed for crowdsourcing radiation levels in Japan
Increasingly we have the ability to deploy open source, distributed and networked sensors and devices for capturing and aggregating data that can help us manage our responses to extreme weather (and indeed, other kinds of) events.
Look at platforms like LocalMind and Foursquare. Today, I might be using them to find out whether there’s a free table at a bar or what restaurant my friends are in. But these kinds of social locative platforms present an infrastructure that could be life-saving in any situation where you need to know where to go quickly to get out of trouble. We know that in the wake of disruptive events and disasters, like bombings and riots, people now intuitively and instinctively take to technology to find out what’s happening, where to go and how to co-ordinate response efforts.
During the 2013 BART strike in San Francisco, ventures like Liquid Space and SideCar enabled people to quickly find alternative places to work, or alternatives to public transport, to mitigate the inconvenience of the strike. The strike was a minor inconvenience compared to the impact of a hurricane or flood but nevertheless, in both those instances, ventures decided to waive their fees, as did AirBnB when 1,400 New York AirBnB hosts opened their doors to people who had been left homeless through Hurricane Sandy in 2012.
The impulse to help is not new. The matching of people’s offers of help and resources to on-the-ground need, in real time, is.”