Transparency, legitimacy and trust


John Kamensky at Federal Times: “The Open Government movement has captured the imagination of many around the world as a way of increasing transparency, participation, and accountability. In the US, many federal, state, and local Open Government initiatives have demonstrated positive results for citizens here and abroad. In fact, the White House’s science advisors released a refreshed Open Government plan in early June.
However, a recent study in Sweden says the benefits of transparency may vary, and may have little impact on citizens’ perception of legitimacy and trust in government. This research offers important lessons on how public managers should design transparency strategies, and how those strategies work under various conditions.
Jenny de Fine Licht, a scholar at the University of Gothenburg in Sweden, offers a more nuanced view of the influence of transparency in political decision making on public legitimacy and trust, in a paper that appears in the current issue of “Public Administration Review.” Her research challenges the assumption of many in the Open Government movement that greater transparency necessarily leads to greater citizen trust in government.
Her conclusion, based on an experiment involving over 1,000 participants, was that the type and degree of transparency “has different effects in different policy areas.” She found that “transparency is less effective in policy decisions that involve trade-offs related to questions of human life and death or well-being.”

The background

Licht says there are some policy decisions that involve what are called “taboo tradeoffs.” A taboo tradeoff would be, for example, a budget tradeoff in a policy area such as health care or environmental quality, where human life or well-being is at stake. In cases where more money is an implicit solution, the author notes, “increased transparency in these policy areas might provoke feelings of taboo, and, accordingly, decreased perceived legitimacy.”
Other scholars, such as Harvard’s Jane Mansbridge, contend that “full transparency may not always be the best practice in policy making.” Full transparency in decision-making processes would include, for example, open appropriation committee meetings. Instead, she recommends “transparency in rationale – in procedures, information, reasons, and the facts on which the reasons are based.” That is, provide a full explanation after the fact.
Licht tested the hypothesis that full transparency of the decision-making process and partial transparency via after-the-fact rationales for decisions may produce different results, depending on the policy arena involved…
Open Government advocates have generally assumed that full and open transparency is always better. Licht’s conclusion is that “greater transparency” does not necessarily increase citizens’ perceptions of legitimacy and trust. Instead, a strategy of encouraging a high degree of transparency requires a more nuanced application. While she cautions against generalizing from her experiment, the potential implications for government decision-makers could be significant.
To date, many of the various Open Government initiatives across the country have assumed a “one size fits all” approach. Licht’s conclusions, however, help explain why the results of various initiatives have diverged in terms of citizen acceptance of open decision processes.
Her experiment suggests that citizen engagement is more likely to create a greater citizen sense of legitimacy and trust in areas involving “routine” decisions, such as parks, recreation, and library services. But “taboo” decisions in policy areas involving tradeoffs of human life, safety, and well-being may not generate greater trust, even with full and open transparency of decision-making processes.
While she says that transparency – whether full or partial – is always better than no transparency, her experiment at least puts policymakers on notice that the end result may not be greater legitimacy and trust. In any case, her research should engender a more nuanced conversation among Open Government advocates at all levels of government. Increasing citizens’ perceptions of legitimacy and trust in government will take more than just advocating for Open Data!”

Let's amplify California's collective intelligence


Gavin Newsom and Ken Goldberg at SFGate: “Although the results of last week’s primary election are still being certified, we already know that voter turnout was among the lowest in California’s history. Pundits will rant about the “cynical electorate” and wag a finger at disengaged voters shirking their democratic duties, but we see the low turnout as a symptom of broader forces that affect how people and government interact.
The methods used to find out what citizens think and believe are limited to elections, opinion polls, surveys and focus groups. These methods may produce valuable information, but they are costly, infrequent and often conducted at the convenience of government or special interests.
We believe that new technology has the potential to increase public engagement by tapping the collective intelligence of Californians every day, not just on election day.
While most politicians already use e-mail and social media, these channels are easily dominated by extreme views and tend to regurgitate material from mass media outlets.
We’re exploring an alternative.
The California Report Card is a mobile-friendly web-based platform that streamlines and organizes public input for the benefit of policymakers and elected officials. The report card allows participants to assign letter grades to key issues and to suggest new ideas for consideration; public officials then can use that information to inform their decisions.
In an experimental version of the report card released earlier this year, residents from all 58 counties assigned more than 20,000 grades to the state of California and also suggested issues they feel deserve priority at the state level. As one participant noted: “This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables elected officials to hear directly how Californians feel.”
Initial data confirm that Californians approve of our state’s rollout of Obamacare, but are very concerned about the future of our schools and universities.
There was also a surprise. California Report Card suggestions for top state priorities revealed consistently strong interest and support for more attention to disaster preparedness. Issues related to this topic were graded as highly important by a broad cross section of participants across the state. In response, we’re testing new versions of the report card that can focus on topics related to wildfires and earthquakes.
The report card is part of an ongoing collaboration between the CITRIS Data and Democracy Initiative at UC Berkeley and the Office of the Lieutenant Governor to explore how technology can improve public communication and bring the government closer to the people. Our hunch is that engineering concepts can be adapted for public policy to rapidly identify real insights from constituents and resist gaming by special interests.
You don’t have to wait for the next election to have your voice heard by officials in Sacramento. The California Report Card is now accessible from cell phones, desktop and tablet computers. We encourage you to contribute your own ideas to amplify California’s collective intelligence. It’s easy, just click “participate” on this website: CaliforniaReportCard.org”

Why Statistically Significant Studies Aren’t Necessarily Significant


Michael White in PSMagazine on how modern statistics have made it easier than ever for us to fool ourselves: “Scientific results often defy common sense. Sometimes this is because science deals with phenomena that occur on scales we don’t experience directly, like evolution over billions of years or molecules that span billionths of meters. Even when it comes to things that happen on scales we’re familiar with, scientists often draw counter-intuitive conclusions from subtle patterns in the data. Because these patterns are not obvious, researchers rely on statistics to distinguish the signal from the noise. Without the aid of statistics, it would be difficult to convincingly show that smoking causes cancer, that drugged bees can still find their way home, that hurricanes with female names are deadlier than ones with male names, or that some people have a precognitive sense for porn.
OK, very few scientists accept the existence of precognition. But Cornell psychologist Daryl Bem’s widely reported porn precognition study illustrates the thorny relationship between science, statistics, and common sense. While many criticisms were leveled against Bem’s study, in the end it became clear that the study did not suffer from an obvious killer flaw. If it hadn’t dealt with the paranormal, it’s unlikely that Bem’s work would have drawn much criticism. As one psychologist put it after explaining how the study went wrong, “I think Bem’s actually been relatively careful. The thing to remember is that this type of fudging isn’t unusual; to the contrary, it’s rampant–everyone does it. And that’s because it’s very difficult, and often outright impossible, to avoid.”…
That you can lie with statistics is well known; what is less commonly noted is how much scientists still struggle to define proper statistical procedures for handling the noisy data we collect in the real world. In an exchange published last month in the Proceedings of the National Academy of Sciences, statisticians argued over how to address the problem of false positive results, statistically significant findings that on further investigation don’t hold up. Non-reproducible results in science are a growing concern; so do researchers need to change their approach to statistics?
Valen Johnson, at Texas A&M University, argued that the commonly used threshold for statistical significance isn’t as stringent as scientists think it is, and therefore researchers should adopt a tighter threshold to better filter out spurious results. In reply, statisticians Andrew Gelman and Christian Robert argued that tighter thresholds won’t solve the problem; they simply “dodge the essential nature of any such rule, which is that it expresses a tradeoff between the risks of publishing misleading results and of important results being left unpublished.” The acceptable level of statistical significance should vary with the nature of the study. Another team of statisticians raised a similar point, arguing that a more stringent significance threshold would exacerbate the worrying publishing bias against negative results. Ultimately, good statistical decision making “depends on the magnitude of effects, the plausibility of scientific explanations of the mechanism, and the reproducibility of the findings by others.”
However, arguments over statistics usually occur because it is not always obvious how to make good statistical decisions. Some bad decisions are clear. As xkcd’s Randall Munroe illustrated in his comic on the spurious link between green jelly beans and acne, most people understand that if you keep testing slightly different versions of a hypothesis on the same set of data, sooner or later you’re likely to get a statistically significant result just by chance. This kind of statistical malpractice is called fishing or p-hacking, and most scientists know how to avoid it.
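The multiple-testing trap in the jelly bean comic is easy to simulate. A minimal Python sketch (the numbers are illustrative, not from any real study) relies on the fact that under a true null hypothesis a valid p-value is uniformly distributed, so screening 20 colors at the conventional 0.05 threshold turns up at least one spurious “significant” finding in roughly two thirds of studies:

```python
import random

def family_false_positive_rate(n_tests=20, alpha=0.05,
                               n_experiments=10000, seed=42):
    """Under a true null, a valid p-value is uniform on (0, 1).
    Draw n_tests such p-values per simulated 'study' and count how
    often at least one falls below alpha."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_tests))
        for _ in range(n_experiments)
    )
    return hits / n_experiments

if __name__ == "__main__":
    rate = family_false_positive_rate()
    # Analytically: 1 - (1 - 0.05)**20 ≈ 0.64
    print(f"family-wise false positive rate ≈ {rate:.2f}")
```

The analytic value is 1 − (1 − 0.05)^20 ≈ 0.64, which is why a single “green jelly beans linked to acne” headline drawn from a 20-color screen carries so little evidential weight.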
But there are more subtle forms of the problem that pervade the scientific literature. In an unpublished paper (PDF), statisticians Andrew Gelman, at Columbia University, and Eric Loken, at Penn State, argue that researchers who deliberately avoid p-hacking still unknowingly engage in a similar practice. The problem is that one scientific hypothesis can be translated into many different statistical hypotheses, with many chances for a spuriously significant result. After looking at their data, researchers decide which statistical hypothesis to test, but that decision is skewed by the data itself.
To see how this might happen, imagine a study designed to test the idea that green jellybeans cause acne. There are many ways the results could come out statistically significant in favor of the researchers’ hypothesis. Green jellybeans could cause acne in men, but not in women, or in women but not men. The results may be statistically significant if the jellybeans you call “green” include Lemon Lime, Kiwi, and Margarita but not Sour Apple. Gelman and Loken write that “researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.” In the end, the researchers may explicitly test only one or a few statistical hypotheses, but their decision-making process has already biased them toward the hypotheses most likely to be supported by their data. The result is “a sort of machine for producing and publicizing random patterns.”
Gelman and Loken are not alone in their concern. Last year Daniele Fanelli, at the University of Edinburgh, and John Ioannidis, at Stanford University, reported that many U.S. studies, particularly in the social sciences, may overestimate the effect sizes of their results. “All scientists have to make choices throughout a research project, from formulating the question to submitting results for publication.” These choices can be swayed “consciously or unconsciously, by scientists’ own beliefs, expectations, and wishes, and the most basic scientific desire is that of producing an important research finding.”
What is the solution? Part of the answer is to not let measures of statistical significance override our common sense—not our naïve common sense, but our scientifically-informed common sense…”

A brief history of open data


Article by Luke Fretwell in FCW: “In December 2007, 30 open-data pioneers gathered in Sebastopol, Calif., and penned a set of eight open-government data principles that inaugurated a new era of democratic innovation and economic opportunity.
“The objective…was to find a simple way to express values that a bunch of us think are pretty common, and these are values about how the government could make its data available in a way that enables a wider range of people to help make the government function better,” Harvard Law School Professor Larry Lessig said. “That means more transparency in what the government is doing and more opportunity for people to leverage government data to produce insights or other great business models.”
The eight simple principles — that data should be complete, primary, timely, accessible, machine-processable, nondiscriminatory, nonproprietary and license-free — still serve as the foundation for what has become a burgeoning open-data movement.

The benefits of open data for agencies

  • Save time and money when responding to Freedom of Information Act requests.
  • Avoid duplicative internal research.
  • Use complementary datasets held by other agencies.
  • Empower employees to make better-informed, data-driven decisions.
  • Attract positive attention from the public, media and other agencies.
  • Generate revenue and create new jobs in the private sector.

Source: Project Open Data

In the seven years since those principles were released, governments around the world have adopted open-data initiatives and launched platforms that empower researchers, journalists and entrepreneurs to mine this new raw material and its potential to uncover new discoveries and opportunities. Open data has drawn civic hacker enthusiasts around the world, fueling hackathons, challenges, apps contests, barcamps and “datapaloozas” focused on issues as varied as health, energy, finance, transportation and municipal innovation.
In the United States, the federal government initiated the beginnings of a wide-scale open-data agenda on President Barack Obama’s first day in office in January 2009, when he issued his memorandum on transparency and open government, which declared that “openness will strengthen our democracy and promote efficiency and effectiveness in government.” The president gave federal agencies three months to provide input into an open-government directive that would eventually outline what each agency planned to do with respect to civic transparency, collaboration and participation, including specific objectives related to releasing data to the public.
In May of that year, Data.gov launched with just 47 datasets and a vision to “increase public access to high-value, machine-readable datasets generated by the executive branch of the federal government.”
When the White House issued the final draft of its federal Open Government Directive later that year, the U.S. open-government data movement got its first tangible marching orders, including a 45-day deadline to open previously unreleased data to the public.
Now five years after its launch, Data.gov boasts more than 100,000 datasets from 227 local, state and federal agencies and organizations….”

Special Issue on Innovation through Open Data


A Review of the State-of-the-Art and an Emerging Research Agenda in the Journal of Theoretical and Applied Electronic Commerce Research:

  • Going Beyond Open Data: Challenges and Motivations for Smart Disclosure in Ethical Consumption (Djoko Sigit Sayogo, Jing Zhang, Theresa A. Pardo, Giri K. Tayi, Jana Hrdinova, David F. Andersen and Luis Felipe Luna-Reyes)
  • Shaping Local Open Data Initiatives: Politics and Implications (Josefin Lassinantti, Birgitta Bergvall-Kåreborn and Anna Ståhlbröst)
  • A State-of-the-Art Analysis of the Current Public Data Landscape from a Functional, Semantic and Technical Perspective (Michael Petychakis, Olga Vasileiou, Charilaos Georgis, Spiros Mouzakitis and John Psarras)
  • Using a Method and Tool for Hybrid Ontology Engineering: an Evaluation in the Flemish Research Information Space (Christophe Debruyne and Pieter De Leenheer)
  • A Metrics-Driven Approach for Quality Assessment of Linked Open Data (Behshid Behkamal, Mohsen Kahani, Ebrahim Bagheri and Zoran Jeremic)
  • Open Government Data Implementation Evaluation (Peter Parycek, Johann Höchtl and Michael Ginner)
  • Data-Driven Innovation through Open Government Data (Thorhildur Jetzek, Michel Avital and Niels Bjorn-Andersen)

Who Influences Whom? Reflections on U.S. Government Outreach to Think Tanks


Jeremy Shapiro at Brookings: “The U.S. government makes a big effort to reach out to important think tanks, often through the little noticed or understood mechanism of small, private and confidential roundtables. Indeed, for the ambitious Washington think-tanker nothing quite gets the pulse racing like the idea of attending one of these roundtables with the most important government officials. The very occasion is full of intrigue and ritual.

When the Government Calls for Advice

First, an understated e-mail arrives from some polite underling inviting you in to a “confidential, off-the-record” briefing with some official with an impressive title—a deputy secretary or a special assistant to the president, maybe even (heaven forfend) the secretary of state or the national security advisor. The thinker’s heart leaps, “they read my article; they finally see the light of my wisdom, I will probably be the next national security advisor.”
He clears his schedule of any conflicting brown bags on separatism in South Ossetia and, after a suitable interval to keep the government guessing as to his availability, replies that he might be able to squeeze it in to his schedule. Citizenship data and social security numbers are provided for security purposes, times are confirmed and ground rules are established in a multitude of emails with a seemingly never-ending array of staffers, all of whose titles include the word “special.” The thinker says nothing directly to his colleagues, but searches desperately for opportunities to obliquely allude to the meeting: “I’d love to come to your roundtable on uncovered interest rate parity, but I unfortunately have a meeting with the secretary of defense.”
On the appointed day, the thinker arrives early as instructed at an impressively massive and well-guarded government building, clears his way through multiple layers of redundant security, and is ushered into a wood-paneled room that reeks of power and Pine-Sol. (Sometimes it is a futuristic conference room filled with television monitors and clocks that give the time wherever the President happens to be.) Nameless peons in sensible suits clutch government-issue notepads around the outer rim of the room as the thinker takes his seat at the center table, only somewhat disappointed to see so many other familiar thinkers in the room—including some to whom he had been obliquely hinting about the meeting the day before.
At the appointed hour, an officious staffer arrives to announce that “He” (the lead government official goes only by personal pronoun—names are unnecessary at this level) is unfortunately delayed at another meeting on the urgent international crisis of the day, but will arrive just as soon as he can break away from the president in the Situation Room. He is, in fact, just reading email, but his long career has taught him the advantage of making people wait.
After 15 minutes of stilted chit-chat with colleagues that the thinker has the misfortune to see at virtually every event he attends in Washington, the senior government official strides calmly into the room, plops down at the head of the table and declares solemnly what an honor it is to have such distinguished experts to help with this critical area of policy. He very briefly details how very hard the U.S. government is working on this highest priority issue and declares that “we are in listening mode and are anxious to hear your sage advice.” A brave thinker raises his hand and speaks truth to power by reciting the thesis of his latest article. From there, the group is off to the races as the thinkers each struggle to get in the conversation and rehearse their well-worn positions.
Forty-three minutes later, the thinkers’ “hour” is up because, the officious staffer interjects, “He” must attend a Principals Committee meeting. The senior government official thanks the experts for coming, compliments them on their fruitful ideas and their full and frank debate, instructs a nameless peon at random to assemble “what was learned here” for distribution in “the building” and strides purposefully out of the room.
The pantomime then ends and the thinker retreats back to his office to continue his thoughts. But what precisely has happened behind the rituals? Have we witnessed the vaunted academic-government exchange that Washington is so famous for? Is this how fresh ideas re-invigorate stale government groupthink?..”

US Secret Service seeks Twitter sarcasm detector


BBC: “The agency has put out a work tender looking for a software system to analyse social media data.
The software should have, among other things, the “ability to detect sarcasm and false positives”.
A spokesman for the service said it currently used the Federal Emergency Management Agency’s Twitter analytics and needed its own, adding: “We aren’t looking solely to detect sarcasm.”
The Washington Post quoted Ed Donovan as saying: “Our objective is to automate our social media monitoring process. Twitter is what we analyse.
“This is real-time stream analysis. The ability to detect sarcasm and false positives is just one of 16 or 18 things we are looking at.”…
The tender was put out earlier this week on the US government’s Federal Business Opportunities website.
It sets out the objectives of automating social media monitoring and “synthesising large sets of social media data”.
Specific requirements include “audience and geographic segmentation” and analysing “sentiment and trend”.
The software also has to have “compatibility with Internet Explorer 8”. The browser was released more than five years ago.
The agency does not detail the purpose of the analysis but does set out its mission, which includes “preserving the integrity of the economy and protecting national leaders and visiting heads of state and government”.

Open Data Is Open for Business


Jeffrey Stinson at Stateline: “Last month, web designer Sean Wittmeyer and colleague Wojciech Magda walked away with a $25,000 prize from the state of Colorado for designing an online tool to help businesses decide where to locate in the state.
The tool, called “Beagle Score,” is a widget that can be embedded in online commercial real estate listings. It can rate a location by taxes and incentives, zoning, even the location of possible competitors – all derived from about 30 data sets posted publicly by the state of Colorado and its municipalities.
The creation of Beagle Score is an example of how states, cities, counties and the federal government are encouraging entrepreneurs to take raw government data posted on “open data” websites and turn the information into products the public will buy.
“The (Colorado contest) opened up a reason to use the data,” said Wittmeyer, 25, of Fort Collins. “It shows how ‘open data’ can solve a lot of challenges. … And absolutely, we can make it commercially viable. We can expand it to other states, and fairly quickly.”
Open-data advocates, such as President Barack Obama’s former chief information officer Vivek Kundra, estimate that a multibillion-dollar industry can be spawned by taking raw government data files on sectors such as weather, population, energy, housing, commerce or transportation and turning them into products for the public to consume or for other industries to pay for.
They can be as simple as mobile phone apps identifying every stop sign you will encounter on a trip to a different town, or as intricate as taking weather and crops data and turning it into insurance policies farmers can buy.

States, Cities Sponsor ‘Hackathons’

At least 39 states and 46 cities and counties have created open-data sites since the federal government, Utah, California and the cities of San Francisco and Washington, D.C., began opening data in 2009, according to the federal site, Data.gov.
Jeanne Holm, the federal government’s Data.gov “evangelist,” said new sites are popping up and new data are being posted almost daily. The city of Los Angeles, for example, opened a portal last week.
In March, Democratic New York Gov. Andrew Cuomo said that in the year since it was launched, his state’s site has grown to some 400 data sets with 50 million records from 45 agencies. Available are everything from horse injuries and deaths at state race tracks to maps of regulated child care centers. The most popular data: top fishing spots in the state.
State and local governments are sponsoring “hackathons,” “data paloozas,” and challenges like Colorado’s, inviting businesspeople, software developers, entrepreneurs or anyone with a laptop and a penchant for manipulating data to take part. Lexington, Kentucky, had a civic hackathon last weekend. The U.S. Transportation Department and members of the Geospatial Transportation Mapping Association had a three-day data palooza that ended Wednesday in Arlington, Virginia.
The goals of the events vary. Some, like Arlington’s transportation event, solicit ideas for how government can present its data more effectively. Others seek ideas for mining it.
Aldona Valicenti, Lexington’s chief information officer, said many cities want advice on how to use the data to make government more responsive to citizens, and to communicate with them on issues ranging from garbage pickups and snow removal to upcoming civic events.
Colorado and Wyoming had a joint hackathon last month sponsored by Google to help solve government problems. Colorado sought apps that might be useful to state emergency personnel in tracking people and moving supplies during floods, blizzards or other natural disasters. Wyoming sought help in making its tax-and-spend data more understandable and usable by its citizens.
Unless there’s some prize money, hackers may not make a buck from events like these, and participate out of fun, curiosity or a sense of public service. But those who create an app that is useful beyond the boundaries of a particular city or state, or one that is commercially valuable to business, can make serious money – just as Beagle Score plans to do. Colorado will hold onto the intellectual property rights to Beagle Score for a year. But Wittmeyer and his partner will be able to profit from extending it to other states.

States Trail in Open Data

Open data is an outgrowth of the e-government movement of the 1990s, in which government computerized more of the data it collected and began making it available on floppy disks.
States often have trailed the federal government or many cities in adjusting to the computer age and in sharing information, said Emily Shaw, national policy manager for the Sunlight Foundation, which promotes transparency in government. The first big push to share came with public accountability, or “checkbook” sites, that show where government gets its revenue and how it spends it.
The goal was to make government more transparent and accountable by offering taxpayers information on how their money was spent.
The Texas Comptroller of Public Accounts site, established in 2007, offers detailed revenue, spending, tax and contracts data. Republican Comptroller Susan Combs’ office said having a one-stop electronic site also has saved taxpayers about $12.3 million in labor, printing, postage and other costs.
Not all states’ checkbook sites are as openly transparent and detailed as Texas, Shaw said. Nor are their open-data sites. “There’s so much variation between the states,” she said.
Many state legislatures are working to set policies for releasing data. Since the start of 2010, according to the National Conference of State Legislatures, nine states have enacted open-data laws, and more legislation is pending. But California, for instance, has been posting open data for five years without legislation setting policies.
Just as states have lagged in getting data out to the public, less of it has been turned into commercial use, said Joel Gurin, senior adviser at the Governance Lab at New York University and author of the book “Open Data Now.”
Gurin leads Open Data 500, which identifies firms that have made products from open government data and turned them into regional or national enterprises. In April, it listed 500. It soon may expand. “We’re finding more and more companies every day,” he said…”

Lessons in Mass Collaboration


Elizabeth Walker, Ryan Siegel, Todd Khozein, Nick Skytland, Ali Llewellyn, Thea Aldrich, and Michael Brennan in the Stanford Social Innovation Review: “Significant advances in technology in the last two decades have opened possibilities to engage the masses in ways impossible to imagine centuries ago. Beyond coordination, today’s technological capability permits organizations to leverage and focus public interest, talent, and energy through mass collaborative engagement to better understand and solve today’s challenges. And given the rising public awareness of a variety of social, economic, and environmental problems, organizations have seized the opportunity to leverage and lead mass collaborations in the form of hackathons.
Hackathons emerged in the mid-2000s as a popular approach to leverage the expertise of large numbers of individuals to address social issues, often through the creation of online technological solutions. Having led hundreds of mass collaboration initiatives for organizations around the world in diverse cultural contexts, we at SecondMuse offer the following lessons as a starting point for others interested in engaging the masses, as well as challenges others may face.

What Mass Collaboration Looks Like

An early example of a mass collaborative endeavor was Random Hacks of Kindness (RHoK), which formed in 2009. RHoK was initially developed in collaboration with Google, Microsoft, Yahoo!, NASA, the World Bank, and later, HP as a volunteer mobilization effort; it aimed to build technology that would enable communities to respond better to crises such as natural disasters. In 2012, nearly 1,000 participants attended 30 events around the world to address 176 well-defined problems.
In 2013, NASA and SecondMuse led the International Space Apps Challenge, which engaged six US federal agencies, 400 partner institutions, and 9,000 global citizens through a variety of local and global team configurations; it aimed to address 58 different challenges to improve life on Earth and in space. In Athens, Greece, for example, in direct response to the challenge of creating a space-deployable greenhouse, a team developed a modular spinach greenhouse designed to survive the harsh Martian climate. Two months later, 11,000 citizens across 95 events participated in the National Day of Civic Hacking in 83 different US cities, ultimately contributing about 150,000 person-hours and addressing 31 federal and several state and local challenges over a single weekend. One result was Keep Austin Fed from Austin, Texas, which leveraged local data to coordinate food donations for those in need.
Strong interest on the part of institutions and an enthusiastic international community have paved the way for follow-up events in 2014.

Benefits of Mass Collaboration

The benefits of this approach to problem-solving are many, including:

  • Incentivizing the use of government data. As institutions push to make data available to the public, mass collaboration can increase the usefulness of that data by creating products from it, as well as inform and streamline future data collection processes.
  • Increasing transparency. Engaging citizens in the process of addressing public concerns educates them about the work that institutions do and advances efforts to meet public expectations of transparency.
  • Increasing outcome ownership. When people engage in a collaborative process of problem solving, they naturally have a greater stake in the outcome. Put simply, the more people who participate in the process, the greater the sense of community ownership. Also, when spearheading new policies or initiatives, the support of a knowledgeable community can be important to long-term success.
  • Increasing awareness. Engaging the populace in addressing challenges of public concern increases awareness of issues and helps develop an active citizenry. As a result, improved public perception and license to operate bolster governmental and non-governmental efforts to address challenges.
  • Saving money. By providing data and structures to the public, and allowing them to build and iterate on plans and prototypes, mass collaboration gives agencies a chance to harness the power of open innovation with minimal time and funds.
  • Harnessing cognitive surplus. The advent of online tools allowing for distributed collaboration enables citizens to use their free time incrementally toward collective endeavors that benefit local communities and the nation.

Challenges of Mass Collaboration

Although the benefits can be significant, agencies planning to lead mass collaborations should be aware of several challenges:

  • Investing time and effort. A mass collaboration is most effective when it is not a one-time event. Building a collaboration of supporting partner organizations, creating a robust framework for action, developing the necessary tools, defining the challenges, and investing in the implementation and scaling of the most promising results all require substantial up-front time to secure long-term commitment and strong relationships.
  • Forging an institution-community relationship. Throughout the course of most engagements, the power dynamic between the organization providing the frameworks and challenges and the individuals responding to the call to action can shift dramatically as the community incorporates the endeavor into its collective identity. Everyone involved should embrace this shift as they lay the foundation for self-sustaining mass collaboration communities. Once participants develop a firmly entrenched collective identity and sense of ownership, the convening organization can fully tap into their collective genius, working with them on the basis of trust and shared vision. Without community ownership, organizers must allot more time, energy, and resources to keep the initiative moving forward and to battle volunteer fatigue, diminished productivity, and substandard output.
  • Focusing follow-up. Turning a massive infusion of creative ideas, concepts, and prototypes into concrete solutions requires a process of focused follow-up. Identifying the most promising seeds and nurturing them to fruition requires time, discrete skills, insight, and, depending on the solutions being scaled, support from a variety of external organizations.
  • Understanding ROI. Any resource-intensive endeavor where only a few of numerous resulting products ever see the light of day demands deep consideration of what constitutes a reasonable return on investment. For mass collaborations, this means having an initial understanding of the potential tangible and intangible outcomes, and making a frank assessment of whether those outcomes meet the needs of the collaborators.

Technological developments in the last century have enabled relationships between individuals and institutions to blossom into a rich and complex tapestry…”