Needed: A New Generation of Game Changers to Solve Public Problems


Beth Noveck: “In order to change the way we govern, it is important to train and nurture a new generation of problem solvers who possess the multidisciplinary skills to become effective agents of change. That’s why we at the GovLab have launched The GovLab Academy with the support of the Knight Foundation.
In an effort to help people in their own communities become more effective at developing and implementing creative solutions to compelling challenges, The GovLab Academy is offering two new training programs:
1) An online platform with an unbundled and evolving set of topics, modules and instructors on innovations in governance, including themes such as big and open data and crowdsourcing, with forthcoming topics on behavioral economics, prizes and challenges, open contracting and performance management for governance;
2) Gov 3.0: A curated and sequenced, 14-week mentoring and training program.
While the online platform is always freely available, Gov 3.0 begins on January 29, 2014, and we invite you to participate. Please forward this email to your networks and help us spread the word.
Please consider applying (individuals or teams may apply), if you are:

  • an expert in communications, public policy, law, computer science, engineering, business or design who wants to expand your ability to bring about social change;

  • a public servant who wants to bring innovation to your job;

  • someone with an important idea for positive change but who lacks key skills or resources to realize the vision;

  • interested in joining a network of like-minded, purpose-driven individuals across the country; or

  • someone who is passionate about using technology to solve public problems.

The program includes live instruction and conversation every Wednesday from 5:00–6:30 PM EST for 14 weeks, starting January 29, 2014. You will be able to participate remotely via Google Hangout.

Gov 3.0 will allow you to apply evolving technology to the design and implementation of effective solutions to public interest challenges. It will give you an overview of the most current approaches to smarter governance and help you improve your skills in collaboration, communication, and developing and presenting innovative ideas.

Over 14 weeks, you will develop a project and a plan for its implementation, including long and short descriptions, a presentation deck, a persuasive video and a project blog. Last term’s projects covered such diverse issues as post-Fukushima food safety, science literacy for high schoolers and prison reform for the elderly. In every case, the goal was to identify realistic strategies for making a difference quickly. You can read the entire Gov 3.0 syllabus here.

The program will include national experts and instructors in technology and governance both as guests and as mentors to help you design your project. Last term’s mentors included current and former officials from the White House and various state, local and international governments, academics from a variety of fields, and prominent philanthropists.

People who complete the program will have the opportunity to apply for a special fellowship to pursue their projects further.

Previously taught only on campus, Gov 3.0 is now being offered in beta as an online program. This is not a MOOC. It is a mentoring-intensive coaching experience. To maximize the quality of the experience, enrollment is limited.

Please submit your application by January 22, 2014. Accepted applicants (individuals and teams) will be notified on January 24, 2014. We hope to expand the program in the future so please use the same form to let us know if you would like to be kept informed about future opportunities.”

From funding agencies to scientific agency


New paper on “Collective allocation of science funding as an alternative to peer review”: “Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers’ money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.

Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.

However, there is mounting criticism of the use of peer review to direct research funding. High on the list of complaints is the cost, in both time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend considerable time and effort assessing and rating proposals, only a minority of which can eventually be funded. Such a high rejection rate is, of course, also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort in writing and reviewing research proposals, most of which end up not being funded at all. This time would be better invested in conducting the research in the first place.

Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub-optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls-for-proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.

The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post-hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].

We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision-making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity-driven research.

Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year’s funding to other scientists whom they think would make the best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year’s budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.”
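To make the mechanics concrete, here is a minimal simulation sketch of the scheme as described above. The community size, grant size, donation fraction, and the rule by which scientists choose recipients are illustrative assumptions, not values taken from the paper.

```python
import random

# Toy simulation of the collective allocation scheme described above.
# All parameters are illustrative assumptions, not the paper's values.
NUM_SCIENTISTS = 100      # size of the community
BASIC_GRANT = 100_000     # unconditional grant from the agency each year
DONATION_FRACTION = 0.5   # fixed share of last year's funding passed on
YEARS = 20

random.seed(42)

# Stand-in for scientific judgment: each scientist has a fixed set of peers
# they consider most promising (self-donation excluded, since the money must
# be passed on to *other* scientists).
favorites = [
    random.sample([j for j in range(NUM_SCIENTISTS) if j != i], 5)
    for i in range(NUM_SCIENTISTS)
]

funding = [float(BASIC_GRANT)] * NUM_SCIENTISTS
for year in range(YEARS):
    donations = [0.0] * NUM_SCIENTISTS
    for i in range(NUM_SCIENTISTS):
        share = funding[i] * DONATION_FRACTION / len(favorites[i])
        for j in favorites[i]:
            donations[j] += share          # pass the mandatory share to peers
    # Next year's budget: fresh basic grant + what peers elected to give you.
    funding = [BASIC_GRANT + donations[i] for i in range(NUM_SCIENTISTS)]

print(f"after {YEARS} years: min={min(funding):,.0f}, max={max(funding):,.0f}")
```

Because each year's budget is a constant basic grant plus a fixed fraction of the previous year's budgets redistributed along endorsement links, the distribution settles into a stable pattern: everyone keeps the basic-grant floor, while scientists endorsed by many peers accumulate more, which matches the behavior the authors describe.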

The Failure and the Promise of Public Participation


Dr. Mark Funkhouser in Governing: “In a recent study entitled Making Public Participation Legal, Matt Leighninger cites a Knight Foundation report that found that attending a public meeting was more likely to reduce a person’s sense of efficacy and attachment to the community than to increase it. That sad fact is no surprise to the government officials who have to run — and endure — public meetings.
Every public official who has served for any length of time has horror stories about these forums. The usual suspects show up — the self-appointed activists (who sometimes seem to be just a little nuts) and the lobbyists. Regular folks have made the calculation that only in extreme circumstances, when they are really scared or angry, is attending a public hearing worth their time. And who can blame them when it seems clear that the game is rigged, the decisions already have been made, and they’ll probably have to sit through hours of blather before they get their three minutes at the microphone?
So much transparency and yet so little trust. Despite the fact that governments are pumping out more and more information to citizens, trust in government has edged lower and lower, pushed in part no doubt by the lingering economic hardships and government cutbacks resulting from the recession. Most public officials I talk to now take it as an article of faith that the public generally disrespects them and the governments they work for.
Clearly the relationship between citizens and their governments needs to be reframed. Fortunately, over the last couple of decades lots of techniques have been developed by advocates of deliberative democracy and citizen participation that provide both more meaningful engagement and better community outcomes. There are decision-making forums, “visioning” forums and facilitated group meetings, most of which feature some combination of large-group, small-group and online interactions.
But here’s the rub: Our legal framework doesn’t support these new methods of public participation. This fact is made clear in Making Public Participation Legal, which was compiled by a working group that included people from the National Civic League, the American Bar Association, the International City/County Management Association and a number of leading practitioners of public participation.
The requirements for public meetings in local governments are generally built into state statutes such as sunshine or open-meetings laws or other laws governing administrative procedures. These laws may require public hearings in certain circumstances and mandate that advance notice, along with an agenda, be posted for any meeting of an “official body” — from the state legislature to a subcommittee of the city council or an advisory board of some kind. And a “meeting” is one in which a quorum attends. So if three of a city council’s nine members sit on the finance committee and two of the committee members happen to show up at a public meeting, they may risk having violated the open-meetings law…”

Why the Nate Silvers of the World Don’t Know Everything


Felix Salmon in Wired: “This shift in US intelligence mirrors a definite pattern of the past 30 years, one that we can see across fields and institutions. It’s the rise of the quants—that is, the ascent to power of people whose native tongue is numbers and algorithms and systems rather than personal relationships or human intuition. Michael Lewis’ Moneyball vividly recounts how the quants took over baseball, as statistical analysis trumped traditional scouting and propelled the underfunded Oakland A’s to a division-winning 2002 season. More recently we’ve seen the rise of the quants in politics. Commentators who “trusted their gut” about Mitt Romney’s chances had their gut kicked by Nate Silver, the stats whiz who called the election days beforehand as a lock for Obama, down to the very last electoral vote in the very last state.
The reason the quants win is that they’re almost always right—at least at first. They find numerical patterns or invent ingenious algorithms that increase profits or solve problems in ways that no amount of subjective experience can match. But what happens after the quants win is not always the data-driven paradise that they and their boosters expected. The more a field is run by a system, the more that system creates incentives for everyone (employees, customers, competitors) to change their behavior in perverse ways—providing more of whatever the system is designed to measure and produce, whether that actually creates any value or not. It’s a problem that can’t be solved until the quants learn a little bit from the old-fashioned ways of thinking they’ve displaced.
No matter the discipline or industry, the rise of the quants tends to happen in four stages. Stage one is what you might call pre-disruption, and it’s generally best visible in hindsight. Think about quaint dating agencies in the days before the arrival of Match.com and all the other algorithm-powered online replacements. Or think about retail in the era before floor-space management analytics helped quantify exactly which goods ought to go where. For a live example, consider Hollywood, which, for all the money it spends on market research, is still run by a small group of lavishly compensated studio executives, all of whom are well aware that the first rule of Hollywood, as memorably summed up by screenwriter William Goldman, is “Nobody knows anything.” On its face, Hollywood is ripe for quantification—there’s a huge amount of data to be mined, considering that every movie and TV show can be classified along hundreds of different axes, from stars to genre to running time, and they can all be correlated to box office receipts and other measures of profitability.
Next comes stage two, disruption. In most industries, the rise of the quants is a recent phenomenon, but in the world of finance it began back in the 1980s. The unmistakable sign of this change was hard to miss: the point at which you started getting targeted and personalized offers for credit cards and other financial services based not on the relationship you had with your local bank manager but on what the bank’s algorithms deduced about your finances and creditworthiness. Pretty soon, when you went into a branch to inquire about a loan, all they could do was punch numbers into a computer and then give you the computer’s answer.
For a present-day example of disruption, think about politics. In the 2012 election, Obama’s old-fashioned campaign operatives didn’t disappear. But they gave money and freedom to a core group of technologists in Chicago—including Harper Reed, former CTO of the Chicago-based online retailer Threadless—and allowed them to make huge decisions about fund-raising and voter targeting. Whereas earlier campaigns had tried to target segments of the population defined by geography or demographic profile, Obama’s team made the campaign granular right down to the individual level. So if a mom in Cedar Rapids was on the fence about who to vote for, or whether to vote at all, then instead of buying yet another TV ad, the Obama campaign would message one of her Facebook friends and try the much more effective personal approach…
After disruption, though, there comes at least some version of stage three: overshoot. The most common problem is that all these new systems—metrics, algorithms, automated decision-making processes—result in humans gaming the system in rational but often unpredictable ways. Sociologist Donald T. Campbell noted this dynamic back in the ’70s, when he articulated what’s come to be known as Campbell’s law: “The more any quantitative social indicator is used for social decision-making,” he wrote, “the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”…
Policing is a good example, as explained by Harvard sociologist Peter Moskos in his book Cop in the Hood: My Year Policing Baltimore’s Eastern District. Most cops have a pretty good idea of what they should be doing, if their goal is public safety: reducing crime, locking up kingpins, confiscating drugs. It involves foot patrols, deep investigations, and building good relations with the community. But under statistically driven regimes, individual officers have almost no incentive to actually do that stuff. Instead, they’re all too often judged on results—specifically, arrests. (Not even convictions, just arrests: If a suspect throws away his drugs while fleeing police, the police will chase and arrest him just to get the arrest, even when they know there’s no chance of a conviction.)…
It’s increasingly clear that for smart organizations, living by numbers alone simply won’t work. That’s why they arrive at stage four: synthesis—the practice of marrying quantitative insights with old-fashioned subjective experience. Nate Silver himself has written thoughtfully about examples of this in his book, The Signal and the Noise. He cites baseball, which in the post-Moneyball era adopted a “fusion approach” that leans on both statistics and scouting. Silver credits it with delivering the Boston Red Sox’s first World Series title in 86 years. Or consider weather forecasting: The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone. A similar synthesis holds in economic forecasting: Adding human judgment to statistical methods makes results roughly 15 percent more accurate. And it’s even true in chess: While the best computers can now easily beat the best humans, they can in turn be beaten by humans aided by computers….
That’s what a good synthesis of big data and human intuition tends to look like. As long as the humans are in control, and understand what it is they’re controlling, we’re fine. It’s when they become slaves to the numbers that trouble breaks out. So let’s celebrate the value of disruption by data—but let’s not forget that data isn’t everything.

Entrepreneurs Shape Free Data Into Money


Angus Loten in the Wall Street Journal: “More cities are putting information on everything from street-cleaning schedules to police-response times and restaurant inspection reports in the public domain, in the hope that people will find a way to make money off the data.
Supporters of such programs often see them as a local economic stimulus plan, allowing software developers and entrepreneurs in cities ranging from San Francisco to South Bend, Ind., to New York, to build new businesses based on the information they get from government websites.
When Los Angeles Mayor Eric Garcetti issued an executive directive last month to launch the city’s open-data program, he cited entrepreneurs and businesses as important beneficiaries. Open data promotes innovation and “gives companies, individuals, and nonprofit organizations the opportunity to leverage one of government’s greatest assets: public information,” according to the Dec. 18 directive.
A poster child for the movement might be 34-year-old Matt Ehrlichman of Seattle, who last year built an online business in part using Seattle work permits, professional licenses and other home-construction information gathered up by the city’s Department of Planning and Development.
While his website is free, his business, called Porch.com, has more than 80 employees and charges a $35 monthly fee to industry professionals who want to boost the visibility of their projects on the site.
The site gathers raw public data—such as addresses for homes under renovation, what they are doing, who is doing the work and how much they are charging—and combines it with photos and other information from industry professionals and homeowners. It then creates a searchable database for users to compare ideas and costs for projects near their own neighborhood.
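The description above amounts to a small data pipeline: ingest raw public records, enrich them with contributed content, and index the result for search. Here is a minimal sketch of that pattern; the field names, records, and matching rule are invented for illustration and are not Porch.com's actual schema.

```python
# Sketch of the aggregate-enrich-index pattern described above.
# All field names and records are hypothetical.
permits = [  # raw open data, e.g. from a city permit feed
    {"address": "12 Elm St", "work": "kitchen remodel",
     "contractor": "Acme Remodeling", "cost": 24_000},
    {"address": "98 Oak Ave", "work": "roof replacement",
     "contractor": "TopRoof LLC", "cost": 11_500},
]

profiles = {  # photos and details contributed by professionals
    "Acme Remodeling": {"photos": 12, "rating": 4.7},
}

# Enrich each public record with contributed content, then index by project type.
index: dict[str, list[dict]] = {}
for p in permits:
    record = {**p, **profiles.get(p["contractor"], {})}
    index.setdefault(p["work"], []).append(record)

# A homeowner comparing nearby "kitchen remodel" projects and their costs:
for r in index.get("kitchen remodel", []):
    print(f'{r["address"]}: {r["contractor"]}, ${r["cost"]:,}')
```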
…Ian Kalin, director of open-data services at Socrata, a Seattle-based software firm that makes the back-end applications for many of these government open-data sites, says he’s worked with hundreds of companies that were formed around open data.
Among them is Climate Corp., a San Francisco-based firm that collects weather and yield-forecasting data to help farmers decide when and where to plant crops. Launched in 2006, the firm was acquired in October by Monsanto Co., the seed-company giant, for $930 million.
Overall, the rate of new business formation declined nationally between 2006 and 2010. But according to the latest data from the Ewing Marion Kauffman Foundation, an entrepreneurship advocacy group in Kansas City, Mo., the rate of new business formation in Seattle rose 9.41% in 2011, compared with the national average of 3.9%.
Other cities where new business formation was ahead of the national average include Chicago, Austin, Texas, Baltimore, and South Bend, Ind.—all cities that also have open-data programs. Still, how effective the ventures are in creating jobs is difficult to gauge.
One wrinkle: privacy concerns about the potential for information—such as property tax and foreclosure data—to be misused.
Some privacy advocates fear that government data that include names, addresses and other sensitive information could be used by fraudsters to target victims.”

Walgreens Taps Crowdsourcing to Deliver Cold Medicine to Shut-Ins


Mashable: “Walgreens is reaching out to consumers who are so walloped with a cold or flu that a trip to the corner drugstore seems an insurmountable obstacle.
The national drug chain is partnering with TaskRabbit, the online mobile marketplace, to allow deliveries of over-the-counter cold medicine in any of the 19 cities in which TaskRabbit is available. Such deliveries can be ordered via TaskRabbit’s iOS app or on its website. Standard TaskRabbit rates apply, including a 20% service charge and a runner’s fee. So if a runner’s fee is $10, you would pay an additional $12, plus the cost of your cold medicine, to get the delivery.
The partnership, arranged by OMD’s Ignition Factory, runs this week through Feb. 18, typically the weeks in which cold and flu complaints have the sharpest increases. During that time, the Walgreens option will appear in TaskRabbit’s iOS app’s Task Wheel and on the website. Though TaskRabbit has partnered with other national brands, including Pepsi, this is its first with a retailer.
However, the deal is more of a PR exercise than anything else: consumers could already arrange for a TaskRabbit runner to shop for and buy cold medicine at Walgreens before the agreement. The chain is hoping to raise awareness of this option, though.
“We just wanted to make it as easy as possible,” says Wilson Standish, project manager at Ignition Factory. “When you’re sick, you don’t even want to get out of bed.”

Crowdsourcing Social Problems


Article in Reason: “reCAPTCHA and Duolingo both represent a distinctly 21st-century form of distributed problem solving. These Internet-enabled approaches tend to be faster, far less expensive, and far more resilient than the heavyweight industrial-age methods of solving big social problems that we’ve grown accustomed to over the past century. They typically involve highly diverse resources (volunteer time, crowdfunding, the capabilities of multinational corporations, entrepreneurial capital, philanthropic funding) aligned around common objectives such as reducing congestion, providing safe drinking water, or promoting healthy living. Crowdsourcing offers not just a better way of doing things, but a radical challenge to the bureaucratic status quo.
Here are several ways public, private, and nonprofit organizations can use lightweight, distributed approaches to solve societal problems faster and cheaper than the existing sclerotic models.
Chunk the Problem
The genius of reCAPTCHA and Duolingo is that they divide labor into small increments, performed for free, often by people who are unaware of the project they’re helping to complete. This strategy has wide public-policy applications, even in dealing with potholes….
Meanwhile, Finland’s DigitalKoot project enlisted volunteers to digitize their own libraries by playing a computer game that challenged them to transcribe scans of antique manuscripts.
Governments can set up a microtasking platform, not just for citizen engagement but as a way to harness the knowledge and skills of public employees across multiple departments and agencies. If microtasking can work to connect people outside the “four walls” of an organization, think of its potential as a platform to connect people and conduct work inside an organization, even an organization as bureaucratic as government.
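In code, the chunking pattern is just split, dispatch, and reassemble. The sketch below is a generic illustration with hypothetical names and toy data, not any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class Microtask:
    task_id: int
    payload: str   # one tiny unit of work, e.g. a single scanned word

def chunk_job(document_words: list[str]) -> list[Microtask]:
    """Split one large transcription job into one-word microtasks."""
    return [Microtask(i, w) for i, w in enumerate(document_words)]

def reassemble(results: dict[int, str]) -> str:
    """Stitch completed microtasks back together in their original order."""
    return " ".join(results[i] for i in sorted(results))

# Usage: a garbled OCR page becomes tiny tasks that different volunteers can
# each finish in seconds, without ever seeing the whole document.
tasks = chunk_job(["qunck", "brown", "f0x"])
corrections = {"qunck": "quick", "f0x": "fox"}   # stand-in volunteer answers
results = {t.task_id: corrections.get(t.payload, t.payload) for t in tasks}
print(reassemble(results))   # -> "quick brown fox"
```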

Decentralize Service to the Self
A young woman slices her finger on a knife. As she compresses the wound with gauze to stem the bleeding, she needs to know if it warrants stitches. So she calls Blue Cross’ 24-hour nurse hotline, where patients can learn whether they should see a doctor. The nurse asks her to describe the depth of the cut. He explains that she should keep compressing it with gauze and skip the ER. In aggregate, savings like this amount to millions of dollars in avoided emergency room visits.
Since 2003, Blue Cross has been shifting the work of basic triage and risk mitigation to customers. Britain’s National Health Service (NHS) implemented a similar program, NHS Direct, in 1998. NHS estimates that the innovation has saved it £44 million a year….
Gamify Drudgery
Finland’s national library houses an enormous archive of antique texts, which officials hoped to scan and digitize into ordinary, searchable text documents. Rather than simply hire people for the tedium of correcting garbled OCR scans, the library invited the public to play a game. An online program called DigitalKoot lets people transcribe scanned words and, by typing accurately, usher a series of cartoon moles safely across a bridge….
Build a Two-Sided Market
Road infrastructure costs government five cents per driver per mile, according to the Victoria Transport Policy Institute. “That’s a dollar the government paid for the paving of that road and the maintaining of that infrastructure…just for you, not the other 3,000 people that travelled that same segment of highway in that same hour that you did,” says Sean O’Sullivan, founder of Carma, a ridesharing application.
Ridesharing companies such as Carma, Lyft, and Zimride are attempting to recruit private cars for the public transit network, by letting riders pay a small fee to carpool. A passenger waits at a designated stop, and the app alerts drivers, who can scan a profile of their potential rider. It’s a prime example of a potent new business model…
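The workflow in that paragraph is a simple two-sided match. The sketch below is a generic illustration of it; the names, types, and matching rule are hypothetical, not the API of Carma, Lyft, or Zimride.

```python
from dataclasses import dataclass

@dataclass
class Rider:
    name: str
    stop: str              # designated pickup stop
    fee: float             # small fee paid to carpool

@dataclass
class Driver:
    name: str
    route_stops: list[str]  # stops along the driver's existing commute

def alert_drivers(rider: Rider, drivers: list[Driver]) -> list[Driver]:
    """Return drivers whose existing route already passes the rider's stop."""
    return [d for d in drivers if rider.stop in d.route_stops]

rider = Rider("Ana", stop="5th & Main", fee=3.50)
drivers = [Driver("Ben", ["2nd & Oak", "5th & Main"]), Driver("Cho", ["Hill Rd"])]

for d in alert_drivers(rider, drivers):
    # The app would now show d the rider's profile; d can accept or pass.
    print(f"{d.name} alerted: pick up {rider.name} at {rider.stop} for ${rider.fee:.2f}")
```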
Remove the Middleman
John McNair dropped out of high school at age 16. By his thirties, he had become an entrepreneur, producing and selling handmade guitars, but carpentry alone wouldn’t grow his business. So the founder of Red Dog Guitars enrolled in a $20 class on Skillshare.com, taught by the illustrator John Contino, to learn to brand his work with hand-lettered product labels. Soon, a fellow businessman was asking McNair for labels to market guitar pickups.
Traditionally, the U.S. government might invest in retraining someone like John. Instead, peer-to-peer technology has allowed a community of designers to help John develop his skills. Peer-to-peer strategies enable citizens to meet each other’s needs, cheaply. Peer-to-peer solutions can help fix problems, deliver services, and supplement traditional approaches.
Peer-to-peer can lessen our dependence on big finance. Kickstarter lets companies skip the effort of convincing a banker that their product is viable. They just need to convince customers…”

Protecting personal data in E-government: A cross-country study


Paper by Yuehua Wu in Government Information Quarterly: “This paper presents the findings of a comparative study of laws and policies employed to protect personal data processed in the context of e-government in three countries (the United States, Germany, and China) with rather different approaches. Drawing on governance theory, the paper seeks to document the mechanisms utilized and to understand the factors that shape the governance modes adopted. The cases reveal that national government regulations have not kept pace with technological change and with the current information practices of the public sector. Nonetheless, traditional government regulation remains the major governance mode for the issue under discussion. Self-regulation and code-based regulation serve supplementary roles to traditional government regulation. National context is found to impact the form and level of data protection and the choice of governance modes.”

How could technology improve policy-making?


Beccy Allen from the Hansard Society (UK): “How can civil servants be sure they have the most relevant, current and reliable data? How can open data be incorporated into the policy-making process now and what is the potential for the future use of this vast array of information? How can parliamentary clerks ensure they are aware of the broadest range of expert opinion to inform committee scrutiny? And how can citizens’ views help policy-makers to design better policy at all stages of the process?
These are the kinds of questions that Sense4us will be exploring over the next three years. The aim is to build a digital tool for policy-makers that can:

  1. locate a broad range of relevant and current information, specific to a particular policy, incorporating open data sets and citizens’ views particularly from social media; and
  2. simulate the consequences and impact of potential policies, allowing policy-makers to change variables and thereby better understand the likely outcomes of a range of policy options before deciding which to adopt.

It is early days for open data and open policy making. The word ‘digital’ peppers the Civil Service Reform Plan but the focus is often on providing information and transactional services digitally. Less attention is paid to how digital tools could improve the nature of policy-making itself.
The Sense4us tool aims to help bridge the gap. It will be developed in consultation with policy-makers at different levels of government across Europe to ensure its potential use by a wide range of stakeholders. At the local level, our partners GESIS (the Leibniz-Institute for the Social Sciences) will be responsible for engaging with users at the city level in Berlin and in the North Rhine-Westphalia state legislature. At the multi-national level, Government to You (Gov2u) will engage with users in the European Parliament and Commission. Meanwhile, the Society will be responsible for national-level consultation with civil servants, parliamentarians and parliamentary officials in Whitehall and Westminster, exploring how the tool can be used to support the UK policy process. Our academic partners leading on technical development of the tool are the IT Innovation Centre at Southampton University, eGovlab at Stockholm University, the University of Koblenz-Landau and the Knowledge Media Institute at the Open University.”
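The tool's second aim, simulating the consequences of a policy as its variables change, is essentially a what-if loop over a model. The toy sketch below illustrates only that workflow; the policy model, variables, and coefficients are invented and have no connection to the actual Sense4us system.

```python
# Invented toy model: uptake of a home-insulation subsidy. Illustrative only.
def simulate(subsidy_rate: float, awareness_campaign: bool) -> dict:
    uptake = min(0.10 + 0.8 * subsidy_rate
                 + (0.05 if awareness_campaign else 0.0), 1.0)
    return {
        "uptake": uptake,
        "cost": 1_000_000 * subsidy_rate * uptake,   # programme cost
        "saved": 50_000 * uptake,                    # tonnes of CO2 avoided
    }

# A policy-maker varies one input at a time and compares likely outcomes
# before deciding which option to adopt.
for rate in (0.1, 0.3, 0.5):
    out = simulate(rate, awareness_campaign=True)
    print(f"subsidy {rate:.0%}: uptake {out['uptake']:.0%}, "
          f"cost ${out['cost']:,.0f}, CO2 saved {out['saved']:,.0f} t")
```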

Open Government? Check. Public Participation? Not yet.


New blog post by Tina Nabatchi: “By requiring all federal agencies to be more transparent, collaborative, and participatory, the Obama Administration’s Open Government Initiative promised to bring watershed changes to government. While much progress has been made since the release of its first National Action Plan, advances in the arena of public participation have been disappointing. Champions of public participation had high hopes for the second National Action Plan, which was released by the White House on December 5, 2013. While the second plan has numerous commendable and important commitments that increase transparency and collaboration, it falls flat with regard to public participation, perhaps with the exception of its promotion of participatory budgeting.
The second plan includes three explicit commitments involving “public participation.” The first commitment, “Improving Public Participation in Government,” is to be done by: (1) “expanding and simplifying the use of the We the People e-petition platform,” and (2) “publishing best practices and metrics for public participation” (see page 2). Both of these commitments (in different form) were in the first National Action Plan.
….
Perhaps the lack of movement is because realizing the promise of public participation at the federal level requires making challenging, substantive changes to our administrative infrastructure. Several issues impede the effective use of participation in open government at the federal level, including, among others:

  • Most of the laws that govern the use of public participation are over thirty years old and pre-date the internet. Existing laws and regulations use a narrow definition of public participation and fail to embrace the vast array of robust, empowered participatory methods. Moreover, the laws are often in tension with agency missions and the goals of participation, and they leave agency staff wondering whether participatory innovations are legal.

  • Agency officials sometimes lack the knowledge, skills, and abilities to launch effective and meaningful participatory programs, and there are few opportunities for officials to learn about best practices from each other and from civil society. Agency officials who have taken lead roles in innovative public participation efforts do not always feel supported by the Administration.

  • Several laws, rules, and regulations limit agencies’ ability to collect and use routine data from participatory programs, which impedes evaluation efforts. Thus, agencies are severely restricted in their ability to appraise and improve participatory programs and their implementation.
Had these issues been addressed in the second National Action Plan, perhaps federal agencies would have been able to focus on the participatory aspects of open government and help the U.S. become a leader in public participation innovation. To this end, as the Administration moves forward with Open Government, it should work on: (1) reviewing and clarifying the legal framework for participation, including a more expansive and clear definition of public participation; (2) helping agencies develop the internal capacity needed to conduct more meaningful public participation; and (3) developing a generic, OMB-approved tool that all agencies can use to collect common data about individual participants for routine uses. Without attention to these issues, the Open Government Initiative will fail to reshape the practices and activities of public participation in the work of federal agencies.”