Website Provides Data, Tools for K-12 Educators


US Census: “As many kids across the nation go back to school this month, we are excited to roll out a new U.S. Census Bureau program, “Statistics in Schools,” aimed at making a real and positive difference in American education….The new website provides data, tools and teacher-friendly activities to K-12 educators in math, history, and social studies as well as the newly added subjects of geography and sociology. We also doubled the number of tools on the website, resulting in more than 100 resources from which teachers can choose, including:

  • Maps and historical documents — historical and current maps as well as photos, cartoons and census records.
  • News articles — examples of census data applied to current events in the news.
  • Videos — the importance of statistics and how data relates to students today.
  • Games — test your students’ knowledge in our population bracketology game.
  • Infographics and data visualizations — census data presented visually; many linked to a classroom activity.
  • Searchable data tools that reveal population statistics by sex, age, ethnicity and race.
  • Activities organized by grade, education standard and subject.
  • Information to help teachers explain the Census Bureau to students….

The next step in the program is perhaps the most exciting, as educators throughout the nation begin to leverage Statistics in Schools to enrich their curricula. I look forward to being on this journey with you and working toward improved statistical literacy for the next generation. Please stay in touch — we will be listening closely to learn what works, what could be improved, and how the Census Bureau can continue to help you….(More)”

When Innovation Goes Wrong


Christian Seelos & Johanna Mair at Stanford Social Innovation Review: “Efforts by social enterprises to develop novel interventions receive a great deal of attention. Yet these organizations often stumble when it comes to turning innovation into impact. As a result, they fail to achieve their full potential. Here’s a guide to diagnosing and preventing several “pathologies” that underlie this failure….

The core purpose of an innovation process is the conversion of uncertainty into knowledge. Or to put it another way: Innovation is essentially a matter of learning. In fact, one critical insight that we have drawn from our research is that effective organizations approach innovation not with an expectation of success but with an expectation of learning. Innovators who expect success from innovation efforts will inevitably encounter disappointment, and the experience of failure will generate a blame culture in their organization that dramatically lowers their chance of achieving positive impact. But a focus on learning creates a sense of progress rather than a sense of failure. The high-impact organizations that we have studied owe much of their success to their wealth of accumulated knowledge—knowledge that often has emerged from failed innovation efforts.

Innovation uncertainty has multiple dimensions, and organizations need to be vigilant about addressing uncertainty in all of its forms. (See “Types of Innovation Uncertainty” below.) Let’s take a close look at three aspects of the innovation process that often involve a considerable degree of uncertainty.

Problem formulation | Organizations may incorrectly frame the problem that they aim to solve, and identifying that problem accurately may require several iterations and learning cycles…

Solution development | Even when an organization has an adequate understanding of a problem, it may not be able to access and deploy the resources needed to create an effective and robust solution….

Alignment with identity | Innovation may lead an organization in a direction that does not fit its culture or its sense of its purpose—its sense of “who we are.”…

In short, innovation plus scaling equals impact. Innovation is an investment of resources that creates a new potential; scaling creates impact by enacting that potential. Because innovation creates only the potential for impact, we advocate replacing the assumption that “innovation is good, and more is better” with a more critical view: Innovation, we argue, needs to prove itself on the basis of the impact that it actually creates. The goal is not innovation for its own sake but productive innovation.

Productive innovation depends on two factors: (1) an organization’s capacity for efficiently replacing innovation uncertainty with knowledge, and (2) its ability to scale up innovation outcomes by enhancing its organizational effectiveness. Innovation and scaling thus work together to form an overall social impact creation process. Over time, an investment in innovation—in the work of overcoming uncertainty—yields positive social impact, and the value of such impact will eventually exceed the cost of that investment. But that will be the case only if an organization is able to master the scaling part of this process….

Focusing on Pathologies

Through our study of social enterprises, we have devised a set of six pathologies—six ways that organizations limit their capacity for productive innovation. From the stage when people first develop (or fail to develop) the idea for an innovation to the stage when scaling efforts take off (or fail to take off), these pathologies adversely affect an organization’s ability to make its way through the social impact creation process. (See “Creating Social Impact: Six Innovation Pathologies to Avoid” below.) Organizations can greatly improve the impact of their innovation efforts by working to prevent or treat these pathologies.

Never getting started | In too many cases, organizations simply fail to invest seriously in the work of innovation. This pathology has many causes. People in organizations may have neither the time nor the incentive to develop or communicate new ideas. Or they may find that their ideas fall on deaf ears. Or they may have a tendency to discuss an idea endlessly—until the problem that gave rise to it has been replaced by another urgent problem or until an opportunity has vanished….

Pursuing too many bad ideas | Organizations in the social sector frequently fall into the habit of embracing a wide variety of ideas for innovation without regard to whether those ideas are sound. The recent obsession with “scientific” evaluation tools such as randomized controlled trials, or RCTs, exemplifies this tendency to favor costly ideas that may or may not deliver real benefits. As with other pathologies, many factors potentially contribute to this one. Funders may push their favorite solutions regardless of how well they understand the problems that those solutions target or how well a solution fits a particular organization. Or an organization may fail to invest in learning about the context of a problem before adopting a solution. Wasting scarce resources on the pursuit of bad ideas creates frustration and cynicism within an organization. It also increases innovation uncertainty and the likelihood of failure….

Stopping too early | In some instances, organizations are unable or unwilling to devote adequate resources to the development of worthy ideas. When resources are scarce and not formally dedicated to innovation processes, project managers will struggle to develop an idea and may have to abandon it prematurely. Too often, they end up taking the blame for failure, and others in their organization ignore the adverse circumstances that caused it. Decision makers then reallocate resources on an ad-hoc basis to other urgent problems or to projects that seem more important. As a result, even promising innovation efforts come to a grinding halt….

Stopping too late | Even more costly than stopping too early is stopping too late. In this pathology, an organization continues an innovation project even after the innovation proves to be ineffective or unworkable. This problem occurs, for example, when an unsuccessful innovation happens to be the pet project of a senior leader who has limited experience. Leaders who have recently joined an organization and who are keen to leave their mark rather than continue what their predecessor has built are particularly likely to engage in this pathology. Another cause of “stopping too late” is the assumption that a project budget needs to be spent. The consequences of this pathology are clear: Organizations expend scarce resources with little hope for success and without gaining any useful knowledge….

Scaling too little | To repeat an essential point that we made earlier: no scaling, no impact. This pathology—which involves a failure to move beyond the initial stages of developing, launching, and testing an intervention—is all too common in the social enterprise field. Thousands of inspired young people want to become social entrepreneurs. But few of them are willing or able to build an organization that can deliver solutions at scale. Too many organizations, therefore, remain small and lack the resources and capabilities required for translating innovation into impact….

Innovating again too soon | Too many organizations rush to launch new innovation projects instead of investing in efforts to scale interventions that they have already developed. The causes of this pathology are fairly well known: People often portray scaling as dull, routine work and innovation as its more attractive sibling. “Innovative” proposals thus attract funders more readily than proposals that focus on scaling. Reinforcing this bias is the preference among many funders for “lean projects” that reduce overhead costs to a minimum. These factors lead organizations to jump opportunistically from one innovation grant to another….(More)”

Ideas to help civil servants understand the opportunities of data


At Gov.UK: “Back in April we set out our plan for the discovery phase for what we are now calling “data science literacy”. We explained that we were going to undertake user research with civil servants to understand how they use data. The discovery phase has helped clarify the focus of this work, and we have now begun to develop options for a data science literacy service for government.

Discovery has helped us understand what we really mean when we say ‘data literacy’. For one person it can be a basic understanding of statistics, but to someone else it might mean knowledge of new data science approaches. But on the basis of our exploration, we have started to use the term “data science literacy” to mean the ability to understand how new data science techniques and approaches can be applied in real world contexts in the civil service, and to distinguish it from a broader definition of ‘data literacy’….

In the spirit of openness and transparency we are making this long list of ideas available here:

Data science driven apps

One way in which civil servants could come to understand the opportunities of data science would be to experience products and services which are driven by data science in their everyday roles. This could be something like having a recommendation engine for actions provided to them on the basis of information already held on the customer.

Sharing knowledge across government

A key user need from our user research was to understand how others had undertaken data science projects in government. This could be supported by something like a series of videos / podcasts created by civil servants, setting out case studies and approaches to data science in government. Alternatively, we could have a regularly organised speaker series where data science projects across government are presented alongside outside speakers.

Support for using data science in departments

Users in departments need to understand and experience data science projects in government so that they can undertake their own. Potentially this could be achieved through policy, analytical and data science colleagues working in multidisciplinary teams. Colleagues could also be supported by tools of differing levels of complexity ranging from a simple infographic showing at a high level the types of data available in a department to an online tool which diagnoses which approach people should take for a data science project on the basis of their aims and the data available to them.

In practice training

Users could learn more about how to use data science in their jobs by attending more formal training courses. These could take the form of something like an off-site, week-long training course where they experience the stages of undertaking a data science project (similar to the DWP Digital Academy). An alternative model could be to allocate one day a week to work on a project of departmental importance with a data scientist (similar to the Data Science Accelerator Programme for analysts).

Cross-government support for collaboration

For those users who have responsibility for leading on data science transformation in their departments there is also a need to collaborate with others in similar roles. This could be achieved through interventions such as a day-long unconference to discuss anything related to data science, and using online tools such as Google Groups, Slack, Yammer, Trello etc. We also tested the idea of a collaborative online resource where data science leads and others can contribute content and learning materials / approaches.

This is by no means an exhaustive list of potential ways to encourage data science thinking by policy and delivery colleagues across government. We hope this list is of interest to others in the field and we will update in the next six months about the transition of this project to Alpha….(More)”

Civil Solutions


Citizen Scientist


Book by Mary Ellen Hannibal: “…Here is a wide-ranging adventure in becoming a citizen scientist by an award-winning writer and environmental thought leader. As Mary Ellen Hannibal wades into tide pools, follows hawks, and scours mountains to collect data on threatened species, she discovers the power of a heroic cast of volunteers—and the makings of what may be our last, best hope in slowing an unprecedented mass extinction.

Digging deeply, Hannibal traces today’s tech-enabled citizen science movement to its roots: the centuries-long tradition of amateur observation by writers and naturalists. Prompted by her novelist father’s sudden death, she also examines her own past—and discovers a family legacy of looking closely at the world. With unbending zeal for protecting the planet, she then turns her gaze to the wealth of species left to fight for.

Combining original reporting, meticulous research, and memoir in impassioned prose, Citizen Scientist is a literary event, a blueprint for action, and the story of how one woman rescued herself from an odyssey of loss—with a new kind of science….(More)”

Against transparency


At Vox: “…Digital storage is pretty cheap and easy, so maybe the next step in open government is ubiquitous surveillance of public servants paired with open access to the recordings.

As a journalist and an all-around curious person, I can’t deny there’s something appealing about this.

Historians, too, would surely love to know everything that President Obama and his top aides said to one another regarding budget negotiations with John Boehner rather than needing to rely on secondhand news accounts influenced by the inevitable demands of spin. By the same token, historians surely would wish that there were a complete and accurate record of what was said at the Constitutional Convention in 1787 that, instead, famously operated under a policy of anonymous discussions.

But we should be cautioned by James Madison’s opinion that “no Constitution would ever have been adopted by the convention if the debates had been public.”

His view, which seems sensible, is that public or recorded debates would have been simply exercises in position-taking rather than deliberation, with each delegate playing to his base back home rather than working toward a deal.

“Had the members committed themselves publicly at first, they would have afterwards supposed consistency required them to maintain their ground,” Madison wrote, “whereas by secret discussion no man felt himself obliged to retain his opinions any longer than he was satisfied of their propriety and truth, and was open to the force of argument.”

The example comes to me by way of Cass Sunstein, who formerly held a position as a top regulatory czar in Obama’s White House, and who delivered a fascinating talk on the subject of government transparency at a June 2016 Columbia symposium on the occasion of the anniversary of the Freedom of Information Act.

Sunstein asks us to distinguish between disclosure of the government’s outputs and disclosure of the government’s inputs. Output disclosure is something like the text of the Constitution or when the Obama administration had Medicare change decades of practice and begin publishing information about what Medicare pays to hospitals and other health providers.

Input disclosure would be something like the transcript of the debates at the Constitutional Convention or a detailed record of the arguments inside the Obama administration over whether to release the Medicare data. Sunstein’s argument is that it is a mistake to simply conflate the two ideas of disclosure under one broad heading of “transparency” when considerations around the two are very different.

Public officials need to have frank discussions

The fundamental problem with input disclosure is that in addition to serving as a deterrent to misconduct, it serves as a deterrent to frankness and honesty.

There are a lot of things that colleagues might have good reason to say to one another in private that would nonetheless be very damaging if they went viral on Facebook:

  • Healthy brainstorming processes often involve tossing out bad or half-baked ideas in order to stimulate thought and elevate better ones.
  • A realistic survey of options may require a blunt assessment of the strengths and weaknesses of different members of the team or of outside groups that would be insulting if publicized.
  • Policy decisions need to be made with political sustainability in mind, but part of making a politically sustainable policy decision is you don’t come out and say you made the decision with politics in mind.
  • Someone may want to describe an actual or potential problem in vivid terms to spur action, without wanting to provoke public panic or hysteria through public discussion.
  • If a previously embarked-upon course of action isn’t working, you may want to quietly change course rather than publicly admit failure.

Journalists are, of course, interested in learning about all such matters. But it’s precisely because such things are genuinely interesting that making disclosure inevitable is risky.

Ex post facto disclosure of discussions whose participants didn’t realize they would be disclosed would be fascinating and useful. But after a round or two of disclosure, the atmosphere would change. Instead of peeking in on a real decision-making process, you would have every meeting dominated by the question “what will this look like on the home page of Politico?”…(More)”

For Quick Housing Data, Hit Craigslist


Tanvi Misra at CityLab: “…housing researchers can use the Internet bulletin board for a more worthy purpose: as a source of fairly accurate, real-time data on the U.S. rental housing market.

A new paper in the Journal of Planning Education and Research analyzed 11 million Craigslist rental listings posted between May and July 2014 across the U.S. and found a treasure trove of information on regional and local housing trends. “Being able to track rental listings data from Craigslist is really useful for urban planners to take the pulse of [changing neighborhoods] much more quickly,” says Geoff Boeing, a researcher at the University of California, Berkeley’s Urban Analytics Lab, who co-authored the paper with Paul Waddell, a Berkeley professor of planning and design.

Here are a couple of big takeaways from their deep dive down the CL rabbit hole:

Overall, Craigslist listings track with HUD data (except when they don’t)

The researchers compared median rents in different Craigslist domains (metropolitan areas, essentially) to the corresponding Housing and Urban Development median rents. In New Orleans and Oklahoma City, the posted and the official rents were very similar. But in other metros, they diverged significantly. In Las Vegas, for example, the Craigslist median rent was lower than the HUD median rent, but in New York, it was much, much higher.

“That’s important for local planners to be careful with because there are totally different cultures and ways that Craigslist is used in different cities,” Boeing explains. “The economies of the cities could very much affect how rentals are being posted. If they’re posting it higher [on Craigslist], they may negotiate down eventually. Or, if they’re posting it low, they could be expecting a bidding war with a bunch of tenants coming in.” …(More)”
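The metro-level comparison Boeing and Waddell describe can be sketched in a few lines. This is a rough illustration only, not the paper's actual pipeline: it computes the median asking rent per metro from a handful of invented listings and compares it against hypothetical HUD medians. All metro names and dollar figures below are made up for the example.

```python
# Hedged sketch: compare median Craigslist asking rents with HUD medians
# per metro. Listings and HUD figures are invented for illustration.
from statistics import median

# Hypothetical scraped listings: (metro, monthly asking rent in USD)
listings = [
    ("new_orleans", 1200), ("new_orleans", 1350), ("new_orleans", 1100),
    ("new_york", 2900), ("new_york", 3400), ("new_york", 3100),
    ("las_vegas", 850), ("las_vegas", 900), ("las_vegas", 950),
]

# Hypothetical HUD median rents for the same metros
hud_median = {"new_orleans": 1250, "new_york": 1450, "las_vegas": 1050}

def craigslist_vs_hud(listings, hud_median):
    """Return {metro: (craigslist_median, hud_median, ratio)}."""
    by_metro = {}
    for metro, rent in listings:
        by_metro.setdefault(metro, []).append(rent)
    return {
        m: (median(rents), hud_median[m], median(rents) / hud_median[m])
        for m, rents in by_metro.items()
    }

for metro, (cl, hud, ratio) in craigslist_vs_hud(listings, hud_median).items():
    # A ratio near 1 means the posted and official medians roughly agree;
    # well above or below 1 flags the kind of divergence seen in NY or Vegas.
    print(f"{metro}: craigslist={cl} hud={hud} ratio={ratio:.2f}")
```

In this toy data the ratio diverges in both directions, mirroring the article's point that posted rents can run above or below official figures depending on how Craigslist is used locally.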

Situation vacant: technology triathletes wanted


Anne-Marie Slaughter in the Financial Times: “It is time to celebrate a new breed of triathletes, who work in technology. When I was dean in the public affairs school at Princeton, I would tell students to aim to work in the public, private and civic sectors over the course of their careers.

Solving public problems requires collaboration among government, business and civil society. Aspiring problem solvers need the culture and language of all three sectors and to develop a network of contacts in each.

The public problems we face, in the US and globally, require lawyers, economists and issue experts but also technologists. A lack of technologists capable of setting up HealthCare.gov, a website designed to implement the Affordable Care Act, led President Barack Obama to create the US Digital Service, which deploys SWAT tech teams to address specific problems in government agencies.

But functioning websites that deliver government services effectively are only the most obvious technological need for the public sector.

Government can reinvent how it engages with citizens entirely, for example by personalising public education with digital feedback or training jobseekers. But where to find the talent? The market for engineers, designers and project managers sees big tech companies competing for graduates from the world’s best universities.

Governments can offer only a fraction of those salaries, combined with a rigid work environment, ingrained resistance to innovation and none of the amenities and perks so dear to Silicon Valley.

Government’s comparative advantage, however, is mission and impact, which is precisely what Todd Park sells….Still, demand outstrips supply….The goal is to create an ecosystem for public interest technology comparable to that in public interest law. In the latter, a number of American philanthropists created role models, educational opportunities and career paths for aspiring lawyers who want to change the world.

That process began in the 1960s, and today every great law school has a public interest programme with scholarships for the most promising students. Many branches of government take on top law school graduates. Public interest lawyers coming out of government find jobs with think-tanks and advocacy organisations and take up research fellowships, often at the law schools that educated them. When they need to pay the mortgage or send their kids to college, they can work at large law firms with pro bono programmes….We need much more. Every public policy school at a university with a computer science, data science or technology design programme should follow suit. Every think-tank should also become a tech tank. Every non-governmental organisation should have at least one technologist on staff. Every tech company should have a pro bono scheme rewarding public interest work….(More)”

‘Homo sapiens is an obsolete algorithm’


Extract from Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari: “There’s an emerging market called Dataism, which venerates neither gods nor man – it worships data. From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system, through four basic methods:

1. Increasing the number of processors. A city of 100,000 people has more computing power than a village of 1,000 people.

2. Increasing the variety of processors. Different processors may use diverse ways to calculate and analyse data. Using several kinds of processors in a single system may therefore increase its dynamism and creativity. A conversation between a peasant, a priest and a physician may produce novel ideas that would never emerge from a conversation between three hunter-gatherers.

3. Increasing the number of connections between processors. There is little point in increasing the mere number and variety of processors if they are poorly connected. A trade network linking ten cities is likely to result in many more economic, technological and social innovations than ten isolated cities.

4. Increasing the freedom of movement along existing connections. Connecting processors is hardly useful if data cannot flow freely. Just building roads between ten cities won’t be very useful if they are plagued by robbers, or if some autocratic despot doesn’t allow merchants and travellers to move as they wish.

These four methods often contradict one another. The greater the number and variety of processors, the harder it is to freely connect them. The construction of the sapiens data-processing system accordingly passed through four main stages, each of which was characterised by an emphasis on different methods.

The first stage began with the cognitive revolution, which made it possible to connect unlimited sapiens into a single data-processing network. This gave sapiens an advantage over all other human and animal species. Although there is a limit to the number of Neanderthals, chimpanzees or elephants you can connect to the same net, there is no limit to the number of sapiens.

Sapiens used their advantage in data processing to overrun the entire world. However, as they spread into different lands and climates they lost touch with one another, and underwent diverse cultural transformations. The result was an immense variety of human cultures, each with its own lifestyle, behaviour patterns and world view. Hence the first phase of history involved an increase in the number and variety of human processors, at the expense of connectivity: 20,000 years ago there were many more sapiens than 70,000 years ago, and sapiens in Europe processed information differently from sapiens in China. However, there were no connections between people in Europe and China, and it would have seemed utterly impossible that all sapiens may one day be part of a single data-processing web.

The second stage began with agriculture and continued until the invention of writing and money. Agriculture accelerated demographic growth, so the number of human processors rose sharply, while simultaneously enabling many more people to live together in the same place, thereby generating dense local networks that contained an unprecedented number of processors. In addition, agriculture created new incentives and opportunities for different networks to trade and communicate.

Nevertheless, during the second phase, centrifugal forces remained predominant. In the absence of writing and money, humans could not establish cities, kingdoms or empires. Humankind was still divided into innumerable little tribes, each with its own lifestyle and world view. Uniting the whole of humankind was not even a fantasy.

The third stage kicked off with the appearance of writing and money about 5,000 years ago, and lasted until the beginning of the scientific revolution. Thanks to writing and money, the gravitational field of human co-operation finally overpowered the centrifugal forces. Human groups bonded and merged to form cities and kingdoms. Political and commercial links between different cities and kingdoms also tightened. At least since the first millennium BC – when coinage, empires, and universal religions appeared – humans began to consciously dream about forging a single network that would encompass the entire globe.

This dream became a reality during the fourth and last stage of history, which began around 1492. Early modern explorers, conquerors and traders wove the first thin threads that encompassed the whole world. In the late modern period, these threads were made stronger and denser, so that the spider’s web of Columbus’s days became the steel and asphalt grid of the 21st century. Even more importantly, information was allowed to flow increasingly freely along this global grid. When Columbus first hooked up the Eurasian net to the American net, only a few bits of data could cross the ocean each year, running the gauntlet of cultural prejudices, strict censorship and political repression.

But as the years went by, the free market, the scientific community, the rule of law and the spread of democracy all helped to lift the barriers. We often imagine that democracy and the free market won because they were “good”. In truth, they won because they improved the global data-processing system.

So over the last 70,000 years humankind first spread out, then separated into distinct groups and finally merged again. Yet the process of unification did not take us back to the beginning. When the different human groups fused into the global village of today, each brought along its unique legacy of thoughts, tools and behaviours, which it collected and developed along the way. Our modern larders are now stuffed with Middle Eastern wheat, Andean potatoes, New Guinean sugar and Ethiopian coffee. Similarly, our language, religion, music and politics are replete with heirlooms from across the planet.

If humankind is indeed a single data-processing system, what is its output? Dataists would say that its output will be the creation of a new and even more efficient data-processing system, called the Internet-of-All-Things. Once this mission is accomplished, Homo sapiens will vanish….(More)”

25 Years Later, What Happened to ‘Reinventing Government’?


At Governing: “…A generation ago, governments across the United States embarked on ambitious efforts to use performance measures to “reinvent” how government worked. Much of the inspiration for this effort came from the bestselling 1992 book Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector by veteran city manager Ted Gaebler and journalist David Osborne. Gaebler and Osborne challenged one of the most common complaints about public administration — that government agencies were irredeemably bureaucratic and resistant to change. The authors argued that that need not be the case. Government managers and employees could and should, the authors wrote, be as entrepreneurial as their private-sector counterparts. This meant embracing competition; measuring outcomes rather than inputs or processes; and insisting on accountability.

For public-sector leaders, Gaebler and Osborne’s book was a revelation. “I would say it has been the most influential book of the past 25 years,” says Robert J. O’Neill Jr., the executive director of the International City/County Management Association (ICMA). At the federal level, Reinventing Government inspired Vice President Al Gore’s National Performance Review. But it had its greatest impact on state and local governments. Public-sector officials across the country read Reinventing Government and ingested its ideas. Osborne joined the consulting firm Public Strategies Group and began hiring himself out as an adviser to governments.

There’s no question states and localities function differently today than they did 25 years ago. Performance management systems, though not universally beloved, have become widespread. Departments and agencies routinely measure customer satisfaction. Advances in information technology have allowed governments to develop and share outcomes more easily than ever before. Some watchdog groups consider linking outcomes to budgets — also known as performance-based budgeting — to be a best practice. Government executives in many places talk about “innovation” as if they were Silicon Valley executives. This represents real, undeniable change.

Yet despite a generation of reinvention, government is less trusted than ever before. Performance management systems are sometimes seen not as an instrument of reform but as an obstacle to it. Performance-based budgeting has had successes, but they have rarely been sustained. Some of the most innovative efforts to improve government today are pursuing quite different approaches, emphasizing grassroots employee initiatives rather than strict managerial accountability. All of this raises a question: Has the reinventing government movement left a legacy of greater effectiveness, or have the systems it generated become roadblocks that today's reformers must work around? Or is the answer somehow "yes" to both of those questions?

Reinventing Government presented dozens of examples of “entrepreneurial” problem-solving, organized into 10 chapters. Each chapter illustrated a theme, such as results-oriented government or enterprising government. This structure — concrete examples grouped around larger themes — reflected the distinctive sensibilities of each author. Gaebler, as a city manager, had made a name for himself by treating constraints such as funding shortfalls or bureaucratic rules as opportunities. His was a bottom-up, let-a-hundred-flowers-bloom sensibility. He wanted his fellow managers to create cultures where risks could be taken and initiative could be rewarded.

Osborne, a journalist, was more of a systematizer, drawn to sweeping ideas. In his previous book, Laboratories of Democracy, he had profiled six governors who he believed were developing new approaches for delivering services that constituted a "third way" between big-government liberalism and anti-government conservatism. Reinventing Government suggested how that would work in practice. It also offered readers a daring and novel vision of what government's core mission should be. Government, the book argued, should focus less on operating programs and more on overseeing them. Instead of "rowing" (stressing administrative detail), senior public officials should do more "steering" (concentrating on overall strategy). They should contract out more, embrace competition and insist on accountability. This aspect of Osborne's thinking became more pronounced as time went by.

“Today we are well beyond the experimental approach,” Osborne and Peter Hutchinson, a former Minnesota finance commissioner, wrote in their 2004 book, The Price of Government: Getting the Results We Need in an Age of Permanent Fiscal Crisis. A decade of experience had produced a proven set of strategies, the book continued. The foremost should be to turn the budget process “on its head, so that it starts with the results we demand and the price we are willing to pay rather than the programs we have and the costs they incur.” In other words, performance-based budgeting. Then, they continued, “we must cut government down to its most effective size and shape, through strategic reviews, consolidation and reorganization.”

Assessing the influence and efficacy of these ideas is difficult. According to the U.S. Census, the United States has 90,106 state and local governments. Tens of thousands of public employees read Reinventing Government and the books that followed. Surveys have shown that the use of performance measurement systems is widespread across state, county and municipal government. Yet only a handful of studies have sought to evaluate systematically the impact of Reinventing Government’s core ideas. Most have focused on just one, the idea highlighted in The Price of Government: budgeting for outcomes.

To evaluate the reinventing government movement primarily by assessing performance-based budgeting might seem a bit narrow. But paying close attention to the budgeting process is the key to understanding the impact of the entire enterprise. It reveals the difficulty of sustaining even successful innovations….

“Reinventing government was relatively blind to the role of legislatures in general,” says University of Maryland public policy professor and Governing columnist Donald F. Kettl. “There was this sense that the real problem was that good people were trapped in a bad system and that freeing administrators to do what they knew how to do best would yield vast improvements. What was not part of the debate was the role that legislatures might have played in creating those constraints to begin with.”

Over time, a pattern emerged. During periods of crisis, chief executives were able to implement performance-based budgeting. Often, it worked. But eventually legislatures pushed back….

There was another problem. Measuring results and insisting on accountability were supposed to spur creative problem-solving. But in practice, says Blauer, "whenever the budget was invoked in performance conversations, it automatically chilled innovative thinking; it chilled engagement." Agencies got defensive. Rather than focusing on solving hard problems, they focused on justifying past performance….

The fact that reinventing government never sparked a revolution puzzles Gaebler to this day. “Why didn’t more of my colleagues pick it up and run with it?” he asks. He thinks the answer may be that many public managers were simply too risk-averse….(More)”.