The California Report Card


The California Report Card (CRC) is an online platform developed by the CITRIS Data and Democracy Initiative at UC Berkeley and Lt. Governor Gavin Newsom that explores how smartphones and networks can enhance communication between the public and government leaders. The California Report Card allows visitors to grade issues facing California and to suggest issues for future report cards.

The CRC is a mobile-optimized web application that allows participants to advise the state government on timely policy issues.  We are exploring how technology can streamline and structure input from the public to elected officials, to provide them with timely feedback on the changing opinions and priorities of their constituents.

Version 1.0 of the CRC was launched in California on 28 January 2014. Since then, over 7000 people from almost every county have assigned over 20,000 grades to the State of California and suggested issues for the next report card.
Lt. Governor Gavin Newsom: “The California Report Card is a new way for me to keep an ear to the ground.  This new app/website makes it easy for Californians to assign grades and suggest pressing issues that merit our attention.  In the first few weeks, participants conveyed that they approve of our rollout of Obamacare but are very concerned about the future of California schools and universities.  I’m also gaining insights on issues ranging from speed limits to fracking to disaster preparedness.”
“This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables us and elected officials to hear directly how Californians feel.” – Matt Harris, Truck Driver, Ione, CA
“This is the first system that lets us directly express our feelings to government leaders.  I also really enjoy reading and grading the suggestions from other participants.”  – Patricia Ellis Pasko, Senior Care Giver, Apple Valley, CA
“Everyone knows that report cards can motivate learning by providing quantitative feedback on strengths and weaknesses.  Similarly, the California Report Card has potential to motivate Californians and their leaders to learn from each other about timely issues.  As researchers, the patterns of participation and how they vary over time and across geography will help us learn how to design future platforms.” – Prof. Ken Goldberg, UC Berkeley.
Participation takes only two minutes and works on all screens (best on mobile phones held vertically); just click “Participate.”
Anyone can participate by taking a few minutes to assign grades to the State of California on issues such as: Healthcare, Education, Marriage Equality, Immigrant Rights, and Marijuana Decriminalization. Participants are also invited to enter an online “cafe” to propose issues that they’d like to see included in the next report card (version 2.0 will come out later this Spring).
Lt. Gov. Gavin Newsom and UC Berkeley Professor Ken Goldberg reviewed the data and lessons learned from version 1.0 in a public forum at UC Berkeley on 20 March 2014 that included participants who actively contributed to identifying the most important issues for version 2.0. The event can be viewed at http://bit.ly/1kv6523.
We offer community outreach programs/workshops to train local leaders on how to use the CRC and how to reach and engage under-represented groups (low-income, rural, persons with disabilities, etc.). If you are interested in participating in or hosting a workshop, please contact Brandie Nonnecke at nonnecke@citris-uc.org.

New York Police Twitter Strategy Has Unforeseen Consequences


J. David Goodman in The New York Times: “The New York Police Department has long seen its crime-fighting strategies emulated across the country and around the world.

So when a departmental Twitter campaign, meant to elicit smiling snapshots, instead attracted tens of thousands of less flattering images of officers, it did not take long for the hashtag #myNYPD to spread far beyond the five boroughs.

By Wednesday, the public relations situation in New York City had sparked imitators from Los Angeles (#myLAPD) to Mexico (#MiPolicíaMexicana) and over the ocean to Greece (#myELAS), Germany (#DankePolizei) and France (#maPolice).

The images, including circles of police officers in riot gear poised to strike a man on a bench or hosing down protesters, closely resembled those posted on Tuesday by critics of the Police Department in New York, in which many of the most infamous moments in recent police history had been dredged up by Twitter users….”

The Right Colors Make Data Easier To Read


Sharon Lin and Jeffrey Heer at HBR Blog: “What is the color of money? Of love? Of the ocean? In the United States, most people respond that money is green, love is red and the ocean is blue. Many concepts evoke related colors — whether due to physical appearance, common metaphors, or cultural conventions. When colors are paired with the concepts that evoke them, we call these “semantically resonant color choices.”
Artists and designers regularly use semantically resonant colors in their work. And in the research we conducted with Julie Fortuna, Chinmay Kulkarni, and Maureen Stone, we found they can be remarkably important to data visualization.
Consider these charts of (fictional) fruit sales:
[Figure: two charts of fictional fruit sales, identical except for their color assignments]
The only difference between the charts is the color assignment. The left-hand chart uses colors from a default palette. The right-hand chart has been assigned semantically resonant colors. (In this case, the assignment was computed automatically using an algorithm that analyzes the colors in relevant images retrieved from Google Image Search using queries for each data category name.)
Now, try answering some questions about the data in each of these charts. Which fruit had higher sales: blueberries or tangerines? How about peaches versus apples? Which chart do you find easier to read?…
To make effective visualization color choices, you need to take a number of factors into consideration. To name just two: All the colors need to be suitably different from one another, for instance, so that readers can tell them apart – what’s called “discriminability.” You also need to consider what the colors look like to the color blind — roughly 8% of the U.S. male population! Could the colors be distinguished from one another if they were reprinted in black and white?
One easy way to assign semantically resonant colors is to use colors from an existing color palette that has been carefully designed for visualization applications (ColorBrewer offers some options) but assign the colors to data values in a way that best matches concept color associations. This is the basis of our own algorithm, which acquires images for each concept and then analyzes them to learn concept color associations. However, keep in mind that color associations may vary across cultures. For example, in the United States and many western cultures, luck is often associated with green (four-leaf clovers), while red can be considered a color of danger. However, in China, luck is traditionally symbolized with the color red.
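To make the assignment step concrete, here is a minimal sketch that pairs data categories with palette colors so that total semantic affinity is maximized. The affinity scores are invented stand-ins for the concept-color associations learned from image analysis, and the use of SciPy's Hungarian-algorithm solver is an illustrative assumption, not a description of the authors' implementation.

```python
# Sketch only: assign palette colors to categories by maximizing total
# semantic affinity. Scores are invented; the real algorithm learns them
# from images retrieved for each category name.
import numpy as np
from scipy.optimize import linear_sum_assignment

categories = ["blueberry", "tangerine", "peach", "apple"]
palette = ["#4c72b0", "#dd8452", "#f2c9a1", "#c44e52"]  # blue, orange, tan, red

# affinity[i, j]: how strongly category i evokes palette color j (0..1).
affinity = np.array([
    [0.9, 0.1, 0.1, 0.2],   # blueberries evoke blue
    [0.1, 0.8, 0.3, 0.4],   # tangerines evoke orange
    [0.1, 0.4, 0.7, 0.2],   # peaches evoke tan
    [0.2, 0.3, 0.2, 0.8],   # apples evoke red
])

# linear_sum_assignment minimizes total cost, so negate to maximize affinity.
rows, cols = linear_sum_assignment(-affinity)
for r, c in zip(rows, cols):
    print(f"{categories[r]:>9} -> {palette[c]}")
```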

Semantically resonant colors can reinforce perception of a wide range of data categories. We believe similar gains would likely be seen for other forms of visualizations like maps, scatterplots, and line charts. So when designing visualizations for presentation or analysis, consider color choice and ask yourself how well the colors resonate with the underlying data.”

Can Government Play Moneyball?


David Bornstein in the New York Times: “…For all the attention it’s getting inside the administration, evidence-based policy-making seems unlikely to become a headline grabber; it lacks emotional appeal. But it does have intellectual heft. And one group that has been doing creative work to give the message broader appeal is Results for America, which has produced useful teaching aids under the banner “Moneyball for Government,” building on the popularity of the book and movie about Billy Beane’s Oakland A’s, and the rise of data-driven decision making in major league baseball. (Watch their video explainers here and here.)
Results for America works closely with leaders across political parties and social sectors, to build awareness about evidence-based policy making — drawing attention to key areas where government could dramatically improve people’s lives by augmenting well-tested models. They are also chronicling efforts by local governments around the country, to show how an emerging group of “Geek Cities,” including Baltimore, Denver, Miami, New York, Providence and San Antonio, are using data and evidence to drive improvements in various areas of social policy like education, youth development and employment.
“It seems like common sense to use evidence about what works to get better results,” said Michele Jolin, Results for America’s managing partner. “How could anyone be against it? But the way our system is set up, there are so many loud voices pushing to have dollars spent and policy shaped in the way that works for them. There has been no organized constituency for things that work.”
“The debate in Washington is usually about the quantity of resources,” said David Medina, a partner in Results for America. “We’re trying to bring it back to talking about quality.”
Not everyone will find this change appealing. “When you have a longstanding social service policy, there’s going to be a network of [people and groups] who are organized to keep that money flowing regardless of whether evidence suggests it’s warranted,” said Daniel Stid. “People in social services don’t like to think they’re behaving like other organized interests — like dairy farmers or mortgage brokers — but it leads to tremendous inertia in public policy.”
Beyond the politics, there are practical obstacles to overcome, too. Federal agencies lack sufficient budgets for evaluation or a common definition for what constitutes rigorous evidence. (Any lobbyist can walk into a legislator’s office and claim to have solid data to support an argument.) Up-to-date evidence also needs to be packaged in accessible ways and made available on a timely basis, so it can be used to improve programs, rather than to threaten them. Governments need to build regular evaluations into everything they do — not just conduct big, expensive studies every 10 years or so.
That means developing new ways to conduct quick and inexpensive randomized studies using data that is readily available, said Haskins, who is investigating this approach. “We should be running 10,000 evaluations a year, like they do in medicine.” That’s the only way to produce the rapid trial-and-error learning needed to drive iterative program improvements, he added. (I reported on a similar effort being undertaken by the Coalition for Evidence-Based Policy.)
Results for America has developed a scorecard that ranks federal departments on how prepared they are to produce or incorporate evidence in their programs. It looks at whether a department has an office and a leader with the authority and budget to evaluate its programs. It asks: Does it make its data accessible to the public? Does it compile standards about what works and share them widely? Does it spend at least 1 percent of its budget evaluating its programs? And — most important — does it incorporate evidence in its big grant programs? For now, the Department of Education gets the top score.
The stakes are high. In 2011, for example, the Obama administration launched a process to reform Head Start, doing things like spreading best practices and forcing the worst programs to improve or lose their funding. This February, for the third time, the government released a list of Head Start providers (103 out of about 1,600) who will have to recompete for federal funding because of performance problems. That list represents tens of thousands of preschoolers, many of whom are missing out on the education they need to succeed in kindergarten — and life.
Improving flagship programs like Head Start, and others, is not just vital for the families they serve; it’s vital to restore trust in government. “I am a card-carrying member of the Republican Party and I want us to be governed well,” said Robert Shea, who pushed for better program evaluations as associate director of the Office of Management and Budget during the Bush administration, and continues to focus on this issue as chairman of the National Academy of Public Administration. “This is the most promising thing I know of to get us closer to that goal.”
“This idea has the prospect of uniting Democrats and Republicans,” said Haskins. “But it will involve a broad cultural change. It has to get down to the program administrators, board members and local staff throughout the country — so they know that evaluation is crucial to their operations.”
“There’s a deep mistrust of government and a belief that problems can’t be solved,” said Michele Jolin. “This movement will lead to better outcomes — and it will help people regain confidence in their public officials by creating a more effective, more credible way for policy choices to be made.”

The Transformative Impact of Data and Communication on Governance


Steven Livingston at Brookings: “How do digital technologies affect governance in areas of limited statehood – places and circumstances characterized by the absence of state provisioning of public goods and the enforcement of binding rules with a monopoly of legitimate force?  In the first post in this series I introduced the limited statehood concept and then described the tremendous growth in mobile telephony, GIS, and other technologies in the developing world.   In the second post I offered examples of the use of ICT in initiatives intended to fill at least some of the governance vacuum created by limited statehood.  With mobile phones, for example, farmers are informed of market conditions, have access to liquidity through M-Pesa and similar mobile money platforms….
This brings to mind another type of ICT governance initiative. Rather than fill in for or even displace the state, some ICT initiatives can strengthen governance capacity. Digital government – the use of digital technology by the state itself – is one important possibility. Other initiatives strengthen the state by exerting pressure. Countries with weak governance sometimes take the form of extractive states, ones that cater to the needs of an elite while leaving the majority of the population in poverty and without basic public services. This is what Daron Acemoglu and James A. Robinson call extractive political and economic institutions. Inclusive states, on the other hand, are pluralistic, bound by the rule of law, respectful of property rights, and, in general, accountable. Accountability mechanisms such as a free press and competitive multiparty elections are instrumental in discouraging extractive institutions. What ICT-based initiatives might lend a hand in strengthening accountability? We can point to three examples.

Example One: Using ICT to Protect Human Rights

Nonstate actors now use commercial, high-resolution remote sensing satellites to monitor weapons programs and human rights violations.  Amnesty International’s Remote Sensing for Human Rights offers one example, and Satellite Sentinel offers another.  Both use imagery from DigitalGlobe, an American remote sensing and geospatial content company.   Other organizations have used commercially available remote sensing imagery to monitor weapons proliferation.  The Institute for Science and International Security, a Washington-based NGO, revealed the Iranian nuclear weapons program in 2003 using commercial satellite imagery…

Example Two: Crowdsourcing Election Observation

Others have used mobile phones and GIS to crowdsource election observation. For the 2011 elections in Nigeria, The Community Life Project, a civil society organization, created ReclaimNaija, an election-process monitoring system that relied on GIS and amateur observers with mobile phones to monitor the elections. Each of the red dots represents an aggregation of geo-located incidents reported to the ReclaimNaija platform. In the live map, clicking on a dot disaggregates the reports, eventually taking the reader to individual reports. Rigorous statistical analysis of ReclaimNaija results and the elections suggests it contributed to the effectiveness of the election process.

ReclaimNaija: Election Incident Reporting System Map
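The aggregation described above can be sketched in a few lines: bin geo-located reports into grid cells so that each map dot stands for a cluster of nearby incidents, which can later be expanded back into the individual reports. This is only an illustration of the idea; the coordinates and reports are invented, and the snippet is not ReclaimNaija code.

```python
# Illustrative sketch of aggregating geo-located incident reports into map dots.
from collections import defaultdict

reports = [
    {"lat": 6.4510, "lon": 3.3910, "text": "Late opening of polling unit"},
    {"lat": 6.4530, "lon": 3.3930, "text": "Shortage of ballot papers"},
    {"lat": 9.0765, "lon": 7.3986, "text": "Peaceful, orderly voting"},
]

def grid_key(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to a grid cell roughly 1 km on a side."""
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

clusters = defaultdict(list)
for r in reports:
    clusters[grid_key(r["lat"], r["lon"])].append(r)

# Each cluster becomes one dot on the map; clicking it would list its reports.
for center, items in clusters.items():
    print(center, f"{len(items)} report(s)")
```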

Example Three: Using Genetic Analysis to Identify War Crimes

In recent years, more powerful computers have led to major breakthroughs in biomedical science.  The reduction in cost of analyzing the human genome has actually outpaced Moore’s Law.  This has opened up new possibilities for the use of genetic analysis in forensic anthropology.   In Guatemala, the Balkans, Argentina, Peru and in several other places where mass executions and genocides took place, forensic anthropologists are using genetic analysis to find evidence that is used to hold the killers – often state actors – accountable…”

Wikipedia Use Could Give Insights To The Flu Season


Agata Blaszczak-Boxe in Huffington Post: “By monitoring the number of times people look for flu information on Wikipedia, researchers may be better able to estimate the severity of a flu season, according to a new study.
Researchers created a new data-analysis system that looks at visits to Wikipedia articles, and found the system was able to estimate flu levels in the United States up to two weeks before the Centers for Disease Control and Prevention released its flu data.
Looking at data spanning six flu seasons between December 2007 and August 2013, the new system estimated the peak flu week better than Google Flu Trends, another data-based system. The Wikipedia-based system accurately estimated the peak flu week in three out of six seasons, while the Google-based system got only two right, the researchers found.
“We were able to get really nice estimates of what the [flu] level is in the population,” said study author David McIver, a postdoctoral fellow at Boston Children’s Hospital.
The new system examined visits to Wikipedia articles that included terms related to flulike illnesses, whereas Google Flu Trends looks at searches typed into Google. The researchers analyzed the data from Wikipedia on how many times in an hour a certain article was viewed, and combined their data with flu data from the CDC, using a model they created.
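The study's model is more sophisticated than this, but the sketch below shows the general shape of such a system: fit a simple regression from weekly Wikipedia article view counts to CDC influenza-like-illness (ILI) levels, then use the current week's view counts to estimate the level before official surveillance data are published. The article choices, numbers, and plain least-squares fit are illustrative assumptions, not details from the paper.

```python
# Sketch only: estimate weekly flu activity from Wikipedia page-view counts.
import numpy as np

# Hypothetical weekly view counts for a few flu-related articles (rows = weeks,
# columns = "Influenza", "Fever", "Oseltamivir").
views = np.array([
    [12000, 3000,  800],
    [18000, 4500, 1200],
    [30000, 9000, 2600],
    [22000, 6000, 1700],
], dtype=float)

# Matching CDC ILI levels (% of outpatient visits) for the same weeks.
ili = np.array([1.4, 2.1, 3.8, 2.6])

# Ordinary least squares with an intercept column.
X = np.hstack([views, np.ones((views.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, ili, rcond=None)

# Estimate this week's ILI level from view counts available right now,
# well before the official surveillance report comes out.
this_week = np.array([26000.0, 7500.0, 2100.0, 1.0])
print(f"Estimated ILI level: {this_week @ coef:.2f}%")
```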
The research team wanted to use a database that is accessible to everyone and create a system that could be more accurate than Google Flu Trends, which has flaws. For instance, during the swine flu pandemic in 2009, and during the 2012-2013 influenza season, Google Flu Trends got a bit “confused,” and overestimated flu numbers because of increased media coverage focused on the two illnesses, the researchers said.
When a pandemic strikes, people search for news stories related to the pandemic itself, but this doesn’t mean that they have the flu. In general, the problem with Internet-based estimation systems is that it is practically impossible to tell whether people are looking up an illness because they themselves are sick or simply because they are curious or concerned, the researchers said.
In the new system, the researchers tried to overcome this issue by including a number of Wikipedia articles “to act as markers for general background-level activity of normal usage of Wikipedia,” the researchers wrote in the study. However, just like any other data-based system, the Wikipedia system is not immune to the issues related to figuring out the actual motivation of someone checking information related to the flu…
The study is published … in the journal PLOS Computational Biology.”

The Open Data 500: Putting Research Into Action


TheGovLab Blog: “On April 8, the GovLab made two significant announcements. At an open data event in Washington, DC, I was pleased to announce the official launch of the Open Data 500, our study of 500 companies that use open government data as a key business resource. We also announced that the GovLab is now planning a series of Open Data Roundtables to bring together government agencies with the businesses that use their data – and that five federal agencies have agreed to participate. Video of the event, which was hosted by the Center for Data Innovation, is available here.
The Open Data 500, funded by the John S. and James L. Knight Foundation, is the first comprehensive study of U.S.-based companies that rely on open government data.  Our website at OpenData500.com includes searchable, sortable information on 500 of these companies.  Our data about them comes from responses to a survey we’ve sent to all the companies (190 have responded) and what we’ve been able to learn from research using public information.  Anyone can now explore this website, read about specific companies or groups of companies, or download our data to analyze it. The website features an interactive tool on the home page, the Open Data Compass, that shows the connections between government agencies and different categories of companies visually.
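As a hypothetical example of the kind of analysis the downloadable data supports, the snippet below loads a local export of the company list and counts companies per category. The file name and column names are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch: summarize a downloaded Open Data 500 company list.
import csv
from collections import Counter

with open("open_data_500_companies.csv", newline="") as f:
    companies = list(csv.DictReader(f))

# Count companies per business category to see which sectors lean most heavily
# on open government data.
by_category = Counter(c["category"] for c in companies)
for category, n in by_category.most_common(10):
    print(f"{category:30s} {n}")
```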
We began work on the Open Data 500 study last fall with three goals. First, we wanted to collect information that will ultimately help calculate the economic value of open data – an important question for policymakers and others. Second, we wanted to present examples of open data companies to inspire others to use this important government resource in new ways. And third – and perhaps most important – we’ve hoped that our work will be a first step in creating a dialogue between the government agencies that provide open data and the companies that use it.
That dialogue is critically important to make government open data more accessible and useful. While open government data is a huge potential resource, and federal agencies are working to make it more available, it’s too often trapped in legacy systems that make the data difficult to find and to use. To solve this problem, we plan to connect agencies to their clients in the business community and help them work together to find and liberate the most valuable datasets.
We now plan to convene and facilitate a series of Open Data Roundtables – a new approach to bringing businesses and government agencies together. In these Roundtables, which will be informed by the Open Data 500 study, companies and the agencies that provide their data will come together in structured, results-oriented meetings that we will facilitate. We hope to help figure out what can be done to make the most valuable datasets more available and usable quickly.
We’ve been gratified by the immediate positive response to our plan from several federal agencies. The Department of Commerce has committed to help plan and participate in the first of our Roundtables, now being scheduled for May. By the time we announced our launch on April 8, the Departments of Labor, Transportation, and Treasury had also signed up. And at the end of the launch event, the Deputy Chief Information Officer of the USDA publicly committed her agency to participate as well…”

PatientsLikeMe Gives Genentech Full Access


Susan Young Rojahn in MIT Technology Review: “PatientsLikeMe, the largest online network for patients, has established its first broad partnership with a drug company. Genentech, the South San Francisco biotechnology company bought by Roche in 2009, now has access to PatientsLikeMe’s full database for five years.
PatientsLikeMe is an online network of some 250,000 people with chronic diseases who share information about symptoms, treatments, and coping mechanisms. The largest communities within the network are built around fibromyalgia, multiple sclerosis, and amyotrophic lateral sclerosis (ALS), but as many as 2,000 conditions are represented in the system. The hope is that the information shared by people with chronic disease will help the life sciences industry identify unmet needs in patients and generate medical evidence, says co-founder Ben Heywood.
The agreement with Genentech is not the first collaboration between a life sciences company and PatientsLikeMe, named one of 50 Disruptive Companies in 2012 by MIT Technology Review, but it is the broadest. Previous collaborations were more limited in scope, says Heywood, focusing on a particular research question or a specific disease area. The deal with Genentech is an all-encompassing subscription to information posted by the entire PatientsLikeMe population, without the need for new contracts and new business deals if a research program shifts direction from its original focus. “This allows for a much more rapid real-time use of the data,” says Heywood.
In 2010, PatientsLikeMe demonstrated some of its potential to advance medicine. With data from its community of ALS patients, who suffer from a progressive and fatal neurological disease, the company could see that a drug under study was not effective (see “Patients’ Social Network Predicts Drug Outcomes”). Those findings were corroborated by an academic study published that year. Another area of medicine the network can shed light on is the quality of care patients receive, including whether or not doctors are following guidelines established by medical societies for how patients are treated. “As we try to shift to patient-centered health care, we have to understand what [patients] value,” says Heywood.
In exchange for an undisclosed payment to PatientsLikeMe, Genentech has a five-year subscription to the data in the online network. The data will be de-identified – that is, Genentech will not see patient names or email addresses. Heywood says his company is hoping to establish broad agreements with other life sciences companies soon.”
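In this context, “de-identified” means that direct identifiers are stripped before records are shared. The toy sketch below illustrates only that narrow idea; the field names are invented, and real de-identification involves much more (quasi-identifiers, re-identification risk, and governance controls).

```python
# Toy illustration of stripping direct identifiers from a shared record.
DIRECT_IDENTIFIERS = {"name", "email"}

def deidentify(record: dict) -> dict:
    """Return a copy of a patient record without direct identifiers."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "email": "jane@example.com",
          "condition": "ALS", "symptom_score": 32}
print(deidentify(record))  # {'condition': 'ALS', 'symptom_score': 32}
```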

'Hackathons' Aim to Solve Health Care's Ills


Amy Dockser Marcus in the Wall Street Journal: “Hackathons, the high-octane, all-night problem-solving sessions popularized by the software-coding community, are making their way into the more traditional world of health care. At Massachusetts Institute of Technology, a recent event called Hacking Medicine’s Grand Hackfest attracted more than 450 people to work for one weekend on possible solutions to problems involving diabetes, rare diseases, global health and information technology used at hospitals.
Health institutions such as New York-Presbyterian Hospital and Brigham and Women’s Hospital in Boston have held hackathons. MIT, meantime, has co-sponsored health hackathons in India, Spain and Uganda.
Hackathons of all kinds are increasingly popular. Intel Corp.  recently bought a group that organizes them. Companies hoping to spark creative thinking sponsor them. And student-run hackathons have turned into intercollegiate competitions.
But in health care, where change typically comes much more slowly than in Silicon Valley, they represent a cultural shift. To solve a problem, scientists and doctors can spend years painstakingly running experiments, gathering data, applying for grants and publishing results. So the idea of an event where people give two-minute pitches describing a problem, then join a team of strangers to come up with a solution in the course of one weekend is radical.
“We are not trying to replace the medical culture with Facebook culture,” said Elliot Cohen, who wore a hoodie over a button-down dress shirt at the MIT event in March and helped start MIT Hacking Medicine while at business school. “But we want to try to blend them more.”
Mr. Cohen co-founded and is chief technology officer at PillPack, a pharmacy that sends customers personalized packages of their medications, a company that started at a hackathon.
At MIT’s health-hack, physicians, researchers, students and a smattering of people wearing Google Glass sprawled on the floor of MIT’s Media Lab and at tables with a view of the Boston skyline. At one table, a group of college students, laptops plastered with stickers, pulled juice boxes and snacks out of backpacks, trash piling up next to them as they feverishly wrote code.
Nupur Garg, an emergency-room physician and one of the eventual winners, finished her hospital shift at 2 a.m. Saturday in New York, drove to Boston and arrived at MIT in time to pitch the need for a way to capture images of patients’ ears and throats that can be shared with specialists to help make diagnoses. She and her team immediately started working on a prototype for the device, testing early versions on anyone who stopped by their table.
Dr. Garg and teammate Nancy Liang, who runs a company that makes Web apps for 3-D printers, caught a few hours of sleep in a dorm room Saturday night. They came up with the idea for their product’s name—MedSnap—later that night while watching students use cellphone cameras to send SnapChats to one another. “There was no time to conduct surveys on what was the best name,” said Ms. Liang. “Many ideas happen after midnight.”
Winning teams in each category won $1,000, as well as access to the hackathon’s sponsors for advice and pilot projects.
Yet even supporters say hackathons can’t solve medicine’s challenges overnight. Harlan Krumholz, a professor at Yale School of Medicine who ran a trial lasting many months that found telemonitoring didn’t reduce hospitalizations or deaths of cardiology patients, said he supports the problem-solving ethos of hackathons. But he added that “improvements require a long-term commitment, not just a weekend.”
Ned McCague, a data scientist at Blue Cross Blue Shield of Massachusetts, served as a mentor at the hackathon. He said he wasn’t representing his employer, but he used his professional experiences to push groups to think about the potential customer. “They have a good idea and are excited about it, but they haven’t thought about who is paying for it,” he said.
Zen Chu, a senior lecturer in health-care innovation and entrepreneur-in-residence at MIT, and one of the founders of Hacking Medicine, said more than a dozen startups conceived since the first hackathon, in 2011, are still in operation. Some received venture-capital funding.
The upsides of hackathons were made clear to Sharon Moalem, a physician who studies rare diseases. He had spent years developing a mobile app that can take pictures of faces to help diagnose rare genetic conditions, but was stumped on how to give the images a standard size scale to make comparisons. At the hackathon, Dr. Moalem said he was approached by an MIT student who suggested sticking a coin on the subject’s forehead. Since quarters have a standard measurement, it “creates a scale,” said Dr. Moalem.
Dr. Moalem said he had never considered such a simple, elegant solution. The team went on to write code to help standardize facial measurements based on the dimensions of a coin and a credit card.
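The arithmetic behind the coin trick is simple: a US quarter is 24.26 mm across, so measuring the coin's diameter in pixels gives the image's scale, and any other pixel distance in the same photo can then be converted to millimeters. The sketch below illustrates that idea; it is not the team's actual code, and the pixel values are invented.

```python
# Sketch of the coin-as-scale idea: derive millimeters-per-pixel from a coin
# of known size, then convert facial measurements taken in pixels.
QUARTER_DIAMETER_MM = 24.26  # standard US quarter diameter

def mm_per_pixel(coin_diameter_px: float) -> float:
    """Image scale implied by the coin's apparent size."""
    return QUARTER_DIAMETER_MM / coin_diameter_px

def to_mm(distance_px: float, coin_diameter_px: float) -> float:
    """Convert a pixel distance (e.g., inter-pupillary distance) to mm."""
    return distance_px * mm_per_pixel(coin_diameter_px)

# Example: the coin spans 120 px and the eyes are 310 px apart in the photo.
print(f"Inter-pupillary distance: {to_mm(310, 120):.1f} mm")
```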
“Sometimes when you are too close to something, you stop seeing solutions, you only see problems,” Dr. Moalem said. “I needed to step outside my own silo.”

Book Review: 'The Rule of Nobody' by Philip K. Howard


Stuart Taylor Jr in the Wall Street Journal: “Amid the liberal-conservative ideological clash that paralyzes our government, it’s always refreshing to encounter the views of Philip K. Howard, whose ideology is common sense spiked with a sense of urgency. In “The Rule of Nobody,” Mr. Howard shows how federal, state and local laws and regulations have programmed officials of both parties to follow rules so detailed, rigid and, often, obsolete as to leave little room for human judgment. He argues passionately that we will never solve our social problems until we abandon what he calls a misguided legal philosophy of seeking to put government on regulatory autopilot. He also predicts that our legal-governmental structure is “headed toward a stall and then a frightening plummet toward insolvency and political chaos.”
Mr. Howard, a big-firm lawyer who heads the nonpartisan government-reform coalition Common Good, is no conventional deregulator. But he warns that the “cumulative complexity” of the dense rulebooks that prescribe “every nuance of how law is implemented” leaves good officials without the freedom to do what makes sense on the ground. Stripped of the authority that they should have, he adds, officials have little accountability for bad results. More broadly, he argues that the very structure of our democracy is so clogged by deep thickets of dysfunctional law that it will only get worse unless conservatives and liberals alike cast off their distrust of human discretion.
The rulebooks should be “radically simplified,” Mr. Howard says, on matters ranging from enforcing school discipline to protecting nursing-home residents, from operating safe soup kitchens to building the nation’s infrastructure: Projects now often require multi-year, 5,000-page environmental impact statements before anything can begin to be constructed. Unduly detailed rules should be replaced by general principles, he says, that take their meaning from society’s norms and values and embrace the need for official discretion and responsibility.
Mr. Howard serves up a rich menu of anecdotes, including both the small-scale activities of a neighborhood and the vast administrative structures that govern national life. After a tree fell into a stream and caused flooding during a winter storm, Franklin Township, N.J., was barred from pulling the tree out until it had spent 12 days and $12,000 for the permits and engineering work that a state environmental rule required for altering any natural condition in a “C-1 stream.” The “Volcker Rule,” designed to prevent banks from using federally insured deposits to speculate in securities, was shaped by five federal agencies and countless banking lobbyists into 963 “almost unintelligible” pages. In New York City, “disciplining a student potentially requires 66 separate steps, including several levels of potential appeals”; meanwhile, civil-service rules make it virtually impossible to terminate thousands of incompetent employees. Children’s lemonade stands in several states have been closed down for lack of a vendor’s license.
Conservatives as well as liberals like detailed rules—complete with tedious forms, endless studies and wasteful legal hearings—because they don’t trust each other with discretion. Corporations like them because they provide not only certainty but also “a barrier to entry for potential competitors,” by raising the cost of doing business to prohibitive levels for small businesses with fresh ideas and other new entrants to markets. Public employees like them because detailed rules “absolve them of responsibility.” And, adds Mr. Howard, “lawsuits [have] exploded in this rules-based regime,” shifting legal power to “self-interested plaintiffs’ lawyers,” who have learned that they “could sue for the moon and extract settlements even in cases (as with some asbestos claims) that were fraudulent.”
So habituated have we become to such stuff, Mr. Howard says, that government’s “self-inflicted ineptitude is accepted as a state of nature, as if spending an average of eight years on environmental reviews—which should be a national scandal—were an unavoidable mountain range.” Common-sensical laws would place outer boundaries on acceptable conduct based on reasonable norms that are “far better at preventing abuse of power than today’s regulatory minefield.”
As Mr. Howard notes, his book is part of a centuries-old rules-versus-principles debate. The philosophers and writers whom he quotes approvingly include Aristotle, James Madison, Isaiah Berlin and Roscoe Pound, a prominent Harvard law professor and dean who condemned “mechanical jurisprudence” and championed broad official discretion. Berlin, for his part, warned against “monstrous bureaucratic machines, built in accordance with the rules that ignore the teeming variety of the living world, the untidy and asymmetrical inner lives of men, and crush them into conformity.” Mr. Howard juxtaposes today’s roughly 100 million words of federal law and regulations with Madison’s warning that laws should not be “so voluminous that they cannot be read, or so incoherent that they cannot be understood.”…