New Report Finds Cost-Benefit Analyses Improve Budget Choices & Taxpayer Results


Press Release: “A new report shows cost-benefit analyses have helped states make better investments of public dollars by identifying programs and policies that deliver high returns. However, the majority of states are not yet consistently using this approach when making critical decisions. This 50-state look at cost-benefit analysis, a method that compares the expense of public programs to the returns they deliver, was released today by the Pew-MacArthur Results First Initiative, a project of The Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation.

The study, “States’ Use of Cost-Benefit Analysis: Improving Results for Taxpayers,” comes at a time when states are under continuing pressure to direct limited dollars toward the most cost-effective programs and policies while curbing spending on those that do not deliver. The report is the first comprehensive study of how all 50 states and the District of Columbia analyze the costs and benefits of programs and policies, report findings, and incorporate the assessments into decision-making. It identifies key challenges states face in conducting and using the analyses and offers strategies to overcome those obstacles. The study includes a review of state statutes, a search for cost-benefit analyses released between 2008 and 2011, and interviews with legislators, legislative and program evaluation staff, executive officials, report authors, and agency officials.”
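To make the comparison at the heart of the method concrete, here is a minimal sketch of how a benefit-cost ratio can be computed and used to rank programs. The program names and dollar figures below are invented for illustration and do not come from the Pew-MacArthur report.

```python
# Hypothetical benefit-cost comparison; every figure here is invented for illustration.
programs = {
    # program: (cost per participant, estimated long-run benefits per participant)
    "drug treatment in prison": (3_100, 7_800),
    "intensive supervision probation": (4_500, 4_200),
    "early childhood home visiting": (6_000, 14_500),
}

# Rank programs by dollars returned per dollar spent, highest first.
for name, (cost, benefit) in sorted(
    programs.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True
):
    ratio = benefit / cost   # dollars returned per dollar spent
    net = benefit - cost     # net benefit per participant
    print(f"{name}: ${ratio:.2f} returned per $1 spent, net ${net:,} per participant")
```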

The Internet generation will learn to let go


Julian B. Gewirtz and Adam B. Kern in The Washington Post: “Ours is the first generation to have grown up with the Internet. The first generation that got suspended from school because of a photo of underage drinking posted online. The first generation that could talk in chat rooms to anyone, anywhere, without our parents knowing. The first generation that has been “tracked” and “followed” and “shared” since childhood.
All this data will remain available forever — both to the big players (tech companies, governments) and to our friends, our sort-of friends and the rest of civil society. This fact is not really new, but our generation will confront the latter on a scale beyond that experienced by previous generations…
Certainly there will be many uses for information, such as health data, that will wind up governed by law. But so many other uses cannot be predicted or legislated, and laws themselves have to be informed by values. It is therefore critical that people establish, with their actions and expectations, cultural norms that prevent their digital selves from imprisoning their real selves.
We see three possible paths: One, people become increasingly restrained about what they share and do online. Two, people become increasingly restrained about what they do, period. Three, we learn to care less about what people did when they were younger, less mature or otherwise different.
The first outcome seems unproductive. There is no longer much of an Internet without sharing, and one of the great benefits of the Internet has been its ability to nurture relationships and connections that previously had been impossible. Withdrawal is unacceptable. Fear of the digital future should not drive us apart.
The second option seems more deeply unsettling. Childhood, adolescence, college — the whole process of growing up — is, as thinkers from John Locke to Dr. Spock have written, a necessarily experimental time. Everyone makes at least one mistake, and we’d like to think that process continues into adulthood. Creativity should not be overwhelmed by the fear of what people might one day find unpalatable.
This leaves the third outcome: the idea that we must learn to care less about what people did when they were younger or otherwise different. In an area where regulations, privacy policies and treaties may take decades to catch up to reality, our generation needs to take the lead in negotiating a “cultural treaty” endorsing a new value, related to privacy, that secures our ability to have a past captured in data that is not held to be the last word but seen in light of our having grown up in a way that no one ever has before.
Growing up, that is, on the record.”

Copyright Done Right? Finland To Vote On Crowdsourced Regulations


Fast-Feed: “Talk about crowdsourcing: Finland is set to vote on a set of copyright laws that weren’t proposed by government or content-making agencies: They were drafted by citizens.
Finns are able to propose laws that the government must consider if 50,000 supporters sign a petition calling for the law within six months. A set of copyright regulations that are fairer to everyone just passed that threshold, and TorrentFreak.com reports that a government vote is likely in early 2014. The new laws were created with the help of the Finnish Electronic Frontier Foundation, and the body has promised that it will maintain pressure on the political system so that the law will actually be changed.
The proposed new laws would decriminalize file sharing and prevent house searches and surveillance of pirates. TorrentFreak reminds us of the international media outcry last year when, during a police raid, a 9-year-old girl’s laptop was confiscated on the grounds that she had stolen copyrighted content. Finland’s existing copyright laws, under what’s called the Lex Karpela amendment, are very strict: they criminalize breaking DRM for copying purposes and even bar discussion of the technology for doing so. The laws have been criticized by activists and observers for their strictness and infringement upon freedom of speech.”

The Danger of Human Rights Proliferation


Jacob Mchangama and Guglielmo Verdirame in Foreign Affairs on “When Defending Liberty, Less Is More”: “If human rights were a currency, its value would be in free fall, thanks to a gross inflation in the number of human rights treaties and nonbinding international instruments adopted by international organizations over the last several decades. These days, this currency is sometimes more likely to buy cover for dictatorships than protection for citizens. Human rights once enshrined the most basic principles of human freedom and dignity; today, they can include anything from the right to international solidarity to the right to peace.
Consider just how enormous the body of binding human rights law has become. The Freedom Rights Project, a research group that we co-founded, counts a full 64 human-rights-related agreements under the auspices of the United Nations and the Council of Europe. A member state of both of these organizations that has ratified all these agreements would have to comply with 1,377 human rights provisions (although some of these may be technical rather than substantive). Add to this the hundreds of non-treaty instruments, such as the resolutions of the UN General Assembly and Human Rights Council (HRC). The aggregate body of human rights law now has all the accessibility of a tax code.
Supporters of human rights should worry about this explosion of regulation. If people are to demand human rights, then they must first be able to understand them — a tall order given the current bureaucratic tangle of administrative regulation…”

Metadata Liberation Movement


Holman Jenkins in the Wall Street Journal: “The biggest problem, then, with metadata surveillance may simply be that the wrong agencies are in charge of it. One particular reason why this matters is that the potential of metadata surveillance might actually be quite large but is being squandered by secret agencies whose narrow interest is only looking for terrorists….
“Big data” is only as good as the algorithms used to find out things worth finding out. The efficacy and refinement of big-data techniques are advanced by repetition, by giving more chances to find something worth knowing. Bringing metadata out of its black box wouldn’t only be a way to improve public trust in what government is doing. It would be a way to get more real value for society out of techniques that are being squandered on a fairly minor threat.
Bringing metadata out of the black box would open up new worlds of possibility—from anticipating traffic jams to locating missing persons after a disaster. It would also create an opportunity to make big data more consistent with the constitutional prohibition of unwarranted search and seizure. In the first instance, with the computer withholding identifying details of the individuals involved, any red flag could be examined by a law-enforcement officer to see, based on accumulated experience, whether the indication is of interest.
If so, a warrant could be obtained to expose the identities involved. If not, the record could immediately be expunged. All this could take place in a reasonably aboveboard, legal fashion, open to inspection in court when and if charges are brought or—this would be a good idea—a court is informed of investigations that led to no action.
Our guess is that big data techniques would pop up way too many false positives at first, and only considerable learning and practice would allow such techniques to become a useful tool. At the same time, bringing metadata surveillance out of the shadows would help the Googles, Verizons and Facebooks defend themselves from a wholly unwarranted suspicion that user privacy is somehow better protected by French or British or (heavens) Chinese companies from their own governments than U.S. data is from the U.S. government.
Most of all, it would allow these techniques to be put to work on solving problems that are actual problems for most Americans, which terrorism isn’t.”
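The anonymize-first, unmask-only-with-a-warrant workflow Jenkins outlines can be summarized in a few lines of code. The sketch below is purely illustrative and rests on assumed structures (a pseudonymous record, an identity vault, a boolean warrant decision); it is not any agency’s actual system.

```python
from dataclasses import dataclass

@dataclass
class MetadataRecord:
    pseudonym: str   # identity withheld behind an opaque token
    timestamp: float
    flagged: bool    # set by whatever pattern analysis raised the red flag

# Hypothetical identity vault: pseudonym -> real identity, openable only under a warrant.
identity_vault = {"token-0421": "subscriber record (placeholder)"}

def review(record: MetadataRecord, warrant_granted: bool):
    """Examine a flagged record; unmask only under a warrant, otherwise expunge it."""
    if not record.flagged:
        return None                                  # nothing of interest; identity never exposed
    if warrant_granted:
        return identity_vault.get(record.pseudonym)  # court-approved unmasking
    identity_vault.pop(record.pseudonym, None)       # expunge immediately, as the proposal suggests
    return None

print(review(MetadataRecord("token-0421", 1700000000.0, True), warrant_granted=False))  # -> None
```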

Predictive Policing: Don’t even think about it


The Economist: “PredPol is one of a range of tools using better data, more finely crunched, to predict crime. They seem to promise better law enforcement. But they also bring worries about privacy, and about justice systems run by machines, not people.
Criminal offences, like infectious disease, form patterns in time and space. A burglary in a placid neighbourhood represents a heightened risk to surrounding properties; the threat shrinks swiftly if no further offences take place. These patterns have spawned a handful of predictive products which seem to offer real insight. During a four-month trial in Kent, 8.5% of all street crime occurred within PredPol’s pink boxes, with plenty more next door to them; predictions from police analysts scored only 5%. An earlier trial in Los Angeles saw the machine score 6% compared with human analysts’ 3%.
Intelligent policing can convert these modest gains into significant reductions in crime…
Predicting and forestalling crime does not solve its root causes. Positioning police in hotspots discourages opportunistic wrongdoing, but may encourage other criminals to move to less likely areas. And while data-crunching may make it easier to identify high-risk offenders—about half of American states use some form of statistical analysis to decide when to parole prisoners—there is little that it can do to change their motivation.
Misuse and overuse of data can amplify biases… But mathematical models might make policing more equitable by curbing prejudice.”
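The pattern described above, in which risk jumps near a recent offence and then fades if nothing follows, is commonly modelled with a kernel that decays over space and time. The toy sketch below illustrates that general idea; the offence data, grid cells, and decay constants are arbitrary assumptions, and it is not PredPol’s proprietary algorithm.

```python
import math

def hotspot_score(cell, offences, now, dist_scale=200.0, time_scale=7.0):
    """Toy risk score for a grid cell: each past offence adds risk that decays
    exponentially with distance (metres) and with time since the offence (days)."""
    score = 0.0
    for (x, y, day) in offences:
        distance = math.hypot(cell[0] - x, cell[1] - y)
        age = now - day
        score += math.exp(-distance / dist_scale) * math.exp(-age / time_scale)
    return score

# Invented offence history and candidate cells; flag the top scorers (the "pink boxes") for patrol.
offences = [(120, 340, 1.0), (150, 360, 3.0), (900, 900, 2.0)]
cells = [(100, 350), (500, 500), (900, 910)]
ranked = sorted(cells, key=lambda c: hotspot_score(c, offences, now=4.0), reverse=True)
print(ranked[:2])   # the two highest-risk cells to prioritise
```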

9 models to scale open data – past, present and future


Open Knowledge Foundation Blog: “The possibilities of open data have been enthralling us for 10 years… But that excitement isn’t what matters in the end. What matters is scale – which organisational structures will make this movement explode? This post quickly and provocatively goes through some that haven’t worked (yet!) and some that have.
Ones that are working now
1) Form a community to enter in new data. Open Street Map and MusicBrainz are two big examples. It works as the community is the originator of the data. That said, neither has dominated its industry as much as I thought they would have by now.
2) Sell tools to an upstream generator of open data. This is what CKAN does for central Governments (and the new ScraperWiki CKAN tool helps with). It’s what mySociety does, when selling FixMyStreet installs to local councils, thereby publishing their potholes as RSS feeds.
3) Use open data (quietly). Every organisation does this and never talks about it. It’s key to quite old data resellers like Bloomberg. It is what most of ScraperWiki’s professional services customers ask us to do. The value to society is enormous and invisible. The big flaw is that it doesn’t help scale supply of open data.
4) Sell tools to downstream users. This isn’t necessarily open data specific – existing software like spreadsheets and Business Intelligence can be used with open or closed data. Lots of open data is on the web, so tools like the new ScraperWiki which work well with web data are particularly suited to it.
Ones that haven’t worked
5) Collaborative curation. ScraperWiki started as an audacious attempt to create an open data curation community, based on editing scraping code in a wiki. In its original form (now called ScraperWiki Classic), this didn’t scale… With a few exceptions, notably OpenCorporates, there aren’t yet open data curation projects.
6) General purpose data marketplaces, particularly ones that are mainly reusing open data, haven’t taken off. They might do one day, however I think they need well-adopted higher level standards for data formatting and syncing first (perhaps something like dat, perhaps something based on CSV files).
Ones I expect more of in the future
These are quite exciting models which I expect to see a lot more of.
7) Give labour/money to upstream to help them create better data. This is quite new. The only, and most excellent, example of it is the UK’s National Archive curating the Statute Law Database. They do the work with the help of staff seconded from commercial legal publishers and other parts of Government.
It’s clever because it generates money for upstream, which people trust the most, and which has the most ability to improve data quality.
8) Viral open data licensing. MySQL made lots of money this way, offering proprietary dual licenses of GPLd software to embedded systems makers. In data this could use OKFN’s Open Database License, and organisations would pay when they wanted to mix the open data with their own closed data. I don’t know anyone actively using it, although Chris Taggart from OpenCorporates mentioned this model to me years ago.
9) Corporations release data for strategic advantage. Companies are starting to release their own data for strategic gain. This is very new. Expect more of it.”

BaltimoreCode.org


Press Release: “The City of Baltimore’s Chief Technology Officer Chris Tonjes and the non-partisan, non-profit OpenGov Foundation announced today the launch of BaltimoreCode.org, a free software platform that empowers all Baltimore residents to discover, access, and use local laws when they want, and how they want.

BaltimoreCode.org lifts and ‘liberates’ the Baltimore City Charter and Code from unalterable, often hard-to-find online files—such as PDFs—by presenting them in user-friendly, organized, and modern website formats. This straightforward switch delivers significant results: more clarity, context, and public understanding of the laws’ impact on Baltimore citizens’ daily lives. For the first time, BaltimoreCode.org allows uninhibited reuse of City law data, which everyday Baltimore residents can use, share, and spread as they see fit. Simply put, BaltimoreCode.org gives citizens the information they need, on their terms.”
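As a hypothetical illustration of the kind of transformation the press release describes, turning flat document text into structured data that a website or anyone else can reuse, here is a minimal sketch. The sample section text, the regular expression, and the JSON shape are all invented and are not BaltimoreCode.org’s actual data model.

```python
import json
import re

# Hypothetical plain text standing in for content extracted from a PDF of the City Code.
raw = """\
Section 1-1. Definitions.
In this article, "agency" means any department of the City.
Section 1-2. Scope.
This article applies to all City agencies.
"""

# Split the flat text into structured sections that can be rendered, linked, and reused.
pattern = re.compile(
    r"^Section (?P<number>[\d-]+)\. (?P<title>[^\n]+)\n(?P<body>.*?)(?=^Section |\Z)",
    re.MULTILINE | re.DOTALL,
)
sections = [
    {"number": m["number"], "title": m["title"].rstrip("."), "body": m["body"].strip()}
    for m in pattern.finditer(raw)
]
print(json.dumps(sections, indent=2))
```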

The Republic of Choosing


William H. Simon in the Boston Review: “Cass Sunstein went to Washington with the aim of putting some theory into practice. As administrator of the Office of Information and Regulatory Affairs (OIRA) during President Obama’s first term, he drew on the behavioral economics he helped develop as an academic. In his new book, Simpler, he reports on these efforts and elaborates a larger vision in which they exemplify “the future of government.”
Simpler reports some notable achievements, but it exaggerates the practical value of the behaviorist toolkit. The Obama administration’s most important policy initiatives make only minor use of it. Despite its upbeat tone, the book implies an oddly constrained conception of the means and ends of government. It sometimes calls to mind a doctor putting on a cheerful face to say that, while there is little he can do to arrest the disease, he will try to make the patient as comfortable as possible.
…The obverse of Sunstein’s preoccupation with choice architecture is his relative indifference to other approaches to making administration less rigid. Recall that among the problems Sunstein sees with conventional regulation are, first, that it mandates conduct in situations where the regulator doesn’t know with confidence what is the right thing to do, and second, that it is insufficiently sensitive to relevant local variations in taste or circumstances.
The most common way to deal with the first problem—insufficient information—is to build learning into the process of intervention: the regulator intervenes provisionally, studies the effects of her intervention, and adapts as she learns. It is commonplace for statutes to mandate or fund demonstration or pilot projects. More importantly, statutes often demand that both top administrators and frontline workers reassess and adjust their practices continuously. This approach is the central and explicit thrust of Race to the Top’s “instructional improvement systems,” and it recurs prominently in all the statutes mentioned so far.”

Capitol Words


About Capitol Words: “For every day Congress is in session, Capitol Words visualizes the most frequently used words in the Congressional Record, giving you an at-a-glance view of which issues lawmakers address on a daily, weekly, monthly and yearly basis. Capitol Words lets you see the most popular words spoken by lawmakers on the House and Senate floor.

Methodology

The contents of the Congressional Record are downloaded daily from the website of the Government Printing Office. The GPO distributes the Congressional Record in ZIP files containing the contents of the record in plain-text format.

Each text file is parsed and turned into an XML document, with things like the title and speaker marked up. The contents of each file are then split up into words and phrases — from one word to five.

The resulting data is saved to a search engine. Capitol Words has data from 1996 to the present.”
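As a rough illustration of the phrase-splitting step in the methodology, the sketch below breaks a speech into n-grams of one to five words and counts them. It is a reconstruction for illustration only, not the project’s actual code; the sample sentence, tokenizer, and counter are assumptions.

```python
import re
from collections import Counter

def ngrams(text, max_n=5):
    """Yield every phrase of one to max_n words, as the methodology describes."""
    words = re.findall(r"[a-z']+", text.lower())
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

# Hypothetical snippet standing in for one parsed speech from the Congressional Record.
speech = "I rise today to speak about the budget, and about the budget process."
counts = Counter(ngrams(speech))
print(counts["about the budget"])   # -> 2
```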