What makes some federal agencies better than others at innovation


Tom Fox at the Washington Post: “Given the complexity and difficulty of the challenges that government leaders face, encouraging innovation among their workers can pay dividends. Government-wide employee survey data, however, suggest that much more needs to be done to foster this type of culture at many federal organizations.

According to that data, nearly 90 percent of federal employees are looking for ways to be more innovative and effective, but only 54 percent feel encouraged by their leaders to come up with new ways of doing work. To make matters worse, fewer than a third say they believe creativity and innovation are rewarded in their agencies.

It’s worth pausing to examine what sets apart the agencies that do foster innovation. They tend to have developed innovative cultures by providing forums for employees to share and test new ideas, by encouraging responsible risk-taking, and by occasionally bringing in outside talent for rotational assignments to infuse new thinking into the workplace.

The Department of Health and Human Services (HHS) is one example of an agency working at this. In 2010 it created the Idea Lab, with the goal to “remove barriers HHS employees face and promote better ways of working in government.”

It launched an awards program as part of Idea Lab called HHS Innovates to identify promising, new ideas likely to improve effectiveness. And to directly support implementing these ideas, the lab launched HHS Ignites, which provides teams with seed funding of $5,000 and a three-month timeframe to work on approved action plans. When the agency needs a shot of outside inspiration, it has its Entrepreneurs-in-Residence program, which enlists experts from the private and nonprofit sectors to join HHS for one or two years to develop new approaches and improve practices….

While the HHS Idea Lab program is a good concept, it’s the agency’s implementation that distinguishes it from other government efforts. Federal leaders elsewhere would be wise to borrow a few of their tactics.

As a starting point, federal leaders should issue a clear call for innovation that demands a measurable result. Too often, leaders ask for changes without any specificity as to the result they are looking to achieve. If you want your employees to be more innovative, you need to set a concrete, data-driven goal — whether that’s to reduce process steps or process times, improve customer satisfaction or reduce costs.

Secondly, you should help your employees take their ideas to implementation by playing equal parts cheerleader and drill sergeant. That is, you need to boost their confidence while at the same time pushing them to develop concrete action plans, experiments and measurements to show their ideas deliver results….(More)”

How Crowdsourcing And Machine Learning Will Change The Way We Design Cities


Shaunacy Ferro at FastCompany: “In 2011, researchers at the MIT Media Lab debuted Place Pulse, a website that served as a kind of “hot or not” for cities. Given two Google Street View images culled from a select few cities including New York City and Boston, the site asked users to click on the one that seemed safer, more affluent, or more unique. The result was an empirical way to measure urban aesthetics.

Now, that data is being used to predict what parts of cities feel the safest. StreetScore, a collaboration between the MIT Media Lab’s Macro Connections and Camera Culture groups, uses an algorithm to create a super high-resolution map of urban perceptions. The algorithmically generated data could one day be used to research the connection between urban perception and crime, as well as informing urban design decisions.

The algorithm, created by Nikhil Naik, a Ph.D. student in the Camera Culture lab, breaks an image down into its composite features—such as building texture, colors, and shapes. Based on how Place Pulse volunteers rated similar features, the algorithm assigns the streetscape a perceived safety score between 1 and 10. These scores are visualized as geographic points on a map, designed by MIT rising sophomore Jade Philipoom. Each image available from Google Maps in the two cities is represented by a colored dot: red for the locations that the algorithm tags as unsafe, and dark green for those that appear safest. The site, now limited to New York and Boston, will be expanded to feature Chicago and Detroit later this month, and eventually, with data collected from a new version of Place Pulse, will feature dozens of cities around the world….(More)”
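
For readers curious about the mechanics, the sketch below illustrates the general pattern the article describes: extract features from a street image, fit a regression model to crowd ratings, and use it to score new scenes on the 1–10 scale. The feature descriptors, the SVR model, and the data here are stand-ins for illustration — assumptions, not the actual StreetScore implementation.

```python
# Illustrative sketch of a StreetScore-style pipeline: extract simple image
# features and regress them against crowd-sourced safety ratings (1-10).
# Features, model, and data are stand-ins, not MIT's actual implementation.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def image_features(img: np.ndarray) -> np.ndarray:
    """Crude stand-ins for texture/colour/shape descriptors."""
    # Mean and standard deviation per colour channel (colour cues).
    channel_stats = np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])
    # Gradient magnitude averages as a rough texture/shape cue.
    gray = img.mean(axis=2)
    grad_y, grad_x = np.gradient(gray)
    edge_density = np.array([np.abs(grad_x).mean(), np.abs(grad_y).mean()])
    return np.concatenate([channel_stats, edge_density])

# Synthetic stand-in data: random "street view" images and crowd ratings.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 64, 64, 3)).astype(float)
ratings = rng.uniform(1, 10, size=200)          # Place Pulse-style scores

X = np.array([image_features(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(X, ratings, random_state=0)

model = SVR(kernel="rbf").fit(X_train, y_train)
predicted_safety = np.clip(model.predict(X_test), 1, 10)
print(predicted_safety[:5])   # perceived-safety scores for unseen scenes
```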

The extreme poverty of data


In the Financial Times: “As finance ministers gather this week in Washington DC they cannot but agree and commit to fighting extreme poverty. All of us must rejoice in the fact that over the past 15 years, the world has reportedly already “halved the number of poor people living on the planet”.

But none of us really knows it for sure. It could be less, it could be more. In fact, for every crucial issue related to human development, whether it is poverty, inequality, employment, environment or urbanization, there is a seminal crisis at the heart of global decision making – the crisis of poor data.

Because the challenges are huge and the resources scarce, on these issues perhaps more than anywhere else we need data to monitor results and adapt strategies whenever needed. Bad data feed bad management, weak accountability, loss of resources and, of course, corruption.

It is rather bewildering that while we live in this technology-driven age, the development communities and many of our African governments are relying too much on guesswork. Our friends in the development sector and our African leaders would not dream of driving their cars or flying without instruments. But somehow they pretend they can manage and develop countries without reliable data.

The development community must admit it has a big problem. The sector is relying on dodgy data sets. Take the data on extreme poverty. The data we have are mainly extrapolations of estimates from years back – even up to a decade or more ago. For 38 out of 54 African countries, data on poverty and inequality are either out-dated or non-existent. How can we measure progress with such a shaky baseline? To make things worse we also don’t know how much countries spend on fighting poverty. Only 3 per cent of African citizens live in countries where governmental budgets and expenditures are made open, according to the Open Budget Index. We will never end extreme poverty if we don’t know who or where the poor are, or how much is being spent to help them.

Our African countries have all fought and won their political independence. They should now consider the battle for economic sovereignty, which begins with the ownership of sound and robust national data: how many citizens, living where, and how, to begin with.

There are three levels of intervention required.

First, a significant increase in resources for credible, independent, national statistical institutions. Establishing a statistical office is less eye-catching than building a hospital or school, but data-driven policy will ensure that more hospitals and schools are delivered more effectively and efficiently. We urgently need these boring statistical offices. In 2013, out of a total aid budget of $134.8bn, a mere $280m went in support of statistics. Governments must also increase the resources they put into data.

Second, innovative means of collecting data. Mobile phones, geocoding, satellites and the civic engagement of young tech-savvy citizens to collect data can all secure rapid improvements in baseline data if harnessed.

Third, everyone must take on this challenge of the global public good dimension of high quality open data. Public registers of the ownership of companies, global standards on publishing payments and contracts in the extractives sector and a global charter for open data standards will help media and citizens to track corruption and expose mismanagement. Proposals for a new world statistics body – “Worldstat” – should be developed and implemented….(More)”

These researchers want to turn phones into earthquake detectors


Russell Brandom in TheVerge: “Early warning on earthquakes can help save lives, but many countries can’t afford them. That’s why scientists are turning to another location sensor already widespread in many countries: the smartphone. A single smartphone makes for a crappy earthquake sensor — but get enough of them reporting, and it won’t matter.

A new study, published today in Science Advances, says that the right network of cell phones might be able to substitute for modern seismograph arrays, providing a crucial early warning in the event of a quake. The study looks at historical earthquake data and modern smartphone hardware (based on the Nexus 5) and comes away with a map of how a smartphone-based earthquake detector might work. As it turns out, a phone’s GPS is more powerful than you might think.

Early warning systems are designed to pick up the first tremors of an earthquake, projecting where the incoming quake is centered and how strong it’s likely to be. When they work, the systems are able to give citizens and first responders crucial time to prepare for the quake. There are already seismograph-based systems in place in California, Mexico, and Japan, but poorer countries often don’t have the means to implement and maintain them. This new method wouldn’t be as good as most scientific earthquake sensors, but those can cost tens of thousands of dollars each, making a smartphone-based sensor a lot cheaper. For countries that can’t afford a seismograph-based system (which includes much of the Southern Hemisphere), it could make a crucial difference in catching quakes early.

A modern phone has almost everything you could want in an earthquake sensor: specifically, a GPS-powered location sensor, an accelerometer, and multiple data connections. There are also a lot of them, even in poor countries, so a distributed system could count on getting data points from multiple angles….(More)”
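
As a rough illustration of how such a distributed system could tell a real quake from one phone being jostled in a pocket, the toy sketch below aggregates per-phone trigger reports and only declares an event when many devices in the same area trigger within a short time window. The thresholds, grid size, and report format are illustrative assumptions, not the algorithm from the Science Advances study.

```python
# Toy aggregation logic for a crowdsourced quake detector: individual phones
# are noisy, so a server only raises an alert when many devices in the same
# grid cell trigger within a short window. All thresholds are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PhoneReport:
    device_id: str
    timestamp: float      # seconds since epoch
    lat: float
    lon: float
    peak_accel_g: float   # peak acceleration measured by the phone, in g

TRIGGER_G = 0.02          # per-phone trigger threshold (assumed)
MIN_PHONES = 50           # phones required in one cell to declare an event
WINDOW_S = 10.0           # time window for coincident triggers
CELL_DEG = 0.1            # ~11 km grid cells

def detect_events(reports: list[PhoneReport]) -> list[tuple[float, float, float]]:
    """Return (time, lat, lon) for grid cells with enough coincident triggers."""
    buckets: dict[tuple[int, int, int], list[PhoneReport]] = defaultdict(list)
    for r in reports:
        if r.peak_accel_g < TRIGGER_G:
            continue                      # phone did not feel anything notable
        key = (int(r.timestamp // WINDOW_S),
               int(r.lat // CELL_DEG),
               int(r.lon // CELL_DEG))
        buckets[key].append(r)

    events = []
    for cell in buckets.values():
        if len(cell) >= MIN_PHONES:       # many phones agree -> likely quake
            t = min(r.timestamp for r in cell)
            lat = sum(r.lat for r in cell) / len(cell)
            lon = sum(r.lon for r in cell) / len(cell)
            events.append((t, lat, lon))
    return events
```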

Bloomberg Philanthropies Launches $42 Million “What Works Cities” Initiative


Press Release: “Today, Bloomberg Philanthropies announced the launch of the What Works Cities initiative, a $42 million program to help 100 mid-sized cities better use data and evidence. What Works Cities is the latest initiative from Bloomberg Philanthropies’ Government Innovation portfolio which promotes public sector innovation and spreads effective ideas amongst cities.

Through partners, Bloomberg Philanthropies will help mayors and local leaders use data and evidence to engage the public, make government more effective and improve people’s lives. U.S. cities with populations between 100,000 and 1 million people are invited to apply.

“While cities are working to meet new challenges with limited resources, they have access to more data than ever – and they are increasingly using it to improve people’s lives,” said Michael R. Bloomberg. “We’ll help them build on their progress, and help even more cities take steps to put data to work. What works? That’s a question that every city leader should ask – and we want to help them find answers.”

The $42 million effort is the nation’s most comprehensive philanthropic initiative to help accelerate the ability of local leaders to use data and evidence to improve the lives of their residents. What Works Cities will provide mayors with robust technical assistance, expertise, and peer-to-peer learning opportunities that will help them enhance their use of data and evidence to improve services and solve problems for communities. The program will help cities:

1. Create sustainable open data programs and policies that promote transparency and robust citizen engagement;

2. Better incorporate data into budget, operational, and policy decision making;

3. Conduct low-cost, rapid evaluations that allow cities to continually improve programs; and

4. Focus funding on approaches that deliver results for citizens.

Across the initiative, Bloomberg Philanthropies will document how cities currently use data and evidence in decision making, and how this unique program of support helps them advance. Over time, the initiative will also launch a benchmark system which will collect standardized, comparable data so that cities can understand their performance relative to peers.

In cities across the country, mayors are increasingly relying on data and evidence to deliver better results for city residents. For example, New Orleans’ City Hall used data to reduce blighted residences by 10,000 and increased the number of homes brought into compliance by 62% in 2 years. The City’s “BlightStat” program has put New Orleans, once behind in efforts to revitalize abandoned and decaying properties, at the forefront of national efforts.

In New York City and other jurisdictions, open data from transit agencies has led to the creation of hundreds of apps that residents now use to get around town, choose where to live based on commuting times, provide key transit information to the visually impaired, and more. And Louisville has asked volunteers to attach GPS trackers to their asthma inhalers to see where they have the hardest time breathing. The city is now using that data to better target the sources of air pollution….

To learn more and apply to be a What Works City, visit www.WhatWorksCities.org.”

Americans’ Views on Open Government Data


Report by the Pew Research Center, in association with the John S. and James L. Knight Foundation: “…The upshot has been the appearance of a variety of “open data” and “open government” initiatives throughout the United States that try to use data as a lever to improve government performance and encourage warmer citizens’ attitudes toward government.

This report is based on the first national survey that seeks to benchmark public sentiment about the government initiatives that use data to cultivate the public square. The survey, conducted by Pew Research Center in association with the John S. and James L. Knight Foundation, captures public views at the emergent moment when new technology tools and techniques are being used to disseminate and capitalize on government data and specifically looks at:

  • People’s level of awareness of government efforts to share data
  • Whether these efforts translate into people using data to track government performance
  • If people think government data initiatives have made, or have the potential to make, government perform better or improve accountability
  • The more routine kinds of government-citizen online interactions, such as renewing licenses or searching for the hours of public facilities.

The results cover all three levels of government in America — federal, state and local — and show that government data initiatives are in their early stages in the minds of most Americans. Generally, people are optimistic that these initiatives can make government more accountable, even though many are less sure open data will improve government performance. And government does touch people online, as evidenced by high levels of use of the internet for routine information applications. But most Americans have yet to delve too deeply into government data and its possibilities to closely monitor government performance.

Among the survey’s main findings:

As open data and open government initiatives get underway, most Americans are still largely engaged in “e-Gov 1.0” online activities, with far fewer attuned to “Data-Gov 2.0” initiatives that involve agencies sharing data online for public use….

Minorities of Americans say they pay a lot of attention to how governments share data with the public and relatively few say they are aware of examples where government has done a good (or bad) job sharing data. Less than one quarter use government data to monitor how government performs in several different domains….

Americans have mixed hopes about government data initiatives. People see the potential in these initiatives as a force to improve government accountability. However, the jury is still out for many Americans as to whether government data initiatives will improve government performance….

People’s baseline level of trust in government strongly shapes how they view the possible impact of open data and open government initiatives on how government functions…

Americans’ perspectives on trusting government are shaped strongly by partisan affiliation, which in turn makes a difference in attitudes about the impacts of government data initiatives…

Americans are for the most part comfortable with government sharing online data about their communities, although they sound cautionary notes when the data hits close to home…

Smartphone users have embraced information-gathering using mobile apps that rely on government data to function, but not many see a strong link between the underlying government data and economic value…

…(More)”

What, Exactly, Do You Want?


Cass Sunstein at the New York Times: “Suppose that you value freedom of choice. Are you committed to the mere opportunity to choose, or will you also insist that people actually exercise that opportunity? Is it enough if the government, or a private institution, gives people the option of going their own way? Or is it particularly important to get people to say precisely what they want? In coming decades, these seemingly abstract questions will grow in importance, because they will decide central features of our lives.

Here’s an example. Until last month, all 50 states had a simple policy for voter registration: If you want to become a voter, you have the opportunity to register. Oregon is now the first state to adopt a radically different approach: If the relevant state officials know that you live in Oregon and are 18 or older, you’re automatically registered as a voter. If you don’t want to be one, you have the opportunity to opt out.

We could easily imagine a third approach. A state might decide that if you want some kind of benefit — say, a driver’s license — you have to say whether you want to register to vote. Under this approach, the state would require you to make an active choice about whether to be a voter. You would have to indicate your desires explicitly.

In countless contexts, the government, or some private institution, must decide among three possible approaches: Give people the opportunity to opt in; give people the opportunity to opt out; or require people to make some kind of active choice. For example, an employer may say that employees will be enrolled in a pension plan only if they opt in. Alternatively, it may automatically enroll employees in a pension plan (while allowing them the opportunity to opt out). Or it may instead tell employees that they can’t start work unless they say whether they want to participate in a pension plan.

You may think that while the decision raises philosophical puzzles, the stakes are small. If so, you would be wrong; the decision can have huge consequences. By itself, the opportunity to choose is not all that matters, because many people will not exercise that opportunity. Inertia has tremendous force, and people tend to procrastinate. If a state or a private company switches from a system of opt-out to one of opt-in, or vice versa, it can have major effects on people’s lives.

For example, Oregon expects that its new policy will produce up to 300,000 new registered voters. In 2004, Congress authorized the Department of Agriculture to allow states and localities to automatically enroll eligible poor children in school meal programs, rather than requiring their parents to sign them up. As a result, millions of such children now have access to school meals. In many nations, including the United States, Britain and Denmark, automatic enrollment in pension plans has significantly increased the number of employees who participate in pension plans. The Affordable Care Act builds on this practice with a provision that will require large employers to enroll employees automatically in health insurance plans.

In light of findings of this kind (and there are many more), a lot of people have argued that people would be much better off if many institutions switched, today or tomorrow, from “opt in” designs to “opt out.” Often they’re right; “opt out” can be a lot better. But from the standpoint of both welfare and personal freedom, opt out raises problems of its own, precisely because it does not involve an actual exercise of the power to choose….(More)”

The Rule of History


Jill Lepore about Magna Carta, the Bill of Rights, and the hold of time in The New Yorker: “…Magna Carta has been taken as foundational to the rule of law, chiefly because in it King John promised that he would stop throwing people into dungeons whenever he wished, a provision that lies behind what is now known as due process of law and is understood not as a promise made by a king but as a right possessed by the people. Due process is a bulwark against injustice, but it wasn’t put in place in 1215; it is a wall built stone by stone, defended, and attacked, year after year. Much of the rest of Magna Carta, weathered by time and for centuries forgotten, has long since crumbled, an abandoned castle, a romantic ruin.

Magna Carta is written in Latin. The King and the barons spoke French. “Par les denz Dieu!” the King liked to swear, invoking the teeth of God. The peasants, who were illiterate, spoke English. Most of the charter concerns feudal financial arrangements (socage, burgage, and scutage), obsolete measures and descriptions of land and of husbandry (wapentakes and wainages), and obscure instruments for the seizure and inheritance of estates (disseisin and mort d’ancestor). “Men who live outside the forest are not henceforth to come before our justices of the forest through the common summonses, unless they are in a plea,” one article begins.

Magna Carta’s importance has often been overstated, and its meaning distorted. “The significance of King John’s promise has been anything but constant,” U.S. Supreme Court Justice John Paul Stevens aptly wrote, in 1992. It also has a very different legacy in the United States than it does in the United Kingdom, where only four of its original sixty-some provisions are still on the books. In 2012, three New Hampshire Republicans introduced into the state legislature a bill that required that “all members of the general court proposing bills and resolutions addressing individual rights or liberties shall include a direct quote from the Magna Carta which sets forth the article from which the individual right or liberty is derived.” For American originalists, in particular, Magna Carta has a special lastingness. “It is with us every day,” Justice Antonin Scalia said in a speech at a Federalist Society gathering last fall.

Much has been written of the rule of law, less of the rule of history. Magna Carta, an agreement between the King and his barons, was also meant to bind the past to the present, though perhaps not in quite the way it’s turned out. That’s how history always turns out: not the way it was meant to. In preparation for its anniversary, Magna Carta acquired a Twitter username: @MagnaCarta800th….(More)”

Citizen Science for Citizen Access to Law


Paper by Michael Curtotti, Wayne Weibel, Eric McCreath, Nicolas Ceynowa, Sara Frug, and Tom R Bruce: “This paper sits at the intersection of citizen access to law, legal informatics and plain language. The paper reports the results of a joint project of the Cornell University Legal Information Institute and the Australian National University which collected thousands of crowdsourced assessments of the readability of law through the Cornell LII site. The aim of the project is to enhance accuracy in the prediction of the readability of legal sentences. The study requested readers on legislative pages of the LII site to rate passages from the United States Code and the Code of Federal Regulations and other texts for readability and other characteristics. The research provides insight into who uses legal rules and how they do so. The study enables conclusions to be drawn as to the current readability of law and spread of readability among legal rules. The research is intended to enable the creation of a dataset of legal rules labelled by human judges as to readability. Such a dataset, in combination with machine learning, will assist in identifying factors in legal language which impede readability and access for citizens. As far as we are aware, this research is the largest ever study of readability and usability of legal language and the first research which has applied crowdsourcing to such an investigation. The research is an example of the possibilities open for enhancing access to law through engagement of end users in the online legal publishing environment for enhancement of legal accessibility and through collaboration between legal publishers and researchers….(More)”
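
To make the readability-prediction idea concrete, the sketch below pairs crowd-sourced ratings with a simple regression over surface features of each sentence (word count, average word length, and so on). The features, the example sentences, the ratings, and the Ridge model are illustrative assumptions, not the dataset or methods used in the paper.

```python
# Illustrative sketch of pairing crowd-sourced readability ratings with
# machine learning: simple surface features of a legal sentence are fit
# against human scores. Features, data, and model are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def surface_features(sentence: str) -> list[float]:
    words = sentence.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    n_commas = sentence.count(",")
    n_long_words = sum(len(w) > 7 for w in words)   # crude jargon proxy
    return [n_words, avg_word_len, n_commas, n_long_words]

# Hypothetical crowdsourced data: legal sentences with averaged reader
# ratings (say 1 = very hard to read, 5 = very easy).
sentences = [
    "The Secretary shall promulgate such regulations as may be necessary.",
    "Dogs must be kept on a leash in the park.",
    "Notwithstanding any other provision of law, the fee is waived.",
]
ratings = [2.1, 4.6, 2.8]

X = np.array([surface_features(s) for s in sentences])
y = np.array(ratings)

model = Ridge(alpha=1.0).fit(X, y)
# Predict readability of an unseen rule from its surface features alone.
print(model.predict([surface_features("The permit expires after ninety days.")]))
```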

Open Data Literature Review


Review by Emmie Tran and Ginny Scholtes: “Open data describes large datasets that governments at all levels release online and free of charge for analysis by anyone for any purpose. Entrepreneurs may use open data to create new products and services, and citizens may use it to gain insight into the government. A plethora of time saving and other useful applications have emerged from open data feeds, including more accurate traffic information, real-time arrival of public transportation, and information about crimes in neighborhoods. But data held by the government is implicitly or explicitly about individuals. While open government is often presented as an unqualified good, sometimes open data can identify individuals or groups, leading to invasions of privacy and disparate impact on vulnerable populations.

This review provides background to parties interested in open data, specifically for those attending the 19th Annual BCLT/BTLJ Symposium on open data. Part I defines open data, focusing on the origins of the open data movement and the types of data subject to government retention and public access. Part II discusses how open data can benefit society, and Part III delves into the many challenges and dangers of open data. Part IV addresses these challenges, looking at how the United States and other countries have implemented open data regimes, and considering some of the proposed measures to mitigate the dangers of open data….(More)”