Can We Focus on What Works?


John Kamensky in GovExec: “Can we shift the conversation in Washington from ‘waste, fraud, and abuse’ to ‘what works and let’s fund it’ instead?

I attended a recent Senate hearing on wasteful spending in the federal government, and some of the witnesses pointed to examples such as the legislative requirement that the Defense Department ship coal to Germany to heat American bases there. Others pointed to failures of large-scale computer projects and the dozens of programs on the Government Accountability Office’s High Risk List.

While many of the examples were seen as shocking, there was little conversation about focusing on what works and expanding those programs.

Interestingly, there is a movement underway across the U.S. to do just that. There are advocacy groups, foundations, states and localities promoting the idea of “let’s find out what works and fund it.” Some call this “evidence-based government,” “Moneyball government,” or “pay for success.” The federal government has dipped its toes in the water as well, with several pilot programs in various agencies and bipartisan legislation pending in Congress.

The hot, new thing that has captured the imaginations of many policy wonks is called “Pay for Success,” or in some circles, “social impact bonds.”

In 2010, the British government launched an innovative funding scheme, which it called social impact bonds, in which private sector investors committed funding upfront to pay for improved social outcomes that would result in public sector savings. The investors were repaid by the government only when the outcomes were determined to have been achieved.

This funding scheme has attracted substantial attention in the U.S. where it and many variations are being piloted.

What is “Pay for Success?” According to the Urban Institute, PFS is a type of performance-based contracting used to support the delivery of targeted, high-impact preventive social services, in which intervention at an early stage can reduce the need for higher-cost services in the future.

For example, experts believe that preventing asthma attacks among at-risk children reduces emergency room visits and hospitalization, which are more costly than preventive services. When the government pays for preventive services, it hopes to lower its costs….(More)”
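
To make the underlying deal structure concrete, here is a minimal sketch in Python of the pay-for-success payout logic described above. All figures, parameter names, and the success threshold are hypothetical, invented for illustration; they are not drawn from any actual PFS contract.

```python
# Minimal sketch of a Pay for Success (PFS) payout calculation.
# All figures are hypothetical and for illustration only.

def pfs_payout(upfront_investment: float,
               baseline_cost: float,
               observed_cost: float,
               success_threshold: float,
               success_premium: float = 0.05) -> float:
    """Return what the government owes investors after evaluation.

    Investors are repaid (principal plus a premium) only if measured
    savings meet the agreed threshold; otherwise they absorb the loss.
    """
    savings = baseline_cost - observed_cost
    if savings >= success_threshold:
        return upfront_investment * (1 + success_premium)
    return 0.0  # outcomes not achieved: the government pays nothing

# Hypothetical asthma-prevention deal: investors fund $1M of preventive
# services; ER and hospitalization costs fall from $3M to $1.6M.
owed = pfs_payout(upfront_investment=1_000_000,
                  baseline_cost=3_000_000,
                  observed_cost=1_600_000,
                  success_threshold=1_000_000)
print(owed)  # 1050000.0 -- $1.4M in savings clears the threshold
```

The design choice that distinguishes PFS from ordinary contracting is visible in the final branch: if the evaluated outcomes fall short, the government owes nothing and the investors absorb the loss.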

When Guarding Student Data Endangers Valuable Research


Susan M. Dynarski in the New York Times: “There is widespread concern over threats to privacy posed by the extensive personal data collected by private companies and public agencies.

Some of the potential danger comes from the government: The National Security Agency has swept up the telephone records of millions of people, in what it describes as a search for terrorists. Other threats are posed by hackers, who have exploited security gaps to steal data from retail giants like Target and from the federal Office of Personnel Management.

Resistance to data collection was inevitable — and it has been particularly intense in education.

Privacy laws have already been strengthened in some states, and multiple bills now pending in state legislatures and in Congress would tighten the security and privacy of student data. Some of this proposed legislation is so broadly written, however, that it could unintentionally choke off the use of student data for its original purpose: assessing and improving education. This data has already exposed inequities, allowing researchers and advocates to pinpoint where poor, nonwhite and non-English-speaking children have been educated inadequately by their schools.

Data gathering in education is indeed extensive: Across the United States, large, comprehensive administrative data sets now track the academic progress of tens of millions of students. Educators parse this data to understand what is working in their schools. Advocates plumb the data to expose unfair disparities in test scores and graduation rates, building cases to target more resources for the poor. Researchers rely on this data when measuring the effectiveness of education interventions.

To my knowledge there has been no large-scale, Target-like theft of private student records — probably because students’ test scores don’t have the market value of consumers’ credit card numbers. Parents’ concerns have mainly centered not on theft, but on the sharing of student data with third parties, including education technology companies. Last year, parents resisted efforts by the tech start-up InBloom to draw data on millions of students into the cloud and return it to schools as teacher-friendly “data dashboards.” Parents were deeply uncomfortable with a third party receiving and analyzing data about their children.

In response to such concerns, some pending legislation would scale back the authority of schools, districts and states to share student data with third parties, including researchers. Perhaps the most stringent of these proposals, sponsored by Senator David Vitter, a Louisiana Republican, would effectively end the analysis of student data by outside social scientists. This legislation would have banned recent prominent research documenting the benefits of smaller classes, the value of excellent teachers and the varied performance of charter schools.

Under current law, education agencies can share data with outside researchers only to benefit students and improve education. Collaborations with researchers allow districts and states to tap specialized expertise that they otherwise couldn’t afford. The Boston public school district, for example, has teamed up with early-childhood experts at Harvard to plan and evaluate its universal prekindergarten program.

In one of the longest-standing research partnerships, the University of Chicago works with the Chicago Public Schools to improve education. Partnerships like Chicago’s exist across the nation, funded by foundations and the United States Department of Education. In one initiative, a Chicago research consortium compiled reports showing high school principals that many of the seniors they had sent off to college swiftly dropped out without earning a degree. This information spurred efforts to improve high school counseling and college placement.

Specific, tailored information in the hands of teachers, principals or superintendents empowers them to do better by their students. No national survey could have told Chicago’s principals how their students were doing in college. Administrative data can provide this information, cheaply and accurately…(More)”

‘Beating the news’ with EMBERS: Forecasting Civil Unrest using Open Source Indicators


Paper by Naren Ramakrishnan et al: “We describe the design, implementation, and evaluation of EMBERS, an automated, 24×7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the uptick and downtick of incidents during the June 2013 protests in Brazil. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings….(More)”
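
To give a flavor of what a “fusion and suppression engine” does, here is a hypothetical toy sketch in Python, not the actual EMBERS implementation: per-source models each emit a probability of unrest, a weighted fusion step combines them, and a suppression threshold trades off precision against recall.

```python
# Toy sketch of a fusion-and-suppression step in the spirit of EMBERS.
# Weights and threshold are invented; this is not the actual system.

def fuse(model_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-source unrest probabilities."""
    total = sum(weights[m] for m in model_scores)
    return sum(weights[m] * s for m, s in model_scores.items()) / total

def suppress(score: float, threshold: float) -> bool:
    """Emit an alert only if the fused score clears the threshold.
    Raising the threshold favors precision; lowering it favors recall."""
    return score >= threshold

# One hypothetical forecast: each source-specific model scores the
# likelihood of civil unrest in a given city on a given day.
scores = {"tweets": 0.82, "news": 0.64, "economic": 0.31}
weights = {"tweets": 0.5, "news": 0.3, "economic": 0.2}

fused = fuse(scores, weights)
print(round(fused, 3), suppress(fused, threshold=0.6))  # 0.664 True
```

In the real system the weights and threshold would be learned and tuned against the independent evaluation criteria; the point here is only the shape of the pipeline.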

Introducing the Governance Data Alliance


“The overall assumption of the Governance Data Alliance is that governance data can contribute to improved sustainable economic and human development outcomes and democratic accountability in all countries. The contribution that governance data will make to those outcomes will of course depend on a whole range of issues that will vary across contexts; development processes, policy processes, and the role that data plays vary considerably. Nevertheless, there are some core requirements that need to be met if data is to make a difference, and articulating them can provide a framework to help us understand and improve the impact that data has on development and accountability across different contexts.

We also collectively make another implicit (and important) assumption: that the current state of affairs is vastly insufficient when it comes to the production and usage of high-quality governance data. In other words, the status quo needs to be significantly improved upon. Data gathered from participants in the April 2014 design session help to paint that picture in granular terms. Data production remains highly irregular and ad hoc; data usage does not match data production in many cases (e.g. users want data that don’t exist and do not use data that is currently produced); production costs remain high and inconsistent across producers despite possibilities for economies of scale; and feedback loops between governance data producers and governance data users are either non-existent or rarely employed. We direct readers to http://dataalliance.globalintegrity.org for a fuller treatment of those findings.

Three requirements need to be met if governance data is to lead to better development and accountability outcomes, whether those outcomes are about core “governance” issues such as levels of inclusion, or about service delivery and human development outcomes that may be shaped by the quality of governance. Those requirements are:

  • The availability of governance data.
  • The quality of governance data, including its usability and salience.
  • The informed use of governance data.

(Or to use the metaphor of markets, we face a series of market failures: supply of data is inconsistent and not uniform; user demand cannot be efficiently channeled to suppliers to redirect their production to address those deficiencies; and transaction costs abound through non-existent data standards and lack of predictability.)

If data are not available about those aspects of governance that are expected to have an impact on development outcomes and democratic accountability, no progress will be made. The risk is that data about key issues will be lacking, or that there will be gaps in coverage, whether country coverage, time periods covered, or sectors, or that data sets produced by different actors may not be comparable. This might come about for reasons including the following: a lack of knowledge – amongst producers, and amongst producers and users – about what data is needed and what data is available; high costs, and limited resources to invest in generating data; and, institutional incentives and structures (e.g. lack of autonomy, inappropriate mandate, political suppression of sensitive data, organizational dysfunction – relating, for instance, to National Statistical Offices) that limit the production of governance data….

What A Governance Data Alliance Should Do (Or, Making the Market Work)

During the several months of creative exploration around possibilities for a Governance Data Alliance, dozens of activities were identified as possible solutions (in whole or in part) to the challenges identified above. This note identifies what we believe to be the most important and immediate activities that an Alliance should undertake, knowing that other activities can and should be rolled into an Alliance work plan in the out years as the initiative matures and early successes (and failures) are achieved and digested.

A brief summary of the proposals that follow:

  1. Design and implement a peer-to-peer training program between governance data producers to improve the quality and salience of existing data.
  2. Develop a lightweight data standard to be adopted by producer organizations to make it easier for users to consume governance data (a minimal sketch of what such a record might look like follows this list).
  3. Mine the 2014 Reform Efforts Survey to understand who actually uses which governance data, currently, around the world.
  4. Leverage the 2014 Reform Efforts Survey “plumbing” to field customized follow-up surveys to better assess what data users seek in future governance data.
  5. Pilot (on a regional basis) coordinated data production amongst producer organizations to fill coverage gaps, reduce redundancies, and respond to actual usage and user preferences…(More)”
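
Proposal 2 above is easiest to picture with an example. The record shape and field names below are purely illustrative, one hypothetical form a lightweight standard could take; they are not an adopted Alliance schema.

```python
# Hypothetical sketch of a "lightweight data standard" record (proposal 2).
# Field names are illustrative only, not an adopted Alliance schema.
import json

REQUIRED_FIELDS = {"producer", "indicator", "country", "year", "value",
                   "methodology_url", "license"}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - set(record)]
    if "year" in record and not isinstance(record["year"], int):
        problems.append("year must be an integer")
    return problems

record = {
    "producer": "Example Governance Index",
    "indicator": "budget_transparency_score",
    "country": "KE",          # ISO 3166-1 alpha-2
    "year": 2014,
    "value": 67,
    "methodology_url": "https://example.org/methodology",
    "license": "CC-BY-4.0",
}
print(json.dumps(record, indent=2))
print(validate(record))  # [] -- the record conforms
```

Even a standard this small would lower the transaction costs noted earlier: a user could consume records from any producer with one parser, and producers could check conformance automatically.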

What cybersecurity can learn from citizen science


But as anyone who has observed an online forum thread dissecting the minutiae of geek culture can attest, hobbyists can be remarkably thorough in their exploration of topics they are passionate about. And it is often a point of pride to pick the subject that is the least conventional or popular.

The idea of citizen science is to include amateur science enthusiasts in the collection and processing of data. Thanks to the Internet, we’ve seen a surge in the number of self-taught experts in a variety of subjects. New participation platforms are social and gamified – utilizing people’s desire to compete or collaborate with others who share their passion.

How this process plays out differs from one app to the next, according to their needs: StarDust@Home asks volunteers to help sort through samples captured by the Stardust spacecraft when it flew through the coma of comet Wild 2 in 2004. They do this by viewing movies of the contents of the aerogel tiles that were used as collectors.

The security community is ripe for using citizen science in similar ways. Most antimalware vendors make use of customer samples for adding detection and cleaning to their products. Many security companies use customers’ reports to gather file reputation, telemetry and prevalence data. And bug reports come from researchers of all ages and education levels – not just professional security researchers. “Month of Bug” events are a more controversial way that security is gamified. Could security companies or organizations be doing more to engage enthusiasts to help improve our response to security issues?
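
As a thought experiment, a citizen-science take on file reputation might look like the sketch below. The class, scoring rule, and thresholds are hypothetical, invented for illustration rather than drawn from any vendor’s system.

```python
# Hypothetical sketch of crowd-sourced file reputation, in the spirit of
# citizen science: volunteers submit verdicts keyed by a file's hash,
# and a simple vote tally yields a reputation label.
from collections import defaultdict

class ReputationTracker:
    def __init__(self):
        self.votes = defaultdict(lambda: {"clean": 0, "malicious": 0})

    def submit(self, sha256: str, verdict: str) -> None:
        """Record one volunteer's verdict ('clean' or 'malicious')."""
        self.votes[sha256][verdict] += 1

    def score(self, sha256: str, min_votes: int = 5) -> str:
        """Classify once enough independent reports accumulate."""
        v = self.votes[sha256]
        total = v["clean"] + v["malicious"]
        if total < min_votes:
            return "unknown"          # not enough evidence yet
        return "malicious" if v["malicious"] / total > 0.5 else "clean"

tracker = ReputationTracker()
for verdict in ["malicious"] * 4 + ["clean"]:
    tracker.submit("ab12...", verdict)   # placeholder hash, for illustration
print(tracker.score("ab12..."))  # malicious (4 of 5 reports)
```

A real deployment would also need to weight or vet reporters, precisely because of the adversarial gaming concern raised below.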

It could be argued that the stuff of security research – especially malware research – is potentially harmful in the hands of amateurs and should be handled only by seasoned professionals. Not only that, but security is an adversarial system in which criminals would likely try to game any such program to improve their profits. These are important concerns that would need to be addressed.

But the citizen science approach provides good lessons…(More)”

Advancing Collaboration Theory: Models, Typologies, and Evidence


New book edited by John C. Morris, Katrina Miller-Stevens: “The term collaboration is widely used but not clearly understood or operationalized. However, collaboration is playing an increasingly important role between and across public, nonprofit, and for-profit sectors. Collaboration has become a hallmark in both intragovernmental and intergovernmental relationships. As collaboration scholarship rapidly emerges, it diverges in several directions, resulting in confusion about what collaboration is and what it can be used to accomplish. This book provides much-needed insight into existing ideas and theories of collaboration, advancing a revised theoretical model and accompanying typologies that further our understanding of collaborative processes within the public sector.

Organized into three parts, each chapter presents a different theoretical approach to public problems, valuing the collective insights that result from honoring many individual perspectives. Case studies in collaboration, split across three levels of government, offer additional perspectives on unanswered questions in the literature. Contributions are made by authors from a variety of backgrounds, including an attorney, a career educator, a federal executive, a human resource administrator, a police officer, a self-employed entrepreneur, as well as scholars of public administration and public policy. Drawing upon the individual experiences offered by these perspectives, the book emphasizes the commonalities of collaboration. It is from this common ground, the shared experiences forged among seemingly disparate interactions, that advances in collaboration theory arise.

Advancing Collaboration Theory offers a unique compilation of collaborative models and typologies that enhance the existing understanding of public sector collaboration….(More)”

Big Data’s Impact on Public Transportation


InnovationEnterprise: “Getting around any big city can be a real pain. Traffic jams seem to be a constant complaint, and simply getting to work can turn into a chore, even on the best of days. With more people than ever before flocking to the world’s major metropolitan areas, the issues of crowding and inefficient transportation only stand to get much worse. Luckily, the traditional methods of managing public transportation could be on the verge of changing thanks to advances in big data. While big data use cases have been a part of the business world for years now, city planners and transportation experts are quickly realizing how valuable it can be when making improvements to city transportation. That hour-long commute may no longer be something travellers will have to worry about in the future.

In much the same way that big data has transformed businesses around the world by offering greater insight into the behavior of their customers, it can also provide a deeper look at travellers. Like retail customers, commuters have certain patterns they like to keep to when on the road or riding the rails. Travellers also have their own motivations and desires, and getting to the heart of their actions is all part of what big data analytics is about. By analyzing these actions and the factors that go into them, transportation experts can gain a better understanding of why people choose certain routes or why they prefer one method of transportation over another. Based on these findings, planners can then figure out where to focus their efforts and respond to the needs of millions of commuters.

Gathering the accurate data needed to make knowledgeable decisions regarding city transportation can be a challenge in itself, especially considering how many people commute to work in a major city. New methods of data collection have made that effort easier and a lot less costly. One way that’s been implemented is through the gathering of call data records (CDR). From regular transactions made from mobile devices, information about location, time, and duration of an action (like a phone call) can give data scientists the necessary details on where people are traveling to, how long it takes them to get to their destination, and other useful statistics. The valuable part of this data is the sample size, which provides a much bigger picture of the transportation patterns of travellers.
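
As a rough illustration of how CDRs become transportation statistics, the sketch below (with made-up records and zone names) estimates travel times between two cell-tower zones from the gap between a user’s last sighting at the origin and first sighting at the destination.

```python
# Minimal sketch: inferring rough commute times from call data records.
# Records and cell-tower zone names are made up for illustration.
from datetime import datetime

# (anonymized user id, timestamp, cell zone) -- one row per call event
cdrs = [
    ("u1", datetime(2015, 6, 1, 7, 55), "residential_north"),
    ("u1", datetime(2015, 6, 1, 8, 40), "downtown"),
    ("u2", datetime(2015, 6, 1, 7, 50), "residential_north"),
    ("u2", datetime(2015, 6, 1, 9, 5), "downtown"),
]

def commute_minutes(records, origin, destination):
    """Yield elapsed minutes between a user's last sighting at the
    origin zone and first sighting at the destination zone."""
    by_user = {}
    for user, ts, zone in sorted(records, key=lambda r: r[1]):
        if zone == origin:
            by_user[user] = ts
        elif zone == destination and user in by_user:
            yield (ts - by_user.pop(user)).total_seconds() / 60

times = list(commute_minutes(cdrs, "residential_north", "downtown"))
print(times)                    # [45.0, 75.0]
print(sum(times) / len(times))  # 60.0 -- average commute estimate
```

The sample size is what makes this crude inference useful: averaged over millions of events, even noisy per-user gaps converge on reliable origin-destination patterns.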

That’s not the only way cities are using big data to improve public transportation though. Melbourne in Australia has long been considered one of the world’s best cities for public transit, and much of that is thanks to big data. With big data and ad hoc analysis, Melbourne’s acclaimed tram system can automatically reconfigure routes in response to sudden problems or challenges, such as a major city event or natural disaster. Data is also used in this system to fix problems before they turn serious. Sensors located in equipment like tram cars and tracks can detect when maintenance is needed on a specific part. Crews are quickly dispatched to repair what needs fixing, and the tram system continues to run smoothly. This is similar to the idea of the Internet of Things, wherein embedded sensors collect data that is then analyzed to identify problems and improve efficiency.
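
A drastically simplified version of that sensor-driven maintenance check might look like the following; the sensor names, thresholds, and readings are invented for illustration, not taken from Melbourne’s system.

```python
# Hypothetical sketch of threshold-based maintenance alerts for a tram
# fleet, in the spirit of the Melbourne example. All values are invented.

THRESHOLDS = {"wheel_vibration_mm_s": 7.0, "brake_temp_c": 120.0}

def maintenance_alerts(readings: dict[str, dict[str, float]]) -> list[str]:
    """Flag any tram whose sensor reading exceeds its threshold."""
    alerts = []
    for tram_id, sensors in readings.items():
        for sensor, value in sensors.items():
            limit = THRESHOLDS.get(sensor)
            if limit is not None and value > limit:
                alerts.append(f"{tram_id}: {sensor}={value} exceeds {limit}")
    return alerts

readings = {
    "tram_12": {"wheel_vibration_mm_s": 4.2, "brake_temp_c": 95.0},
    "tram_47": {"wheel_vibration_mm_s": 8.9, "brake_temp_c": 101.0},
}
for alert in maintenance_alerts(readings):
    print(alert)  # tram_47: wheel_vibration_mm_s=8.9 exceeds 7.0
```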

Sao Paulo, Brazil, is another city that sees the value of using big data for its public transportation. The city’s efforts concentrate on improving the management of its bus fleet. With big data collected in real time, the city can get a more accurate picture of just how many people are riding the buses, which routes are on time, how drivers respond to changing conditions, and many other factors. Based on this information, Sao Paulo can optimize its operations, providing added vehicles where demand is genuine whilst finding which routes are the most efficient. Without big data analytics, this process would have taken a very long time and would likely be hit-or-miss in terms of accuracy, but now, big data provides more certainty in a shorter amount of time….(More)”

Handbook: How to Catalyze Humanitarian Innovation in Computing Research Institutes


Patrick Meier: “The handbook below provides practical collaboration guidelines for both humanitarian organizations & computing research institutes on how to catalyze humanitarian innovation through successful partnerships. These actionable guidelines are directly applicable now and draw on extensive interviews with leading humanitarian groups and CRIs including the International Committee of the Red Cross (ICRC), United Nations Office for the Coordination of Humanitarian Affairs (OCHA), United Nations Children’s Fund (UNICEF), United Nations High Commissioner for Refugees (UNHCR), UN Global Pulse, Carnegie Mellon University (CMU), International Business Machines (IBM), Microsoft Research, the Data Science for Social Good Program at the University of Chicago, and others.

This handbook, which is the first of its kind, also draws directly on years of experience and lessons learned from the Qatar Computing Research Institute’s (QCRI) active collaboration and unique partnerships with multiple international humanitarian organizations. The aim of this blog post is to actively solicit feedback on this first, complete working draft, which is available here as an open and editable Google Doc. …(More)”

Want to fix the world? Start by making clean energy a default setting


Chris Mooney in the Washington Post: “In recent years, psychologists and behavioral scientists have begun to decipher why we make the choices that we do when it comes to using energy. And the bottom line is that it’s hard to characterize those choices as fully “rational.”

Rather than acting like perfect homo economicuses, they’ve found, we’re highly swayed by the energy use of our neighbors and friends — peer pressure, basically. At the same time, we’re also heavily biased by the status quo — we delay in switching to new energy choices, even when they make a great deal of economic sense.

All of which has led to the popular idea of “nudging”: the idea that you can subtly sway people to change their behavior by changing, say, the environment in which they make choices, or the kinds of information they receive. Not in a coercive way, but rather, through gentle tweaks and prompts. And now, a major study in Nature Climate Change demonstrates that one very popular form of energy-use nudging that might be called “default switching,” or the “default effect,” does indeed work — and could possibly work at a very large scale.

“This is the first demonstration of a large-scale nudging effect using defaults in the domain of energy choices,” says Sebastian Lotz of Stanford University and the University of Lausanne in Switzerland, who conducted the research with Felix Ebeling of the University of Cologne in Germany….(More)”