What factors influence transparency in US local government?


Grichawat Lowatcharin and Charles Menifield at LSE Impact Blog: “The Internet has opened a new arena for interaction between governments and citizens, as it not only provides more efficient and cooperative ways of interacting, but also more efficient service delivery, and more efficient transaction activities. …But to what extent does increased Internet access lead to higher levels of government transparency? …While we found Internet access to be a significant predictor of Internet-enabled transparency in our simplest model, this finding did not hold true in our most extensive model. This does not negate the fact that the variable is an important factor in assessing transparency levels and Internet access. … Our data shows that total land area, population density, minority percentage, educational attainment, and the council-manager form of government are statistically significant predictors of Internet-enabled transparency. These findings both confirm and contradict the findings of previous researchers. For example, while the effect of education on transparency appears to be the most consistent finding in previous research, we also noted that the rural/urban (population density) dichotomy and the education variable are important factors in assessing transparency levels. Hence, as governments create strategic plans that include growth models, they should not only consider the budgetary ramifications of growth, but also the fact that educated residents want more web-based interaction with government. This finding was reinforced by a recent Census Bureau report indicating that some of the cities and counties in Florida and California had population increases greater than ten thousand persons per month during the period 2013-2014.

This article is based on the paper ‘Determinants of Internet-enabled Transparency at the Local Level: A Study of Midwestern County Web Sites’, in State and Local Government Review. (More)”
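For readers curious what comparing a "simplest" and a "most extensive" model looks like in practice, here is a minimal sketch of fitting two nested county-level regressions with statsmodels. The file name and variable names are hypothetical stand-ins, not the authors' actual data or code.

```python
# Illustrative sketch only: nested OLS models of Internet-enabled transparency.
# Column names are invented to mirror the predictors described in the excerpt.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("county_transparency.csv")  # hypothetical dataset

# Simplest model: Internet access alone.
simple = smf.ols("transparency_score ~ internet_access", data=counties).fit()

# Extended model: add the structural and demographic controls the authors cite.
extended = smf.ols(
    "transparency_score ~ internet_access + land_area + pop_density"
    " + pct_minority + education + council_manager",
    data=counties,
).fit()

print(simple.summary())
print(extended.summary())  # internet_access may lose significance once controls enter
```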

Making data open for everyone


Kathryn L.S. Pettit and Jonathan Schwabis at UrbanWire: “Over the past few years, there have been some exciting developments in open source tools and programming languages, business intelligence tools, big data, open data, and data visualization. These trends, and others, are changing the way we interact with and consume information and data. And that change is driving more organizations and governments to consider better ways to provide their data to more people.

The World Bank, for example, has a concerted effort underway to open its data in better and more visual ways. Google’s Public Data Explorer brings together large datasets from around the world into a single interface. For-profit providers like OpenGov and Socrata are helping local, state, and federal governments open their data (both internally and externally) in newer platforms.

We are firm believers in open data. (There are, of course, limitations to open data because of privacy or security, but that’s a discussion for another time). But open data is not simply about putting more data on the Internet. It’s not just about posting files and telling people where to find them. To allow and encourage more people to use and interact with data, that data needs to be useful and readable not only by researchers, but also by the dad in northern Virginia or the student in rural Indiana who wants to know more about their public libraries.

Open data should be easy to access, analyze, and visualize

Many are working hard to provide more data in better ways, but we have a long way to go. Take, for example, the Congressional Budget Office (full disclosure, one of us used to work at CBO). Twice a year, CBO releases its Budget and Economic Outlook, which provides the 10-year budget projections for the federal government. Say you want to analyze 10-year budget projections for the Pell Grant program. You’d need to select “Get Data” and click on “Baseline Projections for Education” and then choose “Pell Grant Programs.” This brings you to a PDF report, where you can copy the data table you’re looking for into a format you can actually use (say, Excel). You would need to repeat the exercise to find projections for the 21 other programs for which the CBO provides data.
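One common workaround for tables locked inside PDF reports is to extract them programmatically rather than copy-pasting into Excel. The sketch below uses the pdfplumber library with a hypothetical file name and table position; a real CBO report would need inspection to find the right page and table, and this is not a tool the authors endorse.

```python
# Hedged sketch: pulling a budget-projection table out of a PDF into a CSV.
# File name and table location are hypothetical.
import pdfplumber
import pandas as pd

with pdfplumber.open("cbo_baseline_education.pdf") as pdf:   # hypothetical file
    rows = pdf.pages[0].extract_table()                       # first table on page 1 (may be None)

table = pd.DataFrame(rows[1:], columns=rows[0])  # treat the first row as the header
table.to_csv("pell_grant_projections.csv", index=False)
```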

In another case, the Bureau of Labor Statistics has tried to provide users with query tools that avoid the use of PDFs, but still require extra steps to process. You can get the unemployment rate data through their Java Applet (which doesn’t work on all browsers, by the way), select the various series you want, and click “Get Data.” On the subsequent screen, you are given some basic formatting options, but the default display shows all of your data series as separate Excel files. You can then copy and paste or download each one and then piece them together.
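The BLS also offers a public data API that sidesteps the applet entirely. The sketch below is a hedged example of querying it for the headline unemployment rate; it assumes the v2 timeseries endpoint, the series ID LNS14000000 (seasonally adjusted unemployment rate), and the response field names documented at the time, so check the current BLS documentation before relying on any of them.

```python
# Hedged sketch: querying the BLS public data API instead of the query tool.
import requests

resp = requests.post(
    "https://api.bls.gov/publicAPI/v2/timeseries/data/",
    json={"seriesid": ["LNS14000000"], "startyear": "2013", "endyear": "2014"},
)
resp.raise_for_status()

# Field names below follow the documented response shape; verify against live output.
series = resp.json()["Results"]["series"][0]
for obs in series["data"]:
    print(obs["year"], obs["period"], obs["value"])  # e.g. 2014 M06 6.1
```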

Taking a step closer to the ideal of open data, the Institute of Museum and Library Services (IMLS) followed President Obama’s May 2013 executive order to make their data open in a machine-readable format. That’s great, but it only goes so far. The IMLS platform, for example, allows you to explore information about your own public library. But the data are labeled with variable names such as BRANLIB and BKMOB that are not intuitive or clear. Users then have to find the data dictionary to understand what data fields mean, how they’re defined, and how to use them.
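A small illustration of the extra work those cryptic names impose: a user who wants readable columns has to join the data against the dictionary themselves. The sketch below assumes hypothetical file names and a two-column dictionary of field codes and labels; the labels shown are only how such a mapping might read, not the official IMLS definitions.

```python
# Illustrative sketch: using a data dictionary to relabel cryptic field names.
import pandas as pd

libraries = pd.read_csv("imls_public_libraries.csv")   # hypothetical path
dictionary = pd.read_csv("imls_data_dictionary.csv")   # hypothetical columns: field, label

# Build a mapping such as {"BRANLIB": "Branch libraries", "BKMOB": "Bookmobiles", ...}
labels = dict(zip(dictionary["field"], dictionary["label"]))
libraries = libraries.rename(columns=labels)
print(libraries.columns.tolist())  # human-readable column names
```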

These efforts to provide more data represent real progress, but often fail to be useful to the average person. They move from publishing data that are not readable (buried in PDFs or systems that allow the user to see only one record at a time) to data that are machine-readable (libraries of raw data files or APIs, from which data can be extracted using computer code). We now need to move from a world in which data are simply machine-readable to one in which data are human-readable….(More)”

Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government


New paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.

Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.

The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”

What We’ve Learned About Sharing Our Data Analysis


Jeremy Singer-Vine at Source: “Last Friday morning, Jessica Garrison, Ken Bensinger, and I published a BuzzFeed News investigation highlighting the ease with which American employers have exploited and abused a particular type of foreign worker—those on seasonal H–2 visas. The article drew on seven months’ worth of reporting, scores of interviews, hundreds of documents—and two large datasets maintained by the Department of Labor.

That same morning, we published the corresponding data, methodologies, and analytic code on GitHub. This isn’t the first time we’ve open-sourced our data and analysis; far from it. But the H–2 project represents our most ambitious effort yet. In this post, I’ll describe our current thinking on “reproducible data analyses,” and how the H–2 project reflects those thoughts.

What Is “Reproducible Data Analysis”?

It’s helpful to break down a couple of slightly oversimplified definitions. Let’s call “open-sourcing” the act of publishing the raw code behind a software project. And let’s call “reproducible data analysis” the act of open-sourcing the code and data required to reproduce a set of calculations.
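As a rough, hypothetical sketch of what that second definition looks like in practice, the script below reads a committed raw data file, performs the whole calculation in code, and writes the derived output so anyone can re-run and audit it. The paths and column names are invented for illustration and are not BuzzFeed's actual pipeline.

```python
# Minimal sketch of a re-runnable analysis script committed alongside its raw data.
import pandas as pd

RAW = "data/raw/h2_certifications.csv"           # committed raw input (hypothetical)
OUT = "data/processed/violations_by_state.csv"   # derived output, regenerated each run

def main():
    records = pd.read_csv(RAW)
    # The entire calculation lives in code, so readers can reproduce or criticize it.
    summary = (
        records.groupby("state", as_index=False)["violations"]
        .sum()
        .sort_values("violations", ascending=False)
    )
    summary.to_csv(OUT, index=False)

if __name__ == "__main__":
    main()
```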

Journalism has seen a mini-boom of reproducible data analysis in the past year or two. (It’s far from a novel concept, of course.) FiveThirtyEight publishes data and re-runnable computer code for many of their stories. You can download the brains and brawn behind Leo, the New York Times’ statistical model for forecasting the outcome of the 2014 midterm Senate elections. And if you want to re-run Barron’s magazine’s analysis of SEC Rule 605 reports, you can do that, too. The list goes on.

….

Why Reproducible Data Analysis?

At BuzzFeed News, our main motivation is simple: transparency. If an article includes our own calculations (and they go beyond a grade-schooler’s pen-and-paper arithmetic), then you should be able to see—and potentially criticize—how we did it…..

There are reasons, of course, not to publish a fully-reproducible analysis. The most obvious and defensible reason: Your data includes Social Security numbers, state secrets, or other sensitive information. Sometimes, you’ll be able to scrub these bits from your data. Other times, you won’t. (A detailed methodology is a good alternative.)

How To Publish Reproducible Data Analysis?

At BuzzFeed News, we’re still figuring out the best way to skin this cat. Other news organizations might arrive at entirely opposite conclusions. That said, here are some tips, based on our experience:

Describe the main data sources, and how you got them. Art appraisers and data-driven reporters agree: Provenance matters. Who collected the data? What universe of things does it quantify? How did you get it? … (More)”

The New Science of Sentencing


Anna Maria Barry-Jester et al at the Marshall Project: “Criminal sentencing has long been based on the present crime and, sometimes, the defendant’s past criminal record. In Pennsylvania, judges could soon consider a new dimension: the future.

Pennsylvania is on the verge of becoming one of the first states in the country to base criminal sentences not only on what crimes people have been convicted of, but also on whether they are deemed likely to commit additional crimes. As early as next year, judges there could receive statistically derived tools known as risk assessments to help them decide how much prison time — if any — to assign.

Risk assessments have existed in various forms for a century, but over the past two decades, they have spread through the American justice system, driven by advances in social science. The tools try to predict recidivism — repeat offending or breaking the rules of probation or parole — using statistical probabilities based on factors such as age, employment history and prior criminal record. They are now used at some stage of the criminal justice process in nearly every state. Many court systems use the tools to guide decisions about which prisoners to release on parole, for example, and risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial.

But Pennsylvania is about to take a step most states have until now resisted for adult defendants: using risk assessment in sentencing itself. A state commission is putting the finishing touches on a plan that, if implemented as expected, could allow some offenders considered low risk to get shorter prison sentences than they would otherwise or avoid incarceration entirely. Those deemed high risk could spend more time behind bars.

Pennsylvania, which already uses risk assessment in other phases of its criminal justice system, is considering the approach in sentencing because it is struggling with an unwieldy and expensive corrections system. Pennsylvania has roughly 50,000 people in state custody, 2,000 more than it has permanent beds for. Thousands more are in local jails, and hundreds of thousands are on probation or parole. The state spends $2 billion a year on its corrections system — more than 7 percent of the total state budget, up from less than 2 percent 30 years ago. Yet recidivism rates remain high: 1 in 3 inmates is arrested again or reincarcerated within a year of being released.

States across the country are facing similar problems — Pennsylvania’s incarceration rate is almost exactly the national average — and many policymakers see risk assessment as an attractive solution. Moreover, the approach has bipartisan appeal: Among some conservatives, risk assessment appeals to the desire to spend tax dollars on locking up only those criminals who are truly dangerous to society. And some liberals hope a data-driven justice system will be less punitive overall and correct for the personal, often subconscious biases of police, judges and probation officers. In theory, using risk assessment tools could lead to both less incarceration and less crime.

There are more than 60 risk assessment tools in use across the U.S., and they vary widely. But in their simplest form, they are questionnaires — typically filled out by a jail staff member, probation officer or psychologist — that assign points to offenders based on anything from demographic factors to family background to criminal history. The resulting scores are based on statistical probabilities derived from previous offenders’ behavior. A low score designates an offender as “low risk” and could result in lower bail, less prison time or less restrictive probation or parole terms; a high score can lead to tougher sentences or tighter monitoring.
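To make that "simplest form" concrete, here is a toy points-based scorer. The factors, weights, and cutoffs are invented purely for illustration and do not correspond to any tool actually in use.

```python
# Toy sketch of an additive risk-assessment questionnaire with invented weights.
def risk_score(age, prior_convictions, employed):
    points = 0
    points += 2 if age < 25 else 0          # younger offenders score higher
    points += min(prior_convictions, 5)     # one point per prior conviction, capped at five
    points += 0 if employed else 1          # unemployment adds a point
    return points

def risk_band(points):
    # Hypothetical cutoffs mapping a score to a risk band.
    if points <= 2:
        return "low"
    if points <= 5:
        return "medium"
    return "high"

print(risk_band(risk_score(age=22, prior_convictions=1, employed=False)))  # "medium"
```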

The risk assessment trend is controversial. Critics have raised numerous questions: Is it fair to make decisions in an individual case based on what similar offenders have done in the past? Is it acceptable to use characteristics that might be associated with race or socioeconomic status, such as the criminal record of a person’s parents? And even if states can resolve such philosophical questions, there are also practical ones: What to do about unreliable data? Which of the many available tools — some of them licensed by for-profit companies — should policymakers choose?…(More)”

The Data Divide: What We Want and What We Can Get


Craig Adelman and Erin Austin at Living Cities (Read Blog 1): “There is no shortage of data. At every level–federal, state, county, city and even within our own organizations–we are collecting and trying to make use of data. Data is a catch-all term that suggests universal access and easy use. The problem? In reality, data is often expensive, difficult to access, created for a single purpose, quickly changing and difficult to weave together. To aid and inform future data-dependent research initiatives, we’ve outlined the common barriers that community development faces when working with data and identified three ways to overcome them.

Common barriers include:

  • Data often comes at a hefty price. …
  • Data can come with restrictions and regulations. …
  • Data is built for a specific purpose, meaning information isn’t always in the same place. …
  • Data can actually be too big. ….
  • Data gaps exist. …
  • Data can be too old. ….

As you can tell, there can be many complications when it comes to working with data, but there is still great value in using and having it. We’ve found a few ways to overcome these barriers when scoping a research project:

1) Prepare to have to move to “Plan B” when trying to get answers that aren’t readily available in the data. It is incredibly important to be able to react to unexpected data conditions and to use proxy datasets when necessary in order to efficiently answer the core research question.

2) Building a data budget for your work is also advisable, as you shouldn’t anticipate that public entities or private firms will give you free data (nor that community development partners will be able to share datasets used for previous studies).

3) Identifying partners—including local governments, brokers, and community development or CDFI partners—is crucial to collecting the information you’ll need….(More)”

Confronting the Internet’s Dark Side: Moral and Social Responsibility on the Free Highway


New book by Raphael Cohen-Almagor: “Terrorism, cyberbullying, child pornography, hate speech, cybercrime: along with unprecedented advancements in productivity and engagement, the Internet has ushered in a space for violent, hateful, and antisocial behavior. How do we, as individuals and as a society, protect against dangerous expressions online? Confronting the Internet’s Dark Side is the first book on social responsibility on the Internet. It aims to strike a balance between the free speech principle and the responsibilities of the individual, corporation, state, and the international community. This book brings a global perspective to the analysis of some of the most troubling uses of the Internet. It urges net users, ISPs, and liberal democracies to weigh freedom and security, finding the golden mean between unlimited license and moral responsibility. This judgment is necessary to uphold the very liberal democratic values that gave rise to the Internet and that are threatened by an unbridled use of technology. (More)

Quantifying Crowd Size with Mobile Phone and Twitter Data


New paper: “Being able to infer the number of people in a specific area is of extreme importance for the avoidance of crowd disasters and to facilitate emergency evacuations. Here, using a football stadium and an airport as case studies, we present evidence of a strong relationship between the number of people in restricted areas and activity recorded by mobile phone providers and the online service Twitter. Our findings suggest that data generated through our interactions with mobile phone networks and the Internet may allow us to gain valuable measurements of the current state of society….(More)”
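A hedged illustration of the underlying idea (not the authors' method or data): once a linear relationship between recorded activity and headcount has been calibrated on events with known attendance, it can be used to estimate crowd size from activity alone. All numbers and variable names below are hypothetical.

```python
# Illustrative sketch: calibrate a linear activity-to-headcount relationship,
# then estimate crowd size for a new activity level.
import numpy as np

activity = np.array([1200, 3400, 5600, 8100, 9900])       # e.g. calls + tweets per hour (invented)
headcount = np.array([8000, 21000, 36000, 52000, 64000])  # ground-truth attendance (invented)

slope, intercept = np.polyfit(activity, headcount, 1)  # least-squares linear fit
estimate = slope * 7000 + intercept                    # predict for a new activity level
print(f"Estimated crowd: {estimate:,.0f} people")
```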

eGov Benchmark 2015 (EU)


Capgemini: “The state of public service provision today across Europe is progressing – but not fast enough, according to the latest eGovernment Benchmark report. Policymakers need to steer the course towards digital transformation now. The Background Report assesses eGovernment’s role in seven high-impact events in citizens’ lives and the availability of key IT building blocks ….

The report found that Europe is gaining in digital maturity as more online public services improve in user-centricity. What Member States need to focus on are improvements to mobile, transparency, and simplification.
What we found:

  • Europe is gaining in digital maturity: With an average score of 73% in 2014, user-centricity is confirmed as the most advanced indicator at the EU-28+ level, ending 3 percentage points higher than a year earlier. The results indicate year-on-year progress across all the European countries compared.
  • Mobile – a missed opportunity: Only one in four public-sector websites is mobile-friendly, which leaves out a large segment of service users.
  • Improved transparency, but still a long way to go to build trust: We saw a 3-percentage-point improvement from the previous measurement, but the score remains an unsatisfactory 51%.
  • Slowly moving to smarter government: A 1-point improvement in the adoption of key enablers puts the transition to smart government at risk. Key enablers, such as authentic sources, allow for automation of services and re-use of data to further reduce burdens.
  • The Digital Single Market is yet to come: Set as one of the ten priorities by the Juncker Commission, cross-border mobility is not yet even halfway to being fully achieved.

Innovation to Drive the European Advantage

New technologies and models allow governments to apply innovative solutions to deliver better, faster and cheaper services.

We put forward four key recommendations for European public sector organizations to innovate.

  • Enable: Build a shared digital infrastructure as the basis. This infrastructure foundation is required to develop the technology building blocks for digital transformation – across agencies and tiers.
  • Entice: Move from customer services to customized services. Services that entice and engage users to go online also keep them there.
  • Exploit: Make online services mandatory. Aim to make ‘digital by default’ the natural next step.
  • Educate. Educate. Educate: Practitioners, civil servants, leaders and users must be trained up in digital skills.

See also Infographic: Are Government Services Prepared for the Digital Age?

Accelerating the Use of Prizes to Address Tough Challenges


Tom Kalil and Jenn Gustetic in DigitalGov: “Later this year, the Federal government will celebrate the fifth anniversary of Challenge.gov, a one-stop shop that has prompted tens of thousands of individuals, including engaged citizens and entrepreneurs, to participate in more than 400 public-sector prize competitions with more than $72 million in prizes.

The May 2015 report to Congress on the Implementation of Federal Prize Authority for Fiscal Year 2014 highlights that Challenge.gov is a critical component of the Federal government’s use of prize competitions to spur innovation. Federal agencies have used prize competitions to improve the accuracy of lung cancer screenings, develop environmentally sustainable brackish water desalination technologies, encourage local governments to allow entrepreneurs to launch new startups in a day, and increase the resilience of communities in the wake of Hurricane Sandy. Numerous Federal agencies have discovered that prizes allow them to:

  • Pay only for success and establish an ambitious goal without having to predict which team or approach is most likely to succeed.
  • Reach beyond the “usual suspects” to increase the number of citizen solvers and entrepreneurs tackling a problem.
  • Bring out-of-discipline perspectives to bear.
  • Increase cost-effectiveness to maximize the return on taxpayer dollars.
  • Inspire risk-taking by offering a level playing field through credible rules and robust judging mechanisms.

To build on this momentum, the Administration will hold an event this fall to highlight the role that prizes play in solving critical national and global issues. The event will showcase public- and private-sector relevant commitments from Federal, state, and local agencies, companies, foundations, universities, and non-profits. Individuals and organizations interested in participating in this event or making commitments should send us a note at challenges [at] ostp.gov by August 28, 2015.

Commitments may include the announcement of specific, ambitious incentive prizes and/or steps that will increase public- and/or private-sector capacity to design high-impact prizes and challenges. For example:….

  • Foundations could sponsor fellowships for prize designers in the public sector to encourage the development and implementation of ambitious prizes in areas of national importance. Foundations could also sponsor workshops that bring together companies, university researchers, non-profits, and government agencies to identify potential high-impact incentive prizes.
  • Universities could establish courses and online material to help students and mid-career professionals learn to design effective prizes and challenges.
  • Researchers could conduct empirical research on incentive prizes and other market-shaping techniques (e.g. Advance Market Commitments, milestone payments) to increase our understanding of how and under what circumstances these approaches can best be used to accelerate progress on important problems.
Working together, we can use incentive prizes to inspire people to solve some of our toughest challenges. (More)”