New paper by Ben Green presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: “After decades of urban investment dominated by sprawl and outward growth, municipal governments in the United States are responsible for the upkeep of urban neighborhoods that have not received sufficient resources or maintenance in many years. One of city governments’ biggest challenges is to revitalize decaying neighborhoods given only limited resources. In this paper, we apply data science techniques to administrative data to help the City of Memphis, Tennessee, improve distressed neighborhoods. We develop new methods to efficiently identify homes in need of rehabilitation and to predict the impacts of potential investments on neighborhoods. Our analyses allow Memphis to design neighborhood-improvement strategies that generate greater impacts on communities. Since our work uses data that most US cities already collect, our models and methods are highly portable and inexpensive to implement. We also discuss the challenges we encountered while analyzing government data and deploying our tools, and highlight important steps to improve future data-driven efforts in urban policy….(More)”
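The abstract doesn’t spell out the modeling details, but “identify homes in need of rehabilitation from data cities already collect” is, in outline, a standard supervised-learning setup. Below is a minimal, hypothetical sketch of that pattern; the file, feature names and model choice are illustrative assumptions, not the paper’s actual method.

```python
# Hypothetical sketch of scoring parcels for likely rehabilitation need from
# routine administrative records. Feature names, file, and model choice are
# illustrative assumptions, not the paper's specification.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Assumed parcel-level table joined from tax, code-enforcement, and utility records.
parcels = pd.read_csv("parcels.csv")
features = ["years_tax_delinquent", "open_code_violations",
            "utility_shutoffs_12mo", "structure_age"]

# Train on parcels with known outcomes (e.g., past inspections), then score everything.
labeled = parcels.dropna(subset=["needed_rehab"])
model = GradientBoostingClassifier().fit(labeled[features], labeled["needed_rehab"])

# Rank all parcels so inspectors visit the likeliest candidates first.
parcels["rehab_score"] = model.predict_proba(parcels[features])[:, 1]
print(parcels.sort_values("rehab_score", ascending=False)[["parcel_id", "rehab_score"]].head(10))
```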
How to predict rising home prices, neighborhood change and gentrification
So Zillow, Accela and several other partners and local governments including Tampa, San Diego and Chattanooga have developed a common standard all cities can use to publish data about building and construction permits. The concept has important precedent: Google helped coax cities to standardize their transit data so you can track bus and train routes on Google Maps. Yelp has tried to do the same with municipal restaurant inspection data so you can see health scores when you’re scouting dinner.
Building permit data similarly has the potential to change how consumers, researchers and cities themselves understand the built world around us. Imagine, to give another example, if an app revealed that the loud construction site in your neighbor’s back yard had no permits attached to it. What if you could click one link and tell the city that, speeding up the bureaucracy around illegal construction? …(More)”
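As a rough illustration of why a shared permit standard matters, the sketch below pulls a city’s (hypothetical) standardized permit feed and checks whether any permits exist for an address, which is exactly the “is that construction site permitted?” question above. The endpoint and field names are assumptions loosely modeled on the kind of fields such a standard covers, not the published specification.

```python
# Illustrative only: the feed URL and field names are assumptions, not the
# actual standard's schema.
import requests

def permits_at_address(feed_url: str, address: str) -> list:
    """Download a city's permit feed and keep only records for one address."""
    records = requests.get(feed_url, timeout=30).json()
    return [r for r in records if r.get("address", "").lower() == address.lower()]

# What a single standardized record might look like (hypothetical values):
example_permit = {
    "permit_num": "BLD-2015-01234",
    "permit_type": "residential_addition",
    "status_current": "issued",
    "issued_date": "2015-05-04",
    "address": "123 Main St",
}

if not permits_at_address("https://city.example.gov/permits.json", "456 Elm St"):
    print("No permits on file for this address -- worth flagging to the city.")
```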
Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government
New paper by Jan Whittington et al: “Cities hold considerable information, including details about the daily lives of residents and employees, maps of critical infrastructure, and records of the officials’ internal deliberations. Cities are beginning to realize that this data has economic and other value: If done wisely, the responsible release of city information can also release greater efficiency and innovation in the public and private sector. New services are cropping up that leverage open city data to great effect.
Meanwhile, activist groups and individual residents are placing increasing pressure on state and local government to be more transparent and accountable, even as others sound an alarm over the privacy issues that inevitably attend greater data promiscuity. This takes the form of political pressure to release more information, as well as increased requests for information under the many public records acts across the country.
The result of these forces is that cities are beginning to open their data as never before. It turns out there is surprisingly little research to date into the important and growing area of municipal open data. This article is among the first sustained, cross-disciplinary assessments of an open municipal government system. We are a team of researchers in law, computer science, information science, and urban studies. We have worked hand-in-hand with the City of Seattle, Washington for the better part of a year to understand its current procedures from each disciplinary perspective. Based on this empirical work, we generate a set of recommendations to help the city manage risk latent in opening its data….(More)”
IBM using Watson to build a “Siri for Cities”
Daniel Terdiman at FastCompany: “A new app that incorporates IBM’s Watson cognitive computing platform is like Siri for ordering city services.
IBM said today that the city of Surrey, in British Columbia, Canada, has rolled out the new app, which leverages Watson’s sophisticated language and data analysis system so that residents can use natural language to ask about things like why their trash wasn’t picked up or how to find a lost cat.
Watson is best known as the computer system that autonomously vanquished the world’s best Jeopardy players during a highly publicized competition in 2011. In the years since, IBM has applied the system to a wide range of computing problems in industries like health care, banking, retail, and education. The system is based on Watson’s ability to understand natural language queries and to analyze huge data sets.
Recently, Watson rolled out a tool designed to help people detect the tone in their writing.
Surrey worked with the developer Purple Forge to build the new city services app, which will be combined with the city’s existing “My Surrey” mobile and web tools. IBM said that residents can ask a wide range of questions on devices like smartphones, laptops, or even Apple Watches. Big Blue said Surrey’s app marks the first time Watson has been used in a “citizen services” app.
The tool offers a series of frequently asked questions, but also allows residents in the city of nearly half a million to come up with their own. IBM said Surrey officials are hopeful that the app will help them be more responsive to residents’ concerns.
Among the services users can ask about are those provided by Surrey’s police and fire departments, animal control, parking enforcement, trash pickup, and others….(More)”
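The article doesn’t describe how the app maps free-text questions to answers, but the underlying pattern of natural-language FAQ routing can be sketched generically. The toy example below is not the Watson or Purple Forge implementation; it simply matches a resident’s question to the closest canned FAQ entry using TF-IDF similarity.

```python
# Generic natural-language FAQ routing (not the Watson/Purple Forge implementation):
# match a free-text question to the most similar FAQ entry.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "Why wasn't my garbage collected?": "Check the holiday schedule or report a missed pickup online.",
    "How do I report a lost pet?": "Contact animal control and file a lost-animal report.",
    "Where can I pay a parking ticket?": "Parking fines can be paid online or at city hall.",
}

vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(list(faq.keys()))

def answer(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), faq_matrix)
    return list(faq.values())[scores.argmax()]

print(answer("my trash wasn't picked up this week"))
```

A real deployment would need a much larger question set, tolerance for misspellings, and a fallback path to a human operator.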
Local open data ecosystems – a prototype map
Ed Parkes and Gail Dawes at Nesta: “It is increasingly recognised that some of the most important open data is published by local authorities (LAs) – data that matters to us day to day, like bin collection days, planning applications and even where your local public toilet is. Given the likely move towards greater decentralisation, starting with devolution to cities, the publication of local open data could arguably become still more important over the next couple of years. In addition, as of 1st April, there is a new transparency code for local government requiring local authorities to publish further information on things ranging from spending to local land assets. To pre-empt this likely renewed focus on local open data, we have begun to develop a prototype map to highlight the UK’s local open data ecosystem.
Already there is some great practice in the publication of open data at a local level – such as Leeds Data Mill, London Datastore, and Open Data Sheffield. This regional activity is characterised not just by high-quality data publication, but also by pulling together, through hackdays, challenges and meetups, a community interested in the power of open data. This creates an ecosystem of publishers and re-users at a local level. Some of the best practice in developing such an ecosystem was recognised by the last government when it announced a group of Local Authority Open Data Champions, some of whom also received project funding from the Cabinet Office and through the Open Data User Group.
Outside of this best practice it isn’t always easy to tell how developed the open data agendas of smaller, less urban councils are. Other than looking at each council’s website or, increasingly, at the data portals that forward-thinking councils are providing, there are a surprisingly large number of places where local authorities could make their open data available. The best known of these is the Openly Local project, but at the time of writing it appears to have been retired. Perhaps the best catalogue of local authority data is on Data.gov.uk itself, which holds 1,449 datasets published by LAs across 200 different organisations. Following that there is the Open Data Communities website, which hosts links to LAs’ linked datasets. Using data from the latter, Steve Peters has developed the local data dashboard (itself based on the UK Local Government Open Data resource map from Owen Boswarva). In addition, local authorities can register their open data in the LGA’s Open Data Inventory Service and take it through the ODI’s data certification process.
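For anyone wanting to gauge coverage on Data.gov.uk programmatically, the catalogue exposes the standard CKAN API. Below is a rough sketch assuming the CKAN package_search action; the organisation slug is purely illustrative and would need to match Data.gov.uk’s actual publisher list.

```python
# Rough sketch: count catalogued datasets per publisher via the standard CKAN API.
# The organisation slug below is illustrative, not a verified Data.gov.uk value.
import requests

CKAN_SEARCH = "https://data.gov.uk/api/3/action/package_search"

def count_datasets(org_slug: str) -> int:
    """Return the number of catalogued datasets for one publishing organisation."""
    resp = requests.get(CKAN_SEARCH,
                        params={"fq": f"organization:{org_slug}", "rows": 0},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]["count"]

print(count_datasets("leeds-city-council"))  # hypothetical slug
```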
Prototype map of local open data ecosystems
To try to highlight patterns in local authority open data publication we decided to make a map of activity around the country (although in the first instance we’ve focused on England)….(More)
Why transparency can be a dirty word
Francis Fukuyama in the Financial Times: “It is hard to think of a political good that is more universally praised than government transparency. Whereas secrecy shelters corruption, abuse of power, undue influence and a host of other evils, transparency allows citizens to keep their rulers accountable. Or that is the theory.
It is clear that there are vast areas in which modern governments should reveal more. Edward Snowden’s revelations of eavesdropping by the National Security Agency have encouraged the belief that the US government has been not nearly transparent enough. But is it possible to have too much transparency? The answer is clearly yes: demands for certain kinds of transparency have hurt government effectiveness, particularly with regard to its ability to deliberate.
The US has a number of statutes mandating transparency passed decades ago in response to perceived government abuses, and motivated by perfectly reasonable expectations that the government should operate under greater scrutiny. Yet they have had a number of unfortunate consequences.
The Federal Advisory Committee Act, for example, places onerous requirements on any public agency seeking to consult a group outside the government, requiring that such groups be formally approved and meet various criteria for political balance. Meetings must be held in public. The Government in the Sunshine Act stipulates that, with certain exceptions, “every portion of every meeting of an agency shall be open to public observation”.
These obligations put a serious damper on informal consultations with citizens, and even make it difficult for officials to talk to one another. Deliberation, whether in the context of a family or a federal agency, requires people to pose hypotheticals and, when trying to reach agreement, make concessions.
When the process itself is open to public scrutiny, officials fear being hounded for a word taken out of context. They resort to cumbersome methods of circumventing the regulations, such as having one-on-one discussions so as not to trigger a group rule, or having subordinates do all the serious work.
The problem with the Freedom of Information Act is different. It was meant to serve investigative journalists looking into abuses of power. But today a large number of FOIA requests are filed by corporate sleuths trying to ferret out secrets for competitive advantage, or simply by individuals curious to find out what the government knows about them. The FOIA can be “weaponised”, as when the activist group Judicial Watch used it to obtain email documents on the Obama administration’s response to the 2012 attack on the US compound in Benghazi…..
National security aside, the federal government’s executive branch is probably one of the most transparent organisations on earth — no corporation, labour union, lobbying group or non-profit organisation is subject to such scrutiny. The real problem, as Professor John DiIulio of the University of Pennsylvania has pointed out, is that most of the work of government has been outsourced to contractors who face none of the transparency requirements of the government itself. It is an impossible task even to establish the number of such contractors in a single American city, much less how they are performing their jobs.
In Europe, where there is no equivalent to the FACA or the Sunshine Act, governments can consult citizens’ groups more flexibly. There is, of course, a large and growing distrust of European institutions by citizens. But America’s experience suggests that greater transparency requirements do not necessarily lead to more trust in government….(More)”
The Data Divide: What We Want and What We Can Get
Craig Adelman and Erin Austin at Living Cities (Read Blog 1): “There is no shortage of data. At every level–federal, state, county, city and even within our own organizations–we are collecting and trying to make use of data. Data is a catch-all term that suggests universal access and easy use. The problem? In reality, data is often expensive, difficult to access, created for a single purpose, quickly changing and difficult to weave together. To aid and inform future data-dependent research initiatives, we’ve outlined the common barriers that community development faces when working with data and identified three ways to overcome them.
Common barriers include:
- Data often comes at a hefty price. …
- Data can come with restrictions and regulations. …
- Data is built for a specific purpose, meaning information isn’t always in the same place. …
- Data can actually be too big. ….
- Data gaps exist. …
- Data can be too old. ….
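As a quick, hypothetical illustration of the last two barriers (gaps and stale data), a basic sanity check like the one below is worth running before committing to a dataset; the file and column names are assumptions.

```python
# Minimal data-quality check for gaps (missing values) and staleness.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("neighborhood_indicators.csv", parse_dates=["last_updated"])

# Data gaps: share of missing values in each column.
print(df.isna().mean().sort_values(ascending=False))

# Data can be too old: how stale is the newest record?
age = pd.Timestamp.today() - df["last_updated"].max()
if age.days > 365:
    print(f"Warning: newest record is {age.days} days old -- consider a fresher source.")
```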
As you can tell, there can be many complications when it comes to working with data, but there is still great value in using and having it. We’ve found a few ways to overcome these barriers when scoping a research project:
1) Prepare to have to move to “Plan B” when trying to get answers that aren’t readily available in the data. It is incredibly important to be able to react to unexpected data conditions and to use proxy datasets when necessary in order to efficiently answer the core research question.
2) Building a data budget for your work is also advisable, as you shouldn’t anticipate that public entities or private firms will give you free data (nor that community development partners will be able to share datasets used for previous studies).
3) Identifying partners—including local governments, brokers, and community development or CDFI partners—is crucial to collecting the information you’ll need….(More)
Transform Government From The Outside In
Review by GCN of a new report by Forrester: “Agencies struggle to match the customer experience available from the private sector, and that causes citizens to become dissatisfied with government. In fact, seven of the 10 worst organizations in Forrester’s U.S. Customer Experience Index are federal agencies, and only a third of Americans say their experience with the government meets expectations.
FINDINGS: To keep up with public expectations, Forrester found governments must embrace mobile, turn big data into actionable insights, improve the customer experience and accelerate digital government. Among the recommendations:
Agencies must shift their thinking to make mobile the primary platform for connection between citizens and government. Government staff should also have mobile access to the tools and resources needed to complete tasks in the field. Agencies should learn what mobile methods work best for citizens, ensure all citizen services are mobile-friendly and use the mobile platform for sharing information with the public and gathering incident reports and sentiments. By building mobile-friendly infrastructure and processes, like municipal Wi-Fi hotspots, the government (and its services) can be constantly connected to its citizens and businesses.
Governments must find ways to integrate, share and use the large amounts of data and analytics they collect. By aggregating citizen-driven data from precinct-level or agency-specific databases and data collected by systems already in place, the government can increase responsiveness, target areas in need and make better short-term decisions and long-term plans. Opening data to researchers, the private sector and citizens can also spark innovation across industries.
Better customer experience has a ripple effect through government, improving the efficacy of legislation, compliance, engagement and the effectiveness of government offices. This means making processes such as applying for healthcare, registering a car or paying taxes easier and available through highly functional, user-friendly websites. Such improvements in communication and digital customer service will save citizens’ time, increase the use of government services and reduce agencies’ workloads….(More)”
Urban Informatics
Special issue of Data Engineering: “Most data related to people and the built world originates in urban settings. There is increasing demand to capture and exploit this data to support efforts in areas such as Smart Cities, City Science and Intelligent Transportation Systems. Urban informatics deals with the collection, organization, dissemination and analysis of urban information used in such applications. However, the dramatic growth in the volume of this urban data creates challenges for existing data-management and analysis techniques. The collected data is also increasingly diverse, with a wide variety of sensor, GIS, imagery and graph data arising in cities. To address these challenges, urban informatics requires development of advanced data-management approaches, analysis methods, and visualization techniques. It also provides an opportunity to confront the “Variety” axis of Big Data head on. The contributions in this issue cross the spectrum of urban information, from its origin, to archiving and retrieval, to analysis and visualization. …
Collaborative Sensing for Urban Transportation (By Sergio Ilarri, et al)
Open Civic Data: Of the People, For the People, By the People (by Arnaud Sahuguet, et al, The GovLab)
Plenario: An Open Data Discovery and Exploration Platform for Urban Science (by Charlie Catlett et al)
Riding from Urban Data to Insight Using New York City Taxis (by Juliana Freire et al)…(More)”
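As a small, concrete taste of the kind of analysis the issue surveys, the sketch below aggregates a local extract of taxi trip records by hour; the file and column name are assumptions and not tied to any specific paper above.

```python
# Toy urban-data analysis: hourly trip volumes from a local taxi-trip extract.
# The CSV file and its pickup_datetime column are assumed for illustration.
import pandas as pd

trips = pd.read_csv("yellow_taxi_trips.csv", parse_dates=["pickup_datetime"])
trips_per_hour = trips.set_index("pickup_datetime").resample("H").size()

print(trips_per_hour.describe())  # typical hourly volume in the extract
print(trips_per_hour.idxmax())    # the single busiest hour
```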
Unpacking Civic Tech – Inside and Outside of Government
David Moore at Participatory Politics Foundation: “…I’ll argue it’s important to unpack the big-tent term “civic tech” into at least five major component areas (overlapping in practice and flexible, of course) in order to more clearly understand what we have and what we need:
- Responsive & efficient city services (e.g., SeeClickFix)
- Open data portals & open government data publishing / visualization (Socrata, OpenGov.com)
- Engagement platforms for government entities (Mindmixer aka Sidewalk)
- Community-focused organizing services (Change, NextDoor, Brigade – these could validly be split, as NextDoor is of course place-based IRL)
- Geo-based services & open mapping data (e.g., Civic Insight)
More precisely, instead of “civic tech”, the term #GovTech can be productively applied to companies whose primary business model is vending to government entities – some #govtech is #opendata, some is civic #engagement, and that’s healthy & brilliant. But it doesn’t make sense to me to conflate as “civic tech” both government software vendors and the open-data work of good-government watchdogs. Another framework for understanding the inside / outside relationship to government, in company incorporation strategies & priorities, is broadly as follows:
- tech entirely-outside government (such as OpenCongress or OpenStates);
- tech mostly-outside government, where some elected officials volunteer to participate (such as AskThem, Councilmatic, DemocracyOS, or Change Decision Makers);
- tech mostly-inside government, paid-for-by-government (such as Mindmixer or SpeakUp or OpenTownHall) where elected officials or gov’t staff sets the priorities, with the strong expectation of an official response;
- deep legacy tech inside government, the enterprise vendors of closed-off CRM software to Congressional offices (including major defense contractors!).
These are the websites up and running today in the civic tech ecosystem – surveying them, I see there’s a lot of work still to do on developing advanced metrics towards thicker civic engagement: towards evaluating whether the existing tools are having the impact we hope and expect them to at their level of capitalization, and towards better contextualizing the role of very small non-profit alternatives….
One question to study is whether the highest-capitalized U.S. civic tech companies (Change, NextDoor, Mindmixer, Socrata, possibly Brigade) – which also generally have the most users – are meeting ROI on continual engagement within communities.
- If it’s a priority metric for users of a service to attend a community meeting, for example, are NextDoor or Mindmixer having expected impact?
- How about metrics on return participation, joining an advocacy group, attending a district meeting with their U.S. reps, organizing peer-to-peer with neighbors?
- How about writing or annotating their own legislation at the city level, introducing it for an official hearing, and moving it up the chain of government to state and even federal levels for consideration? What actual new popular public policies or systemic reforms are being carefully, collaboratively passed?
- Do less-capitalized, community-based non-profits (AskThem, 596 Acres, OpenPlans’ much-missed Shareabouts, CKAN data portals, LittleSis, BeNeighbors, PBNYC tools) – with less scale, but with more open-source, open-data tools that can be remixed – improve on the tough metric of ROI on continual engagement or research-impact in the news?…(More)