Working Paper by Mark S. Fox (University of Toronto): “Cities are moving towards policymaking based on data. They are publishing data using Open Data standards, linking data from disparate sources, allowing the crowd to update their data with smartphone apps that use Open APIs, and applying “Big Data” techniques to discover relationships that lead to greater efficiencies.
One Big City Data example is from New York City (Mayer-Schönberger & Cukier, 2013). Building owners were illegally converting their buildings into rooming houses that contained 10 times the number of people they were designed for. These buildings posed a number of problems, including fire hazards, drugs, crime, disease and pest infestations. There are over 900,000 properties in New York City and only 200 inspectors, who received over 25,000 illegal conversion complaints per year. The challenge was to distinguish nuisance complaints from those worth investigating; under existing methods, only 13% of inspections resulted in vacate orders.
New York’s Analytics team created a dataset that combined data from 19 agencies, including buildings, preservation, police, fire, tax, and building permits. By combining data analysis with expertise gleaned from inspectors (e.g., buildings that had recently received a building permit were less likely to be a problem, since they were being well maintained), the team developed a rating system for complaints. Using these ratings to prioritize visits, inspectors issued vacate orders in 70% of their inspections, a fivefold increase in efficiency…
This paper provides an introduction to the concepts that underlie Big City Data. It explains the concepts of Open, Unified, Linked and Grounded data that lie at the heart of the Semantic Web. It then builds on this by discussing Data Analytics, which includes Statistics, Pattern Recognition and Machine Learning. Finally, we discuss Big Data as the extension of Data Analytics to the Cloud, where massive amounts of computing power and storage are available for processing large data sets. We use city data to illustrate each.”
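The excerpt doesn't show how New York's rating system worked internally. As a minimal sketch of the idea, with all field names, weights, and records invented for illustration, complaint triage might look like this in Python, with the inspectors' building-permit heuristic down-weighting a complaint:

```python
# A minimal, hypothetical sketch of complaint triage in the style described
# above. All field names, weights, and records are invented for illustration;
# the city's actual model combined data from 19 agencies.

def score_complaint(complaint, recent_permits, tax_liens):
    """Return a priority score; higher means more likely to merit inspection."""
    score = 1.0
    # Inspector heuristic from the excerpt: a recent building permit suggests
    # active maintenance, so down-weight the complaint.
    if complaint["building_id"] in recent_permits:
        score *= 0.5
    # Hypothetical risk signal: a tax lien on the property raises priority.
    if complaint["building_id"] in tax_liens:
        score *= 1.8
    # Hypothetical signal: repeat complaints raise priority.
    score *= 1.0 + 0.2 * complaint.get("prior_complaints", 0)
    return score

complaints = [
    {"id": 1, "building_id": "B-1001", "prior_complaints": 4},
    {"id": 2, "building_id": "B-1002", "prior_complaints": 0},
]
recent_permits = {"B-1002"}
tax_liens = {"B-1001"}

ranked = sorted(complaints,
                key=lambda c: score_complaint(c, recent_permits, tax_liens),
                reverse=True)
print([c["id"] for c in ranked])  # highest-priority complaints first: [1, 2]
```

The point is only that inspector expertise becomes an explicit, auditable adjustment to a priority score, which can then be validated against inspection outcomes.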
Microsensors help map crowdsourced pollution data
Elena Craft in GreenBiz: Michael Heimbinder, a Brooklyn entrepreneur, hopes to empower individuals with his small-scale air quality monitoring system, AirCasting. The AirCasting system uses a mobile, Bluetooth-enabled air monitor not much larger than a smartphone to measure carbon dioxide, carbon monoxide, nitrogen dioxide, particulate matter and other pollutants. An accompanying Android app records the readings and plots them on an emissions map.
Alternatively, another instrument, the Air Quality Egg, comes pre-assembled and ready to use. Innovative air monitoring systems, such as AirCasting or the Air Quality Egg, empower ordinary citizens to monitor the pollution they encounter daily and proactively address problematic sources of pollution.
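The excerpt doesn't describe how the app turns readings into a map. As a rough sketch, assuming timestamped, geotagged readings, the mapping step amounts to converting each reading into a GeoJSON point that any web map can render:

```python
import json

# Hypothetical sensor readings: (latitude, longitude, pollutant, value).
# The real AirCasting pipeline is not documented in this excerpt; this just
# shows the generic step of turning geotagged readings into map-ready GeoJSON.
readings = [
    (40.6782, -73.9442, "NO2", 21.5),    # ppb
    (40.6790, -73.9450, "PM2.5", 12.1),  # µg/m³
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"pollutant": name, "value": value},
    }
    for lat, lon, name, value in readings
]

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```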
This technology is part of a growing movement to enable the use of small sensors. In response to inquiries about small-sensor data, the EPA is researching the next generation of air measuring technologies. EPA experts are working with sensor developers to evaluate data quality and understand useful sensor applications. Through this ongoing collaboration, the EPA hopes to bolster measurements from conventional, stationary air-monitoring systems with data collected from individuals’ air quality microsensors….
Like many technologies emerging from the big data revolution and innovations in the energy sector, microsensing technology provides a wealth of high-quality data at a relatively low cost. It allows us to track previously undetected air pollution from traditional sources of urban smog, such as highways, as well as from unconventional sources. Microsensing technology not only educates the public, but also helps to enlighten regulators so that policymakers can work from the facts to protect citizens’ health and welfare.
Capitol Words
About Capitol Words: “For every day Congress is in session, Capitol Words visualizes the most frequently used words in the Congressional Record, giving you an at-a-glance view of which issues lawmakers address on a daily, weekly, monthly and yearly basis. Capitol Words lets you see the most popular words spoken by lawmakers on the House and Senate floor.
Methodology
The contents of the Congressional Record are downloaded daily from the website of the Government Printing Office. The GPO distributes the Congressional Record in ZIP files containing the contents of the record in plain-text format.
Each text file is parsed and turned into an XML document, with things like the title and speaker marked up. The contents of each file are then split up into words and phrases — from one word to five.
The resulting data is indexed in a search engine. Capitol Words has data from 1996 to the present.”
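The one-to-five-word split is the heart of the pipeline. Here is a minimal sketch of that step in Python; the naive tokenizer is an assumption, since the project's actual parser isn't shown in the excerpt:

```python
import re
from collections import Counter

def ngrams(text, max_n=5):
    """Yield every 1- to max_n-word phrase in a plain-text record,
    mirroring the one-to-five-word split described above."""
    words = re.findall(r"[a-z']+", text.lower())  # naive tokenizer (assumption)
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

record = "Mr. Speaker, I rise today to discuss the farm bill."
counts = Counter(ngrams(record))
print(counts.most_common(3))
```

In a full pipeline, these per-record counts would be aggregated by date and chamber before being indexed for search.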
Analyzing the Analyzers
“We used dimensionality reduction techniques to divide potential data scientists into five categories based on their self-ranked skill sets (Statistics, Math/Operations Research, Business, Programming, and Machine Learning/Big Data), and four categories based on their self-identification (Data Researchers, Data Businesspeople, Data Engineers, and Data Creatives). Further examining the respondents based on their division into these categories provided additional insights into the types of professional activities, educational background, and even scale of data used by different types of Data Scientists.
In this report, we combine our results with insights and data from others to provide a better understanding of the diversity of practitioners, and to argue for the value of clearer communication around roles, teams, and careers.”
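The excerpt doesn't say which dimensionality reduction technique the authors used. As a generic sketch on synthetic data, PCA followed by k-means (via scikit-learn) shows the shape of the approach: project the five self-ranked skills into a low-dimensional space, then group respondents:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for survey responses: rows are respondents, columns are
# self-ranked skills (Statistics, Math/OR, Business, Programming, ML/Big Data).
# The report's actual technique isn't specified in this excerpt; PCA plus
# k-means is one generic way to group respondents by skill profile.
rng = np.random.default_rng(0)
skills = rng.integers(1, 6, size=(200, 5)).astype(float)  # ranks 1-5

reduced = PCA(n_components=2).fit_transform(skills)  # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10).fit_predict(reduced)  # e.g., 4 groups

for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} respondents")
```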
Visualizing 3 Billion Tweets
Eric Gundersen from Mapbox: “This is a look at 3 billion tweets – every geotagged tweet since September 2011, mapped, showing facets of Twitter’s ecosystem and userbase in incredible new detail, revealing demographic, cultural, and social patterns down to the city level, across the entire world. We were brought in by the data team at Gnip, who have awesome APIs and raw access to the Twitter firehose, and together Tom and data artist Eric Fischer used our open source tools to visualize the data and build interfaces that let you explore the stories of space, language, and access to technology.
This is big data, and there’s a significant level of geographic overlap between tweets, so Eric wrote an open-source tool that de-duplicated 2.7 billion overlapping datapoints, leaving 280 million unique locations…”
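Eric Fischer's de-duplication tool itself isn't shown in the excerpt. As a toy sketch of one common approach, coordinates can be snapped to a fine grid so that overlapping points collapse to one location per cell:

```python
# A toy sketch of geographic de-duplication, not the actual Mapbox/Gnip tool:
# snap coordinates to a fine grid and keep one point per cell.

def dedupe(points, precision=4):
    """Keep one point per grid cell; 4 decimal places (roughly 10 m of
    latitude) is an assumed resolution, not the tool's actual setting."""
    seen = set()
    unique = []
    for lat, lon in points:
        cell = (round(lat, precision), round(lon, precision))
        if cell not in seen:
            seen.add(cell)
            unique.append((lat, lon))
    return unique

tweets = [(40.71231, -74.00561), (40.71232, -74.00562), (51.50123, -0.12456)]
print(len(dedupe(tweets)))  # 2: the first two points share a grid cell
```

At the scale described, a production tool would stream the data and use a compact hash or sort-based pass rather than an in-memory set, but the grid-snapping idea is the same.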
Visualizing the Stunning Growth of 8 Years of OpenStreetMap
Emily Badger in Atlantic Cities: “The U.S. OpenStreetMap community gathered in San Francisco over the weekend for its annual conference, the State of the Map. The loose citizen-cartography collective has now been incrementally mapping the world since 2004. While they were taking stock, it turns out the global open mapping effort has now mapped data on more than 78 million buildings and 21 million miles of road (if you wanted to drive all those roads at, say, 60 miles an hour, it would take you some 40 years to do it).
And more than a million people have chipped away at this in an impressively democratic manner: 83.6 percent of the changes in the whole database have been made by 99.9 percent of contributors.
These numbers come from the OpenStreetMap 2013 Data Report, which also contains, of course, more maps. The report, created by MapBox, includes a beautiful worldwide visualization of all the road updates made as OpenStreetMap has grown, with some of the earliest imports of data shown in green and blue, and more recent ones in white. You can navigate the full map here (scroll down), but we’ve grabbed a couple of snapshots for you as well.”
Data-Smart City Solutions
Press Release: “Today the Ash Center for Democratic Governance and Innovation at Harvard Kennedy School announced the launch of Data-Smart City Solutions, a new initiative aimed at using big data and analytics to transform the way local government operates. Bringing together leading industry, academic, and government officials, the initiative will offer city leaders a national repository of cases and best practice examples where cities and private partners use analytics to solve city problems. Data-Smart City Solutions is funded by Bloomberg Philanthropies and the John D. and Catherine T. MacArthur Foundation.
Data-Smart City Solutions highlights best practices, curates resources, and supports cities embarking on new data projects. The initiative’s website contains feature-length articles on how data drives innovation in different policy areas, profile pieces on municipal leaders at the forefront of implementing data analytics in their cities, and resources for interested officials to begin data projects in their own communities.
Recent articles include an assessment of Boston’s Adopt-a-Hydrant program as a potential harbinger of future city work promoting civic engagement and infrastructure maintenance, and a feature on how predictive technology is transforming police work. The site also spotlights municipal use of data such as San Francisco’s efforts to integrate data from different social service departments to better identify and serve at-risk youth. In addition to visiting the initiative’s website, Data-Smart City Solutions’ work is chronicled in their newsletter as well as on their Twitter page.”
The Use of Data Visualization in Government
“The report presents case studies on how visualization techniques are now being used by two local governments, one state government, and three federal government agencies. Each case study discusses the audience for visualization. Understanding audience is important, as government organizations provide useful visualizations to different audiences, including the media, political oversight organizations, constituents, and internal program teams. To assist in effectively communicating to these audiences, the report details attributes of meaningful visualizations: relevance, meaning, beauty, ease of use, legibility, truthfulness, accuracy, and consistency among them.”
Big Data Is Not Our Master. Humans create technology. Humans can control it.
Chris Hughes in New Republic: “We’ve known for a long time that big companies can stalk our every digital move and customize our every Web interaction. Our movements are tracked by credit cards, Gmail, and tollbooths, and we haven’t seemed to care all that much.
That is, until this week’s news of government eavesdropping, with the help of these very same big companies—Verizon, Facebook, and Google, among others. For the first time, America is waking up to the realities of what all this information—known in the business as “big data”—enables governments and corporations to do….
We are suddenly wondering, Can the rise of enormous data systems that enable this surveillance be stopped or controlled? Is it possible to turn back the clock?
Technologists see the rise of big data as the inevitable march of history, impossible to prevent or alter. Viktor Mayer-Schönberger and Kenneth Cukier’s recent book Big Data is emblematic of this argument: They say that we must cope with the consequences of these changes, but they never really consider the role we play in creating and supporting these technologies themselves….
But these well-meaning technological advocates have forgotten that as a society, we determine our own future and set our own standards, norms, and policy. Talking about technological advancements as if they are pre-ordained science erases the role of human autonomy and decision-making in inventing our own future. Big data is not a Leviathan that must be coped with, but a technological trend that we have made possible and support through social and political policy.”
Smart Citizen Kit enables crowdsourced environmental monitoring
Emma Hutchings at PSFK: “The Smart Citizen Kit is a crowdsourced environmental monitoring platform. By scattering devices around the world, the creators hope to build a global network of sensors that report local environmental conditions like CO and NO2 levels, light, noise, temperature and humidity.
Organized by the Fab Lab at the Institute for Advanced Architecture of Catalonia, a team of scientists, architects, and engineers is paving the way to humanize environmental monitoring. The open-source platform consists of Arduino-compatible hardware, a data-visualization web API, and a mobile app. Users are invited to take part in the interactive global environmental database, visualizing their data and comparing it with data from others around the world.”
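The kit's actual web API isn't documented in this excerpt. As a purely hypothetical sketch, a device or gateway posting one reading to a platform endpoint might look like this, with the URL, payload fields, and auth header all invented for illustration:

```python
import json
import urllib.request

# Hypothetical sketch of a device (or gateway) posting one reading to a
# platform web API. The endpoint, payload fields, and auth header are
# invented; the real Smart Citizen API is not described in this excerpt.
reading = {
    "device_id": "kit-0042",  # hypothetical identifier
    "temperature_c": 21.4,
    "humidity_pct": 48.0,
    "noise_db": 55.2,
    "no2_ppb": 18.7,
}

req = urllib.request.Request(
    "https://example.org/api/v1/readings",  # placeholder endpoint
    data=json.dumps(reading).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <api-key>"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```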