5 Ways Cities Are Using Big Data


Eric Larson in Mashable: “New York City released more than 200 high-value data sets to the public on Monday — a way, in part, to provide more content for open-source mapping projects like OpenStreetMap.
It’s one of many releases since Local Law 11 of 2012 passed in February, which calls for greater transparency around the data the city government collects.
But it’s not just New York: Cities across the world, large and small, are utilizing big data sets — like traffic statistics, energy consumption rates and GPS mapping — to launch projects to help their respective communities.
We rounded up a few of our favorites below….

1. Seattle’s Power Consumption

The city of Seattle recently partnered with Microsoft and Accenture on a pilot project to reduce the area’s energy usage. Using Microsoft’s Azure cloud, the project will gather and analyze hundreds of data sets from the building-management systems of four downtown buildings.
The system will then use predictive analytics to work out what’s working and what’s not — i.e., where energy can be used less, or not at all. The goal is to reduce power usage by 25%.
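The core pattern here, comparing live readings against a baseline and flagging the hours that stand out, can be sketched in a few lines. Everything below is hypothetical: the readings, the threshold, and the logic are invented for illustration and are not the actual Microsoft/Accenture system.

```python
# Invented hourly energy readings (kWh) for one building.
hourly_kwh = [42.0, 41.5, 43.2, 44.0, 61.8, 42.6, 41.9, 43.1]

# Baseline: the mean of all readings; flag any hour well above it.
baseline = sum(hourly_kwh) / len(hourly_kwh)
THRESHOLD = 1.25  # flag anything 25% above baseline (arbitrary choice)

anomalies = [
    (hour, kwh)
    for hour, kwh in enumerate(hourly_kwh)
    if kwh > baseline * THRESHOLD
]
# anomalies -> [(4, 61.8)]: hour 4 is where energy could be used less
```

A production system would use a predicted baseline (weather, occupancy, time of day) rather than a flat mean, but the flagging step is the same idea.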

2. SpotHero

Finding parking spots — especially in big cities — is undoubtedly a headache.

SpotHero is an app, for both iOS and Android devices, that tracks down parking spots in a select number of cities. How it works: Users type in an address or neighborhood (say, Adams Morgan in Washington, D.C.) and are taken to a listing of available garages and lots nearby — complete with prices and time durations.
The app tracks availability in real-time, too, so a spot is updated in the system as soon as it’s snagged.
Seven cities are currently synced with the app: Washington, D.C., New York, Chicago, Baltimore, Boston, Milwaukee and Newark, N.J.
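The lookup-and-update flow described above can be illustrated with a toy model. SpotHero’s actual data model and API are not described in the article, so the structures, names, and prices below are all invented.

```python
# Invented inventory: neighborhood -> list of garages with live counts.
garages = {
    "adams-morgan": [
        {"name": "18th St Garage", "price_per_hr": 4.50, "spots_open": 12},
        {"name": "Columbia Rd Lot", "price_per_hr": 3.00, "spots_open": 0},
    ],
}

def available(neighborhood):
    """Return garages in a neighborhood that still have open spots."""
    return [g for g in garages.get(neighborhood, []) if g["spots_open"] > 0]

def book_spot(neighborhood, garage_name):
    """Decrement availability as soon as a spot is snagged."""
    for g in garages.get(neighborhood, []):
        if g["name"] == garage_name and g["spots_open"] > 0:
            g["spots_open"] -= 1
            return True
    return False
```

The real service would sit behind a server with concurrent bookings to worry about, but the "update the count the moment a spot is taken" behavior is the part this sketch shows.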

3. Adopt-a-Hydrant

Anyone who’s spent a winter in Boston will agree: it snows.

In January, the city’s Office of New Urban Mechanics released an app called Adopt-a-Hydrant. The program is mapped with every fire hydrant in the city proper — more than 13,000, according to a Harvard blog post — and lets residents pledge to shovel out one, or as many as they choose, in the almost inevitable event of a blizzard.
Once a pledge is made, volunteers receive a notification if their hydrant — or hydrants — become buried in snow.
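The pledge-and-notify flow amounts to a lookup table from hydrants to volunteers. The sketch below is invented for illustration and is not the city’s actual implementation.

```python
# Hypothetical pledge registry: hydrant_id -> volunteer contact.
pledges = {}

def adopt(hydrant_id, volunteer):
    """A resident pledges to shovel out a particular hydrant."""
    pledges[hydrant_id] = volunteer

def notify_buried(buried_hydrants):
    """After a storm, build the notifications to send to volunteers
    whose adopted hydrants are buried."""
    return [
        (pledges[h], f"Hydrant {h} is buried -- time to shovel!")
        for h in buried_hydrants
        if h in pledges
    ]
```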

4. Adopt-a-Sidewalk

Similar to Adopt-a-Hydrant, Chicago’s Adopt-a-Sidewalk app lets residents of the Windy City pledge to shovel sidewalks after snowfall. In a city just as notorious for snowstorms as Boston, it’s an effective way to ensure public spaces remain free of snow and ice — especially spaces belonging to the elderly or disabled.

If you’re unsure which part of town you’d like to “adopt,” just register on the website and browse the map — you’ll receive a pop-up notification for each street you swipe that’s still available.

5. Less Congestion for Lyon

Last year, researchers at IBM teamed up with the city of Lyon, France (about four hours south of Paris), to build a system that helps traffic operators reduce congestion on the road.

The system, called the “Decision Support System Optimizer” (DSSO), uses real-time traffic reports to detect and predict congestion. If an operator sees that a traffic jam is likely to occur, he or she can adjust traffic signals accordingly to keep the flow of cars moving smoothly.
It’s an especially helpful tool for emergencies — say, when an ambulance is en route to the hospital. Over time, the algorithms in the system will “learn” from its most successful recommendations, then apply that knowledge when making future predictions.”
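The “learning” step described above, favoring the recommendations that worked before, can be caricatured as simple success counting. This is a loose, invented sketch of the feedback idea, not IBM’s actual DSSO algorithms.

```python
from collections import defaultdict

# How often each signal-timing recommendation has cleared congestion.
success_count = defaultdict(int)

def record_outcome(recommendation, cleared_congestion):
    """Log whether a past recommendation actually relieved the jam."""
    if cleared_congestion:
        success_count[recommendation] += 1

def best_recommendation(candidates):
    """Prefer the candidate adjustment with the best track record."""
    return max(candidates, key=lambda r: success_count[r])
```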

Twitter’s activist roots: How Twitter’s past shapes its use as a protest tool


Radio Netherlands Worldwide: “Surprised when demonstrators from all over the world took to Twitter as a protest tool? Evan “Rabble” Henshaw-Plath, member of Twitter’s founding team, was not. Rather, he sees it as a return to its roots: Inspired by protest coordination tools like TXTMob, and shaped by the values and backgrounds of Twitter’s founders, he believes activist potential was built into the service from the start.

It took a few revolutions before Twitter was taken seriously. Critics claimed that its 140-character limit only provided space for the most trivial thoughts: neat for keeping track of Ashton Kutcher’s lunch choices, but not much else. It made the transition from Silicon Valley toy into Middle East protest tool seem all the more astonishing.
Unless, Henshaw-Plath argues, you know the story of how Twitter came to be. He was the lead developer at Odeo, the company out of which Twitter emerged. TXTMob, an activist tool deployed during the 2004 Republican National Convention in the US to coordinate protest efforts via SMS, was, says Henshaw-Plath, a direct inspiration for Twitter.
Protest 1.0
In 2004, while Henshaw-Plath was working at Odeo, he and a few other colleagues found a fun side-project in working on TXTMob, an initiative by what he describes as a “group of academic artist/prankster/hacker/makers” that operated under the ostensibly serious moniker of Institute for Applied Autonomy (IAA). Earlier IAA projects included small graffiti robots on wheels that spray painted slogans on pavements during demonstrations, and a pudgy talking robot with big puppy eyes made to distribute subversive literature to people who ignored less-cute human pamphleteers.
TXTMob was a more serious endeavor than these earlier projects: a tactical protest coordination tool. With TXTMob, users could quickly exchange text messages with large groups of other users about protest locations and police crackdowns….”

Index: The Data Universe


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the data universe and was originally published in 2013.

  • How much data exists in the digital universe as of 2012: 2.7 zettabytes*
  • Increase in the quantity of Internet data from 2005 to 2012: +1,696%
  • Percent of the world’s data created in the last two years: 90
  • Number of exabytes (=1 billion gigabytes) created every day in 2012: 2.5; that number doubles every month
  • Percent of the digital universe in 2005 created by the U.S. and western Europe vs. emerging markets: 48 vs. 20
  • Percent of the digital universe in 2012 created by emerging markets: 36
  • Percent of the digital universe in 2020 predicted to be created by China alone: 21
  • Percent of information in the digital universe created and consumed by consumers (video, social media, photos, etc.) in 2012: 68
  • Percent of that consumer-generated information for which enterprises have some liability or responsibility (copyright, privacy, regulatory compliance, etc.): 80
  • Amount included in the Obama Administration’s 2012 Big Data initiative: over $200 million
  • Amount the Department of Defense is investing annually on Big Data projects as of 2012: over $250 million
  • Data created per day in 2012: 2.5 quintillion bytes
  • How many terabytes* of data collected by the U.S. Library of Congress as of April 2011: 235
  • How many terabytes of data collected by Walmart per hour as of 2012: 2,560, or 2.5 petabytes*
  • Projected growth in global data generated per year, as of 2011: 40%
  • Number of IT jobs created globally by 2015 to support big data: 4.4 million (1.9 million in the U.S.)
  • Potential shortage of data scientists in the U.S. alone predicted for 2018: 140,000-190,000, in addition to 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions
  • Time needed to sequence the complete human genome (analyzing 3 billion base pairs) in 2003: ten years
  • Time needed in 2013: one week
  • The world’s annual effective capacity to exchange information through telecommunication networks in 1986, 2007, and (predicted) 2013: 281 petabytes, 65 exabytes, 667 exabytes
  • Projected amount of digital information created annually that will either live in or pass through the cloud: 1/3
  • Increase in data collection volume year-over-year in 2012: 400%
  • Increase in number of individual data collectors from 2011 to 2012: nearly double (over 300 data collection parties in 2012)

*1 zettabyte = 1 billion terabytes | 1 petabyte = 1,000 terabytes | 1 terabyte = 1,000 gigabytes | 1 gigabyte = 1 billion bytes
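The footnote’s decimal units make a couple of the index entries easy to cross-check; the snippet below is just arithmetic on the definitions given, nothing from an outside source.

```python
# Decimal unit ladder from the footnote above.
GB = 10**9           # 1 gigabyte = 1 billion bytes
TB = 1_000 * GB      # 1 terabyte = 1,000 gigabytes
PB = 1_000 * TB      # 1 petabyte = 1,000 terabytes
EB = 1_000 * PB      # 1 exabyte  = 1,000 petabytes
ZB = 10**9 * TB      # 1 zettabyte = 1 billion terabytes

# "2.5 exabytes created every day" and "2.5 quintillion bytes per day"
# are the same figure: an exabyte is 10^18 (a quintillion) bytes.
assert EB == 10**18
assert ZB == 10**21

# Walmart's 2,560 TB per hour, in petabytes (the index rounds to 2.5).
walmart_pb_per_hour = 2_560 * TB / PB
```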

The Logic of Connective Action- Digital Media and the Personalization of Contentious Politics


New book by W. Lance Bennett and Alexandra Segerberg: “The Logic of Connective Action explains the rise of a personalized digitally networked politics in which diverse individuals address the common problems of our times such as economic fairness and climate change. Rich case studies from the United States, United Kingdom, and Germany illustrate a theoretical framework for understanding how large-scale connective action is coordinated using inclusive discourses such as “We Are the 99%” that travel easily through social media. In many of these mobilizations, communication operates as an organizational process that may replace or supplement familiar forms of collective action based on organizational resource mobilization, leadership, and collective action framing. In some cases, connective action emerges from crowds that shun leaders, as when Occupy protesters created media networks to channel resources and create loose ties among dispersed physical groups. In other cases, conventional political organizations deploy personalized communication logics to enable large-scale engagement with a variety of political causes. The Logic of Connective Action shows how power is organized in communication-based networks, and what political outcomes may result.”

Smartphones As Weather Surveillance Systems


Tom Simonite in MIT Technology Review: “You probably never think about the temperature of your smartphone’s battery, but it turns out to provide an interesting method for tracking outdoor air temperature. It’s a discovery that adds to other evidence that mobile apps could provide a new way to measure what’s happening in the atmosphere and improve weather forecasting.
Startup OpenSignal, whose app crowdsources data on cellphone reception, first noticed in 2012 that changes in battery temperature correlated with those outdoors. On Tuesday, the company published a scientific paper on the technique in a geophysics journal and announced that it will be used to interpret data from a weather crowdsourcing app. OpenSignal originally started collecting data on battery temperatures to try to understand the connections between signal strength and how quickly a device chews through its battery.
OpenSignal’s crowdsourced weather-tracking effort joins another accidentally enabled by smartphones: a project called PressureNET, which collects air pressure data by taking advantage of the fact that many Android phones have a barometer inside to aid their GPS function (see “App Feeds Scientists Atmospheric Data From Thousands of Smartphones”). Cliff Mass, an atmospheric scientist at the University of Washington, is working to incorporate PressureNET data into weather models that usually rely on data from weather stations. He believes that smartphones could provide valuable data from places where there are no weather stations, if enough people start sharing data using apps like PressureNET.
Other research suggests that logging changes in cell network signal strength perceived by smartphones could provide yet more weather data. In February researchers in the Netherlands produced detailed maps of rainfall compiled by monitoring fluctuations in the signal strength measured by cellular network masts, caused by water droplets in the atmosphere.”
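One plausible way to turn crowd-averaged battery temperatures into an outdoor estimate is a simple linear calibration, sketched below with invented numbers; OpenSignal’s published model is more involved than this, so treat the snippet purely as an illustration of the idea.

```python
# Invented data: average battery temperature across many phones in a
# city, and the outdoor temperature measured at the same times.
battery_avg_c = [22.1, 24.3, 27.0, 29.8, 32.5]
outdoor_c     = [ 5.0,  8.0, 12.0, 16.0, 20.0]

# Least-squares fit: outdoor = a * battery + b
n = len(battery_avg_c)
mean_x = sum(battery_avg_c) / n
mean_y = sum(outdoor_c) / n
a = sum((x - mean_x) * (y - mean_y)
        for x, y in zip(battery_avg_c, outdoor_c)) \
    / sum((x - mean_x) ** 2 for x in battery_avg_c)
b = mean_y - a * mean_x

def estimate_outdoor(battery_temp_c):
    """Invert the calibration to estimate air temperature."""
    return a * battery_temp_c + b
```

Averaging over many phones is what makes this workable: any single battery is dominated by screen use and charging, but those effects wash out across a crowd.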

How to do scientific research without even trying (much)


Ars Technica: “To some extent, scientific research requires expensive or specialized equipment—some work just requires a particle accelerator or a virus containment facility. But plenty of other research has very simple requirements: a decent camera, a bit of patience, or being in the right place at the right time. Since that sort of work is open to anyone, getting the public involved can be a huge win for scientists, who can then obtain much more information than they could have gathered on their own.
A group of Spanish researchers has now written an article that is a mixture of praise for this sort of citizen science, a resource list for people hoping to get involved, and a how-to guide for anyone inspired to join in. The researchers focus on their own area of interest — insects, specifically the Hemiptera, or “true bugs” — but a lot of what they say applies to other areas of research.

The paper also lists a variety of regional-specific sites that focus on insect identification and tracking, such as ones for the UK, Belgium, and Slovenia. But a dedicated system isn’t required for this sort of resource. In the researchers’ home base on the Iberian Peninsula, insects are tracked via a Flickr group. (If you’re interested in insect research and based in the US, you can also find dozens of projects at the SciStarter site.) We’ve uploaded some of the most amazing images into a gallery that accompanies this article.
ZooKeys, 2013. DOI: 10.3897/zookeys.319.4342

E-petition systems and political participation: About institutional challenges and democratic opportunities


New paper by Knud Böhle and Ulrich Riehm in First Monday: “The implementation of e–petition systems holds the promise to increase the participative and deliberative potential of petitions. The most ambitious e–petition systems allow for electronic submission, make publicly available the petition text, related documents and the final decision, allow supporting a petition by electronically co–signing it, and provide electronic discussion forums. Based on a comprehensive survey (2010/2011) of parliamentary petition bodies at the national level covering the 27 member states of the European Union (EU) plus Norway and Switzerland, the state of public e–petitioning in the EU is presented, and the relevance of e–petition systems as a means of political participation is discussed….
The most interesting finding is that some petition systems — by leveraging the potential of the Internet — further the involvement of the public considerably. This happens in two ways: first by nudging e–petition systems in the direction of lightweight instruments of direct democracy and second by making the institution itself more open, transparent, accountable, effective, and responsive through the involvement of the public. Both development paths might also lead to expectations that eventually cannot be complied with by the petition body without more substantial transformations of the institution. This or that might happen. Empirically, we ain’t seen almost nothing yet.”

Crowdfunding gives rise to projects truly in public domain


USA Today: “Crowdfunding, the cyberpractice of pooling individuals’ money for a cause, so far has centered on private enterprise. It’s now spreading to public spaces and other community projects that are typically the domain of municipalities.

The global reach and speed of the Internet are raising not just money but awareness and galvanizing communities.

SmartPlanet.com recently reported that crowdfunding capital projects is gaining momentum, giving communities part ownership of everything from a 66-story downtown skyscraper in Bogota to a bridge in Rotterdam, the Netherlands. Several websites such as neighborland.com and neighbor.ly are platforms to raise money for projects ranging from planting fruit trees in San Francisco to building a playground that accommodates disabled children in Parsippany, N.J.

“Community groups are increasingly ready to challenge cities’ plans,” says Bryan Boyer, an independent consultant and adviser to The Finnish Innovation Fund SITRA, a think tank. “We’re all learning to live in the context of a networked society.”

Crowdfunder, which connects entrepreneurs and investors globally, just launched a local version — CROWDFUNDx.”

Technology and Economic Prosperity


Eduardo Porter in The New York Times: “The impact of a technological innovation depends on how deeply it embeds itself in everything we do.
Earlier this month, a couple of economists at the Harvard Business School and the Toulouse School of Economics in France produced a paper asking “If Technology Has Arrived Everywhere, Why Has Income Diverged?” Economic prosperity, they noted, is ultimately driven by technological innovation. So if technologies today spread much more quickly than they used to from rich to poor countries, how come the income divide between rich and poor nations remains so large?
It took 119 years, on average, for the spindle to spread outside of Europe to the poorer reaches of the late-18th-century world, according to the authors. The Internet encircled the globe in seven. One might expect that this would have helped developing countries catch up with the richest nations at the frontier of technology.
The reason that this did not happen, the authors propose, is that despite spreading faster, new technologies have not embedded themselves as deeply, measured by their prevalence, relative to the size of the economy. “The divergence in the degree of assimilation of technologies started about 100 years ago,” observed Diego Comin of Harvard Business School, one of the authors.”

Global Internet Policy Observatory (GIPO)


European Commission Press Release: “The Commission today unveiled plans for the Global Internet Policy Observatory (GIPO), an online platform to improve knowledge of and participation of all stakeholders across the world in debates and decisions on Internet policies. GIPO will be developed by the Commission and a core alliance of countries and Non Governmental Organisations involved in Internet governance. Brazil, the African Union, Switzerland, the Association for Progressive Communication, Diplo Foundation and the Internet Society have agreed to cooperate or have expressed their interest to be involved in the project.
The Global Internet Policy Observatory will act as a clearinghouse for monitoring Internet policy, regulatory and technological developments across the world.
It will:

  • automatically monitor Internet-related policy developments at the global level, making full use of “big data” technologies;
  • identify links between different fora and discussions, with the objective to overcome “policy silos”;
  • help contextualise information, for example by collecting existing academic information on a specific topic, highlighting the historical and current position of the main actors on a particular issue, identifying the interests of different actors in various policy fields;
  • identify policy trends, via quantitative and qualitative methods such as semantic and sentiment analysis;
  • provide easy-to-use briefings and reports by incorporating modern visualisation techniques;”
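The quantitative trend-spotting mentioned above can be as simple as counting topic mentions across a stream of documents. The topics and logic below are invented for illustration; nothing here reflects GIPO’s actual design.

```python
from collections import Counter

# Hypothetical watchlist of Internet-policy topics.
TOPICS = ["net neutrality", "data protection", "surveillance"]

def topic_trends(documents):
    """Count topic mentions across a batch of policy documents."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for topic in TOPICS:
            counts[topic] += text.count(topic)
    return counts
```

Running this over successive batches (say, one per week) and comparing the counts is the crudest possible version of the “identify policy trends” bullet; real systems layer semantic and sentiment analysis on top.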