Mapping the Age of Every Building in Manhattan


Kriston Capps at CityLab: “The Harlem Renaissance was the epicenter of new movements in dance, poetry, painting, and literature, and its impact still registers in all those art forms. If you want to trace the Harlem Renaissance, though, best look to Harlem itself.
Many if not most of the buildings in Harlem today rose between 1900 and 1940—and a new mapping tool called Urban Layers reveals exactly where and when. Harlem boasts very few of the oldest buildings in Manhattan today, but it does represent the island’s densest concentration of buildings constructed during the Great Migration.
Thanks to Morphocode’s Urban Layers, it’s possible to locate nearly every 19th-century building still standing in Manhattan today. That’s just one of the things you can isolate with the map, which combines two New York City building datasets (PLUTO and Building Footprints) with Mapbox GL JS vector technology to generate an interactive architectural history.
So, looking specifically at Harlem again (with some of the Upper West Side thrown in for good measure), it’s easy to see that very few of the buildings that went up between 1765 and 1860 still stand today….”

Ebola and big data: Call for help


The Economist: “WITH at least 4,500 people dead, public-health authorities in west Africa and worldwide are struggling to contain Ebola. Borders have been closed, air passengers screened, schools suspended. But a promising tool for epidemiologists lies unused: mobile-phone data.
When people make mobile-phone calls, the network generates a call data record (CDR) containing such information as the phone numbers of the caller and receiver, the time of the call and the tower that handled it—which gives a rough indication of the device’s location. This information provides researchers with an insight into mobility patterns. Indeed phone companies use these data to decide where to build base stations and thus improve their networks, and city planners use them to identify places to extend public transport.
But perhaps the most exciting use of CDRs is in the field of epidemiology. Until recently the standard way to model the spread of a disease relied on extrapolating trends from census data and surveys. CDRs, by contrast, are empirical, immediate and updated in real time. You do not have to guess where people will flee to or move. Researchers have used them to map malaria outbreaks in Kenya and Namibia and to monitor the public response to government health warnings during Mexico’s swine-flu epidemic in 2009. Models of population movements during a cholera outbreak in Haiti following the earthquake in 2010 used CDRs and provided the best estimates of where aid was most needed.
Doing the same with Ebola would be hard: in west Africa most people do not own a phone. But CDRs are nevertheless better than simulations based on stale, unreliable statistics. If researchers could track population flows from an area where an outbreak had occurred, they could see where it would be likeliest to break out next—and therefore where they should deploy their limited resources. Yet despite months of talks, and the efforts of the mobile-network operators’ trade association and several smaller UN agencies, telecoms firms have not let researchers use the data (see article).
One excuse is privacy, which is certainly a legitimate worry, particularly in countries fresh from civil war, or where tribal tensions exist. But the phone data can be anonymised and aggregated in a way that alleviates these concerns. A bigger problem is institutional inertia. Big data is a new field. The people who grasp the benefits of examining mobile-phone usage tend to be young, and lack the clout to free them for research use.”
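The anonymise-and-aggregate approach the piece describes can be sketched roughly as follows. The record fields, salt, and tower names here are illustrative assumptions, not any operator’s actual CDR format:

```python
import hashlib

def anonymize(phone_number, salt="rotate-this-salt-regularly"):
    """Replace a phone number with a salted one-way hash."""
    return hashlib.sha256((salt + phone_number).encode()).hexdigest()[:16]

def tower_counts(cdrs):
    """Aggregate CDRs into per-tower counts of distinct subscribers.

    Publishing counts per tower, rather than individual traces, is the
    kind of aggregation that keeps the data useful for mobility models
    while easing the privacy worries the article raises.
    """
    seen = {}
    for cdr in cdrs:
        seen.setdefault(cdr["tower"], set()).add(anonymize(cdr["caller"]))
    return {tower: len(subscribers) for tower, subscribers in seen.items()}

# Hypothetical records: caller, receiver, call time, handling tower.
sample = [
    {"caller": "+231-555-0001", "receiver": "+231-555-0002",
     "time": "2014-10-20T09:12", "tower": "TOWER-A"},
    {"caller": "+231-555-0001", "receiver": "+231-555-0009",
     "time": "2014-10-20T18:40", "tower": "TOWER-A"},
    {"caller": "+231-555-0007", "receiver": "+231-555-0001",
     "time": "2014-10-20T11:05", "tower": "TOWER-B"},
]
print(tower_counts(sample))
```

Comparing such per-tower counts day over day is what would give researchers the population-flow estimates they need to anticipate where an outbreak might appear next.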

Ebola’s Information Paradox


Steven Johnson at The New York Times: “…The story of the Broad Street outbreak is perhaps the most famous case study in public health and epidemiology, in large part because it led to the revolutionary insight that cholera was a waterborne disease, not airborne as most believed at the time. But there is another element of the Broad Street outbreak that warrants attention today, as popular anxiety about Ebola surges across the airwaves and subways and living rooms of the United States: not the spread of the disease itself, but the spread of information about the disease.

It was a full seven days after Baby Lewis became ill, and four days after the Soho residents began dying in mass numbers, before the outbreak warranted the slightest mention in the London papers, a few short lines indicating that seven people had died in the neighborhood. (The report understated the growing death toll by an order of magnitude.) It took two entire weeks before the press began treating the outbreak as a major news event for the city.

Within Soho, the information channels were equally unreliable. Rumors spread throughout the neighborhood that the entire city had succumbed at the same casualty rate, and that London was facing a catastrophe on the scale of the Great Fire of 1666. But this proved to be nothing more than rumor. Because the Soho crisis had originated with a single-point source — the poisoned well — its range was limited compared with its intensity. If you lived near the Broad Street well, you were in grave danger. If you didn’t, you were likely to be unaffected.

Compare this pattern of information flow to the way news spreads now. On Thursday, Craig Spencer, a New York doctor, was given a diagnosis of Ebola after presenting with a high fever, and the entire world learned of the test result within hours of the patient himself learning it. News spread with similar velocity several weeks ago with the Dallas Ebola victim, Thomas Duncan. In a sense, it took news of the cholera outbreak a week to travel the 20 blocks from Soho to Fleet Street in 1854; today, the news travels at nearly the speed of light, as data traverses fiber-optic cables. Thanks to that technology, the news channels have been on permanent Ebola watch for weeks now, despite the fact that, as the joke went on Twitter, more Americans have been married to Kim Kardashian than have died in the United States from Ebola.

As societies and technologies evolve, so do the velocities at which disease and information spread. The tremendous population density of London in the 19th century enabled the cholera bacterium to spread through a neighborhood with terrifying speed, while the information about that terror moved more slowly. This was good news for the mental well-being of England’s wider population, which was spared the anxiety of following the death count as if it were a stock ticker. But it was terrible from a public health standpoint; the epidemic had largely faded before the official institutions of public health even realized the magnitude of the outbreak….

Information travels faster than viruses do now. This is why we are afraid. But this is also why we are safe.”

From the smart city to the wise city: The role of universities in place-based leadership


Paper by Hambleton, R.: “For a variety of reasons the notion of the smart city has grown in popularity and some even claim that all cities now have to be ‘smart’. For example, some digital enthusiasts argue that advances in Information and Communication Technologies (ICT) are ushering in a new era in which pervasive electronic connections will inevitably lead to significant changes that make cities more liveable and more democratic. This paper will cast a critical eye over these claims. It will unpack the smart city rhetoric and show that, in fact, three competing perspectives are struggling for ascendancy within the smart cities discourse: 1) The digital city (emphasising a strong commitment to the use of ICT in governance), 2) The green city (reflecting the growing use of the US phrase smart growth, which is concerned to apply sound urban planning principles), and 3) The learning city (emphasising the way in which cities learn, network and innovate). Five digital danger zones will be identified and discussed. This analysis will suggest that scholars and policy makers who wish to improve the quality of life in cities should focus their attention on wisdom, not smartness. Civic leaders need to exercise judgement based on values if they are to create inclusive, sustainable cities. It is not enough to be clever, quick, ingenious, nor will it help if Big Data is superseded by Even Bigger Data. Universities can play a much more active role in place-based leadership in the cities where they are located. To do this effectively they need to reconsider the nature of modern scholarship. The paper will show how a growing number of universities are doing precisely this. Two respected examples will be presented to show how urban universities, if they are committed to engaged scholarship, can make a significant contribution to the creation of the wise city.”

8 ideas for the future of cities


TED: “In 2012, the TED Prize was awarded to an idea: The City2.0, a place to celebrate actions taken by citizens around the world to make their cities more livable, beautiful and sustainable. This week, The City2.0 website evolves. On the relaunched TEDCity2.org, you’ll find great talks on topics like housing, education and food, and how they relate to life in the bustling metropolis. You’ll find video explorations of 10 award-winning local projects that received funding through this TED Prize wish, and resources for those hoping to spark change in their own cities. The site will also be the home of all future TEDCity2.0 projects. In other words, it’s an online haven for everyone who wants to create the city of the future.
Below, a sampling of the great ideas you’ll find on TEDCity2.org. Enjoy, as most of these have never been seen on TED.com before….”

Quantifying the Livable City


Brian Libby at City Lab: “By the time Constantine Kontokosta got involved with New York City’s Hudson Yards development, it was already on track to be historically big and ambitious.
 
Over the course of the next decade, developers from New York’s Related Companies and Canada-based Oxford Properties Group are building the largest real-estate development in United States history: a 28-acre neighborhood on Manhattan’s far West Side over a Long Island Rail Road yard, with some 17 million square feet of new commercial, residential, and retail space.
Hudson Yards is also being planned as an innovative model of efficiency. Its waste management systems, for example, will utilize a vast vacuum-tube system to collect garbage from each building into a central terminal, meaning no loud garbage trucks traversing the streets by night. Onsite power generation will prevent blackouts like those during Hurricane Sandy, and buildings will be connected through a micro-grid that allows them to share power with each other.
Yet it was Kontokosta, the deputy director of academics at New York University’s Center for Urban Science and Progress (CUSP), who conceived of Hudson Yards as what is now being called the nation’s first “quantified community.” This entails an unprecedentedly wide array of data being collected—not just on energy and water consumption, but real-time greenhouse gas emissions and airborne pollutants, measured with tools like hyper-spectral imagery.

New York has led the way in recent years with its urban data collection. In 2009, Mayor Michael Bloomberg signed Local Law 84, which requires privately owned buildings over 50,000 square feet in size to provide annual benchmark reports on their energy and water use. Unlike a LEED rating or similar, which declares a building green when it opens, the city benchmarking is a continuous assessment of its operations…”
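As a rough illustration of what a benchmark report enables, the headline metric is site energy use intensity (EUI, in kBtu per square foot per year), which can be tracked year over year. The field names below are assumptions for illustration, not the actual Local Law 84 schema:

```python
def energy_use_intensity(report):
    """Site EUI in kBtu per square foot, the headline benchmarking metric.

    Field names here are illustrative; Local Law 84 disclosures include
    annual energy use and gross floor area for each covered building.
    """
    return report["site_energy_kbtu"] / report["gross_floor_area_sqft"]

reports = [
    {"building_id": "A", "year": 2013, "site_energy_kbtu": 6_500_000,
     "gross_floor_area_sqft": 65_000},
    {"building_id": "A", "year": 2014, "site_energy_kbtu": 6_100_000,
     "gross_floor_area_sqft": 65_000},
]

# Year-over-year comparison is what makes benchmarking a continuous
# assessment rather than a one-off rating at opening.
change = energy_use_intensity(reports[1]) - energy_use_intensity(reports[0])
print(round(change, 2))
```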

Chicago uses big data to save itself from urban ills


Aviva Rutkin in the New Scientist: “THIS year in Chicago, some kids will get lead poisoning from the paint or pipes in their homes. Some restaurants will cook food in unsanitary conditions and, here and there, a street corner will be suddenly overrun with rats. These kinds of dangers are hard to avoid in a city of more than 2.5 million people. The problem is, no one knows for certain where or when they will pop up.

The Chicago city government is hoping to change that by knitting powerful predictive models into its everyday city inspections. Its latest project, currently in pilot tests, analyses factors such as home inspection records and census data, and uses the results to guess which buildings are likely to cause lead poisoning in children – a problem that affects around 500,000 children in the US each year. The idea is to identify trouble spots before kids are exposed to dangerous lead levels.

“We are able to prevent problems instead of just respond to them,” says Jay Bhatt, chief innovation officer at the Chicago Department of Public Health. “These models are just the beginning of the use of predictive analytics in public health and we are excited to be at the forefront of these efforts.”
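A toy version of such a risk model might look like the sketch below. The weights and factor names are invented for illustration; Chicago’s actual model is a statistical one trained on historical inspection outcomes:

```python
def lead_risk_score(building):
    """Invented risk score built from the kinds of factors the article names."""
    score = 0.0
    if building["year_built"] < 1978:  # residential lead paint was banned in the US in 1978
        score += 0.5
    score += 0.1 * building["past_violations"]
    score += 0.3 * building["pct_children_under6"]  # census-tract share, 0..1
    return score

stock = [
    {"address": "100 W Example Ave", "year_built": 1925,
     "past_violations": 3, "pct_children_under6": 0.12},
    {"address": "200 N Sample St", "year_built": 1995,
     "past_violations": 0, "pct_children_under6": 0.08},
]

# Inspect the highest-risk buildings first: prevention rather than response.
ranked = sorted(stock, key=lead_risk_score, reverse=True)
print([b["address"] for b in ranked])
```

Ranking the building stock this way is what turns piles of inspection and census records into an inspection schedule that reaches trouble spots before children are exposed.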

Chicago’s projects are based on the thinking that cities already have what they need to raise their municipal IQ: piles and piles of data. In 2012, city officials built WindyGrid, a platform that collected data like historical facts about buildings and up-to-date streams such as bus locations, tweets and 911 calls. The project was designed as a proof of concept and was never released publicly but it led to another, called Plenario, that allowed the public to access the data via an online portal.

The experience of building those tools has led to more practical applications. For example, one tool matches calls to the city’s municipal hotline complaining about rats with conditions that draw rats to a particular area, such as excessive moisture from a leaking pipe, or with an increase in complaints about garbage. This allows officials to proactively deploy sanitation crews to potential hotspots. It seems to be working: last year, resident requests for rodent control dropped by 15 per cent.
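A minimal sketch of that kind of matching follows, using invented area identifiers and thresholds; the real tool draws on much richer 311 and infrastructure datasets:

```python
from collections import Counter

def rat_hotspots(rat_calls, garbage_calls, leak_reports, threshold=2):
    """Flag areas where rat complaints coincide with conditions that
    attract rats (here, illustratively: garbage complaints and leaks).

    Each input is a list of area identifiers, one entry per 311 call
    or report.
    """
    rats = Counter(rat_calls)
    attractants = Counter(garbage_calls) + Counter(leak_reports)
    return sorted(area for area in rats
                  if rats[area] >= threshold and attractants[area] >= threshold)

hot = rat_hotspots(
    rat_calls=["ward-5", "ward-5", "ward-9"],
    garbage_calls=["ward-5", "ward-5", "ward-9"],
    leak_reports=["ward-5"],
)
print(hot)
```

Areas flagged by both signals are where sanitation crews would be sent proactively, rather than waiting for the infestation complaints to pile up.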

Some predictions are trickier to get right. Charlie Catlett, director of the Urban Center for Computation and Data in Chicago, is investigating an old axiom among city cops: that violent crime tends to spike when there’s a sudden jump in temperature. But he’s finding it difficult to test its validity in the absence of a plausible theory for why it might be the case. “For a lot of things about cities, we don’t have that underlying theory that tells us why cities work the way they do,” says Catlett.

Still, predictive modelling is maturing, as other cities succeed in using it to tackle urban ills….Such efforts can be a boon for cities, making them more productive, efficient and safe, says Rob Kitchin of Maynooth University in Ireland, who helped launch a real-time data site for Dublin last month called the Dublin Dashboard. But he cautions that there’s a limit to how far these systems can aid us. Knowing that a particular street corner is likely to be overrun with rats tomorrow doesn’t address what caused the infestation in the first place. “You might be able to create a sticking plaster or be able to manage it more efficiently, but you’re not going to be able to solve the deep structural problems….”

The Role Of Open Data In Choosing Neighborhood


PlaceILive Blog: “To what extent is it important to get familiar with our environment?
Considering how much the world around us changes over the years, it is hardly surprising that, while walking to work, we might encounter new shops, restaurants, or gas stations we had never noticed before. Likewise, how many times have we wandered for hours just to find a green space for a run, only to discover that the one we found was even more polluted than other parts of the city?
Citizens are not always properly informed about how the places they live in are evolving. That is why it is crucial for people to have constant access to accurate, up-to-date information about the neighborhood they have chosen, or are about to choose.
London offers clear evidence that transparency in providing data is fundamental to succeeding as a Smart City.
The GLA’s London Datastore, for instance, is a public platform of datasets revealing updated figures on the main services offered by the town, in addition to population’s lifestyle and environmental risks. These data are then made more easily accessible to the community through the London Dashboard.
The value of freely available information is also demonstrated by the integration of maps, which are an efficient means of geolocation. A map on which it is easy to find all the services you need nearby can make a real difference when searching for a place to live.
(Image source: Smart London Plan)
The Open Data Index, published by The Open Knowledge Foundation in 2013, is another useful tool for data retrieval: it showcases a rank of different countries in the world with scores based on openness and availability of data attributes such as transport timetables and national statistics.
Here it is possible to check UK Open Data Census and US City Open Data Census.
As noted, making open data available and easily findable online has not only been a success for US cities but has also benefited app makers and civic hackers. Lauren Reid, a spokesperson at Code for America, told Government Technology: “The more data we have, the better picture we have of the open data landscape.”
That, on the whole, is where Place I Live puts its greatest effort: fostering a new awareness of the environment by providing free information, in order to help citizens choose the best place to live.
The result is straightforward. The website’s homepage invites visitors to type in an address of interest and displays an overview of evaluated neighborhood parameters, along with a Life Quality Index calculated for every point on the map.
Finding the nearest medical institutions, schools, or ATMs thus becomes immediate and clear, as does surveying general information about the community. Moreover, the reliability and accessibility of the data are continually reviewed by a strong team of professionals skilled in data analysis, mapping, IT architecture, and global markets.
For the moment the company’s work focuses on London, Berlin, Chicago, San Francisco, and New York, with the longer-term goal of covering more than 200 cities.
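PlaceILive does not publish its formula, but a Life Quality Index of this kind can be imagined as a weighted average of normalised neighborhood parameters. The parameter names and weights below are pure assumptions for illustration:

```python
def life_quality_index(point, weights=None):
    """Hypothetical Life Quality Index: a weighted average of normalised
    neighborhood parameters (each in 0..1), presented on a 0..100 scale.
    """
    weights = weights or {"safety": 0.3, "transport": 0.25,
                          "health": 0.25, "environment": 0.2}
    score = sum(weights[k] * point[k] for k in weights)
    return round(100 * score)

spot = {"safety": 0.8, "transport": 0.9, "health": 0.6, "environment": 0.5}
print(life_quality_index(spot))
```

Computing such a score for every point on the map is what allows two addresses a few blocks apart to be compared at a glance.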
In the US City Open Data Census, San Francisco achieved the highest score, proof of the city’s work in putting technological expertise at everyone’s disposal and in meeting users’ needs through meticulous selection of datasets. Building on this, San Francisco is partnering with the University of Chicago on a data analytics dashboard of sustainability performance statistics named the Sustainable Systems Framework, expected to be released in beta by the end of the first quarter of 2015.
 
Another remarkable contribution to the spread of Open Data comes from the Bartlett Centre for Advanced Spatial Analysis (CASA) at University College London (UCL); Oliver O’Brien, a researcher in the UCL Department of Geography and a software developer at CASA, is one of the contributors to this cause.
Among his projects, an interesting accomplishment is London’s CityDashboard, a real-time control panel of spatial data reports. The web page also lets users visualize the data on a simplified map and view dashboards for other UK cities.
In addition, his Bike Share Map offers a live global view of bicycle-sharing systems in over a hundred cities around the world; bike sharing has recently drawn greater public attention as a distinctive form of transportation, above all in Europe and China….”

3D printed maps could help the blind navigate their city


Springwise: “Modern technology has turned many of the things we consume from physical objects into pixels on a screen. While this has benefited the majority of us, those with sight difficulties don’t get along well with visual stimuli or touchscreen devices. In the past, we’ve seen Yahoo! Japan develop Hands On Search, a project that lets blind kids carry out web searches with 3D printed results. Now the country’s governmental department GSI is creating software that will enable those with visual impairments to print out 3D versions of online maps.
The official mapping body for Japan — much like the US Geological Survey — GSI already has paper maps for the blind, using embossed surfaces to mark out roads. It’s now developing a program that is able to do the same thing for digital maps.
The software first differentiates the highways, railway lines and walkways from the rest of the landscape. It then creates a 3D relief model that uses different textures to distinguish the features so that anyone running their finger along them will be able to determine what it is. The program also takes into account contour lines, creating accurate topographical representations of a particular area….
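The classify-then-texture step the article describes might be sketched as follows; the feature classes, heights, and texture names are invented for illustration and are not GSI’s actual encoding:

```python
# Assign each classified map feature a relief height and tactile texture,
# roughly as the article describes. Values here are illustrative only.
RELIEF = {
    "highway": {"height_mm": 1.2, "texture": "smooth ridge"},
    "railway": {"height_mm": 1.0, "texture": "cross-hatched"},
    "walkway": {"height_mm": 0.6, "texture": "dotted"},
    "contour": {"height_mm": 0.3, "texture": "thin line"},
}

def relief_layer(features):
    """Turn classified map features into printable relief segments."""
    return [{"id": f["id"], **RELIEF[f["class"]]}
            for f in features if f["class"] in RELIEF]

segments = relief_layer([
    {"id": "r1", "class": "highway"},
    {"id": "t1", "class": "railway"},
    {"id": "x1", "class": "building"},  # not rendered in this sketch
])
print([s["id"] for s in segments])
```

Giving each feature class a distinct texture is what lets a reader running a finger along the print tell a highway from a railway without sight.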
Website: www.gsi.go.jp

The Data Manifesto


Development Initiatives: “Staging a Data Revolution

Accessible, useable, timely and complete data is core to sustainable development and social progress. Access to information provides people with a base to make better choices and have more control over their lives. Too often attempts to deliver sustainable economic, social and environmental results are hindered by the failure to get the right information, in the right format, to the right people, at the right time. Worse still, the most acute data deficits often affect the people and countries facing the most acute problems.

The Data Revolution should be about data grounded in real life. Data and information that gets to the people who need it at national and sub-national levels to help with the decisions they face – hospital directors, school managers, city councillors, parliamentarians. Data that goes beyond averages – that is disaggregated to show the different impacts of decisions, policies and investments on gender, social groups and people living in different places and over time.

We need a Data Revolution that sets a new political agenda, that puts existing data to work, that improves the way data is gathered and ensures that information can be used. To deliver this vision, we need the following steps.


12 steps to a Data Revolution

1. Implement a national ‘Data Pledge’ to citizens that is supported by governments, private and non-governmental sectors
2. Address real world questions with joined up and disaggregated data
3. Empower and up-skill data users of the future through education
4. Examine existing frameworks and publish existing data
5. Build an information bank of data assets
6. Allocate funding available for better data according to national and sub-national priorities
7. Strengthen national statistical systems’ capacity to collect data
8. Implement a policy that data is ‘open by default’
9. Improve data quality by subjecting it to public scrutiny
10. Put information users’ needs first
11. Recognise technology cannot solve all barriers to information
12. Invest in infomediaries’ capacity to translate data into information that policymakers, civil society and the media can actually use…”