Three Things Great Data Storytellers Do Differently


Jake Porway at Stanford Social Innovation Review: “…At DataKind, we use data science and algorithms in the service of humanity, and we believe that communicating about our work using data for social impact is just as important as the work itself. There’s nothing worse than findings gathering dust in an unread report.

We also believe our projects should always start with a question. It’s clear from the questions above and others that the art of data storytelling needs some demystifying. But rather than answering each question individually, I’d like to pose a broader question that can help us get at some of the essentials: What do great data storytellers do differently and what can we learn from them?

1. They answer the most important question: So what?

Knowing how to compel your audience with data is more of an art than a science. Most people still have negative associations with numbers and statistics—unpleasant memories of boring math classes, intimidating technical concepts, or dry accounting. That’s a shame, because the message behind the numbers can be so enriching and enlightening.

The solution? Help your audience understand the “so what,” not the numbers. Ask: Why should someone care about your findings? How does this information impact them? My strong opinion is that most people actually don’t want to look at data. They need to trust that your methods are sound and that you’re reasoning from data, but ultimately they just want to know what it all means for them and what they should do next.

A great example of going straight to the “so what” is this beautiful, interactive visualization by Periscopic about gun deaths. It uses data sparingly but still evokes a very clear anti-gun message….

2. They inspire us to ask more questions.

The best data visualization helps people investigate a topic further, instead of drawing a conclusion for them or persuading them to believe something new.

For example, the nonprofit DC Action for Children was interested in leveraging publicly available data from government agencies and the US Census, as well as DC Action for Children’s own databases, to help policymakers, parents, and community members understand the conditions influencing child well-being in Washington, DC. We helped create a tool that could bring together data in a multitude of forms, and present it in a way that allowed people to delve into the topic themselves and uncover surprising truths, such as the fact that one out of every three kids in DC lives in a neighborhood without a grocery store….

3. They use rigorous analysis instead of just putting numbers on a page.

Data visualization isn’t an end goal; it’s a process. It’s often the final step in a long manufacturing chain, along which we poke, prod, and mold data to create that pretty graph.

Years ago, the New York City Department of Parks & Recreation (NYC Parks) approached us—armed with data about every single tree in the city, including when it was planted and how it was pruned—and wanted to know: Does pruning trees in one year reduce the number of hazardous tree conditions in the following year? This is one of the first things our volunteer data scientists came up with:

Visualization of NYC Parks data showing tree density in New York City.

This is a visualization of tree density in New York—and it was met with oohs and aahs. It was interactive! You could see where different types of trees lived! It was engaging! But another finding that came out of this work arguably had a greater impact. Brian D’Alessandro, one of our volunteer data scientists, used statistical modeling to help NYC Parks calculate a number: 22 percent. It turns out that if you prune trees in New York, there are 22 percent fewer emergencies on those blocks than on the blocks where you didn’t prune. This number is helping the city become more effective by understanding how to best allocate its resources, and now other urban forestry programs are asking New York how they can do the same thing. There was no sexy visualization, no interactivity—just a rigorous statistical model of the world that’s shaping how cities protect their citizens….(More)”
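The before/after comparison behind a number like that can be sketched in a few lines. The counts below are invented so that the arithmetic lands near the reported figure; they are not NYC Parks data, and the actual analysis involved a full statistical model rather than a raw rate comparison:

```python
# Hedged sketch: compare the per-block emergency rate on pruned
# vs. unpruned blocks. All figures below are illustrative.

def emergency_reduction(pruned_emergencies, pruned_blocks,
                        unpruned_emergencies, unpruned_blocks):
    """Percent reduction in the per-block emergency rate after pruning."""
    pruned_rate = pruned_emergencies / pruned_blocks
    unpruned_rate = unpruned_emergencies / unpruned_blocks
    return 100 * (1 - pruned_rate / unpruned_rate)

# Invented counts chosen so the reduction comes out near the
# 22 percent figure reported in the article.
reduction = emergency_reduction(390, 1000, 500, 1000)
print(f"{reduction:.0f}% fewer emergencies on pruned blocks")
```

A real analysis would also control for neighborhood, tree species, and year-to-year weather before attributing the difference to pruning.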

Your City Needs a Local Data Intermediary Now


Matt Lawyue and Kathryn Pettit at Next City: “Imagine if every community nationwide had access to their own data — data on which children are missing too many days of school, which neighborhoods are becoming unaffordable, or where more mothers are getting better access to prenatal care.

This is a reality in some areas, where neighborhood data is analyzed to evaluate community health and to promote development. Cleveland is studying cases of lead poisoning and the impact on school readiness and educational outcomes for children. Detroit is tracking the extent of property blight and abandonment.

But good data doesn’t just happen.

These activities are possible because of local intermediaries, groups that bridge the gap between data and local stakeholders: nonprofits, government agencies, foundations and residents. These groups access data that are often confidential and indecipherable to the public and make them accessible and useful. And with the support of the National Neighborhood Indicators Partnership (NNIP), groups around the country are championing community development at the local level.

Without a local data intermediary in Baltimore, we might know less about what happened there last year and why.

Freddie Gray’s death prompted intense discussion about police brutality and discrimination against African-Americans. But the Baltimore Neighborhood Indicators Alliance (BNIA) helped root this incident and others like it within a particular place, highlighting what can happen when disadvantage is allowed to accumulate over decades.

BNIA, an NNIP member, was formed in 2000 to help community organizations use data shared by government agencies. By the time of Gray’s death, BNIA had 15 years of data across more than 150 indicators that demonstrated clear socioeconomic disadvantages for residents of Gray’s neighborhood, Sandtown-Winchester: a 34 percent housing vacancy rate, 23 percent unemployment, and no highway access combined with poor public transit service, leaving residents cut off from jobs and services.

With BNIA’s help, national and local media outlets, including the New York Times, MSNBC and the Baltimore Sun, portrayed a community beset by concentrated poverty, while other Baltimore neighborhoods benefited from economic investment and rising incomes. BNIA data, which is updated yearly, has also been used to develop policy ideas to revitalize the neighborhood, from increasing the use of housing choice vouchers to tackling unemployment.

Local data intermediaries like BNIA harness neighborhood data to make underserved people and unresolved issues visible. They work with government agencies to access raw data (e.g., crime reports, property records, and vital statistics) and facilitate their use to improve quality of life for residents.

But it’s not easy. Uncovering useful, actionable information requires trust, technical expertise, knowledge of the local context and coordination among multiple stakeholders.

This is why the NNIP is vital. NNIP is a peer network of more than two dozen local data intermediaries and the Urban Institute, working to democratize data by building local capacity and planning joint activities. Before NNIP’s founding partners built the first such systems, there were no advanced information systems documenting and tracking neighborhood indicators. Since 1996, NNIP has been a platform for sharing best practices, providing technical assistance, managing cross-site projects and analysis, and expanding the outreach of local data intermediaries to national networks and federal agencies. The partnership continues to grow. In order to foster this capacity in more places, NNIP has just released a guide for local communities to start a data intermediary….(More)”

Soon Your City Will Know Everything About You


Currently, the biggest users of these sensor arrays are in cities, where city governments use them to collect large amounts of policy-relevant data. In Los Angeles, the crowdsourced traffic and navigation app Waze collects data that helps residents navigate the city’s choked highway networks. In Chicago, an ambitious program makes public data available to startups eager to build apps for residents. The city’s 49th ward has been experimenting with participatory budgeting and online voting to take the pulse of the community on policy issues. Chicago has also been developing the “Array of Things,” a network of sensors that track, among other things, the urban conditions that affect bronchitis.

Edmonton uses the cloud to track the condition of playground equipment. And a growing number of countries have purpose-built smart cities, like South Korea’s high-tech utopian city of Songdo, where pervasive sensor networks and ubiquitous computing generate immense amounts of civic data for public services.

The drive for smart cities isn’t restricted to the developed world. Rio de Janeiro coordinates the information flows of 30 different city agencies. In Beijing and Da Nang (Vietnam), mobile phone data is actively tracked in the name of real-time traffic management. Urban sensor networks, in other words, are also developing in countries with few legal protections governing the usage of data.

These services are promising and useful. But you don’t have to look far to see why the Internet of Things has serious privacy implications. Public data is used for “predictive policing” in at least 75 cities across the U.S., including New York City, where critics maintain that using social media or traffic data to help officers evaluate probable cause is a form of digital stop-and-frisk. In Los Angeles, the security firm Palantir scoops up publicly generated data on car movements, merges it with license plate information collected by the city’s traffic cameras, and sells analytics back to the city so that police officers can decide whether or not to search a car. In Chicago, concern is growing about discriminatory profiling because so much information is collected and managed by the police department — an agency with a poor reputation for handling data in consistent and sensitive ways. In 2015, video surveillance of the police shooting Laquan McDonald outside a Burger King was erased by a police employee who ironically did not know his activities were being digitally recorded by cameras inside the restaurant.

Since most national governments have bungled privacy policy, cities — which have a reputation for being better with administrative innovations — will need to fill this gap. A few countries, such as Canada and the U.K., have independent “privacy commissioners” who are responsible for advocating for the public when bureaucracies must decide how to use or give out data. It is pretty clear that cities need such advocates too.

What would Urban Privacy Commissioners do? They would teach the public — and other government staff — about how policy algorithms work. They would evaluate the political context in which city agencies make big data investments. They would help a city negotiate contracts that protect residents’ privacy while providing effective analysis to policy makers and ensuring that open data is consistently serving the public good….(more)”.

While governments talk about smart cities, it’s citizens who create them


Carlo Ratti at the Conversation: “The Australian government recently released an ambitious Smart Cities Plan, which suggests that cities should be first and foremost for people:

If our cities are to continue to meet their residents’ needs, it is essential for people to engage and participate in planning and policy decisions that have an impact on their lives.

Such statements are a good starting point – and should probably become central to Australia’s implementation efforts. A lot of knowledge has been collected over the past decade from successful and failed smart cities experiments all over the world; reflecting on them could provide useful information for the Australian government as it launches its national plan.

What is a smart city?

But, before embarking on such a review, it would help to start from a definition of “smart city”.

The term has been used and abused in recent years, so much so that today it has lost meaning. It is often used to encompass disparate applications: we hear people talk and write about “smart city” when they refer to anything from citizen engagement to Zipcar, from open data to Airbnb, from smart biking to broadband.

Where to start with a definition? It is a truism to say the internet has transformed our lives over the past 20 years. Everything in the way we work, meet, mate and so on is very different today than it was just a few decades ago, thanks to a network of connectivity that now encompasses most people on the planet.

In a similar way, we are today at the beginning of a new technological revolution: the internet is entering physical space – the very space of our cities – and is becoming the Internet of Things; it is opening the door to a new world of applications that, as with the first wave of the internet, can incorporate many domains….

What should governments do?

In the above technological context, what should governments do? Over the past few years, the first wave of smart city applications followed technological excitement.

For instance, some of Korea’s early experiments such as Songdo City were engineered by the likes of Cisco, with technology deployment assisted by top-down policy directives.

In a similar way, in 2010, Rio de Janeiro launched the Integrated Centre of Command and Control, engineered by IBM. It’s a large control room for the city, which collects real-time information from cameras and myriad sensors suffused in the urban fabric.

Such approaches revealed many shortcomings, most notably the lack of civic engagement. It is as if they thought of the city simply as a “computer in open air”. These approaches led to several backlashes in the research and academic community.

A more interesting lesson can come from the US, where the focus is more on developing a rich Internet of Things innovation ecosystem. There are many initiatives fostering spaces – digital and physical – for people to come together and collaborate on urban and civic innovations….

That isn’t to say that governments should take a completely hands-off approach to urban development. Governments certainly have an important role to play. This includes supporting academic research and promoting applications in fields that might be less appealing to venture capital – unglamorous but nonetheless crucial domains such as municipal waste or water services.

The public sector can also promote the use of open platforms and standards in such projects, which would speed up adoption in cities worldwide.

Still, the overarching goal should always be to focus on citizens. They are in the best position to determine how to transform their cities and to make decisions that will have – as the Australian Smart Cities Plan puts it – “an impact on their lives”….(more)”

Is civic technology the killer app for democracy?


at TechCrunch: “Smartphone apps have improved convenience for public transportation in many urban centers. In Washington, DC, riders can download apps to help them figure out where to go, when to show up and how long to wait for a bus or train. However, the problem with public transport in DC is not the lack of modern, helpful and timely information. The problem is that the Metro subway system is on fire.

Critical infrastructure refers to the vital systems that connect us. Like the water catastrophe in Flint, Michigan, and our crumbling roads, bridges and airports, the Metro system in DC is experiencing a systems failure. The Metro’s problems arise from typical public challenges like poor management and deferred maintenance.

Upgrades of physical infrastructure are not easy and nimble like a software patch or an agile design process. They are slow, expensive and subject to deliberation and scrutiny. In other words, they are the fundamental substance of democratic decision-making: big decisions with long-term implications that require thoughtful strategy, significant investment, political leadership and public buy-in.

A killer app is an application you love so much you buy into a whole new way of doing things. Email and social media are good examples of killer apps. The killer app for Metro would have to get political leaders to look beyond their narrow, short-term interests and be willing to invest in modern public transportation for our national capital region.

The same is true for fixing our critical infrastructure throughout the nation. The killer apps for the systems on which we rely daily won’t be technical, they will be human. It will be Americans working together to build a technology-enabled resilient democracy — one that is inclusive, responsive and successful in the Information Age.

In 2007, the I-35 bridge in Minneapolis collapsed into the Mississippi River. During his presidential bid, Senator John McCain used this event as an example of the failure of our leaders to make trade-offs for common national purpose. Case in point: an extravagantly expensive congressionally funded Alaskan “bridge to nowhere” that served just a handful of people on an island. But how many apps to nowhere are we building?

In DC, commuters who can afford alternatives will leave Metro. They’ll walk, drive, order a car service or locate a bikeshare. The people who suffer from the public service risk and imbalance of the current Metro system are those who have no choice.

So here’s the challenge: Modern technology needs to create an inclusive society. Our current technical approach too often means that we’re prioritizing progress or profit for the few over the many. This pattern defeats the purpose of both the technology revolution and American democracy. Government and infrastructure are supposed to serve everyone, but technology thus far has made it so that public failures affect some Americans more than others. …

For democracy to succeed in the Information Age, we’ll need some new rules of engagement with technology. The White House recently released its third report on data and its implications for society. The 2016 report pays special attention to the ethics of machine automation and algorithms. The authors stress the importance of ethical analytics and propose the principle of “equal opportunity by design.” It’s an excellent point of departure as we recalibrate old systems and build new bridges to a more resilient, inclusive and prosperous nation….(more)”

How to implement “open innovation” in city government


Victor Mulas at the Worldbank: “City officials are facing increasingly complex challenges. As urbanization rates grow, cities face higher demand for services from a larger and more densely distributed population. On the other hand, rapid changes in the global economy are affecting cities that struggle to adapt to these changes, often resulting in economic depression and population drain.

“Open innovation” is the latest buzz word circulating in forums on how to address the increased volume and complexity of challenges for cities and governments in general.

But, what is open innovation?

Traditionally, public services were designed and implemented by a group of public officials. Open innovation allows us to design these services with multiple actors, including those who stand to benefit from the services, resulting in more targeted and better tailored services, often implemented through partnership with these stakeholders. Open innovation allows cities to be more productive in providing services while addressing increased demand and higher complexity of services to be delivered.

New York, Barcelona, Amsterdam and many other cities have been experimenting with this concept, introducing challenges for entrepreneurs to address common problems or inviting stakeholders to co-create new services. Open innovation has gone from being a “buzzword” to another tool in the city officials’ toolbox.

However, even cities that embrace open innovation are still struggling to implement it beyond a few specific areas.  This is understandable, as introducing open innovation practically requires a new way of doing things for city governments, which tend to be complex and bureaucratic organizations.

Having an engaged mayor is not enough to bring about this kind of transformation. Changing the behavior of city officials requires their buy-in; it can’t be done top-down.

We have been introducing open innovation to cities and governments for the last three years in Chile, Colombia, Egypt and Mozambique. We have addressed specific challenges and iteratively designed and tested a systematic methodology to introduce open innovation in government through both top-down and bottom-up approaches. We have tested this methodology in Colombia (Cali, Barranquilla and Manizales) and Chile (the metropolitan area of Gran Concepción). We have identified “internal champions” (i.e., government officials who advocate the new methodology) and external stakeholders organized in an “innovation hub” that provides long-term sustainability and scalability of interventions. We believe that this methodology is easily applicable beyond cities to other government entities at the regional and national levels. …To show how the methodology works in practice, this report describes the process and its results as applied in the Gran Concepción area of Chile. For this activity, the urban transport sector was selected, and the targets of intervention were the regional and municipal government departments in charge of urban transport in the area of Gran Concepción. The activity in Chile resulted in a threefold impact:

  1. It catalyzed the adoption of the bottom-up smart city model following this new methodology throughout Chile; and
  2. It expanded the implementation and mainstreaming of the methodologies developed and tested through this activity in other World Bank projects.

More information about this activity in Chile can be found in the Smart City Gran Concepcion webpage…(More)”

Open data + increased disclosure = better public-private partnerships


David Bloomgarden and Georg Neumann at Fomin Blog: “The benefits of open and participatory public procurement are increasingly being recognized by international bodies such as the Group of 20 major economies, the Organisation for Economic Co-operation and Development, and multilateral development banks. Value for money, more competition, and better goods and services for citizens all result from increased disclosure of contract data. Greater openness is also an effective tool to fight fraud and corruption.

However, because public-private partnerships (PPPs) are planned over a long timeframe and involve a large number of groups, implementing greater levels of openness in disclosure is complicated. This complexity can be a challenge to good design. Finding a structured and transparent approach to managing PPP contract data is fundamental for a project to be accepted and used by its local community….

In open contracting, all data is disclosed during the public procurement process—from the planning stage, to the bidding and awarding of the contract, to the monitoring of the implementation. A global open source data standard is used to publish that data, which is already being implemented in countries as diverse as Canada, Paraguay, and Ukraine. Using open data throughout the contracting process provides opportunities to innovate in managing bids, fixing problems, and integrating feedback as needed. Open contracting contributes to the overall social and environmental sustainability of infrastructure investments.

In the case of Mexico’s airport, the project publishes details of awarded contracts, including visualizing the flow of funds and detailing the full amounts of awarded contracts and renewable agreements. Standardized, timely, and open data that follow global standards such as the Open Contracting Data Standard will make this information useful for analysis of value for money, cost-benefit, sustainability, and monitoring performance. Crucially, open contracting will shift the focus from the inputs into a PPP, to the outputs: the goods and services being delivered.
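A disclosure along these lines can be sketched as a single structured release record. The field names below loosely follow the Open Contracting Data Standard's release layout, but every value, identifier, and party name is invented for illustration and is not from the Mexico City airport project:

```python
import json

# Minimal sketch of an open-contracting release for an awarded contract.
# Field names loosely follow the Open Contracting Data Standard (OCDS);
# every value and identifier here is invented for illustration.
release = {
    "ocid": "ocds-example-airport-0001",  # hypothetical contracting-process id
    "date": "2016-05-01T00:00:00Z",
    "tag": ["award"],                     # which stage this release documents
    "buyer": {"name": "Example Airport Authority"},
    "awards": [
        {
            "id": "award-1",
            "value": {"amount": 125000000, "currency": "MXN"},
            "suppliers": [{"name": "Example Construction Consortium"}],
        }
    ],
}

# Publishing as JSON lets third parties analyze value for money,
# monitor performance, and visualize flows of funds across releases.
print(json.dumps(release, indent=2))
```

Because each stage (planning, tender, award, contract, implementation) is published as machine-readable releases against one process identifier, outside analysts can join them without scraping PDFs.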

Benefits of open data for PPPs

We think that better and open data will lead to better PPPs. Here’s how:

1. Using user feedback to fix problems

The Brazilian state of Minas Gerais has been a leader in transparent PPP contracts with full proactive disclosure of the contract terms, as well as of other relevant project information—a practice that puts a government under more scrutiny but makes for better projects in the long run.

According to Marcos Siqueira, former head of the PPP Unit in Minas Gerais, “An adequate transparency policy can provide enough information to users so they can become contract watchdogs themselves.”

For example, a public-private contract was signed in 2014 to build a $300 million waste treatment plant for 2.5 million people in the metropolitan area of Belo Horizonte, the capital of Minas Gerais. As the team members conducted appraisals, they disclosed them on the Internet. In addition, the team held around 20 public meetings and identified all the stakeholders in the project. One notable result of the sharing and discussion of this information was the relocation of the facility to a less-populated area. When the project went to the bidding phase, it was much closer to the expectations of its various stakeholders.

2. Making better decisions on contracts and performance

Chile has been a leader in developing PPPs (which it refers to as concessions) for several decades, in a range of sectors: urban and inter-urban roads, seaports, airports, hospitals, and prisons. The country tops the list for the best enabling environment for PPPs in Latin America and the Caribbean, as measured by Infrascope, an index produced by the Economist Intelligence Unit and the Multilateral Investment Fund of the IDB Group.

Chile’s distinction is that it discloses information on performance of PPPs that are underway. The government’s Concessions Unit regularly publishes summaries of the projects during their different phases, including construction and operation. The reports are non-technical, yet include all the necessary information to understand the scope of the project…(More)”

Smart crowds in smart cities: real life, city scale deployments of a smartphone based participatory crowd management platform


Tobias Franke, Paul Lukowicz and Ulf Blanke at the Journal of Internet Services and Applications: “Pedestrian crowds are an integral part of cities. Planning for crowds, monitoring crowds and managing crowds are fundamental tasks in city management. As a consequence, crowd management is a sprawling R&D area (see related work) that includes theoretical models, simulation tools, as well as various support systems. There has also been significant interest in using computer vision techniques to monitor crowds. However, overall, the topic of crowd management has been given only little attention within the smart city domain. In this paper we report on a platform for smart, city-wide crowd management based on a participatory mobile phone sensing platform. Originally, the apps based on this platform were conceived as a technology validation tool for crowd-based sensing within a basic research project. However, the initial deployments at the Notte Bianca Festival in Malta and at the Lord Mayor’s Show in London generated so much interest within the civil protection community that the platform has gradually evolved into a full-blown participatory crowd management system and is now in the process of being commercialized through a startup company. To date it has been deployed at 14 events in three European countries (UK, Netherlands, Switzerland) and used by well over 100,000 people….

Obtaining knowledge about the current size and density of a crowd is one of the central aspects of crowd monitoring. For decades, automatic crowd monitoring in urban areas has mainly been performed by means of image processing. One use case for such video-based applications is a CCTV camera-based system that automatically alerts the staff of subway stations when the waiting platform is congested. However, one of the downsides of video-based crowd monitoring is the fact that video cameras tend to be considered privacy invading. Privacy-preserving approaches to video-based crowd monitoring have therefore been proposed, in which crowd sizes are estimated without people models or object tracking.

With respect to the mitigation of catastrophes induced by panicking crowds (e.g. during an evacuation), city planners and architects increasingly rely on tools simulating crowd behavior in order to optimize infrastructures. Murakami et al. present an agent-based simulation for evacuation scenarios. Shendarkar et al. present work that is also based on BDI (belief, desire, intention) agents – those agents, however, are trained in a virtual-reality environment, giving greater flexibility to the modeling. Kluepfel et al., on the other hand, use a cellular automaton model for the simulation of crowd movement and egress behavior.

With smartphones becoming everyday items, the concept of crowdsourcing information from users of mobile applications has significantly gained traction. Roitman et al. present a smart city system where the crowd can send eyewitness reports, thereby creating deeper insights for city officials. Szabo et al. take this approach one step further and employ the sensors built into smartphones for gathering data for city services such as live transit information. Ghose et al. utilize the same principle for gathering information on road conditions. Pan et al. use a combination of crowdsourcing and social media analysis for identifying traffic anomalies….(More)”.
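The sensing side of such a participatory platform can be sketched as simple spatial binning: location pings from opted-in phones are counted per grid cell, and unusually dense cells are flagged for operators. The coordinates, cell size, and alert threshold below are all invented for illustration; a deployed system would also need calibration for what fraction of the crowd runs the app:

```python
from collections import Counter

# Sketch: estimate relative crowd density from opted-in phones by
# binning (lat, lon) pings into roughly 100 m grid cells and flagging
# cells whose ping count exceeds an alert threshold.
CELL = 0.001  # grid step in degrees of latitude, roughly 100 m

def density_map(pings):
    """Count pings per grid cell; pings is a list of (lat, lon) pairs."""
    return Counter((round(lat / CELL), round(lon / CELL)) for lat, lon in pings)

def congested_cells(pings, threshold):
    """Cells an operator dashboard would highlight."""
    return [cell for cell, n in density_map(pings).items() if n >= threshold]

# Invented pings: three near one cell, one elsewhere.
pings = [(51.5079, -0.0877), (51.5080, -0.0878), (51.5079, -0.0876),
         (51.5200, -0.1000)]
print(congested_cells(pings, threshold=3))
```

Because only aggregate counts per cell leave the server, this style of monitoring avoids the identification concerns raised against camera-based approaches.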

We know where you live


MIT News Office: “From location data alone, even low-tech snoopers can identify Twitter users’ homes, workplaces….Researchers at MIT and Oxford University have shown that the location stamps on just a handful of Twitter posts — as few as eight over the course of a single day — can be enough to disclose the addresses of the poster’s home and workplace to a relatively low-tech snooper.

The tweets themselves might be otherwise innocuous — links to funny videos, say, or comments on the news. The location information comes from geographic coordinates automatically associated with the tweets.

Twitter’s location-reporting service is off by default, but many Twitter users choose to activate it. The new study is part of a more general project at MIT’s Internet Policy Research Initiative to help raise awareness about just how much privacy people may be giving up when they use social media.

The researchers describe their research in a paper presented last week at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, where it received an honorable mention in the best-paper competition, a distinction reserved for only 4 percent of papers accepted to the conference.

“Many people have this idea that only machine-learning techniques can discover interesting patterns in location data,” says Ilaria Liccardi, a research scientist at MIT’s Internet Policy Research Initiative and first author on the paper. “And they feel secure that not everyone has the technical knowledge to do that. With this study, what we wanted to show is that when you send location data as a secondary piece of information, it is extremely simple for people with very little technical knowledge to find out where you work or live.”

Conclusions from clustering

In their study, Liccardi and her colleagues — Alfie Abdul-Rahman and Min Chen of Oxford’s e-Research Centre in the U.K. — used real tweets from Twitter users in the Boston area. The users consented to the use of their data, and they also confirmed their home and work addresses, their commuting routes, and the locations of various leisure destinations from which they had tweeted.

The time and location data associated with the tweets were then presented to a group of 45 study participants, who were asked to try to deduce whether the tweets had originated at the Twitter users’ homes, their workplaces, leisure destinations, or locations along their commutes. The participants were not recruited on the basis of any particular expertise in urban studies or the social sciences; they just drew what conclusions they could from location clustering….
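The low-tech inference the study describes can be sketched as a time-of-day heuristic: take the most frequent nighttime location as "home" and the most frequent weekday-daytime location as "work". The posts and coordinates below are invented, and the actual study had human participants reading maps and tables rather than running code:

```python
from collections import Counter

# Sketch of home/work inference from a handful of geotagged posts.
# posts: list of (hour_of_day, (lat, lon)) tuples; data is invented.
def infer_home_work(posts):
    night = Counter(loc for hour, loc in posts if hour >= 21 or hour < 6)
    day = Counter(loc for hour, loc in posts if 9 <= hour < 18)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

# Six invented posts from one user over a day.
posts = [(23, (42.36, -71.06)), (7, (42.36, -71.06)), (22, (42.36, -71.06)),
         (10, (42.35, -71.09)), (14, (42.35, -71.09)), (16, (42.35, -71.09))]
home, work = infer_home_work(posts)
print("home:", home, "work:", work)
```

The point of the study is precisely that nothing more sophisticated than this kind of clustering-by-time is needed once location stamps are public.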

Predictably, participants fared better with map-based representations, correctly identifying Twitter users’ homes roughly 65 percent of the time and their workplaces at closer to 70 percent. Even the tabular representation was informative, however, with accuracy rates of just under 50 percent for homes and a surprisingly high 70 percent for workplaces….(More; Full paper)”

City planners tap into wealth of cycling data from Strava tracking app


Peter Walker in The Guardian: “Sheila Lyons recalls the way Oregon used to collect data on how many people rode bikes. “It was very haphazard, two-hour counts done once a year,” said the woman in charge of cycling policy for the state government. “Volunteers, sitting on the street corner because they wanted better bike facilities. Pathetic, really.”

But in 2013 a colleague had an idea. She recorded her own bike rides using an app called Strava, and thought: why not ask the company to share its data? And so was born Strava Metro, both an inadvertent tech business spinoff and a similarly accidental urban planning tool, one that is now quietly helping to reshape streets in more than 70 places around the world and counting.

Using the GPS tracking capability of a smartphone and similar devices, Strava allows people to plot how far and fast they go and compare themselves against other riders. Users create designated route segments, which each have leaderboards ranked by speed.

Originally aimed just at cyclists, Strava soon incorporated running and now has options for more than two dozen pursuits. But cycling remains the most popular, and while the company is coy about overall figures, it says it adds 1 million new members every two months, and has more than six million uploads a week.

For city planners like Lyons, used to very occasional single-street bike counts, this is a near-unimaginable wealth of data. While individual details are anonymised, it still shows how many Strava-using cyclists, plus their age and gender, ride down any street at any time of the day, and the entire route they take.
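The aggregate a planner actually receives can be sketched as per-segment, per-hour ride counts with individual identities dropped. The segment names and trips below are invented for illustration; the real Strava Metro product delivers far richer aggregates than this:

```python
from collections import defaultdict

# Sketch: turn anonymized trips into the kind of aggregate a planner
# uses, namely ride counts per street segment per hour of day.
# rides: list of (hour_of_day, [segment_name, ...]) tuples; data invented.
def segment_counts(rides):
    counts = defaultdict(int)
    for hour, route in rides:
        for segment in route:
            counts[(segment, hour)] += 1
    return dict(counts)

rides = [
    (8, ["SW Broadway", "SW Main St"]),
    (8, ["SW Broadway"]),
    (17, ["SW Main St", "SW Broadway"]),
]
counts = segment_counts(rides)
print(counts[("SW Broadway", 8)])  # two morning rides include SW Broadway
```

Compared with a volunteer's once-a-year two-hour count on one corner, even this crude aggregation covers every street, every hour, year-round.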

The company says it initially had no idea how useful the information could be, and only began visualising data on heatmaps as a fun project for its engineers. “We’re not city planners,” said Michael Horvath, one of two former Harvard University rowers and relatively veteran 40-something tech entrepreneurs who co-founded Strava in 2009.

“One of the things that we learned early on is that these people just don’t have very much data to begin with. Not only is ours a novel dataset, in many cases it’s the only dataset that speaks to the behaviour of cyclists and pedestrians in that city or region.”…(More)”