Mapping information economy business with big data: findings from the UK


NESTA: “This paper uses innovative ‘big data’ resources to measure the size of the information economy in the UK.

Key Findings

  • Counts of information economy firms are 42 per cent larger than SIC-based estimates
  • Using ‘big data’ estimates, the research finds 225,800 information economy businesses in the UK
  • Information economy businesses are highly clustered across the country, with very high counts in the Greater South East, notably London (especially central and east London), as well as big cities such as Manchester, Birmingham and Bristol
  • Looking at local clusters, we find hotspots in Middlesbrough, Aberdeen, Brighton, Cambridge and Coventry, among others

Information and Communications Technologies – and the digital economy they support – are of enduring interest to researchers and policymakers. National and local government are particularly keen to understand the characteristics and growth potential of ‘their’ digital businesses.
Given the recent resurgence of interest in industrial policy across many developed countries, there is now substantial policy interest in developing stronger, more competitive digital economies. For example, the UK’s current industrial strategy combines horizontal interventions with support for seven key sectors, of which the ‘information economy’ is one.
The desire to grow high–tech clusters is often prominent in the policy mix – for instance, the UK’s Tech City UK initiative, Regional Innovation Clusters in the US and elements of ‘smart specialisation’ policies in the EU.
In this paper, NIESR and Growth Intelligence use novel ‘big data’ sources to improve our understanding of information economy businesses in the UK – that is, those involved in the production of ICTs. We use this experience to critically reflect on some of the opportunities and challenges presented by big data tools and analytics for economic research and policymaking.”
– See more at: http://www.nesta.org.uk/publications/mapping-information-economy-business-big-data-findings-uk-0

Smart cities: the state-of-the-art and governance challenge


New Paper by Mark Deakin in Triple Helix – A Journal of University-Industry-Government Innovation and Entrepreneurship: “Reflecting on the governance of smart cities, the state-of-the-art this paper advances offers a critique of recent city ranking and future Internet accounts of their development. Armed with these critical insights, it goes on to explain smart cities in terms of the social networks, cultural attributes and environmental capacities, vis-a-vis, vital ecologies of the intellectual capital, wealth creation and standards of participatory governance regulating their development. The Triple Helix model which the paper advances to explain these performances in turn suggests that cities are smart when the ICTs of future Internet developments successfully embed the networks society needs for them to not only generate intellectual capital, or create wealth, but also cultivate the environmental capacity, ecology and vitality of those spaces which the direct democracy of their participatory governance open up, add value to and construct.”

Look to Government—Yes, Government—for New Social Innovations


Paper by Christian Bason and Philip Colligan: “If asked to identify the hotbed of social innovation right now, many people would likely point to the new philanthropy of Silicon Valley or the social entrepreneurship efforts supported by Ashoka, Echoing Green, and Skoll Foundation. Very few people, if any, would mention their state capital or Capitol Hill. While local and national governments may have promulgated some of the greatest advances in human history — from public education to putting a man on the moon — public bureaucracies are more commonly known to stifle innovation.
Yet, around the world, there are local, regional, and national government innovators who are challenging this paradigm. They are pioneering a new form of experimental government — bringing new knowledge and practices to the craft of governing and policy making; drawing on human-centered design, user engagement, open innovation, and cross-sector collaboration; and using data, evidence, and insights in new ways.
Earlier this year, Nesta, the UK’s innovation foundation (which Philip helps run), teamed up with Bloomberg Philanthropies to publish i-teams, the first global review of public innovation teams set up by national and city governments. The study profiled 20 of the most established i-teams from around the world, including:

  • French Experimental Fund for Youth, which has supported more than 554 experimental projects (such as one that reduces school drop-out rates) that have benefited over 480,000 young people;
  • Nesta’s Innovation Lab, which has run 70 open innovation challenges and programs supporting over 750 innovators working in fields as diverse as energy efficiency, healthcare, and digital education;
  • New Orleans’ Innovation and Delivery team, which achieved a 19% reduction in the number of murders in the city in 2013 compared to the previous year.

How are i-teams achieving these results? The most effective ones are explicit about the goal they seek – be it creating a solution to a specific policy challenge, engaging citizenry in behaviors that help the commonweal, or transforming the way government behaves. Importantly, these teams are also able to deploy the right skills, capabilities, and methods for the job.
In addition, ­i-teams have a strong bias toward action. They apply academic research in behavioral economics and psychology to public policy and services, focusing on rapid experimentation and iteration. The approach stands in stark contrast to the normal routines of government.
Take, for example, the UK’s Behavioural Insights Team (BIT), often called the Nudge Unit. It sets clear goals, engages the right expertise to prototype means to the end, and tests innovations rapidly in the field, learning what’s not working and quickly scaling what is.
One of BIT’s most famous projects changed taxpayer behavior. BIT’s team of economists, behavioral psychologists, and seasoned government staffers came up with minor changes to tax letters, sent out by the UK Government, that subtly introduced positive peer pressure. By simply altering the letters to say that most people in their local area had already paid their taxes, BIT was able to boost repayment rates by around 5%. This trial was part of a range of interventions, which have helped bring forward over £200 million in additional tax revenue to HM Revenue & Customs, the UK’s tax authority.
The Danish government’s internal i-team, MindLab (which Christian ran for 8 years) has likewise influenced citizen behavior….”

Activists Wield Search Data to Challenge and Change Police Policy


at the New York Times: “One month after a Latino youth died from a gunshot as he sat handcuffed in the back of a police cruiser here last year, 150 demonstrators converged on Police Headquarters, some shouting “murderers” as baton-wielding officers in riot gear fired tear gas.

The police say the youth shot himself with a hidden gun. But to many residents of this city, which is 40 percent black, the incident fit a pattern of abuse and bias against minorities that includes frequent searches of cars and use of excessive force. In one case, a black female Navy veteran said she was beaten by an officer after telling a friend she was visiting that the friend did not have to let the police search her home.

Yet if it sounds as if Durham might have become a harbinger of Ferguson, Mo. — where the fatal shooting of an unarmed black teenager by a white police officer led to weeks of protests this summer — things took a very different turn. Rather than relying on demonstrations to force change, a coalition of ministers, lawyers and community and political activists turned instead to numbers. They used an analysis of state data from 2002 to 2013 that showed that the Durham police searched black male motorists at more than twice the rate of white males during stops. Drugs and other illicit materials were found no more often on blacks….
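The core of the analysis the coalition relied on is a simple rate comparison. The sketch below is a hypothetical illustration of that kind of computation, not the actual North Carolina dataset or code: field names and figures are invented, but the arithmetic — search rate per group, then the black/white rate ratio — is the one the article describes.

```python
# Hypothetical sketch of a stop-and-search disparity analysis.
# Given per-stop records of (driver race, whether a search occurred),
# compute search rates by group and the ratio between them.
# All data below is invented for illustration.

from collections import Counter

def search_rates(stops):
    """stops: iterable of (race, was_searched) tuples -> {race: search rate}."""
    totals, searched = Counter(), Counter()
    for race, was_searched in stops:
        totals[race] += 1
        if was_searched:
            searched[race] += 1
    return {race: searched[race] / totals[race] for race in totals}

# Toy data: 1,000 stops per group, with searches in 10% vs. 4% of stops.
stops = ([("black", True)] * 100 + [("black", False)] * 900 +
         [("white", True)] * 40 + [("white", False)] * 960)

rates = search_rates(stops)
ratio = rates["black"] / rates["white"]  # 2.5x in this toy example
```

In the real analysis, the same ratio would be computed per year (2002–2013) and paired with contraband "hit rates" to test whether the extra searches were justified by higher find rates.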

The use of statistics is gaining traction not only in North Carolina, where data on police stops is collected under a 15-year-old law, but in other cities around the country.

Austin, Tex., began requiring written consent for searches without probable cause two years ago, after its independent police monitor reported that whites stopped by the police were searched one in every 28 times, while blacks were searched one in eight times.

In Kalamazoo, Mich., a city-funded study last year found that black drivers were nearly twice as likely to be stopped, and then “much more likely to be asked to exit their vehicle, to be handcuffed, searched and arrested.”

As a result, Jeff Hadley, the public safety chief of Kalamazoo, imposed new rules requiring officers to explain to supervisors what “reasonable suspicion” they had each time they sought a driver’s consent to a search. Traffic stops have declined 42 percent amid a drop of more than 7 percent in the crime rate, he said.

“It really stops the fishing expeditions,” Chief Hadley said of the new rules. Though the findings demoralized his officers, he said, the reaction from the African-American community stunned him. “I thought they would be up in arms, but they said: ‘You’re not telling us anything we didn’t already know. How can we help?’ ”

The School of Government at the University of North Carolina at Chapel Hill has a new manual for defense lawyers, prosecutors and judges, with a chapter that shows how stop and search data can be used by the defense to raise challenges in cases where race may have played a role…”

Colombia’s Data-Driven Fight Against Crime


One Monday in 1988, El Mundo newspaper of Medellín, Colombia, reported, as it did every Monday, on the violent deaths in the city of two million people over the weekend. An article giving an hour-by-hour description of the deaths from Saturday night to Sunday night was remarkable for, among other things, the journalist’s skill in finding different ways to report a murder. “Someone took the life of Luís Alberto López at knife point … Luís Alberto Patiño ceased to exist with a bullet in his head … Mario Restrepo turned up dead … An unidentified person killed Néstor Alvarez with three shots.” In reporting 27 different murders, the author repeated his phrasing only once.

….What Guerrero did to make Cali safer was remarkable because it worked, and because of the novelty of his strategy. Before becoming mayor, Guerrero was not a politician, but a Harvard-trained epidemiologist who was president of the Universidad del Valle in Cali. He set out to prevent murder the way a doctor prevents disease. What public health workers are doing now to stop the spread of Ebola, Guerrero did in Cali to stop the spread of violence.

Although his ideas have now been used in dozens of cities throughout Latin America, they are worth revisiting because they are not employed in the places that need them most. The most violent places in Latin America are Honduras, El Salvador and Guatemala — indeed, they are among the most violent countries in the world not at war. The wave of youth migration to the United States is from these countries, and the refugees are largely fleeing violence.

One small municipality in El Salvador, Santa Tecla, has employed Cali’s strategies for about a decade, and its homicide rate has dropped. But Santa Tecla is an anomaly. Most of the region’s cities have not tried to do what Guerrero did — and they are failing to protect their citizens….

Guerrero went on to spread his ideas. Working with the Pan-American Health Organization and the Inter-American Development Bank, he took his epidemiological methods to 18 other countries.

“The approach was very low-cost and pragmatic,” said Joan Serra Hoffman, a senior specialist in crime and violence prevention in Latin America and the Caribbean at the World Bank. “You could see it was conceived by someone who was an academic and a policy maker. It can be fully operational for between $50,000 and $80,000.”…

Can Government Mine Tweets to Assess Public Opinion?


at Government Technology: “What if instead of going to a city meeting, you could go on Twitter, tweet your opinion, and still be heard by those in government? New research suggests this is a possibility.
The Urban Attitudes Lab at Tufts University has conducted research on accessing “big data” on social networking sites for civic purposes, according to Justin Hollander, associate professor in the Department of Urban and Environmental Policy and Planning at Tufts.
About six months ago, Hollander began researching new ways of assessing how people think about the places they live, work and play. “We’re looking to see how tapping into social media data to understand attitudes and opinions can benefit both urban planning and public policy,” he said.
Harnessing natural comments — there are about one billion tweets per day — could help governments learn what people are saying and feeling, said Hollander. And while formal types of data can be used as proxies for how happy people are, people openly share their sentiments on social networking sites.
Twitter and other social media sites can also provide information in an unobtrusive way. “The idea is that we can capture a potentially more valid and reliable view [of people’s] opinions about the world,” he said. As an inexact science, social science relies on a wide range of data sources to inform research, including surveys, interviews and focus groups; but people respond to being the subject of study, possibly affecting outcomes, Hollander said.
Hollander is also interested in extracting data from social sites because it can be done on a 24/7 basis, which means not having to wait for government to administer surveys, like the Decennial Census. Information from Twitter can also be connected to place; Hollander has approximated that about 10 percent of all tweets are geotagged to location.
In its first study earlier this year, the lab looked at using big data to learn about people’s sentiments and civic interests in New Bedford, Mass., comparing Twitter messages with the city’s published meeting minutes.
To extract tweets over a six-week period from February to April, researchers used the lab’s own software to capture 122,186 tweets geotagged within the city that also contained words pertaining to the New Bedford area. Hollander said anyone can use Twitter’s API to mine data from an area as small as a neighborhood containing a couple hundred houses.
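Geotagged collection of the kind described here typically means keeping only tweets whose coordinates fall within some radius of a city center. The sketch below is not the lab's software — coordinates, radius, and tweet texts are illustrative — but shows the standard haversine great-circle filter such a pipeline would use.

```python
# Sketch of geotag filtering: keep only tweets whose coordinates fall
# within a given radius of a city center, using the haversine
# great-circle distance. All values below are illustrative.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def within_radius(tweets, center, radius_miles):
    """tweets: iterable of (text, lat, lon); keep those inside the circle."""
    clat, clon = center
    return [t for t in tweets
            if haversine_miles(t[1], t[2], clat, clon) <= radius_miles]

# New Bedford, Mass. is roughly (41.64, -70.93); Boston (42.36, -71.06)
# is about 50 miles away, so it falls outside a 17-mile radius.
tweets = [("pothole on union st", 41.65, -70.92),
          ("snow day", 42.36, -71.06)]
local = within_radius(tweets, (41.64, -70.93), 17.0)  # keeps only the first
```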
Researchers used IBM’s SPSS Modeler software, cross-checked against custom-designed software, to apply a sentiment dictionary of nearly 3,000 words, assigning each phrase a sentiment score ranging from -5 for awful feelings to +5 for elation. The lab did this for the Twitter messages and found that about 7 percent were positive versus 5.5 percent negative; correspondingly, in the minutes, 1.7 percent were positive and 0.7 percent negative. In total, about 11,000 messages contained sentiments.
The lab also used NVivo qualitative software to analyze 24 key words in a one-year sample of the city’s meeting minutes. By searching for the same words in Twitter posts, the researchers found that “school,” “health,” “safety,” “parks,” “field” and “children” were used frequently across both mediums.
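The dictionary-based scoring the study describes can be sketched in a few lines. This is a toy illustration, not the lab's code: the real ~3,000-word dictionary is replaced by a handful of invented entries, but the mechanics — per-word scores on a -5..+5 scale, summed per message, then binned positive/negative/neutral — match the description above.

```python
# Minimal sketch of dictionary-based sentiment scoring.
# TOY_LEXICON stands in for the lab's ~3,000-word dictionary; each word
# carries a score from -5 (awful) to +5 (elation). A message's score is
# the sum over its words; messages are then binned by sign.

TOY_LEXICON = {"great": 3, "love": 3, "happy": 2,
               "awful": -5, "broken": -2, "hate": -4}

def score(text):
    """Sum of per-word sentiment scores; unknown words score 0."""
    return sum(TOY_LEXICON.get(w, 0) for w in text.lower().split())

def sentiment_shares(messages):
    """Fraction of messages scoring positive and negative."""
    scores = [score(m) for m in messages]
    n = len(scores)
    pos = sum(s > 0 for s in scores) / n
    neg = sum(s < 0 for s in scores) / n
    return pos, neg

tweets = ["love the new park", "the bus is broken again",
          "meeting at 7pm", "great schools here"]
pos, neg = sentiment_shares(tweets)  # 0.5 positive, 0.25 negative
```

The same scorer can be run over meeting-minute sentences, giving the two positive/negative shares the study compares.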
….
Next up for the lab is a new study contrasting Twitter posts from four Massachusetts cities with the recent election results.

The Next Frontier of Engagement: Civic Innovation Labs


Maayan Dembo at Planetizen: “As described by Clayton Christensen, a professor at the Harvard Business School who coined the term “disruptive innovation,” a successful office for social innovation should employ four main tactics to accomplish its mission. First, governments should invest “in innovations that are developed and identified by citizens outside of government who better understand the problems.” Second, the office should support “‘bottom-up’ initiatives, in preference to ‘trickle-down’ philanthropy—because the societal impact of the former is typically greater.” Third, Christensen argues that the office should utilize impact metrics to measure performance and, finally, that it should also invest in social innovation outside of the non-profit sector.
Los Angeles’ most recent citizen-driven social innovation initiative, the Civic Innovation Lab, is an 11-month project aimed at prototyping new solutions for issues within the city of Los Angeles. It is supported by the HubLA, Learn Do Share, the Los Angeles City Tech Bullpen, and Innovate LA, a membership organization within the Los Angeles County Economic Development Corporation. Private and public sector support for such labs, in one of the largest cities in America, is unprecedented, and because this initiative in Los Angeles is a new mechanism explicitly supported by the public sector, it warrants a critical check on its motivations and accomplishments. Depending on its success, the Civic Innovation Lab could serve as a model for other municipalities.
The Los Angeles Civic Innovation Lab operates in three main phases: 1) workshops where citizens learn about the possibilities of Open Data and discuss what deep challenges face Los Angeles (called the “Discover, Define, Design” stage), 2) a call for solutions to solve the design challenges brought to light in the first phase, and 3) a six-month accelerator program to prototype selected solutions. I participated in the most recent Civic Innovation Lab session, a three-day workshop concluding the “Discover, Define, Design” phase….”

Good data make better cities


Stephen Goldsmith and Susan Crawford at the Boston Globe: “…Federal laws prevent sharing of information among state workers helping the same family. In one state’s public health agency, workers fighting obesity cannot receive information from another official inside the same agency assigned to a program aimed at fighting diabetes. In areas where citizens are worried about environmental justice, sensors collecting air quality information are feared — because they could monitor the movements of people. Cameras that might provide a crucial clue to the identity of a terrorist are similarly feared because they might capture images of innocent bystanders.
In order for the public to develop confidence that data tools work for its betterment, not against it, we have work to do. Leaders need to establish policies covering data access, retention, security, and transparency. Forensic capacity — to look back and see who had access to what for what reason — should be a top priority in the development of any data system. So too should clear consequences for data misuse by government employees.
If we get this right, the payoffs for democracy will be enormous. Data can provide powerful insights into the equity of public services and dramatically increase the effectiveness of social programs. Existing 311 digital systems can become platforms for citizen engagement rather than just channels for complaints. Government services can be graded by citizens and improved in response to a continuous loop of interaction. Cities can search through anonymized data in a huge variety of databases for correlations between particular facts and desired outcomes and then apply that knowledge to drive toward results — what can a city do to reduce rates of obesity and asthma? What bridges are in need of preventative maintenance? And repurposing dollars from ineffective programs and vendors to interventions that work will help cities be safer, cleaner, and more effective.
The digital revolution has finally reached inside the walls of city hall, making this the best time within living memory to be involved in local government. We believe that doing many small things right using data will build trust, making it more likely that citizens will support their city’s need to do big things — including addressing economic dislocation.
Data rules should genuinely protect individuals, not limit our ability to serve them better. When it comes to data, unreasoning fear is our greatest enemy…”

Cities Find Rewards in Cheap Technologies


Nanette Byrnes at MIT Technology Review: “Cities around the globe, whether rich or poor, are in the midst of a technology experiment. Urban planners are pulling data from inexpensive sensors mounted on traffic lights and park benches, and from mobile apps on citizens’ smartphones, to analyze how their cities really operate. They hope the data will reveal how to run their cities better and improve urban life. City leaders and technology experts say that managing the growing challenges of cities well and affordably will be close to impossible without smart technology.
Fifty-four percent of humanity lives in urban centers, and almost all of the world’s projected population growth over the next three decades will take place in cities, including many very poor cities. Because of their density and often strained infrastructure, cities have an outsize impact on the environment, consuming two-thirds of the globe’s energy and contributing 70 percent of its greenhouse-gas emissions. Urban water systems are leaky. Pollution levels are often extreme.
But cities also contribute most of the world’s economic production. Thirty percent of the world’s economy and most of its innovation are concentrated in just 100 cities. Can technology help manage rapid population expansion while also nurturing cities’ all-important role as an economic driver? That’s the big question at the heart of this Business Report.
Selling answers to that question has become a big business. IBM, Cisco, Hitachi, Siemens, and others have taken aim at this market, publicizing successful examples of cities that have used their technology to tackle the challenges of parking, traffic, transportation, weather, energy use, water management, and policing. Cities already spend a billion dollars a year on these systems, and that’s expected to grow to $12 billion a year or more in the next 10 years.
To justify this kind of outlay, urban technologists will have to move past the test projects that dominate discussions today. Instead, they’ll have to solve some of the profound and growing problems of urban living. Cities leaning in that direction are using various technologies to ease parking, measure traffic, and save water (see “Sensing Santander”), reduce rates of violent crime (see “Data-Toting Cops”), and prepare for ever more severe weather patterns.
There are lessons to be learned, too, from cities whose grandiose technological ideas have fallen short, like the eco-city initiative of Tianjin, China (see “China’s Future City”), which has few residents despite great technology and deep government support.
The streets are similarly largely empty in the experimental high-tech cities of Songdo, South Korea; Masdar City, Abu Dhabi; and Paredes, Portugal, which are being designed to have minimal impact on the environment and offer high-tech conveniences such as solar-powered air-conditioning and pneumatic waste disposal systems instead of garbage trucks. Meanwhile, established cities are taking a much more incremental, less ambitious, and perhaps more workable approach, often benefiting from relatively inexpensive and flexible digital technologies….”

The Reliability of Tweets as a Supplementary Method of Seasonal Influenza Surveillance


New Paper by Ming-Hsiang Tsou et al in the Journal of Medical Internet Research: “Existing influenza surveillance in the United States is focused on the collection of data from sentinel physicians and hospitals; however, the compilation and distribution of reports are usually delayed by up to 2 weeks. With the popularity of social media growing, the Internet is a source for syndromic surveillance due to the availability of large amounts of data. In this study, tweets, or posts of 140 characters or less, from the website Twitter were collected and analyzed for their potential as surveillance for seasonal influenza.
Objective: There were three aims: (1) to improve the correlation of tweets to sentinel-provided influenza-like illness (ILI) rates by city through filtering and a machine-learning classifier, (2) to observe correlations of tweets for emergency department ILI rates by city, and (3) to explore correlations for tweets to laboratory-confirmed influenza cases in San Diego.
Methods: Tweets containing the keyword “flu” were collected within a 17-mile radius from 11 US cities selected for population and availability of ILI data. At the end of the collection period, 159,802 tweets were used for correlation analyses with sentinel-provided ILI and emergency department ILI rates as reported by the corresponding city or county health department. Two separate methods were used to observe correlations between tweets and ILI rates: filtering the tweets by type (non-retweets, retweets, tweets with a URL, tweets without a URL), and the use of a machine-learning classifier that determined whether a tweet was “valid”, or from a user who was likely ill with the flu.
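The four tweet categories in the Methods (non-retweets, retweets, with URL, without URL) amount to a two-way partition of the collected stream. The sketch below is not the authors' pipeline — the retweet and URL heuristics here (an "RT @" prefix, an "http" substring) are simplifications of whatever Twitter metadata the study actually used — but it shows the shape of that filtering step.

```python
# Sketch of tweet-type filtering: partition tweets into the four
# categories used in the study. The "RT @" and "http" heuristics are
# stand-ins for real Twitter metadata fields.

def classify(tweet):
    """Return (is_retweet, has_url) for a tweet's text."""
    is_retweet = tweet.startswith("RT @")
    has_url = "http://" in tweet or "https://" in tweet
    return is_retweet, has_url

def partition(tweets):
    """Bucket tweets along both axes: retweet status and URL presence."""
    buckets = {"retweet": [], "non_retweet": [], "with_url": [], "no_url": []}
    for t in tweets:
        is_rt, has_url = classify(t)
        buckets["retweet" if is_rt else "non_retweet"].append(t)
        buckets["with_url" if has_url else "no_url"].append(t)
    return buckets

sample = ["RT @cdc: flu shots available http://cdc.gov",
          "home sick with the flu today"]
buckets = partition(sample)
```

Weekly counts from each bucket would then be correlated separately against the ILI rates, which is how the study compares the categories.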
Results: Correlations varied by city but general trends were observed. Non-retweets and tweets without a URL had higher and more significant (P<.05) correlations than retweets and tweets with a URL. Correlations of tweets to emergency department ILI rates were higher than the correlations observed for sentinel-provided ILI for most of the cities. The machine-learning classifier yielded the highest correlations for many of the cities when using the sentinel-provided or emergency department ILI as well as the number of laboratory-confirmed influenza cases in San Diego. High correlation values (r=.93) with significance at P<.001 were observed for laboratory-confirmed influenza cases for most categories and tweets determined to be valid by the classifier.
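The r values reported in the Results are, by convention in this literature, ordinary Pearson correlation coefficients between weekly tweet counts and weekly ILI rates (the paper's exact computation isn't reproduced here). A from-scratch version, on invented series, looks like this:

```python
# Pearson correlation between two equal-length series, from scratch.
# The weekly series below are invented for illustration, not study data.

from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

weekly_tweets = [120, 150, 310, 400, 380, 210]   # flu tweets per week (toy)
weekly_ili = [1.1, 1.4, 2.9, 3.8, 3.5, 2.0]      # ILI rate per week (toy)
r = pearson(weekly_tweets, weekly_ili)           # close to 1 for this toy pair
```

Filtering (non-retweets, no URL) or a validity classifier changes only which tweets feed the weekly counts; the correlation step itself is unchanged.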
Conclusions: Compared to tweet analyses in the previous influenza season, this study demonstrated increased accuracy in using Twitter as a supplementary surveillance tool for influenza: better filtering and classification methods yielded higher correlations for the 2013-2014 influenza season than for tweets in the previous season, when emergency department ILI rates were better correlated to tweets than sentinel-provided ILI rates. Further investigations in the field would require expansion with regard to the locations the tweets are collected from, as well as the availability of more ILI data…”