Sharing Data Is a Form of Corporate Philanthropy


Matt Stempeck in HBR Blog:  “Ever since the International Charter on Space and Major Disasters was signed in 1999, satellite companies like DMC International Imaging have had a clear protocol with which to provide valuable imagery to public actors in times of crisis. In a single week this February, DMCii tasked its fleet of satellites on flooding in the United Kingdom, fires in India, floods in Zimbabwe, and snow in South Korea. Official crisis response departments and relevant UN departments can request on-demand access to the visuals captured by these “eyes in the sky” to better assess damage and coordinate relief efforts.

DMCii is a private company, yet it provides enormous value to the public and social sectors simply by periodically sharing its data.
Back on Earth, companies create, collect, and mine data in their day-to-day business. This data has quickly emerged as one of this century’s most vital assets. Public sector and social good organizations may not have access to the same amount, quality, or frequency of data. This imbalance has inspired a new category of corporate giving foreshadowed by the 1999 Space Charter: data philanthropy.
The satellite imagery example is an area of obvious societal value, but data philanthropy holds even stronger potential closer to home, where a wide range of private companies could give back in meaningful ways by contributing data to public actors. Consider two promising contexts for data philanthropy: responsive cities and academic research.
The centralized institutions of the 20th century allowed for the most sophisticated economic and urban planning to date. But in recent decades, the information revolution has helped the private sector speed ahead in data aggregation, analysis, and applications. It’s well known that there’s enormous value in real-time usage of data in the private sector, but there are similarly huge gains to be won in the application of real-time data to mitigate common challenges.
What if sharing economy companies shared their real-time housing, transit, and economic data with city governments or public interest groups? For example, Uber maintains a “God’s Eye view” of every driver on the road in a city:
Imagine combining this single data feed with an entire portfolio of real-time information. An early leader in this space is the City of Chicago’s urban data dashboard, WindyGrid. The dashboard aggregates an ever-growing variety of public datasets to allow for more intelligent urban management.
Over time, we could design responsive cities that react to this data. A responsive city is one where services, infrastructure, and even policies can flexibly respond to the rhythms of its denizens in real-time. Private sector data contributions could greatly accelerate these nascent efforts.
Data philanthropy could similarly benefit academia. Access to data remains an unfortunate barrier to entry for many researchers. The result is that only researchers with access to certain data, such as full-volume social media streams, can analyze and produce knowledge from this compelling information. Twitter, for example, sells access to a range of real-time APIs to marketing platforms, but the price point often exceeds researchers’ budgets. To accelerate the pursuit of knowledge, Twitter has piloted a program called Data Grants offering access to segments of their real-time global trove to select groups of researchers. With this program, academics and other researchers can apply to receive access to relevant bulk data downloads, such as a period of time before and after an election, or a certain geographic area.
Humanitarian response, urban planning, and academia are just three sectors within which private data can be donated to improve the public condition. There are many more possible applications, but few examples to date. For companies looking to expand their corporate social responsibility initiatives, sharing data should be part of the conversation…
Companies considering data philanthropy can take the following steps:

  • Inventory the information your company produces, collects, and analyzes. Consider which data would be easy to share and which data will require long-term effort.
  • Think who could benefit from this information. Who in your community doesn’t have access to this information?
  • Who could be harmed by the release of this data? If the datasets are about people, have they consented to their release? (For example, don’t repeat Facebook’s emotional manipulation experiment.)
  • Begin conversations with relevant public agencies and nonprofit partners to get a sense of the sort of information they might find valuable and their capacity to work with the formats you might eventually make available.
  • If you expect an onslaught of interest, an application process can help qualify partnership opportunities to maximize positive impact relative to time invested in the program.
  • Consider how you’ll handle distribution of the data to partners. Even if you don’t have the resources to set up an API, regular releases of bulk data could still provide enormous value to organizations used to relying on less-frequently updated government indices.
  • Consider your needs regarding privacy and anonymization. Strip the data of anything remotely resembling personally identifiable information (here are some guidelines).
  • If you’re making data available to researchers, plan to allow researchers to publish their results without obstruction. You might also require them to share the findings with the world under Open Access terms….”
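The anonymization step in the checklist above can be made concrete with a small redaction pass. This is only an illustrative sketch: the two patterns below (email addresses and US-style phone numbers) are toy assumptions, and real PII scrubbing needs far broader coverage and expert review.

```python
import re

# Illustrative redaction pass for the anonymization step above. The two
# patterns (email addresses, US-style phone numbers) are toy assumptions;
# real PII scrubbing needs far broader coverage and expert review.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(record: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    record = EMAIL.sub("[EMAIL]", record)
    record = PHONE.sub("[PHONE]", record)
    return record

print(redact("contact jane.doe@example.com or 555-123-4567"))
# contact [EMAIL] or [PHONE]
```

A pass like this is a starting point, not a guarantee: quasi-identifiers (zip codes, birth dates, rare attribute combinations) can still re-identify people even after obvious identifiers are stripped.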

Chief Executive of Nesta on the Future of Government Innovation


Interview between Rahim Kanani and Geoff Mulgan, CEO of Nesta and member of the MacArthur Research Network on Opening Governance: “Our aspiration is to become a global center of expertise on all kinds of innovation, from how to back creative business start-ups and how to shape innovation tools such as challenge prizes, to helping governments act as catalysts for new solutions,” explained Geoff Mulgan, chief executive of Nesta, the UK’s innovation foundation. In an interview with Mulgan, we discussed their new report, published in partnership with Bloomberg Philanthropies, which highlights 20 of the world’s top innovation teams in government. Mulgan and I also discussed the founding and evolution of Nesta over the past few years, and leadership lessons from his time inside and outside government.
Rahim Kanani: When we talk about ‘innovations in government’, isn’t that an oxymoron?
Geoff Mulgan: Governments have always innovated. The Internet and World Wide Web both originated in public organizations, and governments are constantly developing new ideas, from public health systems to carbon trading schemes, online tax filing to high-speed rail networks. But they’re much less systematic at innovation than the best in business and science. There are very few job roles, especially at senior levels, few budgets, and few teams or units. So although there are plenty of creative individuals in the public sector, they succeed despite, not because of, the systems around them. Risk-taking is punished, not rewarded. Over the last century, by contrast, the best businesses have learned how to run R&D departments, product development teams, open innovation processes and reasonably sophisticated ways of tracking investments and returns.
Kanani: This new report, published in partnership with Bloomberg Philanthropies, highlights 20 of the world’s most effective innovation teams in government working to address a range of issues, from reducing murder rates to promoting economic growth. Before I get to the results, how did this project come about, and why is it so important?
Mulgan: If you fail to generate new ideas, test them and scale the ones that work, it’s inevitable that productivity will stagnate and governments will fail to keep up with public expectations, particularly when waves of new technology—from smart phones and the cloud to big data—are opening up dramatic new possibilities.  Mayor Bloomberg has been a leading advocate for innovation in the public sector, and in New York he showed the virtues of energetic experiment, combined with rigorous measurement of results.  In the UK, organizations like Nesta have approached innovation in a very similar way, so it seemed timely to collaborate on a study of the state of the field, particularly since we were regularly being approached by governments wanting to set up new teams and asking for guidance.
Kanani: Where are some of the most effective innovation teams working on these issues, and how did you find them?
Mulgan: In our own work at Nesta, we’ve regularly sought out the best innovation teams that we could learn from, and this study made it possible to do that more systematically, focusing in particular on the teams within national and city governments. They vary greatly, but all the best ones are achieving impact with relatively slim resources. Some are based in central governments, like Mindlab in Denmark, which has pioneered the use of design methods to reshape government services, from small business licensing to welfare. SITRA in Finland has been going for decades as a public technology agency, and more recently has switched its attention to innovation in public services, for example by providing mobile tools to help patients manage their own healthcare. In the city of Seoul, the Mayor set up an innovation team to accelerate the adoption of ‘sharing’ tools, so that people could share things like cars, freeing money for other things. In South Australia the government set up an innovation agency that has been pioneering radical ways of helping troubled families, mobilizing families to help other families.
Kanani: What surprised you the most about the outcomes of this research?
Mulgan: Perhaps the biggest surprise has been the speed with which this idea is spreading. Since we started the research, we’ve come across new teams being created in dozens of countries, from Canada and New Zealand to Cambodia and Chile. China has set up a mobile technology lab for city governments. Mexico City and many others have set up labs focused on creative uses of open data. A batch of cities across the US supported by Bloomberg Philanthropies—from Memphis and New Orleans to Boston and Philadelphia—are now showing impressive results and persuading others to copy them.
 

Selected Readings on Sentiment Analysis


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of sentiment analysis was originally published in 2014.

Sentiment Analysis is a field of Computer Science that uses techniques from natural language processing, computational linguistics, and machine learning to predict subjective meaning from text. The term opinion mining is often used interchangeably with Sentiment Analysis, although it is technically a subfield focusing on the extraction of opinions (the umbrella under which sentiment, evaluation, appraisal, attitude, and emotion all lie).

The rise of Web 2.0 and increased information flow has led to growing interest in Sentiment Analysis, especially as applied to social networks and media. Events causing large spikes in media, such as the 2012 Presidential Election Debates, are especially ripe for analysis. Such analyses raise a variety of implications for the future of crowd participation, elections, and governance.
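As a concrete illustration of the technique, a bare-bones lexicon-based sentiment scorer can be sketched in a few lines. The word lists here are illustrative assumptions, not a published sentiment lexicon, and real systems use far richer features and machine-learned models:

```python
# Toy lexicon-based sentiment scorer. The word lists below are
# illustrative assumptions, not a published sentiment lexicon.

POSITIVE = {"good", "great", "excellent", "win", "support"}
NEGATIVE = {"bad", "terrible", "poor", "lose", "oppose"}
NEGATORS = {"not", "no", "never"}

def sentiment_score(text: str) -> int:
    """Signed score: > 0 positive, < 0 negative, 0 neutral."""
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        polarity = (tok in POSITIVE) - (tok in NEGATIVE)
        if polarity and i > 0 and tokens[i - 1] in NEGATORS:
            polarity = -polarity  # "not good" flips to negative
        score += polarity
    return score

print(sentiment_score("the debate was great"))   # 1
print(sentiment_score("not good"))               # -1
```

Even this toy version shows why the problem is hard: negation, sarcasm, and domain-specific vocabulary all defeat simple word counting, which is what motivates the machine-learning approaches surveyed in the readings below.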

Annotated Selected Reading List (in alphabetical order)

Choi, Eunsol et al. “Hedge detection as a lens on framing in the GMO debates: a position paper.” Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics 13 Jul. 2012: 70-79. http://bit.ly/1wweftP

  • Understanding the ways in which participants in public discussions frame their arguments is important for understanding how public opinion is formed. This paper adopts the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, the authors propose the following question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a more “scientific” tone?
  • Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. The paper proposes a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provides corpora to facilitate this study. Some of the preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature.
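A minimal version of the hedge-detection idea can be sketched as cue counting. The cue list below is an illustrative assumption, not the authors’ lexicon, and real hedge detection must also resolve cue scope and context:

```python
# Toy hedge-cue counter in the spirit of hedge detection as a lens on
# framing. The cue list is an illustrative assumption, not the authors'
# lexicon; real hedge detection must also resolve cue scope and context.

HEDGE_CUES = {"may", "might", "could", "suggest", "suggests",
              "possibly", "appear", "appears", "likely", "perhaps"}

def hedge_density(sentence: str) -> float:
    """Fraction of tokens in the sentence that are hedging cues."""
    tokens = [t.strip(".,;:!?\"'").lower() for t in sentence.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in HEDGE_CUES for t in tokens) / len(tokens)

print(hedge_density("Results suggest the effect may be small."))
```

Comparing average hedge density across two corpora (say, pro- and anti-GMO articles) is the kind of framing contrast the paper proposes, though their analysis is considerably more careful.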

Michael, Christina, Francesca Toni, and Krysia Broda. “Sentiment analysis for debates.” (Unpublished MSc thesis). Department of Computing, Imperial College London (2013). http://bit.ly/Wi86Xv

  • This project aims to expand on existing solutions used for automatic sentiment analysis on text in order to capture support/opposition and agreement/disagreement in debates. In addition, it looks at visualizing the classification results for enhancing the ease of understanding the debates and for showing underlying trends. Finally, it evaluates proposed techniques on an existing debate system for social networking.

Murakami, Akiko, and Rudy Raymond. “Support or oppose?: classifying positions in online debates from reply activities and opinion expressions.” Proceedings of the 23rd International Conference on Computational Linguistics: Posters 23 Aug. 2010: 869-875. https://bit.ly/2Eicfnm

  • In this paper, the authors propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users’ general positions difficult.
  • A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, it is shown that incorporating the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.

Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135. http://bit.ly/UaCBwD

  • This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Its focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. It includes material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.

Ranade, Sarvesh et al. “Online debate summarization using topic directed sentiment analysis.” Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining 11 Aug. 2013: 7. http://bit.ly/1nbKtLn

  • Social networking sites provide users with a virtual community interaction platform to share their thoughts, life experiences, and opinions. Online debate forums are one such platform, where people can take a stance and argue in support of or opposition to debate topics. An important feature of such forums is that they are dynamic and grow rapidly. In such situations, effective opinion summarization approaches are needed so that readers need not go through the entire debate.
  • This paper aims to summarize online debates by extracting highly topic-relevant and sentiment-rich sentences. The proposed approach takes into account topic-relevant, document-relevant, and sentiment-based features to capture topic-opinionated sentences. ROUGE (Recall-Oriented Understudy for Gisting Evaluation, a set of metrics and a software package for comparing an automatically produced summary or translation against human-produced ones) scores are used to evaluate the system. The system significantly outperforms several baseline systems and shows improvement over the state-of-the-art opinion summarization system. The results verify that topic-directed sentiment features are most important for generating effective debate summaries.
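The ROUGE evaluation mentioned above reduces, in its simplest form, to unigram recall against a human reference. This sketch shows only that core computation; the real ROUGE toolkit adds stemming, stopword options, and n-gram and longest-common-subsequence variants:

```python
from collections import Counter

# Core of ROUGE-1 recall: what fraction of the reference summary's
# unigrams also appear in the system summary, with counts clipped so a
# repeated system word cannot match more reference occurrences than exist.

def rouge1_recall(system: str, reference: str) -> float:
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(n, sys_counts[w]) for w, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

print(rouge1_recall("the cat sat", "the cat sat on the mat"))  # 0.5
```

Recall alone rewards long summaries, which is why ROUGE is usually reported alongside precision or an F-measure.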

Schneider, Jodi. “Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities.” http://bit.ly/1mi7ztx

  • Argumentation mining, a relatively new area of discourse analysis, involves automatically identifying and structuring arguments. Following a basic introduction to argumentation, the authors describe a new possible domain for argumentation mining: debates in open online collaboration communities.
  • Based on their experience with manual annotation of arguments in debates, the authors propose argumentation mining as the basis for three kinds of support tools: for authoring more persuasive arguments, finding weaknesses in others’ arguments, and summarizing a debate’s overall conclusions.

GitHub: A Swiss Army knife for open government


FCW: “Today, more than 300 government agencies are using the platform for public and private development. Cities (Chicago, Philadelphia, San Francisco), states (New York, Washington, Utah) and countries (United Kingdom, Australia) are sharing code and paving a new road to civic collaboration….

In addition to a rapidly growing code collection, the General Services Administration’s new IT development shop has created a “/Developer program” to “provide comprehensive support for any federal agency engaged in the production or use of APIs.”
The Consumer Financial Protection Bureau has built a full-blown website on GitHub to showcase the software and design work its employees are doing.
Most of the White House’s repos relate to Drupal-driven websites, but the Obama administration has also shared its iOS and Android apps, which together have been forked nearly 400 times.

Civic-focused organizations — such as the OpenGov Foundation, the Sunlight Foundation and the Open Knowledge Foundation — are also actively involved with original projects on GitHub. Those projects include the OpenGov Foundation’s Madison document-editing tool touted by the likes of Rep. Darrell Issa (R-Calif.) and the Open Knowledge Foundation’s CKAN, which powers hundreds of government data platforms around the world.
According to GovCode, an aggregator of public government open-source projects hosted on GitHub, there have been hundreds of individual contributors and nearly 90,000 code commits (each commit permanently recording a set of changes to a repository).
The nitty-gritty
Getting started on GitHub is similar to the process for other social networking platforms. Users create individual accounts and can set up “organizations” for agencies or cities. They can then create repositories (or repos) to collaborate on projects through an individual or organizational account. Other developers or organizations can download repo code for reuse or repurpose it in their own repositories (called forking), and make it available to others to do the same.
Collaborative aspects of GitHub include pull requests that allow developers to submit and accept updates to repos that build on and grow an open-source project. There are wikis, gists (code snippet sharing) and issue tracking for bugs, feature requests, or general questions and answers.
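The repo, branch, commit, and merge mechanics described above can be walked through locally with plain git. The repository name, file, and identity below are invented for the demo; forking and pull requests themselves happen on github.com, so this only mirrors the underlying operations:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

git init -q demo-repo                       # a new repository
cd demo-repo
git config user.email "demo@example.com"    # placeholder identity for the demo
git config user.name "Demo User"

echo "# Civic data tools" > README.md
git add README.md
git commit -q -m "Initial commit"           # a commit records the changes

git checkout -q -b expand-readme            # a feature branch, as behind a pull request
echo "Open data for everyone" >> README.md
git commit -q -am "Expand README"

git checkout -q -                           # back to the default branch
git merge -q expand-readme                  # accepting the change, like merging a PR
git log --oneline                           # history now shows both commits
```

On GitHub itself, the fork step copies someone else’s repository into your account, and the pull request wraps the branch-and-merge step in a review conversation.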
GitHub provides free code hosting for all public repos. Upgrade offerings include personal and organizational plans based on the number of private repos. For organizations that want a self-hosted GitHub development environment, GitHub Enterprise, used by the likes of CFPB, allows for self-hosted, private repos behind a firewall.
GitHub’s core user interface can be unwelcoming or even intimidating to the nondeveloper, but GitHub’s Pages offering adds web-hosting features that include domain mapping and lightweight content management through the static site generator Jekyll, and GitHub’s Atom text editor further lowers the barrier to contributing.
Notable government projects that use Pages are the White House’s Project Open Data, 18F’s /Developer Program, CFPB’s Open Tech website and New York’s Open Data Handbook. Indeed, Wired recently commented that the White House’s open-data GitHub efforts “could help fix government.”…
See also: GitHub for Government (GovLab)

Meet the UK start-ups changing the world with open data


Sophie Curtis in The Telegraph: “Data is more accessible today than anyone could have imagined 10 or 20 years ago. From corporate databases to social media and embedded sensors, data is exploding, with total worldwide volume expected to reach 6.6 zettabytes by 2020.
Open data is information that is available for anyone to use, for any purpose, at no cost. For example, the Department for Education publishes open data about the performance of schools in England, so that companies can create league tables and citizens can find the best-performing schools in their catchment area.
Governments worldwide are working to open up more of their data. Since January 2010, more than 18,500 UK government data sets have been released via the data.gov.uk web portal, creating new opportunities for organisations to build innovative digital services.
Businesses are also starting to realise the value of making their non-personal data freely available, with open innovation leading to the creation of products and services that they can benefit from….

Now a range of UK start-ups are working with the Open Data Institute (ODI) to build businesses using open data, and have already unlocked a total of £2.5 million worth of investments and contracts.
Mastodon C joined the ODI start-up programme at its inception in December 2012. Shortly after joining, the company teamed up with Ben Goldacre and Open Healthcare UK, and embarked on a project investigating the use of branded statins over the far cheaper generic versions.
The data analysis identified potential efficiency savings to the NHS of £200 million. The company is now also working with the Technology Strategy Board and Nesta to help them gain better insight into their data.
Another start-up, CarbonCulture, is a community platform designed to help people use resources more efficiently. The company uses high-tech metering to monitor carbon use in the workplace and help clients save money.
Organisations such as 10 Downing Street, Tate, Cardiff Council, the GLA and the UK Parliament are using the company’s digital tools to monitor and improve their energy consumption. CarbonCulture has also helped the Department of Energy and Climate Change reduce its gas use by 10 per cent.
Spend Network’s business is built on collecting the spend statements and tender documents published by government in the UK and Europe and then publishing this data openly so that anyone can use it. The company currently hosts over £1.2 trillion of transactions from the UK and over 1.8 million tenders from across Europe.
One of the company’s major breakthroughs was creating the first national, open spend analysis for central and local government. This was used to uncover a 45 per cent delay in the UK’s tendering process, holding up £22 billion of government funds.
Meanwhile, TransportAPI uses open data feeds from Traveline, Network Rail and Transport for London to provide nationwide timetables, departure and infrastructure information across all modes of public transport.
TransportAPI currently has 700 developers and organisations signed up to its platform, including individual taxpayers and public sector organisations like universities and local authorities. Travel portals, hyperlocal sites and business analytics are also integrating features, such as the ‘nearest transport’ widget, into their websites.
These are just four examples of how start-ups are using open data to create new digital services. The ODI this week announced seven new open data start-ups joining the programme, covering 3D printed learning materials, helping disabled communities, renewable energy markets, and smart cities….”

Digital Government: Turning the Rhetoric into Reality


Miguel Carrasco and Peter Goss at BCG Perspectives: “Getting better—but still plenty of room for improvement: that’s the current assessment by everyday users of their governments’ efforts to deliver online services. The public sector has made good progress, but most countries are not moving nearly as quickly as users would like. Many governments have made bold commitments, and a few countries have determined to go “digital by default.” Most are moving more modestly, often overwhelmed by complexity and slowed by bureaucratic skepticism over online delivery as well as by a lack of digital skills. Developing countries lead in the rate of online usage, but they mostly trail developed nations in user satisfaction.
Many citizens—accustomed to innovation in such sectors as retailing, media, and financial services—wish their governments would get on with it. Of the services that can be accessed online, many only provide information and forms, while users are looking to get help and transact business. People want to do more. Digital interaction is often faster, easier, and more efficient than going to a service center or talking on the phone, but users become frustrated when the services do not perform as expected. They know what good online service providers offer. They have seen a lot of improvement in recent years, and they want their governments to make even better use of digital’s capabilities.
Many governments are already well on the way to improving digital service delivery, but there is often a gap between rhetoric and reality. There is no shortage of government policies and strategies relating to “digital first,” “e-government,” and “gov2.0,” in addition to digital by default. But governments need more than a strategy. “Going digital” requires leadership at the highest levels, investments in skills and human capital, and cultural and behavioral change. Based on BCG’s work with numerous governments and new research into the usage of, and satisfaction with, government digital services in 12 countries, we see five steps that most governments will want to take:

1. Focus on value. Put the priority on services with the biggest gaps between their importance to constituents and constituents’ satisfaction with digital delivery. In most countries, this will mean services related to health, education, social welfare, and immigration.

2. Adopt service design thinking. Governments should walk in users’ shoes. What does someone encounter when he or she goes to a government service website—plain language or bureaucratic legalese? How easy is it for the individual to navigate to the desired information? How many steps does it take to do what he or she came to do? Governments can make services easy to access and use by, for example, requiring users to register once and establish a digital credential, which can be used in the future to access online services across government.

3. Lead users online, keep users online. Invest in seamless end-to-end capabilities. Most government-service sites need to advance from providing information to enabling users to transact their business in its entirety, without having to resort to printing out forms or visiting service centers.

4. Demonstrate visible senior-leadership commitment. Governments can signal—to both their own officials and the public—the importance and the urgency that they place on their digital initiatives by where they assign responsibility for the effort.

5. Build the capabilities and skills to execute. Governments need to develop or acquire the skills and capabilities that will enable them to develop and deliver digital services.

This report examines the state of government digital services through the lens of Internet users surveyed in Australia, Denmark, France, Indonesia, the Kingdom of Saudi Arabia, Malaysia, the Netherlands, Russia, Singapore, the United Arab Emirates (UAE), the UK, and the U.S. We investigated 37 different government services. (See Exhibit 1.)…”

Open Governments, Open Data: A New Lever for Transparency, Citizen Engagement, and Economic Growth


Joel Gurin at the SAIS Review of International Affairs: “The international open data movement is beginning to have an impact on government policy, business strategy, and economic development. Roughly sixty countries in the Open Government Partnership have committed to principles that include releasing government data as open data—that is, free public data in forms that can be readily used. Hundreds of businesses are using open data to create jobs and build economic value. Up to now, however, most of this activity has taken place in developed countries, with the United States and United Kingdom in the lead. The use of open data faces more obstacles in developing countries, but has growing promise there, as well.”

Towards a comparative science of cities: using mobile traffic records in New York, London and Hong Kong


Book chapter by S. Grauwin, S. Sobolevsky, S. Moritz, I. Gódor, C. Ratti, to be published in “Computational Approaches for Urban Environments” (Springer Ed.), October 2014: “This chapter examines the possibility to analyze and compare human activities in an urban environment based on the detection of mobile phone usage patterns. Thanks to an unprecedented collection of counter data recording the number of calls, SMS, and data transfers resolved both in time and space, we confirm the connection between temporal activity profile and land usage in three global cities: New York, London and Hong Kong. By comparing whole cities’ typical patterns, we provide insights on how cultural, technological and economical factors shape human dynamics. At a more local scale, we use clustering analysis to identify locations with similar patterns within a city. Our research reveals a universal structure of cities, with core financial centers all sharing similar activity patterns and commercial or residential areas with more city-specific patterns. These findings hint that as the economy becomes more global, common patterns emerge in business areas of different cities across the globe, while the impact of local conditions still remains recognizable on the level of people’s routine activity.”
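The clustering step described in the abstract can be sketched with a bare-bones k-means over 24-hour activity profiles. The profiles below are made-up triangular curves (offices peaking at midday, residential areas in the evening), not the study’s mobile-traffic counters, and the initialization is a deterministic simplification:

```python
# Bare-bones k-means over hourly activity profiles. Everything here is a
# toy stand-in for the chapter's method: synthetic profiles, squared
# Euclidean distance, and deterministic farthest-point initialization.

def dist2(a, b):
    """Squared Euclidean distance between two equal-length profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(profiles, k, iters=20):
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [profiles[0]]
    while len(centers) < k:
        centers.append(max(profiles,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(profiles)
    for _ in range(iters):
        # Assign each profile to its nearest center...
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in profiles]
        # ...then move each center to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(profiles, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Synthetic triangular profiles: "office" areas peak at noon,
# "residential" areas in the evening, with varying amplitudes.
office = [[max(0, amp - abs(h - 12)) for h in range(24)] for amp in (8, 10, 12)]
residential = [[max(0, amp - abs(h - 20)) for h in range(24)] for amp in (8, 10, 12)]
labels = kmeans(office + residential, k=2)
```

The three office-like profiles land in one cluster and the three residential-like profiles in the other, which is the same shape-of-the-day grouping the chapter applies to real mobile-traffic data.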

Index: The Networked Public


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the networked public and was originally published in 2014.

Global Overview

  • The proportion of global population who use the Internet in 2013: 38.8%, up 3 percentage points from 2012
  • Increase in average global broadband speeds from 2012 to 2013: 17%
  • Percentage of internet users surveyed globally who accessed the internet at least once a day in 2012: 96%
  • Hours spent online in 2012 each month across the globe: 35 billion
  • Country with the highest online population, as a percent of total population in 2012: United Kingdom (85%)
  • Country with the lowest online population, as a percent of total population in 2012: India (8%)
  • Trend with the highest growth rate in 2012: Location-based services (27%)
  • Years to reach 50 million users: telephone (75), radio (38), TV (13), internet (4)

Growth Rates in 2014

  • Rate at which the total number of Internet users is growing: less than 10% a year
  • Worldwide annual smartphone growth: 20%
  • Tablet growth: 52%
  • Mobile phone growth: 81%
  • Percentage of all mobile users who are now smartphone users: 30%
  • Amount of all web usage in 2013 accounted for by mobile: 14%
  • Amount of all web usage in 2014 accounted for by mobile: 25%
  • Percentage of money spent on mobile used for app purchases: 68%
  • Growth of Bitcoin wallets between 2013 and 2014: an eightfold increase
  • Number of listings on Airbnb in 2014: 550,000, 83% growth year on year
  • How many buyers are on Alibaba in 2014: 231 million, 44% growth year on year

Social Media

  • Number of Whatsapp messages on average sent per day: 50 billion
  • Number sent per day on Snapchat: 1.2 billion
  • How many restaurants are registered on GrubHub in 2014: 29,000
  • Amount the sale of digital songs fell in 2013: 6%
  • How much song streaming grew in 2013: 32%
  • Number of photos uploaded and shared every day on Flickr, Snapchat, Instagram, Facebook and Whatsapp combined in 2014: 1.8 billion
  • How many online adults in the U.S. use a social networking site of some kind: 73%
  • Those who use multiple social networking sites: 42%
  • Dominant social networking platform: Facebook, with 71% of online adults
  • Number of Facebook users in 2004, its founding year: 1 million
  • Number of monthly active users on Facebook in September 2013: 1.19 billion, an 18% increase year-over-year
  • How many Facebook users log in to the site daily: 63%
  • Instagram users who log into the service daily: 57%
  • Twitter users who are daily visitors: 46%
  • Number of photos uploaded to Facebook every minute: over 243,000, up 16% from 2012
  • How much of the global internet population is actively using Twitter every month: 21%
  • Number of tweets per minute: 350,000, up 250% from 2012
  • Fastest growing demographic on Twitter: 55-64 year age bracket, up 79% from 2012
  • Fastest growing demographic on Facebook: 45-54 year age bracket, up 46% from 2012
  • How many LinkedIn accounts are created every minute: 120, up 20% from 2012
  • The number of Google searches per minute in 2013: 3.5 million, up 75% from 2012
  • Percent of internet users surveyed globally that use social media in 2012: 90
  • Percent of internet users surveyed globally that use social media daily: 60
  • Time spent social networking, the most popular online activity: 22%, followed by searches (21%), reading content (20%), and emails/communication (19%)
  • The average age at which a child acquires an online presence through their parents in 10 mostly Western countries: six months
  • Number of children in those countries who have a digital footprint by age 2: 81%
  • How many new American marriages between 2005-2012 began by meeting online, according to a nationally representative study: more than one-third 
  • How many of the world’s 505 leaders are on Twitter: 3/4
  • Combined Twitter followers of 505 world leaders: 106 million
  • Combined Twitter followers of Justin Bieber, Katy Perry, and Lady Gaga: 122 million
  • How many times all Wikipedias are viewed per month: nearly 22 billion times
  • How many hits per second: more than 8,000 
  • English Wikipedia’s share of total page views: 47%
  • Number of articles in the English Wikipedia in December 2013: over 4,395,320 
  • Platform that reaches more U.S. adults between ages 18-34 than any cable network: YouTube
  • Number of unique users who visit YouTube each month: more than 1 billion
  • How many hours of video are watched on YouTube each month: over 6 billion, 50% more than 2012
  • Proportion of YouTube traffic that comes from outside the U.S.: 80%
  • Most common activity online, based on an analysis of over 10 million web users: social media
  • People on Twitter who recommend products in their tweets: 53%
  • People who trust online recommendations from people they know: 90%

Mobile and the Internet of Things

  • Number of global smartphone users in 2013: 1.5 billion
  • Number of global mobile phone users in 2013: over 5 billion
  • Percent of U.S. adults that have a cell phone in 2013: 91
  • Share of those that are smartphones: almost two-thirds
  • Mobile Facebook users in March 2013: 751 million, 54% increase since 2012
  • Global mobile traffic as a percentage of global internet traffic as of May 2013: 15%, up from 0.9% in 2009
  • How many smartphone owners ages 18–44 “keep their phone with them for all but two hours of their waking day”: 79%
  • Those who reach for their smartphone immediately upon waking up: 62%
  • Those who couldn’t recall a time their phone wasn’t within reach or in the same room: 1 in 4
  • Facebook users who access the service via a mobile device: 73.44%
  • Those who are “mobile only”: 189 million
  • Amount of YouTube’s global watch time that is on mobile devices: almost 40%
  • Number of objects connected globally in the “internet of things” in 2012: 8.7 billion
  • Number of connected objects so far in 2013: over 10 billion
  • Years from tablet introduction for tablets to surpass desktop PC and notebook shipments: less than 3 (over 55 million global units shipped in 2013, vs. 45 million notebooks and 35 million desktop PCs)
  • Number of wearable devices estimated to have been shipped worldwide in 2011: 14 million
  • Projected number of wearable devices in 2016: between 39 million and 171 million
  • How much of the wearable technology market is in the healthcare and medical sector in 2012: 35.1%
  • How many devices in the wearable tech market are fitness or activity trackers: 61%
  • The value of the global wearable technology market in 2012: $750 million
  • The forecasted value of the market in 2018: $5.8 billion
  • How many Americans are aware of wearable tech devices in 2013: 52%
  • Devices with the highest level of awareness: wearable fitness trackers
  • Level of awareness of wearable fitness trackers among American consumers: 1 in 3
  • Value of digital fitness category in 2013: $330 million
  • How many American consumers surveyed are aware of smart glasses: 29%
  • Smart watch awareness amongst those surveyed: 36%

Access

  • How much of the developed world has mobile broadband subscriptions in 2013: 3/4
  • How much of the developing world has broadband subscription in 2013: 1/5
  • Percent of U.S. adults that had a laptop in 2012: 57
  • How many American adults did not use the internet at home, at work, or via mobile device in 2013: one in five
  • Amount President Obama initiated spending in 2009 in an effort to expand access: $7 billion
  • Number of Americans potentially shut off from jobs, government services, health care and education, among other opportunities due to digital inequality: 60 million
  • American adults with a high-speed broadband connection at home as of May 2013: 7 out of 10
  • Americans aged 18-29 vs. 65+ with a high-speed broadband connection at home as of May 2013: 80% vs. 43%
  • American adults with college education (or more) vs. adults with no high school diploma that have a high-speed broadband connection at home as of May 2013: 89% vs. 37%
  • Percent of U.S. adults with college education (or more) that use the internet in 2011: 94
  • Those with no high school diploma that used the internet in 2011: 43
  • Percent of white American households that used the internet in 2013: 67
  • Black American households that used the internet in 2013: 57
  • States with lowest internet use rates in 2013: Mississippi, Alabama and Arkansas
  • How many American households have only wireless telephones as of the second half of 2012: nearly two in five
  • States with the highest prevalence of wireless-only adults according to predictive modeling estimates: Idaho (52.3%), Mississippi (49.4%), Arkansas (49%)
  • Those with the lowest prevalence of wireless-only adults: New Jersey (19.4%), Connecticut (20.6%), Delaware (23.3%) and New York (23.5%)

Transparency, legitimacy and trust


John Kamensky at Federal Times: “The Open Government movement has captured the imagination of many around the world as a way of increasing transparency, participation, and accountability. In the US, many of the federal, state, and local Open Government initiatives have been demonstrated to achieve positive results for citizens here and abroad. In fact, the White House’s science advisors released a refreshed Open Government plan in early June.
However, a recent study in Sweden says the benefits of transparency may vary, and may have little impact on citizens’ perception of legitimacy and trust in government. This research suggests important lessons on how public managers should approach the design of transparency strategies, and how they work in various conditions.
Jenny de Fine Licht, a scholar at the University of Gothenburg in Sweden, offers a more nuanced view of the influence of transparency in political decision making on public legitimacy and trust, in a paper that appears in the current issue of “Public Administration Review.” Her research challenges the assumption of many in the Open Government movement that greater transparency necessarily leads to greater citizen trust in government.
Her conclusion, based on an experiment involving over 1,000 participants, was that the type and degree of transparency “has different effects in different policy areas.” She found that “transparency is less effective in policy decisions that involve trade-offs related to questions of human life and death or well-being.”

The background

Licht says there are some policy decisions that involve what are called “taboo tradeoffs.” A taboo tradeoff, for example, would be making budget tradeoffs in policy areas such as health care and environmental quality, where human life or well-being is at stake. In cases where more money is an implicit solution, the author notes, “increased transparency in these policy areas might provoke feeling of taboo, and, accordingly, decreased perceived legitimacy.”
Other scholars, such as Harvard’s Jane Mansbridge, contend that “full transparency may not always be the best practice in policy making.” Full transparency in decision-making processes would include, for example, open appropriation committee meetings. Instead, she recommends “transparency in rationale – in procedures, information, reasons, and the facts on which the reasons are based.” That is, provide a full explanation after the fact.
Licht tested the hypothesis that full transparency of the decision-making process vs. partial transparency via providing after-the-fact rationales for decisions may create different results, depending on the policy arena involved…
Open Government advocates have generally assumed that full and open transparency is always better. Licht’s conclusion is that “greater transparency” does not necessarily increase citizen legitimacy and trust. Instead, a strategy of encouraging a high degree of transparency requires a more nuanced application. While she cautions about generalizing from her experiment, the potential implications for government decision-makers could be significant.
To date, many of the various Open Government initiatives across the country have assumed a “one size fits all” approach, across the board. Licht’s conclusions, however, help explain why the results of various initiatives have been divergent in terms of citizen acceptance of open decision processes.
Her experiment seems to suggest that citizen engagement is more likely to create a greater citizen sense of legitimacy and trust in areas involving “routine” decisions, such as parks, recreation, and library services. But that “taboo” decisions in policy areas involving tradeoffs of human life, safety, and well-being may not necessarily result in greater trust as a result of the use of full and open transparency of decision-making processes.
While she says that transparency – whether full or partial – is always better than no transparency, her experiment at least shows that policy makers will, at a minimum, know that the end result may not be greater legitimacy and trust. In any case, her research should engender a more nuanced conversation among Open Government advocates at all levels of government. In order to increase citizens’ perceptions of legitimacy and trust in government, it will take more than just advocating for Open Data!”