Paper by Tuhkuri, Joonas: “There are over 3 billion searches globally on Google every day. This report examines whether Google search queries can be used to predict the present and the near future unemployment rate in Finland. Predicting the present and the near future is of interest, as the official records of the state of the economy are published with a delay. To assess the information contained in Google search queries, the report compares a simple predictive model of unemployment to a model that contains a variable, Google Index, formed from Google data. In addition, cross-correlation analysis and Granger-causality tests are performed. Compared to a simple benchmark, Google search queries improve the prediction of the present by 10 % measured by mean absolute error. Moreover, predictions using search terms perform 39 % better over the benchmark for near future unemployment 3 months ahead. Google search queries also tend to improve the prediction accuracy around turning points. The results suggest that Google searches contain useful information of the present and the near future unemployment rate in Finland.”
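The model comparison the abstract describes can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the paper's data or specification: the toy unemployment and Google Index series, the random-walk benchmark, and the 0.5 weight on the Google signal are all made up to show the mechanics of a mean-absolute-error comparison.

```python
# Hypothetical sketch: a naive benchmark forecast vs. one augmented
# with a "Google Index" term, compared by mean absolute error (MAE).
# All numbers are illustrative, not the paper's data.

def mae(actual, predicted):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy monthly unemployment rates (%) and a toy Google Index series.
unemployment = [8.1, 8.3, 8.6, 8.9, 9.0, 8.8]
google_index = [0.2, 0.4, 0.3, 0.1, -0.2, -0.3]

# Benchmark: predict this month's rate with last month's (random walk).
benchmark = unemployment[:-1]

# Augmented: nudge the benchmark by a scaled Google Index signal
# (a stand-in for the paper's regression with a Google variable).
beta = 0.5
augmented = [u + beta * g for u, g in zip(unemployment[:-1], google_index[:-1])]

actual = unemployment[1:]
print(mae(actual, benchmark))   # benchmark error
print(mae(actual, augmented))   # augmented-model error
```

On this toy series the augmented model happens to beat the benchmark; the paper's actual result rests on real search-query data and a proper regression.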
EU-funded tool to help our brain deal with big data
EU Press Release: “Every single minute, the world generates 1.7 million billion bytes of data, equal to 360,000 DVDs. How can our brain deal with increasingly big and complex datasets? EU researchers are developing an interactive system which not only presents data the way you like it, but also changes the presentation constantly in order to prevent brain overload. The project could enable students to study more efficiently or journalists to cross check sources more quickly. Several museums in Germany, the Netherlands, the UK and the United States have already shown interest in the new technology.
Data is everywhere: it can either be created by people or generated by machines, such as sensors gathering climate information, satellite imagery, digital pictures and videos, purchase transaction records, GPS signals, etc. This information is a real gold mine. But it is also challenging: today’s datasets are so huge and complex to process that they require new ideas, tools and infrastructures.
Researchers within CEEDs (@ceedsproject) are transposing big data into an interactive environment to allow the human mind to generate new ideas more efficiently. They have built what they are calling an eXperience Induction Machine (XIM) that uses virtual reality to enable a user to ‘step inside’ large datasets. This immersive multi-modal environment – located at Pompeu Fabra University in Barcelona – also contains a panoply of sensors which allows the system to present the information in the right way to the user, constantly tailored according to their reactions as they examine the data. These reactions – such as gestures, eye movements or heart rate – are monitored by the system and used to adapt the way in which the data is presented.
Jonathan Freeman, Professor of Psychology at Goldsmiths, University of London and coordinator of CEEDs, explains: “The system acknowledges when participants are getting fatigued or overloaded with information. And it adapts accordingly. It either simplifies the visualisations so as to reduce the cognitive load, thus keeping the user less stressed and more able to focus. Or it will guide the person to areas of the data representation that are not as heavy in information.”
Neuroscientists were the first group the CEEDs researchers tried their machine on, in an application called BrainX3. It took the typically huge datasets generated in this scientific discipline and animated them with visual and sound displays. By providing subliminal clues, such as flashing arrows, the machine guided the neuroscientists to areas of the data that were potentially more interesting to each person. First pilots have already demonstrated the power of this approach in gaining new insights into the organisation of the brain….”
The Emergence of Government Innovation Teams
Hollie Russon Gilman at TechTank: “A new global currency is emerging. Governments understand that people at home and abroad evaluate them based on how they use technology and innovative approaches in their service delivery and citizen engagement. This raises opportunities, and critical questions about the role of innovation in 21st century governance.
Bloomberg Philanthropies and Nesta, the UK’s innovation foundation, recently released a global report highlighting 20 government innovation teams. Importantly, the study included teams established and funded by all levels of government (city, regional and national) that aim to find creative solutions to seemingly intractable problems. The report covers 20 teams across six continents and identifies some basic principles and commonalities that are instructive for all types of innovators, inside and outside of government.
Using Government to Locally Engage
One of the challenges of representative democracy is that elected officials and government staff spend time in bureaucracies isolated from the very people they aim to serve. Perhaps there can be different models. For example, Seoul’s Innovation Bureau is engaging citizens to re-design and re-imagine public services. Seoul is dedicated to becoming a Sharing City, including Tool Kit Centers where citizens can borrow machinery they would rarely use but that would benefit the whole community. This approach puts citizens at the center of their communities and leverages government to work for the people…
As I’ve outlined in an earlier TechTank post, there are institutional constraints on governments trying the unknown. There are potential electoral costs, greater disillusionment, and gaps in vital service delivery. Yet despite all of these barriers there are a variety of promising tools. For example, Finland has Sitra, an innovation fund whose mission is to foster experimentation to transform a diverse set of policy issues, including sustainable energy and healthcare. Sitra invests both in practical research and experiments that advance public-sector issues and in early-stage companies.
We need a deeper understanding of the opportunities, and challenges, of innovation in government. Luckily there are many researchers, think-tanks, and organizations beginning analysis. For example, Professor and Associate Dean Anita McGahan, of the Rotman School of Management at the University of Toronto, calls for a more strategic approach toward understanding the use of innovation, including big data, in the public sector…”
In Tests, Scientists Try to Change Behaviors
Wall Street Journal: “Behavioral scientists look for environmental ‘nudges’ to influence how people act. Pelle Guldborg Hansen, a behavioral scientist, is trying to figure out how to board passengers on a plane with less fuss.
The goal is to make plane-boarding more efficient by coaxing passengers to want to be more orderly, not by telling them they must. It is one of many projects in which Dr. Hansen seeks to encourage people, when faced with options, to make better choices. Among these: prompting people to properly dispose of cigarette butts outside of bars and clubs and inducing hospital workers to use hand sanitizers.
Dr. Hansen, 37 years old, is director of the Initiative for Science, Society & Policy, a collaboration of the University of Southern Denmark and Roskilde University. The concept behind his work is known commonly as a nudge, dubbed such because of the popular 2008 book of the same name by U.S. academics Richard Thaler and Cass Sunstein that examined how people make decisions.
At the Copenhagen airport, Dr. Hansen recently deployed a team of three young researchers to mill about a gate in terminal B. The trio was dressed casually in jeans and wore backpacks. They blended in with the passengers, except for the badges they wore displaying airport credentials, and the clipboards and pens they carried to record how the boarding process unfolds.
Thirty-five minutes before a flight departed, the team got into position. Andreas Rathmann Jensen stood in one corner, then moved to another, so he could survey the entire gate area. He mapped where people were sitting and where they placed their bags. This behavior can vary depending, for example, on whether people are flying alone, with a partner or in a group.
Johannes Schuldt-Jensen circulated among the rows and counted how many bags were blocking seats and how many seats were empty as boarding time approached. He wore headphones, though he wasn’t listening to music, because people seem less suspicious of behavior when a person has headphones on, he says. Another researcher, Kasper Hulgaard, counted how many people were standing versus sitting.
The researchers are mapping out gate-seating patterns for a total of about 500 flights. Some early observations: The more people who are standing, the more chaotic boarding tends to be. Copenhagen airport seating areas are designed for groups, even though most travelers come solo or in pairs. Solo flyers like to sit in a corner and put their bag on an adjacent seat. Pairs of travelers tend to perch anywhere as long as they can sit side-by-side….”
Sharing Data Is a Form of Corporate Philanthropy
Matt Stempeck in HBR Blog: “Ever since the International Charter on Space and Major Disasters was signed in 1999, satellite companies like DMC International Imaging have had a clear protocol with which to provide valuable imagery to public actors in times of crisis. In a single week this February, DMCii tasked its fleet of satellites on flooding in the United Kingdom, fires in India, floods in Zimbabwe, and snow in South Korea. Official crisis response departments and relevant UN departments can request on-demand access to the visuals captured by these “eyes in the sky” to better assess damage and coordinate relief efforts.
Back on Earth, companies create, collect, and mine data in their day-to-day business. This data has quickly emerged as one of this century’s most vital assets. Public sector and social good organizations may not have access to the same amount, quality, or frequency of data. This imbalance has inspired a new category of corporate giving foreshadowed by the 1999 Space Charter: data philanthropy.
The satellite imagery example is an area of obvious societal value, but data philanthropy holds even stronger potential closer to home, where a wide range of private companies could give back in meaningful ways by contributing data to public actors. Consider two promising contexts for data philanthropy: responsive cities and academic research.
The centralized institutions of the 20th century allowed for the most sophisticated economic and urban planning to date. But in recent decades, the information revolution has helped the private sector speed ahead in data aggregation, analysis, and applications. It’s well known that there’s enormous value in real-time usage of data in the private sector, but there are similarly huge gains to be won in the application of real-time data to mitigate common challenges.
What if sharing economy companies shared their real-time housing, transit, and economic data with city governments or public interest groups? For example, Uber maintains a “God’s Eye view” of every driver on the road in a city.
Imagine combining this single data feed with an entire portfolio of real-time information. An early leader in this space is the City of Chicago’s urban data dashboard, WindyGrid. The dashboard aggregates an ever-growing variety of public datasets to allow for more intelligent urban management.
Over time, we could design responsive cities that react to this data. A responsive city is one where services, infrastructure, and even policies can flexibly respond to the rhythms of its denizens in real-time. Private sector data contributions could greatly accelerate these nascent efforts.
Data philanthropy could similarly benefit academia. Access to data remains an unfortunate barrier to entry for many researchers. The result is that only researchers with access to certain data, such as full-volume social media streams, can analyze and produce knowledge from this compelling information. Twitter, for example, sells access to a range of real-time APIs to marketing platforms, but the price point often exceeds researchers’ budgets. To accelerate the pursuit of knowledge, Twitter has piloted a program called Data Grants offering access to segments of their real-time global trove to select groups of researchers. With this program, academics and other researchers can apply to receive access to relevant bulk data downloads, such as a period of time before and after an election, or a certain geographic area.
Humanitarian response, urban planning, and academia are just three sectors within which private data can be donated to improve the public condition. There are many more possible applications, but few examples to date. For companies looking to expand their corporate social responsibility initiatives, sharing data should be part of the conversation…
Companies considering data philanthropy can take the following steps:
- Inventory the information your company produces, collects, and analyzes. Consider which data would be easy to share and which data will require long-term effort.
- Think about who could benefit from this information. Who in your community doesn’t have access to it?
- Who could be harmed by the release of this data? If the datasets are about people, have they consented to their release? (e.g. don’t pull a Facebook emotional-manipulation experiment).
- Begin conversations with relevant public agencies and nonprofit partners to get a sense of the sort of information they might find valuable and their capacity to work with the formats you might eventually make available.
- If you expect an onslaught of interest, an application process can help qualify partnership opportunities to maximize positive impact relative to time invested in the program.
- Consider how you’ll handle distribution of the data to partners. Even if you don’t have the resources to set up an API, regular releases of bulk data could still provide enormous value to organizations used to relying on less-frequently updated government indices.
- Consider your needs regarding privacy and anonymization. Strip the data of anything remotely resembling personally identifiable information (here are some guidelines).
- If you’re making data available to researchers, plan to allow researchers to publish their results without obstruction. You might also require them to share the findings with the world under Open Access terms….”
Chief Executive of Nesta on the Future of Government Innovation
Interview between Rahim Kanani and Geoff Mulgan, CEO of NESTA and member of the MacArthur Research Network on Opening Governance: “Our aspiration is to become a global center of expertise on all kinds of innovation, from how to back creative business start-ups and how to shape innovation tools such as challenge prizes, to helping governments act as catalysts for new solutions,” explained Geoff Mulgan, chief executive of Nesta, the UK’s innovation foundation. In an interview with Mulgan, we discussed their new report, published in partnership with Bloomberg Philanthropies, which highlights 20 of the world’s top innovation teams in government. Mulgan and I also discussed the founding and evolution of Nesta over the past few years, and leadership lessons from his time inside and outside government.
Rahim Kanani: When we talk about ‘innovations in government’, isn’t that an oxymoron?
Geoff Mulgan: Governments have always innovated. The Internet and World Wide Web both originated in public organizations, and governments are constantly developing new ideas, from public health systems to carbon trading schemes, online tax filing to high speed rail networks. But they’re much less systematic at innovation than the best in business and science. There are very few job roles, especially at senior levels, few budgets, and few teams or units. So although there are plenty of creative individuals in the public sector, they succeed despite, not because of, the systems around them. Risk-taking is punished, not rewarded. Over the last century, by contrast, the best businesses have learned how to run R&D departments, product development teams, open innovation processes and reasonably sophisticated ways of tracking investments and returns.
Kanani: This new report, published in partnership with Bloomberg Philanthropies, highlights 20 of the world’s most effective innovation teams in government working to address a range of issues, from reducing murder rates to promoting economic growth. Before I get to the results, how did this project come about, and why is it so important?
Mulgan: If you fail to generate new ideas, test them and scale the ones that work, it’s inevitable that productivity will stagnate and governments will fail to keep up with public expectations, particularly when waves of new technology—from smart phones and the cloud to big data—are opening up dramatic new possibilities. Mayor Bloomberg has been a leading advocate for innovation in the public sector, and in New York he showed the virtues of energetic experiment, combined with rigorous measurement of results. In the UK, organizations like Nesta have approached innovation in a very similar way, so it seemed timely to collaborate on a study of the state of the field, particularly since we were regularly being approached by governments wanting to set up new teams and asking for guidance.
Kanani: Where are some of the most effective innovation teams working on these issues, and how did you find them?
Mulgan: In our own work at Nesta, we’ve regularly sought out the best innovation teams that we could learn from, and this study made it possible to do that more systematically, focusing in particular on the teams within national and city governments. They vary greatly, but all the best ones are achieving impact with relatively slim resources. Some are based in central governments, like Mindlab in Denmark, which has pioneered the use of design methods to reshape government services, from small business licensing to welfare. SITRA in Finland has operated for decades as a public technology agency, and more recently has switched its attention to innovation in public services, for example providing mobile tools to help patients manage their own healthcare. In the city of Seoul, the Mayor set up an innovation team to accelerate the adoption of ‘sharing’ tools, so that people could share things like cars, freeing money for other things. In South Australia the government set up an innovation agency that has been pioneering radical ways of helping troubled families, mobilizing families to help other families.
Kanani: What surprised you the most about the outcomes of this research?
Mulgan: Perhaps the biggest surprise has been the speed with which this idea is spreading. Since we started the research, we’ve come across new teams being created in dozens of countries, from Canada and New Zealand to Cambodia and Chile. China has set up a mobile technology lab for city governments. Mexico City and many others have set up labs focused on creative uses of open data. A batch of cities across the US supported by Bloomberg Philanthropies—from Memphis and New Orleans to Boston and Philadelphia—are now showing impressive results and persuading others to copy them.
Selected Readings on Sentiment Analysis
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of sentiment analysis was originally published in 2014.
Sentiment Analysis is a field of Computer Science that uses techniques from natural language processing, computational linguistics, and machine learning to predict subjective meaning from text. The term opinion mining is often used interchangeably with Sentiment Analysis, although it is technically a subfield focusing on the extraction of opinions (the umbrella under which sentiment, evaluation, appraisal, attitude, and emotion all lie).
The rise of Web 2.0 and increased information flow has led to an increase in interest towards Sentiment Analysis — especially as applied to social networks and media. Events causing large spikes in media — such as the 2012 Presidential Election Debates — are especially ripe for analysis. Such analyses raise a variety of implications for the future of crowd participation, elections, and governance.
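Before turning to the readings, a minimal lexicon-based scorer illustrates the basic idea behind Sentiment Analysis. The word lists and scoring rule here are illustrative assumptions, and this approach is far simpler than the machine-learning techniques surveyed in the papers below.

```python
# A minimal lexicon-based sentiment scorer: count positive and
# negative words and normalize. Word lists are illustrative only.

POSITIVE = {"good", "great", "win", "support", "strong"}
NEGATIVE = {"bad", "weak", "lose", "oppose", "poor"}

def sentiment(text):
    """Return a score in [-1, 1]: (positive - negative word count) / total words."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("A strong debate performance, great answers"))  # positive score
print(sentiment("Weak arguments and poor delivery"))            # negative score
```

Real systems replace the hand-built lexicon with learned classifiers that handle negation, context, and domain shift, which is precisely where the research below comes in.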
Selected Reading List (in alphabetical order)
- Choi, Tan, Lee, Danescu-Niculescu-Mizil, Spindel — Hedge Detection as a Lens on Framing in the GMO Debates: A Position Paper — a position paper suggesting hedge detection as a way to study whether adopting a “scientific tone” signals a position in the debate on GMOs.
- Christina Michael, Francesca Toni, and Krysia Broda — Sentiment Analysis for Debates — a paper looking at several techniques and applications of Sentiment Analysis on online debates.
- Akiko Murakami, Rudy Raymond — Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions — a paper seeking to identify the general positions of users in online debates by exploiting local information in their remarks within the debate, and using Sentiment Analysis on the text.
- Bo Pang, Lillian Lee — Opinion Mining & Sentiment Analysis — a general survey on Sentiment Analysis and approaches, with examples of applications.
- Ranade, Gupta, Varma, Mamidi — Online debate summarization using topic directed sentiment analysis — a paper aiming to summarize online debates by extracting highly topic relevant and sentiment rich sentences.
- Jodi Schneider — Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities — a paper describing a new possible domain for argumentation mining: debates in open online collaboration communities.
Annotated Selected Reading List (in alphabetical order)
Choi, Eunsol et al. “Hedge detection as a lens on framing in the GMO debates: a position paper.” Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics 13 Jul. 2012: 70-79. http://bit.ly/1wweftP
- Understanding the ways in which participants in public discussions frame their arguments is important for understanding how public opinion is formed. This paper adopts the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, the authors propose the following question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a more “scientific” tone?
- Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. The paper proposes a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provides corpora to facilitate this study. Some of the preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature.
Michael, Christina, Francesca Toni, and Krysia Broda. “Sentiment analysis for debates.” (Unpublished MSc thesis). Department of Computing, Imperial College London (2013). http://bit.ly/Wi86Xv
- This project aims to expand on existing solutions used for automatic sentiment analysis on text in order to capture support/opposition and agreement/disagreement in debates. In addition, it looks at visualizing the classification results for enhancing the ease of understanding the debates and for showing underlying trends. Finally, it evaluates proposed techniques on an existing debate system for social networking.
Murakami, Akiko, and Rudy Raymond. “Support or oppose?: classifying positions in online debates from reply activities and opinion expressions.” Proceedings of the 23rd International Conference on Computational Linguistics: Posters 23 Aug. 2010: 869-875. https://bit.ly/2Eicfnm
- In this paper, the authors propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users’ general positions difficult.
- A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, it is shown that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.
Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135. http://bit.ly/UaCBwD
- This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Its focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. It includes material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
Ranade, Sarvesh et al. “Online debate summarization using topic directed sentiment analysis.” Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining 11 Aug. 2013: 7. http://bit.ly/1nbKtLn
- Social networking sites provide users a virtual community interaction platform to share their thoughts, life experiences and opinions. Online debate forum is one such platform where people can take a stance and argue in support or opposition of debate topics. An important feature of such forums is that they are dynamic and grow rapidly. In such situations, effective opinion summarization approaches are needed so that readers need not go through the entire debate.
- This paper aims to summarize online debates by extracting highly topic-relevant and sentiment-rich sentences. The proposed approach takes into account topic-relevant, document-relevant and sentiment-based features to capture topic-opinionated sentences. ROUGE (Recall-Oriented Understudy for Gisting Evaluation, a set of metrics and a software package for comparing an automatically produced summary or translation against human-produced ones) scores are used to evaluate the system. The system significantly outperforms several baseline systems and shows improvement over the state-of-the-art opinion summarization system. The results verify that topic-directed sentiment features are the most important for generating effective debate summaries.
Schneider, Jodi. “Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities.” http://bit.ly/1mi7ztx
- Argumentation mining, a relatively new area of discourse analysis, involves automatically identifying and structuring arguments. Following a basic introduction to argumentation, the authors describe a new possible domain for argumentation mining: debates in open online collaboration communities.
- Based on our experience with manual annotation of arguments in debates, the authors propose argumentation mining as the basis for three kinds of support tools, for authoring more persuasive arguments, finding weaknesses in others’ arguments, and summarizing a debate’s overall conclusions.
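The ROUGE scores used to evaluate the debate-summarization system in the Ranade et al. entry above are, at heart, n-gram overlap measures. A toy ROUGE-1 recall can be sketched as follows; this is an illustrative simplification, not the official ROUGE package, and the example sentences are made up.

```python
# Toy ROUGE-1 recall: the fraction of the reference summary's unigrams
# that also appear in the candidate summary (illustrative only).
from collections import Counter

def rouge1_recall(candidate, reference):
    """Unigram recall: overlapping word count / word count of the reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    return overlap / sum(ref.values())

ref = "the summary covers the main debate points"
cand = "the summary covers debate points well"
print(rouge1_recall(cand, ref))  # shares 5 of the reference's 7 words
```

The real ROUGE package adds higher-order n-grams, longest-common-subsequence variants, and stemming, but the recall-oriented overlap idea is the same.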
GitHub: A Swiss Army knife for open government
FCW: “Today, more than 300 government agencies are using the platform for public and private development. Cities (Chicago, Philadelphia, San Francisco), states (New York, Washington, Utah) and countries (United Kingdom, Australia) are sharing code and paving a new road to civic collaboration….
Civic-focused organizations — such as the OpenGov Foundation, the Sunlight Foundation and the Open Knowledge Foundation — are also actively involved with original projects on GitHub. Those projects include the OpenGov Foundation’s Madison document-editing tool touted by the likes of Rep. Darrell Issa (R-Calif.) and the Open Knowledge Foundation’s CKAN, which powers hundreds of government data platforms around the world.
According to GovCode, an aggregator of public government open-source projects hosted on GitHub, there have been hundreds of individual contributors and nearly 90,000 code commits (recorded sets of changes made permanent in a project’s history).
The nitty-gritty
Getting started on GitHub is similar to the process for other social networking platforms. Users create individual accounts and can set up “organizations” for agencies or cities. They can then create repositories (or repos) to collaborate on projects through an individual or organizational account. Other developers or organizations can download repo code for reuse or repurpose it in their own repositories (called forking), and make it available to others to do the same.
Collaborative aspects of GitHub include pull requests that allow developers to submit and accept updates to repos that build on and grow an open-source project. There are wikis, gists (code snippet sharing) and issue tracking for bugs, feature requests, or general questions and answers.
GitHub provides free code hosting for all public repos. Upgrade offerings include personal and organizational plans based on the number of private repos. For organizations that want a self-hosted GitHub development environment, GitHub Enterprise, used by the likes of CFPB, allows for self-hosted, private repos behind a firewall.
GitHub’s core user interface can be unwelcoming or even intimidating to the nondeveloper, but GitHub’s Pages package offers Web-hosting features that include domain mapping and lightweight content-management tools such as the static site generator Jekyll and the text editor Atom.
Notable government projects that use Pages are the White House’s Project Open Data, 18F’s /Developer Program, CFPB’s Open Tech website and New York’s Open Data Handbook. Indeed, Wired recently commented that the White House’s open-data GitHub efforts “could help fix government.”…
See also: GitHub for Government (GovLab)
Meet the UK start-ups changing the world with open data
Now a range of UK start-ups are working with the ODI to build businesses using open data, and have already unlocked a total of £2.5 million worth of investments and contracts.
Mastodon C joined the ODI start-up programme at its inception in December 2012. Shortly after joining, the company teamed up with Ben Goldacre and Open Healthcare UK, and embarked on a project investigating the use of branded statins over the far cheaper generic versions.
The data analysis identified potential efficiency savings to the NHS of £200 million. The company is now also working with the Technology Strategy Board and Nesta to help them gain better insight into their data.
Another start-up, CarbonCulture, is a community platform designed to help people use resources more efficiently. The company uses high-tech metering to monitor carbon use in the workplace and help clients save money.
Organisations such as 10 Downing Street, Tate, Cardiff Council, the GLA and the UK Parliament are using the company’s digital tools to monitor and improve their energy consumption. CarbonCulture has also helped the Department of Energy and Climate Change reduce its gas use by 10 per cent.
Spend Network’s business is built on collecting the spend statements and tender documents published by government in the UK and Europe and then publishing this data openly so that anyone can use it. The company currently hosts over £1.2 trillion of transactions from the UK and over 1.8 million tenders from across Europe.
One of the company’s major breakthroughs was creating the first national, open spend analysis for central and local government. This was used to uncover a 45 per cent delay in the UK’s tendering process, holding up the flow of £22 billion of government funds into the economy.
Meanwhile, TransportAPI uses open data feeds from Traveline, Network Rail and Transport for London to provide nationwide timetables, departure and infrastructure information across all modes of public transport.
TransportAPI currently has 700 developers and organisations signed up to its platform, including individual taxpayers and public sector organisations such as universities and local authorities. Travel portals, hyperlocal sites and business analytics services are also integrating features, such as the ‘nearest transport’ widget, into their websites.
These are just four examples of how start-ups are using open data to create new digital services. The ODI this week announced seven new open data start-ups joining the programme, covering 3D printed learning materials, helping disabled communities, renewable energy markets, and smart cities….”
Digital Government: Turning the Rhetoric into Reality
BCG Perspectives: “Getting better—but still plenty of room for improvement: that’s the current assessment by everyday users of their governments’ efforts to deliver online services. The public sector has made good progress, but most countries are not moving nearly as quickly as users would like. Many governments have made bold commitments, and a few countries have determined to go “digital by default.” Most are moving more modestly, often overwhelmed by complexity and slowed by bureaucratic skepticism over online delivery as well as by a lack of digital skills. Developing countries lead in the rate of online usage, but they mostly trail developed nations in user satisfaction.
Many citizens—accustomed to innovation in such sectors as retailing, media, and financial services—wish their governments would get on with it. Of the services that can be accessed online, many provide only information and forms, while users are looking to get help and transact business. People want to do more. Digital interaction is often faster, easier, and more efficient than going to a service center or talking on the phone, but users become frustrated when the services do not perform as expected. They know what good online service providers offer. They have seen a lot of improvement in recent years, and they want their governments to make even better use of digital’s capabilities.
Many governments are already well on the way to improving digital service delivery, but there is often a gap between rhetoric and reality. There is no shortage of government policies and strategies relating to “digital first,” “e-government,” and “gov2.0,” in addition to digital by default. But governments need more than a strategy. “Going digital” requires leadership at the highest levels, investments in skills and human capital, and cultural and behavioral change. Based on BCG’s work with numerous governments and new research into the usage of, and satisfaction with, government digital services in 12 countries, we see five steps that most governments will want to take:
1. Focus on value. Put the priority on services with the biggest gaps between their importance to constituents and constituents’ satisfaction with digital delivery. In most countries, this will mean services related to health, education, social welfare, and immigration.
2. Adopt service design thinking. Governments should walk in users’ shoes. What does someone encounter when he or she goes to a government service website—plain language or bureaucratic legalese? How easy is it for the individual to navigate to the desired information? How many steps does it take to do what he or she came to do? Governments can make services easy to access and use by, for example, requiring users to register once and establish a digital credential, which can be used in the future to access online services across government.
3. Lead users online, keep users online. Invest in seamless end-to-end capabilities. Most government-service sites need to advance from providing information to enabling users to transact their business in its entirety, without having to resort to printing out forms or visiting service centers.
4. Demonstrate visible senior-leadership commitment. Governments can signal—to both their own officials and the public—the importance and the urgency that they place on their digital initiatives by where they assign responsibility for the effort.
5. Build the capabilities and skills to execute. Governments need to develop or acquire the skills and capabilities that will enable them to develop and deliver digital services.
This report examines the state of government digital services through the lens of Internet users surveyed in Australia, Denmark, France, Indonesia, the Kingdom of Saudi Arabia, Malaysia, the Netherlands, Russia, Singapore, the United Arab Emirates (UAE), the UK, and the U.S. We investigated 37 different government services. (See Exhibit 1.)…”