Hype Cycle for Digital Government Technology, 2016


Gartner: “This Hype Cycle helps government agencies eager to embrace digital transformation by highlighting critical technologies that can be adopted quickly.

What You Need to Know

Austerity continues to impact governments, and the requirement to transform is substantial. Years of cuts have left IT departments struggling to operate bimodally: focused on maintaining operations, but not on delivering innovation. Effective and efficient mission delivery necessitates more technology, not less, so senior organizational leaders look outside the IT department to source innovation and additional capacity. Digital government demands improvements in the value chain, with end-to-end frictionless transactions as the outcome of technical and process improvement. CIOs’ focus must move from infrastructure and its costs toward quickly delivering true mission outcome improvements.

This Hype Cycle highlights technologies that government CIOs should be implementing or planning for to ensure the organization obtains the necessary, impactful capabilities to deliver the digital government agenda quickly. To maintain their own relevance, government CIOs must recognize their organizations’ need for innovation and be mindful of the top trends and technologies disrupting their organizations.

The Hype Cycle

This Hype Cycle addresses all geographies and tiers of government tackling the opportunities presented by digital disruptions. The technologies herein support digital government and the global trends identified in “The Top 10 Strategic Technology Trends for Government in 2016.” These technologies align to one or more of the trends and offer a mix of benefits, from increased effectiveness and efficiency, to improved security and enhanced customer interaction. Our intention is to draw attention to those technologies that map directly to these trends, and their inclusion is explained in the section “Off the Hype Cycle.”

Slower economic growth, higher debt, rising citizen expectations and an aging population demand innovative delivery of citizen-facing services, so the ROI to support investment in these technologies must be measured in more-effective outcomes. Agencies crave new solutions and capabilities to help ease the pressure on them and, if these are not delivered by their incumbent IT provider, will increasingly source them directly.

The technologies selected provide practical and pragmatic choices for those CIOs who need to deliver strategic solutions to enhance organizational capabilities. Use Cloud Office and Enterprise File Synchronization and Sharing (EFSS) to deliver a better digital workplace experience, or make better use of open data with data quality tools, API marketplaces, and geospatial and location intelligence tools. We also offer a glimpse of the future to provide a better understanding of how smart machines, such as Smart Robots or Cognitive Expert Advisors, will impact your agency. In either case, using the technologies highlighted here, together with Strategic Technology Maps to assess priorities and timing, will help the business know when functionality will become available. This knowledge may encourage the use of commercially available capabilities and forestall customers’ desire to self-source solutions.

Figure 1. Hype Cycle for Digital Government, 2016

Source: Gartner (July 2016)

The Priority Matrix

The Priority Matrix shows these technologies and the time frames by which they are expected to mature and deliver benefits. Transformational and high benefits begin accruing immediately and continue to deliver across the next decade. It is no surprise that immediate benefits accrue to tactical investment technologies, such as Social Media Engagement Applications. They allow government agencies to go beyond monitoring citizen satisfaction by giving them a level of analysis and allowing them to engage in an informed two-way debate. Thus, multichannel citizen engagement can become a measured reality, taking government out to where citizens communicate. This is supported by customer engagement hubs that allow personalized, contextual engagement with customers across all interaction channels, regardless of medium.

Smart Machines and the Internet of Things (IoT) are also featured this year, with real examples of technologies to help smart cities progress, including smart transportation solutions and an IoT platform that can help government agencies deal with the plethora of data sources that will undoubtedly emerge. It must be noted that these technologies operate as digital platforms. “Implementing once, serving many” must become a mantra for digital government if it is to succeed at being both effective and efficient.

Figure 2. Priority Matrix for Digital Government, 2016

Source: Gartner (July 2016)…(More).”

Datavores of Local Government


Tom Symons at NESTA: “How data can help councils provide more personalised, effective and efficient services.

Key findings

  • After years of hype and little delivery, councils are now using big and small data in a range of innovative ways to improve decision making and inform public service transformation.
  • Emerging trends in the use of data include predictive algorithms, mass data integration across the public sector, open data and smart cities.
  • We identify seven ways in which councils can get more value from the data they hold.

What does the data revolution mean for councils and local public services? Local government collects huge amounts of data, about everything from waste collection to procurement processes to care services for some of the most vulnerable people in society. By using council data better, is there potential to make these services more personalised, more effective and more efficient?

Nesta’s Local Datavores research programme aims to answer this question. Where, how and to what extent can better data use help councils to achieve their strategic objectives? This report is the first in a series, aimed primarily at helping local public sector staff, from senior commissioners through to frontline professionals, get more value from the data they hold….(More)”

From smart city to open city: Lessons from Jakarta Smart City


Putri, D.A., CH Karlina, M., Tanaya, J., at the Centre for Innovation Policy and Governance, Indonesia: “In 2011, Indonesia started its Open Government journey when, along with seven other countries, it initiated the Open Government Partnership. Following the global declaration, Indonesia launched Open Government Indonesia (OGI) in January 2012 with the aim of introducing open government reforms, including open data. This initiative is supported by Law No. 14/2008 on Freedom of Information. Despite its early stage, the implementation of Open Government in Indonesia has shown promising developments, with three action plans enacted in the last four years. In the Southeast Asian region, Indonesia could be considered a pioneer in implementing the open data initiative at national as well as sub-national levels. In some cases, the open data initiative at the sub-national level has even surpassed progress at the national level. Jakarta, for example, became the first city to have its own gubernatorial bylaw on data and system management, which requires the city administration and agencies to open their public data, thus leading to the birth of open data initiatives in the city. The city also has Jakarta Smart City, which connects sub-district officials with citizens. Jakarta Smart City is an initiative that promotes openness of the government through public service delivery. This paper takes a closer look at the dynamics of citizen-generated data in Jakarta and at how the Jakarta Smart City program contributes to the implementation of open data….(More)”

Open data and its usability: an empirical view from the Citizen’s perspective


Paper by Weerakkody, V., Irani, Z., Kapoor, K. et al. in Information Systems Frontiers: “Government legislation and calls for greater levels of oversight and transparency are leading public bodies to publish their raw datasets online. Policy makers and elected officials anticipate that the accessibility of open data through online government portals will enable public engagement in policy making through increased levels of fact-based content elicited from open data. The usability and benefits of such open data are argued to contribute positively toward public sector reforms, which are under extreme pressure driven by extended periods of austerity. However, very few scholarly studies have attempted to empirically evaluate the performance of government open data websites and the acceptance and use of these data from a citizen perspective. Given this research void, an adjusted diffusion of innovation model based on Rogers’ diffusion of innovations theory (DOI) is proposed and used in this paper to empirically determine the predictors influencing the use of public sector open data. A good understanding of these predictors affecting the acceptance and use of open data will likely assist policy makers and public administrations in determining the policy instruments that can increase the acceptance and use of open data through an active promotion campaign to engage-contribute-use….(More)”

#HackthePayGap


Department of Commerce: “More than 50 years ago, President John F. Kennedy signed the Equal Pay Act into law. Yet just yesterday, Secretary of Commerce Penny Pritzker addressed developers, data scientists, and designers who are using Department of Commerce data to build new tools and products aimed at ending the pay disparities that still disadvantage women in today’s economy.

Speaking at the White House Hack the Pay Gap Demo Day, Secretary Pritzker stressed that the issue of equal pay for equal work is not just a women’s issue, but an injustice that impacts families and threatens our nation’s economic prosperity. While the pay gap remains a stubborn and persistent problem, Secretary Pritzker pointed to open data as a powerful new tool for workers, businesses, and the public to advance equality in the workplace.

Last April the Commerce Department, Presidential Innovation Fellows, and the White House Council on Women and Girls invited data scientists and developers from across America to “Hack the Pay Gap” using MIDAAS (Making Income Data Available as a Service) – a new application programming interface (API) designed to improve public access to the U.S. Census Bureau’s income, population, and geographic data…..For example, the “What’s my Pay Gap” project asks you to answer questions about yourself and allows you to discover how your personal wage gap grows and shrinks depending on your demographic characteristics. Another project, named “Aware,” provides a survey and data analytics platform for companies to use in order to make data-driven decisions about combating the pay gap in their own organizations. In addition, the Secretary listened to a presentation on the PowerShift application, which provides users with salary breakdown and range data on what men in a similar situation are making, along with legal information about fair pay….To learn more about the Hack the Pay Gap challenge visit paygap.pif.gov.”
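To make concrete how a tool might consume an income API of this kind, here is a minimal Python sketch. The endpoint URL and parameter names below are placeholders invented for illustration, not the documented MIDAAS interface; consult paygap.pif.gov for the actual specification.

```python
import requests

# Hypothetical MIDAAS-style query. The endpoint and parameter names are
# placeholders for illustration only, not the documented API.
BASE_URL = "https://api.example.gov/midaas/income"  # placeholder endpoint

params = {
    "sex": "female",      # demographic filter (illustrative)
    "state": "CA",        # geographic filter (illustrative)
    "ageRange": "25-34",  # age bracket (illustrative)
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# A service like this might return income distributions for the filtered
# population, which a pay-gap tool could compare across demographic groups.
print(response.json())
```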

US start-up aims to steer through flood of data


Richard Waters in the Financial Times: “The “open data” movement has produced a deluge of publicly available information this decade, as governments like those in the UK and US have released large volumes of data for general use.

But the flood has left researchers and data scientists with a problem: how do they find the best data sets, ensure these are accurate and up to date, and combine them with other sources of information?

The most ambitious in a spate of start-ups trying to tackle this problem is set to be unveiled on Monday, when data.world opens for limited release. A combination of online repository and social network, the site is designed to be a central platform to support the burgeoning activity around freely available data.

The aim closely mirrors Github, which has been credited with spurring the open source software movement by becoming both a place to store and find free programs as well as a crowdsourcing tool for identifying the most useful.

“We are at an inflection point,” said Jeff Meisel, chief marketing officer for the US Census Bureau. A “massive amount of data” has been released under open data provisions, he said, but “what hasn’t been there are the tools, the communities, the infrastructure to make that data easier to mash up”….

Data.world plans to seed its site with about a thousand data sets and attract academics as its first users, said Mr Hurt. By letting users create personal profiles on the site, follow others and collaborate around the information they are working on, the site hopes to create the kind of social dynamic that makes it more useful the more it is used.

An attraction of the service is the ability to upload data in any format and then use common web standards to link different data sets and create mash-ups with the information, said Dean Allemang, an expert in online data….(More)”
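To illustrate the kind of mash-up this enables, here is a minimal sketch in Python using pandas: two open data sets that share a common key are joined into one table. The file names and columns are invented for this example and are not data.world's actual interface, which works through web standards for linked data.

```python
import pandas as pd

# Two hypothetical open data sets sharing a common key (a county FIPS code).
# File names and column names are invented for illustration.
income = pd.read_csv("county_income.csv")        # columns: fips, median_income
broadband = pd.read_csv("county_broadband.csv")  # columns: fips, pct_broadband

# Linking on the shared identifier produces a simple "mash-up":
# income and broadband coverage side by side for each county.
merged = income.merge(broadband, on="fips", how="inner")

print(merged.head())
print(merged[["median_income", "pct_broadband"]].corr())
```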

Enablers for Smart Cities


Book by Amal El Fallah Seghrouchni, Fuyuki Ishikawa, Laurent Hérault, and Hideyuki Tokuda: “Smart cities are a new vision for urban development.  They integrate information and communication technology infrastructures – in the domains of artificial intelligence, distributed and cloud computing, and sensor networks – into a city, to facilitate quality of life for its citizens and sustainable growth.  This book explores various concepts for the development of these new technologies (including agent-oriented programming, broadband infrastructures, wireless sensor networks, Internet-based networked applications, open data and open platforms), and how they can provide smart services and enablers in a range of public domains.

The most significant research, both established and emerging, is brought together to enable academics and practitioners to investigate the possibilities of smart cities, and to generate the knowledge and solutions required to develop and maintain them…(More)”

How Twitter gives scientists a window into human happiness and health


 at the Conversation: “Since its public launch 10 years ago, Twitter has been used as a social networking platform among friends, an instant messaging service for smartphone users and a promotional tool for corporations and politicians.

But it’s also been an invaluable source of data for researchers and scientists – like myself – who want to study how humans feel and function within complex social systems.

By analyzing tweets, we’ve been able to observe and collect data on the social interactions of millions of people “in the wild,” outside of controlled laboratory experiments.

It’s enabled us to develop tools for monitoring the collective emotions of large populations, find the happiest places in the United States and much more.

So how, exactly, did Twitter become such a unique resource for computational social scientists? And what has it allowed us to discover?

Twitter’s biggest gift to researchers

On July 15, 2006, Twittr (as it was then known) publicly launched as a “mobile service that helps groups of friends bounce random thoughts around with SMS.” The ability to send free 140-character group texts drove many early adopters (myself included) to use the platform.

With time, the number of users exploded: from 20 million in 2009 to 200 million in 2012 and 310 million today. Rather than communicating directly with friends, users would simply tell their followers how they felt, respond to news positively or negatively, or crack jokes.

For researchers, Twitter’s biggest gift has been the provision of large quantities of open data. Twitter was one of the first major social networks to provide data samples through something called Application Programming Interfaces (APIs), which enable researchers to query Twitter for specific types of tweets (e.g., tweets that contain certain words), as well as information on users.
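As a rough illustration of the kind of keyword query described above, here is a minimal Python sketch using the third-party tweepy library (its v3-era interface). The credentials are placeholders, and Twitter's endpoints and access rules have changed repeatedly since then.

```python
import tweepy

# Placeholder credentials; real ones come from Twitter's developer portal.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Query the search API for recent public tweets containing a given word.
for tweet in api.search(q="happy", lang="en", count=10):
    print(tweet.created_at, tweet.text)
```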

This led to an explosion of research projects exploiting this data. Today, a Google Scholar search for “Twitter” produces six million hits, compared with five million for “Facebook.” The difference is especially striking given that Facebook has roughly five times as many users as Twitter (and is two years older).

Twitter’s generous data policy undoubtedly led to some excellent free publicity for the company, as interesting scientific studies got picked up by the mainstream media.

Studying happiness and health

With traditional census data slow and expensive to collect, open data feeds like Twitter have the potential to provide a real-time window to see changes in large populations.

The University of Vermont’s Computational Story Lab was founded in 2006 and studies problems across applied mathematics, sociology and physics. Since 2008, the Story Lab has collected billions of tweets through Twitter’s “Gardenhose” feed, an API that streams a random sample of 10 percent of all public tweets in real time.

I spent three years at the Computational Story Lab and was lucky to be a part of many interesting studies using this data. For example, we developed a hedonometer that measures the happiness of the Twittersphere in real time. By focusing on geolocated tweets sent from smartphones, we were able to map the happiest places in the United States. Perhaps unsurprisingly, we found Hawaii to be the happiest state and wine-growing Napa the happiest city for 2013.

A map of 13 million geolocated U.S. tweets from 2013, colored by happiness, with red indicating happiness and blue indicating sadness. PLOS ONE, Author provided
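The hedonometer's core computation is simple to sketch: each word carries a crowd-rated happiness score (the lab's labMT word list rates roughly 10,000 words on a 1-9 scale), and a text's score is the average over its rated words. The tiny lexicon below is invented for illustration; the real instrument uses the full labMT list and further refinements.

```python
# A toy hedonometer: average the happiness scores of the words in a text.
# The scores below are invented; the real labMT lexicon rates ~10,000
# words on a 1-9 scale.
happiness = {
    "love": 8.4, "beach": 7.9, "happy": 8.3,
    "traffic": 3.2, "sad": 2.4, "rain": 4.8,
}

def hedonometer_score(text):
    """Mean happiness of the rated words in `text` (None if no matches)."""
    words = text.lower().split()
    scores = [happiness[w] for w in words if w in happiness]
    return sum(scores) / len(scores) if scores else None

print(hedonometer_score("love the beach happy days"))   # high score
print(hedonometer_score("sad rain and traffic again"))  # low score
```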

These studies had deeper applications: Correlating Twitter word usage with demographics helped us understand underlying socioeconomic patterns in cities. For example, we could link word usage with health factors like obesity, so we built a lexicocalorimeter to measure the “caloric content” of social media posts. Tweets from a particular region that mentioned high-calorie foods increased the “caloric content” of that region, while tweets that mentioned exercise activities decreased our metric. We found that this simple measure correlates with other health and well-being metrics. In other words, tweets were able to give us a snapshot, at a specific moment in time, of the overall health of a city or region.
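A rough sketch of the lexicocalorimeter idea follows: mentions of high-calorie foods raise a region's score and mentions of exercise lower it, aggregated over that region's tweets. The word lists and calorie weights here are invented, and the published instrument is more refined than this simple netting of counts.

```python
from collections import defaultdict

# Toy lexicocalorimeter: food mentions raise a region's "caloric content",
# exercise mentions lower it. Words and weights are invented for illustration.
calories_in = {"pizza": 285, "donut": 240, "burger": 300}
calories_out = {"running": 300, "yoga": 180, "cycling": 250}

def caloric_balance(tweets_by_region):
    """Net caloric score per region from food/exercise word counts."""
    balance = defaultdict(float)
    for region, tweets in tweets_by_region.items():
        for tweet in tweets:
            for word in tweet.lower().split():
                balance[region] += calories_in.get(word, 0)
                balance[region] -= calories_out.get(word, 0)
    return dict(balance)

sample = {
    "region_a": ["pizza and donut night", "another burger"],
    "region_b": ["morning running then yoga", "cycling to work"],
}
print(caloric_balance(sample))  # region_a positive, region_b negative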

Using the richness of Twitter data, we’ve also been able to see people’s daily movement patterns in unprecedented detail. Understanding human mobility patterns, in turn, has the capacity to transform disease modeling, opening up the new field of digital epidemiology….(More)”

There aren’t any rules on how social scientists use private data. Here’s why we need them.


 at SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Survey over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. The new universe of private data is reshaping social science research in some ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

Postal big data: Global flows as proxy indicators for national wellbeing


Data Driven Journalism: “A new project has developed an innovative means to approximate socioeconomic indicators by analyzing the network of international postal flows.

The project used 14 million aggregated electronic postal records from 187 countries collected by the Universal Postal Union over a four-year period (2010-2014) to create an international network showing the way post flows around the world.

In addition, the project builds upon previous research efforts using global flow networks, derived from the five following open data sources:

For each network, a country’s degree of connectivity for incoming and outgoing flows was quantified using the Jaccard coefficient and Spearman’s rank correlation coefficient….

To understand these connections in the context of socioeconomic indicators, the researchers then compared these positions to the values of GDP, Life expectancy, Corruption Perception Index, Internet penetration rate, Happiness index, Gini index, Economic Complexity Index, Literacy, Poverty, CO2 emissions, Fixed phone line penetration, Mobile phone users, and the Human Development Index.


Image: Spearman rank correlations between global flow network degrees and socioeconomic indicators (CC BY 4.0).
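The two statistics named above are straightforward to compute. Here is a minimal Python sketch with invented data: the Jaccard coefficient measures the overlap between two sets of connections, and Spearman's rank correlation relates a country's network degree to an indicator such as GDP per capita.

```python
from scipy.stats import spearmanr

def jaccard(a, b):
    """Jaccard coefficient: |A intersect B| / |A union B| for two link sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented example: one country's postal partners vs. its trade partners.
postal_links = {"DE", "FR", "US", "CN", "JP"}
trade_links = {"DE", "FR", "US", "IN", "BR"}
print(jaccard(postal_links, trade_links))  # overlap between the two networks

# Invented example: rank-correlate network degree with GDP per capita.
degree = [120, 95, 60, 30, 12]               # countries' connectivity degrees
gdp_per_capita = [52000, 41000, 15000, 9000, 2000]
rho, p_value = spearmanr(degree, gdp_per_capita)
print(rho, p_value)  # rho near 1 suggests degree tracks national wealth
```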

From this analysis, the researchers revealed that:

  • The best-performing degree, in terms of consistently high performance across indicators, is the global degree, suggesting that looking at how well connected a country is in the global multiplex can be more indicative of its socioeconomic profile as a whole than looking at single networks.
  • GDP per capita and life expectancy are most closely correlated with the global degree, followed by the postal, trade and IP weighted degrees – indicative of a relationship between national wealth and the flow of goods and information.
  • Similarly to GDP, the rate of poverty of a country is best represented by the global degree, followed by the postal degree. The negative correlation indicates that the more impoverished a country is, the less well connected it is to the rest of the world.
  • Low human development (high rank) is most highly negatively correlated with the global degree, followed by the postal, trade and IP degrees. This shows that high human development (low rank) is associated with high global connectivity and activity in terms of incoming and outgoing flows of information and goods. ….Read the full study here.”