Data-enriched research, data-enhanced impact: the importance of UK data infrastructure.


Matthew Woollard at the LSE Impact Blog: “…Data made available for reuse, such as those in the UK Data Service collection, have huge potential. They can unlock new discoveries in research, provide evidence for policy decisions and help promote core data skills in the next generation of researchers. By being part of a single infrastructure, data owners and data creators can work together with the UK Data Service – rather than duplicating efforts – to engage with the people who can drive the impact of their research further to provide real benefit to society. As a service we are also identifying new ways to understand and promote our impact, and our Impact Fellow and Director of Impact and Communications, Victoria Moody, is focusing on raising the visibility of the UK Data Service holdings and developing and promoting the use and impact of the data and resources in policy-relevant research, especially to new audiences such as policymakers, government sectors, charities, the private sector and the media….

We are improving how we demonstrate the impact of both the Service and the data we hold, by focusing on generating more – and more authentic – corroboration from users. Our emphasis is on drawing together evidence about the reach and significance of the impact of our data and resources, and of the Service as a whole through our infrastructure and expertise. Headline impact indicators through which we will better understand our impact cover a range of areas (outlined above) where the Service brings efficiency to data access and re-use, benefit to its users and a financial and social return on investment.

We are working to understand more about how Service data contributes to impact by tracking the use of Service data in a range of initiatives focused on developing impact from research, and by developing our insight into how users work with our data. Data in the collection have featured in a range of impact case studies in the Research Excellence Framework 2014. We are also developing a focus on understanding the specific beneficial effect of the data – how they appear in policy, debate or the evidential process – rather than simply noting that data were used in an output (important though that is). Our early thinking is that, ideally, cited data could be tracked through a specific beneficial outcome to an evidenced effect, corroborated by the end user.

Our impact case studies demonstrate how the data have supported research which has led to policy change in a range of areas, including: the development of mathematical models for Practice-based Commissioning budgets for adult mental health in the UK, and informing public policy on obesity, both using the Health Survey for England. Service data have also informed the development of impact around understanding public attitudes towards the police and other legal institutions, using the Crime Survey for England and Wales, and research to support the development of the national minimum wage, using the Labour Force Survey. The cutting-edge new Demos Integration Hub maps the changing face of Britain’s diversity, revealing a mixed picture in the integration and upward mobility of ethnic minority communities; it uses 2011 Census aggregate data (England and Wales) and Understanding Society….(More)”

Open government data: Out of the box


The Economist on “The open-data revolution has not lived up to expectations. But it is only getting started…

The app that helped save Mr Rich’s leg is one of many that incorporate government data—in this case, supplied by four health agencies. Six years ago America became the first country to make all data collected by its government “open by default”, except for personal information and that related to national security. Almost 200,000 datasets from 170 outfits have been posted on the data.gov website. Nearly 70 other countries have also made their data available: mostly rich, well-governed ones, but also a few that are not, such as India (see chart). The Open Knowledge Foundation, a London-based group, reckons that over 1m datasets have been published on open-data portals using its CKAN software, developed in 2010.
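
As a concrete illustration of how these portals are consumed programmatically: CKAN exposes a standard JSON “action” API, and data.gov’s catalogue is among the portals built on it. The sketch below uses CKAN’s documented package_search action; the search term is arbitrary, and individual portals can differ in configuration and rate limits.

```python
# Minimal sketch: searching a CKAN-backed open-data portal.
# Uses only the standard library; the endpoint follows CKAN's
# documented action API (api/3/action/package_search).
import json
import urllib.request

url = "https://catalog.data.gov/api/3/action/package_search?q=health&rows=5"
with urllib.request.urlopen(url) as response:
    payload = json.load(response)

result = payload["result"]
print("Matching datasets:", result["count"])
for dataset in result["results"]:
    print("-", dataset["title"])
```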

Anonymous hackers could be Islamic State’s online nemesis


From The Conversation: “One of the key issues the West has had to face in countering Islamic State (IS) is the jihadi group’s mastery of online propaganda, seen in hundreds of thousands of messages celebrating the atrocities against civilians and spreading the message of radicalisation. It seems clear that efforts to counter IS online are missing the mark.

An internal US State Department assessment noted in June 2015 how the violent narrative of IS had “trumped” the efforts of the world’s richest and most technologically advanced nations. Meanwhile in Europe, Interpol was set to track and take down social media accounts linked to IS, as if that would solve the problem – when in fact doing so meant potentially missing out on intelligence-gathering opportunities.

Into this vacuum has stepped Anonymous, a loose, fragmented network of hacktivists that has for years launched occasional cyberattacks against government, corporate and civil society organisations. The group announced its intention to take on IS and its propaganda online, using its networks to crowd-source the identities of IS-linked accounts. Under the banner of #OpIsis and #OpParis, Anonymous published lists of thousands of Twitter accounts said to belong to IS members or sympathisers, claiming more than 5,500 had been removed.

The group pursued a similar approach following the attacks on Charlie Hebdo magazine in January 2015, with #OpCharlieHebdo taking down more than 200 jihadist Twitter accounts, bringing down the website Ansar-Alhaqq.net and publishing a list of 25,000 accounts alongside a guide on how to locate pro-IS material online….

Members of Anonymous have been prosecuted for cyberattacks in many countries under cybercrime laws, as their activities are not seen as legitimate protest. It is worth mentioning the ethical debate around hacktivism: some see cyberattacks that take down accounts or websites as infringing on others’ freedom of expression, while others argue that hacktivism should instead create technologies to circumvent censorship, enable digital equality and open access to information….(More)”

Crowdsourced phone camera footage maps conflicts


Springwise: “The UN requires accurate proof when investigating possible war crimes, but with different sides of a conflict providing contradictory evidence, and the unsafe nature of the environment, gaining genuine insight can be problematic. A team based at Goldsmiths, University of London are using amateur footage to investigate.

Forensic Architecture makes use of the increasingly prevalent smartphone footage shared on social media networks. By crowdsourcing several viewpoints around a given location on an accurately 3D-rendered map, the team are able to determine where explosive devices were used, and of what calibre. Key resources are the smoke plumes from explosions, which have a unique shape at any given moment; by mapping them, the team can identify the smoke at the exact same moment from various viewpoints, building up a dossier of evidence of a war crime.
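
Springwise’s description implies classic multi-view geometry: each clip yields a camera position and a sight line towards the plume, and intersecting those rays locates the plume in space. The sketch below is a purely hypothetical illustration of that principle, not Forensic Architecture’s actual code; the camera positions, viewing directions and plume location are invented for the demo.

```python
# Hypothetical sketch: locating a smoke plume in 3D from two camera
# sightings. Each sighting is a ray (camera position + t * direction);
# the estimate is the midpoint of the closest points between the rays.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between rays p1 + t*d1 and p2 + s*d2."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # approaches 0 only for (near-)parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return ((p1 + t * d1) + (p2 + s * d2)) / 2

# Invented demo data: two camera positions (metres) and the sight lines
# each camera records towards the same plume.
cam1 = np.array([0.0, 0.0, 0.0])
cam2 = np.array([100.0, 0.0, 0.0])
true_plume = np.array([50.0, 40.0, 30.0])
estimate = triangulate(cam1, true_plume - cam1, cam2, true_plume - cam2)
print(estimate)  # -> [50. 40. 30.]
```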

While Forensic Architecture’s method has been developed to validate war crime atrocities, the potential uses in other areas where satellite data are not available are numerous — forest fire sources could be located based on smoke plumes, and potential crowd crush scenarios may be spotted before they occur….(More)”

Public Participation Organizations and Open Policy


Paper by Helen Pallett in Science Communication: “This article builds on work in Science and Technology Studies and cognate disciplines concerning the institutionalization of public engagement and participation practices. It describes and analyses ethnographic qualitative research into one “organization of participation,” the UK government–funded Sciencewise program. Sciencewise’s interactions with broader political developments are explored, including the emergence of “open policy” as a key policy object in the UK context. The article considers what the new imaginary of openness means for institutionalized forms of public participation in science policymaking, asking whether this is illustrative of a “constitutional moment” in relations between society and science policymaking….(More)”

Looking for Open Data from a different country? Try the European Data portal


Wendy Carrara on the DAE blog: “The Open Data movement is reaching all countries in Europe. Data portals give you access to re-usable government information. But have you ever tried to find Open Data from another country whose language you do not speak? Or have you tried to see whether data from one country also exist in a similar form in another? The European Data Portal that we have just launched can help you….

One of the European Data Portal project’s main work streams is the development of a new pan-European open data infrastructure. Its goal is to be a gateway offering access to data published by administrations in countries across Europe, from the EU and beyond. The portal was launched during the European Data Forum in Luxembourg.

Additionally, we will support public administrations in publishing more data as open data, and we have targeted actions to stimulate re-use. By taking a look at the data released by other countries and made available on the European Data Portal, governments can also be inspired to publish new data sets they had not thought about in the first place.

The re-use of Open Data will further boost the economy. The benefits of Open Data are diverse and range from improved performance of public administrations and economic growth in the private sector to wider social welfare. The economic study conducted by the European Data Portal team estimates that between 2016 and 2020, the market size of Open Data is expected to increase by 36.9% to a value of 75.7 bn EUR in 2020.
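
A quick back-of-envelope check of those figures: if the 36.9% growth is measured from the 2016 baseline, they imply a 2016 market of roughly 55.3 bn EUR.

```python
# Implied 2016 baseline from the study's headline figures.
market_2020_bn = 75.7
growth = 0.369
market_2016_bn = market_2020_bn / (1 + growth)
print(f"Implied 2016 market size: {market_2016_bn:.1f} bn EUR")  # ~55.3
```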

For data to be re-used, it has to be accessible

Currently, the portal includes over 240,000 datasets from 34 European countries. Information about the data available is structured into thirteen different categories ranging from agriculture to transport, including science, justice, health and so on. This enables you to quickly browse through categories and feel inspired by the data made accessible….(More)”

E-Gov’s Untapped Potential for Cutting the Public Workforce


Robert D. Atkinson at Governing: “Since the flourishing of the Internet in the mid-1990s, e-government advocates have promised that information technology not only would make it easier to access public services but also would significantly increase government productivity and lower costs. Compared to the private sector, however, this promise has remained largely unfulfilled, in part because of a resistance to employing technology to replace government workers.

It’s not surprising, then, that state budget directors and budget committees usually look at IT as a cost rather than as a strategic investment that can produce a positive financial return for taxpayers. Until governments make a strong commitment to using IT to increase productivity — including as a means of workforce reduction — it will remain difficult to bring government into the 21st-century digital economy.

The benefits can be sizeable. My organization, the Information Technology and Innovation Foundation, estimates that if states focus on using IT to drive productivity, they stand to save more than $11 billion over the next five years. States can achieve these productivity gains in two primary ways:

First, they can use e-government to substitute for person-to-person interactions. For example, by moving just nine state services online — from one-stop business registration to online vehicle-license registration — Utah reduced the need for government employees to interact with citizens, saving an average of $13 per transaction.

And second, they can use IT to optimize performance and cut costs. In 2013, for example, Pennsylvania launched a mobile app to streamline the inspection process for roads and bridges, reducing the time it took for manual data entry. Inspectors saved about 15 minutes per survey, which added up to a savings of over $550,000 in 2013.
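
For a rough sense of the volume of inspections those numbers imply, here is a back-of-envelope reconstruction. Only the 15-minute saving and the ~$550,000 total come from the article; the fully loaded hourly labour cost is a hypothetical assumption.

```python
# Back-of-envelope: how many surveys would the cited savings represent?
minutes_saved_per_survey = 15
hourly_cost_usd = 40.0  # hypothetical fully loaded labour cost
saving_per_survey = hourly_cost_usd * minutes_saved_per_survey / 60
implied_surveys = 550_000 / saving_per_survey
print(f"Saving per survey: ${saving_per_survey:.2f}")      # $10.00
print(f"Implied surveys in 2013: {implied_surveys:,.0f}")  # ~55,000
```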

So if technology can cut costs, why has e-government not lived up to its original promise? One key reason is that most state governments have focused first and foremost on using IT to improve service quality and access rather than to increase productivity. In part, this is because boosting productivity involves reducing headcount, and state chief information officers and other policymakers often are unwilling to openly advocate for using technology in this way for fear that it will generate opposition from government workers and their unions. This is why replacing labor with modern IT tools has long been the third rail for the public-sector IT community.

This is not necessarily the case in some other nations that have moved aggressively to deploy IT to reduce headcount. The first goal of the Danish Agency for Digitisation’s strategic plan is “a productive and efficient public sector.” To get there, the agency plans to focus on automation of public administrative procedures. Denmark even introduced a rule requiring all communications with government to be done electronically, eliminating telephone receptionists at municipal offices. Likewise, the United Kingdom’s e-government strategy set a goal of increasing productivity by 2.5 percent, including through headcount cuts.

Another reason e-government has not lived up to its full promise is that many state IT systems are woefully out of date, especially compared to the systems the corporate sector uses. But if CIOs and other advocates of modern digital government are going to be able to make their case effectively for resources to bring their technology into the 21st century, they will need to make a more convincing bottom-line case to appropriators. This argument should be about saving money, including through workforce reduction.

Policymakers should base this case not just on savings for government but also on savings for the state’s businesses and citizens….(More)”

Questioning Smart Urbanism: Is Data-Driven Governance a Panacea?


From the Chicago Policy Review: “In the era of data explosion, urban planners are increasingly relying on real-time, streaming data generated by “smart” devices to assist with city management. “Smart cities,” referring to cities that implement pervasive and ubiquitous computing in urban planning, are widely discussed in academia, business, and government. These cities are characterized not only by their use of technology but also by their innovation-driven economies and collaborative, data-driven city governance. Smart urbanism can seem like an effective strategy to create more efficient, sustainable, productive, and open cities. However, there are emerging concerns about the potential risks in the long-term development of smart cities, including the political neutrality of big data, technocratic governance, technological lock-ins, data and network security, and privacy risks.

In a study entitled “The Real-Time City? Big Data and Smart Urbanism,” Rob Kitchin provides a critical reflection on the potential negative effects of data-driven city governance on social development—a topic he claims deserves greater governmental, academic, and social attention.

In contrast to traditional datasets that rely on samples or are aggregated to a coarse scale, “big data” is huge in volume, high in velocity, and diverse in variety. Since the early 2000s, there has been explosive growth in data volume due to the rapid development and implementation of technology infrastructure, including networks, information management, and data storage. Big data can be generated from directed, automated, and volunteered sources. Automated data generation is of particular interest to urban planners. One example Kitchin cites is urban sensor networks, which allow city governments to monitor the movements and statuses of individuals, materials, and structures throughout the urban environment by analyzing real-time data.

With the huge amount of streaming data collected by smart infrastructure, many city governments use real-time analysis to manage different aspects of city operations. There has been a recent trend in centralizing data streams into a single hub, integrating all kinds of surveillance and analytics. These one-stop data centers make it easier for analysts to cross-reference data, spot patterns, identify problems, and allocate resources. The data are also often accessible by field workers via operations platforms. In London and some other cities, real-time data are visualized on “city dashboards” and communicated to citizens, providing convenient access to city information.
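
To make “cross-referencing” concrete, below is a minimal, hypothetical sketch of the kind of alignment such a hub enables: pairing two time-stamped feeds by nearest timestamp so an analyst can look for related patterns. The feed names and readings are invented for illustration, not drawn from any real city dashboard.

```python
# Hypothetical sketch: aligning two time-stamped city feeds so related
# patterns (e.g. a traffic spike and a pollution spike) show up together.
import pandas as pd

traffic = pd.DataFrame({
    "time": pd.to_datetime(["2015-11-16 08:00", "2015-11-16 08:05",
                            "2015-11-16 08:10"]),
    "vehicles_per_min": [42, 88, 91],
})
air_quality = pd.DataFrame({
    "time": pd.to_datetime(["2015-11-16 08:01", "2015-11-16 08:06",
                            "2015-11-16 08:11"]),
    "no2_ugm3": [38, 61, 66],
})

# merge_asof pairs each air-quality reading with the most recent
# traffic reading at or before it (both frames must be time-sorted).
joined = pd.merge_asof(air_quality, traffic, on="time")
print(joined)
```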

However, the real-time city is not a flawless solution to all the problems faced by city managers. The primary concern is the politics of big, urban data. Although raw data are often perceived as neutral and objective, no data are free of bias; the collection of data is a subjective process that can be shaped by various confounding factors. The presentation of data can also be manipulated to answer a specific question or enact a particular political vision….(More)”

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.
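
To see what that change in doubling time means, compare the implied growth over a single year: doubling every 12 months multiplies volume by two, while doubling every 12 hours means roughly 730 doublings.

```python
# Growth factor over one year under the two doubling rates cited.
yearly_factor_now = 2.0 ** 1    # doubles every 12 months
yearly_factor_iot = 2.0 ** 730  # doubles every 12 hours: 730 doublings
print(f"Now:     x{yearly_factor_now:.0f} per year")
print(f"IoT era: x{yearly_factor_iot:.2e} per year")  # ~5.6e219
```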

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings [1].

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes [2] if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity [3]. Today, Singapore considers itself a data-driven ‘social laboratory’ [4] and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens [5]. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity [6].

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’ [7]. Fourth, personalized information creates ‘filter bubbles’ [8]. People are exposed less to other opinions, which can increase polarization and conflict [9].

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, by altering the cultural cues that guide people’s decisions, everyday decision-making is disrupted, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike [10]. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)”. See also the Nervousnet project.

The Power of Nudges, for Good and Bad


Richard H. Thaler in the New York Times: “Nudges, small design changes that can markedly affect individual behavior, have been catching on. These techniques rely on insights from behavioral science, and when used ethically, they can be very helpful. But we need to be sure that they aren’t being employed to sway people to make bad decisions that they will later regret.

Whenever I’m asked to autograph a copy of “Nudge,” the book I wrote with Cass Sunstein, the Harvard law professor, I sign it, “Nudge for good.” Unfortunately, that is meant as a plea, not an expectation.

Three principles should guide the use of nudges:

■ All nudging should be transparent and never misleading.

■ It should be as easy as possible to opt out of the nudge, preferably with as little as one mouse click.

■ There should be good reason to believe that the behavior being encouraged will improve the welfare of those being nudged.

As far as I know, the government teams in Britain and the United States that have focused on nudging have followed these guidelines scrupulously. But the private sector is another matter. In this domain, I see much more troubling behavior.

For example, last spring I received an email telling me that the first prominent review of a new book of mine had appeared: It was in The Times of London. Eager to read the review, I clicked on a hyperlink, only to run into a pay wall. Still, I was tempted by an offer to take out a one-month trial subscription for the price of just £1. As both a consumer and producer of newspaper articles, I have no beef with pay walls. But before signing up, I read the fine print. As expected, I would have to provide credit card information and would be automatically enrolled as a subscriber when the trial period expired. The subscription rate would then be £26 (about $40) a month. That wasn’t a concern because I did not intend to become a paying subscriber. I just wanted to read that one article.

But the details turned me off. To cancel, I had to give 15 days’ notice, so the one-month trial offer actually was good for just two weeks. What’s more, I would have to call London, during British business hours, and not on a toll-free number. That was both annoying and worrying. As an absent-minded American professor, I figured there was a good chance I would end up subscribing for several months, and that reading the article would end up costing me at least £100….

These examples are not unusual. Many companies are nudging purely for their own profit and not in customers’ best interests. In a recent column in The New York Times, Robert Shiller called such behavior “phishing.” Mr. Shiller and George Akerlof, both Nobel-winning economists, have written a book on the subject, “Phishing for Phools.”

Some argue that phishing — or evil nudging — is more dangerous in government than in the private sector. The argument is that government is a monopoly with coercive power, while we have more choice in the private sector over which newspapers we read and which airlines we fly.

I think this distinction is overstated. In a democracy, if a government creates bad policies, it can be voted out of office. Competition in the private sector, however, can easily work to encourage phishing rather than stifle it.

One example is the mortgage industry in the early 2000s. Borrowers were encouraged to take out loans that they could not repay when real estate prices fell. Competition did not eliminate this practice, because it was hard for anyone to make money selling the advice “Don’t take that loan.”

As customers, we can help one another by resisting these come-ons. The more we turn down questionable offers like trip insurance and scrutinize “one month” trials, the less incentive companies will have to use such schemes. Conversely, if customers reward firms that act in our best interests, more such outfits will survive and flourish, and the options available to us will improve….(More)”