Government Communications in a Digital Age


Book by Kim Murphy: “Just like political parties, governments must adapt to the demands of the digital sphere as their legitimacy is dependent on their ability to communicate decisions to citizens. However, despite abundant research into how the Internet is changing political communications, little is known about how governments use digital technologies to communicate with citizens. There is also little knowledge of how different political systems shape the use of technology in this respect. Therefore, from a comparative perspective, this study examines how government organisations in Germany and Great Britain are using websites and social media to interact with citizens and the media on a daily basis. Its empirical approach involves a content analysis of government websites and social media pages and a social network analysis of Twitter networks. Its findings show that government ministries predominantly use websites and social media for one-way communication and that social media is supporting the personalisation of government communications….(More)”.

The great ‘unnewsed’ struggle to participate fully in democracy


Polly Curtis in the Financial Times: “…We once believed in utopian dreams about how a digital world would challenge power structures, democratise information and put power into the hands of the audience. Twenty years ago, I even wrote a university dissertation on how the internet was going to re-democratise society.

Two decades on, power structures have certainly been disrupted, but that utopianism has now crashed into a different reality: a growing and largely unrecognised crisis of the “unnewsed” population. The idea of the unnewsed stems from the concept of the “unbanked”, people who are dispossessed of the structures of society that depend on having a bank account.

Not having news does the same for you in a democratic system. It is a global problem. In parts of the developing world the digital divide is defined by the cost of data, often splitting between rural and urban, and in some places male control of mobile phones exacerbates the disenfranchisement of women. Even in the affluent west, where data is cheap and there are more SIM cards than people, that digital divide exists. In the US the concept of “news deserts”, communities with no daily local news outlet, is well established.

Last week, the Reuters Digital News Report, an annual survey of the digital news habits of 75,000 people in 38 countries, reported that 32 per cent now actively avoid the news — avoidance is up 6 percentage points overall and 11 points in the UK. When I dug into other data on news consumption, from the UK communications regulator Ofcom, I found that those who claim not to follow any news are younger, less educated, have lower incomes and are less likely to be in work than those who do. We don’t like to talk about it, but news habits are closely aligned to something that looks very like class. How people get their news explains some of this — and demonstrates the class divide in access to information.

Research by Oxford University’s Reuters Institute last year found that there is greater social inequality in news consumption online than offline. Whereas on average we all use the same number of news sources offline, those on the lower end of the socio-economic scale use significantly fewer sources online. Even the popular tabloids, with their tradition of campaigning news for mass audiences, now have higher social class readers online than in print. Instead of democratising information, there is a risk that the digital revolution is exacerbating gaps in news habits….(More)”.

Self-Sovereign Identity


/sɛlf-ˈsɑvrən aɪˈdɛntəti/

A decentralized identification mechanism that gives individuals control over what personal information they share, when, and with whom.

An identification document (ID) is a crucial part of every individual’s life, in that it is often a prerequisite for accessing a variety of services—ranging from creating a bank account to enrolling children in school to buying alcoholic beverages to signing up for an email account to voting in an election—and also serves as proof of simply being. This system poses fundamental problems, which a field report by The GovLab on Blockchain and Identity frames as follows:

“One of the central challenges of modern identity is its fragmentation and variation across platforms and individuals. There are also issues related to interoperability between different forms of identity, and the fact that different identities confer very different privileges, rights, services or forms of access. The universe of identities is vast and manifold. Every identity in effect poses its own set of challenges and difficulties—and, of course, opportunities.”

A report published in New America echoed this point, by arguing that:

“Societally, we lack a coherent approach to regulating the handling of personal data. Users share and generate far too much data—both personally identifiable information (PII) and metadata, or “data exhaust”—without a way to manage it. Private companies, by storing an increasing amount of PII, are taking on an increasing level of risk. Solution architects are recreating the wheel, instead of flying over the treacherous terrain we have just described.”

SSI has been dubbed the solution to the identity problems mentioned above. Identity Woman, a researcher and advocate for SSI, goes even further by arguing that generating “a digital identity that is not under the control of a corporation, an organization or a government” is essential “in pursuit of social justice, deep democracy, and the development of new economies that share wealth and protect the environment.”

To inform the analysis of blockchain-based Self-Sovereign Identity (SSI), The GovLab report argues that identity is “a process, not a thing” and breaks it into a five-stage lifecycle: provisioning, administration, authentication, authorization, and auditing/monitoring. At each stage, identification serves a unique function and poses different challenges.

With SSI, individuals have full control over how their personal information is shared, who gets access to it, and when. The New America report summarizes the potential of SSI in the following paragraphs:

“We believe that the great potential of SSI is that it can make identity in the digital world function more like identity in the physical world, in which every person has a unique and persistent identity which is represented to others by means of both their physical attributes and a collection of credentials attested to by various external sources of authority.”

[…]

“SSI, in contrast, gives the user a portable, digital credential (like a driver’s license or some other document that proves your age), the authenticity of which can be securely validated via cryptography without the recipient having to check with the authority that issued it. This means that while the credential can be used to access many different sites and services, there is no third-party broker to track the services to which the user is authenticating. Furthermore, cryptographic techniques called “zero-knowledge proofs” (ZKPs) can be used to prove possession of a credential without revealing the credential itself. This makes it possible, for example, for users to prove that they are over the age of 21 without having to share their actual birth dates, which are both sensitive information and irrelevant to a binary, yes-or-no ID transaction.”
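The selective-disclosure pattern described in the passage above can be sketched in a few lines of code. The following toy Python illustration is our own, not part of any SSI standard: an HMAC over a shared key stands in for the issuer's digital signature, and salted-hash commitments stand in for real zero-knowledge proofs. The function names (`issue_credential`, `present`, `verify`) are hypothetical. It shows the essential idea: a verifier can check an "over 21" claim offline, without contacting the issuer and without ever seeing the birth date.

```python
# Toy sketch of selective disclosure in an SSI-style credential.
# Illustrative only: HMAC stands in for the issuer's digital signature,
# and salted-hash commitments stand in for real zero-knowledge proofs.
import hashlib
import hmac
import json
import secrets

# Stand-in for the issuer's public verification key (a real system
# would use asymmetric signatures, e.g. Ed25519).
ISSUER_KEY = secrets.token_bytes(32)

def commit(name, value, salt):
    """Salted hash commitment to a single claim."""
    return hashlib.sha256(f"{name}={value}:{salt}".encode()).hexdigest()

def issue_credential(claims):
    """Issuer: commit to each claim separately, then sign the list of commitments."""
    salts = {k: secrets.token_hex(16) for k in claims}
    commitments = sorted(commit(k, v, salts[k]) for k, v in claims.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments).encode(),
                         hashlib.sha256).hexdigest()
    return {"claims": claims, "salts": salts,
            "commitments": commitments, "signature": signature}

def present(credential, claim_name):
    """Holder: reveal one claim (and its salt), withhold all the others."""
    return {"claim": (claim_name, credential["claims"][claim_name]),
            "salt": credential["salts"][claim_name],
            "commitments": credential["commitments"],
            "signature": credential["signature"]}

def verify(presentation):
    """Verifier: check the issuer's signature offline, then check the revealed claim."""
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(presentation["commitments"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    name, value = presentation["claim"]
    return commit(name, value, presentation["salt"]) in presentation["commitments"]

cred = issue_credential({"name": "Alice",
                         "birth_date": "1990-01-01",
                         "over_21": "true"})
proof = present(cred, "over_21")  # the birth date never leaves the holder
assert verify(proof)
```

Because each claim is committed to individually, revealing the "over 21" attestation discloses nothing about the other claims, which is the yes-or-no transaction the report describes.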

Some case studies on the application of SSI in the real world presented on The GovLab Blockchange website include a government-issued self-sovereign ID using blockchain technology in the city of Zug in Switzerland; a mobile election voting platform, secured via smart biometrics, real-time ID verification and the blockchain for irrefutability, piloted in West Virginia; and a blockchain-based land and property transaction/registration system in Sweden.

Nevertheless, addressing the hype around this new and emerging technology, the authors write:

“At their core, blockchain technologies offer new capacity for increasing the immutability, integrity, and resilience of information capture and disclosure mechanisms, fostering the potential to address some of the information asymmetries described above. By leveraging a shared and verified database of ledgers stored in a distributed manner, blockchain seeks to redesign information ecosystems in a more transparent, immutable, and trusted manner. Solving information asymmetries may turn out to be the real contribution of blockchain, and this—much more than the current enthusiasm over virtual currencies—is the real reason to assess its potential.

“It is important to emphasize, of course, that blockchain’s potential remains just that for the moment—only potential. Considerable hype surrounds the emerging technology, and much remains to be done and many obstacles to overcome if blockchain is to achieve the enthusiasts’ vision of ‘radical transparency.’”

100 Radical Innovation Breakthroughs for the future


The Radical Innovation Breakthrough Inquirer for the European Commission: “This report provides insights on 100 emerging developments that may exert a strong impact on global value creation and offer important solutions to societal needs. We identified this set of emerging developments through a carefully designed procedure that combined machine learning algorithms and human evaluation. After successive waves of selection and refinement, the resulting 100 emerging topics were subjected to several assessment procedures, including expert consultation and analysis of related patents and publications.

Having analysed the potential importance of each of these innovations for Europe, their current maturity and the relative strength of Europe in related R&D, we can make some general policy recommendations that follow.

However, it is important to note that our recommendations are based on the extremes of the distributions, and thus not all RIBs are named under the recommendations. Yet, the totality of the set of Radical Innovation Breakthrough (RIB) and Radical Societal Breakthrough (RSB) descriptions and their recent progress directions constitute an important collection of intelligence material that can inform strategic planning in research and innovation policy, industry and enterprise policy, and local development policy….(More)”.

Data & Policy: A new venue to study and explore policy–data interaction


Opening editorial by Stefaan G. Verhulst, Zeynep Engin and Jon Crowcroft: “…Policy–data interactions or governance initiatives that use data have been the exception rather than the norm, isolated prototypes and trials rather than an indication of real, systemic change. There are various reasons for the generally slow uptake of data in policymaking, and several factors will have to change if the situation is to improve. ….

  • Despite the number of successful prototypes and small-scale initiatives, policy makers’ understanding of data’s potential and its value proposition generally remains limited (Lutes, 2015). There is also limited appreciation of the advances data science has made in the last few years. This is a major limiting factor; we cannot expect policy makers to use data if they do not recognize what data and data science can do.
  • The recent (and justifiable) backlash against how certain private companies handle consumer data has had something of a reverse halo effect: There is a growing lack of trust in the way data is collected, analyzed, and used, and this often leads to a certain reluctance (or simply risk-aversion) on the part of officials and others (Engin, 2018).
  • Despite several high-profile open data projects around the world, much (probably the majority) of data that could be helpful in governance remains either privately held or otherwise hidden in silos (Verhulst and Young, 2017b). There remains a shortage not only of data but, more specifically, of high-quality and relevant data.
  • With few exceptions, the technical capacities of officials remain limited, and this has obviously negative ramifications for the potential use of data in governance (Giest, 2017).
  • It’s not just a question of limited technical capacities. There is often a vast conceptual and values gap between the policy and technical communities (Thompson et al., 2015; Uzochukwu et al., 2016); sometimes it seems as if they speak different languages. Compounding this difference in world views is the fact that the two communities rarely interact.
  • Yet, data about the use and evidence of the impact of data remain sparse. The impetus to use more data in policy making is stymied by limited scholarship and a weak evidential basis to show that data can be helpful and how. Without such evidence, data advocates are limited in their ability to make the case for more data initiatives in governance.
  • Data are not only changing the way policy is developed, but they have also reopened the debate around theory- versus data-driven methods in generating scientific knowledge (Lee, 1973; Kitchin, 2014; Chivers, 2018; Dreyfuss, 2017) and thus directly questioning the evidence base to utilization and implementation of data within policy making. A number of associated challenges are being discussed, such as: (i) traceability and reproducibility of research outcomes (due to “black box processing”); (ii) the use of correlation instead of causation as the basis of analysis, biases and uncertainties present in large historical datasets that cause replication and, in some cases, amplification of human cognitive biases and imperfections; and (iii) the incorporation of existing human knowledge and domain expertise into the scientific knowledge generation processes—among many other topics (Castelvecchi, 2016; Miller and Goodchild, 2015; Obermeyer and Emanuel, 2016; Provost and Fawcett, 2013).
  • Finally, we believe that there should be a sound underpinning for a new theory of what we call Policy–Data Interactions. To date, in reaction to the proliferation of data in the commercial world, theories of data management, privacy, and fairness have emerged. From the Human–Computer Interaction world, a manifesto of principles of Human–Data Interaction (Mortier et al., 2014) has found traction, which aims to reduce the asymmetry of power present in current design considerations of systems of data about people. However, we need a consistent, symmetric approach to the consideration of systems of policy and data, and how they interact with one another.

All these challenges are real, and they are sticky. We are under no illusions that they will be overcome easily or quickly….

During the past four conferences, we have hosted an incredibly diverse range of dialogues and examinations by key global thought leaders, opinion leaders, practitioners, and the scientific community (Data for Policy, 2015; 2016; 2017; 2019). What became increasingly obvious was the need for a dedicated venue to deepen and sustain the conversations and deliberations beyond the limitations of an annual conference. This leads us to today and the launch of Data & Policy, which aims to confront and mitigate the barriers to greater use of data in policy making and governance.

Data & Policy is a venue for peer-reviewed research and discussion about the potential for and impact of data science on policy. Our aim is to provide a nuanced and multistranded assessment of the potential and challenges involved in using data for policy and to bridge the “two cultures” of science and humanism—as CP Snow famously described in his lecture on “Two Cultures and the Scientific Revolution” (Snow, 1959). By doing so, we also seek to bridge the two other dichotomies that limit an examination of datafication and its interaction with policy from various angles: the divide between practice and scholarship; and between private and public…

So these are our principles: scholarly, pragmatic, open-minded, interdisciplinary, focused on actionable intelligence, and, most of all, innovative in how we will share insight and push at the boundaries of what we already know and what already exists. We are excited to launch Data & Policy with the support of Cambridge University Press and University College London, and we’re looking for partners to help us build it as a resource for the community. If you’re reading this manifesto it means you have at least a passing interest in the subject; we hope you will be part of the conversation….(More)”.

How not to conduct a consultation – and why asking the public is not always such a great idea


Agnes Batory & Sara Svensson at Policy and Politics: “Involving people in policy-making is generally a good thing. Policy-makers themselves often pay at least lip-service to the importance of giving citizens a say. In the academic literature, participatory governance has been, with some exaggeration, almost universally hailed as a panacea to all ills in Western democracies. In particular, it is advocated as a way to remedy the alienation of voters from politicians who seem to be oblivious to the concerns of the common man and woman, with an ensuing decline in public trust in government. Representation by political parties is ridden with problems, so the argument goes, and in any case it is overly focused on the act of voting in elections – a one-off event once every few years which limits citizens’ ability to control the policy agenda. On the other hand, various forms of public participation are expected to educate citizens, help develop a civic culture, and boost the legitimacy of decision-making. Consequently, practices to ensure that citizens can provide direct input into policy-making are to be welcomed on both pragmatic and normative grounds.  

I do not disagree with these generally positive expectations. However, the main objective of my recent article in Policy and Politics, co-authored with Sara Svensson, is to inject a dose of healthy scepticism into the debate or, more precisely, to show that there are circumstances in which public consultations will achieve anything but greater legitimacy and better policy-outcomes. We do this partly by discussing the more questionable assumptions in the participatory governance literature, and partly by examining a recent, glaring example of the misuse, and abuse, of popular input….(More)”.

Privacy Enhancing Technologies


The Royal Society: “How can technologies help organisations and individuals protect data in practice and, at the same time, unlock opportunities for data access and use?

The Royal Society’s Privacy Enhancing Technologies project has been investigating this question and has launched a report (PDF) setting out the current use, development and limits of privacy enhancing technologies (PETs) in data analysis. 

The data we generate every day holds a lot of value and potentially also contains sensitive information that individuals or organisations might not wish to share with everyone. The protection of personal or sensitive data featured prominently in the social and ethical tensions identified in our British Academy and Royal Society report Data management and use: Governance in the 21st century. For example, how can organisations best use data for public good whilst protecting sensitive information about individuals? Under other circumstances, how can they share data with groups with competing interests whilst protecting commercially or otherwise sensitive information?

Realising the full potential of large-scale data analysis may be constrained by important legal, reputational, political, business and competition concerns.  Certain risks can potentially be mitigated and managed with a set of emerging technologies and approaches often collectively referred to as ‘Privacy Enhancing Technologies’ (PETs). 

This disruptive set of technologies, combined with changes in wider policy and business frameworks, could enable the sharing and use of data in a privacy-preserving manner. They also have the potential to reshape the data economy and to change the trust relationships between citizens, governments and companies.

This report provides a high-level overview of five current and promising PETs of a diverse nature, with their respective readiness levels and illustrative case studies from a range of sectors, with a view to inform in particular applied data science research and the digital strategies of government departments and businesses. This report also includes recommendations on how the UK could fully realise the potential of PETs and to allow their use on a greater scale.

The project was informed by a series of conversations and evidence gathering events, involving a range of stakeholders across academia, government and the private sector (also see the project terms of reference and Working Group)….(More)”.

Number of fact-checking outlets surges to 188 in more than 60 countries


Mark Stencel at Poynter: “The number of fact-checking outlets around the world has grown to 188 in more than 60 countries amid global concerns about the spread of misinformation, according to the latest tally by the Duke Reporters’ Lab.

Since the last annual fact-checking census in February 2018, we’ve added 39 more outlets that actively assess claims from politicians and social media, a 26% increase. The new total is also more than four times the 44 fact-checkers we counted when we launched our global database and map in 2014.

Globally, the largest growth came in Asia, which went from 22 to 35 outlets in the past year. Nine of the 27 fact-checking outlets that launched since the start of 2018 were in Asia, including six in India. Latin American fact-checking also saw a growth spurt in that same period, with two new outlets in Costa Rica, and others in Mexico, Panama and Venezuela.

The actual worldwide total is likely much higher than our current tally. That’s because more than a half-dozen of the fact-checkers we’ve added to the database since the start of 2018 began as election-related partnerships that involved the collaboration of multiple organizations. And some of those election partners are discussing ways to continue or reactivate that work, either together or on their own.

Over the past 12 months, five separate multimedia partnerships enlisted more than 60 different fact-checking organizations and other news companies to help debunk claims and verify information for voters in Mexico, Brazil, Sweden, Nigeria and the Philippines. And the Poynter Institute’s International Fact-Checking Network assembled a separate team of 19 media outlets from 13 countries to consolidate and share their reporting during the run-up to last month’s elections for the European Parliament. Our database includes each of these partnerships, along with several others, but not each of the individual partners. And because they were intentionally short-run projects, three of these big partnerships appear among the 74 inactive projects we also document in our database.

Politics isn’t the only driver for fact-checkers. Many outlets in our database are concentrating efforts on viral hoaxes and other forms of online misinformation — often in coordination with the big digital platforms on which that misinformation spreads.

We also continue to see new topic-specific fact-checkers such as Metafact in Australia and Health Feedback in France, both of which launched in 2018 to focus on claims about health and medicine for a worldwide audience….(More)”.

We should extend EU bank data sharing to all sectors


Carlos Torres Vila in the Financial Times: “Data is now driving the global economy — just look at the list of the world’s most valuable companies. They collect and exploit the information that users generate through billions of online interactions taking place every day. 

But companies are hoarding data too, preventing others, including the users to whom the data relates, from accessing and using it. This is true of traditional groups such as banks, telcos and utilities, as well as the large digital enterprises that rely on “proprietary” data.

Global and national regulators must address this problem by forcing companies to give users an easy way to share their own data, if they so choose. This is the logical consequence of personal data belonging to users. There is also the potential for enormous socio-economic benefits if we can create consent-based free data flows.

We need data-sharing across companies in all sectors in a real time, standardised way — not at a speed and in a format dictated by the companies that stockpile user data. These new rules should apply to all electronic data generated by users, whether provided directly or observed during their online interactions with any provider, across geographic borders and in any sector. This could include everything from geolocation history and electricity consumption to recent web searches, pension information or even most recently played songs. 

This won’t be easy to achieve in practice, but the good news is that we already have a framework that could be the model for a broader solution. The UK’s Open Banking system provides a tantalising glimpse of what may be possible. In Europe, the regulation known as the Payment Services Directive 2 allows banking customers to share data about their transactions with multiple providers via secure, structured IT interfaces. We are already seeing this unlock new business models and drive competition in digital financial services. But these rules do not go far enough — they only apply to payments history, and that isn’t enough to push forward a data-driven economic revolution across other sectors of the economy. 

We need a global framework with common rules across regions and sectors. This has already happened in financial services: after the 2008 financial crisis, the G20 strengthened global banking standards and created the Financial Stability Board. The rules, while not perfect, have delivered uniformity which has strengthened the system. 

We need a similar global push for common rules on the use of data. While it will be difficult to achieve consensus on data, and undoubtedly more difficult still to implement and enforce it, I believe that now is the time to decide what we want. The involvement of the G20 in setting up global standards will be essential to realising the potential that data has to deliver a better world for all of us. There will be complaints about the cost of implementation. I know first hand how expensive it can be to simultaneously open up and protect sensitive core systems. 

The alternative is siloed data that holds back innovation. There will also be justified concerns that easier data sharing could lead to new user risks. Security must be a non-negotiable principle in designing intercompany interfaces and protecting access to sensitive data. But Open Banking shows that these challenges are resolvable. …(More)”.

France Bans Judge Analytics, 5 Years In Prison For Rule Breakers


Artificial Lawyer: “In a startling intervention that seeks to limit the emerging litigation analytics and prediction sector, the French Government has banned the publication of statistical information about judges’ decisions – with a five year prison sentence set as the maximum punishment for anyone who breaks the new law.

Owners of legal tech companies focused on litigation analytics are the most likely to suffer from this new measure.

The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.

A key passage of the new law states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’ *

As far as Artificial Lawyer understands, this is the very first example of such a ban anywhere in the world.

Insiders in France told Artificial Lawyer that the new law is a direct result of an earlier effort to make all case law easily accessible to the general public, which was seen at the time as improving access to justice and a big step forward for transparency in the justice sector.

However, judges in France had not reckoned on NLP and machine learning companies taking the public data and using it to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges.

In short, they didn’t like how the pattern of their decisions – now relatively easy to model – was potentially open for all to see.

Unlike in the US and the UK, where judges appear to have accepted the fait accompli of legal AI companies analysing their decisions in extreme detail and then creating models as to how they may behave in the future, French judges have decided to stamp it out….(More)”.