Rethink government with AI


Helen Margetts and Cosmina Dorobantu at Nature: “People produce more than 2.5 quintillion bytes of data each day. Businesses are harnessing these riches using artificial intelligence (AI) to add trillions of dollars in value to goods and services each year. Amazon dispatches items it anticipates customers will buy to regional hubs before they are purchased. Thanks to the vast extractive might of Google and Facebook, every bakery and bicycle shop is the beneficiary of personalized targeted advertising.

But governments have been slow to apply AI to hone their policies and services. The reams of data that governments collect about citizens could, in theory, be used to tailor education to the needs of each child or to fit health care to the genetics and lifestyle of each patient. They could help to predict and prevent traffic deaths, street crime or the necessity of taking children into care. Huge costs of floods, disease outbreaks and financial crises could be alleviated using state-of-the-art modelling. All of these services could become cheaper and more effective.

This dream seems rather distant. Governments have long struggled with much simpler technologies. Flagship policies that rely on information technology (IT) regularly flounder. The Affordable Care Act of former US president Barack Obama nearly crumbled in 2013 when HealthCare.gov, the website enabling Americans to enrol in health insurance plans, kept crashing. Universal Credit, the biggest reform to the UK welfare state since the 1940s, is widely regarded as a disaster because of its failure to pay claimants properly. It has also wasted £837 million (US$1.1 billion) on developing one component of its digital system that was swiftly decommissioned. Canada’s Phoenix pay system, introduced in 2016 to overhaul the federal government’s payroll process, has remunerated 62% of employees incorrectly in each fiscal year since its launch. And My Health Record, Australia’s digital health-records system, saw more than 2.5 million people opt out by the end of January this year over privacy, security and efficacy concerns — roughly 1 in 10 of those who were eligible.

Such failures matter. Technological innovation is essential for the state to maintain its position of authority in a data-intensive world. The digital realm is where citizens live and work, shop and play, meet and fight. Prices for goods are increasingly set by software. Work is mediated through online platforms such as Uber and Deliveroo. Voters receive targeted information — and disinformation — through social media.

Thus the core tasks of governments, such as enforcing regulation, setting employment rights and ensuring fair elections, require an understanding of data and algorithms. Here we highlight the main priorities, drawn from our experience of working with policymakers at The Alan Turing Institute in London….(More)”.

Ethics guidelines for trustworthy AI


European Commission: “Following the publication of the draft ethics guidelines in December 2018, which received more than 500 comments, the independent expert group presents today their ethics guidelines for trustworthy artificial intelligence.

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Drawing on this review, the Commission will evaluate the outcome and propose any next steps….(More)”.

The Politics of Referendum Use in European Democracies


Book by Saskia Hollander: “This book demonstrates that the generally assumed dichotomy between referendums and representative democracy does not do justice to the great diversity of referendum types and of how referendums are used in European democracies. Although in all referendums citizens vote directly on issues rather than letting their political representatives do this for them, some referendums are more direct than others.

Rather than reflecting the direct power of the People, most referendums in EU countries are held by, and serve the interests of, the political elites, most notably the executive. The book shows that these interests rarely match the justifications given in the public debate. Instead of being driven by the need to compensate for the deficiency of political parties, decision-makers use referendums primarily to protect the position of their party. In unravelling the strategic role played by national referendums in decision-making, this book makes an unconventional contribution to the debate on the impact of referendums on democracy….(More)”

Does increased ‘participation’ equal a new-found enthusiasm for democracy?


Blog by Stephen King and Paige Nicol: “With a few months under our belts, 2019 looks unlikely to be the year of a great global turnaround for democracy. The decade of democratic ‘recession’ that Larry Diamond declared in 2015 has dragged on and deepened, and may now be teetering on the edge of becoming a full-blown depression. 

The start of each calendar year is marked by the release of annual indices, rankings, and reports on how democracy is faring around the world. 2018 reports from Freedom House and the Economist Intelligence Unit (EIU) highlighted precipitous declines in civil liberties in long-standing democracies as well as authoritarian states. Some groups, including migrants, women, ethnic and other minorities, opposition politicians, and journalists have been particularly affected by these setbacks. According to the Committee to Protect Journalists, the number of journalists murdered nearly doubled last year, while the number imprisoned remained above 250 for the third consecutive year. 

Yet, the EIU also found a considerable increase in political participation worldwide. Levels of participation (including voting, protesting, and running for elected office, among other dimensions) increased substantially enough last year to offset falling scores in the other four categories of the index. Based on the methodology used, the rise in political participation was significant enough to prevent a decline in the global overall score for democracy for the first time in three years.

Though this development could give cause for optimism, we believe it could also raise new concerns. 

In Zimbabwe, Sudan, and Venezuela we see people who, through desperation and frustration, have taken to the streets – a form of participation which has been met with brutal crackdowns. Time has yet to tell what the ultimate outcome of these protests will be, but it is clear that governments with autocratic tendencies have more – and cheaper – tools to monitor, direct, control, and suppress participation than ever before. 

Elsewhere, we see a danger of people becoming dislocated and disenchanted with democracy, as their representatives fail to take meaningful action on the issues that matter to them. In the UK Parliament, as Brexit discussions have become increasingly polarised and fractured along party political and ideological lines, Foreign Secretary Jeremy Hunt warned that there was a threat of social unrest if Parliament was seen to be frustrating the ‘will of the people.’ 

While we see enhanced participation as crucial to just and fair societies, it alone will not be the silver bullet that saves democracy. Whether this trend becomes a cause for hope or concern will depend on three factors: who is participating, what form does participation take, and how is participation received by those with power?…(More)”.

Social capital predicts corruption risk in towns


Paper by Johannes Wachs, Taha Yasseri, Balázs Lengyel and János Kertész: “Corruption is a social plague: gains accrue to small groups, while its costs are borne by everyone. Significant variation in its level between and within countries suggests a relationship between social structure and the prevalence of corruption, yet, large-scale empirical studies thereof have been missing due to lack of data. In this paper, we relate the structural characteristics of social capital of settlements with corruption in their local governments. Using datasets from Hungary, we quantify corruption risk by suppressed competition and lack of transparency in the settlement’s awarded public contracts. We characterize social capital using social network data from a popular online platform. Controlling for social, economic and political factors, we find that settlements with fragmented social networks, indicating an excess of bonding social capital, have higher corruption risk, and settlements with more diverse external connectivity, suggesting a surplus of bridging social capital, are less exposed to corruption. We interpret fragmentation as fostering in-group favouritism and conformity, which increase corruption, while diversity facilitates impartiality in public life and stifles corruption….(More)”.

Data-driven models of governance across borders


Introduction to Special Issue of First Monday, edited by Payal Arora and Hallam Stevens: “This special issue looks closely at contemporary data systems in diverse global contexts and through this set of papers, highlights the struggles we face as we negotiate efficiency and innovation with universal human rights and social inclusion. The studies presented in these essays are situated in diverse models of policy-making, governance, and/or activism across borders. Attention to big data governance in western contexts has tended to highlight how data increases state and corporate surveillance of citizens, affecting rights to privacy. By moving beyond Euro-American borders — to places such as Africa, India, China, and Singapore — we show here how data regimes are motivated and understood on very different terms….

To establish a kind of baseline, the special issue opens by considering attitudes toward big data in Europe. René König’s essay examines the role of “citizen conferences” in understanding the public’s view of big data in Germany. These “participatory technology assessments” demonstrated that citizens were concerned about the control of big data (should it be under the control of the government or individuals?), about the need for more education about big data technologies, and the need for more government regulation. Participants expressed, in many ways, traditional liberal democratic views and concerns about these technologies centered on individual rights, individual responsibilities, and education. Their proposed solutions too — more education and more government regulation — fit squarely within western liberal democratic traditions.

In contrast to this, Payal Arora’s essay draws us immediately into the vastly different contexts of data governance in India and China. India’s Aadhaar biometric identification system, through tracking its citizens with iris scanning and other measures, promises to root out corruption and provide social services to those most in need. Likewise, China’s emerging “social credit system,” while having immense potential for increasing citizen surveillance, offers ways of increasing social trust and fostering more responsible social behavior online and offline. Although the potential for authoritarian abuses of both systems is high, Arora focuses on how these technologies are locally understood and lived on an everyday basis, with effects that range from empowering to oppressing people. From this perspective, the technologies offer modes of “disrupt[ing] systems of inequality and oppression” that should open up new conversations about what democratic participation can and should look like in China and India.

If China and India offer contrasting non-democratic and democratic cases, we turn next to a context that is neither completely western nor completely non-western, neither completely democratic nor completely liberal. Hallam Stevens’ account of government data in Singapore suggests the very different role that data can play in this unique political and social context. Although the island state’s data.gov.sg participates in global discourses of sharing, “open data,” and transparency, much of the data made available by the government is oriented towards the solution of particular economic and social problems. Ultimately, the ways in which data are presented may contribute to entrenching — rather than undermining or transforming — existing forms of governance. The account of data and its meanings that is offered here once again challenges the notion that such data systems can or should be understood in the same ways that similar systems have been understood in the western world.

If systems such as Aadhaar, “social credit,” and data.gov.sg profess to make citizens and governments more visible and legible, Rolien Hoyng examines what may remain invisible even within highly pervasive data-driven systems. In the world of e-waste, data-driven modes of surveillance and logistics are critical for recycling. But many blind spots remain. Hoyng’s account reminds us that despite the often-supposed all-seeing-ness of big data, we should remain attentive to what escapes the data’s gaze. Here, in the midst of datafication, we find “invisibility, uncertainty, and, therewith, uncontrollability.” This points also to the gap between the fantasies of how data-driven systems are supposed to work, and their realization in the world. Such interstices allow individuals — those working with e-waste in Shenzhen or Africa, for example — to find and leverage hidden opportunities. From this perspective, the “blind spots of big data” take on a very different significance.

Big data systems provide opportunities for some, but reduce those for others. Mark Graham and Mohammad Amir Anwar examine what happens when online outsourcing platforms create a “planetary labor market.” Although providing opportunities for many people to make money via their Internet connection, Graham and Anwar’s interviews with workers across sub-Saharan Africa demonstrate how “platform work” alters the balance of power between labor and capital. For many low-wage workers across the globe, the platform- and data-driven planetary labor market means downward pressure on wages, fewer opportunities to collectively organize, less worker agency, and less transparency about the nature of the work itself. Moving beyond bold pronouncements that the “world is flat” and big data as empowering, Graham and Anwar show how data-driven systems of employment can act to reduce opportunities for those residing in the poorest parts of the world. The affordances of data and platforms create a planetary labor market for global capital but tie workers ever-more tightly to their own localities. Once again, the valences of global data systems look very different from this “bottom-up” perspective.

Philippa Metcalfe and Lina Dencik shift this conversation from the global movement of labor to that of people, as they write about the implications of European datafication systems for the governance of refugees entering this region. This work highlights how intrinsic to datafication systems is the classification, coding, and collating of people to legitimize the extent of their belonging in the society they seek to live in. The authors argue that these datafied regimes of power have substantively increased their role in regulating human mobility under the guise of national security. These means of data surveillance can foster new forms of containment and entrapment of entire groups of people, creating further divides between “us” and “them.” Through vast interoperable databases, digital registration processes, biometric data collection, and social media identity verification, refugees have become some of the most monitored groups at a global level while at the same time, their struggles remain the most invisible in popular discourse….(More)”.

Trustworthy Privacy Indicators: Grades, Labels, Certifications and Dashboards


Paper by Joel R. Reidenberg et al: “Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, however, remain long, complex, and impractical for consumers. Communicating in some short-hand form, synthesized privacy content is now crucial to empower internet users and provide them more meaningful notice, as well as nudge consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated for the development of privacy indicator systems.

Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to wide-scale deployment as meaningful and trustworthy privacy indicators. This paper seeks to identify and explain these deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user-recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods for selecting and weighting privacy evaluation criteria, and difficulties interpreting language that is often ambiguous and vague, jeopardize success and reliability when baked into an indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when those organizations rating or certifying the privacy practices are not objective, trustworthy, and sustainable.

Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework using examples from prior and current attempts to create privacy indicator systems in order to provide a valuable resource for present-day, real world policymaking….(More)”.

Nudging the dead: How behavioural psychology inspired Nova Scotia’s organ donation scheme


Joseph Brean at National Post: “Nova Scotia’s decision to presume people’s consent to donating their organs after death is not just a North American first. It is also the latest example of how deeply behavioural psychology has changed policy debates.

That is a rare achievement for science. Governments used to appeal to people’s sense of reason, religion, civic duty, or fear of consequences. Today, when they want to change how their citizens behave, they use psychological tricks to hack their minds.

Nudge politics, as it came to be known, has been an intellectual hit among wonks and technocrats ever since Daniel Kahneman won the Nobel Prize in 2002 for destroying the belief that people make decisions based on good information and reasonable expectations. Not so, he showed. Not even close. Human decision-making is an organic process, all but immune to reason, but strangely susceptible to simple environmental cues, just waiting to be exploited by a clever policymaker….

Organ donation is a natural fit. Nova Scotia’s experiment aims to solve a policy problem by getting people to do what they always tend to do about government requests — nothing.

The cleverness is evident in the N.S. government’s own words, which play on the meaning of “opportunity”: “Every Nova Scotian will have the opportunity to be an organ and tissue donor unless they opt out.” The policy applies to kidneys, pancreas, heart, liver, lungs, small bowel, cornea, sclera, skin, bones, tendons and heart valves.

It is so clever it aims to make progress as people ignore it. The default position is a positive for the policy. It assumes poor pickup. You can opt out of organ donation if you want. Nova Scotia is simply taking the informed gamble that you probably won’t. That is the goal, and it will make for a revealing case study.

Organ donation is an important question, and chronically low donation rates can reasonably be called a crisis. But most people make their personal choice “thoughtlessly,” as Kahneman wrote in the 2011 book Thinking, Fast and Slow.

He referred to European statistics which showed vast differences in organ donation rates between neighbouring and culturally similar countries, such as Sweden and Denmark, or Germany and Austria. The key difference, he noted, was what he called “framing effects,” or how the question was asked….(More)”.

Understanding algorithmic decision-making: Opportunities and challenges


Study by Claude Castelluccia and Daniel Le Métayer for the European Parliament: “While algorithms are hardly a recent invention, they are nevertheless increasingly involved in systems used to support decision-making. These systems, known as ‘ADS’ (algorithmic decision systems), often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions. Human intervention in the decision-making may vary, and may even be completely out of the loop in entirely automated systems. In many situations, the impact of the decision on people can be significant, such as access to credit, employment, medical treatment, or judicial sentences, among other things.

Entrusting ADS to make or to influence such decisions raises a variety of ethical, political, legal, and technical issues, which must be analysed and addressed with great care. If they are neglected, the expected benefits of these systems may be negated by a variety of different risks for individuals (discrimination, unfair practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.), and society as a whole (manipulation, threat to democracy, etc.).

This study reviews the opportunities and risks related to the use of ADS. It presents policy options to reduce the risks and explains their limitations. We sketch some options to overcome these limitations to be able to benefit from the tremendous possibilities of ADS while limiting the risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study gives a precise definition of a number of key terms and an analysis of their differences to help clarify the debate. The main focus of the study is the technical aspects of ADS. However, to broaden the discussion, other legal, ethical and social dimensions are considered….(More)”.

Data: The Lever to Promote Innovation in the EU


Blog Post by Juan Murillo Arias: “…But in order for data to truly become a lever that foments innovation in benefit of society as a whole, we must understand and address the following factors:

1. Disconnected, dispersed sources. As users of digital services (transportation, finance, telecommunications, news or entertainment) we leave a different digital footprint for each service that we use. These footprints, which are different facets of the same polyhedron, can even be contradictory on occasion. For this reason, they must be seen as complementary. Analysts should be aware that they must cross data sources from different origins in order to create a reliable picture of our preferences, otherwise we will be basing decisions on partial or biased information. How many times do we receive advertising for items we have already purchased, or tourist destinations where we have already been? And this is just one example of digital marketing. When scoring financial solvency, or monitoring health, the more complete the digital picture is of the person, the more accurate the diagnosis will be.

Furthermore, from the user’s standpoint, proper management of their entire, dispersed digital footprint is a challenge. Perhaps centralized consent would be very beneficial. In the financial world, the PSD2 regulations have already forced banks to open this information to other banks if customers so desire. Fostering competition and facilitating portability is the purpose, but this opening up has also enabled the development of new services of information aggregation that are very useful to financial services users. It would be ideal if this step of breaking down barriers and moving toward a more transparent market took place simultaneously in all sectors in order to avoid possible distortions to competition and, by extension, consumer harm. Therefore, customer consent would open the door to building a more accurate picture of our preferences.

2. The public and private sectors’ asymmetric capacity to gather data. This is related to citizens using public services less frequently than private services in the new digital channels. However, governments could benefit from the information possessed by private companies. These anonymous, aggregated data can help to ensure a more dynamic public management. Even personal data could open the door to customized education or healthcare on an individual level. In order to analyze all of this, the European Commission has created a working group including 23 experts. The purpose is to come up with a series of recommendations regarding the best legal, technical and economic framework to encourage this information transfer across sectors.

3. The lack of incentives for companies and citizens to encourage the reuse of their data. The reality today is that most companies solely use the sources internally. Only a few have decided to explore data sharing through different models (for academic research or for the development of commercial services). As a result of this and other factors, the public sector largely continues using the survey method to gather information instead of reading the digital footprint citizens produce. Multiple studies have demonstrated that this digital footprint would be useful to describe socioeconomic dynamics and monitor the evolution of official statistical indicators. However, these studies have rarely gone on to become pilot projects due to the lack of incentives for a private company to open up to the public sector, or to society in general, making this new activity sustainable.

4. Limited commitment to the diversification of services. Another barrier is the fact that information-based product development is somewhat removed from the type of services that the main data generators (telecommunications, banks, commerce, electricity, transportation, etc.) traditionally provide. Therefore, these data-based initiatives are not part of their main business and are more closely tied to companies’ innovation areas, where exploratory proofs of concept are often not consolidated as a new line of business.

5. Bidirectionality. Data should also flow from the public sector to the rest of society. The first regulatory framework was created for this purpose. Although it is still very recent (the PSI Directive on the re-use of public sector data was passed in 2013), it is currently being revised, in an attempt to foster the consolidation of an open data ecosystem that emanates from the public sector as well. On the one hand it would enable greater transparency, and on the other, the development of solutions to improve multiple fields in which public actors are key, such as the environment, transportation and mobility, health, education, justice and the planning and execution of public works. Special emphasis will be placed on high value data sets, such as statistical or geospatial data — data with tremendous potential to accelerate the emergence of a wide variety of information-based data products and services that add value. The Commission will begin working with the Member States to identify these data sets.

In its report, Creating Value through Open Data, the European Data Portal estimates that government agencies making their data accessible will inject an extra €65 billion into the EU economy this year.

6. The commitment to analytical training and financial incentives for innovation. These are the key factors that have given rise to the digital unicorns that have emerged, more so in the U.S. and China than in Europe….(More)”