The Technology Trap: Capital, Labor, and Power in the Age of Automation


Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Europe and China. Today’s despairing middle class has not resorted to physical force, but their frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes to technology will persist.
The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that in the midst of another technological revolution, the lessons of the past can help us to more effectively face the present….(More)”.

What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

Conceived in 2017, shortly after Finland’s A.I. strategy was announced, the government wants to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and has launched a national program to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence can have many positive applications, from identifying cancerous cells in biopsy screenings and predicting weather patterns that help farmers increase their crop yields, to improving traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all society will be impacted by A.I. and therefore its creators have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in how it is used, and to understand that use.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

Our data, our society, our health: a vision for inclusive and transparent health data science in the UK and Beyond


Paper by Elizabeth Ford et al in Learning Health Systems: “The last six years have seen sustained investment in health data science in the UK and beyond, which should result in a data science community that is inclusive of all stakeholders, working together to use data to benefit society through the improvement of public health and wellbeing.

However, opportunities made possible through the innovative use of data are still not being fully realised, resulting in research inefficiencies and avoidable health harms. In this paper we identify the most important barriers to achieving higher productivity in health data science. We then draw on previous research, domain expertise, and theory, to outline how to go about overcoming these barriers, applying our core values of inclusivity and transparency.

We believe a step-change can be achieved through meaningful stakeholder involvement at every stage of research planning, design and execution; team-based data science; as well as harnessing novel and secure data technologies. Applying these values to health data science will safeguard a social license for health data research, and ensure transparent and secure data usage for public benefit….(More)”.

Transparency, Fairness, Data Protection, Neutrality: Data Management Challenges in the Face of New Regulation


Paper by Serge Abiteboul and Julia Stoyanovich: “The data revolution continues to transform every sector of science, industry and government. Due to the incredible impact of data-driven technology on society, we are becoming increasingly aware of the imperative to use data and algorithms responsibly — in accordance with laws and ethical norms. In this article we discuss three recent regulatory frameworks: the European Union’s General Data Protection Regulation (GDPR), the New York City Automated Decisions Systems (ADS) Law, and the Net Neutrality principle, all of which aim to protect the rights of individuals who are impacted by data collection and analysis. These frameworks are prominent examples of a global trend: Governments are starting to recognize the need to regulate data-driven algorithmic technology.


Our goal in this paper is to bring these regulatory frameworks to the attention of the data management community, and to underscore the technical challenges they raise, which we, as a community, are well equipped to address. The main take-away of this article is that legal and ethical norms cannot be incorporated into data-driven systems as an afterthought. Rather, we must think in terms of responsibility by design, viewing it as a systems requirement….(More)”

PayStats helps assess the impact of the low-emission area Madrid Central


BBVA API Market: “How do town-planning decisions affect a city’s routines? How can data help assess and make decisions? The granularity and detailed information offered by PayStats allowed Madrid’s city council to draw a more accurate map of consumer behavior and gain an objective measurement of the impact of the traffic restriction measures on commercial activity.

In this case, 20 million aggregate and anonymized transactions with BBVA cards and any other card at BBVA POS terminals were analyzed to study the effect of the changes made by Madrid’s city council to road access to the city center.

The BBVA PayStats API is targeted at all kinds of organizations, including the public sector, as in this case. Madrid’s city council used it to find out how restricting car access to Madrid Central impacted Christmas shopping. Using information gathered between December 1, 2018 and January 7, 2019, the council compared data from the last two Christmas seasons, and compared the revenue increase in Madrid Central (Gran Vía and five subareas) with the increase across the entire city.

According to the report drawn up by council experts, 5.984 billion euros were spent across the city. The sample shows a 3.3% increase in spending in Madrid when compared to the same time the previous year; this goes up to 9.5% in Gran Vía and reaches 8.6% in the central area….(More)”.
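The year-over-year comparison described above is straightforward to reproduce on any aggregated spending dataset. The sketch below is illustrative only: the area names and totals are hypothetical placeholders chosen for the example, not actual PayStats figures, and the real API returns far richer aggregates.

```python
# Hypothetical aggregated card-spending totals (indexed, not real PayStats
# data) for the same Christmas period in two consecutive years.
spend = {
    "whole_city": {"prev_year": 100.0, "this_year": 103.3},
    "gran_via": {"prev_year": 100.0, "this_year": 109.5},
    "central_area": {"prev_year": 100.0, "this_year": 108.6},
}

def pct_increase(prev: float, curr: float) -> float:
    """Year-over-year percentage change between two spending totals."""
    return (curr - prev) / prev * 100.0

for area, totals in spend.items():
    change = pct_increase(totals["prev_year"], totals["this_year"])
    print(f"{area}: {change:+.1f}%")
```

Comparing the per-area percentage changes against the city-wide baseline, rather than raw totals, is what lets an analysis like this isolate the local effect of a policy such as the Madrid Central traffic restrictions.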

Data Trusts as an AI Governance Mechanism


Paper by Chris Reed and Irene YH Ng: “This paper is a response to the Singapore Personal Data Protection Commission consultation on a draft AI Governance Framework. It analyses the five data trust models proposed by the UK Open Data Institute and identifies that only the contractual and corporate models are likely to be legally suitable for achieving the aims of a data trust.

The paper further explains how data trusts might be used in the governance of AI, and investigates the barriers which Singapore’s data protection law presents to the use of data trusts and how those barriers might be overcome. Its conclusion is that a mixed contractual/corporate model, with an element of regulatory oversight and audit to ensure consumer confidence that data is being used appropriately, could produce a useful AI governance tool…(More)”.

Democracy vs. Disinformation


Ana Palacio at Project Syndicate: “These are difficult days for liberal democracy. But of all the threats that have arisen in recent years – populism, nationalism, illiberalism – one stands out as a key enabler of the rest: the proliferation and weaponization of disinformation.

The threat is not a new one. Governments, lobby groups, and other interests have long relied on disinformation as a tool of manipulation and control.

What is new is the ease with which disinformation can be produced and disseminated. Advances in technology allow for the increasingly seamless manipulation or fabrication of video and audio, while the pervasiveness of social media enables false information to be rapidly amplified among receptive audiences.

Beyond introducing falsehoods into public discourse, the spread of disinformation can undermine the possibility of discourse itself, by calling into question actual facts. This “truth decay” – apparent in the widespread rejection of experts and expertise – undermines the functioning of democratic systems, which depend on the electorate’s ability to make informed decisions about, say, climate policy or the prevention of communicable diseases.

The West has been slow to recognize the scale of this threat. It was only after the 2016 Brexit referendum and US presidential election that the power of disinformation to reshape politics began to attract attention. That recognition was reinforced in 2017, during the French presidential election and the illegal referendum on Catalan independence.

Now, systematic efforts to fight disinformation are underway. So far, the focus has been on tactical approaches, targeting the “supply side” of the problem: unmasking Russia-linked fake accounts, blocking disreputable sources, and adjusting algorithms to limit public exposure to false and misleading news. Europe has led the way in developing policy responses, such as soft guidelines for industry, national legislation, and strategic communications.

Such tactical actions – which can be implemented relatively easily and bring tangible results quickly – are a good start. But they are not nearly enough.

To some extent, Europe seems to recognize this. Early this month, the Atlantic Council organized #DisinfoWeek Europe, a series of strategic dialogues focused on the global challenge of disinformation. And more ambitious plans are already in the works, including French President Emmanuel Macron’s recently proposed European Agency for the Protection of Democracies, which would counter hostile manipulation campaigns.

But, as is so often the case in Europe, the gap between word and deed is vast, and it remains to be seen how all of this will be implemented and scaled up. In any case, even if such initiatives do get off the ground, they will not succeed unless they are accompanied by efforts that tackle the demand side of the problem: the factors that make liberal democratic societies today so susceptible to manipulation….(More)”.

Regulating disinformation with artificial intelligence


Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.

Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context, behavioural economics and technological regulation. Chapter 3 maps and evaluates existing regulatory and technological responses to disinformation. In Chapter 4, policy options are presented, paying particular attention to interactions between technological solutions, freedom of expression and media pluralism….(More)”.

China, India and the rise of the ‘civilisation state’


Gideon Rachman at the Financial Times: “The 19th century popularised the idea of the “nation state”. The 21st could be the century of the “civilisation state”. A civilisation state is a country that claims to represent not just a historic territory or a particular language or ethnic group, but a distinctive civilisation.

It is an idea that is gaining ground in states as diverse as China, India, Russia, Turkey and, even, the US. The notion of the civilisation state has distinctly illiberal implications. It implies that attempts to define universal human rights or common democratic standards are wrong-headed, since each civilisation needs political institutions that reflect its own unique culture. The idea of a civilisation state is also exclusive. Minority groups and migrants may never fit in because they are not part of the core civilisation.

One reason that the idea of the civilisation state is likely to gain wider currency is the rise of China. In speeches to foreign audiences, President Xi Jinping likes to stress the unique history and civilisation of China. This idea has been promoted by pro-government intellectuals, such as Zhang Weiwei of Fudan university. In an influential book, The China Wave: Rise of a Civilisational State, Mr Zhang argues that modern China has succeeded because it has turned its back on western political ideas — and instead pursued a model rooted in its own Confucian culture and exam-based meritocratic traditions. Mr Zhang was adapting an idea first elaborated by Martin Jacques, a western writer, in a bestselling book, When China Rules The World. “China’s history of being a nation state”, Mr Jacques argues, “dates back only 120-150 years: its civilisational history dates back thousands of years.” He believes that the distinct character of Chinese civilisation leads to social and political norms that are very different from those prevalent in the west, including “the idea that the state should be based on familial relations [and] a very different view of the relationship between the individual and society, with the latter regarded as much more important”. …

Civilisational views of the state are also gaining ground in Russia. Some of the ideologues around Vladimir Putin now embrace the idea that Russia represents a distinct Eurasian civilisation, which should never have sought to integrate with the west. In a recent article Vladislav Surkov, a close adviser to the Russian president, argued that his country’s “repeated fruitless efforts to become a part of western civilisation are finally over”. Instead, Russia should embrace its identity as “a civilisation that has absorbed both east and west” with a “hybrid mentality, intercontinental territory and bipolar history. It is charismatic, talented, beautiful and lonely. Just as a half-breed should be.” In a global system moulded by the west, it is unsurprising that some intellectuals in countries such as China, India or Russia should want to stress the distinctiveness of their own civilisations.

What is more surprising is that rightwing thinkers in the US are also retreating from the idea of “universal values” — in favour of emphasising the unique and allegedly endangered nature of western civilisation….(More)”.

Data Trusts: Ethics, Architecture and Governance for Trustworthy Data Stewardship


Web Science Institute Paper by Kieron O’Hara: “In their report on the development of the UK AI industry, Wendy Hall and Jérôme Pesenti recommend the establishment of data trusts, “proven and trusted frameworks and agreements” that will “ensure exchanges [of data] are secure and mutually beneficial” by promoting trust in the use of data for AI. Hall and Pesenti leave the structure of data trusts open, and the purpose of this paper is to explore the questions of (a) what existing structures data trusts can exploit, and (b) what relationship data trusts have to trusts as they are understood in law.

The paper defends the following thesis: a data trust works within the law to provide ethical, architectural and governance support for trustworthy data processing.

Data trusts are therefore both constraining and liberating. They constrain: they respect current law, so they cannot render currently illegal actions legal. They are intended to increase trust, and so they will typically act as further constraints on data processors, adding the constraints of trustworthiness to those of law. Yet they also liberate: if data processors are perceived as trustworthy, they will get improved access to data.

Most work on data trusts has up to now focused on gaining and supporting the trust of data subjects in data processing. However, all actors involved in AI – data consumers, data providers and data subjects – have trust issues which data trusts need to address.

Furthermore, it is not only personal data that creates trust issues; the same may be true of any dataset whose release might involve an organisation risking competitive advantage. The paper addresses four areas….(More)”.