Data in Society: Challenging Statistics in an Age of Globalisation


Book edited by Jeff Evans, Sally Ruane and Humphrey Southall: “Statistical data and evidence-based claims are increasingly central to our everyday lives. Critically examining ‘Big Data’, this book charts the recent explosion in sources of data, including those precipitated by global developments and technological change. It sets out changes and controversies related to data harvesting and construction, dissemination and data analytics by a range of private, governmental and social organisations in multiple settings.

Analysing the power of data to shape political debate, the presentation of ideas to us by the media, and issues surrounding data ownership and access, the authors suggest how data can be used to uncover injustices and to advance social progress…(More)”.

Responsible data sharing in a big data-driven translational research platform: lessons learned


Paper by S. Kalkman et al: “The sharing of clinical research data is increasingly viewed as a moral duty [1]. Particularly in the context of making clinical trial data widely available, editors of international medical journals have labeled data sharing a highly efficient way to advance scientific knowledge [2,3,4]. The combination of even larger datasets into so-called “Big Data” is considered to offer even greater benefits for science, medicine and society [5]. Several international consortia have now promised to build grand-scale, Big Data-driven translational research platforms to generate better scientific evidence regarding disease etiology, diagnosis, treatment and prognosis across various disease areas [6,7,8].

Despite anticipated benefits, large-scale sharing of health data is charged with ethical questions. Stakeholders have been urged to consider how to manage privacy and confidentiality issues, ensure valid informed consent, and determine who gets to decide about data access [9]. More fundamentally, new data sharing activities prompt questions about social justice and public trust [10]. To balance potential benefits and ethical considerations, data sharing platforms require guidance for the processes of interaction and decision-making.

In the European Union (EU), legal norms specified for the sharing of personal data for health research, most notably those set out in the General Data Protection Regulation (GDPR) (EU 2016/679), remain open to interpretation and offer limited practical guidance to researchers [11,12,13]. Striking in this regard is that the GDPR itself stresses the importance of adherence to ethical standards when broad consent is put forward as a legal basis for the processing of personal data. For example, Recital 33 of the GDPR states that data subjects should be allowed to give “consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research” [14]. In fact, the GDPR encourages data controllers to establish self-regulating mechanisms, such as a code of conduct. To foster responsible and sustainable data sharing in translational research platforms, ethical guidance and governance are therefore necessary. Here, we define governance as ‘the processes of interaction and decision-making among the different stakeholders that are involved in a collective problem that lead to the creation, reinforcement, or reproduction of social norms and institutions’…(More)”.

From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy


Paper by Elettra Bietti: “The word ‘ethics’ is under siege in technology policy circles. Weaponized in support of deregulation, self-regulation or hands-off governance, “ethics” is increasingly identified with technology companies’ self-regulatory efforts and with shallow appearances of ethical behavior. So-called “ethics washing” by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community at large. In parallel to the growth of ethics washing, its condemnation has led to a tendency to engage in “ethics bashing.” This consists in the trivialization of ethics and moral philosophy, now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups.

The misunderstandings underlying ethics bashing are at least three-fold: (a) philosophy and “ethics” are seen as a communications strategy and as a form of instrumentalized cover-up or façade for unethical behavior; (b) philosophy is understood in opposition to, and as an alternative to, political representation and social organizing; and (c) the role and importance of moral philosophy is downplayed and portrayed as mere “ivory tower” intellectualization of complex problems that need to be dealt with in practice.

This paper argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of “ethics washing,” or by scholars and policy-makers in the form of “ethics bashing.” Grappling with the role of philosophy and ethics requires moving beyond both tendencies and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. In other words, we must resist the narrow reduction of moral philosophy to instrumentalized performance and renew our faith in its intrinsic moral value as a mode of knowledge-seeking and inquiry. Far from mandating a self-regulatory scheme or a given governance structure, moral philosophy in fact facilitates the questioning and reconsideration of any given practice, situating it within a complex web of legal, political and economic institutions. Moral philosophy can indeed shed new light on human practices by adding needed perspective, explaining the relationship between technology and other worthy goals, and situating technology within the human, the social, the political. It has become urgent to start considering technology ethics from within ethics, and not only from outside of it….(More)”.

Too much information? The new challenge for decision-makers


Daniel Winter at the Financial Times: “…Concern over technology’s capacity both to shrink the world and to complicate it has grown steadily since the second world war — little wonder, perhaps, when the existential threats it throws up have expanded from nuclear weapons to encompass climate change (and any consequent geoengineering), gene editing and AI. The financial crisis of 2008, in which poorly understood investment instruments made economies totter, has added to the unease over our ability to make sense of things.

Having first preoccupied cold war planners, attempts to codify best practice in sense-making have gone on to exercise (often profitably) business academics and management consultants, and now draw large audiences online.

Blogs, podcasts and YouTube channels such as Rebel Wisdom and Future Thinkers aim to arm their followers with the tools they need to understand the world, and make the right decisions. Daniel Schmachtenberger is one such voice; his YouTube interviews and his podcast Civilization Emerging have reached hundreds of thousands of people.

“Due to increasing technological capacity — increasing population multiplied by increasing impact per person — we’re making more and more consequential choices with worse and worse sense-making to inform those choices,” he says in one video. “Exponential tech is leading to exponential disinformation.”

Strengthening individuals’ ability to handle and filter information would go a long way towards improving the “information ecology”, Mr Schmachtenberger argues. People need to get used to handling complex information and should train themselves to be less distracted. “The impulse to say, ‘hey, make it really simple so everyone can get it’ and the impulse to say ‘[let’s] help people actually make sense of the world well’ are different things,” he says.

Of course, societies have long been accustomed to handling complexity. No one person can possibly memorise the entirety of US law or be an expert in every field of medicine. Libraries, databases, and professional and academic networks exist to aggregate expertise.

The increasing bombardment of data — the growing amount of evidence that can inform any course of action — pushes such systems to the limit, prompting people to offload the work to computers. Yet this only defers the problem. As AI becomes more sophisticated, its decision-making processes become more opaque. The choice as to whether to trust it — to let it run a self-driving car in a crowded town, say — still rests with us.

Prof Guillén warns that, far from being able to outsource all complex thinking to the cloud, leaders will need to be as skilled as ever at handling and critically evaluating information. It will be vital, he suggests, to build flexibility into the policymaking process.

“The feedback loop between the effects of the policy and how you need to recalibrate the policy in real time becomes so much faster and so much more unpredictable,” he says. “That’s the effect that complex policies produce.” A more piecemeal approach could better suit regulation in fast-moving fields, he argues, with shorter “bursts” of rulemaking, followed by analysis of the effects and then adjustments or additions where necessary.

Yet however adept policymakers become at dealing with a complex world, their task will at some point always resist simplification. That point is where the responsibility resides. Much as we may wish it otherwise, governance will always be as much an art as a science….(More)”.

A Formal Theory of Democratic Deliberation


Paper by Hun Chung and John Duggan: “Inspired by impossibility theorems of social choice theory, many democratic theorists have argued that aggregative forms of democracy cannot lend full democratic justification to the collective decisions reached. Hence, democratic theorists have turned their attention to deliberative democracy, according to which “outcomes are democratically legitimate if and only if they could be the object of a free and reasoned agreement among equals” (Cohen 1997a, 73).

However, relatively little work has been done to offer a formal theory of democratic deliberation. This article helps fill that gap by offering a formal theory of three different modes of democratic deliberation: myopic discussion, constructive discussion, and debate. We show that myopic discussion suffers from indeterminacy of long run outcomes, while constructive discussion and debate are conclusive. Finally, unlike the other two modes of deliberation, debate is path independent and converges to a unique compromise position, irrespective of the initial status quo….(More)”.

Peopling Europe through Data Practices


Introduction to Special Issue of Science, Technology & Human Values by Baki Cakici, Evelyn Ruppert and Stephan Scheel: “Politically, Europe has been unable to address itself to a constituted polity and people as more than an agglomeration of nation-states. From the resurgence of nationalisms to the crisis of the single currency and the unprecedented decision of a member state to leave the European Union (EU), core questions about the future of Europe have been rearticulated: Who are the people of Europe? Is there a European identity? What does it mean to say, “I am European”? Where does Europe begin and end? And who can legitimately claim to be a part of a “European” people?

The special issue (SI) seeks to contest dominant framings of the question “Who are the people of Europe?” as only a matter of government policies, electoral campaigns, or parliamentary debates. Instead, the contributions start from the assumption that answers to this question exist in data practices where people are addressed, framed, known, and governed as European. The central argument of this SI is that it is through data practices that the EU seeks to simultaneously constitute its population as a knowable, governable entity, and as a distinct form of peoplehood where common personhood is more important than differences….(More)”.

Official Statistics 4.0: Verified Facts for People in the 21st Century


Book by Walter J. Radermacher: “This book explores official statistics and their social function in modern societies. Digitisation and globalisation are creating completely new opportunities and risks, a context in which facts (can) play an enormously important part if they are produced with a quality that makes them credible and purpose-specific. In order for this to actually happen, official statistics must continue to actively pursue the modernisation of their working methods.

This book is not about the technical and methodological challenges associated with digitisation and globalisation; rather, it focuses on statistical sociology, which scientifically deals with the peculiarities and pitfalls of governing-by-numbers, and assigns statistics a suitable position in the future informational ecosystem. Further, the book provides a comprehensive overview of modern issues in official statistics, embodied in a historical and conceptual framework that endows it with different and innovative perspectives. Central to this work is the quality of statistical information provided by official statistics. The implementation of the UN Sustainable Development Goals in the form of indicators is another driving force in the search for answers, and is addressed here….(More)”

Lack of guidance leaves public services in limbo on AI, says watchdog


Dan Sabbagh at the Guardian: “Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.

“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?

“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”

The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.

Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.

Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate….(More)”.

A Matter of Trust: Higher Education Institutions as Information Fiduciaries in an Age of Educational Data Mining and Learning Analytics


Paper by Kyle M. L. Jones, Alan Rubel and Ellen LeClere: “Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of “learning analytics,” this work can—and often does—surface sensitive data and information about, inter alia, a student’s demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning analytics, consequently, raise serious issues concerning student privacy, autonomy, and the appropriate flow of student data.

We argue that issues around privacy lead to valid questions about the degree to which students should trust their institution to use learning analytics data and other artifacts (algorithms, predictive scores) with their interests in mind. We argue that higher education institutions are paradigms of information fiduciaries. As such, colleges and universities have a special responsibility to their students. In this article, we use the information fiduciary concept to analyze cases when learning analytics violate an institution’s responsibility to its students….(More)”.

Dissent in Consensusland: An Agonistic Problematization of Multi-stakeholder Governance


Martin Fougère and Nikodemus Solitander at the Journal of Business Ethics: “Multi-stakeholder initiatives involve actors from several spheres of society (market, civil society and state) in collaborative arrangements to reach objectives typically related to sustainable development. In political CSR literature, these arrangements have been framed as improvements to transnational governance and as being somehow democratic.

We draw on Mouffe’s works on agonistic pluralism to problematize the notion that consensus-led multi-stakeholder initiatives bring more democratic control on corporate power. We examine two initiatives which address two very different issue areas: the Roundtable on Sustainable Palm Oil (RSPO) and the Bangladesh Accord on Fire and Building Safety (The Accord).

We map the different kinds of adversarial relations involved in connection with the issues meant to be governed by the two initiatives, and find those adversarial relations to take six main shapes, affecting the initiatives in different ways: (1) competing regulatory initiatives; (2) pressure-response relations within multi-stakeholder initiatives; (3) pressure-response relations between NGOs and states through multi-stakeholder initiatives; (4) collaboration and competition between multi-stakeholder initiatives and states; (5) pressure-response relations between civil society actors and multi-stakeholder initiatives; and (6) counter-hegemonic movements against multi-stakeholder initiatives as hegemonic projects.

We conclude that multi-stakeholder initiatives cannot be democratic by themselves, and we argue that business and society researchers should not look at democracy or politics only within these initiatives, but rather study how issue areas are regulated through interactions between a variety of actors—both within and without the multi-stakeholder initiatives—who get to have a legitimate voice in this regulation….(More)”.