Chapter by Christoph Busch in “Data Economy and Algorithmic Regulation: A Handbook on Personalized Law”, C.H.Beck Nomos Hart, 2020: “Technological advances in data collection and information processing make it possible to tailor legal norms to specific individuals and achieve an unprecedented degree of regulatory precision. However, the benefits of such a “personalized law” must not be confounded with the false promise of “perfect enforcement”. On the contrary, the enforcement of personalized law might be even more challenging and complex than the enforcement of impersonal and uniform rules. Starting from this premise, the first part of this Essay explores how algorithmic personalization of legal rules could be operationalized for tailoring disclosures on digital marketplaces, mitigating discrimination in the sharing economy and optimizing the flow of traffic in smart cities. The second part of the Essay looks into an aspect of personalized law that has so far been rather under-researched: a transition towards personalized law involves not only changes in the design of legal rules, but also necessitates modifications regarding compliance monitoring and enforcement. It is argued that personalized law can be conceptualized as a form of algorithmic regulation or governance-by-data. Therefore, the implementation of personalized law requires setting up a regulatory framework for ensuring algorithmic accountability. In a broader perspective, this Essay aims to create a link between the scholarly debate on algorithmic decision-making and automated legal enforcement and the emerging debate on personalized law….(More)”.
Philosophy Is a Public Service
Jonathon Keats at Nautilus: “…One of my primary techniques, adapted from philosophy, is to undertake large-scale thought experiments. In these experiments, I create alternative realities that provide perspectives on our own society, and provoke dialogue about who and what we want to become. Another of my techniques is to create philosophical instruments: tools and devices with which people can collectively investigate the places they inhabit.
The former technique is exemplified by Centuries of the Bristlecone, and other environmentally-calibrated clocks I’m developing in other cities, such as a timepiece modulated by the flow of rivers in Alaska, currently in planning at the Anchorage Museum.
The latter is exemplified by a project I initiated in Berlin in 2014, which I’ve now instigated in cities around the world. It’s a new kind of camera that produces a single exposure over a span of 100 years. People hide these cameras throughout their city, providing a means for the next generation to observe the decisions that citizens make about their urban environment: decisions about development and gentrification and sustainability. In a sense, these devices are intergenerational surveillance cameras. They prompt people to consider the long-term impact of their actions. They encourage people to act in ways that will change the picture to reflect what they want the next generation to see.
But the truth is that most of my projects—perhaps even the two I’ve just mentioned—combine techniques from philosophy and many other disciplines. In order to map out possible futures for society, especially while navigating the shifting terrain of climate change, the philosopher-explorer needs to be adaptable. And most likely you won’t have all the skills and tools you need. I believe that anyone can become a philosopher-explorer. The practice benefits from more practitioners. No particular abilities are needed, except a capacity for collaboration.
A year ago, I was invited by the Fraunhofer Institute for Building Physics to envision the city of the future. Through Fraunhofer’s artist-in-lab program, I had the opportunity to work with leading scientists and engineers, and to run computer simulations and physical experiments on state-of-the-art equipment in Stuttgart and Holzkirchen, Germany.
My starting point was to consider one of the most serious problems faced by cities today: sea level rise. Global sea levels are expected to increase by two-and-a-half meters by the end of the century, and as much as 15 meters in the next 300 years. With 11 percent of the world population living less than 10 meters above the current sea level, many cities will probably be submerged in the future: mega-cities including New York and Shanghai. One likely response is that people will migrate inland, seeking ever higher elevations.
The question I asked myself was this: Would it make more sense to stay put?…(More)”.
Data in Society: Challenging Statistics in an Age of Globalisation
Book edited by Jeff Evans, Sally Ruane and Humphrey Southall: “Statistical data and evidence-based claims are increasingly central to our everyday lives. Critically examining ‘Big Data’, this book charts the recent explosion in sources of data, including those precipitated by global developments and technological change. It sets out changes and controversies related to data harvesting and construction, dissemination and data analytics by a range of private, governmental and social organisations in multiple settings.
Analysing the power of data to shape political debate, the presentation of ideas to us by the media, and issues surrounding data ownership and access, the authors suggest how data can be used to uncover injustices and to advance social progress…(More)”.
Responsible data sharing in a big data-driven translational research platform: lessons learned
Paper by S. Kalkman et al: “The sharing of clinical research data is increasingly viewed as a moral duty [1]. Particularly in the context of making clinical trial data widely available, editors of international medical journals have labeled data sharing a highly efficient way to advance scientific knowledge [2,3,4]. The combination of even larger datasets into so-called “Big Data” is considered to offer even greater benefits for science, medicine and society [5]. Several international consortia have now promised to build grand-scale, Big Data-driven translational research platforms to generate better scientific evidence regarding disease etiology, diagnosis, treatment and prognosis across various disease areas [6,7,8].
Despite anticipated benefits, large-scale sharing of health data is charged with ethical questions. Stakeholders have been urged to consider how to manage privacy and confidentiality issues, ensure valid informed consent, and determine who gets to decide about data access [9]. More fundamentally, new data sharing activities prompt questions about social justice and public trust [10]. To balance potential benefits and ethical considerations, data sharing platforms require guidance for the processes of interaction and decision-making. In the European Union (EU), legal norms specified for the sharing of personal data for health research, most notably those set out in the General Data Protection Regulation (GDPR) (EU 2016/679), remain open to interpretation and offer limited practical guidance to researchers [11,12,13]. Striking in this regard is that the GDPR itself stresses the importance of adherence to ethical standards, when broad consent is put forward as a legal basis for the processing of personal data. For example, Recital 33 of the GDPR states that data subjects should be allowed to give “consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research” [14]. In fact, the GDPR actually encourages data controllers to establish self-regulating mechanisms, such as a code of conduct. To foster responsible and sustainable data sharing in translational research platforms, ethical guidance and governance are therefore necessary. Here, we define governance as ‘the processes of interaction and decision-making among the different stakeholders that are involved in a collective problem that lead to the creation, reinforcement, or reproduction of social norms and institutions’…(More)”.
From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy
Paper by Elettra Bietti: “The word ‘ethics’ is under siege in technology policy circles. Weaponized in support of deregulation, self-regulation or hands-off governance, “ethics” is increasingly identified with technology companies’ self-regulatory efforts and with shallow appearances of ethical behavior. So-called “ethics washing” by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community at large. In parallel to the growth of ethics washing, its condemnation has led to a tendency to engage in “ethics bashing.” This consists in the trivialization of ethics and moral philosophy now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups.
The misunderstandings underlying ethics bashing are at least three-fold: (a) philosophy and “ethics” are seen as a communications strategy and as a form of instrumentalized cover-up or façade for unethical behavior, (b) philosophy is understood in opposition and as alternative to political representation and social organizing and (c) the role and importance of moral philosophy is downplayed and portrayed as mere “ivory tower” intellectualization of complex problems that need to be dealt with in practice.
This paper argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of “ethics washing,” or by scholars and policy-makers in the form of “ethics bashing.” Grappling with the role of philosophy and ethics requires moving beyond both tendencies and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. In other words, we must resist narrow reductivism of moral philosophy as instrumentalized performance and renew our faith in its intrinsic moral value as a mode of knowledge-seeking and inquiry. Far from mandating a self-regulatory scheme or a given governance structure, moral philosophy in fact facilitates the questioning and reconsideration of any given practice, situating it within a complex web of legal, political and economic institutions. Moral philosophy indeed can shed new light on human practices by adding needed perspective, explaining the relationship between technology and other worthy goals, situating technology within the human, the social, the political. It has become urgent to start considering technology ethics also from within and not only from outside of ethics….(More)”.
Too much information? The new challenge for decision-makers
Daniel Winter at the Financial Times: “…Concern over technology’s capacity both to shrink the world and complicate it has grown steadily since the second world war — little wonder, perhaps, when the existential threats it throws up have expanded from nuclear weapons to encompass climate change (and any consequent geoengineering), gene editing and AI as well. The financial crisis of 2008, in which poorly understood investment instruments made economies totter, has added to the unease over our ability to make sense of things.
Attempts to codify best practice in sense-making, which first preoccupied cold war planners, have gone on to exercise (often profitably) business academics and management consultants, and now draw large audiences online.
Blogs, podcasts and YouTube channels such as Rebel Wisdom and Future Thinkers aim to arm their followers with the tools they need to understand the world, and make the right decisions. Daniel Schmachtenberger is one such voice, whose interviews on YouTube and his podcast Civilization Emerging have reached hundreds of thousands of people.
“Due to increasing technological capacity — increasing population multiplied by increasing impact per person — we’re making more and more consequential choices with worse and worse sense-making to inform those choices,” he says in one video. “Exponential tech is leading to exponential disinformation.” Strengthening individuals’ ability to handle and filter information would go a long way towards improving the “information ecology”, Mr Schmachtenberger argues. People need to get used to handling complex information and should train themselves to be less distracted. “The impulse to say, ‘hey, make it really simple so everyone can get it’ and the impulse to say ‘[let’s] help people actually make sense of the world well’ are different things,” he says. Of course, societies have long been accustomed to handling complexity. No one person can possibly memorise the entirety of US law or be an expert in every field of medicine. Libraries, databases, and professional and academic networks exist to aggregate expertise.
The increasing bombardment of data — the growing amount of evidence that can inform any course of action — pushes such systems to the limit, prompting people to offload the work to computers. Yet this only defers the problem. As AI becomes more sophisticated, its decision-making processes become more opaque. The choice as to whether to trust it — to let it run a self-driving car in a crowded town, say — still rests with us.
Prof Guillén warns that, far from being able to outsource all complex thinking to the cloud, leaders will need to be as skilled as ever at handling and critically evaluating information. It will be vital, he suggests, to build flexibility into the policymaking process.
“The feedback loop between the effects of the policy and how you need to recalibrate the policy in real time becomes so much faster and so much more unpredictable,” he says. “That’s the effect that complex policies produce.” A more piecemeal approach could better suit regulation in fast-moving fields, he argues, with shorter “bursts” of rulemaking, followed by analysis of the effects and then adjustments or additions where necessary.
Yet however adept policymakers become at dealing with a complex world, their task will at some point always resist simplification. That point is where the responsibility resides. Much as we may wish it otherwise, governance will always be as much an art as a science….(More)”.
A Formal Theory of Democratic Deliberation
Paper by Hun Chung and John Duggan: “Inspired by impossibility theorems of social choice theory, many democratic theorists have argued that aggregative forms of democracy cannot lend full democratic justification for the collective decisions reached. Hence, democratic theorists have turned their attention to deliberative democracy, according to which “outcomes are democratically legitimate if and only if they could be the object of a free and reasoned agreement among equals” (Cohen 1997a, 73).
However, relatively little work has been done to offer a formal theory of democratic deliberation. This article helps fill that gap by offering a formal theory of three different modes of democratic deliberation: myopic discussion, constructive discussion, and debate. We show that myopic discussion suffers from indeterminacy of long run outcomes, while constructive discussion and debate are conclusive. Finally, unlike the other two modes of deliberation, debate is path independent and converges to a unique compromise position, irrespective of the initial status quo….(More)”.
Peopling Europe through Data Practices
Introduction to Special Issue of Science, Technology & Human Values by Baki Cakici, Evelyn Ruppert and Stephan Scheel: “Politically, Europe has been unable to address itself to a constituted polity and people as more than an agglomeration of nation-states. From the resurgence of nationalisms to the crisis of the single currency and the unprecedented decision of a member state to leave the European Union (EU), core questions about the future of Europe have been rearticulated: Who are the people of Europe? Is there a European identity? What does it mean to say, “I am European”? Where does Europe begin and end? And who can legitimately claim to be a part of a “European” people?
The special issue (SI) seeks to contest dominant framings of the question “Who are the people of Europe?” as only a matter of government policies, electoral campaigns, or parliamentary debates. Instead, the contributions start from the assumption that answers to this question exist in data practices where people are addressed, framed, known, and governed as European. The central argument of this SI is that it is through data practices that the EU seeks to simultaneously constitute its population as a knowable, governable entity, and as a distinct form of peoplehood where common personhood is more important than differences….(More)”.
Official Statistics 4.0: Verified Facts for People in the 21st Century
Book by Walter J. Radermacher: “This book explores official statistics and their social function in modern societies. Digitisation and globalisation are creating completely new opportunities and risks, a context in which facts (can) play an enormously important part if they are produced with a quality that makes them credible and purpose-specific. In order for this to actually happen, official statistics must continue to actively pursue the modernisation of their working methods.
This book is not about the technical and methodological challenges associated with digitisation and globalisation; rather, it focuses on statistical sociology, which scientifically deals with the peculiarities and pitfalls of governing-by-numbers, and assigns statistics a suitable position in the future informational ecosystem. Further, the book provides a comprehensive overview of modern issues in official statistics, embodied in a historical and conceptual framework that endows it with different and innovative perspectives. Central to this work is the quality of statistical information provided by official statistics. The implementation of the UN Sustainable Development Goals in the form of indicators is another driving force in the search for answers, and is addressed here….(More)”
Lack of guidance leaves public services in limbo on AI, says watchdog
Dan Sabbagh at the Guardian: “Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.
The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.
“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?
“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”
The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.
Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.
AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.
Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate….(More)”.