Five myths about whistleblowers


Dana Gold in the Washington Post: “When a whistleblower revealed the Trump administration’s decision to overturn 25 security clearance denials, it was the latest in a long and storied history of insiders exposing significant abuses of public trust. Whistles were blown on U.S. involvement in Vietnam, the Watergate coverup, Enron’s financial fraud, the National Security Agency’s mass surveillance of domestic electronic communications and, during the Trump administration, the corruption of former Environmental Protection Agency chief Scott Pruitt, Cambridge Analytica’s theft of Facebook users’ data to develop targeted political ads, and harm to children posed by the “zero tolerance” immigration policy. Despite the essential role whistleblowers play in illuminating the truth and protecting the public interest, several myths persist about them, some pernicious.

MYTH NO. 1 Whistleblowers are employees who report problems externally….

MYTH NO. 2 Whistleblowers are either disloyal or heroes….

MYTH NO. 3 ‘Leaker’ is another term for ‘whistleblower.’…

MYTH NO. 4 Remaining anonymous is the best strategy for whistleblowing….

MYTH NO. 5 Julian Assange is a whistleblower….(More)”.

Safeguards for human studies can’t cope with big data


Nathaniel Raymond at Nature: “One of the primary documents aiming to protect human research participants was published in the US Federal Register 40 years ago this week. The Belmont Report was commissioned by Congress in the wake of the notorious Tuskegee syphilis study, in which researchers withheld treatment from African American men for years and observed how the disease caused blindness, heart disease, dementia and, in some cases, death.

The Belmont Report lays out core principles now generally required for human research to be considered ethical. Although technically governing only US federally supported research, its influence reverberates across academia and industry globally. Before academics with US government funding can begin research involving humans, their institutional review boards (IRBs) must determine that the studies comply with regulation largely derived from a document that was written more than a decade before the World Wide Web and nearly a quarter of a century before Facebook.

It is past time for a Belmont 2.0. We should not be asking those tasked with protecting human participants to single-handedly identify and contend with the implications of the digital revolution. Technological progress, including machine learning, data analytics and artificial intelligence, has altered the potential risks of research in ways that the authors of the first Belmont report could not have predicted. For example, Muslim cab drivers can be identified from patterns indicating that they stop to pray; the Ugandan government can try to identify gay men from their social-media habits; and researchers can monitor and influence individuals’ behaviour online without enrolling them in a study.
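To make that kind of risk concrete, here is a toy sketch (not from Raymond’s article): it shows how an incidental behavioural signal in synthetic location data, namely regular stops during a fixed midday window, can act as a proxy for a sensitive attribute without anyone being enrolled in a study. The data, threshold, and labels are all invented for illustration.

```python
# Toy illustration with synthetic data: an incidental behavioural pattern
# acting as a proxy for a sensitive attribute. Every value here is made up.
import random

random.seed(0)

def simulate_driver(has_midday_stop_habit, days=30):
    """Return minutes stopped during a fixed midday window for each day."""
    return [
        random.randint(10, 25) if has_midday_stop_habit else random.randint(0, 5)
        for _ in range(days)
    ]

# Synthetic "fleet": True marks drivers with a regular midday stop habit.
drivers = {f"driver_{i}": (i % 3 == 0) for i in range(12)}
traces = {name: simulate_driver(habit) for name, habit in drivers.items()}

# A crude rule: flag anyone whose average midday stop time exceeds a threshold.
THRESHOLD_MINUTES = 8
flagged = {name for name, trace in traces.items()
           if sum(trace) / len(trace) > THRESHOLD_MINUTES}

# Even this trivial rule recovers the hidden attribute, which is the point:
# the signal leaks without consent, enrolment, or any review board in sight.
matches = sum((name in flagged) == habit for name, habit in drivers.items())
print(f"flagged {len(flagged)} of {len(drivers)} drivers; "
      f"{matches}/{len(drivers)} match the hidden attribute")
```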

Consider the 2014 Facebook ‘emotional contagion study’, which manipulated users’ exposure to emotional content to evaluate effects on mood. That project, a collaboration with academic researchers, led the US Department of Health and Human Services to launch a long rule-making process that tweaked some regulations governing IRBs.

A broader fix is needed. Right now, data science overlooks risks to human participants by default….(More)”.

The Economics of Social Data


Paper by Dirk Bergemann and Alessandro Bonatti: “Large internet platforms collect data from individual users in almost every interaction on the internet. Whenever an individual browses a news website, searches for a medical term or for a travel recommendation, or simply checks the weather forecast on an app, that individual generates data. A central feature of the data collected from individuals is its social aspect. Namely, the data captured from an individual user is not only informative about that specific individual, but also about other users who are similar to that individual along some metric. Thus, individual data is really social data. The social nature of the data generates an informational externality that we investigate in this note….(More)”.
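To see the externality the authors describe in miniature, consider the hedged sketch below (not from the paper): data volunteered by consenting users lets a platform predict an attribute of a user who shared nothing, simply because she resembles them. The features, similarity metric, and labels are illustrative assumptions.

```python
# Minimal sketch of the "social data" externality: data volunteered by some
# users is informative about a similar user who volunteered nothing.
# All profiles and labels below are synthetic and illustrative.
from math import dist

# (age, hours online per day) -> whether the user clicked a travel ad
consenting_users = {
    (24, 6.0): 1,
    (27, 5.5): 1,
    (31, 4.0): 1,
    (45, 1.5): 0,
    (52, 2.0): 0,
    (60, 1.0): 0,
}

def predict_from_neighbours(profile, shared_data, k=3):
    """Predict a label for `profile` from its k most similar consenting users."""
    neighbours = sorted(shared_data, key=lambda other: dist(profile, other))[:k]
    votes = sum(shared_data[n] for n in neighbours)
    return 1 if votes > k / 2 else 0

# A user who never consented to data collection is still predictable,
# because users "like her" did share their data: the informational externality.
non_consenting_user = (26, 5.0)
print(predict_from_neighbours(non_consenting_user, consenting_users))  # -> 1
```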

AI Ethics: Seven Traps


Blog Post by Annette Zimmermann and Bendert Zevenbergen: “… In what follows, we outline seven ‘AI ethics traps’. In doing so, we hope to provide a resource for readers who want to understand and navigate the public debate on the ethics of AI better, who want to contribute to ongoing discussions in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly. Of course, not everybody who contributes to the current debate on AI Ethics is guilty of endorsing any or all of these traps: the traps articulate extreme versions of a range of possible misconceptions, formulated in a deliberately strong way to highlight the ways in which one might prematurely dismiss ethical reasoning about AI as futile.

1. The reductionism trap:

“Doing the morally right thing is essentially the same as acting in a fair way (or: transparent, or egalitarian, or <substitute any other value>). So ethics is the same as fairness (or transparency, or equality, etc.). If we’re being fair, then we’re being ethical.”

Even though algorithmic bias and its unfair impact on decision outcomes is an urgent problem, it does not exhaust the ethical problem space. As important as algorithmic fairness is, it is crucial to avoid reducing ethics to a fairness problem alone. Instead, it is important to pay attention to how the ethically valuable goal of optimizing for a specific value like fairness interacts with other important ethical goals. Such goals could include—amongst many others—the goal of creating transparent and explainable systems which are open to democratic oversight and contestation, the goal of improving the predictive accuracy of machine learning systems, the goal of avoiding paternalistic infringements of autonomy rights, or the goal of protecting the privacy interests of data subjects. Sometimes, these different values may conflict: we cannot always optimize for everything at once. This makes it all the more important to adopt a sufficiently rich, pluralistic view of the full range of relevant ethical values at stake—only then can one reflect critically on what kinds of ethical trade-offs one may have to confront.
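One way to make the point that we cannot optimize for everything at once concrete is a scalarized multi-objective comparison, sketched below (an illustration, not a method proposed in the post): a single score that weights predictive accuracy against a fairness penalty makes explicit that the ranking of systems depends on a value-laden choice of weights. The metrics and numbers are placeholders.

```python
# Sketch of an explicit ethical trade-off as a weighted multi-objective score.
# The metrics, models, and weights are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ModelReport:
    name: str
    accuracy: float          # e.g. held-out accuracy, in [0, 1]
    demographic_gap: float   # e.g. |selection rate A - selection rate B|, in [0, 1]

def scalarized_score(report, w_accuracy, w_fairness):
    """Higher is better: reward accuracy, penalize the between-group gap."""
    return w_accuracy * report.accuracy - w_fairness * report.demographic_gap

candidates = [
    ModelReport("model_a", accuracy=0.95, demographic_gap=0.20),
    ModelReport("model_b", accuracy=0.85, demographic_gap=0.05),
]

# The preferred model flips as the weights change: the trade-off is a choice
# of values, not something an optimizer can settle on its own.
for w_acc, w_fair in [(1.0, 0.3), (1.0, 2.0)]:
    best = max(candidates, key=lambda r: scalarized_score(r, w_acc, w_fair))
    print(f"weights (accuracy={w_acc}, fairness={w_fair}) -> prefer {best.name}")
```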

2. The simplicity trap:

“In order to make ethics practical and action-guiding, we need to distill our moral framework into a user-friendly compliance checklist. After we’ve decided on a particular path of action, we’ll go through that checklist to make sure that we’re being ethical.”

Given the high visibility and urgency of ethical dilemmas arising in the context of AI, it is not surprising that there are more and more calls to develop actionable AI ethics checklists. For instance, a 2018 draft report by the European Commission’s High-Level Expert Group on Artificial Intelligence specifies a preliminary ‘assessment list’ for ‘trustworthy AI’. While the report plausibly acknowledges that such an assessment list must be context-sensitive and that it is not exhaustive, it nevertheless identifies a list of ten fixed ethical goals, including privacy and transparency. But can and should ethical values be articulated in a checklist in the first place? It is worth examining this underlying assumption critically. After all, a checklist implies a one-off review process: on that view, developers or policy-makers could determine whether a particular system is ethically defensible at a specific moment in time, and then move on without confronting any further ethical concerns once the checklist criteria have been satisfied. But ethical reasoning cannot be a static one-off assessment: it requires an ongoing process of reflection, deliberation, and contestation. Simplicity is good—but the willingness to reconsider simple frameworks, when required, is better. Setting a fixed ethical agenda ahead of time risks obscuring new ethical problems that may arise at a later point in time, or ongoing ethical problems that become apparent to human decision-makers only later.

3. The relativism trap:

“We all disagree about what is morally valuable, so it’s pointless to imagine that there is a universal baseline against which we can evaluate moral choices. Nothing is objectively morally good: things can only be morally good relative to each person’s individual value framework.”

Public discourse on the ethics of AI frequently produces little more than an exchange of personal opinions or institutional positions. In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile: some ethical arguments about AI may ultimately be more persuasive than others. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some paths of action that are comparatively better (if not optimal all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement. Thus, it is possible and indeed constructive for AI ethics to welcome value pluralism, without collapsing into extreme value relativism.

4. The value alignment trap:

“If relativism is wrong (see #3), there must be one morally right answer. We need to find that right answer, and ensure that everyone in our organisation acts in alignment with that answer. If our ethical reasoning leads to moral disagreement, that means that we have failed.”…(More)”.

Seeing, Naming, Knowing


Essay by Nora N. Khan for Brooklyn Rail: “…. Throughout this essay, I use “machine eye” as a metaphor for the unmoored orb, a kind of truly omnidirectional camera (meaning, a camera that can look in every direction and vector that defines the dimensions of a sphere), and as a symbolic shorthand for the sum of four distinct realms in which automated vision is deployed as a service. (Vision as a Service, reads the selling tag for a new AI surveillance camera company.) Those four general realms are: 

1. Massive AI systems fueled by the public’s flexible datasets of their personal images, creating a visual culture entirely out of digitized images. 

2. Facial recognition technologies and neural networks improving atop their databases. 

3. The advancement of predictive policing to sort people by types. 

4. The combination of location-based tracking, license plate-reading, and heat sensors to render skein-like, live, evolving maps of people moving, marked as likely to do X.

Though we live the results of its seeing, and its interpretation of its seeing, for now I would hold off on blaming ourselves for this situation. We are, after all, the living instantiations of a few thousand years of such violent seeing globally, enacted through imperialism, colonialism, caste stratification, nationalist purges, internal class struggle, and all the evolving theory to support and galvanize the above. Technology simply recasts, concentrates, and amplifies these “tendencies.” They can be hard to see at first because the eye’s seeing seems innocuous, and is designed to seem so. It is a direct expression of the ideology of software, which reflects its makers’ desires. These makers are lauded as American pioneers, innovators, genius-heroes living in the Bay Area in the late 1970s, vibrating at a highly specific frequency, the generative nexus of failed communalism and an emerging Californian Ideology. That seductive ideology has been exported all over the world, and we are only now contending with its impact.

Because the workings of machine visual culture are so remote from our sense perception, and because it so acutely determines our material (economic, social) and affective futures, I invite you to see underneath the eye’s outer glass shell, its holder, beyond it, to the grid that organizes its “mind.” That mind simulates a strain of ideology about who exactly gets to gather data about those on that grid below, and how that data should be mobilized to predict the movements and desires of the grid dwellers. This mind, a vast computational regime we are embedded in, drives the machine eye. And this computational regime has specific values that determine what is seen, how it is seen, and what that seeing means….(More)”.

The Bad Pupil


CCCBLab: “In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is difficult, sometimes, to figure out in this landscape which are real developments, and which are fantasies or warnings. And, undoubtedly, this fog that surrounds it forms part of the power that we grant, both in the present and on credit, to these tools, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the entity to be awarded to the images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, above all if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.
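To spell out what “tools of probabilistic classification” would mean in code, here is a deliberately plain sketch (not from the CCCBLab piece): the system does not “see” anything, it maps numeric features to a probability distribution over labels. The features, labels, and training data are toy assumptions, and scikit-learn is used only for convenience.

```python
# A plain restatement of the metaphor: the "seeing" model is, mechanically,
# a probabilistic classifier mapping numeric features to P(label | features).
# The features, labels, and training examples below are toy assumptions.
from sklearn.linear_model import LogisticRegression

# Toy "images" described by two hand-made features: mean brightness, edge density.
X_train = [[0.90, 0.20], [0.80, 0.30], [0.85, 0.25],
           [0.20, 0.80], [0.30, 0.70], [0.25, 0.75]]
y_train = ["portrait", "portrait", "portrait",
           "street_scene", "street_scene", "street_scene"]

model = LogisticRegression().fit(X_train, y_train)

# The model does not "see a portrait"; it estimates a probability for each label.
new_image_features = [[0.70, 0.40]]
for label, prob in zip(model.classes_, model.predict_proba(new_image_features)[0]):
    print(f"P({label}) = {prob:.2f}")
```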

Because that is what it’s about in many discourses: constructing a characterisation that legitimises an objective or non-human capacity in data analysis….

We now find ourselves at what is probably the moment of the first cultural reception of these tools. From their development in research fields and the applications already derived from them, we are moving on to their presence in public discourse. It is in this situation and context, in which we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as in creation, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as one who escapes the norm. We also argued that, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam) analysed with YOLO9000 | The Bad Pupil – Estampa
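For readers who want to try the kind of analysis shown in the caption above, the sketch below runs an off-the-shelf object detector over a single image file. It uses YOLOv5 via PyTorch Hub as a convenient stand-in for YOLO9000, and the image path is a placeholder for a reproduction downloaded from the Rijksmuseum.

```python
# Sketch: running an off-the-shelf detector over a reproduction of a painting,
# in the spirit of the YOLO9000 analyses above. YOLOv5 via PyTorch Hub is used
# as a stand-in for YOLO9000; the image path is a placeholder.
import torch

# Downloads the model weights on first run (internet access required).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("de_zeven_werken_van_barmhartigheid.jpg")  # hypothetical local file

# Each row holds a bounding box, a confidence, and the detected class label;
# the "bad pupil" reading starts from what these labels get right and wrong.
print(results.pandas().xyxy[0][["name", "confidence"]])
```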

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In this field, in which mass data are an asset within reach of big companies, employing quantitatively poor datasets and non-industrial computing power is not just a necessity but a demand….(More)”.

Privacy’s not dead. It’s just not evenly distributed


Alex Pasternack in Fast Company: “In the face of all the data abuse, many of us have, quite reasonably, thrown up our hands. But privacy didn’t die. It’s just been beaten up, sold, obscured, diffused unevenly across society. What privacy is and why it matters increasingly depends upon who you are, your age, your income, gender, ethnicity, where you’re from, and where you live. To borrow William Gibson’s famous quote about the future and its unevenness and inequalities, privacy is alive—it’s just not evenly distributed. And while we don’t all care about it the same way—we’re even divided on what exactly privacy is—its harms are still real. Even when our own privacy isn’t violated, privacy violations can still hurt us.

Privacy is personal, from the creepy feeling that our phones are literally listening, to the endless parade of data breaches that test our ability to care anymore. It’s the unsettling feeling of giving “consent” without knowing what that means, “agreeing” to contracts we didn’t read with companies we don’t really trust. (Forget about understanding all the details; researchers have shown that most privacy policies surpass the reading level of the average person.)

It’s the data about us that’s harvested, bought, sold, and traded by an obscure army of data brokers without our knowledge, feeding marketers, landlords, employers, immigration officials, insurance companies, debt collectors, as well as stalkers and who knows who else. It’s the body camera or the sports arena or the social network capturing your face for who knows what kind of analysis. Don’t think of personal data as just “data.” As it gets more detailed and more correlated, increasingly, our data is us.

And “privacy” isn’t just privacy. It’s also tied up with security, freedom, social justice, free speech, and free thought. Privacy harms aren’t only personal, but societal. It’s not just the multibillion-dollar industry that aims to nab you and nudge you, but the multibillion-dollar spyware industry that helps governments nab dissidents and send them to prison or worse. It’s the supposedly fair and transparent algorithms that aren’t, turning our personal data into risk scores that can help perpetuate race, class, and gender divides, often without our knowing it.

Privacy is about dark ads bought with dark money and the micro-targeting of voters by overseas propagandists or by political campaigns at home. That kind of influence isn’t just the promise of a shadowy Cambridge Analytica or state-run misinformation campaigns, but also the premise of modern-day digital ad campaigns. (Note that Facebook’s research division later hired one of the researchers behind the Cambridge app.) And as the micro-targeting gets more micro, the tech giants that deal in ads are only getting more macro….(More)”

(This story is part of The Privacy Divide, a series that explores the fault lines and disparities–economic, cultural, philosophical–that have developed around digital privacy and its impact on society.)

A Skeptical View of Information Fiduciaries


Paper by Lina Khan and David Pozen: “The concept of “information fiduciaries” has surged to the forefront of debates on online platform regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on corporations such as Facebook, Google, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

This Essay seeks to disrupt the emerging consensus by identifying a number of lurking tensions and ambiguities in the theory of information fiduciaries, as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that the harms stemming from dominant online platforms call for legal intervention, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information insecurity that he stresses, much less to more fundamental problems associated with outsized market share and business models built on pervasive surveillance. We also call attention to the potential costs of adopting an information-fiduciary framework—a framework that, we fear, invites an enervating complacency toward online platforms’ structural power and a premature abandonment of more robust visions of public regulation….(More)”.

Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems


Introduction by A.F. Winfield, K. Michael, J. Pitt, V. Evers of Special Issue of Proceedings of the IEEE: “…The primary focus of this special issue is machine ethics, that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near-future systems are moral agents; consider driverless cars, or medical diagnosis AIs, both of which will need to make choices with ethical consequences. This special issue includes papers that describe both implicit ethical agents, that is, machines designed to avoid unethical outcomes, and explicit ethical agents: machines which either encode or learn ethics and determine actions based on those ethics. Of course, ethical machines are socio-technical systems; thus, as a secondary focus, this issue includes papers that explore the societal and regulatory implications of machine ethics, including the question of ethical governance. Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence….(More)”.

The Governance of Digital Technology, Big Data, and the Internet: New Roles and Responsibilities for Business


Introduction to Special Issue of Business and Society by Dirk Matten, Ronald Deibert & Mikkel Flyverbom: “The importance of digital technologies for social and economic developments and a growing focus on data collection and privacy concerns have made the Internet a salient and visible issue in global politics. Recent developments have increased the awareness that the current approach of governments and business to the governance of the Internet and the adjacent technological spaces raises a host of ethical issues. The significance and challenges of the digital age have been further accentuated by a string of highly exposed cases of surveillance and a growing concern about issues of privacy and the power of this new industry. This special issue explores what some have referred to as the “Internet-industrial complex”—the intersections between business, states, and other actors in the shaping, development, and governance of the Internet…(More)”.