Platform Surveillance


Editorial by David Murakami Wood and Torin Monahan for the special issue of Surveillance & Society: “This editorial introduces this special responsive issue on ‘platform surveillance.’ We develop the term platform surveillance to account for the manifold and often insidious ways that digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data. In the process, digital platforms become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact.

Emergent forms of platform capitalism portend new governmentalities, as they gradually draw existing institutions into alignment or harmonization with the logics of platform surveillance while also engendering subjectivities (e.g., the gig-economy worker) that support those logics. Because surveillance is essential to the operations of digital platforms, and because it structures the forms of governance and capital that emerge, the field of surveillance studies is uniquely positioned to investigate and theorize these phenomena….(More)”.

Understanding algorithmic decision-making: Opportunities and challenges


Study by Claude Castelluccia and Daniel Le Métayer for the European Parliament: “While algorithms are hardly a recent invention, they are nevertheless increasingly involved in systems used to support decision-making. These systems, known as ‘ADS’ (algorithmic decision systems), often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions. The degree of human intervention in the decision-making varies; humans may even be entirely out of the loop in fully automated systems. In many situations, the impact of the decision on people can be significant, such as access to credit, employment, medical treatment, or judicial sentences, among other things.

Entrusting ADS to make or to influence such decisions raises a variety of ethical, political, legal, and technical issues that must be analysed and addressed with great care. If they are neglected, the expected benefits of these systems may be negated by a variety of different risks for individuals (discrimination, unfair practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.), and society as a whole (manipulation, threat to democracy, etc.).

This study reviews the opportunities and risks related to the use of ADS. It presents policy options to reduce the risks and explains their limitations. We sketch some options for overcoming these limitations so as to benefit from the tremendous possibilities of ADS while limiting the risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study gives a precise definition of a number of key terms and an analysis of their differences to help clarify the debate. The main focus of the study is the technical aspects of ADS. However, to broaden the discussion, other legal, ethical and social dimensions are considered….(More)”.

Know-how: Big Data, AI and the peculiar dignity of tacit knowledge


Essay by Tim Rogan: “Machine learning – a kind of sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Run a purpose-built algorithm by a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries. 
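To make the training loop Rogan describes concrete, here is a minimal sketch of a supervised classifier that learns to separate two classes of cases and reports a degree of certainty alongside each prediction. It assumes scikit-learn and uses entirely synthetic stand-ins for extracted image features; real melanoma classifiers are far more elaborate, so this illustrates the general technique, not any deployed system.

```python
# Minimal supervised-learning sketch: fit a classifier on labelled
# examples, then report class probabilities on held-out cases.
# The data are synthetic stand-ins for image features; all names
# here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 1000
X = rng.normal(size=(n_cases, 16))       # 16 extracted features per image
true_w = rng.normal(size=16)             # hidden rule the model must learn
y = (X @ true_w + rng.normal(size=n_cases)) > 0   # True = "positive" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The classifier attaches a probability to each answer, mirroring the
# "specified degree of certainty" described in the essay.
probabilities = clf.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
print(f"first held-out case: P(positive) = {probabilities[0]:.2f}")
```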

Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.

The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Which is the better means of coordinating supply and demand: state, firm, or market? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. The notion that market coordination of supply and demand encompassed more information than any single intelligence could handle would soon be proved false by 21st-century AI.

How seriously should we take such speculations? Might machine learning bring us full-circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?

A great deal turns on the status of tacit knowledge….(More)”.

How the NYPD is using machine learning to spot crime patterns


Colin Wood at StateScoop: “Civilian analysts and officers within the New York City Police Department are using a unique computational tool to spot patterns in crime data, and it is helping them close cases.

A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics. Drawing on 10 years of historical data about burglary, robbery and grand larceny, the tool is the first of its kind to be used by law enforcement, the developers wrote.

The NYPD hired 100 civilian analysts in 2017 to use Patternizr. It’s also available to all officers through the department’s Domain Awareness System, a citywide network of sensors, databases, devices, software and other technical infrastructure. Researchers told StateScoop the tool has generated leads on several cases that traditionally would have stretched officers’ memories and traditional evidence-gathering abilities.

Connecting similar crimes into patterns is a crucial part of gathering evidence and eventually closing in on an arrest, said Evan Levine, the NYPD’s assistant commissioner of data analytics and one of Patternizr’s developers. Taken independently, each crime in a string of crimes may not yield enough evidence to identify a perpetrator, but the work of finding patterns is slow and each officer only has a limited amount of working knowledge surrounding an incident, he said.

“The goal here is to alleviate all that kind of busywork you might have to do to find hits on a pattern,” said Alex Chohlas-Wood, a Patternizr researcher and deputy director of the Computational Policy Lab at Stanford University.

The knowledge of individual officers is limited in scope by dint of the NYPD’s organizational structure. The department divides New York into 77 precincts, and a person who commits crimes across precincts, which often have arbitrary boundaries, is often more difficult to catch because individual beat officers are typically focused on a single neighborhood.

There’s also a lot of data to sift through. In 2016 alone, about 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies were reported across the five boroughs.

Levine said that last month, police used Patternizr to spot a pattern of three knife-point robberies around a Bronx subway station. It would have taken police much longer to connect those crimes manually, Levine said.

The software works by an analyst feeding it a “seed” case, which is then compared against a database of hundreds of thousands of crime records that Patternizr has already processed. The tool generates a “similarity score” and returns a rank-ordered list and a map. Analysts can read a few details of each complaint before examining the seed complaint and similar complaints in a detailed side-by-side view or filtering results….(More)”.
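The seed-and-rank workflow described above (score every historical complaint against a seed case, then return a rank-ordered list) can be sketched in a few lines. Patternizr itself uses trained models over carefully engineered pairwise features; the cosine-similarity scoring and the field names below are illustrative assumptions, not the NYPD’s actual method.

```python
# Toy sketch of a seed-and-rank similarity search over crime complaints.
# Cosine similarity over a generic feature vector stands in for
# Patternizr's trained pairwise models; every name here is hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class Complaint:
    case_id: str
    features: np.ndarray  # e.g., encoded location, time, and modus operandi


def similarity(a: Complaint, b: Complaint) -> float:
    """Cosine similarity between two complaints' feature vectors."""
    va, vb = a.features, b.features
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))


def rank_matches(seed: Complaint, history: list, top_k: int = 10):
    """Score the seed against every historical complaint; best matches first."""
    scored = [(similarity(seed, c), c.case_id) for c in history]
    return sorted(scored, reverse=True)[:top_k]


rng = np.random.default_rng(7)
history = [Complaint(f"case-{i}", rng.normal(size=32)) for i in range(500)]
seed = Complaint("seed", rng.normal(size=32))
for score, case_id in rank_matches(seed, history, top_k=5):
    print(f"{case_id}: similarity {score:.3f}")
```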

AI Ethics: Seven Traps


Blog Post by Annette Zimmermann and Bendert Zevenbergen: “… In what follows, we outline seven ‘AI ethics traps’. In doing so, we hope to provide a resource for readers who want to understand and navigate the public debate on the ethics of AI better, who want to contribute to ongoing discussions in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly. Of course, not everybody who contributes to the current debate on AI Ethics is guilty of endorsing any or all of these traps: the traps articulate extreme versions of a range of possible misconceptions, formulated in a deliberately strong way to highlight the ways in which one might prematurely dismiss ethical reasoning about AI as futile.

1. The reductionism trap:

“Doing the morally right thing is essentially the same as acting in a fair way. (or: transparent, or egalitarian, or <substitute any other value>). So ethics is the same as fairness (or transparency, or equality, etc.). If we’re being fair, then we’re being ethical.”

Even though the problem of algorithmic bias and its unfair impact on decision outcomes is an urgent problem, it does not exhaust the ethical problem space. As important as algorithmic fairness is, it is crucial to avoid reducing ethics to a fairness problem alone. Instead, it is important to pay attention to how the ethically valuable goal of optimizing for a specific value like fairness interacts with other important ethical goals. Such goals could include—amongst many others—the goal of creating transparent and explainable systems which are open to democratic oversight and contestation, the goal of improving the predictive accuracy of machine learning systems, the goal of avoiding paternalistic infringements of autonomy rights, or the goal of protecting the privacy interests of data subjects. Sometimes, these different values may conflict: we cannot always optimize for everything at once. This makes it all the more important to adopt a sufficiently rich, pluralistic view of the full range of relevant ethical values at stake—only then can one reflect critically on what kinds of ethical trade-offs one may have to confront.

2. The simplicity trap:

“In order to make ethics practical and action-guiding, we need to distill our moral framework into a user-friendly compliance checklist. After we’ve decided on a particular path of action, we’ll go through that checklist to make sure that we’re being ethical.”

Given the high visibility and urgency of ethical dilemmas arising in the context of AI, it is not surprising that there are more and more calls to develop actionable AI ethics checklists. For instance, a 2018 draft report by the European Commission’s High-Level Expert Group on Artificial Intelligence specifies a preliminary ‘assessment list’ for ‘trustworthy AI’. While the report plausibly acknowledges that such an assessment list must be context-sensitive and that it is not exhaustive, it nevertheless identifies a list of ten fixed ethical goals, including privacy and transparency. But can and should ethical values be articulated in a checklist in the first place? It is worth examining this underlying assumption critically. After all, a checklist implies a one-off review process: on that view, developers or policy-makers could determine whether a particular system is ethically defensible at a specific moment in time, and then move on without confronting any further ethical concerns once the checklist criteria have been satisfied. But ethical reasoning cannot be a static one-off assessment: it requires an ongoing process of reflection, deliberation, and contestation. Simplicity is good—but the willingness to reconsider simple frameworks, when required, is better. Setting a fixed ethical agenda ahead of time risks obscuring new ethical problems that may arise at a later point in time, or ongoing ethical problems that become apparent to human decision-makers only later.

3. The relativism trap:

“We all disagree about what is morally valuable, so it’s pointless to imagine that there is a universal baseline that we can use in order to evaluate moral choices. Nothing is objectively morally good: things can only be morally good relative to each person’s individual value framework.”

Public discourse on the ethics of AI frequently produces little more than an exchange of personal opinions or institutional positions. In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile: some ethical arguments about AI may ultimately be more persuasive than others. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some paths of action that are comparatively better (if not optimal all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement. Thus, it is possible and indeed constructive for AI ethics to welcome value pluralism, without collapsing into extreme value relativism.

4. The value alignment trap:

“If relativism is wrong (see #3), there must be one morally right answer. We need to find that right answer, and ensure that everyone in our organisation acts in alignment with that answer. If our ethical reasoning leads to moral disagreement, that means that we have failed.”…(More)”.

Seeing, Naming, Knowing


Essay by Nora N. Khan for Brooklyn Rail: “…. Throughout this essay, I use “machine eye” as a metaphor for the unmoored orb, a kind of truly omnidirectional camera (meaning, a camera that can look in every direction and vector that defines the dimensions of a sphere), and as a symbolic shorthand for the sum of four distinct realms in which automated vision is deployed as a service. (Vision as a Service, reads the selling tag for a new AI surveillance camera company.) Those four general realms are:

1. Massive AI systems fueled by the public’s flexible datasets of their personal images, creating a visual culture entirely out of digitized images. 

2. Facial recognition technologies and neural networks improving atop their databases. 

3. The advancement of predictive policing to sort people by types. 

4. The combination of location-based tracking, license plate-reading, and heat sensors to render skein-like, live, evolving maps of people moving, marked as likely to do X.

Though we live the results of its seeing, and its interpretation of its seeing, for now I would hold off on blaming ourselves for this situation. We are, after all, the living instantiations of a few thousand years of such violent seeing globally, enacted through imperialism, colonialism, caste stratification, nationalist purges, internal class struggle, and all the evolving theory to support and galvanize the above. Technology simply recasts, concentrates, and amplifies these “tendencies.” They can be hard to see at first because the eye’s seeing seems innocuous, and is designed to seem so. It is a direct expression of the ideology of software, which reflects its makers’ desires. These makers are lauded as American pioneers, innovators, genius-heroes living in the Bay Area in the late 1970s, vibrating at a highly specific frequency, the generative nexus of failed communalism and an emerging Californian Ideology. That seductive ideology has been exported all over the world, and we are only now contending with its impact.

Because the workings of machine visual culture are so remote from our sense perception, and because it so acutely determines our material (economic, social), and affective futures, I invite you to see underneath the eye’s outer glass shell, its holder, beyond it, to the grid that organizes its “mind.” That mind simulates a strain of ideology about who exactly gets to gather data about those on that grid below, and how that data should be mobilized to predict the movements and desires of the grid dwellers. This mind, a vast computational regime we are embedded in, drives the machine eye. And this computational regime has specific values that determine what is seen, how it is seen, and what that seeing means….(More)”.

The Bad Pupil


CCCBLab: “In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is difficult, sometimes, to figure out in this landscape which are real developments, and which are fantasies or warnings. And, undoubtedly, this fog that surrounds it forms part of the power that we grant, both in the present and on credit, to these tools, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the status to be accorded to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, above all if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.

Because that is what it’s about in many discourses: constructing a characterisation that legitimises an objective or non-human capacity in data analysis….

We now find ourselves at what is, probably, the point of the first cultural reception of these tools. From their development in research fields and the applications already derived from them, we are moving on to their presence in public discourse. It is in this situation and context, where we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), when it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them; taking advantage of the fact that we can still easily perceive them as in creation, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as one who escapes the norm. And also how, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam) analysed with YOLO9000 | The Bad Pupil – Estampa

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In this field in which mass data are an asset within reach of big companies, employing quantitatively poor datasets and non-industrial computing capacities is not just a need but a demand….(More)”.

How AI Can Cure the Big Idea Famine


Saahil Jayraj Dama at JoDS: “Today too many people are still deprived of basic amenities such as medicine, while current patent laws continue to convolute and impede innovation. But if allowed, AI can provide an opportunity to redefine this paradigm and be the catalyst for change—if….

Which brings us to the most befitting answer: No one owns the intellectual property rights to AI-generated creations, and these creations fall into the public domain. This may seem unpalatable at first, especially since intellectual property laws have played such a fundamental role in our society so far. We have been conditioned to a point where it seems almost unimaginable that some creations should directly enter the public domain upon their birth.

But, doctrinally, this is the only proposition that stays consistent to extant intellectual property laws. Works created by AI have no rightful owner because the application of mind to generate the creation, along with the actual generation of the creation, would entirely be done by the AI system. Human involvement is ancillary and is limited to creating an environment within which such a creation can take form.

This can be better understood through a hypothetical example: If an AI system were to invent a groundbreaking pharmaceutical ingredient which completely treats balding, then the system would likely begin by understanding the problem and state of prior art. It would undertake research on causes of balding, existing cures, problems with existing cures, and whether its proposed cure would have any harmful side effects. It would also possibly combine research and knowledge across various domains, which could range from Ayurveda to modern-day biochemistry, before developing its invention.

The developer can lay as much claim to this invention as the team behind AlphaGo can to its victory over Lee Sedol at Go. The user is even further detached from the exercise of ingenuity: She would be the person who first thought, “We should build a Go playing AI system,” and directed the AI system to learn Go by watching certain videos and playing against itself. Despite the intervention of all these entities, the fact remains that the victory only belongs to AlphaGo itself.

Doctrinal issues aside, this solution ties in with what people need from intellectual property laws: more openness and accessibility. The demands for improved access to medicines and knowledge, fights against cultural monopolies, and brazen violations of unjust intellectual property laws are all symptomatic of the growing public discontent with strong intellectual property laws. Through AI, we can design legal systems which address these concerns and reform the heavy-handed approach that has been adopted toward intellectual property rights so far.

Tying the Threads Together

For the above to materialize, governments and legislators need to accept that our present intellectual property system is broken and inconsistent with what people want. Too many people are being deprived of basic amenities such as medicines, patent trolls and patent thickets are slowing innovation, educational material is still outside the reach of most people, and culture is not spreading as widely as it should. AI can provide an opportunity for us to redefine this paradigm—it can lead to a society that draws and benefits from an enriched public domain.

However, this approach does come with built-in cynicism because it contemplates an almost complete overhaul of the system. One could argue that if open access for AI-generated creations does become the norm, then innovation and creativity would suffer as people would no longer have the incentive to create. People may even refuse to use their AI systems, and instead stick to producing inventions and creative works by themselves. This would be detrimental to scientific and cultural progress and would also slow adoption of AI systems in society.

Yet, judging by the pace at which these systems have progressed so far and what they can currently do, it is easy to imagine a reality where humans developing inventions and producing creative works almost becomes an afterthought. If a machine can access all the world’s publicly available knowledge and information to develop an invention, or study a user’s likes and dislikes while producing a new musical composition, it is easy to see how humans would, eventually, be pushed out of the loop. AI-generated creations are, thus, inevitable.

The incentive theory will have to be reimagined, too. Constant innovation coupled with market forces will change the system from “incentive-to-create” to “incentive-to-create-well.” While every book, movie, song, and invention is treated at par under the law, only the best inventions and creative works will thrive under the new model. If a particular developer’s AI system can write incredible dialogue for a comedy film or invent the most efficient car engines, the market would want more of these AI systems. Thus incentive will not be eliminated; it will just take a different form.

It is true that writing about such grand schemes is significantly tougher than practically implementing them. But, for any idea to succeed, it must start with a discussion such as this one. Admittedly, we are still a moonshot away from any country granting formal recognition to open access as the basis of its intellectual property laws. And even if a country were to do this, it faces a plethora of hoops to jump through, such as conducting feasibility-testing and dealing with international and internal pressure. Despite these issues, facilitating better access through AI systems remains an objective worth achieving for any society that takes pride in being democratic and equal….(More)”.

What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

Conceived in 2017, shortly after Finland’s A.I. strategy was announced, the government wants to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and has launched a national program to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence can have many positive applications, from identifying cancerous cells in biopsy screenings, to predicting weather patterns that can help farmers increase their crop yields, to improving traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all society will be impacted by A.I. and therefore its creators have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

Data Trusts as an AI Governance Mechanism


Paper by Chris Reed and Irene YH Ng: “This paper is a response to the Singapore Personal Data Protection Commission consultation on a draft AI Governance Framework. It analyses the five data trust models proposed by the UK Open Data Institute and identifies that only the contractual and corporate models are likely to be legally suitable for achieving the aims of a data trust.

The paper further explains how data trusts might be used in the governance of AI, and investigates the barriers which Singapore’s data protection law presents to the use of data trusts and how those barriers might be overcome. Its conclusion is that a mixed contractual/corporate model, with an element of regulatory oversight and audit to ensure consumer confidence that data is being used appropriately, could produce a useful AI governance tool…(More)”.