Indigenous Protocol and Artificial Intelligence


Indigenous Protocol and Artificial Intelligence Working Group: “This position paper on Indigenous Protocol (IP) and Artificial Intelligence (AI) is a starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns. Each Indigenous community will have its own particular approach to the questions we raise in what follows. What we have written here is not a substitute for establishing and maintaining relationships of reciprocal care and support with specific Indigenous communities. Rather, this document offers a range of ideas to take into consideration when entering into conversations which prioritize Indigenous perspectives in the development of artificial intelligence.

The position paper is an attempt to capture multiple layers of a discussion that happened over 20 months, across 20 time zones, during two workshops, and between Indigenous people (and a few non-Indigenous folks) from diverse communities in Aotearoa, Australia, North America, and the Pacific.

Our aim, however, is not to provide a unified voice. Indigenous ways of knowing are rooted in distinct, sovereign territories across the planet. These extremely diverse landscapes and histories have influenced different communities and their discrete cultural protocols over time. A single ‘Indigenous perspective’ does not exist, as epistemologies are motivated and shaped by the grounding of specific communities in particular territories. Historically, scholarly traditions that homogenize diverse Indigenous cultural practices have resulted in ontological and epistemological violence, and a flattening of the rich texture and variability of Indigenous thought….(More)”.

Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers


Paper by Tamar Sharon: “Since the outbreak of COVID-19, governments have turned their attention to digital contact tracing. In many countries, public debate has focused on the risks this technology poses to privacy, with advocates and experts sounding alarm bells about surveillance and mission creep reminiscent of the post-9/11 era. Yet, when Apple and Google launched their contact tracing API in April 2020, some of the world’s leading privacy experts applauded this initiative for its privacy-preserving technical specifications. In an interesting twist, the tech giants came to be portrayed as greater champions of privacy than some democratic governments.

This article proposes to view the Apple/Google API in terms of a broader phenomenon whereby tech corporations are encroaching into ever new spheres of social life. From this perspective, the (legitimate) advantage these actors have accrued in the sphere of the production of digital goods provides them with (illegitimate) access to the spheres of health and medicine and, more worrisomely, to the sphere of politics. These sphere transgressions raise numerous risks that are not captured by a focus on privacy harms: a crowding out of essential spherical expertise, new dependencies on corporate actors for the delivery of essential public goods, the shaping of (global) public policy by non-representative private actors and, ultimately, the accumulation of decision-making power across multiple spheres. While privacy is certainly an important value, its centrality in the debate on digital contact tracing may blind us to these broader societal harms and unwittingly pave the way for ever more sphere transgressions….(More)”.

The Data Delusion: Protecting Individual Data is Not Enough When the Harm is Collective


Essay by Martin Tisné: “On March 17, 2018, questions about data privacy exploded with the scandal of the previously unknown consulting company Cambridge Analytica. Lawmakers are still grappling with updating laws to counter the harms of big data and AI. In the spring of 2020, the Covid-19 pandemic brought questions about sufficient legal protections back to the public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we only consider individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more impacted by other people’s data than by data about them. Like climate change, the threat is societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harms are appearing as a result: online housing, job and credit ads discriminating on the basis of race and gender; women disqualified from jobs on the basis of gender; and foreign actors targeting light-right groups, pulling them to the far right.² Our public debate, governments, and laws are ill-equipped to deal with these collective, as opposed to individual, harms….(More)”.

Surprising Alternative Uses of IoT Data


Essay by Massimo Russo and Tian Feng: “With COVID-19, the press has been leaning on IoT data as leading indicators in a time of rapid change. The Wall Street Journal and New York Times have leveraged location data from companies like TomTom, INRIX, and Cuebiq to predict economic slowdown and lockdown effectiveness.¹ Increasingly we’re seeing use cases like these, of existing data being used for new purposes and to drive new insights.² Even before the crisis, IoT data was revealing surprising insights when used in novel ways. In 2018, fitness app Strava’s exercise “heatmap” shockingly revealed locations, internal maps, and patrol routes of US military bases abroad.³

The idea of alternative data is also trending in the financial sector. Defined in finance as data from non-traditional sources such as satellites and sensors, financial alternative data has grown from a niche tool used by select hedge funds to an investment input for large institutional investors.⁴ The sector is forecast to grow seven-fold from 2016 to 2020, with spending nearing $2 billion.⁵ And it’s easy to see why: alternative data linked to IoT sources can give investors a real-time, scalable view into how businesses and markets are performing.

This phenomenon of repurposing IoT data collected for one purpose to serve another will extend beyond crisis or financial applications, and it is the focus of this article. For the purposes of our discussion, we’ll define intended data uses as those that deliver the value directly associated with the IoT application, and alternative data uses as those linked to insights and applications outside the intent of the initial IoT application.⁶ Alternative data use matters because it creates incremental value beyond the original application.

Why should we think about this today? Increasingly, CTOs are pursuing IoT projects with a fixed application in mind. Early in IoT’s maturity, companies were eager to pilot the technology; now the focus has rightly shifted to IoT use cases with tangible ROI. In this environment, how should companies think about external data sharing when potential use cases are distant, unknown, or not yet existent? How can companies balance the abstract value of future use cases with the tangible risk of data misuse?…(More)”.

Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science


Book by Stuart Ritchie: “So much relies on science. But what if science itself can’t be relied on?

Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for guidance. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more.

While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. Many widely accepted and highly influential theories and claims – about ‘priming’ and ‘growth mindset’, sleep and nutrition, genes and the microbiome, as well as a host of drugs, allergies and therapies – turn out to be based on unreliable, exaggerated and even fraudulent papers. We can trace their influence in everything from austerity economics to the anti-vaccination movement, and occasionally count the cost of them in human lives….(More)”.

Regulating Electronic Means to Fight the Spread of COVID-19


In Custodia Legis Library of Congress: “It appears that COVID-19 will not go away any time soon. As there is currently no known cure or vaccine against it, countries have to find other ways to prevent and mitigate the spread of this infectious disease. Many countries have turned to electronic measures to provide general information and advice on COVID-19, allow people to check symptoms, trace contacts and alert people who have been in proximity to an infected person, identify “hot spots,” and track compliance with confinement measures and stay-at-home orders.

The Global Legal Research Directorate (GLRD) of the Law Library of Congress recently completed research on the kind of electronic measures countries around the globe are employing to fight the spread of COVID-19 and their potential privacy and data protection implications. We are excited to share with you the report that resulted from this research, Regulating Electronic Means to Fight the Spread of COVID-19. The report covers 23 selected jurisdictions, namely Argentina, Australia, Brazil, China, England, France, Iceland, India, Iran, Israel, Italy, Japan, Mexico, Norway, Portugal, the Russian Federation, South Africa, South Korea, Spain, Taiwan, Turkey, the United Arab Emirates, and the European Union (EU).

The surveys found that dedicated coronavirus apps that are downloaded to an individual’s mobile phone (particularly contact tracing apps), the use of anonymized mobility data, and creating electronic databases were the most common electronic measures. Whereas the EU recommends the use of voluntary apps because of the “high degree of intrusiveness” of mandatory apps, some countries take a different approach and require installing an app for people who enter the country from abroad, people who return to work, or people who are ordered to quarantine.

However, these electronic measures also raise privacy and data protection concerns, in particular as they relate to sensitive health data. The surveys discuss the different approaches countries have taken to ensure compliance with privacy and data protection regulations, such as conducting rights impact assessments before the measures were deployed or having data protection agencies conduct an assessment after deployment.

The map below shows which jurisdictions have adopted COVID-19 contact tracing apps and the technologies they use.

Map shows COVID-19 contact tracing apps in selected jurisdictions. Created by Susan Taylor, Law Library of Congress, based on surveys in “Regulating Electronic Means to Fight the Spread of COVID-19” (Law Library of Congress, June 2020). This map does not cover other COVID-19 apps that use GPS/geolocation….(More)”.

What science can do for democracy: a complexity science approach


Paper by Tina Eliassi-Rad et al: “Political scientists have conventionally assumed that achieving democracy is a one-way ratchet. Only very recently has the question of “democratic backsliding” attracted any research attention. We argue that democratic instability is best understood with tools from complexity science. The explanatory power of complexity science arises from several features of complex systems. Their relevance in the context of democracy is discussed. Several policy recommendations are offered to help (re)stabilize current systems of representative democracy…(More)”.

Data-Driven Unsustainability? An Interdisciplinary Perspective on Governing the Environmental Impacts of a Data-Driven Society


Paper by Federica Lucivero et al.: “Data-driven digital technologies are often presented in policy agendas as contributing to the goal of sustainable development by providing information to reduce energy consumption and offering a green alternative to industries and behaviour with a higher environmental footprint. However, it is widely acknowledged in the context of environmental research that Information and Communication Technologies (ICT) in general, and data centres and cloud computing in particular, have a heavy footprint featuring a high consumption of non-renewable energy, waste production and carbon dioxide emissions. In spite of this, environmental issues have so far figured only sparsely both in policy initiatives supporting data-driven digital initiatives and in recent ethics and governance scholarly literature discussing the data-driven revolution. We convened an interdisciplinary workshop to map out the current conceptual landscape on the environmental impacts of data-driven technologies, and to explore how ethical thinking can contribute to it. In this commentary, we discuss the main themes that emerged and our call for action….(More)”.

Social Research in Times of Big Data: The Challenges of New Data Worlds and the Need for a Sociology of Social Research


Paper by Rainer Diaz-Bone et al: “The phenomenon of big data does not only deeply affect current societies but also poses crucial challenges to social research. This article argues for moving towards a sociology of social research in order to characterize the new qualities of big data and its deficiencies. We draw on the neopragmatist approach of economics of convention (EC) as a conceptual basis for such a sociological perspective.

This framework suggests investigating processes of quantification in their interplay with orders of justifications and logics of evaluation. Methodological issues such as the question of the “quality of big data” must accordingly be discussed in their deep entanglement with epistemic values, institutional forms, and historical contexts and as necessarily implying political issues such as who controls and has access to data infrastructures. On this conceptual basis, the article uses the example of health to discuss the challenges of big data analysis for social research.

Phenomena such as the rise of new and massive privately owned data infrastructures, the economic valuation of huge amounts of connected data, or the “quantified self” movement are presented as indications of a profound transformation compared to established forms of doing social research. Methodological and epistemological, but also institutional and political, strategies are presented to face the risk of being “outperformed” and “replaced” by big data analysis of the kind already done in big US American and Chinese Internet enterprises. In conclusion, we argue that the sketched developments have important implications both for research practices and for methods teaching in the era of big data…(More)”.

AI+1: Shaping Our Integrated Future


Report edited by the Rockefeller Foundation: “As we speak—and browse, and post photos, and move about—artificial intelligence is transforming the fabric of our lives. It is making life easier, better informed, healthier, more convenient. It also threatens to crimp our freedoms, worsen social disparities, and give inordinate power to unseen forces.

Both AI’s virtues and risks have been on vivid display during this moment of global turmoil, forcing a deeper conversation around its responsible use and, more importantly, the rules and regulations needed to harness its power for good.

This is a vastly complex subject, with no easy conclusions. Without a roadmap, however, we risk creating new problems instead of solving meaningful ones.

Last fall The Rockefeller Foundation convened a unique group of thinkers and doers at its Bellagio Center in Italy to weigh one of the great challenges of our time: How to harness the powers of machine learning for social good and minimize its harms. The resulting AI + 1 report includes diverse perspectives from top technologists, philosophers, economists, and artists at a critical moment during the current Covid-19 pandemic.

The report’s authors present a mix of skepticism and hope centered on three themes:

  1. AI is more than a technology. It reflects the values in its system, suggesting that any ethical lapses simply mirror our own deficiencies. And yet, there’s hope: AI can also inspire us, augment us, and make us go deeper.
  2. AI’s goals need to be society’s goals. Rather than the market-driven, profit-making goals that dominate its use today, applying AI responsibly means using it to support systems that serve human goals.
  3. We need a new rule-making system to guide its responsible development. Self-regulation simply isn’t enough. Cross-sector oversight must start with transparency and access to meaningful information, as well as an ability to expose harm.

AI itself is a slippery force, hard to pin down and define, much less regulate. We describe it using imprecise metaphors and deepen our understanding of it through nuanced conversation. This collection of essays provokes the kind of thoughtful consideration that will help us wrestle with AI’s complexity, develop a common language, create bridges between sectors and communities, and build practical solutions. We hope that you join us….(More)”.