Artificial intelligence is creating a new colonial world order


Series by  Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

How Democracies Spy on Their Citizens 


Ronan Farrow at the New Yorker: “…Commercial spyware has grown into an industry estimated to be worth twelve billion dollars. It is largely unregulated and increasingly controversial. In recent years, investigations by the Citizen Lab and Amnesty International have revealed the presence of Pegasus on the phones of politicians, activists, and dissidents under repressive regimes. An analysis by Forensic Architecture, a research group based at Goldsmiths, University of London, has linked Pegasus to three hundred acts of physical violence. It has been used to target members of Rwanda’s opposition party and journalists exposing corruption in El Salvador. In Mexico, it appeared on the phones of several people close to the reporter Javier Valdez Cárdenas, who was murdered after investigating drug cartels. Around the time that Prince Mohammed bin Salman of Saudi Arabia approved the murder of the journalist Jamal Khashoggi, a longtime critic, Pegasus was allegedly used to monitor phones belonging to Khashoggi’s associates, possibly facilitating the killing, in 2018. (Bin Salman has denied involvement, and NSO said, in a statement, “Our technology was not associated in any way with the heinous murder.”) Further reporting through a collaboration of news outlets known as the Pegasus Project has reinforced the links between NSO Group and anti-democratic states. But there is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”…(More)”.

Why AI Failed to Live Up to Its Potential During the Pandemic


Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad, specific ways that underscore where this technology is still weak: bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.

Research Handbook of Policy Design


Handbook edited by B. G. Peters and Guillaume Fontaine: “…The difference between policy design and policy making lies in the degree of encompassing consciousness involved in designing, which includes policy formulation, implementation and evaluation. Consequently, there are differences in degrees of consciousness within the same kind of activity, from the simplest expression of “non-design”, which refers to the absence of clear intention or purpose, to “re-design”, which is the most common, incremental way to proceed, to “full design”, which suggests the attempt to control the whole process by government or some other controlling actor. There are also differences in kind, from program design (at the micro-level of intervention) to singular policy design, to meta-design when dealing with complex problems that require cross-sectoral coordination. Finally, there are different forms or expressions (technical, political, ideological) and different patterns (transfer, innovation, accident or experiment) of policy design.
Unlike other forms of design, such as engineering or architecture, policy design exhibits specific features because of the social nature of policy targeting and modulation, which involves humans as objects and subjects, with their values, conflicts, and other characteristics (Peters, 2018, p. 5). Thus, policy design is the attempt to integrate different understandings of a policy problem with different conceptions of the policy instruments to be utilized, and the different values according to which a government assesses the outcomes pursued by this policy as expected, satisfactory, acceptable, and so forth. These three components of design – causation, instruments and values – must then be combined to create a coherent plan. We define a fourth component of design as “intervention”, meaning that there must be some strategic sense of how to make the newly designed policy work. This component requires an understanding not only of the specific policy being designed but also of how that policy will mesh with the array of policies already operating. Thus, there is a need to think about “meta-design” issues of coordination and coherence, as well as the usual challenges of implementation…(More)”.

Better data for better therapies: The case for building health data platforms


Paper by Matthias Evers, Lucy Pérez, Lucas Robke, and Katarzyna Smietana: “Despite expanding development pipelines, many pharmaceutical companies find themselves focusing on the same limited number of derisked areas and mechanisms of action in, for example, immuno-oncology. This “herding” reflects the challenges of advancing understanding of disease and hence of developing novel therapeutic approaches. The full promise of innovation from data, AI, and ML has not yet materialized.

It is increasingly evident that one of the main reasons for this is insufficient high-quality, interconnected human data that go beyond just genes and corresponding phenotypes—the data needed by scientists to form concepts and hypotheses and by computing systems to uncover patterns too complex for scientists to understand. Only such high-quality human data would allow deployment of AI and ML, combined with human ingenuity, to unravel disease biology and open up new frontiers to prevention and cure. Here, therefore, we suggest a way of overcoming the data impediment and moving toward a systematic, nonreductionist approach to disease understanding and drug development: the establishment of trusted, large-scale platforms that collect and store the health data of volunteering participants. Importantly, such platforms would allow participants to make informed decisions about who could access and use their information to improve the understanding of disease….(More)”.

Access Rules: Freeing Data from Big Tech for a Better Future


Book by Viktor Mayer-Schönberger and Thomas Ramge: “Information is power, and the time is now for digital liberation. Access Rules mounts a strong and hopeful argument for how informational tools at present in the hands of a few could instead become empowering machines for everyone. By forcing data-hoarding companies to open access to their data, we can reinvigorate both our economy and our society. Authors Viktor Mayer-Schönberger and Thomas Ramge contend that if we disrupt monopoly power and create a level playing field, digital innovations can emerge to benefit us all.

Over the past twenty years, Big Tech has managed to centralize the most relevant data on their servers, as data has become the most important raw material for innovation. However, dominant oligopolists like Facebook, Amazon, and Google, in contrast with their reputation as digital pioneers, are actually slowing down innovation and progress by withholding data for the benefit of their shareholders––at the expense of customers, the economy, and society. As Access Rules compellingly argues, ultimately it is up to us to force information giants, wherever they are located, to open their treasure troves of data to others. In order for us to limit global warming, contain a virus like COVID-19, or successfully fight poverty, everyone—including citizens and scientists, start-ups and established companies, as well as the public sector and NGOs—must have access to data. When everyone has access to the informational riches of the data age, the nature of digital power will change. Information technology will find its way back to its original purpose: empowering all of us to use information so we can thrive as individuals and as societies….(More)”.

Decoding human behavior with big data? Critical, constructive input from the decision sciences


Paper by Konstantinos V. Katsikopoulos and Marc C. Canellas: “Big data analytics employs algorithms to uncover people’s preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb. Rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory….(More)”.
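The “simple rules of thumb” studied in the decision sciences include fast-and-frugal heuristics such as take-the-best, which decides between two options using only the single most valid cue that discriminates between them. The sketch below is an illustrative toy example, not code from the paper; the function name and the city data are invented for demonstration:

```python
# Minimal sketch of the take-the-best heuristic from the fast-and-frugal
# heuristics literature. Cues are checked in order of validity; the first
# cue that discriminates between the two options decides the choice.

def take_the_best(a, b, cue_order):
    """Compare two options by their binary cue profiles.

    a, b: dicts mapping cue name -> 1 (present) or 0 (absent).
    cue_order: cue names sorted from most to least valid.
    Returns 'a', 'b', or 'tie' (no cue discriminates, so guess).
    """
    for cue in cue_order:
        if a[cue] != b[cue]:
            return 'a' if a[cue] > b[cue] else 'b'
    return 'tie'

# Hypothetical task: which of two cities has the larger population?
cues = ['capital', 'has_airport', 'has_university']
berlin = {'capital': 1, 'has_airport': 1, 'has_university': 1}
bochum = {'capital': 0, 'has_airport': 0, 'has_university': 1}

print(take_the_best(berlin, bochum, cues))  # -> 'a'
```

Despite ignoring all but one cue, rules of this kind are transparent and process-based, which is exactly the standard the authors argue big data algorithms should be measured against.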

Making forest data fair and open


Paper by Renato A. F. de Lima: “It is a truth universally acknowledged that those in possession of time and good fortune must be in want of information. Nowhere is this more so than for tropical forests, which include the richest and most productive ecosystems on Earth. Information on tropical forest carbon and biodiversity, and how these are changing, is immensely valuable, and many different stakeholders wish to use data on tropical and subtropical forests. These include scientists, governments, nongovernmental organizations and commercial interests, such as those extracting timber or selling carbon credits. Another crucial, often-ignored group are the local communities for whom forest information may help to assert their rights and conserve or restore their forests.

A widespread view is that to lead to better public outcomes it is necessary and sufficient for forest data to be open and ‘Findable, Accessible, Interoperable, Reusable’ (FAIR). There is indeed a powerful case. Open data — those that anyone can use and share without restrictions — can encourage transparency and reproducibility, foster innovation and be used more widely, thus translating into a greater public good (for example, https://creativecommons.org). Open biological collections and genetic sequences such as GBIF or GenBank have enabled species discovery, and open Earth observation data helps people to understand and monitor deforestation (for example, Global Forest Watch). But the perspectives of those who actually make the forest measurements are much less recognized, meaning that open and FAIR data can be extremely unfair indeed. We argue here that forest data policies and practices must be fair in the correct, linguistic use of the term — just and equitable.

In a world in which forest data origination — measuring, monitoring and sustaining forest science — is secured by large, long-term capital investment (such as through space missions and some officially supported national forest inventories), making all data open makes perfect sense. But where data origination depends on insecure funding and precarious employment conditions, top-down calls to make these data open can be deeply problematic. Even when well-intentioned, such calls ignore the socioeconomic context of the places where the forest plots are located and how knowledge is created, entrenching the structural inequalities that characterize scientific research and collaboration among and within nations. A recent review found scant evidence for open data ever lessening such inequalities. Clearly, only a privileged part of the global community is currently able to exploit the potential of open forest data. Meanwhile, some local communities are de facto owners of their forests and associated knowledge, so making information open — for example, the location of valuable species — may carry risks to themselves and their forests….(More)”.

Inclusive policy making in a digital age: The case for crowdsourced deliberation


Blog by Theo Bass: “In 2016, the Finnish Government ran an ambitious experiment to test if and how citizens across the country could meaningfully contribute to the law-making process.

Many people in Finland use off-road snowmobiles to get around in the winter, raising issues like how to protect wildlife, keep pedestrians safe, and compensate property owners for use of their land for off-road traffic.

To hear from people across the country who would be most affected by new laws, the government set up an online platform to understand problems they faced and gather solutions. Citizens could post comments and suggestions, respond to one another, and vote on ideas they liked. Over 700 people took part, generating around 250 policy ideas.
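The mechanics described above (posting ideas, commenting, and voting) can be captured in a very small data model. The sketch below is purely hypothetical: the names `Idea` and `top_ideas` and the sample entries are invented, and nothing here reflects the actual Finnish platform’s implementation.

```python
# Hypothetical sketch of a crowdsourced-deliberation data model:
# participants post ideas, attach comments, and upvote proposals.
from dataclasses import dataclass, field

@dataclass
class Idea:
    author: str
    text: str
    comments: list = field(default_factory=list)
    votes: set = field(default_factory=set)  # one vote per participant

    def vote(self, participant: str):
        self.votes.add(participant)  # set membership deduplicates votes

def top_ideas(ideas, n=3):
    """Rank ideas by number of distinct supporters, most supported first."""
    return sorted(ideas, key=lambda i: len(i.votes), reverse=True)[:n]

# Invented sample data loosely themed on the snowmobile case.
ideas = [Idea('anna', 'Restrict snowmobile routes near nesting areas'),
         Idea('mikko', 'Compensate landowners per route-kilometre')]
for participant in ('anna', 'jussi', 'kaisa'):
    ideas[1].vote(participant)
ideas[0].vote('mikko')

print([i.author for i in top_ideas(ideas)])  # -> ['mikko', 'anna']
```

Even this toy version shows the design choice that matters for deliberation: votes are deduplicated per participant, so ranking reflects breadth of support rather than volume of activity.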

The exercise caught the attention of academics Tanja Aitamurto and Hélène Landemore. In 2017, they wrote a paper coining the term crowdsourced deliberation — an ‘open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants’ — to describe the interactions they saw on the platform.

Many other crowdsourced deliberation initiatives have emerged in recent years, although they haven’t always been given that name. From France to Taiwan, governments have experimented with opening policy making and enabling online conversations among diverse groups of thousands of people, leading to the adoption of new regulations or laws.

So what’s distinctive about this approach and why should policy makers consider it alongside others? In this post I’ll make a case for crowdsourced deliberation, comparing it to two other popular methods for inclusive policy making…(More)”.

Russia Is Leaking Data Like a Sieve


Matt Burgess at Wired: “Names, birthdays, passport numbers, job titles—the personal information goes on for pages and looks like any typical data breach. But this data set is very different. It allegedly contains the personal information of 1,600 Russian troops who served in Bucha, a Ukrainian city devastated during Russia’s war and the scene of multiple potential war crimes.

The data set is not the only one. Another allegedly contains the names and contact details of 620 Russian spies who are registered to work at the Moscow office of the FSB, the country’s main security agency. Neither set of information was published by hackers. Instead, both were put online by Ukraine’s intelligence services, with all the names and details freely available to anyone online. “Every European should know their names,” Ukrainian officials wrote in a Facebook post as they published the data.

Since Russian troops crossed Ukraine’s borders at the end of February, colossal amounts of information about the Russian state and its activities have been made public. The data offers unparalleled glimpses into closed-off private institutions, and it may be a gold mine for investigators, from journalists to those tasked with investigating war crimes. Broadly, the data comes in two flavors: information published proactively by Ukrainian authorities or their allies, and information obtained by hacktivists. Hundreds of gigabytes of files and millions of emails have been made public.

“Both sides in this conflict are very good at information operations,” says Philip Ingram, a former colonel in British military intelligence. “The Russians are quite blatant about the lies that they’ll tell,” he adds. Since the war started, Russian disinformation has been consistently debunked. Ingram says Ukraine has to be more tactical with the information it publishes. “They have to make sure that what they’re putting out is credible and they’re not caught out telling lies in a way that would embarrass them or embarrass their international partners.”

Both the lists of alleged FSB officers and Russian troops were published online by Ukraine’s Central Intelligence Agency at the end of March and start of April, respectively. While WIRED has not been able to verify the accuracy of the data—and Ukrainian cybersecurity officials did not respond to a request for comment—Aric Toler, from investigative outlet Bellingcat, tweeted that the FSB details appear to have been combined from previous leaks and open source information. It is unclear how up-to-date the information is…(More)”.