Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society


Book by Giovanni De Gregorio: “This book is about rights and powers in the digital age. It is an attempt to reframe the role of constitutional democracies in the algorithmic society. By focusing on the European constitutional framework as a lodestar, this book examines the rise and consolidation of digital constitutionalism as a reaction to digital capitalism. The primary goal is to examine how European digital constitutionalism can protect fundamental rights and democratic values against the charm of digital liberalism and the challenges raised by platform powers. Firstly, this book investigates the reasons leading to the development of digital constitutionalism in Europe. Secondly, it provides a normative framework analysing to what extent European constitutionalism provides an architecture to protect rights and limit the exercise of unaccountable powers in the algorithmic society….(More)”.

To make AI fair, here’s what we must learn to do


Article by Mona Sloane: “…From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But these won’t be enough to make AI equitable. There must be practical know-how on how to build AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.

Right now, developers who design AI work in different realms from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get to have a productive conversation with a technologist, or with my fellow social scientists, that moves beyond flagging problems. When I look through conference proceedings, I see the same: very few projects integrate social needs with engineering innovation.

To spur fruitful collaborations, mandates and approaches need to be designed more effectively. Here are three principles that technologists, social scientists and affected communities can apply together to yield AI applications that are less likely to warp society.

Include lived experience. Vague calls for broader participation in AI systems miss the point. Nearly everyone interacting online — using Zoom or clicking reCAPTCHA boxes — is feeding into AI training data. The goal should be to get input from the most relevant participants.

Otherwise, we risk participation-washing: superficial engagement that perpetuates inequality and exclusion. One example is the EU AI Alliance: an online forum, open to anyone, designed to provide democratic feedback to the European Commission’s appointed expert group on AI. When I joined in 2018, it was an unmoderated echo chamber of mostly men exchanging opinions, not representative of the population of the EU, the AI industry or relevant experts…(More)”

Radically Human: How New Technology Is Transforming Business and Shaping Our Future


Book by Paul Daugherty and H. James Wilson: “Technology advances are making tech more . . . human. This changes everything you thought you knew about innovation and strategy. In their groundbreaking book, “Human + Machine,” Accenture technology leaders Paul R. Daugherty and H. James Wilson showed how leading organizations use the power of human-machine collaboration to transform their processes and their bottom lines. Now, as new AI-powered technologies like the metaverse, natural language processing, and digital twins begin to rapidly impact both life and work, those companies and other pioneers across industries are tipping the balance even more strikingly toward the human side with technology-led strategy that is reshaping the very nature of innovation. In “Radically Human,” Daugherty and Wilson show this profound shift, fast-forwarded by the pandemic, toward more human–and more humane–technology. Artificial intelligence is becoming less artificial and more intelligent. Instead of data-hungry approaches to AI, innovators are pursuing data-efficient approaches that enable machines to learn as humans do. Instead of replacing workers with machines, they’re unleashing human expertise to create human-centered AI. In place of lumbering legacy IT systems, they’re building cloud-first IT architectures able to continuously adapt to a world of billions of connected devices. And they’re pursuing strategies that will take their place alongside classic, winning business formulas like disruptive innovation. These against-the-grain approaches to the basic building blocks of business–Intelligence, Data, Expertise, Architecture, and Strategy (IDEAS)–are transforming competition. Industrial giants and startups alike are drawing on this radically human IDEAS framework to create new business models, optimize post-pandemic approaches to work and talent, rebuild trust with their stakeholders, and show the way toward a sustainable future….(More)”.

Governing AI to Advance Shared Prosperity


Chapter by Ekaterina Klinova: “This chapter describes a governance approach to promoting AI research and development that creates jobs and advances shared prosperity. Concerns over the labor-saving focus of AI advancement are shared by a growing number of economists, technologists, and policymakers around the world. They warn about the risk of AI entrenching poverty and inequality globally. Yet, translating those concerns into proactive governance interventions that would steer AI away from generating excessive levels of automation remains difficult and largely unattempted. Key causes of this difficulty arise from two types of sources: (1) insufficiently deep understanding of the full composition of factors giving AI R&D its present emphasis on labor-saving applications; and (2) lack of tools and processes that would enable AI practitioners and policymakers to anticipate and assess the impact of AI technologies on employment, wages and job quality. This chapter argues that addressing (2) will require creating worker-participatory means of differentiating between genuinely worker-benefiting AI and worker-displacing or worker-exploiting AI. To contribute to tackling (1), this chapter reviews AI practitioners’ motivations and constraints, such as relevant laws, market incentives, as well as less tangible but still highly influential constraining and motivating factors, including explicit and implicit norms in the AI field, visions of future societal order popular among the field’s members and ways that AI practitioners define goals worth pursuing and measure success. I highlight how each of these factors contributes meaningfully to giving AI advancement its excessive labor-saving emphasis and describe opportunities for governance interventions that could correct that overemphasis….(More)”.

AI & Society


Special Issue of Daedalus edited by James Manyika: “AI is transforming our relationships with technology and with others, our senses of self, as well as our approaches to health care, banking, democracy, and the courts. But while AI in its many forms has become ubiquitous and its benefits to society and the individual have grown, its impacts are varied. Concerns about its unintended effects and misuses have become paramount in conversations about the successful integration of AI in society. This volume explores the many facets of artificial intelligence: its technology, its potential futures, its effects on labor and the economy, its relationship with inequalities, its role in law and governance, its challenges to national security, and what it says about us as humans…(More)” See also https://aiethicscourse.org/

Artificial intelligence is creating a new colonial world order


Series by Karen Hao: “…Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor….

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way….(More)”.

Why AI Failed to Live Up to Its Potential During the Pandemic


Essay by Bhaskar Chakravorti: “The pandemic could have been the moment when AI made good on its promising potential. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem-solving with datasets spilling out of every country in the world. Instead, AI failed in myriad, specific ways that underscore where this technology is still weak: Bad datasets, embedded bias and discrimination, susceptibility to human error, and a complex, uneven global context all caused critical failures. But these failures also offer lessons on how we can make AI better: 1) we need to find new ways to assemble comprehensive datasets and merge data from multiple sources, 2) there needs to be more diversity in data sources, 3) incentives must be aligned to ensure greater cooperation across teams and systems, and 4) we need international rules for sharing data…(More)”.

A.I. Is Mastering Language. Should We Trust What It Says?


Steven Johnson at the New York Times: “You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. The title suggested a story about a promising — but also potentially dangerous — new technology on the cusp of becoming mainstream, and after reading only a few sentences, you find yourself pulled into the story. A revolution is coming in machine intelligence, the author argues, and we need, as a society, to get better at anticipating its consequences. But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first .

The missing word jumps into your consciousness almost unbidden: “the very last word of the first paragraph.” There’s no sense of an internal search query in your mind; the word “paragraph” just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.

Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like “stochastic gradient descent” and “convolutional neural networks.” The whole system is believed to be one of the most powerful supercomputers on the planet.

And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.…(More)”.
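To make that training game concrete, here is a minimal sketch in Python: a toy bigram model that plays guess-the-missing-word over a tiny corpus. It is an illustrative stand-in only; the system the article describes learns its guesses with neural networks and stochastic gradient descent at vastly larger scale, and the corpus and function names below are assumptions made up for this sketch.

```python
# Toy version of the "guess the missing word" game described in the article.
# Real systems learn weights with neural networks; this sketch just counts
# which word most often follows each word in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "you omitted the very last word of the first paragraph . "
    "the missing word jumps into your mind . "
    "the word paragraph just pops out ."
).split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def guess_missing_word(context: str):
    """Guess the word most likely to follow the last word of the context."""
    last = context.split()[-1]
    candidates = bigrams.get(last)
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_missing_word("the very last word of the first"))  # -> "paragraph"
```

The objective is the same at any scale: given the words so far, predict the word that comes next, billions of times over, with learned weights standing in for these raw counts.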

Decoding human behavior with big data? Critical, constructive input from the decision sciences


Paper by Konstantinos V. Katsikopoulos and Marc C. Canellas: “Big data analytics employs algorithms to uncover people’s preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb. Rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory….(More)”.
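The paper’s central comparison, complex fitted models versus simple rules of thumb, can be sketched in a few lines of Python. The synthetic data, cue structure, and threshold below are assumptions made for illustration; they are not the paper’s materials or results.

```python
# Minimal sketch: a unit-weight "tallying" heuristic versus a fitted
# logistic regression, both scored out of sample on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 200, 5                        # small sample, five binary cues
X = rng.integers(0, 2, size=(n, k))  # each cue is present (1) or absent (0)
# The true outcome depends weakly and noisily on the cues.
logits = X.sum(axis=1) - k / 2 + rng.normal(scale=2.0, size=n)
y = (logits > 0).astype(int)

train, test = slice(0, 100), slice(100, None)

# Tallying: count the cues in favor and ignore weights entirely.
tally_pred = (X[test].sum(axis=1) > k / 2).astype(int)
tally_acc = (tally_pred == y[test]).mean()

# Complex model: logistic regression with fitted weights.
model = LogisticRegression().fit(X[train], y[train])
model_acc = model.score(X[test], y[test])

print(f"tallying accuracy:   {tally_acc:.2f}")
print(f"regression accuracy: {model_acc:.2f}")
```

On small, noisy samples like these, the unit-weight rule often matches the fitted regression out of sample (exact numbers vary with the seed), which is the pattern the authors report in domains such as elections and bail decisions.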

Cities Take the Lead in Setting Rules Around How AI Is Used


Jackie Snow at the Wall Street Journal: “As cities and states roll out algorithms to help them provide services like policing and traffic management, they are also racing to come up with policies for using this new technology.

AI, at its worst, can disadvantage already marginalized groups, adding to human-driven bias in hiring, policing and other areas. And its decisions can often be opaque—making it difficult to tell how to fix that bias, as well as other problems. (The Wall Street Journal discussed calls for regulation of AI, or at least greater transparency about how the systems work, with three experts.)

Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place.

Here are some ways cities are redefining how AI will work within their borders and beyond.

Explaining the algorithms: Amsterdam and Helsinki

One of the biggest complaints against AI is that it makes decisions that can’t be explained, which can lead to complaints about arbitrary or even biased results.

To let their citizens know more about the technology already in use in their cities, Amsterdam and Helsinki collaborated on websites that document how each city government uses algorithms to deliver services. The registry includes information on the data sets used to train an algorithm, a description of how an algorithm is used, how public servants use the results, the human oversight involved and how the city checks the technology for problems like bias.
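As a rough sketch of what one record in such a registry might hold, based on the fields the article lists: the schema and sample entry below are hypothetical, not the actual data model used by Amsterdam or Helsinki.

```python
# Hypothetical registry record, modeled on the fields the article lists.
# Field names and the sample entry are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegistryEntry:
    name: str                     # e.g., an automated parking-control system
    purpose: str                  # how the algorithm is used to deliver a service
    training_datasets: list[str]  # data sets used to train the algorithm
    use_of_results: str           # how public servants act on the output
    human_oversight: str          # who reviews or can override decisions
    bias_checks: list[str] = field(default_factory=list)  # checks for problems like bias

entry = AlgorithmRegistryEntry(
    name="Automated parking control (illustrative)",
    purpose="Flag vehicles that may be parked without a valid permit.",
    training_datasets=["Licence-plate scans", "Permit records"],
    use_of_results="Inspectors review flagged cases before issuing fines.",
    human_oversight="A human inspector confirms every potential violation.",
    bias_checks=["Periodic audit for neighborhood-level disparities"],
)
print(entry.name)
```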

Amsterdam has six algorithms fully explained—with a goal of 50 to 100—on the registry website, including how the city’s automated parking-control and trash-complaint reports work. Helsinki, which is only focusing on the city’s most advanced algorithms, also has six listed on its site, with another 10 to 20 left to put up.

“We needed to assess the risk ourselves,” says Linda van de Fliert, an adviser at Amsterdam’s Chief Technology Office. “And we wanted to show the world that it is possible to be transparent.”…(More)” See also AI Localism: The Responsible Use and Design of Artificial Intelligence at the Local Level