The secrets of cooperation


Article by Bob Holmes: “People stop their cars simply because a little light turns from green to red. They crowd onto buses, trains and planes with complete strangers, yet fights seldom break out. Large, strong men routinely walk right past smaller, weaker ones without demanding their valuables. People pay their taxes and donate to food banks and other charities.

Most of us give little thought to these everyday examples of cooperation. But to biologists, they’re remarkable — most animals don’t behave that way.

“Even the least cooperative human groups are more cooperative than our closest cousins, chimpanzees and bonobos,” says Michael Muthukrishna, a behavioral scientist at the London School of Economics. Chimps don’t tolerate strangers, Muthukrishna says, and even young children are a lot more generous than a chimp.

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation…(More)”.

How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives to peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

Innovation in Real Places


Book by Dan Breznitz: “Across the world, cities and regions have wasted trillions of dollars on blindly copying the Silicon Valley model of growth creation. Since the early years of the information age, we’ve been told that economic growth derives from harnessing technological innovation. To do this, places must create good education systems, partner with local research universities, and attract innovative hi-tech firms. We have lived with this system for decades, and the result is clear: a small number of regions and cities at the top of the high-tech industry but many more fighting a losing battle to retain economic dynamism.

But are there other models that don’t rely on a flourishing high-tech industry? In Innovation in Real Places, Dan Breznitz argues that there are. The purveyors of the dominant ideas on innovation have a feeble understanding of the big picture on global production and innovation. They conflate innovation with invention and suffer from techno-fetishism. In their devotion to start-ups, they refuse to admit that the real obstacle to growth for most cities is the overwhelming power of the real hubs, which siphon up vast amounts of talent and money. Communities waste time, money, and energy pursuing this road to nowhere. Breznitz proposes that communities instead focus on where they fit in the four stages in the global production process. Some stages sit at the highest end, and that is the end toward which the Clevelands, Sheffields, and Baltimores are being pushed. But that is bad advice. Success lies in understanding the changed structure of the global system of production and then using those insights to enable communities to recognize their own advantages, which in turn allows them to foster surprising forms of specialized innovation. As he stresses, all localities have certain advantages relative to at least one stage of the global production process, and the trick is in recognizing it. Leaders might think the answer lies in high-tech or high-end manufacturing, but more often than not, they’re wrong. Innovation in Real Places is an essential corrective to a mythology of innovation and growth that too many places have bought into in recent years. Best of all, it has the potential to prod local leaders into pursuing realistic and regionally appropriate models for growth and innovation…(More)”.

Responding to the coronavirus disease-2019 pandemic with innovative data use: The role of data challenges


Paper by Jamie Danemayer, Andrew Young, Siobhan Green, Lydia Ezenwa and Michael Klein: “Innovative, responsible data use is a critical need in the global response to the coronavirus disease-2019 (COVID-19) pandemic. Yet potentially impactful data are often unavailable to those who could utilize them, particularly in data-poor settings, posing a serious barrier to effective pandemic mitigation. Data challenges, a public call-to-action for innovative data use projects, can identify and address these specific barriers. To understand gaps and progress relevant to effective data use in this context, this study thematically analyses three sets of qualitative data focused on/based in low/middle-income countries: (a) a survey of innovators responding to a data challenge, (b) a survey of organizers of data challenges, and (c) a focus group discussion with professionals using COVID-19 data for evidence-based decision-making. Data quality and accessibility, and human resources/institutional capacity, were frequently reported limitations to effective data use among innovators. New fit-for-purpose tools and the expansion of partnerships were the most frequently noted areas of progress. Discussion participants identified that building capacity for external/national actors to understand the needs of local communities can address a lack of partnerships while de-siloing information. A synthesis of themes demonstrated that gaps, progress, and needs commonly identified by these groups are relevant beyond COVID-19, highlighting the importance of a healthy data ecosystem to address emerging threats. This is supported by data holders prioritizing the availability and accessibility of their data without causing harm; funders and policymakers committed to integrating innovations with existing physical, data, and policy infrastructure; and innovators designing sustainable, multi-use solutions based on principles of good data governance…(More)”.

Eye of the Beholder: Defining AI Bias Depends on Your Perspective


Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases–and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…

As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.
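
The adaptive-questioning feedback loop described in Barlow’s piece can be made concrete with a small simulation. The Python below is an editorial sketch, not code from the article: the policy, the function name, and every number are illustrative assumptions. It shows how a system that only raises or lowers difficulty based on right and wrong answers can end up storing an “ability” label that partly reflects transient noise rather than skill.

```python
import random

def run_semester(true_skill: float, rounds: int = 20, seed: int = 0) -> float:
    """Simulate one student under a toy adaptive-questioning policy (hypothetical)."""
    rng = random.Random(seed)
    difficulty = 0.5  # the system starts every student at medium difficulty
    for _ in range(rounds):
        # Correctness depends on skill relative to difficulty, plus transient
        # factors (hunger, fatigue, distraction) that the system never records.
        transient_noise = rng.gauss(0, 0.15)
        p_correct = min(1.0, max(0.0, true_skill - difficulty + 0.5 + transient_noise))
        if rng.random() < p_correct:
            difficulty = min(1.0, difficulty + 0.05)  # labeled "smart": harder questions next
        else:
            difficulty = max(0.0, difficulty - 0.05)  # labeled "struggling": easier questions next
    # The final difficulty is stored as the student's "ability" and follows
    # them into the next class, which is how the bias persists.
    return difficulty

if __name__ == "__main__":
    # Two students with identical true skill: transient noise alone can leave
    # them with noticeably different stored "ability" labels.
    print(run_semester(true_skill=0.6, seed=1))
    print(run_semester(true_skill=0.6, seed=7))
```

In this toy model, two students with the same underlying skill can end the semester with different stored labels purely because of noise early in the run; that stored label, not the skill, is what the next class sees.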

AI Ethics


Textbook by Paula Boddington: “This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice.

All topics are explored in depth, with clear explanations of relevant debates in ethics and philosophy, drawing on both historical and current sources. Questions in AI ethics are explored in the context of related issues in technology, regulation, society, religion, and culture, to help readers gain a nuanced understanding of the scope of AI ethics within broader debates and concerns…(More)”

Data and Democracy at Work: Advanced Information Technologies, Labor Law, and the New Working Class


Book by Brishen Rogers: “As our economy has shifted away from industrial production and service industries have become dominant, many of the nation’s largest employers are now in fields like retail, food service, logistics, and hospitality. These companies have turned to data-driven surveillance technologies that operate over a vast distance, enabling cheaper oversight of massive numbers of workers. Data and Democracy at Work argues that companies often use new data-driven technologies as a power resource—or even a tool of class domination—and that our labor laws allow them to do so.

Employers have established broad rights to use technology to gather data on workers and their performance, to exclude others from accessing that data, and to use that data to refine their managerial strategies. Through these means, companies have suppressed workers’ ability to organize and unionize, thereby driving down wages and eroding working conditions. Labor law today encourages employer dominance in many ways—but labor law can also be reformed to become a tool for increased equity. The COVID-19 pandemic and subsequent Great Resignation have indicated an increased political mobilization of the so-called essential workers of the pandemic, many of them service industry workers. This book describes the necessary legal reforms to increase workers’ associational power and democratize workplace data, establishing more balanced relationships between workers and employers and ensuring a brighter and more equitable future for us all…(More)”.

Prediction Fiction


Essay by Madeline Ashby: “…This contributes to what my colleague Scott Smith calls “flat-pack futures”, or what the Canadian scholar Sun-ha Hong calls “technofutures”, which “preach revolutionary change while practicing a politics of inertia”. These visions of possible future realities possess a mass-market sameness. They look like what happens when you tell an AI image generator to draw the future: just a slurry of genuine human creativity machined into a fine paste. Drone delivery, driverless cars, blockchain this, alt-currency that, smart mirrors, smart everything, and not a speck of dirt or illness or poverty or protest anywhere. Bloodless, bland, boring, banal. It is like ordering your future from the kids’ menu.

When we cannot acknowledge how bad things are, we cannot imagine how to improve them. As with so many challenges, the first step is admitting there is a problem. But if you are isolated, ignored, or ridiculed at work or at home for acknowledging that problem, the problem becomes impossible to deal with. How we treat existential threats to the planet today is how doctors treated women’s cancers until the latter half of the 20th century: by refusing to tell the patient she was dying.

But the issue is not just toxic positivity. Remember those myths about the warnings that go unheeded? The moral of those stories is not that some people are doomed never to be listened to. The moral of those stories is that people in power do not want to hear how they might lose it. It is not that the predictions were wrong, but that they were simply not what people wanted to hear. To work in futures, you have to tell people things they don’t want to hear. And this is when it is useful to tell a story…(More)”

Am I Normal? The 200-Year Search for Normal People (and Why They Don’t Exist)


Book by Sarah Chaney: “Before the 19th century, the term ’normal’ was rarely ever associated with human behaviour. Normal was a term used in maths, for right angles. People weren’t normal; triangles were.

But from the 1830s, the science of measuring and classifying people really took off across Europe and North America, with a proliferation of IQ tests, sex studies, a census of hallucinations – even a UK beauty map (which concluded the women in Aberdeen were “the most repellent”). This book tells the surprising history of how the very notion of the normal came about and how it shaped us all, often while entrenching oppressive values.

Sarah Chaney looks at why we’re still asking the internet: Do I have a normal body? Is my sex life normal? Are my kids normal? And along the way, she challenges why we ever thought it might be a desirable thing to be…(More)”.

The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards


Paper by Ugo Pagallo, Eleonora Bassi & Massimo Durante: “The paper examines the open problems that experts in space law will increasingly have to address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new with today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper insists, however, on a further class of legal issues that AI systems raise only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide for thresholds of evaluation according to which courts and legislators evaluate the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law, stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards, will follow as a result of this twofold uniqueness of AI and of outer space…(More)”.