
Stefaan Verhulst

Paper by Ruben Durante, Nicola Mastrorocco, Luigi Minale & James M. Snyder Jr.: “We use novel and unique survey data from Italy to shed light on key questions regarding the measurement of social capital and the use of social capital indicators for empirical work. Our data cover a sample of over 600,000 respondents interviewed between 2000 and 2015. We identify four distinct components of social capital – i) social participation, ii) political participation, iii) trust in others, and iv) trust in institutions – and examine how they relate to each other. We then study how each dimension of social capital relates to various socioeconomic factors both at the individual and the aggregate level, and to various proxies of social capital commonly used in the literature. Finally, building on previous work, we investigate to what extent different dimensions of social capital predict differences in key economic, political, and health outcomes. Our findings support the view that social capital is a multifaceted object with multiple dimensions that, while related, are distinct from each other. Future work should take such multidimensionality into account and carefully consider what measure of social capital to use…(More)”.

Unpacking Social Capital

Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

We need a much more sophisticated debate about AI

Peter Coy at The New York Times: “Democracy isn’t working very well these days, and artificial intelligence is scaring the daylights out of people. Some creative people are looking at those two problems and envisioning a solution: Democracy fixes A.I., and A.I. fixes democracy.

Attitudes about A.I. are polarized, with some focusing on its promise to amplify human potential and others dwelling on what could go wrong (and what has already gone wrong). We need to find a way out of the impasse, and leaving it to the tech bros isn’t the answer. Democracy — giving everyone a voice on policy — is clearly the way to go.

Democracy can be taken hostage by partisans, though. That’s where artificial intelligence has a role to play. It can make democracy work better by surfacing ideas from everyone, not just the loudest. It can find surprising points of agreement among seeming antagonists and summarize and digest public opinion in a way that’s useful to government officials. Assisting democracy is a more socially valuable function for large language models than, say, writing commercials for Spam in iambic pentameter. The goal, according to the people I spoke to, is to make A.I. part of the solution, not just part of the problem…(More)”. (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern…)

Can A.I. and Democracy Fix Each Other?

Article by Bob Holmes: “People stop their cars simply because a little light turns from green to red. They crowd onto buses, trains and planes with complete strangers, yet fights seldom break out. Large, strong men routinely walk right past smaller, weaker ones without demanding their valuables. People pay their taxes and donate to food banks and other charities.

Most of us give little thought to these everyday examples of cooperation. But to biologists, they’re remarkable — most animals don’t behave that way.

“Even the least cooperative human groups are more cooperative than our closest cousins, chimpanzees and bonobos,” says Michael Muthukrishna, a behavioral scientist at the London School of Economics. Chimps don’t tolerate strangers, Muthukrishna says, and even young children are a lot more generous than chimps.

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation…(More)”.

The secrets of cooperation

Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives to peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

How AI Could Revolutionize Diplomacy

Book by Dan Breznitz: “Across the world, cities and regions have wasted trillions of dollars on blindly copying the Silicon Valley model of growth creation. Since the early years of the information age, we’ve been told that economic growth derives from harnessing technological innovation. To do this, places must create good education systems, partner with local research universities, and attract innovative hi-tech firms. We have lived with this system for decades, and the result is clear: a small number of regions and cities at the top of the high-tech industry but many more fighting a losing battle to retain economic dynamism.

But are there other models that don’t rely on a flourishing high-tech industry? In Innovation in Real Places, Dan Breznitz argues that there are. The purveyors of the dominant ideas on innovation have a feeble understanding of the big picture on global production and innovation. They conflate innovation with invention and suffer from techno-fetishism. In their devotion to start-ups, they refuse to admit that the real obstacle to growth for most cities is the overwhelming power of the real hubs, which siphon up vast amounts of talent and money. Communities waste time, money, and energy pursuing this road to nowhere. Breznitz proposes that communities instead focus on where they fit in the four stages of the global production process. Some stages sit at the highest end, which is where the Clevelands, Sheffields, and Baltimores are being pushed. But that is bad advice. Success lies in understanding the changed structure of the global system of production and then using those insights to enable communities to recognize their own advantages, which in turn allows them to foster surprising forms of specialized innovation. As he stresses, all localities have certain advantages relative to at least one stage of the global production process, and the trick is in recognizing it. Leaders might think the answer lies in high-tech or high-end manufacturing, but more often than not, they’re wrong. Innovation in Real Places is an essential corrective to a mythology of innovation and growth that too many places have bought into in recent years. Best of all, it has the potential to prod local leaders into pursuing realistic and regionally appropriate models for growth and innovation…(More)”.

Innovation in Real Places

Paper by Jamie Danemayer, Andrew Young, Siobhan Green, Lydia Ezenwa and Michael Klein: “Innovative, responsible data use is a critical need in the global response to the coronavirus disease-2019 (COVID-19) pandemic. Yet potentially impactful data are often unavailable to those who could utilize it, particularly in data-poor settings, posing a serious barrier to effective pandemic mitigation. Data challenges, a public call-to-action for innovative data use projects, can identify and address these specific barriers. To understand gaps and progress relevant to effective data use in this context, this study thematically analyses three sets of qualitative data focused on/based in low/middle-income countries: (a) a survey of innovators responding to a data challenge, (b) a survey of organizers of data challenges, and (c) a focus group discussion with professionals using COVID-19 data for evidence-based decision-making. Data quality and accessibility and human resources/institutional capacity were frequently reported limitations to effective data use among innovators. New fit-for-purpose tools and the expansion of partnerships were the most frequently noted areas of progress. Discussion participants identified that building capacity for external/national actors to understand the needs of local communities can address a lack of partnerships while de-siloing information. A synthesis of themes demonstrated that gaps, progress, and needs commonly identified by these groups are relevant beyond COVID-19, highlighting the importance of a healthy data ecosystem to address emerging threats. This is supported by data holders prioritizing the availability and accessibility of their data without causing harm; funders and policymakers committed to integrating innovations with existing physical, data, and policy infrastructure; and innovators designing sustainable, multi-use solutions based on principles of good data governance…(More)”.

Responding to the coronavirus disease-2019 pandemic with innovative data use: The role of data challenges

Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases, and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…
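The feedback loop described above can be made concrete with a toy simulation (all names here are hypothetical illustrations, not any real edtech product): two students with identical true skill answer adaptive questions, but one is briefly distracted early on. Because the engine observes only correctness, the distracted student is routed to easier questions and accumulates a permanently lower “ability” score.

```python
class AdaptiveQuizzer:
    """Toy adaptive-difficulty engine: raises question difficulty after a
    correct answer, lowers it after a wrong one, and credits the student's
    running 'ability' score with the difficulty of each question answered
    correctly. A sketch of the feedback loop, not a real system."""

    def __init__(self):
        self.difficulty = {}  # student -> current question difficulty (1-10)
        self.ability = {}     # student -> inferred ability score

    def next_difficulty(self, student):
        return self.difficulty.setdefault(student, 5)

    def record(self, student, correct):
        d = self.next_difficulty(student)
        if correct:
            self.difficulty[student] = min(10, d + 1)
            self.ability[student] = self.ability.get(student, 0) + d
        else:
            self.difficulty[student] = max(1, d - 1)


def answers_correctly(true_skill, difficulty, distraction):
    # Correctness depends on skill AND a factor the engine never observes.
    return (true_skill - distraction) >= difficulty


engine = AdaptiveQuizzer()
# Two equally skilled students; student B is distracted for the first
# five rounds only, then performs identically to student A.
for round_no in range(20):
    for student, distraction in [("A", 0), ("B", 3 if round_no < 5 else 0)]:
        d = engine.next_difficulty(student)
        engine.record(student, answers_correctly(7, d, distraction))

print(engine.ability)  # A scores higher than B despite identical true skill
```

The early wrong answers push B toward easier, lower-credit questions, so even after the distraction ends, B's recorded score never catches up — the gap is an artifact of the engine's design, not of the students' ability.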

As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.

Eye of the Beholder: Defining AI Bias Depends on Your Perspective

Textbook by Paula Boddington: “This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice.

All topics are explored in depth, with clear explanations of relevant debates in ethics and philosophy, drawing on both historical and current sources. Questions in AI ethics are explored in the context of related issues in technology, regulation, society, religion, and culture, to help readers gain a nuanced understanding of the scope of AI ethics within broader debates and concerns…(More)”

AI Ethics

Book by Brishen Rogers: “As our economy has shifted away from industrial production and service industries have become dominant, many of the nation’s largest employers are now in fields like retail, food service, logistics, and hospitality. These companies have turned to data-driven surveillance technologies that operate over a vast distance, enabling cheaper oversight of massive numbers of workers. Data and Democracy at Work argues that companies often use new data-driven technologies as a power resource—or even a tool of class domination—and that our labor laws allow them to do so.

Employers have established broad rights to use technology to gather data on workers and their performance, to exclude others from accessing that data, and to use that data to refine their managerial strategies. Through these means, companies have suppressed workers’ ability to organize and unionize, thereby driving down wages and eroding working conditions. Labor law today encourages employer dominance in many ways—but labor law can also be reformed to become a tool for increased equity. The COVID-19 pandemic and subsequent Great Resignation have indicated an increased political mobilization of the so-called essential workers of the pandemic, many of them service industry workers. This book describes the necessary legal reforms to increase workers’ associational power and democratize workplace data, establishing more balanced relationships between workers and employers and ensuring a brighter and more equitable future for us all…(More)”.

Data and Democracy at Work: Advanced Information Technologies, Labor Law, and the New Working Class
