What Collective Narcissism Does to Society


Essay by Scott Barry Kaufman: “In 2005, the psychologist Agnieszka Golec de Zavala was researching extremist groups, trying to understand what leads people to commit acts of terrorist violence. She began to notice something that looked a lot like what the 20th-century scholars Theodor Adorno and Erich Fromm had referred to as “group narcissism”: Golec de Zavala defined it to me as “a belief that the exaggerated greatness of one’s group is not sufficiently recognized by others,” in which that thirst for recognition is never satiated. At first, she thought it was a fringe phenomenon, but an important one nonetheless. She developed the Collective Narcissism Scale to measure the severity of group-narcissistic beliefs, asking respondents to rate their agreement with statements such as “My group deserves special treatment” and “I insist upon my group getting the respect that is due to it.”

Sixteen years later, Golec de Zavala is a professor at SWPS University, in Poland, and a lecturer at Goldsmiths, University of London, leading the study of group narcissism—and she’s realized that there’s nothing fringe about it. This thinking can happen in seemingly any kind of assemblage: a religious, political, gender, racial, or ethnic group, but also a sports team, club, or cult. Now, she said, she’s terrified at how widely she’s finding it manifested across the globe.

Collective narcissism is not simply tribalism. Humans are inherently tribal, and that’s not necessarily a bad thing. Having a healthy social identity can have an immensely positive impact on well-being. Collective narcissists, though, are often more focused on out-group prejudice than in-group loyalty. In its most extreme form, group narcissism can fuel political radicalism and potentially even violence. But in everyday settings, too, it can keep groups from listening to one another, and lead them to reduce people on the “other side” to one-dimensional characters. The best way to avoid that is by teaching people how to be proud of their group—without obsessing over recognition….(More)”.

Design for Social Innovation: Case Studies from Around the World


Book edited by Mariana Amatullo, Bryan Boyer, Jennifer May, and Andrew Shea: “The United Nations, Australia Post, and governments in the UK, Finland, Taiwan, France, Brazil, and Israel are just a few of the organizations and groups utilizing design to drive social change. Grounded by a global survey in sectors as diverse as public health, urban planning, economic development, education, humanitarian response, cultural heritage, and civil rights, Design for Social Innovation captures these stories and more through 45 richly illustrated case studies from six continents.

From advocating to understanding and everything in between, these cases demonstrate how designers shape new products, services, and systems while transforming organizations and supporting individual growth.

How is this work similar or different around the world? How are designers building sustainable business practices with this work? Why are organizations investing in design capabilities? What evidence do we have of impact by design? Leading practitioners and educators, brought together in seven dynamic roundtable discussions, provide context to the case studies.

Design for Social Innovation is a must-have for professionals, organizations, and educators in design, philanthropy, social innovation, and entrepreneurship. This book marks the first attempt to define the contours of a global overview that showcases the cultural, economic, and organizational levers propelling design for social innovation forward today…(More)”

The Cambridge Handbook of Commons Research Innovations


Book edited by Sheila R. Foster and Chrystie F. Swiney: “The commons theory, first articulated by Elinor Ostrom, is increasingly used as a framework to understand and rethink the management and governance of many kinds of shared resources. These resources can include natural and digital properties, cultural goods, knowledge and intellectual property, and housing and urban infrastructure, among many others. In a world of increasing scarcity and demand – from individuals, states, and markets – it is imperative to understand how best to induce cooperation among users of these resources in ways that advance sustainability, affordability, equity, and justice. This volume reflects this multifaceted and multidisciplinary field from a variety of perspectives, offering new applications and extensions of the commons theory, which is as diverse as the scholars who study it and is still developing in exciting ways…(More)”.

A Data-Driven Company: 21 lessons for large organizations to create value from AI


Book by Richard Benjamins: “Are you planning to start working with big data, analytics or AI, but don’t know where to begin or what to expect? Have you started your data journey and are wondering how to get to the next level? Want to know how to fund your data journey, organize your data team, measure the results, and scale? Don’t worry, you are not alone. Many organizations are struggling with the same questions.

In this book you will learn about the different stages of the data journey; the typical organizational, technological, business, people, and ethical decisions that organizations encounter along the way; and the different options available, with their corresponding pros and cons. We’ll also present practical examples of the options available and the decisions taken, along with the perspectives of 20 experts and professionals from organizations like AXA, BBVA, ENGIE, EY, KPMG, MTN, O2, the ODI, OdiselA, Rabobank, Santander, Telefonica, and Vodafone….(More)”.

Against longtermism


Essay by Phil Torres: “The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today. But to understand the nature of the beast, we need to first dissect it, examining its anatomical features and physiological functions….

Why do I think this ideology is so dangerous? The short answer is that elevating the fulfilment of humanity’s supposed potential above all else could nontrivially increase the probability that actual people – those alive today and in the near future – suffer extreme harms, even death. Consider that, as I noted elsewhere, the longtermist ideology inclines its adherents to take an insouciant attitude towards climate change. Why? Because even if climate change causes island nations to disappear, triggers mass migrations and kills millions of people, it probably isn’t going to compromise our longterm potential over the coming trillions of years. If one takes a cosmic view of the situation, even a climate catastrophe that cuts the human population by 75 per cent for the next two millennia will, in the grand scheme of things, be nothing more than a small blip – the equivalent of a 90-year-old man having stubbed his toe when he was two.

Bostrom’s argument is that ‘a non-existential disaster causing the breakdown of global civilisation is, from the perspective of humanity as a whole, a potentially recoverable setback.’ It might be ‘a giant massacre for man’, he adds, but so long as humanity bounces back to fulfil its potential, it will ultimately register as little more than ‘a small misstep for mankind’. Elsewhere, he writes that the worst natural disasters and devastating atrocities in history become almost imperceptible trivialities when seen from this grand perspective. Referring to the two world wars, AIDS and the Chernobyl nuclear accident, he declares that ‘tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.’

This way of seeing the world, of assessing the badness of AIDS and the Holocaust, implies that future disasters of the same (non-existential) scope and intensity should also be categorised as ‘mere ripples’. If they don’t pose a direct existential risk, then we ought not to worry much about them, however tragic they might be to individuals. As Bostrom wrote in 2003, ‘priority number one, two, three and four should … be to reduce existential risk.’ He reiterated this several years later in arguing that we mustn’t ‘fritter … away’ our finite resources on ‘feel-good projects of suboptimal efficacy’ such as alleviating global poverty and reducing animal suffering, since neither threatens our longterm potential, and our longterm potential is what really matters…(More)”.

Conceptual and normative approaches to AI governance for a global digital ecosystem supportive of the UN Sustainable Development Goals (SDGs)


Paper by Amandeep S. Gill & Stefan Germann: “AI governance is like one of those mythical creatures that everyone speaks of but which no one has seen. Sometimes, it is reduced to a list of shared principles such as transparency, non-discrimination, and sustainability; at other times, it is conflated with specific mechanisms for certification of algorithmic solutions or ways to protect the privacy of personal data. We suggest a conceptual and normative approach to AI governance in the context of a global digital public goods ecosystem to enable progress on the UN Sustainable Development Goals (SDGs). Conceptually, we propose rooting this approach in the human capability concept—what people are able to do and to be, and in a layered governance framework connecting the local to the global. Normatively, we suggest the following six irreducibles: a. human rights first; b. multi-stakeholder smart regulation; c. privacy and protection of personal data; d. a holistic approach to data use captured by the 3Ms—misuse of data, missed use of data and missing data; e. global collaboration (‘digital cooperation’); f. basing governance more in practice, in particular, thinking separately and together about data and algorithms. Throughout the article, we use examples from the health domain particularly in the current context of the Covid-19 pandemic. We conclude by arguing that taking a distributed but coordinated global digital commons approach to the governance of AI is the best guarantee of citizen-centered and societally beneficial use of digital technologies for the SDGs…(More)”.

Under What Conditions Are Data Valuable for Development?


Paper by Dean Jolliffe et al: “Data produced by the public sector can have transformational impacts on development outcomes through better targeting of resources, improved service delivery, cost savings in policy implementation, increased accountability, and more. Around the world, the amount of data produced by the public sector is increasing at a rapid pace, yet their transformational impacts have not been realized fully. Why has the full value of these data not been realized yet? This paper outlines 12 conditions needed for the production and use of public sector data to generate value for development and presents case studies substantiating these conditions. The conditions are that data need to have adequate spatial and temporal coverage (are complete, frequent, and timely), are of high quality (are accurate, comparable, and granular), are easy to use (are accessible, understandable, and interoperable), and are safe to use (are impartial, confidential, and appropriate)…(More)”.

How the Data Revolution Will Help the World Fight Climate Change


Article by Robert Muggah and Carlo Ratti: “…The rapidly increasing volume and variety of Big Data collected in cities—whose potential has barely been tapped—can help meet the pressing need for actionable insight. For one, it can be used to track the climate crisis as it happens. Collected in real time and at high resolution, data can serve as an interface between aspirational goals and daily implementation. Take the case of mobility, a key contributor to carbon, nitrogen, and particulate emissions. A wealth of data from fixed sensors, outdoor video footage, navigation devices, and mobile phones could be processed in real time to classify all modes of city transportation. This can be used to generate granular knowledge of which vehicles—from gas-guzzling SUVs to electric bikes—are contributing to traffic and emissions in a given hour, day, week, or month. This kind of just-in-time analytics can inform agile policy adjustments: Data showing too many miles driven by used diesel vehicles might indicate the need for more targeted car buyback programs, while better data about bike use can bolster arguments for dedicated lanes and priority at stoplights.
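
To make that idea concrete, here is a minimal sketch (not from the article) of how classified trip records might be rolled up into per-class emissions by hour of day; the vehicle classes, emission factors, and trip records below are illustrative assumptions:

```python
# Illustrative sketch: aggregate classified trip records into per-class
# CO2 emissions by hour of day, the kind of granular roll-up the article
# describes. All classes, factors, and records are invented examples.
from collections import defaultdict
from datetime import datetime

# Assumed emission factors in grams of CO2 per kilometre (illustrative only).
EMISSION_FACTORS_G_PER_KM = {
    "diesel_suv": 220.0,
    "petrol_car": 160.0,
    "electric_bike": 0.0,
}

# Each record: (timestamp, vehicle class inferred from sensor/phone data, km driven).
trips = [
    (datetime(2021, 11, 8, 8, 15), "diesel_suv", 12.4),
    (datetime(2021, 11, 8, 8, 40), "petrol_car", 6.1),
    (datetime(2021, 11, 8, 9, 5), "electric_bike", 3.2),
]

def emissions_by_hour_and_class(records):
    """Sum estimated CO2 (grams) per (hour of day, vehicle class)."""
    totals = defaultdict(float)
    for ts, vehicle_class, km in records:
        totals[(ts.hour, vehicle_class)] += km * EMISSION_FACTORS_G_PER_KM[vehicle_class]
    return dict(totals)

print(emissions_by_hour_and_class(trips))
# {(8, 'diesel_suv'): 2728.0, (8, 'petrol_car'): 976.0, (9, 'electric_bike'): 0.0}
```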

Data-driven analytics are already improving energy use efficiency in buildings, where heating, cooling, and electricity use are among the chief culprits of greenhouse gas emissions. It is now possible to track spatial and temporal electricity consumption patterns inside commercial and residential properties with smart meters. City authorities can use them to monitor which buildings are using the most power and when. This kind of data can then be used to set incentives to reduce consumption and optimize energy distribution over a 24-hour period. Utilities can charge higher prices during peak usage hours that put the most carbon-intensive strain on the grid. Although peak pricing strategies have existed for decades, data abundance and advanced computing could now help utilities make use of their full potential. Likewise, thermal cameras in streets can identify buildings with energy leaks, especially during colder periods. Tenants can use this data to replace windows or add insulation, substantially reducing their utility bills while also contributing to local climate action.
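
In the same hedged spirit, a small sketch of what a city authority or utility might compute from hourly smart-meter readings, with invented buildings, readings, peak window, and tariff rates:

```python
# Illustrative sketch: from hourly smart-meter readings, rank buildings by
# how much power they draw during an assumed peak window and price their
# usage with a simple time-of-use tariff. All values are invented.
PEAK_HOURS = range(17, 21)             # assumed 17:00-20:59 peak window
PEAK_RATE, OFF_PEAK_RATE = 0.30, 0.12  # assumed price per kWh

# readings[building] = list of (hour_of_day, kWh consumed in that hour)
readings = {
    "office_a": [(9, 40.0), (18, 65.0), (19, 70.0)],
    "apartment_b": [(8, 5.0), (18, 7.5), (22, 4.0)],
}

def peak_consumption(hourly):
    """Total kWh drawn during the peak window."""
    return sum(kwh for hour, kwh in hourly if hour in PEAK_HOURS)

def time_of_use_bill(hourly):
    """Bill the building at peak or off-peak rates depending on the hour."""
    return sum(kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
               for hour, kwh in hourly)

# Rank buildings by how hard they hit the grid at peak times.
for name in sorted(readings, key=lambda b: peak_consumption(readings[b]), reverse=True):
    print(name, round(peak_consumption(readings[name]), 1),
          round(time_of_use_bill(readings[name]), 2))
# office_a 135.0 45.3
# apartment_b 7.5 3.33
```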

The data revolution is being harnessed by some cities to hasten the energy transition. A good example of this is the Helsinki Hot Heart proposal that recently won a city-wide energy challenge (and which one of our firms—Carlo Ratti Associati—is involved in). Helsinki currently relies on a district heating system powered by coal power plants that are expected to be phased out by 2030. A key question is whether it is possible to power the city using intermittent renewable energy sources. The project proposes giant water basins, floating off the shore in the Baltic Sea, that act as insulated thermal batteries to accumulate heat during peak renewable production, releasing it through the district heating system. This is only possible through a fine-tuned collection of sensors, algorithms, and actuators. Relying on the flow of water and bytes, Helsinki Hot Heart would offer a path to digital-physical systems that could take cities like Helsinki to a sustainable, data-driven future….(More)”.
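
The storage logic described here can be illustrated with a toy simulation: charge the thermal store when renewable generation exceeds heat demand, and discharge it into the district heating network otherwise. All capacities and hourly figures below are assumptions for illustration, not actual Hot Heart design values:

```python
# Toy simulation of the control idea: a floating thermal store charges on
# renewable surplus and discharges to meet district heating demand.
STORE_CAPACITY_MWH = 1000.0
store = 400.0  # current stored heat, MWh (assumed starting level)

# Hourly (renewable generation, heat demand) in MWh - assumed values.
hours = [(120.0, 80.0), (60.0, 90.0), (150.0, 70.0), (30.0, 100.0)]

for generation, demand in hours:
    surplus = generation - demand
    if surplus > 0:
        # Accumulate heat during peak renewable production.
        charged = min(surplus, STORE_CAPACITY_MWH - store)
        store += charged
        shortfall = 0.0
    else:
        # Release stored heat through the district heating system.
        discharged = min(-surplus, store)
        store -= discharged
        shortfall = -surplus - discharged
    print(f"store={store:.0f} MWh, unmet demand={shortfall:.0f} MWh")
```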

The “9Rs Framework”: Establishing the Business Case for Data Collaboration and Re-Using Data in the Public Interest


Article by Stefaan G. Verhulst, Andrew Young, and Andrew J. Zahuranec: “When made accessible and re-used responsibly, privately held data has the potential to generate enormous public value. Whether it’s enabling better science, supporting evidence-based government programs, or helping community groups to identify people who need help, data can be used to make better public interest decisions and improve people’s lives.

Yet, for all the discussion of the societal value of having organizations provide access to their data, there’s been little discussion of the business case for making data available for reuse. What motivates an organization to make its datasets accessible for societal purposes? How does doing so support its organizational goals? What’s the return on investment of using organizational resources to make data available to others?

[Graphic] The 9Rs Framework: The Business Case for Data Reuse in the Public Interest

The Open Data Policy Lab addresses these questions with its “9Rs Framework,” a method for describing and identifying the business case for data reuse for the public good. The 9Rs Framework consists of nine motivations identified through several years of studying and establishing data collaboratives, categorized by different types of return on investment: license to operate, brand equity, or knowledge and insights. Considered together, these nine motivations add up to a model to help organizations understand the business value of making their data assets accessible….(More)”.

Understanding Algorithmic Discrimination in Health Economics Through the Lens of Measurement Errors


Paper by Anirban Basu, Noah Hammarlund, Sara Khor & Aasthaa Bansal: “There is growing concern that the increasing use of machine learning and artificial intelligence-based systems may exacerbate health disparities through discrimination. We provide a hierarchical definition of discrimination consisting of algorithmic discrimination arising from predictive scores used for allocating resources and human discrimination arising from allocating resources by human decision-makers conditional on these predictive scores. We then offer an overarching statistical framework of algorithmic discrimination through the lens of measurement errors, which is familiar to the health economics audience. Specifically, we show that algorithmic discrimination exists when measurement errors exist in either the outcome or the predictors, and there is endogenous selection for participation in the observed data. The absence of any of these phenomena would eliminate algorithmic discrimination. We show that although equalized odds constraints can be employed as bias-mitigating strategies, such constraints may increase algorithmic discrimination when there is measurement error in the dependent variable….(More)”.
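
To illustrate the general measurement-error logic (a sketch only, not the paper's formal framework), suppose the recorded outcome equals the true outcome plus an error term, and the algorithm learns to predict the recorded outcome:

```latex
% Illustrative sketch only; not the authors' formal framework.
% Y: true outcome of interest; \tilde{Y}: recorded outcome; X: predictors; G: group.
\tilde{Y} = Y + \varepsilon, \qquad
\hat{S}(X) = \mathbb{E}\bigl[\tilde{Y} \mid X\bigr], \qquad
\mathbb{E}\bigl[\hat{S}(X) - Y \mid X, G\bigr]
  = \mathbb{E}\bigl[\tilde{Y} \mid X\bigr] - \mathbb{E}\bigl[Y \mid X, G\bigr].
```

Because the score targets the mismeasured outcome, the gap on the right-hand side will generally differ across groups whenever the measurement error is associated with group membership (for example, when true need is recorded less completely for one group), so a predictor that is accurate for the recorded outcome can still systematically misstate the true outcome for that group.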