The End of Real Social Networks


Essay by Daron Acemoglu: “Social media platforms are not only creating echo chambers, propagating falsehoods, and facilitating the circulation of extremist ideas. Previous media innovations, dating back at least to the printing press, did that, too, but none of them shook the very foundations of human communication and social interaction.

CAMBRIDGE – Not only are billions of people around the world glued to their mobile phones, but the information they consume has changed dramatically – and not for the better. On dominant social-media platforms like Facebook, researchers have documented that falsehoods spread faster and more widely than comparable accurate content. Though users are not demanding misinformation, the algorithms that determine what people see tend to favor sensational, inaccurate, and misleading content, because that is what generates “engagement” and thus advertising revenue.

As the internet activist Eli Pariser noted in 2011, Facebook also creates filter bubbles, whereby individuals are more likely to be presented with content that reinforces their own ideological leanings and confirms their own biases. And more recent research has demonstrated that this process has a major influence on the type of information users see.

Even leaving aside Facebook’s algorithmic choices, the broader social-media ecosystem allows people to find subcommunities that align with their interests. This is not necessarily a bad thing. If you are the only person in your community with an interest in ornithology, you no longer have to be alone, because you can now connect with ornithology enthusiasts from around the world. But, of course, the same applies to the lone extremist who can now use the same platforms to access or propagate hate speech and conspiracy theories.

No one disputes that social-media platforms have been a major conduit for hate speech, disinformation, and propaganda. Reddit and YouTube are breeding grounds for right-wing extremism. The Oath Keepers used Facebook, especially, to organize their role in the January 6, 2021, attack on the United States Capitol. Former US President Donald Trump’s anti-Muslim tweets were found to have fueled violence against minorities in the US.

True, some find such observations alarmist, noting that large players like Facebook and YouTube (which is owned by Google/Alphabet) do much more to police hate speech and misinformation than their smaller rivals do, especially now that better moderation practices have been developed. Moreover, other researchers have challenged the finding that falsehoods spread faster on Facebook and Twitter, at least when compared to other media.

Still others argue that even if the current social-media environment is treacherous, the problem is transitory. After all, novel communication tools have always been misused. Martin Luther used the printing press to promote not just Protestantism but also virulent anti-Semitism. Radio proved to be a powerful tool in the hands of demagogues like Father Charles Coughlin in the US and the Nazis in Germany. Both print and broadcast outlets remain full of misinformation to this day, but society has adjusted to these media and managed to contain their negative effects…(More)”.

Breakthroughs in Smart City Implementation


Book edited by Leo P. Ligthart and Ramjee Prasad: “Breakthroughs in Smart City Implementation should give answers to a wide variety of present social, political and technological problems. Green and long-lasting solutions are needed in the coming 10 years and beyond in areas such as improving air quality, the quality of life of city residents, traffic congestion and many more. Two Conasense branches, established in China and in India, report in six book chapters on initiatives needed to overcome the obvious shortcomings at present. Three more chapters complete this fifth Conasense book: an introductory chapter concerning Smart City from the Conasense perspective, a chapter showing that not technology but the people in the cities are most important, and a chapter on recent results and prospects of “Human in the Loop” in smart vehicular systems…(More)”.

AI Ethics: Global Perspectives


The AI Ethics: Global Perspectives course released three new video modules this week.

  • In “AI Ethics and Hate Speech”, Maha Jouini from the African Center for Artificial Intelligence and Digital Technology explores the intersection between AI and hate speech in the context of the MENA region.
  • Maxime Ducret at the University of Lyon and Carl Mörch from the FARI AI Institute for the Common Good introduce the ethical implications of the use of AI technologies in the field of dentistry in their module “How Your Teeth, Your Smile and AI Ethics are Related”.
  • And finally, in “Ethics in AI for Peace”, AI for Peace’s Branka Panic talks about how the “algo age” brought with it many technical, legal, and ethical questions that exceeded the scope of existing peacebuilding and peacetech ethics frameworks.

To watch these lectures in full and register for the course, visit our website.

Collection of Case Studies of Institutional Adoption of Citizen Science


About TIME4CS: “The first objective was to increase our knowledge about the actions leading to institutional changes in RPOs (which are necessary to promote CS in science and technology) through a complete and up-to-date picture based upon the identification, mapping, monitoring and analysis of ongoing CS practices. To accomplish this objective, we, the TIME4CS project team, have collected and analysed 37 case studies on the institutional adoption of Citizen Science and Open Science around the world, which this article addresses.

For an organisation to open up and accept data and information that was produced outside it, with a different framework for data collection and quality assurance, there are multiple challenges. These include existing practices and procedures, legal obligations, as well as resistance from within due to framing of such action as a threat. Research that was carried out with multiple international case studies (Haklay et al. 2014; GFDRR 2018), demonstrated the importance of different institutional and funding structures needed to enable such activities and the use of the resulting information…(More)”.

Income Inequality Is Rising. Are We Even Measuring It Correctly?


Article by Jon Jachimowicz et al: “Income inequality is on the rise in many countries around the world, according to the United Nations. What’s more, disparities in global income were exacerbated by the COVID-19 pandemic, with some countries facing greater economic losses than others.

Policymakers are increasingly focusing on finding ways to reduce inequality to create a more just and equal society for all. In making decisions on how to best intervene, policymakers commonly rely on the Gini coefficient, a statistical measure of resource distribution, including wealth and income levels, within a population. The Gini coefficient measures perfect equality as zero and maximum inequality as one, with higher numbers indicating a greater concentration of resources in the hands of a few.
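The Gini coefficient described above can be computed directly from a list of incomes via the mean absolute difference between all pairs. The sketch below is our own minimal illustration (the function name and sample data are hypothetical, not from the article):

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality.

    Computed as the mean absolute difference between all ordered pairs
    of incomes, normalized by twice the mean income.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs of individuals
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0  (everyone earns the same)
print(gini([0, 0, 0, 100]))    # 0.75 (one person holds everything)
```

A single summary number like this is exactly what the authors argue is too coarse: very different income distributions can produce the same coefficient.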

This measure has long dominated our understanding (pdf) of what inequality means, largely because this metric is used by governments around the world, is released by statistics bureaus in multiple countries, and is commonly discussed in news media and policy discussions alike.

In our paper, recently published in Nature Human Behaviour, we argue that researchers and policymakers rely too heavily on the Gini coefficient—and that by broadening our understanding of how we measure inequality, we can both uncover its impact and intervene to more effectively correct it…(More)”.

Another World Is Possible: How to Reignite Social and Political Imagination


Book by Geoff Mulgan: “As the world confronts the fast catastrophe of Covid and the slow calamity of climate change, we also face a third, less visible emergency: a crisis of imagination. We can easily picture ecological disaster or futures dominated by technology. But we struggle to imagine a world in which people thrive and where we improve our democracy, welfare, neighbourhoods or education. Many are resigned to fatalism—yet they desperately want transformational social change.

This book argues that, although the threats are real, we can use creative imagination to achieve a better future: visualising where we want to go and how to get there. Political and social thinker Geoff Mulgan offers lessons we can learn from the past, and methods we can use now to open up thinking about the future and spark action.

Drawing on social sciences, the arts, philosophy and history, Mulgan shows how we can recharge our collective imagination. From Socrates to Star Wars, he provides a roadmap for the future….(More)”.

Decisions Over Decimals: Striking the Balance between Intuition and Information


Book by Christopher J. Frank, Paul F. Magnone, Oded Netzer: “Agile decision making is imperative as you lead in a data-driven world. Amid streams of data and countless meetings, we make hasty decisions, slow decisions, and often no decisions. Uniquely bridging theory and practice, Decisions Over Decimals breaks this pattern by uniting data intelligence with human judgment to get to action – a sharp approach the authors refer to as Quantitative Intuition (QI). QI raises the power of thinking beyond big data without neglecting it, and of chasing the perfect decision while appreciating that such a thing can never really exist….(More)”.

Nudging Science Towards Fairer Evaluations: Evidence From Peer Review


Paper by Inna Smirnova, Daniel M. Romero, and Misha Teplitskiy: “Peer review is widely used to select scientific projects for funding and publication, but there is growing evidence that it is biased towards prestigious individuals and institutions. Although anonymizing submissions can reduce prestige bias, many organizations do not implement anonymization, in part because enforcing it can be prohibitively costly. Here, we examine whether nudging but not forcing authors to anonymize their submissions reduces prestige bias. We partnered with IOP Publishing, one of the largest academic publishers, which adopted a policy strongly encouraging authors to anonymize their submissions and staggered the policy rollout across its physics journal portfolio. We examine 156,015 submissions to 57 peer-reviewed journals received between January 2018 and February 2022 and measure author prestige with citations accrued at submission time. Higher-prestige first authors were less likely to anonymize. Nevertheless, for low-prestige authors, the policy increased positive peer reviews by 2.4% and acceptance by 5.6%. For middle- and high-prestige authors, the policy decreased positive reviews (1.8% and 1%) and final acceptance (4.6% and 2.2%). The policy did not have unintended consequences on reviewer recruitment or the characteristics of submitting authors. Overall, nudges are a simple, low-cost, and effective method to reduce prestige bias and should be considered by organizations for which enforced anonymization is impractical…(More)”.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
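The decision rule the documents describe can be sketched as follows. Only the 80 percent threshold and the two labels come from the source; the function name and score format are our own hypothetical illustration. Note that, as critics point out, no score ever yields a true negative under this rule:

```python
def classify_match(similarity_score, threshold=0.80):
    """Classify a face-recognition similarity score under the reported rule.

    Scores at or above the threshold count as a 'positive' match; scores
    below it are not discarded as negatives but labeled 'false positive'
    and forwarded for verification against other corroborative evidence.
    """
    if similarity_score >= threshold:
        return "positive match"
    return "false positive - subject to due verification"

print(classify_match(0.85))  # positive match
print(classify_match(0.60))  # false positive - subject to due verification
```

The sketch makes the critics' objection concrete: every score below the threshold still keeps the individual under investigation rather than clearing them.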

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”

Blue Spoons: Sparking Communication About Appropriate Technology Use


Paper by Arun G. Chandrasekhar, Esther Duflo, Michael Kremer, João F. Pugliese, Jonathan Robinson & Frank Schilbach: “An enduring puzzle regarding technology adoption in developing countries is that new technologies often diffuse slowly through the social network. Two of the key predictions of the canonical epidemiological model of technology diffusion are that forums to share information and higher returns to technology should both spur social transmission. We design a large-scale experiment to test these predictions among farmers in Western Kenya, and we fail to find support for either. However, in the same context, we introduce a technology that diffuses very fast: a simple kitchen spoon (painted in blue) to measure out how much fertilizer to use. We develop a model that explains both the failure of the standard approaches and the surprising success of this new technology. The core idea of the model is that not all information is reliable, and farmers are reluctant to develop a reputation of passing along false information. The model and data suggest that there is value in developing simple, transparent technologies to facilitate communication…(More)”.