Populist Leaders and the Economy
Paper by Manuel Funke, Moritz Schularick and Christoph Trebesch: “Populism at the country level is at an all-time high, with more than 25 percent of nations currently governed by populists. How do economies perform under populist leaders? We build a new long-run cross-country database to study the macroeconomic history of populism. We identify 51 populist presidents and prime ministers from 1900 to 2020 and show that the economic cost of populism is high. After 15 years, GDP per capita is 10 percent lower compared to a plausible nonpopulist counterfactual. Economic disintegration, decreasing macroeconomic stability, and the erosion of institutions typically go hand in hand with populist rule…(More)”.
Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age
Paper by the Centre for Information Policy Leadership: “…explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also explores potential challenges to the use of PETs and possible solutions to those challenges.
CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.
CIPL’s recommendations towards wider adoption are as follows:
- Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
- Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
- Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and help codify best practices to support technical reliability to foster trust in these technologies.
- Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.
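To make the idea of a PET concrete, here is a toy sketch of one widely discussed technique: differential privacy, which answers aggregate queries (here, a count) after adding calibrated Laplace noise so that no individual record can be singled out. This is an educational illustration only, not a vetted implementation; the function name and parameters are our own:

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    A count query changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon = more noise = stronger privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise via the inverse-CDF method.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Example: how many people in this (tiny, fictional) dataset are 40 or older?
ages = [23, 35, 41, 29, 62, 55, 38]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Each query returns a slightly different noisy answer, illustrating the trade-off regulators would need to weigh when deciding whether such output counts as anonymized.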
How Moral Can A.I. Really Be?
Article by Paul Bloom: “…The problem isn’t just that people do terrible things. It’s that people do terrible things that they consider morally good. In their 2014 book “Virtuous Violence,” the anthropologist Alan Fiske and the psychologist Tage Rai argue that violence is often itself a warped expression of morality. “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying,” they write. Their examples include suicide bombings, honor killings, and war. The philosopher Kate Manne, in her book “Down Girl,” makes a similar point about misogynistic violence, arguing that it’s partially rooted in moralistic feelings about women’s “proper” role in society. Are we sure we want A.I.s to be guided by our idea of morality?
Schwitzgebel suspects that A.I. alignment is the wrong paradigm. “What we should want, probably, is not that superintelligent AI align with our mixed-up, messy, and sometimes crappy values but instead that superintelligent AI have ethically good values,” he writes. Perhaps an A.I. could help to teach us new values, rather than absorbing old ones. Stewart, the former graduate student, argued that if researchers treat L.L.M.s as minds and study them psychologically, future A.I. systems could help humans discover moral truths. He imagined some sort of A.I. God—a perfect combination of all the great moral minds, from Buddha to Jesus. A being that’s better than us.
Would humans ever live by values that are supposed to be superior to our own? Perhaps we’ll listen when a super-intelligent agent tells us that we’re wrong about the facts—“this plan will never work; this alternative has a better chance.” But who knows how we’ll respond if one tells us, “You think this plan is right, but it’s actually wrong.” How would you feel if your self-driving car tried to save animals by refusing to take you to a steakhouse? Would a government be happy with a military A.I. that refuses to wage wars it considers unjust? If an A.I. pushed us to prioritize the interests of others over our own, we might ignore it; if it forced us to do something that we consider plainly wrong, we would consider its morality arbitrary and cruel, to the point of being immoral. Perhaps we would accept such perverse demands from God, but we are unlikely to give this sort of deference to our own creations. We want alignment with our own values, then, not because they are the morally best ones, but because they are ours…(More)”
WikiCrow: Automating Synthesis of Human Scientific Knowledge
About: “As scientists, we stand on the shoulders of giants. Scientific progress requires curation and synthesis of prior knowledge and experimental results. However, the scientific literature is so expansive that synthesis, the comprehensive combination of ideas and results, is a bottleneck. The ability of large language models to comprehend and summarize natural language will transform science by automating the synthesis of scientific knowledge at scale. Yet current LLMs are limited by hallucinations, lack access to the most up-to-date information, and do not provide reliable references for statements.
Here, we present WikiCrow, an automated system that can synthesize cited Wikipedia-style summaries for technical topics from the scientific literature. WikiCrow is built on top of Future House’s internal LLM agent platform, PaperQA, which in our testing, achieves state-of-the-art (SOTA) performance on a retrieval-focused version of PubMedQA and other benchmarks, including a new retrieval-first benchmark, LitQA, developed internally to evaluate systems retrieving full-text PDFs across the entire scientific literature.
As a demonstration of the potential for AI to impact scientific practice, we use WikiCrow to generate draft articles for the 15,616 human protein-coding genes that currently lack Wikipedia articles or have only article stubs. WikiCrow drafts each article in about 8 minutes, is much more consistent than human editors at citing its sources, and makes incorrect inferences or statements about 9% of the time, a number that we expect to improve as we mature our systems. WikiCrow will be a foundational tool for the AI Scientists we plan to build in the coming years, and will help us to democratize access to scientific research…(More)”.
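The architecture described above (retrieve relevant papers, ground each statement in a source, emit citations) can be caricatured in a few lines. Everything below is a hypothetical toy for intuition only: the names `Paper`, `retrieve`, and `summarize_with_citations` are our own inventions, not Future House's actual PaperQA API, and real retrieval ranks full-text PDFs with dense embeddings rather than word overlap:

```python
import re
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    text: str

@dataclass
class CitedSentence:
    sentence: str
    source: str

def _tokens(s):
    # Crude lowercase word tokenizer; a real system would use embeddings.
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(corpus, query, k=3):
    # Rank papers by word overlap with the query and keep the top k.
    q = _tokens(query)
    ranked = sorted(corpus, key=lambda p: -len(q & _tokens(p.text)))
    return ranked[:k]

def summarize_with_citations(corpus, topic, k=2):
    """Build a draft in which every statement carries its source paper,
    mirroring the retrieve-then-cite structure at toy scale."""
    draft = []
    for paper in retrieve(corpus, topic, k):
        claim = paper.text.split(".")[0].strip() + "."  # first sentence as the 'claim'
        draft.append(CitedSentence(claim, paper.title))
    return draft
```

The key design point the excerpt emphasizes survives even in this caricature: a statement never enters the draft without an attached source, which is what distinguishes cited synthesis from free-running LLM generation.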
Artificial Intelligence and the City
Book edited by Federico Cugurullo, Federico Caprotti, Matthew Cook, Andrew Karvonen, Pauline McGuirk, and Simon Marvin: “This book explores in theory and practice how artificial intelligence (AI) intersects with and alters the city. Drawing upon a range of urban disciplines and case studies, the chapters reveal the multitude of repercussions that AI is having on urban society, urban infrastructure, urban governance, urban planning and urban sustainability.
Contributors also examine how the city, far from being a passive recipient of new technologies, is influencing and reframing AI through subtle processes of co-constitution. The book advances three main contributions and arguments:
- First, it provides empirical evidence of the emergence of a post-smart trajectory for cities in which new material and decision-making capabilities are being assembled through multiple AIs.
- Second, it stresses the importance of understanding the mutually constitutive relations between the new experiences enabled by AI technology and the urban context.
- Third, it engages with the concepts required to clarify the opaque relations that exist between AI and the city, as well as how to make sense of these relations from a theoretical perspective…(More)”.
Steering Responsible AI: A Case for Algorithmic Pluralism
Paper by Stefaan G. Verhulst: “In this paper, I examine questions surrounding AI neutrality through the prism of existing literature and scholarship about mediation and media pluralism. Such traditions, I argue, provide a valuable theoretical framework for how we should approach the (likely) impending era of AI mediation. In particular, I suggest examining further the notion of algorithmic pluralism. Contrasting this notion to the dominant idea of algorithmic transparency, I seek to describe what algorithmic pluralism may be, and present both its opportunities and challenges. Implemented thoughtfully and responsibly, I argue, algorithmic or AI pluralism has the potential to sustain the diversity, multiplicity, and inclusiveness that are so vital to democracy…(More)”.
Want to know if your data are managed responsibly? Here are 15 questions to help you find out
Article by P. Alison Paprica et al: “As the volume and variety of data about people increases, so does the number of ideas about how data might be used. Studies show that many people want their data to be used for public benefit.
However, the research also shows that public support for use of data is conditional, and only given when risks such as those related to privacy, commercial exploitation and artificial intelligence misuse are addressed.
It takes a lot of work for organizations to establish data governance and management practices that mitigate risks while also encouraging beneficial uses of data. So much so that it can be challenging for responsible organizations to communicate their data trustworthiness without providing an overwhelming amount of technical and legal details.
To address this challenge our team undertook a multiyear project to identify, refine and publish a short list of essential requirements for responsible data stewardship.
Our 15 minimum specification requirements (min specs) are based on a review of the scientific literature and the practices of 23 different data-focused organizations and initiatives.
As part of our project, we compiled over 70 public resources, including examples of organizations that address the full list of min specs: ICES, the Hartford Data Collaborative and the New Brunswick Institute for Research, Data and Training.
Our hope is that information related to the min specs will help organizations and data-sharing initiatives share best practices and learn from each other to improve their governance and management of data…(More)”.
Gaza and the Future of Information Warfare
Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.
In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.
The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.
Governing the economics of the common good
Paper by Mariana Mazzucato: “To meet today’s grand challenges, economics requires an understanding of how common objectives may be collaboratively set and met. Tied to the assumption that the state can, at best, fix market failures and is always at risk of ‘capture’, economic theory has been unable to offer such a framework. To move beyond such limiting assumptions, the paper provides a renewed conception of the common good, going beyond the classic public good and commons approach, as a way of steering and shaping (rather than just fixing) the economy towards collective goals…(More)”.
When Science Meets Power
Book by Geoff Mulgan: “Science and politics have collaborated throughout human history, and science is repeatedly invoked today in political debates, from pandemic management to climate change. But the relationship between the two is muddled and muddied.
Leading policy analyst Geoff Mulgan here calls attention to the growing frictions caused by the expanding authority of science, which sometimes helps politics but often challenges it.
He dissects the complex history of states’ use of science for conquest, glory and economic growth and shows the challenges of governing risk – from nuclear weapons to genetic modification, artificial intelligence to synthetic biology. He shows why the governance of science has become one of the biggest challenges of the twenty-first century, ever more prominent in daily politics and policy.
Whereas science is ordered around what we know and what is, politics engages what we feel and what matters. How can we reconcile the two, so that crucial decisions are both well informed and legitimate?
The book proposes new ways to organize democracy and government, both within nations and at a global scale, to better shape science and technology so that we can reap more of the benefits and fewer of the harms…(More)”.