Stefaan Verhulst
Book by Liliana Doganova: “Forest fires, droughts, and rising sea levels beg a nagging question: have we lost our capacity to act on the future? Liliana Doganova’s book sheds new light on this anxious query. It argues that our relationship to the future has been trapped in the gears of a device called discounting. While its incidence remains little known, discounting has long been entrenched in market and policy practices, shaping the ways firms and governments look to the future and make decisions accordingly. Thus, a sociological account of discounting formulas has become urgent.
Discounting means valuing things through the flows of costs and benefits that they are likely to generate in the future, with these future flows being literally dis-counted as they are translated into the present. How have we come to think of the future, and of valuation, in such terms? Building on original empirical research in the historical sociology of discounting, Doganova takes us to some of the sites and moments in which discounting took shape and gained momentum: valuation of European forests in the eighteenth and nineteenth centuries; economic theories devised in the early 1900s; debates over business strategies in the postwar era; investor-state disputes over the nationalization of natural resources; and drug development in the biopharmaceutical industry today. Weaving these threads together, the book pleads for an understanding of discounting as a political technology, and of the future as a contested domain…(More)”
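The arithmetic at the heart of discounting is worth seeing concretely. Below is a minimal sketch of the present-value calculation the excerpt alludes to; the cash flows, rates, and horizon are illustrative assumptions, not figures from the book.

```python
# Minimal sketch of the discounting arithmetic described above.
# Cash flows, discount rates, and horizon are illustrative, not from the book.

def present_value(cash_flows, rate):
    """Discount a stream of future cash flows (one per year) back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# A flow of 100 per year for 30 years -- e.g., timber revenues from a forest.
flows = [100.0] * 30
print(round(present_value(flows, 0.03), 2))  # ~1960.04: at 3%, distant flows still count
print(round(present_value(flows, 0.10), 2))  # ~942.69: at 10%, the far future nearly vanishes
```

The same 3,000 in nominal flows is worth less than half as much at a 10% rate as at 3%, which is why the choice of rate is, as the book argues, deeply political.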
Essay by Adam Zable, Marine Ragnet, Roshni Singh, Hannah Chafetz, Andrew J. Zahuranec, and Stefaan G. Verhulst: “In what follows we provide a series of case studies of how AI can be used to promote peace, leveraging what we learned at the Kluz Prize for PeaceTech and NYU PREP and Becera events. These case studies and applications of AI are limited to what was included in these initiatives and are not fully comprehensive. With these examples of the role of technology before, during, and after a conflict, we hope to broaden the discussion around the potential positive uses of AI in the context of today’s global challenges.
The table above summarizes how AI may be harnessed throughout the conflict cycle, along with supporting examples from the Kluz Prize for PeaceTech and NYU PREP and Becera events.
(1) The Use of AI Before a Conflict
AI can support conflict prevention by predicting emerging tensions and supporting mediation efforts. In recent years, AI-driven early warning systems have been used to identify patterns that precede violence, allowing for timely interventions.
For instance, the Violence & Impacts Early-Warning System (VIEWS), developed by a research consortium at Uppsala University in Sweden and the Peace Research Institute Oslo (PRIO) in Norway, employs AI and machine learning algorithms to analyze large datasets, including conflict history, political events, and socio-economic indicators—supporting negative peace and peacebuilding efforts. These algorithms are trained to recognize patterns that precede violent conflict, using both supervised and unsupervised learning methods to make predictions about the likelihood and severity of conflicts up to three years in advance. The system also uses predictive analytics to identify potential hotspots, where specific factors—such as spikes in political unrest or economic instability—suggest a higher risk of conflict…(More)”.
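As a rough illustration of the kind of supervised pipeline described here, a classifier can be trained on country-level indicators to emit conflict-risk probabilities. The features, synthetic data, and model choice below are assumptions for illustration, not the actual VIEWS implementation.

```python
# Hedged sketch of an early-warning classifier in the spirit of VIEWS.
# All data are synthetic; the real system uses far richer inputs and models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Toy country-month indicators: conflict history, political unrest, economic instability.
X = rng.normal(size=(n, 3))
# Synthetic label: onset risk rises with past conflict and unrest (purely illustrative).
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5)))
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# Probabilistic forecasts of conflict onset, analogous to hotspot risk scores.
print(model.predict_proba(X_test)[:5, 1])
```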
Article by Emanuel Maiberg: “A group of Wikipedia editors have formed WikiProject AI Cleanup, “a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.”
The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.
“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”…(More)”.
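A crude version of the catchphrase heuristic Lebleu describes can be sketched in a few lines. The phrase list and threshold below are hypothetical illustrations, not WikiProject AI Cleanup's actual criteria.

```python
# Toy sketch of the "common AI catchphrases" heuristic described above.
# Phrase list and threshold are invented; the project's real criteria differ.

SUSPECT_PHRASES = [
    "as an ai language model",
    "it is important to note that",
    "in conclusion, it is clear that",
    "rich cultural heritage",
]

def flag_suspect_text(text: str, threshold: int = 2) -> bool:
    """Flag text containing several stock phrases for human review."""
    lowered = text.lower()
    hits = sum(1 for phrase in SUSPECT_PHRASES if phrase in lowered)
    return hits >= threshold

sample = ("It is important to note that the village has a rich cultural "
          "heritage. In conclusion, it is clear that it matters.")
print(flag_suspect_text(sample))  # True -- queued for human review, not auto-deleted
```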
Essay by Dario Amodei: “I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one…(More)”.
Blog by Daro: “There is a problem related to how we effectively help people receiving social services and public benefit programs. It’s a problem that we have been thinking, talking, and writing about for years. It’s a problem that once you see it, you can’t unsee it. It’s also a problem that you’re likely familiar with, whether you have direct experience with the dynamics themselves, or you’ve been frustrated by how these dynamics impact your work. In February, we organized a convening at Georgetown University in collaboration with Georgetown’s Massive Data Institute to discuss how so many of us can be frustrated by the same problem but haven’t been able to really make any headway toward a solution.
For as long as social services have existed, people have been trying to understand how to manage and evaluate those services. How do we determine what to scale and what to change? How do we replicate successes and how do we minimize unsuccessful interventions? To answer these questions we have tried to create, use, and share evidence about these programs to inform our decision-making. However – and this is a big however – despite our collective efforts, we have difficulty determining whether there’s been an increase in using evidence, or most importantly, whether there’s actually been an improvement in the quality and impact of social services and public benefit programs…(More)”.
OECD Toolkit: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”.
Article by Jylana L. Sheats: “…Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services.
The early applications illustrate the potential for AI to make government operations more effective and responsive. They illustrate the looming challenges, too. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces a daunting task ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded into more government operations, the opportunities for misuse and unintended consequences will only expand…(More)”.
Paper by Panle Jia Barwick, Siyu Chen, Chao Fu & Teng Li: “Concerns over the excessive use of mobile phones, especially among youths and young adults, are growing. Leveraging administrative student data from a Chinese university merged with mobile phone records, random roommate assignments, and a policy shock that affects peers’ peers, we present, to our knowledge, the first estimates of both behavioral spillover and contextual peer effects, and the first estimates of medium-term impacts of mobile app usage on academic achievement, physical health, and labor market outcomes. App usage is contagious: a one s.d. increase in roommates’ in-college app usage raises own app usage by 4.4% on average, with substantial heterogeneity across students. App usage is detrimental to both academic performance and labor market outcomes. A one s.d. increase in own app usage reduces GPAs by 36.2% of a within-cohort-major s.d. and lowers wages by 2.3%. Roommates’ app usage exerts both direct effects (e.g., noise and disruptions) and indirect effects (via behavioral spillovers) on GPA and wage, resulting in a total negative impact of over half the size of the own usage effect. Extending China’s minors’ game restriction policy of 3 hours per week to college students would boost their initial wages by 0.7%. Using high-frequency GPS data, we identify one underlying mechanism: high app usage crowds out time in study halls and increases absences from and late arrivals at lectures…(More)”.
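As a hedged illustration of how such a spillover estimate can be recovered (all data below are simulated, and the design is greatly simplified from the paper's), random roommate assignment lets a plain OLS regression of own usage on roommates' usage be read causally:

```python
# Illustrative sketch of a peer-effects regression behind estimates like these.
# Data are simulated; the true coefficient (0.044) is planted for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
roommate_usage = rng.normal(size=n)  # standardized roommates' app usage
own_usage = 0.044 * roommate_usage + rng.normal(scale=0.99, size=n)

# OLS of own usage on roommate usage; random assignment supports a causal reading.
X = np.column_stack([np.ones(n), roommate_usage])
beta, *_ = np.linalg.lstsq(X, own_usage, rcond=None)
print(f"estimated spillover: {beta[1]:.3f}")  # ~0.044, i.e., 4.4% per s.d.
```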
Paper by Laura Espinosa and Marcel Salathé: “Online public health discourse is becoming more and more important in shaping public health dynamics. Large Language Models (LLMs) offer a scalable solution for analysing the vast amounts of unstructured text found on online platforms. Here, we explore the effectiveness of LLMs, including GPT models and open-source alternatives, for extracting public stances towards vaccination from social media posts. Using an expert-annotated dataset of social media posts related to vaccination, we applied various LLMs and a rule-based sentiment analysis tool to classify the stance towards vaccination. We assessed the accuracy of these methods through comparisons with expert annotations and annotations obtained through crowdsourcing. Our results demonstrate that few-shot prompting of best-in-class LLMs is the best-performing method, and that all alternatives carry significant risks of substantial misclassification. The study highlights the potential of LLMs as a scalable tool for public health professionals to quickly gauge public opinion on health policies and interventions, offering an efficient alternative to traditional data analysis methods. With the continuous advancement in LLM development, the integration of these models into public health surveillance systems could substantially improve our ability to monitor and respond to changing public health attitudes…(More)”.
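A minimal sketch of the few-shot setup the paper evaluates might look as follows. The example posts and labels are invented, and the actual model call is omitted since it depends on the chosen LLM; only the prompt construction and the accuracy check against expert annotations are shown.

```python
# Hedged sketch of few-shot stance classification; examples are invented.

FEW_SHOT_EXAMPLES = [
    ("Got my booster today, feeling great!", "positive"),
    ("They still haven't proven these shots are safe.", "negative"),
    ("The clinic opens at 9am for vaccinations.", "neutral"),
]

def build_prompt(post: str) -> str:
    """Assemble a few-shot prompt asking the model to label a post's stance."""
    lines = ["Classify the stance toward vaccination as positive, negative, or neutral."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f'Post: "{text}"\nStance: {label}')
    lines.append(f'Post: "{post}"\nStance:')
    return "\n\n".join(lines)

def accuracy(predictions: list[str], expert_labels: list[str]) -> float:
    """Agreement with expert annotations, as in the paper's evaluation."""
    matches = sum(p == e for p, e in zip(predictions, expert_labels))
    return matches / len(expert_labels)

print(build_prompt("Vaccines saved my family."))
```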
Article by Magda Osman: “Ken Murphy, CEO of the British multinational supermarket chain Tesco, recently said at a conference that Tesco “could use Clubcard data to nudge customers towards healthier choices”.
So how would this work, and do we want it? Our recent study, published in the Scientific Journal of Research and Reviews, provides an answer.
Loyalty schemes have been around as far back as the 1980s, with the introduction of airlines’ frequent flyer programmes.
Advancements in loyalty schemes have been huge, with some even using gamified approaches, such as leaderboards, trophies and treasure hunts, to keep us engaged. The loyalty principle relies on a form of social exchange, namely reciprocity.
The ongoing reciprocal relationship means that we use a good or service regularly because we trust the service provider, we are satisfied with the service, and we deem the rewards we get as reasonable – be they discounts, vouchers or gifts.
In exchange, we accept that, in many cases, loyalty schemes collect data on us. Our purchasing history, often tied to our demographics, generates improvements in the delivery of the service.
If we accept this, then we continue to benefit from reward schemes, such as promotional offers or other discounts. Their effectiveness depends not only on making attractive offers to us for things we are interested in purchasing, but also on surfacing other discounted items that we hadn’t considered buying…(More)”
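One hedged sketch of how that last step might work: a simple co-occurrence ("bought together") recommender built from purchase histories can surface items a shopper hadn't considered. The baskets below are invented, and real loyalty-scheme analytics are far more elaborate.

```python
# Toy co-occurrence recommender over invented loyalty-card baskets.
from collections import Counter
from itertools import combinations

baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter", "tea"},
    {"tea", "biscuits"},
    {"bread", "jam"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    """Items most often bought alongside `item` across all baskets."""
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(k)]

print(recommend("bread"))  # ['butter', 'jam'] -- candidates for a targeted offer
```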