Stefaan Verhulst
Essay by Emir Efendić and Philippe Van de Calseyde: “Take your best guess for the questions below. Without looking up the answers, jot down your guess in your notes app or on a piece of paper.
- What is the weight of the Liberty Bell?
- Saudi Arabia consumes what percentage of the oil it produces?
- What percent of the world’s population lives in China, India, and the European Union combined?
Next, we want you to take a second guess at these questions. But here’s the catch: this time, try answering from the perspective of a friend with whom you often disagree. (For us, it’s the colleague with whom we shared an office in grad school, ever the contrarian.) How would your friend answer these questions? Write down the second guesses.
Now, the correct answers. The Liberty Bell weighs 2,080 pounds, and, when we conducted the study in 2021, Saudi Arabia consumed 32.5 percent of the oil it produced, and 43.2 percent of the world’s population lived in China, India, and the European Union combined.
For the final step, compare your first guess with the average of both your guesses.
If you’re like most of the participants in our experiment, averaging the two guesses for each question brings you closer to the answer. Why this works has to do with the fascinating way in which people make estimates, and with how principles of aggregation can be used to improve numerical estimates.
A lot of research has shown that the aggregate of individual judgments can be quite accurate, in what has been termed the “wisdom of the crowds.” What makes a crowd so wise? Its wisdom relies on a relatively simple principle: when people’s guesses are sufficiently diverse and independent, averaging judgments increases accuracy by canceling out errors across individuals.
Interestingly, research suggests that the same principles underlying wise crowds also apply when multiple estimates from a single person are averaged—a phenomenon known as the “wisdom of the inner crowd.” As it turns out, the average guess of the same person is often more accurate than each individual guess on its own.
Although effective, multiple guesses from a single person do suffer from a major drawback. They are typically quite similar to one another, as people tend to anchor on their first guess when generating a second guess….(More)”.
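The error-canceling logic behind these results is easy to see in a quick simulation. The Python sketch below is purely illustrative (it is not from the essay) and assumes the two guesses are independent draws around the true value; the anchoring the authors describe would correlate real second guesses and shrink the benefit.

```python
import random

# Toy simulation of the "wisdom of the inner crowd" with hypothetical
# numbers: each guess is the true value plus independent noise, and the
# average of two guesses tends to land closer to the truth because
# errors partially cancel.

TRUE_VALUE = 2080   # weight of the Liberty Bell, in pounds
NOISE_SD = 400      # assumed spread of individual guesses
TRIALS = 10_000

first_errors, avg_errors = [], []
for _ in range(TRIALS):
    first = random.gauss(TRUE_VALUE, NOISE_SD)
    second = random.gauss(TRUE_VALUE, NOISE_SD)
    first_errors.append(abs(first - TRUE_VALUE))
    avg_errors.append(abs((first + second) / 2 - TRUE_VALUE))

print(f"mean error, first guess alone:  {sum(first_errors) / TRIALS:.1f}")
print(f"mean error, average of guesses: {sum(avg_errors) / TRIALS:.1f}")
```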
Paper by Jennifer Hansen and Yiu-Shing Pang: “This commentary explores the potential of private companies to advance scientific progress and solve social challenges through opening and sharing their data. Open data can accelerate scientific discoveries, foster collaboration, and promote long-term business success. However, concerns regarding data privacy and security can hinder data sharing. Companies have options to mitigate the challenges through developing data governance mechanisms, collaborating with stakeholders, communicating the benefits, and creating incentives for data sharing, among others. Ultimately, open data has immense potential to drive positive social impact and business value, and companies can explore solutions tailored to their specific circumstances and needs…(More)”.
Paper by Andy E. Williams: “Increasing the number, diversity, or uniformity of opinions in a group does not necessarily imply that those opinions will converge into a single more “intelligent” one, if an objective definition of the term intelligent exists as it applies to opinions. However, a recently developed approach called human-centric functional modeling provides what might be the first general model for individual or collective intelligence. In the case of the collective intelligence of groups, this model suggests how a cacophony of incoherent opinions in a large group might be combined into coherent collective reasoning by a hypothetical platform called “general collective intelligence” (GCI). When applied to solving group problems, a GCI might be considered a system that leverages collective reasoning to increase the beneficial insights that might be derived from the information available to any group. This GCI model also suggests how the collective reasoning ability (intelligence) might be exponentially increased compared to the intelligence of any individual in a group, potentially resulting in what is predicted to be a collective superintelligence….(More)”
Article by Alfredo Molina Ledesma: “When Claudia Sheinbaum Pardo became Mayor of Mexico City in 2018, she wanted a new approach to tackling the city’s most pressing problems. Crime was at the very top of the agenda – only 7% of the city’s inhabitants considered it a safe place. New policies were needed to turn this around.
Data became a central part of the city’s new strategy. The Digital Agency for Public Innovation was created in 2019 – tasked with using data to help transform the city. To put this into action, the city administration immediately implemented an open data policy and launched their official data platform, Portal de Datos Abiertos. The policy and platform aimed to make data that Mexico City collects accessible to anyone: municipal agencies, businesses, academics, and ordinary people.
“The main objective of the open data strategy of Mexico City is to enable more people to make use of the data generated by the government in a simple and interactive manner,” said Jose Merino, Head of the Digital Agency for Public Innovation. “In other words, what we aim for is to democratize the access and use of information.” To achieve this goal, a new tool for interactive data visualization called Sistema Ajolote was developed in open source and integrated into the Open Data Portal…
Information that had never been made public before, such as street-level crime from the Attorney General’s Office, is now accessible to everyone. Academics, businesses and civil society organizations can access the data to create solutions and innovations that complement the city’s new policies. One example is the successful “Hoyo de Crimen” app, which proposes safe travel routes based on the latest street-level crime data, enabling people to avoid crime hotspots as they walk or cycle through the city.
Since the introduction of the open data policy – which has contributed to a comprehensive crime reduction and social support strategy – high-impact crime in the city has decreased by 53%, and 43% of Mexico City residents now consider the city to be a safe place…(More)”.
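As a rough illustration of the kind of tool the excerpt describes, the following toy sketch shows the core idea behind a safe-routes app such as “Hoyo de Crimen”: shortest-path search over a street graph whose edge costs blend distance with recent crime counts. The graph, the numbers, and the penalty weight are all hypothetical; the app’s actual method is not described in the excerpt.

```python
import heapq

# Toy street graph (hypothetical data): graph[node] is a list of
# (neighbor, distance_in_meters, recent_crime_count) tuples.
graph = {
    "A": [("B", 200, 0), ("C", 150, 5)],
    "B": [("D", 180, 1)],
    "C": [("D", 120, 4)],
    "D": [],
}

def safest_route(start, goal, crime_penalty=100):
    """Dijkstra over a cost that mixes distance with a crime penalty."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist, crimes in graph[node]:
            heapq.heappush(
                frontier,
                (cost + dist + crime_penalty * crimes, neighbor, path + [neighbor]),
            )
    return None

# Prefers A -> B -> D: longer in meters, but it avoids the crime hotspot.
print(safest_route("A", "D"))
```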
Book by Gianclaudio Malgieri: “Vulnerability has traditionally been viewed through the lens of specific groups of people, such as ethnic minorities, children, the elderly, or people with disabilities. With the rise of digital media, our perceptions of vulnerable groups and individuals have been reshaped as new vulnerabilities and different vulnerable sub-groups of users, consumers, citizens, and data subjects emerge.
Vulnerability and Data Protection Law not only depicts these problems but offers the reader a detailed investigation of the concept of data subjects and a reconceptualization of the notion of vulnerability within the General Data Protection Regulation. The regulation offers a forward-facing set of tools that, though largely underexplored, are essential in rebalancing power asymmetries and mitigating induced vulnerabilities in the age of artificial intelligence.
Considering the new risks and potentialities of the digital market, the new awareness about cognitive weaknesses, and the new philosophical sensitivity about the condition of human vulnerability, the author looks for a more general and layered definition of the data subject’s vulnerability that goes beyond traditional labels. In doing so, he seeks to promote a ‘vulnerability-aware’ interpretation of the GDPR.
A heuristic analysis that re-interprets the whole GDPR, this work is essential for both scholars of data protection law and for policymakers looking to strengthen regulations and protect the data of vulnerable individuals…(More)”.
Chapter by Ingrid Schneider: “This chapter challenges the current business models of the dominant platforms in the digital economy. In the search for alternatives, and towards the aim of achieving digital sovereignty, it proceeds in four steps: First, it discusses scholarly proposals to constitute a new intellectual property right on data. Second, it examines four models of data governance distilled from the literature that seek to see data administered (1) as a private good regulated by the market, (2) as a public good regulated by the state, (3) as a common good managed by a commons’ community, and (4) as a data trust supervised by means of stewardship by a trustee. Third, the strengths and weaknesses of each of these models, which are ideal types and serve as heuristics, are critically appraised. Fourth, data trusteeship which at present seems to be emerging as a promising implementation model for better data governance, is discussed in more detail, both in an empirical-descriptive way, by referring to initiatives in several countries, and analytically, by highlighting the challenges and pitfalls of data trusteeship…(More)”.
Paper by Christopher Small et al: “Polis is a platform that leverages machine intelligence to scale up deliberative processes. In this paper, we explore the opportunities and risks associated with applying Large Language Models (LLMs) towards challenges with facilitating, moderating and summarizing the results of Polis engagements. We demonstrate with pilot experiments using Anthropic’s Claude that LLMs can indeed augment human intelligence to help run Polis conversations more efficiently. In particular, we find that summarization capabilities enable categorically new methods with immense promise to empower the public in collective meaning-making exercises. Notably, however, LLM context limitations have a significant impact on the insight and quality of these results.
However, these opportunities come with risks. We discuss some of these risks, as well as principles and techniques for characterizing and mitigating them, and the implications for other deliberative or political systems that may employ LLMs. Finally, we conclude with several open future research directions for augmenting tools like Polis with LLMs….(More)”.
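As one way to picture how summarization might be adapted to the context limitations the abstract mentions, here is a hedged Python sketch of hierarchical (batch-then-merge) summarization. The `llm` callable is a placeholder for any text-in/text-out model call; none of this is taken from the Polis codebase or the paper.

```python
from typing import Callable, List

def summarize_conversation(comments: List[str],
                           llm: Callable[[str], str],
                           batch_size: int = 50) -> str:
    """Summarize a large comment set in batches that fit a model's context."""
    partial_summaries = []
    for i in range(0, len(comments), batch_size):
        batch = comments[i:i + batch_size]
        prompt = ("Summarize the main points of agreement and disagreement "
                  "in these comments:\n" + "\n".join(batch))
        partial_summaries.append(llm(prompt))
    # Merge the per-batch summaries into a single overview.
    merge_prompt = ("Combine these partial summaries into one coherent "
                    "summary:\n" + "\n".join(partial_summaries))
    return llm(merge_prompt)
```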
Blog by Abhi Nemani: “Government services often entail a plethora of paperwork and processes that can be exasperating and time-consuming for citizens. Whether it’s applying for a passport, filing taxes, or registering a business, chances are one has encountered some form of sludge.
Sludge is a term coined by Cass Sunstein, a legal scholar and former administrator of the White House Office of Information and Regulatory Affairs, in his straightforward book, Sludge, to describe unnecessarily effortful processes, bureaucratic procedures, and other barriers to desirable outcomes in government services…
So how can sludge be reduced or eliminated in government services? Sunstein suggests that one way to achieve this is to conduct Sludge Audits, which are systematic evaluations of the costs and benefits of existing or proposed sludge. He also recommends that governments adopt ethical principles and guidelines for the design and use of public services. He argues that by reducing sludge, governments can enhance the quality of life and well-being of their citizens.
One example of sludge reduction in government is the simplification and automation of tax filing in some countries. According to a study by the World Bank, countries that have implemented electronic tax filing systems have reduced the time and cost of tax compliance for businesses and individuals. The study also found that electronic tax filing systems have improved tax administration efficiency, transparency, and revenue collection. Some countries, such as Estonia and Chile, have gone further by pre-filling tax returns with information from various sources, such as employers, banks, and other government agencies. This reduces the burden on taxpayers to provide or verify data, and increases the accuracy and completeness of tax returns.
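The pre-filling idea reduces sludge precisely because the burden of assembling data moves from the taxpayer to the systems that already hold it. A toy sketch of that flow follows; all field names and figures are hypothetical.

```python
# Records the tax authority already holds, reported by third parties.
employer_records = {"wages": 52_000, "tax_withheld": 8_300}
bank_records = {"interest_income": 140}
agency_records = {"taxpayer_id": "ABC-123", "dependents": 1}

# The draft return is assembled for the taxpayer, not by the taxpayer.
draft_return = {**agency_records, **employer_records, **bank_records}

# The taxpayer only reviews the draft and corrects what is wrong.
draft_return["dependents"] = 2

print(draft_return)
```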
Future Opportunities for AI in Cutting Sludge
AI technology is rapidly evolving, and its potential applications are manifold. Here are a few opportunities for further AI deployment:
- AI-assisted policy design: AI can analyze vast amounts of data to inform policy design, identifying areas of administrative burden and suggesting improvements.
- Smart contracts and blockchain: These technologies could automate complex procedures, such as contract execution or asset transfer, reducing the need for paperwork.
- Enhanced citizen engagement: AI could personalize government services, making them more accessible and less burdensome.
Key Takeaways:
- AI could play a significant role in policy design, contract execution, and citizen engagement.
- These technologies hold the potential to significantly reduce sludge…(More)”.
Book by Rosanna Guadagno: “Incorporating relevant theory and research from psychology (social, cognitive, clinical, developmental, and personality), mass communication, and media studies, Psychological Processes in Social Media: Why We Click examines both the positive and negative psychological impact of social media use. The book covers a broad range of topics such as research methods, social influence and the viral spread of information, the use of social media in political movements, prosocial behavior, trolling and cyberbullying, friendship and romantic relationships, and much more. Emphasizing the integration of theory and application throughout, Psychological Processes in Social Media: Why We Click offers an illuminating look at the psychological implications and processes around the use of social media…(More)”.
Article by Michael Lee: “A team of researchers from four Canadian and American universities say artificial intelligence could replace humans when it comes to collecting data for social science research.
Researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania published an article in the journal Science on June 15 about how AI, specifically large language models (LLMs), could affect their work.
“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.
Philip Tetlock, a psychology professor at UPenn and article co-author, goes so far as to say that LLMs will “revolutionize human-based forecasting” in just three years.
In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”
The authors say the social sciences have traditionally relied on methods such as questionnaires and observational studies.
But with the ability of LLMs to pore over vast amounts of text data and generate human-like responses, the authors say this presents a “novel” opportunity for researchers to test theories about human behaviour at a faster rate and on a much larger scale.
Scientists could use LLMs to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
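One way to picture that use is a sketch like the following, which prompts a model once per illustrative persona to collect differing perspectives on a question. The `llm` callable and the personas are placeholders, not the article’s or the paper’s method.

```python
from typing import Callable, List

PERSONAS = [
    "a rural small-business owner",
    "an urban public-transit commuter",
    "a retired schoolteacher on a fixed income",
]

def simulated_perspectives(question: str,
                           llm: Callable[[str], str]) -> List[str]:
    """Collect one short, persona-framed answer per simulated respondent."""
    responses = []
    for persona in PERSONAS:
        prompt = (f"Answer as {persona}, in two sentences.\n"
                  f"Question: {question}")
        responses.append(llm(prompt))
    return responses
```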
“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 per cent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”
One issue the authors identified, however, is that LLMs often learn to exclude sociocultural biases, raising the question of whether models are correctly reflecting the populations they study…(More)”