Stefaan Verhulst
Essay by Julia Freeland Fisher: “Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?
Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with important lessons learned about the harms of social media and big tech over the past two decades, optimism about AI’s potential is soaring today, at least in some circles.
Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.
But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade access to more services for fewer human connections?…(More)”.
About: “Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.
Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [ 1141 so far ]
That’s to say, a compendium of so-called ‘Known Unknowns’.
The idea is to inspire and promote interest in scientific and academic research by highlighting opportunities to investigate problems which no-one has yet been able to solve.
You can start browsing the content via the main menu on the left (or in the ‘Main Menu’ section if you’re using a small-screen device). Alternatively, the search box (above right) will find any articles with details that match your search terms…(More)”.
Paper by Jonathan E. LoTempio Jr & Jonathan D. Moreno: “Since the Human Genome Project, the consensus position in genomics has been that data should be shared widely to achieve the greatest societal benefit. This position relies on imprecise definitions of the concept of ‘broad data sharing’. Accordingly, the implementation of data sharing varies among landmark genomic studies. In this Perspective, we identify definitions of broad that have been used interchangeably, despite their distinct implications. We further offer a framework with clarified concepts for genomic data sharing and probe six examples in genomics that produced public data. Finally, we articulate three challenges. First, we explore the need to reinterpret the limits of general research use data. Second, we consider the governance of public data deposition from extant samples. Third, we ask whether, in light of changing concepts of broad, participants should be encouraged to share their status as participants publicly or not. Each of these challenges is followed by recommendations…(More)”.
Book by Nicholas Carr: “From the telegraph and telephone in the 1800s to the internet and social media in our own day, the public has welcomed new communication systems. Whenever people gain more power to share information, the assumption goes, society prospers. Superbloom tells a startlingly different story. As communication becomes more mechanized and efficient, it breeds confusion more than understanding, strife more than harmony. Media technologies all too often bring out the worst in us.
A celebrated writer on the human consequences of technology, Nicholas Carr reorients the conversation around modern communication, challenging some of our most cherished beliefs about self-expression, free speech, and media democratization. He reveals how messaging apps strip nuance from conversation, how “digital crowding” erodes empathy and triggers aggression, how online political debates narrow our minds and distort our perceptions, and how advances in AI are further blurring the already hazy line between fantasy and reality.
Even as Carr shows how tech companies and their tools of connection have failed us, he forces us to confront inconvenient truths about our own nature. The human psyche, it turns out, is profoundly ill-suited to the “superbloom” of information that technology has unleashed.
With rich psychological insights and vivid examples drawn from history and science, Superbloom provides both a panoramic view of how media shapes society and an intimate examination of the fate of the self in a time of radical dislocation. It may be too late to change the system, Carr counsels, but it’s not too late to change ourselves…(More)”.
Paper by Eve Tsybina et al: “Smart cities improve citizen services by converting data into data-driven decisions. This conversion is not coincidental and depends on the underlying movement of information through four layers: devices, data communication and handling, operations, and planning and economics. Here we examine how this flow of information enables smartness in five major infrastructure sectors: transportation, energy, health, governance and municipal utilities. We show how success or failure within and between layers results in disparities in city smartness across different regions and sectors. Regions such as Europe and Asia exhibit higher levels of smartness compared to Africa and the USA. Furthermore, within one region, such as the USA or the Middle East, smarter cities manage the flow of information more efficiently. Sectors such as transportation and municipal utilities, characterized by extensive data, strong analytics and efficient information flow, tend to be smarter than healthcare and energy. The flow of information, however, generates risks associated with data collection and artificial intelligence deployment at each layer. We underscore the importance of seamless data transformation in achieving cost-effective and sustainable urban improvements and identify both supportive and impeding factors in the journey towards smarter cities…(More)”.
Paper by Stefan Baack et al: “Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in countries like the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend of limiting information about training data causes harm by hindering transparency, accountability, and innovation in the broader ecosystem, and by denying researchers, auditors, and impacted individuals access to the information needed to understand AI models.
While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness…(More)”.
Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.
This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from ArXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.
Report by the Local Government Association (UK): “This report is themed around four inter-related areas on the state of local government digital: market concentration, service delivery, technology, and delivery capabilities. It is particularly challenging to assess the current state of digital transformation in local government, given the diversity of experience and resources and the lack of consistent data collection on digital transformation and technology estates.
This report is informed through our regular and extensive engagement with local government, primary research carried out by the LGA, and the research of stakeholders. It is worth noting that research on market concentration is challenging as it is a highly sensitive area.
Key messages:
- Local Government is a vital part of the public sector innovation ecosystem. Local government needs its priorities and context to be understood within cross-public-sector digital transformation ambitions, through representation on public sector strategic boards, and subsequently integrated into the design of public sector guidance and cross-government products at the earliest point. This will reduce the likelihood of duplication at public expense. Local government must also have access to training equivalent to that available to civil servants…(More)”.
Worldbank Blog: “Government data is only as reliable as the statistics officials who produce it. Yet, surprisingly little is known about these officials themselves. For decades, they have diligently collected data on others – such as households and firms – to generate official statistics, from poverty rates to inflation figures. Yet, data about statistics officials themselves is missing. How competent are they at analyzing statistical data? How motivated are they to excel in their roles? Do they uphold integrity when producing official statistics, even in the face of opposing career incentives or political pressures? And what can National Statistical Offices (NSOs) do to cultivate a workforce that is competent, motivated, and ethical?
We surveyed 13,300 statistics officials in 14 countries in Latin America and the Caribbean to find out. Five results stand out. For further insights, consult our Inter-American Development Bank (IDB) report, Making National Statistical Offices Work Better.
1. The competence and management of statistics officials shape the quality of statistical data
Our survey included a short exam assessing basic statistical competencies, such as descriptive statistics and probability. Statistical competence correlates with data quality: NSOs with higher exam scores among employees tend to achieve better results in the World Bank’s Statistical Performance Indicators (r = 0.36).
NSOs with better management practices also have better statistical performance. For instance, NSOs with more robust recruitment and selection processes have better statistical performance (r = 0.62)…(More)”.
Book by Marion K. Poetz and Henry Sauermann: “This book explores how millions of people can significantly contribute to scientific research with their effort and experience, even if they are not working at scientific institutions and may not have formal scientific training.
Drawing on a strong foundation of scholarship on crowd involvement, this book helps researchers recognize and understand the benefits and challenges of crowd involvement across key stages of the scientific process. Designed as a practical toolkit, it enables scientists to critically assess the potential of crowd participation, determine when it can be most effective, and implement it to achieve meaningful scientific and societal outcomes.
The book also discusses how recent developments in artificial intelligence (AI) shape the role of crowds in scientific research and can enhance the effectiveness of crowd science projects…(More)”