Paper by Jonathan E. LoTempio Jr & Jonathan D. Moreno: “Since the Human Genome Project, the consensus position in genomics has been that data should be shared widely to achieve the greatest societal benefit. This position relies on imprecise definitions of the concept of ‘broad data sharing’. Accordingly, the implementation of data sharing varies among landmark genomic studies. In this Perspective, we identify definitions of ‘broad’ that have been used interchangeably, despite their distinct implications. We further offer a framework with clarified concepts for genomic data sharing and probe six examples in genomics that produced public data. Finally, we articulate three challenges. First, we explore the need to reinterpret the limits of general research use data. Second, we consider the governance of public data deposition from extant samples. Third, we ask whether, in light of changing concepts of ‘broad’, participants should be encouraged to share their status as participants publicly or not. Each of these challenges is followed by recommendations…(More)”.
Superbloom: How Technologies of Connection Tear Us Apart
Book by Nicholas Carr: “From the telegraph and telephone in the 1800s to the internet and social media in our own day, the public has welcomed new communication systems. Whenever people gain more power to share information, the assumption goes, society prospers. Superbloom tells a startlingly different story. As communication becomes more mechanized and efficient, it breeds confusion more than understanding, strife more than harmony. Media technologies all too often bring out the worst in us.
A celebrated writer on the human consequences of technology, Nicholas Carr reorients the conversation around modern communication, challenging some of our most cherished beliefs about self-expression, free speech, and media democratization. He reveals how messaging apps strip nuance from conversation, how “digital crowding” erodes empathy and triggers aggression, how online political debates narrow our minds and distort our perceptions, and how advances in AI are further blurring the already hazy line between fantasy and reality.
Even as Carr shows how tech companies and their tools of connection have failed us, he forces us to confront inconvenient truths about our own nature. The human psyche, it turns out, is profoundly ill-suited to the “superbloom” of information that technology has unleashed.
With rich psychological insights and vivid examples drawn from history and science, Superbloom provides both a panoramic view of how media shapes society and an intimate examination of the fate of the self in a time of radical dislocation. It may be too late to change the system, Carr counsels, but it’s not too late to change ourselves…(More)”.
Smart cities: the data to decisions process
Paper by Eve Tsybina et al: “Smart cities improve citizen services by converting data into data-driven decisions. This conversion does not happen by chance; it depends on the underlying movement of information through four layers: devices, data communication and handling, operations, and planning and economics. Here we examine how this flow of information enables smartness in five major infrastructure sectors: transportation, energy, health, governance and municipal utilities. We show how success or failure within and between layers results in disparities in city smartness across different regions and sectors. Regions such as Europe and Asia exhibit higher levels of smartness compared to Africa and the USA. Furthermore, within one region, such as the USA or the Middle East, smarter cities manage the flow of information more efficiently. Sectors such as transportation and municipal utilities, characterized by extensive data, strong analytics and efficient information flow, tend to be smarter than healthcare and energy. The flow of information, however, generates risks associated with data collection and artificial intelligence deployment at each layer. We underscore the importance of seamless data transformation in achieving cost-effective and sustainable urban improvements and identify both supportive and impeding factors in the journey towards smarter cities…(More)”.
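To make the paper’s four-layer flow concrete, here is a minimal, illustrative sketch in Python. Every name, value, and threshold in it is a hypothetical example of a data-to-decisions pipeline, not anything specified by the paper:

```python
# Illustrative sketch of a data-to-decisions flow across the four layers
# described in the paper. All names, values, and thresholds are hypothetical.

def devices() -> list[float]:
    # Layer 1 (devices): sensors emit raw readings, e.g., vehicles per minute.
    return [112.0, 98.5, 143.2, 157.9]

def communicate_and_handle(raw: list[float]) -> list[float]:
    # Layer 2 (data communication and handling): transmit, clean, validate.
    return [reading for reading in raw if reading >= 0]  # drop invalid readings

def operations(clean: list[float]) -> float:
    # Layer 3 (operations): analytics summarizing current conditions.
    return sum(clean) / len(clean)  # average traffic load

def planning_and_economics(avg_load: float) -> str:
    # Layer 4 (planning and economics): turn analytics into a decision.
    return "extend green-light phase" if avg_load > 120 else "keep current timing"

if __name__ == "__main__":
    decision = planning_and_economics(operations(communicate_and_handle(devices())))
    print(decision)  # the data-driven decision produced by the full flow
```

A break anywhere in this chain (a failed sensor, a lossy network, weak analytics) stalls the flow, which is the paper’s point about why smartness varies across layers, sectors, and regions.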
Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models
Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals born of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.
This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from arXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.
Good government data requires good statistics officials – but how motivated and competent are they?
World Bank Blog: “Government data is only as reliable as the statistics officials who produce it. Yet, surprisingly little is known about these officials themselves. For decades, they have diligently collected data on others – such as households and firms – to generate official statistics, from poverty rates to inflation figures. Data about the officials themselves, however, is missing. How competent are they at analyzing statistical data? How motivated are they to excel in their roles? Do they uphold integrity when producing official statistics, even in the face of opposing career incentives or political pressures? And what can National Statistical Offices (NSOs) do to cultivate a workforce that is competent, motivated, and ethical?
We surveyed 13,300 statistics officials in 14 countries in Latin America and the Caribbean to find out. Five results stand out. For further insights, consult our Inter-American Development Bank (IDB) report, Making National Statistical Offices Work Better.
1. The competence and management of statistics officials shape the quality of statistical data
Our survey included a short exam assessing basic statistical competencies, such as descriptive statistics and probability. Statistical competence correlates with data quality: NSOs whose employees scored higher on the exam tend to achieve better results on the World Bank’s Statistical Performance Indicators (r = 0.36).
NSOs with better management practices also perform better. For instance, NSOs with more robust recruitment and selection processes achieve higher statistical performance (r = 0.62)…(More)”.
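As a concrete illustration of the statistic behind these findings, the sketch below computes a Pearson correlation coefficient in Python. The numbers are entirely synthetic and hypothetical, made up for illustration; they are not the IDB survey data:

```python
import numpy as np

# Hypothetical, synthetic data: one row per NSO. Not the IDB survey data.
# exam: average employee score on the statistical-competence exam (0-100)
# spi:  that NSO's score on the World Bank's Statistical Performance Indicators
exam = np.array([62.0, 71.0, 55.0, 80.0, 67.0, 74.0, 59.0, 69.0])
spi = np.array([58.0, 70.0, 52.0, 78.0, 63.0, 72.0, 61.0, 66.0])

# Pearson's r, the statistic the blog reports (e.g., r = 0.36, r = 0.62):
# covariance of the two series divided by the product of their standard deviations.
r = np.corrcoef(exam, spi)[0, 1]
print(f"Pearson r = {r:.2f}")  # values near +1 indicate a strong positive association
```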
How and When to Involve Crowds in Scientific Research
Book by Marion K. Poetz and Henry Sauermann: “This book explores how millions of people can contribute significantly to scientific research with their effort and experience, even if they do not work at scientific institutions and may lack formal scientific training.
Drawing on a strong foundation of scholarship on crowd involvement, this book helps researchers recognize and understand the benefits and challenges of crowd involvement across key stages of the scientific process. Designed as a practical toolkit, it enables scientists to critically assess the potential of crowd participation, determine when it can be most effective, and implement it to achieve meaningful scientific and societal outcomes.
The book also discusses how recent developments in artificial intelligence (AI) shape the role of crowds in scientific research and can enhance the effectiveness of crowd science projects…(More)”.
The Tyranny of Now
Essay by Nicholas Carr: “…Communication systems are also transportation systems. Each medium carries information from here to there, whether in the form of thoughts and opinions, commands and decrees, or artworks and entertainments.
What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.
Because every society organizes and sustains itself through acts of communication, the material biases of media do more than determine how long messages last or how far they reach. They play an important role in shaping a society’s size, form, and character — and ultimately its fate. As the sociologist Andrew Wernick explained in a 1999 essay on Innis, “The portability of media influences the extent, and the durability of media the longevity, of empires, institutions, and cultures.”
In societies where time-biased media are dominant, the emphasis is on tradition and ritual, on maintaining continuity with the past. People are held together by shared beliefs, often religious or mythological, passed down through generations. Elders are venerated, and power typically resides in a theocracy or monarchy. Because the society lacks the means to transfer knowledge and exert influence across a broad territory, it tends to remain small and insular. If it grows, it does so in a decentralized fashion, through the establishment of self-contained settlements that hold the same traditions and beliefs…(More)”.
Artificial Intelligence Narratives
A Global Voices Report: “…Framing AI systems as intelligent is further complicated by, and intertwined with, neighboring narratives. In the US, AI narratives often revolve around the opposing themes of hope and fear, bridging two strong emotions: existential fears and economic aspirations. In either case, they propose that the technology is powerful. These narratives contribute to the hype surrounding AI tools and their potential impact on society. Some examples include:
- An “AI arms race” between the United States and other global powers, particularly China.
- AI is a driver of economic growth and a transformative agent reshaping society.
- AI is a threat to labor.
- AI is a tool for good that benefits society.
- AI is an inevitable force that requires urgent regulation.
- If you don’t use AI, you will fall behind.
- AI is a democratizing force, lowering barriers to entry globally.
Many of these framings present AI as an unstoppable and accelerating force. While this narrative can generate excitement and investment in AI research, it can also contribute to a sense of technological determinism and a lack of critical engagement with the consequences of widespread AI adoption. Counter-narratives are plentiful and expand on the motifs of surveillance, erosion of trust, bias, job impacts, exploitation of labor, high-risk uses, the concentration of power, and environmental impacts, among others.
These narrative frames, combined with the metaphorical language and imagery used to describe AI, contribute to the confusion and lack of public knowledge about the technology. By positioning AI as a transformative, inevitable, and necessary tool for national success, these narratives can shape public opinion and policy decisions, often in ways that prioritize rapid adoption and commercialization…(More)”.
Information Ecosystems and Troubled Democracy
Report by the Observatory on Information and Democracy: “This inaugural meta-analysis provides a critical assessment of the role of information ecosystems in the Global North and Global Majority World, focusing on their relationship with information integrity (the quality of public discourse), the fairness of political processes, the protection of media freedoms, and the resilience of public institutions.
The report addresses three thematic areas with a cross-cutting theme of mis- and disinformation:
- Media, Politics and Trust;
- Artificial Intelligence, Information Ecosystems and Democracy;
- Data Governance and Democracy.
The analysis is based mainly on academic publications, supplemented by reports and other materials from different disciplines and regions (1,664 citations selected from an aggregated corpus of more than 2,700 resources). The report showcases what we can learn from landmark research on the often intractable challenges posed by rapid changes in information and communication spaces…(More)”.
What’s a Fact, Anyway?
Essay by Fergus McIntosh: “…For journalists, as for anyone, there are certain shortcuts to trustworthiness, including reputation, expertise, and transparency—the sharing of sources, for example, or the prompt correction of errors. Some of these shortcuts are more perilous than others. Various outfits, positioning themselves as neutral guides to the marketplace of ideas, now tout evaluations of news organizations’ trustworthiness, but relying on these requires trusting in the quality and objectivity of the evaluation. Official data is often taken at face value, but numbers can conceal motives: think of the dispute over how to count casualties in recent conflicts. Governments, meanwhile, may use their powers over information to suppress unfavorable narratives: laws originally aimed at misinformation, many enacted during the COVID-19 pandemic, can hinder free expression. The spectre of this phenomenon is fuelling a growing backlash in America and elsewhere.
Although some categories of information may come to be considered inherently trustworthy, these, too, are in flux. For decades, the technical difficulty of editing photographs and videos allowed them to be treated, by most people, as essentially incontrovertible. With the advent of A.I.-based editing software, footage and imagery have swiftly become much harder to credit. Similar tools are already used to spoof voices based on only seconds of recorded audio. For anyone, this might manifest in scams (your grandmother calls, but it’s not Grandma on the other end), but for a journalist it also puts source calls into question. Technologies of deception tend to be accompanied by ones of detection or verification—a battery of companies, for example, already promise that they can spot A.I.-manipulated imagery—but they’re often locked in an arms race, and they never achieve total accuracy. Though chatbots and A.I.-enabled search engines promise to help us with research (when a colleague “interviewed” ChatGPT, it told him, “I aim to provide information that is as neutral and unbiased as possible”), their inability to provide sourcing and their tendency to hallucinate look more like a shortcut to nowhere, at least for now. The resulting problems extend far beyond media: election campaigns, in which subtle impressions can lead to big differences in voting behavior, feel increasingly vulnerable to deepfakes and other manipulations by inscrutable algorithms. Like everyone else, journalists have only just begun to grapple with the implications.
In such circumstances, it becomes difficult to know what is true, and, consequently, to make decisions. Good journalism offers a way through, but only if readers are willing to follow: trust and naïveté can feel uncomfortably close. Gaining and holding that trust is hard. But failure—the end point of the story of generational decay, of gold exchanged for dross—is not inevitable. Fact checking of the sort practiced at The New Yorker is highly specific and resource-intensive, and it’s only one potential solution. But any solution must acknowledge the messiness of truth, the requirements of attention, the way we squint to see more clearly. It must tell you to say what you mean, and know that you mean it…(More)”.