Superbloom: How Technologies of Connection Tear Us Apart


Book by Nicholas Carr: “From the telegraph and telephone in the 1800s to the internet and social media in our own day, the public has welcomed new communication systems. Whenever people gain more power to share information, the assumption goes, society prospers. Superbloom tells a startlingly different story. As communication becomes more mechanized and efficient, it breeds confusion more than understanding, strife more than harmony. Media technologies all too often bring out the worst in us.

A celebrated writer on the human consequences of technology, Nicholas Carr reorients the conversation around modern communication, challenging some of our most cherished beliefs about self-expression, free speech, and media democratization. He reveals how messaging apps strip nuance from conversation, how “digital crowding” erodes empathy and triggers aggression, how online political debates narrow our minds and distort our perceptions, and how advances in AI are further blurring the already hazy line between fantasy and reality.

Even as Carr shows how tech companies and their tools of connection have failed us, he forces us to confront inconvenient truths about our own nature. The human psyche, it turns out, is profoundly ill-suited to the “superbloom” of information that technology has unleashed.

With rich psychological insights and vivid examples drawn from history and science, Superbloom provides both a panoramic view of how media shapes society and an intimate examination of the fate of the self in a time of radical dislocation. It may be too late to change the system, Carr counsels, but it’s not too late to change ourselves…(More)”.

Smart cities: the data to decisions process


Paper by Eve Tsybina et al.: “Smart cities improve citizen services by converting data into data-driven decisions. This conversion is not coincidental and depends on the underlying movement of information through four layers: devices, data communication and handling, operations, and planning and economics. Here we examine how this flow of information enables smartness in five major infrastructure sectors: transportation, energy, health, governance and municipal utilities. We show how success or failure within and between layers results in disparities in city smartness across different regions and sectors. Regions such as Europe and Asia exhibit higher levels of smartness compared to Africa and the USA. Furthermore, within one region, such as the USA or the Middle East, smarter cities manage the flow of information more efficiently. Sectors such as transportation and municipal utilities, characterized by extensive data, strong analytics and efficient information flow, tend to be smarter than healthcare and energy. The flow of information, however, generates risks associated with data collection and artificial intelligence deployment at each layer. We underscore the importance of seamless data transformation in achieving cost-effective and sustainable urban improvements and identify both supportive and impeding factors in the journey towards smarter cities…(More)”.

Towards Best Practices for Open Datasets for LLM Training


Paper by Stefan Baack et al.: “Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in jurisdictions such as the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend in limiting data information causes harm by hindering transparency, accountability, and innovation in the broader ecosystem by denying researchers, auditors, and impacted individuals access to the information needed to understand AI models.
While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness…(More)”.

Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models


Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.

This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from arXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.

State of Digital Local Government


Report by the Local Government Association (UK): “This report is themed around four inter-related areas of the state of local government digital: market concentration, service delivery, technology, and delivery capabilities. It is particularly challenging to assess the current state of digital transformation in local government, given the diversity of experience and resources, and the lack of consistent data collection on digital transformation and technology estates.

This report is informed through our regular and extensive engagement with local government, primary research carried out by the LGA, and the research of stakeholders. It is worth noting that research on market concentration is challenging as it is a highly sensitive area.

Key messages:

  1. Local government is a vital part of the public sector innovation ecosystem. It needs its priorities and context to be understood within cross-public-sector digital transformation ambitions, through representation on public sector strategic boards, and subsequently integrated into the design of public sector guidance and cross-government products at the earliest point. This will reduce the likelihood of duplication at public expense. Local government must also have access to training equivalent to that of civil servants…(More)”.

Good government data requires good statistics officials – but how motivated and competent are they?


World Bank Blog: “Government data is only as reliable as the statistics officials who produce it. Yet, surprisingly little is known about these officials themselves. For decades, they have diligently collected data on others – such as households and firms – to generate official statistics, from poverty rates to inflation figures. Yet, data about statistics officials themselves is missing. How competent are they at analyzing statistical data? How motivated are they to excel in their roles? Do they uphold integrity when producing official statistics, even in the face of opposing career incentives or political pressures? And what can National Statistical Offices (NSOs) do to cultivate a workforce that is competent, motivated, and ethical?

We surveyed 13,300 statistics officials in 14 countries in Latin America and the Caribbean to find out. Five results stand out. For further insights, consult our Inter-American Development Bank (IDB) report, Making National Statistical Offices Work Better.

1. The competence and management of statistics officials shape the quality of statistical data

Our survey included a short exam assessing basic statistical competencies, such as descriptive statistics and probability. Statistical competence correlates with data quality: NSOs with higher exam scores among employees tend to achieve better results in the World Bank’s Statistical Performance Indicators (r = 0.36).

NSOs with better management practices also have better statistical performance. For instance, NSOs with more robust recruitment and selection processes have better statistical performance (r = 0.62)…(More)”.
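The r values quoted above are Pearson correlation coefficients. As a minimal illustrative sketch of how such a coefficient is computed – using made-up per-NSO numbers, not the IDB survey data – each NSO's average staff exam score is paired with its Statistical Performance Indicators score:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical values for seven NSOs: average staff exam score (0-100)
# and World Bank Statistical Performance Indicators score (0-100).
exam_scores = [55, 62, 70, 48, 66, 73, 59]
spi_scores = [60, 58, 75, 52, 64, 80, 61]

print(round(pearson_r(exam_scores, spi_scores), 2))
```

A value near +1 means NSOs whose staff score higher on the exam also tend to score higher on the performance indicators; the blog's reported r = 0.36 and r = 0.62 indicate moderate and strong positive associations, respectively.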

How and When to Involve Crowds in Scientific Research


Book by Marion K. Poetz and Henry Sauermann: “This book explores how millions of people can significantly contribute to scientific research with their effort and experience, even if they are not working at scientific institutions and may not have formal scientific training. 

Drawing on a strong foundation of scholarship on crowd involvement, this book helps researchers recognize and understand the benefits and challenges of crowd involvement across key stages of the scientific process. Designed as a practical toolkit, it enables scientists to critically assess the potential of crowd participation, determine when it can be most effective, and implement it to achieve meaningful scientific and societal outcomes.

The book also discusses how recent developments in artificial intelligence (AI) shape the role of crowds in scientific research and can enhance the effectiveness of crowd science projects…(More)”.

Nearly all Americans use AI, though most dislike it, poll shows


Axios: “The vast majority of Americans use products that involve AI, but their views of the technology remain overwhelmingly negative, according to a Gallup-Telescope survey published Wednesday.

Why it matters: The rapid advancement of generative AI threatens to have far-reaching consequences for Americans’ everyday lives, including reshaping the job market, impacting elections, and affecting the health care industry.

The big picture: An estimated 99% of Americans used at least one AI-enabled product in the past week, but nearly two-thirds didn’t realize they were doing so, according to the poll’s findings.

  • These products included navigation apps, personal virtual assistants, weather forecasting apps, streaming services, shopping websites and social media platforms.
  • Ellyn Maese, a senior research consultant at Gallup, told Axios that the disconnect is because there is “a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent.”

Zoom in: Despite its prevalent use, Americans’ views of AI remain overwhelmingly bleak, the survey found.

  • 72% of those surveyed had a “somewhat” or “very” negative opinion of how AI would impact the spread of false information, while 64% said the same about how it affects social connections.
  • The only area where a majority of Americans (61%) had a positive view of AI’s impact was regarding how it might help medical diagnosis and treatment…

State of play: The survey found that 68% of Americans believe the government and businesses equally bear responsibility for addressing the spread of false information related to AI.

  • 63% said the same about personal data privacy violations.
  • Majorities of those surveyed felt the same about combatting the unauthorized use of individuals’ likenesses (62%) and AI’s impact on job losses (52%).
  • In fact, the only area where Americans felt differently was when it came to national security threats; 62% of those surveyed said the government bore primary responsibility for reducing such threats…(More)”.

Governing artificial intelligence means governing data: (re)setting the agenda for data justice


Paper by Linnet Taylor, Siddharth Peter de Souza, Aaron Martin, and Joan López Solano: “The field of data justice has been evolving to take into account the role of data in powering the field of artificial intelligence (AI). In this paper we review the main conceptual bases for governing data and AI: the market-based approach, the personal–non-personal data distinction and strategic sovereignty. We then analyse how these are being operationalised into practical models for governance, including public data trusts, data cooperatives, personal data sovereignty, data collaboratives, data commons approaches and indigenous data sovereignty. We interrogate these models’ potential for just governance based on four benchmarks which we propose as a reformulation of the Data Justice governance agenda identified by Taylor in her 2017 framework. Re-situating data justice at the intersection of data and AI, these benchmarks focus on preserving and strengthening public infrastructures and public goods; inclusiveness; contestability and accountability; and global responsibility. We demonstrate how they can be used to test whether a governance approach will succeed in redistributing power, engaging with public concerns and creating a plural politics of AI…(More)”.

Local Government and Citizen Co-production Through Neighborhood Associations


Chapter by Kohei Suzuki: “This chapter explores co-production practices of local governments, focusing on the role of neighborhood associations (NHAs) in Japan. This chapter investigates the overall research question of this book – how to enhance government performance under resource constraints – by focusing on the concept of citizen co-production. Historically, NHAs have played a significant role in supplementing municipal service provision as co-producers for local governments. Despite the rich history of NHAs and their contributions to public service delivery at the municipal level, theoretical and empirical studies on NHAs and co-production practices remain limited. This chapter aims to address this research gap by exploring the following research questions: What are NHAs from the perspective of citizen co-production? What are the potential contributions of studying NHAs to the broader theory of co-production? What are the future research agendas? The chapter provides an overview of the origin and evolution of the co-production concept. It then examines the main characteristics and activities of NHAs and discusses their roles in supplementing local public service provision. Finally, the chapter proposes potential research agendas to advance studies on co-production using Japan as a case study…(More)”.