Conversational Swarms of Humans and AI Agents enable Hybrid Collaborative Decision-making


Paper by Louis Rosenberg et al: “Conversational Swarm Intelligence (CSI) is an AI-powered communication and collaboration technology that allows large, networked groups (of potentially unlimited size) to hold thoughtful conversational deliberations in real-time. Inspired by the efficient decision-making dynamics of fish schools, CSI divides a human population into a set of small subgroups connected by AI agents. This enables the full group to hold a unified conversation. In this study, groups of 25 participants were tasked with selecting a roster of players in a real Fantasy Baseball contest. A total of 10 trials were run using CSI. In half the trials, each subgroup was augmented with a fact-providing AI agent referred to herein as an Infobot. The Infobot was loaded with a wide range of MLB statistics. The human participants could query the Infobot the same way they would query other persons in their subgroup. Results show that when using CSI, the 25-person groups outperformed 72% of individually surveyed participants and showed significant intelligence amplification versus the mean score (p=0.016). The CSI-enabled groups also significantly outperformed the most popular picks across the collected surveys for each daily contest (p<0.001). The CSI sessions that used Infobots scored slightly higher than those that did not, but the difference was not statistically significant in this study. That said, 85% of participants agreed with the statement ‘Our decisions were stronger because of information provided by the Infobot’ and only 4% disagreed. In addition, deliberations that used Infobots showed significantly less variance (p=0.039) in conversational content across members. This suggests that Infobots promoted more balanced discussions in which fewer members dominated the dialog. This may be because the Infobot enabled participants to confidently express opinions with the support of factual data…(More)”.
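The subgrouping at the heart of CSI can be illustrated with a minimal sketch. The paper excerpt does not specify how the platform assigns participants to subgroups, so the random partition below (25 people into rooms of 5) is purely illustrative, not the actual CSI algorithm:

```python
import random

def form_subgroups(participants, size=5, seed=0):
    """Partition participants into subgroups of `size`, one per CSI 'room'.

    Illustrative only: the real CSI platform's grouping logic (and how its
    AI agents relay content between rooms) is not described in the excerpt.
    """
    pool = list(participants)
    random.Random(seed).shuffle(pool)  # seeded for reproducibility
    return [pool[i:i + size] for i in range(0, len(pool), size)]

groups = form_subgroups(range(25))
print(len(groups))  # → 5 subgroups of 5
```

In the study's setup, each such room would additionally contain an Infobot answering factual queries, plus (implicitly) a conduit agent linking it to the other rooms.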

Effective Data Stewardship in Higher Education: Skills, Competences, and the Emerging Role of Open Data Stewards


Paper by Panos Fitsilis et al: “The significance of open data in higher education stems from the changing tendencies towards open science, and open research in higher education encourages new ways of making scientific inquiry more transparent, collaborative and accessible. This study focuses on the critical role of open data stewards in this transition, essential for managing and disseminating research data effectively in universities, while it also highlights the increasing demand for structured training and professional policies for data stewards in academic settings. Building upon this context, the paper investigates the essential skills and competences required for effective data stewardship in higher education institutions through a critical literature review, coupled with practical engagement in open data stewardship at universities, which provided insights into the roles and responsibilities of data stewards. In response to these identified needs, the paper proposes a structured training framework and comprehensive curriculum for data stewardship, a direct response to the gaps identified in the literature. It addresses five key competence categories for open data stewards, aligning them with current trends and essential skills and knowledge in the field. By advocating for a structured approach to data stewardship education, this work sets the foundation for improved data management in universities and serves as a critical step towards professionalizing the role of data stewards in higher education. The emphasis on the role of open data stewards is expected to advance data accessibility and sharing practices, fostering increased transparency, collaboration, and innovation in academic research. This approach contributes to the evolution of universities into open ecosystems, where there is free flow of data for global education and research advancement…(More)”.

Enabling Digital Innovation in Government


OECD Report: “…presents the OECD’s definition of GovTech (Chapter 2) and sets out the GovTech Policy Framework (Chapter 3). The framework is designed to guide governments on how to establish the conditions for successful, sustainable, and effective GovTech.

The framework consists of two parts: the GovTech Building Blocks and the GovTech Enablers. The building blocks (Chapter 3) represent the foundations at the micro-level needed to establish impactful GovTech practices within public sectors by introducing more agile practices, mitigating risks, and building meaningful collaboration with the GovTech ecosystem. These building blocks include:

  • Mature digital government infrastructure: including the necessary technology, infrastructure, tools, and data governance to enable both GovTech collaborations and the digital solutions they develop.
  • Capacities for collaboration and experimentation: within the public sector, including the digital skills and multidisciplinary teams; agile processes, tools, and methodologies; and a culture that encourages experimentation and accepts failure. 
  • Resources and implementation support: considering how to make funding available, how to evolve procurement approaches, and how to scale successful pilots across organisations and internationally.
  • Availability and maturity of GovTech partners: including acceleration programmes to support start-up growth by facilitating access to capital, scaling up solutions, and minimising barriers to accessing procurement opportunities.

At the macro-level, the enablers (Chapter 4) instead create an environment that fosters the development of GovTech and facilitates good practices. This is done at the:

  • Strategic layer: where governments could use GovTech strategies and champions in senior leadership positions to mobilise support and set a clear direction for GovTech.
  • Institutional layer: where governments could seek collaboration and knowledge-sharing across institutions at the national, regional, or policy levels.
  • Network layer: where both governments and GovTech actors should seek to mobilise the network collectively to strengthen the GovTech practice and garner broader support from communities…(More)”

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers


Article by Scharon Harding: “A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google’s AI Overview.

In May, Google launched AI Overviews in the US, an experimental feature that populates the top of Google Search results with a summarized answer based on an AI model built into Google’s web rankings. When Google first debuted AI Overviews, it quickly became apparent that the feature needed work on its accuracy and its ability to properly summarize information from online sources. AI Overviews are “built to only show information that is backed up by top web results,” Liz Reid, VP and head of Google Search, wrote in a May blog post. But as my colleague Benj Edwards pointed out at the time, that setup could contribute to inaccurate, misleading, or even dangerous results: “The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage.”

As Edwards alluded to, many have complained about Google Search results’ quality declining in recent years, as SEO spam and, more recently, AI slop float to the top of searches. As a result, people often turn to the Reddit hack to make Google results more helpful. By adding “site:reddit.com” to their search queries, users can hone results to more easily find answers from real people. Google seems to understand the value of Reddit and signed an AI training deal with the company that’s reportedly worth $60 million per year…(More)”.
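The “Reddit hack” described above is just a scoped query using Google’s standard `site:` operator. A minimal sketch of how such a query URL is built (the search terms are made up; the URL format is the standard Google search endpoint):

```python
from urllib.parse import urlencode

def reddit_scoped_query(terms: str) -> str:
    """Build a Google search URL restricted to reddit.com via the
    'site:' operator, as in the hack described above."""
    query = f"{terms} site:reddit.com"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(reddit_scoped_query("best pizza london"))
# → https://www.google.com/search?q=best+pizza+london+site%3Areddit.com
```

`urlencode` percent-encodes the colon in `site:reddit.com`, which is why the operator survives the round trip intact.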

Rediscovering the Pleasures of Pluralism: The Potential of Digitally Mediated Civic Participation


Essay by Lily L. Tsai and Alex Pentland: “Human society developed when most collective decision-making was limited to small, geographically concentrated groups such as tribes or extended family groups. Discussions about community issues could take place among small numbers of people with similar concerns. As coordination across larger distances evolved, the costs of travel required representatives from each clan or smaller group to participate in deliberations and decision-making involving multiple local communities. Divergence in the interests of representatives and their constituents opened up opportunities for corruption and elite capture.

Technologies now enable very large numbers of people to communicate, coordinate, and make collective decisions on the same platform. We have new opportunities for digitally enabled civic participation and direct democracy that scale for both the smallest and largest groups of people. Quantitative experiments, sometimes including tens of millions of individuals, have examined inclusiveness and efficiency in decision-making via digital networks. Their findings suggest that large networks of nonexperts can make practical, productive decisions and engage in collective action under certain conditions. These conditions include shared knowledge among individuals and communities with similar concerns, and information about their recent actions and outcomes…(More)”

Exploring the Intersections of Open Data and Generative AI: Recent Additions to the Observatory


Blog by Roshni Singh, Hannah Chafetz, Andrew Zahuranec, Stefaan Verhulst: “The Open Data Policy Lab’s Observatory of Examples of How Open Data and Generative AI Intersect provides real-world use cases of where open data from official sources intersects with generative artificial intelligence (AI), building from the learnings from our report, “A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI.” 

The Observatory includes over 80 examples from several domains and geographies – ranging from supporting administrative work within the legal department of the Government of France to assisting researchers across the African continent in navigating cross-border data sharing laws. The examples include generative AI chatbots to improve access to services, conversational tools to help analyze data, datasets to improve the quality of the AI output, and more. A key feature of the Observatory is its categorization across our Spectrum of Scenarios framework, shown below. Through this effort, we aim to bring together the work already being done and identify ways to use generative AI for the public good.

[Image: Spectrum of Scenarios framework]

This Observatory is an attempt to grapple with the work currently being done to apply generative AI in conjunction with official open data. It does not make a value judgment on their efficacy or practices. Many of these examples have ethical implications, which merit further attention and study. 

From September through October, we added to the Observatory:

  • Bayaan Platform: A conversational tool by the Statistics Centre Abu Dhabi that provides decision makers with data analytics and visualization support.
  • Berufsinfomat: A generative AI tool for career coaching in Austria.
  • ChatTCU: A chatbot for Brazil’s Federal Court of Accounts.
  • City of Helsinki’s AI Register: An initiative aimed at leveraging open city data to enhance civic services and facilitate better engagement with residents.
  • Climate Q&A: A generative AI chatbot that provides information about climate change based on scientific reports.
  • DataLaw.Bot: A generative AI tool that disseminates data sharing regulations with researchers across several African countries…(More)”.

South Korea leverages open government data for AI development


Article by Si Ying Thian: “In South Korea, open government data is powering artificial intelligence (AI) innovations in the private sector.

Take the case of TTCare, which may be the world’s first mobile application to analyse eye and skin disease symptoms in pets.

AI Hub allows users to search by industry, data format and year (top row), with the data sets made available based on the particular search term “pet” (bottom half of the page). Image: AI Hub, courtesy of Baek

The AI model was trained on about one million pieces of data – half of the data coming from the government-led AI Hub and the rest collected by the firm itself, according to the Korean newspaper Donga.

AI Hub is an integrated platform set up by the government to support the country’s AI infrastructure.

TTCare’s CEO Heo underlined the importance of government-led AI training data in improving the model’s ability to diagnose symptoms. The firm’s training data is currently accessible through AI Hub, and any Korean citizen can download or use it.

Pushing the boundaries of open data

Over the years, South Korea has consistently topped global rankings on the Open, Useful, and Re-usable data (OURdata) Index.

The government has been pushing the boundaries of what it can do with open data – beyond just making data usable by providing APIs. Application Programming Interfaces, or APIs, make it easier for users to tap on open government data to power their apps and services.
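As a sketch of what tapping on such an API typically looks like in practice: the payload shape and field names below are hypothetical, not the actual schema of AI Hub or Korea’s open data portal. The point is simply that once government data is served as a JSON API, turning it into app-ready values is a few lines of code:

```python
import json

# Hypothetical response shaped like a typical open-data API payload;
# field names are illustrative, not a real data.go.kr or AI Hub schema.
sample_response = json.dumps({
    "currentCount": 2,
    "data": [
        {"station": "Seoul", "pm10": 41},
        {"station": "Busan", "pm10": 28},
    ],
})

def extract_readings(raw: str) -> list[tuple[str, int]]:
    """Parse a JSON API payload into (station, value) pairs an app can use."""
    payload = json.loads(raw)
    return [(row["station"], row["pm10"]) for row in payload["data"]]

print(extract_readings(sample_response))  # → [('Seoul', 41), ('Busan', 28)]
```

In a real integration, `sample_response` would be the body of an HTTP request to the portal’s API endpoint, usually authenticated with a per-developer key.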

There is now rising interest from public sector agencies to tap on such data to train AI models, said South Korea’s National Information Society Agency (NIA)’s Principal Manager, Dongyub Baek, although this is still at an early stage.

Baek sits in NIA’s open data department, which handles policies, infrastructure such as the National Open Data Portal, as well as impact assessments of the government initiatives…(More)”

Unlocking data for climate action requires trusted marketplaces


Report by Digital Impact Alliance: “In 2024, the northern hemisphere recorded the hottest summer overall, the hottest day, and the hottest ever month of August. That same month – August 2024 – this warming fueled droughts in Italy and intensified typhoons that devastated parts of the Philippines, Taiwan, and China. The following month, new research calculated that warming is costing the global economy billions of dollars: an increase in extreme heat and severe drought costs about 0.2% of a country’s GDP. 

These are only the latest stories and statistics that illustrate the growing costs of climate change – data points that have emerged in the short time since we published our second Spotlight on unlocking climate data with open transaction networks.

This third paper in the series continues the work of the Joint Learning Network on Unlocking Data for Climate Action (Climate Data JLN). This multi-disciplinary network identified multiple promising models to explore in the context of unlocking data for climate action. This Spotlight paper examines the third of these models: data spaces. Through examination of data spaces in action, the paper analyzes the key elements that render them more or less applicable to specific climate-related data sets. Data spaces are relatively new and mostly conceptual, with only a handful of implementations in process and concentrated in a few geographic areas. While this model requires extensive up-front work to agree upon governance and technical standards, the result is an approach that overcomes trust and financing issues by maintaining data sovereignty and creating a marketplace for data exchange…(More)”.

Local Systems


Position Paper by USAID: “…describes the key approaches USAID will use to translate systems thinking into systems practice. It focuses on ways USAID can better understand and engage local systems to support them in producing more sustainable results. Systems thinking is a mindset and set of tools that we use to understand how systems behave and produce certain results or outcomes. Systems practice is the application of systems thinking to better understand challenges and strengthen the capacity of local systems to unlock locally led, sustained progress. The shift from systems thinking to systems practice is driven by a desire to integrate systems practice throughout the Program Cycle and increase our capacity to actively and adaptively manage programming in ways that recognize complexity and help make our programs more effective and sustainable.

These approaches will be utilized alongside and within the context of USAID’s policies and guidance, including technical guidance for specific sectors, as well as evidence and lessons learned from partners around the world. Systems thinking is a long-standing discipline that can serve as a powerful tool for understanding and working with local systems. It has been a consistent component of USAID’s decades-long commitment to locally led development and humanitarian assistance. USAID uses systems thinking to better understand the complex and interrelated challenges we confront – from climate change to migration to governance – and the perspectives of diverse stakeholders on these issues. When we understand challenges as complex systems – where outcomes emerge from the interactions and relationships between actors and elements in that system – we can leverage and help strengthen the local capacities and relationships that will ultimately drive sustainable progress…(More)”.

Trust in artificial intelligence makes Trump/Vance a transhumanist ticket


Article by Filip Bialy: “AI plays a central role in the 2024 US presidential election, as a tool for disinformation and as a key policy issue. But its significance extends beyond these roles, connecting to an emerging ideology known as TESCREAL, which envisages AI as a catalyst for unprecedented progress, including space colonisation. After this election, TESCREALism may well have more than one representative in the White House, writes Filip Bialy.

In June 2024, the essay Situational Awareness by former OpenAI employee Leopold Aschenbrenner sparked intense debate in the AI community. The author predicted that by 2027, AI would surpass human intelligence. Such claims are common among AI researchers. They often assert that only a small elite – mainly those working at companies like OpenAI – possesses inside knowledge of the technology. Many in this group hold a quasi-religious belief in the imminent arrival of artificial general intelligence (AGI) or artificial superintelligence (ASI)…

These hopes and fears, however, are not only religious-like but also ideological. A decade ago, Silicon Valley leaders were still associated with the so-called Californian ideology, a blend of hippie counterculture and entrepreneurial yuppie values. Today, figures like Elon Musk, Mark Zuckerberg, and Sam Altman are under the influence of a new ideological cocktail: TESCREAL. Coined in 2023 by Timnit Gebru and Émile P. Torres, TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

While these may sound like obscure terms, they represent ideas developed over decades, with roots in eugenics. Early 20th-century eugenicists such as Francis Galton promoted selective breeding to enhance future generations. Later, with advances in genetic engineering, the focus shifted from eugenics’ racist origins to its potential to eliminate genetic defects. TESCREAL represents a third wave of eugenics. It aims to digitise human consciousness and then propagate digital humans into the universe…(More)”