DAOs of Collective Intelligence? Unraveling the Complexity of Blockchain Governance in Decentralized Autonomous Organizations


Paper by Mark C. Ballandies, Dino Carpentras, and Evangelos Pournaras: “Decentralized autonomous organizations (DAOs) have transformed organizational structures by shifting from traditional hierarchical control to decentralized approaches, leveraging blockchain and cryptoeconomics. Despite managing significant funds and building global networks, DAOs face challenges like declining participation, increasing centralization, and an inability to adapt to changing environments, which stifle innovation. This paper explores DAOs as complex systems and applies complexity science to explain their inefficiencies. In particular, we discuss DAO challenges, their complex nature, and introduce the self-organization mechanisms of collective intelligence, digital democracy, and adaptation. By applying these mechanisms to improve DAO design and construction, a practical design framework for DAOs is created. This contribution lays a foundation for future research at the intersection of complexity science and DAOs…(More)”.

New AI standards group wants to make data scraping opt-in


Article by Kate Knibbs: “The first wave of major generative AI tools were trained largely on “publicly available” data—basically, anything and everything that could be scraped from the Internet. Now, sources of training data are increasingly restricting access and pushing for licensing agreements. With the hunt for additional data sources intensifying, new licensing startups have emerged to keep the source material flowing.

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. (At least five new members will be announced in the fall.)

The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever…(More)”.
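To make the opt-in/opt-out distinction concrete, here is a minimal sketch of how a training-data pipeline could enforce each regime. The record structure and the consent values are hypothetical illustrations, not part of any DPA specification.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A hypothetical training-data record carrying a consent flag."""
    uri: str
    content: str
    consent: str  # "opt_in", "opt_out", or "unknown"

def opt_out_filter(records):
    # Opt-out regime: usable unless the rights holder has objected.
    return [r for r in records if r.consent != "opt_out"]

def opt_in_filter(records):
    # Opt-in regime (the DPA's position): usable only with explicit consent.
    return [r for r in records if r.consent == "opt_in"]

corpus = [
    Record("photo/123", "...", "opt_in"),
    Record("song/456", "...", "unknown"),
    Record("article/789", "...", "opt_out"),
]

print(len(opt_out_filter(corpus)))  # 2: silence counts as permission
print(len(opt_in_filter(corpus)))   # 1: silence excludes the record
```

The practical difference shows up in the "unknown" record: under opt-out it flows into training, under opt-in it falls away, which is exactly what shifts the burden from creators onto AI developers.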

Building LLMs for the social sector: Emerging pain points


Blog by Edmund Korley: “…One of the sprint’s main tracks focused on using LLMs to enhance the impact and scale of chat services in the social sector.

Six organizations participated, with operations spanning Africa and India. Bandhu empowers India’s blue-collar workers and migrants by connecting them to jobs and affordable housing, helping them take control of their livelihoods and future stability. Digital Green enhances rural farmers’ agency with AI-driven insights to improve agricultural productivity and livelihoods. Jacaranda Health provides mothers in sub-Saharan Africa with essential information and support to improve maternal and newborn health outcomes. Kabakoo equips youth in Francophone Africa with digital skills, fostering self-reliance and economic independence. Noora Health teaches Indian patients and caregivers critical health skills, enhancing their ability to manage care. Udhyam provides micro-entrepreneurs with education, mentorship, and financial support to build sustainable businesses.

These organizations demonstrate diverse ways one can boost human agency: they help people in underserved communities take control of their lives, make more informed choices, and build better futures – and they are piloting AI interventions to scale these efforts…(More)”.

Using internet search data as part of medical research


Blog by Susan Thomas and Matthew Thompson: “…In the UK, almost 50 million health-related searches are made using Google per year. Globally there are hundreds of millions of health-related searches every day. And, of course, people are doing these searches in real time, looking for answers to their concerns in the moment. It’s also possible that, even if people aren’t noticing and searching about changes to their health, their behaviour is changing. Maybe they are searching more at night because they are having difficulty sleeping, or maybe they are spending more (or less) time online. Maybe an individual’s search history could actually be really useful for researchers. This realisation has led medical researchers to start to explore whether individuals’ online search activity could help provide those subtle, almost unnoticeable signals that point to the beginning of a serious illness.

Our recent review found that 23 studies have been published so far that have done exactly this. These studies suggest that online search activity among people later diagnosed with a variety of conditions, ranging from pancreatic cancer and stroke to mood disorders, differed from that of people who did not have one of these conditions.

One of these studies was published by researchers at Imperial College London, who used online search activity to identify signals of gynaecological malignancies in women. They found that women with malignant (e.g. ovarian cancer) and benign conditions had different search patterns up to two months prior to a GP referral.

Pause for a moment, and think about what this could mean. Ovarian cancer is one of the most devastating cancers women get. It’s desperately hard to detect early – and yet there are signals of this cancer visible in women’s internet searches months before diagnosis?…(More)”.
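For a rough sense of how such signals are extracted, here is a toy sketch that reduces one person's search history to two simple features: the share of symptom-related queries and the share of night-time searches. The term list, the features, and the data are invented for illustration; the published studies rely on far more careful statistical designs and consented data.

```python
from datetime import datetime

# Hypothetical, consented search-log entries: (timestamp, query)
logs = [
    (datetime(2024, 3, 1, 2, 15), "bloating that won't go away"),
    (datetime(2024, 3, 3, 23, 40), "loss of appetite causes"),
    (datetime(2024, 3, 5, 14, 5), "cheap flights to rome"),
]

SYMPTOM_TERMS = {"bloating", "appetite", "pain", "fatigue"}  # illustrative only

def search_features(logs):
    """Summarise one search history into two candidate signals."""
    n = len(logs)
    symptom = sum(any(t in q for t in SYMPTOM_TERMS) for _, q in logs)
    night = sum(ts.hour >= 22 or ts.hour < 6 for ts, _ in logs)
    return {"symptom_share": symptom / n, "night_share": night / n}

print(search_features(logs))  # both shares are 2/3 for this toy history
```

In a real study, features like these would be computed both for people later diagnosed with a condition and for matched controls, and the two distributions compared to see whether a pre-diagnosis signal exists.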

Nexus: A Brief History of Information Networks From the Stone Age to AI 


Book by Yuval Noah Harari: “For the last 100,000 years, we Sapiens have accumulated enormous power. But despite all our discoveries, inventions, and conquests, we now find ourselves in an existential crisis. The world is on the verge of ecological collapse. Misinformation abounds. And we are rushing headlong into the age of AI—a new information network that threatens to annihilate us. For all that we have accomplished, why are we so self-destructive?

Nexus looks through the long lens of human history to consider how the flow of information has shaped us, and our world. Taking us from the Stone Age, through the canonization of the Bible, early modern witch-hunts, Stalinism, Nazism, and the resurgence of populism today, Yuval Noah Harari asks us to consider the complex relationship between information and truth, bureaucracy and mythology, wisdom and power. He explores how different societies and political systems throughout history have wielded information to achieve their goals, for good and ill. And he addresses the urgent choices we face as non-human intelligence threatens our very existence.
 
Information is not the raw material of truth; neither is it a mere weapon. Nexus explores the hopeful middle ground between these extremes, and in doing so, rediscovers our shared humanity…(More)”.

Advocating an International Decade for Data under G20 Sponsorship


G20 Policy Brief by Lorrayne Porciuncula, David Passarelli, Muznah Siddiqui, and Stefaan Verhulst: “This brief draws attention to the important role of data in social and economic development. It advocates the establishment of an International Decade for Data (IDD) from 2025 to 2035 under G20 sponsorship. The IDD can be used to bridge existing data governance initiatives and deliver global ambitions to use data for social impact, innovation, economic growth, research, and social development. Despite the critical importance of data governance to achieving the SDGs and to emerging topics such as artificial intelligence, there is no unified space that brings together stakeholders to coordinate and shape the data dimension of digital societies.

While various data governance processes exist, they often operate in silos, without effective coordination and interoperability. This fragmented landscape inhibits progress toward a more inclusive and sustainable digital future. The envisaged IDD would foster an integrated approach to data governance that supports all stakeholders in navigating complex data landscapes. Central to this proposal are new institutional frameworks (e.g. data collaboratives), mechanisms (e.g. digital social licenses and sandboxes), and professional domains (e.g. data stewards) that can respond to the multifaceted issue of data governance and the multiplicity of actors involved.

The G20 can capitalize on the Global Digital Compact’s momentum and create a task force to position itself as a data champion through the launch of the IDD, enabling collective progress and steering global efforts towards a more informed and responsible data-centric society…(More)”.

Frontier AI: double-edged sword for public sector


Article by Zeynep Engin: “The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.

From a public policy perspective, this capacity offers the optimistic potential to enable personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice, tailoring them to everyone’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly a vast opportunity for improvement, given the current state of global inequity, environmental degradation, polarised societies, and other chronic challenges facing humanity.

However, it is crucial to temper this optimism by recognising the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.

Steering the direction of AI to be in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications…(More)”.

Policies must be justified by their wellbeing-to-cost ratio


Article by Richard Layard: “…What is its value for money — that is, how much wellbeing does it deliver per (net) pound it costs the government? This benefit/cost ratio (or BCR) should be central to every discussion.

The science exists to produce these numbers and, if the British government were to require them of the spending departments, it would be setting an example of rational government to the whole world.

Such a move would, of course, lead to major changes in priorities. At the London School of Economics we have been calculating the benefits and costs of policies across a whole range of government departments.

In our latest report on value for money, the best policies are those that save the government more money than they cost — for example by getting people back to work. Classic examples of this are treatments for mental health. The NHS Talking Therapies programme now treats 750,000 people a year for anxiety disorders and depression. Half of them recover and the service demonstrably pays for itself. It needs to expand.

But we also need a parallel service for those addicted to alcohol, drugs and gambling. These individuals are more difficult to treat — but the savings if they recover are greater. Again, it will pay for itself. And so will the improved therapy service for children and young people that Labour has promised.

However, most spending policies do cost more than they save. For these it is crucial to measure the benefit/cost ratio, converting the wellbeing benefit into its monetary equivalent. For example, we can evaluate the wellbeing gain to a community of having more police and subsequently less crime. Once this is converted into money, we calculate that the benefit/cost ratio is 12:1 — very high…(More)”.
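The arithmetic behind such a ratio is straightforward; here is an illustrative sketch, with invented inputs deliberately chosen to land on a 12:1 result like the policing example. The per-WELLBY valuation and every other figure below are assumptions, not numbers taken from the LSE report.

```python
def benefit_cost_ratio(wellbeing_gain_wellbys, value_per_wellby,
                       gross_cost, fiscal_savings):
    """Monetised wellbeing benefit divided by net cost to government."""
    benefit = wellbeing_gain_wellbys * value_per_wellby
    net_cost = gross_cost - fiscal_savings
    if net_cost <= 0:
        return float("inf")  # the policy pays for itself outright
    return benefit / net_cost

# Invented example: 1,200 WELLBYs (wellbeing-adjusted life years) gained,
# valued at 13,000 GBP each, from a programme costing 2m GBP gross with
# 0.7m GBP of offsetting fiscal savings: 15.6m / 1.3m = 12.
print(benefit_cost_ratio(1_200, 13_000, 2_000_000, 700_000))  # 12.0
```

Policies like Talking Therapies, where savings exceed gross cost, hit the `net_cost <= 0` branch: their ratio is effectively unbounded, which is what the quote means by a service that "pays for itself".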

Data sovereignty for local governments. Considerations and enablers


Report by JRC: “Data sovereignty for local governments refers to a capacity to control and/or access data, and to foster a digital transformation aligned with societal values and EU Commission political priorities. Data sovereignty clauses are an instrument that local governments may use to compel companies to share data of public interest. Albeit promising, little is known about the peculiarities of this instrument and how it has been implemented so far. This policy brief aims to fill that gap by systematising existing knowledge and providing policy-relevant recommendations for its wider implementation…(More)”.

AI has a democracy problem. Citizens’ assemblies can help.


Article by Jack Stilgoe: “…With AI, beneath all the hype, some companies know that they have a democracy problem. OpenAI admitted as much when they funded a program of pilot projects for what they called “Democratic Inputs to AI.” There have been some interesting efforts to involve the public in rethinking cutting-edge AI. A collaboration between Anthropic, one of OpenAI’s competitors, and the Collective Intelligence Project asked 1,000 Americans to help shape what they called “Collective Constitutional AI.” They were asked to vote on statements such as “the AI should not be toxic” and “AI should be interesting,” and they were given the option of adding their own statements (one of the stranger statements reads “AI should not spread Marxist communistic ideology”). Anthropic used these inputs to tweak its “Claude” large language model; when tested against standard AI benchmarks, the adjustments seemed to mitigate the model’s biases.

In using the word “constitutional,” Anthropic admits that, in making AI systems, they are doing politics by other means. We should welcome the attempt to open up. But, ultimately, these companies are interested in questions of design, not regulation. They would like there to be a societal consensus, a set of human values to which they can “align” their systems. Politics is rarely that neat…(More)”.
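For a feel of the mechanics, here is a toy sketch of how crowd-voted statements could be ranked and filtered before being folded into a model's constitution. The ballots and the inclusion threshold are hypothetical; Anthropic's actual process was considerably more sophisticated.

```python
# Hypothetical ballots: statement -> votes (+1 agree, -1 disagree)
votes = {
    "The AI should not be toxic": [1, 1, 1, 1, -1],
    "AI should be interesting": [1, 1, -1, 1, -1],
    "AI should not spread Marxist communistic ideology": [1, -1, -1, -1, -1],
}

def approval(ballots):
    """Share of voters agreeing with a statement."""
    return sum(b == 1 for b in ballots) / len(ballots)

THRESHOLD = 0.6  # illustrative cut-off for inclusion in the constitution

constitution = [s for s in votes if approval(votes[s]) >= THRESHOLD]

for s in sorted(votes, key=lambda s: approval(votes[s]), reverse=True):
    flag = "included" if s in constitution else "excluded"
    print(f"{approval(votes[s]):.0%}  {flag}  {s}")
```

Even this toy version surfaces the political question the article raises: someone still has to choose the threshold and decide what happens to contested statements that sit near it.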