The Collaboration Playbook: A leader’s guide to cross-sector collaboration


Playbook by Ian Taylor and Nigel Ball: “The challenges facing our societies and economies today are so large and complex that, in many cases, cross-sector collaboration is not a choice, but an imperative. Yet collaboration remains elusive for many, often being put into the ‘too hard’ category. This playbook offers guidance on how we can seize collaboration opportunities successfully and rise to the challenges.

The recommendations in the playbook were informed by academic literature and practitioner experience. Rather than offer a procedural, step-by-step guide, this playbook offers provoking questions and frameworks that apply to different situations and objectives. While formal aspects such as contracts and procedures are well understood, the authors found that guidance was needed on the intangible elements, sometimes referred to as ‘positive chemistry’. Aspects like leadership, trust, culture, learning and power can be game-changers for productive cross-sector collaborations, but they are hard to get right.

Structured around these five key themes, the playbook presents 18 discrete ‘plays’ for effective collaboration. The plays allow readers to delve into specific areas of interest and gain a deeper understanding of what each means for their collaborative work.

The intention of the playbook is to provide a resource that informs and guides cross-sector leaders. It will be especially relevant for those working in, and partnering with, central and local government in an effort to improve social outcomes…(More)”.

Collective Intelligence: The Rise of Swarm Systems and their Impact on Society


Book edited by Uwe Seebacher and Christoph Legat: “Unlock the future of technology with this captivating exploration of swarm intelligence. Dive into the future of autonomous systems, enhanced by cutting-edge multi-agent systems and predictive research. Real-world examples illustrate how these algorithms drive intelligent, coordinated behavior in industries like manufacturing and energy. Discover the innovative Industrial-Disruption-Index (IDI), pioneered by Uwe Seebacher, which predicts industry disruptions using swarm intelligence. Case studies from media to digital imaging offer invaluable insights into the future of industrial life cycles.

Ideal for AI enthusiasts and professionals, this book provides inspiring, actionable insights for the future. It redefines artificial intelligence, showcasing how predictive intelligence can revolutionize group coordination for more efficient and sustainable systems. A crucial chapter highlights the shift from the Green Deal to the Emerald Deal, showing how swarm intelligence addresses societal challenges…(More)”.

Online consent: how much do we need to know?


Paper by Bartlomiej Chomanski & Lode Lauwaert: “When you visit a website and click a button that says, ‘I agree to these terms’—do you really agree? Many scholars who consider this question (Solove 2013; Barocas & Nissenbaum 2014; Hull 2015; Pascalev 2017; Yeung 2017; Becker 2019; Zuboff 2019; Andreotta et al. 2022; Wolmarans and Vorhoeve 2022) would tend to answer ‘no’—or, at the very least, they would deem your agreement normatively deficient. The reasoning behind that conclusion is in large part driven by the claim that when most people click ‘I agree’ when visiting online websites and platforms, they do not really know what they are agreeing to. Their lack of knowledge about the privacy policy and other terms of the online agreements thus makes their consent problematic in morally salient ways.

We argue that this prevailing view is wrong. Uninformed consent to online terms and conditions (what we will call, for short, ‘online consent’) is less ethically problematic than many scholars suppose. Indeed, we argue that uninformed online consent preceded by the legitimate exercise of the right not to know (RNTK, to be explained below) is prima facie valid and does not appear normatively deficient in other ways, despite being uninformed.

The paper proceeds as follows. In Sect. 2, we make more precise the concept of online consent and summarize the case against it, as presented in the literature. In Sect. 3 we explain the arguments for the RNTK in bioethics and show that analogous reasoning leads to endorsing the RNTK in online contexts. In Sect. 4, we demonstrate that the appeal to the RNTK helps defuse the critics’ arguments against online consent. Section 5 concludes: online consent is valid (with caveats, to be explored in what follows)…(More)”

Civic Engagement & Policymaking Toolkit


About: “This toolkit serves as a guide for science centers and museums and other science engagement organizations to thoughtfully identify and implement ways to nurture civic experiences like these across their work or deepen ongoing civic initiatives for meaningful change within their communities…

This toolkit outlines a Community Science Approach, Civic Engagement & Policymaking, where science and technology are factors in collective civic action and policy decisions to meet community goals. It includes:

  • Guidance for your team on how to get started with this work,
  • An overview of what Civic Engagement & Policymaking as a Community Science Approach can entail,
  • Descriptions of four roles your organization can play to authentically engage with communities on civic priorities,
  • Examples of real collaborations between science engagement organizations and their partners that advance community priorities,
  • Tools, guides, and other resources to help you prepare for new civic engagement efforts and/or expand or deepen existing civic engagement efforts…(More)”.

Generative Agent Simulations of 1,000 People


Paper by Joon Sung Park: “The promise of human behavioral simulation–general-purpose computational agents that replicate human behavior across domains–could enable broad applications in policymaking and social science. We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals–applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent. The generative agents replicate participants’ responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications. Our architecture reduces accuracy biases across racial and ideological groups compared to agents given demographic descriptions. This work provides a foundation for new tools that can help investigate individual and collective behavior…(More)”.

What AI Can’t Do for Democracy


Essay by Daniel Berliner: “In short, there is increasing optimism among both theorists and practitioners over the potential for technology-enabled civic engagement to rejuvenate or deepen democracy. Is this optimism justified?

The answer depends on how we think about what civic engagement can do. Political representatives are often unresponsive to the preferences of ordinary people. Their misperceptions of public needs and preferences are partly to blame, but the sources of democratic dysfunction are much deeper and more structural than information alone. Working to ensure many more “citizens’ voices are truly heard” will thus do little to improve government responsiveness in contexts where the distribution of power means that policymakers have no incentive to do what citizens say. And as some critics have argued, it can even distract from recognizing and remedying other problems, creating a veneer of legitimacy—what health policy expert Sherry Arnstein once famously derided as mere “window dressing.”

Still, there are plenty of cases where contributions from citizens can highlight new problems that need addressing, new perspectives by which issues are understood, and new ideas for solving public problems—from administrative agencies seeking public input to city governments seeking to resolve resident complaints and citizens’ assemblies deliberating on climate policy. But even in these and other contexts, there is reason to doubt AI’s usefulness across the board. The possibilities of AI for civic engagement depend crucially on what exactly it is that policymakers want to learn from the public. For some types of learning, applications of AI can make major contributions to enhance the efficiency and efficacy of information processing. For others, there is no getting around the fundamental needs for human attention and context-specific knowledge in order to adequately make sense of public voices. We need to better understand these differences to avoid wasting resources on tools that might not deliver useful information…(More)”.

People-centred and participatory policymaking


Blog by the UK Policy Lab: “…Different policies can play out in radically different ways depending on circumstance and place. Accordingly it is important for policy professionals to have access to a diverse suite of people-centred methods, from gentle and compassionate techniques that increase understanding with small groups of people to higher-profile, larger-scale engagements. The image below shows a spectrum of people-centred and participatory methods that can be used in policy, ranging from light-touch involvement (e.g. consultation), to structured deliberation (e.g. citizens’ assemblies) and deeper collaboration and empowerment (e.g. participatory budgeting). This spectrum of participation is speculatively mapped against stages of the policy cycle…(More)”.

Social Innovation and the Journey to Transformation


Special series by Skoll for the Stanford Social Innovation Review: “…we explore system orchestration, collaborative funding, government partnerships, mission-aligned investing, reimagined storytelling, and evaluation and learning. These seven articles highlight successful approaches to collective action and share compelling examples of social transformation.

The time is now for philanthropy to align the speed and scale of our investments with the scope of the global challenges that social innovators seek to address. We hope this series will spark fresh thinking and new ideas for how we can create durable systemic change quickly and together…(More)”.

From Digital Sovereignty to Digital Agency


Article by Akash Kapur: “In recent years, governments have increasingly pursued variants of digital sovereignty to regulate and control the global digital ecosystem. The pursuit of AI sovereignty represents the latest iteration in this quest. 

Digital sovereignty may offer certain benefits, but it also poses undeniable risks, including the possibility of undermining the very goals of autonomy and self-reliance that nations are seeking. These risks are particularly pronounced for smaller nations with less capacity, which might do better in a revamped, more inclusive, multistakeholder system of digital governance. 

Organizing digital governance around agency rather than sovereignty offers the possibility of such a system. Rather than reinforce the primacy of nations, digital agency asserts the rights, priorities, and needs not only of sovereign governments but also of the constituent parts—the communities and individuals—they purport to represent.

Three cross-cutting principles underlie the concept of digital agency: recognizing stakeholder multiplicity, enhancing the latent possibilities of technology, and promoting collaboration. These principles lead to three action-areas that offer a guide for digital policymakers: reinventing institutions, enabling edge technologies, and building human capacity to ensure technical capacity…(More)”.

How public-private partnerships can ensure ethical, sustainable and inclusive AI development


Article by Rohan Sharma: “Artificial intelligence (AI) has the potential to solve some of today’s most pressing societal challenges – from climate change to healthcare disparities – but it could also exacerbate existing inequalities if not developed and deployed responsibly.

The rapid pace of AI development, growing awareness of AI’s societal impact and the urgent need to harness AI for positive change make bridging the ‘AI divide’ essential now. Public-private partnerships (PPPs) can play a crucial role in ensuring AI is developed ethically, sustainably and inclusively by leveraging the strengths of multiple stakeholders across sectors and regions…

To bridge the AI divide effectively, collaboration among governments, private companies, civil society and other stakeholders is crucial. PPPs unite these stakeholders’ strengths to ensure AI is developed ethically, sustainably and inclusively.

1. Bridging the resource and expertise gap

By combining public oversight and private innovation, PPPs bridge resource and expertise gaps. Governments offer funding, regulations and access to public data; companies contribute technical expertise, creativity and market solutions. This collaboration accelerates AI technologies for social good.

Singapore’s National AI Strategy 2.0, for instance, exemplifies how PPPs drive ethical AI development. By bringing together over one hundred experts from academia, industry and government, Singapore is building a trusted AI ecosystem focused on global challenges like health and climate change. Empowering citizens and businesses to use AI responsibly, Singapore demonstrates how PPPs create inclusive AI systems, serving as a model for others.

2. Fostering cross-border collaboration

AI development is a global endeavour, but countries vary in expertise and resources. PPPs facilitate international knowledge sharing, technology transfer and common ethical standards, ensuring AI benefits are distributed globally, rather than concentrated in a few regions or companies.

3. Ensuring multi-stakeholder engagement

Inclusive AI development requires involving not just public and private sectors, but also civil society organizations and local communities. Engaging these groups in PPPs brings diverse perspectives to AI design and deployment, integrating ethical, social and cultural considerations from the start.

These approaches underscore the value of PPPs in driving AI development through diverse expertise, shared resources and international collaboration…(More)”.