Open Secrets: Ukraine and the Next Intelligence Revolution


Article by Amy Zegart: “Russia’s invasion of Ukraine has been a watershed moment for the world of intelligence. For weeks before the shelling began, Washington publicly released a relentless stream of remarkably detailed findings about everything from Russian troop movements to false-flag attacks the Kremlin would use to justify the invasion. 

This disclosure strategy was new: spy agencies are accustomed to concealing intelligence, not revealing it. But it was very effective. By getting the truth out before Russian lies took hold, the United States was able to rally allies and quickly coordinate hard-hitting sanctions. The disclosures put Russian President Vladimir Putin on the back foot, wondering who and what in his government had been so deeply penetrated by U.S. agencies, and made it harder for other countries to hide behind Putin’s lies and side with Russia.

The disclosures were just the beginning. The war has ushered in a new era of intelligence sharing between Ukraine, the United States, and other allies and partners, which has helped counter false Russian narratives, defend digital systems against cyberattacks, and assist Ukrainian forces in striking Russian targets on the battlefield. And it has brought to light a profound new reality: intelligence isn’t just for government spy agencies anymore…

The explosion of open-source information online, the growth of commercial satellite capabilities, and the rise of AI are enabling all sorts of individuals and private organizations to collect, analyze, and disseminate intelligence.

In the past several years, for instance, the amateur investigators of Bellingcat—a volunteer organization that describes itself as “an intelligence agency for the people”—have made all kinds of discoveries. Bellingcat identified the Russian hit team that tried to assassinate former Russian intelligence officer Sergei Skripal in the United Kingdom and located supporters of the Islamic State (also known as ISIS) in Europe. It also proved that Russia was behind the shootdown of Malaysia Airlines Flight 17 over Ukraine.

Bellingcat is not the only civilian intelligence initiative. When the Iranian government claimed in 2020 that a small fire had broken out in an industrial shed, two U.S. researchers, working independently and using nothing more than their computers and the Internet, proved within hours that Tehran was lying….(More)”.

Is bigger better? A study of the effect of group size on collective intelligence in online groups


Paper by Nada Hashmi, G. Shankaranarayanan and Thomas W. Malone: “What is the optimal size for online groups that use electronic communication and collaboration tools? Previous research typically suggested optimal group sizes of about 5 to 7 members, but this research predominantly examined in-person groups. Here we investigate online groups whose members communicate with each other using two electronic collaboration tools: text chat and shared editing. Unlike previous research that studied groups performing a single task, here we measure group performance using a test of collective intelligence (CI) that includes a combination of tasks specifically chosen to predict performance on a wide range of other tasks [72]. Our findings suggest that there is a curvilinear relationship between group size and performance and that the optimal group size in online groups is between 25 and 35. This, in turn, suggests that online groups may now allow more people to be productively involved in group decision-making than was possible with in-person groups in the past…(More)”.
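The reported curvilinear (inverted-U) relationship can be illustrated with a quick quadratic fit. The data points below are hypothetical stand-ins chosen to peak in the paper’s reported range, not the study’s actual measurements:

```python
import numpy as np

# Hypothetical (group size, CI score) observations -- illustrative only,
# not the paper's data. Scores rise, peak, then fall as groups grow.
sizes  = np.array([2, 5, 10, 15, 20, 25, 30, 35, 40, 50, 60])
scores = np.array([48, 55, 62, 67, 71, 74, 75, 74, 72, 66, 58])

# Fit a quadratic: score ~ a*size^2 + b*size + c
a, b, c = np.polyfit(sizes, scores, deg=2)

# A downward-opening parabola (a < 0) peaks at size = -b / (2a)
optimal_size = -b / (2 * a)
print(f"estimated optimal group size: {optimal_size:.1f}")
```

With real observations in place of the toy arrays, the vertex of the fitted parabola gives the estimated optimal group size.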

Which Connections Really Help You Find a Job?


Article by Iavor Bojinov, Karthik Rajkumar, Guillaume Saint-Jacques, Erik Brynjolfsson, and Sinan Aral: “Whom should you connect with the next time you’re looking for a job? To answer this question, we analyzed data from multiple large-scale randomized experiments involving 20 million people to measure how different types of connections impact job mobility. Our results, published recently in Science Magazine, show that your strongest ties — namely your connections to immediate coworkers, close friends, and family — were actually the least helpful for finding new opportunities and securing a job. You’ll have better luck with your weak ties: the more infrequent, arm’s-length relationships with acquaintances.

To be more specific, the ties that are most helpful for finding new jobs tend to be moderately weak: they strike a balance between exposing you to new social circles and information and having enough familiarity and overlapping interests that the information is useful. Our findings uncovered the relationship between the strength of a connection (as measured by the number of mutual connections prior to connecting) and the likelihood that a job seeker transitions to a new role within the organization of a connection.

The observation that weak ties are more beneficial for finding a job is not new. Sociologist Mark Granovetter first laid out this idea in a seminal 1973 paper that described how a person’s network affects their job prospects. Since then, the theory, known as the “strength of weak ties,” has become one of the most influential in the social sciences — underpinning network theories of information diffusion, industry structure, and human cooperation….(More)”.
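As a toy illustration of that tie-strength measure, here is a sketch that counts mutual connections between two people in a small contact network (the names and the network are invented for the example):

```python
# Tie strength proxied, as in the study, by the number of mutual
# connections two people share. Hypothetical toy network.
network = {
    "ana":   {"ben", "cara", "dev", "eli"},
    "ben":   {"ana", "cara", "dev"},
    "fiona": {"cara", "gus"},
}

def mutual_connections(network, a, b):
    """Count contacts shared by a and b (excluding a and b themselves)."""
    shared = network.get(a, set()) & network.get(b, set())
    return len(shared - {a, b})

# ana and ben share cara and dev (a stronger tie);
# ana and fiona share only cara (a weaker tie).
print(mutual_connections(network, "ana", "ben"))    # 2
print(mutual_connections(network, "ana", "fiona"))  # 1
```

Under the paper’s finding, a pair like ana–fiona (few mutual contacts) is the kind of moderately weak tie most likely to lead to a new job.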

The network science of collective intelligence


Article by Damon Centola: “In the last few years, breakthroughs in computational and experimental techniques have produced several key discoveries in the science of networks and human collective intelligence. This review presents the latest scientific findings from two key fields of research: collective problem-solving and the wisdom of the crowd. I demonstrate the core theoretical tensions separating these research traditions and show how recent findings offer a new synthesis for understanding how network dynamics alter collective intelligence, both positively and negatively. I conclude by highlighting current theoretical problems at the forefront of research on networked collective intelligence, as well as vital public policy challenges that require new research efforts…(More)”.

Democratised and declassified: the era of social media war is here


Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience. 

During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) were already highly democratised. Commercial providers such as Maxar, and platforms such as Google Earth, supply high-resolution images free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user-generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed changes in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up.

Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed, crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea what exactly they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter had thread after thread identifying what the pictures showed and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.

Collective Intelligence in Action – Using Machine Data and Insights to Improve UNDP Sensemaking


UNDP Report: “At its heart, sensemaking is a strategic process designed to extract insights from current projects and generate actionable intelligence for UNDP Country Offices (COs) and other stakeholders. The approach also has the potential to increase coherence across portfolios of projects, surface common patterns, identify connections, gaps and future perspectives, and determine strategic actions to accelerate the impact of their work.

By adopting a data-driven approach and looking into structured and semi-structured data from https://open.undp.org/ as well as unstructured data from Open UNDP, project documents and annual progress reports of selected projects, this endeavor aims to extract useful insights that help CO colleagues better understand where their portfolio is working and identify entry points for breaking down silos between teams and spurring collaboration. It is designed to help improve sensemaking, support better strategy and improve management decisions…(More)”.

Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking on Pol.is. This is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where part way through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to each other, in a bid to break down groupthink across those groups. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”

Collective Intelligence


Editorial to the Inaugural Issue by Jessica Flack et al: “It is easy to see the potential of collective intelligence research to serve as a unifying force in the sciences. Its “nuts and bolts” methodological and conceptual questions apply across scales: how to characterize minimal and optimal algorithms for aggregating and storing information; how to derive macroscopic collective outputs from microscopic inputs; how to measure the robustness and vulnerability of collective outcomes; the design of algorithms for information aggregation; the role of diversity in forecasting and estimation; the dynamics of problem-solving in groups; team dynamics and complementary and synergistic roles; open innovation processes; and, more recently, the practical options for combining artificial and collective intelligence.

Despite this potential, the collective intelligence scholarly community is currently distributed over somewhat independent clusters of fields and research groups. We hope to bring these groups together. In this spirit, we will provide space for cross-cutting research aimed at principles of collective intelligence but also for field-specific research.

How should we understand the objectives of collective intelligence in different contexts? These can include identifying an object, making predictions, solving a problem, taking action, achieving an outcome, surviving in a dynamic environment, or a combination of these. Clarity on objectives is essential to measure or evaluate collective intelligence.

What can we learn about how collective intelligence addresses different types of problems, such as the characteristics of static, stochastic, and dynamic environments? For example, if stochastic, is the distribution of states best described as coming from a fixed distribution, as produced by a Markov Process, or as deeply uncertain? If a multi-agent system, to what extent do those entities cooperate or compete? What combinations of hierarchies and various forms of self-organization–such as markets, democracies, and communities–can align goals and coordinate actions?

What causes collective intelligence? How are the core processes needed for intelligence–such as sensing, deciding, and learning–performed in very different types of collective systems? What precisely is the relationship between diversity and collective intelligence (where the patterns are much more complex than often assumed)? Or the roles of synchrony and synergy in teams? What are some non-obvious patterns, such as how a slow learning rate among some population members maintains memory? What is the role of noise (as discussed in our first published dialogue), which, while harmful to the individual, can be potentially beneficial for the collective? When can a propensity for mistakes be helpful?

How should we understand the relationships between levels? For example, can aggregate or macroscale variables be derived from microscale interactions and mechanisms, or vice-versa?

Where does collective intelligence reside, and how is it “stored”—in individual heads, encoded in interaction networks and circuits, or embodied in the interaction of a group with its environment?

How are trade-offs handled in different contexts–speed and accuracy, focus and peripheral vision, exploration and exploitation?

These–and dozens of related questions–are relevant to many disciplines, and each may benefit from insights derived from others, particularly if we can develop common principles and concepts…(More)”.

Toward a Demand-Driven, Collaborative Data Agenda for Adolescent Mental Health


Paper by Stefaan Verhulst et al: “Existing datasets and research in the field of adolescent mental health do not always meet the needs of practitioners, policymakers, and program implementers, particularly in the context of vulnerable populations. Here, we introduce a collaborative, demand-driven methodology for the development of a strategic adolescent mental health research agenda. Ultimately, this agenda aims to guide future data sharing and collection efforts that meet the most pressing data needs of key stakeholders…

We conducted a rapid literature search to summarize common themes in adolescent mental health research into a “topic map”. We then hosted two virtual workshops with a range of international experts to discuss the topic map and identify shared priorities for future collaboration and research…

Our topic map identifies 10 major themes in adolescent mental health, organized into system-level, community-level, and individual-level categories. The engagement of cross-sectoral experts resulted in the validation of the mapping exercise, critical insights for refining the topic map, and a collaborative list of priorities for future research…

This innovative agile methodology enables a focused deliberation with diverse stakeholders and can serve as the starting point for data generation and collaboration practices, both in the field of adolescent mental health and other topics…(More)”.

Localising AI for crisis response


Report by Aleks Berditchevskaia, Kathy Peach and Isabel Stewart: “Putting power back in the hands of frontline humanitarians and local communities.

This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.

The two collective crisis intelligence tool prototypes developed were:

  • NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis.
  • Report and Respond: a French-language, SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while in the field, and to receive real-time guidance on appropriate responses.
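One step a tool like Report and Respond must perform is matching an incoming message against a database of known rumours. A minimal sketch of that matching step using Python’s standard-library difflib — the rumours, responses and similarity threshold here are all hypothetical, and the report does not describe the tool’s actual implementation:

```python
import difflib

# Hypothetical rumour database: canonical rumour -> vetted guidance.
# (Illustrative placeholders, not the actual Report and Respond content.)
RUMOURS = {
    "drinking hot water cures covid": "No evidence supports this; share WHO prevention guidance.",
    "the vaccine changes your dna": "COVID-19 vaccines do not alter DNA; share vaccine fact sheet.",
}

def match_rumour(incoming_sms, cutoff=0.6):
    """Return the (rumour, guidance) pair best matching the SMS, or None."""
    text = incoming_sms.lower().strip()
    hits = difflib.get_close_matches(text, RUMOURS.keys(), n=1, cutoff=cutoff)
    if not hits:
        return None  # unknown rumour: escalate to a human fact-checker
    return hits[0], RUMOURS[hits[0]]

print(match_rumour("Is it true drinking hot water cures COVID?"))
```

A production tool would need multilingual matching (the real tool operates in French) and human review for unmatched messages; this sketch only shows the basic lookup pattern.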

Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.

The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, IFRC Solferino Academy, and Open Lab Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.

We found that collective crisis intelligence:

  • has the potential to make local humanitarian action more timely and appropriate to local needs.
  • can transform locally-generated data to drive new forms of (anticipatory) action.

We found that participatory AI:

  • can overcome several critiques and limitations of AI – as well as help to improve model performance.
  • helps to surface tensions between the assumptions and standards set by AI gatekeepers versus the pragmatic reality of implementation.
  • creates opportunities for building and sharing new capabilities among frontline staff and data scientists.

We also validated that collective crisis intelligence and participatory AI can help increase trust in AI tools, but more research is needed to untangle the factors that were responsible…(More)”.