Which Connections Really Help You Find a Job?


Article by Iavor Bojinov, Karthik Rajkumar, Guillaume Saint-Jacques, Erik Brynjolfsson, and Sinan Aral: “Whom should you connect with the next time you’re looking for a job? To answer this question, we analyzed data from multiple large-scale randomized experiments involving 20 million people to measure how different types of connections impact job mobility. Our results, published recently in Science Magazine, show that your strongest ties — namely your connections to immediate coworkers, close friends, and family — were actually the least helpful for finding new opportunities and securing a job. You’ll have better luck with your weak ties: the more infrequent, arm’s-length relationships with acquaintances.

To be more specific, the ties that are most helpful for finding new jobs tend to be moderately weak: They strike a balance between exposing you to new social circles and information and having enough familiarity and overlapping interests so that the information is useful. Our findings uncovered the relationship between the strength of the connection (as measured by the number of mutual connections prior to connecting) and the likelihood that a job seeker transitions to a new role within the organization of a connection.

The observation that weak ties are more beneficial for finding a job is not new. Sociologist Mark Granovetter first laid out this idea in a seminal 1973 paper that described how a person’s network affects their job prospects. Since then, the theory, known as the “strength of weak ties,” has become one of the most influential in the social sciences — underpinning network theories of information diffusion, industry structure, and human cooperation….(More)”.
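The study's headline measure is worth making concrete: tie strength is proxied by the number of mutual connections two people share before connecting. Below is a minimal sketch of that proxy on a toy graph using networkx; the graph and function names are illustrative, not the study's code.

```python
import networkx as nx

def tie_strength(graph: nx.Graph, u, v) -> int:
    """Proxy for tie strength described in the excerpt above: the number
    of mutual connections two people share prior to connecting."""
    return len(list(nx.common_neighbors(graph, u, v)))

# Toy network: ana and bo sit in the same circle (shared neighbour di),
# while ana and cy bridge two otherwise separate circles.
g = nx.Graph([("ana", "bo"), ("ana", "di"), ("bo", "di"),
              ("ana", "cy"), ("cy", "ed"), ("ed", "fay")])

print(tie_strength(g, "ana", "bo"))  # 1 mutual connection -> stronger tie
print(tie_strength(g, "ana", "cy"))  # 0 mutual connections -> weak, bridging tie
```

On this measure, the ana-cy link is the kind of weak, bridging tie the excerpt identifies as most useful for reaching new social circles and, with them, new opportunities.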

The network science of collective intelligence


Article by Damon Centola: “In the last few years, breakthroughs in computational and experimental techniques have produced several key discoveries in the science of networks and human collective intelligence. This review presents the latest scientific findings from two key fields of research: collective problem-solving and the wisdom of the crowd. I demonstrate the core theoretical tensions separating these research traditions and show how recent findings offer a new synthesis for understanding how network dynamics alter collective intelligence, both positively and negatively. I conclude by highlighting current theoretical problems at the forefront of research on networked collective intelligence, as well as vital public policy challenges that require new research efforts…(More)”.

Democratised and declassified: the era of social media war is here


Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience. 

During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) were already highly democratised. Commercial satellite companies, such as Maxar or Google Earth, provide high-resolution images free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user-generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed a change in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up.

Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed and crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea what exactly they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter carried thread after thread identifying what the pictures showed and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.

Collective Intelligence in Action – Using Machine Data and Insights to Improve UNDP Sensemaking


UNDP Report: “At its heart, sensemaking is a strategic process designed to extract insights from current projects to generate actionable intelligence for UNDP Country Offices (COs) and other stakeholders. The approach also has the potential to increase coherence across portfolios of projects, surface common patterns, identify connections, gaps and future perspectives, and determine strategic actions to accelerate the impact of their work.

By adopting a data-driven approach and looking into structured and semi-structured data from Open UNDP (https://open.undp.org/) as well as unstructured data from project documents and annual progress reports of selected projects, this endeavor aims to extract useful insights that help CO colleagues better understand where their portfolio is working and identify entry points for breaking silos between teams and spurring collaboration. It is designed to help improve sensemaking, support better strategy and improve management decisions…(More)”.
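As a rough illustration of what mining unstructured portfolio data can look like, here is a minimal sketch that surfaces each project's distinctive themes with TF-IDF. The project records are invented stand-ins, not the Open UNDP schema; a real pipeline would pull records from https://open.undp.org/ and text from the documents named above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented stand-ins for project summaries, for illustration only.
projects = {
    "PRJ-001": "strengthening climate resilience of smallholder farmers",
    "PRJ-002": "digital public services and e-governance capacity building",
    "PRJ-003": "climate adaptation financing for coastal communities",
}

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(projects.values())
terms = vectorizer.get_feature_names_out()

# Print each project's most distinctive terms: a crude first pass at the
# common patterns and entry points for collaboration mentioned above.
for pid, row in zip(projects, tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print(pid, [term for term, score in top if score > 0])
```

A complementary pass over shared vocabulary (for instance, "climate" appears in both PRJ-001 and PRJ-003 above) would flag candidate entry points for breaking silos between teams.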

Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates, over 2,400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI can be used to gauge expert and public sentiment towards existing policies and current thinking by asking participants to discuss them on Pol.is. This is well suited to testing public and expert opinion on current policy proposals, especially where success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to the effective implementation of existing policy.
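To make the mechanics concrete: Pol.is collects agree/disagree/pass votes from participants on short statements, and even simple per-statement summaries separate broadly supported ideas (use case 2's consensus) from divisive ones worth stress-testing. Below is a deliberately minimal sketch on an invented vote matrix; Pol.is itself clusters participants with PCA and k-means, and none of this is the Policy Lab's actual data or code.

```python
import numpy as np

# Invented vote matrix: rows = participants, columns = statements;
# +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1],
    [ 1, -1,  1],
    [ 1,  1,  0],
    [ 1, -1, -1],
])

# Mean vote signals direction of sentiment; variance signals divergence.
for i, (mean, var) in enumerate(zip(votes.mean(axis=0), votes.var(axis=0))):
    label = "consensus" if abs(mean) > 0.5 and var < 0.5 else "contested"
    print(f"statement {i}: mean={mean:+.2f} var={var:.2f} -> {label}")
```

Statements flagged as consensus map to shared ground across participants; contested ones are the natural candidates for the deeper deliberation described here.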

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where part way through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to each other, in a bid to break down groupthink across those groups. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”

Collective Intelligence


Editorial to the Inaugural Issue by Jessica Flack et al: “It is easy to see the potential of collective intelligence research to serve as a unifying force in the sciences. Its “nuts and bolts” methodological and conceptual questions apply across scales – how to characterize minimal and optimal algorithms for aggregating and storing information; how to derive macroscopic collective outputs from microscopic inputs; how to measure the robustness and vulnerability of collective outcomes; the design of algorithms for information aggregation; the role of diversity in forecasting and estimation; the dynamics of problem-solving in groups; team dynamics and complementary and synergistic roles; open innovation processes; and, more recently, the practical options for combining artificial and collective intelligence.

Despite this potential, the collective intelligence scholarly community is currently distributed over somewhat independent clusters of fields and research groups. We hope to bring these groups together. In this spirit, we will provide space for cross-cutting research aimed at principles of collective intelligence but also for field-specific research.

How should we understand the objectives of collective intelligence in different contexts? These can include identifying an object, making predictions, solving a problem, taking action, achieving an outcome, surviving in a dynamic environment, or a combination of these. Clarity on objectives is essential to measure or evaluate collective intelligence.

What can we learn about how collective intelligence addresses different types of problems, such as the characteristics of static, stochastic, and dynamic environments? For example, if stochastic, is the distribution of states best described as coming from a fixed distribution, as produced by a Markov Process, or as deeply uncertain? If a multi-agent system, to what extent do those entities cooperate or compete? What combinations of hierarchies and various forms of self-organization–such as markets, democracies, and communities–can align goals and coordinate actions?

What causes collective intelligence? How are the core processes needed for intelligence–such as sensing, deciding, and learning–performed in very different types of collective systems? What precisely is the relationship between diversity and collective intelligence (where the patterns are much more complex than often assumed)? Or the roles of synchrony and synergy in teams? What are some non-obvious patterns, such as how a slow learning rate among some population members maintains memory? What is the role of noise (as discussed in our first published dialogue), which, while harmful to the individual, can be potentially beneficial for the collective? When can a propensity for mistakes be helpful?
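On the diversity question specifically, one standard formalization from the wisdom-of-crowds literature (not introduced in this editorial) is Page's diversity prediction theorem. For individual estimates $s_i$ of a true value $\theta$, with crowd mean $\bar{s}$, the following is an algebraic identity:

```latex
\[
\underbrace{(\bar{s}-\theta)^2}_{\text{collective error}}
  = \underbrace{\tfrac{1}{n}\sum_{i=1}^{n}(s_i-\theta)^2}_{\text{average individual error}}
  - \underbrace{\tfrac{1}{n}\sum_{i=1}^{n}(s_i-\bar{s})^2}_{\text{prediction diversity}}
\]
```

The crowd's error equals the average individual error minus the variance of the estimates, so holding individual accuracy fixed, greater diversity of estimates lowers collective error. The editorial's caveat, that the real patterns are much more complex than often assumed, is precisely about where this idealization breaks down.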

How should we understand the relationships between levels? For example, can aggregate or macroscale variables be derived from microscale interactions and mechanisms, or vice-versa?

Where does collective intelligence reside, and how is it “stored”—in individual heads, encoded in interaction networks and circuits, or embodied in the interaction of a group with its environment?

How are trade-offs handled in different contexts–speed and accuracy, focus and peripheral vision, exploration and exploitation?

These–and dozens of related questions–are relevant to many disciplines, and each may benefit from insights derived from others, particularly if we can develop common principles and concepts…(More)”.

Toward a Demand-Driven, Collaborative Data Agenda for Adolescent Mental Health


Paper by Stefaan Verhulst et al: “Existing datasets and research in the field of adolescent mental health do not always meet the needs of practitioners, policymakers, and program implementers, particularly in the context of vulnerable populations. Here, we introduce a collaborative, demand-driven methodology for the development of a strategic adolescent mental health research agenda. Ultimately, this agenda aims to guide future data sharing and collection efforts that meet the most pressing data needs of key stakeholders…

We conducted a rapid literature search to summarize common themes in adolescent mental health research into a “topic map”. We then hosted two virtual workshops with a range of international experts to discuss the topic map and identify shared priorities for future collaboration and research…

Our topic map identifies 10 major themes in adolescent mental health, organized into system-level, community-level, and individual-level categories. The engagement of cross-sectoral experts resulted in the validation of the mapping exercise, critical insights for refining the topic map, and a collaborative list of priorities for future research…

This innovative agile methodology enables focused deliberation with diverse stakeholders and can serve as the starting point for data generation and collaboration practices, both in the field of adolescent mental health and in other domains…(More)”.

Localising AI for crisis response


Report by Aleks Berditchevskaia, Kathy Peach and Isabel Stewart: “Putting power back in the hands of frontline humanitarians and local communities.

This report documents the results of a year-long project to design and evaluate new proof-of-concept Collective Crisis Intelligence tools. These are tools that combine data from crisis-affected communities with the processing power of AI to improve humanitarian action.

The two collective crisis intelligence tool prototypes developed were:

  • NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis (a hypothetical sketch of this kind of model follows this list).
  • Report and Respond: a French language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and receive real-time guidance on appropriate responses.
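To give a flavour of what a tool like NFRI-Predict involves (the sketch promised above), here is a hypothetical household-level classifier in scikit-learn. The features, labels, and data are invented for illustration; Nesta's report, not this sketch, describes the actual model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical survey rows: household characteristics -> most-needed relief item.
data = pd.DataFrame({
    "household_size":   [2, 6, 4, 5, 1, 7, 3, 6],
    "children_under_5": [0, 2, 1, 2, 0, 3, 0, 1],
    "region": ["hill", "terai", "hill", "terai",
               "mountain", "terai", "hill", "mountain"],
    "most_needed_item": ["blanket", "tarpaulin", "blanket", "kitchen_set",
                         "blanket", "tarpaulin", "kitchen_set", "blanket"],
})

X = pd.get_dummies(data.drop(columns="most_needed_item"))  # one-hot encode region
y = data["most_needed_item"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))  # predicted priority items for held-out households
```

The participatory AI methods described below concern how communities shape the features, labels and evaluation of such a model, not just its code.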

Both tools were developed using Nesta’s Participatory AI methods, which aimed to address some of the risks associated with humanitarian AI by involving local communities in the design, development and evaluation of the new tools.

The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, the IFRC Solferino Academy, and Open Lab at Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.

We found that collective crisis intelligence:

  • has the potential to make local humanitarian action more timely and appropriate to local needs.
  • can transform locally-generated data to drive new forms of (anticipatory) action.

We found that participatory AI:

  • can overcome several critiques and limitations of AI, as well as help to improve model performance.
  • helps to surface tensions between the assumptions and standards set by AI gatekeepers versus the pragmatic reality of implementation.
  • creates opportunities for building and sharing new capabilities among frontline staff and data scientists.

We also validated that collective crisis intelligence and participatory AI can help increase trust in AI tools, but more research is needed to untangle the factors that were responsible…(More)”.

What Happened to Consensus Reality?


Essay by Jon Askonas: “Do you feel that people you love and respect are going insane? That formerly serious thinkers or commentators are increasingly unhinged, willing to subscribe to wild speculations or even conspiracy theories? Do you feel that, even if there’s some blame to go around, it’s the people on the other side of the aisle who have truly lost their minds? Do you wonder how they can possibly be so blind? Do you feel bewildered by how absurd everything has gotten? Do many of your compatriots seem in some sense unintelligible to you? Do you still consider them your compatriots?

If you feel this way, you are not alone.

We have come a long way from the optimism of the 1990s and 2000s about how the Internet would usher in a new golden era, expanding the domain of the information society to the whole world, with democracy sure to follow. Now we hear that the Internet foments misinformation and erodes democracy. Yet as dire as these warnings are, they are usually followed with suggestions that with more scrutiny on tech CEOs, more aggressive content moderation, and more fact-checking, Americans might yet return to accepting the same model of reality. Last year, a New York Times article titled “How the Biden Administration Can Help Solve Our Reality Crisis” suggested creating a federal “reality czar.”

This is a fantasy. The breakup of consensus reality — a shared sense of facts, expectations, and concepts about the world — predates the rise of social media and is driven by much deeper economic and technological currents.

Postwar Americans enjoyed a world where the existence of an objective, knowable reality just seemed like common sense, where alternate facts belonged only to fringe realms of the deluded or deluding. But a shared sense of reality is not natural. It is the product of social institutions that were once so powerful they could hold together a shared picture of the world, but are now well along a path of decline. In the hope of maintaining their power, some have even begun to abandon the project of objectivity altogether.

Attempts to restore consensus reality by force — the current implicit project of the establishment — are doomed to failure. The only question now is how we will adapt our institutions to a life together where a shared picture of the world has been shattered.

This series aims to trace the forces that broke consensus reality. More than a history of the rise and fall of facts, these essays attempt to show a technological reordering of social reality unlike any before encountered, and an accompanying civilizational shift not seen in five hundred years…(More)”.

From Knowing to Doing: Operationalizing the 100 Questions for Air Quality Initiative


Report by Jessica Seddon, Stefaan G. Verhulst and Aimee Maron: “…summarizes the September 2021 capstone event that wrapped up 100 Questions for Air Quality, led by The GovLab and the World Resources Institute (WRI). This initiative brought together a group of 100 atmospheric scientists, policy experts, academics and data providers from around the world to identify the most important questions for setting a new, high-impact agenda for further investments in data and data science. After a thorough process of sourcing, clustering and ranking the questions, the public was asked to vote. The results were surprising: the most important question was not about what new data or research is needed, but about how we do more with what we already know to generate political will and investments in air quality solutions.

Co-hosted by Clean Air Fund, Climate and Clean Air Coalition, and Clean Air Catalyst, the 2021 roundtable discussion focused on an answer to that question. This summary of the conference proceedings reflects early findings from that session and offers a starting point for a much-needed conversation on data-to-action. The experts and practitioners from academia, businesses, foundations, government, multilateral organizations, nonprofits, and think tanks have not been identified, so that they could speak freely….(More)”.