Using online search activity for earlier detection of gynaecological malignancy


Paper by Jennifer F. Barcroft et al: “Ovarian cancer is the most lethal and endometrial cancer the most common gynaecological cancer in the UK, yet neither has a screening programme in place to facilitate early disease detection. The aim of this study was to evaluate whether online search data can be used to differentiate between individuals with malignant and benign gynaecological diagnoses.

This is a prospective cohort study evaluating online search data in symptomatic individuals (Google users) referred from primary care (GP) with a suspected cancer to a London Hospital (UK) between December 2020 and June 2022. Informed written consent was obtained and online search data was extracted via Google Takeout and anonymised. A health filter was applied to extract health-related terms for the 24 months prior to GP referral. A predictive model (outcome: malignancy) was developed using (1) search queries (terms model) and (2) categorised search queries (categories model). Area under the ROC curve (AUC) was used to evaluate model performance. 844 women were approached, 652 were eligible to participate and 392 were recruited. Of those recruited, 108 did not complete enrollment, 12 withdrew and 37 were excluded as they did not track Google searches or had an empty search history, leaving a cohort of 235.

The cohort had a median age of 53 years (range 20–81) and a malignancy rate of 26.0%. There was a difference in online search data between those with a benign and a malignant diagnosis, noted as early as 360 days in advance of GP referral when search queries were used directly, but only 60 days in advance when queries were divided into health categories. A model using online search data from the 153 patients who performed health-related searches achieved its highest sample-corrected AUC of 0.82 at 60 days prior to GP referral.
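The AUC reported above can be computed without any curve-plotting: it equals the probability that a randomly chosen malignant case receives a higher predicted risk score than a randomly chosen benign case (the Mann–Whitney U formulation). The sketch below is an editorial illustration of that metric only, not the paper's model; the labels and scores are invented.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 = malignant, 0 = benign; scores: model-predicted risk.
    Counts a full "win" when a malignant case outscores a benign one,
    and half a win for ties.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Toy example: scores that rank every malignant case above every benign one.
y = [1, 1, 0, 0, 0, 1]
s = [0.9, 0.7, 0.3, 0.4, 0.2, 0.6]
print(auc(y, s))  # perfect separation -> 1.0
```

An AUC of 0.5 corresponds to a model no better than chance, so the study's sample-corrected 0.82 indicates a substantial ranking signal.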

Online search data appears to differ between individuals with malignant and benign gynaecological conditions, with a signal observed in advance of the GP referral date. Online search data needs to be evaluated in a larger dataset to determine its value as an early disease detection tool and whether its use leads to improved clinical outcomes…(More)”.

Influence of public innovation laboratories on the development of public sector ambidexterity


Article by Christophe Favoreu et al: “Ambidexterity has become a major issue for public organizations as they manage increasingly strong contradictory pressures to optimize existing processes while innovating. Moreover, although public innovation laboratories are emerging, their influence on the development of ambidexterity remains largely unexplored. Our research aims to understand how innovation laboratories contribute to the formation of individual ambidexterity within the public sector. Drawing from three case studies, this research underscores the influence of these labs on public ambidexterity through the development of innovations by non-specialized actors and the deployment and reuse of innovative managerial practices and techniques outside the i-labs…(More)”.

Responsible Data Re-use in Developing Countries: Social Licence through Public Engagement


Report by Stefaan Verhulst, Laura Sandor, Natalia Mejia Pardo, Elena Murray and Peter Addo: “The datafication era has transformed the technological landscape, digitizing multiple areas of human life and offering opportunities for societal progress through the re-use of digital data. Developing countries stand to benefit from datafication but are faced with challenges like insufficient data quality and limited infrastructure. One of the primary obstacles to unlocking data re-use lies in agency asymmetries—disparities in decision-making authority among stakeholders—which fuel public distrust. Existing consent frameworks amplify the challenge, as they are individual-focused, lack information, and fail to address the nuances of data re-use. To address these limitations, a Social License for re-use becomes imperative—a community-focused approach that fosters responsible data practices and benefits all stakeholders. This shift is crucial for establishing trust and collaboration, and bridging the gap between institutions, governments, and citizens…(More)”.

Untapped


About: “Twenty-first century collective intelligence – combining people’s knowledge and skills, new forms of data and, increasingly, technology – has the untapped potential to transform the way we understand and act on climate change.

Collective intelligence for climate action in the Global South takes many forms: from crowdsourcing of indigenous knowledge to preserve biodiversity to participatory monitoring of extreme heat and farmer experiments adapting crops to weather variability.

This research analyzes 100+ climate case studies across 45 countries that tap into people’s participation and use new forms of data. This research illustrates the potential that exists in communities everywhere to contribute to climate adaptation and mitigation efforts. It also aims to shine a light on practical ways in which these initiatives could be designed and further developed so this potential can be fully unleashed…(More)”.

Central banks use AI to assess climate-related risks


Article by Huw Jones: “Central bankers said on Tuesday they have broken new ground by using artificial intelligence to collect data for assessing climate-related financial risks, just as the volume of disclosures from banks and other companies is set to rise.

The Bank for International Settlements, a forum for central banks, the Bank of Spain, Germany’s Bundesbank and the European Central Bank said their experimental Gaia AI project was used to analyse company disclosures on carbon emissions, green bond issuance and voluntary net-zero commitments.

Regulators of banks, insurers and asset managers need high-quality data to assess the impact of climate change on financial institutions. However, the absence of a single reporting standard confronts them with a patchwork of public information spread across text, tables and footnotes in annual reports.

Gaia was able to overcome differences in definitions and disclosure frameworks across jurisdictions to offer much-needed transparency, and make it easier to compare indicators on climate-related financial risks, the central banks said in a joint statement.

Despite variations in how companies report the same data, Gaia focuses on the definition of each indicator rather than on how the data is labelled.

Furthermore, with the traditional approach, each additional key performance indicator, or KPI, and each new institution requires the analyst to either search for the information in public corporate reports or contact the institution for information…(More)”.

The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents


Paper by Yun-Shiuan Chuang et al: “Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias – a phenomenon known as the “wisdom of partisan crowds.” Large language model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that they not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show the potential and limitations of LLM-based agents as a model of human collective intelligence…(More)”.
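The convergence dynamic the paper studies can be illustrated with a much simpler, non-LLM toy: DeGroot-style opinion averaging, in which each agent repeatedly moves its numeric estimate toward the group mean. This is an editorial sketch of the underlying "wisdom of crowds" mechanism, not the paper's agent setup; the truth value, group estimates and update weight are invented.

```python
TRUTH = 100.0  # hypothetical ground-truth quantity the group estimates


def deliberate(estimates, rounds=5, weight=0.5):
    """Each round, every agent moves partway toward the current group mean."""
    est = list(estimates)
    for _ in range(rounds):
        mean = sum(est) / len(est)
        est = [(1 - weight) * e + weight * mean for e in est]
    return est


def mean_abs_error(estimates):
    return sum(abs(e - TRUTH) for e in estimates) / len(estimates)


# Two "partisan" subgroups biased in opposite directions around the truth.
group = [80.0, 85.0, 90.0, 110.0, 115.0, 120.0]
before = mean_abs_error(group)
after = mean_abs_error(deliberate(group))
print(before, after)  # individual error shrinks toward the crowd average's
```

Because the opposing biases roughly cancel in the group mean, individual estimates become more accurate as they converge toward it; the interesting empirical question the paper asks is whether role-played LLM personas reproduce this human dynamic.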

Data Disquiet: Concerns about the Governance of Data for Generative AI


Paper by Susan Aaronson: “The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but it may be tainted by errors or by made-up or false information (hallucinations) caused by problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data upon which they are based; therefore, the author recommends incentivizing greater transparency and accountability around data-set development…(More)”.

God-like: A 500-Year History of Artificial Intelligence in Myths, Machines, Monsters


Book by Kester Brewin: “In the year 1600 a monk is burned at the stake for claiming to have built a device that will allow him to know all things.

350 years later, having witnessed ‘Trinity’ – the first test of the atomic bomb – America’s leading scientist outlines a memory machine that will help end war on earth.

25 years in the making, an ex-soldier finally unveils this ‘machine for augmenting human intellect’, dazzling as he stands ‘Zeus-like, dealing lightning with both hands.’

AI is both stunningly new and rooted in ancient desires. As we finally welcome this ‘god-like’ technology amongst us, what can we learn from the myths and monsters of the past about how to survive alongside our greatest ever invention?…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity


Report by Erik Brynjolfsson, Adam Thierer and Daron Acemoglu: “Experts and the media tend to overestimate technology’s negative impact on employment. Case studies suggest that fears of technology-induced unemployment are often exaggerated, as evidenced by the McKinsey Global Institute reversing its AI forecasts and by growth in jobs that had been predicted to be at high risk of automation.

Flexible work arrangements, technical recertification, and creative apprenticeship models offer real-time learning and adaptable skills development to prepare workers for future labor market and technological changes.

AI can potentially generate new employment opportunities, but the complex transition for workers displaced by automation—marked by the need for retraining and credentialing—indicates that the productivity benefits may not adequately compensate for job losses, particularly among low-skilled workers.

Instead of resorting to conflictual relationships, labor unions in the US must work with employers to support firm automation while simultaneously advocating for worker skill development, creating a competitive business enterprise built on strong worker representation similar to that found in Germany…(More)”.