Science Diplomacy and the Rise of Technopoles


Article by Vaughan Turekian and Peter Gluckman: “…Science diplomacy has an important, even existential imperative to help the world reconsider the necessity of working together toward big global goals. Climate change may be the most obvious example of where global action is needed, but many other issues have similar characteristics—deep ocean resources, space, and other ungoverned areas, to name a few.

However, taking up this mantle requires acknowledging why past efforts have failed to meet their goals. The global commitment to Sustainable Development Goals (SDGs) is an example. Weaknesses in the UN system, compounded by varied commitments from member states, will prevent the achievement of the SDGs by 2030. This year’s UN Summit of the Future is intended to reboot the global commitment to the sustainability agenda. Regardless of what type of agreement is signed at the summit, its impact may be limited.  

The science community must play an active part in ensuring progress is in fact made, but that will require an expansion of the community’s current role. To understand what this might mean, consider that the Pact for the Future agreed in New York City in September 2024 places “science, technology, and innovation” as one of its five themes. But that becomes actionable either in the narrow sense that technology will provide “answers” to global problems or in the platitudinous sense that science provides advice that is not acted upon. This dichotomy of unacceptable approaches has long bedeviled science’s influence.

For the world to make better use of science, science must take on an expanded responsibility in solving problems at both global and local scales. And science itself must become part of a toolkit—both at the practical and the diplomatic level—to address the sorts of challenges the world will face in the future. To make this happen, more countries must make science diplomacy a core part of their agenda by embedding science advisors within foreign ministries, connecting diplomats to science communities.

As the pace of technological change generates both existential risk and economic, environmental, and social opportunities, science diplomacy has a vital task in balancing outcomes for the benefit of more people. It can also bring the science community (including the social sciences and humanities) to play a critical role alongside nation states. And, as new technological developments enable nonstate actors, and especially the private sector, science diplomacy has an important role to play in helping nation states develop policy that can identify common solutions and engage key partners…(More)”.

As AI-powered health care expands, experts warn of biases


Article by Marta Biino: “Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.

It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. While AI has long been used in radiology for image analysis, and in oncology to classify skin lesions, its applications are growing as the technology continues to advance.

OpenAI’s GPT models have outperformed humans in making cancer diagnoses based on MRI reports and beat PhD-holders in standardized science tests, to name a few examples.

However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients…(More)”.

Harnessing the feed: social media for mental health information and support 


Report by ReachOut: “…highlights how a social media ban could cut young people off from vital mental health support, including finding that 73 per cent of young people in Australia turn to social media when it comes to support for their mental health.

Based on research with over 2000 young people, the report found a range of benefits for young people seeking mental health support via social media (predominantly TikTok, YouTube and Instagram). 66 per cent of young people surveyed reported increased awareness about their mental health because of relevant content they accessed via social media, 47 per cent said they had looked for information about how to get professional mental health support on social media and 40 per cent said they sought professional support after viewing mental health information on social media.

Importantly, half of young people with a probable mental health condition said that they were searching for mental health information or support on social media because they don’t have access to professional support. 

However, young people also highlighted a range of concerns about social media in the research. 38 per cent were deeply concerned about harmful mental health content they had come across on platforms and 43 per cent of the young people who sought support online were deeply concerned about the addictive nature of social media.

The report highlights young people’s calls for social media to be safer. They want: an end to addictive features like infinite scroll, more control over the content they see, better labelling of mental health information from credible sources, better education and more mental health information provided across platforms…(More)”.

How The New York Times incorporates editorial judgment in algorithms to curate its home page


Article by Zhen Yang: “Whether on the web or the app, the home page of The New York Times is a crucial gateway, setting the stage for readers’ experiences and guiding them to the most important news of the day. The Times publishes over 250 stories daily, far more than the 50 to 60 stories that can be featured on the home page at a given time. Traditionally, editors have manually selected and programmed which stories appear, when and where, multiple times daily. This manual process presents challenges:

  • How can we provide readers a relevant, useful, and fresh experience each time they visit the home page?
  • How can we make our editorial curation process more efficient and scalable?
  • How do we maximize the reach of each story and expose more stories to our readers?

To address these challenges, the Times has been actively developing and testing editorially driven algorithms to assist in curating home page content. These algorithms are editorially driven in that a human editor’s judgment or input is incorporated into every aspect of the algorithm — including deciding where on the home page the stories are placed, informing the rankings, and potentially influencing and overriding algorithmic outputs when necessary. From the get-go, we’ve designed algorithmic programming to elevate human curation, not to replace it…
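The article stops short of publishing the Times’s actual ranking logic, so the following is only a hypothetical sketch of the pattern it describes: an algorithmic ordering that editors can boost or pin outright. Every name and weight here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    algo_score: float               # e.g., output of a relevance/freshness model
    editor_boost: float = 0.0       # hypothetical manual adjustment by an editor
    pinned_rank: int | None = None  # editor forces the story to a fixed slot

def rank_module(stories: list[Story], slots: int) -> list[Story]:
    """Blend algorithmic scores with editorial input: pins win outright,
    boosts shift the ordering, and only the top `slots` stories run."""
    floating = sorted((s for s in stories if s.pinned_rank is None),
                      key=lambda s: s.algo_score + s.editor_boost,
                      reverse=True)
    module = list(floating)
    for s in sorted((s for s in stories if s.pinned_rank is not None),
                    key=lambda s: s.pinned_rank):
        module.insert(min(s.pinned_rank, len(module)), s)  # editorial override
    return module[:slots]

candidates = [
    Story("climate-analysis", algo_score=0.82),
    Story("election-live-briefing", algo_score=0.91, pinned_rank=0),
    Story("weekend-recipe-feature", algo_score=0.77, editor_boost=0.10),
]
print([s.slug for s in rank_module(candidates, slots=3)])
# -> ['election-live-briefing', 'weekend-recipe-feature', 'climate-analysis']
```

A production system would add freshness decay, deduplication across modules, and per-slot constraints, but the pin-and-boost pattern captures the idea of algorithms that elevate human curation rather than replace it.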

The Times began using algorithms for content recommendations in 2011 but only recently started applying them to home page modules. For years, we only had one algorithmically powered module, “Smarter Living,” on the home page, and later, “Popular in The Times.” Both were positioned relatively low on the page.

Three years ago, the formation of a cross-functional team — including newsroom editors, product managers, data scientists, data analysts, and engineers — brought the momentum needed to advance our responsible use of algorithms. Today, nearly half of the home page is programmed with assistance from algorithms that help promote news, features, and sub-brand content, such as The Athletic and Wirecutter. Some of these modules, such as the features module located at the top right of the home page on the web version, are in highly visible locations. During major news moments, editors can also deploy algorithmic modules to display additional coverage to complement a main module of stories near the top of the page. (The topmost news package of Figure 1 is an example of this in action.)…(More)”

Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers


Article by Joseph Cox: “A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members. 

The project is designed to raise awareness of what is possible with this technology, and the pair are not releasing their code, AnhPhu Nguyen, one of the creators, told 404 Media. But the experiment, tested in some cases on unsuspecting people in the real world according to a demo video, still shows the razor-thin line between a world in which people can move around with relative anonymity and one where your identity and personal information can be pulled up in an instant by strangers.

Nguyen and co-creator Caine Ardayfio call the project I-XRAY. It uses a pair of Meta’s commercially available Ray-Ban smart glasses, and allows a user to “just go from face to name,” Nguyen said…(More)”.

From Bits to Biology: A New Era of Biological Renaissance powered by AI


Article by Milad Alucozai: “…A new wave of platforms is emerging to address these limitations. Designed with the modern scientist in mind, these platforms prioritize intuitive interfaces, enabling researchers with diverse computational backgrounds to easily navigate and analyze data. They emphasize collaboration, allowing teams to share data and insights seamlessly. And they increasingly incorporate artificial intelligence, offering powerful tools for accelerating analysis and discovery. This shift marks a move towards more user-centric, efficient, and collaborative computational biology, empowering researchers to tackle increasingly complex biological questions. 

Emerging Platforms: 

  • Seqera Labs: Spearheading a movement towards efficient and reproducible research, Seqera Labs provides a suite of tools, including the popular open-source workflow language Nextflow. Their platform empowers researchers to design scalable and reproducible data analysis pipelines, particularly for cloud environments. Seqera streamlines complex computational workflows across diverse biological disciplines by emphasizing automation and flexibility, making data-intensive research scalable, flexible, and collaborative. 
  • Form Bio: Aimed at democratizing access to computational biology, Form Bio provides a comprehensive tech suite built to enable accelerated cell and gene therapy development and computational biology at scale. Its emphasis on collaboration and intuitive design fosters a more inclusive research environment to help organizations streamline therapeutic development and reduce time-to-market.  
  • Code Ocean: Addressing the critical need for reproducibility in research, Code Ocean provides a unique platform for sharing and executing research code, data, and computational environments. By encapsulating these elements in a portable and reproducible format, Code Ocean promotes transparency and facilitates the reuse of research methods, ultimately accelerating scientific discovery. 
  • Pluto Biosciences: Championing a collaborative approach to biological discovery, Pluto Biosciences offers an interactive platform for visualizing and analyzing complex biological data. Its intuitive tools empower researchers to explore data, generate insights, and seamlessly share findings with collaborators. This fosters a more dynamic and interactive research process, facilitating knowledge sharing and accelerating breakthroughs. 

Open Source Platforms: 

  • Galaxy: A widely used open-source platform for bioinformatics analysis. It provides a user-friendly web interface and a vast collection of tools for various tasks, from sequence analysis to data visualization. Its open-source nature fosters community development and customization, making it a versatile tool for diverse research needs. 
  • Bioconductor: A prominent open-source platform for bioinformatics analysis, akin to Galaxy in its commitment to accessibility and community-driven development. It leverages the power of the R programming language, providing a wealth of packages for tasks ranging from genomic data analysis to statistical modeling. Its open-source nature fosters a collaborative environment where researchers can freely access, utilize, and contribute to a growing collection of tools…(More)”

Data-driven decisions: the case for randomised policy trials


Speech by Andrew Leigh: “…In 1747, 31-year-old Scottish naval surgeon James Lind set about determining the most effective treatment for scurvy, a disease that was killing thousands of sailors around the world. Selecting 12 sailors suffering from scurvy, Lind divided them into six pairs. Each pair received a different treatment: cider; sulphuric acid; vinegar; seawater; a concoction of nutmeg, garlic and mustard; and two oranges and a lemon. In less than a week, the pair who had received oranges and lemons were back on active duty, while the others languished. Given that sulphuric acid was the British Navy’s main treatment for scurvy, this was a crucial finding.

The trial provided robust evidence for the powers of citrus because it created a credible counterfactual. The sailors didn’t choose their treatments, nor were they assigned based on the severity of their ailment. Instead, they were randomly allocated, making it likely that differences in their recovery were due to the treatment rather than other characteristics.
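That logic is worth making concrete. Below is a minimal simulation sketch, with entirely hypothetical recovery rates rather than Lind’s data, showing why random allocation lets a simple difference in means stand in for a counterfactual: each patient’s unobserved severity is balanced across treatment arms by construction.

```python
import random
random.seed(1747)  # the year of Lind's trial, for reproducibility

TREATMENTS = ["citrus", "cider", "sulphuric acid", "vinegar", "seawater", "nutmeg mix"]
# Hypothetical recovery probabilities -- illustrative only, not Lind's data.
EFFECT = {"citrus": 0.9, "cider": 0.3, "sulphuric acid": 0.1,
          "vinegar": 0.1, "seawater": 0.1, "nutmeg mix": 0.2}

def run_trial(n: int = 300) -> dict[str, float]:
    """Randomly assign n subjects to treatments and return mean recovery per arm."""
    outcomes: dict[str, list[int]] = {t: [] for t in TREATMENTS}
    for _ in range(n):
        severity = random.random()                 # unobserved patient trait
        arm = random.choice(TREATMENTS)            # assignment ignores severity
        p = EFFECT[arm] * (1.0 - 0.3 * severity)   # sicker patients recover less
        outcomes[arm].append(int(random.random() < p))
    # Because assignment is random, severity is balanced across arms, so
    # differences in mean recovery are attributable to the treatments.
    return {t: sum(v) / max(len(v), 1) for t, v in outcomes.items()}

for arm, rate in sorted(run_trial().items(), key=lambda kv: -kv[1]):
    print(f"{arm:>14}: {rate:.0%} recovered")
```

Run repeatedly, the citrus arm’s advantage persists while the severity distributions across arms remain statistically indistinguishable, which is exactly the property Lind’s random allocation bought him.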

Lind’s randomised trial, one of the first in history, has attained legendary status. Yet because 1747 was so long ago, it is easy to imagine that the methods he used are no longer applicable. After all, Lind’s research was conducted at a time before electricity, cars and trains, an era when slavery was rampant and education was reserved for the elite. Surely, some argue, ideas from such an age have been superseded today.

In place of randomised trials, some put their faith in ‘big data’. Between large-scale surveys and extensive administrative datasets, the world is awash in data as never before. Each day, hundreds of exabytes of data are produced. Big data has improved the accuracy of weather forecasts, permitted researchers to study social interactions across racial and ethnic lines, enabled the analysis of income mobility at a fine geographic scale and much more…(More)”

Mapmatics: A Mathematician’s Guide to Navigating the World


Book by Paulina Rowińska: “Why are coastlines and borders so difficult to measure? How does a UPS driver deliver hundreds of packages in a single day? And where do elusive serial killers hide? The answers lie in the crucial connection between maps and math.

In Mapmatics, mathematician Paulina Rowińska leads us on a riveting journey around the globe to discover how maps and math are deeply entwined, and always have been. From a sixteenth-century map, an indispensable navigation tool that exaggerates the size of northern countries, to public transport maps that both guide and confound passengers, to congressional maps that can empower or silence whole communities, she reveals how maps and math have shaped not only our sense of space but our worldview. In her hands, we learn how to read maps like a mathematician—to extract richer information and, just as importantly, to question our conclusions by asking what we don’t see…(More)”.
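The sixteenth-century map in question is Mercator’s projection, and the exaggeration it performs is easy to quantify: the projection stretches lengths by a factor of 1/cos(latitude), so apparent areas grow with the square of that factor. A quick sketch using the standard formula (place latitudes are rounded):

```python
import math

def mercator_area_exaggeration(latitude_deg: float) -> float:
    """Area inflation on a Mercator map relative to the equator.
    Linear scale at latitude phi is 1/cos(phi); area scales with its square."""
    return 1.0 / math.cos(math.radians(latitude_deg)) ** 2

for place, lat in [("Equator", 0.0), ("London", 51.5), ("Greenland (center)", 72.0)]:
    print(f"{place:>20}: x{mercator_area_exaggeration(lat):.1f}")
# Equator: x1.0, London: x2.6, Greenland: roughly x10.5
```

That factor of roughly ten is why Greenland looks comparable to Africa on classroom wall maps despite having about one-fourteenth of its true area.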

Orphan Articles: The Dark Matter of Wikipedia


Paper by Akhil Arora, Robert West, Martin Gerlach: “With 60M articles in more than 300 language versions, Wikipedia is the largest platform for open and freely accessible knowledge. While the available content has been growing continuously at a rate of around 200K new articles each month, very little attention has been paid to the accessibility of the content. One crucial aspect of accessibility is the integration of hyperlinks into the network so the articles are visible to readers navigating Wikipedia. In order to understand this phenomenon, we conduct the first systematic study of orphan articles, which are articles without any incoming links from other Wikipedia articles, across 319 different language versions of Wikipedia. We find that a surprisingly large extent of content, roughly 15% (8.8M) of all articles, is de facto invisible to readers navigating Wikipedia, and thus, rightfully term orphan articles as the dark matter of Wikipedia. We also provide causal evidence through a quasi-experiment that adding new incoming links to orphans (de-orphanization) leads to a statistically significant increase of their visibility in terms of the number of pageviews. We further highlight the challenges faced by editors for de-orphanizing articles, demonstrate the need to support them in addressing this issue, and provide potential solutions for developing automated tools based on cross-lingual approaches. Overall, our work not only unravels a key limitation in the link structure of Wikipedia and quantitatively assesses its impact, but also provides a new perspective on the challenges of maintenance associated with content creation at scale in Wikipedia…(More)”.
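The paper’s central definition is simple to operationalize: given the link graph, an orphan is an article with zero incoming links from other articles. A toy sketch of the computation (hypothetical titles; the study itself processed link tables for 319 language editions):

```python
# Toy link graph: article -> articles it links to (hypothetical titles).
links = {
    "Paris": ["France", "Seine"],
    "France": ["Paris", "Seine"],
    "Seine": ["France"],
    "Obscure Village": ["France"],   # links out, but nothing links in
}

def find_orphans(links: dict[str, list[str]]) -> set[str]:
    """Return articles with zero incoming links from *other* articles;
    self-links do not count, mirroring the paper's definition."""
    articles = set(links)
    in_degree = {a: 0 for a in articles}
    for source, targets in links.items():
        for target in targets:
            if target in articles and target != source:
                in_degree[target] += 1
    return {a for a, d in in_degree.items() if d == 0}

print(find_orphans(links))  # {'Obscure Village'}
```

At Wikipedia scale the same computation runs over database dumps of the link tables rather than an in-memory dict, but the definition is unchanged.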

Who Owns AI?


Paper by Amy Whitaker: “While artificial intelligence (AI) stands to transform artistic practice and creative industries, little has been theorized about who owns AI for creative work. Lawsuits brought against AI companies such as OpenAI and Meta under copyright law invite novel reconsideration of the value of creative work. This paper synthesizes across copyright, hybrid practice, and cooperative governance to work toward collective ownership and decision-making. This paper adds to research in arts entrepreneurship because copyright and shared value are so vital to the livelihood of working artists, including writers, filmmakers, and others in the creative industries. Sarah Silverman’s lawsuit against OpenAI is used as the main case study. The conceptual framework of material and machine, one and many, offers a lens onto value creation and shared ownership of AI. The framework includes a reinterpretation of the fourth factor of fair use under U.S. copyright law to refocus on the doctrinal language of value. AI uses the entirety of creative work in a way that is overlooked because of the small scale of one whole work relative to the overall size of the AI model. Yet a theory of value for creative work gives it dignity in its smallness, the way that one vote still has dignity in a national election of millions. As we navigate these frontiers of AI, experimental models pioneered by artists may be instructive far outside the arts…(More)”.