AI-accelerated Nazca survey nearly doubles the number of known figurative geoglyphs and sheds light on their purpose


Paper by Masato Sakai, Akihisa Sakurai, Siyuan Lu, and Marcus Freitag: “It took nearly a century to discover a total of 430 figurative Nazca geoglyphs, which offer significant insights into the ancient cultures at the Nazca Pampa. Here, we report the deployment of an AI system to the entire Nazca region, a UNESCO World Heritage site, leading to the discovery of 303 new figurative geoglyphs within only 6 mo of field survey, nearly doubling the number of known figurative geoglyphs. Even with limited training examples, the developed AI approach is demonstrated to be effective in detecting the smaller relief-type geoglyphs, which unlike the giant line-type geoglyphs are very difficult to discern. The improved account of figurative geoglyphs enables us to analyze their motifs and distribution across the Nazca Pampa. We find that relief-type geoglyphs depict mainly human motifs or motifs of things modified by humans, such as domesticated animals and decapitated heads (81.6%). They are typically located within viewing distance (on average 43 m) of ancient trails that crisscross the Nazca Pampa and were most likely built and viewed at the individual or small-group level. On the other hand, the giant line-type figurative geoglyphs mainly depict wild animals (64%). They are found an average of 34 m from the elaborate linear/trapezoidal network of geoglyphs, which suggests that they were probably built and used on a community level for ritual activities…(More)”
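The abstract does not detail the detection pipeline, but a common first stage for spotting small relief-type figures in vast aerial imagery is to cut the survey mosaic into overlapping tiles and score each tile with a classifier. Below is a minimal sketch of that tiling stage only; the tile size, stride, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def tile_image(image, tile=64, stride=32):
    """Cut a 2D aerial image into overlapping tiles for per-tile scoring.

    Overlap (stride < tile) reduces the chance that a small geoglyph
    is split across a tile boundary and missed by the classifier.
    """
    h, w = image.shape
    tiles, coords = [], []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
            coords.append((y, x))  # top-left corner of each tile
    return np.stack(tiles), coords

# Example: a 256x256 mosaic yields a 7x7 grid of 64x64 tiles at stride 32.
img = np.zeros((256, 256))
tiles, coords = tile_image(img)
```

In a full pipeline, each tile would be passed to a trained classifier and high-scoring coordinates flagged for field verification, which is how a small number of training examples can still prioritize survey effort.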

The Age of AI Nationalism and Its Effects


Paper by Susan Ariel Aaronson: “Policy makers in many countries are determined to develop artificial intelligence (AI) within their borders because they view AI as essential to both national security and economic growth. Some countries have proposed adopting AI sovereignty, where the nation develops AI for its people, by its people and within its borders. In this paper, the author makes a distinction between policies designed to advance domestic AI and policies that, with or without direct intent, hamper the production or trade of foreign-produced AI (known as “AI nationalism”). AI nationalist policies in one country can make it harder for firms in another country to develop AI. If officials can limit access to key components of the AI supply chain, such as data, capital, expertise or computing power, they may be able to limit the AI prowess of competitors in country Y and/or Z. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. AI nationalism may seem appropriate given the import of AI, but this paper aims to illuminate how AI nationalistic policies may backfire and could divide the world into AI haves and have nots…(More)”.

We are Developing AI at the Detriment of the Global South — How a Focus on Responsible Data Re-use Can Make a Difference


Article by Stefaan Verhulst and Peter Addo: “…At the root of this debate runs a frequent concern with how data is collected, stored, used, and responsibly reused for purposes other than those for which it was initially collected…

In this article, we propose that promoting responsible reuse of data requires addressing the power imbalances inherent in the data ecology. These imbalances disempower key stakeholders, thereby undermining trust in data management practices. As we recently argued in a report on “responsible data reuse in developing countries,” prepared for Agence Française de Développement (AFD), power imbalances may be particularly pernicious when considering the use of data in the Global South. Addressing these requires broadening notions of consent, beyond current highly individualized approaches, in favor of what we instead term a social license for reuse.

In what follows, we explain what a social license means, and propose three steps to help achieve that goal. We conclude by calling for a new research agenda — one that would stretch existing disciplinary and conceptual boundaries — to reimagine what social licenses might mean, and how they could be operationalized…(More)”.

The ABC’s of Who Benefits from Working with AI: Ability, Beliefs, and Calibration


Paper by Andrew Caplin: “We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does…(More)”.
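Caplin's experimental design is not reproduced here, but the interaction between ability and calibration can be illustrated with a toy delegation model: a worker hands a task to the AI only when they believe the AI outperforms them, so accurate self-beliefs determine whether low-ability workers actually capture the AI's advantage. All numbers and function names below are illustrative assumptions:

```python
def performance_with_ai(ability, belief, ai_accuracy=0.7):
    """Expected accuracy when a worker may delegate to an AI.

    The worker delegates whenever they believe their own accuracy is
    below the AI's; otherwise they answer themselves, at true ability.
    """
    return ai_accuracy if belief < ai_accuracy else ability

def gain_from_ai(ability, belief, ai_accuracy=0.7):
    """Performance improvement from having AI assistance available."""
    return performance_with_ai(ability, belief, ai_accuracy) - ability

# A calibrated low-ability worker (true and believed ability 0.5)
# delegates and gains 0.2. A miscalibrated low-ability worker who
# believes they are at 0.9 keeps the task and gains nothing, as does
# a high-ability worker who correctly never delegates.
```

Under this sketch, eliminating miscalibration raises performance only among low-ability workers, which is the mechanism behind the paper's finding that calibration would amplify AI's reduction of performance inequality.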

First-of-its-kind dataset connects greenhouse gases and air quality


NOAA Research: “The GReenhouse gas And Air Pollutants Emissions System (GRA²PES), from NOAA and the National Institute of Standards and Technology (NIST), combines information on greenhouse gas and air quality pollutant sources into a single national database, offering innovative interactive map displays and new benefits for both climate and public health solutions.

A new U.S.-based system to combine air quality and greenhouse gas pollution sources into a single national research database is now available in the U.S. Greenhouse Gas Center portal. This geospatial data allows leaders at city, state, and regional scales to more easily identify and take steps to address air quality issues while reducing climate-related hazards for populations.

The dataset is the GReenhouse gas And Air Pollutants Emissions System (GRA²PES). A research project developed by NOAA and NIST, GRA²PES captures monthly greenhouse gas (GHG) emissions activity for multiple economic sectors to improve measurement and modeling for both GHG and air pollutants across the contiguous U.S.

Having the GHG and air quality constituents in the same dataset will be exceedingly helpful, said Columbia University atmospheric scientist Roisin Commane, the lead on a New York City project to improve emissions estimates…(More)”.

As AI-powered health care expands, experts warn of biases


Article by Marta Biino: “Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.

It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. While AI has long been used in radiology for image analysis, or in oncology to classify skin lesions, for example, its applications are growing as the technology continues to advance.

OpenAI’s GPT models have outperformed humans in making cancer diagnoses based on MRI reports and beat PhD-holders in standardized science tests, to name a few.

However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients…(More)”.

How The New York Times incorporates editorial judgment in algorithms to curate its home page


Article by Zhen Yang: “Whether on the web or the app, the home page of The New York Times is a crucial gateway, setting the stage for readers’ experiences and guiding them to the most important news of the day. The Times publishes over 250 stories daily, far more than the 50 to 60 stories that can be featured on the home page at a given time. Traditionally, editors have manually selected and programmed which stories appear, when and where, multiple times daily. This manual process presents challenges:

  • How can we provide readers a relevant, useful, and fresh experience each time they visit the home page?
  • How can we make our editorial curation process more efficient and scalable?
  • How do we maximize the reach of each story and expose more stories to our readers?

To address these challenges, the Times has been actively developing and testing editorially driven algorithms to assist in curating home page content. These algorithms are editorially driven in that a human editor’s judgment or input is incorporated into every aspect of the algorithm — including deciding where on the home page the stories are placed, informing the rankings, and potentially influencing and overriding algorithmic outputs when necessary. From the get-go, we’ve designed algorithmic programming to elevate human curation, not to replace it…

The Times began using algorithms for content recommendations in 2011 but only recently started applying them to home page modules. For years, we only had one algorithmically-powered module, “Smarter Living,” on the home page, and later, “Popular in The Times.” Both were positioned relatively low on the page.

Three years ago, the formation of a cross-functional team — including newsroom editors, product managers, data scientists, data analysts, and engineers — brought the momentum needed to advance our responsible use of algorithms. Today, nearly half of the home page is programmed with assistance from algorithms that help promote news, features, and sub-brand content, such as The Athletic and Wirecutter. Some of these modules, such as the features module located at the top right of the home page on the web version, are in highly visible locations. During major news moments, editors can also deploy algorithmic modules to display additional coverage to complement a main module of stories near the top of the page. (The topmost news package of Figure 1 is an example of this in action.)…(More)”
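The Times does not publish its ranking code; the following is a hypothetical sketch of the general pattern the article describes, in which an algorithmic score does the ordering while editorial input (pinned slots, per-story boosts) reshapes or overrides the output. The `pins` and `boosts` parameters are assumptions for illustration, not the Times's actual interface:

```python
def rank_stories(stories, pins=None, boosts=None, slots=5):
    """Rank candidate stories by score, then apply editorial input.

    Pinned story IDs take the top positions in the order editors chose;
    boosts adjust individual scores before the algorithmic ranking runs.
    """
    pins = pins or []
    boosts = boosts or {}
    scored = sorted(
        (s for s in stories if s["id"] not in pins),
        key=lambda s: s["score"] + boosts.get(s["id"], 0.0),
        reverse=True,
    )
    pinned = [s for pid in pins for s in stories if s["id"] == pid]
    return (pinned + scored)[:slots]

stories = [
    {"id": "a", "score": 0.9},
    {"id": "b", "score": 0.5},
    {"id": "c", "score": 0.7},
]
page = rank_stories(stories, pins=["b"], boosts={"c": 0.3})
# "b" is pinned first; "c" outranks "a" after its editorial boost.
```

The design choice this illustrates is the one the article emphasizes: the algorithm proposes an ordering, but every lever (placement, ranking inputs, overrides) remains in editors' hands.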

Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers


Article by Joseph Cox: “A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members. 

The project is designed to raise awareness of what is possible with this technology, and the pair are not releasing their code, AnhPhu Nguyen, one of the creators, told 404 Media. But the experiment, tested in some cases on unsuspecting people in the real world according to a demo video, still shows the razor-thin line between a world in which people can move around with relative anonymity and one where your identity and personal information can be pulled up in an instant by strangers.

Nguyen and co-creator Caine Ardayfio call the project I-XRAY. It uses a pair of Meta’s commercially available Ray-Ban smart glasses, and allows a user to “just go from face to name,” Nguyen said…(More)”.

From Bits to Biology: A New Era of Biological Renaissance powered by AI


Article by Milad Alucozai: “…A new wave of platforms is emerging to address these limitations. Designed with the modern scientist in mind, these platforms prioritize intuitive interfaces, enabling researchers with diverse computational backgrounds to easily navigate and analyze data. They emphasize collaboration, allowing teams to share data and insights seamlessly. And they increasingly incorporate artificial intelligence, offering powerful tools for accelerating analysis and discovery. This shift marks a move towards more user-centric, efficient, and collaborative computational biology, empowering researchers to tackle increasingly complex biological questions. 

Emerging Platforms: 

  • Seqera Labs: Spearheading a movement towards efficient and reproducible research, Seqera Labs provides a suite of tools, including the popular open-source workflow language Nextflow. Their platform empowers researchers to design scalable and reproducible data analysis pipelines, particularly for cloud environments. Seqera streamlines complex computational workflows across diverse biological disciplines by emphasizing automation and flexibility, making data-intensive research scalable, flexible, and collaborative. 
  • Form Bio: Aimed at democratizing access to computational biology, Form Bio provides a comprehensive tech suite built to enable accelerated cell and gene therapy development and computational biology at scale. Its emphasis on collaboration and intuitive design fosters a more inclusive research environment to help organizations streamline therapeutic development and reduce time-to-market.  
  • Code Ocean: Addressing the critical need for reproducibility in research, Code Ocean provides a unique platform for sharing and executing research code, data, and computational environments. By encapsulating these elements in a portable and reproducible format, Code Ocean promotes transparency and facilitates the reuse of research methods, ultimately accelerating scientific discovery. 
  • Pluto Biosciences: Championing a collaborative approach to biological discovery, Pluto Biosciences offers an interactive platform for visualizing and analyzing complex biological data. Its intuitive tools empower researchers to explore data, generate insights, and seamlessly share findings with collaborators. This fosters a more dynamic and interactive research process, facilitating knowledge sharing and accelerating breakthroughs. 

 Open Source Platforms: 

  • Galaxy: A widely used open-source platform for bioinformatics analysis. It provides a user-friendly web interface and a vast collection of tools for various tasks, from sequence analysis to data visualization. Its open-source nature fosters community development and customization, making it a versatile tool for diverse research needs. 
  • Bioconductor: A prominent open-source platform for bioinformatics analysis that shares Galaxy’s commitment to accessibility and community-driven development. It leverages the power of the R programming language, providing a wealth of packages for tasks ranging from genomic data analysis to statistical modeling. Its open-source nature fosters a collaborative environment where researchers can freely access, utilize, and contribute to a growing collection of tools…(More)”

Mapmatics: A Mathematician’s Guide to Navigating the World


Book by Paulina Rowińska: “Why are coastlines and borders so difficult to measure? How does a UPS driver deliver hundreds of packages in a single day? And where do elusive serial killers hide? The answers lie in the crucial connection between maps and math.

In Mapmatics, mathematician Paulina Rowińska leads us on a riveting journey around the globe to discover how maps and math are deeply entwined, and always have been. From a sixteenth-century map, an indispensable navigation tool that exaggerates the size of northern countries, to public transport maps that both guide and confound passengers, to congressional maps that can empower or silence whole communities, she reveals how maps and math have shaped not only our sense of space but our worldview. In her hands, we learn how to read maps like a mathematician—to extract richer information and, just as importantly, to question our conclusions by asking what we don’t see…(More)”.
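The sixteenth-century map Rowińska alludes to is presumably Mercator's 1569 projection, whose distortion follows from a single formula: at latitude φ, the map stretches distances by sec φ, and therefore areas by sec²φ. A quick illustration:

```python
import math

def mercator_linear_scale(lat_deg):
    """Linear scale factor of the Mercator projection at a given latitude.

    Distances are stretched by sec(latitude), so areas are inflated
    by sec(latitude) squared relative to the equator.
    """
    return 1.0 / math.cos(math.radians(lat_deg))

# At the equator the map is true to scale. At 60 N lengths double.
# Near the middle of Greenland (~72 N) lengths are stretched roughly
# 3.2x, so its area appears roughly 10x too large.
```

This is why Greenland looks comparable to Africa on a Mercator world map despite being about one-fourteenth its actual area: the projection preserves angles for navigation at the cost of area.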