Artificial Intelligence, Scientific Discovery, and Product Innovation


Paper by Aidan Toner-Rodgers: “… studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of “idea-generation” tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization…(More)”.

Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance


Paper by Sumaya N. Adan et al: “Artificial intelligence (AI) is rapidly emerging as one of the most transformative technologies in human history, with the potential to profoundly impact all aspects of society globally. However, access to AI and participation in its development and governance is concentrated among a few countries with advanced AI capabilities, while the ‘Global AI Majority’ – defined as the population of countries primarily encompassing Africa, Latin America, South and Southeast Asia, and parts of Eastern Europe – is largely excluded. These regions, while diverse, share common challenges in accessing and influencing advanced AI technologies.

This white paper investigates practical remedies to increase voice in and access to AI governance and capabilities for the Global AI Majority, while addressing the security and commercial concerns of frontier AI states. We examine key barriers facing the Global AI Majority, including limited access to digital and compute infrastructure, power concentration in AI development, Anglocentric data sources, and skewed talent distributions. The paper also explores the dual-use dilemma of AI technologies and how it motivates frontier AI states to implement restrictive policies.

We evaluate a spectrum of AI development initiatives, ranging from domestic model creation to structured access to deployed models, assessing their feasibility for the Global AI Majority. To resolve governance dilemmas, we propose three key approaches: interest alignment, participatory architecture, and safety assurance…(More)”.

The Rise of AI-Generated Content in Wikipedia


Paper by Creston Brooks, Samuel Eggert, and Denis Peskoff: “The rise of AI-generated content in popular information sources raises significant concerns about accountability, accuracy, and bias amplification. Beyond directly impacting consumers, the widespread presence of this content poses questions for the long-term viability of training language models on vast internet sweeps. We use GPTZero, a proprietary AI detector, and Binoculars, an open-source alternative, to establish lower bounds on the presence of AI-generated content in recently created Wikipedia pages. Both detectors reveal a marked increase in AI-generated content in recent pages compared to those from before the release of GPT-3.5. With thresholds calibrated to achieve a 1% false positive rate on pre-GPT-3.5 articles, detectors flag over 5% of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics…(More)”
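
The calibration step the abstract describes (fix a score threshold so that only 1% of known human-written, pre-GPT-3.5 pages are flagged, then read the flag rate on newer pages as a lower bound) can be sketched as follows. The scores below are simulated for illustration; this is not the paper's code or data.

```python
import numpy as np

def calibrate_threshold(human_scores, target_fpr=0.01):
    """Choose the score threshold that flags at most `target_fpr` of
    known human-written articles (here: pre-GPT-3.5 pages)."""
    # Convention assumed: higher score = more likely AI-generated.
    return float(np.quantile(human_scores, 1.0 - target_fpr))

def flagged_fraction(scores, threshold):
    """Share of articles flagged -- a lower bound on AI-generated
    content, since detector recall is imperfect."""
    return float((np.asarray(scores) > threshold).mean())

# Simulated detector scores, for illustration only:
rng = np.random.default_rng(0)
pre_gpt35 = rng.beta(2, 8, size=10_000)   # human-written baseline
new_pages = rng.beta(2, 6, size=5_000)    # newer pages, shifted higher

t = calibrate_threshold(pre_gpt35)
print(f"threshold={t:.3f}, flagged={flagged_fraction(new_pages, t):.1%}")
```

Because the threshold is set against the human baseline, anything it flags beyond the 1% false-positive allowance is evidence of AI-generated text, which is why the paper reports its percentages as lower bounds.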

AI and Data Science for Public Policy


Introduction to Special Issue by Kenneth Benoit: “Artificial intelligence (AI) and data science are reshaping public policy by enabling more data-driven, predictive, and responsive governance, while at the same time producing profound changes in knowledge production and education in the social and policy sciences. These advancements come with ethical and epistemological challenges surrounding issues of bias, transparency, privacy, and accountability. This special issue explores the opportunities and risks of integrating AI into public policy, offering theoretical frameworks and empirical analyses to help policymakers navigate these complexities. The contributions explore how AI can enhance decision-making in areas such as healthcare, justice, and public services, while emphasising the need for fairness, human judgment, and democratic accountability. The issue provides a roadmap for harnessing AI’s potential responsibly, ensuring it serves the public good and upholds democratic values…(More)”.

Inside the New Nonprofit AI Initiatives Seeking to Aid Teachers and Farmers in Rural Africa


Article by Andrew R. Chow: “Over the past year, rural farmers in Malawi have been seeking advice about their crops and animals from a generative AI chatbot. These farmers ask questions in Chichewa, their native tongue, and the app, Ulangizi, responds in kind, using conversational language based on information taken from the government’s agricultural manual. “In the past we could wait for days for agriculture extension workers to come and address whatever problems we had on our farms,” Maron Galeta, a Malawian farmer, told Bloomberg. “Just a touch of a button we have all the information we need.”
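
The article does not detail Ulangizi's architecture, but answering questions from a fixed government manual is the classic retrieval-grounded pattern. The sketch below is a minimal, hypothetical illustration of that pattern (toy passages, word-overlap retrieval, and a stubbed-out model call), not Opportunity International's implementation.

```python
def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank manual passages by simple word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(passages,
                  key=lambda p: len(q_words & set(p.lower().split())),
                  reverse=True)[:k]

def answer(question: str, passages: list[str]) -> str:
    """Build a prompt that grounds the reply in retrieved manual text."""
    context = "\n".join(retrieve(question, passages))
    prompt = (f"Answer using only this excerpt from the agricultural "
              f"manual:\n{context}\n\nQuestion: {question}")
    # A real system would send `prompt` to a language model capable of
    # replying in Chichewa; here we just return the prompt for inspection.
    return prompt

manual = [
    "Rotate maize with legumes such as groundnuts to restore soil nitrogen.",
    "Vaccinate chickens against Newcastle disease every three months.",
]
print(answer("How do I keep my chickens healthy?", manual))
```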

The nonprofit behind the app, Opportunity International, hopes to bring similar AI-based solutions to other impoverished communities. In February, Opportunity ran an acceleration incubator for humanitarian workers across the world to pitch AI-based ideas and then develop them alongside mentors from institutions like Microsoft and Amazon. On October 30, Opportunity announced the three winners of this program: free-to-use apps that aim to help African farmers with crop and climate strategy, teachers with lesson planning, and school leaders with administration management. The winners will each receive about $150,000 in funding to pilot the apps in their communities, with the goal of reaching millions of people within two years. 

Greg Nelson, the CTO of Opportunity, hopes that the program will show the power of AI to level playing fields for those who previously faced barriers to accessing knowledge and expertise. “Since the mobile phone, this is the biggest democratizing change that we have seen in our lifetime,” he says…(More)”.

The Routledge Handbook of Artificial Intelligence and Philanthropy


Open Access Book edited by Giuseppe Ugazio and Milos Maricic: “…acts as a catalyst for the dialogue between two ecosystems with much to gain from collaboration: artificial intelligence (AI) and philanthropy. Bringing together leading academics, AI specialists, and philanthropy professionals, it offers a robust academic foundation for studying both how AI can be used and implemented within philanthropy and how philanthropy can guide the future development of AI in a responsible way.

The contributors to this Handbook explore various facets of the AI‑philanthropy dynamic, critically assess hurdles to increased AI adoption and integration in philanthropy, map the application of AI within the philanthropic sector, evaluate how philanthropy can and should promote an AI that is ethical, inclusive, and responsible, and identify the landscape of risk strategies for their limitations and/or potential mitigation. These theoretical perspectives are complemented by several case studies that offer a pragmatic perspective on diverse, successful, and effective AI‑philanthropy synergies.

As a result, this Handbook stands as a valuable academic reference capable of enriching the interactions of AI and philanthropy, uniting the perspectives of scholars and practitioners, thus building bridges between research and implementation, and setting the foundations for future research endeavors on this topic…(More)”.

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers


Article by Scharon Harding: “A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google’s AI Overview.

In May, Google launched AI Overviews in the US, an experimental feature that populates the top of Google Search results with a summarized answer generated by an AI model built into Google’s web rankings. When Google first debuted AI Overviews, it quickly became apparent that the feature needed work on accuracy and on properly summarizing information from online sources. AI Overviews are “built to only show information that is backed up by top web results,” Liz Reid, VP and head of Google Search, wrote in a May blog post. But as my colleague Benj Edwards pointed out at the time, that setup could contribute to inaccurate, misleading, or even dangerous results: “The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage.”

As Edwards alluded to, many have complained about the declining quality of Google Search results in recent years, as SEO spam and, more recently, AI slop float to the top of searches. As a result, people often turn to the Reddit hack to make Google results more helpful: by appending “site:reddit.com” to a search query, users can restrict results to Reddit and more easily find answers from real people. Google seems to understand the value of Reddit and signed an AI training deal with the company that’s reportedly worth $60 million per year…(More)”.
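
The operator itself is ordinary Google query syntax. A minimal illustration of the pattern (the example query and URL construction are our own, not from the article):

```python
from urllib.parse import urlencode

# The "site:" operator restricts results to a single domain, here Reddit.
query = "best pho in london site:reddit.com"  # hypothetical example query
print("https://www.google.com/search?" + urlencode({"q": query}))
```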

South Korea leverages open government data for AI development


Article by Si Ying Thian: “In South Korea, open government data is powering artificial intelligence (AI) innovations in the private sector.

Take the case of TTCare, which may be the world’s first mobile application to analyse eye and skin disease symptoms in pets.

[Image: AI Hub’s search interface, which lets users filter by industry, data format, and year (top row), with datasets returned for the search term “pet” (bottom half of the page). Image: AI Hub, courtesy of Baek.]

The AI model was trained on about one million pieces of data – half of the data coming from the government-led AI Hub and the rest collected by the firm itself, according to the Korean newspaper Donga.

AI Hub is an integrated platform set up by the government to support the country’s AI infrastructure.

TTCare’s CEO Heo underlined the importance of government-led AI training data in improving the model’s ability to diagnose symptoms. The firm’s training data is currently accessible through AI Hub, and any Korean citizen can download or use it.

Pushing the boundaries of open data

Over the years, South Korea has consistently ranked at the top of the OECD’s Open, Useful, and Re-usable data (OURdata) Index.

The government has been pushing the boundaries of what it can do with open data – beyond just making data usable by providing APIs. Application Programming Interfaces, or APIs, make it easier for users to tap on open government data to power their apps and services.
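
A typical open-data API call follows the pattern sketched below. The endpoint, parameters, and response shape are placeholders for illustration, not the National Open Data Portal’s actual schema; each service on the portal documents its own.

```python
import requests

# Hypothetical endpoint and parameters, for illustration only: real services
# on South Korea's National Open Data Portal define their own schemas and
# issue per-user service keys.
BASE = "https://api.example.go.kr/pet-disease/v1/records"

resp = requests.get(
    BASE,
    params={"serviceKey": "YOUR_SERVICE_KEY", "species": "dog",
            "numOfRows": 10, "type": "json"},
    timeout=10,
)
resp.raise_for_status()
for record in resp.json().get("items", []):  # assumed response shape
    print(record)
```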

There is now rising interest from public sector agencies to tap on such data to train AI models, said South Korea’s National Information Society Agency (NIA)’s Principal Manager, Dongyub Baek, although this is still at an early stage.

Baek sits in NIA’s open data department, which handles policies, infrastructure such as the National Open Data Portal, as well as impact assessments of the government initiatives…(More)”

Trust in artificial intelligence makes Trump/Vance a transhumanist ticket


Article by Filip Bialy: “AI plays a central role in the 2024 US presidential election, both as a tool for disinformation and as a key policy issue. But its significance extends beyond these, connecting to an emerging ideology known as TESCREAL, which envisages AI as a catalyst for unprecedented progress, including space colonisation. After this election, TESCREALism may well have more than one representative in the White House, writes Filip Bialy.

In June 2024, the essay Situational Awareness by former OpenAI employee Leopold Aschenbrenner sparked intense debate in the AI community. The author predicted that by 2027, AI would surpass human intelligence. Such claims are common among AI researchers. They often assert that only a small elite – mainly those working at companies like OpenAI – possesses inside knowledge of the technology. Many in this group hold a quasi-religious belief in the imminent arrival of artificial general intelligence (AGI) or artificial superintelligence (ASI)…

These hopes and fears, however, are not only religious-like but also ideological. A decade ago, Silicon Valley leaders were still associated with the so-called Californian ideology, a blend of hippie counterculture and entrepreneurial yuppie values. Today, figures like Elon Musk, Mark Zuckerberg, and Sam Altman are under the influence of a new ideological cocktail: TESCREAL. Coined in 2023 by Timnit Gebru and Émile P. Torres, TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

While these may sound like obscure terms, they represent ideas developed over decades, with roots in eugenics. Early 20th-century eugenicists such as Francis Galton promoted selective breeding to enhance future generations. Later, with advances in genetic engineering, the focus shifted from eugenics’ racist origins to its potential to eliminate genetic defects. TESCREAL represents a third wave of eugenics. It aims to digitise human consciousness and then propagate digital humans into the universe…(More)”

Open-Access AI: Lessons From Open-Source Software


Article by Parth Nobel, Alan Z. Rozenshtein, and Chinmayi Sharma: “Before analyzing how the lessons of open-source software might (or might not) apply to open-access AI, we need to define our terms and explain why we use the term “open-access AI” to describe models like Llama rather than the more commonly used “open-source AI.” We join many others in arguing that “open-source AI” is a misnomer for such models. It’s misleading to fully import the definitional elements and assumptions that apply to open-source software when talking about AI. Rhetoric matters, and the distinction isn’t just semantic; it’s about acknowledging the meaningful differences in access, control, and development. 

The software industry definition of “open source” grew out of the free software movement, which makes the point that “users have the freedom to run, copy, distribute, study, change and improve” software. As the movement emphasizes, one should “think of ‘free’ as in ‘free speech,’ not as in ‘free beer.’” What’s “free” about open-source software is that users can do what they want with it, not that they initially get it for free (though much open-source software is indeed distributed free of charge). This concept is codified by the Open Source Initiative as the Open Source Definition (OSD), many aspects of which directly apply to Llama 3.2. Llama 3.2’s license makes it freely redistributable by license holders (Clause 1 of the OSD) and allows the distribution of the original models, their parts, and derived works (Clauses 3, 7, and 8)…(More)”.