Paper by Pablo Villalobos: We investigate the potential constraints on LLM scaling posed by the availability of public human-generated text data. We forecast the growing demand for training data based on current trends and estimate the total stock of public human text data. Our findings indicate that if current LLM development trends continue, models will be trained on datasets roughly equal in size to the available stock of public human text data between 2026 and 2032, or slightly earlier if models are overtrained. We explore how progress in language modeling can continue when human-generated text datasets cannot be scaled any further. We argue that synthetic data generation, transfer learning from data-rich domains, and data efficiency improvements might support further progress…(More)”.
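The abstract's core claim (datasets matching the public text stock "between 2026 and 2032") is an extrapolation exercise: project exponential growth in training-set size and find when it crosses a fixed data stock. The sketch below illustrates that style of reasoning only; the dataset size, growth rate, and stock figure are assumed round numbers for illustration, not the paper's actual estimates.

```python
# Illustrative extrapolation only -- NOT the paper's model or numbers.
# Assumed (hypothetical) parameters:
STOCK_TOKENS = 300e12   # assumed total stock of public human text (tokens)
D0 = 15e12              # assumed 2024 training-dataset size (tokens)
GROWTH = 2.2            # assumed yearly growth factor in dataset size

year, dataset = 2024, D0
while dataset < STOCK_TOKENS:
    year += 1
    dataset *= GROWTH  # compound growth: dataset size next year

print(f"Under these assumptions, datasets reach the stock around {year}.")
```

With these toy parameters the crossover lands in 2028, inside the paper's 2026-2032 window; shifting the assumed growth rate or stock moves the date by a few years, which is why the paper reports a range rather than a point estimate.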