How Philanthropy Built, Lost, and Could Reclaim the A.I. Race
Article by Sara Herschander: “How do we know you won’t pull an OpenAI?”

It’s the question Stella Biderman has gotten used to answering when she seeks funding from major foundations for EleutherAI, her two-year-old nonprofit A.I. lab that has developed open-source artificial intelligence models.

The irony isn’t lost on her. Not long ago, she declined a deal dangled by one of Silicon Valley’s most prominent venture capitalists, who promised, with a snap of his fingers, to raise $100 million for the fledgling nonprofit lab — over 30 times EleutherAI’s current annual budget — if only the lab’s leaders would agree to drop its 501(c)(3) status.

In today’s A.I. gold rush, where tech giants spend billions on increasingly powerful models and top researchers command seven-figure salaries, to be a nonprofit A.I. lab is to be caught in a Catch-22: defend your mission to increasingly wary philanthropic funders or give in to temptation and become a for-profit company.

Philanthropy once played an outsize role in building major A.I. research centers and nurturing influential theorists — by donating hundreds of millions of dollars, largely to university labs — yet today those dollars are dwarfed by the billions flowing from corporations and venture capitalists. For tech nonprofits and their philanthropic backers, this has meant embracing a new role: pioneering the research and safeguards the corporate world won’t touch.

“If making a lot of money was my goal, that would be easy,” said Biderman, whose employees have seen their pay packages triple or quadruple after being poached by companies like OpenAI, Anthropic, and Google.

But EleutherAI doesn’t want to join the race to build ever-larger models. Instead, backed by grants from Open Philanthropy, Omidyar Network, and A.I. companies Hugging Face and StabilityAI, the group has carved out a different niche: researching how A.I. systems make decisions, maintaining widely used training datasets, and shaping global policy around A.I. safety and transparency…(More)”.