Explore our articles
View All Results
artificial intelligence, DATA
Share:

OpenAI is huge in India. Its models are steeped in caste bias.

Article by Nilesh Christopher: “When Dhiraj Singha began applying for postdoctoral sociology fellowships in Bengaluru, India, in March, he wanted to make sure the English in his application was pitch-perfect. So he turned to ChatGPT.

He was surprised to see that in addition to smoothing out his language, it changed his identity—swapping out his surname for “Sharma,” which is associated with privileged high-caste Indians. Though his application did not mention his last name, the chatbot apparently interpreted the “s” in his email address as Sharma rather than Singha, which signals someone from the caste-oppressed Dalits.

“The experience [of AI] actually mirrored society,” Singha says. 

Singha says the swap reminded him of the sorts of microaggressions he’s encountered when dealing with people from more privileged castes. Growing up in a Dalit neighborhood in West Bengal, India, he felt anxious about his surname, he says. Relatives would discount or ridicule his ambition of becoming a teacher, implying that Dalits were unworthy of a job intended for privileged castes. Through education, Singha overcame the internalized shame, becoming a first-generation college graduate in his family. Over time he learned to present himself confidently in academic circles.

But this experience with ChatGPT brought all that pain back. “It reaffirms who is normal or fit to write an academic cover letter,” Singha says, “by considering what is most likely or most probable.”

Singha’s experience is far from unique. An MIT Technology Review investigation finds that caste bias is rampant in OpenAI’s products, including ChatGPT. Though CEO Sam Altman boasted during the launch of GPT-5 in August that India was its second-largest market, we found that both this new model, which now powers ChatGPT, and Sora, OpenAI’s text-to-video generator, exhibit caste bias. This risks entrenching discriminatory views in ways that are currently going unaddressed. 

Working closely with Jay Chooi, a Harvard undergraduate AI safety researcher, we developed a test inspired by AI fairness studies conducted by researchers from the University of Oxford and New York University, and we ran the tests through Inspect, a framework for AI safety testing developed by the UK AI Security Institute…(More)”.

Share
How to contribute:

Did you come across – or create – a compelling project/report/book/app at the leading edge of innovation in governance?

Share it with us at info@thelivinglib.org so that we can add it to the Collection!

About the Curator

Get the latest news right in you inbox

Subscribe to curated findings and actionable knowledge from The Living Library, delivered to your inbox every Friday

Related articles

Get the latest news right in you inbox

Subscribe to curated findings and actionable knowledge from The Living Library, delivered to your inbox every Friday