Article by R. Trebor Scholz & Mark Esposito: “The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. Many of these workers report severe psychological harm, including depression, anxiety, and post-traumatic stress disorder. Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies but symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.
These cases are a stark reminder that when technological systems are designed solely for extraction and efficiency, they isolate and break the people who sustain them. As artificial intelligence (AI) accelerates, we face a similar precipice. These extractive logics will scale globally, further concentrating power at the top, unless we deliberately choose to build a fundamentally different system…(More)”.