UNESCO Report: “Generative Artificial Intelligence (Gen AI) has become an integral part of our digital landscape and daily life. Understanding its risks and participating in solutions is crucial to ensuring that it works for the overall social good. This PLAYBOOK introduces Red Teaming as an accessible tool for testing and evaluating AI systems for social good, exposing stereotypes, bias and potential harms. As a way of illustrating harms, practical examples of Red Teaming for social good are provided, building on the collaborative work carried out by UNESCO and Humane Intelligence. The results demonstrate forms of technology-facilitated gender-based violence (TFGBV) enabled by Gen AI and provide practical actions and recommendations on how to address these growing concerns.
Red Teaming — the practice of intentionally testing Gen AI models to expose vulnerabilities — has traditionally been used by major tech companies and AI labs. One tech company surveyed 1,000 machine learning engineers and found that 89% reported vulnerabilities (Aporia, 2024). This PLAYBOOK provides access to these critical testing methods, enabling organizations and communities to actively participate. Through the structured exercises and real-world scenarios provided, participants can systematically evaluate how Gen AI models may perpetuate, either intentionally or unintentionally, stereotypes or enable gender-based violence.By providing organizations with this easy-to-use tool to conduct their own Red Teaming exercises, participants can select their own thematic area of concern, enabling evidence-based advocacy for more equitable AI for social good…(More)”.