Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. At the same time, if not well considered, automation can also generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks for detecting and evaluating possible harms are one dominant approach, as is a growing body of AI audit frameworks, with concomitant emerging governmental and organisational regulatory settings, and associated professionals.
The research outlined in this report took a different approach. Building on work in social services on trauma-informed practice, researchers identified key principles and a practical framework that framed AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly of people who have experienced trauma. This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.
This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation, design, development, piloting, deployment, or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.