Paper by Zhen-Song Chen and Zheng Ma: “Ensuring fair and accurate hiring outcomes is critical for both job seekers’ economic opportunities and organizational development. This study addresses the challenge of mitigating biases in AI-powered resume screening systems by leveraging crowd intelligence, thereby enhancing problem-solving efficiency and decision-making quality. We propose a novel counterfactual resume-annotation method based on a causal model to capture and correct biases from human resource (HR) representatives, providing robust ground truth data for supervised machine learning. The proposed model integrates multiple language embedding models and diverse HR-labeled data to train a cohort of resume-screening agents. By training 60 such agents with different models and data, we harness their crowd intelligence to optimize for three objectives: accuracy, fairness, and a balance of both. Furthermore, we develop a binary bias-detection model to visualize and analyze gender bias in both human and machine outputs. The results suggest that harnessing crowd intelligence using both accuracy and fairness objectives helps AI systems robustly output accurate and fair results. By contrast, a sole focus on accuracy may lead to severe fairness degradation, while, conversely, a sole focus on fairness leads to a relatively minor loss of accuracy. Our findings underscore the importance of balancing accuracy and fairness in AI-powered resume-screening systems to ensure equitable hiring outcomes and foster inclusive organizational development…(More)”
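The crowd-intelligence idea described in the abstract — many screening agents combined under accuracy, fairness, or balanced objectives — can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's actual models or data: the agents are simple threshold rules on synthetic resumes, fairness is measured as a demographic-parity gap, and aggregation is a weighted majority vote.

```python
# Toy sketch (hypothetical, not the paper's method): weight a cohort of
# resume-screening agents by accuracy, by fairness, or by a balance of both.
import random

random.seed(0)

# Synthetic "resumes": (score_feature, gender, true_label) -- illustrative only.
data = [(random.random(), random.choice("MF"), random.randint(0, 1))
        for _ in range(200)]

def make_agent(threshold, gender_bias):
    """An agent screens a resume; gender_bias shifts its threshold for one group."""
    def agent(x, g):
        t = threshold + (gender_bias if g == "F" else 0.0)
        return 1 if x >= t else 0
    return agent

# A cohort of 60 agents, mirroring the abstract's count; parameters are random.
agents = [make_agent(random.uniform(0.3, 0.7), random.uniform(-0.2, 0.2))
          for _ in range(60)]

def accuracy(agent):
    """Fraction of synthetic resumes the agent labels correctly."""
    return sum(agent(x, g) == y for x, g, y in data) / len(data)

def parity_gap(agent):
    """Absolute gap in positive-screening rates between gender groups."""
    rates = {}
    for grp in "MF":
        grp_data = [(x, g) for x, g, _ in data if g == grp]
        rates[grp] = sum(agent(x, g) for x, g in grp_data) / len(grp_data)
    return abs(rates["M"] - rates["F"])

def crowd_decision(x, g, weights):
    """Weighted majority vote across the agent cohort."""
    total = sum(w * a(x, g) for a, w in zip(agents, weights))
    return 1 if total >= sum(weights) / 2 else 0

# Three objectives, loosely echoing the abstract: accuracy-only,
# fairness-only, and a 50/50 balance of both.
w_acc = [accuracy(a) for a in agents]
w_fair = [1.0 - parity_gap(a) for a in agents]
w_bal = [0.5 * wa + 0.5 * wf for wa, wf in zip(w_acc, w_fair)]
```

The balanced weighting `w_bal` is the configuration the abstract reports as most robust: agents that are accurate but strongly gender-skewed get down-weighted, so the crowd's vote degrades fairness less than an accuracy-only objective would.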