Ali Lange at CDT: “Digital technology has empowered new voices, made the world more accessible, and increased the speed of almost every decision we make as businesses, communities, and individuals. Much of this convenience is powered by lines of code that rapidly execute instructions based on rules set by programmers (or, in the case of machine learning, generated from statistical correlations in massive datasets)—otherwise known as algorithms.

The technology that drives our automated world is sophisticated and obscure, making it difficult to determine how the decisions made by automated systems might fairly or unfairly, positively or negatively, affect individuals. It also makes it harder to identify where bias may inadvertently arise. Algorithmically driven outcomes are influenced, but not exclusively determined, by technical and legal limitations. The landscape of algorithmic decision-making is also shaped by policy choices within technology companies and by government agencies. Some automated systems create positive outcomes for individuals, and some threaten a fair society.

By examining a few case studies and drawing out the prevailing policy principles, we can reach conclusions about how to critically approach the existing web of automated decision-making. Before considering these specific examples, we present a summary of the policy debate around data-driven decisions to give context to the examples raised. Then we analyze three case studies from diverse industries to determine what policy interventions might be applied more broadly to encourage positive outcomes and prevent the risk of discrimination…(More)”