Paper by John Wihbey: “As advanced artificial intelligence (AI) technologies are developed and deployed, core zones of information and knowledge that support democratic life will be mediated more comprehensively by machines. Chatbots and AI agents may structure most internet, media, and public informational domains. What humans believe to be true and worthy of attention – what becomes public knowledge – may increasingly be influenced by the judgments of advanced AI systems. This pattern will present profound challenges to democracy. A dynamic of what we might consider ‘epistemic risk’ will threaten the possibility of AI’s ethical alignment with human values. AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, that knowledge may increasingly become a recursive byproduct of AI itself – built on what we might call ‘epistemic anachronism.’ This paper argues that epistemic capture or lock-in, and a corresponding loss of autonomy, are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. The pathway forward for achieving any vision of ethical and responsible AI in the context of democracy requires an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values…(More)” – See also: Steering Responsible AI: A Case for Algorithmic Pluralism