Article by Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh: “From the esoteric worlds of predictive health care and cybersecurity to Google’s e-mail completion and translation apps, the impacts of AI are increasingly being felt in our everyday lived experience. The way it has crept into our lives in such diverse ways, and its proficiency in low-level knowledge tasks, shows that AI is here to stay. But like any helpful new tool, it has notable flaws and consequences that come with blindly adopting it.
AI is a tool—not a cure-all to modern problems….
Connecterra is trying to use TensorFlow to address global hunger through AI-enabled efficient farming and sustainable food development. The company uses AI-equipped sensors to track cattle health, helping farmers spot signs of illness early on. But this only benefits one type of farmer: those rearing cattle who can afford to outfit their entire herd with devices. Applied this way, AI can only improve the productivity of specific resource-intensive dairy farms and is unlikely to meet Connecterra’s goal of ending world hunger.
This solution, and others like it, ignores the wider social context of AI’s application. The belief that AI is a cure-all that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous, as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can reasonably be applied.
Challenges with AI are exacerbated because these tools often come to the public as “black boxes”—easy to use but entirely opaque in nature. This shields users from understanding what biases and risks may be involved, and this lack of public understanding of AI tools and their limitations is a serious problem. We shouldn’t put our complete trust in programs whose workings even their creators cannot interpret. These poorly understood conclusions from AI create risk for individual users, companies, and government projects where these tools are used.
With AI’s pervasiveness and the slow pace of policy change, where do we go from here? We need a more rigorous system in place to evaluate and manage risk for AI tools….(More)”.