Rule of the Robots


Book by Martin Ford: “If you have a smartphone, you have AI in your pocket. AI is impossible to avoid online. And it has already changed everything from how doctors diagnose disease to how you interact with friends or read the news. But in Rule of the Robots, Martin Ford argues that the true revolution is yet to come. 

In this sequel to his prescient New York Times bestseller Rise of the Robots, Ford presents us with a striking vision of the very near future. He argues that AI is a uniquely powerful technology that is altering every dimension of human life, often for the better. For example, advanced science is being done by machines, solving devilish problems in molecular biology that humans could not, and AI can help us fight climate change or the next pandemic. It also has a capacity for profound harm. Deep fakes—AI-generated audio or video of events that never happened—are poised to cause havoc throughout society. AI empowers authoritarian regimes like China with unprecedented mechanisms for social control. And AI can be deeply biased, learning bigoted attitudes from us and perpetuating them. 

In short, this is not a technology to simply embrace, or let others worry about. The machines are coming, and they won’t stop, and each of us needs to know what that means if we are to thrive in the twenty-first century. And Rule of the Robots is the essential guide to all of it: both AI and the future of our economy, our politics, our lives…(More)”.

Enrollment algorithms are contributing to the crises of higher education


Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges tailor the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.

Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.

This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood to enroll in either the admissions process or in awarding need-based aid—these determinations should only be made based on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations. Colleges should also ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.
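
To make the mechanics concrete, below is a minimal sketch of the two-stage process Engler describes: a model trained on past applicants predicts each prospective student’s probability of enrolling, and a second step spends a fixed scholarship budget on the applicants whose predicted enrollment it can move the most. The features, the fixed “lift” per offer, and the greedy allocation are invented for illustration; real enrollment management systems are proprietary and far more elaborate.

```python
# A hypothetical sketch of the two-stage process, not any vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: predict enrollment probability from application features
# (random stand-ins here for GPA, test score, family income, etc.).
X_hist = rng.normal(size=(1000, 3))              # past applicants
y_hist = (rng.random(1000) < 0.3).astype(int)    # 1 = enrolled
stage1 = LogisticRegression().fit(X_hist, y_hist)

# Stage 2: spend a fixed scholarship budget on this year's applicants.
# Assume (unrealistically) that every offer lifts enrollment probability
# by a constant amount, and fund the largest expected gains first.
X_new = rng.normal(size=(200, 3))
p_base = stage1.predict_proba(X_new)[:, 1]
LIFT, OFFER, BUDGET = 0.05, 10_000, 500_000

gain = np.minimum(p_base + LIFT, 1.0) - p_base
offers = np.zeros(len(X_new))
for i in np.argsort(-gain)[: BUDGET // OFFER]:
    offers[i] = OFFER

print(f"expected extra enrollees: {gain[offers > 0].sum():.1f}")
```

Note that nothing in this objective rewards persistence or graduation, which is precisely the narrowness the paper warns about.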

The geography of AI


Report by Mark Muro and Sifan Liu: “Much of the U.S. artificial intelligence (AI) discussion revolves around futuristic dreams of both utopia and dystopia. From extreme to extreme, the promises range from solutions to global climate change to a “robot apocalypse.”

However, it bears remembering that AI is also becoming a real-world economic fact with major implications for national and regional economic development as the U.S. crawls out of the COVID-19 pandemic.

Based on advanced uses of statistics, algorithms, and fast computer processing, AI has become a focal point of U.S. innovation debates. Even more, AI is increasingly viewed as the next great “general purpose technology”—one that has the power to boost the productivity of sector after sector of the economy.

All of which is why state and city leaders are increasingly assessing AI for its potential to spur economic growth. Such leaders are analyzing where their regions stand and what they need to do to ensure their locations are not left behind.

In response to such questions, this analysis examines the extent, location, and concentration of AI technology creation and business activity in U.S. metropolitan areas.

Employing seven basic measures of AI capacity, the report benchmarks regions on the basis of their core AI assets and capabilities as they relate to two basic dimensions: AI research and AI commercialization. In doing so, the assessment categorizes metro areas into five tiers of regional AI involvement and extracts four main findings reflecting that involvement…(More)”.
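
The benchmarking logic is straightforward to picture: standardize each capacity measure, average the measures belonging to each dimension, and bucket the composite into tiers. The sketch below does this for five fictional metro areas; the measure names, weights, and tier cutoffs are invented and mirror only the shape of the method, not the report’s actual data.

```python
# An illustrative sketch of the tiering approach; all numbers are invented.
import numpy as np

metros = ["Metro A", "Metro B", "Metro C", "Metro D", "Metro E"]
# Seven hypothetical capacity measures per metro:
# research: papers, AI PhDs, federal R&D | commercialization: patents,
# AI startups, AI job postings, VC deals
raw = np.array([
    [900, 120, 50, 300,  80, 5000, 40],
    [400,  60, 20, 150,  35, 2200, 15],
    [100,  15,  5,  40,  10,  600,  3],
    [700,  90, 35, 500, 120, 9000, 70],
    [ 50,   8,  2,  10,   4,  200,  1],
], dtype=float)

z = (raw - raw.mean(axis=0)) / raw.std(axis=0)  # standardize each measure
research = z[:, :3].mean(axis=1)                # research dimension
commercial = z[:, 3:].mean(axis=1)              # commercialization dimension
composite = (research + commercial) / 2

# Bucket the composite into five tiers (tier 1 = strongest).
tiers = 5 - np.digitize(composite, bins=[-1.0, -0.5, 0.0, 0.5])
for m, r, c, t in zip(metros, research, commercial, tiers):
    print(f"{m}: research={r:+.2f}  commercialization={c:+.2f}  tier {t}")
```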

Artificial intelligence master’s programs


An analysis “of curricula building blocks” by the JRC-European Commission: “This report identifies building blocks of master’s programs on Artificial Intelligence (AI), on the basis of the existing programs available in the European Union. These building blocks provide a first analysis that requires acceptance and sharing by the AI community. The proposal analyses, first, the knowledge contents and, second, the educational competences declared as learning outcomes of 45 post-graduate academic master’s programs related to AI from universities in 13 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Ireland, Netherlands, Portugal, Spain, and Sweden in the EU; plus Switzerland and the United Kingdom).

As a closely related and relevant part of Informatics and Computer Science, major AI-related curricula on data science have also been taken into consideration in the analysis. Defining a specific AI curriculum alongside data science curricula is motivated by the need for a deeper understanding of the topics and skills of the former, which build up the foundations of strong AI, as opposed to narrow AI, the general focus of the latter. The body of knowledge with the proposed building blocks for AI consists of a number of knowledge areas, which are classified as Essential, Core, General, and Applied.

First, the AI Essentials cover topics and competences from foundational disciplines that are fundamental to AI. Second, topics and competences that are closely interrelated and specific to AI are classified into a set of AI Core domain-specific areas, plus one AI General area for non-domain-specific knowledge. Third, AI Applied areas are built on top of the topics and competences required to develop AI applications and services, viewed from a more philosophical and ethical perspective. All the knowledge areas are refined into knowledge units and topics for the analysis. Studying the core AI knowledge topics across the sample of master’s programs, machine learning is observed to prevail, followed in order by: computer vision; human-computer interaction; knowledge representation and reasoning; natural language processing; planning, search and optimisation; and robotics and intelligent automation. A significant number of the master’s programs analysed focus heavily on machine learning topics, despite being initially classified in another domain. It is noteworthy that machine learning topics, along with selected topics on knowledge representation, show a high degree of commonality across AI and data science programs. Finally, the competence-based analysis of the sample programs’ learning outcomes, based on Bloom’s cognitive levels, shows that the understanding and creating levels are dominant.

By contrast, analysing and evaluating are the scarcest cognitive levels. Another relevant outcome is that master’s programs on AI offered through the disciplinary lens of engineering studies show a notable scarcity of competences related to informatics or computing, which are fundamental to AI….(More)”.
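
Both tallies in the report, topic prevalence across programs and Bloom’s cognitive levels across learning outcomes, amount to frequency counts. Here is a toy version with invented program data and a tiny verb-to-level lexicon:

```python
# A toy version of both counts; program data and verb lexicon are invented.
from collections import Counter

programs = {
    "Program 1": ["machine learning", "computer vision", "robotics"],
    "Program 2": ["machine learning", "natural language processing"],
    "Program 3": ["machine learning", "knowledge representation"],
}
topic_counts = Counter(t for topics in programs.values() for t in topics)
print(topic_counts.most_common())   # machine learning prevails here too

BLOOM = {"define": "remember", "explain": "understand", "apply": "apply",
         "analyse": "analyse", "evaluate": "evaluate", "design": "create"}
outcomes = ["explain deep learning", "design an AI system", "explain search"]
level_counts = Counter(BLOOM[o.split()[0]] for o in outcomes)
print(level_counts)                 # understand/create dominate this sample
```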

Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers


Paper by Baobao Zhang, Markus Anderljung, Lauren Kahn, Naomi Dreksler, Michael C. Horowitz, and Allan Dafoe: “Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group’s attitudes are not well understood, undermining our ability to discern consensuses or disagreements between AI/ML researchers. To examine these researchers’ views, we conducted a survey of those who published in two top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers (Grace et al., 2018) and a 2018 survey of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook….(More)”.

The Open-Source Movement Comes to Medical Datasets


Blog by Edmund L. Andrews: “In a move to democratize research on artificial intelligence and medicine, Stanford’s Center for Artificial Intelligence in Medicine and Imaging (AIMI) is dramatically expanding what is already the world’s largest free repository of AI-ready annotated medical imaging datasets.

Artificial intelligence has become an increasingly pervasive tool for interpreting medical images, from detecting tumors in mammograms and brain scans to analyzing ultrasound videos of a person’s pumping heart.

Many AI-powered devices now rival the accuracy of human doctors. Beyond simply spotting a likely tumor or bone fracture, some systems predict the course of a patient’s illness and make recommendations.

But AI tools have to be trained on expensive datasets of images that have been meticulously annotated by human experts. Because those datasets can cost millions of dollars to acquire or create, much of the research is being funded by big corporations that don’t necessarily share their data with the public.

“What drives this technology, whether you’re a surgeon or an obstetrician, is data,” says Matthew Lungren, co-director of AIMI and an assistant professor of radiology at Stanford. “We want to double down on the idea that medical data is a public good, and that it should be open to the talents of researchers anywhere in the world.”

Launched two years ago, AIMI has already acquired annotated datasets for more than 1 million images, many of them from the Stanford University Medical Center. Researchers can download those datasets at no cost and use them to train AI models that recommend certain kinds of action.

Now, AIMI has teamed up with Microsoft’s AI for Health program to launch a new platform that will be more automated, accessible, and visible. It will be capable of hosting and organizing scores of additional datasets from institutions around the world. Part of the idea is to create an open and global repository. The platform will also provide a hub for sharing research, making it easier to refine different models and identify differences between population groups. The platform can even offer cloud-based computing power so researchers don’t have to worry about building resource-intensive local clinical machine-learning infrastructure….(More)”.
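
For a sense of what researchers do with such a repository once a dataset is downloaded, here is a generic training sketch in PyTorch. The directory layout, labels, and model choice are hypothetical assumptions; this is not AIMI’s or Microsoft’s actual interface.

```python
# A generic sketch: train an image classifier on a downloaded annotated
# dataset. Paths, labels, and model choice are hypothetical assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assume the dataset was downloaded as images/<label>/<file>.png,
# e.g. images/normal/... and images/abnormal/...
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("images/", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)           # train from scratch
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```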

AI, big data, and the future of consent


Paper by Adam J. Andreotta, Nin Kirkham & Marco Rizzi: “In this paper, we discuss several problems with current Big data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big data use. First, we discuss the transparency (or explanation) problem. Second, we discuss the re-purposed data problem. Third, we discuss the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a ‘soft governance’ ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to the standard consent forms and privacy policies that could make use of some of the latest research focussed on the usability of pictorial legal contracts…(More)”.

The social dilemma in AI development and why we have to solve it


Paper by Inga Strümke, Marija Slavkovik and Vince I. Madai: “While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines. We argue that a main underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process….(More)”.
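
The dilemma the authors invoke has the classic game-theoretic structure, which a toy payoff matrix makes plain (the numbers are invented for illustration):

```python
# A toy payoff matrix with the defining social-dilemma structure.
payoffs = {  # (my choice, other's choice) -> my payoff
    ("ethical", "ethical"): 3,
    ("ethical", "fast"): 0,   # I bear the cost while my rival ships first
    ("fast", "ethical"): 5,   # I ship first
    ("fast", "fast"): 1,      # race to the bottom
}

for other in ("ethical", "fast"):
    best = max(("ethical", "fast"), key=lambda me: payoffs[(me, other)])
    print(f"if the other developer plays {other!r}, my best reply is {best!r}")
# "fast" dominates either way, so (fast, fast) is the equilibrium even though
# (ethical, ethical) pays more to both players.
```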

The Secret Bias Hidden in Mortgage-Approval Algorithms


An investigation by The Markup: “…has found that lenders in 2019 were more likely to deny home loans to people of color than to White people with similar financial characteristics—even when we controlled for newly available financial factors that the mortgage industry for years has said would explain racial disparities in lending.

Holding 17 different factors steady in a complex statistical analysis of more than two million conventional mortgage applications for home purchases, we found that lenders were 40 percent more likely to turn down Latino applicants for loans, 50 percent more likely to deny Asian/Pacific Islander applicants, and 70 percent more likely to deny Native American applicants than similar White applicants. Lenders were 80 percent more likely to reject Black applicants than similar White applicants. These are national rates.

In every case, the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.

The industry had criticized previous similar analyses for not including financial factors they said would explain disparities in lending rates but were not public at the time: debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow, and the applicant’s credit score.

The first two are now public in the Home Mortgage Disclosure Act data. Including these financial data points in our analysis not only failed to eliminate racial disparities in loan denials; it highlighted new, devastating ones.

We found that lenders gave fewer loans to Black applicants than to White applicants even when their incomes were high—$100,000 a year or more—and their debt ratios were the same. In fact, high-earning Black applicants with less debt were rejected more often than high-earning White applicants who had more debt….(More)”.
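
The kind of analysis The Markup describes, a logistic regression that holds financial factors steady while estimating the effect of race on denial, can be sketched in a few lines. The data below are synthetic with a disparity deliberately built in, and the controls are reduced to two; the actual analysis held 17 factors steady across more than two million applications.

```python
# Synthetic illustration only: a logistic regression of denial on a race
# indicator plus two financial controls. The Markup's model held 17 factors
# steady; here the disparity is built into the fake data on purpose.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
black = (rng.random(n) < 0.2).astype(float)   # race indicator (synthetic)
dti = rng.normal(35, 8, n)                    # debt-to-income ratio
ltv = rng.normal(80, 10, n)                   # loan-to-value ratio

# Denial process with a built-in coefficient of 0.6 on the race indicator,
# i.e. a true odds ratio of exp(0.6) ~ 1.8 (about 80 percent higher odds).
logit = -4 + 0.04 * dti + 0.02 * ltv + 0.6 * black
denied = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([black, dti, ltv]))
res = sm.Logit(denied, X).fit(disp=False)
print(f"estimated odds ratio for Black applicants: {np.exp(res.params[1]):.2f}")
```

The exponentiated coefficient on the race indicator is how figures like “80 percent more likely to reject” are typically read off a model of this kind, holding the controls steady.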

Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle


Paper by Harini Suresh and John Guttag: “As machine learning (ML) increasingly affects people and society, awareness of its potential unwanted consequences has also grown. To anticipate, prevent, and mitigate undesirable downstream consequences, it is critical that we understand when and how harm might be introduced throughout the ML life cycle. In this paper, we provide a framework that identifies seven distinct potential sources of downstream harm in machine learning, spanning data collection, development, and deployment. We describe how these issues arise, how they are relevant to particular applications, and how they motivate different mitigations…(More)”.
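
The seven sources the paper identifies (historical, representation, measurement, aggregation, learning, evaluation, and deployment bias) map onto the life-cycle stages named above, which lends itself to a simple audit checklist. A sketch, with the checklist framing my own rather than the paper’s:

```python
# The seven harm sources from the paper, keyed to the life-cycle stage where
# each is introduced; the checklist structure itself is illustrative.
LIFE_CYCLE_HARMS = {
    "data collection": ["historical bias", "representation bias",
                        "measurement bias"],
    "development":     ["aggregation bias", "learning bias",
                        "evaluation bias"],
    "deployment":      ["deployment bias"],
}

def harms_to_review(stage: str) -> list[str]:
    """Return the potential harm sources to audit at a given stage."""
    return LIFE_CYCLE_HARMS.get(stage, [])

for stage in LIFE_CYCLE_HARMS:
    print(f"{stage}: {', '.join(harms_to_review(stage))}")
```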