Change of heart: how algorithms could revolutionise organ donations


Tej Kohli at TheNewEconomy: “Artificial intelligence (AI) and biotechnology are both on an exponential growth trajectory, with the potential to improve how we experience our lives and even to extend life itself. But few have considered how these two frontier technologies could be brought together symbiotically to tackle global health and environmental challenges…

For example, combination technologies could tackle a global health issue such as organ donation. According to the World Health Organisation, around 100,800 solid organ transplants were performed annually as of 2008. Yet in the US, nearly 113,000 people are waiting for a life-saving organ transplant, while thousands of good organs are discarded each year. For years, those in need of a kidney transplant had limited options: they either had to find a willing and biologically viable living donor, or wait for a viable deceased donor to turn up at their local hospital.

But with enough patients and willing donors, big data and AI make it possible to facilitate far more matches than this one-to-one system allows, through a system of paired kidney donation. A patient whose would-be donor is not a biological match can now still receive a kidney, because AI can match donors to recipients across a massive array of patient-donor relationships. In fact, a single person who steps forward to donate a kidney – to a loved one or even to a stranger – can set off a domino effect that saves dozens of lives by resolving the missing link in a long chain of pairings….
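The matching logic described above can be illustrated with a small sketch: model each incompatible patient-donor pair as a node in a directed graph, with an edge wherever one pair's donor is compatible with another pair's recipient, then search for exchange cycles. This toy version considers only ABO blood-type compatibility and uses brute-force search; real kidney exchange registries also screen HLA crossmatch and solve large integer programs, so the rules and names here are illustrative assumptions, not the production algorithm.

```python
from itertools import permutations

# ABO blood-type compatibility: recipient types each donor type can serve.
# Real programs also screen HLA/crossmatch compatibility, omitted here.
COMPATIBLE = {
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def can_donate(donor_type, recipient_type):
    return recipient_type in COMPATIBLE[donor_type]

def find_exchange_cycles(pairs, max_len=3):
    """Brute-force search for donor swap cycles among incompatible pairs.

    `pairs` maps a pair id to (donor_blood_type, recipient_blood_type).
    A cycle (p1, ..., pk) means p1's donor gives to p2's recipient,
    p2's donor to p3's recipient, and pk's donor back to p1's recipient.
    """
    cycles = []
    for k in range(2, max_len + 1):
        for combo in permutations(pairs, k):
            if combo[0] != min(combo):  # fix a canonical rotation to skip duplicates
                continue
            if all(can_donate(pairs[combo[i]][0], pairs[combo[(i + 1) % k]][1])
                   for i in range(k)):
                cycles.append(combo)
    return cycles

# Two incompatible pairs whose donors can swap, plus one that cannot match.
pairs = {
    "P1": ("A", "B"),   # donor type A cannot give to recipient type B
    "P2": ("B", "A"),   # donor type B cannot give to recipient type A
    "P3": ("AB", "O"),  # AB donor can only give to AB recipients
}
print(find_exchange_cycles(pairs))  # -> [('P1', 'P2')]
```

The swap found here is a two-way exchange; lengthening `max_len` finds the longer chains the excerpt alludes to, at rapidly growing search cost, which is why real registries use optimization solvers instead.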

The moral and ethical implications of today’s frontier technologies are far-reaching. Fundamental questions have not been adequately addressed. How will algorithms weigh the needs of poor and wealthy patients? Should a donor organ be sent to a distant patient – potentially one in a different country – with a low rejection risk or to a nearby patient whose rejection risk is only slightly higher?

These are important questions, but I believe we should get combination technologies up and working, and then decide on the appropriate controls. The matching power of AI means that eight lives could be saved by just one deceased organ donor; innovations in biotechnology could ensure that organs are never wasted. The faster these technologies advance, the more lives we can save…(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

An AI Epidemiologist Sent the First Warnings of the Wuhan Virus


Eric Niiler at Wired: “On January 9, the World Health Organization notified the public of a flu-like outbreak in China: a cluster of pneumonia cases had been reported in Wuhan, possibly from vendors’ exposure to live animals at the Huanan Seafood Market. The US Centers for Disease Control and Prevention had gotten the word out a few days earlier, on January 6. But a Canadian health monitoring platform had beaten them both to the punch, sending word of the outbreak to its customers on December 31.

BlueDot uses an AI-driven algorithm that scours foreign-language news reports, animal and plant disease networks, and official proclamations to give its clients advance warning to avoid danger zones like Wuhan.

Speed matters during an outbreak, and tight-lipped Chinese officials do not have a good track record of sharing information about diseases, air pollution, or natural disasters. But public health officials at WHO and the CDC have to rely on these very same health officials for their own disease monitoring. So maybe an AI can get there faster. “We know that governments may not be relied upon to provide information in a timely fashion,” says Kamran Khan, BlueDot’s founder and CEO. “We can pick up news of possible outbreaks, little murmurs or forums or blogs of indications of some kind of unusual events going on.”…
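BlueDot's actual models are proprietary, but the kind of signal-scanning Khan describes can be caricatured in a few lines: score incoming reports against a watchlist of outbreak-related terms, weight them by source reliability, and flag locations whose aggregate score crosses a threshold. Everything here (the term list, the weights, the threshold) is an invented assumption for illustration; a production system would use multilingual NLP models and far richer data sources.

```python
import re
from collections import Counter

# Hypothetical term list and source weights; a real system would use
# trained multilingual models rather than hand-picked keywords.
SYMPTOM_TERMS = {"pneumonia", "fever", "respiratory", "outbreak", "cluster"}
SOURCE_WEIGHTS = {"official_bulletin": 3.0, "news": 1.5, "forum": 1.0}

def score_report(text, source_type):
    """Crude anomaly score: weighted count of outbreak-related terms."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    hits = words & SYMPTOM_TERMS
    return len(hits) * SOURCE_WEIGHTS.get(source_type, 1.0), sorted(hits)

def flag_locations(reports, threshold=3.0):
    """Aggregate scores per location and flag those above a threshold."""
    totals = Counter()
    for location, text, source_type in reports:
        score, _ = score_report(text, source_type)
        totals[location] += score
    return [loc for loc, total in totals.items() if total >= threshold]

reports = [
    ("Wuhan", "Cluster of pneumonia cases of unknown cause reported", "news"),
    ("Wuhan", "Patients with fever and respiratory symptoms near market", "forum"),
    ("Lyon", "Seasonal fever advisory issued", "news"),
]
print(flag_locations(reports))  # -> ['Wuhan']
```

The design choice worth noting is the aggregation step: no single report is decisive, but repeated weak signals from one location (including low-credibility sources like forums) accumulate into an alert, which matches Khan's description of picking up "little murmurs."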

The firm isn’t the first to look for an end-run around public health officials, but it is hoping to do better than Google Flu Trends, which was euthanized after overestimating the severity of the 2013 flu season by 140 percent. BlueDot successfully predicted the location of the Zika outbreak in South Florida in a publication in the British medical journal The Lancet….(More)”.

AI Isn’t a Solution to All Our Problems


Article by Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh: “From the esoteric worlds of predictive health care and cybersecurity to Google’s e-mail completion and translation apps, the impacts of AI are increasingly being felt in our everyday lived experience. The diverse ways it has crept into our lives, and its proficiency at low-level knowledge work, show that AI is here to stay. But like any helpful new tool, there are notable flaws and consequences to blindly adopting it.

AI is a tool—not a cure-all to modern problems….

Connecterra is trying to use TensorFlow to address global hunger through AI-enabled efficient farming and sustainable food development. The company uses AI-equipped sensors to track cattle health, helping farmers look for signs of illness early on. But this only benefits one type of farmer: those rearing cattle who can afford to outfit their entire herd with devices. Applied this way, AI can only improve the productivity of specific resource-intensive dairy farms and is unlikely to meet Connecterra’s goal of ending world hunger.

This solution, and others like it, ignores the wider social context of AI’s application. The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied. 

Challenges with AI are exacerbated because these tools often come to the public as “black boxes”—easy to use but entirely opaque in nature. This shields the user from understanding what biases and risks may be involved, and this lack of public understanding of AI tools and their limitations is a serious problem. We shouldn’t put our complete trust in programs whose workings even their creators cannot interpret. The poorly understood conclusions these systems produce create risk for the individual users, companies, and government projects that rely on them.

With AI’s pervasiveness and the slow change of policy, where do we go from here? We need a more rigorous system in place to evaluate and manage risk for AI tools….(More)”.

Information literacy in the age of algorithms


Report by Alison J. Head, Ph.D., Barbara Fister, Margy MacMillan: “…Three sets of questions guided this report’s inquiry:

  1. What is the nature of our current information environment, and how has it influenced how we access, evaluate, and create knowledge today? What do findings from a decade of PIL research tell us about the information skills and habits students will need for the future?
  2. How aware are current students of the algorithms that filter and shape the news and information they encounter daily? What concerns do they have about how automated decision-making systems may influence us, divide us, and deepen inequalities?
  3. What must higher education do to prepare students to understand the new media landscape so they will be able to participate in sharing and creating information responsibly in a changing and challenged world?

To investigate these questions, we draw on qualitative data that PIL researchers collected from student focus groups and faculty interviews during fall 2019 at eight U.S. colleges and universities. Findings from a sample of 103 students and 37 professors reveal levels of awareness and concerns about the age of algorithms on college campuses. They are presented as research takeaways….(More)”.

Machine Learning, Big Data and the Regulation of Consumer Credit Markets: The Case of Algorithmic Credit Scoring


Paper by Nikita Aggarwal et al: “Recent advances in machine learning (ML) and Big Data techniques have facilitated the development of more sophisticated, automated consumer credit scoring models — a trend referred to as ‘algorithmic credit scoring’ in recognition of the increasing reliance on computer (particularly ML) algorithms for credit scoring. This chapter, which forms part of the 2018 collection of short essays ‘Autonomous Systems and the Law’, examines the rise of algorithmic credit scoring, and considers its implications for the regulation of consumer creditworthiness assessment and consumer credit markets more broadly.

The chapter argues that algorithmic credit scoring, and the Big Data and ML technologies underlying it, offer both benefits and risks for consumer credit markets. On the one hand, it could increase allocative efficiency and distributional fairness in these markets, by widening access to, and lowering the cost of, credit, particularly for ‘thin-file’ and ‘no-file’ consumers. On the other hand, algorithmic credit scoring could undermine distributional fairness and efficiency, by perpetuating discrimination in lending against certain groups and by enabling the more effective exploitation of borrowers.
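The "thin-file" upside the chapter describes can be made concrete with a deliberately simple sketch: a logistic scoring model over alternative-data features such as utility payment history. The weights below are invented for illustration (real algorithmic scorers are trained on large datasets, often with gradient-boosted trees over thousands of features), but the shape of the decision is the same: features in, estimated default probability out, approval if the risk clears a threshold.

```python
import math

# Hypothetical, hand-set weights on alternative-data features; a real
# model would learn these from repayment outcomes. A positive weight
# means the feature raises estimated default risk.
WEIGHTS = {
    "missed_payments_last_year": 0.8,
    "months_of_utility_payment_history": -0.05,
    "income_to_obligations_ratio": -0.3,
}
INTERCEPT = -2.0

def default_probability(features):
    """Logistic model: borrower features -> estimated default probability."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def assess(features, approve_below=0.15):
    """Return (risk estimate, approval decision) for an applicant."""
    p = default_probability(features)
    return round(p, 3), p < approve_below

# A 'thin-file' applicant with no bureau record but two years of on-time
# utility payments, versus a higher-risk applicant.
thin_file = {
    "missed_payments_last_year": 0,
    "months_of_utility_payment_history": 24,
    "income_to_obligations_ratio": 2.0,
}
risky = {
    "missed_payments_last_year": 5,
    "months_of_utility_payment_history": 0,
    "income_to_obligations_ratio": 1.0,
}
print(assess(thin_file))  # low estimated risk: approved
print(assess(risky))      # high estimated risk: declined
```

The same output is what a fairness audit would interrogate: comparing approval rates and error rates across groups is how the distributional concerns raised in the chapter get tested in practice.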

The chapter considers how consumer financial regulation should respond to these risks, focusing on the UK/EU regulatory framework. As a general matter, it argues that the broadly principles- and conduct-based approach of UK consumer credit regulation provides the flexibility necessary for regulators and market participants to respond dynamically to these risks. However, this approach could be enhanced through the introduction of more robust product oversight and governance requirements for firms in relation to their use of ML systems and processes. Supervisory authorities could also make greater use of ML and Big Data techniques themselves in order to strengthen the supervision of consumer credit firms.

Finally, the chapter notes that cross-sectoral data protection regulation, recently updated in the EU under the GDPR, offers an important avenue to mitigate risks to consumers arising from the use of their personal data. However, further guidance is needed on the application and scope of this regime in the consumer financial context….(More)”.

The future is intelligent: Harnessing the potential of artificial intelligence in Africa


Youssef Travaly and Kevin Muvunyi at Brookings: “…AI in particular presents countless avenues for both the public and private sectors to optimize solutions to the most crucial problems facing the continent today, especially for struggling industries. For example, in health care, AI solutions can help scarce personnel and facilities do more with less by speeding initial processing, triage, diagnosis, and post-care follow-up. Furthermore, AI-based pharmacogenomics applications, which focus on the likely response of an individual to therapeutic drugs based on certain genetic markers, can be used to tailor treatments. Considering the genetic diversity found on the African continent, it is highly likely that the application of these technologies in Africa will result in considerable advancement in medical treatment on a global level.

In agriculture, Abdoulaye Baniré Diallo, co-founder and chief scientific officer of the AI startup My Intelligent Machines, is working with advanced algorithms and machine learning methods to leverage genomic precision in livestock production models. With genomic precision, it is possible to build intelligent breeding programs that minimize the ecological footprint, address changing consumer demands, and contribute to the well-being of people and animals alike through the selection of good genetic characteristics at an early stage of the livestock production process. These are just a few examples that illustrate the transformative potential of AI technology in Africa.

However, a number of structural challenges undermine rapid adoption and implementation of AI on the continent. Inadequate basic and digital infrastructure seriously erodes efforts to activate AI-powered solutions as it reduces crucial connectivity. (For more on strategies to improve Africa’s digital infrastructure, see the viewpoint on page 67 of the full report). A lack of flexible and dynamic regulatory systems also frustrates the growth of a digital ecosystem that favors AI technology, especially as tech leaders want to scale across borders. Furthermore, lack of relevant technical skills, particularly for young people, is a growing threat. This skills gap means that those who would have otherwise been at the forefront of building AI are left out, preventing the continent from harnessing the full potential of transformative technologies and industries.

Similarly, the lack of adequate investments in research and development is an important obstacle. Africa must develop innovative financial instruments and public-private partnerships to fund human capital development, including a focus on industrial research and innovation hubs that bridge the gap between higher education institutions and the private sector to ensure the transition of AI products from lab to market….(More)”.

Technology Can't Fix Algorithmic Injustice


Annette Zimmermann, Elena Di Rosa, and Hochan Kim at Boston Review: “A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly expressed their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”

These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society….(More)”.

Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems


Paper by Tim Wu: “Software has partially or fully displaced many former human activities, such as catching speeders or flying airplanes, and proven itself able to surpass humans in certain contests, like Chess and Jeopardy. What are the prospects for the displacement of human courts as the centerpiece of legal decision-making?

Based on the case study of hate speech control on major tech platforms, particularly on Twitter and Facebook, this Essay suggests displacement of human courts remains a distant prospect, but suggests that hybrid machine–human systems are the predictable future of legal adjudication, and that there lies some hope in that combination, if done well….(More)”.