Change of heart: how algorithms could revolutionise organ donations


Tej Kohli at TheNewEconomy: “Artificial intelligence (AI) and biotechnology are both on an exponential growth trajectory, with the potential to improve how we experience our lives and even to extend life itself. But few have considered how these two frontier technologies could be brought together symbiotically to tackle global health and environmental challenges…

For example, combination technologies could tackle a global health issue such as organ donation. According to the World Health Organisation, around 100,800 solid organ transplants were performed worldwide in 2008. Yet, in the US, there are nearly 113,000 people waiting for a life-saving organ transplant, while thousands of good organs are discarded each year. For years, those in need of a kidney transplant had limited options: they either had to find a willing and biologically viable living donor, or wait for a viable deceased donor to show up in their local hospital.

But with enough patients and willing donors, big data and AI make it possible to facilitate far more matches than this one-to-one system allows, through a system of paired kidney donation. Patients can now enrol with a willing donor who is not a biological match and still receive a kidney, because AI can match donors to recipients across a massive array of patient-donor relationships. In fact, a single person who steps forward to donate a kidney – to a loved one or even to a stranger – can set off a domino effect that saves dozens of lives by supplying the missing link in a long chain of pairings….
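To make the chain idea concrete, here is a minimal sketch, assuming a toy set of incompatible patient-donor pairs and a simplified ABO blood-type rule. It is illustrative only, not the matching algorithm any transplant registry actually runs.

```python
# A toy chain search over a donor-recipient compatibility graph.
# The pairs, the simplified ABO rule and the search itself are all
# illustrative; real kidney-exchange platforms also weigh tissue type,
# antibodies, geography and fairness.
from collections import deque

# Incompatible pairs: pair_id -> (patient_blood_type, donor_blood_type)
pairs = {
    "P1": ("A", "B"),
    "P2": ("B", "O"),
    "P3": ("AB", "A"),
}

def compatible(donor, patient):
    """Simplified ABO rule: O donates to anyone, AB receives from anyone."""
    return donor == "O" or patient == "AB" or donor == patient

def longest_chain(altruistic_donor):
    """Breadth-first search for the longest chain of transplants that a
    single non-directed (altruistic) donor can trigger."""
    best = []
    queue = deque([(altruistic_donor, [])])
    while queue:
        donor, chain = queue.popleft()
        extended = False
        for pid, (patient, next_donor) in pairs.items():
            if pid not in chain and compatible(donor, patient):
                queue.append((next_donor, chain + [pid]))
                extended = True
        if not extended and len(chain) > len(best):
            best = chain
    return best

# One O-type altruistic donor unlocks a transplant for every pair here.
print(longest_chain("O"))  # ['P1', 'P2', 'P3']
```

Real exchanges formulate this as an optimisation over cycles and chains in the compatibility graph, typically solved with integer programming rather than a plain search.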

The moral and ethical implications of today’s frontier technologies are far-reaching. Fundamental questions have not been adequately addressed. How will algorithms weigh the needs of poor and wealthy patients? Should a donor organ be sent to a distant patient – potentially one in a different country – with a low rejection risk or to a nearby patient whose rejection risk is only slightly higher?

These are important questions, but I believe we should get combination technologies up and working, and then decide on the appropriate controls. The matching power of AI means that eight lives could be saved by just one deceased organ donor; innovations in biotechnology could ensure that organs are never wasted. The faster these technologies advance, the more lives we can save…(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote — or impede — information integrity….(More)”.
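As a concrete illustration of the detection side, misinformation detection is often framed as supervised text classification. The sketch below uses invented toy texts and hypothetical labels; it shows the shape of such a detector, not any deployed system.

```python
# A minimal supervised-classification sketch of misinformation detection.
# The training texts and labels are invented for illustration; a real
# system trains on large labelled corpora and combines the text score
# with provenance and network signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Miracle cure suppressed by governments, share before it is deleted",
    "Central bank announces quarter-point interest rate rise",
    "Secret memo proves the election was decided in advance",
    "Peer-reviewed study reports vaccine efficacy results",
]
labels = [1, 0, 1, 0]  # 1 = flagged as misinformation, 0 = reliable

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_text = ["Leaked documents reveal a hidden miracle cure"]
print(model.predict_proba(new_text)[0][1])  # estimated probability of misinformation
```

In practice the text score is only one signal; account behaviour and propagation patterns carry much of the weight.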

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by the moral judgments in these well-intentioned manifestos, but writing them is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

What if you ask and they say yes? Consumers' willingness to disclose personal data is stronger than you think


Grzegorz Mazurek and Karolina Małagocka at Business Horizons: “Technological progress—including the development of online channels and universal access to the internet via mobile devices—has advanced both the quantity and the quality of data that companies can acquire. Private information of this kind can be considered a type of fuel: processed with the right technologies, it becomes a competitive market advantage.

This article describes situations in which consumers tend to disclose personal information to companies and explores factors that encourage them to do so. The empirical studies and examples of market activities described herein illustrate to managers just how rewards work and how important contextual integrity is to customer digital privacy expectations. Companies’ success in obtaining client data depends largely on three Ts: transparency, type of data, and trust. These three Ts—which, combined, constitute a main T (i.e., the transfer of personal data)—deserve attention when seeking customer information that can be converted to competitive advantage and market success….(More)”.

Big data in official statistics


Paper by Barteld Braaksma and Kees Zeelenberg: “In this paper, we describe and discuss opportunities for big data in official statistics. Big data come in high volume, high velocity and high variety. Their high volume may lead to better accuracy and more details, their high velocity may lead to more frequent and more timely statistical estimates, and their high variety may give opportunities for statistics in new areas. But there are also many challenges: there are uncontrolled changes in sources that threaten continuity and comparability, and data that refer only indirectly to phenomena of statistical interest.

Furthermore, big data may be highly volatile and selective: the coverage of the population to which they refer may change from day to day, leading to inexplicable jumps in time-series. And very often, the individual observations in these big data sets lack variables that allow them to be linked to other datasets or population frames. This severely limits the possibilities for correction of selectivity and volatility. Also, with the advance of big data and open data, there is much more scope for disclosure of individual data, and this poses new problems for statistical institutes. So, big data may be regarded as so-called nonprobability samples. The use of such sources in official statistics requires other approaches than the traditional one based on surveys and censuses.

A first approach is to accept the big data just for what they are: an imperfect, yet very timely, indicator of developments in society. In a sense, this is what national statistical institutes (NSIs) often do: we collect data that respondents have already assembled, and the reason they were assembled, even the mere fact that they exist, is very much the reason they are interesting for society and thus for an NSI to collect. In short, we might argue: these data exist, and that is why they are interesting.

A second approach is to use formal models and extract information from these data. In recent years, many new methods for dealing with big data have been developed by mathematical and applied statisticians. New methods like machine-learning techniques can be considered alongside more traditional methods like Bayesian techniques. National statistical institutes have always been reluctant to use models, apart from specific cases like small-area estimates. Based on experience at Statistics Netherlands, we argue that NSIs should not be afraid to use models, provided that their use is documented and made transparent to users. On the other hand, in official statistics, models should not be used for all kinds of purposes….(More)”.
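One concrete model-based correction for the selectivity described above is post-stratification: reweight the big-data observations so that each stratum matches its known share in a population frame. A minimal sketch, with invented numbers:

```python
# A minimal post-stratification sketch: reweight a selective big-data
# source so each stratum matches its known population share.
# All shares and observations below are invented for illustration.
from collections import Counter

population_share = {"under_40": 0.45, "over_40": 0.55}  # from a population frame

# (stratum, observed value); younger users are over-represented in the source
observations = [("under_40", 12.0), ("under_40", 10.0), ("under_40", 11.0),
                ("over_40", 20.0)]

counts = Counter(stratum for stratum, _ in observations)
n = len(observations)

naive_mean = sum(value for _, value in observations) / n

# Weight each observation by (population share) / (observed share of its stratum)
weighted_mean = sum(
    value * population_share[stratum] / (counts[stratum] / n)
    for stratum, value in observations
) / n

print(f"naive: {naive_mean:.2f}, post-stratified: {weighted_mean:.2f}")
# naive: 13.25, post-stratified: 15.95
```

The same weighting logic underlies more elaborate corrections, but it only works when the linking variables the authors mention are actually available.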

10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade


Future of Privacy Forum: “Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent….(More)”.

The Gray Spectrum: Ethical Decision Making with Geospatial and Open Source Analysis


Report by The Stanley Center for Peace and Security: “Geospatial and open source analysts face decisions in their work that can directly or indirectly cause harm to individuals, organizations, institutions, and society. Though analysts may try to do the right thing, such ethically-informed decisions can be complex. This is particularly true for analysts working on issues related to nuclear nonproliferation or international security, analysts whose decisions on whether to publish certain findings could have far-reaching consequences.

The Stanley Center for Peace and Security and the Open Nuclear Network (ONN) program of One Earth Future Foundation convened a workshop to explore these ethical challenges, identify resources, and consider options for enhancing the ethical practices of geospatial and open source analysis communities.

This Readout & Recommendations brings forward observations from that workshop. It describes ethical challenges that stakeholders from relevant communities face. It concludes with a list of needs participants identified, along with possible strategies for promoting sustaining behaviors that could enhance the ethical conduct of the community of nonproliferation analysts working with geospatial and open source data.

Some Key Findings

  • A code of ethics could serve important functions for the community, including giving moral guidance to practitioners, enhancing public trust in their work, and deterring unethical behavior. Participants in the workshop saw significant value in such a code and offered ideas for developing one.
  • Awareness of ethical dilemmas and strong ethical reasoning skills are essential for sustaining ethical practices, yet professionals in this field might not have easy access to such training. Several approaches could improve ethics education for the field overall, including starting a body of literature, developing model curricula, and offering training for students and professionals.
  • Other stakeholders—governments, commercial providers, funders, organizations, management teams, etc.—should contribute to the discussion on ethics in the community and reinforce sustaining behaviors….(More)”.

You Are Now Remotely Controlled


Essay by Shoshana Zuboff in The New York Times: “…Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking.

All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo. For example, when Delta Air Lines piloted a biometric data system at the Atlanta airport, the company reported that of nearly 25,000 customers who traveled there each week, 98 percent opted into the process, noting that “the facial recognition option is saving an average of two seconds for each customer at boarding, or nine minutes when boarding a wide body aircraft.”

In fact, the rapid development of facial recognition systems reveals the public consequences of this supposedly private choice. Surveillance capitalists have demanded the right to take our faces wherever they appear — on a city street or a Facebook page. The Financial Times reported that a Microsoft facial recognition training database of 10 million images, plucked from the internet without anyone’s knowledge and supposedly limited to academic research, was employed by companies like IBM and state agencies that included the United States and Chinese military. Among these were two Chinese suppliers of equipment to officials in Xinjiang, where members of the Uighur community live in open-air prisons under perpetual surveillance by facial recognition systems.

Privacy is not private, because the effectiveness of these and other private or public surveillance and control systems depends upon the pieces of ourselves that we give up — or that are secretly stolen from us.

Our digital century was to have been democracy’s Golden Age. Instead, we enter its third decade marked by a stark new form of social inequality best understood as “epistemic inequality.” It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. The delusion of “privacy as private” was crafted to breed and feed this unanticipated social divide. Surveillance capitalists exploit the widening inequity of knowledge for the sake of profits. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice this bloodless coup from above….(More)”.

An AI Epidemiologist Sent the First Warnings of the Wuhan Virus


Eric Niiler at Wired: “On January 9, the World Health Organization notified the public of a flu-like outbreak in China: a cluster of pneumonia cases had been reported in Wuhan, possibly from vendors’ exposure to live animals at the Huanan Seafood Market. The US Centers for Disease Control and Prevention had gotten the word out a few days earlier, on January 6. But a Canadian health monitoring platform had beaten them both to the punch, sending word of the outbreak to its customers on December 31.

BlueDot uses an AI-driven algorithm that scours foreign-language news reports, animal and plant disease networks, and official proclamations to give its clients advance warning to avoid danger zones like Wuhan.

Speed matters during an outbreak, and tight-lipped Chinese officials do not have a good track record of sharing information about diseases, air pollution, or natural disasters. But public health officials at WHO and the CDC have to rely on these very same health officials for their own disease monitoring. So maybe an AI can get there faster. “We know that governments may not be relied upon to provide information in a timely fashion,” says Kamran Khan, BlueDot’s founder and CEO. “We can pick up news of possible outbreaks, little murmurs or forums or blogs of indications of some kind of unusual events going on.”…
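Mechanically, the event-detection layer of such a platform can start as simply as scanning incoming reports for outbreak-related terms and flagging unusual clusters by location. The sketch below is a toy illustration, not BlueDot's system; its keywords, sample reports and alert threshold are all invented, and the real platform layers in machine translation, flight-itinerary data and epidemiological review.

```python
# A toy event-based surveillance scan: flag locations where
# outbreak-related terms cluster in incoming reports.
import re
from collections import Counter

OUTBREAK_TERMS = re.compile(
    r"pneumonia|flu-like|unexplained illness|cluster of cases", re.IGNORECASE
)

reports = [
    ("Wuhan", "Cluster of cases of pneumonia with unknown cause reported"),
    ("Wuhan", "Hospital sees flu-like illness linked to seafood market"),
    ("Paris", "City marathon draws record crowd"),
]

signals = Counter(city for city, text in reports if OUTBREAK_TERMS.search(text))

ALERT_THRESHOLD = 2  # arbitrary for this sketch
for city, hits in signals.items():
    if hits >= ALERT_THRESHOLD:
        print(f"ALERT: {hits} outbreak-related reports mention {city}")
```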

The firm isn’t the first to look for an end-run around public health officials, but they are hoping to do better than Google Flu Trends, which was euthanized after overestimating the severity of the 2013 flu season by 140 percent. BlueDot successfully predicted the location of the Zika outbreak in South Florida in a publication in the British medical journal The Lancet….(More)”.

The State of Open Humanitarian Data


Report by Centre for Humanitarian Data: “The goal of this report is to increase awareness of the data available for humanitarian response activities and to highlight what is missing, as measured through OCHA’s Humanitarian Data Exchange (HDX) platform. We want to recognize the valuable and long-standing contributions of data-sharing organizations. We also want to be more targeted in our outreach on what data is required to understand crises so that new actors might be compelled to join the platform. Data is not an end in itself but a critical ingredient to the analysis that informs decision making. With nearly 168 million people in need of humanitarian assistance in 2020 — the highest figure in decades — there is no time, or data, to lose…(More)”.