Building Trust with the Algorithms in Our Lives


Essay by Dylan Walsh: “Algorithms are omnipresent in our increasingly digital lives. They offer us new music and friends. They recommend books and clothing. They deliver information about the world. They help us find romantic partners one day, efficient commutes the next, cancer diagnoses the third.

And yet most people display an aversion to algorithms. They don’t fully trust the recommendations made by computer programs. When asked, they prefer human predictions to those put forward by algorithms.

“But given the growing prevalence of algorithms, it seems important we learn to trust and appreciate them,” says Taly Reich, associate professor at Yale SOM. “Is there an intervention that would help reduce this aversion?”

New research conducted by Reich and two colleagues, Alex Kaju of HEC Montreal and Sam Maglio of the University of Toronto, finds that clearly demonstrating an algorithm’s ability to learn from past mistakes increases the trust that people place in the algorithm. It also inclines people to prefer the predictions made by algorithms over those made by humans.

In arriving at this result, Reich drew on her foundational work on the value of mistakes. In a series of prior papers, Reich has established how mistakes, in the right context, can create benefits; people who make mistakes can come across as more knowledgeable and credible than people who don’t. Applying this insight to predictive models, Reich and her colleagues investigated whether framing algorithms as capable of learning from their mistakes enhanced trust in the recommendations that algorithms make.

In one of several experiments, for instance, participants were asked whether a trained psychologist or an algorithm would be better at evaluating somebody’s personality. In one condition, no further information was provided. In another, identical performance data for the psychologist and the algorithm explicitly demonstrated improvement over time: in the first three months, each was correct 60% of the time and incorrect 40% of the time; by six months, each was correct 70% of the time; and over the course of the first year, the rate rose to 80% correct.

Absent information about the capacity to learn, participants chose the psychologist over the algorithm 75% of the time. But when shown how the algorithm improved over time, they chose it 66% of the time—more often than the human. Rather than displaying algorithm aversion, participants expressed what Reich and her colleagues term “algorithm appreciation,” or even “algorithm investment.” These results held across several different cases, from selecting the best artwork to finding a well-matched romantic partner. In every instance, when the algorithm exhibited learning over time, it was trusted more often than its human counterpart…(More)”
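
To make the stimuli concrete, here is a minimal illustrative sketch in Python of the accuracy trajectory participants saw and the headline choice shares. The data structure and names are our own framing for illustration, not the authors’ materials:

```python
# Illustrative sketch only: a hypothetical framing of the stimuli described
# above, not the authors' actual experimental materials.

# Accuracy trajectory shown identically for the psychologist and the algorithm
accuracy_over_time = {
    "months 1-3": 0.60,    # correct 60% of the time, incorrect 40%
    "by month 6": 0.70,    # improved to 70% correct
    "by month 12": 0.80,   # 80% correct over the first year
}

# Headline choice shares reported in the paper
chose_human_without_learning_info = 0.75   # psychologist preferred 75% of the time
chose_algorithm_with_learning_info = 0.66  # algorithm preferred 66% of the time

for period, accuracy in accuracy_over_time.items():
    print(f"{period}: {accuracy:.0%} correct, {1 - accuracy:.0%} incorrect")
```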

Government must earn public trust that AI is being used safely and responsibly


Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.
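
For illustration only, the sketch below shows the kind of information such a record might capture. The field names are hypothetical assumptions made for this example, not the Standard’s actual schema:

```python
# Hypothetical illustration of the kind of information an algorithmic
# transparency record might capture. Field names are assumptions for this
# example, NOT the actual schema of the Algorithmic Transparency
# Recording Standard.
from dataclasses import dataclass, field

@dataclass
class AlgorithmicToolRecord:
    organisation: str
    tool_name: str
    purpose: str                       # the decision or service the tool supports
    technique: str                     # e.g. "supervised classification model"
    data_sources: list[str] = field(default_factory=list)
    human_oversight: str = ""          # how a person reviews or overrides outputs
    risks_and_mitigations: list[str] = field(default_factory=list)

# A made-up example record for a fictional public body
record = AlgorithmicToolRecord(
    organisation="Example Council",
    tool_name="Benefit triage assistant",
    purpose="Prioritise benefit applications for caseworker review",
    technique="Supervised classification model",
    data_sources=["Historical application outcomes"],
    human_oversight="All flagged cases are reviewed by a caseworker",
    risks_and_mitigations=["Quarterly bias audit"],
)
print(record.tool_name, "-", record.purpose)
```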

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things more consistently than before. The Information Commissioner’s Office took part in piloting the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

AI governance and human rights: Resetting the relationship


Paper by Kate Jones: “Governments and companies are already deploying AI to assist in making decisions that can have major consequences for the lives of individual citizens and societies. AI offers far-reaching benefits for human development but also presents risks. These include, among others, further division between the privileged and the unprivileged; erosion of individual freedoms through surveillance; and the replacement of independent thought and judgement with automated control.

Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity. AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organizations do not mention human rights at all. This is an error that requires urgent correction.

This research paper aims to dispel myths about human rights; outline the principal importance of human rights for AI governance; and recommend actions that governments, organizations, companies and individuals can take to ensure that human rights are the foundation for AI governance in future…(More)”.

AI in the Common Interest


Article by Gabriela Ramos & Mariana Mazzucato: “In short, it was a year in which already serious concerns about how technologies are being designed and used deepened into even more urgent misgivings. Who is in charge here? Who should be in charge? Public policies and institutions should be designed to ensure that innovations are improving the world, yet many technologies are currently being deployed in a vacuum. We need inclusive mission-oriented governance structures that are centered around a true common good. Capable governments can shape this technological revolution to serve the public interest.

Consider AI, which the Oxford English Dictionary defines broadly as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can enhance food production and management, by making farming more efficient and improving food safety. It can help us bolster resilience against natural disasters, design energy-efficient buildings, improve power storage, and optimize renewable energy deployment. And it can enhance the accuracy of medical diagnostics when combined with doctors’ own assessments.

These applications would make our lives better in many ways. But with no effective rules in place, AI is likely to create new inequalities and amplify pre-existing ones. One need not look far to find examples of AI-powered systems reproducing unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became overtly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits may discriminate against families that are in real need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.

Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few locales. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment globally. There is now a massive power imbalance between the private owners of these technologies and the rest of us…(More)”.

Responsible AI in Africa: Challenges and Opportunities


Open Access Book edited by Damian Okaibedi Eke, Kutoma Wakunuma, and Simisola Akintoye: “In the last few years, a growing and thriving AI ecosystem has emerged in Africa. Within this ecosystem, local tech spaces as well as a number of internationally driven technology hubs and centres established by big tech companies such as Twitter, Google, Facebook, Alibaba Group, Huawei, Amazon and Microsoft have significantly increased the development and deployment of AI systems in Africa. While these tech spaces and hubs are focused on using AI to meet local challenges (e.g. poverty, illiteracy, famine, corruption, environmental disasters, terrorism and health crises), the ethical, legal and socio-cultural implications of AI in Africa have largely been ignored. To ensure that Africans benefit from the attendant gains of AI, the ethical, legal and socio-cultural impacts of AI need to be robustly considered and mitigated…(More)”.

Human-AI Teaming


Report by the National Academies of Sciences, Engineering, and Medicine: “Although artificial intelligence (AI) has many potential benefits, it has also been shown to suffer from a number of challenges for successful performance in complex real-world environments such as military operations, including brittleness, perceptual limitations, hidden biases, and lack of a model of causation important for understanding and predicting future events. These limitations mean that AI will remain inadequate for operating on its own in many complex and novel situations for the foreseeable future, and that AI will need to be carefully managed by humans to achieve its desired utility.

Human-AI Teaming: State-of-the-Art and Research Needs examines the factors that are relevant to the design and implementation of AI systems with respect to human operations. This report provides an overview of the state of research on human-AI teaming to determine gaps and future research priorities and explores critical human-systems integration issues for achieving optimal performance…(More)”

The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective


Paper by Dorine Eva van Norren: “This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics…

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely reject transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention, and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of UNESCO’s COMEST formulated principles as input, which are analyzed from the African ontological point of view. COMEST departs from “universal” concepts of individual human rights, sustainability and good governance, which are not necessarily fully compatible with relatedness, including to future and past generations. Next to rules-based approaches, which may hamper diversity, bottom-up approaches are needed, with intercultural deep learning algorithms…(More)”.

How the algorithm tipped the balance in Ukraine


David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.

As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.

If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.

This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.

“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”

I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)” See also Part 2.

How to spot AI-generated text


Article by Melissa Heikkilä: “This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

Since it was released in late November, ChatGPT has been used by over a million people. It has the AI community enthralled, and it is clear the internet is increasingly being flooded with AI-generated text. People are using it to come up with jokes, write children’s stories, and craft better emails. 

ChatGPT is OpenAI’s spin-off of its large language model GPT-3, which generates remarkably human-sounding answers to questions that it’s asked. The magic—and danger—of these large language models lies in the illusion of correctness. The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They have no clue whether something is true or false, and they confidently present information as true even when it is not.
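
To see that mechanism in miniature, here is a toy sketch of next-word prediction using simple bigram counts. Real large language models use deep neural networks trained on vast corpora, but the underlying objective, scoring candidate next words and picking a likely one, is the same in spirit:

```python
# Toy sketch of next-word prediction with bigram counts. Real large
# language models use deep neural networks over vast corpora, but the
# core objective -- pick a likely next word -- is the same in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat': the most common word after 'the'
print(predict_next("mat"))   # 'the': the only word ever seen after 'mat'
```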

In an already polarized, politically fraught online world, these AI tools could further distort the information we consume. If they are rolled out into the real world in real products, the consequences could be devastating. 

We’re in desperate need of ways to differentiate between human- and AI-written text in order to counter potential misuses of the technology, says Irene Solaiman, policy director at AI startup Hugging Face, who used to be an AI researcher at OpenAI and studied AI output detection for the release of GPT-3’s predecessor GPT-2. 
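
One family of detection methods scores how statistically predictable a text is under a language model, since model-generated text tends to be more predictable than typical human writing. The sketch below illustrates that idea with a toy unigram model as a stand-in; production detectors use full neural language models or trained classifiers, and nothing here reflects any specific detector:

```python
# Minimal sketch of likelihood-based detection: score how predictable a
# text is under a language model. Model-generated text tends to score as
# more predictable than human writing. A toy unigram model stands in here
# to keep the example self-contained; real detectors use full neural
# language models or trained classifiers.
import math
from collections import Counter

reference = ("the quick brown fox jumps over the lazy dog "
             "the dog barks and the fox runs away").split()
unigram = Counter(reference)
total = sum(unigram.values())
vocab = len(unigram)

def avg_log_prob(text: str) -> float:
    """Average per-word log-probability under the toy unigram model,
    with Laplace smoothing so unseen words get a small probability."""
    words = text.lower().split()
    return sum(math.log((unigram[w] + 1) / (total + vocab + 1))
               for w in words) / len(words)

predictable = "the dog runs over the lazy fox"
surprising = "quantum marmalade accelerates bureaucratic daffodils"
print(avg_log_prob(predictable))  # higher (less negative): more model-like
print(avg_log_prob(surprising))   # lower: more surprising to the model
```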

New tools will also be crucial to enforcing bans on AI-generated text and code, like the one recently announced by Stack Overflow, a website where coders can ask for help. ChatGPT can confidently regurgitate answers to software problems, but it’s not foolproof. Getting code wrong can lead to buggy and broken software, which is expensive and potentially chaotic to fix…(More)”.

How AI That Powers Chatbots and Search Queries Could Discover New Drugs


Karen Hao at The Wall Street Journal: “In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

That’s the aim of a new approach that uses an AI technique known as natural language processing—the same technology that enables OpenAI’s ChatGPT to generate human-like responses—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

Proteins are made up of dozens to thousands of small chemical subunits known as amino acids, and scientists use special notation to document the sequences. With each amino acid corresponding to a single letter of the alphabet, proteins are represented as long, sentence-like combinations.

Natural language algorithms, which quickly analyze language and predict the next step in a conversation, can also be applied to this biological data to create protein-language models. The models encode what might be called the grammar of proteins—the rules that govern which amino acid combinations yield specific therapeutic properties—to predict the sequences of letters that could become the basis of new drug molecules. As a result, the time required for the early stages of drug discovery could shrink from years to months.
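
As a concrete illustration of the representation, the sketch below treats proteins as strings of one-letter amino-acid codes and scores candidate sequences with a toy “grammar” built from adjacent-pair counts. The training sequences are invented for the example; real protein-language models are large neural networks trained on millions of natural sequences:

```python
# Sketch: proteins as letter sequences, scored by a toy "grammar" built
# from adjacent-residue pair counts. The sequences below are invented for
# illustration; real protein-language models are large neural networks
# trained on millions of natural sequences.
import math
from collections import Counter

# Made-up training sequences in one-letter amino-acid notation
known_proteins = ["MKVLLA", "MKVILS", "MKALLG"]

pair_counts = Counter()
for seq in known_proteins:
    pair_counts.update(zip(seq, seq[1:]))
total_pairs = sum(pair_counts.values())

def grammar_score(candidate: str) -> float:
    """Average smoothed log-probability of adjacent residue pairs.
    Higher scores mean the candidate looks more like the known proteins."""
    pairs = list(zip(candidate, candidate[1:]))
    return sum(math.log((pair_counts[p] + 1) / (total_pairs + 400))
               for p in pairs) / len(pairs)  # 400 ~ 20x20 possible pairs

print(grammar_score("MKVLLS"))  # resembles the training set: higher score
print(grammar_score("WWQQHH"))  # unlike anything seen: much lower score
```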

“Nature has provided us with tons of examples of proteins that have been designed exquisitely with a variety of functions,” says Ali Madani, founder of ProFluent Bio, a Berkeley, Calif.-based startup focused on language-based protein design. “We’re learning the blueprint from nature.”…(More)”.