AI translation is jeopardizing Afghan asylum claims


Article by Andrew Deck: “In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees…(More)”.
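The pronoun error Mirkhail caught is exactly the kind of discrepancy that simple quality checks can surface before a document is filed. Below is a minimal illustrative sketch (not a tool used by Respond Crisis Translation, and the function and word lists are hypothetical) of a back-translation check that flags when a machine translation has flipped a statement from first-person singular to plural, or vice versa:

```python
import re

FIRST_SG = {"i", "me", "my", "mine", "myself"}
FIRST_PL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_profile(text):
    """Count first-person singular vs. plural pronouns in English text."""
    words = re.findall(r"[a-z']+", text.lower())
    singular = sum(w in FIRST_SG for w in words)
    plural = sum(w in FIRST_PL for w in words)
    return singular, plural

def flags_person_shift(reference, machine_translation):
    """True if the dominant grammatical person differs between versions.

    A statement that is overwhelmingly singular in a trusted reference
    translation should not come back predominantly plural from an
    automated tool -- that is the "I" -> "we" error described above.
    """
    sg_ref, pl_ref = pronoun_profile(reference)
    sg_mt, pl_mt = pronoun_profile(machine_translation)
    return (sg_ref > pl_ref) != (sg_mt > pl_mt)
```

A check like this would not catch every mistranslation, but it shows how a cheap, targeted consistency test on a back-translated draft could have flagged the asylum application before a judge ever saw it.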

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos and pet care, and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates, and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals: emitting sounds to interact with them, giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies, or catching and separating them.

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.

Workforce ecosystems and AI


Report by David Kiron, Elizabeth J. Altman, and Christoph Riedl: “Companies increasingly rely on an extended workforce (e.g., contractors, gig workers, professional service firms, complementor organizations, and technologies such as algorithmic management and artificial intelligence) to achieve strategic goals and objectives. When we ask leaders to describe how they define their workforce today, they mention a diverse array of participants, beyond just full- and part-time employees, all contributing in various ways. Many of these leaders observe that their extended workforce now comprises 30-50% of their entire workforce. For example, Novartis has approximately 100,000 employees and counts more than 50,000 other workers as external contributors. Businesses are also increasingly using crowdsourcing platforms to engage external participants in the development of products and services. Managers are thinking about their workforce in terms of who contributes to outcomes, not just by workers’ employment arrangements.

Our ongoing research on workforce ecosystems demonstrates that managing work across organizational boundaries with groups of interdependent actors in a variety of employment relationships creates new opportunities and risks for both workers and businesses. These are not subtle shifts. We define a workforce ecosystem as:

A structure that encompasses actors, from within the organization and beyond, working to create value for an organization. Within the ecosystem, actors work toward individual and collective goals with interdependencies and complementarities among the participants.

The emergence of workforce ecosystems has implications for management theory, organizational behavior, social welfare, and policymakers. In particular, issues surrounding work and worker flexibility, equity, and data governance and transparency pose substantial opportunities for policymaking.

At the same time, artificial intelligence (AI)—which we define broadly to include machine learning and algorithmic management—is playing an increasingly large role within the corporate context. The widespread use of AI is already displacing workers through automation, augmenting human performance at work, and creating new job categories…(More)”.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Shaolei Ren, Pengfei Li, Jianyi Yang, and Mohammad A. Islam: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles) and the water consumption would have been tripled if training were done in Microsoft’s Asian data centers, but such information has been kept as a secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
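The paper’s core accounting idea — splitting a model’s water use into on-site cooling water and the water embedded in off-site electricity generation — can be sketched in a few lines. The parameter values below are illustrative assumptions for this sketch, not figures taken from the paper:

```python
def operational_water_liters(server_energy_kwh,
                             pue=1.2,          # power usage effectiveness (assumed)
                             wue_onsite=0.5,   # liters evaporated per server kWh (assumed)
                             ewif_offsite=3.1): # liters per kWh of grid electricity (assumed)
    """Rough operational water estimate for a training or inference workload.

    On-site: cooling (e.g., cooling-tower evaporation) scales with
    server energy via the facility's water usage effectiveness (WUE).
    Off-site: electricity generation consumes water too (EWIF), and the
    facility draws PUE times the server energy from the grid.
    """
    onsite = server_energy_kwh * wue_onsite
    offsite = server_energy_kwh * pue * ewif_offsite
    return onsite + offsite
```

Because both WUE and the grid’s water intensity vary by location and even by time of day, the same training run can have a very different water footprint depending on where and when it is scheduled — which is the spatial-temporal diversity the authors highlight.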

Recalibrating assumptions on AI


Essay by Arthur Holland Michel: “Many assumptions about artificial intelligence (AI) have become entrenched despite the lack of evidence to support them. Basing policies on these assumptions is likely to increase the risk of negative impacts for certain demographic groups. These dominant assumptions include claims that AI is ‘intelligent’ and ‘ethical’, that more data means better AI, and that AI development is a ‘race’.

The risks of this approach to AI policymaking are often ignored, while the potential positive impacts of AI tend to be overblown. By illustrating how a more evidence-based, inclusive discourse can improve policy outcomes, this paper makes the case for recalibrating the conversation around AI policymaking…(More)”

How public money is shaping the future of AI


Report by Ethica: “The European Union aims to become the “home of trustworthy Artificial Intelligence” and has committed the biggest existing public funding to invest in AI over the next decade. However, the lack of accessible data and comprehensive reporting on the Framework Programmes’ results and impact hinder the EU’s capacity to achieve its objectives and undermine the credibility of its commitments. 

This research, commissioned by the European AI & Society Fund, recommends publicly accessible data, effective evaluation of the real-world impacts of funding, and mechanisms for civil society participation in funding before investing further public funds to achieve the EU’s goal of being the epicenter of trustworthy AI.

Among its findings, the research has highlighted the negative impact of the European Union’s investment in artificial intelligence (AI). The EU invested €10bn into AI via its Framework Programmes between 2014 and 2020, representing 13.4% of all available funding. However, the investment process is top-down, with little input from researchers or feedback from previous grantees or civil society organizations. Furthermore, despite the EU’s aim to fund market-focused innovation, research institutions and higher and secondary education establishments received 73% of the total funding between 2007 and 2020. Germany, France, and the UK were the largest recipients, receiving 37.4% of the total EU budget.

The report also explores the lack of commitment to ethical AI, with only 30.3% of funding calls related to AI mentioning trustworthiness, privacy, or ethics. Additionally, civil society organizations are not involved in the design of funding programs, and there is no evaluation of the economic or societal impact of the funded work. The report calls for political priorities to align with funding outcomes in specific, measurable ways, citing transport as the most funded sector in AI despite not being an EU strategic focus, while programs to promote SME and societal participation in scientific innovation have been dropped….(More)”.

The NIST Trustworthy and Responsible Artificial Intelligence Resource Center


About: “The NIST Trustworthy and Responsible Artificial Intelligence Resource Center (AIRC) is a platform to support people and organizations in government, industry, and academia—both in the U.S. and internationally—driving technical and scientific innovation in AI. It serves as a one-stop shop for foundational content, technical documents, and AI toolkits, such as a repository hub for standards, measurement methods and metrics, and data sets. It also provides a common forum for all AI actors to engage and collaborate in the development and deployment of trustworthy and responsible AI technologies that benefit all people in a fair and equitable manner.

The NIST AIRC is developed to support and operationalize the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying playbook. To match the complexity of AI technology, the AIRC will grow over time to provide an engaging interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials and technical guidance related to AI risk management.

The initial release of the AIRC (airc.nist.gov) provides access to the foundational content, including the AI RMF 1.0, the playbook, and a trustworthy and responsible AI glossary. It is anticipated that in the coming months enhancements to the AIRC will include structured access to relevant technical and policy documents; access to a standards hub that connects various standards promoted around the globe; a metrics hub to assist in test, evaluation, verification, and validation of AI; as well as software tools, resources and guidance that promote trustworthy and responsible AI development and use. Visitors to the AIRC will be able to tailor the above content they see based on their requirements (organizational role, area of expertise, etc.).

Over time the Trustworthy and Responsible AI Resource Center will enable distribution of stakeholder produced content, case studies, and educational materials…(More)”.

Outsourcing Virtue


Essay by L. M. Sacasas: “To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:

Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.

This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it…(More)”.