Africa fell in love with crypto. Now, it’s complicated


Article by Martin K.N Siele: “Chiamaka, a former product manager at a Nigerian cryptocurrency startup, has sworn off digital currencies. The 22-year-old has weathered a layoff and lost savings worth 4,603,500 naira ($9,900) after the collapse of FTX in November 2022. She now works for a corporate finance company in Lagos, earning a salary 45% lower than what she earned at her previous job.

“I used to be bullish on crypto because I believed it could liberate Africans financially,” Chiamaka, who asked to be identified by a pseudonym as she was concerned about breaching her contract with her current employer, told Rest of World. “Instead, it has managed to do the opposite so far … at least to me and a few of my friends.”

Chiamaka is among the tens of millions of Africans who bought into the cryptocurrency frenzy over the last few years. According to one estimate in mid-2022, around 53 million Africans owned crypto — 16.5% of the total global crypto users. Nigeria led with over 22 million users, ranking fourth globally. Blockchain startups and businesses on the continent raised $474 million in 2022, a 429% increase from the previous year, according to the African Blockchain Report. Young African creatives also became major proponents of non-fungible tokens (NFTs), taking inspiration from pop culture and the continent’s history. Several decentralized autonomous organizations (DAOs), touted as the next big thing, emerged across Africa…(More)”.

End of data sharing could make Covid-19 harder to control, experts and high-risk patients warn


Article by Sam Whitehead: “…The federal government’s public health emergency that’s been in effect since January 2020 expires May 11. The emergency declaration allowed for sweeping changes in the U.S. health care system, like requiring state and local health departments, hospitals, and commercial labs to regularly share data with federal officials.

But some data-sharing requirements will come to an end, and the federal government will lose access to key metrics as a skeptical Congress seems unlikely to grant agencies additional powers. And private projects, like those from The New York Times and Johns Hopkins University, which made covid data understandable and useful for everyday people, stopped collecting data in March.

Public health legal scholars, data experts, former and current federal officials, and patients at high risk of severe covid outcomes worry the scaling back of data access could make it harder to control covid.

There have been improvements in recent years, such as major investments in public health infrastructure and updated data reporting requirements in some states. But concerns remain that the overall shambolic state of U.S. public health data infrastructure could hobble the response to any future threats.

“We’re all less safe when there’s not the national amassing of this information in a timely and coherent way,” said Anne Schuchat, former principal deputy director of the Centers for Disease Control and Prevention.

A lack of data in the early days of the pandemic left federal officials, like Schuchat, with an unclear picture of the rapidly spreading coronavirus. And even as the public health emergency opened the door for data-sharing, the CDC labored for months to expand its authority.

Eventually, more than a year into the pandemic, the CDC gained access to data from private health care settings, such as hospitals and nursing homes, commercial labs, and state and local health departments…(More)”. See also: Why we still need data to understand the COVID-19 pandemic

Why we need applied humanities approaches


Article by Kathryn Strong Hansen: “Since the term “applied humanities” is not especially common, some explanation may be helpful. Applied humanities education prepares students to use humanities knowledge and methods in practice rather than only in theory. As the University of Arizona’s Department of Public and Applied Humanities puts it, the goal is “public enrichment and the direct and tangible improvement of the human condition.” While this goal undoubtedly involves “intrahumanities” outputs like museum and exhibit curation or textual editing, public enrichment through the humanities can also be pursued through science and engineering curricula.

The direct goal of much science education is improving the human condition, such as CRISPR developments opening up possibilities for gene therapies. Similarly, good engineering seeks to improve the human condition, like the LEED-certified building methods that minimize negative impacts on the environment.

Since the humanities concern themselves with the human experience in all its facets, they can offer much to STEM endeavors, and applied humanities approaches have been implemented for many decades. One of the most established applied humanities pursuits is applied linguistics, which has existed as a field of study since about 1948. Another useful and growing example is that of the medical humanities, which provide medical practitioners with training that can help them interact more effectively with patients and navigate the emotional impact of their profession.

While applied approaches might be less widespread or established in other humanities fields, they are just as needed. In part, they are needed because the skills and knowledge of humanities scholars can help students in a multiplicity of fields, including STEM disciplines, to improve their understanding of their subject matter and how it connects to society at large…(More)”.

How to worry wisely about AI


The Economist:  “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart…and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.

In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji…(More)”.

AI translation is jeopardizing Afghan asylum claims


Article by Andrew Deck: “In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees…(More)”.

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered chatbot behind Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

The Myth of Objective Data


Article by Melanie Feinberg: “The notion that human judgment pollutes scientific attempts to understand natural phenomena as they really are may seem like a stable and uncontroversial value. However, as Lorraine Daston and Peter Galison have established, objectivity is a fairly recent historical development.

In Daston and Galison’s account, which focuses on scientific visualization, objectivity arose in the 19th century, congruent with the development of photography. Before photography, scientific illustration attempted to portray an ideal exemplar rather than an actually existing specimen. In other words, instead of drawing a realistic portrait of an individual fruit fly — which has unique, idiosyncratic characteristics — an 18th-century scientific illustrator drew an ideal fruit fly. This ideal representation would better portray average fruit fly characteristics, even as no actual fruit fly is ever perfectly average.

With the advent of photography, drawings of ideal types began to lose favor. The machinic eye of the lens was seen as enabling nature to speak for itself, providing access to a truer, more objective reality than the human eye of the illustrator. Daston and Galison emphasize, however, that this initial confidence in the pure eye of the machine was swiftly undermined. Scientists soon realized that photographic devices introduce their own distortions into the images that they produce, and that no eye provides an unmediated view onto nature. From the perspective of scientific visualization, the idea that machines allow us to see true has long been outmoded. In everyday discourse, however, there is a continuing tendency to characterize the objective as that which speaks for itself without the interference of human perception, interpretation, judgment, and so on.

This everyday definition of objectivity particularly affects our understanding of data collection. If in our daily lives we tend to overlook the diverse, situationally textured sense-making actions that information seekers, conversation listeners, and other recipients of communicative acts perform to make automated information systems function, we are even less likely to acknowledge and value the interpretive work of data collectors, even as these actions create the conditions of possibility upon which data analysis can operate…(More)”.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates, and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them, giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)”. See also: AI Ethics: The Case for Including Animals.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its AppTrackingTransparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well-meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”