AI-Powered World Health Chatbot Is Flubbing Some Answers


Article by Jessica Nix: “The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.

The WHO’s artificial intelligence tool provides public health information via a lifelike avatar. (Source: Bloomberg)

SARAH doesn’t have a diagnostic feature like WebMD or Google. In fact, the bot is programmed to not talk about anything outside of the WHO’s purview, including questions on specific drugs. So SARAH often sends people to a WHO website or says that users should “consult with your health-care provider.”

“It lacks depth,” said Ramin Javan, a radiologist and researcher at George Washington University. “But I think it’s because they just don’t want to overstep their boundaries and this is just the first step.”…(More)”.

What can improve democracy?


Report by the Pew Research Center: “…surveys have long found that people in many countries are dissatisfied with their democracy and want major changes to their political systems – and this year is no exception. But high and growing rates of discontent certainly raise the question: What do people think could fix things?

A graphic showing that people in most countries surveyed suggest changes to politicians will improve democracy.

We set out to answer this by asking more than 30,000 respondents in 24 countries an open-ended question: “What do you think would help improve the way democracy in your country is working?” While the second- and third-most mentioned priorities vary greatly, across most countries surveyed, there is one clear top answer: Democracy can be improved with better or different politicians.

People want politicians who are more responsive to their needs and who are more competent and honest, among other factors. People also focus on questions of descriptive representation – the importance of having politicians with certain characteristics such as a specific race, religion or gender.

Respondents also think citizens can improve their own democracy. Across most of the 24 countries surveyed, issues of public participation and of different behavior from the people themselves are a top-five priority.

Other topics that come up regularly include:

  • Economic reform, especially reforms that will enhance job creation.
  • Government reform, including implementing term limits, adjusting the balance of power between institutions and other factors.

We explore these topics and the others we coded in the following chapters:

  • Politicians, changing leadership and political parties (Chapter 1)
  • Government reform, special interests and the media (Chapter 2)
  • Economic and policy changes (Chapter 3)
  • Citizen behavior and individual rights and equality (Chapter 4)
  • Electoral reform and direct democracy (Chapter 5)
  • Rule of law, safety and the judicial system (Chapter 6)…(More)”.

Using Artificial Intelligence to Map the Earth’s Forests


Article from Meta and World Resources Institute: “Forests harbor most of Earth’s terrestrial biodiversity and play a critical role in the uptake of carbon dioxide from the atmosphere. Ecosystem services provided by forests underpin an essential defense against the climate and biodiversity crises. However, critical gaps remain in the scientific understanding of the structure and extent of global forests. Because the vast majority of existing data on global forests is derived from low- to medium-resolution satellite imagery (10 or 30 meters), there is a gap in the scientific understanding of dynamic and more dispersed forest systems such as agroforestry, drylands forests, and alpine forests, which together constitute more than a third of the world’s forests.

Today, Meta and World Resources Institute are launching a global map of tree canopy height at a 1-meter resolution, allowing the detection of single trees at a global scale. In an effort to advance open source forest monitoring, all canopy height data and artificial intelligence models are free and publicly available…(More)”.
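The excerpt doesn’t include usage details, but canopy height data of this kind is typically distributed as georeferenced raster tiles. As a rough, hedged sketch (the file name, nodata handling, and 5-meter tree threshold are assumptions, not part of the release), one could summarize a single 1-meter tile with rasterio:

```python
import numpy as np
import rasterio

# Hypothetical local tile of 1 m canopy height data (pixel values in meters).
with rasterio.open("canopy_height_tile.tif") as src:
    canopy = src.read(1).astype(float)
    nodata = src.nodata

# Mask nodata pixels before summarizing.
if nodata is not None:
    canopy = np.where(canopy == nodata, np.nan, canopy)

valid = canopy[~np.isnan(canopy)]
mean_height = valid.mean()
# Treat pixels of at least 5 m as tree cover; at 1 m resolution each pixel is ~1 square meter.
tree_pixels = int((valid >= 5).sum())
print(f"Mean canopy height: {mean_height:.1f} m; tree-cover pixels: {tree_pixels}")
```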

Social Choice for AI Alignment: Dealing with Diverse Human Feedback


Paper by Vincent Conitzer, et al.: “Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions…(More)”.
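The aggregation question the authors raise has concrete, well-studied candidate answers in social choice theory. As a hedged illustration only (the Borda count is a classical rule, not the paper’s proposal), the sketch below aggregates annotators’ rankings of candidate model outputs:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate ranked preferences with the Borda count.

    rankings: list of lists; each inner list orders candidate output IDs
    from most to least preferred, as expressed by one human annotator.
    Returns (candidate, score) pairs sorted from most to least preferred.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            # Top choice earns n-1 points, the next n-2, ..., the last 0.
            scores[candidate] += (n - 1) - position
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Three annotators rank the same four candidate completions A-D.
annotator_rankings = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
]
print(borda_aggregate(annotator_rankings))
# [('A', 8), ('B', 6), ('C', 3), ('D', 1)]
```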

We Need To Rewild The Internet


Article by Maria Farrell and Robin Berjon: “In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.

So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.

Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know what happened next.

It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered…(More)”.

On the Manipulation of Information by Governments


Paper by Ariel Karlinsky and Moses Shayo: “Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020-21. We find that between 45% and 55% of governments misreported the number of deaths. The lion’s share of misreporting cannot be attributed to a country’s capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy…(More)”
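The excerpt does not spell out the estimation procedure, but work in this area commonly compares officially reported Covid deaths with excess mortality (observed all-cause deaths minus a pre-pandemic baseline). A minimal sketch of that comparison, with purely illustrative numbers rather than figures from the paper:

```python
def undercount_ratio(observed_deaths, expected_baseline, reported_covid_deaths):
    """Rough misreporting signal: excess deaths divided by reported Covid deaths.

    observed_deaths: all-cause deaths recorded during the pandemic period
    expected_baseline: deaths expected over the same period from pre-pandemic trends
    reported_covid_deaths: officially reported Covid-19 deaths

    A ratio well above 1 suggests under-reporting, assuming excess mortality
    is mostly attributable to the pandemic.
    """
    excess = observed_deaths - expected_baseline
    return excess / reported_covid_deaths

# Illustrative figures only: 120,000 excess deaths against 40,000 reported.
print(undercount_ratio(observed_deaths=620_000,
                       expected_baseline=500_000,
                       reported_covid_deaths=40_000))  # 3.0
```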

Crowdsourcing for collaborative crisis communication: a systematic review


Paper by Maria Clara Pestana, Ailton Ribeiro and Vaninha Vieira: “Efficient crisis response and support during emergency scenarios rely on collaborative communication channels. Effective communication between operational centers, civilian responders, and public institutions is vital. Crowdsourcing fosters communication and collaboration among a diverse public. The primary objective is to explore the state-of-the-art in crowdsourcing for collaborative crisis communication guided by a systematic literature review. The study selected 20 relevant papers published in the last decade. The findings highlight solutions to facilitate rapid emergency responses, promote seamless coordination between stakeholders and the general public, and ensure data credibility through a rigorous validation process…(More)”.

The Formalization of Social Precarities


Anthology edited by Murali Shanmugavelan and Aiha Nguyen: “…explores platformization from the point of view of precarious gig workers in the Majority World. In countries like Bangladesh, Brazil, and India — which reinforce social hierarchies via gender, race, and caste — precarious workers are often the most marginalized members of society. Labor platforms made familiar promises to workers in these countries: work would be democratized, and people would have the opportunity to be their own boss. Yet even as platforms have upended the legal relationship between worker and employer, they have leaned into social structures to keep workers precarious — and in fact formalized those social precarities through surveillance and data collection…(More)”.

A Brief History of Automations That Were Actually People


Article by Brian Contreras: “If you’ve ever asked a chatbot a question and received nonsensical gibberish in reply, you already know that “artificial intelligence” isn’t always very intelligent.

And sometimes it isn’t all that artificial either. That’s one of the lessons from Amazon’s recent decision to dial back its much-ballyhooed “Just Walk Out” shopping technology, seemingly science-fiction-esque software that actually functioned, in no small part, thanks to behind-the-scenes human labor.

This phenomenon is nicknamed “fauxtomation” because it “hides the human work and also falsely inflates the value of the ‘automated’ solution,” says Irina Raicu, director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.

Take Just Walk Out: It promises a seamless retail experience in which customers at Amazon Fresh grocery stores or third-party stores can grab items from the shelf, get billed automatically and leave without ever needing to check out. But Amazon at one point had more than 1,000 workers in India who trained the Just Walk Out AI model—and manually reviewed some of its sales—according to an article published last year in The Information, a technology business website.

An anonymous source who’d worked on the Just Walk Out technology told the outlet that as many as 700 human reviews were needed for every 1,000 customer transactions. Amazon has disputed The Information’s characterization of its process. A company representative told Scientific American that while Amazon “can’t disclose numbers,” Just Walk Out has “far fewer” workers annotating shopping data than has been reported. In an April 17 blog post, Dilip Kumar, vice president of Amazon Web Services applications, wrote that “this is no different than any other AI system that places a high value on accuracy, where human reviewers are common.”…(More)”

The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis


Article by Mehrdad Safaei and Justin Longo: “Policy advising in government centers on the analysis of public problems and the development of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information from content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing public-policy-relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP-generated, human-generated, and NLP-generated/human-edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis…(More)”.
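The excerpt does not describe the authors’ generation pipeline, but the basic workflow being tested can be sketched against any hosted LLM API. A hypothetical example using the OpenAI Python client, where the model name, prompt wording and section headings are assumptions rather than the study’s actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_briefing_note(policy_issue: str, background: str) -> str:
    """Ask an LLM for a first-draft briefing note; a human analyst would then review and edit it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You are a policy analyst drafting a briefing note for a senior "
                         "public servant. Use the sections: Issue, Background, "
                         "Considerations, Options, Recommendation.")},
            {"role": "user",
             "content": f"Issue: {policy_issue}\n\nBackground material:\n{background}"},
        ],
    )
    return response.choices[0].message.content

note = draft_briefing_note(
    "Expanding rural broadband access",
    "Summaries of recent coverage statistics and prior program evaluations...",
)
print(note)
```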