Paper by Susan Ariel Aaronson: “Policy makers in many countries are determined to develop artificial intelligence (AI) within their borders because they view AI as essential to both national security and economic growth. Some countries have proposed adopting AI sovereignty, where the nation develops AI for its people, by its people and within its borders. In this paper, the author makes a distinction between policies designed to advance domestic AI and policies that, with or without direct intent, hamper the production or trade of foreign-produced AI (known as “AI nationalism”). AI nationalist policies in one country can make it harder for firms in another country to develop AI. If officials can limit access to key components of the AI supply chain, such as data, capital, expertise or computing power, they may be able to limit the AI prowess of competitors in other countries. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. AI nationalism may seem appropriate given the import of AI, but this paper aims to illuminate how AI nationalistic policies may backfire and could divide the world into AI haves and have-nots…(More)”.
The ABC’s of Who Benefits from Working with AI: Ability, Beliefs, and Calibration
Paper by Andrew Caplin: “We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does…(More)”.
How Generative AI Content Could Influence the U.S. Election
Article by Valerie Wirtschafter: “…The contested nature of the presidential race means such efforts will undoubtedly continue, but they likely will remain discoverable, and their reach and ability to shape election outcomes will be minimal. Instead, the most meaningful uses of generative AI content could occur in highly targeted scenarios just prior to the election and/or in a contentious post-election environment where experience has demonstrated that potential “evidence” of malfeasance need not be true to mobilize a small subset of believers to act.
Because U.S. elections are managed at the state and county levels, low-level actors in some swing precincts or counties are catapulted to the national spotlight every four years. Since these actors are not well known to the public, targeted and personal AI-generated content can cause significant harm. Before the election, this type of fabricated content could take the form of a last-minute phone call by someone claiming to be an election worker alerting voters to an issue at their polling place.
After the election, it could become harassment of election officials or “evidence” of foul play. Due to the localized and personalized nature of this type of effort, it could be less rapidly discoverable for unknown figures not regularly in the public eye, difficult to debunk or prevent with existing tools and guardrails, and damaging to reputations. This tailored approach need not be driven by domestic actors—in fact, in the lead-up to the 2020 elections, Iranian actors pretended to be members of the Proud Boys and sent threatening emails to Democratic voters in select states demanding they vote for Donald Trump. Although election officials have worked tirelessly to brace for this possibility, they are correct to be on guard…(More)”.
Buried Academic Treasures
Barrett and Greene: “…one of the presenters who said: “We have lots of research that leads to no results.”
As some of you know, we’ve written a book with Don Kettl to help academically trained researchers write in a way that would be understandable by decision makers who could make use of their findings. But the keys to writing well are only a small part of the picture. Elected and appointed officials have the capacity to ignore nearly anything, no matter how well written it is.
This is more than just a frustration to researchers; it’s a gigantic loss to the world of public administration. We spend lots of time reading through reports and frequently come across nuggets of insight that we believe could help make improvements in nearly every public sector endeavor, from human resources to budgeting to performance management to procurement and on and on. We, and others, can do our best to get attention for this kind of information, but that doesn’t mean that decision makers have the time or the inclination to act on great ideas.
We don’t want to place the blame for the disconnect between academia and practitioners on either party. To one degree or another, both are at fault, with taxpayers and the people who rely on government services – and that’s pretty much everybody except for people who have gone off the grid – as the losers.
Following, from our experience, are six reasons we believe that it’s difficult to close the gap between the world of research and the realm of utility. The first three are aimed at government leaders, the last three have academics in mind…(More)”
First-of-its-kind dataset connects greenhouse gases and air quality
NOAA Research: “The GReenhouse gas And Air Pollutants Emissions System (GRA²PES), from NOAA and the National Institute of Standards and Technology (NIST), combines information on greenhouse gas and air quality pollutant sources into a single national database, offering innovative interactive map displays and new benefits for both climate and public health solutions.
A new U.S.-based system to combine air quality and greenhouse gas pollution sources into a single national research database is now available in the U.S. Greenhouse Gas Center portal. This geospatial data allows leaders at city, state, and regional scales to more easily identify and take steps to address air quality issues while reducing climate-related hazards for populations.
The dataset is the GReenhouse gas And Air Pollutants Emissions System (GRA²PES). A research project developed by NOAA and NIST, GRA²PES captures monthly greenhouse gas (GHG) emissions activity for multiple economic sectors to improve measurement and modeling for both GHG and air pollutants across the contiguous U.S.
Having the GHG and air quality constituents in the same dataset will be exceedingly helpful, said Columbia University atmospheric scientist Roisin Commane, the lead on a New York City project to improve emissions estimates…(More)”.
As AI-powered health care expands, experts warn of biases
Article by Marta Biino: “Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.
It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. While AI has long been used in radiology for image analysis, or in oncology to classify skin lesions, its applications are growing as the technology continues to advance.
OpenAI’s GPT models have outperformed humans in making cancer diagnoses based on MRI reports and beat PhD-holders in standardized science tests, to name just two examples.
However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients…(More)”.
Harnessing the feed: social media for mental health information and support
Report by ReachOut: “…highlights how a social media ban could cut young people off from vital mental health support, including finding that 73 per cent of young people in Australia turn to social media when it comes to support for their mental health.
Based on research with over 2000 young people, the report found a range of benefits for young people seeking mental health support via social media (predominantly TikTok, YouTube and Instagram). 66 per cent of young people surveyed reported increased awareness about their mental health because of relevant content they accessed via social media, 47 per cent said they had looked for information about how to get professional mental health support on social media and 40 per cent said they sought professional support after viewing mental health information on social media.
Importantly, half of young people with a probable mental health condition said that they were searching for mental health information or support on social media because they don’t have access to professional support.
However, young people also highlighted a range of concerns about social media via the research. 38 per cent were deeply concerned about harmful mental health content they have come across on platforms and 43 per cent of the young people who sought support online were deeply concerned about the addictive nature of social media.
The report highlights young people’s calls for social media to be safer. They want: an end to addictive features like infinite scroll, more control over the content they see, better labelling of mental health information from credible sources, better education and more mental health information provided across platforms…(More)”.
How The New York Times incorporates editorial judgment in algorithms to curate its home page
Article by Zhen Yang: “Whether on the web or the app, the home page of The New York Times is a crucial gateway, setting the stage for readers’ experiences and guiding them to the most important news of the day. The Times publishes over 250 stories daily, far more than the 50 to 60 stories that can be featured on the home page at a given time. Traditionally, editors have manually selected and programmed which stories appear, when and where, multiple times daily. This manual process presents challenges:
- How can we provide readers a relevant, useful, and fresh experience each time they visit the home page?
- How can we make our editorial curation process more efficient and scalable?
- How do we maximize the reach of each story and expose more stories to our readers?
To address these challenges, the Times has been actively developing and testing editorially driven algorithms to assist in curating home page content. These algorithms are editorially driven in that a human editor’s judgment or input is incorporated into every aspect of the algorithm — including deciding where on the home page the stories are placed, informing the rankings, and potentially influencing and overriding algorithmic outputs when necessary. From the get-go, we’ve designed algorithmic programming to elevate human curation, not to replace it…
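The division of labour described here — algorithmic ranking with editorial placement and overrides — can be sketched in a few lines. This is an illustrative toy, not the Times’s actual system; the `Story` fields, the `curate` function and the scores are all hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Story:
    slug: str
    score: float                        # hypothetical algorithmic score (relevance, freshness, ...)
    pinned_rank: Optional[int] = None   # editor-assigned fixed slot, overriding the algorithm

def curate(stories: List[Story], slots: int) -> List[str]:
    """Fill home-page slots: editor pins take their exact positions;
    the remaining slots are filled by descending algorithmic score."""
    placement: List[Optional[Story]] = [None] * slots
    # Editorial overrides go in first, at the positions the editor chose.
    for s in sorted((s for s in stories if s.pinned_rank is not None),
                    key=lambda s: s.pinned_rank):
        if s.pinned_rank < slots and placement[s.pinned_rank] is None:
            placement[s.pinned_rank] = s
    # Remaining slots are filled algorithmically, best score first.
    ranked = iter(sorted((s for s in stories if s.pinned_rank is None),
                         key=lambda s: s.score, reverse=True))
    for i in range(slots):
        if placement[i] is None:
            placement[i] = next(ranked, None)
    return [s.slug for s in placement if s is not None]

stories = [Story("budget", 0.9), Story("feature", 0.4, pinned_rank=0),
           Story("sports", 0.7), Story("recipe", 0.2)]
print(curate(stories, 3))  # → ['feature', 'budget', 'sports']
```

The point of the sketch is the precedence: human placement decisions constrain the output, and the algorithm only ranks within the space the editors leave open.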
The Times began using algorithms for content recommendations in 2011 but only recently started applying them to home page modules. For years, we only had one algorithmically-powered module, “Smarter Living,” on the home page, and later, “Popular in The Times.” Both were positioned relatively low on the page.
Three years ago, the formation of a cross-functional team — including newsroom editors, product managers, data scientists, data analysts, and engineers — brought the momentum needed to advance our responsible use of algorithms. Today, nearly half of the home page is programmed with assistance from algorithms that help promote news, features, and sub-brand content, such as The Athletic and Wirecutter. Some of these modules, such as the features module located at the top right of the home page on the web version, are in highly visible locations. During major news moments, editors can also deploy algorithmic modules to display additional coverage to complement a main module of stories near the top of the page. (The topmost news package of Figure 1 is an example of this in action.)…(More)”
Data-driven decisions: the case for randomised policy trials
Speech by Andrew Leigh: “…In 1747, 31-year-old Scottish naval surgeon James Lind set about determining the most effective treatment for scurvy, a disease that was killing thousands of sailors around the world. Selecting 12 sailors suffering from scurvy, Lind divided them into six pairs. Each pair received a different treatment: cider; sulphuric acid; vinegar; seawater; a concoction of nutmeg, garlic and mustard; and two oranges and a lemon. In less than a week, the pair who had received oranges and lemons were back on active duty, while the others languished. Given that sulphuric acid was the British Navy’s main treatment for scurvy, this was a crucial finding.
The trial provided robust evidence for the powers of citrus because it created a credible counterfactual. The sailors didn’t choose their treatments, nor were they assigned based on the severity of their ailment. Instead, they were randomly allocated, making it likely that differences in their recovery were due to the treatment rather than to other characteristics.
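The logic of random allocation can be made concrete with a small simulation (an illustrative sketch, not part of the speech; the sample size and effect sizes are invented). Because assignment is random, the two arms differ in expectation only by treatment, so the simple difference in recovery rates recovers the causal effect:

```python
import random

random.seed(0)  # reproducible illustration

def simulate_trial(n_per_arm=1000, base_recovery=0.2, treatment_effect=0.6):
    """Two-arm randomised trial: each participant recovers with probability
    base_recovery; treatment raises that probability by treatment_effect.
    The difference in observed recovery rates estimates the causal effect."""
    control = sum(random.random() < base_recovery for _ in range(n_per_arm))
    treated = sum(random.random() < base_recovery + treatment_effect
                  for _ in range(n_per_arm))
    return (treated - control) / n_per_arm

effect_estimate = simulate_trial()
print(round(effect_estimate, 2))  # close to the true effect of 0.6
```

With non-random assignment (say, healthier sailors choosing citrus), the same difference in rates would mix the treatment effect with pre-existing differences between the groups, which is precisely what randomisation rules out.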
Lind’s randomised trial, one of the first in history, has attained legendary status. Yet because 1747 was so long ago, it is easy to imagine that the methods he used are no longer applicable. After all, Lind’s research was conducted at a time before electricity, cars and trains, an era when slavery was rampant and education was reserved for the elite. Surely, some argue, ideas from such an age have been superseded today.
In place of randomised trials, some put their faith in ‘big data’. Between large-scale surveys and extensive administrative datasets, the world is awash in data as never before. Each day, hundreds of exabytes of data are produced. Big data has improved the accuracy of weather forecasts, permitted researchers to study social interactions across racial and ethnic lines, enabled the analysis of income mobility at a fine geographic scale and much more…(More)”
Citizen scientists will be needed to meet global water quality goals
University College London: “Sustainable development goals for water quality will not be met without the involvement of citizen scientists, argues an international team led by a UCL researcher, in a new policy brief.
The policy brief and attached technical brief are published by Earthwatch Europe on behalf of the United Nations Environment Program (UNEP)-coordinated World Water Quality Alliance that has supported citizen science projects in Kenya, Tanzania and Sierra Leone. The reports detail how policymakers can learn from examples where citizen scientists (non-professionals engaged in the scientific process, such as by collecting data) are already making valuable contributions.
The report authors focus on how to meet one of the UN’s Sustainable Development Goals around improving water quality, which the UN states is necessary for the health and prosperity of people and the planet…
“Locals who know the water and use the water are both a motivated and knowledgeable resource, so citizen science networks can enable them to provide large amounts of data and act as stewards of their local water bodies and sources. Citizen science has the potential to revolutionize the way we manage water resources to improve water quality.”…
The report authors argue that improving water quality data will require governments and organizations to work collaboratively with locals who collect their own data, particularly where government monitoring is scarce, but also where there is government support for citizen science schemes. Water quality improvement has a particularly high potential for citizen scientists to make an impact, as professionally collected data is often limited by a shortage of funding and infrastructure, while there are effective citizen science monitoring methods that can provide reliable data.
The authors write that the value of citizen science goes beyond the data collected, as there are other benefits pertaining to education of volunteers, increased community involvement, and greater potential for rapid response to water quality issues…(More)”.