AI in the Public Service: Here for Good


Special Issue of Ethos: “…For the public good, we want AI to help unlock and drive transformative impact, in areas where there is significant potential for breakthroughs, such as cancer research, material sciences or climate change. But we also want to raise the level of generalised adoption. For the user base in the public sector, we want to learn how best to use this new tool in ways that can allow us to not only do things better, but do better things.

This is not to suggest that AI is always the best solution: it is one of many tools in the digital toolkit. Sometimes, simpler computational methods will suffice. That said, AI represents new, untapped potential for the Public Service to enhance our daily work and deliver better outcomes that ultimately benefit Singapore and Singaporeans….

To promote general adoption, we made available AI tools such as Pair, SmartCompose, and AIBots. They are useful to a wide range of public officers for many general tasks. Other common tools of this nature may include chatbots to support customer-facing and service delivery needs, translation, summarisation, and so on. Much of what public officers do involves words and language, which is an area that LLM-based AI technology can now help with.

Beyond improving the productivity of the Public Service, the real value lies in AI’s broader ability to transform our business and operating models to deliver greater impact. In driving adoption, we want to encourage public officers to experiment with different approaches to figure out where we can create new value by doing things differently, rather than just settle for incremental value from doing things the same old ways using new tools.

For example, we have seen how AI and automation have transformed language translation, software engineering, identity verification and border clearance. This is just the beginning and much more is possible in many other domains…(More)”.

AI helped Uncle Sam catch $1 billion of fraud in one year. And it’s just getting started


Article by Matt Egan: “The federal government’s bet on using artificial intelligence to fight financial crime appears to be paying off.

Machine learning AI helped the US Treasury Department to sift through massive amounts of data and recover $1 billion worth of check fraud in fiscal 2024 alone, according to new estimates shared first with CNN. That’s nearly triple what the Treasury recovered in the prior fiscal year.

“It’s really been transformative,” Renata Miskell, a top Treasury official, told CNN in a phone interview.

“Leveraging data has upped our game in fraud detection and prevention,” Miskell said.

The Treasury Department credited AI with helping officials prevent and recover more than $4 billion worth of fraud overall in fiscal 2024, a six-fold spike from the year before.

US officials quietly started using AI to detect financial crime in late 2022, taking a page out of what many banks and credit card companies already do to stop bad guys.

The goal is to protect taxpayer money against fraud, which spiked during the Covid-19 pandemic as the federal government scrambled to disburse emergency aid to consumers and businesses.

To be sure, Treasury is not using generative AI, the kind that has captivated users of OpenAI’s ChatGPT and Google’s Gemini by generating images, crafting song lyrics and answering complex questions (even though it still sometimes struggles with simple queries)…(More)”.
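
The article does not describe Treasury's models, but the general technique it points to, machine learning that sifts large volumes of payment data for suspicious patterns, is similar to the anomaly screening banks and card networks run before human review. The sketch below is a minimal, hypothetical illustration of that idea using scikit-learn's IsolationForest; every feature, value and threshold is invented and does not describe Treasury's actual system.

```python
# Hypothetical sketch of unsupervised anomaly scoring for check payments.
# Not Treasury's system: features and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Invented features per payment: amount (USD), payee account age (days),
# checks cashed by the payee in the past 30 days, distance from issuing region (km).
legit = np.column_stack([
    rng.normal(1_200, 400, 5_000),
    rng.normal(2_000, 600, 5_000),
    rng.poisson(2, 5_000),
    rng.exponential(50, 5_000),
])
suspect = np.column_stack([
    rng.normal(9_500, 2_000, 50),
    rng.normal(30, 10, 50),
    rng.poisson(40, 50),
    rng.exponential(1_500, 50),
])
X = np.vstack([legit, suspect])

# Unsupervised model: isolates points that look unlike the bulk of the data.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower score = more anomalous

# Route the lowest-scoring payments to human investigators first.
flagged = np.argsort(scores)[:50]
print(f"{len(flagged)} payments queued for manual review")
```

In practice, scores like these are only a first filter; confirmed cases are typically fed back into supervised models and investigator workflows.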

The Number


Article by John Lanchester: “…The other pieces published in this series have human protagonists. This one doesn’t: The main character of this piece is not a person but a number. Like all the facts and numbers cited above, it comes from the federal government. It’s a very important number, which has for a century described economic reality, shaped political debate and determined the fate of presidents: the consumer price index.

The CPI is crucial for multiple reasons, and one of them is not because of what it is but what it represents. The gathering of data exemplifies our ambition for a stable, coherent society. The United States is an Enlightenment project based on the supremacy of reason; on the idea that things can be empirically tested; that there are self-evident truths; that liberty, progress and constitutional government walk arm in arm and together form the recipe for the ideal state. Statistics — numbers created by the state to help it understand itself and ultimately to govern itself — are not some side effect of that project but a central part of what government is and does…(More)”.

WikiProject AI Cleanup


Article by Emanuel Maiberg: “A group of Wikipedia editors have formed WikiProject AI Cleanup, “a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.”

The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.

“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”…(More)”.
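
Lebleu's description of spotting "common AI catchphrases" lends itself to a very simple first-pass filter. The sketch below is a hypothetical illustration of that idea, flagging sentences that contain phrases often associated with chatbot output; the phrase list is an assumption for demonstration, not the project's actual criteria, which rely on human judgment as much as pattern matching.

```python
import re

# An assumed, illustrative list of phrases often associated with chatbot output.
# WikiProject AI Cleanup maintains its own, far more nuanced criteria.
AI_TELLS = [
    r"as an ai language model",
    r"as of my last knowledge update",
    r"i cannot fulfill that request",
    r"it[’']s important to note that",
    r"in the ever-evolving (world|landscape) of",
]
PATTERN = re.compile("|".join(AI_TELLS), flags=re.IGNORECASE)

def flag_suspect_sentences(article_text: str) -> list[str]:
    """Return sentences containing any of the listed catchphrases."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return [s for s in sentences if PATTERN.search(s)]

sample = (
    "The fortress was built in the 14th century. It’s important to note that "
    "castles played a vital role in the ever-evolving landscape of medieval Europe."
)
for hit in flag_suspect_sentences(sample):
    print("Possible AI-generated sentence:", hit)
```

A filter like this catches only the most egregious cases, which is consistent with how the editors describe using it: as a quick way to spot candidates for closer human review.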

What AI Can Do for Your Country


Article by Jylana L. Sheats: “…Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services.

The early applications illustrate the potential for AI to make government operations more effective and responsive. They illustrate the looming challenges, too. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces a daunting task ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded into more government operations, the opportunities for misuse and unintended consequences will only expand…(More)”.
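
One concrete way agencies can probe the bias concern raised here is to compare how often an algorithm flags people from different demographic groups. The sketch below is an illustrative, hypothetical audit only: the data are invented, and the "four-fifths" threshold is a rough screening heuristic borrowed from employment-discrimination practice, not a legal standard for AI systems.

```python
import pandas as pd

# Invented example: 1,000 cases scored by a hypothetical flagging algorithm,
# split across two demographic groups.
records = pd.DataFrame({
    "group":   ["A"] * 500 + ["B"] * 500,
    "flagged": [1] * 60 + [0] * 440 + [1] * 110 + [0] * 390,
})

# Flag rate per group (a simple demographic-parity check).
rates = records.groupby("group")["flagged"].mean()
print(rates)

# Ratio of lowest to highest flag rate; values far below 1.0 suggest one group
# is being flagged disproportionately and the system warrants closer review.
disparity = rates.min() / rates.max()
print(f"Disparity ratio: {disparity:.2f}")
if disparity < 0.8:  # the rough "four-fifths" screening heuristic
    print("Potential disparate impact: inspect features, labels and training data.")
```

Parity in flag rates is only one lens; audits of this kind typically also compare error rates and downstream outcomes across groups.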

Behavioural science: could supermarket loyalty cards nudge us to make healthier choices?


Article by Magda Osman: “Ken Murphy, CEO of the British multinational supermarket chain Tesco, recently said at a conference that Tesco “could use Clubcard data to nudge customers towards healthier choices”.

So how would this work, and do we want it? Our recent study, published in the Scientific Journal of Research and Reviews, provides an answer.

Loyalty schemes date back at least as far as the 1980s, with the introduction of airlines’ frequent flyer programmes.

Advancements in loyalty schemes have been huge, with some even using gamified approaches, such as leaderboards, trophies and treasure hunts, to keep us engaged. The loyalty principle relies on a form of social exchange, namely reciprocity.

The ongoing reciprocal relationship means that we use a good or service regularly because we trust the service provider, we are satisfied with the service, and we deem the rewards we get as reasonable – be they discounts, vouchers or gifts.

In exchange, we accept that, in many cases, loyalty schemes collect data on us. Our purchasing history, often tied to our demographics, generates improvements in the delivery of the service.

If we accept this, then we continue to benefit from reward schemes, such as promotional offers or other discounts. The effectiveness depends not only on making attractive offers to us for things we are interested in purchasing, but also on other discounted items that we hadn’t considered buying…(More)”

Ensuring citizens’ assemblies land


Article by Graham Smith: “…the evidence shows that while the recommendations of assemblies are well considered and could help shape more robust policy, too often they fail to land. Why is this?

The simple answer is that so much time, resources and energy are spent on organising the assembly itself – ensuring the best possible experience for citizens – that the relationship with the local authority and its decision-making processes is neglected.

First, the question asked of the assembly does not always relate to a specific set of decisions about to be made by an authority. Is the relevant policy process open and ready for input? On a number of occasions assemblies have taken place just after a new policy or strategy has been agreed. Disastrous timing.

This does not mean assemblies should only be run when they are tied to a particular decision-making process. Sometimes it is important to open up a policy area with a broad question. And sometimes it makes sense to empower citizens to set the agenda and focus on the issues they find most compelling.

The second element is the failure of authorities to prepare to receive recommendations from citizens.

In one case, the first a public official knew about an assembly was when its recommendations landed on their desk. They were not received in the best spirit.

Too often assemblies are commissioned by enthusiastic politicians and public officials who have not done the necessary work to ensure their colleagues are willing to give a considered response to the citizens’ recommendations. Too often an assembly will be organised by a department or ministry where the results require others in the authority to respond – but those other politicians and officials feel no connection to the process.

And too often, an assembly ends, and it is not clear who within the public authority has the responsibility to take the recommendations forward to ensure they are given a fair hearing across the authority.

For citizens’ assemblies to be effective requires political and administrative work well beyond just organising the assembly. If this is not done, it is not only a waste of resources, but it can do serious damage to democracy and trust as those citizens who have invested their time and energy into the process become disillusioned.

Those authorities where citizens’ assemblies have had meaningful impacts are those that have not only invested in the assembly, but also in preparing the authority to receive the recommendations. Often this has meant continuing support and resourcing for assembly members after the process. They are the best advocates for their work…(More)”


How Generative AI Content Could Influence the U.S. Election


Article by Valerie Wirtschafter: “…The contested nature of the presidential race means such efforts will undoubtedly continue, but they likely will remain discoverable, and their reach and ability to shape election outcomes will be minimal. Instead, the most meaningful uses of generative AI content could occur in highly targeted scenarios just prior to the election and/or in a contentious post-election environment where experience has demonstrated that potential “evidence” of malfeasance need not be true to mobilize a small subset of believers to act.

Because U.S. elections are managed at the state and county levels, low-level actors in some swing precincts or counties are catapulted to the national spotlight every four years. Since these actors are not well known to the public, targeted and personal AI-generated content can cause significant harm. Before the election, this type of fabricated content could take the form of a last-minute phone call from someone claiming to be an election worker, alerting voters to an issue at their polling place.

After the election, it could become harassment of election officials or “evidence” of foul play. Due to the localized and personalized nature of this type of effort, it could be less rapidly discoverable for unknown figures not regularly in the public eye, difficult to debunk or prevent with existing tools and guardrails, and damaging to reputations. This tailored approach need not be driven by domestic actors—in fact, in the lead up to the 2020 elections, Iranian actors pretended to be members of the Proud Boys and sent threatening emails to Democratic voters in select states demanding they vote for Donald Trump. Although election officials have worked tirelessly to brace for this possibility, they are correct to be on guard…(More)”

Science Diplomacy and the Rise of Technopoles


Article by Vaughan Turekian and Peter Gluckman: “…Science diplomacy has an important, even existential imperative to help the world reconsider the necessity of working together toward big global goals. Climate change may be the most obvious example of where global action is needed, but many other issues have similar characteristics—deep ocean resources, space, and other ungoverned areas, to name a few.

However, taking up this mantle requires acknowledging why past efforts have failed to meet their goals. The global commitment to Sustainable Development Goals (SDGs) is an example. Weaknesses in the UN system, compounded by varied commitments from member states, will prevent the achievement of the SDGs by 2030. This year’s UN Summit of the Future is intended to reboot the global commitment to the sustainability agenda. Regardless of what type of agreement is signed at the summit, its impact may be limited.  

The science community must play an active part in ensuring progress is in fact made, but that will require an expansion of the community’s current role. To understand what this might mean, consider that the Pact for the Future agreed in New York City in September 2024 places “science, technology, and innovation” as one of its five themes. But that becomes actionable either in the narrow sense that technology will provide “answers” to global problems or in the platitudinous sense that science provides advice that is not acted upon. This dichotomy of unacceptable approaches has long bedeviled science’s influence.

For the world to make better use of science, science must take on an expanded responsibility in solving problems at both global and local scales. And science itself must become part of a toolkit—both at the practical and the diplomatic level—to address the sorts of challenges the world will face in the future. To make this happen, more countries must make science diplomacy a core part of their agenda by embedding science advisors within foreign ministries and connecting diplomats to science communities.

As the pace of technological change generates both existential risk and economic, environmental, and social opportunities, science diplomacy has a vital task in balancing outcomes for the benefit of more people. It can also bring the science community (including the social sciences and humanities) to play a critical role alongside nation states. And, as new technological developments enable nonstate actors, and especially the private sector, science diplomacy has an important role to play in helping nation states develop policy that can identify common solutions and engage key partners…(More)”.

As AI-powered health care expands, experts warn of biases


Article by Marta Biino: “Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.

It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. While AI has long been used in radiology for image analysis, or in oncology to classify skin lesions, for example, its applications are growing as the technology continues to advance.

OpenAI’s GPT models have outperformed humans in making cancer diagnoses based on MRI reports and beat PhD-holders in standardized science tests, to name a few examples.

However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients…(More)”.