Fixing frictions: ‘sludge audits’ around the world


OECD Report: “Governments worldwide are increasingly adopting behavioural science methodologies to address “sludge” – the unjustified frictions impeding people’s access to government services and exacerbating psychological burdens. Sludge audits, grounded in behavioural science, provide a structured approach for identifying, quantifying, and preventing sludge in public services and government processes. This document delineates Good Practice Principles, derived from ten case studies conducted during the International Sludge Academy, aimed at promoting the integration of sludge audit methodologies into public governance and service design. By enhancing government efficiency and bolstering public trust in government, these principles contribute to the broader agenda on administrative simplification, digital services, and public sector innovation…(More)”.

Your Driving App Is Leading You Astray


Article by Julia Angwin: “…If you use a navigation app, you probably have felt helpless anger when your stupid phone endangers your life, and the lives of all the drivers around you, to potentially shave a minute or two from your drive time. Or maybe it’s stuck you on an ugly freeway when a glorious, ocean-hugging alternative lies a few miles away. Or maybe it’s trapped you on a route with no four-way stops, ignoring a less stressful solution that doesn’t leave you worried about a car barreling out of nowhere.

For all the discussion of the many extraordinary ways algorithms have changed our society and our lives, one of the most impactful, and most infuriating, often escapes notice. Dominated by a couple of enormously powerful tech monopolists that have better things to worry about, our leading online mapping systems from Google and Apple are not nearly as good as they could be.

You may have heard the extreme stories, such as when navigation apps like Waze and Google Maps apparently steered drivers into lakes and onto impassable dirt roads, or when jurisdictions beg Waze to stop dumping traffic onto their residential streets. But the reality is these apps affect us, our roads and our communities every minute of the day. Primarily programmed to find the fastest route, they endanger and infuriate us on a remarkably regular basis….

The best hope for competition relies on the success of OpenStreetMap. Its data underpins most maps other than Google’s, including those of Amazon, Facebook and Apple, but it is so under-resourced that it only recently hired paid systems administrators to ensure its back-end machines kept running… In addition, we can promote competition by using the few available alternatives. To navigate cities with public transit, try apps such as Citymapper that offer bike, transit and walking directions. Or use the privacy-focused Organic Maps…(More)”.

The Economic Case for Reimagining the State


Report by the Tony Blair Institute: “The new government will need to lean in to support the diffusion of AI-era tech across the economy by adopting a pro-innovation, pro-technology stance, as advocated by the Tony Blair Institute for Global Change in our paper Accelerating the Future: Industrial Strategy in the Era of AI.

AI-era tech can also transform public services, creating a smaller, lower-cost state that delivers better outcomes for citizens. New TBI analysis suggests:

  • Adoption of AI across the public-sector workforce could save around one-fifth of workforce time at a comparatively low cost. If the government chooses to bank these time savings and reduce the size of the workforce, this could result in net savings of £10 billion per year by the end of this Parliament and £34 billion per year by the end of the next – enough to pay for the entire defence budget.
  • AI-era tech also offers significant potential to improve the UK’s health services. We envisage a major expansion of the country’s preventative-health-care system, including: a digital health record for every citizen; improved access to health checks online, at home and on the high street; and a wider rollout of preventative treatments across the population. This programme could lead to the triple benefit of a healthier population, a healthier economy (with more people in work) and healthier public finances (since more workers mean more tax revenues). Even a narrow version of this programme – focused only on cardiovascular disease – could lead to 70,000 more people in work and generate net savings to the Exchequer worth £600 million by the end of this parliamentary term, and £1.2 billion by the end of the next. Much larger gains are possible – worth £6 billion per year by 2040 – if medical treatments continue to advance and the programme expands to cover a wider range of conditions, including obesity and cancer.
  • Introducing a digital ID could significantly improve the way that citizens interact with government, in terms of saving them time, easing access and creating a more personalised service. A digital ID could also generate a net gain of about £2 billion per year for the Exchequer by helping to reduce benefit fraud, improve the efficiency of tax-revenue collection and better target welfare payments in a crisis. Based on international experience, we think it is achievable for the government to implement a digital ID within three years and generate cumulative net savings of almost £4 billion during this Parliament, and nearly £10 billion during the next term.
  • AI could also lead to a 6 per cent boost in educational attainment by helping to improve the quality of teaching, save teacher time and improve the ability of students to absorb lesson content. These gains would take time to materialise but could eventually raise UK GDP by up to 6 per cent in the long run and create more than £30 billion in fiscal space per year.

The four public-sector use cases outlined above could create substantial fiscal savings for the new government worth £12 billion a year (0.4 per cent of GDP) by the end of this parliamentary term, £37 billion (1.3 per cent of GDP) by the end of the next, and more than £40 billion (1.5 per cent of GDP) by 2040…(More)”.

Scaling Synthetic Data Creation with 1,000,000,000 Personas


Paper by Xin Chan, et al: “We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub — a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world’s total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub’s use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development…(More)”.
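The recipe behind persona-driven synthesis is easy to sketch: prepend a persona description to an otherwise fixed task prompt, so the same LLM yields differently grounded outputs for each persona. The snippet below is a minimal illustration only, with hand-written personas and a stand-in `complete()` helper in place of a real LLM client; it is not the paper’s Persona Hub pipeline.

```python
# Minimal sketch of persona-driven data synthesis (illustrative, not Persona Hub's code).

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; swap in any chat/completion client."""
    return f"[LLM output for prompt starting: {prompt[:40]}...]"

# Tiny hand-written stand-in for Persona Hub's billion web-curated personas.
personas = [
    "a ship mechanic who services diesel engines on fishing trawlers",
    "a high-school track coach planning interval training sessions",
    "a museum curator cataloguing 18th-century ceramics",
]

TEMPLATE = (
    "You are {persona}.\n"
    "Write one challenging math word problem grounded in your daily work, "
    "then give a step-by-step solution."
)

def synthesize(personas: list[str]) -> list[dict]:
    """Generate one synthetic (persona, sample) pair per persona."""
    return [
        {"persona": p, "sample": complete(TEMPLATE.format(persona=p))}
        for p in personas
    ]

samples = synthesize(personas)
print(len(samples))  # 3 diverse samples from the same underlying model
```

Because only the persona string varies, the approach scales with the size of the persona collection, which is what makes a billion-persona hub useful for producing diverse data at scale.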

How to build a Collective Mind that speaks for humanity in real-time


Blog by Louis Rosenberg: “This begs the question — could large human groups deliberate in real-time with the efficiency of fish schools and quickly reach optimized decisions?

For years this goal seemed impossible. That’s because conversational deliberations have been shown to be most productive in small groups of 4 to 7 people and quickly degrade as groups grow larger. This is because the “airtime per person” gets progressively squeezed and the wait-time to respond to others steadily increases. By 12 to 15 people, the conversational dynamics change from thoughtful debate to a series of monologues that become increasingly disjointed. By 20 people, the dialog ceases to be a conversation at all. This problem seemed impenetrable until recent advances in Generative AI opened up new solutions.

The resulting technology is called Conversational Swarm Intelligence and it promises to allow groups of almost any size (200, 2000, or even 2 million people) to discuss complex problems in real-time and quickly converge on solutions with significantly amplified intelligence. The first step is to divide the population into small subgroups, each sized for thoughtful dialog. For example, a 1000-person group could be divided into 200 subgroups of 5, each routed into their own chat room or video conferencing session. Of course, this does not create a single unified conversation — it creates 200 parallel conversations…(More)”.
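The subgrouping step Rosenberg describes is simple to sketch; what the snippet below leaves out is whatever mechanism links the rooms into a single deliberation, which the excerpt stops short of describing. The function name and room size are illustrative assumptions, not the actual system.

```python
import random

def assign_rooms(participants: list[str], room_size: int = 5) -> list[list[str]]:
    """Randomly partition participants into conversation-sized subgroups."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    return [shuffled[i:i + room_size] for i in range(0, len(shuffled), room_size)]

# 1,000 participants -> 200 rooms of 5, each routed to its own chat or video session.
rooms = assign_rooms([f"participant_{n}" for n in range(1000)])
print(len(rooms), len(rooms[0]))  # 200 5
```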

Collaborating with Journalists and AI: Leveraging Social Media Images for Enhanced Disaster Resilience and Recovery


Paper by Dhiraj Murthy et al: “Methods to meaningfully integrate journalists into crisis informatics remain lacking. We explored the feasibility of generating a real-time, priority-driven map of infrastructure damage during a natural disaster by strategically selecting journalist networks to identify sources of image-based infrastructure-damage data. Using the Twitter REST API, 1,000,522 tweets were collected from September 13-18, 2018, during and after Hurricane Florence made landfall in the United States. Tweets were classified by source (e.g., news organizations or citizen journalists), and 11,638 images were extracted. We utilized Google’s AutoML Vision software to successfully develop a machine learning image classification model to interpret this sample of images. Of the labeled data, 80% was used for training, 10% for validation, and 10% for testing. The model achieved an average precision of 90.6%, an average recall of 77.2%, and an F1 score of 0.834. In the future, establishing strategic networks of journalists ahead of disasters will reduce the time needed to identify disaster-response targets, thereby focusing relief and recovery efforts in real time. This approach ultimately aims to save lives and mitigate harm…(More)”.
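The reported F1 score follows directly from the precision and recall figures, since F1 is their harmonic mean. The quick check below also makes the 80/10/10 split concrete; scikit-learn’s `train_test_split` is used purely for illustration, the authors’ split was handled within AutoML Vision, and the image counts here are a stand-in.

```python
from sklearn.model_selection import train_test_split

# F1 is the harmonic mean of precision and recall; the paper's numbers check out.
precision, recall = 0.906, 0.772
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.834

# Illustrative 80/10/10 split (stand-in indices; 11,638 images were extracted in total).
labeled = list(range(11_638))
train, holdout = train_test_split(labeled, test_size=0.2, random_state=0)
val, test = train_test_split(holdout, test_size=0.5, random_state=0)
print(len(train), len(val), len(test))  # roughly 9310 / 1164 / 1164
```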

Citizen engagement


European Union Report: “…considers how to approach citizen engagement for the EU missions. Engagement and social dialogue should aim to ensure that innovation is human-centred and that missions maintain wide public legitimacy. But citizen engagement is complex: it significantly changes the traditional responsibilities of the research and innovation community and calls for new capabilities. This report provides insights to build these capabilities and explores effective ways to help citizens understand their role within the EU missions, showing how to engage them throughout the various stages of implementation. The report considers both the challenges and administrative burdens of citizen engagement and sets out how to overcome them, as well as demonstrating the wider opportunity of “double additionality”, where citizen engagement methods serve to fundamentally transform an entire research and innovation portfolio…(More)”.

A new index is using AI tools to measure U.S. economic growth in a broader way


Article by Jeff Cox: “Measuring the strength of the sprawling U.S. economy is no easy task, so one firm is sending artificial intelligence in to do the job.

The Zeta Economic Index, launched Monday, uses generative AI to analyze what its developers call “trillions of behavioral signals,” largely focused on consumer activity, to score the economy on both a broad measure of health and a separate measure of stability.

At its core, the index will gauge online and offline activity across eight categories, aiming to give a comprehensive look that incorporates standard economic data points such as unemployment and retail sales combined with high-frequency information for the AI age.

“The algorithm is looking at traditional economic indicators that you would normally look at. But then inside of our proprietary algorithm, we’re ingesting the behavioral data and transaction data of 240 million Americans, which nobody else has,” said David Steinberg, co-founder, chairman and CEO of Zeta Global.

“So instead of looking at the data in the rearview mirror like everybody else, we’re trying to put it out in advance to give a 30-day advanced snapshot of where the economy is going,” he added…(More)”.
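Zeta has not published its methodology, but the general shape of such a composite index is familiar: standardize each category’s signal, then combine the categories with weights into a headline score. The sketch below is purely illustrative; the category names, weights and scaling are invented for exposition and are not Zeta’s algorithm.

```python
# Hypothetical category readings, assumed already standardized to a -1..1 range.
# Zeta's real inputs span eight online/offline activity categories plus standard
# indicators such as unemployment and retail sales.
categories = {
    "retail_spend": 0.8, "dining": 0.4, "travel": 0.1, "entertainment": 0.3,
    "autos": -0.2, "housing": -0.5, "jobs": 0.6, "technology": 0.7,
}
weights = {name: 1 / len(categories) for name in categories}  # equal weights for illustration

def composite_score(readings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of standardized readings, mapped onto a 0-100 index."""
    raw = sum(weights[k] * readings[k] for k in readings)
    return 50 + 50 * raw

print(round(composite_score(categories, weights), 1))  # 63.8 for these made-up readings
```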

Exploring Digital Biomarkers for Depression Using Mobile Technology


Paper by Yuezhou Zhang et al: “With the advent of ubiquitous sensors and mobile technologies, wearables and smartphones offer a cost-effective means for monitoring mental health conditions, particularly depression. These devices enable the continuous collection of behavioral data, providing novel insights into the daily manifestations of depressive symptoms.

We found several significant links between depression severity and various behavioral biomarkers: elevated depression levels were associated with diminished sleep quality (assessed through Fitbit metrics), reduced sociability (approximated by Bluetooth), decreased levels of physical activity (quantified by step counts and GPS data), a slower cadence of daily walking (captured by smartphone accelerometers), and disturbances in circadian rhythms (analyzed across various data streams).

Leveraging digital biomarkers for assessing and continuously monitoring depression introduces a new paradigm in early detection and development of customized intervention strategies. Findings from these studies not only enhance our comprehension of depression in real-world settings but also underscore the potential of mobile technologies in the prevention and management of mental health issues…(More)”.
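The association analysis summarized above can be illustrated with a simple rank correlation between a depression-severity score and a derived behavioral feature. The snippet below uses made-up per-participant aggregates and assumed feature names (a PHQ-style severity score, sleep efficiency, step counts); it mirrors the kind of analysis described, not the authors’ actual pipeline.

```python
import pandas as pd
from scipy.stats import spearmanr

# Illustrative per-participant aggregates; real studies derive these from weeks
# of Fitbit, GPS, Bluetooth and accelerometer streams.
df = pd.DataFrame({
    "severity_score":   [3, 7, 12, 15, 18, 21, 5, 10],                    # depression severity
    "sleep_efficiency": [0.93, 0.88, 0.81, 0.78, 0.72, 0.70, 0.90, 0.83],
    "daily_steps":      [9200, 7400, 5100, 4800, 3500, 3100, 8600, 6000],
})

# Spearman correlation is a reasonable default for non-normal behavioral features.
for feature in ["sleep_efficiency", "daily_steps"]:
    rho, p = spearmanr(df["severity_score"], df[feature])
    print(f"{feature}: rho={rho:.2f}, p={p:.4f}")
```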

Building an AI ecosystem in a small nation: lessons from Singapore’s journey to the forefront of AI


Paper by Shaleen Khanal, Hongzhou Zhang & Araz Taeihagh: “Artificial intelligence (AI) is arguably the most transformative technology of our time. While all nations would like to mobilize their resources to play an active role in AI development and utilization, only a few nations, such as the United States and China, have the resources and capacity to do so. How, then, can smaller or less resourceful countries navigate the technological terrain to emerge at the forefront of AI development? This research presents an in-depth analysis of Singapore’s journey in constructing a robust AI ecosystem amidst the prevailing global dominance of the United States and China. By examining the case of Singapore, we argue that by designing policies that address risks associated with AI development and implementation, smaller countries can create a vibrant AI ecosystem that encourages experimentation and early adoption of the technology. In addition, through Singapore’s case, we demonstrate the active role the government can play, not only as a policymaker but also as a steward to guide the rest of the economy towards the application of AI…(More)”.