The Digital Economy Report 2024

Report by UNCTAD: “…underscores the urgent need for environmentally sustainable and inclusive digitalization strategies.

Digital technology and infrastructure depend heavily on raw materials, and the production and disposal of more and more devices, along with growing water and energy needs, are taking an increasing toll on the planet.

For example, the production and use of digital devices, data centres and information and communications technology (ICT) networks account for an estimated 6% to 12% of global electricity use.

Developing countries bear the brunt of the environmental costs of digitalization while reaping fewer benefits. They export low value-added raw materials and import high value-added devices, along with increasing digital waste. Geopolitical tensions over critical minerals, abundant in many of these countries, complicate the challenges.

The report calls for bold action from policymakers, industry leaders and consumers. It urges a global shift towards a circular digital economy, focusing on circularity by design through durable products, responsible consumption, reuse and recycling, and sustainable business models…(More)”.

The Era of Predictive AI Is Almost Over

Essay by Dean W. Ball: “Artificial intelligence is a Rorschach test. When OpenAI’s GPT-4 was released in March 2023, Microsoft researchers triumphantly, and prematurely, announced that it possessed “sparks” of artificial general intelligence. Cognitive scientist Gary Marcus, on the other hand, argued that Large Language Models like GPT-4 are nowhere close to the loosely defined concept of AGI. Indeed, Marcus is skeptical of whether these models “understand” anything at all. They “operate over ‘fossilized’ outputs of human language,” he wrote in a 2023 paper, “and seem capable of implementing some automatic computations pertaining to distributional statistics, but are incapable of understanding due to their lack of generative world models.” The “fossils” to which Marcus refers are the models’ training data — these days, something close to all the text on the Internet.

This notion — that LLMs are “just” next-word predictors based on statistical models of text — is so common now as to be almost a trope. It is used, both correctly and incorrectly, to explain the flaws, biases, and other limitations of LLMs. Most importantly, it is used by AI skeptics like Marcus to argue that there will soon be diminishing returns from further LLM development: We will get better and better statistical approximations of existing human knowledge, but we are not likely to see another qualitative leap toward “general intelligence.”
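The "statistical next-word predictor" framing can be made concrete with a toy sketch. The bigram model below is vastly simpler than an LLM (and not how LLMs work internally), but it captures the distributional-statistics idea the skeptics invoke: predict the next word by counting which word most often follows the current one in a training text. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the fossilized outputs of human language".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — it follows "sat" in every training example
```

Scaled from a toy corpus to most of the text on the Internet, and from bigram counts to learned neural representations, this is the mechanism the deflationary view says is "all" an LLM does.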

There are two problems with this deflationary view of LLMs. The first is that next-word prediction, at sufficient scale, can lead models to capabilities that no human designed or even necessarily intended — what some call “emergent” capabilities. The second problem is that increasingly — and, ironically, starting with ChatGPT — language models employ techniques that combust the notion of pure next-word prediction of Internet text…(More)”

Fixing frictions: ‘sludge audits’ around the world

OECD Report: “Governments worldwide are increasingly adopting behavioural science methodologies to address “sludge” – the unjustified frictions impeding people’s access to government services and exacerbating psychological burdens. Sludge audits, grounded in behavioural science, provide a structured approach for identifying, quantifying, and preventing sludge in public services and government processes. This document delineates Good Practice Principles, derived from ten case studies conducted during the International Sludge Academy, aimed at promoting the integration of sludge audit methodologies into public governance and service design. By enhancing government efficiency and bolstering public trust in government, these principles contribute to the broader agenda on administrative simplification, digital services, and public sector innovation…(More)”.

Your Driving App Is Leading You Astray

Article by Julia Angwin: “…If you use a navigation app, you probably have felt helpless anger when your stupid phone endangers your life, and the lives of all the drivers around you, to potentially shave a minute or two from your drive time. Or maybe it’s stuck you on an ugly freeway when a glorious, ocean-hugging alternative lies a few miles away. Or maybe it’s trapped you on a route with no four-way stops, ignoring a less stressful solution that doesn’t leave you worried about a car barreling out of nowhere.

For all the discussion of the many extraordinary ways algorithms have changed our society and our lives, one of the most impactful, and most infuriating, often escapes notice. Dominated by a couple of enormously powerful tech monopolists that have better things to worry about, our leading online mapping systems from Google and Apple are not nearly as good as they could be.

You may have heard the extreme stories, such as when navigation apps like Waze and Google Maps apparently steered drivers into lakes and onto impassable dirt roads, or when jurisdictions beg Waze to stop dumping traffic onto their residential streets. But the reality is these apps affect us, our roads and our communities every minute of the day. Primarily programmed to find the fastest route, they endanger and infuriate us on a remarkably regular basis….

The best hope for competition relies on the success of OpenStreetMap. Its data underpins most maps other than Google’s, including those of Amazon, Facebook and Apple, but it is so under-resourced that it only recently hired paid systems administrators to ensure its back-end machines kept running… In addition, we can promote competition by using the few available alternatives. To navigate cities with public transit, try apps such as Citymapper that offer bike, transit and walking directions. Or use the privacy-focused Organic Maps…(More)”.

The Economic Case for Reimagining the State

Report by the Tony Blair Institute: “The new government will need to lean in to support the diffusion of AI-era tech across the economy by adopting a pro-innovation, pro-technology stance, as advocated by the Tony Blair Institute for Global Change in our paper Accelerating the Future: Industrial Strategy in the Era of AI.

AI-era tech can also transform public services, creating a smaller, lower-cost state that delivers better outcomes for citizens. New TBI analysis suggests:

  • Adoption of AI across the public-sector workforce could save around one-fifth of workforce time at a comparatively low cost. If the government chooses to bank these time savings and reduce the size of the workforce, this could result in net savings of £10 billion per year by the end of this Parliament and £34 billion per year by the end of the next – enough to pay for the entire defence budget.
  • AI-era tech also offers significant potential to improve the UK’s health services. We envisage a major expansion of the country’s preventative-health-care system, including: a digital health record for every citizen; improved access to health checks online, at home and on the high street; and a wider rollout of preventative treatments across the population. This programme could lead to the triple benefit of a healthier population, a healthier economy (with more people in work) and healthier public finances (since more workers mean more tax revenues). Even a narrow version of this programme – focused only on cardiovascular disease – could lead to 70,000 more people in work and generate net savings to the Exchequer worth £600 million by the end of this parliamentary term, and £1.2 billion by the end of the next. Much larger gains are possible – worth £6 billion per year by 2040 – if medical treatments continue to advance and the programme expands to cover a wider range of conditions, including obesity and cancer.
  • Introducing a digital ID could significantly improve the way that citizens interact with government, in terms of saving them time, easing access and creating a more personalised service. A digital ID could also generate a net gain of about £2 billion per year for the Exchequer by helping to reduce benefit fraud, improve the efficiency of tax-revenue collection and better target welfare payments in a crisis. Based on international experience, we think it is achievable for the government to implement a digital ID within three years and generate cumulative net savings of almost £4 billion during this Parliament, and nearly £10 billion during the next term.
  • AI could also lead to a 6 per cent boost in educational attainment by helping to improve the quality of teaching, save teacher time and improve the ability of students to absorb lesson content. These gains would take time to materialise but could eventually raise UK GDP by up to 6 per cent in the long run and create more than £30 billion in fiscal space per year.

The four public-sector use cases outlined above could create substantial fiscal savings for the new government worth £12 billion a year (0.4 per cent of GDP) by the end of this parliamentary term, £37 billion (1.3 per cent of GDP) by the end of the next, and more than £40 billion (1.5 per cent of GDP) by 2040…(More)”.

Scaling Synthetic Data Creation with 1,000,000,000 Personas

Paper by Xin Chan, et al: “We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub — a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world’s total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub’s use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development…(More)”.
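The core mechanic of persona-driven synthesis can be sketched in a few lines: pairing each persona with the same task template yields a distinct prompt, so one request ("write a math problem") elicits diverse outputs from an LLM. The personas and template below are invented for illustration and the actual LLM call is omitted; Persona Hub itself curates its ~1 billion personas automatically from web data.

```python
# Hypothetical personas standing in for entries from a persona collection.
personas = [
    "a marine biologist studying coral reefs",
    "a medieval-history teacher",
    "a freight-train dispatcher",
]

# One shared task template; the persona slot is what drives diversity.
TEMPLATE = "You are {persona}. Write a math word problem drawn from your daily work."

def build_prompts(personas, template):
    """Combine each persona with the task template into a synthesis prompt."""
    return [template.format(persona=p) for p in personas]

for prompt in build_prompts(personas, TEMPLATE):
    print(prompt)  # each prompt would be sent to an LLM to generate one sample
```

Because each persona steers the model toward a different slice of its world knowledge, scaling the persona list scales the diversity of the resulting synthetic dataset.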

How to build a Collective Mind that speaks for humanity in real-time

Blog by Louis Rosenberg: “This raises the question — could large human groups deliberate in real-time with the efficiency of fish schools and quickly reach optimized decisions?

For years this goal seemed impossible. That’s because conversational deliberations have been shown to be most productive in small groups of 4 to 7 people and quickly degrade as groups grow larger. This is because the “airtime per person” gets progressively squeezed and the wait-time to respond to others steadily increases. By 12 to 15 people, the conversational dynamics change from thoughtful debate to a series of monologues that become increasingly disjointed. By 20 people, the dialog ceases to be a conversation at all. This problem seemed impenetrable until recent advances in Generative AI opened up new solutions.

The resulting technology is called Conversational Swarm Intelligence and it promises to allow groups of almost any size (200, 2000, or even 2 million people) to discuss complex problems in real-time and quickly converge on solutions with significantly amplified intelligence. The first step is to divide the population into small subgroups, each sized for thoughtful dialog. For example, a 1000-person group could be divided into 200 subgroups of 5, each routed into their own chat room or video conferencing session. Of course, this does not create a single unified conversation — it creates 200 parallel conversations…(More)”.
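The first step described above — carving a large population into conversation-sized subgroups — can be sketched as follows. The shuffle and the group size of 5 mirror the example in the text; the seeded randomization is an illustrative choice, not part of the published method.

```python
import random

def make_subgroups(participant_ids, group_size=5, seed=42):
    """Split participants into subgroups sized for thoughtful dialog,
    one subgroup per chat room or video session."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)  # avoid grouping people by sign-up order
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]

rooms = make_subgroups(range(1000))
print(len(rooms))     # 200 parallel conversations
print(len(rooms[0]))  # 5 participants each
```

As the excerpt notes, this partition alone yields 200 parallel conversations, not one unified one; the Conversational Swarm Intelligence technique layers information-passing between subgroups on top of this split.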

Collaborating with Journalists and AI: Leveraging Social Media Images for Enhanced Disaster Resilience and Recovery

Paper by Dhiraj Murthy et al: “Methods to meaningfully integrate journalists into crisis informatics remain lacking. We explored the feasibility of generating a real-time, priority-driven map of infrastructure damage during a natural disaster by strategically selecting journalist networks to identify sources of image-based infrastructure-damage data. Using the Twitter REST API, 1,000,522 tweets were collected from September 13-18, 2018, during and after Hurricane Florence made landfall in the United States. Tweets were classified by source (e.g., news organizations or citizen journalists), and 11,638 images were extracted. We utilized Google’s AutoML Vision software to successfully develop a machine learning image classification model to interpret this sample of images. For model development, 80% of our labeled data was used for training, 10% for validation, and 10% for testing. The model achieved an average precision of 90.6%, an average recall of 77.2%, and an F1 score of 0.834. In the future, establishing strategic networks of journalists ahead of disasters will reduce the time needed to identify disaster-response targets, thereby focusing relief and recovery efforts in real-time. This approach ultimately aims to save lives and mitigate harm…(More)”.
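The reported metrics are internally consistent: the F1 score is the harmonic mean of precision and recall, which can be checked directly from the paper's figures.

```python
# Reported classifier metrics from the paper.
precision = 0.906
recall = 0.772

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.834, matching the paper's reported F1 score
```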

Citizen engagement

European Union Report: “…considers how to approach citizen engagement for the EU missions. Engagement and social dialogue should aim to ensure that innovation is human-centred and that missions maintain wide public legitimacy. But citizen engagement is complex: it significantly changes the traditional responsibilities of the research and innovation community and calls for new capabilities. This report provides insights to build these capabilities and explores effective ways to help citizens understand their role within the EU missions, showing how to engage them throughout the various stages of implementation. The report considers both the challenges and administrative burdens of citizen engagement and sets out how to overcome them, as well as demonstrating the wider opportunity of “double additionality”, where citizen engagement methods serve to fundamentally transform an entire research and innovation portfolio…(More)”.