Spectrum Auctions: Designing markets to benefit the public, industry and the economy


Book by Geoffrey Myers: “Access to the radio spectrum is vital for modern digital communication. It is an essential component for smartphone capabilities, the Cloud, the Internet of Things, autonomous vehicles, and multiple other new technologies. Governments use spectrum auctions to decide which companies should use what parts of the radio spectrum. Successful auctions can fuel rapid innovation in products and services, unlock substantial economic benefits, build comparative advantage across all regions, and create billions of dollars of government revenues. Poor auction strategies can leave bandwidth unsold and delay innovation, sell national assets to firms too cheaply, or create uncompetitive markets with high mobile prices and patchy coverage that stifles economic growth. Corporate bidders regularly complain that auctions raise their costs, while government critics argue that insufficient revenues are raised. The cross-national record shows many examples of both highly successful auctions and miserable failures.

Drawing on experience from the UK and other countries, senior regulator Geoffrey Myers explains how to optimise the regulatory design of auctions, from initial planning to final implementation. Spectrum Auctions offers unrivalled expertise for regulators and economists engaged in practical auction design or company executives planning bidding strategies. For applied economists, teachers, and advanced students this book provides unrivalled insights in market design and public management. Providing clear analytical frameworks, case studies of auctions, and stage-by-stage advice, it is essential reading for anyone interested in designing public-interested and successful spectrum auctions…(More)”.

EU Parliament pushes for more participatory tools for Europeans


Article by Silvia Ellena: “A majority of EU lawmakers adopted a report on Thursday (14 September) calling for more participatory tools at EU level. The report, which has no direct legislative impact, passed with 316 votes in favour, 137 against and 47 abstentions.

“We send a clear message to upgrade our democracy, a new EU Agora that involves citizens in European democratic life,” said Alin Mituța (Renew), co-rapporteur on the file, following the adoption of the report.

In the report, the Parliament called for the creation of a European Agora, an annual “structured participation mechanism” composed of citizens, who would deliberate on the EU’s priorities for the year ahead, providing input for the Commission work plan.

Moreover, EU lawmakers called for the creation of a one-stop-shop for all the existing instruments to make sure citizens have easier access to them.

The report also encourages increased use of mini-publics as well as the institutionalisation of other deliberative processes, such as the European Citizens’ Panels, which were set up by the Commission as a follow-up to the EU-wide democratic experiment known as the Conference on the Future of Europe (CoFoE).

These panels, made of randomly selected citizens, were called to deliberate on upcoming legislation earlier this year.

Other participatory tools suggested in the report include EU-wide referendums on key EU policies, as well as pan-European online citizens’ consultations to increase citizens’ knowledge of the EU and their trust in EU decision-making.

Finally, the Parliament called for an increased focus on the impact of EU policies on youth, suggesting the use of the ‘youth check’, a monitoring tool which has been promoted by the European Youth Forum and included in the CoFoE recommendations.

Other European institutions are already experimenting with the youth check, such as the European Economic and Social Committee (EESC), whose recently appointed president included a youth test among the priorities for his mandate…

According to EU lawmakers, citizens’ participation plays a key role in strengthening democracy and the EU Commission should develop a “comprehensive European strategy to enhance citizenship competences in the EU”…(More)”.

Enhancing the security of communication infrastructure


OECD Report: “The digital security of communication networks is crucial to the functioning of our societies. Four trends are shaping networks, raising digital security implications: i) the increasing criticality of communication networks, ii) increased virtualisation of networks and use of cloud services, iii) a shift towards more openness in networks and iv) the role of artificial intelligence in networks. These trends bring benefits and challenges to digital security. While digital security ultimately depends on the decisions made by private actors (e.g. network operators and their suppliers), the report underlines the role governments can play to enhance the digital security of communication networks. It outlines key policy objectives and actions governments can take to incentivise the adoption of best practices and support stakeholders to reach an optimal level of digital security, ranging from light-touch to more interventionist approaches…(More)”.

Unlocking AI’s Potential for Everyone


Article by Diane Coyle: “…But while some policymakers do have deep knowledge about AI, their expertise tends to be narrow, and most other decision-makers simply do not understand the issue well enough to craft sensible policies. Owing to this relatively low knowledge base and the inevitable asymmetry of information between regulators and regulated, policy responses to specific issues are likely to remain inadequate, heavily influenced by lobbying, or highly contested.

So, what is to be done? Perhaps the best option is to pursue more of a principles-based policy. This approach has already gained momentum in the context of issues like misinformation and trolling, where many experts and advocates believe that Big Tech companies should have a general duty of care (meaning a default orientation toward caution and harm reduction).

In some countries, similar principles already apply to news broadcasters, who are obligated to pursue accuracy and maintain impartiality. Although enforcement in these domains can be challenging, the upshot is that we do already have a legal basis for eliciting less socially damaging behavior from technology providers.

When it comes to competition and market dominance, telecoms regulation offers a serviceable model with its principle of interoperability. People with competing service providers can still call each other because telecom companies are all required to adhere to common technical standards and reciprocity agreements. The same is true of ATMs: you may incur a fee, but you can still withdraw cash from a machine at any bank.

In the case of digital platforms, a lack of interoperability has generally been established by design, as a means of locking in users and creating “moats.” This is why policy discussions about improving data access and ensuring access to predictable APIs have failed to make any progress. But there is no technical reason why some interoperability could not be engineered back in. After all, Big Tech companies do not seem to have much trouble integrating the new services that they acquire when they take over competitors.

In the case of LLMs, interoperability probably could not apply at the level of the models themselves, since not even their creators understand their inner workings. However, it can and should apply to interactions between LLMs and other services, such as cloud platforms…(More)”.

Unleashing the metaverse for skills and workforce development


Article by Gemma Rodon, Marjorie Chinen, and Diego Angel-Urdinola: “The metaverse is revolutionizing skills and workforce development, reshaping learning in fields like auto-mechanics, health care, welding and various vocations. It offers future workers invaluable, cost-effective, flexible, standardized and safe apprenticeship opportunities tailored to the demands of the global economy… Given its importance and potential, the World Bank’s EdTech team, with support from a Digital Development Partnership Grant, has recently completed a knowledge pack (KP) that provides evidence and case studies showcasing the advantages and results of using the metaverse, notably virtual and extended reality (XR) labs, for workforce development and offers guidance on implementation and steps necessary to deploy this technology. XR is an umbrella term that encompasses all immersive technologies that blend physical and digital worlds, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).

The KP compiles a catalog of available virtual and XR labs in the market in high-demand sectors, such as auto-mechanics, nursing, and welding. Overall, the metaverse is reshaping workforce development in three key aspects:

  • Reducing risks and fostering safety: Some training situations and learning experiences may be dangerous or difficult to access (e.g., health care, welding training, emergency preparedness, mass disasters, etc.); the metaverse lets learners practice them in safe, simulated environments.
  • Promoting technical proficiency: The metaverse allows for unlimited practice opportunities and can personalize the pace (and scenarios) of the learning experiences in a simulated environment.

  • Enhancing efficiency and monitoring: Training in the metaverse requires less investment in inputs and consumables, allows for easier adjustments to changes in the industry, and facilitates data collection and analysis on students’ use and performance…(More)”.

State Capacities and Wicked Problems of Public Policy: Addressing Vulnerabilities that Affect Human Development


Report by Edgardo Mosqueira and Martín Alessandro: “There is a growing mismatch between the types of public problems that governments face and the capabilities of the public administrations that design and implement policies to address them. Traditional administrative processes, the division of labor into ministries and agencies with clearly defined mandates, and even results-based management tools (such as logical frameworks or project management methodologies) are useful to address problems with relatively linear and predictable cause-effect relationships, in which success depends on the reliable execution of a predefined plan. In contrast, when dealing with wicked problems, multiple factors and actors are involved, often pushing in opposite directions and generating impacts across different sectors. Therefore, it is difficult to both align incentives and predict the effect of interventions. Such complex systems require a different approach, one that promotes collaboration among diverse actors, experimentation and learning to understand what works, and the ability to make rapid adjustments to interventions. This report illustrates the characteristics of wicked problems in two crucial development areas for Latin American and Caribbean countries: inequality and climate change. For each, it proposes institutional and managerial reforms that would expand the intervention capacities of LAC governments and analyzes the most relevant contexts for each option…(More)”.

EU leadership in trustworthy AI: Guardrails, Innovation & Governance


Article by Thierry Breton: “As mentioned in President von der Leyen’s State of the Union letter of intent, Europe should lead global efforts on artificial intelligence, guiding innovation, setting guardrails and developing global governance.

First, on innovation: we will launch the EU AI Start-Up Initiative, leveraging one of Europe’s biggest assets: its public high-performance computing infrastructure. We will identify the most promising European start-ups in AI and give them access to our supercomputing capacity.

I have said it before: AI is a combination of data, computing and algorithms. To train and fine-tune the most advanced foundation models, developers need large amounts of computing power.

Europe is a world leader in supercomputing through its European High-Performance Computing Joint Undertaking (EuroHPC). Soon, Europe will have its first exascale supercomputers, JUPITER in Germany and JULES VERNE in France (able to perform a quintillion, that is a billion billion, calculations per second), in addition to various existing supercomputers (such as LEONARDO in Italy and LUMI in Finland).

Access to Europe’s supercomputing infrastructure will help start-ups bring down the training time for their newest AI models from months or years to days or weeks. And it will help them lead the development and scale-up of AI responsibly and in line with European values.

This goes together with our broader efforts to support AI innovation across the value chain – from AI start-ups to all those businesses using AI technologies in their industrial ecosystems. This includes our Testing and Experimentation Facilities for AI (launched in January 2023), our Digital Innovation Hubs, the development of regulatory sandboxes under the AI Act, our support for the European Partnership on AI, Data and Robotics, and the cutting-edge research supported by Horizon Europe.

Second, guardrails for AI: Europe has pioneered clear rules for AI systems through the EU AI Act, the world’s first comprehensive regulatory framework for AI. My teams are working closely with the Parliament and Council to support the swift adoption of the EU AI Act. This will give citizens and businesses confidence in AI developed in Europe, knowing that it is safe and respects fundamental rights and European values. And it serves as an inspiration for global rules and principles for trustworthy AI.

As reiterated by President von der Leyen, we are developing an AI Pact that will convene AI companies, help them prepare for the implementation of the EU AI Act and encourage them to commit voluntarily to applying the principles of the Act before its date of applicability.

Third, governance: with the AI Act and the Coordinated Plan on AI, we are working towards a governance framework for AI, which can be a centre of expertise, in particular on large foundation models, and promote cooperation, not only between Member States, but also internationally…(More)”

AI often mangles African languages. Local scientists and volunteers are taking it back to school


Article by Sandeep Ravindran: “Imagine joyfully announcing to your Facebook friends that your wife gave birth, and having Facebook automatically translate your words to “my prostitute gave birth.” Shamsuddeen Hassan Muhammad, a computer science Ph.D. student at the University of Porto, says that’s what happened to a friend when Facebook’s English translation mangled the nativity news he shared in his native language, Hausa.

Such errors in artificial intelligence (AI) translation are common with African languages. AI may be increasingly ubiquitous, but if you’re from the Global South, it probably doesn’t speak your language.

That means Google Translate isn’t much help, and speech recognition tools such as Siri or Alexa can’t understand you. All of these services rely on a field of AI known as natural language processing (NLP), which allows AI to “understand” a language. The overwhelming majority of the world’s 7000 or so languages lack data, tools, or techniques for NLP, making them “low-resourced,” in contrast with a handful of “high-resourced” languages such as English, French, German, Spanish, and Chinese.

Hausa is the second most spoken African language, with an estimated 60 million to 80 million speakers, and it’s just one of more than 2000 African languages that are mostly absent from AI research and products. The few products available don’t work as well as those for English, notes Graham Neubig, an NLP researcher at Carnegie Mellon University. “It’s not the people who speak the languages making the technology.” More often the technology simply doesn’t exist. “For example, now you cannot talk to Siri in Hausa, because there is no data set to train Siri,” Muhammad says.

He is trying to fill that gap with a project he co-founded called HausaNLP, one of several launched within the past few years to develop AI tools for African languages…(More)”.

Rules of Order: Assessing the State of Global Governance


Paper by Stewart Patrick: “The current disorder has multiple causes, although their relative weight can be debated. They include intensifying strategic competition between the United States and China, two superpowers with dramatically different world order visions and clashing material interests; Russia’s brazen assault against its neighbor, resulting in the most serious armed conflict in Europe since World War II; an ongoing diffusion of power from advanced market democracies to emerging nations with diverse preferences, combined with resistance from established powers against accommodating them in multilateral institutions; a widespread retreat from turbocharged globalization, as national governments seek to claw back autonomy from market forces to pursue industrial, social, national security, and other policies and, in some cases, to weaponize interdependence; growing alienation between richer and poorer nations, exacerbated by accelerating climate change and stalled development; a global democratic recession now in its seventeenth year that has left no democracy unscathed; and a resurgence of sovereignty-minded nationalism that calls on governments to take back control from forces blamed for undermining national security, prosperity, and identity. (The “America First” ethos of Donald Trump’s presidency, which rejected the tenets of post-1945 U.S. internationalism, is but the most prominent recent example.) In sum, the crisis of cooperation is as much a function of the would-be global problem-solvers as it is a function of the problems themselves.

Given these centrifugal tendencies, is there any hope for a renewed open, rules-based world order? As a first step in answering this question, this paper surveys areas of global convergence and divergence on principles and rules of state conduct across fourteen major global issue areas. These are grouped into four categories: (1) rules to promote basic stability and peaceful coexistence by reducing the specter of violence; (2) rules to facilitate economic exchange and prosperity; (3) rules to promote cooperation on transnational and even planetary challenges like climate change, pandemics, the global commons, and the regulation of cutting-edge technologies; and (4) rules that seek to embed liberal values, particularly principles of democracy and human rights, in the international sphere. This stocktaking reveals significant preference diversity and normative disagreement among nations in both emerging and long-established spheres of interdependence. Ideally, this brief survey will give global policymakers a better sense of what, collectively, they are up against—and perhaps even suggest ways to bridge existing differences…(More)”

Computing the Climate: How We Know What We Know About Climate Change


Book by Steve M. Easterbrook: “How do we know that climate change is an emergency? How did the scientific community reach this conclusion all but unanimously, and what tools did they use to do it? This book tells the story of climate models, tracing their history from nineteenth-century calculations on the effects of greenhouse gases, to modern Earth system models that integrate the atmosphere, the oceans, and the land using the full resources of today’s most powerful supercomputers. Drawing on the author’s extensive visits to the world’s top climate research labs, this accessible, non-technical book shows how computer models help to build a more complete picture of Earth’s climate system. ‘Computing the Climate’ is ideal for anyone who has wondered where the projections of future climate change come from – and why we should believe them…(More)”.