Article by Matteo Wong: “A slate of four AI companies might soon rule Silicon Valley…Chatbots and their ilk are still in their early stages, but everything in the world of AI is already converging around just four companies. You could refer to them by the acronym GOMA: Google, OpenAI, Microsoft, and Anthropic. Shortly after OpenAI released ChatGPT last year, Microsoft poured $10 billion into the start-up and shoved OpenAI-based chatbots into its search engine, Bing. Not to be outdone, Google announced that more AI features were coming to Search, Maps, Docs, and more, and introduced Bard, its own rival chatbot. Microsoft and Google are now in a race to integrate generative AI into just about everything. Meanwhile, Anthropic, a start-up launched by former OpenAI employees, has raised billions of dollars in its own right, including from Google. Companies such as Slack, Expedia, Khan Academy, Salesforce, and Bain are integrating ChatGPT into their products; many others are using Anthropic’s chatbot, Claude. Executives from GOMA have also met with leaders and officials around the world to shape the future of AI’s deployment and regulation. The four have overlapping but separate proposals for AI safety and regulation, but they have joined together to create the Frontier Model Forum, a consortium whose stated mission is to protect against the supposed world-ending dangers posed by terrifyingly capable models that do not yet exist but, it warns, are right around the corner. That existential language—about bioweapons and nuclear robots—has since migrated into all sorts of government proposals and rhetoric. If AI is truly reshaping the world, these companies are the sculptors…”…(More)”.
FickleFormulas: The Political Economy of Macroeconomic Measurement
About: “Statistics about economic activities are critical to governance. Measurements of growth, unemployment and inflation rates, public debts – they all tell us ‘how our economies are doing’ and inform policy. Citizens punish politicians who fail to deliver on them.
FickleFormulas has integrated two research projects at the University of Amsterdam that ran from 2014 to 2020. Its researchers have studied the origins of the formulas behind these indicators: why do we measure our economies the way we do? After all, it is far from self-evident how to define and measure economic indicators. Our choices have deeply distributional consequences, producing winners and losers, and they shape our future, for example when GDP figures hide the cost of environmental destruction.
Criticisms of particular measures are hardly new. GDP in particular has been denounced as a deeply deficient measure of production at best and a fundamentally misleading guidepost for human development at worst. But measures of inflation, balances of payments and trade, unemployment figures, productivity and public debt also hide unsolved, perhaps insoluble, problems. In FickleFormulas we have asked: which social, political and economic factors shape the formulas used to calculate macroeconomic indicators?
In our quest for answers we have mobilized scholarship and expertise scattered across academic disciplines – a wealth of knowledge brought together for example here. We have reconstructed expert-deliberations of past decades, but mostly we wanted to learn from those who actually design macroeconomic indicators: statisticians at national statistical offices or organizations such as the OECD, the UN, the IMF, or the World Bank. For us, understanding macroeconomic indicators has been impossible without talking to the people who live and breathe them….(More)”.
Lifelines of Our Society
Book by Dirk van Laak: “Infrastructure is essential to defining how the public functions, yet there is little public knowledge regarding why and how it became today’s strongest global force over government and individual lives. Who should build and maintain infrastructures? How are they to be protected? And why are they all in such bad shape? In Lifelines of Our Society, Dirk van Laak offers broad audiences a history of global infrastructures—focused on Western societies, over the past two hundred years—that considers all their many paradoxes. He illustrates three aspects of infrastructure: their development, their influence on nation building and colonialism, and finally, how individuals internalize infrastructure and increasingly become not only its users but also its regulators.
Beginning with public works, infrastructure in the nineteenth century carried the hope that it would facilitate world peace. Van Laak shows how, instead, it transformed to promote consumerism’s individual freedoms and our notions of work, leisure, and fulfillment. Lifelines of Our Society reveals how today’s infrastructure is both a source and a reflection of concentrated power and economic growth, which takes the form of cities under permanent construction. Symbols of power, van Laak describes, come with vulnerability, and this book illustrates the dual nature of infrastructure’s potential to hold nostalgia and inspire fear, to ease movement and govern ideas, and to bring independence to the nuclear family and control governments of the Global South…(More)”.
The danger of building strong narratives on weak data
Article by John Burn-Murdoch: “Measuring gross domestic product is extremely complicated. Around the world, national statistics offices are struggling to get the sums right the first time around.
Some struggle more than others. When Ireland first reported its estimate for GDP growth in Q1 2015, it came in at 1.4 per cent. One year later, and with some unusual distortions due to its role as headquarters for many US big tech and pharma companies, this was revised upwards to an eye-watering 21.4 per cent.
On average, five years after an estimate of quarterly Irish GDP growth is first published, the latest revision of that figure is two full percentage points off the original value. The equivalent for the UK is almost 10 times smaller at 0.25 percentage points, making the ONS’s initial estimates among the most accurate in the developed world, narrowly ahead of the US at 0.26 and well ahead of the likes of Japan (0.46) and Norway (0.56).
But it’s not just the size of revisions that matters, it’s the direction. Out of 24 developed countries that consistently report quarterly GDP revisions to the OECD, the UK’s initial estimates are the most pessimistic. Britain’s quarterly growth figures typically end up 0.15 percentage points higher than first thought. The Germans go up by 0.07 on average, the French by 0.04, while the Americans, ever optimistic, typically end up revising their estimates down by 0.11 percentage points.
In other words, next time you hear a set of quarterly growth figures, it wouldn’t be unreasonable to mentally add 0.15 to the UK one and subtract 0.11 from the US.
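The mental adjustment above amounts to adding each country's historical average revision (final minus first estimate) to the freshly published figure. A minimal sketch, using the averages cited in the article (the function name and country labels are illustrative, not from the source):

```python
# Back-of-envelope bias correction for initial quarterly GDP growth estimates.
# Values are the average revisions (final minus first estimate, in percentage
# points) cited in the article; country labels are illustrative.
MEAN_REVISION_PP = {"UK": 0.15, "Germany": 0.07, "France": 0.04, "US": -0.11}

def bias_adjusted_growth(country: str, initial_estimate_pp: float) -> float:
    """Add a country's historical mean revision to its freshly published figure."""
    return round(initial_estimate_pp + MEAN_REVISION_PP.get(country, 0.0), 2)

print(bias_adjusted_growth("UK", 0.2))   # 0.35
print(bias_adjusted_growth("US", 0.5))   # 0.39
```

This is only a heuristic, of course: it corrects the average direction of revisions, not their size, which for some countries (Ireland above all) dwarfs any fixed adjustment.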
This may all sound like nerdy detail, but it matters because people graft strong narratives on to this remarkably flimsy data. Britain was the only G7 economy yet to rebound past pre-Covid levels until it wasn’t. Ireland is booming, apparently, except its actual individual consumption per capita — a much better measure of living standards than GDP — has fallen steadily from just above the western European average in 2007 to 10 per cent below last year.
And the phenomenon is not exclusive to economic data. Two years ago, progressives critical of the government’s handling of the pandemic took to calling the UK “Plague Island”, citing Britain’s reported Covid death rates, which were among the highest in the developed world. But with the benefit of hindsight, we know that Britain was simply better at counting its deaths than most countries…(More)”
Toward Bridging the Data Divide
Blog by Randeep Sudan, Craig Hammer, and Yaroslav Eferin: “Developing countries face a data conundrum. Despite more data being available than ever in the world, low- and middle-income countries often lack adequate access to valuable data and struggle to fully use the data they have.
This seemingly paradoxical situation represents a data divide. The terms “digital divide” and “data divide” are often used interchangeably, but they differ. The digital divide is the gap between those with access to digital technologies and those without; the data divide is the gap between those who have access to high-quality data and those who do not. The data divide can negatively skew development across countries and therefore is a serious issue that needs to be addressed…
The effects of the data divide are alarming, with low- and middle-income countries getting left behind. McKinsey estimates that 75% of the value that could be created through Generative AI (such as ChatGPT) would be in four areas of economic activity: customer operations, marketing and sales, software engineering, and research and development. They further estimate that Generative AI could add between $2.6 trillion and $4.4 trillion in value in these four areas.
PwC estimates that approximately 70% of all economic value generated by AI will likely accrue to just two countries: the USA and China. These two countries account for nearly two-thirds of the world’s hyperscale data centers, high rates of 5G adoption, the highest number of AI researchers, and the most funding for AI startups. This situation creates serious concerns for growing global disparities in accessing benefits from data collection and processing, and the related generation of insights and opportunities. These disparities will only increase over time without deliberate efforts to counteract this imbalance…(More)”
Private sector access to public sector personal data: exploring data value and benefit sharing
Literature review for the Scottish Government: “The aim of this review is to enable the Scottish Government to explore the issues relevant to the access of public sector personal data (as defined by the European Union General Data Protection Regulation, GDPR) with or by the private sector in publicly trusted ways, to unlock the public benefit of this data. This literature review will specifically enable the Scottish Government to establish whether there are
(I) models/approaches of costs/benefits/data value/benefit-sharing, and
(II) intellectual property rights or royalties schemes regarding the use of public sector personal data with or by the private sector both in the UK and internationally.
In conducting this literature review, we used an adapted systematic review, and undertook thematic analysis of the included literature to answer several questions central to the aim of this research. Such questions included:
- Are there any models of costs and/or benefits regarding the use of public sector personal data with or by the private sector?
- Are there any models of valuing data regarding the use of public sector personal data with or by the private sector?
- Are there any models for benefit-sharing in respect of the use of public sector personal data with or by the private sector?
- Are there any models in respect of the use of intellectual property rights or royalties regarding the use of public sector personal data with or by the private sector?…(More)”.
Unlocking the value of supply chain data across industries
MIT Technology Review Insights: “The product shortages and supply-chain delays of the global covid-19 pandemic are still fresh memories. Consumers and industry are concerned that the next geopolitical or climate event may have a similar impact. Against a backdrop of evolving regulations, these conditions mean manufacturers want to be prepared for short supplies, concerned customers, and weakened margins.
For supply chain professionals, achieving a “phygital” information flow—the blending of physical and digital data—is key to unlocking resilience and efficiency. As physical objects travel through supply chains, they generate a rich flow of data about the item and its journey—from its raw materials, its manufacturing conditions, even its expiration date—bringing new visibility and pinpointing bottlenecks.
This phygital information flow offers significant advantages, from enhancing the ability to create rich customer experiences to satisfying environmental, social, and corporate governance (ESG) goals. In a 2022 EY global survey of executives, 70% of respondents agreed that a sustainable supply chain will increase their company’s revenue.
For disparate parties to exchange product information effectively, they require a common framework and universally understood language. Among supply chain players, data standards create a shared foundation. Standards help uniquely identify, accurately capture, and automatically share critical information about products, locations, and assets across trading communities…(More)”.
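As one concrete illustration of what such standards do, GS1 identifiers like the GTIN-13 encoded in retail barcodes carry a mod-10 check digit, so trading partners can detect mistyped or corrupted identifiers before bad data propagates through the chain. (The article does not name GS1 or GTIN specifically; this is an assumed example of a supply-chain data standard.)

```python
def gtin13_check_digit(first12: str) -> int:
    """GS1 mod-10 check digit: weight the 12 leading digits 1,3,1,3,...
    left to right, then take the amount needed to reach a multiple of 10."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# Appending the check digit yields the full, verifiable 13-digit identifier:
print(gtin13_check_digit("400638133393"))  # 1 -> full GTIN 4006381333931
```

A receiving system recomputes the digit on every scanned or keyed identifier; a mismatch flags the record before it enters shared inventories.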
How to improve economic forecasting
Article by Nicholas Gruen: “Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting.
There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits.
Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability.
That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is.
By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better.
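Forecasts stated this way can be scored objectively. One standard choice is the Brier score, the mean squared gap between the stated probability and the 0/1 outcome, where lower is better (the article does not name a metric; it is assumed here for illustration):

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better: confident correct calls approach 0, and a forecaster
    who always hedges at 50% scores 0.25 regardless of outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A forecaster who said "70% chance of rain" on four days; it rained on three:
history = [(0.7, 1), (0.7, 1), (0.7, 1), (0.7, 0)]
print(round(brier_score(history), 3))  # 0.19
```

A point forecast like “growth will be 3.0 per cent” cannot be scored this way at all, which is exactly the unhelpfulness the article describes.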
What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues…(More)”.
Data Governance and Policy in Africa
This open access book edited by Bitange Ndemo, Njuguna Ndung’u, Scholastica Odhiambo and Abebe Shimeles: “…examines data governance and its implications for policymaking in Africa. Bringing together economists, lawyers, statisticians, and technology experts, it assesses gaps in both the availability and use of existing data across the continent, and argues that data creation, management and governance need to improve if private and public sectors are to reap the benefits of big data and digital technologies. It also considers lessons from across the globe to assess principles, norms and practices that can guide the development of data governance in Africa….(More)”.
Should Computers Decide How Much Things Cost?
Article by Colin Horgan: “In the summer of 2012, the Wall Street Journal reported that the travel booking website Orbitz had, in some cases, been suggesting to Apple users hotel rooms that cost more per night than those it was showing to Windows users. The company found that people who used Mac computers spent as much as 30 percent more a night on hotels. It was one of the first high-profile instances where the predictive capabilities of algorithms were shown to impact consumer-facing prices.
Since then, the pool of data available to corporations about each of us (the information we’ve either volunteered or that can be inferred from our web browsing and buying histories) has expanded significantly, helping companies build ever more precise purchaser profiles. Personalized pricing is now widespread, even if many consumers are only just realizing what it is. Recently, other algorithm-driven pricing models, like Uber’s surge or Ticketmaster’s dynamic pricing for concerts, have surprised users and fans. In the past few months, dynamic pricing—which is based on factors such as quantity—has pushed up prices of some concert tickets even before they hit the resale market, including for artists like Drake and Taylor Swift. And while personalized pricing is slightly different, these examples of computer-driven pricing have spawned headlines and social media posts that reflect a growing frustration with data’s role in how prices are dictated.
The marketplace is said to be a realm of assumed fairness, dictated by the rules of competition, an objective environment where one consumer is the same as any other. But this idea is being undermined by the same opaque and confusing programmatic data profiling that’s slowly encroaching on other parts of our lives—the algorithms. The Canadian government is currently considering new consumer-protection regulations, including what to do to control algorithm-based pricing. While strict market regulation is considered by some to be a political risk, another solution may exist—not at the point of sale but at the point where our data is gathered in the first place.
In theory, pricing algorithms aren’t necessarily bad…(More)”.