Bubble Trouble


Article by Bryan McMahon: “…Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also blow a gaping hole in the public markets and cause an economy-wide meltdown…(More)”.

Commerce Secretary’s Comments Raise Fears of Interference in Federal Data


Article by Ben Casselman and Colby Smith: “Comments from a member of President Trump’s cabinet over the weekend have renewed concerns that the new administration could seek to interfere with federal statistics — especially if they start to show that the economy is slipping into a recession.

In an interview on Fox News on Sunday, Howard Lutnick, the commerce secretary, suggested that he planned to change the way the government reports data on gross domestic product in order to remove the impact of government spending.

“You know that governments historically have messed with G.D.P.,” he said. “They count government spending as part of G.D.P. So I’m going to separate those two and make it transparent.”

It wasn’t immediately clear what Mr. Lutnick meant. The basic definition of gross domestic product is widely accepted internationally and has been unchanged for decades. It tallies consumer spending, private-sector investment, net exports, and government investment and spending to arrive at a broad measure of all goods and services produced in a country.

The Bureau of Economic Analysis, which is part of Mr. Lutnick’s department, already produces a detailed breakdown of G.D.P. into its component parts. Many economists focus on a measure — known as “final sales to private domestic purchasers” — that excludes government spending and is often seen as a better indicator of underlying demand in the economy. That measure has generally shown stronger growth in recent quarters than overall G.D.P. figures.
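The accounting at issue can be sketched in a few lines. This is a simplified reading of the expenditure identity and of the “final sales to private domestic purchasers” measure; the figures are invented round numbers, not BEA data.

```python
# Sketch of the expenditure-side GDP identity (C + I + G + NX) and the
# "final sales to private domestic purchasers" measure the article mentions.
# All figures are hypothetical round numbers, not BEA data.

def gdp(consumption, investment, government, net_exports):
    """Expenditure-side GDP: C + I + G + NX."""
    return consumption + investment + government + net_exports

def final_sales_to_private_domestic_purchasers(consumption, fixed_investment):
    """Consumer spending plus private fixed investment, excluding
    government, net exports, and inventory swings."""
    return consumption + fixed_investment

# Hypothetical economy (billions of dollars)
C, I_fixed, inventories, G, NX = 14_000, 3_500, 100, 4_500, -800
total_gdp = gdp(C, I_fixed + inventories, G, NX)
private_demand = final_sales_to_private_domestic_purchasers(C, I_fixed)

print(total_gdp)       # 21300
print(private_demand)  # 17500
```

The point of the second measure is visible in the sketch: government spending (G) simply never enters it, so no redefinition of G.D.P. is needed to track private demand separately.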

In recent weeks, however, there have been mounting signs elsewhere that the economy could be losing momentum. Consumer spending fell unexpectedly in January, applications for unemployment insurance have been creeping upward, and measures of housing construction and home sales have turned down. A forecasting model from the Federal Reserve Bank of Atlanta predicts that G.D.P. could contract sharply in the first quarter of the year, although most private forecasters still expect modest growth.

Cuts to federal spending and the federal work force could act as a further drag on economic growth in coming months. Removing federal spending from G.D.P. calculations, therefore, could obscure the impact of the administration’s policies…(More)”.

China wants tech companies to monetize data, but few are buying in


Article by Lizzi C. Lee: “Chinese firms generate staggering amounts of data daily, from ride-hailing trips to online shopping transactions. A recent policy allowed Chinese companies to record data as assets on their balance sheets, the first such regulation in the world, paving the way for data to be traded in a marketplace and boost company valuations. 

But uptake has been slow. When China Unicom, one of the world’s largest mobile operators, reported its earnings recently, eagle-eyed accountants spotted that the company had listed 204 million yuan ($28 million) in data assets on its balance sheet. The state-owned operator was the first Chinese tech giant to take advantage of the Ministry of Finance’s new corporate data policy, which permits companies to classify data as inventory or intangible assets. 

“No other country is trying to do this on a national level. It could drive global standards of data management and accounting,” Ran Guo, an affiliated researcher at the Asia Society Policy Institute specializing in data governance in China, told Rest of World. 

In 2023 alone, China generated 32.85 zettabytes — more than 27% of the global total, according to a government survey. To put that in perspective, storing this volume on standard 1-terabyte hard drives would require more than 32 billion units….Tech companies that are data-rich are well-positioned to benefit from logging data as assets to turn the formalized assets into tradable commodities, said Guo. But companies must first invest in secure storage and show that the data is legally obtained in order to meet strict government rules on data security.
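The storage claim checks out with back-of-the-envelope arithmetic, using decimal (SI) units in which one zettabyte is a billion terabytes:

```python
# Back-of-the-envelope check of the article's storage claim,
# using decimal (SI) units: 1 zettabyte = 1e9 terabytes.
ZB_TO_TB = 10**9

data_generated_zb = 32.85                      # China, 2023, per the cited survey
drives_needed = data_generated_zb * ZB_TO_TB   # standard 1-terabyte drives

print(f"{drives_needed:,.0f}")  # over 32 billion drives, as the article says
```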

“This can be costly and complex,” he said. “Not all data qualifies as an asset, and companies must meet stringent requirements.” 

Even China Unicom, a state-owned enterprise, is likely complying with the new policy due to political pressure rather than economic incentive, said Guo, who conducted field research in China last year on the government push for data resource development. The telecom operator did not respond to a request for comment. 

Private technology companies in China, meanwhile, tend to be protective of their data. A Chinese government statement in 2022 pushed private enterprises to “open up their data.” But smaller firms could lack the resources to meet the stringent data storage and consumer protection standards, experts and Chinese tech company employees told Rest of World...(More)”.

Nonprofits, Stop Doing Needs Assessments.


Design for Social Impact: “Too many non-profits and funders still roll into communities with a clipboard and a mission to document everything “missing.”

Needs assessments have become a default tool for diagnosing deficits, reinforcing a saviour mentality where outsiders decide what’s broken and needs fixing.

I’ve sat in meetings where non-profits present lists of what communities lack:

  • “Youth don’t have leadership skills”
  • “Parents don’t value education”
  • “Grassroots organisations don’t have capacity”

The subtext? “They need us.”

And because funding is tied to these narratives of scarcity, organisations learn to describe themselves in the language of need rather than strength—because that’s what gets funded…Now, I’m not saying that organisations or funders should never ask people what their needs are. The key issue is how needs assessments are framed and used. Too often, they use extractive “data” collection methodologies and reinforce top-down, deficit-based narratives, where communities are defined primarily by what they lack rather than what they bring.

Starting with what’s already working (asset mapping) and then identifying what’s needed to strengthen and expand those assets is different from leading with gaps, which can frame communities as passive recipients rather than active problem-solvers.

Arguably, a balanced synergy between assessing needs and asset mapping can be powerful—so long as the process centres on community agency, self-determination, and long-term sustainability rather than diagnosing problems for external intervention.

Also, asset-based mapping to me does not mean that you swoop in with the same clipboard and demand people document their strengths…(More)”.

Tech tycoons have got the economics of AI wrong


The Economist: “…The Jevons paradox—the idea that efficiency leads to more use of a resource, not less—has in recent days provided comfort to Silicon Valley titans worried about the impact of DeepSeek, the maker of a cheap and efficient Chinese chatbot, which threatens the more powerful but energy-guzzling American varieties. Satya Nadella, the boss of Microsoft, posted on X, a social-media platform, that “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” along with a link to the Wikipedia page for the economic principle. Under this logic, DeepSeek’s progress will mean more demand for data centres, Nvidia chips and even the nuclear reactors that the hyperscalers were, prior to the unveiling of DeepSeek, paying to restart. Nothing to worry about if the price falls: Microsoft can make it up on volume.

The logic, however self-serving, has a ring of truth to it. Jevons’s paradox is real and observable in a range of other markets. Consider the example of lighting. William Nordhaus, a Nobel-prizewinning economist, has calculated that a Babylonian oil lamp, powered by sesame oil, produced about 0.06 lumens of light per watt of energy. That compares with up to 110 lumens for a modern light-emitting diode. The world has not responded to this dramatic improvement in energy efficiency by enjoying the same amount of light as a Babylonian at lower cost. Instead, it has banished darkness completely, whether through more bedroom lamps than could have been imagined in ancient Mesopotamia or the Las Vegas sphere, which provides passersby with the chance to see a 112-metre-tall incandescent emoji. Urban light is now so cheap and so abundant that many consider it to be a pollutant.
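The scale of the efficiency gain in Nordhaus’s lighting example is worth making explicit:

```python
# Rough scale of the efficiency gain the article describes:
# a Babylonian sesame-oil lamp vs. a modern LED, in lumens per watt.
babylonian_lamp_lm_per_w = 0.06   # Nordhaus's estimate, per the article
modern_led_lm_per_w = 110         # upper end cited for LEDs

improvement = modern_led_lm_per_w / babylonian_lamp_lm_per_w
print(round(improvement))  # 1833, i.e. roughly an 1,800-fold gain
```

Demand for light grew by far more than that efficiency factor would offset, which is exactly the rebound the Jevons paradox describes.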

Likewise, more efficient chatbots could mean that AI finds new uses (some no doubt similarly obnoxious). The ability of DeepSeek’s model to perform about as well as more compute-hungry American AI shows that data centres are more productive than previously thought, rather than less. Expect, the logic goes, more investment in data centres and so on than you did before.

Although this idea should provide tech tycoons with some solace, they still ought to worry. The Jevons paradox is a form of a broader phenomenon known as “rebound effects”. These are typically not large enough to fully offset savings from improved efficiency….Basing the bull case for AI on the Jevons paradox is, therefore, a bet not on the efficiency of the technology but on the level of demand. If adoption is being held back by price then efficiency gains will indeed lead to greater use. If technological progress raises expectations rather than reduces costs, as is typical in health care, then chatbots will make up an ever larger proportion of spending. At the moment, that looks unlikely. America’s Census Bureau finds that only 5% of American firms currently use AI and 7% have plans to adopt it in the future. Many others find the tech difficult to use or irrelevant to their line of business…(More)”.

Will Artificial Intelligence Replace Us or Empower Us?


Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.

Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.

Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”

Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.

Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.

Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.

People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.

To Whom Does the World Belong?


Essay by Alexander Hartley: “For an idea of the scale of the prize, it’s worth remembering that 90 percent of recent U.S. economic growth, and 65 percent of the value of its largest 500 companies, is already accounted for by intellectual property. By any estimate, AI will vastly increase the speed and scale at which new intellectual products can be minted. The provision of AI services themselves is estimated to become a trillion-dollar market by 2032, but the value of the intellectual property created by those services—all the drug and technology patents; all the images, films, stories, virtual personalities—will eclipse that sum. It is possible that the products of AI may, within my lifetime, come to represent a substantial portion of all the world’s financial value.

In this light, the question of ownership takes on its true scale, revealing itself as a version of Bertolt Brecht’s famous query: To whom does the world belong?


Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems.

So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water in an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”)…(More)”.
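The reported figures are mutually consistent with the common rule of thumb that a token corresponds to roughly three-quarters of an English word:

```python
# Sanity check of the reported training-data figures: ~13 trillion tokens
# described as roughly 10 trillion words implies ~0.77 words per token,
# close to the common ~0.75 rule of thumb for English text.
tokens = 13e12
words = 10e12

words_per_token = words / tokens
print(round(words_per_token, 2))  # 0.77
```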

Can the world’s most successful index get back up the rankings?


Article by James Watson: “You know your ranking model is influential when national governments change policies with the explicit goal of boosting their position on your index. That was the power of the Ease of Doing Business Index (also known as Doing Business) until 2021.

However, the index’s success became its downfall. Some governments set up dedicated teams with an explicit goal of improving the country’s performance on the index. If those teams’ activity was solely focussed on positive policy reform, that would be great; unfortunately, in at least some cases, they were simply trying to game the results.


Index ranking optimisation (aka gaming the results)

To give an example of how that could happen, we need to take a brief detour into the world of qualitative indicators. Bear with me. In many indexes grappling with complex topics, there is a perennial problem of data availability. Imagine you want to measure the number of days it takes to set up a new business (this was one of the indicators in Doing Business). You will find that most of the time the data either doesn’t exist or is rarely updated by governments. Instead, put very simplistically, you’d need to ask a few experts or businesses for their views, and use those to create a numerical score for your index.

This is a valid approach, and it’s used in a lot of studies. Take Transparency International’s long-running Corruption Perceptions Index (CPI). Transparency International goes to great lengths to use robust and comparable data across countries, but measuring actual corruption is not viable — for obvious reasons. So the CPI does something different, and the clue is in the name: it measures people’s perceptions of corruption. It asks local businesses and experts whether they think there’s much bribery, nepotism and other forms of corruption in their country. This foundational input is then bolstered with other data points. The data doesn’t aim to measure corruption; instead, it’s about assessing which countries are more, or less, corrupt. 
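A minimal sketch of how a perceptions index of this kind can be assembled, assuming each source survey is rescaled to a common 0–100 scale and a country’s score is the average of its available sources. The source names and numbers are invented, and this is not the CPI’s exact methodology:

```python
# Minimal sketch of a perceptions-index aggregation: rescale each survey
# to a common 0-100 scale (100 = cleanest), then average the sources that
# cover a country. Sources and values are invented for illustration;
# this is not the CPI's exact methodology.

def rescale(value, source_min, source_max):
    """Map a raw survey score onto a 0-100 scale."""
    return 100 * (value - source_min) / (source_max - source_min)

def country_score(raw_scores):
    """Average the rescaled scores from whichever sources cover the country."""
    rescaled = [rescale(v, lo, hi) for v, lo, hi in raw_scores]
    return sum(rescaled) / len(rescaled)

# (raw value, source scale min, source scale max) for one hypothetical country
sources = [
    (7.2, 0, 10),   # hypothetical expert panel A
    (65, 0, 100),   # hypothetical business survey B
    (3.8, 1, 7),    # hypothetical analyst rating C
]
print(round(country_score(sources)))  # 61
```

The fragility the article goes on to describe lives in exactly these inputs: swap which experts are consulted, or tweak how a source is rescaled, and a country’s headline score moves.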


This technique can work well, but it got a bit shaky as Doing Business’s fame grew. Some governments that were anxious to move up the rankings started urging the World Bank to tweak the methodology used to assess their ratings, or to use the views of specific experts. The analysts responsible for assessing a country’s scores and data points were put under significant pressure, often facing strong criticism from governments that didn’t agree with their assessments. In the end, an internal review showed that a number of countries’ scores had been improperly manipulated…The criticism must have stung, because the team behind the World Bank’s new Business Ready report has spent three years trying to address those issues. The new methodology handbook lands with a thump at 704 pages…(More)”.

The Next Phase of the Data Economy: Economic & Technological Perspectives


Paper by Jad Esber et al.: “The data economy is poised to evolve toward a model centered on individual agency and control, moving us toward a world where data is more liquid across platforms and applications. In this future, products will either utilize existing personal data stores or create them when they don’t yet exist, empowering individuals to fully leverage their own data for various use cases.

The analysis begins by establishing a foundation for understanding data as an economic good and the dynamics of data markets. The article then investigates the concept of personal data stores, analyzing the historical challenges that have limited their widespread adoption. Building on this foundation, the article then considers how recent shifts in regulation, technology, consumer behavior, and market forces are converging to create new opportunities for a user-centric data economy. The article concludes by discussing potential frameworks for value creation and capture within this evolving paradigm, summarizing key insights and potential future directions for research, development, and policy.

We hope this article can help shape the thinking of scholars, policymakers, investors, and entrepreneurs, as new data ownership and privacy technologies emerge, and regulatory bodies around the world mandate open flows of data and new terms of service intended to empower users as well as small-to-medium–sized businesses…(More)”.

OECD Digital Economy Outlook 2024


OECD Report: “The most recent phase of digital transformation is marked by rapid technological changes, creating both opportunities and risks for the economy and society. Volume 2 of the OECD Digital Economy Outlook 2024 explores emerging priorities, policies and governance practices across countries. It also examines trends in the foundations that enable digital transformation, drive digital innovation and foster trust in the digital age. The volume concludes with a statistical annex…

In 2023, digital government, connectivity and skills topped the list of digital policy priorities. Increasingly developed at a high level of government, national digital strategies play a critical role in co-ordinating these efforts. Nearly half of the 38 countries surveyed develop these strategies through dedicated digital ministries, up from just under a quarter in 2016. Among 1 200 policy initiatives tracked across the OECD, one-third aim to boost digital technology adoption, social prosperity, and innovation. AI and 5G are the most often-cited technologies…(More)”