A World of Unintended Consequences


Essay by Edward Tenner: “One of the great, underappreciated facts about our technology-driven age is that unintended consequences tend to outnumber intended ones. As much as we would like to believe that we are in control, scholars who have studied catastrophic failures have shown that humility is ultimately the only justifiable attitude…

Here’s a story about a revolution that never happened. Nearly 90 years ago, a 26-year-old newly credentialed Harvard sociology PhD and future American Philosophical Society member, Robert K. Merton, published a paper in the American Sociological Review that would become one of the most frequently cited in his discipline: “The Unanticipated Consequences of Purposive Social Action.” While the language of the paper was modest, it offered an obvious but revolutionary insight: many or most phenomena in the social world are unintended – for better or worse. Today, even management gurus like Tom Peters acknowledge that, “Unintended consequences outnumber intended consequences. … Strategies rarely unfold as we imagined. Intended consequences are rare.”

Merton had promised a monograph on the history and analysis of the problem, with its “vast scope and manifold implications.” Somewhere along the way, however, he abandoned the project, perhaps because it risked becoming a book about everything. Moreover, his apparent retreat may have discouraged other social scientists from attempting it, revealing one of the paradoxes of the subject’s study: because it is so universal and important, it may be best suited for case studies rather than grand theories.

Ironically, while unintentionality-centered analysis might have produced a Copernican revolution in social science, it is more likely that it would have unleashed adverse unintended consequences for any scholar attempting it – just as Thomas Kuhn’s idea of scientific paradigms embroiled him in decades of controversies. Besides, there are also ideological barriers to the study of unintended consequences. For every enthusiast there seems to be a hater, and dwelling on the unintended consequences of an opponent’s policies invites retaliation in kind.

This was economist Albert O. Hirschman’s point in his own critique of the theme. Hirschman himself had formidable credentials as a student of unintended consequences. One of his most celebrated and controversial ideas, the “hiding hand,” was a spin-off of Adam Smith’s famous metaphor for the market (the invisible hand). In Development Projects Observed, Hirschman noted that many successful programs might never have been launched had all the difficulties been known; but once a commitment was made, human ingenuity prevailed, and new and unforeseen solutions were found. The Sydney Opera House, for example, exceeded its budget by 1,300%, but it turned out to be a bargain once it became Australia’s unofficial icon…(More)”

Technology and Innovation Report 2025


UNCTAD Report: Frontier technologies, particularly artificial intelligence (AI), are profoundly transforming our economies and societies, reshaping production processes, labour markets and the ways in which we live and interact. Will AI accelerate progress towards the Sustainable Development Goals, or will it exacerbate existing inequalities, leaving the underprivileged further behind? How can developing countries harness AI for sustainable development? AI is the first technology in history that can make decisions and generate ideas on its own. This sets it apart from traditional technologies and challenges the notion of technological neutrality.
The rapid development of AI has also outpaced the ability of Governments to respond effectively. The Technology and Innovation Report 2025 aims to guide policymakers through the complex AI landscape and support them in designing science, technology and innovation (STI) policies that foster inclusive and equitable technological progress.

The world already has significant digital divides, and with the rise of AI, these could widen even further. In response, the Report argues for AI development based on inclusion and equity, shifting the focus from technology to people. AI technologies should complement rather than displace human workers, and production should be restructured so that the benefits are shared fairly among countries, firms and workers. It is also important to strengthen international collaboration, to enable countries to co-create inclusive AI governance.


The Report examines five core themes:
A. AI at the technological frontier
B. Leveraging AI for productivity and workers’ empowerment
C. Preparing to seize AI opportunities
D. Designing national policies for AI
E. Global collaboration for inclusive and equitable AI…(More)”

The Measure of Progress: Counting What Really Matters


Book by Diane Coyle: “The ways that statisticians and governments measure the economy were developed in the 1940s, when the urgent economic problems were entirely different from those of today. In The Measure of Progress, Diane Coyle argues that the framework underpinning today’s economic statistics is so outdated that it functions as a distorting lens, or even a set of blinkers. When policymakers rely on such an antiquated conceptual tool, how can they measure, understand, and respond with any precision to what is happening in today’s digital economy? Coyle makes the case for a new framework, one that takes into consideration current economic realities.

Coyle explains why economic statistics matter. They are essential for guiding better economic policies; they involve questions of freedom, justice, life, and death. Governments use statistics that affect people’s lives in ways large and small. The metrics for economic growth were developed when a lack of physical rather than natural capital was the binding constraint on growth, intangible value was less important, and the pressing economic policy challenge was managing demand rather than supply. Today’s challenges are different. Growth in living standards in rich economies has slowed, despite remarkable innovation, particularly in digital technologies. As a result, politics is contentious and democracy strained.

Coyle argues that to understand the current economy, we need different data collected in a different framework of categories and definitions, and she offers some suggestions about what this would entail. Only with a new approach to measurement will we be able to achieve the right kind of growth for the benefit of all…(More)”.

Bubble Trouble


Article by Bryan McMahon: “…Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year and to see its annual losses swell to $11 billion by 2026. If the AI bubble bursts, it threatens not only to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and cause an economy-wide meltdown…(More)”.

Commerce Secretary’s Comments Raise Fears of Interference in Federal Data


Article by Ben Casselman and Colby Smith: “Comments from a member of President Trump’s cabinet over the weekend have renewed concerns that the new administration could seek to interfere with federal statistics — especially if they start to show that the economy is slipping into a recession.

In an interview on Fox News on Sunday, Howard Lutnick, the commerce secretary, suggested that he planned to change the way the government reports data on gross domestic product in order to remove the impact of government spending.

“You know that governments historically have messed with G.D.P.,” he said. “They count government spending as part of G.D.P. So I’m going to separate those two and make it transparent.”

It wasn’t immediately clear what Mr. Lutnick meant. The basic definition of gross domestic product is widely accepted internationally and has been unchanged for decades. It tallies consumer spending, private-sector investment, net exports, and government investment and spending to arrive at a broad measure of all goods and services produced in a country. The Bureau of Economic Analysis, which is part of Mr. Lutnick’s department, already produces a detailed breakdown of G.D.P. into its component parts. Many economists focus on a measure — known as “final sales to private domestic purchasers” — that excludes government spending and is often seen as a better indicator of underlying demand in the economy. That measure has generally shown stronger growth in recent quarters than overall G.D.P. figures.
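
To make the accounting concrete, here is a minimal sketch in Python of the expenditure identity described above and of a private-demand measure that strips out government spending. The numbers are made-up round figures for illustration, not actual Bureau of Economic Analysis data.

```python
# Illustrative round numbers only, not actual Bureau of Economic Analysis data.
consumption = 15.0           # C: consumer spending (trillions, hypothetical)
private_investment = 4.0     # I: private-sector investment
government = 5.0             # G: government investment and spending
exports, imports = 3.0, 4.0  # X, M: trade flows

# Headline GDP under the expenditure approach: C + I + G + (X - M)
gdp = consumption + private_investment + government + (exports - imports)

# A simplified proxy for "final sales to private domestic purchasers":
# strip out government and trade to focus on private demand. (The official
# measure also excludes the change in private inventories.)
private_demand = consumption + private_investment

print(gdp, private_demand)  # 23.0 19.0
```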

In recent weeks, however, there have been mounting signs elsewhere that the economy could be losing momentum. Consumer spending fell unexpectedly in January, applications for unemployment insurance have been creeping upward, and measures of housing construction and home sales have turned down. A forecasting model from the Federal Reserve Bank of Atlanta predicts that G.D.P. could contract sharply in the first quarter of the year, although most private forecasters still expect modest growth.

Cuts to federal spending and the federal work force could act as a further drag on economic growth in coming months. Removing federal spending from G.D.P. calculations, therefore, could obscure the impact of the administration’s policies…(More)”.

China wants tech companies to monetize data, but few are buying in


Article by Lizzi C. Lee: “Chinese firms generate staggering amounts of data daily, from ride-hailing trips to online shopping transactions. A recent policy allowed Chinese companies to record data as assets on their balance sheets, the first such regulation in the world, paving the way for data to be traded in a marketplace and boost company valuations. 

But uptake has been slow. When China Unicom, one of the world’s largest mobile operators, reported its earnings recently, eagle-eyed accountants spotted that the company had listed 204 million yuan ($28 million) in data assets on its balance sheet. The state-owned operator was the first Chinese tech giant to take advantage of the Ministry of Finance’s new corporate data policy, which permits companies to classify data as inventory or intangible assets. 

“No other country is trying to do this on a national level. It could drive global standards of data management and accounting,” Ran Guo, an affiliated researcher at the Asia Society Policy Institute specializing in data governance in China, told Rest of World. 

In 2023 alone, China generated 32.85 zettabytes — more than 27% of the global total, according to a government survey. To put that in perspective, storing this volume on standard 1-terabyte hard drives would require more than 32 billion units…. Tech companies that are data-rich are well-positioned to benefit from logging data as assets to turn the formalized assets into tradable commodities, said Guo. But companies must first invest in secure storage and show that the data is legally obtained in order to meet strict government rules on data security. 
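
As a quick back-of-the-envelope check on that storage comparison (our own arithmetic, not a figure from the government survey):

```python
# Back-of-the-envelope check: 1 zettabyte = 1e9 terabytes.
data_generated_zb = 32.85                 # China's reported 2023 data output
terabytes_per_zettabyte = 1_000_000_000
one_tb_drives_needed = data_generated_zb * terabytes_per_zettabyte
print(f"{one_tb_drives_needed:.2e}")      # ~3.3e+10, i.e. roughly 33 billion 1 TB drives
```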

“This can be costly and complex,” he said. “Not all data qualifies as an asset, and companies must meet stringent requirements.” 

Even China Unicom, a state-owned enterprise, is likely complying with the new policy due to political pressure rather than economic incentive, said Guo, who conducted field research in China last year on the government push for data resource development. The telecom operator did not respond to a request for comment. 

Private technology companies in China, meanwhile, tend to be protective of their data. A Chinese government statement in 2022 pushed private enterprises to “open up their data.” But smaller firms could lack the resources to meet the stringent data storage and consumer protection standards, experts and Chinese tech company employees told Rest of World...(More)”.

Nonprofits, Stop Doing Needs Assessments.


Design for Social Impact: “Too many non-profits and funders still roll into communities with a clipboard and a mission to document everything “missing.”

Needs assessments have become a default tool for diagnosing deficits, reinforcing a saviour mentality where outsiders decide what’s broken and needs fixing.

I’ve sat in meetings where non-profits present lists of what communities lack:

  • “Youth don’t have leadership skills”
  • “Parents don’t value education”
  • “Grassroots organisations don’t have capacity”

The subtext? “They need us.”

And because funding is tied to these narratives of scarcity, organisations learn to describe themselves in the language of need rather than strength—because that’s what gets funded…Now, I’m not saying that organisations or funders should never ask people what their needs are. The key issue is how needs assessments are framed and used. Too often, they use extractive “data” collection methodologies and reinforce top-down, deficit-based narratives, where communities are defined primarily by what they lack rather than what they bring.

Starting with what’s already working (asset mapping) and then identifying what’s needed to strengthen and expand those assets is different from leading with gaps, which can frame communities as passive recipients rather than active problem-solvers.

Arguably, a balanced synergy between assessing needs and asset mapping can be powerful—so long as the process centres on community agency, self-determination, and long-term sustainability rather than diagnosing problems for external intervention.

Also, asset-based mapping to me does not mean that you swoop in with the same clipboard and demand people document their strengths…(More)”.

Tech tycoons have got the economics of AI wrong


The Economist: “…The Jevons paradox—the idea that efficiency leads to more use of a resource, not less—has in recent days provided comfort to Silicon Valley titans worried about the impact of DeepSeek, the maker of a cheap and efficient Chinese chatbot, which threatens the more powerful but energy-guzzling American varieties. Satya Nadella, the boss of Microsoft, posted on X, a social-media platform, that “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” along with a link to the Wikipedia page for the economic principle. Under this logic, DeepSeek’s progress will mean more demand for data centres, Nvidia chips and even the nuclear reactors that the hyperscalers were, prior to the unveiling of DeepSeek, paying to restart. Nothing to worry about if the price falls; Microsoft can make it up on volume.

The logic, however self-serving, has a ring of truth to it. Jevons’s paradox is real and observable in a range of other markets. Consider the example of lighting. William Nordhaus, a Nobel-prizewinning economist, has calculated that a Babylonian oil lamp, powered by sesame oil, produced about 0.06 lumens of light per watt of energy. That compares with up to 110 lumens per watt for a modern light-emitting diode. The world has not responded to this dramatic improvement in energy efficiency by enjoying the same amount of light as a Babylonian at lower cost. Instead, it has banished darkness completely, whether through more bedroom lamps than could have been imagined in ancient Mesopotamia or the Las Vegas Sphere, which provides passersby with the chance to see a 112-metre-tall incandescent emoji. Urban light is now so cheap and so abundant that many consider it to be a pollutant.
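
The scale of that efficiency gain is easy to verify with the figures quoted above (a rough calculation of the ratio, nothing more):

```python
# Rough ratio of luminous efficacy, using the figures quoted above.
babylonian_lumens_per_watt = 0.06   # sesame-oil lamp (Nordhaus's estimate)
led_lumens_per_watt = 110           # modern light-emitting diode
improvement = led_lumens_per_watt / babylonian_lumens_per_watt
print(round(improvement))  # ~1833: lighting is nearly 2,000 times more efficient,
                           # yet total consumption of light rose rather than fell
```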

Likewise, more efficient chatbots could mean that AI finds new uses (some no doubt similarly obnoxious). The ability of DeepSeek’s model to perform about as well as more compute-hungry American AI shows that data centres are more productive than previously thought, rather than less. Expect, the logic goes, more investment in data centres and so on than you did before.

Although this idea should provide tech tycoons with some solace, they still ought to worry. The Jevons paradox is a form of a broader phenomenon known as “rebound effects”. These are typically not large enough to fully offset savings from improved efficiency…. Basing the bull case for AI on the Jevons paradox is, therefore, a bet not on the efficiency of the technology but on the level of demand. If adoption is being held back by price, then efficiency gains will indeed lead to greater use. If technological progress raises expectations rather than reduces costs, as is typical in health care, then chatbots will make up an ever larger proportion of spending. At the moment, that looks unlikely. America’s Census Bureau finds that only 5% of American firms currently use AI and 7% have plans to adopt it in the future. Many others find the tech difficult to use or irrelevant to their line of business…(More)”.
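
A stylised way to see the article’s closing point is a toy constant-elasticity demand curve, assumed here purely for illustration (it is not a model from the Economist piece): whether cheaper AI translates into more total spending on compute depends entirely on how price-sensitive demand turns out to be.

```python
# Toy constant-elasticity demand model: quantity = k * price ** (-elasticity).
# Efficiency gains act like a price cut; total resource use rises only when
# demand is elastic enough (elasticity > 1), which is the Jevons condition.
def total_spend(price: float, elasticity: float, k: float = 100.0) -> float:
    quantity = k * price ** (-elasticity)
    return price * quantity

for elasticity in (0.5, 1.0, 2.0):
    before = total_spend(1.0, elasticity)
    after = total_spend(0.5, elasticity)  # efficiency halves the effective price
    print(elasticity, round(before, 1), round(after, 1))
# elasticity 0.5 -> spending falls: the rebound is too weak to offset savings
# elasticity 2.0 -> spending rises: the "can't get enough of it" scenario
```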

Will Artificial Intelligence Replace Us or Empower Us?


Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.

Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.

Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”

Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.

Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.

Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.

People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.

To Whom Does the World Belong?


Essay by Alexander Hartley: “For an idea of the scale of the prize, it’s worth remembering that 90 percent of recent U.S. economic growth, and 65 percent of the value of its largest 500 companies, is already accounted for by intellectual property. By any estimate, AI will vastly increase the speed and scale at which new intellectual products can be minted. The provision of AI services themselves is estimated to become a trillion-dollar market by 2032, but the value of the intellectual property created by those services—all the drug and technology patents; all the images, films, stories, virtual personalities—will eclipse that sum. It is possible that the products of AI may, within my lifetime, come to represent a substantial portion of all the world’s financial value.

In this light, the question of ownership takes on its true scale, revealing itself as a version of Bertolt Brecht’s famous query: To whom does the world belong?


Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems.

So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water in an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”)…(More)”.
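
As a rough check on that token-to-word conversion, assuming the common rule of thumb of about three-quarters of an English word per token (our assumption, not a figure from the essay; the true ratio depends on the tokenizer and the text):

```python
# Back-of-the-envelope conversion from training tokens to words.
tokens = 13_000_000_000_000   # ~13 trillion tokens reportedly used for GPT-4
words_per_token = 0.75        # rule-of-thumb assumption for English text
print(f"{tokens * words_per_token:.2e} words")  # 9.75e+12, i.e. ~10 trillion words
```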