Paper by Xiaohui Jiang and Masaru Yarime: “The Chinese government has been playing an important role in stimulating innovation among Chinese enterprises. Small and medium-sized enterprises (SMEs), with their limited internal resources, face a particularly severe challenge in implementing innovation activities that depend on data, funding sources, and talent. However, the rapidly developing smart city projects in China, where significant amounts of data are available from sophisticated devices and generous funding opportunities abound, are providing rich opportunities for SMEs to explore data-driven innovation. Chinese governments are trying to actively engage SMEs in the process of smart city construction. When cooperating with the government, the availability of and access to data involved in government contracts, together with the capabilities the projects demand, help SMEs train and improve their innovation ability. In this article, we address how obtaining different types of government contracts (equipment supply, platform building, data analysis) can influence firms’ innovation performance. Obtaining a given type of government contract is regarded as receiving a particular type of treatment. The hypothesis is that data analysis contracts have a larger positive influence on innovation ability than platform building contracts, while platform building contracts in turn have a larger influence than equipment supply contracts. Focusing on the case of SMEs in China, this research aims to shed light on how the government and enterprises collaborate in smart city projects to facilitate innovation. Data on companies’ registered capital, industry, and software products from 1990–2020 is compiled from the Tianyancha website. A panel dataset is established with the key characteristics of the SMEs, their software products, and their records on government contracts.
Based on the companies’ basic characteristics, we divided the firms into six pairs of treatment and control groups using propensity score matching (PSM) and then ran a validity test to confirm that the division was reliable. Based on the established treatment-control pairs, we then ran a difference-in-differences (DID) model. The statistics show mixed results. Hypothesis 1, which states that companies obtaining data analysis contracts will experience greater innovation improvements than those with platform-building contracts, is partially confirmed when using software copyrights as the outcome variable; when using patent data as the indicator, however, the results are insignificant. Hypothesis 2, which posits that companies with platform-building contracts will show greater innovation improvements than those with equipment supply contracts, is not supported. Hypothesis 3, which suggests that companies receiving government contracts will have higher innovation outputs than those without, is confirmed. Subsequent case studies reveal the complex mechanisms behind these results…(More)”.
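The match-then-compare procedure the paper describes can be sketched in a few lines. This is a minimal illustration on simulated data, not the authors’ actual analysis: a real study would fit a propensity model (e.g. a logit on registered capital, industry, and other firm characteristics) rather than matching on a single covariate, and would estimate the DID model with firm and year fixed effects. All names and numbers below are illustrative assumptions.

```python
import random

random.seed(0)

# Toy panel: each firm has one covariate (log registered capital, say),
# a treatment flag (won a data-analysis contract), and an innovation
# output (e.g. software copyrights) before and after the contract year.
def simulate_firm(treated):
    cap = random.gauss(2.0 + 0.5 * treated, 1.0)   # treated firms skew larger
    pre = 1.0 + 0.8 * cap + random.gauss(0, 0.5)   # baseline output
    # Common time trend (+0.5) plus a true treatment effect of 1.2.
    post = pre + 0.5 + (1.2 if treated else 0.0) + random.gauss(0, 0.5)
    return {"cap": cap, "treated": treated, "pre": pre, "post": post}

firms = [simulate_firm(True) for _ in range(100)] + \
        [simulate_firm(False) for _ in range(300)]

# Step 1: matching. Here a 1-nearest-neighbour match on the covariate
# itself stands in for matching on a fitted propensity score.
treated = [f for f in firms if f["treated"]]
controls = [f for f in firms if not f["treated"]]
pairs = [(t, min(controls, key=lambda c: abs(c["cap"] - t["cap"])))
         for t in treated]

# Step 2: difference-in-differences on the matched pairs. Subtracting the
# control firm's change removes the common time trend.
did = sum((t["post"] - t["pre"]) - (c["post"] - c["pre"])
          for t, c in pairs) / len(pairs)
print(f"DID estimate of the treatment effect: {did:.2f}")  # true effect: 1.2
```

The point of the two-step design is visible in the last line: a naive pre/post comparison among treated firms would conflate the treatment effect with the common trend, while the matched DID recovers roughly the 1.2 built into the simulation.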
Why are “missions” proving so difficult?
Article by James Plunkett: “…Unlike many political soundbites, however, missions have a strong academic heritage, drawing on years of work from Mariana Mazzucato and others. They gained support as a way for governments to be less agnostic about the direction of economic growth and its social implications, most obviously on issues like climate change, while still avoiding old-school statism. The idea is to pursue big goals not with top-down planning but with what Mazzucato calls ‘orchestration’, using the power of the state to drive innovation and shape markets to an outcome.
For these reasons, missions have proven increasingly popular with governments. They have been used by administrations from the EU to South Korea and Finland, and even in Britain under Theresa May, although she didn’t have time to make them stick.
Despite these good intentions and heritage, however, missions are proving difficult. Some say the UK government is “mission-washing” – using the word, but not really adopting the ways of working. And although missions were mentioned in the spending review, their role was notably muted when compared with the central position they had in Labour’s manifesto.
Still, it would seem a shame to let missions falter without interrogating the reasons. So why are missions so difficult? And what, if anything, could be done to strengthen them as Labour moves into year two? I’ll touch on four characteristics of missions that jar with Whitehall’s natural instincts, and in each case I’ll ask how it’s going, and how Labour could be bolder…(More)”.
China is building an entire empire on data
The Economist: “CHINA’S 1.1BN internet users churn out more data than anyone else on Earth. So does the country’s vast network of facial-recognition cameras. As autonomous cars speed down roads and flying ones criss-cross the skies, the quality and value of the information flowing from emerging technologies will soar. Yet the volume of data is not the only thing setting China apart. The government is also embedding data management into the economy and national security. That has implications for China, and holds lessons for democracies.
China’s planners see data as a factor of production, alongside labour, capital and land. Xi Jinping, the president, has called data a foundational resource “with a revolutionary impact” on international competition. The scope of this vision is unparalleled, affecting everything from civil liberties to the profits of internet firms and China’s pursuit of the lead in artificial intelligence.
Mr Xi’s vision is being enacted fast. In 2021 China released rules modelled on Europe’s General Data Protection Regulation (GDPR). Now it is diverging quickly from Western norms. All levels of government are to marshal the data resources they have. A sweeping project to assess the data piles at state-owned firms is under way. The idea is to value them as assets, and add them to balance-sheets or trade them on state-run exchanges. On June 3rd the State Council released new rules to compel all levels of government to share data.
Another big step is a digital ID, due to be launched on July 15th. Under this, the central authorities could control a ledger of every person’s websites and apps. Connecting someone’s name with their online activity will become harder for the big tech firms which used to run the system. They will see only an anonymised stream of digits and letters. Chillingly, however, the ledger may one day act as a panopticon for the state.
China’s ultimate goal appears to be to create an integrated national data ocean, covering not just consumers but industrial and state activity, too. The advantages are obvious, and include economies of scale for training AI models and lower barriers to entry for small new firms…(More)”.
Trends in AI Supercomputers
Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
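The paper’s 2030 projections follow directly from compounding its measured doubling times. A back-of-envelope check, with the caveat that the 2×10²⁰ FLOP/s performance baseline is an assumption on our part (roughly 200,000 accelerators at ~10¹⁵ 16-bit FLOP/s each), not a figure quoted in the abstract:

```python
# Extrapolate from the March 2025 leader (xAI's Colossus) to 2030 using
# the paper's doubling times: performance every 9 months, hardware cost
# and power every 12 months. Baseline performance is an assumed value.
months = (2030 - 2025) * 12          # 60-month horizon

perf_2025 = 2e20                     # FLOP/s, assumed baseline
cost_2025 = 7e9                      # $7B hardware cost
power_2025 = 300e6                   # 300 MW

perf_2030 = perf_2025 * 2 ** (months / 9)
cost_2030 = cost_2025 * 2 ** (months / 12)
power_2030 = power_2025 * 2 ** (months / 12)

print(f"2030 performance ~ {perf_2030:.1e} FLOP/s")   # ~2.0e+22
print(f"2030 hardware cost ~ ${cost_2030 / 1e9:.0f}B")  # ~$224B
print(f"2030 power ~ {power_2030 / 1e9:.1f} GW")        # ~9.6 GW
```

Sixty months is about 6.7 performance doublings (roughly a 100× gain) versus five cost and power doublings (32×), which reproduces the abstract’s ~2×10²² FLOP/s, ~$200B, and ~9 GW figures to within rounding.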
The Global A.I. Divide
Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.
Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.
Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.
“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”
Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.
The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power”…(More)”.
The path for AI in poor nations does not need to be paved with billions
Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China…, as a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.
More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages. As a result, these models are less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity and through public engagement.
Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.
Europe’s dream to wean off US tech gets reality check
Article by Pieter Haeck and Mathieu Pollet: “..As the U.S. continues to up the ante in questioning transatlantic ties, calls are growing in Europe to reduce the continent’s reliance on U.S. technology in critical areas such as cloud services, artificial intelligence and microchips, and to opt for European alternatives instead.
But the European Commission is preparing on Thursday to acknowledge publicly what many have said in private: Europe is nowhere near being able to wean itself off U.S. Big Tech.
In a new International Digital Strategy the EU will instead promote collaboration with the U.S., according to a draft seen by POLITICO, as well as with other tech players including China, Japan, India and South Korea. “Decoupling is unrealistic and cooperation will remain significant across the technological value chain,” the draft reads.
It’s a reality check after a year that has seen calls for a technologically sovereign Europe gain significant traction. In December the Commission appointed Finland’s Henna Virkkunen as the first-ever commissioner in charge of tech sovereignty. After a few months in office, European Parliament lawmakers embarked on an effort to draft a blueprint for tech sovereignty.
Even more consequential has been the rapid rise of the so-called Eurostack movement, which advocates building out a European tech infrastructure and has brought together effective voices including competition economist Cristina Caffarra and Kai Zenner, an assistant to key European lawmaker Axel Voss.
There’s wide agreement on the problem: U.S. cloud giants capture over two-thirds of the European market, the U.S. outpaces the EU in nurturing companies for artificial intelligence, and Europe’s stake in the global microchips market has crumbled to around 10 percent. Thursday’s strategy will acknowledge the U.S.’s “superior ability to innovate” and “Europe’s failure to capitalise on the digital revolution.”
What’s missing are viable solutions to the complex problem of unwinding deep-rooted dependencies…(More)”.
The New Control Society
Essay by Jon Askonas: “Let me tell you two stories about the Internet. The first story is so familiar it hardly warrants retelling. It goes like this. The Internet is breaking the old powers of the state, the media, the church, and every other institution. It is even breaking society itself. By subjecting their helpless users to ever more potent algorithms to boost engagement, powerful platforms distort reality and disrupt our politics. YouTube radicalizes young men into misogynists. TikTok turns moderate progressives into Hamas supporters. Facebook boosts election denialism; or it censors stories doubting the safety of mRNA vaccines. On the world stage, the fate of nations hinges on whether Twitter promotes color revolutions, WeChat censors Hong Kong protesters, and Facebook ads boost the Brexit campaign. The platforms are producing a fractured society: diversity of opinion is running amok, consensus is dead.
The second story is very different. In the 2023 essay “The age of average,” Alex Murrell recounts a project undertaken in the 1990s by Russian artists Vitaly Komar and Alexander Melamid. The artists commissioned a public affairs firm to poll over a thousand Americans on their ideal painting: the colors they liked, the subjects they gravitated toward, and so forth. Using the aggregate data, the artists created a painting, and they repeated this procedure in a number of other countries, exhibiting the final collection as an art exhibition called The People’s Choice. What they found, by and large, was not individual and national difference but the opposite: shocking uniformity — landscapes with a few animals and human figures with trees and a blue-hued color palette.
And it isn’t just paintings that are converging, Murrell argues. Car designs look more like each other than ever. Color is disappearing as most cars become white, gray, or black. From Sydney to Riyadh to Cleveland, an upscale coffee shop is more likely than ever to bear the same design features: reclaimed wood, hanging Edison bulbs, marble countertops. So is an Airbnb. Even celebrities increasingly look the same, with the rising ubiquity of “Instagram face” driven by cosmetic injectables and Photoshop touch-ups.
Murrell focuses on design, but the same trend holds elsewhere: Kirk Goldsberry, a basketball statistician, has shown that the top two hundred shot locations in the NBA today, which twenty years ago formed a wide array of the court, now form a narrow ring at the three-point line, with a dense cluster near the hoop. The less said about the sameness of pop melodies or Hollywood movies, the better.
As we approach the moment when all information everywhere from all time is available to everyone at once, what we find is not new artistic energy, not explosive diversity, but stifling sameness. Everything is converging — and it’s happening even as the power of the old monopolies and centralized tastemakers is broken up.
Are the powerful platforms now in charge? Or are the forces at work today something even bigger?..(More)”.
Policy Implications of DeepSeek AI’s Talent Base
Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.
How Bad Is China’s Economy? The Data Needed to Answer Is Vanishing
Article by Rebecca Feng and Jason Douglas: “Not long ago, anyone could comb through a wide range of official data from China. Then it started to disappear.
Land sales measures, foreign investment data and unemployment indicators have gone dark in recent years. Data on cremations and a business confidence index have been cut off. Even official soy sauce production reports are gone.
In all, Chinese officials have stopped publishing hundreds of data points once used by researchers and investors, according to a Wall Street Journal analysis.
In most cases, Chinese authorities haven’t given any reason for ending or withholding data. But the missing numbers have come as the world’s second-biggest economy has stumbled under the weight of excessive debt, a crumbling real-estate market and other troubles—spurring heavy-handed efforts by authorities to control the narrative.
China’s National Bureau of Statistics stopped publishing some numbers related to unemployment in urban areas in recent years. After an anonymous user on the bureau’s website asked why one of those data points had disappeared, the bureau said only that the ministry that provided it stopped sharing the data.

The disappearing data have made it harder for people to know what’s going on in China at a pivotal time, with the trade war between Washington and Beijing expected to hit China hard and weaken global growth. Plunging trade with the U.S. has already led to production shutdowns and job cuts.
Getting a true read on China’s growth has always been tricky. Many economists have long questioned the reliability of China’s headline gross domestic product data, and concerns have intensified recently. Official figures put GDP growth at 5% last year and 5.2% in 2023, but some have estimated that Beijing overstated its numbers by as much as 2 to 3 percentage points.
To get what they consider to be more realistic assessments of China’s growth, economists have turned to alternative sources such as movie box office revenues, satellite data on the intensity of nighttime lights, the operating rates of cement factories and electricity generation by major power companies. Some parse location data from mapping services run by private companies such as Chinese tech giant Baidu to gauge business activity.
One economist said he has been assessing the health of China’s services sector by counting news stories about owners of gyms and beauty salons who abruptly close up and skip town with users’ membership fees…(More)”.