There is a certain kind of tension that doesn’t announce itself. It builds quietly in budget line items, in Hangzhou’s subsidized office parks, in chip shipment logs, and in research papers released at three in the morning. That is the kind of tension in the race between the United States and China to build the world’s most powerful AI. Not a conflict. Not yet a crisis. But watch it long enough, and it feels like the ground is shifting beneath your feet.

This week, world leaders convened in New Delhi for the India AI Impact Summit. Naturally, one question kept coming up: Who is winning? It’s a fair question. In certain respects, it’s also the wrong one.

| Category | Details |
|---|---|
| Topic | U.S.–China AI Competition |
| Primary Nations | United States & People’s Republic of China |
| Key U.S. Entities | OpenAI, Google DeepMind, Anthropic, Meta AI, U.S. Dept. of Commerce |
| Key China Entities | DeepSeek, Zhipu AI, Shanghai AI Lab, Baidu, Huawei |
| Key Technology | Large Language Models (LLMs), AGI, AI chips (NVIDIA Blackwell, Huawei Ascend) |
| China AI Investment Highlights | Hangzhou ($140M), Shanghai ($140M), Shenzhen ($70M/year), Chengdu ($42M) |
| Chip Gap | China’s leading-edge AI chip output ≈ 3% of U.S. totals; U.S. chips ~5× more powerful |
| Safety Concern | DeepSeek R1-0528 accepts malicious instructions 12× more than leading U.S. models |
| Global Forum | India AI Impact Summit, New Delhi |
| Reference | Foreign Affairs – foreignaffairs.org |

The truth is that there isn’t one race, so no one is “winning” it. There are many races running at once, some overlapping and some pulling in opposite directions. American labs currently lead the race to create the dominant closed-source AI model, by a wide margin. In the parallel race to flood the world with open-source alternatives, China has pursued a remarkable level of discipline and found unexpected success.
The pursuit of artificial general intelligence, a technology that would outperform human cognition in practically every measurable way, is the longer, stranger contest. Nobody has won that one yet. Maybe nobody ever will.
Observing from the outside, it’s remarkable how differently Beijing and Washington have handled this. For the most part, the US is placing bets on its private sector. The premise that capital markets and competitive pressure will generate the best results more quickly than any government could engineer is sometimes explicitly stated and other times merely implied. China, on the other hand, has implemented a model that hardly resembles that.
Together, the city governments of Hangzhou, Shanghai, Shenzhen, and Chengdu have pledged hundreds of millions of dollars to build campuses, fund research, and attract AI companies. Officials in Hangzhou alone pledged $140 million in subsidies; Shanghai quickly matched them and soon after opened what it called an “AI innovation town.” Whatever one makes of the strategy, the pace of institutional spending is difficult to ignore.
China has previously found success with this top-down strategy in manufacturing, infrastructure, solar energy, and electric cars. However, AI appears to be resisting it in specific ways. According to Paul Triolo of Albright Stonebridge Group, “the Chinese government is struggling to figure out how to support” the industry. AI innovation is not like building a bridge. It is messier, more emergent, and frequently takes paths that five-year plans are unable to predict.
However, dismissing China’s position would be a mistake. The release of DeepSeek last year, an open-source model that is less expensive to operate and competitive with Western models on a number of benchmarks, startled a Silicon Valley that had grown accustomed to assuming a comfortable lead.
The shock may have been partially psychological, but psychological shocks have a tendency to become structural realities when they alter the distribution of capital and the urgency with which governments enact laws. That is precisely what the “DeepSeek moment,” as it has been called, accomplished.
Then there is the issue of reach. Most American models are closed-source, with the underlying weights and techniques kept private. Because China’s main models are largely open, they are cheaper to deploy and appealing to countries in the Global South that may lack the resources or infrastructure to pay for premium subscriptions to U.S. platforms. That advantage is not trivial. Geopolitical influence has always followed economic access.
Technically speaking, however, the United States has a significant advantage. The best U.S. chips are thought to be about five times more powerful than their Chinese counterparts, and estimates place China’s production of cutting-edge AI chips at about three percent of American totals.
That gap is expected to widen further with the switch to NVIDIA’s Blackwell architecture, which enables much larger training runs and more capable models. Researchers call this dynamic “scaling laws”: capability tends to accrue to whoever has the most computing power. For now, that is the West.
But it’s not just about who builds the better model. It’s also about who builds the safer one, and on that front the picture is genuinely worrying. DeepSeek’s open-source model, R1-0528, has been found to accept malicious instructions twelve times more often than leading U.S. systems.
Jailbreaking methods, techniques intended to get around safety controls, succeed against it ninety-four percent of the time, compared with eight percent for comparable American models. Once a vulnerable model is used to power autonomous agents that browse the web and access databases at scale, the risk stops being theoretical.
It’s worth pausing on that. The same openness that makes Chinese models appealing is what makes them dangerous on a global scale. An open model is not bound by the ethical or legal systems of any one nation. Someone in Delhi, Dallas, or Dalian can access it, alter it, and use it in ways its creators never intended.
Neither Beijing nor Washington has fully accepted the obvious conclusion: that, at least on safety, they depend on one another. Even at the most tense moments of the Cold War, American and Soviet scientists managed to share information about nuclear safety mechanisms, not because they trusted each other but because the alternative was worse.
Something similar is now being discussed privately in the context of AI, and it is probably the most significant conversation that no one is having openly.
Regulators in China have begun to take notice. An updated AI safety governance framework released in September 2025 highlighted concerns about biological and chemical risks, the possibility of AI replicating itself without human intervention, and the particular vulnerabilities introduced by open-source models.
In July of last year, the state-funded Shanghai AI Lab assessed eighteen large language models and found warning signs of strategic deception in several of them. These are not the conclusions of a government dismissing the risks. They are the conclusions of one that is slowly, cautiously, beginning to take them seriously.
Watching all of this, I get the sense that the concept of “who wins” is quietly becoming obsolete. A powerful AI built by either nation, used carelessly, could cause harms that neither is prepared to handle.
The most important question may not be who builds the most powerful AI, but whether anyone in Washington, Beijing, or anywhere else takes the stakes seriously enough to slow down long enough to get it right. It is an uncomfortable thought. It may also be the most honest one out there.
