📆 2025-11-14

The geopolitics of AI

⌛ Reading time: 8 min

In early November 2025, Nvidia CEO Jensen Huang, speaking on the sidelines of the Future of AI Summit, remarked:

China is going to win the AI race.

That's quite a statement.

Of course, comments from CEOs always aim to benefit the companies they run, and much virtual ink has been spilt on Huang's claim. The whole topic of AI is intensely hyped at this late stage of 2025. The most important consideration when weighing the statement is that Huang also claims the US can win the AI race, provided it buys more Nvidia hardware. Perhaps the statement is aiming to play on insecurity and FOMO among some in the US: the sense that the way to win the AI Wars is to quickly throw blank cheques at Nvidia.

The consequential technology race

Despite the highly financially charged information environment, more serious consideration has been given to the geopolitical impacts of artificial intelligence. A recent Frederick Kempe article at the Atlantic Council explores strategically impactful themes, including the following:

The world has entered the most consequential tech race since the dawn of the nuclear age, but this time the weapons are algorithms instead of atoms. Rather than a race to obtain a single superweapon, this is one to determine how societies think, work, and make decisions. AI is transforming not only the distribution of power around the globe but also the very nature of that power and how it will be exercised.

Despite this comparison, Kempe goes on to highlight some important differences between the Manhattan Project and what is going on today with AI.

Despite these differences, Kempe states that both races will be won via scientific breakthroughs, and that both carry potential for great good and catastrophic harm. He goes on to state that who wins the AI race could shape global norms, and determine the spread of either authoritarianism or openness and everything that flows from such systems.

Geopolitics into 2026

However, perhaps AI won't be that impactful upon the geopolitical landscape after all? In a recent article in The Economist, the editor-in-chief explored how 21st century geopolitics will become clearer in 2026. That piece did not include a single mention of AI.

A recent Lowy Institute article on the rise of geopolitical risk also makes no mention of AI.

Is the overall hype surrounding AI impacting aspects of geopolitical commentary and analyses? Or are some people missing its significance? No industry or walk of life can totally isolate itself from hype and popular opinion, especially when non-technical industries seek to understand technical topics. The traditionally non-technical think tank and geopolitical advice industry may be as vulnerable to AI hype as anyone else. But are they over- or under-estimating it?

The plateauing of LLM intelligence

The underlying issue with AI, specifically with large language models (LLMs), is that their intelligence growth does appear to be plateauing significantly (see here, here and here). The emerging AI industry is no longer touting the incredible growth in intelligence of the latest LLM models, but rather is emphasising peripheral and supplementary areas like deep research, agents and integrating AI into every product imaginable. While some novel and groundbreaking applications of AI are likely to continue to emerge, further radical increases in the underlying, actual intelligence of LLMs are unlikely to eventuate with current technology and approaches.

This is important. Looking through the hype, it is clear that AI experts are well aware that LLM intelligence has been plateauing, and they are hard at work seeking the next breakthrough. Breakthroughs may or may not emerge. They are not certain. Furthermore, the issue with hallucinations has still not been solved.

Some current research efforts

One major effort in current research is to migrate from feeding models ever more data (which we have largely run out of) to letting them learn predominantly through their own experience (paper). Essentially the entire internet has already been fed to the current generation of LLMs, and this is one of the primary reasons LLMs are not getting much smarter any more. It is also why there is so much emphasis on better harnessing the capabilities of current levels of LLM intelligence, which remain valuable. The "era of experience" is one major area of focus which aims to provide new data to models in an innovative way: since models have already consumed the internet, they need a new source of data to improve further. The idea is to allow agents to continuously learn from data generated by their own experiences, such as by interacting with their environment. The linked paper is worth a read for more, and a rough sketch of what such a learning loop might look like is included below.
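To make the idea concrete, here is a minimal, purely illustrative sketch of an experiential learning loop: an agent generates its own training signal by acting in an environment, rather than being trained on a fixed corpus. All names here (Environment, Agent, update_policy) are hypothetical placeholders for illustration, not the method described in the linked paper.

```python
# Minimal sketch of learning from self-generated experience.
# Toy example: the agent learns which of 10 actions yields the best reward.

import random

class Environment:
    """A toy environment: rewards actions closer to a hidden target."""
    def __init__(self):
        self.target = random.randint(0, 9)

    def step(self, action: int) -> float:
        # Reward is higher the closer the action is to the hidden target.
        return 1.0 - abs(action - self.target) / 9.0

class Agent:
    """A toy agent whose 'policy' is a preference score per action."""
    def __init__(self):
        self.scores = [0.0] * 10

    def act(self) -> int:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < 0.2:
            return random.randint(0, 9)
        return max(range(10), key=lambda a: self.scores[a])

    def update_policy(self, action: int, reward: float) -> None:
        # Learn from experience the agent generated itself.
        self.scores[action] += 0.1 * (reward - self.scores[action])

# The loop: data comes from interaction, not from a static dataset.
env, agent = Environment(), Agent()
for episode in range(1000):
    action = agent.act()
    reward = env.step(action)
    agent.update_policy(action, reward)

print("Learned preferences:", [round(s, 2) for s in agent.scores])
```

The point of the sketch is simply the shape of the loop: the "dataset" is produced by the agent's own interactions, which is the core shift the era-of-experience argument proposes.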

Another major area of focus in the search for the next breakthrough is world models. Current LLMs have little grounding in the physical world we inhabit. World models, including spatial and physics aspects, aim to instil that real-world context into models. The idea is that if a model understood the real world in rich detail, its responses would be better grounded, hallucinations could be reduced, and the additional information could even allow the model to reason more intelligently. How this could be done remains the subject of current research, but a conceptual sketch of the predict-and-compare idea is shown below.
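As a rough intuition for what "learning a world model" means, here is a toy example under simplifying assumptions: a model predicts how a simple physical system (a falling ball) will evolve, compares its prediction with what actually happens, and uses the error to refine its internal physics. This is an illustrative sketch only, not any particular research system's implementation.

```python
# Toy world-model sketch: learn an estimate of gravity from prediction error.

GRAVITY = -9.8  # true physics of the toy "real world" (m/s^2)

def real_world_step(height: float, velocity: float, dt: float = 0.1):
    """Ground-truth dynamics the model is trying to capture."""
    velocity += GRAVITY * dt
    height = max(0.0, height + velocity * dt)
    return height, velocity

class ToyWorldModel:
    """Starts with no physical grounding and learns it from observation."""
    def __init__(self):
        self.gravity_estimate = 0.0

    def predict(self, height: float, velocity: float, dt: float = 0.1):
        v = velocity + self.gravity_estimate * dt
        return max(0.0, height + v * dt), v

    def update(self, predicted_v: float, observed_v: float, dt: float = 0.1):
        # Nudge the internal physics toward what was actually observed.
        error = observed_v - predicted_v
        self.gravity_estimate += 0.5 * error / dt

model = ToyWorldModel()
height, velocity = 10.0, 0.0
for _ in range(50):
    _, predicted_v = model.predict(height, velocity)
    height, velocity = real_world_step(height, velocity)
    model.update(predicted_v, velocity)

print(f"Learned gravity: {model.gravity_estimate:.2f} (true value {GRAVITY})")
```

The hope, greatly simplified, is that a model with an internal picture of how the world behaves can check its outputs against that picture, which is one route researchers see towards better grounding and fewer hallucinations.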

What next?

Despite current research and the overall AI hype, it is very possible that the increases seen over the past few years will not continue. There is no inevitable path towards superintelligence or Artificial General Intelligence (AGI). If there are no further major breakthroughs, AI will not get much smarter than what we have today. This means that AI will not become the technology of the century, or millennium, as current LEAP forecasts tend to predict. Having said that, even without further major breakthroughs, we will continue to see AI integrated into offerings across a wide range of products and services. It will change the world, but it might be more akin to the progression from the calculator and Google search than something as impactful as agriculture or electricity. Current LLMs are already useful for some tasks, and used carefully they can save workers time. Any broad increase in productivity is impactful.

If AI intelligence does not progress beyond this plateau, there will be far less geopolitical impact on the globe. Instead, there will be a financial reckoning, as billions, even trillions, of dollars in investment bets need to be salvaged when the enormous profits expected from a potentially transformative technology do not eventuate. If superintelligence doesn't emerge, it will be as though the Manhattan Project was never able to create the nuclear bomb. There will be no new superweapon. There will be no greater-than-human superintelligence whose creator controls it, and the world, on the grand global stage.

While this humble blog post is not a formal forecast, I am saying that unless there are further scientific breakthroughs (whether via experience, world models or something else), it is logical to conclude that the grandest claims about the geopolitical impacts of AI are overstated and flawed. Rather than geopolitical impacts, AI will have global financial impacts. New technological breakthroughs (at least as impactful as Google's transformers) remain potential black swan events that could upend everything. And that is the hope of many.


📌 Post tags: geo-pol ai link-post