Abdullah Kamran
After several years of extraordinary momentum, the artificial intelligence sector is entering a more uncertain and reflective phase. What once appeared to be an unstoppable wave of innovation and investment is now being examined with greater caution by industry insiders, financial regulators, and policymakers. The very factors that propelled AI to the centre of the US economy (soaring valuations, massive capital inflows, and promises of exponential technological progress) are now prompting uncomfortable questions about sustainability, profitability, and long-term direction.
AI’s rise has been spectacular. Trillion-dollar valuations, record-breaking venture capital investments, and the meteoric rise of technology stocks have made AI the defining economic and technological narrative of the decade. Cloud infrastructure, advanced semiconductors, and large-scale data centres have absorbed unprecedented levels of capital. For many investors, AI symbolised not just another technology cycle, but a fundamental transformation comparable to electricity or the internet. Yet history shows that periods of intense technological optimism often blur the line between genuine innovation and speculative excess.
In recent months, that historical parallel has become harder to ignore. Senior figures within the technology sector itself have begun to acknowledge signs of overexuberance. OpenAI’s chief executive has cautioned that investors may be excessively excited, while Alphabet’s leadership has openly admitted that elements of irrationality exist in the market. Financial institutions have echoed these warnings. The Bank of England has raised concerns about the risk of sharp corrections in technology stock valuations, and the International Monetary Fund has drawn comparisons with the late 1990s, when enthusiasm for internet companies culminated in the dot-com crash. These signals do not suggest that AI lacks value, but they do point to a potential misalignment between expectations and near-term realities.
At the heart of the debate lies a deeper structural problem. Much of today’s investment logic rests on the assumption that AI capabilities, particularly large language models, will continue to improve at an exponential pace. However, there are growing indications that these systems may be approaching performance plateaus. Each new generation of models requires dramatically higher training costs, more energy, and more specialized hardware, yet delivers smaller incremental gains. If this trend persists, it undermines the central premise used to justify trillion-dollar valuations and relentless capital expansion.
Profitability presents an equally serious challenge. While hundreds of millions of users interact with generative AI tools, only a small fraction are willing to pay for premium subscriptions. This gap between widespread usage and sustainable revenue highlights a fundamental weakness in current business models. Proprietary generative AI systems remain extremely expensive to develop and operate, and so far, revenues have not kept pace with costs. As a result, some investors are beginning to reassess their appetite for funding consumer-facing AI startups, demanding clearer and more credible pathways to profit.
If confidence erodes, the consequences could ripple across the US technology sector. A contraction in funding would likely force companies to retrench, reduce staff, and pivot away from speculative consumer applications. Prices for advanced AI services could rise, making widespread societal adoption more difficult. In this scenario, AI risks becoming a premium technology accessible only to large corporations and governments, rather than a broadly diffused public utility.
One likely outcome of such pressure would be a shift toward defence and national security markets. Governments offer predictable, long-term contracts and are less sensitive to short-term profitability concerns. The recent awarding of Pentagon contracts to leading AI firms for military applications already signals this direction. A securitization of AI, a shift away from chatbots and creative tools toward surveillance, intelligence analysis, and autonomous systems, would not only reshape the industry's priorities but also raise profound ethical and governance questions.
While a potential AI bubble would have global repercussions, China appears comparatively well-positioned to absorb the shock and possibly benefit from it. Unlike the US model, which relies heavily on private capital and high market expectations, China’s AI ecosystem is supported by a mix of state funding, subsidies, and coordinated industrial policy. Innovation is distributed across universities, state-backed firms, small enterprises, and research institutions. Rather than chasing dramatic breakthroughs, Chinese developers often emphasize incremental improvements, cost efficiency, and practical deployment.
Crucially, China has invested heavily in smaller, cheaper, and open-source AI models. These systems may lack the cutting-edge performance of the most advanced Western models, but they are easier to adopt, cheaper to deploy, and well-suited to real-world applications. If Western companies retreat into premium markets or focus on government contracts, they risk leaving large segments of the global market underserved. China, by contrast, could fill this gap by offering affordable AI solutions to developing countries, replicating patterns already seen in telecommunications, renewable energy, and electric vehicles.
This strategy carries long-term geopolitical implications. As Chinese AI systems become embedded in infrastructure, public services, and digital ecosystems across the Global South, dependency deepens. Over time, switching costs rise, and technological reliance extends beyond software into hardware, standards, and data governance. Such dominance could shape global norms around data use, surveillance, and state control, raising concerns about sovereignty, security, and privacy.
Policy responses in the United States remain uncertain and reactive. Recent signals allowing limited semiconductor exports to China may be reversed if strategic anxieties intensify. Export controls are often seen as a tool to slow competitors, but their effectiveness is limited by enforcement challenges, dual-use technologies, and the unintended consequence of accelerating domestic innovation in targeted countries. Restrictive measures may delay rivals, but they rarely halt technological progress altogether.
More fundamentally, governance trajectories are likely to diverge further. China is expected to continue pursuing a state-driven model that prioritizes civilian deployment, market penetration, and centralized oversight. The US and its allies, meanwhile, may increasingly frame AI through a national security lens, emphasizing military integration and strategic competition. This divergence complicates international coordination and makes the prospect of shared global AI governance increasingly remote.
A potential AI bubble, therefore, is not merely a question of inflated valuations correcting themselves. It represents a critical juncture where economic pressures, strategic competition, and policy choices intersect. The decisions taken in this period will shape who controls AI technologies, how they are deployed, and who benefits from them. If a correction does occur in the coming years, policymakers and industry leaders will face a narrow window to separate genuine, transformative innovation from speculative excess. Getting that distinction right may determine not only the future of the AI industry, but the balance of technological power in the decades ahead.