Inside the trillion-dollar pursuit of artificial general intelligence.
The world’s most powerful technology companies have begun to treat intelligence as a strategic resource, one that could decide economic, political, and even civilizational dominance. And they are investing in it more than ever.
In 2025, NVIDIA surpassed US $3.1 trillion in market capitalization, briefly overtaking Apple, fueled by record demand for its AI-training chips. The company’s H200 and B100 GPUs are booked out through 2026, and hyperscalers are signing decade-long supply contracts that look less like procurement and more like sovereign energy deals. (Reuters, May 2025)
At the same time, OpenAI secured US $22 billion in compute commitments from CoreWeave to ensure exclusive access to large-scale training capacity, an unprecedented pre-purchase of intelligence infrastructure.
Rival labs like Anthropic, xAI, and Google DeepMind have responded with similar scaling projects, each aiming to train multimodal models approaching “general reasoning.”
The Economic Arms Race
Governments are no longer passive observers.
In June 2025, the United States announced the National AI Compute Resource — a publicly funded network of GPU clusters designed to secure domestic training capacity and reduce dependency on private cloud monopolies.
The United Kingdom, meanwhile, committed an additional £500 million to expand its “Isambard-AI” supercomputer in Bristol, aiming to make it Europe’s most powerful open-research system.
Private capital is flowing just as aggressively.
Goldman Sachs estimates that global AI infrastructure investment will exceed US $1 trillion by 2027, led by data-center construction, chip manufacturing, and model-training facilities.
Compute itself — once an invisible cost — is becoming the tradeable core of value.
Safety as Strategy
The pursuit of superintelligence isn’t just technical; it’s political.
After the Bletchley Park AI Safety Summit in November 2023, which brought together the U.S., U.K., and China, national regulators began drafting frameworks to control “frontier” model training.
By mid-2025, both the U.S. AI Safety Institute and the U.K. AI Safety Institute had established evaluation protocols to test large models for emergent behavior before deployment.
Labs now view compliance not as a burden but as a moat: passing those audits could become a prerequisite for public compute subsidies and a mark of credibility with investors.
The economics of superintelligence aren’t theoretical anymore.
The Superintelligence Shadow
Still, the race shows no signs of slowing.
In August 2025, xAI, Elon Musk’s research company, said it had secured 25,000 NVIDIA H100s to train its “Grok 3” model. Around the same time, Anthropic announced its “Claude-Next” project, aiming for reasoning abilities beyond GPT-4-class models. (TechCrunch, Aug 2025)
Behind the competition lies a more existential motive: whoever reaches general intelligence first will define the baseline of capability for everyone else.
This is why venture capital, state investment, and research ambition have merged into one ecosystem — each funding round doubling as an arms-race declaration.
The Cost of God Mode
Building superintelligence requires not just scientific breakthroughs but also trillions of dollars’ worth of compute, energy, and cooling.
By 2030, AI data centers are expected to consume up to 4% of global electricity, according to the International Energy Agency. (IEA, 2025 update)
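To make that share concrete, here is a rough back-of-envelope sketch. The global-demand figure of roughly 30,000 TWh per year by 2030 is our own ballpark assumption, not a number taken from the IEA report cited above.

```python
# Back-of-envelope: what "up to 4% of global electricity" means in absolute terms.
# Assumption (ours, not from the IEA citation above): global electricity demand of
# roughly 30,000 TWh per year by 2030, a commonly cited ballpark.
GLOBAL_ELECTRICITY_TWH = 30_000
AI_SHARE = 0.04  # the "up to 4%" figure quoted above

ai_demand_twh = GLOBAL_ELECTRICITY_TWH * AI_SHARE
print(f"Implied AI data-center demand: ~{ai_demand_twh:,.0f} TWh per year")
# Prints roughly 1,200 TWh per year, on the order of a large industrialized
# country's total annual electricity consumption.
```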
That demand is now driving alliances between AI labs and renewable-energy providers — from Microsoft’s nuclear-powered data-center pilots to Amazon’s long-term solar contracts.
Infrastructure is intelligence.