AI Infrastructure: Momentum, Mismatch, and the Emerging Correction Risk

December 24, 2025

In these final days of 2025, markets are hitting record highs on the strength of AI-driven valuations. Yet the conversation around an emerging AI correction risk has intensified as investors weigh stretched multiples and rising capital costs. The year began with a jolt when DeepSeek briefly rattled the market in January, raising fears of a sharper reset before sentiment quickly recovered. But signs of fatigue are becoming harder to ignore. One recent example is Blue Owl’s decision to withdraw from a $10 billion funding deal with Oracle, whose stock has now fallen below its level in September, when the company announced its $300 billion agreement with OpenAI. Adding context, according to the Wall Street Journal, “Business investment in AI might have accounted for as much as half of the growth in gross domestic product, adjusted for inflation, in the first six months of the year.”

Against this backdrop, I co‑authored an Insight Note with my business partner Riad Hartani titled AI Infrastructure: Momentum, Mismatch, and the Emerging Correction Risk. The Note builds on a recent interview Riad did with Prakash Sangam at Tantra Analyst, available here.

Key Takeaways

The Insight Note’s main takeaways can be summarized as follows:

Strategic Positioning
  • AI will transform economies and societies over the long term, but near-term corrections will stem from economic mismatches (underutilization, timing gaps, circular financing) rather than technology failures
  • Any correction will prune overleveraged GPU-centric clouds and inflexible facilities while largely sparing hyperscalers, diversified cloud providers, and data centers with modular designs that can match supply to demand
  • A potential correction would affect each layer of AI infrastructure differently, according to its specific dynamics, and impacts would vary across geographies given the centralized nature of large data center deployments
Core Investment Risks
  • Supply growth plans outpace near-term enterprise demand, which remains in a cycle of pilots and proofs-of-concept despite years of hype
  • Hyperscaler spending accounts for 80-90% of AI compute demand, creating systemic contagion risk in which even modest capex pauses cascade through entire supply chains
  • Hardware refresh cycles of 3-4 years threaten to render new data centers obsolete before they generate adequate returns, affecting debt financing terms and requirements
Investment Discipline Required
  • Prioritize companies with validated enterprise demand pipelines and proven revenue over those touting speculative capacity or hyperscaler memorandums of understanding
  • Scrutinize circular financing where GPU vendors inflate demand signals through equity swaps, pre-purchases, or capacity arrangements with startups
  • Verify power procurement and customer acquisition strategies before committing capital, as many projects advance without secured energy allocations, grid interconnections, or firm customer commitments
  • Favor adaptable infrastructure with modular power and cooling systems that can repurpose across hardware generations and workload shifts
  • Select diversified, multi-tenant operators over single-use GPU facilities dependent on hyperscaler leases
Source of the GDP estimate quoted above: Tom Loftus/WSJ newsletter.

Reducing the Risk of Stranded Assets

For operators and investors, the practical implications of architectural moves toward custom ASICs and memory-optimized designs are immediate and tangible. Stranded assets become a real risk when a facility cannot adapt its power distribution, cooling loops, or rack layouts to host lower-density or differently cooled hardware. Facilities built for sustained, ultra-high-density GPU loads will see utilization fall if production workloads migrate to alternative silicon or to distributed NPUs with lower power footprints.

Prioritize flexibility in new builds and retrofits. Specify modular power distribution, convertible cooling systems, and rack infrastructure that supports a range of power densities and form factors. Negotiate staged deployments and firm utilization commitments to align capex with validated demand and reduce exposure to rapid architectural pivots. Offer multi-architecture hosting that supports GPUs and ASICs to diversify revenue and shorten vacancy cycles.

Monitor hyperscaler roadmaps and silicon trends closely, because in-house ASIC moves can compress third-party pricing power and accelerate obsolescence. Model shorter payback windows that explicitly include refresh cycles, and stress-test scenarios in which a portion of GPU demand migrates to specialized accelerators. Maintain active engagement with customers so you can anticipate shifts in workload profiles and adjust capacity plans before vacancy appears.
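To make this kind of modeling concrete, here is a minimal sketch of a payback calculation for a hypothetical GPU facility, comparing a base case against a stress case in which part of GPU demand migrates to specialized accelerators at the refresh point. All figures (capex, annual revenue, utilization haircuts) are invented for illustration and are not from the Insight Note.

```python
# Illustrative payback model for a hypothetical GPU data center.
# All numeric inputs below are assumptions, chosen only to show the mechanics.

def payback_years(capex, annual_cash_flows):
    """Return the first year in which cumulative cash flow covers capex,
    or None if it never does within the modeled horizon."""
    cumulative = 0.0
    for year, cash_flow in enumerate(annual_cash_flows, start=1):
        cumulative += cash_flow
        if cumulative >= capex:
            return year
    return None

CAPEX = 1_000        # $M, assumed build cost
BASE_REVENUE = 220   # $M/yr at full utilization, assumed
REFRESH_YEAR = 4     # hardware refresh after year 4 (the Note's 3-4 year range)
HORIZON = 8          # years modeled

# Base case: full utilization until the refresh, then a modest 20% decline
# as newer silicon elsewhere erodes pricing power.
base = [BASE_REVENUE if y < REFRESH_YEAR else BASE_REVENUE * 0.8
        for y in range(HORIZON)]

# Stress case: a large share of GPU demand migrates to specialized
# accelerators at the refresh point, halving a single-use facility's revenue.
stress = [BASE_REVENUE if y < REFRESH_YEAR else BASE_REVENUE * 0.5
          for y in range(HORIZON)]

print("base payback (years):", payback_years(CAPEX, base))     # 5
print("stress payback (years):", payback_years(CAPEX, stress)) # 6
```

Even this toy version shows why refresh cycles matter: the stress case pushes payback past the point where the original hardware is already obsolete, which is exactly the dynamic that tightens debt financing terms.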

Invest in modular construction, standardized interfaces, and vendor-agnostic systems that let you reconfigure power and cooling quickly. Build commercial terms that tie expansion to utilization milestones and include exit or conversion clauses to limit downside. Modularity, multi-architecture support, and firm utilization commitments are the most effective levers for reducing stranded-asset risk and preserving asset value as compute architectures evolve.


Over the past two years, our team at Xona Partners evaluated dozens of AI data center plans for investors and developers. If you’d like to explore how our experience can support your work, feel free to reach out.