Original Research

The AI Infrastructure Bottleneck Framework

The Constraint Relay Race: Mapping Pricing Power, Capital Cycles & Reflexivity Across 6 Layers of AI Infrastructure

ForcedAlpha Research · February 2026 · 25 min read
Last Updated: Feb 16, 2026 — Factual claims verified & softened (Rambus, TSMC, Broadcom ASICs). Layer 4 sub-grouped. Failure modes moved earlier. Geopolitical & demand sections merged.

Executive Summary

Core Thesis: AI infrastructure is not one trade — it is a relay race of sequential bottlenecks. As each constraint is resolved, pricing power migrates to the next layer. The investors who understand the sequence, and the lag times between resolution and migration, capture asymmetric returns at each transition.

The market treats AI infrastructure as a monolithic trade: "buy semis." This framework disaggregates the stack into 6 distinct layers, each with its own supply/demand dynamics, capital cycle timing, and pricing power structure. The key insight is that resolving one bottleneck creates the next — and the companies with structural pricing power at each layer compound returns regardless of which specific constraint dominates at any given time.

This framework covers 35+ companies across the full AI infrastructure stack, identifies where pricing power concentrates (and where it doesn't), maps the capital cycle timing for each layer, and reveals the reflexive feedback loops that create both opportunity and risk.

Companies with convergence activity from our proprietary scanners — congressional trades, lobbying, institutional filings, options flow — are flagged in each layer table.

6
Infrastructure Layers
35+
Companies Mapped
15
With Convergence
5
Pricing Power Tiers
Scanner Convergence Heatmap — Framework Tickers

The count after each ticker is the number of independent data sources detecting activity. More sources = higher conviction that the positioning is real, not noise.

7 sources: CCJ
6 sources: NVDA
5 sources: AMD, BE
4 sources: AVGO, FCX, VST
3 sources: CEG
2 sources: TSM, VRT, ETN, SNPS, AMAT, LRCX, PWR
1 source: ANET, COHR, MU
0 sources: ASML, RMBS, BESI, CDNS, CIEN, SMR, SOLS

Note: Absence of scanner data does not mean a company lacks merit — structural monopolists (ASML, RMBS, BESI) derive value from market position, not from detectable trading/lobbying activity.

The Constraint Relay Sequence

Each constraint, once resolved, exposes the next bottleneck downstream.

Layer 1: Compute (GPU/ASIC) [Easing]
Layer 2: Memory / HBM [Bottleneck]
Layer 3: Advanced Packaging [Bottleneck]
Layer 4: Power Wall [Bottleneck]
Layer 5: Interconnect & Photonics [Emerging]
Layer 6: Miniaturization / EUV [Stable]

Layer 1: Compute (GPU / Custom Silicon)

The most visible layer of AI infrastructure — and increasingly, the least interesting from a pricing-power perspective. NVIDIA's dominance is real, but the market already prices it that way. The alpha is in understanding what comes after compute eases: the downstream constraints it exposes.

Compute is transitioning from acute scarcity (2023-2024) to managed supply (2025-2026) as custom silicon (Trainium, TPU, Maia) begins absorbing inference workloads. The training market remains NVIDIA-dominated, but the inference market — which will ultimately be larger — is fragmenting.

Ticker | Company | Role | Pricing Power | Moat | Key Risk
NVDA (6 sources) | NVIDIA | GPU monopoly (training); inference fragmenting to custom silicon | Monopoly | Software lock-in (CUDA), architectural lead | Custom silicon erosion of inference TAM; training monopoly intact but TAM splits
AMD (5 sources) | AMD | GPU alternative, MI300X/MI400 series | Commodity+ | Price/performance at enterprise scale | Perpetual #2 without software ecosystem
AVGO (4 sources) | Broadcom | Custom ASIC design (Google TPU, Meta MTIA) | Toll-Road | Design expertise, hyperscaler relationships | Customer concentration risk
MRVL | Marvell | Custom silicon, networking, electro-optics | Toll-Road | Custom ASIC + DCI networking combined | Execution on 5nm custom ramps
What Consensus Misses

The market debates "NVDA vs AMD" while missing the structural shift: hyperscalers are becoming chip companies. Amazon (Trainium), Google (TPU), Microsoft (Maia), and Meta (MTIA) are all building custom silicon — not to replace NVIDIA entirely, but to control inference economics. The real trade isn't picking the GPU winner; it's identifying who designs the custom chips (AVGO, MRVL) and who supplies the packaging and memory they all need.


Layer 2: Memory / HBM

High Bandwidth Memory (HBM) is the current acute bottleneck. Every AI accelerator — NVIDIA, AMD, custom — requires HBM, and supply is structurally constrained by the conversion of existing DRAM capacity to HBM production. The key insight: HBM supply is gated not just by memory fabs, but by advanced packaging capacity (Layer 3), creating a compounding constraint.

Within this layer, the overlooked toll-road is Rambus (RMBS): all three HBM manufacturers hold broad Rambus patent licenses for memory interface IP. This creates a royalty stream that scales with the bottleneck itself — the more HBM ships, the more valuable the Rambus toll position.

Ticker | Company | Role | Pricing Power | Moat | Key Risk
SK Hynix | SK Hynix | HBM market leader (~50% share), NVIDIA preferred | Structural Scarcity | Process lead in HBM3E, NVIDIA qualification | Capex cycle, DRAM price cyclicality
MU | Micron | HBM3E challenger, sole US-based HBM manufacturer | Structural Scarcity | US domestic supply (CHIPS Act), diversified memory | Late entrant in HBM, yield ramp risk
Samsung | Samsung | HBM3E/HBM4 production, vertical integration | Structural Scarcity | Scale, vertical integration (DRAM + packaging) | Yield issues, customer qualification delays
RMBS | Rambus | Memory interface IP — broad licensing across all HBM manufacturers | Toll-Road | Essential patents, 95%+ gross margin IP licensing | Patent cliff timing, licensing renegotiation
Rambus: The Hidden Toll-Road

Rambus doesn't make memory — it licenses the interface IP that makes memory work. All three HBM manufacturers (SK Hynix, Micron, Samsung) hold broad Rambus patent licenses covering memory interface technology. As HBM content per GPU increases (from 80GB on H100 to 192GB+ on B200), the royalty base expands automatically. This is a forced-buyer dynamic: there is no alternative memory interface standard. The more critical HBM becomes, the more valuable the Rambus position.
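The scaling dynamic can be sketched with a few lines of arithmetic. All inputs below — unit volume, HBM price per GB, and the royalty rate — are hypothetical round numbers for illustration, not Rambus's actual licensing terms or real shipment data; only the 80GB → 192GB content step comes from the text above.

```python
# Illustrative sketch of the toll-road scaling dynamic described above.
# All inputs are hypothetical round numbers, NOT actual royalty terms
# or real shipment data.

def royalty_base(units, hbm_gb_per_unit, asp_per_gb, royalty_rate):
    """Royalty revenue scales with HBM dollars shipped, whoever wins."""
    return units * hbm_gb_per_unit * asp_per_gb * royalty_rate

# Hypothetical: same unit volume, HBM content grows 80GB -> 192GB per chip
h100_era = royalty_base(units=1_000_000, hbm_gb_per_unit=80,  asp_per_gb=15, royalty_rate=0.01)
b200_era = royalty_base(units=1_000_000, hbm_gb_per_unit=192, asp_per_gb=15, royalty_rate=0.01)

print(f"royalty base grows {b200_era / h100_era:.1f}x on content alone")  # 2.4x
```

The point of the sketch: even with zero unit growth, the royalty base expands purely on content per chip, which is why the toll position scales with the bottleneck itself.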

What Consensus Misses

The market prices HBM as a cyclical memory trade (buy SK Hynix, maybe MU). It misses: (1) HBM supply is actually gated by packaging capacity, not fab capacity — you can't make HBM without CoWoS/hybrid bonding, which is even more constrained. (2) The memory interface toll-road (RMBS) scales with every HBM dollar regardless of who wins the manufacturing race. (3) HBM content per accelerator is growing faster than accelerator unit shipments, compounding the bottleneck.
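The compounding in point (3) is simple multiplication of growth rates. The rates below are hypothetical round numbers chosen for illustration, not forecasts:

```python
# Point (3) above: HBM demand compounds unit growth with content growth.
# Growth rates are hypothetical round numbers for illustration.

unit_growth = 0.30      # accelerator units +30% y/y (assumed)
content_growth = 0.40   # HBM GB per accelerator +40% y/y (assumed)

hbm_demand_growth = (1 + unit_growth) * (1 + content_growth) - 1
print(f"HBM bit demand growth: {hbm_demand_growth:.0%}")  # 82%
```

Two moderate growth rates multiply into a much steeper demand curve — the mechanism by which the bottleneck compounds.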


Layer 3: Advanced Packaging

This is the most underappreciated bottleneck in the entire AI stack. Advanced packaging (CoWoS, hybrid bonding, 2.5D/3D integration) is required to combine GPU dies with HBM stacks into functional AI accelerators. TSMC's CoWoS capacity is the binding constraint — even if you have GPU dies and HBM chips ready, you can't assemble them without packaging slots.

The capital intensity and lead times for packaging expansion are extreme: 18-24 months to add meaningful CoWoS capacity. This creates a durable bottleneck that will persist through at least 2027.

Ticker | Company | Role | Pricing Power | Moat | Key Risk
TSM (2 sources) | TSMC | CoWoS dominant share, advanced packaging leader | Near-Monopoly | Process technology lead, customer lock-in, capacity is a monopolist's choice | Geopolitical (Taiwan), capex intensity
BESI | BE Semiconductor | Hybrid bonding equipment — the picks-and-shovels play | Toll-Road | 70%+ market share in hybrid bonding tools | Customer concentration (TSMC), order lumpiness
AMKR | Amkor Technology | OSAT (outsourced packaging), 2.5D/fan-out | Commodity+ | Scale, geographic diversification (Arizona fab) | TSMC in-sourcing packaging, margin pressure
ASE | ASE Technology | Largest OSAT globally, advanced packaging | Commodity+ | Scale, breadth of packaging technologies | Commoditization, TSMC vertical integration
BESI: The Hidden Toll-Road of Packaging

BE Semiconductor makes the hybrid bonding equipment that enables next-generation chip stacking. With 70%+ market share in this critical tool segment, every fab expanding advanced packaging capacity must buy BESI equipment. This is an ASML-like position at an earlier stage: monopolistic market share in essential equipment for a structural growth market. The hybrid bonding TAM expands every time a new AI chip design requires tighter integration between logic and memory dies.

What Consensus Misses

Packaging is treated as a commoditized backend process, but for AI accelerators it is the binding constraint on total system output. You can design the best GPU and manufacture the fastest HBM, but if you can't package them together with CoWoS, you ship nothing. TSMC's packaging revenue is growing faster than its wafer revenue — a structural shift the market hasn't fully priced. BESI's hybrid bonding monopoly is the equipment analog to ASML's EUV monopoly, but at an earlier stage of market recognition.


Layer 4: The Power Wall

The longest-duration bottleneck in the stack. While chip and memory constraints operate on 12-24 month cycles, power infrastructure operates on 5-10 year build cycles. This isn't about manufacturing capacity — it's about physics, permitting, and grid infrastructure. Every new AI data centre needs power, cooling, and grid interconnection, and the queue for all three is measured in years, not quarters.

This layer has the lowest overbuild risk of any AI infrastructure play. Even if AI demand disappoints, the electrification megatrend (EVs, reshoring, industrial automation) provides a secular demand floor. Power infrastructure is the rare investment where both AI bull and AI bear scenarios still require the buildout.

Note: Layer 4 is deliberately broad — it captures the full physical infrastructure stack that sits between assembled silicon and operational AI. We group power generation, grid/electrical, nuclear fuel, cooling, materials, and bridge power here because they share the same constraint dynamics: multi-year build cycles, physics-limited supply, and secular demand floors independent of AI. Sub-groupings are flagged in the table below.

Ticker | Company | Sub-Group | Pricing Power | Moat | Key Risk
VRT (2 sources) | Vertiv | Power & Cooling — DC infrastructure, thermal management | Structural Scarcity | Mission-critical installed base, service revenue | Valuation, execution on order backlog
ETN (2 sources) | Eaton | Grid/Electrical — power distribution, transformers | Structural Scarcity | Broad electrical portfolio, regulatory compliance | Diversified conglomerate, AI exposure diluted
PWR (2 sources) | Quanta Services | Grid/Electrical — grid construction, transmission buildout | Structural Scarcity | Largest electrical contractor, skilled labor monopoly | Labor availability, project execution
FCX (4 sources) | Freeport-McMoRan | Materials — copper supply, physical bottleneck for all electrical | Structural Scarcity | Grasberg mine (among the world's largest), reserves | Commodity price volatility, Indonesian politics
CEG (3 sources) | Constellation Energy | Generation — largest US nuclear fleet, 24/7 baseload | Near-Monopoly | Existing nuclear fleet (no new build required) | Regulatory risk, PPA renegotiation
VST (4 sources) | Vistra | Generation — nuclear + gas fleet, Texas grid | Structural Scarcity | Comanche Peak nuclear, Texas deregulated market | Regulatory, grid reliability events
CCJ (7 sources) | Cameco | Nuclear Fuel — uranium supply for nuclear renaissance | Structural Scarcity | Tier-1 mines (McArthur River, Cigar Lake) | Uranium price volatility, supply restart risk
SOLS | Solstice (ConverDyn) | Nuclear Fuel — only US UF6 conversion facility, fuel chokepoint | Structural Scarcity | NRC license through 2060, Metropolis Works monopoly | Single facility risk, segment opacity, spinoff execution
CRS | Carpenter Technology | Materials — specialty superalloys and nickel powders for jet engines, hypersonics, and nuclear submarines | Structural Scarcity | 500+ patents, 90% jet engine cert coverage, Berry Amendment + ITAR protection | Valuation near ATH, brownfield execution risk, customer concentration in aerospace OEMs
OKLO | Oklo | Generation — advanced fission microreactors for data centres | Commodity+ | Sam Altman backing, NRC engagement | Pre-revenue, regulatory approval timeline
SMR | NuScale Power | Generation — small modular reactor technology | Commodity+ | NRC design certification (only SMR approved) | Project delays, cost overruns, pre-revenue
BE (5 sources) | Bloom Energy | Bridge Power — solid oxide fuel cells, behind-the-meter for data centres | Commodity+ | Speed-to-power (months vs years for grid), efficiency over turbines | Gas price sensitivity, manufacturing scale-up, long-term pricing TBD as alternatives scale
MWH | SOLV Energy | Solar/Storage EPC — builds utility-scale solar farms and battery storage, grid interconnection | Commodity+ | Pure-play scale (#2 US solar EPC), 18 GW O&M fleet, grid interconnection capability | EPC margin compression, IRA sunset post-2027, PE overhang (American Securities 75%)
MP | MP Materials | Materials — sole US rare earth producer, critical minerals for magnetics/motors | Structural Scarcity | Only integrated US rare earth mine-to-magnet supply chain | China trade policy, demand timing for magnetics
Longest Runway in the Stack

Power infrastructure has 5-10 year build cycles, making it the lowest overbuild risk layer. Grid interconnection queues averaged over 4 years in 2025 (peaking at 5 years for projects entering the queue in 2023, per LBNL). Nuclear plants take 7-10 years to build (or in Constellation's case, already exist). Copper mines take 10-15 years from discovery to production. Unlike chips (where a fab can be built in 2 years), power infrastructure cannot be rapidly scaled — this makes the pricing power durable and the capital cycle slow enough that timing risk is low.

Cooling is an under-discussed sub-constraint: each MW of AI compute generates heat that must be removed. Liquid cooling (direct-to-chip and immersion) is replacing air cooling for high-density AI racks, creating parallel demand for thermal management infrastructure (Vertiv, Schneider Electric) alongside electrical infrastructure.

What Consensus Misses

The nuclear renaissance is not speculative — it's a forced outcome. Data centres need 24/7 baseload power, and renewables alone can't provide it (intermittency, land use). Natural gas faces emissions constraints and price volatility. Nuclear is the only scalable, zero-carbon, 24/7 power source. The market is slowly pricing this in (CEG +200% from 2023 lows), but the second-order trade — uranium supply (CCJ), uranium conversion (SOLS), copper for grid connections (FCX), and fuel cells as bridge power (BE) — remains under-owned. The deepest second-order play is SOLS — the only US UF6 conversion facility, hiding inside a Honeywell spinoff the market prices as a refrigerant company. Also: FCX is a dual-catalyst play — AI copper demand AND Chinese trade policy create independent pricing drivers.


Layer 5: Interconnect & Photonics

As compute clusters scale from hundreds to hundreds of thousands of GPUs, the network becomes the bottleneck. Training large models requires moving massive amounts of data between GPUs, between racks, and between data centres. The transition from electrical to optical interconnects (co-packaged optics, or CPO) is the next structural shift in this layer.

Arista Networks occupies a unique position: it is becoming the networking equivalent of CUDA — the software layer that ties AI clusters together. Just as NVIDIA's moat is software (CUDA), not hardware (GPUs), Arista's moat is EOS (Extensible Operating System), not switches.

Ticker | Company | Role | Pricing Power | Moat | Key Risk
ANET | Arista Networks | AI cluster networking, spine/leaf switches, EOS software | Near-Monopoly | EOS software ecosystem, hyperscaler lock-in | Broadcom custom NIC competition
COHR | Coherent Corp | Optical transceivers, 800G/1.6T modules | Structural Scarcity | Vertical integration (InP lasers to modules) | ASP erosion, technology transitions
LITE | Lumentum | Optical components, laser sources for transceivers | Commodity+ | Laser technology, telecom + datacom diversification | Revenue concentration, ASP pressure
CIEN | Ciena | DCI (data centre interconnect), coherent optical | Commodity+ | WaveLogic coherent DSP technology | Lumpy ordering, carrier capex cycles
AVGO (4 sources) | Broadcom | Networking ASICs (Tomahawk, Trident, Jericho), custom NICs | Toll-Road | ASIC design, VMware software, diversified | Custom NIC threat to Arista
What Consensus Misses

Arista is not a hardware company — it's a software company that ships switches. EOS (Extensible Operating System) runs on every Arista device and creates the same kind of ecosystem lock-in that CUDA creates for NVIDIA. Hyperscalers don't switch networking vendors because the operational cost of retraining network engineers and rewriting automation scripts exceeds the hardware savings. The CPO (co-packaged optics) transition will disrupt transceiver companies (COHR, LITE) but reinforce Arista's position as the software/orchestration layer.


Layer 6: Miniaturization / EUV / Chip Design

The foundation layer that enables everything above. EUV lithography, process node shrinks, and electronic design automation (EDA) tools are the bedrock of semiconductor progress. This layer is characterized by extreme monopoly/duopoly dynamics: ASML is the sole EUV source, and Synopsys/Cadence are the EDA duopoly.

The EDA duopoly is particularly compelling: ASML-tier pricing power with software margins. Every advanced chip design must use either Synopsys or Cadence tools — there is no third option for leading-edge work. And unlike hardware, EDA tools generate recurring subscription revenue with near-zero marginal cost.

Ticker | Company | Role | Pricing Power | Moat | Key Risk
ASML | ASML Holding | Sole EUV lithography equipment supplier | Monopoly | Only company capable of making EUV systems | Geopolitical export controls, order lumpiness
KLAC | KLA Corporation | Wafer inspection and metrology | Near-Monopoly | 60%+ share in process control, essential for yield | Capex cyclicality, China exposure
LRCX (2 sources) | Lam Research | Etch and deposition equipment | Near-Monopoly | Market leadership in critical etch steps | Capex cyclicality, China restrictions
AMAT (2 sources) | Applied Materials | Broadest semiconductor equipment portfolio | Near-Monopoly | Scale, breadth across deposition/etch/CMP | Diversification dilutes AI purity
SNPS (2 sources) | Synopsys | EDA tools — chip design software | Monopoly | No alternative for leading-edge design, IP portfolio | Regulatory (Ansys acquisition), valuation
CDNS | Cadence Design | EDA tools — chip design/verification software | Monopoly | Duopoly with Synopsys, essential for advanced nodes | Valuation, custom silicon design complexity
EDA: ASML-Tier Pricing Power, Software Margins

Synopsys and Cadence together control virtually 100% of leading-edge chip design software. This is more concentrated than any other layer in the AI stack. Unlike hardware monopolies that face capex-driven margin pressure, EDA tools have 80%+ gross margins and generate recurring subscription revenue. Every new chip design — whether GPU, ASIC, HBM controller, or networking ASIC — requires EDA tools. The AI boom doesn't just create demand for chips; it creates demand for chip designs, which directly flows to the EDA duopoly.

What Consensus Misses

The market treats equipment stocks (ASML, KLAC, LRCX, AMAT) as cyclical semi-cap plays. But the AI-driven structural increase in leading-edge wafer starts changes the math: utilization stays higher for longer, dampening the cyclical trough. More importantly, the complexity increase at each node (3nm → 2nm → 18A) drives disproportionate growth in inspection (KLAC) and EDA (SNPS, CDNS) relative to wafer volumes. Complexity, not volume, is the growth driver.


Pricing Power Matrix

Not all AI infrastructure companies are equal. The framework's investment thesis rests on a hierarchy of pricing power: companies with structural moats that allow them to raise prices (or maintain margins) regardless of the competitive environment. The matrix below maps every company in the framework to its pricing power tier.

Pricing Power Distribution
Monopoly (4)
Near-Monopoly (6)
Toll-Road (4)
Scarcity (13)
Commodity+ (8)

The top 3 tiers (Monopoly, Near-Monopoly, Toll-Road) represent 14 companies that maintain pricing regardless of cycle phase. These are your core positions. Structural Scarcity names are timing trades. Commodity+ are cycle-peak plays only.

Monopoly
Sole supplier. No alternative exists. Can raise prices at will.
ASML (EUV lithography), NVDA (CUDA/training — inference fragmenting), SNPS (EDA), CDNS (EDA)
Near-Monopoly
Dominant share (60%+). Alternatives exist but switching costs prohibitive.
TSM (CoWoS dominant share), ANET (AI networking/EOS), CEG (nuclear fleet), KLAC (inspection), LRCX (etch), AMAT (broad equip)
Toll-Road
Essential IP/component that scales with industry volume regardless of winner.
RMBS (memory IP), BESI (hybrid bonding), AVGO (custom ASIC), MRVL (custom silicon)
Structural Scarcity
Supply constrained by physics/permitting. Can't be rapidly scaled.
CCJ (uranium), SOLS (UF6 conversion), CRS (superalloys), FCX (copper), MP (rare earths), VRT / ETN / PWR (electrical), MWH (solar EPC), VST (nuclear), SK Hynix / MU (HBM), COHR (optics)
Commodity+
Differentiated but facing competition. Pricing power limited to cycle peaks.
AMD (GPU #2), BE (fuel cells), AMKR / ASE (OSAT), LITE / CIEN (optics), OKLO / SMR (pre-revenue nuclear)

Investment Rule: Always prefer Monopoly and Toll-Road tier over Commodity. Monopolists set prices. Toll-road operators collect regardless of who wins the volume race. Commodity+ players are cyclical — they outperform at cycle peaks but compress at troughs. Build core positions in the top 3 tiers; use Structural Scarcity for timing trades when supply/demand is most dislocated.


Capital Cycle Timing

Each layer operates on different capital cycle timescales. Understanding where each layer sits in its cycle — and the overbuild risk at each stage — is critical for timing entries and avoiding value traps.

Capacity Expansion Lead Time by Layer
Interconnect: 6-12 months
Compute: 12-18 months
Memory: 18-24 months
Packaging: 18-24 months
EUV / Fabs: 3-5 years
Power: 5-10 years

Longer lead time = lower overbuild risk = more durable pricing power. Power infrastructure is the only layer where even a demand slowdown doesn't create excess capacity.

Layer | Cycle Phase | Overbuild Risk | Capacity Expansion Lead Time
1. Compute | Late expansion | Moderate — custom silicon fragmenting TAM | 12-18 months (new chip design to production)
2. Memory / HBM | Mid expansion | High — memory cycle historically brutal | 18-24 months (DRAM-to-HBM conversion)
3. Packaging | Acute shortage | Low — capital intensity limits entrants | 18-24 months (CoWoS line expansion)
4. Power | Early build | Very Low — secular demand floor | 5-10 years (generation + transmission)
5. Interconnect | Early expansion | Moderate — technology transitions (CPO) | 6-12 months (transceiver production ramp)
6. Miniaturization | Steady growth | Low — monopoly/duopoly structure | 3-5 years (new fab construction)

Key Insight: Power (Layer 4) has the longest runway and lowest overbuild risk of any layer. It is the only layer where the capital cycle is measured in decades, not quarters. This makes it the highest-conviction structural position for investors who want exposure to AI infrastructure without timing risk. The semiconductor layers (1-3) are higher beta but carry meaningful overbuild risk if AI demand growth disappoints even temporarily.


Reflexivity Map

The constraint relay isn't linear — it's reflexive. Resolving one bottleneck doesn't just shift demand to the next layer; it amplifies demand by unlocking previously throttled workloads. This creates feedback loops that the market systematically underestimates.

Constraint Resolution → Demand Amplification Cycle
Resolve GPU supply → Amplify HBM demand → Constrain packaging → Expose power wall → Intensify Cu/U supply → Feedback: scale → network bottleneck

Each resolution creates the next constraint. The cycle repeats as demand is unlocked at each stage, with lag times of 6-24 months between resolution and amplification.

1
Compute Resolution → Memory Amplification
As GPU supply eases and custom silicon scales, total AI accelerator shipments increase. Each accelerator requires HBM — and HBM content per chip is growing (80GB → 192GB+). The resolution of the compute bottleneck doesn't reduce memory demand; it amplifies it by enabling larger deployments that were previously GPU-constrained.
2
Memory/Packaging Resolution → Power Crisis
Each new AI accelerator assembled consumes more power than its predecessor. Resolving the HBM and packaging bottlenecks enables shipping more (and more powerful) chips, which floods data centres with hardware that demands power the grid can't provide. The semiconductor layers feed directly into the power wall.
3
Power Buildout → Copper/Uranium Scarcity
Every megawatt of new data centre power requires copper for wiring, transformers, and grid connections, plus uranium for nuclear baseload. The power buildout doesn't resolve the scarcity of these physical inputs — it intensifies it. This creates a lag-driven reflexive loop where power solutions create raw material bottlenecks 12-24 months later.
4
Scale → Network Bottleneck
As AI clusters grow from hundreds to hundreds of thousands of GPUs, the networking fabric becomes the limiting factor on training throughput. Larger clusters need faster interconnects, driving the optical transition (800G → 1.6T → 3.2T) and reinforcing demand for both Arista's software layer and Coherent/Lumentum's transceiver capacity.
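The relay logic in the four steps above can be expressed as a toy minimum-capacity model: system output is gated by the tightest layer, and expanding that layer simply hands the bottleneck to the next-tightest one. The capacity numbers below are arbitrary illustrative units, not real figures — only the ordering is meant to mirror the relay.

```python
# Toy model of the constraint relay described above. Output is gated by the
# minimum-capacity layer; "resolving" the binding constraint hands the
# bottleneck to the next-tightest layer. Capacities are arbitrary units.

capacity = {"compute": 60, "hbm": 70, "packaging": 75, "power": 85, "network": 95}

def relay_sequence(cap, rounds):
    """Repeatedly expand the binding layer and record the bottleneck order."""
    cap = dict(cap)
    order = []
    for _ in range(rounds):
        bottleneck = min(cap, key=cap.get)  # tightest layer gates the system
        order.append(bottleneck)
        cap[bottleneck] *= 3  # a capacity expansion "resolves" this constraint
    return order

print(relay_sequence(capacity, 5))
# ['compute', 'hbm', 'packaging', 'power', 'network']
```

The design choice worth noting: no amount of investment in an already-resolved layer raises system output — only the current minimum matters, which is why layer-level positioning beats basket exposure.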

Bottleneck Transition Timing — Leading Indicators

Transition | Lag Time | Leading Indicator | Threshold | Current
GPU supply ease → HBM scarcity | 3-5 months | HBM book-to-bill ratio (SK Hynix, Samsung) | > 1.5x = scarcity imminent | 1.7x
HBM scarcity → Packaging bottleneck | 6-9 months | TSMC CoWoS utilization rate | > 90% = acute bottleneck | 95%+
Packaging resolve → Power crisis | 12-18 months | Grid interconnection queue depth (PJM/ERCOT) | Queue > 2,000 GW = gridlock | 2,600 GW
Power crisis → Nuclear acceleration | 18-36 months | NRC license applications + DOE loan commitments | > 5 applications/yr = inflection | 3 in H1
AI scale → Networking bottleneck | 4-8 months | Cluster size vs 800G transceiver supply | Clusters > 100K GPUs = bottleneck | Emerging
Capex cycle → Overbuild risk | 8-12 months | Hyperscaler capex growth rate vs revenue growth | Capex/Rev > 35% = danger zone | 28-32%
Copper/uranium scarcity → Materials repricing | 12-24 months | Copper inventory at LME + COMEX warehouses | < 200K tonnes = supply crisis | ~250K

Thresholds are approximate inflection points based on historical precedent. Current readings sourced from public data (TSMC earnings calls, PJM interconnection data, LME warehouse reports, NRC filings). Updated quarterly.
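A minimal sketch of how such a threshold monitor might work, using the table's rules and the article's approximate current readings (the data structure and function names are our own, and in practice the readings would be refreshed from the public sources named above):

```python
# Sketch of a monitor for the leading-indicator thresholds in the table above.
# Rules and readings mirror the table; in practice "current" values would be
# refreshed quarterly from public sources.

INDICATORS = [
    # (name, current reading, threshold, triggers_when_above)
    ("HBM book-to-bill",            1.70, 1.50, True),
    ("CoWoS utilization %",         95.0, 90.0, True),
    ("Interconnection queue (GW)",  2600, 2000, True),
    ("Hyperscaler capex/revenue %", 30.0, 35.0, True),
    ("LME+COMEX copper (kt)",       250,  200,  False),  # triggers below threshold
]

def triggered(current, threshold, above):
    """True when the reading has crossed its inflection threshold."""
    return current > threshold if above else current < threshold

alerts = [name for name, cur, thr, above in INDICATORS if triggered(cur, thr, above)]
print(alerts)
# ['HBM book-to-bill', 'CoWoS utilization %', 'Interconnection queue (GW)']
```

On the table's current readings, the memory, packaging, and grid thresholds are already crossed, while the capex danger zone and copper supply crisis are not yet triggered — consistent with the status labels in the layer sections.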


Investor Failure Modes

Understanding the framework is necessary but not sufficient. Most investors who correctly identify the bottleneck sequence still lose money by making one of these structural errors.

1. Treating AI Infrastructure as One Trade
Buying "semis" or an AI ETF treats all layers equally. But a compute overbuild simultaneously creates a memory shortage — the ETF wins and loses at the same time. The framework demands layer-level positioning, not basket exposure.
2. Ignoring Capital Cycle Timing
Buying the right layer at the wrong cycle phase is as damaging as buying the wrong layer. HBM is the right trade in 2025-2026; it may be the wrong trade in 2028 if capacity expansion overshoots demand. Pricing power tier determines who survives the trough.
3. Confusing Revenue Exposure with Pricing Power
A company with 100% AI revenue but Commodity+ pricing power (e.g., an OSAT packaging house) is a worse investment than a company with 30% AI revenue but Monopoly pricing power (e.g., ASML). Pricing power, not revenue purity, determines returns.
4. Narrative Leading Reality
The most dangerous failure mode: buying the story before the data confirms it. Pre-revenue nuclear (OKLO, SMR), speculative networking plays, and unproven custom silicon all carry execution risk that narrative enthusiasm obscures. The framework uses convergence data — not narratives — to validate positioning.

Position Sizing by Pricing Power Tier

Core: 5-8% (Monopoly / Toll-Road tier)
Catalyst: 3-5% (High Pricing Power tier)
Tactical: 1-3% (Commodity+ with timing)
Speculative: 0.5-1% (Pre-revenue / binary)

Overbuild Warning Thresholds by Layer

Layer | Overbuild Threshold | Scarcity-Over Threshold | Status
L1: Compute (GPU) | GPU lead times < 4 weeks | NVDA data center rev growth < 30% YoY | SCARCE
L2: Memory (HBM) | HBM book-to-bill < 1.2x | HBM ASP declines > 10% QoQ | ACUTE
L3: Packaging | CoWoS util < 85% | TSMC capex guide-down | ACUTE
L4: Power | Interconnection queue < 1,000 GW | Nuclear NRC apps < 2/yr | CRITICAL
L5: Networking | 800G transceiver lead time < 8 weeks | ANET revenue growth < 20% YoY | EMERGING
L6: Materials | LME copper > 300K tonnes | Uranium spot < $60/lb | TIGHTENING

Position sizing assumes a concentrated 15-25 position portfolio. Adjust proportionally for broader portfolios. Core positions are structural holds through full cycles; Tactical positions should be trimmed when overbuild thresholds trigger.
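The sizing bands translate directly into gross-exposure arithmetic. The sketch below builds a sample 20-position book at band midpoints; the per-tier position counts are illustrative, not a recommended allocation:

```python
# Sketch: translate the sizing bands above into a sample 20-position book
# at band midpoints. Position counts per tier are illustrative only.

BANDS = {  # tier -> (min %, max %) per position
    "Core": (5.0, 8.0),
    "Catalyst": (3.0, 5.0),
    "Tactical": (1.0, 3.0),
    "Speculative": (0.5, 1.0),
}

def midpoint(tier):
    lo, hi = BANDS[tier]
    return (lo + hi) / 2

book = {"Core": 6, "Catalyst": 6, "Tactical": 5, "Speculative": 3}  # 20 names
gross = sum(count * midpoint(tier) for tier, count in book.items())
print(f"gross exposure at band midpoints: {gross:.2f}%")  # 75.25%
```

A book like this lands around three-quarters invested at midpoints, which is the point of the tiered bands: they cap how much of the portfolio binary and cyclical names can consume before the thresholds above force a trim.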


Risk Overlay & Demand Scenarios

The AI infrastructure stack has concentrated geopolitical risk at critical nodes, and the framework's investment thesis depends on sustained demand. Understanding both — the structural risks and what survives under each demand scenario — is essential for portfolio construction.

Taiwan Concentration Risk
TSMC manufactures essentially all leading-edge AI chips and controls the majority of advanced packaging (CoWoS) capacity. A disruption to TSMC would halt global AI chip production. Hedge: Intel Foundry Services (US-based), Samsung (Korea), and CHIPS Act fab buildouts reduce but don't eliminate this concentration over a 3-5 year horizon.
Export Controls Escalation
US restrictions on AI chip exports to China are already in effect and may expand. Equipment companies (ASML, KLAC, AMAT) face direct China revenue risk. Hedge: Commodity suppliers (FCX copper, CCJ uranium) and IP licensors (RMBS) are largely insulated.
CHIPS Act Execution Risk
The US CHIPS Act allocated $52B to domestic semiconductor manufacturing, but execution has been slower than expected. Meaningful diversification away from Taiwan is 5-7 years at best. Hedge: Companies with existing US manufacturing (MU, PWR, ETN) benefit from reshoring regardless of CHIPS Act pace.
Risk Factor | Most Exposed | Natural Hedges
Taiwan disruption | TSM, plus NVDA, AMD, AVGO (fabless, TSMC-dependent) | MU (US-based), equipment cos (sell to all fabs)
China export controls | ASML, KLAC, AMAT, LRCX (China revenue) | FCX, CCJ (commodities), RMBS (IP licensing)
Energy policy shifts | OKLO, SMR (regulatory dependent) | CEG, VST (existing assets), ETN/PWR (all energy)
Trade war escalation | Samsung, SK Hynix (Korea risk) | ANET (US software), SNPS/CDNS (US EDA)

Related Research

China controls roughly 90% of rare earth processing and nearly all heavy rare earth (REE) separation. Our dedicated research maps the full defense supply chain dependency.

Read: China Rare Earth Dependency Map →

Macro Context: Stage 5 of the Big Cycle

These geopolitical risks are not isolated events — they are manifestations of what Ray Dalio calls Stage 5 of the Big Cycle, the phase where great powers clash across trade, technology, capital, and geopolitical domains simultaneously. AI infrastructure sits at the centre of the technology war. The companies with structural pricing power benefit regardless of which great power wins.

Read our full Great Power Cycle framework →

Demand Sustainability Scenarios

▲ Scenario A: Sustained Acceleration — ~50%
AI buildout continues and expands
Enterprise AI reaches critical mass and inference eclipses training. Hyperscaler capex grows 50%+ annually, approaching a combined $700B in 2026. Power and copper become the dominant constraints. All layers benefit, but Power (Layer 4) and Toll-Roads see the most durable returns.
→ Scenario B: Plateau & Digest — ~35%
Growth decelerates but doesn't reverse
AI capex growth slows to 10-15% annually. Model efficiency improvements reduce compute-per-query. Semiconductor layers (1-3) face overbuild risk. Power continues on secular electrification trends. Monopoly and Toll-Road tiers maintain pricing; Commodity+ compresses significantly.
▼ Scenario C: Capex Winter — ~15%
AI spending faces a meaningful pullback
Enterprise AI ROI disappoints. Hyperscalers cut capex 20-30%. Semiconductor layers enter a severe downcycle. Only true monopolists (ASML, SNPS/CDNS) and diversified infrastructure (ETN, PWR) maintain earnings. This is where the pricing power hierarchy matters most.
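The three scenarios above lend themselves to a simple probability-weighted view. A minimal sketch follows; the framework supplies the probabilities, but the per-scenario return figures here are placeholder assumptions for illustration only:

```python
# Probability-weighted view across the three demand scenarios.
# Probabilities come from the framework; the return figures are
# placeholder assumptions, not forecasts.
scenarios = {
    "A: Sustained Acceleration": {"p": 0.50, "ret": 0.30},
    "B: Plateau & Digest":       {"p": 0.35, "ret": 0.05},
    "C: Capex Winter":           {"p": 0.15, "ret": -0.25},
}

# Sanity check: scenario probabilities must sum to 1.
assert abs(sum(s["p"] for s in scenarios.values()) - 1.0) < 1e-9

expected = sum(s["p"] * s["ret"] for s in scenarios.values())
print(f"Probability-weighted return: {expected:.1%}")
```

Note how the 15% Capex Winter tail drags the weighted figure well below the Scenario A outcome: even a modest downside probability dominates sizing decisions, which is why the survivor lists below matter.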
Reflexivity Alert

Each scenario has reflexive dynamics. In Scenario A, success breeds more investment (AI capex funds more AI, which generates more revenue, which funds more capex). In Scenario C, failure breeds more failure. The pricing power hierarchy identifies which companies survive the reflexive downside while capturing the upside.

Scenario Survivor Lists

Which companies maintain earnings growth if AI capex decelerates or reverses? Pricing power tier determines survival.

Scenario B Survivors (11 of 35)
Growth slows to 10-15%. These still grow earnings:
NVDA ASML SNPS AVGO RMBS ETN CCJ PWR CEG MRVL FCX

Common thread: Monopoly or Toll-Road pricing + diversified end markets beyond AI. ETN/PWR survive on the electrification secular trend, CCJ/CEG on the nuclear renaissance, RMBS on memory IP royalties, FCX on the structural copper deficit.

Scenario C Survivors (5 of 35)
Capex cuts 20-30%. Only these maintain margins:
ASML SNPS CDNS ETN PWR

Only true monopolists (ASML lithography, SNPS/CDNS EDA) and diversified infrastructure (ETN/PWR) maintain margins in a full capex winter. Key: these 5 companies derive >50% of revenue from non-AI sources.

3 Early Warning Indicators

1. Hyperscaler capex/revenue ratio: current 28-32%; danger zone >35%
2. AI workload utilization: current 65-70%; concern <50%
3. Enterprise AI ROI surveys: currently positive; concern <40% seeing ROI
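The three indicators reduce to simple threshold checks. A minimal sketch, using the published thresholds and the current readings quoted above:

```python
# Check the three early-warning indicators against the thresholds
# published in the framework. The function signature is a sketch,
# not the Pro dashboard's implementation.

def warning_flags(capex_rev_ratio: float,
                  utilization: float,
                  roi_positive_pct: float) -> list[str]:
    """Return the list of triggered early-warning indicators."""
    flags = []
    if capex_rev_ratio > 0.35:    # danger zone: >35%
        flags.append("hyperscaler capex/revenue ratio in danger zone")
    if utilization < 0.50:        # concern: <50%
        flags.append("AI workload utilization low")
    if roi_positive_pct < 0.40:   # concern: <40% of enterprises see ROI
        flags.append("enterprise AI ROI surveys weak")
    return flags

# Approximate current readings: ~30% capex/rev, ~67% utilization,
# ROI surveys positive (placeholder 60%).
print(warning_flags(0.30, 0.67, 0.60))  # no flags at current readings
```

One flag suggests Scenario B is unfolding; two or more firing together is the Scenario C confirmation pattern, since the reflexive downside tends to trip all three in sequence.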
Pro

Full survivor lists for both scenarios with company-level reasoning, the 3 early warning indicators with current readings, and the specific sequence of events that would confirm Scenario B or C is unfolding.

Unlock Scenario Analysis →

Position Framework & Forced Action Map

TICKER | LAYER | TIER | SCORE | SOURCES | CATEGORY
NVDA | L1 Compute | Monopoly | 95 | 9 | Core
ASML | L3 Packaging | Monopoly | 25.4 | 2 | Core
AVGO | L1 Custom Si | Toll-Road | 60 | 8 | Core
AMZN | L1 Hyperscaler | High | 95 | 10 | Catalyst
CCJ | L4 Nuclear Fuel | Toll-Road | 60 | 10 | Core
VST | L4 Power | High | 60 | 9 | Catalyst
RMBS | L2 Memory IP | Monopoly | 24.2 | 2 | Core
MU | L2 HBM | High | 55.7 | 5 | Tactical
+ 29 more tickers across all 7 layers with full position sizing
Pro

Full 35-ticker position framework with layer assignment, pricing power tier, convergence scores, and Core/Catalyst/Tactical/Speculative categorization. Plus the Forced Action Map with specific catalyst dates.

Unlock Position Framework →

Each playbook is a deep-dive into a specific company within this framework, with convergence data from our proprietary scanners.

AI Infrastructure — Live Convergence

Top converging tickers from the framework, ranked by composite score. Updated daily from 34 sources.

NVDA | 95 | 9 sources | Bullish
AMZN | 95 | 10 sources | Bullish
MSFT | 75 | 11 sources | Bullish
GOOGL | 75 | 6 sources | Bullish
VST | 60 | 9 sources | Bullish
CCJ | 60 | 10 sources | Bullish
SMCI | 60 | 7 sources | Bearish
MU | 55.7 | 5 sources | Bullish
+ 22 more tickers
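The composite ranking above can be illustrated with a toy scorer. The actual scanner weighting is proprietary, so the cap-and-scale rule below is purely an assumption; it captures only the stated principle that more independent sources means higher conviction:

```python
# Toy composite convergence score: more independent sources detecting
# activity => higher conviction. The real scanner methodology is
# proprietary; this linear cap-and-scale rule is an assumption.

def composite_score(source_hits: int, max_sources: int = 34) -> float:
    """Scale an independent-source count to a 0-100 conviction score."""
    return round(100 * min(source_hits, max_sources) / max_sources, 1)

# Rank a few framework tickers by their source counts (from the list above).
rankings = sorted(
    {"NVDA": 9, "AMZN": 10, "MU": 5}.items(),
    key=lambda kv: composite_score(kv[1]),
    reverse=True,
)
print(rankings[0][0])  # AMZN leads this toy ranking on source count
```

Under this toy rule the scores differ from the published ones (which fold in direction, recency, and source quality), but the ordering logic is the same: source breadth drives conviction.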
Pro

Full convergence dashboard for all 34 framework tickers with scores, source counts, direction, and the specific data points driving each convergence. See which tickers just crossed the threshold that historically precedes 14%+ moves.

Unlock Full Dashboard →
