The Constraint Relay Race: Mapping Pricing Power, Capital Cycles & Reflexivity Across 7 Layers of AI Infrastructure
Core Thesis: AI infrastructure is not one trade — it is a relay race of sequential bottlenecks. As each constraint is resolved, pricing power migrates to the next layer. The investors who understand the sequence, and the lag times between resolution and migration, capture asymmetric returns at each transition.
The market treats AI infrastructure as a monolithic trade: "buy semis." This framework disaggregates the stack into 7 distinct layers, each with its own supply/demand dynamics, capital cycle timing, and pricing power structure. The key insight is that resolving one bottleneck creates the next — and the companies with structural pricing power at each layer compound returns regardless of which specific constraint dominates at any given time.
This framework covers 45+ companies across the full AI infrastructure stack, identifies where pricing power concentrates (and where it doesn't), maps the capital cycle timing for each layer, and reveals the reflexive feedback loops that create both opportunity and risk.
Companies with convergence activity from our proprietary scanners — congressional trades, lobbying, institutional filings, options flow — are flagged in each layer table.
In each layer table, flagged tickers show their source count. More independent sources detecting activity = higher conviction that the positioning is real, not noise.
Note: Absence of scanner data does not mean a company lacks merit — structural monopolists (ASML, RMBS, BESI) derive value from market position, not from detectable trading/lobbying activity.
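The source-count-to-conviction mapping above can be sketched as a simple classifier. The tier boundaries here are illustrative assumptions, not the proprietary scanner's actual cutoffs:

```python
# Minimal sketch of the convergence logic: more independent data sources
# detecting activity -> higher conviction. Tier boundaries are assumptions.

def conviction_tier(source_count: int) -> str:
    """Map a raw count of independent detecting sources to a conviction tier."""
    if source_count >= 6:
        return "high"
    if source_count >= 3:
        return "medium"
    if source_count >= 1:
        return "low"
    return "no signal"

print(conviction_tier(7))  # e.g. CCJ, flagged by 7 sources -> high
print(conviction_tier(2))  # e.g. TSM, flagged by 2 sources -> low
```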
Each constraint, once resolved, exposes the next bottleneck downstream. Click any node to jump to that layer.
The most visible layer of AI infrastructure — and increasingly, the least interesting from a pricing-power perspective. NVIDIA's dominance is real but the market prices it that way. The alpha is in understanding what comes after compute eases: the downstream constraints it exposes.
Compute is transitioning from acute scarcity (2023-2024) to managed supply (2025-2026) as custom silicon (Trainium, TPU, Maia) begins absorbing inference workloads. The training market remains NVIDIA-dominated, but the inference market — which will ultimately be larger — is fragmenting.
| Ticker | Company | Role | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| NVDA (6 sources) | NVIDIA | GPU monopoly (training); inference fragmenting to custom silicon | Monopoly | Software lock-in (CUDA), architectural lead | Custom silicon erosion of inference TAM; training monopoly intact but TAM splits |
| AMD (5 sources) | AMD | GPU alternative, MI300X/MI400 series | Commodity+ | Price/performance at enterprise scale | Perpetual #2 without software ecosystem |
| AVGO (4 sources) | Broadcom | Custom ASIC design (Google TPU, Meta MTIA) | Toll-Road | Design expertise, hyperscaler relationships | Customer concentration risk |
| MRVL | Marvell | Custom silicon, networking, electro-optics | Toll-Road | Custom ASIC + DCI networking combined | Execution on 5nm custom ramps |
The market debates "NVDA vs AMD" while missing the structural shift: hyperscalers are becoming chip companies. Amazon (Trainium), Google (TPU), Microsoft (Maia), and Meta (MTIA) are all building custom silicon — not to replace NVIDIA entirely, but to control inference economics. The real trade isn't picking the GPU winner; it's identifying who designs the custom chips (AVGO, MRVL) and who supplies the packaging and memory they all need.
Amazon (AMZN) Playbook — The Trainium/custom silicon angle: how Amazon is vertically integrating from chips to energy to distribution. Custom silicon is margin capture, not innovation theatre.
High Bandwidth Memory (HBM) is the current acute bottleneck. Every AI accelerator — NVIDIA, AMD, custom — requires HBM, and supply is structurally constrained by the conversion of existing DRAM capacity to HBM production. The key insight: HBM supply is gated not just by memory fabs, but by advanced packaging capacity (Layer 3), creating a compounding constraint.
Within this layer, the overlooked toll-road is Rambus (RMBS): all three HBM manufacturers hold broad Rambus patent licenses for memory interface IP. This creates a royalty stream that scales with the bottleneck itself — the more HBM ships, the more valuable the Rambus toll position.
| Ticker | Company | Role | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| SK Hynix | SK Hynix | HBM market leader (~50% share), NVIDIA preferred | Structural Scarcity | Process lead in HBM3E, NVIDIA qualification | Capex cycle, DRAM price cyclicality |
| MU | Micron | HBM3E challenger, sole US-based HBM manufacturer | Structural Scarcity | US domestic supply (CHIPS Act), diversified memory | Late entrant in HBM, yield ramp risk |
| Samsung | Samsung | HBM3E/HBM4 production, vertical integration | Structural Scarcity | Scale, vertical integration (DRAM + packaging) | Yield issues, customer qualification delays |
| RMBS | Rambus | Memory interface IP — broad licensing across all HBM manufacturers | Toll-Road | Essential patents, 95%+ gross margin IP licensing | Patent cliff timing, licensing renegotiation |
Rambus doesn't make memory — it licenses the interface IP that makes memory work. All three HBM manufacturers (SK Hynix, Micron, Samsung) hold broad Rambus patent licenses covering memory interface technology. As HBM content per GPU increases (from 80GB on H100 to 192GB+ on B200), the royalty base expands automatically. This is a forced-buyer dynamic: there is no alternative memory interface standard. The more critical HBM becomes, the more valuable the Rambus position.
The market prices HBM as a cyclical memory trade (buy SK Hynix, maybe MU). It misses: (1) HBM supply is actually gated by packaging capacity, not fab capacity — you can't make HBM without CoWoS/hybrid bonding, which is even more constrained. (2) The memory interface toll-road (RMBS) scales with every HBM dollar regardless of who wins the manufacturing race. (3) HBM content per accelerator is growing faster than accelerator unit shipments, compounding the bottleneck.
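The royalty-scaling dynamic described above can be illustrated with back-of-envelope arithmetic. The royalty rate and HBM price per GB below are hypothetical placeholders — Rambus's actual licensing terms and HBM contract pricing are not public:

```python
# Back-of-envelope sketch: a per-unit memory-interface IP royalty scales
# automatically with HBM content per accelerator. Both constants below
# are illustrative assumptions, not disclosed figures.

HYPOTHETICAL_ROYALTY_RATE = 0.005  # 0.5% of HBM revenue (assumption)
HBM_PRICE_PER_GB = 12.0            # USD per GB (assumption)

def royalty_per_accelerator(hbm_gb: int) -> float:
    """Implied IP royalty per accelerator, given its HBM capacity in GB."""
    return hbm_gb * HBM_PRICE_PER_GB * HYPOTHETICAL_ROYALTY_RATE

h100 = royalty_per_accelerator(80)   # 80 GB HBM on H100
b200 = royalty_per_accelerator(192)  # 192 GB HBM on B200
print(f"H100: ${h100:.2f}, B200: ${b200:.2f}, growth: {b200 / h100:.1f}x")
# -> H100: $4.80, B200: $11.52, growth: 2.4x
```

Whatever the true rate, the point survives: the royalty base grows with HBM content per chip, independent of which manufacturer wins share.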
Rambus (RMBS) Playbook — Full structural thesis on the memory interface toll-road: forced buyer dynamics, patent portfolio analysis, and why 95% gross margins are sustainable.
This is the most underappreciated bottleneck in the entire AI stack. Advanced packaging (CoWoS, hybrid bonding, 2.5D/3D integration) is required to combine GPU dies with HBM stacks into functional AI accelerators. TSMC's CoWoS capacity is the binding constraint — even if you have GPU dies and HBM chips ready, you can't assemble them without packaging slots.
The capital intensity and lead times for packaging expansion are extreme: 18-24 months to add meaningful CoWoS capacity. This creates a durable bottleneck that will persist through at least 2027.
| Ticker | Company | Role | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| TSM (2 sources) | TSMC | CoWoS dominant share, advanced packaging leader | Near-Monopoly | Process technology lead, customer lock-in, capacity is a monopolist's choice | Geopolitical (Taiwan), capex intensity |
| BESI | BE Semiconductor | Hybrid bonding equipment — the picks-and-shovels play | Toll-Road | 70%+ market share in hybrid bonding tools | Customer concentration (TSMC), order lumpiness |
| AMKR | Amkor Technology | OSAT (outsourced packaging), 2.5D/fan-out | Commodity+ | Scale, geographic diversification (Arizona fab) | TSMC in-sourcing packaging, margin pressure |
| ASE | ASE Technology | Largest OSAT globally, advanced packaging | Commodity+ | Scale, breadth of packaging technologies | Commoditization, TSMC vertical integration |
BE Semiconductor makes the hybrid bonding equipment that enables next-generation chip stacking. With 70%+ market share in this critical tool segment, every fab expanding advanced packaging capacity must buy BESI equipment. This is an ASML-like position at an earlier stage: monopolistic market share in essential equipment for a structural growth market. The hybrid bonding TAM expands every time a new AI chip design requires tighter integration between logic and memory dies.
Packaging is treated as a commoditized backend process, but for AI accelerators it is the binding constraint on total system output. You can design the best GPU and manufacture the fastest HBM, but if you can't package them together with CoWoS, you ship nothing. TSMC's packaging revenue is growing faster than its wafer revenue — a structural shift the market hasn't fully priced. BESI's hybrid bonding monopoly is the equipment analog to ASML's EUV monopoly, but at an earlier stage of market recognition.
The longest-duration bottleneck in the stack. While chip and memory constraints operate on 12-24 month cycles, power infrastructure operates on 5-10 year build cycles. This isn't about manufacturing capacity — it's about physics, permitting, and grid infrastructure. Every new AI data centre needs power, cooling, and grid interconnection, and the queue for all three is measured in years, not quarters.
This layer has the lowest overbuild risk of any AI infrastructure play. Even if AI demand disappoints, the electrification megatrend (EVs, reshoring, industrial automation) provides a secular demand floor. Power infrastructure is the rare investment where both AI bull and AI bear scenarios still require the buildout.
Note: Layer 4 is deliberately broad — it captures the full physical infrastructure stack that sits between assembled silicon and operational AI. We group power generation, grid/electrical, nuclear fuel, cooling, materials, and bridge power here because they share the same constraint dynamics: multi-year build cycles, physics-limited supply, and secular demand floors independent of AI. Sub-groupings are flagged in the table below.
| Ticker | Company | Sub-Group | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| VRT (2 sources) | Vertiv | Power & Cooling — DC infrastructure, thermal management | Structural Scarcity | Mission-critical installed base, service revenue | Valuation, execution on order backlog |
| ETN (2 sources) | Eaton | Grid/Electrical — power distribution, transformers | Structural Scarcity | Broad electrical portfolio, regulatory compliance | Diversified conglomerate, AI exposure diluted |
| PWR (2 sources) | Quanta Services | Grid/Electrical — grid construction, transmission buildout | Structural Scarcity | Largest electrical contractor, skilled labor monopoly | Labor availability, project execution |
| FCX (4 sources) | Freeport-McMoRan | Materials — copper supply, physical bottleneck for all electrical | Structural Scarcity | Grasberg mine (world's largest), reserves | Commodity price volatility, Indonesian politics |
| CEG (3 sources) | Constellation Energy | Generation — largest US nuclear fleet, 24/7 baseload | Near-Monopoly | Existing nuclear fleet (no new build required) | Regulatory risk, PPA renegotiation |
| VST (4 sources) | Vistra | Generation — nuclear + gas fleet, Texas grid | Structural Scarcity | Comanche Peak nuclear, Texas deregulated market | Regulatory, grid reliability events |
| CCJ (7 sources) | Cameco | Nuclear Fuel — uranium supply for nuclear renaissance | Structural Scarcity | Tier-1 mines (McArthur River, Cigar Lake) | Uranium price volatility, supply restart risk |
| SOLS | Solstice (ConverDyn) | Nuclear Fuel — only US UF6 conversion facility, fuel chokepoint | Structural Scarcity | NRC license through 2060, Metropolis Works monopoly | Single facility risk, segment opacity, spinoff execution |
| CRS | Carpenter Technology | Materials — specialty superalloys and nickel powders for jet engines, hypersonics, and nuclear submarines | Structural Scarcity | 500+ patents, 90% jet engine cert coverage, Berry Amendment + ITAR protection | Valuation near ATH, brownfield execution risk, customer concentration in aerospace OEMs |
| OKLO | Oklo | Generation — advanced fission microreactors for data centres | Commodity+ | Sam Altman backing, NRC engagement | Pre-revenue, regulatory approval timeline |
| SMR | NuScale Power | Generation — small modular reactor technology | Commodity+ | NRC design certification (only SMR approved) | Project delays, cost overruns, pre-revenue |
| BE (5 sources) | Bloom Energy | Bridge Power — solid oxide fuel cells, behind-the-meter for data centres | Commodity+ | Speed-to-power (months vs years for grid), efficiency over turbines | Gas price sensitivity, manufacturing scale-up, long-term pricing TBD as alternatives scale |
| MWH | SOLV Energy | Solar/Storage EPC — builds utility-scale solar farms and battery storage, grid interconnection | Commodity+ | Pure-play scale (#2 US solar EPC), 18 GW O&M fleet, grid interconnection capability | EPC margin compression, IRA sunset post-2027, PE overhang (American Securities 75%) |
| MP | MP Materials | Materials — sole US rare earth producer, critical minerals for magnetics/motors | Structural Scarcity | Only integrated US rare earth mine-to-magnet supply chain | China trade policy, demand timing for magnetics |
Power infrastructure has 5-10 year build cycles, making it the lowest overbuild risk layer. Grid interconnection queues averaged over 4 years in 2025 (peaking at 5 years for projects entering the queue in 2023, per LBNL). Nuclear plants take 7-10 years to build (or in Constellation's case, already exist). Copper mines take 10-15 years from discovery to production. Unlike chips (where a fab can be built in 2 years), power infrastructure cannot be rapidly scaled — this makes the pricing power durable and the capital cycle slow enough that timing risk is low.
Cooling is an under-discussed sub-constraint: each MW of AI compute generates heat that must be removed. Liquid cooling (direct-to-chip and immersion) is replacing air cooling for high-density AI racks, creating parallel demand for thermal management infrastructure (Vertiv, Schneider Electric) alongside electrical infrastructure.
The nuclear renaissance is not speculative — it's a forced outcome. Data centres need 24/7 baseload power, and renewables alone can't provide it (intermittency, land use). Natural gas faces emissions constraints and price volatility. Nuclear is the only scalable, zero-carbon, 24/7 power source. The market is slowly pricing this in (CEG +200% from 2023 lows), but the second-order trade — uranium supply (CCJ), uranium conversion (SOLS), copper for grid connections (FCX), and fuel cells as bridge power (BE) — remains under-owned. The deepest second-order play is SOLS — the only US UF6 conversion facility, hiding inside a Honeywell spinoff the market prices as a refrigerant company. Also: FCX is a dual-catalyst play — AI copper demand AND Chinese trade policy create independent pricing drivers.
Bloom Energy (BE) Playbook — Bridge power for data centres: fuel cells deploy in months (vs years for grid), solving the acute gap between AI demand and power supply. 5 independent data sources converging. Note: BE is Commodity+ on pricing power (long-term competition from gas turbines and grid buildout), but high conviction (8.5) due to timing — the supply-demand gap is widest now, and BE is one of few companies that can deliver power in months.
Solstice (SOLS) Playbook — The hidden nuclear monopoly: only US uranium conversion facility, NRC license through 2060, legacy contracts repricing at identical cost. The toll-road the nuclear renaissance must pass through. Conviction: 8.1/10.
MP Materials (MP) Playbook — Sole US rare earth producer with DoD equity stake and $110/kg floor. 7 converging data points across congressional, policy, and institutional sources.
Carpenter Technology (CRS) Playbook — The specialty alloy tollbooth: 500+ patents, 90% jet engine cert coverage, Berry Amendment protection. SASC senator buying while Loeb/Cohen/Griffin load positions. EU €800B rearmament as structural demand floor. Conviction: 8.0/10.
As compute clusters scale from hundreds to hundreds of thousands of GPUs, the network becomes the bottleneck. Training large models requires moving massive amounts of data between GPUs, between racks, and between data centres. The transition from electrical to optical interconnects (co-packaged optics, or CPO) is the next structural shift in this layer.
Arista Networks occupies a unique position: it is becoming the networking equivalent of CUDA — the software layer that ties AI clusters together. Just as NVIDIA's moat is software (CUDA), not hardware (GPUs), Arista's moat is EOS (Extensible Operating System), not switches.
| Ticker | Company | Role | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| ANET | Arista Networks | AI cluster networking, spine/leaf switches, EOS software | Near-Monopoly | EOS software ecosystem, hyperscaler lock-in | Broadcom custom NIC competition |
| COHR | Coherent Corp | Optical transceivers, 800G/1.6T modules | Structural Scarcity | Vertical integration (InP lasers to modules) | ASP erosion, technology transitions |
| LITE | Lumentum | Optical components, laser sources for transceivers | Commodity+ | Laser technology, telecom + datacom diversification | Revenue concentration, ASP pressure |
| CIEN | Ciena | DCI (data centre interconnect), coherent optical | Commodity+ | WaveLogic coherent DSP technology | Lumpy ordering, carrier capex cycles |
| AVGO (4 sources) | Broadcom | Networking ASICs (Tomahawk, Trident, Jericho), custom NICs | Toll-Road | ASIC design, VMware software, diversified | Custom NIC threat to Arista |
Arista is not a hardware company — it's a software company that ships switches. EOS (Extensible Operating System) runs on every Arista device and creates the same kind of ecosystem lock-in that CUDA creates for NVIDIA. Hyperscalers don't switch networking vendors because the operational cost of retraining network engineers and rewriting automation scripts exceeds the hardware savings. The CPO (co-packaged optics) transition will disrupt transceiver companies (COHR, LITE) but reinforce Arista's position as the software/orchestration layer.
The foundation layer that enables everything above. EUV lithography, process node shrinks, and electronic design automation (EDA) tools are the bedrock of semiconductor progress. This layer is characterized by extreme monopoly/duopoly dynamics: ASML is the sole EUV source, and Synopsys/Cadence are the EDA duopoly.
The EDA duopoly is particularly compelling: ASML-tier pricing power with software margins. Every advanced chip design must use either Synopsys or Cadence tools — there is no third option for leading-edge work. And unlike hardware, EDA tools generate recurring subscription revenue with near-zero marginal cost.
| Ticker | Company | Role | Pricing Power | Moat | Key Risk |
|---|---|---|---|---|---|
| ASML | ASML Holding | Sole EUV lithography equipment supplier | Monopoly | Only company capable of making EUV systems | Geopolitical export controls, order lumpiness |
| KLAC | KLA Corporation | Wafer inspection and metrology | Near-Monopoly | 60%+ share in process control, essential for yield | Capex cyclicality, China exposure |
| LRCX (2 sources) | Lam Research | Etch and deposition equipment | Near-Monopoly | Market leadership in critical etch steps | Capex cyclicality, China restrictions |
| AMAT (2 sources) | Applied Materials | Broadest semiconductor equipment portfolio | Near-Monopoly | Scale, breadth across deposition/etch/CMP | Diversification dilutes AI purity |
| SNPS (2 sources) | Synopsys | EDA tools — chip design software monopoly | Monopoly | No alternative for leading-edge design, IP portfolio | Regulatory (Ansys acquisition), valuation |
| CDNS | Cadence Design | EDA tools — chip design/verification software | Monopoly | Duopoly with Synopsys, essential for advanced nodes | Valuation, custom silicon design complexity |
Synopsys and Cadence together control virtually 100% of leading-edge chip design software. This is more concentrated than any other layer in the AI stack. Unlike hardware monopolies that face capex-driven margin pressure, EDA tools have 80%+ gross margins and generate recurring subscription revenue. Every new chip design — whether GPU, ASIC, HBM controller, or networking ASIC — requires EDA tools. The AI boom doesn't just create demand for chips; it creates demand for chip designs, which directly flows to the EDA duopoly.
The market treats equipment stocks (ASML, KLAC, LRCX, AMAT) as cyclical semi-cap plays. But the AI-driven structural increase in leading-edge wafer starts changes the math: utilization stays higher for longer, dampening the cyclical trough. More importantly, the complexity increase at each node (3nm → 2nm → 18A) drives disproportionate growth in inspection (KLAC) and EDA (SNPS, CDNS) relative to wafer volumes. Complexity, not volume, is the growth driver.
Not all AI infrastructure companies are equal. The framework's investment thesis rests on a hierarchy of pricing power: companies with structural moats that allow them to raise prices (or maintain margins) regardless of the competitive environment. The matrix below maps every company in the framework to its pricing power tier.
The top 3 tiers (Monopoly, Near-Monopoly, Toll-Road) represent 13 companies that maintain pricing regardless of cycle phase. These are your core positions. Structural Scarcity names are timing trades. Commodity+ are cycle-peak plays only.
Investment Rule: Always prefer Monopoly and Toll-Road tier over Commodity. Monopolists set prices. Toll-road operators collect regardless of who wins the volume race. Commodity+ players are cyclical — they outperform at cycle peaks but compress at troughs. Build core positions in the top 3 tiers; use Structural Scarcity for timing trades when supply/demand is most dislocated.
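The investment rule above can be expressed as a simple sizing function. The target weights here are illustrative assumptions consistent with a concentrated 15-25 position portfolio, not the framework's published sizing:

```python
# Sketch of the tier-to-sizing rule: core weight to the top three tiers,
# timing-trade weight to Structural Scarcity, and Commodity+ sized only
# at cycle peaks. All weights are assumptions for illustration.

TIER_WEIGHT = {
    "Monopoly":            0.08,  # core, structural hold through full cycles
    "Near-Monopoly":       0.07,  # core
    "Toll-Road":           0.06,  # core — collects regardless of volume winner
    "Structural Scarcity": 0.04,  # timing trade when supply/demand is dislocated
    "Commodity+":          0.02,  # cycle-peak play only
}

def position_size(tier: str, cycle_peak: bool = False) -> float:
    """Return a target portfolio weight for a given pricing-power tier."""
    if tier == "Commodity+" and not cycle_peak:
        return 0.0  # trimmed to zero outside cycle peaks
    return TIER_WEIGHT[tier]

print(position_size("Monopoly"))                     # 0.08
print(position_size("Commodity+"))                   # 0.0 — not at cycle peak
print(position_size("Commodity+", cycle_peak=True))  # 0.02
```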
Each layer operates on different capital cycle timescales. Understanding where each layer sits in its cycle — and the overbuild risk at each stage — is critical for timing entries and avoiding value traps.
Longer lead time = lower overbuild risk = more durable pricing power. Power infrastructure is the only layer where even a demand slowdown doesn't create excess capacity.
| Layer | Cycle Phase | Overbuild Risk | Capacity Expansion Lead Time |
|---|---|---|---|
| 1. Compute | Late expansion | Moderate — custom silicon fragmenting TAM | 12-18 months (new chip design to production) |
| 2. Memory / HBM | Mid expansion | High — memory cycle historically brutal | 18-24 months (DRAM-to-HBM conversion) |
| 3. Packaging | Acute shortage | Low — capital intensity limits entrants | 18-24 months (CoWoS line expansion) |
| 4. Power | Early build | Very Low — secular demand floor | 5-10 years (generation + transmission) |
| 5. Interconnect | Early expansion | Moderate — technology transitions (CPO) | 6-12 months (transceiver production ramp) |
| 6. Miniaturization | Steady growth | Low — monopoly/duopoly structure | 3-5 years (new fab construction) |
Key Insight: Power (Layer 4) has the longest runway and lowest overbuild risk of any layer. It is the only layer where the capital cycle is measured in decades, not quarters. This makes it the highest-conviction structural position for investors who want exposure to AI infrastructure without timing risk. The semiconductor layers (1-3) are higher beta but carry meaningful overbuild risk if AI demand growth disappoints even temporarily.
The constraint relay isn't linear — it's reflexive. Resolving one bottleneck doesn't just shift demand to the next layer; it amplifies demand by unlocking previously throttled workloads. This creates feedback loops that the market systematically underestimates.
Each resolution creates the next constraint. The cycle repeats as demand is unlocked at each stage, with lag times of 6-24 months between resolution and amplification.
| TRANSITION | LAG TIME | LEADING INDICATOR | THRESHOLD | CURRENT |
|---|---|---|---|---|
| GPU supply ease → HBM scarcity | 3-5 months | HBM book-to-bill ratio (SK Hynix, Samsung) | > 1.5x = scarcity imminent | 1.7x |
| HBM scarcity → Packaging bottleneck | 6-9 months | TSMC CoWoS utilization rate | > 90% = acute bottleneck | 95%+ |
| Packaging resolve → Power crisis | 12-18 months | Grid interconnection queue depth (PJM/ERCOT) | Queue > 2,000 GW = gridlock | 2,600 GW |
| Power crisis → Nuclear acceleration | 18-36 months | NRC license applications + DOE loan commitments | > 5 applications/yr = inflection | 3 in H1 |
| AI scale → Networking bottleneck | 4-8 months | Cluster size vs 800G transceiver supply | Clusters > 100K GPUs = bottleneck | Emerging |
| Capex cycle → Overbuild risk | 8-12 months | Hyperscaler capex growth rate vs revenue growth | Capex/Rev > 35% = danger zone | 28-32% |
| Copper/uranium scarcity → Materials repricing | 12-24 months | Copper inventory at LME + COMEX warehouses | < 200K tonnes = supply crisis | ~250K |
Thresholds are approximate inflection points based on historical precedent. Current readings sourced from public data (TSMC earnings calls, PJM interconnection data, LME warehouse reports, NRC filings). Updated quarterly.
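The transition table above reduces to a threshold-crossing check. A minimal sketch, assuming the comparison directions shown in the table and a simple two-state output (the real process is a judgment call, not a binary trigger):

```python
# Minimal monitor for the transition thresholds above. Readings and
# thresholds mirror the table; the two-state output is an assumption.

def check_transition(reading: float, threshold: float, breach_when_above: bool) -> str:
    """Return 'TRIGGERED' when the leading indicator crosses its threshold."""
    breached = reading > threshold if breach_when_above else reading < threshold
    return "TRIGGERED" if breached else "WATCHING"

# indicator -> (current reading, threshold, True if breach means reading > threshold)
indicators = {
    "HBM book-to-bill":         (1.7,  1.5,  True),   # > 1.5x = scarcity imminent
    "CoWoS utilization (%)":    (95.0, 90.0, True),   # > 90% = acute bottleneck
    "Interconnect queue (GW)":  (2600, 2000, True),   # > 2,000 GW = gridlock
    "Hyperscaler capex/rev (%)": (30.0, 35.0, True),  # > 35% = overbuild danger zone
    "LME+COMEX copper (kt)":    (250,  200,  False),  # < 200K tonnes = supply crisis
}

for name, (reading, threshold, above) in indicators.items():
    print(f"{name}: {check_transition(reading, threshold, above)}")
```

On the table's current readings, the first three indicators trigger while the capex and copper signals remain in watching range.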
Full transition timing table with 7 bottleneck sequences, real-time threshold readings, and the specific leading indicators for each layer. Know when "emerging constraint" flips to "acute bottleneck."
Understanding the framework is necessary but not sufficient. Most investors who correctly identify the bottleneck sequence still lose money by making one of these structural errors.
| LAYER | OVERBUILD THRESHOLD | SCARCITY-OVER THRESHOLD | STATUS |
|---|---|---|---|
| L1: Compute (GPU) | GPU lead times < 4 weeks | NVDA data center rev growth < 30% YoY | SCARCE |
| L2: Memory (HBM) | HBM book-to-bill < 1.2x | HBM ASP declines > 10% QoQ | ACUTE |
| L3: Packaging | CoWoS util < 85% | TSMC capex guide-down | ACUTE |
| L4: Power | Interconnection queue < 1,000 GW | Nuclear NRC apps < 2/yr | CRITICAL |
| L5: Networking | 800G transceiver lead time < 8 weeks | ANET revenue growth < 20% YoY | EMERGING |
| L6: Materials | LME copper > 300K tonnes | Uranium spot < $60/lb | TIGHTENING |
Position sizing assumes a concentrated 15-25 position portfolio. Adjust proportionally for broader portfolios. Core positions are structural holds through full cycles; Tactical positions should be trimmed when overbuild thresholds trigger.
Full overbuild/scarcity threshold matrix for all 7 layers with real-time status readings, plus Core/Catalyst/Tactical/Speculative sizing for each pricing power tier.
The AI infrastructure stack has concentrated geopolitical risk at critical nodes, and the framework's investment thesis depends on sustained demand. Understanding both — the structural risks and what survives under each demand scenario — is essential for portfolio construction.
| Risk Factor | Most Exposed | Natural Hedges |
|---|---|---|
| Taiwan disruption | TSM, NVDA, AMD, AVGO (all fabless) | MU (US-based), equipment cos (sell to all fabs) |
| China export controls | ASML, KLAC, AMAT, LRCX (China revenue) | FCX, CCJ (commodities), RMBS (IP licensing) |
| Energy policy shifts | OKLO, SMR (regulatory dependent) | CEG, VST (existing assets), ETN/PWR (all energy) |
| Trade war escalation | Samsung, SK Hynix (Korea risk) | ANET (US software), SNPS/CDNS (US EDA) |
Related Research
China controls 90% of rare earth processing and 99% of heavy REE separation. Our dedicated research maps the full defense supply chain dependency.
Read: China Rare Earth Dependency Map →

Macro Context: Stage 5 of the Big Cycle
These geopolitical risks are not isolated events — they are manifestations of what Ray Dalio calls Stage 5 of the Big Cycle, the phase where great powers clash across trade, technology, capital, and geopolitical domains simultaneously. AI infrastructure sits at the centre of the technology war. The companies with structural pricing power benefit regardless of which great power wins.
Each scenario has reflexive dynamics. In Scenario A, success breeds more investment (AI capex funds more AI, which generates more revenue, which funds more capex). In Scenario C, failure breeds more failure. The pricing power hierarchy identifies which companies survive the reflexive downside while capturing the upside.
Which companies maintain earnings growth if AI capex decelerates or reverses? Pricing power tier determines survival.
Common thread: Monopoly or Toll-Road pricing + diversified end markets beyond AI. ETN/PWR survive on electrification secular trend. CCJ/CEG on nuclear renaissance. RMBS on memory IP royalties. FCX on copper structural deficit.
Only true monopolists (ASML litho, SNPS/CDNS EDA) and diversified infrastructure (ETN/PWR) maintain margins in a full capex winter. Key: these 5 companies have >50% of revenue from non-AI sources.
Full survivor lists for both scenarios with company-level reasoning, the 3 early warning indicators with current readings, and the specific sequence of events that would confirm Scenario B or C is unfolding.
| TICKER | LAYER | TIER | SCORE | SOURCES | CATEGORY |
|---|---|---|---|---|---|
| NVDA | L1 Compute | Monopoly | 95 | 9 | Core |
| ASML | L6 Miniaturization | Monopoly | 25.4 | 2 | Core |
| AVGO | L1 Custom Si | Toll-Road | 60 | 8 | Core |
| AMZN | L1 Hyperscaler | High | 95 | 10 | Catalyst |
| CCJ | L4 Nuclear Fuel | Toll-Road | 60 | 10 | Core |
| VST | L4 Power | High | 60 | 9 | Catalyst |
| RMBS | L2 Memory IP | Monopoly | 24.2 | 2 | Core |
| MU | L2 HBM | High | 55.7 | 5 | Tactical |
| + 29 more tickers across all 7 layers with full position sizing | | | | | |
Full 35-ticker position framework with layer assignment, pricing power tier, convergence scores, and Core/Catalyst/Tactical/Speculative categorization. Plus the Forced Action Map with specific catalyst dates.
Each playbook is a deep-dive into a specific company within this framework, with convergence data from our proprietary scanners.
Top converging tickers from the framework, ranked by composite score. Updated daily from 34 sources.
Full convergence dashboard for All 34 framework tickers with scores, source counts, direction, and the specific data points driving each convergence. See which tickers just crossed the threshold that historically precedes 14%+ moves.
Semiconductor & Memory
Power & Energy
Networking & Optics
Equipment & EDA
ForcedAlpha Proprietary Data