Beyond the obvious names
Owning “AI” is not just a question of picking the cleverest model. It is semiconductors and memory to feed the model, optics to move data around at 800G and beyond, and the kit to keep ever-hotter racks cool and powered. It is also the power stations, transformers, cables and control systems that keep the whole thing humming. Hyperscale operators already run more than a thousand facilities globally and are still adding capacity. That spend has to land somewhere and, for investors, the more durable moats often sit with the picks-and-shovels suppliers rather than the model builders.
Where the moats live
Some parts of the stack have unusually strong competitive positions. ASML remains the only supplier of EUV lithography tools, which the industry needs for the most advanced chips. High-bandwidth memory has become a structural bottleneck as model sizes grow, and the handful of suppliers (SK Hynix, Samsung and Micron) are investing heavily to keep up. In networking, the shift to 800G and then 1.6T optics underpins a multi-year upgrade cycle inside the data hall. None of this is glamorous. All of it is necessary.
Power is the constraint
Electricity is the new oxygen. Data-centre power demand is rising quickly, with the largest AI campuses now planned in the hundreds of megawatts, and in some regions operators are exploring on-site generation because the grid cannot connect them fast enough. Over the decade, that implies sustained demand for generation, grid reinforcement, high-voltage equipment, and storage. It is also a reminder to keep portfolios global. Policy and permitting timelines are not the same in Texas, Dublin and Tokyo, and capital will flow to wherever capacity can actually be built.
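To see why grid connections, not capital, set the pace, a rough back-of-envelope helps. The sketch below is illustrative only: the accelerator count, per-chip draw and PUE (power usage effectiveness, the ratio of total facility power to IT power) are assumed round numbers, not figures from any specific operator.

```python
# Illustrative back-of-envelope for the power draw of a large AI campus.
# All inputs are assumptions chosen for round numbers, not operator data.

num_gpus = 100_000       # assumed accelerator count for a large AI campus
watts_per_gpu = 1_000    # assumed draw per accelerator incl. host share (W)
pue = 1.3                # assumed power usage effectiveness (cooling, losses)

it_power_mw = num_gpus * watts_per_gpu / 1e6   # IT load in megawatts
facility_power_mw = it_power_mw * pue          # total draw from the grid

# MW sustained * 8,760 hours per year, converted to gigawatt-hours.
annual_energy_gwh = facility_power_mw * 8_760 / 1_000

print(f"IT load:       {it_power_mw:,.0f} MW")
print(f"Facility draw: {facility_power_mw:,.0f} MW")
print(f"Annual energy: {annual_energy_gwh:,.0f} GWh/yr")
```

On those assumptions a single campus draws about 130 MW continuously, over 1,100 GWh a year, which is why each new site needs what amounts to a dedicated power plant's worth of firm supply and a grid connection to match.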