
Key numbers: Nvidia controls >90% of the AI training GPU market and entered a $20B non‑exclusive licensing deal with Groq; Grand View Research forecasts the global AI market to grow at a 30.6% CAGR from 2026–2033. The article frames AI as splitting into a cyclical, training-dominated market (Nvidia-led) and a steadier inference market where Broadcom and custom ASICs could gain share and offer more stable recurring revenue. Recommendation: monitor inference-oriented chipmakers (e.g., Broadcom) as potential higher-stability plays versus training-focused Nvidia.
The next phase of AI is less about raw flops and more about cost-per-query, latency, and distribution. Hyperscalers that internalize inference (chip design, software stack, regional edge infrastructure) will shift durable margin pools away from general-purpose silicon vendors and into a narrower set of specialized suppliers and cloud operators over a 3–5 year horizon.

That redistribution creates non-obvious winners: high-margin analog/IP-rich ASIC vendors, HBM and advanced-packaging suppliers that reduce TCO, and software/service layers that monetize predictable per-query billing. Conversely, vendors whose value is concentrated in monolithic, high-power cards face margin compression from disaggregated accelerators and tighter hyperscaler procurement cycles.

Catalysts and timing are staggered: measurable share gains for ASIC-oriented vendors can show up in supplier revenue within 2–6 quarters as hyperscalers move from prototypes to fleet rollouts, while the broader TAM reshuffle plays out over multiple years. Tail risks include a prolonged training-capex rebound, a sudden standardization around a dominant open inference stack, or macro-driven capex pullbacks that reset vendor economics within 6–12 months.