Market Impact: 0.25

3 Genius Artificial Intelligence (AI) Stocks You'll Regret Not Buying Now

MSFT · NVDA · AVGO · INTC · NFLX · NDAQ
Artificial Intelligence · Technology & Innovation · Company Fundamentals · Corporate Guidance & Outlook · Analyst Estimates · Product Launches · Corporate Earnings

Nvidia is expected to deliver roughly 70% revenue growth this fiscal year yet trades at about 22x forward earnings, near the S&P 500's ~21x, suggesting the market is pricing in little of that continued rapid growth. Broadcom projects its custom AI chip business will generate more than $100B by the end of 2027 and reported $8.4B in AI semiconductor revenue last quarter, up 106% year over year. Microsoft's valuation is described as near decade-low P/E levels, which the author frames as a buying opportunity. Recommendation: the author rates MSFT, NVDA, and AVGO as buys on persistent AI data-center demand and Broadcom's aggressive guidance, though the piece is opinion-driven and likely to have modest direct market impact.
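The growth-versus-multiple claim can be made concrete with a simple PEG-style ratio (forward P/E divided by expected growth). This is an illustrative sketch, not the author's model: it uses the article's ~70% revenue growth as a proxy for earnings growth, and the ~10% figure assumed for the S&P 500 is a hypothetical placeholder, not from the piece.

```python
# Hedged illustration: a PEG-style ratio puts a price-to-earnings
# multiple in the context of expected growth. Lower = cheaper per
# unit of growth. Figures below mix article numbers and assumptions.

def peg_ratio(forward_pe: float, growth_pct: float) -> float:
    """PEG = forward P/E divided by expected growth rate (in %)."""
    return forward_pe / growth_pct

# Article figures: NVDA ~22x forward P/E, ~70% expected growth.
nvda_peg = peg_ratio(22.0, 70.0)
# S&P 500 ~21x per the article; ~10% growth is an assumed placeholder.
sp500_peg = peg_ratio(21.0, 10.0)

print(f"NVDA PEG  ~ {nvda_peg:.2f}")   # ~0.31
print(f"S&P PEG   ~ {sp500_peg:.2f}")  # ~2.10
```

On these (partly assumed) inputs, Nvidia screens far cheaper per unit of growth than the index, which is the arithmetic behind the author's framing.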

Analysis

Winners are not just the headline chip designers: hyperscalers and cloud brokers that secure bespoke silicon deals will see structural OPEX/CapEx improvements that change procurement math. Expect procurement teams to push a 2-4 quarter RFP cycle toward custom ASICs, driven by unit cost per inference and datacenter TCO, which will increase negotiating leverage for large cloud buyers and compress resale opportunities for traditional OEM blade vendors.

Supply-side effects will matter more than single-quarter guidance: TSMC/ASML capacity allocation and back-end testing bottlenecks will create windows where design winners convert orders into margin disproportionately. That creates timing dispersion: calendar-year 2026 could be the trough for non-winning suppliers while 2027-2028 concentrates the wins, and revenue shift rates of 20-30% into winners within 12-24 months are realistic based on past ASIC adoption curves.

Key risks are cadence and architectural obsolescence. A single generational leap in model sparsity or a lightweight inference architecture could materially reduce GPU-hours per model, collapsing demand elasticity; conversely, sustained large-model training budgets would keep demand high but concentrate counterparty risk in a few hyperscalers.

Short-term catalysts to watch are large multi-year customer disclosures, TSMC capacity announcements, and export-control moves, any of which can reprice winners within days but play out in realized revenues over quarters.
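The 20-30% revenue-shift scenario can be sketched as a toy model. Everything here is hypothetical: the 10% starting share, the 25% shift (midpoint of the 20-30% range), and the linear ramp over 18 months (midpoint of 12-24) are illustrative assumptions, not figures from the analysis.

```python
# Toy model of the revenue-shift scenario: a fixed fraction of an
# addressable revenue pool migrates linearly into design winners
# over a ramp period, then plateaus. All parameters are assumed.

def shifted_share(initial_share: float, shift_fraction: float,
                  months_elapsed: int, ramp_months: int) -> float:
    """Winners' share of the pool after a linear ramp, capped once
    the full shift_fraction has migrated."""
    progress = min(months_elapsed / ramp_months, 1.0)
    return initial_share + shift_fraction * progress

# Hypothetical: winners start at 10% and capture 25% more over 18 months.
for m in (6, 12, 18, 24):
    print(f"month {m:>2}: winners hold {shifted_share(0.10, 0.25, m, 18):.1%}")
```

The plateau after month 18 is the point the analysis makes about timing dispersion: the repricing happens quickly, but realized revenue concentrates over a bounded window rather than indefinitely.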