
Billionaire Bill Ackman Sold a Long-Time Holding to Make Room for These 2 New Artificial Intelligence (AI) Stocks

Tickers: HLT, GOOGL, NVDA, INTC, AMZN, META, NFLX
Tags: Artificial Intelligence, Technology & Innovation, Company Fundamentals, Corporate Earnings, Corporate Guidance & Outlook, Capital Returns (Dividends / Buybacks), Travel & Leisure, Investor Sentiment & Positioning

Ackman sold Pershing Square's remaining stake in Hilton after a hold of more than seven years. Fundamentals improved over that period: revenue rose 35% from $8.9B to more than $12B, operating income grew 88%, EPS grew 145%, room count expanded from 913,000 to 1.3 million, and loyalty membership climbed from 85 million to 243 million. But the stock now trades at more than 32x forward earnings versus roughly 23x when he first bought. He redeployed the capital into Amazon and Meta on the generative AI opportunity: Amazon plans roughly $200B in capex (likely pushing free cash flow negative) while AWS AI services report triple-digit growth and cloud revenue grew 24% year over year in Q4; Meta may raise capex to as much as roughly $135B (from $72B). Both names trade at lower forward P/Es than Hilton (Amazon below 27x, Meta below 20x).
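The growth and valuation figures above can be sanity-checked with simple arithmetic. All numbers come from the article; the helper function and variable names below are purely illustrative:

```python
# Sanity-check of the growth and valuation figures cited in the summary.
# Figures are from the article; this helper is illustrative, not a model.

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Hilton fundamentals over Ackman's ~7-year hold
revenue_growth = pct_change(8.9, 12.0)            # revenue, $B -> ~35%
rooms_growth = pct_change(913_000, 1_300_000)     # room count -> ~42%
loyalty_growth = pct_change(85, 243)              # members, millions -> ~186%

# Forward P/E comparison cited as the rationale for the rotation
forward_pe = {"HLT": 32, "AMZN": 27, "META": 20}
cheapest = min(forward_pe, key=forward_pe.get)

print(f"Hilton revenue growth: {revenue_growth:.1f}%")
print(f"Lowest forward P/E of the three: {cheapest} at <{forward_pe[cheapest]}x")
```

Note that the P/E values are the article's ceilings ("<27x", "<20x"), so the comparison holds directionally even if the exact multiples drift.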

Analysis

The most durable competitive effect from the current AI capex cycle is not a single winner but a reconfiguration of the data-center value chain: cloud providers that internalize model training and inference (custom silicon plus a proprietary software stack) will compress the addressable revenue for third-party GPU cycles while pushing up demand for memory (HBM), power provisioning, and high-density networking. Economically, inference workloads are where unit economics matter most: cutting cost per inference by even 2x can unlock much larger TAMs for ad/commerce monetization and edge services, so investments that lower $/inference are self-reinforcing moats.

The key near-term risks are execution and timing. Large front-loaded capex can meaningfully depress FCF and create 12-36 months of valuation volatility even if long-run returns are attractive. The catalysts that will re-rate players are measurable: quarterly AI revenue growth, cost per unit of inference, server utilization, and the cadence of custom-silicon rollouts. Downside catalysts include slower model adoption, privacy or regulatory frictions that reduce ad engagement, and a rapid commoditization of inference accelerators that restores GPU share but at lower prices.

The consensus underestimates the dispersion between training and inference economics. Market participants that conflate GPU incumbency with long-term dominance miss opportunities to go long integrated cloud providers that convert AI improvements into higher ARPU (ads plus commerce), and to short high-multiple businesses lacking structural data or scale advantages.

Practical monitors: $/inference improvements (target: a 2x reduction within 12-24 months), a capex/revenue ratio trending down after year 2 of the spend cycle, and a sequential gross-margin inflection driven by AI-services adoption.
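The monitors named above reduce to two simple checks. The data series and thresholds in this sketch are hypothetical illustrations, not figures from the article:

```python
# Illustrative checks for the two quantitative monitors discussed above:
# (1) the 2x cost-per-inference reduction target, and
# (2) a capex/revenue ratio that rolls over after year 2 of the spend cycle.
# All series and thresholds here are hypothetical examples.

def inference_cost_reduction(cost_start, cost_now):
    """Multiple by which cost per inference has fallen (2.0 == the 2x target)."""
    return cost_start / cost_now

def capex_ratio_trending_down(capex_by_year, revenue_by_year):
    """True if capex/revenue declines every year after year 2."""
    ratios = [c / r for c, r in zip(capex_by_year, revenue_by_year)]
    return all(a > b for a, b in zip(ratios[1:], ratios[2:]))

# Hypothetical spend cycle: heavy year-1/2 capex, then the ratio inflects
capex = [60, 90, 100, 95]        # $B per year (hypothetical)
revenue = [500, 560, 640, 720]   # $B per year (hypothetical)

hit_cost_target = inference_cost_reduction(0.010, 0.004) >= 2.0  # 2.5x >= 2x
capex_inflecting = capex_ratio_trending_down(capex, revenue)
```

A real screen would replace the hypothetical series with disclosed quarterly capex and segment revenue, but the decision rule is the same: both flags turning true is the re-rating signal the analysis describes.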