Market Impact: 0.6

Why SanDisk Stock (SNDK) Is Falling Today and What Bank of America Sees Ahead

Tickers: SNDK, GOOGL, GOOG, BAC
Topics: Artificial Intelligence, Technology & Innovation, Trade Policy & Supply Chain, Company Fundamentals, Analyst Insights, Management & Governance, Investor Sentiment & Positioning, Corporate Guidance & Outlook

SanDisk shares tumbled ~7% today (after a 3.5% drop yesterday) amid concerns that Google’s TurboQuant compression could reduce future memory demand; the company also announced a $1.0B investment in Nanya tied to a long-term supply deal, raising near-term margin and cash-flow questions. Bank of America’s Wamsi Mohan rates SNDK Buy with a $900 price objective (~42% upside), citing durable NAND demand, longer-term supply agreements and a shift to higher-margin products; Street consensus is a Strong Buy (12 Buys, 3 Holds) with an average PT of $700 and a high of $1,000.

Analysis

The market is conflating a per-model memory-intensity assumption with aggregate addressable demand: improved compression lowers the capacity needed per inference, but it also raises the marginal ROI of deploying models. That should push hyperscalers toward more parallel models, longer retention windows for embeddings, and more frequent re-scoring, all of which increase total I/O, endurance requirements, and demand for higher-margin performance SSDs and computational-storage devices rather than commodity raw NAND alone.

Second-order winners will be suppliers that can monetize firmware, controllers, and endurance/QoS guarantees; firms that contract for multi-year fixed-plus-variable revenue will capture a larger share of the surplus even if bits per model fall. Conversely, pure spot-market NAND/DRAM commodity exposure is most at risk: lower per-model capacity plus abundant industry wafer supply opens a route to price pressure on vanilla commodity footprints while skewing the premium toward differentiated products.

Timing matters. Near-term volatility will be driven by sentiment and quarterly results (days to weeks); adoption cadence and hyperscaler CapEx decisions play out over 3–12 months; broad compression adoption that structurally reduces memory intensity is a 2–5 year outcome. Tail risks cut both ways: an open-source or widely adopted compression layer could materially lower long-term intensity, but a large uplift in model count and real-time use (LLMs as a service) could more than offset per-model savings within 12–24 months.
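The elasticity argument above can be made concrete with a toy back-of-envelope calculation. All numbers below are illustrative assumptions (fleet size, per-model footprint, compression ratio, retention multiplier), not figures from the article or from any company:

```python
# Back-of-envelope sketch of the compression-vs-demand argument.
# Every number here is an illustrative assumption, not real data.

def aggregate_memory_demand(models_deployed: int,
                            gb_per_model: float,
                            retention_multiplier: float) -> float:
    """Total memory footprint in GB across a hypothetical fleet."""
    return models_deployed * gb_per_model * retention_multiplier

# Baseline: no compression, a hypothetical fleet of 100 models.
baseline = aggregate_memory_demand(models_deployed=100,
                                   gb_per_model=80.0,
                                   retention_multiplier=1.0)

# With assumed 4x compression, per-model capacity falls, but cheaper
# inference makes it economical to run more parallel models and keep
# embeddings longer (the elasticity response described above).
compressed = aggregate_memory_demand(models_deployed=350,
                                     gb_per_model=80.0 / 4,
                                     retention_multiplier=1.5)

print(baseline)    # 8000.0 GB
print(compressed)  # 10500.0 GB: aggregate demand rises despite compression
```

Under these assumptions, a 4x drop in bits per model is more than offset by a 3.5x rise in deployed models and longer retention, which is the scenario in which differentiated, high-endurance storage captures the incremental demand.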
