
Wall Street banks are increasingly bullish on Micron, with DA Davidson and Deutsche Bank both reiterating $1,000 price targets, implying more than 30% upside from current levels above $800. Analysts argue AI-driven demand for DRAM and other memory chips is being underestimated, particularly for key-value cache in large language models, while Mizuho sees 2026 DRAM and NAND pricing up 355% and 510% year over year, respectively. Barclays also said data center capacity is set to double from 2025 to 2030 and should largely be absorbed, supporting the broader AI infrastructure trade.
The market is still treating AI memory as a cyclical afterthought, but the better frame is that memory is becoming the gating item for inference economics. If key-value cache intensity keeps rising with model size and context length, DRAM/NAND pricing can stay tight even if compute capex eventually decelerates, which would keep Micron's earnings power elevated longer than the market's typical 12-18 month memory upcycle model assumes. That shifts MU from a pure beta trade on AI enthusiasm to a structural beneficiary of AI token growth.

The second-order winner is likely the entire memory supply chain, especially the most capacity-constrained, price-disciplined producers; the loser is the rest of the AI stack if memory captures a larger share of incremental dollar spend. In practice, some of the valuation multiple expansion currently attributed to compute leaders could migrate to memory names, while equipment and foundry beneficiaries may see relative margin pressure if they do not participate in the memory bottleneck. ARM and TSM look less directly levered to this specific bottleneck, which matters if investors rotate toward the scarcer input.

The biggest risk is that the Street may be extrapolating spot-tight conditions into 2026-27 while ignoring memory's history of sudden supply response. The trade works best over months, not days: the near-term move is already extended, but if pricing remains firm into the next two quarters, estimates can keep grinding higher. What would break the thesis is either an aggressive capex response from the big three producers or evidence that model efficiency reduces memory intensity faster than token demand scales. Contrarian take: the consensus is underpricing the duration of the cycle, not the existence of it.
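To see why key-value cache intensity matters for memory demand, consider the standard transformer KV-cache sizing arithmetic: cached bytes scale linearly with context length and with model depth and width. A minimal sketch, using hypothetical model dimensions chosen only to illustrate the scaling (not any specific deployed model):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, context_len,
                   bytes_per_value=2):  # 2 bytes assumes fp16/bf16 storage
    # Two tensors (keys and values) are cached per layer, each of shape
    # [num_kv_heads, context_len, head_dim].
    return 2 * num_layers * num_kv_heads * head_dim * context_len * bytes_per_value

# Hypothetical 70B-class model: 80 layers, 8 KV heads, head dimension 128.
per_token = kv_cache_bytes(80, 8, 128, 1)
print(per_token)  # 327680 bytes, i.e. 320 KiB cached per generated token

# One 128K-token context for a single request:
gib = kv_cache_bytes(80, 8, 128, 128 * 1024) / 2**30
print(gib)  # 40.0 GiB of DRAM/HBM tied up by one long-context request
```

The point of the sketch is the linearity: doubling context length or serving twice as many concurrent requests doubles the memory footprint with no additional compute-chip purchases required, which is the mechanism behind the "memory as gating item" framing above.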
The more interesting question is whether MU becomes the cleanest public-market expression of AI infrastructure scarcity outside of compute, and whether that re-rates the whole memory complex to a permanently higher trough multiple. If so, pullbacks are likely to be shallow until the market gets proof of capacity expansion or demand digestion.