Market Impact: 0.35

Are Amazon and Alphabet's Custom Chips a Threat to Nvidia?

NVDA · AMD · AMZN · GOOGL · AVGO · NFLX
Artificial Intelligence · Technology & Innovation · Product Launches · Company Fundamentals · Corporate Guidance & Outlook · Antitrust & Competition · Analyst Insights

Alphabet's TPU business is expanding, with Google Cloud revenue up 63% year over year in Q4 and operating margin at 33%, while Alphabet is now selling TPUs directly to select customers. Amazon's custom chip unit is also accelerating, with AWS growth at 28% and Trainium3 nearly sold out, and Trainium4 capacity already significantly booked despite being 18 months away. The piece argues this is a competitive headwind for Nvidia, but also notes GPUs remain the most flexible option and still have a durable role in AI workloads.

Analysis

The market is moving from a binary "GPUs vs. custom silicon" debate to a portfolio optimization story. Custom chips will keep taking share in the most standardized, high-volume workloads, but that is more likely to compress Nvidia's growth mix than break the franchise; the real risk is not unit loss, it's pricing discipline across the stack as hyperscalers become better at benchmarking cost-per-token. That said, the external sale of TPU capacity and near-sold-out Trainium supply matter more as demand signals than as direct threats to Nvidia: they validate that AI infrastructure capex is still underpenetrated and that customers are willing to pre-commit multi-year demand for compute they can model around.

The second-order winner is Broadcom, which monetizes the "shovel seller" layer in custom silicon without bearing the full adoption risk. If Google and Amazon keep scaling internal accelerators, AVGO gains from design wins, interconnect, and custom logic content even as the end-customer logos become more self-sufficient. The more subtle loser is any AI software vendor whose economics rely on abundant, elastic GPU supply; custom chips reduce inference cost and can raise model usage, but they also lower switching costs for cloud customers that can standardize around proprietary stacks, which may deepen the hyperscalers' moats rather than broaden the ecosystem.

For Nvidia, the key hedge is flexibility: the more heterogeneous and enterprise-facing the workload, the more its GPUs remain the default insurance policy against vendor lock-in. The consensus seems to underappreciate how this creates a bifurcated market over the next 12-24 months: custom silicon captures the lowest-friction workloads, while Nvidia retains the premium slice where portability, software compatibility, and burst capacity matter most.
The risk is not a single event but a gradual margin reset if hyperscalers negotiate harder on next-gen GPU pricing once their own chips prove sufficient for a larger share of inference. Near term, this is bullish for AMZN and GOOGL fundamentals because custom chips expand cloud capacity and improve ROI on AI capex, but the bigger trade is that both companies can monetize AI demand with better unit economics than pure GPU pass-through. The reversal trigger would be any evidence that custom chips are cannibalizing cloud attach rates, or that performance gains flatten after the next generation, which would push buyers back toward Nvidia's more general-purpose architecture.