Q2 2026 Micron Technology Inc Earnings Call

Operator: Ladies and gentlemen, thank you for joining us, and welcome to Micron Technology's Fiscal Q2 2026 Financial Conference Call. After today's prepared remarks, we will host a question-and-answer session. I will now hand the conference over to Satya Kumar, Investor Relations. Satya, please go ahead.

Satya Kumar: Thank you, and welcome to Micron Technology's Fiscal Q2 2026 financial conference call. On the call with me today are Sanjay Mehrotra, our Chairman, President, and CEO, and Mark Murphy, our CFO. Today's call is being webcast from our investor relations site at investors.micron.com, including audio and slides. In addition, the press release detailing our quarterly results has been posted on the website, along with the prepared remarks for this call. Today's discussion contains forward-looking statements that are subject to risks and uncertainties. These forward-looking statements include statements regarding our future financial and operating performance, as well as trends and expectations in our business, customers, market, industry, products, and regulatory and other matters. These statements are based on our current assumptions, and we assume no obligation to update these statements.

Satya Kumar: Please refer to our most recent financial reports on Form 10-K and Form 10-Q, and our other filings with the SEC, for more information on the risks and uncertainties that could cause actual results to differ materially from expectations. Today's discussion of financial results is presented on a non-GAAP financial basis, unless otherwise specified. A reconciliation of GAAP to non-GAAP financial measures can be found on our website. I'll now turn the call over to Sanjay.

Sanjay Mehrotra: Thank you, Satya. Micron delivered an exceptional fiscal Q2, with stellar records in revenue, gross margin, EPS, and free cash flow. Quarterly revenue nearly tripled versus one year ago, and revenue for DRAM, NAND, HBM, and each business unit reached new highs. Our fiscal Q3 single-quarter revenue guidance exceeds the full-year revenue for every year in our company's history through fiscal 2024. For fiscal Q3, we anticipate exceptional records across revenue, gross margin, EPS, and free cash flow. Reflecting confidence in the sustained strength of our business, I'm pleased to announce that our board has approved a 30% increase in our quarterly dividend. The step-up in our results and outlook is the outcome of an increase in memory demand driven by AI, structural supply constraints, and Micron's strong execution across the board. Our memory and storage solutions are at the heart of this AI revolution.

Sanjay Mehrotra: Memory makes AI smarter and more capable, enabling longer context windows, deeper reasoning chains, and multi-agent orchestration. As AI evolves, we expect compute architectures to become more memory-intensive. This is why we strongly believe that Micron is one of the biggest beneficiaries and enablers of AI. AI hasn't just increased demand for memory; it has fundamentally recast memory as a defining strategic asset in the AI era. We continue to work with customers on strategic customer agreements, or SCAs, which differ from prior LTAs and carry specific commitments over a multi-year time horizon, providing improved visibility and stability in our business model. These SCAs also give customers greater certainty to plan their businesses while reinforcing long-term engagement across our broad product portfolio. We are excited to have signed our first five-year SCA.

Sanjay Mehrotra: We are making excellent progress ramping our industry-leading 1-gamma DRAM and G9 NAND technology nodes. We expect 1-gamma to become the highest-volume node in Micron's history. Our 1-gamma node was already the fastest ramp to mature yields, is ramping volumes faster than all prior nodes in our history, and is on track to become a majority of our DRAM bit mix by mid-calendar 2026. We plan to increase EUV adoption at the 1-delta DRAM node, utilizing the latest-generation EUV tools. These more advanced EUV tools will help us optimize both clean room space efficiency and patterning when scaling to 1-delta and beyond. In NAND, our G9 node also remains on track to constitute a majority of bits by mid-calendar 2026. We also achieved a record mix of QLC bits in the quarter.

Sanjay Mehrotra: Looking ahead, we expect co-location of R&D and high-volume manufacturing at our Boise and Singapore sites to speed up time to market for our leading-edge products. We see an unprecedented set of opportunities for memory and storage to enable the AI era across market segments and expect to meaningfully increase our R&D investments in fiscal 2027. Micron's technology leadership, product excellence, and manufacturing execution are being recognized in quality scores from our customers. I'm pleased to report that a clear majority of our customers rank Micron number one in quality. Turning to our end markets. AI demand is driving DRAM and NAND data center bit TAM to exceed 50% of the industry TAM for the first time in calendar 2026. Traditional server demand is robust, driven by a combination of demand from workloads initiated by agentic AI, as well as broad-based server refresh.

Sanjay Mehrotra: AI server demand continues to be strong. Both AI and traditional server demand are constrained by a lack of adequate DRAM and NAND supply. We expect server units to grow in the low-teens percentage range in calendar 2026, driven by growth in both AI and traditional servers. We expect server DRAM content to continue to grow in calendar 2026 with the introduction of new platforms. At NVIDIA's GTC, we announced that Micron began volume shipment of its HBM4 36 GB 12-high product, which is designed for NVIDIA Vera Rubin, in the first quarter of calendar 2026. With our HBM4 production ramp and volume shipments underway, we expect to reach mature yields faster than HBM3E.

Sanjay Mehrotra: We have also sampled our HBM4 16-high product, which provides 48 GB of HBM capacity in each HBM cube, a 33% increase in HBM capacity compared to HBM4 12-high. Development of HBM4E, our next-generation HBM product, is well underway, and we expect to ramp volume in calendar 2027. Our HBM4E will leverage Micron's production-proven, industry-leading 1-gamma DRAM technology node and is set to deliver another step-function improvement in performance, enabling a whole new generation of AI compute platforms across the industry. Additionally, HBM4E customization options offer us further differentiation opportunities and even deeper R&D engagement with customers. Micron pioneered the development of LPDRAM for the data center, which consumes one-third the power of DDR DRAM server modules.

Sanjay Mehrotra: Building on this leadership, we sampled the industry's first 256 GB LPCAMM2 product, which is built using our 1-gamma node and enables a massive 2 TB of capacity per CPU, quadrupling the content from just a year ago. We see expanding use of LPDRAM in the data center in the years ahead, and we are excited to maintain an industry-leading, innovative product roadmap in this market. Rapid growth in AI inference is driving the emergence of new architectures optimized for the token economics of specific workloads. Micron's broad portfolio of HBM, LP, DDR, and SSD is a critical enabler across these architectures. The recently announced NVIDIA Groq 3 LPX, unveiled at GTC, implements up to 12 TB of DDR5 in a rack-scale architecture.

Sanjay Mehrotra: We are seeing an acceleration in NAND bit demand in the data center due to AI use cases such as vector database and KV cache offload, and due to the growing share of SSDs in capacity storage tiers. Micron's data center SSD product portfolio, enabled by our technology leadership and vertical integration, covers the spectrum from highest performance to highest capacity. We are now in high-volume production of our G9 NAND-based PCIe Gen6 high-performance data center SSDs. Our 122 TB high-capacity SSD is seeing strong adoption and delivers 16 times the sequential read throughput per watt of a capacity-matched HDD configuration. Our strategy and execution are delivering results. Our data center SSD market share increased for the fourth consecutive calendar year in 2025 to a new record.

Sanjay Mehrotra: In fiscal Q2, data center NAND revenue more than doubled sequentially, reaching a substantial new record, and we expect further growth in the quarter ahead. Micron's data center SSD portfolio is industry-leading, and we have secured a robust set of design wins across our customer base. We are now seeing NAND demand significantly in excess of our available supply for the foreseeable future. In calendar 2026, a number of factors, including DRAM and NAND supply constraints, could cause PC and smartphone units to decline in the low-double-digit percentage range. Over time, we expect the value of on-device AI to drive strong memory content growth in PCs and smartphones. In PCs, there have been exciting innovations recently with agentic AI applications such as OpenClaw, where AI agents can perform tasks independently on the host PC and also initiate workloads in the cloud.

Sanjay Mehrotra: PCs with on-device agentic AI capabilities have recommended memory configurations of at least 32 GB, twice as much as the average PC. Additionally, the fast-growing new category of personal AI workstations, such as NVIDIA DGX Spark and AMD Ryzen AI Halo, come in 128 GB configurations, ideal for using large language models on device. Likewise, in smartphones, OEMs have recently announced new flagship devices such as Samsung Galaxy S26 and Google Pixel 10 with agentic AI integrated into their mobile operating systems. The mix of flagship smartphones shipping with 12 GB or more of DRAM increased to nearly 80% in calendar Q4, up from under 20% a year ago. Micron is well positioned to accelerate the opportunities in these markets with our industry-leading portfolio of products. In PC, Micron completed qualifications for LPCAMM2 at a major OEM.

Sanjay Mehrotra: At CES, we launched the industry's first Gen5 QLC client SSD based on G9 NAND. Micron's LPDDR5X is now designed into leading personal AI workstations, expanding our addressable market with high volumes shipped to key customers. In smartphones, Micron continues to receive strong interest and feedback from OEM and ecosystem partners on our 1-gamma based LPDDR6 samples. We built momentum with additional qualifications and mass production of our 10.7 Gb per second 1-gamma LPDDR5X 16 Gb product. We saw continued pricing improvement across automotive, industrial, and embedded markets. Total AEBU revenue reached a record, with automotive and industrial revenue together exceeding $2 billion in the quarter. In automotive, OEMs are deploying level 2+ ADAS across their fleets at an accelerating pace.

Sanjay Mehrotra: The average car today has less than L2 ADAS capability, containing approximately 16 GB of DRAM, while vehicles with L4 autonomy require over 300 GB. As more advanced ADAS and smart cabin adoption scales, we expect robust long-term growth in automotive memory demand. We have shipped samples of the industry's first automotive-grade 1-gamma LPDDR5 DRAM, and in NAND we were first in the industry with a G9-based UFS 4.1 automotive solution, further reinforcing our technology leadership in this market. Rapid improvements in AI are supercharging the capabilities of robots. We believe we are on the cusp of a 20-year growth vector in robotics and expect robotics to become one of the largest product categories in the technology world.

Sanjay Mehrotra: Humanoid robots will be AI-enabled and will be powered by a compute platform that rivals that of a high-end L4-capable automobile, thus requiring significant memory and storage capacity. We expect this exciting new category of growth to further underpin the long-term favorable dynamics that shape our industry environment. Micron is very well positioned to leverage this opportunity in close partnership with our customers, enabled by our industry-leading technology, product solutions, and operational capabilities. Now turning to our market outlook. We expect both DRAM and NAND industry bit demand in calendar 2026 to be constrained by supply. We continue to expect supply-demand conditions for both DRAM and NAND to remain tight beyond calendar 2026. We expect industry DRAM bit shipments in calendar 2026 to grow in the low-20s percent range, slightly above our prior outlook.

Sanjay Mehrotra: In DRAM, cleanroom constraints and long construction lead times, a higher HBM trade ratio, higher HBM growth rates, and declining bits-per-wafer growth from node migrations constrain bit supply growth. We expect industry NAND bit shipments in calendar 2026 to grow approximately 20%. In NAND, some industry suppliers redirecting cleanroom space to DRAM and overall limited cleanroom space constrain bit supply growth. We expect Micron DRAM and NAND supply to grow approximately in line with the industry in calendar 2026. Micron is working to address the unprecedented gap between supply and demand, and we achieved several important milestones in expanding our global manufacturing footprint this past quarter. In DRAM, earlier this week we announced the successful closing of the acquisition of the Tongluo site from Powerchip Semiconductor, completing the transaction ahead of schedule.

Sanjay Mehrotra: We expect this site to support meaningful product shipments from the existing fab beginning in fiscal 2028. Adding to the existing fab, we plan to begin construction of a similar-sized second clean room at this site by the end of fiscal 2026. We continue to expect initial wafer output at our first Idaho fab in mid-calendar 2027, and ground preparation has begun for our second Idaho fab. We broke ground on our first fab at the New York site, and initial ground preparation activities are ahead of plan. In Japan, we are making good progress on ground preparation for our clean room expansion to enable future technology transitions at our Hiroshima site.

Sanjay Mehrotra: In NAND, the combination of a higher demand outlook and our decision to co-locate an R&D clean room in our manufacturing fab underpins our decision to break ground for a new NAND fab at our Singapore site. We expect initial wafer output from this fab in the second half of calendar 2028. In assembly and test, we commenced commercial shipments from our new facility in India. This state-of-the-art facility will be among the largest single-floor assembly and test clean rooms in the world. Our Singapore advanced packaging facility for HBM is on track to contribute meaningfully to Micron's HBM supply in calendar year 2027. We expect fiscal 2026 CapEx to be above $25 billion.

Sanjay Mehrotra: Relative to our last earnings call estimate, the majority of the increase is driven by clean-room facility-related CapEx, of which the largest factor is Tongluo, followed by increased construction spend in our US fab projects. We project our fiscal 2027 CapEx to step up meaningfully to support HBM- and DRAM-related investments. We expect construction-related CapEx to increase by over $10 billion year-over-year in fiscal 2027 as we build out our global manufacturing sites to address long-term demand opportunities. In addition, we expect higher equipment spend year-over-year in fiscal 2027. As we make these investments, we will continue to be responsive to the market environment and our customer demand to appropriately align our supply plans. I will now turn it over to Mark for our fiscal Q2 financial results and outlook.

Mark Murphy: Thank you, Sanjay, and good afternoon, everyone. Micron delivered strong financial results for fiscal Q2, with revenue, gross margin, and EPS all exceeding the high end of our guidance. In fiscal Q2, we generated record free cash flow, reduced our debt, and closed the quarter with the highest net cash position in our history. Total fiscal Q2 revenue was $23.9 billion, up 75% sequentially and up 196% year-over-year, representing our fourth consecutive quarterly revenue record. The $10.2 billion sequential increase is the largest in our history. Fiscal Q2 DRAM revenue was a record $18.8 billion, up 207% year-over-year, and represented 79% of total revenue. Sequentially, DRAM revenue increased 74%. Bit shipments were up mid-single digits.

Mark Murphy: Prices increased in the mid-60s percent range, driven by tight industry conditions, and included favorable mix. Fiscal Q2 NAND revenue was a record $5 billion, up 169% year-over-year, and represented 21% of Micron's total revenue. Sequentially, NAND revenue increased 82%. NAND bit shipments increased in the low-single-digit percent range. Prices increased in the high-70s percent range, driven by tight NAND industry conditions, and included favorable mix. The consolidated gross margin for fiscal Q2 was 75%, up 18 percentage points sequentially. This improvement was driven primarily by higher pricing and also included favorable mix and cost performance. Fiscal Q2 gross margin nearly doubled from a year ago and was a company record. Now, turning to quarterly financial performance by business unit.

Mark Murphy: Cloud Memory Business Unit revenue was a record $7.7 billion and represented 32% of total company revenue. CMBU revenue was up 47% sequentially, driven by an increase in prices and favorable mix. CMBU gross margins were 74%, higher by 9 percentage points sequentially, driven by higher pricing and cost execution. Core Data Center Business Unit revenue was a record $5.7 billion and represented 24% of total company revenue. CDBU gross margins were 74%, up 23 percentage points sequentially, driven by higher pricing and favorable mix. Mobile and Client Business Unit revenue was a record $7.7 billion and represented 32% of total company revenue. MCBU revenue was up 81% sequentially, driven by higher pricing, partially offset by lower bit shipments. MCBU gross margins were 79%, up 25 percentage points sequentially, driven primarily by higher pricing and favorable mix.

Mark Murphy: Automotive and Embedded Business Unit revenue was a record $2.7 billion and represented 11% of total company revenue. AEBU revenue was up 57% sequentially, driven by higher pricing, partially offset by lower bit shipments. AEBU gross margins were 68%, up 23 percentage points sequentially, driven primarily by higher pricing. Operating expenses in fiscal Q2 were $1.4 billion, up $87 million quarter-over-quarter. The sequential increase was driven by higher R&D expenses. We generated operating income of $16.5 billion in fiscal Q2, resulting in an operating margin of 69%, up 22 percentage points sequentially and 44 percentage points year-over-year. Fiscal Q2 taxes were $2.5 billion on an effective tax rate of 15.1%.

Mark Murphy: Non-GAAP diluted earnings per share in fiscal Q2 was $12.20, up 155% sequentially and 682% versus the year-ago quarter. Turning to cash flow and capital expenditures: in fiscal Q2, operating cash flow was $11.9 billion. Capital expenditures were $5 billion, resulting in free cash flow of $6.9 billion. Fiscal Q2 free cash flow was a quarterly record for the company, exceeding our prior record in fiscal Q1 2026 by 77%. Ending inventory for fiscal Q2 was $8.3 billion, up $62 million sequentially, with days of inventory at 123. DRAM inventory days remain especially tight, at below 120 days.

Mark Murphy: We reached record levels of cash and investments of $16.7 billion at quarter end and had liquidity over $20 billion when including our untapped credit facility. In fiscal Q2, we repurchased $350 million of shares as permitted by the terms of the CHIPS Agreement. During the quarter, we also reduced debt by $1.6 billion, including redemption of senior notes maturing in 2029 and 2030. The weighted average maturity on our outstanding debt is August 2034. We closed the quarter with $10.1 billion of debt and a net cash balance of $6.5 billion. Reinvesting in the profitable growth of our business across R&D, CapEx, and other strategic investments remains our top priority for capital allocation.

Mark Murphy: We are committed to maintaining a strong balance sheet, have reduced our total debt by over $5 billion in the last three quarters, and are at our strongest net cash position ever. Reflecting the sustained strength of our technology leadership and cash generation, as Sanjay mentioned, the board has approved a 30% increase in our quarterly dividend, to $0.15 per share. Now turning to our guidance. We expect fiscal Q3 revenue to be a record $33.5 billion, plus or minus $750 million; gross margin to be approximately 81%; and operating expenses to be approximately $1.4 billion. Based on a share count of approximately 1.15 billion shares, we expect EPS to be a record $19.15 per share, plus or minus $0.40.

Mark Murphy: We expect higher prices, lower costs, and favorable mix to all contribute to gross margin expansion in fiscal Q3. As mentioned last quarter, Micron's fiscal Q4 2026 OpEx will also reflect the effect of an additional work week in this 53-week fiscal year. We expect to increase our fiscal 2027 OpEx as we ramp R&D investments in support of an unprecedented set of long-term opportunities in memory and storage. We expect a fiscal Q3 and fiscal year 2026 tax rate of around 15.1%. Micron continues to invest in a disciplined manner across our global footprint to address customer demand. As mentioned earlier, we now project our capital spending in fiscal 2026 to be above $25 billion. In fiscal Q3, we project CapEx of approximately $7 billion while delivering significantly higher free cash flow on stronger operating cash flow.

Mark Murphy: Due to the need for clean room capacity, we expect our construction spend growth rate to outpace equipment spend growth in both fiscal 2026 and fiscal 2027. Any impacts that may occur due to trade or geopolitical developments are not included in our guidance. I'll now turn it over to Sanjay to close.

Sanjay Mehrotra: Thank you, Mark. Decades of investment in innovation and execution have established Micron as a technology leader in memory and storage, and as one of the semiconductor industry's biggest beneficiaries and enablers of AI. As the only US-based manufacturer of advanced memory products, Micron is uniquely positioned to capitalize on the unprecedented opportunities ahead. I want to thank our team members worldwide, whose execution made this outstanding quarter possible. As strong as these results are, I'm even more excited about what's ahead for Micron. We will now open for questions.

Satya Kumar: Thank you.

Operator: We will now begin the question and answer session. Please limit yourself to one question and one follow-up. If you would like to ask a question, please press star one to raise your hand and star six to unmute. Your first question comes from the line of Krish Sankar from TD Cowen. Please go ahead.

Krish Sankar: Mark, the 81% gross margin guide is very impressive. I'm just curious how to think about the sustainability of gross margins, especially as you bring more HBM4 into the mix. If you can give some color on how to think about gross margins in the August quarter and beyond, that'd be very helpful. Then I have a follow-up for Sanjay.

Mark Murphy: Yeah, Krish, this is Mark. We provided a strong guide, up 600 basis points sequentially into Q3. We're not going to provide Q4 gross margin guidance. However, we have indicated that we expect market conditions to remain tight beyond 2026, so clearly beyond Q4. What you're seeing reflected in our gross margin is the benefit of AI driving a multi-year investment cycle, most of which is ahead of us. AI requires higher performance, more memory, and more high-performance memory. That's reflected in the margins.

Mark Murphy: Also, we've talked about supply factors, and those are going to continue beyond 2026. The 81% contemplates growth in HBM4, but we expect, as I mentioned, market conditions to be strong. Now, keep in mind that at these gross margin levels, an incremental increase in price is going to have less of an effect on gross margin. But beyond that, we're not providing a Q4 gross margin.

Krish Sankar: Got it. Thanks for that, Mark. A quick question for Sanjay on the SCA. Congrats on your first five-year SCA. How different is it from an LTA? Is this a multi-year volume and price commitment, or does the price get negotiated every year? Also, how should we think about the cancellation terms of the SCA in case the cycle slows down during that timeframe? Thanks, Sanjay.

Sanjay Mehrotra: Thank you for recognizing the first SCA that we have completed here. As you noted, an SCA is a multi-year agreement, and we noted that in our remarks as well. LTAs have typically tended to be one-year agreements. Of course, in this environment of an extremely tight supply outlook, in the foreseeable timeframe as well, our customers are very motivated, for their own planning purposes and for better predictability, to have these structural strategic agreements with us. These agreements are also meant to bring stability and greater visibility into our business model. We have completed one SCA, so we are not going to get into the specifics of these agreements.

Sanjay Mehrotra: I'm sure you can appreciate that these SCAs are confidential in nature. But of course, these SCAs are meant to achieve the objectives for the customers, in terms of their ability to plan and to count on the supply commitments in the agreements, but also for us to be able to count on the specific commitments from the customers. These are meant to span the periods when the industry is very tight as well as other parts of the industry environment. That's why they are long-term agreements, and they have robust terms in them.

Krish Sankar: Got it. Thanks a lot, Sanjay.

Sanjay Mehrotra: Robust terms in them for us as well as for our customers.

Krish Sankar: Got it. Thank you very much.

Operator: Your next question comes from the line of Joseph Moore from Morgan Stanley. Your line is open. Please go ahead.

Joseph Moore: Allocation questions by end market. Obviously, AI is the area that has the most urgency, but do you worry about demand destruction for things like PCs and smartphones? Are you trying to balance big customers and small customers? Just how are you thinking about that allocation process?

Sanjay Mehrotra: I mean, clearly supply is extremely tight, and supply is tight across all end markets. Demand trends are strong across the end markets, while price-sensitive markets, such as the consumer examples that you gave, may have some demand that is getting impacted due to higher prices. Overall demand in those markets stays pretty strong as well. Our goal and strategy is always to be a diversified supplier to our various end markets. I think that is very important for us. Of course, data center is becoming a bigger and bigger part of the industry TAM, so a bigger portion of the supply goes there, and that's the main driver of growth for the industry as well as for Micron revenues.

Sanjay Mehrotra: Other parts of the market are important to us, such as PCs, smartphones, automotive, and, of course, industrial, and we want to maintain that well-diversified mix across our end markets. I would just like to point out that overall, whether in the data center or in the consumer parts of the market, such as smartphones or PCs, the AI trend continues to drive greater and greater requirements for memory content. Of course, customers are working in this tight supply environment to manage the mix of their products. Overall, we are very much working with customers across our end markets.

Joseph Moore: Great. Thank you. I think in the past, you've sort of said that some customers are getting 70% of what they're asking for. Is that still kind of the ballpark of what you're dealing with? Are there customers higher or lower than that compared with three months ago?

Sanjay Mehrotra: Yeah, what we said in the last earnings call is that for some of our key customers, we are able to fulfill only 50% to two-thirds of their demand in the medium term. Yes, that still remains the case.

Joseph Moore: Great. Thank you. Great quarter.

Sanjay Mehrotra: Thank you.

Operator: Your next question comes from the line of Timothy Arcuri. Your line is open. Please go ahead.

Timothy Arcuri: Thanks a lot. Sanjay, I also wanted to ask about the SCAs. I think we're all trying to think through to the other side of the cycle, and we hope that these SCAs provide some mechanism that will limit your gross margin on the downside to a certain number. I know you don't want to give too many details, but is it fair to say that there is a mechanism in these SCAs that would limit your gross margin on the downside when things do finally roll back over?

Sanjay Mehrotra: Certainly not getting into the specifics of these SCAs, for the obvious reasons of confidentiality of these agreements. We have successfully completed one SCA, and we are in discussions with multiple other customers. If and when we complete these agreements, and as appropriate, we will of course share further details with you. But what I want to highlight is that these SCAs are multi-year, and they have specific commitments in them. These are robust agreements, and of course they are meant to give us visibility and stability toward our business model. Beyond that, I really cannot get into any specifics at this point.

Timothy Arcuri: Okay, thanks. Mark, I just had a question about cash. You're going to generate, I don't know, $35 billion to $40 billion in free cash flow this fiscal year, and you're probably going to have more than $50 billion in cash by the end of the calendar year. What do you do with this? Are you planning to set aside a bunch of it to buy back a bunch of stock on the other side? I guess, with respect to that, you do have restrictions on repurchases from the money you took from the CHIPS Act. Is there any way to get that reworked? Thanks.

Mark Murphy: Yeah, Tim, we're thrilled with the performance of the business and the improvement in the balance sheet, and in Q2 with record net cash and record free cash flow, beating the previous quarter's record by 77%. On our Q3 guide, when you take those numbers and consider the CapEx we gave, we could see cash flow roughly double sequentially. We're going to continue to build on the balance sheet strength and improve our net cash position. We're continuing to de-lever.

Mark Murphy: We continue to pay down debt. It's noteworthy that we received two credit upgrades in the quarter; we're now a solid BBB. We're getting stronger while, as you can see, we've talked about increasing our CapEx investment and increasing our R&D investment. Now, to your specific question on balance sheet priority and capital allocation: the balance sheet is always going to be a priority, along with organic investment in our business to advance technology and to put in capacity for value-add bits, which we certainly see now, generating a return on capital at this point of over 30%, headed towards 50%. We're going to remain disciplined there, though.

Mark Murphy: And then you saw today we're pleased to announce a dividend increase of 30%, which reflects the confidence we have in our business outlook, the stability of the business, and cash returns in the future. And, as you said, we will have, we believe, significant capacity then for returning cash to shareholders through repurchase, in combination with offsetting dilution from stock comp, and then opportunistically repurchasing.

Timothy Arcuri: Thanks, Mark.

Operator: Your next question comes from the line of C.J. Muse from Cantor Fitzgerald. Your line is open. Please go ahead.

C.J. Muse: Yeah, good afternoon. Thanks for taking the question. I wanted to follow up again on the SCA question. So you've had an evolution here: LTAs, binding, now SCAs. I'm curious if you could discuss the breadth of the different customers that you're speaking with. Is it only hyperscalers, or are there others that are interested? And I know you don't want to go into specific details on the contract, but just to follow up on the last question, are there any forward CapEx requirements tied to these agreements? Is there pricing tied to an ROIC on those investments? Any help there would be helpful. Thank you.


Sanjay Mehrotra: We'll share with you that the SCA that we just signed is with a large customer. Of course, these agreements are very much focused on allowing us to invest with confidence in our future supply plans, and also on having specific terms that enable us to have overall better visibility into future demand and, as I said earlier, enable stability around the business model as well. C.J., beyond that, really, we are not commenting on the SCAs, other than to say, as I mentioned earlier, that these SCA discussions are proceeding with multiple customers, and yes, these are across multiple markets as well.


C.J. Muse: Very helpful. And then I guess a quick follow-up for HBM. I think you guided last quarter to a 40% growth CAGR, which would suggest roughly $50 billion in revenues for the market this year. Has that number changed? And are you seeing any sort of preference for perhaps moving to DDR5 over HBM by any industry players, given the higher margins that we see there today? Thanks so much.


Sanjay Mehrotra: So yes, it is correct that the margins for non-HBM today are higher than HBM margins. Demand for HBM, of course, continues to be strong. We have not updated the numbers that we last provided in terms of the outlook for the HBM TAM. Of course, the demand for DDR5, LP, and HBM all continues to be strong in the data center, and we continue to manage the mix of the business as the data center AI demand continues to grow. As I said earlier, outside of data center, we are very much focused on making sure that we are maintaining relevant share in our key other market segments as well.


Sanjay Mehrotra: So overall, in this environment of strong AI demand, with trends across data center to the edge, we are very much focused on continuing to manage our portfolio, and we see strong growth opportunity for Micron's full portfolio in the data center. I'll just point out that that portfolio is about HBM, it's about LP, it's about SOCAMM, it's about DDR5, as well as our data center SSDs, where we have made tremendous strides in terms of market share over the course of the last few years.


C.J. Muse: Thank you.

Operator: Your next question comes from the line of Harlan Sur from JP Morgan. Your line is open. Please go ahead.

Harlan Sur: Yeah, good afternoon, and congratulations on the solid results and strong quarterly execution. Maybe, Sanjay, to carry on from where you left off on your commentary on SSD. In the November quarter of last year, I estimated that your enterprise SSD business was almost half of your total flash business, right? I think it was up like 60% sequentially. Obviously a very favorable mix shift from a margin perspective for the Micron team. And as you mentioned, you remain a strong top-three global supplier of ESSD. Off of that strong number, it looks like your ESSD business doubled sequentially in the February quarter, still 50% of the NAND mix. Looking forward, with the G9 node continuing to ramp, your next-generation ESSDs, performance optimized, capacity optimized, mainstream, are all on G9.


Harlan Sur: Does this give the team a runway to continue to drive sequential growth in ESSD through the remainder of this calendar year and into next year? And then I just wanted to get your thoughts on this new proposed memory tier of high bandwidth flash. Is this an area where the Micron team might start to focus some R&D resources?

Sanjay Mehrotra: So, regarding your question on data center SSDs, of course, this is an area of strong growth ahead. NAND supply is very tight, the demand for NAND stays strong, and data center SSDs are a big driver of NAND growth here as well. Micron is well positioned with our portfolio of SSDs, really going across the requirements in terms of capacity as well as performance across the various customers, using TLC as well as QLC with respect to our data center mix. So we're very well positioned with this.

Sanjay Mehrotra: This is part of our strategy of continuing to shift our portfolio, our revenue mix, towards higher profit pools of the industry and higher-value parts of the market. We of course will continue to address opportunities for growing our SSD business, and we feel really good about the trajectory that we have been on with data center SSD and the trajectory that is planned ahead for us as well. Regarding your question on HBF, high bandwidth flash: of course, high bandwidth flash has some positive attributes, such as capacity, but it has the limitations that NAND has, such as write speed as well as power and retention.

Sanjay Mehrotra: Therefore, there will potentially be some workloads here where this may be a possible solution, but it is really early, and what is needed, of course, is engagement with the customers here in terms of really understanding the business value proposition of HBF. We, of course, continue to study this.

Harlan Sur: No, I appreciate that. How much of these multi-year SCA agreements is due to the inherent requirement for earlier and longer-term engagement with your GPU and XPU chip customers, just due to the customization of their next-generation HBM architectures, right? Especially around the base die: given the 12- to 18-month design cycle times for these custom base dies, the sharing of IP between you and your chip customers, and the optimizing of the base die to your process flow, it does imply that they have to engage with you much earlier in the design phase for their GPUs and XPUs. Is this another factor driving these multi-year SCAs?

Sanjay Mehrotra: Again, we are not really getting into the specifics, and not getting into the specific types of customers here either. But what I will definitely tell you is that, yes, these SCAs really bring us closer to the customer in terms of our partnership. And that partnership, of course, extends into bringing us closer in terms of R&D collaboration.

Harlan Sur: Mm-hmm.

Sanjay Mehrotra: And roadmap planning, both ours as well as the customers'. That's definitely one of the benefits of these SCAs as well.

Harlan Sur: Yeah. Thank you.

Operator: Your next question comes from the line of Tom O'Malley from Barclays. Your line is open. Please go ahead.

Tom O'Malley: Hey, guys. Thanks for taking the questions, and really nice results. So, GTC and OFC this week, I think there's a lot of conversation just around the LPU architecture and the increased use of SRAM. Could you talk about your view on the memory market longer term as you see more workloads relying on other types of memory outside of the HBM that you're already using? And then just as a broader question: with so much of the demand coming in these longer-term agreements being associated with data center, and just a small number of customers that can actually acquire and build these products, how are you benchmarking when you're adding capacity? Do you have an internal forecast for accelerators? Are you talking customer by customer and building bottoms-up forecasts?

Tom O'Malley: Just so that you know, in year three, year four, year five, that you're offering enough supply to the industry and not getting into a situation in which we're in an oversupply. Thank you very much.

Sanjay Mehrotra: First of all, on your question on the SRAM- and LPU-based architectures, I would just like to point out that, of course, these kinds of architectures make the AI infrastructure more efficient, and any architecture that makes AI infrastructure more efficient is good for all AI. Basically, they help the pie grow faster. Keep in mind that this LPU architecture works in conjunction with Vera Rubin, which utilizes a tremendous amount of HBM as well as DRAM. And the NVIDIA Groq LPX, this LPU-based architecture, a rack of it actually uses 12 terabytes of DRAM as well. So all of this is actually addressing the workloads in a more efficient manner.

Sanjay Mehrotra: This helps with, of course, the token economics, the token speed, the scale-up of AI across inference; it helps with the power. And every bit that helps overall is good for further scaling up and acceleration of AI demand as well. We look at this as complementing what already exists with respect to HBM and DRAM and, of course, continuing to grow the pie and accelerate the deployment of AI. Just keep in mind that today, in the enterprises, AI deployment as a percentage is still very, very low. Across all verticals, across all industries, across the economies, there is a lot of opportunity ahead. We are excited about all of these opportunities for our full portfolio of HBM, LP, DRAM, SOCAMM, and SSD in addressing these future market requirements.

Sanjay Mehrotra: Ultimately, all of this just points to how strategic an asset memory is for AI, because without more memory, without faster memory, AI just cannot scale up; AI just cannot deliver the capabilities, whether it's in training or in inference. Just look at last year to this year: the DRAM requirement in the advanced AI accelerators has now doubled. Really, these are some of the factors that are contributing to the supply shortage. And of course, these trends of greater deployment of AI apply to the edge devices, smartphones, and PCs as well. So we are excited about the opportunities ahead and absolutely continue to see strong opportunities for our full portfolio.

Sanjay Mehrotra: Ultimately, you know, all of this just points to how strategic of an asset memory is for AI, because without more memory, without faster memory, AI just cannot scale up. AI just cannot deliver the capabilities whether it's in training or in an inference. You know, just look at from last year to this year, the DRAM requirement in the advanced AI accelerators has now doubled. Really, I mean, these are some of the factors that are contributing to the supply shortage. Of course, these trends of greater deployment of AI apply on the edge devices, smartphone, and PCs as well. We are excited about the opportunities ahead and really, you know, absolutely continue to see strong opportunities for our full portfolio ahead.

Speaker #4: AI just cannot deliver the capabilities. Whether it's in training or in inference, just look at from last year to this year—the DRAM requirement in the advanced AI accelerators has now doubled.

Speaker #4: And so, really, and this is what I mean, these are some of the factors that are contributing to the supply shortage. And, of course, these trends of greater deployment of AI apply on the edge devices, smartphones, and PCs as well.

Speaker #4: So, we are excited about the opportunities ahead, and really, absolutely continue to see strong opportunities for our full portfolio ahead.

Operator: Your next question comes from the line of Vivek Arya from Bank of America Securities. Please go ahead.

Vivek Arya: Thanks for taking my questions. Sanjay, on HBM4, do you expect your share to be in this target 20-25% range right off the bat? Or do you think you will build towards it over time? Just conceptually, how do you see the puts and takes in terms of whether you can actually expand your HBM share in this upcoming Vera Rubin generation?

Sanjay Mehrotra: We have shared before that, in Q3 of last year, we reached our HBM target, which we had targeted for calendar 2026, to bring our HBM share in line with our DRAM share. We had also said that, going forward, we are going to manage HBM as part of the mix of our total portfolio and are not going to break out the share quarter by quarter here. What I can tell you is that we feel very good about our HBM product positioning, and feel very good about our overall HBM product. Of course, the market is there for both HBM4 as well as HBM3E in calendar 2026, and we will be supplying both of these products. We feel good about our overall position here and our ability to fully manage the mix of the business.

Vivek Arya: For my follow-up, Mark, I wanted to revisit this 81% gross margin guidance. I appreciate you're not giving a specific forward view, but in prior historical peaks, Micron's margins peaked in the low 60s, I believe. What is the difference between those prior situations and now? What have those historical precedents indicated to you about how the trajectory of gross margins can play out over the next several quarters? And how do customers start to react differently when they see these levels of gross margins on what is a very, very important input into their AI silicon? Thank you.

Sanjay Mehrotra: Before Mark answers that question, can I just point out that I accidentally said that we had targeted to reach our HBM share in 2026. I said it wrong by mistake. I meant that we had targeted to get to our HBM share in 2025, and we achieved our HBM share in line with our DRAM share in Q3 of 2025. Beyond that Q3 2025, we had said we are not going to be providing any further breakout of HBM share. I just wanted to correct that I accidentally said 2026 instead of 2025.

Vivek Arya: Thanks.


Mark Murphy: Vivek, I would say, keep in mind that the industry is supply constrained, and conditions will remain very tight beyond 2026. That certainly supports near-term and medium-term pricing. We've discussed how we're working with customers to allocate as best we can to their businesses, and working with them on adding capacity, on supply assurance, on new products, and so forth. On your question about reverting to some historical mean, I think maybe that's the thing that should be revisited: we have a situation where AI is a transformational secular driver.

Mark Murphy: As Sanjay mentioned, AI requires more and higher-performance memory. This memory helps drive the token cost down, it lowers the energy cost per token, and it increases the number of tokens. It increases the overall intelligence of AI, which drives harder problem sets and agent use, which drives more tokens and needs more memory. The margins are reflecting a recognition that memory is a lot more valuable and an efficient way to monetize AI, and that's from the data center to the edge. And then on top of that, we've been clear for a year or more that there are supply constraints on a number of fronts that will take time. There are low inventory levels. There are declining bits per wafer on node advances. The HBM trade ratio is increasing. And any new capacity really needs to be greenfield, which is a physical constraint that takes a lot of time. These are both durable factors: the value of memory and the structural challenge of bringing on supply. We're working on both of those issues. We're investing in capacity, and we're also increasing R&D to continue to advance the technology and improve the value of memory. We believe these will help with margins over time, and I think customers are recognizing that and entering into these agreements.

Vivek Arya: Thank you.


Operator: This concludes today's call. Thank you for attending. You may now disconnect.


Q2 2026 Micron Technology Inc Earnings Call (MU)

Wednesday, March 18th, 2026 at 8:30 PM