Micron Technology, Inc. Fiscal Q2 2026 Post-Earnings Analyst Call

Operator: Ladies and gentlemen, thank you for joining us, and welcome to Micron's post-earnings analyst call. After the speaker presentation, we will host a question-and-answer session. I will now hand the conference over to Satya Kumar, Investor Relations. Satya, please go ahead.

Satya Kumar: Thank you, and welcome to Micron Technology's fiscal Q2 2026 post-earnings analyst call. On the call with me today are Sumit Sadana, Micron's Chief Business Officer; Manish Bhatia, EVP of Global Operations; and Mark Murphy, our CFO. As a reminder, the matters we're discussing today include forward-looking statements regarding market demand and supply, market trends and drivers, and our expected results, guidance, and other matters. These forward-looking statements are subject to risks and uncertainties that may cause actual results to differ materially from statements made today. We refer to the documents we have filed with the SEC, including our most recent Form 10-K and upcoming 10-Q, for a discussion of risks that may affect our results. Although we believe that the expectations reflected in the forward-looking statements are reasonable, we cannot guarantee future results, levels of activity, performance, or achievements. We are under no duty to update any of the forward-looking statements to conform these statements to actual results. Operator, we can now open the call up for Q&A.

Operator: We will now begin the question-and-answer session. Please limit yourself to one question and one follow-up. If you would like to ask a question, please press star one to raise your hand and star six to unmute. Your first question comes from the line of Melissa Weathers from Deutsche Bank. Your line is open. Please go ahead.

Melissa Weathers: Hi there. Thanks for letting me ask a question. I wanted to touch on the NAND side of things. You guys are one of the only players to openly talk about greenfield capacity adds so far, and there are some industry participants saying that the growth in NAND bits going forward can be served by node upgrades alone, without the need for capacity adds. Can you talk about the decision to add greenfield capacity there, and what kind of trends you're seeing that give you confidence to add that capacity?

Manish Bhatia: Sure. Hi, Melissa, it's Manish. I can talk a little bit about the decision, and then maybe Sumit can talk about the NAND demand trends. We have said in the past that while DRAM required greenfield wafer capacity, meaning new wafer start capacity growth, to be able to meet the long-term demand trends that we saw, NAND was able to meet that with technology transitions. And I think we still see that NAND technology transitions are able to provide strong bit growth going forward. So our decision here, while yes, reflecting confidence in the market demand outlook, which Sumit will talk about, was also driven by our continued clean room space consumption for those technology transitions and for the next technology transitions that we'll be having in the future, as well as our decision to locate more of our NAND R&D in Singapore, where it's closer to our manufacturing. Those two drivers of additional clean room space, combined with our outlook on the market and our confidence, frankly, in our product portfolio, which I'm sure Sumit will talk about, were behind our decision to break ground for our new NAND fab.

Sumit Sadana: And just on that, it's not a greenfield site, right? You mentioned greenfield in your question, and this is an existing site where we are adding the additional clean room space that Manish described. In terms of the demand side of the picture, we see very robust demand for NAND, driven by growth in the data center. AI servers are using huge amounts of SSDs, both high-capacity SSDs and high-performance SSDs. This is working really well in our favor because our portfolio is doing exceptionally well. We are the first company in the world to have a PCIe Gen6 SSD in the market. This has been something that works well with NVIDIA systems, and we have seen tremendous demand for it that we are not able to come close to meeting. Even in high-capacity SSDs, we have seen tremendous growth and tremendous demand, and we have come out with a slew of new products with which we are really excited to continue to grow our share in the data center SSD space from the record level we reached at the end of calendar 2025. We have been growing to record share in the data center every year for the last four years, so we are very excited about that. Our supply is nowhere close to being able to meet the demand that we see for the foreseeable future, so this expansion that Manish mentioned is something we will put to good use to continue to grow our business, with, obviously, a focus on disciplined investing when it comes to our CapEx going forward.

Manish Bhatia: And I should just add that even though we're breaking ground now, that capacity, that clean room, isn't expected to provide a new boost to our capacity until the second half of 2028. We do think that, not just for us but across the industry, NAND clean room space over the medium term is still going to be a challenge, particularly as many in the industry have redirected some NAND clean room space towards DRAM.

Melissa Weathers: Perfect, thank you. And then back to the DRAM side, I wanted to ask pretty candidly about the pricing that we're all seeing. Next year, we have line of sight to a couple of DRAM projects coming online, including you guys bringing on a couple of those new fabs. How are you internally modeling the impact of that increase in supply going forward? Are you thinking pricing could come down next year, or flatten out, or decelerate? Just any color on how you guys are internally modeling the impact of all those new fabs coming online next year and the year after.

Mark Murphy: Maybe-

Manish Bhatia: Yeah, maybe I'll answer the supply side, or what the industry is doing. For the projects that we've talked about, which are the new Idaho shell, the DRAM-side Idaho shell, as well as our ramp in the Powerchip Tongluo facility that we've now acquired, we've said that the supply impact from those will be towards our fiscal 2028 in terms of being able to meet revenue shipments. I think that's largely the case for the industry in terms of new large clean room projects coming online. They will really be impacting later in 2027 and into 2028 before you see meaningful supply, which is one of the reasons we have reiterated that we see the tight supply conditions continuing beyond 2026.

Sumit Sadana: Yeah. We obviously don't provide any kind of price modeling or that type of discussion in a forward-looking way. But we have mentioned that the demand forecasts we get from our customers for 2026 and for 2027 continue to escalate, and despite the efforts we are making to increase the amount of supply we can bring online, to a modest extent in 2026 and some more in 2027, that is not really making much of a meaningful dent in the gap. There is a lot of demand growth driven by AI, and also across different segments of the market, not just in the data center. We expect tight conditions for the foreseeable future and certainly beyond 2026, and we're not giving any further guidance beyond that.

Mark Murphy: The last thing I would add, Melissa, is that, as Sumit mentioned, demand far exceeds supply, and, as Manish mentioned, these fabs we've talked about are coming on meaningfully in our fiscal 2028. We always have the ability to modulate tool installs, and that would be an option as well.

Melissa Weathers: Perfect. Thank you, all three.

Operator: Your next question comes from the line of Aaron Rakers from Wells Fargo. Aaron, your line is open. Please go ahead.

Aaron Rakers: Yeah. Can you hear me?

Sumit Sadana: Yes.

Aaron Rakers: Hello? Yeah, sorry, guys.

Mark Murphy: Yeah, we can hear you, Aaron.

Aaron Rakers: Perfect, thanks, Mark, and thanks for doing this call. Two questions, one and a quick follow-up. I guess, Mark, maybe just to help model-wise, I'm curious if you could frame the current quarter guidance between DRAM and NAND. What assumptions are you making in this tight environment around your ability to ship bits? Will that grow sequentially? And any kind of separation between the two buckets, between DRAM and NAND?

Mark Murphy: We don't break it out, but I can provide you a little bit of color. Maybe you heard the Q2 report out: both DRAM and NAND pricing was up strongly, NAND even more than DRAM. Both also grew volume sequentially, NAND less than DRAM. In Q3, we would expect price to again be the largest factor. But we've given you enough on calendar 2026 industry bit shipments, and those industry bit shipments are constrained by supply. Our supply we expect to grow in line with the industry. If you do all that, you can sort of determine that you can assume some modest volume growth in Q3 for both DRAM and NAND.

Aaron Rakers: Okay, that's perfect. Thanks, Mark. And then as a follow-up, maybe more architecturally: in this environment, there's a tremendous amount of demand that you guys are seeing in servers, and I think you've guided low-teens unit growth this year. But one of the architectural things that I'm starting to hear a little bit more about is the idea of CXL and memory pooling, and whether or not some of these hyperscalers could drive better efficiency of their memory architecture. I'm just curious, at a high level, what your thoughts are on that. Obviously, there are a lot of other factors, MRDIMMs and just content growth in general, but any thoughts on CXL having an effect on DRAM?

Speaker #2: Yeah, Aaron, CXL is likely to be experimented with at some of our customers. And different companies are in different phases of either assessing it or figuring out what they want to do with it, if anything.

Sumit Sadana: Yeah. Aaron, CXL is likely to be experimented with at some of our customers. There are different companies in different phases of either assessing it or figuring out what they want to do with it, if anything. We are certainly going to have our memory be available in those CXL configurations. Now, my feeling is that the demand is in such a robust place in terms of gap between supply and demand being so significant that any and every available opportunity that can be used and deployed at scale is going to be something that customers will likely try if it is architecturally feasible and can be made to work with their software and their systems.

Sumit Sadana: There is a lot of work to be done to bring these solutions to a large deployment scale, and it's not easy, and it's not something that can be pervasively used. There are lots of technical limitations as well. I'm sure those experimentations will continue, and some limited deployments will be done in order to test it out and see how it performs.

Aaron Rakers: Yeah. Thank you.

Sumit Sadana: Sure.

Operator: Your next question comes from the line of Jim Schneider from Goldman Sachs. Your line is open. Please go ahead.

Sumit Sadana: Jim, can you hear us? Do you have a question? Operator, want to go to the next?

Operator: It seems like we might be having some technical difficulties. Just a reminder to unmute by pressing star six. In the meantime, we will move on to Atif Malik from Citi. Your line is open. Please go ahead.

Atif Malik: Thank you guys for hosting the call. On the call, you mentioned that the non-HBM margins are higher than HBM. My question is really on allocation. How are you guys thinking about allocation? The reason I'm asking that question is because one of your Korean peers has been reported to be raising pricing by 100%, and I know you had mid-60s pricing for DRAM, and I understand there's always a delta across different memory makers. Are you leaving money on the table by not pivoting more towards non-HBM? Just your understanding on HBM versus non-HBM allocation.

Sumit Sadana: Yeah, I mean, we definitely go through different quarters, and these relationships between different products can change over time. There is no doubt that HBM pricing was set for a large portion of the shipments in the 2026 calendar year late in calendar 2025, as is generally the case with HBM, where the pricing is determined sometime before the start of the calendar year. That kind of a model is a good model to have. It provides good stability and visibility into the business. The pricing that we negotiate has very robust ROI and profitability, and we feel good about that business.

Sumit Sadana: Of course, the upsides that we have been able to create from our operations team and supply chain teams on the HBM, we have been able to sell those for even more robust levels of pricing as those upside volumes have materialized. With that said, we don't view a lot of these HBM versus non-HBM allocations to be that tactically focused. These are strategic things that we have to do in terms of providing our customers with matched sets of products so they can build AI systems. You cannot ship DDR5 into an AI system that doesn't have enough HBM in it, and vice versa.

Sumit Sadana: There is a natural level of balance that is needed for the market to appropriately have matched sets of products to be able to ship these at scale. With that in mind, certainly the HBM market margins are good, and the non-HBM margins, of course, have become even higher. This is not just a data center issue. The non-HBM margins outside the data center, meaning the DRAM margins outside the data center, are also exceptionally robust. We don't tend to just jerk around the allocations to different customers and different segments based on where the pricing is.

Sumit Sadana: We have a goal of working with our customers to meet their business needs, and we do that with an intent of helping them meet their business goals as well.

Atif Malik: Thanks, Sumit. As my follow-up, we just talked about how Groq chips could be like 25% of the ultra-fast inferencing market. My understanding is currently these LPU chips have embedded SRAM at one of the Korean makers. In the future, if these chips move to a Taiwanese foundry, will the SRAM be embedded on that chip, or can the SRAM be standalone and somehow bonded to the processor?

Sumit Sadana: There are definitely on-chip SRAM approaches that are currently in use, and I have seen some talk about bonded SRAM, et cetera. I would not want to comment on what directions our customers would go in terms of how they work with SRAM over time. My main focus, our main focus here, is that we look at all of these systems as being very well balanced in the way they are evolving. These SRAM-based systems complement, in small uses, the larger systems that are being utilized and deployed at scale, like the Vera Rubin, for example, and other similar systems that are based on ASIC accelerators.

Sumit Sadana: We see these as continuing to move the ball forward in terms of making these systems more balanced and more efficient. The DRAM usage in these systems continues to grow over time and has gotten to levels which, of course, we don't have adequate supply for. Over time, we continue to focus on the growth of these average capacities, the growth of the high-performance tiers, and even the growth of LP DRAM in these AI systems, all of which are really big positives for us on the DRAM side of the business.

Atif Malik: Thank you.

Operator: Your next question comes from the line of Vijay Rakesh from Mizuho. Your line is open. Please go ahead.

Vijay Rakesh: Yeah, hi. Just a quick question on the NAND side. I know you mentioned SSD NAND seeing a pretty strong uptick. Just wondering how to size it with the KV cache demand. What is the mix of SSD, AI data center SSD, I should say, AI data center NAND, if you look out to calendar 2026 versus last year? If there's a way to kind of size that pickup in demand from KV cache. And I have a follow-up.

Sumit Sadana: You know, when we made our prior forecast of the extent of growth we are likely to see in the data center space, it did include a view that KV cache would be a meaningful driver. As that has become more in focus in terms of how our customers have been wanting to deploy their SSDs, definitely it has increased our view of the extent of demand coming from KV cache-related applications. That has continued to cause our view of the total market opportunity to continue to grow. I'll just remind us of something we said the last time also, which is that we have also seen a significant uptick in demand for data center SSDs coming from shortages in HDDs.

Sumit Sadana: We continue to see those shortages for the foreseeable future. That has been another driver. When we put all of these together, the NAND market is significantly undersupplied relative to demand in the data center, and that demand continues to escalate, in part driven by KV cache, but also driven by just the insatiable appetite that these AI servers have for fast storage capability as these systems get deployed more and more. The outlook is really strong, and as we have mentioned earlier, our portfolio is incredibly well positioned to continue to gain share in that space, including for KV cache applications, by the way.

Vijay Rakesh: Right. Thanks. Just to follow up on that, as you look at the CapEx, obviously the last couple of years, CapEx on NAND has been lighter. But as you look at the mix of CapEx for 2026 and 2027, with the CapEx numbers that went up, how would you look at the mix of DRAM to NAND CapEx, given both DRAM demand is up, but you're also seeing a spike in NAND? Thanks.

Mark Murphy: Yeah, maybe I'll start. So Vijay, the CapEx is still going to be dominated by DRAM and HBM additions. This FY26, we increased our outlook on CapEx to over $25 billion, which was up from the $20 billion we gave on the last earnings call. The updated investment reflects the investment in the Tongluo fab, which we communicated at a February conference, and increases in US expansion. Again, it's DRAM and HBM driven, including the increase. Today, we also provided more detail on construction, which we've said in the past was becoming a more material part of the build-out, obviously because we need greenfield capacity. On the December call, we said to expect that FY25 to FY26 construction would double.

Mark Murphy: You know, we expect FY26 construction to be mid- to high-single-digit billions net. When I talk CapEx, we're talking net CapEx.

Vijay Rakesh: Right.

Mark Murphy: Now as we look out to 2027, we did say that we project approximately $10 billion of incremental construction cost, and also for equipment spend to increase. Now, in the 2027 spend, NAND will begin to increase, but it will still be a much smaller portion of the spend compared to DRAM.

Vijay Rakesh: Got it. Thanks, Mark. Thanks, Sumit.

Operator: Your next question comes from the line of C.J. Muse from BNP Paribas. Please go ahead.

Sam Feldman: Hi, can you hear me?

Mark Murphy: Yes.

Sam Feldman: Hi. Thanks for taking my question. This is Sam Feldman on for C.J. Muse. We've seen continuous HBM content uplifts with each new generation of XPUs. With the growing trade ratio of HBM keeping the DRAM market tight and increasing AI memory requirements in the form of server DRAM, LPDDR, and SRAM, do you think HBM will continue to see such large content uplifts with each new generation of processors, or do you expect the content per XPU to eventually plateau?

Sumit Sadana: Well, we are not going to make long-term projections of where these average capacities will go, because those are things that our customers are going to decide as their architectures evolve. What I will say is that if you look at the direction in which AI is trending and the types of things that are creating value in the AI domain for customers, they go towards more reasoning capability, longer context windows, and the ability to do more with agents and multi-agent orchestration. All of these things are really requiring more DRAM capacity and more DRAM bandwidth.

Sumit Sadana: When it comes to delivering on that kind of bandwidth and being able to really optimize the system for all of the different stages of prefill and decode, and different aspects of training as well as inference, you really come to the conclusion that these accelerators, whether it's GPUs or ASICs, do require increasing amounts of HBM and increasing amounts of DDR5 or LPDDR5 capacity with time. That's the trend that we have seen thus far, and you're also seeing it in the announced architectures that have been publicized.

Sumit Sadana: We certainly feel that the trend has been clear, and not only has the trend been clear, but the way our customers' customers, meaning the end customers, derive value out of AI systems and AI applications is very much connected and consistent with the trend of needing more DRAM capacity and bandwidth, which HBM delivers so effectively at very good, efficient power consumption levels.

Sumit Sadana: We feel, you know, certainly that the trend has been clear, and not only has the trend been clear, but that the way our customers' customers, meaning the end customers, derive value out of the AI system and AI applications, is very much connected and consistent with, you know, the trend of needing more DRAM capacity and bandwidth which HBM delivers so effectively at, you know, very good, efficient power consumption levels.

Speaker #2: And not only has the trend been clear, but that the way our customers' customers - meaning the end customers - derive value out of the AI system and AI applications is very much connected and consistent with the trend of needing more DRAM capacity and bandwidth, which HBM delivers.

Speaker #2: So, effectively, at very good, efficient power consumption levels—and so that's the reason we have said in the past that memory is becoming a strategic asset in the AI era. One of the important drivers of that is precisely this trend: you really can't have a high-performing AI architecture or hardware infrastructure without all of those capabilities that DRAM and HBM bring to the table.

Sumit Sadana: That's the reason we have said in the past that, you know, it, you know, memory is becoming a strategic asset in the AI era, is precisely one of the important drivers of that is precisely this trend that you really can't have a high performing AI architecture or hardware infrastructure without all of those capabilities that DRAM and HBM bring to the table. That's something we feel pretty good about in terms of where the market is headed.

Sumit Sadana: That's the reason we have said in the past that, you know, it, you know, memory is becoming a strategic asset in the AI era, is precisely one of the important drivers of that is precisely this trend that you really can't have a high performing AI architecture or hardware infrastructure without all of those capabilities that DRAM and HBM bring to the table. That's something we feel pretty good about in terms of where the market is headed.

Speaker #2: So, that's something we feel pretty good about in terms of where the market is headed.

Sam Feldman: Great. Thank you.

Operator: Your next question comes from the line of Chris Caso from Wolfe Research. Your line is open. Please go ahead.

Chris Caso: Yes, thank you. I'm wondering if you care to update your view of long-term bit growth for both DRAM and NAND. I know you mentioned on the call low 20s for DRAM and 20% for NAND, but those are obviously supply-constrained numbers. Maybe you could speak to the extent to which you think long-term bit growth is increasing as a result of all the things we've been talking about.

Sumit Sadana: Yeah, we haven't provided a new long-term bit growth number. You have seen that in the past we have spoken about mid- to high-teens ranges for DRAM, and yet last year and this year our forecasts have been more robust than those levels. We continue to feel we are in an extended period of robust industry demand that, due to HBM being part of these numbers with its trade ratio, is stressing the entire industry's, and certainly our, capabilities to meet those demand numbers. You're right: at least in the foreseeable future, these are all supply-limited numbers rather than the true level of demand. That's the environment we are in. We do expect that next year, again, we will have a fairly robust level of growth in calendar 2027, but we are not providing a long-term number beyond that commentary.

Chris Caso: Got it. Thank you. As a follow-up, obviously there has been a lot of discussion about cleanroom constraints. As you folks have pointed out, you are starting greenfield in Singapore, and it is not available until the end of 2028. I guess you probably cannot speak for the industry, but at least for Micron, at what point do you think you can get caught up with enough cleanroom capacity to give you some headroom for what customers need? And then obviously it is going to take time to move the tools in. It goes to the sustainability of what is going on right now and the extent to which the cleanroom constraints are going to drive that sustainability.

Manish Bhatia: Well, Chris, I think the first part of the question is really around what Sumit had described in terms of the long-term demand, and we are continuing to evaluate all the different demand drivers and signals, including all the announcements from GTC this week. In terms of the availability of cleanroom space for us and the industry, we certainly see this relative constraint for all the major DRAM players persisting through this year and into next year, with meaningful improvement to cleanroom space availability only out into 2028. Of course, the projects we have announced go beyond that into 2028, 2029, and 2030, and as we talk about those projects, the timeline for us to get to that point, as you mentioned, will be a function of demand. We will be nimble in adjusting equipment orders and equipment installations to stay aligned with whatever the demand is as it plays out over that time horizon.

Sumit Sadana: I'll just add to what Manish said. As we look at the demand side of the picture, we have been engaged with several of our customers on these multi-year, five-year SCA agreements. In the context of that, of course, we are assessing their longer-term demand, their five-year demand, for example, against our own supply capabilities. Within that time frame, we are also seeing the emergence and growth of some really exciting new demand vectors, including things like robotics, which we expect to become a very major demand driver. So when we put all of those things in the equation, we don't yet have a high-confidence view as to when supply will be able to catch up with demand, because the escalation of demand from these various vectors is just phenomenal.

Chris Caso: Thank you.

Operator: Your next question comes from the line of Srini Pajjuri from Royal Bank of Canada. Please go ahead.

Srini Pajjuri: Thank you. Mark, I want to ask the previous question slightly differently, on CapEx. You talked about construction CapEx being high single digits in 2025, growing by $10 billion. That suggests roughly half and half next year between construction and equipment. For the projects you have already announced, do you think 2027 is the peak year for construction CapEx? And if so, what is a normalized mix between construction and equipment through the cycles that you think about?

Mark Murphy: Yeah, Srini, we're not going to give any more breakdown than we've provided, and it will be lumpy. Just to be clear, the $10 billion was in reference to 2026 to 2027; you may have said that, but I just want to make sure it's clear. We are, of course, going to modulate spend as we see demand and to maintain stable bit share. As Sumit mentioned, it's not clear when supply catches up with demand. We are investing as we can, including this recent acquisition of the Tongluo fab to put capacity in, but it takes time and a lot of effort, and we can't get meaningful bits until our fiscal 2028. Beyond 2027, of course, we're going to be very disciplined, and CapEx could go down after 2027, but we're not making that call at this point. We're just investing through this next several-year period so that we can get the supply we need for our customers.

Srini Pajjuri: Okay, maybe a follow-up. For the next few quarters, there were questions on gross margins. Obviously, you don't want to comment on pricing, but could you give us an idea of how to think about any puts and takes on the cost side, and also any mix dynamics we should be aware of? Also, given the higher CapEx, how should we think about depreciation over the next few quarters, and any start-up costs that you anticipate? Thank you.

Mark Murphy: Yeah, good questions. Maybe I'll tackle costs first. We continue to execute really well on cost reductions. Manish can comment some more, but as we've talked about, in FY 2026 we have the benefit of the node transitions, 1-gamma in particular on DRAM and G9 on NAND, that are driving our bit growth. We get a lot of efficiency and cost downs there, with 1-gamma, for example, replacing 1-alpha capacity. That's good, and the spend control has been outstanding. As busy as we are, and as good as the numbers are, the discipline in the business is very good.

Mark Murphy: That includes managing, by the way, all of this geopolitical activity; the team has really been on top of that, and there's no impact to our operations, so that's good. To your question on start-up costs: there will be some as we bring on ID1, and then some with Tongluo. I made some comments, maybe a year and a half ago, about start-up costs, and those comments are still applicable. At the time, I said I think between 1 and 2 points of cost, and revenue was much lower at the time. If we dollarize that, it's probably $100 million to $200 million per quarter starting in the next quarter or so, and then continuing through 2027; it would come down off of that. Again, at these revenue levels, it's a much smaller margin impact, 50 basis points or less. What was the third question?

Srini Pajjuri: Sorry, I was going to ask about depreciation costs.

Mark Murphy: Yeah, depreciation just depends on when production wafers are out and on the useful life of those assets when they're put in service. And keep in mind that because this is greenfield, it carries a very long depreciation life. That's important to keep in mind.

Manish Bhatia: Just a couple of things I'll add; Mark asked me to add some comments on cost, and I think he covered a lot of them. 1-gamma is going really well, Gen 9 is going well, and we had positive comments on both. The only other one I'll add is HBM. Our HBM3E 12-high has continued to execute well as we've gotten to high volume over the last couple of quarters, and we actually see HBM4, even though we're in the early stages of the ramp, having an even faster yield ramp than HBM3E 12-high. So we feel very good about our HBM cost structure, for both HBM3E and HBM4, and about it providing improvements for us over previous periods.

The other comment I'll add: I think Mark referred to managing costs relative to geopolitical issues. I think he meant the Middle East and some of the media reports regarding various input disruptions. We don't see any supply risks, and there is very minimal impact to cost at this time.

Mark Murphy: Yeah.

Mark Murphy: Maybe, Srini, just one housekeeping item since we're covering those issues now. On OpEx, we indicated that our OpEx would be ticking up due to R&D costs. In Q4, we would expect OpEx to be closer to $1.6 billion; part of that is the extra week, but part of it is this increase in R&D spend, which we think is completely appropriate to drive the technology, the increased value of memory, Micron's position, and customer demand on specific projects. In 2027, we would expect that OpEx number to be over $1.6 billion, probably more of a $1.7 billion run-rate number, and to stabilize from there.

Srini Pajjuri: Thanks, Mark.

Operator: This concludes today's call. Thank you for attending. You may now disconnect.

Q2 2026 Micron Technology Inc Post-Earnings Call

MU

Wednesday, March 18th, 2026 at 10:00 PM