Broadcom Inc. Q1 Fiscal Year 2026 Earnings Call
Operator: Welcome to Broadcom Inc.'s Q1 fiscal year 2026 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Ji Yoo: Thank you, operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; Charlie Kawwas, President, Semiconductor Solutions Group; and Ram Velaga, President, Infrastructure Software Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2026. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2026 results, guidance for our second quarter of fiscal year 2026, as well as commentary regarding the business environment.
We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.
Hock Tan: Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q1 2026, total revenue reached a record $19.3 billion, up 29% year-on-year and exceeding our guidance on the back of better-than-expected growth in AI semiconductors. This top-line strength translated into exceptional profitability, with Q1 consolidated adjusted EBITDA hitting a record $13.1 billion, which is 68% of revenue. These figures demonstrate that our scale continues to drive significant operating leverage. Now, we expect this momentum to accelerate as our custom AI XPUs hit their next phase of deployment among our five customers. Looking ahead to next quarter, Q2 2026, we're guiding for consolidated revenue of approximately $22 billion, which represents 47% year-on-year growth.
Let me now give you more color on our semiconductor business. In Q1, revenue was a record $12.5 billion as year-on-year growth accelerated to 52%. This robust growth was driven by AI semiconductor revenue, which grew 106% year-on-year to $8.4 billion, way above our outlook. In Q2, this momentum accelerates, and we expect semiconductor revenue to be $14.8 billion, up 76% year-on-year. Driving this is AI revenue growth, which will accelerate very sharply, up 140% year-on-year to $10.7 billion. Now, our custom accelerator business grew 140% year-on-year in Q1, and this momentum continues in Q2. The ramp of custom AI accelerators across all our five customers is progressing very well.
For Google, we continue our trajectory of growth in 2026 with strong demand for the seventh-generation TPU. In 2027 and beyond, we expect to see even stronger demand from the next generations of TPU. For Anthropic, we are off to a very good start in 2026 for 1 gigawatt of TPU compute, and for 2027 this demand is expected to surge in excess of 3 gigawatts of compute. Our XPU franchise, I should add, extends beyond TPUs. Now, contrary to recent analyst reports, Meta's custom accelerator MTIA roadmap is alive and well. We're shipping now, and in fact, for the next-generation XPUs, we will scale to multiple gigawatts in 2027 and beyond. Rounding off with customers four and five, we see strong shipments this year, which we expect to more than double in 2027. We also now have a sixth customer.
We expect OpenAI to deploy in volume their first-generation XPU in 2027 at over 1 gigawatt of compute capacity. Let me take a second to emphasize that our collaboration with these six customers to develop AI XPUs is deep, strategic, and multi-year. We bring to these partnerships, each of them, unmatched technology in SerDes, silicon design, process technology, advanced packaging, and networking to enable each of these customers to achieve optimal performance for their differentiated LLM workloads. We have the track record to deliver these XPUs in high volumes, at an accelerated time to market, with very high yields. And beyond technology, we provide multi-year supply agreements as our customers scale up deployment of their compute infrastructure. Our ability to assure supply in these times of constrained capacity in leading-edge wafers, high-bandwidth memory, and substrates ensures the durability of our partnerships.
And we have fully secured capacity of these components for 2026 through 2028. Now, consistent with the strong outlook for our XPUs, demand for AI networking is accelerating. Q1 AI networking revenue grew 60% year-on-year and represented one-third of total AI revenue. In Q2, we project AI networking to accelerate a lot more and grow to 40% of total AI revenue. We are clearly gaining share in networking. Let me explain. In scale-out, our first-to-market Tomahawk 6 switch at 100 terabits per second, as well as our 200G SerDes, are capturing demand from hyperscalers this year, whether they use XPUs or GPUs. This lead will extend in 2027 with our next-generation Tomahawk 7, featuring double the performance.
Meanwhile, in scale-up, as cluster sizes at our customers expand, we are uniquely positioned to enable these customers to stay on direct-attach copper through our 200G SerDes. As we next step up to 400G SerDes in 2028, our XPU customers will likely continue to stay on direct-attach copper. This is a huge advantage, as the alternative of going to optical is more expensive and requires significantly more power. Reflecting the foregoing factors, our visibility into 2027 has dramatically improved. Today, in fact, we have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027. We have also secured the supply chain required to achieve this. Now, turning to non-AI semiconductors: Q1 revenue of $4.1 billion was flat year-on-year, in line with guidance.
Enterprise networking, broadband, and server storage revenues were up year-on-year, offset by a seasonal decline in wireless. In Q2, we forecast non-AI semiconductor revenue to be approximately $4.1 billion, up 4% from a year ago. Let me now talk about our infrastructure software segment. Q1 infrastructure software revenue of $6.8 billion was in line with our guidance, up 1% year-on-year. For Q2, we forecast infrastructure software revenue to be approximately $7.2 billion, up 9% year-on-year. VMware revenue grew 13% year-on-year. Bookings continue to be strong, and total contract value booked in Q1 exceeded $9.2 billion, sustaining annual recurring revenue (ARR) growth of 19% year-on-year. Let me reinforce that this growth in our infrastructure software business reflects our focus and investments in foundational infrastructure.
Our infrastructure software is not disrupted by AI. In fact, VMware Cloud Foundation, VCF, is the essential software layer in data centers, integrating CPUs, GPUs, storage, and networking into a common high-performance private cloud environment. As the permanent abstraction layer between AI software and the physical silicon, VCF cannot be disintermediated or replaced. It allows enterprises, in fact, to scale complex generative AI workloads effectively, with agility that hardware alone cannot provide. We are confident that the growth in generative and agentic AI will create the need for more VMware, not less. So, in summary, let me put it all together. For Q2 2026, we expect consolidated revenue growth to accelerate to 47% year-on-year and reach approximately $22 billion, and we expect adjusted EBITDA to be approximately 68% of revenue. With that, let me turn the call over to Kirsten.
Kirsten Spears: Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. Consolidated revenue was a record $19.3 billion for the quarter, up 29% from a year ago. Gross margin was 77% of revenue in the quarter. Consolidated operating expenses were $2 billion, of which $1.5 billion was R&D. Q1 operating income was a record $12.8 billion, up 31% from a year ago. Operating margin increased 50 basis points year-over-year to 66.4% on favorable operating leverage. Adjusted EBITDA of $13.1 billion, or 68% of revenue, was above our guidance of 67%. Now let's go into detail on our two segments. Starting with semiconductors: revenue for our Semiconductor Solutions segment was a record $12.5 billion, with growth accelerating to 52% year-on-year, driven by AI.
Semiconductor revenue represented 65% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was up 30 basis points year-on-year to approximately 68%. Operating expenses of $1.1 billion reflected increased investment in R&D for leading-edge AI semiconductors and represented 8% of revenue. Semiconductor operating margin of 60% was up 260 basis points year-on-year, reflecting strong operating leverage. Now moving on to infrastructure software: revenue for infrastructure software of $6.8 billion was up 1% year-on-year and represented 35% of revenue. Gross margin for infrastructure software was 93% in the quarter, and operating expenses were $979 million in the quarter. Q1 software operating margin was up 190 basis points year-on-year to 78%. Moving on to cash flow.
Free cash flow in the quarter was $8 billion and represented 41% of revenue. We spent $250 million on capital expenditures. We ended the first quarter with inventory of $3 billion as we continue to secure components to support strong AI demand. Our days of inventory on hand were 68 days in Q1, compared to 58 days in Q4, in anticipation of accelerating AI semiconductor growth. Turning to capital allocation: in Q1, we paid stockholders $3.1 billion of cash dividends based on a quarterly common stock cash dividend of $0.65 per share. During the quarter, we repurchased $7.8 billion, or approximately 23 million shares, of common stock. In total, in Q1 we returned $10.9 billion to shareholders through dividends and share repurchases.
In Q2, we expect the non-GAAP diluted share count to be approximately 4.94 billion shares, excluding the impact of potential share repurchases. We ended Q1 with $14.2 billion of cash. Today, we are announcing that our board of directors has authorized an additional $10 billion for our share repurchase program, effective through the end of calendar year 2026. Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $22 billion, up 47% year-over-year. We forecast semiconductor revenue of approximately $14.8 billion, up 76% year-over-year. Within this, we expect Q2 AI semiconductor revenue of $10.7 billion, up approximately 140% year-over-year. We expect infrastructure software revenue of approximately $7.2 billion, up 9% year-over-year.
For your modeling purposes, we expect consolidated gross margin to be flat sequentially at 77%. We expect Q2 adjusted EBITDA to be approximately 68%. We expect the non-GAAP tax rate for Q2 and fiscal year 2026 to be approximately 16.5%, due to the impact of the global minimum tax and the geographic mix of income compared to that of fiscal year 2025. That concludes my prepared remarks. Operator, please open up the call for questions.
Operator: Thank you. To ask a question, you will need to press star one one on your telephone. To withdraw your question, press star one one again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. Our first question will come from the line of Blayne Curtis with Jefferies. Your line is open.
Blayne Curtis: Hey, good afternoon, and thanks for taking my question. It's just a clarification, then the question. Just a clarification, Hock, on the greater than $100 billion: I think you said AI chips. I just want to make sure you're clarifying the difference between the ASICs and networking, and I didn't know how rack revenue fits in there. And then the question: I think the biggest overhang on the group here is that you grew roughly double in the quarter in AI, and I think that's about what cloud capex is growing this year. Given the outlook that you have for 2027, you should be a share gainer.
I'm just curious about your perspective on the pessimism among investors that the hyperscalers need to get a return on investment this year or next year, or if not, the year after, and how you factor that into your outlook.
Hock Tan: Well, what we have seen over the last few months, and continue to see even more, is... and it's really not so much about hyperscalers. Our customer base, Blayne, is limited to those few players out there. Some of them are hyperscalers, some of them are not, but they all have one thing in common, which is to create LLMs.
They productize them and generate platforms, be it for enterprise consumption in code assistance or agentic AI, or be it for consumer subscription that we know about. Whatever it is, it is that small set of prospects, many of whom are customers now, who are creating this, whether generative AI or agentic AI, but creating a platform. That's our customer. And with respect to each of those, we are seeing stronger and stronger demand for compute capacity: for training, which is something they need constantly, but also, what is very, very interesting and surprising to us, very much for inference, in order to productize the latest LLMs they create and monetize them.
Hock Tan: That inference is driving a substantial amount of compute capacity, which is great for us, because these players, these 5, 6 customers of ours, are on the path to creating their own custom accelerators and, beyond that, their own design architecture of networking clusters of those custom accelerators. I think we're going to see demand keep picking up, as we've heard from announcements in the past 6 months. Now, to clarify your first part, Blayne: when I say we forecast, we have a line of sight that our revenue in 2027 will be significantly in excess of $100 billion, I'm focusing on the fact that these are pretty much all based on chips. Whether they are XPUs, switch chips, or DSPs, this is silicon content we're talking about.
Blayne Curtis: Thanks so much.
Operator: One moment for our next question. That will come from the line of Harlan Sur with J.P. Morgan. Your line is open.
Harlan Sur: Yeah, good afternoon. Thank you for taking my question, and congratulations to the team on the strong results. Hock, you know, there's been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU, TPU design efforts, right? We call it COT, or customer-owned tooling. This is not a new dynamic with ASICs, right? I think the Broadcom team has been through this COT competitive dynamic before over the 30 years, right, that you've been a leader in the ASIC industry, and very few of these COT initiatives have ever been successful. Now, on AI, some of these COT initiatives are coming to the market now, but it looks like they're at least 2x less performant than your current-generation solutions, 2x less complex in terms of chip design complexity, packaging complexity, IP. So maybe just a quick two-part question.
Harlan Sur: Hock, one for you: given your visibility into next year, do you see these COT science projects taking any meaningful TPU, XPU share from Broadcom? And then maybe the second quick question, for either you or Charlie, is: given that Broadcom's TPU, XPU programs, from a performance, complexity, and IP perspective, are 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?
Hock Tan: Well, that's a great question. You know, it fits into why I purposely took the time, in my opening remarks, to say that when any, I guess, hyperscaler or LLM developer tries to become entirely self-sufficient in creating what you call a customer-owned tooling, or COT, model, they face tremendous challenges. One is technology: the technology it takes to create the silicon chips, particularly the XPUs they need to do the computing, the computing needed to optimize and run the training and inference workloads of the LLMs they produce. That technology we talked about comes from different dimensions. You need the best silicon design team around. You need cutting-edge, really cutting-edge, SerDes, and very advanced packaging.
Hock Tan: Just as much, you need to understand how to network clusters of them together. We've been doing this for 20 years, more than 20 years, in silicon. And in this particular space today, in generative AI, if you're trying, as an LLM player, to do your own chip, you cannot afford to have a chip that is just good enough. You need the best chips around, because you're competing against other LLM players and, most of all, you're also competing against Nvidia, who is by no means letting down their guard. They are producing better and better chips with every passing generation.
Hock Tan: So you, as an LLM player trying to establish your platform in the world, have to create chips that are better than, if not competitive with, not just NVIDIA's, but those of all the other platform players you're competing against. And for that, you really need, in our belief, and we see that firsthand, a partner in silicon with the best technology, IP, and execution around. Very modestly, I would say we are by far way out there, and we will not see competition in COT for many years to come. It will come eventually, but we're still a long way off, because the race, which we see, continues.
Hock Tan: One thing I'd add in there that is particularly unique to us: when you create those silicon chips, you really have to get them up and running in high-volume production very quickly, time to market. We are very, very experienced in doing that. Anybody can design a chip in a lab that works. But can you produce 100,000 of those chips quickly, at yields that you can afford? We don't see too many players in the world that can do that. Charlie?
Charlie Kawwas: I think you covered it very well, Hock.
Harlan Sur: Thank you, Hock. Thank you, Charlie.
Operator: One moment for our next question. That will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
Ross Seymore: Hi, thanks for letting me ask a question. Hock, in your script you leaned a little more into the networking differentiation than you have in the past, so I guess a short-term and a longer-term question. The short-term one is: what's driving that up to 40% of the AI revenues? The longer-term question is: is that percentage mix in that $100 billion-plus changing now? What sort of leadership do you expect to maintain in that business, whether it's scale-out or scale-up, and is your leadership position there helping on your XPU side, as you can optimize across both the compute and the networking sides?
Hock Tan: Well, let's address the first part of that fairly complex question first, Ross. Yes, in networking, especially with the new generation of GPUs and XPUs that are coming out there, we're running at 200-gigabit SerDes in terms of bandwidth. And with the Tomahawk 6, which we introduced over 6 months ago, in fact closer to 9 months ago, we're the only one out there. Our customers and the hyperscalers want to run with the best networking and with the most bandwidth out there for their clusters, so we are seeing huge demand for this, the only 100-terabit-per-second switch out there.
Hock Tan: That's driving a lot of the demand. Couple that with running up bandwidth on scale-out optical transceivers at 1.6 terabit, where we are, again, the only player out there doing DSP at 1.6 terabit, and that combination is driving, I would say, the growth of our networking components even faster than our XPUs are growing, which is already pretty remarkable. That's what you're seeing.
Hock Tan: At some point, I would think these things will settle down, though we're not slowing down the pace, because, as I said, next year, in 2027, we'll launch the next-generation Tomahawk 7, at 2x the performance, and we'll probably be by far the first out there, and that will continue to sustain the momentum. But at the end of the day, to answer your question: yeah, I expect that, as a composition of our total AI revenue in any quarter, AI networking components will range between probably 33% to 40%.
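[Editor's note: as a quick sanity check on the bandwidth figures in this exchange, the arithmetic works out as sketched below. It assumes Tomahawk 6's publicly stated 102.4 Tb/s aggregate (the "100 terabit per second switch", rounded); that exact figure is an assumption, not from this call.]

```python
# Sanity check on the switch bandwidth figures discussed above.
# Assumption: Tomahawk 6 aggregate bandwidth is 102.4 Tb/s (public spec,
# rounded to "100 terabit per second" on the call). Illustrative only.

aggregate_tbps = 102.4   # assumed Tomahawk 6 aggregate switching bandwidth
serdes_gbps = 200        # 200-gigabit SerDes lanes mentioned on the call
optics_tbps = 1.6        # 1.6-terabit optical transceiver / DSP rate

lanes = aggregate_tbps * 1000 / serdes_gbps   # -> 512 SerDes lanes
ports = aggregate_tbps / optics_tbps          # -> 64 x 1.6T optical ports

print(f"{lanes:.0f} x {serdes_gbps}G SerDes lanes, {ports:.0f} x 1.6T ports")
```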
Ross Seymore: Great. Thanks, Hock.
Hock Tan: Thanks.
Operator: One moment for our next question. That will come from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.
C.J. Muse: Yeah, good afternoon. Thank you for taking the question. I'm curious, you know, how are you thinking about the move to disaggregate prefill and decode from the GPU ecosystem and the impact on custom silicon demand? Are you seeing any potential changes in sort of the relative mix between GPUs and custom silicon?
Hock Tan: I'm not sure I fully understand your question, C.J. Could you clarify what you mean by disaggregate?
C.J. Muse: Sure. You know, pushing off workloads to CPX for prefill and working off of Groq for decode, and, you know, having that disaggregated kind of world, does that put, you know, any pressure in terms of the demand for custom versus going with, you know, a full GPU stack?
Hock Tan: Okay, I get what you mean. That word, disaggregation, kind of threw me off. In a way, what you're really saying is: how is the architecture of AI accelerators, be they GPUs or XPUs, evolving as workloads start to evolve? That's what we are seeing, very much in particular. The one-size-fits-all of a general-purpose GPU gets you only that far. It can still keep going, because you can still run different workloads. Take mixture of experts: you want to run mixture of experts with sparse cores to be very effective, you hear the term, but a GPU is designed for dense matrix multiplication.
Hock Tan: You do it with software kernels, but that's not as effective as if you hard-code it in silicon and make those XPUs purposely designed to be much more performant for, say, mixture-of-expert workloads. The same applies for inference. What that drives down to is that you start to see designs of XPUs become much more customized for the particular workloads of particular LLM customers of ours, and the design starts to depart from the traditional standard GPU design. Which is why, as we have always indicated before, XPUs will eventually be more the choice, simply because they allow flexibility in making designs that work with particular workloads, one for training, even, and one for inference. And, as you say, one perhaps better at prefill and one better at post-training, reinforcement learning, or test-time scaling.
Hock Tan: You can tweak your TPUs, sorry, your XPUs, Freudian slip, to the particular kind of LLM workload that you want. And we're seeing that. We're seeing that roadmap in all our 5 customers.
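[Editor's note: for readers unfamiliar with the sparse-versus-dense distinction Hock draws here, the toy sketch below illustrates it. A dense layer touches all weights for every token, while a mixture-of-experts layer routes each token to a single expert, leaving the other experts' weights untouched. The shapes and the random router are illustrative assumptions, not anything Broadcom- or customer-specific.]

```python
import numpy as np

rng = np.random.default_rng(0)
tokens, d_model, n_experts = 8, 16, 4

x = rng.standard_normal((tokens, d_model))
dense_w = rng.standard_normal((d_model, d_model))
experts = rng.standard_normal((n_experts, d_model, d_model))

# Dense: every token is multiplied against the full weight matrix.
dense_out = x @ dense_w

# Sparse mixture of experts: a router picks one expert per token, so each
# token touches only 1/n_experts of the expert weights (toy random router).
choice = rng.standard_normal((tokens, n_experts)).argmax(axis=1)
moe_out = np.stack([x[i] @ experts[choice[i]] for i in range(tokens)])

# Hardware with native sparse routing avoids the wasted work a dense
# engine would do emulating this with software kernels.
print(dense_out.shape, moe_out.shape, choice)
```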
Operator: One moment for our next question. That will come from the line of Timothy Arcuri with UBS. Your line is open.
Timothy Arcuri: Thanks a lot. I had just a question on sort of the puts and takes on gross margin as you begin to ship these racks. I mean, obviously it's gonna pull the blended margin down, but I'm wondering if there's any guardrails you can give us on this. It seems like the racks are maybe 45%, 50% gross margin. I guess, should we think about that pulling gross margin down like 500 basis points roughly as these racks begin to ship? I guess, you know, part of that, Hock, is there some, like, floor to the gross margin, you know, below which you wouldn't be willing to do, you know, more racks? Thanks.
Hock Tan: I hate to tell you that you must be a bit hallucinating. Our gross margin is solidly at the number Kirsten reports. It will not be affected by more and more AI products going out. We have gotten our yields, and we've gotten our costs, to the point where the model we have in AI will be fairly consistent with the model we have in the rest of the semiconductor business. Kirsten?
Kirsten Spears: I would agree with that. I think on further study, relative to even comments that I did make last quarter, the impact relative to our overall mix is actually not going to be substantial at all. I wouldn't worry about it.
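[Editor's note: for context on the "500 basis points" arithmetic in the question, which management disputes above, blended gross margin is just a revenue-weighted average of rack and non-rack margins. The base margin below is a hypothetical placeholder, not a figure from this call; the rack margin is the analyst's own assumption.]

```python
# Mechanics behind the analyst's ~500 bps concern. All inputs hypothetical.

def blended_gm(base_gm: float, rack_gm: float, rack_share: float) -> float:
    """Revenue-weighted average of rack and non-rack gross margins."""
    return (1 - rack_share) * base_gm + rack_share * rack_gm

base, rack = 0.77, 0.475  # hypothetical base GM; analyst's ~45-50% rack GM
for share in (0.10, 0.17, 0.20):
    drop_bps = (base - blended_gm(base, rack, share)) * 10_000
    print(f"rack share {share:.0%}: blended {blended_gm(base, rack, share):.1%}, -{drop_bps:.0f} bps")
# At these assumed margins, racks would need to reach roughly 17% of
# revenue to produce the ~500 bps dilution the question posits.
```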
Timothy Arcuri: Oh, okay. Thank you so much.
Operator: One moment for our next question. That will come from the line of Stacy Rasgon with Bernstein. Your line is open.
Stacy Rasgon: Hi, guys. Thanks for taking my question. I don't know if this is for Hock or Kirsten, but I wanted to dig in a little more into this "substantially more than $100 billion" next year. I'm trying to just count up the gigawatts. I counted, I don't know, 8 or 9. You have 3 from Anthropic and 1 from OpenAI, that's 4. You said Meta was multiple, so at least 2; that gets me to 6. Google, I figure, should be bigger than Meta, so at least 3; you know, that's 9, and then you've got a few others. And I thought that your content per gigawatt was sort of, you know, call it in a $20 billion per gigawatt range. I guess what I'm asking is: is my math around the gigawatts you plan to ship in 2027 correct, and how do I think about your content per gigawatt as that ships? Maybe it will be, quote-unquote, "substantially more than $100 billion."
Hock Tan: Stacy, you have a very interesting perspective, and I gotta admire you for that. You're right, you can look at it in gigawatts, which is the right way to look at it, instead of dollars, because that's how we sell our chips. But you have to realize that, depending on the LLM customer, our 6 customers now, sorry, not 5, 6, the chip dollars per gigawatt vary, sometimes quite dramatically. It does vary, but you're right, it's not far from the dollars you're talking about. And if you look at it by gigawatt, in 2027 we are seeing it getting close to 10 gigawatts.
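[Editor's note: a rough back-of-the-envelope on this exchange. The per-gigawatt dollar figure is the analyst's assumption, and Hock notes it varies by customer, sometimes dramatically, so this is illustrative only.]

```python
# Back-of-envelope implied by the figures in this exchange (illustrative).

gigawatts_2027 = 10           # Hock: "getting close to 10 gigawatts"
dollars_per_gw_billions = 20  # analyst's assumed ~$20B of content per GW

implied_revenue_b = gigawatts_2027 * dollars_per_gw_billions
print(f"Implied 2027 AI silicon revenue: ~${implied_revenue_b}B")
# ~$200B at these assumptions; even at half the per-GW content, this sits
# comfortably "significantly in excess of $100 billion."
```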
Stacy Rasgon: Got it. That's very helpful. Thank you.
Hock Tan: Sure.
Operator: Our next question will come from the line of Ben Reitzes with Melius Research. Your line is open.
Ben Reitzes: Hey, thanks. Hock, great to be speaking with you. I wanted to ask you about your commentary about supply visibility on those four major components through 2028. You know, A, how'd you do it? You're probably the first one to kind of go out through the 2028 timeframe. And secondly, after this astounding growth in 2027 for your AI business, do you have enough visibility to grow quite a bit in 2028, based on the supply that you see and that kind of commentary? Thanks a lot.
Hock Tan: The best answer is: yeah, you're right. We anticipated this sharp, accelerated growth. Now, nobody could anticipate the rate of growth we're showing, but we kind of anticipated a large part of it, I guess, for far longer than six months. We were early in being able to lock up glass substrate, the infamous glass substrate you all heard about. We were very early; we've locked up substrates. And we have worked with our good partners on the rest of the stuff we talked about. So the answer to your question is, it's somewhat anticipating early, plus the fact that we have very good partners out there in these key components. What else can I say except that? Yes. Charlie, you want to add anything?
Charlie Kawwas: Yeah, just maybe a couple of quick ones. I think you covered that piece really well. I think, Ben, the other piece that's really important is, as Hock said, we build custom silicon for six customers. We have very deep, strategic, multi-year engagements with them. Because of this custom capability, they share with us exactly what they anticipate, at least over the next two to three years, sometimes four years. And because of that, that's exactly why we went and secured all the elements Hock talked about. When we secure this, it requires investments with these partners, sometimes developing not just more capacity but the right technology and capacity for that. So we have to go secure it for multiple years, and we're probably, you're right, we're probably the first one to secure that up to 2028 or beyond.
Ben Reitzes: Can you grow in 2028 with what you see in supply? Sorry to sneak that in.
Hock Tan: Yes.
Ben Reitzes: Thank you.
Operator: Thank you. Our next question will come from the line of Vivek Arya with Bank of America Securities. Your line is open.
Vivek Arya: Thanks for taking my question. Hock, I just wanted to first clarify the Anthropic project you're doing, the $20 billion or so for a gigawatt this year. How much of that is chips, and how much of that is kind of racks? I just wanted to understand when you say $100 billion in chips, is there a distinction between chips versus your rack scale projects? Because just that project is supposed to triple next year. My question is, you know, your AI business is transitioning from kind of one large customer that was, you know, where you had kind of exclusive partnership to now multiple customers who are using multiple suppliers. How do you get the visibility and the confidence about, you know, how your share will progress at these multiple customers?
Vivek Arya: It's, you know, it's a very kind of fragmented engagement that they have across a whole range of cloud service providers and so forth. What are you doing to ensure that, you know, you have solid visibility and, you know, the right market share at this fragmented set of customers who are using multiple suppliers?
Hock Tan: Vivek, you have to understand one thing. First, as Charlie Kawwas correctly put down very nicely, we only have very few customers, to be precise, six. For the volume we're driving, the revenue we're driving, we only have just six. Prior to that, until recently, even fewer. Number two, you also have to understand the dollars each of them spends and the criticality of the nature of what they're embarking on. That's why I threw out this example: Meta has MTIA. That's their AI, their custom silicon accelerator program. To them, as to every one of my customers in this space, it's a strategic play. It's not optionality. To them, long term, short term, medium term, it is strategic, extremely strategic.
Hock Tan: They don't stop, and they are very clear, each of them, on where they want to position this custom silicon within the trajectory of their LLM development and the trajectory of how they develop inference for productizing those LLMs. That part, we have very clear visibility. Anything else, on GPUs, using neoclouds, using cloud business, these are all transactional and optionality. You point out very correctly, it seems very confusing. Trust me, not for us, nor for those customers we have. They're very strategic, they're very targeted, and they know exactly what they're building up and how much capacity they want to build up each year. The only thing they think about is, can they do it faster? Otherwise, it's very strategic and targeted on a projected roadmap.
Hock Tan: Anything else you see in the mix is pure, I call it, opportunistic for these guys, the optionality. It's very clear.
Vivek Arya: On the clarification, Hock, Anthropic racks versus chips. Thank you.
Hock Tan: I'd rather not answer that, but we're okay. As Kirsten said, we're good on our dollars and margin.
Vivek Arya: Thank you.
Operator: Thank you. Our next question will come from the line of Thomas O'Malley with Barclays. Your line is open.
Thomas O'Malley: Hey, guys. Thanks for taking my questions. I have one for Hock and one for Charlie. Hock, I know you're very specific, in particular, about what you put in the preamble, and you noted that customers are staying at direct attached copper through 400G SerDes. Is there any reason you're pointing that out in particular, especially as a leading pioneer in CPO? As you're adding more customers here, I would imagine customers that design ASICs with you are gonna use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well. Thank you.
Hock Tan: Okay. No, I'm just highlighting the fact that on networking, our technology really very uniquely positions us to help our customers, and more than our customers, even customers using general purpose GPUs, not just XPUs. Which is that, you know, if you are trying to create LLMs and building your own AI data centers, designing and architecting them, you truly want larger and larger domains or clusters. You really want to connect XPUs to XPUs directly where you can. The best way to do that is to use direct attached copper. That's the lowest latency, lowest power, and lowest cost. You want to keep doing that, especially in scale-up, as long as possible. In scaling out, we're past that; we use optical. That's fine.
Hock Tan: But I'm talking about scaling up, in a rack, in a cluster domain. There, you really want to use direct attached copper as long as you can. And based on the technology that Broadcom has, especially on connecting XPU to XPU or even GPU to GPU, we can still do it with copper, and we can push the envelope from 100G to 200G to even 400G. We have SerDes now running 400G that can drive distance on a rack to run copper. All I'm trying to say is, you don't need to go running into some bright, shiny object called CPO, even as we are the lead in CPOs. CPO will come in its time. Not this year, maybe not next year, but in its time. Charlie?
Charlie Kawwas: Yeah, well said, Hock. On the question of Ethernet: with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last two decades. If you look at the debut of the backend networks, as Hock articulated, two years ago there was a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out. The industry at the time, 24 months ago, was not clear. We were clear. We were very clear, actually, about what the answer should be. Again, because of the deep engagements with our partners, they made it very clear to all of us and the industry, GPU or XPU, that Ethernet is the scale-out of choice. Check mark. Today, everyone is talking about scaling out with Ethernet.
Charlie Kawwas: Now, when it comes to scale-up, yes, exactly like what happened three, four years ago, the question now is: what's the right answer for this? What we're hearing consistently, and what we're seeing, is that the right answer is Ethernet. As you know, last year we announced, with multiple hyperscalers and many of our peers in the semiconductor industry, that Ethernet scale-up is the right choice. That's what we believe will happen. Time will tell. But a lot of the XPU designs we're doing, we're being asked to scale up through Ethernet, and we're happy to enable that.
Thomas O'Malley: Thank you both.
Operator: Thank you. Our next question will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
Toshiya Hari: Good afternoon, and thanks for taking my question. Hock, it was helpful to hear you discuss the progress of your other full custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume that those are mostly targeting inference applications, or not? And could you maybe qualitatively speak to either the performance or cost advantages relative to GPUs that are giving those customers the ability to forecast at such a large scale? Thank you.
Hock Tan: Thanks. You know, most of our customers begin with inference, simply because that tends to be the easiest path to start on, not necessarily for anything else than the fact that, you know, when you do inference, it's less compute. But also, then the question is, do you need these general purpose, massive, dense matrix multiplication GPUs when you can do it more efficiently and effectively with custom inference silicon, XPUs, that do the job better or just as well, at much cheaper cost and lower power? That's what we find these customers starting with. But they are now in training, and many of our XPUs are used both in training as well as inference.
Hock Tan: By the way, they are interchangeable. Just as a GPGPU can be used not just for training, which it is perhaps more perfectly suited to, but also for inference, what we're seeing is our XPUs are used for both. We're seeing that going on. But we're also seeing, very rapidly, that those customers who are much more matured in the progression I talked about, in their journey towards complete XPUs, will start to develop two chips each year, simultaneously, one for training and one for inference, to be specialized. Why? Because what we're seeing very clearly for these LLM players is: you do the training to achieve a higher level of intelligence, of smarts, for your LLM. Great, you get yourself a great LLM, state-of-the-art or more. Now you've got to productize it, which means inference.
Hock Tan: Well, you could then decide at that time, once you've got your model going at its best. But if you decide only then to do your inference productization, it'll take you a year at least to productize, at which time somebody else is gonna create an LLM better than yours. So there's a leap of faith here: when you do training to create the next level of super intelligence in your LLM, you have to be investing simultaneously in inference, both in terms of the chip and the capacity. Our visibility is really coming out better and better as we find those six customers get more matured in their progression towards better and better LLMs. Yeah, that is the trend we are seeing.
Hock Tan: It's not happening with all six of our customers yet, but we are seeing a majority of them headed that way right now.
Operator: Thank you. One moment for our next question. That will come from the line of Joshua Buchalter with TD Cowen. Your line is open.
Joshua Buchalter: Hey, guys. Thanks for taking my question, and congrats on the results. Appreciate all the details on the expectations for deployments at specific customers. I was hoping you could just maybe reflect on how visibility has changed over the last one to two quarters that gave you the confidence to give us more details. On a specific one, you mentioned greater than a gigawatt for OpenAI in 2027, with that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection, I guess, in 2028. Is that the right way to think about it? Was that sort of always the plan? Thank you.
Hock Tan: Yes. Well, as you all have seen, and you all know, in this generative AI race that we are in now, and I shouldn't use the word race, let's call it progression, among the few players we see here, I mean, it's a competition. Each is trying to create an LLM better than the other and more tailored for specific purposes, be they enterprise, be they consumer, be they search. Each one is trying to create it more and more. All of that requires not just training, which is important to keep improving your LLM models, but inference, for productization and monetization of your LLMs. And, call it the fact that we've been engaged with some of them now for more than a couple of years.
Hock Tan: We're getting better and better visibility as they have more and more confidence that the XPUs they are working on with us are achieving what they're getting at. As they get a sense that the XPUs they are working on, with the software, with the algorithms they need, they have more confidence that this XPU silicon is what they need. It gets better and better, and as it gets better, we get more visibility, as Charlie put it perfectly, because at the end of the day, we only have six guys we work with. And these six guys all, as I said, look at XPUs and AI in a very strategic manner. They don't think one generation at a time. They think multiple generations, multiple years.
Hock Tan: In spite of all the hubris, the noise out there on what's available, they think very long-term on how they deploy the XPUs they develop with us, how they deploy them in achieving better and better LLMs that they want to create, and more than that, how they deploy them in monetizing. We are part of their strategic roadmap. We are not just optionality of, oh, shall I use a GPU? Shall I use it in the cloud because I need to train for six months? No, this is more than that. The investments these guys are making are long-term, and it's great to be part of that long-term roadmap as opposed to a transactional roadmap.
Hock Tan: As I answered in an earlier question, there's a lot of noise that mixes up short-term transactions with what is the long-term strategic positioning of our business and our product. To sum it all up, I think our business in XPUs is a strategic, sustainable play for all the six customers we have today.
Timothy Arcuri: Thank you.
Operator: Thank you. That is all the time we have for Q&A today. I would now like to turn the call back over to Ji Yoo for any closing remarks.
Ji Yoo: Thank you, Sheri. Broadcom currently plans to report its earnings for the Q2 of fiscal year 2026 after the close of market on Wednesday, 3 June 2026. A public webcast of Broadcom's earnings conference call will follow at 2:00 PM Pacific. That will conclude our earnings call today. Thank you all for joining. Sheri, you may end the call.
Operator: This concludes today's program. Thank you all for participating. You may now disconnect.