Q4 2026 NVIDIA Corp Earnings Call

Operator: Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Q4 Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. If you would like to ask a question during this time, simply press star, followed by 1 on your telephone keypad. If you would like to withdraw your question, press star 1 again. Thank you. Toshiya Hari, you may begin your conference.

Toshiya Hari: Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for Q4 of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. Our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for Q1 of fiscal 2027. The content of today's call is NVIDIA's property. It may not be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.

Toshiya Hari: For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, 25 February 2026, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette Kress: Thanks, Toshiya. We delivered another outstanding quarter with record revenue, operating income, and free cash flow. Total revenue of $68 billion was up 73% year-over-year, accelerating from Q3. Growth on a sequential basis was also a record as we added $11 billion in data center revenue across a diverse and expanding set of customers, including cloud providers, hyperscalers, AI model makers, enterprises, and sovereign nations. Demand for our Blackwell architecture, extreme co-design at data center scale, continues to strengthen as inference deployments grow in addition to training. The transition to accelerated computing and the infusion of AI across existing hyperscale workloads continue to fuel our growth. Agentic and physical AI applications built on increasingly smarter and multimodal models are beginning to drive our financial performance. On a full-year basis, data center generated revenue of $194 billion, up 68% year-over-year.

Colette Kress: We have now scaled our data center business by nearly 13x since the emergence of ChatGPT in fiscal 2023. As we look ahead, we expect sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity we shared last year. We believe we have inventory and supply commitments in place to address future demand, including shipments extending into calendar 2027. Every data center is power constrained. Customers make critical architectural decisions based on performance per watt, given these constraints and the need to maximize AI factory revenue. SemiAnalysis declared NVIDIA the inference king, as recent results from Inference X reinforced our inference leadership, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper.

Colette Kress: Continuous optimization of CUDA software helped deliver up to five times better performance on GB200 NVL72 just within four months. NVIDIA produces the lowest cost per token, and data centers running on NVIDIA generate the highest revenues. Our pace of innovation, particularly at our scale, is unmatched. Fueled by an annual R&D budget approaching $20 billion and our ability to extreme co-design across compute and networking, across chips, systems, algorithms, and software, we intend to deliver X-factor leaps in performance per watt every generation and extend our leadership position over the long term. Q4 data center revenue of $62 billion increased 75% year-over-year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra ramp. With NVIDIA infrastructure in high demand, even Hopper and much of the six-year-old Ampere-based products are sold out in the cloud.

Colette Kress: Nearly a year has passed since the release of our Grace Blackwell NVL72 systems. Today, nearly 9 gigawatts of infrastructure on Blackwell are deployed and consumed by the major cloud service providers, hyperscalers, AI model makers, and enterprises. Networking, a cornerstone of our data center scale infrastructure offering, was a standout this quarter, generating $11 billion in revenue, up more than 3.5x year-over-year. Demand for our scale up and scale out technologies reached record levels, both growing double digits sequentially, driven by strong adoption of NVLink, Spectrum-X Ethernet, and InfiniBand. On a year-over-year basis, growth was driven primarily by NVLink 72 scale up switches, as Grace Blackwell systems accounted for roughly two-thirds of data center revenue in the quarter. NVLink scale up fabric has revolutionized computing and demonstrates the power of extreme co-design across all of the chips of the supercomputer and the full stack.

Colette Kress: In Q4, we announced that we will enable AWS with NVLink to integrate with their custom silicon. Momentum is strong with our Spectrum-X Ethernet scale up and scale across networking as customers work to unify distributed data centers into integrated gigascale AI factories. For the full year, our networking business exceeded $31 billion in revenue, up more than 10x compared to fiscal 2021, the year we acquired Mellanox. Our demand profile is broad, diverse, and expanding beyond just chatbots. First, there is a fundamental platform shift from classical machine learning to generative AI. Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation, and content recommender systems, is encouraging our largest customers to accelerate their capital spending.

Colette Kress: For example, at Meta, advancements in their GEM model drove a 3.5% increase in ad clicks on Facebook and more than 1% gain in conversions on Instagram, translating into meaningful revenue growth. With the same NVIDIA infrastructure, Meta Superintelligence Labs can train and deploy their frontier agentic AI systems. Frontier agentic systems have reached an inflection point. Claude Code, Claude Cowork, and OpenAI Codex have achieved useful intelligence. Adoption is skyrocketing, and tokens are profitable, driving extreme urgency to scale up compute. Compute directly translates to intelligence and revenue growth. Analysts' expectations for 2026 CapEx across the top five cloud providers and hyperscalers, who collectively account for a little over 50% of our data center revenue, are up nearly $120 billion since the start of the year and approaching $700 billion.

Colette Kress: We continue to expect the transition of classic data center workloads to GPU-accelerated computing and the use of AI to enhance today's hyperscale workloads to contribute toward roughly half of our long-term opportunity. Every country will build and operate some parts of its AI infrastructure, just like with electricity and internet today. In fiscal year 2026, our sovereign AI business more than tripled year-over-year to over $30 billion, driven primarily by customers based in Canada, France, the Netherlands, Singapore, and the UK. Over the long run, we expect our sovereign opportunity to grow at least in line with the AI infrastructure market as countries spend on AI proportional to their GDP.

Colette Kress: While small amounts of H200 products for China-based customers were approved by the US government, we have yet to generate any revenue, and we do not know whether any imports will be allowed into China. Our competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long term. To sustain its leadership position in AI compute, America must engage every developer and be the platform of choice for every commercial business, including those in China. We will continue to engage with the US and China governments and advocate for America's ability to compete around the world. We unveiled the Rubin platform last month at CES, comprised of six new chips: the Vera CPU, Rubin GPU, NVLink Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch.

Colette Kress: The platform will train MoE models with one-fourth the number of GPUs and reduce inference token costs by up to 10x compared to Blackwell. We shipped our first Vera Rubin samples to customers earlier this week, and we remain on track to commence production shipments in the second half of the year. Based on its modular, cable-free tray design, Rubin will deliver improved resiliency and serviceability relative to Blackwell. We expect every cloud model builder to deploy Vera Rubin. Moving to gaming. Gaming revenue of $3.7 billion increased 47% year-on-year, driven by strong Blackwell demand and improved supply. GeForce RTX is the leading platform for PC gamers, creators, and developers. In Q4, we added several new technologies and advancements, including DLSS 4.5, which uses AI to bring game visuals to a new level.

Colette Kress: G-SYNC Pulsar, bringing incredibly clear graphics even in motion, and 35% faster LLM inference across leading AI PC frameworks. Looking ahead, while end demand for our products remains strong and channel inventory levels are healthy, we expect supply constraints to be the headwind to gaming in Q1 and beyond. Professional visualization crossed the $1 billion mark for the first time, with revenue of $1.3 billion, up 159% year-over-year and 74% sequentially. During the quarter, we launched the RTX PRO 5000 Blackwell workstation with 72 GB of fast memory for AI developers running LLMs and agentic workflows. Automotive revenue of $604 million was up 6% year-over-year and was driven by robust demand for self-driving solutions....

Colette Kress: At CES, we introduced Alpamayo, the world's first open portfolio of reasoning vision-language-action models, simulation blueprints, and datasets, enabling vehicles that can think. The first passenger car featuring Alpamayo, built on NVIDIA DRIVE, will be on the road soon in the new Mercedes-Benz CLA. Physical AI is here, having already contributed north of $6 billion in NVIDIA revenue in fiscal year 2026. Robotaxi rides are growing exponentially, with commercial fleets from Waymo, Tesla, Uber, WeRide, Zoox, and many others expected to scale from thousands of vehicles in 2025 to millions over the next decade, creating a market poised to generate hundreds of billions of dollars of revenue. This expansion will demand orders of magnitude more compute, with every major OEM and service provider developing on NVIDIA's platform.

Colette Kress: We continue to advance robotics development with the new NVIDIA Cosmos and Isaac GR00T open models and frameworks, and NVIDIA-powered robots and autonomous machines for leading companies, including Boston Dynamics, Caterpillar, Franka Robotics, LG Electronics, and NEURA Robotics. To accelerate industrial physical AI adoption, we also announced new and expanding partnerships with Dassault Systèmes, Siemens, and Synopsys to bring NVIDIA AI infrastructure, Omniverse digital twins, world models, and CUDA-X libraries to millions of researchers, designers, and engineers building the world's industries. Let's move to the rest of the P&L. GAAP gross margin was 75%, and non-GAAP gross margin was 75.2%, increasing sequentially as Blackwell continued to ramp. GAAP operating expenses were up 16% sequentially and up 21% on a non-GAAP basis, related to new product introductions and compute and infrastructure costs.

Colette Kress: The non-GAAP effective tax rate for the fourth quarter was 15.4%, below our outlook for the quarter, primarily due to the impact of a one-time tax benefit. Inventory grew 8% quarter-over-quarter, while purchase commitments also increased significantly, and we have strategically secured inventory and capacity to meet demand beyond the next several quarters. This is further out in time than usual and reflects the longer demand visibility we have. While we expect tightness in the supply for our advanced architectures to persist, we remain confident in our ability to capitalize on the growth opportunity ahead, with our scale, expansive supply chain, and long-standing partnerships continuing to serve us well. We generated free cash flow of $35 billion in Q4 and $97 billion in fiscal year 2026.

Colette Kress: For the year, we returned $41 billion or 43% of free cash flow to our shareholders in the form of share repurchases and dividends. We continue to invest in our technology and our ecosystem to cultivate market development, drive long-term growth, and ultimately, yield total shareholder returns superior to the market or our peer group. Importantly, we will continue to run a strategic and disciplined process as it relates to our investments, and we remain committed to returning capital to our shareholders. Let me turn to the outlook for Q1. Starting this quarter, we will be including stock-based compensation expense in our non-GAAP results. Stock-based compensation is a foundational component of our compensation program to attract and retain world-class talent. Let me first start with revenue. Total revenue is expected to be $78 billion, ±2%.

Colette Kress: We expect most of our growth to be driven by data center. Consistent with last quarter, we are not assuming any data center compute revenue from China in our outlook. GAAP and non-GAAP gross margins are expected to be 74.9% and 75%, respectively, ±50 basis points. For the full year, we continue to see gross margins in the mid-seventies. We will keep you updated on our progress as we prepare for the Vera Rubin transition. GAAP and non-GAAP OpEx are expected to be approximately $7.7 billion and $7.5 billion, respectively, including stock-based compensation expense of $1.9 billion. For the full year, we expect non-GAAP OpEx to grow in the low forties on a year-over-year basis as we continue to invest in our expanding opportunity set.

Colette Kress: For the full year, fiscal year 2027, we expect GAAP and non-GAAP tax rates to be between 7% and 19%, excluding any discrete items and material changes to our tax environment. With that, let me turn the call over to Jensen. I think he has a few words for us.

Jensen Huang: This quarter, we significantly deepened and expanded our partnerships with leading frontier model makers. We recently celebrated OpenAI's launch of GPT-5.3-Codex, trained with and inferencing on Grace Blackwell NVL72 systems. GPT-5.3-Codex can take on long-running tasks that involve research, tool use, and complex execution. GPT-5.3-Codex is deployed broadly inside NVIDIA. Our engineers love it. We continue to work with OpenAI toward a partnership agreement and believe we are close. We are thrilled with our ongoing partnership with OpenAI, a once-in-a-generation company we've had the pleasure of partnering with since their first days. Meta Superintelligence Labs is scaling up at lightning speed. Last week, we announced that Meta is deploying millions of Blackwell and Rubin GPUs, NVIDIA CPUs, and Spectrum-X Ethernet for training and inference. This quarter, we announced a partnership with Anthropic and a $10 billion investment in their company.

Jensen Huang: Anthropic will train and inference on Grace Blackwell and Vera Rubin systems. Anthropic's Claude Cowork agent platform is revolutionary and has opened the floodgates for enterprise AI adoption. Between Claude Cowork and OpenClaw, compute demand is skyrocketing, and the ChatGPT moment of agentic AI has arrived. With partnerships spanning Anthropic, Meta, OpenAI, and xAI, NVIDIA is deployed across every cloud, and with our ability to build full-stack AI infrastructure from the ground up or support them in the cloud, we're uniquely positioned to partner with frontier model builders at every stage: training, inference, and AI factory scale-out. Finally, we recently entered into a non-exclusive licensing agreement with Groq for its low-latency inference technology and welcomed a team of brilliant engineers to NVIDIA. As we did with Mellanox, we will extend NVIDIA's architecture with Groq's innovations to enable new levels of AI infrastructure performance and value.

Jensen Huang: We look forward to sharing more at GTC next month. Okay, back to you. We will now transition to Q&A. Operator, please poll for questions.

Operator: At this time, I would like to remind everyone, in order to ask a question, press star, then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya: Thanks for taking my question. I think you mentioned that you now have growth visibility into calendar 2027 also, and I think your purchase commitments kind of reflect that confidence. But Jensen, I'm curious, you know, when you look at your top cloud customers, cloud CapEx close to $700 billion this year, many investors are concerned that it would be harder for this level to grow into next year. And for several of them, their cash flow generation capability is also getting compressed. I know you're very confident about your roadmap, right, and your purchase commitments and whatnot, but how confident are you about your customers' ability to continue to grow their CapEx?

Vivek Arya: If their CapEx doesn't grow, can NVIDIA still find a way to grow in that envelope? Thank you.

Jensen Huang: I am confident in their cash flow growing, and the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world in enterprises everywhere. You're seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues. In this new world of AI, compute equals revenues. I am certain that at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, and, you know, just the incredible enthusiasm about OpenClaw and the enterprise versions of them, all of the enterprise ISVs are now working on agentic systems on top of their tool platforms.

Jensen Huang: Now, I am certain at this point that we are at the inflection point. We've reached the inflection point, and we're generating profitable tokens that are productive for customers and profitable for the cloud service providers. The simple logic of it, the simple way to think about it, is computing has changed. What used to be software running on computers, modest amount of computers, you know, call it $300 or $400 billion worth of CapEx each year, has now gone into AI. AI, in order to generate tokens, you need compute capacity, and that translates directly to growth, and that translates directly to revenues.

Operator: Your next question comes from Joe Moore with Morgan Stanley. Your line is open.

Joe Moore: Great, thank you. Congratulations on the numbers. You talked about some of the strategic investments that you've made into Anthropic and potentially OpenAI, CoreWeave as well, but also partners Intel, Nokia, Synopsys. You know, you're clearly at the center of everything. Can you talk about the role of those investments, and how do you view the balance sheet as a tool to grow NVIDIA's position in the ecosystem and participate in that growth?

Jensen Huang: As you know, fundamentally, at the core of everything NVIDIA is our ecosystem. That's what everybody loves about our business, the richness of our ecosystem. Just about every startup in the world is working on NVIDIA's platform. We're in every cloud, we're in every on-prem data center. We're all over the world's edge and robotic systems. Thousands of AI natives are built on top of NVIDIA. We want to take the great opportunity that we have, as we're in the beginning of this new computing era, this new computing platform shift, to put everybody on NVIDIA. Everything is already built on CUDA, so we're starting from a really terrific starting point.

Jensen Huang: As you know, fundamentally, at the core of everything NVIDIA is our ecosystem. That's what everybody loves about our business, the richness of our ecosystem. Just about every startup in the world is working on NVIDIA's on NVIDIA's platform. We're in every cloud, we're in every on-prem data center. We're all over the world's edge and robotic systems. Thousands of AI natives are built on top of NVIDIA. We want to take the great opportunity that we have as we're in the beginning of this new computing era, this new computing, platform shift, to put everybody on NVIDIA. Everything is already built on CUDA, it's, we're starting from a really terrific starting point.

Speaker #2: The richness of our ecosystem . Just about every startup in the world is working on Nvidia's on Nvidia's platform . Or in every cloud we're in every on prem data center .

Speaker #2: We're all over the world's edge and robotic systems , thousands of AI natives are built on top of Nvidia . We want to take the great opportunity that we have as we're in the beginning of this new computing era , this new computing platform shift to put everybody on Nvidia , everybody , everything is already built on Cuda .

Speaker #2: And so it's we're starting from a really terrific starting point . But as we build out the entire AI ecosystem , whether it's in AI for language or physical AI or AI physics or biology or robotics or manufacturing , we want all of these ecosystems to be built on top of Nvidia .

Jensen Huang: As we build out the entire AI ecosystem, whether it's in AI for language or physical AI or AI physics or biology or robotics or manufacturing, we want all of these ecosystems to be built on top of NVIDIA. This is such a wonderful opportunity for us to invest into the ecosystem across the entire stack. Our ecosystem is also richer today than it used to be. We used to be largely a computing platform on GPUs, but now we're a computing AI infrastructure company, and we have computing platforms on, well, every aspect of that. Everything from computing to AI models, to networking, to our DPU, all of that has computing stacks on top of it.


Jensen Huang: As I mentioned before, whether it's in enterprise or in manufacturing, industrial, or science or robotics, each one of these ecosystems has a different stack, and we want to make sure that we continue to invest into our ecosystem. Our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.


Operator: Your next question comes from Harlan Sur with J.P. Morgan. Your line is open.


Harlan Sur: Good afternoon. Thanks for taking my question. Networking continues to rise as a percentage of your overall data center profile, right? Through fiscal 26, your networking revenues accelerated on a year-over-year basis every single quarter, with 3.6x year-over-year growth in Q4, as you guys mentioned, obviously on the strength of your scale-up and scale-out networking product portfolios. I seem to remember that in the first half of last year, your annualized run rate on your Spectrum-X Ethernet switching platform was around $10 billion. It looks like that may have stepped up to around $11 billion to $12 billion in the second half of last year. Jensen, looking at your order book, especially with Spectrum-XGS and the upcoming 102T Spectrum-6 switching platforms launching soon, where is the Spectrum run rate trending now, and where do you foresee it exiting this calendar year?


Jensen Huang: Yeah. As you know, we see ourselves as an AI infrastructure company, and AI computing infrastructure includes CPUs and GPUs. We invented NVLink to scale up one computing node into a giant computing rack. We invented the idea of a rack-scale computer. We don't ship nodes of computers; we ship racks of computers. That NVLink Switch scale-up system is then scaled out using Spectrum-X and InfiniBand. We support both. Further, we also scale across data centers using Spectrum-XGS. The way we think about networking is really as an extension. We offer everything openly so that people can decide to mix and match at different scales, however they would like to integrate it into their bespoke data centers.


Jensen Huang: In the final analysis, it's all one big part of our platform. The invention of NVLink really turbocharged our networking business. Every rack comes with 9 nodes of switches, and each one of them has 2 chips in it, and in the future they'll have more. The amount of switching that we do per rack is really quite incredible. We're also now the largest networking company in the world. If you look at Ethernet, we came into the Ethernet switching market about a couple of years ago, and I think that we're probably the largest Ethernet networking company in the world today, and if not, surely will be soon. Spectrum-X Ethernet has been a home run for us. You know, we're open to however people want to do networking.


Jensen Huang: Some people just really love the low latency and the scale-up capability of InfiniBand, and we will continue to support that, of course. Some people love to integrate their networking across their data center based on Ethernet. We created an Ethernet capability that extends Ethernet with an artificial intelligence way of processing in the data center, and we're incredibly good at that. Our Spectrum-X performance really shows it. You know, when you build a $10 billion or $20 billion AI factory, a difference of 10%, and it could easily be 20%, in the effectiveness and utilization of your network for your data center translates to real money. NVIDIA's networking business is really growing fast.
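A back-of-envelope sketch of that point, with the dollar figures and utilization gains taken as illustrative assumptions matching the ranges mentioned, not disclosed data:

```python
# Illustrative only: what a network-utilization improvement is "worth"
# on an AI factory of a given cost. All inputs are hypothetical.
def utilization_value(factory_cost_usd: float, utilization_gain: float) -> float:
    """Capital effectively recovered by raising network utilization."""
    return factory_cost_usd * utilization_gain

for cost in (10e9, 20e9):
    for gain in (0.10, 0.20):
        value = utilization_value(cost, gain)
        print(f"${cost / 1e9:.0f}B factory, {gain:.0%} gain -> ${value / 1e9:.1f}B")
```

On these assumed numbers, a 10% to 20% utilization gain on a $10 billion to $20 billion factory is worth on the order of $1 billion to $4 billion of effective capacity, which is the "real money" being referred to.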


Jensen Huang: You know, I think it's just because we built the AI infrastructure so effectively that the AI infrastructure business is growing incredibly fast.


Operator: Your next question comes from CJ Muse with Cantor Fitzgerald. Your line is open.


CJ Muse: Yeah, good afternoon. Thank you for taking the question. I guess with CPX for large context windows and Groq likely adding a decode-specific solution, curious how we should think about your future roadmap. Should we be thinking about customized silicon, either by workload or customer, as an increasing focus by NVIDIA, particularly helped by your move to a dielet architecture? Thanks so much.


Jensen Huang: Everybody should want to push out dielets as long as they can. The reason is that every time you cross a dielet boundary, you have to cross an interface, and every time you cross an interface, you add latency and you add power unnecessarily. We're not allergic to dielets. We use dielets already, but we try to use dielets only when we absolutely have no choice but to do so. If you look at the Grace Blackwell architecture and the Rubin architecture, we use two giant reticle-limited dies and we abut them, and that reduces the amount of interface crossing. The dielet tax shows up in the architecture effectiveness of the competitors.


Jensen Huang: If you look at NVIDIA, people call it our software advantage, but, you know, where software starts and architecture ends is kind of hard to tell. Our software is effective because our architecture is so good. The CUDA architecture is unquestionably more effective, more efficient, and delivers more performance per flop and per watt than any computing architecture out there, and it's because of the way we architect. With respect to how we think about Groq and the low-latency decoder, I've got some great ideas that I'd like to share with you at GTC. The simple idea is that our infrastructure is incredibly versatile because of CUDA, and we're gonna continue to do that.


Jensen Huang: You know, all of our GPUs are architecturally compatible, which means that when I'm working on optimizing models today for Blackwell, all of that work and all of that dedication to optimizing software stacks and new models also benefits Hopper and also benefits Ampere. It's the reason why A100 continues to feel fresh and continues to stay performant years after we've deployed it into the world. Architecture compatibility allows us to do that. It allows us to invest enormously in software engineering and optimization, knowing that our entire installed base, in the cloud, on-prem, everywhere, across generations of GPU architectures, will all benefit. We'll continue to do that. It allows us to extend the useful life of our products, and it allows us to have innovation, flexibility, and velocity, which translates to performance and, very importantly, performance per dollar and performance per watt for our customers.


Jensen Huang: What we'll do with Groq, you'll have to come to GTC to see, but what we'll do is we'll extend our architecture with Groq as an accelerator, in very much the way that we extended NVIDIA's architecture with Mellanox.


Operator: The next question comes from Stacy Rasgon with Bernstein Research. Your line is open.


Stacy Rasgon: Hi, guys. Thanks for taking my questions. Colette, I wanted to dig a little bit into the call for sequential growth through the year. I mean, you grew this quarter more than $10 billion sequentially in data center, and the guide seems to imply, you know, the bulk of the increase, $10 billion sequentially, is in data center. How do you see that as we go through the year, especially as Rubin ramps into the back half? Blackwell has been a pretty massive acceleration for sequential growth; should we expect something similar as we get to Rubin? Then I was also just hoping you could comment on your expectations for gaming. I understand the memory issues and everything else.


Stacy Rasgon: Do you think gaming can still grow year over year in fiscal 2027, or will that be under more pressure given memory? Those two questions, please. Thank you.


Colette Kress: Thanks, Stacy. Let me start with the revenue going forward. Again, we're trying to look at revenue quarter by quarter. As you think about the full year, we are absolutely going to still be selling and providing Blackwell, probably at the same time that we're also seeing Vera Rubin come to market. It's a great architecture that customers can stand up quickly, and we have already planned many different orders across the different customers to provide it. It's too early yet to determine how much of that beginning Vera Rubin ramp will start in the second half, and we'll get through it. But there's no confusion in terms of the strong demand and the interest.


Colette Kress: We do expect pretty much every single customer to be purchasing Vera Rubin. The question is how soon we are in market and how soon they are able to stand that up in their data centers. That was your first part. The second part was focusing on our gaming. As much as we would love to have additional supply, we do believe for a couple of quarters it is gonna be very tight. If things improve by the end of the year, there is an opportunity to think about what that means for year-over-year growth. It's still too early for us to know at this time, and we'll get back to you as soon as we can.


Operator: Your next question comes from Atif Malik with Citi. Your line is open.


Atif Malik: Thank you for taking my question. Jensen, I'm curious if you can touch on the importance of CUDA, as now more of the investment dollars in AI are coming from inference workloads?


Jensen Huang: Without CUDA, we wouldn't know what to do with inference. The entire stack starts from TensorRT-LLM, which we introduced a few years ago and which is still the most performant inference stack in the world. Optimizing it for NVLink required us to discover and invent new parallelization algorithms that sit on top of CUDA to distribute the workload and the inferencing to take advantage of the aggregate bandwidth across NVLink Switch. NVLink Switch has enabled us to deliver, generationally, 50 times more performance per watt. It's just an incredible leap, and it's sensible. You know, NVLink Switch is a great invention. It was hard to do.

Jensen Huang: The creation of the switching technology, disaggregating the switches, building the system racks, all of that, you know, we did it all in plain sight, and everybody knew how hard it was for us to do. But the results are incredible. Performance per watt is 50 times; performance per dollar, 35 times. The leap in inference is incredible. It's really important to realize that inference equals revenues now for our customers, because agents are generating so many tokens and the results are so effective. When the agents are coding, they're off generating thousands, tens of thousands, hundreds of thousands of tokens, because they're running for, you know, minutes to hours. These agentic systems are spawning off different agents working as a team.

Jensen Huang: The number of tokens that are being generated has really gone exponential. We need to inference at a much higher speed, and when you're inferencing at a much higher speed and each one of those tokens is dollarized, it directly translates into revenues. Inference performance equals revenues for our customers. For the data centers, inference tokens per watt translates directly to the revenues of the CSPs. The reason for that is because everybody is power-limited. No matter how many data centers you have, each data center, you know, a hundred megawatts or one gigawatt, has power limits. The architecture that has the best performance per watt wins, because each token is dollarized.

Jensen Huang: Tokens per watt translates to dollars per watt, which, in a gigawatt, translates directly to revenues. You could see that every CSP understands this now, every hyperscaler understands this: CapEx translates to compute, and compute with the right architecture translates to maximizing revenues. Compute equals revenues. Without investing in capacity today, without investing in compute, there cannot be revenue growth, and I think everybody understands that. Choosing the right architecture is incredibly important. It's more than strategic now; it directly affects their earnings, and choosing the right architecture, the one with the best performance per watt, is literally everything.

Operator: Your next question comes from Ben Reitzes with Melius Research. Your line is open.

Ben Reitzes: Yeah, hey, thanks. First, let me say kudos on including the stock comp in non-GAAP. I think that's a great move, but that isn't my question. My question is around gross margins and the sustainability of the mid-70s long term. Should we read into the visibility on supply being available into calendar 2027 that it's sustainable until then? And then, Jensen, what about after that? Are there innovations in memory consumption you can unveil that make us feel better about the ability to keep margins at that level for a long time? Thanks.

Jensen Huang: The single most important lever of our gross margins is actually delivering generational leaps to our customers. That is the single most important thing. If we can deliver, generation over generation, performance per watt that dramatically exceeds what Moore's Law can do, and performance per dollar dramatically more than the price of our systems, then we can continue to sustain our gross margins. That's the single most important concept. The reason why we're moving so fast is because, number one, the demand for tokens in the world, as a result of the inflection points that we've gone through, has gone completely exponential. I think we're all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up.

Jensen Huang: We know that the amount of computation necessary, the amount of compute necessary for the modern way of doing software, is growing exponentially. Our strategy is to deliver an entire AI infrastructure every single year. This year, we introduced six new chips. Rubin, the next generation, will bring many new chips as well. Every single generation, we are committed to delivering many X-factors of performance per watt and performance per dollar. That pace and our ability to do extreme co-design allow us to deliver that value and that benefit to the customers. That is the single most vital thing as it relates to our value delivered.

Operator: Your next question comes from Antoine Chkaiban with New Street Research. Your line is open.

Antoine Chkaiban: Hi, thanks a lot for taking my question. I'd like to ask about space data centers, which some of your customers are considering. How feasible do you think that is, and on what kind of horizon? What do the economics look like today, and how do you think that could evolve over time? Thank you.

Jensen Huang: Well, the economics are poor today, but it's going to improve over time. As you know, the way that space works is radically different than how it works down here. There's an abundance of energy; solar panels are large, but there's plenty of space in space. As for heat dissipation, it's cold in space, but there's no airflow, so the only way to dissipate heat is through conduction, and the radiators that you need to create are fairly large. Liquid cooling is obviously out of the question because it's heavy and, you know, freezes. The methods that we use here on Earth are a little different than the way we would do it in space.

Jensen Huang: But there are many different computing problems that really want to be done in space, and NVIDIA already has the world's first GPU in space. Hopper is in space. One of the best use cases of GPUs in space is imaging: to be able to image at extremely high resolutions using, of course, optics and artificial intelligence, to do the computation of reprojection from different angles, to up-res and do noise reduction, and to be able to image at very high resolutions, extremely large scales, and very fast. It's hard to do that by sending, you know, petabytes and petabytes of imaging data back here to Earth and doing that work.

Jensen Huang: It's easier just to do it out in space, and then ignore all of the data collected and processed until you see something interesting. Artificial intelligence in space will have very good, very interesting applications.

Operator: Your next question comes from Mark Lipacis with Evercore ISI. Your line is open.

Mark Lipacis: Hi, thanks for taking my question. I want to pick up on the comment you made in the script about revenue diversification. I believe, Colette, you said that hyperscalers were over 50% of revenues, but growth was led by the rest of your data center customers. As a clarification, I just want to make sure I understood that. Does that imply your non-hyperscale customers grew faster? If so, can you help us understand: what are the non-hyperscalers doing differently? Are they doing different things than the hyperscalers, or the same things at a different scale?

Mark Lipacis: And do you expect this trend to continue? Would you expect your customer base to evolve to a point where non-hyperscalers become the larger part of your business? Thank you.

Colette Kress: Yes, let's see if we can help on this question. When you think about our top five, as we articulated, as being our CSPs, our hyperscalers, they right now sit at about 50% of our total revenue. Beyond that, there's a great diversity of all the different other types of companies that we are working with: our AI model makers, our enterprises, supercomputing, our sovereigns. There are a lot of other different facets there. You are correct, it's a very fast-growing area as well. We have a strong position in terms of all of our different cloud providers on our platform, and now we also have an extreme diversity of different customers that we are seeing all the way across the world.

Colette Kress: We will really benefit from seeing that diversity and being able to serve all of those parts. I'm going to see if Jensen wants to add a bit more.

Jensen Huang: Yeah, this is one of the advantages that we have with our ecosystem, all built on top of CUDA. We're the only accelerated computing platform that is in every cloud, that's available through every single computer maker, available at the edge, and we're now cultivating telecommunications. Obviously, the future radios will all be AI-driven radios, and the future wireless network will also be a computing platform. That is a foregone conclusion; somebody has to go and invent the technologies to make that possible, and we've created a platform called Aerial to go do that. We're in just about every single robot, every single self-driving car. CUDA gives us the benefit of the performance of specialized processors, with the tensor cores inside our GPUs, on the one hand.

Jensen Huang: On the other hand, the flexibility of CUDA allows us to solve language problems, computer vision problems, robotics problems, biology problems, you know, physics problems, and just about all kinds of AI and all kinds of computation algorithms. The diversity of our customer base is one of the greatest strengths that we have. The second thing, of course, is our own ecosystem. Even if our processor were programmable, if we didn't cultivate our ecosystem, investing in our future ecosystem and continuing to enhance it, as we talked about today, it would be hard for us to grow beyond whatever design wins we capture from somebody else's ecosystem.

Jensen Huang: We could grow and expand our ecosystem very naturally because of the platform that we've created. Lastly, one of the things that's really important is the partnerships that we have with OpenAI, Anthropic, xAI, Meta, and of course just about every single open-source model in the world. There are 1.5 million AI models on Hugging Face, and all of it runs on NVIDIA CUDA. Open source, in totality, probably represents the second-largest model collection in the world: OpenAI is the largest, and second-largest is probably the collection of all the open-source models. NVIDIA's ability to run all of that makes our platform super fungible, super easy to use, and really safe to invest into.

Jensen Huang: That creates the diversity of customers and the diversity of platforms, and it's available in every single country, because, you know, we support the whole world's ecosystem.

Operator: Your next question comes from Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers: Yeah, thanks for taking the question. I guess, you know, sticking with the idea of the platform and extreme co-design, some of the news over this last quarter has obviously been NVIDIA's ability, or push, to bring Vera CPUs to market on a standalone basis. I guess, Jensen, I'm curious what importance Vera plays in this architecture evolution as we move forward. Is this being driven more by the proliferation or the heterogeneity of inference workloads? I'm just curious how you see that evolving for NVIDIA, particularly on a standalone CPU basis. Thank you.

Jensen Huang: Yeah, thanks. I'll tell you more about it at GTC. At the highest level, we made fundamentally different architecture decisions about our CPUs compared to the rest of the world's CPUs. It's the only data center CPU that supports LPDDR5. It is designed to be focused on very high data processing capability. The reason for that is because most of the computing problems that we're interested in are data-driven, artificial intelligence being one, and the single-threaded performance, in ratio with bandwidth, is just off the charts. We made those architectural decisions because of the different phases of AI, starting with data processing: before you even do training, you have to do data processing.

Jensen Huang: So you have data processing, pre-training, and then post-training. Now the AIs are learning how to use tools, and many of those tools run in CPU-only environments, or they run in CPU-plus-GPU-accelerated environments. Vera was designed to be an excellent CPU for post-training. Some of the use cases in the entire pipeline of artificial intelligence include using a lot of CPUs. You know, we love CPUs as well as GPUs, and when you accelerate the algorithms to the limit, as we have, Amdahl's law would suggest that you need really, really fast single-threaded CPUs. That's the reason why we built Grace to be extraordinarily great at single-threaded performance, and Vera is off the charts better than that.

Operator: Your next question comes from Tim Arcuri with UBS. Your line is open.

Tim Arcuri: Thanks a lot. Colette, I was wondering if you could talk about the deployment of capital. I know that you really jacked up the purchase commits, but it sounds like maybe you're over the hump on this, and you're gonna probably generate about $100 billion in cash this year. You know, pretty much no matter how good the results have been, the stock hasn't really gone up much. I would think that you probably feel like this is a pretty good price to be, you know, buying back a bunch of it here. I was wondering if you could talk about that, the question being, why not put a big stake in the ground and just, you know, do a huge share repurchase here? Thanks.

Colette Kress: Thanks for the question. We look at our capital return very, very carefully. We do believe that one of the most important things we can do is really support the ecosystem that's in front of us, which stems from everywhere: from our suppliers, and the work we need to do to assure we can have the supply that's needed and help them from a capacity standpoint, all the way through to the early developers of the AI solutions that will be on our platform. We will continue to make this a very important part of our process and strategic investments. Of course, we are still repurchasing our stock, we still have our dividend as well, and we will continue to find the right unique opportunities within the year for doing those different purchases.

Operator: Your final question comes from Jim Schneider with Goldman Sachs. Your line is open.

Jim Schneider: Thank you for taking my question. Jensen, you've previously outlined the potential to get to $3 trillion to $4 trillion of data center CapEx by 2030, which implies a potential acceleration in growth rates, which you've sort of guided to, at least for this next quarter. The question is: What are some of the key application areas that you believe are most likely to drive that inflection? Is that physical AI, agentic, or something else? And do you still feel good about that $3 trillion to $4 trillion envelope? Thank you.

Jensen Huang: Yeah, let's back that up and reason through it a few different ways. The first way is on first principles: the way that software is done in the future, using AI, is token-driven. I think everybody talks about tokenomics, talks about data centers generating tokens, and inference is about generating tokens. We generate tokens; you know, we were just talking about how NVIDIA's NVLink 72 enabled us to generate tokens at 50 times better performance per unit energy than the previous generation. Token generation is at the center of almost everything that relates to software in the future and relates to computing.

Jensen Huang: If you look at the way we used computing in the past, however, the amount of computation demanded by software in the past is a tiny fraction of what is necessary in the future. AI is here. AI is not going to go back. AI is only going to get better from here. If you think about it and you said, okay, well, the world was investing about $300 billion to $400 billion a year in classical computing, and now AI is here, and the amount of computation necessary is 1,000 times higher than the way we used to do computing. The computing demand is just a lot higher.

Jensen Huang: If we continue to believe there's value in it, and we'll talk about that in a second, then the world will invest to produce those tokens. The amount of token generation capability that the world needs is a lot more than $700 billion. I'm fairly confident that we're going to continue to generate tokens. We're going to continue to invest in compute capacity from this point out, fundamentally because every single company depends on software, every piece of software will depend on AI, and every company will produce tokens. That's the reason why I call them AI factories. Whether you're in data centers, you have factories for your revenues; if you're an enterprise software company, you're going to generate tokens for the agentic systems that are on top of your tools.

Jensen Huang: If you are building robotics and self-driving cars, the first indication of that is you have huge supercomputers, which are basically AI factories, to generate the tokens that go into your cars and become their AI. You also have to put computers inside the cars to continuously generate tokens. We're fairly sure now that this is the future of computing. Why is it so certain that this is the future of computing? The reason is that the way we used to do software was prerecorded. Everything was captured a priori. We precompile the software, we pre-write the content, we pre-record the videos, but now everything is generative in real time.

Jensen Huang: When it's generated in real time, it can take into account the context of the person, the situation, the query, and the intentions; all of that can be taken into consideration to generate the outcome of this new software we call agentic AI. The amount of computation necessary is far greater than for the prerecorded way. You know, just as a computer has a lot more computation capability than a DVD player playing prerecorded content, artificial intelligence needs a lot more computing capability than the way we used to do software in the past. The question about computation, about sustainability: at the first level, just at the computer science level, this is the way computing is going to be done. From an industrial level, because all of our companies...

Jensen Huang: ...in the final analysis, are powered by software, and the cloud companies are powered by software. If the new software requires tokens to be generated and the tokens are monetized, then it stands to reason that their data center build-out directly drives their revenues. Compute drives revenues, and I think they all understand that. I think people are increasingly starting to understand that as well. Lastly, you know, the benefits that AI produces for the world ultimately have to generate revenues. We're seeing it right in front of us, being developed as we stand here: agentic AI has turned an inflection point, and it literally happened in the last two or three months.

Jensen Huang: Of course, inside the industry, we've been seeing it for a while, you know, probably six months or so, but the world has now awakened to the agentic AI inflection. The agents are super smart. They're solving real problems. Coding is obviously supported by agentic systems now, and all of our coders here at NVIDIA are using agentic systems, either Claude Code or OpenAI Codex, enormously, and oftentimes both, and Cursor, oftentimes all three, depending on the use case. They have agents as co-design partners and engineering partners to help them solve problems. You can see their revenues skyrocketing. You know, these companies, in the case of Anthropic, I think their revenues grew 10x in a year, and they are severely capacity constrained because demand is just incredible, and the token demand is incredible.

Jensen Huang: The token generation rate is growing exponentially. The same thing is true, of course, for OpenAI; their demand is incredible. The more compute that they can stand up, bring online, the faster their revenues will grow. That goes back to the comment I was making, that inference is revenues, that compute equals revenues now in this new world. In a lot of ways, that's the reason why we say it's a new industrial revolution. There are new factories, new infrastructure being built, and this new way of doing computing is not gonna go back.

Jensen Huang: To the extent that we believe that producing tokens is going to be the future of computing, which I believe, and I think largely the industry believes, then we're gonna be building out this capacity from this point forward and continue to expand from here. Now, the wave that we're seeing now is the agentic AI inflection, and the next inflection beyond that is physical AI, where we take AI and these agentic systems into the physical applications such as manufacturing, such as robotics. That's a giant opportunity ahead. Okay.

Operator: This concludes the question and answer session. I'll turn the call to Toshiya Hari.

Toshiya Hari: In closing, please note Jensen will be participating in a fireside chat at the Morgan Stanley TMT Conference in San Francisco on 4 March. He'll also be giving a keynote at GTC in San Jose on 16 March. Our earnings call to discuss the results of our Q1 of fiscal 2027 is scheduled for 20 May. Thank you for joining us today. Operator, please go ahead and close the call.

Operator: Thank you. This concludes today's conference call. You may now disconnect.

Q4 2026 NVIDIA Corp Earnings Call (NVDA), Wednesday, February 25th, 2026 at 10:00 PM
