Q2 2026 Microsoft Corp Earnings Call
Operator: All participants are in a listen-only mode. A question-and-answer session will follow the formal presentation. If anyone should require operator assistance, please press star zero on your telephone keypad. As a reminder, this conference is being recorded. It is now my pleasure to introduce Jonathan Neilson, Vice President of Investor Relations. Please go ahead.
Jonathan Neilson: Good afternoon, and thank you for joining us today. On the call with me are Satya Nadella, Chairman and Chief Executive Officer; Amy Hood, Chief Financial Officer; Alice Jolla, Chief Accounting Officer; and Keith Dolliver, Corporate Secretary and Deputy General Counsel. On the Microsoft Investor Relations website, you can find our earnings press release and financial summary slide deck, which is intended to supplement our prepared remarks during today's call and provides a reconciliation of differences between GAAP and non-GAAP financial measures. More detailed outlook slides will be available on the Microsoft Investor Relations website when we provide outlook commentary on today's call. On this call, we will discuss certain non-GAAP items. The non-GAAP financial measures provided should not be considered as a substitute for or superior to the measures of financial performance prepared in accordance with GAAP.
Jonathan Neilson: They are included as additional clarifying items to aid investors in further understanding the company's Q2 performance, in addition to the impact these items and events have on the financial results. All growth comparisons we make on the call today relate to the corresponding period of last year unless otherwise noted. We will also provide growth rates in constant currency, when available, as a framework for assessing how our underlying businesses performed, excluding the effect of foreign currency rate fluctuations. Where growth rates are the same in constant currency, we will refer to the growth rates only. We will post our prepared remarks to our website immediately following the call until the complete transcript is available. Today's call is being webcast live and recorded. If you ask a question, it will be included in our live transmission, in the transcript, and in any future use of the recording.
Jonathan Neilson: You can replay the call and view the transcript on the Microsoft Investor Relations website. During this call, we will be making forward-looking statements, which are predictions, projections, or other statements about future events. These statements are based on current expectations and assumptions that are subject to risks and uncertainties. Actual results could materially differ because of factors discussed in today's earnings press release, in the comments made during this conference call, and in the risk factor section of our Form 10-K, Forms 10-Q, and other reports and filings with the Securities and Exchange Commission. We do not undertake any duty to update any forward-looking statement. With that, I'll turn the call over to Satya.
Satya Nadella: Thank you very much, Jonathan. This quarter, the Microsoft Cloud surpassed $50 billion in revenue for the first time, up 26% year-over-year, reflecting the strength of our platform and accelerating demand. We are in the beginning phases of AI diffusion and its broad GDP impact. Our TAM will grow substantially across every layer of the tech stack as this diffusion accelerates and spreads. In fact, even in these early innings, we have built an AI business that is larger than some of our biggest franchises that took decades to build. Today, I'll focus my remarks across the three layers of our stack: cloud and token factory, agent platform, and high-value agentic experiences. When it comes to our cloud and token factory, the key to long-term competitiveness is shaping our infrastructure to support new high-scale workloads.
Satya Nadella: We are building this infrastructure out for the heterogeneous and distributed nature of these workloads, ensuring the right fit with the geographic and segment-specific needs for all customers, including the long tail. The key metric we are optimizing for is tokens per watt per dollar, which comes down to increasing utilization and decreasing TCO using silicon, systems, and software. A good example of this is the 50% increase in throughput we were able to achieve in one of our highest volume workloads, OpenAI inferencing, powering our Copilots. Another example was the unlocking of new capabilities and efficiencies for our Fairwater data centers. In this instance, we connected both Atlanta and Wisconsin sites through an AI WAN to build a first-of-its-kind AI super factory. Fairwater's two-story design and liquid cooling allow us to run higher GPU densities and thereby improve both performance and latencies for high-scale training.
Satya Nadella: All up, we added nearly 1 gigawatt of total capacity this quarter alone. At the silicon layer, we have NVIDIA and AMD and our own Maia chips delivering the best all-up fleet performance, cost, and supply across multiple generations of hardware. Earlier this week, we brought online our Maia 200 accelerator. Maia 200 delivers 10+ petaflops at FP4 precision with over 30% improved TCO compared to the latest generation hardware in our fleet. We will be scaling this, starting with inferencing and synthetic data gen for our superintelligence team, as well as doing inferencing for Copilot and Foundry. And given AI workloads are not just about AI accelerators but also consume large amounts of compute, we are pleased with the progress we are making on the CPU side as well.
Satya Nadella: Cobalt 200 is another big leap forward, delivering over 50% higher performance compared to our first custom-built processor for cloud-native workloads. Sovereignty is increasingly top of mind for customers, and we are expanding our solutions and global footprint to match. We announced data center investments in seven countries this quarter alone, supporting local data residency needs. We offer the most comprehensive set of sovereignty solutions across public, private, and national partner clouds, so customers can choose the right approach for each workload with the local control they require. Next, I want to talk about the agent platform. Like in every platform shift, all software is being rewritten. A new app platform is being born. You can think of agents as the new apps.
Satya Nadella: To build, deploy, and manage agents, customers will need a model catalog, tuning services, a harness for orchestration, services for context engineering, AI safety, management, observability, and security. It starts with having broad model choice. Our customers expect to use multiple models as part of any workload that they can fine-tune and optimize based on cost, latency, and performance requirements. We offer the broadest selection of models of any hyperscaler. This quarter, we added support for GPT-5.2 as well as Claude 4.5. Already, over 1,500 customers have used both Anthropic and OpenAI models on Foundry. We are seeing increasing demand for region-specific models, including Mistral and Cohere, as more customers look for sovereign AI choices. We continue to invest in our first-party models, which are optimized to address the highest-value customer scenarios, such as productivity, coding, and security.
Satya Nadella: As part of Foundry, we also give customers the ability to customize and fine-tune models. Increasingly, customers want to be able to capture the tacit knowledge they possess inside of model weights as their core IP. This is probably the most important sovereign consideration for firms, as AI diffuses more broadly across our GDP, and every firm needs to protect their enterprise value. For agents to be effective, they need to be grounded in enterprise data and knowledge. That means connecting their agents to systems of record and operational data, analytical data, as well as semi-structured and unstructured productivity and communications data. And this is what we are doing with our Unified IQ layer spanning Fabric, Foundry, and data powering Microsoft 365. In the world of context engineering, Foundry knowledge and Fabric are gaining momentum.
Satya Nadella: Foundry knowledge delivers better context with automated source routing and advanced agentic retrieval while respecting user permissions. Fabric brings together end-to-end operational, real-time, and analytical data. Two years since it became broadly available, Fabric's annual revenue run rate is now over $2 billion with over 31,000 customers. It continues to be the fastest-growing analytics platform on the market, with revenue up 60% year over year. All up, the number of customers spending $1 million-plus per quarter on Foundry grew nearly 80%, driven by strong growth in every industry. Over 250 customers are on track to process over 1 trillion tokens on Foundry this year. There are many great examples of customers using all of this capability on Foundry to build their own agentic systems. Alaska Airlines is creating natural language flight search. BMW is speeding up design cycles. Land O'Lakes is enabling precision farming for co-op members.
Satya Nadella: SymphonyAI is addressing bottlenecks in the CPG industry. Of course, Foundry remains a powerful on-ramp for the entire cloud. The vast majority of Foundry customers use additional Azure solutions like developer services, app services, and databases as they scale. Beyond Fabric and Foundry, we are also addressing agent building by knowledge workers with Copilot Studio and Agent Builder. Over 80% of the Fortune 500 have active agents built using these low-code, no-code tools. As agents proliferate, every customer will need new ways to deploy, manage, and protect them. We believe this creates a major new category and significant growth opportunity for us. This quarter, we introduced Agent 365, which makes it easy for organizations to extend their existing governance, identity, security, and management to agents.
SymphonyAI is addressing bottlenecks in the CPG industry. Of course, Foundry remains a powerful on-ramp for the entire cloud. The vast majority of Foundry customers use additional Azure solutions like developer services, app services, databases as they scale. Beyond Fabric and Foundry, we are also addressing agent building by knowledge workers with Copilot Studio and Agent Builder. Over 80% of the Fortune 500 have active agents built using these low-code, no-code tools. As agents proliferate, every customer will need new ways to deploy, manage, and protect them. We believe this creates a major new category and significant growth opportunity for us. This quarter, we introduced Agent 365, which makes it easy for organizations to extend their existing governance, identity, security, and management to agents.
Satya Nadella: It delivers powerful reasoning capabilities over people, their roles, their artifacts, their communications, and their history and memory, all within an organization's security boundary. Microsoft 365 Copilot's accuracy and latency, powered by Work IQ, is unmatched, delivering faster and more accurate work-grounded results than the competition. We have seen our biggest quarter-over-quarter improvement in response quality to date. This has driven record usage intensity, with the average number of conversations per user doubling year-over-year. Microsoft 365 Copilot also is becoming a true daily habit, with daily active users increasing 10x year-over-year. We are also seeing strong momentum with Researcher Agent, which supports both OpenAI and Claude, as well as agent mode in Excel, PowerPoint, and Word. All up, it was a record quarter for Microsoft 365 Copilot seat adds, up over 160% year-over-year.
Satya Nadella: We saw accelerating seat growth quarter-over-quarter and now have 15 million paid Microsoft 365 Copilot seats and multiples more enterprise chat users. We are seeing larger commercial deployments. The number of customers with over 35,000 seats tripled year-over-year. Fiserv, ING, NASA, University of Kentucky, University of Manchester, U.S. Department of the Interior, and Westpac all purchased over 35,000 seats. Publicis alone purchased over 95,000 seats for nearly all its employees. We are also taking share in Dynamics 365 with built-in agents across the entire suite. A great example of this is how Visa is turning customer conversation data into knowledge articles with our customer knowledge management agent in Dynamics, and how Sandvik is using our sales qualification agent to automate lead qualification across tens of thousands of potential customers. In coding, we are seeing strong growth across all paid GitHub Copilot.
Satya Nadella: Copilot Pro Plus subs for individual devs increased 77% quarter-over-quarter, and all up, we now have 4.7 million paid Copilot subscribers, up 75% year-over-year. Siemens, for example, is going all in on GitHub, adopting the full platform to increase developer productivity after a successful Copilot rollout to 30,000+ developers. GitHub Agent HQ is the organizing layer for all coding agents like Anthropic, OpenAI, Google, Cognition, and xAI in the context of customers' GitHub repos. With Copilot CLI and VS Code, we offer developers the full spectrum of form factors and models they need for AI-first coding workflows. And when you add Work IQ as a skill or an MCP to our developer workflow, it's a game changer, surfacing more context like emails, meetings, docs, projects, messages, and more.
Satya Nadella: You can simply ask the agent to plan and execute changes to your code base based on an update to a spec in SharePoint or using the transcript of your last engineering and design meeting in Teams. We are going beyond that with the GitHub Copilot SDK. Developers can now embed the same runtime behind Copilot CLI (multi-model, multi-step planning tools, MCP integration, auth, streaming) directly into their applications. In security, we added a dozen new and updated Security Copilot agents across Defender, Entra, Intune, and Purview. For example, Icertis's SOC team used a Security Copilot agent to reduce manual triage time by 75%, which is a real game changer in an industry facing a severe talent shortage.
Satya Nadella: To make it easier for security teams to onboard, we are rolling out Security Copilot to all our E5 customers, and our security solutions are also becoming essential to manage organizations' AI deployments. 24 billion Copilot interactions were audited by Purview this quarter, up 9x year-over-year. Finally, I want to talk about two additional high-impact agentic experiences. First, in healthcare, Dragon Copilot is the leader in its category, helping over 100,000 medical providers automate their workflows. Mount Sinai Health is now moving to a system-wide Dragon Copilot deployment for providers after a successful trial with its primary care physicians. All up, we helped document 21 million patient encounters this quarter, up 3x year-over-year. And second, when it comes to science and engineering, companies like Unilever in consumer goods and Synopsys in EDA are using Microsoft Discovery to orchestrate specialized agents for R&D end-to-end.
Satya Nadella: They're able to reason over scientific literature and internal knowledge, formulate hypotheses, spin up simulations, and continuously iterate to drive new discoveries. Beyond AI, we continue to invest in all our core franchises and meet the needs of our customers and partners, and we are seeing strong progress. For example, when it comes to cloud migrations, our new SQL Server has over 2x the IaaS adoption of the previous version. In security, we now have 1.6 million security customers, including over 1 million who use four or more of our workloads. Windows reached a big milestone: 1 billion Windows 11 users, up over 45% year-over-year. And we had share gains this quarter across Windows, Edge, and Bing. We saw double-digit member growth in LinkedIn, with 30% growth in paid video ads.
Satya Nadella: In gaming, we are committed to delivering great games across Xbox, PC, cloud, and every other device, and we saw record PC players and paid streaming hours on Xbox. In closing, we feel very good about how we are delivering for customers today and building the full stack to capture the opportunity ahead. With that, let me turn it over to Amy to walk through our financial results and outlook, and I look forward to rejoining for your questions.
Amy Hood: Thank you, Satya, and good afternoon, everyone. With growing demand for our offerings and focused execution by our sales teams, we again exceeded expectations across revenue, operating income, and earnings per share while investing to fuel long-term growth. This quarter, revenue was $81.3 billion, up 17% and 15% in constant currency. Gross margin dollars increased 16% and 14% in constant currency, while operating income increased 21% and 19% in constant currency. Earnings per share was $4.14, an increase of 24% and 21% in constant currency when adjusted for the impact from our investment in OpenAI. FX increased reported results slightly less than expected, particularly in Intelligent Cloud revenue.
Amy Hood: Company gross margin percentage was 68%, down slightly year-over-year, primarily driven by continued investments in AI infrastructure and growing AI product usage that was partially offset by ongoing efficiency gains, particularly in Azure and M365 Commercial Cloud, as well as sales mix shift to higher-margin businesses. Operating expenses increased 5% and 4% in constant currency, driven by R&D investments in compute capacity and AI talent, as well as impairment charges in our gaming business. Operating margins increased year-over-year to 47%, ahead of expectations. As a reminder, we still account for our investment in OpenAI under the equity method.
Amy Hood: As a result of OpenAI's recapitalization, we now record gains or losses based on our share of the change in their net assets on their balance sheet, as opposed to our share of their operating profit or losses from their income statement. Therefore, we recorded a gain, which drove other income and expense to $10 billion in our GAAP results. When adjusted for the OpenAI impact, other income and expense was slightly negative and lower than expected, driven by net losses on investments. Capital expenditures were $37.5 billion, and this quarter, roughly two-thirds of our CapEx was on short-lived assets, primarily GPUs and CPUs. Our customer demand continues to exceed our supply.
Amy Hood: Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation, and continued replacement of end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond. This quarter, total finance leases were $6.7 billion and were primarily for large data center sites, and cash paid for PP&E was $29.9 billion. Cash flow from operations was $35.8 billion, up 60%, driven by strong cloud billings and collections. Free cash flow was $5.9 billion and decreased sequentially, reflecting the higher cash capital expenditures from a lower mix of finance leases.
Amy Hood: And finally, we returned $12.7 billion to shareholders through dividend and share repurchases, an increase of 32% year-over-year. Now to our commercial results. Commercial bookings increased 230% and 228% in constant currency, driven by the previously announced large Azure commitment from OpenAI that reflects multiyear demand needs, as well as the previously announced Anthropic commitment from November and healthy growth across our core annuity sales motions. Commercial remaining performance obligation, which continues to be reported net of reserves, increased to $625 billion and was up 110% year-over-year, with a weighted average duration of approximately 2.5 years. Roughly 25% will be recognized in revenue in the next 12 months, up 39% year-over-year.
Amy Hood: The remaining portion, recognized beyond the next 12 months, increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio. Microsoft Cloud revenue was $51.5 billion and grew 26% and 24% in constant currency. Microsoft Cloud gross margin percentage was slightly better than expected at 67% and down year-over-year due to continued investments in AI that were partially offset by ongoing efficiency gains noted earlier. Now to our segment results. Revenue from productivity and business processes was $34.1 billion and grew 16% and 14% in constant currency. M365 Commercial Cloud revenue increased 17% and 14% in constant currency, with consistent execution in the core business and increasing contribution from strong Copilot results.
The remaining portion, recognized beyond the next 12 months, increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio. Microsoft Cloud revenue was $51.5 billion and grew 26% and 24% in constant currency.
Roughly 25% will be recognized in revenue in the next 12 months, up 39% year-over-year. The remaining portion recognized beyond the next 12 months increased 156%.
Microsoft cloud Revenue was 51.5 billion and grew 26% in 24% in constant currency. Microsoft, cloud, gross margin percentage was slightly better than expected, at 67% and down year-over-year due to continued investments in AI that were partially offset by ongoing efficiency gains noted earlier. Now to our segment results revenue from productivity and business processes was 34.1 billion and grew 16% and 14% in constant currency. M365, commercial Cloud, Revenue, increased 17% and 14% in constant currency with consistent execution in the core business and increasing contribution from strong co-pilot results.
Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing, broad customer demand across the portfolio.
Microsoft Cloud gross margin percentage was slightly better than expected at 67% and down year-over-year due to continued investments in AI that were partially offset by ongoing efficiency gains noted earlier. Now to our segment results. Revenue from productivity and business processes was $34.1 billion and grew 16% and 14% in constant currency. M365 Commercial Cloud revenue increased 17% and 14% in constant currency, with consistent execution in the core business and increasing contribution from strong Copilot results.
Amy Hood: This quarter, total finance leases were $6.7 billion and were primarily for large data center sites. Cash paid for PP&E was $29.9 billion. Cash flow from operations was $35.8 billion, up 60%, driven by strong cloud billings and collections. Free cash flow was $5.9 billion and decreased sequentially, reflecting the higher cash capital expenditures from a lower mix of finance leases. Finally, we returned $12.7 billion to shareholders through dividend and share repurchases, an increase of 32% year over year. Now to our commercial results. Commercial bookings increased 230% and 228% in constant currency, driven by the previously announced large Azure commitment from OpenAI that reflects multi-year demand needs, as well as the previously announced Anthropic commitment from November, and healthy growth across our core annuity sales motions.
Amy Hood: Commercial remaining performance obligation, which continues to be reported net of reserves, increased to $625 billion and was up 110% year-over-year, with a weighted average duration of approximately two and a half years. Roughly 25% will be recognized in revenue in the next 12 months, up 39% year-over-year. The remaining portion recognized beyond the next 12 months increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio. Microsoft Cloud revenue was $51.5 billion and grew 26% and 24% in constant currency. Microsoft Cloud gross margin percentage was slightly better than expected at 67% and down year-over-year due to continued investments in AI that were partially offset by ongoing efficiency gains noted earlier. Now to our segment results.
Amy Hood: Revenue from productivity and business processes was $34.1 billion and grew 16% and 14% in constant currency. M365 Commercial Cloud revenue increased 17% and 14% in constant currency, with consistent execution in the core business and increasing contribution from strong Copilot results. ARPU growth was again led by E5 and M365 Copilot, and paid M365 commercial seats grew 6% year-over-year to over 450 million, with installed base expansion across all customer segments, though primarily in our small and medium business and frontline worker offerings. M365 Commercial Products revenue increased 13% and 10% in constant currency, ahead of expectations, due to higher-than-expected Office 2024 transactional purchasing. M365 Consumer Cloud revenue increased 29% and 27% in constant currency, again driven by ARPU growth. M365 consumer subscriptions grew 6%. LinkedIn revenue increased 11% and 10% in constant currency, driven by marketing solutions. Dynamics 365 revenue increased 19% and 17% in constant currency, with continued growth across all workloads.
Amy Hood: Segment gross margin dollars increased 17% and 15% in constant currency, and gross margin percentage increased, again, driven by efficiency gains at M365 Commercial Cloud that were partially offset by continued investments in AI, including the impact of growing Copilot usage. Operating expenses increased 6% and 5% in constant currency, and operating income increased 22% and 19% in constant currency. Operating margins increased year-over-year to 60%, driven by improved operating leverage as well as the higher gross margins noted earlier. Next, the Intelligent Cloud segment. Revenue was $32.9 billion and grew 29% and 28% in constant currency.
Amy Hood: In Azure and other cloud services, revenue grew 39% and 38% in constant currency, slightly ahead of expectations, with ongoing efficiency gains across our fungible fleet, enabling us to reallocate some capacity to Azure that was monetized in the quarter. As mentioned earlier, we continued to see strong demand across workloads, customer segments, and geographic regions, and demand continues to exceed available supply. In our on-premises server business, revenue increased 2% and 1% in constant currency ahead of expectations, driven by demand for our hybrid solutions, including a benefit from the launch of SQL Server 2025, as well as higher transactional purchasing ahead of memory price increases. Segment gross margin dollars increased 20% and 19% in constant currency. Gross margin percentage decreased year-over-year, driven by continued investments in AI and sales mix shift to Azure, partially offset by efficiency gains in Azure.
Amy Hood: Operating expenses increased 3% and 2% in constant currency, and operating income grew 28% and 27% in constant currency. Operating margins were 42%, down slightly year-over-year, as increased investments in AI were mostly offset by improved operating leverage. Now to more personal computing. Revenue was $14.3 billion and declined 3%. Windows OEM and devices revenue increased 1% and was relatively unchanged in constant currency. Windows OEM grew 5% with strong execution, as well as a continued benefit from Windows 10 end of support. Results were ahead of expectations as inventory levels remained elevated, with increased purchasing ahead of memory price increases. Search and news advertising revenue, ex TAC, increased 10% and 9% in constant currency, slightly below expectations, driven by some execution challenges. As expected, the sequential growth rate moderated as the benefit from third-party partnerships normalized.
Amy Hood: In gaming, revenue decreased 9% and 10% in constant currency. Xbox content and services revenue decreased 5% and 6% in constant currency, and was below expectations, driven by first-party content with impact across the platform. Segment gross margin dollars increased 2% and 1% in constant currency, and gross margin percentage increased year-over-year, driven by sales mix shift to higher margin businesses. Operating expenses increased 6% and 5% in constant currency, driven by the impairment charges in our gaming business noted earlier, as well as R&D investments in compute capacity and AI talent. Operating income decreased 3% and 4% in constant currency, and operating margins were relatively unchanged year-over-year at 27%, as higher operating expenses were mostly offset by higher gross margins. Now, moving to our Q3 outlook, which, unless specifically noted otherwise, is on a US dollar basis.
Amy Hood: Based on current rates, we expect FX to increase total revenue growth by 3 points. Within the segments, we expect FX to increase revenue growth by 4 points in productivity and business processes and 2 points in intelligent cloud and more personal computing. We expect FX to increase COGS and operating expense growth by 2 points. As a reminder, this impact is due to the exchange rates a year ago. Starting with the total company, we expect revenue of $80.65 to 81.75 billion, or growth of 15% to 17%, with continued strong growth across our commercial businesses, partially offset by our consumer businesses.
Amy Hood: We expect COGS of $26.65 to 26.85 billion, or growth of 22% to 23%, and operating expense of $17.8 to 17.9 billion, or growth of 10% to 11%, driven by continued investment in R&D, AI compute capacity, and talent against a low prior year comparable. Operating margins should be down slightly year-over-year. Excluding any impact from our investments in OpenAI, other income and expense is expected to be roughly $700 million, driven by a fair market gain in our equity portfolio and interest income, partially offset by interest expense, which includes the interest payments related to data center finance leases. We expect our adjusted Q3 effective tax rate to be approximately 19%.
Amy Hood: Next, we expect capital expenditures to decrease on a sequential basis due to the normal variability from cloud infrastructure build-outs and the timing of delivery of finance leases. As we work to close the gap between demand and supply, we expect the mix of short-lived assets to remain similar to Q2. Now, our commercial business. In commercial bookings, we expect healthy growth in the core business on a growing ex-OpenAI base when adjusted for the OpenAI contracts in the prior year. As a reminder, the significant OpenAI contract signed in Q2 represents multiyear demand needs from them, which will result in some quarterly volatility in both bookings and RPO growth rates going forward. Microsoft Cloud gross margin percentage should be roughly 65%, down year-over-year, driven by continued investments in AI. Now to segment guidance.
Amy Hood: In productivity and business processes, we expect revenue of $34.25 to 34.55 billion, or growth of 14 to 15%. In M365 Commercial Cloud, we expect revenue growth to be between 13 and 14% in constant currency, with continued stability in year-over-year growth rates on a large and expanding base. Accelerating Copilot momentum and ongoing E5 adoption will again drive ARPU growth. M365 commercial products revenue should decline in the low single digits, down sequentially, assuming Office 2024 transactional purchasing trends normalize. As a reminder, M365 commercial products include components that can be variable due to in-period revenue recognition dynamics. M365 consumer cloud revenue growth should be in the mid- to high-20% range, driven by growth in ARPU, as well as continued subscription volume.
Amy Hood: For LinkedIn, we expect revenue growth to be in the low double digits, and in Dynamics 365, we expect revenue growth to be in the high teens, with continued growth across all workloads. For Intelligent Cloud, we expect revenue of $34.1 to 34.4 billion, or growth of 27 to 29%. In Azure, we expect Q3 revenue growth to be between 37 and 38% in constant currency against a prior year comparable that included significantly accelerating growth rates in both Q3 and Q4. As mentioned earlier, demand continues to exceed supply, and we will need to continue to balance the incoming supply we can allocate here against other priorities.
Amy Hood: As a reminder, there can be quarterly variability in year-on-year growth rates, depending on the timing of capacity delivery and when it comes online, as well as from in-period revenue recognition, depending on the mix of contracts. In our on-premises server business, we expect revenue to decline in the low single digits as growth rates normalize following the launch of SQL Server 2025, though increased memory pricing could create additional volatility in transactional purchasing. In More Personal Computing, we expect revenue to be $12.3 to 12.8 billion. Windows OEM and devices revenue should decline in the low teens. Growth rates will be impacted as the benefit from Windows 10 end of support normalizes and as elevated inventory levels come down through the quarter. Therefore, Windows OEM revenue should decline roughly 10%.
Amy Hood: The range of potential outcomes remains wider than normal, in part due to the potential impact on the PC market from increased memory pricing. Search and news advertising ex-TAC revenue growth should be in the high single digits. Even as we work to improve execution, we expect continued share gains across Bing and Edge, with growth driven by volume, and we expect sequential growth moderation as the contribution from third-party partnerships continues to normalize. In Xbox content and services, we expect revenue to decline in the mid-single digits against a prior year comparable that benefited from strong content performance, partially offset by growth in Xbox Game Pass, and hardware revenue should decline year-over-year. Now, some additional thoughts on the rest of the fiscal year and beyond. First, FX.
Amy Hood: Based on current rates, we expect FX to increase Q4 total revenue and COGS growth by less than 1 point, with no impact to operating expense growth. Within the segments, we expect FX to increase revenue growth by roughly 1 point in Productivity and Business Processes and More Personal Computing, and less than 1 point in Intelligent Cloud. With the strong work delivered in H1 to prioritize investment in key growth areas and the favorable impact from a higher mix of revenue in our Windows OEM and commercial on-prem businesses, we now expect FY 26 operating margins to be up slightly. We mentioned the potential impact on Windows OEM and on-premises server markets from increased memory pricing earlier. In addition, rising memory prices would impact capital expenditures, though the impact on Microsoft Cloud gross margins will build more gradually as these assets depreciate over 6 years.
Based on current rates, we expect FX to increase Q4 total revenue and COGS growth by less than 1 point, with no impact to operating expense growth. Within the segments, we expect FX to increase revenue growth by roughly 1 point in Productivity and Business Processes and More Personal Computing, and less than 1 point in Intelligent Cloud.
Now, some additional thoughts on the rest of the fiscal year and beyond. First, FX.
With a strong work delivered in H1 to prioritize investment in key growth areas and the favorable impact from a higher mix of Revenue in our Windows OEM and Commercial on-prem businesses. We now expect FY, 26, operating margins to be up slightly. We mentioned the potential impact on Windows, OEM and on premises server markets from increased memory pricing earlier, in addition, Rising memory, prices would impact Capital expenditures though, the impact on Microsoft, cloud. Gross margins will build more gradually as these assets depreciate over 6 years.
In closing.
Amy Hood: We will need to continue to balance the incoming supply we can allocate here against other priorities. As a reminder, there can be quarterly variability in year-on-year growth rates depending on the timing of capacity delivery and when it comes online, as well as from in-period revenue recognition depending on the mix of contracts. In our on-premises server business, we expect revenue to decline in the low single digits as growth rates normalize following the launch of SQL Server 2025, though increased memory pricing could create additional volatility in transactional purchasing. In More Personal Computing, we expect revenue to be $12.3 to $12.8 billion. Windows OEM and devices revenue should decline in the low teens. Growth rates will be impacted as the benefit from Windows 10 end-of-support normalizes and as elevated inventory levels come down through the quarter. Therefore, Windows OEM revenue should decline roughly 10%.
The range of potential outcomes remains wider than normal, in part due to the potential impact on the PC market from increased memory pricing. Search and news advertising ex-TAC revenue growth should be in the high single digits. Even as we work to improve execution, we expect continued share gains across Bing and Edge, with growth driven by volume, and we expect sequential growth moderation as the contribution from third-party partnerships continues to normalize. In Xbox content and services, we expect revenue to decline in the mid-single digits against a prior year comparable that benefited from strong content performance, partially offset by growth in Xbox Game Pass, and hardware revenue should decline year-over-year.

Now, some additional thoughts on the rest of the fiscal year and beyond. First, FX. Based on current rates, we expect FX to increase Q4 total revenue and COGS growth by less than one point, with no impact to operating expense growth. Within the segments, we expect FX to increase revenue growth by roughly one point in Productivity and Business Processes and More Personal Computing, and by less than one point in Intelligent Cloud. With the strong work delivered in H1 to prioritize investment in key growth areas and the favorable impact from a higher mix of revenue in our Windows OEM and commercial on-prem businesses, we now expect FY26 operating margins to be up slightly. We mentioned the potential impact on Windows OEM and on-premises server markets from increased memory pricing earlier. In addition, rising memory prices would impact capital expenditures, though the impact on Microsoft Cloud gross margins will build more gradually as these assets depreciate over six years.
Amy Hood: In closing, we delivered strong top-line growth in H1 and are investing across every layer of the stack to continue to deliver high-value solutions and tools to our customers. With that, let's go to Q&A. Jonathan?
Jonathan Neilson: Thanks, Amy. We'll now move over to Q&A. Out of respect for others on the call, we request that participants please only ask one question. Operator, can you please repeat your instructions?
Operator: Thank you. Ladies and gentlemen, if you would like to ask a question, please press star one on your telephone keypad, and a confirmation tone will indicate your line is in the question queue. You may press star two if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. Our first question comes from the line of Keith Weiss with Morgan Stanley. Please proceed.
Keith Weiss: Excellent. Thank you, guys, for taking the question. I'm looking at a Microsoft print where earnings are growing 24% year-over-year, which is a spectacular result. Great execution on your part. Top line growing well, margins expanding. But I'm looking at after-hours trading, and the stock is still down. I think one of the core issues that is weighing on investors is CapEx is growing faster than we expected, and maybe Azure is growing a little bit slower than we expected. I think that fundamentally comes down to a concern on the ROI on this CapEx spend over time. So I was hoping you guys could help us fill in some of the blanks a little bit in terms of how we should think about capacity expansion and what that can yield in terms of Azure growth going forward. But more to the point, how should we think about the ROI on this investment as it comes to fruition? Thanks, guys.
Amy Hood: Thanks, Keith, and let me start, and Satya can add some broader comments, I'm sure. I think the first thing you really asked about is a very direct correlation that I do think many investors are drawing, which is between the CapEx spend and seeing an Azure revenue number. You know, we tried last quarter, and I think again this quarter, to talk more specifically about all the places that the CapEx spend, especially the short-lived CapEx spend across CPU and GPU, will show up. Sometimes I think it's probably better to think about the Azure guidance that we give as an allocated capacity guide about what we can deliver in Azure revenue. Because as we spend the capital and put in GPUs specifically (it applies to CPUs, but GPUs more specifically), we're really making long-term decisions.
Amy Hood: And the first thing we're doing is solving for the increased usage and sales and the accelerating pace of M365 Copilot, as well as GitHub Copilot, our first-party apps. Then we make sure we're investing in the long-term nature of R&D and product innovation. Much of the acceleration that I think you've seen from us in products over the past bit is coming because we are allocating GPUs and capacity to many of the talented AI people we've been hiring over the past years. Then what you end up with is the remainder going towards serving the Azure capacity that continues to grow in terms of demand.
Amy Hood: And a way to think about it, because I think I get asked this question sometimes, is, you know, if I had taken the GPUs that just came online in Q1 and Q2 and allocated them all to Azure, the KPI would have been over 40. And I think the most important thing to realize is that this is about investing in all the layers of the stack that benefit customers. And I think that's hopefully helpful in terms of thinking about capital growth. It shows in every piece: it shows in revenue growth across the business, and it shows as OpEx growth as we invest in our people.
Satya Nadella: Yeah, I think you, Amy, covered it. But basically, as an investor, I think when you think about our capital, and you think about the GM profile of our portfolio, you should obviously think about Azure, but you should think about M365 Copilot, and you should think about GitHub Copilot, you should think about Dragon Copilot, Security Copilot. All of those have a GM profile and lifetime value. I mean, if you think about it, acquiring an Azure customer is super important to us, but so is acquiring an M365, or a GitHub, or a Dragon Copilot, which are all, by the way, incremental businesses and DAMs for us.
Satya Nadella: And so we don't want to maximize just one business of ours; we want to be able to allocate capacity, while we're sort of supply constrained, in a way that allows us to essentially build the best LTV portfolio. That's on one side, and the other one that Amy mentioned is also R&D. I mean, you've got to think about it: compute is also R&D, and that's sort of the second element of it. And so we're using all of that, obviously, to optimize for the long term.
Keith Weiss: Excellent. Thank you.
Jonathan Neilson: Thanks, Keith. Operator, next question, please.
Operator: The next question comes from the line of Mark Moerdler with Bernstein Research. Please proceed.
Mark Moerdler: Thank you very much for taking my question, and congrats on the quarter. One of the other questions we believe investors want to understand is how to think about your line of sight from hardware CapEx investment to revenue and margins. You capitalize servers over 6 years, but the average duration of your RPO is 2.5 years, up from 2 years last quarter. How do investors get comfortable that, since a lot of this CapEx is AI-centric, you'll be able to capture sufficient revenue over the 6-year useful life of the hardware to deliver solid revenue and gross profit dollar growth? Hopefully, one similar to the CPU revenue. Thank you.
Amy Hood: Thanks, Mark. Let me start at a high level, and Satya can add as well. I think, you know, when you think about average duration, what you're getting to, and what we need to remember, is that average duration is a combination of a broad set of contract arrangements that we have. A lot of them, around things like M365 or our BizApp portfolio, are shorter dated, right? Three-year contracts, and so they have, quite frankly, a short duration. The majority that's remaining, then, are Azure contracts that are longer duration, and you saw that this quarter with the extension of that duration from around two years to two and a half.
Amy Hood: And the way to think about that is, you know, the majority of the capital that we're spending today, and a lot of the GPUs that we're buying, are already contracted for most of their useful life. And so much of that risk that I think you're pointing to isn't there, because they're already sold for the entirety of their useful life. And so part of it exists because you have this shorter-dated RPO because of some of the M365 stuff. If you look at the Azure-only RPO, it's a little bit more extended. A lot of that is CPU-based; it's not just GPU.
Amy Hood: On the GPU contracts that we've talked about, including for some of our largest customers, those are sold for the entire useful life of the GPU, and so there's not the risk to which I think you may be referring. Hopefully, that's helpful.
Satya Nadella: Yeah, and just one other thing I would add, in addition to sort of what Amy mentioned, which is that it's already contracted for the useful life: we do use software to continuously run even the latest models on the fleet that is aging, if you will. So that's sort of what gives us that duration. And so, at the end of the day, that's why we even think about aging the fleet constantly, right? So it's not about buying a whole lot of gear one year; it's about each year you ride Moore's Law, you add, you use software, and then you optimize across all of it.
Amy Hood: And Mark, maybe to state this in case it's not obvious: as you go through the useful life, you actually get more and more efficient at its delivery. So where you've sold the entirety of its life, the margins actually improve with time. And so I think that may be a good reminder to people, as we see that, obviously, in the CPU fleet all the time.
Mark Moerdler: That's a great answer. I really appreciate it, and thank you.
Jonathan Neilson: Thanks, Mark. Operator, next question, please.
Operator: The next question comes from the line of Brent Thill with Jefferies. Please proceed.
Brent Thill: Thanks, Amy. On 45% of the backlog being related to OpenAI, I'm just curious if you can comment. There's obviously concern about, you know, the durability, and I know maybe there's not much you can say on this, but I think everyone's concerned about the exposure, and if you could maybe talk through your perspective on what both you and Satya are seeing.
Brent Thill: Thanks, Amy. On 45% of the backlog being related to OpenAI, I'm just curious if you can comment. There's obviously concern about, you know, the durability, and I know maybe there's not much you can say on this, but I think everyone's concerned about the exposure, and if you could maybe talk through your perspective on what both you and Satya are seeing.
I think maybe I would have thought about the question quite differently. Brent, the first thing to focus on is uh the reason we talked about that number is
Uh, thanks, Amy. Uh, on 45% of the backlog being related to OpenAI, I'm just curious if you can
Comment. There's obviously, uh, concern about, um, about the, you know, the durability. And I—I know, um, maybe there's not much you can say in this, but I think everyone's concerned about the—the—the exposure. And if you could maybe, uh, talk through, uh, your perspective and what both you and Satya are seeing.
Amy Hood: I think maybe I would have thought about the question quite differently, Brent. The first thing to focus on is the reason we talked about that number is because 55% or roughly $350 billion is related to the breadth of our portfolio, a breadth of customers across solutions, across Azure, across industries, across geographies. That is a significant RPO balance, larger than most peers, more diversified than most peers. And frankly, I think we have super high confidence in it. And when you think about that portion alone growing 28%, it's really impressive work on the breadth as well as the adoption curve that we're seeing, which is, I think, what I get asked most frequently. It's grown by customer segment, by industry, and by geo. And so it's very consistent.
Amy Hood: I think maybe I would have thought about the question quite differently, Brent. The first thing to focus on is the reason we talked about that number is because 55% or roughly $350 billion is related to the breadth of our portfolio, a breadth of customers across solutions, across Azure, across industries, across geographies. That is a significant RPO balance, larger than most peers, more diversified than most peers.
is because 55% or roughly 350 billion dollars is related to the breadth of our portfolio, a breadth of customers across Solutions across Azure across Industries across geographies. That is a significant RPO balance larger than most peers more Diversified than most peers. And frankly, uh, I think we have super high confidence in it and when you think about that portion alone growing 28%, it's really impressive work on the breadth.
I think maybe I would have thought about the question quite differently. Brent, the first thing to focus on is
Satya Nadella: Yeah. And just one other thing I would add to it, in addition to what Amy mentioned about it already being contracted for the useful life: we do use software to continuously run even the latest models on the fleet that is aging, if you will. So that's what gives us that duration. And at the end of the day, that's why we even think about aging the fleet constantly, right? It's not about buying a whole lot of gear one year; it's about each year you ride Moore's Law, you add, you use software, and then you optimize across all of it.
Amy Hood: Mark, maybe to state this in case it's not obvious: as you go through the useful life, you actually get more and more efficient at its delivery. So where you've sold the entirety of its life, the margins actually improve with time. I think that may be a good reminder to people, as we see that, obviously, in the CPU fleet all the time.
Mark Moerdler: That's a great answer. I really appreciate it. Thank you.
Jonathan Neilson: Thanks, Mark. Operator, next question, please.
Operator: The next question comes from the line of Brent Thill with Jefferies. Please proceed.
Brent Thill: Thanks, Amy. On 45% of the backlog being related to OpenAI, I'm just curious if you can comment. There's obviously concern about the durability. And I know maybe there's not much you can say on this, but I think everyone's concerned about the exposure. And if you could maybe talk through your perspective and what both you and Satya are seeing.
Amy Hood: I think maybe I would have thought about the question quite differently, Brent. The first thing to focus on is the reason we talked about that number: 55%, or roughly $350 billion, is related to the breadth of our portfolio, a breadth of customers across solutions, across Azure, across industries, across geographies. That is a significant RPO balance, larger than most peers and more diversified than most peers. And frankly, I think we have super high confidence in it. When you think about that portion alone growing 28%, it's really impressive work on the breadth as well as the adoption curve that we're seeing, which is, I think, what I get asked most frequently. It's grown by customer segment, by industry, and by geo, and so it's very consistent. And then, if you're asking me how I feel about OpenAI and the contract and the health, listen, it's a great partnership. We continue to be their provider of scale, and we're excited to do that. We sit under one of the most successful businesses built, and we continue to feel quite good about that. It's allowed us to remain a leader in terms of what we're building and being on the cutting edge of app innovation.
Jonathan Neilson: Thanks, Brent. Operator, next question, please.
Operator: The next question comes from the line of Karl Keirstead with UBS. Please proceed.
Karl Keirstead: Okay, thank you very much. Satya and Amy, regardless of how you allocate the capacity between first party and third party, can you comment qualitatively on the amount of capacity that you have coming on? I think the 1 gigawatt added in the December quarter was extraordinary and hints that the capacity adds are accelerating. But I think a lot of investors have their eyes on Fairwater Atlanta and Fairwater Wisconsin, and would love some comments about the magnitude of the capacity adds, regardless of how they're allocated, in the coming quarters. Thank you.
Amy Hood: Yeah, Karl, I think we've said a couple of things. We're working as hard as we can to add capacity as quickly as we can. You've mentioned specific sites like Atlanta or Wisconsin; those are multiyear deliveries, so I wouldn't focus necessarily on specific locations. The real thing we've got to do, and we're working incredibly hard at doing it, is adding capacity globally. A lot of that will be added in the United States, including the two locations you've mentioned, but it also needs to be added across the globe to meet the customer demand that we're seeing and the increased usage. We'll continue to add long-lived infrastructure. The way to think about that is we need to make sure we've got power, land, and facilities available, and we'll continue to put GPUs and CPUs in them when they're done, as quickly as we can. And then finally, we'll try to make sure we can get as efficient as we possibly can on the pace at which we do that and how we operate them so that they can have the highest possible utility. And so I think it's not really about two places, Karl. I would definitely abstract away from that. Those are multiyear delivery timelines, but really, we just need to get it done. Every location where we're currently in a build, or starting to do that, we're working as quickly as we can.
Karl Keirstead: Okay. Got it. Thank you.
Jonathan Neilson: Thanks, Karl. Operator, next question, please.
Operator: The next question comes from the line of Mark Murphy with JPMorgan. Please proceed.
Mark Murphy: Thank you so much. Satya, the performance achievements of the Maia 200 accelerator for inference looked quite remarkable, especially in comparison to TPUs, Trainium, and Blackwell, which have just been around a lot longer. Can you put that accomplishment in perspective in terms of how much of a core competency you think silicon might become for Microsoft? And Amy, are there any ramifications worth mentioning in terms of supporting your gross margin profile for inference costs going forward?
Satya Nadella: Yeah, no, thanks for the question. So a couple of things. One is we've been at this, in a variety of different forms, for a long, long time in terms of building our own silicon, so we're very, very thrilled about the progress with Maia 200. Especially when we think about running GPT-5.2 and the performance we're able to get in the GEMMs at FP4, it just proves the point that when you have a new workload, a new shape of a workload, you can start innovating end to end between the model, the silicon, and the entire system. It's not even just about the silicon; the way the networking works at rack scale is optimized, with memory, for this particular workload. And the other thing is we're obviously round-tripping and working very closely with our own superintelligence team. With all of our models, as you can imagine, whatever we build will be optimized for Maia. So we feel great about it, and I think the way to think about it all up is we're in such early innings. I mean, even just look at the amount of silicon innovation and systems innovation. Even since December, I think the new thing is everybody's talking about low-latency inference, right? And so one of the things we want to make sure is that we're not locked into any one thing. If anything, we have great partnerships with NVIDIA and with AMD. They're innovating, we're innovating, and we want our fleet, at any given point in time, to have access to the best TCO. And it's not a one-generation game. I think a lot of folks just talk about who's ahead, but remember, you have to be ahead for all time to come, and that means you really want the innovation that happens out there to be in your fleet so that your fleet is fundamentally advantaged at the TCO level. So that's how I look at it: we're excited about Maia, we're excited about Cobalt, we're excited about our DPUs and our NICs. We have a lot of systems capability, which means we can vertically integrate. But just because we can vertically integrate doesn't mean we only vertically integrate. We want to be able to have the flexibility here, and that's what you see us do.
Jonathan Neilson: Thanks, Mark. Operator, next question, please.
Operator: The next question comes from the line of Brad Zelnick with Deutsche Bank. Please proceed.
Brad Zelnick: Great, thank you very much. Satya, we heard a lot about frontier transformations from Judson at Ignite, and we've seen customers realize breakthrough benefits when they adopt the Microsoft AI stack. Can you help frame for us the momentum in enterprises embarking on these journeys, and any expectation for how much their spend with Microsoft can expand in becoming frontier firms? Thanks.
Satya Nadella: Yeah, thank you for that. So I think one of the things that we are seeing is the adoption across the three major suites of ours, right? So if you take M365, you take what's happening with security, and you take GitHub, it's fascinating. These three things had effectively compounding effects for our customers in the past; something like Entra as an identity system, or Defender as the protection system across all three, was super helpful. But what you're now seeing is something like WorkIQ, right? Just to give you a flavor for it, the most important database underneath, for any company that uses Microsoft today, is the data underneath Microsoft 365, and the reason is that it has all this tacit information: who are your people, what are their relationships, what are the projects they're working on, what are their artifacts, their communications. So that's a super important asset for any business process, business workflow context. In fact, the scenario I even had in my transcript: you can now take WorkIQ as an MCP server in a GitHub repo and say, "Hey, please look at my design meetings for the last month in Teams and tell me if my repo reflects it." That's a pretty good illustration of how what was happening previously, perhaps with our tools business and our GitHub business, is suddenly now being transformative. That agent backplane is really transforming companies in some sense. That's, I think, the most magical thing: you deploy these things, and suddenly the agents are helping you coordinate and bring more leverage to your enterprise. Then on top of it, of course, there's the transformation, which is what businesses are doing: how should we think about customer service, how should we think about marketing, how should we think about finance, and how should we build our own agents? That's where all the services in Fabric and Foundry, and of course the GitHub tooling, are helping them, or even the low-code, no-code tools; I had some stats on how much that's being used. But one of the more exciting things for me is these new agent systems, M365 Copilot, GitHub Copilot, Security Copilot, all coming together to compound the benefits of all the data and all the deployment. That, I think, is probably the most transformative effect right now.
Brad Zelnick: Thank you. Very helpful.
Jonathan Neilson: Thanks, Brad. Operator, we have time for one last question.
Operator: The last question will come from the line of Raimo Lenschow with Barclays. Please proceed.
Raimo Lenschow: Perfect. Thanks for squeezing me in. Over the last few quarters, besides the GPU side, we've talked about CPU as well on the Azure side, and you had some operational changes at the beginning of January last year. Can you speak to what you saw there, and maybe put it in a bigger picture in terms of clients realizing that their move to the cloud is important if they want to deliver proper AI? So what are we seeing in terms of the cloud transition? Thank you.
And the last question will come from the line of Ramo Leno with Barclays. Please proceed.
Perfect. Thanks for squeezing me in. Um the last few quarters, we talked to besides the GPU side, we talked about CPU as well on the on the Azure side and you have some operational changes at the beginning uh or January last year, can you speak what you saw there and maybe put it more on the bigger picture in terms of clients realizing that they moved to the cloud is important if you want to deliver proper AI. So so what are we seeing in terms of of uh Cloud transitions? Thank you.
I I didn't quite we, sorry, you were asking about the SMC uh CPU side. Or can you just repeat the question please? Yeah.
Satya Nadella: What are their artifacts? Their communications? So that's a super important asset for any business process, business workflow context. In fact, the scenario I even had in my transcript around you can now take WorkIQ as an MCP server and in a GitHub repo and say, "Hey, please look at my design meetings for the last month in Teams and tell me if my repo reflects it." I mean, that's a pretty high-level way to think about how what was happening previously, perhaps with our tools business and our GitHub business, are suddenly now being transformative, right? That agent blackplane is really transforming companies in some sense, right? That's, I think, the most magical thing, which is you deploy these things, and suddenly the agents are helping you coordinate, bring more leverage to your enterprise.
What are their artifacts? Their communications? So that's a super important asset for any business process, business workflow context. In fact, the scenario I even had in my transcript around you can now take WorkIQ as an MCP server and in a GitHub repo and say, "Hey, please look at my design meetings for the last month in Teams and tell me if my repo reflects it." I mean, that's a pretty high-level way to think about how what was happening previously, perhaps with our tools business and our GitHub business, are suddenly now being transformative, right? That agent blackplane is really transforming companies in some sense, right? That's, I think, the most magical thing, which is you deploy these things, and suddenly the agents are helping you coordinate, bring more leverage to your enterprise.
Perfect. Thanks for squeezing me in. Um the last few quarters. We talked besides the GPU side, we talked about CPU as well on the on the Azure side and you had some operational changes at the beginning or January last year. Can you speak what you saw there and maybe put it more on a bigger picture in terms of clients realizing that they moved to the cloud. It's important if you want to deliver proper AI. So what are we seeing in terms of of uh Cloud transitions? Thank you.
Satya Nadella: I didn't quite-
Jonathan Neilson: Sorry, Raimo. You were asking about the SNC, CPU side, or can you just repeat the question, please?
Raimo Lenschow: Yeah, yeah, sorry. So I was wondering about the CPU side of Azure, because we had some operational changes there. And, you know, we also hear from the field a lot that people are realizing they need to be in the cloud if they want to do proper AI, and that's kind of driving the momentum. Thank you.
Satya Nadella: Yeah, I think I get it. So first of all, I had mentioned in my remarks that when you think about AI workloads, you shouldn't think of AI workloads as just AI accelerator compute, right? Because in some sense, take any agent: the agent will then spawn, through tool use, maybe a container, which obviously runs on compute. In fact, whenever we think about even building out the fleet, we think of it in ratios. Even for a training job, by the way, an AI training job requires a bunch of compute and a bunch of storage very close to the compute, and the same thing in inferencing as well. So inferencing with agent mode would require you to essentially provision a computer, or computing resources, to the agent. So it's not that they just need GPUs; they're running on GPUs, but they also need computers, which means compute and storage. So that's what's happening even in the new workloads.
Satya Nadella: The other thing you mentioned is that cloud migrations are still going on. In fact, one of the stats I had was our latest SQL Server growing as an IaaS service in Azure. And so that's one of the reasons why we have to think about our commercial cloud and keep it balanced with the rest of our AI cloud, because when clients bring their workloads and build new workloads, they need all of these infrastructure elements in the region in which they're deploying.
Raimo Lenschow: Yeah. Okay, perfect. Thank you.
Jonathan Neilson: Thanks, Raimo. That wraps up the Q&A portion of today's earnings call. Thank you for joining us today, and we look forward to speaking with you all soon.
Satya Nadella: Thank you all.
Jonathan Neilson: Thank you.
Operator: Thank you. This concludes today's conference. You may disconnect your lines at this time, and we thank you for your participation. Have a great night.