Let's look at NVIDIA's last quarter (Q226 in August). After months of speculation and rumors about China licenses and Rubin chip status, we finally know the reality. All in all, I thought it was a steady-as-she-goes quarter for NVIDIA that hints at a greater ramp-up of Blackwell racks over the next few quarters. While China sales didn't materialize, mgmt stands ready should the geopolitical environment open the market back up.
Lately, I've been looking deeper at the neocloud thesis, and have expanded coverage to CoreWeave, Nebius, and IREN. This included a deep dive into CoreWeave and its big whale deals, a take on its Q3 earnings and buildout delay, and a walk through all of its current and future buildouts, its funding mechanisms, and where it is going next. Deep dives into Nebius and IREN are coming next.
Join Premium for insights like this every week across AI & ML, Data & Analytics, Next-Gen Security, DevOps, SaaS platforms, and the hyperscalers. This has included a deep dive into Netskope, coverage of Q2 earnings from Figma, Rubrik, Samsara, Axon, and Cloudflare, and looks at announcements out of Samsara Beyond, Cloudflare's growth vectors from here, and Axon's new enterprise body cam & Skydio's drones.
- Last quarter mgmt noted that Blackwell Ultra would be coming in Q3 – but it showed up strong a quarter early.
- They sadly did not sell any H20s to China, though they did find a non-China customer for a big sale. They stand ready to pounce should the geopolitical gamesmanship ebb.
- Networking had a big rebound, driven by their next-gen InfiniBand switch (800G) and ongoing strength in Spectrum-X and NVLink.
- They gave a strong Q3 guide even without H20 included.
- Mgmt is as optimistic as ever, despite the ongoing back-and-forth with China.
A second post [paid] will cover recent June-August announcements, including their EU sovereign AI buildouts and news out of SIGGRAPH and Hot Chips conferences.
China, China, China
Before we dive in, remember how last quarter [paid] they sold $4.6B into China, before the ban in April [paid] prevented them from shipping an additional $2.5B over the final weeks. So they were selling H20s to China at a $7.1B quarterly rate in Q1, and estimated they lost another $8B in sales in Q2 – a ~$32B annualized run rate gone. My hope then was that they could turn much of that supply chain back into Hopper supply (check!), plus find other buyers for excess H20s (check!), while awaiting an eventual resumption of sales into China (it's so close!).
Prior Q1 revenue growth was +69.2% or +12.0% seq. Including that lost $2.5B in sales, it would have grown +78.8% YoY or +18.4% seq – both showing acceleration. NVIDIA also would have had the strongest seq guide (+13.2%) in nearly 2 years if not for the H20 ban.
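For those who want to check the math, here's a quick back-of-envelope in Python using the reported revenue figures (in $B; figures are approximate, and the variable names are my own):

```python
# Reported revenue figures, in $B (approximate).
q1_fy26 = 44.06   # Q1 FY26 (April quarter)
q1_fy25 = 26.04   # year-ago quarter
q4_fy25 = 39.33   # prior quarter
lost_h20 = 2.5    # H20 shipments blocked in the final weeks of Q1

yoy = (q1_fy26 / q1_fy25 - 1) * 100                    # reported: ~+69.2%
seq = (q1_fy26 / q4_fy25 - 1) * 100                    # reported: ~+12.0%
adj_yoy = ((q1_fy26 + lost_h20) / q1_fy25 - 1) * 100   # adjusted: ~+78.8%
adj_seq = ((q1_fy26 + lost_h20) / q4_fy25 - 1) * 100   # adjusted: ~+18.4%
print(f"YoY {yoy:.1f}% -> {adj_yoy:.1f}%, seq {seq:.1f}% -> {adj_seq:.1f}%")
```

Both adjusted rates come out above the reported ones, which is the acceleration argument in a nutshell.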
As of Q2, they finally obtained licenses from the US Government to sell to certain Chinese customers, but it seems those customers have not yet placed orders given the subsequent posturing between the US and China in the ongoing trade negotiations. (This is all so much larger than AI – just this week, Trump is threatening to hold back airplane parts if China doesn't provide rare-earth magnets.)
Per the 10-Q: "In August 2025, the USG granted licenses that would allow us to ship certain H20 products to certain China-based customers, but to date, we have not generated any revenue or shipped any H20 products under those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such requirement.
Even if not enacted into binding legislation, draft bills have impacted and may in the future negatively impact our business. For example, following U.S. legislative proposals calling for mandatory features in our chips, China’s government publicly questioned whether our H20 products have built-in vulnerabilities, discouraging customers from purchasing our products. We provided a public response explaining that our GPUs, including H20, do not include such built-in vulnerabilities, and will respond to any follow-up questions we receive."
China has now fallen from 20% to 12% to 6% of the mix over the past 2 years as the result of all the chip regulation posturing from Biden and Trump admins. This quarter, they had no sales of H20s into China, yet given China was still ~6% of the mix, they are still selling some unrestricted items (not Hopper or Blackwell GPUs).
NVIDIA was thankfully able to find one or more customers outside of China that are interested in H20s in the meantime (including at least one big $650M sale), which allowed them to repurpose $180M of H20 inventory. This padded margins a bit.
CFO in letter: "In the second quarter of fiscal 2026, we benefited from a $180 million release of previously reserved H20 inventory related to the sale of approximately $650 million of H20 to an unrestricted customer outside of China. There were no H20 sales to China-based customers in the second quarter."
Sadly, there were no new H20 sales into China. Even after obtaining the needed licenses, Chinese customers continue to balk. It seems that while there is no official ban of the H20, the Chinese government is telling top clouds not to proceed with purchases. The CFO noted that they could potentially sell $2-5B of H20s to China in Q3 if things resolve quickly, but it is not factored into Q3 guidance in any way.
CFO in remarks: "In late July, the U.S. government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such requirement. We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues [recede], we should ship $2-5B in H20 revenue in Q3."
... Later in Q&A: "There is interest in our H20s. There is the initial set of license that we received. And then additionally, we do have supply that we are ready, and that's why we communicated that somewhere in the range of about $2-5B this quarter we could potentially ship. We're still waiting on several of the geopolitical issues going back and forth between the governments and the companies trying to determine their purchases and what they want to do. So it's still open at this time, and we're not exactly sure what that full amount will be this quarter. However, if more interest arrives, more licenses arrives, again, we can also still build additional H20 and ship more as well."
Q226 Financials
- Revenue was $46.7B, growing +55.6% YoY or +6.1% sequentially.
- They guided Q3 to $54.0B, for +53.9% YoY or +15.5% seq growth, the strongest in 2 years.
- The US grew +80.2%, and is 50% of the mix. Singapore is now 22% of the mix, but 99% of it is being reported as US-based customers consolidating deliveries through the country (likely for SE Asian data centers).
- China sales fell -24.5% to $2.8B, to now 5.9% of the mix – a far cry from the 20%+ seen 2 years ago.
CFO in remarks: "We delivered another record quarter while navigating what continues to be a dynamic external environment. Total revenue was $46.7 billion, exceeded our outlook as we grew sequentially across all market platforms. Data center revenue grew 56% year-over-year. Data center revenue also grew sequentially despite the $4 billion decline in H20 revenue."
Adjusting for the lost H20 sales last Q and this Q, NVIDIA would have grown +82.2% YoY or +17.6% seq had the ban not taken place – an acceleration in YoY, and a similar seq growth rate to last Q's adjusted +18.4% calc'd above.
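The same sanity check for Q2, adding back the estimated $8B of lost H20 sales this quarter and the $2.5B blocked last quarter (a minimal sketch; revenue figures in $B, variable names mine):

```python
q2_fy26 = 46.74   # Q2 FY26 reported revenue, $B
q2_fy25 = 30.04   # year-ago quarter
q1_fy26 = 44.06   # prior quarter
lost_q2 = 8.0     # estimated H20 sales lost to the ban this quarter
lost_q1 = 2.5     # H20 shipments blocked last quarter

adj_yoy = ((q2_fy26 + lost_q2) / q2_fy25 - 1) * 100               # ~+82.2%
adj_seq = ((q2_fy26 + lost_q2) / (q1_fy26 + lost_q1) - 1) * 100   # ~+17.6%
print(f"adjusted: +{adj_yoy:.1f}% YoY, +{adj_seq:.1f}% seq")
```

Note the sequential comparison also adds the $2.5B back to the Q1 base, so both quarters are on an ex-ban footing.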
Data Center:
- Data Center revenue was $41.1B, growing +56% YoY but a weak +5.1% seq. [Again, factoring back the lost H20 sales last Q and this, the DC segment would have shown reaccelerating seq growth of +17% and +18% over the last two quarters.]
- DC Compute was $33.8B of that, growing +49.7% YoY or a -0.9% seq drop. The loss of H20 hurt here.
- DC Networking was the remaining $7.3B, accelerating to +97.7% YoY growth. It again saw huge seq growth of +46%, after the +64% seq seen last Q.
- CSPs were ~50% of sales, a slight increase that shows the ongoing adoption of Grace Blackwell NVL72 racks by the top hyperscaler clouds.
As expected, that write-down last Q for H20s was one-time only.
- Gross Margin had a nearly +12pp jump back to the low 70%s (72.7%) after the big dip last quarter from the H20 writedown. Mgmt expects Q3 to hit 73.5%, and FY26 to be mid-70%s.
- Op margin also jumped nearly +12pp back to 64.5%.
- After the strong FCF last Q, FCF margin dropped -30pp seq to 28.8%. This caused the TTM margin to slip to 46.6%, after hitting a high of 52.0% earlier this year. Something to watch, but I believe this is Ultra-ramp related, plus they mentioned a tax payment.
- Inventory jumped from $11B last Q to $15B this Q due to ramping up Ultra.
- They returned $10B to shareholders in Q2, and $24.3B over the 1H. The BoD approved an additional $60B in share repurchase authorization, on top of the $14.7B remaining.
According to the 10-Q, their top two direct customers (their OEM/ODM manufacturing partners) accounted for 23% and 16% of revenue. Last year, they noted the top 4 as 14%, 11%, 11%, and 10% of revenue. It seems they (or their end cloud customers) are now more heavily favoring certain partners.
We didn't hear much about onshoring efforts, as H20 questions dominated the call.
Blackwell
Blackwell was noted as "nearly 70%" of the DC Compute revenue last quarter (so ~$23.9B), and this quarter, mgmt noted that Blackwell DC revenue rose +17% seq (so ~$28.0B). [Perhaps an inexact comparison, as that growth rate spanned DC Networking as well, which is growing faster.]
CFO in remarks: "NVIDIA's Blackwell platform reached record levels, growing sequentially by 17%. We began production shipments of GB300 in Q2. Our full stack AI solutions for cloud service providers, neoclouds, enterprises and sovereigns are all contributing to our growth. ... We are on track to achieve over $20B in Sovereign AI revenue this year, more than double than that last year."
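Walking through that estimate (a rough sketch; the "nearly 70%" share and +17% seq figures are from mgmt, the rest is my arithmetic):

```python
dc_compute_q2 = 33.8                         # this quarter's DC Compute, $B
dc_compute_q1 = dc_compute_q2 / (1 - 0.009)  # back out the -0.9% seq drop -> ~$34.1B
blackwell_q1 = 0.70 * dc_compute_q1          # "nearly 70%" of DC Compute -> ~$23.9B
blackwell_q2 = blackwell_q1 * 1.17           # +17% seq per mgmt -> ~$28B
print(f"Blackwell: ~${blackwell_q1:.1f}B -> ~${blackwell_q2:.1f}B")
```

Again, an inexact comparison given the +17% spanned networking too, but it puts the rough scale of Blackwell in view.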
Grace Blackwell Ultra (GB300) racks are now shipping, and were noted as being in the "tens of billions" by the CFO on the call, and, later in an interview, at $10B by the CEO. Let's assume the CFO meant "just over ten" instead of "tens", and assume they mean about ~$10-12B this Q. Either way, Ultra was >35% of Blackwell sales out of the gate, showing top customers want the latest upon arrival. Mgmt last told us this wasn't showing up until Q3, but Ultra contributed heavily a quarter early. Like with Hopper, this was a seamless transition as only the HBM memory changed between versions.
CFO in remarks: "The new Blackwell Ultra platform has also had a strong quarter, generating tens of billions in revenue. ... Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today, full production is underway. The current run rate is back at full speed, producing approximately 1000 racks per week. This output is expected to accelerate even further throughout the third quarter as additional capacity comes online. We expect widespread market availability in the second half of the year.
... NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell's performance by more than 2x since its launch."
CEO later in interview: "We ramped $10B worth of GB300 in the first quarter [of shipping], and next quarter we're going to ramp really really hard."
Like with Hopper (which saw ~5x improvement over its first year), we are seeing continual improvements to the Blackwell hardware through software updates. Mgmt noted seeing a 2x inference performance gain since its debut through improvements to Dynamo (inference mgmt) and TensorRT-LLM (inference optimization engine).
CEO in Q&A: "But nonetheless, for the next year, we're ramping really hard into now Grace Blackwell, GB200, and then now Blackwell Ultra, GB300, we're ramping really hard into data centers. This year is obviously a record-breaking year. I expect next year to be a record-breaking year. And while we continue to increase the performance of AI capabilities as we race towards artificial superintelligence on the one hand and continue to increase the revenue generation capabilities of our hyperscalers on the other hand."
Hopper did well this quarter as well, which I think was mostly due to the H20 supply chain being reworked into Hopper supply to fill ongoing demand.
CFO in remarks: "Notably in the quarter was an increase in Hopper 100 and H200 shipments. We also sold approximately $650M of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full stack optimizations, which continuously enhance the performance and economic value of our platform."
Networking
Networking has risen from 14.0% to 17.6% of the DC mix. The huge success in networking over the past 2 quarters was due to their next-gen InfiniBand switches hitting the market, as well as the continued success of Spectrum-X and heavy use of NVLink in NVL72 racks.
- Spectrum-X was noted as having double-digit seq growth, and is now at a $10B run rate (from $8B last Q), hitting ~35% of the mix. This implies it is growing ~25% sequentially.
- InfiniBand rose a huge +100% sequentially due to their "XDR" models hitting the market, which refers to their next-gen Quantum-X800 models.
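The implied Spectrum-X growth above falls out of the run-rate figures (a quick sketch; quarterly figures are simply the annualized run rates divided by four):

```python
spectrum_x_now = 10 / 4    # $10B annualized run rate -> $2.5B/quarter
spectrum_x_prior = 8 / 4   # $8B run rate last quarter -> $2.0B/quarter
dc_networking = 7.3        # total DC Networking this quarter, $B

seq_growth = (spectrum_x_now / spectrum_x_prior - 1) * 100  # 25% seq
share = spectrum_x_now / dc_networking * 100                # ~34% of networking mix
print(f"~{seq_growth:.0f}% seq growth, ~{share:.0f}% of DC Networking")
```

That ~34% share is roughly the ~35% of the networking mix noted above.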
First announced at GTC'24, their new Quantum-X (InfiniBand) and Spectrum-X (Ethernet) line doubles scale-out networking speeds to 800G. When introduced at GTC'25 this year, they announced co-packaged optics (CPO) coming to both lines. Per this recent blog post, the first of those CPO-based models has arrived in the Quantum-X line (model Q3450). This will spread into their Spectrum-X line from here. [More on networking in the next post, including their new Spectrum-XGS announcement and Broadcom's recent moves here.]
AI DC buildouts
The CEO walked through the rough math on AI DC buildouts. A gigawatt AI factory will likely cost ~$50B, and he believes NVIDIA would capture ~$35B of that (70%). Mgmt sees $600B in AI DC capex this year (from just the top 4 clouds), expanding to $3-4T over the rest of the decade.
CFO in remarks: "We are at the beginning of an industrial revolution that will transform every industry. We see $3-4T in AI infrastructure spend by the end of the decade. The scale and scope of these build-outs present significant long-term growth opportunities for NVIDIA. ... This growth is fueled by capital expenditures from the cloud to enterprises, which are on track to invest $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in 2 years. We expect annual AI infrastructure investments to continue growing, driven by the several factors: reasoning agentic AI requiring orders of magnitude more training and inference compute, global build-outs for sovereign AI, enterprise AI adoption, and the arrival of physical AI and robotics."
CEO in Q&A: "Over the next 5 years, we're going to scale into with Blackwell, with Rubin and follow-ons to scale into effectively a $3-$4T AI infrastructure opportunity. The last couple of years, you have seen that CapEx has grown in just the top 4 CSPs, has doubled and grown to about $600B. So we're in the beginning of this build-out, and the AI technology advances has really enabled AI to be able to adopt and solve problems to many different industries."
Then add in the $1T enterprise IT infrastructure replacement cycle they've talked about in the past, which they are now entering with their RTX PRO enterprise servers (Data Center segment) and workstations (Pro Viz segment). These also help push them deeper into enterprise AI, as well as industrial digitization and robotics (Omniverse and Cosmos).
Contrary to recent rumors, Rubin (and its many next-gen chips) is on schedule.
CFO: "The chips of the Rubin platform are in fab: the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale-up switch, Spectrum-X scale-out and scale-across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rackscale AI supercomputer with a mature and full-scale supply chain. This keeps us on track with our pace of an annual product cadence and continuous innovation across compute, networking, systems and software."
Rack adoption
SemiAnalysis had an interesting piece last week [paid] that compared the real-world TCO between H100 clusters and a GB200 NVL72 rack. They don't see major AI providers using NVL72 racks yet for major training, as the kinks get worked out of this new system architecture. Newly installed racks are being heavily used for inference for now, and especially shine with MoE model architectures.
SemiAnalysis: "Currently there are no large-scale training runs done yet on GB200 NVL72 as software continues to mature and reliability challenges are worked through. This means that Nvidia’s H100 and H200 as well as Google TPUs remain the only GPUs that are today being successfully used to complete frontier-scale training. As it stands today, even the most advanced operators at frontier labs and CSPs are not yet able to carry out mega training runs on the GB200 NVL72.
With that said, every new architecture naturally requires time for the ecosystem to ramp software to effectively utilize the architecture. The GB200 NVL72 ramp is slightly slower than prior generations, but not by much, and we are confident that before the end of the year, GB200 NVL72 software would have improved considerably. Combined with frontier models architecture being codesigned with the larger scale up world size in mind, we expect that there will be significant efficiency gains from using the GB200 NVL72 by the end of the year."
- They estimate that H100 HGX servers have a cost of $190-213K to the major CSPs and neoclouds, and total capex of $251-299K.
- They estimate that GB200 NVL72 racks have a cost of $3.2-3.5M to the major CSPs and neoclouds, and a total capex of $3.9-4.5M.
- An interesting tidbit from the article is how the NVL72 rack is designed to have 64 active GPUs and 8 hot spares, so customers aren't actively using all 72 GPUs.
Including the power increase (700W to 1200W per chip), they project a 1.6x higher cost (capex+opex) for the NVL72 racks at the same compute levels – which on the flipside, shows the performance gains needed from Blackwell for these racks to be cost-effective. In July, SemiAnalysis saw perf per TCO reach 1.5x, but expects it to hit 2.7x over the next 3-6mo.
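One way to read those SemiAnalysis numbers, assuming perf-per-TCO is simply raw performance divided by relative cost (a hypothetical sketch of my own, not SemiAnalysis's methodology):

```python
cost_ratio = 1.6            # NVL72 capex+opex vs H100 at equal compute footprint
perf_per_tco_now = 1.5      # July figure per SemiAnalysis
perf_per_tco_future = 2.7   # their 3-6 month expectation

# Implied raw performance advantage needed to produce those ratios:
implied_perf_now = perf_per_tco_now * cost_ratio        # ~2.4x vs H100
implied_perf_future = perf_per_tco_future * cost_ratio  # ~4.3x vs H100
print(f"implied perf: ~{implied_perf_now:.1f}x -> ~{implied_perf_future:.1f}x")
```

In other words, at 1.6x the cost, anything above a 1.6x performance gain is break-even; a 1.5x perf-per-TCO figure implies the racks already clear that bar comfortably.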
"Currently, GB200 NVL72 racks are only being used for inference, small experiments and dev jobs while ML system engineers and infrastructure engineers figure out how to reliably carry out mega scale training on GB200 NVL72. Some frontier labs haven’t even installed their scale out network for their GB200 NVL72s as they are currently only running production inference. Indeed, this is where the system shines as MoE inferencing is where the GB200 delivers its strongest performance gains over Hopper."
It seems Blackwell isn't there yet for training. Some customers are using the rack as a standalone cluster (no scale-out network) for inference, especially with MoE-based models. The performance improvements are ongoing; as mentioned above, NVIDIA is already touting a 2x gain in Blackwell from software updates alone.
But I think it will be telling when we hear about a new frontier model being trained on a GB200 NVL72 rack. Until then, they are likely to be entirely utilized for inference.
Despite these issues (typical of a new infrastructure architecture, as noted by SemiAnalysis), NVIDIA can't keep up with demand. Blackwell racks will ultimately become the training platform of choice once the ROI balances out.
CEO in Q&A: "We have reasonable forecasts from our large customers for next year, a very significant forecast. And we still have a lot of businesses that we're still winning and a lot of start-ups that are still being created. ... If you look at the top AI-native start-ups that are generating revenues, last year was $2 billion. This year, it's $20 billion. Next year being 10x higher than this year is not inconceivable.
And the open source models is now opening up large enterprises, SaaS companies, industrial companies, robotics companies to now join the AI revolution, another source of growth. And whether it's AI natives or enterprise SaaS or industrial AI or startups, we're just seeing just enormous amount of interest in AI and demand for AI.
Right now, I'm sure all of you know about the buzz out there. The buzz is everything sold out. H100 sold out. H200s are sold out. Large CSPs are coming out renting capacity from other CSPs. And so the AI-native start-ups are really scrambling to get capacity so that they could train their reasoning models. And so the demand is really, really high.
But the long-term outlook between where we are today, CapEx has doubled in 2 years. It is now running about $600 billion a year just in the large hyperscalers. For us to grow into that $600 billion a year, representing a significant part of that CapEx isn't unreasonable. And so I think the next several years, surely through the decade, we see just a really fast growing, really significant growth opportunities ahead."
And from here? New networking technologies are emerging to help interconnect multiple AI datacenters into millions of GPUs. [More on this next post.]
CEO in closing: "Blackwell and Rubin AI factory platforms will be scaling into the $3-4T global AI factory build out through the end of the decade. Customers are building ever greater scale AI factories, from thousands of Hopper GPUs in tens of megawatt data centers to now hundreds of thousands of Blackwells in 100-megawatt facilities. And soon, we'll be building millions of Rubin GPU platforms, powering multi-gigawatt multisite AI super factories.
With each generation, demand only grows. One shot chatbots have evolved into reasoning agentic AI that research, plan and use tools, driving orders of magnitude jump in compute for both training and inference. Agentic AI is reaching maturity and has opened the enterprise market to build domain and company-specific AI agents for enterprise workflows, products and services."
Add'l Reading
Past NVIDIA coverage in the Premium service:
- See past NVIDIA earnings coverage over Q225, Q325, Q425, and Q126 earnings.
- This year, I covered the DeepSeek panic and ramifications, GTC in March, potential from here (inference, agentic AI, install base, physical AI, robotics), and recapped AI buildouts and announcements. I also recently discussed the tariff implications for NVIDIA, and then the AI Diffusion rules being tossed.
- The prior post covered the latest capex trends from the top hyperscaler clouds.
- A followup second post went through all the June-August product announcements and looked at competing ASIC and networking efforts.
- I have since covered the great AI buildout (GPU demand) by OpenAI, xAI, and the neoclouds over the next few years, and dived much deeper into the neocloud thesis.
Now to see what Q3 brings us on November 19.
-muji