In recent times, the tech landscape has been dramatically reshaped by the rise of generative AI, creating unprecedented demand for GPU chips, a domain where NVIDIA has seen remarkable growth. Simultaneously, AMD is carving out its own niche in this emerging market, propelled by strategies aimed at capitalizing on the flourishing AI sector.
Recent financial reports illuminate NVIDIA's ascent in market value, which ballooned by nearly 1.9 trillion yuan, a gain of more than 11%. Despite the meteoric rise in valuation, there are underlying concerns among industry experts. Earnings reports from Google and Microsoft hinted at AI performance that fell short of market expectations, raising questions about future capital investment in the sector.
Moreover, AMD appears to be rapidly closing the gap, having generated over $1 billion in revenue from their AI chips in the second quarter alone.
One clear takeaway from their financial disclosures is the powerhouse performance from each company’s data center segment.
AMD's latest fiscal report displayed better-than-expected results. Prior to this, NVIDIA had also published its quarterly report, drawing significant attention to its data center revenue.
AMD's second-quarter financials exceeded expectations
On July 31, AMD announced its second-quarter results, posting total revenues of $5.835 billion, a 9% increase year-over-year and a 7% rise from the previous quarter.
The net profit soared to $265 million, reflecting a staggering growth of 881% compared to the same period last year and a 115% increase from the first quarter.
Diving into specifics, AMD's data center division set a new record with revenues of $2.8 billion, up 115% year-over-year and 21% sequentially. Meanwhile, the client computing division brought in $1.5 billion, a 49% improvement year-over-year, while the gaming and embedded divisions reported declines, indicating areas for further development.
Despite challenges in the gaming segment, AMD Chair and CEO Lisa Su emphasized the rapid acceleration of the AI business, citing robust demand for the Instinct, EPYC, and Ryzen processors. She noted that as generative AI continues to advance, the need for substantial computing capability across markets keeps growing, creating significant growth opportunities for AMD, which offers AI solutions across its operational domains.
Looking ahead to the third quarter, AMD forecasts revenues between $6.4 billion and $7 billion, with a midpoint of $6.7 billion.
This reflects expected year-over-year growth of approximately 16% and a sequential rise of around 15%, closely aligned with analyst expectations of $6.62 billion. Furthermore, AMD anticipates data center GPU revenues of $4.5 billion in 2024.
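The guidance figures above can be sanity-checked with quick arithmetic, using the reported Q2 revenue of $5.835 billion as the baseline for the sequential comparison:

```python
# Sanity-check AMD's Q3 guidance arithmetic from the reported figures.
q2_revenue = 5.835                      # Q2 2024 revenue, in $B (reported)
guidance_low, guidance_high = 6.4, 7.0  # Q3 guidance range, in $B

midpoint = (guidance_low + guidance_high) / 2
sequential_growth = (midpoint / q2_revenue - 1) * 100

print(f"midpoint: ${midpoint:.1f}B")                   # $6.7B, matching the stated midpoint
print(f"sequential growth: {sequential_growth:.0f}%")  # ~15%, matching the article
```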
NVIDIA, on the other hand, has posted quarterly revenue growth exceeding 200% year-over-year for three consecutive quarters.
Its previously released financials showed that revenues for the first quarter of fiscal year 2025, ended April 28, reached $26.044 billion, up 18% quarter-over-quarter and an astonishing 262% year-over-year.
Net income was a staggering $14.881 billion, a 628% increase over the prior year, alongside a non-GAAP gross margin of 78.9%. This triple-digit growth underlines a dominant strategy that has seen NVIDIA thrive in a highly competitive marketplace.
NVIDIA's data center division achieved another record-breaking quarter, with revenues of $22.6 billion, up 427% year-over-year and 34% from the previous quarter, driven primarily by shipments of its Hopper-series GPUs.
Within that figure, the data center compute sub-segment alone brought in $19.4 billion, up 29% quarter-over-quarter and an explosive 478% year-over-year, reflecting strong demand for NVIDIA's Hopper GPU computing platform, used extensively in training large language models and other AI applications.
Looking forward, NVIDIA anticipates second-quarter revenues of around $28 billion, slightly above market expectations of $26.8 billion, with non-GAAP gross margin expected to decline to 75.5%. For the full fiscal year, NVIDIA expects operating expenses to increase by roughly 40% to 45%.
AMD, meanwhile, is firmly positioned to close the gap in this competitive landscape.
While NVIDIA currently holds a commanding market share, AMD is making steady strides to expand its presence in the AI domain.
Recently, both companies shared their expected product release cadences: NVIDIA's highly anticipated Blackwell AI chip is expected to launch before the end of 2024, while AMD's MI325X is set for the fourth quarter of 2024.
Industry analysts believe AMD's MI300 line is becoming increasingly competitive thanks to software optimization and improved high-bandwidth memory (HBM3e) configurations. In contrast to NVIDIA's more closed technical strategy, AMD's open approach is expected to yield substantial returns.
AMD disclosed in June that several major companies, including Microsoft, Meta, Dell, HPE, and Lenovo, have already adopted the MI300 chips.
Their next-generation MI325X will debut in the fourth quarter of this year, followed by updates with MI350X set to launch in 2025 and MI400 expected in 2026.
On the memory front, AMD's MI325X will move from 192GB of HBM3 to 288GB of HBM3e, a change that could determine AMD's competitive position against NVIDIA's current Hopper GPUs and forthcoming Blackwell GPUs. Observers note that architectural changes in the MI325X may be limited, suggesting production increases may be easier to achieve than with the MI300.
Additionally, AMD has committed to launching new AI processors and chips annually, actively pursuing an advantage over NVIDIA, which has shifted its product cadence from a release every two years to an annual one.
NVIDIA’s forthcoming Blackwell platform is scheduled for launch in 2025, poised to replace the existing Hopper platform and is projected to become its primary offering in high-end GPUs, commanding an impressive 83% of their high-end product line.
NVIDIA CEO Jensen Huang has expressed confidence in the company's readiness for the next wave of growth, with Blackwell chips expected to begin shipping in the second quarter and contribute substantial revenue over the rest of the year.
Stepping back to the broader market, this competition is clearly not just a story of incremental advancement: it is shaped by projected surges in demand for AI-driven applications and by the critical role of efficient computing architectures.
According to TrendForce, NVIDIA leads in market share for AI servers equipped with AI chips, nearing 90%, whereas AMD accounts for roughly 8%. However, if we consider all AI chips utilized in AI servers, including GPUs, ASICs, and FPGAs, NVIDIA’s market share drops to around 64% this year.
Looking to 2025, the demand for advanced AI servers continues to be robust, with NVIDIA’s new-generation Blackwell (GB200, B100/B200, etc.) anticipated to supplant the Hopper platform and subsequently boost demand for CoWoS and HBM technologies.
Specifically, NVIDIA's B100 chip is expected to double in size compared to the H100, implying greater consumption of CoWoS packaging capacity.
By 2025, TSMC, the major supplier, anticipates total CoWoS production capacity could reach 550k-600k units, reflecting an approximate growth rate of 80%.
Examining HBM consumption, the mainstream H100 carries 80GB of HBM3, while by 2025 flagship chips such as NVIDIA's Blackwell Ultra and AMD's MI350 are expected to carry up to 288GB of HBM3e, a more than threefold increase in per-chip capacity. Driven primarily by pervasive demand in the AI server market, overall HBM supply is expected to double by 2025.
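The memory jump cited above is straightforward to quantify, taking the H100's 80GB of HBM3 as the baseline:

```python
# Compare per-chip HBM capacity: H100 (80GB HBM3) vs. projected
# 2025 flagships (288GB HBM3e, per the TrendForce figures cited above).
h100_hbm_gb = 80
next_gen_hbm_gb = 288

ratio = next_gen_hbm_gb / h100_hbm_gb
print(f"per-chip HBM capacity increase: {ratio:.1f}x")  # 3.6x, i.e. more than threefold
```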
Moreover, surging demand for GPU chips will sharpen the focus on advanced thermal management solutions for AI servers, particularly as NVIDIA plans to launch its next-generation Blackwell platform ahead of schedule.