Key Highlights
- Amazon Web Services commits to purchasing 1 million GPUs from Nvidia, with deliveries running from 2025 through the end of 2027.
- Agreement encompasses networking infrastructure, Groq inference processors, and upcoming Blackwell plus Rubin architectures.
- AWS plans to deploy seven distinct Nvidia chip variants for AI inference operations.
- Shares of both NVDA and AMZN climbed modestly in extended trading after the disclosure.
This Amazon Web Services agreement represents one of Nvidia’s most substantial single-client semiconductor commitments to date, and the arrangement’s specifics show just how broad it is.
According to Nvidia Vice President Ian Buck’s statement to Reuters, the million-unit GPU shipment schedule begins in 2025 and continues through 2027. This timeframe aligns with CEO Jensen Huang’s forecast of a $1 trillion addressable market for Nvidia’s Blackwell and Rubin processor lines over the same period.
The partnership extends far beyond simple GPU volume. AWS is committing to Nvidia’s broader technology ecosystem, incorporating Spectrum-X alongside ConnectX networking solutions. This development carries particular significance since AWS has traditionally relied on proprietary networking infrastructure. Integrating Nvidia’s networking portfolio into its facilities represents a strategic departure from past practices.
Amazon’s Comprehensive Nvidia Inference Strategy
AI inference — the computational phase where artificial intelligence models produce outputs and execute tasks — forms the foundation of this deal’s technical blueprint. AWS intends to leverage seven different Nvidia processor types for managing inference operations.
Buck stated directly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
The Groq processors, which Nvidia unveiled earlier this week following its $17 billion licensing arrangement with the AI semiconductor startup, form part of this inference framework. They operate alongside six additional Nvidia chip designs to deliver what the manufacturer characterizes as industry-leading inference capabilities.
AWS additionally plans to implement Nvidia’s Blackwell processors and anticipates incorporating the forthcoming Rubin platform upon its market availability. Neither Nvidia nor Amazon revealed the monetary terms of this partnership.
Both companies’ shares rose modestly in Thursday’s after-hours session following the announcement. NVDA had declined approximately 1% during regular trading, while AMZN dropped around 0.5%.
Amazon Maintains In-House Chip Development
Amazon continues developing proprietary AI semiconductors, including its Trainium2 chip. Nevertheless, the company maintains its reliance on Nvidia for the most computationally intensive applications. These two strategies appear to function in tandem rather than as alternatives.
This agreement underscores ongoing substantial capital allocation toward AI infrastructure among leading cloud service providers. AWS isn’t abandoning its custom chip initiatives — instead, it’s supplementing them with Nvidia hardware for particular high-performance scenarios.
The Nvidia-AWS partnership was first announced earlier this week without detailed scheduling information. Buck’s Thursday remarks to Reuters delivered the most comprehensive timeline yet: deliveries commencing in 2025 and extending through year-end 2027, encompassing a diverse range of Nvidia offerings spanning processing, networking, and inference.