Quick Overview
- Shares of Marvell climbed as much as 6.3% during premarket hours following news the company is negotiating with Google to create two advanced AI processors.
- The first processor is a memory processing unit intended to complement Google’s tensor processing unit (TPU); the second represents a newly designed TPU optimized for AI inference tasks.
- Google is targeting completion of the memory processor’s design by next year, with plans to proceed to trial manufacturing thereafter.
- This collaboration represents part of Google’s strategic initiative to establish TPUs as a legitimate alternative to Nvidia’s GPU offerings.
- Alphabet’s first quarter 2025 financial results are scheduled for April 29, potentially revealing additional details regarding AI chip strategy.
Marvell Technology (MRVL) experienced a significant premarket rally on Sunday following a report from The Information that Alphabet’s Google has entered discussions with the semiconductor company to jointly develop two cutting-edge AI processors.
Shares surged 6.3% by approximately 4:38 AM ET, delivering an energetic start to the trading week for investors.
According to the report, sources with knowledge of the negotiations said the two companies are collaborating on a memory processing unit (MPU) engineered to work in tandem with Google’s existing tensor processing unit (TPU) infrastructure. The second processor is an entirely new TPU architecture specifically optimized for AI inference applications.
The companies are targeting finalization of the memory processor’s design within the next year, followed by a transition to trial production.
Google’s Silicon Strategy Gains Momentum
This partnership doesn’t exist in isolation. Google has been methodically constructing a comprehensive semiconductor network, collaborating with industry players such as Intel and Broadcom in addition to Marvell.
For the majority of its existence, Google maintained its TPU technology exclusively for internal use. This approach transformed in 2022 when the cloud division assumed responsibility for external chip distribution and began marketing TPUs to third-party clients.
Since that pivot, Google has accelerated both manufacturing capacity and external distribution. In the previous year, the company began selling TPUs directly into customers’ private data centers — extending beyond its cloud platform offerings. This represents a significant evolution in its go-to-market approach.
Earlier this month, Google officially unveiled TorchTPU, an initiative designed to ensure native compatibility between its processors and PyTorch — the prevailing AI development framework. This development reduces migration barriers for developers who have established their operations using PyTorch and are evaluating alternatives to Nvidia’s technology.
TPU revenue has emerged as an increasingly important component of Google Cloud’s financial performance as the organization seeks to demonstrate to shareholders that its artificial intelligence investments are yielding tangible returns.
Nvidia’s Competitive Landscape
Nvidia continues to lead the AI computing sector, but Google’s strategic maneuvers are intensifying competitive dynamics.
The Marvell collaboration strengthens Google’s position in the inference chip arena — a market segment where Nvidia has also been pursuing aggressive expansion. Nvidia is reportedly working on new AI inference processors incorporating technology from Groq.
With Google, Marvell, Intel, and Broadcom all pursuing similar objectives, the inference processor marketplace is rapidly becoming highly competitive.
Google’s quarterly financial disclosure is scheduled for April 29. Market watchers will probably scrutinize details regarding the company’s TPU production scaling plans, cloud segment revenue trajectory, and implications of the Marvell negotiations for its semiconductor development timeline.