Key Takeaways
- Nvidia shares advanced 0.3% to $202.74 in premarket trading Tuesday, approaching their October record close of slightly above $207.
- Google plans to introduce next-generation tensor processing units (TPUs) at its Cloud Next event in Las Vegas, created in collaboration with Marvell Technology.
- The upcoming TPU generation targets inference tasks — where AI systems process user requests — not training operations, where Nvidia maintains dominance.
- KeyBanc’s John Vinh reaffirmed his Overweight position on Nvidia with a $275 target, emphasizing CUDA’s role as a formidable competitive advantage.
- Google has landed major TPU deals, including a multibillion-dollar agreement with Meta and an expanded allocation giving Anthropic access to up to 1 million chips, though availability issues persist.
Nvidia continues its impressive trajectory. The semiconductor giant’s shares have surged 15% in the last 30 days and are closing in on their all-time peak. This upward trend persisted Tuesday morning despite Google’s planned advances in artificial intelligence processors.
Trading at $202.74 before the opening bell, Nvidia registered a 0.3% increase. The stock is nearing its historical closing record of slightly above $207, achieved in October 2025.
The positive movement occurred as market participants anticipated quarterly financial results from leading technology firms. Optimism surrounding Nvidia’s operations continues to strengthen.
However, challenges exist. Google is poised to reveal its newest tensor processing unit iteration at the Google Cloud Next gathering in Las Vegas this week.
Google’s Inference Strategy
Reports from Bloomberg indicate Google engineered its recent processors alongside Marvell Technology. These advanced chips emphasize AI inference: the operational phase where trained algorithms deliver responses to user inquiries.
“The competitive landscape is transitioning toward inference,” Gartner’s Chirag Dekate explained to Bloomberg. Google Chief Scientist Jeff Dean reinforced this perspective, noting that specialized chip designs for training versus inference now make strategic sense as artificial intelligence demand accelerates.
Google has pursued this direction for an extended period. Its TPU initiative now includes Meta as a significant client — the social networking company committed to a multibillion-dollar agreement to acquire TPUs through Google Cloud. Anthropic similarly increased its TPU allocation to potentially 1 million processors.
A fundamental advantage exists as well. Among prominent AI developers, Google stands alone in manufacturing proprietary chips at comparable scale, creating tighter integration between model development teams and hardware engineers.
Google has simultaneously expanded TPU accessibility. PyTorch developers now have TPU compatibility, and the company has reportedly tested on-site TPU installations for corporate clients — a departure from its traditional cloud-exclusive approach.
Nvidia’s CUDA Advantage
Financial analysts remain confident. KeyBanc’s John Vinh sustained his Overweight assessment on Nvidia Monday with a $275 valuation, contending that the CUDA software platform establishes substantial obstacles for potential rivals.
“We perceive minimal competitive threats and anticipate Nvidia maintaining its leadership position in one of the most rapidly expanding workloads across cloud and enterprise environments,” Vinh stated.
Nvidia CEO Jensen Huang has previously indicated his processors can execute tasks “you can’t do with TPUs.” Significantly, Google continues deploying Nvidia GPUs in conjunction with its proprietary TPUs for artificial intelligence initiatives.
Nvidia’s forthcoming Vera Rubin platform is anticipated to represent the most sophisticated AI technology available upon launch.
Availability constraints may also impede Google’s objectives. An anonymous startup leader told Bloomberg that TPU scarcity presented genuine complications, with chip access restricted outside of what Google allocated to “the more elite teams.”