It seems like Google aims to grab a bit of the market share from NVIDIA and AMD by offering startups large compute deals and letting them train their massive AI models on the Google Cloud Platform (GCP). One such case is Safe Superintelligence Inc. (SSI), the startup of OpenAI co-founder Ilya Sutskever. According to a GCP post, SSI is "partnering with Google Cloud to use TPUs to accelerate its research and development efforts toward building a safe, superintelligent AI." Google's latest TPU v7p, codenamed Ironwood, was released yesterday. Each chip delivers 4,614 TeraFLOPS of FP8 compute and carries 192 GB of HBM; the chips are interconnected over Google's custom ICI fabric and scale into pods of 9,216 chips, where Ironwood delivers 42.5 ExaFLOPS of total computing power.
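For scale, the pod-level figure lines up with the per-chip numbers. A quick back-of-envelope check, using only the figures quoted above (not an official Google breakdown):

```python
# Back-of-envelope check of Ironwood pod-level figures,
# using only the per-chip numbers quoted in the article.

PER_CHIP_TFLOPS_FP8 = 4_614   # TeraFLOPS of FP8 per Ironwood chip
PER_CHIP_HBM_GB = 192         # HBM capacity per chip
CHIPS_PER_POD = 9_216         # chips in a full pod

pod_tflops = PER_CHIP_TFLOPS_FP8 * CHIPS_PER_POD
pod_exaflops = pod_tflops / 1_000_000        # 1 ExaFLOPS = 1,000,000 TeraFLOPS
pod_hbm_tb = PER_CHIP_HBM_GB * CHIPS_PER_POD / 1_000  # decimal TB

print(f"Pod compute: {pod_exaflops:.1f} ExaFLOPS FP8")  # ~42.5 ExaFLOPS
print(f"Pod HBM:     {pod_hbm_tb:,.0f} TB")             # ~1,769 TB
```

The 9,216-chip pod multiplies out to roughly 42.5 ExaFLOPS of FP8, matching the headline number.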
For AI training, this massive compute budget lets models move through training runs faster, accelerating research iterations and ultimately model development. For SSI, the end goal is a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
If the deal succeeds, more startups and AI labs could tap Google's TPUs, rather than Google reserving them for its own internal needs. Google's Gemini models already run on TPUs, and many of the services behind Google's own products rely on the same infrastructure. It is also a sign that the GPU monopoly may be starting to crack, and that AI labs are increasingly turning to cloud service providers for their compute needs instead of acquiring on-premises infrastructure.
View at TechPowerUp Main Site | Source