It seems like Google aims to grab a bit of market share from NVIDIA and AMD by offering startups large compute deals to train their massive AI models on the Google Cloud Platform (GCP). One such customer is Safe Superintelligence Inc. (SSI), the startup of OpenAI co-founder Ilya Sutskever. According to a GCP post, SSI is "partnering with Google Cloud to use TPUs to accelerate its research and development efforts toward building a safe, superintelligent AI." Google's latest TPU v7p, codenamed
Ironwood, was released yesterday. Delivering 4,614 TeraFLOPS of FP8 compute and carrying 192 GB of HBM per chip, these TPUs are linked by Google's custom Inter-Chip Interconnect (ICI) and scale to pods of 9,216 chips, at which point Ironwood delivers 42.5 ExaFLOPS of total computing power.
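A quick back-of-the-envelope sketch shows how the per-chip and pod-level figures quoted above line up (the numbers come from the article; the multiplication itself is just arithmetic, not an official Google formula):

```python
# Sanity-check the quoted Ironwood pod figures:
# 9,216 chips per pod, each delivering 4,614 TeraFLOPS of FP8 compute.
chips_per_pod = 9_216
tflops_per_chip_fp8 = 4_614

pod_tflops = chips_per_pod * tflops_per_chip_fp8
pod_exaflops = pod_tflops / 1e6  # 1 ExaFLOP = 1,000,000 TeraFLOPS

print(f"{pod_exaflops:.1f} ExaFLOPS")  # ≈ 42.5 ExaFLOPS, matching the quoted total
```

The product works out to roughly 42.5 ExaFLOPS, so the pod-level figure is simply the per-chip FP8 throughput scaled linearly across the full pod.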
For AI training, this massive compute will let models move through training runs faster, accelerating research iterations and ultimately model development. For SSI, the end goal is a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
If the deal succeeds, more startups and AI labs could tap Google's TPUs, rather than Google reserving them for its own internal needs.
Google's Gemini models already run on TPUs, and many more services powering Google's products rely on this infrastructure. This is also a sign that the GPU monopoly might be starting to crack, and that AI labs are increasingly turning to cloud service providers for their computing needs instead of acquiring on-premises infrastructure.
7 Comments on Safe Superintelligence Inc. Uses Google TPUs Instead of Regular GPUs for Next-Generation Models
What a joke. Imagine making a company built on fear and the unknown, and just that. A company built on emotion. Looks remarkably like every other ICO we saw in crypto.
The difference is that you can choose either an Nvidia-based instance or a cheaper TPU one, so there's competition in this regard.
Internally, Google also makes heavy use of its own TPUs instead of Nvidia GPUs.
In contrast, AWS also has their own accelerators (Inferentia and Trainium), but they're not as easy to use as TPUs, and AWS themselves still use Nvidia GPUs to train most of their models.
Azure, on the other hand, only has Nvidia GPUs available for such use cases, without any other proper option.
(minor aside - Azure has their own custom stuff and AMD accelerators, but those are not publicly available for their cloud users)