News Posts matching #24 billion transistors


Tianshu Zhixin Big Island GPU is a 37 TeraFLOP FP32 Computing Monster

Tianshu Zhixin, a Chinese startup dedicated to designing advanced processors for accelerating various kinds of workloads, has officially entered production with its latest GPGPU design. Called the "Big Island" GPU, it is the company's entry into a GPU market currently dominated by AMD, NVIDIA, and soon Intel. So what makes Tianshu Zhixin's Big Island GPU so special? Firstly, it represents China's attempt at independence from outside processor suppliers, keeping the supply chain under domestic control. Secondly, it is no small feat to enter a market controlled by such big players and attempt to grab a piece of that cake. To succeed, the GPU needs to be a great design.

And great it is, at least on paper. The specifications list that Big Island is currently being manufactured on TSMC's 7 nm node using CoWoS packaging technology, enabling the die to feature over 24 billion transistors. When it comes to performance, the company claims that the GPU is capable of crunching 37 TeraFLOPs of single-precision FP32 data. At FP16/BF16 half precision, the chip is capable of outputting 147 TeraFLOPs. When it comes to integer performance, it can achieve 317, 147, and 295 TOPS in INT32, INT16, and INT8 respectively. There is no data on double-precision floating-point performance, suggesting the chip is optimized for single-precision workloads. There is also 32 GB of HBM2 memory on board, delivering 1.2 TB/s of bandwidth. If we compare the chip to competing offerings like the NVIDIA A100 or AMD Instinct MI100, the new Big Island GPU outperforms both at the single-precision FP32 compute tasks for which it is designed.
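As a quick back-of-the-envelope illustration (not an official figure), the quoted 37 TFLOPS of FP32 compute and 1.2 TB/s of HBM2 bandwidth imply a roofline balance point of roughly 31 FLOPs per byte, i.e. kernels with lower arithmetic intensity would be memory-bound on this part:

```python
# Roofline balance point from the article's quoted Big Island figures.
# These are the company's claimed numbers, not measured results.
peak_flops = 37e12        # FP32 FLOP/s (37 TFLOPS, claimed)
peak_bandwidth = 1.2e12   # bytes/s (1.2 TB/s of HBM2 bandwidth, claimed)

# Arithmetic intensity (FLOPs per byte) above which a kernel stops
# being limited by memory bandwidth and becomes compute-bound.
balance_point = peak_flops / peak_bandwidth
print(f"compute-bound above ~{balance_point:.1f} FLOPs/byte")  # ~30.8
```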

Chinese Tianshu Zhixin Announces Big Island GPGPU on 7 nm, 24 billion Transistors

Chinese company Shanghai Tianshu Zhixin Semiconductor Co., Ltd., commonly known (at least in Asia) as Tianshu Zhixin, has announced the availability of its special-purpose GPGPU, affectionately referred to as Big Island (BI). The BI chip is the first fully domestically designed solution for the market it caters to, and features close to the latest in semiconductor manufacturing, being built on a 7 nm process with 2.5D CoWoS (chip-on-wafer-on-substrate) packaging. The chip is aimed at AI and HPC applications foremost, with further applications in industries such as education, medicine, and security. The manufacturing and packaging processes seem eerily similar to those available from Taiwan's TSMC.

Tianshu Zhixin started work on the BI chip as early as 2018, and has announced that the chip supports most AI and HPC data processing formats, including FP32, FP16, BF16, INT32, INT16, and INT8 (this list is not exhaustive). The company says the chip offers twice the performance of existing mainstream products on the market, and emphasizes its price/performance ratio. The huge chip (it packs as many as 24 billion transistors) is being teased by the company as offering as much as 147 TFLOPS in FP16 workloads, compared to 77.97 TFLOPS for the NVIDIA A100 (54 billion transistors) and 184.6 TFLOPS for the AMD Instinct MI100 (estimated at 50 billion transistors).
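Taking the article's own FP16 figures at face value, the "twice the performance" claim roughly holds against the A100 but not against the MI100, as a simple ratio shows:

```python
# Ratios of the FP16 throughput figures quoted in the article.
# All numbers are vendor claims or estimates, not benchmarks.
big_island = 147.0   # TFLOPS, FP16 (company claim)
a100 = 77.97         # TFLOPS, NVIDIA A100 (as quoted above)
mi100 = 184.6        # TFLOPS, AMD Instinct MI100 (as quoted above)

print(f"vs A100:  {big_island / a100:.2f}x")   # ~1.89x, in line with the "twice" claim
print(f"vs MI100: {big_island / mi100:.2f}x")  # ~0.80x, i.e. behind the MI100
```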