
Samsung and Naver Developing an AI Chip Claiming to be 8x More Power Efficient than NVIDIA H100

btarunr

Editor & Senior Moderator
Naver, the firm behind the HyperCLOVA X large language model (LLM), has been working with Samsung Electronics on the development of power-efficient AI accelerators. The collaboration pairs Naver's AI expertise with Samsung's vast silicon-design IP, its ability to build complex SoCs, its semiconductor fabrication capacity, and its broad portfolio of DRAM technologies. The two recently designed a proof of concept for an upcoming AI chip, which they prototyped on an FPGA. Naver claims the chip it is co-developing with Samsung will be 8 times more energy efficient than an NVIDIA H100 AI accelerator, but did not elaborate on its actual throughput. Among other things, its solution leverages energy-efficient LPDDR memory from Samsung. The two companies have been working on the project since December 2022.



View at TechPowerUp Main Site | Source
 
It's kind of weird that they call the H100 an "AI accelerator" when it's a general-purpose compute chip. Tensor cores aren't limited to AI either, since they are matrix-multiplication engines capable of working with data types from the AI-centric INT8/FP8/FP16/BFLOAT16 to the HPC-oriented FP32/FP64.
Are they saying the new chip will be capable of the same level of general-purpose compute as the H100 while being 8x more power efficient? If not, and it's just for AI, then this isn't really revolutionary, since multiple AI-specific chips are already on the market.
Oh, and they still need to face CUDA's domination of the AI/ML software ecosystem.
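The data-type spread mentioned above has a direct effect on memory traffic, which can be illustrated with a quick sketch (using NumPy; BF16 and FP8 are omitted because stock NumPy has no native dtypes for them):

```python
import numpy as np

# Bytes per element for some of the data types tensor cores can consume.
# BF16 and FP8 are skipped: plain NumPy provides no native dtype for them.
dtypes = ["int8", "float16", "float32", "float64"]
for name in dtypes:
    print(f"{name}: {np.dtype(name).itemsize} byte(s)/element")

# Rough memory traffic for a 4096x4096 matmul (two inputs + one output),
# ignoring caching: lower precision moves proportionally fewer bytes.
n = 4096
for name in dtypes:
    traffic = 3 * n * n * np.dtype(name).itemsize
    print(f"{name}: ~{traffic / 2**20:.0f} MiB moved")
```

Same matrix shape, 8x less data moved going from FP64 to INT8, which is a big part of why low-precision AI inference is so much cheaper per operation.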
 
8x more energy-efficient with 0.08x the performance, I'll bet.
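The quip above is really a performance-per-watt calculation. A minimal sketch (all numbers are illustrative, since the article gives no throughput figures for the Samsung/Naver chip; 700 W is the typical H100 SXM board power):

```python
# Performance-per-watt comparison. All numbers are illustrative:
# the article gives no throughput figures for the Samsung/Naver chip.
h100_perf = 1.0      # normalized throughput
h100_power = 700.0   # W, typical H100 SXM board power

# "8x more energy efficient" at a hypothetical 0.08x of the performance:
chip_perf = 0.08 * h100_perf
chip_power = chip_perf / (8 * (h100_perf / h100_power))
print(f"Hypothetical chip power: {chip_power:.1f} W")

# The efficiency ratio still comes out to 8x despite the tiny throughput:
ratio = (chip_perf / chip_power) / (h100_perf / h100_power)
print(f"Efficiency vs H100: {ratio:.1f}x")
```

The point of the joke: an "8x efficiency" claim alone says nothing about absolute performance, since a chip doing 8% of the work at 1% of the power also qualifies.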
 
Competition is growing from all sides, strong and challenging. NVIDIA, which previously swam alone, now faces the biggest companies on the planet seeking independence with their own hardware. Yay.
 
By the time this comes out, NVIDIA will already have released the B100; competitors are always chasing NVIDIA's previous-generation products.
 
By the time this comes out Nvidia will already have released the B100, competitors are always trying to compete with previous gen products from Nvidia.
The B100 won't be 8x as power efficient, guaranteed. Maybe 2x the performance per watt. An ASIC will always be far more efficient than a general-purpose chip at running its intended applications.
 