In the graphics business, there's no such thing as too much memory bandwidth. Real-time graphics always wants more: more memory, more bandwidth, more processing power. Most graphics cards on the market today use GDDR3 memory, a graphics-card-optimized successor to the DDR2 memory common in PC systems (it is mostly unrelated to the DDR3 used in PC system memory).
A couple of years ago, ATI (not yet purchased by AMD) began promoting and using GDDR4, which lowered voltage requirements and increased bandwidth with a number of signaling tweaks (an 8-bit prefetch scheme and 8-bit burst length). It was used in a number of ATI graphics cards but was never picked up by Nvidia, and though it became a JEDEC standard, it never really caught on.
AMD's graphics division is now at it again with GDDR5. Working with the JEDEC standards body, AMD expects the new memory type to become quite popular and eventually all but replace GDDR3. Though AMD plans to be first with graphics cards using GDDR5, the planned production by Hynix, Qimonda, and Samsung speaks to the sort of volumes that only come with industry-wide adoption. Let's take a look at the new memory standard and what sets it apart from GDDR3 and GDDR4.
Source: ExtremeTech
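
To put the bandwidth point from the first paragraph into concrete terms, here is a minimal sketch of how theoretical peak memory bandwidth is usually estimated from bus width and per-pin data rate. The 256-bit bus and the per-pin rates below are assumed example values for illustration, not figures from the article.

```python
# Illustrative sketch: theoretical peak memory bandwidth of a graphics card.
# Bus width and per-pin data rates are hypothetical example values.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gb/s."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Same 256-bit bus, GDDR3-class vs GDDR5-class per-pin rates (assumed figures):
print(peak_bandwidth_gbs(256, 2.0))  # e.g. GDDR3 at 2.0 Gb/s per pin -> 64.0 GB/s
print(peak_bandwidth_gbs(256, 4.0))  # e.g. GDDR5 at 4.0 Gb/s per pin -> 128.0 GB/s
```

The takeaway is simply that, for a fixed bus width, doubling the per-pin transfer rate doubles peak bandwidth, which is why faster memory types are attractive even without widening the (costly) memory bus.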