
Overclocked HBM? It's true, and it's fast


Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
25,887 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard EVGA z690 Dark KINGPIN (modded BIOS)
Cooling EK-AIO Elite 360 D-RGB
Memory 64GB Gskill Trident Z5 DDR5 6000 @6400
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 G95SC
Case Lian Li o11 Evo Dynamic White
Audio Device(s) Moondrop S8's on Schiit Hel 2e
Power Supply Bequiet! Power Pro 12 1500w
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software Windows 11
Benchmark Scores I dont have time for that.
3k points seems like total BS, and from what I've gathered from the thread there isn't much to stand on other than a sketchy pic and some links to "others" that experienced results "like" that.
 
Joined
Dec 31, 2009
Messages
19,366 (3.70/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I got your point on Tahiti/Hawaii, but the fact is that I had that exact problem with my cards. Many guys on the mining forums had it too.

And do you think the difference between a 5960X and a 5930K is enough to boost the graphics score that much? Note that this is the graphics score, which is mostly GPU-bound.
Mining is a COMPLETELY different beast though. There were sweet spots for the core and memory clocks there, but I highly doubt it had anything to do with the timings on the RAM, as there were no peaks and valleys in the gains: it was either linear or it stopped scaling altogether, even though I had tons of headroom over what was gaming-stable, on the order of 100+ MHz (I'm guessing the ECC in GDDR had something to do with that).

... but the scope of this conversation is gaming/benchmarks. I don't care about oranges right now, I want to get to the core of this apple!

Well, it could be a couple of things...

1. Barely any difference between the clock speeds (those CPUs are from the same generation but vary in clock speed).
2. A significant difference? I would think there is some, considering it's 1080p and partly CPU-bound...
3. Maybe the number of cores plays a role too (sounds odd, but think about AIDA64's memory test and how it, for some reason, scales with cores). I have done zero testing on that as far as the GPU score goes and could be WAY off (in fact I'm leaning that way). It clearly makes a difference in the overall score because of the physics test.
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
System Name The Terminator -2000+2000+1
Processor Intel Core i7-3770k OC'd to 4.2 ghz
Motherboard Asus Z77 Sabertooth
Cooling Corsair H100 Liquid Cooler
Memory 16GB (2x8) Corsair DDR3-1600Mhz
Video Card(s) Primary: EVGA GTX 980 ACX 2.0 Secondary: EVGA GTX 670 FTW
Storage 2x OCZ Vertex 4 256GB SSD's, 1 OCZ Vertex 4 128GB SSD, 1x 3TB Segate Barracuda 7200rpm HDD
Display(s) Sony 40inch LCD 1920x1080 HD TV
Case NZXT Switch 810 White
Audio Device(s) None
Power Supply Cooler Master Silent Hybrid Pro 1050W
Mouse Razer Taipan BF4 Edition
Keyboard Razer Blackwidow Ultimate BF4 Edition
Software 3TB HDD: Windows 8.1 Pro 64bit, OCZ 128GB SSD: Mac OSX 10.10 Hackintosh, OCZ 256GB SSD: Windows 10
Just made an account for this discussion. I feel, as do a few others discussing this topic, that ultimately AMD's Fiji XT core is bottlenecked not by its memory subsystem but by its raw horsepower.

Let's think about this logically: the Fury X has a total memory bandwidth of 512 GB/s, which is in fact the fastest on the market. The 980 Ti has a total memory bandwidth of 336 GB/s. Yet in most real-world benchmarks the 980 Ti matches or slightly beats the Fury X at 4K in most games. What is even more interesting is that the Fury X performs much worse relative to the 980 Ti at lower resolutions; at 1080p, in some game scenarios, it is only slightly faster than the 980!

Now let's dive into why this might be happening. I believe the 1080p benchmarks around the web really show what the problem is. At lower resolutions, games are usually less bandwidth-dependent, as they don't require as much memory bandwidth to get a decent framerate; yes, they can still be memory-bottlenecked, but not to the degree they are at 4K. All of my thoughts assume that Nvidia's and AMD's memory compression technologies are relatively equal. In theory, AMD's HBM implementation would be more than enough to DESTROY the 980 Ti. Why? Because the GPU is more of a bottleneck than the bandwidth is.

The reason I'm posting this in this thread is to show that while overclocking the Fury X's memory MIGHT lead to some performance gains, it probably isn't worth it, and AMD would be better off building a better GPU to harness the full potential of HBM. I, like many others, wanted to see the Fury X destroy Nvidia's offerings for competition's sake, but it never happened :(
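For reference, the quoted bandwidth figures fall straight out of the usual clock × data rate × bus width arithmetic. A quick back-of-the-envelope sketch (the clocks and widths below are the commonly published figures for the Fury X's HBM1 and the 980 Ti's GDDR5, taken here as assumptions for illustration):

```python
def bandwidth_gb_s(mem_clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak theoretical bandwidth: clock x transfers per clock x bus width, converted bits -> bytes."""
    return mem_clock_mhz * 1e6 * transfers_per_clock * bus_width_bits / 8 / 1e9

# Fury X: HBM1 at 500 MHz, double data rate, 4096-bit interface (4 stacks x 1024-bit)
print(bandwidth_gb_s(500, 2, 4096))    # ~512 GB/s

# GTX 980 Ti: GDDR5 at 1753 MHz, quad data rate (~7 Gbps effective), 384-bit interface
print(bandwidth_gb_s(1753, 4, 384))    # ~336.6 GB/s
```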
 
Joined
Nov 9, 2010
Messages
5,654 (1.15/day)
System Name Space Station
Processor Intel 13700K
Motherboard ASRock Z790 PG Riptide
Cooling Arctic Liquid Freezer II 420
Memory Corsair Vengeance 6400 2x16GB @ CL34
Video Card(s) PNY RTX 4080
Storage SSDs - Nextorage 4TB, Samsung EVO 970 500GB, Plextor M5Pro 128GB, HDDs - WD Black 6TB, 2x 1TB
Display(s) LG C3 OLED 42"
Case Corsair 7000D Airflow
Audio Device(s) Yamaha RX-V371
Power Supply SeaSonic Vertex 1200w Gold
Mouse Razer Basilisk V3
Keyboard Bloody B840-LK
Software Windows 11 Pro 23H2
The reason I'm posting this in this thread is to show that while overclocking the Fury X's memory MIGHT lead to some performance gains, it probably isn't worth it, and AMD would be better off building a better GPU to harness the full potential of HBM.

Yet in saying that, you make it obvious the real bottleneck is not the VRAM itself or its bandwidth, but the GPU architecture not being as flexible and efficient as Nvidia's.

This is similar to Intel vs AMD on CPUs. It has to be a very well-threaded game before the FX chips can even begin to compete, because Intel's chips are much more flexible and efficient in lightly threaded applications.
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
Yet in saying that, you make it obvious the real bottleneck is not the VRAM itself or its bandwidth, but the GPU architecture not being as flexible and efficient as Nvidia's.

This is similar to Intel vs AMD on CPUs. It has to be a very well-threaded game before the FX chips can even begin to compete, because Intel's chips are much more flexible and efficient in lightly threaded applications.
Sorry, I didn't make it clear, but yes, that was basically the point I was trying to make: in a nutshell, AMD needs to build a better GPU to support HBM.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
I think the only reason Fiji doesn't tear everything apart is the ROPs. If it had 128 ROP units as initially stated, it would dominate everything. But they kept the same count as on the R9 290X, whereas the GTX 980 Ti upgraded that to 96 (from 64). And that changes things. It's what drives pixel processing; it's what makes a GPU fast for the most part. There are other things like memory bandwidth and the number of texture units and shaders, but all of this has to be in balance to achieve good results. Having a merely okay GPU and slamming ridiculously fast memory on it won't solve all problems, and that's the problem Fiji is facing.
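To put rough numbers on the ROP point: peak pixel fill rate is essentially ROP count times core clock. A quick sketch using the commonly listed ROP counts and reference clocks (treat the exact figures as assumptions; real-world throughput also depends on the workload):

```python
def pixel_fill_rate_gpix_s(rops, core_clock_mhz):
    """Peak pixel fill rate, assuming one pixel per ROP per clock."""
    return rops * core_clock_mhz * 1e6 / 1e9

cards = {
    "R9 290X    (64 ROPs @ 1000 MHz)": (64, 1000),
    "Fury X     (64 ROPs @ 1050 MHz)": (64, 1050),
    "GTX 980 Ti (96 ROPs @ 1000 MHz)": (96, 1000),
}
for name, (rops, clock) in cards.items():
    print(f"{name}: ~{pixel_fill_rate_gpix_s(rops, clock):.1f} Gpix/s")
```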
 
Joined
Sep 6, 2013
Messages
748 (0.19/day)
Location
Oceania
Can't wait to see if HBM OCing translates to real game performance gains as opposed to synthetic benchmarks.
Doubtful, when an extra 50 GB/s of bandwidth gains just 250 points... Seems like overkill, tbh...

 
Joined
Dec 31, 2009
Messages
19,366 (3.70/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Can you get it to 600 MHz, this magical sweet spot he is talking about?
 
Joined
Feb 2, 2015
Messages
2,707 (0.80/day)
Location
On The Highway To Hell \m/
Yes. But overkill is exactly the point, in overclocking anything. It's as simple as overclock and get what you get. Or don't overclock and get nothing.

Do you want something? Or do you want nothing?

If you choose the latter, then you've already lost.

To say you won't get anything by overclocking your memory is, as I alluded to, just plain wrong. And, frankly, a stupid thing to say. It does work, always has, always will. To ask if it's worth it is equally retarded. How is it not? You're not going to hurt a thing if you do. There are hardware-level protection measures in place to prevent it (unless you've physically modified them beyond their factory specifications). Something most of you apparently have not been made aware of (much less done anything about).

On that note...

Until somebody like me(with hard modding skills) gets ahold of one of these cards, nobody's going to know what they can really do. But by all means, predict your winner now. While there's still no evidence to support your case. LMAO!
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.94/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
To say you won't get anything by overclocking your memory is, as I alluded to, just plain wrong.
No, it just gives you 10% of your overclock back as performance. In other words, to get a 10% boost in performance you would need to double the memory clock. That sounds a lot like overclocking DRAM on general-purpose CPUs: fun for sport, useless in practice.

Secondary question: is overclocking the memory simply useless, or is the cache just doing its job well? If the cache gets hit more often than it misses, those DRAM cycles will rarely be needed, because the latency of SRAM cache is about as low as it gets until you get to registers.
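Putting the thread's own numbers into that framing, here is a rough sketch (the ~250 points on a ~16,000 graphics score from roughly +50 GB/s are the figures quoted earlier in the thread; the cache latencies at the end are purely illustrative):

```python
def scaling_efficiency(clock_gain_pct, perf_gain_pct):
    """Fraction of a memory-clock increase that actually shows up as performance."""
    return perf_gain_pct / clock_gain_pct

# Figures quoted in this thread: +50 GB/s on 512 GB/s (~10% memory OC)
# for ~250 points on a ~16,000 graphics score (~1.6%).
mem_oc_pct = 50 / 512 * 100       # ~9.8 %
perf_pct   = 250 / 16000 * 100    # ~1.6 %
print(f"OC {mem_oc_pct:.1f}% -> perf {perf_pct:.1f}% "
      f"(efficiency {scaling_efficiency(mem_oc_pct, perf_pct):.2f})")

# On the cache question: average memory access time barely moves when DRAM
# gets faster if the cache hit rate is high. The latencies below are made up
# purely for illustration.
def amat_ns(hit_ns, miss_rate, miss_penalty_ns):
    """Average memory access time = hit time + miss rate x miss penalty."""
    return hit_ns + miss_rate * miss_penalty_ns

print(amat_ns(1.0, 0.05, 200))    # stock DRAM latency      -> 11.0 ns
print(amat_ns(1.0, 0.05, 180))    # 10% faster DRAM latency -> 10.0 ns
```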
 
Joined
Sep 6, 2013
Messages
748 (0.19/day)
Location
Oceania
Yes. But overkill is exactly the point, in overclocking anything. It's as simple as overclock and get what you get. Or don't overclock and get nothing.

Do you want something? Or do you want nothing?

If you choose the latter, then you've already lost.

To say you won't get anything by overclocking your memory is, as I alluded to, just plain wrong. And, frankly, a stupid thing to say. It does work, always has, always will. To ask if it's worth it is equally retarded. How is it not? You're not going to hurt a thing if you do. There are hardware-level protection measures in place to prevent it (unless you've physically modified them beyond their factory specifications). Something most of you apparently have not been made aware of (much less done anything about).

On that note...

Until somebody like me(with hard modding skills) gets ahold of one of these cards, nobody's going to know what they can really do. But by all means, predict your winner now. While there's still no evidence to support your case. LMAO!
Yep, OK, keep laughing... Then please illustrate how overclocking the memory on a card that already has too much bandwidth will net results?
The 3000-point gap in the earlier benchmark was largely due to a 1200 MHz advantage in CPU clock speed and 2800 MHz DDR4 vs. 2133 DDR3, which increased the Physics score dramatically and was weighted into the final result. Not the VRAM, sorry.




And yea I know nothing about overclocking.... 0_o
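For context on the weighting argument: the Fire Strike overall result is a weighted harmonic mean of the Graphics, Physics, and Combined sub-scores, per Futuremark's technical guide. A rough sketch follows; the 0.75 / 0.15 / 0.1 weights and the sub-scores are assumptions for illustration only (check the whitepaper for the exact weights). The arithmetic shows the point both sides keep circling: a faster CPU lifts the Physics term and therefore the overall score, but it cannot move the Graphics sub-score itself.

```python
def fire_strike_overall(graphics, physics, combined,
                        w_g=0.75, w_p=0.15, w_c=0.10):
    """Weighted harmonic mean of the sub-scores. The weights are an
    assumption recalled from Futuremark's technical guide, not verified
    against it -- consult the whitepaper for the exact values."""
    return (w_g + w_p + w_c) / (w_g / graphics + w_p / physics + w_c / combined)

# Hypothetical sub-scores: a faster CPU lifts Physics (and a bit of Combined),
# which lifts the overall score, while the Graphics sub-score stays identical.
slow_cpu = fire_strike_overall(graphics=16000, physics=10000, combined=7000)
fast_cpu = fire_strike_overall(graphics=16000, physics=14000, combined=7500)
print(f"{slow_cpu:.0f} vs {fast_cpu:.0f} overall, same 16000 Graphics sub-score")
```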

 
Joined
Aug 20, 2007
Messages
20,787 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64
Mining is a COMPLETELY different beast though. There were sweet spots for the core and memory clocks there, but I highly doubt it had anything to do with the timings on the RAM, as there were no peaks and valleys in the gains: it was either linear or it stopped scaling altogether, even though I had tons of headroom over what was gaming-stable, on the order of 100+ MHz (I'm guessing the ECC in GDDR had something to do with that).

Mining was indeed very latency-sensitive... It didn't help that many vendors were incompetent at setting the RAM timings for optimal latency. It made next to no difference in games, but in mining it could make a HUGE difference.

There's a thread on litecointalk discussing this exact phenomenon:

https://litecointalk.org/index.php?topic=12369.0
 
Joined
Apr 25, 2013
Messages
127 (0.03/day)
Yep, OK, keep laughing... Then please illustrate how overclocking the memory on a card that already has too much bandwidth will net results?
The 3000-point gap in the earlier benchmark was largely due to a 1200 MHz advantage in CPU clock speed and 2800 MHz DDR4 vs. 2133 DDR3, which increased the Physics score dramatically and was weighted into the final result. Not the VRAM, sorry.

And yea I know nothing about overclocking.... 0_o
Do you even realize that it is the GRAPHICS SCORE we are talking about? If you could make that 3000-point boost in GRAPHICS SCORE happen with the CPU alone, the Nobel Prize would be yours. Have you ever "played" 3DMark???
 
Joined
Sep 6, 2013
Messages
748 (0.19/day)
Location
Oceania
Do you even realize that it is the GRAPHICS SCORE we are talking about? If you could make that 3000-point boost in GRAPHICS SCORE happen with the CPU alone, the Nobel Prize would be yours. Have you ever "played" 3DMark???
I suggest you read the 3DMark whitepapers.





Edit,
that's not even the point anyway.
Present results from an actual game benched on two identical systems at 1440p if you want to be taken seriously...
 
Joined
Apr 25, 2013
Messages
127 (0.03/day)
I don't think we can continue our conversation until you make that jump in GRAPHICS score with just your CPU overclock. FYI, it is impossible.
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
I suggest you read the 3DMark whitepapers.





Edit,
that's not even the point anyway.
Present results from an actual game benched on two identical systems at 1440p if you want to be taken seriously...
I have to wonder why people even use 3DMark other than to stress-test components for stability when overclocking. When people tell me "oh, look at the higher score I got just by overclocking" in 3DMark, I say, "OK, compare the min, max, and average framerates in an ACTUAL game with and without the overclock, and then we will talk."
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
I don't think we can continue our conversation until you make that jump in GRAPHICS score with just your CPU overclock. FYI, it is impossible.
Now, technically, if you have a bad CPU that bottlenecks your GPU, this could happen.
 
Joined
Feb 2, 2015
Messages
2,707 (0.80/day)
Location
On The Highway To Hell \m/
I already explained the difference between bandwidth and memory speed. If memory speed gains you nothing, then underclock it. See where that gets you!

Let me try to dumb it down as far as possible. The GPU core needs to use the VRAM. The slower that VRAM is, the less efficient/slower that process becomes. The term bandwidth is meaningless in this instance; it's a product of math and nothing more. The term bus width isn't, though. It's just as important as the memory/VRAM clock speed.

Get it yet you bandwidth obsessed people? IT'S NOT ALL ABOUT BANDWIDTH!!!

EDIT: You feel better now? Names don't hurt your feelings if they don't ring true you know? Sticks and stones...the truth hurts...all that stuff we learn before the age of 5. Excuse me if I forget I'm speaking to infants occasionally. Good grief!
 
Joined
Apr 25, 2013
Messages
127 (0.03/day)
Now, technically, if you have a bad CPU that bottlenecks your GPU, this could happen.
I know, but we are talking about the 16k zone of single-card GRAPHICS scores and Intel's highest offerings. That guy seems to ignore these facts.
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
I already explained the difference between bandwidth and memory speed. If memory speed gains you nothing, then underclock it. See where that gets you!

Let me try to dumb it down as far as possible. The GPU core needs to use the VRAM. The slower that VRAM is, the less efficient/slower that process becomes. The term bandwidth is meaningless in this instance; it's a product of math and nothing more. The term bus width isn't, though. It's just as important as the memory/VRAM clock speed.

Get it yet you bandwidth obsessed fools? IT'S NOT ALL ABOUT BANDWIDTH!!!
What? Memory bandwidth is the total throughput of the VRAM devices, whether GDDR5 or HBM. One GDDR5 module is slower than a bunch of GDDR5 modules, and one HBM stack is slower than four HBM stacks. Memory clock speed is just the operating frequency of a single module in GDDR5 (for HBM I don't know if it's per stack, per module per stack, or what). Frequency does matter, but so does how many devices you have, which together give the total throughput.
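To illustrate the per-device vs. aggregate point with round numbers (a sketch; the per-pin rates and widths are the commonly cited figures for first-generation HBM on Fiji and for the 980 Ti's GDDR5, taken as assumptions here):

```python
def aggregate_bw_gb_s(per_pin_gbps, bus_bits_per_device, devices):
    """Aggregate bandwidth = per-pin rate x interface width per device x device count, bits -> bytes."""
    return per_pin_gbps * bus_bits_per_device * devices / 8

# One HBM1 stack: ~1 Gbps per pin over a 1024-bit interface -> ~128 GB/s.
# Fiji carries four stacks, so the aggregate reaches ~512 GB/s despite the low clock.
print(aggregate_bw_gb_s(1.0, 1024, 1))   # 128.0
print(aggregate_bw_gb_s(1.0, 1024, 4))   # 512.0

# Typical GDDR5 on a 980 Ti: ~7 Gbps per pin across twelve 32-bit chips (384-bit bus).
print(aggregate_bw_gb_s(7.0, 32, 12))    # 336.0
```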
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.94/day)
Location
Concord, NH, USA
I know, but we are talking about the 16k zone of single-card GRAPHICS scores and Intel's highest offerings. That guy seems to ignore these facts.
Exactly, we're talking scores of 16k+ with improvements in the 250 range. I think the only person ignoring facts is you. It doesn't take a fricken rocket scientist to figure out when memory isn't your bottleneck.
 
Joined
Jul 1, 2015
Messages
6 (0.00/day)
Exactly, we're talking scores of 16k+ with improvements in the 250 range. I think the only person ignoring facts is you. It doesn't take a fricken rocket scientist to figure out when memory isn't your bottleneck.
All things considered, increasing just about anything on a GPU will more than likely give you some performance increase; the question is whether the gain is big enough to be worth it. Same with the CPU.
 
Joined
Apr 25, 2013
Messages
127 (0.03/day)
Exactly, we're talking scores of 16k+ with improvements in the 250 range. I think the only person ignoring facts is you. It doesn't take a fricken rocket scientist to figure out when memory isn't your bottleneck.
Hey, hey, I'm talking with the other guy about the 3000-point jump in GRAPHICS score with 600 MHz HBM, not the petty 250+. Do you even read my posts before charging into my conversation? It's a whole build-up about the latency of HBM due to its low clock, and suddenly a random guy jumps in and talks about what was already discussed pages before.
Wait, do you even know about the latency part of memory overclocking??
 
Joined
Feb 2, 2015
Messages
2,707 (0.80/day)
Location
On The Highway To Hell \m/
What? Memory bandwidth is the total throughput of the VRAM devices, whether GDDR5 or HBM. One GDDR5 module is slower than a bunch of GDDR5 modules, and one HBM stack is slower than four HBM stacks. Memory clock speed is just the operating frequency of a single module in GDDR5 (for HBM I don't know if it's per stack, per module per stack, or what). Frequency does matter, but so does how many devices you have, which together give the total throughput.
OK, you're new here. You haven't bugged me enough to be ignored. You've answered the question; you just don't realize it yet. "Total throughput" = theoretical maximum output capacity. When is that relevant to performance? When does it really matter? Almost never. Most of the time it's just a number you get when you multiply the memory speed by the data rate and the bus width, then divide by eight to convert bits to bytes (other factors removed from the equation for ease of understanding, just so you know I know that). There's your precious bandwidth. It's the horsepower of the computing world. Both are products of math. What really matters is torque, and how fast it can be applied over time. There's your precious HP. Unless you're on a dyno, you're not likely to ever use all of it either.
 
Joined
Aug 16, 2004
Messages
3,275 (0.46/day)
Location
Sunny California
Processor Intel Core i9 13900KF
Motherboard Asus ROG Maximus Z690 Hero EVA Edition
Cooling Asus Ryujin II 360 EVA Edition
Memory 4x16GBs DDR5 6800MHz G.Skill Trident Z5 Neo Series
Video Card(s) Zotac RTX 4090 AMP Extreme Airo
Storage 2TB Samsung 980 Pro OS - 4TB Nextorage G Series Games - 8TBs WD Black Storage
Display(s) LG C2 OLED 42" 4K 120Hz HDR G-Sync enabled TV
Case Asus ROG Helios EVA Edition
Audio Device(s) Denon AVR-S910W - 7.1 Klipsch Dolby ATMOS Speaker Setup - Audeze Maxwell
Power Supply EVGA Supernova G2 1300W
Mouse Asus ROG Keris EVA Edition - Asus ROG Scabbard II EVA Edition
Keyboard Asus ROG Strix Scope EVA Edition
VR HMD Samsung Odyssey VR
Software Windows 11 Pro 64bit
All I see now is people fighting over semantics and statistically insignificant latency metrics, and no real-world performance gains.

Looks a lot to me like some people are grasping at straws. The Fury owners who have contributed to this thread have so far not been able to replicate the results quoted in the first post; also, as far as I can see, no link has been provided to Futuremark to validate the original results.

Until significant real-world gains in performance from OCing this specific card's HBM are shown, all this discussion about latency and about memory speed and memory bandwidth not being related* is moot, as long as a) people can't show any proof of statistically significant results from OCing the HBM on this card, and b) tools are not available to adjust the latency timings of HBM.

And please refrain from calling people names. This is a very respected tech forum and most users have been around for many years, so insults about others not knowing what they're talking about are just childish and only make the point you want to prove even less valid in everyone's eyes.

* BTW, raise the memory speed in MHz on any given card and you'll see the bandwidth increase as well; bandwidth is a product of memory speed multiplied by memory bus width as well as data rate, so both terms are intrinsically linked.
 