
Article: Just How Important is GPU Memory Bandwidth?

rancur3p1c

New Member
Joined
Mar 15, 2015
Messages
1 (0.00/day)
Would love to see this across screen resolutions! I.e., draw a correlation between pixels and bandwidth. You'd need an estimate per game of how much bandwidth goes to actual rendering; then I could make a purchasing decision for a 1600p monitor! I'm playing a couple of older games. Render capacity is fine, I'm just worried about bandwidth.
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
What do you mean by "render capacity is fine"? What is your card and system and what games are you playing?

Just scale by number of pixels. Should be a decent guess. What does that give you? And if that is too much you can probably just reduce the AA or AF settings.
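If you want to put a number on that, here's a minimal sketch of the scale-by-pixels estimate (the 40 GB/s baseline is a made-up placeholder; substitute whatever your card actually measures at 1080p):

```python
# Back-of-envelope: scale an observed 1080p bandwidth figure by pixel count.
# The 40.0 GB/s baseline is a made-up placeholder, not a measured value.

def scaled_bandwidth(base_gbps, base_res, target_res):
    """Estimate bandwidth needed at target_res by scaling with pixel count."""
    base_pixels = base_res[0] * base_res[1]
    target_pixels = target_res[0] * target_res[1]
    return base_gbps * target_pixels / base_pixels

estimate = scaled_bandwidth(40.0, (1920, 1080), (2560, 1600))
print(f"2560x1600 has {2560 * 1600 / (1920 * 1080):.2f}x the pixels of 1080p")
print(f"Scaled estimate: {estimate:.1f} GB/s")
```

Bear in mind that asset streaming doesn't scale with resolution, so treat the result as a rough guide rather than a hard requirement.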
 

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,740 (1.71/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Enermax ETX-T50RGB
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard EVGA Z15
Software Windows 11 +startisallback
What do you mean by "render capacity is fine"? What is your card and system and what games are you playing?

Just scale by number of pixels. Should be a decent guess. What does that give you? And if that is too much you can probably just reduce the AA or AF settings.
Pro-tip: don't question earth-dog.
1. He's been doing this wayyyy longer than you.
2. He's right. I wouldn't buy anything with less than 3GB at this stage; very few titles will live happily under 1GB of VRAM at high-ish settings at >=1080p, and any card that comes with 2GB is very likely to be useless at 1080p anyway.
As for raw bandwidth: it matters A LOT, especially once you start piling on the AA and cranking the res beyond 1440p.
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
Pro-tip: don't question earth-dog.

Can you answer the question?

If the 980's specs are fine and the 960 has half the shaders, ROPs, and TMUs, then why would it need more than half the VRAM and bandwidth? It only makes sense to me if your desire is to run high eye candy at low frame rates. Maybe the 960 isn't good enough for you "once you start piling on the AA and cranking the res beyond 1440p", but that isn't a problem more VRAM would solve. It doesn't have the processing power to give decent fps regardless.
 
Joined
Nov 4, 2005
Messages
11,654 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Can you answer the question?

If the 980's specs are fine and the 960 has half the shaders, ROPs, and TMUs, then why would it need more than half the VRAM and bandwidth? It only makes sense to me if your desire is to run high eye candy at low frame rates. Maybe the 960 isn't good enough for you "once you start piling on the AA and cranking the res beyond 1440p", but that isn't a problem more VRAM would solve. It doesn't have the processing power to give decent fps regardless.


You have to understand how a GPU works and actually processes data. A texture is loaded into memory; let's say it takes 4K of memory. Now, regardless of what resolution you are running, that texture HAS to be there, or else there will be a stall while it's fetched from either system memory or disk. Add in thousands of textures, or, in modern games, a large compressed file holding the textures for an area that is 2GB when decompressed. The frame may only need 2K of the texture, but the whole thing has to be in memory due to how DX currently works; and since only 2K is needed, the memory controller can sort out where the requested data actually lives and fetch it using only the bandwidth required.
It's called the knee of the curve: past a certain memory size it is a waste to add more, as performance doesn't usually increase linearly with added memory. But there are always exceptions, and when they happen, people get pissed when a card is using 60% of its GPU power and waiting on paged data.
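To put rough numbers on that residency-versus-fetching distinction, here's an illustrative sketch (assumed texture sizes, not measurements from any real game):

```python
# Rough sketch: VRAM residency vs per-frame bandwidth for a single texture.
# All sizes below are illustrative assumptions.

MIB = 1024 ** 2

def resident_bytes(width, height, bytes_per_texel=4, mipmapped=True):
    """The whole texture (plus mip chain) must sit in VRAM to avoid stalls."""
    base = width * height * bytes_per_texel
    return base * 4 // 3 if mipmapped else base  # mips add roughly 1/3

# One 4096x4096 RGBA8 texture occupies ~85 MiB, resident at all times.
print(f"Resident: {resident_bytes(4096, 4096) / MIB:.0f} MiB")

# But a 1080p frame only has ~2 million screen pixels to shade, so the
# controller fetches far less than the resident size each frame
# (ignoring filtering and overdraw, which multiply this somewhat).
sampled = 1920 * 1080 * 4  # one 4-byte texel per screen pixel
print(f"Sampled per frame: {sampled / MIB:.1f} MiB")
```

That gap is why capacity and bandwidth run out at different times: the resident size is a one-off cost, while the sampled bytes are the recurring per-frame cost.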
 
Joined
Dec 31, 2009
Messages
19,366 (3.72/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
but that isn't a problem more VRAM would solve. It doesn't have the processing power to give decent fps regardless.
I don't think anyone said 2560x1440 (at least I didn't). The 750 only has enough horsepower for 1080p on down. Here is my first reply...

It depends on what you are playing and at what resolution. At 1080p + AA on a 128-bit bus, it could be a factor. It also only has 1GB of vRAM. I wouldn't even call it a gaming card in the first place...

Rruff, you may want to read the chronology of posts again to get a better handle on the context. I was specifically talking about 1080p or less, as I mentioned.

Point is, for a card that is marketed as a 1080p gamer like the 750, it doesn't have enough VRAM capacity in the first place to hold textures and AA info. Before you worry about getting the water (data) in and out of the bucket (frame buffer/vRAM), you need a big enough bucket in the first place. Moving data faster doesn't matter if it's paging out to system memory.
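To put rough sizes on the bucket, here's a sketch of the fixed render-target cost at 1080p (MSAA storage simplified to samples times base size; real driver allocations vary):

```python
# Sketch: render-target memory at 1080p, before a single texture is loaded.
# MSAA storage is simplified to samples * base size; real allocations vary.

MIB = 1024 ** 2
W, H = 1920, 1080

def target_bytes(bytes_per_pixel, samples=1):
    return W * H * bytes_per_pixel * samples

color = target_bytes(4, samples=4)  # RGBA8 color buffer with 4xMSAA
depth = target_bytes(4, samples=4)  # 24-bit depth + 8-bit stencil
back = target_bytes(4)              # resolved back buffer

total = color + depth + back
print(f"Render targets alone: {total / MIB:.0f} MiB of a 1024 MiB card")
```

Everything left over then has to hold every resident texture, which is where a 1GB card runs dry at max settings.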
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
You have to understand how a GPU works and actually processes data. A texture is loaded into memory; let's say it takes 4K of memory. Now, regardless of what resolution you are running, that texture HAS to be there, or else there will be a stall while it's fetched from either system memory or disk. Add in thousands of textures, or, in modern games, a large compressed file holding the textures for an area that is 2GB when decompressed. The frame may only need 2K of the texture, but the whole thing has to be in memory due to how DX currently works; and since only 2K is needed, the memory controller can sort out where the requested data actually lives and fetch it using only the bandwidth required.

I understand this. And if I have a GTX 960 and select textures that are half the size of what works in a GTX 980, then what would the problem be? I should be able to run the same FPS and not have an issue with vram or bandwidth, right? The reason I wouldn't mind doing that is because the 960 doesn't have the processor to run that game at high settings anyway and still get decent FPS.

It's called the knee of the curve: past a certain memory size it is a waste to add more, as performance doesn't usually increase linearly with added memory. But there are always exceptions, and when they happen, people get pissed when a card is using 60% of its GPU power and waiting on paged data.

But in practice the 960 does have enough, and so does the 750 with only 1GB. If there were a rare instance where paging was the limiting factor, you could reduce the textures. Nvidia's software will probably do this automatically.

I looked at a bunch of reviews of the GTX 750 and 750 Ti. In addition to 1GB more VRAM, the 750 Ti has 20% more shaders and TMUs and an 8% faster VRAM clock. I was curious to see if there was any evidence that the 750 was hobbled by its lack of VRAM. Most of the tests used the highest settings, which aren't realistic for these cards, and there were a couple of games where the 750 dropped behind more than you'd expect, but on average it scored only 13% slower than the 750 Ti (averaged over many tests and many reviews). I bought a 750 and I monitor VRAM usage and a bunch of other things, and I haven't been limited by a lack of VRAM yet.

There are tons of user reviews out on the 750, and quite a few on the 960 as well. People aren't "pissed". They might even be the most highly reviewed cards you can get.
 
Joined
Dec 31, 2009
Messages
19,366 (3.72/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
It doesn't always manifest itself as an FPS issue, though...

I bought a 750 and I monitor VRAM usage and a bunch of other things, and I haven't been limited by a lack of VRAM yet.
Because, as you said, you reduce textures which puts less in vRAM.

The bottom line is, 1GB is not enough to run games at their highest quality on 1080p.
 
Joined
Feb 21, 2014
Messages
1,383 (0.37/day)
Location
Alabama, USA
Processor 5900x
Motherboard MSI MEG UNIFY
Cooling Arctic Liquid Freezer 2 360mm
Memory 4x8GB 3600c16 Ballistix
Video Card(s) EVGA 3080 FTW3 Ultra
Storage 1TB SX8200 Pro, 2TB SanDisk Ultra 3D, 6TB WD Red Pro
Display(s) Acer XV272U
Case Fractal Design Meshify 2
Power Supply Corsair RM850x
Mouse Logitech G502 Hero
Keyboard Ducky One 2
Psh, 2GB isn't enough anymore to run 1080p maxed. I easily hit 2.5GB in Elite Dangerous maxed out with AA and everything, and still keep a solid 60fps except in the most unoptimized places.

I saw a definite lack of bandwidth on the HD 7770. If I upped the memory clock to higher speeds (I don't remember exactly; it was really high. Golden chip, 1300MHz core) I would see a 5fps increase in some games, and more in benchmarks. I realize NVIDIA has the compression tech, but there is only so much magic NVIDIA dust they can use before they have to increase bus size.
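The bandwidth side of a memory overclock is simple arithmetic. Here's a sketch using the HD 7770's stock memory spec (1125 MHz, 4.5 Gbps effective, 128-bit bus); the overclocked value below is a hypothetical illustration, since the 1300MHz figure above was the core clock:

```python
# Bandwidth gain from a memory overclock on a 128-bit GDDR5 bus.
# Stock HD 7770 memory: 1125 MHz (4.5 Gbps effective). The OC value is
# a hypothetical illustration, not the actual setting from the post.

def gddr5_bandwidth_gbps(mem_clock_mhz, bus_bits):
    """GDDR5 moves 4 bits per pin per clock; /8 converts bits to bytes."""
    return mem_clock_mhz * 1e6 * 4 * (bus_bits / 8) / 1e9

stock = gddr5_bandwidth_gbps(1125, 128)  # 72.0 GB/s
oc = gddr5_bandwidth_gbps(1250, 128)     # 80.0 GB/s
print(f"Stock: {stock:.1f} GB/s, OC: {oc:.1f} GB/s (+{(oc / stock - 1) * 100:.0f}%)")
```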
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
The bottom line is, 1GB is not enough to run games at their highest quality on 1080p.

Right, I get that being true for most games. But the GTX 750 only has 1/4 the processing power of the 980, and 1/2 that of the 960, so it can't do it anyway.
 
Joined
Nov 4, 2005
Messages
11,654 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
Right, I get that being true for most games. But the GTX 750 only has 1/4 the processing power of the 980, and 1/2 that of the 960, so it can't do it anyway.
But 1GB is before the knee of the curve, and even though the card may not be able to fully utilize the memory in a single pass, it can still use a huge amount more to prevent bottlenecks and to act as a buffer against caching issues.

Put it this way: you have to tow a 10,000 lb trailer. Would you buy a truck that tows exactly 10,000 pounds and then cry about why it goes so slow on hills? Or would you buy the truck that tows 15,000 for 5% more cost and goes the same on hills (poorly optimized games) as it does on flat ground?

VRAM is the same. Sure, some cards will never find a use for 2GB of memory, but if the whole texture file for a level or area is 1.7GB and you had 1.5GB, the occasional hiccups from fetching data sure would suck, right?
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
if the whole texture file for a level or area is 1.7GB and you had 1.5GB, the occasional hiccups from fetching data sure would suck, right?

I'd reduce textures to prevent the problem. Which I'd want to do anyway because the card is weak in other ways.

Sure, it's nice to have extra vram, if it's free. But everything costs money. On a budget card like the 750 adding 1GB would have cost >10% more. Not worth it. If I keep playing that game I might as well get a 980 and be done with it. There's always something a little better for a little more. Why settle for anything less than the best?
 
Joined
Nov 4, 2005
Messages
11,654 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I'd reduce textures to prevent the problem. Which I'd want to do anyway because the card is weak in other ways.

Sure, it's nice to have extra vram, if it's free. But everything costs money. On a budget card like the 750 adding 1GB would have cost >10% more. Not worth it. If I keep playing that game I might as well get a 980 and be done with it. There's always something a little better for a little more. Why settle for anything less than the best?


Wow, just reduce textures, let's all just reduce textures. Call up the ol' boys at Valve and say, "You chaps know what, I want to reduce the textures in the game; never you mind, sir, that that means a whole other texture pack added to this game just for me. You louts owe me, now make it snappy." Then we will all go down to the speakeasy and have ourselves some devil juice and let the girlies dance for us, whatcha say?

Ever notice how they have "minimum specifications" for games? Most of the time those are for a slideshow: no AA, no AF, 1024x768 laptops with integrated Intel for people who don't mind 15-ish FPS. And then they have "recommended", which is for middle-of-the-road 1080p, 2xAA, 4-8xAF at 30-60FPS. Sure you do, ole boy, now let's quit playing this cat and mouse game and get on down to the speakeasy for them girlies, see.
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
Wow, just reduce textures, let's all just reduce textures.... Ever notice how they have "minimum specifications" for games?

If the 750 isn't powerful enough to run it, then I can't run it. It isn't a VRAM issue. Why would I want to pay for more VRAM just so I can run at 10 fps without paging? If I wanted to play a demanding game with a good experience, then I should have bought a faster card.

And if you are interested take a look at the pro reviews, user reviews, and screen captures for the GTX 750, and see what people are actually getting in games.
 

nl_bugsbunny

New Member
Joined
Aug 17, 2013
Messages
3 (0.00/day)
****
Holy crap I am tired and this is all probably totally wrong
You can see all my original data here:
https://www.dropbox.com/sh/v3vqnglktagj8tr/AADvMQeqR-nxETkn4PwJKlZBa?dl=0
****
Introduction

The main reason for putting this article together was the recent outcry about the GTX 960's 128-bit wide memory interface. The GPU offers 112GB/s of memory bandwidth, and many believe this narrow interface will not provide enough bandwidth for games. The card is primarily aimed at the midrange crowd wanting to run modern titles (both AAA and independent) at a native resolution of 1080p.

Memory bandwidth usage is actually incredibly difficult to measure, but measuring it is the only way to establish once and for all what the real memory bandwidth requirement for 1080p is. Using GPU-Z, what we typically have available to us is "Memory Controller Load". This is a percentage figure and does not directly measure the total GB/s of bandwidth in use. The easiest way to explain it is that it acts like the percentage CPU utilisation Task Manager shows. GPU Load is a similar example: various types of work can produce the same percentage figure yet very different power readings, meaning one 97% load can be much more intensive than another. Something else that only NVidia cards allow measurement of is PCIe Bus usage. AMD has yet to allow such a measurement, and thanks to @W1zzard for throwing me a test build of GPU-Z, I could run some Bus usage benchmarks. I had a fair few expectations for the figures, but the results I got were a little lower than expected.

Something I need to make clear before you read on: my memory bandwidth usage figures (GB/s) are not 100% accurate. They have been estimated and extrapolated using performance percentages from the benchmark figures I've got, and as such most of this article relies largely on those estimations. Only a fool would consider them fact. NVidia has said themselves that Bus usage is wholly inaccurate, and most of us are aware that Memory Controller Load (%) cannot represent the exact bandwidth usage (GB/s) with total precision. All loads are different.

All of the following benchmarks were run 4 times for each game at each resolution for accuracy. Every preset is set to Very High, or High where Very High is unavailable. The only alterations to my video settings were turning off VSync and Motion Blur.

Choices of Games

I've chosen to run with 4 games which I felt represent a fair array of game types. For the CPU-oriented category, I've run with Insurgency. It is Source engine based and highly CPU intensive, and should cover most games with that sort of requirement. It has a reasonable VRAM requirement but is overall quite light on general GPU usage, so it should stress the memory somewhat.

To represent independent games, while also covering a high VRAM requirement, I've run with Starpoint Gemini II. This game has massive VRAM requirements and is quite GPU-heavy.

I've chosen two other games for the AAA area: one very generalised game, and one that boasted a massive 4GB VRAM requirement for general high-res play. Far Cry 4 felt like a good representative of the AAA genre, with balanced CPU and GPU demands and moderate VRAM requirements. Middle-earth: Shadow of Mordor was my choice in the AAA genre to slaughter my VRAM and hopefully put my GPU memory controller and VRAM to the test.
*****

1440p – Overall Correlations

I've started off with benchmarks at 1440p to clearly identify what kind of GPU power this resolution requires. I understand that the 112GB/s of bandwidth we're interested in is designed to cope with 1080p, but hopefully you'll see just what you need.

First off, we'll take a look at all four games, and the performance of GPU Core Load (%), Memory Controller Load (%), and VRAM usage (MB). (The following data has been sorted by largest to smallest PCIe Bus usage.)

[Charts: GPU Core Load (%), Memory Controller Load (%), and VRAM usage (MB) for each game at 1440p]
What I expected to see was Memory Controller Load in direct correlation with VRAM usage. What we can clearly see here instead is that Memory Controller Load correlates almost perfectly with GPU Load. VRAM usage seems to make little difference to how either behaves, except in edge cases.

Next up, we'll look directly at the correlation between PCIe Bus usage (%) and VRAM usage (MB).

[Charts: PCIe Bus usage (%) vs VRAM usage (MB) for each game at 1440p]
Besides the Insurgency graph, it appears that there is no direct correlation between the PCIe Bus and VRAM. I had to run these benchmarks multiple times, as I was a little confused that the PCIe Bus usage was always so low, or in some cases, idle.

Next, let's look at the overall correlation between Memory Controller Load (%) and PCIe Bus usage (%).

[Charts: Memory Controller Load (%) vs PCIe Bus usage (%) for each game at 1440p]
You can see there's essentially no change in PCIe Bus usage overall. When the Memory Controller Load peaks, the PCIe Bus data shows no reaction to the change.

Finally, let's take a look at the individual memory bandwidth usage (GB/s) figures overall. Note: these figures are not 100% accurate, and follow the assumption that 100% Memory Controller Load = 224GB/s.

[Charts: estimated memory bandwidth usage (GB/s) for each game at 1440p]
We can see that in most cases the memory bandwidth usage (GB/s) is extremely erratic over the run. Shadow of Mordor was the only real case where usage was relatively consistent throughout the benchmark. You'll also probably notice that it hits a rather high figure at peak load.

Let's look at what these figures equate to overall. For this I've used the 95th percentile rule to remove freak results from both the low and high ends of the scale. Note: these figures indicate bandwidth with Maxwell's compression methods (~30%) already applied.

[Chart: 95th-percentile average and peak bandwidth estimates (GB/s) at 1440p, with compression]
We can see most of these figures are relatively high, though none manage to reach the limit of my 970's 224GB/s of available bandwidth at any time. The only exception is Starpoint Gemini II, which, despite eating VRAM when available, didn't appear to put much load on the Memory Controller. If we took the Memory Controller Load figure as a good representation of actual bandwidth usage, the 970 is never really in danger of being overwhelmed. We can clearly see, however, that the peak figures would be too much for a 960's 112GB/s of available bandwidth. Going by the average figures instead, the 960 could cope with a couple of the games, but it would still choke on the big titles during normal gameplay. We can't discount the peak figures though, so you'd certainly see issues at 1440p.
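For anyone who wants to reproduce this arithmetic from their own GPU-Z sensor log, here's a minimal sketch of the method described above (the CSV column name is a guess at your GPU-Z build's export format, and dividing by 0.7 is just one reading of the "~30% compression" assumption):

```python
# Minimal sketch of the estimation method: take Memory Controller Load
# samples, apply "100% = 224 GB/s", trim to the 95th percentile, and
# optionally back out the assumed ~30% Maxwell compression.
import csv

PEAK_GBPS = 224.0   # GTX 970's rated bandwidth
COMPRESSION = 0.30  # assumed average Maxwell savings (a rough guess)

def load_mcl(path, column="Memory Controller Load [%]"):
    """The column name is a guess; adjust it to match your log header."""
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f)]

def percentile(values, pct):
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(len(ordered) * pct / 100))]

samples = load_mcl("gpuz_log.csv")                    # hypothetical log file
p95_gbps = percentile(samples, 95) / 100 * PEAK_GBPS  # post-compression estimate
raw_gbps = p95_gbps / (1 - COMPRESSION)               # estimate without compression
print(f"95th percentile: {p95_gbps:.0f} GB/s (~{raw_gbps:.0f} GB/s pre-compression)")
```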

For the sake of estimation and sheer curiosity, here is what the estimated memory bandwidth usage would look like without compression, assuming Maxwell's compression is exactly 30% efficient.

[Chart: estimated bandwidth (GB/s) at 1440p without Maxwell compression]
The 970 would still cope, except in peak cases during Shadow of Mordor, where the required bandwidth exceeds that of the available 224GB/s. Obviously all these figures are mere estimates, so the actual cases may vary in real world examples.

*****

1080p – Overall Correlations

These are the main benchmarks we'll be looking at for the 960's 112GB/s bandwidth limit. The card is aimed at this resolution, so hopefully we'll see the post-compression figures landing in that area.

Let's take a look at the overall figures, and look for similarities with the 1440p correlations (or lack thereof). The previous charts showed Memory Controller Load tracking GPU Load, not VRAM usage.

[Charts: GPU Load, Memory Controller Load, and VRAM usage for each game at 1080p]
This surprised me a little. If you look relatively closely at the peaks and drops, all three measurements appear to correlate rather well at this resolution. The VRAM drops actually appear to coincide with the drops in Memory Controller Load as well as GPU usage. Certainly an interesting turn of events.

Next, let's take a look at the PCIe Bus usage and VRAM. There were no direct correlations in the 1440p benchmarks.

[Charts: PCIe Bus usage (%) vs VRAM usage (MB) for each game at 1080p]
This time things look a little more interesting, but harder to explain. Far Cry 4 shows no real correlation at all. The rest of the games, however, seem to show a drop in PCIe Bus usage every time there's a drop in VRAM usage, after which VRAM usage steadily rises before dropping again.

Next up are the Bus and Memory Controller figures.

[Charts: Memory Controller Load (%) vs PCIe Bus usage (%) for each game at 1080p]
This time again, no real correlation. A similar result to the 1440p benchmark. No surprises there.

Here are the figures you're more interested in, however. Let's take a look at the overall Memory Controller usage across the benchmarks. This should show us approximately (again, with the accuracy caveats above) how much bandwidth 1080p demands.

[Charts: estimated memory bandwidth usage (GB/s) for each game at 1080p]
This time Shadow of Mordor follows suit and starts to become a little more erratic along with the rest. We can see some interesting peaks in usage, as well as a general idea of what the average is overall. The plateau at the beginning of Far Cry 4 is particularly interesting.

Next, here are those overall figures in a more digestible form. Again, the 95th percentile rule was used to remove the serious spikes, and these results are not 100% accurate.

[Chart: 95th-percentile average and peak bandwidth estimates (GB/s) at 1080p, with compression]
Shadow of Mordor slaughters all, even in the average figures. Far Cry 4 scrapes by on the average figures, but again, its peak proves to be above the 112GB/s mark. The Source engine game as well as SPG2, however, prove to be completely viable.

Here's what the results would look like without the estimated ~30% Maxwell compression.

[Chart: estimated bandwidth (GB/s) at 1080p without Maxwell compression]
Shadow of Mordor peaks within percentile points of the bandwidth available on a 770 (224GB/s), but all other games remain below the 200GB/s mark.

Conclusion

Something you have to bear in mind when looking at these figures (besides the fact that they are most certainly not 100% accurate) is that it's plausible memory bandwidth behaves much like VRAM. There are many occasions where people see VRAM usage in a given game hit a certain mark, let's say 1800MB on a 2GB card, while other people running the same settings on a 4GB card see usage above and beyond 2GB, almost as though the game is using the available VRAM simply because it can. Is it possible that games utilise memory bandwidth in a similar fashion? Possibly, but we don't really know. It could be that the same benchmark, run on a 770, which shares identical bandwidth with the 970 (224GB/s) but lacks compression, would produce higher figures, yet less than the 30% assumption implies. Maybe the video card wouldn't "stretch its legs" and would be more conservative with bandwidth usage if it had less available. It'd be an interesting benchmark to see.

If we treated these bandwidth figures as a reference (which you most certainly should not), we could conclude that the GTX 960's 128-bit memory interface simply does not provide enough bandwidth to play AAA titles at Very High (or High where not available) and Ultra presets at 1080p. Going by the average figures it would get by OK, but it would struggle at peak loads. For independent titles and Source engine games, it'd do just fine. It may be that at 1080p, turning off a little eye candy would bring a game within the 112GB/s limit and remove that bottleneck in AAA titles.

The main issue is that more and more AAA titles may follow the example of games like Shadow of Mordor, requiring ever more VRAM and eating up more bandwidth. If things plateau at that sort of figure, perhaps 112GB/s would cope. If AAA titles keep advancing in fidelity, though, the 960 might quickly find itself outpaced by rivals offering a more sensible bandwidth ceiling.

Finally, I'll leave you again with the same bold statement: the (GB/s) figures in these benchmarks are merely estimates from a largely inaccurate method of extrapolating memory bandwidth usage. By no means should you base a purchase on them, as the percentage representation of memory bandwidth is open to extremely broad interpretation.

If anyone would be so kind as to run a benchmark of these games on a 770 and send the log over to me, I can more accurately show bandwidth usage BEFORE Maxwell compression. I'd also be delighted to see users' benchmarks on GTX 960s to prove these estimates horribly wrong.
Hi, can you run some tests on DirectX 12? I'm just curious, because NVIDIA says this card was made with new tech like DX12 and MFAA in mind, which in reality aren't heavy and shouldn't be affected by the bandwidth a 128-bit bus supplies, I think (or so they say; check NVIDIA's sites or YouTube talks if you have questions). But think with me: if I have one 970 with a 256-bit bus, then if I SLI I can run 4K perfectly on 512-bit, right? Now divide 4K by 4, and 512-bit as well, and what does that give? 1080p? And what bus? 128-bit? At least the math doesn't fail here :) Or are we just fine with a single 128-bit card at 1080p?
 
Joined
Apr 19, 2012
Messages
12,062 (2.77/day)
Location
Gypsyland, UK
System Name HP Omen 17
Processor i7 7700HQ
Memory 16GB 2400Mhz DDR4
Video Card(s) GTX 1060
Storage Samsung SM961 256GB + HGST 1TB
Display(s) 1080p IPS G-SYNC 75Hz
Audio Device(s) Bang & Olufsen
Power Supply 230W
Mouse Roccat Kone XTD+
Software Win 10 Pro
if I have one 970 with a 256-bit bus, then if I SLI I can run 4K perfectly on 512-bit, right?

'Fraid memory bandwidth doesn't double in SLI; each card just replicates the data, or they take it in turns. You'll still only have a 256-bit bus.

~200GB/s is the kind of bandwidth you're going to want for 1080p currently.

As and when DX12 releases and actually gets a signed driver with it, I'll test DX12 games and publish the results. I'll probably run an update when I finally buy a 4K monitor too, which should be soon™
 
Joined
Oct 2, 2004
Messages
13,791 (1.94/day)
Also be aware that the specs you see for the GTX 9xx and R9-285/R9 Fury are raw hardware bandwidth. Effective bandwidth is a lot higher once you take framebuffer compression into account, but no one really knows how much higher, since it depends on the rendered image...
 
Joined
Apr 19, 2012
Messages
12,062 (2.77/day)
Location
Gypsyland, UK
System Name HP Omen 17
Processor i7 7700HQ
Memory 16GB 2400Mhz DDR4
Video Card(s) GTX 1060
Storage Samsung SM961 256GB + HGST 1TB
Display(s) 1080p IPS G-SYNC 75Hz
Audio Device(s) Bang & Olufsen
Power Supply 230W
Mouse Roccat Kone XTD+
Software Win 10 Pro
Also be aware that the specs you see for the GTX 9xx and R9-285/R9 Fury are raw hardware bandwidth. Effective bandwidth is a lot higher once you take framebuffer compression into account, but no one really knows how much higher, since it depends on the rendered image...

+1, exactly this. While it's advertised as 30% off the total figure, in some games it can be as low as 1%.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.21/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
And DX12 doesn't technically add the RAM bandwidth either. No one's totally sure yet, but how multi-GPU is handled may well vary between titles. It may allow more memory to be used (by assigning tasks to each GPU), but that could mean the tasks aren't perfectly split, meaning you gain some more RAM at the cost of GPU power... DX12 is going to require some testing :D
 
Joined
Jan 2, 2015
Messages
1,099 (0.33/day)
Processor FX6350@4.2ghz-i54670k@4ghz
Video Card(s) HD7850-R9290
And DX12 doesn't technically add the RAM bandwidth either. No one's totally sure yet, but how multi-GPU is handled may well vary between titles. It may allow more memory to be used (by assigning tasks to each GPU), but that could mean the tasks aren't perfectly split, meaning you gain some more RAM at the cost of GPU power... DX12 is going to require some testing :D
Well, if you can cut latency in half (and we know the gains are more than that), then you can render twice as much in the same amount of time. Imagine if your GPU was commanded more like a CPU and was better at multitasking... Preload? Sort of, but different. Or just more fps... DX12? No doubt.
 
Joined
Jul 10, 2015
Messages
839 (0.26/day)
Location
Romania
System Name Comet Lake
Processor Intel® Core™ i5-10600K CPU @ 5.0GHz
Motherboard MSI MPG Z490 GAMING PLUS
Cooling Arctic Freezer 34 eSports Duo
Memory CORSAIR LPX 32GB DDR4 3200 CL16 B-die
Video Card(s) GeForce® RTX 3060Ti™
Storage Samsung 970 Evo Plus M2 1TB & Seagate 2TB ST2000DM008-2UB102
Display(s) Dell 24 Gaming Monitor - S2422HG 165 Hz Curved
Case AQIRYS Arcturus
Audio Device(s) Realtek® ALC1200 Codec | Logitech Z533
Power Supply RMx White Series™ RM750x
Mouse Genesis Krypton 770
Keyboard Logitech G910
VR HMD N/A Skip
Software W10 Pro x64
Benchmark Scores XXX
So... in your opinion, is it as good as the 280X, or below it?
 
Joined
Apr 19, 2012
Messages
12,062 (2.77/day)
Location
Gypsyland, UK
System Name HP Omen 17
Processor i7 7700HQ
Memory 16GB 2400Mhz DDR4
Video Card(s) GTX 1060
Storage Samsung SM961 256GB + HGST 1TB
Display(s) 1080p IPS G-SYNC 75Hz
Audio Device(s) Bang & Olufsen
Power Supply 230W
Mouse Roccat Kone XTD+
Software Win 10 Pro
So... in your opinion, is it as good as the 280X, or below it?

The 280X has 288GB/s of memory bandwidth, while the 960 has a mere 112.2GB/s. In terms of memory bandwidth, the 280X is the better card. In terms of raw performance, the 280X is only slightly ahead of the 960 (~10% faster).
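Both numbers fall straight out of bus width times effective memory rate; a quick sanity check (7.01 Gbps effective for the reference 960, 6.0 Gbps for the reference 280X):

```python
# Theoretical peak bandwidth = bus width (in bytes) * effective data rate.
def peak_bandwidth_gbps(bus_bits, gbps_per_pin):
    return (bus_bits / 8) * gbps_per_pin

print(f"GTX 960: {peak_bandwidth_gbps(128, 7.01):.1f} GB/s")  # ~112.2
print(f"R9 280X: {peak_bandwidth_gbps(384, 6.00):.1f} GB/s")  # 288.0
```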
 
Joined
Jul 10, 2015
Messages
839 (0.26/day)
Location
Romania
System Name Comet Lake
Processor Intel® Core™ i5-10600K CPU @ 5.0GHz
Motherboard MSI MPG Z490 GAMING PLUS
Cooling Arctic Freezer 34 eSports Duo
Memory CORSAIR LPX 32GB DDR4 3200 CL16 B-die
Video Card(s) GeForce® RTX 3060Ti™
Storage Samsung 970 Evo Plus M2 1TB & Seagate 2TB ST2000DM008-2UB102
Display(s) Dell 24 Gaming Monitor - S2422HG 165 Hz Curved
Case AQIRYS Arcturus
Audio Device(s) Realtek® ALC1200 Codec | Logitech Z533
Power Supply RMx White Series™ RM750x
Mouse Genesis Krypton 770
Keyboard Logitech G910
VR HMD N/A Skip
Software W10 Pro x64
Benchmark Scores XXX
The 280X has 288GB/s of memory bandwidth, while the 960 has a mere 112.2GB/s. In terms of memory bandwidth, the 280X is the better card. In terms of raw performance, the 280X is only slightly ahead of the 960 (~10% faster).

Thank you. One more thing to ask: is it worth upgrading my card to an R9 290?
 
Joined
Apr 19, 2012
Messages
12,062 (2.77/day)
Location
Gypsyland, UK
System Name HP Omen 17
Processor i7 7700HQ
Memory 16GB 2400Mhz DDR4
Video Card(s) GTX 1060
Storage Samsung SM961 256GB + HGST 1TB
Display(s) 1080p IPS G-SYNC 75Hz
Audio Device(s) Bang & Olufsen
Power Supply 230W
Mouse Roccat Kone XTD+
Software Win 10 Pro
is it worth upgrading my card to an R9 290?

Eye of the beholder. If you have money burning a hole in your pocket and you're not getting the FPS you want, then it's worth it.
If you're scraping together upgrade money, and are quite happy with your current performance, then only you can make that decision.
 