
GT300 to Boast Around 256 GB/s Memory Bandwidth

btarunr

Editor & Senior Moderator
Recently, early information on NVIDIA's next-generation GT300 graphics processor surfaced, suggesting it packs 512 shader processors and an enhanced processing model. A fresh report from Hardware-Infos sheds some light on its memory interface, revealing it to be stronger than that of any production GPU. According to information that has been doing ping-pong between Hardware-Infos and Bright Side of News, GT300 might feature a 512-bit wide GDDR5 memory interface.

The memory interface, in conjunction with the lowest-latency GDDR5 memory available at a theoretical 1000 MHz (2000 MHz DDR), would churn out 256 GB/s of bandwidth, the highest for a GPU so far. Although Hardware-Infos puts the lowest-latency figure at 0.5 ns, the math wouldn't work out: a 0.5 ns cycle time corresponds to an actual clock of 2000 MHz, which across a 512-bit interface would churn out 512 GB/s, so there is a slight inaccuracy there. Qimonda's IDGV1G-05A1F1C-40X leads production today with its "40X" rating. With these chips across a 512-bit interface, the 256 GB/s bandwidth equation is satisfied. The clock speed of the memory isn't known just yet; the above is merely an example using a commonly available high-performance GDDR5 memory chip. At least going by these little information leaks, the new GPU is shaping up to be another silicon monstrosity from NVIDIA in the making.
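The 256 GB/s figure in the report can be sanity-checked with a quick calculation. The sketch below assumes the rumored numbers from the article (a 512-bit bus and a 1000 MHz actual command clock); GDDR5 effectively transfers four bits per pin per command-clock cycle:

```python
# Bandwidth sanity check for the rumored GT300 memory subsystem.
bus_width_bits = 512          # rumored 512-bit memory interface
command_clock_hz = 1_000e6    # 1000 MHz actual clock, per the article
bits_per_pin_per_cycle = 4    # GDDR5: quad data rate relative to the command clock

# bits per second across the whole bus, divided by 8 for bytes
bandwidth_bytes = bus_width_bits * command_clock_hz * bits_per_pin_per_cycle / 8
print(bandwidth_bytes / 1e9)  # 256.0 (GB/s)
```

The same formula reproduces the GTX 295's 223.8 GB/s mentioned later in the thread if you plug in its bus width and memory clock, which is all "memory bandwidth" means here.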

View at TechPowerUp Main Site
 
sounds like a new 8800GTX
 
Yeah, let's just hope that it doesn't idle at 55 degrees :p
 
This in conjunction with the use of the lowest latency GDDR5 memory available (0.5 ns), at a theoretical 1000 MHz (2000 MHz DDR)

If it's GDDR5, shouldn't it be 4000 MHz DDR?
 
If it's GDDR5, shouldn't it be 4000 MHz DDR?

No, the data is pushed on only two parts of a clock cycle, so it's DDR, and GDDR5. The amount of data pushed is what makes the difference here, and it is twice what GDDR3 pushes. You can put it as "effectively 4.00 GHz", but not "4.00 GHz DDR". It's still 2.00 GHz when its actual clock speed is 1 GHz.
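The clock naming the thread is wrestling with can be laid out in a quick sketch. The 1 GHz figure is the article's example clock; the multipliers are how GDDR5 figures are conventionally quoted:

```python
# Clock naming for a GDDR5 chip with a 1 GHz actual command clock (illustrative).
actual_mhz = 1000                # real command clock (what the chip is timed at)
ddr_mhz = actual_mhz * 2         # "2000 MHz DDR": the data clock runs at twice the command clock
effective_mhz = actual_mhz * 4   # "4 GHz effective": data transfers per second per pin

print(actual_mhz, ddr_mhz, effective_mhz)  # 1000 2000 4000
```

All three numbers describe the same chip; only the "effective" figure feeds directly into the bandwidth calculation, which is why spec sheets usually quote it.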
 
I have a feeling this monster card will come with a monster price!
Can't wait to see what AMD's response is to this beast.
 
No, the data is pushed only on two parts of a clock cycle, so it's DDR, and GDDR5. The amount of data pushed makes the difference here, and is twice that of what GDDR3 pushes.

GDDR5 is actually QDR.

At least from what I've been told.

Edit: I wasn't quite right, but according to Wikipedia, "GDDR5 is the successor to GDDR4 and unlike its predecessors has two parallel DQ links which provide doubled I/O throughput when compared to GDDR4".
 
Again another PCI-E graphics monster from Nvidia intending to torture our PSUs.
 
No, the data is pushed only on two parts of a clock cycle, so it's DDR, and GDDR5. The amount of data pushed makes the difference here, and is twice that of what GDDR3 pushes. You can put it as "effectively 4.00 GHz", but not "4.00 GHz DDR". It's still 2.00 GHz when its actual clock-speed is 1 GHz.

I meant to say 4 GHz effective, since we are used to saying that for GDDR5 on ATI cards.

So wouldn't it be easier to put 4 GHz effective in the article, as people are used to saying that, rather than getting somebody confused?

Anyway, I didn't get your point that
It's still 2.00 GHz when its actual clock-speed is 1 GHz
. So does GDDR5 push 4 GHz or 2 GHz?
 
Takes the performance crown for sure.
 
GDDR5 is actually QDR.

At least from what I've been told.

Not that I didn't know that. You need to understand how it works to know why they don't call it QDR, even when the bandwidth is four times that of DRAM at a given clock-speed.
 
Not that I didn't know that. You need to understand how it works to know why they don't call it QDR, even when the bandwidth is four times that of DRAM at a given clock-speed.

I just looked it up there. It's not that it sends information four times in one clock, it's that it has two more paths. I think.
 
Can someone help explain how memory bandwidth relates to overall performance? If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s, does the GPU use the clocks better? Does it use the memory better?

So does higher memory bandwidth = better memory overclock?
 
It's already ridiculous as it is.
People have freaking 1200 watt PSUs.
That is not cool; this thing had better not use more power than current cards.
 
Can someone help explain how memory bandwidth relates to overall performance? If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s, does the GPU use the clocks better? Does it use the memory better?

So does higher memory bandwidth = better memory overclock?

Bandwidth is the product of the memory clock speed and the bus width. That means if the memory runs at a higher clock speed or the bus is wider, you get more bandwidth.
 
So the overall higher bandwidth will mean the card will run an overclock better, right?
 
So the overall higher bandwidth will mean the card will run an overclock better, right?

No. That is dependent on the memory modules on the card.
 
So I'm obviously not fully understanding this, so let's do a comparison:

Take for example the 275 lineup from EVGA, the standard edition vs. the FTW edition. Both use the same GPU and the same memory modules, but the FTW edition is clocked faster and has a slightly higher memory bandwidth. Wouldn't the higher bandwidth mean the FTW edition performs better than the regular edition overclocked to the same clock settings? I'm trying to clarify whether the extra $ for higher memory bandwidth will pay off.
 
Wonder if it'll be dual-core...
 
So I'm obviously not fully understanding this, so let's do a comparison:

Take for example the 275 lineup from EVGA, the standard edition vs. the FTW edition. Both use the same GPU and the same memory modules, but the FTW edition is clocked faster and has a slightly higher memory bandwidth. Wouldn't the higher bandwidth mean the FTW edition performs better than the regular edition overclocked to the same clock settings? I'm trying to clarify whether the extra $ for higher memory bandwidth will pay off.

Chances are the FTW version and stock can both achieve the same clock speed, because the modules would be rated to the same speed. If the FTW version and the stock version are at the same speed, performance will be identical. Higher bandwidth will increase frames per second and help at higher resolutions.
 
Woot! Next-gen NVIDIA cards, here we come! Might leave ATI/AMD speechless for a while unless they release their next-gen cards in the same quarter, which I hope will happen, else we Aussies will be seeing video cards at the 1000+ AUD mark again :roll:
 
So did this leak, or did they announce it? I don't think it's smart to announce what you're coming out with like this. AMD is watching. They're probably already trying to get something to trump it. It's gonna be hard, but I'm sure they'll keep up. I just hope we don't see another HD 2900XT vs. 8800GTX :laugh: Not saying the 2900XT was bad. I owned two of them myself. The 8800GTX was just so much better.
 
Chances are the FTW version and stock can both achieve the same clock speed, because the modules would be rated to the same speed. If the FTW version and the stock version are at the same speed, performance will be identical. Higher bandwidth will increase frames per second and help at higher resolutions.

Thanks DP :toast: Just trying to learn more about gfx cards and overclocking. It's not all about the clock speeds :cool:
 