Monday, November 14th 2016

NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?

Air-cargo shipping manifest descriptions point to the possibility that NVIDIA's upcoming high-end graphics card based on the GP102 silicon, the GeForce GTX 1080 Ti, could feature 10 GB of memory, rather than the previously expected 12 GB, the amount found on the TITAN X Pascal. NVIDIA is apparently getting 10 GB to work over a 384-bit wide memory interface, likely by using chips of different densities. The GTX 1080 Ti is also rumored to feature 3,328 CUDA cores, 208 TMUs, and 96 ROPs.

NVIDIA has, in the past, used memory chips of different densities to hit its desired memory sizes over limited memory bus widths. On the GeForce GTX 660, for example, the company achieved 2 GB of memory across a 192-bit wide GDDR5 memory interface. The product number for the new SKU, as mentioned in the shipping manifest, is "699-1G611-0010-000," which is different from the "699-1G611-0000-000" of the TITAN X Pascal, indicating that this is the second product based on the PG611 platform (the PCB on which NVIDIA built the TITAN X Pascal). NVIDIA is expected to launch the GeForce GTX 1080 Ti in early 2017.
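As a rough illustration of the mixed-density approach, here is a minimal Python sketch; the chip splits shown are hypothetical examples consistent with the capacities above, not confirmed board layouts.

```python
# Total VRAM is the sum of the individual chip densities; the bus width fixes
# how many 32-bit channels (and hence memory packages) the GPU addresses.

def total_vram_gb(chip_densities_gbit):
    """Sum per-chip densities (gigabits) into a total capacity in gigabytes."""
    return sum(chip_densities_gbit) / 8  # 8 gigabits per gigabyte

# GTX 660-style mix: 2 GB over a 192-bit bus (six 32-bit channels). One
# possible split is four 2 Gb packages plus two 4 Gb packages; the real board
# used clamshell pairs, but the capacity arithmetic is the same:
print(total_vram_gb([2, 2, 2, 2, 4, 4]))   # 2.0

# The rumored GTX 1080 Ti: 10 GB over 384-bit (twelve 32-bit channels),
# e.g. eight 8 Gb chips plus four 4 Gb chips:
print(total_vram_gb([8] * 8 + [4] * 4))    # 10.0
```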
Source: VideoCardz

45 Comments on NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?

#1
xorbe
The math on that does NOT work out. At 384-bit you get 12 GB, 6 GB, 3 GB, 1.5 GB, 768 MB, 384 MB ... 6+3 does not add up to 10 GB. So either it's driver-limited to 10 (from 12), it's a 320-bit bus, or the configuration is lopsided.
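For reference, xorbe's enumeration can be reproduced in a few lines; standard GDDR5 chip densities (powers of two) are assumed:

```python
# Capacities reachable on a 384-bit bus when every chip has the same density.
chips = 384 // 32                     # twelve 32-bit packages
for density_gbit in (1, 2, 4, 8):     # standard GDDR5 chip densities
    print(f"{density_gbit} Gb chips -> {chips * density_gbit / 8} GB")
# 1.5, 3.0, 6.0, 12.0 GB -- 10 GB never appears with uniform chips, hence
# the driver-limit, 320-bit, or mixed-density ("lopsided") possibilities.
```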
#2
btarunr
Editor & Senior Moderator
xorbe: The math on that does NOT work out. At 384-bit you get 12 GB, 6 GB, 3 GB, 1.5 GB, 768 MB, 384 MB ... 6+3 does not add up to 10 GB. So either it's driver-limited to 10 (from 12), it's a 320-bit bus, or the configuration is lopsided.
That's why you read the post before commenting.
#3
Darksword
Meh. This entire series has been overpriced compared to the last one. The 980 Ti was about $650 at release, and the 1080 Ti will likely cost close to $1,000?!

I'll pass on this round.
#4
swirl09
Darksword: Meh. This entire series has been overpriced compared to the last one. The 980 Ti was about $650 at release, and the 1080 Ti will likely cost close to $1,000?!
And the 980 Ti wasn't what I would call a cheap card, so the fact that its replacement will see you not having much change out of a grand is a clear sign nVidia is milking its unrivaled high-end GPUs.

When I was weighing up the Fury X vs the 980 Ti, the red team's card just fell short for me, with the lack of HDMI 2.0 (on a card aimed at 4K gaming, that makes sense...?) and the fear that 4 GB of VRAM - however special - was going to hit a wall quickly. Anyway, this time around, DX12 is a more relevant argument than it was a year and a half ago. I will wait to see what Vega looks like, and frankly I'm willing it to be my next card. I'm sick of nVidia (but damn, they make nice cards...).
#5
evernessince
Waiting for Vega and HBM2. We've already seen the power savings of HBM1; AMD's top-end cards should have excellent performance per watt, comparable to NVIDIA's.
swirl09: And the 980 Ti wasn't what I would call a cheap card, so the fact that its replacement will see you not having much change out of a grand is a clear sign nVidia is milking its unrivaled high-end GPUs.

When I was weighing up the Fury X vs the 980 Ti, the red team's card just fell short for me, with the lack of HDMI 2.0 (on a card aimed at 4K gaming, that makes sense...?) and the fear that 4 GB of VRAM - however special - was going to hit a wall quickly. Anyway, this time around, DX12 is a more relevant argument than it was a year and a half ago. I will wait to see what Vega looks like, and frankly I'm willing it to be my next card. I'm sick of nVidia (but damn, they make nice cards...).
Yeah, that 4 GB was just a bit too little. I fully expect VEGA with HBM II to be very competitive so long as AMD doesn't go crazy on the price. They also need to work out overclocking, as both the original FURY lineup and Polaris can't overclock well at all.
#6
FordGT90Concept
"I go fast!1!11!1!"
This sort of memory tomfoolery is what got them into trouble with the GTX 970. AMD has it right sticking to powers of two (32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024 MiB, 2048 MiB, 4096 MiB, 8192 MiB, 16 GiB): simple to implement, with meaningful jumps in available VRAM.
#7
ZoneDymo
evernessince: Yeah, that 4 GB was just a bit too little. I fully expect VEGA with HBM II to be very competitive so long as AMD doesn't go crazy on the price. They also need to work out overclocking, as both the original FURY lineup and Polaris can't overclock well at all.
Well, was it though? I mean, people are upgrading from the 900 series to the 1000 series just fine, so they did not suffer from the 4 GB on the 900 series at all.
Likewise, AMD people can upgrade to whatever AMD is offering next just the same.
And it's not like we have had any game so far where this 4 GB limit has had consequences.
#9
ZoneDymo
FordGT90Concept: Shadows of Mordor comes to mind.
Well, that's the thing: I know from benchmarks that if you have the VRAM to spare, SoM will use a lot of it.
But if you look at the benchmarks, the 4 GB Fury X does not perform any worse than the 6 GB 980 Ti.
#10
btarunr
Editor & Senior Moderator
FordGT90Concept: This sort of memory tomfoolery is what got them into trouble with the GTX 970. AMD has it right sticking to powers of two (32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024 MiB, 2048 MiB, 4096 MiB, 8192 MiB, 16 GiB): simple to implement, with meaningful jumps in available VRAM.
Also 3072 MB (HD 7970) and 6144 MB (some custom R9 280).
#11
zzzaac
evernessince: Waiting for Vega and HBM2. We've already seen the power savings of HBM1; AMD's top-end cards should have excellent performance per watt, comparable to NVIDIA's.
I'm actually pretty impressed by what Nvidia did with "old" GDDR5. Would be interesting to see them with HBM too.
#12
ViperXTR
Did the older cards with asymmetrical memory configs, such as the GTX 550 Ti and 660/660 Ti, show any performance deficiencies when using the remaining segment of their memory?
#13
Nokiron
evernessince: Waiting for Vega and HBM2. We've already seen the power savings of HBM1; AMD's top-end cards should have excellent performance per watt, comparable to NVIDIA's.
HBM is not a big power saver for the card as a complete package. Sure, it's a couple of watts lower, but that won't make much of a difference to perf/watt (10 W instead of 20 W?).

That's the GPU's job.
#14
FordGT90Concept
"I go fast!1!11!1!"
btarunr: Also 3072 MB (HD 7970) and 6144 MB (some custom R9 280).
Hmm, the 7970 has a 384-bit bus, which is an oddball (six memory controllers). The R9 280 is a rebrand of the HD 7970, so it also has a 384-bit bus. Strange how they waited until the R9 290(X) to roll out the 512-bit bus. A 512-bit bus on the 7970 would have given it more staying power.
#15
JalleR
And the Fury wasn't the greatest for performance per watt, so no big deal there either.
#16
bogami
Another knife in my back!
#17
evernessince
Nokiron: HBM is not a big power saver for the card as a complete package. Sure, it's a couple of watts lower, but that won't make much of a difference to perf/watt (10 W instead of 20 W?).

That's the GPU's job.
The Fury X achieves better performance per watt than the R9 390X: that's 4,096 stream processors vs. 2,816, and 294 W vs. 258 W. That's 36 more watts of power usage for over 1,200 more stream processors. The dual Fury X achieves an even higher performance per watt.

You are mistaken in believing the GPU is the only part that consumes a considerable amount of power. The link between the GPU and the memory is one of the most power-hungry components of a graphics card. As memory demands continue to increase, so does the need for efficiency. The more RAM, the more power drawn (as shown by the 8 GB RX 480 fiasco). The higher the bandwidth, the more power drawn (as shown by the R9 290X, which had a larger memory bus than it needed, wasting power).
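A quick back-of-the-envelope check of those figures (board power and shader counts as quoted in the comment; real perf/W comes from measured frame rates, so shaders-per-watt is only suggestive):

```python
# Shaders-per-watt from the figures quoted above.
cards = {
    "Fury X":  (4096, 294),   # (stream processors, watts, as quoted)
    "R9 390X": (2816, 258),
}
for name, (sps, watts) in cards.items():
    print(f"{name}: {sps / watts:.1f} SPs per watt")
# Fury X ~13.9 vs R9 390X ~10.9 -- more shaders fed per watt, though clocks
# and architecture also differ between Fiji and Hawaii.
```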
#18
Tatty_Two
Gone Fishing
evernessince: The Fury X achieves better performance per watt than the R9 390X: that's 4,096 stream processors vs. 2,816, and 294 W vs. 258 W. That's 36 more watts of power usage for over 1,200 more stream processors. The dual Fury X achieves an even higher performance per watt.

You are mistaken in believing the GPU is the only part that consumes a considerable amount of power. The link between the GPU and the memory is one of the most power-hungry components of a graphics card. As memory demands continue to increase, so does the need for efficiency. The more RAM, the more power drawn (as shown by the 8 GB RX 480 fiasco). The higher the bandwidth, the more power drawn (as shown by the R9 290X, which had a larger memory bus than it needed, wasting power).
Yes, however the TITAN X Pascal, without HBM, draws just 268 W. At the very least, even if the memory really did account for such a saving, it becomes moot if the GPU is still eating the juice. I have a feeling the savings you quote could be due as much to improved SP efficiency in the Fury X over the 390X as to the memory improvements.

Here is an interesting piece on memory, bandwidth, and power consumption with comparisons; it kind of supports both views... differently.

wccftech.com/nvidia-geforce-amd-radeon-graphic-cards-memory-analysis/
#19
Assimilator
Assuming they reuse the Titan XP board design that has 12 memory packages, they could easily accomplish this with 8x 1GB + 4x 512MB chips.
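That split does add up; a one-line check (chip counts as proposed in the comment, not a confirmed layout):

```python
# 8x 1 GB plus 4x 512 MB across twelve packages (one per 32-bit channel).
chips_mb = [1024] * 8 + [512] * 4
print(sum(chips_mb) / 1024, "GB over a", len(chips_mb) * 32, "bit bus")
# -> 10.0 GB over a 384 bit bus
```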
#20
trog100
FordGT90Concept: Shadows of Mordor comes to mind.
the game as it came played fine on 4 gigs.. the textures that needed 6 gigs were offered as a downloadable extra.. most people would not have bothered..

but i think the amount of vram a game needs is governed by the amount that's available on the average mid-to-high-level cards.. once 8 gigs becomes the norm, 4 gigs for sure won't be enough..

trog
#21
Parn
ViperXTR: Did the older cards with asymmetrical memory configs, such as the GTX 550 Ti and 660/660 Ti, show any performance deficiencies when using the remaining segment of their memory?
I believe they did. There were some discussions about the GTX 660 and its odd 1.5 GB + 0.5 GB asymmetrical memory config.
#22
Nokiron
evernessince: The Fury X achieves better performance per watt than the R9 390X: that's 4,096 stream processors vs. 2,816, and 294 W vs. 258 W. That's 36 more watts of power usage for over 1,200 more stream processors. The dual Fury X achieves an even higher performance per watt.

You are mistaken in believing the GPU is the only part that consumes a considerable amount of power. The link between the GPU and the memory is one of the most power-hungry components of a graphics card. As memory demands continue to increase, so does the need for efficiency. The more RAM, the more power drawn (as shown by the 8 GB RX 480 fiasco). The higher the bandwidth, the more power drawn (as shown by the R9 290X, which had a larger memory bus than it needed, wasting power).
Yeah, we are certainly seeing 1070s pulling tons of power with the fastest GDDR5 available. You can't compare different architectures like that; that's not the only difference between Fiji and Hawaii.
The 390X has super-high voltages because driving that 512-bit bus at 6 GHz was nothing more than desperation to increase performance (Hawaii's memory controller was not designed for that speed in the first place).

Usually ~10% of the power budget goes to memory, and maybe half of that for HBM. That is still an improvement, but it's nowhere near the gain that you believe.
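Taking that rough split at face value (the percentages are the commenter's estimates, and the board power here is a hypothetical figure):

```python
# Rough scale of the claimed saving: ~10% of board power for GDDR5,
# roughly half that for HBM.
board_power_w = 275                        # hypothetical high-end board
gddr5_w = 0.10 * board_power_w             # ~27.5 W for GDDR5
hbm_w = 0.5 * gddr5_w                      # ~13.8 W for HBM
print(f"saving ~{gddr5_w - hbm_w:.1f} W of {board_power_w} W board power")
# ~13.8 W -- real, but small next to what the GPU itself draws.
```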
#23
TheinsanegamerN
ViperXTR: Did the older cards with asymmetrical memory configs, such as the GTX 550 Ti and 660/660 Ti, show any performance deficiencies when using the remaining segment of their memory?
Yes. Performance on those cards was always a bit iffy. Games that needed all 1 GB of framebuffer suffered on a 550 Ti vs. a 560. The 660 and 660 Ti experienced some occasional oddities. But these quirks were well known to buyers back then.

The 970 fiasco, OTOH, was due to Nvidia lying about how its 4 GB/256-bit bus was implemented. It didn't have enough ROPs, and as a result one of its memory controllers didn't perform like the others. Nvidia screwed up hard there.
#24
qubit
Overclocked quantum bit
FordGT90Concept: This sort of memory tomfoolery is what got them into trouble with the GTX 970. AMD has it right sticking to powers of two (32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024 MiB, 2048 MiB, 4096 MiB, 8192 MiB, 16 GiB): simple to implement, with meaningful jumps in available VRAM.
Yup, designs are so much more efficient when sticking to powers of 2. Unfortunately, the chips can get so large that physical constraints force compromises on the higher-end models, so we end up with lopsided memory buses and memory amounts.
#25
jabbadap
Assimilator: Assuming they reuse the Titan XP board design that has 12 memory packages, they could easily accomplish this with 8x 1GB + 4x 512MB chips.
Agreed. But which memory chips: 8 Gbps GDDR5 or 10 Gbps GDDR5X? I don't remember seeing GDDR5X in 4 Gb density, so it might be plain GDDR5, which gives a bandwidth of 8 Gbps × 384 bit / (8 b/B) = 384 GB/s (compared to the GTX 1080's 320 GB/s and the TITAN X Pascal's 480 GB/s). A more conventional configuration would be 10× 8 Gb 10 Gbps GDDR5X on a 320-bit bus, for 400 GB/s.
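The bandwidth figures in that comment follow from the standard formula, peak bandwidth = per-pin data rate × bus width ÷ 8; a small sketch:

```python
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s from per-pin data rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(8, 384))    # 384.0 -- 8 Gbps GDDR5 on a 384-bit bus
print(bandwidth_gbs(10, 256))   # 320.0 -- GTX 1080 (10 Gbps GDDR5X, 256-bit)
print(bandwidth_gbs(10, 384))   # 480.0 -- TITAN X Pascal
print(bandwidth_gbs(10, 320))   # 400.0 -- 10x 8 Gb GDDR5X on a 320-bit bus
```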