NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

24GB of VRAM sounds like a Titan variant...


please... PLEASE GPU makers, don't stuff us with more than 12GB of VRAM...
 
I agree that it is probably a Titan. I think 16GB should keep us satisfied for the 'life' of a current GPU, as I can see the new consoles creating a new generation of laziness with regard to memory and resource management.
 
I'm worried seeing this. It would definitely be another level of pricing, but not necessarily another level of gaming.
 
These leaks are just so useless... They just assume the same die naming scheme as Turing and a spec upgrade proportional to what Turing had over Pascal. You could have made this "leak" the day Turing launched...
 
24GB sounds like a dual-GPU card, a 3090, much like the 490/590. And the buffer is not additive: 12GB+12GB in SLI.
 
I highly doubt we'll see a dual-GPU card these days...

the last dual-GPU card I recall is AMD's R9 295X2, which was 500W, lolol.
 
Unless they start producing graphics cards with chiplets, for example 2, 4, or 6 small chiplets.
 
Well, many rumors point to some nice fat TDPs for this generation, too. Maybe not 500W, though :)
 
Unless they start producing graphics cards with chiplets, for example 2, 4, or 6 small chiplets.
Correct. But that isn't happening now, and it will be a different implementation than the SLI/CF that we know. We aren't getting any dual GPUs on a single PCB with these large chips.

EDIT: See your post below, lol.
 
Well, many rumors point to some nice fat TDPs for this generation, too. Maybe not 500W, though :)


The usual wccftech click-bait now states the top die should be 627 mm^2 on Samsung 8nm, which would equal about 895 mm^2 on TSMC's old 16/12nm process. That would make it around 15-20% larger than the old RTX 2080 Ti (TU102, 754 mm^2); quick check of the math below.
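For what it's worth, the rumor's math roughly holds up; TU102's 754 mm^2 is the only figure below not taken from the rumor itself:

```python
# Sanity check of the rumored die-size scaling quoted above.
samsung_8nm_die = 627    # rumored top Ampere die on Samsung 8nm, mm^2
tsmc_16nm_equiv = 895    # rumored equivalent area on TSMC 16/12nm, mm^2
tu102 = 754              # RTX 2080 Ti (TU102) die size, mm^2

print(f"Implied density scaling: {tsmc_16nm_equiv / samsung_8nm_die:.2f}x")
print(f"Equivalent area vs TU102: {tsmc_16nm_equiv / tu102 - 1:.0%} larger")  # ~19%
```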
 
Or does Nvidia just not care about PC gamers migrating to consoles?

Migrating where? To a 30 FPS, lowest-presets box? There are barely any games worth purchasing dedicated $400-500 hardware for. Consoles will be viable between 2021 and 2022, when we actually start receiving next-gen games instead of that cartoonish/cell-shaded crap for children. As far as I'm aware, more enthusiasts are pumped for GPU hardware, not console trash.
 
Correct. But that isn't happening now, and it will be a different implementation than the SLI/CF that we know. We aren't getting any dual GPUs on a single PCB with these large chips.

EDIT: See your post below, lol.
Maybe Nvidia is moving to wafer-scale GPUs, ditching the PCB and just carving the whole card out of silicon? :D
Migrating where? To a 30 FPS, lowest-presets box? There are barely any games worth purchasing dedicated $400-500 hardware for. Consoles will be viable between 2021 and 2022, when we actually start receiving next-gen games instead of that cartoonish/cell-shaded crap for children. As far as I'm aware, more enthusiasts are pumped for GPU hardware, not console trash.
I think you'd do well to take a look at Digital Foundry's analysis of the PS5 game showcase video. While there is some 30fps content, there are definitely no low presets, cel-shaded crap, or current-gen looks there. And while of course none of those games are final, most of them run on actual hardware, including examples of RTRT and detail levels not even remotely possible on current-gen consoles. There's little reason to suspect that high-budget launch titles won't be very impressive graphically on both consoles.
 
Newest rumor from Chiphell claims the upcoming 3080 Ti (or the non-Titan flagship equivalent) will boast 24GB of VRAM.


$1499 for the lowest-end 3080 Ti, dare I say?

I actually expect the upcoming Nvidia cards to come in at a reasonable price. My premise is that Nvidia will not only have to compete with AMD and Intel in the graphics card space, but with the new consoles as well. Rumor is that due to the pandemic, Xbox Series X and PS5 will be sold at a loss in order to move as many units as possible, with prices suggested as low as $199 and $299 respectively, though obviously take that with a very large grain of salt.
 
All the rumors that I heard point to $400-500 for the consoles, with only the 1080p Microsoft version coming in lower than that. $300 for a console sounds like wishful thinking, but if it were true, Nvidia could not compete with that. They won't give you a 12 TFLOP card for $300.
 
I actually expect the upcoming Nvidia cards to come in at a reasonable price. My premise is that Nvidia will not only have to compete with AMD and Intel in the graphics card space, but with the new consoles as well. Rumor is that due to the pandemic, Xbox Series X and PS5 will be sold at a loss in order to move as many units as possible, with prices suggested as low as $199 and $299 respectively, though obviously take that with a very large grain of salt.
There is absolutely no chance whatsoever that those consoles will be that cheap. Zero. Selling at a loss is one thing; selling at that kind of loss is quite something else. The SoCs inside each likely cost close to that much in production for AMD, let alone what console makers are paying for them.

So, if the PS5 (which should be the cheaper of the two to make) costs $400 to make (it's likely more than that) but is sold at $299 including distributor and retailer margins, Sony would be losing more than 150 million dollars per million consoles sold, assuming two levels of sales with 10% margins. Losses on that scale won't be made back by game licensing or subscription services. And remember, the PS4 has sold something like 50 million units. If they sold ten million PS5s the first year, they would lose more than a billion and a half dollars. In a single year. Sony would buckle under those losses (rough version of that math at the end of this post).

So no, those prices aren't happening.

On the other hand, if the rumored 20-CU "Xbox Series S/Lockhart" shows up, that might indeed hit $299.
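For what it's worth, here is that back-of-the-envelope math in one place. Every number is the post's assumption (build cost, retail price, margin structure), not a confirmed figure:

```python
# Back-of-the-envelope PS5 loss estimate using the assumptions above.
production_cost = 400    # assumed build cost, USD (likely higher)
retail_price = 299       # hypothetical retail price, USD
margin = 0.10            # assumed margin at each of two sales levels

sony_net = retail_price * (1 - margin) ** 2   # after distributor + retailer cuts
loss_per_unit = production_cost - sony_net    # ~ $158

print(f"Sony nets ${sony_net:.0f} per console sold")
print(f"Loss per million consoles: ${loss_per_unit:.0f}M")          # > $150M
print(f"Loss on 10M consoles: ${loss_per_unit * 10e6 / 1e9:.2f}B")  # ~ $1.6B
```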
 
Yep, $199 and $299 are very unrealistic. The fact that neither has shared prices, and that Sony specifically argued for "value", tells me they will try to soften the blow and promote other aspects of the new consoles. If the prices were that low, they would heavily promote that. $449 is the absolute cheapest I can see them going, and that's for the PS5 Digital Edition. The regular PS5 and XSX will likely be $499.

Now, the less powerful XSS could be $299 for 1080p gaming.
 
I'm worried seeing this. It would definitely be another level of pricing, but not necessarily another level of gaming.

Not understanding the angst here.
It's basically the same arch going to a smaller, more efficient, and probably faster process node.
So you get 2080 arch -> 3070, + faster memory, + more power efficiency, + a few percent better IPC from the new node.

I bet the 3070 is at least 110% the performance of a 2080.
 
Oh shit, here we go again. Want lower prices? Stop buying things. And convince everyone else to do the same.

When I see that word written is when I know the person isn't worth listening to.

On the note of the 3070 and Ti, I'm hopeful that if this spec is roughly where we land, I can get a meaningful upgrade from my GTX 1080 with one. A 2070S with the same 2560 CUDA count handily outperforms my 1080 already; bump that up to ~3000 with the next-gen features and I can't see any way it doesn't at least outperform a 2080S. What I really want in addition is more VRAM; I won't spend big money on another 8GB card.
 
When I see that word written is when I know the person isn't worth listening to.

They are still worth listening to, provided they walk the walk and work for free as well.
 
Leaks say clocks will be over 2GHz; combined with IPC gains of at least 10% (it's a new architecture, not a tweak of Turing), that makes it plausible a 3070 Ti is as fast as a 2080 Ti, and it will smash it in RT. The 3070 should be faster than a 2080 for sure, and of course all Ampere cards will crush Turing in RT. Again, leaks indicate no more than a 10% hit to frame rate with RT; the 3060 will be viable for RT for sure. Napkin math below.
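Taking those figures at face value, the claim roughly checks out. The ~3072-core count and 2GHz clock are rumors and the 10% IPC gain is the leak's claim; only the 2080 Ti numbers are its reference specs:

```python
# Napkin math: rumored 3070 Ti shader throughput vs a stock RTX 2080 Ti.
cores_3070ti, clock_3070ti, ipc_gain = 3072, 2.0, 1.10  # rumored/assumed
cores_2080ti, clock_2080ti = 4352, 1.545                # reference specs, boost GHz

relative = (cores_3070ti * clock_3070ti * ipc_gain) / (cores_2080ti * clock_2080ti)
print(f"3070 Ti vs 2080 Ti: {relative:.2f}x")  # ~1.0x, i.e. roughly on par
```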
 
Not understanding the angst here.
It's basically the same arch going to a smaller, more efficient, and probably faster process node.
So you get 2080 arch -> 3070, + faster memory, + more power efficiency, + a few percent better IPC from the new node.

I bet the 3070 is at least 110% the performance of a 2080.
What does IPC gain have to do with a node?
 
From my limited understanding, changing to a smaller process node can (it doesn't have to; it can get worse) result in transistors switching faster. That can, and has, resulted in a microcode instruction that used to take, say, 2 cycles to complete taking only 1 cycle.


"Smaller processes also have a lower capacitance, allowing transistors to turn on and off more quickly while using less energy. And if you’re trying to make a better chip, that’s perfect. The faster a transistor can toggle on and off, the faster it can do work. "
 
Say, if stuff is to be released for CP2077, shouldn't RDNA2 cards be in full-swing AIB production already? Yet there have been no leaks so far.

Is it me, or is the gap between the 3070 Ti and 3080 rather large?

That can, and has, resulted in a microcode instruction that used to take, say, 2 cycles to complete taking only 1 cycle.
This makes no sense. You get faster cycles (higher clocks); you don't miraculously get circuits that can do something in 1 cycle if it took 2.
 
From my limited understanding, changing to a smaller process node can (it doesn't have to; it can get worse) result in transistors switching faster. That can, and has, resulted in a microcode instruction that used to take, say, 2 cycles to complete taking only 1 cycle.

"Smaller processes also have a lower capacitance, allowing transistors to turn on and off more quickly while using less energy. And if you're trying to make a better chip, that's perfect. The faster a transistor can toggle on and off, the faster it can do work."
Let me just go ahead and clear this up for you. Process node has NOTHING to do with IPC.
 