
NVIDIA France Accidentally Reveals GeForce RTX 4070 Ti Specs

User error. Don't try to stick an RTX 4090 into a small, cramped case, and make sure the power connector is plugged in all the way.

Any guesses on where the RTX 4070 Ti will place on this list after Wizard's review of that card?

https://pcpartpicker.com/search/?q=RTX+3080+Ti <--- current US prices for the RTX 3080 Ti 12GB via PCPartPicker

[Chart: average FPS at 2560×1440]

I will take a guess: the 4070 Ti will be
~3090 Ti at 1440p
3090 Ti -5% at 4K
and top the efficiency chart
 
The 4080 has 20% more teraflops, 42% more bandwidth and 40% more ROPs; about 33% faster overall, resulting in 150 FPS for it and 125 for the non-Ti.
 
With only 12 GB of VRAM it will age quite fast, turning into a one-hit wonder.
 
My objection is prejudging a product before we know the product's performance and price.
This card may very well end up in a similar performance-per-dollar range as AMD's latest cards, so will you criticize them as harshly as you criticize this product then?
Of course - I couldn't care less about AMD either, and I posted that they're disappointing when their respective reviews dropped. How did you even decide that I like AMD, haha, I'm literally one of the few people in the entire forum with an all-Intel rig! :D
And again, I'll repeat myself - the pricing is insane regardless, in the entire market except the used cards. To me it feels obvious that both teams just want to keep the margins they had for the last two years and there's no way in hell I'm going to trust their stories when they just don't line up with the rest of HW industry.
 
"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.
 
16GB is the amount portal RTX uses at 4K, not just allocates: https://www.techpowerup.com/review/portal-with-rtx/3.html

Performance doesn't immediately drop when you run out of VRAM. It depends on the game, but usually you can go 30% above available VRAM and the GPU will do a decent job of swapping between VRAM and main system memory. The problem is that the instant something that needs to be fetched often ends up in system memory because the VRAM is full, performance tanks.

It's not just an annoyance; it renders the game unplayable. The 3070 gets 1 FPS at 4K, but even in less extreme scenarios where you "just" get poor frame timing or stuttering, it's easy to see why people want more VRAM. There's really no excuse other than forced obsolescence, either, because it would not have been expensive for Nvidia to add more.
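A rough way to see why spilling hurts so disproportionately: a toy back-of-the-envelope model (all bandwidth numbers are assumptions for illustration, not measurements) of streaming a frame's "hot" working set from VRAM versus over PCIe.

```python
# Toy model (not a real benchmark): frame time balloons once the
# per-frame hot working set spills over PCIe into system RAM.
VRAM_BW_GBPS = 448.0   # assumed VRAM bandwidth, e.g. an RTX 3070-class card
PCIE_BW_GBPS = 16.0    # assumed PCIe 4.0 x16 bandwidth, one direction

def frame_time_ms(hot_set_gb: float, vram_gb: float) -> float:
    """Time to stream the hot working set once per frame."""
    in_vram = min(hot_set_gb, vram_gb)
    spilled = max(hot_set_gb - vram_gb, 0.0)
    return (in_vram / VRAM_BW_GBPS + spilled / PCIE_BW_GBPS) * 1000

# Fits in 8 GB of VRAM: fast.  Spills just 2 GB: the PCIe term dominates.
print(round(frame_time_ms(6, 8), 2))   # 13.39
print(round(frame_time_ms(10, 8), 2))  # 142.86
```

The point of the sketch is only the asymmetry: spilling 20% of the working set costs an order of magnitude in frame time, which matches the "performance tanks" behavior described above.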
Thanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090 with no memory bottleneck assumed, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. And as you can see, the cards are not powerful enough to game like this, so the VRAM limit is proven irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often computational performance in particular.
 
Thanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090 with no memory bottleneck assumed, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. And as you can see, the cards are not powerful enough to game like this, so the VRAM limit is proven irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often computational performance in particular.

i have a solution, play with all the shiny pretty rays traced at 540p.

Problem solved, no need to thank me.
 
VRAM allocated isn't the same as VRAM needed. Many buffers and textures are heavily compressed on the fly. The true judge of VRAM requirement is benchmarking the performance: if the card runs out of VRAM, performance will drop sharply. If, on the other hand, performance keeps scaling, then there is no issue.
This!

Guys, please stop looking at VRAM usage in monitoring apps and considering it as a must. It's not!

Kind of like when you put more system memory in your PC, your idle RAM usage rises. Currently, my main PC sits at 5.3 GB used with only Chrome open. Does Windows 10 work with 4 GB RAM? Absolutely.

Look at your performance. When your GPU usage drops, and the game starts to stutter massively, that's when you're hitting a VRAM (or CPU) limit. VRAM usage being at 100% doesn't mean anything.
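For what it's worth, that symptom is easy to check from a frame-time log. Here's a minimal sketch (the spike thresholds are made-up illustration values, not established heuristics) that flags runs where a meaningful share of frames spike far above the average, the stutter pattern described above:

```python
# Hedged sketch: flag a likely VRAM (or CPU) limit from a frame-time log.
# A healthy GPU-bound run has fairly even frame times; running out of
# VRAM shows up as large spikes rather than a uniformly slower average.
def looks_memory_limited(frame_times_ms, spike_factor=3.0, spike_share=0.05):
    avg = sum(frame_times_ms) / len(frame_times_ms)
    spikes = sum(1 for t in frame_times_ms if t > spike_factor * avg)
    return spikes / len(frame_times_ms) > spike_share

smooth = [16.7] * 95 + [18.0] * 5      # steady ~60 FPS
stuttery = [16.7] * 90 + [120.0] * 10  # 10% of frames hitch hard
print(looks_memory_limited(smooth))    # False
print(looks_memory_limited(stuttery))  # True
```

A real capture tool logs per-frame times; the point is just that "100% VRAM used" tells you nothing, while frame-time spikes do.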

i have a solution, play with all the shiny pretty rays traced at 540p.

Problem solved, no need to thank me.
I have a solution, too. Play the original Portal that actually makes sense as a game.
 
I have a solution, too. Play the original Portal that actually makes sense as a game.

it does look amazing, you can't ignore it. But it's definitely not worth it, and the game is amazing even at low settings on a potato.
 
it does look amazing, you can't ignore it. But it's definitely not worth it, and the game is amazing even at low settings on a potato.
Exactly. It's a game about puzzles with some witty humor mixed in. It was never about shiny rays, and adding them doesn't make the game better; it just looks different.
 
"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.

This is a very dangerous trend which, if not fixed, will lead to dire consequences for all makers involved!
It will inevitably shrink GPU shipments to levels at which economies of scale no longer work, and an industry worth billions will die off.

This!

Guys, please stop looking at VRAM usage in monitoring apps and considering it as a must. It's not!

Kind of like when you put more system memory in your PC, your idle RAM usage rises. Currently, my main PC sits at 5.3 GB used with only Chrome open. Does Windows 10 work with 4 GB RAM? Absolutely.

No, I do not recommend Windows 10 with 4 GB - it runs very slowly.
4 GB is good for Windows 7 or Windows XP, though.
 
No, I do not recommend Windows 10 with 4 GB - it runs very slowly.
4 GB is good for Windows 7 or Windows XP, though.
I'm not saying that I recommend it - I'm saying that it works. ;) I have a laptop with a dual-core Celeron (that's basically an Atom), and 4 GB RAM. It's ok for light browsing.

I could have compared having 8 and 32 GB of system RAM - the allocation you see in Task Manager will differ greatly.

My point is: Just because you see all of your VRAM used up in a game, it doesn't mean that you couldn't run it with less.
 
I'm not saying that I recommend it - I'm saying that it works. ;) I have a laptop with a dual-core Celeron (that's basically an Atom), and 4 GB RAM. It's ok for light browsing.

I could have compared having 8 and 32 GB of system RAM - the allocation you see in Task Manager will differ greatly.

My point is: Just because you see all of your VRAM used up in a game, it doesn't mean that you couldn't run it with less.

I think the minimum for running Windows 10 is 6 GB and an SSD.
 
I think the minimum for running Windows 10 is 6 GB and an SSD.
I'd still say 4 GB is OK. Heck, I even ran it on a Compute Stick with only 2 GB. It wasn't pleasant, but it worked.

End of off on my part. :)
 
Thanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090 with no memory bottleneck assumed, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. And as you can see, the cards are not powerful enough to game like this, so the VRAM limit is proven irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often computational performance in particular.

are we looking at the same chart?
[Chart: Portal with RTX average FPS by card]


Because I see the 12 GB 3060 beating the 8 GB 3070; ok, by 1 FPS, but still.
I also see the 3080 10 GB doing about half the FPS of the 3090 24 GB.

Sooo yeah, idk man; again, you believe whatever you want to believe, but I think the VRAM amount on these new cards is too low, and again, that is probably on purpose so you buy the new stuff sooner.
 
I think that there is not much difference between the two approaches:
1. RTX 3080 10 GB with disabled shaders vs RTX 3090 24 GB with all shaders
2. RTX 4080 16 GB crippled second-tier chip vs RTX 4090 24 GB almost-full first-tier chip

Both approaches lead to almost the same end result: the 80-class cards can be beaten badly, and it is clear market segmentation.
 
"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.

Nah, it will barely top $15,000, unless we get the "real price increases". Remember, right now people are still defending the Ada cards, saying this is nothing out of the ordinary:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200 <- WE ARE HERE
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, GTX 1080 - $28965
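The list just compounds the 3080-to-4080 jump (roughly +70% per generation) forward. A quick sketch reproduces it to within a dollar of rounding:

```python
# Compound a flat 70% price hike per generation, as the list above does
# (the $700 -> $1200 jump from 3080 to 4080 rounds to that ratio).
def extrapolate(price, gens, hike=0.70):
    out = []
    for _ in range(gens):
        price = round(price * (1 + hike))
        out.append(price)
    return out

print(extrapolate(1200, 5))  # [2040, 3468, 5896, 10023, 17039]
```

The last two entries differ from the list above by a dollar because this sketch rounds at every step instead of truncating at the end; the joke survives either way.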
 
Nah, it will barely top $15,000, unless we get the "real price increases". Remember, right now people are still defending the Ada cards, saying this is nothing out of the ordinary:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200 <- WE ARE HERE
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, GTX 1080 - $28965

There are not that many remaining manufacturing nodes, though :roll:
2024 is TSMC N3
2026 is TSMC N2
2028 is TSMC N1
...

and then what?
 
Thanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090 with no memory bottleneck assumed, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. And as you can see, the cards are not powerful enough to game like this, so the VRAM limit is proven irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often computational performance in particular.

1 FPS is less than 10% of 11. The 3070 is not 10% the performance of a 3090, that should go without saying.

Most benchmarks show the performance of the 3070 and 3090 in non-memory-bottlenecked scenarios (for the reasons stated in my prior post), therefore if the performance differential between the two cards changes drastically, it's safe to assume the bottleneck lies elsewhere.

As seen in the TechPowerUp Portal RTX benchmark provided, there is a pretty clear advantage for cards with more VRAM, with the vastly less powerful 3060 12GB beating the 3080 10GB. You can see this trend extend throughout Nvidia's entire lineup in this benchmark.
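That inference can be sketched as a rule of thumb: compare the observed FPS ratio between two cards to their typical compute-bound ratio, and flag a big deviation. The 0.65 "typical" figure and the 50% tolerance below are assumptions for illustration, not measured values:

```python
# Hedged sketch of the reasoning above: if two cards' FPS ratio in a
# given game deviates far from their usual (compute-bound) ratio,
# suspect a different bottleneck, e.g. VRAM.
def vram_suspect(fps_a, fps_b, typical_ratio, tolerance=0.5):
    """typical_ratio: card A's usual performance relative to card B."""
    observed = fps_a / fps_b
    return abs(observed - typical_ratio) / typical_ratio > tolerance

# Assume a 3070 normally lands around 65% of a 3090. At 4K in Portal
# RTX it gets 1 FPS vs the 3090's 11, i.e. ~9%: a huge deviation.
print(vram_suspect(1, 11, 0.65))    # True
print(vram_suspect(90, 140, 0.65))  # False  (~64%, as expected)
```

It's crude, but it captures why a 3070 at 10% of a 3090, rather than its usual fraction, points at something other than shader throughput.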

are we looking at the same chart?
[Chart: Portal with RTX average FPS by card]

Because I see the 12 GB 3060 beating the 8 GB 3070; ok, by 1 FPS, but still.
I also see the 3080 10 GB doing about half the FPS of the 3090 24 GB.

Sooo yeah, idk man; again, you believe whatever you want to believe, but I think the VRAM amount on these new cards is too low, and again, that is probably on purpose so you buy the new stuff sooner.

Oh, he is almost certainly arguing in bad faith at this point. The fact that the 3060 12GB is beating the 10GB 3080 is an extremely clear indication of a memory bottleneck. The game uses 16GB of VRAM; anything over the VRAM allotment is stored in main system memory. This means the 3060 12GB is storing 4GB in main system memory while keeping higher-priority data in VRAM. The 3070 is pushing 8GB into main system memory, but unfortunately for it, some critical data cannot fit, as the VRAM is already filled with equal-priority data, resulting in much lower performance compared to scenarios where it is not VRAM-bound.

Mind you, it shouldn't take such an obvious example of VRAM bottlenecking to be a wake-up call. You don't see this very often precisely because the performance penalty of not having enough VRAM is so heavy (not always just the average, but frame timing as well). Devs cannot ship games where newer video cards stutter, run at low FPS, or show inconsistent frame timing. I really don't get the logic behind defending the practice, aside from blindly defending everything Nvidia does.
 
Wow we are so not surprised.

Didn't Jensen shit on the board partners because of things like this? Is he going to shit on his employees too?
 
Picometre? :)
2030 is TSMC P800 :laugh:

There is no scientific proof that these will exist.
First, I was very generous to assume a two-year N3 -> N2 -> N1 cadence. What if the story goes the way of Intel's now infamous 14nm with its numerous pluses: 14nm+, 14nm++, 14nm+++, 14nm+++(+), etc.?

No one can guarantee that anything after N3 will work.
 