If you're serious, then I'll explain a bit. For a moment it seemed like you were being antagonistic.
I apologize if it feels that way. I do get frustrated at what feels like a consistent lack of supporting information unless you're prodded for it. Maybe that shows even when I ask nicely (not antagonizing, just being honest with you).
So... the point of #314 is this:
Enabling DLSS in Cyberpunk 2077 will instantly lower VRAM allocation by 1GB, yet they claim 3070 is slower than 2080 Ti with RTX/DLSS on is because of VRAM limitation ? what kinda editorial logic is this ?
... to which the charts he posted support the assertion, correct?
The first chart shows 1440p ultra with the 3070 and 2080 Ti both hitting 56 FPS. The second chart shows 1440p ultra + DLSS, where the 2080 Ti is ~9% (~8 FPS) faster at the same resolution.
Now, look at the chart in #316... when you go up to 4K ultra (which uses more vRAM than 1440p, right?), the difference between a 2080 Ti and a 3070 is 3% (~1 FPS), as it should be. TPU and Gulu3D (if anyone gets that joke, LMK...
) support the claim that they are the same speed at 4K... while using MORE vRAM than 1440p + DLSS.
How is that possible? What are we BOTH missing here?
In the next post I pointed out the flaws in his statements and posted the correct page in the subject review to be referring to in the context of the discussion at hand.
Did you, though? In #317 it feels like you missed his point, which #318 attempts to bring back on track... and you promptly blew him off in #319. The fact that they are the same speed at 4K at three sites isn't the point. The point is that at 4K, which uses MORE vRAM than 1440p + DLSS, they are still tied, yet when using DLSS alone and LESS vRAM, there is a significant gap.
The charts in #318 show that when RT and DLSS are used (where more vRAM is allocated vs. not using RT and DLSS), the 3070 closes that gap to 1%.
If vRAM was a problem, how does the gap shrink when it's using more at the higher res?
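To put rough numbers on that gap question, here's a quick sketch of the arithmetic. Note the FPS values below are hypothetical readings I've chosen to be consistent with the ~9% (~8 FPS) and 3% (~1 FPS) figures quoted from the charts, not the charts' actual data:

```python
# Sketch of the gap arithmetic discussed above.
# FPS values are hypothetical, chosen only to be consistent with the
# ~9% (~8 FPS) and 3% (~1 FPS) figures quoted from the charts.

def gap_pct(faster_fps, slower_fps):
    """Percent advantage of the faster card over the slower one."""
    return (faster_fps - slower_fps) / slower_fps * 100

# 1440p ultra + DLSS (LESS vRAM in use): 2080 Ti ahead of the 3070
print(round(gap_pct(97, 89)))  # -> 9 (a ~9% gap)

# 4K ultra (MORE vRAM in use than 1440p + DLSS): near tie
print(round(gap_pct(34, 33)))  # -> 3 (a ~3% gap)
```

The point the arithmetic makes plain: the big gap shows up in the scenario using *less* vRAM, which is backwards from what a vRAM-limitation explanation would predict.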
In Nguyen's next statement, post318, they made even more inaccurate statements and not only misquoted the cited data but seemed to be deliberately twisting facts out of context to fit their flawed, factless narrative.
What was misquoted? What are those twisted facts, exactly? This is where the details matter, Lex.
I guess the question, at least to me, is
how is it possible that 4K UHD shows the two cards neck and neck, but at a lower resolution with DLSS the gap is nearly 10% (even with more tensor cores)? After cutting through all this (thanks, random insomnia, lol), I think I know the answer... but it isn't anyone twisting a narrative intentionally (that I can see). I don't think HUB/Techspot have an agenda, though I can see why he feels this way given the conclusion he quoted. That said... I think I found a curiosity/hole in Nguyen's point... but it isn't what I think you are trying to describe, Lex.
As a general rule, if I feel like people are discussing a subject in earnest, I'll go out of my way to help them see the real deal or get more facts that can help them understand more about the subject being discussed. But when it seems like people are just being deceptive or worse, willfully ignorant, I lose the will to be helpful or continue the discussion. That's when you see me say, "Google it yourself" or "whatever". I've got no time for people who are going to waste it.
I felt he was discussing this in earnest. He posted his opinion and supported it with charts and multiple references. You came in shooting him down and seemingly missed the point. He clarified the point with more charts, and you blew him off thinking the above. I don't believe he was being deceptive, nor willfully ignorant. That said, it doesn't mean he (or we both) didn't miss something. Now, I think I see what the issue is...
What I think he missed, and what may shed some light on things, is the fact that RT on Ampere is a lot faster than on Turing because of the beefier RT hardware. So the faster RT overcomes the 8GB vRAM 'issue' and closes the gap regardless(?). Now, that seems a bit counterintuitive, since with RT enabled you use
more vRAM... but it's the only thing I can think of. In the end, it doesn't seem like there's a vRAM issue when using RT/DLSS.
If so, it should have manifested itself in their results, correct? We don't see that, yet we see what the conclusion says. It doesn't seem to match.
Let's be clear: all of his charts support what he is saying... it's just that the reasoning behind his conclusion may be flawed. So, you may be right in some respect, Lex, but more by accident/for different reasons than by actually/accurately pointing out what he was missing.
So in the end, about the vRAM situation (3070/3080... it doesn't matter)...... is it the RT that is overcoming the so-called problem?