Sunday, December 30th 2018

NVIDIA GeForce RTX 2060 Founders Edition Pictured, Tested

Here are some of the first pictures of NVIDIA's upcoming GeForce RTX 2060 Founders Edition graphics card. You may recall from our earlier report that there could be as many as six variants of the RTX 2060, differing in memory size and type. The Founders Edition is based on the top-spec one, with 6 GB of GDDR6 memory. The card looks similar in design to the RTX 2070 Founders Edition, probably because NVIDIA is reusing that card's reference-design PCB and cooling solution, minus two of the eight memory chips. The card continues to pull power from a single 8-pin PCIe power connector.
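For a sense of what dropping those two chips means: each GDDR6 package rides a 32-bit channel, so the chip count sets the bus width, and bandwidth follows directly. A back-of-the-envelope sketch in Python, assuming the RTX 2070's 14 Gbps per-pin data rate carries over (NVIDIA has not confirmed memory clocks):

```python
# Rough memory-bandwidth math for the rumored 6 GB configuration.
# Each GDDR6 chip sits on a 32-bit channel, so chip count sets bus width.
chips = 6                    # RTX 2070 reference PCB carries 8; two are dropped
bus_width = chips * 32       # 192-bit bus
data_rate_gbps = 14          # assumed per-pin rate, same as the RTX 2070

bandwidth_gbs = bus_width * data_rate_gbps / 8   # bits -> bytes
print(f"{bus_width}-bit @ {data_rate_gbps} Gbps -> {bandwidth_gbs:.0f} GB/s")
# -> 192-bit @ 14 Gbps -> 336 GB/s
```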

According to VideoCardz, NVIDIA could launch the RTX 2060 on the 15th of January, 2019. It could get an earlier unveiling by CEO Jen-Hsun Huang at NVIDIA's CES 2019 event, slated for January 7th. The top-spec RTX 2060 trim is based on the TU106-300 ASIC, configured with 1,920 CUDA cores, 120 TMUs, 48 ROPs, 240 tensor cores, and 30 RT cores. With an estimated FP32 compute performance of 6.5 TFLOP/s, the card is expected to perform on par with the GTX 1070 Ti from the previous generation in workloads that lack DXR. VideoCardz also posted performance numbers, obtained from NVIDIA's Reviewer's Guide, that point to the same conclusion.
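The 6.5 TFLOP/s estimate is straightforward to sanity-check: each CUDA core retires one fused multiply-add (two FP32 operations) per clock, so peak throughput is 2 × cores × clock. A quick sketch, with the boost clock assumed at roughly 1.68 GHz since official clocks were unannounced at the time:

```python
# Peak FP32 throughput = 2 FLOPs (one FMA) x CUDA cores x clock.
cuda_cores = 1920
boost_clock_ghz = 1.68       # assumed; NVIDIA had not confirmed clocks

peak_tflops = 2 * cuda_cores * boost_clock_ghz / 1000
print(f"Peak FP32: {peak_tflops:.2f} TFLOP/s")   # ~6.45, matching the ~6.5 estimate
```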
In its Reviewer's Guide document, NVIDIA tested the RTX 2060 Founders Edition on a machine powered by a Core i9-7900X processor and 16 GB of memory. The card was tested at 1920 x 1080 and 2560 x 1440, the resolutions of its target consumer segment. Performance numbers at both resolutions point to the card performing within ±5% of the GTX 1070 Ti (and possibly the RX Vega 56 from the AMD camp). The guide also mentions SEP pricing for the RTX 2060 6 GB of USD 349.99.
Source: VideoCardz

234 Comments on NVIDIA GeForce RTX 2060 Founders Edition Pictured, Tested

#226
lexluthermiester
Renald said:
A 2080 Ti should cost ~$800-900 IMO, and a 2080 $600.
Yeah, but you're not the one making them and having to recoup manufacturing and R&D costs. And for the record, 2080s are not far from the $600 you mentioned.
#227
wolf
Performance Enthusiast
kings said:
People compare this to the price of the GTX 1060, but forget the RX 480 was on the market at the same time to keep prices a bit lower! Basically, the closest competition to the RTX 2060 will be the RX Vega 56 (if we believe the leaks), which still costs upwards of $400-$450, barring the occasional promotion.

Unless AMD pulls something out of their hat in January, $350 to $400 for the RTX 2060 will be in line with what AMD also offers! Nvidia, with their dominant position, is not interested in disrupting the market on price/performance.
xkm1948 said:
Nice to see a bunch of couch GPU designers and financial analysts know better than a multi-million-dollar GPU company regarding both technology and pricing. It's called capitalism for a reason: no competition means Nvidia has free say on how they price their cards. You don't like it? Then don't buy; good for you. Someone else likes it, they buy, and it's entirely their own business. NVIDIA is "greedy"? Sure, yeah, they better f*cking be greedy. They are a for-profit company, not a f*cking charity.
Good to see a few people out there are onto it; probably others too, I just haven't quoted you all. In the absence of competition at certain price points, which AMD has generally been able to provide in the low/mid/upper-midrange (and often top-tier) segments for some time, Nvidia simply has the ability to charge a premium for premium performance. Add to that the fact that the upper-end RTX chips are enormous and use newer, more expensive memory, and yeah, you find them charging top dollar for them, and so they should in that position.

As has been said, don't like it? Vote with your wallet! I sure have. I bought a GTX 1080 at launch and ~2.5 years later I personally have no compelling reason to upgrade. That comes down to my rig, screen, available time to game, what I play, price/performance, etc.; add it all together and that equation is different for every buyer.

Do I think the 20 series RTX is worth it? Not yet, but I'm glad someone's doing it. I've seen BFV played with it on, and I truly hope ray tracing is in the future of gaming.

My take is that when one or both of these two things happen, prices will drop, perhaps by a negligible amount, perhaps significantly:

1. Nvidia clears out all (or virtually all) 10 series stock, which the market still seems hungry for, partly because many of those cards deliver more than adequate performance for a given consumer's needs.
2. AMD answers the 20 series' upper-level performance, or releases cards matching 1080/2070/Vega performance at lower prices (or, again, both).
#228
bug
lexluthermiester said:
Yeah, but you're not the one making them and having to recoup manufacturing and R&D costs. And for the record, 2080s are not far from the $600 you mentioned.
My guess would be that at least some of the R&D has already been covered by Volta. It's the sheer die size that makes Turing so damn expensive.
If Nvidia manages to tweak the hardware for their 7nm lineup, then we'll have a strong proposition for DXR. Otherwise, we'll have to wait for another generation.
#229
lexluthermiester
bug said:
My guess would be that at least some of the R&D has already been covered by Volta.
Maybe, but how many Volta cards have they actually sold? Even so, Volta and Turing are not the same. They have similarities, but are different enough that a lot of R&D is unrelated and doesn't cross over.
bug said:
It's the sheer die size that makes Turing so damn expensive.
That is what I was referring to with manufacturing costs. Dies are pricey, even if you manage a high yield of good dies per wafer. That price goes way up if you can't manage at least an 88% wafer yield, which will be challenging given the total size of a functional die.
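To put rough numbers on that: under the textbook Poisson defect model, the share of defect-free dies falls exponentially with die area, which is exactly why huge Turing dies hurt. A sketch for illustration only; the defect density is assumed (foundries don't publish real figures), and the die areas are approximate:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.2  # defects per cm^2 -- assumed for illustration only

for name, area in [("GP106 (~200 mm^2)", 200),
                   ("TU106 (~445 mm^2)", 445),
                   ("TU102 (~754 mm^2)", 754)]:
    print(f"{name}: ~{poisson_yield(area, D0):.0%} defect-free")
# GP106 ~67%, TU106 ~41%, TU102 ~22% -- big dies get expensive fast
```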
#230
bug
lexluthermiester said:
Maybe, but how many Volta cards have they actually sold? Even so, Volta and Turing are not the same. They have similarities, but are different enough that a lot of R&D is unrelated and doesn't cross over.
Well, Volta is only for professional cards. The Quadro goes for $9,000; God knows how much they charge for a Tesla.
And as for differences, I'm really not aware of many, save for the tweaks Nvidia made to make it more fit for DXR (and probably less fit for general compute). I'm sure Anand did a piece highlighting the differences (and I'm sure I read it), but nothing striking stuck.

That said, yes, R&D does not usually pay off after just one iteration. I was just saying they've already made some of that back.
#231
moproblems99
wolf said:
Nvidia clears out all (or virtually all) 10 series stock
They could certainly just restock the 10 series at its current prices and leave the 20 series where it is, continuing this pricing model until AMD releases something. Lord help us all if it doesn't compete.
#232
kanecvr
efikkan said:
No, the first Titan was released along with the 700 series.
The top model of the 600 series was the GTX 690, with two GK104 chips.
Yes, the original Titan was released with the 7 series, but both use the Kepler architecture. In fact, the 680 and 770 are identical, save for a small clock-speed increase in favor of the 770 (1,045 vs. 1,006 MHz). You can even flash a 680 to a 770 and vice versa (if the cards use the same PCB, like, say, a reference design).

efikkan said:

I have to correct you there.
GK100 was bad, and Nvidia had to do a fairly "last-minute" rebrand of the "GTX 670 Ti" into the "GTX 680" (I remember my GTX 680 box had stickers over all the product names). The GK100 was only used for some compute cards, but the GK110 was a revised version, which ended up in the GTX 780 and was pretty much what the GTX 680 should have been.

You have to remember that Kepler was a major architectural redesign for Nvidia.
No. GK100 = GK110. The reason for the extra "1" is the name change from the 6 series to the 7 series, to try to make it look like a new product. Kepler also has GK2xx-branded chips, which do contain small architectural improvements, mostly to the scheduler and power efficiency. Again, and for the last time: the 680 is NOT GK100; it's GK104. Nvidia did not release any GPU with the GK100 codename, not even in the professional market. There were rumors, and the tech press did speculate that GK100 would be reserved for the (then) new Tesla, but that never happened. This is the complete list of Kepler GPUs, both 6 and 7 series, including the Titan, Quadro, and Tesla cards:
  • Full GK104 - GTX 680, GTX 770, GTX 880M, and several professional cards
  • Cut-down GK104 - GTX 660, 760, 670, 680M, 860M, and several professional cards
  • GK106 - GTX 650 Ti Boost, 650 Ti, 660, and several mobile and pro cards
  • GK107 - GTX 640, 740, 820, and lots of mobile cards
  • GK110 - GTX 780 (cut-down GK110), GTX 780 Ti, and the original Titan, as well as the Titan Black, Titan Z, and loads of Tesla/Quadro cards like the K6000
  • GK208 - entry-level 7 series and 8 series cards, both GeForce- and Quadro-branded
  • GK208B - entry-level 7 series and 8 series cards, both GeForce- and Quadro-branded
  • GK210 - revised and slightly cut-down version of the GK100/GK110, launched as the Tesla K80
  • GK20A - GPU built into the Tegra K1 SoC
I know from a trustworthy source (an Nvidia board-partner employee) that Nvidia had no issues whatsoever with the GK100. In fact, internal testing showed what a huge leap in performance Kepler was over Fermi. This is THE REASON Nvidia decided to launch the mid-range GK104 chip as the GTX 680: the GK104 is 30 to 50% faster than the GF100/GF110 used in the GTX 480 and 580. Some clever people in management came up with this marketing stunt: spread Kepler over two series of cards, the 600 and 700 series; release the GK104 first; and save the full GK100 (GK110) for the later 700 series, launching it as a premium product and creating a new market segment with the 780 Ti and the original Titan.

GK110 is simply the name chosen for launch, replacing the GK100 moniker, mainly to confuse savvy consumers and the tech press. As Nvidia naming schemes go, the GK110 should have been an entry-level chip: GK104 > GK106 > GK107 - the smaller the trailing number, the larger the chip. The only Kepler revision is named GK2xx (see above) and only includes entry-level cards.
#233
jabbadap
While true, GK110 had two versions too: GK110 and GK110B; the latter could clock a bit higher. All GTX 780 "GHz Edition" AIB cards used the GK110B version of that chip. Going back to that ancient history, everybody knows the GTX 780 Ti has aged pretty badly, mostly because of its 3 GB of VRAM. But how has the 6 GB version of the GTX 780 aged?
#234
Berfs1
I told y’all the power plug was on the side lol