Tuesday, January 22nd 2019

NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060

Thai PC enthusiast TUM Apisak posted a screenshot of an alleged GeForce GTX 1660 Ti Ashes of the Singularity (AoTS) benchmark run. The GTX 1660 Ti, if you'll recall, is an upcoming graphics card based on the TU116 silicon, a derivative of the "Turing" architecture that lacks real-time ray tracing capabilities. Tested on a machine powered by an Intel Core i9-9900K processor, the AoTS benchmark was set to run at 1080p under DirectX 11. At this resolution, the GTX 1660 Ti returned a score of 7,400 points, which roughly compares with the previous-generation GTX 1070 and is about 16-17 percent faster than the GTX 1060 6 GB. NVIDIA is expected to launch the GTX 1660 Ti sometime in spring-summer 2019 as a sub-$300 successor to the GTX 1060 series.
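As a sanity check on the quoted figures, the percentage uplift works out if the GTX 1060 6 GB scores in the mid-6,000s in the same AoTS preset. A minimal sketch of that arithmetic follows; note the GTX 1060 baseline score below is an assumption inferred from the ~16-17% claim, not a number from the leak itself:

```python
# Relative performance from AoTS scores (illustrative arithmetic only).
# The GTX 1060 baseline is an assumed value consistent with the leak's
# "about 16-17 percent faster" claim, not a reported score.

def percent_uplift(new_score: float, base_score: float) -> float:
    """Return the percentage by which new_score exceeds base_score."""
    return (new_score / base_score - 1.0) * 100.0

gtx_1660_ti = 7400    # leaked AoTS score, 1080p / DirectX 11
gtx_1060_6gb = 6350   # assumed baseline (hypothetical)

print(f"{percent_uplift(gtx_1660_ti, gtx_1060_6gb):.1f}% faster")  # ~16.5%
```

Any baseline between roughly 6,320 and 6,380 points lands in the quoted 16-17% window, so the claim is internally consistent.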
Source: TUM_APISAK (Twitter)

155 Comments on NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060

#126
notb
Blueberries said:
That is literally the definition of luxury.
No, it isn't!
There's only one decent definition of something being "a luxury" - that's "luxury goods" in economics. Simply put: these are things that you want more of as they become more expensive (like collectibles, art, jewelry, etc.).

Everything else is colloquial and varies from dictionary to dictionary (and from person to person).

There's no way to create a good definition of "luxury" the way you want. There's no objective condition, so you quickly end up with everything or nothing being luxurious. :-)
The 2060 has more performance at an MSRP of $350 than the 1070 at an MSRP of $370. The ridiculous idea that it should be priced at 1060 tier is something you made up in your head.
I never said that. I said there's a need for a cheaper card.
You should concentrate more on reading and less on imagining new world order. :-)
You want a cheaper GPU? Buy a 10 or 9 series, the fact that you feel entitled to the latest part at whatever price you want is ludicrous.
I can't buy a 9-series anymore, and the 10-series will also disappear when the cheaper 16/20-series arrive.
So what's your solution?

As I said: AMD does what you say - they release a high-end model and keep refreshing it in the following years at lower prices. But we end up with inefficient cards (power-hungry, hot and noisy). Customers clearly prefer what Nvidia has been doing.

Moreover, a GPU is more than just performance and efficiency. It's also about other technologies - like output standards, CUDA compatibility, supported hardware encoding, etc. That's why refreshing (and updating these things) makes more sense than just selling the same card for 6 years.
Blueberries said:
Try extrapolating the words "great comfort" to this context. A luxury item is something you buy out of comfort and not out of necessity.
Which gets us back to what I said: by your definition, the only non-luxury products are those our organism needs to function: water, food, oxygen, etc.
But since these can be found for free in the wild, is everything you have to pay for a luxury?
R0H1T said:
Yes, it is a luxury! Are some people on this forum so deluded that they don't consider mid/high-end PC gaming a luxury?
If you base this on a definition that "luxury" is something you can live without, isn't any kind of gaming a luxury?

Are some people on this forum so elitist and arrogant that they need recognition for owning expensive hardware? :-D
Posted on Reply
#127
R0H1T
notb said:
If you base this on a definition that "luxury" is something you can live without, isn't any kind of gaming a luxury?
Figuratively speaking - yes. In the case of PC/console gaming, though, the costs range anywhere from $500 to infinity. In many parts of the world, including my own, a person can easily live for a month (or more) off that kind of money. No matter how you look at it, it is a luxury.
notb said:
Are some people on this forum so elitist and arrogant that they need recognition for owning expensive hardware? :-D
I honestly don't know. I'm not someone who does that, nor am I in a position to judge others on what they do - I also know how that'll pan out here. However, given my background, surroundings & upbringing, I've always thought of the "collective good" over individual prosperity & even though it may sound socialist - I don't support companies who put profits above all else.
Posted on Reply
#128
Vayra86
medi01 said:
The problem of people "unfairly" complaining about nvidia's price policy exists only in your head and, perhaps, other parts of your body.
Stagnant perf/$ and ever-rising prices are not "a problem", but a fact.
If you have a problem when people state it, perhaps you should try to figure out why.
The problem is not unfair complaints - the complaints are valid (Turing is a big fat no-no for me) - but these same people are still buying these cards, keeping the status quo intact. And 'Nvidia's' price policy... it's the market's price policy, really. Radeon VII is a perfect example. It's a bit of hypocrisy if you ask me. Many people (and most don't even realize it) say to themselves 'that 20xx is too expensive, screw Nvidia, I'm buying a lower tier Nvidia card' :D And the same thing happens on the AMD side, make no mistake. Complaints about price are as old as commerce and trade - it's the eternal dance between customer and salesman at its core.

Stagnant perf/dollar has nothing to do with Nvidia, but with a lack of competition. If AMD competed, and if we had a sizeable performance boost this generation in both camps, then you would have seen prices drop. It won't happen by complaining to Nvidia while still rewarding them with 80% market share. You can see this in the midrange where AMD is still playing; the price of Vega has dropped considerably and the 350 dollar price point is now fiercely fought over. Here, we can get GTX 1080 performance at 66% of the price (give or take) it used to be at launch.

I think we will see an interesting dynamic in the coming months/year, one where the midrange is more than sufficient for mainstream resolutions at pretty fantastic FPS and quality settings, while any more powerful GPU costs an arm and a leg. The high end will possibly stagnate even more or price itself out of the market - Turing is already a clear example of this with the 2080 (Ti), and Radeon VII is following suit. And this may continue until either Nvidia or AMD finds a way to implement an MCM solution effectively, doing the Zen yield-efficiency trick all over again. Both camps now have super-large dies in the high end; it's not something they can keep up in a cost-effective way. Neither Radeon VII nor TU102 is viable for future iterations at a reasonable price.
Posted on Reply
#129
R0H1T
I think MCM will mostly be nonviable for the foreseeable future in the GPU space. Heck, we haven't gotten past the CF or SLI hangover - imagine something similar, except in hardware.
Till we get there - if we ever cross that bridge - I'll hold on to that thought.
Posted on Reply
#130
Vayra86
R0H1T said:
I think MCM will mostly be nonviable for the foreseeable future in the GPU space. Heck, we haven't gotten past the CF or SLI hangover - imagine something similar, except in hardware.
Till we get there - if we ever cross that bridge - I'll hold on to that thought.
Fast interconnects stopped being sci-fi a long time ago, didn't they; it's one of the few areas where we still see major advancements, most recently with IF. SLI/CrossFire have to make do with a standard PCIe bus (or bridges) to make multiple chips work. I can't believe that some IF-like solution couldn't work as well or even better for GPUs. It's not like a CPU isn't doing a lot in real time either - there is just less data being moved. Nvidia also has its NVLink, which seems more suitable for GPUs. Every competitor already has a fast interconnect technology waiting to be used.

Everything is possible; I think it's mostly a question of cost effectiveness. That is why you speak of a CF/SLI 'hangover' (there was also a time when it was almost mandatory for high-end performance!), and that is also why Zen is so successful: it comes at a time when new nodes and the performance/die size we require are creating major difficulties in terms of scaling. Had AMD launched Zen during the Bulldozer days, it might have fallen flat on its face, because Intel could just as easily have pushed out monolithic chips.

notb said:
No, it isn't!
If you base this on a definition that "luxury" is something you can live without, isn't any kind of gaming a luxury?
Yes. The penny dropped :)

That is the essence of luxury. And as humans we are quick to forget that the things we've attained are in fact luxuries, and quick to convert them into 'necessities' in our heads. That is exactly what you see here, and exactly why some of you seem to have a problem with stating that a video card is a luxury. It's called entitlement, and it's a widespread issue.

I prefer counting my blessings on what I have right now, and being thankful for every day I can live in wealth and good health. Once you've visited a few less fortunate countries (or lived in one) you'll get a pretty clear picture of what is a luxury and what is not. It's not abstract at all; the only abstract thing here is every individual's frame of reference. Which can also be translated to 'you haven't seen much of the world if you think a video card is not a luxury'. Just because one lacks knowledge doesn't suddenly change a definition.
Posted on Reply
#131
bug
Vayra86 said:
Fast interconnects stopped being sci-fi a long time ago, didn't they; it's one of the few areas where we still see major advancements, most recently with IF.
I'm afraid IF might have been a one-trick pony. It did the job for first-generation Ryzen, but hit a wall with Threadripper (IF eats more power than all of Threadripper's cores together). On top of that, AMD came out and said IF doesn't scale, so Zen 2 will still use a 14nm IF implementation.
At this point, it's as much a hindrance as it is a boon for the platform.
Posted on Reply
#132
InVasMani
Does this still have DLSS and mesh shading, and simply lack RTRT? If so, and if it's priced the same or very close to the 1060, then it's better than nothing at least.
Posted on Reply
#133
Casecutter
OneMoar said:
the tdp on the 1060 6gb is 120w knowing that and knowing what the tdp on the 2060 is it should be in that ballpark
Total mistype on the TDP of the GTX 1060... So the GTX 1660 (or whatever) will be in the 160 W TDP ballpark of the RTX 2060? How does that correspond to your statement...
OneMoar said:
I mean 16% at the same tdp is better then what amd has done so yea ...
So, you're dissing AMD - and I take it the RX 590 - for a 10% performance increase with a 5% TDP reduction? Admirable to see 16% more performance, sure, but with a TDP increase of 30% if we're looking at the ballpark of the RTX 2060?... That's what your words were implying... Correct?

We can't get a good comparison of actual perf/watt, as W1zzard doesn't include the RX 590 in the latest reviews. That said, the RX 590 is not nearly in RTX 2060 territory, and it's actually hard to correlate that data point between reviews. Kind of wish there was something like the Sapphire RX 590 NITRO+ Special Edition in those RTX 2060 charts... ugliness be damned.
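For what it's worth, the perf/watt arithmetic behind comparisons like the ones above can be made explicit. A quick sketch, using only the rough percentages quoted in these posts (not measured data):

```python
# Perf-per-watt change derived from relative performance and TDP deltas.
# The input percentages below are the rough figures quoted in the thread,
# not measurements from any review.

def perf_per_watt_change(perf_gain_pct: float, tdp_change_pct: float) -> float:
    """Return the % change in perf/watt given % changes in perf and TDP."""
    return ((1 + perf_gain_pct / 100) / (1 + tdp_change_pct / 100) - 1) * 100

# RX 590 vs RX 580: ~10% more performance at ~5% lower TDP
print(f"RX 590 case: {perf_per_watt_change(10, -5):+.1f}% perf/watt")
# Hypothetical: 16% more performance at a 30% higher TDP
print(f"+16% perf, +30% TDP: {perf_per_watt_change(16, 30):+.1f}% perf/watt")
```

By this arithmetic, 10% more performance at 5% less power is roughly a 16% perf/watt gain, while 16% more performance at 30% more power would be about an 11% perf/watt regression - which is the contrast the post above is gesturing at.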
Posted on Reply
#134
gasolina
If this is $250, overclocks well, and has SLI support, it would be a very interesting deal.
Posted on Reply
#135
Captain_Tom
Zubasa said:
nVidia's main goal, as with any company, is to maximize profit, not to price AMD out of the market.
If they can sell a card for $300 and still sell all of them, there is no reason to price it any lower.
Exactly, and until people stop obsessing over 10% performance differences this won't change. But it will change the second people realize those yearly hardware upgrades aren't making their games any better.

Hopefully people do realize that eventually...

bug said:
I'm afraid IF might have been a one trick pony. It did the job for the first generation Ryzen, but hit a wall with Threadripper (IF eats more power than all Threadripper's cores together). On top of that, AMD comes out and says IF doesn't scale, so Zen2 will still use a 14nm IF implementation.
At this point, it's as much a hindrance as much as it is a boon for the platform.
AMD said that a die shrink of the I/O portion of a CPU doesn't scale performance the way a die shrink of the cores does. You have it completely wrong lol, and AMD is correct in that statement. For example, it's not like die-shrinking Haswell's memory controller did a whole lot for performance or efficiency compared to die-shrinking the actual cores.

Also, just to be clear - are you actually calling Infinity Fabric a "one-trick pony"? Even if it were, that's one hell of a trick that allowed AMD to make desktop CPUs that obliterate Intel's HEDT line-up.
Posted on Reply
#136
Totally
R0H1T said:
Yes it is a luxury! Are some people on this forum so deluded that they don't consider mid/high end PC/gaming a luxury?
Tell that to someone who lives on the street or barely gets a meal a day - not because they made bad choices in life, but because they were not born into a privileged family!
Hence the term is relative. I was going to give an example similar to what you just said, then declined, figuring you'd pick up on that and not try to split hairs - but clearly not.

OMITTED example: it's a luxury for me to wake up every morning and draw breath not as a citizen of North Korea.
Posted on Reply
#137
illli
Pretty sad, these are the times we live in. For the past couple of months you could buy a 1070/Vega 56 for $299 or less, with a game bundle... this 'new' card by Nvidia is underwhelming.
Posted on Reply
#138
bajs11
THANATOS said:
The GTX 1060 is ~68% faster than the 1050 Ti and costs $210. This card is faster than the GTX 1060 and should cost under $200? I would love that, but it's unreasonable.
You, sir, and many others seem to have forgotten it's 2019 now, not 2016.
By your logic they should just keep releasing 16%-faster GPUs and still charge 50-100 bucks more with each generation.
Posted on Reply
#139
efikkan
illli said:
Pretty sad, these are the times we live in. For the past couple of months you could buy a 1070/Vega 56 for $299 or less, with a game bundle... this 'new' card by Nvidia is underwhelming.
Which is called a sale.
It's not like the new cards are never going to have discounts.
Posted on Reply
#140
bug
Captain_Tom said:
AMD said that a die shrink of the I/O portion of a CPU doesn't scale performance the way a die shrink of the cores does. You have it completely wrong lol, and AMD is correct in that statement. For example, it's not like die-shrinking Haswell's memory controller did a whole lot for performance or efficiency compared to die-shrinking the actual cores.
That's where IF is, so I'm not sure what your beef is here.
Captain_Tom said:
Also just to be clear - are you actually calling Infinity Fabric a "one trick pony"? Even if it was, that's one hell of a trick that allowed AMD to make Desktop cpu's that obliterate Intel's HEDT line-up.
Well, it's been out for one generation and already isn't going places. What do you call that?
Posted on Reply
#141
Captain_Tom
bug said:
Well, it's been out for one generation and already isn't going places. What do you call that?
I am sorry - What isn't going places?
Posted on Reply
#143
Captain_Tom
bug said:
This isn't going places: https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/4
It's stuck on 14nm, it will draw the same amount of power (give or take some tweaks).
What is "it"? You think EPYC/TR isn't going anywhere? LOL, did you see the demonstration where one single EPYC 3000 chip beat two of Intel's top Xeons while using less energy?

Have you missed the news lately? AMD showed off an R5 3600 matching a $500 9900K while using close to half the energy.
Posted on Reply
#144
bug
Captain_Tom said:
What is "it"? You think EPYC/TR isn't going anywhere? LOL, did you see the demonstration where one single EPYC 3000 chip beat two of Intel's top Xeons while using less energy?

Have you missed the news lately? AMD showed off an R5 3600 matching a $500 9900K while using close to half the energy.
Ok now you're just playing dumb.
Posted on Reply
#145
Captain_Tom
bug said:
Ok now you're just playing dumb.
See, there it is again - and that is why I am honored to be quoted in the signature of someone completely ignorant of what's going on in this space. Makes me look smart.

I mean, seriously, listen to yourself - you are saying I am "playing dumb" when you literally haven't paid attention to the latest developments. What's scary is that there are people liking your posts that have absolutely zero facts behind them. You clearly don't even know what Infinity Fabric is, and yet you seem to fancy yourself an armchair expert. Then again, fanboys like hearing fanboys parrot their own beliefs back to them...


But I will continue to bite on the off chance you might want to learn - why are you accusing me of "playing dumb"? Have you actually not seen the latest demos of the Zen 3000 series?
Posted on Reply
#146
InVasMani
Captain_Tom said:
Exactly, and until people stop obsessing over 10% performance differences this won't change. But it will change the second people realize those yearly hardware upgrades aren't making their games any better.

Hopefully people do realize that eventually...



AMD said that a die shrink of the I/O portion of a CPU doesn't scale performance the way a die shrink of the cores does. You have it completely wrong lol, and AMD is correct in that statement. For example, it's not like die-shrinking Haswell's memory controller did a whole lot for performance or efficiency compared to die-shrinking the actual cores.

Also just to be clear - are you actually calling Infinity Fabric a "one trick pony"? Even if it was, that's one hell of a trick that allowed AMD to make Desktop cpu's that obliterate Intel's HEDT line-up.
I think where a shrink of the I/O hub makes the most impact for AMD is simply the space savings: if that means squeezing in another two chiplets, that's a big uplift in performance right there, regardless of how the die shrink improves the I/O hub itself. It's something they can worry about more if Intel starts to catch up, which isn't likely for a while, so no need to worry yet, I'd say. In fact, by the time Intel is a threat again, they might be transitioning to 5nm anyway, or on the verge of it.
Posted on Reply
#147
Captain_Tom
InVasMani said:
I think where a shrink of the I/O hub makes the most impact for AMD is simply the space savings: if that means squeezing in another two chiplets, that's a big uplift in performance right there, regardless of how the die shrink improves the I/O hub itself. It's something they can worry about more if Intel starts to catch up, which isn't likely for a while, so no need to worry yet, I'd say. In fact, by the time Intel is a threat again, they might be transitioning to 5nm anyway, or on the verge of it.
Yep - actually, rumors about Zen 3 point to exactly that. Zen 3 should have slight but notable performance tweaks similar to Zen+, but with a complete rework of the I/O die (including a die shrink to 7nm+ for both the cores and the I/O). It's supposedly going to cut overall power consumption in half again, and yes, I suppose they could probably add one more chiplet if they really wanted to.
Posted on Reply
#148
AnkitaMishra
biffzinker said:
Under $300 would translate to $299 knowing Nvidia's recent pricing shenanigans. /s Hopefully the price is $250.
The price should be $139, because the GTX 1660 Ti is the successor of the GTX 1050 Ti. The successor of the GTX 1060 6 GB is the RTX 2070 8 GB (TU106).
Posted on Reply
#149
John Naylor
Wow ... it seems we have more economics PhDs on the forum than techno-geeks :). The economics are simple.

1. Folks who make a living at this examine the market and ascertain "what the market will bear".
2. Vendors will always sell at a premium over this number early after release; they can't sell more than they can get.
3. As supply catches up with demand, sale prices will come into line.
4. If supply can't keep up, prices will rise; if supply exceeds demand, prices will drop.
5. Prices will always follow what the market will bear. It's not "shenanigans", it's called capitalism.
6. Board members are fiscally responsible to their shareholders and have a legal obligation to maximize shareholder returns.
7. So pricing procedures will ignore any philanthropic reasoning; only two things can affect that: competition and customer price ceilings.
8. AMD has been unable to compete in the upper tiers for some time, and with each successive generation of late has lost one more tier.
9. Customers want what they want ... until they are able to exercise restraint, the only option they have is buy and cry.
10. In the US, we still have the tariff penalty. Buy a complete PC made in China = no tariff .... buy the parts and build it yourself = pay the penalty.
Posted on Reply
#150
Captain_Tom
John Naylor said:
Wow ... it seems we have more economics PhDs on the forum than techno-geeks :). The economics are simple.

1. Folks who make a living at this examine the market and ascertain "what the market will bear".
2. Vendors will always sell at a premium over this number early after release; they can't sell more than they can get.
3. As supply catches up with demand, sale prices will come into line.
4. If supply can't keep up, prices will rise; if supply exceeds demand, prices will drop.
5. Prices will always follow what the market will bear. It's not "shenanigans", it's called capitalism.
6. Board members are fiscally responsible to their shareholders and have a legal obligation to maximize shareholder returns.
7. So pricing procedures will ignore any philanthropic reasoning; only two things can affect that: competition and customer price ceilings.
8. AMD has been unable to compete in the upper tiers for some time, and with each successive generation of late has lost one more tier.
9. Customers want what they want ... until they are able to exercise restraint, the only option they have is buy and cry.
10. In the US, we still have the tariff penalty. Buy a complete PC made in China = no tariff .... buy the parts and build it yourself = pay the penalty.
One would call all of this "common sense", but yes, some people around here don't seem to have any. My favorite complaints were the ones regarding Vega 64's prices a month after launch - they were elevated because demand was higher than supply... not because AMD was "lying" lol.
Posted on Reply