
NVIDIA Seemingly Begins Resupplying GeForce GTX 1050 Ti GPUs

"However, retailers that have received fresh supply of the 14 nm, 4 GB GDDR5-totting graphics card have it at $179 - still above the 5-year-old asking price at release, which was set at $140. The GTX 1050 Ti features a 192-bit memory bus and a whopping 768 shading units."

Something is better than nothing. Still disappointing though.
 
Card manufacturers in China are probably just setting up mining farms and not even shipping cards at this point.
 
I can't believe NVIDIA went with such old technology. If they have to pull a trick like this, then how about at least going back just one gen to Turing? I've got a 2080 SUPER and it works fine.
I'm pretty sure they used a surplus of chips they already had; it's not uncommon to do this at the end of a product's lifecycle, but normally this doesn't get much attention.

Starting production of a new batch of old chips would require ~4 months for the chips + ~1 month for assembly, provided they had free production capacity, which they probably don't.

Is it possible that Samsung restarted the 14nm line...
Foundry production lines usually don't stop until they are worn out or are upgraded to a newer node. The demand for all the "16nm class" nodes is tremendous, and they are probably fully booked. I doubt Nvidia could get many extra wafers without paying someone else to give up their reserved capacity.

...or are those just refurbished miner cards that were doing work until now?
Highly unlikely.
It's also illegal to market refurbished cards as new.
 
For the same amount of money I got an AIB 5700 XT last year. Dang, we're really moving away from home gaming.
 
IIRC, in 2018 I saw a 1050-series card for $220, the same as my 1060 3 GB cost in November 2016!

That reminds me, why not bring the 1060 3 GBs back?
 
A few weeks ago there were rumors about Nvidia reintroducing the 2060, but we found none in the stores.

The 2060 would be a much more interesting option...
 
The problem is that companies need to sell something to maintain jobs, profits, etc., not that users should buy whatever they can only because it's the only hardware available on the market.

Staying out of the market is not an option, not now and not ever. Until new chips come from Nvidia and AMD, this is the best choice, poor as it is, to keep things going.

You know, they bitch about the memory chip shortage and other components like VRMs, capacitors, etc., but really, if that is the case and those parts are so scarce, then they are worsening the problem, yes? Why not use them to produce more of the latest-series GPUs? Are people really this naive? It's obvious: THEY ARE LYING! They are just re-selling old stock for major profits.
 
While we hate to see this, I see a case for bringing such cards back to the market for gamers; although, yes... at reasonable prices. Stuff miners can't really use anymore from a perf/power standpoint, but that still gives something good for 1080p/1440p, all while using otherwise idle wafer/fab capacity.

I would think AMD could shuffle back to GloFo and work Polaris 30 (12nm LP) back in, and then release stuff like an RX 675 ($160) & RX 695 ($200) with nothing more than a slight tweak to clocks.
 
On a side note, has anyone else noticed Newegg is filled with Chinese sellers lately? For me the search results are garbage now; I look for gaming mice and a bunch of ripoff Chinese brands come up. Somehow Amazon is better now, and that's a low bar.
 
They will be re-releasing the 920 next. If they could get a graphics output out of toast, they would be selling it again.

I'm thinking the price could possibly be comedy.
 
I get 1-3 calls a day with people asking if I have any RTX 3000 series cards at all. It pains me, as a gamer myself, to tell them nope nope and nope and no ETA. :(
 
I get 1-3 calls a day with people asking if I have any RTX 3000 series cards at all. It pains me, as a gamer myself, to tell them nope nope and nope and no ETA. :(

I think it is hilarious.

"*Yawn* wake me up when consoles can play 4k ultra like the RTX 3090ti" they would say.

The flagship GPUs, although profitable, have simply taken too many resources to sustain.

Now they look on like desperate peasants with their 1050ti while console gamers are playing with much more serious hardware.
 
Everyone will end up with Intel discrete GPUs.

C'mon Raja, you can do this....
 
I can't believe NVIDIA went with such old technology. If they have to pull a trick like this, then how about at least going back just one gen to Turing? I've got a 2080 SUPER and it works fine. A low end version of this would have done the trick. Create a new low-ish end model called "RT 2040" or something with a suitably cut down GPU and people will buy it. Or heck, just make an "RT 3040" equivalent or something. Personally, I'd buy a card called "RT 3030" just because of the name, lol.

Because Turing was done on a shit node with pretty big dies for what they offered.

The gen was crap. And it still is.

I think it is hilarious.

"*Yawn* wake me up when consoles can play 4k ultra like the RTX 3090ti" they would say.

The flagship GPUs, although profitable, have simply taken too many resources to sustain.

Now they look on like desperate peasants with their 1050ti while console gamers are playing with much more serious hardware.

Said it many times... RT is too costly.
 
Because Turing was done on a shit node with pretty big dies for what they offered.

The gen was crap. And it still is.
I've got a 2080 SUPER and can tell you that there's nothing shit about it. It's first gen RTX, that's all, so it pushed the envelope and my card works very nicely, with no glitches at all and great temps.

I don't know if NVIDIA used the latest node to make it or not, but you can't just blanket call it "shit", like you're some kind of expert, which you're not. Besides being offensive with language like that, what do you know about the compromises that NVIDIA had to make due to the resources available to them? In particular, what the fab is able to offer them seems to be the biggest one. New, cutting edge nodes are usually fully subscribed and aren't able to push out as much volume as a more mature node, so they've gone with what's best at the time, given all the variables.

Hence, my comment stands: it doesn't justify releasing 5-year-old tech, which by definition has to perform worse and have fewer features.
 
Okay buddy, sorry if I hurt your pretty GPU ;)

Offensive? When did you become a snowflake? Wow, man.

If you can get past your emotions... you may note my comment is aimed at the margins of Turing and therefore its potential in the market. Those dies are big and there isn't a lot of fab capacity, and on top of that, the node was a one-off for TSMC and for Nvidia. Pascal was made in much larger volumes on a cheaper node and it also doesn't contain RT cores.

Yes, I'm blanket calling it shit, like it was since release and like I'll always do. It's clear as day. The dies are too big and the gap with Pascal is too small. Why did Ampere leap ahead in absolute perf? Because Turing was such a weak gen compared to Pascal. Ampere leaps not only on shader perf but also on RT perf. Ergo, the Turing performance delta per square mm is just too low to repeat.

As for expertise... you assume too much. That's why I'm correcting your comment: it does justify their re-release of Pascal. I don't like it either, but doing more Turing is a clear no.
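To put rough numbers on that perf-per-square-mm point, here is a minimal illustrative sketch. The die areas are the commonly cited figures (GP104 ≈ 314 mm², TU104 ≈ 545 mm²); the relative performance factor for the 2080 SUPER tier is an assumed ballpark, not a measured benchmark, so treat the output as directional only.

# Illustrative perf-per-area comparison between the Pascal and Turing upper-mid dies.
# Die areas are the commonly cited figures; the performance ratio is an ASSUMED
# ~1.4x ballpark for TU104 vs GP104, not a benchmark result.
dies = {
    "GP104 (GTX 1080)":       {"area_mm2": 314, "rel_perf": 1.0},
    "TU104 (RTX 2080 SUPER)": {"area_mm2": 545, "rel_perf": 1.4},  # assumption
}

base = dies["GP104 (GTX 1080)"]
base_density = base["rel_perf"] / base["area_mm2"]

for name, d in dies.items():
    density = (d["rel_perf"] / d["area_mm2"]) / base_density
    print(f"{name}: {density:.2f}x the perf/mm^2 of GP104")
# GP104 (GTX 1080): 1.00x the perf/mm^2 of GP104
# TU104 (RTX 2080 SUPER): 0.81x the perf/mm^2 of GP104

Even with a generous performance assumption, Turing spends noticeably more silicon per unit of raster performance, which is the whole margin argument in one number.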

I'm pretty sure they used a surplus of chips they already had; it's not uncommon to do this at the end of a product's lifecycle, but normally this doesn't get much attention.

Plausible, but at the current rate I wouldn't abandon the idea. The demand is for graphics cards, period. Not specifically RT-enabled ones, in fact, because all cards are overpriced now. Adding cost is not an option. It is fast becoming viable to do something along the lines of restarting production on older nodes. You also have to consider that the demand problem isn't new; it's been present for half a year now at least, and was already building before that.
 
There is no way in hell they are wasting new wafers on this old-ass chip/architecture. There are only two viable options:

1. These are leftover surplus chips that nVidia is trying to offload onto customers who don't know any better

2. These cards are refurb/rebranded models being offloaded en masse by mining farms, as they can no longer mine Ethereum with 4 GB of memory (see the rough DAG-size math sketched below)

I suspect more and more it's the latter, as the timing is pretty much dead on for a mass dump of these cards onto the market from miners. They could literally just slap a new heatsink/fan/plastic shroud on the same PCB (or just the plastic shroud, really) and re-sell it as a 1050 Ti. Avoid these trash cards at all costs; even at MSRP they just aren't worth it for anything other than an emulator machine.
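For context on point 2, here is a rough back-of-the-envelope sketch of why 4 GB cards dropped off Ethereum, using the published Ethash growth constants (≈1 GiB initial DAG, ≈8 MiB added per epoch, one epoch every 30,000 blocks). The real DAG is slightly smaller because the algorithm rounds down to a prime number of entries, so take this as an approximation.

# Approximate Ethash DAG growth: ~1 GiB initial, ~8 MiB added per epoch
# (one epoch = 30,000 blocks, roughly 4-5 days of chain time).
DATASET_BYTES_INIT   = 2**30   # ~1 GiB starting DAG size
DATASET_BYTES_GROWTH = 2**23   # ~8 MiB added every epoch

def approx_dag_gib(epoch: int) -> float:
    """Approximate DAG size in GiB for a given epoch (ignores prime rounding)."""
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

# Find the first epoch where the DAG no longer fits in 4 GiB of VRAM.
epoch = 0
while approx_dag_gib(epoch) < 4.0:
    epoch += 1
print(epoch, round(approx_dag_gib(epoch), 2))   # -> 384 4.0

Epoch ~384 landed around the end of 2020, which lines up with 4 GB cards like the 1050 Ti aging out of Ethereum mining right before these cards reappeared.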
 
I get 1-3 calls a day with people asking if I have any RTX 3000 series cards at all. It pains me, as a gamer myself, to tell them nope nope and nope and no ETA. :(
Pretty much the same at my store. NVidia and AMD both, but at least we're getting a few Radeons here and there.
 
I can get hold of GTX 1650/1660s in the UK; they're in stock in multiple places. But I still wouldn't buy one when they're around £170-280, even though they're much better than the 1050 Tis on the same sites going for the same amount of money...

I still need to replace my broken RX 570, but I can wait until the end of the year, when hopefully things will have settled down a bit. I refuse to pay anything over £600 for an RTX 3070, so until they come back down to something closer to MSRP I'm gonna keep the money in my pocket.
 
Plausible, but at the current rate I wouldn't abandon the idea. The demand is for graphics cards, period. Not specifically RT-enabled ones, in fact, because all cards are overpriced now. Adding cost is not an option. It is fast becoming viable to do something along the lines of restarting production on older nodes. You also have to consider that the demand problem isn't new; it's been present for half a year now at least, and was already building before that.
If they were running new batches, then they would very likely be running some of the bigger dies, which give more profit per wafer.
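A minimal sketch of that trade-off, using the standard first-order dies-per-wafer approximation on a 300 mm wafer. The die areas are commonly cited figures (GP107 ≈ 132 mm², GA102 ≈ 628 mm²); the per-die revenue numbers are placeholder assumptions just to show the shape of the argument, and this ignores yield, edge exclusion, and the fact that the two chips are on different nodes.

import math

# First-order gross (candidate) dies per wafer: ignores defect yield,
# scribe lines and edge exclusion.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas are commonly cited; revenue-per-die values are PLACEHOLDER
# assumptions purely for illustration, not actual pricing.
chips = {
    "GP107 (GTX 1050 Ti)":   {"area_mm2": 132, "revenue_per_die": 60},
    "GA102 (RTX 3080/3090)": {"area_mm2": 628, "revenue_per_die": 700},
}

for name, c in chips.items():
    n = dies_per_wafer(c["area_mm2"])
    print(f"{name}: ~{n} candidate dies/wafer, ~${n * c['revenue_per_die']:,} gross per wafer")

With placeholder numbers like these, a wafer of big dies comfortably out-earns a wafer of small ones, which is why any new capacity would go to the top of the stack first.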
 
While we hate to see this, I see a case for bringing such cards back to the market for gamers; although, yes... at reasonable prices. Stuff miners can't really use anymore from a perf/power standpoint, but that still gives something good for 1080p/1440p, all while using otherwise idle wafer/fab capacity.

I would think AMD could shuffle back to GloFo and work Polaris 30 (12nm LP) back in, and then release stuff like an RX 675 ($160) & RX 695 ($200) with nothing more than a slight tweak to clocks.
If they want to serve gamers, the best they could do is release cards with 4 GB of VRAM for 1080p gaming. Having said that, I wager that a large percentage of these are cards coming from the distribution market at $179, and the ones that are old inventory in retail channels are the uber-expensive ones, as the pricing algorithms may not be smart enough to see that they are the same card.
 
Okay buddy, sorry if I hurt your pretty GPU ;)

Offensive? When did you become a snowflake? Wow, man.

If you can get past your emotions... you may note my comment is aimed at the margins of Turing and therefore its potential in the market. Those dies are big and there isn't a lot of fab capacity, and on top of that, the node was a one-off for TSMC and for Nvidia. Pascal was made in much larger volumes on a cheaper node and it also doesn't contain RT cores.

Yes, I'm blanket calling it shit, like it was since release and like I'll always do. It's clear as day. The dies are too big and the gap with Pascal is too small. Why did Ampere leap ahead in absolute perf? Because Turing was such a weak gen compared to Pascal. Ampere leaps not only on shader perf but also on RT perf. Ergo, the Turing performance delta per square mm is just too low to repeat.

As for expertise... you assume too much. That's why I'm correcting your comment: it does justify their re-release of Pascal. I don't like it either, but doing more Turing is a clear no.
None of what you say actually negates my post, really.

I've already said that NVIDIA had to make compromises, so yeah, Turing isn't quite as good as it could have been, and it's first-gen RTX too; the reviews said so. It's not the end of the world though, and they can still make decent cards out of it, so I think they should use those instead of Pascal. In the end, neither of us has all the facts in front of us to fully understand why NVIDIA made this decision, so we shouldn't be too judgemental about Turing.

The big performance leap also reflects further development of RTX as well as a better process node - and let's not forget the competition from AMD that simply wasn't there when Turing came out. NVIDIA wants to be top dog at almost any cost, so it's not that surprising that Ampere has a big performance uplift to make it as hard as possible for AMD to catch up, and so far they're winning. We'll see in a year or two if this still holds true, but I think it will.

Anyway, you worry too much about my emotions, which have nothing to do with it. Blanketing the whole Turing line as "shit" without knowing those facts is just being judgemental and a bit ignorant, really.
 