
New GPU; Wait? If not, which one?

What GPU should I invest in for the next 1.5 - 3 years?


  • Total voters: 49
  • Poll closed.

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.94/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
That still doesn't rule out a 970, with adding another later as a possibility. Especially if you're OK with 1080p. Two 970s will handle pretty much any game at max settings, even at 1440p.
For how long, though? My issue with my 6870s isn't GPU power, it's VRAM. Long term, I would rather be GPU-limited, not VRAM-limited. The 6870s did okay until they went over 400MB of shared memory. The 970 is going to slow down before I even get into shared memory; that's my concern. Either way, I think it's safe to say that benchmarks will be in order before I make any decision.
Then you should probably reflect that in your spec chart, which is used a lot by people giving advice on gaming hardware purchases.
I think most people can put two and two together and figure out that I'm not gaming on all 3 if I have two 6870s. ;)
However, why would I change my specs when I would use it if I could (depending on the game)? I'm only allowed so many characters in there, and it's not like I'm asking about hardware often enough to be that nit-picky. Specs are there to describe the computer, not how it's used, IMHO. Although how it's used can be reflected by the hardware (and should be, IMHO, but isn't always).
If AMD's new Fury X is as good as the hype says it is, then it's a no-brainer unless NVIDIA drops the price of the Ti or Titan X further. The Titan X is only £800 atm... I am sooooo tempted....
We shall see. I think AMD needs to have a faster option, because if they're the same, the 980 Ti still has 6GB over Fury's 4. Once again, I await benchmarks. :)
 
Joined
Nov 9, 2010
Messages
5,654 (1.15/day)
System Name Space Station
Processor Intel 13700K
Motherboard ASRock Z790 PG Riptide
Cooling Arctic Liquid Freezer II 420
Memory Corsair Vengeance 6400 2x16GB @ CL34
Video Card(s) PNY RTX 4080
Storage SSDs - Nextorage 4TB, Samsung EVO 970 500GB, Plextor M5Pro 128GB, HDDs - WD Black 6TB, 2x 1TB
Display(s) LG C3 OLED 42"
Case Corsair 7000D Airflow
Audio Device(s) Yamaha RX-V371
Power Supply SeaSonic Vertex 1200w Gold
Mouse Razer Basilisk V3
Keyboard Bloody B840-LK
Software Windows 11 Pro 23H2
^You don't seem to get that even ONE 970 is plenty for a single 1080p display, INCLUDING VRAM. The only ones complaining about VRAM on a 970 are those pushing higher res, or making something out of nothing with the 3.5+512 thing that was blown up WAY out of proportion.

And it makes no sense to show a triple-display setup in a spec chart on a gaming forum if you have NO intention of using it for gaming. Sure, it's obvious your current graphics cards can't run 5760x1080, but some people buy such parts planning to use them after a GPU upgrade.

I'm tired of giving common-sense advice only for you to respond with nonsense, though. Clearly you are paranoid, just like the ones who claim a 970's VRAM is crippled, despite benches and tons of testimonials proving otherwise.

Your last response also makes it clear you don't know what your goal is, which makes it hard to give advice. "I'm fine with 1080p... eh, but I might run 5760x1080 if I can." Well, what the hell is it? Make up your mind. You DO realize there's a huge difference between the two, don't you?

This thread was locked for similar reasons. I see this one headed the same direction.

Forget polling the community; it serves no purpose if you don't really know what you want. You need to poll YOURSELF!!!

And BTW, the R9 Fury X will not only have 8GB VRAM instead of 4, it's HBM too. And it will likely be a much better deal than two 980 Tis or even two 980s.
 
Joined
Jun 13, 2012
Messages
1,328 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual GeForce RTX 4070 Super (2800MHz @ 1.0V, ~60MHz overclock, -0.1V; 180-190W draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
And BTW, the R9 Fury X will not only have 8GB VRAM instead of 4, it's HBM too. And it will likely be a much better deal than two 980 Tis or even two 980s.
8GB HBM won't be released until at least 2-3 months after the 4GB one is out. As for a better deal, not likely; we don't know how much of a price premium that will tack on. Could be an extra $200-300.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
I could see 8 GiB HBM only showing up on smaller process nodes, which means a lot longer than 2-3 months. If it were only 2-3 months, AMD would not have launched the 4 GiB Fury and would instead have waited to launch an 8-16 GiB Fury. I think the decision to go with 4 GiB was made 1-3 years ago and, in hindsight, they probably wish they had designed Fury with a 2048-bit HBM bus or increased the HBM density. I'm sure AMD privately regrets having to launch with 4 GiB, but they can't afford not to.
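To put numbers on that 4 GiB ceiling: a minimal Python sketch, assuming the commonly cited first-generation HBM figures (4 stacks on the interposer, 1 GiB and a 1024-bit interface per stack, 1 Gb/s effective per pin) rather than anything confirmed in this thread.

Code:
# Back-of-the-envelope for first-generation HBM as reported for Fiji.
# Assumed figures (publicly cited HBM1 specs, not from this thread):
stacks = 4
gib_per_stack = 1             # 4-Hi stack of 2 Gb dies = 1 GiB
bus_bits_per_stack = 1024
effective_gbps_per_pin = 1.0  # 500 MHz, double data rate

capacity_gib = stacks * gib_per_stack          # 4 GiB total
bus_width_bits = stacks * bus_bits_per_stack   # 4096-bit aggregate bus
bandwidth_gbs = bus_width_bits * effective_gbps_per_pin / 8

print(f"{capacity_gib} GiB, {bus_width_bits}-bit bus, ~{bandwidth_gbs:.0f} GB/s")
# -> 4 GiB, 4096-bit bus, ~512 GB/s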

And BTW, the R9 Fury X will not only have 8GB VRAM instead of 4, it's HBM too.
Source?
 
Joined
Jun 13, 2012
I could see 8 GiB HBM only showing up on smaller process nodes, which means a lot longer than 2-3 months. If it were only 2-3 months, AMD would not have launched the 4 GiB Fury and would instead have waited to launch an 8-16 GiB Fury. I think the decision to go with 4 GiB was made 1-3 years ago and, in hindsight, they probably wish they had designed Fury with a 2048-bit HBM bus or increased the HBM density. I'm sure AMD privately regrets having to launch with 4 GiB, but they can't afford not to.
There will be an 8GB version; it's been confirmed, but not until months after the 4GB launch. As for the process node: not in the next 2-3 months. GPUs are stuck at 28nm until next year. AMD has to launch the product they have; they already waited, hoping a smaller node would be ready to shrink the GPU, which didn't happen. 2-3 months for 8GB HBM2 does seem optimistic, I guess, so it could be closer to the end of the year, but who knows.

There was a report that NVIDIA was able to make prototypes of their next Pascal GPU using HBM2 on 16nm for testing; whether that is true is still up in the air.
 

FordGT90Concept
Again, source? I've not seen any mention from (semi-)official sources that there will be an 8 GiB Fury. Even Google is tying everything 8 GiB to the 390X, which is already well known.

Edit: It could be mistaken for a 2x Fury card with 2x 4 GiB. That could theoretically happen in a few months; memory-on-chip really simplifies designing a multi-GPU card.
 
Aquinus
^You don't seem to get that even ONE 970 is plenty for a single 1080p display, INCLUDING VRAM. The only ones complaining about VRAM on a 970 are those pushing higher res, or making something out of nothing with the 3.5+512 thing that was blown up WAY out of proportion.
Really? I'm talking about the future, not right now. You seem not to read half of what I have to say.
My issue with my 6870s isn't GPU power, it's VRAM. Long term, I would rather be GPU-limited, not VRAM-limited. The 6870s did okay until they went over 400MB of shared memory. The 970 is going to slow down before I even get into shared memory; that's my concern.

And it makes no sense to show a triple-display setup in a spec chart on a gaming forum if you have NO intention of using it for gaming. Sure, it's obvious your current graphics cards can't run 5760x1080, but some people buy such parts planning to use them after a GPU upgrade.
Last time I checked, TPU was a hardware forum, not just a gaming forum, and I shouldn't have to say I have one monitor just because I only game on one. Also, as I said before, if I had hardware competent at driving all 3, I'm sure I would actually do it...
Your last response also makes it clear you don't know what your goal is, which makes it hard to give advice. "I'm fine with 1080p... eh, but I might run 5760x1080 if I can." Well, what the hell is it? Make up your mind. You DO realize there's a huge difference between the two, don't you?
Pretty sure I said I would run it if I can, situation permitting. There are a lot of games that you don't run at that resolution because it's more headache than it's worth; not because of performance, but because the game just sucks on 3 displays. Although it seems you're more intent on telling me that I'm wrong and that you know what's best for me, when I've made it very clear that a 970 is not in the cards for me.
This thread was locked for similar reasons. I see this one headed the same direction.
Only because you're derailing my thread. I acknowledged your recommendation and rejected it, with my reasons explained. I plan on keeping whatever I get for the next 3-6 years, and having enough VRAM is very much an important issue for me, as a second card in a few years is not an unlikely possibility. Even if I do go over like I am now, I would rather the GPU slow down when I've used all of the VRAM (plus some), and not before. A GPU is not a short-term investment in my case, which is why I rejected the 970.
And BTW, the R9 Fury X will not only have 8GB VRAM instead of 4, it's HBM too. And it will likely be a much better deal than two 980 Tis or even two 980s.
Let's see that source, big guy. That's news to me.

Lastly: I did say that I'm reserving judgement until we have some reviews on Fury (X) as well. Maybe we can postpone the conversation until that occurs.
 

nsdp

New Member
Joined
Jun 6, 2015
Messages
21 (0.01/day)
No, it's not. I just went and asked my partner's friend, who works as a consultant in the oil industry. He then went off and asked his techs what gear they used on one of the North Sea rigs, and he reported back. Turns out they use Quadros, so I have no idea where you pulled that information from.

If your company is using 295X2s for that kind of compute work, either they're doing it wrong or they're not a very big company. The first hologram I ever saw was being utilised by an oil drilling company. They spend big dollars on that kind of gear.

More to the point, seismic work =/= gaming, so that is entirely irrelevant in terms of benchmarking.


Poor RCoon. If your friend is working the North Sea, he is in the backwaters of the oil and gas industry; it is a dead end for exploration. The shale fields using fracking are the cutting edge: the Eagle Ford and the Permian in Texas, and the Bakken in North Dakota. New exploration in the North Sea has been dead for 5 years, so whatever computer equipment they are using is probably either hand-me-downs or old. They are also probably using code written with CUDA, not the newer OpenCL 2. According to API, North Dakota spudded more new wells last week than were spudded in the entire North Sea (UK, Norwegian, German, Danish, and Dutch sectors combined) in the last 12 months. I know more than a little about this area, since I am a Standard Oil Co. (Ind.) and Valero Energy retiree.

As to what I am doing, it is this http://www.google.com/patents/WO2014117040A1?cl=en which is way over your head. I am using the 295X2 because it is the only card capable of more than 2 TFLOPS double precision, and I happen to need 11 TFLOPS to run this http://www.flownex.com/ in 3D. If you check with PADT https://www.networkingphoenix.com/v...nologies-inc-padt/48797?ModPagespeed=noscript you will find that they are running AMD as well for Flownex. They are the sole US licensee for Flownex commercial services and did the initial thermo analysis for me in 2013. Or you might also check with the people at the University of Pretoria on their work behind this project: http://www.flownex.com/fnxdocs/userpublications/userjournal002.pdf U Pretoria's is a two-dimensional model; mine is three-dimensional, so a lot more complex. It makes the seismic work look pretty simple and your gaming look like Trivial Pursuit. Here is a simple single-nozzle burn of what we are doing and designing for use with SIX nozzles in a gas turbine.
Since your JavaScript screwed up the link, google:
Worlds First Air Pollution Free Hydrogen Steam Boiler
UALocal442


3D problem solvers work very nicely on the 295. Anyone who tells you otherwise is demonstrating their gross ignorance. Do you like your crow cold or warmed over?
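As a rough illustration of the sizing math behind the 11 TFLOPS figure above (a sketch only; the ~2 TFLOPS-per-card FP64 number is this post's claim, while ~1.4 TFLOPS is the figure argued later in the thread):

Code:
import math

# How many 295X2 cards for an 11 TFLOPS FP64 target, under the two
# per-card FP64 figures disputed in this thread.
target_tflops = 11.0
for label, per_card_tflops in [("claimed ~2.0", 2.0), ("1/8-rate ~1.4", 1.4)]:
    cards = math.ceil(target_tflops / per_card_tflops)
    print(f"{label} TFLOPS/card -> {cards} cards")
# claimed ~2.0 TFLOPS/card -> 6 cards
# 1/8-rate ~1.4 TFLOPS/card -> 8 cards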

PS I will send you a map and a flashlight.
 
FordGT90Concept
Have you considered Xeon Phi? Its double-precision performance should beat the snot out of GPUs, which have sacrificed double precision for more single-precision performance.

Edit: Scratch that. 1.2 TFLOPS for $4000.

Edit: Uh, most figures I'm seeing put the 295X2 at 1.4 TFLOPS. Considering that's two processors instead of one, Xeon Phi probably comes out ahead in terms of power consumption.
FirePro W9100 weighs in at 2.6 TFLOPS.
Tesla K80 weighs in at 2.9 TFLOPS.
 
Joined
Jul 18, 2008
Messages
2,894 (0.50/day)
Location
South Carolina
System Name KILLER
Processor Intel Core i7 4790K
Motherboard MSI Z87 G45 Gaming
Cooling Water Cooling
Memory 2x8GB Gskill 2400mhz
Video Card(s) EVGA GTX 980
Storage SSD Samsung 500GB
Display(s) 24" Asus 144hz
Case Cooler Master HAF
Audio Device(s) X-Fi Platinum
Power Supply Antec 1200 Watt Modular
Software Windows 10 64 Bit
Benchmark Scores 3DMark Vantage 51342
Just pick up a couple of used GTX 980s for SLI and see what you can get out of them.

Saw some going in new condition for under $400 now.
 

nsdp
Have you considered Xeon Phi? Its double-precision performance should beat the snot out of GPUs, which have sacrificed double precision for more single-precision performance.

Los Alamos National Laboratory put that notion to rest in 2008 with Roadrunner. As to FFT performance on a CPU, the ranking is roughly as follows, excluding graphics cards: 1. IBM Blue Gene L/N family. 2. AMD Steamroller/Piledriver. 3. SPARC. 4. Intel. Look at the results in the DARPA HPC Challenge over the last 9 years: IBM is first with 59 awards, AMD is second with 27 awards, Hitachi/NEC SPARC is third with 22 awards, and Intel is last with 0! That is why Intel paid $124 million in late 2012 to Cray for the people to fix Intel's problems between the CPU and memory. That is also why Kraken, Jaguar, and Titan at Oak Ridge have been exclusively AMD machines. Now we will get new (valid) numbers for the SuperComputing 15 conference in November, up the road from me in Austin. The Top500 continues to use 32-bit Linpack, which has been unworkable since 2008. In 2012 Jack Dongarra released a replacement. Jack also had a paper out at SC07 telling everyone why 32-bit Linpack was no longer a valid testing program.
 
FordGT90Concept
I edited. If you're looking for raw FP64 performance, the FirePro W9100 is king. It has almost double the FP64 performance of the 295X2, and it does it with a single GPU. Using four of them, you should be able to get 10 TFLOPS out of a single system (and six displays per card :eek:).
 
nsdp
It is 2.8 TFLOPS for the 295, and at $550 each the calculation is a no-brainer. I have an S10000 that I acquired new for $1K from a bankruptcy trustee and use it for final evaluation, where everything is ECC. I bought all six cards for less than the best price I could find on the K80. I am a private individual, not the DOE with an unlimited budget. I want no more than $300/TFLOP. Besides, most everyone of significance is moving to OpenCL and away from proprietary software like CUDA. For example, Photoshop 10 going to OpenCL and NVIDIA cards having a hard time adapting: https://forums.adobe.com/thread/1235487
 

FordGT90Concept
The 295X2 has 5,733 GFLOPS single precision per GPU. All sources on the internet say it has a 1/8 ratio of double to single precision, or 717 GFLOPS per GPU. Multiply by two and you get 1.4 TFLOPS for the whole card.
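A quick check of that arithmetic (sketch only; the 5,733 GFLOPS and the 1/8 rate are the figures cited in this post, not official specs):

Code:
# Reproduce the FP64 estimate above: per-GPU single-precision
# throughput, scaled by the cited 1/8 FP64:FP32 rate, times two GPUs.
sp_gflops_per_gpu = 5733
fp64_rate = 1 / 8
gpus_per_card = 2

dp_gflops_per_gpu = sp_gflops_per_gpu * fp64_rate            # ~717 GFLOPS
dp_tflops_per_card = dp_gflops_per_gpu * gpus_per_card / 1000

print(f"{dp_gflops_per_gpu:.0f} GFLOPS/GPU, {dp_tflops_per_card:.2f} TFLOPS/card")
# -> 717 GFLOPS/GPU, 1.43 TFLOPS/card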

The S10000 is actually quite a bit slower than the W9100: 1.48 TFLOPS. It's a smidge faster than your 295X2; it practically is a 295X2, having dual GPUs, but with more memory, and that memory is ECC. The W9100 also has ECC memory.


If you're in the market to upgrade, you can actually get 50% off the W9100 ($1500 for 2.6 TFLOPS) for the rest of the month. :eek:
 
Joined
Nov 9, 2010
I did say that I'm reserving judgement until we have some reviews on Fury (X) as well. Maybe we can postpone the conversation until that occurs.
Sorry kid, that will be pretty one-sided. I already grew tired of you sticking your nose up at everything I offered. You seem content to try to predict the future with your hardware decisions, which almost always ends up failing because you waste too much money on something you think will last longer than a more practical solution. Been there, done that, and learned from it. You, I'm not so sure. It seems your lesson has yet to come.
 

nsdp
The 295X2 has 5,733 GFLOPS single precision per GPU. All sources on the internet say it has a 1/8 ratio of double to single precision, or 717 GFLOPS per GPU. Multiply by two and you get 1.4 TFLOPS for the whole card.

The S10000 is actually quite a bit slower than the W9100: 1.48 TFLOPS. It's a smidge faster than your 295X2; it practically is a 295X2, having dual GPUs, but with more memory, and that memory is ECC. The W9100 also has ECC memory.


I wish I had an unlimited budget. I am doing a lot of this out of my own pocket. The W9100 costs about 3x what I paid for the S10000. Double precision seems to be more on the order of 1.85 TFLOPS. So much of that is how tight the code is (witness the computer buffer overflow on the Apollo 11 LEM vs. the rewrite for Apollo 12). So I have to use the P-51/Me-262 theory: the Me-262 was 140 miles per hour faster than the P-51 but could stay aloft only 45 minutes, and there were never more than 700 operational. There were 7,000 P-51s with 8 hours of endurance, so the 8th and 15th Air Forces put standing combat patrols over the Me-262 bases and shot them down on takeoff and landing. We know who won the war. Comrade Stalin was of the opinion that "quantity has a quality of its own."

As the proud owner of 6F03C318630 for forty years, I appreciate the Ford link. And my car is better than yours.
 
Aquinus
Sorry kid, that will be pretty one-sided. I already grew tired of you sticking your nose up at everything I offered. You seem content to try to predict the future with your hardware decisions, which almost always ends up failing because you waste too much money on something you think will last longer than a more practical solution. Been there, done that, and learned from it. You, I'm not so sure. It seems your lesson has yet to come.
I do expect you to stop derailing the thread. You made your suggestion and I rejected it, with my reasons why. If you can't take "no" for an answer, and you're going to get worked up about it and start trying to insult me with comments like:
Been there, done that, and learned from it. You, I'm not so sure. It seems your lesson has yet to come.
Then maybe you need to step away from the computer for a minute. Of course I know either will be overkill. My 6870 was overkill when I got it brand new. That's the point, because I want the GPU to last more than 1.5 years. I did say that I started with one 6870 6 years ago, and I intend to do the same with whatever I get in the near future.

Considering most people aren't recommending a 970, I'm not convinced it's a good recommendation, even more so when I'm planning for this to be viable several years down the road. I might upgrade in 1.5 years, but it won't be the GPU, I can tell you that. The soonest I would upgrade my GPU again would be 3 years; the next upgrade (in 1.5) would focus on storage, memory, and CPU, if I were to guess.

Now that I've made my point, I expect you to stop derailing the thread. If you don't like what I'm doing, don't get pissed off about it; just unsub from the thread. You don't need to go turning this into a pissing contest when I've outlined what I'm looking for and responded to your recommendation. Go derail someone else's thread, why don't you.

Side note: Go ask @FordGT90Concept if a 970 is in the cards for him. He's in the same boat, if you forget surround, considering he's rocking a 1920x1200 display. So are you saying this is true for both of us? Considering he's still rocking a 5870, I suspect he doesn't want to replace his GPU every 1.5 years and would like it to last about as long as his current GPU has, as would I.
 
FordGT90Concept
Side note: Go ask @FordGT90Concept if a 970 is in the cards for him. He's in the same boat, if you forget surround, considering he's rocking a 1920x1200 display. So are you saying this is true for both of us? Considering he's still rocking a 5870, I suspect he doesn't want to replace his GPU every 1.5 years and would like it to last about as long as his current GPU has, as would I.
I ain't doing anything until R3## is out. I'm not inclined to buy NVIDIA in the first place, because the last time I splurged on NVIDIA, I got burned (failure rate astronomically higher than ATI cards at the time).

And yeah, I don't upgrade often. I go for generational changes, not incremental ones. Those pics in the other thread showing that the 390X may just be a minor update to the 290X are really bumming me out. My enthusiasm vanished. It doesn't help that both are on 28nm too. I'm less confident in this upgrade than I've ever been. :( I'm thinking I should maybe try to wait for HBM2 and sub-28nm cards.
 
nsdp
OP, the problem is NVIDIA is gouging the fanbois right now with smoke and mirrors. Figure that the typical sales price for a 970 is north of $100/TFLOP 32-bit SP. If you look around you can find a 6970 (which will do just fine for most desktop users) for $33/TFLOP 32-bit. The 290X I have seen as cheap as $42/TFLOP 32-bit SP. You will routinely see the 290/X for under $50/TFLOP 32-bit SP.

I also question almost all of the benchmarks in the fanboi press, since they are not conducted using IEEE 754-standard compilers, nor do they tell you where the pointers, flags, and other tuning set points are. For example, using autotuning and a BIOS flash you can push the 295 over 2 TFLOPS in 64-bit DP. It helps that I have access to one of Oppy's boys (retired from Lawrence Livermore) and a college classmate (retired from Sandia) who like to tinker and still have back channels into the really good stuff. SPEC.org has benchmarks set up so you get an apples-to-apples comparison. They also require the testing entity to send in all the data on flags, pointers, etc., so SPEC staff can see whether the results can be replicated. Or, as the old SPEC charter said, "The key realization was that an ounce of honest data was worth more than a pound of marketing hype." https://www.spec.org/spec/ At today's price points, anything NVIDIA sells represents a lot of hype.
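To make the dollars-per-TFLOP framing concrete, a small sketch; the $/TFLOP figures are the ones claimed above, and the single-precision throughput numbers are commonly cited base-clock specs (assumptions on my part, not quotes):

Code:
# Back out the street prices implied by the $/TFLOP claims above,
# using commonly cited reference SP throughput (assumed, base clocks).
claims = {  # card: ($/TFLOP claimed above, reference SP TFLOPS)
    "GTX 970": (100, 3.5),
    "HD 6970": (33, 2.7),
    "R9 290X": (42, 5.6),
}
for card, (usd_per_tflop, sp_tflops) in claims.items():
    print(f"{card}: implied price ~${usd_per_tflop * sp_tflops:.0f}")
# GTX 970: implied price ~$350
# HD 6970: implied price ~$89
# R9 290X: implied price ~$235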
 
Joined
Jun 13, 2012
OP, the problem is NVIDIA is gouging the fanbois right now with smoke and mirrors.
Ironic that you mention smoke and mirrors; that is mostly what AMD has done with 90% of their product stack: rebranding last year's stuff. The 390X has landed in the hands of a few people, and the GPU date is Oct 2013. So the 390X is nothing more than a rebranded 290X, and AMD has the nerve to claim "enhanced" Hawaii, but it's the same card using the same GCN 1.1. NVIDIA at least used a NEW GPU for the current lineup and didn't rehash last year's into this year's.
 

nsdp
Ironic that you mention smoke and mirrors; that is mostly what AMD has done with 90% of their product stack: rebranding last year's stuff. The 390X has landed in the hands of a few people, and the GPU date is Oct 2013. So the 390X is nothing more than a rebranded 290X, and AMD has the nerve to claim "enhanced" Hawaii, but it's the same card using the same GCN 1.1. NVIDIA at least used a NEW GPU for the current lineup and didn't rehash last year's into this year's.
Poor Arbiter must be suffering from amnesia again. On August 4, 2010, Intel signed agreements http://www.wsj.com/articles/SB10001424052748704017904575409152910216786 with the FTC and AMD, with Intel paying the FTC $415 million and AMD $1.25 billion.

"The settlement also prohibits Intel from deceiving computer manufacturers about the performance of non-Intel CPUs or GPUs. " Intel admitted cheating in writing in Federal Court.

Intel also paid the EU 1.45 billion euros, which was about $2 billion, and lesser sums in Korea and Japan. Intel settled with the DOJ for $415 million for participating in the no-poaching of employees. And this year they, along with Apple, Google, and Adobe, tried to settle an employment suit for a lowball $245 million; the offer was rejected by the US District Judge, who said the number should be in the $3 billion range, trebled to $9 billion for the four. Intel is the liar, Arbiter, not AMD. The boys in green also have their problems, along with Intel, at SPEC.org over doctored benchmarks. The only IEEE-certified benchmark I am aware of SPEC rejecting for an AMD is one where there was an error on SPEC's part in setting up the benchmark. It is for a Dell 905 server. I will post the link when I find it again. https://www.spec.org/cpu2006/results/res2008q2/cpu2006-20080428-04224.html

Maybe Parkinson's disease has set in early. I can recommend several good neurologists.
 
Joined
Jun 13, 2012
Poor Arbiter must be suffering from amnesia again.
Funny how you have to go back to 2010 to find something, yet if you look back 2 years, there are dozens of things you could find against AMD. So who really has amnesia, or do you have Alzheimer's? Remember AMD got sued a little over a year ago for lying about how good their APUs were and how many they would sell? We can go at this all day, with you digging up stuff that happened a long time ago while I can pull stuff from, at most, 2 years ago, so just stop while you're ahead.
 

nsdp
Arbiter, I have provided references each time. You haven't, so you have a credibility problem, and I call your bluff. Now, there are some rules if you want to use sources like Passmark in the US and not be subject to libel and slander charges. First, your benchmark system has to be certified by the National Institute of Standards and Technology (NIST), the American Society for Testing and Materials (ASTM), IEEE 754, SPEC.org, the HPC Challenge, or a national laboratory like Lawrence Berkeley, Lawrence Livermore, JPL, Sandia, Oak Ridge, Idaho National Lab, Brookhaven, PNNL, NREL, or NETL. https://en.wikipedia.org/wiki/Daubert_standard Otherwise, what you offer is what goes with toilet paper. TUV will do nicely for Europe; Japan has one that is internationally recognized. Passmark is none of the above; it is a trivial integer algorithm for finding prime numbers. The key is that Intel admitted in open court that they paid testers to manipulate the benchmarks. If you have a subscription to PACER, I will give you a direct link to the signature of the General Counsel and Chief Legal Officer of Intel for the admission. Do you have any judicial admissions of cheating on benchmarks by AMD?

Don't use any Brian Williams/Bill O'Reilly, or, to be fair and balanced, Rachel Dolezal-type sources either. We have a group of the old charter members (1975) of the IEEE Computer Society waiting to jump on the Brian Williamses and Bill O'Reillys of the computer blogs. Also, don't try the Bellman's gambit: "Just the place for a Snark! I have said it thrice: What I tell you three times is true." http://www.theotherpages.org/poems/carrol03.html Beware of snark-infested waters.
 

Aquinus
I think we're getting a little off-topic here, and the name-calling isn't acceptable by any measure. Let's be civil, stay on topic, and not get a moderator involved.

On topic: 1 more day, the suspense builds. :)
 

nsdp
Fair enough, OP. The biggest question now is the future of the desktop. The laws of heat transfer, quantum mechanics, and the speed of light are going to put a practical end to the CPU as we know it today. Core design can survive shrinks down to 14nm and 7nm; beyond that, leakage will render further reduction infeasible. Below 7nm, the problems of heat transfer and quantum mechanics put a practical limit on manufacturing, and we are quickly reaching the limit of how fast a CPU can go, with the speed of light as an upper bound. That means CISC architecture is reaching its limit, just as RISC did around 2000 and the original vacuum-tube analogue designs did in the early 1960s. It also means that hardware associated with the CISC design risks going the way of Hollerith machines and magnetic tape readers, and software like PL/I, COBOL, and ALGOL. Los Alamos has posited that the next generation is the GPCPU with integrated video display: your display screen and logic circuit are the same device. This is based on their work with Roadrunner, which was a CISC machine hybridized with a PS3 Cell chip. I wish this document were available to the public in general.

What does that mean for those of us wanting to buy today for the long term? It basically means that as the Apple/Samsung ARM world expands, CISC is going to go the way of the dodo. It is just like when IBM cancelled Blue Waters, which was to be built at Argonne NL; that would have been the last RISC-based machine, like the Blue Gene series. IBM, not DOE, pulled the plug, as IBM's engineers decided the project was a dead end not worth pursuing. Remember that almost all commercial use of RISC ended roughly in 2003-04. The Cray/AMD partnership based on SeaStar and Opteron 940s made the purchase of a new RISC-based machine senseless. For lesser needs, server farms became the norm.

Since the laws of physics limit improvement in the fundamental CISC design, that pretty well spells the end of the line for our conventional graphics cards. So do you spend money for bragging rights? Or do you get an acceptable level of performance/$ and accept that what you have will essentially be a legacy system in 4 years? IBM is moving to the GPCPU, and let me remind you that Big Blue brought you the practical desktop computer and has a very long history of being a first mover in the real-world market. The CISC chip was an IBM, not Intel, design; Intel manufactured to IBM's spec. The early Apples, Amigas, Commodores, etc. were novelties, not game changers. I saw the vacuum-tube-analogue/solid-state-digital changeover, the DOS/GUI changeover, and the RISC/CISC changeover. The laws of quantum mechanics say design is going to have to change, and Moore's Law says the change will probably be here in 2 to 3 years max. We have been spoiled by a 20-year run for GUI/CISC, but right now it looks like 1994 to me: the DOS command line was dead in two years, and RISC was essentially dead when DOE built ASCI Red at Sandia NL in 1996. http://www.top500.org/featured/systems/asci-red-sandia-national-laboratory/
 