
Final Fantasy XIV: Endwalker GPU benchmark/Post Your Scores

Hmm, seems AMD has done some DX11 magic hoodoo with the newer drivers.
Previous score was 18546; latest score is with the 22.9.2 drivers.

Endwalker bench 1440p 2022-10-11 145603.jpg
 
First run at it with:
11700K @ 135W
EVGA RTX 3050 XC @ Stock

FFXIV Bench 11700K 3050 20221010.png

It says Preset 1, but I didn't change anything from the Maximum preset other than resolution.

2nd Run:
FFXIV Bench 11700K 3050 20221010.1052.png
 
did better than i thought it would
 

tips.jpg
Either FFXIV likes Zen 3, or something's wrong with my 11700K runs or system.

5600G @ stock
EVGA RTX 3050 XC @ stock

View attachment 265230
I'm more willing to bet it's something with the runs. Was there anything taking up CPU cycles in the background? Also, what was your 11700K clocked at the whole time? My experience with friends on 11th-gen CPUs is that they sometimes don't have the best cooling (prebuilts) and the CPU throttles hard. It often won't even go past 4 GHz, and my friend has an 11700F. No issues from anyone on 10th-gen CPUs, though. FFXIV, like most MMOs, simply loves cache capacity/IPC/clock speed/low RAM latency. That's why my score jumped up so much with each CPU upgrade using my 3080 Ti: from 20.5k (3950X) to 25.5k (5900X), and now almost 31k using my 5800X3D and pushing my 3080 Ti a bit harder with a higher undervolt OC.
 
did better than i thought it would
Hi,
Well you're not showing the finish/ compare page so we can't see the hardware being used plus you have no system spec's either.
 
Im more willing to bet something with the runs. Was there anything taking up CPU cycles in the background? […]

Background processes are the likely culprit. I have WCG and/or F@H on all my machines, so I may have forgotten to turn those off. I'm going to need to get really motivated to try again on the 11700K, because the 3050 is now in its semi-permanent home in my 9700K LRPC (Living Room PC). We'll see how it goes on that if I can figure out resolution scaling; it's only a 1080p TV.

EDIT:

Results are in, and they do suggest mitigating circumstances on the 11700K. First runs have tended to score slightly higher than second runs on all configs (<0.5%).
9700K @ 3.6/3.9
EVGA RTX 3050 XC @ stock

EW bench 9700K 3050 (1).png

EDIT THE SECOND:

Right, now for the "final" results. My goal was to test all my "modern" graphics cards, partly for curiosity and partly because none of them were on the chart. Without further ado:

11700K @ 135W
ASUS GTX 1060 6GB Dual @ stock

Two runs, identical scores.

EW bench 11700K 1060 6GB (1).png

11700K @ 135W
Sapphire RX 470 8GB Mining Edition 1X DVI @ stock

Single run at 1920x1080 (story in spoiler below)

1665879647052.png

Having gotten my head around DSR for the 3050 run earlier, I assumed configuring VSR for the 470 would be similarly easy. How wrong I was. AMD does some things well, not least simultaneously keeping up (for the most part) with two separate industry titans in two separate markets on a fraction of the resources. What they do not do well is consistency.

I loaded up the most recent Radeon Software in my repository, which was 20.11.2. Erm, where's VSR? Googling ensues. Oh, VSR is deprecated, and we have RSR now. But that's not supported pre-RDNA. Grr. OK, what do I have for older drivers? 19.9.2? Cool; let's go with that. One of the search results talking about VSR was for pre-19.12 drivers anyway. One removal and installation later: wait, why are we running 19.20.whateveritwas? Aargh. OK; go offline, uninstall, install the 19.9.2 package while offline so Windows doesn't auto-grab something newer. Hey, look: VSR! Not supported?!?

The reason I even need VSR (or why it doesn't work) might not even be AMD's fault. My 470 has a single DVI port since it's a mining card (I got it for like a hundred bucks new after the last big crypto crash), which didn't matter since it only needed to be connected to a single 1080p display for most of its life. I'm guessing it's only a single-link output, since resolution is capped at 1920x1200. Hooking it up to my 1440p monitor with a dual-link cable didn't help. Of course, that could also be a limitation of the monitor or the DVI-HDMI adapter I'm using (though it has all the pins on the DVI side). So why is this feature, which my GPU supports and which is built into the driver I'm using, not supported? Aack.

Side AMD rant: another thing they don't do well is history. They don't have ANYTHING like Intel's ARK. You can't download deprecated or historical driver versions from their website. (Intel is significantly better about this, but could still be more comprehensive.) Online documentation in general is poor vs. both Team Blue and Team Green. I have other old installers, but they're the minimal setup versions that download the required components, because I didn't think it was worth holding on to a bunch of 0.5 GB driver files. Lesson learned, I guess.
 
Screenshot 2022-10-18 155204.jpg


Recently upgraded my CPU from a Ryzen 1700X to the 5700X. Only had to upgrade the CPU, and it was only 239€. Thank you AMD for your very long AM4 support as promised in 2016 :love: even though it took a very long time until Ryzen 5000 was finally supported on my old boi, the Asus Crosshair VI Hero with X370 chipset.

Because of Covid I had a good amount of time to tune my system. It took 4 1/2 days to tune the CPU with PBO2 and Curve Optimizer and do proper stability testing.

OS: Windows 10 Pro 21H2

CPU: AMD Ryzen 5700X Rev. VMR-B2
PBO2 +200MHz PPT: 125W | TDC: 80A | EDC: 125A
Curve Optimizer Cores Negative -30|-20|-20|-30|-25|-25|-10|-30
It's usually around 4.85 GHz single core and 4.65 GHz all-core, depending on the type of load, of course.

Memory: G.Skill Flare X 16GB Kit DDR4-3200 CL14 (F4-3200C14D-16GFX) Samsung B-die
@ 3800 MHz CL16-16-16-32 CR1 with all the subtimings tuned; memory latency 57.8 ns.

GPU: Gigabyte RTX 2080 Ti Gaming OC (GV-N208TGAMING OC-11GC)
Power Limit: +20% (366W) | Core Clock: +75MHz | Memory Clock: +1000MHz
The card uses the better but more expensive Samsung memory that was also used on the Titan RTX. After many of the very first generation of cards using Micron memory started to fail due to a manufacturing issue, board partners quickly switched to Samsung memory until the issue with Micron was resolved.
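As a rough sanity check on those memory numbers, a quick sketch of the arithmetic: first-word CAS latency follows from the data rate and CL alone, while the measured 57.8 ns figure is full round-trip latency, which adds memory-controller and fabric overhead on top.

```python
# First-word CAS latency: CL cycles at the memory clock.
# For DDR memory the clock runs at half the transfer rate.
def cas_latency_ns(data_rate_mt_s: float, cl: int) -> float:
    clock_mhz = data_rate_mt_s / 2        # e.g. DDR4-3800 -> 1900 MHz clock
    cycle_ns = 1000.0 / clock_mhz         # nanoseconds per clock cycle
    return cl * cycle_ns

# The tuned kit above: DDR4-3800 at CL16
print(round(cas_latency_ns(3800, 16), 2))   # -> 8.42
```

So only ~8.4 ns of the 57.8 ns is the CAS delay itself; the rest is everything between the core and the DRAM.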
 
@80-watt Hamster, any reason why you run your 11700k @ 135w only for this benchmark?

That's just what I have my 11700K set at in general so that it stays under 80C running WCG. FFXIV is heavily GPU-bound (at least with the cards I'm using), and I don't anticipate a 135W power limit affecting the result much. CPU usage during the run is something like 20%.
 
Endwalker Stock Benchmark Result.jpg

LifeOnMars / 5700X stock / 32GB 3200 CL16 stock / RTX 3080 12GB stock
 
Must have forgotten to share this one.

Ryzen 5600G stock / EVGA GTX 1070 ti SC Black stock
2022-10-13 22_45_17-FINAL FANTASY XIV_ Endwalker Benchmark 5600G 1070 ti.png
 
Here is my score. I am held back by the CPU now. For a better score, I simply need a stronger CPU with better IPC and a higher core clock. The RTX 4090 has more to give.

5950X with PBO on and game mode/RTX 4090 with overclock

Final end walker 5950X.jpg
 
FYI - Time to up the resolution, as this is CPU-limited now with the Nvidia GeForce RTX 4090

MSI MEG X670E ACE - AM5
AMD Ryzen 7 7700X - 5.75 GHz
32GB GSKILL DDR5-6000 CL30
Nvidia GeForce RTX 4090 - 3.0 GHz
1666941254563.png
 
ASUS Prime X470.

I ran this last week when I got the 5800X3D but my scores today were lower than they were when I first tested. I had reset my BIOS and forgot to re-enable DOCP.

ffxiv_20221205_194906_028 - Results.png


Didn't bother overclocking. I just enabled DOCP on my G.Skill RAM and kept it moving. 3600 MHz @ 16-19-19-19-39-85.

ffxiv_20221205_215213_303 - Results.png


Then I updated from BIOS version 5861 to 6042 and saw a HUGE jump. I guess this is what AMD meant by "3. Improve system performance for AMD Ryzen 7 5800X3D" in the patch notes for 6042.

1670301487165.png


(btw: I get essentially the same score Full Screen vs Borderless. Before I reran this on Full Screen, my Borderless score was 26216)

I see a 4000 MHz CL14 kit on Newegg that I'll get as my last hurrah on this platform, but after that I think we're getting as much as we can out of the 6800 XT.

Edit:
Forgot to click "Overclock GPU" in Adrenalin. Clock boosts to 2465 and RAM boosts to 2250. Marginal improvement - I'm going to leave it on default for now.

1670303732489.png
 
FYI - Time to up the resolution as this is CPU limited now with the Nvidia GeForce RTX 4090 […]
Bro, I've been trying my hardest to beat this score and finally did it! AMD Ryzen 9 7950X 16-core, RTX 4090, 2560x1440 Maximum. When I say try, I was trying really hard; the most I could squeeze out was 37k. Then finally one morning it did the unthinkable and went past your score!

Gigabyte Aorus EXTREME X670E eATX - AM5
AMD Ryzen 9 7950X - 5.75 GHz
32GB G.Skill DDR5-6000 CL30
Gigabyte RTX 4090 Gaming OC
Samsung 1TB 990 PRO Gen 4 M.2
High.jpg
 

I just got my Gigabyte 4090 Gaming OC yesterday, and it is mind-blowing!

13900K P5.8/E4.7/R5.1 @ 1.36V
4090 GPU 3090 MHz / VRAM 1525 MHz (+1700)
DDR4 2x16 GB 3600C14D-32GTZNA @ 4300-15-15-15-30-280 / 1.61V
Samsung 950 Pro 512 GB PCIe 3.0

1440P :
Screenshot (1184).png

1080P :
Screenshot (1183).png Screenshot (1182).png
 
I just got my Gigabyte 4090 Gaming OC yesterday, it is mind-blowing! […]
OH LORD, that 47k score is a huge gap!!!!! Here at 38k I was struggling. Congrats, and yes, the 4090 is a beast. I don't think I'll be hitting that number soon. That just proves Intel is still greater than AMD. I wanted the 13900K so bad, but it was always sold out, so I went with what was available. Now I'm thinking of returning the 7950X and going with the 13900K instead. BUT I'm too lazy to take it all apart. Maybe there's more to AMD; I haven't given the power curve a try or a manual OC. Curious to see what frames you're getting in MW2 WZ 2.0 with that setup on ultra 1440p.
 
OH LORD that 47k score is a huge gap!!!!! Here with 38k I was struggling […]
Dawg. There are a lot of factors at play here. On paper the 5.8 GHz from the 13900K might trade blows with the 5.75 GHz on your 7950X, but a gain of 10k with the same GPU means there was more at play.

I'm trying to find a definitive answer, but the short version is that your CL30 DDR5 (even at 6000 MHz) has more latency than the 4300 MHz CL15 DDR4.

I wish I could find an IPC test where they lock both chips to 5 GHz or something and see which benches higher, but there might be an IPC difference that is exacerbated by @kryptonfly having a chip with 8 screaming-fast P-cores versus the 7950X having to deal with added latency from the Infinity Fabric across 16 screaming-fast cores.

And classically, FFXIV does better on Intel chips.

Wait for the X3D versions of AM5 for a drop-in upgrade, and see if there is DDR5 available that mitigates the latency penalty compared to DDR4.
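The latency point checks out with quick arithmetic: first-word latency in nanoseconds is CL times the cycle time, and since DDR memory clocks at half the transfer rate, that works out to CL × 2000 / data rate.

```python
# First-word latency in ns = CL * 2000 / data rate (MT/s),
# because the memory clock ticks at half the transfer rate.
def first_word_ns(data_rate_mt_s: float, cl: int) -> float:
    return cl * 2000.0 / data_rate_mt_s

print(round(first_word_ns(6000, 30), 2))  # DDR5-6000 CL30 -> 10.0
print(round(first_word_ns(4300, 15), 2))  # DDR4-4300 CL15 -> 6.98
```

So the heavily tuned DDR4 kit answers roughly 30% faster on first access than the DDR5 kit, despite the lower transfer rate.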
 
Dawg. There are a lot of factors at play here. […]
Yeah, there is so much more at play here, and boy does this benchmark favor Intel. I gave manual and auto OC a try last night, and long story short, it didn't work out. I couldn't find anything with a lower CL on the market. If you have any other ideas that can help boost the score, let me know. Thanks, man!
 
I spent months OCing, first with a 12900K, then a KS, and now this 13900K. RAM is really important for this bench, and fortunately the IMC is pretty good on this chip. Don't forget, it's a DX11 engine; maybe it prefers a few cores at high frequency and DDR4 (all 32 threads enabled). I disable all C-states. Here it is at 4K maximum:

Screenshot (1193).png

It seems I'm more GPU bound now at 4k.
 
Yeah there is so much more at play here and boy does this benchmark favor Intel. […]
I found a 45-minute video where they OC the chip to 5950 MHz, and Gamers Nexus used LN2 to get it to 6.45 GHz.

Tom's Hardware found that even with PBO, the 7950X loses to the 13900K at 1440p, and the results spread out even more when OCed to 5.6 GHz.

My opinion is that the 13900K was over-engineered precisely to beat whatever AMD could come out with. Notice how the 7950X running at 230W is about 91% of the 253W max that the 13900K runs at, and that the 188 fps average versus the 206 pre-OC fps is also approximately 91% of the performance.

In essence, the chips probably perform about the same at the same wattages. Unless you can find a way to push 250W+ into your 7950X (which I do NOT recommend), this is an insurmountable lead.

Also of note is architecture: the 5800X3D on average is right there with the 7950X at 1440p and beyond. This leads me to believe that even without pumping wattage into a chip, when given access to more cache, the Zen architecture punches above its weight class.

Depending on the game, the 5800X3D ranged from 6%, 9%, 24%, 30%, even 52% better than the 5800X.

If the 7700/7950X3D enjoys the same 20%-ish boost, then we'll see this battle flip again after CES.

I'll warn you, though, that there is NOT headroom to OC the 3D V-Cache architecture (so far), as the settings are locked out in the mobo BIOS due to purported instability that arises with the cache clock under OC. So if/when the chips come out, if they don't beat the 13900K OC, then Intel has this battle.

Edit:
38600 is approximately 82% of 47000, so... assuming a 20% uplift from the X3D, I'm eager to see how these stack up (not with your money, of course - wait for benchmarks to come out, since they include XIV in most suites now).
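The back-of-the-envelope numbers in this post hold up; a quick check of the ratios (they round to 91%, 82%, and a hypothetical 20% uplift landing just shy of the 13900K score):

```python
# Wattage and fps ratios cited above (Tom's Hardware figures)
print(round(230 / 253 * 100))       # 7950X power vs 13900K max, percent -> 91
print(round(188 / 206 * 100))       # pre-OC fps ratio, percent -> 91

# Score gap and a hypothetical 20% X3D uplift
print(round(38600 / 47000 * 100))   # -> 82
print(round(38600 * 1.20))          # -> 46320, still short of 47000
```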
 