Wednesday, July 4th 2018

AMD Beats NVIDIA's Performance in the Battlefield V Closed Alpha

A report via PCGamesN points to some... interesting performance positioning between NVIDIA and AMD offerings. Battlefield V is being developed by DICE in collaboration with NVIDIA, but there seems to be some sand in the gears of performance optimization as of now. I say this because, according to the report, AMD's RX 580 8 GB graphics card (the only red GPU to be tested) bests NVIDIA's GTX 1060 6 GB, and by quite a considerable margin at that.

The performance difference across both 1080p and 1440p scenarios (with Ultra settings) hovers around the 30% mark, and as has usually been the case, AMD's offerings pull further ahead of NVIDIA's when the renderer is switched to DX12: AMD's cards stay roughly consistent or lose a little performance under DX12, but NVIDIA's GTX 1060 consistently delivers worse performance. Perhaps we're witnessing some remnants of AMD's old collaboration efforts with DICE? Still, it's too early to cry wolf right now; performance will likely only improve between now and the October 19th release date.
Source: PCGamesN
Add your own comment

219 Comments on AMD Beats NVIDIA's Performance in the Battlefield V Closed Alpha

#1
FordGT90Concept
"I go fast!1!11!1!"
efikkan said:
The majority of top PC games are console ports, and the console sales still makes up much of the sales for many of these developers, which is why there are more AMD partner games than there are Nvidia partner games in this segment.
Those console ports are still developed on NVIDIA hardware. As a result, it's well optimized for Windows from the start. They have to optimize for the target consoles in order to get qualified. AMD desktop cards rarely get an optimization pass unless there are major problems. There are exceptions, like Deus Ex: Mankind Divided and Hitman, which were developed in collaboration with AMD.

Bear in mind that even though consoles tend to sell more copies of a game, publishers make more money per sale on PC because profit margins are much better (no qualifying, distributors take a smaller cut, no need to produce and ship physical media to stores, patches are free to push, etc.).

efikkan said:
At this point games should be developed for Direct3D 12 or Vulkan exclusively; there is no point in supporting pre-Fermi and pre-GCN for new top titles, and pre-GCN cards dropped driver support a while ago anyway. Windows 7/8 support is probably the only reason to have legacy support, but if by doing so you have to design a bad engine, then you should only support the old API.
Neither Unreal Engine 4 nor Unity officially supports D3D12 or Vulkan yet. The majority of PC games are built on those engines. The reason Vulkan/D3D12 support is sparse is that they are a huge paradigm shift from OpenGL/D3D11: the entire renderer has to be rewritten. I'd argue most games out today that support Vulkan/D3D12 are half-assed implementations. It'll be years yet before we see games that fully exploit the technology.
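The renderer rewrite described above can be sketched roughly like this (pseudocode only; the method names are loosely abbreviated from the real D3D11/D3D12 APIs and the signatures are simplified for illustration):

```
// D3D11-style: the driver hides hazards, residency and submission
// behind an immediate context.
context->VSSetShader(shader);
context->Draw(vertexCount);                // driver tracks state and syncs

// D3D12/Vulkan-style: the engine records, synchronizes and submits itself.
cmdList->ResourceBarrier(toRenderTarget);  // engine tracks resource states
cmdList->SetPipelineState(pso);            // full pipeline baked up front
cmdList->DrawInstanced(vertexCount);
queue->ExecuteCommandLists(cmdList);
fence->WaitForFrame(frameIndex - 2);       // explicit CPU/GPU synchronization
```

None of the driver-side hand-holding in the first half exists in the second, which is why a D3D11 renderer can't simply be ported call-for-call.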
Posted on Reply
#2
cucker tarlson
Lol, I remember people claiming the 290 would catch up with or even beat the 980 Ti back in 2016 because they saw a 3DMark draw calls benchmark :roll: Looks like DX11 ain't going anywhere. There's one thing I'm convinced about: once nvidia wants to really invest in DX12/Vulkan, it's going to boom. They've been comfortable sitting on DX11 because their performance is good. We need to wait for nvidia to decide they need to tap into DX12/Vulkan to make new-gen cards faster before we see a breakthrough.
Posted on Reply
#3
efikkan
FordGT90Concept said:
Those console ports are still developed on NVIDIA hardware. As a result, it's well optimized for Windows from the start. They have to optimize for the target consoles in order to get qualified. AMD desktop cards rarely get an optimization pass unless there's major problems. There are exceptions like Deus Ex: Mankind Divided and Hitman which were developed in collaboration with AMD.
Console games are developed and debugged on AMD hardware. By the time they are tested on Nvidia hardware, the games are already implemented, and the rendering pipeline is not redesigned unless it has major problems.

FordGT90Concept said:

Bear in mind that even though consoles tend to sell more copies of a game, publishers make more money per sale on PC because profit margins are much better (no qualifying, distributors take a smaller cut, no need to produce and ship physical media to stores, patches are free to push, etc.).
That depends. Top titles generally sell much more on console, and many top publishers earn much more from console sales despite the high fees from console makers. The PC market, on the other hand, is much more diversified, and for the most part less focused on a few popular titles.

FordGT90Concept said:

Neither Unreal Engine 4 nor Unity officially supports D3D12 or Vulkan yet. The majority of PC games are built on those engines. The reason Vulkan/D3D12 support is sparse is that they are a huge paradigm shift from OpenGL/D3D11: the entire renderer has to be rewritten. I'd argue most games out today that support Vulkan/D3D12 are half-assed implementations. It'll be years yet before we see games that fully exploit the technology.
You're right about many titles using Unreal, but Unity is mostly used for "shovelware" titles, and very few or none of those are relevant when it comes to good performance.

Unity will probably never benefit properly from Direct3D 12 or Vulkan, since the rendering pipeline has to be tailored to the game to fully utilize the potential in the new APIs. Support will probably arrive, but it will suck as much as before.

cucker tarlson said:
Lol, I remember people claiming the 290 would catch up with or even beat the 980 Ti back in 2016 because they saw a 3DMark draw calls benchmark :roll: Looks like DX11 ain't going anywhere. There's one thing I'm convinced about: once nvidia wants to really invest in DX12/Vulkan, it's going to boom. They've been comfortable sitting on DX11 because their performance is good. We need to wait for nvidia to decide they need to tap into DX12/Vulkan to make new-gen cards faster before we see a breakthrough.
Yes, we are always promised that AMD hardware is "better", you just don't see it yet. Well, most people waiting for their 2xx/3xx series to unveil their benefits have already moved on, or will have by the time the next Nvidia cards arrive shortly.:rolleyes:

This is also the problem with purely synthetic benchmarks, even more so with benchmarks that only measure one tiny aspect of rendering. These benchmarks are just misleading to buyers; what does the average buyer know about "draw calls"? And displaying an edge case is just ridiculous, especially since dummy draw calls have little to do with what the hardware can do in an actual scene.
Posted on Reply
#4
mtcn77
cucker tarlson said:
Lol, I remember people claiming the 290 would catch up with or even beat the 980 Ti back in 2016 because they saw a 3DMark draw calls benchmark :roll: Looks like DX11 ain't going anywhere. There's one thing I'm convinced about: once nvidia wants to really invest in DX12/Vulkan, it's going to boom. They've been comfortable sitting on DX11 because their performance is good. We need to wait for nvidia to decide they need to tap into DX12/Vulkan to make new-gen cards faster before we see a breakthrough.
GCN needs scalarization of code in order to launch multiple wavefronts, so it could go either way unless the optimization path that consoles take is also available on Windows.
Posted on Reply
#5
Th3pwn3r
Instead of this being RX 580 vs 1060, you fanboys made it AMD vs Nvidia. BOTH cards suck! Let's just leave it at that. I still blame the title for starting this mess.
Posted on Reply
#6
MuhammedAbdo
Ryzen CPUs (as usual :shadedshu:) are the cause of the disparity in results:

See, PCGamesN and Hardwareluxx were both using processors less powerful than the one used by Sweclockers. PCGamesN was using a Ryzen 2700X while Hardwareluxx used an AMD Threadripper 1950X processor – which is decidedly not clocked for gaming purposes. Sweclockers, on the other hand, used the king of all gaming CPUs: the Core i7-8700K. They even benchmarked the processors to further elaborate on this reasoning:



As you can see, the difference between the Ryzen 7 2700X (which PCGamesN used) and the Core i7-8700k is very significant. In fact, this is probably the sole reason why we see AMD cards pushing ahead of the GTX 1080 Ti against all odds and why we see the 1080 Ti maintaining a clear lead in the Sweclocker results of the same settings and same resolution. In other words, once you remove the CPU bottleneck from the equation, it looks like the GTX 1080 Ti is still king.



https://wccftech.com/battlefield-v-closed-alpha-benchmarks/
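The CPU-bottleneck reasoning quoted above can be illustrated with a toy model (the numbers below are hypothetical, chosen for illustration rather than measured): frame rate is roughly capped by whichever of the CPU or GPU takes longer per frame, so a faster GPU shows no gain while the CPU is the limiter.

```python
# Toy frame-time model: each frame costs max(cpu_ms, gpu_ms), since the
# GPU can't render frames faster than the CPU can feed it work.
# All per-frame costs below are made up, purely for illustration.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when CPU and GPU work cannot fully overlap."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Slower CPU (heavier per-frame overhead): the GPU choice barely matters.
print(fps(cpu_ms=12.0, gpu_ms=9.0))   # CPU-bound, ~83 fps
print(fps(cpu_ms=12.0, gpu_ms=7.0))   # faster GPU, still ~83 fps

# Faster CPU: the bottleneck moves to the GPU and the cards separate.
print(fps(cpu_ms=6.0, gpu_ms=9.0))    # ~111 fps
print(fps(cpu_ms=6.0, gpu_ms=7.0))    # ~143 fps
```

Under this model, swapping in a faster GPU on the slow-CPU configuration changes nothing, which matches the claim that the 1080 Ti only pulls ahead once the CPU bottleneck is removed.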
Posted on Reply
#7
cucker tarlson
I think the cause of the disparity was the author's stupidity, not Ryzen. I can't believe how many small-time reviewers are shilling for AMD these days; that's why it's always best to take info from big, reputable sites like PCGH, ComputerBase or TPU (if they do their own testing).
Posted on Reply
#8
INSTG8R
Yet here you and Muhammed are, shilling for Green constantly... Keep up the good "Green Fight". Funny to see so much propaganda over 2 FPS...
Posted on Reply
#9
medi01
dj-electric said:
AMD beats nvidia at something, lets make a news piece.
FFS, AMD regularly "beats nvidia at something", but it doesn't happen that often that AMD beats nVidia by a whopping 30% in an nVidia-sponsored game.

efikkan said:
Yes, we are always promised that AMD hardware are "better", you just don't see it yet.
It's hard to see with eyes wide shut.

MuhammedAbdo said:
and Hitman DX12. So don't worry, pretty soon NVIDIA will be ahead come final release.
So, nVidia got ahead in DX12 Hitman?

Why does a minor alpha game benchmark force you to start spitting nonsense, pretty please? Why be butthurt for Huang at all?
Posted on Reply
#10
MuhammedAbdo
medi01 said:
So, nVidia got ahead in DX12 Hitman?
I didn't say ahead; I said they improved their DX12 performance to the point that they are matching AMD.


https://techreport.com/review/32766/nvidia-geforce-gtx-1070-ti-graphics-card-reviewed/7
https://www.hardware.fr/articles/971-9/benchmark-hitman.html

It must be a shock to you, doesn't it? Here is another shock: remember Ashes of the Singularity?



https://www.hardware.fr/articles/971-11/benchmark-ashes-of-the-singularity.html
https://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/5

Too bad for you right? Remember Forza 7?



https://techreport.com/review/32766/nvidia-geforce-gtx-1070-ti-graphics-card-reviewed/3

What's that? The 1060 is 25% faster than the RX 580 in a DX12 game? No way! Let's make this a news piece!

Stay in your land of denial please ..
Mic dropped ..
Posted on Reply
#11
cucker tarlson
Well, actually the 1080 beats the V64 in Hitman DX12 @1440p

https://www.purepc.pl/karty_graficzne/test_colorful_igame_geforce_gtx_1070u_chinski_przepis_na_pascala?page=0,15

although it's splitting hairs about which one is faster.

Muhammed is on point here. First and foremost, there are a lot of DX12 games that run really well on nvidia. Can't tell specifically, but I'd be surprised if, gathering up all the DX12 games on the market, the gap from the 1080 to the V64 was more than +/- 5%. Second of all, nvidia did make reasonable progress optimizing typically-AMD games like Hitman and Doom, to the point that AMD no longer wins clearly in any of them, if they win at all. Most of it came from reducing CPU overhead, because GeForce cards used to get worse performance at launch in games that use DX12 features. Since AMD has a hardware implementation, it ran fine at launch. Nvidia had to take time for optimizations; it took longer in the past, but nowadays they can deal with it pretty quickly.
Posted on Reply
#12
MuhammedAbdo
cucker tarlson said:
although it's splitting hairs about which one is faster.
Exactly ..

Though move up to 1440p and the 1080 is ahead of the Vega 64 by a small margin, after driver upgrades:

https://www.purepc.pl/karty_graficzne/test_nvidia_geforce_gtx_1070_ti_nemezis_amd_radeon_rx_vega_56?page=0,15

Use a newer review and you will find the 1080 is 7% faster at 1080p:

https://www.purepc.pl/karty_graficzne/test_sapphire_radeon_rx_vega_64_nitro_niereferencyjna_vega?page=0,15
Posted on Reply
#13
cucker tarlson
Good find, the newer one uses 388.59, the older one uses 388.13.
Posted on Reply
#15
cucker tarlson
They did a lot for Vulkan and DX12 in 2017. Funny how the people who say nvidia buyers are ignorant because AMD gets better with time are the ones who didn't even know nvidia caught up to AMD in Hitman/Doom/AotS a long time ago. You can tell them a thousand times; the next day they'll be out again with the same arguments.
Posted on Reply
#16
mtcn77
Again, when did Nvidia stop making frametimes the headline, and why is the fps counter back in the spotlight? I guess it gets rephrased however it fits the purpose...
Posted on Reply
#17
wiak
Vya Domus said:
Meh , because of the way Nvidia's hardware works they always need more driver work. Can't say this means much.
nah, they need to add gimpworks, then nvidia is ahead and everything goes 50% slower
Posted on Reply
#18
StrayKAT
FordGT90Concept said:
Those console ports are still developed on NVIDIA hardware. As a result, it's well optimized for Windows from the start. They have to optimize for the target consoles in order to get qualified. AMD desktop cards rarely get an optimization pass unless there's major problems. There are exceptions like Deus Ex: Mankind Divided and Hitman which were developed in collaboration with AMD.

Bear in mind that even though consoles tend to sell more copies of a game, publishers make more money per sale on PC because profit margins are much better (no qualifying, distributors take a smaller cut, no need to produce and ship physical media to stores, patches are free to push, etc.).


Neither Unreal Engine 4 nor Unity officially supports D3D12 or Vulkan yet. The majority of PC games are built on those engines. The reason Vulkan/D3D12 support is sparse is that they are a huge paradigm shift from OpenGL/D3D11: the entire renderer has to be rewritten. I'd argue most games out today that support Vulkan/D3D12 are half-assed implementations. It'll be years yet before we see games that fully exploit the technology.
Really? Mankind Divided is still the most demanding game I have. I can't see it running at max except with a 1080 Ti.
Posted on Reply
#20
Vya Domus
How about we let this cancer of a thread die already.
Posted on Reply
#21
Caring1
seb777 said:
AMD Beats NVIDIA's Performance ????? Mmmmmmm ..bit dramatic title for this article, how can a GTX1080(i) not smoke both of these cards I dont get it ???
Reading or subtlety not your specialties either?
Posted on Reply
#22
x86overclock
the54thvoid said:
Yup, that's the compute for you.

Once the game is closer to release, I'm quite sure Nvidia driver team will be working their usual magic.
I'm positive they will work their magic, but I bet when they do, the Radeon 580 will drop in performance. After all, this is Nvidia working with DICE, and this always happens.

MuhammedAbdo said:
The fruits of such efforts can be found in games like Ashes of Singularity Escalation, where a 1080 now easily beats a Vega 64!



https://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/5
The Vega 64 loses because Oxide removed Microsoft DX12's Asynchronous Compute instruction and replaced it with Nvidia's proprietary Simulated Asynch Compute instruction. This was done a month after the launch of the first Ashes of the Singularity because the Geforce cards did very poorly. Nvidia had asked Oxide to implement their instruction because their hardware was incompatible with DX12's Asynchronous Compute and Nvidia insisted that it also improved Radeon's performance which ended up being completely false. After the Nvidia Asynch Compute implementation Radeon cards saw a 17% loss in performance while the Geforce cards improved performance by 8%. DX12's Asynchronous Compute is not implemented in most games, instead most games use Nvidia's simulated Asynch Compute because those games are developed on Nvidia hardware and they have to use Cuda which automatically implements Nvidia optimizations including their simulated Asynch Compute.
Posted on Reply
#23
Jelle Mees
MuhammedAbdo said:
Ryzen CPUs (as usual :shadedshu:) are the cause of the disparity in results:

See, PCGamesN and Hardwareluxx were both using processors less powerful than the one used by Sweclockers. PCGamesN was using a Ryzen 2700X while Hardwareluxx used an AMD Threadripper 1950X processor – which is decidedly not clocked for gaming purposes. Sweclockers, on the other hand, used the king of all gaming CPUs: the Core i7-8700K. They even benchmarked the processors to further elaborate on this reasoning:



As you can see, the difference between the Ryzen 7 2700X (which PCGamesN used) and the Core i7-8700k is very significant. In fact, this is probably the sole reason why we see AMD cards pushing ahead of the GTX 1080 Ti against all odds and why we see the 1080 Ti maintaining a clear lead in the Sweclocker results of the same settings and same resolution. In other words, once you remove the CPU bottleneck from the equation, it looks like the GTX 1080 Ti is still king.



https://wccftech.com/battlefield-v-closed-alpha-benchmarks/
Actually, I would love to see more reviewers using mid-range budget CPUs. It would reflect a more realistic gaming experience for the customer. The reality is that more gamers have a CPU that comes close to Ryzen 2700X performance than have a CPU with 8700K performance.

People see a GTX 1060 review. They see it getting 60+ fps in many games. Then they put it in their PC and encounter frame drops to 40 fps, because all reviewers test these cards on one of the fastest CPUs.
Posted on Reply
#24
MuhammedAbdo
x86overclock said:
The Vega 64 loses because Oxide removed Microsoft DX12's Asynchronous Compute instruction and replaced it with Nvidia's proprietary Simulated Asynch Compute instruction. This was done a month after the launch of the first Ashes of the Singularity because the Geforce cards did very poorly. Nvidia had asked Oxide to implement their instruction because their hardware was incompatible with DX12's Asynchronous Compute and Nvidia insisted that it also improved Radeon's performance which ended up being completely false. After the Nvidia Asynch Compute implementation Radeon cards saw a 17% loss in performance while the Geforce cards improved performance by 8%. DX12's Asynchronous Compute is not implemented in most games, instead most games use Nvidia's simulated Asynch Compute because those games are developed on Nvidia hardware and they have to use Cuda which automatically implements Nvidia optimizations including their simulated Asynch Compute.
That's a load of horsecrap. None of this happened, and Oxide challenged NVIDIA and refused their involvement in any way in their demos, which are heavily subsidized by AMD and remain so to this day.

No developer implements Async Compute because it's a pain in the ass to get it working and supported on most architectures, and the gains are limited most of the time anyway.
Posted on Reply
#25
jabbadap
MuhammedAbdo said:
That's a load of horsecrap. None of this happened, and Oxide challenged NVIDIA and refused their involvement in any way in their demos, which are heavily subsidized by AMD and remain so to this day.

No developer implements Async Compute because it's a pain in the ass to get it working and supported on most architectures, and the gains are limited most of the time anyway.
Afaik even nvidia implements DirectX async compute in their Nvidia FleX particle-based simulations. But it's not a silver bullet for anything. Oxide's AotS is the best-case scenario for async compute; in a different type of game, the performance benefit from async compute is much, much smaller.
Posted on Reply
Add your own comment