
AMD 8-core ZEN Packs a Wallop with Multithreaded Performance

Got another way to explain constant bad console ports?
Sure. What I don't have is time to teach you advanced programming.
 
Sure. What I don't have is time to teach you advanced programming.

Apparently no one has time to teach it. HENCE THE BAD CONSOLE PORTS.
 
Apparently no one has time to teach it. HENCE THE BAD CONSOLE PORTS.
Well, if you're so lazy you won't even bother reading about it, I don't see why you think it's a good idea to criticize others for not learning it (though "not learning" is entirely your unsubstantiated opinion).
Here's something that explains things a bit: https://forums.finalgear.com/entert...n-difficulties-of-ps3-game-programming-22510/

Also, do you remember the thread about EVGA 1080 FTW VRM issues where you said people that don't know cars shouldn't make car analogies? How about you eat your own dog food and stop judging work you don't understand?
 
If the errata bug is true, nothing that was said after matters. A 30-40% performance difference makes them only as fast as Steamroller.
They'd better not release it in that state if it's true... I suspect a delay is going to be announced.
 
Well, if you're so lazy you won't even bother reading about it, I don't see why you think it's a good idea to criticize others for not learning it (though "not learning" is entirely your unsubstantiated opinion).
Here's something that explains things a bit: https://forums.finalgear.com/entert...n-difficulties-of-ps3-game-programming-22510/

XB1 and PS4 use AMD Jaguar hardware with two modules and four cores on each module. I've no idea what constitutes a "core" here, though (whether it's a Bulldozer-style core/module or a more conventional core), but multithreading in multiplatform games should most definitely be more of a thing now. Especially for XB1, which just runs Windows. I wouldn't call the devs lazy though, because I assume it is mostly a pressure/time/money thing.
 
Well, if you're so lazy you won't even bother reading about it, I don't see why you think it's a good idea to criticize others for not learning it (though "not learning" is entirely your unsubstantiated opinion).
Here's something that explains things a bit: https://forums.finalgear.com/entert...n-difficulties-of-ps3-game-programming-22510/

Also, do you remember the thread about EVGA 1080 FTW VRM issues where you said people that don't know cars shouldn't make car analogies? How about you eat your own dog food and stop judging work you don't understand?

The exact thing you linked is what I mentioned as proof that they could program things to work well if they tried, which is what the developer there kicked out. Also, they are talking about the PS3 with the stupid "Cell" processor, not the x86-64 CPUs used in the current consoles and the current PC market. Once again, ports should be pretty damn boss hoss now, considering the Xbox uses Windows 10 as its base code. It has been roughly 10 years since the first quad-core x86 CPU was released. You are telling me that in 10 years coders still haven't learned how to get games to scale past 4 cores?

Oh wait, some of the games do, further proving they can and choose not to.

There’s a ton of data spread across the preceding pages. But it’s largely distillable to a handful of sweeping conclusions.

First, it’s increasingly clear that dual-threaded CPUs are no longer the way to go. Yes, our experiment exaggerates their limitations with an ultra-high-end graphics card. However, a great many developers specify quad-core processors in their minimum requirements. The Xbox One and PlayStation 4 both expose eight threads, and cross-platform games utilize engines written to exploit the parallelism afforded by those consoles.

http://www.tomshardware.com/reviews/multi-core-cpu-scaling-directx-11,4768.html

Like has been said, they can, they just don't. Maybe lazy isn't the right word; it might be what @Frick is saying about pressure and timelines, but I personally don't care. If I am going to pay $60 a license for a game, I expect it to perform better than a $5 indie game on my cellphone, which can use all 4 high-speed cores on my ARM chip just fine.

XB1 and PS4 use AMD Jaguar hardware with two modules and four cores on each module. I've no idea what constitutes a "core" here, though (whether it's a Bulldozer-style core/module or a more conventional core), but multithreading in multiplatform games should most definitely be more of a thing now. Especially for XB1, which just runs Windows. I wouldn't call the devs lazy though, because I assume it is mostly a pressure/time/money thing.
Jaguar is a traditional x86-64 core with 1 FPU per integer core.
 
Well, if you're so lazy you won't even bother reading about it, I don't see why you think it's a good idea to criticize others for not learning it (though "not learning" is entirely your unsubstantiated opinion).
Here's something that explains things a bit: https://forums.finalgear.com/entert...n-difficulties-of-ps3-game-programming-22510/

Also, do you remember the thread about EVGA 1080 FTW VRM issues where you said people that don't know cars shouldn't make car analogies? How about you eat your own dog food and stop judging work you don't understand?

That's a PS3 article. I won't bother reading it in reference, as a programmer myself, because I know for a fact how different the Cell architecture was from conventional x86-64. It didn't even do out-of-order execution, and it had tons of cores at the time that were NEEDED for acceptable performance, etc. Whole different ballpark.

I'm surprised you even linked that if you "know advanced programming." What do you actually know? C#? Java? o_O

I would suggest not claiming to know "advanced programming" until you've at least dabbled in some C++ or better yet, various assembler languages for some real fun.
 
That's a PS3 article. I won't bother reading it in reference, as a programmer myself, because I know for a fact how different the Cell architecture was from conventional x86-64. It didn't even do out-of-order execution, and it had tons of cores at the time that were NEEDED for acceptable performance, etc. Whole different ballpark.

I'm surprised you even linked that if you "know advanced programming." What do you actually know? C#? Java? o_O

I would suggest not claiming to know "advanced programming" until you've at least dabbled in some C++ or better yet, various assembler languages for some real fun.
Well, the guy thinks more cores means everything should run faster by default. Cell is a prime example that, while that's possible, in practice it works so "well" that for the PS4 Sony had to go with something else instead. First, because multithreaded code is always more expensive and second, because programs can easily be split across a few cores, but scaling them beyond that is both difficult and unnatural.

Also, while I did both C++ and asm back in the day, neither is a requirement for understanding how to do a fork/join or a map/reduce. Fwiw, I once heard a guy claim that if you knew asm, you'd realize threading is a scam and anything can and should be done on a single thread. Me, I always say there are a million ways to write bad code and only a handful to get it right (and then the discussion moves back to cost).
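For anyone unfamiliar with the terms, here is a minimal sketch (not from any poster's code) of the fork/join and map/reduce patterns mentioned above: split the work, run the slices concurrently, then combine the partial results. ThreadPoolExecutor is used for brevity; in CPython the GIL means a process pool or native threads would be needed for real CPU-bound speedup, so this only illustrates the shape of the code.

```python
# Sketch of fork/join + map/reduce: split input, process slices in a pool,
# then reduce the partial results back into one answer.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

def partial_sum(chunk):
    # "map" step: each worker reduces one independent slice
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]   # fork: split the input
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)          # map: run slices concurrently
    return reduce(operator.add, partials, 0)              # join/reduce: combine results

print(parallel_sum_of_squares(list(range(1000))))  # same answer as the serial loop
```

The point of the post stands either way: the pattern itself is simple, and the hard part is deciding whether the bookkeeping cost is worth it for the workload.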
 
Well, the guy thinks more cores means everything should run faster by default. Cell is a prime example that, while that's possible, in practice it works so "well" that for PS4 Sony had to go with something else instead. First, because multithreaded code is always more expensive and second, because programs can be easily split to a few cores, but scaling them beyond that is both difficult and unnatural.

The guy thinks things should utilize cores better. Your own "cost" argument is why the PS4 has an AMD custom chip in it: multithreading is more expensive, but worth it.
 
This is the part I am getting at. With a good API you don't need anything special; the GPU does the work like it should. This is what all proper coding should look like.

2 of the 3 are DX11. BF1 has known DX12 issues. None of the benches show a 6700K for comparison. Dishonored 2 shows that a 480 is extremely CPU-dependent.
 
The guy thinks things should utilize cores better. Your own "cost" argument is why the PS4 has an AMD custom chip in it: multithreading is more expensive, but worth it.
Ok, I see whatever I say, it does not register. I give up.
 
Ok, I see whatever I say, it does not register. I give up.

I read and understood what you said, but it doesn't change the fact that there is already a multitude of games showing scaling beyond 4 cores, typically tapering off at 8, which is completely understandable. Everyone here knows that it will not scale infinitely; that being said, consumer chips have expanded past 4 cores. Your argument is a joke. "It's hard"? Good for you, go cry in a safe place. 10 years since the first quad-core x86 hit the market, we should be more than capable of threading past that. Hell, they have figured out how to get graphics to scale properly across multiple cards...
 
Whatever happened with the multi-GPU function in DX12 that W1zzard tested in AotS a while back? I thought that was going to make CrossFire and SLI obsolete, but I haven't heard anything since that article.

I would find it interesting if I could get a boost putting my 390 and 1070 together: a 1460!!! lol

I found the idea interesting with the 1060 not supporting SLI: whether DX12 could still use both, though I'm sure NVIDIA would lock it out pretty quick.
 
About that whole "moar cores isn't the way to go for gaming": seeing how single-thread performance has gone up these past few years, I'm not really sure that ignoring 4+ thread development is the way to go if we want more performance. Unless AMD or Intel comes up with a groundbreaking 4-core CPU as fast as the fastest 8-core CPU in every single task, humble person that I am, I don't see how we could expect any huge progress.

Unless there is a way for the GPU to handle every single graphics task, and the CPU to only deal with the AI? Is that even possible?
 
Whatever happened with the multi-GPU function in DX12 that W1zzard tested in AotS a while back? I thought that was going to make CrossFire and SLI obsolete, but I haven't heard anything since that article.

I think that was just MS bull to push Win 10 sales.
 
Whatever happened with the multi-GPU function in DX12 that W1zzard tested in AotS a while back? I thought that was going to make CrossFire and SLI obsolete, but I haven't heard anything since that article.

I would find it interesting if I could get a boost putting my 390 and 1070 together: a 1460!!! lol

I found the idea interesting with the 1060 not supporting SLI: whether DX12 could still use both, though I'm sure NVIDIA would lock it out pretty quick.

Pretty sure the latest Deus Ex patch included explicit multi-adapter support. As for cores, I'm a big believer that, because of consoles, 8 cores (or at least 4+HT) will become the norm for normal performance. Will they be stressing those cores? Doubtful, but anything ported from console is likely to be set up with threading in mind, and a thread executing is always more efficient than a thread switching.
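The "executing beats switching" point boils down to keeping a fixed pool of workers busy instead of spawning a thread per task and paying creation plus context-switch overhead from oversubscription. A hypothetical sketch, assuming nothing beyond the Python standard library:

```python
# Size the pool to the hardware thread count so workers stay executing
# rather than being created, destroyed, and switched between.
import os
from concurrent.futures import ThreadPoolExecutor

WORKERS = os.cpu_count() or 4  # one worker per hardware thread, no oversubscription

def frame_task(n):
    # stand-in for per-frame work (an AI tick, a physics island, etc.)
    return n * n

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(frame_task, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Game engines do the same thing with native job systems: a persistent pool of worker threads pinned roughly to core count, fed small tasks, which is exactly the setup a console port brings with it.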
 
Pretty sure the latest Deus Ex patch included explicit multi-adapter support. As for cores, I'm a big believer that, because of consoles, 8 cores (or at least 4+HT) will become the norm for normal performance. Will they be stressing those cores? Doubtful, but anything ported from console is likely to be set up with threading in mind, and a thread executing is always more efficient than a thread switching.
I'm hoping Zen brings competition. Think it'll be able to clock higher without SMT? For any sort of gaming there is zero reason to have 16 threads; 8, sure. Wonder if you'll even be able to disable SMT...
 