
Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

That's why I'm saying more benchmarks will let us know where Nvidia actually stands.

I don't think you'll get any arguments from anyone that we need more than AotS. :D
 
not to mention game devs won't stupidly cripple their games to punish nvidia users - who are a much larger group than AMD users (as much of an AMD fanboy as i am, i can admit nvidia are more popular)
 
I don't think you'll get any arguments from anyone that we need more than AotS. :D
AotS is good. It puts an upper limit on what gains are to be expected when using async compute heavily.
What we don't know is what happens when async compute is used more sparingly.

Nvidia also claims/implies their pipeline is already used (close) to its fullest without async compute. I'm not sure whether a benchmark can verify that, but I'd surely like someone to shed some light on that area, too.

And, of course, there are those who, like Mussels above, have already decided that if async compute turns out to be just hot air, then it's Nvidia's fault for not letting developers use enough of it in their games ;)
 
not to mention game devs won't stupidly cripple their games to punish nvidia users - who are a much larger group than AMD users (as much of an AMD fanboy as i am, i can admit nvidia are more popular)
As I said, it won't "punish" nvidia users. They just won't get the benefits.
 
And you've never ever seen a single-threaded program beat a multithreaded one because of the multithreading overhead?
That is only the case when the multithreading is done poorly. GPUs are parallel by nature, so multithreading is already ingrained in them. AMD's implementation is more CPU-like than NVIDIA's: GCN can juggle lots of commands inside the GPU simultaneously. As demonstrated by GCN cards, the gains are significant. And FPS isn't the only way to judge async compute either: there's also what they are doing with it. In the case of Ashes of the Singularity, they do physics on the weapons. You're gaining FPS & realism.

Async compute is likely the reason why PS4 and XB1 went with GCN. They could put really crappy CPUs in them because they knew they could hand off a lot of heavy workloads to the GPU with async compute (case in point: physics). Async compute isn't going away. It is the direction GPU and API design has been going for the last decade (OpenCL and DirectCompute). NVIDIA needs to address it because, unlike PhysX, async compute isn't a gimmick. The sad irony is that PhysX could always have been done asynchronously as well, but NVIDIA never bothered to put that effort into their GPUs.
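For anyone wondering what "async compute" actually means at the API level: in DirectX 12 it boils down to submitting compute work (physics, post-processing, etc.) on a separate compute queue alongside the normal graphics (direct) queue, so the GPU is free to overlap the two streams of work if the hardware can. A minimal sketch, assuming an already-created ID3D12Device and with error handling omitted (names here are illustrative, not from any particular engine):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes `device` is an already-created ID3D12Device.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics (DIRECT) queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Separate COMPUTE queue: dispatches submitted here may be scheduled
    // concurrently with graphics work, depending on the hardware/driver.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue synchronization is done with an ID3D12Fence, e.g.
    // computeQueue->Signal(fence, value) / graphicsQueue->Wait(fence, value).
}
```

Whether the overlapping actually buys you anything is exactly the GCN-vs-Maxwell question being argued in this thread; the API only lets you express the parallelism, the hardware decides what to do with it.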
 
nvidia users - who are a much larger group than AMD users
that's only true on the pc master race side, and in the big picture, pc is nothing compared to console. so, yeah... if games are developed with a "console first" methodology, stuff being disabled and/or unavailable for green camp users will happen (which is only fair, "enhanced" physx effects have been a green camp thing only for quite some time)
 
pc is nothing compared to console. so, yeah...

Not quite true. It's a fairly common misconception that console numbers are bigger, especially if we're not only talking gamers. Even those numbers alone are fairly close, IIRC.
 
Must.. Have.. All the benchmarks!
 
Futuremark stated that the benchmark was developed with input from AMD, Intel, and NVIDIA

Presumably nvidia begging for there to be no async, and instead using overly aggressive tessellation? :laugh:
 
Oh, is TPU beating the dead Async horse again?
 
in honour of the gods of async, we have to do it really chaotically and out of order.

so yeah, expect this conversation to be going on for at least another 2-3 years.
 
I wonder how many PS4 and XB1 games use the ACEs.
 
Any ETA on a release date?
 
Oh, is TPU beating the dead Async horse again?

Async horse was just born last year with DX12. Many years to live and reign. Next nVidia GPU gen will have it also.
 
Async horse was just born last year with DX12. Many years to live and reign. Next nVidia GPU gen will have it also.
As in quoting AOTS benchmarks over and over in an endless battle that means squat really. The tech itself isn't a dead horse, it's something we should have had for a while. The debate is a dead horse.
 
i have a 290 and a 970 here on very similar systems, so i'll happily compare AMDpples to Nvoranges when this is out


I'd be interested in seeing it, however it will mean nothing really, being a gamer and not a benchmarker.
 
Is it just me or does the scene look completely washed out?
 