
Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark

Joined
Nov 9, 2008
Messages
2,318 (0.41/day)
Location
Texas
System Name Mr. Reliable
Processor Ryzen R9 5950x
Motherboard MSI Meg X570s Ace Max
Cooling D5 Pump, Singularity Top/Res, 2x360mm EK P rads, EK Magnitude/Alphacool Blocks
Memory 32GB (4x8GB) Corsair Dominator Platinum 3600MHz @ 16/19/20/36 1.35V
Video Card(s) MSI 3080ti with Alphacool Block
Storage 2 x Corsair Force MP400 1TB NVMe; 2 x T-Force Cardea Z340; 2 x Mushkin Reactor 1TB
Display(s) Acer 32" Z321QU 2560x1440; LG 34GP83A-B 34" 3440x1440
Case Lian Li PC-011 Dynamic XL; Synology DS218j w/ 2 x 2TB WD Red
Audio Device(s) SteelSeries Arctis Pro+
Power Supply EVGA SuperNova 850G3
Mouse Razer Basilisk V2
Keyboard Das Keyboard 6; Razer Orbweaver Chroma
Software Windows 10 Pro
No, it proves that the scheduler was unable to saturate those CUs with a single task.
If parallelizing two tasks that require the same resources yields a performance increase, then some resources must have been idling in the first place, because they could not get instructions from the scheduler. Any alternative would be impossible.

The difference is in the way tasks are handed out, and the whole point is to get more instructions to idle shaders. But these are two dramatically different approaches. Nvidia does best with limited async, where instructions run in a mostly serial fashion.

So that is the way nVidia approaches multiple workloads. They have very high granularity in when they are able to switch between workloads. This approach bears similarities to time-slicing, and perhaps also SMT, in being able to switch between contexts down to the instruction level. This should lend itself very well to low-latency scenarios with a mostly serial nature. Scheduling can be done just-in-time.

AMD on the other hand seems to approach it more like a ‘multi-core’ system, where you have multiple ‘asynchronous compute engines’ or ACEs (up to 8 currently), which each processes its own queues of work. This is nice for inherently parallel/concurrent workloads, but is less flexible in terms of scheduling. It’s more of a fire-and-forget approach: once you drop your workload into the queue of a given ACE, it will be executed by that ACE, regardless of what the others are doing. So scheduling seems to be more ahead-of-time (at the high level, the ACEs take care of interleaving the code at the lower level, much like how out-of-order execution works on a conventional CPU).

And until we have a decent collection of software making use of this feature, it’s very difficult to say which approach will be best suited for the real-world. And even then, the situation may arise, where there are two equally valid workloads in widespread use, where one workload favours one architecture, and the other workload favours the other, so there is not a single answer to what the best architecture will be in practice.
Source: https://scalibq.wordpress.com/

This is why Nvidia cards shine so well: today's APIs send out instructions in a mostly serial fashion, where preemption works relatively well. However, the new APIs can also be used with inherently parallel workloads, which is where AMD cards shine.
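To put numbers on that, here's a toy back-of-the-envelope model in Python (all figures invented, no claim to match real hardware) of why a second queue only pays off when shader slots are actually sitting idle:

```python
# Toy model (all numbers invented): a second queue only helps when
# shader slots are actually idle during graphics work.

SLOTS = 64             # shader "lanes" available per cycle (assumption)
GFX_WIDTH = 40         # graphics fills only 40 of the 64 slots each cycle
GFX_CYCLES = 1000      # cycles of graphics work to issue
COMPUTE_WORK = 24_000  # compute items, each occupying 1 slot for 1 cycle

def time_sliced() -> float:
    """Serial/preemption style: one queue at a time, so slots that
    graphics leaves empty stay idle until it finishes."""
    return GFX_CYCLES + COMPUTE_WORK / SLOTS

def parallel_queues() -> float:
    """ACE style: a second queue drops compute into the slots that
    graphics leaves empty within the same cycle."""
    spare = (SLOTS - GFX_WIDTH) * GFX_CYCLES   # idle slot-cycles to reclaim
    leftover = max(0, COMPUTE_WORK - spare)    # compute that didn't fit
    return GFX_CYCLES + leftover / SLOTS

print(f"time-sliced     : {time_sliced():.0f} cycles")     # 1375
print(f"parallel queues : {parallel_queues():.0f} cycles")  # 1000
```

With these made-up numbers the parallel path finishes in 1000 cycles instead of 1375, which is exactly the "some resources had to be idling" argument above.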

Please bear in mind I am not bashing either approach. NV cards are pure muscle, and I love it! But that also comes with a price. AMD's approach of delivering that kind of power without the brute-force route is good for everyone, and is more cost-effective when utilized correctly.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
The difference is in the way tasks are handed out, and the whole point is to get more instructions to idle shaders. But these are two dramatically different approaches. Nvidia does best with limited async, where instructions run in a mostly serial fashion.
When AMD needs a bigger 8602 GFlop/s GPU to match a 5632 GFlop/s GPU, it's clearly an inefficient design. There is no dismissing that. Nvidia has demonstrated that they support async shaders, and it's a design feature of their CUDA architecture.
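For scale, a quick back-of-the-envelope in Python using the throughput figures above (the GFlop/s numbers come from the post; the "same score" assumption is mine):

```python
# Quick ratio using the figures quoted above.
amd_gflops = 8602  # the "bigger" AMD GPU from the post
nv_gflops = 5632   # the Nvidia GPU it merely matches

# If both land on the same benchmark score, Nvidia's work per
# theoretical FLOP is higher by roughly this factor:
print(f"~{amd_gflops / nv_gflops:.2f}x performance per FLOP")  # ~1.53x
```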

Please bear in mind I am not bashing either approach. NV cards are pure muscle, and I love it! But that also comes with a price. AMD's approach of delivering that kind of power without the brute-force route is good for everyone, and is more cost-effective when utilized correctly.
Actually no, AMD is using a more "brute force" approach with many more cores to do the same work, and with a much less sophisticated scheduler to keep them busy. Nvidia has made a much more refined and advanced architecture in order to scale well on any workload, and they have clearly demonstrated that with CUDA.
 
Joined
Apr 18, 2016
Messages
184 (0.06/day)
3DMark doesn't use Asynchronous Compute!!!!

http://steamcommunity.com/app/223850/discussions/0/366298942110944664/


All of the current games supporting asynchronous compute make use of parallel execution of compute and graphics tasks. 3DMark Time Spy supports concurrent execution. That is not the same asynchronous compute....

So yeah... 3DMark does not use the same type of asynchronous compute found in all of the recent game titles. Instead, 3DMark appears to be specifically tailored to show nVIDIA GPUs in the best light possible. It makes use of context switches (good, because Pascal has improved pre-emption) as well as dynamic load balancing on Maxwell, through the use of concurrent rather than parallel asynchronous compute tasks. If parallelism were used, we would see Maxwell taking a performance hit under Time Spy, as nVIDIA admitted in their GTX 1080 white paper and as we have seen from AotS.
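The concurrent-versus-parallel distinction is easiest to see on the CPU side. A minimal Python analogy (CPU-only, nothing GPU-specific): CPython threads interleave CPU-bound work on one core at a time (concurrent), while processes genuinely run it simultaneously (parallel):

```python
# CPU analogy for concurrent vs. parallel execution (not GPU code).
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n: int) -> int:
    """CPU-bound busy work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(pool_cls, label: str) -> None:
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        list(pool.map(burn, [2_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    # Threads: the GIL interleaves the four tasks (concurrent, little speedup).
    timed(ThreadPoolExecutor, "concurrent (threads)")
    # Processes: the four tasks run on separate cores (parallel, real speedup).
    timed(ProcessPoolExecutor, "parallel (processes)")
```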


Sources:

https://www.reddit.com/r/Amd/comments/4t5ckj/apparently_3dmark_doesnt_really_use_any/

http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958

As the user "Mahigan" points out, upcoming games will use asynchronous compute, which allows parallel execution of compute and graphics tasks. The surprise comes when Time Spy's own description says that asynchronous compute is used to overlap rendering passes to a large extent in order to maximize GPU utilization, which is a form of concurrent computing.

Concurrent computing is a form of computation in which several calculations execute during overlapping time periods, concurrently, instead of sequentially (one finishing before the next begins). That is obviously not the asynchronous compute that games like DOOM brag about using to exploit the real potential of an AMD Radeon GPU, in this case under the DirectX 12 API, where the software runs. By not using true asynchronous compute, 3DMark appears to be tailored specifically to show the best possible performance on an Nvidia GPU. It makes use of context switches (which is positive for Pascal and its improved pre-emption) as well as dynamic load balancing on Maxwell, through the use of concurrent rather than parallel asynchronous compute tasks.

AMD's GCN architecture can not only handle these tasks, it improves even further when parallelism is used; DOOM's results under the Vulkan API are proof of that. How? By reducing per-frame latency through parallel execution of graphics and compute tasks. A reduction in per-frame latency means every frame needs less time to be executed and processed. The net gain is a higher frame rate, but 3DMark lacks this. If 3DMark Time Spy had implemented both concurrency and parallelism, a Radeon Fury X would have matched the GeForce GTX 1070 in performance (in DOOM, the Fury X not only matches it, but beats it).

If AMD were executing the same code path as Pascal, it would gain little or even lose performance. This is the reason Bethesda did not enable AMD's asynchronous compute path on Pascal; instead, Pascal will get its own optimized path. That path will also be called "asynchronous compute", leading people to believe it is the same thing when in fact they are two completely different things. Not all implementations of asynchronous compute are equal.

 
Joined
Feb 18, 2013
Messages
2,180 (0.53/day)
Location
Deez Nutz, bozo!
System Name Rainbow Puke Machine :D
Processor Intel Core i5-11400 (MCE enabled, PL removed)
Motherboard ASUS STRIX B560-G GAMING WIFI mATX
Cooling Corsair H60i RGB PRO XT AIO + HD120 RGB (x3) + SP120 RGB PRO (x3) + Commander PRO
Memory Corsair Vengeance RGB RT 2 x 8GB 3200MHz DDR4 C16
Video Card(s) Zotac RTX2060 Twin Fan 6GB GDDR6 (Stock)
Storage Corsair MP600 PRO 1TB M.2 PCIe Gen4 x4 SSD
Display(s) LG 29WK600-W Ultrawide 1080p IPS Monitor (primary display)
Case Corsair iCUE 220T RGB Airflow (White) w/Lighting Node CORE + Lighting Node PRO RGB LED Strips (x4).
Audio Device(s) ASUS ROG Supreme FX S1220A w/ Savitech SV3H712 AMP + Sonic Studio 3 suite
Power Supply Corsair RM750x 80 Plus Gold Fully Modular
Mouse Corsair M65 RGB FPS Gaming (White)
Keyboard Corsair K60 PRO RGB Mechanical w/ Cherry VIOLA Switches
Software Windows 11 Professional x64 (Update 23H2)
gonna upgrade my LAN Party rig's OS to Win 10 I guess... if I wanna bench with this new feature. =/ Will post the results 2morrow.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
AMD's GCN architecture can not only handle these tasks, it improves even further when parallelism is used; DOOM's results under the Vulkan API are proof of that. How? By reducing per-frame latency through parallel execution of graphics and compute tasks. A reduction in per-frame latency means every frame needs less time to be executed and processed. The net gain is a higher frame rate, but 3DMark lacks this. If 3DMark Time Spy had implemented both concurrency and parallelism, a Radeon Fury X would have matched the GeForce GTX 1070 in performance (in DOOM, the Fury X not only matches it, but beats it).
None of that made any sense at all.
You clearly don't understand how a GPU processes a pipeline. Each queue needs to have as few dependencies as possible, otherwise they will just stall, rendering the splitting of queues pointless. Separate queues can do physics (1), particle simulations (2), texture (de)compression, video encoding, data transfers and similar. (1) and (2) are computationally intensive and utilize the same hardware resources as rendering, and having multiple queues compete for the same resources introduces overhead and clogging. So if a GPU is going to get a speedup for (1) or (2), it needs to have a significant amount of such resources idling. If a GPU with a certain pipeline is ~98% utilized, and the overhead of splitting off some of the "compute" tasks is ~5%, then you get a net loss of ~3% from splitting the queues. This is the reason most games do, and many games will continue to, disable async shaders on Nvidia hardware. But in cases where more resources are idling, splitting might help a bit, as shown in the new 3DMark.
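That utilization-minus-overhead arithmetic is simple enough to sketch in Python (a toy model using the numbers from the paragraph above; the 80% case is an invented contrast):

```python
# Net effect of splitting compute into a second async queue:
# roughly the idle fraction a second queue can reclaim, minus the
# overhead the split itself introduces.

def net_gain(utilization: float, overhead: float) -> float:
    """Fraction of throughput gained (positive) or lost (negative)."""
    idle = 1.0 - utilization   # resources a second queue could fill
    return idle - overhead

print(f"98% utilized, 5% overhead: {net_gain(0.98, 0.05):+.0%}")  # ~ -3%
print(f"80% utilized, 5% overhead: {net_gain(0.80, 0.05):+.0%}")  # ~ +15%
```

The sign flip between the two cases is the whole argument: whether async helps depends on how much the pipeline was idling to begin with.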
 
Joined
Aug 12, 2012
Messages
616 (0.14/day)
Location
Nebulas
System Name X99
Processor 5930K @ 4.7GHz @ 1.323v
Motherboard Rampage V Edition 10
Cooling EK
Memory Dominator Platinum 32GB
Video Card(s) 2x Gigabyte xtreme gaming 980ti
Storage Samsung 950 Pro M.2, 850 Pro & WD320
Display(s) Tempest X270OC @100Hz
Case Thermaltake Core P5
Audio Device(s) On-board
Power Supply 120-G2-1600-X1
Mouse Mamba 2012
Keyboard K70
Software Win10
Benchmark Scores http://www.3dmark.com/fs/6823139
Anyone else getting an error when trying to run the Time Spy benchmark? I am using the newest Nvidia driver with two 980 Tis.
 
Joined
Apr 12, 2010
Messages
1,359 (0.27/day)
Processor Core i7 920
Motherboard Asus P6T v2
Cooling Noctua D-14
Memory OCZ Gold 1600
Video Card(s) Powercolor PCS+ 5870
Storage Samsung SpinPoint F3 1 TB
Display(s) Samsung LE-B530 37" TV
Case Lian Li PC-B25F
Audio Device(s) N/A
Power Supply Thermaltake Toughpower 700w
Software Windows 7 64-bit
So what we are discussing is Nvidia paying Futuremark to release a benchmark that is more favourable to its cards by going easy on asynchronous compute functions in order to attempt to divert attention away from the fact that its cards cannot deal with asynchronous compute at the hardware level and perform worse than AMD's offerings in this regard, is that a fair summary?
 
Joined
Dec 22, 2011
Messages
3,890 (0.86/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600MHz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11
So what we are discussing is Nvidia paying Futuremark to release a benchmark that is more favourable to its cards by going easy on asynchronous compute functions in order to attempt to divert attention away from the fact that its cards cannot deal with asynchronous compute at the hardware level and perform worse than AMD's offerings in this regard, is that a fair summary?

I suspect it more than likely boils down to AMD fans getting defensive again, it's clear you guys hate a free market.
 
Joined
Apr 12, 2010
Messages
1,359 (0.27/day)
Processor Core i7 920
Motherboard Asus P6T v2
Cooling Noctua D-14
Memory OCZ Gold 1600
Video Card(s) Powercolor PCS+ 5870
Storage Samsung SpinPoint F3 1 TB
Display(s) Samsung LE-B530 37" TV
Case Lian Li PC-B25F
Audio Device(s) N/A
Power Supply Thermaltake Toughpower 700w
Software Windows 7 64-bit
I suspect it more than likely boils down to AMD fans getting defensive again, it's clear you guys hate a free market.

You are wrong on two counts: I am not an AMD fan and market dominance does not necessarily equate to better quality or a more sensible purchase.
 
Joined
Jul 13, 2016
Messages
2,826 (1.00/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
I suspect it more than likely boils down to AMD fans getting defensive again, it's clear you guys hate a free market.

It's totally fair game to discuss a benchmark that supposedly uses async compute when its performance numbers don't match up with what we've seen from other DX12 and Vulkan titles. Every DX12 and Vulkan title with proper async compute that we've seen to date has shown AMD gaining a large improvement in performance.

Free market? Since when have the CPU and GPU markets been free? Intel has been using monopoly tactics against AMD since the start (and was even forced to pay a small amount in court because of it), and Nvidia is only a little better with its GameWorks program. Screwing over its own customers and AMD video cards is "the way it's meant to be paid" by Nvidia.
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
One of the developers for it said this

FM_Jarnis said:
Yes it is. There are no "real" from-the-ground-up DX12 engine games out there yet. Well, except Ashes of the Singularity and maybe Quantum Break (not sure about that).

Don't get too Real Housewife on us now.
 
Joined
Dec 22, 2011
Messages
3,890 (0.86/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600MHz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11
You are wrong on two counts: I am not an AMD fan and market dominance does not necessarily equate to better quality or a more sensible purchase.

Good for you on both counts.

It's totally fair game to discuss a benchmark that supposedly uses async compute when its performance numbers don't match up with what we've seen from other DX12 and Vulkan titles. Every DX12 and Vulkan title with proper async compute that we've seen to date has shown AMD gaining a large improvement in performance.

Free market? Since when have the CPU and GPU markets been free? Intel has been using monopoly tactics against AMD since the start (and was even forced to pay a small amount in court because of it), and Nvidia is only a little better with its GameWorks program. Screwing over its own customers and AMD video cards is "the way it's meant to be paid" by Nvidia.

You see, it's rants like this that prove me right.
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
Well, what we need now is a benchmark or software that takes advantage of multi-engine not just for performance improvements, but uses that extra performance to let the devs add additional detail/visuals to the game.
 
Joined
Oct 1, 2013
Messages
250 (0.06/day)
So what we are discussing is Nvidia paying Futuremark to release a benchmark that is more favourable to its cards by going easy on asynchronous compute functions in order to attempt to divert attention away from the fact that its cards cannot deal with asynchronous compute at the hardware level and perform worse than AMD's offerings in this regard, is that a fair summary?

Basically the "async" in Time Spy is not the real async compute integrated in DX12 and Vulkan. It's a code path that can offer similar effects IN THIS BENCH, and it "happens" to work well on Nvidia's hardware. Even Maxwell can have a good time with this "async" o_O
 
Joined
Oct 6, 2004
Messages
405 (0.06/day)
Location
New Taipei City, Taiwan
So what we are discussing is Nvidia paying Futuremark to release a benchmark that is more favourable to its cards by going easy on asynchronous compute functions in order to attempt to divert attention away from the fact that its cards cannot deal with asynchronous compute at the hardware level and perform worse than AMD's offerings in this regard, is that a fair summary?

As it turns out, NV disables async at the driver level, so no matter how hard the benchmark tries to push, 3DMark will never get async working; hence it's DX12 (feature level 11) on NV hardware.

So, not so much the fault of Futuremark, just the usual cover-up from the green team.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Well damn, now I want my $5 back from purchasing this damned Time Spy. I could have used that to play four more rounds of MvM in TF2!
 
Joined
May 6, 2016
Messages
29 (0.01/day)
Processor Intel i7-6700k@4.5Ghz
Motherboard Asus Maximus VIII Ranger
Cooling Noctua NH-D15S
Memory Corsair Vengeance LPX 2x8GB@2400MHz
Video Card(s) EVGA GTX 1070 SC
Storage OCZ Vertex 4 256GB, Seagate Barracuda 2TB
Display(s) Benq XL2430T, Viewsonic V3D245
Case Fractal Design Define R5
Power Supply Corsair RM650i
Benchmark Scores http://www.3dmark.com/3dm/13419009
I find this async thing to be a bit blown out of proportion, just like the RX 480's PCIe power consumption.
There is a very limited number of games that support async, and even fewer that support it for both vendors. Sure, there might be more in the next year or two, but that's irrelevant if you're only going to play one or two of those, and by the time it's relevant, we'll already have at least one or two new generations of GPUs out.
I'm quite sure there are more people playing Minecraft than playing AotS, so it only makes sense to base your purchase not on synthetic benchmarks or games you'll never play, but on what best matches your needs.
 
Joined
Oct 6, 2004
Messages
405 (0.06/day)
Location
New Taipei City, Taiwan
I find this async thing to be a bit blown out of proportion, just like the RX 480's PCIe power consumption.
There is a very limited number of games that support async, and even fewer that support it for both vendors. Sure, there might be more in the next year or two, but that's irrelevant if you're only going to play one or two of those, and by the time it's relevant, we'll already have at least one or two new generations of GPUs out.
I'm quite sure there are more people playing Minecraft than playing AotS, so it only makes sense to base your purchase not on synthetic benchmarks or games you'll never play, but on what best matches your needs.

Did you really just drag Minecraft into a discussion about high-end graphics cards?
 
Joined
Oct 22, 2014
Messages
13,210 (3.81/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel Xeon E3-1260L v5
Motherboard MSI E3 KRAIT Gaming v5
Cooling Tt tower + 120mm Tt fan
Memory G.Skill 16GB 3600 C18
Video Card(s) Asus GTX 970 Mini
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
Did you really just drag Minecraft into a discussion about high-end graphics cards?
Well to be fair, it's not just about high end graphics cards.
 
Joined
Oct 6, 2004
Messages
405 (0.06/day)
Location
New Taipei City, Taiwan
Well to be fair, it's not just about high end graphics cards.

Indeed, it's about DirectX 12 right now. I can certainly understand that everyone is ticked off that Futuremark is selling a "DirectX 12" benchmark which doesn't actually do anything DirectX 12 related and just says "well, if we throw this workload at it, we'll let the scheduler decide, which would be kinda like DX12" (talking about FM's response on Steam).
 
Joined
Sep 6, 2013
Messages
2,976 (0.77/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 5500 / Ryzen 5 4600G / FX 6300 (12 years later I got to see how bad Bulldozer is)
Motherboard MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2) / Gigabyte GA-990XA-UD3
Cooling Noctua U12S / Segotep T4 / Snowman M-T6
Memory 16GB G.Skill RIPJAWS 3600 / 16GB G.Skill Aegis 3200 / 16GB Kingston 2400MHz (DDR3)
Video Card(s) ASRock RX 6600 + GT 710 (PhysX)/ Vega 7 integrated / Radeon RX 580
Storage NVMes, NVMes everywhere / NVMes, more NVMes / Various storage, SATA SSD mostly
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) ---- 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / Sharkoon Rebel 9 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / Coolermaster Devastator / Logitech
Software Windows 10 / Windows 10 / Windows 7
Nice. Futuremark develops Time Spy to help Nvidia (in effect it is like writing code that makes a dual-core CPU and a single-core CPU with Hyper-Threading perform the same; think of a program where an i5 is equal to an i3, useful if you want to sell more i3s), while in DOOM Nvidia looks like it's cheating, at least in the 3.5+0.5 GB GTX 970 case.

PS. Based on Minecraft's requirements, no one should care about discrete GPUs.
 
Joined
May 6, 2016
Messages
29 (0.01/day)
Processor Intel i7-6700k@4.5Ghz
Motherboard Asus Maximus VIII Ranger
Cooling Noctua NH-D15S
Memory Corsair Vengeance LPX 2x8GB@2400MHz
Video Card(s) EVGA GTX 1070 SC
Storage OCZ Vertex 4 256GB, Seagate Barracuda 2TB
Display(s) Benq XL2430T, Viewsonic V3D245
Case Fractal Design Define R5
Power Supply Corsair RM650i
Benchmark Scores http://www.3dmark.com/3dm/13419009
Did you really just drag Minecraft into a discussion about high-end graphics cards?
This discussion has nothing to do with the high end. Even if it did, there are plenty of games that will benefit from the raw power of the new GPUs even without all the async stuff. But that was not the point I was trying to make.
 
Joined
Dec 18, 2005
Messages
8,253 (1.23/day)
System Name money pit..
Processor Intel 9900K 4.8 at 1.152 core voltage minus 0.120 offset
Motherboard Asus rog Strix Z370-F Gaming
Cooling Dark Rock TF air cooler.. Stock vga air coolers with case side fans to help cooling..
Memory 32 gb corsair vengeance 3200
Video Card(s) Palit Gaming Pro OC 2080TI
Storage 150 nvme boot drive partition.. 1T Sandisk sata.. 1T Transend sata.. 1T 970 evo nvme m 2..
Display(s) 27" Asus PG279Q ROG Swift 165Hrz Nvidia G-Sync, IPS.. 2560x1440..
Case Gigabyte mid-tower.. cheap and nothing special..
Audio Device(s) onboard sounds with stereo amp..
Power Supply EVGA 850 watt..
Mouse Logitech G700s
Keyboard Logitech K270
Software Win 10 pro..
Benchmark Scores Firestrike 29500.. Time Spy 14000..
I find this async thing to be a bit blown out of proportion, just like the RX 480's PCIe power consumption.
There is a very limited number of games that support async, and even fewer that support it for both vendors. Sure, there might be more in the next year or two, but that's irrelevant if you're only going to play one or two of those, and by the time it's relevant, we'll already have at least one or two new generations of GPUs out.
I'm quite sure there are more people playing Minecraft than playing AotS, so it only makes sense to base your purchase not on synthetic benchmarks or games you'll never play, but on what best matches your needs.

isn't everything blown out of all proportion on sites like this one..

it's par for the course..

having said that.. benchmarks like Time Spy are all about the high end.. people that download and run them don't do it to prove how crap their PC is.. which is why it's on my gaming desktop and not on my Atom-powered Windows 10 tablet.. he he

trog
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.19/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz 0.750V (375W down to 250W)
Storage 2TB WD SN850 NVMe + 1TB Samsung 970 Pro NVMe + 1TB Intel 6000P NVMe USB 3.2
Display(s) Philips 32" 32M1N5800A (4K144), LG 32" (4K60) | Gigabyte G32QC (2K165) | Philips 328M6FJRMB (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
someone needs to make a benchmark with a few variations:

0%/50%/100% async, for direct comparisons.
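A rough sketch of what such a harness could look like (every name here is hypothetical, and the FPS figures come from the toy reclaimed-idle-minus-overhead model discussed earlier in the thread, not from a real renderer):

```python
# Hypothetical sweep harness; measure_fps is a stand-in for driving
# a real engine, modelled here with the idle-minus-overhead toy.

def measure_fps(async_fraction: float, base_fps: float = 60.0,
                idle: float = 0.20, overhead: float = 0.05) -> float:
    """Scale a baseline FPS by the idle capacity reclaimed, minus the
    queue-splitting overhead, in proportion to the async share."""
    return base_fps * (1.0 + async_fraction * (idle - overhead))

for fraction in (0.0, 0.5, 1.0):
    print(f"async {fraction:4.0%}: {measure_fps(fraction):5.1f} fps")
```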
 