
NVIDIA Accelerates Volta to May 2017?

bug

Joined
May 22, 2015
Messages
13,229 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Then explain why Doom in Vulkan is so deadly for all nVidia GPUs, and only a bit less so for Pascal?

All reviews that I have seen so far don't use the required drivers for Vulkan to work on Nvidia hardware. Some benchmarks were run when Vulkan support wasn't available at all for Nvidia.

A game benchmark covering that is already here, and it shows everything you need to know. And Doom fully uses all the DX12 features (unlike Time Spy from 3DMark), and as an FPS game it shouldn't give so much more performance to AMD GPUs.

Like you said above, Doom is using Vulkan, not DX12.

And then think of a large-scale RTS game with tens of thousands of units battling each other in Vulkan. What a massacre that would be for nVidia GPUs. Total War: Warhammer in DX12 is a sign of things to come as well.

Based on invalid premises, one can draw any conclusion.
 

bug

Joined
May 22, 2015
Messages
13,229 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Oh dear, I made a terrible mistake, I see that.

The GTX 1080 and 1070 only have 7.2 billion transistors compared to Fiji's 8.9 billion. That's a nut-busting 24% increase. Fury X has 4096 shaders to 1920 on the GTX 1070 (a 113% increase in hardware). Same ROPs. Higher bandwidth.

So, you're happy that a card with the hardware prowess of the Fury X can only match the paltry hardware inside a GTX 1070? That's not impressive. Do you not see? That's worrying on AMD's side. How can the GTX 1070 even stand beside a Fury X in DX12?

I bet he owned a Prescott at some point.
 
Joined
Aug 15, 2008
Messages
5,941 (1.04/day)
Location
Watauga, Texas
System Name Univac SLI Edition
Processor Intel Xeon 1650 V3 @ 4.2GHz
Motherboard eVGA X99 FTW K
Cooling EK Supremacy EVO, Swiftech MCP50x, Alphacool NeXXos UT60 360, Black Ice GTX 360
Memory 2x16GB Corsair Vengeance LPX 3000MHz
Video Card(s) Nvidia Titan X Tri-SLI w/ EK Blocks
Storage HyperX Predator 240GB PCI-E, Samsung 850 Pro 512GB
Display(s) Dell UltraSharp 34" Ultra-Wide (U3415W) / (Samsung 48" Curved 4k)
Case Phanteks Enthoo Pro M Acrylic Edition
Audio Device(s) Sound Blaster Z
Power Supply Thermaltake 1350watt Toughpower Modular
Mouse Logitech G502
Keyboard CODE 10 keyless MX Clears
Software Windows 10 Pro
So they haven't even released their ultra-expensive, huge-die P100 to lower-priority customers, and someone comes up with the idea that it would somehow be economically viable to release the next gen in less than a year?!



Also, gotta love how these threads always derail...
Sure, because they're contracted to ship these chips by the end of next year. That doesn't necessarily mean we'll have Volta in our PCs by then.
 
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
It's doubtful this will change many decisions, since we have no idea what cards will even be released by that point... It sure as heck hasn't changed my mind about purchases this year.

Either way, we will have to see what consumer versions come from this. I doubt there will be much soon...
 
Joined
Apr 30, 2011
Messages
2,652 (0.56/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
Oh dear, I made a terrible mistake, I see that.

The GTX 1080 and 1070 only have 7.2 billion transistors compared to Fiji's 8.9 billion. That's a nut-busting 24% increase. Fury X has 4096 shaders to 1920 on the GTX 1070 (a 113% increase in hardware). Same ROPs. Higher bandwidth.

So, you're happy that a card with the hardware prowess of the Fury X can only match the paltry hardware inside a GTX 1070? That's not impressive. Do you not see? That's worrying on AMD's side. How can the GTX 1070 even stand beside a Fury X in DX12?

If I were an AMD fanboy with a brain, I'd be worried that for all my posturing about DX12 and async, the mid range Pascal chip (GTX 1080) with the poorer async optimisations humps my beloved Fury X. I'd be worried that AMD's next step has to be a clock increase but to do that the hardware count has to suffer (relatively speaking). I'd be worried that Nvidia's top end Pascal consumer chip (GP102, Titan X) has 1000 more shaders than the GTX 1080 (which already beats everything at everything).

Your optimism is very misplaced. But that's okay, we need optimism in today's world.

You like to be objective, eh? Why don't you put the difference in clocks into the comparison then? The 1070 gets to 1900MHz when gaming while the Fury X reaches 1050MHz. Compare again now...

As for Vulkan, it is built on the same principles as DX12; it's just not limited to Windows 10, so it is more independent of the market and thus a more objective test.

And all people have brains (fanboys or not). Intelligence is another topic though...
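
For the sake of argument, here is a rough back-of-the-envelope sketch of that clock-adjusted comparison (a sketch only, assuming the ~1900MHz gaming clock claimed above for the 1070 and the Fury X's 1050MHz stock clock; these are theoretical peak figures, not game performance):

```python
# Minimal sketch: theoretical peak FP32 throughput at the clocks quoted above.
# Peak FP32 ~= shader count * 2 (an FMA counts as two operations) * clock.
# Real game performance depends on far more than this single number.

def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(f"GTX 1070 @ 1.90 GHz: {peak_tflops(1920, 1.90):.2f} TFLOPS")  # ~7.3
print(f"Fury X   @ 1.05 GHz: {peak_tflops(4096, 1.05):.2f} TFLOPS")  # ~8.6
```

Even with the clock advantage factored in, the Fury X keeps the larger theoretical peak, which is rather the point being argued here.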
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
Bleh, willy-waving over die size vs number of transistors vs shader counts vs ROP counts vs core clocks vs memory clocks is just that, because both companies' architectures are so different. It's pretty obvious that, just as in its CPUs, AMD has chosen to scale out its graphics architecture (more shaders) while NVIDIA has chosen to scale up (more clock speed). Both approaches have their upsides and downsides, but I feel that NVIDIA's approach has given it the edge, because extracting maximal performance from parallelism is a really, really difficult problem, one that DirectX 12 isn't likely to solve in and of itself.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,461 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850w Gold (ATX3.0)
Software W10
You like to be objective, eh? Why don't you put the difference in clocks into the comparison then? The 1070 gets to 1900MHz when gaming while the Fury X reaches 1050MHz. Compare again now...

As for Vulkan, it is built on the same principles as DX12; it's just not limited to Windows 10, so it is more independent of the market and thus a more objective test.

And all people have brains (fanboys or not). Intelligence is another topic though...

The hardware in AMD chips limits clock speeds. The current i7 enthusiast chips demonstrate that very well: Moar Cores = lower frequency. It's a fact that Nvidia dropped a lot of chip hardware (shaders etc.) in favour of a leaner, more efficient and way faster chip. This is why these discussions about "just you wait, DX12 and AMD are going to win" are futile. AMD will not stomp all over Nvidia - they will, at best, achieve parity - which is very good. The reason the GTX 1080 (the GTX 9-freaking-80 replacement) is so expensive is because AMD have nothing to match it.

This is the 7970 versus 680 all over again, except this time Nvidia have struck the motherlode with the pricing of their chips. Don't get me wrong - I hate the pricing of the GTX 1080. I am not a fan of this wallet-shafting policy, but Nvidia know AMD has nothing to match it. Not even on DX12, not for their GP104 and certainly not for GP102.

And again, Vulkan is great for AMD - the hardware inside their architecture gets to shine, but it bloody well should. AMD's 'on paper' stats should have them all over Nvidia, but their lack of DX11 prowess in favour of Unicorn-chasing DX12 has let them down for 2-3 years. And now that DX12 is sort of coming along (because let's face it - it's not really anywhere near replacing DX11), it means Nvidia can build its own Unicorn stable and call it Volta (I'd prefer Roach).

Vega cannot have multitudes of shaders and ACE hardware AND be as fast as Pascal. Sacrifices will be made. Hell, look at the GP102 Titan X - it's already about 200MHz slower than the GTX 1080 as it has 1000 more cores.

Unfortunately, the other glaring issue with AMD is that they absolutely need to make a developer adopt an abundance of their suited DX12 features. Nvidia can use DX12 just fine but if Nvidia help develop a game, they're not going to 'allow' a full utilisation of AMD hardware - like it or not. Is it shit? Yes. Is it business? Yes. The next big 'real' game that isn't a tech demo or small release is Deus Ex. I love those freaking games. It's AMD sponsored. It will be very good to see how that runs. Bearing in mind, I'm still DX11 bound, it's meaningless to me anyway but if Nvidia's Pascal runs that game fine, that will give you a good idea of the future of DX12.

The reason I get so ranty is I'm pissed off AMD haven't come to the table with a faster chip than Fiji. We now have GTX 1070/1080/Titan X from Nvidia and very little back from AMD. A sneeze with Polaris - great for 1080p/1440p but not future looking for 1440p. I would buy a Fury X but what's the point? My card is way faster than stock 980ti so my card is also way faster than Fury X, especially in my native DX11. Even in DX12 my card's clocks make it a GTX1070 match.

Meh, rant over.
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
Fury X vs 1070

Fury X
Shading Units: 4096
TMUs: 256
ROPs: 64
Compute Units: 64
Pixel Rate: 67.2 GPixel/s
Texture Rate: 268.8 GTexel/s
Floating-point performance: 8,602 GFLOPS
Memory Size: 4096 MB
Memory Type: HBM
Memory Bus: 4096 bit
Bandwidth: 512 GB/s

GTX 1070
Shading Units: 1920
TMUs: 120
ROPs: 64
SM Count: 15
Pixel Rate: 96.4 GPixel/s
Texture Rate: 180.7 GTexel/s
Floating-point performance: 5,783 GFLOPS (up to ~7,000+ GFLOPS at 1900MHz)
Memory Size: 8192 MB
Memory Type: GDDR5
Memory Bus: 256 bit
Bandwidth: 256.3 GB/s

GTX 1070 is basically obliterated here by the Fury X (aside from pixel fill rate and memory amount).
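
Incidentally, the pixel, texture and FLOPS figures in that list are just the unit counts multiplied by a clock; here is a small sketch reproducing them (the ~1050MHz and ~1506MHz clocks are assumptions about the reference clocks those figures use):

```python
# Reproduce the spec-sheet rates listed above from unit counts and clock.
#   pixel rate   ~= ROPs    * clock
#   texture rate ~= TMUs    * clock
#   peak FP32    ~= shaders * 2 * clock   (an FMA counts as two operations)
# The clocks below are assumed reference clocks, not gaming boost clocks.

def rates(name, shaders, tmus, rops, clock_ghz):
    print(f"{name} @ {clock_ghz:.3f} GHz: "
          f"{rops * clock_ghz:.1f} GPixel/s, "
          f"{tmus * clock_ghz:.1f} GTexel/s, "
          f"{shaders * 2 * clock_ghz:.0f} GFLOPS")

rates("Fury X  ", 4096, 256, 64, 1.050)  # ~67.2, 268.8, 8602
rates("GTX 1070", 1920, 120, 64, 1.506)  # ~96.4, 180.7, 5783
```

Which also shows why the GTX 1070's much higher gaming clocks close a fair chunk of that on-paper gap.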

Also, indeed, we're off topic already.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Looks like my theory was right. NV is preparing a monster architecture which will render even the latest GCN obsolete in terms of async compute. The history of tessellation may repeat itself soon.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
By the time NV completely shifts to full-on async, both Pascal and GCN will be done. GCN-based cards may be relevant for a little bit longer, though. In the end, whoever commands the developers controls the market, and Nvidia has never failed to tighten its grasp over developers.
 
Joined
Aug 2, 2011
Messages
1,451 (0.31/day)
Processor Ryzen 9 7950X3D
Motherboard MSI X670E MPG Carbon Wifi
Cooling Custom loop, 2x360mm radiator,Lian Li UNI, EK XRes140,EK Velocity2
Memory 2x16GB G.Skill DDR5-6400 @ 6400MHz C32
Video Card(s) EVGA RTX 3080 Ti FTW3 Ultra OC Scanner core +750 mem
Storage MP600 2TB,960 EVO 1TB,XPG SX8200 Pro 1TB,Micron 1100 2TB,1.5TB Caviar Green
Display(s) Acer X34S, Acer XB270HU
Case LianLi O11 Dynamic White
Audio Device(s) Logitech G-Pro X Wireless
Power Supply EVGA P3 1200W
Mouse Logitech G502 Lightspeed
Keyboard Logitech G512 Carbon w/ GX Brown
VR HMD HP Reverb G2 (V2)
Software Win 11
That's like saying the GTX 980 has async compute then. It actually does, with queues so small they are of no practical use. What good is saying "yes, we have async" when it then does basically nothing? Like, lol?

Maxwell has a weak implementation of async compute, and the fact that nVidia hasn't enabled it on the 9-series GPUs in Time Spy points to its implementation likely being flawed.

Async compute IS enabled for Pascal, and it DOES show a performance improvement. I think the clutching at straws over Maxwell having async compute should stop. It doesn't have it, in any way that nVidia cares to support it...


Pascal, for me, shows improvement with Vulkan enabled. I have a good 10+FPS boost when I have it enabled, so I will keep it enabled. Even if async compute isn't yet implemented on Pascal in Vulkan on Doom.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,461 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850w Gold (ATX3.0)
Software W10
By the time NV completely shifts to full-on async, both Pascal and GCN will be done. GCN-based cards may be relevant for a little bit longer, though. In the end, whoever commands the developers controls the market, and Nvidia has never failed to tighten its grasp over developers.

All very true. Though I imagine AMD's successor to GCN will maintain the ACE hardware (it's very good).
And yes, all arguments about DX 'anything' can take a back seat to one-sided game development.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Pascal, for me, shows improvement with Vulkan enabled. I have a good 10+FPS boost when I have it enabled, so I will keep it enabled. Even if async compute isn't yet implemented on Pascal in Vulkan on Doom.
It is used for TSAA in DOOM. 8x TSAA, on a card like Fury X with lots of idle shaders, is practically zero-cost in terms of framerate.

Pascal improves async compute functionality compared to Maxwell (which facepalms when trying), but NVIDIA's implementation is still behind AMD's. Then again, async compute is only beneficial when the card has idle hardware.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
It is used for TSAA in DOOM. 8x TSAA, on a card like Fury X with lots of idle shaders, is practically zero-cost in terms of framerate.

Pascal improves async compute functionality compared to Maxwell (which facepalms when trying), but NVIDIA's implementation is still behind AMD's. Then again, async compute is only beneficial when the card has idle hardware.

I bet Volta will come with >8192 ALUs, which will make it perfect for async compute. Unfortunately, the older cards with fewer ALUs will then all be rendered useless in Volta-optimized games and applications. Which is why I say the tessellation history will soon repeat itself.
 
Joined
Apr 30, 2011
Messages
2,652 (0.56/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux
The hardware in AMD chips limits clock speeds. The current i7 enthusiast chips demonstrate that very well: Moar Cores = lower frequency. It's a fact that Nvidia dropped a lot of chip hardware (shaders etc.) in favour of a leaner, more efficient and way faster chip. This is why these discussions about "just you wait, DX12 and AMD are going to win" are futile. AMD will not stomp all over Nvidia - they will, at best, achieve parity - which is very good. The reason the GTX 1080 (the GTX 9-freaking-80 replacement) is so expensive is because AMD have nothing to match it.

This is the 7970 versus 680 all over again, except this time Nvidia have struck the motherlode with the pricing of their chips. Don't get me wrong - I hate the pricing of the GTX 1080. I am not a fan of this wallet-shafting policy, but Nvidia know AMD has nothing to match it. Not even on DX12, not for their GP104 and certainly not for GP102.

And again, Vulkan is great for AMD - the hardware inside their architecture gets to shine, but it bloody well should. AMD's 'on paper' stats should have them all over Nvidia, but their lack of DX11 prowess in favour of Unicorn-chasing DX12 has let them down for 2-3 years. And now that DX12 is sort of coming along (because let's face it - it's not really anywhere near replacing DX11), it means Nvidia can build its own Unicorn stable and call it Volta (I'd prefer Roach).

Vega cannot have multitudes of shaders and ACE hardware AND be as fast as Pascal. Sacrifices will be made. Hell, look at the GP102 Titan X - it's already about 200MHz slower than the GTX 1080 as it has 1000 more cores.

Unfortunately, the other glaring issue with AMD is that they absolutely need to make a developer adopt an abundance of their suited DX12 features. Nvidia can use DX12 just fine but if Nvidia help develop a game, they're not going to 'allow' a full utilisation of AMD hardware - like it or not. Is it shit? Yes. Is it business? Yes. The next big 'real' game that isn't a tech demo or small release is Deus Ex. I love those freaking games. It's AMD sponsored. It will be very good to see how that runs. Bearing in mind, I'm still DX11 bound, it's meaningless to me anyway but if Nvidia's Pascal runs that game fine, that will give you a good idea of the future of DX12.

The reason I get so ranty is I'm pissed off AMD haven't come to the table with a faster chip than Fiji. We now have GTX 1070/1080/Titan X from Nvidia and very little back from AMD. A sneeze with Polaris - great for 1080p/1440p but not future looking for 1440p. I would buy a Fury X but what's the point? My card is way faster than stock 980ti so my card is also way faster than Fury X, especially in my native DX11. Even in DX12 my card's clocks make it a GTX1070 match.

Meh, rant over.

Agreed with most of what you wrote here.

AMD's strategy needed to focus on market share, and the only way to achieve gains there was to get low-to-mid-priced GPUs out first. The 460, 470 and 480 will do just that. Is it shit? Yes, for guys like you who wait for the new flagships from both companies so that a price war helps you get the best value for money. No, for the 80% of buyers who are searching for their next GPU to play at 1080p. And since we don't know Vega's core size (HBM2 omitted from that size), we cannot calculate whether it reaches or surpasses 1080 performance. So it might do the trick and help everyone in search of better and cheaper high-end GPUs, just as they have already done for those who want a worthy GPU for less than $300. Volta is too far ahead to affect anything in the 2016-2017 market after all. Let's hope DX12 and Vulkan get adopted sooner rather than later for the benefit of all, as better games (more advanced in physics and graphical fidelity) will come out of that change. :toast:
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
The essence of DX12/Vulkan is to remove as many hardware constraints from software developers as possible. This will be perfect for the console and mobile segments, since they don't get a new GPU every 6 months. DX12/Vulkan will make GPU lifespans a lot longer. A capable GPU with tons of compute units can last a lot longer than the flagship GPUs of the current gen. In the end it will be good for us consumers. While buying new cards is fun, nobody likes to see their hard-earned money go obsolete in a mere 6 months to a year.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
No. Async has nothing to do with "idle" shaders. That's like saying a V12 Ferrari will be faster with 8 idle cylinders, lol; it makes no sense. It has everything to do with the ability to do several tasks in parallel at once, and with how good the scheduler, the caches and the multithreading engine are. Clearly, the one in the GTX 1080 is not remotely as good as the one in Radeon graphics cards. Which is no surprise, since AMD has been doing async since the HD 7000 series and NVIDIA has only just now got a half-working async engine...

Synchronous rendering is when you have one graphics rendering thread and all graphics and compute tasks are executed in sequence. It's what we have with D3D11 and older. Here it was all about utilizing the available shaders as effectively as possible; that's why they were always chasing that sweet spot of not wasting GPU die area on stuff you can't possibly use efficiently with single-threaded rendering. Any hardware capability that wasn't used was essentially wasted.

Asynchronous is when you can split your rendering workload into several rendering threads and compute them in parallel. You can split it up however you like, as long as that benefits either the coding or the rendering performance. Of course, more shaders mean you can do more things in parallel before you stuff them all to 100% and they can't accept any more work.
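
As a very loose CPU-side analogy of that description (a sketch only; GPU queues are not OS threads, and the task costs are made up), here is the same set of tasks run strictly in sequence versus handed to several workers at once:

```python
# Loose analogy of synchronous vs asynchronous submission: identical
# "render tasks" executed one after another versus by parallel workers.
# time.sleep() stands in for the work; this illustrates the scheduling
# idea only, not actual GPU behaviour.
import time
from concurrent.futures import ThreadPoolExecutor

tasks_ms = [4, 3, 2, 2, 1]          # per-task cost in milliseconds

def run(ms):
    time.sleep(ms / 1000.0)

t0 = time.perf_counter()            # synchronous: strict sequence
for ms in tasks_ms:
    run(ms)
print(f"serial  : {(time.perf_counter() - t0) * 1000:.1f} ms")

t0 = time.perf_counter()            # asynchronous: several workers at once
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run, tasks_ms))
print(f"parallel: {(time.perf_counter() - t0) * 1000:.1f} ms")
```

The parallel case only wins if there are enough independent tasks and enough spare execution resources to run them on, which is where the shader-count argument comes in.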
 
Joined
Dec 14, 2011
Messages
269 (0.06/day)
Processor 12900K @5.1all Pcore only, 1.23v
Motherboard MSI Edge
Cooling D15 Chromax Black
Memory 32GB 4000 C15
Video Card(s) 4090 Suprim X
Storage Various Samsung M.2s, 860 evo other
Display(s) Predator X27 / Deck (Nreal air) / LG C3 83
Case FD Torrent
Audio Device(s) Hifiman Ananda / AudioEngine A5+
Power Supply Seasonic Prime TX 1000W
Mouse Amazon finest (no brand)
Keyboard Amazon finest (no brand)
VR HMD Index
Benchmark Scores I got some numbers.
I don't get the fuss; the time between gens has typically been just over a year to under a year and a half, with the exception of this most recent gap. So unless the 1180 is on the shelf next May, it's just business as usual.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
No. Async has nothing to do with "idle" shaders. That's like saying a V12 Ferrari will be faster with 8 idle cylinders, lol; it makes no sense.
The better analogy would be GM's V8 cylinder deactivation. Fury X, in a lot of games, runs like a V4 because the graphics pipeline isn't saturated enough to fill all of the shaders. When enabling TSAA or other async workloads, it puts most of the shaders to work like a full V8.

My understanding is that Pascal doesn't actually do async, but they fixed the scheduling problem so that Pascal can rapidly change tasks instead of waiting for the lengthy pipeline to clear. This change allows it to get a 5% performance boost where AMD sees 10%.

There are two things going on here: async shaders and scheduling. Scheduling involves interrupting the graphics queue to inject a compute task (Pascal does this). Async involves finding idle hardware and utilizing it (Pascal doesn't do this). Both are complex and both are important.


Edit: Don't believe me? Believe Anandtech:
Ryan Smith said:
Meanwhile not shown in these simple graphical examples is that for async’s concurrent execution abilities to be beneficial at all, there needs to be idle time bubbles to begin with. Throwing compute into the mix doesn’t accomplish anything if the graphics queue can sufficiently saturate the entire GPU. As a result, making async concurrency work on Maxwell 2 is a tall order at best, as you first needed execution bubbles to fill, and even then you’d need to almost perfectly determine your partitions ahead of time.
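
A toy timing model of that "idle bubble" point may help (purely illustrative numbers, not measurements from any real GPU): the async gain is bounded by how much compute work can hide inside the idle time the graphics queue leaves behind.

```python
# Toy model: async concurrency only helps to the extent that the graphics
# queue leaves idle time ("bubbles") for the extra compute work to hide in.
graphics_ms     = 12.0   # time the graphics queue occupies per frame
idle_bubbles_ms = 3.0    # idle time inside that graphics work
compute_ms      = 2.5    # extra async compute work per frame

sync_frame  = graphics_ms + compute_ms                               # run after
async_frame = graphics_ms + max(0.0, compute_ms - idle_bubbles_ms)   # overlapped

saved = 100.0 * (sync_frame - async_frame) / sync_frame
print(f"synchronous : {sync_frame:.1f} ms/frame")
print(f"asynchronous: {async_frame:.1f} ms/frame  ({saved:.1f}% frame time saved)")
```

Shrink the bubbles (as the quote describes for Maxwell 2) and the saving shrinks toward zero, which is roughly why the observed gains differ between architectures.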
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.39/day)
A GTX 1180 has to have at least 3,200 CUDA cores, and unless they use a gimped GP102 (which would not be optimal), an HBM2 2x4GB Volta is imminent.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.

danyearight

New Member
Joined
Jul 25, 2016
Messages
1 (0.00/day)
OK... great news... but why? What are the financial reasons for this? Right now they have midrange cards (1070, 1080) at incredibly high prices, sold out as soon as they hit the store... it would be financially reasonable to keep it that way for as long as possible... unless they know :O
Oh, and here me and everybody else thought the 1080 and 1070 outperformed the last generation. I guess since you say they are only mid-range cards, and you know best, tell us: what is more powerful then?
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,909 (2.42/day)
Location
Louisiana -Laissez les bons temps rouler!
System Name Bayou Phantom
Processor Core i7-8700k 4.4Ghz @ 1.18v
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax T40F Black CPU cooler
Memory 2x 16GB Mushkin Redline DDR-4 3200
Video Card(s) EVGA RTX 2080 Ti Xc
Storage 1x 500 MX500 SSD; 2x 6TB WD Black; 1x 4TB WD Black; 1x400GB VelRptr; 1x 4TB WD Blue storage (eSATA)
Display(s) HP 27q 27" IPS @ 2560 x 1440
Case Fractal Design Define R4 Black w/Titanium front -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic X-850
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
I guess since you say they are only mid-range cards, and you know best, tell us: what is more powerful then?

Nothing right now. They are still mid-range cards built on a mid-range chip, GP104. It has always been thus: the new mid-range outperforms the previous gen's high end, and the cycle goes on and on.
 
Joined
Mar 10, 2014
Messages
1,793 (0.48/day)
Precisely. Once Nvidia decided not to wait around for 10nm and to commit Volta to 16nm FFC production, there probably wasn't any good reason not to go ahead with the HPC orders, since it ties in with IBM's POWER9 schedule.

Yeah, and don't forget: Volta has been on Nvidia's official roadmaps longer than Pascal (since 2013). I kind of believe it will be quite a major architecture change; Pascal was just a slightly updated Maxwell on 16nm with higher clocks.
 

deu

Joined
Apr 24, 2016
Messages
493 (0.17/day)
System Name Bo-minator (my name is bo)
Processor AMD 3900X
Motherboard Gigabyte X570 AORUS MASTER
Cooling Noctua NH-D15
Memory G-SkiLL 2x8GB RAM 3600Mhz (CL16-16-16-16-36)
Video Card(s) ASUS STRIX 1080Ti OC
Storage Samsung EVO 850 1TB
Display(s) ACER XB271HU + DELL 2717D
Case Fractal Design Define R4
Audio Device(s) ASUS Xonar Essence STX
Power Supply Antec HCP 1000W
Mouse G403
Keyboard CM STORM Quick Fire Rapid
Software Windows 10 64-bit Pro
Benchmark Scores XX
Hey smarta**, I looked it up too, just not this morning. So I did pretty good getting it right after reading a tiny little blurb about it several weeks ago. Next time, look your facts up first before you roll out the PR machine.

And, for the record, I despise PR machine posts from both sides.

I don't get your point; are you mad because so many people don't know those facts, or because I told them? I see no problem in telling people the facts. I could go on about all the good things about NVIDIA as well, but then I would be a fanboi from camp green. Just because you people have a red/green war going doesn't mean you can just put people on teams. I argue for and against, but lately this site has been swarmed with outright shi**y comments bashing AMD. That would be OK if the claims were facts. But a lot of them are not! I saw a dude state that the 480 has double the power draw of the 1060 and shitty performance. I saw people complaining about AMD being in the shi**er when they are at their best in 3 years. I will discuss anything and take any argument for or against any company as long as it is substantiated by facts and knowledge.
 