
AMD Announces Radeon RX 5700 Based on Navi: RDNA, 7nm, PCIe Gen4, GDDR6

Joined
May 15, 2014
Messages
105 (0.05/day)
Somehow computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
Did you read the qualifier:
Top dozen or so tend to favor AMD, bottom dozen favor Nvidia. Pick the games to get the result you want. Test setup/procedure/settings/areas tested can make a difference.
Do you really want sites to pick "balanced" games only for testing? Think carefully.

Latency decreases since you can push twice as much bandwidth in each direction, to and from. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit CrossFire
With the advent of modern game code/shading/post processing techniques, classic SLI/Xfire has to be built into the engines from the ground up. It's just a coding/profiling nightmare. DX12 mGPU is theoretically doable but tends to have performance regression & very little scales well.
 
Joined
Feb 3, 2017
Messages
1,829 (1.80/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
I remember that, my point still stands. (Remind me why it is a proprietary vendor extension in Vulkan.)
NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.

Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:
Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it. By the way, Wolfenstein Youngblood was announced to come with real-time raytracing effects, probably the first new game using these NV_RT extensions.

According to their own roadmap, we will see Crytek's implementation live in version 5.7 of the engine in early 2020. They have said DXR etc. are being considered and likely to be implemented for performance reasons.

Neon Noir does run on Vega56 in real time but 1080p at 30fps. This is, incidentally, the same frame rate GTX1080 can run Battlefield V with DXR enabled. RT effects in these two are pretty comparable - ray-traced reflections are used in both.
 
Joined
Sep 17, 2014
Messages
10,239 (5.43/day)
Location
Mars
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Eizo Foris FG2421
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
I remember that, my point still stands.
NVDA was cooking something for years, found time when competition was absent in the highest end, spilled the beans.
Intel/AMD would need to agree that DXR approach is at all viable or the best from their POV.

Crytek has shown one doesn't even need dedicated (20-24% of Turing die) HW to do the RT gimmick:

But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic. What you're linking is their updated CryEngine and what it can do, and it has nothing to do with RTX, or DXR. But DXR will still potentially expand the possibilities of the tech they use in CryEngine, and it will do that, again, regardless of GPU; the question is how the GPU will make use of what DXR has to offer.
 
Joined
May 15, 2014
Messages
105 (0.05/day)
What it really means and what you're actually saying is: AMD should be optimizing a far wider range of games instead of focusing on the handful that they get to run well. That is why AMD lost the DX11 race as well - too much looking at the horizon and how new APIs would save their ass, while Nvidia fine tuned around DX11.
My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)
 
Joined
Jun 28, 2015
Messages
752 (0.47/day)
What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.
 
Joined
Sep 17, 2014
Messages
10,239 (5.43/day)
Location
Mars
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Eizo Foris FG2421
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
My DX12_11.1 GCN anecdote would've fit better here. MS did (some of) the work for them. By the way, how many gfx/compute/DMA queues should AMD be optimizing games for? ;)
At least half of them, so they don't get their ass kicked in every random comparison. :)
 
Joined
May 15, 2014
Messages
105 (0.05/day)
Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions
Better than cap bits.

Neon Noir does run on Vega56 in real time but 1080p at 30fps. This is, incidentally, the same frame rate GTX1080 can run Battlefield V with DXR enabled. RT effects in these two are pretty comparable - ray-traced reflections are used in both.
Isn't that like saying my car has four wheels, so it must be a Ferrari?

At least half of them, so they don't get their ass kicked in every random comparison. :)
Trick Q. :) No $b = no on-site engineers, or at least no dev evangelists for anyone beyond a few AAA studios. Totally their fault, of course. They've even had both consoles stitched up.
 
Joined
Oct 2, 2015
Messages
2,375 (1.58/day)
Location
Argentina
System Name Ciel / Yukino
Processor AMD Ryzen R3 1200 @ 3875MHz / Intel Core i3 5005U
Motherboard MSI B350M PRO-VDH / HP 240 G5
Cooling Wraith Stealth / Stock
Memory 2x 8GB Corsair Vengeance LPX DDR4 3200MHz @ 3333MHz / 2x 4GB Hynix + Kingston DDR3L 1600MHz
Video Card(s) Sapphire R9 270X Toxic 2GB / Intel HD 5500
Storage SSD WD Green 240GB M.2 + HDD Toshiba 2TB / SSD Kingston A400 120GB SATA
Display(s) HP w17e 1440x900 @ 75 Hz / Integrated 1366x768 @ 94Hz
Case Generic / Stock
Audio Device(s) Realtek ALC892 / Realtek ALC282
Power Supply Sentey XPP 525W / Power Brick
Mouse Logitech G203 / Elan Touchpad
Keyboard Generic / Stock
Software Windows 10 LTSC x64 + Arch Linux
So we finally leave GCN behind? Man, I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while the drivers mature.
 
Joined
Jun 28, 2018
Messages
217 (0.43/day)
What I found most interesting on the GPU front is to see how much AMD completely controls the gaming development ecosystem.
Control what? Most games still run better on Nvidia hardware, and Nvidia features are still much more widely adopted than AMD's (see the Primitive Shaders and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with its chips, people prophesied the death of Nvidia, and that AMD would only gain ground from then on until it dominated in performance.

Well, six years later, here we are and AMD is still struggling to keep up!
 
Joined
Feb 3, 2017
Messages
1,829 (1.80/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
AMD is definitely more in the picture with game development these days. While I am not sure how much help either IHV actually provides to developers, AMD is much, much more visible right now, with the situation largely reversed from the TWIMTBP days.
 

bug

Joined
May 22, 2015
Messages
6,672 (4.08/day)
Processor Intel i5-6600k (AMD Ryzen5 3600 in a box, waiting for a mobo)
Motherboard ASRock Z170 Extreme7+
Cooling Arctic Cooling Freezer i11
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V (@3200)
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 3TB Seagate
Display(s) HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
So we finally leave GCN behind? Man, I was wrong then. I hope this RDNA (horrible name) brings lower CPU overhead at the driver level.
Expect dropped driver support for GCN in two years, and heavy FineWine memes while the drivers mature.
About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
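A toy model of that trade-off (all numbers are invented, purely to illustrate the break-even point):

```python
def frame_time_ms(base_ms, speedup, overhead_ms):
    """Frame time with async compute: faster shading plus a fixed
    scheduling/synchronization cost per frame."""
    return base_ms / speedup + overhead_ms

base = 16.7  # ms per frame without async (60 FPS)

# A 5% speedup doesn't cover 1.5 ms of extra overhead: net loss.
worse = frame_time_ms(base, speedup=1.05, overhead_ms=1.5)

# A 25% speedup more than covers the same overhead: net win.
better = frame_time_ms(base, speedup=1.25, overhead_ms=1.5)

print(f"sync only:        {base:.2f} ms")
print(f"async, +5% perf:  {worse:.2f} ms")
print(f"async, +25% perf: {better:.2f} ms")
```

Same overhead, very different outcomes: async only pays off when the speedup outweighs the scheduling cost.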
 
Joined
Jun 28, 2015
Messages
752 (0.47/day)
Control what? Most games still run better on Nvidia hardware, and Nvidia features are still much more widely adopted than AMD's (see the Primitive Shaders and Rapid Packed Math failures, for example).

Since 2013, when we learned that AMD was going to equip the consoles with its chips, people prophesied the death of Nvidia, and that AMD would only gain ground from then on until it dominated in performance.

Well, six years later, here we are and AMD is still struggling to keep up!
Not about performance...

It's about guiding the development of the entire gaming industry across all studios, with Microsoft, Sony, Apple's upcoming gaming service, and now Google too. AMD is tailoring everything to itself, acting as a hub calling the shots.

Business wise that is very impressive.
 
Joined
Apr 30, 2011
Messages
1,440 (0.46/day)
Location
Greece
Processor AMD FX-8350 4GHz@1.3V
Motherboard Gigabyte GA-970A UD3 Rev3.0
Cooling Zalman CNPS5X Performa
Memory 2*4GB Patriot Venom RED DDR3 1600MHz CL9
Video Card(s) XFX RX580 GTS 4GB
Storage Sandisk SSD 120GB, 2 Samsung F1 & F3 (1TB)
Display(s) LG IPS235
Case Zalman Neo Z9 Black
Audio Device(s) Via 7.1 onboard
Power Supply OCZ Z550
Mouse Zalman ZM-M401R
Keyboard Trust GXT280
Software Win 7 sp1 64bit
Benchmark Scores CB R15 64bit: single core 99p, multicore 647p WPrime 1.55 (8 cores): 9.0 secs
I hope most here have already understood what the slides show. +25% IPC means that the 5700 vs Vega 64 comparison, with no core clocks mentioned, gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU test, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out at a constant 60+ FPS at high resolution. So, for 2020, the big Navi might be the one for that.
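A quick sanity check of that arithmetic (the +25%/+50% figures come from the slides; the baseline wattage is my own assumption):

```python
perf_gain = 1.25        # +25% performance vs Vega 64 (slide claim)
efficiency_gain = 1.50  # +50% performance-per-watt (slide claim)

# Power scales as performance divided by performance-per-watt.
power_ratio = perf_gain / efficiency_gain

vega64_power_w = 295.0  # assumed Vega 64 board power; pick your own baseline

print(f"Implied power vs Vega 64: {power_ratio:.0%}")
print(f"Implied RX 5700 power: {power_ratio * vega64_power_w:.0f} W")
```

With a ~250 W GPU-only figure for Vega 64 instead of the ~295 W board power, the same ratio lands close to 200 W, so the estimate is quite sensitive to which baseline you pick.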
 
Joined
Aug 27, 2015
Messages
32 (0.02/day)
Processor Core i5-4440
Motherboard Gigabyte G1.Sniper Z87
Memory 8 GB DDR3-2400 CL11
Video Card(s) GTX 760 2GB
Thanks for linking a chart showing perf difference TWO TIMES SMALLER than TPU.
Somehow computerbase managed to pick a more balanced, smaller set of games that matches the 35-ish-game test results.
Can you read the charts, or do I have to read them for you? The performance difference is nearly the same: a 9% difference for Techspot vs 10% at TPU. Where did you get that "TWO TIMES SMALLER than TPU"?
[chart: relative performance, 3840×2160]
 
Joined
Oct 2, 2015
Messages
2,375 (1.58/day)
Location
Argentina
System Name Ciel / Yukino
Processor AMD Ryzen R3 1200 @ 3875MHz / Intel Core i3 5005U
Motherboard MSI B350M PRO-VDH / HP 240 G5
Cooling Wraith Stealth / Stock
Memory 2x 8GB Corsair Vengeance LPX DDR4 3200MHz @ 3333MHz / 2x 4GB Hynix + Kingston DDR3L 1600MHz
Video Card(s) Sapphire R9 270X Toxic 2GB / Intel HD 5500
Storage SSD WD Green 240GB M.2 + HDD Toshiba 2TB / SSD Kingston A400 120GB SATA
Display(s) HP w17e 1440x900 @ 75 Hz / Integrated 1366x768 @ 94Hz
Case Generic / Stock
Audio Device(s) Realtek ALC892 / Realtek ALC282
Power Supply Sentey XPP 525W / Power Brick
Mouse Logitech G203 / Elan Touchpad
Keyboard Generic / Stock
Software Windows 10 LTSC x64 + Arch Linux
About that overhead: when you go async-heavy, overhead goes up.
That's why async doesn't stand on its own: it needs to speed up the processing enough to offset that overhead.
Yeah, let's see how that turns out on release drivers.
I also hope that we can finally get a proper OpenGL driver.
 
Joined
Jul 9, 2015
Messages
1,951 (1.23/day)
System Name My all round PC
Processor i5 750
Motherboard ASUS P7P55D-E
Memory 8GB
Video Card(s) Sapphire 380 OC... sold, waiting for Navi
Storage 256GB Samsung SSD + 2Tb + 1.5Tb
Display(s) Samsung 40" A650 TV
Case Thermaltake Chaser mk-I Tower
Power Supply 425w Enermax MODU 82+
Software Windows 10
Do you really want sites to pick "balanced" games only for testing? Think carefully.
I've stated it twice, yet you still miss the point.

A - picks a handful of games, does test, arrives at X%
B - picks a handful of games, does test, arrives at 2*X%
C - picks A LOT of games, does test, arrives at X%

Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it.
Well, if there is no diff between the wider set and the subset, the subset is fine; I stand corrected. (I did a criss-cross resolution comparison; the values are different.)

But we always knew that, the question was performance versus visual gain. Crytek has also explained how they do it, and it is not specific to anything AMD either, so using this as an example for anything is simply offtopic.
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed we can have RT gimmick right there, with current tech.


Because Vulkan is ruled by a committee and the way features are introduced has historically been through vendor-specific extensions, first at experimental stage, then simply an extension and then they will see what they end up doing with it.
In other words "see, if it gets adopted first", which kinda makes sense, doesn't it?
 
Joined
Feb 18, 2017
Messages
454 (0.45/day)
Looking forward to the pricing. It would be nice to see AMD undercutting NV prices (the opposite of initial Vega pricing).

If these prices are true, it would be too expensive.
AMD tested Strange Brigade, which is an AMD-leaning DX12 game. For example, in this game the RX 570 is faster than the GTX 1660, and the RX 580 matches the GTX 1660 Ti. This is certainly AMD's strategy. I think the RTX 2060 is faster than the RX 5700 in Nvidia-leaning games such as Witcher 3 (also AC Odyssey). I was disappointed by AMD's Computex. In addition, I don't like Ryzen 7's 8 cores / 16 threads; I hope AMD will release an R7 with 12 cores / 24 threads.
I don't like this gen (Ryzen 2 and RX Navi); maybe I will buy Ryzen 4000.
High-end AMD GPUs:
RX 5700 = RTX 2060 +5-10%, for 400 dollars
RX 5800 = RTX 2070, for 500 dollars
Mid/low-tier GPUs:
RX 3060 = GTX 1650
RX 3070 = GTX 1660
RX 3080 = GTX 1660 to GTX 1660 Ti
(in most games)
What?

Vega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.
 
Joined
Jul 9, 2015
Messages
1,951 (1.23/day)
System Name My all round PC
Processor i5 750
Motherboard ASUS P7P55D-E
Memory 8GB
Video Card(s) Sapphire 380 OC... sold, waiting for Navi
Storage 256GB Samsung SSD + 2Tb + 1.5Tb
Display(s) Samsung 40" A650 TV
Case Thermaltake Chaser mk-I Tower
Power Supply 425w Enermax MODU 82+
Software Windows 10
Mid/low-tier GPUs:
RX 3060 = GTX 1650
You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, do not worry: neither do millions of 1050/1050 Ti users.
 
Joined
Mar 10, 2014
Messages
1,666 (0.80/day)
I hope most here have already understood what the slides show. +25% IPC means that the 5700 vs Vega 64 comparison, with no core clocks mentioned, gives Navi a +25% performance advantage while being 50% more efficient at the same time. To make things simple, if we put those numbers on the charts from the latest @W1zzard GPU test, the 5700 sits exactly between the 2070 and the Radeon VII and consumes about 200 W. If the price is good, that will be a great product. As for real-time ray tracing, no GPU yet has the power to run that feature maxed out at a constant 60+ FPS at high resolution. So, for 2020, the big Navi might be the one for that.
One of the RX 5700-series SKUs; note the plural. So in translation there will likely be a couple of SKUs in that series, e.g. RX 5770 and RX 5750, or RX 5700 XT and RX 5700 Pro.
 
Joined
Oct 10, 2018
Messages
76 (0.19/day)
I didn't say they lied, but this is a sales strategy. They only used 3 games and said the Radeon VII matches the RTX 2080. Well, what about the other games?

@medi01 In most benchmarks, the RTX 2080 is faster than the Radeon VII. Yes, it depends on the games.
You realize even the two-year-old 570 wipes the floor with the 1650, don't you?
If you don't, do not worry: neither do millions of 1050/1050 Ti users.
The GTX 1650 is a fast card for entry level. The RX 570's normal sale price is 169 dollars. Is the GTX 1650 overpriced? Yes. It should be 119 dollars, because it is Nvidia's entry-level card.
Vega 56 trades blows with the GTX 1070 Ti, and in general, it's nearly GTX 1070Ti performance-wise. In Witcher 3, RX 580 is about 5% faster than the GTX 1060. No idea why you said that, but it's your problem.
Oh well.
[chart: The Witcher 3, 1920×1080]


Vega 56 doesn't match the GTX 1070 Ti; its performance sits between the GTX 1070 and the GTX 1070 Ti. It depends on which games you are playing.

All in all, I'm neither an Nvidia fanboy nor an AMD fanboy. I'm expecting more performance for the price from AMD, but people are spreading lies about AMD (that the R7 3000 series has 12 cores, or RTX 2070 performance for 250 dollars). I'm confused by the rumours.
 
Joined
May 2, 2017
Messages
1,614 (1.74/day)
Processor AMD Ryzen 5 1600X
Motherboard Biostar X370GTN
Cooling Custom CPU+GPU water loop
Memory 16GB G.Skill TridentZ DDR4-3200 C16
Video Card(s) AMD R9 Fury X
Storage 500GB 960 Evo (OS ++), 500GB 850 Evo (Games)
Display(s) Dell U2711
Case NZXT H200i
Power Supply EVGA Supernova G2 750W
Mouse Logitech G602
Keyboard Lenovo Compact Keyboard with Trackpoint
Software Windows 10 Pro
I'm not sure how you read that graph, but this is how I do it:
1. Half of Nvidia's cards are in the 90-100% relative efficiency range.
2. AMD cards are generally at 50% or less relative efficiency. Vega 56 does better, at 60%. Radeon VII does even better at 68%, but that's already on 7nm.

If I take the best-case scenario, Vega 56, and add 50% to that, it still puts AMD at 90% of the most efficient Nvidia card. And Nvidia is still on 12nm.
You chose a card that performs a few percent better per watt than most Turing cards, which also shifts AMD's averages down; that is kind of odd when you say you're not interested in looking at specific cards. Even with that, the Vega 56 was at 62%, and 62 × 1.5 = 93. That's pretty darn close. Of course the V64 was slower, at 54%, for which a 50% increase would give 81%. That's a lot worse for a very small difference in the baseline. If we look at one of the more average (and similar in performance) Turing cards, like the 2070, the result for the V56 is 99%. This is why talking about multiples of percentages is a minefield: unless you are very explicit about your baseline, test conditions, and what you're comparing, you're going to confuse people more than clarify anything.
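The baseline sensitivity is easy to show numerically. The Vega figures follow the discussion above; the 2070 figure is my own assumption, all expressed as a percentage of the most efficient Turing card's perf/W:

```python
# Hypothetical relative perf/W figures (percent of the best Turing card).
v56, v64, rtx2070 = 62.0, 54.0, 94.0

print(v56 * 1.5)                  # close to the best Turing card
print(v64 * 1.5)                  # same +50%, much worse-looking result
print(v56 * 1.5 / rtx2070 * 100)  # vs an average Turing card instead
```

An 8-point gap in the baseline turns the exact same "+50%" claim into anything from "still behind" to "basically parity".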
Latency decreases since you can push twice as much bandwidth in each direction, to and from. AMD themselves said it: reduced latency, higher bandwidth, lower power. Literally all of those things would benefit CrossFire, and a cut-down version might even improve things further if they can improve overall efficiency in the process while salvaging imperfect dies by disabling parts of them. I don't know why CrossFire wouldn't be improved a bit, but how much of an improvement is tough to say definitively. I would think micro-stutter would be lessened quite a bit for a two-card setup, and for a three-card setup too, though less dramatically in the latter case, while a quad-card setup would "in theory" behave like a two-card one, on PCIe 4.0 at least.
That is only true if bandwidth is already maxed out, leading to a bottleneck. Other than that, increasing bandwidth does not necessarily relate to latency whatsoever. The cars on your highway don't go faster if you add more lanes but keep the speed limit the same. Now, I haven't read the PCIe 4.0 spec, so I don't know if they're also reducing latency, but of course they might. It still doesn't relate to bandwidth, though.
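The lane analogy can be put into numbers with a simple link model (the 1 µs latency and the transfer sizes are invented; ~16 vs ~32 GB/s roughly matches PCIe 3.0 vs 4.0 x16):

```python
def transfer_us(size_bytes, bandwidth_gb_s, latency_us=1.0):
    """Transfer time in microseconds: fixed latency plus serialization."""
    bytes_per_us = bandwidth_gb_s * 1e3  # 1 GB/s = 1000 bytes per microsecond
    return latency_us + size_bytes / bytes_per_us

small = 64          # tiny synchronization message: latency-bound
big = 8 * 1024**2   # 8 MB buffer copy: bandwidth-bound

for bw in (16.0, 32.0):
    print(f"{bw:4.0f} GB/s: small={transfer_us(small, bw):.3f} us, "
          f"big={transfer_us(big, bw):.1f} us")
```

Doubling the bandwidth nearly halves the big copy but leaves the tiny message sitting at its ~1 µs latency floor, which is the point: more bandwidth only helps where bandwidth was the bottleneck.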
 
Joined
Sep 17, 2014
Messages
10,239 (5.43/day)
Location
Mars
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Eizo Foris FG2421
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed we can have RT gimmick right there, with current tech.
Hey look, you and I agree on this; I'm no fan either of large GPU die percentages dedicated just to RT performance. But with the facts available to us now, we also have a few things to deal with...

- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
Now, that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster at the kind of tech you saw in that Crytek demo. After all, the dedicated hardware handles part of the workload more efficiently, which leaves TDP budget for the rest to run as usual. With a software implementation that runs on the "entire" GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work), but it can never be faster at doing both in tandem.

End result, Nvidia with that weirdo thought wins again.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that is the real game changer. A tech demo is just that: a showcase of potential. But you can't sell potential.

I think the more interesting development with hardware solutions for RT is how well it can be utilized for other tasks. That will make RT adoption easier. Nvidia tried something with DLSS, but that takes too much effort.
 
Joined
Jul 9, 2015
Messages
1,951 (1.23/day)
System Name My all round PC
Processor i5 750
Motherboard ASUS P7P55D-E
Memory 8GB
Video Card(s) Sapphire 380 OC... sold, waiting for Navi
Storage 256GB Samsung SSD + 2Tb + 1.5Tb
Display(s) Samsung 40" A650 TV
Case Thermaltake Chaser mk-I Tower
Power Supply 425w Enermax MODU 82+
Software Windows 10
- RT / Tensor core implementation in Turing has a much higher perf/watt potential and absolute performance potential than any other implementation today.
That's a generic "specialized hardware does things faster" statement, and, well, yes.
E.g. AES decryption.

- RTX / DXR will and can be used to speed up the things you see in the Crytek demo.
No, and that's the point.
DXR works with different structures, Crytek is voxel based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.

The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. And that is an answer nobody has, but Turing so far isn't going like hotcakes, which is a sign. In that sense,
We can have all those visuals today with a helluva lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
 

M2B

Joined
Jun 2, 2017
Messages
201 (0.22/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
Ray tracing goes beyond simplifying game development in the way you like to believe.
It takes ages to achieve a similar level of accuracy with traditional rendering techniques, especially in open-world and more complex games; thus, in reality, you're never going to see RT-level realism and accuracy in actual games without RT in use.

Crytek has also stated that they're going to use the RT cores on Turing cards for better performance in the future.

One day, ~70% of PC users will have an RTX card; GTX is going to die sooner or later. That's when developers will think twice about whether or not to consider RT implementation in general; and of course you'd be stupid not to use the relatively free performance that RT cores offer.
 
Joined
Sep 17, 2014
Messages
10,239 (5.43/day)
Location
Mars
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Eizo Foris FG2421
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
That's a generic "specialized hardware does things faster" statement, and, well, yes.
E.g. AES decryption.


No, and that's the point.
DXR works with different structures, Crytek is voxel based, DXR is not.
So there goes the "could be used" aspect of it, because, wait for it, "specialized hardware" is not known for being flexible.


We can have all those visuals today with a helluva lot of shader work; the main point of the RT gimmick (and it's nothing beyond that, for F's sake, most of RT-ing is denoising at this point) is to achieve reflections/illumination/shadows with less effort.

For game developers to do it, one simply needs a large enough "RT user base". And this is why Crytek's take on the problem is so much better than NVDA's.
Okay buddy, whatever you want to disagree on, I'll agree to :D I suppose you know better than what sources have shown thus far.

Also, why always so mad?
 