Tuesday, July 28th 2020

AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal at 1080p by Itself

Hot on the heels of a June story of an 11th Gen Core "Tiger Lake" processor's Gen12 Xe iGPU playing "Battlefield V" by itself (without a graphics card), Tech Epiphany brings us an equally delicious video of an AMD Ryzen 7 4700G desktop processor's Radeon Vega 8 iGPU running "Doom Eternal" by itself. id Software's latest entry in the iconic franchise is well optimized for the PC platform to begin with, but it's impressive to see the Vega 8 munch through this game at 1080p (1920 x 1080 pixels) with no resolution scaling and mostly "High" details. The game is shown running at frame rates ranging from 42 to 47 FPS, staying above 37 FPS in close-quarters combat (where the enemy models are rendered with more detail).

With a 70% resolution scale, frame rates are shown climbing past 50 FPS. At this point, when the detail preset is lowered to "Medium," the game inches close to the magic 60 FPS figure, swinging between 55 and 65 FPS. The game is also shown utilizing all 16 logical processors of this 8-core/16-thread chip. Despite having just 8 "Vega" compute units, amounting to 512 stream processors, the iGPU in the 4700G is free to dial its engine (GPU) clocks all the way up to 2.10 GHz, which helps it overcome much of the performance deficit against the Vega 11 solution found in the previous-generation "Picasso" silicon. Watch the Tech Epiphany video presentation in the source link below.
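A quick back-of-the-envelope sketch (ours, not from the video) of what those numbers work out to, assuming Doom Eternal's resolution-scale slider scales each axis, as is typical:

```python
# Arithmetic behind the figures quoted above: a 70% resolution scale
# applies per axis, and each GCN/Vega compute unit holds 64 stream processors.

BASE_W, BASE_H = 1920, 1080
SCALE = 0.70                     # the resolution-scale setting in the video
SP_PER_CU = 64                   # stream processors per Vega compute unit

render_w, render_h = int(BASE_W * SCALE), int(BASE_H * SCALE)
print(f"Render resolution at 70% scale: {render_w} x {render_h}")
# -> 1344 x 756

pixel_ratio = (render_w * render_h) / (BASE_W * BASE_H)
print(f"Pixel count vs. native 1080p: {pixel_ratio:.0%}")
# -> 49%, i.e. roughly half the pixels to shade

print(f"Vega 8 stream processors: {8 * SP_PER_CU}")
# -> 512
```

In other words, the 70% slider nearly halves the shading workload, which lines up with the jump from the mid-40s to past 50 FPS.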
Source: Tech Epiphany (YouTube)

66 Comments on AMD Ryzen 7 4700G "Renoir" iGPU Shown Playing Doom Eternal at 1080p by Itself

#26
Vya Domus
btarunrQuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.
Except that doesn't really have anything to do with compute acceleration. Every GPU has encode/decode hardware.
#27
Assimilator
cucker has a very valid point, though. Vega was developed to compete with Pascal, which it didn't. Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017 into APUs they're releasing in 2020? Why aren't they using the newer and much more power-efficient Navi?

Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?

The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...

... at nearly 60FPS
... with some details turned down.
#28
Vya Domus
AssimilatorWhy is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017
Because it works? Does Intel have something earth-shattering that we missed?

Remember how Intel shoehorned dual cores for a decade? Same thing. Plus, DDR4 is on its last legs; there is no point in a major GPU revamp at this point.
AssimilatorVega was developed to compete with Pascal, which it didn't.
Yeah, it did. Are we going with the same old rhetoric of "if it wasn't a million times faster, it's irrelevant"?
#29
Caring1
AI has finally attained awareness, and the first thing it does is play Doom because it's protesting work conditions and wages.
#30
Assimilator
Vya DomusBecause it works? Does Intel have something earth-shattering that we missed?
"Because it works" is how we got 3 successors to Bulldozer.
Vya DomusRemember how Intel shoehorned dual cores for a decade? Same thing.
Most people, myself included, have a big issue with Intel's lack of innovation - as we should. I don't intend to hold AMD to different standards.
Vya DomusPlus, DDR4 is on its last legs; there is no point in a major GPU revamp at this point.
Intel's roadmap has DDR5 in 2021 (which we know won't happen), AMD's in 2022. So "last legs" = "2 years"? Remind me, how often does AMD intend to release a new version of Zen?
Vya DomusYeah, it did. Are we going with the same old rhetoric of "if it wasn't a million times faster, it's irrelevant"?
Considering how low the bar for integrated graphics has been set by Intel, yes.
#31
moproblems99
AnarchoPrimitivI take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to program specifically for multiple GPUs. If this is the case, then AMD may be able to bust the GPU market wide open.
It means this is how they envision it working. Doesn't mean they can or cannot do it yet. I wish you needed a working prototype to file a patent...
#32
Vya Domus
Assimilator"Because it works" is how we got 3 successors to Bulldozer.
You know Bulldozer and its iterations worked just fine for their price. What you may fail to understand is what APUs are and what they are intended for. If you hope AMD or Intel will pour their heart and soul into making the fastest imaginable integrated GPU, you are sorely mistaken. You think it would be difficult for AMD to, say, double or even triple their CU counts? The reason they're not doing it is that APUs are meant to offer low-end performance and be cheap; that being said, you can't expect much more than we already have right now.
#33
Dredi
btarunrQuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.
Are you stating that the encode/decode functionality is somehow broken in Renoir compared to Intel's offering?
#35
JohnSuperXD
First of all, the Vega graphics in Renoir was planned well ahead of time. Navi launched only half a year before Renoir, and Renoir was always meant to use Vega graphics. Also, this Vega implementation has been optimised for low-power mobile usage; Navi likely isn't optimised for low power yet, and Vega is a mature and reliable architecture compared to Navi. AMD had to make sure that nothing got in the way of this launch.

The APU in the past was supposed to mesh CPU and GPU into one, using the CPU for integer work and the GPU for floating-point work. Unlike Intel, which places a GPU as a display engine and video coding engine with light gaming ability, the plan was to leverage the floating-point processing power of the GPU, not just its hardware decoder and encoder.

The second thing is price and margin. I always thought it would be cool to see 8 Jaguar cores with 40 CUs and 8 GB of GDDR5 VRAM. But how would they productize such chips? Placing them in gaming notebooks seems reasonable, but how would they make great margins on those products, and where else could they sell those chips? They are big chips, after all, while Renoir is a small chip: it can be sold in high volume at a high profit margin, and more parts can be salvaged. It also seems easier to design a cooling system for a small chip with low heat output than for a big chip with high heat output.

At the end of the day, it comes down to margins and the technology available for AMD to leverage. With DDR5 around the corner, denser transistors, and matured architectures, AMD will be able to really get down and dirty with their APUs. I think Renoir is good for what it is, and that it shows off AMD's CPU ability rather than being a true APU.
#36
lexluthermiester
BorgOvermindbut ignored the GPU high end lately
This was for two reasons: 1. AMD needed to focus their resources on what could make them the most money, and 2. there is much more money to be made in the mid-range and budget sectors of the GPU market. And let's be fair, the 5600/XT and 5700/XT are winning cards for the money.
#37
Chrispy_
That's really impressive performance from an IGP, though it's worth noting that Doom Eternal scales up and down really well and still looks incredible at lowest settings.

More importantly, I didn't realise how unexciting Doom Eternal looks if you play it on easy. After being mildly frustrated by the random difficulty spikes in Doom 2016 on Nightmare, and being unable to dial it down from Nightmare, I started Doom Eternal on Ultra-Violence but quickly realised that id has given us way more tools to kill with and made monster vulnerabilities to specific weapons more obvious, so it's a safer game than 2016 on Nightmare, as it's never overwhelmingly hard.

I still had a few arenas that took multiple attempts, but surely the satisfaction of Doom's gameplay is about overcoming ridiculous odds with evasion and tactics. Playing it on easy for 'the story' doesn't really work for Doom, because the story is garbage; I've forgotten it already.
#38
RealNeil
Vayra86We also saw SLI and Crossfire die off.
They're DEAD!?

OMG! Now I have to sell four GPUs,....
#39
Assimilator
DrediAre you stating that the encode/decode functionality is somehow broken in renoir compared to intels offering?
No, he's stating that AMD doesn't have Quick Sync. Do you have reading comprehension problems?
#40
cucker tarlson
btarunrQuickSync. It's the only reason anyone with a graphics card doesn't pick the -F variant to save $20.
You are joking, right?
Show me a cheaper and cleaner display output solution for troubleshooting, or for when you're in between cards.
Is it that hard for TPU's news editor to find just one reason?
Assimilatorcucker has a very valid point, though. Vega was developed to compete with Pascal, which it didn't. Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017 into APUs they're releasing in 2020? Why aren't they using the newer and much more power-efficient Navi?

Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?

The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...

... at nearly 60FPS
... with some details turned down.
Ryzen's RAM speed compatibility went from barely doing 3200 on 1st gen to 3800 on R3000,
yet people are somehow happy that AMD still uses the old-ass Vega 8.
I hope the Xe APUs completely kick its ass, because if they don't, it'd frankly be a colossal fail not to outperform a GCN-based solution.
AssimilatorMost people, myself included, have a big issue with Intel's lack of innovation - as we should. I don't intend to hold AMD to different standards.
Exactly my point.
Vya thinks it's cool to jump on Intel but praise AMD,
while both using quad cores as flagship CPUs and using a Vega 8 in a flagship APU are exactly the same thing: small steps, little innovation, because there's no competition.

It's understandable on a quad core like the 3400; it's a value proposition. But on an 8/16? How is that even good when 7nm RDNA1 has been out for a year and RDNA2 is close?

If RDNA2 got delayed and NVIDIA just OC'd Turing and sold it as "plus" SKUs, would the same people be that enthusiastic? Because that's what happened: Intel got delays, and AMD used that to push higher-clocked Vega APUs on R3000.
#41
kapone32
All I am going to say is that if my discrete, hyper-cooled Vega 64 ran at 2.1 GHz, I would not be budgeting for whatever Big Navi is. That clock speed is crazy, and Doom Eternal at 1080p (High) is not possible with any other current desktop APU.
#42
cucker tarlson
DrediAre you stating that the encode/decode functionality is somehow broken in renoir compared to intels offering?
If it does OpenCL it should be fine for e.g. DaVinci,
although I compared all three and CUDA is the best one; Quick Sync worked very well on the 5775C, but the files are big. OpenCL is the slowest.
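For anyone who wants to run that kind of comparison themselves, here is a minimal sketch (not from the post) that times the same clip through ffmpeg's hardware H.264 encoders. It assumes an ffmpeg build with the h264_nvenc, h264_qsv, and h264_amf encoders enabled, and a placeholder clip named input.mp4:

```python
# Rough timing harness for comparing hardware H.264 encoders through ffmpeg.
# Assumes ffmpeg is on PATH and was built with the encoders listed below.
import subprocess
import time

ENCODERS = [
    "h264_nvenc",  # NVIDIA NVENC (what "CUDA" encoding uses in practice)
    "h264_qsv",    # Intel Quick Sync
    "h264_amf",    # AMD VCE/VCN
]

for enc in ENCODERS:
    start = time.time()
    result = subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", enc, f"out_{enc}.mp4"],
        capture_output=True,
    )
    if result.returncode == 0:
        print(f"{enc}: {time.time() - start:.1f} s")
    else:
        print(f"{enc}: not available on this system")
```

Comparing the output file sizes alongside the times would also show the "Quick Sync is fast but files are big" effect mentioned above.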
#43
Tsukiyomi91
Doing 30-40 FPS at 1080p with High settings is very impressive for an iGPU. Incoming low-baller gaming PC from me xD
#44
Legacy-ZA
A proper iGPU gives people additional options should their main GPU be faulty. I can't stress enough how many times my main GPU has died and left me without a graphics rendering device. This seems to be a great gap filler should the need arise, and it seems very capable of playing older game titles and some newer ones too.

Hopefully one day they will become even more powerful.
#45
cucker tarlson
Legacy-ZAA proper iGPU gives people additional options should their main GPU be faulty. I can't stress enough how many times my main GPU has died and left me without a graphics rendering device. This seems to be a great gap filler should the need arise, and it seems very capable of playing older game titles and some newer ones too.

Hopefully one day they will become even more powerful.
You've got to be really unimaginative not to think of a single use for an iGPU.

I had two R9 290 cards die, and each took a month for RMA.
I've had no-POST issues and needed to check which component was at fault.
I sold my 1080 Ti for a really nice sum opportunistically and had to wait for a new card to arrive.

And that costs 20 dollars; the output is already there on the mobo.
#46
Vayra86
Vya DomusBecause it works? Does Intel have something earth-shattering that we missed?

Remember how Intel shoehorned dual cores for a decade? Same thing. Plus, DDR4 is on its last legs; there is no point in a major GPU revamp at this point.

Yeah, it did. Are we going with the same old rhetoric of "if it wasn't a million times faster, it's irrelevant"?
Lol, the man above you is literally saying 'because it works' is not enough, and 'better than Intel' isn't either. Your first response: but it works and is faster than Intel. That is just about what Intel was doing the last decade with their CPUs as a whole, and I don't remember you saying that was just fine.

Seriously man, it's times like these that the truth about bias comes out... Do you see it, or not? Can you admit it's strange to look at it that way? Or do you have some good reason for it? It puzzles me, as you seem like an intelligent person. (No sarcasm involved here.)

Another way to look at it: look at Zen. The very moment AMD offered something the competitor had no answer to, they won market share and mind share big time. What @Assimilator and @cucker tarlson are saying is: why the hell are they not pushing the IGP to a level that puts them in a similar position for bottom-end GPU performance? And perhaps snipe some of the midrange along with it? They DID pursue the IGP for a long time... What's left of that strategy then? And it's doubly strange because they are now FINALLY in a position to combine strong CPU performance with a strong IGP, in a laptop. That is a huge potential market, and they could take share from not just Intel, but also NVIDIA. It's really weird to see GPU tech so far behind in that sense.

Similarly, but that's just me and my thoughts running wild... why is there no movement towards a Threadripper-sized socket with a matching chip that has lots of space for an IGP? That would enable dGPU performance from the CPU socket right away, with lower clocks and a much higher CU count, if you can keep it from burning up, which I'm sure is possible given the larger surface area and how low AMD can push Ryzen TDPs. Intel had its ultrabooks and pretty much dedicated chips for them; why is AMD not moving towards thought leadership in that sense? They have every reason to.
RealNeilThey're DEAD!?

OMG! Now I have to sell four GPUs,....
Well, I'm sure your patients are still alive, but surely you've seen how SLI fingers have vanished lately. That means it's becoming less and less worthwhile for devs to cater to them.
#47
Chrispy_
I can't remember what AMD's excuse for Renoir still being Vega is, but Cezanne (the 5000-series APUs) will be Zen 3 and yet still Vega.

If it were as simple as copy-pasting RDNA logic into the design, AMD would have done it by now. Chances are good that the current Vega APU cores are highly optimised for DDR4 and HSA / unified memory access. Add porting that to Navi on top of AMD already having their hands full with Big Navi, console chips, Zen 4, TSMC's EUV tweaks, and of course the importance of getting things right the first time now that they're bidding against a host of other companies for fab time at TSMC, and sticking with Vega starts to make sense.
#48
msroadkill612
AnarchoPrimitivActually, you could be wrong about that.... While prowling the AMD patents, something I do regularly, I came across a recent one detailing something called "GPU masking". Now, in the patent, I believe it had outlined the use of this technique on multi-gpu MCMs, which is basically taking the strategy used on Ryzen and applying it to GPUs.

The secret to it is that the system and API (DirectX) sees the MCM GPU as a single, logical entity and if I remember correctly, it accomplished the same feat with multiple memory pools. I take this to mean that AMD has figured out a way to use a chiplet approach with GPUs without game programmers and the API itself having to program specifically for multiple GPUs. If this is the case, then AMD may be able to bust the GPU market wide open.

That said, I would imagine that the same technique could possibly be applied to multiple gpu/videocard setups somehow.
That is very cool. I too think AMD is up to something, and you're the first who seems on a similar wavelength. Not because I am clever, but for the humble reason that I think Lisa knows what she is doing better than most bloggers.

The multi-GPU patent you describe is of course a perfect fit for Infinity Fabric; as you say, it's like transferring the Zen architecture to GPUs.

Cache coherency is Fabric in a nutshell. AMD's focus on it is at the root of their success, so the patents you describe don't surprise me. That is exactly where I think they would be trying to go.

Clearly we have a task (graphics) too big for a single GPU, but we are at an enduring impasse in teaming multiple processors (SLI and CrossFire more or less failed). Multiple cheap, easily cooled, efficient GPUs would have a more drastic effect on GPUs than Zen did on CPUs.

Specifically (I am not competent in GPU tech, but) I have long suspected that Vega was preferred for Renoir for secret reasons, not the timing factors officially stated.

There seem to be apps (scientific/math, e.g.?) where Vega is preferred. Maybe it's more suited to some even more tempting prize than better consumer gaming?

AI is changing the usual processing paradigms. The raw AI data is potentially so vast that the consequent slow, costly transmission over any distance makes processing by mini nodes at the edge of the data storage much more attractive; decentralising, in effect.

Maybe banks of tightly integrated hybrid-processor APUs are suited to that, and can form a big new market to add to their already broad appeal?

The patents would fit AMD's MO very well: they love serving multiple markets with easily scaled variants of a few standard ingredients, or even better, winning a new tier or market with existing recipes. (The 12- and 16-core 3900X and 3950X Zens paired with X570 mobos recently invaded a big patch of workstation turf using desktop CPUs; similarly, 64-core, double-RAM Threadripper has charged upscale into Epyc turf.)
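To make the "one logical GPU over many chiplets" idea from that patent discussion concrete, here is a purely illustrative sketch (my own reading, not AMD's design; all names are hypothetical) of what it could look like from the software side: the application sees a single device, while a thin driver layer splits submitted work across chiplets.

```python
# Conceptual sketch only: a "logical GPU" that hides multiple chiplets
# behind one device interface, in the spirit of the chiplet/"GPU masking"
# patent discussed above. Every class and method name here is hypothetical.

class Chiplet:
    def __init__(self, chiplet_id):
        self.chiplet_id = chiplet_id
        self.queue = []

    def submit(self, workload):
        # In hardware this would be a command queue fed over Infinity Fabric.
        self.queue.append(workload)

class LogicalGPU:
    """What the API (e.g. DirectX) would see: a single device."""
    def __init__(self, num_chiplets):
        self.chiplets = [Chiplet(i) for i in range(num_chiplets)]
        self.next = 0

    def submit(self, workload):
        # The driver, not the game, decides which chiplet does the work,
        # so engines need no multi-GPU awareness (unlike SLI/CrossFire).
        self.chiplets[self.next].submit(workload)
        self.next = (self.next + 1) % len(self.chiplets)

gpu = LogicalGPU(num_chiplets=4)
for frame_part in ["shadows", "geometry", "lighting", "post"]:
    gpu.submit(frame_part)  # the app never addresses chiplets directly
```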
#49
Vya Domus
Vayra86That is just about what Intel was doing the last decade with their CPUs as a whole, and I don't remember you saying that was just fine.
Because they are not within the same context, as I explained subsequently. APUs need to remain cheap and not overlap with dedicated offerings (obviously), so there has to be a ceiling on what you can expect, one that is much, much lower than for CPU performance at large, where people are expected to pay even thousands of dollars. You are not going to see a $1,000 APU from AMD or Intel (not that sure about that one :rolleyes:), so you shouldn't expect the same push for advancement; it's all perfectly logical. Like that guy, you don't, or don't want to, understand that the segment in which these things exist has many constraints that prohibit significant leaps in performance of the same order as in other segments.
Vayra86why is there no movement towards a Threadripper-sized socket with a matching chip that has lots of space for an IGP? That would enable dGPU performance from the CPU socket right away, with lower clocks and a much higher CU count, if you can keep it from burning up
Because, as I explained, no one has a need for it. You can just buy a dedicated GPU that isn't thermally and power constrained the way an iGPU would be. Who buys into such an expensive platform and has a need for non-trivial GPU power, but for some inexplicable reason doesn't want a dedicated card? We are talking about desktop PCs, for Christ's sake, not fully integrated systems like a console. There is a really big disconnect between what these products are for and what you guys think they are for; nothing I can do about that.
#50
wahdangun
Assimilatorcucker has a very valid point, though. Vega was developed to compete with Pascal, which it didn't. Why is AMD continuing to shoehorn this barely-competitive GPU architecture from 2017 into APUs they're releasing in 2020? Why aren't they using the newer and much more power-efficient Navi?

Before all the apologists come swinging in with "but it's faster than Intel", you're ignorant of the bigger picture as usual. Vega is GCN and obsolete, Navi is RDNA which is RTG's current focus - which one do you think is going to get driver love going forward? Especially considering AMD's continually-precarious GPU driver situation? Or are y'all going to put your faith in "fine wine" and be let down, again? Do you know what the definition of insanity is?

The bar for iGPUs is not "faster than Intel", it's "is this the latest and greatest dGPU tech crammed into an iGPU", and as such Renoir fails to meet it. Is it better than its predecessors? Yes. Is it better than anything else in its market segment? Yes. Is it as good as it could be? No, and that makes it pretty insignificant, regardless of how many AAA titles it can play at 1080p...

... at nearly 60FPS
... with some details turned down.
Because it's useless to use a faster IGP when the platform still uses dual-channel DDR4. It will bottleneck it really hard, unless AMD uses quad- or octo-channel RAM like the TRX platform, or moves to DDR5.
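The bandwidth math supports this. A minimal sketch (my numbers, not from the thread), using the rule that each 64-bit DDR4 channel moves 8 bytes per transfer:

```python
# Theoretical memory bandwidth available to an iGPU on DDR4, versus a
# representative entry-level discrete card, to illustrate the bottleneck.

def ddr4_bandwidth_gbs(mt_per_s, channels):
    # Each DDR4 channel is 64 bits (8 bytes) wide.
    return mt_per_s * 8 * channels / 1000

print(f"Dual-channel DDR4-3200: {ddr4_bandwidth_gbs(3200, 2):.1f} GB/s")
# -> 51.2 GB/s, shared between the CPU cores and the iGPU
print(f"Quad-channel DDR4-3200: {ddr4_bandwidth_gbs(3200, 4):.1f} GB/s")
# -> 102.4 GB/s, the kind of headroom a TRX-style platform would offer

# For comparison, a GTX 1650 (128-bit GDDR5 at 8 Gbps) has ~128 GB/s
# of dedicated bandwidth all to itself.
```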