
AMD "Renoir" APU 3DMark 11 Performance Figures Allegedly Surface

btarunr

Editor & Senior Moderator
AMD "Renoir" is the company's next-generation APU that improves iGPU and CPU performance over the current 12 nm "Picasso" APU. An AMD "Renoir" APU engineering sample running on a "Celadon-RN" platform prototyping board, was allegedly put through 3DMark 11, and its performance numbers surfaced on Reddit, in three data-sets corresponding with three hardware configurations. In the first one, dubbed "config 1," the CPU is clocked at 1.70 GHz, the iGPU at 1.50 GHz, and the system memory at DDR4-2667. In "config 2," the CPU runs at 1.80 GHz, and the iGPU and memory frequencies are unknown. In "config 3," the CPU runs at 2.00 GHz, the iGPU at 1.10 GHz, and the memory at DDR4-2667. Raw benchmark output from 3DMark 11 Performance preset are pasted for each of the configs below (in that order). The three mention 3DMark database result IDs, but all three are private when we tried to look them up.

The "config 1" machine scores 3,547 points in the performance preset of 3DMark 11. It's interesting to note here that the iGPU clock is significantly higher than that of "Picasso." In "config 2," a 3DMark performance score of 3,143 points is yielded. The CPU clock is increased compared to "config 1," but the score is reduced slightly, which indicates a possible reduction in iGPU clocks or memory speed, or perhaps even the iGPU's core-configuration. In "config 3," we see the highest CPU clock speed at 2.00 GHz, but a reduced iGPU clock speed at 1.10 GHz. This setup scores 2,374 points in the 3DMark performance preset, a 33% drop from "config 1," indicating not just reduced iGPU clocks, but possibly also reduced CU count. "Renoir" is expected to combine "Zen 2" CPU cores with an iGPU that has the number-crunching machinery of "Vega," but with the display- and multimedia-engines of "Navi."



View at TechPowerUp Main Site
 
Still slow DDR4? AMD needs to move to much faster LPDDR4X memory, which also offers better efficiency, as Intel did with Ice Lake. Let's hope they do so.
 
Still slow DDR4? AMD needs to move to much faster LPDDR4 memory, which also offers better efficiency, as Intel did with Ice Lake. Let's hope they do so.

What are you talking about? LPDDR4 is actually slower than traditional DDR4 since it operates at a lower power level. If anything would boost the performance of any iGPU, it's memory bandwidth. Either go dual channel with the fastest memory possible or go home.
 
What are you talking about? LPDDR4 is actually slower than traditional DDR4 since it operates at a lower power level. If anything would boost the performance of any iGPU, it's memory bandwidth. Either go dual channel with the fastest memory possible or go home.
Put an X at the end of that LPDDR4. LPDDR4X at 3733 MHz, for example, the one used in Ice Lake laptops.
 
LPDDR4 is actually slower than traditional DDR4 since it operates at a lower power level.
DDR4 128-bit @ 2.666 GT/s => 42.656 GB/s
LPDDR4 128-bit @ 3.2 GT/s => 51.2 GB/s <== LPDDR4 is faster than traditional DDR4 since it runs at a higher data rate.
LP-HBM2 1024-bit @ 1.6 GT/s => 204.8 GB/s
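For anyone checking the arithmetic: peak theoretical bandwidth is simply the bus width in bytes times the transfer rate. A minimal sketch using the figures quoted above (these are the thread's numbers, not official spec sheets):

```python
# Peak theoretical memory bandwidth = (bus width in bits / 8) * transfer rate in GT/s.
# Widths and data rates below are the ones quoted in this thread, not official specs.
def peak_bandwidth_gb_s(bus_width_bits: int, rate_gt_s: float) -> float:
    """Return peak bandwidth in GB/s."""
    return bus_width_bits / 8 * rate_gt_s

print(peak_bandwidth_gb_s(128, 2.666))   # DDR4-2666, 128-bit (dual channel) -> 42.656 GB/s
print(peak_bandwidth_gb_s(128, 3.2))     # LPDDR4-3200, 128-bit              -> 51.2 GB/s
print(peak_bandwidth_gb_s(1024, 1.6))    # HBM2, one 1024-bit stack          -> 204.8 GB/s
```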
 
This thread is now hurting my head, lol
 
How long until we have integrated memory dedicated to the graphics in the APU, and if so, which type do you guys speculate it will be? HBM2E?
 
LPDDR4 128-bit @ 3.2 GT/s => 51.2 GB/s <== LPDDR4 is faster than traditional DDR4 since it runs at a higher data rate.

Makes sense, but you should have written LPDDR4X instead of just LPDDR4. Still, a small ~9 GB/s increase will not bring masses of performance. HBM is just too expensive for an all-in-one CPU/GPU.
 
Makes sense, but you should have written LPDDR4X instead of just LPDDR4. Still, a small ~9 GB/s increase will not bring masses of performance. HBM is just too expensive for an all-in-one CPU/GPU.
LPDDR4 is fine as is. LPDDR4X is nice, LPDDR5 is nicer with its capability to go Quad Data Rate.

HBM costs less than socketed DDR4/DDR5, so it is perfect for an all-in-one.
An 8 GB HBM2 1024-bit stack @ 3.2 GHz would cost less than a single 8 GB 260-pin 64-bit DDR4 SO-DIMM.

1. Wouldn't get stuck with a single channel laptop.
2. Overall higher graphics performance at lower power.
3. It goes on and on this is the song that never ends.
 
HBM is more widely deployed in things like compute. Even though the idea of an "APU" with HBM onboard sounds nice, there's a reason why AMD hasn't released one yet, and HBM is widely available by now. I think a GPU + CPU combined with HBM would be a big sack of heat that easily exceeds 125 W. A Polaris-type card already sits at 150~180 W, and a Navi card still uses roughly 140 to 180 W as well. So you're looking at almost entry-level graphics if you want to pair a 65 W CPU with a reasonable GPU inside of it.

Again, there's a reason. They have the tech, but the cost vs. profit is way off.
 
HBMs aren't hot.
HBMs aren't expensive.
 
Judging by those clock speeds (for the CPU part) it looks like a mobile chip.
 
I try to think of better ways to give the iGPU more memory bandwidth, but at the end of the day it will always add cost, and if it costs too much I would rather have a dedicated GPU so it won't be power- or heat-restricted. LPDDR4X is only good for embedded systems (from what I read it's not backward compatible with LPDDR4), adding HBM adds cost and complexity to the CPU (additional power plane, die size, heat, etc.), and adding sideport memory increases motherboard cost.

AMD tried to add some extra sauce before: with Kaveri there were rumors of quad-channel memory, even GDDR5 options, but in the end none of it was used, even in closed embedded systems. You can read about it here. It looks mighty interesting: quad-channel 256-bit GDDR5 memory for thin gaming laptops/NUCs.

 
HBMs aren't hot.
HBMs aren't expensive.
It probably is expensive, from an OEM's perspective, to add another chip to the design when you can happily use the already installed system memory.
 
AMD tried to add some extra sauce before: with Kaveri there were rumors of quad-channel memory, even GDDR5 options, but in the end none of it was used, even in closed embedded systems. You can read about it here. It looks mighty interesting: quad-channel 256-bit GDDR5 memory for thin gaming laptops/NUCs.
It was GDDR5M, not GDDR5.


GDDR5M has 32-bit channels, just like GDDR5 did. So, A0/A1+B0/B1 in GDDR5M/DDR4 SO-DIMMs would still have been 128-bit.

Max was 5 Gbps and min was 3.3 Gbps per pin, so 128-bit GDDR5M => 128-bit @ 3.3 Gbps to 128-bit @ 5.0 Gbps => 52.8 GB/s to 80 GB/s.
It probably is expensive, from an OEM's perspective, to add another chip to the design when you can happily use the already installed system memory.
Ship a bunch of laptops with a single channel of DDR4, or have AMD package HBM2 on FP6 and have OEMs not worry about it.

Oh, Intel is having a sale, what's that? 2xY GB in SO-DIMMs... let's check AMD's benchmarks (benched with 1x8 GB DDR4 or 1x16 GB DDR4, soldered 1x 4 GB/8 GB DDR4 + one empty SO-DIMM). Doesn't look good, champ~ *buys Intel*
 
I get bored of trying to track what combination of CPU and GPU architecture these things have since AMD have no consistency between dGPU, APU, and CPU naming conventions.

Is this:
  • Another Zen+ and Vega rebrand?
  • Zen2 and Vega?
  • Zen+ and Navi?
  • Zen2 and Navi?
Any announcement that has either Zen+ or Vega in it at this point in time is just disappointing.
 
I get bored of trying to track what combination of CPU and GPU architecture these things have since AMD have no consistency between dGPU, APU, and CPU naming conventions.

Is this:
  • Another Zen+ and Vega rebrand?
  • Zen2 and Vega?
  • Zen+ and Navi?
  • Zen2 and Navi?
Any announcement that has either Zen+ or Vega in it at this point in time is just disappointing.
Why are you trying to track anything? Everything was written in the last sentence.
"Renoir" is expected to combine "Zen 2" CPU cores with an iGPU that has the number-crunching machinery of "Vega," but with the display- and multimedia-engines of "Navi."
 
Oh, Intel is having a sale, what's that? 2xY GB in SO-DIMMs... let's check AMD's benchmarks (benched with 1x8 GB DDR4 or 1x16 GB DDR4, soldered 1x 4 GB/8 GB DDR4 + one empty SO-DIMM). Doesn't look good, champ~ *buys Intel*
90% of consumers don't search for reviews. 90% of consumers who buy prebuilt systems don't make a choice based on results they would get from searching online. They don't have the time, or don't want to spend the time, to search online. They just go and buy anything that looks good, either by being in a nice case or by having a nice price. Or anything with an Intel sticker on it, because that's what they learned during all those FX years: that Intel is better, more stable, more secure, faster. Even though today almost nothing of that is still true, they still go out and buy prebuilt PCs with Intel inside. That's why Intel is NOT dropping prices in the mainstream market.

Who checks reviews? Well, gamers. There Intel is still leading by a frame or two, and buyers in that category want that one or two extra frames. Stupid, but they want them, and there is nothing we can do about it. Also enthusiasts and professionals who buy HEDT. And what is the ONLY platform where Intel has done price cuts? The HEDT platform. See?

As for laptops, it's not about speed. Speed was important 10 years ago. Now it's about battery life, a slim case, and low weight. AMD still can't offer all of them as a package; Intel can. They need to go to 7 nm on the mobile platform and they need to do it fast. They should at least have tried to offer the battery life: force OEMs to put bigger batteries in AMD systems, no matter if that means more weight. 2-3 more hours of battery life could bring many more customers to AMD than a 20-40% higher 3DMark score from the integrated GPU.
 
As for laptops, it's not about speed.
OEMs re-use the laptop chips in desktops, AIOs, not-really-NUCs, etc.

24-F1060, White => AMD Ryzen 5 3500U, quad-core => 1x8 GB
24-F1030, White => AMD Ryzen 3 3200U, dual-core => 1x8 GB
Both have a 128-bit DDR4 interface, but ship with only one 64-bit channel populated.

No slimlines yet, like the 290-a0045m. (Unlike the above, it only has a 64-bit PHY, so it isn't an issue.)

The thing to note is that most HBM packages can scale between 1 GHz and their max (HBM2v1 is 2 GHz, HBM2v2 is 2.4 GHz, HBM2E is 3.2 GHz, HBM2Ev2 is 3.6 GHz). So mobile cTDP clocks down and desktop cTDP clocks up. Low-TDP models usually scale from 12 W to 35 W and high-TDP models usually scale from 25 W to 45 W.

On a $169 APU sold to OEMs, AMD could easily drop in HBM and raise the selling price to at most $299 without critically crippling sales. That means higher margins without a penalty to the end user.

HBM2E with a single stack can do 16 GB/24 GB in the most expensive options, which are still cheaper than DDR4/DDR5 with the same capacity and bandwidth.
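To put those quoted data rates into bandwidth terms, here is a rough sketch for a single 1024-bit stack; the rates are the ones claimed in this post, not verified JEDEC figures:

```python
# Per-stack bandwidth at the HBM data rates quoted in this post (1024-bit interface).
# Treat the rates as forum claims, not JEDEC spec figures; clocking down for a
# mobile cTDP scales the bandwidth linearly.
STACK_WIDTH_BITS = 1024

rates_gt_s = {"HBM2v1": 2.0, "HBM2v2": 2.4, "HBM2E": 3.2, "HBM2Ev2": 3.6}

for name, rate in rates_gt_s.items():
    print(f"{name}: {STACK_WIDTH_BITS / 8 * rate:.1f} GB/s per stack")
# Prints 256.0, 307.2, 409.6 and 460.8 GB/s per stack, respectively.
```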
 
It makes more sense to use one DIMM. Most consumers don't know the difference between dual- and single-channel memory. So, if they ever get to the point of upgrading their memory, they will be happy to see that you, the OEM, left one DIMM slot empty for an easy upgrade. If you install two DIMMs, they will not be as happy to learn that they have to buy two new DIMMs and keep the old ones as bookmarks. Also, as an OEM you prefer to offer somewhat lower graphics performance and at the same time offer the consumer the option to buy a discrete card. From you, of course.
 
It makes more sense to use one DIMM. Most consumers don't know the difference between dual- and single-channel memory. So, if they ever get to the point of upgrading their memory, they will be happy to see that you, the OEM, left one DIMM slot empty for an easy upgrade. If you install two DIMMs, they will not be as happy to learn that they have to buy two new DIMMs and keep the old ones as bookmarks. Also, as an OEM you prefer to offer somewhat lower graphics performance and at the same time offer the consumer the option to buy a discrete card. From you, of course.

2003 just called; they want their concept back. I don't think OEMs these days are leaving a slot free on purpose for you to upgrade later. Most systems today come with 8 to 16 GB, and upon request you can even extend that to a whopping 32/64 GB. Memory is not really the issue anymore these days, just like raw CPU power. You just grab what you need and you're good to go.

The historic OEM boards did have 2 slots but only a single channel of memory. It didn't make any sense performance-wise to install a second DIMM in there; it just extended the capacity. The quality of the chips is usually the cheapest of the cheap: no heatsink, no performance, no tight timings or RGB for that matter, unless you go for the more expensive parts you can find.

A CPU with a GPU that could switch between dedicated and onboard graphics and still have the horsepower for everyday tasks would be cool, but I would always buy a dedicated graphics card.
 
Why are you trying to track anything? Everything was written in the last sentence.
Because I missed that bit!

I'm still confused about a "number-crunching machinery of Vega but with the display-and-multimedia-engines of Navi"

Does that mean it's still just old GCN cores but they've updated the encode/decode FF hardware?
 
Because I missed that bit!

I'm still confused about a "number-crunching machinery of Vega but with the display-and-multimedia-engines of Navi"

Does that mean it's still just old GCN cores but they've updated the encode/decode FF hardware?

Pretty much. Maybe late next year we get Zen 3 cores and Navi cores for the iGPU which would be impressive.
 
Because I missed that bit!

I'm still confused about a "number-crunching machinery of Vega but with the display-and-multimedia-engines of Navi"

Does that mean it's still just old GCN cores but they've updated the encode/decode FF hardware?
Yes, it's still using GCN; only the display engine is updated to match Navi. So gaming performance is the same as before, everything else being equal.
 
Pretty much. Maybe late next year we get Zen 3 cores and Navi cores for the iGPU which would be impressive.

Wishful thinking, but the APUs always seem to be the runts of the litter with the oldest tech.

The 1st Gen Ryzen APUs (2000-series) were named as if they were part of the 12nm Zen+ refresh, but they were still original Zen designs on GloFo's 14nm process.
The 2nd Gen Ryzen APUs (3000-series) are named as if they are part of the 7nm Zen2 refresh, but they are Zen+ designs on GloFo's 12nm refresh.
We assume that Renoir will be the 4000-series, but as Zen2 and (mostly) Vega it will be old tech at launch, with Zen3 either imminent or already launched.

On the assumption that Renoir's successor finally switches to a full Navi IGP, it will likely be a generation behind in the CPU department, so probably a Zen2+ refresh rather than Zen3 cores.

I'm guessing, of course, but at least it's based on empirical data points.
 