
AMD is Allegedly Preparing Navi 31 GPU with Dual 80 CU Chiplet Design

High latency with the CPU was already terrible, and now they want high latency on the GPU? Get out.

You do know AMD has the lead in every CPU category right now, correct? The merits of MCM design far outweigh the negatives, which is why both Nvidia and Intel are trying to do the same. On latency, by the way, the University of Toronto showed it's possible to design a chiplet-based CPU with lower latency than a monolithic one through the use of an active interposer.
 
Maybe I'm weird, but I have a 1050 Ti on the shelf just for this purpose. :D
You're not weird. I've always kept a spare PSU, keyboard & mouse lying around. I have an old 1050 Ti that's been sitting in its box waiting for eBay, but after these past six highly insane months I'm definitely keeping it as a backup too.
 
High latency with the CPU was already terrible, and now they want high latency on the GPU? Get out.
Now you pretend you know anything about any of this, in addition to posting nonsensical fanboy drivel?

I guess you are a troll from top to bottom.
 
High latency with the CPU was already terrible, and now they want high latency on the GPU? Get out.
This isn't MCM like in Ryzen, which by the way has largely overcome its latency issues anyhow. There was a recent AMD patent published about MCM GPUs with passive interposers, though it's also feasible for them to use an active interposer with the Infinity Cache on it rather than on the die itself; either of those would drastically reduce latency compared to what we see in MCM-on-substrate Ryzen. It stands to reason that if they're actually bringing this to market, it won't be with a fundamental design flaw so large that it kills performance entirely.
 
Considering it mentions RDNA 3 as the base for the two chiplets, I doubt it would arrive this year; late next year seems more likely?
It will be late this year. It will be priced very well too. Also, to everyone talking **** about supply: grow up. AMD is providing chips not only for themselves, they also have contracts to make consoles, 5-6 different consoles: 6 if Sony is still ordering both the PS4 and the Pro, 5 if they're only ordering one of them, like Microsoft, which stopped ordering the Xbox One X.

AMD can only buy so much capacity from TSMC, and that capacity is being split between all those consoles and previous-gen CPUs along with new CPUs and GPUs, all with multiple SKUs.

You want to know where all the hardware is going? It's not scalpers, it's system builders like Dell or any number of the companies making gaming PCs.

Come back to reality. By the way, AMD is making tons of stuff, but Nvidia isn't. Nvidia is also letting Samsung's fab sit idle when it could easily produce tons of chips for them.

The reason they aren't is AMD coming out strong with RDNA2, so strong that Nvidia basically stopped making 3080s, because the 6800 XT devalues Nvidia's products at the high end, which is why we will see production ramp up when they deliver the 3080s or the Ti.
 
That’s because CP uses some really heavy post processing and has a very soft image quality anyway.
I can add a few things to what you said.

CP is a trash game.
Why?
1) The quality isn't that good if you look closely.
2) The CPU and GPU are overworked by the graphics, even though it looks like crap.
3) Optimisation is equal to zero.
4) The devs who created the visuals are freaking clueless (sorry for saying this).
 
Yeah like the PIII-1000 paper launch, Radeon X800 paper launch, GeForce 6800U, X1900, 8800 GTX, GTX 480... We've never had paper launches before.
I wasn't into tech in that era, so I didn't know. But looking at how people are acting, it doesn't feel like what we're living through right now is just "business as usual, move along".
 
Sounds like fun. I've been waiting for this since I heard rumours about chiplets for Zen.
 
The performance gap ... is getting ridiculous.
Dropping from 4K to 1440p (2.25 times fewer pixels) boosts performance, who would have thought... :D
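
A quick sanity check on that ratio, just standard resolution arithmetic (nothing specific to these cards):

# 4K vs 1440p pixel counts; the ratio works out to exactly 2.25x.
uhd_pixels = 3840 * 2160   # 8,294,400
qhd_pixels = 2560 * 1440   # 3,686,400
print(uhd_pixels / qhd_pixels)  # 2.25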

On "paper launch": in stock for WEEKS on german online retailer site (at price higher than claimed MSRP, of course):


Nothing "paper" about it, besides, perhaps, MSRP price, but I was told gamers were fine with it, 2080Ti, cough.

Radeon Top 5 Selling Brand Lines!

  1. RX 6900 XT = 460 Units.
  2. RX 6800 XT = 260 Units.
  3. RX 5700 XT = 220 Units.
  4. RX 6800 = 175 Units.
  5. RX 5500 XT = 80 Units.


Nvidia Top 5 Selling Brand Lines!

  1. RTX 3080 10GB = 530 Units.
  2. RTX 3070 8GB = 410 Units.
  3. GTX 1660 Super = 250 Units.
  4. GT 710 = 130 Units.
  5. GTX 1050 Ti = 120 Units.
 
"AMD is making tons of stuff but nvidias not. nVidia is also letting Samsung's fab sit idle when they could easily produce tons of chips for nVidia.
The reason they aren't is because of AMD coming out strong with RDNA2..so strong that nVidia basically stopped making 3080s because the 6800xt devalues nVidia products at the high end"

You can't be older than 10 with that logic. Okay maybe a slow 12yo.
 
Dropping from 4K to 1440p (2.25 times fewer pixels) boosts performance, who would have thought... :D

On "paper launch": in stock for WEEKS on german online retailer site (at price higher than claimed MSRP, of course):


Nothing "paper" about it, besides, perhaps, MSRP price, but I was told gamers were fine with it, 2080Ti, cough.

Radeon Top 5 Selling Brand Lines!

  1. RX 6900 XT = 460 Units.
  2. RX 6800 XT = 260 Units.
  3. RX 5700 XT = 220 Units.
  4. RX 6800 = 175 Units.
  5. RX 5500 XT = 80 Units.


Nvidia Top 5 Selling Brand Lines!

  1. RTX 3080 10GB = 530 Units.
  2. RTX 3070 8GB = 410 Units.
  3. GTX 1660 Super = 250 Units.
  4. GT 710 = 130 Units.
  5. GTX 1050 Ti = 120 Units.
This is, what, three months after launch? Or have we hit four now? And a significant retailer in a country of >80 million inhabitants has less than 1000 RTX 3000 units to sell? Yeah, sorry, that's not a lot. Unless every SKU is in stock at prices at least close to those they were announced at (Covid has increased shipping and distribution costs, so some hikes are expected), that's still very low availability.
 
This is, what, three months after launch? Or have we hit four now? And a significant retailer in a country of >80 million inhabitants has less than 1000 RTX 3000 units to sell? Yeah, sorry, that's not a lot. Unless every SKU is in stock at prices at least close to those they were announced at (Covid has increased shipping and distribution costs, so some hikes are expected), that's still very low availability.
The 6900 was released in December.
The situation wasn't much different three weeks ago (same site); if anything, GPUs were a tad cheaper.

Compared to pre-current-gen GPUs, Mindfactory was selling roughly the same ballpark of GPUs weekly.
 
Never mind, even if you could afford it and actually buy it, there's no power supply on this planet that can feed this beast.
Most likely it would be produced on some iteration of 5nm, and probably at lower clocks, thereby drastically reducing power consumption. If done on N5P with reduced clocks, it could be 50% less power on the same arch.
-- The N5P technology provides roughly 20% speed improvement or about 40% reduction in power consumption compared with the 7nm process technology.
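
A rough back-of-the-envelope sketch of how a ~50% cut could fall out of that, using assumed numbers (none of this is from AMD or TSMC) and the usual dynamic-power ~ f·V² approximation:

# All inputs are assumptions for illustration, not published specs.
baseline_power_w = 300.0   # assumed 7nm reference part, 6900 XT-class board power
process_scale    = 0.60    # ~40% power reduction quoted for N5P vs 7nm at iso-clock
clock_scale      = 0.90    # assumed ~10% lower clocks
voltage_scale    = 0.95    # assumed small voltage drop enabled by the lower clocks

# Dynamic power scales roughly with frequency * voltage^2.
estimated_w = baseline_power_w * process_scale * clock_scale * voltage_scale ** 2
print(f"~{estimated_w:.0f} W vs the {baseline_power_w:.0f} W baseline")  # ~146 W, roughly half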
 
If the latency is high, they'll throw it out as a compute card for rendering and such where horsepower is needed, not lowest latency
 
Never mind, even if you could afford it and actually buy it, there's no power supply on this planet that can feed this beast.
Actually, it would run fine on a 750 W PSU. The card would use roughly 500 W, and possibly even less if it uses HBM.

The 6900 XT is 300 W, and GPU chiplets are really efficient: you don't have to duplicate the RAM and its power draw, each die can be a bit smaller and use less power per core, and considering they'll likely use HBM, it could use much less power.

It could possibly be a 400 W card if it's tuned properly.
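
A quick sanity check on the 750 W claim, with assumed (not measured) numbers for the rest of the system:

# All figures below are assumptions for illustration only.
psu_w   = 750
cpu_w   = 150   # assumed high-end gaming CPU under load
other_w = 75    # assumed board, RAM, drives, fans, peripherals

for gpu_w in (500, 400):   # the two estimates from the post above
    total = gpu_w + cpu_w + other_w
    print(f"GPU {gpu_w} W -> system ~{total} W, headroom {psu_w - total} W")
# 500 W leaves only ~25 W of headroom, 400 W leaves ~125 W, so a quality 750 W
# unit looks plausible but tight at the high end of the estimate.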
 
Dropping from 4K to 1440p (2.25 times fewer pixels) boosts performance, who would have thought... :D

On "paper launch": in stock for WEEKS on german online retailer site (at price higher than claimed MSRP, of course):


Nothing "paper" about it, besides, perhaps, MSRP price, but I was told gamers were fine with it, 2080Ti, cough.

Radeon Top 5 Selling Brand Lines!

  1. RX 6900 XT = 460 Units.
  2. RX 6800 XT = 260 Units.
  3. RX 5700 XT = 220 Units.
  4. RX 6800 = 175 Units.
  5. RX 5500 XT = 80 Units.


Nvidia Top 5 Selling Brand Lines!

  1. RTX 3080 10GB = 530 Units.
  2. RTX 3070 8GB = 410 Units.
  3. GTX 1660 Super = 250 Units.
  4. GT 710 = 130 Units.
  5. GTX 1050 Ti = 120 Units.

Those numbers are extremely bad given we are months out from launch. In fact, they are downright pathetic. That definitely says paper launch to me.
 
Considering it mentions RDNA 3 as the base for the two chiplets, I doubt it would arrive this year; late next year seems more likely?

RedGamingTech has pretty solid sources for AMD and was nearly spot on about RDNA2. He mentioned it will arrive in 2022 at the earliest, likely in the second half. Also, according to his source, the performance target for the big RDNA3 chip is really high, at 2.5x the 6900 XT. He says it's going to be a big architectural change when it comes to the geometry engine, with a big leap in ray tracing as well. Watch his latest video on it and his reaction to it, lol. RDNA3 is going to be a monster chip.
 
Never mind, even if you could afford it and actually buy it, there's no power supply on this planet that can feed this beast.

Based on what evidence, exactly? I wouldn't expect a single-PCB RDNA3 twin-chip design with CU counts like the RX 6900 XT's to be worse on power draw than two discrete RDNA2 RX 6900 XTs. There should be power-focused refinements from RDNA2 to RDNA3, and just because it uses a pair of chips with the same CU count doesn't mean they'd be clocked the same on a single PCB; even if they were, and AMD optimized various aspects to improve efficiency, it would still draw less power. Even if AMD optimized nothing at all, a single-board design is going to be more efficient in practice. The more challenging aspect would be cooling it, but they'll probably make it water-cooled, or at least use a 3-slot rather than a 2-slot cooler. I don't know if they'd go with a 4-slot cooler; they certainly could, but if they don't increase clock speeds while making RDNA3 more power efficient, they might not need to. Just put a vapor chamber on both sides of the PCB, with four U-shaped heat pipes spaced out across the PCB's length and two blower fans on each side at the rear, exhausting all the heat outside the case. The heat pipes themselves could be filled with a bit of liquid metal, if they aren't already made that way these days.

5nm could easily make that happen and consume close to 350 W at over 2 GHz, especially if combined with HBM(3?) for low latency, which might be needed more when using chiplets.

Exactly, and with that many CUs you'd want to use the latest HBM chips regardless. The cost of HBM is easier to justify in a design with that much compute power, which can make heavy use of tons of bandwidth. Combined with Infinity Cache and a larger BAR size, it becomes more worthwhile, and it's easier to reduce the memory bus width itself to offset some of the cost, especially in a twin-chip design with more leeway and wiggle room. The faster chips get, the more wiggle room there is to trim bus width a bit without a major performance impact in every scenario. GPUs don't just push raw bandwidth at all times; by nature, a lot of data is shuffled around in chunks, or loaded once, taking a moment to do so, and then read from more or less indefinitely until it's swapped in and out as needed.
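
For a sense of the bus-width trade-off being described, here is the standard bandwidth arithmetic, with example speeds picked purely for illustration (not leaked Navi 31 specs):

# Peak bandwidth (GB/s) = bus width in bits / 8 * data rate in Gbps per pin.
def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 16.0))    # 256-bit GDDR6 @ 16 Gbps            -> 512.0 GB/s
print(bandwidth_gbs(192, 16.0))    # narrower 192-bit bus, same speed   -> 384.0 GB/s
print(bandwidth_gbs(2048, 3.2))    # two HBM2e stacks (2 x 1024-bit)    -> 819.2 GB/s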
"We heard you liked stock shortages, so we're going to take the constrained supply you're waiting on and use it to make half as many graphics cards"

I hear you, but they're a business, and they can make significantly more money selling cards aimed at the scientific, healthcare, 3D-artist and CGI industries, and so on, than selling them to the gaming community. It's just the way it is; we paved the way for all that over the years, but we're lowest on the priority list in a lot of ways at the same time. It doesn't help that they sell cards like the RX 6900 XT to high-end enthusiast gamers in the first place, rather than splitting it in two and selling two cards instead, which would benefit the industry as a whole more and keep things fairer for gamers who can't pay to win in the same way.

Always have backup parts, preferably even a complete backup PC, so you can chuck your main storage drive in there if something is wrong with it.

That's my current situation, I believe. I unintentionally enabled BitLocker on it and don't know the password, so I booted to an orange, corrupted-looking screen. I tried to install Windows 10 on it from a Media Creation Tool drive, no luck. I tried Windows repair, no luck. I still haven't worked out whether I can salvage the drive or not. I tried using it as a portable USB drive; it didn't work at first, then partially worked well enough to copy data off it, but I can't write to it to erase or format it. I tried to install Windows 10 on another drive with it in the system, and that wouldn't even work. I only attached it as a portable drive after the OS had booted from an HDD I'd installed Windows on after removing the SSD. A horrible PITA experience. I thought it was the GPU or display initially, though, so I'm actually relieved it's not either of those, particularly the GPU, which wouldn't be cheap to replace at this point in time. Windows 10 is unbelievably slow and terrible on an HDD, for the record; holy f*ck, it's a pure garbage experience. It's pretty tempting to install Linux, frankly.

That’s because CP uses some really heavy post processing and has a very soft image quality anyway.

The cloudy-piss, soft image quality effect. I generally just don't like soft unless it's lighting, and even then I prefer to keep it minimal so it doesn't get exaggerated too heavily.


DLSS is objectively great.

Objectively, I'd much rather have the raw GPU resources to devote to rasterization and ReShade shader effects myself, and to other aspects like more TMUs. You can upscale and downscale individual post-process effects in ReShade rather trivially, with an enormous degree of custom shader configuration. A lot can be tuned from around a 5 FPS impact down to 1 FPS without looking drastically different, saving a lot of performance or even objectively looking better. Too much exaggeration of post-processing can actually have a rather nasty negative impact on image quality instead of improving it, DLSS included. The performance impact is overblown: turn off AA and you gain a lot of performance; upscale from a lower resolution and you save a lot of overhead; downsample an effect at a lower resolution and you save a lot of overhead as well. It's a matter of balance, really, and just as an ASIC is fixed-function optimized performance, post-processing is best when tuned similarly, though that's more time-consuming. AI will get better at inference, will close that gap as it advances, and in many cases will even surpass human inaccuracies, but it's far from perfect today.
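
To put a number on the "run a pass at a lower internal resolution" point, here is the simple per-pass pixel-cost arithmetic (the render scales are arbitrary examples, not ReShade defaults):

# A full-screen post-process pass costs roughly in proportion to the pixels it shades.
native_w, native_h = 3840, 2160

for scale in (1.0, 0.75, 0.5):
    pixels = int(native_w * scale) * int(native_h * scale)
    print(f"scale {scale}: {pixels:,} pixels shaded "
          f"({pixels / (native_w * native_h):.0%} of native cost)")
# 0.75x shades ~56% of the pixels and 0.5x only 25%, which is why half-resolution
# bloom/AO-style passes are such a common optimization before upscaling back.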
 
My recommendation to people traversing TPU: ignore reading/discussing technical posts like these.

Jesus Christ! I felt like I'd swum through Facebook slime, and I could literally feel thousands of my brain cells dying as I read through the comments.
 
The 6900 was released in December.
The situation wasn't much different three weeks ago (same site); if anything, GPUs were a tad cheaper.

Compared to pre-current-gen GPUs, Mindfactory was selling roughly the same ballpark of GPUs weekly.
... I was talking about the RTX GPUs, wasn't I? There is obviously more reason for Radeons to have less availability, at least for the time being. It's still kind of scary that they had more 6900 XTs than 6800 + 6800 XT put together.

Also, at what point were they selling comparable amounts? Just before the launch of these new GPUs? Or 1-4 months after the launch of the previous generation? I sincerely hope you see the difference in what sales numbers to expect at those different times.
 
I find these articles so funny, because every single time AMD is announced to destroy the competition, and after the product launches there's no comment about how it got beaten. But rumours start to circulate again about the next gen being some sort of monster that will take over the competition. The end result... we know. Nvidia will launch something much better, and we'll move on to RDNA 4 rumours.
 


Upcoming Ray Tracing 2.0; apparently AMD borrowed a lesson from Nvidia :p
 
Why stop there? Why not $1,000,000.00? People seem to have more money than sense, so why not?
 
If the latency is high, they'll throw it out as a compute card for rendering and such where horsepower is needed, not lowest latency
It's probably not even an "if"; that's likely their intended market. My best guess is that AMD is planning to use the less-than-perfect monolithic chip yields for MCM in more serious compute workloads that aren't latency-critical, or are less so, and are more about parallel computational throughput, improving the tech over time. Similar to EPYC, that reduces the cost of the more premium, fully-working single-chiplet parts. If it's used for gaming at all, it'll likely start with niche enthusiasts and be catered to markets where the latency concerns are less damning, like 4K and 8K, because frame rates are lower and GPU rendering demands are much higher. The latency penalty will be more pronounced at low resolutions and high refresh rates, so they'd target that market last.

The lucrative markets that actually require GPUs for work, as opposed to fun and games, will help subsidize MCM GPU tech for gaming in future iterations, with latency improvements down the road, because it's sure to be refined over time. Luckily, what AMD has learned from Ryzen and with RDNA2 will help a lot in practice. I don't think this tech will have as many problems as Ryzen had with CCX issues, because some of that has already been ironed out with Ryzen, and Infinity Cache is a big innovation in itself. AMD is gearing up for major innovation in the coming years, as is the industry as a whole.
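
On the yield/binning point above, a small sketch of why smaller chiplets help, using the common Poisson yield approximation with an assumed defect density (not any foundry's real numbers):

import math

# Poisson yield model: yield ~ exp(-die_area_cm2 * defect_density_per_cm2).
defect_density = 0.2    # defects per cm^2, assumed for illustration
big_die_mm2    = 500    # hypothetical monolithic GPU
chiplet_mm2    = 250    # hypothetical half-size chiplet

def yield_fraction(area_mm2: float, d0: float) -> float:
    return math.exp(-(area_mm2 / 100.0) * d0)

print(f"monolithic {big_die_mm2} mm^2: {yield_fraction(big_die_mm2, defect_density):.0%}")
print(f"chiplet    {chiplet_mm2} mm^2: {yield_fraction(chiplet_mm2, defect_density):.0%}")
# Roughly 37% vs 61% defect-free dies: smaller chiplets waste far less silicon,
# and partially defective dies can still be harvested for cut-down SKUs.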

Why stop there? Why not $1,000,000.00? People seem to have more money than sense, so why not?

You'd be surprised what governments pay for the right leading-edge technology.
 