Friday, August 14th 2020

NVIDIA GeForce RTX 3090 "Ampere" Alleged PCB Picture Surfaces

As we get closer to September 1st, the day NVIDIA launches its upcoming GeForce RTX graphics cards based on the Ampere architecture, even more leaks are appearing. Today, an alleged PCB of NVIDIA's upcoming GeForce RTX 3090 was pictured and posted on social media. The PCB appears to be a third-party design from one of NVIDIA's add-in board (AIB) partners, Colorful. The picture is blurred over most of the PCB and has an Intel CPU covering the GPU die area to hide information. There are 11 GDDR6X memory modules surrounding the GPU, placed very close to it. Another notable difference is the NVLink finger, which appears to be a new design. Check out the screenshot of the Reddit thread and the PCB pictures below:
[Images: NVIDIA GeForce RTX 3090 PCB]
Source: VideoCardz

72 Comments on NVIDIA GeForce RTX 3090 "Ampere" Alleged PCB Picture Surfaces

#51
Unregistered
jabbadap: Well, maybe it's a dual GPU card. Could explain the name RTX 3090 too...
I'd be very interested in that, but skeptical. We shall see.
#52
moproblems99
Fluffmeister: I guess that's when undervolting became a hobby too.
The GTX 480 probably spurred that. Hawaii made it a thing, and AMD carried the torch from then on, giving enthusiasts something else to tweak. We should thank them, really. :laugh:
#53
steen
Valantar: For those of you claiming there's a 12th memory die below the GPU die area...
FFS, it's silk-screened 1-12 for the memory modules. The bottom module (M506) is on the bottom left corner of the KOA. There's even a less pixelated version out there.
Valantar: No. See above. And it definitely won't be 22GB on a 384-bit bus...
Who said definitely 22GB? You...?
Valantar: For those of you saying that the CPU for some reason covering the area behind the die somehow confirms the RT coprocessor...
Who said that? I heard FPGA & assumed PAM4 transceiver...

Edit: Typed wrong module number.
#54
Valantar
steen: FFS, it's silk-screened 1-12 for the memory modules. The bottom module (M506) is on the bottom left corner of the KOA. There's even a less pixelated version out there.
Got a link? Besides, they are numbered 4-1 on the left, 5, 7 and 8 on the top, and 9-12 on the right. Are you then saying that RAM channel 6, of all numbers, is on the opposite side of the die from its neighbors? Sorry, but that's not how RAM topologies, controllers and PHYs are laid out. Typically a single controller handles more than one channel, while what you are suggesting would necessitate a controller on the fourth side of the die. And even if that were the case, why would it be number 6, and not 1 or 12? My guess: the die has 12 channels, but only 11 are used for this version of the card. Knowing this, Nvidia decided to go for shorter RAM traces by sticking the RAM in between the cooler mounting holes. This wouldn't fit with four chips per side, but unlike the 2080 Ti's cheap solution of keeping the PCB the same and simply not populating one channel, they chose to optimize the memory topology by shortening the traces as much as possible, which likely allows them to increase memory frequencies noticeably.
steen: Who said definitely 22GB? You...?
I said 22GB on a 352-bit bus. You were trying to correct me, yet only addressed the one number. I'm just asking for a tad of consistency.
steen: Who said that? I heard FPGA & assumed PAM4 transceiver...
PAM4 transceiver? As in an off-die transceiver for the memory? Wow, that would massively increase memory latency. Not happening. And an FPGA? For what? I was referring to the previous rumors about a separate RT coprocessor die; I thought I saw people referring to that rumor, though tbh I can't be bothered to go back and look. It's well worth preemptively shooting down even if nobody brought it up.
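
For anyone wanting to sanity-check the arithmetic in this exchange: each GDDR6/GDDR6X module drives a 32-bit channel, so the module count fixes the bus width, and capacity is modules times per-module density. A minimal Python sketch (the 21 Gbps per-pin rate is an assumed figure from contemporaneous GDDR6X rumors, not something visible in this leak):

    # One GDDR6/GDDR6X module = one 32-bit channel.
    def bus_width_bits(modules):
        return modules * 32

    def capacity_gb(modules, density_gb):
        return modules * density_gb

    def bandwidth_gb_s(modules, pin_rate_gbps):
        # GB/s = bus width (bits) * per-pin rate (Gb/s) / 8 bits per byte
        return bus_width_bits(modules) * pin_rate_gbps / 8

    for modules, density in [(11, 1), (11, 2), (12, 1), (12, 2)]:
        print(f"{modules} x {density} GB -> {bus_width_bits(modules)}-bit, "
              f"{capacity_gb(modules, density)} GB, "
              f"{bandwidth_gb_s(modules, 21):.0f} GB/s at 21 Gbps")

Running it shows 22GB only ever pairs with a 352-bit bus (11 channels) and 24GB with 384-bit (12 channels), which is exactly the consistency being argued over.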
#55
steen
Valantar: Got a link? Besides, they are numbered 4-1 on the left, 5, 7 and 8 on the top, and 9-12 on the right.
So where's Waldo?
Valantar: Are you then saying that RAM channel 6, of all numbers, is on the opposite side of the die from its neighbors? Sorry, but that's not how RAM topologies, controllers and PHYs are laid out.
You do need the link. It's on the first page of this thread, halfway down. I concede the pad (if any) may not be populated by a module, but yes, the label is M506. M505 might work somewhat better with the layout, but alas, it is what it is. It may even be ordering for the pick/place machine or a PCB printing error. Topology/layout is a given, especially with the Micron info & the perf breakdown for a 12x GDDR6X module 3090. This seems an interesting PCB, so it may be marginal to redesign/validate for every GA102 SKU.
Valantar: I said 22GB on a 352-bit bus. You were trying to correct me, yet only addressed the one number. I'm just asking for a tad of consistency.
Your consistency does not compute. We'll see in a few weeks anyway.
Valantar: PAM4 transceiver? As in an off-die transceiver for the memory? Wow, that would massively increase memory latency. Not happening. And an FPGA? For what? I was referring to the previous rumors about a separate RT coprocessor die; I thought I saw people referring to that rumor, though tbh I can't be bothered to go back and look. It's well worth preemptively shooting down even if nobody brought it up.
The OP on Reddit claimed the CPU to be hiding a chip. Through-vias are not impossible, but not likely under the GPU. I assumed a supercap of sorts. I heard rumour of an on-board FPGA & PAM4, but not in the context of GDDR6X. A separate RT traversal chip would do all sorts of interesting things to the thread/ray/scheduling of an SM.
#56
Valantar
steen: So where's Waldo?
As I said: unpopulated for this SKU would be my bet. Nothing at all stopping them from extending the traces a bit for the SKU with all channels populated. That might, for example, be the (leaked by Micron) 12GB 3090, while this might then be an 11GB (with all the memory on the back? Yeah, that's weird.) 3080 Ti or something like that.
steen: You do need the link. It's on the first page of this thread, halfway down. I concede the pad (if any) may not be populated by a module, but yes, the label is M506. M505 might work somewhat better with the layout, but alas, it is what it is. It may even be ordering for the pick/place machine or a PCB printing error. Topology/layout is a given, especially with the Micron info & the perf breakdown for a 12x GDDR6X module 3090. This seems an interesting PCB, so it may be marginal to redesign/validate for every GA102 SKU.
You're right, those markings are indeed visible. I'd still rather be overly skeptical than jump on this, though, as a layout like that is entirely unprecedented. And just plain weird. I'd much rather lean towards the sensible side and be surprised than the opposite.
steen: Your consistency does not compute. We'll see in a few weeks anyway.
So asking you to actually follow through with your math doesn't compute? Oh dear. All I'm asking is that if you are attempting to correct one number, you also correct the other number right next to it that is inextricably linked to the first.
steen: The OP on Reddit claimed the CPU to be hiding a chip. Through-vias are not impossible, but not likely under the GPU. I assumed a supercap of sorts. I heard rumour of an on-board FPGA & PAM4, but not in the context of GDDR6X. A separate RT traversal chip would do all sorts of interesting things to the thread/ray/scheduling of an SM.
Not that I put any trust whatsoever in anyone posting anything on Reddit, but "hiding a chip" is exactly what I was referring to in my post. As likely as anything else, the person taking the photo was aware of the persistent (yet thoroughly debunked) RT coprocessor rumors and decided to mess with the people desperate for Ampere rumors. Of course, there might be something there. But I highly doubt it. And indeed, a separate RT traversal chip would do all sorts of interesting things - it's just that most of them would be severely detrimental to performance.

But as you say, we'll see in a few weeks' time. I'll be happy to be proven wrong - that would mean a series of very interesting and ambitious/silly design decisions, which will be very interesting to see play out. But until then, or until more conclusive evidence shows up, I'll remain skeptical.
#57
steen
Valantar: As I said: unpopulated for this SKU would be my bet. Nothing at all stopping them from extending the traces a bit for the SKU with all channels populated. That might, for example, be the (leaked by Micron) 12GB 3090, while this might then be an 11GB (with all the memory on the back? Yeah, that's weird.) 3080 Ti or something like that.

You're right, those markings are indeed visible. I'd still rather be overly skeptical than jump on this, though, as a layout like that is entirely unprecedented. And just plain weird. I'd much rather lean towards the sensible side and be surprised than the opposite.
If we're speculating, there may not be a 3080 Ti this time round. 384/320-bit only, with three tiers based on full/salvage GA102. If you want weird, how about 12GB & 22GB capacities? With clamshell mode, the additional load may ordinarily necessitate a clock reduction, but GDDR6X at low clock drops to half data rate, so they get rid of the two modules you're offended by instead.
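
For reference, clamshell mode hangs two modules off one 32-bit channel, each using half the data pins, so capacity doubles while the bus width stays put. A minimal sketch of the 12GB / 22GB split described above, assuming 1GB modules:

    # Clamshell: two modules share one 32-bit channel (16 data pins each),
    # doubling capacity without widening the bus.
    def config(channels, density_gb, clamshell=False):
        modules = channels * (2 if clamshell else 1)
        return channels * 32, modules * density_gb  # (bus width in bits, GB)

    print(config(12, 1))                  # (384, 12): 384-bit, 12GB
    print(config(11, 1, clamshell=True))  # (352, 22): 352-bit, 22GB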
Valantar: So asking you to actually follow through with your math doesn't compute? Oh dear. All I'm asking is that if you are attempting to correct one number, you also correct the other number right next to it that is inextricably linked to the first.
Que? They're not mutually exclusive. Do we need to spell everything out (arithmetically)?
Valantar: Not that I put any trust whatsoever in anyone posting anything on Reddit, but "hiding a chip" is exactly what I was referring to in my post. As likely as anything else, the person taking the photo was aware of the persistent (yet thoroughly debunked) RT coprocessor rumors and decided to mess with the people desperate for Ampere rumors. Of course, there might be something there. But I highly doubt it. And indeed, a separate RT traversal chip would do all sorts of interesting things - it's just that most of them would be severely detrimental to performance.
Sure.
Valantar: But as you say, we'll see in a few weeks' time. I'll be happy to be proven wrong - that would mean a series of very interesting and ambitious/silly design decisions, which will be very interesting to see play out. But until then, or until more conclusive evidence shows up, I'll remain skeptical.
I think Nv stepped up to the plate with "ambitious & silly" for this cycle because they could, and because of circumstances. They've certainly made things interesting. I think it works for the 2021 refresh cycle as well as for Hopper. Think Nv will be content with lower margins?
#58
GhostRyder
I am more interested in the 3090 naming, mostly because NVidia's x90 series was pretty much exclusively dual GPU. I mean, I think it's going to be weird, especially if they still use Ti with the tiers having been moved around for the lineup. But hey, whatever ends up highest is going to be curious (3080 Ti, 3090, or 3090 Ti).
#59
Th3pwn3r
GhostRyder: I am more interested in the 3090 naming, mostly because NVidia's x90 series was pretty much exclusively dual GPU. I mean, I think it's going to be weird, especially if they still use Ti with the tiers having been moved around for the lineup. But hey, whatever ends up highest is going to be curious (3080 Ti, 3090, or 3090 Ti).
They'll have the 3080, 3080 Super, 3080 Ti, 3080 Super Ti. Then they'll come up with a dozen or so more versions with minimal differences; some will be cherry-picked, some will be gimped...

I'm gonna be butthurt when I get the top tier card and then the next week there's one slightly better.
#60
Jism
Nvidia Bulldozer TI.
#61
Fouquin
Proedros: Exactly, we are looking at the back. This is possibly a co-processor-like chip. 1GB memory modules front and back for 22GB total.
The latency associated with any co-processor, even on-board, would entirely negate the efficacy of said co-processor. On-die or don't bother. With modern GPUs throwing around literal terabytes per second of data, there's no way in hell it makes sense to stuff a co-processor on the board that does anything useful. The prior co-processor rumors link back to nVidia's patent filing, which shows the logic sharing L1 cache lines within the GPC. Not external.
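
A rough back-of-envelope illustrates the point: at around a terabyte per second of traffic, every board-level round trip leaves a large amount of data stalled in flight. Both figures below are illustrative assumptions, not leak data:

    # Data left "in flight" while a request waits on an off-die round trip.
    bandwidth_bytes_s = 1e12  # assume ~1 TB/s aggregate GPU memory traffic
    hop_latency_s = 100e-9    # assume ~100 ns board-level round trip
    in_flight = bandwidth_bytes_s * hop_latency_s
    print(f"{in_flight / 1024:.0f} KiB stalled per 100 ns hop")  # ~98 KiB

Hiding that requires deep queues and large buffers on every hop, which is why logic sharing an L1 on-die is the only placement that makes sense.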
#62
TheUn4seen
I always wondered: why do people even care about such "leaks"? This is just the manufacturer sending out feelers to check how people react, so you all get duped into doing their work, or a random guy looking for his second of fame on social media. You can't estimate performance numbers from such "leaks" with any reasonable margin of error, so it's just a waste of time before the final product is released, priced and reviewed. If the 3090 is at least 50% faster than my 2080 Ti I'll buy it; if not, I'll just forget it exists and postpone the upgrade until the next generation. Who cares, it's just a product.
#63
medi01
Dude, AMD stock hit 80 bucks, mcap close to 100 billion...
I guess expectations from the next gen GPU cards are already set in the right circles.
#64
Vayra86
Seems like the 16GB and higher rumors were total BS after all (duh).

Can we reintroduce some common sense here and consider the fact that Nvidia has always kept a pretty tight lid on their actual releases? It gets out when they want it to, generally. It's not AMD, guys. For a recent example, remember how SUPER surfaced for us: a little teaser and we were all left guessing. Almost nobody nailed it, and many people thought we would get a step above the 2080 Ti. That never happened.

We are looking at a PCB with some logos and a bloody full-sized Intel IHS, and imagination runs wild... Most of what we've seen doesn't really line up all that well. Relax, sit back, and just wait it out ;)
#65
medi01
Vayra86: It gets out when they want it to,
Right. They just happened to want "supers" right when AMD rolled out Navi, purely by coincidence.
Obviously.
#66
Th3pwn3r
TheUn4seen: I always wondered: why do people even care about such "leaks"?
You tell me, why did you click on the thread? People love rumors and the like. It's all in good fun if you ask me.
#67
TheUn4seen
Th3pwn3r: You tell me, why did you click on the thread? People love rumors and the like. It's all in good fun if you ask me.
Personally, I clicked in my ongoing quest to understand why people want to discuss such topics, which, to me at least, seems like a waste of time with no useful outcome. If, as you say, it is fun for some to waste time on unsubstantiated rumors, I'll just go my way and wait for the actual product to be released, so there's no need to speculate on its parameters.
#68
Vayra86
TheUn4seen: Personally, I clicked in my ongoing quest to understand why people want to discuss such topics, which, to me at least, seems like a waste of time with no useful outcome. If, as you say, it is fun for some to waste time on unsubstantiated rumors, I'll just go my way and wait for the actual product to be released, so there's no need to speculate on its parameters.
Sure, it's good fun. But some take it to another level and connect all sorts of realities to whatever is supposed to be the next release. Before you know it, the next GPU is doing Skynet, RT, and gaming all at the same time, with double the memory of the past gen and doubled power consumption figures. You can read back... people actually went that far, well, apart from Skynet.
#69
efikkan
Vayra86: Seems like the 16GB and higher rumors were total BS after all (duh).

Can we reintroduce some common sense here and consider the fact that Nvidia has always kept a pretty tight lid on their actual releases? It gets out when they want it to, generally. It's not AMD, guys. For a recent example, remember how SUPER surfaced for us: a little teaser and we were all left guessing. Almost nobody nailed it, and many people thought we would get a step above the 2080 Ti. That never happened.

We are looking at a PCB with some logos and a bloody full-sized Intel IHS, and imagination runs wild... Most of what we've seen doesn't really line up all that well. Relax, sit back, and just wait it out ;)
Do you remember how most were assuming Turing was a "Pascal refresh", many claiming at most 10% more performance? Many "sources" even thought the name would be 11xx up until a few days before the release.

Whatever amount of VRAM Nvidia has chosen for their upcoming cards, it's very unlikely to change right before the release, as this can't change after the final design goes into mass production.

I would agree with some common sense about rumors and "leaks"; most of them don't pass the sniff test. As of right now there are many rumors floating about, like: "Ampere will be very power efficient", "Ampere will be a power hog", "Ampere will have xx GB VRAM", "Nvidia will move to Samsung 8nm", etc. Contradiction usually indicates people are guessing, especially when it comes to details which are known internally 1.5-2 years ahead, like die configurations, memory buses, production node, etc. If "leakers" get things like this wrong, then they are making stuff up. Perhaps we should start to make a timeline of various "leaks"; that would make it obvious which of these "sources" are BS.
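
If anyone actually wanted to keep that timeline, a minimal scorecard could look like the sketch below; the source name and claim are made up for illustration:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Leak:
        source: str                     # who made the claim
        claim: str                      # what was claimed
        date: str                       # when it surfaced
        correct: Optional[bool] = None  # graded after launch

    leaks = [Leak("some-leaker", "Ampere moves to Samsung 8nm", "2020-06-01")]
    # After launch: set leaks[i].correct, then rank sources by hit rate.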
#70
Th3pwn3r
TheUn4seen: Personally, I clicked in my ongoing quest to understand why people want to discuss such topics, which, to me at least, seems like a waste of time with no useful outcome. If, as you say, it is fun for some to waste time on unsubstantiated rumors, I'll just go my way and wait for the actual product to be released, so there's no need to speculate on its parameters.
One could argue that anything that doesn't earn you income is a waste of time. Personally I think that's stupid.
#71
TheUn4seen
Th3pwn3r: One could argue that anything that doesn't earn you income is a waste of time. Personally I think that's stupid.
Personally, I think any activity which doesn't result in increased knowledge or an improvement to general conditions is a waste of time by definition. Discussing a future product based on a picture of one of its parts seems to only benefit the manufacturer by creating mindshare and so-called hype, which leads me to question why consumers would want to work for corporate marketing.
#72
kayjay010101
TheUn4seen: Personally, I think any activity which doesn't result in increased knowledge or an improvement to general conditions is a waste of time by definition. Discussing a future product based on a picture of one of its parts seems to only benefit the manufacturer by creating mindshare and so-called hype, which leads me to question why consumers would want to work for corporate marketing.
So by your definition gaming is a waste of time, and by extension, gaming hardware is a waste of time?
I'd argue reading your comments is a waste of time by your definition.