
GIGABYTE Z690 AERO D Combines Function with Absolute Form

btarunr

Editor & Senior Moderator
GIGABYTE's AERO line of motherboards and notebooks targets creators who like to game. The company is ready with a premium motherboard based on the Intel Z690 chipset, the Z690 AERO D. This has to be the prettiest-looking motherboard we've come across in a long time, and it appears to have the chops to match. The Socket LGA1700 motherboard uses large ridged-aluminium heatsinks over the chipset, the M.2 NVMe slots, and a portion of the rear I/O shroud. Aluminium fin-stack heatsinks fed by heat pipes cool the CPU VRM. You get two PCI-Express 5.0 x16 slots (x8/x8 with both populated). As an AERO series product, the board is expected to be loaded with connectivity relevant to content creators, although the box is missing a Thunderbolt logo. We expect at least 20 Gbps USB 3.2 Gen 2x2 ports, 10 GbE networking, and Wi-Fi 6E.



View at TechPowerUp Main Site
 
Heatsinks have grown in size on the majority of Gigabyte boards, and they're all proper finned heatsinks.
 
The NVMe heatsink looks ugly, but c'mon guys: flat is bad! What's the use of all that metal without the surface area to use it for cooling?
 
Pretty sure Gigabyte fired its design team during this pandemic.
 
Pretty sure Gigabyte fired its design team during this pandemic.
Not many people were laid off in Taiwan, since the pandemic hasn't really affected the country.
One benefit of being a nation that understands what's going on in the PRC and can act on it early.
 
I'm so sad to see this board get dual PCIe 5.0 slots (with switches) while the AORUS Master doesn't.
 
Again, not a single x1 slot! Whatever happened between Z390 and now? Where have all those slots gone? Where am I supposed to put my sound card, additional 2.5GbE/5GbE NICs, TV card, etc.?
 
Again, not a single x1 slot! Whatever happened between Z390 and now? Where have all those slots gone? Where am I supposed to put my sound card, additional 2.5GbE/5GbE NICs, TV card, etc.?
In an entry-level board.
 
So someone who prefers a discrete high-end sound card or wants to use 2x 5GbE should buy an entry-level board? Really?

I would rather they put something like the ASUS Hyper M.2 card in the box as standard instead of cluttering the board with more than two M.2 slots, most of which will only be usable by sacrificing several SATA ports and/or the x4 slot.

Even better would be if they offered more U.2 ports, and either offered NVMe SSDs in the 2.5" U.2 form factor again, or at least a 3.5" form-factor carrier with 2-4 U.2 ports and 2-4 M.2 slots. This way, SSDs would move from the mainboard back to the case, where storage belongs and can be cooled directly by the front fans. Best of all would be a new connector for that task, offering a PCIe x16 link to the board.
 
Again, not a single x1 slot! Whatever happened between Z390 and now? Where have all those slots gone? Where am I supposed to put my sound card, additional 2.5GbE/5GbE NICs, TV card, etc.?
Gigabyte has hated the x1 slot since Z490.

So someone who prefers a discrete high-end sound card or wants to use 2x 5GbE should buy an entry-level board? Really?

I would rather they put something like the ASUS Hyper M.2 card in the box as standard instead of cluttering the board with more than two M.2 slots, most of which will only be usable by sacrificing several SATA ports and/or the x4 slot.

Even better would be if they offered more U.2 ports, and either offered NVMe SSDs in the 2.5" U.2 form factor again, or at least a 3.5" form-factor carrier with 2-4 U.2 ports and 2-4 M.2 slots. This way, SSDs would move from the mainboard back to the case, where storage belongs and can be cooled directly by the front fans. Best of all would be a new connector for that task, offering a PCIe x16 link to the board.
This one's got dual LAN (10G and 2.5G) onboard, no worries.
It's got dual ALC codecs too, super high end lol (not the ALC4082, btw).
 
Again, not a single x1 slot! Whatever happened between Z390 and now? Where have all those slots gone? Where am I supposed to put my sound card, additional 2.5GbE/5GbE NICs, TV card, etc.?
Considering that 10GbE is expected to be built in, why add a slower NIC? Your sound card can go in either of the lower slots, and your GPU does not need PCIe 4.0 or 5.0 x16 unless you have very, very specialized workloads. And a TV card? Do those still exist?


The reality is that the vast majority of PCs will never see more than one AIC installed. Ever. All the while, GPUs keep getting thicker. Thus, removing the slots likely to be blocked by a GPU makes reasonable sense.
 
Ah, I missed that it will get 10GbE. And yes, there are still very capable TV cards available here in Germany, but Linux-based DVB receivers are a better option, especially if you want to record pay TV.
That still leaves the high-end sound card. Why should I have to put it in an x16 slot and sacrifice either eight lanes for my GPU or the x4 slot, which will probably be shared with an M.2 anyway?
 
Ah, I missed that it will get 10GbE. And yes, there are still very capable TV cards available here in Germany, but Linux-based DVB receivers are a better option, especially if you want to record pay TV.
That still leaves the high-end sound card. Why should I have to put it in an x16 slot and sacrifice either eight lanes for my GPU or the x4 slot, which will probably be shared with an M.2 anyway?
The bottom x4 slot is Gen 3 PCIe, and all the M.2 slots on this board are Gen 4, so there's no bandwidth sharing there.


All the while, GPUs keep getting thicker. Thus, removing the slots likely to be blocked by a GPU makes reasonable sense.
Seriously, only Gigabyte AORUS cards take up four slots, and their GPU temps aren't even low lol.
 
Won't an x1 card just run at x1 regardless? Maybe I'm out of the loop, but even if the lanes were all shared, wouldn't the x1 card just operate at x1 speeds? That's how it used to work, at least.
 
Only if the lanes are not all shared. If the x4 slot is shared with an M.2, often all four lanes are, so if you put an x4 NVMe SSD in said M.2, the x4 slot becomes useless. Sometimes only two lanes are shared, or two lanes are shared with the x4 slot and two others with SATA ports.

But as asdkj1740 pointed out, the x4 slot on this board is Gen 3 and the M.2 slots are all Gen 4, so there is no sharing between them. But regardless of how well equipped the board already is, a board with only one free slot for expansion cards is not for me.
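If it helps to picture it, here's a toy sketch of how that kind of sharing resolves. The sharing map is invented for illustration and is not this board's actual topology:

```python
# Toy model of chipset lane sharing. The sharing map below is made up
# for illustration; it is NOT this board's actual topology.
SHARING = {
    "PCIE_X4": {"shares_with": "M2_2", "lanes": 4},   # x4 slot vs. second M.2
    "SATA_5_6": {"shares_with": "M2_3", "lanes": 2},  # two SATA ports vs. third M.2
}

def resolve(populated_m2: set) -> dict:
    """Report which shared ports survive a given set of populated M.2 sockets."""
    status = {}
    for port, rule in SHARING.items():
        if rule["shares_with"] in populated_m2:
            status[port] = f"disabled (lanes taken by {rule['shares_with']})"
        else:
            status[port] = f"active at x{rule['lanes']}"
    return status

print(resolve({"M2_2"}))
# {'PCIE_X4': 'disabled (lanes taken by M2_2)', 'SATA_5_6': 'active at x2'}
```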
 
Seriously, only Gigabyte AORUS cards take up four slots, and their GPU temps aren't even low lol.
A three-slot GPU still needs air though, so if another AIC is installed, this ensures that even thick cards like a 3090 get access to air, rather than being partially choked by your sound card or whatever.
Why should I have to put it in an x16 slot and sacrifice either eight lanes for my GPU or the x4 slot, which will probably be shared with an M.2 anyway?
Why not? It's not like it will make any difference whatsoever. The difference for a 3090 between PCIe 3.0 x8 and x16 is 1-2%. On PCIe 4.0 it is less; on PCIe 5.0 it will be less still. And unless you're running server compute loads, GPU bandwidth needs increase very slowly. You won't run into a bottleneck any time soon on 4.0 x8 with a current-gen GPU, or on 5.0 x8 with a future one. Not even close.

And given that Intel chipsets tend to have 30-ish lanes of PCIe 3.0, why would the bottom slot be shared with an M.2?
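For reference, the back-of-the-envelope bandwidth math behind that claim (approximate per-direction figures, after encoding overhead):

```python
# Approximate usable bandwidth per lane, per direction, in GB/s
# (after 128b/130b encoding overhead for PCIe 3.0 and newer).
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

for gen, bw in GBPS_PER_LANE.items():
    print(f"PCIe {gen}: x8 = {8 * bw:.1f} GB/s, x16 = {16 * bw:.1f} GB/s")
# PCIe 3.0: x8 = 7.9 GB/s,  x16 = 15.8 GB/s
# PCIe 4.0: x8 = 15.8 GB/s, x16 = 31.5 GB/s  (so 4.0 x8 matches 3.0 x16)
# PCIe 5.0: x8 = 31.5 GB/s, x16 = 63.0 GB/s
```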
 
Only if they are not all shard. If the x4-Slot is shared with an M.2, often all 4 lanes are, so if you put an x4 NVMe-SSD in said M.2, the x4-Slot becomes useless. Sometimes, only 2 Lanes are shared or 2 Lanes are shared with the x4-Slot, 2 others with SATA-Ports.

But as asdkj1740 pointed out, the x4-slot on this board is Gen3, the M.2 are all Gen4, so there is no sharing between them. But regardless how good the board is equipped already, a board with only one slot for expansion cards is not for me.

I should've just read the OP: 2x PCIe 5.0 at x8 (16 CPU lanes total) when both slots are populated. :oops:
 
@Valantar: So I should buy a high-end Z690 board and then put a PCIe 1.1/2.0/3.0 x1 card in a PCIe 5.0 x8 slot, because Gigabyte decided x1 slots are no longer needed?

30-ish lanes is a bit much. Since Z270, the Z-series PCHs have had a pool of HSIO lanes, which are not the same as PCIe lanes, because many of them have to be used for other I/O ports as standard and only some can be shared with PCIe or M.2 slots. On Z590, not counting six HSIO lanes that are always either one USB 3.2 5 Gbps port per lane or one 10 Gbps port per two lanes, and eight HSIO lanes always reserved for DMI, you have 24 lanes left, of which six always go to SATA (but can be shared) and one goes to (2.5)GbE.
So you are left with 17 lanes that can always be used for PCIe and/or M.2 slots, though four of those can also be used for USB 3.2 5 Gbps.

Anyway, I have somehow not come across any board since Z270 with more than two unshared M.2 slots; often only one M.2 slot, or none at all, is unshared.

Now, Z690 is specced to have up to 12 Gen 4 lanes and up to 16 Gen 3 lanes, so that sounds like an increase in total lane count, but we will only see what that really means once all the specs are revealed. For now, this board seems to have four M.2 slots, of which one should be linked to the CPU. For all we know, one or two of the others could be shared with the PCIe 5.0 x8 slot, to make it possible to use a Gen 5 M.2 SSD without an adapter.
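To keep my accounting straight in one place, here's the Z590 tally as a quick sketch (the 38-lane total is implied by the counts above, not taken from an official Intel datasheet):

```python
# Z590 HSIO budget as tallied above (my numbers, not an official datasheet).
total_hsio = 38
usb_fixed = 6    # always USB 3.2: one 5 Gbps port per lane, or one 10 Gbps per two
dmi_fixed = 8    # always reserved for the DMI link to the CPU
sata_fixed = 6   # always SATA-capable, though shareable
gbe_fixed = 1    # (2.5)GbE

flexible = total_hsio - usb_fixed - dmi_fixed   # 24 lanes
for_slots = flexible - sata_fixed - gbe_fixed   # 17 lanes for PCIe/M.2 slots
print(flexible, for_slots)  # 24 17 (4 of the 17 can also serve USB 3.2 5 Gbps)
```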
 
Whoops, I didn't see the x4 slot you're referring to down there; I thought it was just the two 5.0 slots. Makes sense now lol
 
@Valantar: So I should buy a high-end Z690 board and then put a PCIe 1.1/2.0/3.0 x1 card in a PCIe 5.0 x8 slot, because Gigabyte decided x1 slots are no longer needed?

30-ish lanes is a bit much. Since Z270, the Z-series PCHs have had a pool of HSIO lanes, which are not the same as PCIe lanes, because many of them have to be used for other I/O ports as standard and only some can be shared with PCIe or M.2 slots. On Z590, not counting six HSIO lanes that are always either one USB 3.2 5 Gbps port per lane or one 10 Gbps port per two lanes, and eight HSIO lanes always reserved for DMI, you have 24 lanes left, of which six always go to SATA (but can be shared) and one goes to (2.5)GbE.
So you are left with 17 lanes that can always be used for PCIe and/or M.2 slots, though four of those can also be used for USB 3.2 5 Gbps.

Anyway, I have somehow not come across any board since Z270 with more than two unshared M.2 slots; often only one M.2 slot, or none at all, is unshared.

Now, Z690 is specced to have up to 12 Gen 4 lanes and up to 16 Gen 3 lanes, so that sounds like an increase in total lane count, but we will only see what that really means once all the specs are revealed. For now, this board seems to have four M.2 slots, of which one should be linked to the CPU. For all we know, one or two of the others could be shared with the PCIe 5.0 x8 slot, to make it possible to use a Gen 5 M.2 SSD without an adapter.
Z590: 38 HSIO = 6 USB 3.x (5 Gbps or above) + 24 PCIe (USB/SATA/PCIe) + 8 DMI 3.0.
Z690: 38 HSIO = 10 USB 3.x (5 Gbps or above) + 28 PCIe (PCIe 3.0/PCIe 4.0/SATA).
Intel's HSIO definition for Z690 excludes DMI; it would be 46 HSIO if the eight DMI 4.0 lanes were included.

On the Z690 Vision D (i.e. this AERO D), all 28 PCIe lanes are used, so there is actually no spare lane for an x1 PCIe slot:
3x M.2 Gen 4 = 12 PCIe 4.0 lanes used.
1x PCIe slot (x4) + 1x Wi-Fi (x1) + 1x 10G LAN (x2) + 1x 2.5G LAN (x1) + 1x Thunderbolt (x4) + 4x SATA (x4) = 16 PCIe 3.0 lanes used.

Actually, again, an ASM1061 can do the trick. :)
Sadly, a 22110 drive kills the PCIe x1 slot. God bless ASUS's vertical M.2 design of the past.
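A quick sketch tallying that up, using the counts above:

```python
# Chipset lane tally for this board, using the counts listed above.
gen4_used = 3 * 4                 # three Gen 4 M.2 sockets at x4 each = 12
gen3 = {"x4 slot": 4, "Wi-Fi": 1, "10G LAN": 2,
        "2.5G LAN": 1, "Thunderbolt": 4, "4x SATA": 4}
print(gen4_used, sum(gen3.values()))  # 12 + 16 = all 28 chipset PCIe lanes used
```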
 
@Valantar: So I should buy a high-end Z690 board and then put a PCIe 1.1/2.0/3.0 x1 card in a PCIe 5.0 x8 slot, because Gigabyte decided x1 slots are no longer needed?

30-ish lanes is a bit much. Since Z270, the Z-series PCHs have had a pool of HSIO lanes, which are not the same as PCIe lanes, because many of them have to be used for other I/O ports as standard and only some can be shared with PCIe or M.2 slots. On Z590, not counting six HSIO lanes that are always either one USB 3.2 5 Gbps port per lane or one 10 Gbps port per two lanes, and eight HSIO lanes always reserved for DMI, you have 24 lanes left, of which six always go to SATA (but can be shared) and one goes to (2.5)GbE.
So you are left with 17 lanes that can always be used for PCIe and/or M.2 slots, though four of those can also be used for USB 3.2 5 Gbps.

Anyway, I have somehow not come across any board since Z270 with more than two unshared M.2 slots; often only one M.2 slot, or none at all, is unshared.

Now, Z690 is specced to have up to 12 Gen 4 lanes and up to 16 Gen 3 lanes, so that sounds like an increase in total lane count, but we will only see what that really means once all the specs are revealed. For now, this board seems to have four M.2 slots, of which one should be linked to the CPU. For all we know, one or two of the others could be shared with the PCIe 5.0 x8 slot, to make it possible to use a Gen 5 M.2 SSD without an adapter.
That's possible, though I doubt it, given that x8+x4+x4 bifurcation (trifurcation?) generally isn't allowed on Intel boards; it's either x16 or x8+x8. Of course, that might change.

As for buying a high-end board to put a low-bandwidth card in a 5.0 x8 slot: if that's your combination of needs, then yes. Given that literally nothing consumer-facing in the next couple of years will make meaningful use of PCIe 5.0 (yes, that includes SSDs; benchmarks might "make use of it", but no real-world workloads will), the bandwidth will be "wasted" anyhow. Using the available lanes for useful AICs is less wasteful IMO, and if you're convinced otherwise despite there being no tangible practical or performance loss, I'd work on leaving that conviction behind, as it won't meaningfully affect anything. Though if you are one of the very few people who need at least three AICs and might need more, this is clearly not the board for you. And that's fine, really. There will be others.
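Just to be concrete about the splits being discussed, a small sketch (which of these a given board actually exposes is up to Intel's platform rules and the vendor):

```python
# The CPU-lane splits discussed here; each must account for all 16 PEG lanes.
CONFIGS = {
    "x16": [16],
    "x8+x8": [8, 8],
    "x8+x4+x4": [8, 4, 4],  # the rare "trifurcation" case
}
for name, split in CONFIGS.items():
    assert sum(split) == 16  # every layout uses the full 16 CPU lanes
    print(f"{name}: slot widths {split}")
```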
 
That's possible, though I doubt it, given that x8+x4+x4 bifurcation (trifurcation?) generally isn't allowed on Intel boards; it's either x16 or x8+x8. Of course, that might change.

As for buying a high-end board to put a low-bandwidth card in a 5.0 x8 slot: if that's your combination of needs, then yes. Given that literally nothing consumer-facing in the next couple of years will make meaningful use of PCIe 5.0 (yes, that includes SSDs; benchmarks might "make use of it", but no real-world workloads will), the bandwidth will be "wasted" anyhow. Using the available lanes for useful AICs is less wasteful IMO, and if you're convinced otherwise despite there being no tangible practical or performance loss, I'd work on leaving that conviction behind, as it won't meaningfully affect anything. Though if you are one of the very few people who need at least three AICs and might need more, this is clearly not the board for you. And that's fine, really. There will be others.
x8+x4+x4 has nothing to do with Intel as long as the chipset is eligible for it (OK, Intel has 100% to do with this lol; B660 still has it locked down by Intel).
The Z590 Strix-A supports x8+x4+x4 (but no x8+x8).
Switches are costly for mobo vendors, and so is PCB material grade.
The PCB material needs to be good enough to carry a PCIe signal across the whole board (from the CPU to the bottom edge) while maintaining signal integrity, not to mention that on Z690 we are talking about Gen 5.
No idea about the cost of a Gen 5 PCIe slot, but most Gen 5 slots on Z690 boards are the SMD type; it won't be cheap, I guess.

x8+x4+x4 or x8+x8 needs four PCIe switches if those switches are 1:2 / 2:1 with four differential pairs each.
Offering x8+x4+x4 plus an x8+x8 option needs six such switches.
The Z390 Designare supports both x8+x4+x4 and x8+x8 by having six switches; these days it's rare to see more than four switches handling the CPU's x16 PCIe lanes.
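For anyone checking the math, a quick sketch under the stated assumptions (each switch IC handles four differential pairs; one lane needs two, TX and RX):

```python
# Switch-count math for CPU lane steering, assuming 1:2 / 2:1 switch ICs
# that each handle 4 differential pairs; one PCIe lane = 2 pairs (TX+RX).
PAIRS_PER_SWITCH = 4
PAIRS_PER_LANE = 2

def switches(rerouted_lanes: int) -> int:
    return rerouted_lanes * PAIRS_PER_LANE // PAIRS_PER_SWITCH

# x8+x8 (or x8+x4+x4 alone): the upper 8 lanes are steered one of two ways.
print(switches(8))                 # 4
# Offering BOTH x8+x8 and x8+x4+x4: the last 4 lanes need a second 1:2 stage.
print(switches(8) + switches(4))   # 6, as on the Z390 Designare
```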
 
Hopefully it has Thunderbolt implemented, like its Vision and Designare predecessors.
 
x8+x4+x4 has nothing to do with Intel as long as the chipset is eligible for it (OK, Intel has 100% to do with this lol; B660 still has it locked down by Intel).
The Z590 Strix-A supports x8+x4+x4 (but no x8+x8).
Switches are costly for mobo vendors, and so is PCB material grade.
The PCB material needs to be good enough to carry a PCIe signal across the whole board (from the CPU to the bottom edge) while maintaining signal integrity, not to mention that on Z690 we are talking about Gen 5.
No idea about the cost of a Gen 5 PCIe slot, but most Gen 5 slots on Z690 boards are the SMD type; it won't be cheap, I guess.

x8+x4+x4 or x8+x8 needs four PCIe switches if those switches are 1:2 / 2:1 with four differential pairs each.
Offering x8+x4+x4 plus an x8+x8 option needs six such switches.
The Z390 Designare supports both x8+x4+x4 and x8+x8 by having six switches; these days it's rare to see more than four switches handling the CPU's x16 PCIe lanes.
Well, at least you're clear about contradicting yourself - Intel determines the rules for implementing its platforms, after all. Which bifurcated layouts are allowed is something Intel is entirely within its power to determine if it wants to - and there is plenty of evidence that it wants to. (Though of course the more complex BIOS programming is no doubt also a turn-off for some motherboard makers.) But generally, bifurcated layouts outside of x8+x8 are very rare. You might be right that the rules are more relaxed now than they used to be, but most boards still don't split things three ways from the CPU.

The Z390 Designare has a really weird layout where the bottom x4 slot is connected to the chipset for an x8+x8+x4 layout, but you can switch its connection to the CPU instead for an x8+x4+x4 layout. I've never seen anything like that, but I guess it provides an option for very latency-sensitive scenarios. Definitely extremely niche, that's for sure.

As for what you're saying about switches: no. The term PCIe switch is generally used for PLX switches, which allow for simultaneous lane multiplication (i.e. several x16 slots from a single x16 controller). Bifurcation from the CPU does not require any switches whatsoever; it is handled by the CPU, through the x16 controller actually consisting of several smaller x8 or x4 controllers working together. You need multiplexers/muxes for the lanes that are assigned to multiple ports, but no actual PCIe switches. And PCIe muxes are cheap and simple. (Of course not cheaper than not having them in the first place, but nowhere near the price of a PLX switch, which easily runs $100 or more.) That Designare must actually have a really fascinating mux layout, as the x16 must have 1:2 muxes on four lanes and 1:3 muxes on the last four, with 2:1 muxes further down the same lanes for swapping that slot between the CPU and the chipset.

Either way, bifurcated layouts beyond x8+x8 are very rare - though with more differentiated PCIe connectivity between the CPU and PCH than before I can see them becoming a bit more common in the future. It certainly didn't matter much when you either got PCIe 3.0 from the CPU or slightly higher latency PCIe 3.0 through the chipset, but 5.0 on one and 3.0 on the other is definitely another story. Still, I don't think this is likely to be relevant to the discussion at hand: chances are the bottom x4 slot (which is labeled PCIEX4, not PCIEX16_3 or similar) comes from the PCH and not the CPU.

(An interesting side note: the leaked Z690 AORUS Master has a single x16 plus two slots labeled PCIEX4 and PCIEX4_2, which makes me think both of those slots are PCH-based. That again raises the question of whether there's no bifurcation of the CPU lanes at all, or whether they're split into M.2 slots instead. IMO the latter would be sensible, as no GPU in the useful lifetime of this board will make use of PCIe 5.0 x16 anyhow.)
 