
Seagate Mechanical HDD with U.2 NVMe Interface Pictured, Signals the Decline of SAS 12G

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,690 (7.42/day)
Location
Dublin, Ireland
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard Gigabyte B550 AORUS Elite V2
Cooling DeepCool Gammax L240 V2
Memory 2x 16GB DDR4-3200
Video Card(s) Galax RTX 4070 Ti EX
Storage Samsung 990 1TB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Here's one of the first pictures of a mechanical HDD with an NVMe interface. Seagate is apparently in production of an Exos-series enterprise HDD featuring a U.2 NVMe interface where one would expect SAS 12 Gbps. We seriously doubt the HDD is fast enough to take advantage of U.2 NVMe (at least 32 Gbps per direction), but the move to NVMe probably has to do with the decline of older interface standards such as SAS 12 Gbps and SATA 6 Gbps in the enterprise cold-storage space, and with the likelihood that future generations of rackmount DAS and NAS enclosures will increasingly feature U.2 backplanes, phasing out SAS and SATA. This is like when optical drives switched over to SATA despite not needing the bandwidth SATA had to offer (optical discs barely move a few dozen MB/s, for which even ATA33 IDE was sufficient). We don't know how the Seagate Exos drive handles NVMe internally: whether there is a new native NVMe controller, or whether this is really just a SAS 12 Gbps drive with a PCIe-to-SAS bridge chip.
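For scale, here's a quick back-of-the-envelope comparison; the throughput figures are typical published numbers, not Seagate's specs for this drive:

```python
# Illustrative figures only: typical sustained HDD throughput vs.
# usable per-lane bandwidth of common storage interfaces.
PCIE3_LANE_MBPS = 985        # PCIe 3.0: 8 GT/s with 128b/130b encoding
SAS12_MBPS = 1200            # SAS 12 Gbps: ~1.2 GB/s per lane
HDD_SUSTAINED_MBPS = 270     # fast enterprise HDD, outer tracks

u2_x4_mbps = 4 * PCIE3_LANE_MBPS  # U.2 at PCIe 3.0 x4
print(f"U.2 x4 link: {u2_x4_mbps} MB/s")
print(f"HDD uses {HDD_SUSTAINED_MBPS / u2_x4_mbps:.1%} of the link")  # -> 6.9%
```

So even at PCIe 3.0 speeds the HDD saturates well under a tenth of a x4 link, which supports the point that this is about interface consolidation, not bandwidth.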



View at TechPowerUp Main Site | Source
 
Why would anyone need this type of combination? Does that mean conventional HDDs are more robust?
 
Why would anyone need this type of combination? Does that mean conventional HDDs are more robust?
Price per TB, obviously.

By converting HDDs to native NVMe, we can get rid of the need for additional SATA/SAS controllers. You don't need a SATA/SAS controller on the CPU/mobo/chipset, freeing up PCIe lanes so that you are free to utilize those lanes however you like, increasing flexibility. You free up the space and die area required to have those controllers in your system, so you can make your system more compact or use that extra space for other, more useful things.

You are not forced to buy a chipset/CPU that has lanes taken up by the SATA/SAS controller, and in the future, when you don't need SATA/SAS anymore, you don't need to switch your system/motherboard just to get your lanes back.

You don't need an add-on PCIe SATA/SAS card, saving on space and cooling, and can therefore use those free PCIe slots for other things.

It's possible we save on latency and power consumption, especially if the NVMe implementation is native. We only need a single controller to communicate between the CPU/chipset and the HDD. Saving power and time (through reduced latency) helps us go green as well, since the theme is now saving the planet. (Instead of CPU -> Chipset -> SATA Controller -> HDD Controller -> HDD Platters, we get CPU -> HDD Controller -> HDD Platters.)
(I'm not making fun of trying to save our planet; I'm making fun of people and companies that need a "theme" to sell more environmentally friendly products, and of the fact that the same theme is needed to convince consumers to buy and use them.)

edit: clarified some stuff and added pic of amd x670 diagram as example. credits to AMD and Anandtech.

SoC_25.png
 
Last edited by a moderator:
Why are hybrid HDD+SSD designs no longer released? I have one from WD which works flawlessly in my old laptop.
 
Hey, get us a fast NVMe drive.

Wait, what. This is a spinner!!!

:(
 
Price per tb obviously.

By converting hdd to native NVME we can get rid the need of additional SATA/SAS controllers. […]
Nobody is combining SAS drives with X670s. They're in the server space, where all the configs are predetermined.

Why no longer hybrid designs HDD+SSD are released? I have one WD which works flawlessly in my old laptop.
Because in a world of readily available SSDs, hybrids make no sense. They were a stopgap solution from when 64 GB SSDs were $300 and most HDDs were incapable of exceeding SATA I speed.

Nowadays you have HDDs that can push almost SATA II speed, and 2 TB SSDs are under $200.
 
Nobody is combining SAS drives with x670es. They're in the server space, where all the configs are pre determined.


Because in a world of readily available SSDs hybrids make no sense. They were a stop gap solution when 64GB SSDs were $300 and most HDDs were incapable of exceeding sata I speed.

Nowadays you have HDDs that can push almost Sata II speed and 2TB SSDs are under $200.
Exactly. Hence U.2 and backplanes.

HDDs in server racks will run on relatively slow links like PCIe 2.0 x4, x2, or even x1, so a PCIe 4.0 x8 interface card can feed tens of them.
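The fan-out arithmetic holds up; here's a rough sketch with illustrative throughput figures (not taken from any specific product):

```python
# How many ~270 MB/s hard drives can one PCIe 4.0 x8 uplink feed?
PCIE4_LANE_MBPS = 1969       # PCIe 4.0: 16 GT/s with 128b/130b encoding
HDD_MBPS = 270               # sustained sequential rate of a fast HDD

uplink_mbps = 8 * PCIE4_LANE_MBPS      # ~15.75 GB/s total
drives_at_full_tilt = uplink_mbps // HDD_MBPS
print(drives_at_full_tilt)             # -> 58
```

In other words, a single x8 card could in theory keep nearly sixty spinners busy at full sequential speed, so giving each drive a narrow PCIe link wastes nothing.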
 
The trick here appears to be that the drives are compatible with SATA, SAS and NVMe interfaces.

NVMe-HDD-illustration-1c.png
 
Price per tb obviously.

By converting hdd to native NVME we can get rid the need of additional SATA/SAS controllers. […]
I don't know the space savings of U.2 vs. SAS, but M.2 does not save space vs. SATA; it significantly increases it, and boards currently make significant compromises to fit M.2 slots. Flexibility is reduced as a consequence. U.2 is much better than M.2, though, so on server boards it might work out OK.

This doesn't signal the end of SAS. This drive might just be there to allow more drives in a system where SAS is fully populated but U.2 slots are spare.
 
Dont know space saving of u.2 vs SAS but m.2 does not save space vs SATA it significantly increases it, boards have significant regressions currently to fit m.2 slots. Flexibity is reduced as a consequence. u.2 is much better than m.2 though so server boards it might work out ok.

This doesnt signal the end of SAS this drive might just be to allow more drives in a system where SAS fully populated but u.2 spare.
Again, in the server space this MASSIVELY saves space. It can also simplify power requirements, as U.2 standardises a 12 V option vs. just 3.3 V with M.2, negating the need for another voltage conversion in either the PSU or the motherboard.

What I suspect is that this is a transitional product before the EDSFF (E1.S/L, E2, E3) formats take over and drives need to adapt to those standards. Whereas getting an E3-equipped server etc. currently isn't that easy, using the U.2 interface would be a great way to test controllers on 12 V vs. the 3.3 V that normal controllers have been using for years.
 
From what I can see, U.2 seems similar to SAS in space, whilst M.2 is really bad. So I assume the space savings you speak of are related to the voltage side of things.

A quote from Gamers Nexus, one of the few in the tech media who at least looked into U.2 vs. M.2:

M.2, then, is the most comparable to U.2. It's capable of the same four-lane throughput for storage devices, but takes a significantly larger footprint on the motherboard and limits users purely by physical space. U.2 interests us because it can be stacked where current SATA connectors are, PCI-e lanes allowing, and you could theoretically run several 2.5” U.2 SSDs.

Unless Seagate says otherwise, my opinion is that this product is still about flexibility: a server might be out of SAS ports but have U.2 ports available, thus allowing more drives to be added.
 
Again in the server space this MASSIVELY saves space, it can also simplify power requirements as U.2 standardises a 12v possibility vs just a 3.3 with m.2 negating the need for another voltage conversion either in the PSU or Motherboard.

What I suspect is that this is a trasitiionary product before EDSFF (E1S/L, E2, E3) formats take over and they need to adapt to those standards. Where as currently trying to get a E3 equipped server etc isnt that easy using the U2 interface would be a great way to test controllers on 12v vs 3.3 that normal contollers have been using for years.
EDSFF seems physically incompatible with hard disks because a 2.5" HDD does not fit into an E1 "ruler" enclosure (but I don't know about E2/E3 and their limitations). So U.2 may remain a long-lasting standard for datacenter HDDs.
 
Conventional HDDs only have room left in the enterprise or consumer space, in particular for backups, surveillance, and whatever needs lots of storage without the speeds NVMe SSDs provide.

Crap. My first IBM XT had a "20 MB" hard drive the size of two 5.25-inch floppy drives. Drives that could take far more of a beating than today's drives, which seem to fail at random whenever they want.
 
Could this spell the end of the SATA port? Smart idea to have one port and choose between a fast NVMe SSD or a high-density HDD.
 
This is likely only to be used for enterprise stuff in server racks to save on connectors, since the drives can be universally compatible.


Make a NAS/1U rack with NVMe connectors, special mechanical drives still fit - tada, big sales.
 
Price of SSDs dropped low enough.

Because in a world of readily available SSDs hybrids make no sense. They were a stop gap solution when 64GB SSDs were $300 and most HDDs were incapable of exceeding sata I speed.
While this is correct, I would love to have 6 or 8+ TB hybrids in my system, mostly for backup and storage.
 
EDSFF seems physically incompatible with hard disks because a 2.5" HDD does not fit into an E1 "ruler" enclosure (but I don't know about E2/E3 and their limitations). So U.2 may remain a long-lasting standard for datacenter HDDs.
E1 is purely for SSD density. There are 1U servers with 40, yes 40!!!, slots for E1.L SSDs. Compare that to the current 12 2.5-inch bays in 1U or 25 in 2U.

E3 is designed to replace 2.5-inch, but basically takes the current 7 mm 2.5-inch drives and makes that the LARGEST size.

There's actually a good document describing the new sizing format:


In theory the EDSFF formats can also have >4 PCIe lanes per drive, so either built-in "RAID" or controllers with far more bandwidth are possible. I suspect this is more for high-end database/ML servers where data throughput is key.
 
Why though? It may have made sense ten years ago for an OS drive, where the most used files end up on the SSD, but for storage? What’s the SSD for? The most accessed movies? A game maybe?
 
Not in the 4 TB+ range they haven't, not even garbage-class QLC drives. A hybrid 8 TB+ HDD would still be useful for large photo collections, for example.
Oh yeah, large photo collections. Because you would have 0.78125% (64 GB cache, 8 TB drive) of your most-viewed photos cached in there for faster viewing?

Hybrids were good as OS drives because they would cache the ~15 GB worth of Windows stuff (or rather, the most accessed files in there) plus frequently used programs and files, but they aren't going to make a difference for photo backups.
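The cache-to-capacity ratio quoted above is easy to verify (treating 1 TB as 1024 GB):

```python
# 64 GB of SSD cache on an 8 TB hard drive covers only a tiny
# fraction of the data -- the figure quoted in the thread.
cache_gb = 64
drive_gb = 8 * 1024            # 8 TB expressed in GB
ratio = cache_gb / drive_gb
print(f"{ratio:.5%}")          # -> 0.78125%
```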

That would be very nice! Say with a 32 GB or 64 GB SSD buffer? I'd give that serious consideration.
Have you heard of Intel Smart Response Technology? It lets you make a (64 GB max) SSD cache for your HDD; the remaining SSD space can be partitioned normally. AMD platforms have a similar tool. I guess it would help if you didn't want to juggle two drives with different speeds and capacities.
 
Have you heard about Intel Smart Response Technology? It lets you make a (64 gb max) SSD cache for your hdd.
But that's not ON said hard drive. All-in-one solutions keep things simple.

It may have made sense ten years ago for an OS drive, where the most used files end up on the SSD
Still makes sense, and for the reason you just stated. Some people want a simple system with one drive. An 8 TB, 10 TB or 12 TB hybrid with a 32 GB or 64 GB SSD buffer? Hell yeah! That would make an excellent main drive!
but for storage? What’s the SSD for? The most accessed movies? A game maybe?
Anything frequently accessed. Why would you NOT want that?
 
Last edited:
But that's not ON said hard drive. All in one solutions keep thing simple.
I'm simply giving you a solution comparable to this: whatever hard drive you want (even 18 TB) + 64 GB of fast PCIe storage. You'll have to install two drives and one program instead of just one drive, but it will work, and it will work better than any integrated cache drive available, since those are limited to SATA 3 speed.
 