
LSI Implements SAS 12 Gb/s Interface

btarunr

Editor & Senior Moderator
#1
LSI Corporation is the first to implement the new Serial Attached SCSI (SAS) 12 Gb/s interface, geared for future generations of storage devices that can make use of that much bandwidth. For now, LSI proposes SAS expander chips that distribute that bandwidth among current-generation storage devices. The company displayed the world's first SAS 12 Gb/s add-on card, which uses a PCI-Express 3.0 x8 interface to ensure there's enough system bus bandwidth. The card can connect up to 44 SAS or SATA devices and supports up to 2,048 SAS addresses. It is backwards-compatible with today's 6 Gb/s and 3 Gb/s devices.

By pairing the 12 Gb/s SAS expander solution with 32 current-generation Seagate Savvio 15K.3 hard drives, LSI claims a 58% increase in IOPS over a 6 Gb/s host controller, thanks to better bandwidth aggregation per drive, as well as a 68% increase in bandwidth yield. The array of 32 hard drives could dole out 3,106.84 MB/s in IOMeter and, more strikingly, over 1.01 million IOPS. As big as that number seems, it could be an IOMeter bug, because the numbers don't add up; perhaps it's measuring IOPS from the disk caches.
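As a rough cross-check of why the numbers look suspicious, one can compare the claimed IOPS against the claimed bandwidth for a few assumed transfer sizes. The article doesn't state IOMeter's transfer request size, so the block sizes below are assumptions:

```python
# Sanity check: does 1.01 million IOPS square with 3,106.84 MB/s?
# The IOMeter transfer size is not given, so we try two common block sizes.
MB = 1000 ** 2  # decimal megabytes, as storage benchmarks usually report

claimed_iops = 1_010_000
claimed_mb_s = 3106.84

for block_bytes in (512, 4096):
    implied_mb_s = claimed_iops * block_bytes / MB
    print(f"{block_bytes:>4} B blocks -> {implied_mb_s:8.1f} MB/s implied")

# Per-drive load if the IOPS figure were real, spread over 32 drives:
print(f"{claimed_iops / 32:,.0f} IOPS per drive")  # orders of magnitude
                                                   # beyond any 15K HDD
```

At 512 B the implied throughput (~517 MB/s) falls well short of the measured 3.1 GB/s, and at 4 KB it overshoots it, so the two headline figures can't both describe the same workload; that is consistent with the suspicion that the IOPS number comes from the drives' caches.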



Source: TheSSDReview
 
#2
over a million IOPS :eek:
 
W1zzard

Administrator
#4
I think those IOPS are coming from cache.

IOPS for HDDs are calculated as 1/(avg. latency + avg. seek time). So: 1/(2 ms + 2.7 ms) ≈ 213 IOPS.

213 × 32 = 6,816 IOPS for the whole array.

I'd say user error with IOMeter.
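The back-of-envelope figure above can be reproduced directly; the latency and seek values below are the ones from the post:

```python
# Theoretical random IOPS of a spinning disk: each I/O pays, on average,
# half a rotation of latency plus an average seek.
avg_rotational_latency_s = 0.002   # 2 ms (half a revolution at 15,000 RPM)
avg_seek_time_s = 0.0027           # 2.7 ms, typical for a 15K enterprise drive

iops_per_drive = 1 / (avg_rotational_latency_s + avg_seek_time_s)
print(round(iops_per_drive))        # 213

array_iops = round(iops_per_drive) * 32
print(array_iops)                   # 6816, nowhere near 1.01 million
```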
 

John Doe

Guest
#5
HDDs aren't much use in datacenters anymore. An I/O drive delivers superior I/O in a much lower profile, using a few PCI-E slots rather than a whole rack that needs power and cooling as well.

They should have demoed this on SLC SSDs to show all the capabilities of this controller (given its application area).

Still great stuff, though.
 

btarunr

Editor & Senior Moderator
#6
HDDs aren't much use in datacenters anymore.
They very much are. A single 250 GB SATA HDD is the most common storage option in leased servers. SSDs commonly start at an extra $50-odd per month for a 50 GB Intel SLC drive.
 

John Doe

Guest
#7
They very much are. A single 250 GB SATA HDD is the most common storage option in leased servers. SSDs commonly start at an extra $50-odd per month for a 50 GB Intel SLC drive.
lol, you bombed on this one. I was referring to enterprise datacenters. HDDs do nothing but suck power and underperform at enterprise scale. Enterprise-sized server operations don't base their storage on 250 GB HDDs. A single PCI-E or I/O SSD can replace tens, if not hundreds, of them. Look up "ZeusIOPS". ;)
 

btarunr

Editor & Senior Moderator
#8
lol, you bombed on this one. I was referring to enterprise datacenters. HDDs do nothing but suck power and underperform at enterprise scale. Enterprise-sized server operations don't base their storage on 250 GB HDDs. A single PCI-E or I/O SSD can replace tens, if not hundreds, of them. Look up "ZeusIOPS". ;)
No, you bombed on this one. SSDs are far too unreliable and far too expensive per GB to "replace" HDDs in enterprise datacenters. By that, yes, I mean "enterprise-grade" SSDs. No enterprise with half a brain replaces most of its HDDs with SSDs as you're suggesting. Instead, they use SSDs only to temporarily hold the "hot" parts of their databases, and constantly keep them in sync with hard drive arrays, which are infinitely greater in number.
 

John Doe

Guest
#9
No, you bombed on this one. SSDs are far too unreliable to "replace" HDDs in enterprise datacenters. They use SSDs only to hold the "hot" parts of their databases, and quickly archive to hard drives, which are infinitely greater in number.
Uhm, yeah. Time to do some research on what I just mentioned. Look it up, then tell me what the IBM guys say about it. SSD reliability (especially real SLC like ZeusIOPS, single-level cell only) has improved a lot. It's more reliable than a conventional HDD. The SLC you see on the market (like the Intel X25 Extreme) still uses a few cells to store the data; it doesn't compare to the reliability of the ZeusIOPS. The ZeusIOPS is an OEM-only drive that enterprise businesses use instead of hundreds of HDDs. The memory it uses is like ECC RAM (in terms of reliability). It costs thousands of euros per drive and can only be ordered from STEC themselves.
 

btarunr

Editor & Senior Moderator
#10
Uhm, yeah. Time to do some research on what I just mentioned. Look it up, then tell me what the IBM guys say about it. SSD reliability (especially real SLC like ZeusIOPS, single-level cell only) has improved a lot. It's more reliable than a conventional HDD. The SLC you see on the market (like the Intel X25 Extreme) still uses a few cells to store the data; it doesn't compare to the reliability of the ZeusIOPS. The ZeusIOPS is an OEM-only drive that enterprise businesses use instead of hundreds of HDDs. The memory it uses is like ECC RAM (in terms of reliability). It costs thousands of euros per drive and can only be ordered from STEC themselves.
Again, no. What IBM thinks doesn't reflect how today's enterprises are storing data. And no, not even SLC SSDs are more reliable than enterprise hard drives. IBM is merely endorsing SSDs in its ZeusIOPS paper; it in no way shows the spread of today's enterprise data.

And if we're talking research: http://www.snia.org/sites/default/files/AnilVasudeva_Are_SSDs_Ready_Enterprise_Storage_Systemsv4.pdf

^ Look at how that confirms my views on the spread of enterprise data.
 

John Doe

Guest
#11
Talking about research, you don't know what you're talking about, so I suggest you do some more on ZeusIOPS. Especially read what storage professionals say about it. It's the future of storage. It can reproduce the I/O of hundreds of drives all by itself. It's extremely reliable and can easily, constantly be backed up. You can use a few of them working together for the potential of a thousand drives. It's ridiculous to use 1,000 drives instead of 5 ZeusIOPS with backup. Do you have any idea how much it takes to cool and power 1,000 drives?

That research shows HDDs in use because only rich, big businesses can afford such a setup. Companies are still on HDDs because of their price and availability. Such hardware isn't sold for every server. ZeusIOPS is in no way a traditional SSD, and I/O drives will take over. But it's a slow transition at this rate.

Seriously, inform yourself from the info out there.
 

John Doe

Guest
#13
Go to bed.

Seriously, go to bed. At this moment you are simply unable to debate.
Yes, I am. Re-read my post. They aren't in use because they aren't options available to all servers. Only the richer and more modern operations have moved up to those. And the thing is, they WILL be used in the future. We won't have HDDs by the time all flash becomes as reliable and as fast as ZeusIOPS.

Companies that are building from the ground up are moving towards enterprise SLC SSDs. No one in their right mind would get a load of HDDs over SSDs at this time and date.
 
#16
In that topic you had entirely valid points. As for this one and many others, I have to agree with BTA: while Zeus is a great technology, even enterprises that wanted it would still be drawing up plans for the overhaul needed to move to it. I also firmly believe most of the enterprise is still running on mechanical drives. And I don't even remotely see how my statement was absurd; I was actually convinced you were trolling this entire topic.
 

John Doe

Guest
#17
In that topic you had entirely valid points. As for this one and many others, I have to agree with BTA: while Zeus is a great technology, even enterprises that wanted it would still be drawing up plans for the overhaul needed to move to it. I also firmly believe most of the enterprise is still running on mechanical drives. And I don't even remotely see how my statement was absurd; I was actually convinced you were trolling this entire topic.
I'm not; why should I? You didn't need to nitpick my post right from the start. Enterprise is running mechanical, but that's going to change. My original post stated:

"HDDs aren't much use in datacenters anymore."

I didn't say they aren't used anymore. I tried to say it's more logical to invest in SSDs than to buy hundreds of drives for this kind of application (I/O). It's much easier to do it that way. See the original post? The topic is about the amount of IOPS LSI came up with on their controller.
 
#18
Right, I understood that, although I also stated that I agree with BTA; half of his argument was that the industry isn't running SSDs either. Even with Zeus, I personally wouldn't be surprised if industries weren't attracted to them. The reliability of SSDs is still a toss-up just because of their history; until the big wigs in the corporations realize it's worthwhile, they won't jump on the bandwagon. I actually highly doubt they'll invest now, and with companies continuing to put out new technology that supports mechanical drives more and more, they're also pushed towards not seeing a point in changing how they go about it.
 

John Doe

Guest
#19
And that's the problem. People still think HDDs are the only solid option. They aren't. In fact, current MLC handles enough writes that you don't have to worry about degradation like people did a few years ago (for home usage).

That example aside, the logic used in flash like Zeus is perfect. It's great when you can replicate the I/O levels of that many drives with just one 120 GB SSD. It really is the way to go. If you read what some storage experts say, they repeat the things I've said here.

People don't bother and are being stubborn about SAS/SCSI drives. They don't realize non-volatile flash has come a long way and will take over. Now the question is when.
 

btarunr

Editor & Senior Moderator
#20
"HDDs aren't much use in datacenters anymore."
You highlighted the wrong part of your statement. You said:

"HDDs aren't much use in datacenters anymore."

That statement is incorrect, and I've provided statistics to refute it. Mechanical drives are far from "aren't much use in datacenters anymore."
 

John Doe

Guest
#21
You highlighted the wrong part of your statement. You said:

"HDDs aren't much use in datacenters anymore." The most logical interpretation of that statement is that you're claiming HDDs have little use in datacenters, period.

That statement is incorrect, and I've provided statistics to refute it. Mechanical drives are far from "aren't much use in datacenters anymore."
Okay sir, let me clear things up further then. ;)

HDDs aren't much use for continuously accessed data, that is, high I/O, which is what the OP is about.

For storage, though, yeah, they are: HDDs, and especially tape, for mass storage.
 
#22
Okay sir, let me clear things up further then. ;)

HDDs aren't much use for continuously accessed data, that is, high I/O, which is what the OP is about.

For storage, though, yeah, they are: HDDs, and especially tape, for mass storage.
They might not be of "much use", however even in enterprise datacentres they are still the most common form of storage by a country mile, compared to SSD/IO drives.
 

Easy Rhino

Linux Advocate
#23
That was a good read. Anyone who thinks HDDs aren't much use in enterprise datacenters today is clearly on drugs. I'm glad I didn't get sucked into that nonsense.
 
#24
People don't bother and are being stubborn about SAS/SCSI drives. They don't realize non-volatile flash has come a long way and will take over. Now the question is when.
Yes, SSDs will take over, but right now? Not even close. SSDs in enterprise solutions are still quite rare; they're getting in there, but SAS HDDs still hold their ground.
The biggest issue with SSDs is their limited number of writes; they'd wear out fast in a heavy-write environment. And this I say from my own experience.
SCSI/SAS HDDs have no issue working for 5 years, 24/7, while getting hammered. Now let me see an SSD do that.
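For what it's worth, the write-endurance worry can be put into numbers. Every figure below is an illustrative assumption, not the spec of any real drive, but it shows how a sustained 24/7 write load eats into even SLC-class endurance:

```python
# Back-of-envelope SSD lifetime under a constant write load.
# All figures are assumptions chosen purely for illustration.
capacity_gb = 200            # hypothetical SLC enterprise drive
pe_cycles = 100_000          # program/erase cycles, SLC-class endurance
write_amplification = 2.0    # controller write overhead factor
write_rate_mb_s = 100        # sustained host writes, around the clock

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_s = total_host_writes_gb * 1000 / write_rate_mb_s
lifetime_years = lifetime_s / (365 * 24 * 3600)
print(f"~{lifetime_years:.1f} years")   # ~3.2 years at this load
```

Under these assumptions even an SLC drive wears out in roughly three years of continuous hammering, which is why a 5-year, 24/7 duty cycle was a legitimate concern at the time.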
 
#25
Well, if you believe post #21 (not saying anyone should) and he only meant high-I/O scenarios this whole time, then he has at least a little standing. Million-IOPS systems comprised ~750 short-stroked 15K hard drives only five years ago; the performance of two full racks can now be had in 4U, thanks to SSDs.