
LSI-SandForce Releases Code to SSD Manufacturers That Adjusts Over-provisioning

btarunr

To anyone who's familiar with SSDs, "SandForce" is equally familiar, as it makes the brains of some of the fastest client SSDs in the business. Buyers have also come to know SandForce-driven SSDs by their unusual capacity points: the controller sets aside a portion of the physical NAND flash for low-level housekeeping, resulting in capacities such as 60 GB, 120 GB, and 240 GB for drives with physical NAND flash capacities of 64, 128, and 256 GB, respectively. This allocation is called "over-provisioning". The impression formed that this ~7% loss in capacity is some sort of trade-off for higher performance. It appears that's not quite the case.
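As a rough illustration of where that ~7% figure comes from (my own arithmetic, not anything from SandForce), the reserved share can be computed from the marketed and physical capacities:

```python
# Hypothetical sketch: over-provisioning on classic SandForce drives,
# computed as reserved NAND relative to the user-visible capacity.
drives = [(60, 64), (120, 128), (240, 256)]  # (user GB, physical NAND GB)

for user, physical in drives:
    op = (physical - user) / user * 100
    print(f"{user} GB user space on {physical} GB NAND: {op:.2f}% over-provisioning")
# Each case works out to 6.67%, the "~7%" quoted for these drives.
```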



SandForce has released code to SSD manufacturers that lets drives operate without that ~7% over-provisioning, presenting nearly 100% of the physically-available NAND flash to the end-user as unformatted capacity, with no loss in performance. All modern SSDs need a certain amount of their physical NAND flash set aside by the controller to map out bad blocks, and to hold data marked for deletion when the OS issues a TRIM command, which the controller later leisurely ruminates upon like a cow (ensuring users don't experience performance drops caused by NAND flash erase-before-write cycles).
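As a toy model of why a controller wants reserve space at all (entirely hypothetical, not how SandForce's firmware actually works), consider tracking bad blocks and TRIM'd blocks against a small reserve pool:

```python
# Toy model (hypothetical): why a controller keeps reserve NAND blocks.
class ToyController:
    def __init__(self, total_blocks, reserve_blocks):
        self.user_blocks = total_blocks - reserve_blocks
        self.reserve = reserve_blocks      # scratch pad / bad-block pool
        self.bad = 0                       # blocks retired as defective
        self.trimmed = set()               # blocks the OS freed via TRIM

    def remap_bad_block(self):
        # A failing block is retired and replaced from the reserve pool;
        # user-visible capacity stays constant until the pool runs out.
        if self.reserve == 0:
            raise RuntimeError("no reserve left: drive must shrink or fail")
        self.reserve -= 1
        self.bad += 1

    def trim(self, block):
        # TRIM only marks blocks; erasure happens later, in the background,
        # so writes aren't stalled by NAND erase cycles.
        self.trimmed.add(block)

ctrl = ToyController(total_blocks=128, reserve_blocks=8)   # ~7% reserved
ctrl.remap_bad_block()
ctrl.trim(block=42)
print(ctrl.reserve, ctrl.bad, sorted(ctrl.trimmed))        # 7 1 [42]
```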

What SandForce achieved with its newest code is to let SSD manufacturers use what's called "0% over-provisioning". True 0% is impossible in the real world, but it can be approached by handing the difference between "billions of bytes" and binary "gigabytes" over to the user area. That delta works out to 7.37% of the labeled capacity. The real difference in user capacity between a 120 GB drive and its 128 GB of physical NAND flash is therefore about 7% + 7.37%, or 14.37%. HDD manufacturers made the switch from binary gigabytes to "billions of bytes" a while ago, so most users don't notice the difference. What the new firmware for the SF-2000 processor family now permits is for manufacturers to create SSDs at full binary capacity points with what is commonly known as "0% over-provisioning".
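To make the "billions of bytes" versus binary gigabytes arithmetic concrete, here's a quick sketch (my own numbers, not SandForce's code); note that summing the two percentages, as above, slightly approximates the exact gap:

```python
# Sketch of the decimal-vs-binary capacity arithmetic described above.
GB  = 10**9   # decimal gigabyte, as printed on drive labels
GiB = 2**30   # binary gigabyte (gibibyte), how NAND is actually organized

# Delta between a binary and a decimal gigabyte, relative to the decimal one:
delta = (GiB - GB) / GB * 100
print(f"binary vs. decimal gigabyte: {delta:.2f}%")    # 7.37%

# Exact gap on a classic 120 GB drive built from 128 GiB of NAND:
physical = 128 * GiB
user     = 120 * GB
gap = (physical - user) / user * 100
print(f"raw NAND vs. user capacity: {gap:.2f}%")       # ~14.5%, which the
# article approximates by adding ~7% over-provisioning to the 7.37% delta.
```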

In other words, buyers will soon see SandForce-driven SSDs with capacities such as 64 GB, 128 GB, 256 GB, etc., offering ~7% more user space with no loss in performance. These are not to be confused with some SandForce-driven SSDs launched in the past, whose labels quoted canonical capacities (64, 128, 256 GB) that denoted physical NAND flash capacity rather than user space.

View at TechPowerUp Main Site
 
I think such a firmware update won't be easy for users, since it changes the user space on the drive. I guess firmware-updated drives will need fresh low- and high-level formats.
 
This should have been done a few years ago; paid storage space which can't be accessed, wtf.

Pity we won't have a similar "code" for HDDs as well.
 
7% extra storage space (about the size of an internet temp directory) OR an extra X% of longevity from over-provisioning. I wonder what that X% is? I'd give up 7% of space for a "certain guarantee" of my data. That is, if over-provisioning helped longevity at all!

Funny how SandForce is worried about this 7%. Obviously consumers are, in general, dumb, and were picking up 64GB drives instead of 60GB drives because they felt they were getting "more".

As many of us know from experience with our first SSDs, a measly 7% or 4GB isn't going to make an iota of difference. If 60GB isn't enough, neither is 64GB. You cannot live with 64GB as your main drive and will need to upgrade to 128GB or 256GB. And if that were 120GB or 240GB, again, it wouldn't make a difference.
 
I'm more concerned about how this will affect reliability, seeing as stability is not one of SandForce's strong points. Since their technology doesn't use any form of cache (DRAM or otherwise), wiping out the only "reserve space" left leaves them with almost no "scratch pad" to write data to (the difference between gigabytes and gibibytes isn't all that much). It'll be interesting to see whether the next-generation controller includes some form of cache or not.
 
Given the price of SSDs on the market and their reliability/performance, I see no issue with leaving the 7% as scratch pad/bad-block area. That allows one whole chip at 16GB densities to fail while the drive keeps running with data intact.
 
I wonder if Intel helped them with this. Intel spent the last year testing SF controllers and modding the firmware, and said they would allow SandForce to release those updates to all drive manufacturers, but at a later time, giving Intel a while to have them exclusively.

I wonder if this was one of the things Intel did. I do know they have made the SF controllers much more stable, and in a few months new firmware will come out for everyone else.
 