Fitseries3
I'm going to take an evolutionary approach to this one. It should help you understand everything you could possibly need to know about SSDs right from the start.
Yes, some of this information is summarized from an article found on Anandtech.com, but I aim to take the above-mentioned evolutionary approach to explaining SSDs.
1. Basics -
Everyone knows what a flash drive/jump drive/thumb drive is. It has a single memory chip in it that stores data. That chip is capable of around 10MB/s reads for a typical generic drive. The one I have here today just happens to do around 12MB/s.
The idea of an SSD stemmed from this basic device. Though more complicated, an SSD is generally the same as a flash drive, just larger and made of more chips.
However... if these chips were chained together end to end in a single chain, they would still only perform at the 12MB/s read speed above.
Solution? RAID! SSDs use arrays of memory chips in a "RAID" in order to achieve the high speeds we have come to see. But how? You need a controller for RAID, don't you? YES. The drives have their own built-in controller that manages data transfers to and from the disk itself.
So let's take what we have learned so far and put it into a real-world example.
Let's say we take 20 memory chips and make a drive. We use a simple controller to "RAID" the chips, which we know are capable of 12MB/s reads individually. So that's 20 chips x 12MB/s = 240MB/s read!!! Makes more sense now, doesn't it?
So where do the problems occur? Let's get more in depth on how this all works.
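The math above can be sketched in a few lines of Python. A minimal back-of-the-envelope model, using the post's own numbers (20 chips, 12MB/s each); the names are mine, and this is the ideal best case where the controller stripes perfectly across every chip:

```python
# Back-of-the-envelope math for the example above: 20 chips striped
# RAID 0 style, each good for 12 MB/s on its own.
CHIPS = 20
PER_CHIP_MBPS = 12

def aggregate_read_mbps(chips, per_chip_mbps):
    """Ideal combined read speed when the controller stripes across every chip."""
    return chips * per_chip_mbps

print(aggregate_read_mbps(CHIPS, PER_CHIP_MBPS))  # 240
```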
2. Flash memory, in depth -
NAND flash is made up of cells. Each cell holds either 1 or 2 bits of data. These cells are organized into pages. Pages are the smallest structure that's readable/writable in SSDs. Each page holds 4KB. Pages are then organized into blocks. Each block has 128 pages in it and is therefore capable of holding 512KB of data. A block is the smallest level that can be erased in NAND memory.
Source: Anandtech
These blocks are then arranged into planes. These planes are then arranged into what we see as the actual chips. Depending on the size of the chip, you can have differing numbers of planes.
Source: Anandtech
So let's summarize what we just talked about.
NAND flash chips are made up of blocks of pages that contain cells. Each block is 512KB and has 128 pages that are 4KB each. You can read and write data to each individual page IF that page is empty. If a page contains data, it cannot be overwritten; it must be erased first before data can be written to it again. So where is the problem? Remember how I said that the lowest level that can be erased is the block? That means in order to write to a page that already has data in it, you have to erase the entire block of 128 pages before data can be written to that page again. Like I said, you can write to 4KB pages, but you must erase entire 512KB blocks in order to be able to write to those blocks again. AH... now I see the problem emerging.
Another thing to worry about: every erase shortens the lifespan of the NAND memory itself. NAND memory can be erased on average about 10,000 times before it's no longer able to work correctly. That right there makes the whole idea look pretty dumb... but if you think about it in terms of performance, the benefits greatly outweigh the pitfalls.
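That write/erase mismatch is the whole problem in a nutshell, so here it is as a tiny sketch. The numbers (4KB pages, 128 pages per block) are from the summary above; the function names are mine:

```python
# A minimal model of the NAND layout described above.
PAGE_KB = 4
PAGES_PER_BLOCK = 128
BLOCK_KB = PAGE_KB * PAGES_PER_BLOCK  # 512 KB, the smallest erasable unit

def smallest_write_kb():
    # A single page can be programmed on its own...
    return PAGE_KB

def smallest_erase_kb():
    # ...but clearing data always wipes a whole block of 128 pages.
    return BLOCK_KB

print(smallest_write_kb(), smallest_erase_kb())  # 4 512
```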
3. How does this relate to performance? -
Let's say we have a simple SSD. It has 1 block that contains 5 pages holding the previously mentioned 4KB each. Let us also say we can only read at 2KB/s and write at 1KB/s. Now let's write some data to our theoretical drive.
We are going to write a simple text file to the drive...
Source: Anandtech
Since the text file is small, it fits in a single 4KB page. That leaves 4 pages of our block free. Now let's write a picture to the drive.
Source: Anandtech
Since the picture is 8KB, it fills 2 pages of our block. Now 3 pages are full, so we have filled approximately 60% of our drive, or 3/5ths. Now let's say we don't need our text file anymore, so we delete it. When you delete a file from a hard disk, whether it be SSD or mechanical, the OS simply marks that page as free. The data is not actually erased, however. That is how you can recover data from a drive that has lost its data, or after you have deleted data. You cannot recover the data, though, after it has been overwritten.
So let's say we now want to write a 12KB picture to the drive.
Source: Anandtech
If you remember what we talked about in the last section, the entire block has to be erased before a used page is cleared for data to be rewritten to it. To do that, we need to relocate the surviving data somewhere else until the entire block has been erased and the data can be written back to the drive.
Source: Anandtech
So what just happened? From what we normally see, we needed to write a simple 12KB file to a drive, and that's what ended up happening. What really happened at the drive level is a bit more complicated, though. We had to read 12KB into memory, erase the entire block, and then write 20KB back to the drive. If you do the math (12KB read at 2KB/s = 6 seconds, plus 20KB written at 1KB/s = 20 seconds), it took 26 seconds to perform an operation that should have taken only 12 seconds. To put this further into perspective, in a benchmark this event would make a drive that normally writes at 1KB/s look like it's only writing at about 0.46KB/s, which is less than satisfactory, causing disappointment and what appears to be "stuttering". This also points to another thing I want to cover: the more you use an SSD, the slower it gets... to a certain point. Once you begin to fill an SSD, it will have to start clearing entire blocks in order to write data to the drive again.
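The arithmetic above can be checked directly. A sketch using the post's hypothetical speeds (2KB/s read, 1KB/s write); the erase time itself is ignored here, matching the 26-second figure in the example, and the names are mine:

```python
# The worked example above in numbers: salvage 12 KB of live data,
# erase the block, then write 20 KB back.
READ_KBPS = 2
WRITE_KBPS = 1

def rewrite_seconds(live_kb, total_write_kb):
    """Time for the read-erase-rewrite cycle (erase time itself ignored)."""
    return live_kb / READ_KBPS + total_write_kb / WRITE_KBPS

actual = rewrite_seconds(12, 20)   # 6 s reading + 20 s writing = 26 s
naive = 12 / WRITE_KBPS            # 12 s if the pages had been empty
effective_kbps = 12 / actual       # apparent write speed in a benchmark
print(actual, naive, round(effective_kbps, 2))  # 26.0 12.0 0.46
```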
4. What's the solution? -
There are 2 ways to begin to overcome the previously mentioned problem: the first is cache, and the second is what Intel calls "free space".
Now I bet you didn't know that most SSDs don't have any cache at all. That is one of the contributors to the stuttering we've heard about. So it seems as though if we just add cache to the drives, the problem will be eliminated, right? Not exactly. What if the drive needs to erase a block that contains data linked to many other blocks? To ensure that none of the data is lost or corrupted, we need to relocate the entire group of data so that the pages can be erased and the data can be rewritten to the drive. Most mechanical hard drives have between 8 and 32MB of cache, depending on the capacity of the drive. On SSDs more cache is needed, so we are now seeing drives with 64MB of cache integrated. The data can be offloaded into the cache so the drive can perform the tasks it needs to complete before rewriting to the drive.
So what's this "Free Space" that Intel uses in its drives?
Intel has generously included both cache and "free space" in its drives. In addition to the cache, the drive has built-in "overhead" space to use for temporarily relocating data, thus eliminating the stuttering altogether.
I still don't see the "free space"... where is it located? Free space is what I like to refer to as overhead. The Intel drives come in 32GB and 80GB sizes. For example, the 80GB drive will format out to ~79.98GB of usable space BUT actually has 100GB of raw space in total. The extra space used as "free space" is only seen by the drive's internal controller, not the OS. This seamless integration is what makes the Intel drives perform without any hiccups and better than any other SSD to date.
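The "free space" idea reduces to simple arithmetic. A sketch using the post's own figures (100GB of raw NAND behind an 80GB drive); treat the numbers as illustrative rather than a verified spec:

```python
# Over-provisioning as plain arithmetic, with the post's figures.
raw_gb = 100     # total flash the controller can see
usable_gb = 80   # capacity exposed to the OS

spare_gb = raw_gb - usable_gb       # hidden scratch area for relocating data
spare_ratio = spare_gb / usable_gb  # fraction of extra space per usable GB
print(spare_gb, spare_ratio)        # 20 0.25
```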
5. Defragmenting? -
A common practice to keep your system up to speed is to defrag your hard drive regularly. This, however, becomes a thing of the past when using SSDs.
SSDs are purposely fragmented to allow data to be written across as many chips as possible. This allows faster read and write times because the controller reads/writes from/to many chips rather than just a single source. The RAID effect is prevalent here again.
Mechanical hard drives can read/write data faster on the outer portions of the disk, but that drops way down as you proceed toward the spindle of the drive. Mechanical drives also have rotational latency. SSDs suffer from neither of these problems. Data can be read/written to all parts of the drive at the same speed regardless of where it's being read from or stored. Yet another reason why defragmenting an SSD is pointless. Also, remember how I said that NAND memory can only be erased 10,000 times or so before it stops working? Defragmenting your SSD forces the drive to relocate all the data, thus shortening the drive's lifespan drastically each time you defrag. NOT a good idea at all.
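To put rough numbers on that wear warning: a sketch assuming each defrag pass erases every block exactly once (a simplification; real passes vary) and the ~10,000-cycle endurance figure quoted earlier:

```python
# Rough wear arithmetic for the warning above.
ENDURANCE_CYCLES = 10_000  # approximate erase cycles per block quoted earlier

def erase_budget_used(defrag_passes):
    """Fraction of the drive's erase budget consumed, assuming each pass
    erases every block once."""
    return defrag_passes / ENDURANCE_CYCLES

print(erase_budget_used(100))  # 0.01 -> 100 passes burn 1% of the budget
```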
So how do I restore my drive to that "like new" speed?
Format and reinstall. Intel includes a bootable disc with all of its SSDs that contains a few disk check and scan utilities as well as a ghosting and backup utility. It also has a special SSD tool that will return the drive to its "like new" state of absolutely free and clear pages, ready to be filled with data again.
Stay tuned, more soon.