
Intel Ready with 144-layer 3D NAND On its Own, Talks 4-layer 3DXP, "Alder Stream" and "Keystone Harbor"

Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Just use the 380 GB 110 mm long M.2... most boards support at least one 110 mm M.2 slot.
I am in my own little limited niche. I have been on mITX boards ever since they came out :)
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Just use the 380 GB 110 mm long M.2... most boards support at least one 110 mm M.2 slot.
And if not, there are adapters if you have free PCIe slots.

I probably will use such adapters in my future builds, just to make it more convenient to pull them out. The M.2 form factor might make sense in laptops, but in desktops it's just a pain.

When I build computers, they are usually in service for 8-10 years, but their role changes over time. And this is one of the biggest strengths of desktop computers; many things can be adapted or changed, each usable part can be reused somewhere else. So graphics cards are usually swapped a couple of times, SSDs and HDDs swapped a lot, etc. Then it's a pain if I have to unplug half of the stuff to swap an SSD, especially if I'm troubleshooting something. I'm also tired of small cases, so unlike londiste, I will do my future builds in spacious towers, do only basic cable management, etc. Computers to me are not meant to be pretty, but serve as tools. ;)
 
Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
When I build computers, they are usually in service for 8-10 years, but their role changes over time. And this is one of the biggest strengths of desktop computers; many things can be adapted or changed, each usable part can be reused somewhere else. So graphics cards are usually swapped a couple of times, SSDs and HDDs swapped a lot, etc. Then it's a pain if I have to unplug half of the stuff to swap an SSD, especially if I'm troubleshooting something. I'm also tired of small cases, so unlike londiste, I will do my future builds in spacious towers, do only basic cable management, etc. Computers to me are not meant to be pretty, but serve as tools. ;)
It's getting off-topic, but I am seeing less and less need for a big computer these days. For a fully functional computer, put the CPU with a cooler on the motherboard, add RAM and an M.2 drive or two, and the only cables you need are the 24-pin and 4/8-pin power cables. If you need a GPU, it goes into a PCIe slot, possibly with a couple of power cables of its own, and done. Very nice and clean. This is also why I tend to like the idea of the 10-pin ATX12VO - that would get a few more cables out of the way.

Depends on what you use the computer for, but I am noticing that I swap parts and do random testing and troubleshooting less and less every year. There is mostly no need for that. Swapping a GPU is the same regardless of case size (excluding some extremes like my A4-SFX, maybe), the CPU is more likely to be replaced together with the motherboard, and when it comes to drives, how often would you want to swap those and why? For testing HDDs/SSDs, a USB dock has been my go-to method for well over a decade now, and troubleshooting M.2/mSATA etc. drives is a PITA one way or another.

Edit:
For mass storage, NASes are pretty good these days. I would not run games or storage-dependent work off of them, but even a cheap Synology box, for example, will nicely saturate a 1 Gbit Ethernet link, which isn't much slower than what hard drives are realistically capable of.
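To put some rough numbers on that - the figures in this little sketch are ballpark assumptions, not measurements from any specific drive or NAS:

```python
# Back-of-the-envelope comparison of Gigabit Ethernet vs. a single HDD.
# The HDD figures are rough assumptions for a typical 7200 RPM drive.

GBE_LINE_RATE_MBPS = 1000 / 8                     # 1 Gbit/s = 125 MB/s on the wire
GBE_PRACTICAL_MBPS = GBE_LINE_RATE_MBPS * 0.94    # minus Ethernet/TCP overhead (~6%, assumed)

HDD_SEQ_MBPS = 180       # assumed sequential throughput
HDD_RANDOM_MBPS = 2      # assumed small-random-I/O throughput

print(f"GbE practical:  ~{GBE_PRACTICAL_MBPS:.0f} MB/s")
print(f"HDD sequential: ~{HDD_SEQ_MBPS} MB/s -> local is only ~{HDD_SEQ_MBPS / GBE_PRACTICAL_MBPS:.1f}x faster")
print(f"HDD random:     ~{HDD_RANDOM_MBPS} MB/s -> the network is not the bottleneck there")
```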

I would bet that constant testing, swapping and adapting is a pretty niche thing in itself :)
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
For testing HDDs/SSDs, a USB dock has been my go-to method for well over a decade now, and troubleshooting M.2/mSATA etc. drives is a PITA one way or another.
Not what I meant. I have a dock too.

I would bet that constant testing, swapping and adapting is a pretty niche thing in itself :)
It is, but it's not what I meant. Those who do usually have open rigs for that.
Click the spoiler for details.
As with many power users, my needs evolve over time. As of now I have two operational desktops, one laptop and two servers, with 14 HDDs and 7 SSDs in use between them. Every once in a while a drive needs replacing, either because it's bad or because it's too small; if it's still good, it moves to another machine. The same with GPUs; if I buy a new one, the old one is moved to another machine. Then it's really appreciated when installation takes 2 minutes instead of 20+, unlike one of my machines, which sits in a Fractal Node 304; I've had to open it five times in the last few months: replacing an HDD, adding another SSD, twice because a cable fell out because it's too cramped, and once because the PSU blew up (but difficulty changing a PSU is understandable, and that's rare).

My "work" machine have one SSD for the OS, two SSDs in RAID 1 for work, and two HDDs for long-term storage, all marked with labels to make it easy to identify and replace. If a drive goes bad I can pull it and replace it, if the machine goes bad, I can move the drives to another machine and keep working. If I were to put a M2 drive in there, I would probably put it in a PCIe adapter to make it easier to pull out.

Back on topic,
QLC is bad enough, but with PLC I would be really worried about file integrity. I would certainly put OS and other stuff on separate drives, but this still sounds scary. I do wonder if controllers will move things around more to reduce data rot. Still, I think NAND flash is in dire need of a replacement.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
QLC is bad enough, but with PLC I would be really worried about file integrity.
I'm not touching either until the process can be proven reliable for a minimum of 5 years, with an expected useful lifespan of 10 years+. And before anyone whines that such is not a reasonable expectation, save it. I still have OCZ SSDs from 2009 that are still going strong. If that level of durability is not a factor in the engineering of an SSD, I'm not buying it. MLC and TLC have been refined and engineered to the point where that level of durability (or close to it) can be expected.
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
If the second generation 3D XPoint cuts the price per GB in half, then many prosumers will start to get interested, including myself.
 
Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
If the second generation 3D XPoint cuts the price per GB in half, then many prosumers will start to get interested, including myself.
Current XPoint is manufactured on a 20 nm-class node. This is very likely to be shrunk to one of the 10 nm-class nodes that DRAM and NAND flash are manufactured on these days.

What I am a bit wary of is that 2nd-gen XPoint might actually get a nerf of some kind. XPoint itself is overspecced for anything it is used for today in the consumer or enthusiast space. The exception is probably the Optane DIMM technology. NVMe controllers cannot seem to use XPoint's speed and latency to the fullest. This leaves some room for simplification or relaxed specs.

I'm not touching either until the process can be proven reliable for a minimum of 5 years, with an expected useful lifespan of 10 years+. And before anyone whines that such is not a reasonable expectation, save it. I still have OCZ SSDs from 2009 that are still going strong. If that level of durability is not a factor in the engineering of an SSD, I'm not buying it. MLC and TLC have been refined and engineered to the point where that level of durability (or close to it) can be expected.
Depends on what you use these for. 5 years is a long time when you look at how SSDs have evolved. In 2009, reasonably priced SSDs were 120 GB, maybe 240 GB, with write speeds in particular that are low by today's standards. Manufacturers have learned from their mistakes for the most part when it comes to SSD endurance. Today, I am quite convinced that the endurance ratings in manufacturer specs are legitimate, at least for the major manufacturers.

Price remains a factor and QLC has its place. I have a 2TB 660p as my secondary drive (mainly for games), which I got for ~160€ last year. Intel's spec says it is good for 400 TBW, and in about 9 months I have put 10 TB on it. If what Intel claims is even remotely accurate, I am fine for a long while. The downsides of QLC I can live with in this particular configuration - unless I fill the drive to 95+%, it's faster than a SATA SSD. Filling the last 5% is indeed very painful, but being aware of that helps :)
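Just for fun, here is the back-of-the-envelope math on that, using the numbers from my own case above and assuming my write rate stays roughly the same:

```python
# Rough remaining-endurance estimate for an SSD, based on its rated TBW and
# the write rate observed so far. Numbers match the 660p example above and
# the estimate assumes the write rate stays roughly constant.

rated_tbw = 400            # TB of writes the drive is specced for
written_tb = 10            # TB written so far (e.g. from SMART data)
months_in_service = 9

tb_per_month = written_tb / months_in_service        # ~1.1 TB/month
remaining_years = (rated_tbw - written_tb) / tb_per_month / 12

print(f"Write rate so far: {tb_per_month:.1f} TB/month")
print(f"Rated endurance runs out in roughly {remaining_years:.0f} years at this rate")
```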
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crucial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
Current XPoint is manufactured on a 20 nm-class node. This is very likely to be shrunk to one of the 10 nm-class nodes that DRAM and NAND flash are manufactured on these days.

What I am a bit wary of is that 2nd-gen XPoint might actually get a nerf of some kind. XPoint itself is overspecced for anything it is used for today in the consumer or enthusiast space. The exception is probably the Optane DIMM technology. NVMe controllers cannot seem to use XPoint's speed and latency to the fullest. This leaves some room for simplification or relaxed specs.

Depends on what you use these for. 5 years is a long time when you look at how SSDs have evolved. In 2009, reasonably priced SSDs were 120 GB, maybe 240 GB, with write speeds in particular that are low by today's standards. Manufacturers have learned from their mistakes for the most part when it comes to SSD endurance. Today, I am quite convinced that the endurance ratings in manufacturer specs are legitimate, at least for the major manufacturers.

Price remains a factor and QLC has its place. I have a 2TB 660p as my secondary drive (mainly for games), which I got for ~160€ last year. Intel's spec says it is good for 400 TBW, and in about 9 months I have put 10 TB on it. If what Intel claims is even remotely accurate, I am fine for a long while. The downsides of QLC I can live with in this particular configuration - unless I fill the drive to 95+%, it's faster than a SATA SSD. Filling the last 5% is indeed very painful, but being aware of that helps :)
You can always overprovision that SSD yourself: create your partitions at 90% of the total available space and forget about it.
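Something like this, if you want to work out the size to create the partition at - the 90% figure is just the rule of thumb from above and the drive size is an example:

```python
# Work out the partition size if you want to leave ~10% of an SSD unallocated
# as extra overprovisioning. The drive size is an example value; the unused
# space only helps if it is never written to (or has been trimmed).

drive_bytes = 2_000_398_934_016      # example: a "2 TB" drive as the OS reports it
overprovision_fraction = 0.10        # leave 10% unpartitioned

partition_bytes = int(drive_bytes * (1 - overprovision_fraction))
spare_gib = (drive_bytes - partition_bytes) / 2**30

print(f"Create the partition at {partition_bytes} bytes (~{partition_bytes / 2**30:.0f} GiB)")
print(f"Leave ~{spare_gib:.0f} GiB unallocated for the controller to use")
```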

In fact, where does all this "QLC is unreliable as shit" attitude come from? Are there any real tests or published experiences that I'm not aware of?
If QLC is used for long-term storage, you barely write to it anyway, and if it's used in a redundant array with a proper file system in a NAS, you can scrub your stored data to detect and repair flipped bits as often as you like. Nobody said that QLC is good for every application...
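ZFS or Btrfs scrubbing does this properly, but for anyone curious, the basic idea can be sketched in a few lines - the path here is made up, and a DIY check like this can only detect rot, not repair it without a redundant copy:

```python
# Minimal "scrub": hash every file under a directory and compare against a
# previously stored manifest to spot silently flipped bits. Detection only -
# repairing needs a redundant copy (RAID/ZFS/Btrfs handle that part for you).
import hashlib
import json
from pathlib import Path

ROOT = Path("/mnt/archive")                  # example path
MANIFEST = ROOT / ".hashes.json"             # where the known-good hashes live

def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub() -> None:
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    new = {}
    for p in ROOT.rglob("*"):
        if p.is_file() and p != MANIFEST:
            new[str(p)] = hash_file(p)
            if str(p) in old and old[str(p)] != new[str(p)]:
                print(f"CHANGED OR ROTTED: {p}")
    MANIFEST.write_text(json.dumps(new, indent=2))

if __name__ == "__main__":
    scrub()
```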
 
Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
The usually quoted numbers are roughly 100,000 write cycles or more for SLC (1 bit per cell), 10,000-30,000 for MLC (2 bits per cell), 3,000-5,000 for TLC (3 bits per cell) and about 1,000 for QLC. Now, none of this is necessarily accurate; the exact numbers for specific manufacturers' flash dies are not publicly known. Also, 3D or V-NAND is not just marketing: building the cells vertically was an actual innovation that increases endurance. The MLC and TLC numbers above are for the currently used technologies; good old planar flash would have far lower endurance.
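To illustrate what those cycle counts would translate to in total writes for a drive, here is a rough sketch - the cycle figures are the ballpark numbers above and the write-amplification factor is purely an assumption:

```python
# Theoretical total writes for a 1 TB drive at different per-cell endurance
# levels: TBW ≈ capacity × P/E cycles ÷ write amplification.
# Cycle counts are the rough public ballpark figures; a WAF of 2 is assumed.

capacity_tb = 1.0
write_amplification = 2.0

cycles = {"SLC": 100_000, "MLC": 20_000, "TLC": 4_000, "QLC": 1_000}

for cell, pe in cycles.items():
    tbw = capacity_tb * pe / write_amplification
    print(f"{cell}: ~{tbw:,.0f} TBW for a {capacity_tb:.0f} TB drive")
```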

QLC definitely has far fewer write cycles than TLC. In a write-heavy environment this is important; for consumer use cases it might not be too crucial. The other downside of QLC is its speed, particularly write speed. Because of the precise voltage levels needed to distinguish that many levels per cell, writes are painfully slow. The approach controllers and drives take to mitigate this is the same as with TLC flash - an SLC cache. Part of the drive is treated as SLC, which allows very fast write speeds. Once the writes finish, the drive itself moves the data over to QLC and frees up the SLC cache for more writes.
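A toy model of why the cache matters - all speeds and the cache size below are made-up example figures, not the specs of any real drive:

```python
# Toy model of an SLC write cache on a QLC drive: writes land at SLC speed
# until the cache fills, then drop to native QLC speed. All figures are
# made-up examples, not specs of any particular drive.

def write_time_seconds(write_gb, cache_gb=24, slc_mbps=1800, qlc_mbps=100):
    cached = min(write_gb, cache_gb)           # portion absorbed by the SLC cache
    direct = max(write_gb - cache_gb, 0)       # portion written at native QLC speed
    return cached * 1024 / slc_mbps + direct * 1024 / qlc_mbps

for size in (5, 24, 100):
    t = write_time_seconds(size)
    print(f"{size:>3} GB burst: ~{t:,.0f} s ({size * 1024 / t:.0f} MB/s effective)")
```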
 
Joined
Jun 10, 2014
Messages
2,890 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Price remains a factor and QLC has its place. I have a 2TB 660p as my secondary drive (mainly for games), which I got for ~160€ last year. Intel's spec says it is good for 400 TBW, and in about 9 months I have put 10 TB on it. If what Intel claims is even remotely accurate, I am fine for a long while. The downsides of QLC I can live with in this particular configuration - unless I fill the drive to 95+%, it's faster than a SATA SSD. Filling the last 5% is indeed very painful, but being aware of that helps :)
Those endurance ratings are probably as inflated as MTBF numbers for HDDs. It also depends on what the maker defines as "working". To me, a drive is no longer "usable" when it gets SMART errors, especially if it's reformatted and still keeps getting them.

While I can't claim to have the empirical data to support a scientific conclusion, the trend I'm seeing is really concerning. In the past couple of years I've seen a decent number of SSDs become very unstable after 2-3 years of use, most of them nowhere near their endurance rating. I've seen a lot of HDDs go bad in the last 25 years, but nothing close to this failure rate.

There are at least two major problems with SSDs: stability of the cells and data rot. Data rot in particular gets a lot worse as more bits are crammed into each cell. I wish SSD specs would list what the controllers do to deal with data rot, if they do anything at all.
 
Joined
Aug 21, 2013
Messages
1,669 (0.43/day)
Current XPoint is manufactured on a 20 nm-class node. This is very likely to be shrunk to one of the 10 nm-class nodes that DRAM and NAND flash are manufactured on these days.
14nm++++++++++++++++++ :D
Price remains a factor and QLC has its place. I have a 2TB 660p as my secondary drive (mainly for games), which I got for ~160€ last year.
That's a good price for a 660p. Unfortunately prices have risen considerably, making QLC all but useless. The 660p costs ~250€ for 2TB and the 665p costs nearly ~450€. Jesus F christ. For 450€ I can buy a 2TB TLC-based PCIe 4.0 drive. The Crucial P1 seems to be all but gone, looking at prices.

The cheapest ones are the slower SATA-based Samsung 860 QVO and ADATA SU630, at around 200€ for 2TB. Still too much.
SATA-based 2TB QLC should be 150€ max, and PCIe versions should be around 200€ like they were last year.
 
Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Those endurance ratings are probably as inflated as MTBF numbers for HDDs. It also depends on what the maker defines as "working". To me, a drive is no longer "usable" when it gets SMART errors, especially if it's reformatted and still keeps getting them.
TechReport's SSD Endurance Experiment (https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead/) is probably the biggest test of the sort, and it seems to show the manufacturers' numbers in a good enough light. Also keep in mind that those were 2nd-gen SSDs, and everyone now has much, much more experience with everything involved.
While I can't claim to have the empirical data to support a scientific conclusion, the trend I'm seeing is really concerning. In the past couple of years I've seen a decent number of SSDs become very unstable after 2-3 years of use, most of them nowhere near their endurance rating. I've seen a lot of HDDs go bad in the last 25 years, but nothing close to this failure rate.
I have seen only a couple of SSDs fail, but with the number of SSDs I have had or deployed, the defect rate is actually lower than with HDDs. Granted, I tend not to use the cheapest SSDs and stick to the models I know - Intels in the early days, Samsungs, Crucials, with some Kingstons and ADATAs sprinkled in. Reallocated sector count seems to be the attribute to watch for SSDs; get the drive replaced as soon as that starts to climb. The endurance rating as well, of course, but I do not have write-heavy use cases that would use all of it up within a drive's lifetime. I have 10-year-old SSDs that are still going strong without issues, mostly relegated to desktop machines of family or friends, since a 1st-gen 60GB or 2nd-gen 120GB SSD is pretty slow by today's standards.
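For anyone who wants to keep an eye on that attribute, something along these lines works for a SATA drive - it assumes smartmontools is installed, the device path is just an example, and NVMe drives report health through a different log:

```python
# Check the reallocated sector count of a SATA drive via smartctl.
# Assumes smartmontools is installed and the script runs with enough
# privileges; NVMe drives use a different health log, and some vendors
# name the attribute differently.
import subprocess

DEVICE = "/dev/sda"  # example device

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "Reallocated_Sector_Ct" in line:
        raw_value = line.split()[-1]             # last column is the raw value
        print(f"{DEVICE}: Reallocated_Sector_Ct raw value = {raw_value}")
        if raw_value.isdigit() and int(raw_value) > 0:
            print("Sectors have been reallocated - consider replacing the drive.")
        break
else:
    print("Attribute not found (NVMe drive, or vendor uses a different name).")
```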
There are at least two major problems with SSDs: stability of the cells and data rot. Data rot in particular gets a lot worse as more bits are crammed into each cell. I wish SSD specs would list what the controllers do to deal with data rot, if they do anything at all.
Data rot is definitely a concern. From what I have seen tested, it is primarily a problem with unpowered drives (where data rots relatively quickly); with powered archive- or NAS-type use there is a suspicion that data could rot unnoticed, but there seems to be little research into that. If you have good sources, links would be appreciated.
 
Joined
Mar 21, 2016
Messages
2,195 (0.75/day)
It's getting off-topic, but I am seeing less and less need for a big computer these days. For a fully functional computer, put the CPU with a cooler on the motherboard, add RAM and an M.2 drive or two, and the only cables you need are the 24-pin and 4/8-pin power cables. If you need a GPU, it goes into a PCIe slot, possibly with a couple of power cables of its own, and done. Very nice and clean.
You're overlooking all the case fans and their wiring, which can be about as messy as, or messier than, all the SATA cabling for the HDDs/CD-ROMs of the past. I will say, though, that NVMe has helped eliminate a lot of messy case cable management, which is great. I'd like to see HDDs that fit in PCIe slots, are bus-powered, use the NVMe protocol and have more cache than current HDDs. Look at the Gigabyte i-RAM and it's easy to see how bad a bottleneck can be for the overall design: fast memory behind a slow interface is still slow, no matter how often it bumps its head against the wall blocking it.
 
Joined
Aug 20, 2007
Messages
20,714 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
That would only be useful if the durability is on par with native TLC/MLC/SLC tech.

I've read up on this a bit. The tl;dr is basically that it does improve longevity, but not nearly as much as being native TLC or MLC/SLC. It's not even close. It helps, but does not fix QLC's issues. The only way to fix QLC is to move to a larger process node, which really defeats the point of saving space with it in the first place.

Those endurance ratings are probably as inflated as MTBF numbers for HDDs.

They aren't. They are pretty well-defined numbers. "No longer working" is defined as the point where a cell no longer retains data properly, i.e. a proper failure of the data block. This isn't like the BS MTBF figures on SSDs; this is the true rating of how many times you can program a cell on the chip, on average, and it usually has testing to back it up.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
I've read up on this a bit. The tl;dr is basically that it does improve longevity, but not nearly as much as being native TLC or MLC/SLC. It's not even close. It helps, but does not fix QLC's issues. The only way to fix QLC is to move to a larger process node, which really defeats the point of saving space with it in the first place.
And that's more or less what I've been reading elsewhere. Just not worth it.
 
Joined
Aug 22, 2007
Messages
3,450 (0.57/day)
Location
CA, US
System Name :)
Processor Intel 13700k
Motherboard Gigabyte z790 UD AC
Cooling Noctua NH-D15
Memory 64GB GSKILL DDR5
Video Card(s) Gigabyte RTX 4090 Gaming OC
Storage 960GB Optane 905P U.2 SSD + 4TB PCIe4 U.2 SSD
Display(s) Alienware AW3423DW 175Hz QD-OLED + Nixeus 27" IPS 1440p 144Hz
Case Fractal Design Torrent
Audio Device(s) MOTU M4 - JBL 305P MKII w/2x JL Audio 10 Sealed --- X-Fi Titanium HD - Presonus Eris E5 - JBL 4412
Power Supply Silverstone 1000W
Mouse Roccat Kain 122 AIMO
Keyboard KBD67 Lite / Mammoth75
VR HMD Reverb G2 V2
Software Win 11 Pro
14nm++++++++++++++++++ :D

That's a good price for a 660p. Unfortunately prices have risen considerably, making QLC all but useless. The 660p costs ~250€ for 2TB and the 665p costs nearly ~450€. Jesus F christ. For 450€ I can buy a 2TB TLC-based PCIe 4.0 drive. The Crucial P1 seems to be all but gone, looking at prices.

The cheapest ones are the slower SATA-based Samsung 860 QVO and ADATA SU630, at around 200€ for 2TB. Still too much.
SATA-based 2TB QLC should be 150€ max, and PCIe versions should be around 200€ like they were last year.

Where do you see those prices?
The retailers are ripping you off.
I see $239 and $369 respectively. That 665p price is a bit high; it doesn't make sense when you look at the 1TB being $139.
 
Joined
Aug 21, 2013
Messages
1,669 (0.43/day)
Where do you see those prices?
The retailers are ripping you off.
I see $239 and $369 respectively. That 665p price is a bit high; it doesn't make sense when you look at the 1TB being $139.

US pricing is meaningless to me as it does not include VAT and shipping costs.
But still, $239 is too much for a 660p.
 
Joined
Feb 20, 2018
Messages
41 (0.02/day)
Optane as a caching technology is worthless, but as a swap file/partition it works nicely - to the point that I think once next-gen games launch that can take advantage of fast storage, 3D XPoint will be a lifesaver in that use case. The key thing is that it be cheaper than high-capacity RAM, and I have no doubt it will be cheaper than 64GB of DDR5 for the first few years.
 