
Share your Anvil SSD score

I've been looking for something to add to my SSD storage, but found that we're still missing reviews on a lot of the drives currently available. So let's all chip in to create a solid base of benchmark scores. The rules are simple: post your Anvil storage score. Why Anvil? Out of all the simple and fast benchmarks, it has the best range of small-file-size tests and is easy to read. Please specify whether the drive is full or empty and whether it's an OS or storage drive.

Anvil download (free)
http://anvils-storage-utilities.en.lo4d.com/

Latest Intel RST driver:

https://downloadcenter.intel.com/do...d-Storage-Technology-Intel-RST-?product=55005



That's my RAID0 array of two 256GB 850 Pro SSDs - 90% used, OS.

Di2lbWW.jpg



My almost-empty 850 Pro 512GB, for comparison:

EHMfftk.jpg
 
Don't do RAID0 with an SSD; it's simply a waste. The screenshot already shows how much space is used, so there's no need to add that.


INTEL SSDPED1D280GA_280GB_1GB-20180121-1924.png
 
It's a waste if you have two of the same drives and you DON'T RAID them. Please specify the name of the drive.
 
Your own benches prove it: where it matters, you gain nothing by raiding them versus the single-drive bench. I approve of RAID1 or 5 for safety reasons, but RAID0? No need. Linear write speed has no real meaning.

The screenshot also contains the drive name.
 
My own benches prove I gained 40% overall and 68% in 4K QD16 by creating an array. Anvil has no QD32 or QD64 tests, but the improvement is even greater there (80% in 4K QD64 read, 90% in 4K QD64 write, measured on my drives in CrystalDiskMark). The screenshot contains some meaningless numbers, not the drive name. An NVMe PCIe drive can really stomp a traditional SSD in synthetics, but a 500GB 960 EVO costs the same as a 1TB MX500, and real-world tests show there's not much between SATA and NVMe SSDs when it comes to loading times and multitasking.

This also has me interested. I know 4K matters most, but isn't it more about 4K at higher queue depths and thread counts? I mean, how much can reading/writing a single 4K file affect your performance? Don't they come in long queues most of the time when you're installing something, unpacking, moving or loading data?


edit: well, I seem to be completely wrong. Resource Monitor has a tab where you can see the current queue depth for your drives, and this is what I found:

Installing a game (from drive C to drive C using a Steam backup) gave me a peak QD of 15, but most of the time it hovered between 2 and 6.


8w80KEl.jpg



Then I tried loading a game (Ghost Recon Wildlands, a pretty vast open-world game, so a lot of loading) and to my huge surprise the queue depth barely exceeded 1; it stayed in the 0.2-0.8 range while loading the game, then the savegame, and then during gameplay.

Well, you learn something new every day. That actually makes this thread even more relevant for me than before, since I now know random read/write at low QD is the thing to really look for.
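If you'd rather log it than eyeball Resource Monitor, here's a minimal Python sketch (my own assumption about tooling, not something posted in the thread) that polls the same physical-disk queue counter through WMI. It needs Windows and the third-party wmi package (pip install wmi):

```python
# Minimal sketch: sample Windows' physical-disk queue length once a second,
# the same counter Resource Monitor's Disk tab visualizes.
import time

import wmi  # third-party package: pip install wmi

c = wmi.WMI()
for _ in range(30):  # sample for roughly 30 seconds
    for disk in c.Win32_PerfFormattedData_PerfDisk_PhysicalDisk():
        if disk.Name != "_Total":  # skip the aggregate row
            # CurrentDiskQueueLength is the instantaneous queue depth
            print(disk.Name, disk.CurrentDiskQueueLength)
    time.sleep(1)
```

Run it while installing or loading a game and you should see the same low numbers described above.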
 
Update:
I found out my RAID array wasn't properly configured in IRST: write-back caching wasn't enabled. 4K low-QD write got a pretty hefty increase.

IJdGHjU.jpg




It smokes the PCIe Phoenix drive in write performance.

6906_14_samsung-850-pro-256gb-three-drive-ssd-raid-report.png
 
Two 960 EVO M.2 drives in RAID0.

ANVIL 2-3-18.JPG
 
Your 4K write shouldn't be lower than my two 850 Pros'. Try the Samsung NVMe driver:

http://www.samsung.com/semiconductor/minisite/ssd/product/consumer/960evo/

then in IRST disable cache buffer flushing and enable write-back caching.


Also, it looks like you don't have any overprovisioning enabled. Since Samsung Magician does not recognize individual drives in a RAID volume, just go to Disk Management and leave a small amount (10% is customary) unallocated; the arithmetic is sketched below.
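For sizing it, a trivial back-of-envelope in Python (the 10% figure is just the custom mentioned above, not a hard rule):

```python
# Sketch: how much space to leave unallocated for ~10% overprovisioning.
def overprovision_gb(capacity_gb: float, fraction: float = 0.10) -> float:
    """Return the GB to leave unallocated for the given OP fraction."""
    return capacity_gb * fraction

print(overprovision_gb(500))  # ~50 GB unallocated on a 500 GB RAID volume
```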
 
The Samsung driver installation says:

Samsung nvme driver error 2-3-18.JPG


I do have buffer flushing disabled and write back cache enabled in IRST.
 
Well, the driver is clearly not meant for RAID then.
 
Intel 600p 256GB and Samsung 840 Pro 256GB

4eMx1zW.png
 
Didn't change a lot. Weird.
 
Have you got the IRST driver installed? Is TRIM working?
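For anyone who wants to check the TRIM half of that question: Windows exposes it through the real fsutil command (run from an elevated prompt). A small Python wrapper, just as a sketch:

```python
# Sketch: ask Windows whether it is issuing TRIM commands.
# DisableDeleteNotify = 0 means TRIM is enabled; 1 means suppressed.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # e.g. "DisableDeleteNotify = 0" when TRIM is on
```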
 
not sure if my scores look right, but here you go.

7f47c57df9.png
 
Yup, they look very fine. Actually they're better than @Arctuas' 960 EVO RAID0, which makes me wonder about his setup. The thing to look for is not the sequential R/W but the 4K and 4K QD4 write; that's the most important part.
 


Hey guys, I know you said my speeds were good on this benchmark; could you tell me if my speeds in CrystalDiskMark look good as well? I don't know how to read this stuff, I just want to make sure I'm getting the right speeds for what I paid for. Cheers @cucker tarlson

9063c562f6.png
 
They're fine.
 
Built my first computer in like 10 years and I want to know if I set up my RAID 0 correctly.

I have an ASUS ROG Strix Z390-E
Core i7 8700K
32GB TridentZ DDR4 3200
and two Samsung 970 EVO 250GB drives in RAID 0

This is my score. Is that what it should be for two M.2 NVMe drives in RAID 0? @cucker tarlson

16a74aq.jpg
 

1541147866907.png


Rocking the 840 EVO for 5 years now.
RAPID (RAM cache) is obviously disabled for this test!
 

So since you're the only person who posted after me and you seem to know what you're doing: the score posted above you is mine, it's two 250GB 970 EVO M.2 drives in RAID 0. Is that a good score for what it is?
 

Your stick is PCIe x4, which means a cap of just under 4 GB/s from the bus itself (and less once protocol overhead is counted), so striping won't help a lot with max sequential speed, since the advertised speeds for a single stick of yours outside an array are already 3.4 GB/s read and 2.3 GB/s write. As we can see, even those are underachieved in reads (overhead, a congested bus, or any other thinkable logical-level bottleneck); see the worked bus math below.

And it doesn't help with 4K randoms all too much, unfortunately.

I understand if people RAID0 their non-NGFF SSDs (where you have ~500 MB/s read and write) to do sequential stuff like editing very BIG video files (talking hundreds of GB up to multiple TB per file), as it helps in two ways. First, it lets you work with files larger than a single disk: presuming you have the correct FS choice (NTFS works, of course), it would be no problem to store and work with a 6 TB RAW video on 4 x 2TB SSDs in RAID0. Funny thought. Second, it makes the editing/sequential speed scale with n disks, minus maybe 1-5 percent. But as soon as you reach the bus limit (SATA/PCIe), you stop profiting.
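To make that cap concrete, here's the back-of-envelope math in Python (assuming PCIe 3.0 x4 with 128b/130b line encoding; packet-level protocol overhead pushes the usable figure lower still):

```python
# Sketch: theoretical ceiling of a PCIe 3.0 x4 link.
lanes = 4
raw_gt_per_lane = 8.0   # PCIe 3.0 signals at 8 GT/s per lane
encoding = 128 / 130    # 128b/130b line encoding
bits_per_byte = 8

ceiling_gb_s = lanes * raw_gt_per_lane * encoding / bits_per_byte
print(f"{ceiling_gb_s:.2f} GB/s")  # ~3.94 GB/s before protocol overhead
```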

So for normal desktop/gaming use (as an OS disk or game disk) I don't recommend striping SSDs at all, for several reasons:

1. 4K random reads are not significantly improved (in contrast to striping platter drives).
2. SSDs are designed to do as few sequential operations as necessary for the longevity of the chips, so striping to use the disks as a "data monster" runs against this principle (unless, as above, you're doing highly professional video editing).
3. Using RAPID mode (RAM caching) is much more effective if you want to speed up day-to-day operations.
4. You reach bus limits (or network limits in the case of transfers to a NAS etc.) pretty quickly.
(5.) A rather small but still real reason: you lose the functionality of vendor tools like Samsung Magician to analyze your drives.
 