
Post your CrystalDiskMark speeds

i5-13600KF - 64GB of RAM at 3600MHz - 990 PRO 1TB
 

Attachments

  • Capture d’écran 2023-03-19 220618.png (29.1 KB)
The peak of performance.
Conner 210MB (386 era)

Wowwwws. I distinctly remember my Conner (120MB?) HDD getting 700KB/s on a DX2-66 in (I think) HD Tach. My mate then got a Quantum Fireball and it did about 1200KB/s on a DX2. I was well jealous. You, however, have surpassed this.

edit -- okay, can't find any pics of HD Tach in DOS, so maybe it wasn't that...
 
In ATTO it gets a more reasonable 700/600KB/s R/W.
 
X99 Xeons soldier on.


2x 1.5TB HGST HUSMM-series SAS-3 12Gb/s SSDs on an HP H240 HBA (on an X99-based Dell workstation mobo ;)). CPU is a 14-core Xeon E5-2697 2.6/3.0GHz ("Haswell") with 40GB of DDR4-2133 (registered ECC). This is dedicated storage; the system runs on its own SATA SSD. Half of each disk is mirrored and half striped, using Windows Storage Spaces. The test shown is of the striped logical volume.

According to the thread instructions:


diskmark_sas_winraid-std.PNG



Defaults:

diskmark_sas_winraid.PNG


The NVME suite, for comparison:

diskmark_sas_winraid_nvmesuite.PNG


These disks show up reliably on the 'bay as SAN pulls. I've never had any issues reformatting them with sg3_utils when necessary.

Performance with more queue depth and six threads (for more parallelism, to exercise the cores):

diskmark_sas_winraid_multithread.PNG


Significant boost to RND4K, but probably not meaningful for anything but development, virtual machine hosting, keeping up with streaming I/O, etc.

E5 Xeons and X99 boards aren't quite as old as a 386, but that's still not bad for a platform nearing a decade old.

Though I'd still like to figure out why I'm not exceeding 12Gb/s in sequential reads despite the striped volume having drives on different SAS-3 ports and the card being PCIe 3.0 x8. Maybe an issue with the benchmark? It seems too coincidental to be limited right at the SAS-3 bandwidth, allowing for a little overhead.
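For what it's worth, the coincidence can be checked with quick arithmetic. A sketch, assuming SAS-3's 8b/10b line coding and decimal megabytes (the per-drive sequential limits of the SSDs themselves, and HBA/driver overhead, are ignored here):

```python
# Back-of-the-envelope SAS-3 per-lane payload bandwidth.
line_rate_gbps = 12.0        # SAS-3 signalling rate per lane, Gbit/s
coding_efficiency = 8 / 10   # 8b/10b: 8 payload bits per 10 line bits

payload_gbps = line_rate_gbps * coding_efficiency
payload_mb_s = payload_gbps * 1000 / 8   # Gbit/s -> MB/s

print(f"one SAS-3 lane:   {payload_mb_s:.0f} MB/s")       # ~1200 MB/s
print(f"two striped lanes: {2 * payload_mb_s:.0f} MB/s")  # ~2400 MB/s
```

So a stripe that tops out near 1200 MB/s looks like one lane's worth of bandwidth, which would point at the I/O path (or the drives themselves) rather than the two-port stripe being the limit.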
 
Asus ROG Strix Scar 16, 2x Samsung 1TB in RAID 0
CrystalMark23.jpg
 
Drives are 2x 1TB Samsung PM9A1 NVMe (MZVL21T0HCLR-00$00/07), PCIe Gen 4 x4; they came configured in RAID 0 out of the box in the firmware.
 
[...]

"Seq Q32T1 (R/W)" Speeds are clickable, leading to the original post.

Name  | Drive                 | Size | Type | RPM | Connector | Seq Q32T1 (R/W) | Raid
mama  | SK hynix Platinum P41 | 1TB  | NVMe | -   | M.2       | 7372.7 / 6739.8 | -
Det0x | SK hynix Platinum P41 | 1TB  | NVMe | -   | M.2       | 7366.3 / 6519.1 | -
mama  | Kingston 3000         | 1TB  | NVMe | -   | M.2       | 7355.7 / 6074.5 | -

[...]

You have to check (delete) the 1st- and 3rd-place benchmarks from mama, because there are no Seq1M Q32T1 (R/W) results in the linked screenshots.
They are Q8T1 runs, so those scores don't belong in this SEQ1M Q32T1 high-score list.
 

Attachments

  • KINGSTON3000_mama.png (165.2 KB)
  • PlatinumP41_mama.png (409 KB)
Left side: 980 Pro RAID 0 (2x 2TB) -- D: drive
Right side: 4TB WD Black SN850X single drive -- C: drive

1684207735421.png
 
How did you manage to get 22GB/s from a drive with a max rating of 3GB/s?
And it only has a PCIe Gen 3 interface. Is RAM-caching software being used? Is it an error? I am confused.
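A quick link-budget check supports the RAM-caching suspicion. A sketch, assuming the PCIe 3.0 figures of 8 GT/s per lane with 128b/130b coding (protocol overhead beyond line coding is ignored, so the real ceiling is a bit lower still):

```python
# Theoretical PCIe 3.0 x4 payload bandwidth.
gt_per_s = 8.0          # PCIe 3.0 transfer rate per lane (GT/s)
lanes = 4
efficiency = 128 / 130  # 128b/130b line coding

payload_gb_s = gt_per_s * lanes * efficiency / 8  # -> GB/s

print(f"PCIe 3.0 x4 ceiling: {payload_gb_s:.2f} GB/s")  # ~3.94 GB/s
```

A Gen 3 x4 drive cannot physically move 22 GB/s over the link, so a result that high is almost certainly being served from a DRAM cache rather than the drive.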
 
8x Dell P5600 3.2TB NVMe drives in RAID 10:
Untitled.png


This is the same 8 drives in RAID 0:
1685937906235.png


I am not sure why the speed numbers appear to be so slow given these drives cost $10k each. With 8 drives in RAID 10 I would expect roughly an 8x read and 4x write boost over a single NVMe drive (RAID 0 would be 8x for both), so I was hoping someone here might have some advice on getting my numbers up.
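The expectation above can be sketched numerically. This is a rough ideal-scaling model only: the per-drive figures below are placeholders, not official P5600 specs, and controller/software overhead is ignored:

```python
# Ideal sequential-throughput scaling for RAID 0 vs RAID 10.
n_drives = 8
per_drive_read_gb_s = 7.0   # placeholder per-drive sequential read
per_drive_write_gb_s = 4.3  # placeholder per-drive sequential write

# RAID 0: every drive contributes to both reads and writes.
raid0_read = n_drives * per_drive_read_gb_s
raid0_write = n_drives * per_drive_write_gb_s

# RAID 10: reads can be spread across all drives; each write must land
# on both halves of a mirror pair, so only half the drives add write
# bandwidth.
raid10_read = n_drives * per_drive_read_gb_s
raid10_write = (n_drives // 2) * per_drive_write_gb_s

print(f"RAID 0:  {raid0_read:.1f} / {raid0_write:.1f} GB/s R/W")
print(f"RAID 10: {raid10_read:.1f} / {raid10_write:.1f} GB/s R/W")
```

In practice CrystalDiskMark on a single host rarely shows linear scaling; the RAID layer, CPU interrupt handling, and the benchmark's thread/queue settings usually bottleneck well before eight enterprise NVMe drives do.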
 
My dirty and unkempt SN850 boot drive:

sn850.png


New SN770 on a Gen 3 connection:

g3.png


SN770 on Gen 4:

g4_1.png


And the other SN770 on Gen 4:

g4_2.png


I couldn't get the system to see all 3 SN770s on the Asus M.2 card. I am guessing I am bumping into a lane shortage :/

Edit:

I just noticed the test wasn't done running in the one shot :pimp:
 
Your test settings are also incorrect, like numerous others that post benchmarks in this thread. :p

Correct test settings in this post.
 
Whoops, just box stock settings all around :D
 

Samsung EVO Plus 32GB, SDHC, UHS-I, U1, up to 130MB/s, FHD, memory card (MB-SC32K)


CrystalDiskMark_20230616134209.png
SS 32GB EVO Plus for Creators.jpg
 
My D: drive - Samsung 980 PRO 1TB

cdmark-980PRO-1TB.jpg
 
CrystalDiskMark_20230617163622.png


Seagate FireCuda 530 2TB.
 
Samsung SSD 990 PRO 1TB.
 

Attachments

  • Untitled-1 copy.jpg (153.6 KB)
Samsung 980 1TB NVMe M.2
1687035049488.png
 