
RAID transfer SLOOOW

Discussion in 'Storage' started by taz420nj, Jul 9, 2017.

  1. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    10,102 (5.03/day)
    Thanks Received:
    5,157
    Location:
    Concord, NH
    Code:
     - Disk: #0: WDC WD1600BEVS-00RST0 --
    
        Hard Disk Summary
       -------------------
        Hard Disk Number . . . . . . . . . . . . . . . . : 0
        Interface  . . . . . . . . . . . . . . . . . . . : Intel RAID #0/0 [11/0 (0)]
        Hard Disk Model ID . . . . . . . . . . . . . . . : WDC WD1600BEVS-00RST0
        Firmware Revision  . . . . . . . . . . . . . . . : 04.01G04
        Hard Disk Serial Number  . . . . . . . . . . . . : WD-WXE107092151
        Total Size . . . . . . . . . . . . . . . . . . . : 152627 MB
        Power State  . . . . . . . . . . . . . . . . . . : Active
        Logical Drive(s) . . . . . . . . . . . . . . . . : C: []
        Current Temperature  . . . . . . . . . . . . . . : 26 °C
        Power On Time  . . . . . . . . . . . . . . . . . : 1650 days, 1 hours
        Estimated Remaining Lifetime . . . . . . . . . . : 9 days
        Health . . . . . . . . . . . . . . . . . . . . . : #------------------- 9 % (Critical)
        Performance  . . . . . . . . . . . . . . . . . . : #################### 100 % (Excellent)
    
        There are 129 bad sectors on the disk surface. The contents of these sectors were moved to the spare area.
        Based on the number of remapping operations, the bad sectors may form continuous areas.
        Problems occurred between the communication of the disk and the host 450 times.
        In case of sudden system crash, reboot, blue-screen-of-death, inaccessible file(s)/folder(s), it is recommended to verify data and power cables, connections - and if possible try different cables to prevent further problems.
        More information: http://www.hdsentinel.com/hard_disk_case_communication_error.php
        It is recommended to examine the log of the disk regularly. All new problems found will be logged there.
          It is recommended to backup immediately to prevent data loss.
    You say it drops down to ~40 MB/s? Is it a coincidence that the maximum transfer rate of your possibly failing C: drive matches the speed you're seeing?
    Code:
        Maximum Transfer Rate  . . . . . . . . . . . . . : 41475 KB/s
    The winning question: Is your swap file on the C: drive? Poor swap performance can impact copy speeds.

    Better question: Is your C: drive about to die?
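    As a quick sanity check (nothing more), the controller reports that rate in KB/s; converting it to MB/s shows it lands almost exactly on the ~40 MB/s plateau described above:

```python
# Quick sanity check: the report lists the rate in KB/s; convert to
# MB/s (1 MB = 1024 KB) to compare with the observed ~40 MB/s plateau.
reported_kb_s = 41475  # "Maximum Transfer Rate" from the report above
reported_mb_s = reported_kb_s / 1024
print(f"{reported_mb_s:.1f} MB/s")  # 40.5 MB/s
```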
     
    taz420nj says thanks.
  2. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    I have tried both PCIe slots, and yeah I realized the end slot is 100MHz. It's currently in the third slot (tried both).

    C: is a pair of drives in RAID1. Yes one of the drives is growing a bad patch. They are going to be replaced.

    I may buy that, but why do some folders transfer at full speed? Isn't the /J switch supposed to bypass the OS buffer? And the C: drives are only SATA I, so if it's transferring to the swap first, I should never see any transfer between the two arrays greater than half that interface speed.
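    The halving argument can be sketched numerically, assuming SATA I's nominal 150 MB/s and ignoring protocol overhead:

```python
# SATA I tops out at a nominal 150 MB/s. If every byte were staged
# through a swap file on the C: drives and then read back out, the
# same link would carry each byte twice, halving effective throughput.
SATA1_MB_S = 150
swap_staged_ceiling = SATA1_MB_S / 2
print(swap_staged_ceiling)  # 75.0 MB/s upper bound for a swap-staged copy
```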
     
    Last edited by a moderator: Jul 15, 2017
  3. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    10,102 (5.03/day)
    Thanks Received:
    5,157
    Location:
    Concord, NH
    Try disabling your swap file or moving it to the fastest array you have.

    It may be the case that just enough is getting cached to hit the swap file, more than what you have for available system memory on that machine. You can use the resource monitor to look for swap activity and see if it matches when it slows down.
     
    taz420nj and blobster21 say thanks.
  4. blobster21

    blobster21

    Joined:
    Oct 24, 2004
    Messages:
    758 (0.16/day)
    Thanks Received:
    419
    It would be extremely disappointing to have that much RAM and still be tied to the inferior performance of disk swapping.... the performance monitoring in the earlier screenshots suggested the system used 9-10 GB of RAM max at any given time.
     
    taz420nj says thanks.
  5. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    Disabled the swap file, no change.. (yes I rebooted)

    System is only using about 4GB.. I even checked during an Explorer copy and it didn't chew it all up like it's buffering...

    [screenshot]
    [screenshot]
     
  6. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    10,102 (5.03/day)
    Thanks Received:
    5,157
    Location:
    Concord, NH
    According to the report you uploaded:
    Code:
      -- Partition Information --
    
    Logical Drive                           Total Space         Free Space          Free Space               Used Space
    C: (Disk: #0-1)                         148.7 GB            77.1 GB              52 %                    #########-----------
    D: RAID-5 (Disk: #11-12-13-14)          5587.9 GB           29.4 GB               1 % (Low)              ###################-
    G: RAID-6 (Disk: #3-4-5-6-7-8-9-10)     11174.9 GB          11148.6 GB          100 %                    --------------------
    I: TeraDrive (Disk: #2)                 931.5 GB            8.0 GB                1 % (Low)              ###################-
    
    If the RAID-5 disk is that filled up, is it possible that fragmented data is making it hard to get good read speeds? I'm just guessing at this point. I'm not really sure what's going on but, the RAID-5 array being practically full stood out to me.
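    The fragmentation guess can be put in rough numbers. This is a toy model with assumed figures (200 MB/s streaming rate, 12 ms per seek), not measurements from this array:

```python
# Toy model: a "sequential" read of a fragmented file pays one seek
# penalty per fragment. The 200 MB/s streaming rate and 12 ms seek
# time are illustrative assumptions, not measurements.
def effective_read_mb_s(file_mb, fragments, stream_mb_s, seek_ms):
    stream_s = file_mb / stream_mb_s       # pure transfer time
    seek_s = fragments * seek_ms / 1000.0  # head repositioning time
    return file_mb / (stream_s + seek_s)

print(round(effective_read_mb_s(1000, 1, 200, 12)))     # ~200 MB/s, contiguous
print(round(effective_read_mb_s(1000, 2000, 200, 12)))  # ~34 MB/s, badly fragmented
```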
     
    taz420nj says thanks.
  7. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    I don't see how that could be, since stuff is written once and rarely deleted. It may have some fragmented parts but I can't imagine it being that bad.
     
  8. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    did you turn off write-cache buffer flushing in Device Manager for all the drives?

    you absolutely need to do that with RAID 5
     
    taz420nj says thanks.
  9. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    What about RAID 6? The 5 array is only going to be reads.
     
  10. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    write-cache buffer flushing needs to be off with any RAID setup that has a RAID card
    the card manages buffer control, and having it enabled in Windows is known to murder speeds

    the exception to this is a two- or four-drive RAID 0

    the reason it hurts is that it effectively makes the write buffer flush twice, because the RAID card is already acting as the drive cache
    on a multi-drive or high-level array it kills performance
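    The double-flush cost is easy to demonstrate in miniature. This Python sketch is illustrative only (not the actual Windows/RAID code path): it writes the same data with and without a forced flush after every write, where the per-write flushes play the role of Windows flushing through a controller that already caches:

```python
import os
import tempfile
import time

CHUNK = b"\0" * 65536  # 64 KiB per write

def write_chunks(path, n_chunks, fsync_every_write):
    """Write n_chunks of CHUNK to path.

    With fsync_every_write=True, force a flush to stable storage after
    every single write -- the analogue of the OS flushing buffers on
    top of a RAID card that already has its own cache, so each buffer
    is effectively flushed twice.
    """
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(CHUNK)
            if fsync_every_write:
                f.flush()
                os.fsync(f.fileno())  # redundant second flush layer
        f.flush()
        os.fsync(f.fileno())          # one final flush either way

for redundant in (False, True):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    t0 = time.perf_counter()
    write_chunks(path, 200, redundant)
    elapsed = time.perf_counter() - t0
    os.unlink(path)
    print(f"flush per write={redundant}: {elapsed:.3f}s")
```

    On most disks the per-write-flush run is dramatically slower, even though the same bytes end up on disk.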
     
    taz420nj says thanks.
  11. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    Okay, that did seem to make a little difference.. Still slow, but 20 MB/s better.. :)

    [screenshot]
     
  12. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    try turning write caching all the way off
     
    taz420nj says thanks.
  13. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    [screenshot]
     
  14. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    Last edited: Jul 16, 2017
  15. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    Sigh.... I thought I had this issue beat. I didn't change anything else at this point, I'd rather have been doing yardwork lol.. Rebooted the machine and tried copying folders one at a time.. Got full speed. Over 200 MB/s consistently. So I tried doing a mass move, about 15 folders. First four (about 40 GB, including tiny .jpg files) blazed through - hitting over 300 MB/s at the second peak. Then it hit a wall and dropped like a rock. So seriously.. Please... Somebody must know what in the F*&K is going on here?

    [screenshot]


    edit:

    It seems to be all over the place at this point.. No consistency in the speed whatsoever..

    [screenshot]

    Edit again...

    It almost seems like it has to do with the files themselves. I don't mean big vs. small files, because it happens partway through the media files themselves. I mean one movie file seems to transfer a lot faster than another.. They are all .mkv, but that shouldn't matter, because bits are bits as far as copying goes, right? Because when it got to this file, look what happened...

    [screenshot]
     
    Last edited: Jul 24, 2017 at 12:57 AM
  16. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    fragmentation? did you change any of the caching settings with megacli64?

    you could also have a drive or a card not playing nice / going out

    sadly at this stage I think you may be looking at an array rebuild (again)
     
    Last edited: Jul 24, 2017 at 2:30 AM
  17. taz420nj

    taz420nj

    Joined:
    Jul 21, 2015
    Messages:
    358 (0.49/day)
    Thanks Received:
    238
    I don't see what settings I could change with MegaCLI that would make any difference.. It's got a BBU and it is already set to Write Back. I did that through the option ROM BIOS when I set up the array. It's not an SSD array, so setting it to Write Through is only going to hurt performance. Already been to that movie with the old card before I had the BBU for it.

    The two RAID cards are the only cards in the machine - and they are on two completely different buses. The old card is PCI-X and the new one is PCIe.

    I have rebuilt that array about 36 times already with different settings/formatting. What good would doing it again do?

    Seriously I'm not shitting on you because I appreciate the help but the definition of insanity is doing the same thing over and over and expecting a different result.
     
    Last edited: Jul 24, 2017 at 3:40 AM
  18. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    9,771 (2.28/day)
    Thanks Received:
    2,242
    Last edited: Jul 24, 2017 at 4:36 AM
  19. OneMoar

    OneMoar There is Always Moar

    Joined:
    Apr 9, 2010
    Messages:
    7,048 (2.64/day)
    Thanks Received:
    3,823
    Location:
    Rochester area
    did you try write-through mode? write-back is slower than write-through in some cases when using large amounts of cache

    is the board throttling the PEG lanes to the cards? is PCIe power management disabled?

    do you have enough PCIe lanes to keep the cards fed?

    that board is pretty old, it might be down to not having enough PCIe bandwidth, or it could just be one of the drives performing below par and holding things up
     
    Last edited: Jul 24, 2017 at 4:43 AM
  20. zwing688 New Member

    Joined:
    Yesterday
    Messages:
    1 (1.00/day)
    Thanks Received:
    1
    I signed up after reading this thread because I am experiencing huge write speed drops with an Adaptec 71605 controller under Windows 10.

    CPU: Intel i7-3930K ; 32GB RAM ; Adaptec 71605 (without BBU) Firmware 32106 and drivers v7.5.0.52013 ; 8 WD Black 2.5" 750GB WD7500BPKX hard disks with available space set as 750GB RAID-50 boot then 1800GB RAID-6 and 1638GB RAID-6 ; Windows 10 x64 1607 14393.1378

    Under Windows 7 SP1 and Linux I saw no speed drops in write operations.

    With Windows 10 there must be some bug with RAID controllers that makes writing slow down.. Is the OS write cache completely broken for RAID? And reading here, it seems that Windows Server versions are affected too. All known Microsoft spyware/malware stuff has been disabled with DoNotSpy10.

    Reading files from RAID partitions is always fast.. although no more than 160 MB/s in Windows 10 anyway, higher in Windows 7.
    Writing files to RAID partitions starts fast, in the 120 MB/s range, then drops down to 10 or 20 MB/s. It always happens after 4 GB (regardless of whether it's a huge single file of 10 GB+ or a bunch of small files totaling more than 4 GB).
    Using the ExtremCopy Free utility I managed to get a sustained 70 MB/s write speed on RAID partitions. It doesn't use the Windows OS write-cache calls but its own.
    The problem affects all programs that use the Windows write cache, so they slow down a lot. The system is way slower than under Windows 7 or Linux anyway.
    None of the Windows10 monthly updates (manually installed from Microsoft Catalog) so far fixed the issues.

    I sent a report to Adaptec support about these issues. I hope they will be able to fix it with Microsoft and the other RAID controller manufacturers quickly.
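    The fast-then-wall pattern described above (full speed until ~4 GB, then a crawl) is what a filling write cache looks like. A toy model with assumed numbers (4 GB cache, 120 MB/s burst, 15 MB/s drain; none of these are measured from this system):

```python
# Toy model of a copy that bursts into a RAM write cache and then
# stalls once the cache is full. All numbers are illustrative
# assumptions, not measurements.
def avg_write_speed(total_gb, cache_gb, burst_mb_s, drain_mb_s):
    """Average MB/s over the whole copy."""
    cached_gb = min(total_gb, cache_gb)  # lands in cache at burst speed
    rest_gb = total_gb - cached_gb       # throttled to the drain speed
    seconds = cached_gb * 1024 / burst_mb_s + rest_gb * 1024 / drain_mb_s
    return total_gb * 1024 / seconds

print(round(avg_write_speed(4, 4, 120, 15)))   # 120 -> fits in the cache
print(round(avg_write_speed(10, 4, 120, 15)))  # 23  -> hits the wall
```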
     
    Last edited: Jul 25, 2017 at 9:10 PM
    blobster21 says thanks.
  21. blobster21

    blobster21

    Joined:
    Oct 24, 2004
    Messages:
    758 (0.16/day)
    Thanks Received:
    419
    Good point, zwing688.

    After having my file server on Microsoft OSes for years (Windows 7, then Windows 2012R2, and ultimately Windows 10), I moved to Ubuntu MATE Zesty, mostly because I randomly experienced things I couldn't pinpoint, explain, or reproduce at will.

    Don't get me wrong, it worked remarkably well most of the time, but on some occasions I ran into inexplicable slowdowns like the ones @taz420nj and @zwing688 described.

    So far i've had no complaints about transfer speed to/from my arrays.

    It would be interesting to boot from a live Linux medium and do a couple of transfers from your old array to the new one. Most recent distros likely have native support for the PERC H700 and the older LSI 9550 RAID adapters.

    You would probably not get the highest possible speed out of them, because both arrays have been formatted as NTFS/ReFS and the ntfs-3g implementation on Linux is not as efficient as the native NTFS support on Microsoft OSes, but you should witness stable, consistent transfer speeds, which would indicate that there's something wrong with the OS itself.
     
