hard disk speed with file size question

Thread starter: wolf2009 (Guest)
Hard disk speed depends on the file size, right?

Like if you have 1000 small files of 1 MB each, it would take longer to transfer them than 1 file of 1000 MB.

My question is: why is that?
 
It would take longer to eat 1000 apples than eat 1 huge apple. Something to do with surface area I think.
 
I think the 1GB file would be faster, as long as all the files were defragged correctly.
 
Seeks will kill performance. Another factor is OS overhead: "processing" a lot of small files takes more computing power than handling one big chunk of data.
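
For anyone curious, here's a rough sketch (not from this thread) showing one way to see the effect yourself: it writes the same total amount of data once as a single file and once as many small files, and times both. The sizes, file names and fsync calls are arbitrary choices, and real numbers depend heavily on the drive, filesystem and cache.

```python
# Illustrative sketch: same total data, written as one big file vs. many small ones.
import os
import tempfile
import time

CHUNK = 1024 * 1024              # 1 MB per write
COUNT = 100                      # 100 x 1 MB vs. one 100 MB file
payload = os.urandom(CHUNK)

with tempfile.TemporaryDirectory() as d:
    # One large, sequential write
    t0 = time.perf_counter()
    with open(os.path.join(d, "big.bin"), "wb") as f:
        for _ in range(COUNT):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())     # force the data out of the OS cache
    big = time.perf_counter() - t0

    # The same data split across many small files
    t0 = time.perf_counter()
    for i in range(COUNT):
        with open(os.path.join(d, f"small_{i}.bin"), "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
    small = time.perf_counter() - t0

    print(f"one {COUNT} MB file : {big:.3f} s")
    print(f"{COUNT} x 1 MB files: {small:.3f} s")
```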
 
Hard drives are physically divided into fixed-size blocks, and one block can only hold data for one file. If you have 1000 x 1 MB files and each of them leaves part of its last block unused, they need more blocks than a single 1000 MB file, which fills each block completely before moving on to the next and only has one partially used block at the end. So if each of the 1000 small files wastes one block, they need 999 more hard drive blocks than the 1000 MB file does.

On top of that there is the overhead of handling multiple files and the extra information kept for each separate file, plus the fact that the 1000 smaller files, even if defragged, can sit on completely different sectors of the platter, forcing the read head to jump all over the disk. A single large file, if properly defragmented, should be laid out sequentially, which reduces the seek time very significantly.
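
Purely for illustration, here's a back-of-the-envelope version of the slack calculation above, assuming a 4 KB cluster size (a common NTFS default; the real value depends on how the volume was formatted):

```python
# Cluster "slack": every file occupies whole clusters, so the partly filled
# last cluster of each file is allocated but unused space.
import math

CLUSTER = 4 * 1024                             # bytes per cluster (assumed 4 KB)

def on_disk_size(logical_bytes: int) -> int:
    """Bytes actually allocated: logical size rounded up to whole clusters."""
    return math.ceil(logical_bytes / CLUSTER) * CLUSTER

one_big = on_disk_size(1000 * 1_000_000)       # a single 1000 MB file
many_small = 1000 * on_disk_size(1_000_000)    # 1000 files of 1 MB each

print("single 1000 MB file allocates:", one_big, "bytes")
print("1000 x 1 MB files allocate   :", many_small, "bytes")
print("extra slack                  :", many_small - one_big, "bytes")
```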
 
Seeks will kill performance. Another factor is OS overhead: "processing" a lot of small files takes more computing power than handling one big chunk of data.

It's exactly what W1zz said, no surprise there.

There's no guarantee your 1,000 small files are going to be located near each other on the hard drive. With a (very) rough average of 10 ms seek time on hard drives, you could end up doing a lot of seeks, whereas a single (defragmented) file could be done in one smooth transfer.

If you look at benchmark programs, you'll often see "sustained" vs "random" read and write tests. Random would be your 1,000 small files; sustained, one large file.
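
As a minimal sketch of what those sustained vs. random tests boil down to (the file size, block size and scratch file name here are made up, and on an SSD or with a warm OS cache the gap will be far smaller than on a spinning disk):

```python
# Sequential ("sustained") vs. random reads over the same file.
import os
import random
import time

PATH = "testfile.bin"            # assumed scratch file name
SIZE = 256 * 1024 * 1024         # 256 MB test file
BLOCK = 64 * 1024                # 64 KB per read

with open(PATH, "wb") as f:      # create the test file once
    for _ in range(SIZE // BLOCK):
        f.write(os.urandom(BLOCK))

# Sustained: read the file front to back
t0 = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"sequential read: {time.perf_counter() - t0:.3f} s")

# Random: jump to a random offset before every read (forces head seeks on an HDD)
offsets = [random.randrange(0, SIZE - BLOCK) for _ in range(SIZE // BLOCK)]
t0 = time.perf_counter()
with open(PATH, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
print(f"random read    : {time.perf_counter() - t0:.3f} s")

os.remove(PATH)
```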
 
It's easier for a file system to take a big file, segment it, and throw it on the drive than to micro-manage 1000 individual files that may or may not be related. Moreover, you have 1000 times as much information in the file system describing what each individual file is.

Small files are less likely to fragment than larger files, but many small files take longer to write (due to the updates to the file system) than a few large files.
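
To isolate that file system bookkeeping cost, here's a small sketch (the count and names are made up) that creates files containing no data at all, so nearly all of the time goes into directory entries and per-file records rather than moving data:

```python
# Pure metadata cost: creating empty files means no payload is transferred,
# so the elapsed time is almost entirely filesystem bookkeeping.
import os
import tempfile
import time

COUNT = 1000

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    for i in range(COUNT):
        # open + close creates a directory entry and an on-disk file record
        open(os.path.join(d, f"empty_{i}"), "wb").close()
    elapsed = time.perf_counter() - t0

print(f"created {COUNT} empty files in {elapsed:.3f} s "
      f"(~{elapsed / COUNT * 1000:.2f} ms per file)")
```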
 
So the OS has to create file system index entries for each file?
 
Move a box of m&m's from one location to another. Now move them back, but this time one candy at a time. Does it take longer?

Of course the box is heavy and you'll have difficulty moving it, so your movement will be partially impaired. A single candy is easier to move, but you have to repeat a few steps to move just one, and you have a whole bunch of them to move, so you'll repeat the same steps over and over again.

Unless you get hungry and eat a few, then the OS will give you an IO error.
 