It’s mainly a matter of filesystem overhead.
For each file, the filesystem needs to create a directory entry, allocate one or more blocks on the drive, and finally copy the data into those blocks.
Depending on the filesystem it might also create checksum data so that upon reading the file back it can verify that the data stored matches the data that was originally written. This checksum data also needs to be written to the target disk.
This “housekeeping” needs to be performed on every file written to the disk, and takes approximately the same amount of time regardless of file size.
For a large file, once these operations are taken care of, all that’s left for the drive to do is copy a series of full blocks of data, one after the other, from the source drive to the target drive. Drives can do this quite quickly because it’s a simple, predictable operation that can be optimized in their firmware.
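To make the fixed per-file cost concrete, here’s a rough Python sketch that writes the same total payload once as thousands of small files and once as a single large file. The file count, sizes, and names are made up purely for illustration:

```python
# Hypothetical sketch: compare per-file housekeeping cost by writing
# the same total amount of data as many small files vs. one large file.
import os
import time
import tempfile

TOTAL = 8 * 1024 * 1024   # 8 MiB of payload either way
SMALL = 4 * 1024          # 4 KiB per small file
COUNT = TOTAL // SMALL    # 2048 small files
payload = os.urandom(SMALL)

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    for i in range(COUNT):
        # Each iteration pays the fixed cost: directory entry,
        # block allocation, metadata updates.
        with open(os.path.join(d, f"small_{i}.bin"), "wb") as f:
            f.write(payload)
    small_files = time.perf_counter() - t0

    t0 = time.perf_counter()
    # One file pays the fixed cost once, then streams full blocks.
    with open(os.path.join(d, "large.bin"), "wb") as f:
        for _ in range(COUNT):
            f.write(payload)
    one_file = time.perf_counter() - t0

print(f"{COUNT} small files: {small_files:.2f}s, one large file: {one_file:.2f}s")
```

On most filesystems the small-file loop comes out dramatically slower even though the payload is identical, because every iteration repeats the housekeeping described above.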
SSDs are better at small, random read/write operations than mechanical HDDs are, since they don’t have to physically move heads to the desired location on the disk, but they still have to do a lot of behind-the-scenes work when operations happen at random (from the drive’s standpoint) versus just reading/writing a continuous stream of data.
For example, writing a small amount of checksum data might actually involve reading a much larger “chunk” of existing data, merging the new data into it, erasing the original chunk, and writing the combined chunk back.
This is because there is a minimum block size they can work with by design. If these blocks are too small, the drive has to juggle many of them to handle large files, which costs performance. If they’re too big, the drive ends up doing more of these read/modify/erase/write cycles to avoid leaving blocks partially filled, which would waste space.
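As a back-of-the-envelope illustration of that read/modify/erase/write cycle, assuming a hypothetical 4 KiB page and 256 KiB erase block (real drive geometries vary):

```python
# Assumed sizes for illustration only; real drives differ.
PAGE = 4 * 1024            # smallest unit the host writes
ERASE_BLOCK = 256 * 1024   # smallest unit the flash can erase

logical_write = PAGE              # the host changes 4 KiB...
physical_work = 2 * ERASE_BLOCK   # ...but the drive may read the old
                                  # block and write out a new one

amplification = physical_work / logical_write
print(f"Write amplification for a {PAGE // 1024} KiB update: {amplification:.0f}x")
# -> a single 4 KiB update can cost ~128x its size in internal flash traffic
```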
With a large file, the disk drive or the operating system’s driver can fill a look-ahead buffer, making it more likely that data can be read and written continuously. Copying individual small files, by contrast, involves a separate seek operation for each file, plus creating a directory entry and a new file structure on the target, so there is much more overhead that can’t take advantage of I/O buffering.
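For what it’s worth, here’s a minimal sketch of the kind of buffered streaming copy that benefits from this; the 16 MiB chunk size is an arbitrary example, not a recommendation:

```python
import shutil

CHUNK = 16 * 1024 * 1024   # large chunk: few syscalls, sequential I/O

def copy_large(src: str, dst: str) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        # copyfileobj streams CHUNK bytes at a time, so the drive sees
        # one long sequential read/write instead of many small ones.
        shutil.copyfileobj(fin, fout, length=CHUNK)

# Copying thousands of small files can't amortize this: each file still
# needs its own open, directory entry, and metadata writes on the target.
```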
Don’t nail me on this, but it was the first thing that came to mind when I read that sentence.
Put the old games on an HDD and test the difference.
I’d appreciate hearing the result of this attempt at rescuing your SSD.