Originally Posted by W1zzard
Windows has its own disk cache, which works great and has a dynamic size. I fail to see the point of allocating a RAM disk that preloads data from disk: the data goes into the disk cache first, then into the RAM disk, and ends up stored in both the RAM disk and the disk cache.
Essentially you are wasting some main memory, but given current RAM pricing, users have done worse things with their money.
When I read this, I assumed we could pin certain files or folders, i.e. if you played a certain game and wanted it to load quickly.
It looks like it will work similarly to the disk cache, but on a much larger scale.
It would be great on our ESX servers, which have 64GB of memory each, but it seems a bit too early in terms of RAM capacity for something like this.
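To illustrate the OS disk cache W1zzard is talking about: the first read of a file may come from disk, while repeat reads are usually served straight from RAM by the cache, no RAM disk required. A minimal Python sketch (timings are illustrative and vary by OS and hardware; note the freshly written file may itself already be cached, so a true cold read would require dropping the cache with OS-specific tools):

```python
import os
import tempfile
import time

# Create a ~64 MB scratch file to read back.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))

def timed_read(p):
    """Read the whole file, returning (elapsed seconds, bytes read)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

cold, size = timed_read(path)  # may hit disk (or cache, if the write is still cached)
warm, _ = timed_read(path)     # almost certainly served from the OS page cache

print(f"size={size} bytes, first read {cold:.4f}s, second read {warm:.4f}s")
os.remove(path)
```

The second read is typically far faster than a spinning disk could deliver, which is exactly the "dynamic" caching behavior being described: the OS keeps recently used file data in otherwise-free RAM and evicts it when applications need the memory.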
I predict that in the future we will use SSDs only for large-file (media) storage, and our entire memory space will be what we currently call a RAM disk, made non-volatile. Even then, SSDs won't come with the computer; they will be an optional thing you buy on the side, like external hard drives today.

On another note, threading across multiple cores will be possible from a single thread (programs will no longer have to be written for multi-threading; this will be handled at the hardware level). Core clocks will be standardized, probably somewhere between 2 and 3 GHz across the board, and when we look at processors in the future, we will be deciding between cache configurations and whether we want the 600-core enthusiast chip or the high-end 560-core Intel CPU.