It's gone way past 4 am, I have work tomorrow and I can't sleep, so what better to do than think about internet security and hash functions?
Now, I'm no expert on hash functions, those fiddly mathematical algorithms that underpin the integrity of encrypted communications and files. SHA-1 and MD5 are two well-known ones.
Since the hash value, or fingerprint, that they generate always has far fewer bits than the file being secured (unless it's a tiny file even shorter than the hash length of, say, 128 or 256 bits), any one fingerprint must be shared by many different files, regardless of their length or contents: by the pigeonhole principle, there are simply far more possible files than possible fingerprints. These shared fingerprints are called collisions; they represent a security hole, and it's impossible to eliminate them. The best that can be done is to scatter them as unpredictably as possible so that they're hard to find on purpose. MD5 turned out to have a weakness here which made crafting intentional collisions easier, and hence it was retired from active duty.
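To get a feel for how fast collisions turn up once the fingerprint is short, here's a quick Python sketch (the function names are my own, and I truncate MD5 only to simulate a short hash): a birthday-style search that finds two different inputs sharing the same 24-bit fingerprint in a few thousand tries.

```python
import hashlib
from itertools import count

def truncated_md5(data: bytes, hex_chars: int = 6) -> str:
    """Keep only the first hex_chars of the MD5 digest,
    simulating a hash function with a very short output."""
    return hashlib.md5(data).hexdigest()[:hex_chars]

def find_collision(hex_chars: int = 6):
    """Birthday-style search: hash distinct inputs until two
    of them share the same truncated fingerprint."""
    seen = {}
    for i in count():
        msg = f"message-{i}".encode()
        fp = truncated_md5(msg, hex_chars)
        if fp in seen:
            return seen[fp], msg, fp
        seen[fp] = msg

a, b, fp = find_collision()
print(f"{a!r} and {b!r} both hash to {fp}")
```

With a 24-bit fingerprint the search succeeds after roughly 2^12 attempts; a full 128-bit hash just pushes the same inevitability out to around 2^64 work.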
But what if it's possible to eliminate collisions after all?
I'm thinking of doing something very simple: just sign every file with two different hash functions, producing two different hash values for the same file. Sure, each function will still have all those possible collisions, but they will be in different places. This means it's impossible to create a different file that produces fingerprint collisions in both algorithms at the same time, and it's easy to check for, since at least one of the two hashes will come out wrong every time. This would fix the problem of fake SSL certs being forged with enough computing power, for example.
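The scheme itself is trivial to write down. Here's a minimal Python sketch of the idea (function names are mine, and I use MD5 and SHA-1 purely because they're the two mentioned above): a file is accepted only if both fingerprints match.

```python
import hashlib

def double_fingerprint(data: bytes) -> tuple[str, str]:
    """Compute two independent fingerprints of the same data."""
    return hashlib.md5(data).hexdigest(), hashlib.sha1(data).hexdigest()

def verify(data: bytes, expected: tuple[str, str]) -> bool:
    """Accept the data only if BOTH fingerprints match; a forger
    would need a simultaneous collision in both algorithms."""
    return double_fingerprint(data) == expected

original = b"important document"
fps = double_fingerprint(original)
print(verify(original, fps))          # True
print(verify(b"tampered copy", fps))  # False
```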
Hence, the only cost is a bit of extra time to compute both hashes, and it would eliminate this problem once and for all.
As I said, I'm no expert in this, but it sounds reasonable to me. Waddya think?