When a browser loads an image, it should never, under any circumstances, need to execute that part of memory. It knows it's data, so that memory region never needs to be executed. That's why we have languages like Rust and DEP at the hardware level: to make sure data memory regions are never "executed".

Oops, sorry, you need to brush up on a few things.
If an image, or any data, is loaded into a browser, then the browser runs that data.
Or do you dispute that this can happen?
Remember the crypto-coin mining browser controversy currently circulating.
And let's leave mining out of this; it really has nothing to do with the subject.
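To make the earlier DEP point concrete, here is a minimal sketch, assuming an x86-64 Linux host with Python's `mmap` module: bytes sitting in an ordinary data buffer (a decoded image, say) are inert, and the CPU will only run them if the page they live in is explicitly mapped with `PROT_EXEC`. Without that flag, the NX/DEP bit faults on the first instruction. The `run_as_code` helper and the machine-code constant are illustrative, not from any real browser.

```python
import ctypes
import mmap

# x86-64 machine code for: mov eax, 42; ret
CODE = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

def run_as_code(raw: bytes) -> int:
    """Copy raw bytes into a page explicitly mapped executable and call them.

    The same bytes in a normal read/write buffer would never run: DEP/NX is
    exactly the hardware refusing to execute pages lacking the execute bit.
    """
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(raw)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    fn = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
    return fn()

if __name__ == "__main__":
    print(run_as_code(CODE))  # returns 42, only because we asked for PROT_EXEC
```

The takeaway: "loading" data and "executing" it are separate steps, and an attacker needs a software bug to bridge them.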
Thank you, this is what I've been saying from the start: it's the decoders that should be under scrutiny here, not the image format itself.

There are a lot of ways to run code from within a file that carries the signature of an image (or of pretty much anything else) by exploiting vulnerabilities in the software used to view it. One of the most basic methods is abusing buffers and overwriting data at memory locations that are marked as executable. That's how code gets to "run by itself simply by being loaded into memory"; it isn't that complicated.
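The decoder-bug point can be sketched in a few lines. Everything here is hypothetical: the "IMG0" format, its header layout, and `naive_decode` are invented for illustration. The decoder sizes its output buffer from a length field in the header, then copies the payload without checking it against that length, i.e. it trusts attacker-controlled data. In C that copy would scribble past the end of the buffer; Python merely raises an exception, but the trust error is identical.

```python
import struct

def naive_decode(blob: bytes) -> bytes:
    """Deliberately buggy toy decoder for a made-up "IMG0" format."""
    # Hypothetical 8-byte header: 4-byte magic + 4-byte claimed payload size.
    magic, claimed = struct.unpack_from("<4sI", blob, 0)
    if magic != b"IMG0":
        raise ValueError("not an IMG0 file")
    out = bytearray(claimed)             # buffer sized from the header...
    for i, byte in enumerate(blob[8:]):  # ...but the copy never checks it
        out[i] = byte                    # IndexError when payload > claimed
    return bytes(out)

# Well-formed file: header says 4 payload bytes, and 4 are present.
good = b"IMG0" + struct.pack("<I", 4) + b"\x01\x02\x03\x04"
print(naive_decode(good))

# Malicious file: header claims 2 bytes but ships 4, so the copy overruns.
evil = b"IMG0" + struct.pack("<I", 2) + b"\xde\xad\xbe\xef"
try:
    naive_decode(evil)
except IndexError:
    print("payload overran the buffer the header promised")
```

The file is "just an image" either way; it's the decoder's unchecked copy that turns crafted pixels into a memory-corruption primitive.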