
Insidious New "NetSpectre" Vulnerability Can Be Exploited Over Network

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,683 (7.42/day)
Location
Dublin, Ireland
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard Gigabyte B550 AORUS Elite V2
Cooling DeepCool Gammax L240 V2
Memory 2x 16GB DDR4-3200
Video Card(s) Galax RTX 4070 Ti EX
Storage Samsung 990 1TB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
The "Spectre" family of vulnerability, an exploitation of the speculative execution features of modern processors (mostly Intel), was scary enough. Up until now, running malware that implements Spectre needed one to run the program on a local machine. Running it remotely was limited to well-crafted JavaScript executed on the victim's machine, or cloud hosts made to process infected files. This is about to change. Security researchers from Graz University of Technology, including one of the discoverers of the "Meltdown" vulnerability, Daniel Gruss; have discovered NetSpectre, a fully network-based exploit that can let attackers read the memory of a remote machine without executing any program on that machine.

NetSpectre works by deriving bits and bytes from memory based on measurements of the time the processor takes to succeed in, or recover from a failure in, speculative execution. As a processor executes code, it speculates what the next instruction or data will be and stores the outcomes ahead of time. A successful "guess" is rewarded with tangible performance benefits, while an unsuccessful guess is penalized by having to repeat the step. By measuring the precise time it takes the processor to respond to either outcome (success or failure in speculative execution), the contents of memory can be inferred.
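To make the idea concrete, here is a minimal sketch (not the researchers' code) of the statistical trick involved: time a remote request many times and let averaging wash out the network jitter until a nanosecond-scale difference becomes visible. The probe_latency_ns() function below is a hypothetical stand-in that merely simulates a round trip whose mean shifts slightly depending on a secret bit.

```c
/* Minimal sketch, not the researchers' code: estimating one secret bit
 * from many noisy latency samples. probe_latency_ns() is a hypothetical
 * stand-in that simulates a network round trip whose mean differs by a
 * few nanoseconds depending on whether the targeted bit is set. */
#include <stdio.h>
#include <stdlib.h>

static double probe_latency_ns(int secret_bit)
{
    double base   = 1e6;                        /* ~1 ms of network latency */
    double jitter = (rand() % 2000) - 1000;     /* +/- ~1 us of noise       */
    double signal = secret_bit ? 50.0 : 0.0;    /* tiny timing difference   */
    return base + jitter + signal;
}

int main(void)
{
    const int samples    = 100000;  /* the article's ~100k measurements per bit */
    const int secret_bit = 1;       /* what the attacker is trying to learn     */
    double sum = 0.0;

    for (int i = 0; i < samples; i++)
        sum += probe_latency_ns(secret_bit);

    double mean = sum / samples;
    /* With enough samples the jitter averages out, the mean shifts by the
     * ~50 ns "signal", and comparing against a calibrated baseline reveals
     * the bit. */
    printf("mean latency: %.1f ns\n", mean);
    printf("guessed bit:  %d\n", mean > 1e6 + 25.0 ? 1 : 0);
    return 0;
}
```

With 100,000 samples, jitter on the order of a microsecond averages down to a couple of nanoseconds of uncertainty, which is why a difference far smaller than the network latency can still be resolved.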


It is a slow and tedious process, though, and attackers use the victim machine's own inconspicuous networked applications to make the measurements. It takes around 100,000 measurements to derive the value of a single bit and, on average, 30 minutes to derive a byte; if the code instead uses the AVX2 registers (i.e., measuring the time it takes the processor to power the AVX2 unit up or down in response to load from the networked application), it takes about 8 minutes to derive a byte. Even at that rate it would take roughly 15 years to read out 1 MB of data; but if all an attacker needs is a cryptographic key a few bytes long, and they know exactly where to look for it, an attack can succeed in a tangible amount of time.
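For context, some quick back-of-the-envelope arithmetic with the figures quoted above (roughly 30 minutes per byte over the cache channel, roughly 8 minutes per byte over the AVX2 channel). The ~15-year figure for 1 MB lines up with the faster AVX2 rate; the cache channel would take several times longer, while a 32-byte key is a matter of hours:

```c
/* Back-of-the-envelope arithmetic using the figures quoted in the article:
 * ~30 minutes per byte (cache channel) and ~8 minutes per byte (AVX2). */
#include <stdio.h>

int main(void)
{
    const double minutes_per_year = 365.25 * 24 * 60;
    const double bytes_per_mib    = 1024.0 * 1024.0;

    double cache_years = 30.0 * bytes_per_mib / minutes_per_year; /* ~59.8 years */
    double avx2_years  =  8.0 * bytes_per_mib / minutes_per_year; /* ~16 years   */
    double key_hours   = 30.0 * 32 / 60.0;   /* 256-bit (32-byte) key: ~16 hours */

    printf("1 MiB via cache channel: %.1f years\n", cache_years);
    printf("1 MiB via AVX2 channel:  %.1f years\n", avx2_years);
    printf("32-byte key via cache:   %.1f hours\n", key_hours);
    return 0;
}
```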

Intel downplayed NetSpectre. In a statement, the company said:
NetSpectre is an application of Bounds Check Bypass (CVE-2017-5753), and is mitigated in the same manner - through code inspection and modification of software to ensure a speculation stopping barrier is in place where appropriate. We provide guidance for developers in our whitepaper, Analyzing Potential Bounds Check Bypass Vulnerabilities, which has been updated to incorporate this method. We are thankful to Michael Schwarz, Daniel Gruss, Martin Schwarzl, Moritz Lipp, & Stefan Mangard of Graz University of Technology for reporting their research.
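For a rough idea of what that mitigation looks like in practice, here is an illustrative C fragment; the table and function are made up, not taken from Intel's whitepaper. A serializing LFENCE, emitted via the _mm_lfence() intrinsic, is placed after the bounds check so the processor does not speculatively execute the dependent load with an out-of-bounds index.

```c
/* Illustrative only: the kind of "speculation stopping barrier" Intel's
 * guidance describes for Bounds Check Bypass. The data structures here
 * are hypothetical; the _mm_lfence() pattern is the point. */
#include <stdint.h>
#include <stddef.h>
#include <emmintrin.h>   /* _mm_lfence() */

#define TABLE_SIZE 256
static uint8_t table[TABLE_SIZE];

uint8_t read_table(size_t untrusted_index)
{
    if (untrusted_index < TABLE_SIZE) {
        /* Serialize here so the load below cannot be executed speculatively
         * with an out-of-bounds index before the branch resolves. */
        _mm_lfence();
        return table[untrusted_index];
    }
    return 0;
}
```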

View at TechPowerUp Main Site
 
Seems like a lot of effort for most likely getting zero results (it has to be a highly targeted attack which basically excludes 99% of home machines). I also wonder how firewalls would affect this process. Be it basic ones or more capable ones with SPI...
 
Would that attack work outside LAN or over several network segments?
 
This seems about as viable as those famous "vulnerabilities" for AMD processors. A lot of puff but not much practical use for the attackers. Same here. I don't think this would be that worrisome to Intel users, even in more critical places. There are just too many "ifs" to get results...
 
The Spectre and Meltdown vulnerabilities that just keep on giving.

Could it be a genius conspiracy to force the whole world to upgrade to new silicon that is immune to these new vulnerabilities?

I mean, PC sales are going up for the first time in seven years :)
 
Ummmm. I hate to say it, but this is even less exploitable than regular Spectre. In other words, completely useless. Come back to me when someone actually manages to make malicious use of this; we're still waiting for one based on Spectre, which never happened. Perhaps the type of exploit does matter. :slap:

In other words, if it's insanely hard to do in a lab, how the hell do you expect to exploit it in the real world?
 
I honestly don't really see how the heck this new vulnerability is even possible to pull off. And what if I use a dedicated NIC instead of the onboard LAN? Does that change anything?

I mean, it's interesting as research, but not really practical.
 
Thankfully, I'm nobody and my computer is meaningless. Maybe I'd be worried if I were Iran and the NSA was targeting me. Most of these things are going to take that amount of scrutiny. Who the hell am I? Or you? Don't flatter yourself. :)
 
The Spectre and Meltdown vulnerabilities that just keep on giving.

Could it be a genius conspiracy to force the whole world to upgrade to new silicon that is immune to these new vulnerabilities?

I mean, PC sales are going up for the first time in seven years :)

Yeah, with the fun side note that none of the hardware coming out now is actually Spectre-proof; instead we say it's 'hardened'... until the next exploit is found.

Thankfully, I'm nobody and my computer is meaningless. Maybe I'd be worried if I were Iran and the NSA was targeting me. Most of these things are going to take that amount of scrutiny. Who the hell am I? Or you? Don't flatter yourself. :)

Don't worry, they 'have' you too. It's called dragnet surveillance. There are no targets, there is only everything.
 
I'm no electrical engineer... and I'm definitely no hacker. But this is suspect as hell.

How are you going to measure the time it takes a processor to make a branch decision (which takes on the order of generally less than a nanosecond) over a network with latency in the milliseconds?
 
I'm not normally one for conspiracy theories, but IMHO, this just reeks of yet another marketing ploy to get everyone to drop more serious coin on new, "upgraded/updated" networking devices to "protect" ourselves from something that, as others have said, is really difficult to execute and not very rewarding for anyone but the most dedicated and sophisticated hackers looking for ANY break in their quest for more bragging rights than anything else...
 
Not to mention... if it takes 100k measurements to derive the value of a single bit (presumably in memory), then by the time you've read that bit, it has changed anyway. Sounds completely useless, even if possible.
 
I'm no electrical engineer... and I'm definitely no hacker. But this is suspect as hell.

How are you going to measure the time it takes a processor to make a branch decision (which takes on the order of generally less than a nanosecond) over a network with latency in the milliseconds?

Locally executed code via a network service?

Not to mention... if it takes 100k measurements to derive the value of a single bit (presumably in memory), then by the time you've read that bit, it has changed anyway. Sounds completely useless, even if possible.

It certainly slows ops down. It's basically dividing your memory bandwidth by at least 100k. For high value targets it could still be useful however.
 
Locally executed code via a network service?
It certainly slows ops down. It's basically dividing your memory bandwidth by at least 100k. For high value targets it could still be useful however.
Observing one byte every 30 minutes isn't exactly what I would call a feasible exploit, never mind *how* you determine whether you even have the right byte. How much network load does it generate? Will the machine lose connection to the network, or have predictable network latency? I like theories, but is it even realistic for this to work outside of a freaking lab?!

So, forget "NetSpectre". Lets talk about Spectre (a seemingly "easier" exploit.) Name me a single virus or malware in the wild, that has been successful and identified as using this as a vector for attack. I like theories, but we have to measure everything in common sense. Reading a single byte (maybe,) every 30 minutes doesn't get me anything. I would need to know exactly what I'm looking for, it would have to be not changing, and network conditions would have to be ideal if it's even reproducible.
 
I think we're crossing the limits of what's feasibly possible, and moving on to the territory of technically possible here...
 
Observing one byte every 30 minutes isn't exactly what I would call a feasible exploit, never mind *how* you determine whether you even have the right byte. How much network load does it generate? Will the machine lose connection to the network, or have predictable network latency? I like theories, but is it even realistic for this to work outside of a freaking lab?!

So, forget "NetSpectre". Let's talk about Spectre (a seemingly "easier" exploit). Name me a single virus or malware in the wild that has been successful and identified as using this as an attack vector. I like theories, but we have to measure everything with common sense. Reading a single byte (maybe) every 30 minutes doesn't get me anything. I would need to know exactly what I'm looking for, it would have to be unchanging, and network conditions would have to be ideal, if it's even reproducible.

I know, which is why I said "high value targets." And by that I mean like state level actors, who are always trying again (and again, and again). No one here is going to be at much risk from NetSpectre as it presently stands. Unless I am misunderstanding something, of course.
 
Locally executed code via a network service?
It certainly slows ops down. It's basically dividing your memory bandwidth by at least 100k. For high value targets it could still be useful however.

I could be wrong, but if it's locally executed code, then it's just Spectre, right? Nothing new. Also, I quote from the OP:

without executing any program on that machine.

Also, I'm not saying the problem is the speed at which it happens. If it's a high-value target, sure, you can spend however much time it takes. But that's not my point. I'm saying it's impossible to measure something that happens in a fraction of a nanosecond with a measuring stick that maxes out in milliseconds. They're trying to measure a molecule with a yardstick. It just isn't going to happen.

Not to mention, bits in memory are constantly flipping. The very act of reading a bit changes the contents of that memory (though hopefully it changes a different bit.) By the time they've read one bit, the rest of the memory has changed. Maybe if we're talking about RAM, it could be feasible that the contents remain unchanged. But I think we're talking about memory on board the processor since we're talking about branch prediction. If that's the case, the processor has moved on within milliseconds. I don't know how they expect to read anything useful, and not just random bits taken out of context because the program is in flight.
 