
New Spectre Vulnerability Version Beats All Mitigations, Performance to Badly Degrade After the Fix

Hence why computers with sensitive data and/or that are critical to infrastructure have no reason to be jacked into the internet or accessible via Wi-Fi other than out of "convenience" for the users. "Convenience" and security never mix. With that said, one comes to understand the futility of it all and realize the hole we've managed to dig ourselves into given the fact that the internet itself has become THE critical piece of infrastructure.

If you want to bring down a country with a powerful military, this is the way to do it. As Sun Tzu said, to defeat the rider, kill his horse.
 
Hi,
This is why InSpectre exists: to disable these hyped threats.
GRC | InSpectre
 
Using my wrists in the up or down position, plus my 10 fingers, gives me the ability to count to 2^12 = 4096.
Left middle finger up, is that one hundred and twenty-eight rather grave Windows issues?
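The arithmetic behind the joke can be sketched in a few lines (the bit assignment, fingers as bits 0–9 with the left middle finger on bit 7, is my own assumption, not anything the post specifies):

```python
# Toy model of binary finger/wrist counting: 10 fingers + 2 wrists
# give 12 binary digits, so 2**12 = 4096 distinct values (0..4095).
# Mapping fingers to the low bits is an assumption for illustration.

FINGERS = 10
WRISTS = 2
DIGITS = FINGERS + WRISTS

def count_capacity(digits: int) -> int:
    """Number of distinct values representable with `digits` binary digits."""
    return 2 ** digits

# With fingers on bits 0-9, the left middle finger could be bit 7,
# which on its own encodes 2**7 = 128.
assert count_capacity(DIGITS) == 4096
assert 2 ** 7 == 128
```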
 
And that only works given a TON of assumptions and perfect circumstances, none of which are real world possibilities. That supposed "proof of concept" was only barely so and had zero practical application.
Oh I'm not disagreeing. I'm just a stickler for the details. If it can be done in JavaScript, it can be done "remotely" however "remote" the possibility, lol.
 
Not again :mad:
 
I will never install any updates that try to address this. We better have a choice and not have it slipped into a windows update or fw update tied to other essential things.
 
Microsoft's enterprise products have Q&A testers. They just aren't who you think.

Hint: They are... you guys!
Hamsters. I am not wrong.

Using my wrists in the up or down position, plus my 10 fingers, gives me the ability to count to 2^12 = 4096.
unless you have 11 fingers...

10, 9, 8, 7, 6, and 5 are 11.
 
I will never install any updates that try to address this. We better have a choice and not have it slipped into a windows update or fw update tied to other essential things.

Then force disable Windows update and leave it off.

Eh, it's likely you'll get the mitigation forced on you at some point if you upgrade to whatever newer Windows version comes later. Though by then you might have migrated to a processor that already ships with hardware mitigations or enough architecture changes that render the vulnerability even more of a non-issue than it already is.

Sounds like a planned strategy to force consumers into buying new hardware. As always, never trust anyone.
Pfft, we're buying new hardware anyways, these extremely low-risk vulnerabilities don't change anything...
 
They mention the differences between Zen and Zen 2 but only test on Zen, and they don't specify the chip; the only part they name is the Skylake-refresh 8700T.
They are also Intel-funded, which might explain the vagueness about the other chips used or merely claimed to be theoretically vulnerable.
In general, yet another poorly done "security piece" that doesn't learn from other groups' stumbles or intentional misdirections.
No CVE, no 90 days given to the architecture owners, no credibility. I don't see any proof they tested against mitigated hardware.

Fucking Intel... some shit never changes.
 
There should be no need. The CPU will know if the code is privileged to read a cache line, and once these enforcements are firmly in place, the Spectre class of bugs will go away.
The attacker thread in a side channel attack is not directly reading the cache line. It merely probes the cache to determine the usage by the victim thread. You merely need access to a shared cache to carry out such an attack, and on the flip side the chance you get useful information out of such an attack is absurdly small.
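The probing described above, where the attacker infers the victim's access pattern from shared-cache state rather than reading its data, can be illustrated with a toy model. The set count, owner tags, and secret-to-set mapping here are all invented for the sketch; in a real attack the "probe" would be a timing measurement:

```python
# Toy model of a cache-occupancy side channel. The attacker never reads
# the victim's data; it only observes which cache set the victim evicted.
# All parameters (set count, secret-to-set mapping) are made up for
# illustration.

NUM_SETS = 8

class ToyCache:
    def __init__(self):
        # Each set records the owner of the line currently cached there.
        self.sets = ["attacker"] * NUM_SETS  # attacker primes every set

    def victim_access(self, secret: int):
        # The victim's access pattern depends on its secret (a set index).
        self.sets[secret % NUM_SETS] = "victim"

    def probe(self):
        # The attacker re-touches its own lines; a set no longer holding
        # an attacker line corresponds to a slow (evicted) probe in reality.
        return [i for i, owner in enumerate(self.sets) if owner != "attacker"]

cache = ToyCache()
secret = 5
cache.victim_access(secret)
leaked = cache.probe()
assert leaked == [5]  # attacker inferred the secret-dependent set
```

Note the attacker only ever touches its own lines; the information leaks through the shared cache's eviction state.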
The usefulness of SMT is decreasing with more efficient CPU architectures
The more resources a core has inside it, the more likely a thread does not have enough instruction level parallelism to suitably utilise all the resources in the core. Digging too hard for ILP results in bloated cores since increasing the out of order window exponentially drives up complexity of the core.

Back when SMT was introduced, it made a lot of sense since the pipelines were stalled much more and implementing SMT required very little die space
SMT no longer exists to cover stalling pipelines.

Itanium had many flaws, probably the biggest one was a very complex instruction scheme
VLIW has the fundamental flaw that it does not address runtime variance of certain latencies. Memory (cache level) latency is unpredictable and changes based on uarch and what other code is being run on the machine. Statically scheduled VLIW code can never be scheduled efficiently for every use case on general purpose processors.
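The scheduling problem described above can be sketched with a toy timing model: a compiler statically schedules a dependent operation assuming a fixed load latency, and the whole bundle stalls when the actual latency (a cache miss) differs. All latency numbers here are invented for illustration:

```python
# Toy model: statically scheduled VLIW vs. variable memory latency.
# The compiler schedules the dependent op assuming loads take 3 cycles;
# if the load actually misses cache (say, 20 cycles), the lockstep
# machine must stall. Numbers are invented.

ASSUMED_LOAD_LATENCY = 3

def vliw_execution_time(actual_load_latency: int, independent_ops: int) -> int:
    # The static schedule packs `independent_ops` single-cycle ops after
    # the load, then issues the dependent op once the assumed latency and
    # the filler ops are both done.
    scheduled_slot = max(ASSUMED_LOAD_LATENCY, independent_ops)
    # If the load is slower than assumed, everything after it waits.
    stall = max(0, actual_load_latency - scheduled_slot)
    return scheduled_slot + stall + 1  # +1 for the dependent op itself

hit = vliw_execution_time(actual_load_latency=3, independent_ops=2)
miss = vliw_execution_time(actual_load_latency=20, independent_ops=2)
assert hit == 4
assert miss == 21  # the static schedule cannot hide the unexpected miss
```

An out-of-order core would instead keep issuing other independent work during the miss, which is exactly the runtime flexibility a static VLIW schedule gives up.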

I know my computer architecture quite well thank you.
 
Where can I download the papers in which "people on the internet" explain their concerns and prove that the mentioned PoC paper is wrong?
 
The attacker thread in a side channel attack is not directly reading the cache line. It merely probes the cache to determine the usage by the victim thread. You merely need access to a shared cache to carry out such an attack, and on the flip side the chance you get useful information out of such an attack is absurdly small.
Unless you are talking about the extraction of meta information here, cached data is not a problem.
Even non-speculative execution has sensitive data in L1/L2/L3 all the time, as the CPU constantly does context switches without flushing caches.
The issue with speculative execution is when sensitive data is loaded into registers, etc., or even whole instructions are executed before this is discarded, and some of this data can be extracted before it's cleaned up (or overwritten). Implementing all instructions with proper safeguards in place will eliminate this problem (and all Spectre class bugs). This will certainly create design constraints, but speculative execution as a whole is not principally flawed like many seem to think.
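The "executed before it's discarded" mechanism can be modeled in a heavily simplified way: a Spectre-v1-style transient out-of-bounds load leaves a secret-dependent microarchitectural footprint that survives the rollback. This is a toy model, not real speculative semantics; the memory layout and the footprint mechanism are invented, and a real gadget leaks through a secret-indexed array access rather than caching the value itself:

```python
# Toy model of a Spectre-v1-style transient leak: the "CPU" speculatively
# executes past a bounds check, loads a secret byte, and leaves a
# secret-dependent cache footprint before the misprediction is squashed.
# Architectural state is rolled back; the footprint is not.

array1 = [1, 2, 3, 4]           # 4 in-bounds elements
memory = array1 + [42]          # a secret byte (42) sits just past the array
cache_footprint = set()

def transient_read(index: int) -> None:
    # The branch predictor "guesses" in-bounds, so the load happens anyway.
    value = memory[index]        # speculative out-of-bounds load
    cache_footprint.add(value)   # microarchitectural side effect
    if index >= len(array1):
        # Misprediction detected: the architectural result is discarded,
        # but the cache_footprint entry survives the rollback.
        pass

transient_read(4)                # out-of-bounds index
assert 42 in cache_footprint     # recoverable by probing, as described above
```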

The more resources a core has inside it, the more likely a thread does not have enough instruction level parallelism to suitably utilise all the resources in the core. Digging too hard for ILP results in bloated cores since increasing the out of order window exponentially drives up complexity of the core.

SMT no longer exists to cover stalling pipelines.
You are forgetting that modern microarchitectures use power gating quite heavily and have multiple different execution units on a single execution port. If we are talking about computing in general (desktop usage, workstations, etc.), execution ports are usually quite well saturated when the CPU pipeline isn't stalled, so unless it's stalled, there are rarely many idle execution ports to delegate to other threads. This is why x86 SMT implementations only execute one thread at a time.
On the other hand, POWER has an "impressive" 8-way SMT which can execute two threads simultaneously. It is intended for specific web server/enterprise workloads where the performance of a single thread is less important than total throughput, and the threads are mostly stalled anyway. Such a CPU design would result in a horrible user experience as a desktop CPU.

Statically scheduled VLIW code can never be scheduled efficiently for every use case on general purpose processors.
True, at least as far as we know. A new paradigm would be required to change this.
 
Where can I download the papers in which "people on the internet" explain their concerns and prove that the mentioned PoC paper is wrong?
We can't prove it wrong per se. We're just saying this whitepaper needs work. It isn't really the best I've seen.
 
Where can I download the papers in which "people on the internet" explain their concerns and prove that the mentioned PoC paper is wrong?
You can't, because they didn't follow the standard procedure and get it verified or give the allegedly affected architectures a heads-up. Instead they recklessly released a white paper making accusations, got their master's, and moved on. This is just yet another CS department without ethical oversight.
Any one of us can try to do what they did and perhaps fail to reproduce it, but that isn't proof per se, because they didn't exactly give enough details either way.
 
We're just saying this whitepaper needs work. It isn't really the best I've seen.
True. There's a lot unanswered. However, what it discloses clearly shows the level of difficulty of executing an exploit.
 
Sound like a planned strategy to force consumers into buying new hardware. As always, never trust anyone.
If anything, these exploits prove that no matter what is added to the hardware, there's always a way around it.
 
If anything, these exploits prove that no matter what is added to the hardware, there's always a way around it.
It's why depending on the hardware for mission-critical security is just dumb. It's good for non-critical security, but if your mission depends on some barrier set up by the CPU, you are doing it wrong.
 
It's why depending on the hardware for mission-critical security is just dumb. It's good for non-critical security, but if your mission depends on some barrier set up by the CPU, you are doing it wrong.
While true, what choice do we have? Governments need to do government things, militaries need to do military things, and businesses need to do business things. Trying to build mission-specific computers for every "mission critical" task would be prohibitively expensive, overly complicated, and an overall untenable proposition.
 
While true, what choice do we have? Governments need to do government things, militaries need to do military things, and businesses need to do business things. Trying to build mission-specific computers for every "mission critical" task would be prohibitively expensive, overly complicated, and an overall untenable proposition.
Software security.

If it's mission critical, use hard software encryption. Don't trust the CPU to set up barriers; build your own with hard math.

It's a paradigm shift in many ways, but it's not impossible, and it is long overdue.
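A minimal sketch of "build your own barrier with hard math": protecting data with software encryption rather than trusting CPU isolation. This uses a SHA-256 keystream plus an HMAC tag, built only from the Python standard library; it illustrates the principle and is not a vetted cipher (a real system should use an audited AEAD construction such as AES-GCM):

```python
import hashlib
import hmac
import secrets

# Encrypt-then-MAC sketch using only stdlib primitives: a SHA-256
# counter-mode keystream for confidentiality and HMAC-SHA256 for
# integrity. Illustrative only -- use a vetted library for real systems.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
msg = b"mission critical"
assert decrypt(key, mac_key, encrypt(key, mac_key, msg)) == msg
```

The security here rests on the math (the key), not on any hardware barrier: leaking the ciphertext through a side channel reveals nothing without the key, though the key itself must still live somewhere, which is where side channels bite back.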
 
Software security.

If it's mission critical, use hard software encryption. Don't trust the CPU to set up barriers; build your own with hard math.

It's a paradigm shift in many ways, but it's not impossible, and it is long overdue.
Excellent points, and if done to rigorous and exacting requirements, such efforts would work.
 
It requires physical access to the machine, and if the bad guy has physical access nothing is secure anyway. This is why data centers use physical security also.
 