
Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

Imagine blaming Microsoft/Windows Update for a 3rd-party security software bricking Windows.

So you feel MS is in no way to blame? Aren't they the ones who have a contract with this firm? Isn't it up to MS to check and verify this crap before letting it through?

This seems like the mentality that lets CEOs who make a complete hash of the very thing they are paid WAY TOO MUCH for leave with a "golden handshake".
 
Let AI solve it....:D


At least then it would turn out to be useful....
 
And yes, Microsoft certainly deserves blame for how easily their systems break, and for how tedious it is to roll back.
Wat. It’s a security update for kernel level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?

Is it just me or do others think critical IT and society infrastructure services need to switch from Windows to Linux?
Nothing would change then; the potential for failure would only increase with wider adoption. Linux isn’t some fantabulous mythical unbreakable OS which can never go wrong. It has comparatively fewer issues and fewer security concerns to patch for because it’s used less. That’s it.
And yes, many critical tasks already run under some form of Linux, sure. But there are things where it isn’t feasible.

So you feel MS is in no way to blame? Aren't they the ones who have a contract with this firm? Isn't it up to MS to check and verify this crap before letting it through?
MS isn’t the one who contracts this firm, no. Where did you even infer that?
 
Wat. It’s a security update for kernel level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?

MS isn’t the one who contracts this firm, no. Where did you even infer that?
I don't like quoting myself, but:

You're expecting the anti-Microsoft crowd to be capable of basic reading comprehension...
 
This is a major clusterfuck and the focus will be on CrowdStrike's QA and update release procedure...

Prayers for the admins dealing with this, and especially those that have to manually access BitLocker-encrypted machines one by one. If they have the keys.
 
Their first mistake was rolling the update to production on a Friday.
 
@Chomiq
This is a good point, actually. Good practice is to not roll shit out before weekends or, god forbid, long holidays. But maybe there was some rapid response fix or vulnerability protection they felt needed to be applied ASAP. Who even knows, at this point.
 
Having client PCs go offline may not be surprising, but seeing banks, traders, airlines, media companies etc. have their central services knocked offline by an update, that's just ridiculous. Come on guys, it's not 1995 any more; this level of incompetence isn't excusable. If you're making billions you can afford properly trained staff and a properly managed tech "stack" with whatever appropriate failovers, backups, recovery images/procedures, etc. are needed to ensure reliability and security.
Assuming this affected client PCs primarily, or exclusively: companies don't just have "failovers" for those. Or any other *quick* recovery procedure if many of them fail all at once.
 
Here is an F1 team also affected by this MS nonsense.

[screenshot of the team's post, from 13 minutes earlier]
 
Their first mistake was rolling the update to production on a Friday.
Or maybe they found out that companies spend three days recovering from an average Microsoft (and SAP, Adobe and Oracle) Patch Tuesday.
 
First off, I'll say I don't know whether this would fall under MS's automatic updating scheme or not, which I do not like, period.
I have known it to wreck things before (personally saw this happen at work one morning after an overnight forced update, Win 10 no less) and lead to downtime and all the rest you'd expect.

Regardless of that, it's a major screwup and the fallout will certainly cause some heads to roll wherever.

I also feel for the IT guys having to address this, because you know some are clocking in and only just learning about it, and that would include the boss..... Depending on the boss and the sheer number of machines affected wherever they are, it may be a really bad & long day for those guys.
 
Pour one out for sysadmins, who have just learned that the fix is to log into each affected PC one at a time and delete the single bad file from each one.
It's going to be a loooooooooooooooooooooooooooooooooooooooong day for those in bigger organizations!
On Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar.

On another note, this is why I insist that most software I install edits my boot loader. Or at least installs some kernel-level shenanigans (looking at you, anti-cheats). /s
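
For the curious, a rough PowerShell equivalent of that ssh loop might look something like the sketch below. Purely hypothetical: it assumes PowerShell remoting (WinRM) is enabled and that the affected boxes stay online long enough to take the command, which the boot-looping ones won't, so those still need hands-on recovery. The file pattern is the one named in CrowdStrike's published workaround.

# Hypothetical sketch only -- assumes WinRM remoting is enabled and reachable.
# affected-hosts.txt is a made-up inventory file; build the list from AD/RMM/etc.
$computers = Get-Content .\affected-hosts.txt

Invoke-Command -ComputerName $computers -ErrorAction Continue -ScriptBlock {
    # Delete the bad channel file(s) from the CrowdStrike driver folder,
    # per the publicly documented workaround.
    Remove-Item 'C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys' -Force -Verbose
}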
 
MS update and chaos ensues. If you've never had your PC borked by an MS update then consider yourself blessed. MS is notorious for rolling out updates with QA that is pitiful.

 
This was 100% caused by CrowdStrike and not Microsoft.

The fix can only be done manually from recovery mode. This will take days to weeks to repair at scale.
 
On Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar.

On another note, this is why I insist that most software I install edits my boot loader. Or at least installs some kernel-level shenanigans (looking at you, anti-cheats). /s
Sadly, many organizations use thousands of BitLocker-enabled PCs, which require individual visits to repair.
 
This was 100% caused by CrowdStrike and not Microsoft.

The fix can only be done manually from recovery mode. This will take days to weeks to repair at scale.

There are automated ways to fix it in some environments. The problem is drive encryption... I seriously wonder if the question will be asked: why do you need BitLocker or equivalent on PCs that don't have any sensitive data on them?

It's the people whose keys are also on crashed servers that are most FUBAR. Even if they have them somewhere, they have to do it all manually. If there are no keys, I guess it's time to restore from backups.
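
If the recovery keys were at least escrowed to Active Directory (and the domain controllers survived), something along these lines can dump them for the techs doing the rounds. A sketch under those assumptions only; it needs the RSAT ActiveDirectory module, and the computer name is a placeholder.

# Hypothetical sketch -- assumes BitLocker keys are escrowed to AD and the
# RSAT ActiveDirectory PowerShell module is installed.
Import-Module ActiveDirectory

$computer = Get-ADComputer 'PC-NAME-HERE'   # placeholder machine name
Get-ADObject -SearchBase $computer.DistinguishedName `
    -Filter 'objectClass -eq "msFVE-RecoveryInformation"' `
    -Properties msFVE-RecoveryPassword |
    Select-Object Name, msFVE-RecoveryPassword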
 
Sadly, many organizations use thousands of BitLocker-enabled PCs, which require individual visits to repair.
Exactly the boat I'm in... I'm the infosec manager so I'm just the one documenting the wreckage.
 
From a buddy of mine working in MS:
"There was an outage confined in central US datacenters but it was resolved hours before crowdstrike shat its pants"
 
There are automated ways to fix it in some environments. The problem is drive encryption... I seriously wonder if the question will be asked: why do you need BitLocker or equivalent on PCs that don't have any sensitive data on them?

It's the people whose keys are also on crashed servers that are most FUBAR. Even if they have them somewhere, they have to do it all manually. If there are no keys, I guess it's time to restore from backups.
The problem with that is having a way to classify PCs with and without sensitive info and dynamically enrolling them in BitLocker. Most of our PCs have sensitive info, us being an electronics company. There are very few without, such as receptionists, janitors/maintenance, etc. The effort isn't worth the reward in that case.

Besides, even without BitLocker the effort is equivalent, since we use LAPS.
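
To illustrate the LAPS point: every machine already has its own unique local admin password, so someone is looking things up per box either way. A sketch, assuming the newer built-in Windows LAPS module; older AdmPwd.PS deployments would use Get-AdmPwdPassword instead, and the computer name is a placeholder.

# Hypothetical sketch -- assumes the built-in Windows LAPS PowerShell module.
Get-LapsADPassword -Identity 'PC-NAME-HERE' -AsPlainText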
 