
Faulty Windows Update from CrowdStrike Hits Banks and Airlines Around the World

People running in Azure were completely unaffected...
Oh, is that why gov sites all over the globe are down? O365 included? You might want to double-check your info. I'm not getting mine from a news site. Even despite MS's redundancy and maximum-reliability policies, those went down simply because MS lost four data locations in the US. Closer to my workspace, we lost Azure DevOps.

Additionally, we aren't out of the woods yet even with the CrowdStrike update rolled back, contrary to what news outlets are saying now.
 
Wat. It's a security update for kernel-level operation, from my understanding. ANY OS can be bricked by such a thing. Modern Windows, for all its flaws, is at its core incredibly robust. Why are we acting like MS engineers (and I do mean engineers, not people who shove marketing-driven shit on top of a good core) are incompetent mole-people who fail at basic tasks?


Nothing would change then; the potential for failure will increase with wider adoption. Linux isn't some fantabulous mythical unbreakable OS which can never go wrong. It has comparatively fewer issues and fewer security concerns to patch for because it's used less. That's it.
And yes, many critical tasks already run under some form of Linux, sure. But there are things where it isn’t feasible.
This. All of it.
 
Or not. I fought tooth and nail to avoid it. And I did. It might not be possible everywhere, but at least in my lowly records-storage role it was possible. I just have to jump through a longer list of OTHER compliance proofs, but it's worth it to avoid headaches like this.
Just reboot into safe mode now and you are golden, except that's not allowed in almost any business or gov environment :)
 
Oh, is that why gov sites all over the globe are down? O365 included? You might want to double-check your info. I'm not getting mine from a news site. Even despite MS's redundancy and maximum-reliability policies, those went down simply because MS lost four data locations in the US. Closer to my workspace, we lost Azure DevOps.

Additionally, we aren't out of the woods yet even with the CrowdStrike update rolled back, contrary to what news outlets are saying now.
You would have to be running Azure WITH CrowdStrike.
 
On Linux this would be a simple script that iterates over machines and sshes into each one. I'd be surprised if PowerShell doesn't have something similar (see the sketch after this exchange).

If you have a kernel module causing a panic on boot before ssh comes up, it would not.

You'd have to boot into single user mode, too.

The real difference, of course, is that Linux and FreeBSD users generally only run kernel modules that came with the kernel, not some closed-source third-party garbage. Except for the NVIDIA drivers. Errr...

TBF if Microsoft offered user-mode APIs into kernel events, it wouldn't be necessary to install a kernel driver.

FreeBSD has DTrace, Linux has eBPF, but we can't know whether those would be sufficient for CrowdStrike. They have a Linux version; I bet they use a kernel module there, too.
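For the remote-cleanup idea above, here's a minimal PowerShell remoting sketch. It assumes the affected machines still boot far enough for WinRM remoting to come up (which, as pointed out, boot-looping boxes won't), that you have admin credentials, and that the host list file name is a placeholder:

Code:
# Sketch only: remoting requires the machine to reach a running state with WinRM.
# 'affected-hosts.txt' (one hostname per line) is a hypothetical input file.
$machines = Get-Content .\affected-hosts.txt

Invoke-Command -ComputerName $machines -ErrorAction Continue -ScriptBlock {
    # The defective Channel File 291 that CrowdStrike's remediation guidance says to remove
    $pattern = 'C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys'
    Get-ChildItem -Path $pattern -ErrorAction SilentlyContinue |
        Remove-Item -Force -Verbose
    # Reboot so the sensor comes back up without the bad channel file
    Restart-Computer -Force
}

Machines stuck on the boot-loop BSOD never get that far, which is why the safe-mode / WinPE route keeps coming up.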
 
Would something like Dell iDRAC work to fix servers that have this? So you won't have to go to them physically?
 
I could see iDRAC fixing this by one-time-booting a WinPE image scripted to automatically delete \Windows\System32\drivers\CrowdStrike\C-00000291*.sys from the first three drive letters (mounting is a little weird) and then rebooting the machine normally.
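A rough sketch of what that WinPE cleanup step could look like, in PowerShell. The drive letters are guesses (as noted, mounting inside WinPE is a little weird and the OS volume may not be C:), and wpeutil is only available inside the WinPE environment:

Code:
# Try the first few drive letters, since WinPE's letters rarely match the installed OS.
foreach ($drive in 'C', 'D', 'E') {
    $pattern = "$($drive):\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"
    if (Test-Path $pattern) {
        Remove-Item -Path $pattern -Force -Verbose
    }
}
# Then boot back into the normal OS (wpeutil exists only inside WinPE).
wpeutil reboot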
 
I could see iDRAC fixing this by one-time-booting a WinPE image scripted to automatically delete \Windows\System32\drivers\CrowdStrike\C-00000291*.sys from the first three drive letters (mounting is a little weird) and then rebooting the machine normally.
Awesome, that's cool. I'll tell people that then; the only PowerEdge stuff I've messed with recently is storage stuff.
 
So ... I just went to the hypermarket ... and it was affected by this CrowdStrike problem ...

The thing I found weird is that only the SELF-SERVICE payment area was affected: the non-self-service area WAS NOT affected.
I went shopping before and half the checkouts were down with the Windows logo on screen, first time I've seen that. o_O

Las Vegas, late last night / early this morning:

[photo attachment]
I've seen similar on overhead digital signage on highways.
 
The banks and governments are trying to shove a cashless society on us; please, everyone, do not let them!
Again, conspiracy theories don't belong here. Can you still get cash? Yes? Then we don't need to theorize about anything else.
 
No problems where I am, everything smooth as, at least as of now.
 
Stay away from the off-topic conspiracy theories. This was a major problem caused by a major oversight (and possibly a lax attitude to update roll-outs). Hopefully it's a wake-up call for organisations not to cut corners and to build in redundancies.
 
I went shopping before and half the checkouts were down with the Windows logo on screen, first time I've seen that.

Couldn't see the checkouts' screens (there are only 4).

A security guard was placing two shopping carts "fixed together with something" blocking access to these checkouts, which is why I asked if it was related to this global issue, to which he said yes.
 
The title is wrong; I wouldn't put much stake in it. This is, and only is, a CrowdStrike issue; they even admitted it.

If you really want to blame someone, try your management, who underfunded the IT dept so much that it didn't have the budget to roll this out to testing before it hit mass deployment.

For the rest, please keep wack conspiracy theories away from the thread.

wack conspiracies?
what?
 
Am I the only one here who had never heard of CrowdStrike until yesterday?
 
Yes, this affected our Global Business yesterday.
Had great fun helping end-users try to get their machines back online, and then explaining why they still couldn't access any company services.
It mostly came back online quite quickly, but our AD was still having problems yesterday evening, causing problems for user authentication, which is used across most of our sites and services... so the sites and services were up, but people couldn't log into them.
Our BitLocker key server wasn't available for most of yesterday morning, but thankfully came back up pretty quickly.

We are expecting a few things to still be down on Monday, as there aren't very many people available to go to the still-down critical machines manually.

Just want to put in my 2c that this shouldn't have happened.
Any deployment should be tested before release, and even when released, it should go first to one or two "test" customers who get extra support and lower prices in return for helping with testing and carrying the potential risks.

Hats off to all those sysadmins that have to spend their whole weekend, and more, getting these systems back up manually.
 
CrowdStrike updates configuration files for the endpoint sensors that are part of its Falcon platform several times a day. It calls those updates “Channel Files.”

The defect was in one it calls Channel 291, the company said in Saturday’s technical blog post. The file is stored in a directory named “C:\Windows\System32\drivers\CrowdStrike\” and has a filename beginning “C-00000291-” and ending “.sys”. Despite the file’s location and name, the file is not a Windows kernel driver, CrowdStrike insisted.
Channel File 291 is used to pass the Falcon sensor information about how to evaluate “named pipe” execution. Windows systems use these pipes for inter-system or inter-process communication; the pipes are not in themselves a threat, although they can be misused.
“The update that occurred at 04:09 UTC was designed to target newly observed, malicious named pipes being used by common C2 [command and control] frameworks in cyberattacks,” the technical blog post explained.

However, it said, “The configuration update triggered a logic error that resulted in an operating system crash.”
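For anyone wanting to sanity-check a specific box against the details quoted above, here's a minimal PowerShell sketch that just lists any Channel File 291 present and its UTC timestamp (indicative only; run it as admin on the machine in question, and defer to CrowdStrike's own guidance on which versions are the bad ones):

Code:
# List Channel File 291 variants and their UTC timestamps, for comparison with
# the 04:09 UTC update that CrowdStrike identified as defective.
$dir = 'C:\Windows\System32\drivers\CrowdStrike'
Get-ChildItem -Path (Join-Path $dir 'C-00000291*.sys') -ErrorAction SilentlyContinue |
    Select-Object Name, @{ Name = 'LastWriteUtc'; Expression = { $_.LastWriteTimeUtc } }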
 
wack conspiracies?
what?

That last line wasn't for you; it was a general warning to the thread, given the posts that were deleted that y'all can't see. I figured "For the rest" was clear enough, but I guess not.
 
Stay away from the off-topic conspiracy theories. This was a major problem caused by a major oversight (and possibly a lax attitude to update roll-outs). Hopefully it's a wake-up call for organisations not to cut corners and to build in redundancies.
All tech fails in some fashion eventually because we can't predict every scenario. That's not a conspiracy. It's a wake-up call to make sure we don't rely too heavily on automated systems.
 
All tech fails in some fashion eventually because we can't predict every scenario. That's not a conspiracy. It's a wake-up call to make sure we don't rely too heavily on automated systems.
Those posts were removed; you can't see them.
 