Falcon Sensor is one of the most popular security products on Windows servers. Practically every large company purchases CrowdStrike services to protect their servers.
People who aren’t affected:
Linux and Mac servers
Private individuals and smaller businesses with Windows machines who don’t buy CrowdStrike services.
Companies that bothered to create proper test environments for their production servers.
People who are affected:
Companies that use Windows machines, buy Falcon Sensor from CrowdStrike, and are too stupid/cheap to have proper update policies.
Does anyone know how these CrowdStrike updates are actually deployed? Presumably the software has its own update mechanism to react to emergent threats without waiting for Patch Tuesday. Can users control the update policy for these ‘channel files’ themselves?
These channel files are configuration for the driver and are pushed several times a day. It seems the driver can hit a page fault on invalid memory if certain conditions are met, and in kernel mode that’s a bugcheck. A mistake in a config file triggered this condition and put a lot of machines into a BSOD boot loop.
I think it makes sense that this was a preexisting bug in the driver which was triggered by an erroneous config. What I still don’t know is whether these channel updates get a staged deployment (presumably driver updates do), and what fraction of machines that got the bad update actually had a BSOD.
Thank you very much
This doesn’t really answer my question, but CrowdStrike do explain a bit here: https://www.crowdstrike.com/blog/technical-details-on-todays-outage/
Anyway, they should rewrite it in Rust.
Damn, this morning I wished so hard my company was in the affected group. Alas, we all still had to work.
Nah, let’s direct ship anything any vendor sends us.
“We need to allocate our available budget to profit-generating processes. This just seems like a luxury we can’t afford.”
-thousands of overpaid dipshits, yesterday.