Additional analysis of CrowdStrike failure

Translation to English: they don't do Puppyfood (dogfood) testing after all, but will be going forward...

I must be reading that one wrong, perhaps more Chai tea first?
I think it is an obvious given that the cause is complicated at the programming level, and yet, from all the detailed posting on this event, it was and is a lack of due diligence that allowed this to happen. The most telling summary statement from the Ars Technica article is, "In addition to this preliminary incident report, CrowdStrike says it will release "the full Root Cause Analysis" once it has finished investigating the issue."

It does remind me of past serious meltdowns where big tech ignored the many warning bells going off loudly: the meltdown of the worldwide economy in 2008 (housing bubble), the ignoring of the intelligence folks before 9/11, etc. Be careful of those politicians who want to cut safeguards and regulations by promising you a quick buck.
The EU did just that. Microsoft had a fix some time ago that would have prevented this, but the EU wouldn't let them use it, claiming it was monopolistic because it gave Microsoft too much control over how software could be installed. The ironic thing is that Apple has the same fix, yet it was allowed.

--
Tom
 
The other interesting thing I found was when I poked around in Sysinternals Autoruns.

[Screenshot: Sysinternals Autoruns entries for Windows Defender definition updates]

Based on this, it seems like Windows Defender might be structured in such a way that each definition update is itself a driver. This is in contrast to CrowdStrike Falcon, where their “driver” running in ring 0 reads in its update files.

I don't intend that to be a direct comparison between the two since, as we've all come to know, CrowdStrike's product isn't really the same as a “definition-based” antimalware product. I just found this discovery about Defender interesting and possibly something to look for with third-party “traditional” antimalware products.
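For anyone curious to check their own machine without pulling up Autoruns, here's a rough sketch. It leans on my assumption about what those Autoruns entries represent, namely a service whose ImagePath points into the Defender "Definition Updates" folder; it's not an official description of how Defender registers its updates, and the folder name is an assumption too.

```python
# Rough check, assuming Windows and the stock Defender install layout: list
# service entries whose ImagePath points into the "Definition Updates" folder,
# i.e. definition updates that appear to be registered as their own drivers.
import winreg

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services:
    subkey_count = winreg.QueryInfoKey(services)[0]  # number of service subkeys
    for i in range(subkey_count):
        name = winreg.EnumKey(services, i)
        try:
            with winreg.OpenKey(services, name) as svc:
                image_path, _ = winreg.QueryValueEx(svc, "ImagePath")
        except OSError:
            continue  # service has no ImagePath value
        if "definition updates" in str(image_path).lower():
            print(f"{name}: {image_path}")
```

On my reading, if this prints anything, the definition update on that machine is registered as its own driver/service entry rather than just being data the main engine reads in.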
I got curious and had a look at a folder named like the one referenced in your registry, to see what's in it and whether it had a .sys file:

[Screenshot: contents of the Definition Updates folder, no .sys file present]

Apparently not.

What this all means, if anything, I'm afraid I have no idea. Anyone?
Well, well, well. Interesting. The above folder screenshot was from the Meteor Lake laptop I was using yesterday evening. No .sys file.

This is the folder from my Raptor Lake desktop:

[Screenshot: the same folder on the Raptor Lake desktop, .sys file present]

And there is the .sys file, like Billiam29's example.

I wonder what's different between the two machines that caused this. Same build, 22631.3958. I'll look at the laptop's registry.

Edit: Two laptops lack that registry key and the .sys file. The desktop has them both. Speculation: the laptops have local accounts, the desktop a MS account. Maybe I'll convert the Insider laptop to a MS account and see. Anyway, my apologies to any patient readers for going OT on this thread.
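If it helps anyone else compare machines, here's a quick helper along the lines of what I was doing by hand. Both the folder path and the service key name are assumptions: <ServiceNameFromAutoruns> is just a placeholder for whatever entry Autoruns showed on your box, and you may need an elevated prompt to see inside the Defender folders.

```python
# Quick machine comparison: does this box have (a) any .sys file under the
# Defender Definition Updates folder and (b) the matching registry service entry?
# Paths are assumptions; replace <ServiceNameFromAutoruns> with the real name.
import glob
import winreg

DEFINITIONS_DIR = r"C:\ProgramData\Microsoft\Windows Defender\Definition Updates"
SERVICE_KEY = r"SYSTEM\CurrentControlSet\Services\<ServiceNameFromAutoruns>"

sys_files = glob.glob(DEFINITIONS_DIR + r"\**\*.sys", recursive=True)
print(".sys files in Definition Updates:", sys_files if sys_files else "none")

try:
    winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICE_KEY))
    print("registry service entry: present")
except OSError:
    print("registry service entry: not found")
```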
 
It was a logic flaw that was triggered by encountering a data file that contained only zeros.
CrowdStrike repeatedly disputes that the file was all zeros.

Given that we have competing realities, I'm going to wait for an investigation to decide which is correct.
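Whichever account turns out to be correct, the class of bug being described is the same: code reading fields out of a content file and using them without validating them first. Purely as a toy illustration, and nothing to do with CrowdStrike's actual code, here's the shape of how an all-zero or otherwise malformed input turns trusted data into an out-of-bounds lookup:

```python
# Toy illustration only, not CrowdStrike code: a parser that trusts an index read
# from a content file. A well-formed record works; an all-zero "file" selects a
# handler that doesn't exist and blows up.
import struct

HANDLERS = {1: "scan_process", 2: "scan_file", 3: "scan_registry"}  # hypothetical table

def dispatch(record: bytes) -> str:
    handler_id, payload_len = struct.unpack_from("<II", record, 0)
    payload = record[8:8 + payload_len]
    # BUG: handler_id is never validated before the lookup.
    return f"{HANDLERS[handler_id]} on {len(payload)} payload bytes"

print(dispatch(b"\x01\x00\x00\x00\x04\x00\x00\x00ABCD"))  # well-formed record
print(dispatch(bytes(64)))  # all zeros -> KeyError, i.e. a crash
```

In user mode that's just an unhandled exception; in a kernel-mode driver the equivalent invalid access takes the whole machine down.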
 
It was a logic flaw that was triggered by encountering a data file that contained only zeros.
CrowdStrike repeatedly disputes that the file was all zeros.

Given that we have competing realities, I'm going to wait for an investigation to decide which is correct.
I don't really care which one is true; it's still the same failing on their part.
 
It was a logic flaw that was triggered by encountering a data file that contained only zeros.
CrowdStrike repeatedly disputes that the file was all zeros.

Given that we have competing realities, I'm going to wait for an investigation to decide which is correct.
I was under the impression it was not a file but one line of code that was all zeros.
 
It is hard to see how anybody can claim that an update that crashed millions of computers, probably every one it was installed on, could be described as even reasonably tested. Change management procedures should ensure that all changes have been successfully tested and that what is shipped out is what was tested. Pushing this update should never have happened.
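Just to make the "ship what you tested" point concrete, and as an illustration rather than any claim about how CrowdStrike's pipeline actually works, even a trivial gate like this would enforce that the artifact going out the door is byte-identical to the one that passed testing:

```python
# Illustrative release gate, not CrowdStrike's actual process: refuse to ship an
# artifact whose hash differs from the artifact that passed testing.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if len(sys.argv) != 3:
    sys.exit("usage: release_gate.py <tested_artifact> <release_artifact>")

tested_path, release_path = sys.argv[1], sys.argv[2]
if sha256_of(tested_path) != sha256_of(release_path):
    sys.exit("ABORT: release artifact does not match what was tested")
print("OK: release artifact matches the tested artifact")
```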

The systems that crashed are the responsibility of their owners. They subcontracted the updating of their systems to a third party, apparently without ensuring that there was adequate testing and change control.

Both CrowdStrike and their clients should be held to account for the chaos and cost of the problems that have been caused. There need to be severe financial penalties for both CrowdStrike and its clients to prevent this from happening again.
How do you think CrowdStrike's clients should be held responsible? That's like saying you should be held responsible when your car is one of 100,000 recalled for a manufacturer's defect.

Microsoft is not responsible, CS's clients are not responsible...CROWDSTRIKE is responsible.
 
