this post was submitted on 21 Jul 2024
28 points (80.4% liked)

This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update demanding a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises and neglecting the due diligence necessary to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture where CEOs, vice presidents, and managers are often more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

top 50 comments
[–] kent_eh@lemmy.ca 28 points 2 years ago (2 children)

Bloated IT budgets?

Where do you work, and are they hiring?

[–] irotsoma@lemmy.world 10 points 2 years ago

The bloat isn't for workers; otherwise there'd be enough people to go reboot the machines and fix the issue manually in a reasonable amount of time. It's only for executives, managers, and contracts with kickbacks. In fact, they usually buy software because it promises to cut the need for people, and it becomes an excuse for laying off staff or eliminating new-hire positions.

[–] GiveMemes@jlai.lu 7 points 2 years ago

As the post was stating, they get bloated by relying on vendors rather than in-house IT/Security.

My grandfather works IT for my state government tho and it's a pretty good gig according to him

[–] breakingcups@lemmy.world 18 points 2 years ago (12 children)

Please, enlighten me how you'd remotely service a few thousand Bitlocker-locked machines, that won't boot far enough to get an internet connection, with non-tech-savvy users behind them. Pray tell what common "basic hygiene" practices would've helped, especially with Crowdstrike reportedly ignoring and bypassing the rollout policies set by their customers.

Not saying the rest of your post is wrong, but this stood out as easily glossed over.

[–] lazynooblet@lazysoci.al 4 points 2 years ago

Autopilot, Intune. Force-restart the device twice to get Startup Repair, choose factory reset, share the LAPS admin password, and let the workstation rebuild itself.

[–] LrdThndr@lemmy.world 3 points 2 years ago* (last edited 2 years ago) (6 children)

A decade ago I worked for a regional chain of gyms with locations in 4 states.

I was in TN. When a system would go down in SC or NC, we originally had three options:

  1. (The most common) have them put it in a box and ship it to me.
  2. I go there and fix it (rare)
  3. I walk them through fixing it over the phone (fuck my life)

I got sick of this. So I researched options and found an open source software solution called FOG. I ran a server in our office and had little OptiPlex 160s running a software client that I shipped to each club. Then each machine at each club was configured to PXE boot from the FOG client.

The server contained images of every machine we commonly used. I could tell FOG which locations used which models, and it would keep the images cached on the client machines.

If everything was okay, it would chain the boot to the OS on the machine. But I could flag a machine for reimage, and at next boot the machine would check in with the local FOG client via PXE and get a complete reimage from premade images on the FOG server.

The corporate office was physically connected to one of the clubs, so I trialed the software at our adjacent club, and when it worked great, I rolled it out company wide. It was a massive success.

So yes, I could completely reimage a computer from hundreds of miles away by clicking a few checkboxes on my computer. Since it ran in PXE, the condition of the OS didn’t matter at all. It never loaded the OS when it was flagged for reimage. It would even join the computer to the domain and set up that location’s printers and everything. All I had to tell the low-tech gymbro sales guy on the phone to do was reboot it.

This was free software. It saved us thousands in shipping fees alone. And brought our time to fix down from days to minutes.

There ARE options out there.
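To make that concrete, here's a hypothetical sketch of what "flag a machine for reimage" could look like scripted against an imaging server's HTTP API rather than clicked through a console. The endpoint path, token headers, and task payload are invented placeholders for illustration, not FOG's documented API.

```python
# Hypothetical sketch of flagging hosts for reimage through an imaging
# server's HTTP API. The endpoint path, headers, and payload are invented
# placeholders for illustration, not FOG's documented API.
import requests

IMAGING_SERVER = "http://imaging.example.internal"  # placeholder address
HEADERS = {"api-token": "<server-token>", "user-token": "<user-token>"}  # placeholders

def flag_for_reimage(host_id: int) -> None:
    """Queue a deploy task so the host reimages itself on its next PXE boot."""
    resp = requests.post(
        f"{IMAGING_SERVER}/api/host/{host_id}/task",
        headers=HEADERS,
        json={"taskType": "deploy"},  # placeholder task payload
        timeout=30,
    )
    resp.raise_for_status()

# After this, "reboot it" is the only instruction the person on site needs.
for host_id in [101, 102, 103]:  # example host IDs
    flag_for_reimage(host_id)
```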

[–] magikmw@lemm.ee 4 points 2 years ago* (last edited 2 years ago) (5 children)

This works great for stationary PCs and local servers, but it does nothing for public-internet-connected laptops in the hands of users.

The only fix here is staggered and tested updates, and apparently this update bypassed even the deferred-update settings that CrowdStrike themselves put into their software.

The only winning move here was to not use CrowdStrike.
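To illustrate what "staggered" updates mean in practice, here's a minimal sketch of ring-based rollout planning. The ring names, fractions, and bake times are made-up examples, not anyone's actual policy.

```python
# Minimal sketch of ring-based (staggered) rollout planning. Ring fractions
# and bake times are made-up examples, not a recommendation.
import random

RINGS = [
    {"name": "canary",   "fraction": 0.01, "bake_hours": 24},
    {"name": "early",    "fraction": 0.10, "bake_hours": 24},
    {"name": "broad",    "fraction": 0.50, "bake_hours": 48},
    {"name": "everyone", "fraction": 1.00, "bake_hours": 0},
]

def plan_rollout(hosts: list[str], seed: int = 42) -> list[tuple[str, list[str]]]:
    """Shuffle hosts once, then carve them into cumulative rings."""
    rng = random.Random(seed)
    shuffled = list(hosts)
    rng.shuffle(shuffled)
    plan, start = [], 0
    for ring in RINGS:
        end = int(len(shuffled) * ring["fraction"])
        plan.append((ring["name"], shuffled[start:end]))
        start = end
    return plan

# Each ring only receives the update after the previous ring has run clean
# for its bake period; that gating is exactly what the channel-file push skipped.
for name, members in plan_rollout([f"host{i:04d}" for i in range(1000)]):
    print(f"{name}: {len(members)} hosts")
```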

[–] LrdThndr@lemmy.world 1 points 2 years ago (1 children)

Absolutely. 100%

But don’t let perfect be the enemy of good. A fix that gets you 40% of the way there is still 40% less work you have to do by hand. Not everything has to be a fix for all situations. There’s no such thing as a panacea.

[–] magikmw@lemm.ee 1 points 2 years ago (1 children)

Sure. At the same time one needs to manage resources.

I was all in on laptop deployment automation. It cut down on a lot of human-error issues and on inconsistent configurations popping up all the time.

But it needs constant supervision, even if not constant updates. More systems and solutions lead to neglect if they aren't properly resourced. So some "would be good to have" systems just never make the cut, because, as overachieving as I am, I also don't want to assume everything is taken care of when it clearly isn't.

[–] wizardbeard@lemmy.dbzer0.com 0 points 2 years ago (1 children)

It also assumes that reimaging is always an option.

Yes, every company should have networked storage enforced specifically for issues like this, so no user data would be lost, but there's often a gap between "should" and "has been able to find the time and get the required business-side buy-in to make it happen".

Also, users constantly find new ways to do non-standard, non-supported things with business critical data.

[–] Bluetreefrog@lemmy.world 2 points 2 years ago

Isn't this just more of what caused the problem in the first place? Namely, centralisation. If you store data locally and you lose a machine, that's bad but not the end of the world. If you store it centrally and you lose the data, that's catastrophic. Nassim Taleb nailed this stuff. Keep the downside limited, and the upside unlimited or as he says, "Don't pick up pennies in front of a steamroller."

[–] Brkdncr@lemmy.world 0 points 2 years ago (1 children)

How removed from IT are you that you think FOG would have helped here?

[–] LrdThndr@lemmy.world 0 points 2 years ago* (last edited 2 years ago) (1 children)

How would it not have? You got an office or field offices?

“Bring your computer by and plug it in over there.” And flag it for reimage. Yeah. It’s gonna be slow, since you have 200 of the damn things running at once, but you really want to go and manually touch every computer in your org?

The damn thing’s even boot looping, so you don’t even have to reboot it.

I’m sure the user saved all their data in OneDrive like they were supposed to, right?

I get it, it’s not a 100% fix rate. And it’s a bit of a callous answer to their data. And I don’t even know if the project is still being maintained.

But the post I replied to was lamenting the lack of an option to remotely fix unbootable machines. This was an option to remotely fix nonbootable machines. No need to be a jerk about it.

But to actually answer your question and be transparent, I’ve been doing Linux devops for 10 years now. I haven’t touched a windows server since the days of the gymbros. I DID say it’s been a decade.

[–] Brkdncr@lemmy.world 0 points 2 years ago (1 children)

Because your imaging environment would also be down. And you’re still touching each machine and bringing users into the office.

Or your imaging process over the WAN takes 3 hours since it’s dynamically installing apps and updates and not a static “gold” image. Imaging is then even slower because your source disk is just an SSD, and imaging slows down once you get 10+ sessions going at once.

I’m being rude because I see a lot of armchair sysadmins that don’t seem to understand the scale of the CrowdStrike outage, what CrowdStrike even is beyond antivirus, and the workflow needed to recover from it.

[–] ramble81@lemm.ee 3 points 2 years ago (4 children)

You’d have to have something even lower level, like an OOB KVM on every workstation, which would be stupid expensive for the ROI, or something at the UEFI layer that could potentially introduce more security holes.

[–] circuscritic@lemmy.ca 2 points 2 years ago* (last edited 2 years ago)

.....you don't have OOBM on every single networked device and terminal? Have you never heard of the buddy system?

You should probably start writing up an RFP. I'd suggest you also consider doubling up on the company issued phones per user.

If they already have an ATT phone, get them a Verizon one as well, or vice versa.

At my company we're already way past that. We're actually starting to import workers to provide human OOBM.

You don't answer my call? I'll just text the migrant worker we chained to your leg to flick your ear until you pick up.

Maybe that sounds extreme, but guess whose company wasn't impacted by the CrowdStrike outage.

[–] Leeks@lemmy.world 1 points 2 years ago

Maybe they should offer a real-time patcher for the security vulnerabilities in the OOB KVM. I know a great vulnerability database offered by a company that does this for a lot of systems worldwide! /s

[–] mynamesnotrick@lemmy.zip 3 points 2 years ago* (last edited 2 years ago) (4 children)

Was a Windows sysadmin for a decade. We had thousands of machines with endpoint management and BitLocker encryption. (I have since moved on to cloud/Kubernetes devops.) Nothing on a remote endpoint has any basic "hygiene" solution that could remotely fix this mess automatically. I guess Intel's remote BIOS connection (forget the name) could in theory let some poor tech remote in, given there's an internet connection and the company paid the exorbitant price.

All that to say, anything with end-user machines that won't boot is a nightmare. And with BitLocker it's even more complicated. (Hope your BitLocker key synced... Lol)

[–] Spuddlesv2@lemmy.ca 3 points 2 years ago

You’re thinking of Intel vPro. I imagine some of the CrowdStrike ~~victims~~ customers have this, and a bunch of poor level 1 techs are slowly grinding their way through every workstation on their networks. But yeah, OP is deluded and/or very inexperienced if they think this could have been mitigated on workstations through some magical “hygiene”.

[–] riskable@programming.dev 1 points 2 years ago* (last edited 2 years ago)

what common "basic hygiene" practices would've helped

Not using a proprietary, unvetted, auto-updating, 3rd party kernel module in essential systems would be a good start.

Back in the day, companies used to insist upon access to the source code for such things, along with regular 3rd-party code audits, but these days companies are cheap and lazy and don't care as much. They'd rather just invest in "security incident insurance" and hope for the best 🤷

Sometimes they don't even go that far and instead just insist upon useless indemnification clauses in software licenses. ...and yes, they're useless:

https://www.nolo.com/legal-encyclopedia/indemnification-provisions-contracts.html#:~:text=Courts%20have%20commonly%20held%20that,knowledge%20of%20the%20relevant%20circumstances).

(Important part indicating why they're useless should be highlighted)


This doesn't seem to be a problem with disaster recovery plans. It is perfectly reasonable for disaster recovery to take several hours, or even days. As far as DR goes, this was easy. It did not generally require rebuilding systems from backups.

In a sane world, no single party would even have the technical capability of causing a global disaster like this. But executives have been tripping over themselves for the past decade to outsource all their shit to centralized third parties so they can lay off expensive IT staff. They have no control over their infrastructure, their data, or, by extension, their business.

[–] technocrit@lemmy.dbzer0.com 3 points 2 years ago* (last edited 2 years ago)

An underlying problem is that legal security is mostly security theatre. Legal security provides legal cover for entities without much actual security.

The point of legal security is not to protect privacy, users, etc., but to protect the liability of legal entities when the inevitable happens.

neglecting the due diligence necessary to ensure those solutions truly fit their needs.

CrowdStrike perfectly met their needs by providing someone else to blame. I don't think anybody is facing any consequences for contracting with CrowdStrike. It's the same deal with Microsoft x 10000000. These bad incentives are the whole point of the system.

[–] Leeks@lemmy.world 3 points 2 years ago (3 children)

bloated IT budgets

Can you point me to one of these companies?

In general, IT is run as a “cost center”, which means they have to scratch and save everywhere they can. Every IT department I have seen is understaffed and spread too thin. Also, since it is viewed as a cost, getting all teams to sit down and make DR plans (since these involve the entire company, not just IT) is near impossible, because “we may spend a lot of time and money on a plan we never need”.

[–] r00ty@kbin.life 3 points 2 years ago (1 children)

I think it's most likely a little of both. The fact that most systems failed at around the same time suggests this was the default automatic upgrade/deployment option.

So, for sure the default option should have had upgrades staggered within an organisation. But at the same time organisations should have been ensuring they aren't upgrading everything at once.

As it is, the way the upgrade was deployed made the software a single point of failure that completely negated redundancies and in many cases hobbled disaster recovery plans.

[–] DesertCreosote@lemm.ee 9 points 2 years ago (2 children)

Speaking as someone who manages CrowdStrike in my company, we do stagger updates and turn off all the automatic things we can.

This channel file update wasn’t something we can turn off or control. It’s handled by CrowdStrike themselves, and we confirmed that in discussions with our TAM and account manager at CrowdStrike while we were working on remediation.

[–] daddy32@lemmy.world 2 points 2 years ago

There was a "hack" mentioned in another thread - you can block it via firewall and then selectively open it.
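For anyone curious, the shape of that workaround on a Windows host would be something like the sketch below: block the sensor's update traffic at the host firewall, then lift the block for a pilot group before everyone else. The rule name and IP ranges are placeholders; you'd have to identify the real update endpoints yourself.

```python
# Sketch of the firewall workaround: block the sensor's update traffic on a
# Windows host, then selectively re-enable it for a pilot group first.
# The IP ranges are documentation placeholders (RFC 5737), not real endpoints.
import subprocess

RULE_NAME = "Block-sensor-content-updates"
UPDATE_RANGES = ["203.0.113.0/24", "198.51.100.0/24"]  # placeholders

def block_updates() -> None:
    """Add an outbound block rule for the (placeholder) update address ranges."""
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "add", "rule",
         f"name={RULE_NAME}", "dir=out", "action=block",
         f"remoteip={','.join(UPDATE_RANGES)}"],
        check=True,
    )

def allow_updates() -> None:
    """Remove the block; run on a pilot group first, then the rest once it looks safe."""
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "delete", "rule", f"name={RULE_NAME}"],
        check=True,
    )

if __name__ == "__main__":
    block_updates()
```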

[–] r00ty@kbin.life 1 points 2 years ago (1 children)

That's interesting. We use CrowdStrike, but I'm not in IT so I don't know about the configuration. Is a channel file somehow similar to AV definitions? That would make sense, and I guess it means this was a bug in how the CrowdStrike code parses the file?

[–] DesertCreosote@lemm.ee 2 points 2 years ago (1 children)

Yes, CrowdStrike says they don’t need to do conventional AV definitions updates, but the channel file updates sure seem similar to me.

The file they pushed out consisted of all zeroes, which somehow corrupted their agent and caused the BSOD. I wasn’t on the meeting where they explained how this happened to my company; I was one of the people woken up to deal with the initial issue, and they explained this later to the rest of my team and our leadership while I was catching up on missed sleep.

I would have expected their agent to ignore invalid updates, which would have prevented this whole thing, but this isn’t the first time I’ve seen examples of bad QA and/or their engineering making assumptions about how things will work. For the amount of money they charge, their product is frustratingly incomplete. And asking them to fix things results in them asking you to submit your request to their Ideas Portal, so the entire world can vote on whether it’s a good idea, and if enough people vote for it they will “consider” doing it. My company spends a fortune on their tool every year, and we haven’t been able to even get them to allow non-case-sensitive searching, or searching for a list of hosts instead of individuals.

[–] r00ty@kbin.life 2 points 2 years ago

Thanks. That explains a lot of what I didn't think was right regarding the almost simultaneous failures.

I don't write kernel code at all for a living. But I do understand the rationale behind it, and it seems to me this doesn't fit that expectation. Now, it's a lot of hypotheticals. But if I were writing this software, any processing of these files would happen in userspace. This would mean that any rejection of bad or badly formatted data, or even a crash of the process doing the parsing, would just be an app crash.

The general rule I've always heard is that you want to keep the minimum required work in the kernel code. So I think processing/rejection should have been happening in userspace (perhaps even in code written in a higher-level language with better memory protections, etc.), and then a parsed and validated set of data would be passed to the kernel code for actioning.

But, I admit I'm observing from the outside, and it could be nothing like this. But, on the face of it, it does seem to me like they were processing too much in the kernel code.
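To illustrate the split being described, here's a hypothetical sketch of "validate in userspace, only hand vetted data to the kernel". The file format, magic header, and kernel hand-off are invented for illustration and aren't CrowdStrike's actual design.

```python
# Hypothetical illustration of the userspace-validation pattern described
# above: parse and sanity-check a content ("channel") file in an ordinary
# process, and only hand data that survives validation to the kernel
# component. The format, magic header, and hand-off are invented examples.
import struct

MAGIC = b"CHNL"                  # invented magic header
HEADER = struct.Struct("<4sI")   # magic + declared payload length

class InvalidChannelFile(Exception):
    """Raised for any malformed update; failing here is an app error, not a BSOD."""

def parse_channel_file(raw: bytes) -> bytes:
    if len(raw) < HEADER.size:
        raise InvalidChannelFile("file too short")
    if not any(raw):
        raise InvalidChannelFile("file is all zeroes")  # the reported failure mode
    magic, length = HEADER.unpack_from(raw)
    if magic != MAGIC:
        raise InvalidChannelFile("bad magic header")
    payload = raw[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise InvalidChannelFile("truncated payload")
    return payload

def push_to_kernel(payload: bytes) -> None:
    # Stand-in for the privileged hand-off (an ioctl or device write in a real agent).
    print(f"handing {len(payload)} validated bytes to the kernel component")

if __name__ == "__main__":
    try:
        push_to_kernel(parse_channel_file(b"\x00" * 1024))  # an all-zero file gets rejected
    except InvalidChannelFile as err:
        print(f"update rejected in userspace: {err}")
```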

[–] istanbullu@lemmy.ml 1 points 2 years ago

The real problem is the monopolization of IT and the Cloud.

[–] Rhaedas@fedia.io 1 points 2 years ago

I don't think it's that uncommon an opinion. An even simpler version is the constant repeats, over years now, of information breaches, often because of inferior protection. As an amateur website creator decades ago I learned that plain-text passwords were a big no-no, so how are corporate IT departments still doing it? Even the non-tech person on the street rolls their eyes at such news, and yet it continues. CrowdStrike is just a more complicated version of the same thing.

[–] TechNerdWizard42@lemmy.world 1 points 2 years ago (2 children)

Issue is definitely corporate greed outsourcing issues to a mega monolith IT company.

Most IT departments are idiots now. Even 15 years ago, those were the smartest nerds in most buildings. They had to know how to do it all. Now it's just installing the corporate overlord software and the bullshit spyware. When something goes wrong, you call the vendor's support line. That's not IT, you've just outsourced all your brains to a monolith that can go at any time.

None of my servers running windows went down. None of my infrastructure. None of the infrastructure I manage as side hustles.

[–] Lettuceeatlettuce@lemmy.ml 2 points 2 years ago* (last edited 2 years ago)

I've seen the same thing. IT departments are less and less interested in building and maintaining in-house solutions.

I get why, it requires more time, effort, money, and experienced staff to pay.

But you gain more robust systems when it's done well. Companies want to cut costs everywhere they can, and it's cheaper to just pay an outside company to do XY&Z for you and hire an MSP to manage your web portals for it, or maybe keep 2-3 internal sysadmins who are expected to do all that plus level 1 help desk support.

Same thing has happened with end users. We spent so much time trying to make computers "friendly" to people, that we actually just made people computer illiterate.

I find myself in a strange place where I am having to help Boomers, older Gen-X, and Gen-Z with incredibly basic computer functions.

Things like:

  • Changing their passwords when the policy requires it.
  • Showing people where the Start menu is and how to search for programs there.
  • How to pin a shortcut to their task bar.
  • How to snap windows to half the screen.
  • How to un-mute their volume.
  • How to change their audio device in Teams or Zoom from their speakers to their headphones.
  • How to log out of their account and log back in.
  • How to move files between folders.
  • How to download attachments from emails.
  • How to attach files in an email.
  • How to create and organize Browser shortcuts.
  • How to open a hyperlink in a document.
  • How to play an audio or video file in an email.
  • How to expand a basic folder structure in a file tree.
  • How to press buttons on their desk phone to hear voicemails.

It's like only older Millennials and younger gen-X seem to have a general understanding of basic computer usage.

Much of this stuff has been the same for literally 30+ years. The Start menu, folders, voicemail, email, hyperlinks, browser bookmarks, etc. The coat of paint changes every 5-7 years, but almost all the same principles are identical.

Can you imagine people not knowing how to put a car in drive, turn on the windshield wipers, or fill it with petrol, just because every 5-7 years the body style changes a little?

[–] ocassionallyaduck@lemmy.world 1 points 2 years ago

Man, as someone who was cross-discipline at my former companies, the way people treat IT, and the way the company considers IT an afterthought, is just insane. The technical debt is piled high.
