this post was submitted on 09 Jan 2026
2 points (100.0% liked)

DevOps

When setting resources on Kubernetes pods, I'm finding it very difficult to achieve good memory efficiency.

I'm using the "no CPU limits, memory requests = limits" approach that I see heavily recommended on the internet.

The problem is that many pods have random memory spikes. To keep them from being OOMKilled, you have to set their memory requests (and therefore limits) above their highest spike, which means that most of the time they're only using around 25% of their memory allocation.
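For reference, here's roughly what that looks like. The names and numbers are made up; the point is just that memory requests and limits are equal and sized above the observed peak, while CPU has a request but no limit:

```yaml
# Hypothetical example (names and sizes made up), showing the
# "no CPU limits, memory requests = limits" setup described above.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      resources:
        requests:
          cpu: 250m     # CPU is requested but deliberately not limited
          memory: 1Gi   # sized above the highest spike seen, even though
                        # steady-state usage is only ~250Mi
        limits:
          memory: 1Gi   # equal to the request, so a spike past this gets OOMKilled
```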

I've been trying to optimize one cluster, and on average only about 33% of the total memory requested across all the pods is actually being used. Whenever I try decreasing a pod's memory requests, I eventually get OOM kills. I was hoping to get closer to 50%, considering that this particular cluster has a stable workload.

I'm sure that I could optimize it a bit better, but not by much.

Is this a shared experience with Kubernetes, that you ultimately have to sacrifice a lot of memory efficiency?

[–] rglullis@communick.news 3 points 3 days ago

That seems like a problem with the application, no? If the workloads have memory leaks or are too eager to grab memory for themselves, then no cluster will be able to make them perform better.