this post was submitted on 18 Apr 2026

Announcements

TL;DR

We’ve temporarily defederated from Hexbear due to a Lemmy bug with very deeply nested comment threads.

A thread there triggered repeated crashes on our server, causing errors like 502 pages and “Lemmy is starting” messages. Defederating stops the issue for now.


Announcement

Due to technical issues, we’ve temporarily defederated from Hexbear until a Lemmy update is available that fixes issues with deeply nested comment chains.

There is a known bug in Lemmy (see: https://github.com/LemmyNet/lemmy/issues/6435 ) where very deeply nested comments can trigger excessive recursion during federation. When Lemmy processes these comments, it recursively fetches and verifies parent comments, which can eventually lead to stack overflows.

Under normal circumstances this happens rarely (we’ve been seeing it maybe once per day), but it becomes much more problematic when multiple new comments are added to an already deeply nested thread. Each new activity can trigger processing of the same deep chain again.
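A back-of-envelope sketch of why bumping makes this so much worse (the numbers are purely illustrative, not measured from our server): if every new activity on a chain of depth D re-walks all D ancestors, then N new replies cost on the order of N × D verifications.

```rust
// Illustrative cost model: each new reply to a thread of depth `depth`
// triggers a walk over all of its ancestors.
fn total_verifications(depth: usize, new_replies: usize) -> usize {
    new_replies * depth
}

fn main() {
    // e.g. 50 bumps on a 2,000-comment-deep chain:
    assert_eq!(total_verifications(2_000, 50), 100_000);
    println!("{}", total_verifications(2_000, 50));
}
```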

In this case, a thread on Hexbear received a large number of additional replies in a very deep comment chain.

This caused Lemmy to repeatedly process that chain, leading to stack overflows, federation worker exhaustion, and timeouts. Simply put, parts of the server were crashing, too many tasks piled up at once, and requests started timing out and failing to load.

You may have seen this on the website as 502 errors or the Lemmy error screen, and in apps it may have presented as API timeout errors or "Lemmy is starting" errors.

For a visual representation, this graph shows the memory drop each time the server restarts:

The flat bit to the left is good, everything is fine. The choppy bit to the right, not so good, everything is not fine.

Usually it's a one-off comment causing this crash. In this case, however, the user spent a good portion of time bumping the thread, and we had to process each of those bumps, each one causing a crash, restarting the server, and then crashing on the next activity in the queue, and so on.

I did try removing the offending community from Lemmy.zip to prevent this from happening (bumping threads seems to be quite common behaviour in that community), but we still process all the activities from that community. The only certain fix for now is to defederate until a version of Lemmy is released that fixes this.

The graph is back to improving now:

Hope that all makes sense!

Demigodrick

[–] db0@lemmy.dbzer0.com 29 points 1 day ago (1 children)

Thanks so much for doing the legwork on this. I was going nuts trying to figure out where the seemingly random downtimes were coming from. It felt like a DoS, and this cause explains why.

Out of curiosity, how did you trace this root cause?

[–] Demigodrick@lemmy.zip 27 points 1 day ago* (last edited 1 day ago) (2 children)

I noticed that in the logs before every timeout there were lots of "verify" words appearing, and with each iteration of that statement there were more and more of them. Honestly, I had no idea what it meant at that point, only that I didn't recognise it from looking at Lemmy logs previously, it always appeared before a crash, and it felt suspicious.

Here's an example from some logs before a crash:

2026-03-15T21:47:22.670586Z  INFO HTTP request{http.method=POST http.scheme="https" http.host=lemmy.zip http.target=/inbox otel.kind="server" request_id=2cc6dc65-571d-4a69-9733-5e80e455c00b}:receive:community:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:verify:
verify:verify:verify:verify: activitypub_federation::fetch: 
Fetching remote object https://hexbear.net/comment/7004776

thread 'actix-server worker 18' has overflowed its stack

fatal runtime error: stack overflow

I pinged some logs over to Nutomic on Matrix, who thought it might be related to nested comments, and then I noticed Dessalines had made the linked thread, which pretty much matched what I was seeing behaviour- and logs-wise.

Usefully, the logs link the object being fetched, and 9 times out of 10 it's a deeply nested Hexbear thread! Or someone from another instance commenting on a nested Hexbear thread. Nutomic confirmed the behaviour based on the logs in the issue, and I'm pulling the logs when I get a chance to see what other threads are causing crashes, although hopefully the fix will make its way into 0.19.18 beta 3 so I can stop worrying about it!
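For anyone wanting to spot the same pattern in their own logs, here's a hypothetical Rust sketch (names and log format assumed from the excerpt above; this isn't an official tool) that counts the nested "verify:" spans in a tracing line and pulls out the object being fetched:

```rust
// Given one log line, return how many nested "verify:" spans it contains
// and, if present, the URL after "Fetching remote object ".
fn scan_line(line: &str) -> (usize, Option<&str>) {
    let depth = line.matches("verify:").count();
    let url = line
        .split("Fetching remote object ")
        .nth(1)
        .map(|rest| rest.split_whitespace().next().unwrap_or(rest));
    (depth, url)
}

fn main() {
    // A shortened version of the log line shown above.
    let line = "receive:community:verify:verify:verify:verify: \
                activitypub_federation::fetch: \
                Fetching remote object https://hexbear.net/comment/7004776";
    let (depth, url) = scan_line(line);
    assert_eq!(depth, 4);
    assert_eq!(url, Some("https://hexbear.net/comment/7004776"));
    println!("depth={depth} url={url:?}");
}
```

Sorting lines by that depth count would surface the deepest chains, and the extracted URLs point at the threads responsible.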

[–] db0@lemmy.dbzer0.com 5 points 1 day ago (1 children)

Did you also see the db cpu spiking during this period?

[–] Demigodrick@lemmy.zip 3 points 1 day ago

No, no meaningful CPU spikes that I could make out anywhere, although admittedly I was mostly focusing on the Lemmy server container.

[–] mathemachristian@lemmy.blahaj.zone 3 points 1 day ago (1 children)

There has to be a better way to gain visibility for mutual_aid posts because good god

[–] frongt@lemmy.zip 2 points 23 hours ago

Voting and sorting by top.