this post was submitted on 04 May 2026
414 points (98.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

cross-posted from: https://lemmy.zip/post/63702017

top 31 comments
[–] FrankFrankson@lemmy.world 65 points 4 days ago (2 children)

People really need to get the fuck off of github. There are multiple alternatives.

[–] RustyNova@lemmy.world 11 points 4 days ago (1 children)

Currently trying to migrate a project to Codeberg, and the site goes down...

[–] ramble81@lemmy.zip 7 points 4 days ago (2 children)

Any of them support SSO without a need for megalicense (tm)? Or artifact storage and CI/CD build agents?

[–] Anafabula@discuss.tchncs.de 10 points 4 days ago (1 children)

Forgejo is Codeberg's (a non-profit) hard fork of Gitea. It has SSO, artifact storage, and CI/CD build agents, with no paid plan.

[–] Fiery@lemmy.dbzer0.com 1 points 3 days ago (1 children)

Why did they fork? I just set up Gitea and now I'm scared

[–] Anafabula@discuss.tchncs.de 2 points 3 days ago (1 children)

From Forgejo's comparison with Gitea:

In October 2022 the domains and trademark of Gitea were transferred to a for-profit company without the knowledge or approval of the community. Despite an open letter from the community, the takeover was later confirmed. Forgejo was created as an alternative: a software forge whose governance furthers the interest of the general public.

[–] Fiery@lemmy.dbzer0.com 1 points 3 days ago

Ah well guess I know what I'm doing tomorrow or this weekend

[–] epicshepich@programming.dev 5 points 4 days ago

I run Gitea on my home server, and I'm able to use my Authentik instance for SSO. I don't use CI/CD, but I'm pretty sure it has an "actions" system similar to GitHub. I don't know about CI/CD artifacts, but I do use package and container registries, as well as LFS, which all work well!

[–] 4grams@awful.systems 41 points 4 days ago (2 children)

As an infrastructure engineer and architect, that graph really causes the stress levels to rise. That is incompetence visualized for the world to see. Holy shit, if anything I produced had results like that, I’d be fired, maybe prosecuted.

[–] ivanvector@piefed.ca 14 points 4 days ago (2 children)

You just know some exec is making a bonus from some invented metric that this supports.

[–] borth@sh.itjust.works 6 points 4 days ago

Probably tracking code commits. They'll show how many commits have been made, claim they've been super productive, and say they deserve another bonus.

[–] MrEff@lemmy.world 3 points 3 days ago

Can't get bonuses for fixing outages if there are no outages.

[–] spartanatreyu@programming.dev 1 points 2 days ago

You think that's bad, check their "high score" here: https://www.dayswithoutgithubincident.com/

[–] Tar_alcaran@sh.itjust.works 27 points 4 days ago (3 children)

Is this because LLMs are entering a bazillion changes and the server is overwhelmed, or is it because they're pushing LLM use on GitHub's code itself?

[–] jonathan@piefed.social 15 points 4 days ago (1 children)

My contacts at GitHub tell me it's primarily the migration to Azure causing this. The increased load from LLM usage is just adding to their problems.

[–] Voroxpete@sh.itjust.works 9 points 4 days ago* (last edited 4 days ago)

Those problems start in 2019. This isn't an AI issue, it's a Microsoft incompetence issue.

[–] Diplomjodler3@lemmy.world 20 points 4 days ago

Textbook enshittification.

[–] VibeSurgeon@piefed.social 14 points 4 days ago (1 children)

Right, so this image cuts off the Y-axis. Looking into it, the green parts of the line mark 100% uptime, and the second horizontal line marks 99.9% uptime.

I'm fairly convinced that GitHub didn't manage to keep a clean 100% uptime before the acquisition, so this is more likely faulty data: basically, underreported downtime figures prior to the acquisition.

[–] DacoTaco@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

100% this.
To add to it, GitHub has gotten a ton more complex since then and its userbase has skyrocketed. Scaling issues are a thing, after all.

IIRC GitHub Actions hadn't been released yet when Microsoft took over (but were in the works), and that alone makes the infrastructure a bitch to maintain and keep safe, hehe

[–] Sir_Premiumhengst@lemmy.world 2 points 3 days ago

Oh lol thumbnail made me think this is an IR spectrum.

[–] MrEff@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

An average month has 43,800 minutes, so 1% of a month is 438 minutes. But the Y-axis covers only a tenth of that, in 0.1% increments, and 0.1% of a month is 43.8 minutes. So most months show around 1-2 hours of downtime, and never more than 4 hours in any given month.

There also isn't a counter for the number of incidents. If you just did a major overhaul of some system, with both hardware and software changes, stalled out on go-live, then fixed it in 2 hours and never crashed again that month, that's actually a decent, half-competent IT team. Versus, if you're applying untested updates or shitty product-breaking commits that crash servers and need rollbacks every other day, but each downtime is under 2 hours: that team needs to be re-evaluated.
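The arithmetic above can be sketched quickly. This is a minimal illustration, not anything from GitHub's status page; the 43,800-minute month and the function name are my own assumptions for the sketch:

```python
# Downtime budget implied by an uptime percentage over one "average" month.
# Assumes a 43,800-minute month (~730 hours), as in the comment above.

MINUTES_PER_MONTH = 43_800

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime in one month at the given uptime percentage."""
    return MINUTES_PER_MONTH * (100.0 - uptime_pct) / 100.0

if __name__ == "__main__":
    for pct in (100.0, 99.9, 99.8, 99.5):
        # 99.9% -> 43.8 min, i.e. each 0.1% tick on the Y-axis is ~44 minutes
        print(f"{pct:5.1f}% uptime -> {downtime_minutes(pct):6.1f} min of downtime")
```

So a month that dips to the 99.8% line on the graph still represents only about an hour and a half of total downtime.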