this post was submitted on 02 Dec 2025
96 points (99.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Quick math is the one thing computers were always good at. Not anymore, apparently.

top 9 comments
[–] gustofwind@lemmy.world 26 points 2 weeks ago

“Approximately” doing some Olympian weightlifting

[–] Greg@lemmy.ca 10 points 2 weeks ago (2 children)

I think it's trying to say that 1.3 billion people were born between 2015 and 2025. Poor word choice though

[–] Pothetato@lemmy.world 8 points 2 weeks ago (1 children)

Yeah it increased by that much! Decreased a little too, but that's totally unrelated.

[–] PunnyName@lemmy.world 1 point 2 weeks ago

Especially during that 2020 thing.

[–] technocrit@lemmy.dbzer0.com 3 points 2 weeks ago

> Poor word choice though

They literally developed this function by averaging over the entire internet for the best word choice.

[–] thisbenzingring@lemmy.today 7 points 2 weeks ago* (last edited 2 weeks ago)

APPROXIMATELY ... Come on, guys! Don't hurt the clanker's feelfeels; it's gonna go skynet on us because of some fucking internet troll

[–] technocrit@lemmy.dbzer0.com 4 points 2 weeks ago

It's almost like it's more accurate and efficient to do a basic computation than to calculate the "average" written response over the entire internet...

Predictive language models are bad at this because they are not actually parsing meaning from the text. They just output patterns they have seen before in training data, conditioned on your input. The patterns are complex, and the training data is usually immense enough that the model has seen just about any kind of pattern plenty of times. That is often good enough to produce sensible output, but not always.

There are systems that handle this better through a few different strategies. One is to have a team of specialized models take on the problem: a router model categorizes the prompt, hands the relevant parts to models specifically trained on that kind of data (or, in cases like this, to a basic, dumb calculator that actually computes the result), and a final model assembles those outputs into one cohesive answer.

Alternatively, you can have a series of models that successively break the prompt down into finer details and steps. Instead of guessing at math problems like this one, the system literally "shows its work", applying step-by-step arithmetic rather than relying on "good enough" language modeling.
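A minimal sketch of that "shows its work" decomposition: the problem is broken into explicit, logged arithmetic steps, each computed exactly. The population figures here are illustrative round numbers, not a claimed dataset.

```python
# Decompose "how much did the population grow?" into explicit steps,
# computing each one exactly instead of pattern-matching an answer.
def solve_growth(pop_start: float, pop_end: float) -> list[str]:
    steps = []
    growth = pop_end - pop_start
    steps.append(f"step 1: growth = {pop_end:.1e} - {pop_start:.1e} = {growth:.1e}")
    pct = growth / pop_start * 100
    steps.append(f"step 2: percent = {growth:.1e} / {pop_start:.1e} * 100 = {pct:.1f}%")
    return steps

# Illustrative figures: ~7.4 billion in 2015, ~8.2 billion in 2025.
for line in solve_growth(7.4e9, 8.2e9):
    print(line)
```

Because every intermediate value is produced by real arithmetic, a wrong step is visible in the trace instead of buried in a fluent sentence.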

[–] MonkderVierte@lemmy.zip 1 point 2 weeks ago

Luckily, China and India have both sunk below 2 children per pair now.