matrix.im rule (lemmy.blahaj.zone)
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

memes are the best way of support

edit: ecosia didn't give anything

[–] [email protected] 19 points 5 months ago (2 children)

You know asking AI is asking to be lied to, yes?

Like, Apple has gone so far as to say it's impossible for current LLMs to reason.

It's incapable of knowing what is true in its current form, according to Apple.

Don't trust AI to actually know anything.

[–] [email protected] 2 points 5 months ago (1 children)

It's not that you can trust it a little, it's that you can't trust it ever. It's just saying what it thinks you want to hear, not what is true.

[–] [email protected] 1 points 5 months ago

I posted here to see if anyone knew the reason; couldn't even get a comment on [email protected].

[–] [email protected] 1 points 5 months ago

I mean, by the definition of "LLM", it's impossible for the models to reason. It's literally a fancy big text generation model, like your keyboard text prediction on ~~steroids~~ GPUs.
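
To make the "text prediction" point concrete, here's a rough toy sketch (my own illustration, not how any actual model is built): a bigram predictor that just picks the most frequent next word it has seen in some sample text. A real LLM swaps the counting for a huge neural network over much longer contexts, but the core operation is still "guess the next token", not "check whether it's true".

```python
# Toy "keyboard prediction" sketch: a bigram model that picks the most
# frequent next word seen in some sample text. A real LLM replaces the
# counting with a neural network over much longer contexts, but the core
# step is still "predict the next token", not "know what is true".
from collections import Counter, defaultdict

sample_text = "the cat sat on the mat the cat ate the fish"

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
words = sample_text.split()
for prev, nxt in zip(words, words[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word`, or '?' if unseen."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "?"

# Generate a few words by repeatedly predicting the next one.
current = "the"
generated = [current]
for _ in range(5):
    current = predict_next(current)
    generated.append(current)

print(" ".join(generated))  # e.g. "the cat sat on the cat"
```

The output looks vaguely like language because the statistics of the sample text push it that way, not because anything in there understands or verifies what it's saying. That's the gap people mean when they say these models predict rather than reason.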