this post was submitted on 19 Apr 2024
0 points

Technology

77765 readers
2358 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 7 comments
[–] Sneptaur@pawb.social 1 points 2 years ago

He’s right you know

[–] istanbullu@lemmy.ml 1 points 2 years ago (2 children)

I don't buy into this "AI is dangerous" hype. Humans are dangerous.

[–] funkless_eck@sh.itjust.works 1 points 2 years ago (1 children)

"ooh it's more advanced but don't worry- it's not conscious"

is as much a marketing tactic as "how it feels to chew 5 gum" or buzzfeedesque "top 10 celebrity mistakes - number 3 will blow your mind"

it's a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it's never going to be dangerous in and of itself.

[–] kromem@lemmy.world -1 points 2 years ago (1 children)

> it's a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it's never going to be dangerous in and of itself.

That's not how it works. I really don't get what's with people these days being so willing to be confidently incorrect. It's like after the pandemic people just decided that if everyone else was spewing BS from their "gut feelings," well gosh darnit they could too!

It uses gradient descent on a large series of texts to build a neural network capable of predicting those texts as accurately as possible.

How that network actually operates ends up a black box, especially for larger models.
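The training principle described above — nudge the weights by gradient descent so the observed text becomes more likely — can be sketched in a few lines. This is an illustrative toy only (a bigram model over a two-character alphabet, all names invented here), nothing like a production LLM in scale, but the same cross-entropy-plus-gradient-descent loop:

```python
import math

# Toy sketch: gradient descent on a bigram next-character model.
text = "abababab"
chars = sorted(set(text))                  # ['a', 'b']
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# One logit per (previous char, next char) pair.
W = [[0.0] * V for _ in range(V)]
lr = 1.0

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(200):
    for prev, nxt in zip(text, text[1:]):
        p = softmax(W[idx[prev]])
        for j in range(V):
            # Gradient of cross-entropy loss w.r.t. each logit:
            # predicted probability minus the one-hot target.
            grad = p[j] - (1.0 if j == idx[nxt] else 0.0)
            W[idx[prev]][j] -= lr * grad

# After training, 'a' strongly predicts 'b' (and vice versa).
print(softmax(W[idx['a']])[idx['b']])
```

In this toy the trained probability of `b` after `a` ends up near 1, since that is the only continuation the "corpus" ever shows.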

But research over the past year and a half in simpler toy models has found that there's a rather extensive degree of abstraction. For example, a small GPT trained only on legal Othello or Chess moves ends up building a virtual representation of the board and tracks "my pieces" and "opponent pieces" on it, despite never being fed anything that directly describes the board or the concept of 'mine' vs 'other'. In fact, in the Chess model, the research found there was even a single vector in the neural network that could be flipped to have the model play well or play like shit regardless of the surrounding moves fed in.

It's fairly different from what you seem to think it is. Though I suspect that's not going to matter to you in the least, as I've come to find that explaining transformers to people spouting misinformation about them online has about the same result as a few years ago explaining vaccine research to people spouting misinformation about that.

[–] funkless_eck@sh.itjust.works 1 points 2 years ago* (last edited 2 years ago)

I don't know if saying "it's not a loop! it's an iterative process using a series of steps!" is that much of a burn.

my dude, that's a loop.

[–] Thorny_Insight@lemm.ee 0 points 2 years ago* (last edited 2 years ago) (1 children)

AI can be dangerous. The point is not that it's likely, but that in the very unlikely event of it going rogue, it could at worst have civilization-ending consequences.

Imagine how easy it is to trick a child as an adult. The difference in intelligence between a human and a superintelligent AGI would be orders of magnitude greater than that.

An actual AI (that modern tools don't even vaguely resemble) could maybe theoretically be dangerous.

An LLM cannot be dangerous. There's no path to anything resembling intelligence or agency.