this post was submitted on 14 Mar 2026
182 points (95.5% liked)

Technology

all 16 comments
[–] chunkystyles@sopuli.xyz 12 points 1 day ago

I hate how these kinds of things are always framed. The implied message is always that "AI" can autonomously decide to go off the rails, similar to the Moltbot craze. The agents have to be told to go do the things they do. They don't have free will.

Using a combination of network science and large language models, the same underlying technology that powers systems like ChatGPT, the researchers created and monitored synthetic bot agent personas, their posts, and their interactions with one another, simulating what a coordinated AI-powered social media network might look like.

So yeah, LLMs can be used nefariously to great effect. They're essentially more sophisticated bots.
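The quoted passage describes the study's setup: LLM-driven personas on a social graph amplifying one another. As a rough stdlib-only sketch of what that coordination looks like structurally (all names are illustrative, and the actual LLM text-generation step is stubbed with a placeholder):

```python
# Hypothetical sketch of a coordinated bot network like the one the
# researchers simulated. Only the network-science side (a clique of
# personas amplifying each other) is modeled; the LLM that would write
# each post is replaced by a placeholder string.

def make_persona(name):
    # In the study each persona would be prompted via an LLM;
    # here it is just a name and a canned talking point.
    return {"name": name, "post": f"{name}: <LLM-generated narrative>"}

personas = [make_persona(f"bot{i}") for i in range(5)]

# Each bot follows and re-amplifies every other bot: a coordination clique
# that manufactures the appearance of a grassroots movement.
follows = {p["name"]: [q["name"] for q in personas if q is not p]
           for p in personas}

def amplifications(follows):
    # Count how many coordinated accounts boost each persona's post.
    return {name: len(followers) for name, followers in follows.items()}

boosts = amplifications(follows)
print(boosts)  # every bot's post is amplified by the other 4
```

The point the comment makes holds in the sketch too: nothing here acts on its own. The graph, the personas, and the amplification loop all have to be wired up by whoever runs the network.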

[–] subignition@fedia.io 26 points 2 days ago (3 children)

A somewhat more hopeful take is that this strategy could be weaponized against misinformation too.

[–] Tiresia@slrpnk.net 18 points 2 days ago (1 children)

The truth has the advantage of objective evidence and the disadvantage of needing to be more complicated to incorporate objective evidence.

When it comes to news from out of town, there is no objective evidence, only appeal to authority. The few people willing to personally travel somewhere to testify that it is real can be written off as paid actors (or as AI-generated if you aren't seeing their testimony live).

So in almost all scenarios with this technology, the truth would have the disadvantage but not the advantage. An arms race between pro-truth and anti-truth AI would be the anti-truth AI winning because it can tell the more convenient lie.

My hopeful take is that it will make proper citation an essential life skill, with everyone who believes stories without citation getting scammed until they know better and everyone who doesn't cite sources being disbelieved. And that, as such, people will organically build up transparent citation networks that they rely on for information, meaning they can more effectively filter out advertisement, propaganda, memes, and lies.

[–] IronBird@lemmy.world 4 points 2 days ago

if it's like everything else LLM, the quality of propaganda will noticeably drop to the point where maybe the normies will catch on.

arguably legacy media has been falling there for a while, as evidenced by its cratering revenue streams; maybe this will just accelerate things even further

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago)

lol. Let me remind you: We live under capitalism. Capital is not out there spreading truth and justice. Quite the opposite.

[–] oozy7@piefed.social 5 points 2 days ago* (last edited 2 days ago)

It's already happening. Aren't spam bots somewhat like AI agents?

[–] ChunkMcHorkle@lemmy.world 7 points 2 days ago* (last edited 2 days ago)

Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real. ...Trust in the information people encounter on X, Facebook, and Reddit, already eroded, could fall even farther.

It's much more difficult to be propagandized by any means, including autonomous AI, when you're not freely offering up your time and devices every day to have it fed to you, individualized just for you using your own data, which you're also donating to the cause of propagandizing you.

I get why people do; there are lots of good reasons. But at a certain point the bad outweighs the good, and there's no time like the present to make a change.

So if you're reading this and you are still interacting with these centralized corporate-owned propaganda sites regularly, maybe it's time to rethink that strategy.

[–] UnderpantsWeevil@lemmy.world 16 points 2 days ago (3 children)

Excited to see smear campaigns that become increasingly surreal and disturbing

[–] Steve@startrek.website 13 points 2 days ago

At least 30% of the population will never notice

[–] technocrit@lemmy.dbzer0.com 2 points 1 day ago

Excited to see ~~smear campaigns~~ a reality that become increasingly surreal and disturbing

We're already here.

[–] prex@aussie.zone 1 points 1 day ago

While also making genuine bad press easier to dismiss.
eg: "fake news" says the worlds ugliest person.

[–] bibbasa@piefed.social 8 points 2 days ago (1 children)

well shit, first writing hit pieces, now this.

[–] zd9@lemmy.world 4 points 2 days ago

first writing hit pieces, then hitting targets with drones

[–] aceshigh@lemmy.world 2 points 1 day ago

Is AI stealing influencer jobs?

[–] technocrit@lemmy.dbzer0.com 1 points 1 day ago

Without Human Direction

Grifter bullshit.

Who do they think programmed these computers?