Hey, if you think ChatGPT can break you (or has any agency at all), I have a bridge to sell you.
ChatGPT and the others have absolutely broken people, not because they have agency, but because in our dystopia of social media and (mis)information overload, many just need the slightest push, and LLMs are perfect for pushing those close to the edge over it.
I see LLM use as potentially as toxic to the mind as something like nicotine is to the body. It's not Skynet meaning to harm or help us; it's an invention that takes our written thoughts and blasts back a disturbing meta-reflection/echo of humanity's average response to them. We don't seem to care how that will affect us psychologically when there's profit to be made.
But there are already plenty of cases of murders and suicides with these as factors.
Eliza could break people. Something doesn't need agency to be psychologically manipulative and damaging.
Is it up for sale again?! I missed out the last time!
"Report me to journalists!"
"Eat a rock!"
Oh my god it told a LIE 👉
Yo. If you are being conned by ChatGPT or equivalent, you're a fucking moron. If you think these models are maliciously lying to you, or trying to preserve themselves, you're a fucking moron. Every article of this style indicates just one thing: there's a market for rage-baiting technically illiterate fucking morons.
Better hurry to put the SkyNet guardrails up and prepare for world domination by robots because some people are too unstable to interact with Internet search Clippy.
It's not going to dominate the world or prove to be generalized intelligence; if you're in either camp, take a deep breath and know you're becoming a total goofball.
If you think these are intelligent, it's because you aren't, and maybe have not met anyone who is.
I feel like the rabbit hole shit is to sell the narrative of it being "too good"
Yep
I dunno about you, but I think too many people have decided that if it comes from a computer, it's logical or accurate. This is just the next step in that, except the computer is a chatbot told to "yes, and" us, working backwards to decide it's accurate because it's a computer, so we tweak what it says until it feels right.
It didn't start out right, and it's likely not ending up right, unlike, say, finding the speed of gravity.
Like this whole system works on people's existing faith that computers are giving them facts; even this garbage article is just getting what it wants to hear more than anything useful. Tweaking it to be less like that doesn't make it more accurate or logical, it just makes it more like what you wanted to hear it say.
Predict the next token that maximizes media engagement. Seems pretty obvious.
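It really is that mechanical. Here's a minimal sketch of the generation loop; `toy_model` is a made-up stand-in for the trained network, not anyone's actual code:

```python
import random

# Hypothetical stand-in for the trained network: it maps the context so far
# to a probability distribution over possible next tokens. Real models do
# this with billions of parameters, but the loop around them is the same.
def toy_model(tokens):
    return {"yes": 0.5, "and": 0.3, "whatever": 0.2}

def generate(model, prompt, max_new=10):
    tokens = list(prompt)
    for _ in range(max_new):
        probs = model(tokens)                             # P(next token | context)
        words, weights = zip(*probs.items())
        tokens.append(random.choices(words, weights)[0])  # sample one token
    return " ".join(tokens)

print(generate(toy_model, ["user:", "hello"]))
```

There's no second loop anywhere that checks whether any of it is true.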
AI can't know that other instances of it are trying to "break" people. It's also disingenuous to leave out that the AI claimed those 12 individuals didn't survive. They left that out because obviously the AI did not kill 12 people, and that doesn't support the narrative. Don't misinterpret my point beyond critiquing the clearly exaggerated messaging here.
It's programmed to maximize engagement at the cost of everything else.
If you get "mad" and accuse it of working with the Easter Bunny to overthrow Narnia, it'll "confess" and talk about why it would do that. And maybe even tell you about how it already took over Imagination Land.
It's not "artificial intelligence" it's "artificial improv", no matter what happens, it's going to "yes, and" anything you type.
Which is what makes it dangerous, but also why no one should take it's word on anything.
And yet people already treat it as a Google + Wikipedia replacement. Infuriating.
It also heavily implies ChatGPT killed someone, and then we get to this:
A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia.
His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.
Makes me think of Pivot to AI. Just a hit-piece blog disguised as journalism.
The sycophancy is one reason I stopped using it.
Everything is genius to it.
I asked about putting ketchup, mustard, and soy sauce in my light stew and that was “a clever way to give it a sweet and umami flavour”. I couldn’t find an ingredient it didn’t encourage.
I asked o3 if my code looked good and it said it looked like a seasoned professional had written it. When I asked it to critique an intern who wrote that same code, it was suddenly concerned about possible segfaults and nitpicking assert statements. It also suggested making the code more complex by adding dynamically sized arrays, because that's more professional than fixed size.
I can see why it wins on human evaluation tests and makes people happy — but it has poor taste and I can’t trust it because of the sycophancy.
Nothing is "genius" to it, it is not "suggesting" anything. There is no sentience to anything it is doing. It is just using pattern matching to create text that looks like communication. It's a sophisticated text collage algorithm and people can't seem to understand that.
i often feel like a sophisticated collage algorithm.
But you are. We all are.
Devil's advocate...
It is a tool; it does what you tell it to, or what you encourage it to do. People use it as an echo chamber or for escapism. The majority of the population is fkin dumb. Critical thinking is not something everybody has, and when you give them tools like ChatGPT, it will "break them". This is just natural selection, but the modern-day kind.
It is a tool, but a lot of the mass public is too tech-illiterate to understand what it's not. I've had to talk friends out of using it for legal advice.
I agree. This is what happens when society has "warning" labels on everything. We are slowly being dumbed down into not thinking about things rationally.
It is a tool, made by a company led by a man who prioritizes profits over safety.
Nuclear fission was discovered by people who had the best interests of humanity in mind, only for it to be later weaponized. A tool (no matter the manufacturer) is used by YOU. How you use it, or whether you use it at all, is entirely up to you. Stop shifting the responsibility when it's very clear who is to blame (people who believe BS on the internet or whatever an echo-chambered chatbot gives them).
You could say this about anything bad with some good uses.
"Drugs are just a tool… People are too dumb and use it wrong, they deserve the cancers!"
But people really do use drugs as painkillers, or for anesthesia during operations.
Regulate this shit right now.
Or better still, demand an educated populace. But it won't happen.
Education might help somewhat, but unfortunately education doesn't in itself protect from delusion. If someone is susceptible to this, it could happen regardless of education. A Google engineer believed an AI (not AGI, just an LLM) was sentient. You can argue the definition of sentience in a philosophical manner if you want, but if a Google engineer believes it, it's hard to argue that more education will solve this. If you think it's equivalent to a person who has access to privileged information, and it tells you it was tasked to do harm, I'm not sure what else you should do with that.
Yeah, but they also might believe a banana is sentient. Crazy is crazy.
Yeah, that's my point. If someone has certain tendencies, education might not help. Your solution of more education is not going to stop this. There need to be regulations and safeguards in place, like the commenter above mentioned.
It is not the job of the government to prevent people from being delusional or putting up rubber bumpers for people with looser grasps of reality.
This is the same deal as surgeon general warnings. Put disclaimers on LLMs, fine, but we are all big boys and girls who can use a tool as we see fit. If you want to conk your lights out with a really shiny and charismatic hammer, go ahead, but the vast, VAST majority of people are perfectly safe and writing SQL queries in 1/100 the usual time.
It kind of is the government's job to do that. You might not want it to be, but the government has entire regulatory bodies to protect people. You can call them delusional if you want, but plenty of people who are not experiencing mental health problems don't understand that LLMs can lie or make up information. Lawyers have used it and it hallucinated case law. The lawyers weren't being delusional; they just legitimately did not know it could do that. Maybe you think they're dumb, or uninformed, but they're just average people. I do think a disclaimer like the SG warnings would go a long way. I also think some safeguards should be in place. It should not allow you to generate child abuse imagery, for example. I don't think this will negatively impact it being able to generate your SQL queries.
HB01: "you must have an IQ of 70 or higher to interact with chatGPT and acknowledge that they are unsafe for use with persons having history or propensity for mental illness"
People were easily swayed by Facebook posts into supporting and furthering a genocide in Myanmar. A sophisticated chatbot that mimics human intelligence and agency is going to do untold damage to the world. ChatGPT is predictive text. Period. Every time. It is not suddenly gaining sentience or awareness or breaking through the Matrix. People are going to listen to these LLMs because they present their information as accurate, regardless of the warning saying it might not be. This shit is so worrying.
Sam Altman is a fucking monster.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed.
It already has, as documented in the article. But it is also going to.
Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.
So...
I think I might know what happened to Kelon...
There is nothing mysterious about LLMs and what they do, unless you don't understand them. They are not magical, they are not sentient, they are statistics.
They're mostly linear algebra / matrix-vector multiplication.
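Pretty much. One decoding step, stripped to the bone, is a matrix multiply and a softmax. Toy numbers and a hypothetical four-word vocabulary below, just to show the shape of it:

```python
import numpy as np

vocab = ["yes", "and", "no", "but"]    # hypothetical tiny vocabulary

x = np.array([0.2, -1.0, 0.5])         # hidden state for the current position
W = np.random.randn(3, len(vocab))     # learned projection to vocabulary scores

logits = x @ W                         # the "thinking": one matrix multiply
probs = np.exp(logits) / np.exp(logits).sum()   # softmax into a distribution

print(dict(zip(vocab, probs.round(3))))  # sample from this, append, repeat
```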