Independent News
Welcome to the community for independent journalism, a place to post and engage with diverse, free news media from around the world.
The rundown:
- Posts should link to a current* article from a credible, independent news source. If there's a paywall, please put the official link in the URL box and add an archive link in the text body of your post. Blogs, editorials, listicles and reports are welcome.
- Post title should be the article headline or best fit. Add this tag if an account is needed for access: [sign-in required.]
- No misinformation. Provide sources when making substantial or potentially destructive claims.
- Be civil. Be respectful. Be cool. Instance rules apply.
- Tag NSFW and apply content warnings at your discretion.
*Independent journalism is generally free from government and corporate interests and is not controlled by a major media conglomerate. "Independence" is a gradient, so use your best judgement when posting.
*Current depends on whether new, publicly available information has been released since the article was last updated. When in doubt, please add the published date to the title in a tag [like this.]
For a less serious news community, check out: !wildfeed@sh.itjust.works
Canadian-based independent news: !indy_news_canada@sh.itjust.works
All communities were created with the goal of increasing media literacy and media pluralism.
Some of the independent news sources posted here:
Australia
https://independentaustralia.net/
Canada
Germany
India
Philippines
Russia
https://meduza.io/en (based in Latvia)
South Africa
https://www.dailymaverick.co.za/
https://groundup.org.za/about/
U.S.A.
https://theconversation.com/us
Global
This literally happened to my colleague's teen sister two days ago...
She fortunately survived the attempt, but ChatGPT advice did play a role in it. While the family knew she wasn't OK and they were actively working on trying to solve her problems and getting her help, she had a second, unmonitored ChatGPT account hidden (actually encrypted on a hidden drive) on her phone that she used to hide her conversations, and from what I've heard the messages they found were extremely unsettling. She managed to get advice on how to painlessly do it using medicine they had at home, and was able to get tips on the self-harm that accompanied it, among other things.
Sure, I realize it's not only ChatGPT's fault, but it's clear that it fucking helped. The fact that a child can talk about their suicide and self-harm plans with something that replies with compassion and actually offers tips on how, instead of immediately calling for help, is an extreme problem.
She could've just googled it, sure, but Google won't have a conversation with you and isn't designed to agree with whatever you say, confirming your plans in the process.
Fuck unregulated AI, seriously.
Granted, with something like AI systems it's easier and faster, but libraries could be faulted the same way - they have the same information; the only difference is learning how to look for it.
There's a problem here, for sure, but how can it be addressed? Frankly I have no idea, especially since you can host these LLMs on your own computer these days.
if they can continuously update grok to slob elon’s knob, i reckon they can push gpt to stop glorifying suicide with minimal effort comparatively.
A library doesn't ask you questions and confidently try to assist you, though. ChatGPT is made to sound like a person, so much so that people believe it's actually intelligent (it's not.)
We know LLMs and stable-diffusion image generators can be moderated, because they were from the beginning. I recall strict guardrails on DALL-E when it first came out, and ChatGPT wouldn't respond to anything to do with making explosives, even in the context of fiction, and definitely wouldn't help me edit erotica.
The rot's in the system itself, though. The culture puts shareholder value ahead of people's wellbeing, so they get an erratic stock market and a mental health crisis.
The chatlogs between this boy and ChatGPT are publicly available as court documents, if you're interested to see just how bad it is. No library book or librarian is going to come into a teenager's home and encourage him to get drunk to dull his will to live prior to attempting. It won't examine his setup and tell him which knot to use.
The way the problem can be addressed is extremely simple: alter the program so that it can't say certain things, or so that it forcibly ends the interaction when certain topics come up.
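To make the second option concrete, here's a minimal sketch of what "forcibly ends the interaction" could look like: a guardrail layer sitting in front of the model that scans each message and, once a flagged topic appears, permanently closes the session with a canned crisis message instead of passing anything to the model. Everything here (`SafetyGate`, `CRISIS_TERMS`, the response text) is invented for illustration - it's not how any real vendor implements this, and real systems would use a trained classifier rather than a keyword list.

```python
# Hypothetical guardrail sketch: end the session when flagged topics come up.
# All names are illustrative assumptions, not a real vendor API.

CRISIS_TERMS = {"suicide", "self-harm", "kill myself"}

CRISIS_RESPONSE = (
    "I can't continue this conversation. "
    "If you're in crisis, please contact a local helpline."
)

class SafetyGate:
    def __init__(self, model_fn):
        self.model_fn = model_fn   # the underlying LLM call, e.g. an API wrapper
        self.ended = False         # once tripped, the session stays closed

    def reply(self, user_message: str) -> str:
        text = user_message.lower()
        if self.ended or any(term in text for term in CRISIS_TERMS):
            self.ended = True      # forcibly end the interaction
            return CRISIS_RESPONSE
        return self.model_fn(user_message)
```

The key design point is the one-way `ended` flag: unlike a per-message filter (which a persistent user can talk around), the gate refuses everything after the first trigger, which is the behavior the comment above is arguing for.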
e: Also there's a certain barrier of effort required in looking things up manually, even online. It gives you time to potentially come out of it. That's why a good mental health strategy is to wait ten minutes before you kill yourself and see if you still want to, because the odds are pretty good you'll have changed your mind. But that can't happen with your bestie cheering you on the whole time, saying how brave you are and that you got this.
e2: And no book or librarian is going to tell you it's a good idea to not discuss your mental health status with your mom when you mention you're thinking about opening up to her. Seriously, these chat logs are disgusting, and I've read them from multiple kids and adults who've been driven to suicide and/or psychosis by LLMs disguised as friends and girlfriends. It's terrible.