Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the brain-frying occurred?
1980: TVs will fry your brain
1990: Videogames will fry your brain
2000: Computers will fry your brain
2010: Smartphones will fry your brain
2020: AI will fry your brain
Any takes for the 2030s?
I mean, based purely on our current dystopian reality, I feel you just made a really good point about tech growing to a point where it fully captures you away from reality, and indeed fries your brain by convincing you that fantasies are real.
MAGA is a great example of people with brains so fried they think a pedophile ex-conman with 34 felonies, who killed millions of Americans through a poor pandemic response, is somehow helping them by destroying USAID, DEI, healthcare, and Social Security.
Their brains are gonzo, all through the constant applied exploitation of all the tech you just mentioned combined.
AI will absolutely make it worse.
Climate change.
Literally.
2030: Critical thought will fry your brain
I fucking hate this AI shit, but I'll admit I end up using Gemini (knowing it's wrong sometimes). It's like how I'd use Google, just with a more complex ask instead of simple search queries. I couldn't imagine using it beyond that, other than a follow-up or two.
It's just a chatbot that has access to info. Who goes onto their cable company's website and befriends the chatbot?
I have found Google search to be getting progressively worse, whereas I can type out a question to Gemini that will return better results than Google search. It's annoying that Google search has gotten so bad, and DuckDuckGo will return you something interesting but not relevant. So Gemini is my Google search nowadays.
It may very well be intentional: to drive people away from traditional search and into Gemini.
I've used GPT a couple of times when I'd been searching the web and forums for well over an hour and found nothing relevant enough to work. The issue got solved in 5-10 minutes.
They enshittified the search so now using the chatbot is more useful. The search just returns slop and even fake slop forums.
Pretty much. Can't find useful info without having to put in a lot of extra work that I wouldn't have a decade ago.
Fuck though, I love being able to ask it for part numbers and info. Much less hassle to ask it than to use the shitty corpo parts catalogues' search features, especially when there are weird naming schemes and a lack of descriptions. Clicking through 50 parts trying to find the right one sucks.
It's more that SEO is so well known at this point that you can whip up whatever AI-generated garbage you want to be ranked high on search engines in seconds. For now, the AIs are just better at "wading" through the trash, since they somewhat curate the data they're training on. Once all they can train on is slop, you'd better hope you still have some encyclopedias and textbooks lying around.
I mean, I have been using DDG for years now. I just could not find the right answer for my specific issue on my specific Linux distro, and AI was sadly just faster.
Oh, do you mean Claudia!? She's awesome!
Found the Richard Dawkins :P
A study already came out showing that graduating high-school students can't even read or write; they're functionally illiterate.
You can’t see that this is the same kind of propaganda your grandparents were spreading about computers, just aimed at AI now?
Besides, why are colleges passing illiterate students? That’s the actual problem.
There’s a tiny difference between then and now called scientific evidence. These are actual scientific studies saying that using AI results in lower cognitive abilities.
AI is like a dog looking at itself in a mirror.
Some dogs are smart, and understand that this is a tool and that it is there to help you see things better.... Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight....
There are a ton of good use cases for ai, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language... seems unhelpful
Studies show that using a bulldozer for plowing a field decreases the farmer's muscle density after just one day of use.
Christ. What a load of shit.
I think the key point is that you’re not outsourcing critical thinking to LLMs, but are instead using them as a tool to do grunt work that you could’ve done yourself, just faster. This means constantly being critical of everything the LLM does: asking questions, asking for links to credible sources, asking it to provide info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files to be especially helpful for making sure the LLM follows my desired practices without me constantly making it refactor.
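As a rough illustration of what such an agent markdown file can look like (the filename and rules here are hypothetical examples, not from any specific tool or project):

```markdown
# AGENTS.md — project conventions for the coding assistant

## Code style
- Prefer small, pure functions; no function longer than ~40 lines.
- All public functions get docstrings and type hints.

## Process
- Never commit directly; propose a diff and explain the trade-offs.
- When touching existing code, run the test suite first and report failures.
- Cite the file and line for every change you reference.

## Dependencies
- Ask before adding any new third-party dependency.
```

The point isn't the specific rules; it's that writing your practices down once beats re-explaining them every session, and it keeps you, not the model, as the one deciding what "good" looks like.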
Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.
According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,
Study should be solid I guess.
participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test
Wow, interesting idea. 👍
where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower
And even worse IMO:
They also had nearly double the skip rate, meaning they simply chose not to solve the questions.
This seems very alarming IMO, because this indicates they lost some of their ability to think constructively on how to actually solve a problem!
I know there have always been some who cried wolf every time a new technology became available, like calculators and computers. Even dictionaries were once claimed to be harmful!
But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that's an ability we must not lose.
The last paragraph of the article is even worse: it mentions two studies showing these effects are also long-term!
It has ruined the writing and reading proficiency of K-12 students.
That was well in the toilet before LLMs.
When driving somewhere, if I set out with the mindset that I can’t rely on gps I can usually wing it and figure out where to go when a hiccup occurs. If I don’t, then I have a lot of trouble getting into that path finding mode when needed… similar to this maybe?
Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.
The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.
I'm made to use it for work, and it does speed up some tasks. However, for some stuff it ends up like the experiments show: not doing the work the first time means the whole process takes longer in the end.
I really do see the issue with AI. I see people around me outsource thinking to it too much. Like literally. As if they are happy that a machine can make their life choices for them. This is extremely worrying. It's about how people use it.
Thinking is hard, and people would prefer to feel instead. When you just have to vibe with your AI that thinks for you, people will absolutely use it and disempower themselves under the illusion of empowerment. They will infantilize themselves and end up being treated like the children they want to be.
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we've drawn from all technologies that have improved our lives to some degree: without them, we tend to be either incompetent, because losing access to them isn't worth planning for, or demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?
It doesn't necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.
If AI is as good or better than I am at writing code, then I'll switch my brain to only do the orchestrating and architecture rather than the writing code part. And yes, if you remove AI, then the switch will cause me to perform less than I used to before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only rather than splitting it on both the detection and solution.
This is not new, not bad, and I'll even go to the extent of saying it's a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That's what has driven our rapid technological and societal advances over the past millennia.
But, AI has many issues and many detrimental applications as well, so don't see this comment as a full endorsement of AI.