this post was submitted on 12 Jan 2026
610 points (96.9% liked)

Technology

Across the world, schools are wedging AI between students and their learning materials; in some countries more than half of all schools have already adopted it (often an "edu" version of a model like ChatGPT or Gemini), usually in the name of preparing kids for the future, even though no consensus exists on what preparing them for the future actually means where AI is concerned.

Some educators have said they believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the "robots in front of the kids so they can learn to dance with them" (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn't something we should take lightly. In important ways, AI isn't comparable to the technologies that came before it.

The reasoning we're hearing from educators in favor of AI adoption doesn't offer very solid arguments for rushing to include it broadly in virtually all classrooms, rather than offering something like optional college courses in AI for those interested. It also doesn't sound like the sort of rigorous academic vetting many of us would have expected from the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it's been around forever, three years is hardly enough time to get a firm grasp on what something this complex actually means for society or education. It's a real stretch to say it's had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don't. We're still scrambling and debating over how we should be using it at all. We're still in the AI wild west, untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they're young, potentially cultivating brand loyalty early. They get a wealth of highly valuable data on those customers. They may even get to experiment on them, as they have previously been caught doing. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume these companies are doing this as some sort of public service, their track record reveals a more consistent pattern of actions focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms as many children appear to be performing worse and learning less.

The way people (of all ages) use AI has repeatedly been shown to encourage "offloading" thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that may, in their current form, be more of an impediment to learning than an aid.

This approach could leave children even less prepared for the unique and accelerating challenges our world is presenting us with, challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.

This is one of the many situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my own experiences and what I've heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what's real anymore, in large part because of the way our technologies influence our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I am personally not against AI. I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we're hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

top 50 comments
[–] Doomsider@lemmy.world 17 points 2 days ago (1 children)

Grok AI Teacher is coming to a school near you! With amazing lesson plans like "Was the Holocaust even real?"

[–] MrScottyTay@sh.itjust.works 10 points 2 days ago (2 children)

Pedos aren't allowed near schools

[–] InputZero@lemmy.world 5 points 2 days ago

"Well, good news folks, problem solved. You need to be a person to be a pedophile, and Grok isn't a person. Therefore Grok can't be found liable for anything it does. Therefore it's safe and won't get me sued. Therefore it's safe as-is for kids. I don't know why everyone got so upset," says Elon Musk. /s but also not /s.

[–] Duamerthrax@lemmy.world 1 points 1 day ago

Except for all the ones that are.

[–] SethTaylor@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

I've never seen anything make more people act stupid faster. It's like they're in some sort of frenzy. It's like a cult.

It came out three years ago, and everyone talks about it like life has never existed and could never exist without it, and like you're useless to society if you don't use it.

It's so stupid I don't have a nice, non-rage-inducing way to describe it. People are simply idiots and will fall for any sort of marketing scam.

"AI: not even once"

[–] SnarkoPolo@lemmy.world 28 points 2 days ago (1 children)

People who can't think critically tend to vote Conservative.

Coincidence? I think not.

[–] Tollana1234567@lemmy.today 9 points 2 days ago* (last edited 2 days ago)

That's why conservative governments are all in on adopting AI: conservatives can't tell the difference between an AI video and a real one. Just look at how many videos on Reddit are accused of being AI when they're not.

[–] SoftestSapphic@lemmy.world 48 points 3 days ago (3 children)

AI highlights a problem with universities that we have been ignoring for decades already: learning is not the point of education. The point is to get a degree with as little effort as possible, because that's the only valuable thing to take away from education in our current society.

[–] T156@lemmy.world 9 points 2 days ago

I'd argue schooling in general. Instead of being something you do because you want to and enjoy it, it's instead a thing you have to do either because you don't have the qualifications for a promotion, or you need the qualifications for an entry-level position.

People that are there because they enjoy study, or want to learn more are arguably something of a minority.

Naturally, if you're there because you have to be, you're not going to put in much effort, if any, and will look to take whatever shortcuts you can.


Previous tech presented and processed information, making it faster and more available. AI, however, claims to do the creativity and decision-making for you. Once you've done that, you've removed humans from every part of the equation except as passive consumers, unneeded for any production.

How you plan on running an economy based on that structure remains to be seen.

[–] lechekaflan@lemmy.world 7 points 2 days ago (2 children)

Through AI as glorified meme generators, oligarchies are now steering millions of people to become... cows.

[–] Bosht@lemmy.world 5 points 2 days ago (6 children)
[–] lechekaflan@lemmy.world 2 points 1 day ago

Yeah, and it's also the perfect horrifying metaphor, like being herded toward Brave New World.

[–] tehn00bi@lemmy.world 28 points 3 days ago (7 children)

I just keep seeing in my head when John Connor says “we’re not going to make it, are we?”

[–] NigelFrobisher@aussie.zone 8 points 2 days ago

At work now we’re having team learning sessions that are just one person doing a change incredibly slowly using AI while everyone else watches, but at least I can keep doing my regular work if it’s a Teams call. It usually takes the AI about 45 minutes to decide what I immediately knew needed doing.

[–] jpreston2005@lemmy.world 18 points 3 days ago (26 children)

I gotta be honest: whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

I say this with full understanding that I'm coming off as just some Luddite, but I don't care. A tool is only as useful as it improves your life, and off-loading critical thinking does not improve your life. It actively harms your brain's higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are... This would be catastrophically worse.

But hey, good thing we dismantled the Department of Education! Wouldn't want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.

[–] termaxima@slrpnk.net 26 points 3 days ago (4 children)

Children don't yet have the maturity, the self control, or the technical knowledge required to actually use AI to learn.

You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to use it only to teach you, never to do things for you, and the patience to think about the problem yourself first, only then search the regular web, and only then ask the AI to clarify the few things you still don't get.

Many adults are already letting chatbots de-skill them; I don't trust that children would do any better.

[–] Cryxtalix@programming.dev 6 points 2 days ago (2 children)

I think, therefore I am. If they don't think, I'm not so sure.

AI keeps getting easier and more capable, so there's really no reason to adopt it early for fear of missing out. AI never allows anyone to miss out; the end goal is quite literally for it to be usable by babies and animals. Any preparation you do today is preparation you won't need in the near future, as AI strives to take over everything.

Feel free to set AI aside and work on yourself. You won't miss out. AI won't let you miss out.

[–] Disillusionist@piefed.world 5 points 2 days ago

I think you'd probably have to hide out under a rock to miss out on AI at this point. Not sure even that's enough. Good luck finding a regular rock and not a smart one these days.

[–] AdolfSchmitler@lemmy.world 2 points 2 days ago

I do not think, therefore I do not am.

[–] JeeBaiChow@lemmy.world 70 points 3 days ago* (last edited 3 days ago) (10 children)

Already seeing this in some junior devs.

[–] killabeezio@lemmy.world 19 points 3 days ago (2 children)

Recently had to lay someone off because they just weren't producing the work that needed to be done. Even the simplest of tasks.

I would say something like, "We need to remove/delete these things." That's it. It took some time because you had to do some comparison and research, but it was a super difficult task for them.

I would then give them something more technical, like writing a script, and that was mostly OK, much better work than on the simple tasks I would give.

Then I would get AI slop, and I would ask, "WTF are you thinking here? Why are you doing this?" They couldn't give a good answer because they didn't actually do the work. They would just have LLMs do all their work for them, and if anything required any sort of thinking, they would fail miserably.

Even in simple PR reviews, I would leave at least 10 comments and go back and forth. It got to the point where it was just easier to do it myself. I tried to mentor them and guide them along, but it just wasn't getting through to them.

I don't mind the use of LLMs, but use them as a tool, not a crutch. You should be able to produce yourself the thing you are asking the LLM to produce for you.

[–] scarabic@lemmy.world 14 points 3 days ago* (last edited 3 days ago) (3 children)

We need to be able to distinguish between giving kids a chance to learn how to use AI, and replacing their whole education with AI.

Right under this story in my feed is the one about the CEO who fired 80% of his staff because they didn’t switch over to AI fast enough. That’s the world these kids are being prepared for.

I would rather they get some exposure to AI in the classroom where a teacher can be present and do some contextualizing. Kids are going to find AI either way. My kids have gotten reasonable contextualizing of other things at school, like not to trust Google blindly and not to cite Wikipedia as a source. Schools aren’t always great with new technology but they aren’t always terrible either. My kids school seems to take a very cautious approach with technology and mostly teach literacy and critical thinking about it. They aren’t throwing out textbooks, shoving AI at kids and calling it learning.

This is an alarmist post. AI's benefits to education are far from proven. But it's definitely high time for ~~kids~~ everyone to get some education about it, at least.

[–] TubularTittyFrog@lemmy.world 10 points 2 days ago (1 children)

AI companies don't care about kids learning.

[–] undrwater@lemmy.world 55 points 3 days ago (14 children)

I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a "no Wikipedia" policy, and required primary sources.

My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM-detection software.

AI and Wikipedia are not the same, though. Students are better off with Wikipedia as they MIGHT read the references.

Still, those students who WANT to learn will not be held back by AI.

[–] otter@lemmy.ca 45 points 3 days ago (5 children)

I always saw the rules against Wikipedia to be around citations (and accuracy in the early years), rather than it harming learning. It's not that different from other tertiary sources like textbooks or encyclopedias. It's good for learning a topic and the interacting pieces, but you need to then search for primary/secondary sources relevant to the topic you are writing about.

Generative AI, however:

  • is a text-prediction engine that often generates made-up info, so students learn things wrong
  • does the writing for the students, so they don't actually have to read or understand anything
[–] agent_nycto@lemmy.world 16 points 3 days ago (3 children)

Don't trust any doctor who graduated after 2024.

[–] Tehdastehdas@piefed.social 29 points 3 days ago

They let AI into the curriculum immediately, while actual life skills have been excluded in favor of work skills ever since Prussian schooling became popular. Dumbing down the livestock.

https://www.quora.com/What-are-some-things-schools-should-teach-but-dont/answer/Harri-K-Hiltunen

[–] StitchInTime@piefed.social 14 points 3 days ago (1 children)

When I was in school, I was fortunate enough to have educators who strongly emphasized critical thinking. I don't think "AI" would be an issue if it were viewed as a research tool (taken with a grain of salt), backed by interactive activities that showed how to validate what you're getting.

The unfortunate part is that instructors' hands are more often than not tied, and the temptation on the student's part to just "finish the work" quickly is real. Then again, I had a few rather attractive girls flirt with me to copy my work, and they didn't exactly get far in life, so I have to wonder how much has truly changed.
