this post was submitted on 19 May 2025
949 points (98.2% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 2 years ago
top 50 comments
[–] McDropout@lemmy.world 10 points 2 hours ago

It’s funny how everyone is against using AI for students to get summaries of texts, pdfs etc which I totally get.

But during my time in med school, I never once got an exam paper back (ever!). An exam should be a test where I prove I have enough knowledge, but it should also show me where my weaknesses are so I can work on them. But no, we never got our papers back. And this extends beyond med school: exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle to jump over.

We criticize students a lot (rightfully so), but we don't criticize the system, where students only study because there is an exam, not because they are particularly interested in the topic at hand.

A lot of topics that I found interesting in medicine got dropped because I had to sit for other examinations.

[–] TheDoozer@lemmy.world 2 points 1 hour ago

A good use I've seen for AI (or particularly ChatGPT) is employee reviews and awards (military). A lot of my coworkers (and subordinates) have used it, and it's generally a good way to fluff up the wording for people who don't write fluffy things for a living (we work on helicopters, our writing is very technical, specific, and generally with a pre-established template).

I prefer reading the specifics and can fill out the fluff myself, but higher-ups tend to want "how it benefitted the service" and fitting in the terminology from the rubric.

I don't use it, because I'm good at writing that stuff. Not because it's my job, but because I've always been into writing. I don't expect every mechanic to be the same, though, so having things like ChatGPT can make an otherwise onerous (albeit necessary) task more palatable.

[–] eugenevdebs@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

My hot take on students graduating college using AI is this: if a subject can be passed using ChatGPT, then it's a trash subject. If a whole course can be passed using ChatGPT, then it's a trash course.

It's not that difficult to put together a course that cannot be completed using AI. All you need is to give a sh!t about the subject you're teaching. What if the teacher, instead of assignments, had everyone sit down at the end of the semester in a room, and had them put together the essay on the spot, based on what they've learned so far? No phones, no internet, just the paper, pencil, and you. Those using ChatGPT will never pass that course.

As damaging as AI can be, I think it also exposes a lot of systemic issues with education. Students feeling the need to complete assignments using AI could do so for a number of reasons:

  • students feel like the task is pointless busywork, in which case a) they are correct, or b) the teacher did not properly explain the task's benefit to them.

  • students just aren't interested in learning, either because a) the subject is pointless filler (I've been there before), or b) the course is badly designed, to the point where even a rote algorithm can complete it, or c) said students shouldn't be in college in the first place.

Higher education should be a place of learning for those who want to further their knowledge, profession, and so on. However, right now college is treated as this mandatory rite of passage to the world of work for most people. It doesn't matter how meaningless the course, or how little you've actually learned, for many people having a degree is absolutely necessary to find a job. I think that's bullcrap.

If you don't want students graduating with ChatGPT, then design your courses properly, cut the filler from the curriculum, and make sure only those are enrolled who are actually interested in what is being taught.

[–] Jimmycakes@lemmy.world 1 points 1 hour ago

Who's gonna grade that essay? The professor has vacation planned.

galileosballs is the last screw holding the house together i swear

[–] TankovayaDiviziya@lemmy.world 12 points 4 hours ago

This reasoning applies to everything. For example, the tariff rates the Trump admin imposed on each country were very likely based on responses from ChatGPT.

[–] MystikIncarnate@lemmy.ca 14 points 5 hours ago (2 children)

I've said it before and I'll say it again. The only thing AI can, or should be used for in the current era, is templating... I suppose things that don't require truth or accuracy are fine too, but yeah.

You can build the framework of an article, report, story, publication, assignment, etc using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be handled as false information unless otherwise proven, and most of the work will need to be rewritten. It's there to provide, more or less, a structure to start from and you do the rest.

When I did essays and the like in school, I didn't have AI to lean on, and the hardest part of doing any essay was.... How the fuck do I start this thing? I knew what I wanted to say, I knew how I wanted to say it, but the initial declarations and wording to "break the ice" so-to-speak, always gave me issues.

It's shit like that where AI can help.

Take everything AI gives you with a gigantic asterisk, that any/all information is liable to be false. Do your own research.

Given how fast things are moving in terms of knowledge and developments in science, technology, medicine, etc. that are transforming how we work, now, more than ever before, what you know is less important than what you can figure out. That's what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you'll be able to adapt to almost any job that you can comprehend from a high level; it's just a matter of time, patience, research and learning.

With that being said, some occupations have little to no margin for error, which is where my thought process inverts. Train long and hard before you start doing the job.... Stuff like doctors, who can literally kill patients if they don't know what they don't know.... Or nuclear power plant techs... Stuff like that.

[–] Doctor_Satan@lemm.ee 1 points 1 hour ago

There's an application that I think LLMs would be great for, where accuracy doesn't matter: Video games. Take a game like Cyberpunk 2077, and have all the NPCs speech and interactions run on various fine-tuned LLMs, with different LoRA-based restrictions depending on character type. Like random gang members would have a lot of latitude to talk shit, start fights, commit low-level crimes, etc, without getting repetitive. But for more major characters like Judy, the model would be a little more strictly controlled. She would know to go in a certain direction story-wise, but the variables to get from A to B are much more open.

This would eliminate the very limited scripted conversation options which don't seem to have much effect on the story. It could also give NPCs their own motivations with actual goals, and they could even keep dynamically creating side quests and mini-missions for you. It would make the city seem a lot more "alive", rather than people just milling about aimlessly, with bad guys spawning in preprogrammed places at predictable times. It would offer nearly infinite replayability.

I know nothing about programming or game production, but I feel like this would be a legit use of AI. Though I'm sure it would take massive amounts of computing power, just based on my limited knowledge of how LLMs work.

[–] GoofSchmoofer@lemmy.world 22 points 5 hours ago* (last edited 5 hours ago) (4 children)

When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

I think that this is a big part of education and learning though. When you have to stare at a blank screen (or paper) and wonder "How the fuck do I start?" Having to brainstorm, write shit down 50 times, edit, delete, start over. I think that process alone makes you appreciate good writing and how difficult it can be.

My opinion is that when you skip that step you skip a big part of the creative process.

[–] Retrograde@lemmy.world 6 points 4 hours ago* (last edited 4 hours ago)

Arguably the biggest part of the creative process, even; it's the foundational structure.

[–] j4k3@lemmy.world -1 points 2 hours ago (1 children)

Was the best part of agrarian subsistence turning the earth by hand? Should we return to it? A person learns more and is more productive if they talk out an issue. Having someone else to bounce ideas off of is a good thing. Asking someone to do it for you has always been a thing. Individualized learning has long been the secret of academic success for the children of the super rich: just pay a professor to tutor the individual child. AI is the democratization of this advantage. A person can explain what they do not know and get a direct answer. Even with a small model that I know is wrong, forming the questions in conversation often leads me to correct answers and to what I do not know. It is far faster and more efficient than anything I ever experienced elsewhere in life.

It takes time to learn how to use the tool. I'm sure there were lots of people making stupid patterns with a plow at first too when it was new.

The creative process is about the results it produces, not how long one spent in frustration. Gatekeeping because of the time you wasted is Luddism or plain sadism.

Use open weights models running on enthusiast level hardware you control. Inference providers are junk and the source of most problems with ignorant people from both sides of the issue. Use llama.cpp and a 70B or larger quantized model with emacs and gptel. Then you are free as in a citizen in a democracy with autonomy.
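For anyone curious what that setup looks like in practice, a minimal sketch follows. The repository URL and build steps are llama.cpp's standard ones; the model filename is purely illustrative (substitute whatever quantized GGUF fits your hardware):

```shell
# Build llama.cpp from source (needs git, cmake, and a C++ toolchain)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Serve a local quantized model over an OpenAI-compatible HTTP API.
# The model path is a placeholder: use any 70B-class GGUF quant
# your hardware can hold.
./build/bin/llama-server \
    -m ~/models/llama-3.3-70b-instruct-q4_k_m.gguf \
    -c 8192 \
    --port 8080
```

gptel in emacs can then be pointed at the local server (http://localhost:8080) as an OpenAI-compatible backend, so no inference provider is involved.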

[–] GoofSchmoofer@lemmy.world 2 points 1 hour ago (1 children)

You're right - giving people the option to bounce questions off others or AI can be helpful. But I don't think that is the same as asking someone (or some thing) to do the work for you and then you edit it.

The creative process is about the results it produces, not how long one spent in frustration

This I disagree on. A process is not a result. You get a result from the process and sometimes it's what you want and often times it isn't what you want. This is especially true for beginners. And to get the results you want from a process you have to work through all parts of it including the frustrating parts. Actually getting through the frustrating parts makes you a better creator and I would argue makes the final result more satisfying because you worked hard to get it right.

[–] j4k3@lemmy.world 1 points 44 minutes ago

I think this is where fundamental differences in functional thought come into play. I abstract on a level where I can dwell on a subject for a few weeks and be productive. Beyond that, I get bored and lose interest to monotony. I need to spend that time unfettered. Primary school was a poor fit for me because I have excellent comprehension and self awareness. I tend to get hung up on very specific things where I need them answered right away. I need that connected flow to be unbroken as much as possible. So I find it deeply frustrating to conglomerate information. I don't memorize anything. I need to have intuitively grounded information. I strongly believe that any subject that a person is unable to explain intuitively means they do not understand what they are talking about. Information without this intuitive connection has no long term value because it cannot be retained outside of constant use unless a person has total recall memory. I do not have such a gift so I do not care to pretend otherwise.

If I am in a class where I do not make the needed intuitive connection, no new information is useful to me. Having any entity that can get me past that challenge immediately is a priceless advantage to someone like me. I find no value in repetition. I only find value in application and across broad connecting spaces. I know that many people are very different in this respect, but also that I am not special or unique and my life and learning experience are shared by a significant and relevant part of the population. It is okay to be different. Every stereotype and simplification hurts someone. The only way to avoid hurting people as much as possible is to be liberal and withhold judgment in all possible cases. AI is a tool. Like any tool, it can be beneficial when used well and harmful in other contexts.

For instance, a base model is pretty bad at telling me what to do in emacs, but it is good at using a database to parse the help documentation to show me relevant information. Or like, when I am lonely from involuntary social isolation due to physical disability, I can spin up someone to talk to. When I am frustrated by interactions with other people, I can simulate them or discuss how I feel in depth. When I have some random idea or question I can talk about it right away. With emacs and org mode, I can turn that into markdown-like notes with gptel. It can create detailed plans and hierarchical notes, and I can prompt within the tree to build out ideas and create and link documents. I don't have to keep track of it all; I just ask the agent questions for it to pull up the relevant buffers. As they say, org mode in emacs is like a second brain for planning and notes. With gptel, it becomes far more accessible to break through complexity, both within Linux/emacs and within whatever subject one is interested in pursuing.

There is much to be said about having an entity that can understand what the individual's needs are and address them directly. I respect it if you need a little bit of masochism to stay engaged. I like my share as a hardcore cyclist; no judgement. However, that learning experience is not universal to everyone. I quickly lose interest and motivation in that circumstance. I expect to understand the subject as it is explained by someone that truly understands what they are talking about. I'm really good at that kind of focused comprehension and very sensitive to poor-quality educators that do not know the information they are paid to share.

[–] Dagwood222@lemm.ee 3 points 4 hours ago
[–] MystikIncarnate@lemmy.ca 2 points 4 hours ago

That's a fair argument. I don't refute it.

I only wish I had any coaching when it was my turn, to help me through that. I figured it out eventually, but still. I wish.

[–] detun3d@lemm.ee 5 points 4 hours ago

Yes! Preach!

[–] SoftestSapphic@lemmy.world 56 points 7 hours ago

The moment we change school to be about learning, instead of making it the requirement for employment, we will see students prioritize learning over "just getting through it to get the degree."

[–] Jankatarch@lemmy.world 30 points 7 hours ago (1 children)

Only topic I am close-minded and strict about.

If you need to cheat as a highschooler or younger, there is something else going wrong; focus on that.

And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

[–] sneekee_snek_17@lemmy.world 23 points 7 hours ago (2 children)

This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

There isn't enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

I know being a non-traditional student massively affects my perspective, but like, if you don't want to learn about the precise thing your major is about...... WHY ARE YOU HERE
