this post was submitted on 19 May 2025
737 points (98.3% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

[–] detun3d@lemm.ee 3 points 30 minutes ago

Yes! Preach!

[–] MystikIncarnate@lemmy.ca 11 points 1 hour ago (1 children)

I've said it before and I'll say it again: the only thing AI can, or should, be used for in the current era is templating... I suppose things that don't require truth or accuracy are fine too, but yeah.

You can build the framework of an article, report, story, publication, assignment, etc. using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be treated as false unless proven otherwise, and most of the work will need to be rewritten. It's there to provide, more or less, a structure to start from, and you do the rest.

When I did essays and the like in school, I didn't have AI to lean on, and the hardest part of doing any essay was... how the fuck do I start this thing? I knew what I wanted to say and how I wanted to say it, but the initial declarations and wording to "break the ice," so to speak, always gave me issues.

It's shit like that where AI can help.

Take everything AI gives you with a gigantic asterisk: any and all of it is liable to be false. Do your own research.

Given how fast knowledge and developments in science, technology, medicine, etc. are transforming how we work, what you know is now less important than what you can figure out. That's what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you'll be able to adapt to almost any job you can comprehend from a high level; it's just a matter of time, patience, research, and learning.

With that being said, some occupations have little to no margin for error, which is where my thought process inverts: train long and hard before you start doing the job. Stuff like doctors, who can literally kill patients if they don't know what they don't know... or nuclear power plant techs... stuff like that.

[–] GoofSchmoofer@lemmy.world 13 points 1 hour ago* (last edited 1 hour ago) (3 children)

When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

I think that this is a big part of education and learning, though. You have to stare at a blank screen (or paper) and wonder, "How the fuck do I start?" You have to brainstorm, write shit down 50 times, edit, delete, start over. I think that process alone makes you appreciate good writing and how difficult it can be.

My opinion is that when you skip that step you skip a big part of the creative process.

[–] Retrograde@lemmy.world 2 points 37 minutes ago* (last edited 36 minutes ago)

Arguably the biggest part of the creative process, even; it's the foundational structure.

[–] MystikIncarnate@lemmy.ca 1 points 47 minutes ago

That's a fair argument. I don't refute it.

I only wish I'd had some coaching to help me through that when it was my turn. I figured it out eventually, but still. I wish.

[–] Dagwood222@lemm.ee 1 points 1 hour ago
[–] TankovayaDiviziya@lemmy.world 3 points 1 hour ago

This reasoning applies to everything. For example, the tariff rates the Trump admin imposed on various countries and places are very likely based on a response from ChatGPT.

[–] JamesBoeing737MAX@sopuli.xyz 1 points 45 minutes ago

Well, this just looks like criteria for a financially successful person.

[–] SoftestSapphic@lemmy.world 42 points 4 hours ago

The moment we change school to be about learning instead of making it the requirement for employment, we will see students prioritize learning over "just getting through it to get the degree."

[–] Jankatarch@lemmy.world 17 points 3 hours ago (1 children)

This is the only topic I'm closed-minded and strict about.

If you need to cheat as a high schooler or younger, there is something else going wrong; focus on that.

And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

[–] sneekee_snek_17@lemmy.world 18 points 3 hours ago (1 children)

This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

There isn't enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

I know being a non-traditional student massively affects my perspective, but like, if you don't want to learn about the precise thing your major is about...... WHY ARE YOU HERE

[–] ByteJunk@lemmy.world -1 points 1 hour ago (1 children)

I mean, are you sure?

Studies in the GSMNP have looked at:

  • Mercury levels in fish: Especially in high-elevation streams, where even remote waters can show elevated levels of mercury in predatory fish due to biomagnification.

  • Benthic macroinvertebrates and amphibians: As indicators of mercury in aquatic food webs.

  • Forest soils and leaf litter: As long-term mercury sinks that can slowly release mercury into waterways.

If GPT and I were being graded on the subject, it wouldn't be the machine flunking...

[–] sneekee_snek_17@lemmy.world 2 points 49 minutes ago

I mean, it's a matter of perspective, I guess.

I did a final assignment that was a research proposal; mine was an assessment of various methods of increasing periphyton biomass (clearing tree cover over rivers and introducing fertilizers to the water) in order to dilute mercury bioaccumulation in top river predators like trout and other fish people eat.

There's a lot of tangentially related research, but not a ton done on the river/riparian food webs in the GSMNP specifically and possible mitigation strategies for mercury bioaccumulation.

OBVIOUSLY my proposal isn't realistic. No one on earth is gonna be like "yeah sure, go ahead and chop down all the trees over this river and dump chemicals in that one, on the off chance it allows jimbob to give trout to his pregnant wife all year round"

[–] Pacattack57@lemmy.world -3 points 1 hour ago (2 children)

This is a problem with integrity, not AI. If I have AI write me a paper and then proofread it to make sure the information is accurate and properly sourced, how is that wrong?

[–] Lv_InSaNe_vL@lemmy.world 10 points 1 hour ago (1 children)

Because education isn't about writing an essay. In fact, the actual information you learn is secondary.

Education, especially higher education, is about learning how to think, how to do research, and how to formulate all of that into a cohesive argument. Using AI deprives you of all of that, so you are missing the most important part of your education

[–] Pacattack57@lemmy.world -1 points 9 minutes ago (1 children)

Says who? I understand that you value that, and I'm sure there are many careers where it actually matters, but this is the entire problem with our current education system. The job market is vast, and for every job where critical thinking is important, there are ten where it isn't. You are also falling into the trap of thinking school is the only place you can learn it. Education is more than "follow X steps and get smart." There are plenty of ways to learn something, and not everyone learns the same way.

Maybe use some critical thinking and figure out a way to evaluate someone’s knowledge without having them write an essay that is easily faked by using AI?

AI isn’t going anywhere and the sooner we embrace it, the sooner we can figure out a way to get around being crippled by it.

[–] silasmariner@programming.dev 1 points 2 minutes ago

Name just one job where critical thinking isn't important

[–] jjjalljs@ttrpg.network 4 points 51 minutes ago (1 children)

Imagine you go to a gym. There's weights you can lift. Instead of lifting them, you use a gas powered machine to pick them up while you sit on the couch with your phone. Sometimes the machine drops weights, or picks up the wrong thing. But you went to the gym and lifted weights, right? They were on the ground, and then they weren't. Requirements met?

[–] Pacattack57@lemmy.world 0 points 17 minutes ago (1 children)

That would be a good analogy if going to school was anything like going to the gym. You sound like one of those old teachers that said “You won’t have a calculator in your pocket the rest of your life.”

[–] lightnsfw@reddthat.com 2 points 5 minutes ago

School is like going to the gym for your brain. In the same way that using a calculator for everything makes you worse at math, using ChatGPT to read and write your assignments makes you worse at those things than you would be if you did them yourself.

[–] j4k3@lemmy.world -4 points 1 hour ago (1 children)

This is as insane as all of my school teachers insisting that I would not always carry a calculator. In the real world, this is insecure Luddism, and stupidity. No real employer is going to stop you from using AI, or a calculator for that matter. These are tools.

Your calculator has a limited register size for computations. It truncates everything in real-world math, so π is always wrong, as are all of the other cosmological constants. All calculators fail at the real world in an absolute sense, but so do you: you are limited by a time constraint that prevents you from calculating π to extended precision. You are a flawed machine too; we all are. My mom is pretty good at spelling but terrible at maps. My dad is good at taking action and doing some kind of task, but terrible at planning and abstract thinking.

AI is great for answering questions about information quickly. It is really good at collaborative writing where I heavily edit the output for the first ~1k tokens or write it myself, then limit the model's output to one sentence and add or alter keywords. Within around 4k-5k tokens, I am only writing a few key phrases, and the model is absolutely writing in my words and in my voice, far faster than I can type out my thoughts. Of course, this is me running models offline on my hardware using open source tools. I also ban several keyword tokens, which takes away any patterns one might recognize as AI generated.

No, I never use it here unless I have a good reason, and I will always tell you so, because we are digital neighbors and I care about you. I mean no disrespect about your biases, but I do care when people are wrong.
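A minimal sketch of what that constrained-writing loop can look like, assuming a locally hosted model behind an OpenAI-compatible completions endpoint (such as the one llama.cpp's server exposes); the URL, model name, and token IDs are placeholders, and whether `logit_bias` is honored depends on the server:

```python
# Hypothetical sketch: one sentence at a time from a local model, with
# specific tokens banned so recognizably "AI" phrasings never appear.
# Endpoint, model name, and token IDs below are made up for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

# Map of token ID -> bias; -100 effectively bans a token. Real IDs
# depend on the model's tokenizer.
banned_tokens = {"1234": -100, "5678": -100}

draft = "..."  # the first ~1k tokens, written or heavily edited by hand

response = client.completions.create(
    model="local-model",
    prompt=draft,
    max_tokens=40,             # hold the model to roughly one sentence
    logit_bias=banned_tokens,  # suppress tokens that read as AI-generated
    stop=["."],                # cut off at the end of the sentence
)
draft += response.choices[0].text  # edit, adjust keywords, and repeat
```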

If someone turns in math work specifically about π precision that is wrong because they do not know the limitations of their calculator, they should absolutely fail. If I did not teach them that π is truncated in all computers, I have failed. AI exists; get over it. This dichotomous thinking and tribalism is insanely stupid, barbarous primitivism. If you think AI is the appropriate tool and turn in work that is wrong, either I have failed to explain that AI is only correct around 80% of the time and that this is not acceptable, or the student has displayed their irrational logic skills. If I use the tool to halve my time spent researching, use it for individualized learning, and halve the time I spend writing, while turning in excellent work and displaying advanced understanding, I am demonstrably top of my class. It is a tool, and only a tool. Those that react with some dichotomous repulsion to AI should be purged for exactly the same reason as anyone that uses the tool poorly or to cheat. Both are equally incompetent.
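On the π point, the truncation is easy to demonstrate with a standard 64-bit double (plain Python, no assumptions beyond the standard library):

```python
# A 64-bit double carries roughly 16 significant decimal digits, so the
# "pi" it stores is not pi: everything past that is silently lost.
import math
from decimal import Decimal

print(math.pi)           # 3.141592653589793 (all a double can show)
print(Decimal(math.pi))  # the exact value actually stored:
                         # 3.14159265358979311599796346854...
# Real pi begins 3.14159265358979323846..., so the stored value
# diverges from the true constant after the 16th significant digit.
```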

[–] fafferlicious@lemmy.world 5 points 51 minutes ago

It's not Luddism to recognize that foundational knowledge is essential to effectively utilizing tools in every industry, and jumping ahead to just using the tool is not good for the individual or the group.

Your example is iconic. Do you think the average middle school or college student using AI understands anything about self-hosting, token limits, or optimizing things by banning keywords? Let alone how prone models are to just making shit up - because they were designed to! I STILL get enterprise ChatGPT referencing scientific papers that don't exist. I wonder how many students are paying for premium models. Probably only the rich ones.

[–] conditional_soup@lemm.ee 54 points 6 hours ago (14 children)

Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the Internet can also just be a really great educational resource in general. I think that using LLMs in non load-bearing "trust but verify" type roles (study buddies, brainstorming, very high level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google, I can just kind of chat with the LLM and potentially refine a narrower, more google-able subject.

[–] takeda@lemm.ee 80 points 6 hours ago (8 children)

trust but verify

The thing is that an LLM is a professional bullshitter. It is actually trained to produce text that can fool an ordinary person into thinking it was produced by a human. The facts come second.

[–] conditional_soup@lemm.ee 27 points 6 hours ago (1 children)

Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don't even know where to start with how to attack it, the LLM can sometimes save me hours of googling: I just describe my problem to it in a chat format, describe what I want to do, and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go and verify and read the docs myself instead of just blindly copying and pasting.

[–] lefaucet@slrpnk.net 19 points 4 hours ago* (last edited 4 hours ago) (1 children)

That last step of verifying is often being skipped and is getting HARDER to do

The hallucinations spread like wildfire on the internet. It doesn't matter what's true, only what gets clicks, and that encourages more apparent "citations." An even worse fertilizer of false citations is the desire of power-hungry bastards to push false narratives.

AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school

[–] medgremlin@midwest.social 1 points 1 hour ago

This is why I just use Google to look for the NIH article I want, or I go straight to DynaMed or UpToDate. (The NIH does have a search function, but it's terrible, meaning it's just easier to use Google to find the link to the article I actually want.)

[–] Impleader@lemmy.world 14 points 5 hours ago (1 children)

I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”

I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).

[–] sneekee_snek_17@lemmy.world 4 points 3 hours ago (3 children)

The only substantial uses I have for it are occasional blurbs of R code for charts, rewording a sentence, or finding a precise word when I can't think of it.

[–] TheTechnician27@lemmy.world 13 points 5 hours ago* (last edited 5 hours ago) (2 children)

Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.

For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find 64° 8′ 55.03″ N, 21° 56′ 8.99″ W on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.
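For reference, the arithmetic behind that manual lookup is simple enough to sketch; the helper name here is mine, not from any particular library:

```python
# Degrees/minutes/seconds -> the decimal degrees most map software expects:
# decimal = degrees + minutes/60 + seconds/3600, negated for S or W.
def dms_to_decimal(degrees: int, minutes: int, seconds: float, hemisphere: str) -> float:
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

lat = dms_to_decimal(64, 8, 55.03, "N")   # ~64.148619
lon = dms_to_decimal(21, 56, 8.99, "W")   # ~-21.935831
print(f"{lat:.6f}, {lon:.6f}")            # paste into OpenStreetMap to check
```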

Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.

There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.

[–] TowardsTheFuture@lemmy.zip 16 points 5 hours ago

And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.

[–] disguy_ovahea@lemmy.world 20 points 6 hours ago (1 children)

Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.

[–] kibiz0r@midwest.social 3 points 2 hours ago

While eroding the body of actual practitioners that are necessary to train the thing properly in the first place.

It’s not simply that the bots will take your job. It that was all, I wouldn’t really see that as a problem with AI so much as a problem with using employment to allocate life-sustaining resources.

But if we’re willingly training ourselves to remix old solutions to old problems instead of learning the reasoning behind those solutions, we’ll have a hard time making big, non-incremental changes to form new solutions for new problems.

It’s a really bad strategy for a generation that absolutely must solve climate change or perish.
