this post was submitted on 01 Jul 2025
49 points (88.9% liked)

No Stupid Questions

41967 readers
1620 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are joke or trolling questions, memes, song lyrics as title, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you are provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Most bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago
MODERATORS
 

Hi, I am a computer nerd. I also took a computer programming class and got the highest score in the class, but I never followed up with advanced classes. Recently, I've thought of different ideas for software I'd like to try to create. I've heard about vibe coding. I know real programmers make fun of it, but I also have heard so much about it and people using it and paying for it that I have a hard time believing it writes garbage code all the time.

However, whenever I am trying to do things in Linux and don't know how and ask an LLM, it gets it wrong like 85% of the time. Sometimes it helps, but a lot of times it's fucking stupid and just leads me down a rabbit hole of shit that won't work. Is all vibe coding actually like that too, or does some of it actually work?

For example, I know how to set up a server, ssh in, and get some stuff running. I have an idea for an app, and since everyone uses smartphones (unfortunately), I'd probably try to code something for a smartphone. But would it be next to impossible for someone like me to learn? I like nerdy stuff, but I am not experienced at all in coding.

I also am not sure I have the dedication to do hours and hours of code, despite possible autism, unless I were highly fucked up, possibly on huge amounts of caffeine or microdosing something. But like, it doesn't seem impossible.

Is this a rabbit hole worth falling into? Do most apps just fail all the time? Is making an app nowadays like trying to win the lotto?

It would be cool to hear from real app developers. I am getting laid off, my expenses are low because I barely made anything at my job, I'll be getting unemployment, and I am hoping I can get a job working 20-30 hours a week and pay for my living expenses, which are pretty low.

Is this a stupid idea? I did well in school, but I'm not sure that means anything. Also, when I was in the programming class, the TA seemed much, much smarter at programming and could intuitively solve coding problems much faster due to likely a higher IQ. I'm honestly not sure my IQ is high enough to code. My IQ is probably around 112, but I also sometimes did better than everyone on tests for some reason, maybe because I'm a nerd. I'm not sure I will have the insight to tackle hard coding problems, but I'm not sure if those actually occur in real coding.

top 35 comments
[–] pinball_wizard@lemmy.zip 5 points 20 hours ago* (last edited 20 hours ago)

Is all vibe coding actually like that too or does some of it actually work?

It's all like that.

How bad that is - for you - depends on your patience and your learning style.

When I use it, my experience usually lets me recognize the mistakes and correct them quickly. So it's just a lazy convenience. Most of the time.

I've had it make subtle mistakes that cost me significant amounts of time to clean up after letting the vibe code run for a few minutes.

I'm aware that particular mistake cost me more time than vibe coding has ever saved me.

I don't mind, because my employer is excited about AI right now, and I get paid for my time, and I don't work unpaid overtime.

So - to your implied questions:

Is AI bad at coding?

Yes. It will get better. But today, it is worse than most people think. Obvious problems are easily fixed. Subtle problems are being released daily all over the Internet, combining to cause headaches later.

Should you try it, anyway?

Of course! You'll learn something and it might do a good enough job for what you need. If you stick with it, you'll learn enough to do what you need.

Is vibe coding a better path forward than learning a programming language?

Absolutely not. If you need to succeed, and had to pick one, learn to code.

But you don't have to pick just one approach. And it's probably impossible to vibe code for long without learning to actually code. Vibe coding is a path toward aware, knowledgeable coding. It's not the only path. It's not the best path. But it's still a path. And you can pursue more than one path.

So I say, Dive in! You'll be complaining with the rest of us, soon! Maybe together we will make it a bit better.

[–] GreenKnight23@lemmy.world 5 points 1 day ago

I don't even have to read everything you wrote past the question.

no. no it does not.

it doesn't work for many reasons. most of all it doesn't work when you need to improve or extend the code. handing it over to a new developer also doesn't work.

If I ever see another developer vibe code IRL I will relentlessly mock them until HR is forced to get involved.

[–] daniskarma@lemmy.dbzer0.com 3 points 21 hours ago* (last edited 21 hours ago)

Full complex app, forget about it. It's not going to work.

Concrete functions or small parts of a program (or maybe a very small project)? 50/50 chance, depending on the complexity.

For instance, I benchmarked several LLMs last month, asking them to build a Wordle clone for the terminal. Some of them were able to spit out the full program in a completely working state.

For anything larger or more complex I haven't had any luck, and LLMs are mostly useful for references and ideas.

[–] supakaity@piefed.blahaj.zone 3 points 21 hours ago* (last edited 21 hours ago)

You know those movies where the guy gets 3 wishes from a Genie who takes malicious delight in giving them exactly what they asked for even when they're super careful like "I want a million dollars, and no I don't want it stolen from a bank, or anywhere that someone's going to come after me for having it and oh, it needs to be actual real US dollars in circulation today, and without any tax obligations, the IRS can't come after me. The SEC can't come after me." And when they think finally that they've specified everything they possibly can, the Genie summons the money and a big gust of wind blows it all out the window and down the street... Then they need to use their second wish to summon it all back in and shut the window. But then the genie summons it back into the fireplace and it all catches fire, so they have to use their third wish to bring it all out of the fireplace, so the Genie brings it all out, but it's just ashes...

Well, okay, there's probably no movie like that, but that's what programming with AI is like.

"Vibe coding" purists define it as "if you know how it works, then it isn't vibe coded". And those types of coders kinda keep going at it, more and more refined, until they eventually get some spaghetti code that kinda does what they wanted it to do and heck, it's close enough, ship it! Then they end up being exploited by some random internet hacker.

Most of the companies that use "agentic coding" are using it to perform rapid prototyping or templating, to perform repetitive tasks quickly, or generally using it like a really dumb junior programmer whose code the engineer then reviews and tests (often again using AI tools), followed by a whole heap of fixing up to make sure it does what it says on the box.

As stated in other comments, this kind of AI tooling can easily cost many thousands of dollars a month (in addition to the engineer(s) salary/salaries), but the order-of-magnitude productivity increase for that engineer makes it worthwhile. But you need that experienced engineer to make it all work.

I'm not aware of any companies that are solely using coding agents in isolation to replace engineers completely. I'm sure it'll happen one day and I'll probably be forced into retirement at that point.

[–] 18107@aussie.zone 5 points 1 day ago

LLMs are great at language problems. If you're learning the syntax of a new programming language or you've forgotten the syntax for a specific feature, LLMs will give you exactly what you want.

I frequently use AI/LLMs when switching languages to quickly get me back up to speed. They're also adequate at giving you a starting point, or a basic understanding of a library or feature.

The major downfall is if you ask for a solution to a problem. Chances are, it will give you a solution. Often it won't work at all.
The real problem is when it does work.

I was looking for a datatype that could act as a cache (forget the oldest item when adding a new one). I got a beautifully written class with 2 fields and 3 methods.
After I poked at the AI for a while, it admitted that half the code wasn't actually needed. After much more prodding, it finally informed me that there was actually an existing datatype (LinkedHashMap) that would do exactly what I wanted.
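For anyone curious how small the built-in answer really is: a minimal Java sketch of a fixed-size, evict-oldest cache built on LinkedHashMap's removeEldestEntry hook (the class and field names here are mine, not the code from the thread):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A cache that forgets its oldest entry whenever adding a new one
// would exceed the capacity. LinkedHashMap keeps insertion order and
// calls removeEldestEntry after every put.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the oldest entry.
        return size() > maxEntries;
    }
}
```

With a capacity of 2, putting "a", "b", then "c" evicts "a" automatically, which is the hand-rolled class-with-fields-and-methods behaviour in about ten lines.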

Be aware that AI/LLMs will rarely give you the best solution, and often give you really bad solutions even when an elegant one exists. Use them to learn if you want, but don't trust them.

[–] AdamBomb@lemmy.sdf.org 7 points 1 day ago

My bro, your TA wasn’t better at coding because of a “higher IQ”. They were better because they put in the hours to build the instincts and techniques that characterize an experienced developer. As for LLM usage, my advice is to be aware of what they are and what they aren’t. They are a randomized word-prediction engine trained on, among other things, all the publicly available code on the internet. This means they’ll be pretty good at solving problems that appeared in their training set. You could use one to get things set up and maybe get something partway done, depending on how novel your idea is. An LLM cannot think or solve novel problems, and they also generally will confidently fake an answer rather than say they don’t know something, because truly, they don’t know anything. To actually make it to the finish line, you’ll almost certainly need to know how to finish it yourself, or learn how to as you go.

[–] TempermentalAnomaly@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

Here's Simon Willison's write up of how he uses AI. He's been using it for a couple of years and distilled his methods in this article. He also discussed when and how he vibe codes.

[–] Psythik@lemmy.world 4 points 1 day ago* (last edited 5 hours ago)

I vibe coded an AutoHotKey script to automate part of my job. It works.

Edit: FWIW, you have to pressure it quite a bit to get what you want. One or two prompts usually won't produce working code on the first attempt. Also, you have to understand at least the basics of programming so that you know the right words to enter into the prompt to get the results you desire.

[–] LovableSidekick@lemmy.world 8 points 1 day ago* (last edited 1 day ago) (1 children)

The exact definition of vibe coding varies with who you talk to. A software dev friend of mine uses ChatGPT every day in his work and claims it saves him a ton of time. He mostly does DB work and Node apps right now, and I'm pretty sure the way he uses ChatGPT falls under the heading of vibe coding: using AI to generate code and then going through the code and tweaking it, saving the developer a lot of typing and grunt work.

[–] TranquilTurbulence@lemmy.zip 3 points 1 day ago* (last edited 22 hours ago)

I prefer to think of vibe coding like the relationship some famous artists had with apprentices and assistants. The master artist tells the apprentice to take care of the simple and boring stuff, like backgrounds and less significant figures. Meanwhile, the master artist would paint all the parts that require actual skill and talent. Raphael and Rembrandt would be good examples of that sort of workflow.

[–] listless@lemmy.cringecollective.io 29 points 1 day ago (1 children)

if you know how to code, you can vibe code, because you can immediately spot, and be confident enough not to use, the obvious mistakes, oversights, security gaps, and missed edge cases the LLM generated.

if you don't know how to code, you can't vibe code, because you think the LLM is smarter than you and you trust it.

Imagine saying "I'm a mathematician" because you have a scientific calculator. If you don't know the difference between RAD and DEG and you just start doing calculations without understanding the unit circle, then building a bridge based on your math, you're gonna have a bad time.
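The RAD/DEG trap from the analogy above is easy to reproduce in real code. A small Java sketch (my illustration, not the commenter's): Math.sin, like most math libraries, assumes radians, so feeding it degrees produces a confident-looking but wrong number.

```java
public class RadVsDeg {
    public static void main(String[] args) {
        // Math.sin interprets its argument as radians. Passing 30 and
        // expecting sin(30°) = 0.5 is the calculator-mode mistake:
        System.out.println(Math.sin(30));                 // ≈ -0.988, not 0.5
        // Convert degrees to radians first to get the intended value:
        System.out.println(Math.sin(Math.toRadians(30))); // ≈ 0.5
    }
}
```

Both calls return a perfectly plausible double; nothing warns you that the first one answered a different question, which is much like trusting LLM output you can't check.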

[–] Lembot_0004@discuss.online 11 points 1 day ago

Nah, building will be fun. Bad time will be for those who will use that bridge:)

[–] xavier666@lemmy.umucat.day 52 points 2 days ago* (last edited 2 days ago) (2 children)

Think of LLMs as the person who gets good marks in exams because they memorized the entire textbook.

For small, quick problems you can rely on them ("Hey, what's the syntax for using rsync between two remote servers?") but the moment the problem is slightly complicated, they will fail because they don't actually understand what they have learnt. If the answer is not present in the original textbook, they fail.

Now, if you are aware of the source material, or if you are decently proficient in coding, you can check their incorrect response, correct it, and make it your own. Instead of creating the solution from scratch, LLMs can give you a push in the right direction. However, DON'T consider their output the gospel truth. LLMs can augment good coders, but they can lead poor coders astray.

This is not something specific to LLMs; if you don't know how to use Stack Overflow, you can pick the wrong solution from the list of given solutions. You need to be technically proficient to even understand which one of the solutions is correct for your use case. Having a strong base will help you in the long run.

[–] lepinkainen@lemmy.world 6 points 1 day ago (1 children)

The main problem with LLMs is that they’re the person who memorised the textbook AND never admits they don’t know something.

No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”, but will rather spout 100% confident bullshit.

The “thinking” models are a bit better, but still have the same issue.

[–] xavier666@lemmy.umucat.day 3 points 23 hours ago (1 children)

No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”

There is a reason for this. LLMs are "rewarded" (just an internal scoring mechanism) for generating an answer. No matter what you say, they will try to maximize the reward value by generating an answer, hallucinating if needed. There is no reward mechanism for saying "I don't know" to a difficult question.

I am not into research on LLMs, but I think this is being worked on.

[–] TranquilTurbulence@lemmy.zip 2 points 21 hours ago

Something very similar is also true of humans. People just love to have answers, even if they aren't entirely reliable or even true. Having just some answer seems to be more appealing than not having any answer at all. Why do you think people had weird beliefs about stars, rainbows, thunder, etc.?

The way LLMs hallucinate is also a little weird. If you ask about quantum physics things, they actually can tell you that modern science doesn't have a conclusive answer to your question. I guess that's because other people have written articles about the very same question, and have pointed out that it's still a topic of ongoing debate.

If you ask about robot waitresses used in a particular restaurant, it will happily give you the wrong answer. Obviously, there's not much data about that restaurant, let alone any academic debate, so I guess that's also reflected in the answer.

[–] josefo@leminal.space 1 points 19 hours ago

Great summary. I would add: don't use LLMs to learn something new. As OP mentioned, when you know your stuff, you are aware of how much it bullshits. What happens when you don't know? You eat all the bullshit because it sounds good. Or you end up with a vibed codebase you can't fully understand because you didn't do the reasoning to produce it. It's like driving a car with a shitty copilot that sometimes hallucinates roads; if you don't know where you are supposed to be, wherever that copilot takes you will look good. You lack the context to judge the results or advice.

I basically use it nowadays as a semantic search engine for documentation. Talking with documentation is the coolest. If the response doesn't come with a doc link, it's probably not worth it. Make it point to the human input, make it help you find things you don't know the name of, but never trust the output without judging it. In my experience, making it generate code that you end up correcting is a heavier cognitive load than writing it yourself from scratch.

[–] EmilyIsTrans@lemmy.blahaj.zone 20 points 2 days ago

In my experience, an LLM can write small, basic scripts or equally small and isolated bits of logic. It can also do some basic boilerplate work and write nearly functional unit tests. Anything else and it's hopeless.

[–] ComfortableRaspberry@feddit.org 18 points 2 days ago* (last edited 2 days ago) (1 children)

I use it as a friendlier version of Stack Overflow. I think you should generally know / understand what you are doing, because you have to take everything it says with a grain of salt. It's important to understand that these assistants can't admit that they don't know something and come up with randomly generated bullshit instead, so you can't fully trust their answers.

So you still need to understand the basics of software development and potential issues otherwise it's just negligence.

On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

[–] FuglyDuck@lemmy.world 8 points 2 days ago

On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

to put this another way... expertise is superior to intelligence. Unfortunately we have this habit of conflating the two. Intelligent people sometimes do some incredibly stupid things because they lack the experience to understand why something is stupid.

Being a skilled doctor or surgeon doesn't make you skilled at governance. two different skillsets.

[–] older_code@lemmy.world 7 points 1 day ago

I have successfully written and deployed a number of large complex applications with 100% AI written code, but I micromanage it. I’ve been developing software for 30 years and use AI as a sort of code paintbrush. The trick is managing the AI context window to keep it big enough to understand its task but small enough to not confuse it.

[–] FreedomAdvocate@lemmy.net.au 9 points 1 day ago

No, making an app is not just something you can decide you want to do and do it without learning to code.

[–] TranquilTurbulence@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

Vibe coding works, but there are some serious caveats.

I've used LLMs for data visualization and found them helpful for simple tasks, but they will always make serious mistakes with more complex prompts. While they understand syntax and functions well, they usually produce errors that require manual debugging. Vibe coding with LLMs works best if you're an expert in your project and could write all of the code yourself but just can't be bothered. Prepare to spend some time fixing the bugs, but it should still be faster than writing all of it yourself.

If you're not proficient in using a specific function the LLM generated, vibe coding becomes less effective because debugging can be time consuming. Relying on an LLM to troubleshoot its own code tends to lead to "fixes" that only spawn more errors. The key is to catch these situations early and avoid getting lured into any of the wild goose chases it offers.

[–] Mountaineer@aussie.zone 10 points 2 days ago (1 children)

If you "vibe code" your way through trial and error to an app, it may work.
But if you don't understand what it's doing, why it's doing it and how it's doing it?
Then you can't (easily) maintain it.
If you can't fix bugs or add features, you don't have a saleable product - you have a proof of concept.

AI tools are useful, but letting the tool do all the driving is asking for the metaphorical car to crash.

[–] Tollana1234567@lemmy.today 0 points 1 day ago

probably useful in getting your resume to be noticed?

[–] Dr_Nik@lemmy.world 5 points 1 day ago (2 children)

People who vibe code are not using free LLMs, they are using custom AI code generation systems they pay subscriptions for. I don't know which ones work best but I do have a close friend who runs a software company and he just bought subscriptions for all his employees to some system I've never heard of because the code it generated drastically sped up their development time.

[–] lepinkainen@lemmy.world 1 points 1 day ago

Most likely Claude, it’s pretty much the best at the moment

[–] HiddenLychee@lemmy.world 0 points 1 day ago

A friend of mine is a senior full stack developer and just uses gpt 4o. He makes 300k a year doing it, so it can't be that bad

[–] NigelFrobisher@aussie.zone 6 points 2 days ago* (last edited 2 days ago)

Only for really basic things. I’m trying to use it to build tools in the background while I do real work, but it quickly falls into a pattern of presenting a working product that actually doesn’t work at all (and then I have to dedicate a lot more time analysing the generated code to find out why. It often can’t fix its own workings even with a “reasoning” model.)

[–] hendrik@palaver.p3x.de 2 points 1 day ago* (last edited 1 day ago)

Concerning the IQ: App development and regular programming aren't that hard. It needs some time and dedication, and willingness to learn how all these things work and tie together, but I think everyone with an average IQ could do it. It's specific domains where you need a high IQ, like writing advanced signal processing algorithms. Or write very efficient algorithms or do detailed security audits. But App development is just moderately complex, you can get away with basic math... So I'd say it's doable. Still needs quite some time and effort though. At least several weeks to months. And the Kotlin book I have has like 800 pages filled with information, and that just takes some time to work through. None of it is magic, though. You do one chapter at a time.

Vibe coding is overrated IMO. There are applications and clients out there for whom it's fine if you just do a piss-poor job and throw something together, and it somehow works enough. For a lot of things it's not advanced enough, yet.

[–] Toes@ani.social 6 points 2 days ago

It's cool for little things when working on an unfamiliar project or learning something new.

But don't trust one example and read about the features you're using.

[–] Gonzako@lemmy.world 4 points 2 days ago

If you care about it at all, don't vibe it otherwise go hog wild

[–] droning_in_my_ears@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

It works short term. If you have a deadline tomorrow by all means.

Long term, you need to be aware of not just the code but the theory behind the code. You can make it work if you're prompting for what you need and you read the result, understand it, and test it, but pure vibe coding is probably too much. How are you gonna solve problems when you don't fully understand how things work?

Another thing: a lot of AI-generated code solves the problem in the most obvious, often bad, way. For example, I asked the AI for help with an ORM limitation I was running into, and so many times the code it suggested was just: query the DB, then filter in code afterwards.

[–] Lumelore@lemmy.blahaj.zone 2 points 1 day ago

I pretty much only use it to generate boilerplate. I've tried using it to learn the syntax of new languages, and it kind of works, but in my experience just reading the docs is better, even if it seems like a lot of text. Also, your IQ really does not matter. You can learn anything as long as you're willing to put in the time and effort; don't compare yourself to others, it's fine to go at your own pace. (I'm Autistic also, btw)

[–] jol@discuss.tchncs.de 2 points 2 days ago

It already kinda works 95% of the way. But more often than not the last 5% still requires you to understand everything the AI did which can be hard. If it was you implementing everything, you'd already have the whole context in working memory. I've been learning better prompting and getting better at it. I think it thrives in typed languages and where the code base has clear design patterns it can follow.