this post was submitted on 17 Jan 2026
72 points (91.9% liked)
Technology
So, this is an area where I'm also pretty skeptical. It might be possible to address some of the security issues by making minor shifts away from a pure-LLM system. There are conventional security code-analysis tools out there, stuff like Coverity. Maybe if one says "all of the code coming out of this LLM gets rammed through a series of security-analysis tools", you catch enough to bring the security flaws down to a tolerable level.
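To make that concrete, here's a minimal POSIX shell sketch of such a gate. The analyzer commands passed in are placeholders for whatever tools (Coverity, clang's analyzer, semgrep, and so on) a given pipeline actually runs, not an endorsement of any specific toolchain:

```shell
#!/bin/sh
# Sketch of the "ram it through analyzers" gate. gate FILE TOOL...
# runs each analyzer command over FILE and rejects the file if any
# analyzer exits nonzero. Tool names are caller-supplied placeholders.
gate() {
    f="$1"
    shift
    for tool in "$@"; do
        # word-splitting on $tool is deliberate: it lets callers pass
        # commands with flags, e.g. "semgrep --config auto"
        if ! $tool "$f" >/dev/null 2>&1; then
            echo "rejected by: $tool" >&2
            return 1
        fi
    done
    echo "accepted: $f"
}
```

A file only gets accepted if every analyzer in the chain stays quiet, which is the whole point: the LLM's output never reaches the repository unchecked.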
One item that they highlight is the problem of API keys being committed. I'd bet that there's already software that runs on git commit hooks and tries to red-flag those, for example. Yes, in theory an LLM could embed them in the code in some obfuscated form that slips through, but I bet that heuristics can catch most of that, that they'd be good enough, and that such software isn't terribly difficult to write.
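A bare-bones version of such a red-flagger might look like this; the two patterns (AWS-style access key IDs and generic api_key-ish assignments) are illustrative heuristics only, not an exhaustive secret taxonomy:

```shell
#!/bin/sh
# Minimal sketch of a pre-commit secret red-flagger. The real
# .git/hooks/pre-commit would be exactly this file, made executable.
# Patterns below are illustrative heuristics, nothing more.
looks_like_secret() {
    grep -Eq 'AKIA[0-9A-Z]{16}|[Aa][Pp][Ii][_-]?[Kk][Ee][Yy][[:space:]]*[:=]'
}

# Hook body: scan only what this commit stages, block on a hit.
if git diff --cached -U0 2>/dev/null | looks_like_secret; then
    echo "possible credential in staged changes; commit blocked" >&2
    exit 1
fi
```

Obfuscated keys would sail past this, which is the point of the caveat above, but the cheap pattern match still catches the common careless case.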
But in general, I think that LLMs and image diffusion models are, in their present form, more useful for generating output that a human will consume than output that a CPU will consume. CPUs are not tolerant of errors in programming languages. Humans often just need an approximately-right answer to cue our brains, which already hold the right information to construct the desired mental state. An oil painting isn't a perfect rendition of the real world, but it's good enough: it can hint at what the artist wanted to convey by cuing up the appropriate information about the world that we carry in our heads.
This Monet isn't a perfect rendition of the world. But because we have knowledge in our brain about what the real world looks like, there's enough information in the painting to cue up the right things in our head to let us construct a mental image.
Ditto for rough concept art. Similarly, a diffusion model can get an image approximately right; some errors often just aren't all that big a deal.
But a lot of what one is producing when programming is going to be consumed by a CPU that doesn't work the way that a human brain does. A significant error rate isn't good enough; the CPU isn't going to patch over flaws and errors itself using its knowledge of what the program should do.
EDIT: Yes. Here are instructions for setting up trufflehog to run on git pre-commit hooks to do just that.
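The shape of the hook is roughly this, assuming trufflehog v3 is on PATH; treat it as a sketch of the linked instructions rather than a copy of them, since exact flag spellings vary by version:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- sketch, assuming trufflehog v3 on PATH.
# Scans what this commit introduces and fails the commit if a secret
# turns up; the hook's exit status is trufflehog's exit status.
trufflehog git file://. --since-commit HEAD --fail
```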
EDIT2: Though you'd need to disable this trufflehog functionality and have some out-of-band method for flagging false positives, or an LLM could learn to bypass the security-auditing code by being trained on code that overrides false positives: