this post was submitted on 30 Apr 2026
-71 points (13.4% liked)

Programming


I've been banned for AI slop from a few communities here on Lemmy as well as on Reddit.

I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative as I can about the details. My projects are very complicated and I try to document them well.

My project is pretty cryptography-heavy. Sharing my efforts is an attempt at transparency, but it gets used against the project when people call it AI slop (which undermines the spirit of Kerckhoffs's principle: the design should be open to scrutiny).

It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.

My project aims to be a secure messaging app. I have all the bells and whistles there along with documentation... but if the conversation can't move past "it's AI-generated", then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.

AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to bash in a nail... I "use" the hammer. AI is no different in that you are responsible for how it's used.

I've busted my ass on my project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community. Cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised... but it's simply ignorant if mods ban me over the quality of my work, considering the level of transparency and my engagement in discussions about it.

It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI... and it's clearer for it. (You can find more detail on my profile.)

I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI... but I'm not sharing on some UI/UX/design community. The cryptography module has unit tests and formal verification. If that is AI slop and can get me banned, I simply don't have faith in that community to be objective about where AI can contribute.

While it's understandable that people don't want to review AI slop... I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?

top 50 comments
[–] spectrums_coherence@piefed.social 7 points 19 hours ago* (last edited 2 hours ago)

the cryptography module has unit tests and formal verification.

I suspect your formal proof refers to the following files: https://github.com/positive-intentions/signal-protocol/tree/staging/formal-proofs

It contains 6 files, each with fewer than 100 lines of code, and the claim seems to be that they prove almost the entire security of the Signal protocol.

There are three possibilities here: (1) the formal proof community has advanced enormously without me knowing, (2) your AI produced complete garbage, or (3) your AI made groundbreaking advancements in formal methods. The best-known state of the art I'm aware of is Signal* from Project Everest. It involves tens of components and years of work by top academics and proof engineers.

A single file there, like fstar/Impl.Signal.Core.fst, is already longer than your entire proof; even just the hints provided to the SMT solvers in fstar/Impl.Signal.Core.fst.hints are longer than your entire proof.

So I am interested in which techniques you applied to achieve almost the same effect as this monumental project with less than 5% of the code.


You have also claimed there is support for Rocq, Lean, and F*, and the code is here https://github.com/positive-intentions/signal-protocol/tree/staging/signal-protocol-core/proofs

I looked into the Rocq and Lean parts of the proof, and there is no proof: the "correctness" claims are all declared as axioms, which are not proven.
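To illustrate the distinction (a minimal Lean 4 sketch using a toy proposition, not code from the repo): an `axiom` is accepted on faith, while a `theorem` must carry a proof term the kernel actually checks.

```lean
-- An axiom merely asserts a statement; nothing is ever checked.
-- A file full of these "compiles" instantly but proves nothing.
axiom protocol_is_secure : ∀ n : Nat, n + 0 = n

-- A theorem must supply an actual proof for the kernel to verify.
theorem actually_proved : ∀ n : Nat, n + 0 = n :=
  fun _ => rfl

-- `#print axioms actually_proved` reports no axiom dependencies;
-- anything built on `protocol_is_secure` silently inherits the assumption.
```

This is why reviewers run `#print axioms` (or the Rocq equivalent, `Print Assumptions`) on a "verified" development: a security claim that bottoms out in axioms has merely been restated, not proven.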


So far, I have sat down and read your code, and I feel it is either a major breakthrough or a complete waste of my time (I am unfortunately leaning towards the latter). I would be furious if a student or colleague handed me work of this quality, and I imagine the experts reading your code will feel the same.

I am not angry because your work involves an LLM (I don't like that, but I won't be angry about it), but because you disrespected the time and effort I spent reviewing your code by not putting out a product of reasonable quality. In turn, I cannot provide you constructive technical feedback, because the technical part of your project seems hollow to me. IMO, disrespecting the time of your peers is a very good reason to ban people from a community.

Academia is currently being flooded with AI. Much of it is used by competent individuals, so the AI is able to hide errors in obscure processes. For the first time, academia has to deal with a large number of submissions that are not in good faith, and that is frustrating for us volunteer reviewers. Your readers, who are also volunteering their time to help you improve, will likely feel the same.

AI is just a tool; that is, you will get as much expertise out of it as you put into it. Like a computer, it will make producing work easier and faster, but it cannot help you build anything you do not understand yourself.

I am glad you are interested in crypto and verification. But making a meaningful contribution will take honest effort, as opposed to just prompting a so-called artificial "intelligence" a couple of times.

[–] entwine@programming.dev 15 points 1 day ago (1 children)

I think you need to speak to a mental health specialist, because AI psychosis can be really destructive. We all have problems, but using chat bots to make us feel better is dangerous for you and those around you, even if it feels good in the moment. These bots are designed to tell you exactly what you want to hear so that you become addicted to them.

I'm going to guess you didn't accomplish much as a software engineer before AI? The personal deficiencies at the core of that are still there even if you use AI to tell you otherwise. I won't speculate on what those deficiencies are, but I just want you to engage in some honest introspection. Absolutely nobody will trust someone like you to handle a topic as sensitive as cryptography. Stop wasting your short time on this earth on something so stupid. Go make literally anything else.

[–] thedeadwalking4242@lemmy.world 22 points 1 day ago (1 children)

I've read some other comments and wanted to add.

You cannot use a LLM to verify its own work

They have no ability to think. Any intelligence they have is extremely limited. They're mostly automatic copy-and-paste machines: they pull code from their training data and from online, and attempt to compose it.

Using an LLM to verify its own work is like asking a criminal to run their own trial.

That's just not how any of this works. I think you should take a step back from the LLM and really start evaluating your work more critically. There is more to software than "it works!"

[–] xoron@programming.dev -1 points 1 day ago (1 children)

I started off with a version I created manually, without AI. I know how to do this old-school (I tried). That was a different kind of slop.

https://github.com/positive-intentions/chat

I use AI in a way I think is appropriate. I check as much as I can myself too. I post online about details and questions, and I can iterate with AI. I may be naive to think I know how to inspect what is created, so I share it online. I'm not sharing slop; this is the best I can do. Of course there are countless points of improvement, but there are only so many hours in the day.

You're sharing a valid opinion, but it's difficult for me to quantify my efforts. I'm sure you don't think I just asked AI something basic (e.g. "verify this code is correct").

[–] thedeadwalking4242@lemmy.world 3 points 17 hours ago (1 children)

If you can't write it manually without it being slop, then you can't program effectively with an LLM either.

It doesn't matter how much instruction you give an LLM; it fundamentally cannot evaluate itself. To ensure it's evaluating correctly, either you or someone else needs to evaluate it. These are not deterministic machines, and they will "lie" to reach their goals. I put that in quotes because it's not really lying; that's too much personification.

These things are not good for literally anything beyond minor transformation or boilerplate.

Trust me. If you actually spend the time learning to write well-crafted software by hand, you will save time and get a better result. LLM-based coding is an anti-pattern.

You're not getting pushback because "programmers are upset their jobs are getting stolen"; you're getting pushback because you're falling for LLM company propaganda. LLMs just are not there yet.

If more than 20% of your code is written by an LLM, you're using it wrong.

[–] xoron@programming.dev 1 points 10 hours ago

Here is the open source version I created without AI: https://github.com/positive-intentions/chat

It's fairly ugly and not user-friendly, but the core mechanics of secure encrypted communication are demonstrated and documented. It was clear after creating that version that open source was worthless. With or without AI, slop has always been around... for better or worse, I was creating slop before it was cool.

I then created the newer version of the messaging app with AI (it isn't fully open source but works in a similar way): https://p2p.positive-intentions.com/iframe.html?globals=&id=demo-p2p-messaging--p-2-p-messaging&viewMode=story

Having done it manually and then with AI, I can clearly compare why the closed-source version is more appealing to users. It's not just a nicer UI; it's better documented.

You're assuming that if I didn't have AI, I wouldn't be able to work on my project. I'm naive enough to think that isn't true. The documentation and code might not be to the same quality, but I'm sure I could still crank out code the old-fashioned way.

[–] graynk@discuss.tchncs.de 38 points 1 day ago (1 children)

Cryptography is notoriously easy to get wrong. If you don't know enough about it, you should not offload it to the hallucination machine, because you will not be able to verify it properly, and those who can will not bother to.
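As a concrete example of how subtly this stuff fails (a stdlib-only Python sketch, not code from the project): comparing authentication tags with `==` is functionally correct but can leak, through timing, where the first mismatching byte is, which is why the stdlib ships a constant-time comparison for exactly this case.

```python
import hashlib
import hmac

KEY = b"\x00" * 32  # toy key, for illustration only

def tag(message: bytes) -> bytes:
    """HMAC-SHA256 authentication tag for a message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify_naive(message: bytes, received_tag: bytes) -> bool:
    # Passes every unit test, but == short-circuits at the first
    # differing byte, which can leak timing information to an attacker.
    return tag(message) == received_tag

def verify(message: bytes, received_tag: bytes) -> bool:
    # The vetted way: constant-time comparison from the stdlib.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"hello"
t = tag(msg)
print(verify(msg, t))          # True
print(verify(b"tampered", t))  # False
```

Both versions produce identical results on every input, so no amount of "it works" testing distinguishes them; only a reviewer who knows the failure mode will.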

This is not what a real audit looks like and it should not be presented as such. This "audit" is, in fact, slop.

Auditor: Security Analysis (Automated + Manual Review)

Do you not see the problem in this line?

The implementation uses real cryptographic primitives

Or this?

I avoid slop code like yours because typically the user of the slop generator has no real idea of how things actually work, the slop is over-"engineered", and it's likely full of security issues. Further, it also wastes tons of resources just for poorly written slop.

I especially wouldn't ever touch your cryptographic slop.

[–] Pamasich@kbin.earth 37 points 1 day ago (1 children)

In my opinion, slop is slop. AI tends to result in slop, but it doesn't have to. But ensuring it's not slop takes effort and time, which kind of defeats the purpose of using AI in the first place. So I think it's obvious why most people default to AI involvement = slop.

[–] farbidden_lands@quokk.au 13 points 1 day ago (1 children)

Unless you invented some new form of encryption, why are you generating so much AI slop?

Just reuse human-made cryptography libraries that are battle-tested. Then you won't have to do disastrous things like using AI to review your AI slop.

You know that it lies, gaslights, and writes or deletes production databases, tests, etc. as it pleases.

[–] xoron@programming.dev -5 points 1 day ago

You're right. My version of what you're describing exists here: https://github.com/positive-intentions/chat

Not AI slop, but slop of a different kind. It's purely a web app and uses audited crypto primitives from the browser. WebRTC is already encrypted, but there is a Diffie-Hellman key exchange on top (you can share public key hashes to guard against MITM attacks). I put time and effort in there and documented it to seek some kind of open source support. It didn't work out.
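The public-key-hash check described here can be sketched like this (Python with hypothetical key bytes; in the actual app the key material would come from the browser's WebCrypto API): each user hashes the public key they received, the two users compare the short fingerprints over a second channel, and a mismatch reveals a swapped key.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short SHA-256 fingerprint users can compare out-of-band."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# Hypothetical key material for illustration.
bob_key = b"\x02" * 32   # what Bob actually sent
mitm_key = b"\x03" * 32  # what a MITM substituted in transit

# Bob reads his fingerprint aloud; Alice hashes the key she received.
print(fingerprint(bob_key) == fingerprint(bob_key))   # True: keys match
print(fingerprint(bob_key) == fingerprint(mitm_key))  # False: swap detected
```

The check is only as strong as the second channel: the fingerprints have to travel over something the MITM doesn't control (voice, in person, a QR code), or the attacker can just substitute those too.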

My plan was always to beef up the encryption. I wanted to add the Signal protocol. I asked on Reddit and couldn't find something suitable.

https://www.reddit.com/r/crypto/comments/1mi4ooa/looking_for_the_signal_protocol_in_javascript

So I used AI to sweat it out myself: https://www.reddit.com/r/signal/comments/1orsjw2/signal_protocol_in_javascript

There is a great deal of effort there that I simply can't quantify.

[–] luciole@beehaw.org 17 points 1 day ago

No matter how hard you pet your LLM, this project is not your work. LLM output attribution is a gray zone by design. Your assumption that vibe coding has overtaken software development is a big red flag imho. I wonder where you've acquired this belief. If you've been banned from multiple communities already I recommend you reflect upon this.

[–] toebert@piefed.social 30 points 1 day ago (1 children)

I don't think everything is getting called AI slop, but I would say that if any part of your project is AI slop (like your "lazy UIs"), I'd immediately lose trust in the entirety of the project, especially when it's intended to be about security. I do think most projects that use AI for code generation are slop, though. I've seen far fewer examples of good use (i.e. where the output looks human-written because the operator reviewed and refactored every part of it, or where it was used to write small parts of functions rather than entire features).

Your last sentence, I think, provides a great argument for why people here (and increasingly in engineering broadly) hate AI-generated code in general. It produces such vast quantities of code (often unnecessarily) that it becomes infeasible for a human to review, immediately requiring us to trust the machine to both generate and review it, and to keep maintaining it, while the human operator probably doesn't even fully understand what's changing. A machine that we all know hallucinates and generates low-quality garbage, including severe security vulnerabilities, by design. According to GitHub, your project had millions of lines of changes on a weekly basis in its earlier days; that does scream slop to me.

Lastly, AI is more and more hated due to the increasing number of horrible impacts it has on our world; personally, I'd refuse to support AI-generated projects on that principle alone.

[–] Goldholz@lemmy.blahaj.zone 19 points 1 day ago

Yes we can. Watch me

Look, I'm all for using LLMs for tedious or straightforward transformations of easily verifiable logic. The issue is that LLMs are sycophantic by nature, and we are seeing a lot of newly freed "geniuses" who have promised "no no no. You see! I know the secret to using them for good!"

It's like the One Ring. If you start using it for anything beyond reformatting, anything that requires critical thinking, you've already trapped yourself.

You'll feel like your work is quality when it isn't.

Personally, I still think the quality of LLM code is crap for pretty much anything. It's much better done by a well-seasoned developer, which is harder to come by than people think. An LLM can help in some narrow cases, but not many.

[–] tabular@lemmy.world 21 points 1 day ago (1 children)

Was the AI you're using trained like most of them: scraping the internet and disregarding the licenses of the code?

[–] mlatu@moist.catsweat.com 18 points 1 day ago

using AI is now common practice for developers of all levels

is not a fact.

But that one person standing in front of their (partly) dice-rolled "work" is not a welcome sight is one.

Any dev would much rather brown their own greenfields than help you regreen your AI brownfields...

[–] Solumbran@lemmy.world 11 points 1 day ago

So many critical bugs and security holes have come from oversights by the people handling the code.

Now you want to tell me that instead of having people write code that tries to make sense and then review it (sometimes a bit too late), you want a hallucination machine to produce some code randomly, then have people "fix" it, then review it?

This is just a recipe for disaster.

AIs are not "AIs"; they're just bullshit generators that everyone is falling for. Technical debt and lack of code reliability were already the main problems of software dev, and AIs sacrifice those two specifically, in exchange for the illusion of speed.

If you train monkeys to pile up bricks, it doesn't make a house, it makes a disaster waiting to happen. And monkeys, unlike AIs, are actually intelligent and sentient, which would make them more reliable still.

[–] Auster@thebrainbin.org 1 points 20 hours ago

Some communities have "no-AI" rules. If you didn't break any, maybe you've been targeted by moderators who partake in cancel culture?

If that's the case, it at least helps to sift through communities. And if worst comes to worst, maybe start a personal community to share what you make?

[–] hendrik@palaver.p3x.de 10 points 1 day ago* (last edited 1 day ago) (1 children)

It's a broad topic. Every time I see some new AI-coded project linked in the selfhosted community, it's kinda shit... I've had hallucinated installation instructions, very exaggerated claims of what it's supposed to do... Sometimes it looks okay, but some buttons don't do anything, and then I look at the code and everything is more of a stub. Some projects have ridiculous security issues, like someone finding a master key buried in the code, and of course none of the "developers" ever noticed because no one ever looked at the code...

You're somewhere in the same territory. Maybe you're the one who applies it properly. But once I notice the tell-tale signs of vibe-coding, I'm going to start looking at it with the prejudice shaped by my prior experience. And I tend to be right most of the time.

But with that said, I don't think it's healthy to have a war over it, ban people and yell at each other. Most I want is transparency. I think all software projects should just disclose if and how they use AI, and to what extent. Then the users can make up their minds.

And with cryptography code... isn't that a bit dangerous? From my own experience, AI models learn a lot from example code, the standard documentation of libraries, Wikipedia articles and such, and then generate responses closer to that than to completely new thoughts. But(!) all these examples, tutorials and boilerplate code take a lot of shortcuts to explain things in simpler terms. Shortcuts that weaken security. I wouldn't be surprised if your AI goes ahead and reproduces those, casually forgetting the steps to prepare the numbers, or the follow-up steps, if they were never in the Wikipedia example code.

And I've seen a lot of wrong advice on StackOverflow and Reddit, so you'd better hope it didn't internalize that either. There are some fairly common myths about security and cryptography details out there, and I never know whether your average Claude learned more from Reddit discussions or from computer science literature. And you probably used Claude to skip reading the computer science books (and having a really close look at the code) as well, or you would have just typed it out yourself. So I'd expect your software to be roughly as sound as newbie code, up to the average of the projects out there on GitHub that your AI learned from. Not any better than that.

[–] xoron@programming.dev -3 points 1 day ago (1 children)

Most I want is transparency.

I agree with everything you're saying, especially this, which is why I entertain the idea of open source at all. What does transparency look like to you? Code? Documentation? Open discussion? Transparency is undermined when I'm trying to talk about something clearly complicated in order to seek feedback.

cryptography code… Isn’t that a bit dangerous?

In software dev we have things like unit tests (you already know that)... but when diving into cryptography we also have formal proofs and verification we can use. It doesn't need AI to extract an abstraction from the code implementation to run verification on. The tooling there is common practice, and if we question whether AI is using it properly, we bring into question whether the tooling itself is good enough.

  • security audit
  • unit tests
  • formal proof
  • formal verification
  • documentation

Individually, they could all easily be AI slop, but combined I hope they can serve as a starting point for a proper review. I don't mean a proper review from you either... I was seeking a review from orgs that specialise in such reviews.
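For what it's worth, the "unit tests" layer for crypto code typically means known-answer tests against published vectors, which are easy for a reviewer to spot-check for honesty. A stdlib-only Python sketch (the vector is the standard NIST test value for SHA-256, not anything from this project):

```python
import hashlib
import unittest

class KnownAnswerTest(unittest.TestCase):
    def test_sha256_abc(self):
        # Published NIST known-answer vector for SHA-256("abc").
        self.assertEqual(
            hashlib.sha256(b"abc").hexdigest(),
            "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
        )

# Run with: python -m unittest <module>
```

Tests like this only show the primitive matches the spec on fixed inputs, though; they say nothing about how the protocol composes those primitives, which is what the formal-proof layer is supposed to cover.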

https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

You make a lot of assumptions about how I code and what I understand about my project. Enumerating what I've done and plan to do wouldn't do it justice... but I will say this project is the result of a long-term effort. I created the project without AI originally. The idea is unique around client-managed cryptography (https://github.com/positive-intentions/chat)... ultimately it became clear to me that open source is dead, so I've started introducing less transparency into the project with a closed-source UI. I still keep the cryptography-related modules open for transparency (whatever that's worth when people see that AI was involved).

I wouldn't put my project out there if I didn't have faith in the implementation. I have actively sought feedback and received good advice, from which I iterated and improved. Which makes it particularly concerning if I'm being banned from communities for posting slop.

[–] hendrik@palaver.p3x.de 6 points 1 day ago* (last edited 1 day ago) (1 children)

diving into cryptography we have formal proofs and verification we can use

Did you do formal proofs or verification? I had a quick look at the repos and I can't find them.

[–] xoron@programming.dev -1 points 1 day ago (1 children)

https://github.com/positive-intentions/signal-protocol

https://positive-intentions.com/docs/technical/signal-protocol-formal-verification

There are still bugs there. I'm actively working on it; if you look at the pull requests, you can see where I'm trying to set up verification on the CI.

[–] hendrik@palaver.p3x.de 7 points 1 day ago* (last edited 1 day ago) (1 children)

Uh, sorry, your code is a bit difficult to read. There seems to be one implementation in the 'src' directory, which is referenced in your ProVerif pi code. But then there's another one(?) in the 'signal-protocol-core' directory, which seems to be the one that's actually built?

And how did you arrive at those ProVerif files? Do they come from your Rust code? How? And how do you make sure they relate to your code? I mean, for all I know they could contain some correct design while your code does something else... I'm not really an expert at this, but they seem (to me) to just appear in some commit, and I don't really get how they relate to the Rust code, or how they came to be.

And then it's a bit difficult for me to tell whether your chat uses the cryptography code from the 'cryptography' repository or the one from the 'signal-protocol' repository. It seems to load both?! But your own AI security audit flagged a lot of issues with your 'cryptography' repository. I can't tell whether that information is still up to date, but there was some report full of exclamation marks and red crosses, and a recommendation not to do it this way.

While at it, I had a look at the browser's developer console, and you have a lot of JavaScript warnings and errors there, which I guess isn't good?! Another side note: if I were you, developing a secure and private messenger, I'd skip all the requests to Google Fonts, AWS, jsDelivr, third-party JS CDNs, analytics... It directly connects to YouTube and another analytics service that gets broad permissions. The infrastructure isn't entirely controlled by you; for example, the signalling server is the default free one. None of that is great for privacy. Plus, your Content Security Policy has way too many asterisks in it, covering external domains and domains you control that still have debugging stuff on them. And I don't think you put any restrictions on what JavaScript can be loaded or injected beyond the CSP?!

And hax just translates code; it's supposed to do a bit of type-checking and check that your code generates things with the correct length. It doesn't currently prove any theorems or do verification of the cryptography, does it? I'm not sure where to look.

Sorry, I'm not exactly a security researcher... maybe my layman's audit is shit... but I think there's quite some stuff going on that pretty much renders any verification of a single component irrelevant. I could be wrong, though. But I'd still be interested to hear how the code relates to the ProVerif files, and what kind of assurance there is that they're the same.

[–] hendrik@palaver.p3x.de 3 points 1 day ago* (last edited 1 day ago) (1 children)

@xoron@programming.dev Does the currently deployed version on chat.positive-intentions.com work? I tried to connect, and tried some more, but it never connects. I'm following the procedure in the YouTube video. It reloads something on the page intermittently but never connects to the other browser.

And already after opening the page, it says "My peer ID is: xy", but then immediately "peer disconnected" and "peer closed: undefined", even before I do anything. Is it supposed to say that?

I tried several combinations of Chromium 147 and LibreWolf 150. And whatever Vanadium is on my phone. I tried phone-computer and two different browsers on the same computer. Is that an issue? Other PeerJS applications work just fine.

And does the QR scanner work? It opens the camera and scans the QR code just fine, but then reloads and doesn't put any ID into the field?! So I guess that's broken and I need to copy-paste it?

Edit: Your file demo seems to work better. It at least gets to the point where it tries to open a connection. For some reason it also fails (ICE failed, your TURN server appears to be broken, see about:webrtc for more details). But at least that demo gets far enough to listen to connections and try to initialize them.

[–] xoron@programming.dev 1 points 1 day ago

https://p2p.positive-intentions.com/iframe.html?globals=&id=demo-p2p-messaging--p-2-p-messaging&viewMode=story

That's the most stable version of the project. It's still a work in progress, and I can't promise stability.

For the best experience, try it from two fresh incognito instances, or clear site data before testing it out.

[–] zonico@discuss.tchncs.de 6 points 1 day ago (2 children)

Why is your app cryptography-heavy if it's a messenger? Don't you just have to call msg.encrypt() or similar and then the library handles the rest?

[–] anton@lemmy.blahaj.zone 4 points 1 day ago (1 children)

There are some interesting aspects to asynchronous encrypted messengers.
https://youtu.be/9sO2qdTci-s
Not that I would trust some random stranger's slop over established projects like Signal.
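One reason a messenger ends up cryptography-heavy: asynchronous messaging wants a fresh key per message with forward secrecy, which Signal gets from a ratchet rather than a single msg.encrypt() call. A toy, stdlib-only Python sketch of the symmetric chain step (the 0x01/0x02 input constants follow the published Double Ratchet spec; this is an illustration, not this project's code):

```python
import hashlib
import hmac

def chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the sending chain: derive the next chain key plus a
    one-time message key from the current chain key (KDF_CK)."""
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    return next_chain_key, message_key

ck = b"\x00" * 32  # toy initial chain key for illustration
message_keys = []
for _ in range(3):
    ck, mk = chain_step(ck)
    message_keys.append(mk)

# Every message gets a distinct key, and the chain only moves forward:
# leaking one message key doesn't reveal earlier or later ones.
print(len(set(message_keys)))  # 3
```

Layer on top of that the Diffie-Hellman ratchet, header encryption, out-of-order message handling and session state storage, and "just call msg.encrypt()" stops being a fair description of the work.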

[–] xoron@programming.dev -5 points 1 day ago

Completely understandable, and hence the proactive attempt to get a professional security audit, so I can avoid asking you to "trust me".

It's completely understandable that you want to use something established. I can't offer more than open source and transparency in the implementation. If "trust" is behind the "paywall" of a security audit, it's simply not an option without support.

I used AI to generate an audit. It took several days of my time and effort to get it to where it is. I made a genuine attempt to be objective.

In SWE we already have things in place for this, like unit tests. If we dive further into cryptography, we have things like formal proofs and verification.

Formal verification has tooling to help make sure things work and behave how they should. (Without AI,) it can look at the code and create abstractions that can be used for verification. If we question whether AI can be used with such tooling, we start discussing whether the tooling we use is good enough (it's pretty widely used!).

If the conversation can't move past the fact that I used AI, then we're not really having a discussion.

load more comments (1 replies)
[–] janonymous@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

You might be expecting too much nuance from online communities. It's easy and fun to oversimplify and dunk on a perceived common enemy. Lemmy has a very AI-critical community. I imagine on Reddit you might get less backlash, at least depending on the community. You might also find more AI-friendly places here. In any case, trying to fight against a community bias is often a fool's errand. I'm sure your code isn't slop, but I don't think you'll be able to change the minds of random, biased people on the internet who have no incentive to really listen to you anyway.

I'm sure you already know all the reasons people are against AI and are sick of having to defend yourself. Still, I want to add that even if you use AI as a tool instead of vibe-coding, as a consumer I wouldn't trust any privacy/security-critical software that was developed with AI. As a layman I can't check how secure your software is, so I have to rely on simple signifiers to make my judgements. At this point in time, AI is a red flag for me for security reasons alone. I know it's not "fair" or "accurate", but I don't have the time and knowledge to individually check every piece of software to that extent. I know allegedly every programmer now uses AI in some form (I personally don't, and most people I know don't either, but I'm sure that's just my bubble), but it's not a sign of quality code in my mind.

Another thing I want to add is that your hammer comparison should probably include how the hammer was produced and how much resources your hammer consumes to function. There is a strong ethical argument against the use of AI for most use cases. I'd include coding and code reviews. Again, that doesn't make your code slop, but it might help you understand why so many people are ready to dismiss it as that.
