this post was submitted on 03 Nov 2025
121 points (96.9% liked)

Programming


I've been a Java engineer in the web development industry for several years now. I've heard multiple times that X is good because of SOLID principles, or that Y is bad because it breaks SOLID principles, and I've had to memorize the "good" way to do everything before interviews. But the more I dive into the real reason I'm doing something in a particular way, the harder I find it to justify.

One example is creating an interface for every goddamn class I make because of "loose coupling" when in reality none of these classes are ever going to have an alternative implementation.

Also the more I get into languages like Rust, the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

top 50 comments
[–] Feyd@programming.dev 59 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

If it makes the code easier to maintain it's good. If it doesn't make the code easier to maintain it is bad.

Making interfaces for everything, or making getters and setters for everything, just in case you change something in the future, makes the code harder to maintain.

This might make sense for a library, but it doesn't make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it'll still be a lot less work than bloating the entire codebase with needless indirections every day.

[–] mr_satan@lemmy.zip 12 points 2 weeks ago

Yeah, this. Code for the problem you're solving now; think about the problems of the future.

Knowing OOP principles and patterns is just a tool. If you're driving nails you're fine with a hammer, if you're cooking an egg I doubt a hammer is necessary.

[–] Valmond@lemmy.world 4 points 2 weeks ago (2 children)

I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

Useful if you, like, recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any benefits, I'd gladly hear them!

[–] SilverShark@programming.dev 9 points 2 weeks ago (10 children)

We had it because we needed to compile for Windows and Linux on both 32 and 64 bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions within the core header file with #ifndef and such.

[–] HetareKing@piefed.social 5 points 2 weeks ago (20 children)

If you're directly interacting with any sort of binary protocol, i.e. file formats, network protocols etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don't want to go and confirm whether I remember correctly that long is the same size as int.

There's also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it's representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

And then there are type aliases like size_t that are useful precisely because they have different sizes on different platforms.

I'd say that generally speaking, if it's not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it's not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.

[–] JakenVeina@midwest.social 26 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

One example is creating an interface for every goddamn class I make because of "loose coupling" when in reality none of these classes are ever going to have an alternative implementation.

That one is indeed objectively horse shit. If your interface has only one implementation, it should not be an interface. That being said, a second implementation made for testing COUNTS as a second implementation, so context matters.
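To illustrate that testing caveat, here is a minimal, hypothetical Java sketch (all names invented): the interface would be pointless with only the production class, but the test double gives it a second, legitimate implementation.

```java
// Hypothetical example: the interface is justified by the test double,
// not by a speculative second production implementation.
interface PaymentGateway {
    boolean charge(String accountId, long amountCents);
}

// The single "real" implementation used in production.
class StripeGateway implements PaymentGateway {
    public boolean charge(String accountId, long amountCents) {
        // ... real HTTP call to the payment provider would go here ...
        return true;
    }
}

// The second implementation that makes the interface worth having: a test double.
class FakePaymentGateway implements PaymentGateway {
    public boolean charge(String accountId, long amountCents) {
        return amountCents < 100_000; // deterministic, no network, fast tests
    }
}
```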

In general, I feel like OOP principles like these are indeed used as dogma more often than not, in Java-land and .NET-land. There's a lot of legacy applications out there run by folks who've either forgotten how to apply these principles soundly, or were never taught to in the first place. But I think it's more of a general programming trend than any problem with OOP or its ecosystems in particular. Betcha we see similar things with Rust when it reaches the same age.

[–] egerlach@lemmy.ca 7 points 2 weeks ago

SOLID often comes up against YAGNI (you ain't gonna need it).

What makes software so great to develop (as opposed to hardware) is that you can (on the small scale) do design after implementation (i.e. refactoring). That lets you decide after seeing how your new bit fits in whether you need an abstraction or not.

[–] iii@mander.xyz 22 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Yes, OOP and all the patterns are more often than not bullshit. Java is especially well known for that. "Enterprise Java" is a well-known meme.

The patterns and principles aren't useless. It's just that in practice most of the time they're used as hammers even when there's no nail in sight.

What, you don’t like AbstractSingletonBeanFactorys?

[–] SinTan1729@programming.dev 4 points 2 weeks ago (1 children)

As an amateur with some experience in the functional style of programming, anything that does SOLID seems so unreadable to me. Everything is scattered, and it just doesn't feel natural. I feel like you need to know how things are named, and what the whole thing looks like, before anything makes any sense. I thought SOLID was supposed to make code more local. But at least to my eyes, it makes everything a tangled mess.

[–] iii@mander.xyz 5 points 2 weeks ago (1 children)

Especially in Java, it relies extremely heavily on the IDE to make sense to me.

If you're a minimalist like me and prefer your text editor to be separate from the linter, compiler, and linker, it's not feasible, because everything is so verbose, spread out, and coupled by convention.

So when I do work in Java, I reluctantly bring out Eclipse. It just doesn't make any sense without it.

[–] SinTan1729@programming.dev 3 points 2 weeks ago* (last edited 2 weeks ago)

Yeah, same. I like to code in Neovim, and OOP just doesn't make any sense in there. Fortunately, I don't have to code in Java often. I had to install Android Studio just because I needed to make a small bugfix in an app, and it was so annoying. The fix itself was easy, but I had to spend around an hour trying to figure out where exactly the relevant code was.

[–] entwine@programming.dev 19 points 2 weeks ago (3 children)

I think the general path to enlightenment looks like this (in order of experience):

  1. Learn about patterns and try to apply all of them all the time
  2. Don't use any patterns ever, and just go with a "lightweight architecture"
  3. Realize that both extremes are wrong, and focus on finding appropriate middle ground in each situation using your past experiences (aka, be an engineer rather than a code monkey)

Eventually, you'll end up "rediscovering" some parts of SOLID on your own, applying them appropriately, and not even realize it.

Generally, the larger the code base and/or team (which are usually correlated), the more that strict patterns and "best practices" can have a positive impact. Sometimes you need them because those patterns help wrangle complexity, other times it's because they help limit the amount of damage incompetent teammates can do.

But regardless, I want to point something out:

the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

This attitude is a problem. It's an attitude of ignorance, and it's an easy hole to fall into, but difficult to get out of. Nobody is "circlejerking OOP". You're making up a strawman to disregard something you failed at (eg successful application of SOLID principles). Instead, perform some introspection and try to analyze why you didn't like it without emotional language. Imagine you're writing a postmortem for an audience of colleagues.

I'm not saying to use SOLID principles, but drop that attitude. You don't want to end up like those annoying guys who discovered their first native programming language, followed a Vulkan tutorial, and now act like they're on the forefront of human endeavor because they imported a GLTF model into their "game engine" using assimp...

A better attitude will make you a better engineer in the long run :)

[–] beejjorgensen@lemmy.sdf.org 18 points 2 weeks ago (1 children)

I'm a firm believer in "Bruce Lee programming". Your approach needs to be flexible and adaptable. Sometimes SOLID is right, and sometimes it's not.

"Adapt what is useful, reject what is useless, and add what is specifically your own."

"Notice that the stiffest tree is most easily cracked, while the bamboo or willow survives by bending with the wind."

And some languages, like Rust, don't fully conform to a strict OO heritage like Java does.

"Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves.

"Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend."

[–] frezik@lemmy.blahaj.zone 11 points 2 weeks ago (3 children)

It's been interesting to watch how the industry treats OOP over time. In the 90s, JavaScript was heavily criticized for not being "real" OOP. There were endless flamewars about it. If you didn't have the sorts of explicit support that C++ provided, like a class keyword, you weren't OOP, and that was bad.

Now we get languages like Rust, which seems completely uninterested in providing explicit OOP support at all. You can piece together support on your own if you want, and that's all anyone cares about.

JavaScript eventually did get its class keyword, but now we have much better reasons to bitch about the language.

[–] beejjorgensen@lemmy.sdf.org 6 points 2 weeks ago

The funny thing is I really liked the old JS prototypal inheritance. :)

[–] FizzyOrange@programming.dev 15 points 2 weeks ago (1 children)

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Sounds like you've learned the answer!

Virtually all programming principles like that should never be applied blindly in all situations. You basically need to develop taste through experience... and care about code quality (lots of people have experience but don't give a shit what they're excreting).

Stuff like DRY and SOLID are guidelines not rules.

[–] biotin7@sopuli.xyz 4 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

What about KISS? Now this SHOULD be a rule. Simple is best.

[–] jcr@jlai.lu 4 points 2 weeks ago

DRY SOLID KISS

[–] Azzu@lemmy.dbzer0.com 14 points 2 weeks ago (3 children)

The main thing you are missing is that "loose coupling" does not mean "create an interface". You can have all concrete classes and loose coupling or all classes with interfaces and strong coupling. Coupling is not about your choice of implementation, but about which part does what.

If an interface simplifies your code, then use interfaces; if it doesn't, don't. The dogma of "use an interface everywhere" comes from people who saw good developers use interfaces to reduce coupling without understanding the context in which they were used, and then thought, "hey, so interfaces reduce coupling, I guess? Let's mandate using them everywhere!" The result is interfaces where they aren't needed, without necessarily reducing coupling at all.
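A small hypothetical sketch of that distinction: both versions below use only concrete types, and the coupling difference comes entirely from what the printer is allowed to know.

```java
import java.util.ArrayList;
import java.util.List;

// Tightly coupled despite having no interfaces: the printer knows how the
// service stores its data and does the math itself.
class TightOrderService {
    List<double[]> rawRows = new ArrayList<>(); // [quantity, unitPrice] pairs, exposed directly
}

class TightReportPrinter {
    void print(TightOrderService service) {
        for (double[] row : service.rawRows) {  // depends on the storage layout
            System.out.println(row[0] * row[1]);
        }
    }
}

// Loosely coupled, still no interface: the printer only sees a finished value.
record OrderTotal(String orderId, double total) {} // Java 16+ record

class LooseReportPrinter {
    void print(OrderTotal line) {               // no idea how the total was computed
        System.out.println(line.orderId() + ": " + line.total());
    }
}
```

The second printer can be reused, tested, and changed without knowing anything about how orders are stored, even though nothing here is an interface.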

[–] HereIAm@lemmy.world 6 points 2 weeks ago (4 children)

I think a large part of interfaces-everywhere comes from unit testing and class composition. I had to create an interface for a Time class because I needed to test for cases around midnight. It would be nice if testing frameworks allowed you to mock concrete classes (maybe you can? I haven't looked into it, honestly); it could reduce the number of unnecessary interfaces.
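For what it's worth, in modern Java the JDK's own java.time.Clock can often be that seam without a hand-rolled Time interface. A rough sketch (class names invented):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;

// Hypothetical example: production code takes a Clock, tests pin it to midnight.
class CurfewChecker {
    private final Clock clock;

    CurfewChecker(Clock clock) {
        this.clock = clock;
    }

    boolean isAfterCurfew() {
        LocalTime now = LocalTime.now(clock);
        return now.isAfter(LocalTime.of(23, 0)) || now.isBefore(LocalTime.of(5, 0));
    }
}

class CurfewCheckerDemo {
    public static void main(String[] args) {
        // Freeze "now" at exactly midnight UTC to exercise the edge case.
        Clock midnight = Clock.fixed(Instant.parse("2025-01-01T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(new CurfewChecker(midnight).isAfterCurfew()); // true: 00:00 is before 05:00
    }
}
```

Clock.fixed pins "now" to whatever instant the test needs, so the midnight edge case becomes an ordinary assertion instead of a reason to introduce an interface.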

[–] unique_hemp@discuss.tchncs.de 4 points 2 weeks ago (1 children)

At least in C# with Moq you can only mock virtual methods of concrete classes, so using interfaces is still nicer in general.

[–] HereIAm@lemmy.world 4 points 2 weeks ago

Yeah Moq is what I used when I worked with .NET.

On an unrelated note; god I miss .NET so much. Fuck Microsoft and all that, but man C# and .NET feels so good for enterprise stuff compared to everything else I've worked with.

[–] JackbyDev@programming.dev 4 points 2 weeks ago

You've been able to mock concrete classes in Java for like a decade or so, probably longer. As long as I can remember at least. Using Mockito it's super easy.
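A minimal sketch of what that looks like with Mockito (assuming mockito-core on the classpath; the class and method names here are invented, and the default mock maker can't mock final classes or methods):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// A concrete class with no interface; imagine it makes a real network call.
class ExchangeRateClient {
    public double usdToEur() {
        throw new UnsupportedOperationException("real network call");
    }
}

class ExchangeRateClientTest {
    void convertsWithStubbedRate() {
        ExchangeRateClient client = mock(ExchangeRateClient.class); // mocking the concrete class
        when(client.usdToEur()).thenReturn(0.9);

        // The stubbed value comes back; no interface and no network involved.
        System.out.println(client.usdToEur()); // prints 0.9
    }
}
```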

[–] masterspace@lemmy.ca 12 points 2 weeks ago* (last edited 2 weeks ago)

The SOLID principles are just that: principles, not rules.

As someone else said, you should always write your code to be maintainable first and foremost, and extra code is extra maintenance work, so should only really be done when necessary. Don't write an abstract interface unless multiple things actually need to implement it, and don't refactor common logic until you've repeated it ~3 times.

The DRY principle is probably the most overused one, because engineers default to thinking that less code = less work, and it's a fun logic puzzle to figure out common logic and abstract it. But in reality many of these abstractions create more coupling and make your code less readable. Dan Abramov (co-author of Redux and longtime React core team member) has a really good presentation on it that's worth watching in its entirety.

But I will say that sometimes these irritations are truly just language issues at the end of the day. Java was written in an era when the object-oriented paradigm was king, whereas these days functional programming is often described as what OO programming looks like if you actually follow all the SOLID principles. Java still isn't a first-class functional language and probably never will be, because it has to maintain backwards compatibility. This is partly why more modern Java-compatible languages like Kotlin were created.

A language like C#, on the other hand, is more flexible, since it's designed to be cross-paradigm and support first-class functions and objects, and a language like JavaScript is so flexible that it has evolved and changed to suit whatever is needed of it.

Flexibility comes with a bit of a cost, but I think a lot of corporate engineers are over-fearful of new things and change, and don't properly value the hidden costs of rigidity. To give it a structural engineering analogy: a rigid tree will snap in the wind, a flexible tree will bend.

[–] aev_software@programming.dev 10 points 2 weeks ago (1 children)

The main lie about these principles is that they would lead to less maintenance work.

But go ahead and change your database model. Add a field. Then add support for it to your program's code base. Let's see how many parts of your well-architected, enterprise-grade software solution you need to change.

[–] justOnePersistentKbinPlease@fedia.io 8 points 2 weeks ago (6 children)

Sure, it might be a lot of places, or it might not (a well-designed microservice architecture says hi).

What proper OOP design does is make the required changes predictable and easy to document, which in turn can make a many-step process faster.

[–] aev_software@programming.dev 3 points 2 weeks ago (2 children)

I guess it's possible I've been doing OOP wrong for the past 30 years, knowing someone like you has experienced code bases that uphold that promise.

[–] calliope@retrolemmy.com 6 points 2 weeks ago* (last edited 2 weeks ago)

Right, knowing when to apply the principles is the thing that comes with experience.

If you’ve literally never seen the benefits of abstraction doing OOP for thirty years, I’m not sure what to tell you. Maybe you’ve just been implementing boilerplate on short-term projects.

I’ve definitely seen lots of benefits from some of the SOLID principles over the same time period, but I was using what I needed when I needed it, not implementing enterprise boilerplate blindly.

I admit this is harder with Java because the “EE” comes with it but no one is forcing you to make sure your DataAccessObject inherits from a class that follows a defined interface.

[–] melfie@lemy.lol 10 points 2 weeks ago* (last edited 2 weeks ago)

Like anything else, it can be useful in the right context if not followed too dogmatically, and instead is used when there is a tangible benefit.

For example, I nearly always inject dependencies that do I/O, because I can then inject test doubles with no I/O for fast and stable integration tests. Sometimes this also improves re-usability; for example, a client for one vendor's API can be substituted with another, but this benefit doesn't materialize that often.

I rarely inject dependencies with no side effects, because it's rare that any tangible benefit materializes, and then everyone deals with the additional complexity for years for no reason. With just the I/O dependencies injected, I've generally found no need for a DI container in most codebases, but codebases that inject everything make a DI container basically mandatory, and it's usually extra overhead for nothing, IMO. There may be codebases where injecting everything makes perfect sense, but I haven't found one yet.
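A rough Java sketch of that style (hypothetical names): only the I/O boundary gets an abstraction, and the wiring is plain constructors rather than a container.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The I/O boundary (a database, an HTTP API, ...) is the only thing abstracted.
interface UserStore {
    Optional<String> emailFor(String userId);
}

class GreetingService {
    private final UserStore store;

    GreetingService(UserStore store) {   // plain constructor injection, no DI container
        this.store = store;
    }

    String greet(String userId) {
        return store.emailFor(userId)
                    .map(email -> "Hello, " + email)
                    .orElse("Hello, stranger");
    }
}

// In tests: an in-memory fake, so the "integration" test is fast and stable.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> emails = new HashMap<>();

    InMemoryUserStore with(String id, String email) {
        emails.put(id, email);
        return this;
    }

    public Optional<String> emailFor(String userId) {
        return Optional.ofNullable(emails.get(userId));
    }
}
```

Production wiring is then a couple of `new` calls in main, and the test simply passes an InMemoryUserStore instead.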

[–] douglasg14b@lemmy.world 9 points 2 weeks ago (1 children)

The principles are perfectly fine. It's the mindless following of them that's the problem.

Your take is the same take I see with every new generation of software engineers discovering that principles, patterns, and ideas have nuance to them: engineers who, when they see someone applying a particular pattern without nuance, assume that is what the pattern means.

[–] ravachol@lemmy.world 8 points 2 weeks ago

My opinion is that you are right. I switched to C from an OOP and C# background, and it has made me a happier person.

[–] Windex007@lemmy.world 8 points 2 weeks ago (1 children)

Whoever is demanding every class be an implementation of an interface started their career in C#, guaranteed.

[–] frezik@lemmy.blahaj.zone 12 points 2 weeks ago (1 children)

Java started that shit before C# existed.

[–] Windex007@lemmy.world 7 points 2 weeks ago

In my professional experience working with both, Java shops don't blindly enforce this, but C# shops tend to.

Striving for loosely coupled classes is objectively a good thing. Using dogmatic enforcement of interfaces even for single implementors is a sledgehammer to pound a finishing nail.

[–] Sunsofold@lemmings.world 7 points 2 weeks ago

I have to wonder about how many practices in any field are really a 'best in all cases' rule vs just an 'if everyone does it like this we'll all work better together because we're all operating from the same rulebook, even if the rules are stupid,' thing or a 'this is how my pappy taught me to write it,' thing.

[–] JackbyDev@programming.dev 7 points 2 weeks ago

YAGNI ("you aren't/ain't gonna need it") is my response to making an interface for every single class. If and when we need one, we can extract an interface out. An exception to this is if I'm writing code that another team will use (as opposed to a web API), but like 99% of the code I write is only ever used by my team and doesn't have any downstream dependencies.

[–] JackbyDev@programming.dev 5 points 2 weeks ago

I'm making a separate comment for this, but people saying "Liskov substitution principle" instead of "behavioral subtyping" generally seem more interested in finding a set of rules to follow than in exploring what makes those rules useful. (Context: the L in SOLID is "Liskov substitution principle.") Barbara Liskov herself has said that the proper name for it would be behavioral subtyping.

In an interview in 2016, Liskov herself explains that what she presented in her keynote address was an "informal rule", that Jeannette Wing later proposed that they "try to figure out precisely what this means", which led to their joint publication [A behavioral notion of subtyping], and indeed that "technically, it's called behavioral subtyping".[5] During the interview, she does not use substitution terminology to discuss the concepts.

You can watch the video interview here. It's less than five minutes. https://youtu.be/-Z-17h3jG0A
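The textbook illustration of why the behavioral framing matters (a sketch, not something from the interview): Square type-checks as a Rectangle but doesn't behave like one.

```java
// Classic example: Square is a structural subtype of Rectangle,
// but not a behavioral one, because it silently changes setWidth's contract.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { this.width = w; }
    void setHeight(int h) { this.height = h; }
    int area()            { return width * height; }
}

class Square extends Rectangle {
    @Override void setWidth(int w)  { this.width = w; this.height = w; }
    @Override void setHeight(int h) { this.width = h; this.height = h; }
}

class SubtypingDemo {
    public static void main(String[] args) {
        Rectangle r = new Square();   // compiles fine: Square IS-A Rectangle structurally
        r.setWidth(2);
        r.setHeight(5);
        // Callers written against Rectangle expect 10 here; a Square gives 25.
        System.out.println(r.area());
    }
}
```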

[–] HaraldvonBlauzahn@feddit.org 5 points 2 weeks ago (2 children)

I think that OOP is most useful in two domains: Device drivers and graphical user interfaces. The Linux kernel is object-oriented.

OOP might also be useful in data structures. But you can as well think about them as "data structures with operations that keep invariants" (which is an older concept than OOP).

[–] deathmetal27@lemmy.world 4 points 2 weeks ago

One example is creating an interface for every goddamn class I make because of "loose coupling" when in reality none of these classes are ever going to have an alternative implementation.

Not only loose coupling but also performance reasons. When you initialise a class as its interface, the size of the method references you load into the method area of memory (which doesn't get garbage collected, BTW) is reduced.

Also the more I get into languages like Rust, the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

In my experience, not following SOLID principles makes your application an unmaintainable mess in roughly one year. Though SOLID needs to be coupled with better modularity to be effective.

[–] davidagain@lemmy.world 4 points 2 weeks ago

The promise of OOP is that if you thread your spaghetti through your meatballs and baste them in bolognaise sauce before you cook them, it's much simpler and nothing ever gets tangled up, so that when you come to reheat the frozen dish a month later it's very easy to swap out a meatball for a different one.

It absolutely does not even remotely live up to its promise, and if it did, no one in their right mind would be recommending an abstract singleton factory, and there wouldn't be quite so many shelves of books about how to do OOP well.

[–] alexc@lemmy.world 4 points 2 weeks ago (1 children)

SOLID is generally speaking a good idea. In practice, you have to know when to apply it.

It sounds like your main beef in Java is the need to create interfaces for every class. This is almost certainly over-engineering, especially if you are not using dependency inversion. IMHO, that is the main point of SOLID: for the most part your inversions need interfaces, and that allows you to create simple, performant unit tests.

You also mention OOP. It has its place, but I would also suggest you look at functional programming too. IMHO, OOP should be used sparingly, as it creates its own form of coupling, especially if you use "Base" classes to share functionality; that kind of sharing is usually better approached with composition. Put another way: in a mature project, if you cannot add a feature without modifying a large portion of the existing code, you have a code smell.
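A tiny hypothetical sketch of that composition point: the shared behaviour lives in a small collaborator that services own, rather than in a Base class they inherit from.

```java
// Shared behaviour as a collaborator, owned rather than inherited.
class AuditLog {
    void record(String event) { System.out.println("[audit] " + event); }
}

class InvoiceService {
    private final AuditLog audit;                 // composed in, easy to swap or omit
    InvoiceService(AuditLog audit) { this.audit = audit; }

    void issue(String invoiceId) {
        // ... business logic ...
        audit.record("issued " + invoiceId);
    }
}

class RefundService {
    private final AuditLog audit;                 // reuse without an inheritance chain
    RefundService(AuditLog audit) { this.audit = audit; }

    void refund(String invoiceId) {
        audit.record("refunded " + invoiceId);
    }
}
```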

To give you an example, I joined a company about a year ago that coded the way you are describing. Since I joined, we've been able to move towards a more functional approach. Our code is now significantly smaller, has gone from about 2% to 60% unit-testable, and our velocity is way faster. I'd also suggest that for most companies, this is what they want, not what they currently have. There are far too many legacy projects out there.

So, yes - I very much agree with SOLID but like anything it’s a guideline. My suggestion is learn how to refactor towards more functional patterns.

[–] aev_software@programming.dev 4 points 2 weeks ago (1 children)

In my experience, when applying functional programming to a language like Java, one winds up creating more interfaces and their necessary boilerplate, not fewer.

[–] dejected_warp_core@lemmy.world 4 points 2 weeks ago

Also the more I get into languages like Rust, the more these doubts are increasing and leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

Congratulations. This is where you wind up long after learning the basics, once you start interacting with lots of code in the wild. You are not alone.

Implementing things with pragmatism, when it comes to conventions and design patterns, is how it's really done.

[–] gezero@lemmy.bowyerhub.uk 4 points 2 weeks ago* (last edited 2 weeks ago)

If you are creating interfaces for classes that will never have a second implementation, that sounds suspicious. What kind of classes are you abstracting? Are those classes representing data? I would be against creating interfaces for data classes; I would use records, and reach for interfaces only in rare circumstances.

Or are you complaining about abstracting classes with logic, as in services/controllers? Are you creating tests for those? Are you mocking external dependencies in your tests? Because mocks could also be considered different implementations of your abstractions.

Some projects I saw definitely had taken the SOLID principles and made them SOLID laws... Sometimes it's an overzealous architect, sometimes it's a long-lasting project with no original devs left... The fact that you are thinking about it already puts you ahead of many others...
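For example, a record (Java 16+) covers the plain-data case with no interface and none of the usual boilerplate; a minimal sketch with invented fields:

```java
// A record as the data carrier: no interface, no hand-written getters.
record Invoice(String id, long amountCents, String currency) {}

class RecordDemo {
    public static void main(String[] args) {
        Invoice invoice = new Invoice("INV-42", 1999, "EUR");
        // equals, hashCode, toString and accessors come for free.
        System.out.println(invoice.amountCents()); // 1999
        System.out.println(invoice);               // Invoice[id=INV-42, amountCents=1999, currency=EUR]
    }
}
```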

The SOLID principles are principles for object-oriented programming, so, as others have pointed out, more functional programming might give you a way out.
