KindaABigDyl

joined 2 years ago
[–] KindaABigDyl@programming.dev 1 points 1 day ago (1 children)

Yes. I tried it for 6 months. Terrible. Takes way too long to compile

Right, but what I'm saying is that speed wasn't really the reason to use it in the first place

[–] KindaABigDyl@programming.dev 9 points 4 days ago* (last edited 4 days ago) (12 children)

For me, I always keep coming back to Arch tbh

Sometimes I get fed up with managing a whole system and, once in a blue moon, bricking my system on an update, but the alternatives are always worse, and with BTRFS now I don't have to worry about the latter problem.

Nix was the closest to pulling me away. A centralized config? Beautiful. Static package store without dependency conflicts? Beautiful. Immutable applications? The WORST idea we've ever had as a community. For instance, imo, VS Code extensions are fundamentally incompatible with Nix. I spent weeks trying multiple different approaches to get them to work, hoping one would stick. It can't be done. VS Code just has to be mutable.

Anyway, so I'm back to Arch and have been for over a year since I tried Nix (and before that, Fedora, which has its own issues). Before that I had been on Arch for 4 years.

I think I'll stay now. It's really the best option out there. In my mind, Arch is Linux, i.e. it's how an OS should be built for the Linux kernel and the FOSS ecosystem, and it won't ever be beat

[–] KindaABigDyl@programming.dev 30 points 5 days ago* (last edited 5 days ago) (6 children)

Maybe this is wrong, but my understanding is that BTRFS is generally slower than EXT4, and that's okay. It's not going for speed

Where it shines is not speed but versatility: compression, snapshots/rollback, subvolumes, etc

For example, applications do a lot of reads/writes to Documents or load resources (games especially), so use EXT4 for /home or for a dedicated /games partition

Your system, on the other hand, can be broken by config tweaks or updates, so use BTRFS there to have rollback options
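As a concrete sketch of that split, an /etc/fstab might look like this (the UUIDs and subvolume names here are made up for illustration; adapt to your own layout):

```
# BTRFS root split into subvolumes so snapshots/rollback cover the system
UUID=<root-uuid>   /           btrfs  subvol=@,compress=zstd,noatime          0 0
UUID=<root-uuid>   /.snapshots btrfs  subvol=@snapshots,compress=zstd,noatime 0 0
# EXT4 for write/read-heavy user data and game installs
UUID=<home-uuid>   /home       ext4   defaults,noatime                        0 2
UUID=<games-uuid>  /games      ext4   defaults,noatime                        0 2
```

With that layout, a bad update can be rolled back from a root snapshot without touching /home or /games at all.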

Ah yes bc it says:

You may not impose any further restrictions on the exercise of the rights granted under this License.

That was the context of the comment you replied to. Not sure why you're talking about something else

[–] KindaABigDyl@programming.dev 46 points 1 week ago* (last edited 1 week ago) (8 children)

I did it once

Used it for a month

Compilation never got faster

Miserable experience updating or installing new software

Never trying it again

Just use minimal binary distros like Arch

Or if you really want the control of Gentoo, use Nix; it's just a better system for that, since almost everything you need is prebuilt as well

[–] KindaABigDyl@programming.dev 3 points 1 week ago* (last edited 1 week ago) (1 children)

https://esolangs.org/ is a great place to find a ton of esolangs

I like clicking the "Random Page" button and surfing around on it

All the ones listed here are on there, I believe, as well as classics like Brainf**k, Malbolge, Piet, etc

Here are some of mine: https://esolangs.org/wiki/User:BlueOkiris

[–] KindaABigDyl@programming.dev 1 points 1 week ago* (last edited 1 week ago) (4 children)

Right. That's the idea. Since Cali has a dumb law, it would be illegal to download Ubuntu in California. Californians follow their law; Ubuntu has to change nothing.

But how is that a license violation on Canonical's part?

 

Not sure if this is a good place to ask for help, but I have scoured the internet and no one has a solution, so hopefully this question helps me as well as others.

I'm trying to get my computer to run at its best on Hyprland. I have an MSI Raider GE76, which has an Nvidia RTX 3080 Mobile and an Intel Tiger Lake CPU with integrated graphics.

I typically have an external ultrawide (3440x1440@60Hz) over DisplayPort, and the internal laptop display is on eDP at 1920x1080@360Hz. Note, though, that while I often have the dual-screen setup, I do need to be able to fall back to just the internal display. The Nvidia GPU drives all outputs (DP, HDMI, Thunderbolt) EXCEPT the eDP, which is connected to the Intel card.

On X11, I could use Reverse PRIME to render everything on the Nvidia card and have the Intel card just scan out whatever the Nvidia card rendered. This worked well. Unfortunately there isn't anything like that for Wayland, and I don't have a hardware MUX switch to put the eDP on the Nvidia side of things.

This means I have to use the default PRIME offload modes to run stuff on the Nvidia card, which makes the second screen incredibly laggy. Now, I can disable the i915 module and the external display becomes buttery smooth, but then I can't use my built-in display (which also means I can't use the laptop's display when I'm not connected to the external monitor).

How can I get both to work well on Wayland?

Can I run the external display exclusively on Nvidia and the internal on Intel with Prime? That could work, but idk if that's possible.

What's the optimal way to set up an external display on Wayland with an Nvidia hybrid-graphics laptop? Because right now I'm thinking of just going back to X11 and praying it gets enough support to live until I can get a decent Wayland config.
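For reference, here's a hedged sketch of the "external on Nvidia, internal on Intel" idea as a Hyprland config — the monitor names, device paths, and card ordering are assumptions (check yours with `ls -l /dev/dri/by-path` and `hyprctl monitors`), not a verified working setup:

```
# Put the Nvidia card first in the DRM device list so it becomes the
# primary render node. Older wlroots-era builds read WLR_DRM_DEVICES;
# newer Hyprland (aquamarine backend) reads AQ_DRM_DEVICES.
env = AQ_DRM_DEVICES,/dev/dri/card1:/dev/dri/card0

# Drive each output explicitly: ultrawide over DP, internal eDP panel
monitor = DP-1,3440x1440@60,0x0,1
monitor = eDP-1,1920x1080@360,3440x0,1
```

The catch is that the eDP is wired to the Intel card, so whichever card is primary still has to copy frames across for the other card's output.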

 

I created a little side project over the past few days, a new build system for C and C++: https://github.com/blueOkiris/acbs/

I've seen a lot of discourse over C build tools. None of them really seem solid except for (some) Makefiles (some Makefiles are atrocious; you just can't rely on people these days). Bazel, CMake - they're just not straightforward like a clean Makefile is; basically black magic. But setting up a Makefile from scratch is a skill, and many people just copy the same one over each time. Wouldn't it be nice if that Makefile didn't even need to be copied over?

Building C should be straightforward. Grab the C files and headers I want, set some flags, include some libraries, build, link. Instead, project build systems are way, way overcomplicated! Like, have you ever tried building any of Google's C projects? Nearly impossible to figure out and integrate with other projects.

So I've designed a simple build system for C (also C++) that is basically set up to work like a normal Makefile with gcc, but where you don't have to set it up each time. The only thing you're required to provide is the name of the binary (although you can override the defaults for your project, and yes, libraries are possible as well, not just binaries). It also includes things like delta building (only recompiling changed files) without needing any configuration.

Now there is one thing I haven't added yet - parallel building. It should be as simple as spawning a thread per file being compiled (right now it's a for loop). I know that's something a lot of people will care about, but it's not there yet. It's also really only intended to work on Linux right now, but it could probably be adjusted to work on Windows pretty easily.

Lay your project out like the minimal example, adjust as needed, and get building! The project itself is actually bootstrapped and built using whatever its latest release is, so it's its own example haha.

It's dead simple and obvious, to the point I would claim that if your project can't work with this, your project is wrong and grossly over-complicated in its design, and you should rework the build system. C is simple, and your build system should be too!

So yeah. Check it out when y'all get a chance

 

 
 