
So I'm sure we've all spent time writing scripts or figuring out CLIs for that one project we're working on, only to forget what we did. Then, when another project comes along later, you wish you had that script again so you could see how you did that thing you did.

Personally, I used to just check random scripts into a repo as a kind of "archive" of all my scripts. But I wanted a better way to organize and use these things.

For years I've been building and collecting these scripts into a CLI I call Devtools, where each script is a subcommand.

I've had a lot of my friends and coworkers ask me to open-source it so they could use it and see how some things are done in Bash, what tools I use, etc. So...here's that CLI!


But what I'd honestly like is more...

So what are your useful scripts or CLIs you've built? Or what's that script you wrote years ago that you now swear by? Or what's that one application you use daily that just makes your life infinitely easier! I want to grow this collection and feed the addiction!

[–] stewie410@programming.dev 4 points 1 month ago (1 children)

I've gotten to the point that anything "useful" enough goes in a repo -- unless it's for work, since I'd otherwise be polluting our "great" subversion server...

Functions

I've stopped using as many functions, though a few are just too handy (rough sketch after the list):

  • bm(): super basic bookmark manager, cd or pushd to some static path
    • I never got into everything zoxide has to offer
  • mkcd(): essentially mkdir -p && cd, but I use it enough that I forget it isn't standard
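
A rough sketch of both, since they're trivial (the bookmark paths here are made up):

mkcd() {
    # create the directory (and any parents), then enter it
    mkdir -p -- "${1}" && cd -- "${1}"
}

bm() {
    # jump to a handful of static bookmarks
    case "${1}" in
        conf )  cd -- "${HOME}/.config";;
        proj )  cd -- "${HOME}/projects";;
        * )     printf 'bm: unknown bookmark: %s\n' "${1}" >&2; return 1;;
    esac
}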

I'm also primarily a WSL user these days (one day, I'll move permanently) -- to deal with ssh-agent shenanigans there, I also rely on ssh.sh in my config. I should at some point remove kc(), as I don't think I'll ever go back.

Scripts

Despite having a big collection of scripts, I don't use these too often, but still wanted to mention:

  • md2d.sh: pandoc wrapper, mostly using it to convert markdown into docx
    • my boss has a weird requirement that all documentation shared with the team must be editable in Word...
  • gitclone.sh: git clone wrapper, but I use it as gcl -g quite often

A lot of my more useful scripts are, unfortunately, work related -- and probably pretty niche.

"Library"

I also keep a library of sorts for reusable snippets, which I'll source as needed. The math & array libs in particular are very rarely used -- AoC, for the most part.

Config

Otherwise, my bash config is my lifeblood -- without it, I'm pretty unproductive.

dtools comments

Had a look through your repo, and have some thoughts if you don't mind. You may already know about several of these items, but I'm not going to be able to sift through 30K lines to see what is/isn't known.

printf vs echo

There's a great writeup on why echo should be used with caution. It's probably fine most of the time, but I wanted to mention it -- personally, I'll use echo when I need static text and printf doesn't otherwise make sense.
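
A quick illustration of the difference, for anyone following along (the echo behavior varies by shell, which is exactly the problem):

echo "-n"              # bash parses "-n" as a flag and prints nothing
echo "two\nlines"      # literal "\n" in bash, a real newline in dash
printf '%s\n' "-n"     # always prints -n
printf 'two\nlines\n'  # always prints two lines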

Multiline-printf vs HEREDOC

In the script, you've got like 6K lines of printf statements to show various usage text. Instead, I'd recommend using HEREDOCs (<<).

As an example:

dtools_usage() {
    # NB: cat won't interpret these \e escapes; see the ANSI_FMT version further down
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

\e[1mUsage:\e[0m
    dtools COMMAND
    dtools [COMMAND] --help | -h
    dtools --version | -v

\e[1mCommands:\e[0m
    \e[0;32mupdate\e[0m     Update the dtools CLI to the latest version
    ...
EOF
}

HEREDOCs can also be used for basically any stdin stream; for example:

ssh user@host << EOF
hostname
mkdir -p ~/.config/
EOF
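
One wrinkle worth noting: with an unquoted delimiter, the local shell expands $variables and $(...) in the heredoc body before ssh ever sees them. Quote the delimiter to defer everything to the remote side:

ssh user@host << 'EOF'
echo "${HOME}"    # the remote home directory, not the local one
EOF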

bold() vs $'\e[1m'

On a related note, rather than using functions, and by extension subshells ($(...)), to color text, you could do something like:

declare -A ANSI_FMT=(
    ['norm']=$'\e[0m'
    
    ['red']=$'\e[31m'
    ['green']=$'\e[32m'
    ['yellow']=$'\e[33m'
    ['blue']=$'\e[34m'
    ['magenta']=$'\e[35m'
    ['cyan']=$'\e[36m'
    ['black']=$'\e[30m'
    ['white']=$'\e[37m'

    ['bold']=$'\e[1m'
    ['red_bold']=$'\e[1;31m'
    ['green_bold']=$'\e[1;32m'
    ['yellow_bold']=$'\e[1;33m'
    ['blue_bold']=$'\e[1;34m'
    ['magenta_bold']=$'\e[1;35m'
    ['cyan_bold']=$'\e[1;36m'
    ['black_bold']=$'\e[1;30m'
    ['white_bold']=$'\e[1;37m'

    ['underlined']=$'\e[4m'
    ['red_underline']=$'\e[4;31m'
    ['green_underline']=$'\e[4;32m'
    ['yellow_underline']=$'\e[4;33m'
    ['blue_underline']=$'\e[4;34m'
    ['magenta_underline']=$'\e[4;35m'
    ['cyan_underline']=$'\e[4;36m'
    ['black_underline']=$'\e[4;30m'
    ['white_underline']=$'\e[4;37m'
)

This sets each of these options in an associative array (bash's take on a hash table), callable with ${ANSI_FMT["key"]}, which expands like any other variable. As such, the text is inserted directly without needing to spawn a subshell. Note the declare -A: without it, bash would treat those keys as arithmetic expressions.

Additionally, the $'...' syntax is a bashism that expands escape sequences directly, so $'\t' expands to a literal tab character. Its sibling $"..." is actually for locale translation and behaves like a normal double-quoted string: $ expressions are expanded, but escapes are not. Compare $"\e[31m$HOME\e[0m" (expands $HOME, leaves \e alone) with $'\e[31m$HOME\e[0m' (expands \e, leaves $HOME alone).
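
A quick check in a shell makes the difference obvious:

printf '%s\n' $'\e[31m$HOME\e[0m'   # colored output, literal text: $HOME
printf '%s\n' $"\e[31m$HOME\e[0m"   # plain output, expanded path: \e[31m/home/you\e[0m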

Do also note that a trailing $'\e[0m' (or equivalent reset) is required with this method, as you're no longer performing the formatting in a function that resets for you. I personally find this tradeoff worthwhile, though I also don't use it very often.

The heredoc example before would then look like:

dtools_usage() {
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

${ANSI_FMT['bold']}Usage:${ANSI_FMT['norm']}
    dtools COMMAND
    dtools [COMMAND] --help | -h
    dtools --version | -v

${ANSI_FMT['bold']}Commands:${ANSI_FMT['norm']}
    ${ANSI_FMT['green']}update${ANSI_FMT['norm']}     Update the dtools CLI to the latest version
    ...
EOF
}

As a real-world example from a recent work project:

log() {
    # single argument (just a level): read the message lines from stdin
    if (( $# == 1 )); then
        mapfile -t largs
        set -- "${1}" "${largs[@]}"
        unset largs
    fi

    # map the level to a color & label
    local rgb lvl
    case "${1,,}" in
        emerg )     rgb='\e[1;31m'; lvl='EMERGENCY';;
        alert )     rgb='\e[1;36m'; lvl='ALERT';;
        crit )      rgb='\e[1;33m'; lvl='CRITICAL';;
        err )       rgb='\e[0;31m'; lvl='ERROR';;
        warn )      rgb='\e[0;33m'; lvl='WARNING';;
        notice )    rgb='\e[0;32m'; lvl='NOTICE';;
        info )      rgb='\e[1;37m'; lvl='INFO';;
        debug )     rgb='\e[1;35m'; lvl='DEBUG';;
    esac
    # collect error-level messages for reporting later
    case "${1,,}" in
        emerg | alert | crit | err ) err+=( "${@:2}" );;
    esac
    shift

    [[ -n "${nocolor}" ]] && unset rgb

    # print timestamped, colored lines; tee a color-stripped copy into the logfile
    while (( $# > 0 )); do
        printf '[%(%FT%T)T] [%b%-9s\e[0m] %s: %s\n' -1 \
            "${rgb}" "${lvl}" "${FUNCNAME[1]}" "${1}"
        shift
    done | tee >(
        sed --unbuffered $'s/\e[[][^a-zA-Z]*m//g' >> "${log:-/dev/null}"
    )
}

Here, I'm using printf's %b to expand the color code, then later using $'...' with sed to strip those out for writing to a logfile. While I'm not using an associative array in this case, I do something similar in my log.sh library.
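
If it helps, usage ends up looking roughly like this (command names made up):

log info "starting sync" "dry run first"   # one log line per argument
some_command |& log err                    # single-argument form reads messages from stdin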

One vs Many

Seeing that there's nearly 30K lines in this script, I would argue it should be split up. You can easily split scripts up to keep everything organized, or to make code reusable, by sourcing the other script. For example, to use the log.sh library, I would do something like:

#!/usr/bin/env bash

# $BASH_LIB == ~/.config/bash/lib
#NO_COLOR="1"
source "${BASH_LIB}/log.sh"

# set function
log() {
    log.pretty "${@}"
}

log info "foo"

# or use them directly
log.die.pretty "oopsie!"

Given the insane length of this monolith, splitting it up is probably worth it. The run() and related functions could stay within dtools, but each subcommand could be split out to another file, which does its own argparse?
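
To sketch what I mean (the layout and the DTOOLS_HOME variable are hypothetical, not how your repo is structured today):

#!/usr/bin/env bash
# dtools: thin dispatcher; each subcommand lives in its own file
cmd="${1:?usage: dtools COMMAND [ARGS...]}"
shift

sub="${DTOOLS_HOME:-${HOME}/.local/share/dtools}/commands/${cmd}.sh"
if [[ ! -r "${sub}" ]]; then
    printf 'dtools: unknown command: %s\n' "${cmd}" >&2
    exit 1
fi

# the sourced file defines run() and handles its own argument parsing
source "${sub}"
run "${@}"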

Bashisms

The Wooledge page on Bashisms is a great writeup explaining the quirks between POSIX sh and bash -- more specifically, what kinds of tools are available out of the box when writing for bash specifically.

Some that I use on a regular basis (quick demo after the list):

  • &> or &>>: redirect both stdout & stderr to some file/descriptor
  • |&: shorthand for 2>&1 |
  • var="$(< file)": read file contents into a variable
    • Though, I prefer mapfile or readarray for most of these cases
    • Exceptions would be in containers where those are unavailable (alpine + bash)
  • (( ... )): arithmetic expressions, including C-style for-loops
    • Makes checking numeric values much nicer: (( myvar >= 1 )) or (( count++ ))
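
A quick demo of each (filenames made up):

# redirect stdout & stderr, overwrite or append
make &> build.log
make &>> build.log

# shorthand for 2>&1 |
make |& tee build.log

# file contents into a variable, or an array
contents="$(< config.txt)"
mapfile -t lines < config.txt

# arithmetic comparisons & C-style loops
if (( ${#lines[@]} >= 1 )); then
    for (( i = 0; i < ${#lines[@]}; i++ )); do
        printf '%d: %s\n' "${i}" "${lines[i]}"
    done
fi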

grep | awk | sed

Just wanted to note that awk can do basically everything, even though these days I tend to avoid it. Using ln. 6361 as an example:

zellij_session_id="$(zellij ls | awk '
    tolower($0) ~ /current/ {
        print gensub(/\x1B[[][^a-zA-Z]*m/, "", "G", $1)
    }
')"

The downside of awk is that it can be a little slow compared to grep, sed or cut. More power in a single tool, but maybe not as performant.
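
For comparison, an (untested) pipeline version of the same thing -- note that cut is cruder about field splitting than awk:

zellij_session_id="$(zellij ls \
    | grep -i 'current' \
    | sed $'s/\e[[][^a-zA-Z]*m//g' \
    | cut -d' ' -f1)"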

Shellcheck

I'm almost certain I'm preaching to the choir, but I'll add the recommendation for shellcheck, or bash-language-server more broadly.

While there's not much it spits out for dtools, there are some items of concern, notably unclosed strings.

A Loose Offer

If interested, I could look at rewriting dtools taking into account the items I've listed above, amongst others. Given the scope of the project, that's quite the undertaking for a new set of eyes, but I figured I'd throw it out there. It'd give me something to do over the upcoming long weekend.

[–] aclarke@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

Thanks so much for the other stuff you use! I've been using bm for years, but I haven't used mkcd, so I'm definitely going to add that.

I'm going to give your library, config, and AoC a good look because that's exactly what I was hoping for in this conversation! :)

In general: the only time I add something to my ~/.bashrc is when it's either an alias for something simple or a very simple function. Anything that requires more legwork or is bigger than a few lines goes in dtools. I used to put it all in my ~/.bashrc, but that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc. And sometimes the exports would differ, making functions work differently, and I didn't want to have to copy that section of my ~/.bashrc every time something updated -- hence why I created the dtools repo!

To address your other comments, I'm going to do my best to reply in the order they show up:

printf vs echo through bold() vs $'\e[1m'

The dtools script is actually compiled, not written by me. So in the vast majority of the project, my code is all in the src directory, not in the dtools script. In fact, my repo without compilation consists of only 3k lines. The compiled code in the script then makes up the completions, coloring, some error handling, validations, filters, help messages, constraints (like conflicting flags), etc.

So many of the echo calls you see are from the bashly framework, not my code. I often use heredocs for longer or multiline strings (being SUPER careful when using <<-EOF to make sure my damn editor is using TABs... that's such a nightmare otherwise 😂).

If you look through my code in particular, you'll see I use many of these bash-isms you've mentioned!

So the One vs Many comment is exactly how the repo works! Each subcommand is its own directory and file. So, for example: All dtools aws commands are in the src/commands/aws directory. And any further subcommands like dtools aws secretsmanager are in another subdirectory where each command has its own individual script!

Bashisms

I'm familiar with many of the bashisms you mentioned, except the var="$(< file)" one -- that's awesome! I've been trying to migrate away from using cat just to output file contents and use more direct, purpose-built methods that are often built into tools (like jq '.[]' file.json instead of cat file.json | jq '.[]'). That said, when I'm trying to read each line into an iterable array, I often use readarray too.

grep | awk | sed

I've been trying for years to get more people to look into awk because it's amazing! It's so undervalued! sed takes some getting used to with the pattern and hold space but it's worth the initial suffering 😛

Shellcheck

I've got my Helix editor set up with Shellcheck! It's awesome! You'll notice if you look at my code directly that there are a number of places where I have to do # shellcheck disable=SC2154 (variable is referenced but not assigned). This is because the framework creates and passes those variables to my scripts for me.

A Loose Offer

You seem a lot like me in that you do a LOT of bash scripting! So I'll admit that I've looked at the compiled code, and while there's a lot going on in it, the most important code is mine and I agree with most of what the framework generates. But I've also been a bit concerned about how often it spawns subshells when it doesn't have to.

I think I can fix some of them with associative arrays if I add a minimum bash version requirement in my config, but I've honestly never tried. I'll check that out now!

Since you make a solid point about a lot of this that should maybe be updated in the Bashly framework, maybe we should work together to update the framework to have better conventions like you've mentioned?

[–] stewie410@programming.dev 1 points 1 month ago

Thanks so much for the other stuff you use! I’ve been using bm for years

If you mean bm from my dotfiles, that's wild. A friend of mine wrote his own implementation in Rust, though I've not really used their version, and I'm not sure it's on GitHub.

that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc.

While I'm not currently using it, it's on my todo list to take a real look at chezmoi for these per-machine differences, especially as I'm always between Linux, Windows & WSL. While chezmoi is outside the scope of this topic, it seems like a pretty solid configuration-management option... and probably safer than what I'm doing (ln -s).

And sometimes the exports would differ making functions work differently and I didn’t want to just have to copy that section of my ~/.bashrc as well every time something updated

My "solution" is a collection of templates I'll load in to my editor (nvim, with my ~~lackluster~~ plugin), which contains the basics for most scripts of a certain type. The only time that I'll write something and rely on something that isn't builtin, e.g. a customization, is if:

  • It's a personal/primary machine that I'm working from
  • I require() the item & add testing for it (minimal sketch below)
    • [[ -z "${var}" ]], or command -v usually
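
A minimal require() along those lines might look something like this (simplified sketch, not my actual version):

require() {
    # succeed if the command exists, otherwise complain & fail
    command -v "${1}" &> /dev/null && return
    printf 'missing dependency: %s\n' "${1}" >&2
    return 1
}

require jq || exit 1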

For my work, every script is usually as "batteries included" as reasonable, in whatever language I'm required to work with (bash, sh, pwsh or groovy). That said, the only items that appear in nearly every script at work are:

  • Base functions for normal ops: main(), show_help(), etc.
  • Some kind of logging facility with log()
    • colors & "levels" are a pretty recent change
  • Email notifications on failure (just a curl wrapper for Mailgun)

bashly framework

Transpiling bash into bash is probably the weirdest workflow I've ever heard of. While I can see some benefit to the "framework" mentality, if the 'compiled' result is a 30K-line script, I'm not sure how useful it is, IMO.

For me at least, I view most shell scripts as being simple automation tools, and an exercise in limitation.

If you look through my code in particular, you’ll see I use many of these bash-isms you’ve mentioned!

I did see some of that, even in the transpiled dtools monolith.

$(<file)

Just be aware that this reads the full contents into a single variable, not an array; I would generally use mapfile/readarray for multiline files. As for the jq example, you should be able to get away with jq '.[]' < file.json, which is also POSIX when that's a concern.
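
Side by side (filename made up):

contents="$(< notes.txt)"        # whole file as one string
mapfile -t lines < notes.txt     # one array element per line
printf '%s\n' "${lines[0]}"      # first line only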

maybe we should work together to update the framework to have better conventions like you’ve mentioned?

I don't think I'm the right person for that job -- I'm both unfamiliar with Ruby and have no desire to interact with it. I'm also pretty opinionated about shell generally, and likely not the right person to come up with a general spec for most people.

Additionally, my initial reaction (that bashly seems like a solution in search of a problem) probably isn't healthy for the project.