I've gotten to the point that anything "useful" enough goes in a repo -- unless it's for work, since I'd otherwise be polluting our "great" Subversion server...
Functions
I've stopped using as many functions, though a few are just too handy:

- `bm()`: super basic bookmark manager, `cd` or `pushd` to some static path. I never got into everything `zoxide` has to offer.
- `mkcd()`: essentially `mkdir -p && cd`, but I use it enough that I forgot it isn't standard.
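For reference, a minimal sketch of what these might look like -- the bodies (and the `BOOKMARKS` table) are my guesses, not the actual functions:

```bash
# mkcd: mkdir -p the target, then cd into it
mkcd() {
  mkdir -p -- "${1}" && cd -- "${1}"
}

# bm: bare-bones bookmark manager -- jump to a static path by key
# (sketch only; the paths here are placeholders)
declare -A BOOKMARKS=(
  ['cfg']="${HOME}/.config"
  ['src']="${HOME}/src"
)
bm() {
  cd -- "${BOOKMARKS[${1}]:?no such bookmark}"
}
```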
I'm also primarily a WSL user these days (one day, I'll move permanently) -- to deal with ssh-agent shenanigans there, I also rely on `ssh.sh` in my config. I should at some point remove `kc()`, as I don't think I'll ever go back.
Scripts
Despite having a big collection of scripts, I don't use these too often; but still wanted to mention:
- `md2d.sh`: pandoc wrapper, mostly using it to convert markdown into docx -- my boss has a weird requirement that all documentation shared with the team must be editable in Word...
- `gitclone.sh`: `git clone` wrapper, but I use it as `gcl -g` quite often
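A wrapper like md2d.sh probably boils down to something in this shape (a sketch; the real script and its flags aren't shown here, and the output-name handling is my assumption):

```bash
# md2d: convert a markdown file to docx via pandoc
# usage: md2d input.md [output.docx]
md2d() {
  local in="${1:?usage: md2d input.md [output.docx]}"
  # default the output name to the input with a .docx extension
  local out="${2:-${in%.md}.docx}"
  pandoc --from gfm --to docx --output "${out}" -- "${in}"
}
```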
A lot of my more useful scripts are, unfortunately, work related -- and probably pretty niche.
"Library"
I also keep a library of sorts for reusable snippets, which I'll source as needed. The math & array libs in particular are very rarely used -- AoC, for the most part.
Config
Otherwise, my bash config is my lifeblood -- without it, I'm pretty unproductive.
dtools comments
Had a look through your repo, and have some thoughts if you don't mind. You may already know about several of these items, but I'm not going to be able to sift through 30K lines to see what is/isn't known.
printf vs echo
There's a great writeup on why echo should be used with caution. It's probably fine, but I wanted to mention it -- personally, I'll use echo only when I need static text, and printf otherwise.
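A quick illustration of the classic pitfall, in case it helps:

```bash
var='-n'
echo "$var"           # bash's builtin echo eats this: -n parses as an option
printf '%s\n' "$var"  # prints -n, as intended
```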
Multiline-printf vs HEREDOC
In the script, you've got something like 6K lines of printf statements to show various usage text. Instead, I'd recommend using heredocs (`<<`).
As an example:
```bash
dtools_usage() {
  # note: backslash escapes like \e are NOT expanded by cat in a
  # heredoc -- see the ANSI_FMT variant further down for a working form
  cat << EOF
dtools - A CLI tool to manage all personal dev tools

\e[1mUsage:\e[0m
  dtools COMMAND
  dtools [COMMAND] --help | -h
  dtools --version | -v

\e[1mCommands:\e[0m
  \e[0;32mupdate\e[0m  Update the dtools CLI to the latest version
  ...
EOF
}
```
Heredocs can also be used for basically any stdin stream; for example:

```bash
ssh user@host << EOF
hostname
mkdir -p ~/.config/
EOF
```
bold() vs $'\e[1m'
On a related note, rather than using functions -- and by extension subshells (`$(...)`) -- to color text, you could do something like:
```bash
# must be declared associative, or the ['key']= subscripts misbehave
declare -A ANSI_FMT=(
  ['norm']=$'\e[0m'
  ['red']=$'\e[31m'
  ['green']=$'\e[32m'
  ['yellow']=$'\e[33m'
  ['blue']=$'\e[34m'
  ['magenta']=$'\e[35m'
  ['cyan']=$'\e[36m'
  ['black']=$'\e[30m'
  ['white']=$'\e[37m'
  ['bold']=$'\e[1m'
  ['red_bold']=$'\e[1;31m'
  ['green_bold']=$'\e[1;32m'
  ['yellow_bold']=$'\e[1;33m'
  ['blue_bold']=$'\e[1;34m'
  ['magenta_bold']=$'\e[1;35m'
  ['cyan_bold']=$'\e[1;36m'
  ['black_bold']=$'\e[1;30m'
  ['white_bold']=$'\e[1;37m'
  ['underlined']=$'\e[4m'
  ['red_underline']=$'\e[4;31m'
  ['green_underline']=$'\e[4;32m'
  ['yellow_underline']=$'\e[4;33m'
  ['blue_underline']=$'\e[4;34m'
  ['magenta_underline']=$'\e[4;35m'
  ['cyan_underline']=$'\e[4;36m'
  ['black_underline']=$'\e[4;30m'
  ['white_underline']=$'\e[4;37m'
)
```
This sets each of these options in an associative array (a hash table, of sorts), readable with `${ANSI_FMT["key"]}`, which expands like any other variable. As such, the text is inserted directly without needing to spawn a subshell.
Additionally, the `$'...'` syntax is a bashism that expands escape sequences directly, so `$'\t'` expands to a literal tab character. Note that parameter expansion does not happen inside it: `$'\e[31m$HOME\e[0m'` contains a literal `$HOME`. (The similar-looking `$"..."` form is for locale translation, and does not expand escapes.)
Do also note that `$'\e[0m'` (or equivalent) is required with this method, as the reset is no longer handled for you by a bold()-style function. I personally find this tradeoff worthwhile, though. But, I also don't use it very often.
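To make the subshell tradeoff concrete, here's a toy comparison (the `bold()` here is my stand-in, not the one from dtools):

```bash
# the function approach: every call costs a $(...) subshell
bold() { printf '\e[1m%s\e[0m' "${1}"; }
echo "$(bold 'hello')"

# the variable approach: plain expansion, no subshell spawned
BOLD=$'\e[1m' NORM=$'\e[0m'
echo "${BOLD}hello${NORM}"
```

Both print the same bytes; the second just does it without forking.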
The heredoc example from before would then look like:

```bash
dtools_usage() {
  cat << EOF
dtools - A CLI tool to manage all personal dev tools

${ANSI_FMT['bold']}Usage:${ANSI_FMT['norm']}
  dtools COMMAND
  dtools [COMMAND] --help | -h
  dtools --version | -v

${ANSI_FMT['bold']}Commands:${ANSI_FMT['norm']}
  ${ANSI_FMT['green']}update${ANSI_FMT['norm']}  Update the dtools CLI to the latest version
  ...
EOF
}
```
As a real-world example from a recent work project:
```bash
log() {
  if (( $# == 1 )); then
    mapfile -t largs
    set -- "${1}" "${largs[@]}"
    unset largs
  fi
  local rgb lvl
  case "${1,,}" in
    emerg  ) rgb='\e[1;31m'; lvl='EMERGENCY';;
    alert  ) rgb='\e[1;36m'; lvl='ALERT';;
    crit   ) rgb='\e[1;33m'; lvl='CRITICAL';;
    err    ) rgb='\e[0;31m'; lvl='ERROR';;
    warn   ) rgb='\e[0;33m'; lvl='WARNING';;
    notice ) rgb='\e[0;32m'; lvl='NOTICE';;
    info   ) rgb='\e[1;37m'; lvl='INFO';;
    debug  ) rgb='\e[1;35m'; lvl='DEBUG';;
  esac
  case "${1,,}" in
    emerg | alert | crit | err ) err+=( "${@:2}" );;
  esac
  shift
  [[ -n "${nocolor}" ]] && unset rgb
  while (( $# > 0 )); do
    printf '[%(%FT%T)T] [%b%-9s\e[0m] %s: %s\n' -1 \
      "${rgb}" "${lvl}" "${FUNCNAME[1]}" "${1}"
    shift
  done | tee >(
    sed --unbuffered $'s/\e[[][^a-zA-Z]*m//g' >> "${log:-/dev/null}"
  )
}
```
Here, I'm using printf's `%b` to expand the color code, then later using `$'...'` with sed to strip those codes back out when writing to a logfile. While I'm not using an associative array in this case, I do something similar in my log.sh library.
One vs Many
Seeing that there's nearly 30K lines in this script, I would argue it should be split up. You can easily split scripts up to keep everything organized, or to make code reusable, by `source`ing the script. For example, to use the log.sh library, I would do something like:
```bash
#!/usr/bin/env bash
# $BASH_LIB == ~/.config/bash/lib
#NO_COLOR="1"
source "${BASH_LIB}/log.sh"

# set function
log() {
  log.pretty "${@}"
}

log info "foo"

# or use them directly
log.die.pretty "oopsie!"
```
Given the insane length of this monolith, splitting it up is probably worth it. The `run()` and related functions could stay within dtools, but each part could be split out to another file, which does its own subcommand argparse?
Bashisms
The Wooledge page on Bashisms is a great writeup explaining the quirks between POSIX sh and bash -- more specifically, what kinds of tools are available out of the box when writing for bash specifically.
Some that I use on a regular basis:
- `&>` or `&>>`: redirect both stdout & stderr to some file/descriptor
- `|&`: shorthand for `2>&1 |`
- `var="$(< file)"`: read file contents into a variable
  - Though, I prefer `mapfile` or `readarray` for most of these cases
  - Exceptions would be in containers where those are unavailable (alpine + bash)
- `(( ... ))`: arithmetic expressions, including C-style for-loops
  - Makes checking numeric values much nicer: `(( myvar >= 1 ))` or `(( count++ ))`
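A few of these in action, as a small self-contained demo (assumes bash 4+):

```bash
tmpfile="$(mktemp)"

# &>: redirect both stdout and stderr to a file
errdemo() { echo 'to stdout'; echo 'to stderr' >&2; }
errdemo &> "${tmpfile}"

# |&: shorthand for 2>&1 |
errdemo |& grep -q 'to stderr'

# var="$(< file)": slurp a file without spawning cat
contents="$(< "${tmpfile}")"

# mapfile: read lines into an array instead
mapfile -t lines < "${tmpfile}"

# (( ... )): arithmetic, including C-style for-loops
count=0
for (( i = 0; i < ${#lines[@]}; i++ )); do
  (( ++count ))
done

printf '%d lines\n' "${count}"
rm -f "${tmpfile}"
```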
grep | awk | sed
Just wanted to note that awk can do basically everything grep and sed can. These days I tend to avoid it, but it can do it all. Using ln. 6361 as an example:
```bash
zellij_session_id="$(zellij ls | awk '
  tolower($0) ~ /current/ {
    print gensub(/\x1B[[][^a-zA-Z]*m/, "", "G", $1)
  }
')"
```

(Note that gawk's ERE engine has no lazy `*?` quantifier; a plain `*` does the job here.)
The downside of awk is that it can be a little slow compared to grep, sed or cut. More power in a single tool, but maybe not as performant.
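For comparison, a rough grep/sed version of that same pipeline might look like this -- a sketch only, since I'm assuming `zellij ls` marks the active session with "current" and prefixes the name with color codes:

```bash
# grab the "current" zellij session name without awk:
# strip ANSI codes, then cut everything after the first field
current_zellij_session() {
  zellij ls \
    | grep -i 'current' \
    | sed -e $'s/\e[[][^a-zA-Z]*m//g' -e 's/[[:space:]].*//'
}
```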
Shellcheck
I'm almost certain I'm preaching to the choir, but will add the recommendation for shellcheck or bash-language-server broadly.
While there isn't much it spits out for dtools, there are some items of concern, notably unclosed strings.
A Loose Offer
If interested, I could look at rewriting dtools taking into account the items I've listed above, amongst others. Given the scope of the project, that's quite the undertaking for a new set of eyes, but figured I'd throw it out there. It'd give me something to do over the upcoming long weekend.