I've recently switched from np++ to Sublime over some non-standard issues -- I'd say it's closer in performance & extensibility to Vim/Emacs; though GUI-only and non-FOSS, of course.
stewie410
If we're including shell scripts/functions as "terminal commands":
- `bm()` - Quick jump to a static list of directories with some alias
    - Zoxide wasn't quite what I wanted
- `mkcd()` - Basic `mkdir && cd`
    - Some extra stuff for my specific workflow
- `bashlib` - My small, but helpful, "stdlib"
- `gitclone.sh` - `git clone` wrapper without needing to give a full URI
- `md2d.sh` - `pandoc` wrapper; I'm required to provide documentation at work in `docx` format, so I use this to avoid Office as much as possible
If we just mean terminal applications:
- `hyperfine`
- `jq` and `yq`
    - Honorable mention: `xmlstarlet`
- `shellcheck`
    - Primarily used via LSP
    - CLI tool still used when my boss sends me AI slop scripts I have to fix
A couple of bash specific items I'm using quite often these days:
- `mapfile`
- `printf '%(datefmt)T'`
    - Annoyingly, `%T` doesn't expand to true ISO-8601 compliance (see the example after this list)
- `(( expr ))`
    - Truly saves my ass every year in Advent of Code, despite the limitations
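For reference, `%(...)T` takes an strftime-style format plus a timestamp argument, where `-1` means "now":

```bash
# -1 tells printf to use the current time
printf '%(%FT%T)T\n' -1    # e.g. 2025-06-01T12:34:56 -- close, but no
                           # timezone offset unless you add %z yourself
```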
> shell built-in command
After looking into it a bit more, it's at least a builtin for bash, but is otherwise not POSIX. I guess `nohup ... &` would be the POSIX-compliant equivalent, though still not a builtin.
It's my understanding that `&` backgrounds, but doesn't necessarily detach, a process -- if the parent process closes, I think the background tasks would still be `wait()`ed on if only using `&`.
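A minimal sketch of the distinction, using a hypothetical `long_task` command:

```bash
long_task &           # backgrounded, but still in the shell's job table
long_task & disown    # bash: backgrounded, then dropped from the job table
nohup long_task &     # POSIX: backgrounded and immune to SIGHUP; output
                      # lands in nohup.out unless redirected
```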
There are times, when dealing with annoying field separators, that `awk` is the more convenient tool -- though I'm also now at the stage where I want to do as much with bash builtins as I possibly can.
You can get rid of those greps, btw:
ps aux | awk '/zoom/ && $0 !~ /awk/ {print $2}'
Or just use `pgrep`.
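For instance (`-f` matches against the full command line rather than just the process name):

```bash
pgrep zoom       # PIDs of processes named like zoom
pgrep -f zoom    # match the full command line, like the awk above
```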
I both love and hate awk -- on the one hand, it provides the same or similar functionality as tools like `sed`, `grep` and `cut`; on the other, it's a bit of a bear and can be pretty slow.
If you need more "complex" tasks done that would be cumbersome with the rest of the standard tooling, and performance is a non-issue, awk/gawk can probably get it done.
Though, I too am trying to use it as little as possible in scripts. I think multiple subshells/pipes are still better than awk in some cases. The syntax also leaves a lot to be desired...
Shouldn't you end with & disown to fully detach?
From my experience, that sounds like an added bonus.
> Thanks so much for the other stuff you use! I've been using `bm` for years
If you mean from my dotfiles, that's wild. A friend of mine wrote his own implementation in Rust, but I've not really used that version, and I'm not sure it's on GitHub.
> that honestly became kind of cumbersome when I have different configs on different servers, or machines for work vs personal, etc.
While I'm not currently using it, it's on my todo list to take a real look at chezmoi for these per-machine differences; especially as I'm always between Linux, Windows & WSL. While chezmoi is outside the scope of this topic, it seems like a pretty solid configuration-management option...and probably safer than what I'm doing (`ln -s`).
> And sometimes the exports would differ, making functions work differently, and I didn't want to just have to copy that section of my `~/.bashrc` as well every time something updated
My "solution" is a collection of templates I'll load in to my editor (nvim, with my ~~lackluster~~ plugin), which contains the basics for most scripts of a certain type. The only time that I'll write something and rely on something that isn't builtin, e.g. a customization, is if:
- It's a personal/primary machine that I'm working from
- I `require()` the item & add testing for it (`[[ -z "${var}" ]]` or `command -v`, usually) -- see the sketch below
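For illustration, a minimal sketch of the `require()` idea -- the name and shape here are hypothetical, not a standard builtin:

```bash
# hypothetical helper: bail out early if a dependency is missing
require() {
    local cmd
    for cmd in "$@"; do
        command -v "${cmd}" > /dev/null 2>&1 || {
            printf 'missing required command: %s\n' "${cmd}" >&2
            return 1
        }
    done
}

require jq pandoc || exit 1
```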
For my work, every script is usually as "batteries included" as reasonable, in whatever language I'm required to work with (bash, sh, pwsh or groovy). That said, the only items that appear in nearly every script at work are:
- Base functions for normal ops: `main()`, `show_help()`, etc.
- Some kind of logging facility with `log()`
    - colors & "levels" are a pretty recent change
- Email notifications on failure (just a `curl` wrapper for Mailgun)
> `bashly` framework
Transpiling bash into bash is probably the weirdest workflow I've ever heard of. While I can see some benefit to a "framework" mentality, if the 'compiled' result is a 30K-line script, I'm not sure how useful it is, IMO.
For me at least, I view most shell scripts as being simple automation tools, and an exercise in limitation.
> If you look through my code in particular, you'll see I use many of these bash-isms you've mentioned!
I did see some of that, even in the transpiled dtools monolith.
> `$(<file)`
Just be aware that this reads the full contents into a variable, not an array; I would generally use `mapfile`/`readarray` for multiline files. As for the jq example, you should be able to get away with `jq '.[]' < file.json`, which is also POSIX when that's a concern.
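A quick sketch of the difference, assuming a plain-text `notes.txt` and a JSON `file.json`:

```bash
contents="$(< notes.txt)"        # whole file as one string
mapfile -t lines < notes.txt     # one array element per line
printf 'read %d lines\n' "${#lines[@]}"

jq '.[]' < file.json             # jq reads stdin; no cat or $(<...) needed
```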
> maybe we should work together to update the framework to have better conventions like you've mentioned?
I don't think I'm the right person for that job -- I'm both unfamiliar with Ruby and have no desire to interact with it. I'm also pretty opinionated about shell generally, and likely not the right person to come up with a general spec for most people.
Additionally, my initial reaction -- that bashly seems like a solution in search of a problem -- probably isn't healthy for the project.
I've gotten to the point that anything "useful" enough goes in a repo -- unless it's for work, since I'd otherwise be polluting our "great" Subversion server...
Functions
I've stopped using as many functions, though some are just too handy:
- `bm()`: super basic bookmark manager, `cd` or `pushd` to some static path (rough sketches after this list)
    - I never got into everything `zoxide` has to offer
- `mkcd()`: essentially `mkdir -p && cd`, but I use it enough that I forgot it isn't standard
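Roughly what those look like -- simplified sketches, not the exact versions from my dotfiles:

```bash
mkcd() { mkdir -p -- "${1}" && cd -- "${1}"; }

bm() {
    # hypothetical static bookmark table; the real list lives in my config
    declare -A marks=( ['cfg']="${HOME}/.config" ['docs']="${HOME}/Documents" )
    cd -- "${marks[${1}]:-${HOME}}"
}
```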
I'm also primarily a WSL user these days (one day, I'll move permanently) -- to deal with `ssh-agent` shenanigans there, I also rely on `ssh.sh` in my config. I should remove `kc()` at some point, as I don't think I'll ever go back.
Scripts
Despite having a big collection of scripts, I don't use these too often; but still wanted to mention:
- `md2d.sh`: `pandoc` wrapper, mostly using it to convert markdown into docx
    - my boss has a weird requirement that all documentation shared with the team must be editable in Word...
- `gitclone.sh`: `git clone` wrapper, but I use it as `gcl -g` quite often
A lot of my more useful scripts are, unfortunately, work related -- and probably pretty niche.
"Library"
I also keep a library of sorts for reusable snippets, which I'll source as needed. The math & array libs in particular are very rarely used -- AoC, for the most part.
Config
Otherwise, my bash config is my lifeblood -- without it, I'm pretty unproductive.
dtools comments
Had a look through your repo, and I have some thoughts, if you don't mind. You may already know about several of these items, but I'm not going to sift through 30K lines to see what is/isn't known.
printf vs echo
There's a great writeup on why `echo` should be used with caution. It's probably fine, but I wanted to mention it -- personally, I'll use `echo` when I need static text and `printf` doesn't otherwise make sense.
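The classic footgun, for reference:

```bash
var='-n'
echo "$var"            # bash's echo consumes -n as an option; prints nothing
printf '%s\n' "$var"   # prints -n as data, predictably
```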
Multiline-printf vs HEREDOC
In the script, you've got something like 6K lines of `printf` statements to show various usage text. Instead, I'd recommend using HEREDOCs (`<<`).
As an example:
```bash
dtools_usage() {
    # note: cat won't expand the \e sequences below -- they're placeholders
    # here; the ANSI_FMT version further down handles color properly
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

\e[1mUsage:\e[0m
  dtools COMMAND
  dtools [COMMAND] --help | -h
  dtools --version | -v

\e[1mCommands:\e[0m
  \e[0;32mupdate\e[0m    Update the dtools CLI to the latest version
  ...
EOF
}
```
HEREDOCs can also be used for basically any stdin stream; for example:

```bash
ssh user@host << EOF
hostname
mkdir -p ~/.config/
EOF
```
bold() vs $'\e[1m'
On a related note, rather than using functions -- and, by extension, subshells (`$(...)`) -- to color text, you could do something like:
```bash
# must be declared associative, or the ['key']= subscripts get treated as arithmetic
declare -A ANSI_FMT=(
    ['norm']=$'\e[0m'
    ['red']=$'\e[31m'
    ['green']=$'\e[32m'
    ['yellow']=$'\e[33m'
    ['blue']=$'\e[34m'
    ['magenta']=$'\e[35m'
    ['cyan']=$'\e[36m'
    ['black']=$'\e[30m'
    ['white']=$'\e[37m'
    ['bold']=$'\e[1m'
    ['red_bold']=$'\e[1;31m'
    ['green_bold']=$'\e[1;32m'
    ['yellow_bold']=$'\e[1;33m'
    ['blue_bold']=$'\e[1;34m'
    ['magenta_bold']=$'\e[1;35m'
    ['cyan_bold']=$'\e[1;36m'
    ['black_bold']=$'\e[1;30m'
    ['white_bold']=$'\e[1;37m'
    ['underlined']=$'\e[4m'
    ['red_underline']=$'\e[4;31m'
    ['green_underline']=$'\e[4;32m'
    ['yellow_underline']=$'\e[4;33m'
    ['blue_underline']=$'\e[4;34m'
    ['magenta_underline']=$'\e[4;35m'
    ['cyan_underline']=$'\e[4;36m'
    ['black_underline']=$'\e[4;30m'
    ['white_underline']=$'\e[4;37m'
)
```
This sets each of these options in an associative array (or hash table, sort of), callable with `${ANSI_FMT["key"]}`, which expands like any other variable. As such, the text will be inserted directly, without needing to spawn a subshell.

Additionally, the `$'...'` syntax is a bashism that expands escape sequences directly; so `$'\t'` expands to a literal tab character. (Don't confuse it with `$"..."`, which is for locale translation -- that form expands `$` expressions like a normal double-quoted string, but does *not* expand escapes.)

Do also note that a trailing `$'\e[0m'` (or equivalent) is required with this method, as you're no longer performing the formatting in a subshell environment. I personally find this tradeoff worthwhile, though. But, I also don't use it very often.
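A one-off colored line then looks like:

```bash
printf '%s\n' "${ANSI_FMT['red_bold']}error:${ANSI_FMT['norm']} something broke"
```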
The heredoc example before would then look like:
```bash
dtools_usage() {
    cat << EOF
dtools - A CLI tool to manage all personal dev tools

${ANSI_FMT['bold']}Usage:${ANSI_FMT['norm']}
  dtools COMMAND
  dtools [COMMAND] --help | -h
  dtools --version | -v

${ANSI_FMT['bold']}Commands:${ANSI_FMT['norm']}
  ${ANSI_FMT['green']}update${ANSI_FMT['norm']}    Update the dtools CLI to the latest version
  ...
EOF
}
```
As a real-world example from a recent work project:
```bash
log() {
    # single-argument form: read the message lines from stdin
    if (( $# == 1 )); then
        mapfile -t largs
        set -- "${1}" "${largs[@]}"
        unset largs
    fi
    local rgb lvl
    case "${1,,}" in
        emerg )  rgb='\e[1;31m'; lvl='EMERGENCY';;
        alert )  rgb='\e[1;36m'; lvl='ALERT';;
        crit )   rgb='\e[1;33m'; lvl='CRITICAL';;
        err )    rgb='\e[0;31m'; lvl='ERROR';;
        warn )   rgb='\e[0;33m'; lvl='WARNING';;
        notice ) rgb='\e[0;32m'; lvl='NOTICE';;
        info )   rgb='\e[1;37m'; lvl='INFO';;
        debug )  rgb='\e[1;35m'; lvl='DEBUG';;
    esac
    # collect failure-level messages into the (global) err array
    case "${1,,}" in
        emerg | alert | crit | err ) err+=( "${@:2}" );;
    esac
    shift
    [[ -n "${nocolor}" ]] && unset rgb
    while (( $# > 0 )); do
        printf '[%(%FT%T)T] [%b%-9s\e[0m] %s: %s\n' -1 \
            "${rgb}" "${lvl}" "${FUNCNAME[1]}" "${1}"
        shift
    done | tee >(
        # strip ANSI codes before appending to the logfile
        sed --unbuffered $'s/\e[[][^a-zA-Z]*m//g' >> "${log:-/dev/null}"
    )
}
```
Here, I'm using `printf`'s `%b` to expand the color code, then later using `$'...'` with `sed` to strip those codes back out when writing to a logfile. While I'm not using an associative array in this case, I do something similar in my log.sh library.
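Usage then looks like either the multi-arg form, or piping into the single-arg form (which triggers the `mapfile` branch at the top):

```bash
log err "failed to fetch config" "falling back to defaults"
some_command 2>&1 | log warn    # one argument: message lines come from stdin
```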
One vs Many
Seeing that there are nearly 30K lines in this script, I would argue it should be split up. You can easily split scripts up to keep everything organized, or to make code reusable, by `source`ing the pieces. For example, to use the log.sh library, I would do something like:
```bash
#!/usr/bin/env bash
# $BASH_LIB == ~/.config/bash/lib
#NO_COLOR="1"
source "${BASH_LIB}/log.sh"

# set function
log() {
    log.pretty "${@}"
}

log info "foo"

# or use them directly
log.die.pretty "oopsie!"
```
Given the insane length of this monolith, splitting it up is probably worth it. The `run()` and related functions could stay within dtools, but each part could be split out to another file, which does its own subcommand argparse?
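A rough, hypothetical sketch of that layout (paths and names invented for illustration):

```bash
# dtools as a thin dispatcher; each subcommand lives in its own sourceable file
run() {
    local cmd="${1:?no subcommand given}"; shift
    local dir="${DTOOLS_CMD_DIR:-${HOME}/.local/share/dtools/cmd}"
    [[ -r "${dir}/${cmd}.sh" ]] || {
        printf 'unknown command: %s\n' "${cmd}" >&2
        return 1
    }
    source "${dir}/${cmd}.sh"    # defines cmd_main() plus its own argparse
    cmd_main "$@"
}
```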
Bashisms
The Wooledge page on Bashisms is a great writeup explaining the quirks between POSIX and bash -- more specifically, what kinds of tools are available out of the box when writing for bash specifically.
Some that I use on a regular basis:
- `&>` or `&>>`: redirect both `stdout` & `stderr` to some file/descriptor
- `|&`: shorthand for `2>&1 |`
- `var="$(< file)"`: read file contents into a variable
    - Though, I prefer `mapfile` or `readarray` for most of these cases
    - Exceptions would be in containers where those are unavailable (alpine + `bash`)
- `(( ... ))`: arithmetic expressions, including C-style for-loops
    - Makes checking numeric values much nicer: `(( myvar >= 1 ))` or `(( count++ ))`
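A quick, purely illustrative tour of those in one place:

```bash
make &>> build.log                # stdout + stderr appended to one file
make |& tee build.log             # shorthand for 2>&1 |
host="$(< /etc/hostname)"         # slurp a file without cat
for (( i = 0; i < 3; i++ )); do   # C-style loop
    (( i >= 1 )) && printf 'i=%d\n' "${i}"
done
```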
grep | awk | sed
Just wanted to note that `awk` can do basically everything. These days I tend to avoid it, but it can do it all. Using ln. 6361 as an example:
```bash
zellij_session_id="$(zellij ls | awk '
    tolower($0) ~ /current/ {
        print gensub(/\x1B[[][^a-zA-Z]*?m/, "", "G", $1)
    }
')"
```
The downside of `awk` is that it can be a little slow compared to `grep`, `sed` or `cut`. More power in a single tool, but maybe not as performant.
Shellcheck
I'm almost certain I'm preaching to the choir, but I'll add the recommendation for `shellcheck`, or `bash-language-server` more broadly.
While there isn't much it spits out for dtools, there are some items of concern, notably unclosed strings.
A Loose Offer
If interested, I could look at rewriting dtools, taking into account the items I've listed above, amongst others. Given the scope of the project, that's quite the undertaking for a new set of eyes, but I figured I'd throw it out there. Gives me something to do over the upcoming long weekend.
PuTTY
If you find yourself back on Windows, 10 & 11 both come with OpenSSH natively. Combining that with WSL even gets you X11 forwarding, if that's a useful feature.
Dunno if you saw, but office.com now refers to Office (the "launcher"?) as:

> Microsoft 365 Copilot (formerly Office)

Can't wait for Copilot OS (formerly Windows).