melezhik

joined 2 years ago
 

Nice screenshots are attached )

 

Tomtit is a CLI task runner for when you need to run repetitive tasks / tools around your project. It's similar to make, but with a more generic approach, not tied to build tasks only
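
For a rough flavor, a Tomtit scenario is (as far as I understand the project docs) just a Raku file using the Sparrow6 DSL, kept under the project's .tom/ directory and run by name; the scenario name and commands below are purely illustrative:

.tom/build.raku:

# illustrative scenario, run as `tom build`
# (assumes the Sparrow6 DSL `bash` function is available inside Tomtit scenarios)
bash "zef build .";
bash "zef test .";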

 

After one Rakulang community member and bioinformatics developer mentioned the Nextflow data pipeline framework, I was surprised to find that the Sparky and Sparrow6 ecosystem could be a good fit for that type of task ...

 

Just create a .env/vars.env file which is not kept in source control, define some Bash variables with sensitive data there, and then run the sparrowdo CLI referencing those variables in a safe way:

--tags password=.env[PASSWORD],token=.env[TOKEN]

  • variables are not exposed in Bash history
  • not visible via ps aux
  • the variables file gets transferred to the remote host over scp
  • the file with variables is not kept on the remote host ( it is removed immediately after being exported to the Sparrowdo scenario )
  • host-specific vs default env variables are supported

Safe and simple
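
For illustration, the vars.env file is just plain Bash variable assignments, and the sparrowdo run references them by name rather than by value (the host address and values below are made up):

# .env/vars.env - kept out of source control
PASSWORD=s3cr3t
TOKEN=abc123

# the actual values never appear on the command line
sparrowdo --host=192.168.0.1 --tags password=.env[PASSWORD],token=.env[TOKEN]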

[–] melezhik@programming.dev 2 points 7 months ago

TL;DR:

Sparky is a distributed jobs framework that allows orchestration of remote tasks on a cluster of nodes. It's simple to set up and easy to use. This post is a brief overview of Sparky's architecture design.

Sparky's target audience is:

  • cloud providers managing underlying multi-host infrastructure
  • data scientists processing data in a distributed manner (aka data pipelines)
  • software engineers and devops doing any tasks of a distributed nature
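
As a very rough sketch of the setup (from memory of the Sparky README, so details may differ): a Sparky node picks up projects from ~/.sparky/projects/, and each project is just a directory with a sparrowfile (Sparrow6 DSL) plus an optional sparky.yaml for scheduling; the names and contents below are illustrative:

# ~/.sparky/projects/my-pipeline/sparrowfile
bash "echo processing data chunk on this node";

# ~/.sparky/projects/my-pipeline/sparky.yaml (optional; field name as I recall it, check the docs)
crontab: "*/30 * * * *"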

[–] melezhik@programming.dev 1 points 7 months ago* (last edited 7 months ago)

You may try out https://github.com/melezhik/sparky which is a local / remote task runner with a nice front end, and scripts can be written in many languages

 

This URL provides a collection ( 17 recipes ) of Raku/Sparrow snippets to munge your data. Raku provides a powerful regex mechanism to search text, and Sparrow adds some high-level blocks to make it even easier; all this together lets users apply Raku to daily data-processing tasks as an alternative to well-known solutions like sed/grep/awk/perl
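
To give a flavor of the kind of snippet involved (this is not one of the recipes, just an illustrative grep-style example with a made-up file name):

# print lines of access.log that contain an IPv4-looking address
for "access.log".IO.lines -> $line {
    say $line if $line ~~ / \d ** 1..3 [ '.' \d ** 1..3 ] ** 3 /;
}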

Every recipe is an example of how to solve a real user task ( a Stack Overflow question ) which you may compare with the other ( non-Raku ) solutions at the same link and form your own opinion. Don't forget to give the authors some credit by voting up on Stack Overflow if you like them ))

PS all recipes are tested by myself; I appreciate any suggestions, improvements or bug reports

[–] melezhik@programming.dev 1 points 8 months ago

Yep. Fancy devs watching me coding some Rakulang in nano 😂

[–] melezhik@programming.dev 12 points 10 months ago* (last edited 10 months ago)

Ok. "I am a good FOSS developer"

[–] melezhik@programming.dev 1 points 10 months ago

Thanks, will take a look

[–] melezhik@programming.dev -2 points 10 months ago* (last edited 10 months ago) (2 children)

nano is the best (imho) for up to medium-sized files. It's preinstalled on most Linux boxes, it's simple and flexible enough, and it takes a minimal amount of time to learn the basic keys and then use them all the time

[–] melezhik@programming.dev 1 points 10 months ago* (last edited 10 months ago)

Not a generator, a validator. It validates configuration files. Ansible is not flexible in comparison with Sparrow; you'd need to write more boilerplate code to do the same ... Also, core Ansible modules' search is limited to a "one line" mode, so it does not allow searching, for example, within nested structures ( say, when we want something in between or inside nested blocks ), or searching for sequences, like a sequence of strings a,b,c,d etc. Sparrow does allow all that, as it has ranges/sequential/SLN search by design. Sparrow also allows generating check rules at runtime; Ansible can't

[–] melezhik@programming.dev 2 points 10 months ago* (last edited 10 months ago)

fair enough, however the intention is to show how one could create rules with Sparrow/Raku, not to show the rules themselves ... Maybe I should have mentioned that ...

for example, this is a more interesting example: evaluation of net.ipv4.tcp_synack_retries

regexp: ^^ "net.ipv4.tcp_synack_retries" \s* "=" \s* (\d+) \s* $$

generator: <<RAKU
!raku
# matched() holds the lines that matched the regexp above
if matched().elems {
  my $v = capture()[0]; # first captured group - the numeric value of the setting
  say "note: net.ipv4.tcp_synack_retries={$v}";
  if $v >= 3 && $v <= 5 {
     say "assert: 1 net.ipv4.tcp_synack_retries in [3..5] range"
  } else {
     say "assert: 0 net.ipv4.tcp_synack_retries in [3..5] range"
  }
} else {
  say "note: net.ipv4.tcp_synack_retries setting not found"
}
RAKU
[–] melezhik@programming.dev 2 points 10 months ago (2 children)

sorry, could you please elaborate on "shouldn’t copy" ? thanks

[–] melezhik@programming.dev 1 points 10 months ago* (last edited 10 months ago)

you seem to have edited your initial reply - "it should be sysctl.conf not syslog.conf" - anyway, thanks for that, it's fixed now; this was just an oversight typo

 

Hi! SparrowHub maintainer here. Sparrow is an alternative to Ansible written in Raku. Users can create reusable tasks in many programming languages and run them via Raku SDK scenarios.
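
To give a taste of the Raku SDK side, a scenario is just Raku code calling Sparrow6 DSL functions; a minimal sketch (the plugin name and parameters below are illustrative, not a specific SparrowHub plugin):

use Sparrow6::DSL;

# run an ad hoc shell command on the target
bash "uptime";

# run a reusable plugin with parameters
task-run "set up service user", "create-user", %( user => "deployer" );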

If you are interested in contribution, you may:

  • create new Sparrow plugins, it's easy (no knowledge of Raku is required), so people can use them
  • start using Sparrow as is ( 280 plugins included )
  • contribute to the Sparrow core
  • spread the news

Discord channel - https://discord.gg/xpBz6yTj - or post your comments and questions here.

[–] melezhik@programming.dev 5 points 10 months ago* (last edited 10 months ago)

Yep. Like I said - "We talk about use of Bash for simple enough tasks ... where every primitive language or DSL is ok" - so Bash does not suck in general, and I myself use it a lot in the proper domains, but I just do not use it for tasks / domains whose complexity ( in all senses, including but not limited to team work ) grows over time ...

[–] melezhik@programming.dev 14 points 11 months ago* (last edited 11 months ago) (4 children)

We are not talking about use of Bash in dev vs use of Bash in production. That is, imho, the wrong question, and it skirts around the real problem in software development. We talk about use of Bash for simple enough tasks where code is rarely changed ( if not written once and thrown away ) and where any primitive language or DSL is ok. When it comes to building medium-sized or complex software systems where decomposition, support for complex data structures, unit tests, error handling, concurrency, etc. are a big deal, Bash really sucks, because it does not let one deal with scaling challenges. By scaling I mean the need to rapidly change a huge code base as requirements change while still maintaining good quality across the entire code. Bash is just not designed for that.

 

Hey! I am building a microservices framework with a focus on simplicity, potentially targeted at dev environments. It's at a veeeeeeery alpha stage, so only a WIKI exists, reflecting the current design and use cases. However, I'd like to get some feedback to see if the whole thing makes sense. Thanks
