  • As someone who has done way too much shell scripting, the example on their website just looks bad, if I’m being honest.

    I wrote a simple test script that compares the compiled output of their website example to how I would write the same if statement in pure bash.

    Here’s the script:

    #!/bin/bash
    
    age=3
    
    [ "$(printf "%s < 18\n" "$age" | bc -l | sed '/\./ s/\.\{0,1\} 0\{1,\}$//')" != 0  ] && echo hi
    
    # (( "$age" < 18 )) && echo hi
    

    Comment out the line you don’t want to test, then run hyperfine ./script.
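
    If you’d rather not keep editing the file, something like this should also work: put each variant in its own file and let hyperfine compare them directly (the file names here are just placeholders):

    hyperfine --warmup 10 ./amber_version.sh ./pure_bash.sh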

    I found that the Amber version takes ~2ms per run while my version takes ~800 microseconds, i.e. the Amber version is roughly 2.5x slower.

    The reason the Amber version is so slow is that: a) it uses 4 subshells (3 for the pipeline stages and 1 for the $() syntax), and b) it uses external programs (bc, sed) instead of builtins (such as (( )), [[ ]], or [ ]).
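
    To make that concrete, here is the same check again with my reading of where the extra processes come from (the annotations are mine, not anything Amber documents):

    # $(...) forks a subshell to capture the output, and each of the three
    # pipeline stages runs in its own process; bc and sed are external
    # binaries, so they also cost an exec on top of the fork
    [ "$(printf "%s < 18\n" "$age" | bc -l | sed '/\./ s/\.\{0,1\}0\{1,\}$//')" != 0 ] && echo hi

    # the pure-bash check stays in the current shell: no forks, no execs
    (( "$age" < 18 )) && echo hi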

    I decided to download Amber and try out some programs myself.

    I wrote this simple Amber program:

    let x = [1, 2, 3, 4]
    echo x[0]
    

    It compiled to:

    __AMBER_ARRAY_0=(1 2 3 4);
    __0_x=("${__AMBER_ARRAY_0[@]}");
    echo "${__0_x[0]}"
    

    I actually facepalmed, because instead of directly accessing the first item, it first creates a copy of the array and then accesses the first item of that copy. Maybe there’s a reason for this, but I don’t know what it would be.

    I decided to modify this compiled output a little:

    __AMBER_ARRAY_0=($(seq 1 1000));
    __0_x=("${__AMBER_ARRAY_0[@]}");
    echo "${__0_x[0]}"
    

    So now we have 1000 items in our array. I benchmarked this against a version that doesn’t create the extra copy (shown below); skipping the copy is about 600 microseconds faster (1.7ms for the Amber version, 1.1ms for mine).
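
    The copy-free version is simply:

    __AMBER_ARRAY_0=($(seq 1 1000));
    echo "${__AMBER_ARRAY_0[0]}"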

    I wrote another simple Amber program that sums the items in a list:

    let items = [1, 2, 3, 10]
    let x = 0
    loop i in items {
        x += i
    }
    
    echo x
    

    which compiles to

    __AMBER_ARRAY_0=(1 2 3 10);
    __0_items=("${__AMBER_ARRAY_0[@]}");
    __1_x=0;
    for i in "${__0_items[@]}"
    do
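        # every iteration pays for a command substitution plus a three-stage
        # pipeline through bc and sed, i.e. several forks/execs per addition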
        __1_x=$(echo ${__1_x} '+' ${i} | bc -l | sed '/\./ s/\.\{0,1\}0\{1,\}$//')
    done;
    echo ${__1_x}
    

    This compiled version takes about 5.7ms to run, so I wrote my own version:

    arr=(1 2 3 10)
    x=0
    for i in "${arr[@]}"; do
        x=$((x + i))
    done
    printf "%s\n" "$x"
    

    This version takes about 900 microseconds to run, making the Amber version roughly 6x slower.

    Amber does support one thing that bash doesn’t, though (which is probably the cause of all these slow constructs): float arithmetic, which is pretty cool. However, if I’m being honest, I rarely use float arithmetic in bash, and when I do I just call out to bc, which is good enough (and which is what Amber does, except it also does it for integers).
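
    For the record, this is the kind of one-off bc call I mean (the expression is just an example):

    # float arithmetic by shelling out to bc once, rather than on every operation
    result=$(printf '%s\n' "3.14 * 2" | bc -l)
    echo "$result"   # prints 6.28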

    I don’t get the point of this language. In my opinion, there are only a couple of reasons to choose bash for something: a) you’re just going to hack a short script together quickly, or b) it’s something that calls lots of external programs, such as a build or install script.

    For the latter case, Amber might be useful, but it will make your install/build script harder to read and slower.

    Lastly, I don’t think Amber will make anything easier until it has a standard library of functions.

    The power of bash comes from the fact that it’s easy to pipe text from one text-manipulation tool to another; the difficulty comes from learning how each of those individual tools works and how to chain them together effectively. Until Amber has a good standard library with good data/text-manipulation tools, it doesn’t solve that.
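
    That’s the kind of workflow I mean; a throwaway pipeline like this (the log file name is just an example) is exactly where bash shines:

    # count the most frequent client IPs in a web server log
    awk '{print $1}' access.log | sort | uniq -c | sort -rn | head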