
I have a shell script of more than 1000 lines, and I would like to check whether all the commands used in the script are installed on my Linux operating system. Is there a tool that lists the Linux commands used in a shell script? Or how can I write a small script that does this for me?

The script runs successfully on the Ubuntu machine, where it is invoked as part of a C++ application. We need to run the same script on a device that runs Linux with limited capabilities. I have manually identified a few commands that the script uses which are not present on the device OS. Before we try installing those, I would like to find all the other missing commands so we can install everything at once.

Thanks in advance

  • Take a look at: tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html Commented Dec 21, 2016 at 10:24
  • Or this: unix.stackexchange.com/questions/20979/… Commented Dec 21, 2016 at 10:38
  • Arek: Thanks, it helps a little bit, but it is not exactly what I was looking for. I need to know all the commands the script uses. I cannot run the entire script in one shot because it runs part by part based on the arguments, and as of now we do not have all the information about the arguments. Commented Dec 21, 2016 at 11:45
  • 1000 lines isn't that long, and most of it is probably either shell-specific (like if, while, etc) or repetitive (the same command being called repeatedly with different arguments.) You'll probably spend less time scanning it manually than you will looking for a way to do it automatically. Commented Dec 21, 2016 at 13:13
  • Your best bet: update your script to wrap command execution with your own function, then in that function log the command and just pass through to the command itself. run() { echo "$1" >> /tmp/commands; "$@"; } or such (see the sketch after these comments). Commented Nov 4, 2018 at 2:37
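
A slightly fuller version of the wrapper idea from the last comment might look like the sketch below (the run() name and the /tmp/commands log file come from the comment; the example commands are only illustrative):

#!/bin/bash
# Log the name of each wrapped command, then run it with its original arguments.
run() {
    echo "$1" >> /tmp/commands
    "$@"
}

# In the script, prefix external commands with the wrapper, e.g.:
run tar -czf /tmp/backup.tgz /etc
run grep -r "pattern" /var/log

# Afterwards, the unique list of commands that were actually executed:
sort -u /tmp/commands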

4 Answers


I already tried this in the past and came to the conclusion that it is very difficult to provide a solution that works for all scripts. The reason is that every script with complex commands uses the shell's features in its own way.

In the case of a simple linear script, it might be as easy as using debug mode, for example:

bash -x script.sh 2>&1 | grep ^+ | awk '{print $2}' | sort -u

If the script makes decisions, you might use the same approach and consider that for the "else" cases the commands would still be the same, just with different arguments, or would be something trivial (echo + exit).
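
Building on the debug-mode idea, here is a minimal sketch (assuming you can run the script, or its parts, with representative arguments) that also checks which of the traced commands are actually installed:

# Trace the script, take the first word of every executed command,
# and report anything that is neither a builtin/function nor found in PATH.
bash -x script.sh 2>&1 | grep '^+' | awk '{print $2}' | sort -u |
while read -r cmd; do
    type "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done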

In the case of a complex script, I attempted to write a script that would just look for commands in the same places I would look myself. The challenge is to create expressions that identify all the possibilities actually used; I would say this is doable for about 80-90% of the script, and the output should only be used as a reference since it will contain invalid data (~20%).

Here is an example script that parses itself using a very simple approach (commands are separated onto different lines, and the first word of each line is taken as the command):

# 1. Eliminate all quoted text
# 2. Eliminate all comments
# 3. Replace all delimiters between commands with new lines ( ; | && || )
# 4. extract the command from 1st column and print it once
cat $0 \
    | sed -e 's/\"/./g' -e "s/'[^']*'//g" -e 's/"[^"]*"//g' \
    | sed -e "s/^[[:space:]]*#.*$//" -e "s/\([^\\]\)#[^\"']*$/\1/" \
    | sed -e "s/&&/;/g" -e "s/||/;/g" | tr ";|" "\n\n" \
    | awk '{print $1}' | sort -u

The output is:

.
/
/g.
awk
cat
sed
sort
tr

There are many more cases to consider (command substitutions, aliases, etc.); steps 1, 2 and 3 above are just the beginning, but they would still cover 80% of most complex scripts. The regular expressions would need to be adjusted or extended to increase precision and to handle special cases; one such extension is sketched below.
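
For example, one of the missing cases, command substitution, could be roughly handled by also breaking lines at $( and backticks before extracting the first word. This is a sketch in the same spirit as the script above (it assumes GNU sed for \n in the replacement, and it still produces noise that needs manual review):

# Extra stage: turn $( and ` into line breaks so substituted commands
# also end up as the first word of a line.
cat script.sh \
    | sed -e "s/'[^']*'//g" -e 's/"[^"]*"//g' \
    | sed -e "s/^[[:space:]]*#.*$//" \
    | sed -e 's/\$(/\n/g' -e 's/`/\n/g' \
    | sed -e "s/&&/;/g" -e "s/||/;/g" | tr ";|" "\n\n" \
    | awk 'NF {print $1}' | sort -u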

In conclusion, if you really need something like this, you can write a script like the one above, but don't trust the output until you have verified it yourself.

  1. Add export PATH='' as the second line of your script.
  2. Execute your_script.sh 2>&1 > /dev/null | grep 'No such file or directory' | awk '{print $4;}' | grep -v '/' | sort | uniq | sed 's/.$//'. A variant that does not require editing the script is sketched below.
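
If you would rather not edit the script, the same idea can be applied from the outside for a single run. A sketch (the exact error text and field position can vary between bash versions, so check one error line first and adjust the grep/awk accordingly):

# Run with an empty PATH so every external command fails, then collect
# the names bash complains about on stderr.
PATH='' /bin/bash ./your_script.sh 2>&1 >/dev/null \
    | grep 'No such file or directory' \
    | awk '{print $4}' | grep -v '/' | sed 's/:$//' | sort -u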

3 Comments

Hello and welcome to Stack Overflow :) Which script are you talking about? Please put it in context.
This might find a few, but probably not all: likely the script errors out as soon as it hits the first missing command.
You are right; this is only valid for simple scripts. I think it is too simple.

If you have a Fedora/Red Hat based system, bash has been patched with the --rpm-requires flag:

--rpm-requires: Produce the list of files that are required for the shell script to run. This implies -n and is subject to the same limitations as compile-time error checking; command substitutions, conditional expressions and the eval builtin are not parsed, so some dependencies may be missed.

So when you run the following:

$ bash --rpm-requires script.sh
executable(command1)
function(function1)
function(function2)
executable(command2)
function(function3)

There are some limitations here:

  1. command and process substitutions and conditional expressions are not picked up. So the following are ignored:

    $(command)
    <(command)
    >(command)
    command1 && command2 || command3
    
  2. commands as strings are not picked up. So the following line will be ignored

    "/path/to/my/command"
    
  3. commands that contain shell variables are not listed. This generally makes sense since some might be the result of some script logic, but even the following is ignored

    $HOME/bin/command
    

    This point can however be bypassed by using envsubst and running it as

    $ bash --rpm-requires <(<script envsubst)
    

    However, if you follow shellcheck's advice, you have most likely quoted this, so it will still be ignored due to point 2.

So if you want to check whether all the commands your script needs are there, you can do something like:

# Keep only the executable(...) entries from --rpm-requires, strip the wrapper,
# and report anything that is neither a shell builtin nor an executable in PATH.
while IFS='' read -r app; do
   [ "${app%%(*}" == "executable" ] || continue
   app="${app#*(}"; app="${app%)}";
   if [ "$(type -t "${app}")" != "builtin" ] &&                 \
       ! [ -x "$(command -v "${app}")" ]
   then
        echo "${app}: missing application"
   fi
done < <(bash --rpm-requires <(<"$0" envsubst) )

If your script sources other files that contain functions and other important definitions, you might want to do something like:

bash --rpm-requires <(cat source1 source2 ... <(<script.sh envsubst))  
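
Since the target device in the question may not have a patched bash at all, one way to use this output is to generate the list on the development machine and check it on the device with plain POSIX shell. A sketch under that assumption (required.txt is just an illustrative file name):

# On the development machine: extract the executables the script requires.
bash --rpm-requires script.sh | sed -n 's/^executable(\(.*\))$/\1/p' > required.txt

# On the device: report every listed command that is not available.
while read -r cmd; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done < required.txt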


Based on @czvtools’ answer, I added some extra checks to filter out bad values:

#!/usr/bin/fish

if test "$argv[1]" = ""
    echo "Give path to command to be tested"
    exit 1
end

set commands (cat $argv \
    | sed -e 's/\"/./g' -e "s/'[^']*'//g" -e 's/"[^"]*"//g' \
    | sed -e "s/^[[:space:]]*#.*\$//" -e "s/\([^\\]\)#[^\"']*\$/\1/" \
    | sed -e "s/&&/;/g" -e "s/||/;/g" | tr ";|" "\n\n" \
    | awk '{print $1}' | sort -u)

for command in $commands
    if command -q -- $command
        set -a resolved (realpath (which $command))
        continue
    end

    # There may be things like:
    #   PATCH_STRING=$(zypper
    #   REPOS=$(zypper
    set subshell_command (string split -f2 '=$(' -- $command)
    if test -n "$subshell_command" && command -q -- $subshell_command
        set -a resolved (realpath (which $subshell_command))
        continue
    end
end

set resolved (string join0 $resolved | sort -z -u | string split0)

for command in $resolved
    echo $command
end
