
I am trying to run a script (1.sh)

spin -a  /home/files/1/1.pml; 
gcc -O2 -DXUSAFE -DSAFETY -DNOCLAIM -w -o pan pan.c >log1.txt; 
./pan -m100000 >log2.txt; 
spin -p -s -r -X -v -n123 -l -g -k /home/files/1/1.pml.trail \
    -u10000 /home/files/1/1.pml >log3.txt;

The command spin -a ...; generates temporary files (pan.c, pan.h) which are used by the next gcc -O2 ... command. If I run the script in a terminal, it creates the temporary files in the same location.

I want to run multiple scripts in parallel. I tried two things: first, writing a script to run them in a loop in the background (parallel.sh)

for((i=1;i<1800;i++))
 do 
   /home/files/$i/$i.sh & 
 done

and secondly, using GNU parallel: parallel -j0 sh /home/files/{}/{}.sh ::: {1..1800}.

Both methods created the temporary files in the location they were called from instead of in each script's own directory.

For example, if I run the script parallel.sh from /home/files, the temporary files are created in "/home/files" instead of in "/home/files/1", "/home/files/2", etc.

Please suggest a method so that the temporary files generated by the scripts 1.sh, 2.sh, ... are created in the directories /home/files/1/, /home/files/2/, ... respectively, while I run parallel.sh or GNU parallel from /home.

  • The simplest fix would be to add a cd command to your script so that each invocation goes to the directory you want. Commented Jan 28, 2016 at 18:36
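Following the comment above, a minimal sketch of that fix (assuming each N.sh lives in /home/files/N next to its .pml file) is to have each script change to its own directory first, so spin and gcc write pan.c, pan.h, and the log files there:

```shell
#!/bin/sh
# Prepend this to 1.sh (and the other scripts): change to the
# directory containing the script itself, no matter where it is
# invoked from, so all temporary files land next to the .pml file.
cd "$(dirname "$0")" || exit 1
```

With that line in place, parallel.sh can stay as it is, since each child script relocates itself before running spin.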

2 Answers


The trick is to change the working directory for each command.

If your computer can really run up to 1800 such processes at the same time without heating up the climate:

for i in {1..1800}; do (cd "$i" && "./$i.sh") & done

If your processes are CPU-bound, running more of them than you have processors usually gains no throughput:

seq 1 1800 | xargs -n1 -P8 -I% sh -c 'cd % && ./%.sh'
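A variant of the xargs line above sizes the pool to the machine instead of hard-coding 8 (this assumes nproc is available, which it is on GNU coreutils systems):

```shell
# Run one job per available processor; -I% substitutes each number
# from seq into both the cd target and the script name.
seq 1 1800 | xargs -n1 -P"$(nproc)" -I% sh -c 'cd % && ./%.sh'
```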



Try:

parallel 'cd /home/files/{}; sh {}.sh' ::: {1..1800}

It will run one process per core, which may be faster than -j0 (only testing can tell for certain).

If your scripts only vary by the number, consider rewriting it as a general script or bash function that takes the number as an argument:

spinit() {
    num=$1
    # Generate the verifier, compile it, run it, then replay the trail.
    spin -a /home/files/$num/$num.pml
    gcc -O2 -DXUSAFE -DSAFETY -DNOCLAIM -w -o pan pan.c >log1.txt
    ./pan -m100000 >log2.txt
    spin -p -s -r -X -v -n123 -l -g -k /home/files/$num/$num.pml.trail \
        -u10000 /home/files/$num/$num.pml >log3.txt
}
export -f spinit
parallel 'cd /home/files/{}; spinit {}' ::: {1..1800}

