I am trying to run a Python simulation several times simultaneously, with slightly different parameters in each run, using the multiprocessing module. I begin my code like this, with the basic simulation defined as a function that takes the parameters as arguments:
    import multiprocessing
    from math import *

    def sim_seq(output_name, input_name, s_val, ...):  # more arguments
        # do work here
        output.write(...)  # data
        output.close()
        return
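(To make this concrete, here is a stripped-down, runnable stand-in for sim_seq; the real function takes more parameters and does the actual simulation work before writing out results:)

```python
# Stripped-down stand-in for the real sim_seq (illustration only):
# it just writes the input name and one parameter to the output file.
def sim_seq(output_name, input_name, s_val):
    # do work here (the real simulation), then save the results
    with open(output_name, 'w') as output:
        output.write('%s %s\n' % (input_name, s_val))
```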
I have also created a text file with the parameters to be used for each run of the simulation, which I read in and use as the arguments in the following loop, where I am trying to use multiprocessing:
    input_batch = 'batch_file.txt'

    if __name__ == '__main__':
        jobs = []
        with open(input_batch) as f:
            for line in f:
                line = line.split(' ')
                for i in line:
                    if i[0] == 'o':
                        output_name = str(i[2:])
                    # read in more parameters from batch_file.txt
                p = multiprocessing.Process(
                    target=sim_seq,
                    args=(output_name, input_name, s_val, ...))  # more arguments
                jobs.append(p)
        for i in jobs:
            i.start()
This essentially accomplishes what I want: it runs three simulations at once, each with different parameters. The machine I am using, however, has 16 compute nodes with 32 processors per node, and I would like to control where each simulation runs. For instance, can I assign each simulation to a separate processor? I am new to multiprocessing, so I want to know how to tell a particular processor or node what to do. Can I have 32 separate parameter settings and run 32 instances of the simulation, each on its own processor, all at the same time? Using multiprocessing, what would be the computationally fastest way to run the same Python function multiple times simultaneously, with different arguments for each run? Thanks in advance for any input/advice.