I've got some really simple code below:

#!/usr/bin/python
from multiprocessing import Pool
import time

def worker(job):
    if job in range(25,30):
        time.sleep(10)
    print "job:%s" %job
    return (job)

pool = Pool(processes=10)
result = []

for job in range(1, 1000):
    result.append(pool.apply_async(worker(job)))
pool.close()
pool.join()

As you can see, I have a worker handling 1000 jobs using multiprocessing. If the job number is in the range 25-29, the worker sleeps for 10 seconds. This is meant to simulate a time- and resource-intensive job.

When I run the above code, the output looks like the listing below. From job 25 onward, the whole thing runs like a sequential process: after job 24 there is one new line of output every 10 seconds, until job 29 finishes.

But why? Shouldn't multiprocessing run the jobs concurrently?

[root@localhost tmp]# ./a.py 
job:1
job:2
job:3
job:4
job:5
job:6
job:7
job:8
job:9
job:10
job:11
job:12
job:13
job:14
job:15
job:16
job:17
job:18
job:19
job:20
job:21
job:22
job:23
job:24


job:25
job:26
...

1 Answer

Because you're calling worker at submission time: worker(job) runs immediately in the parent process, and its return value, not the function, is what gets handed to apply_async. Pass the callable and its arguments separately:

result.append(pool.apply_async(worker, [job]))