
I have access to a large CPU cluster that has no GPUs. Is it possible to speed up YOLO training by parallelizing it across multiple CPU nodes?
The docs say that the device parameter specifies the computational device(s) for training: a single GPU (device=0), multiple GPUs (device=0,1), the CPU (device=cpu), or MPS for Apple silicon (device=mps). What about multiple CPUs?
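For concreteness, here is roughly how those device options look in an Ultralytics training call (a sketch, not from the docs; yolov8n.pt and coco128.yaml are placeholder names):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # placeholder checkpoint
model.train(data="coco128.yaml", device=0)       # single GPU
model.train(data="coco128.yaml", device=[0, 1])  # multiple GPUs
model.train(data="coco128.yaml", device="cpu")   # CPU
model.train(data="coco128.yaml", device="mps")   # Apple silicon
```

None of these options covers spreading one training run across several CPU-only nodes.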

1 Answer


You can use torch.set_num_threads(int) (docs) to control how many CPU threads PyTorch uses to execute operations on a single node.
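A minimal sketch of how this might be combined with YOLO training, assuming the ultralytics package; the thread counts and file names are illustrative:

```python
import torch
from ultralytics import YOLO  # assumption: ultralytics is installed

# Inter-op thread pool size must be set before any parallel work starts.
torch.set_num_interop_threads(8)
# Cap the intra-op thread pool used for CPU ops (matmuls, convolutions, ...).
torch.set_num_threads(32)

model = YOLO("yolov8n.pt")  # placeholder checkpoint
model.train(data="coco128.yaml", epochs=10, device="cpu")  # placeholder dataset
```

Note that both calls only affect threading within one process on one machine; they do not distribute training across cluster nodes.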


1 Comment

Did not work for me; I just tried it. It runs on 16 threads no matter what I set, and of those 16 only 8 actually compute.
