
Having installed the GPU version of TensorFlow (running on a measly NVIDIA GeForce 950), I would like to compare its performance with the CPU.

I am running the TensorFlow MNIST tutorial code and have noticed a dramatic speed increase between the CPU and the GPU, though this is only an estimate: I ran the CPU version two days ago on a laptop i7 with a batch size of 100, and the GPU version on a desktop with a batch size of 10. I only noticed the speed increase once I lowered the batch size on the GPU from 100 to 10.

Now I lack an objective measure for what I am gaining.

Is there a way to toggle between the CPU and the GPU in TensorFlow?

6 Answers


To make the GPU invisible:

export CUDA_VISIBLE_DEVICES=""

To return to normal:

unset CUDA_VISIBLE_DEVICES

3 Comments

- Where do I put this?
- ...or run `CUDA_VISIBLE_DEVICES="" python myscript.py`
- @Chaine: in your terminal, before running `python my_script.py`
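As a sketch of the second comment's approach, the environment variable can also be set from a small launcher script instead of the shell. This uses only the standard library; the script name `my_script.py` is just a placeholder for your own TensorFlow program:

```python
import os
import subprocess
import sys

# Copy the current environment, but hide every CUDA device from the child
# process: an empty CUDA_VISIBLE_DEVICES makes TensorFlow see no GPUs.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="")

# Run the same script CPU-only; uncomment and point at your actual script.
# subprocess.run([sys.executable, "my_script.py"], env=env, check=True)

print(env["CUDA_VISIBLE_DEVICES"])  # prints an empty line
```

The parent shell's environment is untouched, so a second terminal (or a second `subprocess.run` without the override) still uses the GPU.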

Try setting `tf.device` to `cpu:0`:

with tf.Session() as sess:
    with tf.device("/cpu:0"):
        # ... build and run the rest of your graph here ...



Another option is to install the CPU version and the GPU version of TensorFlow in two separate virtual environments. Detailed instructions on installing TensorFlow in a virtual environment are listed at https://www.tensorflow.org/get_started/os_setup. This way you can run the same code in two terminal windows: one using the CPU and the other using the GPU.


# Check from Python whether this server/instance has a GPU or only a CPU
import tensorflow as tf
from tensorflow.python.client import device_lib

# device_lib.list_local_devices() lists every processing device (CPU and GPU)
gpu_names = [x.name for x in device_lib.list_local_devices()
             if x.device_type == 'GPU']

# Guard against machines with no GPU, where the list above is empty
device_name = "/gpu:0" if gpu_names else "/cpu:0"

with tf.device(device_name):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))



Quite a long time has elapsed. Recent versions of TensorFlow (at least from 2.0 on) no longer require installing separate packages with and without GPU support, so you can simply launch two jupyter-notebook instances. Following @Yaroslav's advice:

$ CUDA_VISIBLE_DEVICES="" jupyter-notebook &
$ jupyter-notebook &

You will then have two separate jupyter clients open in your browser, typically http://localhost:8888/ and http://localhost:8889/, respectively without and with GPU support, where you can run the same .ipynb notebook and measure the performance difference.
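To get the objective measure the question asks for, one sketch is to time the same workload in each notebook. The helper below uses only the standard library; the pure-Python placeholder workload is an assumption and should be replaced with your actual `sess.run(...)` call or training step:

```python
import time

def benchmark(fn, repeats=5):
    """Call fn() several times and return the best wall-clock duration."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Placeholder workload; swap in e.g. a TensorFlow training step.
elapsed = benchmark(lambda: sum(i * i for i in range(100000)))
print(f"best of {5}: {elapsed:.4f}s")
```

Taking the best of several runs reduces noise from warm-up and other processes, which matters especially on the GPU where the first run pays one-time initialization costs.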



To turn off the GPU, simply add this at the top of your script:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

(Comment it out when you want to use the GPU again.)
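One caveat: the variable must be set before TensorFlow is imported, because the CUDA runtime reads it when it initializes; setting it afterwards has no effect. A minimal ordering sketch:

```python
import os

# Set this FIRST, before any `import tensorflow` statement ...
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# ... and only then import TensorFlow, so CUDA never sees a GPU.
# import tensorflow as tf

print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints: -1
```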

