
I have an NVIDIA GeForce GTX 770 and would like to use its CUDA capabilities for a project I am working on. My machine is running Windows 10 64-bit.

I have followed the provided CUDA Toolkit installation guide: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/.

Once the drivers were installed I opened the samples solution (using Visual Studio 2019) and built the deviceQuery and bandwidthTest samples. Here is the output:

deviceQuery:

C:\ProgramData\NVIDIA Corporation\CUDA Samples\v11.3\bin\win64\Debug\deviceQuery.exe Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 770"
  CUDA Driver Version / Runtime Version          11.3 / 11.3
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 2048 MBytes (2147483648 bytes)
  (008) Multiprocessors, (192) CUDA Cores/MP:    1536 CUDA Cores
  GPU Max Clock rate:                            1137 MHz (1.14 GHz)
  Memory Clock rate:                             3505 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.3, CUDA Runtime Version = 11.3, NumDevs = 1
Result = PASS

bandwidthTest:

[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: NVIDIA GeForce GTX 770
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(GB/s)
   32000000         3.1

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(GB/s)
   32000000         3.4

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(GB/s)
   32000000         161.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

However, when I try to run any other sample, for example the starter code that is provided with the CUDA 11.3 runtime template:

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>

cudaError_t addWithCuda(int* c, const int* a, const int* b, unsigned int size);

__global__ void addKernel(int* c, const int* a, const int* b) {
    int i = threadIdx.x;
    c[i] = a[i] + b[i];
}

int main() {
    const int arraySize = 5;
    const int a[arraySize] = { 1, 2, 3, 4, 5 };
    const int b[arraySize] = { 10, 20, 30, 40, 50 };
    int c[arraySize] = { 0 };

    // Add vectors in parallel.
    cudaError_t cudaStatus = addWithCuda(c, a, b, arraySize);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "addWithCuda failed!");
        return 1;
    }

    printf("{1,2,3,4,5} + {10,20,30,40,50} = {%d,%d,%d,%d,%d}\n", c[0], c[1], c[2], c[3], c[4]);

    // cudaDeviceReset must be called before exiting in order for profiling and
    // tracing tools such as Nsight and Visual Profiler to show complete traces.
    cudaStatus = cudaDeviceReset();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaDeviceReset failed!");
        return 1;
    }

    return 0;
}

// Helper function for using CUDA to add vectors in parallel.
cudaError_t addWithCuda(int* c, const int* a, const int* b, unsigned int size) {
    int* dev_a = 0;
    int* dev_b = 0;
    int* dev_c = 0;
    cudaError_t cudaStatus;

    // Choose which GPU to run on, change this on a multi-GPU system.
    cudaStatus = cudaSetDevice(0);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice failed!  Do you have a CUDA-capable GPU installed?");
        goto Error;
    }

    // Allocate GPU buffers for three vectors (two input, one output).
    cudaStatus = cudaMalloc((void**)&dev_c, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    cudaStatus = cudaMalloc((void**)&dev_a, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    cudaStatus = cudaMalloc((void**)&dev_b, size * sizeof(int));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed!");
        goto Error;
    }

    // Copy input vectors from host memory to GPU buffers.
    cudaStatus = cudaMemcpy(dev_a, a, size * sizeof(int), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

    cudaStatus = cudaMemcpy(dev_b, b, size * sizeof(int), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

    // Launch a kernel on the GPU with one thread for each element.
    addKernel<<<1, size>>>(dev_c, dev_a, dev_b);

    // Check for any errors launching the kernel
    cudaStatus = cudaGetLastError();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "addKernel launch failed: %s\n", cudaGetErrorString(cudaStatus));
        goto Error;
    }

    // cudaDeviceSynchronize waits for the kernel to finish, and returns
    // any errors encountered during the launch.
    cudaStatus = cudaDeviceSynchronize();
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaDeviceSynchronize returned error code %d after launching addKernel!\n", cudaStatus);
        goto Error;
    }

    // Copy output vector from GPU buffer to host memory.
    cudaStatus = cudaMemcpy(c, dev_c, size * sizeof(int), cudaMemcpyDeviceToHost);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed!");
        goto Error;
    }

Error:
    cudaFree(dev_c);
    cudaFree(dev_a);
    cudaFree(dev_b);

    return cudaStatus;
}

I get the following error:

addKernel launch failed: no kernel image is available for execution on the device
addWithCuda failed!

From this table: https://docs.nvidia.com/deploy/cuda-compatibility/index.html#support-hardware__table-hardware-support you can see that my GPU's compute capability version (3.0) is in fact compatible with the installed driver (465.19.01+), so why can't I run any code other than the query and bandwidth tests?

2 Comments
  • @RobertCrovella I have just installed CUDA Toolkit 10.2 and everything works now. Thank you very much. Commented Jun 3, 2021 at 21:43
  • @RobertCrovella If you post your comment as an answer I will gladly mark it as accepted. Commented Jun 3, 2021 at 21:51

3 Answers


I had a similar problem. I have a GeForce 940MX card in my laptop, which has CUDA compute capability 5.0, with CUDA driver 11.7.

The way I solved it was to add compute_50,sm_50 to the field at Properties > CUDA C/C++ > Device > Code Generation. Hope this helps.


3 Comments

compute_61,sm_61 for a 1080 Ti.
compute_50,sm_50 works on a Quadro M620, thanks
compute_50,sm_50 worked for my GTX 750 Ti :D

Your GTX 770 is a "Kepler" architecture, compute capability 3.0 device. These devices were deprecated during the CUDA 10 release cycle, and support for them was dropped from CUDA 11.0 onwards.

The CUDA 10.2 release is the last toolkit with support for compute capability 3.0 devices. You will not be able to make CUDA 11.0 or newer work with your GPU. The deviceQuery and bandwidthTest samples use APIs which don't attempt to run kernels on your GPU, which is why they work while any other example will not.

Comments


I have a GeForce 940MX too; however, in my case I am using Kubuntu 22.04, and I solved the problem by adding support for the architecture in the compilation command:

nvcc TestCUDA.cu -o testcu.bin --gpu-architecture=compute_50 --gpu-code=compute_50,sm_50,sm_52

After that, the code works fine. However, it is essential to use error-handling code to determine what is happening at runtime. In my case I didn't include cudaPeekAtLastError(), and no error was being shown.

Below are the supported sm variations and sample cards from each generation (source: Medium - Matching SM architectures (CUDA arch and CUDA gencode) for various NVIDIA cards); a sample multi-architecture compile command follows the list.

Supported on CUDA 7 and later

Fermi (CUDA 3.2 until CUDA 8, deprecated from CUDA 9):

  • SM20 or SM_20, compute_20 — Older cards such as GeForce 400, 500, 600, GT-630

Kepler (CUDA 5 and later):

  • SM30 or SM_30, compute_30 — Kepler architecture (generic — Tesla K40/K80, GeForce 700, GT-730). Adds support for unified memory programming.

  • SM35 or SM_35, compute_35 — More specific Tesla K40. Adds support for dynamic parallelism. Shows no real benefit over SM30 in my experience.

  • SM37 or SM_37, compute_37 — More specific Tesla K80. Adds a few more registers. Shows no real benefit over SM30 in my experience.

Maxwell (CUDA 6 and later):

  • SM50 or SM_50, compute_50 — Tesla/Quadro M series

  • SM52 or SM_52, compute_52 — Quadro M6000, GeForce 900, GTX-970, GTX-980, GTX Titan X

  • SM53 or SM_53, compute_53 — Tegra (Jetson) TX1 / Tegra X1

Pascal (CUDA 8 and later):

  • SM60 or SM_60, compute_60 — Quadro GP100, Tesla P100, DGX-1 (Generic Pascal)

  • SM61 or SM_61, compute_61 — GTX 1080, GTX 1070, GTX 1060, GTX 1050, GTX 1030, Titan Xp, Tesla P40, Tesla P4, Discrete GPU on the NVIDIA Drive PX2

  • SM62 or SM_62, compute_62 — Integrated GPU on the NVIDIA Drive PX2, Tegra (Jetson) TX2

Volta (CUDA 9 and later):

  • SM70 or SM_70, compute_70 — DGX-1 with Volta, Tesla V100, Titan V, Quadro GV100

  • SM72 or SM_72, compute_72 — Jetson AGX Xavier

Turing (CUDA 10 and later):

  • SM75 or SM_75, compute_75 — GTX 1660 Ti, RTX 2060, RTX 2070, RTX 2080, Titan RTX, Quadro RTX 4000, Quadro RTX 5000, Quadro RTX 6000, Quadro RTX 8000

Comments
