Sunday, February 13, 2011

How to get the GPU memory size and usage in CUDA?

Tip - Use cuMemGetInfo to get the GPU memory size and usage in CUDA.

Details - The CUDA driver API function cuMemGetInfo retrieves the amount of GPU memory currently free and the total amount of GPU memory on the device of the current context. Both sizes are returned in bytes.

#include <cuda.h>
#include <stdio.h>

int main()
{
    size_t uCurAvailMemoryInBytes;
    size_t uTotalMemoryInBytes;
    int nNoOfGPUs;

    CUresult result;
    CUdevice device;
    CUcontext context;

    cuInit( 0 );                            // Initialize the CUDA driver API
    cuDeviceGetCount( &nNoOfGPUs );         // Get number of devices supporting CUDA
    for( int nID = 0; nID < nNoOfGPUs; nID++ )
    {
        cuDeviceGet( &device, nID );        // Get handle for device
        cuCtxCreate( &context, 0, device ); // Create context on the device
        result = cuMemGetInfo( &uCurAvailMemoryInBytes, &uTotalMemoryInBytes );
        if( result == CUDA_SUCCESS )
        {
            printf( "Device: %d\nTotal Memory: %zu MB, Free Memory: %zu MB\n",
                    nID,
                    uTotalMemoryInBytes / ( 1024 * 1024 ),
                    uCurAvailMemoryInBytes / ( 1024 * 1024 ) );
        }
        cuCtxDetach( context );             // Release the context
    }
    return 0;
}
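
If you are working with the CUDA runtime API rather than the driver API, the runtime provides an equivalent call, cudaMemGetInfo (documented in the runtime memory management reference linked below). It also reports free and total device memory in bytes, and no explicit context creation is needed. The code below is only a minimal sketch of that approach, not part of the original tip; the variable names are illustrative.

#include <cuda_runtime.h>
#include <stdio.h>

int main()
{
    int nNoOfGPUs = 0;
    cudaGetDeviceCount( &nNoOfGPUs );   // Get number of devices supporting CUDA
    for( int nID = 0; nID < nNoOfGPUs; nID++ )
    {
        size_t uFreeBytes = 0, uTotalBytes = 0;
        cudaSetDevice( nID );           // Select device; the runtime creates a context implicitly
        if( cudaMemGetInfo( &uFreeBytes, &uTotalBytes ) == cudaSuccess )
        {
            printf( "Device: %d\nTotal Memory: %zu MB, Free Memory: %zu MB\n",
                    nID,
                    uTotalBytes / ( 1024 * 1024 ),
                    uFreeBytes / ( 1024 * 1024 ) );
        }
    }
    return 0;
}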

Reference –
http://developer.download.nvidia.com/compute/cuda/3_2/toolkit/docs/online/group__CUDART__MEMORY.html

Posted By: Sujith R Mohan
