
GPU Computing on the FASRC cluster

The FASRC cluster has a number of nodes with NVIDIA Tesla general-purpose graphics processing units (GPGPUs) attached to them. You can use CUDA tools to run computational work on them and, for some workloads, see very significant speedups.

Fifteen nodes with 4 V100 GPUs per node are available for general use through the gpu partition; the remaining GPU nodes are owned by various research groups and may be available when idle through gpu_requeue. FAS members also have access to the fas_gpu partition, which has 64 nodes with 2x K80, 16 nodes with 2x K20Xm, and 8 nodes with 2x K20m. Direct access to these nodes by members of other groups is by special request; please visit the RC Portal and submit a help request for more information.


To request a single GPU in Slurm, add #SBATCH --gres=gpu to your submission script and it will give you access to one GPU. To request multiple GPUs, add #SBATCH --gres=gpu:n, where 'n' is the number of GPUs. You can use this method to request CPUs and GPGPUs independently. For example, if you want 1 CPU and 2 GPUs from our general-use GPU nodes in the gpu partition, you would specify:

#SBATCH -p gpu
#SBATCH -n 1
#SBATCH --gres=gpu:2
#SBATCH --gpu-freq=high
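
Putting these directives together, a minimal batch script might look like the sketch below. The runtime, memory request, and program name (my_gpu_app) are placeholders; substitute your own values.

#!/bin/bash
#SBATCH -p gpu                  # general-use GPU partition
#SBATCH -n 1                    # 1 CPU core
#SBATCH --gres=gpu:2            # 2 GPUs
#SBATCH --gpu-freq=high
#SBATCH -t 0-02:00              # runtime (D-HH:MM), placeholder
#SBATCH --mem=8000              # memory in MB, placeholder

module load cuda/9.0-fasrc02    # load the CUDA toolkit and runtime libraries
./my_gpu_app                    # your GPU-enabled program (placeholder)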

When you submit a GPU job, Slurm automatically selects some GPUs and restricts your job to those GPUs. In your code you reference them using zero-based indexing from [0, n), where n is the number of GPUs requested. For example, if you are using a GPU-enabled TensorFlow build and requested 2 GPUs, you would simply reference gpu:0 or gpu:1 from your code.
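
Slurm typically exposes the assigned devices to your job through the CUDA_VISIBLE_DEVICES environment variable. As a quick check from inside a job, you could run, for example:

echo $CUDA_VISIBLE_DEVICES      # device indices Slurm assigned to this job
nvidia-smi -L                   # list the GPU devices visible to the job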

For an interactive session to work with the GPUs you can use the following. While on the GPU node, you can run nvidia-smi to get information about the assigned GPUs.
srun --pty -p gpu -t 0-06:00 --mem 8000 --gres=gpu:1 /bin/bash
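
Once the interactive shell starts on the GPU node, you might, for instance, run:

$ nvidia-smi                      # show the assigned GPU, driver version, and utilization
$ module load cuda/9.0-fasrc02    # load the CUDA toolkit and runtime libraries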

The gpu_requeue partition is a backfill partition similar to serial_requeue and allows you to submit jobs to idle GPU-enabled nodes. Please note that the hardware in that partition is heterogeneous. Slurm is aware of the model name and compute capability of the GPU devices each compute node has.

The GPU models currently available on our cluster are k20m, k40m, k80, m40, 1080, titanx, titanv, p100, v100, and rtx2080ti, with compute capabilities ranging from 3.5 to 7.5. See the official NVIDIA website for more details.

Name or compute capability can be requested as a constraint in your job submission. When running in gpu_requeue, nodes with a specific model can be selected using --constraint=modelname or, more generally, nodes offering a card with a specific compute capability can be selected using --constraint=ccx.x (e.g. --constraint=cc7.0 for compute capability 7.0).

For example if your code needs to run on devices with at least compute capability 3.7, you would specify:

#SBATCH -p gpu_requeue 
#SBATCH -n 1 
#SBATCH --gres=gpu:1
#SBATCH --constraint=cc3.7
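
Likewise, to land on a node with a particular model in gpu_requeue, for example a V100 from the list above, a sketch of the directives would be:

#SBATCH -p gpu_requeue
#SBATCH -n 1
#SBATCH --gres=gpu:1
#SBATCH --constraint=v100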

CUDA Runtime

The current version of the NVIDIA driver installed on all GPU-enabled nodes on the cluster is 396.26, which supports CUDA version 9.
To load the toolkit and additional runtime libraries (cublas, cufftw, …), remember to always load the cuda module in your Slurm job script or interactive session:
$ module load cuda/9.0-fasrc02
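
After loading the module you can verify that the toolkit is on your path, for example:

$ which nvcc                      # path to the CUDA compiler provided by the module
$ nvcc --version                  # report the CUDA toolkit release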

NOTE: In the past our CUDA installations were heterogeneous and different nodes on the cluster provided different versions of the CUDA driver. For this reason you might have used the Slurm flag --constraint=cuda-$version (for example --constraint=cuda-7.5) in your job submissions to specifically request nodes supporting that version.
This is no longer needed, as our cuda modules are the same throughout the cluster, and you should remove those flags from your scripts.

Using CUDA-dependent modules

CUDA-dependent applications are accessed on the cluster in a manner that is similar to compilers and MPI libraries. For these applications, a CUDA module must first be loaded before an application is available. For example, to use cuDNN, a CUDA-based neural network library from NVIDIA, the following command will work:

$ module load cuda/9.0-fasrc02 cudnn/7.0_cuda9.0-fasrc01

If you don’t load the CUDA module first, the cuDNN module is not available.

$ module purge
$ module load cudnn/7.0_cuda9.0-fasrc01
Lmod has detected the following error:
The following module(s) are unknown: “cudnn/7.0_cuda9.0-fasrc01”
Please use the command module-query or our user Portal to find available versions and how to load them.
More information on software modules can be found here, and on running jobs here.

See our example of how to use the cuda module to install and use TensorFlow.
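
As a rough sketch (not the full installation walkthrough linked above), a batch script for a GPU-enabled TensorFlow run might load both modules and invoke your training script. The conda environment name (tf_env) and script name (train.py) are placeholders.

#!/bin/bash
#SBATCH -p gpu
#SBATCH -n 1
#SBATCH --gres=gpu:1
#SBATCH -t 0-04:00                # runtime, placeholder
#SBATCH --mem=16000               # memory in MB, placeholder

module load cuda/9.0-fasrc02 cudnn/7.0_cuda9.0-fasrc01    # CUDA toolkit and cuDNN
source activate tf_env                                    # placeholder environment with GPU-enabled TensorFlow
python train.py                                           # placeholder training script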
