SEAS Compute Resources

The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has a number of compute resources managed by FAS Research Computing. Members of the SEAS community can use the information below to access these resources. The available partitions and the nodes that make them up are:

Partition Name | Assigned Compute Nodes | Description of Nodes
holyseasgpu | holyseasgpu[01-07,09-11] | each node: Dell R720, 2 x Intel E5-2695 v2 @ 2.40GHz 12 core, 90GB RAM, 2 x Nvidia K40m
seas_gpu | holygpu2c0913 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6148 2.4GHz 20 core, 373GB RAM, 10 x NVIDIA Tesla V100 32G, EDR Infiniband
seas_dgx1 | seasdgx[101-104] | each node: Nvidia DGX1, 2 x Intel Xeon E5-2698 v4 2.20GHz, 499GB RAM, 8 x NVIDIA Tesla V100 16G, FDR Infiniband
seas_dgx1_priority | see above | see above
seas_gpu_requeue | holyseasgpu[01-07,09-11], holygpu2c09[13,17,31], seasdgx[101-104] | see individual entries
fdoyle_compute | seasmicro[01-05,07-28] | each node: Supermicro SYS-6018R-MTR, 2 x Intel Xeon E5-2620 v3 2.40GHz 6 core, 247GB RAM, 1GbE
seas_iacs | holyseas0[5-6] | each node: Dell M915, 4 x AMD Opteron 6376 16 core, 247GB RAM, 1GbE
narang_dgx1 | seasdgx[101-102] | see individual entries
pierce | holyseas04, holy2b09303 | both nodes: Dell M915
mazur | holyvulis01 | Dell M630, 2 x Intel Xeon E5-2697 v4 2.30GHz 18 core, 247GB RAM, 1GbE, FDR Infiniband
tambe_gpu | holygpu2c0917 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6148 2.4GHz 20 core, 384GB RAM, 10 x NVIDIA Tesla V100 32G, EDR Infiniband
barak_ysinger_gpu | holygpu2c0931 | single node: Supermicro SYS-4029GP-TRT, 2 x Intel Xeon Silver 4216 2.10GHz 16 core, 384GB RAM, 8 x NVIDIA GeForce RTX 2080, 1GbE, FDR Infiniband
idreos_parkes | holygpu2c1125 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6248 2.5GHz 20 core, 1.5TB RAM, 10 x NVIDIA Tesla V100 32G, EDR Infiniband
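
To see these partitions and their hardware directly on the cluster, a sinfo query along the following lines (a sketch; the output format string and the particular partitions chosen are only illustrative) prints each partition's availability, node count, node list, and GPU (GRES) resources:

    # Summarize selected SEAS partitions: name, availability, node count, node list, GPUs
    sinfo -p holyseasgpu,seas_gpu,seas_dgx1 -o "%P %a %D %N %G"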


The partitions and the groups that have access to them are listed below. For restricted Slurm compute partitions you must be in the proper AD group to submit jobs (type "id" on the command line to see the groups you are in). These groups have priority access to the listed resources; when the resources are not in use, they are part of the general pool. Nearly all compute nodes also belong to the "serial_requeue" or "gpu_requeue" partitions, which are handled by the backfill scheduler: any higher-priority job will preempt requeue jobs on a node if it needs the resources, and the lowest-priority, shortest-running jobs are preempted first.
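
Since the tables here reference both partition names and AD groups, the commands below (a sketch using standard Linux and Slurm tools; "seas_gpu" is just an example partition from this page) let you compare your group membership against what a restricted partition allows:

    # Print the AD/Unix groups your account belongs to
    id -Gn

    # Show a partition's configuration, including AllowGroups and its assigned nodes
    scontrol show partition seas_gpu

If the group shown in AllowGroups does not appear in your id output, jobs submitted to that partition will be rejected.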

Partition Name | Group(s) Who Have Access
holyseasgpu | computefestgpu, gershman_lab, seas
seas_gpu | seas
seas_dgx1 | seas
seas_dgx1_priority | acc_lab, rush_lab, kung_lab
seas_gpu_requeue | seas
fdoyle_compute | fdoyle_lab
seas_iacs | seas_iacs
narang_dgx1 | narang_lab
pierce | pierce_lab, slurm_group_pierce
mazur | mazur_lab_seas
tambe_gpu | tambe_lab
barak_ysinger_gpu | barak_lab, ysinger_group
idreos_parkes | idreos_lab, parkes_lab
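
For accounts in one of these groups, a minimal batch script such as the sketch below requests a single GPU on the seas_gpu partition; the partition name comes from the tables above, while the CPU, memory, time, and program name are placeholder values to adjust for your own job:

    #!/bin/bash
    #SBATCH --partition=seas_gpu   # restricted partition; requires membership in an allowed group
    #SBATCH --gres=gpu:1           # request one GPU
    #SBATCH --cpus-per-task=4      # placeholder CPU request
    #SBATCH --mem=32G              # placeholder memory request
    #SBATCH --time=04:00:00        # placeholder walltime

    ./my_gpu_program               # placeholder for your actual workload

Submit the script with sbatch; if Slurm rejects the submission for the partition, the id check above will show whether the required group membership is missing.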

For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.
