SEAS Compute Resources
The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has a number of compute resources managed by FAS Research Computing. Members of the SEAS community can use the information below to access these resources. The available partitions and their component nodes are:
Partition Name | Assigned compute nodes | Description of nodes |
---|---|---|
seas_gpu | holygpu2c0913 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6148 2.4GHz 20 core, 373GB RAM, 10 x NVIDIA Tesla V100 32GB, EDR Infiniband |
seas_dgx1 | seasdgx[101-104] | each node: NVIDIA DGX-1, 2 x Intel Xeon E5-2698 v4 2.20GHz, 499GB RAM, 8 x NVIDIA Tesla V100 16GB, FDR Infiniband |
seas_dgx1_priority | see above | see above |
seas_gpu_requeue | holyseasgpu[01-07,09-11], holygpu2c09[13,17,31], seasdgx[101-104] | see individual entries |
seas_iacs | holyseas0[5-6] | each node: Dell M915, 4 x AMD Opteron 6376 16 core, 247GB RAM, 1GbE |
narang_dgx1 | seasdgx[101-102] | see individual entries |
pierce | holyseas04, holy2b09303 | both nodes: Dell M915 |
mazur | holyvulis01 | Dell M630, 2 x Intel Xeon CPU E5-2697 v4 2.30GHz 18 core, 247GB RAM, 1GbE, FDR Infiniband |
tambe_gpu | holygpu2c0917 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6148 2.4GHz 20 core, 384GB RAM, 10 x NVIDIA Tesla V100 32GB, EDR Infiniband |
barak_ysinger_gpu | holygpu2c0931 | single node: Supermicro SYS-4029GP-TRT, 2 x Intel Xeon Silver 4216 CPU 2.10GHz 16 core, 384GB RAM, 8 x NVIDIA GeForce RTX 2080, 1GbE, FDR Infiniband |
idreos_parkes | holygpu2c1125 | single node: Dell DSS 8440 Cauldron, 2 x Intel Xeon Gold 6248 2.5GHz 20 core, 1.5TB RAM, 10 x NVIDIA Tesla V100 32GB, EDR Infiniband |
seas | holy7c026[09-12], holy7c241[05-12], holy7c242[05-12], holy7c243[05-12], holy7c244[05-12], holy7c245[05-12], holy7c246[05-12] | each node: Lenovo SD650v1, 2 x Intel Xeon Platinum 8268 2.9GHz 24 core, 192GB RAM, EDR Infiniband |
seas_priority | holy7c026[09-12], holy7c241[05-12], holy7c242[05-12], holy7c243[05-12], holy7c244[05-12], holy7c245[05-12], holy7c246[05-12] | each node: Lenovo SD650v1, 2 x Intel Xeon Platinum 8268 2.9GHz 24 core, 192GB RAM, EDR Infiniband |
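The current node list, state, and limits for any of these partitions can be checked from a login node with standard Slurm query commands. A minimal sketch, using seas_gpu only as an example partition name:

```bash
# List node counts, node states, and the time limit for one partition
sinfo -p seas_gpu

# Show the full partition definition, including its node list and access settings
scontrol show partition seas_gpu
```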
The partitions and the groups that have access to them are listed below. For restricted Slurm compute partitions you must be in the proper AD group to submit jobs (type “id” on the command line to see the groups you are in; a brief example follows the table). These groups have priority access to the listed resources; when the resources are not in use, they are part of the general pool. Nearly all compute nodes are also part of the “serial_requeue” or “gpu_requeue” partitions, which act as the backfill scheduler: higher-priority jobs will kick requeue jobs off a node if the resources are needed, and the lowest-priority and shortest-running jobs are preempted first.
Partition Name | Group(s) Who Have Access | Additional notes |
---|---|---|
seas_gpu | seas | |
seas_dgx1 | seas | |
seas_dgx1_priority | acc_lab rush_lab kung_lab | |
seas_gpu_requeue | seas | |
seas_iacs | seas_iacs | |
narang_dgx1 | narang_lab | |
pierce | pierce_lab slurm_group_pierce | |
mazur | mazur_lab_seas | |
tambe_gpu | tambe_lab | |
barak_ysinger_gpu | barak_lab ysinger_group | |
idreos_parkes | idreos_lab parkes_lab | |
seas | slurm_group_seas | Jobs here are requeued by higher-priority jobs in seas_priority
seas_priority | slurm_group_seas | Only groups with a fairshare score above 0.75 may submit to this partition
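As a rough sketch of how group membership, fairshare, and job submission fit together (the resource requests and my_job.sh below are placeholder examples, not required values):

```bash
# Show the AD/Slurm groups your account belongs to; restricted partitions
# require membership in the matching group
id

# Check your group's fairshare score (relevant to the 0.75 threshold on seas_priority)
sshare -U

# Submit a batch job to a restricted SEAS partition (example resource requests)
sbatch -p seas_gpu --gres=gpu:1 -t 0-04:00 --mem=16G my_job.sh

# Requeue partitions are backfill: jobs there can be preempted by higher-priority
# work, and --requeue lets Slurm restart them automatically
sbatch -p seas_gpu_requeue --requeue --gres=gpu:1 my_job.sh
```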
For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.