SEAS Compute Resources

The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has a number of compute resources managed by FAS Research Computing. These compute partitions are open to all researchers at SEAS, and their allocation is governed by the relative fairshare of the groups. The partitions are broken down as follows:

  • seas_compute: This block of compute contains 4128 cores ranging from Intel Cascade Lake to Intel Ice Lake. This partition has a 7-day time limit.
  • seas_gpu: This block of GPU nodes contains 1984 cores of Intel Ice Lake, 512 GB of RAM per node, and 124 Nvidia A100 40GB GPUs. This partition has a 7-day time limit. Interactive jobs on seas_gpu are limited to no more than 6 hours and no more than 2 cores.
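As an illustration of the interactive limits above, an interactive session on seas_gpu could be requested with `salloc` (a sketch; the specific core, memory, and time values are examples chosen to stay within the stated 6-hour, 2-core limit):

```shell
# Request an interactive session on seas_gpu:
# 1 GPU, 2 cores (the maximum), 16 GB RAM, 4 hours (under the 6-hour limit)
salloc -p seas_gpu --gres=gpu:1 -c 2 --mem=16G -t 0-04:00
```

Once the allocation is granted, commands run in the resulting shell execute on the assigned GPU node.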

seas_compute is a mosaic partition, meaning it contains a variety of hardware and interconnects. If you require a specific type of hardware, please use the --constraint option in Slurm. A full list of constraints can be found on the Running Jobs page; to request specific GPU models, see the GPU section of that page. For more information about Slurm partitions on the FASRC cluster, please refer to the Running Jobs document.
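A batch submission using --constraint might look like the following sketch (the constraint name `icelake` and the workload script are illustrative; consult the Running Jobs page for the valid constraint names on the cluster):

```shell
#!/bin/bash
#SBATCH -p seas_compute          # SEAS shared compute partition
#SBATCH --constraint=icelake     # restrict to a hardware type (name illustrative)
#SBATCH -c 8                     # 8 cores
#SBATCH --mem=32G                # memory for the job
#SBATCH -t 3-00:00               # 3 days, within the 7-day partition limit
#SBATCH -o myjob_%j.out          # stdout file, %j expands to the job ID

./my_analysis                    # hypothetical workload
```

Submit with `sbatch myjob.sh`; Slurm will then schedule the job only on nodes matching the requested constraint.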

For researchers needing a secure environment, the FAS Secure Environment (FASSE) is a secure multi-tenant cluster that provides Harvard researchers with a secure enclave for analysis of sensitive datasets whose DUAs and IRBs are classified as Level 3. Please see the FASSE cluster documentation for how to gain access. Note that a home folder on FASSE is separate from any home folder you might have on the FASRC (Cannon) cluster. Data from the secure Level 3 (FASSE) environment should not be transferred into Level 2 space (Cannon).

© The President and Fellows of Harvard College
Except where otherwise noted, this content is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.