The Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has a number of compute resources managed by FAS Research Computing. These compute partitions are open to all researchers at SEAS, and their allocation is governed by the relative fairshare of each group. The partitions are broken down as follows:
- seas_compute: This partition contains 4,980 cores spanning Intel Broadwell through Intel Ice Lake. It has a 7-day time limit.
- seas_gpu: This partition contains 2,564 cores spanning Intel Broadwell through Intel Ice Lake, plus 239 GPUs ranging from Nvidia Titan X to A100 chipsets. It has a 7-day time limit.
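A minimal sketch of a batch script targeting the seas_gpu partition (the job name, resource values, module, and script name below are illustrative placeholders, not recommendations):

```shell
#!/bin/bash
# Example Slurm batch script for the seas_gpu partition.
# All resource values are illustrative; size them for your actual job.
#SBATCH --partition=seas_gpu      # SEAS GPU partition
#SBATCH --gres=gpu:1              # request one GPU of any available model
#SBATCH --cpus-per-task=4         # CPU cores for the task
#SBATCH --mem=16G                 # memory for the job
#SBATCH --time=3-00:00            # 3 days; must be within the 7-day limit
#SBATCH --job-name=my_gpu_job    # hypothetical job name

module load python                # assumed module name; load what your job needs
python my_script.py               # placeholder for your workload
```

Submit with `sbatch` as usual; for seas_compute, drop the `--gres` line and change the partition name.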
Both seas_compute and seas_gpu are mosaic partitions, meaning they contain a variety of hardware and interconnects. If you require specific hardware, or you are an MPI user who needs all nodes on a single Infiniband fabric (e.g. HDR vs. FDR), please use the --constraint option in Slurm. A full list of constraints can be found on the Running Jobs page; to request specific GPU models, see the GPU section of that page. For more information about Slurm partitions on the FAS RC cluster, please refer to the Running Jobs document.
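As a sketch of how --constraint is used, the directives below pin a job to particular node features (the feature names shown are hypothetical; consult the Running Jobs page for the actual tags on the cluster):

```shell
# Pin a job to a specific CPU generation (feature name is an assumption):
#SBATCH --partition=seas_compute
#SBATCH --constraint=icelake

# MPI jobs needing a single Infiniband fabric can constrain the fabric tag
# (again, a hypothetical feature name):
#SBATCH --constraint=holyhdr

# To see which feature tags nodes in a partition actually advertise:
sinfo -p seas_compute -o "%n %f"
```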
For researchers needing a secure environment, the FAS Secure Environment (FASSE) is a secure multi-tenant cluster that gives Harvard researchers a secure enclave for analysis of sensitive datasets whose DUAs and IRBs are classified as Level 3. Please see the FASSE cluster documentation for how to gain access. Note that a home folder on FASSE is separate from any home folder you might have on the FASRC (Cannon) cluster. Data from the secure Level 3 (FASSE) environment should not be transferred into Level 2 space (Cannon).