2022 NCF migration to FASSE
The following is a copy of the messaging sent from Tim O’Keefe to all NCF users regarding the March 7th migration of NCF to the FASSE secure environment/cluster.
Just a reminder that the NCF will be transitioning to the new FASSE cluster on Monday, March 7th during the regularly scheduled FASRC maintenance window. The transition will begin at 7 AM ET and you will be granted access by 11 AM ET.
Again, you will no longer be able to access the NCF after 7 AM Monday.
If you missed the AMA session, I’ll attach the slides to this email, and you can find the video recording here. Here are some key differences between NCF and FASSE:
- VPN (required) — Within your Cisco AnyConnect client, instead of entering <username>@ncf, you’ll begin using <username>@fasse.
- SSH login nodes — Instead of SSH-ing into ncflogin.rc.fas.harvard.edu, you’ll begin SSH-ing into fasselogin.rc.fas.harvard.edu. 2FA is now required.
- Home directories — On FASSE, your home directory will live within /n/home_fasse/<username> instead of the previous location /users/<username>.
- Scratch — In addition to a local 70 GB /scratch directory, there will be shared scratch directories for each lab available within /n/holyscratch01/LABS/<group>. Each group will have a 50 TB quota. Files older than 90 days will be deleted.
- Slurm partitions — Instead of submitting jobs to the ncf partition, you’ll submit them to fasse, and instead of ncf_gpu you’ll use fasse_gpu. ncf_bigmem will remain the same.
- Open OnDemand/VDI — Instead of connecting to https://ncfood.rc.fas.harvard.edu, you’ll begin connecting to https://fasseood.rc.fas.harvard.edu (see also: FASSE VDI Apps)
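Put together, a typical post-migration session might look like the sketch below. The hostnames, paths, and partition names come from the notes above; the username jharvard, the lab group jharvard_lab, and the job’s resource requests are illustrative placeholders only, not prescribed values.

```shell
# Connect after joining the FASSE VPN in Cisco AnyConnect (a 2FA prompt follows).
# "jharvard" is a placeholder username.
ssh jharvard@fasselogin.rc.fas.harvard.edu

# Stage working data in the lab's shared scratch (50 TB per group; files older
# than 90 days are deleted). "jharvard_lab" is a placeholder group name.
cd /n/holyscratch01/LABS/jharvard_lab

# Submit to the new partition; this replaces "-p ncf" on the old cluster.
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH -p fasse            # use fasse_gpu for GPU jobs
#SBATCH -t 0-01:00          # illustrative time limit
#SBATCH --mem=4G            # illustrative memory request
echo "Running on $(hostname); home is /n/home_fasse/$USER"
EOF
sbatch job.sbatch
```

These commands require access to the FASSE cluster itself, so treat them as a template to adapt rather than something to run as-is.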
Please don’t hesitate to let me know if you have any questions.