
2020 RCNFS server moves – FAQ

If you’ve received a message that your lab storage on an RCNFS server (rcnfs01 – rcnfs13, fs2k01, etc.) is moving and you have questions, we hope this page will help answer those. If not, you can reply to the original message or contact us with your question.

Q: What is the process?

A: All members of each lab will receive an email with general information, a link to this FAQ, and instructions for scheduling your share’s migration. On the day of the move, your share will be unavailable so that
A) the final sync can complete without any new changes, and
B) the necessary configuration changes can be rolled out across the cluster.
You will be notified when the move is complete and given the new full path. Your /n links will remain the same, and any existing Samba/CIFS desktop mount path will remain the same.
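If you would like to confirm where your /n link points before and after the move, `readlink -f` shows the full path the link resolves to. The sketch below uses a temporary symlink as a stand-in; on the cluster you would run `readlink -f /n/your_lab_name` instead (the lab name here is hypothetical).

```shell
#!/bin/sh
# Demo with a temporary symlink standing in for a /n/lab_name link.
mkdir -p /tmp/rcnfs_demo/new_storage
ln -sfn /tmp/rcnfs_demo/new_storage /tmp/rcnfs_demo/lab_link

# readlink -f resolves the link to the full path of the storage behind it,
# i.e. where your data actually lives after the move.
readlink -f /tmp/rcnfs_demo/lab_link
```

Running the same command before and after your migration day will show the old and new full paths, while the /n link itself stays unchanged.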

Q: Can I still use the lab storage?

A: Yes. Your storage will be available up until the day before the move. A sync from the existing share to the new location has already run and will run again closer to the move. If possible, please refrain from adding or modifying data the day before the move. If you do add a significant amount of data that day, please let us know immediately, as an additional sync may be necessary.

Q: Why is this necessary?

A: As a cost-saving measure, FASRC is consolidating physical data center space with Harvard Medical School in our Boston data center. We will be moving 18 racks’ worth of servers to a new location and consolidating as much as possible.

Overall, we will have less rack space and must decommission older, less-efficient servers in favor of denser systems to make the best use of the space we will have. The RCNFS boxes are very old and already due for replacement. Your share will be moved to our new CEPH cluster, which is more resilient and not size-constrained the way these NFS servers are.

Q: Can it be put off?

A: We must vacate the current space in the data center and will have less rack space in the new area. Because this involves work by HMS, the Boston data center personnel, and vendors, we must proceed as quickly as possible. The original schedule was to start in June, but the pandemic made that impossible. Each group can now put limited staff in the data center, and we are working against our lease deadline. This migration must be completed by the end of September, as we do not have rack space for these servers in the new data center space.

Q: What if we have more than one share on an RCNFS server?

A: We will work with you to consolidate this into one share on the new storage. Most likely this was originally done due to limited contiguous space on the small NFS boxes, which is not an issue with the new storage.

Q: What will happen to your ability to Samba mount your share (if applicable)?

A: If your share is currently Samba-mountable, we will work with you to maintain that ability. Be aware that you will not be able to mount or access the share via mapping (Samba) on the day of the move.

Q: What will happen to your /n/lab_name link?

A: Your /n symlink will be carried over and will point to the new storage. The full path to your storage will change, so if you have the current full path hard-coded in scripts, you will need to update them. Ideally, update them to use the /n/lab_name link instead; you can do that now, as that link will remain the same.
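One way to find and update hard-coded full paths is a grep/sed pass over your scripts. The paths below are placeholders for illustration only; your migration email will contain the actual old full path for your share. A minimal sketch, run here against a demo file in /tmp:

```shell
#!/bin/sh
# Hypothetical old full path and /n link; substitute your share's real values.
OLD_PATH="/net/rcnfs01/srv/export/example_lab"
NEW_LINK="/n/example_lab"

# Demo: a script that hard-codes the old full path.
mkdir -p /tmp/demo_scripts
printf 'DATA=%s/results\n' "$OLD_PATH" > /tmp/demo_scripts/job.sh

# Find every file referencing the old path, then swap in the stable /n link.
# (| is used as the sed delimiter because the patterns contain slashes.)
grep -rl "$OLD_PATH" /tmp/demo_scripts | xargs sed -i "s|$OLD_PATH|$NEW_LINK|g"

cat /tmp/demo_scripts/job.sh
```

After the swap, the script reads `DATA=/n/example_lab/results`, which will keep working across the move since the /n link does not change.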

Q: What is CEPH?

A: CEPH is a dense, distributed storage technology that offers high resiliency and can be easily expanded in the future. CEPH will feature prominently in our future storage endeavors and is also the foundation on which the New England Storage Exchange is built.

Q: Which share is this?

A: There are around 150 shares, which is why we sent an initial bulk email. Individual emails with your share’s scheduling details will go out soon. You can also look for your share in this list:

#rcnfs01
ni_lab
abzhanov_lab
lewis_lab
schier_lab
mango_lab
debivort_lab
pfister_lab
shaw_www
hcbi
sriram_lab
biolocal

#rcnfs02
dowling_lab
sanes_lab
needleman_lab
xray_diffraction
test_lab
arnold_arboretum
mathews_lab
duraisingh_lab

#rcnfs03
dumais_lab
garner_lab
gopinath_lab
holbrook_lab
huttenhower_lab
irizarry_lab
massspec
mcb52
sinaiko_lab
stager_lab
undergrads

#rcnfs04
apallais_lab
drop
gaudet_lab
kremer_winner
lichtman_private
matt_test
murthy_lab
oebweb
old_lab_archive
pgsqldata
sorger_lab
xiaoleliu_lab

#rcnfs05
aspuru_lab
aspuru_lab2
baccarelli_lab
burton_lab
eggan_lab
eggan_lab2
kominers_lab
kunes_lab
liu
liu_lab
mullainathan_lab
sabeti_lab
scadden_lab
stantcheva_lab
wakeley

#rcnfs06
biewener_lab
doyle_lab
ft_storage
ips_core
karplus_lab
kou_lab
oracle_backups
randall_lab
schier_sleep

#rcnfs07
channing_nutrition
demler_lab
francis_lab
gyuan_lab
haig_lab
hooker_lab
losick_lab
meissner_lab
meissner_lab2

#rcnfs08
cluzel_lab
gibbs_lab
giribet_lab
holm_lab
mcbimaging
meselson_lab
michael_lab
myers_lab
nunn_lab
oberg_lab
park_lab

#rcnfs09
berg_lab
betley_lab
bomblies_lab2
brain_lab
gmf_group
mcbweb
mcconnell_lab
oeb2
sccr

#rcnfs10
bloxham_lab
bloxham_lab2
chetty
engert_lab
feldman_lab
francis_lab2
hoekstra_lab
hoekstra_lab2
pringle_lab
qzhang_lab
tamer_lab
turnbaugh_lab

#rcnfs11
beta_cell
cbdb
doria_lab
eol_wiki
hanken_lab
huttenhower_lab_nobackup
pgreen_lab
quackenbush_lab
rao_lab
seltzer_lab

#rcnfs12
desmarais_lab
hopkins_lab
kuang_lab
piquet_lab
samuel_lab
scharf_lab

#rcnfs13
jacob_lab2
mallet_lab
mickley
xiefs1

#cohen_lab

#contefs1

© The President and Fellows of Harvard College
Except where otherwise noted, this content is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.