Storage Service Center
This page provides the information you need to request and manage data storage allocations and to handle billing. It is essential that you review the Storage Billing FAQ and Data Storage Service pages. Below we describe the three software applications we use to help PIs (Coldfront), finance managers (FIINE), and lab/data managers (Starfish) perform their roles.
To learn more about the storage tiers and their features, please visit the storage tiers section on our data-storage services page. Please feel free to reach out to us at email@example.com with any questions.
STORAGE TIERS and COST
| | Tier 0 | Tier 1 | Tier 2 | Tier 3 |
|---|---|---|---|---|
| Description | High-performance Lustre | Enterprise Isilon | CEPH Storage | Tape |
| Cost per TB/month (rounded down) | $4.16 ($50/yr) | $20.83 ($250/yr) | $8.33 ($100/yr) | $0.416 ($5/yr) |
| Available for new allocations | Yes | Yes | No | Yes |
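As an illustration of the rates above, the monthly charge for an allocation is its size multiplied by the tier's monthly per-TB rate. The sketch below is a hypothetical helper, not an official FASRC tool; the tier names and rates are taken directly from the table.

```python
# Monthly per-TB rates (USD) from the tier table above.
MONTHLY_RATE = {
    "Tier 0": 4.16,    # High-performance Lustre ($50/TB/yr)
    "Tier 1": 20.83,   # Enterprise Isilon ($250/TB/yr)
    "Tier 2": 8.33,    # CEPH Storage ($100/TB/yr)
    "Tier 3": 0.416,   # Tape ($5/TB/yr)
}

def monthly_charge(tier: str, tb: float) -> float:
    """Monthly charge in USD for an allocation of `tb` terabytes on `tier`."""
    return round(MONTHLY_RATE[tier] * tb, 2)
```

For example, a 10 TB allocation on Tier 1 comes to $208.30 per month.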
- Billing is handled through the FIINE system. See below for how to get access to FIINE
- Billing is done monthly
- The billing cutoff is the 15th of each month. Any changes made to an allocation after that date will be reflected in the next month's bill
- The service center requires a 33-digit billing code to provide the service. Because it is an internal service, we cannot create POs for billing
- To understand your current bill, read here
- For billing questions and queries, email firstname.lastname@example.org
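The cutoff rule above can be made concrete with a small date helper. This is an illustrative sketch under the assumption that changes made on or before the 15th appear on that month's bill; `billing_month` is a hypothetical name, not part of any FASRC system.

```python
from datetime import date

CUTOFF_DAY = 15  # billing cutoff: the 15th of each month

def billing_month(change_date: date) -> tuple[int, int]:
    """Return the (year, month) of the bill that will reflect an
    allocation change made on `change_date`."""
    if change_date.day <= CUTOFF_DAY:
        return (change_date.year, change_date.month)
    # After the cutoff, the change rolls into the following month's bill.
    if change_date.month == 12:
        return (change_date.year + 1, 1)
    return (change_date.year, change_date.month + 1)
```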
REQUEST OR MANAGE AN ALLOCATION
NEW TIERED ALLOCATIONS
If you do not already have a tiered storage allocation and project in Coldfront, you can request a new storage allocation via the portal:
- Tier 0-2: Portal Storage Request Form for Tier 0, Tier 1, or Tier 2
- Tier 3: Request form for Tier 3 (tape)
EXISTING COLDFRONT TIERED ALLOCATION PROJECTS
If you have an existing project in Coldfront, you can manage your allocations there:
- To request an additional allocation or manage an existing one, you will use Coldfront
- If you have an existing allocation and cannot access Coldfront, please contact FASRC
Lead Time for New Tape Allocations
There is a minimum setup time of (TBD; currently 2-3 weeks). This timeframe assumes we receive the completed tape setup from our service partner NESE without delay; delays there are beyond our control and could increase the lead time. Please note that any storage changes made after the 15th of the month will be reflected in the following month's billing.
MANAGE BILLING FOR ALLOCATIONS
Charges for storage allocations are billed monthly. Expense code(s) can be applied to each allocation and can be sub-divided among multiple billing codes.
See our Service Center FAQ for answers to common questions.
See also How to read your Storage Service Center bill
To manage billing for an existing allocation, you will need:
- A FASRC account. You very likely already have this unless you are a new PI.
If you do not have an account, please view: How Do I Get a FAS Research Computing Account?
- Access to the FIINE billing system.
- PLEASE NOTE: If your account was created before approximately 2021 and you cannot log into FIINE, you likely need to go through the Informatics onboarding tool to update your Harvard Key information in our system: https://onboard.rc.fas.harvard.edu/onboard/
Instructions for expense code management and billing record review in FIINE are available at:
Starfish – Data Management
Starfish scans the different storage servers to provide a view of usage details, metadata, and project-based tagging. Check here for more details about Starfish and examples of querying the data.
Coldfront – Lab and Allocation management:
Coldfront provides a view of PI projects and allocations. New allocations, and updates to existing allocations, can be requested using Coldfront. Check here for more details about Coldfront and its use.
FIINE – FAS Instrument Invoicing Environment
FIINE allows lab/finance administrators to manage the expense codes per project/user and to view invoices. Check here for more information about using the FIINE system.
FAQ – Storage Service Center
Because storage has grown tenfold in the past 5 years, hosting individual small-capacity storage server deployments has become unsustainable to manage. These individual server systems do not easily allow data shares to grow. Due to their small volume, many systems run above 85% utilization, which degrades performance.
Many systems also run beyond their original maintenance contract, which causes issues in sourcing parts for repairs; older systems (>5 yr) increase the risk of catastrophic data loss. Some systems were purchased by PIs without a provision for backup systems, which has led to confusion about which data shares should have backups. Our prior backup methodology does not scale to these larger systems with hundreds of millions of files. Given these historical reasons, revamping our storage service offerings allows FASRC to maintain the lifecycle of equipment and to project the overall growth in data capacity, datacenter space, and professional staffing needed to maintain your research data assets safely.
Prior to the establishment of the Storage Service Center, we offered only a single NFS filesystem for your Lab Share; you now have the choice of four storage offerings to meet your technology needs. The tiers of service clearly define what type of backup your data will have. You pay only for the allocation capacity you need, as opposed to having to guess at the beginning of a server purchase and having the excess go unused.
Over time, you can request an increase to your allocation size. You will receive monthly reports on utilization from each tier to help you plan for future data needs. Some of our tiers will also have web-based data management tools that allow you to query different aspects of your data, tag your data, and see visual representations of your data.
We have worked with RAS on two allocation methods for charging data storage to your grants: (1) the per-user allocation method and (2) the per-project allocation method.
Per-user allocation method: You will be supplied a per-user usage report for each tier. You can treat each individual's percentage of the data as their share of the cost and apply the same cost distribution as their % effort on grants.
Example 1: A PI has a 10 TB allocation on Tier 1 that researchers John and Jill use. The monthly bill for 10 TB of Tier 1 is $208.30 (at $20.83/TB/mo). The usage report shows 8 TB of total usage, of which John's usage is 60% and Jill's is 40%. So the data charges associated with John are $124.98 and with Jill are $83.32. John is funded 50% on the NSF project and 50% on the NIH project, so $62.49 should be allocated to each grant. Jill is funded 100% on the NSF project, so $83.32 should be allocated to her NSF grant.
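The arithmetic in Example 1 can be sketched as a small helper that splits a monthly bill first by each user's share of usage and then by that user's % effort across grants. This is an illustrative calculation only; the function name and inputs are hypothetical, not part of any FASRC reporting tool.

```python
def split_user_charges(total_bill: float,
                       usage_share: dict[str, float],
                       effort: dict[str, dict[str, float]]) -> dict:
    """Split a monthly bill by per-user usage share, then by each
    user's % effort across grants. Returns {(user, grant): dollars}."""
    charges = {}
    for user, share in usage_share.items():
        user_cost = round(total_bill * share, 2)   # user's share of the bill
        for grant, pct in effort[user].items():
            charges[(user, grant)] = round(user_cost * pct, 2)
    return charges

# Numbers from Example 1: $208.30/month, John 60% / Jill 40% of usage.
bill = split_user_charges(
    208.30,
    {"John": 0.60, "Jill": 0.40},
    {"John": {"NSF": 0.5, "NIH": 0.5}, "Jill": {"NSF": 1.0}},
)
```

This reproduces the split above: $62.49 to each of John's grants and $83.32 to Jill's NSF grant.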
This method allows faculty to manage their data structures independently of specific projects, since multiple projects will be using some of the same data. Keep in mind that as researchers leave, there needs to be a plan for their data, as it will continue to appear in the usage reports.
Per-project allocation method: If you request a project-specific report, you will have a direct mapping of the data used by that project and can allocate the full cost according to the grant's cost distribution.
Example 2: A PI requests a new 5 TB allocation on Tier 1 for an NSF-funded project. 10 users share this data. The monthly bill would include a Tier 1 charge of $104.15 (at $20.83/TB/mo). The entire $104.15 would be charged to the NSF grant.
This allows a very straightforward assignment between data and funding source. Reuse of the active parts of this data will need to be assigned to future projects.
Example 3: The same PI also has a 100 TB allocation on Tier 0 used for multiple projects with multiple funding sources. The Tier 0 usage report would be provided per user as in Example 1, and the % effort allocation method would be used for Tier 0, while the Example 2 method would be used for the new project on Tier 1.
As is common with other Science Operations Core Facilities, once funding sources have been established for bills, we will continue to direct bill those funds until the PI updates these distributions. For the first few months billing will be manual via email until the new Science Operations LIMS billing system is complete.
We suggest that a data management plan be established at the beginning of a project, so that a full data lifecycle can be mapped to the phases of your data. This helps identify from the start data that will need to be kept long-term, and helps prevent data from being orphaned when students and postdocs move on. If research data will be used again in a subsequent project, you should allocate funds to carry this data forward to the new project. Per federal regulations, you cannot pay for storage in advance. The Tier 3 tape service provides a location to deposit data longer term (7 years), which can meet many of the funding requirements.
For billing inquiries or issues, please email email@example.com
For general storage issues, questions, or tier changes, please contact firstname.lastname@example.org
We will maintain existing physical servers while under warranty, which is typically 5-6 years from their purchase date. We will need a data migration plan to the appropriate tiers a few months prior to decommissioning the server.
Over FY22 we will be migrating whole filesystems at a time into the storage service center. All new space requests will be allocated on newly deployed storage in one of the Tiers.
Most owned storage servers have already been phased out.