
Coldfront – Allocation management


Please see the Storage Service Center page for the data flow to ColdFront.

ColdFront is an open-source resource allocation management system designed to provide a central portal for administration, usage reporting, and allocation management of HPC resources. FASRC adapted the software to manage allocations on the FASRC cluster. The service currently has two major components:

Project: Lab groups and projects. If your lab has more than one group, or you are part of multiple groups, you will see more than one project.

Allocation: Storage Allocation on one or more storage devices.

Users must connect to the FASRC VPN and can then log in to ColdFront using their FASRC credentials.


After you log in to ColdFront, you will see one or more projects on the left and one or more allocations on the right. Click a project link or an allocation's Active button to view details about your projects and allocations.


The Project page lists all the users in your lab group. There are two main roles: Manager and User. Only users with the Manager role can request new allocations or changes to an existing allocation; users with the User role can view their current projects and allocations. The Allocation tab shows your current allocations and usage.

Note: If you need to be authorized to request allocations, ask your PI to log in to ColdFront and promote your user account to Manager.


The page shows the total allocation size, total usage, and estimated cost per month. It also has a table showing usage per user; this information is updated once a week from the data management system (Starfish), and the estimated monthly cost is computed by aggregating each user's usage across different folders. Labs can request access to Starfish or its data to better understand their usage and remove old project or user data.

Currently, allocation requests are available for three storage tiers (Tier 0, Tier 1, Tier 3). We are still working on offering Tier 2 as a service.


Allocation Requests:

The PI or a Manager of a lab project can make a new allocation request or request changes to an existing allocation. To request a new allocation, click Request Resource Allocation; to update an existing allocation, click View Details on the allocation and then Request Change. ColdFront will email you when the allocation is updated and ready to use.


Please review https://www.rc.fas.harvard.edu/services/data-storage/#Offerings_Tiers_of_Service for the storage features and updates. If you have any questions, please feel free to reach us at rchelp@rc.fas.harvard.edu.

Storage Service Center


This page provides the information you need to request and manage data storage allocations and billing. It is essential that you review the Storage Billing FAQ and Data Storage Service pages. Below we describe the three software applications that help PIs (ColdFront), finance managers (FIINE), and lab/data managers (Starfish) perform their roles.

To get more information about the storage tiers and their features, please visit the storage tiers section of our data-storage services page. Please feel free to reach out to us at rchelp@rc.fas.harvard.edu with any questions.


                                  Tier 0            Tier 1             Tier 2           Tier 3
Description                       High-performance  Enterprise Isilon  NFS Storage      Tape
                                  Lustre
Cost per TB/month (rounded down)  $4.16 ($50/yr)    $20.83 ($250/yr)   $8.33 ($100/yr)  $0.416 ($5/yr)
Snapshot                          No                Yes                No               No
Disaster Recovery                 No                Yes                Yes              No
Available for new allocations     Yes               Yes                No               Yes
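As a sanity check of the cost row, each monthly rate is simply the annual price divided by 12, rounded down to the cent (a sketch; the tier names and annual prices are taken from the table above, and Tier 3 is listed to three decimal places, $5 / 12 ≈ $0.416):

```python
# Derive the table's monthly per-TB rates from the annual prices
# (monthly = annual / 12, rounded down to whole cents).
import math

annual_per_tb = {"Tier 0": 50, "Tier 1": 250, "Tier 2": 100, "Tier 3": 5}

for tier, annual in annual_per_tb.items():
    monthly = math.floor(annual / 12 * 100) / 100   # round down to whole cents
    print(f"{tier}: ${monthly:.2f}/TB/month (${annual}/yr)")
```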

Important notes:

  • Billing is through the FIINE system; see below for how to get access to FIINE.
  • Billing is done monthly.
  • The cutoff day for billing is the 15th of each month; any changes made to an allocation will be reflected in the next month's bill.
  • The service center needs a 33-digit billing code to provide the service. It is an internal service, so we cannot create POs for billing.
  • To read your current bill, see How to read your Storage Service Center bill.
  • For billing questions and queries, email billing@rc.fas.harvard.edu.



Lead Time for New Tape Allocations

There is a minimum setup time of (TBD – 2-3 weeks currently). This timeframe assumes we receive the completed tape setup from our service partner NESE without delay; delays there are beyond our control and could increase the lead time. Please note that any storage changes made after the 15th of the month will be reflected in the following month's billing.


Charges for storage allocations are billed monthly. Expense code(s) can be applied to each allocation and can be sub-divided among multiple billing codes.
See our Service Center FAQ for answers to common questions.

See also How to read your Storage Service Center bill

To manage billing for an existing allocation, you will need:

Instructions for expense code management and billing record review in FIINE are available at:


Starfish – Data Management

Starfish – Scans the different storage servers to provide a view of usage details, metadata, and project-based tagging. Check here for more details about Starfish and examples of querying the data.

See also our guide to Data Management Best Practices

ColdFront – Lab and Allocation management

ColdFront – Provides a view of PI projects and allocations. New allocations and updates to existing allocations can be requested through ColdFront. Check here for more details about ColdFront and its use.

FIINE – FAS Instrument Invoicing Environment

FIINE – For lab/finance administrators to manage the expense codes per project/user and view invoices. Check here for more information about using the FIINE system.


FAQ – Storage Service Center

Since storage usage has grown tenfold in the past five years, hosting individual small-capacity storage server deployments has become unsustainable to manage. These individual server systems do not easily allow data shares to grow, and due to their small volume, many run above 85% utilization, which degrades performance.

Many systems also run beyond their original maintenance contracts, which causes issues in sourcing parts for repairs; older systems (>5 yr) increase the risk of catastrophic data loss. Some systems were purchased by PIs without provision for backups, which has led to confusion about which data shares should have backups. Our prior backup methodology does not scale to these larger systems with hundreds of millions of files. Given these historical reasons, revamping our storage service offerings allows FASRC to maintain the lifecycle of equipment and to project the overall growth of data capacity, datacenter space, and professional staffing needed to maintain your research data assets safely.

Prior to the establishment of a Storage Service Center, we only offered a single NFS filesystem for your Lab Share; you now have the choice of four storage offerings to meet your technology needs. The tiers of service clearly define what type of backup your data will have. You only have to pay for an allocation capacity that you need, as opposed to having to guess at the beginning of a server purchase and have this excess go unused.

Over time, you can request an increase to your allocation size. You will receive monthly reports on utilization from each tier to help you plan for future data needs. Some of our tiers will also have web-based data management tools that allow you to query different aspects of your data, tag your data, and see visual representations of your data.

Unlike the compute cluster, where resources are reserved and released, data is allocated to storage long-term. In addition, storage needs across various research domains are drastically different. Therefore, in the FY19 federal rate setting, FAS decided to remove the portion of FASRC dedicated to maintaining storage from the facilities part of the F&A. This allows FAS to run a Storage Service Center with costs that are allowable on federal awards.
Information about the storage offerings can be found on our Storage Services page and Storage Service Center document. Requests for storage allocations can be made through our portal. We ask that you limit your requests to once a month at most. We have a billing cutoff date of the 15th. Please keep in mind that large requests (>100 TB) might not all be available at the time of request, and a smaller increase may be applied as we add more capacity in the coming months.
Yes, you can have allocations in different storage tiers to meet your needs and budget.

We have worked with RAS on two allocation methods for charging data storage to your grants: (1) the per-user allocation method and (2) the per-project allocation method.

Per-user allocation method: You will be supplied a per-user usage report for each tier. You can use each individual's percentage of the data as their cost share and apply the same cost distribution as their percent effort on grants.

Example 1: A PI has a 10 TB allocation on Tier 1 used by researchers John and Jill. The monthly bill for 10 TB of Tier 1 is $208.30 (at $20.83/TB/mo). The usage report shows 8 TB of total usage, of which John's share is 60% and Jill's is 40%. The data charges associated with John are therefore $124.98 and with Jill $83.32. John is funded 50% on an NSF project and 50% on an NIH project, so $62.49 should be allocated to each grant. Jill is funded 100% on an NSF project, so $83.32 should be allocated to her NSF grant.
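The split above can be sketched in a few lines of Python (figures are taken from the example; the user names and grant labels are illustrative):

```python
# Per-user allocation method: the full monthly bill is split by each user's
# share of measured usage, then by each user's percent effort on grants.

RATE_TIER1 = 20.83                          # $/TB/month
allocation_tb = 10
monthly_bill = allocation_tb * RATE_TIER1   # $208.30

usage_share = {"John": 0.60, "Jill": 0.40}  # share of the 8 TB actually used
grant_effort = {
    "John": {"NSF": 0.50, "NIH": 0.50},
    "Jill": {"NSF": 1.00},
}

for user, share in usage_share.items():
    user_charge = round(monthly_bill * share, 2)      # John: 124.98, Jill: 83.32
    for grant, effort in grant_effort[user].items():
        print(f"{user} -> {grant}: ${round(user_charge * effort, 2):.2f}")
```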

This method allows faculty to manage their data structures independently of specific projects, since multiple projects will use some of the same data. Keep in mind that as researchers leave, there needs to be a plan for their data, as it will continue to appear in the usage reports.

Per-project allocation method: If you request a project-specific report, you will have a direct mapping of the data used by that project and can allocate the full cost according to the cost distribution of its grants.

Example 2: A PI requests a new 5 TB allocation on Tier 1 for an NSF-funded project. Ten users share this data. The monthly bill would include a Tier 1 charge of $104.15 (5 TB at $20.83/TB/mo). The entire $104.15 would be charged to the NSF grant.
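The per-project case is a one-line calculation, since the whole allocation maps to a single funding source (5 TB × $20.83/TB/mo = $104.15, following directly from the listed Tier 1 rate):

```python
# Per-project allocation method: the whole allocation belongs to one project,
# so the full monthly charge goes to that project's grant.
RATE_TIER1 = 20.83                                     # $/TB/month
allocation_tb = 5
monthly_bill = round(allocation_tb * RATE_TIER1, 2)    # $104.15, all to one grant
print(f"NSF grant charge: ${monthly_bill:.2f}")
```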

This allows there to be a very straightforward assignment between data and funding source. Reuse of the active parts of this data will need to be assigned to future projects.

Example 3: The same PI also has a 100 TB allocation on Tier 0 used for multiple projects with multiple funding sources. The usage report for Tier 0 would be provided per user as in Example 1, and the percent-effort allocation method would be used for Tier 0, while the Example 2 method would be used for the new project on Tier 1.

As is common with other Science Operations Core Facilities, once funding sources have been established for bills, we will continue to direct bill those funds until the PI updates these distributions. For the first few months billing will be manual via email until the new Science Operations LIMS billing system is complete.

We suggest that a data management plan be established at the beginning of a project, so that a full data lifecycle can be mapped to the phases of your data. This helps identify data that will need to be kept long-term from the start, and helps mitigate data being orphaned when students and postdocs move on. If research data is being used again in a subsequent project, you should allocate funds to carry this data forward to the new project. Per federal regulations, you cannot pay for storage in advance. The Tier 3 tape service provides a location to deposit data longer term (7 years), which can meet many funding requirements.

Billing will be handled by Science Operations Core Facilities. You will be billed monthly for the TB allocation of space in each tier. Groups will have 2-3 business days to review the invoices before the charges are assessed via internal billing journals. By default, we will also provide you a usage report by user. A usage report per project is available by request and is best set up for new projects with new allocations.
It is your and your finance admin's responsibility to update or verify your 33-digit billing code for monthly billing in the FIINE system. If no other billing codes are designated, your start-up fund will be used. We are here to help you navigate these decisions: Contact FASRC

For billing inquiries or issues, please email billing@rc.fas.harvard.edu

For general storage issues, questions, or tier changes, please contact rchelp@rc.fas.harvard.edu

We have moved away from owned servers. Very few exceptions will be made. If circumstances warrant one, the request will be reviewed by the University Research Computing Officer, Sr. Director of Science Operations and Administrative Dean of Science. One possible exception is when storage must be adjacent to an instrument where data collection rates are beyond the capacity of 1 Gbps Ethernet (100 MB/s) for extended periods (days).

We will maintain existing physical servers while under warranty, which is typically 5-6 years from their purchase date. We will need a data migration plan to the appropriate tiers a few months prior to decommissioning the server.

Over FY22 we will be migrating whole filesystems at a time into the storage service center. All new space requests will be allocated on newly deployed storage in one of the Tiers.

Most owned storage servers have already been phased out.

© The President and Fellows of Harvard College
Except where otherwise noted, this content is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.