RIT Research Computing supplies advanced research technology resources and support to researchers in their quest to discover.

This site is the definitive source for RIT Research Computing documentation. If you notice any errors on this site, please submit a ticket.

0 - Overview of Research Computing

0.1 - RC Services

RIT Research Computing provides research IT resources and support to RIT researchers. We provide the following services:

  • High Performance Computing (HPC): The RC Team manages an HPC cluster, called SPORC (Scheduled Processing on Research Computing). Access to cluster resources is reserved via the Slurm scheduler.
  • Research Data Storage: The RC Team maintains network storage for the SPORC cluster and (some) research fileshares. While RC storage is resilient (allowing for limited disk or host failures without data loss), we cannot provide backups or restoration of deleted files.
  • Research Software Support: To enable researchers to focus on their experiments, RC provides research software packaging with Spack for the cluster. Researchers also have the ability to install their own software.
  • Research GitLab: The RC Team hosts a GitLab instance, enabling RIT researchers to collaborate on source code for their experiments with git-based version control. This GitLab instance is intended for research projects. While nothing prevents the use of RC’s GitLab instance for academic purposes, we don’t recommend it: there is no Service Level Agreement (SLA), and upgrades or maintenance may bring the service down and impact academic work.
  • Research REDCap: REDCap is a web platform for building and managing online databases and surveys for research projects. RC’s REDCap instance does not meet the requirements for storing protected (e.g. HIPAA) data.
  • Facilitation & Consultation: Our Facilitation Team’s primary goal is to make accessing and using RC resources easier for RIT’s research community. We provide training, outreach, computational workflow optimization, assistance with grant proposals, and guidance for new faculty hires.

0.2 - Acceptable Use Policy

The Research Computing Acceptable Use Policy (AUP) outlines the proper and responsible use of RC-provided computing resources. It’s designed to help all users understand their rights and responsibilities when using these shared systems.

0.3 - Data Compliance

The Research Computing environment is not compliant with NIST 800-171, NIST IR 8484, or CMMC controls. RC resources do not meet compliance standards for the following types of data. DO NOT store these on RC resources:

  • Social Security Numbers (SSNs), Individual Taxpayer Identification Numbers (ITINs), and/or other national identification numbers
  • University Identification Numbers (UIDs)
  • Driver’s license numbers
  • Financial account information
  • Educational records governed by FERPA
  • Personal health information (as defined by HIPAA)
  • Employee personnel information

0.4 - Newsletter

Each month, we share system updates, new features, upcoming events, and tips to help you make the most of our resources in an email newsletter.

0.5 - Support

Standard support hours are Monday-Friday, 9am-4pm.

0.6 - Publications

Since 2016, RIT Research Computing has supported over 720 scholarly works! Visit our publications site to view these publications, statistics, and information on how to acknowledge RC in your publications.

1 - Access to Research Computing Services

1.1 - Project Questionnaire

At Research Computing, we want to know our researchers, their research, and their computational needs. Before you can use Research Computing resources, you need to fill out our New Project Questionnaire. Your responses help us understand your project and provide you with everything you need to do your research on our cluster.

1.2 - ColdFront

After your project questionnaire is approved, we will set up your RC Project using ColdFront. RC Projects should be tied to a specific research project with a start and an end date. Please fill out a new questionnaire for each distinct research project you work on.

ColdFront provides self-service management of RC Projects and associated Resource Allocations for Principal Investigators (PIs). Through ColdFront, PIs can update Project information (e.g. description, grants, publications), add/remove collaborators to their projects, and request additional resources for their projects.

1.3 - Logging In

Our OnDemand Web Portal provides browser-based access to RC storage, terminal shells, and job submission.

You can also log in to the RC cluster via SSH using the following command:

$ ssh RIT_USERNAME@sporcsubmit.rc.rit.edu

Notes:

  • In the command above, replace RIT_USERNAME with your RIT username, e.g. abc1234.
  • You need to use your RIT username and password to log in to the cluster.
  • If you are logging in with your password via SSH, you will also receive a Duo prompt on your phone to verify the login. If you do not receive a Duo prompt, please go to start.rit.edu and verify your Multifactor Authentication settings. If you receive an error message that says “Your account is disabled” after your Duo prompt, please follow the instructions here to unlock your Duo account.
  • Duo’s SMS and Phone Call verification methods are not supported for logging into the RC cluster via SSH. You must have the Duo mobile app installed on your phone to receive push notifications.
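
If you log in often, you can add a host entry to your SSH client configuration so you don’t have to retype the full hostname and username each time. This is only a sketch; abc1234 is a placeholder for your own RIT username:

```
# ~/.ssh/config -- example entry; replace abc1234 with your RIT username
Host sporc
    HostName sporcsubmit.rc.rit.edu
    User abc1234
```

With this entry in place, `ssh sporc` behaves like the full command above, and Duo verification still applies.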

1.4 - Maintenance Windows

The RC environment has regular maintenance windows (usually the second Tuesday of the month). Maintenance begins at 8 AM Eastern Time on the scheduled day and is typically complete by 5 PM the same day, though we occasionally have two-day maintenance windows.

2 - High Performance Computing (HPC)

The cluster is primarily designed for batch processing. We use software called Slurm to manage batch jobs on the cluster. When you want to run code on the cluster, you tell Slurm what resources you need, and Slurm dispatches your work to one or more of the computers that make up the cluster. When your code finishes running, Slurm frees up those resources so other researchers can use them.
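
As a sketch of how this works, here is a minimal batch script; the job name, account, and resource values are placeholders, not recommendations. You would submit it with `sbatch myjob.sh`:

```shell
#!/bin/bash
# Minimal Slurm batch script sketch -- job name, account, and resource
# values below are placeholders; adjust them for your own work.
#SBATCH --job-name=hello_sporc
#SBATCH --account=my-slurm-account
#SBATCH --output=hello_%j.out
#SBATCH --ntasks=1
#SBATCH --mem=1g
#SBATCH --time=00:05:00

# Everything below the #SBATCH directives runs on the compute node(s)
# Slurm assigns to the job.
MESSAGE="Hello from $(hostname)"
echo "$MESSAGE"
```

`sbatch` returns immediately with a job ID; Slurm runs the script once the requested resources are available and writes the script’s output to hello_<jobid>.out.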

2.1 - Slurm Terminology

Slurm has Accounts and Users. When you fill out the Questionnaire, we will create your Slurm Account. Your Slurm User is your normal RIT username.

A Slurm Account is a tool for controlling the resources allocated to a user (or a group of users). A User must be added to an Account before they can use it. So, if multiple researchers want to use the same Account, each of their Slurm Users must be added to the Account.

A User is simply a way to connect your RIT credentials to the Slurm environment. Important: Having a User and being able to log in to the cluster does not mean you have been allocated resources. Your User must be added to a Slurm Account before resources are allocated. You can run the following command to see if you have been added to a Slurm Account:

$ my-accounts

3 - Research Data Storage

Each researcher who has cluster access has a Home Directory. Your Home Directory is located at /home/rit_username/. You should use your Home Directory to store:

  • SSH keys
  • Custom software environments (e.g. conda)
  • Configuration files (e.g. .bashrc)
  • Folders that OnDemand creates

Each RC Project has an associated Project Directory. Each User in an RC Project has access to the associated Project Directory, located at /shared/rc/project_name. Your Project Directory should store any data that your collaborators need access to, such as:

  • Datasets for your experiments
  • Shared software environments
  • Code for running your experiments
  • Results from your experiments
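
One common way to organize a Project Directory is a subdirectory per category above. This is only a sketch: my_project is a hypothetical project name, and for illustration the snippet falls back to a temporary directory when run outside the cluster:

```shell
# "my_project" is a hypothetical project name; substitute your own
# Project Directory under /shared/rc/. Outside the cluster this sketch
# falls back to a temporary directory so it still runs.
PROJECT_DIR="${PROJECT_DIR:-$(mktemp -d)}"

# One subdirectory per kind of shared material.
mkdir -p "$PROJECT_DIR/datasets" "$PROJECT_DIR/code" "$PROJECT_DIR/results"
ls "$PROJECT_DIR"
```

Keeping a consistent layout like this makes it easier for collaborators (and for you, months later) to find datasets, code, and results.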

4 - Research Software Support

The RC Team uses a tool called Spack to manage software installations on the RC cluster. Spack is fully supported by Research Computing: we will package, build, install, test, and maintain software libraries and environments using Spack. Spack is a standard for HPC systems and avoids many of the challenges typically encountered with Home Directory installs. We highly encourage you to use Spack environments, as they are fully maintained and tested by the Research Computing team, letting you focus on your research instead of debugging build errors. One important caveat: building and installing Spack environments can take some time, especially if you need us to package and test libraries that aren’t already available in Spack, so we can’t guarantee your environment will be available on a specific timeline.
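
As an illustration, once the RC Team has installed a package with Spack, you can typically bring it into your shell with Spack’s own commands. The package name gcc here is just an example, and these commands only work on the cluster, where Spack is available:

```
$ spack find gcc     # list installed packages matching "gcc"
$ spack load gcc     # add the package to your current shell environment
$ spack unload gcc   # remove it again when you are done
```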

Researchers also have the ability to install their own software (e.g. conda, pip, containers). Please Note: The RC Team is not resourced to help you design, build, install, troubleshoot, or maintain software you install yourself.

5 - Need More Help?

If you need help using any Research Computing resources, don’t hesitate to contact us. Our infrastructure is changing all the time and our documentation may not always be up-to-date. We’re more than happy to work with you to accommodate your research needs.