Getting started with the shared computing environment

Research Computing maintains a shared computing infrastructure for the RC user community, including the Janus HPC cluster, the Crestone bladecenter, GP-GPU nodes, and large-memory nodes. Using these resources involves obtaining access to a compute allocation, logging in over SSH, and submitting jobs to the queueing system.

Request access to a compute allocation

A PI with an active allocation can contact us to request that you be added to it.

Request access to a general account if you are just getting started with Research Computing and need some time to explore the capabilities of the system. Contact us to request that you be added to one.

If you're already comfortable with high-performance computing and know that you will need more compute time than a startup allocation provides, please contact us with allocation request or management questions during this transition to Summit.

In each case you will receive an allocation ID that is associated with a consumable allocation of compute time, measured in service units (SUs) or core-hours. Use this allocation ID when submitting compute jobs to the queueing system.
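The allocation ID is passed to Slurm with the --account option, either on the sbatch command line or as a directive inside the job script. A minimal sketch of the latter; "my-allocation-id" is a placeholder for the allocation ID you receive:

```shell
# Write a minimal job script that charges its compute time to a
# specific allocation. "my-allocation-id" is a placeholder value.
cat > alloc-example.sh << 'EOF'
#!/bin/bash
#SBATCH --account my-allocation-id
#SBATCH --time 05:00
#SBATCH --nodes 1

echo "This job's core-hours are charged to the allocation named above."
EOF
```

You would then submit it as usual with sbatch.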

Log in

Access to Research Computing resources is primarily provided using the secure shell (SSH) protocol. The ssh command can be run from the Linux or OS X command line. For Windows clients we recommend the PuTTY application, though alternatives exist.

ssh -l $rc_username

When you're prompted for a password, do not enter a CU IdentiKey password. Most users should have already obtained an OTP authenticator device, in which case your RC "password" is the combination of your PIN and the one-time password generated by your OTP token.


If you are an NCAR user, authenticate using your NCAR Yubikey or CryptoCard.
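If you connect frequently, an entry in your ~/.ssh/config file saves retyping the login options. A sketch, in which both the host name and the username are placeholders to be replaced with your own values:

```
# ~/.ssh/config -- both values below are placeholders
Host rc
    HostName login.rc.example.edu
    User rc_username
```

Afterwards, `ssh rc` connects with those settings; you will still be prompted for your PIN and one-time password.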

Submit jobs to the queue

Research Computing uses a queueing system called Slurm to manage compute resources and to schedule jobs that use them. In general, you'll submit compute jobs (in the form of job scripts) to the system using the sbatch command.

$ echo '#!/bin/bash
#SBATCH --job-name test-job
#SBATCH --time 05:00
#SBATCH --nodes 1
#SBATCH --output test-job.out

echo "The job has begun."
echo "Wait one minute..."
sleep 60
echo "Wait a second minute..."
sleep 60
echo "Wait a third minute..."
sleep 60
echo "Enough waiting: job completed."' > test-job.sh

$ module load slurm
$ sbatch --qos janus-debug test-job.sh
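On success, sbatch prints a line of the form "Submitted batch job <jobid>". If you want the job ID for later scripting (for example, to pass to squeue or scancel), it can be extracted as in this sketch, which parses a sample output line rather than submitting a real job (sbatch also accepts a --parsable option that prints only the ID):

```shell
# Example sbatch output line (a sample string, not a real submission).
submit_msg="Submitted batch job 123456"

# The job ID is the fourth whitespace-separated field.
jobid=$(echo "$submit_msg" | awk '{print $4}')
echo "$jobid"   # -> 123456
```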

You can monitor the progress of your job in the queue using the squeue command.

$ squeue --user $USER
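The ST column in squeue output shows abbreviated job-state codes. A small helper sketch that expands the codes you are most likely to see; the names follow Slurm's standard job-state documentation:

```shell
# Expand the abbreviated Slurm state codes shown in squeue's ST column.
state_name() {
  case "$1" in
    PD) echo "Pending" ;;
    R)  echo "Running" ;;
    CG) echo "Completing" ;;
    CD) echo "Completed" ;;
    *)  echo "Unknown" ;;
  esac
}

state_name PD   # -> Pending
state_name CG   # -> Completing
```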

Once your job has completed (it passes briefly through the CG, "completing," state and then leaves the queue) you can find its output in the test-job.out file, as specified in the script.

$ cat test-job.out
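Based on the echo statements in the script above, test-job.out should contain:

```
The job has begun.
Wait one minute...
Wait a second minute...
Wait a third minute...
Enough waiting: job completed.
```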

Next steps

Now that you can log into the Research Computing environment and submit jobs to the queueing system, you may want to read a bit more from the User Guide.