Compute resources

Summit

The Summit supercomputer is a heterogeneous Linux compute cluster with an aggregate performance of about 450 TFLOPS (trillion floating-point operations per second). Funding for Summit was provided by the National Science Foundation (MRI Award Nos. ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University.

Operating system: Red Hat Enterprise Linux 7
Compute nodes: 488 total (452 general compute, 11 GPU, 5 high-memory, and 20 Phi)
CPU cores: 12,632 total
System memory: 70.8 TiB total
Parallel storage: 1.2 PB high-performance GPFS scratch filesystem
Primary interconnect: Intel Omni-Path 100 Gb/s fat-tree topology with 2:1 oversubscription between groups of 32 compute nodes
Ethernet: 1 GbE to each compute node, with 10 GbE aggregate connectivity to the CU Science Network

More detailed technical specifications can be found on the Resources page.

Access to Summit requires a Research Computing account and a supporting PI or faculty member who agrees to abide by the relevant export control requirements.

Blanca

The Research Computing Condo Computing service offers researchers the opportunity to purchase and own compute nodes that will be operated as part of a cluster, named Blanca. The aggregate cluster is made available to all condo partners while maintaining priority for the owner of each node.

Benefits to partners include:

  • Data center rack space – including redundant power and cooling – is provided, as is scratch disk space.
  • Partners receive significantly higher scheduling priority on the nodes they own and can run jobs on any nodes that are not currently in use by other partners (a sample job script follows this list).
  • System configuration and administration, as well as technical support for users, is provided by RC staff.
  • A standard software stack appropriate for a range of research needs is provided. Partners are able to install additional software applications as needed.
  • Bulk discount pricing is available for all compute node purchases.
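
As a sketch of that workflow: assuming Blanca jobs are submitted through Slurm (the same QOS mechanism mentioned for Crestone below) and that each condo group has its own QOS, a partner might submit a job to their group's nodes as follows. The QOS name blanca-<group>, the resource counts, and the walltime are hypothetical placeholders, not confirmed values.

    #!/bin/bash
    #SBATCH --qos=blanca-<group>    # hypothetical group-specific QOS; substitute your group's name
    #SBATCH --nodes=1               # placeholder: one condo node
    #SBATCH --ntasks=28             # placeholder: all cores of a 2x 14-core Broadwell node
    #SBATCH --time=12:00:00         # placeholder walltime

    # Replace with your actual workload
    ./my_application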

Four types of compute nodes are available. The specifications below are updated periodically. If you need a newer configuration, such as the latest CPU or GPU model, please email rc-help@colorado.edu.

Node type: Compute node
Specifications:
  • 2x 14-core 2.4 GHz Intel “Broadwell” processors
  • 128 GB RAM at 2400 MHz
  • 1x 1 TB 7200 RPM hard drive
  • 10 Gb/s Ethernet
Cost: $6,875/node
Potential uses: general-purpose high-performance computation that doesn’t require low-latency parallel communication between nodes

Node type: GPU node
Specifications:
  • 2x 14-core 2.4 GHz Intel “Broadwell” processors
  • 128 GB RAM at 2400 MHz
  • 1x 1 TB 7200 RPM hard drive
  • 10 Gb/s Ethernet
  • 1x NVIDIA P100 GPU coprocessor (12 GB RAM)
Cost: $13,560/node
Potential uses: molecular dynamics, image processing, deep learning

Node type: Dual GPU node
Specifications:
  • 2x 14-core 2.4 GHz Intel “Broadwell” processors
  • 128 GB RAM at 2400 MHz
  • 1x 1 TB 7200 RPM hard drive
  • 10 Gb/s Ethernet
  • 2x NVIDIA P100 GPU coprocessors (16 GB RAM)
Cost: $13,560/node
Potential uses: molecular dynamics, image processing, deep learning

Node type: Himem (high-memory) node
Specifications:
  • 4x 12-core 3.0 GHz Intel “Ivy Bridge” processors
  • 1024 GB RAM at 1600 MHz
  • 10x 1 TB 7200 RPM hard drives in a high-performance RAID configuration
  • 10 Gb/s Ethernet
Cost: $34,511/node
Potential uses: genetics/genomics applications

Crestone

Crestone is deprecated and may be decommissioned soon.

Crestone is a Dell PowerEdge M1000e Blade system and is provided for jobs requiring more memory or longer runtimes than are available on Janus. This system is intended for single-node jobs and does not have access to a high-speed, low-latency network interconnect.

Operating system: Red Hat Enterprise Linux 6
Compute nodes: 16
Compute cores: 192 total; two 6-core 2.8 GHz Intel Xeon X5660 (“Westmere”) processors per node, with Hyper-Threading enabled
System memory: 1,536 GiB total (96 GiB per node)
Local storage: 2 TB hard disk per node

Crestone nodes are accessed via the crestone QOS.
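
Assuming the scheduler is Slurm (which exposes the QOS mechanism referenced above), a minimal single-node batch script might look like the sketch below; the job name, task count, and walltime are illustrative placeholders.

    #!/bin/bash
    #SBATCH --qos=crestone          # route the job to Crestone nodes
    #SBATCH --job-name=crestone-job # illustrative job name
    #SBATCH --nodes=1               # Crestone is intended for single-node jobs
    #SBATCH --ntasks=12             # placeholder: matches the two 6-core processors per node
    #SBATCH --time=24:00:00         # placeholder walltime

    # Replace with your actual workload
    ./my_single_node_application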