Summit

Summit is a heterogeneous supercomputing cluster based primarily on the Intel Xeon "Haswell" CPU, with additional NVIDIA Tesla K80 GPU and high-memory nodes and, in the future, an Intel Xeon Phi "Knights Landing" MIC component. It replaces Janus as Research Computing's flagship computational resource. All nodes sit on a first-generation Intel Omni-Path Architecture interconnect, which also provides access to an IBM GPFS parallel scratch file system.

Status

  • Delivery and installation ✓
  • Synthetic acceptance testing ✓
  • Application acceptance testing ✓
  • Early-user access ✓
  • Full production (Feb. 2, 2017) ✓

The Summit update page will have the latest status information.

Citation/Acknowledgement Language

Please use the following language to acknowledge Summit in any published or presented work whose results were obtained on it.

This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.

To reference Summit in a citation, please use:

Jonathon Anderson, Patrick J. Burns, Daniel Milroy, Peter Ruprecht, Thomas Hauser, and Howard Jay Siegel. 2017. Deploying RMACC Summit: An HPC Resource for the Rocky Mountain Region. In Proceedings of PEARC17, New Orleans, LA, USA, July 09-13, 2017, 7 pages. DOI: 10.1145/3093338.3093379
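For LaTeX users, the reference above could be captured in a BibTeX entry along these lines. The entry key and field layout are our own choices; the bibliographic details themselves come from the citation above:

```bibtex
@inproceedings{anderson2017summit,
  author    = {Jonathon Anderson and Patrick J. Burns and Daniel Milroy
               and Peter Ruprecht and Thomas Hauser and Howard Jay Siegel},
  title     = {Deploying {RMACC} Summit: An {HPC} Resource for the
               Rocky Mountain Region},
  booktitle = {Proceedings of PEARC17},
  address   = {New Orleans, LA, USA},
  month     = jul,
  year      = {2017},
  doi       = {10.1145/3093338.3093379},
}
```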

Specifications

Operating System
Red Hat Enterprise Linux 7

General compute nodes

Nodes
380 (in initial deployment)
CPU
Intel Xeon E5-2680 v3 @ 2.50GHz (2 CPUs/node, 24 cores/node)
Memory
2133 MT/s, Dual Rank, x4 Data Width RDIMM (8x16GB, 128GB/node)
Local storage
200 GB SSD (1/node)
Interconnect
Omni-Path HFI (1/node)

GPU compute nodes

Nodes
10
CPU
Intel Xeon E5-2680 v3 @ 2.50GHz (2 CPUs/node, 24 cores/node)
Memory
2133 MT/s, Dual Rank, x4 Data Width RDIMM (8x16GB, 128GB/node)
Local storage
200 GB SSD (1/node)
GPU accelerator
NVIDIA Tesla K80 (2/node)
Interconnect
Omni-Path HFI (1/node)

High-memory compute nodes

Nodes
5
CPU
Intel Xeon CPU E7-4830 v3 @ 2.10GHz (4 CPUs/node, 48 cores/node)
Memory
2133 MT/s, Dual Rank, x4 Data Width RDIMM (64x32GB, 2TB/node)
Local storage
1TB 7.2K RPM, 6Gbps Near Line SAS 2.5" Hard Drive (12/node)
Interconnect
Omni-Path HFI (1/node)

Phi nodes

Nodes
20
CPU
Intel Xeon Phi "Knights Landing" processor (1/node)
Memory
128 GiB/node
Local storage
200 GB SSD
Interconnect
Omni-Path HFI (1/node)

Phi nodes will be integrated as part of Summit "phase 2."

High-performance interconnect

2:1 oversubscribed Intel Omni-Path interconnect serving compute nodes and GPFS servers.

Throughput
100 Gb/s/port
Core switches
8
Core links
16/edge switch
Edge ports
32/edge switch
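The 2:1 figure follows directly from the per-edge-switch port counts in the table. A quick sanity check of that arithmetic (variable names are ours; the port and bandwidth numbers are from the specs above):

```python
# Per-edge-switch figures from the interconnect table above.
EDGE_PORTS = 32   # node-facing ports on each edge switch
CORE_LINKS = 16   # uplinks from each edge switch into the core
PORT_GBPS = 100   # Omni-Path line rate per port, in Gb/s

# Worst case: all 32 nodes on one edge switch talk across the fabric
# at once, sharing the 16 uplinks.
oversubscription = EDGE_PORTS / CORE_LINKS
print(f"{oversubscription:.0f}:1")  # -> 2:1

# Effective per-node bandwidth under that full-contention worst case.
print(PORT_GBPS / oversubscription, "Gb/s per node")  # -> 50.0 Gb/s per node
```

In practice most traffic patterns do not saturate every uplink simultaneously, so nodes usually see closer to the full 100 Gb/s.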

Scratch storage

DDN SFA14k GRIDScaler storage appliance

File system
IBM GPFS 4.2
Metadata storage
2.875 TB
Scratch storage
1.2 PB