High Performance Computing Systems

Narwhal
Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,304 standard compute nodes, 26 large-memory nodes, 16 visualization-accelerated nodes, 32 single-GPU machine-learning accelerated (MLA) nodes, and 32 dual-GPU MLA nodes, for a total of 2,410 compute nodes and 308,480 compute cores. It has 640 TB of aggregate memory and is rated at 13.5 peak PFLOPS.

Node Configuration
|                      | Login | Standard | Large-Memory | Visualization | MLA 1-GPU | MLA 2-GPU |
|----------------------|-------|----------|--------------|---------------|-----------|-----------|
| Total Nodes          | 11 | 2,304 | 26 | 16 | 32 | 32 |
| Processor            | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome | AMD 7H12 Rome |
| Processor Speed      | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Sockets / Node       | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node         | 128 | 128 | 128 | 128 | 128 | 128 |
| Total CPU Cores      | 1,408 | 294,912 | 3,328 | 2,048 | 4,096 | 4,096 |
| Usable Memory / Node | 226 GB | 238 GB | 995 GB | 234 GB | 239 GB | 239 GB |
| Accelerators / Node  | None | None | None | 1 | 1 | 2 |
| Accelerator          | n/a | n/a | n/a | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 | NVIDIA V100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 32 GB | 32 GB | 32 GB |
| Storage on Node      | 880 GB SSD | None | 1.8 TB SSD | None | 880 GB SSD | 880 GB SSD |
| Interconnect         | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot | HPE Slingshot |
| Operating System     | SLES | SLES | SLES | SLES | SLES | SLES |
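
The headline node and core counts quoted above follow directly from the per-node figures in the table. A minimal Python sketch of that arithmetic, using only the node counts and cores-per-node values listed for Narwhal:

```python
# Consistency check of Narwhal's quoted totals; node counts and cores per node
# are taken from the table above, nothing else is assumed.
narwhal_compute_nodes = {
    "standard":      (2304, 128),
    "large-memory":  (26, 128),
    "visualization": (16, 128),
    "mla-1-gpu":     (32, 128),
    "mla-2-gpu":     (32, 128),
}

total_nodes = sum(count for count, _ in narwhal_compute_nodes.values())
total_cores = sum(count * cores for count, cores in narwhal_compute_nodes.values())

print(total_nodes)  # 2410 compute nodes
print(total_cores)  # 308480 compute cores
```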

File Systems on Narwhal
| Path | Formatted Capacity | File System Type | Storage Type | User Quota | Minimum File Retention |
|------|--------------------|------------------|--------------|------------|------------------------|
| /p/home ($HOME) | 672 TB | Lustre | HDD | 250 GB | None |
| /p/work1 ($WORKDIR) | 14 PB | Lustre | HDD | 100 TB | 21 Days |
| /p/work2 | 1.1 PB | Lustre | NVMe SSD | 25 TB | 21 Days |
| /p/cwfs ($CENTER) | 3.3 PB | GPFS | HDD | 100 TB | 180 Days |
| /p/app ($PROJECTS_HOME) | 336 TB | Lustre | HDD | None | None |
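
The environment variables shown in the table ($HOME, $WORKDIR, $CENTER, $PROJECTS_HOME) are the usual way to reach these paths from a shell or batch job. Below is a minimal, hypothetical Python sketch that copies job output from the purged scratch area ($WORKDIR, 21-day minimum retention) to the center-wide file system ($CENTER, 180-day minimum retention); the directory and file names are placeholders, not part of the system configuration.

```python
import os
import shutil
from pathlib import Path

# $WORKDIR -> /p/work1 (Lustre scratch, 21-day minimum file retention)
# $CENTER  -> /p/cwfs  (GPFS center-wide file system, 180-day minimum retention)
workdir = Path(os.environ["WORKDIR"])
center = Path(os.environ["CENTER"])

# Hypothetical job directories; substitute your own project layout.
results = workdir / "my_job" / "results"
archive = center / "my_project" / "results"

archive.mkdir(parents=True, exist_ok=True)
for path in results.glob("*.dat"):
    # Copy before the scratch purge policy removes the files.
    shutil.copy2(path, archive / path.name)
```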

Nautilus
Nautilus is a Penguin Computing TrueHPC system located at the Navy DSRC. It has 1,304 standard compute nodes, 16 large-memory nodes, 16 visualization-accelerated nodes, 32 AI/ML nodes, and 32 High Core Performance nodes, for a total of 1,400 compute nodes and 176,128 compute cores. It has 364 TB of aggregate memory and is rated at 8.2 peak PFLOPS.

Node Configuration
|                      | Login | Standard | Large-Memory | Visualization | AI/ML | High Core Performance |
|----------------------|-------|----------|--------------|---------------|-------|-----------------------|
| Total Nodes          | 14 | 1,304 | 16 | 16 | 32 | 32 |
| Processor            | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 7713 Milan | AMD 73F3 Milan |
| Processor Speed      | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 2.0 GHz | 3.4 GHz |
| Sockets / Node       | 2 | 2 | 2 | 2 | 2 | 2 |
| Cores / Node         | 128 | 128 | 128 | 128 | 128 | 32 |
| Total CPU Cores      | 1,792 | 166,912 | 2,048 | 2,048 (16 GPUs) | 4,096 (128 GPUs) | 1,024 |
| Usable Memory / Node | 433 GB | 237 GB | 998 GB | 491 GB | 491 GB | 491 GB |
| Accelerators / Node  | None | None | None | 1 | 4 | None |
| Accelerator          | n/a | n/a | n/a | NVIDIA A40 PCIe 4 | NVIDIA A100 SXM 4 | n/a |
| Memory / Accelerator | n/a | n/a | n/a | 48 GB | 40 GB | n/a |
| Storage on Node      | 1.92 TB NVMe SSD | None | 1.92 TB NVMe SSD | None | 1.92 TB NVMe SSD | None |
| Interconnect         | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand | HDR InfiniBand |
| Operating System     | RHEL | RHEL | RHEL | RHEL | RHEL | RHEL |
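
For a sense of aggregate accelerator capacity, the AI/ML partition's figures combine as follows; this sketch uses only numbers quoted in the table above.

```python
# Aggregate GPU resources across the Nautilus AI/ML nodes
# (32 nodes, 4 NVIDIA A100s per node, 40 GB of memory per GPU, per the table).
aiml_nodes = 32
gpus_per_node = 4
memory_per_gpu_gb = 40

total_gpus = aiml_nodes * gpus_per_node               # 128 GPUs
total_gpu_memory_gb = total_gpus * memory_per_gpu_gb  # 5120 GB of GPU memory

print(total_gpus, total_gpu_memory_gb)
```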

File Systems on Nautilus
| Path | Formatted Capacity | File System Type | Storage Type | User Quota | Minimum File Retention |
|------|--------------------|------------------|--------------|------------|------------------------|
| /p/home ($HOME) | 1.4 PB | Lustre | HDD | 250 GB | None |
| /p/work1 ($WORKDIR) | 13 PB | Lustre | Hybrid SSD/HDD | TBD | 21 Days |
| /p/cwfs ($CENTER) | 3.3 PB | GPFS | HDD | 100 TB | 180 Days |
| /p/app ($PROJECTS_HOME) | 357 TB | Lustre | NVMe | None | None |

Gaffney
Gaffney is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes, for a total of 752 compute nodes and 36,096 compute cores. It has 154 TB of aggregate memory and is rated at 3.05 peak PFLOPS.

Node Configuration
|                      | Login | Standard | Large-Memory | GPU |
|----------------------|-------|----------|--------------|-----|
| Total Nodes          | 8 | 704 | 16 | 32 |
| Processor            | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed      | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node       | 2 | 2 | 2 | 2 |
| Cores / Node         | 48 | 48 | 48 | 48 |
| Total CPU Cores      | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node  | None | None | None | 1 |
| Accelerator          | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 16 GB |
| Storage on Node      | None | None | 3.2 TB SSD | None |
| Interconnect         | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System     | RHEL | RHEL | RHEL | RHEL |

File Systems on Gaffney
| Path | Formatted Capacity | File System Type | Storage Type | User Quota | Minimum File Retention |
|------|--------------------|------------------|--------------|------------|------------------------|
| /p/home ($HOME) | 346 TB | Lustre | HDD | 200 GB | None |
| /p/work1 ($WORKDIR) | 5.5 PB | Lustre | HDD | 20 TB | 21 Days |
| /p/work2 | 111 TB | Lustre | SSD | 10 TB | 21 Days |
| /p/work3 | 350 TB | Lustre | NVMe SSD | 5 TB | 21 Days |
| /p/cwfs ($CENTER) | 3.3 PB | GPFS | HDD | 100 TB | 180 Days |
| /p/app ($PROJECTS_HOME) | 231 TB | Lustre | HDD | None | None |
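
Gaffney's work areas are Lustre file systems with distinct per-user quotas (20 TB, 10 TB, and 5 TB above), so usage against those limits can be checked with the standard Lustre `lfs quota` command. A minimal sketch, assuming the Lustre client tools are available on the login nodes:

```python
import getpass
import os
import subprocess

# Query the current user's usage and quota on $WORKDIR (/p/work1).
# `lfs quota -u <user> <mount point>` is the standard Lustre quota query;
# the same call works for /p/work2 and /p/work3.
user = getpass.getuser()
workdir = os.environ["WORKDIR"]

subprocess.run(["lfs", "quota", "-u", user, workdir], check=True)
```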

Koehr
Koehr is an HPE SGI 8600 system located at the Navy DSRC. It has 704 standard compute nodes, 16 large-memory nodes, and 32 GPU-accelerated nodes, for a total of 752 compute nodes and 36,096 compute cores. It has 154 TB of aggregate memory and is rated at 3.05 peak PFLOPS.

Node Configuration
|                      | Login | Standard | Large-Memory | GPU |
|----------------------|-------|----------|--------------|-----|
| Total Nodes          | 8 | 704 | 16 | 32 |
| Processor            | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake | Intel 8168 Skylake |
| Processor Speed      | 2.7 GHz | 2.7 GHz | 2.7 GHz | 2.7 GHz |
| Sockets / Node       | 2 | 2 | 2 | 2 |
| Cores / Node         | 48 | 48 | 48 | 48 |
| Total CPU Cores      | 384 | 33,792 | 768 | 1,536 |
| Usable Memory / Node | 320 GB | 170 GB | 742 GB | 367 GB |
| Accelerators / Node  | None | None | None | 1 |
| Accelerator          | n/a | n/a | n/a | NVIDIA P100 PCIe 3 |
| Memory / Accelerator | n/a | n/a | n/a | 16 GB |
| Storage on Node      | None | None | 3.2 TB SSD | None |
| Interconnect         | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path | Intel Omni-Path |
| Operating System     | RHEL | RHEL | RHEL | RHEL |

File Systems on Koehr
| Path | Formatted Capacity | File System Type | Storage Type | User Quota | Minimum File Retention |
|------|--------------------|------------------|--------------|------------|------------------------|
| /p/home ($HOME) | 346 TB | Lustre | HDD | 200 GB | None |
| /p/work1 ($WORKDIR) | 5.5 PB | Lustre | HDD | 20 TB | 21 Days |
| /p/work2 | 111 TB | Lustre | SSD | 10 TB | 21 Days |
| /p/work3 | 350 TB | Lustre | NVMe SSD | 5 TB | 21 Days |
| /p/cwfs ($CENTER) | 3.3 PB | GPFS | HDD | 100 TB | 180 Days |
| /p/app ($PROJECTS_HOME) | 231 TB | Lustre | HDD | None | None |