
8 Hardware

Apt can allocate experiments on any one of several federated clusters.
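
For illustration, here is a minimal geni-lib profile sketch showing how an experiment can be pinned to one of these clusters by binding a node to that cluster's aggregate. The aggregate URN below is our assumption of the main Apt cluster's identifier; the portal's cluster list is the authoritative source.

    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    node = request.RawPC("node1")
    # Bind this node to a specific federated cluster. The URN is assumed
    # here to be the main Apt cluster's; check the portal to confirm.
    node.component_manager_id = "urn:publicid:IDN+apt.emulab.net+authority+cm"

    pc.printRequestRSpec(request)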

8.1 Apt Cluster

This is the cluster that is currently used by default for all experiments on Apt.

The main Apt cluster is housed in the University of Utah’s Downtown Data Center in Salt Lake City, Utah. It contains two classes of nodes:

r320: 128 nodes (Sandy Bridge, 8 cores)
  CPU:   1 x Xeon E5-2450 processor (8 cores, 2.1 GHz)
  RAM:   16 GB memory (4 x 2 GB RDIMMs, 1.6 GHz)
  Disks: 4 x 500 GB 7.2K SATA drives (RAID5)
  NIC:   1 GbE dual-port embedded NIC (Broadcom)
  NIC:   1 x Mellanox MX354A dual-port FDR CX3 adapter w/ 1 x QSA adapter

c6220: 64 nodes (Ivy Bridge, 16 cores)
  CPU:   2 x Xeon E5-2650v2 processors (8 cores each, 2.6 GHz)
  RAM:   64 GB memory (8 x 8 GB DDR3 RDIMMs, 1.86 GHz)
  Disks: 2 x 1 TB SATA 3.5" 7.2K rpm hard drives
  NIC:   4 x 1 GbE embedded Ethernet ports (Broadcom)
  NIC:   1 x Intel X520 PCIe dual-port 10 Gb Ethernet NIC
  NIC:   1 x Mellanox FDR CX3 single-port mezzanine card
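
Either node type can be requested by name in a geni-lib profile by setting a node's hardware_type; a minimal sketch, assuming the standard Apt profile environment:

    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # Request one node of each Apt type by hardware type name; the mapper
    # rejects the request if no node of the given type is available.
    for name, htype in [("node-r320", "r320"), ("node-c6220", "c6220")]:
        node = request.RawPC(name)
        node.hardware_type = htype

    pc.printRequestRSpec(request)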

All nodes are connected to three networks, with one interface on each.
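
Links and LANs requested in a profile are mapped onto the experiment fabrics rather than the control network; a minimal geni-lib sketch of a two-node LAN, under the same assumptions as above:

    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # A two-node LAN; the mapper places it on an experiment fabric,
    # keeping it separate from the control network used for ssh.
    lan = request.LAN("lan0")
    for name in ("node1", "node2"):
        node = request.RawPC(name)
        lan.addInterface(node.addInterface("if0"))

    pc.printRequestRSpec(request)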

8.2 IG-DDC Cluster

This small cluster is an InstaGENI Rack housed in the University of Utah’s Downtown Data Center. It has nodes of only a single type:

dl360: 33 nodes (Sandy Bridge, 16 cores)
  CPU:  2 x Xeon E5-2450 processors (8 cores each, 2.1 GHz)
  RAM:  48 GB memory (6 x 8 GB RDIMMs, 1.6 GHz)
  Disk: 1 x 1 TB 7.2K SATA drive
  NIC:  1 GbE 4-port embedded NIC

It has two network fabrics.

8.3 CloudLab Utah

This cluster is part of CloudLab, but is also available to Apt users.

The CloudLab cluster at the University of Utah is being built in partnership with HP. The first phase of this cluster consists of 315 64-bit ARM servers with 8 cores each, for a total of 2,520 cores. The servers are built on HP’s Moonshot platform using X-Gene system-on-chip designs from Applied Micro. The cluster is hosted in the University of Utah’s Downtown Data Center in Salt Lake City.

More technical details can be found at https://www.aptlab.net/hardware.php#utah

m400: 315 nodes (64-bit ARM)
  CPU:  Eight 64-bit ARMv8 (Atlas/A57) cores at 2.4 GHz (APM X-Gene)
  RAM:  64 GB ECC memory (8 x 8 GB DDR3-1600 SO-DIMMs)
  Disk: 120 GB of flash (SATA3 / M.2, Micron M500)
  NIC:  Dual-port Mellanox ConnectX-3 10 Gb NIC (PCIe v3.0, 8 lanes)
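
Combining the two mechanisms sketched earlier, a profile could request an m400 on this cluster by setting both the hardware type and the aggregate URN; both values here are assumptions to verify against the portal:

    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    # One ARM node, pinned to the CloudLab Utah aggregate (URN assumed).
    node = request.RawPC("arm-node")
    node.hardware_type = "m400"
    node.component_manager_id = "urn:publicid:IDN+utah.cloudlab.us+authority+cm"

    pc.printRequestRSpec(request)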

There are 45 nodes in a chassis, and this cluster consists of seven chassis. Each chassis has two 45XGc switches; each node is connected to both switches, and each chassis switch has four 40Gbps uplinks, for a total of 320Gbps of uplink capacity from each chassis. One switch is used for control traffic, connecting to the Internet, etc. The other is used to build experiment topologies, and should be used for most experimental purposes.
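
The bandwidth figures can be checked with a little arithmetic; the oversubscription ratio at the end is our own derivation from the numbers above (one 10 Gb node port per switch, four 40 Gb uplinks per switch), not a published figure:

    # Per-chassis uplink arithmetic from the figures above.
    nodes_per_chassis = 45
    switches_per_chassis = 2
    uplinks_per_switch = 4
    uplink_gbps = 40
    node_port_gbps = 10.0  # one 10 Gb port per node on each switch

    per_switch_uplink = uplinks_per_switch * uplink_gbps           # 160 Gbps
    per_chassis_uplink = switches_per_chassis * per_switch_uplink  # 320 Gbps

    # Worst case: all 45 nodes drive one switch's uplinks at line rate.
    oversubscription = nodes_per_chassis * node_port_gbps / per_switch_uplink

    print(per_chassis_uplink)          # 320
    print(round(oversubscription, 2))  # 2.81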

All chassis are interconnected through a large HP FlexFabric 12910 switch, which has full bisection bandwidth internally.

We have plans to enable some users to allocate entire chassis; when allocated in this mode, it will be possible to have complete administrator control over the switches in addition to the nodes.