National Computational Infrastructure

Detailed System Configuration

These are the detailed specifications of the NCI cloud environment, including both Tenjin, NCI's partner cloud, and the NeCTAR research cloud, which is accessible to researchers from all Australian universities.

Nodes (Number): Description

Nova Compute Nodes (200): 2 x Intel Xeon E5-2670 @ 2.6 GHz (20 MB cache, 8.0 GT/s QPI); 128 GB RAM; 2 x 400 GB Intel DC S3700 SSDs; 1 x Mellanox FDR single-port adapter

Nova Management Nodes (4): 2 x Intel Xeon E5-2670 @ 2.6 GHz (20 MB cache, 8.0 GT/s QPI); 128 GB RAM; 2 x 400 GB Intel DC S3700 SSDs; 1 x Mellanox FDR single-port adapter

Glance API Nodes (2): 2 x Intel Xeon E5-2670 @ 2.6 GHz (20 MB cache, 8.0 GT/s QPI); 128 GB RAM; 2 x 400 GB Intel DC S3700 SSDs; 1 x Mellanox FDR single-port adapter

Swift/Ceph Storage Nodes (10): 2 x Intel Xeon E5-2630L @ 2.0 GHz (15 MB cache, 7.2 GT/s QPI); 48 GB RAM; 6 x 200 GB Intel DC S3700 SSDs; 2 x 500 GB 7.2K RPM SATA disks; 12 x 4 TB 7.2K RPM near-line SAS 6 Gbps drives; 1 x Mellanox FDR single-port adapter

Swift/Ceph Monitor Nodes (3): 2 x Intel Xeon E5-2630L @ 2.0 GHz (15 MB cache, 7.2 GT/s QPI); 48 GB RAM; 8 x 300 GB 10K RPM SAS disks; 1 x Mellanox FDR single-port adapter

Fast Block Storage Gateway Nodes: 2 x Intel Xeon E5-2665 @ 2.4 GHz (20 MB cache, 8.0 GT/s QPI); 256 GB RAM; 4 x 4.3 TB 15K RPM SAS disks; 1 x PERC H710 integrated RAID controller; 1 x QLogic 2562 dual-port 8 Gb optical Fibre Channel HBA; 1 x C9 RAID 5; 1 x Mellanox VPI FDR card

GPU Nodes (4): 2 x Intel Xeon E5-2665 @ 2.4 GHz (20 MB cache, 8.0 GT/s QPI); 256 GB RAM; 2 x 100 GB SATA MLC SSDs (3 Gbps); 2 x NVIDIA Tesla K20 PCIe x16 GPGPU cards

Management Network (8): Dell PowerConnect 5548 48-port Gigabit Ethernet managed L2 switch with 10 GbE uplinks

Fast Interconnect (22): Mellanox SX1036 SwitchX-2 based 36-port QSFP 40 GbE Ethernet switch
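
For rough capacity estimates, the per-node figures above can be aggregated. The Python sketch below is a minimal illustration, not an official NCI tool; the per-socket core counts are assumptions taken from Intel's public CPU specifications (they are not stated on this page), and the Fast Block Storage Gateway nodes are omitted because their count is not given in the table.

# Assumed per-socket core counts from Intel's public specifications
# (not stated in the table above).
CORES = {"E5-2670": 8, "E5-2630L": 6, "E5-2665": 8}

# (node type, node count, CPU model, sockets per node, RAM per node in GB)
NODES = [
    ("Nova compute",       200, "E5-2670",  2, 128),
    ("Nova management",      4, "E5-2670",  2, 128),
    ("Glance API",           2, "E5-2670",  2, 128),
    ("Swift/Ceph storage",  10, "E5-2630L", 2,  48),
    ("Swift/Ceph monitor",   3, "E5-2630L", 2,  48),
    ("GPU",                  4, "E5-2665",  2, 256),
]  # gateway nodes omitted: their count is not listed

total_cores = sum(count * sockets * CORES[cpu]
                  for _, count, cpu, sockets, _ in NODES)
total_ram_tb = sum(count * ram_gb for _, count, _, _, ram_gb in NODES) / 1024
raw_nl_sas_tb = 10 * 12 * 4   # storage nodes: 12 x 4 TB NL-SAS drives each
gpu_cards = 4 * 2             # GPU nodes: 2 x Tesla K20 each

print(f"Total CPU cores:     {total_cores}")
print(f"Total RAM:           {total_ram_tb:.1f} TB")
print(f"Raw NL-SAS capacity: {raw_nl_sas_tb} TB")
print(f"Tesla K20 cards:     {gpu_cards}")

Under these assumptions the Nova compute pool alone contributes 3,200 cores (200 nodes x 2 sockets x 8 cores) and 25.6 TB of RAM, and the Swift/Ceph storage nodes provide 480 TB of raw near-line SAS capacity.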
