10 November 2016
The National Computational Infrastructure, NCI, has announced that XENON Systems has been awarded the contract to supply a Lenovo NeXtScale system as an extension of Raijin, NCI's current peak facility, which was commissioned in 2013.
In a competitive procurement, NCI selected XENON for multiple reasons: as well as offering the best value for money, XENON gave NCI confidence in Lenovo's space-efficient platform and in its capacity to commission the system in readiness for 2017.
As researchers' requirements for access to high-performance computing (HPC) capability continue to grow unabated, the new Lenovo system will help NCI to meet this demand by providing a 40% increase in capacity.
These upgrades to NCI's HPC capability, together with the planned 2017 replacement of the oldest of the global parallel filesystems, have been made possible with the generous support of the Australian Government through the 2015-16 Agility Fund of the National Collaborative Research Infrastructure Strategy (NCRIS), together with matching co-investment from the NCI Collaboration of research organisations.
The new facility will enter production in January 2017, and will extend vital advanced computational services for national research and innovation through NCI.
Both existing and new researchers and research organisations will benefit, with the additional resources providing better access and shorter job waiting times.
Just as importantly, the injection of contemporary technology, in the form of Intel Broadwell-series processors and the latest-generation interconnect, will ensure Australian researchers enjoy continued access to an international-standard research HPC environment. The Lenovo system will complement the wide range of computational resources available through NCI and will provide a major boost in capability for users with high memory requirements.
Key specifications for the Lenovo NeXtScale system, which will be integrated with Raijin's fast filesystems, are:
- 22,792 Intel Xeon E5-2690 v4 (Broadwell) cores
- 144 terabytes of memory, including 10 one-terabyte nodes
- Mellanox EDR 100 Gbit/s InfiniBand interconnect, configured as a "fat tree"
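The published figures allow a rough sizing estimate. The sketch below is a back-of-envelope calculation only: it assumes dual-socket nodes built on the 14-core Xeon E5-2690 v4 (28 cores per node), which the announcement does not state, and derives the implied node count and average memory per node from the totals above.

```python
# Back-of-envelope sizing from the published specifications.
# Assumption (not stated in the announcement): dual-socket nodes with
# two 14-core Xeon E5-2690 v4 processors, i.e. 28 cores per node.
TOTAL_CORES = 22_792
CORES_PER_NODE = 2 * 14

nodes = TOTAL_CORES // CORES_PER_NODE
print(f"Implied node count: {nodes}")  # 814 under these assumptions

TOTAL_MEMORY_TB = 144
LARGE_MEM_NODES = 10  # the ten one-terabyte nodes listed above

# Spread the remaining memory over the remaining nodes (average only;
# the actual per-node configurations are not given in the announcement).
standard_nodes = nodes - LARGE_MEM_NODES
avg_gb = (TOTAL_MEMORY_TB - LARGE_MEM_NODES) * 1024 / standard_nodes
print(f"Average memory per standard node: {avg_gb:.0f} GB")
```

Notably, 22,792 divides evenly by 28, which is consistent with the assumed dual-socket E5-2690 v4 node configuration.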