National Computational Infrastructure

Specialised queues

In addition to the standard Raijin compute nodes, you can also access the following specialised queues.

hugemem:

  • 2 x 14 cores (Intel Xeon Broadwell, 2.6 GHz) in 10 compute nodes
  • 1 TByte of RAM
  • 400 GBytes of local disk (SSD)
  • charge rate: 1.25 SU per CPU hour
  • the minimum ncpus request is 7 and requests must be a multiple of 7
  • #PBS -q hugemem (see the example script below)
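
For illustration, a minimal hugemem submission script might look like the sketch below. The project code, memory, walltime and executable name are placeholders, not NCI defaults:

    #!/bin/bash
    #PBS -q hugemem
    #PBS -P a99                  # placeholder project code
    #PBS -l ncpus=7              # minimum request; must be a multiple of 7
    #PBS -l mem=250GB            # placeholder memory request
    #PBS -l walltime=02:00:00    # placeholder walltime
    #PBS -l wd                   # start the job in the submission directory

    ./my_large_memory_program    # placeholder executable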

gpu:

  • 2 x 12 cores (Intel Haswell E5-2670v3, 2.3 GHz) in 14 compute nodes
  • 2 x 14 cores (12 usable) (Intel Broadwell E5-2690v4, 2.6 GHz) in 16 compute nodes
  • 4 x Nvidia Tesla K80 accelerators (24 GBytes each; 8 GPUs in total) on each node
  • 256 GBytes of RAM on the host CPUs
  • 700 GBytes of SSD local disk
  • charge rate: 3 SU per CPU hour
  • #PBS -q gpu
  • #PBS -l ngpus=2: ngpus is the number of GPUs requested; the minimum request is 2 and must be a multiple of 2
  • #PBS -l ncpus=6: the minimum ncpus request is 6; requests must be a multiple of 6 and equal to 3 x ngpus (see the example script below)
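
For illustration, a minimal gpu queue submission script might look like the sketch below. The project code, memory, walltime, module name and executable name are placeholders, not NCI defaults:

    #!/bin/bash
    #PBS -q gpu
    #PBS -P a99                  # placeholder project code
    #PBS -l ngpus=2              # number of GPUs; minimum 2, multiple of 2
    #PBS -l ncpus=6              # 3 x ngpus; minimum 6, multiple of 6
    #PBS -l mem=32GB             # placeholder memory request
    #PBS -l walltime=01:00:00    # placeholder walltime
    #PBS -l wd                   # start the job in the submission directory

    module load cuda             # assumed module name; check the installed versions
    ./my_gpu_program             # placeholder executable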

gpupascal:

  • 2 x 12 cores (Intel Broadwell E5-2650v4, 2.2 GHz) in 2 compute nodes
  • 4 x Nvidia Tesla P100 (Pascal) accelerators on each node
  • 128 GBytes of RAM on the host CPUs
  • 400 GBytes of SSD local disk
  • charge rate: 4 SU per CPU hour
  • #PBS -q gpupascal
  • #PBS -l ngpus=1: the minimum ngpus request is 1
  • #PBS -l ncpus=6: the minimum ncpus request is 6; requests must be a multiple of 6 and 3 x ngpus (see the example script below)
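
For illustration, a minimal gpupascal submission script might look like the sketch below, using the minimum resource requests listed above. The project code, memory, walltime and executable name are placeholders, not NCI defaults:

    #!/bin/bash
    #PBS -q gpupascal
    #PBS -P a99                  # placeholder project code
    #PBS -l ngpus=1              # minimum request
    #PBS -l ncpus=6              # minimum 6; must be a multiple of 6
    #PBS -l mem=16GB             # placeholder memory request
    #PBS -l walltime=01:00:00    # placeholder walltime
    #PBS -l wd                   # start the job in the submission directory

    ./my_gpu_program             # placeholder executable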

More information on the GPU specifications, how to use GPUs on NCI, and GPU-enabled software is available in the NCI documentation.

knl:

  • 1 x 64 cores with 4-way Hyperthreading (Intel Xeon Phi 7230, 1.30 GHz) in 32 compute nodes
  • 192 GBytes of RAM
  • 16 GBytes of MCDRAM on-package high-bandwidth memory
  • 400 GBytes of SSD local disk
  • charge rate: 0.25 SU per CPU hour
  • #PBS -q knl
  • #PBS -l ncpus=64
  • #PBS -l other=hyperthread: use of hyperthreads is strongly recommended to take advantage of the Xeon Phi architecture (see the example script below)
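
For illustration, a minimal knl submission script might look like the sketch below. The project code, memory, walltime and executable name are placeholders, not NCI defaults:

    #!/bin/bash
    #PBS -q knl
    #PBS -P a99                  # placeholder project code
    #PBS -l ncpus=64             # one full Xeon Phi node
    #PBS -l other=hyperthread    # strongly recommended on Xeon Phi
    #PBS -l mem=180GB            # placeholder memory request
    #PBS -l walltime=04:00:00    # placeholder walltime
    #PBS -l wd                   # start the job in the submission directory

    ./my_knl_program             # placeholder executable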
