Idun

The Idun cluster is a joint project between NTNU's faculties and the IT division that aims to provide a high-performance, professionally administered compute platform for NTNU. It combines the compute resources of individual shareholders into a cluster for rapid testing and prototyping of HPC software. The IT division provides the backbone of the cluster, such as switches for the high-speed interconnect, storage, and provisioning servers, while the individual faculties and departments provide the compute resources. Any faculty or department can become a shareholder by financing compute capacity, gaining its own share of compute time as well as the compute time of otherwise idling resources. Accounting guarantees each partner's share of compute time and ensures fairness between users on the system. More information:

Idun support: help@hpc.ntnu.no
Idun status page: http://idun.hpc.ntnu.no (only from NTNU network/VPN)
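
Jobs and accounting on Idun are handled by a batch scheduler; the partition names further down suggest Slurm. Assuming Slurm, a shareholder's fair-share standing and recent usage can be checked from a login node, for example:

    # Show the fair-share factor and accumulated usage for your own associations
    # (sshare defaults to the calling user's associations).
    sshare
    # Summarize your recent jobs (the start date and field list are just examples).
    sacct -u $USER --starttime=2024-01-01 --format=JobID,Partition,AllocCPUS,Elapsed,State

The fair-share factor reported by sshare is one of the inputs the scheduler can use when prioritizing jobs, so that each partner receives its guaranteed share over time.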

Hardware (updated 2024-04-24)

New hardware is added regularly, so the numbers below may be out of date; the scheduler itself gives live counts (see the example after the list).

  • Total compute nodes: 188
  • Total CPU cores: 8580
  • Storage size: 550 TB
  • GPUs: 234
      • NVIDIA P100: 28
      • NVIDIA V100: 40
      • NVIDIA A100: 106
      • NVIDIA H100: 56
  • Compute nodes in CPUQ partition: 108
  • CPU cores in CPUQ partition: 6268
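
Because the inventory changes frequently, live node and core counts are best read from the scheduler. Assuming Slurm, and using the CPUQ partition name from the list above:

    # One summary line per partition: node counts, CPU totals and their states.
    sinfo -s
    # Per-node CPU count, memory (in MB) and generic resources (GPUs) for the CPUQ partition.
    sinfo -p CPUQ -N -o "%N %c %m %G"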

Compute nodes

Node | Amount | Type | #CPUs | Processor | #Cores | RAM [GB] | #GPUs | GPU type
idun-01-[01-06] | 6 | Dell XE9680 | 2 | Intel Xeon Platinum 8470 | 52 | 2014 | 8 | NVIDIA H100 80GB HBM3
idun-03-[01-48] | 48 | Dell C6520 | 2 | Intel Xeon Gold 6348 | 56 | 256 | 0 | -
idun-04-[01-36] | 36 | Dell C6520 | 2 | Intel Xeon Gold 6348 | 56 | 256 | 0 | -
idun-05-[01-12] | 12 | Dell C6420 | 2 | Intel Xeon Gold 6132 | 32 | 768 | 0 | -
idun-05-[13-20] | 8 | Dell C6420 | 2 | Intel Xeon Gold 6242 | 32 | 192 | 0 | -
idun-05-[21-22] | 2 | Dell C6420 | 2 | Intel Xeon Gold 6242 | 32 | 768 | 0 | -
idun-05-[23-32] | 10 | Dell C6420 | 2 | Intel Xeon Gold 6252 | 48 | 192 | 0 | -
idun-06-[01-06] | 6 | Dell XE8545 | 2 | AMD EPYC 75F3 | 64 | 1007 | 4 | NVIDIA A100 80GB
idun-06-07 | 1 | Dell XE8545 | 2 | AMD EPYC 7543 | 64 | 2015 | 4 | NVIDIA A100 80GB
idun-06-[08-12] | 5 | Dell R740 | 2 | Intel Xeon Gold 6132 | 28 | 754 | 2 | NVIDIA V100 16GB
idun-07-[01-03] | 3 | Dell DSS8440 | 2 | Intel Xeon Gold 6148 | 40 | 754 | 8 | NVIDIA V100 32GB
idun-07-[04-07] | 4 | Dell DSS8440 | 2 | Intel Xeon Gold 6248R | 48 | 1509 | 10 | NVIDIA A100 40GB
idun-07-[08-10] | 3 | Dell DSS8440 | 2 | Intel Xeon Gold 6248R | 48 | 1509 | 10 | NVIDIA A100 80GB
idun-08-01 | 1 | Dell XE9680 | 2 | Intel Xeon Platinum 8470 | 52 | 2014 | 8 | NVIDIA H100 80GB HBM3
idun-09-[01-11,20] | 12 | Dell R730 | 2 | Intel Xeon E5-2650 v4 | 24 | 128 | 2 | NVIDIA P100 16GB
idun-09-[12-14] | 3 | Dell R730 | 2 | Intel Xeon E5-2650 v4 | 24 | 128 | 2 | NVIDIA V100 16GB
idun-09-[15-18] | 4 | Dell R730 | 2 | Intel Xeon E5-2695 v4 | 36 | 128 | 2 | NVIDIA A100 40GB
idun-09-19 | 1 | Dell R730 | 2 | Intel Xeon E5-2695 v4 | 36 | 128 | 2 | NVIDIA P100 16GB
idun-10-[01-19] | 19 | Dell R630 | 2 | Intel Xeon E5-2630 v4 | 20 | 128 | 0 | -
idun-10-[21-22] | 2 | Dell R740 | 2 | Intel Xeon Gold 6226 | 24 | 376 | 2 | FPGA (Xilinx)
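
To land on one of the GPU node types in the table, a job normally requests the matching GPU model as a generic resource (GRES). A minimal sketch, assuming Slurm; the partition name, account name and the GRES label "a100" are placeholders and assumptions, so check the output of sinfo -o "%P %G" for the names actually configured on Idun:

    #!/bin/sh
    #SBATCH --job-name=a100-test
    #SBATCH --partition=<gpu-partition>    # placeholder: partition not named on this page
    #SBATCH --account=<your-account>       # placeholder: your shareholder account
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --gres=gpu:a100:1              # GRES label "a100" is an assumption
    #SBATCH --time=00:30:00

    # Print the GPU that was actually allocated to the job.
    nvidia-smi

Submit the script with sbatch and follow its progress with squeue -u $USER.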

Login nodes

Node | Type | #CPUs | Processor | #Cores | RAM [GB] | #GPUs | GPU type
idun-login1 | Dell PE730 | 2 | Intel Xeon E5-2650 v4 | 24 | 128 | 2 | NVIDIA Tesla P100
idun-login2 | Dell PE730 | 2 | Intel Xeon E5-2650 v4 | 24 | 128 | 2 | NVIDIA Tesla P100
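
Access to the cluster goes through the login nodes over SSH. The fully qualified hostname below is an assumption based on the *.hpc.ntnu.no names used elsewhere on this page:

    # Log in with your NTNU username.
    ssh <ntnu-username>@idun-login1.hpc.ntnu.no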

Admin nodes

  • 1 admin/provisioning node: Dell PE620
  • 2 Samba servers: idun-samba1.hpc.ntnu.no and idun-samba2.hpc.ntnu.no (see the connection example below)
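
The Samba servers presumably expose cluster storage over SMB to clients outside the cluster. A sketch using smbclient, where the share name is a placeholder because no export names are listed on this page:

    # List the shares exported by one of the Samba servers (you will be prompted for a password).
    smbclient -L //idun-samba1.hpc.ntnu.no -U <ntnu-username>
    # Connect to a specific share; <share> is a placeholder for an actual export name.
    smbclient //idun-samba1.hpc.ntnu.no/<share> -U <ntnu-username>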

Network

  • 3 Mellanox passive FDR switches for interconnect/storage on the general part of the cluster
  • 2 Mellanox passive EDR switches for interconnect/storage on the GPU part of the cluster
  • 1 Mellanox passive HDR switch for interconnect/storage on the GPU part of the cluster
  • 6 Gigabit Ethernet switches for the provisioning and admin network