Description
These are the hardware and topology recommendations we usually give to customers who plan to build an HPC (High Performance Computing) cluster. Please keep in mind that these are only suggestions; the final choice is up to you.
Solution
HPC Hardware recommendation
- Network infrastructure: one of the most important parts to consider. Depending on the size of the cluster, InfiniBand is recommended for the connections between nodes. For the connection between the DELTAGEN machine and the cluster, a 10 Gbit/s Ethernet connection is recommended.
- CPU: Sandy Bridge or newer is recommended (AVX support).
- RAM: Depends on the size of the models you want to render; 64 GB per node is recommended.
- HDD: A local hard drive on every node is important for caching purposes. Relying only on a shared drive system would slow down initialization.
- GPU: Optional, as DSTELLAR with DELTAGEN currently uses the CPU only (this might change in the future).
- OS: CentOS 6.5 or higher / SUSE Linux Enterprise 12 or higher. A minimal sketch for checking these prerequisites on a node follows this list.
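The following is a minimal sketch, assuming the nodes run Linux, for verifying the CPU, RAM, and local-disk recommendations above on a single node. The 64 GB threshold and the /scratch cache path are illustrative assumptions and should be adapted to your setup.

```python
#!/usr/bin/env python3
"""Per-node prerequisite check (illustrative sketch, not an official tool)."""
import os

MIN_RAM_KB = 64 * 1024 * 1024    # 64 GB per node, as recommended above
LOCAL_CACHE_PATH = "/scratch"    # hypothetical local cache mount point


def has_avx() -> bool:
    """Return True if the kernel reports the AVX flag for the CPU."""
    with open("/proc/cpuinfo") as f:
        return any("avx" in line.split() for line in f if line.startswith("flags"))


def total_ram_kb() -> int:
    """Return total RAM in kB as reported by /proc/meminfo."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal"):
                return int(line.split()[1])
    return 0


if __name__ == "__main__":
    print("AVX support:     ", "OK" if has_avx() else "MISSING")
    print("RAM >= 64 GB:    ", "OK" if total_ram_kb() >= MIN_RAM_KB else "TOO LOW")
    print("Local cache path:", "OK" if os.path.isdir(LOCAL_CACHE_PATH) else "NOT FOUND")
```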
HPC Topology recommendation
- Usually it is fine to use one InfiniBand switch to connect all cluster nodes.
- Then you need one bridge node that is connected both to the same InfiniBand network and to the 10 Gbit/s Ethernet switch linked to the application (see picture 1).
- With the broker application available since DELTAGEN 2018 it is also possible to partition the cluster into smaller chunks, each connected to the application through its own bridge machine (see picture 2 and the sketch after this list).
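The broker-based layout in picture 2 amounts to a mapping from bridge machines to the cluster nodes they serve. The sketch below is purely illustrative: the hostnames, the number of bridges, and the chunk size are assumptions, and the actual partitioning is configured through the DELTAGEN broker, not through this script.

```python
"""Illustrative sketch of the partitioned topology from picture 2."""

# All render nodes sit on the same InfiniBand fabric (hypothetical hostnames).
cluster_nodes = [f"node{i:02d}" for i in range(1, 17)]

# Each bridge machine is on the InfiniBand network and on the 10 Gbit/s
# Ethernet switch that links its chunk to the DELTAGEN application.
bridges = ["bridge01", "bridge02"]

# Split the cluster into one chunk per bridge machine.
chunk_size = len(cluster_nodes) // len(bridges)
partitions = {
    bridge: cluster_nodes[i * chunk_size:(i + 1) * chunk_size]
    for i, bridge in enumerate(bridges)
}

for bridge, nodes in partitions.items():
    print(f"{bridge} serves {len(nodes)} nodes: {', '.join(nodes)}")
```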
Article information
KB Article ID: #5531
Support Reference: DOC-1159