4U GPU server: DSS 8440
The DSS 8440 is a 2-socket server in a 4U chassis of roughly the same size and weight, and it likewise belongs to Dell EMC's Extreme Scale Infrastructure (ESI) product line; its main selling point, however, is powerful computing performance rather than huge storage capacity.
In terms of I/O, the DSS 8440 is equipped with four PCIe switches that form a high-speed PCIe fabric. Its expansion-slot layout lets the server host a large number of accelerator cards: the interior of the chassis can accommodate 10 double-width, full-length GPU accelerator cards (currently 4, 8, or 10 Nvidia Tesla V100 cards can be installed), while 9 PCIe slots at the rear of the chassis take 8 full-height interface cards and 1 half-height interface card.
In addition, cost-effectiveness is a main selling point of the DSS 8440. Compared with other 4U servers that carry 8 SXM2-form-factor GPUs and emphasize the NVLink interconnect architecture, the DSS 8440, with 10 PCIe-card-form-factor GPUs, claims to deliver similar training performance at a lower cost.
For example, compared with the HPE Apollo 6500 Gen10, the DSS 8440 fits more GPU accelerator cards in a single chassis (10 Tesla V100 PCIe vs. 8 Tesla V100 SXM2); in aggregate Tensor compute at the same chassis density, the DSS 8440 comes out slightly ahead (10 × 112 TFLOPS vs. 8 × 125 TFLOPS); and in performance per watt when training common deep learning frameworks and convolutional neural network models, the DSS 8440 also leads, by as much as 13.5%.
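The aggregate-throughput comparison above is simple arithmetic over the vendor-quoted per-card figures. A minimal sketch (illustrative only; the per-card TFLOPS values are the numbers cited in the comparison, not benchmark results, and the 13.5% performance-per-watt figure is a separate measured claim):

```python
# Back-of-envelope comparison of the aggregate peak Tensor compute
# quoted for the two chassis. Per-card TFLOPS are the vendor numbers
# from the text above.

def aggregate_tflops(cards: int, tflops_per_card: float) -> float:
    """Total peak Tensor TFLOPS for a fully populated chassis."""
    return cards * tflops_per_card

dss_8440 = aggregate_tflops(10, 112)    # 10x Tesla V100 PCIe
apollo_6500 = aggregate_tflops(8, 125)  # 8x Tesla V100 SXM2

advantage = (dss_8440 - apollo_6500) / apollo_6500 * 100
print(f"DSS 8440:    {dss_8440:.0f} TFLOPS")   # 1120 TFLOPS
print(f"Apollo 6500: {apollo_6500:.0f} TFLOPS")  # 1000 TFLOPS
print(f"Aggregate peak advantage: {advantage:.0f}%")  # 12%
```

So despite the lower per-card peak of the PCIe V100, the two extra cards give the DSS 8440 about 12% more aggregate peak Tensor throughput per chassis.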
Compared with Nvidia's DGX-1, the DSS 8440, which builds its Nvidia GPU interconnect over the PCIe switching fabric, can also deliver quite close machine learning performance (within a 5% gap).
Beyond GPUs, the DSS 8440 is expected to be paired with other computing accelerators in the future: at the Dell Technologies conference, Dell EMC announced a collaboration with Graphcore, an AI-acceleration startup it has invested in, and expects to begin bundling Graphcore's C2 computing accelerator card (PCIe interface) with the DSS 8440 later this year.
What Graphcore brings is a graph-based computing technology aimed specifically at machine learning, which delivers higher performance on training workloads. The C2 uses a computing architecture distinct from the CPU and GPU, called the Intelligence Processing Unit (IPU). Each C2 card contains 2,432 independent IPU cores, providing up to 2 PFLOPS, with on-chip memory configurable up to 4.8GB and memory bandwidth of up to 720TB/s. Data transfer between IPUs runs over a high-speed channel called IPU-Links, with 2.5Tb/s of bandwidth, allowing the eight IPU accelerator cards loaded in the DSS 8440 to be connected into a shared pool of computing resources.
According to news Graphcore announced at the end of last year, its cooperation with Dell EMC will produce an integrated appliance for enterprise data centers, the Dell-Graphcore IPU-Appliance, containing eight C2 cards. Each C2 accelerator card carries two Colossus GC2 IPU processors and can execute 100,000 parallel processing threads simultaneously, giving the whole appliance nearly 1 PFLOPS of computing performance.
Beyond compute, in terms of storage configuration the DSS 8440 can be fitted with up to ten 2.5-inch hard drives or solid-state drives (2 NVMe, 2 SAS/SATA, and 6 NVMe/SAS/SATA), all located at the rear of the chassis, for a maximum storage capacity of 32TB.