The OSS-PASCAL8 is a 169.6 TeraFLOP engine with 80GB/s NVLink for the largest models and the most iterations. Supporting eight of the latest Pascal-based NVIDIA GPUs, the OSS-PASCAL8 provides 42.4 TeraFLOPS of double-precision performance for the most demanding HPC applications, and for state-of-the-art deep learning workloads it provides 169.6 TeraFLOPS of half-precision performance. GPU management and monitoring software is preinstalled on the OSS-PASCAL8. The GPU-accelerated server also includes dual high-performance “Broadwell” Xeon E5-2698 v4 2.2GHz processors and a base configuration of 512GB of DDR4 memory, scalable to 2TB. Six PCIe Gen3 slots are available for additional expansion and for scaling out to GPUltima clusters using InfiniBand or high-speed Ethernet networking. The appliance also includes four 2.5” removable 1.6TB NVMe SSD drives.
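The headline figures are consistent with eight Tesla P100 SXM2 accelerators (an assumption here, since the datasheet does not name the GPU model): roughly 5.3 TFLOPS FP64 and 21.2 TFLOPS FP16 per GPU. A minimal sketch of that arithmetic:

```python
# Sketch: derive the quoted aggregate throughput from assumed per-GPU
# Tesla P100 SXM2 peak figures (the GPU model is not named in the text).
P100_FP64_TFLOPS = 5.3    # assumed per-GPU double-precision peak
P100_FP16_TFLOPS = 21.2   # assumed per-GPU half-precision peak
NUM_GPUS = 8

fp64_total = NUM_GPUS * P100_FP64_TFLOPS   # matches the quoted 42.4 TFLOPS
fp16_total = NUM_GPUS * P100_FP16_TFLOPS   # matches the quoted 169.6 TFLOPS
print(f"FP64: {fp64_total:.1f} TFLOPS, FP16: {fp16_total:.1f} TFLOPS")
```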

  • 4U Chassis
  • Dual Intel Xeon E5-2698 v4 2.2GHz CPUs
  • Up to 2TB DDR4 LRDIMM System Memory
  • Four 2.5” 1.6TB NVMe SSDs
  • Eight Pascal GPU SXM2 with 80GB/s NVLink
  • Four x16 PCIe 3.0 slots
  • Two x8 PCIe 3.0 slots
  • Choice of Machine Learning Framework
    • Caffe
    • Torch
    • TensorFlow
    • Theano
  • MLPython
  • ML Dependencies (400MB Python)
  • cuDNN (5.0 & 5.1)
  • CaffeOnSpark
  • CUDA & NVIDIA driver
  • CUB (CUDA building blocks)
  • NCCL (NVIDIA Collective Communications Library)
  • GPU Management from Bright Computing
    • Health Management
    • Workload Integration
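
Because preinstalled versions and package names can vary between builds, a hypothetical acceptance check (the module names below are assumptions, not part of the product documentation) could probe which of the listed frameworks are importable from Python:

```python
import importlib.util

# Hypothetical post-install check: report which of the preinstalled deep
# learning frameworks are importable in the current Python environment.
# Module names are assumptions; "Torch" on this stack is the Lua-based
# framework, so the "torch" Python module is only a loose proxy for it.
FRAMEWORKS = {
    "Caffe": "caffe",
    "Torch": "torch",
    "TensorFlow": "tensorflow",
    "Theano": "theano",
}

def check_frameworks(frameworks=FRAMEWORKS):
    """Return a {display name: bool} map of framework importability."""
    return {name: importlib.util.find_spec(mod) is not None
            for name, mod in frameworks.items()}

if __name__ == "__main__":
    for name, present in check_frameworks().items():
        print(f"{name}: {'found' if present else 'missing'}")
```

`find_spec` only reports whether the module can be located; it does not import it, so a broken install would still need a real smoke test (e.g. running a small training job).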