Sweden
Vretenborgsvägen 28, floor 6
SE-126 30 Hägersten
Phone: +46 8 683 03 00

Norway
Midstranda 51
NO-2321 Hamar
Phone: +47 62 54 02 91

Denmark
Hassellunden 14
DK-2765 Smørum
Phone: +45 70 300 310


MAXER-2100

MAXER-2100 – 2U Rackmount AI Inference Server with NVIDIA RTX 4080 Super by AAEON

The AAEON MAXER-2100 is a 2U rackmount AI inference server combining 12th and 13th Generation Intel Core LGA1700 processors with an NVIDIA GeForce RTX 4080 Super GPU, built for high-performance deep learning inference, computer vision, and GPU-accelerated computing workloads. With up to 128GB DDR5 memory, four LAN ports including dual 2.5GbE, dual hot-swappable SATA drives with RAID support, and a built-in 1000W power supply, it delivers enterprise-grade AI compute capability in a rack-deployable form factor. Available from Recab, your Nordic partner for AAEON AI computing solutions.
Description
The AAEON MAXER-2100 is a purpose-built 2U rackmount AI inference server designed for engineers and system integrators deploying demanding machine learning inference, computer vision, industrial inspection, and GPU-accelerated analytics workloads in data centre and edge rack environments. Built on the Intel Q670 chipset with 12th and 13th Generation Intel Core LGA1700 socket processors, the standard configuration ships with an Intel Core i9-13900 and supports up to the i9-13900K at 125W TDP, providing the CPU throughput needed to sustain high-performance GPU inference pipelines without processor bottlenecks.

The integrated NVIDIA GeForce RTX 4080 Super in a PCIe x16 slot can optionally be replaced with other NVIDIA GeForce or dedicated AI computing cards, including the RTX 4090, giving users the flexibility to scale GPU compute to their specific inference requirements. System memory is provided by four DDR5 4000MHz DIMM slots supporting up to 128GB in a dual-channel non-ECC configuration, and storage is covered by two hot-swappable 2.5-inch SATA-III drive bays with RAID 0 and RAID 1 support, plus one M.2 2280 NVMe slot for the operating system.

Networking includes one Intel AMT 12.0-enabled GbE management port, one additional GbE port, and two 2.5GbE ports for high-bandwidth data ingestion. Four USB 3.2 Gen 2 ports at 10Gbps, one RS-232/422/485 serial port, and a front-access I/O design round out the connectivity, while onboard TPM 2.0 covers platform security. The built-in 1000W power supply handles the full system thermal envelope, and the operating temperature range is 0°C to 40°C with 0.5m/s airflow. The system is CE and FCC Class A certified and available from Recab, your Nordic partner for AAEON AI inference server solutions.
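As a rough illustration of how the RTX 4080 Super's 16GB of VRAM constrains which models fit for inference, the following sketch estimates a model's memory footprint from its parameter count and weight precision. The function name and the 20% overhead factor are illustrative assumptions, not AAEON or NVIDIA tooling:

```python
def model_vram_gib(params_billion: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate: weights x dtype size x ~20% runtime
    overhead for activations and CUDA context. Heuristic, not vendor guidance."""
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

# A 7B-parameter model: ~15.6 GiB in FP16 (tight on a 16GB card),
# ~7.8 GiB after INT8 quantization (comfortable headroom).
print(round(model_vram_gib(7, 2), 1), round(model_vram_gib(7, 1), 1))
```

Under these assumptions, quantizing weights is the main lever for fitting larger models on the standard GPU before stepping up to a higher-VRAM card such as the RTX 4090.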
Features
  • NVIDIA GeForce RTX 4080 Super in PCIe x16 with optional RTX 4090 or dedicated AI computing card support
  • 12th and 13th Generation Intel Core LGA1700 processors, shipping with the i9-13900 as standard and supporting up to the i9-13900K
  • Up to 128GB DDR5 4000MHz memory in dual-channel configuration for high-throughput AI inference workloads
  • Four LAN ports including two 2.5GbE and one Intel AMT 12.0 GbE for management and high-bandwidth data ingestion
  • Dual hot-swappable 2.5 inch SATA drive bays with RAID 0 and RAID 1 plus M.2 2280 NVMe storage
  • Built-in 1000W power supply with onboard TPM 2.0 and front access I/O design for rack management
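To put the 2.5GbE data-ingestion ports in context for vision workloads, a back-of-envelope ingest ceiling per link can be sketched as follows. The 94% efficiency factor approximates Ethernet framing and protocol overhead and is an assumption, not a measured figure:

```python
def max_frames_per_sec(link_gbps: float, frame_bytes: int,
                       efficiency: float = 0.94) -> float:
    """Upper bound on uncompressed camera frames per second over one Ethernet
    link. efficiency ~0.94 allows for framing/protocol overhead (assumed)."""
    usable_bytes_per_s = link_gbps * 1e9 / 8 * efficiency
    return usable_bytes_per_s / frame_bytes

# One 2.5GbE port carrying 1 MP 8-bit mono frames (1,000,000 bytes each):
print(round(max_frames_per_sec(2.5, 1_000_000)))  # ~294 fps ceiling per port
```

Real throughput depends on the camera protocol and switch fabric, but the sketch shows why the dual 2.5GbE ports matter for multi-camera inspection pipelines compared with plain GbE.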
Specifications
CPU: 12th/13th Gen Intel Core (i9-13900 standard)
GPU: NVIDIA GeForce RTX 4080 Super
Chipset: Intel Q670
Memory: Up to 128GB DDR5
Ethernet: 4x RJ-45 LAN ports
USB: 4x USB 3.2 Gen 2 (10Gbps)
Power: Built-in 1000W power supply
Dimensions: 17" x 3.46" x 17.6"
Weight: 31.52 lb gross
Operating Temp: 0°C ~ 40°C (32°F ~ 104°F)
Applications
  • AI inference and deep learning model deployment in rackmount data centre and edge environments
  • Computer vision and industrial inspection systems requiring high-throughput GPU acceleration
  • Edge AI servers for quality control, defect detection and real-time analytics in manufacturing
  • GPU-accelerated scientific computing and machine learning training workloads in enterprise rack installations