A diversified product portfolio spanning general computing, heterogeneous computing, collaborative computing, edge computing, and more.
PowerLeader PLStor A9600 is a new-generation high-performance AI storage system with extreme per-chassis performance: 36 NVMe SSDs, 1,105.92 TB raw capacity, 160 GB/s bandwidth, and 4 million IOPS, scaling out to 32 controllers and 10 PB of capacity. Its control-data separated architecture, scalable to 32 controllers, serves multimodal models with trillions to tens of trillions of parameters. It natively supports AI data types (vectors, tensors, KV Cache): built-in vector search accelerates retrieval and reduces inference "hallucinations," while KV Cache cuts latency and boosts efficiency, delivering one storage system for the full AI training-inference pipeline.
PLStor A9600's DataTurbo engine enables kernel-mode direct storage access and caching, cutting memory usage by 50% and delivering 160 GB/s per-chassis bandwidth and TB/s-level cluster bandwidth. NDS direct NPU access creates an optimal data path, bypassing protocol stacks and eliminating data copies for a 30% performance boost.
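The "eliminating copies" idea above can be illustrated with memory-mapped I/O: the process reads file pages in place from the page cache rather than copying them into an intermediate user-space buffer. This is a generic `mmap` sketch of the zero-copy concept, not the DataTurbo or NDS implementation; the file path and contents are made up for the example.

```python
import mmap
import os
import tempfile

# Create a throwaway file standing in for a stored tensor shard.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(b"tensor-shard-0123456789")

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Slicing the map reads mapped pages directly; no read() buffer
        # is allocated and filled as an intermediate copy.
        header = bytes(mm[:12])

print(header)  # b'tensor-shard'
```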
PLStor A9600's new hardware architecture starts at 2 controllers and scales to 32 per cluster (10 PB capacity) while maintaining high performance and density.
PLStor A9600 meets diverse ecosystem needs through file, object*, and vector protocols, covering the full AI workflow (data aggregation, preprocessing, training, inference) with zero data copies to accelerate model iteration.
PLStor A9600 features SmartQuota, a file-system quota technology that lets administrators allocate storage to directories, users, and user groups, capping usage to prevent overoccupation and maximize storage value.
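The quota mechanism described above can be sketched as a simple admission check on writes against a hard capacity limit. The `DirectoryQuota` class and its method names are hypothetical illustrations of the general concept, not the PLStor A9600 API.

```python
class DirectoryQuota:
    """Tracks usage against a hard capacity limit for one directory."""

    def __init__(self, hard_limit_bytes: int):
        self.hard_limit = hard_limit_bytes
        self.used = 0

    def try_write(self, nbytes: int) -> bool:
        """Admit the write only if it stays within the quota."""
        if self.used + nbytes > self.hard_limit:
            return False  # reject: the write would exceed the hard limit
        self.used += nbytes
        return True


quota = DirectoryQuota(hard_limit_bytes=1_000_000)
assert quota.try_write(600_000) is True   # fits under the 1 MB limit
assert quota.try_write(500_000) is False  # would overshoot; rejected
assert quota.try_write(400_000) is True   # exactly reaches the limit
```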
PLStor A9600 supports new data paradigms (NAS, objects, vectors, graphs) with high-performance multimodal retrieval, excelling in RAG and Unified Cache acceleration. Its built-in multimodal knowledge base delivers leading QPS for both efficiency and accuracy.
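The vector retrieval that backs RAG-style lookups can be sketched as a nearest-neighbor search over embeddings by cosine similarity. The function names, the toy index, and its document IDs are illustrative assumptions, not the product's interface; production systems use approximate indexes rather than this brute-force scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy embedding index: document id -> embedding vector.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}

print(top_k([1.0, 0.05, 0.0], index))  # ['doc_a', 'doc_b']
```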
Unified Cache provides multi-level KV Cache tiering with a PB-level shared pool and flexible allocation, cutting first-token latency by 78% and boosting inference throughput by 60%. Sparse attention enables ultra-large context windows for accurate, fast, scalable centralized inference.
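The multi-level caching idea above can be sketched as a lookup that checks a small fast tier first, falls back to a larger shared pool, and promotes hits upward with LRU eviction. Tier names, sizes, and the `TieredKVCache` class are illustrative assumptions, not PLStor A9600 internals.

```python
from collections import OrderedDict

class TieredKVCache:
    """Two-tier KV cache: small fast tier over a large shared pool."""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()    # small, low-latency tier (e.g. memory)
        self.shared = {}             # large shared pool (e.g. storage-side)
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.shared[key] = value

    def get(self, key):
        if key in self.fast:                 # fast-tier hit
            self.fast.move_to_end(key)       # refresh LRU position
            return self.fast[key]
        value = self.shared.get(key)
        if value is not None:                # shared hit: promote upward
            self.fast[key] = value
            if len(self.fast) > self.fast_capacity:
                self.fast.popitem(last=False)  # evict least-recently used
        return value


cache = TieredKVCache(fast_capacity=2)
cache.put("prefix:1", "kv-block-1")
assert cache.get("prefix:1") == "kv-block-1"  # served from the shared pool
assert "prefix:1" in cache.fast               # now resident in the fast tier
```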
| Model | PLStor A9600 |
| --- | --- |
| Hardware Architecture | Disk and controller integration |
| Maximum Raw Capacity per Controller Enclosure | 1,105.92 TB |
| Height of Each Controller Enclosure | 2U |
| Controllers per Controller Enclosure | 2 |
| Disks per Controller Enclosure | 36 |
| Processors per Controller | Two Kunpeng 920 processors |
| Maximum Memory per Controller | 512 GB |
| Data Disk Type | Palm NVMe SSD |
| Network Type | 25/100 Gb/s TCP/IP; 25/100 Gb/s RoCE |
| Key Features | Quota (SmartQuota), quality of service (SmartQoS), inference acceleration (Unified Cache), end-to-end data integrity check (DIF) |
| Enclosure Dimensions (H × W × D) | 86.1 mm × 447 mm × 950 mm (including the cover) |
| Operating Temperature | −60 m to +1,800 m altitude: 5°C to 30°C (cabinet) / 35°C (enclosure); 1,800 m to 4,000 m altitude: the maximum temperature threshold decreases by 1°C for every 220 m increase in altitude |
| Operating Humidity | 10% to 90% RH |