
A premium GB300 workstation priced at $100K, offering full Multi-Instance GPU (MIG) support to partition resources for multiple developers.
The ASUS ExpertCenter Pro ET900N G3 is a tier-one AI supercomputer in a tower form factor, designed for organizations that require data-center-level compute without the rack-mount infrastructure. Built on the NVIDIA DGX Station™ architecture and powered by the GB300 Grace Blackwell Superchip, this workstation is positioned at the absolute top of the "AI PC" category, effectively bridging the gap between local development and production-scale clusters.
At an MSRP of $99,999, the ET900N G3 is an enterprise-grade investment for teams building sovereign AI, fine-tuning large-scale models, or deploying complex agentic workflows. Its closest comparisons are high-end H100/H200 workstations and the Apple Mac Studio (M2/M3 Ultra), though it occupies a significantly higher performance bracket in raw TFLOPS and interconnect speed. For practitioners, the ET900N G3 matters because it provides the first viable desktop path to running 1-trillion-parameter models locally with enterprise-grade reliability.
The hardware profile of the ET900N G3 is defined by the Blackwell Ultra architecture, which introduces massive improvements in compute density and memory throughput over the previous Hopper generation.
The ASUS ExpertCenter Pro ET900N G3 is one of the few desktop-class machines capable of running a 1-trillion-parameter (1T) model locally.
While 4-bit quantization formats (GGUF, EXL2) are popular on consumer hardware, the ET900N G3 is designed for FP8 and FP16 inference.
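The trade-off between these precisions comes down to bytes per parameter. As a rough sanity check (a back-of-envelope sketch, not vendor-published sizing; it ignores KV-cache, activations, and runtime overhead):

```python
# Approximate weight footprint by numeric format. Illustrative only:
# real deployments also need KV-cache, activations, and runtime overhead.

BYTES_PER_PARAM = {
    "fp16": 2.0,   # 16-bit floating point
    "fp8": 1.0,    # 8-bit floating point (E4M3/E5M2)
    "int4": 0.5,   # 4-bit quantization (e.g. GGUF Q4 variants)
}

def weight_memory_gb(params_billions: float, fmt: str) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * BYTES_PER_PARAM[fmt]

for fmt in ("fp16", "fp8", "int4"):
    print(f"1T params @ {fmt}: {weight_memory_gb(1000, fmt):,.0f} GB")
# 1T params @ fp16: 2,000 GB
# 1T params @ fp8: 1,000 GB
# 1T params @ int4: 500 GB
```

The arithmetic makes the design point concrete: a dense 1T model at FP16 needs roughly 2 TB for weights alone, which is why 1T-scale local inference in practice means FP8 or aggressive quantization plus sparse (MoE) activation.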
The ET900N G3 is not a "hobbyist" machine in the traditional sense; it is a professional tool for engineers who cannot rely on cloud providers due to latency, data privacy, or cost-at-scale.
When evaluating the ASUS ExpertCenter Pro ET900N G3 against other high-end AI hardware, the distinction lies in the Blackwell architecture and the unified memory pool.
For practitioners looking for the best hardware for local AI agents in 2026, the ASUS ExpertCenter Pro ET900N G3 represents the current ceiling of desktop AI performance. It is the definitive choice for production-ready, local LLM deployment at the 1T parameter scale.
| Model | Developer | Parameters (active) | Mode | Throughput | Memory |
|---|---|---|---|---|---|
| Llama 4 Maverick | Meta | 400B (17B active) | SS | 39.1 tok/s | 146.4 GB |
| | | 70B | SS | 50.7 tok/s | 112.8 GB |
| | | 70B | SS | 50.7 tok/s | 112.8 GB |
| Nvidia Nemotron 3 Super | NVIDIA | 120B (12B active) | SS | 55.2 tok/s | 103.5 GB |
| GLM-5 | Z.ai | 744B (40B active) | SS | 65.2 tok/s | 87.7 GB |
| GLM-5.1 | Z.ai | 744B (40B active) | SS | 65.2 tok/s | 87.7 GB |
| Kimi K2.6 | Moonshot AI | 1000B (32B active) | SS | 66.3 tok/s | 86.2 GB |
| Kimi K2 Instruct 0905 | Moonshot AI | 1000B (32B active) | SS | 67.6 tok/s | 84.6 GB |
| Kimi K2 Thinking | Moonshot AI | 1000B (32B active) | SS | 67.6 tok/s | 84.6 GB |
| Kimi K2.5 | Moonshot AI | 1000B (32B active) | SS | 67.6 tok/s | 84.6 GB |
| GLM-4.6 | Z.ai | 355B (32B active) | SS | 81.3 tok/s | 70.3 GB |
| Mistral Large 3 675B | Mistral AI | 675B (41B active) | SS | 86.3 tok/s | 66.3 GB |
| DeepSeek-V3 | DeepSeek | 671B (37B active) | SS | 95.5 tok/s | 59.8 GB |
| DeepSeek-R1 | DeepSeek | 671B (37B active) | SS | 95.5 tok/s | 59.8 GB |
| DeepSeek-V3.1 | DeepSeek | 671B (37B active) | SS | 95.5 tok/s | 59.8 GB |
| DeepSeek-V3.2 | DeepSeek | 685B (37B active) | SS | 95.5 tok/s | 59.8 GB |
| GLM-4.5 | Z.ai | 355B (32B active) | SS | 110.3 tok/s | 51.8 GB |
| GLM-4.7 | Z.ai | 358B (32B active) | SS | 108.6 tok/s | 52.6 GB |
| Kimi K2 Instruct | Moonshot AI | 1000B (32B active) | SS | 110.3 tok/s | 51.8 GB |
| | | 70B | SS | 125.1 tok/s | 45.7 GB |
| Qwen3.5-397B-A17B | Alibaba Cloud (Qwen) | 397B (17B active) | SS | 124.2 tok/s | 46.0 GB |
| Llama 2 70B Chat | Meta | 70B | SS | 131.7 tok/s | 43.4 GB |
| Mixtral 8x22B Instruct | Mistral AI | 141B (39B active) | SS | 131.2 tok/s | 43.6 GB |
| Qwen 3.5 Omni | Alibaba Cloud | 397B (17B active) | SS | 126.5 tok/s | 45.2 GB |
| Qwen3-235B-A22B | Alibaba Cloud (Qwen) | 235B (22B active) | SS | 157.3 tok/s | 36.3 GB |
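The throughput figures above track active-parameter count rather than total model size, which is what a simple memory-bandwidth roofline predicts for single-stream decoding: every generated token must read each active parameter once. The sketch below is an illustrative estimate only; the 8 TB/s bandwidth figure is an assumed placeholder for this calculation, not a confirmed GB300 specification.

```python
# Memory-bandwidth roofline for single-stream decoding:
#   tok/s <= bandwidth / (active_params * bytes_per_param)
# NOTE: the 8 TB/s default is an illustrative assumption, not a
# published GB300 spec; measured throughput lands below this ceiling
# once scheduling, KV-cache reads, and kernel overhead are included.

def roofline_tok_s(active_params_b: float, bytes_per_param: float,
                   bandwidth_tb_s: float = 8.0) -> float:
    """Upper bound on decode throughput in tokens/second."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / bytes_per_token

# DeepSeek-V3-class MoE: 37B active parameters at FP8 (1 byte/param)
print(f"{roofline_tok_s(37, 1.0):.0f} tok/s ceiling")  # prints "216 tok/s ceiling"
```

Under these assumptions a 37B-active MoE at FP8 has a ceiling around 216 tok/s, comfortably above the 95.5 tok/s measured for the DeepSeek family in the table, which is the expected relationship between a roofline bound and real-world single-stream numbers.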