
AMD's flagship AI laptop processor with 12 Zen 5/5c cores, Radeon 890M iGPU, and 50 TOPS XDNA 2 NPU. The most powerful integrated NPU in its class, exceeding Copilot+ PC requirements.
The AMD Ryzen AI 9 HX 370 is the flagship silicon of the "Strix Point" architecture, representing a significant pivot in AMD’s mobile strategy toward dedicated AI acceleration. Built on TSMC’s 4nm process, this SoC (System on a Chip) integrates the Zen 5 CPU architecture with the new XDNA 2 NPU, specifically designed to handle persistent AI workloads without offloading to a discrete GPU. For developers and engineers, this chip marks the transition of the "AI PC" from a marketing term to a functional reality for local model execution.
Positioned as a premium mobile processor, the Ryzen AI 9 HX 370 competes directly with the Intel Core Ultra 200V (Lunar Lake) and the Apple M3/M4 series. While its predecessors focused primarily on raw CPU throughput, the HX 370 is engineered for the balanced demands of local AI development and agentic workflows. By exceeding the 40 TOPS requirement for Microsoft’s Copilot+ PC program with its 50 TOPS NPU, it provides a stable, high-performance target for Windows-based AI applications and Linux-based inference stacks.
AI inference performance on the AMD Ryzen AI 9 HX 370 is driven by a three-pillar compute architecture: the CPU, the iGPU, and the NPU. While the platform offers a combined 80 TOPS, the distribution of those resources is what dictates how practitioners should deploy models.
The standout feature is the XDNA 2 NPU, delivering 50 TOPS of INT8 performance. Unlike previous generations, XDNA 2 supports "Block FP16" data types, which allows for near-INT8 speeds with the accuracy of FP16. This is critical for running local LLMs where quantization loss can degrade reasoning capabilities. The NPU is optimized for low-power, "always-on" tasks such as background noise suppression, image generation, and lightweight agentic loops, operating within a lean 28W TDP.
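The core idea behind block floating-point formats like Block FP16 is that a group of values shares one scale factor while each value keeps a compact integer mantissa, recovering most of FP16's accuracy at near-INT8 cost. The sketch below illustrates that shared-scale principle with NumPy; it is a simplified illustration, not AMD's actual bit layout.

```python
import numpy as np

def block_quantize(x, block_size=32):
    """Quantize to INT8 mantissas with one shared scale per block.
    Simplified illustration of the shared-scale idea behind Block FP16."""
    x = x.reshape(-1, block_size)
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0  # one scale per block
    scale[scale == 0] = 1.0                               # avoid divide-by-zero
    q = np.round(x / scale).astype(np.int8)               # 8-bit mantissas
    return q, scale

def block_dequantize(q, scale):
    return (q.astype(np.float32) * scale).ravel()

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=1024).astype(np.float32)  # LLM-like weight slice

q, scale = block_quantize(w)
w_hat = block_dequantize(q, scale)
err = float(np.abs(w - w_hat).max())
print(f"max reconstruction error: {err:.6f}")
```

Because the scale adapts per block, outliers in one block do not destroy precision everywhere else, which is why block formats hold up better for LLM weights than a single global INT8 scale.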
For AI workloads, memory is the primary bottleneck. The Ryzen AI 9 HX 370 supports LPDDR5X-8000, providing a maximum memory bandwidth of 128 GB/s. While this is lower than dedicated desktop GPUs, it is highly competitive for a mobile SoC.
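The bandwidth ceiling can be turned into a back-of-envelope estimate: during autoregressive decoding, every token requires reading roughly the full weight set from memory, so peak tokens/second is bounded by bandwidth divided by model size in bytes. A quick sketch (real-world numbers will land below these ceilings due to KV-cache traffic and overhead):

```python
BANDWIDTH_GBS = 128.0  # LPDDR5X-8000 on the Ryzen AI 9 HX 370

def decode_ceiling_tps(params_billion, bytes_per_param):
    """Rough upper bound on decode tokens/sec: bandwidth / weight bytes."""
    weights_gb = params_billion * bytes_per_param
    return BANDWIDTH_GBS / weights_gb

for params, bpp, label in [(7, 0.5, "7B @ Q4"),
                           (14, 0.5, "14B @ Q4"),
                           (7, 2.0, "7B @ FP16")]:
    print(f"{label}: ~{decode_ceiling_tps(params, bpp):.0f} tok/s ceiling")
# 7B @ Q4: ~37 tok/s ceiling
# 14B @ Q4: ~18 tok/s ceiling
# 7B @ FP16: ~9 tok/s ceiling
```

This is why 4-bit quantization matters so much on this class of hardware: halving bytes-per-parameter roughly doubles the achievable decode rate.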
The integrated Radeon 890M, featuring 16 Compute Units on the RDNA 3.5 architecture, acts as the heavy lifter for parallel processing. When the NPU isn't supported by a specific framework (like certain custom PyTorch kernels), the iGPU provides a robust fallback for GPGPU acceleration, benefiting from the high-speed 128 GB/s memory bus.
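In practice this fallback chain (NPU, then iGPU, then CPU) is expressed as an execution-provider preference when using ONNX Runtime. A minimal sketch, assuming the Ryzen AI NPU is exposed as `VitisAIExecutionProvider` and the Radeon 890M via `DmlExecutionProvider` (DirectML), which matches the PC ONNX Runtime ecosystem as I understand it:

```python
# Preference order for ONNX Runtime backends on Strix Point.
# Provider names are assumptions based on the Ryzen AI / DirectML ecosystems.
PREFERRED = [
    "VitisAIExecutionProvider",  # XDNA 2 NPU
    "DmlExecutionProvider",      # Radeon 890M iGPU via DirectML
    "CPUExecutionProvider",      # Zen 5 cores as last resort
]

def pick_provider(available):
    """Return the best available backend, falling back NPU -> iGPU -> CPU."""
    for provider in PREFERRED:
        if provider in available:
            return provider
    raise RuntimeError("no supported execution provider found")

# e.g. a framework build without NPU support still lands on the iGPU:
print(pick_provider(["CPUExecutionProvider", "DmlExecutionProvider"]))
# → DmlExecutionProvider
```

The same ordering would typically be passed as the `providers` list when creating an `onnxruntime.InferenceSession`, so unsupported operators transparently fall through to the next tier.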
When evaluating the AMD Ryzen AI 9 HX 370 for local LLM inference, performance is dictated by quantization. This hardware is the "sweet spot" for models in the 3B to 14B parameter range.
Frameworks such as LM Studio (via llama.cpp) and AMD's own Ryzen AI Software (leveraging ONNX Runtime) both target this hardware directly, so models in this range can run on-device without a discrete GPU.
The 50 TOPS NPU is particularly well-suited for Stable Diffusion XL Turbo or Whisper v3 (speech-to-text). Generating a 512x512 image via SDXL Turbo on the iGPU/NPU hybrid path typically takes under 2 seconds, making it a viable tool for on-the-fly asset generation in developer workflows.
The AMD Ryzen AI 9 HX 370 is one of the best AI chips for local deployment in a mobile form factor. If you use AMD's Ryzen AI library to optimize models for XDNA, this is your primary development target.

Against Intel's Core Ultra 200V (Lunar Lake), the Ryzen AI 9 HX 370 generally wins on raw multi-core compute with its 12-core/24-thread configuration (4 Zen 5 + 8 Zen 5c), compared to Intel's 8-core design. While Lunar Lake's NPU is highly competitive at 48 TOPS, AMD's XDNA 2 architecture currently holds a slight edge in INT8 throughput (50 TOPS). For developers who also need high CPU performance for compiling code alongside AI inference, the Ryzen is the stronger choice.
Apple’s M-series chips benefit from higher unified memory bandwidth (e.g., 100 GB/s on base M3, significantly higher on Pro/Max). However, the Ryzen AI 9 HX 370 offers a more flexible environment for developers who require Windows or Linux-specific toolchains. While the M4's Neural Engine is formidable, the XDNA 2 NPU is more accessible for developers using the ONNX and OpenVINO (via cross-compatibility) ecosystems in a PC environment.
When selecting hardware for local AI agents in 2025, the HX 370 stands out because it doesn't require a power-hungry discrete GPU to be effective. For the first time in the Windows ecosystem, a mobile SoC can handle medium-sized LLMs with the efficiency previously reserved for specialized silicon. If your workflow involves 7B to 14B parameter models and you require a mobile workstation, the Ryzen AI 9 HX 370 is currently the performance leader in the integrated AI PC category.
