
Framework's modular laptop, redesigned from the ground up, with Intel Core Ultra Series 3 (Panther Lake), a 74 Wh battery rated for 20 hours, LPCAMM2 LPDDR5X memory, a 13.5" 3:2 touchscreen at 2880×1920, and a full CNC 6063 aluminum chassis at 1.4 kg. The first Ubuntu Certified Framework system.
The Framework Laptop 13 Pro is a ground-up rebuild of the modular laptop concept, now targeting developers who need local AI inference on a portable machine. At $1,199 MSRP, it delivers 181 total TOPS across CPU, GPU, and NPU, with up to 64 GB of LPCAMM2 memory at 154 GB/s of bandwidth. It is the first Framework laptop with Ubuntu certification and a pre-installed Linux option, making it a serious contender for engineers who want to run models without a discrete GPU.
The chassis is CNC-machined 6063 aluminum, weighing 1.4 kg at 15.85 mm thin. The 74 Wh battery delivers 20 hours of Netflix streaming, and the 13.5" 3:2 touchscreen at 2880×1920 with VRR from 30–120 Hz covers both coding and media consumption. But the real story is the Intel Core Ultra Series 3 (Panther Lake) silicon and the repairable, upgradeable memory subsystem.
Total AI Compute: 181 TOPS (INT8)
Memory: LPCAMM2 LPDDR5X at 7467 MT/s, up to 64 GB, 154 GB/s bandwidth. This is socketed memory—you can upgrade it later. For AI workloads, memory bandwidth is the bottleneck for token generation, and 154 GB/s is competitive with entry-level discrete GPUs like the RTX 4060 laptop (256 GB/s) while using a fraction of the power.
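As a rough sanity check on what that bandwidth buys you: single-stream decode is memory-bound, so an upper bound on tokens per second is bandwidth divided by the bytes streamed per token (roughly the quantized model size). A minimal sketch; the 154 GB/s figure comes from the spec above, the model sizes are illustrative approximations, and real throughput lands below these ceilings once compute and KV-cache traffic are counted.

```python
# Theoretical decode ceiling for a memory-bandwidth-bound LLM.
# Assumption: each generated token streams the full set of quantized
# weights from memory once. Real throughput will be lower.

BANDWIDTH_GBS = 154.0  # LPCAMM2 LPDDR5X at 7467 MT/s, per the spec above

# Approximate on-disk sizes for Q4_K_M-quantized models (illustrative).
models_gb = {
    "7B  @ Q4_K_M": 4.4,
    "13B @ Q4_K_M": 8.0,
    "16B @ Q4_K_M": 9.8,
    "70B @ Q4_K_M": 40.0,
}

for name, size_gb in models_gb.items():
    ceiling = BANDWIDTH_GBS / size_gb  # tokens/second, upper bound
    print(f"{name}: ~{ceiling:.0f} tok/s ceiling")
```

The 15–25 tok/s quoted later for 7B–16B models sits well under those ceilings, which is what you would expect on shared-bandwidth silicon.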
TDP: 25 W. This platform can run local LLMs without the fans screaming. Compare that to a 45 W+ laptop GPU or a 150 W desktop card.
Key tradeoff: The integrated GPU (Arc B390) provides 122 TOPS, but it shares memory bandwidth with the CPU and NPU. You are not getting dedicated VRAM. For models that fit entirely in system memory, this works. For models that need 48 GB+ you are out of luck; 64 GB is the ceiling (the fit check below makes this concrete).
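To make that ceiling concrete, here is a back-of-the-envelope fit check: weight bytes at a given bits-per-weight, plus a reserve for the OS, runtime, and KV cache. The bits-per-weight and reserve figures below are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope check: does a model's weight footprint fit in
# unified memory? Assumes ~8 GB reserved for OS, runtime, and KV cache
# (illustrative), and typical bits-per-weight for each format.

def fits(params_billion: float, bits_per_weight: float,
         total_gb: float = 64.0, reserved_gb: float = 8.0) -> bool:
    # billions of params x bits per weight / 8 gives GB directly
    # (the 1e9 factors cancel out).
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb <= total_gb - reserved_gb

for params, fmt, bpw in [(7, "FP16", 16.0), (13, "Q4_K_M", 4.8),
                         (34, "Q4_K_M", 4.8), (70, "Q4_K_M", 4.8),
                         (70, "FP16", 16.0)]:
    verdict = "fits" if fits(params, bpw) else "does not fit"
    print(f"{params}B {fmt}: {verdict} in 64 GB")
```

By this estimate, a 70B model at Q4_K_M squeezes into the 64 GB configuration, while the same model at FP16 is not even close.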
This is where the Framework Laptop 13 Pro lives or dies for AI work. With 154 GB/s memory bandwidth and up to 64 GB unified memory, here is what you can expect:
Fits in memory (quantized): llama.cpp on the GPU.
Sweet spot: 7B–16B parameter models at Q4_K_M on the 32 GB or 64 GB config. These run at conversational speeds (15–25 tok/s), with the NPU handling prompt processing and the GPU handling decode.
What does not work: FP16 models larger than 32 GB. Any model requiring CUDA or ROCm—this is an Intel integrated GPU, so you rely on llama.cpp with Vulkan, SYCL, or OpenCL backends (see the setup sketch after this list). No Flash Attention 2, no tensor parallelism across GPUs.
Multimodal: LLaVA, Qwen-VL, and similar VLMs run at 5–10 tok/s on 7B variants. The 700-nit touchscreen is usable for local vision tasks, but don't expect real-time video processing.
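If you want to reproduce the setup described above, the sketch below uses the llama-cpp-python bindings. It assumes you have installed a build compiled against a GPU backend (Vulkan or SYCL; that choice is made when the package is built, not in the script) and that the GGUF path points at a quantized model you have already downloaded; the path and parameters are placeholders.

```python
# Minimal local-inference sketch with llama-cpp-python.
# Assumes the package was built with a GPU backend (Vulkan or SYCL),
# so n_gpu_layers=-1 offloads every layer to the Arc iGPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=8192,       # context window; larger values grow the KV cache
    n_gpu_layers=-1,  # offload all layers to the integrated GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize LPCAMM2 in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Multimodal models like LLaVA go through the same bindings but also need their projector (mmproj) file loaded, which is outside the scope of this sketch.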
Who should buy this:
Who should skip this:
vs. MacBook Pro 14 (M4 Pro, 24 GB unified memory):
vs. ThinkPad P1 Gen 7 (RTX 5000 Ada, 16 GB VRAM):
vs. Mac Studio (M2 Ultra, 192 GB unified memory):
When to pick the Framework Laptop 13 Pro:
When to skip it: