Flagship 8B multilingual embedding model from Qwen, ranked #1 on MTEB Multilingual at release.
A strong 7.6B-parameter dense embedding model from Qwen/Alibaba. Treat the benchmark results above as the leading indicator of fit; composite scoring across modalities is still maturing.
Generated from this model’s benchmarks and ranking signals. Editor reviews refine it over time.
Copy and paste this command to start running the model locally.
ollama run qwen3-embedding:8b
Access model weights, configuration files, and documentation.
See which devices can run this model and at what quality level.
Qwen3-Embedding-8B is the flagship of Alibaba Qwen's embedding series. Built on Qwen3-8B-Base, it is trained via a three-stage pipeline: large-scale weakly supervised contrastive pretraining on data synthesized by Qwen3-32B, supervised fine-tuning, and slerp-based model merging. It supports 100+ languages, a 32K context window, instruction-aware queries, and Matryoshka output dimensions from 32 to 4096.
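A minimal sketch of how the two query-side features above are typically used, assuming the model's documented "Instruct: ... / Query: ..." prompt format and standard Matryoshka truncation (keep the leading dimensions, then re-normalize). The random vector below is a placeholder for real model output, not an actual embedding:

```python
import numpy as np

# Instruction-aware query: the task description is prepended to the query.
# The exact template is an assumption here; check the model card for the
# canonical format before relying on it.
task = "Given a web search query, retrieve relevant passages"
query = f"Instruct: {task}\nQuery: what is a Matryoshka embedding?"

# Placeholder for a real 4096-dim embedding from the model.
rng = np.random.default_rng(0)
full = rng.normal(size=4096)
full /= np.linalg.norm(full)  # embeddings are typically L2-normalized

def truncate_matryoshka(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize.

    Matryoshka-trained models concentrate the most useful information in
    the leading dimensions, so truncation preserves most of the
    similarity structure at a fraction of the storage cost.
    """
    small = vec[:dim]
    return small / np.linalg.norm(small)

short = truncate_matryoshka(full, 256)
print(short.shape)  # a 256-dim unit vector
```

Because truncated vectors are re-normalized, cosine similarity still works at any chosen dimension; smaller dimensions trade retrieval quality for index size.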