Lightweight 0.6B multilingual embedder with 32K context and instruction support.
A strong 0.596B-parameter dense embedding model from Qwen (Alibaba). Treat the per-modality benchmarks above as the leading indicator of fit; composite scoring across modalities is still maturing.
Generated from this model’s benchmarks and ranking signals. Editor reviews refine it over time.
Copy and paste this command to start running the model locally.
ollama run qwen3-embedding:0.6b
Access model weights, configuration files, and documentation.
See which devices can run this model and at what quality level.
The smallest member of the Qwen3-Embedding series, fine-tuned from Qwen3-0.6B-Base via the same three-stage contrastive pre-training, supervised fine-tuning, and model-merging pipeline as its larger siblings. It supports 100+ languages, 32K context, instruction-aware queries, and Matryoshka output dimensions, making it a popular choice for edge and resource-constrained deployments.
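Two of the features above can be sketched in code: instruction-aware queries (Qwen3-Embedding expects task instructions prepended to the query side only, using an "Instruct: ... / Query: ..." template) and Matryoshka output dimensions (truncating the embedding to a shorter prefix and re-normalizing). This is a minimal sketch of the client-side pieces; the exact template wording follows the Qwen3-Embedding usage notes, and the helper names are illustrative, not part of any official API.

```python
import math

def format_query(task: str, query: str) -> str:
    """Prepend a task instruction to the query; documents are embedded as-is."""
    return f"Instruct: {task}\nQuery: {query}"

def truncate_matryoshka(vec: list[float], dim: int) -> list[float]:
    """Keep the first `dim` components of a Matryoshka embedding and
    re-normalize to unit length so cosine similarity still works."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

# Example: shrink a (toy) embedding from 3 dims to 2.
q = format_query("Given a web search query, retrieve relevant passages",
                 "what is matryoshka embedding")
v = truncate_matryoshka([3.0, 4.0, 12.0], 2)
```

In practice the full-size vector would come from the model (e.g. via Ollama's embeddings endpoint), and truncation trades a little retrieval quality for smaller index storage.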