SOTA sub-1B multilingual embedding model, distilled from a Qwen3-Embedding-4B teacher.
A strong 0.596B-parameter dense embedding model from Jina AI. Treat the modality benchmarks above as the leading indicator of fit; composite scoring across modalities is still maturing.
Jina AI's fifth-generation 677M multilingual text embedding model, built on Qwen3-0.6B-Base and trained via distillation from Qwen3-Embedding-4B with task-specific contrastive losses and four LoRA adapters (retrieval, text-matching, classification, clustering). Supports 119+ languages and 32K-token contexts, with Matryoshka and binary-quantization-friendly outputs.
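A minimal sketch of what Matryoshka and binary-quantization-friendly outputs mean in practice. The vector below is synthetic (random, not produced by the model), and the dimensions are illustrative assumptions: Matryoshka-trained embeddings concentrate information in the leading components, so a prefix slice stays usable after re-normalization, and sign-based binarization packs each component into one bit.

```python
import numpy as np

# Synthetic stand-in for a full-dimension embedding (assumed 2048-d here;
# not the model's actual output dimension).
rng = np.random.default_rng(0)
full = rng.normal(size=2048).astype(np.float32)

def matryoshka_truncate(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components, then re-normalize so cosine
    similarity on the truncated vectors remains well-defined."""
    head = vec[:dim]
    return head / np.linalg.norm(head)

def binary_quantize(vec: np.ndarray) -> np.ndarray:
    """Map each component to a bit by sign, packed 8 per byte
    (a 32x storage reduction relative to float32)."""
    return np.packbits(vec > 0)

small = matryoshka_truncate(full, 256)  # 256-d view of the same embedding
bits = binary_quantize(small)           # 32 bytes instead of 1024
```

Truncation plus binarization is why Matryoshka-style models are attractive for large-scale retrieval: index size shrinks by orders of magnitude while Hamming distance on the packed bits approximates cosine ranking on the full vectors.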