Microsoft's 268M edge-ready multilingual embedder with 32K context.
A strong 268M-parameter dense embedding model from Microsoft. Treat the modality benchmarks above as the leading indicator of fit; composite scoring across modalities is still maturing.
Generated from this model’s benchmarks and ranking signals. Editor reviews refine it over time.
Access model weights, configuration files, and documentation.
See which devices can run this model and at what quality level.
The smallest Harrier variant from Microsoft Research: a 268M-parameter, Gemma 3-based decoder embedding model designed to run on CPU and edge devices. Trained with contrastive learning plus knowledge distillation from larger teachers, it supports 94 languages and 32K-token contexts, and its MIT license permits commercial use.
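As a minimal sketch of CPU-only usage: the snippet below assumes the weights are published under a hypothetical hub id (`microsoft/harrier-268m` is a placeholder, not a confirmed repository name) and that they ship with a sentence-transformers configuration, as Gemma-based embedders commonly do.

```python
# Minimal sketch: multilingual sentence embeddings on CPU.
# NOTE: "microsoft/harrier-268m" is a placeholder hub id (an assumption);
# substitute the model's actual repository name.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("microsoft/harrier-268m", device="cpu")

sentences = [
    "Edge devices benefit from small embedding models.",
    "Les petits modèles d'embedding conviennent aux appareils embarqués.",  # French
]

# encode() returns one dense vector per input; normalizing the vectors
# lets a plain dot product double as cosine similarity.
embeddings = model.encode(sentences, normalize_embeddings=True)

similarity = embeddings[0] @ embeddings[1]
print(f"cross-lingual cosine similarity: {similarity:.3f}")
```

Because the outputs are L2-normalized, similarity reduces to a dot product, which keeps downstream retrieval code cheap on memory-constrained devices.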