Tencent's SOTA multilingual embedder, fine-tuned from Gemma 3 12B, #1 on MMTEB as of November 2025.
A solid 11.8B-parameter dense embedding model from Tencent. Treat the per-benchmark scores above as the leading indicator of fit; composite scoring that averages across modalities is still maturing.
Generated from this model’s benchmarks and ranking signals. Editor reviews refine it over time.
Access model weights, configuration files, and documentation.
See which devices can run this model and at what quality level.
Tencent's 11.8B embedding model, fine-tuned from Gemma 3 12B using the KaLM-Embedding V2 recipe with Matryoshka Representation Learning across seven embedding dimensions (3840 down to 64). Ranked #1 on MMTEB at release with a 72.32 mean task score, ahead of Qwen3-Embedding-8B and gemini-embedding-001. Supports both sentence-transformers and vLLM via a CausalLM branch.
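Matryoshka Representation Learning trains the model so that prefixes of the full embedding are themselves usable embeddings: to use a smaller dimension, you keep the first k components and re-normalize. A minimal sketch of that truncation step, using a random NumPy vector as a stand-in for the model's 3840-d output (the `truncate_embedding` helper is illustrative, not part of the model's API):

```python
import numpy as np

# Stand-in for one full 3840-d embedding from the model
# (random here; a real vector would come from the encoder).
rng = np.random.default_rng(0)
full = rng.normal(size=3840)

def truncate_embedding(vec, dim):
    """Matryoshka truncation: keep the first `dim` components, then L2-normalize
    so cosine similarity remains meaningful at the reduced dimension."""
    sub = vec[:dim]
    return sub / np.linalg.norm(sub)

# The seven advertised Matryoshka dimensions, largest to smallest.
for dim in (3840, 2048, 1024, 512, 256, 128, 64):
    emb = truncate_embedding(full, dim)
    print(dim, emb.shape)
```

In practice the same effect is exposed through embedding-library options that truncate output dimensions at encode time; the point is that no retraining is needed to trade accuracy for storage.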