Compact 1.7B multilingual embedder for resource-constrained deployments.
A compact 1.7B-parameter member of CodeFuse-AI's F2LLM-v2 family, fine-tuned from Qwen3-1.7B-Base on the family's 60M-sample open multilingual corpus using the family's shared two-stage recipe of contrastive training followed by instruction tuning. It targets resource-constrained deployments; the v1 1.7B variant previously led the 1–2B parameter tier on MTEB.
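The contrastive stage of recipes like this one typically optimizes an InfoNCE-style loss with in-batch negatives: each query is pulled toward its paired passage and pushed away from every other passage in the batch. A minimal NumPy sketch of that objective (illustrative only; the function name, temperature, and toy data are assumptions, not the exact F2LLM-v2 loss):

```python
import numpy as np

def info_nce_loss(q, p, temperature=0.05):
    """In-batch-negative contrastive (InfoNCE) loss over query/passage
    embeddings. q, p: (batch, dim) arrays where q[i] and p[i] form a
    positive pair; every other row of p serves as a negative for q[i]."""
    # L2-normalize so the dot product is cosine similarity.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    logits = q @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
# Near-duplicate pairs should score a much lower loss than random pairs.
aligned = info_nce_loss(q, q + 0.01 * rng.normal(size=(4, 8)))
random_pairs = info_nce_loss(q, rng.normal(size=(4, 8)))
print(aligned < random_pairs)  # True
```

Lowering the temperature sharpens the softmax, penalizing hard in-batch negatives more strongly; embedding-model recipes commonly tune it in the 0.01–0.1 range.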