Sub-1B multilingual embedder; teacher for the F2LLM-v2 80M/160M/330M distilled siblings.
The smallest non-pruned member of CodeFuse-AI's F2LLM-v2 family, fine-tuned from Qwen3-0.6B-Base on the family's open 60M-sample multilingual corpus. It also serves as the parent model from which the pruned/distilled 80M, 160M, and 330M variants are derived.