A knowledge-distilled, English-only version of OpenAI's Whisper Large v3, built by Hugging Face. Trained on 98,000 hours of audio with a 'patient' teacher (a longer distillation schedule) and SpecAugment data augmentation, it runs roughly 1.5× faster than Whisper Large v3 Turbo while matching its accuracy.
A solid 0.8B-parameter dense audio model from Hugging Face. Treat the modality benchmarks above as the leading indicator of fit; composite scoring across modalities is still maturing.
Access model weights, configuration files, and documentation.
See which devices can run this model and at what quality level.
Distil-Whisper is Hugging Face's knowledge-distilled version of Whisper, introduced in "Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling" (Gandhi, von Platen & Rush, 2023). The v3.5 release is the latest English checkpoint in the family.
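For context, a minimal transcription sketch using the Transformers ASR pipeline, assuming the checkpoint is published on the Hub as distil-whisper/distil-large-v3.5; the audio path is a placeholder.

```python
# Minimal English transcription sketch (Hub id assumed to be
# "distil-whisper/distil-large-v3.5"; "meeting.wav" is a placeholder path).
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3.5",
    torch_dtype=dtype,
    device=device,
)

# chunk_length_s enables chunked long-form transcription (podcasts, meetings);
# 25 s is the chunk length suggested for the distil-large-v3 family.
result = asr("meeting.wav", chunk_length_s=25, batch_size=8, return_timestamps=True)
print(result["text"])
```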
Production English transcription; on-device and in-browser ASR via ONNX; speculative decoding to accelerate Whisper Large v3; long-form podcast and meeting transcription.
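A hedged sketch of the speculative-decoding use case, following the pattern documented for Distil-Whisper in Transformers: the distilled checkpoint drafts tokens and full Whisper Large v3 verifies them, so outputs match Large v3 exactly at higher speed (roughly 2× in the Distil-Whisper paper). Hub ids and the audio path are assumptions.

```python
# Speculative decoding sketch: distil-large-v3.5 drafting for Whisper Large v3
# (Hub ids assumed; "sample.wav" is a placeholder path).
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Verifier: the full teacher model whose outputs we want to match.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=dtype, low_cpu_mem_usage=True
).to(device)
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")

# Draft: the distilled checkpoint proposes tokens cheaply; the teacher
# accepts or rejects them, preserving Large v3 accuracy.
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    "distil-whisper/distil-large-v3.5", torch_dtype=dtype, low_cpu_mem_usage=True
).to(device)

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    generate_kwargs={"assistant_model": assistant},
    torch_dtype=dtype,
    device=device,
)

print(asr("sample.wav")["text"])
```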