A knowledge-distilled, English-only version of OpenAI's Whisper Large v3, released by Hugging Face. Trained on 98,000 hours of audio with a 'patient' teacher (a longer distillation schedule with aggressive data augmentation, including SpecAugment), it runs roughly 1.5× faster than Whisper Large v3 Turbo while matching its accuracy.
Distil-Whisper is Hugging Face's knowledge-distilled version of Whisper, introduced in Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling (Gandhi, von Platen & Rush, 2023). The v3.5 release is the latest English checkpoint in the family.
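As a quick orientation, here is a minimal transcription sketch using the transformers pipeline API. The checkpoint name distil-whisper/distil-large-v3.5 comes from the family naming above; the audio filename is a placeholder, and the chunking values are illustrative rather than tuned.

```python
# A minimal sketch, assuming the distil-whisper/distil-large-v3.5 checkpoint
# and `pip install transformers torch`; "sample.wav" is a placeholder path.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3.5",
    torch_dtype=dtype,
    device=device,
)

# For long-form audio (podcasts, meetings), chunked inference splits the input
# into overlapping windows; 25 s is the chunk length the Distil-Whisper model
# cards suggest for this family.
result = asr("sample.wav", chunk_length_s=25, batch_size=8)
print(result["text"])
```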
Typical use cases: production English transcription, on-device and in-browser ASR via ONNX, speculative decoding to accelerate Whisper Large v3, and long-form podcast or meeting transcription (see the sketches below).
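For speculative decoding, a hedged sketch following the pattern documented for earlier Distil-Whisper checkpoints: the distilled model drafts tokens and the full openai/whisper-large-v3 model verifies them, so the output matches Large v3 exactly while running closer to distilled-model speed.

```python
# A sketch of speculative decoding: the distilled model acts as the draft
# ("assistant") model for openai/whisper-large-v3, which verifies its tokens.
# "sample.wav" is a placeholder path.
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Draft (assistant) model: the distilled checkpoint.
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    "distil-whisper/distil-large-v3.5", torch_dtype=dtype
).to(device)

# Target model: the full Whisper Large v3.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=dtype
).to(device)
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    generate_kwargs={"assistant_model": assistant},
    torch_dtype=dtype,
    device=device,
)

print(asr("sample.wav")["text"])
```

Speculative decoding requires the draft and target models to share a tokenizer, which the Distil-Whisper family satisfies by construction. For the ONNX path, a sketch using Hugging Face Optimum's ONNX Runtime integration (assuming `pip install optimum[onnxruntime]`); browser deployments would instead load the ONNX weights through Transformers.js, but the export itself looks the same server-side:

```python
# A sketch of ONNX inference via Optimum's ONNX Runtime integration.
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import AutoProcessor, pipeline

model_id = "distil-whisper/distil-large-v3.5"

# export=True converts the PyTorch checkpoint to ONNX on the fly.
ort_model = ORTModelForSpeechSeq2Seq.from_pretrained(model_id, export=True)
processor = AutoProcessor.from_pretrained(model_id)

asr = pipeline(
    "automatic-speech-recognition",
    model=ort_model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```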