How Does Resound Detect Filler Sounds?

Learn how Resound uses machine learning to find your umms and ahhs


Resound uses proprietary machine learning models to process your audio and determine whether you said a filler sound (umm, ahh, etc.).

Our filler sound model is complex software that analyzes audio waveforms to find patterns, then outputs the exact times at which it believes a filler sound exists.
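Resound's actual model and output format are proprietary, so the sketch below is purely illustrative: it shows the general shape of what a filler-sound detector produces, a list of time ranges (in seconds) where the model believes a filler sound occurs, and how an editor could turn high-confidence detections into a cut list. All names and values here are hypothetical.

```python
# Hypothetical detector output: time ranges (in seconds) where a
# filler sound is believed to exist, each with a confidence score.
filler_detections = [
    {"start": 3.42, "end": 3.78, "label": "um", "confidence": 0.97},
    {"start": 12.05, "end": 12.51, "label": "uh", "confidence": 0.91},
]

def detections_to_cut_list(detections, min_confidence=0.9):
    """Keep only high-confidence detections as (start, end) cut ranges."""
    return [(d["start"], d["end"]) for d in detections
            if d["confidence"] >= min_confidence]

print(detections_to_cut_list(filler_detections))
```

A confidence threshold like the one above is one common way a tool can trade off false alerts against missed edits.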

We are continually testing and improving our machine-learning models by retraining them. In addition, we will be building different models to solve different specific problems down the road (ex: detect breaths, detect filler words, etc).

Filler sound detection history

Below is an overview of the current version and past versions of the filler sound machine learning models. You can keep up with future changes by reading our Release Notes.

V1.0 - February 2024

After 4 years of research and training, Filler Sound Detection V1 is here! This new machine learning model has reached nearly a 90% acceptance rate among test users (the percent of edits users accept).

Translation? Filler Sound Detection is insanely accurate.

You can expect fewer false alerts, more precise boundaries, and a model smart enough to leave in some ums it anticipates you'll want to keep (gotta stay authentic, right?).

90% is extremely high, but this doesn’t tell the whole story.

We’ve reached a level so accurate that even pro audio engineers cannot consistently agree on how to properly edit the last 10% of edits.

Next, we’ll add a feature that cuts all the accurate edits automatically so you just have to review the harder 10%. It will be MUCH faster.

Here's the story of our previous models...

V0.7.1 - November 2023

When we released V0.7 it was our most accurate model to date, but it was also much slower, which caused some users' projects to fail. We optimized the model to be 4x faster, which resolved the bugs related to failing projects. You can now enjoy all the new accuracy without the bugginess.

V0.7 - September 2023

The number of missed edits decreased by 70%, and false alerts, like marking the word “of” as a filler sound, decreased by 38%.

V0.6 - July 2023

Recognized even more filler sounds that were previously missed, and improved the accuracy of edit boundaries (start and end times).

V0.5.1 - June 2023

Greatly reduces the number of incorrect edits. That means you should have fewer edits to review, and fewer edits marked Keep. You’ll also notice more accurate edit boundaries (start and end times).

V0.4 - March 2023

Improves the accuracy of filler sounds detected (e.g. umm) while also reducing the number of filler words (e.g. you know) detected. This subtle change sets Resound up to give you even more fine-tuned control over your edits in the near future.

V0.3.0 - January 2023

Version 0.3.0 introduced breath detection to the app as a byproduct of how we built it. We heard from users that these breaths were more of a distraction than a value-add, so we removed them. Now you can upload audio and enjoy the same overall performance of filler sound detection without the distraction of reviewing breaths.

V0.2.0 - November 2022

We’ve refactored our entire tech stack. This ML model increases the accuracy of the boundaries in each edit, recognizes distracting breath sounds, and makes predictions in seconds rather than minutes.

R&D Models - 2020 to 2022

Resound was first developed as an internal tool at Resonate Recordings, a boutique podcast production company. We spent nearly two years in R&D testing various machine learning models, iterating, and learning from our mistakes. This time taught us many things, including how to prepare our datasets so we could use them to generate highly accurate edits, just like a human.

Machine Learning FAQs

What is machine learning?

Machine learning is a subfield of the larger discipline of artificial intelligence. In short, it’s software that analyzes data to identify meaningful patterns and self-improve toward a pre-determined goal (like finding umms and ahhs in audio).
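To make that definition concrete, here is a toy illustration of the idea (not Resound's model): instead of hard-coding a rule, the program adjusts a duration threshold from labeled examples until it best separates filler sounds from regular words. The data and threshold search are entirely made up for illustration.

```python
# Hypothetical labeled training data: (duration_in_seconds, is_filler).
examples = [(0.50, True), (0.45, True), (0.60, True),
            (0.20, False), (0.15, False), (0.30, False)]

def accuracy(threshold):
    """Fraction of examples classified correctly by 'filler if long enough'."""
    correct = sum((dur >= threshold) == label for dur, label in examples)
    return correct / len(examples)

# "Learning": search candidate thresholds and keep the best-scoring one.
best = max((t / 100 for t in range(100)), key=accuracy)
print(best, accuracy(best))
```

Real models learn millions of parameters over waveform features rather than one threshold, but the loop is the same: measure performance against a goal, then improve.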

Is my data private and secure in Resound?

Resound respects your privacy and security online. Read a detailed summary of what Resound does and what you agree to when using our services in our Privacy Policy and Terms of Service.

New around here? Start editing your podcast with AI for FREE!

Still have questions after reading? Submit a ticket in the messenger below.
