Notes on order of speech processing and on automix

published Mar 27, 2019 09:20 by admin (last modified Apr 24, 2019 10:38)

These are just some notes jotted down for my own memory, so not authoritative at all, not even inside my own head yet :)

Remove noise → Compress → Normalize

After watching one of Curtis Judd's videos it was clear that it is a good idea to compress/limit before normalizing: otherwise a few stray peaks in your recording can stop the normalization from raising the overall level enough. I also noted that Judd eyeballs the left-hand dB scale to decide where to put the compressor's knee. That makes sense too.
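To see why, here is a little NumPy sketch. The levels, the single-sample "peak" and the crude hard-clip limiter are all made up for illustration; they are not Judd's settings:

```python
import numpy as np

def peak_normalize(x, target_db=-1.0):
    """Scale the signal so its highest peak sits at target_db dBFS."""
    peak = np.max(np.abs(x))
    gain = 10 ** (target_db / 20) / peak
    return x * gain

def hard_limit(x, ceiling_db=-6.0):
    """Crude hard limiter: clip anything above the ceiling. Real limiters
    use attack/release smoothing; this just illustrates the ordering."""
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x, -ceiling, ceiling)

rng = np.random.default_rng(0)
speech = 0.1 * rng.standard_normal(48000)  # stand-in for a quiet voice track
speech[10000] = 0.9                        # one stray peak (a plosive, a bump)

direct = peak_normalize(speech)
limited_first = peak_normalize(hard_limit(speech))

# The single peak eats almost all of the available gain:
print("RMS, normalize only:       %.4f" % np.sqrt(np.mean(direct ** 2)))
print("RMS, limit then normalize: %.4f" % np.sqrt(np.mean(limited_first ** 2)))
```

With the stray peak left in, the normalizer can add almost no gain; tame the peak first and the whole track can come up much further.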

And completely on my own I realized that noise reduction should go first in the chain. Chances are your noise sits at a fairly constant level, and it's easier to remove it before compression has introduced a varying noise floor.
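A toy numeric example of that varying noise floor. The threshold, ratio and makeup gain are just numbers I picked:

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=10.0):
    """Static gain curve of a simple downward compressor, in dB."""
    if level_db <= threshold_db:
        return makeup_db
    reduction = (level_db - threshold_db) * (1 - 1 / ratio)
    return makeup_db - reduction

noise_floor_db = -60.0
for program_db, label in [(-60.0, "pause (noise only)"),
                          (-15.0, "quiet speech"),
                          (-5.0, "loud speech")]:
    g = compressor_gain_db(program_db)
    print("%-20s gain %+5.1f dB -> noise floor at %6.1f dBFS"
          % (label, g, noise_floor_db + g))
```

In the pauses the noise comes up by the full makeup gain, while under loud speech it gets pulled back down, so a noise-reduction pass run after the compressor has to chase a moving target.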

Automix

Finally in this post: I have seen that some newer digital recorders and mixers have "automix", that is, they mute the microphones of those who are not talking. I think this would be a great feature to have in post-production! Imagine having six tracks and just telling the software to automix them. I guess that exists somewhere. Otherwise it would be fun to write an algorithm for it, for post-production: you could do voice detection (is this a voice?), amplitude detection, reverberation detection, phase and distance detection (as in a microphone array) and so on.
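Here is a rough sketch of just the amplitude-detection part, to show the shape of such an algorithm. The frame size and duck depth are arbitrary picks, and real automixers (such as Dan Dugan's gain-sharing design) are much smarter than this:

```python
import numpy as np

def automix(tracks, frame=1024, duck_db=-40.0):
    """tracks: 2-D array of shape (n_tracks, n_samples). Per short frame,
    keep only the loudest track open and duck the rest. Returns a mono mix."""
    n_tracks, n_samples = tracks.shape
    duck = 10 ** (duck_db / 20)
    out = np.zeros(n_samples)
    for start in range(0, n_samples, frame):
        chunk = tracks[:, start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2, axis=1) + 1e-12)
        gains = np.full(n_tracks, duck)
        gains[np.argmax(rms)] = 1.0  # only the loudest mic stays open
        out[start:start + frame] = gains @ chunk
    return out

# Two fake "mics": speaker A talks in the first half, speaker B in the second.
rng = np.random.default_rng(1)
a = np.concatenate([0.3 * rng.standard_normal(24000),
                    0.01 * rng.standard_normal(24000)])
b = np.concatenate([0.01 * rng.standard_normal(24000),
                    0.3 * rng.standard_normal(24000)])
mix = automix(np.stack([a, b]))
```

A real version would at least smooth the gain changes to avoid clicks at frame boundaries, and would combine this with the voice/reverb/phase detectors mentioned above.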

Update 2019-04-24

Julian Krause has just released a video with his workflow, and it's a bit different: https://www.youtube.com/watch?v=GVovXsbCjjU

He normalizes the audio first to a certain LUFS level, then removes rumble and proximity effect with high-pass filters, then applies a compressor and a noise gate. What is most interesting to me here is that he sets the compressor's attack to 1 ms and the release to, I think it was, 100 ms. Those settings are on the shorter side, but he argues they make sense for voice. Gotta try that.
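For reference, this is roughly what a compressor envelope follower with a 1 ms attack and 100 ms release looks like. The one-pole smoothing form and the threshold/ratio values are my own assumptions, not taken from the video:

```python
import numpy as np

def envelope(x, fs=48000, attack_ms=1.0, release_ms=100.0):
    """One-pole peak follower: fast rise (attack), slow fall (release)."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > level else a_rel
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return env

def compress(x, fs=48000, threshold_db=-18.0, ratio=3.0):
    """Apply gain computed from the followed envelope (illustrative values)."""
    env_db = 20 * np.log10(envelope(x, fs) + 1e-9)
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)
    return x * 10 ** (gain_db / 20)
```

The 1 ms attack means the gain clamps down on transients almost immediately, while the 100 ms release lets it recover between words and syllables.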