I have used AA2 on a few different drum recordings now with mixed results, and there is something I cannot quite understand.
Let’s say I have a fairly traditional drum kit with close mics, overheads and room mics.
My understanding of AA2 is this (as one example):
Let's say the room mics are about 10 ft from the kit. Does AA2 'slide' the room mics 'forward' to be in time with the initial transient of the mic designated as the key time, which does not move?
I understand it does more than just this alone, but is this part of what it is doing, and if so why would I want this?
It is the delayed signal arriving at the more distant mic that makes a room mic a room mic. That pre-delay is a huge part of what makes a room mic usable and interesting and, well, what it is.
Moving them in time also stacks the accumulated bottom end of every mic on top of the others, giving me far more low-end energy than I need or want, or, more importantly, than I was monitoring while tracking. I don't like the dryness this imparts on what was a roomy-sounding recording to start with.
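For context, the amount of natural pre-delay being discussed is easy to estimate from the mic distance. A minimal sketch, assuming roughly 1125 ft/s for the speed of sound and the 10 ft / 48 kHz figures used as examples in this thread:

```python
# Rough time-of-flight for a room mic, assuming ~1125 ft/s speed of sound.
# The 10 ft distance and 48 kHz rate are just illustrative example values.
SPEED_OF_SOUND_FT_S = 1125.0

def mic_delay_ms(distance_ft: float) -> float:
    """Acoustic delay in milliseconds for a mic at the given distance."""
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

def delay_samples(distance_ft: float, sample_rate: int = 48000) -> int:
    """The same delay expressed in whole samples at the given sample rate."""
    return round(distance_ft / SPEED_OF_SOUND_FT_S * sample_rate)

print(round(mic_delay_ms(10.0), 1))  # ~8.9 ms of natural pre-delay
print(delay_samples(10.0))           # ~427 samples at 48 kHz
```

So a 10 ft room mic carries roughly 9 ms of pre-delay, which is the amount an aligner would remove if it slid the track fully forward.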
Also, I can run AA2 over the exact same recording three or four times (or more), and every time I do I get very different results. Sometimes the key time is found to be different, sometimes polarity is flipped, sometimes not, and the sample delay is different every single time. Exact same recording. This makes me not want to trust AA2 at all.
I find myself using it more like v1 now. I will align kick mics to each other, key the snare, tom, and cymbal mics to the snare, and leave any distant mics out of the alignment altogether. I am happier with these results, but it is still hit and miss, and I still get different results every time I do this, so I just have to pick and choose what I feel sounds best.
Yes, this is unfortunately the case. But you're not alone; many of us are keeping an eye out for an update. Until then it's sadly going to collect dust.
That said, Auto-Align Post 2 is a great tool for those who need precise control over what's going on. Being an AudioSuite process, you choose exactly what gets treated and how.
I have it adjust the outer kick mic to the internal one, and the bottom snare to the top mic. With overheads and rooms, I turn off time adjustment and only have it correct the phase relationship to the snare.
AAP2 gives you independent control over time, polarity, and spectral phase.
I blindly purchased AA2 assuming it had this functionality.
I understand your issues with getting different results upon multiple analyses. I usually encounter this behavior with room mics on a drum kit in general. I have seen this happen hundreds of times over the years using AA1. I accept that there are going to be multiple solutions for room mics, and I manually select the best sound with the shortest delay to preserve the pre-delay.
What we really need from AA1 or AA2 is the ability to assign a preference, either per channel or overall, that limits the range of solutions to a specific window of time.
That window could be in your preferred units: milliseconds, samples, inches, etc. For example: don't move or delay a channel, or all channels, by more than X milliseconds or X inches. I have wished for this functionality for years.
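The unit handling and clamping such a preference would need is simple to sketch. Everything below is a hypothetical illustration of the requested feature, not any existing Sound Radix parameter or API:

```python
# Hypothetical sketch of the "maximum shift" preference requested above:
# clamp any proposed per-channel shift to a user window, whatever unit it
# came in. Names and numbers are illustrative only.
SPEED_OF_SOUND_IN_PER_MS = 13.5  # ~1125 ft/s expressed in inches per millisecond

def inches_to_ms(inches: float) -> float:
    """Convert a physical distance window to a time window."""
    return inches / SPEED_OF_SOUND_IN_PER_MS

def samples_to_ms(samples: float, sample_rate: int = 48000) -> float:
    """Convert a sample-count window to milliseconds."""
    return samples / sample_rate * 1000.0

def clamp_shift_ms(proposed_ms: float, max_ms: float) -> float:
    """Limit a proposed alignment shift to +/- max_ms."""
    return max(-max_ms, min(max_ms, proposed_ms))

# E.g. never move a room mic more than 2 ms, even if analysis suggests 9 ms:
print(clamp_shift_ms(9.0, 2.0))      # 2.0
print(round(inches_to_ms(27.0), 1))  # a 27-inch window is about 2.0 ms
```

The point of the clamp is that the analyzer could still pick the best solution, just restricted to the window the user trusts.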
When AA2 came out, I was so disappointed that it did not have anything like this. That's partly my fault for never providing any feedback. But I am also disappointed that no attempt was made to contact users for surveys of potential new features. Maybe there was and I was not asked, but I am a very long-time user of AA1, specifically with 12 to 14 channels of drums.
I haven't tried the AA Post version. It sounds like it has the features I am looking for?
I’ve been pushing for this for a while, so +1
I am really quite surprised and annoyed by this.
I learned very early on in my career that moving room or distant mics forward in time is a bad idea in most scenarios, so why implement it in a program and claim it as a good idea?
One of Sound Radix's selling/marketing points for AA2 is multi-mic drum recordings, so it's not like it's just a customer misunderstanding here either.
At the very least, an option to turn that feature off (since other products of theirs can do this) would be better. It seems absurd, and a huge oversight on their part, to ignore this and force all mics into physical time.
I could understand if the room mics moved a SMALL amount in either direction (I would prefer further away rather than closer, though); then I would have got what I thought I was paying for. But not this.
The fact that I get different results each time I run it annoys me just as much. I ran it three times on a basic kit setup: the first time the key time was found to be the kick, the second time the snare, and the third time the floor tom!?!
I won’t be using this plugin again until this is addressed.
I’m just reacting on this point ^^
Where does this ‘rule’ come from? With respect it doesn’t make sense and the only guide should be your creative goals.
Some want room mics aligned to increase presence and density, while others want them pushed back to create an enlarged space effect.
Within AA2 you just have to leave the room mics out of the group, or choose the settings that minimize the delay applied to these tracks.
All good and it is a fair question to ask, no worries.
Obviously there are no real rules in audio, but that 'there are no rules…' line is often the first one thrown out when someone states a fairly strong opinion. I did not say no one should ever do it, just that I learned I did not like the effect of doing it, so I stopped doing it.
It does make more sense when you understand the point of a room microphone in respect to how our ears/brain perceive sound in an acoustic environment.
Have you seen the settings in reverb plugins for early reflections and pre-delay? They play on the same principle of delay between the sound source and its audible reflections that I am referring to.
Our ears cannot detect less than roughly 6-12 ms of delay. This means that for a room mic to be audible and, more importantly, discernible as a room mic from the other close mic(s) used, it needs a certain amount of delay relative to the closest sound source/microphone. If it is less than that, your brain cannot 'hear' it as a different sound source and will combine the two into a single source, negating the 'room' part of the sound.
Your brain can tell how large a room is by the amount of time it takes for a sound to return to your position. This means you can manipulate the apparent size of a room by delaying the room mic further.
Try it yourself: delay a room mic by a few more samples at a time. As the mic gets 'further away', the room appears to get bigger; conversely, as the mic is brought closer, the room appears to get smaller.
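The experiment above can be sketched in a few lines. This is a minimal illustration, assuming ~1125 ft/s for the speed of sound and 48 kHz audio, where delaying a track by whole samples stands in for nudging the room mic in a DAW:

```python
# Sketch of the "push the room mic further away" experiment: delay a mono
# track by whole samples. At 48 kHz, roughly 43 samples correspond to about
# 1 ft of extra distance (assuming ~1125 ft/s speed of sound).
def delay_track(track: list[float], samples: int) -> list[float]:
    """Delay a mono track by prepending silence (output keeps the same length)."""
    return [0.0] * samples + track[:len(track) - samples]

def samples_to_feet(samples: int, sample_rate: int = 48000) -> float:
    """How much 'further away' a given sample delay makes the mic appear."""
    return samples / sample_rate * 1125.0

room = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0]  # toy transient, not real audio
print(delay_track(room, 2))          # [0.0, 0.0, 1.0, 0.5, 0.25, 0.0]
print(round(samples_to_feet(43), 2)) # ~1 ft of apparent extra distance
```

Nudging in the other direction (removing samples) is what an aligner does when it pulls the mic 'closer'.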
If it is really close, your ear will hear it as early reflections, which are only one part of a reverb tail, so if it is in line with the close mics the room will almost disappear. If the room has a long reverb time (something only a large room would have) but is pulled in line with the close mics, it is going to sound very unnatural to most people, because an acoustic space like that does not exist in real life.
If you want a small-room/early-reflections sound, record in a small, bright room.
If you want a big room sound, record in a large room with the mics further away.
If you then just move those mics closer, you are losing more than you are gaining, most of the time.
If you then just move those mics closer you are losing more than you are gaining, most of the time
Summing correlated signals inherently leads to phase issues and comb filtering. Whether you like it or not is a question of taste, so in the end it all depends on how you use the plug-in. I personally think it's well built in this regard, since it lets you decide which time difference you want to apply between direct-sound contributions whilst minimizing phase cancellation.
if it is in line with the close mics the room will almost disappear
Except in very dry rooms with barely any reflections, I respectfully disagree. Without time alignment, it is true that the sense of space would increase after summing, since the time difference between direct sounds artificially adds a first reflection (which is fine from a creative perspective!). But again, provided your room is not very dry, the end result after summing with time alignment is somewhat closer to what the real room sounds like, since you add IRs captured at different locations without additional delay. In essence you increase the room density whilst minimizing phase cancellation, which is the whole purpose of AA2.
Without time alignment, it is true that the sense of space would increase after summing since the time difference between direct sounds would artificially add a first reflection
I don’t understand your point.
It would not increase; it would be exactly as it was in the room.
Artificially add a first reflection?
A sense of space is not created; it is recreated. It already exists within the room, early reflections included, as they are not artificial. Every acoustic space has them, no matter the size.
By placing the mic further away from the source you will capture less direct sound and more reflected sound as a ratio.
A small, bright room will have very strong early reflections and a short RT60. A bigger room still has some early reflections, quite a lot in fact, but they are less audible compared to the much longer RT60.
Our brain hears them as separate, but our ears hear them combined into one 'sound'.
In a natural environment our brain/ear can interpret this and perceive the lack of transient detail and time delay to know it is further away.
Now, if you move the room mic closer, you are taking away the time in which the early reflections were happening. They were recorded; you just can't hear them as such. You have now created an artificial space, because that phenomenon would never occur in the real world.
In fact, the early reflections were recorded and still exist on the recording, so they must have been moved in time along with the rest of the room mic's signal; they can now appear BEFORE the initial transient if AA2 shifts the signal enough, which it does.
What I am talking about is retaining the natural acoustic environment as it exists in the room.
What you are talking about is creative license to do whatever you please, which is fine, but a totally separate argument from why I started this thread.
after summing with time alignment is somewhat closer to what the real room sounds like since you add IRs captured at different locations without additional delay.
This is not the same, not even a little bit. If you capture an IR from any given space, it automatically includes the pre-delay portion of the signal by virtue of being further away. You don't have to add additional delay to it.
Do you mean from the same space, or any IR from any space? Because there is a big difference.
If you add an IR from a different, unrelated space, they will sum much more easily but, again, you are creating a space that doesn't exist in the real world.
The point I’m trying to make comes from an observation of how our ears perceive room acoustics and I personally am trying to be faithful to that.
Whilst I agree with most of the above when recording with only a single mic or a stereo pair, there's a major caveat to consider in a mixing context (summation) due to the interaction of the direct paths as captured by the microphones. Typical IRs of a proper recording room have this (admittedly slightly idealised) shape:
Credit: http://www.kunstradio.at/VR_TON/texte/12.html
The first thing you'd notice is the energy of the direct path, which is notably higher compared to the early reflections or late reverberation.
Now, this is where it becomes interesting. Since the direct paths of two microphones in the same room are very strong and highly correlated with each other, summing them without correct time alignment will inevitably result in phasing/comb filtering. So in the end we've messed with both the tone (spectral domain) of the instrument and, in the time domain, artificially added a strong extra early reflection that didn't exist in the first place, since the direct path captured by the room mics is integrated by our ear-brain system as part of the room's sound signature. Please have a look at the Altiverb user manual (https://www.audioease.com/altiverb/files/Altiverb-7-manual.pdf.zip), specifically where they mention how the direct path can clash with the dry sound (see page 13), which is why it is switched off by default.
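The comb filtering described here is easy to quantify. A rough sketch, assuming a 1 ms inter-mic delay as an example (about 1.1 ft of path difference): summing a signal with a copy delayed by dt puts nulls at odd multiples of 1/(2*dt).

```python
# Sketch of comb filtering from summing a signal with a delayed copy of
# itself. The 1 ms delay below is just an example figure, not a measurement.
import math

def notch_frequencies(delay_ms: float, count: int = 3) -> list[float]:
    """First `count` null frequencies (Hz) of x(t) + x(t - delay)."""
    dt = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * dt) for k in range(count)]

def summed_gain(freq_hz: float, delay_ms: float) -> float:
    """Magnitude of 1 + e^{-j*2*pi*f*dt}: gain of the summed pair at freq_hz."""
    phase = 2 * math.pi * freq_hz * delay_ms / 1000.0
    return abs(complex(1.0, 0.0) + complex(math.cos(-phase), math.sin(-phase)))

print(notch_frequencies(1.0))             # [500.0, 1500.0, 2500.0]
print(round(summed_gain(500.0, 1.0), 6))  # ~0: full cancellation at the notch
print(round(summed_gain(1000.0, 1.0), 6)) # 2.0: full reinforcement (+6 dB)
```

This is why even a small misalignment between strongly correlated direct paths audibly changes the tone: the notches start well inside the musical range.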
Whether you like this effect or not is again a question of taste and although all options are possible and respectable to serve the music, there’s a chance that when mixing you would want to keep the body / clarity of the dry sound whilst adding the contribution of the room with minimum artefacts, namely minimal change in tone of the instruments and respect of the room sound signature. This is of course particularly true for modern music productions.
In fact, this is more or less what AAx does when time-aligning the direct paths between mics. Since you cannot get rid of those direct paths (they are captured by the mic), time alignment helps a lot by decorrelating the room contribution from the direct paths, which are now in sync.
All of which sounds bad to my ear.
I personally don’t like this effect, I started this thread with my opinion stated very clearly.
I still stand by my opinions, as you stand by yours.
The ideas you have brought up are not without consequences for the sound either, which is why I personally don't like the effect of moving the room mics so far forward in time.
You do you, my mind is made up.
I'm really surprised you can't see the difference between taste and science. I repeated several times that what counts in the end is our choices to best serve the music, so amen to your opinion and mine. However, when it comes to the science (which you addressed in the first place), it's no longer a question of opinion.
Don’t align the room mics to the rest of the kit. Align the room mics to each other instead.