I made a truly amazing personal discovery once I realised that our ear-brain combo is the most sophisticated piece of audio equipment on the market (make sure you look after it and value it).
Vibrations travelling through the air end up stimulating one or more of the roughly 25,000 “trigger” cells we have in our ears, more precisely in an inch-long chamber known as the organ of Corti. Each of these triggers is connected to the brain, where the received stimuli are further analysed. If you could see a real-time brain scan, you would see different parts of the brain lighting up (we will leave this discussion for another time) at different times as stimuli are processed and interpreted.
That Christmas-tree lighting event hopping around your grey matter ends up processing things like tone, time, phase, levels and a few other variables, which in turn give you the sensation of sound, making you experience the pleasure of music, or rather the pleasure of its anticipation.
Furthermore, the combination of all these variables analysed at the same time ends up giving you some truly important information, like the location of the source in terms of distance and relative angle, which translates into depth and staging of a sound or sounds within a sound field. The entire wonder (our perception of sound staging) is based on our brain’s capacity for comparing data between the left and the right ear. If you close one ear, you essentially lose a big chunk of that ability.
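That left/right comparison boils down to tiny arrival-time (and level) differences between the ears. As a rough back-of-the-envelope sketch, here is the simplest interaural time difference model in Python; the 21.5 cm ear spacing is an assumed typical value, and head shadowing is ignored:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C
EAR_SPACING = 0.215     # m, an assumed typical ear-to-ear distance

def interaural_time_difference(angle_deg):
    """Path-length difference between the ears for a distant source
    at angle_deg off centre (0 = straight ahead), in seconds.
    Simplified model: ignores the head's acoustic shadow."""
    angle = math.radians(angle_deg)
    return EAR_SPACING * math.sin(angle) / SPEED_OF_SOUND

# A source 30 degrees off to one side arrives about a third of a
# millisecond earlier at the nearer ear -- and that is plenty for
# the brain to place it in the sound field.
print(round(interaural_time_difference(30) * 1000, 3))  # -> 0.313 (ms)
```

Fractions of a millisecond are all the brain needs; this is exactly the kind of information a single microphone throws away.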
What does this have to do with microphones?
Every time we place a single microphone in front of a source, in essence, we close an ear. We lose (to a great extent) the intrinsic depth of the sound within its sound field; we also lose part of its tone, as different parts of the instrument contribute differently to its acoustic tone. Unfortunately, by using just one microphone quite close, we end up focusing only on a portion of its natural tone.
Using two microphones at specific angles and distances from each other creates a pick-up similar to what our two ears do naturally, because the two microphones end up capturing the phase and time differences of the same sound, which can then be conveyed to our ears.
It goes without saying that this is not always practical or logistically viable (or even justifiable, because in most cases, live events run mono anyway). Still, talking about microphone positioning without mentioning stereo techniques would be like eating pasta without knowing or experiencing the undeniable taste of freshly grated parmesan on it. Can one eat pasta without parmesan? Absolutely (for the record, some pasta types do not need or require parmesan), however never having tasted parmesan would imply that you never truly experienced pasta, or food.
Listen to this short guitar clip; we used a technique called DIN.
Listen to this grand piano; we used a technique called A-B.
Listen to this Hammond Leslie; we used a technique called ORTF for the top of the cabinet.
Listen to this wider-than-life piano; we used a technique called M-S.
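The M-S (mid-side) pair behind that last clip records a forward-facing “mid” mic and a sideways figure-of-eight “side” mic, and the stereo image comes out of a simple sum-and-difference decode. A minimal sketch in plain Python, with made-up sample values for illustration:

```python
def ms_decode(mid, side, width=1.0):
    """Decode a mid-side recording into left/right channels.
    width scales the side signal: 0.0 collapses the image to mono,
    values above 1.0 widen it (hence that wider-than-life piano)."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

mid = [0.5, 0.25, -0.5]    # front-facing mic (illustrative samples)
side = [0.25, -0.25, 0.0]  # sideways figure-of-eight mic

left, right = ms_decode(mid, side)
print(left, right)  # -> [0.75, 0.0, -0.5] [0.25, 0.5, -0.5]
```

The `width` knob is the practical attraction of M-S: you decide how wide the image is after the recording, not at the session.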
All the above are totally unprocessed sounds: no EQ, no reverb, no compression. Some of the clips came out of great preamps and others didn’t. The story is always the same: natural beauty before you even start mixing!
It is worth mentioning that stereo techniques that sum well to mono are also helpful in live and in broadcast scenarios. In my book I mention that if you do not know how to take advantage of stereo techniques, you are definitely poorer for it.
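Mono compatibility is easy to see with an M-S pair: fold the decoded left/right channels back to mono and the side mic cancels completely, leaving exactly the mid signal, with no comb filtering or level surprises. A tiny sketch with illustrative sample values:

```python
# Hypothetical samples from a mid-side recording (dyadic values so the
# arithmetic below is exact in floating point).
mid = [0.5, 0.75, -0.25, 0.125]
side = [0.125, -0.25, 0.5, 0.0]

# Standard M-S decode to left/right...
left = [m + s for m, s in zip(mid, side)]
right = [m - s for m, s in zip(mid, side)]

# ...then the mono fold-down: (L + R) / 2 cancels the side channel.
mono = [(l + r) / 2 for l, r in zip(left, right)]

print(mono == mid)  # -> True: the side mic vanishes in the mono sum
```

Spaced pairs like A-B do not enjoy this guarantee: their inter-capsule time differences can comb-filter when summed, which is why coincident and near-coincident techniques are the safer choice when a mono feed is likely.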
To find out more, visit the DPA Microphone University, where you will find loads of great tips, among them a number of stereo microphone techniques explained. (http://dpamicrophones.com/mic-university)
Next week we will be concluding our perfect microphone series with a few bits on phasing and “bleed”.
Read the previous article:
THE PERFECT MIC – PART 6: A piece of sky – movement two and three
THE PERFECT MIC – PART 5: A piece of sky – movement one
THE PERFECT MIC – PART 4: Tonic Tone
THE PERFECT MIC – PART 3: The Matching
THE PERFECT MIC – PART 2: The Prelude
THE PERFECT MIC – PART 1