The History of Modern Mixing – Today’s Audio Engineers
Here’s an article on how you can get pro-sounding mixes, just like the ones you hear on the radio and in movies.
Let’s take a look at some professional mixers and the history of the art of mixing.
It’s important to know our roots so we can understand how the art has evolved over the years. In the old days there were no DAWs, audio interfaces, stereo, or editing options. All engineers had was a four-track machine running mono tape, and a studio might own only around five microphones. Processing was limited to EQ, compression, and tape delay. Then new technologies arrived and things started getting digital, a bit too digital. Compare that limited toolkit to today, when thousands of plugins are a download away.
The legendary Eddie Kramer has said he had only a four-track in those days.
That means my sessions, if I were to produce them back then, would take years to complete. Today’s world is digital: music is zeros and ones, often played back at dangerously high levels of compression and low bit rates, so much of the sound quality is gone. On the other hand, we now have amazing-sounding VSTs, which let us do almost anything in our songs.
Remember: you don’t need an expensive studio, acoustic treatment, high-end recording equipment, or paid plug-ins to produce a great mix.
I always say, “Once you’re in the box, that’s when you can think outside of it.”
The Reverbs – The Mud-Making Pigs
I give them that nickname because they are the most terrifying mud-makers in a mix. Here’s what happens:
1: Let’s say we have an acoustic guitar, a vocalist, and a bass guitar.
Okay, so now you set up a reverb on a send and route the acoustic guitar 30% to the reverb, the vocalist 25%, and the bass guitar 30%. What do you get?
Frequencies clashing with each other and building up in amplitude. If your 500 Hz region is all muddied up, sorry, no can do, big boy.
To polish something, first we need to clean it.
Just open up any track in your DAW, put a notch filter at 500 Hz, and cut it by 4.5 dB. You will immediately feel the track become cleaner, polished, and smooth, just like sitting in a Maybach and going on a long drive.
Take good care of the reverb: engage an EQ on the reverb return, dip at 500 Hz, and hear the sound get cleaner with no frequency build-up. You will be able to identify each element’s position just by listening to it.
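To see what that 500 Hz dip does in practice, here is a minimal sketch in Python, assuming NumPy and SciPy are available. The biquad coefficients follow the well-known Audio EQ Cookbook peaking-filter formulas, and the 4.5 dB cut and Q value are just the figures from this section, not a universal recipe:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Audio EQ Cookbook peaking-EQ biquad: returns (b, a) coefficients."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(fs, f0=500.0, gain_db=-4.5, q=2.0)

t = np.arange(fs) / fs
mud = np.sin(2 * np.pi * 500 * t)      # a tone sitting right in the mud zone
cleaned = lfilter(b, a, mud)

# Measure the drop in the steady-state part: the cut lands at about -4.5 dB
rms = lambda x: np.sqrt(np.mean(x ** 2))
drop_db = 20 * np.log10(rms(cleaned[fs // 2:]) / rms(mud[fs // 2:]))
print(round(drop_db, 1))  # ≈ -4.5
```

In a DAW you would do this with the stock EQ; the point is only that a modest cut at 500 Hz measurably drops the energy in the mud zone while leaving the rest of the spectrum nearly untouched.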
The LA Style:
The LA style of mixing evolved in Los Angeles. A mix in this style will usually sound natural, with minimal compression, whether serial or parallel. The mix is clean, with minimal use of effects; what’s noticeable is the reverb, with a bit of delay.
The Nashville Style:
The vocals are given a golden seat in this style; at times the vocal and the instrumentation are set so far apart that they almost sound separate from each other. The guitars, drums, steel, and background vocals are each processed in a unique way.
The London Style:
This style of mix has many layers of stems, resulting in a huge, big-sounding record. I’d like to mention the respected Mr. Trevor Horn here.
10 Tips to a Great Audio-Engineered Mix
This is something everyone in, or entering, music production should know: MIXING CANNOT BE TAUGHT. The reason I stress this is that many of my students look at the way I approach a mix and copy my methods. We are all unique; what my ears hear and what my brain perceives will be very different from what someone else perceives.
There is no one-size-fits-all answer to a great mix, as every non-electronic instrument sounds different. This is due to the harmonics and timbre of the instruments, not to forget the skill of the mixing engineer.
TIP ONE –
Position your monitors accurately:
Whether you’re using passive or active monitors in your studio, even if you spent a couple of thousand on them, if they are positioned inaccurately your mixes will suffer in many ways. Proper positioning will yield the best results, and your mix will translate wherever you play it (assuming you’re mastering the track separately).
Avoid crosstalk at all times. Basically, crosstalk is when two things not meant to be together come together and create havoc, like Bonnie and Clyde. Simply put, the left speaker is placed on the left and the right speaker on its respective side because that is the number of ears we have: two. The sound coming from the left speaker is meant to be heard by the left ear and the sound from the right speaker by the right ear; crosstalk is when each ear also hears the opposite speaker, and it is believed to hamper sound quality. Note that with headphones there is no crosstalk; this phenomenon only occurs with monitors, both powered and unpowered.
Kill the Phase Issues:
Every audio producer or engineer has come across this issue multiple times in their sessions. Phase is a characteristic of a sound wave, which is a vibration of air, with compressions and rarefactions at a certain number of cycles per second, also called Hertz.
You’ve placed a microphone, let’s say an SM57, on top of a snare, and another aimed at the bottom. When the snare drum is hit, the microphone functions just like our ears: the diaphragm moves in and out with the vibration of the air molecules. The rises and falls in the waveform cause the thin, sensitive diaphragm to move forwards and backwards. This creates an electrical signal, which is then amplified by the preamp before it follows the signal flow onwards through the console.
Now let’s record a snare drum with 2 SM57s positioned on the snare top and snare bottom.
What happens now is that the waveform coming from the top mic will be largely opposite to the waveform from the bottom mic, because the two diaphragms face the drum from opposite sides. When two audio waves are completely in phase with each other, there will be a rise in volume; when they are completely out of phase and you play back your session, you will notice that the channel faders show signal but the master meter reads little or nothing, or you hear a very weak-sounding snare.
This happens because when two waves are out of phase, they cancel each other out.
When recording snares with two microphones, invert the polarity of one of the tracks. The invert function is provided in almost any DAW; a simple press of a button will get rid of the phasing issues, and the snare will not lose its character, tonal qualities, or fullness.
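What the invert button does can be sketched in a few lines (a toy example assuming NumPy; real top and bottom snare mics are only approximately opposite, not the perfect mirror used here):

```python
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs
top = 0.8 * np.sin(2 * np.pi * 200 * t)  # snare-top mic signal
bottom = -top                            # bottom mic captures the opposite motion

# Summed as recorded: the two tracks cancel almost completely
cancelled = top + bottom

# Flip the polarity of the bottom track before summing
fixed = top + (-bottom)

print(np.max(np.abs(cancelled)))  # 0.0 — the snare vanishes
print(np.max(np.abs(fixed)))      # ≈ 1.6 — full, reinforced snare
```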
Watch your decibels:
As we know, if two waves are phase-cancelling each other, their sum will result in a decrease in overall volume.
Solo your tracks one by one and cross-check your levels. If there is a decrease in level with any combination, look into phase issues and invert one track to avoid cancellation. Especially when dealing with multiple microphones and instruments, various sources give different results and frequencies, making things even more time-consuming for you. It is hence always advisable to avoid phase issues at the recording stage.
The 3:1 Rule:
Let’s say you’re using two microphones to record a kick drum, for instance an AKG D112 and an SM58. Place the second microphone at least three times as far from the source as the first microphone is from it.
If you want to go deep and do surgical-level correction, zoom into the two tracks in your digital audio workstation and nudge one of the tracks by a few milliseconds (or samples) until the waveforms line up, and the phase issues will be solved.
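Here is a hypothetical sketch of that nudge in code, assuming NumPy: cross-correlate the two tracks to find the lag in samples, then slide the late track back by that amount. A DAW does the same thing when you drag a clip while zoomed in to sample level.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(0)
close_mic = rng.standard_normal(2048)  # stand-in for the close-mic track

delay = 30                             # the far mic arrives 30 samples late
far_mic = np.concatenate([np.zeros(delay), close_mic])[:len(close_mic)]

# Cross-correlate to find how many samples the far mic lags the close mic
corr = np.correlate(far_mic, close_mic, mode="full")
lag = int(np.argmax(corr)) - (len(close_mic) - 1)
print(lag)  # 30

# "Nudge": slide the far track earlier by `lag` samples so the peaks line up
aligned = np.concatenate([far_mic[lag:], np.zeros(lag)])
```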
Sum in Mono:
Phasing issues occur when tracks carry instruments positioned such that they cancel each other out once summed together. So a really easy way to look for phasing issues is to play the mix in mono (mono summing) and listen for anything that drops in amplitude; if something does, go back and get that phase sorted out. Reverb is also something that can cause cancellations.
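Here is a toy mono-sum check, assuming NumPy: a “wide” pad created by flipping its polarity in one channel sounds big in stereo but disappears entirely when the channels are summed, which is exactly what this listening test is meant to catch.

```python
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs
pad = np.sin(2 * np.pi * 110 * t)        # pad "widened" by a polarity flip
vocal = 0.5 * np.sin(2 * np.pi * 440 * t)

left = vocal + pad
right = vocal - pad                      # pad is inverted in the right channel

mono = 0.5 * (left + right)              # mono summing: the pad cancels completely

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(round(rms(left), 3))   # ≈ 0.791 (vocal + pad)
print(round(rms(mono), 3))   # ≈ 0.354 (vocal only; the pad vanished)
```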
Tweak your studio:
The room you mix in makes a big difference; it should be treated acoustically and soundproofed. If you don’t have the budget that big studios have, there are many do-it-yourself options.
1- Put a square metre of acoustic foam on the side walls on both sides of your listening position. This will help get rid of flutter echo, reduce reflections, and give better stereo imaging.
Then come the reflections from the rear wall. These will ruin the stereo image, resulting in muddy mixes; sometimes the low end builds up, and at other listening positions it cancels, which results in thin mixes. It’s important to have your room treated to get clean mixes.
Don’t Let the Sub Bass Fight:
Many mixes out there are destroyed by too many oversaturated, unnecessary sub-bass channels. Every instrument has its fundamental frequency; the frequencies above it are called overtones, and the ones beneath the fundamental are undertones. This means every instrument takes up considerable space in the frequency spectrum. Here’s where we use frequency pocketing.
In this scenario, let’s say we have a kick and a bass line. When we put the kick through a spectrum analyzer, we see that the kick is more prominent, with its fundamental at 63 Hz; the bass sits lower, with its fundamental somewhere around 45 Hz.
In order to avoid frequency clutter, we can take a notch-style EQ and make a dip around 63 Hz on the bass, so as to make room for the kick’s fundamental in the mix. You can also engage a high-pass filter (HPF) at around 45 Hz on the kick, as its undertones may end up cluttering the bass’s sub region as we go on adding new tracks.
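A minimal sketch of the high-pass side of this, assuming NumPy and SciPy (the 45 Hz cutoff and the filter order are illustrative, not a rule):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# 4th-order Butterworth high-pass at 45 Hz, e.g. to keep a kick's
# undertones out of the bass's sub region
sos = butter(4, 45, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs
sub_rumble = np.sin(2 * np.pi * 30 * t)   # below the cutoff: gets attenuated
kick_thump = np.sin(2 * np.pi * 63 * t)   # kick fundamental: passes through

rms = lambda x: np.sqrt(np.mean(x ** 2))
low_ratio = rms(sosfilt(sos, sub_rumble)) / rms(sub_rumble)
high_ratio = rms(sosfilt(sos, kick_thump)) / rms(kick_thump)
print(round(low_ratio, 2))   # heavily reduced
print(round(high_ratio, 2))  # nearly untouched
```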
The new generation of equalizers comes with many features that weren’t even thought of in the old days. Many new EQs have built-in spectrum analyzers, which is very helpful and saves time. I remember when I had to load a separate spectrum analyzer, then the EQ, and so on; the combined tools also save CPU resources.
When EQing, make sure that you don’t boost too much. I usually do not boost above 3 dB, although feel free to cut to your heart’s content.
Frequency pocketing will make your mix sound clean, crisp, and polished; remember, “less is more”. Use spectrum analyzers to find the clashing frequencies, and A/B against reference mixes. Pay close attention to the low and low-mid range, as they have a higher tendency to clash.
It is not uncommon to reach about 50 tracks in a song. Let’s say the drum kit was recorded with 12 microphones: kick, snare, hi-hat, crash, ride, and room tracks, where the kick may have two microphones and the snare has one on top and one on the bottom.
Let’s say you want to compress the whole drum kit together without compressing the kick, snare, hi-hats, crash, and other elements individually. Make a group of the drum tracks and then apply compression to the whole group at once. This gives you the flexibility of global control, saves CPU resources, and gives the mix more power, presence, and clarity. If at any point you think an individual instrument requires special processing, you can always take it out of the group and process it individually.
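A rough sketch of bus compression, assuming NumPy. The stems and the compressor’s static gain law are toy stand-ins (no attack or release smoothing, and the threshold and ratio are arbitrary); the point is that one process acts on the summed group instead of three separate inserts:

```python
import numpy as np

def bus_compress(x, threshold_db=-10.0, ratio=4.0):
    # Static compressor gain law: level above the threshold is
    # scaled down by `ratio`; below it, the signal is untouched.
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

fs = 44100
t = np.arange(fs // 10) / fs
# Stand-in stems; in a real session these are the recorded drum tracks
kick = 0.9 * np.sin(2 * np.pi * 63 * t) * np.exp(-t * 20)
snare = 0.6 * np.sin(2 * np.pi * 200 * t) * np.exp(-t * 15)
hats = 0.2 * np.sin(2 * np.pi * 8000 * t)

drum_bus = kick + snare + hats  # one group instead of three inserts
glued = bus_compress(drum_bus)

print(round(np.max(np.abs(drum_bus)), 2))
print(round(np.max(np.abs(glued)), 2))  # peaks pulled down by the bus compressor
```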
Listen to your work at around 85 dB SPL, which is an average listening level. Listening at levels that are too loud will skew your sense of the frequency balance because of the Fletcher-Munson curves: we respond differently to different frequencies at different levels of amplitude.
This should help you get started. Good luck!