Regarding the title of this post - no, I haven't discovered a new synth which is geared entirely towards our trance pluck needs.
Which is probably a good thing - if we all started using the "ultimate pluck synth", we may quickly tire of those sounds appearing in many of the songs we hear.
No, today's post is about taking any of the synths we currently have in our studio and automatically turning every preset into a pluck. Or a bass. Or a pad. Or whatever.
A number of you have probably worked this out before now (and potentially there are some synths out there which make this process easy).
The way it's done is by using automation to "lock in" certain aspects of the sound. The parameters which are locked in are totally open to personal preference, though typically if the general "shape" of the sound should be retained, then the following parameters should be locked in:
Amplitude: Attack, Decay, Sustain and Release
Filter: Attack, Decay, Sustain, Release, Type, Cutoff Frequency and Resonance
Any other desired aspects of the sound (whether it should have delay/reverb applied, or unison, or whether the oscillator shapes should be retained, etc etc) can be locked in.
It takes a bit of time to set up all the automation - I use MIDI for my hardware synths, and set each parameter to toggle slightly on the first 64th note of each bar - for example, from value 88 to value 89 - just enough to trigger the parameter change.
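As a rough sketch of the idea (all the numbers here are illustrative - the CC number, PPQ resolution and values are assumptions, not any particular synth's assignments), the "lock-in" events for one parameter could be generated like this:

```python
# Sketch: generate "lock-in" automation events for one parameter.
# At the start of each bar, nudge the CC value between two adjacent
# values (e.g. 88 <-> 89) so the synth re-applies the locked setting.
# CC 73 (often amplitude attack) is a placeholder assumption here.

PPQ = 480                 # MIDI ticks per quarter note (assumed resolution)
BEATS_PER_BAR = 4

def lock_in_events(cc, base_value, num_bars):
    """Return (tick, cc, value) tuples toggling the parameter each bar."""
    ticks_per_bar = PPQ * BEATS_PER_BAR
    events = []
    for bar in range(num_bars):
        tick = bar * ticks_per_bar
        # Alternate between base_value and base_value + 1 -
        # just enough of a change to force the synth to update.
        value = base_value + (bar % 2)
        events.append((tick, cc, value))
    return events

events = lock_in_events(cc=73, base_value=88, num_bars=4)
```

In a real setup the same event list would be drawn (or recorded) once per locked parameter, then saved as a reusable clip.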
Once the slightly tedious task of setting up is done, however, the synth opens up many more choices to us. We can enter (or copy/paste) the notes and hit the play button, and any sound we turn to will take on our desired characteristics (at the start of each bar). These will be mixed with a wide range of other characteristics to create exotic new combinations, many of them ideally suited to our needs.
A great thing about this process is that these sounds are "original" (I don't want to get into a discussion about where originality begins and ends here...). Many of us (myself included) use presets and tweak them slightly, but don't deviate greatly from the original sound. Some of us (myself included) try to program our own sounds and get frustrated when they don't sound very good - sure, they're original, but so what? I could write an original atonal song with a random melody - originality for its own sake is pointless. The process mentioned here allows us to become instant sound designers - taking sonically pleasing aspects of sounds designed by talented sound designers, and combining them in many new ways, to create sounds whose origins nobody will recognize.
I save all the MIDI clips in a single project so I can open it along with a new project and drag the desired clips across. Of course, another way would be to save each separate clip into a designated folder and import it into the project from there. I name them according to the synth and the original sound.
This process will probably work just as well with soft synths - in some ways, even better, since the parameters will likely be named (rather than referring to the manual to find which of the 128 MIDI CC numbers controls the desired parameter!). I haven't actually tried this with a soft synth, however - it may not be as easy to drag the "automation clip" to a new project. It would depend on the DAW being used.
Well, that's all for this blog post. I hope it helps some of you who may have been getting tired of the sounds in a particular synth - this should inject it with some new life!
All the best,
Fabian
Monday, July 4, 2011
Saturday, June 4, 2011
Using send effects
A short post today about send effects. How to use them, why to use them.
For most producers, today's content will be a case of "move along, nothing to see here".
Okay, for the rest of us:
Send effects refer to effects/processors set up on a sequencer's "send" channels. Rather than being applied directly to a single instrument or sound, the effect is available for many instruments/sounds to be routed into, at different levels.
Common effects used on send channels are:
REVERB
If reverb is applied to each instrument/sound individually, it can sound like they're all in separate spaces, which can sound very confusing.
Using a send effect and sending more of some instruments and less of others into it means that all the instruments share a space, providing a more convincing sound stage. The effect can be further improved by creating a couple of send channels which feed into the reverb channel, and putting short delays onto these send channels. This way, some sounds can be routed directly into the reverb (far-away sounds, where the direct and reverbed sounds will reach the listener at the same time), some sounds can be routed into the reverb via a 25 millisecond delay (such as lead sounds which aren't absolutely at the front of the mix), and others can be routed via a 50ms delay (such as a snare at the front of the mix), whose sound will take a while to hit the back of the mix and come back to the listener as reverberation.
A different way of accomplishing this would be to copy each of the tracks going into the reverb, and simply pushing each one back in time the desired amount (with the fader at 0 and the channel fed into the reverb send pre-fader).
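Either way, the numbers work out the same; here's a tiny numeric sketch of the depth tiers described above (the millisecond figures are just the examples from this post, not fixed rules):

```python
# Sketch of the "depth via reverb pre-delay" idea: front-of-mix sounds
# reach the shared reverb later (longer pre-delay), far-away sounds feed
# it directly. Three fixed tiers, matching the examples in the post.

PRE_DELAY_MS = {
    "far": 0,     # direct into the reverb - dry and wet arrive together
    "mid": 25,    # e.g. leads that aren't absolutely up front
    "front": 50,  # e.g. a snare right at the front of the mix
}

def reverb_pre_delay(position):
    """Return the send pre-delay in ms for a given mix position."""
    return PRE_DELAY_MS[position]
```

In practice these would simply be the delay times on the two intermediate send channels (or the amount each duplicated track is pushed back in time).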
Some people set up a number of different send reverbs. I typically set up two for my trance songs - a long "hall" type reverb (which mainly treats my lead sounds) and a short "room" type reverb (which mainly treats my drum and bass sounds).
DELAY
Tempo-synced delays can add some sonic interest to a lot of sounds. If we're applying a distinctive delay to a particular sound it's probably best we apply it to that particular sound, but for more conventional delay effects it's fine to send several sounds through the same delay. I usually have two send delays set up for my trance songs. Both are left-right delays with a bit of feedback, set at different rhythmic patterns (for example, a 2/16ths, 3/16ths delay and a 3/16ths, 6/16ths delay). This way sounds can quickly be spiced up by routing them through one delay line or the other (or even both). Treating a number of sounds with the same delay pattern also adds cohesiveness to a mix.
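The arithmetic behind these tempo-synced settings is simple - a 16th note lasts 60000 / BPM / 4 milliseconds - and can be sketched like so (the 138 BPM figure is just an illustrative trance tempo, not from this post):

```python
# Tempo-synced delay times: a 16th note lasts 60000 / bpm / 4 ms,
# so an n/16ths delay tap is n times that.

def sixteenth_ms(bpm):
    """Duration of one 16th note in milliseconds."""
    return 60000.0 / bpm / 4.0

def delay_ms(bpm, sixteenths):
    """Delay time in ms for a tap of `sixteenths` 16th notes."""
    return sixteenths * sixteenth_ms(bpm)

# e.g. the left/right taps of a 2/16ths, 3/16ths delay at 138 BPM:
left = delay_ms(138, 2)    # ~217.4 ms
right = delay_ms(138, 3)   # ~326.1 ms
```

Handy when a delay unit only accepts milliseconds rather than note values.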
Less common effects are:
CHORUS/STEREO WIDENERS
I occasionally experiment with these on a send track. As with delays, this provides a quick and easy way to add stereo width to a number of instruments. It generally works best with only two or three sounds running through it at the same time, since if everything is wide, nothing is wide. Using automation, width can be added to instruments at various points during a song.
COMPRESSION
One way of achieving the "New York Compression" effect, where a heavily compressed version of a sound is mixed with the original, uncompressed sound, is via a send. A whole group of sounds (for example, a drum kit) can be routed through the compressor.
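As a minimal sketch of the idea (the static compression curve and the 0.5 send level are simplifying assumptions - a real compressor reacts over time), the parallel blend looks like this:

```python
# Parallel ("New York") compression as a simple per-sample sketch:
# a heavily compressed copy is mixed back in underneath the untouched
# signal. The "compressor" here is a crude static curve, purely for
# illustration of the dry + wet blend.

def compress(sample, threshold=0.2, ratio=8.0):
    """Heavily compress one sample with a static ratio above threshold."""
    if abs(sample) <= threshold:
        return sample
    over = abs(sample) - threshold
    out = threshold + over / ratio
    return out if sample >= 0 else -out

def parallel_mix(dry, send_level=0.5):
    """Mix the dry signal with a compressed copy at `send_level`."""
    return [d + send_level * compress(d) for d in dry]
```

The loud hits stay loud (the dry path is untouched), while the compressed copy lifts the quiet detail underneath - which is the whole point of the technique.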
An added benefit of using send effects such as reverbs is that a single high-quality, resource-hungry reverb can be used - since there is only one instance in the mix rather than ten, we're much less likely to run out of processing power.
EQs can be applied before or after send effects, to shape the frequencies we want passing through the effect and to shape the frequencies coming out of it. Low end is often removed from send effects, since bass can quickly build up. Lifting the high end up will make the effect stick out more, whereas bringing the high end down a bit will blend the effect in.
Send effects also allow for creative mix manipulation - we can bring the level of reverbs and delays up during transitions, or during certain sections of a song. We can briefly drop an effect out, raise its feedback, change its stereo image - there are many possibilities.
I hope some of you found this useful!
Fabian
Thursday, June 2, 2011
Sidechain Dynamics
Ah, sidechains.
So much potential for creative fun. So much potential for adding groove. For keeping instruments out of each other's way. For totally rampant overuse throughout the last decade at least.
With that last point I'm referring to sidechain compression, which has been used to create heavy pumping effects in countless songs over the years. Its use has become so prevalent that it simply seems normal to me now.
I've used sidechain compression for a while now. I like the way it opens up space for my kicks and enhances groove. For the most part, I don't go too wild with it, though from time to time I'll get into some heavy pumping pad action.
Anyway, today's post is about sidechain processing - compressors, gates and expanders. Between these three processors, and a variety of input signals, lies a whole bunch of creative fun.
I'll just touch briefly on what sidechaining means and how it is typically set up.
Normally, when we run a sound through a dynamics processor, it reacts to the sound running through it - a compressor will turn the level down when the sound goes above the threshold, a gate opens when the sound goes above the threshold. With sidechaining, the processor doesn't care what's going on with the sound it's working on. It reacts to a separate signal, coming in via the sidechain input. The input signal (or "trigger") is typically set up on a separate channel in our sequencer, and is routed into the sidechain input. Depending on how we're using it, we may turn the trigger's level all the way down and send the signal to the sidechain input pre-fader. This means that it doesn't matter how much we move the trigger's fader, including turning it all the way down - the signal will be sent to the sidechain input at a constant level.
I'll go through some examples of how sidechaining can be used:
Sidechain compressing a bass to a kick: Low frequencies can quickly get out of control when both a kick and a low bass play at the same time. Sending the kick into a compressor acting on the bass means the bass will clear out of the way every time the kick plays. Problem solved! For the input trigger, there are a couple of options. We could use the actual kick we're using in the song, and send it post-fader. This means that only as much compression as required will occur - if the kick fades in or out at some point, the compression will gradually increase/decrease as required. However, sometimes we want to apply a steady, consistent pump to our bass, regardless of what's happening with the kick we can hear. One option is to put a copy of the kick on a separate track, program a steady beat, pull its fader all the way down and send it pre-fader to the sidechain input. This way the bass will pump consistently throughout the song. Yet another option is to use a different trigger - kick drums often have long, boomy tails, which can compress more of the bass than is actually required. For this reason, I generally use a very short closed hi-hat sample as the trigger. This allows me to set the timing of the pump very precisely using the compressor's hold or release parameters.
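Here's a simplified sketch of the resulting ducking envelope (the linear recovery and the depth/release figures are assumptions for illustration - real compressors use smoother curves):

```python
# Sketch of the kick-triggered ducking envelope: each trigger hit pulls
# the bass gain down to `depth`, which then recovers linearly over
# `release` samples. A short, clicky trigger (like the closed hi-hat
# mentioned above) makes the timing of this curve precise.

def duck_envelope(length, hits, depth=0.3, release=100):
    """Per-sample gain multiplier for the sidechained bass."""
    gain = [1.0] * length
    for hit in hits:
        for i in range(release):
            pos = hit + i
            if pos >= length:
                break
            # Linear recovery from `depth` back up to 1.0
            g = depth + (1.0 - depth) * (i / release)
            gain[pos] = min(gain[pos], g)
    return gain

# Two trigger hits; the bass ducks to 30% on each and recovers between them
env = duck_envelope(300, hits=[0, 150])
```

Multiplying the bass signal by this envelope, sample by sample, gives the familiar pump.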
Sidechain compressing any other sound to a kick: See above. Pads, pumping heavily on the offbeat, are common. Leads can come down very briefly, using a low ratio - hardly noticeable, but helpful for clearing out room for the kick. Percussive loops can groove in time with the kick. Heck, we could go so far as to send every sound except the kick into a group channel and compress the entire group every time the kick hits. Mega pump!
Sidechain compressing guitars/pads to a lead or vocal: This is similar to what radio announcers do when they speak over the top of a song that's playing - they start speaking, the music instantly drops down in volume so we can hear them clearly, then the music rushes back up when they stop speaking. We won't generally apply the effect in anywhere near as extreme a way - it's quite unmusical. But the principle remains - we want the centerpiece of our song to come through clearly - we want to make out every word, hear every note of the melody. So we lightly compress a wall of guitars using a vocal input, or a big pad using a lead input. Done subtly, most listeners won't even notice that it's happening - we've simply found a way to go over 0 dB in the digital realm, that's all (please don't take that statement at face value!). For this application, we want to send the vocal/lead in post-fader, so the other sounds drop back only as much as needed. We'll generally want to use a release slow enough that the sound comes back up naturally when the vocal/lead stops, rather than rushing up and letting the listener know exactly what was going on - though obviously this is a creative decision for each of us!
Sidechain compressing an instrument's reverb/delay to that instrument: This effect turns the reverb down when the instrument is playing, then brings the reverb back up as the instrument fades away. There are a number of creative applications for this, ranging from very subtle to quite extreme. It can be applied to short room reverbs and large caverns. Likewise, it can be applied to delays. It could potentially be applied to other effects (distortion, chorus and so on), but the effect would be much shorter in nature - it would only apply to the end of each note (unless it's a distortion which feeds back on itself!)
Sidechain gating/expanding a pad with a rhythmic pattern/loop: Using a short, snappy gate, a pad can be chopped into an interesting lead pattern by running a drum loop (or any other rhythmic element) into the gate's sidechain input. If the pad features some nice evolving modulation, this will keep running throughout the chopped up sequence - quite different to actually playing the same sequence and having the modulation start from the same point each time a note is triggered. Rhythmic loops with a wide dynamic range can work better than highly compressed loops - a wide dynamic range means there is a more defined space between "note on" and "note off". Using compressed loops will require finer tuning of the input threshold. A more subtle version of the chopped gated effect can be created by using an expander with a low ratio - the sound will only drop marginally between transients, rather than completely cutting out. Using automation, this also allows us to smoothly transition from unbroken pad chords/notes to heavily chopped chords/notes and back again.
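A minimal sketch of the gate-versus-expander difference described above (the threshold and floor values are arbitrary illustrations):

```python
# Sketch of trigger-keyed gating vs. low-ratio downward expansion:
# the gate fully mutes the pad when the trigger falls below threshold,
# while the expander only drops it partially, giving the subtler
# version described in the post.

def gate_gain(trigger_level, threshold=0.5):
    """Hard gate: fully open above threshold, fully closed below."""
    return 1.0 if trigger_level >= threshold else 0.0

def expander_gain(trigger_level, threshold=0.5, floor=0.6):
    """Low-ratio expansion: drop to `floor` (not silence) below threshold."""
    return 1.0 if trigger_level >= threshold else floor

def chop_pad(pad, trigger, gain_fn):
    """Apply a trigger-keyed gain curve to the pad, sample by sample."""
    return [p * gain_fn(t) for p, t in zip(pad, trigger)]

pad = [1.0, 1.0, 1.0]          # a sustained pad (constant level)
trigger = [0.8, 0.1, 0.6]      # a dynamic rhythmic loop as the key input
gated = chop_pad(pad, trigger, gate_gain)
expanded = chop_pad(pad, trigger, expander_gain)
```

Crossfading (or automating) between the two gain curves is one way to move smoothly from unbroken chords to the fully chopped pattern.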
Automating or changing the input trigger: A lot of the time, the automation will need to be reasonably heavy-handed to make an impact on the way the dynamics processor reacts. The input trigger can be heavily EQ'd - if we are punching out a rhythmic pattern using a percussive loop into a gate, an EQ can change the percussive loop so the kick comes through much more prominently (or another area of the loop). A transient designer can be attached to the percussive loop, transitioning it from very short and snappy to more smooth and drawn-out.
There are many more creative possibilities. Whether it's punching parts of sounds out or punching them in, whether it's pushing something away from another sound or locking it to it, or even whether it's a very subtle application which transparently solves an issue, sidechaining is a very handy resource to have at our fingertips.
Have fun, keep making great music!
Fabian
Friday, May 13, 2011
We are not Average Listeners
As producers, we are not average listeners. It is helpful to recognize that the majority of people who listen to our music don't experience it the way we experience it. They don't listen to other music the way we tend to listen to it. It's hard for us to switch off our production experience and listen purely for enjoyment - if that bass gets buried in the mix during the chorus, it could be hard for us to ignore, even if the song overall is fantastic. We may chase "perfection", but many great, hugely popular songs fall far short of this. It's useful to keep this in mind.
Here are some thoughts on how average listeners listen to music, and how this may differ from how we producers/engineers listen.
Average listeners:
-Don't know and don't care you've used loops. This includes musical loops, even the main musical hook of the song.
Tip: we should do what feels right to us. If we can sleep fine and use loops, we should keep using them - and take no notice of people who say loops make baby Jesus cry.
-Don't care about the gear you've used - hardware/software, analog/digital, synthesizer/sampler/rompler.
Tip: we should use whatever we're using to make great music. Producer nobody on forum nowhere only uses and likes music made with 1970s analog synthesizers? Good for them and irrelevant for us.
-Won't notice or care about the hours of work you put into small background details.
Tip: we can keep spending these hours on our passion. Who cares if other people don't notice, or care? We care. These hours are quality music time, "us time". Unless we're actually obsessing over details which literally nobody will ever notice.
-Will (as with every field) greatly underestimate how hard it is to create a great sounding, musically pleasing song.
Tip: we should keep making great sounding, musically pleasing songs. In the end, that's what matters, not that we created the bass patch ourselves and used twenty tracks for the vocal line. Let people simply enjoy them or dance to them, even if they think it's as easy as making toast or building a spaceship.
-Are into the current hot trend which many of us may find unlistenable. That's why the current hot trend is selling more than our style.
Tip: we can be inspired by the current trend or stick with our style. Either way works. We're only going to create good, listenable music if we're somewhat engaged with our creation - if we hate the current style, we're not going to be able to create great music in that style. In the end, we can't control the public's tastes, but we can control the quality of the music we create, whatever our style happens to be.
-Are going to talk about hooks/easy-to-describe-noteworthy things - "hah! I kissed a girl!". If your song is based on a very effective chord progression and interesting rhythmic groove, people are going to have a lot of difficulty talking about it to their other non-musical friends.
Tip: are we making music for other producers, or a wider audience? In some ways creating music for other producers is easier - they'll often cut us some slack. They may praise us if we've used some technical wizardry and disregard our ineffective groove or complete lack of melody. The dancefloor isn't as forgiving. If we're going for a wider audience, we need to make sure the wider audience can easily communicate our song to each other - a simple, catchy melody they can hum, a memorable lyric, some kind of hook.
-Are going to love our song and think it's perfect. As producers, as people who have developed an intimate relationship with sound over years, it can sometimes be a while between songs we hear which blow us away, which we consider "perfect". When I was younger, before I learned how music was created, how all the bits were put together, I sometimes got that feeling several times on an album. Sure, we can appreciate the skill of the artist or producer, but we can't unlearn our art and listen to songs with a pure music fan's ears (well, maybe if they're songs in a style we've never tried to create).
Tip: We should get our music out there and give people a chance to love it. The person who builds a cover song entirely out of loops using cheap software in a couple of hours and gets people dancing has contributed more than the person who has been working on an original analog synth composition for the past year and shelves it because it "isn't perfect".
I'm sure there are a lot of other examples I could have used. The main thing for us to bear in mind is that the majority of people don't listen to music the way we do.
Keep making great music!
Fabian
Here are some thoughts on how average listeners listen to music, and how this may differ from how we producers/engineers listen.
Average listeners:
-Don't know and don't care you've used loops. This includes musical loops, even the main musical hook of the song.
Tip: we should do what feels right to us. If we can sleep fine and use loops, we should keep using them - and take no notice of people who say loops make baby Jesus cry.
-Don't care about the gear you've used - hardware/software, analog/digital, synthesizer/sampler/rompler.
Tip: we should use whatever we're using to make great music. Producer nobody on forum nowhere only uses and likes music made with 1970s analog synthesizers? Good for them and irrelevant for us.
-Won't notice or care about the hours of work you put into small background details.
Tip: we can keep spending these hours on our passion. Who cares if other people don't notice, or care? We care. These hours are quality music time, "us time". Unless we're actually obsessing over details which literally nobody will ever notice.
-(As with every field) will greatly underestimate how hard it is to create a great sounding, musically pleasing song.
Tip: we should keep making great sounding, musically pleasing songs. In the end, that's what matters, not that we created the bass patch ourselves and used twenty tracks for the vocal line. Let people simply enjoy them or dance to them, even if they think it's as easy as making toast or building a spaceship.
-Are into the current hot trend which many of us may find unlistenable. That's why the current hot trend is selling more than our style.
Tip: we can be inspired by the current trend or stick with our style. Either way works. We're only going to create good, listenable music if we're somewhat engaged with our creation - if we hate the current style, we're not going to be able to create great music in that style. In the end, we can't control the public's tastes, but we can control the quality of the music we create, whatever our style happens to be.
-Are going to talk about hooks/easy-to-describe-noteworthy things - "hah! I kissed a girl!". If your song is based on a very effective chord progression and interesting rhythmic groove, people are going to have a lot of difficulty talking about it to their other non-musical friends.
Tip: are we making music for other producers, or a wider audience? In some ways creating music for other producers is easier - they'll often cut us some slack. They may praise us if we've used some technical wizardry and disregard our ineffective groove or complete lack of melody. The dancefloor isn't as forgiving. If we're going for a wider audience, we need to make sure the wider audience can easily communicate our song to each other - a simple, catchy melody they can hum, a memorable lyric, some kind of hook.
-Are going to love our song and think it's perfect. As producers, as people who have developed an intimate relationship with sound over years, it can sometimes be a while between songs we hear which blow us away, which we consider "perfect". When I was younger, before I learned how music was created, how all the bits were put together, I sometimes got that feeling several times on an album. Sure, we can appreciate the skill of the artist or producer, but we can't unlearn our art and listen to songs with a pure music fan's ears (well, maybe if they're songs in a style we've never tried to create).
Tip: We should get our music out there and give people a chance to love it. The person who builds a cover song entirely out of loops using cheap software in a couple of hours and gets people dancing has contributed more than the person who has been working on an original analog synth composition for the past year and shelves it because it "isn't perfect".
I'm sure there are a lot of other examples I could have used. The main thing for us to bear in mind is that the majority of people don't listen to music the way we do.
Keep making great music!
Fabian
Friday, April 29, 2011
Why Should People Listen to Our Music?
There are millions of very competent musicians and engineers in the world.
Many of them have staggering technical skills.
And many of the songs these technical wizards create aren't worth listening to, apart from being used as technical references. For lovers of music, these songs offer nothing compelling.
Much like all the "shred guitarists" who can play thousands of meaningless notes per minute, if we as producers try to distinguish ourselves by way of technical prowess we are heading down a dead end. Perhaps we can get work as an engineer or teacher, but it's unlikely our songs will resonate with the average listener.
On an individual level it's not ideal that we're now competing with a million other producers rather than a few thousand. It means, on average, more is required from us to separate our music from the millions of other songs which are created each year. In the end, the artists who have something meaningful to impart will find it easier to connect with listeners.
Production is important, but I believe that songwriting is more important than ever. (I'm trying to help out on the technical/production side with insidemixes - hopefully this will shortcut the path to great sound for a number of people, allowing them to focus their attention on their unique voice).
We are not average listeners, and we would do well to remember this. Average listeners like catchy melodies, lyrics, grooves, hooks. If we want to connect with them, we should spend much more time working on these skills relative to our engineering skills.
If you think you can't write catchy melodies, spend some time listening to catchy melodies. Hum along to them to get a feel for the note placements and intervals. Then spend some time coming up with new melodies - start with an empty head (heh) and let the notes come. Maybe it'll be simple, maybe it'll be complex. Happy, sad or some other mood. Maybe the melody will have some large intervals. Maybe some note bending. Whatever, get it happening in your head until it's ready to put into your sequencer in a rough form. Plenty of time to tidy it up later on.
Get ten melodies down. If you haven't written many melodies before, most of them will be fairly average and aren't worth pursuing. But perhaps one or two will have a bit more merit. It's much easier to gauge this the day after writing the melody down - it'll be much easier to sort good ideas from bad ones, and come up with improvements to the good ones.
Do this process often, learn what works and what doesn't, and you'll get to a point where you're writing very catchy melodies.
Of course, I don't mean to discount other learning aids - videos, lessons, books (such as "How Music Works"). These can be helpful for many of us.
The same process applies to writing great lyrics and to coming up with danceable grooves and memorable hooks.
At this point, where we have so many excellent tools at our disposal, where we have access to so much quality information relating to our art, we should be giving the world a larger selection of fantastic music than at any time in the past.
So let's keep at it!
Fabian
Friday, April 15, 2011
Learning from Great Songs
Whenever we listen to music we learn from it. This happens whether we're consciously aware of it or not. Some pieces of music won't appeal to us and (subconsciously) we'll have an awareness of why they don't appeal. Likewise with songs which we love as well as the many songs we neither openly love nor detest. We're always learning what works and what doesn't, from our perspective.
We may open up our sequencer and start with a blank slate, but our minds aren't blank slates. When it comes to crafting a pleasing mix, we'll be guided by the music we've listened to. If our guitars don't sound chunky enough, we only know that because we've heard nice chunky guitars. If our kick drum doesn't have enough impact, we recognize it because we've heard satisfying kick drums in songs we like the sound of.
Given that we're always learning, I'd recommend doing it consciously. If we hear a beautiful lush pad in a song, we shouldn't rely on serendipity to bring it forth in a year's time, when a similar pad may fit perfectly into our current song. Focus on the pad and try to describe it as fully as possible - how wide is it? Is it only represented in the mid range, or does it have decent low and/or high end as well? Is there any modulation - panning, filters, delay effects? How does it interact with the other sounds - are there particular other sounds which help to frame the pad and make it sound the way it does? The more detail we can use to describe what we're hearing, the better we'll be able to recreate and incorporate the sound into our own sonic palette.
This leads me to the main topic of this post. It's great to describe a sound in detail, but in my experience the next step is more important - actually attempting to recreate the sound. I've addressed the issue in some of my posts already regarding the "copying vs originality" aspect of this. In short, as I've already said, we're learning all the time whether we're doing so consciously or not. Recreating sounds won't lead to any less original music than starting with a blank slate. It will however give us more techniques to use when we're making music.
Here's how I go about recreating songs.
I like to build songs up from the foundation - the kick and bass. The first thing I do is look for sections in the song/album where these are as exposed as possible. When I find a song which is a good candidate, I'll import it into my sequencer and adjust the project's tempo to the tempo of the song. Some sequencers (such as Ableton Live) do this automatically, but mine doesn't, so I just loop a four bar section and adjust the tempo until the section loops cleanly.
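For sequencers that don't detect tempo automatically, the arithmetic behind matching the project tempo to a cleanly looping section is simple. Here's a minimal sketch, assuming a 4/4 time signature:

```python
def tempo_from_loop(duration_seconds, bars=4, beats_per_bar=4):
    """Estimate BPM from the duration of a cleanly looping section.

    A four-bar loop in 4/4 contains 16 beats; BPM is simply
    beats divided by duration in minutes.
    """
    beats = bars * beats_per_bar
    return beats * 60.0 / duration_seconds

# A four-bar 4/4 section lasting 7.5 seconds loops at exactly 128 BPM.
print(tempo_from_loop(7.5))  # → 128.0
```

In practice it's usually quicker to nudge the tempo by ear until the loop stops drifting, but the formula gives a good starting value when the track's BPM is unknown.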
Then I set the loop markers so a two bar section with a decently exposed kick drum is looping. The first bar plays the original audio, the second bar is silent (either via volume automation or simply cutting the original audio and moving the section after the first bar out of range of the loop). Then I go through my kick samples, in whichever sample library is closest to the style of sample I'm looking for. I'll hear four of the original kick, then four of the kick I'm previewing. I make note of kicks which sound close to the attack transient, kicks which sound close to the body and kicks which sound close overall. It's great when I find a kick which is close on its own (even better when I find the exact kick the original artist used!), but I have no problems splicing together a great sounding kick using the attack portion of one and the body of another. The thing to keep in mind when cutting out the parts of each kick that aren't required is to enable the "snap to zero crossing" option. Otherwise the audio will pop every time it plays the uncleanly spliced kick. Over the course of a song that's a heap of pops! Once I've gone through my kick library I may have noted down 20 kicks. I'll go through these again and keep narrowing it down until I find the closest one(s).
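The splice-at-zero-crossings idea can be sketched offline as well. The helpers below are hypothetical (not any particular sequencer's feature) and use numpy on mono audio arrays: each snaps the chosen split point to the nearest zero crossing, so joining the attack of one kick to the body of another introduces no level jump (pop) at the seam:

```python
import numpy as np

def nearest_zero_crossing(audio, index):
    """Return the sample index of the zero crossing closest to `index`."""
    signs = np.sign(audio)
    crossings = np.where(np.diff(signs) != 0)[0]
    if len(crossings) == 0:
        return index  # no crossings found; fall back to the requested point
    return int(crossings[np.argmin(np.abs(crossings - index))])

def splice_kicks(attack_kick, body_kick, split_sample):
    """Join the attack of one kick to the body of another.

    Both cut points are snapped to zero crossings, mimicking a
    sequencer's "snap to zero crossing" edit option.
    """
    a = nearest_zero_crossing(attack_kick, split_sample)
    b = nearest_zero_crossing(body_kick, split_sample)
    return np.concatenate([attack_kick[:a], body_kick[b:]])
```

A short crossfade across the join would smooth things further; snapping to zero crossings alone removes the level discontinuity that causes audible clicks.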
Once I have a kick which is pretty close I compare it to the original by running a frequency analyzer over both channels and seeing what differences that shows up. Obviously if the original version has other sounds playing I make allowances for these. I'll make a few EQ adjustments to get my kick even closer to the original.
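The analyzer comparison can also be done in code. This is a rough sketch with numpy - the window size and averaging choices are assumptions, not a match for any particular analyzer plugin - and the subtraction at the end shows, per frequency band, roughly how much EQ boost or cut would bring the two kicks closer:

```python
import numpy as np

def average_spectrum(audio, sample_rate, n_fft=2048):
    """Magnitude spectrum in dB of a mono signal, averaged over windows."""
    # Zero-pad so the signal divides evenly into n_fft-sized frames.
    pad = (-len(audio)) % n_fft
    frames = np.pad(audio, (0, pad)).reshape(-1, n_fft)
    window = np.hanning(n_fft)
    mags = np.abs(np.fft.rfft(frames * window, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    return freqs, 20 * np.log10(mags + 1e-12)  # small offset avoids log(0)

# Hypothetical usage, assuming `my_kick` and `original_kick` are mono arrays:
# freqs, mine = average_spectrum(my_kick, 44100)
# _, ref = average_spectrum(original_kick, 44100)
# eq_hint = ref - mine  # positive values = bands where my kick needs a boost
```

This is a blunt instrument compared with listening, but it's handy for spotting a missing low-end bump or an excess around the attack's click frequency.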
Now that I have a decent version of the original kick it will be easier to hear and audition the other sounds relative to it. Ideally every sound would be isolated at some point during the song/album. When that doesn't happen it's helpful to have a very similar kick sound and listen to where the other sounds are sitting relative to it - level-wise, frequency-wise, their stereo width, panning and so on. Sometimes I'll look at a frequency analyzer, in which case it's definitely handy having a similar kick so I can focus on what the other sound is adding, rather than trying to judge which frequencies belong to the kick and which to the other sounds.
Kicks are often dry, mono and in the middle, whereas other sounds are more likely to be further processed. I note the reverb and delay that have been added to sounds, as well as any other treatment and do my best to replicate these.
After the kick I'll either proceed to the hihat and snare, or the bass. It depends on which element is most exposed. In this example I'll do the bass first.
I have a number of samplers and synthesizers I'm reasonably familiar with, so I'll generally know which one to turn to for a sound similar to the original bass sound. As before, I loop a section where the bass is exposed (if available), or where it's predominantly the kick and bass playing. I'll work out the notes the bass is playing and create a MIDI version to send to my sound source. Then I repeat the sound selection process - going through bass sounds, making notes of the sounds which are closest to the original, modifying parameters to push the sound closer to the original. Again, I may have noted down 10 to 20 sounds, which I'll progressively narrow down to the one I'll use.
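Working out the notes and building the MIDI version can start from something as simple as converting note names to MIDI note numbers. A small sketch follows - the offbeat bass pattern at the end is purely an illustrative example, not a transcription from any particular song:

```python
NOTE_OFFSETS = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def note_to_midi(name):
    """Convert a note name like 'F#1' or 'Bb2' to a MIDI note number.

    Uses the common convention where middle C (C4) = MIDI note 60.
    """
    letter, rest = name[0], name[1:]
    semitone = NOTE_OFFSETS[letter.upper()]
    if rest.startswith('#'):
        semitone += 1
        rest = rest[1:]
    elif rest.startswith('b'):
        semitone -= 1
        rest = rest[1:]
    octave = int(rest)
    return 12 * (octave + 1) + semitone

# A one-bar offbeat bass line on F#1, as (note, start_beat, length_in_beats):
bass_line = [(note_to_midi('F#1'), beat + 0.5, 0.25) for beat in range(4)]
# → [(30, 0.5, 0.25), (30, 1.5, 0.25), (30, 2.5, 0.25), (30, 3.5, 0.25)]
```

From a list like this it's straightforward to enter the notes into any sequencer's piano roll, or to write them out with a MIDI file library.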
The hihat and snare/clap are generally quite straightforward. Sometimes I'll adjust the pitch or length of a sample but generally I'll be content with something which fulfills the same role as the original sample rather than seeking an exact match.
I find a lot of mid/higher bass sounds in trance very difficult to recreate. They often have filter modulation, phasers, delays or a myriad of other effects applied to them. Often this is where I have to concede defeat for the time being. On a few occasions I've substituted a different mid bass sound which fulfills essentially the same function, resulting in a very solid foundation to build pad and lead sounds onto for my future original productions.
I won't go into detail for the other sounds, since the same principles apply. I have a number of guitar amp emulators to turn to when I'm trying to match heavy guitar sounds in metal songs - I look for a similar sounding preset and work from there. For high percussive loops I choose a similar style of loop rather than trying to match it perfectly. For pad and lead sounds I listen to the relationship between them and the solid foundation I've already created. Essentially, the more songs I recreate, the better I'm able to hear small differences in sounds. Over time, when selecting sounds, I'm eliminating more of them before I note them down as candidates. Hopefully one day I'll reach a point where I can go through and choose the best sound without making any notes. More experience with my sample libraries will help, as would an enhanced knowledge of synthesis.
To sum up, our music libraries are the best learning resource we have. If we take the time to study them and to experience their creation directly (to the best of our ability), we'll make tremendous progress. Each song is a new experience, which will stretch us and make us grow. Great masters of classical music transcribed music of the composers they admired in order to achieve this direct experience with the music they loved. I heartily encourage the practice.
I hope you've found this useful, keep making great music!
Fabian
Saturday, April 9, 2011
Seeing the Big Picture
Often we get caught up in the little details. Which is okay, since all the little details make a large difference to the final song.
Now and then, however, we need to step back and see where we're headed. If we finish one song and dive straight into the next and rarely take the time to reflect on our long-term goals and direction, we may lead ourselves into a creative rut. We may compose, produce and mix our songs in habitual ways, not growing as much as we could. We may get bored and disillusioned with the music we make, feeling that our last five songs were just minor variations on the same theme.
We may have lost touch with musical trends. We shouldn't follow trends purely for the sake of success - if a trend doesn't naturally excite us, we're not going to be able to create exciting music in it. Still, being aware of trends can be helpful even if they don't excite us overall. Perhaps among the new sounds we don't like there's also a trend towards a drier sound (that is, less reverb), which we may find appealing.
Seeing the big picture with regard to which of our songs are strong enough to warrant taking them through to completion involves leaving some time between the original composition and the subsequent production. I've read interviews with people who have worked on classic albums, where each of the eight to twelve songs is very strong. The artists/bands would have up to 80 songs to choose from when they started recording the album. Not every song was developed past an initial draft. We can learn from this - if we want to create one or two very strong songs, we should compose ten to twenty rough ideas before deciding which ones to take through to the next step. This saves an incredible amount of time compared with taking every single idea through to completion.
Seeing the big picture with regard to learning about and working on aspects of our art - songwriting, production, engineering - means we may not finish any songs for a while. However, it also means that every single song from then on will sound better (well, if we have learned and practiced effectively). If we're struggling to make our bass sound good in our mixes it will make a big difference to put aside our current song and spend a week or two focusing purely on bass sounds.
I've done this a number of times and improved a lot as a result. I once put together around 100 combinations of kick drum and bass sounds. I went through all my synthesizers and samplers and found appealing bass sounds, then matched them with kick sounds which complemented them. A large number of the 100 attempts were rubbish. But 20 sounded quite good and five sounded fantastic, providing me with solid foundations to build five of my next songs onto. I may not have created any songs for a couple of weeks, but I had a lot more experience with my sound sources and putting these sounds together. Vastly preferable to working on and finishing a song with a kick and bass combination which may have fallen around position 30 of the 100 and despairing that the song doesn't sound great overall!
This approach can be applied to any aspect we wish to improve - writing melodies, creating chord progressions, improving our arrangements and song flow, finding better ways to group sounds, finding ways to make better use of our send effects and a thousand other things. If we listen to enough great music we'll have a good feeling for where we need to improve.
Touching again on sound sources - seeing the big picture means getting to fully understand and appreciate the sounds each of our instruments, samplers, drum machines, synthesizers and so on can give us. Each instrument has a range of sounds it can produce - there is no synthesizer which can produce the sound of every other instrument. As in the previous paragraph, this understanding comes largely through working with the instrument, using it in a large number of productions and practice sessions, experiencing how its sounds fit together with the other sound sources at our disposal. It takes time to understand and get the most out of an instrument. New sound sources can be inspiring and very useful, but if we constantly look to the new to provide us with "amazing sounds" we will struggle to achieve solid, well-produced songs.
As always, I hope some of you have found this useful. It's very satisfying to take a step back and appreciate how far we've come and how much potential we have to create even better music in the future!
Fabian
Now and then, however, we need to step back and see where we're headed. If we finish one song and dive straight into the next and rarely take the time to reflect on our long-term goals and direction, we may lead ourselves into a creative rut. We may compose, produce and mix our songs in habitual ways, not growing as much as we could. We may get bored and disillusioned with the music we make, feeling that our last five songs were just minor variations on the same theme.
We may have lost touch with musical trends; we shouldn't follow trends for the sake of trying to be successful - if the trend doesn't naturally excite us then we're not going to be able to create exciting music in that trend. Being aware of trends can be helpful even if they don't excite us overall. Perhaps among the new sounds we don't like there's also a trend towards a drier sound (that is, less reverb), which we may see as appealing.
Seeing the big picture with regard to which of our songs are strong enough to warrant taking them through to completion involves leaving some time between the original composition and the subsequent production. I've read interviews with people who have worked on classic albums, where each of the eight to twelve songs are very strong. The artists/ bands would have up to 80 songs to choose from when they started recording the album. Not every song was developed past an initial draft. We can learn from this - if we want to create one or two very strong songs, we should compose ten to twenty rough ideas before deciding which ones to take through to the next step. This saves an incredible amount of time compared with taking every single idea through to completion.
Seeing the big picture with regard to learning about and working on aspects of our art - songwriting, production, engineering - means we may not finish any songs for a while. However, it also means that every single song from then on will sound better (well, if we have learned and practiced effectively). If we're struggling to make our bass sound good in our mixes it will make a big difference to put aside our current song and spend a week or two focusing purely on bass sounds.
I've done this a number of times and improved a lot as a result. I once put together around 100 combinations of kick drum and bass sounds. I went through all my synthesizers and samplers and found appealing bass sounds, then matched them with kick sounds which complemented them. A large number of the 100 attempts were rubbish. But 20 sounded quite good and five sounded fantastic, providing me with solid foundations on which to build five of my next songs. I may not have created any songs for a couple of weeks, but I had a lot more experience with my sound sources and with putting these sounds together. Vastly preferable to working on and finishing a song with a kick and bass combination which may have fallen around position 30 of the 100, and despairing that the song doesn't sound great overall!
This approach can be applied to any aspect we wish to improve - writing melodies, creating chord progressions, improving our arrangements and song flow, finding better ways to group sounds, finding ways to make better use of our send effects and a thousand other things. If we listen to enough great music we'll have a good feeling for where we need to improve.
Touching again on sound sources - seeing the big picture means getting to fully understand and appreciate the sounds each of our instruments, samplers, drum machines, synthesizers and so on can give us. Each instrument has a range of sounds it can produce - there is no synthesizer which can produce the sound of every other instrument. As in the previous paragraph, this happens largely through working with the instrument, using it in a large number of productions/ practice sessions, experiencing how the sounds fit together with the other sound sources at our disposal. It takes time to understand and get the most out of an instrument. New sound sources can be inspiring and very useful, but if we constantly look to the new to provide us with "amazing sounds" we will struggle with achieving solid, well-produced songs.
As always, I hope some of you have found this useful. It's very satisfying to take a step back and appreciate how far we've come and how much potential we have to create even better music in the future!
Fabian
Wednesday, March 30, 2011
Getting Songs Mastered
It has been a while since I had a song mastered. I've either been happy with the results of my master channel signal chain or I've sent off remixes where the mastering was going to be taken care of.
I'm not a fan of unattended mastering. It makes me uneasy that someone may impart heavy processing on something I've worked hard at. Many hours of work can easily be destroyed by a half hour mastering job performed by someone who doesn't share (or know, or care about) my vision. I like to take full responsibility for my sound - after all, it's my name attached to the product. If the song is clipping and distorted because the mastering engineer got a little eager with a limiter, it reflects poorly on me, not them.
Anyway, here are a few things to be considered when we get our songs mastered:
1) Attended session or not?
In these days of cheap internet mastering this is becoming more difficult. I much prefer to sit in the room when my song is being mastered, to talk to the engineer about the result I'm looking for. I take in a reference CD with several great sounding songs, in the ballpark of what I'm aiming for. The mastering engineers who have worked on my songs have all been happy to do attended sessions. I can understand that some may not be so keen - I'm not sure how I'd feel if I had someone looking over my shoulder while I was putting together a mix. It would depend on the person and the types of contributions they make during the process.
If attending the session isn't a practical option, provide as much detail as possible about the result you're looking for. Some engineers will ask for reference songs. Hopefully you'll have some ready to send. I assume most people compare their songs to others in the same style. It'd be very difficult to gauge the mix otherwise.
2) How loud should it be?
If you have a particular position on dynamics, let the engineer know this. Let them know you want it as loud as possible, or that you don't want your song squashed to death. If you've heard other examples of their work in the same style, maybe you can trust that they'll push it hard, but not too hard. In any case, it can't hurt to let them know how hot you want your song to be.
3) Where does the mixing process end and the mastering process begin?
I often put a touch of EQ on the master channel of my song. I'm happy to take off the limiter when I send it off to get mastered somewhere unknown, but I'll often leave my EQ touches on there. They're part of my creative vision and there's no guarantee that the mastering engineer will share that vision. So we need to work out where we draw the line - which decisions are essential parts of the creative process for a given song, which ones we're happy to leave in someone else's hands. Perhaps our song fades out at the end and we're happy to leave the fade to the mastering engineer. It's up to each of us.
4) What bit depth and sample rate should the song be?
Mastering engineers are generally quite upfront about the format they prefer to work in. If not, a good rule of thumb is to use the same sample rate that the final song will be in. It doesn't hurt to double it (so 88.2 kHz for songs going to 44.1 kHz, or 96 kHz for songs going to 48 kHz), but from everything I've read, working at higher sample rates makes no audible difference once the song is converted down to the final sample rate. Bit depth is a different matter - here it does make an audible difference when converting down to the final bit depth. I provide 24 bit files when the mastering engineer will end up delivering a 16 bit master. If the final output were 24 bits, I'd send a 32 bit file (though I've never been in this situation).
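The reasoning behind higher bit depths can be made concrete with the standard approximation for the dynamic range of linear PCM - roughly 6 dB per bit. This is a quick illustrative sketch, not a substitute for proper dithering practice:

```python
def pcm_dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM audio:
    ~6.02 dB per bit, plus 1.76 dB from the statistics
    of the quantization noise (the textbook formula)."""
    return 6.02 * bits + 1.76

# 16 bit gives roughly 98 dB, 24 bit roughly 146 dB -
# the extra headroom is why a 24 bit file is worth sending.
for bits in (16, 24, 32):
    print(bits, "bit:", round(pcm_dynamic_range_db(bits), 1), "dB")
```

In other words, the quantization noise floor of a 24 bit file sits far below anything audible, so the mastering engineer has room to process before reducing to 16 bits.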
5) It doesn't sound amazing after mastering. Why not?
The first thing we should consider is: did it sound amazing before mastering? If not, then the solution is to keep practicing our mixing skills until our songs sound the way we want them to before they're sent off to be mastered. In my early days of mixing I expected the mastering process to help make my songs sound better, that somehow I and this professional engineer would be a team in delivering a great sounding song. That isn't the case. Mastering will make our songs sound more correct, will push the gain up cleanly, but it won't fix the fact that we've chosen the wrong kick sample or that we've turned our pad up too loud in the mix.
If the song sounded great before mastering, then discuss the issues with the engineer. The engineers I've worked with were happy to rectify issues. I wouldn't push too hard though - if the second version is still nowhere near optimal, leave some time before going to that engineer again. For me, this gave me enough time to learn that all the issues with the finished song came down to my inferior mixing skills!
I hope this has been helpful for some of you. Bear in mind that not every artist has their songs mastered. Some of them like to be responsible for the process from start to finish. Once again, we all make our own decisions. I can see the benefit in running a song through another set of ears in an optimal listening environment, but I also totally embrace the mindset of complete responsibility!
Keep making great music!
Fabian
Thursday, March 24, 2011
Hearing what a compressor does
Some effects are easier to hear than others. Run a synth lead through a big hall reverb and we'll notice the massive space around it. Grab an EQ and boost 5 kHz and we'll hear the high end definition (or shrillness, depending on the original sound) coming through.
Compression, however, can initially seem like a black box. We know it's supposed to "pump up" our sounds, make them bigger, stronger, louder, better. But how? And what are all those controls and why do some of them do nothing, even though we turn them all the way from one end to the other?
Well, let's start with the basics. A compressor turns the volume down when the incoming signal goes over the threshold we've set. That's all a compressor actually does - it turns the volume down. In practice, though, there's a bit more to it: how quickly it turns the volume down, how long it keeps the volume down once the incoming signal drops back below the threshold, and how much gain can be added to the overall output signal now that the loudest parts have been turned down.
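That "turns the volume down" behaviour can be sketched in a few lines. This is a minimal illustration of a compressor's static gain curve (ignoring attack and release for the moment) - a hypothetical sketch, not any particular plugin's algorithm:

```python
def compressed_level(input_db, threshold_db, ratio):
    """Static compressor curve: levels above the threshold are
    scaled back by the ratio; levels below pass through unchanged."""
    if input_db <= threshold_db:
        return input_db
    # Only the portion above the threshold is reduced.
    return threshold_db + (input_db - threshold_db) / ratio

# A -6 dB peak through a compressor with a -18 dB threshold and a
# 4:1 ratio: the 12 dB of overshoot is squeezed down to 3 dB.
print(compressed_level(-6.0, -18.0, 4.0))  # -15.0
```

Everything else - attack, release, hold, makeup gain - is about when and how smoothly this gain change is applied.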
There is a way to hear what a compressor is doing to the signal, and to set the compressor so it does what we want it to do, in an appealing musical way. This is preferable to choosing minimum attack and release, and maximum ratio, which will make the signal hotter, but may not make our song sound better. It's usually better if the compressor works in the same groove as the music.
First, set the threshold so that the compressor is being triggered with every note/ beat that comes through it. This means we'll be able to clearly hear how the compressor operates on the sound.
Next, set the ratio to maximum. Yes, I just spoke against "maximum-ratio-itis", but here it's just an initial setting, which will allow us to clearly hear the timing of the attack and release parameters.
Next, set the attack and release to minimum. Once again, it's not likely they'll end up there, but these starting positions will allow us to hear clearly what the compressor is doing to our sound.
If the compressor has a "hold" parameter, set it to zero. The makeup gain/ auto makeup isn't important at this point - the final output level can be set when we have the compressor working on the sound the way we want it to.
Okay - the compressor should be heavily compressing the sound as it's playing.
The first parameter we'll adjust is the attack parameter. My main consideration here is whether the sound has an attack transient, and whether I care about the transient. If it's a percussive loop, or a kick drum, I'll almost always slow the attack down so that the attack transient comes through cleanly. If we leave the attack at minimum, the front end of the sound will dull considerably - rather than a sharp, clear front end of the sound which cuts through a mix, we'll be left with something which sounds much more like it's being hit with cardboard. The body of the sound will be fine, but the clear attack transient will disappear. How slow the attack should be is totally dependent on the sound and how we want the attack transient to sound. For myself, with some lead sounds I'm happy to lose the attack entirely. For pads, which often have a very slow attack phase, I'm happy to leave the compressor's attack at minimum.
Now that we have the attack where we want it, our focus can turn to the release parameter. My main consideration here is whether the sound is rhythmic and needs to fit into the groove of the song. Pads aren't rhythmic - the release can be left as fast as possible. Drum loops are very rhythmic, and will very likely work against the song's groove if the release is at minimum. We want the compressor to move up and down in volume in time with the song, rather than in some arbitrary way independent of the song's natural groove. So when setting the release parameter, start bobbing your head in time with the song, or get up and dance, or whatever, and see where the release parameter best enhances the groove. This is important - this is the difference between whether or not people feel like moving their bodies to our song!
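One way to get into the right ballpark before fine-tuning by ear is to relate the release time to the song's tempo, so the compressor recovers on a musically meaningful note length. A rule-of-thumb sketch (the helper name is mine, not a standard function):

```python
def note_length_ms(bpm, note_fraction=1/4):
    """Length of a note value in milliseconds at a given tempo.
    note_fraction is relative to a whole note (1/4 = quarter note)."""
    beat_ms = 60_000.0 / bpm            # one quarter-note beat
    return beat_ms * (note_fraction / (1 / 4))

# At 138 BPM (a typical trance tempo), a 16th note is ~109 ms -
# a candidate starting point for a release that breathes in time.
print(round(note_length_ms(138, 1/16), 1))
```

The final setting should still be chosen by ear - this just tells us which neighbourhood of the release knob to start dancing in.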
Right, so we have the compressor working on the part of the sound we want it to work on, and the level rising back up in time with our song. Now it's time to adjust how much compression we want to apply. The last two controls work in tandem - a very low threshold, where the compressor is working on the full range of the sound, combined with quite a low ratio (say 1.20:1) will give us a large reduction in dynamic range. So will a much higher threshold which only catches the loudest parts of the sound, combined with a much higher ratio (say 8:1). As with so many things, it comes down to personal choice - we're artists, after all! We may want to even out the instrument and make it stronger and steadier in the mix, but to still retain enough dynamic range to allow the instrument to play more quietly (say, during a trance breakdown) for contrast. We may want to slam it hard for a very "in your face" sound. Do we want to compress just the spikes, or the full range of the sound? I'm not going to attempt to answer these questions, since we're all going to answer them differently.
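To make that threshold/ratio tradeoff concrete, here's a small sketch (hypothetical numbers) showing how two very different settings can apply a similar amount of gain reduction to the same peak:

```python
def gain_reduction_db(input_db, threshold_db, ratio):
    """dB of gain reduction a static compressor applies to one peak."""
    over = max(0.0, input_db - threshold_db)
    return over - over / ratio

peak = -6.0  # dB, the loudest part of the sound

# Low threshold, gentle ratio: the compressor works on almost everything.
print(gain_reduction_db(peak, -30.0, 1.2))   # ~4 dB of reduction

# High threshold, steep ratio: only the top of the peak is caught.
print(gain_reduction_db(peak, -10.6, 8.0))   # ~4 dB of reduction
```

Both settings tame the peak by a similar amount, but the first evens out the whole performance while the second leaves everything below the threshold untouched - which is exactly the artistic choice described above.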
I know some people like to deal in approximate settings, so here are some typical starting settings I use for sounds in trance. Where they end up is totally dependent on the sound and the song:
Kick - I generally don't compress kicks. The samples I use are loud and punchy enough for me. If anything, I use a transient designer to heighten the attack portion of the sound.
Low/ Main Bass - I often use a vintage emulation compressor on my main bass, though my bass is usually very solid in the mix without it. I use this compressor more for the tone than the compression it imparts. More often, I'll run the bass through a sidechain compressor to briefly push it out of the way of the kick. This pushes the bass down quite heavily on each kick, with the bass rushing back up as fast as sounds good, musically.
Mid/High Basses - again, these usually receive a sidechain compression treatment similar to the low/ main bass. When I do compress them with standard compression, I use a reasonably fast attack (say 10-20 ms), a reasonably fast release (20-40 ms) and aim for 3-6 dB of gain reduction. It depends how "weighty" I want the sound to be in relation to the other sounds around it. This is a big change from my early days, when my thoughts were "everything needs to be ultra loud and in your face". Now I understand that the more in-your-face one sound is, the less in-your-face all the other sounds become.
Pads - If the pads feature a fair bit of modulation and drift in and out a bit much for my liking, I'll compress them in a similar way to the synth group (see below) - in short - very low threshold and ratio, minimum attack and release.
Synths - I often compress these twice, firstly on their own, and secondly via a sidechain compressor. The settings I use for the first compressor operating on the synth group are as follows: a very low threshold, so the compressor is always operating on the sound. Then quite a low ratio (say 1.50:1), which over the range of operation still amounts to a decent amount of gain reduction, just spread over a large dynamic range. The attack and release parameters are set to minimum. Since the compressor is operating non-stop, I assume that having these set as fast as possible means the compressor makes adjustments as fast as possible. This treatment brings the entire dynamic range of the synths together smoothly, without any non-linear points where the compression is coming in and out.
Drums - I generally don't compress Hi Hats and Claps/Snares.
Percussive Loops - As for other elements, these are sometimes compressed using a sidechain compressor, using a release setting which makes the loop groove nicely around the kick drum.
Sound Effects - I generally don't compress these. If I do, it's generally via a sidechain compressor to add the pumping effect to up- and down-lifter effects.
Overall mix - As for the pads/ synth group, a very low threshold and extremely low ratio - lower than for individual groups - usually only 1.10:1, so hardly noticeable. However, it does mean I don't need to work the master limiter quite as hard.
So that's my approach to compression in a nutshell. I hope you've found it useful!
Fabian
Tuesday, March 1, 2011
Sound Selection in Trance
After composition, the choice of sounds that go into a song is the next most important factor in achieving an excellent end result. A good set of sounds playing a good composition should sound very listenable before any additional processing (e.g. EQ, delay, reverb or mastering) is applied. These processes can enhance the sound, but they won't fix poor composition or sound choices.
When I started mixing/producing, I focused on the "showpiece" sounds and treated other sounds - such as the kick and drum sounds - as an afterthought. My attitude was "it's a kick - it sounds big and full. Easy, done". Then I wondered why my "showpiece" sounds didn't sound so good. In truth, every sound in a song affects how the other sounds are perceived by the listener.
Today I'll outline the thoughts that go through my head as I choose each sound that goes into one of my songs. There are many different approaches to the art of music production – your method may vary!
I often start with the lead sound. I already have a melody ready from a composition session, so now I look for a sound, or combination of sounds, which sound great with that melody. Sometimes I have a sound in mind, and go straight to it, but usually this involves flipping through presets. I choose a synth I think will sound great for the melody, and see if any sound jumps out at me.
From there I tweak the sound – I make adjustments to the amplitude envelope settings, or the filter. I often turn off the reverb or delay if the patch has these. I generally prefer to add these during the mixdown, where I have much more control over them.
Once I have my main sound, I usually layer it with a complementary sound. If my main lead sound is quite centered and defined, I often combine it with a lush, wide sound.
Whenever I add sounds which have significant stereo processing applied, I check their mono compatibility straight away. There's no point in waiting until the song is almost finished to discover that the lead will disappear in some listening situations. I have a plugin on the master channel I can turn on to hear the song quietly in mono. If there are stereo issues, I either look to adjust the stereo treatment (if the sound is otherwise fantastic) or I choose a different sound. Adjusting the stereo treatment usually involves adjusting the timing of the stereo effect by a few milliseconds, whether it's a delay, chorus or any other stereo effect.
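Why do a few milliseconds matter? When the left and right channels carry the same signal offset by a short delay, summing to mono creates comb filtering - certain frequencies cancel outright. A minimal sketch (illustrative values) of where the first cancellation notch lands:

```python
def first_notch_hz(delay_ms):
    """When a signal is summed with a copy of itself delayed by
    delay_ms, the first full cancellation occurs where the delay
    equals half a period: f = 1 / (2 * delay)."""
    return 1000.0 / (2.0 * delay_ms)

# A 1 ms L/R offset puts the first mono notch at 500 Hz -
# right in the musically critical midrange.
print(first_notch_hz(1.0))   # 500.0
# Shortening the offset to 0.25 ms moves the notch up to 2 kHz.
print(first_notch_hz(0.25))  # 2000.0
```

This is why nudging a stereo delay or chorus time by a few milliseconds can rescue a sound's mono compatibility: it shifts the cancellation notches away from the frequencies the sound lives in.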
MY MAIN THOUGHTS IN CHOOSING A LEAD: "strong enough to play by itself", "expressive", "distinctive" and "just as clear in mono".
Once the lead is in place, I add the next most important element to the overall mix. This is usually the kick, but could also be the main bass sound. For this example I'll go with the kick.
I generally have a sense of the type of kick that will fit with the lead, but am still open to serendipity - I have the lead looping while I browse through a library of kick samples. I don't listen to the kick in isolation (or any of the subsequent sounds) - sounds only have meaning within the context of the mix. I note down all the samples which sound decent in relation to the lead, to narrow the selection to a small number – maybe five to ten.
When I started making trance, I was more inclined to go with the "coolest, most hard-hitting" kick sample, without regard for whether it was appropriate for the lead and the overall song. Now, this can work for producers who have a very defined sound – in this case, the kick (and maybe the bass) would already be in place when selecting lead sounds, so it'd be fine - leads would be chosen which fit with the kick and bass.
Nowadays my main thoughts are "what does the kick do to the lead"? A kick can make a lead feel closed in, or open it up, make it seem bright or dull.
From this narrowed-down selection it's usually a fairly quick process to narrow it down to one. I find it helpful to turn on my “quiet-mono” plugin at times during this process too - If things sound good under these conditions, they'll generally sound good under most conditions. I do the "quiet-mono" thing for subsequent elements as well.
MY MAIN THOUGHTS IN CHOOSING A KICK: "fits with/ enhances the lead", "oomph", "enough definition to cut through the mix" and "tight".
Once the kick is in place, I turn my attention to the main bass pattern and sound. In general terms, if the melody is busy, I make the bass pattern simple. Vice versa, a simple, sparse melody will usually go well with a busier, more driving bassline. The sound of the bass needs to fit with both the kick and the lead. In broad terms, I like to match the presence/hardness of the kick - a smooth, bassy kick will usually go well with a similar bass.
The relationship between the kick and bass is crucial to establishing a solid foundation for the mix. I like both elements to carry considerable weight – I want to feel the impact of both hitting me in the chest, rather than a clicky kick and subby bass. In terms of stereo placement, I usually have both the kick and main bass in the center - either as mono channels, or as stereo channels with the vast majority of the content up the middle. If I do choose a wider main bass I'll use a plugin (OtiumFX BassLane) to keep the low frequencies (below around 200 Hz) in the middle.
MY MAIN THOUGHTS IN CHOOSING A BASS: "fullness", "drive" and "solid".
From here I turn to the Hi Hat and Clap/Snare samples. Once again, these involve browsing through sample libraries while the other sounds are playing. I narrow down the selections, I do the “quiet-mono” thing, I turn off the bass and lead to hear how the samples will sound in the intro and outro. All these factors play a part in the choice of sounds.
I look for sounds which fit into the space well – they have an appropriate stereo placement/spread, they aren't too dry/wet relative to the other sounds (although dry sounds can have reverb/ delays applied to them later on). I listen to the weight of the sounds – they shouldn't be too heavy or thin. I also pay attention to the length of the sounds and how these affect the groove. It's harder to pinpoint what I'm looking for in these sounds – a long gated clap could work well with a given kick and bass, as could a very short snare.
MY MAIN THOUGHTS IN CHOOSING A HI HAT/ CLAP/ SNARE: “original/fresh”, "come through clearly in a full mix" and “good sense of space”.
Sometimes I add percussive elements at this point – I'm happy to create my own from one-shots, or use loops. Both methods are fine by me. To those who are fiercely anti-loop because it's not being original, I say that to create something truly original, first we would need to create the universe. The argument goes all the way down - why is using someone else's one-shot samples okay? If we create all our own one-shots, why is okay to use a drum which someone else has made?
Can we still respect ourselves, can we still sleep at night if we haven't created the universe? I can. It's up to each of us to make that choice.
The percussion is chosen/ created in a similar manner to the Hi Hat and Clap/Snare, though I do also focus on the way the timing of the rhythm interacts with the bass and lead patterns. I look for sounds which fit in around the Kick, Hi Hat and Clap/Snare – anything too similar in timbre to those will obscure the existing sound. I'm not overly concerned with the stereo space of the loop, since I often (though not always) spread the loop out to the sides anyway.
MY MAIN THOUGHTS IN CHOOSING PERCUSSION: "drive", "enhanced space around other drum elements", "not too heavy or piercing".
At this point I add in any additional bass layers, if I feel they would fit well. These sounds need to fit well in the overall context (naturally), but also need to lock in well with the kick and main bass. My main bass will usually be quite centered, while I use the higher bass sounds to fill out the stereo field with lush ear candy.
Again, I choose a synth and run through patches. The “quiet-mono” test tells me if the sound will get lost in a full mix. The pattern(s) played by the higher bass sounds will complement the main bass and lead patterns. When a sound is chosen, I usually apply a highpass filter to remove the portion of the low end which sits in the same region as the kick and main bass – enough to clear out space for these sounds down low, but not enough to make the high bass too thin. I often route the high bass sound to a send channel with a left-right delay, which has delays timed so the notes fall in between the notes of the high bass pattern.
MY MAIN THOUGHTS IN CHOOSING HIGHER BASS LAYERS: "expensive", "full and wide", "movement/modulation", "bass progression clearly audible even at very low volumes".
At this point I mute the lead sound and export the mix of drum and bass elements. I run this through a frequency analysis process which compares it to a library of other drum and bass sections from great sounding trance songs. This allows me to quickly see where the mix is at, relative to the average and the extremes. If I've turned my kick up a bit too high, this quickly points out that 65 Hz is extremely high and that I should turn the kick (or the main bass, or both) down. It lets me know if a Hi Hat sample has a piercing resonant spike at 16 kHz, in which case I first try turning down the Hi Hat, then taming that frequency if that doesn't work.
Right – so now I have a very decent sounding foundation of drums and bass sounds to slot the other sounds on top of. I unmute the lead and listen to it in relation to the newly balanced foundation. Often the low frequencies of the lead muddy up the bass sounds, so I set a highpass filter to an appropriate frequency to rectify this.
The last main element I slot in is the pad, whether it's a single sound or a combination. The pad sits between the basses and the leads. I look for a sound which doesn't have too much going on in frequency ranges which conflict with those sounds. I often spread the pad out wide to the sides of the stereo field to provide a large, lush backdrop to the other sounds. I do the quiet-mono thing here too, though even the most perfect pad won't come through amazingly well under these conditions – nor should it, it would mean the pad is probably turned up way too loud! I also listen to the pad with just the lead sounds playing, to hear how they could interact during the breakdown. As an aside, I rarely add reverb to my pad sounds, whether directly or via a send channel. I used to, but I've found it really cleans things up when I don't do it, and pad sounds are generally lush/ sustained enough that they don't require any additional treatment to lengthen/ fatten them.
MY MAIN THOUGHTS IN CHOOSING A PAD: "deep background", "filling the gaps with lushness", "warmth".
Beyond the main sounds, additional sounds fit in the gaps and are always auditioned quiet-mono, since at this stage many sounds will disappear in the full mix. I always listen to both the way the new sound comes through, and whether it obscures any of the existing sounds. The second part is a little more difficult - once we've heard a sound looping for a while, it's easy to take it for granted. That's why the "next morning quiet listening test" is so important - if anything is too quiet or loud or odd it'll jump out.
So there's my approach. It has changed a lot from when I started, when my thoughts were more like "grab all the coolest sounds and put them together". It will probably (hopefully!) keep changing as I have new mixing experiences.
I hope you've found this useful!
Fabian
When I started mixing/producing, I focused on the "showpiece" sounds and took other sounds - such as the kick and drum sounds - more lightly. My attitude was "it's a kick - it sounds big and full. Easy, done". Then I wondered why my "showpiece" sounds didn't sound so good. In truth, every sound in a song affects how other sounds are perceived by the listener.
Today I'll outline the thoughts that go through my head as I choose each sound that goes into one of my songs. There are many different approaches to the art of music production – your method may vary!
I often start with the lead sound. I already have a melody ready from a composition session, so now I look for a sound, or combination of sounds, which sound great with that melody. Sometimes I have a sound in mind, and go straight to it, but usually this involves flipping through presets. I choose a synth I think will sound great for the melody, and see if any sound jumps out at me.
From there I tweak the sound – I make adjustments to the amplitude envelope settings, or the filter. I often turn off the reverb or delay if the patch has these. I generally prefer to add these during the mixdown, where I have much more control over them.
Once I have my main sound, I usually layer it with a complementary sound. If my main lead sound is quite centered and defined, I often combine it with a lush, wide sound.
Whenever I add sounds which have significant stereo processing applied, I check their mono compatibility straight away. There's no point in waiting until the song is almost finished to discover that the lead will disappear in some listening situations. I have a plugin on the master channel I can turn on to hear the song quietly in mono. If there are stereo issues, I either look to adjust the stereo treatment (if the sound is otherwise fantastic) or I choose a different sound. Adjusting the stereo treatment usually involves adjusting the timing of the stereo effect by a few milliseconds, whether it's a delay, chorus or any other stereo effect.
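The "quiet-mono" check can be reasoned about numerically. Here is a minimal sketch in plain Python (using made-up sample buffers rather than real audio files) of why summing to mono exposes phase problems in a heavily stereo-processed sound:

```python
import math

def mono_compatibility(left, right):
    """Sum a stereo pair to mono and report how much energy survives.

    Returns the ratio of mono RMS to the average stereo RMS. Values well
    below 1.0 suggest phase cancellation - the sound will thin out or
    vanish on mono playback systems.
    """
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))

    mono = [(l + r) / 2 for l, r in zip(left, right)]
    stereo_rms = (rms(left) + rms(right)) / 2
    return rms(mono) / stereo_rms if stereo_rms else 0.0

# A test tone that is identical in both channels sums cleanly...
n = 1000
tone = [math.sin(2 * math.pi * 50 * i / n) for i in range(n)]
in_phase = mono_compatibility(tone, tone)

# ...but the same tone with one channel inverted cancels completely,
# which is what a badly timed chorus or delay can do to parts of a lead.
out_of_phase = mono_compatibility(tone, [-x for x in tone])
```

In practice a metering or utility plugin does this in real time; the point is simply that any stereo content which is out of phase between the channels disappears from the mono sum.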
MY MAIN THOUGHTS IN CHOOSING A LEAD: "strong enough to play by itself", "expressive", "distinctive" and "just as clear in mono".
Once the lead is in place, I add the next most important element to the overall mix. This is usually the kick, but could also be the main bass sound. For this example I'll go with the kick.
I generally have a sense of the type of kick that will fit with the lead, but am still open to serendipity - I have the lead looping while I browse through a library of kick samples. I don't listen to the kick in isolation (or any of the subsequent sounds) - sounds only have meaning within the context of the mix. I note down all the samples which sound decent in relation to the lead, to narrow the selection to a small number – maybe five to ten.
When I started making trance, I was more inclined to go with the "coolest, most hard-hitting" kick sample, without regard for whether it was appropriate for the lead and the overall song. Now, this can work for producers who have a very defined sound – in this case, the kick (and maybe the bass) would already be in place when selecting lead sounds, so it'd be fine - leads would be chosen which fit with the kick and bass.
Nowadays my main thought is "what does the kick do to the lead?" A kick can make a lead feel closed in or open it up, make it seem bright or dull.
From this shortlist it's usually a fairly quick process to settle on one. I find it helpful to turn on my “quiet-mono” plugin at times during this process too - if things sound good under these conditions, they'll generally sound good under most conditions. I do the "quiet-mono" thing for subsequent elements as well.
MY MAIN THOUGHTS IN CHOOSING A KICK: "fits with/ enhances the lead", "oomph", "enough definition to cut through the mix" and "tight".
Once the kick is in place, I turn my attention to the main bass pattern and sound. In general terms, if the melody is busy, I make the bass pattern simple. Conversely, a simple, sparse melody will usually go well with a busier, more driving bassline. The sound of the bass needs to fit with both the kick and the lead. In broad terms, I like to match the presence/hardness of the kick - a smooth, bassy kick will usually go well with a similar bass.
The relationship between the kick and bass is crucial to establishing a solid foundation for the mix. I like both elements to carry considerable weight – I want to feel the impact of both hitting me in the chest, rather than a clicky kick and subby bass. In terms of stereo placement, I usually have both the kick and main bass in the center - either as mono channels, or as stereo channels with the vast majority of the content up the middle. If I do choose a wider main bass I'll use a plugin (OtiumFX BassLane) to keep the low frequencies (below around 200 Hz) in the middle.
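The usual way plugins keep low frequencies centered is mid/side processing: encode left/right into mid (sum) and side (difference), filter the lows out of the side channel, then decode back. The sketch below is a crude illustration of that idea in plain Python - not BassLane's actual algorithm, and a one-pole filter is far blunter than what a real plugin would use:

```python
import math

def highpass(xs, cutoff_hz, sample_rate=44100):
    """One-pole highpass (roughly 6 dB/oct) - a crude stand-in for the
    much more transparent filtering a dedicated plugin performs."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in xs:
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

def center_lows(left, right, cutoff_hz=200.0):
    """Encode to mid/side, highpass the side channel, decode back.
    Content below the cutoff ends up mono (i.e. centered)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    side = highpass(side, cutoff_hz)
    new_left = [m + s for m, s in zip(mid, side)]
    new_right = [m - s for m, s in zip(mid, side)]
    return new_left, new_right

# Pure "side" content at 50 Hz (left and right out of phase) is
# strongly attenuated, pulling the low end back to the middle:
sr = 44100
low = [math.sin(2 * math.pi * 50 * i / sr) for i in range(sr // 4)]
left_out, right_out = center_lows(low, [-x for x in low])
```

The design point is that the mid channel passes through untouched, so only the stereo (side) portion of the low end is removed.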
MY MAIN THOUGHTS IN CHOOSING A BASS: "fullness", "drive" and "solid".
From here I turn to the Hi Hat and Clap/Snare samples. Once again, these involve browsing through sample libraries while the other sounds are playing. I narrow down the selections, I do the “quiet-mono” thing, I turn off the bass and lead to hear how the samples will sound in the intro and outro. All these factors play a part in the choice of sounds.
I look for sounds which fit into the space well – they have an appropriate stereo placement/spread, they aren't too dry/wet relative to the other sounds (although dry sounds can have reverb/ delays applied to them later on). I listen to the weight of the sounds – they shouldn't be too heavy or thin. I also pay attention to the length of the sounds and how these affect the groove. It's harder to pinpoint what I'm looking for in these sounds – a long gated clap could work well with a given kick and bass, as could a very short snare.
MY MAIN THOUGHTS IN CHOOSING A HI HAT/ CLAP/ SNARE: “original/fresh”, "come through clearly in a full mix" and “good sense of space”.
Sometimes I add percussive elements at this point – I'm happy to create my own from one-shots, or use loops. Both methods are fine by me. To those who are fiercely anti-loop because it's not being original, I say that to create something truly original, first we would need to create the universe. The argument goes all the way down - why is using someone else's one-shot samples okay? If we create all our own one-shots, why is it okay to use a drum which someone else has made?
Can we still respect ourselves, can we still sleep at night if we haven't created the universe? I can. It's up to each of us to make that choice.
The percussion is chosen/ created in a similar manner to the Hi Hat and Clap/Snare, though I do also focus on the way the timing of the rhythm interacts with the bass and lead patterns. I look for sounds which fit in around the Kick, Hi Hat and Clap/Snare – anything too similar in timbre to those will obscure the existing sound. I'm not overly concerned with the stereo space of the loop, since I often (though not always) spread the loop out to the sides anyway.
MY MAIN THOUGHTS IN CHOOSING PERCUSSION: "drive", "enhanced space around other drum elements", "not too heavy or piercing".
At this point I add in any additional bass layers, if I feel they would fit well. These sounds need to fit well in the overall context (naturally), but also need to lock in well with the kick and main bass. My main bass will usually be quite centered, while I use the higher bass sounds to fill out the stereo field with lush ear candy.
Again, I choose a synth and run through patches. The “quiet-mono” test tells me if the sound will get lost in a full mix. The pattern(s) played by the higher bass sounds will complement the main bass and lead patterns. When a sound is chosen, I usually apply a highpass filter to remove the portion of the low end which sits in the same region as the kick and main bass – enough to clear out space for these sounds down low, but not enough to make the high bass too thin. I often route the high bass sound to a send channel with a left-right delay, which has delays timed so the notes fall in between the notes of the high bass pattern.
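Timing a delay so the echoes fall between the notes is just arithmetic on the tempo. A small helper (the 138 BPM figure below is my own example of a typical trance tempo, not from the post):

```python
def delay_ms(bpm, division=16, dotted=False, triplet=False):
    """Delay time in milliseconds for a note division at a given tempo.

    division is the note value (4 = quarter, 8 = eighth, 16 = sixteenth).
    A dotted value is 1.5x the straight time; a triplet is 2/3 of it.
    """
    beat_ms = 60000.0 / bpm          # one quarter note
    ms = beat_ms * 4.0 / division    # straight division
    if dotted:
        ms *= 1.5
    if triplet:
        ms *= 2.0 / 3.0
    return ms

# At 138 BPM a straight-16ths bass pattern repeats every ~108.7 ms;
# offsetting the delay by a 32nd (~54.3 ms) drops the echoes into
# the gaps between the notes.
gap = delay_ms(138, 32)
```

Most delay plugins offer tempo-synced divisions that do this for you, but the formula is handy for hardware units that only accept milliseconds.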
MY MAIN THOUGHTS IN CHOOSING HIGHER BASS LAYERS: "expensive", "full and wide", "movement/modulation", "bass progression clearly audible even at very low volumes".
At this point I mute the lead sound and export the mix of drum and bass elements. I run this through a frequency analysis process which compares it to a library of other drum and bass sections from great sounding trance songs. This allows me to quickly see where the mix is at, relative to the average and the extremes. If I've turned my kick up a bit too high, this quickly points out that 65 Hz is extremely high and that I should turn the kick (or the main bass, or both) down. It lets me know if a Hi Hat sample has a piercing resonant spike at 16 kHz, in which case I first try turning down the Hi Hat, then taming that frequency if that doesn't work.
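The post doesn't name the analysis tool, but the comparison step itself is simple. A toy sketch, assuming per-band levels (in dB) have already been measured for the mix and for each reference track - the band frequencies and numbers here are invented for illustration:

```python
def flag_outliers(mix_levels, reference_levels):
    """Compare a mix's per-band levels (dB) against a library of
    reference mixes, flagging bands outside the reference extremes.

    mix_levels: {band_hz: level_db}
    reference_levels: list of dicts covering the same bands.
    Returns {band_hz: ('high' or 'low', dB beyond the nearest extreme)}.
    """
    flags = {}
    for band, level in mix_levels.items():
        refs = [r[band] for r in reference_levels]
        lo, hi = min(refs), max(refs)
        if level > hi:
            flags[band] = ('high', level - hi)
        elif level < lo:
            flags[band] = ('low', lo - level)
    return flags

# A kick pushed too hard shows up as an outlier around 65 Hz,
# while an in-range hi-hat band passes silently:
library = [{65: -8.0, 16000: -30.0}, {65: -10.0, 16000: -28.0}]
report = flag_outliers({65: -4.0, 16000: -29.0}, library)
```

A real tool would average long-term spectra over whole sections and use many narrow bands, but the decision logic - where does my mix sit relative to the average and the extremes - is the same.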
Right – so now I have a very decent sounding foundation of drums and bass sounds to slot the other sounds on top of. I unmute the lead and listen to it in relation to the newly balanced foundation. Often the low frequencies of the lead muddy up the bass sounds, so I set a highpass filter to an appropriate frequency to rectify this.
The last main element I slot in is the pad, whether it's a single sound or a combination. The pad sits between the basses and the leads. I look for a sound which doesn't have too much going on in frequency ranges which conflict with those sounds. I often spread the pad out wide to the sides of the stereo field to provide a large, lush backdrop to the other sounds. I do the quiet-mono thing here too, though even the most perfect pad won't come through amazingly well under these conditions – nor should it, it would mean the pad is probably turned up way too loud! I also listen to the pad with just the lead sounds playing, to hear how they could interact during the breakdown. As an aside, I rarely add reverb to my pad sounds, whether directly or via a send channel. I used to, but I've found it really cleans things up when I don't do it, and pad sounds are generally lush/ sustained enough that they don't require any additional treatment to lengthen/ fatten them.
MY MAIN THOUGHTS IN CHOOSING A PAD: "deep background", "filling the gaps with lushness", "warmth".
Beyond the main sounds, additional sounds fit in the gaps and are always auditioned quiet-mono, since at this stage many sounds will disappear in the full mix. I always listen to both the way the new sound comes through, and whether it obscures any of the existing sounds. The second part is a little more difficult - once we've heard a sound looping for a while, it's easy to take it for granted. That's why the "next morning quiet listening test" is so important - if anything is too quiet or loud or odd it'll jump out.
So there's my approach. It has changed a lot from when I started, when my thoughts were more like "grab all the coolest sounds and put them together". It will probably (hopefully!) keep changing as I have new mixing experiences.
I hope you've found this useful!
Fabian
Friday, February 4, 2011
Musically Obsolete
I recently read an article about the death of rock music. Sure, many people may have said that over the years, but the numbers quoted in the article seemed pretty compelling - rock music is rapidly disappearing from the charts. Maybe there's only so many new ways you can use a drum kit, a bass, one or two guitars and a vocalist. After hearing a thousand songs, there aren't many fundamentally different chord progressions or rhythms or drum sounds.
The same would be true for most genres. After the pioneers have come in and created a fresh sound, and the second wave of artists have pushed the boundaries and made the style as popular as it's ever going to be (I'm simplifying and generalizing here), there isn't that much further for the style to go. Every style eventually makes way for the next.
How we approach change is for each of us to decide. If we've built up a lot of experience in a particular style, we may be loath to move on to something new (though hopefully most of our experience can be readily adapted to the new sounds we're working with). A lot of people will say that we should make the music we love, and not worry about genre definitions. There is some truth to that, and hopefully most of the time we're working on music we enjoy. But, as in other areas of life, it's good to be flexible about the genres we work in. How much we want to specialize and excel in one particular style is up to each of us. Maybe the audience size is of little importance to us. In any case, the internet makes it easier to reach a widely scattered niche audience. Or maybe audience size is very important, and we'd like to have a lot of people enjoying our work, meaning we invest a lot of time in listening to what's popular and seek inspiration there. This isn't necessarily selling out - at any point in time there are several popular genres, and it'd be unfortunate if none of them appealed to us.
It can be disheartening to realize that we're working in a style which is on the way down (if not yet out). One way to combat this, even if there's not much room to push the genre on a production level, is to write great songs. A really catchy melody or hook over a strong, effective chord progression will connect with people (and may lead to the song being covered in more popular styles in the decades to come). A great melody doesn't seem to go out of style. So, a good question for those of us working in more marginal styles would be "is this a really good song I'm working on? If I strip away all the production and it was played by someone on a piano or a guitar or some other instrument, is it a really good song?" If not, maybe we're creating fantastically produced filler, destined to sink without a trace.
Keep making great music!
Fabian
Tuesday, January 11, 2011
Finishing Songs
I sometimes hear other people talk about the difficulties they're having in finishing songs. They create many small musical fragments, ideas which don't progress to full-blown songs. Sooner or later, someone will usually offer advice along the lines of "just finish it. Finishing songs is a skill like any other and it gets easier with experience". To a degree, this is good advice.
Something to keep in mind is the difference between completing a song and completing our progress as artists. Our current song can only be completed to our current level of ability. It's very likely that six months from now, we could come back to it and make significant improvements to it. Hopefully, we will continue to evolve and improve throughout our lives, which means there will always be a point in the future where we could improve the song we're currently working on. Our progress as artists will never be complete.
In order to finish songs, we have to be comfortable in letting go. We have to let go of the expectation that our work will be perfect. It will only be as good as our current level of ability. We have to let go of the thought that soon we will have a better approach to using reverb in our mixes. Our reverb will sound the way we presently know how to use it. We have to let go of our thoughts about our limitations and simply do the best we can do.
In order to finish songs, we need to clearly delineate our "finishing songs" sessions from our "learning how to improve our bass sound" sessions and our "experimenting with compression" sessions. Can songs be finished while doing these things? Sure, but it's less likely than when we make song completion our primary goal. Sessions where we are learning and experimenting are more likely to result in the many small musical fragments many of us end up with. This is totally fine - these fragments are incredibly valuable steps in our learning. There's no need to turn each one into a full-blown song. Only when we have a sufficiently strong idea, or composition, is it likely that we'll be motivated to carry it through to completion. If our focus is on finishing the song rather than on learning or experimenting, we'll be fine.
Finishing songs is a valuable skill to learn and it does improve each time we do it. Our first few songs may progress from beginning to end in a very rudimentary way - instruments drop in and out, instrument levels are fairly static throughout, transitions from one section to another are functional at best. In time, all these aspects and others are refined and our songs become much more nuanced, much richer experiences for our listeners. If we wait until we have "the perfect sixteen bar loop" before finishing a song, we will still have a lot of learning ahead of us in order to create something worth listening to for longer than sixteen bars.
Finish songs, then let them go. There are plenty of great songs ahead of us.
Keep making great music!
Fabian
Saturday, January 8, 2011
Headaches of Hardware
These days, a lot of us start out using only software for music production. This makes sense – it's a very cheap way to get into music production, since there are a lot of good affordable (or even freeware) tools around. It means that if we find that music production isn't our thing, we haven't lost too much money.
At some point in our journey, a lot of us start wondering about hardware devices (synthesizers, samplers, effects units) and what they can offer us. Are they capable of things our software can't do? Are they the missing link between us and the coveted “pro sound” (whatever that is)? Well, maybe they can do particular things which aren't available to us in software currently (these “particular things” aren't better or worse, just things not currently available as a plugin). Maybe they will bring our productions forward, once we get to know the piece of equipment and how to work it into our production process.
There are a few things to keep in mind, for those of us who have an itch to add hardware to our setup – if we've been entirely software based until now, we need to realize it's not like adding another plugin to our setup. The following points are in no particular order of importance.
Hardware takes up space. This is pretty obvious, but it needs to be taken into consideration. How much room will the unit take up? Will it sit on the desk, or will it require a rack to mount it into? Will the unit be easily accessible from your production vantage point?
Hardware uses cables. Which take up more space. They also need to be the right cables, plugged into the right inputs and outputs. A lot of units will require both audio and MIDI connections.
Hardware requires an audio interface to plug into. If we're using only software, potentially we could get by using only our computer's onboard sound (though the quality may leave a bit to be desired). With hardware, and its aforementioned cables, we'll very likely need an interface which can send audio out to the unit and bring audio in from the unit. We could very well require MIDI connections to get signals into and out of the unit. The interfaces we get may require a FireWire connection, which could be an additional expense.
The more hardware we get, the more issues we create. It's only feasible to daisy chain a few MIDI devices together (each device passes the signal along the chain and responds only to messages on its own MIDI channel). If we want to use a device at the end of the chain, we need to power up every other device. So we may require a MIDI interface with multiple inputs and outputs, or a device which splits a MIDI signal into multiple signals. On the audio side, we may need an audio interface with multiple inputs and outputs – maybe eight or even sixteen. Cables can get messy, so it may be much easier to get an audio patch bay which sits near our audio interface, with all the audio cables running into the back of it. This way we can quickly patch in the units we require from the patch bay. At some point it may even be appealing to look at custom built furniture to accommodate the equipment in such a way that it's all easily accessible and the cables are hidden away.
Hardware is a little more difficult to interface with from our sequencer, though not too much. We need to set the incoming and outgoing audio and MIDI channels so that everything comes through properly. MIDI automation is slightly more unwieldy than plugin automation – it's much easier when we see “filter cutoff” rather than “MIDI CC40” (which the manual has told us controls the filter cutoff for this particular synthesizer).
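Part of why raw CC numbers feel unfriendly is that a MIDI Control Change is just three bytes on the wire. A sketch of building one, with a small name map so an automation lane can read "filter cutoff" instead of "MIDI CC40" (the CC40 mapping is this post's example for one particular synth, not a universal standard):

```python
def cc_message(channel, controller, value):
    """Build a raw 3-byte MIDI Control Change message.

    channel is 1-16 as sequencers usually display it; controller and
    value are 0-127. Status bytes 0xB0-0xBF mark a Control Change.
    """
    if not (1 <= channel <= 16 and 0 <= controller <= 127
            and 0 <= value <= 127):
        raise ValueError("channel/controller/value out of range")
    return bytes([0xB0 | (channel - 1), controller, value])

# A name map for one hypothetical hardware synth, taken from its
# manual, lets us automate by name rather than by number:
CC_NAMES = {"filter cutoff": 40, "resonance": 41}

msg = cc_message(1, CC_NAMES["filter cutoff"], 64)
```

Many sequencers let you attach such name maps to a MIDI track, which recovers most of the convenience of plugin automation.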
Hardware settings generally don't get saved with our project files. Which means we load up the project, turn on the hardware, then find the preset or manually input the settings we used (which we've written down in the relevant track's notes). It is possible to automate program change information (but I must admit, I haven't worked this out myself; it seems fiddly and doesn't seem to work properly with every device).
Particularly frightening is when our hardware starts dying, or stops working completely. A lot of times, it turns out to be a faulty cable, or a cable plugged into the wrong connection. Sometimes it's more critical, meaning a potentially lengthy stay with the repair person (if we can find someone who knows how to fix it). If software exhibits problems, a restart usually helps, or a software update. When hardware exhibits problems, it could soon be lost forever!
For those of us who are prepared to deal with all these issues, hardware can be a wonderful addition to our studio setup. There are high quality units around, built by companies with decades of experience in creating products for musicians. Each unit has its own sonic character, and certain characters aren't currently available as plugins. We all choose our tools and our preferred way of working, whether it's entirely software based, a mix of software and hardware, or an entirely hardware based setup running into an analog mixing desk. All of these have been used countless times to create fantastic sounding music.
I hope this post has been handy for some of you. Keep making great music!
Fabian