Archive

Production

I’ve used Ableton Live for at least 8 years. I still consider it the best DAW for me personally. I’ve tried other DAWs and found it hard to fit into their workflows. Live matches how I think, but as time goes on I am getting more and more disillusioned with Ableton’s development of the software.

I think that Ableton are in some kind of bubble, detached from the reality of actually making music. More and more I see them pandering to both the current making-music-is-super-easy fad and a kind of hipster control-your-DAW-with-your-shoes experimental crowd. If you go to Ableton’s website now you’ll probably see lots of photographs of completely unrealistic hipster “studios” with a wooden stool, exposed brick walls and a MacBook Pro on a shonky wooden picnic table. There’s probably a record player. Records aren’t subject to licence laws, of course.

Ableton seem to have a kind of hyper-minimalist philosophy. They want Live to have as few moving parts as possible and for it to automagically do all kinds of things. I don’t mind this. I like the idea of having something with basic, simple parts that can be combined arbitrarily to make complex projects. Great. I just think they are taking this approach to such an extent that it is basically the same as laziness.

Over the time I have had Live its actual development has dropped to nothing. They gave us 64-bit support (but not 32/64-bit cross-over support, thanks). They improved Live’s sample rate conversion. Great stuff. Then… they just stopped developing it. We got an Amp plug-in, an SSL G-series clone compressor, a pitch detector thing… why? Why an Amp and a new compressor? I don’t understand how these fit in. Analogue models are a dime a dozen; how are they the missing parts of the Live puzzle? Recently Sampler and Simpler were improved, but they are ostensibly the same plug-ins. We got better audio waveform rendering… awesome, thanks.

Where is surround sound support? Where is VST3 support? Where is support for the newest MIDI standard? Why isn’t there a better visibility system for managing large projects? Why is multiscreen support still terrible? Why are audio rendering options so basic? Why can’t I normalise to -1dB?

There are a million tiny things that Ableton could have added that would be invaluable. Like retrospective recording. Or clip versioning. There are also limitations to Live’s UX logic that really need work-arounds. For example, a huge issue with Live’s arrangement logic: if you make a 1-bar drum loop pattern and loop it over, say, 16 bars, you can change it and the change will be reflected across every instance of that clip. Great. But if you decide, which you almost certainly will, to have a fill pattern at the end of every 4 bars, that new fill pattern bisects the original clip into 4 completely separate clips, and the fill itself will be 4 separate clips. Live doesn’t support the idea of one clip being in multiple places.

I have Max for Live but my main use of it is for DAW-level LFO, envelope and envelope follower support, and for Robert Henke’s Granulator plug-in. Max for Live is presented as some kind of utopian wonderland of magic audio development made easy by a GUI and a candy store of exotic community plug-ins. The reality is a development platform that is almost completely impenetrable, and lots of patches that are, let’s face it, almost all rubbish, running inside a framework that, to be fair, integrates well but is at best unreliable. I want DAW-level LFO support. Fruity Loops (the kids’ DAW) has LFO support. And an envelope follower. Live users have to buy an extra to get that. Someone at Ableton could take a week to make an LFO plug-in.

In my early days of making music I went through a period of using whatever plug-ins were around; trouble is that this makes your computer completely unstable. Now I have to be very strict about what plug-ins I’m running. I don’t want a circus of freeware crap running on my PC. It’s revealing that the one instrument I use (which crashes regularly but I forgive it), and IMO the best M4L plug-in out there, was made by a founding member of Ableton. An insider. Things like M4L promise ease of use, but the reality is that because they are often their own dark standard you may as well just learn C++ and make VSTs.

Which relates to what really annoys me about M4L. Any plug-in that is reasonably good is creamed off and sold as a commercial plug-in. This didn’t really happen when I got M4L. It was presented as a development platform, with the implication that it was all about community-driven patches. Later they started selling them on Ableton’s store. So you have to buy M4L to be able to buy plug-ins for something that only runs in Live! It’s a walled garden inside of a walled garden! Why not just release VSTs? They’d be more stable and faster. Few M4L plug-ins use the actual Live integration APIs, so most would work fine as VSTs.

M4L should be part of Live. The people at Bitwig are heading that way. I think FL already does something like that.

As I said, they are pandering to the current easy-music fad. A perfect example is the audio-to-MIDI feature they added a while ago. I have literally no idea what this feature is for. Firstly, it obviously doesn’t work. How could it? And if it did… who is it for? It’s clearly designed for people to rip off other people’s music. Why waste time, and insult actual producers with rip-off tooling, when they could have released, say, a send plug-in? Or an option to turn off Master channel FX in the render screen? Or some better colour schemes? Or added wet/dry to any of the plug-ins that don’t have it? Or added an LFO to the group macro section? Or added round-robin to Sampler?

If you look at the change log it’s basically Push all the way down. The open beta changes are all Push. It seems that Live is becoming just the software you need to use Push. I am hoping that the reason they aren’t doing much to Live is that they are going to release Live 10 and don’t want to waste time on 9. I’ve been hoping that for a while though.

I recently, finally, got an analogue synth after only ever using digital ones. I have always been a bit suspicious of digital synths. There was always something off about them. Now that I have had the chance to use an analogue synth first hand I have had my suspicions confirmed: analogue synths sound better by far.

Now, some digital synths are designed to do things that only digital synths can do, so comparing those to analogue synths is a bit unfair. It is true that digital synths have a wider range of timbres and easily do arbitrary control routing, limited only by software architecture, CPU and UI/UX; but digital synths all suffer from the same artifacts.

If you hunt around online you will find people raving about how well particular digital synths recreate the classic analogue sounds. To some extent this is true: analogue-model synths do a good job of recreating the overall tonal characteristics of analogue synths, but they also suffer from some of the following:

  • Aliasing – they all have some amount of nasty harmonic distortion caused by sample aliasing. Some do reduce this to a minimum but it’s always there (see the sketch after this list).
  • Scratchy transients – they just can’t seem to handle very short, hot envelope settings. You often want a spike at the onset of a sound to give it a snap that your ear can catch hold of, so you might have a short envelope opening a filter momentarily, but this often introduces scratchy high frequencies or crackly, spitting distortion in the onset transients.
  • Gritty high end – By far my most serious complaint. The thing that always puts me off. The high frequency component in every digital synth I have ever used always has a kind of brittle, rough texture. The sound is like subtle bit-depth reduction all over the top end. Digital synths never fizz cleanly and I always find it distracting.
  • Bad aliasing during even subtle pitch modulation – one of the main characteristics of analogue synths is looseness. Pitched oscillators will slide around, if only very slightly. Many digital synths recreate this via slight pitch modulation, say, setting an LFO to wobble oscillator pitch or using some pitch-shift plug-in, but even very subtle variations like these immediately incur aliasing with a lot of digital synths.
  • Break down at all extremes – digital synths always break apart with extreme settings. You can’t have LFOs going too fast, fast attack envelopes crackle and spit, high resonance sounds aliased, and high-register notes sound scratchy.
  • Unstable low end – I have noticed that, to my memory, every digital synth I have used unravels at very low frequencies, depending on settings. The low end seems to be detached from the rest of the signal. Maybe something to do with phase coherence of digital filters. Dunno.
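
To make the aliasing point concrete, here is a minimal sketch (assuming Python with numpy; the fundamental frequency and the peak-picking thresholds are arbitrary) of why the “obvious” way of generating a sawtooth produces inharmonic junk: the harmonics that should sit above the Nyquist limit fold back down into the audible range.

```python
import numpy as np

sr = 44100                      # sample rate
f = 1870.0                      # fundamental; chosen so folded harmonics land between true ones
t = np.arange(sr) / sr          # one second of time

naive_saw = 2.0 * ((f * t) % 1.0) - 1.0        # the "obvious" sawtooth formula

spectrum = np.abs(np.fft.rfft(naive_saw * np.hanning(len(naive_saw))))
freqs = np.fft.rfftfreq(len(naive_saw), 1.0 / sr)

# A true sawtooth only has energy at multiples of f. Any strong component well
# away from a harmonic is energy that folded back over the Nyquist limit: aliasing.
strongest = freqs[np.argsort(spectrum)[-40:]]
aliased = sorted(p for p in strongest if min(p % f, f - p % f) > 20.0)
print("inharmonic (aliased) components at:", aliased, "Hz")
```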

It took me about 5 seconds using an analogue synth to hear that none of these problems exist in the analogue world. You can get an analogue synth to play a note at very high pitches and while the sound is whiny and difficult to listen to it sounds pure and intact. The high end is generally exactly as you would want to hear it.

On a subjective note, analogue synths also have a kind of unruly quality that not many digital synths really have. With digital synths you set something and the synth hits it bang on every time. Analogue synths (and in fact gear in general) seem to be a bit more squirmy and springy. The sound also has more depth than many, but perhaps not all, digital synths. You can somehow tell that you are listening to a physical device, which you are. The sound is coming from a real thing that exists. Digital synths often have a flat, two-dimensional quality, as if you are listening to a cardboard cut-out impression of a sound.

Analogue synths probably aren’t for everyone. Sound is very subjective and I am only following my own taste. They are also kind of a pain in the arse to use compared to a VSTi. I now have ground loop issues for a start, but I have already had a quick test using the synth in a track and found it just added something great, processed beautifully and joined right in with the rest of the track. It recorded in one take and the result was just… perfect.

When I encoded this track to mp3 I noticed that the encoding particularly degraded its quality, more so than with other tracks. It got me thinking that a lot of people may not realise what effect mp3 compression has on a track, because they would tend to only have the mp3 version. Similarly, very few people ever listen to music higher than CD quality, with the exception of movie soundtracks on DVDs or Blu-rays.

So I decided to make a comparison that people can download and try out.

When I produce a track it is done at a sample rate of at least 96 kHz. The software I use runs at 32/64-bit floating point (I think it switches to 64-bit for some specific tasks). Such a high bit depth is necessary because of the amount of summing that has to happen. A track could be made of dozens of channels, each itself doing summing internally (sound generators may involve summing many channels, and effects processors will have things like dry/wet mixes).
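
A rough illustration of why that floating-point headroom matters when lots of channels get summed (assuming Python with numpy; the 32 noise “channels” and the trim amount are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
# 32 channels, each one peaking near full scale on its own.
channels = rng.uniform(-0.9, 0.9, size=(32, 44100)).astype(np.float32)

# On a 32-bit float bus the sum can sail way past 1.0 and nothing is lost...
float_bus = channels.sum(axis=0)
print("float bus peak:", float_bus.max())

# ...so pulling the level down afterwards recovers a clean mix.
print("peak after a -24 dB trim:", (float_bus * 10 ** (-24 / 20)).max())

# A 16-bit-style fixed-point bus has no such luxury: anything past full scale
# has to be clamped (or worse, wrapped) on every single sample.
clamped_bus = np.clip(float_bus, -1.0, 1.0)
print("fraction of samples damaged by clamping:", np.mean(clamped_bus != float_bus))
```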

I personally do the final mix-down through an analogue mixing desk. The result is a 32-bit 96 kHz wav file ‘master’. This is then downgraded to 44.1 kHz 16-bit using r8Brain. That file is then turned into several mp3 files (I happen to use FL Studio for that, which is the only thing I have that can encode mp3s, though it probably isn’t the best application for that). NB: the file might not play on all sound cards. You’ll have to check. It’s also 94 MB.

The above file contains the same 16 bars repeated 4 times:

1) The full 32-bit 96kHz version @ 6144 kbps

2) The CD quality version 16-bit 44.1kHz @ 1411 kbps

3) MP3 @ 320 kbps

4) MP3 @ 128 kbps (this is the quality SoundCloud uses)

These were sequenced and then upscaled back into the 96 kHz/32-bit file you can download.
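
For reference, the kbps figures for the two uncompressed versions fall straight out of the PCM arithmetic (sample rate × bit depth × number of channels); a quick check in Python:

```python
# Uncompressed PCM bitrate is just sample rate x bit depth x channel count.
def pcm_kbps(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels / 1000

print(pcm_kbps(96000, 32))   # 6144.0 kbps -> the 32-bit / 96 kHz version
print(pcm_kbps(44100, 16))   # 1411.2 kbps -> CD quality
```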

The first thing to note is that this track has a much bigger dynamic range than most modern music. The final full track has an average dynamic range of 12 (measured by this), most modern tracks probably have a dynamic range of <1dB or something ridiculous. MP3s seem to sound ‘better’ when there’s more going on and less dynamic range just because there’s more stuff to distract you and less dynamic range means less actual information. Aliasing does become an issue with mp3s but mostly at lower bit-rates.

The first thing I noticed is that you can hear a slight difference between 1 and 2. Version 1 has a high-frequency granular texture to it that is more rounded off in 2, which makes sense as 2 must have had some low-pass filtering done. You can hear it if you focus on the reverb sound between the kicks, right in the middle of the stereo field. It’s hard to pick that out without knowing what to listen for, though.

When you listen to the mp3s the things you should be aiming your attention at are the transients. These are the short, snappy sounds that usually happen at the onset of a sound. You’ll notice that mp3s really do ‘pixelate’ them. In the 128 kbps version the transients sound almost like they’ve been passed through a resonant envelope filter and have a tonal quality added to them, like a pitched, poppy, zappy sound that is completely unintended. This is important because transients carry most of the rhythmic structure of the music. It is exactly the audio equivalent of the compression artifacts you get on compressed movies: details smudged, generalised and relocated.

The other area that you should listen to is around the low frequencies. There are all kinds of things happening in that area that are just added by the MP3 codec. The whole area is much more muddy, and filled with random pulses and booms of sound that, again, aren’t intentional, and have their own tonal characteristics.

Depending on your personality this might sound like madness. If your pay-cheque depends on making music, especially soundtracks, this tip really isn’t for you. If you’re a hobbyist, like me, you should be organised to an extent and no more. You need to draw a line. Where? Draw it where organisation costs creativity.

What do I mean? Take an example. Many people say it’s a good idea to have a project template. A blank project file with reusable stuff set up. Maybe you set up sends and channel routes etc. In some DAWs this might save you a lot of time… if that’s the case I would question whether that DAW is a good choice. But really, what are you actually saving? Creativity? This, to me, is bad.

Good organisation:

  • Naming conventions
  • Filing conventions

Bad organisation:

  • Templates
  • Presets

There are quite a lot of tutorials out there on making something sound wider. Most involve some kind of phase effect, which is great for harmonic stuff like pads, leads or vocals. For anything percussive those types of effects mess up transients and can sound a bit odd. And what about increasing the stereo width of something complicated, like a whole track? You can’t really stick a chorus on a master channel. I came up with a very effective way to increase the stereo width of just about anything in a very transparent way. It’s transparent in the sense that you can set it up so as to increase the presence of the stereo features of a signal without the overall levels of the signal changing much. This is good for master channel applications.

The idea is fairly simple:

1) Split the signal into two components, the mono component (called Mid: the part of the signal that is shared between the left and right signal) and the purely stereo component (called Side: the part of the signal that is the difference between the left and right signal).

2) Use a compressor to increase the average level of the Side component, making it sound louder without its peak level being any higher.
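
Steps 1 and 2 boil down to some very simple sums and differences. Here is a minimal sketch, assuming Python with numpy; the squash() function is a crude, made-up stand-in for a compressor (it is not how Live’s Compressor behaves), there just to show the average level of the Side going up while its peak stays put:

```python
import numpy as np

def encode_ms(left, right):
    mid = (left + right) / 2.0      # what the two channels share
    side = (left - right) / 2.0     # what differs between them
    return mid, side

def decode_ms(mid, side):
    return mid + side, mid - side   # back to left/right, loss-free

# Stand-in for "compress the Side": squash the peaks, then make up the gain so
# the average level rises while the peak level stays exactly where it was.
def squash(x, threshold=0.25, ratio=4.0):
    y = np.where(np.abs(x) > threshold,
                 np.sign(x) * (threshold + (np.abs(x) - threshold) / ratio),
                 x)
    return y * (np.max(np.abs(x)) / np.max(np.abs(y)))

rng = np.random.default_rng(1)
left, right = rng.uniform(-0.5, 0.5, size=(2, 44100))
mid, side = encode_ms(left, right)
wide_left, wide_right = decode_ms(mid, squash(side))

rms = lambda x: np.sqrt(np.mean(x ** 2))
print("side RMS before:", rms(side), " after:", rms(squash(side)))
print("side peak before:", np.abs(side).max(), " after:", np.abs(squash(side)).max())
```

The Side’s peak is unchanged; only its average level goes up, which is why the result reads as ‘wider’ rather than simply ‘louder’.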

Splitting a signal into Mid/Side is possible out of the box in Ableton; it’s very simple, just a bit weird.

You need an Effect Rack with two chains in it. Label them Mid and Side. You’re going to stick something in each of these to extract the Mid and Side components out of the stereo signal. First, the Mid chain. This is easy. The Utility plug-in can do this. Just set its width to 0.0%.

Next, the Side chain. This is a bit more complicated. You will use an effect rack inside the ‘Side’ chain. I called it ‘Side Extract’. It will have 2 chains of its own: one called ‘Mid Inverted’, one called ‘Dry’. The Utility can’t make a Side signal, only a Mid signal… but if you subtract the Mid signal from the stereo signal you get a Side signal. So the ‘Side Extract’ rack has in it one chain that does nothing, called ‘Dry’, and a second chain that has a Utility (width set to 0.0%, Phz-L on, Phz-R on) in it.
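
A quick numeric check of that cancellation trick (assuming Python with numpy, and assuming Utility at 0.0% width puts the mono sum (L+R)/2 on both channels):

```python
import numpy as np

rng = np.random.default_rng(2)
left, right = rng.uniform(-1.0, 1.0, size=(2, 1000))

# Assumed behaviour of Utility at 0.0% width: the mono sum on both channels.
mid = (left + right) / 2.0
inverted_mid = -mid                         # Phz-L and Phz-R switched on

# The chains of the 'Side Extract' rack are summed together, per channel:
out_left = left + inverted_mid              # Dry chain + inverted Mid chain
out_right = right + inverted_mid

print(np.allclose(out_left, (left - right) / 2.0))    # True: that's the Side signal
print(np.allclose(out_right, -out_left))              # True: nothing mono survives
```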

After the ‘Side Extract’ rack, stick a compressor. Get it to do a few decibels of compression and you should start to hear a difference. You can leave ‘Makeup’ turned on if you like, but this always seems to add a decibel or two more gain than necessary, so I tend to turn it off.

To show the effect I made this recording. In it the effect turns on and off. The ‘Makeup’ setting is on, so you can hear a slight gain increase when the effect is on. That’s not what you are listening for though. What’s more important is that it sounds wider.

I was originally going to do a tutorial on ‘Bass Focusing’, and that is what this tutorial works towards, but bear in mind that what I will show you can be used for all kinds of things. What I want to show you is how to split a sound into frequency ranges properly.

The challenge is to split a sound up into, say, three frequency bands: bass, mid and high. Initially, if you are used to using effect groupings, this will seem easy. The trouble is that the EQ8 plug-in doesn’t split frequencies faithfully. If you split a signal into two exact copies and use an EQ8 on each, set to the same cut-off frequency but set to low-pass and high-pass respectively, you get a band rejection at the cut-off frequency. Try it. I’m not sure why. Possibly something to do with phase non-linearity. The EQ3 suffers from an even more extreme problem in that the EQ on its own alters the frequency content of the signal. Just putting one in the track changes it. I’m fairly certain that is to do with phase non-linearity of the plug-in.

Anyway, the trick is in the multi-band compressor. It has a built-in, phase-linear frequency splitter that works very transparently.
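
You can see both behaviours in a small sketch (assuming Python with numpy and scipy; these are generic textbook filters, not whatever EQ8 or the multi-band compressor actually use internally): a minimum-phase low-pass/high-pass pair at the same cut-off notches when you sum the bands back together, while a phase-linear, complementary split recombines perfectly.

```python
import numpy as np
from scipy import signal

sr, fc = 44100, 1000.0
impulse = np.zeros(4096)
impulse[0] = 1.0

# Naive split: a 2nd-order Butterworth low-pass and high-pass at the same cutoff.
b_lo, a_lo = signal.butter(2, fc, 'low', fs=sr)
b_hi, a_hi = signal.butter(2, fc, 'high', fs=sr)
naive_sum = signal.lfilter(b_lo, a_lo, impulse) + signal.lfilter(b_hi, a_hi, impulse)

# Phase-linear split: a linear-phase FIR low-pass, plus a "high band" defined as
# the latency-matched input minus the low band, so the bands are complementary.
taps = signal.firwin(511, fc, fs=sr)
low_band = signal.lfilter(taps, 1.0, impulse)
delayed_input = np.roll(impulse, (len(taps) - 1) // 2)    # match the FIR's group delay
high_band = delayed_input - low_band
linear_sum = low_band + high_band                         # == delayed input, exactly

def magnitude_db(x, freq):
    spectrum = np.fft.rfft(x)
    bin_index = int(round(freq * len(x) / sr))
    return 20 * np.log10(np.abs(spectrum[bin_index]) + 1e-12)

print("naive LP+HP recombined at the cutoff:", magnitude_db(naive_sum, fc), "dB")   # deep notch
print("phase-linear split recombined:       ", magnitude_db(linear_sum, fc), "dB")  # ~0 dB
```

The bands in the second case are complementary by construction, so recombining them gives you back exactly what you put in (just a little late), which is presumably the sort of thing the multi-band compressor’s splitter is doing for you.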

An example scenario: You want to reduce the stereo width of the bass frequencies of your track.

This is a very good idea in a lot of cases. It’ll often make the track more focused and remove some of the ‘bloat’, making it sit together better. To do this you should stick a MB compressor in the channel. Group it. Set macros up for the two split-frequency sliders (where the little green pips are in the pic):

Image

Next, duplicate that chain twice and call the three chains Low, Mid and High. The macros are also duplicated, so you should be able to change the cut-offs of all three at once, keeping them in sync so that you don’t accidentally mess up the EQ. Go through each chain, soloing the respective band for each chain:

Image

There you have it, an almost perfect frequency range splitter. For bass focus you can stick a utility after the MB compressor in the Low chain and set the utility’s ‘width’ to 0.0%:

Image

Fiddle with the Low-Mid Crossover macro. This controls the upper limit of the frequencies that will be made mono. What you set this to is up to you. About 200 Hz is roughly where it should sit if you want to preserve the stereo image of your track but sort out the bass.
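
The whole bass-focus idea condenses to a few lines if you sketch it offline (assuming Python with numpy and scipy; bass_focus() and its defaults are made up for the illustration, not anything in Live): split each channel with a linear-phase low-pass, collapse the low bands to mono, and add the untouched high bands back on top.

```python
import numpy as np
from scipy import signal

def bass_focus(left, right, sr=44100, crossover_hz=200.0, numtaps=1023):
    """Collapse everything below the crossover to mono, leave the rest alone."""
    taps = signal.firwin(numtaps, crossover_hz, fs=sr)     # linear-phase low-pass
    delay = (numtaps - 1) // 2

    def split(x):
        low = signal.lfilter(taps, 1.0, x)
        delayed = np.concatenate([np.zeros(delay), x])[:len(x)]   # latency-matched dry copy
        return low, delayed - low                                  # complementary high band

    low_l, high_l = split(left)
    low_r, high_r = split(right)
    low_mono = (low_l + low_r) / 2.0            # the 'Utility at 0.0% width' step
    return low_mono + high_l, low_mono + high_r

# Example usage on a made-up stereo noise signal:
rng = np.random.default_rng(4)
stereo_left, stereo_right = rng.standard_normal((2, 44100))
focused_left, focused_right = bass_focus(stereo_left, stereo_right)
```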

Once you have this you could use it to do other frequency dependent stuff, like doing some stereo panning trickery on the higher frequencies.

NB: Always remember to use limiters! Especially at the end of the channel.

EQ, at its heart, is based on a very simple idea:

Sounds with the same frequency content sound the same

The more two sounds share similar frequency content, the more similar they sound. Any sound is made of a collection of vibrations at different frequencies and different loudnesses. This is true of everything from a whistling sound (an almost perfect sine wave, a pure single frequency) to a symphony (an infinite set of frequencies).

The next principle that’s important:

Matching frequencies add up

If two sounds share frequency content (put another way: if they share frequency space), those shared frequencies will add up. They’ll get louder when played together. This is everything you need to know to understand EQing.
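
A tiny worked example of frequencies adding up (assuming Python with numpy; the 440 Hz tones are arbitrary):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
sound_a = 0.5 * np.sin(2 * np.pi * 440 * t)      # one sound with energy at 440 Hz
sound_b = 0.5 * np.sin(2 * np.pi * 440 * t)      # another sound sharing that frequency

peak_db = lambda x: 20 * np.log10(np.max(np.abs(x)))
print(peak_db(sound_a))             # about -6 dB on its own
print(peak_db(sound_a + sound_b))   # about 0 dB together: the shared frequency adds up

# (Here the two are perfectly in phase, so they reinforce by the full +6 dB.
# Real-world sounds land somewhere between reinforcement and cancellation.)
```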

Two sounds will ‘compete’ if they share frequency space. You can use EQ to remove frequencies from one sound so that it doesn’t compete with the other.

There are countless forum posts by newbs (and not-so-newbs) asking for feedback on a track. Often feedback is given whether it’s asked for or not. The worst example of this is when someone posts a track and someone comments something like “keep trying, you’ll get there eventually”… Could there be a more backhanded compliment? I’m sure they mean well, but this needs to be set straight:

Ask for feedback, take it as suggestions, but remember that your aim should not be to make a piece of music for which no-one could think of feedback to give (not least because that’s impossible). Your aim should be to make a piece of music such that if someone suggests it be different you can say “it is that way because I decided it should be that way.” If someone says “that hihat should be louder” you should be able to think to yourself “the hihat is at a level I chose for it… I didn’t overlook it, and I’m happy with it.”

Often newbs will ask if something is ‘right’, and others then say what the right level for a bass-line is, or whatever… If you like it, it’s right. Let them suggest options to you. Take them or leave them. Ultimately it’s your music and yours is the only judgement that matters.

Many people, even people who know better, still think that headroom on individual tracks matters and that you should never see red on your meters… anywhere. Actually, seeing red on a meter isn’t the same thing as clipping.

In Ableton Live, clipping happens in only 3 possible places:

1) At the final master output

2) At sends

3) Some non-native plugins (Which? No idea. Assume it’s all of them.)

Zero dB in a modern DAW is nothing special; it has no magic properties different from any other value. It’s just an arbitrary number to the CPU. A signal can be transformed almost arbitrarily inside Live. It’s only when the sound has to be changed from the native floating-point precision to fixed-point precision that clipping becomes a possibility. At this stage the DAW has to figure out how to map from float to fixed. It does so by the convention that some value (represented in the UI as 0 dB) is mapped to the maximum value in the fixed-point data structure. Long story short: everything above that is clipped off; more specifically, it’s clamped.
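
A minimal sketch of that float-to-fixed step (assuming Python with numpy and a 16-bit output; the sample values are made up):

```python
import numpy as np

def to_int16(float_samples):
    # Only here does "0 dB" mean anything: 1.0 maps to the biggest 16-bit value,
    # and anything beyond it has nowhere to go, so it gets clamped.
    clamped = np.clip(float_samples, -1.0, 1.0)
    return (clamped * 32767.0).astype(np.int16)

hot_signal = np.array([0.5, 1.7, -2.3, 0.99])   # happily "in the red" inside the DAW
print(to_int16(hot_signal))                      # the 1.7 and -2.3 are flattened to full scale

trimmed = hot_signal * 10 ** (-12 / 20)          # pull the level down before the conversion...
print(to_int16(trimmed))                         # ...and nothing gets clamped
```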

So, for example, let’s say all your channels are red. You put a Utility plug-in in the master channel, right at the start, and set it to reduce the gain by 12 dB. Suddenly the clipping goes away. The only reason this is a bad idea is that it makes the signal meters useless, as they’re all block red.

The process of mapping from float to fixed (or perhaps from float to clipped float) happens at some point when using sends… I’m not sure where. Sends aren’t that useful nowadays anyway. If you introduce a non-native plug-in there’s no easy way of knowing what type of audio that plug-in takes from and transmits back to Live, or how it handles audio internally… so for such plug-ins stick a limiter before them and watch their internal meters too. However, don’t get too carried away with headroom, as the plug-in might have a terrible signal-to-noise ratio.

Anyone reading this who knows what I mean when I say compression, and understands what compressors do*, probably thinks I’m about to say something ridiculous. I have to be careful, and actually the title is a little misleading (but it had to be short).

The point is that compressors reduce the dynamic range of the signal that passes through them, that is true, but in the process the elements within the signal get turned up and down relative to one another. Music is made of a mixture of elements. So while the dynamic range of the track as a whole is reduced, the dynamic range of elements within the track is increased.

Take the example of a side-chained kick-bass set-up: the dynamic range of the bass is increased. This is obvious when you consider a genre like Dubstep, for which brick-wall limiting is such an explicit feature that it is practically an effect; the elements within a Dubstep track jump all over the place as the compressor flattens out the spikes. If anything it’s more this extreme movement of gain of the individual elements that really characterises over-the-top limiting than any perception of reduced overall dynamic range.
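
Here is a toy version of that kick-bass example (assuming Python with numpy; the compressor is reduced to a hand-drawn gain envelope, and the tempo, depth and recovery time are made up). The bass is rock steady on its own, but once the kick-triggered gain rides it, its level swings by several dB from frame to frame:

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr                        # two seconds at 120 BPM = 4 beats
bass = 0.5 * np.sin(2 * np.pi * 55 * t)           # a steady bass note: almost no dynamic range

# Stand-in for the side-chained compressor: on every beat the gain dives by
# 12 dB and recovers over the first 150 ms of the beat.
beat_phase = (t * 2.0) % 1.0                      # 0..1 within each half-second beat
gain_db = np.where(beat_phase < 0.3, -12.0 * (1.0 - beat_phase / 0.3), 0.0)
ducked_bass = bass * 10 ** (gain_db / 20.0)

def level_swing_db(x, frame=2048):
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    return 20 * np.log10(rms.max() / rms.min())

print("bass level swing before ducking:", level_swing_db(bass), "dB")         # close to 0
print("bass level swing after ducking: ", level_swing_db(ducked_bass), "dB")  # several dB
```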

*I’m not talking about data compression, like mp3 compression… although I’m sure mp3 compression has implications for dynamic range too.