Archive

Music

I’ve used Ableton Live for at least 8 years and I still consider it the best DAW for me personally. I’ve tried other DAWs and found it hard to fit into their workflows. Live matches how I think, but as time goes on I am getting more and more disillusioned by Ableton’s development of the software.

I think that Ableton are in some kind of bubble, detached from the reality of actually making music. More and more I see them pandering to both the current making-music-is-super-easy fad and a kind of hipster control-your-DAW-with-your-shoes experimental crowd. If you go to Ableton’s website now you’ll probably see lots of photographs of completely unrealistic hipster “studios” with a wooden stool, exposed brick walls and a MacBook Pro on a shonky wooden picnic table. There’s probably a record player. Records aren’t subject to licence laws of course.

Ableton seem to have a kind of hyper-minimalist philosophy. They want Live to have as few moving parts as possible and for it to automagically do all kinds of things. I don’t mind this. I like the idea of having something that has basic simple parts that can be combined arbitrarily to make complex projects. Great. I just think they are taking this approach to such an extent that it is basically the same as laziness.

Over the time I have had Live its actual development has dropped down to nothing. They gave us 64-bit support (but not 32–64 cross-over support, thanks). They improved Live’s sample rate conversion. Great stuff. Then… they just stopped developing it. We got an Amp plugin, an SSL-G series clone compressor, a pitch detector thing… why? Why an Amp and a new compressor? I don’t understand how these fit in. Analogue models are a dime a dozen, how are they the missing parts of the Live puzzle? Recently Sampler and Simpler were improved, but they remain essentially the same plugins. We got better audio waveform rendering… awesome, thanks.

Where is surround sound support? Where is VST3 support? Where is support for the newest MIDI standard? Why isn’t there a better visibility system for managing large projects? Why is multiscreen support still terrible? Why are audio rendering options so basic? Why can’t I normalise to -1dB?
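To be clear about how small an ask that last one is: peak-normalising to an arbitrary dBFS target is a few lines of arithmetic. A quick sketch (my own illustration, not Ableton code), assuming a float audio buffer:

```python
import numpy as np

def normalize_peak(samples: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale a float audio buffer so its absolute peak sits at target_dbfs."""
    peak = float(np.max(np.abs(samples)))
    if peak == 0.0:
        return samples  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # -1 dBFS is about 0.8913 linear
    return samples * (target_linear / peak)

buf = np.array([0.2, -0.5, 0.4])
out = normalize_peak(buf, -1.0)
print(round(float(np.max(np.abs(out))), 4))  # 0.8913
```

That is the whole feature: one gain computation applied at render time.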

There are a million tiny things that Ableton could have added that would be invaluable. Like retrospective recording. Or clip versioning. There are also limitations to Live’s UX logic that really need work-arounds. For example, a huge issue with Live’s arrangement logic: if you make a 1-bar drum loop pattern and loop it over, say, 16 bars, you can change it and the change will be reflected across every instance of that clip. Great. But if you decide, which you almost certainly will, to have a fill pattern at the end of every 4 bars, that new fill pattern bisects the original clip into 4 completely separate clips, and the fill itself will be 4 separate clips. Live doesn’t support the idea of one clip being in multiple places.

I have Max for Live but my main use of it is for DAW-level LFO, envelope and envelope follower support, and for Robert Henke’s Granulator plug-in. Max for Live is presented as some kind of utopian wonderland of magic audio development made easy by a GUI and a candy store of exotic community plug-ins. The reality is a development platform that is almost completely impenetrable, and lots of patches that are, let’s face it, almost all rubbish, running inside a framework that, to be fair, integrates well but is at best unreliable. I want DAW-level LFO support. Fruity Loops (the kids’ DAW) has LFO support. And an envelope follower. Live users have to buy an extra to get that. Someone at Ableton could take a week to make an LFO plug-in.

In my early days of making music I went through a period of installing whatever plugins were out there; the trouble is that this makes your computer completely unstable. Now I have to be very strict about what plugins I’m running. I don’t want a circus of freeware crap running on my PC. It’s revealing that the one instrument I use (which crashes regularly but I forgive it), and IMO the best M4L plugin out there, was made by a founding member of Ableton. An insider. Things like M4L promise ease of use, but because they are effectively their own obscure standard, the reality is you may as well just learn C++ and make VSTs.

Which relates to what really annoys me about M4L. Any plug-in that is reasonably good is creamed off and sold as a commercial plug-in. This didn’t really happen when I got M4L. It was presented as a development platform that implied it was all about community-driven patches. Later they started selling them on Ableton’s store. So you have to buy M4L to be able to buy plug-ins for something that only runs in Live! It’s a walled garden inside of a walled garden! Why not just release VSTs? They’d be more stable and faster. Few M4L plugins use the actual Live integration APIs, so most would work fine as VSTs.

M4L should be part of Live. The people at Bitwig are heading that way. I think FL already does something like that.

As I said, they are pandering to the current easy-music fad. A perfect example is the audio-to-MIDI feature they added a while ago. I have literally no idea what this feature is for. Firstly, it obviously doesn’t work. How could it? And if it did… who is it for? It’s clearly designed for people to rip off other people’s music. Why waste time, and insult actual producers with rip-off tooling, when they could have released, say, a send plug-in. Or an option to turn off Master channel FX in the render screen. Or some better colour schemes. Or added wet/dry to any of the plug-ins that don’t have it. Or added LFO to the group macro section. Or added round-robin to Sampler.

If you look at the change log it’s basically Push all the way down. The open beta changes are all Push. It seems that Live is becoming just the software you need to use Push. I am hoping that the reason they aren’t doing much to Live is that they are going to release Live 10 and don’t want to waste time on 9. I’ve been hoping that for a while though.

Immediately after the Scottish Independence referendum Nigel Farage complained that the UK government “made a promise to maintain the Barnett Formula whereby the UK taxpayer spends £1,600 more on every Scot than on every English person”. This kind of factoid is the sort of thing that immediately pisses people off. The problem is that it’s a nonsense figure that refers to nothing meaningful and hides far more important information.

If government spending consisted entirely of paying money directly into people’s pockets then he might have a point. Some is spent that way, of course, but, as everyone is well aware, a huge chunk of government spending pays for goods and services. There would be no point in the government paying people just to pay taxes. The government wants certain things done. For example, a large part goes to the NHS. The NHS then translates that money into health services. The taxpayer then receives those services, not the money. The NHS provides those services by spending its budget on provisioning the goods and services necessary to provide health care. Ultimately the money it is budgeted ends up in the revenue stream of companies that provide it goods and services, some of which will end up in people’s pay packets and some of which will end up in shareholders’ and owners’ pockets. Those shareholders and owners could be anyone, anywhere. They could, and statistically are likely to be, in London. They could be in the US. The fact that the initial spending is geographically, in a sense, in Scotland has almost no bearing on how the money filters down to people.

Another fact is that cost structures vary. So say, for the sake of argument, that for some reason transporting things is more expensive in Scotland. Maybe because of all of the hills. That means that provisioning of services to the public via the public sector will incur higher costs, say, to pay for extra fuel. That extra payment would show up, again, in the revenue stream of a company, in this case a multinational energy company. Or, more precisely, some weird holding account system somewhere so that shareholder profits can be maximised. Again, where the money ends up is a matter of the topology of the economy, which has almost nothing to do with geography.

It might, arguably, be more reasonable to look at spending per head at the whole-nation level, but that figure is largely meaningless as it could be anything. Working out where money really ends up would, as far as I can tell, be perfectly doable. What makes it hard is that we would need access to data on where companies spend their money and, specifically, who all the shareholders are. That data might be available, partially, through public audit records, but no one has any real intention of finding out, because that might be too revealing. It’s far better to use nonsense metrics like spending-per-head.

I recently, finally, got an analogue synth after only ever using digital ones. I have always been a bit suspicious of digital synths. There was always something off about them. Now that I have had the chance to use an analogue synth first hand I have had my suspicions confirmed: analogue synths sound better by far.

Now, some digital synths are designed to do things that only digital synths can do, so comparing those to analogue synths is a bit unfair. It is true that digital synths have a wider range of timbres and easily do arbitrary control routing, limited only by software architecture, CPU and UI/UX, but digital synths all suffer from the same artifacts.

If you hunt around online you will find people raving about how well particular digital synths recreate the classic analogue sounds. To some extent this is true: analogue-model synths do a good job of recreating the overall tonal characteristics of analogue synths, but they also suffer from some of the following:

  • Aliasing – they all have some amount of nasty harmonic distortion caused by sample aliasing. Some do reduce this to a minimum but it’s always there.
  • Scratchy transients – they just can’t seem to handle situations involving high/short envelope settings, in which you get nasty, spitty, crackly distortion in onset transients. You often want a spike at the onset of a sound to give it a snap that your ear can catch hold of, so you might have a short envelope opening a filter momentarily, but this often introduces some scratchy high frequencies or crackly transients.
  • Gritty high end – By far my most serious complaint. The thing that always puts me off. The high frequency component in every digital synth I have ever used always has a kind of brittle, rough texture. The sound is like subtle bit-depth reduction all over the top end. Digital synths never fizz cleanly and I always find it distracting.
  • Bad aliasing during even subtle pitch modulation – one of the main characteristics of analogue synths is looseness. Pitched oscillators will slide around, if only very slightly. Many will recreate this via slight pitch modulation, say setting an LFO to wobble oscillator pitch or using some pitch shift plug-in, but these, even very subtle, variations immediately incur aliasing with a lot of digital synths.
  • Break down at all extremes – digital synths always break apart with extreme settings. You can’t have LFOs going too fast, fast attack envelopes crackle and spit, high resonance settings alias, high register notes sound scratchy.
  • Unstable low end – I have noticed that, to my memory, every digital synth I have used unravels on very low frequencies depending on settings. The low end seems to be detached from the rest of the signal. Maybe something to do with phase coherence of digital filters. Dunno.
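The aliasing complaint, at least, has simple arithmetic behind it. An ideal sampler folds any partial above the Nyquist frequency back into the audible band, at a frequency that is generally not harmonically related to the note. A small sketch of that folding (my own illustration, not any particular synth’s code):

```python
def alias_freq(f: float, fs: float) -> float:
    """Frequency an ideal sampler folds a partial at f Hz to, given sample rate fs."""
    f = f % fs                           # the sampled spectrum repeats every fs
    return fs - f if f > fs / 2 else f   # reflect anything above Nyquist back down

# A naive 1 kHz sawtooth has harmonics at 1, 2, 3, ... kHz. At fs = 44.1 kHz,
# the 23rd harmonic (23 kHz) is above Nyquist (22.05 kHz) and folds back:
print(alias_freq(23_000, 44_100))  # 21100.0
```

21.1 kHz is not a multiple of 1 kHz, so the folded partial is inharmonic – which is exactly the kind of gritty top-end texture complained about above. Band-limited oscillators exist precisely to suppress those partials before they fold.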

It took me about 5 seconds using an analogue synth to hear that none of these problems exist in the analogue world. You can get an analogue synth to play a note at very high pitches and while the sound is whiny and difficult to listen to it sounds pure and intact. The high end is generally exactly as you would want to hear it.

On a subjective note, analogue synths also have a kind of unruly quality that not many digital synths really have. With digital synths you set something and the synth hits it bang on every time. Analogue synths (and in fact gear in general) seem to be a bit more squirmy and springy. The sound also has more depth than many, but perhaps not all, digital synths. You can somehow tell that you are listening to a mechanical device, which you are. The sound is coming from a real thing and exists. Digital synths often have a flat, two-dimensional quality, as if you are listening to a cardboard cut-out impression of a sound.

Analogue synths probably aren’t for everyone. Sound is very subjective and I am only following my own taste. They are also kind of a pain in the arse to use compared to a VSTi. I now have ground loop issues for a start, but I have already had a quick test using the synth in a track and found it just added something great, processed beautifully and joined right in with the rest of the track. It recorded in one take and the result was just… perfect.

When I encoded this track to mp3 I noticed that it particularly degraded its quality, more so than other tracks. It got me thinking that a lot of people may not realise what effect mp3 compression has on a track, because they would tend to only have the mp3 version. Similarly, very few people ever listen to music higher than CD quality, with the exception of movie soundtracks on DVDs or Blu-rays.

So I decided to make a comparison that people can download and try out.

When I produce a track it is done at at least 96kHz. The software I use runs at 32/64-bit floating point (I think it switches to 64-bit for some specific tasks). Such high bit depth is necessary because of the amount of summing that has to happen. A track could be made of dozens of channels, each itself doing summing internally (sound generators may involve summing many channels and effects processors will have things like dry / wet mixes).
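A toy illustration of why the float headroom matters when summing (my own sketch, not how any particular DAW implements its mix bus): sum a few dozen channels that are each near full scale, and a fixed-point bus has to clip while a float bus just carries the oversized value until a later gain stage brings it back down.

```python
import numpy as np

# 40 channels, each holding a sample near full scale (1.0 = 0 dBFS).
channels = np.full(40, 0.9, dtype=np.float32)

float_sum = float(np.sum(channels))          # about 36.0 - far over full scale, but intact
int_bus   = float(np.clip(float_sum, -1.0, 1.0))  # a fixed-point bus would clip here

restored = float_sum / 40                    # gain back down: the 0.9 survives
print(float_sum, int_bus, round(restored, 4))
```

With float the overload is reversible; with a clipped integer bus the information above full scale is simply gone.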

I personally do the final mix down through an analogue mixing desk. The result is a 32-bit 96kHz wav file ‘master’. This is then downgraded to 44.1kHz 16-bit using r8Brain. That file is then turned into several mp3 files (I happen to use FL Studio for that, which is the only thing I have that can encode mp3s, though it probably isn’t the best application for that). NB: the file might not play on all sound cards. You’ll have to check. It’s also 94MB.

The above file contains the same 16 bars repeated 4 times:

1) The full 32-bit 96kHz version @ 6144 kbps

2) The CD quality version 16-bit 44.1kHz @ 1411 kbps

3) MP3 @ 320 kbps

4) MP3 @ 128 kbps (this is the quality SoundCloud uses)

These were sequenced and then upscaled back to the 96k/32-bit file you can download.
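The uncompressed bit rates above are just sample rate × bit depth × channels. A quick check of the two PCM figures:

```python
def pcm_kbps(sample_rate: int, bit_depth: int, channels: int = 2) -> float:
    """Uncompressed PCM bit rate in kbps (stereo by default)."""
    return sample_rate * bit_depth * channels / 1000

print(pcm_kbps(96_000, 32))   # 6144.0 kbps - the 32-bit/96kHz 'master'
print(pcm_kbps(44_100, 16))   # 1411.2 kbps - CD quality
```

Which puts the 320 kbps mp3 at roughly a 4.4:1 reduction from CD quality, and the 128 kbps one at about 11:1.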

The first thing to note is that this track has a much bigger dynamic range than most modern music. The final full track has an average dynamic range of 12 dB (measured by this); most modern tracks probably have a dynamic range of <1dB or something ridiculous. MP3s seem to sound ‘better’ when there’s more going on and less dynamic range, just because there’s more stuff to distract you, and less dynamic range means less actual information. Aliasing does become an issue with mp3s, but mostly at lower bit rates.

Comparing 1 and 2, I noticed that you can hear a slight difference between them. 1 has a high-frequency granular texture to it that is more rounded off in 2, which makes sense as 2 must have had some low-pass filtering done. You can hear it if you focus on the reverb sound between the kicks, right in the middle of the stereo field. It’s hard to pick that out without knowing what to listen for though.

When you listen to the mp3s, the things you should be aiming your attention at are the transients. These are the short snappy sounds that usually happen at the onset of a sound. You’ll notice that mp3s really do ‘pixelate’ them. In the 128 kbps version the transients sound almost like they’ve been passed through a resonant envelope filter and have a tonal quality added to them, like a pitched, poppy, zappy sound that is completely unintended. This is important because transients portray most of the rhythmic structure of the music. It sounds exactly like the audio analogy of the type of compression artifacts you get on compressed movies: details smudged, generalised and relocated.

The other area that you should listen to is around the low frequencies. There are all kinds of things happening in that area that are just added by the MP3 codec. The whole area is much more muddy, and filled with random pulses and booms of sound that, again, aren’t intentional, and have their own tonal characteristics.

I finished this track the other day, but the idea for it has been bouncing around my head for maybe 12 or 13 years. When I uploaded it to soundcloud.com they immediately blocked it, saying it had been detected as breaching copyright. I had to fill in a form and wait a few hours for them to unblock it. The track was made from scratch by me; until then no one had heard it but me. I used some drum hit samples (not even loops) but they all came from perfectly legal sample libraries. All the software I use is legit. There is no way anything in this track is a breach of anyone’s copyright. The only thing I can think is that its name matches that of an existing track, but that’s absurd. Most track names will be shared by numerous tracks, so if you’re going to automatically ban things by name, 99% will be false positives.

I even pay for this service!

So to clarify, it seems I was temporarily blocked from using a service I pay for by someone making a completely unfounded claim to owning the copyright to my music.