Review: The M-Audio Fast Track C600 interface

Over the festive period, I had quite a bit of work to do completing an album project, so I decided to tie that in with testing one of our new batch of M-Audio C600 audio interfaces. Features like the scriptable buttons and especially the new ‘Multi’ button turned out to be a real time saver…

Reviewing audio interfaces is a tricky task, so I wanted to make sure I gave M-Audio’s Fast Track C600 interface a thorough trial. Sure, you can blast through the features, plug in some sources, have a listen, and it’s easy enough to spot the good points and the glaring omissions. But if you want more than a cursory overview, you have to spend some quality time with it, and that means getting stuck into a project to see how the various components perform when it matters.

What the C600 brings to the table

The C600 and its smaller sibling, the C400, are a departure for M-Audio. There’s a growing ethos that the audio interface need not be a dumb box stuck in a rack somewhere, but something that can sit on your desktop and offer additional functionality and productivity. The C600 is certainly one such unit. A 24-bit/96kHz USB 2.0 interface with four mic/line inputs (two of which have instrument inputs), stereo S/PDIF, twin headphone outputs and six analogue outputs, it also brings some added bonuses to the table. It has monitor control of up to three pairs of speakers, transport control, onboard monitor mixing (complete with delay and reverb) and the unique ‘Multi’ button, which allows for scriptable actions (more on this later).

M-Audio C600 in action

During the time I had the C600, I needed to add some backing vocals, track some electric and acoustic guitars and of course I had a load of mixes to do, so pretty much every aspect of the interface would have its work cut out. The first thing I noticed was that the sound quality of the C600 is very good. Avid make a point of saying they have “leveraged technology” from the HD Omni interface for these units and you can hear that in the quality of the mic preamps and converters. Mic signals have plenty of clean headroom and very low noise, and output has a huge frequency range with exceptional stereo imaging. The instrument inputs handled electric guitars perfectly and produced predictably good results through IK’s AmpliTube.

The control section

For an interface that’s designed for project studio use, the C600’s control section is a real strong point. You can connect up to three sets of monitor speakers and switch between them via dedicated buttons on the interface, with the control software allowing you to level match. Being able to do A/B comparisons is hugely useful and usually requires some sort of monitor controller; unfortunately, such devices inevitably colour the signal, so being able to switch without extra hardware in the path is a real boon.

The C600 usefully features a set of transport control buttons, but what’s even more useful is that each button can be re-allocated, so they can be mapped to any control function in your DAW or even be assigned as shortcut buttons. I’ve never needed to use a rewind or fast-forward button on a non-linear editor, so being able to map one or the other to a function such as ‘save’ is incredibly helpful.

Taking this idea one step further is the ‘Multi’ button. This button allows you to perform any series of actions that you can do with key commands, at up to eight steps. You can define the key command for each step using the control panel, and pressing the Multi button takes you through the sequence one step at a time – a real time saver if you find yourself performing the same sequence of key commands repeatedly. Different setups for the Multi button can be saved too, so you’re able to have different functionality for different tasks.
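To make the stepping behaviour concrete, here’s a minimal sketch of how a Multi-style sequence plays out, modelled in plain Python. The class name, step limit enforcement and the key commands shown are my own hypothetical illustration, not M-Audio’s actual configuration format.

```python
# A toy model of a stepped macro button: each press fires the next
# key command in a saved sequence of up to eight steps.

class MultiButton:
    MAX_STEPS = 8  # the C600 allows up to eight steps per sequence

    def __init__(self, steps):
        if not 1 <= len(steps) <= self.MAX_STEPS:
            raise ValueError("a sequence holds between one and eight steps")
        self.steps = list(steps)
        self.position = 0

    def press(self):
        """Each press fires the next key command, then wraps around."""
        command = self.steps[self.position]
        self.position = (self.position + 1) % len(self.steps)
        return command

# Hypothetical example: a select-all, bounce and save sequence.
multi = MultiButton(["Cmd+A", "Cmd+B", "Cmd+S"])
```

The real unit stores these sequences in its control panel rather than in code, of course; the point is simply that one physical button walks through the whole sequence a press at a time.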

The control panel software is clear and easy to use, and the mixer especially is extremely functional and lets you balance incoming signals and software returns for accurate tracking. It’s a DSP-driven system which gives near-zero latency and provides reverb and delay for comfort monitoring when tracking. It’s also designed to offer independent headphone mixes to each of the headphone outputs (both of which, I should mention, offer loud, clear and very high quality output). One additional thing that is often overlooked – both the control panel and drivers are very stable. Not once did I experience any unusual behaviour or unexpected quitting. Sadly this is not the norm for even considerably more expensive high-end units.

There are the inevitable niggles but that’s because, like any user, I’d like the moon on a stick. I would love a version of this interface with more inputs, such as an ADAT input, so I could accommodate recording a drum kit, for instance (are you listening, Avid?). The monitor control section doesn’t offer a dim control or mono switch, which you’d normally find on a dedicated controller, but to be honest I can’t recall the last time I used either and it’s certainly something I would do without if the alternative is to colour the sound with another unit between the output and my ears. The only frustrating omission in my view is the lack of a talkback mic, which meant some wild gesticulating and shouting to attract the singer’s attention.

The verdict

This interface is a real winner. If you’re in the market for a project studio interface with some real time-saving functions, the M-Audio Fast Track C600 is worth investigating. The scriptable buttons are a huge gift to the musician, as they eliminate a lot of the breaks in musical flow that happen as you switch from ‘playing’ mode to ‘computer operator’ mode, something that happens every time you pick up the mouse. The fact that the buttons simply perform keystrokes (rather than being tied to DAW functions) and are fully scriptable means that video editors (who perform far more repetitive keystroke-oriented tasks than musicians, and always need monitor control) could find this really helps speed up their workflow.

For more information on the M-Audio Fast Track C600 interface, call 03332 409 306, email or leave us a comment below. You can also keep up with more news, reviews and offers on our Twitter (@Jigsaw24Audio) and Facebook page.


Are you listening to your music, or just your recording equipment?

Over the weekend I received a message from a friend’s band that their new single had been released on iTunes. Ever the good supporter of the local scene, I of course checked it out immediately, and was left somewhat confused.

The best I could come up with by way of description was that it sounded expensive. I couldn’t tell you what the song was like musically because there was an enormous production in the way. The main thing I noticed was that this sounded a lot like a big label modern rock release but unfortunately very little like the band that I know really well, having seen – and even gigged with – them more than a dozen times.

I’m not so naive or idealistic as to imply that all music needs to be an accurate representation of real instruments in real spaces. Creating the impossible is as much a joy of music production as it is of filmmaking: a chance to create something captivating that could maybe never exist in the real world. I do wonder, though, if we have become so used to processing every signal that the notion of accurately capturing the sound of an instrument is in danger of becoming a lost art. It seems odd that at one end of the production chain most people are aware of the need to monitor accurately. If your speakers flatten the sound, you won’t hear mistakes; if they add a ton of bass, you’ll be producing bass-light mixes. But at the other end of the chain, accuracy doesn’t seem to be important. We get bombarded by products that actively colour the sound: mics that add ‘shimmer’ to vocals, preamps that add ‘warmth’, plug-ins that simulate the effects of tape or desk circuitry on the signal path.

The essential recording ethos

When I first started producing music, an engineer friend took me under his wing. Highly opinionated on many aspects of recording, he had one essential piece of equipment that I have adopted. It was a plastic chair, and whenever he had a session, the very first thing he would do was spend five minutes sat out in the live room with the band, listening to what they sounded like in the room. His recording ethos was always: no matter what the final result you’re trying to achieve, a good engineer should always be able to capture on record what they hear with their own ears. You can tweak to your heart’s content in order to achieve the desired result after that, but if you can achieve this starting point then you’re on to a good thing.

Furniture aside, I have my own favourite piece of recording kit – the AKG 414 mic, and for the same reasons. The first time I used one with a vocalist, I was astonished to hear the same voice I heard in the live room coming from my speakers. It dawned on me that I had become so used to the sound of a voice as it sounds through a microphone coming out of the speakers that I wasn’t listening for accuracy any longer. Not only that, I was even processing without listening, applying compression and EQ because that’s an accepted signal chain on a vocal, without ever considering whether it was necessary.

Virtual processing plug-ins

One of the biggest advantages of modern DAW software is an almost limitless supply of virtual processing equipment in plug-in form. A traditional analogue studio might be able to afford a few choice pieces of outboard and would have to use them sparingly, but now we can strap 1176 limiters and Pultec EQs on as many channels as our computers can handle. The downside is that we now wield these tools almost indiscriminately, using a compressor when we could adjust a level, simply because it’s no longer a limited resource. And like a lot of users, I was reaching for presets rather than listening to the effects of the controls. The sound of processing was overshadowing the sound of my music.

I don’t doubt there will be people reading this and thinking “rookie mistake”. But I wonder – how much time do we spend investigating the other aspects of the signal chain? How many people compare multiple audio interfaces for conversion accuracy before making a purchase? On both input and output? When I switched from an ageing Digidesign 001 to an RME Fireface, I was amazed at how much I hadn’t been hearing on playback, let alone how much had been missing on capture. There are probably a few people starting to wonder now, I’m guessing. So let’s go a step further. How many engineers have listened to multiple DAW systems to see which software sounds the best? Does something recorded in Logic sound different to something recorded in Cubase? And, if so, which is more accurate?

At every step in the chain there is potential for the signal to be changed, whether by the sound of the converters or the algorithms used in recording software. And each step further removes you from being able to recreate the original sound. In other words, even without processing the signal in any way, what goes in is no longer what comes out. Consider any piece of software that claims to be able to automatically time stretch or pitch shift your audio. For this to happen automatically, the software must be analysing for transients and computing a stretchable map, which means you’ll always be listening to processed audio; it is impossible to process digital audio and leave it unchanged. The same is true if you use a valve preamp to warm up a bright microphone. And what about any monitor controller you may have: is it altering the sound before it even reaches the speakers?

I would encourage anyone who is serious about recording to critically evaluate every link in the recording chain. Prism established the credibility of converters like those in the Orpheus interface by having world-class engineers compare the sound they knew best of all: the full bandwidth sound of tape hiss! They considered their converters ready once no discernible difference could be detected by the best ears in the business.

For the rest of us, I’d suggest playing a CD you know really well through your speakers. Then play it through the converters of your audio interface and see if it sounds any different. Then record it into your DAW and play it back again. What about now? Are you getting out what you put in? If you’re hearing a difference somewhere along the chain, then maybe it’s time to consider an upgrade…
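For those who want numbers rather than impressions, the same comparison can be framed as a null test: subtract the recorded copy from the original and measure what’s left. Here’s a rough sketch in Python using NumPy, with a synthetic test tone standing in for the real capture; the function name and the tolerances are my own illustration, not a standard tool.

```python
import numpy as np

def null_test_db(original, recorded):
    """Peak level of the difference signal in dBFS; the closer to
    -inf, the more transparent the conversion chain."""
    n = min(len(original), len(recorded))
    residual = original[:n] - recorded[:n]
    peak = np.max(np.abs(residual))
    return float("-inf") if peak == 0 else 20 * np.log10(peak)

# A 440Hz test tone standing in for the CD audio.
tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)

perfect = null_test_db(tone, tone.copy())      # identical: nulls completely
tiny_error = null_test_db(tone, tone * 0.999)  # a 0.1% gain error leaves a residual
```

In practice you’d load the original and the re-recorded file, align them sample-accurately, and look at the residual: anything above total silence is colouration somewhere in the chain.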

If you’re looking for the best in audio accuracy, give our audio specialists a call on 03332 409 306, or email. You can also visit our website to view our full product range.


Why the Pro Tools 10 naysayers should give the future a chance

Avid’s announcement of both Pro Tools 10 and the introduction of the Pro Tools HDX platform certainly made some big waves at the AES conference this year. The latter was particularly exciting, as it represents the first major overhaul of the Pro Tools DSP platform in around nine years.

While most coverage of these two announcements (ours included) looked at them as two separate entities, to get the full scope of exactly how far Avid has moved forward with this release, you really have to consider them in combination with one another, as it definitely lays out the roadmap for where Pro Tools is headed.

For a recap of the new features, you can read our release summary here, but what I want to do in this article is share my opinions on why Avid has really excelled with this release, and also address some of the comments people have raised about it.

Let’s deal with the most important point first. Starting with the new HD I/O interfaces, and continuing through the Pro Tools 10 and HDX platform releases, Avid has focused on the one area that most manufacturers are overlooking: improving the quality of the system itself. While addressing performance through the disk caching and scheduling advances, and massive DSP power increases, Avid is improving the sound quality of the entire system by moving from fixed point processing to 32-bit floating point with 64-bit floating point summing. This means vastly increased headroom and much less chance of clipping.
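To make the headroom point concrete, here’s a toy numerical sketch (my own illustration, not Avid’s actual mix engine) of the difference: a signal pushed roughly 12dB over full scale survives a floating point bus and can be pulled back down intact, while a fixed point bus clips it irreversibly.

```python
import numpy as np

# Fixed point vs floating point mix bus, in miniature.
signal = np.array([0.5, -0.5, 0.9, -0.9], dtype=np.float32)
hot = signal * 4.0          # ~12dB of gain; peaks now reach 3.6

# Fixed point bus: anything beyond full scale is hard-clipped.
fixed_bus = np.clip(hot, -1.0, 1.0)

# Floating point bus: over-full-scale values are preserved, so
# pulling the master fader back down recovers the original audio.
float_bus = hot
recovered = float_bus / 4.0

print(np.allclose(recovered, signal))        # True: nothing lost
print(np.allclose(fixed_bus / 4.0, signal))  # False: the peaks are gone
```

This is why a hot internal mix on a floating point engine can simply be trimmed at the output, where the same gain staging on a fixed point engine would already have destroyed the waveform.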

Avid has also introduced a brand new plug-in system with the AAX format which, as well as taking advantage of the increased headroom, also provides a seamless transition between DSP-based and native versions of the same plug-in. And it’s all 64-bit ready, meaning a smooth transition to a native 64-bit version of Pro Tools, without various bits not working.

And that stuff’s important, right? If you’re in the market for a DAW system, surely the prime consideration should always be which one sounds best, and any advance in that direction is a good thing. But as with any new version of a popular system, there were flurries of posts, tweets and random shoutings from people about how Avid had made a “colossal oversight” in not including whichever feature they felt they needed.

Here are some of the most frequently voiced complaints…

It isn’t 64-bit

That’s true. Pro Tools 10 HD could have been 64-bit, but then it wouldn’t have supported the older HD cards. They simply don’t have the architecture, hence the introduction of HDX cards. RTAS and TDM also don’t support 64-bit, which is why Avid now uses AAX. Usefully, Pro Tools 10 gives an interim period where both hardware systems are supported, to allow users to transition. I’m sure those who can recall the day Apple announced new machines with FireWire instead of SCSI and USB instead of serial ports (or for that matter, the arrival of Intel chips) will remember the feeling of hearing that all their peripherals had become obsolete overnight. With Pro Tools 10 HD remaining 32-bit, obviously Pro Tools 10 has to as well. Interoperability is paramount to Pro Tools users who need to move sessions between different systems.

But we need 64-bit!

Lots of manufacturers have now released 64-bit versions of their DAWs. However, most have focused on just one thing: being able to access more RAM for virtual instruments. This is obviously a great thing if you use huge sample libraries, but what you’re essentially getting is a straight port of the 32-bit application that can use more memory. There’s no rewrite to increase audio quality and, more often than not, a broken feature set (plug-ins that now require a ‘bridge’, no ReWire support, incomplete video support and interchange formats not working, to name a few common ones). A 64-bit version of a DAW could be capable of sounding much better, but that doesn’t always seem to be a priority. When Pro Tools eventually does go 64-bit, it will be offering a lot more than just extended memory access.

There’s no Track Freeze…

No. But you can still either a) bounce or b) use the much improved Audiosuite plug-in workflow to render tracks. Both achieve essentially the same result.

…and no non-realtime bounce

There are two points here. Firstly, how important is that? Would you ever deliver a finished master to a client that you hadn’t heard all the way through? Secondly, some applications are not able to use all the system resources during the bounce phase. A heavy session can actually take longer to bounce in non-realtime if it is, say, only able to access one core of the processor.

HDX is an expensive upgrade

It’s not cheap, but at least there is one. Mixing desk manufacturers, for example, don’t give you an upgrade path when they release a new model. For those who have bought HD in the last year it probably seems harsh, but then there’s still a full year of HD being supported by the current version of the software and even long after that it’s still going to be better than almost any native offering. For those who bought HD when it first came out, nine years represents a huge return on investment so for those guys, being able to upgrade to the HDX at a dramatically reduced cost is a huge bonus. What other recording equipment could you have bought nine years ago that wouldn’t be utterly worthless by now?

Who needs HDX when computers are now so powerful you can run everything natively?

Well, presumably anyone who wants to increase the plug-in capability of their system by at least five times. Or anyone who wants to be able to monitor through plug-ins with extremely low (as low as 0.70ms) latency. Those features are very important to some professional users.

There will always be those who flip-flop between different applications every time another manufacturer releases an update, aggrieved it hasn’t delivered every feature they had imagined. Doubtless we’ll hear the same again next time Apple release an update to Logic from people who want the features Ableton Live users have. But for the majority of Pro Tools users, it reaffirms their faith in Avid – as with the move from Mix to HD, they have once again delivered an upgrade that, above all, makes things sound better.

As always, it’s great to hear opinions from Pro Tools fans as well as other DAW users, so let us know your thoughts on Avid’s latest update in the comments box below…

Visit our site for more information on our Avid Pro Tools 10 and Pro Tools HDX product range. You can also call 03332 409 306, email or check out the latest audio news and offers on our Twitter (@Jigsaw24Audio) and Facebook page.

Pro Tools 10 and HDX announced

Avid has stolen the show at the AES expo today by unveiling not only a new version of Pro Tools but also, in a ‘one more thing’ moment, brand new HD hardware.

Pro Tools 10 and Pro Tools HDX cards mark a huge development in the world of audio, and anyone currently running PT software or full HD systems will no doubt be worked into a frenzy of excitement right now.

So, here’s a rundown of my top features in Avid’s new product line…

Pro Tools 10

The release of Pro Tools 10 and Pro Tools 10 HD offers many new features that have come at the request of users. While a lot of them are post-centric, there will definitely be features in there that will be very interesting to the music community too.

Chief among these is clip-based gain automation that lets you adjust the volume on individual clips independently of mixer automation. As well as speeding up the mix process for any session with a large number of regions, this consolidates the workflow between Media Composer and Pro Tools. Incoming clip automation from Media Composer can be kept as clip automation in Pro Tools or converted to mixer automation, and there’s also support for a mixture of file formats and bit depths within the same session.

Then there’s Disk Cache (for Pro Tools HD software), letting you allocate up to half the available RAM in your system as a cache and load your entire session into it. As well as vastly improving disk performance and offering near instantaneous playback of even the largest sessions, it enables Pro Tools users to work with storage media that were previously unsupported, such as network-attached storage (NAS) and Avid’s ISIS 5000. Pro Tools 10 also offers improved disk scheduling to give optimal performance, even on non-HD systems.

Pro Tools 10 screenshot

Other important improvements in Pro Tools 10 include:

  •  Deeper EUCON control integration. Nearly 500 new Pro Tools commands have been added to the AppSet, any of which can be assigned to dedicated keys.
  •  Refreshed Avid Rack plug-ins. There’s a newly included Avid Channel Strip plug-in, taken from the Euphonix System 5 console, and Revibe, Impact and Reverb One are now available as native versions.
  •  Improved workflows for rendered Audiosuite plug-ins. Window Configurations let you recall multiple plug-ins complete with settings, speeding up repetitive processing tasks. Audiosuite plug-ins now feature user-definable handles, so processing is no longer limited to the section of audio included in a clip. Delay and reverb plug-ins now also feature a ‘reverse’ function.

For more details and pricing, take a look at Avid Pro Tools 10 on our site.

So that’s the feature set; here’s the other really important news: this will be the last version to support the HD Accel cards and legacy HD interfaces (192 I/O, 96 I/O, 96i). Also, Pro Tools 10 is not 64-bit, because the architecture of the Accel cards does not support integration with a 64-bit Pro Tools. Although it remains a 32-bit application, its disk cache independently leverages the functionality of 64-bit operating systems to access more than 4GB of RAM for caching. However, the next iteration of Pro Tools will be 64-bit, and will require users to be running the new HDX cards.

Pro Tools HDX

Along with the Pro Tools 10 release, Avid also announced the successor to the Pro Tools HD system: Pro Tools HDX. Pro Tools HD has been powering pro quality mixing and post sessions around the world for nearly nine years; now HDX offers vastly increased power and performance, with support for up to 192 channels. There’s up to 5x the DSP per card compared to HD Accel, and HDX is also scalable, so up to three cards can be used per system. You get 4x the voices of the previous hardware (256 voices per card, 768 voices in total), 4x as much delay compensation and double the I/O count. Avid has also shaved time off its latency, with HDX capable of less than 0.7ms regardless of your buffer setting! That’s the lowest latency of any system.

Pro Tools 10 HDX

Pro Tools HDX will also use floating point processing, which gives massive amounts of headroom (an additional 1000dB!) without clipping. And with the new architecture comes a new plug-in environment: AAX replaces TDM and will support both floating point processing and 64-bit. Ultimately a native version will replace RTAS too but, in the interim, Pro Tools HD 10 with HDX cards will continue to support RTAS, though not TDM.

Visit our site for more information and to buy Avid Pro Tools 10 and Pro Tools HDX. You can also call us on 03332 409 306, email or check out the latest audio news and offers on our Twitter (@Jigsaw24Audio) and Facebook page.

sE Eggs crack monitor market

Now, I’ll admit to being something of a nerd when it comes to studio speakers, so when the invitation to try out the new sE Electronics Munro Egg monitors at an exclusive dealer demo came through, I jumped at the chance. This was not only an opportunity to do a proper comparison with some industry leading brands (who doesn’t love a speaker shootout?) but also to get some insight into the unique design of the sE Eggs.

Having lost track of the number of studio monitors I’ve listened to that claimed to have revolutionised speaker design, I went in slightly sceptical. Pyramid designs, round cases, hexagonal cases, dual concentric drivers, speakers hewn out of stone. And now the Eggs, which, as the name suggests, are egg-shaped. So not only did these speakers look like another gimmick, they were also from a company that had never made a speaker before, sE Electronics being better known for microphones.

But these speakers were designed in partnership with Andy Munro. In case you aren’t aware of the name, Andy Munro is a studio designer and acoustics guru who has probably designed more of the world’s top studios than anyone else, including monitoring solutions, so immediately these speakers have a credibility that demands further investigation.

Before any listening commenced, we were given an overview of the sE Eggs’ design ethos. The egg-shaped design addresses a couple of accepted problems inherent to square box speaker design; namely that parallel internal surfaces create resonances, and that a flat, square front presents a reflective surface with sharp corners, which can create comb-filtering artefacts by contributing both very early and late reflections. The Eggs are also a monitoring solution: rather than being self-powered speakers, they are passive speakers driven from a dedicated amplifier that provides monitoring level control and an auxiliary input for a reference source, such as a CD player. Eliminating the need for a monitor controller means that no unwanted colouration is added by a device placed between the outputs of the console or interface and the speakers.
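The comb-filtering point is easy to demonstrate numerically. In this sketch (my own illustration, not sE’s published measurements), summing a tone with a 1ms reflection cancels the frequency whose half-wavelength matches the delay and reinforces the one a full wavelength away — exactly the frequency-response ripple a flat baffle’s edge reflections can cause.

```python
import numpy as np

fs = 48000
d = 48                  # a 1ms reflection path, e.g. off a flat baffle edge
notch = fs / (2 * d)    # 500Hz: the reflection arrives 180 degrees out of phase
reinforce = fs / d      # 1000Hz: the reflection arrives back in phase

def combed_rms(freq):
    """RMS level of a tone summed with its own 1ms reflection."""
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * freq * t)
    y = x + np.roll(x, d)   # direct sound plus the delayed reflection
    return float(np.sqrt(np.mean(y ** 2)))

# The 500Hz tone almost vanishes; the 1000Hz tone gains about 6dB.
null_level = combed_rms(notch)
boost_level = combed_rms(reinforce)
```

A real baffle reflection is weaker than this full-strength copy, so the notches are shallower, but the pattern of alternating dips and peaks across the spectrum is the same.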

A sound performance

But the proof of any speaker is always in the listening, and we had the chance to directly compare the sE Eggs against some big names in a controlled studio environment – Focal, PMC, Genelec and Adam. In the listening tests the Eggs performed extremely well, offering clarity and spatial information that was not only exceptional but in some cases simply not present in the other speakers in the line-up. Bass extension is pretty much linear down to around 40Hz and, importantly, frequencies don’t start to disappear as the volume is lowered. The effects of the cabinet design become apparent as you hear more ‘space’ and subtleties like reverb tails become more audible.

All in all, this was an extremely impressive demonstration that shows a new approach to speakers and it’s clear that sE Electronics is addressing monitoring holistically with their amp controller. The Eggs also feature an LED alignment system. This allows for accurate sweet spot placement, including adjustment on the integrated stands that allows for alignment in both the horizontal and vertical planes.

For anyone looking to upgrade their monitoring system, or those concerned that their speakers may not be revealing everything about their audio, the sE Eggs are a system that simply must be checked out. We’ll be operating a system where you can test drive these monitors for seven days in your own studio, as that really is the only way to make an educated decision. Take it from me, anyone who tries these is going to be blown away by what these speakers reveal about mixes.

Below is the first in a series of videos of Andy Munro talking about the design ethos of the Eggs, and about acoustics in general.

To find out how you can give the sE Electronics Munro Egg monitors a trial in your own studio, get in touch with us on 03332 409 306, email or leave us a message in the box below and we’ll get back to you. You can also keep up with our latest pro audio news and offers by following us on Twitter (@Jigsaw24Audio) or liking our Facebook page.

Exploring Logic’s Compressor plug-in

Plug-ins are funny things. Most DAWs come with a complete set of excellent effects, but often these get overlooked in favour of third party add-ons. And Logic Pro 9’s Compressor plug-in is a prime example.

I’m not going to disparage works from the likes of Universal Audio, Sonnox or Focusrite, as many of their plug-ins are truly exceptional pieces of software, promising accurate recreations of analogue equipment, complete with GUIs that offer agonisingly realistic recreations of front panel controls. But, in reaching for them, we do often overlook some gems that are right under our noses.

When Logic Pro 8 arrived, all the effects had received a significant overhaul. Unfortunately, few people really noticed as the editor window remained pretty much exactly as it was. It hadn’t metamorphosed into an 1176 or LA-2A recreation… or had it?

Although Compressor looked the same outwardly, Apple had put a lot of work into the algorithms. The current Compressor in Logic lets you choose from six different circuit types, modelled on the way different topologies of actual hardware units affect the audio. The following choices are available:

    • Platinum. This is pretty much the same algorithm used in previous versions of Logic.
    • VCA. A good example of a Voltage Controlled Amplifier compressor is the dbx 160, prized for its fast response times.
    • FET. The epitome of a FET-based compressor is the Urei 1176. Not as fast as VCA designs, with characteristics closer to valve units.
    • Opto. Optical compressors respond in a similar fashion to valves, with good response to transients. The most popular example is the Teletronix LA-2A.
    • ClassA_R. Forum juries are out on this one, but the Class A part suggests a valve model of one type or another.
    • ClassA_U. See above, but may well be based on a variable-mu type device, similar to a Manley.
So, while there’s not a lot of technical literature to support the different circuitry types in Compressor, there’s definitely scope to use the different circuit types to start moulding the acoustic signature – for example, knowing that an SSL compressor is a VCA design gives you a good starting point for dialling in that particular sound.

But circuitry isn’t the only advance. The current Compressor in Logic Pro 9 also has some hidden parameters, accessed by clicking the expansion arrow in the bottom left hand corner. This gives you access to Output Distortion with three types: Soft, Hard and Clip (plus the omnipresent ‘Off’). These are useful for generating anything from the extra harmonics associated with valves all the way through to the limiter-smashing effects of units like the Empirical Labs Distressor. There’s also a full and flexible side chain filter complete with key frequencies and, last of all, one of the most interesting features: a Mix control, which allows you to compress all of the signal (as is usual) or only part of it, giving you an easily controllable way of blending the dry and compressed signals.
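That Mix control is essentially built-in parallel compression. The blend itself is simple arithmetic, as this sketch shows; the bare-bones hard-knee gain computer here is purely my own stand-in for Compressor’s actual algorithms.

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Reduce the level of samples above the threshold (hard knee)."""
    over = np.abs(x) > threshold
    gain = np.ones_like(x)
    gain[over] = (threshold + (np.abs(x[over]) - threshold) / ratio) / np.abs(x[over])
    return x * gain

def mix_control(x, mix=1.0):
    """mix=1.0 is the fully compressed signal; mix=0.0 is the dry input."""
    return mix * compress(x) + (1.0 - mix) * x

x = np.array([0.2, 0.9, -0.8, 0.4])
half = mix_control(x, mix=0.5)   # dry and compressed blended equally
```

Setting mix to around 0.5 is the classic parallel (‘New York’) compression trick, here available without any bus routing or extra aux channels.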

If you’re a long-term Logic user who hasn’t really explored the new features of the Compressor plug-in, I’d advise setting some time aside to explore the new sonic options before reaching for something else!

For more information on Logic Pro 9, give our audio team a call on 03332 409 306 or email. You can also share your tips on the circuitry and parameters in Logic’s Compressor in the comments below, and we’ll be in touch.


Why listening to Fleetwood Mac’s ‘Rumours’ on vinyl made me want to burn my CDs


Every so often I feel compelled to spend an evening pulling out my record collection and rediscovering a time when I actively enjoyed the process of listening to music. This happens with almost alarming certainty when I have either a) had a little too much to drink or b) split up with my girlfriend (sometimes an unhelpful combination of both). And in almost all cases I seem to arrive at the same conclusion – that for some reason vinyl sounds better.

After much ruminating I have arrived at the conclusion that this has nothing to do with me being some closet analogue purist. I don’t think there is anything intrinsically wrong with my speakers being wobbled by a stream of 1s and 0s as opposed to a stylus jiggling in a groove on a vinyl disc. It has nothing to do with the hiss and crackle of vinyl imparting the pseudo-comforting sound of nature, or acting as the sonic glue that breathes a sense of life into an otherwise sterile performance. In fact it is not about how vinyl sounds when compared to CD at all; it is about how the music on a CD compares with its vinyl equivalent – a result of the process I have come to call ‘masterdisation’.

A common mistake made by advocates of vinyl is to claim that a CD has less dynamic range. The CD format is capable of a dynamic range of 96dB, as opposed to around 65–70dB for a vinyl record. However, the process of mastering for vinyl favoured using as much of that dynamic range as possible, with the caveat that the quietest part should never fall below the agreed noise floor for the background sounds inherent in a medium which basically drags a needle across a plastic surface.
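For the record, the 96dB figure is just the quantisation maths of 16-bit PCM – each bit buys roughly 6.02dB of range. A quick sketch (the function name is mine):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits), ~6.02dB per bit."""
    return 20 * math.log10(2 ** bits)

cd = dynamic_range_db(16)      # ~96.3dB, the CD figure quoted above
hi_res = dynamic_range_db(24)  # ~144.5dB for a 24-bit master
```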

Mastering engineers were still encouraged to try and make the loudest records possible, but there was a limit because above a certain level the needle would literally jump out of the groove. With CDs, the opposite is true. Record company executives looked at the loudness wars in the ’80s, when radio stations competed to get more listeners by being the loudest on the air, and decided they were prepared to sacrifice dynamics if they could have a record that seemed louder than every other.


With dynamics no longer a concern, CD mastering engineers found themselves armed with the same tools the radio stations had used. Multiband compressors and limiters let them squash most of those 1s and 0s into straight 1s and, despite the format having a much larger dynamic range than vinyl, it is common for a modern pop CD to be mastered with less than 10dB difference between the loudest and quietest parts. And this, I think, is the key to why so many people claim a preference for vinyl.
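One way to put a number on that squash is the crest factor – the ratio of peak level to average (RMS) level, in dB. This is my own rough illustration, not a broadcast-grade meter:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a rough proxy for how 'squashed' audio is."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine wave measures ~3.01dB; heavy limiting pushes real programme
# material down towards that square-wave-like figure, while dynamic
# material with sharp transients measures far higher.
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]
```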

Firstly, dynamics are a key dimension in audio. They hold listeners’ interest and draw us into active listening; with no dynamics, listeners get fatigued and lose interest. Secondly, overly loud mastering introduces digital distortion, as CD player converters run out of headroom to recreate the soundwave. In his book ‘Perfecting Sound Forever: The Story of Recorded Music’ (so exhaustively researched it frankly has no business being as enjoyable or entertaining as it is), author Greg Milner cites The Red Hot Chili Peppers’ ‘Californication’ as a watershed album for ‘overloud’ mastering. Almost devoid of dynamics (a total dynamic range of less than 6dB across the whole album), the digital clipping produced throughout is recognised by our brains as painfully loud regardless of how loud the disc is actually played, and it becomes genuinely unpleasant to listen to.

Finally, all of this compression started to fundamentally change how we perceived the sounds of instruments. Some sounds were robbed of their transients, while others had subtleties boosted. CDs started to sound less like music played on vinyl and more like music heard on the radio. We no longer needed to listen to records, because they were practically screaming at us – the musical equivalent of over-hyped, orange-lacquered reality TV celebrities shrieking into our headphones.

The irony of this is that, as CDs used loudness to attract our attention, the effect actually left listeners less interested. It’s a shame the CD format was standardised before the loudness wars started. In the digital TV age, broadcasters have access to loudness metadata which allows them to match the perceived loudness of different pieces of programme material. If CDs could somehow incorporate the same metadata, a CD wouldn’t have to compete on volume – playback systems could balance the levels of different albums according to how loud the listener will perceive them. Overloud mastering would then become undesirable, given its artifacts and limited dynamic range.
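The arithmetic behind that kind of loudness matching is simple subtraction – a sketch assuming a broadcast-style target of -23 (the EBU R128 figure); the function name is my own:

```python
def normalisation_gain_db(measured_loudness_db, target_loudness_db=-23.0):
    """Gain a playback system would apply so programmes match a target loudness."""
    return target_loudness_db - measured_loudness_db

# A crushed modern master measured at -8 is simply turned DOWN by 15dB,
# wiping out any loudness advantage over a dynamic master measured at -18.
```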

While some artists are beginning to see that overloud mastering is detrimental to the enjoyment of the music, the mastering decisions rarely rest with the artist. It may well be that, in future, radio will incorporate loudness monitoring that will help in the fight to reclaim the music from the sound of the CD. For anyone looking to master their own music, I’d advocate paying close attention to how your music sounds, not just how loud it is. Squashing all the transients out of your music may end up reducing a lot more than just the peaks.

Want to know more about mastering? Get in touch with our audio team on 03332 409 306 or email

How to record drums the Glyn Johns way


I took the opportunity of the extended weekend over Easter to get stuck into some recording with my band. Drum recording always comes first when we’re laying down a track and, while it’s technically one of the most fun things to do, it’s also the bane of my life from the perspective of how long it takes.

So this time, I wanted to try something new – the 4-microphone technique developed by Glyn Johns, which promises a natural sound and fantastic stereo imaging. The legendary producer started his career assisting on sessions for The Beatles before going on to record Led Zeppelin, The Who, Eric Clapton, The Rolling Stones and more. Well, with a CV like that, I’d be stupid not to try it!

Normally, I record drums in what has become a conventional close-mic fashion – with individual dynamic mics on kick, snare, hat and all four toms, and then an XY overhead pair of condensers. The trouble with close-mic techniques is that you have to be very careful about phase problems and there’s a lot of gating and EQing to do to get back to the natural sound of the kit. The Glyn Johns technique relies more heavily on the sound of the overheads to provide the sound of the full kit, with spot mics on the kick drum and snare simply to reinforce and add some low-end body to those two drums.

You’ll need…

Two overhead mics, preferably not ones that are too bright (I used a pair of Rode NT5 studio condenser mics), a good quality kick drum mic such as an AKG D112 cardioid dynamic mic, a snare drum mic (over the years I’ve come to rely on a Shure SM57 for this) and a tape measure. This last one is vital.

How to do it…

Position one overhead to the drummer’s left-hand side, behind the hi-hat. Using the tape measure, measure from the centre of the snare drum to the diaphragm of the mic (about 40″ is ideal, but give or take a few inches if it makes for better positioning), then point the mic directly towards where the kick drum pedal is. Now position your second overhead mic to the right of the drummer, out behind the floor toms. Point this mic directly at the hi-hat, and use the tape measure to ensure that the distance from the centre of the snare to the diaphragm is exactly the same as for your first mic. This is critical to ensure the snare is perfectly in phase between both mics; being out of phase here will give you a slightly washy sound.
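To see why the tape measure is vital, it’s worth doing the arithmetic on what a distance mismatch costs. Sound travels roughly an inch every 74 microseconds, and when the two overheads are summed, a path-length difference fully cancels the frequency whose half-wavelength equals that difference. A quick sketch (the constant and function names are my own):

```python
SPEED_OF_SOUND_IN_PER_S = 13504  # ~343m/s expressed in inches per second

def mismatch_delay_s(inches):
    """Arrival-time difference caused by a snare-to-mic distance mismatch."""
    return inches / SPEED_OF_SOUND_IN_PER_S

def first_null_hz(inches):
    """Lowest frequency fully cancelled when the two overheads are summed."""
    return SPEED_OF_SOUND_IN_PER_S / (2 * inches)

# A 2-inch measuring error notches the summed snare around 3.4kHz --
# right in the drum's crack -- which is exactly that 'washy' sound.
```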

Drums Mic Setup

Next, position the kick drum mic inside the bass drum, starting at halfway towards the batter head, then move it forwards or back depending on whether you want more click or more thump. The snare drum mic points at the centre of the drum and it can be faced away from the hi-hat to cut the bleed from the hats, or towards it if you want a bit more. And that’s it!

Panning is what makes this technique work really well. The mic to the drummer’s right – the one that sits behind the toms – is panned hard left. The other one, which sits just over the snare, is panned right, but only halfway. As if by magic (assuming you’ve measured accurately!), you get this wonderfully balanced stereo image. With so much reliance on room mics, obviously the effect of the room becomes much more apparent, as does the sound of the kit itself. But as long as you’ve got a decent room, a properly tuned kit with new heads and a good drummer, you’ll get great results.

I will admit that I second-guessed myself doing this, and as a safety net I also close-miked the toms and added a hi-hat mic. I needn’t have bothered – the results from just the four mics were pretty astonishing, and I’ll definitely be using this technique next time!

For more information on what you’ll need to achieve the Glyn Johns technique, call us on 03332 409 306 or email If you use it yourself, we’d love to hear how you get on, so leave us a comment below.

Auto-Tune and the etymology of pop


We recently ran an article on the announcement of Antares ATR-6 auto-tuning technology for the guitar. While reading up on this, I was drawn back into the web of arguments about whether this technology was damaging to music…

It seems in every musical generation there exist two opposing sides – one that claims good music is only made by musicians playing real instruments, and that anyone reliant on studio trickery is a charlatan; and another that claims any means used to realise the musical idea of the artist is valid. The former camp certainly argues most vociferously (but often that’s because the second is busy in the studio), and the usual target for the argument always seems to be whatever technology is currently trending in production-heavy pop. And it seems that once again, very unfairly, it’s Auto-Tune. So I want to take this opportunity to present a defence, not just on Auto-Tune’s behalf, but on behalf of music technology in general.

Performance enhancements

Pop music, TV shows and talent contests are the key targets for people who claim that Auto-Tune is ruining music. You’ve all read the complaints – it lets people who can’t sing be tuned to perfection; it’s holding real singers/bands/artists back; it’s all just manufactured music; with enough money anyone can make a hit record, and so on. The truth is, performance-enhancing studio tricks occur in all genres, from pop to rock, from country to metal. Anyone who thinks that a record made by a band is a straight capture of a performance by musicians demonstrating their instrumental and vocal chops in the studio is clearly oblivious to the processes that go on: the overdubs, the editing, the click tracks, the drum replacement, the quantising, the layering of guitar parts, the pitch and timing corrections. In the majority of cases, a studio record by a band is about as close to a real performance as a photograph of the Alps is to being in Switzerland.

With the possible exception of jazz and classical, making a record isn’t about capturing a musical moment; it is about creating the definition of what that music is – we form our understanding of how instruments sound through the recordings we listen to. The easily identifiable sound of rock drums bears little resemblance to the sound of a drum kit in a room; instead it is defined by the heavy compression, gating and reverb that characterise the genre. These artificial sounds already define what we expect music to sound like, so they influence the music we create.

Auto-Tune is just one of the tools in this production toolbox and is used in every single popular genre but, for some reason, it seems to draw the fire for every negative comment by those who don’t like modern pop music – even from those with no experience of music production, who feel justified in vilifying it with such assertiveness that you’d assume they used it every day. Antares need not feel singled out, though – ten years ago the same sorts of unqualified rantings were being aimed at Pro Tools, as if Digidesign’s software were some sort of giant creative mashing machine that any idiot could operate, churning out identikit hits from any old rubbish provided the operator could stay awake long enough to push the ‘GO’ button. The thinking seems to be, “if it is used on records I don’t like, then it must be to blame”.

So, let’s have a look at exactly what some people think Auto-Tune is to blame for:

Auto-tune is cheating – a good singer doesn’t need pitch correction. I dare say there’s something of a valid point in there. But even the best singers make mistakes, and in those situations they simply do it again to get it right. But then you’re not looking at a single performance, you’re looking at a composite of two takes, edited together to make one good one. Is that cheating? If not, how many takes does a singer get before the process is considered cheating? If a singer does 100 takes of a difficult line of a song, and only gets it right once, how is that less of a cheat than using Auto-Tune? Given that they’ve got it wrong 99% of the time, it is unlikely that they’ll be any more able to repeat that performance than someone who used Auto-Tune to nail that line. Throwing a basketball through a hoop, while blindfolded, standing on one leg and using my weaker hand, once out of a hundred attempts isn’t proof that I’m technically adept at doing it – it’s just a fluke.

Auto-tune is responsible for sterile pop vocal production. Since most modern pop vocals are double- or triple-tracked, cut, edited, quantised, regrooved, compressed to within a micron of their dynamic range and then multiband limited, it’s impossible to ascertain at what point pitch correction comes into the equation. I can’t hear Auto-Tune on a Britney Spears record because the vocal is entirely dominated by breath noise. Production style should never be confused with the technology it uses.

Auto-tune makes singers out of people that can’t sing. This argument is wrong on almost every level. At its most basic, it assumes that only someone with great pitching can be a singer (and conversely, that all you need to be a great singer is good pitch – forget all that timing, phrasing and emotion stuff). If perfect pitch were a requirement, then you can’t account for the musical legacy of such artists as Bob Dylan, Mark Knopfler, David Essex, Lou Reed, Bryan Ferry and pretty much any death metal vocalist ever. Essentially, the record-buying public have been making singers out of people who ‘can’t sing’ for decades. At a deeper level, perhaps the confusion arises between a singer and a pop star, since most pop stars sing. This is about the nature of celebrity, not skill: production can make a pop star out of virtually anyone, and since the only requirement to be a pop star is to be popular, ‘good’ doesn’t enter the equation. If your argument is that this is wrong, then it needs to be aimed at the record-buying public, not at those who fulfil its needs.

Auto-Tune is responsible for that overused Cher effect. OK, so I’m sort of making this one up, but it is the most common misconception. What is responsible for any overused musical cliché is a lack of imagination. If the stepped vocal stylings of T-Pain or a thousand others irritate you, so be it, but the technology in use could well be, among others, Celemony’s Melodyne, Waves Pure Pitch, Apple Logic’s Pitch Corrector, a TC Helicon vocal processor or a DigiTech Vocalist. Auto-Tune – the Hoover of pitch-correction technology – is in danger of becoming a scapegoat for myriad production sins.

The case for the defence

All of the complaints levelled at Auto-Tune take the form that using it somehow makes music worse, that it is cheating and blurring the lines between the skilled and the unskilled, conferring unwarranted credibility on the untalented while the gifted go hungry. But on the whole, it doesn’t make music worse. The most important part of any vocal isn’t the pitch, it’s the performance. Performance and emotion are what connect the audience with the singer, and often one shot is all you get at that, especially if the singer is ad-libbing. If you get a great performance that is slightly pitchy, Auto-Tune allows you to correct the pitching without sacrificing the performance. Or you could ask the singer for another go, and risk getting the notes but not the delivery.

The truth is that this is how Auto-Tune is being used, day in, day out, in sessions of all types in studios around the world. Quietly. Invisibly. But we don’t hear those stories, we just hear about the cast of Glee and X-Factor hopefuls, when the effects of technology become noticeable. Auto-Tune is capable of being a completely invisible technology, but it is also capable of being abused by lazy operators. The producer has full control over the amount of the effect, and it’s their job to decide when a missed note needs correction or when it is adding character. Auto-Tune isn’t responsible for any decline in musical standards, that blame lies with those who use it indiscriminately without listening to the results.

What do you think? Let us know in the comments box below or find us on Twitter – @Jigsaw24Audio.

For more on Antares Auto-Tune, or any pitch correction software, get in touch with the Pro Audio team. We don’t judge! Call 03332 409 306 or email

Direct USB recording with the RME Fireface UFX: A video guide


Last weekend, I had a go at recording to a USB storage device through the RME Fireface UFX interface. This new feature, coming with RME’s alpha-phase firmware update, impressed me so much that I went and made the below video to show just how easy it is to record tracks directly to any USB device. I used the Fireface, a 4GB memory stick and Pro Tools 9. (Oh, and my very able drummer. Thanks, mate!)

For more information on RME’s Fireface UFX USB audio interface, call us on 03332 409 306 or email