Har-Bal is the software that introduces the concept of Harmonic Balancing and the means of achieving it. In this section we introduce the software and the process of Harmonic Balancing as implemented through Har-Bal. For an introduction to the concept, its origins and its benefits over other mastering processes, see Harmonic Balancing. Those already familiar with this introductory tutorial who are interested in exploring the fullest potential of Har-Bal should continue on to the Advanced Tutorial for a brief discussion of a different but sonically pleasing approach to equalisation.
Har-Bal Layout
The essential elements of the Har-Bal environment are shown below:
Tips popup
The tips popup gives context-sensitive clues, based on the spectrum, for designing appropriate equalisation filters.
Spectrum display
The spectrum display shows a graphical representation of the overall energy content of the current and/or reference track. The spectrum display shows the average spectrum, the peak spectrum and the geometric mean of the two. We make use of the spectrum display and the gain cursor to design EQ filters.
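Har-Bal's analysis engine is internal to the program, but the relationship between the three traces can be sketched in a few lines of Python. The helper below (`spectrum_traces` is a hypothetical name, not a Har-Bal function) accumulates frame power spectra for the average trace, max-holds the same frames for the peak trace, and forms the geometric mean of the two for the middle trace.

```python
import numpy as np

def spectrum_traces(x, frame_len=4096, hop=2048, sr=44100):
    """Sketch of average/peak/geometric-mean spectrum traces.

    Not Har-Bal's actual algorithm -- just the general idea: take
    windowed frames, accumulate the average and the peak (max-hold)
    power spectrum, then form the geometric mean of the two.
    """
    window = np.hanning(frame_len)
    n_frames = 0
    avg = np.zeros(frame_len // 2 + 1)
    peak = np.zeros(frame_len // 2 + 1)
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        avg += power                      # running sum for the average trace
        peak = np.maximum(peak, power)    # max-hold for the peak trace
        n_frames += 1
    avg /= max(n_frames, 1)
    geo = np.sqrt(avg * peak)             # geometric mean of the two traces
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    return freqs, avg, peak, geo
```

By construction the peak trace sits on or above the average trace, with the geometric mean between them, which matches the lower/middle/upper arrangement seen in the display.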
Where to Start?
Har-Bal does its magic on sound files only, so a good place to start is by collecting the source material you wish to harmonically balance. For a list of supported file types consult the Open Command topic. One notable exception from the list is the mp3 format. Due to the licensing arrangements required for the mp3 format we currently have no intention of supporting mp3.

It is also worth noting that mp3 is generally not a good format for source material that has yet to be harmonically balanced, and with good reason. The mp3 format uses lossy compression built around perceptual encoding of the recorded sound. If the encoder's model believes that you cannot hear a particular part of the spectrum in a recording (because it is masked by an adjacent sound) then it will drop it altogether. This is fine if the recording is already harmonically balanced, but if you harmonically balance a poorly balanced recording, much of what was masked normally becomes readily audible; hence the improvement in clarity obtained through harmonic balancing. If you have encoded the source using mp3 before equalisation, the encoder may have dropped a significant proportion of content that can never be recovered, with or without harmonic balancing. For that reason we only recommend using mp3 after harmonic balancing.

Ideally the source material should be arranged so that each track is in a different file. You could, if you wish, perform harmonic balancing on a single file corresponding to an entire album, but the outcome is likely to be less than optimal. Each track in a compilation has its own specific characteristics, which become lost in the other tracks if lumped together.

Given a set of tracks that you wish to master we are now ready to start. The first thing to do is to open one of your tracks using the Open Command.
The first time you do this on any given track Har-Bal will proceed to analyse the spectrum content. This may take some time depending upon the length of the track and the speed of your computer but once done the result is saved to disk for re-use. If you then re-open the same track, provided it has not been modified in any way, Har-Bal will open it immediately by reading the analysis file of the track and using it to initialise the spectrum display. Alternatively, if you have a compilation of many tracks that you are going to work on you can
pre-analyze those tracks in the background while you work on another file in Har-Bal by using the File | Batch Analysis menu command. For the sake of this tutorial I shall be demonstrating Harmonic Balancing of a CD compilation using the CD Face Value by Phil Collins (Atlantic 16029-2 1981). Note that we do not condone illegal pirating of copyright protected recorded material in any way. We have a legitimate copy of this CD and are re-mastering it for demonstration purposes only.
Beginning a Session
Assuming that you are mastering new material or re-mastering old material, the first thing you should do is thoroughly listen to the compilation of tracks to build a mental picture of the sound the producers are trying to achieve. Make note, mental or otherwise, of anything that concerns you. After doing so, pick the track from the compilation that has a broad spectrum (typically tracks with the most instruments and instruments that are played loudly) but also the best sound quality. The reason for this choice is that the first track we master will become a reference for the remaining tracks. If you were to choose a slow song it won't fill the entire spectrum, and you will then have some difficulty drawing inferences from your reference when you open a busy track. Using the Open Command I will open track 9 (Thunder and Lightning) first as it has a full spectrum and quite a good balance. This is the first time I have opened this track so the analysis progress dialog pops up as shown below. For the purpose of following this tutorial where you do not have access to the actual recorded material, we have included the analysis files for all of the tracks of this album. You can find them in the folder c:\program files\har-bal\tutorial (change the path shown here to coincide with the location in which you installed Har-Bal).
When the analysis runs to completion the spectrum display shows the following result.
The spectrum display has three traces: the lower trace is the average energy content of the track at different frequencies, the top trace is the peak energy content of the track, and the middle trace is the geometric mean of the two.
Note that the illustration above has been cut down in size to keep the help file small. As many people who download this software are still on dial-up connections we want to maintain a small install size. To obtain a clearer picture of the process, we suggest that you follow and reproduce all of the steps in this mastering process by loading the corresponding analysis file from the folder c:\program files\har-bal\tutorial. Doing so will allow you to duplicate the steps, although you will not be able to play the tracks. To do so you need the original source material, which should be easy enough to find. Just keep in mind that for this discussion to be valid it needs to be the original CD master. As far as I am aware this album has not been re-mastered and re-released, so I don't believe this is an issue; however, it is worthwhile checking that the analysis results you get are the same as mine.

In following this step you will note that intuitQ has significantly smoothed the spectrum by cutting the output at the low frequency end, boosting the mid range, cutting the highs around 5kHz and adding some boost around 10kHz. On listening to the result the bass still appears to be masking the mid range somewhat. Looking at the spectrum we note a still quite strong peak at around 116Hz. By selecting the gain cursor we apply a cut of about 1.7dB with a Q of 5.3 as illustrated. Select this tool,
and do this,
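Har-Bal's filter design is not published, but a cut of 1.7dB with a Q of 5.3 at 116Hz is the kind of edit a standard parametric (peaking) biquad performs. The sketch below is a stand-in using the well-known Robert Bristow-Johnson audio EQ cookbook formulas; the function names are illustrative only, and this is not Har-Bal code.

```python
import math

def peaking_eq(f0, gain_db, q, fs=44100.0):
    """RBJ audio-EQ-cookbook peaking (parametric) biquad.

    Returns (b, a) coefficients normalised so a[0] == 1.  The values
    used in the test (116 Hz, -1.7 dB, Q = 5.3) mirror the cut
    described in the tutorial; Har-Bal's own design may differ.
    """
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, f, fs=44100.0):
    """Magnitude response of a biquad at frequency f, in dB."""
    w = 2.0 * math.pi * f / fs
    z1 = complex(math.cos(w), -math.sin(w))
    z2 = complex(math.cos(2 * w), -math.sin(2 * w))
    h = (b[0] + b[1] * z1 + b[2] * z2) / (a[0] + a[1] * z1 + a[2] * z2)
    return 20.0 * math.log10(abs(h))
```

A high Q such as 5.3 keeps the cut narrow: the response reaches the full -1.7dB only at the centre frequency and returns to 0dB well before the mid range.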
Again listening to the result I find the bass still lacking tightness so we use the low shelving cursor to tame it by shelving at 296Hz in the manner illustrated. Select this tool,
and do this,
Further listening shows that the bass is now under control but the track now sounds too bright. To tame the brightness we select the high shelving tool and shelve from 1kHz upwards in the manner illustrated. Select this tool,
and do this,
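The shelving edits in these steps can likewise be approximated with a textbook filter. Below is a high-shelf biquad, again per the RBJ audio EQ cookbook rather than Har-Bal's internal design; a shelf cut above roughly 1kHz, as described here, leaves the low end untouched and settles at the chosen cut towards the top of the band.

```python
import math

def high_shelf(f0, gain_db, fs=44100.0, slope=1.0):
    """RBJ audio-EQ-cookbook high-shelf biquad.

    Mirrors the kind of edit described in the tutorial (a gentle
    shelf cut above ~1 kHz).  Returns (b, a) with a[0] == 1; this is
    an illustrative stand-in, not Har-Bal's own filter design.
    """
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    cw, sw = math.cos(w0), math.sin(w0)
    alpha = sw / 2.0 * math.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    sq = 2.0 * math.sqrt(A) * alpha
    b = [A * ((A + 1) + (A - 1) * cw + sq),
         -2 * A * ((A - 1) + (A + 1) * cw),
         A * ((A + 1) + (A - 1) * cw - sq)]
    a = [(A + 1) - (A - 1) * cw + sq,
         2 * ((A - 1) - (A + 1) * cw),
         (A + 1) - (A - 1) * cw - sq]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]
```

Swapping the roles of the two band edges gives the corresponding low-shelf, which is the tool used in the earlier bass-taming step.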
Now the track sounds like music to my ears as far as the tonality is concerned, but the overall level of the recording is quite low (at -17.66dB average power). To make it more consistent with current recordings we can use the gain slider to increase the mastered level. My preference in this process is to increase the gain slider until the limiting meter just lights. By doing so we can obtain a good level without loss of dynamics. Furthermore, if we consider that this compilation contains quite a few rather dynamic tracks (In the Air Tonight is particularly so) it is wise to take a conservative approach so that the more dynamic tracks don't end up over-limited when matching the loudness. For these reasons I settled on a gain setting of +4dB. Having made the necessary EQ and level changes you could also perhaps use the Air slider to add extra space, although in this case I found none of the tracks required it so have avoided using any Air. Having designed our equalisation we now save the filter file, record the result, and open this track and the corresponding filter as a reference for the remaining tracks to be processed.
and do this,
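For reference, the arithmetic behind a gain slider is simple: a dB figure maps to a linear multiplier of 10^(dB/20), and limiting engages when any scaled sample would exceed the ceiling. The sketch below is a simplification; Har-Bal's actual limiter behaviour is not documented here.

```python
import numpy as np

def apply_gain_db(x, gain_db, limit_dbfs=0.0):
    """Apply a broadband gain (in dB) and report whether limiting
    would engage, i.e. whether any sample exceeds the ceiling.

    A simplified model of a gain slider with a limiting meter; hard
    clipping here stands in for whatever limiter the real software uses.
    """
    g = 10.0 ** (gain_db / 20.0)          # dB -> linear amplitude
    y = x * g
    ceiling = 10.0 ** (limit_dbfs / 20.0)
    limiting = bool(np.any(np.abs(y) > ceiling))
    return np.clip(y, -ceiling, ceiling), limiting
```

This is why the conservative "+4dB until the meter just lights" approach preserves dynamics: as long as no peak crosses the ceiling, the gain is a pure scaling with no waveform alteration.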
On listening to the result the clarity is significantly improved, though there is a hint of low frequency masking and the upper mid range is over-emphasised around 2kHz. Both these issues arise as side effects of using intuitQ on tracks in which spectrum holes exist. In such cases the problem is easily mitigated, with excellent results, by applying the intuitNull cursor to the frequency ranges in which the spectrum holes are found. This has the effect of undoing intuitQ for that specific frequency range. So, our next step is to select the intuitNull cursor and apply it from 106 to 129Hz and also from 1.07kHz to 3.07kHz. Select this tool,
and do this,
and this,
The track equalisation is complete but the loudness does not match well with the newly mastered level of the reference track, Thunder and Lightning. We could use the normal match loudness function in Har-Bal, but if you were to do so you would find that the results are far from good. The reason for this is that the instrumentation of the two tracks is vastly different, and this difference is not taken into account by the normal loudness matching algorithm. However, if we apply the match loudness algorithm so that it attempts to match only over the loudest part of the track spectrum then the results are very good. We can do this easily using the match loudness cursor. In this case the spectrum plateau extends from 68Hz to 619Hz, and applying match loudness to that range results in a gain of 4.2dB. Assuming that the original was well matched this turns out to be a good result, and it is certainly confirmed through listening. Select this tool,
and do this,
and do this,
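The idea of matching loudness over only the loudest part of the spectrum can be approximated as follows: compare the average power of the two tracks within the chosen band (68Hz to 619Hz in this example) and convert the ratio to dB. This is a guess at the principle, not Har-Bal's actual algorithm.

```python
import numpy as np

def match_loudness_gain_db(ref_spectrum, trk_spectrum, freqs, f_lo, f_hi):
    """Gain (in dB) that matches the track's average power to the
    reference's over a chosen frequency band.

    Illustrative only: a sketch of band-limited loudness matching,
    with linear-power spectra sampled at the frequencies in `freqs`.
    """
    band = (freqs >= f_lo) & (freqs <= f_hi)
    ref_power = np.mean(ref_spectrum[band])
    trk_power = np.mean(trk_spectrum[band])
    return 10.0 * np.log10(ref_power / trk_power)   # power ratio -> dB
```

Restricting the comparison to the plateau band sidesteps the problem described above: content outside the band, where the instrumentation of the two tracks differs most, no longer skews the computed gain.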
As this track is a gentle one with few instruments you will note that intuitQ will have overemphasised the upper mid-range between 1kHz and 4kHz. This overemphasis, if left untreated, will result in a slightly metallic and hard sound rather than the warm sound that far better suits this track. To restore the warmth we select the intuitNull cursor and apply it to the frequency range from 960Hz to 3.7kHz. Similarly, there is a natural absence of much sound around 170Hz and 130Hz, which has been over-emphasised by intuitQ. By also treating those regions we can increase the tightness and clarity of the lower mid range. These edits are illustrated below. Select this tool,
and do this,
and this,
and this,
With the track equalisation complete we finish off with loudness matching. We select the match loudness cursor and apply it to the frequency range from 58Hz to 600Hz, resulting in a gain of 4.3dB, which is once again close to the reference gain increase of 4dB. It would appear that loudness matching is largely agreeing with the level matching in the original master. Select this tool,
and do this,
and do this,
Once again, immediately after processing with intuitQ we look at the result to see if intuitQ has overemphasised naturally weak parts of the spectrum. To some extent it becomes quite easy to guess which areas those are but I would generally recommend that you validate those decisions through critical listening. This is easy enough to achieve through applying the edit and then using undo and redo while listening. Doing so should make it readily apparent if you are on the right track. Another good strategy is to switch to frequency response view and look at the peaks in the response. If there are a few peaks that are much larger than the rest that should arouse your suspicion. Using these techniques I identified two areas of concern. One around 570Hz and the other is the region from 1.9kHz to 3.3kHz. We apply intuitNull to both of these as illustrated. Select this tool,
and do this,
and this,
It is clear from both critical listening and visual inspection that the overall tonal balance of this track is
markedly different from our reference. The bass is overly prominent. We attack this issue with the low shelving cursor by applying it to frequencies below 630Hz in the manner shown. Select this tool,
and do this,
Now as a consequence the track sounds too bright (the brightness was necessary previously to combat the masking provided by the prominent bass). We attack this issue with the high shelving cursor by applying it to the frequencies above 870Hz in the manner shown. Select this tool,
and do this,
At this point we partake in further critical listening. On playing back the track and toggling the EQ in and out it is clear the track has much better definition and punch than the original mastering. However, the original has a warmer mid range. In particular, my ears note a rather overbearing stridency in the lead vocal. To combat this I chose to apply a small amount of parametric EQ to the mid-range around 1kHz. This edit is illustrated below. Select this tool,
and do this,
Further listening post edit confirms the edit as a good one and the equalisation job is complete for this track. Once again we now match the levels by selecting the match loudness cursor and apply it to the frequency range from 58Hz to 13.6kHz. This results in a gain of 3.2dB. After doing loudness matching it is worthwhile playing the track and toggling the reference button to verify the compatibility of the track level with the reference and making fine scale adjustments if need be. Listening confirmed the compatibility and I made no further adjustments. Select this tool,
and do this,
and do this,
Again, after applying intuitQ we look for areas of over emphasis and confirm them through critical listening. I identified and applied intuitNull to the frequency ranges 1.3kHz to 3.3kHz, 450Hz to 540Hz and 108Hz to 145Hz as shown. Select this tool,
and do this,
and this,
and this,
On listening I find the result somewhat pleasing although the absence of lower harmonics in both the vocal and the electric piano does not sit well with the other tracks. To maintain a better tonal consistency we go about restoring some bass with the low shelving cursor. We apply a boost to frequencies below 470Hz and a subsequent cut for frequencies below 80Hz to maintain clarity and avoid excessive subsonic amplification. Select this tool,
and do this,
and this,
Listening to the equalisation changes made demonstrates a much fuller sound with a complete lack of mid range stridency which was evident in the original mix. Be that as it may, the track, in my honest opinion, still holds true to what the producers were trying to achieve. It is essentially the same recording but with a little added clarity. Once again, it is time to match the loudness to the reference. In this case we apply the match loudness cursor to the frequency range from 560Hz to 1.3kHz. This results in a gain of 5.9dB. This figure is seemingly large in comparison to the original master but listening tests confirm it to be a good choice. I would argue that the nominally extra 2dB over the original master is compensation for the spectrum modification to the bottom end.
Track 5: Droned
Now we save the filter design, write the EQ'd track and move on to track 5, Droned. Open the file 5.anl. Starting with intuitQ, we apply it to the frequency range from 76Hz to 7.8kHz. Select this tool,
and do this,
Again, we look for overemphasised regions. We identify and apply intuitNull to the following frequency ranges: 1.6kHz to 2.6kHz; 70Hz to 138Hz and 460Hz to 579Hz. Select this tool,
and do this,
and this,
and this,
Then we perform loudness matching by applying the match loudness cursor to the frequency range from 133Hz to 1.81kHz. This results in a gain of 3.4dB. Critical listening tests demonstrate the appropriateness of both the equalisation and the level matching.
The end result far exceeded my expectations and I believe the re-mastering is comparable to anything available from professional mastering houses. The interesting point here is that I re-mastered the entire album in no more than 2 hours, through my old Sennheiser HD450 headphones with dodgy ear pads (home-made, as the originals disintegrated, so they don't sound exactly the way they should), and while suffering the effects of a cold. You certainly can achieve equal results with conventional approaches, but I very much doubt that you can in the same time frame. Imagine the response if you told a high-end professional mastering engineer that you wanted this re-mastered in 2 hours. Do you think that he or she could deliver something as good? They probably wouldn't even consider taking on the job.
Track 6: Hand in Hand
IntuitQ applied from 40.5Hz to 11.5kHz
IntuitNull applied from 483Hz to 599Hz
IntuitNull applied from 2kHz to 3.3kHz
Low Shelving cut applied to frequencies below 588Hz (~ -2dB @ 100Hz)
Parametric cut applied at 1.2kHz, Q = 0.95, gain = -0.7dB
High Shelving cut applied to frequencies above 1.1kHz (~ -2dB @ 10kHz)
High Shelving boost applied to frequencies above 6.9kHz (~ +0.5dB @ 10kHz)
Match Loudness applied from 58Hz to 11kHz

Track 7: I Missed Again
IntuitQ applied from 47Hz to 10kHz
IntuitNull applied from 2.1kHz to 3.3kHz
Low Shelving cut applied to frequencies below 215Hz (~ -2dB @ 50Hz)
High Shelving boost applied to frequencies above 4.7kHz (~ +1dB @ 10kHz)
Match Loudness applied from 57Hz to 11kHz

Track 8: You Know What I Mean
IntuitQ applied from 58Hz to 9.4kHz
IntuitNull applied from 1.4kHz to 3.3kHz
IntuitNull applied from 121Hz to 165Hz
IntuitNull applied from 56Hz to 80Hz
Low Shelving boost applied to frequencies below 348Hz (~ +3dB @ 50Hz)
Low Shelving cut applied to frequencies below 48Hz (~ -10dB @ 20Hz)
Match Loudness applied from 174Hz to 1.24kHz

Track 10: I'm Not Moving
IntuitQ applied from 54Hz to 9kHz
IntuitNull applied from 1.5kHz to 3.2kHz
IntuitNull applied from 508Hz to 672Hz
IntuitNull applied from 56Hz to 80Hz
Low Shelving cut applied to frequencies below 483Hz (~ -5dB @ 40Hz)
High Shelving cut applied to frequencies above 1.46kHz (~ -1dB @ 10kHz)
Match Loudness applied from 61Hz to 1.16kHz

Track 11: If Leaving Me Is Easy
IntuitQ applied from 56Hz to 9kHz
IntuitNull applied from 1.1kHz to 3.8kHz
IntuitNull applied from 121Hz to 145Hz
IntuitNull applied from 226Hz to 290Hz
Parametric cut applied at 103Hz, Q = 15, gain = -4.1dB
Parametric boost applied at 71Hz, Q = 3, gain = 2.8dB
Match Loudness applied from 65Hz to 970Hz

Track 12: Tomorrow Never Knows
IntuitQ applied from 48Hz to 8kHz
IntuitNull applied from 2.4kHz to 3.3kHz
IntuitNull applied from 1.1kHz to 1.9kHz
IntuitNull applied from 140Hz to 165Hz
IntuitNull applied from 71Hz to 93Hz
Parametric cut applied at 64.6Hz, Q = 15, gain = -2.5dB
Parametric cut applied at 130Hz, Q = 15, gain = -2.7dB
High Shelving cut applied to frequencies above 836Hz (~ -1dB @ 10kHz)
High Shelving boost applied to frequencies above 6.75kHz (~ +0.2dB @ 10kHz)
Match Loudness applied from 49Hz to 4.41kHz
What is the significance of that? Masking! If you have a sharp transition in the spectrum then it is indicative of an instrument occupying one part of the spectrum, possibly masking another. After applying intuitQ you typically hear more detail than before, though in some cases that extra detail is not desired. In those circumstances we can restore that masking with intuitNull. The key point here is that these tools empower you with the ability to quickly and easily optimise the track EQ.

It is an interesting exercise applying intuitQ to a well-mastered track. Often the resulting changes intuitQ makes are small and inaudible, indicating the validity of the algorithm. In cases where intuitQ degrades the response it is almost always due to the over-emphasis of quiet content that is normally masked, either partially or fully. Simple application of intuitNull corrects the error, resulting in a very good equalisation. Again, the key issue is that these tools give resulting equalisations that are consistent with the decisions made by professional mastering engineers. IntuitQ and the other tools provided by Har-Bal form a natural, efficient and effective means of performing equalisation. It would seem that much of the objection stems from the fact that we don't offer single-button mastering perfection, though we never made such claims. Perhaps a case of damned if you do and damned if you don't.

One thing that intuitQ offers that mere listening never can is objective separation from the acoustic peculiarities of a specific listening setup. All studios, no matter how good, will have some degree of bias, be it from imperfect acoustics or imperfect components or even imperfect engineering (people in all professions have bad days). Subtle variations in frequency response will introduce bias into the perception of the recording by the mixing or mastering engineer. That bias translates into incorrect equalisation and/or mixing decisions.
In fact, if you happen to overlay the realised frequency responses of the equalisation filters for re-mastered tracks from one album recorded in one studio, a pattern generally emerges. Here are three examples of exactly that. The first is a selection of tracks from Face Value by Phil Collins, next the tracks from Abacab by Genesis, and finally tracks from Elsewhere for 8 minutes by Something for Kate.
Filter responses for tracks from Elsewhere for 8 minutes by Something for Kate
All tracks demonstrate a curious coincidence of problem areas, particularly in the latter two cases. These are not exceptional cases. In fact, most albums that I have re-mastered (for my own personal listening pleasure, not commercial re-mastering) using intuitQ show similar consistency between tracks. Is this mere coincidence, or an indication of the acoustic characteristics of the environment where the tracks were mixed and/or mastered? It certainly would be interesting to perform acoustic tests on the studios involved, though the logistics of making that a reality (for someone in my position) make it unlikely, at least for now.

A sceptic may argue that these results simply show biases in my own listening environment and not that of the studio. The counter to that argument is that if this were the case you would expect the common features to be common to all tracks I re-master. This is clearly not the case, as the Something for Kate album colourations are quite distinct in comparison with the Genesis and Phil Collins cases. An interesting point about the first two examples is that they were both recorded and mixed by Hugh Padgham, so there is a possibility of common factors resulting in similar mixing biases (and this is no criticism of Hugh Padgham, whose recording work I admire).

This concludes our introductory tutorial on using Har-Bal. Be prepared to experiment. In time and with experience you will discover the best way to use it. Those interested in exploring the fullest potential of Har-Bal should continue on to the Advanced Tutorial for a brief discussion on how to get the best out of your tracks. Should you have any ideas or comments concerning Har-Bal we would certainly be interested in hearing them. You can email us at support@har-bal.com.
upon me until months later when attempting to improve upon another entirely different album. Yet again intuitQ was of no help, but this time I recalled that earlier result and wondered what the outcome might be if I were to attempt to control problem areas in a recording by exaggerating, or at least preserving, the peak and trough excursion in the spectrum rather than dumbing it down. To my pleasant surprise it appeared to work, though not as well as I'd hoped. The problem was that Har-Bal 2.2 had 1/6th octave spectrum smoothing applied, so the degree to which I could read the peaks and dips was limited by that factor. Furthermore, the selectivity of the designed filter was similarly limited. When the resolution was increased to 1/12th octave the quality of the equalisation outcomes improved markedly. Whereas before, remediation was producing a balanced but dull recording, the new approach was producing a balanced average spectrum trend without compromising the track dynamics. Thus was born Empathetic Equalisation, the name of which stems from the idea behind the technique of preserving the narrow peaks whilst taming the broad average: the equalisation has empathy toward the music.
which can be found on the lower right hand corner of the Har-Bal window. It is readily apparent that the track is particularly strident in the 2kHz region and less so around 500Hz whilst also showing some weakness around 300Hz and between 600Hz and 1.2kHz. These problem areas are most apparent in the peak spectrum trace though the average trace shows similar colouration. These problem areas are readily audible though ridding the track of them using conventional approaches (including intuitQ) leads to unpleasing results.
The essence of Empathetic Equalisation is to produce a smooth envelope of peaks in the peak trace without reducing the adjacent peak-to-trough depth. A more conventional approach would be to apply a narrow-band cut to high peaks, but doing so will reduce the peaks far more than the troughs, and this has significant masking consequences, which we shall demonstrate later. For the moment let us just assume that the way to tackle this issue is by the following technique.

Rule 1: If you have a high peak that needs cutting, then cut the adjacent troughs to the peak, or cut at the mid-points between the peak and the adjacent troughs, using high-Q parametric tool edits.

Rule 2: If you have a low peak that needs boosting, then boost the peak directly with high-Q parametric tool edits.

Rule 3: If you have a broad and solid peak with weak troughs, then break up the broad peak by using high-Q cutting of the weak troughs, making the trough depths much bigger.

Rule 4: In general you should shape the spectrum (peak and average) so that the resultant spectrum flows through the original (particularly when viewed at 1/3 octave resolution). In other words, there is as much of the old spectrum above the new as is below.

The reason for aiming for this type of shape is that it ensures you are respecting the mix and not attempting to drastically alter it. Note that the assumption here is that the mix was carried out in a tonally neutral environment with neutral speakers and ears. This is not always the case, so you may end up altering the mix composition to compensate for poor mixing conditions. The strong 2kHz output in the example is a case in point, possibly caused by deficiencies in the monitors used at the time. Please note that these rules are to be interpreted in a loose fashion. They are not absolutes in any mathematical sense but rather a guide on how to approach the problem. Every track invariably requires unique processing, which will often not strictly follow these rules, and these unique adjustments are arrived at through trial and error. Returning to the spectrum with these rules in mind, it can be seen that the peak spectrum needs boosting and cutting in the locations illustrated below.
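Applying the rules above starts with locating the peaks and troughs in the (log-magnitude) peak trace. In Har-Bal you do this visually on the display, but the bookkeeping can be illustrated with a deliberately naive, index-based Python helper (a hypothetical name, not a Har-Bal function):

```python
import numpy as np

def peaks_and_troughs(spectrum_db):
    """Locate local peaks and troughs in a log-magnitude spectrum.

    A helper for reasoning about the Empathetic Equalisation rules:
    Rule 1 cuts at troughs adjacent to high peaks, Rule 2 boosts weak
    peaks directly, Rule 3 deepens weak troughs inside broad peaks.
    Deliberately naive sign-change detection on the first difference.
    """
    d = np.diff(spectrum_db)
    peaks = [i for i in range(1, len(spectrum_db) - 1)
             if d[i - 1] > 0 and d[i] < 0]      # rising then falling
    troughs = [i for i in range(1, len(spectrum_db) - 1)
               if d[i - 1] < 0 and d[i] > 0]    # falling then rising
    return peaks, troughs
```

With the peak and trough indices in hand, each rule maps to a high-Q parametric edit centred on a trough (Rules 1 and 3) or a peak (Rule 2).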
In greater detail, let us start by bringing down the main problem area: the 2kHz region. We begin by choosing an adjacent trough and cutting with a high-Q cut to bring down the peak amplitude (note that when using the gain tool the Q can be forced to its maximum setting by pressing and holding down the M key when dragging the mouse). This is an application of Rule 1 above.
Step 1 - cutting adjacent troughs to bring down a high peak

Next we tackle the double peak around 2.5kHz. Here we exaggerate the depth of the trough in accordance with Rule 3.
Step 2 - exaggerating trough depths to reduce a broad peak

Now we continue to cut dominant peaks in accordance with Rule 1.
Step 4 - cutting adjacent troughs to reduce peaks, continued

After these edits we find our envelope of peaks is beginning to take shape, though some peaks are too weak. Those weak peaks we exaggerate in accordance with Rule 2.
Step 7 - boosting weak peaks, continued

The upper parts of the spectrum are now taking shape but we still need to tackle the issues in the lower part. Tackling the lower half we apply the same techniques as for the upper half. Rather than demonstrate this in repetitive detail we simply highlight the peaks and troughs that require adjustment and the direction of the adjustment.
Step 8 - Illustration of the peaks and troughs to modify

Now the spectrum is beginning to show much better uniformity. At this point we could either leave it as it is and obtain a clean but less controlled sound, or take it a little further to obtain a more polished but less idiosyncratic sound by adjusting the peaks such that they fit a uniform envelope. To do so we would adjust the regions as indicated below,
Step 9 - Illustration of the peaks and troughs to modify for a more polished sound
resulting in the following spectrum. The rationale for this approach is that the peak intensity in the peak spectrum loosely corresponds to the intensity of notes played by the instruments within the composition. By keeping them in a smooth envelope we end up with an effect that is similar to the use of compression to control excessive dynamics within particular instruments but without the side effects of compression. However, you should be cautious not to over inflate low peaks in music that is played with fine touch since in such cases some notes will intentionally be played with less intensity than others and you may end up undoing that artistic intent.
Step 10 - Illustration of the peaks and troughs to modify, continued

At this point we now have a complete prototype filter for this track. You can see from the illustration that the peak power spectrum curve is contained within a uniform envelope. Note that although this has been possible with this track and many others, some tracks have spectra in which producing such a smooth envelope would create excessive peaking in the average spectrum (common with tracks that have few instruments), and in such cases we need to relax that interpretation to allow for a more uniform average spectrum. In essence, what you should be aiming for is a uniform peak and average spectrum trace when viewed at 1/3rd octave resolution, applying high-Q edits to achieve that end. It just so happens that for quite a large number of tracks, following the Empathetic Equalisation approach often leads to that uniformity. Where it doesn't is where you will need to customise your approach to achieve the desired outcome.
Enabling the original spectrum trace we see that there is a considerable difference between the original and the modified spectrum but importantly all troughs have been retained.
As a final check on the overall balance it is worthwhile viewing the spectrum shape at 1/3rd octave resolution as this more closely corresponds to the resolution of human hearing. In this view strong narrow peaking or dipping is indicative of a filter that possibly could be improved upon. Select 1/3rd octave resolution by clicking on this button,
which can be found on the lower right hand corner of the Har-Bal window. In doing so we note that the peak spectrum trace is much more uniform than the original shape it had prior to designing our filter. These differences in uniformity are readily apparent when listening to the track. This raises the question of whether or not to listen to the track whilst editing. As a general rule I would suggest not to as a complete filter response is composed of many edits and each individual edit, whilst having a significant effect on the overall result, is a minor effect if taken on its own. As such, it is unlikely that you can make an accurate judgement on the effectiveness of each edit in isolation. Only once I have a near complete filter response do I find it worthwhile listening to the track whilst editing.
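Conceptually, a 1/3rd octave view averages each frequency bin over a band extending 1/6th of an octave either side of it, which roughly matches the resolution of human hearing. The sketch below illustrates that idea; Har-Bal's exact smoothing kernel is not documented here, so treat this as an assumption about the principle only.

```python
import numpy as np

def third_octave_smooth(freqs, power):
    """Smooth a power spectrum to roughly 1/3-octave resolution by
    averaging each bin over a band of +/- 1/6 octave around it.

    An O(n^2) illustrative sketch of a fractional-octave view, not
    Har-Bal's implementation.
    """
    half = 2.0 ** (1.0 / 6.0)             # half of a 1/3-octave band
    out = np.empty_like(power)
    for i, f in enumerate(freqs):
        if f <= 0:
            out[i] = power[i]             # leave DC untouched
            continue
        band = (freqs >= f / half) & (freqs <= f * half)
        out[i] = np.mean(power[band])
    return out
```

Because the band is constant in octaves rather than in Hz, the smoothing is mild at low frequencies and heavy at high frequencies, which is why narrow peaks that stand out in this view are worth suspicion.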
Another interesting thing to try is comparing the behaviour of intuitQ with Empathetic Equalisation. If you do so you will find results similar to the ones shown below. In the spectrum view we see that a single application of intuitQ has reduced the intensity of the same areas as our Empathetic Equalisation approach, but with less severity. Also, as intuitQ fitting is primarily driven by the average spectrum, we note that the average spectrum is smoothest for intuitQ, whilst in the Empathetic Equalisation case the peak spectrum is smoothest.
1/3 octave resolution spectrum shape comparison between intuitQ and Empathetic EQ
Turning to the frequency response view, the real differences become far more apparent. Here we see that Empathetic Equalisation has produced a much more extreme equalisation response, though ironically it sounds far from extreme, and much better than the intuitQ equalisation. More importantly, it clearly sounds superior to the original mastering, something I'd encourage everyone to try, be it with this track or a problem track of your own.
The reason why we can get away with such extreme equalisation and yet have a natural sounding recording is because those extreme edits are precisely in tune with the music and span a narrow bandwidth.
On Listening
The effect of listening to this track, and many others processed using this approach, is something quite difficult to adequately describe. Perhaps "revelation" is an appropriate description. The typical impression is one of increased clarity, preservation of most of the dynamics, suppression of (digital) harshness, greater intimacy and an increased sense of volume. Another way of summarising it in a single sentence is to say it is like lifting a veil from the recording, or lifting the performance out of a box. The most noticeable side effects (which are rarely an issue) are a possible shift of individual instrument positions within the mix and a loss of some wetness in recordings with a lot of ambience. In both cases the increased detail of what you can now hear more than compensates for these minor and largely inevitable side effects (i.e. doing anything to a recording is likely to affect them in some way). Suffice to say we feel Empathetic Equalisation is truly a step forward in the art of equalisation, but don't take our word for it. Above all, try it out for yourself, and should you find problems with the technique then we'd love to hear about them. Only through such insights do we have any chance of improving.
Example track analyses: Dire Straits, Foo Fighters, Suzanne Vega.
A typical track is a collage of many different instruments, each occupying parts of the spectrum, usually with a great deal of overlap. Because of this overlap masking occurs and, if severe enough, it can hide much of the detail within the recording. Now if the masking is of a toned sound by an un-toned one, then Empathetic Equalisation can help to unmask that sound. How is this so? Because one of the sounds is narrow band and the other is broadband, and the frequency selectivity of our hearing is simply not high enough to hear the spectrum aberrations we are introducing.

For those not already familiar with the concept of critical bands in human sound perception, we shall introduce it here. With regard to a broadband sound masking a pure tone, the critical bandwidth is loosely defined as follows. Consider the case of a pure tone (sine wave) that is being masked by a narrow band noise whose band centre is aligned with the pure tone. The presence of the noise will cause a threshold shift in our hearing at that centre frequency of some level (depending on the noise intensity and bandwidth). Now if we were to increase the bandwidth of the narrow band noise whilst keeping the noise power constant and centred over the tone, then there will be an increase in the level of masking. However, at some noise bandwidth the level of masking will no longer increase with an increase in noise bandwidth, and that bandwidth is known as the critical bandwidth.

The mechanism of masking arises because sounds occupying different parts of the spectrum excite different parts of the basilar membrane. When sounds are spectrally close enough, or indeed occupy the same part of the spectrum, they will excite the same part of the basilar membrane, making the two sounds fuse together and become indistinguishable as separate sounds. In the example of narrow band noise masking a pure tone, the point at which we have no further increase in masking with increasing noise bandwidth is the point at which the noise excites all the frequencies that would excite that one part of the basilar membrane. Any additional noise bandwidth will not result in an increasing level of masking, since that increased bandwidth will now be exciting a different part of the basilar membrane, producing a distinct nerve response.

The critical bandwidth of human hearing changes with frequency and very roughly corresponds to about 1/3rd octave resolution. To that end, the way we hear the effect of Empathetic Equalisation is roughly equivalent to viewing the spectrum with 1/3rd octave resolution. Doing so, it is clear that the typical Empathetic Equalisation filter produces a smooth spectrum, and not the very undulating spectrum we see in the 1/12th octave view.

This still leaves the question as to why we can reduce masking. This can be easily illustrated by a similar contrived experiment using two different Har-Bal filter realisations to attenuate a 440Hz sine wave mixed with pink noise by a nominal 5dB. In one filter realisation we use the traditional approach of cutting the peak directly, and in the other realisation we use the Empathetic Equalisation approach, which creates a response with deep troughs either side of the 440Hz tone we wish to cut. The two realisations are illustrated below.
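The claim that critical bandwidth "very roughly" corresponds to 1/3rd octave resolution can be made concrete. Zwicker's classic approximation from the psycho-acoustics literature (not anything built into Har-Bal) gives the critical bandwidth in Hz as a function of centre frequency:

```python
def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of the auditory critical bandwidth (Hz)."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def third_octave_bandwidth_hz(f_hz):
    """Width (Hz) of a 1/3rd octave band centred on f_hz."""
    return f_hz * (2.0 ** (1.0 / 6.0) - 2.0 ** (-1.0 / 6.0))

for f in (250.0, 1000.0, 4000.0):
    print(f"{f:6.0f} Hz: critical band ~{critical_bandwidth_hz(f):5.0f} Hz, "
          f"1/3 octave ~{third_octave_bandwidth_hz(f):5.0f} Hz")
```

The two track each other only roughly: critical bands are wider than a 1/3rd octave band below a few hundred Hz and somewhat narrower above, which is why 1/3rd octave is a rough stand-in for the ear's resolution rather than an exact model of it.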
If we apply these two filter realisations to the pure tone and the pink noise we obtain the following results. Note that the pure tone spectrum measurements show a noise component mixed with the sinusoid. This noise component is not the mixed pink noise of this experiment and does not represent its level; it is simply an added noise needed to allow Har-Bal to function properly with a high level pure tone. Without it, the loudness compensation within Har-Bal applies so much gain that the fixed-point arithmetic used to realise the filter is driven into overflow.
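Har-Bal's actual filter realisation is internal to the program, but the difference in shape between the two approaches can be sketched with standard peaking-EQ biquads from the well-known Audio EQ Cookbook. The centre frequencies, gains and Q values below are illustrative choices, not the settings used in the experiment above.

```python
import numpy as np

def peaking_biquad(f0, gain_db, q, fs):
    """Audio EQ Cookbook peaking filter; returns (b, a) coefficients."""
    big_a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * big_a, -2.0 * np.cos(w0), 1.0 - alpha * big_a])
    a = np.array([1.0 + alpha / big_a, -2.0 * np.cos(w0), 1.0 - alpha / big_a])
    return b, a

def cascade_gain_db(sections, f, fs):
    """Magnitude (dB) of a cascade of biquad sections at frequency f."""
    z = np.exp(-2j * np.pi * f / fs)  # z here stands for e^{-jw}
    h = 1.0 + 0.0j
    for b, a in sections:
        h *= (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20.0 * np.log10(abs(h))

fs = 44100.0
# Conventional realisation: cut the 440 Hz peak directly.
conventional = [peaking_biquad(440.0, -5.0, 4.0, fs)]
# Empathetic realisation: deep, narrow troughs either side of the tone.
empathetic = [peaking_biquad(370.0, -12.0, 8.0, fs),
              peaking_biquad(523.0, -12.0, 8.0, fs)]

for name, sections in (("conventional", conventional),
                       ("empathetic", empathetic)):
    print(name, "gain at 440 Hz:",
          round(cascade_gain_db(sections, 440.0, fs), 2), "dB")
```

Evaluating the two responses shows the conventional filter reaching its full 5dB cut exactly at 440Hz, while the empathetic cascade leaves only a milder dip at the tone itself and much deeper troughs either side of it, mirroring the character of the two realisations illustrated in this section.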
In the pure tone case the 1/3 octave spectrum amplitudes of the 440Hz peak (measured using a separate spectrum analyser on the Har-Bal filtered wave file) are the same for both filters, which is what we would logically expect to see. However, in the case of the pink noise, the amount of attenuation afforded to the pink noise within that same 1/3 octave band is significantly greater for the Empathetic Equalisation approach (see below).
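The separation bookkeeping used in discussing this experiment is simple enough to verify directly. The attenuation figures below are the ones measured in the experiment; the helper function is just illustrative arithmetic.

```python
def separation_change_db(tone_cut_db, noise_cut_db):
    """Change in tone-to-noise separation after filtering.

    Both arguments are attenuations in dB (positive = cut). A negative
    result means the tone is now more buried in the noise (more masking);
    a positive result means it stands further out (less masking).
    """
    return noise_cut_db - tone_cut_db

# Measured figures: the tone is cut by 4.74 dB in both cases; the in-band
# noise is cut by 2.04 dB (conventional) or 5.83 dB (Empathetic EQ).
conventional = separation_change_db(4.74, 2.04)
empathetic = separation_change_db(4.74, 5.83)
print("conventional:", round(conventional, 2), "dB")  # masking increased
print("empathetic:", round(empathetic, 2), "dB")      # masking reduced
```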
More importantly, we note that in the case of the conventional peak cut filter response the tone is attenuated by 4.74dB whilst the noise in that third octave band is attenuated by only 2.04dB. Hence we have a loss of separation between the noise and the tone of 2.7dB with this filter. In other words, with this filter we have increased the masking of the tone by the noise. In the Empathetic Equalisation approach the tone is again attenuated by 4.74dB, but in this case the noise in that third octave band is attenuated by 5.83dB. Hence we have a gain of separation between the noise and the tone of 1.09dB with the Empathetic EQ filter. In other words, with this filter we have reduced the masking of the tone by the noise.

This behaviour anecdotally agrees with listening tests conducted using the two different approaches to equalisation. That is, when equalisation is performed by peak flattening, recordings start to sound muffled and indistinct (presumably due to increased masking between toned and un-toned sounds), whereas when equalisation is performed by the Empathetic Equalisation approach, recordings start to sound clearer and better defined (due to reduced masking between toned and un-toned sounds). The fact that the frequency response looks horribly extreme and lumpy is made irrelevant by the fact that our ears can't hear that non-uniformity. As such, those extreme filters do not colour the sound appreciably. In much the same way, a control room or mastering studio with pleasant ambience sounds wonderful to the ear but looks horribly non-uniform when measured as a frequency response.

This concludes the advanced tutorial on Empathetic Equalisation. The theoretical discussion of this technique offers a possible explanation but much is yet to be explained. Furthermore, no attempt has been made to prove this to be the fundamental mode of operation of this technique. To do so would
require careful experimental design with a large statistical sample, for which I have neither the expertise nor the time and resources. I leave that as a possible research opening for a talented engineer in the field of psycho-acoustics, if indeed anyone is interested in pursuing it. As a final remark on Empathetic Equalisation and Har-Bal in general, it is interesting to note that such a technique is simply impossible to conduct without spectrum analysis and a tool like Har-Bal, irrespective of the talent of the mastering engineer. Just as critical bandwidth in human hearing is the key that makes Empathetic Equalisation possible, it is the bane of mastering engineers, as it makes it impossible for them to hear the peaks that Har-Bal sees. Finally, I'd like to thank both Earle Holder and Evan Kendon (Hitmaker) for many engaging discussions on this new technique, and Dean Wuksta for the inclusion of track analyses from his song You. Without these exchanges this tutorial would surely be lacking in substance.