
Audio Masterclass Music Production and Sound Engineering Course

Module 04: Equalization

Module 04

Equalization
In this module you will learn about filters and various types of equalizers. You will also learn how
filters and equalizers are used for corrective and creative purposes.

Learning outcomes
To understand the function and operation of filters and equalizers.
To be able to use filters and equalizers to correct faults and problems in signals and recordings.
To be able to use filters and equalizers to enhance single signals.
To be able to use filters to blend sounds in the mixing process.

Assessment
Formative assessment is achieved through the short-answer check questions at the end of this
module.


Module Contents

Learning outcomes
Assessment
Equalization
Frequency and level
Level and decibels
Frequency response
Filters
Passive EQ and tone controls
Mixing console EQ
EQ IN button
Graphic EQ
Using equalization
Corrective equalization
Creative equalization
Equalizing the mix
Equalization for live sound
Check questions


Equalization
Equalization, or EQ, is one of the most basic yet most
important tools in recording, live sound, and all other
activities of sound engineering. Equalization is used
to repair problems, to make individual instruments
and voices sound better, and to help instruments and
voices blend together in the mix. It is also used to
improve the mix, and to make tracks on a CD flow
seamlessly from one to another without sudden
changes of frequency balance.

This text will take you through EQ from an understanding of frequency and level at first principles, all the way to how to shape and control frequencies in the mix, which is indeed both a skill and an art. When you understand EQ, you will be able to start to apply it effectively.

Without the necessary understanding, you will be flying blind, making random changes and not really knowing whether you are improving matters or not. But when you do understand all the principles, all the techniques and all of the options, you will start to be in control. Over a period of time, you will master the art of EQ, as well as the science.


Frequency and Level


One of the most important features of sound is
frequency. Imagine one string of a guitar. When
plucked, it vibrates at a certain rate. This is its
frequency. The lowest string of a guitar vibrates
approximately 82 times per second.

We call times per second, hertz, named for the German scientist Heinrich Rudolf Hertz, who discovered radio waves. The unit is always spelled hertz with a lower-case h. The abbreviation for hertz is Hz, with an upper-case H.

The human ear can only hear a certain range of frequencies. 20 Hz, or twenty vibrations per second, is about as low as it can go. The human body can perceive frequencies lower than that, but it is not hearing them in the true sense, but rather feeling them in the abdomen.

The upper range of frequencies that we can hear extends to 20,000 Hz, or 20 kilohertz (20 kHz). Kilo is the abbreviation for one thousand, just as one kilogram equals one thousand grams.

Not everybody can hear all the way up to 20 kHz. The upper frequency range of the human ear deteriorates with age. So although at age ten you might be able to hear up to 20 kHz, by age 30 you can probably only manage 15 kHz or so. By the time you retire you might be down to 10 kHz.

Subjectively, this change isn't noticeable, and it is perfectly possible to work as a sound engineer without this high frequency range. It is however always desirable to have a youngster around to warn of any possible high frequency noise or interference problem.

In light of the above, in sound engineering we normally quote the frequency range that we are interested in to be 20 Hz to 20 kHz. Some rare people can hear beyond that range, but for most purposes 20 Hz to 20 kHz is enough.

At this point it might be a good idea to put this into perspective in relation to musical instruments. The piano is a good reference point. The lowest note on a standard piano is 27.5 Hz, and the highest note is 4186 Hz. No commonly found instrument goes lower than the piano; the piccolo extends a little higher.

But if the highest note is 4 kHz or so, does that mean that the rest of the frequency range up to 20 kHz doesn't matter? Not so: all musical notes have many frequency components. Take for example the note A below Middle C on a piano. This has a frequency of 220 Hz. However, it doesn't only contain 220 Hz. It will also contain components at 440 Hz, 660 Hz, 880 Hz, 1100 Hz, 1320 Hz etc. Can you see a pattern?
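
The pattern is simple whole-number multiplication. As a quick illustration, here is a minimal Python sketch (plain arithmetic, nothing assumed) that prints the first six components:

```python
# Harmonic series of A below Middle C: every component is a
# whole-number multiple of the 220 Hz fundamental.
fundamental = 220  # Hz

for n in range(1, 7):
    print(f"Component {n}: {n * fundamental} Hz")
# Prints 220, 440, 660, 880, 1100 and 1320 Hz.
```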

Any note played by a string or wind instrument obeys what is called the harmonic series. It consists of the base note, which is called the fundamental. This is the pitch we hear and by which we determine the note. It also contains frequencies that are whole-number multiples of the fundamental.

These are called the harmonics, or overtones. These overtones extend all the way up to 20 kHz and beyond. Even a note from a double bass is rich in high frequency harmonics. To lose these harmonics would take all the brightness and presence from music, so the upper frequencies are important.

High frequencies, which we can abbreviate HF (low frequencies are LF), are particularly important for metallic percussion instruments such as cymbals. Cymbals are incredibly rich in harmonics. They follow a different harmonic series to string and wind instruments, and they also have strong random frequency components.

These high frequencies must be captured and preserved in any recording or amplified performance. Otherwise a set of expensive, high-quality cymbals can easily sound like trash can lids banging together.

Imagine a group of acoustic instruments playing together with no amplification. As you listen, the various notes and harmonics blend together in a way that is pleasing to the ear, assuming good players of course.

When you record or amplify these instruments, ideally all frequencies between 20 Hz and 20 kHz should be captured. Also, and very importantly, they should all be captured in the same relative levels as were created by the instruments. In other words, every frequency should be treated equally. There's the "equal" of equalization, as we shall see shortly. No groups, or bands, of frequencies should be either raised or lowered in level in comparison to the others.

Let's narrow down to just one piece of equipment in the recording or amplification chain: the microphone. Let's imagine a single microphone pointing at a piano. This microphone should capture all the frequencies produced by the piano at the same relative levels, including all the fundamentals and all the harmonics up to 20 kHz. It should not emphasize or subdue any bands of frequencies. If it can capture all frequencies equally, we say it has a flat frequency response.

[Figure: Flat frequency response]

This is a term that crops up regularly in sound engineering and it is massively important. Every item of equipment that we use should have a flat frequency response, meaning that it handles all frequencies equally. That way, the natural sounds of instruments, including their harmonics, can be preserved.

We can visualize the concept of a flat frequency response graphically opposite.

What we can see from this is basically that what comes out is what goes in. If this were the frequency response chart of a microphone, then that microphone would be able to take in acoustic sound vibrations and turn them into an electric signal, and the electric signal would be at all frequencies in exact proportion with the original acoustic signal.

The height of the signal on the y-axis (vertical axis) of the graph is called its level. Level is a word that is used all the time in sound engineering and relates to how loud the sound will be when eventually it is reproduced by loudspeakers or headphones.

It is obvious therefore that a flat frequency response is desirable. But real-world conditions often dictate that this ideal cannot be achieved. For example, the microphone might not be of particularly good quality, or perhaps it has certain other qualities that outweigh the need for a flat frequency response, in the mind of the recording engineer. If you can't have a flat frequency response, then the second-best case that you would hope for is the smooth roll-off opposite.

In this example, clearly the frequency response is not flat. We describe a smoothly descending response as roll-off. A smoothly increasing response is sometimes called tip-up. Smooth roll-off or tip-up doesn't necessarily sound bad. It doesn't sound like the original acoustic performance, but it wouldn't be offensive to the ear, unless done to extremes.

[Figure: Low and high frequency roll-off]

Likewise, in real-world conditions you might find the frequency response of a certain item of equipment, or a certain combination of instrument/room/microphone, to be something like the two center graphs opposite.

Where the response goes up in the middle, we call this a boost or peak. Where it goes down, it is a cut or dip. Since in both cases the response is smoothly changing, it doesn't have to sound too bad. If the extent of the peak or dip is large, then yes it can sound bad. If a peak is narrow, then that can sound bad too (oddly, a narrow dip doesn't sound too bad; in fact it may go unnoticed). But where the response is smooth, the problem can easily be corrected.

[Figure: Boost or peak]
[Figure: Cut or dip]

The worst-case scenario is where the response is uneven. Loudspeakers often display this highly irregular response. When it occurs to a significant extent, it is displeasing to the ear, and is difficult to correct. Microphones too often display an irregular response to sounds that arrive at angles other than head-on (such as reverberation).

[Figure: Uneven response]


Level and decibels


We mentioned level earlier. Level is an important concept and it would be impossible to fully describe EQ without an understanding of level. An acoustic sound has a certain loudness, which we can call its level. When it is translated into an electric signal, clearly that signal doesn't have a loudness because electricity is silent. But it still has a level.

When instruments are mixed together in a mixing console, each is set at a certain level. The overall mix has to be of the correct level to record onto the eventual delivery medium. See how that word "level" is used all the time.

Level can be measured in decibels (abbreviated dB). We use decibels because they can apply to sound and to sound signals traveling or being stored in any medium. So whether we are talking about an acoustic sound traveling in air, the signal from a microphone, a recording on a tape recorder, a vinyl record, a film soundtrack etc., we can always describe level in terms of decibels.

Without decibels we would continuously have to swap between newtons per square meter, volts, nanowebers per meter etc. It's so much easier to talk in terms of decibels. If a singer is asked to sing 10 decibels louder, then the signal from the microphone will be 10 dB higher in level; the recording will be 10 dB hotter (which means the same as higher in level), and eventually the sound coming from the loudspeakers will be 10 dB louder too. Decibels make describing level easy.

Working out decibels can be complex, but all we need here are a few simple points...

A doubling of level is +6 dB
A halving of level is -6 dB
A quadrupling of level is +12 dB
A quartering of level is -12 dB
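
These four points all follow from the standard formula relating a level ratio to decibels, dB = 20 x log10(ratio). A minimal Python check:

```python
import math

def ratio_to_db(ratio):
    """Convert a level (amplitude) ratio to decibels: dB = 20 * log10(ratio)."""
    return 20 * math.log10(ratio)

print(ratio_to_db(2))     # doubling    -> +6.02 dB
print(ratio_to_db(0.5))   # halving     -> -6.02 dB
print(ratio_to_db(4))     # quadrupling -> +12.04 dB
print(ratio_to_db(0.25))  # quartering  -> -12.04 dB
```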

This text is not about decibels, otherwise the explanation would be much longer and much more complex. But you need to have this basic grasp of decibels to understand EQ. If you can appreciate the above points, then you know as much about decibels as you need.


Frequency response
So far we have talked about frequency range. But to say that a certain item of equipment has a frequency range from 20 Hz to 20 kHz isn't precise enough. Maybe it covers that range equally, so that all frequencies are handled in the same way. Or maybe it only responds just a little at the extremes of the range. To talk about frequency range is useful to an extent, but it is not precise. We need to talk of frequency response.

Frequency response not only describes frequency range, but it describes the level of the response too. Ideally the response should be equal for all frequencies. We would call this a flat response, which is good. A completely flat response is never achievable; there will always be some deviation, however slight. So for a certain item of equipment, we might find the frequency response specified as follows:

20 Hz - 20 kHz (+0, -1 dB)

What this means is that the level at 1 kHz (1000 Hz) is taken as a reference, and the response at other frequencies determined in relation to that. In this example, no frequency has a response greater than that at 1 kHz; likewise the maximum downward deviation is 1 dB: no frequency between 20 Hz and 20 kHz is more than 1 dB down with respect to 1 kHz.

Let's try another example:

20 Hz to 20 kHz (+/- 3 dB)

In this case, the response varies quite widely, over a six decibel range. However, at no frequency is the response greater than +3 dB compared to 1 kHz, and at no frequency is the response less than -3 dB compared to 1 kHz, between the frequency limits shown.
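
To make the idea concrete, here is a small Python sketch that tests a set of measured response points against a specification of this form. The measurement values are invented purely for illustration:

```python
# Measured response points as (frequency in Hz, dB relative to 1 kHz).
# These numbers are invented purely for illustration.
measurements = [(20, -0.9), (100, -0.2), (1000, 0.0), (10000, -0.4), (20000, -1.2)]

def meets_spec(points, f_low, f_high, upper_db, lower_db):
    """True if every point inside the frequency limits stays within the dB limits."""
    return all(lower_db <= db <= upper_db
               for f, db in points
               if f_low <= f <= f_high)

# Check against "20 Hz - 20 kHz (+0, -1 dB)".
print(meets_spec(measurements, 20, 20000, 0.0, -1.0))  # False: 20 kHz is 1.2 dB down
```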

Either method of describing frequency response is perfectly good. You should be absolutely clear though that a frequency response specification must include both lower and upper frequency limits, and lower and upper level limits. If not all four items are included, then it is inadequate as a frequency response specification.


What is equalization for? What are we making equal to what?

Equalization is all about cutting or boosting bands of frequencies with respect to other bands of frequencies. So what are we making equal to what? That is a good question.

To answer that question we have to go back into history. The earliest practice of sound engineering came when it was discovered that a telegraph cable, used for sending simple electrical pulses in Morse code, could be used to transmit speech over long distances. One of the problems was that level was lost over long distances. To combat this, amplifiers were placed along the way to boost the signal back up again periodically.

However, it was found that certain bands of frequencies suffered more than others. So the amplifiers were made frequency selective to bring the response back to flat. This process was called equalization. So equalization means making the output of a telephone cable equal to the input: equal in terms of frequency response.

For a long time, this was what equalization was used for. Even well into the era of recording and broadcast, equalization was used to correct frequency response problems, where and when they occurred. But then at some point, some bright spark recording or broadcast engineer must have twiddled an EQ control and thought, "Hey, that sounds nicer!" So rather than using EQ to correct a problem, it was used to improve the sound subjectively. So no longer was the output equal to the input; it was enhanced, over and above the input.

Once this idea caught on, there was no limit to how EQ could be used. Recording engineers in particular used EQ in many and varied creative ways, particularly during the 1960s.

Moving on to today, we use EQ in both of these ways. Firstly as a corrective tool, to compensate for frequency response irregularities caused by inadequate equipment, a less than satisfactory instrument, or poor acoustics or microphone positioning. When we have done that, we go further and use it to enhance the sound to our liking.

Later in this text we will examine both of these uses of EQ in detail. But firstly we need to look at some equalizer designs.


Filters
The filter is the simplest form of equalizer. Some people wouldn't call it an equalizer but refer to it specifically as a filter. That isn't important however; the function of the filter is very similar, as is the way it is used.

A filter removes bands of frequencies. It never boosts. There are five principal types of filter (responses shown opposite)...

Low-pass, where low frequencies are allowed to pass through but high frequencies are reduced in level (attenuated).
High-pass, where high frequencies are allowed to pass through but low frequencies are reduced in level.
Band-pass, where both low and high frequencies are attenuated; mid frequencies are allowed through.
Band-stop, where both low and high frequencies are allowed to pass, but a region in the mid-band is attenuated.
Notch filter: a very narrow band-stop filter, taking out a small range of frequencies.

Filters almost always have switched controls. They are either in or out; there is no continuous control to blend the effect of the filter.

So for instance, you might be amplifying a singer on stage, but you can hear foot noise coming up the microphone stands, which is a common problem on wooden stages. A quick solution to this would be to switch in a 100 Hz high-pass filter. Most mixing consoles for public address have this feature. The low frequency energy coming up the stand is reduced in level and the problem is solved. Well, maybe not entirely solved, but certainly made better than it was.
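
If you are working in software rather than on a console, the same fix can be sketched in a few lines of Python with scipy. This is a minimal illustration, not any particular console's filter; the 100 Hz corner and 48 kHz sample rate are example values, and a second-order Butterworth is assumed because it gives the common 12 dB/octave slope discussed below:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000      # sample rate in Hz (example value)
cutoff = 100.0  # high-pass cut-off frequency in Hz

# Second-order Butterworth high-pass: rolls off at 12 dB/octave below 100 Hz.
sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")

# Apply it to one second of audio (noise standing in for a real signal).
audio = np.random.randn(fs)
filtered = sosfilt(sos, audio)
```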

A more well-specified filter will offer a number of cut-off frequencies. The cut-off frequency is the point at which the level is reduced by 3 dB compared to the level in the pass band. The range of frequencies beyond the cut-off frequency is known as the stop band.


Here is an example of a high-pass filter, from a Neve mixing console. As you can see, it has switch positions for 50, 80, 160 and 300 Hz. Simple and effective.

[Figure: Neve filter control]

To summarize filters so far, they have a type, and they have a cut-off frequency. They also have another parameter known as slope.

It might be possible to imagine a filter that passes everything in the pass band, and stops everything in the stop band absolutely. This kind of filter has a name: a brickwall filter. The problem with the brickwall filter however is a) it is difficult to make with analog circuits, and b) it doesn't sound good.

Somehow, the ear can detect the sharp boundary between the presence and absence of frequencies. Brickwall filters are used in CD players and other digital devices, but they are not used in the recording process, live sound or the operational areas of broadcasting.

Practical filters attenuate frequencies in the stop band, meaning they lower their level. They don't completely cut them out. It is the rate of attenuation that is important, as shown by this diagram.

Here you can see the four most commonly used filter slopes... 6 dB/octave, 12 dB/octave, 18 dB/octave and 24 dB/octave. To explain, for instance, 6 dB/octave: it means that beyond the cut-off frequency, where the graph has become a straight descending line, the response drops by six decibels for every doubling of frequency. Simple as that.

So the greater the slope, the faster the rate at which the level drops. Thus, a 24 dB/octave filter has a much greater audible effect than a 6 dB/octave filter. In fact, 6 dB/octave is too gentle for most purposes and it is rarely found. 24 dB/octave is too harsh and also rare. 12 and 18 dB/octave are the commonly found values.
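
For Butterworth designs like the sketch above, each filter order contributes 6 dB/octave, so order 2 gives 12 dB/octave and order 3 gives 18 dB/octave. A quick check of that claim, evaluating a third-order high-pass at two stop-band frequencies an octave apart (example values again):

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
sos = butter(3, 100.0, btype="highpass", fs=fs, output="sos")  # order 3 = 18 dB/octave

# Evaluate the response an octave apart, well inside the stop band.
w, h = sosfreqz(sos, worN=[12.5, 25.0], fs=fs)
levels_db = 20 * np.log10(np.abs(h))
print(levels_db[1] - levels_db[0])  # roughly +18 dB climbing per octave toward the pass band
```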

You might well ask what happened to the in-between values? What about 15 dB/octave, for example? It turns out that filters with the four values listed are easy to design and construct. Filters with in-between values are possible, but much more difficult, and the result is further from the ideal response. There simply isn't any point to making filters with in-between slopes; the standard slopes are quite good enough.

Just to round off this section, slopes are sometimes quoted over a decade of frequency rather than an octave. An octave is a doubling of frequency; a decade is a ten-fold increase. So a filter that has a 12 dB/octave slope could also be described as having a 40 dB/decade response. This terminology is comparatively rare though.


Passive EQ and tone controls


Moving on to the simplest kind of EQ, we have the passive EQ circuits typically found in vintage equipment, retro equipment and guitar amplifiers. A passive EQ uses resistors, capacitors and sometimes inductors (all common electronic components) to subtract level from certain bands of frequencies. The picture shows a Pullet passive equalizer.

The amount of reduction can be controlled, making this kind of EQ more versatile than a simple filter (although that doesn't detract from the value of having a filter; filters are always useful). Since passive circuitry can only make the signal smaller, clearly it is necessary to have amplification after the EQ stage, firstly to bring the signal back up to full level, and secondly to provide the opportunity of having an EQ boost.

The passive part of the circuit can only cut, so the boost
has to be done by raising the levels of frequencies
that were not cut. Complicated, but understandable if
you think about it.

The drawback of a passive EQ is that it loses signal level. Therefore the signal gets closer to the background noise level, and when boosted back up, the noise also gets boosted. So typically you can expect a passive EQ to be noisy, although some designs are better than others. The advantage of passive EQ is that it sounds different to the more modern active EQ.

[Figure: Pullet passive EQ]

Although active EQ is better in almost every respect, recording engineers just like to have something that sounds different; it's another tool in the toolbox. There are also advantages when it is necessary to cut or boost a narrow range of frequencies by a large amount. A well-designed passive EQ may sound smoother and cleaner.

Passive EQ, apart from the exceptions noted, is now quite rare. Moving on to the simplest active design (active in this context means that an amplifier circuit is itself made frequency-selective, so no level is lost as in the passive EQ), we have the tone controls found on hi-fi and other consumer audio equipment. Often you will find controls labelled bass and treble. Clearly, the bass control will cut or boost the low frequencies; the treble control cuts or boosts the high frequencies. There is a standard circuit that is employed for tone controls, invented by Peter Baxandall and called the Baxandall tone control.

The Baxandall tone control is a simple and elegant design, and any hi-fi or domestic equipment manufacturer would have to be a little crazy to want to do it any other way. However, it is really only suitable for modifying the end product to individual preferences. It is only capable of a 6 dB/octave slope at most, which, for pro audio purposes, is a little like trying to cut with a blunt knife.

[Figure: Hi-fi tone controls]

We could work up to full-scale EQ gradually, but it's probably better to start right at the top with the EQ section from a Solid State Logic mixing console, which as console EQ goes is about as good as it gets. The diagram has been edited to show only the features that are relevant to EQ.

[Figure: SSL EQ section diagram]


Mixing console EQ
Here we can see four separate bands of EQ. The topmost band deals with high frequencies, the middle two work on mid-range frequencies, and the lowest band is for low frequencies. Obvious really. But let's look at each band in detail.

The high frequency band has two rotary controls. The kHz control, incorrectly labelled "KHz" by SSL's graphics person, controls the frequency at which the high frequency section starts to take effect. Below this frequency, not much will change. Above this frequency, changes will be audible. As you can see, the range of frequencies extends from 16 kHz all the way down to 1.5 kHz.

Most people would say that 1.5 kHz is a distinctly mid-range frequency rather than a high frequency. However, extending the range this low offers more flexibility in control, which is always a good thing. Important note: the control here labelled kHz is more commonly known as frequency.

One thing that confuses newcomers to EQ is that it is possible to turn this control all the way from one end stop to the other without hearing any change in sound quality. This will happen if the dB control is set to its center position. In the center position, the dB control does absolutely nothing; it neither cuts nor boosts. Therefore the position of the kHz control doesn't matter, because nothing is being changed.

So you have to set a certain amount of cut or boost for the kHz control to become relevant. Important note: the control here labelled dB is more correctly known as gain, but also commonly known as level. (Gain is the property of a circuit that changes the level of a signal. Gain can be either positive or negative.)

All equalizers offer a certain range of cut or boost on their gain controls. Typically it would be around +/-15 dB. +/-18 dB is better; +/-12 dB isn't as good. But they are the commonly found limits.

There is one more control here: a button marked BELL. This button is more properly called the bell/shelf button. It affects the way the equalizer processes frequencies well above the frequency to which the kHz control is set. Here are the options:

[Figure: Bell]
[Figure: Shelf]

As you can see, the bell setting boosts a certain range of frequencies, but at the extreme high frequency end of the graph, the boost returns back down to zero. In comparison, the shelf setting boosts frequencies all the way up to the limit of the audible range. Whether you choose bell or shelf is entirely down to subjective perception. There is no right or wrong choice in any particular situation; it is entirely up to you to decide what sounds best.

We can skip over the two mid-frequency sections for the moment. The low-frequency EQ section, as you can see, is very similar in layout to the HF section. There is a dB (gain) control and a Hz (frequency) control in exactly the same manner. They work on low frequencies rather than high. There is also a bell/shelf button that does exactly the same as the HF bell/shelf, but extending towards the lowest frequencies of the range.

The range of the frequency control is from 30 Hz to 450 Hz. This is a wider range than you would probably ever need. If anything below 30 Hz needs controlling, it probably just needs filtering out, and of course you have a filter for that. 450 Hz is far from being anything you could describe as a low frequency; it is well into the midrange. However, having that extra scope is good because you never know when it might come in useful.

Going back to the mid-range controls, the two sections are identical in everything except frequency. The upper section deals with high mid-range, the lower section with low mid-range. On some consoles, the two mid-range bands are entirely identical and cover the same ranges of frequencies.

At this point we will assume that you know what the kHz (frequency) and dB (gain) controls do; it's just the same as for the other bands. But there is an extra control: the Q control. What is that for?

To explain the Q control, we have to start by explaining Q. The concept of Q dates back to the early days of radio, when engineers were struggling to achieve a good quality of resonance, which apparently is where the Q comes from. A circuit that would resonate well (to resonate means to vibrate or oscillate readily at a certain frequency, given an energy input) would form the basis of a good transmitter or a good receiver.

We use exactly the same concept in sound engineering today, but at audio frequencies rather than the much higher radio frequencies.

[Figure: Neve EQ section]


Here is a graph showing a resonant boost. It could just as well be a cut; the concept works both ways.

What we can see here is that the bell of the curve can be wide or narrow. If it is wide, we say it has a low Q; if it is narrow, we say it has a high Q. You can calculate Q by taking the two frequencies either side of the peak of the resonance, subtracting the lower from the higher, and then dividing the result into the center resonant frequency.

Q = f0 / (f2 - f1)

Since the top and bottom of the dividing line are both measured in hertz, the units cancel out, so Q has no units. Q is a simple ratio.
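
As a worked example (the numbers are invented for illustration), a resonance centered on 1 kHz whose band edges lie at 900 Hz and 1100 Hz has a bandwidth of 200 Hz and therefore a Q of 5:

```python
def q_factor(f0, f1, f2):
    """Q = center frequency divided by bandwidth (f2 - f1); a unitless ratio."""
    return f0 / (f2 - f1)

print(q_factor(f0=1000, f1=900, f2=1100))  # -> 5.0
```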

Going back to the equalizer, we can see that Q is adjustable to give a wide or a narrow bandwidth to the curve of the EQ. If Q is set low, then a broad range of frequencies will be affected. If Q is set high, then only a narrow range is changed. Sometimes it is difficult to hear the effect of changing the Q. To get a feel for what Q can do, set a large boost at a mid-range frequency so you can easily hear what the EQ is doing, then sweep the Q up and down and listen for what it does. When you have a feeling for what Q sounds like, you will be able to use it effectively. In general, a high Q is used where there is a small band of frequencies causing a problem, like an unpleasant resonance in a snare drum that needs filtering out. A low Q is more useful for musically-inspired changes, just to make things sound the way you want. Often, you would set a low or high Q first, before adjusting the other two controls of the section. This type of EQ section, with controls for frequency, gain and Q, is often known as a parametric EQ.
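
The text does not specify any particular circuit, but in software a parametric band is very commonly modelled with the peaking filter from the widely used RBJ "Audio EQ Cookbook". Here is a minimal sketch of one band driven by the three parametric controls; the 800 Hz snare-resonance cut is an example value, not a recipe:

```python
import math
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """One parametric EQ band (RBJ Audio EQ Cookbook peaking filter).

    Returns normalized biquad coefficients (b, a) from the three classic
    parametric controls: center frequency f0, gain in dB, and Q."""
    a_lin = 10 ** (gain_db / 40)     # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

# Example: a narrow 6 dB cut at 800 Hz (Q = 3), as you might use on a snare resonance.
b, a = peaking_eq(fs=48000, f0=800.0, gain_db=-6.0, q=3.0)
audio = np.random.randn(48000)       # stand-in signal
equalized = lfilter(b, a, audio)
```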

On some equalizers, the Q control is labeled bandwidth. It does exactly the same job, but where a Q control would be calibrated from low to high, or from 0.5 to 7, a bandwidth control would be calibrated from wide to narrow, or from one-third octave to two octaves.

Just remember that wide bandwidth = low Q, narrow bandwidth = high Q.


EQ IN button
One more very important control on the EQ section as a whole is the EQ IN button, which we would prefer to call the EQ In/Out button. This simply switches the EQ in or out of circuit. There are two reasons why this is necessary. Firstly, the EQ circuit is complex and to a small but possibly audible extent degrades the signal. So if you don't need EQ, it is better to switch it out. The degradation is small though, and few people would be likely to hear any difference in the context of an entire mix.

The other reason is much more important: so that you can easily hear the difference between EQ-in and EQ-out. You need to be sure that you are improving the signal! EQ is a powerful tool and it is perfectly possible that you are making it worse. With the EQ In/Out button, you can easily tell.


Graphic EQ
The EQ sections provided in mixing consoles are
flexible, easy to use and powerful. Plug-ins on
computer-based recording systems mimic the features
of analog console EQ. But there are other styles of
equalizer, one popular type being the graphic EQ.
Here we can see the Klark Teknik DN360 two-channel
graphic EQ.

The Klark Teknik DN360 has thirty bands of EQ per channel, each band covering one third of an octave. So this would be called a third-octave graphic equalizer. Each band has a cut or boost of up to +/-12 dB. There are graphic equalizers that work in whole octaves with fewer bands, but they are not nearly as effective or precise.

[Figure: Klark Teknik DN360]
The idea is that you can set any frequency response curve you like, and not necessarily symmetrical in the mid-range as it is with a conventional resonant EQ. And when you have set the graphic EQ the way you like it, the positions of the knobs show a graph of the frequency response! In fact, the supposed graph is merely an approximation, because the bands overlap and interact with each other. However, even an approximation is useful: you can glance quickly at a graphic equalizer and see what kind of curve it is set to.

You don't have to know much about a graphic equalizer to operate one. It is simple and intuitive. However, there are some things you could know to make your understanding better. Firstly, when you raise one slider, you are not only affecting the frequencies between that and the two adjacent sliders.

The Q of each section is low and a wide range of frequencies is altered. Attempts have been made to build graphic equalizers with high-Q circuits, but they just don't sound as good. So expect the bands to interact. The other thing that you might care to know is that graphics come in two types: variable-Q and constant-Q. With a variable-Q circuit, the Q of a section gets higher as you apply more boost or cut.

With a constant-Q graphic, the Q always stays the same. Opinions are divided on which sounds the best. Many engineers prefer variable-Q, but generally only by a small margin. If there is only a constant-Q graphic to hand, one shouldn't hesitate to use it.

Something else you need to know about graphics is that they really screw up the signal. They do nasty things to a signal's phase. We need to explain phase. The quick explanation is that an instrument emits a sound, and every frequency component of that sound reaches your ears at the same time. The speed of sound does not vary with frequency. Transform that sound into an electric signal and things change. Common circuit components delay certain bands of frequencies with respect to others.

This applies particularly to EQ. When you EQ a signal to change the level at certain frequencies, you also change the timing of those frequencies. In practice, it is not possible to design a circuit that doesn't mess up the phase of a signal. But it turns out that if the phase changes smoothly through the frequency band, the ear doesn't notice. Many equalizers are designed to be minimum phase, which means that they change the phase to as small an extent as theoretically possible.

Unfortunately, the graphic equalizer is anything but minimum phase. Having said that, the audible differences are slight and generally obscured by the loudspeaker, which, phase-wise, is the worst offender of all. So it's not a big thing, but worth knowing anyway.

The principal application of the graphic equalizer is in live sound. PA systems are prone to howlround (feedback) when the sound from the speaker enters the microphone and is re-amplified, resulting in a loud tone filling the auditorium.

So the sound engineer has to find the frequency at which howlround is most likely to occur, which is different for every set-up and every venue, and reduce the gain at that frequency. The graphic equalizer is simply the most convenient tool for doing this, hence graphics are in almost universal use in this application. There is no howlround problem in studios, at least not if you're doing things correctly, so graphic equalizers are less commonly found.
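
In a digital live-sound rig, taking out a single feedback frequency is sometimes done with a narrow notch rather than a whole graphic. Here is a sketch using scipy's notch designer; the 2.4 kHz feedback frequency is an example value you would find by ringing out the system:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
feedback_freq = 2400.0  # example: the frequency found to be ringing
q = 30.0                # high Q = very narrow cut, barely touching the music

# Design a narrow band-stop (notch) filter at the feedback frequency.
b, a = iirnotch(feedback_freq, q, fs=fs)

audio = np.random.randn(fs)   # stand-in for the console's output signal
notched = lfilter(b, a, audio)
```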


Using Equalization
It is important to understand what equalization does and how it works. That has now been covered, so you have a good preparation for what follows: using EQ.

We have said already that the use of EQ is a skill and an art. You may have seen elsewhere instructions to cut or boost certain frequencies for certain instruments to achieve attack, clarity, air or some such quality. By all means, read everything you can. But there are no rules of EQ. No-one can tell you exactly what frequencies to cut or boost, and by how much, because they don't know what sound it is you are dealing with.

Every instrument is different, every player is different, every acoustic space is different, every model of microphone is different. No, there is little value in set instructions. What you need are tools that will allow you to assess a sound and decide what needs to be done to it, within its own frame of reference, and also precisely the way you want to hear it.


Corrective equalization
The first use of equalization is to correct something that doesn't sound right. Let's consider single instruments to start off with. It is perfectly possible that an acoustic guitar, even an expensive one, has an irritating resonance where the instrument itself boosts a band of frequencies, and it doesn't sound good.

The first action to take is to experiment with microphone position and selection (position is nearly always more important than which mic you choose). The way a professional recording engineer would do this is to listen from the control room while his or her assistant moves the mic to various positions.

When a rough position is found, the engineer will give precise instruction on the exact placement, down to the last centimeter. This in a way is a kind of acoustic EQ: finding the spot where the balance of frequencies just happens to be optimum. Finding the best microphone position first is a necessary step before EQ.

Another example would be a drum, say a snare drum. Drums often have annoying resonances that would benefit from being removed. But the first step isn't EQ; the first step is to tune the drum. Drum tuning is outside of the scope of this text, but the quick solution is to find a drummer who is experienced in recording to show you how.

Once the drum is tuned it will sound a whole lot better. It may need damping to reduce the resonance, and of course care and attention should be given to mic positioning. All of that comes before EQ.

Do you get the picture? Acoustic sounds, and electric sounds from guitar amplifiers, should be optimized at source. Microphone positions too should be optimized. Only after that, if there is still a remaining problem, should you start with corrective EQ.

Even with the best care and attention to the above, you might still end up with an acoustic guitar that has a cheap-sounding resonance, indicating a certain band of frequencies that is naturally strong in that particular instrument. Now is the time to correct that with EQ, before it goes onto the recording. If it is indeed a resonance that is causing the problem, then you need to attack the band of frequencies that the guitar is boosting acoustically with EQ that does the inverse. Here is what you do:

Step one is to set the gain of the EQ to a significant boost: a halfway boost of eight or nine decibels is usually enough. If there is a Q control, set it to around 3, or to a moderately high value if there are no calibrations.

Now, as the instrument is playing, sweep the frequency control from low to high and back down again, slowly and repeatedly. The band of frequencies that was causing the unpleasantness will now be doubly boosted and you will hear very clearly where the problem lies. This is an easy way of identifying troublesome frequencies: the ear hears a boost much more readily than it does a cut.

Now that you have identified the problem frequencies, simply change the boost to cut. Fine-tune the gain and Q controls so that you shape the correction to the shape of the problem. Now your guitar will sound much better. It won't sound better than a better guitar would have done; EQ is a powerful tool but it can't work miracles.
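
If you are working in a DAW, you can get a head start on the boost-and-sweep by measuring where the energy actually concentrates. This sketch is a rough aid, not a substitute for the listening test, and the file name is hypothetical; it simply reports the strongest midrange component of a recording:

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical recording of the offending guitar part.
fs, audio = wavfile.read("guitar_take.wav")
audio = audio.astype(float)
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold to mono

# Average magnitude spectrum over the whole take.
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1 / fs)

# Look only in the midrange, where corrective EQ is most appropriate.
band = (freqs > 200) & (freqs < 2000)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"Strongest midrange component near {peak_freq:.0f} Hz - try your sweep there.")
```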

This technique for corrective EQ in the midrange always works. As you gain experience you will be able to set the correct frequencies to cut directly, without the intermediate boost phase. But it will still be something that you experiment with on particularly troublesome sound sources.

There are also commonly-found problems at the bass and high frequency ends of the sound spectrum. One such is low frequency noise. LF noise can be caused by vibrations from footfalls coming up the mic stand. This can be minimized by using an elastic cradle to hold the mic rather than the regular clip. However, such cradles are expensive and not available to fit every microphone. So it will not always be possible to use one.

Another source of LF noise is ventilation and air conditioning. It is not possible to have a soundproofed studio without ventilation, and air conditioning is a valuable extra. Noise from these sources must be managed effectively. Even in rooms that do not have ventilation provided by a fan, natural air currents exist that can cause noise at extremely low frequencies. Listen out for low frequency noise, and if it exists, simply use a filter to magic it away. Do not, however, do this for bass instruments. Whatever you take away can often not be put back, and equalization of bass instruments should always be left until the mix.

Sometimes instruments themselves are excessively bass heavy. The problem can be too much bass at very low frequencies, below 40 Hz or so. Although most people, when asked to express an opinion, will say that they like lots of bass, what they really mean is that they like to hear frequencies of 80 to 100 Hz or so pumping out at a high level. High levels of very low frequencies are more likely to make you feel sick. Although these very low frequencies may not be desirable, and you might consider that corrective action is necessary, it is usually best to play safe and not make any change that will affect the recording.

The same applies to fizzy high frequencies. Although a harsh and brittle top end might seem to be an undesirable feature that needs correction, you will find it difficult to get the top end brightness back if you take too much away at this stage.

In summary, corrective EQ is very appropriate to the mid range of frequencies, and also very appropriate to low frequency noise. But the low frequency content of bass instruments, and the high frequency content of instruments and voices, should be left intact until mixing. It is very difficult to put back frequencies that you have taken away.


Creative equalization
Prime time for creative equalization is in the mix, where all the sounds you have recorded come together and need to blend well. But there are several alternative approaches, all of which can work well, given attention, thought and care. Let's start with the scenario of a live recording of a jazz band. How should you approach that EQ-wise?

The thing about recording a band live as they play, rather than doing it instrument by instrument, is that you have the opportunity to hear what the band really sounds like. And that sound will become a benchmark for your mix. If your mix doesn't achieve the same level of quality as the live sound, then you haven't done your job properly. However, you will score points massively if your mix sounds even better than the live sound.

In this situation, the best approach is to start mixing (the band have all gone home now) with the EQ sections all either switched out, or set to flat (all EQ gains at their center positions). Balance the instruments on the faders and panpots and get as good an overall sound as you can. Work hard at this stage and don't be satisfied easily. Try different options; often the first balance you arrive at isn't necessarily the best. Explore the mix, play with it, get to know it.

An hour doing this is an hour well spent. When you have become thoroughly familiar with your source material you can start to think about EQ. As you listen to your best faders-and-panpots mix, you will find that some instruments are not being heard properly, yet raising the fader makes them too loud. Conversely, other instruments stick out like a sore thumb, but lowering the fader makes them go away. You just can't find the right fader positions, or the right fader positions have to be tuned to within a couple of millimeters. You need EQ!

What happens in a band is that several instruments or groups of instruments will try to compete for the same frequency space, in their fundamentals but also in their harmonics. And whichever instrument happens to be louder at any particular time will mask other instruments competing for the same frequency space. So in this case, let's say that you are having difficulty hearing the trumpets and clarinets distinctly when they are playing together. Set an EQ boost for the trumpet channel and sweep the frequency control until they stand out more prominently. Do the same for the clarinets. If you find that the same center frequency works equally for both, skew one channel upwards in frequency and the other down. Now you have differentiated these instruments sufficiently for them not to mask each other.

As a finesse, if you have two mid-range EQ sections per channel, or have enough computer processing power to run additional plug-ins, whatever frequencies you boosted on one channel, cut on the other, and vice versa. So not only are you making the trumpets more prominent at their key frequencies, you are scooping out a hole in the same frequencies on the clarinet track. This technique is sometimes known as complementary EQ. It is a powerful tool.
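
As a sketch of the idea in code, here is complementary EQ applied with the same RBJ peaking filter shown earlier; the 1.5 kHz center, +/-4 dB gains and Q of 1 are example values only, not a recipe:

```python
import math
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ cookbook peaking filter (same helper as in the parametric EQ sketch)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

fs = 48000
trumpets = np.random.randn(fs)   # stand-in signals
clarinets = np.random.randn(fs)

# Boost the trumpets at an example key frequency, and cut the clarinets
# by the same amount at the same frequency: complementary EQ.
b_up, a_up = peaking_eq(fs, 1500.0, +4.0, 1.0)
b_dn, a_dn = peaking_eq(fs, 1500.0, -4.0, 1.0)
trumpets_eq = lfilter(b_up, a_up, trumpets)
clarinets_eq = lfilter(b_dn, a_dn, clarinets)
```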

When you are mixing a band like this, when you have either heard them play live in the studio, or it is a conventional line-up and you know what it should sound like, always apply EQ in context. This means that you do not solo any channel while you EQ, but apply EQ while all of the instruments are audible. In this way, you can hear what effect the EQ has with reference to the entire mix.

Many recordings are not made with conventional band instruments, or are made with the musicians not playing simultaneously. In cases like this, there is no reference point. You don't know what it should sound like. Rather than live up to a standard, it is your responsibility to create that standard. This is more difficult, but it offers more creative opportunities too.

In this case we will assume that you applied corrective EQ during the recording process, so all the instruments and voices sound fully adequate at least. You could try a faders-and-panpots mix, as in the whole-band example above, but the result will probably be something of a jumble. Since the instruments were recorded separately, there wasn't much information to go on as to how they should blend.

In this situation, one very effective approach is to start from a foundation mix: the very fewest instruments that can stand on their own and support the rest of the track. Very likely this will be the drums, bass and one pad instrument, guitar or keyboard perhaps. If you can get this rhythm section blending well, then everything else will hook in easily with that.

As before, you can do a faders-and-panpots mix of the foundation instruments. Set the EQ of each so that the sound is full and rich; it could be a finished mix in its own right but for the lack of vocal and color. This can be done by EQing in context, and of course applying complementary EQ, particularly in frequency areas where the kick drum and bass instrument clash. When you have all of this sounding really good, you can start adding the other components. The vocal will be next.

What you will typically find when you add the vocal to an already full-sounding track is that the vocal doesn't have a space to fit into. Once again, complementary EQ will come to our assistance. Unlike other instruments, the human voice is pretty consistent in the frequency bands in which it is strong. This stems from human evolution: we needed to communicate effectively, so the ear has evolved to be very sensitive at frequencies where speech is also strong, in the range around 3 kHz or so.

Notice that we are talking harmonics here, not fundamentals. But this is the range that allows us to differentiate between the consonants, vowels and phonemes of speech, in both the male and female voice. So if you apply an EQ boost to a vocal at around 3 kHz, it will suddenly sound very much more present and stand out wonderfully. Of course the next step is to apply complementary EQ to the other instruments to make a hole for the vocal to sit in.

As you start to add other instruments you will find that they need to be thinned. Your foundation tracks are already fat (or phat!) because you spent time optimizing them. The vocal is complementary-EQ'd to perfection. So there is no room in the frequency space for anything else! Well, yes there is, but you can't add more phat tracks to an already phat mix. You need to thin the new instruments so they will fit in.

Thinning can be accomplished by cutting low frequencies, and often cutting high frequencies too. If an instrument isn't thin enough at this stage, you can apply a boost where it is harmonically strong. If the worst comes to the very worst, you can apply a complementary EQ to your foundation track to make a space for the new instrument to fit in. Oddly enough, in a world of phat, there is an amazing power in thinning things down.

It can't be emphasized too strongly that there is only a limited audio spectrum and everything has to fit into that. If every sound is rich in a wide range of frequencies, they will clash and mask each other. So you have to make your instruments complementary to each other in their frequency characteristics, and thin them down where necessary. Ultimately, your finished mix will sound so much bigger for that.

Cut can be better than boost


The brain doesn't react symmetrically to boost and cut. It pays far more attention to frequencies that are boosted than to frequencies that are cut down in level. But EQ cut can be a very effective tool in sound shaping and blending. Let's take the case of a vocal again. As has been said already, adding a lift around 3 kHz will give it much more presence. But you might still find that it can't find its place in the mix: the fader is either too low or too high and you just can't seem to find the right spot.

What is happening here is that the lower range of the vocal is clashing with the rest of the mix. When the vocal is loud enough to be clearly audible, the lower range is too loud and is sticking out. So the answer is not only to boost around 3 kHz, but also to cut in the sub-1 kHz region.

Yes, between these frequencies you are cutting down fundamentals, but the brain has a mysterious way of reconstructing missing fundamentals from the harmonics that it hears. Just listen carefully to what you are doing and all will be well.

Another key place to cut is in bass instruments, kick drum and bass guitar or synth. It is quite common for the kick drum to be over-rich in energy below 40 Hz or so. Subjectively this isn't pleasant, and from a technical point of view it eats up loudspeaker excursion (the distance the cone can move). So excessive low frequency energy not only sounds bad, it makes the loudspeakers distort before the music is really loud. And this applies to anyone's listening system, not just your monitors. So by reducing the level below 40-50 Hz or so, you can set the kick drum fader higher so that you get more level around 80-100 Hz, which subjectively gives exactly the kick we need. (As an aside, you might consider a boost in the high hundreds of hertz for attack, and a further boost around 5 kHz for crispness and click.)

By now you are probably getting the idea. But you can take cut a stage further and use cut instead of boost. Think of it like this: whenever you use boost, you are making certain frequencies louder than others. You can do the same by cutting the frequencies you don't want to be quite as loud, and then raising the fader. It is a subtle but useful difference. It is perfectly possible to construct an entire mix using cut only. This is not to say that you would always want to, but consider that when you scan the EQ controls across the board, at least some should be set to cut rather than boost.


Equalizing the mix


So far we have talked about equalizing individual instruments and voices. But it is also important to equalize the stereo mix. There are some situations where you probably would not want to do this: let's say you have recorded an orchestra playing live in a hall with good acoustics, with very accurate microphones. Why would you ever want to equalize something that is already so obviously perfect? Actually you might: if the label on which the recording is released has a house sound, they might EQ it so that it sounds subjectively comparable to their other releases.

For popular music, however, there is no point of reference other than the many classic recordings that have been made over the years, and of course recordings that have sold well recently. If you aim to get your balance of frequencies comparable with music in a similar genre that is commercially successful, you won't be making a mistake.

This isn't a lesson on mastering, which is a massive subject in itself. But you should aim to get a mix that sounds good on a variety of playback systems: on your MP3 player, car stereo, portable system and full-on hi-fi. And if you apply only EQ to your mix and no other process, a pro mastering engineer can work his magic unrestricted. If you compress a mix and get it wrong, that cannot be undone.

The best way to EQ a mix is to set up a track that you like in a similar genre. Aim to give your mix a similar frequency balance. The EQ that you use here is totally dependent on the EQ you applied to the individual channels during mixing, so everyone's situation is different. But it is not uncommon to boost the low end gently, boost the high end gently too, and take a low-Q scoop out of a band somewhere between 100 and 1000 Hz.
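
One way to make that comparison less of a guess is to overlay the long-term average spectra of your mix and the reference. Here is a sketch using scipy's Welch estimator; the file names are hypothetical:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def average_spectrum_db(path):
    """Long-term average spectrum of a (hypothetical) WAV file, in dB."""
    fs, audio = wavfile.read(path)
    audio = audio.astype(float)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold stereo to mono
    freqs, psd = welch(audio, fs=fs, nperseg=8192)
    return freqs, 10 * np.log10(psd + 1e-12)

freqs, mix_db = average_spectrum_db("my_mix.wav")
_, ref_db = average_spectrum_db("reference_track.wav")

# Positive values: bands where your mix is hotter than the reference.
difference = mix_db - ref_db
```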

This is a time for very attentive and detailed comparative listening with your reference track. Take your track and the reference track to various playback systems. Typically you will find that the reference track travels better than yours; that's because it was worked on by a seasoned mastering engineer who equalizes mixes every day of his life. In time, however, you will learn to recognize the differences between mixes produced as the fruit of massive experience, and your own mixes. Gradually your mixes will acquire the polished professional sound too.


Equalization for live sound


Equalizing for live sound is very much like EQing for
recording, but there are certain extremely important
differences.

The first is that a sound check for a live show often has
to be completed in as few as three songs, sometimes
even just one song. This means that you have to set
the EQ for the entire show from very scant information,
perhaps with a band you have never seen before.

In fact, the live sound engineer isn't EQing songs as much as EQing the band, and then perhaps fine-tuning as the show progresses.

A live sound engineer will quickly find preferences for EQ that he or she will apply from show to show, for the drums, bass guitar, guitar and vocals. Keyboards are too varied to be able to have much in the way of expectations. Adjustments will be made during the sound check to those basic settings.

The second difference between live sound and recording is that the live sound engineer is always battling against howlround (feedback).

Setting too much of an EQ boost on a vocal mic is almost bound to create howlround, so EQ has to be done with this in mind. As described earlier, a graphic EQ will be used to equalize the system as a whole, so the engineer is in a sense fighting against this.

One point that is important about setting the graphic is that if, in the interests of combating feedback, too much level is taken out in the vocal range, then the vocals will simply be quieter, so the engineer will have to push the fader higher on the console, thus counteracting the initial anti-feedback EQ. Keep this thought in mind when setting the graphic.

The third difference between live sound and recording is actually a bonus: you get to hear your sound the way the audience hears it! You don't have to worry about people listening on different playback systems; you're all in the same auditorium. OK, you are in the best position, but if the system has been correctly set up, then if it sounds good to you, it will sound good anywhere in the house.


Check Questions
What is the generally accepted range of frequencies of human hearing?
What is the frequency of the lowest note of the piano?
What is the frequency of the highest note of the piano?
Which one common instrument is capable of notes of higher frequency than the piano?
Briefly describe harmonics.
Why is it important to capture and handle all frequencies equally?
What is meant by smooth roll off?
What is meant by an uneven or irregular frequency response?
Why must a frequency response specification include decibel limits?
Briefly describe how equalization originated as a corrective process.
What is a low-pass filter?
What is a high-pass filter?
What is a band-pass filter?
What is a band-stop filter?
What is a notch filter?
Explain cut-off frequency.
What is the pass band of a filter?
What is the stop band of a filter?
What is the slope of a filter?
List four commonly found filter slopes.
What is a passive equalizer?
Why is a hi-fi tone control circuit not suitable for professional audio use?
List the two rotary controls commonly found on a high-frequency or low-frequency mixing
console EQ section.
Describe the difference between bell and shelf.
List the three rotary controls commonly found on a mid-frequency mixing console EQ section.
Briefly describe Q or bandwidth.
Briefly describe a graphic equalizer.
Briefly describe complementary EQ.
Describe how the use of EQ in live sound can increase the problem of feedback.
