
Static mix:

In basic terms, a static mix is a mix in which no automation is used at any point. Automation is normally used to change a channel's volume or effect parameters over time, for instance to bring certain audio up where it needs to be louder, and that is exactly what we avoid in a static mix: every fader and effect setting stays fixed. A static mix (or automation) can be applied to both MIDI files and audio files in most cases.
A basic example of the steps used to build a static mix is as follows:
1). Set priorities - decide which sounds/instruments are going to be used.
2). Put all your faders to zero - start every fader at zero so that you can build a solid static mix one instrument or section at a time, rather than all in one go.
3). Start with the drums - bring the kit up to around 70%-75% on its fader so that you have some headroom, adjusting the monitor speakers where possible so that the drums sit at a good, clear volume.
4). Next is the bass - bring up all the bass instruments being used so that the low end is clear and, more importantly, make sure the bass works well with the kick drum and does not overpower it.
5). Bring up the lead instruments - by lead instruments we mean vocals and parts such as guitar solos; make sure the lead sits at roughly the same volume as the bass instruments so that they work together well.
6). Bring in any rhythmic backing instruments used - these are instruments like the guitar, brass section and piano; bring each one up until it fits in with the mix that has been made.
7). If used, the pad instruments should come in at this point - these are things like strings or synth swells, and they normally sit lower in the mix than you might expect.
8). Last but not least, bring in any sparkly or accent sounds - for instance wind chimes, which may only appear for a small part of the mix; they should be brought up just enough to be heard without becoming the centre of attention.
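The fader-setting order above can be sketched as a short script. The track names and starting levels here are illustrative assumptions for the example, not fixed rules:

```python
# Sketch of building a static mix: set each fader once, in the order
# described above, and never automate it afterwards.
# Track names and levels are assumed values for illustration only.

mix_order = [
    ("drums", 0.72),          # kit first, ~70-75% to leave headroom
    ("bass", 0.65),           # balanced against the kick
    ("lead_vocal", 0.65),     # lead sits level with the bass
    ("rhythm_guitar", 0.55),  # backing parts raised until they fit
    ("pads", 0.40),           # pads usually lower than expected
    ("wind_chimes", 0.35),    # accent sounds audible but not dominant
]

faders = {}
for track, level in mix_order:
    faders[track] = level     # set once; a static mix never changes it

print(faders)
```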

This is an example of an ongoing static mix, showing the different instruments and layers being balanced so that the static mix sounds good, correct and flowing.

SIGNAL TO NOISE RATIO (SNR):


In the simplest terms, the signal-to-noise ratio is the difference between the sound that we want to hear and the sound that we do not want to hear when mixing. SNR is a measure used in science and engineering - in our case music engineering - that compares the level of the desired signal to the level of unwanted sound, known as noise. It is usually expressed in decibels; a ratio higher than 1:1 tells whoever is making the mix that there is more signal than unwanted noise, which is what everyone wants, although it cannot always be achieved because almost every space has some unwanted sound. For example, if you placed a microphone on the floor while the singer stood far back and high up on a level, it would pick up a lot of unwanted sound along with the singer's voice. If the microphone were moved much closer to the singer, or held right up to the face as a handheld microphone, the unwanted sound would be reduced and the sound quality made more consistent.

This is a diagram showing what signal looks like, what unwanted noise looks like, and what we see when there is a mixture of both, as it is very hard, if not impossible, to get rid of all unwanted sound.
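As a rough sketch, the ratio in decibels can be computed from signal and noise power using the standard formula 10·log10(P_signal / P_noise); the power values below are made up for illustration:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A signal 100x more powerful than the noise gives +20 dB:
print(snr_db(100.0, 1.0))   # 20.0
# Equal signal and noise (a 1:1 ratio) gives 0 dB:
print(snr_db(1.0, 1.0))     # 0.0
```

Any result above 0 dB means there is more signal than noise, which matches the 1:1 comparison described above.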
DBFS:
dBFS stands for decibels relative to full scale: the signal level that is equivalent to the full scale of a digital system. It is used to measure decibel amplitude levels in a range of digital systems, for example PCM, which is short for pulse-code modulation, and it is commonly used when specifying A/D and D/A audio data converters. Below shows how dBFS is measured.

This basically means that 0 dBFS is the loudest a digital audio signal can get before it starts clipping: it is the maximum level a signal can reach before you start to see the waveform clip. This figure is commonly used in a music studio, for instance to help specify A/D and D/A audio data converters.
DYNAMIC RANGE:
Dynamic range, commonly abbreviated to DR or DNR, is simply the ratio between the largest and smallest possible values of a changeable quantity; it can be measured in sound, light and other quantities. When applied to audio equipment, it is usually used to indicate a component's maximum output signal level and also to rate a system's noise floor. Compressors, expanders and noise gates are processing devices used within audio to change the dynamic range of a given signal. This is done to achieve a more consistent and clearer sound when recording, or as a special effect: by completely altering the dynamics of a sound, you can create a sound that was not possible from the original source. (http://whatis.techtarget.com/definition/dynamic-range 29/05/2015 17:00). Below is an example of how the dynamic range can change on different types of music.
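The ratio between largest and smallest value can be put into decibels with the same 20·log10 formula; the 16-bit example below uses the standard amplitude-step count for that bit depth:

```python
import math

def dynamic_range_db(largest, smallest):
    """Dynamic range in dB: ratio of the largest to smallest signal amplitude."""
    return 20 * math.log10(largest / smallest)

# 16-bit digital audio spans 2**16 amplitude steps, giving roughly 96 dB:
print(dynamic_range_db(2**16, 1))
```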

Headroom:
Headroom is a term from audio signal processing, used in both digital and analog audio. It means the amount by which the signal-handling capability of an audio system exceeds a designated level, known as the permitted maximum level, shortened to PML. Headroom can be seen as a safety zone that allows transient audio peaks to meet and exceed the PML without distorting the audio signal or overloading the system. Headroom also leaves room to make a recording louder, for instance when more channels are added to a track. Below will explain headroom on the images.
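As a simple sketch, headroom is just the gap in dB between the permitted maximum level and the loudest peak in the signal (the levels here are assumed example values):

```python
def headroom_db(permitted_max_dbfs, peak_dbfs):
    """Headroom: how far the loudest peak sits below the permitted maximum level."""
    return permitted_max_dbfs - peak_dbfs

# If the PML is 0 dBFS and the mix peaks at -6 dBFS, there is a 6 dB safety zone:
print(headroom_db(0.0, -6.0))  # 6.0
```

This matches the fader advice in the static mix steps: starting the drum kit at 70-75% rather than full is what creates that safety zone.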

EQ/FREQUENCY SPECTRUM:
When recording, a lot of different frequencies are involved in making the music sound the best that it possibly can. These frequencies cover a wide range: listen to a load of different instruments and you can tell that they do not all sound the same, each having its own character. The frequency spectrum is the range into which different instruments fall, and knowing which EQ or frequency region each one sits in helps a recording sound good. For example, if one instrument - say the vocals - has a lot of energy at a particular frequency, you could cut some of that frequency there and boost it on a different instrument (in Logic Pro, which is being used in this case) to bring up a part that sits at a low level in that range and is not being heard as much as it should be. Below is an image of how the EQ/frequency spectrum is measured and set out.
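The spectrum an EQ displays comes from a Fourier transform of the signal. A minimal sketch, using a naive DFT on a made-up test tone rather than any real analyser:

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT: magnitude of each frequency bin of a sampled signal."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2 + 1)]

# A pure tone completing 4 cycles in 32 samples puts its energy in bin 4,
# just as a single instrument concentrates energy in its own frequency range:
n = 32
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
spec = magnitude_spectrum(tone)
print(max(range(len(spec)), key=lambda k: spec[k]))  # 4
```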

DIGITAL DISTORTION:
Digital distortion, commonly known as clipping, is a form of waveform distortion. It tends to occur when an amplifier is overdriven and attempts to deliver an output voltage or current beyond its maximum capability. When an amplifier is asked to produce a signal with more power than its power supply can deliver, it will amplify the signal only up to its maximum possible capacity, at which point the signal can be amplified no further. Below is an image of digital distortion, explained under the name clipping.

As the signal cuts, or clips, at the maximum capacity of the amplifier being used, the signal is said to be clipping. The extra signal beyond the ability of the amplifier is simply cut off completely, which turns what was a sine wave into a distorted, square-wave-type waveform. Many people who play electric guitar intentionally overdrive their amplifiers to cause this clipping, in order to get the distorted sound that they are looking for.
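Hard clipping can be sketched in a couple of lines: anything beyond the maximum the "amplifier" can deliver is simply cut off, flattening the peaks of the waveform. The sample values are made up:

```python
def hard_clip(sample, max_level=1.0):
    """Cut off any sample beyond the maximum level the amplifier can deliver."""
    return max(-max_level, min(max_level, sample))

# Peaks beyond +/-1.0 are flattened, squaring off what was a sine wave:
print([hard_clip(s) for s in [0.5, 1.5, -2.0]])  # [0.5, 1.0, -1.0]
```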
COMPRESSION THRESHOLD:
The threshold is the level a signal must cross before the compressor starts to work, and the ratio then sets how much compression is applied. For example, if the compression ratio is set to 6:1, then for every 6 dB the input signal rises above the threshold, the output level rises by only 1 dB. This helps reduce the volume of any loud sounds in a recording, or effectively brings up quiet sounds, by compressing the audio signal's dynamic range. Below is an image about compression threshold.
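The threshold-and-ratio rule can be written directly in dB terms. The -18 dB threshold and 6:1 ratio below are assumed example settings, not recommended values:

```python
def compress_db(input_db, threshold_db=-18.0, ratio=6.0):
    """Above the threshold, every `ratio` dB of input yields only 1 dB of output."""
    if input_db <= threshold_db:
        return input_db                              # below threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

# With a -18 dB threshold and 6:1 ratio, an input 6 dB over the threshold
# comes out only 1 dB over it:
print(compress_db(-12.0))  # -17.0
print(compress_db(-24.0))  # -24.0 (below threshold, unchanged)
```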

Compression output/make-up gain


The further the threshold is brought down, the more gain reduction you will get on the compressor. This affects the output by reducing the gain of the track being made (in Logic Pro, for instance), making it a lot quieter than we would want it to be. Make-up gain is really nothing more than a gain stage in a device that lets you amplify the level again; what makes it make-up gain is the context, as it is usually found in devices where some other process reduces the level of the sound. The most obvious and common example is the compressor: after your signal undergoes the gain-reduction process of compression, you need some way to bring the average overall level back up so that the signal sits in the mix appropriately. Below is an image to explain this process.
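One common rule of thumb (an assumption here, not the only method) is to apply make-up gain equal to the gain reduction a full-scale peak would receive, bringing the loudest output back toward 0 dB:

```python
def makeup_gain_db(threshold_db, ratio):
    """Assumed rule of thumb: restore the gain a 0 dB peak loses to compression."""
    compressed_peak = threshold_db + (0.0 - threshold_db) / ratio
    return 0.0 - compressed_peak   # lift the output back toward 0 dB

# A -18 dB threshold at 6:1 leaves a 0 dB peak at -15 dB,
# so about 15 dB of make-up gain brings the level back up:
print(makeup_gain_db(-18.0, 6.0))  # 15.0
```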

COMPRESSION RATIO:
The compression ratio affects how hard the compressor works on the recording track. Compression is the process of reducing the dynamic range between the loudest and quietest parts of an audio signal. This is done by attenuating the louder signals and, in effect, boosting the quieter ones, to give the track a better quality. Below is an image explaining compression ratio a bit more.

COMPRESSION ATTACK/RELEASE:
The attack and release controls determine how the compressor behaves once the signal passes the threshold being used: they set how quickly the compressor's gain reduction reacts to changes in the input signal level. The attack specifies how fast the compressor reacts in reducing gain, while the release specifies how fast the gain reduction resets itself. Below is an image of compression attack and release to explain it a bit more.
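A common way to implement attack and release (a sketch of one typical design, not how any particular plug-in works) is a level follower with two smoothing coefficients: a fast one used while the level rises and a slow one while it falls. The times and sample rate below are assumed values:

```python
import math

def smoothing_coeff(time_ms, sample_rate=48000):
    """One-pole smoothing coefficient for a given attack/release time."""
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

def envelope(samples, attack_ms=1.0, release_ms=50.0):
    """Track signal level: rise quickly (attack), fall slowly (release)."""
    a = smoothing_coeff(attack_ms)
    r = smoothing_coeff(release_ms)
    env, out = 0.0, []
    for s in samples:
        level = abs(s)
        coeff = a if level > env else r    # attack when rising, release when falling
        env = coeff * env + (1 - coeff) * level
        out.append(env)
    return out

# A loud burst followed by silence: the envelope jumps up fast during the
# burst, then decays slowly once the signal stops.
env = envelope([1.0] * 200 + [0.0] * 200)
print(round(env[199], 2), round(env[399], 2))
```

The compressor's gain reduction follows this envelope, which is why a fast attack clamps transients quickly while a slow release lets the gain recover gradually.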
