
A PITCH-CONTROLLED VIBRATO FOR ELECTRIC GUITAR

James Love (450578496)


Initial Review for Digital Audio Systems, DESC9115, 2016
Graduate Program in Audio and Acoustics
Faculty of Architecture, Design and Planning, University of Sydney

ABSTRACT
Adaptive Digital Audio Effects (A-DAFx) allow digital audio
effect parameters to be controlled by sound features extracted
from the input signal itself. This report explored the possibility
of an adaptive vibrato effect for electric guitar, where the pitch
extracted from the input signal controls the modulation rate. A
brief review of several pitch tracking methods was undertaken
so that their suitability for electric guitar could be assessed.
The procedures for mapping the detected pitches to the vibrato
effect itself for automated control of the modulation rate were
also outlined. It was found that the YIN algorithm, which is an
extension of the autocorrelation function, could be the best
choice for electric guitar pitch extraction. The procedure for
signal mapping was found to be relatively straightforward, so
the potential for additional sound feature extraction mappings
could be considered, as well as additional user control from a
foot pedal.
1. INTRODUCTION

1.1. Adaptive Digital Audio Effects Overview


While most digital audio effects (DAFx) process audio input
based on a set of parameters pre-determined by the user [1],
adaptive digital audio effects (A-DAFx) provide time-varying
control of an effect driven by parameters extracted from the
input signal itself [2].
A-DAFx are sometimes referred to as dynamic or
intelligent effects or content-based transformations. A classic
example is the compressor, where the gain applied by the effect
varies over time with the input signal level.
Another example is auto-tune, where a time-varying pitch shift
is applied by extracting the fundamental frequency from the
input signal and calculating divergence from a tempered
musical scale [3].
1.2. Generalised Signal Flow
As shown in Figure 1, the three main stages of A-DAFx
include: input sound feature analysis and extraction; mapping
between those features and DAFx parameters; and finally, the
DAFx processing itself.

Figure 1. Generalised signal flow diagram for an adaptive effect. Adaptive control consists of sound extraction from inputs x1(n), x2(n) or output y(n) and mapping to effect control parameters. Mappings can also be manipulated with an optional gestural control g(n) [2].
It is also possible to extract multiple features from an input (or
output) signal and map them to various effects parameters, or
use features from one input signal to drive control of another
[2]. Additional gestural control, such as a foot pedal, can also
be included to modify the mappings between sound features
and effect parameters [4].
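As a minimal illustrative sketch (not part of the reviewed literature), the three stages of Figure 1 can be laid out in code. Here the extracted sound feature is the per-frame RMS level and the mapped parameter is a compressor-like gain, echoing the compressor example above; all function names, the frame size, and the threshold/ratio values are hypothetical choices for this sketch.

```python
import numpy as np

FRAME = 1024  # illustrative analysis frame size in samples

def extract_feature(x):
    """Stage 1: sound feature extraction -- RMS level per frame."""
    n_frames = len(x) // FRAME
    frames = x[:n_frames * FRAME].reshape(n_frames, FRAME)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def map_to_parameter(level, threshold=0.1, ratio=4.0):
    """Stage 2: mapping -- compressor-like gain curve driven by the level."""
    gain = np.ones_like(level)
    loud = level > threshold
    gain[loud] = (threshold + (level[loud] - threshold) / ratio) / level[loud]
    return gain

def dafx_process(x, gain):
    """Stage 3: DAFx processing -- apply the time-varying gain frame by frame."""
    y = x[:len(gain) * FRAME].copy()
    for i, g in enumerate(gain):
        y[i * FRAME:(i + 1) * FRAME] *= g
    return y

fs = 44100
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(FRAME * 40) / fs)  # loud test tone
y = dafx_process(x, map_to_parameter(extract_feature(x)))       # attenuated output
```

Replacing the feature, the mapping curve, or the processing block yields other members of the A-DAFx family without changing the overall structure.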
1.3. Pitch-Controlled Vibrato
The application of A-DAFx explored in this review was
restricted to extraction of a single sound feature from an input
signal: the pitch of an electric guitar. Various methods of pitch
extraction were compared in terms of their suitability for the
electric guitar so that the detected pitch could be used as a
control signal for the modulation rate of a vibrato effect.
2. PITCH TRACKING AND EXTRACTION

2.1. Overview

For the pitch tracking approaches described here, the simplified assumption has been made that perceived pitch corresponds to the fundamental frequency of a harmonic signal [5]. This section provides a brief introduction to both time-based and frequency-based approaches to pitch extraction and, through a review of a previous study [6], compares their pitch extraction accuracy for electric guitar.

2.2. Time-Domain Pitch Extraction

To determine the fundamental frequency f0 in the time domain, the corresponding period T0 must be found. The two quantities share the inverse relationship given by:

f0 = 1 / T0

For a digital signal, it is necessary to find the pitch lag M, which is the number of samples in one period. For a signal with sampling rate fS = 1 / TS, where TS is the sampling interval, the pitch lag is given by:

M = T0 / TS = fS / f0

The resolution in detecting the fundamental frequency varies with fS and f0, given that M can only take integer values [3]. The resolution is given by the frequency error factor ε(f0), which is the ratio of the exact frequency to the discrete frequency:

ε(f0) = 1 + 0.5 · f0 / fS
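A short numeric illustration of the relationships above, with a sampling rate and note frequency chosen purely for illustration:

```python
# Illustrative values only: standard audio sampling rate and the
# fundamental of a guitar's open high E string.
fs = 44100.0              # sampling rate fS (Hz)
f0 = 329.63               # exact fundamental frequency (Hz)

M = round(fs / f0)        # pitch lag: nearest integer number of samples
f_detected = fs / M       # frequency actually representable with that lag

# Adjacent integer lags bound the achievable resolution around f0:
f_low, f_high = fs / (M + 1), fs / (M - 1)

err_factor = 1 + 0.5 * f0 / fs   # frequency error factor from the text
```

Because M must be an integer, the detected frequency (here fS/134 ≈ 329.1 Hz) can only approximate the exact fundamental, and the spacing between neighbouring candidate frequencies grows as f0 rises relative to fS.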

2.3. Autocorrelation

The autocorrelation function rxx(m) with lag m is used to detect the pitch period of a portion of a signal and is given by:

rxx(m) = Σ_{n=m}^{N−1} x(n) · x(n−m)

where it is "the sum of multiplications of a time window [in the signal] with its shifted version" [6] for a block of length N. Often maxima are at integer multiples of the pitch lag M rather than at the first, where it is expected [3]. For this reason octave errors can occur, and further steps that extend upon the autocorrelation concept need to be implemented, such as those built into the LTP, PRAAT and YIN algorithms [6]. While these inaccuracies present significant problems for automatic transcription or auto-tune, the autocorrelation function is sufficient if it is low-pass filtered and used as a control parameter for tremolo [3].

2.4. Frequency-Domain or FFT-based Pitch Extraction

The FFT is the fast version of the Discrete Fourier Transform (DFT), from which the magnitude response of the frequency spectrum is calculated from a finite number N of samples of the input signal [3]. The lowest peak of the frequency magnitude response can be taken as the fundamental frequency. However, given that the frequency resolution is restricted by the length of the FFT frame, i.e.

Δf = fS / N

additional information, such as the phase of the signal, is needed to calculate the correct frequency [6].

2.5. Comparison of Pitch Tracking Methods

In their study of pitch tracking methods for electric guitar [6], Knesebeck and Zolzer tested the accuracy and latency (for real-time applications) of the aforementioned methods by measuring the results each one yielded for a synthesised input signal of known pitches. They also measured the performance with an actual dry electric guitar signal.

They found that the YIN algorithm had the best performance overall for tracking of single notes, with a mean error of less than 0.01 Hz and a latency of 27.4 ms. The accuracy of all approaches was good for the synthetic signal, but octave jumps and frequency fluctuations occurred for the real guitar signal. They found that LTP was the most robust method for pitch tracking, but it had the longest latency.

3. PITCH-CONTROLLED TREMOLO

3.1. Adaptive Amplitude Modulation

Given an input signal x(n) and a control signal c(n) from an extracted sound feature, the output signal y(n) is given by [3]:

y(n) = x(n) · (1 + c(n))

A generalised signal flow diagram for adaptive amplitude modulation is shown in Figure 2.

Figure 2: Generalised signal flow diagram for adaptive amplitude modulation. The amplitude-modulated output signal y(n) is the result of a control signal c(n) from an extracted sound feature driving the input signal x(n).
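The amplitude modulation of Figure 2 can be sketched directly from the equation above. In this hypothetical sketch a fixed-rate sinusoidal oscillator stands in for the adaptively derived c(n); the tone, modulation rate, and depth values are illustrative only.

```python
import numpy as np

fs = 44100                                    # sampling rate (Hz)
n = np.arange(fs)                             # one second of sample indices
x = 0.5 * np.sin(2 * np.pi * 220 * n / fs)    # input signal: a 220 Hz tone

# Control signal c(n). Here it is a fixed 5 Hz oscillator with depth d;
# in the adaptive effect, its rate would instead follow the tracked pitch.
d = 0.5
c = d * np.sin(2 * np.pi * 5 * n / fs)

# Adaptive amplitude modulation: y(n) = x(n) * (1 + c(n))
y = x * (1 + c)
```

Swapping the fixed 5 Hz rate for a rate derived from the extracted fundamental frequency turns this static tremolo into the adaptive effect described in this section.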

3.2. Adaptive Tremolo

Tremolo is a periodic amplitude modulation of the input signal that produces a volume-wavering effect [7]. The amplitude modulation control signal c(n), expressed on a linear scale, is given by:

c(n) = d(n) · sin(2π · (fm(n) / fS) · n)

where d(n) is the depth of the tremolo, which can be varied adaptively but for the purposes of this application is pre-defined by the user as a value between 0 and 100. The rate of amplitude oscillation, or modulation frequency fm(m), is related to the fundamental frequency f0(m) through the mapping relationship:

fm(m) = 1 + 13 · (1420 − f0(m)) / (1420 − 780)

4. DISCUSSION & CONCLUSION

While the procedure for mapping the control signal to the vibrato modulation rate is relatively straightforward, the pitch extraction stage is more challenging, especially for an electric guitar. The study reviewed for this particular application of pitch tracking methods found that the YIN algorithm was the best performer in terms of pitch tracking accuracy and latency, which are both important if the effect is to be applied in a live situation. The possibility of additional sound feature extractions and mappings could be explored, such as controlling the vibrato depth with amplitude tracking. Additional user control of effect mappings could also be implemented through use of a MIDI foot controller.

5. REFERENCES

[1] J. A. Maddams, S. Finn, and J. D. Reiss, "An autonomous method for multi-track dynamic range compression," in Proceedings of the 15th International Conference on Digital Audio Effects (DAFx-12), 2012, p. 1.

[2] V. Verfaille and D. Arfib, "A-DAFx: Adaptive digital audio effects," in Proc. COST-G6 Workshop on Digital Audio Effects, Limerick, Ireland, 2001, pp. 10-14.

[3] V. Verfaille, D. Arfib, F. Keiler, A. von dem Knesebeck, and U. Zolzer, "Adaptive digital audio effects," in DAFX: Digital Audio Effects, U. Zolzer, Ed., 2nd ed. Chichester, England: Wiley, 2011, pp. 321-322, 335-360, 371-372, 376-378.

[4] V. Verfaille, U. Zolzer, and D. Arfib, "Adaptive digital audio effects (A-DAFx): A new class of sound transformations," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, pp. 1817-1831, 2006.

[5] P. de la Cuadra. (2000, accessed 20/3/2016). Pitch Detection Methods Review. Available: https://ccrma.stanford.edu/~pdelac/154/m154paper.htm

[6] A. von dem Knesebeck and U. Zolzer, "Comparison of pitch trackers for real-time guitar effects," in Proc. 13th Int. Conf. Digital Audio Effects (DAFx-10), 2010.

[7] D. Formosa. (2003, accessed 20/3/2016). A Brief History of Tremolo. Premier Guitar. Available: http://www.premierguitar.com/articles/19777-a-brief-history-of-tremolo