
Synthesizer

From Wikipedia, the free encyclopedia


This article is about the electronic music instrument. For other uses, see Synthesizer (disambiguation).

"Synth" redirects here. For other uses, see Synth (disambiguation).

Early Minimoog by R.A. Moog Inc. (ca. 1970)

A synthesizer (often abbreviated as synth, also spelled synthesiser) is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may imitate traditional musical instruments such as the piano, Hammond organ, flute, or human voice; imitate natural sounds such as ocean waves; or generate novel electronic timbres. They are often played with a musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled via USB, MIDI or CV/gate from a controller device, often a MIDI keyboard.

Synthesizers use various methods to generate sound electronically. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis.

Synthesizers were first used in pop music in the 1960s. In the late 1970s, synths were used
in progressive rock, pop and disco. In the 1980s, the invention of the relatively inexpensive Yamaha
DX7 synth made digital synthesizers widely available. 1980s pop and dance music often made heavy
use of synthesizers. In the 2010s, synthesizers are used in many genres, such as pop, hip hop, heavy metal, rock and dance. Contemporary classical composers of the 20th and 21st centuries have written compositions for synthesizer.

Contents

 1 History
o 1.1 Early electric instruments
o 1.2 Emergence of electronics and early electronic instruments
o 1.3 Graphical sound
o 1.4 Subtractive synthesis and polyphonic synthesizer
o 1.5 Monophonic electronic keyboards
o 1.6 Other innovations
o 1.7 Electronic music studios as sound synthesizers
 1.7.1 Origin of the term "sound synthesizer"
o 1.8 From modular synthesizer to popular music
o 1.9 Polyphonic keyboards and the digital revolution
o 1.10 Impact on popular music
 2 Types of synthesizers
 3 Sound synthesis
o 3.1 Imitative synthesis
 4 Components
o 4.1 Filter
o 4.2 Attack Decay Sustain Release (ADSR) envelope
o 4.3 LFO
o 4.4 Arpeggiator
 5 Patch
 6 Module
 7 Control interfaces
o 7.1 Fingerboard controller
o 7.2 Wind controllers
o 7.3 Others
o 7.4 MIDI control
 8 Typical roles
o 8.1 Synth lead
o 8.2 Synth pad
o 8.3 Synth bass
 9 Controversy
 10 See also
 11 Notes
 12 References
 13 Bibliography
 14 Further reading
 15 External links

History

Synthesizers before the 20th century

Wolfgang von Kempelen's Speaking Machine in 1769–1791 (replica in 2007–2009)

Rudolph Koenig's sound synthesizer (1865), consisting of tuning forks, electromagnets, and Helmholtz resonators

See also: Articulatory synthesis § Mechanical talking heads, and Additive synthesis § History

The beginnings of the synthesizer are difficult to trace, as it is difficult to draw a distinction between
synthesizers and some early electric or electronic musical instruments.[1][2]

Early electric instruments

See also: Electronic musical instrument § Early examples


One of the earliest electric musical instruments, the Musical Telegraph, was invented in 1876 by American electrical engineer Elisha Gray. He accidentally discovered sound generation from a self-vibrating electromechanical circuit, and invented a basic single-note oscillator. This instrument used steel reeds whose oscillations were created by electromagnets and transmitted over a telegraph line. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.[3][4] This instrument was a remote electromechanical musical instrument that used telegraphy and electric buzzers to generate fixed-timbre sound. Though it lacked an arbitrary sound-synthesis function, some have erroneously called it the first synthesizer.[1][2]

The Telharmonium console (1897) and Hammond organ (1934)

In 1897, Thaddeus Cahill invented the Telharmonium, which was capable of additive synthesis.
Cahill's business was unsuccessful for various reasons, but similar and more compact instruments
were subsequently developed, such as electronic and tonewheel organs including the Hammond
organ, which was invented in 1934.

Emergence of electronics and early electronic instruments


Left: Theremin (RCA AR-1264; 1930). Middle: Ondes Martenot (7th-generation model in 1978).
Right: Trautonium (Telefunken Volkstrautonium Ela T42; 1933).

In 1906, American engineer Lee de Forest ushered in the "electronics age".[5] He invented the
first amplifying vacuum tube, called the Audion tube. This led to new entertainment technologies,
including radio and sound films. These new technologies also influenced the music industry, and
resulted in various early electronic musical instruments that used vacuum tubes, including:

 Audion piano by Lee de Forest in 1915[6]

 Theremin by Léon Theremin in 1920[7]

 Ondes Martenot by Maurice Martenot in 1928

 Trautonium by Friedrich Trautwein in 1929

Most of these early instruments used heterodyne circuits to produce audio frequencies, and were limited in their synthesis capabilities. The Ondes Martenot and Trautonium were developed continuously over several decades, eventually acquiring qualities similar to those of later synthesizers.

Graphical sound

ANS synthesizer and graphical sound

In the 1920s, Arseny Avraamov developed various systems of graphic sonic art,[8] and similar graphical sound systems were developed around the world (Holzer 2010).[9] In 1938, Soviet engineer Yevgeny Murzin designed a compositional tool called the ANS, one of the earliest real-time additive synthesizers using optoelectronics. Although his idea of reconstructing a sound from its visible image was apparently simple, the instrument was not realized until 20 years later, in 1958, as Murzin was "an engineer who worked in areas unrelated to music" (Kreichi 1997).[10]
Subtractive synthesis and polyphonic synthesizer

Hammond Novachord (1939) and Welte Lichttonorgel (1935)

In the 1930s and 1940s, the basic elements required for the modern analog subtractive
synthesizers — audio oscillators, audio filters, envelope controllers, and various effects units — had
already appeared and were utilized in several electronic instruments.

The earliest polyphonic synthesizers were developed in Germany and the United States. The Warbo Formant Organ, developed by Harald Bode in Germany in 1937, was a four-voice key-assignment keyboard with two formant filters and a dynamic envelope controller,[11][12] and was possibly manufactured commercially by a factory in Dachau, according to 120 Years of Electronic Music.[13][verification needed] The Hammond Novachord, released in 1939, was an electronic keyboard that used twelve sets of top-octave oscillators with octave dividers to generate sound, with vibrato, a resonator filter bank and a dynamic envelope controller. During the three years that Hammond manufactured this model, 1,069 units were shipped, but production was discontinued at the start of World War II.[14][15] Both instruments were forerunners of later electronic organs and polyphonic synthesizers.

Monophonic electronic keyboards


Harald Bode's Multimonica (1940) and Georges Jenny's Ondioline (c. 1941)

In the 1940s and 1950s, before the popularization of electronic organs and the introduction of combo organs, manufacturers developed and marketed various portable monophonic electronic instruments with small keyboards. These small instruments typically consisted of an electronic oscillator, a vibrato effect, passive filters, and so on. Most of these (except for the Clavivox) were designed for conventional ensembles, rather than as experimental instruments for electronic music studios, but they contributed to the evolution of modern synthesizers. They included:

 Solovox (1940) by the Hammond Organ Company: a monophonic attachment keyboard instrument consisting of a large tone cabinet and a small keyboard unit, intended to accompany a piano with a monophonic lead voice of organ or orchestral character.

 Multimonica (1940) designed by Harald Bode, produced by Hohner: a dual-keyboard instrument consisting of an electrically blown reed organ (lower) and a monophonic sawtooth synthesizer (upper).

 Ondioline (1941) designed by Georges Jenny in France.

 Clavioline (1947) designed by Constant Martin, produced by Selmer, Gibson, and others. This instrument was featured on various 1960s popular recordings, including Del Shannon's "Runaway" (1961) and The Beatles' "Baby, You're a Rich Man" (1967).

 Univox (1951) by Jennings Musical Instruments (JMI).[16] This instrument was featured on The
Tornados' "Telstar" (1962).

 Clavivox (1952) by Raymond Scott.

 first portable digital keyboard (1971) by D. Ross Grable[citation needed]

Other innovations

Hugh Le Caine's Electronic Sackbut (1948) and Yamaha Magna Organ (1935)

In the late 1940s, Canadian inventor and composer Hugh Le Caine invented the Electronic Sackbut, a voltage-controlled electronic musical instrument that provided the earliest real-time control of three aspects of sound (volume, pitch, and timbre), corresponding to today's touch-sensitive keyboards and pitch and modulation controllers. The controllers were initially implemented as a multidimensional pressure keyboard in 1945, then changed to a group of dedicated controllers operated by the left hand in 1948.[17]

In Japan, as early as 1935, Yamaha released the Magna Organ,[18] a multi-timbral keyboard instrument based on electrically blown free reeds with pickups.[19] It may have been similar to the electrostatic reed organs developed by Frederick Albert Hoschke in 1934 and then manufactured by Everett and Wurlitzer until 1961.

In 1949, Japanese composer Minao Shibata discussed the concept of "a musical instrument with very
high performance" that can "synthesize any kind of sound waves" and is "...operated very easily,"
predicting that with such an instrument, "...the music scene will be changed
drastically."[neutrality is disputed][20][21]

Electronic music studios as sound synthesizers

Synthesizer (left) and an audio console at the Studio di fonologia musicale di Radio Milano (of RAI) (1955–1983; renewed in 1968)

See also: Studio for Electronic Music (WDR), Groupe de Recherches Musicales, and Studio di fonologia
musicale di Radio Milano

After World War II, electronic music including electroacoustic music and musique concrète was
created by contemporary composers, and numerous electronic music studios were established
around the world, especially in Cologne, Paris and Milan. These studios were typically filled with
electronic equipment including oscillators, filters, tape recorders, audio consoles etc., and the whole
studio functioned as a "sound synthesizer".

Origin of the term "sound synthesizer"

RCA Mark II Sound Synthesizer (1957) and the Siemens Studio for Electronic Music (c. 1959)

In 1951–1952, RCA produced a machine called the Electronic Music Synthesizer; however, it was more accurately a composition machine, because it did not produce sounds in real time.[22] RCA then developed the first programmable sound synthesizer, the RCA Mark II Sound Synthesizer, installing it at the Columbia-Princeton Electronic Music Center in 1957.[23] Prominent composers including Vladimir Ussachevsky, Otto Luening, Milton Babbitt, Halim El-Dabh, Bülent Arel, Charles Wuorinen, and Mario Davidovsky used the RCA Synthesizer extensively in various compositions.[24]

From modular synthesizer to popular music

Main articles: Modular synthesizer, Harald Bode, Robert Moog, Moog synthesizer, and Doepfer A-100

In 1959–1960, Harald Bode developed a modular synthesizer and sound processor,[25][26] and in 1961 he wrote a paper exploring the concept of a self-contained portable modular synthesizer using newly emerging transistor technology.[27] He also served as AES session chairman on music and electronics for the fall conventions in 1962 and 1964.[28] His ideas were adopted by Donald Buchla and Robert Moog in the United States, and Paolo Ketoff et al. in Italy,[29][30][31] at about the same time:[32] among them, Moog is known as the first synthesizer designer to popularize the voltage control technique in analog electronic musical instruments.[32]

In Italy, a working group at the Rome Electronic Music Center, comprising composer Gino Marinuzzi, Jr., designer Giuliano Strini, MSEE, and sound engineer and technician Paolo Ketoff, built a vacuum-tube modular instrument, the "FonoSynth" (1957–58), which slightly predated Moog's and Buchla's work. Later the group created a solid-state version, the "Synket". Both devices remained prototypes (except for a model made for John Eaton, who wrote a "Concert Piece for Synket and Orchestra"), owned and used only by Marinuzzi, notably in the original soundtrack of Mario Bava's sci-fi film "Terrore nello spazio" (a.k.a. Planet of the Vampires, 1965) and a RAI-TV mini-series, "Jeckyll".[29][30][31]

The Moog modular synthesizer of 1960s–1970s

Robert Moog built his first prototype between 1963 and 1964, and was then commissioned by the Alwin Nikolais Dance Theater of NY;[33][34] Donald Buchla, meanwhile, was commissioned by Morton Subotnick.[35][36] From the late 1960s through the 1970s, the development of miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, as proposed by Harald Bode in 1961. By the early 1980s, companies were selling compact, modestly priced synthesizers to the public. This, along with the development of Musical Instrument Digital Interface (MIDI), made it easier to integrate and synchronize synthesizers and other electronic instruments for use in musical composition. In the 1990s, synthesizer emulations began to appear in computer software, known as software synthesizers. From 1996 onward, Steinberg's Virtual Studio Technology (VST) plug-ins – and a host of other kinds of competing plug-in software, all designed to run on personal computers – began emulating classic hardware synthesizers, becoming increasingly successful at doing so during the following decades.

Audio sample: the first movement (Allegro) of Brandenburg Concerto No. 3 played on synthesizer, from Wendy Carlos's Switched-On Bach (1968).

The synthesizer had a considerable effect on 20th-century music.[37] Micky Dolenz of The Monkees bought one of the first Moog synthesizers. The band was the first to release an album featuring a Moog, Pisces, Aquarius, Capricorn & Jones Ltd., in 1967,[38] which became a Billboard number-one album. A few months later, the title track of the Doors' 1967 album Strange Days featured a Moog played by Paul Beaver. Wendy Carlos's Switched-On Bach (1968), recorded using Moog synthesizers, also influenced numerous musicians of that era and is one of the most popular recordings of classical music ever made,[39] alongside the records (particularly Snowflakes Are Dancing in 1974) of Isao Tomita, who in the early 1970s utilized synthesizers to create new artificial sounds (rather than simply mimicking real instruments[40]) and made significant advances in analog synthesizer programming.[41]

The sound of the Moog reached the mass market with Simon and Garfunkel's Bookends in 1968 and The Beatles' Abbey Road the following year; hundreds of other popular recordings subsequently used synthesizers, most famously the portable Minimoog. Electronic music albums by Beaver and Krause, Tonto's Expanding Head Band, The United States of America, and White Noise reached a sizable[clarification needed] cult audience, and progressive rock musicians such as Richard Wright of Pink Floyd and Rick Wakeman of Yes were soon using the new portable synthesizers extensively. Stevie Wonder and Herbie Hancock also played a major role in popularising synthesizers in Black American music.[42][43] Other early users included Emerson, Lake & Palmer's Keith Emerson, Tony Banks of Genesis, Todd Rundgren, Pete Townshend, and The Crazy World of Arthur Brown's Vincent Crane. In Europe, the first no. 1 single to feature a Moog prominently was Chicory Tip's 1972 hit "Son of My Father".[44]
In 1974, Roland Corporation released the EP-30, the first touch-sensitive electronic keyboard.[45]

Polyphonic keyboards and the digital revolution

See also: Polyphony and monophony in instruments, Digital synthesizer, § Patch, MIDI, Physical
modelling synthesis, Virtual analog synthesizer, and Software synthesizer

The Prophet-5 synthesizer of the late 1970s-early 1980s.

In 1973, Yamaha developed the Yamaha GX-1, an early polyphonic synthesizer.[46] Other polyphonic synthesizers followed, mainly manufactured in Japan and the United States from the mid-1970s to the early 1980s, including Roland Corporation's RS-101 and RS-202 (1975 and 1976) string synthesizers,[47][48] the Yamaha CS-80 (1976), Oberheim's Polyphonic and OB-X (1975 and 1979), Sequential Circuits' Prophet-5 (1978), and Roland's Jupiter-4 and Jupiter-8 (1978 and 1981). The success of the Prophet-5, a polyphonic and microprocessor-controlled keyboard synthesizer, aided the shift of synthesizers towards their familiar modern shape, away from large modular units and towards smaller keyboard instruments.[49] This form factor helped accelerate the integration of synthesizers into popular music, a shift that had been lent powerful momentum by the Minimoog and, later, the ARP Odyssey.[50] Earlier polyphonic electronic instruments of the 1970s, rooted in string synthesizers before advancing to multi-synthesizers incorporating monosynths and more, gradually fell out of favour in the wake of these newer, note-assigned polyphonic keyboard synthesizers.[51]

In 1973,[52] Yamaha licensed the first digital synthesis algorithm, frequency modulation synthesis (FM synthesis), from John Chowning, who had experimented with it since 1971.[53] Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation.[54] In the 1970s, Yamaha was granted a number of patents, under the company's former name "Nippon Gakki Seizo Kabushiki Kaisha", evolving Chowning's early work on FM synthesis technology.[55] Yamaha built the first prototype digital synthesizer in 1974.[52] Yamaha eventually commercialized FM synthesis technology with the Yamaha GS-1, the first FM digital synthesizer, released in 1980.[56] The first commercial digital synthesizer, the Casio VL-1,[57] had been released a year earlier, in 1979.[58]
The Fairlight CMI of the late 1970s-early 1980s.

By the end of the 1970s, digital synthesizers and digital samplers had arrived on the market around the world (and are still sold today),[note 1] as the result of preceding research and development.[note 1] Compared with analog synthesizer sounds, the digital sounds produced by these new instruments tended to have a number of distinctive characteristics: clear attack and sound outlines, carrying sounds, rich overtones with inharmonic content, and complex motion of sound textures, amongst others. While these new instruments were expensive, these characteristics meant musicians were quick to adopt them, especially in the United Kingdom[59] and the United States. This encouraged a trend towards producing music using digital sounds,[note 2] and laid the foundations for the development of the inexpensive digital instruments popular in the next decade (see below). Relatively successful instruments, each selling more than several hundred units per series, included the NED Synclavier (1977), Fairlight CMI (1979), E-mu Emulator (1981), and PPG Wave (1981).[note 1][59][60][61][62]

The Yamaha DX7 of 1983.

In 1983, however, Yamaha's revolutionary DX7 digital synthesizer[52][63] swept through popular music, leading to the adoption and development of digital synthesizers in many varying forms during the 1980s, and the rapid decline of analog synthesizer technology. In 1987, Roland released its D-50 synthesizer, which combined the already existing sample-based synthesis[note 3] with onboard digital effects,[64] while Korg's even more popular M1 (1988) heralded the era of the workstation synthesizer, based on ROM sample sounds for composing and sequencing whole songs, rather than solely traditional sound synthesis.[65]
The Clavia Nord Lead series released in 1995.

Throughout the 1990s, the popularity of electronic dance music employing analog sounds, the appearance of digital analog-modelling synthesizers to recreate these sounds, and the development of the Eurorack modular synthesiser system, initially introduced with the Doepfer A-100 and since adopted by other manufacturers, all contributed to a resurgence of interest in analog technology. The turn of the century also saw improvements in technology that led to the popularity of digital software synthesizers.[66] In the 2010s, new analog synthesizers, in both keyboard and modular form, have been released alongside current digital hardware instruments.[67] In 2016, Korg announced the release of the Korg Minilogue, the first polyphonic analogue synth to be mass-produced in decades.

Impact on popular music

See also: Electronic music, Synthpop, and Electronic dance music


In the 1970s, electronic music composers such as Jean Michel Jarre,[68] Vangelis[69] and Isao
Tomita,[41][40][70] released successful synthesizer-led instrumental albums. Over time, this helped
influence the emergence of synthpop, a subgenre of new wave, from the late 1970s to the early
1980s. The work of German krautrock bands such as Kraftwerk[71] and Tangerine Dream, British acts
such as John Foxx, Gary Numan and David Bowie, African-American acts such as George
Clinton and Zapp, and Japanese electronic acts such as Yellow Magic Orchestra and Kitaro, were
influential in the development of the genre.[72] Gary Numan's 1979 hits "Are 'Friends' Electric?" and
"Cars" made heavy use of synthesizers.[73][74] OMD's "Enola Gay" (1980) used distinctive electronic
percussion and a synthesized melody. Soft Cell used a synthesized melody on their 1981 hit "Tainted
Love".[72] Nick Rhodes, keyboardist of Duran Duran, used various synthesizers including the Roland
Jupiter-4 and Jupiter-8.[75]

Chart hits include Depeche Mode's "Just Can't Get Enough" (1981),[72] The Human League's "Don't You Want Me"[76] and Giorgio Moroder's "Take My Breath Away" (1986) for Berlin. Other notable
synthpop groups included New Order,[77] Visage, Japan, Men Without Hats, Ultravox,[72] Spandau
Ballet, Culture Club, Eurythmics, Yazoo, Thompson Twins, A Flock of Seagulls, Heaven
17, Erasure, Soft Cell, Pet Shop Boys, Bronski Beat, Kajagoogoo, ABC, Naked Eyes, Devo, and the early
work of Tears for Fears and Talk Talk. Giorgio Moroder, Brian Eno, Phil Collins, Howard Jones, Stevie
Wonder, Peter Gabriel, Thomas Dolby, Kate Bush, Enya, Mike Oldfield, Dónal Lunny, Frank
Zappa and Todd Rundgren all made use of synthesizers.

The synthesizer became one of the most important instruments in the music industry.[72]

Types of synthesizers

 Analog synthesizer

 Graphical sound

 Additive synthesis

 Subtractive synthesis

 Modular synthesizer

 Digital synthesizer

 Analog modeling synthesizer

 Distortion synthesis

 Frequency modulation synthesis

 Guitar synthesizer

 Phase distortion synthesis

 Linear Arithmetic synthesis

 Physical modelling synthesis

 Direct digital synthesizer

 Banded waveguide synthesis

 Digital waveguide synthesis

 Formant synthesis

 Karplus–Strong string synthesis

 Sample-based synthesis or Sampler

 Concatenative synthesis

 Granular synthesis

 Table-lookup synthesis

 Vector synthesis

 Wavetable synthesis

 RCA Synthesizer

 Scanned synthesis

 Software synthesizer

 Virtual analog synthesizer


Sound synthesis

Additive synthesis was utilized as early as the Telharmonium in the 1900s and the Hammond organ in the 1930s.

Additive synthesis builds sounds by adding together waveforms into a composite sound. Instrument sounds are simulated by matching their natural harmonic overtone structure. Early analog examples of additive synthesizers are the Telharmonium and the Hammond organ; a later digital example is the Synclavier.
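
The core idea can be sketched in a few lines of Python (a minimal illustration assuming NumPy; the function name and the chosen harmonic amplitudes are hypothetical, not any particular instrument's spectrum):

```python
# Minimal additive-synthesis sketch: a tone is built by summing sine
# partials at integer multiples of a fundamental frequency.
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def additive_tone(f0, harmonic_amps, duration=1.0):
    """Sum sine partials at k * f0 with the given amplitudes."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    tone = sum(amp * np.sin(2 * np.pi * k * f0 * t)
               for k, amp in enumerate(harmonic_amps, start=1))
    return tone / np.max(np.abs(tone))  # normalize to [-1, 1]

# An organ-like spectrum: strong fundamental, progressively weaker harmonics.
signal = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25, 0.2])
```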

Subtractive synthesis is still utilized on various synths, including virtual analog synths.

Subtractive synthesis is based on filtering harmonically rich waveforms. It was implemented in early monophonic keyboard synthesizers such as the Minimoog. Signal routing, or patching, was usually very limited and followed a normalized path. Subtractive synthesizers approximate instrumental sounds with an oscillator (producing sawtooth waves, square waves, etc.) followed by a filter, followed by an amplifier controlled by an ADSR envelope. The combination of simple modulation routings (such as pulse-width modulation and oscillator sync), along with the lowpass filter, is responsible for the "classic synthesizer" sound commonly associated with "analog synthesis". A minimal sketch of this chain follows.

FM synthesis was hugely successful in the earliest digital synthesizers.

FM synthesis (frequency modulation synthesis) is a process that usually involves the use of at least two signal generators (sine-wave oscillators, commonly referred to as "operators" in FM-only synthesizers) to create and modify a voice. Often, this is done through the analog or digital generation of a signal that modulates the tonal and amplitude characteristics of a base carrier signal. FM synthesis was pioneered by John Chowning,[78] who patented the idea and sold it to Yamaha. Unlike the exponential voltage-to-frequency relationship and multiple waveforms of classical 1-volt-per-octave synthesizer oscillators, Chowning-style FM synthesis uses a linear voltage-in-to-frequency-out relationship and sine-wave oscillators. The resulting complex waveform may have many component frequencies, and there is no requirement that they all bear a harmonic relationship. Sophisticated FM synths such as the Yamaha DX7 series can have six operators per voice; synths with FM can often also use filters and variable amplifier types to alter the signal's characteristics into a sonic voice that either roughly imitates acoustic instruments or creates sounds that are unique. FM synthesis is especially valuable for metallic or clangorous noises such as bells, cymbals, or other percussion.
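
As a rough illustration, here is a two-operator sketch in Python with NumPy (real FM instruments such as the DX7 chain more operators, each with its own envelope):

```python
# Minimal two-operator FM sketch: one sine oscillator (the modulator)
# varies the phase of another (the carrier), creating sidebands.
import numpy as np

SAMPLE_RATE = 44100

def fm_tone(carrier_hz, ratio, index, duration=1.0):
    """ratio = modulator freq / carrier freq; index = modulation depth."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

# A non-integer ratio yields inharmonic partials: a bell-like tone.
bell = fm_tone(440.0, ratio=3.5, index=5.0)
```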

Phase distortion synthesis is a method implemented on Casio CZ synthesizers. It replaces the traditional analog waveform with a choice of several digital waveforms that are more complex than the standard square, sine, and sawtooth waves. This waveform is routed to a digital filter and digital amplifier, each modulated by an eight-stage envelope. The sound can then be further modified with ring modulation or noise modulation (source: Casio CZ-101 owner's manual, http://manuals.fdiskc.com/flat/Casio%20CZ-101%20Owners%20Manual.pdf).
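
The general principle can be sketched as follows (illustrative Python with NumPy; this warps the phase ramp that reads a sine wave, and is a simplification rather than the exact Casio CZ implementation):

```python
# Minimal phase-distortion sketch: a sine is read through a bent phase ramp,
# reshaping the waveform (and its spectrum) without any filter.
import numpy as np

SAMPLE_RATE = 44100

def phase_distorted(freq, knee=0.15, duration=1.0):
    """knee = where the phase ramp bends; 0.5 gives an undistorted sine,
    smaller values squeeze the first half-cycle and brighten the tone."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    phase = (t * freq) % 1.0                       # plain 0..1 phase ramp
    warped = np.where(phase < knee,
                      0.5 * phase / knee,
                      0.5 + 0.5 * (phase - knee) / (1.0 - knee))
    return np.sin(2 * np.pi * warped)

tone = phase_distorted(220.0, knee=0.1)
```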

Physical modelling synthesis is often implemented as a software synthesizer.

Physical modelling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate each sonic characteristic of an instrument, starting with the harmonics that make up the tone itself, then adding the sound of the resonator, the instrument body, etc., until the sound realistically approximates the desired instrument. When an initial set of parameters is run through the physical simulation, the simulated sound is generated. Although physical modelling was not a new concept in acoustics and synthesis, it was not until the development of the Karplus–Strong algorithm and the increase in DSP power in the late 1980s that commercial implementations became feasible. The quality and speed of physical modelling on computers improve with higher processing power.
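
The Karplus–Strong algorithm itself is compact enough to sketch directly (illustrative Python with NumPy): a burst of noise circulates in a delay line whose length sets the pitch, and a simple averaging filter damps it the way a plucked string decays.

```python
# Minimal Karplus-Strong plucked-string sketch.
import numpy as np

SAMPLE_RATE = 44100

def pluck(freq, duration=1.5):
    period = int(SAMPLE_RATE / freq)             # delay-line length = pitch
    buf = np.random.uniform(-1.0, 1.0, period)   # noise burst = the "pluck"
    out = np.empty(int(SAMPLE_RATE * duration))
    for n in range(len(out)):
        out[n] = buf[n % period]
        # averaging adjacent samples low-passes the loop, so high
        # harmonics die away faster than low ones, as on a real string
        buf[n % period] = 0.5 * (buf[n % period] + buf[(n + 1) % period])
    return out

string = pluck(196.0)  # approximately the G below middle C
```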

Analysis/resynthesis is typically used in the vocoder.

Sample-based synthesis may be one of the most popular methods at present.

Sample-based synthesis involves digitally recording a short snippet of sound from a real instrument or other source and then playing it back at different speeds to produce different pitches. A sample can be played as a one-shot, often used for percussion or short-duration sounds, or it can be looped, which allows the tone to sustain or repeat as long as the note is held. Samplers usually include a filter, envelope generators, and other controls for further manipulation of the sound. Virtual samplers that store the samples on a hard drive make it possible for the sounds of an entire orchestra, including multiple articulations of each instrument, to be accessed from a sample library. See also wavetable synthesis and vector synthesis.
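
A minimal sketch of the repitching step, assuming NumPy (load_sample is a hypothetical stand-in for an audio-file loader; early samplers changed speed and pitch together exactly like this):

```python
# Minimal sample-playback sketch: transpose a recorded snippet by reading
# it back at a different rate (linear-interpolation resampling).
import numpy as np

def play_at_pitch(sample, semitones):
    """Return the sample transposed by the given number of semitones."""
    ratio = 2.0 ** (semitones / 12.0)        # equal-temperament speed ratio
    positions = np.arange(0, len(sample) - 1, ratio)
    return np.interp(positions, np.arange(len(sample)), sample)

# sample = load_sample("piano_c4.wav")      # hypothetical loader
# fifth_up = play_at_pitch(sample, 7)       # shorter and higher
# octave_down = play_at_pitch(sample, -12)  # longer and lower
```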

Analysis/resynthesis is a form of synthesis that uses a series of bandpass filters or Fourier transforms
to analyze the harmonic content of a sound. The results are then used to resynthesize the sound
using a band of oscillators. The vocoder, linear predictive coding, and some forms of speech
synthesis are based on analysis/resynthesis.
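
A minimal single-frame sketch, assuming NumPy (vocoders and LPC track the analysis over many short frames; keeping only the loudest FFT bins is a crude stand-in for a real partial tracker):

```python
# Minimal analysis/resynthesis sketch: measure the strongest partials of a
# sound with an FFT, then rebuild an approximation from sine oscillators.
import numpy as np

SAMPLE_RATE = 44100

def resynthesize(x, n_partials=8):
    window = np.hanning(len(x))
    spectrum = np.fft.rfft(x * window)
    freqs = np.fft.rfftfreq(len(x), 1.0 / SAMPLE_RATE)
    loudest = np.argsort(np.abs(spectrum))[-n_partials:]  # bins to keep
    t = np.arange(len(x)) / SAMPLE_RATE
    out = np.zeros(len(x))
    for k in loudest:
        amp = 2.0 * np.abs(spectrum[k]) / window.sum()    # rough amplitude
        out += amp * np.cos(2 * np.pi * freqs[k] * t + np.angle(spectrum[k]))
    return out
```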

Imitative synthesis

Sound synthesis can be used to mimic acoustic sound sources. Generally, a sound that does not
change over time includes a fundamental partial or harmonic, and any number of partials. Synthesis
may attempt to mimic the amplitude and pitch of the partials in an acoustic sound source.

When natural sounds are analyzed in the frequency domain (as on a spectrum analyzer), their spectra exhibit amplitude spikes at each of the fundamental tone's harmonics, corresponding to resonant properties of the instruments (spectral peaks that are also referred to as formants). Some harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as a sound's harmonic content. A synthesized sound requires accurate reproduction of the original sound in both the frequency domain and the time domain. A sound does not necessarily have the same harmonic content throughout its duration. Typically, high-frequency harmonics die out more quickly than the lower harmonics.

In most conventional synthesizers, for purposes of re-synthesis, recordings of real instruments are
composed of several components representing the acoustic responses of different parts of the
instrument, the sounds produced by the instrument during different parts of a performance, or the
behavior of the instrument under different playing conditions (pitch, intensity of playing, fingering,
etc.)

Components

Basic components of an analogue subtractive synthesizer

Synthesizers generate sound through various analogue and digital techniques. Early synthesizers were based on analog hardware, but many modern synthesizers use a combination of DSP software and hardware, or are purely software-based (see softsynth). Digital synthesizers often emulate classic analog designs. Sound is controllable by the operator by means of circuits or virtual stages that may include:

 Electronic oscillators – create raw sounds with a timbre that depends upon the waveform generated. Voltage-controlled oscillators (VCOs) and digital oscillators may be used. Harmonic additive synthesis models sounds directly from pure sine waves, somewhat in the manner of an organ, while frequency modulation and phase distortion synthesis use one oscillator to modulate another. Subtractive synthesis depends upon filtering a harmonically rich oscillator waveform. Sample-based and granular synthesis use one or more digitally recorded sounds in place of an oscillator.

 Low frequency oscillator (LFO) – an oscillator of adjustable frequency that can be used to
modulate the sound rhythmically, for example to create tremolo or vibrato or to control a
filter's operating frequency. LFOs are used in most forms of synthesis.

 Voltage-controlled filter (VCF) – "shape" the sound generated by the oscillators in the
frequency domain, often under the control of an envelope or LFO. These are essential
to subtractive synthesis.
 ADSR envelopes – provide envelope modulation to "shape" the volume or harmonic content
of the produced note in the time domain with the principal parameters being attack, decay,
sustain and release. These are used in most forms of synthesis. ADSR control is provided
by envelope generators.

 Voltage-controlled amplifier (VCA) – After the signal generated by one (or a mix of
more) VCOs has been modified by filters and LFOs, and its waveform has been shaped
(contoured) by an ADSR envelope generator, it then passes on to one or more voltage-
controlled amplifiers (VCAs). A VCA is a preamp that boosts (amplifies) the electronic signal
before passing it on to an external or built-in power amplifier, as well as a means to control
its amplitude (volume) using an attenuator. The gain of the VCA is affected by a control
voltage (CV), coming from an envelope generator, an LFO, the keyboard or some other
source.[79]

 Other sound processing effects units such as ring modulators and fuzz bass pedals may be
encountered.

Filter

Various filter modes.

Main article: Voltage controlled filter

Electronic filters are particularly important in subtractive synthesis, being designed to pass some frequency regions through unattenuated while significantly attenuating ("subtracting") others. The low-pass filter is most frequently used, but band-pass filters, band-reject filters and high-pass filters are also sometimes available.

The filter may be controlled with a second ADSR envelope. An "envelope modulation" ("env mod") parameter on many synthesizers with filter envelopes determines how much the envelope affects the filter. If turned all the way down, the filter produces a flat sound with no envelope. When turned up, the envelope becomes more noticeable, expanding the minimum and maximum range of the filter.
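
A minimal sketch of how an env mod amount might scale a cutoff sweep (illustrative Python with NumPy; the decay-only envelope and scaling rule are simplifications, not any specific synthesizer's behavior):

```python
# Minimal "env mod" sketch: the envelope pushes the filter cutoff above its
# base frequency; the env-mod amount scales how far.
import numpy as np

SAMPLE_RATE = 44100

def swept_cutoff(base_hz, env_mod, duration=1.0):
    """Cutoff trajectory over time. env_mod = 0 leaves the filter static."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    envelope = np.exp(-5.0 * t)           # simple decaying envelope, 1 -> 0
    return base_hz + env_mod * envelope * base_hz

# env_mod = 4.0 sweeps from 2500 Hz down toward the 500 Hz base setting.
cutoffs = swept_cutoff(500.0, env_mod=4.0)
```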

Attack Decay Sustain Release (ADSR) envelope

"Release (music)" redirects here. For music release, see art release.
Schematic of ADSR

Attack Decay Sustain Release

Key on off

Inverted ADSR envelope

When an acoustic musical instrument produces sound, the loudness and spectral content of the sound change over time in ways that vary from instrument to instrument. For example, plucked guitars and the struck keys of pianos both produce sounds that die away over time. The "attack" and "decay" of a sound have a great effect on the instrument's sonic character.[80][81] Sound synthesis techniques often employ an envelope generator that controls a sound's parameters at any point in its duration. Most often this is an ADSR envelope, which may be applied to overall amplitude control, filter frequency, etc. The envelope may be a discrete circuit or module, or implemented in software. The contour of an ADSR envelope is specified using four parameters (a minimal generator sketch follows the list):

 Attack time is the time taken for initial run-up of level from nil to peak, beginning when the
key is first pressed.

 Decay time is the time taken for the subsequent run down from the attack level to the
designated sustain level.

 Sustain level is the level during the main sequence of the sound's duration, until the key is
released.

 Release time is the time taken for the level to decay from the sustain level to zero after the
key is released.
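
As noted above, here is a minimal generator sketch (illustrative Python with NumPy; the segments are linear here, while hardware envelopes are often exponential):

```python
# Minimal linear ADSR envelope generator.
import numpy as np

SAMPLE_RATE = 44100

def adsr(attack, decay, sustain, release, held):
    """attack/decay/release in seconds, sustain as a 0-1 level,
    held = how long the key stays down."""
    a = np.linspace(0.0, 1.0, int(attack * SAMPLE_RATE), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * SAMPLE_RATE), endpoint=False)
    s = np.full(max(0, int(held * SAMPLE_RATE) - len(a) - len(d)), sustain)
    r = np.linspace(sustain, 0.0, int(release * SAMPLE_RATE))
    return np.concatenate([a, d, s, r])

# A percussive organ-like shape; multiply it into a raw oscillator signal.
envelope = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5, held=1.0)
```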

An early implementation of ADSR can be found on the Hammond Novachord of 1938 (which predates the first Moog synthesizer by over 25 years). A seven-position rotary knob set preset ADS parameters for all 72 notes; a pedal controlled release time.[14] The notion of ADSR was specified by Vladimir Ussachevsky (then head of the Columbia-Princeton Electronic Music Center) in 1965 while suggesting improvements for Bob Moog's pioneering work on synthesizers; the parameters were originally notated (T1, T2, Esus, T3), and were later simplified to the current form (attack time, decay time, sustain level, release time) by ARP.[82]

Some electronic musical instruments can invert the ADSR envelope, reversing the behavior of the normal ADSR envelope. During the attack phase, the modulated sound parameter fades from the maximum amplitude to zero; then, during the decay phase, it rises to the value specified by the sustain parameter. After the key has been released, the sound parameter rises from the sustain amplitude back to maximum amplitude.

8-step envelope on Casio CZ series

A common variation of the ADSR on some synthesizers, such as the Korg MS-20, was ADSHR (attack, decay, sustain, hold, release). By adding a "hold" parameter, the system could hold notes at the sustain level for a fixed length of time before decaying. The General Instrument AY-3-8912 sound chip included a hold-time parameter only; the sustain level was not programmable. Another common variation in the same vein is the AHDSR (attack, hold, decay, sustain, release) envelope, in which the "hold" parameter controls how long the envelope stays at full volume before entering the decay phase. Multiple attack, decay and release settings may be found on more sophisticated models.

Certain synthesizers also allow for a delay parameter before the attack. Modern synthesizers like the Dave Smith Instruments Prophet '08 have DADSR (delay, attack, decay, sustain, release) envelopes. The delay setting determines the length of silence between hitting a note and the attack. Some software synthesizers, such as Image-Line's 3xOSC (included with their DAW FL Studio), have DAHDSR (delay, attack, hold, decay, sustain, release) envelopes.

A common feature on many synthesizers is an AD envelope (attack and decay only). This can be used
to control e.g. the pitch of one oscillator, which in turn may be synchronized with another oscillator
by oscillator sync.

LFO

LFO section of Access Virus C

Main article: Low-frequency oscillation


A low-frequency oscillator (LFO) generates an electronic signal, usually below 20 Hz. LFO signals
create a periodic control signal or sweep, often used in vibrato, tremolo and other effects. In certain
genres of electronic music, the LFO signal can control the cutoff frequency of a VCF to make a
rhythmic wah-wah sound, or the signature dubstep wobble bass.
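
A minimal vibrato sketch (illustrative Python with NumPy; the same slow control signal could instead scale amplitude for tremolo, or a filter cutoff for a wah or wobble effect):

```python
# Minimal LFO sketch: a 5 Hz sine slowly modulates an oscillator's pitch.
import numpy as np

SAMPLE_RATE = 44100

def vibrato_tone(freq, lfo_hz=5.0, depth_hz=6.0, duration=2.0):
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    lfo = np.sin(2 * np.pi * lfo_hz * t)        # sub-audio control signal
    inst_freq = freq + depth_hz * lfo           # wobbling pitch
    # integrate the instantaneous frequency to get a continuous phase
    phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE
    return np.sin(phase)

tone = vibrato_tone(440.0)
```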

Arpeggiator

See also: Home organ and Music sequencer

An arpeggiator (arp) is a feature available on several synthesizers that automatically steps through a sequence of notes based on an input chord, thus creating an arpeggio. The notes can often be transmitted to a MIDI sequencer for recording and further editing. An arpeggiator may have controls for speed, range, and the order in which the notes play: upwards, downwards, or in a random order. More advanced arpeggiators allow the user to step through a pre-programmed complex sequence of notes, or to play several arpeggios at once. Some allow a pattern to be sustained after the keys are released: in this way, a sequence of arpeggio patterns may be built up over time by pressing several keys one after the other. Arpeggiators are also commonly found in software sequencers. Some arpeggiators/sequencers expand features into a full phrase sequencer, which allows the user to trigger complex, multi-track blocks of sequenced data from a keyboard or input device, typically synchronized with the tempo of the master clock. A minimal sketch of the basic stepping logic follows.
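
This sketch is illustrative Python; notes are MIDI note numbers, and sending them to a synth or sequencer is left abstract:

```python
# Minimal arpeggiator sketch: step through a held chord's notes at a
# clocked rate, in upward, downward, or random order.
import random

def arpeggiate(chord, order="up", steps=8):
    """Yield `steps` note numbers drawn from `chord` in the given order."""
    if order == "up":
        cycle = sorted(chord)
    elif order == "down":
        cycle = sorted(chord, reverse=True)
    else:  # "random"
        cycle = None
    for i in range(steps):
        yield random.choice(chord) if cycle is None else cycle[i % len(cycle)]

# A C minor triad stepped upwards for one bar of eighth notes:
print(list(arpeggiate([60, 63, 67], order="up", steps=8)))
# -> [60, 63, 67, 60, 63, 67, 60, 63]
```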

Audio sample: a trance lead sound played with an arpeggiator.

An arpeggiator interface on a Novation Nova synthesizer

Audio sample: a Eurodance riff using a rapid 1/16-note arpeggiator.[verification needed]

Arpeggiators seem to have grown from the accompaniment systems used in electronic organs in the mid-1960s to the mid-1970s.[83] They were also commonly fitted to keyboard instruments through the late 1970s and early 1980s. Notable examples are the RMI Harmonic Synthesizer (1974),[84] Roland Jupiter-8, Oberheim OB-8, Roland SH-101, Sequential Circuits Six-Trak and Korg Polysix. A famous example can be heard on Duran Duran's song "Rio", in which the arpeggiator on a Roland Jupiter-4 plays a C minor chord in random mode. Arpeggiators fell out of favor by the latter part of the 1980s and early 1990s and were absent from the most popular synthesizers of the period, but a resurgence of interest in analog synthesizers during the 1990s, and the use of rapid-fire arpeggios in several popular dance hits, brought them back into fashion.

Patch

One of the earliest patch memories (bottom left), on the Oberheim Four-voice (1975/1976)

A synthesizer patch (some manufacturers chose the term program) is a sound setting. Modular
synthesizers used cables ("patch cords") to connect the different sound modules together. Since
these machines had no memory to save settings, musicians wrote down the locations of the patch
cables and knob positions on a "patch sheet" (which usually showed a diagram of the synthesizer).
Ever since, an overall sound setting for any type of synthesizer has been referred to as a patch.

In the mid-to-late 1970s, patch memory (allowing storage and loading of 'patches' or 'programs') began to appear in synths like the Oberheim Four-voice (1975/1976)[85] and Sequential Circuits Prophet-5 (1977/1978). After MIDI was introduced in 1983, more and more synthesizers could import or export patches via MIDI SysEx commands. When a synthesizer patch is uploaded to a personal computer that has patch editing software installed, the user can alter the parameters of the patch and download it back to the synthesizer. Because there is no standard patch language, it is rare that a patch generated on one synthesizer can be used on a different model. However, manufacturers sometimes design a family of synthesizers to be compatible.

Module

Korg Triton rack-mountable sound module.

A synth module is a standalone unit which synthesizes sounds using electronic or digital circuits. A
synth module does not typically have a built-in MIDI controller such as a musical keyboard. As such,
to play the sounds from a sound module using MIDI, a MIDI controller such as a MIDI-compatible
keyboard or other device has to be used. Some synth modules are the sound synthesis components
from an integrated synthesizer keyboard, packaged into a rackmountable unit.

Control interfaces

Non-contact interface (AirFX)

Tangible interface (Reactable)

Pitch & mod. wheels and touchpad

Drum pad

Guitar-style interface (SynthAxe)

Modern synthesizers often look like small pianos, though with many additional knob and button
controls. These are integrated controllers, where the sound synthesis electronics are integrated into
the same package as the controller. However, many early synthesizers were modular and
keyboardless, while most modern synthesizers may be controlled via MIDI, allowing other means of
playing such as:
 Fingerboards (ribbon controllers) and touchpads

 Wind controllers

 Guitar-style interfaces

 Drum pads

 Music sequencers

 Non-contact interfaces akin to theremins

 Tangible interfaces like a Reactable, AudioCubes

 Various auxiliary input devices, including wheels for pitch bend and modulation, footpedals for expression and sustain, breath controllers, beam controllers, etc.

Fingerboard controller

Left: Ondes Martenot (6G in 1960). Right: Mixture Trautonium (replica of 1952)

Fingerboard on Korg monotron

Ribbon controller on Moog 3P (1972)

A ribbon controller or other violin-like user interface may be used to control synthesizer parameters. The idea dates to Léon Theremin's first concept in 1922[86] and his 1932 Fingerboard Theremin and Keyboard Theremin,[87][88] Maurice Martenot's 1928 Ondes Martenot (sliding a metal ring),[89] and Friedrich Trautwein's 1929 Trautonium (finger pressure), and was later also utilized by Robert Moog.[90][91][92] The ribbon controller has no moving parts. Instead, a finger pressed down and moved along it creates an electrical contact at some point along a pair of thin, flexible longitudinal strips whose electric potential varies from one end to the other. Older fingerboards used a long wire pressed to a resistive plate. A ribbon controller is similar to a touchpad, but a ribbon controller only registers linear motion. Although it may be used to operate any parameter that is affected by control voltages, a ribbon controller is most commonly associated with pitch bending.

Fingerboard-controlled instruments include the Trautonium (1929), Hellertion (1929) and Heliophon (1936),[93][94][95] Electro-Theremin (Tannerin, late 1950s), Persephone (2004), and the Swarmatron (2004). A ribbon controller is used as an additional controller in the Yamaha CS-80 and CS-60, the Korg Prophecy and Korg Trinity series, the Kurzweil synthesizers, Moog synthesizers, and others.

Rock musician Keith Emerson used one with the Moog modular synthesizer from 1970 onward. In the late 1980s, keyboards in the synth lab at Berklee College of Music were equipped with membrane-thin ribbon-style controllers that output MIDI. They functioned as MIDI managers, with their programming language printed on their surface, and as expression/performance tools. Designed by Jeff Tripp of Perfect Fretworks Co., they were known as Tripp Strips. Such ribbon controllers can serve as a main MIDI controller instead of a keyboard, as with the Continuum instrument.

Wind controllers

Wind controller

Accordion synthesizer

Main article: Wind controller

Wind controllers (and wind synthesizers) are convenient for woodwind and brass players, being designed to imitate those instruments. These are usually either analog or MIDI controllers, and sometimes include their own built-in sound modules (synthesizers). In addition to adopting the key arrangements and fingerings of the instruments they imitate, the controllers have breath-operated pressure transducers, and may have gate extractors, velocity sensors, and bite sensors. Saxophone-style controllers have included the Lyricon, and products by Yamaha, Akai, and Casio. The mouthpieces range from alto clarinet to alto saxophone sizes. The Eigenharp, a controller similar in style to a bassoon, was released by Eigenlabs in 2009. Melodica and recorder-style controllers have included the Martinetta (1975)[96] and Variophon (1980),[97] and Joseph Zawinul's custom Korg Pepe.[98] A harmonica-style interface was the Millionizer 2000 (c. 1983).[99]

Trumpet-style controllers have included products by Steiner/Crumar/Akai, Yamaha, and Morrison.


Breath controllers can also be used to control conventional synthesizers, e.g. the Crumar Steiner Masters Touch,[100] Yamaha Breath Controller and compatible products.[101] Several controllers also provide breath-like articulation capabilities.[clarification needed]

Accordion controllers use pressure transducers on bellows for articulation.

Others

Ondes Martenot

Theremin

Vocoder

Other controllers include the theremin, lightbeam controllers, touch buttons (touche d'intensité) on the ondes Martenot, and various types of foot pedals. Envelope-following systems, the most sophisticated being the vocoder, are controlled by the power or amplitude of an input audio signal. The talk box, with which a musician shapes sound using the vocal tract, is related but rarely categorized as a synthesizer.

MIDI control

Main article: Musical Instrument Digital Interface

Synthesizers became easier to integrate and synchronize with other electronic instruments and controllers with the introduction of Musical Instrument Digital Interface (MIDI) in 1983.[102] First proposed in 1981 by engineer Dave Smith of Sequential Circuits, the MIDI standard was developed by a consortium now known as the MIDI Manufacturers Association.[103] MIDI is an opto-isolated serial interface and communication protocol.[103] It provides for the transmission of real-time performance data from one device or instrument to another. This data includes note events, commands for the selection of instrument presets (i.e. sounds, or programs or patches, previously stored in the instrument's memory), the control of performance-related parameters such as volume and effects levels, as well as synchronization, transport control and other types of data. MIDI interfaces are now almost ubiquitous on music equipment and are commonly available on personal computers (PCs).[103]
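
For illustration, the byte layout of the most basic performance messages (these structures are defined by the MIDI 1.0 specification; the helper function names here are our own):

```python
# Sketch of MIDI channel-voice messages: a status byte (message type plus
# channel) followed by two 7-bit data bytes.
def note_on(channel, note, velocity):
    """Note-on: status 0x90-0x9F; channel 0-15, note and velocity 0-127."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note-off: status 0x80-0x8F (a note-on with velocity 0 is also
    commonly treated as a note-off)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x40])

msg = note_on(0, 60, 100)   # middle C on channel 1 at velocity 100
print(msg.hex())            # -> "903c64"
```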

The General MIDI (GM) software standard was devised in 1991 to serve as a consistent way of describing a set of over 200 sounds (including percussion) available to a PC for playback of musical scores.[104] For the first time, a given MIDI preset consistently produced a specific instrumental sound on any GM-compatible device. The Standard MIDI File (SMF) format (extension .mid) combined MIDI events with delta times – a form of time-stamping – and became a popular standard for exchanging music scores between computers. In the case of SMF playback using integrated synthesizers (as in computers and cell phones), the hardware component of the MIDI interface design is often unneeded.

Open Sound Control (OSC) is another music data specification, designed for online networking. In contrast with MIDI, OSC allows thousands of synthesizers or computers to share music performance data over the Internet in real time.

Recent trends in synthesizer design, particularly the resurgence of modular systems in Eurorack, have led to a hybrid of MIDI control and control voltage I/O being found together in many models; an example is the Moog Model D reissue, which was enhanced from its original design to offer both MIDI I/O and CV I/O. In these MIDI/CV hybrids, it is often possible to send and receive control voltages governing equipment parameters at the same time as MIDI messages are being sent and received.

Additional examples of MIDI/CV hybrids include models like the Arturia MiniBrute, which is able to receive MIDI messages from an external controller and automatically convert them into gate and pitch signals, which it can then send out as control voltages.

Typical roles

Synth lead players George Duke and Jordan Rudess

Synth lead

In popular music, a synth lead is generally used for playing the main melody of a song, but it is also
often used for creating rhythmic or bass effects. Although most commonly heard in electronic dance
music, synth leads have been used extensively in hip-hop music since the 1980s and some types of
rock songs since the 1970s. Many post-1980s pop music songs use a synth lead to provide a musical
hook to sustain the listener's interest throughout a song.

Synth pad

A synth pad is a sustained chord or tone generated by a synthesizer, often employed for background harmony and atmosphere in much the same fashion that a string section is used in orchestral music and film scores. Typically, a synth pad is performed using whole notes, which are often tied over bar lines. A synth pad sometimes holds the same note while a lead voice sings or plays an entire musical phrase or section. Often, the sounds used for synth pads have a vaguely organ-like, string-like, or vocal timbre. During the late 1970s and 1980s, dedicated string synthesizers were made that specialized in creating string sounds using the limited technology of the time. Much popular music in the 1980s employed synth pads, this being the time of polyphonic synthesizers, as did the then-new styles of smooth jazz and new-age music. One of many well-known songs from the era to incorporate a synth pad is "West End Girls" by the Pet Shop Boys, who were noted users of the technique.

The main features of a synth pad are very long attack and decay times with extended sustain. In some instances, pulse-width modulation (PWM) using a square-wave oscillator can be added to create a "vibrating" sound.

Synth bass
Moog Taurus pedal synthesizer

Audio sample: a classic analog bass synth sound; four sawtooth bass filter sweeps with gradually increasing resonance.

A 1970s-era Minimoog

Audio sample: "Funkorama" (Kevin MacLeod, ISRC USUAN1100474), an example of funk-styled grooving synth bass.[105]

See also: Keyboard bass

The bass synthesizer (or "bass synth") is used to create sounds in the bass range, from simulations of the electric bass or double bass to distorted, buzz-saw-like artificial bass sounds, by generating and combining signals of different frequencies. Bass synth patches may incorporate a range of sounds and tones, including wavetable-style, analog, and FM-style bass sounds, delay effects, distortion effects, and envelope filters. A modern digital synthesizer uses a frequency-synthesizer microprocessor component to generate signals of different frequencies. While most bass synths are controlled by electronic keyboards or pedalboards, some performers use an electric bass with MIDI pickups to trigger a bass synthesizer.

In the 1970s, miniaturized solid-state components allowed self-contained, portable instruments such as the Moog Taurus, a 13-note pedal keyboard played by the feet. The Moog Taurus was used in live performances by a range of pop, rock, and blues-rock bands. An early use of bass synthesizer was in 1972, on a solo album by John Entwistle (the bassist for The Who), entitled Whistle Rymes. Genesis bass player Mike Rutherford used a Dewtron "Mister Bassman" for the recording of their album Nursery Cryme in August 1971. Stevie Wonder introduced synth bass to a pop audience in the early 1970s, notably on "Superstition" (1972) and "Boogie On Reggae Woman" (1974). In 1977, Parliament's funk single "Flash Light" used the bass synthesizer. Lou Reed, widely considered a pioneer of electric guitar textures, played bass synthesizer on the song "Families", from his 1979 album The Bells.

Audio sample: "Pollinate", a demonstration of Logic's EXS24 sampler, EVD6 Clav and ES E Ensemble synthesizer with Space Designer, ring modulation and "bitcrusher" effects.
Following the availability of programmable music sequencers such as the Synclavier and Roland MC-8
Microcomposer in the late 1970s, bass synths began incorporating sequencers in the early 1980s. The
first bass synthesizer with a sequencer was the Firstman SQ-01.[106][107] It was originally released in
1980 by Hillwood/Firstman, a Japanese synthesizer company founded in 1972 by Kazuo Morioka
(who later worked for Akai in the early 1980s), and was then released by Multivox for North America
in 1981.[108][109][48]

A particularly influential bass synthesizer was the Roland TB-303.[110] Released in late 1981, it featured a built-in sequencer and later became strongly associated with acid house music.[111] Bass synthesizers began to be used to create highly syncopated rhythms and complex, rapid basslines. These techniques gained wide popularity with the emergence of acid house, after Phuture's use of the TB-303 on the single "Acid Tracks" in 1987,[110] though such techniques were predated by Charanjit Singh's use of the TB-303 in 1982.[111]

In the 2000s, several equipment manufacturers such as Boss and Akai produced bass synthesizer effect pedals for electric bass guitar players, which simulate the sound of an analog or digital bass synth. With these devices, a bass guitar is used to generate synth bass sounds. The Boss SYB-3 was one of the early bass synthesizer pedals. The SYB-3 uses digital signal processing to reproduce analog-style saw, square, and pulse synth waves, with a user-adjustable filter cutoff. The Akai bass synth pedal contains a four-oscillator synthesizer with user-selectable parameters (attack, decay, envelope depth, dynamics, cutoff, resonance). Bass synthesizer software allows performers to use MIDI to integrate the bass sounds with other synthesizers or drum machines. Bass synthesizers often provide samples from vintage 1970s and 1980s bass synths. Some bass synths are built into an organ-style pedalboard or button board.

Controversy

Since their invention, there has been concern that synthesizers could put session musicians out of work, since they can recreate the sounds of many instruments. Some musicians (especially keyboardists) viewed the synth as they would any other musical instrument; others viewed it as a threat to traditional session musicians, and the British Musicians' Union attempted to ban it in 1982. The ban never became official policy.[112] Broadway shows now also use synthesizers to reduce the number of live musicians required.[113]
