Article in Trends in Amplification, February 2004. DOI: 10.1177/108471380400800302 · Source: PubMed


Trends In Amplification

VOLUME 8, NUMBER 3, 2004

Challenges and Recent Developments in Hearing Aids


Part I. Speech Understanding in Noise, Microphone
Technologies and Noise Reduction Algorithms
King Chung, PhD

This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized.

1. Introduction

About 10% of the world's population suffers from hearing loss. For these individuals, the most common amplification choice is hearing aids. The hearing aids of today are vastly different from their predecessors because of the application of digital signal processing technologies. With the advances in digital chip designs and the reduction in current consumption, many of the historically unachievable concepts have now been put into practice. This article reviews current hearing aid design and concepts that are specifically attempting to meet the variety of amplification needs of hearing aid users. Some of these technologies and algorithms have been introduced to

From the Department of Speech, Language and Hearing Sciences, Purdue University,
West Lafayette, Indiana.
Correspondence: King Chung, PhD, Department of Speech, Language and Hearing Sciences,
Purdue University, 500 Oval Drive, West Lafayette, IN 47907; e-mail: kingchung@purdue.edu.
©2004 Westminster Publications, Inc., 708 Glen Cove Avenue, Glen Head, NY 11545, U.S.A.

the consumer market very recently. Validation data on the effectiveness of these technologies, if available, are also discussed.

2. Basics of Hearing Aid Signal Processing Technologies

Four basic concepts in hearing aids and digital signal processing underlie today's advanced signal processing technologies.

2.1. Differentiating Hearing Aids

The first and most basic concept is the differentiation among analog, analog programmable, and digital programmable hearing aids:

• In conventional analog hearing aids, the acoustic signal is picked up by the microphone and converted to an electric signal. The level and the frequency response of the microphone output are then altered by a set of analog filters and the signal is sent to the receiver. The signal of the analog hearing aid remains continuous throughout the signal processing path.
• In analog programmable hearing aids, the electric signal is normally split into two or more frequency channels. The level of the signal in each channel is amplified and processed by a digital control circuit. The parameters of the digital control circuit are programmed via hearing aid fitting software. The signal, however, remains continuous throughout the signal processing path in the analog programmable hearing aids.
• In digital programmable hearing aids (or simply digital hearing aids), the output of the microphone is sampled, quantized, and converted into discrete numbers by analog-to-digital converters. All the signal processing is then carried out in the digital domain by digital filters and algorithms. Upon completion of digital signal processing, the digital signal is converted back to the analog domain by a digital-to-analog converter or a demodulator.

For a detailed explanation of the differences in hearing aids, please refer to Schweitzer, 1997.

2.2. Channels and Bands

Another basic concept of hearing aids is the differentiation between channels and bands. In general, channel refers to a signal processing channel, the signal processing unit for such algorithms as compression, noise reduction, and feedback reduction. The gain control and other functions within the channels operate independently of each other. A band, on the other hand, refers to a frequency shaping band that is mainly used to control the amount of time-invariant gain in a frequency region. A given channel may have several frequency bands; each is subjected to the same signal processing of the channel in which it resides. Some digital hearing aids have an equal number of channels and bands; for example, the Natura by Sonic Innovations has nine signal processing channels and nine frequency shaping bands. Other digital hearing aids, however, may have different numbers of channels and bands; for example, the Adapto by Oticon has two signal processing channels and seven frequency shaping bands.

In some cases, the above distinction between channels and bands is blurred because of a lack of a better term to describe the complexity of the signal processing algorithms implemented in hearing aids. The Oticon Syncro, for example, has four channels of signal processing in the adaptive directional microphone algorithm and eight channels of signal processing for the compression and noise reduction algorithms. To distinguish the two, Oticon chooses to describe the multichannel adaptive directional microphone as a "multiband adaptive directional microphone" and reserves the term channel for its compression system and noise reduction algorithm.

With the advances in signal processing technologies, some hearing aids may have many channels and bands, which can become difficult to manage in the hearing aid fitting process. Some manufacturers have grouped the channels and bands into a smaller number of fitting regions to simplify the fitting process. For example, the Canta7 by GNResound has 64 frequency-shaping bands and 14 signal processing channels. These channels and bands are grouped into controls at six frequency regions in the fitting software.

2.3. The Building Blocks of Advanced Signal Processing Algorithms

Recently, many adaptive or automatic features have been implemented in digital hearing aids and most of these features are accomplished by signal processing algorithms. These signal pro-

cessing algorithms typically have three basic building blocks: a signal detection and analysis unit, a set of decision rules, and an action unit. The signal detection and analysis unit usually has either one or several detectors. These detectors observe the signal for a period of time in an analysis time window and then analyze for the presence or absence of certain pertinent characteristics or calculate a value of relevant characteristics. The output of the detection and analysis unit is subsequently compared with the set of predetermined decision rules. The action unit then executes the corresponding actions according to the decision rules.

An analogy for the building blocks of digital signal processing algorithms can be drawn from the operation of compression systems: The signal detection and analysis unit of the compression system is the level detector that detects and estimates the level of the incoming signal. The set of decision rules in the compression system is the input-output function and the time constants. Specifically, the input-output function specifies the amount of gain at different input levels, whereas the attack and release times determine how fast the change occurs. The action unit carries out the action that is reflected in the output of the compression system.

2.4. Time Constants of the Advanced Signal Processing Algorithms

Another important concept for advanced signal processing algorithms is the time constants that govern the speed of action. The concept of time constants of an adaptive or automatic algorithm can be described using the example of the time constants of a compression system. In a compression system, the attack and release times are defined by the duration in which a predetermined gain change occurs given a specific change in input level. In other words, they tell us how quickly a change in gain occurs in a compression system when the level of the input changes.

Similarly, the time constants of an adaptive or automatic algorithm tell us the time that the algorithm takes to switch from the default state to another signal processing state (i.e., attack/adaptation/engaging time) and the time that the algorithm takes to switch back to the default state (i.e., release/disengaging time) when the acoustic environment changes. For example, the attack/adaptation/engaging time for an automatic directional microphone algorithm is the time for the algorithm to switch from the default omni-directional microphone mode to the directional microphone mode when the hearing aid user walks from a quiet street into a noisy restaurant. The release/disengaging time is the time for the algorithm to switch from the directional microphone mode back to the omni-directional microphone mode when the user exits the noisy restaurant to the quiet street.

For some algorithms, the time constants can also be the switching time from one mode to another. An example is the switching time from one polar pattern to another in an adaptive directional microphone algorithm. In addition, time constants can also be associated with tracking speed (e.g., the speed with which a feedback reduction algorithm tracks a change in the feedback path).

The proprietary algorithms from different manufacturers have different time constants, depending on factors such as their hearing aid fitting philosophy, interactions or synergy among other signal processing units, and limitations on signal processing speed. Similar to the dilemma in choosing the appropriate release time in a compression system, there are pros and cons associated with the choice of fast or slow time constants in advanced signal processing algorithms:

• Fast engaging and disengaging times can act on changes in the incoming signal very quickly. Yet they may be overly active and create undesirable artifacts (e.g., the pumping effect generated by a noise reduction algorithm with fast time constants).
• Slow engaging and disengaging times may be more stable and produce fewer artifacts. However, they may appear sluggish and allow the undesirable components of the signal to linger a little longer before any signal processing action is taken.

The general trend in the hearing aid industry is to have variable engaging and disengaging times, similar to the concept of variable release times in a compression system. The exact value of the time constants depends on the characteristics of the incoming signal, the lifestyle of the hearing aid user, and the style and model of the hearing aid, among others.
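The three building blocks and the attack/release behavior described above can be sketched for the compression example. This is an illustrative toy, not any manufacturer's algorithm: the 5-ms attack, 100-ms release, 50-dB threshold, 2:1 ratio, and 20-dB maximum gain are all assumed values.

```python
import math

def smooth_level(levels_db, fs_hz, attack_s=0.005, release_s=0.100):
    """Signal detection and analysis unit: a level detector whose estimate
    rises with the attack time constant and falls with the release one."""
    a_att = math.exp(-1.0 / (attack_s * fs_hz))
    a_rel = math.exp(-1.0 / (release_s * fs_hz))
    est = levels_db[0]
    out = []
    for x in levels_db:
        a = a_att if x > est else a_rel   # time constant depends on direction
        est = a * est + (1.0 - a) * x
        out.append(est)
    return out

def gain_db(level_db, threshold_db=50.0, ratio=2.0, max_gain_db=20.0):
    """Decision rule (input-output function): full gain below the threshold,
    2:1 compression above it."""
    excess = max(level_db - threshold_db, 0.0)
    return max_gain_db - excess * (1.0 - 1.0 / ratio)

fs_hz = 1000                                          # one level estimate per ms
levels = [40.0] * 200 + [80.0] * 200 + [40.0] * 200   # quiet -> loud -> quiet
est = smooth_level(levels, fs_hz)
gains = [gain_db(lv) for lv in est]                   # the action unit applies these
```

Because the attack constant is short, the level estimate (and hence the gain reduction) responds almost immediately to the 80-dB burst, while the long release lets the gain recover gradually afterward: the fast-versus-slow trade-off described above, in miniature.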

3. Challenges and Recent Developments in Hearing Aids

Challenge No. 1: Enhancing Speech Understanding and Listening Comfort in Background Noise

Difficulty in understanding speech in noise has been one of the most common complaints of hearing aid users. People with hearing loss often have more difficulty understanding speech in noise than do people with normal hearing. When the ability to understand speech in noise is expressed as the signal-to-noise ratio (SNR) required for understanding 50% of speech (SNR-50), the SNRs-50 of people with hearing loss may be as much as 30 dB higher than those of people with normal hearing. This means that for a given background noise, the speech needs to be as much as 30 dB higher for people with hearing loss to achieve the same level of understanding as people with normal hearing (Baer and Moore, 1994; Dirks et al., 1982; Duquesnoy, 1983; Eisenberg et al., 1995; Festen and Plomp, 1990; Killion, 1997a; Killion and Niquette, 2000; Peters et al., 1998; Plomp, 1994; Tillman et al., 1970; Soede, 2000). The difference in SNRs-50 between people with normal hearing and people with hearing loss is called SNR-loss (Killion, 1997b). The exact amount of SNR-loss depends on the degree and type of hearing loss, the speech materials, and the temporal and spectral characteristics of the background noise.

From the signal processing point of view, the relationship between speech and noise can be characterized by their relative occurrences in the temporal, spectral, and spatial domains. Temporally, speech and noise can occur at the same instance or at different instances. Spectrally, speech and noise can have similar frequency content, or they may slightly overlap or have different primary frequency regions. Spatially, noise may originate from the same direction as the targeted speech or from a different spatial angle than the targeted speech. Further, speech and noise can have a constant spatial relationship or their relative positions may vary over time. When a constant spatial relationship exists between speech and noise, both components are fixed in space or both are moving at the same velocity. When their relative position varies over time, the talker, the noise, or both may be moving in space.

One of the most challenging tasks of engineers who design hearing aids is to reduce background noise and to increase speech intelligibility without introducing undesirable distortions. Multiple technologies have been developed in the long history of hearing aids to enhance speech understanding and listening comfort for people with hearing loss. The following section reviews some of the recent developments in two broad categories of noise reduction strategies: directional microphones and noise reduction algorithms.

3.1. Noise Reduction Strategy No. 1: Directional Microphones

Directional microphones are designed to take advantage of the spatial differences between speech and noise. They are second only to personal frequency modulation (FM) or infrared listening systems in improving the SNR for hearing aid users. Directional microphones are more sensitive to sounds coming from the front than to sounds coming from the back and the sides. The assumption is that when the hearing aid user engages in conversation, the talker(s) is usually in front and sounds from other directions are undesirable.

In the last several years, many new algorithms have been developed to maintain the performance of directional microphones over time and to maximally attenuate moving or fixed noise source(s) in the back hemisphere. In addition, second-order directional microphones and array microphones with higher directional effects are available to further attenuate noise originating from the back hemisphere. The following section reviews the basics of first-order directional microphones, updates the current research findings, and discusses some of the recent developments in directional microphones.

3.1.1. First-Order Directional Microphones

First-order directional microphones have been implemented in behind-the-ear hearing aids since the 1970s. The performance of modern directional microphones has been greatly improved compared with the earlier generations of directional microphones marketed in the 1970s and 1980s (Killion, 1997b). Now, first-order directional microphones are implemented not only in behind-the-ear hearing aids but also in in-the-ear and in-the-canal hearing aids.

3.1.1.1. How They Work

First-order directional microphones are implemented with either a single-microphone design or a dual/twin-microphone design. In the single-

microphone design, the directional microphone has an anterior and a posterior microphone port. The acoustic signal entering the posterior port is acoustically delayed and subtracted from the signal entering the anterior port at the diaphragm of the microphone.

The rationale is that if a sound comes from the front, it reaches the anterior port first and then reaches the posterior port a few tens of microseconds later. Since the sound in the posterior port is delayed by the traveling time between the two microphone ports (i.e., the external delay) and the acoustic delay network (i.e., the internal delay), the sound from the front is minimally affected. Therefore, the directional microphones have high sensitivity to sounds from the front. However, if a sound comes from the back, it reaches the posterior port first and continues to travel to the anterior port. If the internal delay equals the external delay, sounds entering the posterior port and the anterior port reach the diaphragm at the same time but on opposite sides of the diaphragm; thus, they are cancelled. Therefore, the sensitivity of the directional microphone to sounds from the back is greatly reduced.

The sensitivity of the directional microphone to sounds coming from different azimuths is usually displayed in a polar pattern. Directional microphones exhibit four distinct types of polar patterns: bipolar (or bidirectional, dipole), hypercardioid, supercardioid, and cardioid (Figure 1A). The least sensitive microphone locations (i.e.,

Figure 1. (A) The relationship between the ratio of the internal and external delays and the polar patterns.
(Reprinted with permission from Powers and Hamacher, Hear J, 55[10], 2002). (B) The directional sensitivity
patterns of directional microphones exhibiting cardioid, hypercardioid, and supercardioid patterns. A commercial
stand-alone directional microphone is placed at the center of the directional sensitivity pattern.

nulls) of these polar patterns are at different azimuths relative to the most sensitive location (0° azimuth). Notice that these measurements are made when the directional microphones are free-hanging in a free field where the sound field is uniform, free from boundaries, free from the disturbance of other sound sources, and nonreflective. When the directional microphones are measured in three dimensions, their sensitivity patterns to sounds from different locations are called directional sensitivity patterns (Figure 1B).

The directional sensitivity patterns of directional microphones are generated by altering the ratio between the internal and external delays. The internal delay is determined by the acoustic delay network placed close to the entrance of the back microphone port. The external delay is determined by the port spacing between the front and back ports, which, in turn, is determined by the available space and considerations of the amount of low-frequency gain reduction and the amount of high-frequency directivity (Figure 2).

In the dual-microphone design, the directional microphones are composed of two omni-directional microphones that are matched in frequency response and phase (Figure 3). The two omni-directional microphones are combined by using delay-and-subtract processing, similar to single-microphone directional microphones. The electrical signal generated from the posterior microphone is electrically delayed and subtracted from that of the anterior microphone in an integrated circuit (Buerkli-Halevy, 1987; Preves, 1999; Ricketts and Mueller, 1999). By varying the ratio between the internal and external delays, the dual-microphone directional microphones can also generate bipolar, cardioid, hypercardioid, or supercardioid polar patterns.

Although the performances of single-microphone and dual-microphone directional microphones are comparable, most of the high-performance digital hearing aids use dual-microphone directional microphones because of their flexibility. Single-microphone directional microphones have fixed polar patterns after being manufactured because neither the external delay nor the internal delay can be altered. However, dual-microphone directional microphones can have variable polar patterns because their internal delays can be varied by signal processing algorithms. The ability to vary the polar pattern after the hearing aid is made opens doors to the implementation of advanced signal processing algorithms (e.g., adaptive directional microphone algorithms).

The directional effect of the directional microphones can be quantified in several ways:

1. The front-back ratio is the microphone sensitivity difference in dB between sounds coming from 0° azimuth and sounds coming from 180° azimuth.
2. The directivity index is the ratio of the microphone output for sounds coming from 0° azimuth to the average of the microphone output for sounds from all other directions in a diffuse/reverberant field (Beranek, 1954).
3. The articulation index-weighted directivity index (AI-DI) is the sum, across frequencies, of the directivity index at each frequency multiplied by the articulation index weighting of that frequency band for speech intelligibility (Killion et al., 1998).

For a review of the design and evaluation of first-order directional microphones, please refer to the reviews by Ricketts (2001) and Valente (1999).

3.1.1.2. Updates on the Clinical Verification of First-Order Directional Microphones

Many factors affect the benefits of directional microphones. Research studies on the effect of directional microphones on speech recognition conducted in laboratory settings showed a large range of SNR-50 improvement, from 1 to 16 dB. The amount of benefit experienced by hearing aid users depends on the directivity index of the directional microphone; the number, placement, and type of noise sources; the room and environmental acoustics; the relative distance between the talker and listener; the location of the noise relative to the listener; and the vent size, among others (Beck, 1983; Hawkins and Yacullo, 1984; Gravel et al., 1999; Kuk et al., 1999; Nielsen and Ludvigsen, 1978; Preves et al., 1999; Ricketts, 2000a; Ricketts and Dhar, 1999; Studebaker et al., 1980; Valente et al., 1995; Wouters et al., 1999).

Normally, the higher the directivity index and AI-DI, the higher the directional benefit provided by the directional microphones (Ricketts, 2000b; Ricketts et al., 2001). Studies by various researchers have also shown that the directivity index or AI-DI can be used to predict the improvements in SNRs-50 provided by directional microphones in conditions with multiple noise sources or relatively diffuse environments (Laugesen and Schmidtke, 2004; Mueller and Ricketts, 2000; Ricketts, 2000a).


Figure 2. (A) The relationship between the port spacing (p) and the low-frequency cut-off for directional microphones that use delay-and-subtract processing. As port spacing decreases, the cut-off frequency for low-frequency gain reduction increases. This is because the sound pressures picked up by the two microphone ports/two omni-directional microphones are subtracted at two adjacent points. As frequency decreases, the wavelength increases, the difference between the sound pressures at the two points decreases, and the resultant microphone output becomes smaller after the subtraction. Thus, the cut-off frequency for low-frequency roll-off increases as the microphone port spacing decreases. (B) The relationship between the port spacing (p) and the high-frequency directivity index (DI). As port spacing decreases, the high-frequency directivity index increases. This occurs because directionality breaks down as the wavelength of the incoming signal approaches the port spacing. The smaller the port spacing, the higher the frequency at which the directionality breaks down (AI-DI = articulation index weighted directivity index). (Courtesy of Oticon, reprinted with permission.)


Figure 3. A figurative example of how the first-order dual-microphone directional microphone is implemented.
The polar pattern of the directional microphone depends on the ratio of the internal and external delays.
(Reprinted (modified) with permission from Thompson SC, Hear J 56[11], 2003).
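In the low-frequency limit, the output of the arrangement in Figure 3 is approximately proportional to (internal delay/external delay) + cos θ, so the delay ratio directly sets the null angle of the polar pattern. A small idealized sketch (the ratios used here are the textbook values for these patterns, not measurements):

```python
import math

def null_angle_deg(delay_ratio):
    """Null azimuth of an idealized first-order delay-and-subtract microphone.

    At low frequencies the output is roughly proportional to
    delay_ratio + cos(theta), so it vanishes where cos(theta) = -delay_ratio
    (valid for 0 <= delay_ratio <= 1).
    """
    return math.degrees(math.acos(-delay_ratio))

bipolar = null_angle_deg(0.0)        # 90 degrees: figure-eight pattern
hypercardioid = null_angle_deg(1/3)  # about 109.5 degrees
cardioid = null_angle_deg(1.0)       # 180 degrees: null directly behind
```

A ratio of 0 leaves the nulls at ±90° (bipolar); raising it toward 1 sweeps the nulls backward until they merge at 180° (cardioid), which is how an adaptive algorithm that varies the internal delay can steer its null toward a moving noise source.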

The amount of directional benefit of the hearing aids is affected by the number of noise sources and the testing environment. Studies conducted with one noise source placed at the null of the directional microphones (Agnew and Block, 1997; Gravel et al., 1999; Lurquin and Rafhay, 1996; Mueller and John, 1979; Valente et al., 1995; Wouters et al., 2002) showed greater SNR-50 improvements than studies conducted with multiple noise sources or with testing materials recorded in real-world environments (Killion et al., 1998; Preves et al., 1999; Pumfort et al., 2000; Ricketts, 2000a; Ricketts and Dhar, 1999; Valente et al., 2000). In general, 3 to 5 dB of improvement in the SNR-50 is reported in real-world environments with multiple noise sources (Amlani, 2001; Ricketts et al., 2001; Valente et al., 2000; Wouters et al., 1999).

In addition, greater improvements are generally observed in less reverberant environments than in more reverberant environments (Hawkins and Yacullo, 1984; Killion et al., 1998; Ricketts, 2000b; Ricketts and Dhar, 1999; Studebaker et al., 1980; Ricketts and Henry, 2002). Reverberation reduces directional effects because sounds are reflected from surfaces in all directions. The reflected sounds make it impossible for directional microphones to take advantage of the spatial separation between speech and noise. Research studies have also shown that directional microphones are more effective if speech, noise, or both speech and noise are within the critical distance (Ricketts, 2000a; Leeuw and Dreschler, 1991; Ricketts and Hornsby, 2003). Critical distance is the distance at which the level of the direct sound is equal to the level of the reverberant


sound. Within the critical distance, the level of significantly different for the directional micro-
the direct sound is higher than the level of the phones of the two hearing aid styles (Pumfort et
reverberant sound. al., 2000; Ricketts et al., 2001). This indicated
Further, the directional effect at the low-fre- that directional microphones provide less im-
quency region reduces as the vent size increases provement in speech understanding in an in-the-
because vents tend to reduce the gain of the hear- ear hearing aid than a behind-the-ear hearing aid.
ing aid below 1000 Hz and allow unprocessed In other words, although the omni-directional mi-
signals from all directions to reach the ear canal. crophones of in-the-ear hearing aids are more di-
However, when the weightings of articulation rectional than the omni-directional microphones
index are considered, the decrease in AI-DI was of the behind-the-ear hearing aids, the perfor-
only about 0.4 dB for each 1-mm increase in vent mance of the directional microphones imple-
size up to a diameter of 2 mm (Ricketts, 2001; mented in both hearing aid styles was not signifi-
Ricketts and Dittberner, 2002). Although a larger cantly different (Ricketts, 2001).
decrease of AI-DI (i.e., 0.8 dB) was observed Most laboratory tests have shown measurable
when the vent size increased from 2 mm to open directional benefits and many hearing aids users
fitting, the open earmold fitting would still have in field evaluation studies also report perceived
about a 4 dB higher AI-DI than the omni-direc- directional benefit. However, a number of recent
tional mode. In general, vents have the greatest field studies reported that a significant percent-
effect on hearing aids with high directivity index- age of hearing aid users might not perceive the
es at low frequencies (Ricketts, 2001). benefits of directional amplification in their daily
Factors that do not affect the benefit of direc- lives even if the signal processing, venting, and
tional microphones are compression and hearing hearing aid style are kept the same in the field
aid style (Pumfort et al., 2000; Ricketts, 2000b; trials and laboratory tests (Cord et al., 2002;
Ricketts et al., 2001). At the first glance, the ac- Mueller et al., 1983; Ricketts et al., 2003; Surr et
tions of compression and directional microphones al., 2002; Walden et al., 2000).
seem to act in opposite directions, i.e., direction- According to the researchers, the possible rea-
al microphones reduce background noise which sons for the discrepancies can be attributed to the
is usually softer than speech while compression amplifies softer sounds more than louder sounds. In practice, however, sounds from multiple sources occur at the same time and the gain of the compression circuit is determined by the most dominant source or the overall level. If both speech and noise occur at the same instance with a positive SNR, the gain of the hearing aid is determined by the level of the speech, not the noise. Research studies comparing the directional benefits of linear and compression hearing aids did not show any difference in the speech understanding ability of hearing aid users if speech and noise coexist at the same instance (Ricketts et al., 2001).

Another factor that does not affect the performance of directional microphones is the hearing aid style (Pumford et al., 2000; Ricketts et al., 2001). Previous research studies have shown that the omni-directional microphones of in-the-ear hearing aids have higher directivity indexes than do behind-the-ear hearing aids because of the pinna effect (Fortune, 1997; Olsen and Hagerman, 2002), and the SNRs-50 of subjects also concur with this finding (Pumford et al., 2000). However, SNRs-50 of subjects were not significantly different between hearing aid styles in the directional mode (Pumford et al., 2000).

The discrepancies between laboratory tests and field trials of directional microphones may be attributed to several factors, including the relative locations of the signal and noise, acoustic environments, the type and location of noise encountered, subjects' willingness to switch between directional and omni-directional microphones, and the percentage of time the use of a directional microphone is indicated, among others (Cord et al., 2002; Surr et al., 2002; Walden et al., 2000; Walden et al., 2004).

Specifically, directional microphones are designed to be more sensitive to sounds coming from the front than to sounds coming from other directions. Many laboratory tests showing the benefit of directional microphones were conducted with speech presented at 0° azimuth and noise from the sides or the back, with both speech and noise in close proximity to the hearing aid user. However, hearing aid users reported that the desired signal did not come from the front as much as 20% of the time in daily life (Walden et al., 2003). Studies have indicated that when speech comes from directions other than the front, the use of a directional microphone may have a positive, neutral, or negative effect on speech intelligibility, especially for low-level speech from the back (Kuk, 1996; Kuk et al., 2005; Lee et al., 1998; Ricketts et al., 2003).
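The front-weighted sensitivity described above can be illustrated with an ideal first-order cardioid pattern, whose relative gain is (1 + cos θ)/2. This is a textbook simplification, not the worn response of any particular hearing aid; real polar patterns on the head differ, as discussed later in this section:

```python
import math

def cardioid_gain_db(azimuth_deg):
    """Relative sensitivity of an ideal cardioid pattern (0 dB at 0 deg, front)."""
    theta = math.radians(azimuth_deg)
    linear = (1 + math.cos(theta)) / 2          # 1 at the front, 0 at the rear null
    return 20 * math.log10(max(linear, 1e-6))   # clamp to avoid log(0) at 180 deg

for az in (0, 45, 90, 135, 180):
    print(f"{az:3d} deg: {cardioid_gain_db(az):7.1f} dB")
```

The pattern attenuates a talker at 90° by about 6 dB and deeply suppresses sounds near the 180° null, which is why speech arriving from the back can fare worse in directional mode.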

91
Trends In Amplification Volume 8, Number 3, 2004

Two other possible reasons for the discrepancies between laboratory tests and field trials are the acoustic environments, and the type(s) and location(s) of noise encountered. Most laboratory tests are conducted in environments with a reverberation time of less than 600 milliseconds (Amlani, 2001). A wide range of reverberant environments that often have higher reverberation times may be encountered in daily life, however. As directional benefits diminish with increasing reverberation, hearing aid users may not be able to detect the benefits of directional microphones in their everyday life. In addition, the use of non-real-world noise in the laboratory (e.g., speech spectrum noise) at fixed, examiner-determined locations may have exaggerated the benefits of directional microphones.

The need for the user to switch between omni-directional microphones and directional microphones and the percentage of time/situations in which the use of a directional microphone is indicated in daily life can also partly account for the laboratory and field evaluation differences. Cord and colleagues (2002) reported that about 23% of the subjects stated that they left their hearing aids in the default omni-directional mode during the field trials because they did not notice much difference in the first few trials of directional microphones. Further, Cord and colleagues reported that subjects who actively engaged in switching between the omni-directional and directional microphones reported that they only used directional microphones about 22% of the time. This indicated that omni-directional microphones were sufficient in 78% of daily life, and subjects may not have had adequate directional microphone usage time to realize the benefit of directional microphones.

In a subsequent study, Cord and colleagues (2004) investigated the predictive factors differentiating the subjects who regularly switched between the omni-directional and directional modes and those who left the hearing aids in the default position. They reported that the two groups did not significantly differ in their degree or configuration of hearing loss, hearing aid settings, the directional benefits that they received when tested in the test booth, or the likelihood of encountering situations where bothersome background noise occurs. In other words, there is no established evidence that can be used to predict which hearing aid users will switch between the omni-directional and directional microphones versus those who will leave the hearing aids in the default omni-directional mode. In addition, previous studies also failed to predict directional benefits based on hearing aid users' audiometric testing results (Jespersen and Olsen, 2003; Ricketts and Mueller, 2000).

3.1.1.3. Updates on the Limitations of First-Order Directional Microphones
With the increase in directional microphone usage in recent years, the limitations of directional microphones have become more apparent to hearing aid engineers and the audiology community. These limitations include relatively higher internal noise, low-frequency gain reduction (roll-off), higher sensitivity to wind noise, and reduced ability to hear soft sounds from the back (Kuk et al., 2005; Lee et al., 1998; Ricketts and Henry, 2002; Thompson, 1999).

Two factors contribute to the problem of higher internal noise for dual-microphone directional microphones. First, the internal noise of modern omni-directional microphones is about 28 dB SPL. When two omni-directional microphones are combined to make a dual-microphone directional microphone in the delay-and-subtract process, the internal noise of the dual microphones is about 3 dB higher than the internal noise of omni-directional microphones (Thompson, 1999). This internal noise is normally masked by environmental sounds and is inaudible to hearing aid users, even in quiet environments. However, the problem arises when a hearing aid manufacturer tries to accommodate the second factor, low-frequency roll-off.

The low-frequency roll-off occurs because low-frequency sounds reaching the two omni-directional microphones are subtracted at similar phase. The amount of low-frequency roll-off is about 6 dB/octave for first-order directional microphones (Thompson, 1999; Ricketts, 2001). The perceptual consequence of the low-frequency roll-off is "tinny" sound quality and under-amplification of low-frequency sounds for hearing aid users with low-frequency hearing loss (Ricketts and Henry, 2002; Walden et al., 2004).

The common practice to solve this problem is to provide low-frequency equalization so that the low-frequency responses of the directional microphones are similar to those of the omni-directional microphones. Unfortunately, by matching the gain between omni-directional and directional


modes, the internal microphone noise is also amplified. Some hearing aid users may find this increase in microphone noise objectionable, especially in quiet environments (Lee and Geddes, 1998; Macrae and Dillon, 1996).

Two practices are adopted to circumvent this dilemma. First, instead of fully compensating for the 6-dB/octave low-frequency roll-off, hearing aid manufacturers may decide to provide partial low-frequency compensation (i.e., 3 dB/octave). Second, the consensus in the audiology community is to stay in the omni-directional mode in quiet environments. Field studies have also shown that subjects either preferred the omni-directional mode or showed no preference between the two modes in quiet environments (Mueller et al., 1983; Preves et al., 1999; Walden et al., 2004).

Directional microphones are also more susceptible to wind noise because they have a higher sensitivity to near-field signals. When the wind curves around the head, turbulence is created very close to the head. As directional microphones have higher sensitivity to sounds in the near field (i.e., sounds within 30 cm), the wind noise level picked up by the directional microphones can be as much as 20 to 30 dB higher than that picked up by an omni-directional microphone (Figure 4) (Kuk et al., 2000; Thompson, 1999). Because wind noise has dominant energy at low frequencies, the negative effect of wind noise is further exacerbated if the directional microphone has low-frequency gain compensation. Again, the strategy is to use the omni-directional microphone mode should wind noise be the dominant signal at the microphone input. In addition, some algorithms automatically reduce low-frequency amplification when wind noise is detected (Siemens Audiology Group, 2004).

Figure 4. Directional microphones (solid line) have higher outputs for wind noise than omni-directional microphones (dotted line). (Original data from Dillon et al., 1999. Reprinted with permission from Kuk et al., Hear Rev 7[9], 2000).

Although it is a design objective, a limitation of directional microphones is that they are less sensitive to speech or environmental sounds coming from the back hemisphere, especially at low levels (Kuk et al., 2005; Lee et


al., 1998). Directional microphones should be used with caution in environments in which audibility of sounds or warning signals from the back hemisphere is desirable.

3.1.1.4. Working with Directional Microphones
Despite these limitations, directional microphones are currently the most effective noise reduction strategy available in hearing aids (second only to personal FM or infrared systems). Several cautions should be exercised when clinicians fit directional microphones:

First, the performance of directional microphones decreases near reflective surfaces, such as a wall or a hand, or in reverberant environments. Hearing aid users therefore need to be counseled to move away from reflective surfaces or to converse at a place with less reverberation, if possible.

Second, the polar patterns of a directional hearing aid and the locations of the nulls when the hearing aid is worn on the user's head can be very different from an anechoic chamber measurement in which the directional microphone is free-hanging in space (Chung and Neuman, 2003; Neuman et al., 2002). Depending on the hearing aid style, the most sensitive angle of first-order directional microphones may vary from 30° to 45° for in-the-ear hearing aids to 90° for behind-the-ear hearing aids (Fortune, 1997; Neuman et al., 2002; Ricketts, 2000b). If possible, clinicians need to use the polar patterns measured when the hearing aids are worn in the ear so they can counsel hearing aid users to position themselves so that the most sensitive direction of their directional microphone points toward the desired signal and the most intense noise is located at the direction with the least sensitivity.

Third, clinicians need to be aware that some hearing aids automatically provide low-frequency compensation for the directional microphone mode. Others require the clinician to select the low-frequency compensation in the fitting software. Clinicians also need to determine whether low-frequency compensation for the directional microphone mode is appropriate given the hearing aid user's listening needs.

Fourth, Walden and colleagues (2004) recently reported that hearing aid users who actively switch between the omni-directional and directional microphones preferred the omni-directional mode more in relatively quiet listening situations. When noise existed, they preferred the omni-directional mode when the signal source was relatively far away. On the other hand, hearing aid users tended to prefer the directional mode in noisy environments, when speech came from the front, or when the signal was relatively close to them. Walden and colleagues also noted that counseling hearing aid users to switch to the appropriate microphone mode might increase the success rate of directional hearing aid fitting.

Fifth, although a number of studies have shown that children can also benefit from directional microphones to understand speech in noise (Condie et al., 2002; Gravel et al., 1999; Bohnert and Brantzen, 2004), the use of directional microphones that require manual switching in very young children should be approached with caution. Very young children who are starting to learn auditory, speech, and language skills need every opportunity to access auditory stimuli. As directional microphones attenuate sounds from the sides and back, they may reduce the incidental learning opportunities that help children acquire or develop speech and language skills. In addition, young children probably will not be able to switch between microphone modes, requiring parents or caregivers to assume this responsibility among their other care-giving duties.

As mentioned before, always listening in the directional mode may reduce the chance of detecting warning signals or soft speech from behind, which is crucial to a child's safety. The American Academy of Audiology recommended the use of directional microphones on children with caution, especially on young children who cannot switch between the directional and omni-directional modes (American Academy of Audiology, 2003).

3.1.2. Adaptive Directional Microphones
In the past, all directional microphones had fixed polar patterns; the azimuths of the nulls were kept constant. Noise in the real world, however, may come from different locations, and the relative locations of speech and noise may change over time. A directional microphone with a fixed polar pattern may not provide the optimal directional effect in all situations. With the advances in digital technology, directional microphones with variable polar patterns (i.e., adaptive directional microphones) are available in many digital hearing aids. These adaptive directional microphones can vary their polar patterns depending on the location of the noise. The goal is to always have maximum sensitivity to sounds from the frontal hemisphere and minimum sensitivity to sounds


from the back hemisphere in noisy environments (Kuk et al., 2002a; Powers and Hamacher, 2004; Ricketts and Henry, 2002). It should be noted that adaptive directional microphones are not the same as the switchless directional microphones implemented in some hearing aids. Adaptive directional microphones automatically vary their polar pattern, whereas switchless directional microphones automatically switch between the omni-directional and directional modes. However, most adaptive directional microphones on the market automatically switch both between polar patterns and between microphone modes.

3.1.2.1. How They Work
Most of the adaptive directional microphones implemented in commercially available hearing aids are first-order directional microphones. The physical construction of adaptive directional microphones is identical to that of dual-microphone directional microphones. The difference is that the signal processing algorithm of the adaptive directional microphones can take advantage of the independent outputs of the omni-directional microphones and vary the internal delay of the posterior microphone. As mentioned before, the polar pattern of a directional microphone can be changed by varying the ratio of the internal and external delays. Because the external delay (determined by the microphone spacing) is fixed after the hearing aid is manufactured, the ratio of the internal and external delays can be changed by varying the internal delay of the posterior microphone. When the ratio is changed from 0 to 1, the polar pattern is varied from bidirectional to cardioid (Powers and Hamacher, 2004).

Ideally, adaptive directional microphones should always adopt the polar pattern that places the nulls at the azimuths of the dominant noise sources. For example, the adaptive directional microphone should adopt the bidirectional pattern if the dominant noise source is located at the 90° or 270° azimuth and adopt the cardioid pattern if the dominant noise source is located at the 180° azimuth. In practice, different hearing aid manufacturers use different calculation methods to estimate the location of the dominant noise source and to vary the internal delay of the directional microphones accordingly. The actual location of the null may vary, depending on the calculation method and the existence of other noise and sounds in the environment.

The adaptive ability of the adaptive directional microphones is achieved in three to four steps:

1. signal detection and analysis;
2. determination of the appropriate operational mode (i.e., omni-directional mode or directional mode);
3. determination of the appropriate polar pattern; and
4. execution of the decision.

Table 1 summarizes the characteristics and strategies implemented in the adaptive directional microphones of some hearing aids. Notice that the determination of the operational mode is user-determined for the GNReSound Canta but automatic for the other hearing aids. Another point worth noting is that most adaptive directional microphone algorithms process the signal in a single band. More recently, multichannel adaptive directional hearing aids have been introduced. First introduced in the Oticon Syncro hearing aids, this technology allows different directional sensitivity patterns to occur within multiple channels at the same time.

It is apparent in Table 1 that hearing aid manufacturers use different strategies to implement their adaptive directional microphone algorithms. The following discussion explains the similarities and differences among the adaptive directional microphone algorithms from different hearing aid manufacturers or models.

a. Signal Detection and Analysis
In the signal detection and analysis unit, algorithms implemented in different hearing aids may have a different number of signal detectors to analyze different aspects of the incoming signal. Some of the most common detectors are the level detector, modulation detector, spectral content analyzer, wind noise detector, and front-back ratio detector, among others.

i. Level Detector
The level detector in adaptive directional microphone algorithms estimates the level of the incoming signal. Many adaptive directional microphones only switch to the directional mode when the level of the signal exceeds a certain predetermined level. At levels lower than the predetermined level, the algorithm assumes that the hearing aid user is in a quiet environment and the directional microphone is not needed. Thus, the hearing aid stays in the omni-directional mode.
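The delay-and-subtract behavior described above can be sketched numerically. The model below is illustrative, not any manufacturer's implementation; the 12-mm port spacing is an assumed value. The null sits where the internal delay cancels the acoustic (external) delay, i.e., where cos θ = -ratio:

```python
import math

C = 343.0        # speed of sound in air (m/s)
D = 0.012        # assumed 12-mm microphone port spacing (illustrative)
T_EXT = D / C    # external delay between the two microphone inlets

def response(freq_hz, azimuth_deg, ratio):
    """Magnitude of the delay-and-subtract output for a plane wave.
    ratio = internal/external delay: 0 gives bidirectional, 1 gives cardioid."""
    tau = T_EXT * (ratio + math.cos(math.radians(azimuth_deg)))
    angle = 2 * math.pi * freq_hz * tau
    return abs(1 - complex(math.cos(angle), -math.sin(angle)))

def null_azimuth_deg(ratio):
    """Null azimuth: the internal delay cancels the external delay
    when cos(theta) = -ratio."""
    return math.degrees(math.acos(-ratio))

print(null_azimuth_deg(0.0))   # 90.0  (bidirectional null at the side)
print(null_azimuth_deg(1.0))   # 180.0 (cardioid null at the rear)
```

Evaluating `response` at 250 and 500 Hz for a frontal sound shows the output roughly doubling per octave, which is the 6-dB/octave low-frequency roll-off discussed earlier.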


Table 1. The Characteristics of Adaptive Directional Microphones Implemented in Selected Commercially Available Hearing Aids*

Signal Detection and Analysis
Oticon Syncro: 1. Global modulation detector to determine the continuous signal-to-noise ratio; parallel processing to calculate the resultant signal-to-noise level in each possible microphone mode and polar configuration. 2. Level detector to determine the overall sound pressure level of the incoming signal. 3. Front-back ratio detector to estimate the location of the dominant input signal. 4. Wind noise detector to estimate the level of wind noise.
Phonak Perseo: 1. Level detector to determine the overall sound pressure level of the incoming signal. 2. Front-back ratio detector to estimate the location of the dominant input signal. 3. Analysis of the amplitude modulations, temporal fluctuation, and spectral center of gravity of the incoming signal to infer the presence of speech and noise.
ReSound Canta: Front-back ratio detector to estimate the location of the dominant sound source.
Siemens Triano: 1. Level detector to determine the overall sound pressure level of the incoming signal. 2. Wind noise detector to detect the presence and the level of wind noise.
Widex Diva: 1. Front-back ratio detector to estimate the location of the dominant input signal. 2. Noise classification to determine noise type: wind, circuit, or environmental. 3. Level detector to determine the overall sound pressure level of the incoming signal.

Decision Rules for Determining the Microphone Mode
Oticon Syncro: Surround Mode: 1. If the omni-directional mode in all 4 bands provides the highest SNR. 2. If the incoming signal is at soft to moderate levels with no or low background noise. 3. If the dominant speaker is from the back. 4. If strong wind is detected. Split-Directionality Mode: 1. If the omni-directional mode in the lowest frequency band and the directional mode in the upper 3 bands provide the highest SNR. 2. If the incoming signal is at a moderate level with some background noise. 3. If a moderate level of wind noise is detected. Full-Directionality Mode: 1. If the directional mode in all 4 bands gives the highest SNR. 2. If the incoming signal is high with background noise. 3. If no wind noise is detected.
Phonak Perseo: Omni Mode: speech only. Directional Mode: The decision rules for switching to directional microphones can be adjusted by the clinician on the basis of user priority for speech audibility or comfort. a. Audibility: the algorithm switches to directional microphone mode when speech in noise is detected, but not when speech alone or noise alone is detected. b. Comfort: the algorithm switches to directional microphone mode whenever there is noise, regardless of whether speech is present or not; if only speech is present, it is not activated.
ReSound Canta: User determined.
Siemens Triano: Omni Mode: 1. Analysis indicates that the primary signal is speech or wind noise. 2. The intensity of the incoming signal is below a predetermined level (e.g., 60 dB SPL); the trigger level varies depending on the hearing aid model. Directional Mode: 1. Minimal wind noise is detected. 2. Level of the incoming signal exceeds the predetermined level.
Widex Diva: Omni Mode: 1. If the incoming signal is from the front only. 2. If environmental noise is insignificant. 3. If wind noise is the dominating noise source. Directional Mode: 1. Minimal wind noise is detected. 2. Level of the incoming signal exceeds the predetermined level. 3. Background noise is detected.

Adaptation Speed for Omni-Directional and Directional Switch
Oticon Syncro: 2–4 sec, depending on the hearing aid's Identity setting (i.e., the life style of the hearing aid user) in the fitting software.
Phonak Perseo: Variable/programmable by the clinician, from 4–10 sec, based on "Audibility" or "Comfort" selections in the hearing aid fitting software.
ReSound Canta: Not applicable because the switch is user determined.
Siemens Triano: 6–12 sec, depending on the settings of the listening program.
Widex Diva: 5–10 sec, depending on the settings of the listening program.

Decision Rules for Determining the Polar Patterns
Oticon Syncro: The analyzer unit calculates the signal-to-noise level at each azimuth from 90° to 270° for each polar pattern across the four bands; the internal delays of the polar patterns that generate the best SNRs are adopted.
Phonak Perseo: The internal delay that yields the minimum power output from the directional microphone is adopted.
ReSound Canta: The internal delay that yields the minimum output from the directional microphone is adopted.
Siemens Triano: The weighted sum of a bidirectional and a cardioid pattern is calculated, and the internal delay that yields the minimum output (weighted sum) from the directional microphone is adopted.
Widex Diva: The analyzer unit receives the sounds from the front and back omni-directional microphones and adopts the internal delay that would give the lowest output at the directional microphone output.
All hearing aids: Any polar pattern with nulls between 90° and 270° is possible.

Adaptation Speed Between Different Polar Patterns
Oticon Syncro: 2 sec/90°; speed may vary depending on the hearing aid's Identity setting.
Phonak Perseo: 100 ms between polar patterns.
ReSound Canta: Analysis of the environment every 4 ms; changing of the polar pattern every 10 ms.
Siemens Triano: 50 ms/90°.
Widex Diva: Typically less than 5 sec.

Polar Pattern when Multiple Noise Sources Exist
Oticon Syncro: 1. Cardioid in a uniform noise field. 2. Multiband directionality enables different polar patterns to reduce the level of the multiple noise sources if they have different frequency contents.
Phonak Perseo: Cardioid.
ReSound Canta: Hypercardioid.
Siemens Triano: Hypercardioid.
Widex Diva: Hypercardioid.

Low Frequency Equalization
Oticon Syncro: Automatic.
Phonak Perseo: Programmable in fitting software via "Contrast" feature.
ReSound Canta: Programmable in fitting software.
Siemens Triano: Automatic.
Widex Diva: Automatic for each polar pattern.

Information Source(s)
Oticon Syncro: Oticon, 2004; Flynn, 2004, personal communication.
Phonak Perseo: www.Phonak.com (a); Ricketts and Henry (2002); Fabry (2004), personal communication.
ReSound Canta: Groth (2004), personal communication.
Siemens Triano: Powers (2004), personal communication; Powers and Hamacher (2004).
Widex Diva: Kuk et al., 2002a; Kuk (2004), personal communication.

Clinical Verification
Oticon Syncro: Flynn (2004): Compared to the first-order fixed directional microphone implemented in Adapto, Syncro's Full-Directionality mode combined with its noise reduction algorithm yielded about 1–2 dB better SNR-50s for hearing aid users with multiple broadband noise sources in the back hemisphere; it is unclear how much of the improvement is solely generated by the adaptive directional microphone.
Phonak Perseo: Unavailable; see text for the evaluation of the first-order adaptive directional microphone implemented in the Phonak Claro.
ReSound Canta: Unavailable.
Siemens Triano: Bentler et al. (2004a): the hybrid second-order adaptive directional microphone improved the SNR-50s of hearing aid users by 4 dB; no significant difference in SNR-50s between the first-order and the hybrid second-order adaptive directional microphones. Ricketts et al. (2003): significant benefit was observed using the second-order directional microphone compared to its fixed-directionality mode in moving noise.
Widex Diva: Valente and Mispagel (2004): Compared to the omni-directional mode, the adaptive directional microphone improved SNR-50s by 7.2 dB when a single noise source was located at 180°; the improvement in SNR-50s decreased to 5.1 dB and 4.5 dB when noise was presented at 90°+270° and 90°+180°+270°, respectively.

*These hearing aids are selected to demonstrate the range and the differences in implementation methods of adaptive directional microphone algorithms in commercially available hearing aids. SNR = signal-to-noise ratio.


ii. Modulation Detector
The modulation detector is commonly used in hearing aids to infer the presence or absence of speech sounds in the incoming signal and to estimate the SNR of the incoming signal. The rationale is that the amplitude of speech has a constantly varying envelope with a modulation rate between 2 and 50 Hz (Rosen, 1992), with a center modulation rate of 4 to 6 Hz (Houtgast and Steeneken, 1985). Noise, on the other hand, usually has a modulation rate outside of this range (e.g., ocean waves at the beach may have a modulation rate of around 0.5 Hz) or it occurs at a relatively steady or unmodulated level (e.g., fan noise).

The speech modulation rate of 4 to 6 Hz is associated with the closing and opening of the vocal tract and the mouth. Speech in quiet may have a modulation depth of more than 30 dB, which reflects the difference between the softest consonant (i.e., voiceless /Θ/) and the loudest vowel (i.e., /u/). Modulation depth is the level difference between the peaks and troughs of a waveform plotted in the amplitude-time domain (Figure 5). If a competing signal (noise or speech babble) is present in the incoming signal, the modulation depth decreases. Because the amount of amplitude modulation normally decreases with an increase in noise level, the signal detection and analysis unit uses the modulation depth of signals with modulation rates centered at 4 to 6 Hz to estimate the SNR of the incoming signal: the greater the modulation depth, the higher the SNR. Notice that if the competing signal (noise) is a single talker or has a modulation rate close to that of speech, the signal detection and analysis unit cannot differentiate between the desired speech and the noise.

The modulation detector is used in the adaptive directional microphone algorithms of the Oticon Syncro and Phonak Perseo digital hearing aids. However, the results of the modulation detectors are used to make different decisions in each algorithm.

The modulation detector of the Perseo analyzes the modulation pattern and the spectral center of gravity to estimate the presence or absence of speech and noise. An analog to the spectral center of gravity is the center of gravity of an object. The difference is that the center of gravity refers to the weight center of the object, whereas the spectral center of gravity refers to the frequency center of a sound. The result of the modulation detector in the Perseo is then combined with the priority setting (i.e., Audibility or Comfort) and used to determine the appropriate operational mode for the instance.

Figure 5. The modulation detector is composed of a maxima (thick line) and a minima follower (thin line). The maxima follower estimates the level of speech and the minima follower estimates the level of noise. The difference between the two allows the estimation of signal-to-noise ratio in the frequency channel.
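The maxima/minima-follower scheme of Figure 5 can be sketched in a few lines. The slew rates and the toy envelope below are assumed values for illustration, not parameters of any commercial detector:

```python
def track_envelope(envelope_db, slew_db_per_step=0.01):
    """Maxima follower (fast attack, slow release) and minima follower
    (fast release, slow attack), as sketched in Figure 5."""
    mx = mn = envelope_db[0]
    max_track, min_track = [], []
    for e in envelope_db:
        mx = max(e, mx - slew_db_per_step)   # peaks: jump up, leak down slowly
        mn = min(e, mn + slew_db_per_step)   # dips: drop down, creep up slowly
        max_track.append(mx)
        min_track.append(mn)
    return max_track, min_track

# Toy envelope at 1000 samples/s: ~4 Hz speech-like bursts 15 dB above a
# 50-dB noise floor (125 samples on, 125 samples off).
env = [65.0 if (i // 125) % 2 == 0 else 50.0 for i in range(4000)]
mx, mn = track_envelope(env)
snr_estimate_db = mx[-1] - mn[-1]   # close to the true 15-dB modulation depth
```

The maxima track hovers near the speech peaks and the minima track near the noise floor, so their difference approximates the modulation depth and hence the SNR in that channel.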


The Syncro, on the other hand, uses the results of the modulation detector to calculate the SNR at the output of the directional microphone. The signal processing algorithm is programmed to seek the operational mode (i.e., the Surround, Split-, or Full-Directionality Modes) and the polar patterns that maximize the SNRs of the four frequency bands at the microphone output (Flynn, 2004a). The Syncro defines speech as signals with modulation rates ranging from 2 to 20 Hz.

iii. Wind Noise Detector
The wind noise detector is used to detect the presence and the level of wind noise. Although the exact mechanisms used in the algorithms from different manufacturers are unknown, it is possible that the wind noise detectors make use of several physical characteristics of wind noise and hearing aid microphones to infer the presence or absence of wind. First, directional microphones are more sensitive to sounds coming from the near field than to sounds coming from the far field, whereas omni-directional microphones have similar sensitivity to sounds coming from the near field and the far field. To a dual-microphone directional microphone, the near field refers to sounds coming from a distance of within 10 times the microphone spacing; the far field refers to sounds coming from a distance of more than 100 times the microphone spacing. Sounds coming from a distance of between 10 and 100 times the microphone spacing have the properties of both the near field and the far field (Thompson, 2004, personal communication).

When a sound comes from the far field, the outputs generated at the two omni-directional microphones that form the directional microphone are highly correlated. If the outputs are 100% correlated, the peaks and valleys of the waveforms from the two microphone outputs should coincide when an appropriate amount of delay is applied to one of the microphone outputs during the cross-correlation process. The amount of delay applied depends on the direction of the sound. In other words, the outputs of the two omni-directional microphones have a constant relationship and similar amplitude for a sound coming from the far field.

Because the microphone outputs are highly correlated for sounds from the far field, when the microphone outputs are delayed and subtracted to form a directional microphone, the amplitude of the signal is reduced if the signal comes from the sides or the back hemisphere and is not much affected if the signal comes from the front hemisphere. In addition, the directional microphone exhibits a 6-dB/octave roll-off in the low-frequency region for sounds coming from any direction. Assuming that the frequency response of the hearing aid is compensated for the low-frequency roll-off, the output of the omni-directional microphone mode is comparable to the output of the directional microphone mode for sounds coming from the far field (Edwards, 2004, personal communication; Thompson, 2004, personal communication).

When wind is blowing, turbulence and eddies are generated close to the head. Wind noise is therefore a sound from the near field. For a sound coming from the near field, the outputs of the two omni-directional microphones that form a directional microphone are poorly correlated. When the outputs of the omni-directional microphones are delayed and subtracted, minimal reduction in amplitude results no matter which direction the sounds are coming from. In fact, the wind noise entering the two microphones is added, further increasing the sensitivity of the directional microphone to wind noise, especially at high frequencies. In addition, the output of the directional microphone does not exhibit a 6-dB/octave roll-off in the low-frequency region; that is, the frequency response is similar for the directional and the omni-directional modes. Assume that it is the same hearing aid with low-frequency compensation; now, the output of the directional microphone is much higher than the output of the omni-directional microphone for this near-field sound because of the increased sensitivity and the low-frequency gain compensation (Edwards, 2004, personal communication; Thompson, 2004, personal communication).

Although the exact mechanisms of wind noise detectors are proprietary to each hearing aid manufacturer, it is possible that one characteristic that the wind noise detector monitors is the difference between the outputs of the omni-directional and directional microphones (Edwards, 2004, personal communication). Using the example with equalized low-frequency gain, the outputs of the omni-directional and directional microphones are comparable for sounds coming from the far field, but the output of the directional microphone is much higher than that of the omni-directional microphone for sounds coming from the near field (wind noise). On the other hand, if the low-frequency gain is not equalized, the out-

99
Trends In Amplification Volume 8, Number 3, 2004

put of the directional microphone is lower than wind noise detector to make use of this acoustic
the output of the omni-directional microphone for phenomenon can be: wind noise is present in the
sounds coming from the far field, but the output microphone output if the correlation coefficient
of the directional microphone is higher than the is less than 20% at the low-frequency region and
output of the omni-directional microphone for less than 35% at the high-frequency region.
sounds from the near field. When wind noise is detected, many hearing
Another possible strategy to detect wind noise aids with adaptive directional microphones either
is to use the correlation coefficient to infer the remain at or switch to the omni-directional mi-
presence of wind noise. The correlation coeffi- crophone mode to reduce annoyance of the wind
cient can be determined by applying several de- noise or to increase the audibility of speech, or
lays to the output of one of the omni-direction- both (Kuk et al., 2002b; Oticon, 2004a, Siemens
al microphones and calculating the correlation Audiology Group, 2004).
coefficient between the outputs of the two mi-
crophones for each delay time. As mentioned iv. Front-Back Detector
previously, if the microphone outputs are corre- Some adaptive directional microphone algorithms
lated 100%, the peaks and valleys of the wave- also have a front-back ratio detector that detects
forms coincide perfectly. If the peaks and valleys the level differences between the front and back
of the waveform are slightly mismatched in am- microphones and estimates the location of domi-
plitude or phase, the outputs are said to have a nant signals (Fabry, 2004, personal communica-
lower correlation coefficient. For sounds in the tion, Groth, 2004, personal communication, Kuk,
near field, the correlation coefficient can be 2004, personal communication; Oticon, 2004a).
close to 0%. For example, the front-back detector of Oticon
The wind noise detector can make inference Syncro combines the analysis results of the front-
based on the degree of correlation between the back ratio detector and the modulation detector
outputs of the two omni-directional microphones. to determine if the dominant speech is located at
If the outputs have a high correlation coefficient, the back. If a higher modulation depth is detected
the wind noise detector infers that wind noise is at the output of the back microphone, the algo-
absent. If the outputs have a low correlation co- rithm would remain at or switch to the omni-di-
efficient, the algorithm infers that wind noise is rectional mode (Oticon, 2004a).
present (Thompson, 2004, personal communica-
tion; Siemens Audiology Group, 2004). According b. Determination of Operational Mode
to Oticon, the wind noise detectors in the Syncro As mentioned, the automatic switching between
hearing aids detect the uncorrelated signals be- the omni-directional and directional mode, strict-
tween the microphone outputs that are consistent ly speaking, can be classified as a different hearing
with the spectral pattern of wind noise to infer aid feature in addition to adaptive directional mi-
the presence or absence of wind noise (Flynn, crophones. Most hearing aids, however, have in-
2004, personal communication). corporated the automatic switching function into
In addition, it is possible that a wind noise de- their adaptive directional microphone algorithms.
tector can set different correlation criteria for the Every hearing aid has its own set of decision
coefficients at low- and high-frequency regions rules to determine whether the hearing aid should
for wind noise reduction. High-frequency eddies operate in the omni-directional mode or the direc-
are normally generated by finer structures around tional mode for the instance (Table 1). Some hear-
the head (e.g., pinna, battery door of an in-the- ing aids have simple switching rules. For example,
ear hearing aid) and low-frequency eddies are the switching is user-determined in GNReSound
generated by larger structures (e.g., the head and Canta; whereas, Siemens Triano switches to the di-
the shoulders). As the finer structures are much rectional mode when the level of the incoming sig-
closer to the hearing aid microphones (in the near nal reaches a predetermined level.
field) and the larger structures are further away Other adaptive directional microphone algo-
from the microphone (in the mixed field), high- rithms take more factors into account in the deci-
frequency sounds tend to have a lower correla- sion-making process, such as the level of the wind
tion coefficient than low-frequency sounds at the noise, the location of the dominating signal, and
microphone output (Thompson, 2004, personal the level of environmental noise (Kuk et al.,
communication). A sample decision rule for the 2002a; Oticon, 2004a). The omni-directional

100
Chung Challenges and Recent Developments in Hearing Aids: Part I
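The correlation-based wind detection and the kind of multi-factor switching rule described above can be sketched as follows. This is a minimal illustration, not any manufacturer's implementation: the window length, candidate delay range, and the 60-dB level threshold are assumptions (the text only notes that such thresholds typically fall between 50 and 68 dB SPL), and the 20%/35% correlation criteria are the sample values quoted above.

```python
import numpy as np

def max_normalized_xcorr(front, back, max_delay):
    """Peak normalized cross-correlation between the two omni-directional
    microphone outputs, searched over a range of candidate delays."""
    best = 0.0
    for d in range(-max_delay, max_delay + 1):
        a = front[max(0, d):len(front) + min(0, d)]
        b = back[max(0, -d):len(back) + min(0, -d)]
        denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
        if denom > 0.0:
            best = max(best, float(np.sum(a * b) / denom))
    return best

def wind_detected(front_lo, back_lo, front_hi, back_hi, max_delay=8):
    """Sample rule from the text: wind noise is inferred when correlation
    falls below 20% in the low band and below 35% in the high band."""
    return bool(max_normalized_xcorr(front_lo, back_lo, max_delay) < 0.20 and
                max_normalized_xcorr(front_hi, back_hi, max_delay) < 0.35)

def choose_mode(wind, dominant_at_back, overall_level_db, threshold_db=60.0):
    """Illustrative omni/directional decision combining the detector outputs."""
    if wind or dominant_at_back or overall_level_db < threshold_db:
        return "omni"
    return "directional"
```

A far-field talker yields highly correlated microphone outputs at some delay, so `wind_detected` stays false and the directional mode remains available; turbulence yields near-zero correlation at every candidate delay and drives the decision to the omni-directional mode.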

The omni-directional mode is often chosen if wind noise dominates the microphone input, if the front-back ratio detector indicates that the dominant signal is located at the back of the hearing aid user, or if the level of the environmental noise or overall signal is below a predetermined level. The predetermined level is usually between 50 and 68 dB SPL, depending on the particular algorithm (Kuk, 2004, personal communication; Oticon, 2004a; Powers, 2004, personal communication).

Some adaptive directional microphone algorithms have more complex decision rules to determine the switching between the omni-directional and the directional mode. For example, the switching rules of Phonak Perseo can be changed by the clinician based on the hearing aid user's preference for audibility of speech (Audibility) or listening comfort (Comfort) (Table 1). If audibility is chosen, the hearing aid switches to the directional mode only when speech-in-noise is detected in the incoming signal. If speech-only or noise-only is detected, the hearing aid remains in the omni-directional mode. However, if comfort is chosen, the hearing aid switches to the directional mode whenever noise is detected in the incoming signal. This means that the hearing aid remains in the omni-directional microphone mode only if speech-only is detected.

The adaptive directional microphone algorithm implemented in Oticon Syncro has the most complex decision rules (Table 1). Syncro operates in three distinct directionality modes, namely, surround mode (i.e., omni-directional in all four bands), split-directionality mode (i.e., omni-directional at the lowest band and directional at the upper three bands), and full-directionality mode (i.e., directional in all four bands).

In the decision-making process, the algorithm uses the information from the level detector and the modulation detector in each of the frequency bands as well as two alarm detectors (i.e., the front-back ratio detector and the wind noise detector). The information provided by the alarm detectors takes precedence in the microphone mode selection process. As mentioned before, the signal processing algorithm implemented in Syncro seeks to maximize the SNR at the directional microphone output. Specifically, the algorithm stays in the surround mode if the omni-directional mode provides the best SNR at the microphone output, if the level of the incoming signal is soft to moderate, if the dominant speaker is located at the back, or if strong wind is detected.

The algorithm switches to the split-directionality mode if speech is detected in background noise, if the omni-directional mode at the lowest band and the directional mode in the upper three bands yield the highest SNR, if the incoming signal is at a moderate level, or if a moderate amount of wind noise is detected. The algorithm switches to the full-directionality mode if speech from the front is detected in a high level of background noise, if the SNR is the highest with all four bands in the directional mode, and if no or only a low level of wind noise is detected (Flynn, 2004a).

c. Determination of Polar Pattern(s)

After the adaptive directional microphone algorithm decides that the hearing aid should operate in the directional mode, it needs to decide which polar pattern it should adopt at that instant. The common rule for all the adaptive directional microphone algorithms is that the polar pattern always has the most sensitive beam pointing to the front of the hearing aid user. To determine the polar pattern, many algorithms adjust the internal delay so that the resultant output or power is minimized (Fabry, 2004, personal communication; Groth, 2004, personal communication; Powers and Hamacher, 2004; Kuk et al., 2002). Oticon Syncro, on the other hand, uses the estimated SNR to guide the decision process for choosing the polar patterns at the four frequency bands. Specifically, the adaptive directional microphone algorithm of Syncro calculates the SNR of each polar pattern with nulls from 180° to 270° at 1° intervals in the four frequency bands. The polar patterns that yield the highest SNR at the directional microphone output at each frequency band are chosen. As most of the adaptive directional microphones do not limit their calculations to bidirectional, hypercardioid, supercardioid, or cardioid patterns, they are capable of generating polar patterns with nulls at any angle(s) from 90° to 270°.
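The "adjust the internal delay until the output power is minimized" rule described above can be sketched for a two-microphone, free-field case. This is a simplified illustration using integer-sample delays and a single noise source; real algorithms use fractional delays, operate per frequency band, and constrain the null to the back hemisphere.

```python
import numpy as np

def steer_null(front, back, max_tau):
    """Return the internal delay (in samples) applied to the back
    microphone that minimizes the delay-and-subtract output power,
    which places the polar-pattern null on the dominant noise source."""
    powers = [np.mean((front - np.roll(back, tau)) ** 2)
              for tau in range(max_tau + 1)]
    return int(np.argmin(powers))
```

For a noise source directly behind the user, the back microphone receives the wavefront first; cancelling it requires an internal delay equal to the external mic-to-mic travel time, which steers the null to 180°.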

d. Execution of Decision

After the algorithm decides which operational mode or which polar pattern it needs to adopt, the appropriate action is executed. A very important parameter in this execution process is the time constants of the adaptive directional microphone algorithm. Similar to the attack-and-release times in compression systems, each adaptive directional microphone algorithm has adaptation/engaging times and release/disengaging times to govern the duration between the changes in microphone modes or polar pattern choices. Adaptive directional microphone algorithms implemented in different hearing aids have one set of time constants to switch from omni-directional microphones to directional microphones and another set of time constants to adapt to different polar patterns (Table 1). The adaptation time for the algorithms to switch from the omni-directional to the directional mode generally varies from 4 to 10 seconds, depending on the particular algorithm. The adaptation time for an algorithm to change from one polar pattern to another is usually much shorter. It varies from 10 milliseconds to less than 5 seconds, depending on the particular algorithm.

One feature of the adaptive directional microphones worth noting is that their adaptation time varies depending on other settings in the hearing aid listening program. For example, the time constants of Siemens Triano and Widex Diva change with the listening program, whereas the time constants of Phonak Perseo change with the Audibility or Comfort setting. A set of faster time constants is adopted if audibility is chosen as the priority of the hearing aid use, and a set of slower time constants is used if comfort is chosen to increase listening comfort.

In addition, the time constants of Oticon Syncro change with the Identity setting of the hearing aid program. The Identity setting is chosen by the clinician during the hearing aid fitting session based on the degree of hearing loss, age, lifestyle, amplification experience, listening preference, and etiology of hearing loss of the hearing aid user. It controls the time constants for the adaptive directional microphones as well as many variables in the compression and noise reduction systems. In general, faster time constants are adopted if the Identity is set at Energetic and slower time constants are adopted if the Identity is set at Calm.

Unlike the adaptive release times implemented in compression systems, none of the time constants of the adaptive directional microphone algorithm varies in response to changes in the characteristics of the incoming acoustic signal. In other words, the time constants of the adaptive directional microphones are preset with hearing aid settings, but they do not vary with the acoustic environment. Further, the time constants of the adaptive directional microphones are not directly adjustable by the clinician. They are preset with different programming/priority choices but not as a stand-alone parameter in the fitting software.

3.1.2.2. Verification and Limitations

a. Clinical Verification

Several researchers have conducted studies to compare the performance of single-band adaptive directional microphones with regular directional microphones with fixed polar patterns (Bentler et al., 2004b; Ricketts and Henry, 2002; Valente and Mispagel, 2004). Several inferences can be drawn from these research studies:

1. The adaptive directional microphones are superior to the fixed directional microphones if noise comes from a relatively narrow spatial angle (Ricketts and Henry, 2002).

2. The adaptive directional microphones perform similarly to the fixed directional microphones if noise sources span a wide spatial angle or multiple noise sources from different azimuths coexist (Bentler et al., 2004a). According to Ricketts (personal communication, 2004), when multiple noise sources from different azimuths coexist, a single noise source needs to be at least 15 dB greater than the total level of all other noise sources to obtain a measurable adaptive advantage in at least two hearing aids.

3. When multiple noise sources from different azimuths coexist or the noise field is diffuse, adaptive directional microphones resort to a fixed cardioid or hypercardioid pattern (Table 1). Thus, the relative performance of the adaptive and fixed directional microphones in a diffuse field and for noise from a particular direction depends on the polar pattern of the fixed directional microphone. For example, compared to a fixed directional microphone with a cardioid pattern, the adaptive directional microphone yields better speech understanding if the noise comes from the side (i.e., it changes to a bidirectional pattern) and yields similar speech understanding if the noise comes from the back (i.e., it changes to the cardioid pattern) (Ricketts and Henry, 2002).

4. Adaptive directional microphones have not been reported to be worse than the fixed directional microphones.

5. Subjective ratings using the Abbreviated Profile of Hearing Aid Benefit (APHAB) scales have shown higher ratings for the adaptive directional microphones compared with the omni-directional microphones after a 4-week trial in real-life environments (Valente and Mispagel, 2004).

Oticon has conducted a clinical trial to compare the performance of its hearing aids with a single-band, first-order, fixed directional microphone (Adapto) and a multiband first-order adaptive directional microphone with the noise reduction and compression system active (Syncro) (Flynn, 2004b). The SNRs-50 of hearing aid users were tested when speech was presented from 0° azimuth and uncorrelated broadband noises were presented from four locations in the back hemisphere. Flynn reported approximately a 1-dB improvement in the SNR-50 of hearing aid users between the omni-directional modes of the two hearing aids and approximately 2 dB of improvement between the directional modes of the two hearing aids. However, as the noise reduction algorithm was active for the multiband adaptive directional microphones and the two hearing aids have different compression systems, it is unclear how much of the differences were solely due to the differences in the directional microphones.

b. Time Constants

The optimum adaptation speeds between the omni-directional and directional mode or among different polar patterns have not been systematically explored. As noted before, adaptive directional microphone algorithms implemented in different hearing aids have different speeds of adaptation for switching the microphone modes and the polar patterns. Some take several seconds to adapt and others claim to adapt almost instantaneously (i.e., in 4 to 5 milliseconds) (Kuk et al., 2000; Powers and Hamacher, 2002; Ricketts and Henry, 2002; Groth, 2004, personal communication).

Similar to the attack-and-release times of a compression system, there are pros and cons associated with having a faster or a slower adaptation time for the adaptive directional microphones. For example, a system with a fast adaptation time can change the polar pattern for maximum noise reduction when the head moves or when a noise source is traveling in the back hemisphere of the hearing aid user. The fast adaptation may be overly active, however, and it may change its polar pattern during the stressed and unstressed patterns of natural speech when a competing speaker and a noise source are located at different locations in the back hemisphere of the hearing aid user. The advantage of a slower adaptation time is that it does not act on every small change in the environment, yet it may not be able to quickly and effectively attenuate a moving noise source, for example, a truck moving from one side to the other behind the hearing aid user.

3.1.3. Second-Order Directional Microphones

Although first-order directional microphones generally provide 3 to 5 dB of improvement in SNR for speech understanding in real-world environments, people with hearing loss often experience a much higher degree of SNR-loss. This means that the benefits provided by first-order directional microphones are insufficient to close the gap between the speech understanding ability of people with hearing loss and that of people with normal hearing in background noise. This limitation prompted the development of a number of instruments, such as second-order directional microphones and array microphones, to provide higher directionality.

Second-order directional microphones are composed of three matched omni-directional microphones, and they usually have a higher directivity index than the first-order directional microphones. The only commercially available second-order directional microphones to date are implemented in the behind-the-ear Siemens Triano hearing aids. According to Siemens, Triano is implemented with a first-order directional microphone for frequencies below 1000 Hz and a second-order directional microphone above 1000 Hz (Figure 6) (Powers and Hamacher, 2002).

The reason for this particular setup is that the second-order directional microphone is implemented using delay-and-subtract processing, which yields higher internal microphone noise and a low-frequency roll-off of 12 dB/octave. The steep low-frequency roll-off makes it difficult to amplify the low-frequency region, and any effort to compensate for the roll-off would exacerbate the amount of internal noise.

The first-order directional microphone is used to circumvent the problem by keeping the internal noise manageable. It can also preserve the ability of the hearing aid to provide low-frequency amplification. The second-order directional microphone is used to take advantage of its higher directionality. The directional microphones of Triano can also be programmed to have adaptive directionality.
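The different low-frequency slopes of the two branches can be illustrated numerically. The sketch below builds first- and second-order delay-and-subtract outputs from simulated on-axis signals and measures how their output levels grow with frequency; the 1-sample delays and 16-kHz rate are arbitrary illustrative choices, not Triano's actual parameters.

```python
import numpy as np

TAU = 1  # internal delay in samples (illustrative)

def first_order(front, back):
    """Single delay-and-subtract stage (6 dB/octave low-frequency slope)."""
    return front - np.roll(back, TAU)

def second_order(front, mid, back):
    """Two cascaded differential stages across three matched mics
    (12 dB/octave low-frequency slope)."""
    return first_order(first_order(front, mid), first_order(mid, back))

def on_axis_level(f, fs, order_fn, n_mics):
    """RMS output for an on-axis sinusoid; each mic is one sample later."""
    t = np.arange(fs)
    s = np.sin(2 * np.pi * f * t / fs)
    mics = [np.roll(s, k) for k in range(n_mics)]
    return np.sqrt(np.mean(order_fn(*mics) ** 2))
```

Doubling the frequency roughly doubles the first-order output (6 dB/octave) but quadruples the second-order output (12 dB/octave), which is why compensating the second-order branch at low frequencies would pull up so much internal noise.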

3.1.3.1. Verification and Limitations

Bentler and colleagues (2004a) measured a random sample of behind-the-ear Triano hearing aids with the hybrid second-order directional microphones and reported that the free-field average directivity index (DI-a) values ranged from 6.5 to 7.8 dB. The DI-a values were calculated from the sum-average of the DI values from 500 to 5000 Hz without frequency weighting. When the Triano hearing aids were worn on a Knowles Electronics Manikin for Acoustic Research (KEMAR), the DI-a values ranged from 4.5 to 6.0 dB.

Figure 6. The implementation of a commercially available hybrid second-order directional microphone. The outputs of the front and back microphones are processed to form a first-order directional microphone (1 ord. dir. mic.), the output of which is low-pass filtered. The outputs of all three microphones are processed to form a second-order directional microphone (2 ord. dir. mic.), the output of which is then high-pass filtered. The low- and high-pass filtered signals are subsequently summed and processed by other signal processing algorithms in the hearing aid. (Reprinted with permission from Powers and Hamacher, Hear J 55[10], 2002.)

Several research studies have investigated the effectiveness of the hybrid second-order directional microphones in stationary and moving noises. Bentler and colleagues (2004a) compared the speech understanding performance of subjects with normal hearing and subjects with an average of 30 to 65 dB HL hearing loss from 250 to 8000 Hz. Subjects with normal hearing listened to the Hearing in Noise Test (HINT; Nilsson et al., 1994) in stationary and moving noises to serve as the standard. Subjects with hearing loss were fit with a pair of Triano hearing aids and a pair of another hearing aid with a first-order adaptive directional microphone by the same manufacturer. They listened to the HINT sentences in omni-directional and directional modes in a stationary noise field and in omni-directional, directional, and adaptive directional modes in a moving noise field.

The results indicated that subjects with hearing loss exhibited an aided SNR-loss of 4 dB in stationary noise and slightly less than 5 dB in moving noise. The performance of subjects with hearing loss showed roughly 4 dB of SNR improvement in stationary noise by using Triano compared with the hearing aids with first-order directional microphones. Subjects with hearing loss also obtained a little more than 3 dB of SNR improvement using the hearing aids with the first-order adaptive directional microphone and approximately 4 dB of improvement using the Triano adaptive directional microphone.
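For reference, the directivity index of an idealized first-order pattern, and the unweighted DI-a averaging described above, can be computed as in the sketch below. This is textbook free-field math under a spherically isotropic noise assumption, not any manufacturer's measurement procedure.

```python
import numpy as np

def di_db(a, n=20000):
    """3-D directivity index (dB) of the first-order polar pattern
    a + (1 - a)cos(theta): on-axis power over the power averaged
    across the sphere (midpoint-rule integration over theta)."""
    theta = (np.arange(n) + 0.5) * np.pi / n
    h2 = (a + (1.0 - a) * np.cos(theta)) ** 2
    mean_h2 = 0.5 * np.sum(h2 * np.sin(theta)) * (np.pi / n)
    return 10.0 * np.log10(1.0 / mean_h2)  # on-axis response is 1

def di_a(di_values_db):
    """Unweighted sum-average of per-frequency DIs (the DI-a metric)."""
    return sum(di_values_db) / len(di_values_db)
```

`di_db(0.25)` gives about 6.0 dB for a hypercardioid and `di_db(0.5)` about 4.8 dB for a cardioid, consistent with the 6-dB figure commonly quoted for an ideal hypercardioid.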

No significant differences were found in subjects' performance between Triano and the hearing aid with first-order directional microphones in either noise field.

Ricketts and colleagues (2003) reported that the hybrid second-order adaptive directional microphones yielded approximately 2 dB lower SNR-50 than the same hearing aid at the fixed directional microphone mode in the presence of a moving noise source in the back hemisphere. They also reported that the fixed and adaptive directional microphones generated about 5.7 dB and 7.6 dB lower SNRs-50 than the omni-directional mode of the hearing aid.

3.1.4. Microphone-Matching Algorithms

Most directional microphones implemented in digital hearing aids use the dual-microphone design because of its flexibility for adaptive signal-processing options. The challenge is that the sensitivities and phases of the two omni-directional microphones forming the dual-microphone directional microphones have to be matched to within 0.02 dB and within 1°, respectively, to ensure good directional performance (Schmitt, 2003, personal communication).

The matching of the omni-directional microphones for directional microphone application requires several steps. The first matching is conducted in the factory where the microphone is manufactured. The frequency and phase responses of the omni-directional microphones from the same lot are measured and matched for the directional microphone application. According to Thompson (2004, personal communication), a simple predictable relationship exists between the sensitivity and the phase of the microphone across frequency regions. Therefore, if two omni-directional microphones are matched for four parameters, they should be sufficiently matched for directional microphone applications. These four parameters are the phase at a low frequency (e.g., 250 Hz), the sensitivity at a mid frequency (e.g., 1000 Hz), and the peak frequency and amplitude of the microphone resonance (normally at 5000 to 6000 Hz).

The second matching is performed when the directional microphone is built into a hearing aid in the manufacturing facility. This procedure is accomplished by measuring the frequency response and the phase of the two omni-directional microphones and using a digital filter(s) to correct the discrepancies.

Despite these matching processes, the frequency responses and phase relationship of the two omni-directional microphones may drift apart when the microphones are exposed to extreme temperature changes, humidity, vibration, or some other environmental factors (Dittberner, 2003; Kuk et al., 2000; Matsui and Lemons, 2001). Microphone drift can also occur in the natural aging process of the microphones. Matsui and Lemons (2001) reported an average of 1 dB decrease in the directivity index when 13 dual-microphone directional microphones were stored in an office for just 3 months. Therefore, some manufacturers use aged omni-directional microphones to improve the performance stability of the resulting directional microphones.

Microphone drift can happen in both the frequency and the phase domains. It poses a challenge in the maintenance of directional effect over time. In addition, even if the characteristics of the microphones drift by the same amount in the high- and low-frequency regions (e.g., 1–2 dB), a more degrading effect is often created in the directivity index of the low-frequency region than that of the high-frequency region. Figure 7 illustrates the effects of frequency drift at low- and high-frequency regions and two examples of the effects of phase drift.

Figure 7. (A) The effect of sensitivity mismatch at 1000 Hz between the two omni-directional microphones that form a directional microphone. (Reprinted with permission from Edwards et al., Hear J 51[8], 1998.) (B) The effect of sensitivity mismatch at 250 Hz. (C) The effects of phase mismatch at 250 Hz. (Figures B and C reprinted with permission from Kuk et al., Hear Rev 7[9], 2000.)

When a directional microphone with a hypercardioid pattern has perfectly matched frequency responses, it has a directivity index of 6 dB and its polar pattern has two nulls at about 110° and 250° (Figure 7A). If the frequency responses of the two omni-directional microphones have a mismatch of 1 or 2 dB occurring at 1000 Hz, the directivity index is reduced to 4.4 dB or 2.7 dB, respectively (Figure 7A) (Edwards, 1998). However, if a much smaller mismatch (0.25 dB) occurs at 250 Hz, the nulls in the polar pattern disappear and the directivity index decreases to 4.1 dB (Figure 7B).

Deleterious effects can also be observed when the phases of the two microphones drift apart (Kuk et al., 2000). The hypercardioid polar pattern at 250 Hz is changed to a cardioid pattern and the directivity index is decreased to 4.6 dB when the front microphone lags the back microphone by 2° (Figure 7C). A more detrimental effect is seen if the back microphone lags the front microphone by 2°. The polar pattern is changed to a reverse hypercardioid pattern where the nulls point to the front and the most sensitive beam of the directional microphone is changed to 180° at the back (Figure 7C) (Kuk et al., 2000).

These examples illustrate that exact matching of the microphones is essential to the performance of the directional microphones. Fortunately, phase mismatching at frequencies higher than 250 Hz has a much less detrimental effect than at the frequency region below 250 Hz because the sensitivity-phase relationship is more stable at higher frequency regions. In other words, phase can be relatively well matched if the sensitivity of the microphones is matched at higher frequency regions.

If microphone drift happens after the directional hearing aid is fit to its user, the hearing aid user may experience good directional benefit at first but later may report no differences between the omni-directional and directional modes. To meet the challenge of maintaining directional performance over time, engineers have developed microphone matching algorithms.

Like any other signal processing algorithms, microphone matching algorithms are also implemented in various ways. Because of the existence of a predictable relationship between the sensitivity and phase of the microphone, many microphone matching algorithms only match the sensitivity of the microphones (Flynn, 2004, personal communication; Hamacher, 2004, personal communication). A few algorithms also match the phase of the microphones (Kuk, 2004, personal communication).
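A sensitivity-only matching scheme of the kind just described can be sketched as follows: estimate the per-band level difference between the two microphones over an observation window, then derive compensation gains for one of them. This is a minimal illustration with made-up band edges; the estimators, bands, and update schedules of commercial algorithms are proprietary and differ from this sketch.

```python
import numpy as np

def band_rms(x, fs, lo, hi):
    """RMS energy of x restricted to the [lo, hi) Hz band (via the FFT)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (freqs >= lo) & (freqs < hi)
    return np.sqrt(np.sum(np.abs(spec[sel]) ** 2)) / len(x)

def matching_gains(front, back, fs, bands=((100, 1000), (1000, 4000))):
    """Per-band linear gains for the back mic that restore equal
    sensitivity with the front mic (sensitivity-only matching)."""
    return [band_rms(front, fs, lo, hi) / band_rms(back, fs, lo, hi)
            for lo, hi in bands]
```

If the back microphone has drifted to be 1 dB less sensitive, its band RMS drops by a factor of 10^(-1/20) ≈ 0.891, and the computed gain of ≈ 1.122 restores the match; applying such a correction ahead of the delay-and-subtract stage protects the nulls and the directivity index.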

If a difference is detected between the microphones, the microphone matching algorithm generates a digital filter to match the two microphones (Groth, 2004, personal communication). One important component of this matching process is that the microphone outputs are digitized separately so that the frequency response and the phase of the microphones can be adjusted separately (Edwards et al., 1998; Kuk et al., 2000).

Microphone matching algorithms can also differ in their speed of action. Depending on the signal processing power of the hearing aid chip and/or the priority of the microphone matching algorithm set among the signal processing algorithms, some microphone-matching algorithms monitor the output of the two omni-directional microphones over a relatively long window of several hours (Groth, 2004, personal communication), and others match the sensitivity of the microphones in the order of seconds (Hamacher, 2004, personal communication; Kuk et al., 2002a) or in the order of milliseconds (Flynn, 2004, personal communication).

3.1.4.1. Verification and Limitations

A properly functioning microphone matching algorithm should provide individualized in situ matching of the microphones and maximize the directional performance of the directional microphone throughout the lifetime of the hearing aid. The rationale of microphone matching algorithms sounds very logical and promising; however, the exact procedures used by different manufacturers are unknown to the public. To date, no verification data on the effectiveness of these algorithms are available.

The limitation of the microphone matching algorithm is that it cannot protect against factors other than microphone drift (e.g., clogged microphone ports). In fact, Thompson (2003) argued that the degradation in directional performance of the directional microphones is often due to the condensation of debris on the microphone screen or clogged microphone ports rather than microphone drift, even though the hearing aids are equipped with microphone-matching algorithms (Ricketts, 2001).

3.1.5. Microphone Arrays as Assistive Listening Devices

Because some people with hearing loss experience more than 15 dB of SNR loss, the benefits provided by directional microphones may not be enough to compensate for their SNR loss. The traditional solution is to resort to the use of personal FM systems. An FM system is very useful in classrooms or in one-on-one communications. The microphone of the FM system greatly reduces background noise by significantly reducing the distance between the talker and the hearing aid user. However, FM systems are limited in their effectiveness to pick up multiple speakers in a conversation. They are not practical to use in daily life where listening to multiple talkers is essential.

Several companies have marketed array microphones that are designed to bypass the hearing aid microphone(s) and to provide higher directional effects. These array microphones are implemented in either head-worn or hand-held units. When these array microphones are used in conjunction with hearing aids, sounds from the environment are pre-processed by the array microphone and then sent to the hearing aids via a telecoil, a direct audio input, or an FM receiver. The advantage of array microphones over traditional FM systems is that the talker does not need to wear the microphone or the transmitter unit. The hearing aid user can choose to listen to different talkers by facing or pointing to the desired talker. Some array microphones are only compatible with hearing aids from their own manufacturer (e.g., SmartLink SX from Phonak). Other array microphones are compatible with hearing aids from multiple manufacturers. The following section focuses on the latter.

a. Head-Worn Array Microphones

Head-worn array microphones can be imple-
phone drift. As reasons other than microphone mented as either an end-fire or a broadside array.
mismatch may determine the performance of the An end-fire array has its most sensitive beam par-
directional microphones over time, constant mon- allel to the microphone array, such as an array
itoring of the microphones’ physical conditions is microphone implemented along an arm of a pair
very important. Clinicians need to check the con- of eyeglasses. A broadside array has its most sen-
ditions of the two omni-directional microphones sitive beam perpendicular to the microphone
under a microscope or amplifying lens during reg- array, such as an array microphone implemented
ular clinic visits to ensure that the microphone above the glasses of a pair of eyeglasses.
openings are free of debris and the microphone Etymotic Research designed and marketed an
screens are clearly seen and well defined, even end-fire array microphone, Link.It. Link.It uses

107
Trends In Amplification Volume 8, Number 3, 2004

delay-and-sum processing to combine the outputs of three single-microphone directional microphones. The directional microphones are spaced 25 mm apart, and the outputs of the second and third directional microphones are delayed by 75 and 150 μs, respectively. When sounds come from the front, the outputs of the microphones are in phase after accounting for the traveling time and the delay circuit. The sum of the outputs from the three microphones is three times as large as the single directional microphone output. When sounds come from the sides, the outputs of the three microphones are out of phase because of the delay added to the outputs of the microphones (Christensen et al., 2002). According to Etymotic Research (http://www.etymotic.com/ha/linkit-ts.asp), single-microphone directional microphones (instead of omni-directional microphones) are used to optimize the performance of the array microphone over time and minimize the need to monitor and match the sensitivity and the phase of the microphones.

Link.It sends its processed signal to the hearing aid wirelessly via telecoil (Figure 8A). If necessary, the output of Link.It can be fed into the direct audio input of the hearing aid. Link.It has a relatively flat frequency response from 200 Hz to 4000 Hz. When measured in an anechoic chamber, it yielded an AI-DI of 7 dB on KEMAR and 8 dB in free-field (Christensen et al., 2002).

b. Hand-Held Array Microphones
Most hand-held array microphones are implemented as end-fire arrays (i.e., shot-gun microphone arrays). Recently, a new hand-held array microphone, Lexis, has been introduced. Lexis has a hand-held unit with an array microphone and a built-in FM transmitter. Signals from Lexis are sent to the hearing aid via an FM receiver plugged into the direct audio input of the hearing aid.

The hand-held unit of Lexis is composed of four single-microphone directional microphones aligned on the side of the unit (Figure 8B). Again, single-microphone directional microphones are used to maintain the directional effect over time while minimizing the need to monitor and match the sensitivity and phase of the microphone components. The port spacing between these single-microphone directional microphones is 15 mm. According to Oticon (2004b), 15 mm was chosen as a compromise between the amount of low-frequency roll-off (the larger the port spacing, the less the low-frequency roll-off) and high directivity at the high-frequency range (the smaller the port spacing, the higher the high-frequency directivity index).

Lexis has three user-switchable directionality modes: omni-directional, focus, and superfocus. The superfocus mode has a narrower sensitive beam to the front than the focus mode. The AI-DI is reported to be 8.5 dB in the superfocus mode and 5.9 dB in the focus mode. Lexis has a relatively flat frequency response, from 600 Hz to 5000 Hz (Oticon, 2004b). During one-on-one communication or listening, the hand-held unit can be worn around the neck of the talker like the microphone and transmitter unit in other FM systems.

3.1.5.1. Verifications and Limitations
Clinical trials of Link.It (Christensen et al., 2002) and studies carried out during its developmental stages (Bilsen et al., 1993; Soede et al., 1993) reported a 7 to 10 dB SNR improvement for people with hearing impairment in noisy, reverberant environments.

Oticon (2004b) conducted a "just-follow-the-conversation" test in a laboratory setting. During the test, speech was fixed at 65 dB SPL and subjects were asked to adjust the level of the noise so that they could understand 50% of the information. The results indicated that five subjects with moderate-to-profound hearing loss obtained 5.6 and 8.7 dB of directional benefit for the focus and superfocus modes relative to the omni-directional mode, respectively. An interesting observation is that when Lexis is used in the hand-held position, the omni-directional mode of Lexis is about 4 dB better than the omni-directional mode of the subjects' own hearing aids because of the body baffle effect. Significant improvement was also reported in all subtests of the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox and Alexander, 1995) when the subjects' own hearing aids were compared with the superfocus mode of Lexis.

Super-directionality is a double-edged sword. On one hand, it has very high sensitivity to sounds from a very narrow beam to the front and it can reduce background noise significantly. This feature is especially useful for listening to a talker located at a fixed direction or a talker moving along a predictable path. On the other hand, if several talkers are participating in a discussion or conversation, say at a round table, it may be extremely hard for the user to zoom in on the correct talker because the beam is so narrow. If the talker is not in the beam of the directional microphone sensitivity, the user has to rely on visual cues to locate the talker and then point the hand-held unit to the talker. The user may miss the first several words whenever the talker changes. In such a case, the focus mode may be more appropriate because it has a wider sensitivity beam than the superfocus mode. The drawback is that it is less directional, and thus its noise reduction ability is less than that of the superfocus mode. Another caution when using Lexis is that highly directional devices may reduce the user's ability to hear warning sounds from locations, such as the sides, with low microphone sensitivity.

It is worth noting that although array microphones can provide up to 7 to 8 dB of improvement in SNR, FM systems have been shown to provide 10 to 20 dB of improvement (Crandell and Smaldino, 2001; Lewis et al., 2004). FM systems can remarkably improve the SNR because the microphone is usually located near the mouth of the talker; thus, they significantly reduce the effects of reverberation, distance, and noise. Therefore, in situations where the voice of only one talker is desirable (e.g., one-on-one conversation in classrooms or lecture halls), the use of FM systems or array microphones configured to function as FM systems (i.e., the hand-held unit of Lexis worn around the talker's neck) is recommended.

Figure 8. (A) Link.It transmits the processed signal to an in-the-ear hearing aid via telecoil. (Courtesy of Etymotic Research, reprinted with permission). (B) The hand-held unit of Lexis transmits the processed signal to a frequency modulated (FM) receiver attached to a behind-the-ear hearing aid. (Courtesy of Oticon, reprinted with permission).
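The delay-and-sum processing described for Link.It can be illustrated with a short simulation. The sketch below is a simplified model, not Etymotic Research's implementation: the three elements are treated as ideal omni-directional sensors (the product uses directional elements), the 2-kHz test tone and 48-kHz sampling rate are arbitrary choices, and the inter-element delays are derived from the 25 mm spacing and the speed of sound.

```python
import numpy as np

def end_fire_delay_and_sum(angle_deg, freq=2000.0, fs=48000,
                           spacing=0.025, c=343.0):
    """Simulate a 3-element end-fire delay-and-sum array for a pure tone.

    angle_deg: arrival angle (0 = on-axis, from the front).
    Returns the RMS of the summed output relative to a single element.
    Elements are modeled as ideal omni-directional sensors for simplicity.
    """
    t = np.arange(0, 0.05, 1.0 / fs)
    # Element m is closer to a frontal source by m * spacing, so its
    # signal arrives earlier by m * tau_acoustic (scaled by cos(angle)).
    tau_acoustic = spacing * np.cos(np.radians(angle_deg)) / c
    # An electrical delay of m * tau_delay re-aligns frontal arrivals.
    tau_delay = spacing / c
    mics = [np.sin(2 * np.pi * freq * (t + m * tau_acoustic - m * tau_delay))
            for m in range(3)]
    summed = sum(mics)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return rms(summed) / rms(mics[0])

front_gain = end_fire_delay_and_sum(0.0)   # frontal sounds add in phase
side_gain = end_fire_delay_and_sum(90.0)   # off-axis sounds partially cancel
```

For a frontal tone the three aligned outputs sum to three times a single element, matching the description in the text; off-axis arrivals are only partially aligned after the fixed delays and therefore sum to less.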


3.1.6. General Remarks

Directional microphones and array microphones have made significant advances in the past several years. With all of the advances in directional hearing aids, counseling is essential. Clinicians need to be knowledgeable about the benefits and the limitations of the hearing aids with directional microphones and counsel the hearing aid users accordingly. Hearing aid users need to be informed of what to expect from their hearing aids and how to obtain the maximum benefit from different directional products. Topics for additional discussions with users also need to include how to position themselves to receive maximum directional benefit, how much low-frequency equalization is appropriate and acceptable, how to get used to directional microphones, when to switch between directional and omni-directional modes, and when it is appropriate to deactivate automatic signal processing options, among others.

3.2. Noise-Reduction Strategy No. 2: Noise-Reduction Algorithms

Whereas directional microphones are designed to take advantage of the spatial separation between speech and noise, noise reduction algorithms are designed to take advantage of the temporal separation and spectral differences between speech and noise. The ultimate goals for noise reduction algorithms are to increase listening comfort and speech intelligibility. Noise reduction algorithms are different from speech enhancement algorithms in that noise reduction algorithms aim to reduce noise interference, whereas speech enhancement algorithms are designed to enhance the contrast between vowels and consonants (Bunnel, 1990; Cheng and O'Shaughnessy, 1991). Most of the high-performance hearing aids have some type of noise reduction algorithm, whereas only a few (e.g., GN ReSound Canta) have speech-enhancement algorithms. The following discussion concentrates on the mechanisms and features of noise reduction algorithms.

All noise reduction algorithms are proprietary to the hearing aid manufacturers. They have different signal detection methods, decision rules, and time constants. The only common feature among these algorithms is the detection of modulation in the incoming signal to infer the presence or absence of the speech signal and to estimate the SNR in the microphone output.

Speech has a modulation rate centered at 4 to 6 Hz. Noise in most listening environments has either a constant temporal characteristic or a modulation rate outside the range of speech. Further, speech exhibits co-modulation, another type of modulation that is generated by the opening and closing of the vocal folds during the voicing of vowels and voiced consonants (Rosen, 1992). The rate of co-modulation is the fundamental frequency of the person's voice.

Depending on the type of modulation detection used, noise reduction algorithms are divided into two categories: multichannel adaptive noise reduction algorithms that detect the slow modulation in speech, and synchrony-detection noise reduction algorithms that detect the co-modulation in speech.

3.2.1. Multichannel Adaptive Noise-Reduction Algorithms

Most of the noise reduction algorithms in commercial hearing aids use the multichannel adaptive noise reduction strategy. These algorithms are intended to reduce noise interference at frequency channels with noise dominance. In theory, multichannel adaptive noise reduction algorithms are the most effective in their noise reduction efforts when there are spectral differences between speech and noise. The major limitation of these noise reduction algorithms is that they cannot differentiate between the desired signal and the unwanted noise if speech is the competing noise. Table 2 summarizes the characteristics of noise reduction algorithms implemented in some hearing aids.

3.2.1.1. How They Work
a. Signal Detection and Analysis
The first and foremost action of a multichannel adaptive noise reduction algorithm is the classification of speech and noise in the incoming signal. Noise-reduction algorithms may monitor one or several aspects of the incoming signal for characteristics that resemble speech or noise, or both. Multichannel adaptive noise reduction algorithms use speech detection strategies similar to those of the adaptive directional microphone algorithms. They have detectors to estimate the modulation rate and the modulation depth within each frequency channel to infer the presence of speech, noise, or both, and the SNR within the frequency channel (Boymans and Dreschler, 2000; Van Dijkhuizen et al., 1991; Edwards, 1998; Fang and Nilsson, 2004; Mueller, 2002; Powers and Hamacher, 2002; Walden et al., 2000).
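A generic envelope-based detector of the kind described above can be sketched as follows. The 10-ms frame length, the percentile-based depth measure, and the test signals are illustrative assumptions for a single channel, not the proprietary detector of any manufacturer.

```python
import numpy as np

def modulation_metrics(x, fs=16000, frame_ms=10):
    """Estimate the envelope modulation depth (dB) and dominant modulation
    rate (Hz) of a signal.  A generic textbook-style detector for
    illustration, not any manufacturer's proprietary algorithm."""
    frame = int(fs * frame_ms / 1000)
    n = len(x) // frame
    # Short-time RMS envelope: one value per frame (100-Hz envelope rate).
    env = np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1))
    # Modulation depth: level difference between envelope peaks and valleys.
    depth_db = 20 * np.log10(np.percentile(env, 95) / np.percentile(env, 5))
    # Dominant modulation rate: strongest component of the envelope spectrum.
    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    rate_hz = np.fft.rfftfreq(n, d=frame / fs)[np.argmax(spectrum)]
    return depth_db, rate_hz

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(2 * fs) / fs                                     # 2 s of signal
carrier = rng.standard_normal(len(t))                          # noise carrier
speech_like = (1 + 0.9 * np.sin(2 * np.pi * 4 * t)) * carrier  # 4-Hz envelope
steady_noise = carrier                                         # unmodulated

depth_s, rate_s = modulation_metrics(speech_like, fs)
depth_n, rate_n = modulation_metrics(steady_noise, fs)
```

With these signals, the 4-Hz amplitude-modulated noise yields a large modulation depth and a dominant envelope rate near 4 Hz (inside the speech range), whereas the unmodulated noise shows only the small random fluctuation of its envelope.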


Table 2. The Characteristics of Noise Reduction Algorithms Implemented in Selected Commercially Available Hearing Aids*

No. of Channels
- Oticon Syncro: 8
- ReSound Canta: 14
- Sonic Innovations Natura: 9
- Siemens Triano: 16
- Widex Diva: 15

Type of Noise Reduction
- Oticon Syncro: Synchrony detection + multichannel adaptive
- ReSound Canta, Sonic Innovations Natura, Siemens Triano, Widex Diva: Multichannel adaptive

Signal Detection and Analysis
- Oticon Syncro: 1. Synchrony detector to detect synchronous energy across the upper four high-frequency channels. 2. Modulation detector to detect modulations between 2 and 20 Hz in the envelope of the incoming signal in each channel. 3. Noise level detector to estimate the noise level in each channel
- ReSound Canta, Sonic Innovations Natura, Siemens Triano, and Widex Diva all use: 1. Modulation detector to detect the modulation in the envelope of the incoming signal in each frequency channel. In addition:
- ReSound Canta: 2. Maxima modulation detector to follow the maxima in the input signal; it attempts to reduce noise in running speech without reducing audibility. 3. Minima modulation detector to follow the minima in the input signal; it provides the baseline for determining the modulation and estimates the level of noise
- Sonic Innovations Natura: 2. Noise detector to estimate the steady-state noise based upon modulation rate; the target modulation rate changes depending on the frequency channel. 3. SNR calculation based on the noise estimate vs. the amplitude of the entire signal
- Siemens Triano: 2. Modulation detection block to determine the modulation rate
- Widex Diva: 2. Signal detector to detect the intensity pattern of the incoming signal in a 30–60-sec window within a frequency channel. 3. Signal detector to monitor the spectral-intensity-temporal patterns of the incoming signal across frequency channels. 4. Level detector to estimate the sound pressure level in each channel

Decision Rules
- Oticon Syncro: 1. The result of the synchrony detector is used to infer the presence/absence of speech in the incoming signal (i.e., a Yes/No analysis). 2. The results from the modulation detector and the level detector are used to estimate the SNR within each channel. 3. The results of 1 and 2 are integrated to determine the characteristics of the incoming signal. Speech Only: no noise reduction. Speech-in-Noise: restricted gain shaped by the articulation index to ensure speech intelligibility is maintained; in general, the amount of gain reduction in a frequency channel increases with an increase in noise level, a decrease in modulation depth, or a decrease in articulation index weighting of the frequency channel. Noise Only: maximum gain reduction in each frequency channel
- ReSound Canta: 1. Determine if the signal is speech or nonspeech based on the modulation rates detected. 2. The threshold for gain reduction depends on the noise reduction setting: 15 dB modulation for the mild and moderate settings, 20 dB for the strong setting; above this modulation depth, no gain reduction is applied. 3. The amount of gain reduction increases linearly as the modulation depth decreases. 4. Maximum gain reduction occurs only if the modulation depth is 0 dB, and depends on the noise reduction setting: 12 dB for mild, 18 dB for moderate, and 24 dB for strong
- Sonic Innovations Natura: 1. Determine if the signal is speech or nonspeech based on the modulation rates of the signal detected. 2. Gain reduction at a frequency channel depends on the estimated SNR; if the SNR is less than 12 dB, the gain of the frequency channel is reduced, with maximum gain reduction observed at SNR < 0 dB. 3. Maximum gain reduction (6, 12, or 18 dB) depends on the noise reduction setting; the amount of gain reduction is frequency independent
- Siemens Triano: 1. Determine if the signal is speech or nonspeech based on the modulation rates of the signal detected in a 12-s window; the modulation rate of speech is assumed to be between 4 and 6 Hz. 2. If the detected modulation rate is outside the speech range, gain is reduced at the frequency channel. 3. The amount of gain reduction depends on the modulation depth/SNR; the exact amount is described by the Wiener filter. 4. Maximum gain reduction of 12, 18, or 24 dB depends on the noise reduction setting, provided that there is sufficient gain in the hearing aid to allow the maximum amount of gain reduction
- Widex Diva: 1. The intensity level within a frequency channel and the spectral-intensity-temporal patterns of the incoming signal give a gradual reduction of the channel gain up to 14 dB. 2. The amount of gain reduction in a frequency channel increases with an increase in input level, a decrease in modulation depth, and a decrease in Articulation Index weighting of the frequency channel. 3. Adaptive changes in filter characteristics (e.g., compression ratio, amount of attenuation): fast change if modulation is detected and slow change if no modulation is detected in the frequency channel

Adaptation Speed/Speed of Gain Reduction
- Oticon Syncro: 1. The system moves faster to speech and adapts more slowly to noise. 1a. When the hearing aid is in Noise Only mode, the system takes 0.2–0.9 sec to move to the Speech Only or Speech-in-Noise modes. 1b. When the hearing aid is in the Speech modes, it takes the system 1.8–7 sec to change to Noise Only mode
- ReSound Canta: 1. Less than 5 ms for the maximum follower. 2. 4 sec for the minimum follower. 3. Speed of gain reduction: 10 sec from 0 dB gain reduction to the maximum gain reduction setting
- Sonic Innovations Natura: The noise detector is a sliding 1.2-sec calculation; it changes gain based on the estimated SNR. Speed of gain reduction: equal to the attack time of the compression system (i.e., between 2 and 50 ms across frequency channels)
- Siemens Triano: Speed of gain reduction: initial gain reduction within 2 sec; maximum gain reduction is achieved within 6–8 sec
- Widex Diva: Speed of gain reduction: 5 sec for a 10 dB gain change

Release Speed/Speed of Gain Recovery
- Oticon Syncro: 2. The exact switching times depend on the Identity setting for the hearing aid user; in general, a more active Identity setting corresponds to faster time constants. 3. Speed of gain reduction and speed of gain recovery: proprietary
- ReSound Canta: 1. Less than 5 ms for the maximum follower. 2. 2 ms for the minimum follower. 3. Speed of gain recovery: 10 ms for the noise reduction system to recover to 0 dB gain reduction if the modulation depth is higher than the modulation threshold
- Sonic Innovations Natura: Speed of gain recovery: equal to the release time of the compression system (i.e., between 2 and 50 ms across frequency channels)
- Siemens Triano: Speed of gain recovery: less than 1 sec
- Widex Diva: Speed of gain recovery: 0.5 sec

Information Source(s)
- Oticon Syncro: Oticon, 2004; Flynn, 2004, personal communication
- ReSound Canta: Groth, 2004, personal communication; Smriga and Groth, 1999
- Sonic Innovations Natura: Nilsson, personal communication; US Patent 06757395; Johns et al., 2002
- Siemens Triano: Powers, 2004, personal communication
- Widex Diva: Kuk et al., 2002b; Kuk, 2004, personal communication

Clinical Verification
- Oticon Syncro: Unavailable
- ReSound Canta: Unavailable
- Sonic Innovations Natura: Bray and Nilsson, 2000; Bray et al., 2002; Johns et al., 2002; Galster and Ricketts, 2004: improvement of SNR of 1–1.8 dB
- Siemens Triano: Unavailable
- Widex Diva: Unavailable

*These hearing aids are selected to demonstrate the range and the differences in implementation methods of noise reduction algorithms in commercially available hearing aids. AGC = automatic gain control. SNR = signal-to-noise ratio.
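Most of the decision rules summarized in Table 2 share a common shape: no gain reduction above some modulation-depth threshold, and a reduction that grows toward a setting-dependent maximum as the detected modulation disappears. The sketch below illustrates that shape only; the 15 dB threshold and 12 dB maximum are hypothetical defaults, not the parameters of any listed instrument.

```python
def channel_gain_reduction_db(modulation_depth_db,
                              threshold_db=15.0, max_reduction_db=12.0):
    """Map an estimated modulation depth (a proxy for channel SNR) to a
    gain reduction for one frequency channel.

    Above `threshold_db` the channel is treated as speech-dominated and
    passed without attenuation; below it, the reduction grows linearly,
    reaching `max_reduction_db` when no modulation is detected (0 dB
    depth).  All numbers are illustrative defaults, not any product's
    actual parameters.
    """
    depth = max(0.0, min(modulation_depth_db, threshold_db))
    return max_reduction_db * (1.0 - depth / threshold_db)

speech_in_quiet = channel_gain_reduction_db(30.0)  # deep modulation -> 0.0 dB
speech_in_noise = channel_gain_reduction_db(7.5)   # partial -> 6.0 dB
noise_only = channel_gain_reduction_db(0.0)        # no modulation -> 12.0 dB
```

Raising the maximum reduction parameter mimics moving the fitting-software noise reduction setting from mild toward strong, while the threshold controls how much modulation is required before the channel is left untouched.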

Some noise reduction algorithms may also detect other dimensions of the incoming signal, such as the intensity-modulation-temporal changes within each frequency channel (Tellier et al., 2003) or the spectral-intensity-temporal patterns of the incoming signal across frequency channels (Kuk et al., 2002b). For example, Widex Diva detects the modulation, the intensity patterns of the incoming signal in each channel, and the spectral-intensity-temporal patterns across frequency channels (Kuk et al., 2002b). The intensity distribution of the signal is monitored over 10- to 15-second periods in each frequency channel. The assumptions are that the level of noise is relatively stable within and across frequency channels, whereas the level of speech varies rapidly within and across frequency channels.

Another important task of the signal detection and analysis unit is to estimate the SNR within each frequency channel. As mentioned in the section on adaptive directional microphones, the estimation of the SNR is usually accomplished by calculating the modulation depth of incoming signals with a modulation rate resembling speech. If the modulation depth is high, say 30 dB, the signal detection and analysis unit assumes that the SNR is high in the frequency channel and that speech is the dominant signal in the frequency channel. If the modulation is moderate or low, the unit assumes that the SNR is moderate or low in the frequency channel. The actual implementation of the signal detection and analysis unit in hearing aids from different manufacturers or among different models may vary. Table 2 summarizes simplified versions of the signal detection mechanisms used by different hearing aid manufacturers.

b. Decision Rules
The decision rules of a noise reduction algorithm may depend on several factors. The most common of these include the estimated modulation depth/SNR, the frequency importance function, the level of the incoming signal, and the degree of noise reduction selected in the hearing aid fitting software. The amount of gain reduction in each channel is usually inversely proportional to the SNR estimated in the frequency channel (Kuk et al., 2002b; Powers and Hamacher, 2002; Johns et al., 2002; Edwards et al., 1998; Latzel, Kiessling and Margolf-Hackl, 2003; Schum, 2003; Walden et al., 2000).

This approach is based on the rationale that if the signal-detection and analysis unit estimates a high SNR in a frequency channel, the algorithm assumes that speech-in-quiet is detected in the frequency channel and the action unit should let the signal pass without attenuation. If the unit estimates a moderate or low SNR in the frequency channel, the algorithm assumes that either speech coexists with noise or noise dominates the frequency channel. Thus, the gain of the frequency channel should be reduced to decrease the noise interference. When no modulation is detected in a frequency channel, the analysis unit assumes that no speech is present in the frequency channel and maximum attenuation should be applied.

The modulation depth at which gain reduction starts to be applied at a frequency channel is sometimes called the "modulation threshold for noise reduction activation" (Groth, 2004, personal communication). The modulation threshold for noise reduction activation and the exact amount of gain reduction applied at different SNRs differ among the noise reduction algorithms. Figure 9 illustrates the relationship between the estimated SNR and the amount of gain

Figure 9. The relationship between the estimated signal-to-noise ratio (SNR) and the amount of gain
reduction applied by noise reduction algorithms of two commercial digital hearing aids.


reduction in two commercially available digital hearing aids.

Another common consideration in the decision rules of the multichannel adaptive noise reduction algorithms is the frequency-importance weighting of the frequency region for speech understanding. One of the approaches is to set the amount of gain reduction inversely proportional to the articulation index of the frequency region (Kuk et al., 2002b; Alcantara et al., 2003; Boymans and Dreschler, 2000). The assumption is that as the weightings of the articulation index increase, the importance of the frequency channel for speech understanding also increases; therefore, less gain reduction should be applied to these frequency channels (Kuk et al., 2002b; Oticon, 2004a).

Other manufacturers may use different sets of gain reduction rules to account for the importance of speech information in the frequency channel (Alcantara et al., 2003; Tellier et al., 2003). For example, Phonak Claro only reduces the gain at frequency channels below 1 kHz and above 2 kHz. The rationale is that frequencies between 1 and 2 kHz are very important for speech understanding; therefore, the gain is not reduced regardless of the modulation depth of the incoming signal at those frequencies (Alcantara et al., 2003). Another form of frequency-dependent gain reduction is found in Siemens' Prisma, in which the amount of maximum gain reduction in a frequency channel can be programmed by the clinician in the hearing aid fitting software (Powers et al., 1999).

In addition to the modulation depth and the importance of speech content in the frequency channel, some manufacturers may add another dimension to their gain reduction decision rules: the sound pressure level of the incoming signal or the sound pressure level of the noise. For example, if a particular modulation depth is detected within a frequency channel, the Widex Diva starts to reduce the gain of the frequency channel only if the input level exceeds 50 to 60 dB. The amount of reduction also increases as the level of the incoming signal increases. If the modulation depth is high and the level is low, no gain reduction is applied (Kuk et al., 2002b). The assumptions are that noise reduction is not needed in quiet or in environments with low levels of noise and that a higher amount of noise reduction is needed in a noisier environment.

Many multichannel adaptive noise reduction algorithms allow the clinician to choose the degree of noise reduction in the fitting software. As the degree of noise reduction increases, the maximum gain reduction also increases. This maximum gain reduction is usually applied across frequency channels without affecting the frequency weighting of the particular channel (Tellier et al., 2003).

c. Execution of Gain Reduction
After the noise reduction algorithm "determines" that a certain amount of gain reduction is needed for a given frequency channel, the gain reduction is carried out. In this final stage of the noise reduction signal processing, the time constants for actions are crucial factors that determine the effectiveness of the noise reduction algorithm and the amount of artifact, if any, generated. Four different time constants operate in the multichannel adaptive noise reduction algorithms:

1. the engaging/adaptation/attack time (i.e., the time between the noise reduction algorithm detecting the presence of noise in a frequency channel and the time that the gain of the frequency channel starts to reduce);
2. the speed of gain reduction (i.e., the time between the beginning of the gain reduction and the maximum gain reduction);
3. the disengaging/release time (i.e., the time between the noise reduction algorithm detecting the absence of noise in a frequency channel and the time that the gain of the frequency channel starts to recover); and
4. the speed of gain recovery (i.e., the time between the start of the gain recovery and 0 dB gain reduction).

The determination of the appropriate time constants is an art as well as a science. If a noise reduction algorithm has very fast attack and release time constants or very fast gain reduction or recovery times, it may treat transient speech components such as stop or fricative consonants as noise and suppress them. This may result in reduced speech intelligibility or create other artifacts. On the other hand, if a noise reduction algorithm has very slow time constants or speed of action, the algorithm may not respond to sudden changes in the environment and brief noises may not be detected (Tellier et al., 2003).

Table 2 summarizes the time constants of the noise reduction algorithms implemented in different hearing aids. Notice that some algorithms


use the same time constants as the compression system in the hearing aid, whereas others may have different time constants for the two systems.

3.2.1.2. Verification and Limitations

a. Evaluation of Noise Reduction Algorithms

Multichannel adaptive noise reduction algorithms have been evaluated for their effectiveness in improving speech understanding and perceived sound quality of hearing aid users. Many research studies reported that the noise reduction algorithms implemented in hearing aids increased subjective listening comfort, naturalness of speech, sound quality, and/or listening preference in background noise (Boymans et al., 1999; Boymans and Dreschler, 2000; Bray and Nilsson, 2001; Levitt, 2001; Mueller, 2002; Valente et al., 1998; Walden et al., 2000). A few studies reported no benefits on sound quality ratings (Alcantara et al., 2003).

In theory, multichannel adaptive noise reduction algorithms work best when there are spectral differences between speech and noise. If noise exists only in a very narrow frequency region, the multichannel adaptive noise reduction algorithm can reduce the gain of the hearing aid at that particular region without affecting the speech components in other frequency regions. Lurquin and colleagues (2001) reported that the noise reduction algorithm of the Phonak Claro, a 20-channel digital hearing aid, increased speech understanding in octave band noises centered at 250 Hz or 500 Hz. However, Alcantara and colleagues (2003) tested the same hearing aid and reported no significant improvement in speech understanding in car noise, a noise with a much wider bandwidth than the low-frequency octave band noises.

Most studies on noise reduction algorithms did not report any benefit for speech understanding in broadband noises, such as car noise or speech spectrum noise (Alcantara et al., 2003; Boymans and Dreschler, 2000; Ricketts and Dhar, 1999; Walden et al., 2000). The reason is that if the noise reduction algorithm reduces the gain at frequency channels with noise dominance, it also reduces the audibility of speech information in the channel. Thus, the user's speech understanding is not enhanced. Nevertheless, some studies conducted by researchers at Sonic Innovations and by independent researchers have reported that Natura hearing aids improved the SNR-50 of subjects with hearing loss by 1 to 1.8 dB (Bray and Nilsson, 2000; Bray et al., 2002; Johns et al., 2002; Galster and Ricketts, 2004). Chung and colleagues (2004) have also observed an improvement in speech understanding scores in cochlear implant users when the Natura is switched from the directional microphone mode to the directional microphone plus noise reduction mode.

Some studies investigated the combined effect of directional microphones with multichannel adaptive noise reduction algorithms (Ricketts and Dhar, 1999; Boymans and Dreschler, 2000; Walden et al., 2000). The results indicated no additional benefits provided by the noise reduction algorithms implemented in various hearing aids when assessing speech understanding in noise.

It should be noted that the benefits of the noise reduction algorithms, if any, on speech understanding or listening comfort are observed in steady-state noise (e.g., speech spectrum noise, narrow-band noise) but not in noise that has the modulation patterns of a speech signal (e.g., single-talker competing signal, speech babble, or the International Collegium for Rehabilitative Audiology noise). This is because multichannel adaptive noise reduction algorithms rely heavily on the detection of modulation to infer the presence of speech. If the competing noise has similar modulation and acoustic patterns as the desired speech, the noise reduction algorithm cannot differentiate between the two. In general, the larger the differences in acoustic characteristics between speech and noise, the more effective the noise reduction algorithm (Levitt, 2001).

b. Interaction Between Multichannel Adaptive Noise Reduction Algorithm and Wide Dynamic Range Compression

A caution for fitting hearing aids with wide dynamic range compression and noise reduction is that wide dynamic range compression may reduce the effectiveness of noise reduction algorithms. The interactions between wide dynamic range compression and noise reduction algorithms can be seen when these two signal processing units are implemented in series. Specifically, interactions exist if the level detector of the compression system uses the output of the noise reduction algorithm to make decisions on the amount of gain that is applied to the signal, or if the noise reduction algorithm uses the output of the compression system to make decisions about noise level and modulation.

To illustrate the interactions, Figure 10A displays the amplitude envelope of two sentences


presented to a diffuse sound field at an SNR of 3 dB. Figures 10B and 10C show the same sentences processed by a digital hearing aid with its noise reduction algorithm activated and the compression system programmed to linear and 3:1 compression, respectively. The compression system has fast time constants and was implemented in series with the noise reduction algorithm in the signal processing path. The frequency responses in the compression and linear modes were matched at the presentation level. Compared with the envelope of the sentence processed in the linear mode, the envelope of the sentence processed in the compression mode exhibits a lower modulation amplitude. The amplitude of the noise between the sentences was also higher in the compression mode than in the linear mode. The combination of low modulation depth and higher noise level suggests that the 3:1 compression reduced the modulation depth and thus the SNR of the processed signal. The noise level increases because wide dynamic range compression provides more gain for soft sounds (i.e., the noise in this case) and less gain for loud sounds (i.e., the speech in this case).

To date, few research studies have investigated the perceptual effects of the interactions between wide dynamic range compression and noise reduction algorithms on speech understanding and sound quality. Research studies are also needed to explore whether any interaction exists between noise reduction algorithms and compression systems when the two systems are implemented in parallel in the signal processing path. An example of the parallel implementation of the noise reduction algorithm and the compression system is that the signal detectors of the noise reduction algorithm and the compression system detect and make decisions based on the signal at the microphone output. Clinicians need to keep in mind that the interactions between the noise reduction algorithm and a wide dynamic range compression system with a high compression ratio might reduce the effectiveness of noise reduction algorithms.

c. Number of Channels and Processing Delay

In theory, multichannel hearing aids with more channels are better choices than those with a few channels for the application of multichannel adaptive noise reduction algorithms (Edwards, 2000; Mueller, 2002; Kuk et al., 2002b). If a hearing aid only has two to three channels and the noise reduction algorithm decides to turn down the gain of one or two channels, the gain of a large proportion of the speech spectrum is also reduced. On the other hand, a hearing aid with nine to ten channels can provide finer tuning in the noise reduction process. The negative effects on the overall speech spectrum are much less when the gain in only one or two channels out of nine is reduced.

In practice, digital hearing aids with many channels may have longer processing delays than analog hearing aids or digital hearing aids with fewer channels (Dillon et al., 2003; Henrickson and Frye, 2003; Stone and Moore, 1999). Processing delay is the time between the entrance of an acoustic signal into the microphone and the exit of the same signal from the receiver. It is sometimes referred to as group delay or time delay. A processing delay of 6 to 8 milliseconds can be noticeable to some listeners (Agnew, 1997). A delay of 10 milliseconds is likely to be annoying to most hearing aid users because an echoing effect may be created. This echoing effect can be caused by two types of mismatch: (1) a mismatch between the bone-conducted and the air-conducted signals during speech production, and (2) a mismatch between the hearing aid-processed sound and the direct sound entering the ear canal via the vent while the hearing aid user is listening to others (Agnew and Thornton, 2000; Stone and Moore, 2002; Stone and Moore, 2003).

Several researchers measured the processing delay in some commercially available digital hearing aids and reported processing delays of 1.1 to 11.2 milliseconds (Dillon et al., 2003; Henrickson and Frye, 2003). It is possible that the long processing delay of some commercial hearing aids with a high number of frequency channels can be rated as objectionable by some hearing aid users. In addition, a previous research study suggested that hearing aid users with less hearing loss or good low-frequency hearing are more likely to detect processing delays, and to rate a lower processing delay as objectionable, than are hearing aid users with more severe hearing loss (Stone and Moore, 1999).

Another form of processing delay is the across-frequency processing delay, which is the relative processing delay among the frequency channels of a hearing aid. The low-frequency channels may have a longer processing delay than the high-frequency channels. Research showed


Figure 10. The interaction between wide dynamic range compression and the noise reduction algorithm when the two systems are implemented in series. (A) The amplitude envelope of two sentences and speech spectrum noise presented in sound field at a signal-to-noise ratio (SNR) of +3 dB. (B) The amplitude envelope of the same sentences and noise after being processed by a directional microphone and a noise reduction algorithm with the compression system set in the linear mode. (C) The envelope of the same sentences and noise after being processed by a directional microphone and a noise reduction algorithm with the wide dynamic range compression system set at 3:1 compression. The frequency responses of the hearing aid in the linear and compression modes were matched at the presentation level.
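The SNR penalty visible in Figure 10C can be reproduced with a short worked example. The sketch below is illustrative only (the 45 dB kneepoint and the 65/55 dB levels are assumed values, not taken from the figure): applying a static 3:1 compression rule in the dB domain to a speech envelope peak and to the noise floor between sentences shrinks the level difference between them, and hence the SNR, by the compression ratio.

```python
def compress_db(level_db, kneepoint_db=45.0, ratio=3.0):
    """Static wide dynamic range compression: above the kneepoint,
    every 3 dB of input yields only 1 dB of output (3:1 ratio).
    Processing below the kneepoint is taken as linear here."""
    if level_db <= kneepoint_db:
        return level_db
    return kneepoint_db + (level_db - kneepoint_db) / ratio

speech_peak = 65.0   # dB SPL, assumed speech envelope peak
noise_floor = 55.0   # dB SPL, assumed noise between sentences

in_snr = speech_peak - noise_floor
out_snr = compress_db(speech_peak) - compress_db(noise_floor)

print(f"input SNR:  {in_snr:.1f} dB")   # 10.0 dB
print(f"output SNR: {out_snr:.1f} dB")  # 10/3, about 3.3 dB
```

Because both levels sit above the kneepoint, the 10 dB envelope contrast is divided by the compression ratio. This is the same mechanism by which the fast-acting 3:1 compression in Figure 10C reduced the modulation depth of the processed sentences while raising the relative level of the noise between them.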

that an across-channel processing delay of 15 milliseconds could significantly reduce nonsense vowel-consonant-vowel identification (Stone and Moore, 2003). Stone and Moore (2002, 2003) also showed that an overall processing delay of 8 to 10 milliseconds was preferable to the same amount of across-frequency delay. Fortunately, most digital hearing aids have across-channel processing delays lower than objectionable values (Dillon et al., 2003).

In clinical practice, clinicians need to test the processing delay of digital hearing aids and choose hearing aids with a balance between signal processing complexity and the amount of processing delay. A short processing delay is especially important for users with good low-frequency hearing or when hearing aids with large vents are used. In addition, different brands and models of hearing aids may have different amounts of processing delay.

Extra care must be taken during binaural hearing aid fitting. The processing delay and the phase relationship of the two hearing aids must be matched for good localization ability and for the avoidance of objectionable echoing effects due to differences in processing delay between the two hearing aids (Henrickson and Frye, 2003). This measurement can be made with the AudioScan or Frye Hearing Aid Analyzer.
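The delay measurement itself need not be a black box. A minimal sketch of the underlying idea (the 16 kHz rate and 6 ms delay are made-up values for illustration; commercial analyzers use their own test signals): present a broadband signal, record the aid's output, and take the lag of the cross-correlation peak between the two recordings as the processing delay.

```python
import numpy as np

fs = 16000                 # sampling rate in Hz (assumed)
delay_samples = 96         # simulate a 6-ms processing delay

rng = np.random.default_rng(0)
x = rng.standard_normal(fs // 4)                            # 0.25 s broadband test signal
y = np.concatenate([np.zeros(delay_samples), x])[:len(x)]   # "aid output": delayed copy

# The lag of the cross-correlation peak estimates the processing delay
corr = np.correlate(y, x, mode="full")
lag = corr.argmax() - (len(x) - 1)
print(f"estimated delay: {1000 * lag / fs:.1f} ms")  # 6.0 ms
```

In a real measurement, x would be the electrical test signal and y the coupler-microphone recording; the same idea underlies the analyzer-based delay measurements reported by Dillon et al. (2003) and Henrickson and Frye (2003).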


3.2.2. Synchrony-Detection Noise Reduction Algorithms

The second category of noise reduction algorithms detects the fast modulation of speech across frequency channels and takes advantage of the temporal separation between speech and noise. The rationale is that the energy of speech sounds is co-modulated by the opening and closing of the vocal folds during the voicing of vowels and voiced consonants (i.e., the fast modulation of speech). Noise, on the other hand, is rarely co-modulated.

The co-modulated nature of speech is revealed as the vertical striations in a spectrogram (Figure 11). The colored vertical stripes of the spectrogram depict the instances with higher energy contents, such as when the vocal folds are open. The darker stripes show the instances with no energy emitted from the mouth, such as when the vocal folds are closed. These vertical striations of the spectrogram thus indicate that speech contains periodic and synchronous energy bursts across the speech frequency spectrum. In other words, the speech components across the speech frequency spectrum are modulated by the opening and closing of the vocal folds at the same rate and at the same instant (i.e., co-modulated). The rate of co-modulation is the fundamental frequency of the human voice, which ranges from 100 to 250 Hz for adults and up to 500 Hz for children.

3.2.2.1. How It Works

The synchrony detection noise reduction algorithm makes use of the co-modulated/synchronous nature of speech sounds to detect the presence of speech (Elberling, 2002; Schum, 2003). The signal detection unit of the noise reduction algorithm constantly monitors the incoming signal at high-frequency bands (i.e., the upper three bands in Adapto) for synchronous energy at the rate of the fundamental frequencies of human voices. According to Oticon, the signal detection unit is capable of detecting synchronous energy down to -4 dB SNRs (Flynn, 2004, personal communication).

The synchrony detection noise reduction algorithm is implemented in Oticon Adapto hearing aids. If synchronous energy in the upper three frequency bands is not detected, the noise reduction algorithm assumes that no speech signal is present and the noise reduction unit gradually reduces the overall gain by decreasing the gain at high input levels (i.e., the compression ratio is increased) at all frequency bands. When synchronous energy is detected, the hearing aid returns to normal settings instantaneously and allows the signal to pass without attenuation. In other words, the detection of synchronous energy across the frequency bands deactivates the actions of the noise reduction algorithm (Schum, 2003; Bachler et al., 1995; Elberling, 2002).

3.2.2.2. Verification and Limitations

The synchrony-detection noise reduction algorithm is designed to take advantage of the temporal separation between speech and noise because it acts at the instances when speech is not present and allows the signal to pass when speech is present. The goal of this algorithm is to increase listening comfort in the absence of speech signals. Yet it does not provide any benefit in listening comfort or speech understanding when speech and noise coexist or when speech is the competing signal. The synchrony-detection noise reduction algorithm is implemented solely in hearing aids by Oticon. No validation data are available on its effectiveness.

3.2.3. Combination of the Two Types of Noise Reduction Algorithms

Most of the commercially available hearing aids are implemented with either a multichannel adaptive noise reduction algorithm (e.g., GNReSound Canta, Widex Diva, Phonak Perseo, Sonic Innovations Natura) or a synchrony detection noise reduction algorithm (i.e., Oticon Adapto). Oticon has recently launched Syncro, which incorporates a combination of the multichannel adaptive and the synchrony detection noise reduction algorithms.

3.2.3.1. How It Works

The noise reduction algorithm of Syncro has three detectors in the signal detection and analysis unit: a synchrony detector, a modulation detector, and a level detector (Table 2). The synchrony detector monitors the presence or the absence of synchronous energy across the upper four frequency channels to infer the presence or absence of speech in the incoming signal. The modulation detector monitors the modulation depth and the noise level of the incoming signal within each frequency channel. The noise level detector determines the noise level in the incoming signal.

The Syncro Optimization Equation integrates the information from these detectors and deter-


Figure 11. The spectrogram of the sentence "The boy fell from the window." The vertical striations indicate that speech is co-modulated during vowel or voiced consonant production. They represent the synchronous energy emitted during the opening and closing of the vocal folds.
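The synchrony-detection principle can be sketched in a toy example. This is illustrative only, not Oticon's proprietary detector: the band count, the simulated envelopes, and the correlation criterion are all invented. The point it demonstrates is that band envelopes driven by a common glottal pulse train are highly correlated across frequency, whereas independent noise envelopes are not.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
f0 = 150.0  # voicing rate in Hz; adult F0 is roughly 100-250 Hz
glottal = np.square(np.sin(np.pi * f0 * t))  # shared vocal-fold envelope

rng = np.random.default_rng(1)

def band_envelopes(co_modulated, n_bands=3):
    """Simulated envelopes of the upper frequency bands: either driven by a
    common glottal envelope (speech-like) or independent (noise-like)."""
    envs = []
    for _ in range(n_bands):
        base = glottal if co_modulated else np.abs(rng.standard_normal(len(t)))
        envs.append(base + 0.05 * np.abs(rng.standard_normal(len(t))))
    return np.array(envs)

def synchrony(envs):
    """Mean pairwise correlation between band envelopes."""
    c = np.corrcoef(envs)
    n = len(envs)
    return (c.sum() - n) / (n * (n - 1))

speech_like = synchrony(band_envelopes(True))
noise_like = synchrony(band_envelopes(False))
print(f"speech-like synchrony: {speech_like:.2f}")  # close to 1
print(f"noise-like synchrony: {noise_like:.2f}")    # close to 0
```

A detector of this kind would leave the noise reduction disengaged while the cross-band synchrony stays high (speech present) and allow gain reduction when it stays low, mirroring the Adapto behavior described in section 3.2.2.1.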

Figure 12. The decision rules of Oticon Syncro. The synchrony detector, the modulation detector, and the noise level detector determine the presence or the absence of speech and the relative level of speech and noise in the incoming signal. No gain reduction is applied to the frequency channels if speech-in-quiet is detected. If speech-in-noise is detected, the gain is reduced depending on the level of the noise, the modulation depth, and the articulation index weighting of the frequency channel. Maximum gain reduction is applied if noise-only is detected. (Reprinted with permission from Oticon, The Syncro Concept, 2004.)
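The three-way classification in Figure 12 can be caricatured in a few lines of code. This is a sketch of the published decision rules only, not Oticon's actual Optimization Equation: the detector thresholds, the 12 dB ceiling, and the scaling by modulation depth and articulation index (AI) weight are invented for illustration.

```python
def channel_gain_reduction_db(synchrony, modulation_depth_db,
                              noise_level_db, ai_weight,
                              max_reduction_db=12.0):
    """Per-channel gain reduction following the three-way classification:
    speech-in-quiet -> none; noise-only -> maximum; speech-in-noise ->
    scaled by modulation depth and the channel's AI weight."""
    speech_present = synchrony or modulation_depth_db > 6.0  # invented threshold
    noisy = noise_level_db > 50.0                            # invented threshold

    if speech_present and not noisy:   # speech in quiet
        return 0.0
    if not speech_present and noisy:   # noise only
        return max_reduction_db
    if speech_present and noisy:       # speech in noise: partial reduction
        depth_factor = max(0.0, 1.0 - modulation_depth_db / 20.0)
        return max_reduction_db * depth_factor * (1.0 - ai_weight)
    return 0.0                         # quiet: nothing to reduce

print(round(channel_gain_reduction_db(True, 15.0, 60.0, ai_weight=0.8), 2))  # 0.6: largely spared
print(round(channel_gain_reduction_db(False, 2.0, 70.0, ai_weight=0.2), 2))  # 12.0: maximum reduction
```

The qualitative behavior matches the figure: a well-modulated channel carrying AI-important speech is barely attenuated even in noise, whereas an unmodulated, noisy channel receives the maximum gain reduction.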


mines if the incoming signal has speech only, speech with noise, or noise only. Then it examines the current instrument settings, calculates the output of each of the three states, and decides the amount of gain reduction that should be applied to each frequency channel to maximize the SNR for that particular instance (Table 2). In general, the amount of gain reduction applied to each frequency channel depends on the modulation depth, the noise level, and the articulation index weighting of the frequency channel (Figure 12) (Flynn, 2004a; Oticon, 2004a).

3.2.3.2. Verification and Limitations

Syncro was recently introduced into the hearing aid market. No verification data on the effectiveness of its noise reduction algorithms are available.

3.2.4. Working with Noise Reduction Algorithms

First, many noise reduction algorithms are reported to enhance listening comfort and sound quality in noise. Clinicians need to be very careful not to project unrealistically high expectations that noise reduction algorithms can enhance speech understanding in broadband noise, which is typical of many daily listening situations. Unrealistic expectations often lead to user dissatisfaction or disappointment.

Second, most manufacturers provide a choice of the degree of noise reduction. It should be noted that a higher degree of noise reduction (i.e., a higher allowable maximum gain reduction) does not necessarily imply better sound quality or better speech understanding than a lower degree of noise reduction (Johns et al., 2002).

Third, noise reduction algorithms may misclassify music as noise because music generally exhibits a higher modulation rate than that of speech. Clinicians need to deactivate the noise reduction algorithms for music programs and advise the hearing aid user to use these noise-reduction-free listening programs to enhance music appreciation.

Fourth, clinicians need to choose appropriate test signals when checking the electroacoustic characteristics or when making real-ear measurements of hearing aids with noise reduction algorithms. Noise reduction algorithms may classify some of the conventional testing signals, such as pure tones or composite noise, as noise and reduce the gain of the test signal. Thus, these test signals generate gain/output frequency responses when the noise reduction algorithm is engaged. To obtain the frequency response of the hearing aid when the noise reduction algorithm is not engaged, clinicians can turn off the noise reduction algorithm feature. Yet this is not desirable because interactions among the signal processing algorithms may exist and alter the test results. To test the frequency response of the hearing aid with the noise reduction algorithm on but not engaged, the clinician needs to choose a test signal that is not considered noise by the noise reduction algorithm (e.g., digital speech noises in Frye Hearing Aid Analyzers and filtered speech or ICRA noise in AudioScan).

Acknowledgment

I sincerely thank Jennifer Groth at GNReSound; Drs David Fabry at Phonak, Mark Flynn at Oticon, Francis Kuk at Widex, and Michael Nilsson at Sonic Innovations; and Volkmar Hamacher and Tom Powers at Siemens Hearing for providing technical information on their hearing products and for checking the accuracy of the tables. I very much appreciate Dr Brent Edwards at Starkey and Steve Thompson at Knowles Electronics for explaining the properties of microphones and the mechanisms of wind noise detectors. I would also like to thank Drs Robert Novak and Jennifer Simpson, Jessica Daw, and the Doctor of Audiology students in the Department of Speech, Language and Hearing Sciences at Purdue University for their editorial help.

References

Agnew J. (1997). An overview of digital signal processing in hearing instruments. Hear Rev 4(7):8, 12, 16, 18, 66.
Agnew J, Block M. (1997). HINT thresholds for dual-microphone BTE. Hear Rev 4(26):29-30.
Agnew J, Thornton JM. (2000). Just noticeable and objectionable group delays in digital hearing aids. J Am Acad Audiol 11(6):330-336.
American Academy of Audiology (2003). Pediatric Amplification Protocol. http://www.audiology.org/professional/positions/pedamp.pdf. Last accessed Dec 2, 2004.
Amlani AM. (2001). Efficacy of directional microphone hearing aids: A meta-analytic perspective. J Am Acad Audiol 12(4):202-214.


Alcantara JL, Moore DP, Kuhnel V, et al. (2003). Evaluation of the noise reduction system in a commercial digital hearing aid. Int J Audiol 42(1):34-42.
Bachler H, Knecht WG, Launer S, et al. (1995). Audibility, intelligibility, sound quality and comfort. High Perform Hear Sol 2:31-36.
Baer T, Moore BCJ. (1994). Effects of spectral smearing on the intelligibility of sentences in the presence of interfering speech. J Acoust Soc Am 95:2277-2280.
Beck L. (1983). Assessment of directional hearing aid characteristics. Audiol Acoust 22:187-191.
Bentler RA, Palmer C, Dittberner AB. (2004a). Hearing-in-noise: Comparisons of listeners with normal and (aided) impaired hearing. J Am Acad Audiol 15(3):216-225.
Bentler RA, Tubbs JL, Egge JLM, et al. (2004b). Evaluation of an adaptive directional system in a DSP hearing aid. Am J Audiol 13(1):73-79.
Beranek LL. (1954). Acoustics. McGraw-Hill Electrical and Electronic Engineering Series. New York: McGraw-Hill.
Bilsen FA, Soede W, Berkhout AJ. (1993). Development and assessment of two fixed-array microphones for use with hearing aids. J Rehab Res Develop 30(1):73-81.
Bohnert A, Brantzen P. (2004). Experiences when fitting children with a digital directional hearing aid. Hear Rev 11(2):50, 52, 54-55.
Boymans M, Dreschler WA. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality. Audiology 39:260-268.
Boymans M, Dreschler W, Shoneveld P, et al. (1999). Clinical evaluation of a full-digital in-the-ear hearing instrument. Audiology 38(2):99-108.
Bray V, Nilsson M. (2001). Additive SNR benefits of signal processing features in a directional DSP aid. Hear Rev 8(12):48-51, 62.
Buerkli-Halevy O. (1987). The directional microphone advantage. Hearing Instruments 38(8):34-38.
Bunnel HT. (1990). On enhancement of spectral contrast in speech for hearing-impaired listeners. J Acoust Soc Am 88(6):2546-2556.
Cheng YM, O'Shaughnessy D. (1991). Speech enhancement based conceptually on auditory evidence. IEEE Transactions on Signal Processing 39:1943-1954.
Christensen LA, Helmink D, Soede W, et al. (2002). Complaints about hearing in noise: a new answer. Hear Rev 9(6):34-36.
Chung K, Zeng F-G, Waltzman S. (2004). Utilizing hearing aid directional microphones and noise reduction algorithms to improve speech understanding and listening preferences for cochlear implant users. Proceedings of the 8th International Cochlear Implant Conference. International Congress Series, Vol 1273, Nov. 2004, pp. 89-92.
Condie RK, Scollie SD, Checkley P. (2002). Children's performance: analog vs digital adaptive dual-microphone instruments. Hear Rev 9(6):40-43, 56.
Cord MT, Surr RK, Walden BE, et al. (2002). Performance of directional microphone hearing aids in everyday life. J Am Acad Audiol 13(6):295-307.
Cord MT, Surr RK, Walden BE, et al. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. J Am Acad Audiol 15:353-364.
Cox RM, Alexander GC. (1995). The abbreviated profile of hearing aid benefit. Ear Hear 16(2):176-186.
Crandell CC, Smaldino J. (2001). Improving classroom acoustics: utilizing hearing assistive technology and communication strategies in the educational setting. Volta Review 101:47-62.
Dillon H, Keidser G, O'Brien A, et al. (2003). Sound quality comparisons of advanced hearing aids. Hear J 56(4):30-40.
Dirks DD, Morgan DE, Dubno JR. (1982). A procedure for quantifying the effects of noise on speech recognition. J Speech Hear 47:114-123.
Dittberner AB. (2003). Page Ten: What's new in directional-microphone systems? Hear J 56(10):14-18.
Duquesnoy AJ. (1983). Effect of a single interfering noise or speech source on the binaural sentence intelligibility of aged persons. J Acoust Soc Am 74:739-743.
Edwards BW. (2000). Beyond amplification: Signal processing techniques for improving speech intelligibility in noise with hearing aids. Semin Hear 21(2):137-156.
Edwards BW, Struck CJ, Dharan P, Hou Z. (1998). New digital processor for hearing loss compensation based on the auditory system. Hear J 51(8):38-49.
Eisenberg LS, Dirks DD, Bell TS. (1995). Speech recognition in amplitude-modulated noise of listeners with normal and listeners with impaired hearing. J Speech Hear 38(1):222-233.
Elberling C. (2002). About the VoiceFinder. News from Oticon, January.
Etymotic Research (1996). FIG6 Hearing Aid Fitting Protocol. Operating manual. Elk Grove Village: Etymotic Research.
Fang X, Nilsson MJ. (2004). Noise reduction apparatus and method. US Patent No. 6,757,395 B1.
Festen JM, Plomp R. (1990). Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. J Acoust Soc Am 88:1725-1736.
Flynn M. (2004a). Maximizing the voice-to-noise ratio (VNR) via voice priority processing. Hear Rev 11(4):54-59.


Flynn M. (2004b). Clinical evidence for the benefits of Oticon Syncro. Oticon Syncro White Paper.
Fortune TW. (1997). Real-ear polar patterns and aided directional sensitivity. J Am Acad Audiol 8:119-131.
Gravel J, Fausel N, Liskow C, Chobot J. (1999). Children's speech recognition in noise using omni-directional and dual-microphone hearing aid technology. Ear Hear 20(1):1-11.
Hawkins DB, Yacullo WS. (1984). Signal-to-noise ratio advantage of binaural hearing aids and directional microphones under different levels of reverberation. J Speech Hear Disord 49:278-286.
Hellgren J, Lunner T, Arlinger S. (1999). Variations in the feedback of hearing aids. J Acoust Soc Am 106(5):2821-2833.
Henrickson LK, Frye G. (2003). Processing delay in digital hearing aids: Measurement and perception. Presentation at American Speech-Language-Hearing Association Convention, Chicago, IL.
Houtgast T, Steeneken HJM. (1985). A review of the MTF concept in room acoustics and its use for estimating speech intelligibility in auditoria. J Acoust Soc Am 77:1069-1077.
Jespersen CT, Olsen SO. (2003). Hearing research: Does directional benefit vary systematically with omnidirectional performance? Hear Rev 10(11):16, 18, 20, 22, 24, 62.
Johns M, Bray V, Nilsson M. (2002). Effective noise reduction. www.audiologyonline.com. Jan 03, 2003.
Killion MC. (1997a). Hearing aids, past, present, future: Moving toward normal conversations in noise. Br J Audiol 31:141-148.
Killion MC. (1997b). Circuits haven't solved the hearing-in-noise problem. Hear J 51(10):28-32.
Killion MC, Schulien R, Christensen L, et al. (1998). Real-world performance of an ITE directional microphone. Hear J 51:24-38.
Killion MC, Niquette PA. (2000). What can the pure-tone audiogram tell us about a patient's SNR loss? Hear J 53(3):46-53.
Kuk F. (1996). Subjective preference for microphone types in daily listening environments. Hear J 49(4):29-34.
Kuk F, Baekgaard L, Ludvigsen C. (2000). Design considerations in directional microphones. Hear Rev 7(9):58-63.
Kuk FK, Kollofski C, Brown S, et al. (1999). Use of a digital hearing aid with directional microphones in school-aged children. J Am Acad Audiol 10:535-548.
Kuk F, Keenan D, Lau C, Ludvigsen C. (2005). Performance of a fully adaptive directional microphone to signals presented from various azimuths. J Am Acad Audiol. Accepted for publication in June 2005.
Kuk F, Baekgaard L, Ludvigsen C. (2002a). Using digital signal processing to enhance the performance of dual microphones. Hear J 55(1):35-43.
Kuk F, Ludvigsen C, Paludan-Muller C. (2002b). Improving hearing aid performance in noise: Challenges and strategies. Hear J 55(4):34-46.
Laugesen S, Schmidtke T. (2004). Improving on the speech-in-noise problem with wireless array technology. News from Oticon, 3-23.
Latzel M, Kiessling J, Margolf-Hackl S. (2003). Optimizing noise suppression and comfort in hearing instruments. Hear Rev 10(3):76-82.
Lee LW, Geddes ER. (1998). Perception of microphone noise in hearing instruments. J Acoust Soc Am 104:41-56.
Lee L, Lau C, Sullivan D. (1998). The advantage of a low compression threshold in directional microphones. Hear Rev 5(8):30-32.
Leeuw AR, Dreschler WA. (1991). Advantages of directional hearing aid microphones related to room acoustics. Audiology 30(6):330-344.
Levitt H. (2001). Noise reduction in hearing aids: A review. J Rehab Res Dev 38(1):111-121.
Lewis MS, Crandell CC, Valente M, et al. (2004). Speech perception in noise: Directional microphones versus frequency modulation (FM) systems. J Am Acad Audiol 15:426-439.
Lurquin P, Rafhay S. (1996). Intelligibility in noise using multi-microphone hearing aids. Acta Otorhinolaryngol Belg 50:103-109.
Lurquin P, Delacressonniere C, May A. (2001). Examination of a multi-band noise cancellation system. Hear Rev 8(1):48-54, 60.
Macrae JH, Dillon H. (1996). An equivalent noise level criterion for hearing aids. J Rehab Res Dev 33:355-362.
Matsui G, Lemons T. (2001). A special report on new digital hearing instrument technology. Hear Rev 8(4 suppl):7-31.
Mueller HG. (2002). A candid round-table discussion on modern digital hearing aids and their features. Hear J 55(10):23-35.
Mueller HG, John RM. (1979). The effects of various front-to-back ratios on the performance of directional microphone hearing aids. J Am Audiol Soc 5:30-33.
Mueller HG, Grimes AM, Erdman SA. (1983). Directional microphone. Hear Instrum 34(2):14-16, 47-48.
Mueller HG, Ricketts TA. (2000). Directional-microphone hearing aids: An update. Hear J 53(5):10-19.
Neuman AC, Chung K, Bakke M, et al. (2002). The Directional Hearing Aid Analyzer: An In-situ Measurement System. Presentation at International Hearing Aid Conference, Lake Tahoe, CA.
Nielsen H, Ludvigsen C. (1978). Effects of hearing aids with directional microphones in different acoustic environments. Scand Audiol Suppl 7:217-224.
Nilsson MJ, Soli SD, Sullivan J. (1994). Development of a hearing in noise test for the measurement of speech reception threshold. J Acoust Soc Am 95:1985-1999.


Olsen HL, Hagerman B. (2002). Directivity of different hearing aid microphone locations. Int J Audiol 41:48-56.

Oticon (2004a). The Syncro Audiological Concept.

Oticon (2004b). Improving on the speech-in-noise problem with wireless array technology. News from Oticon.

Peters RW, Moore BCJ, Baer T. (1998). Speech reception thresholds in noise with and without spectral and temporal dips for hearing-impaired and normally hearing people. J Acoust Soc Am 103:577-587.

Plomp R. (1994). Noise, amplification and compression: Considerations of three main issues in hearing aid design. Ear Hear 15:2-12.

Powers TA, Hamacher V. (2002). Three-microphone instrument is designed to extend benefits of directionality. Hear J 55(10):38-45.

Powers T, Hamacher V. (2004). Proving adaptive directional technology works: A review of studies. Hear Rev 46:48-49, 69.

Powers T, Holube I, Wesselkamp M. (1999). The use of digital features to combat background noise. Hear Rev 3(suppl):36-39.

Preves D. (1999). Directional microphone use in ITE hearing instruments. Hear Rev 4(7):21-27.

Preves DA, Sammeth CA, Wynne MK. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument. J Am Acad Audiol 10(5):273-284.

Pumford JM, Seewald RC, Scollie S, et al. (2000). Speech recognition with in-the-ear and behind-the-ear dual microphone hearing instruments. J Am Acad Audiol 11:23-35.

Ricketts TA. (2000a). Impact of noise source configuration on directional hearing aid benefit and performance. Ear Hear 21:194-205.

Ricketts T. (2000b). Directivity quantification in hearing aids: Fitting and measurement effects. Ear Hear 21:45-58.

Ricketts TA. (2001). Directional hearing aids. Trends Amplif 5(4):139-176.

Ricketts TA, Dahr S. (1999). Aided benefit across directional and omni-directional hearing aid microphones for behind-the-ear hearing aids. J Am Acad Audiol 10(4):180-189.

Ricketts TA, Dittberner AB. (2002). Directional amplification for improved signal-to-noise ratio: Strategies, measurement, and limitations. In Valente M (ed): Strategies for Selecting and Verifying Hearing Aid Fittings, 2nd edition. New York: Thieme; 274-345.

Ricketts TA, Henry P. (2002). Evaluation of an adaptive directional-microphone hearing aid. Int J Audiol 41:100-112.

Ricketts T, Henry P, Gnewikow D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear Hear 24(5):424-439.

Ricketts TA, Hornsby BW. (2003). Distance and reverberation effects on directional benefit. Ear Hear 24(6):472-484.

Ricketts T, Lindley G, Henry P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear Hear 22(4):348-361.

Ricketts T, Mueller HG. (1999). Making sense of directional microphone hearing aids. Am J Audiol 8:117-127.

Ricketts T, Mueller HG. (2000). Predicting directional hearing aid benefit for individual listeners. J Am Acad Audiol 11(10):561-574.

Rosen S. (1992). Temporal information in speech: acoustic, auditory and linguistic aspects. Phil Trans R Soc Lond 336:367-373.

Schum D. (2003). Noise-reduction circuitry in hearing aids: Goals and current strategies. Hear J 56(6):32-40.

Schweitzer C. (1997). Development of digital hearing aids. Trends Amplif 2(2):41-77.

Siemens Audiology Group (2004). http://factsandfigures.hearing-siemens.com/englisch/allgemein/ueberblick_ido-hdo/triano/direktionales_mikro1.jsp. Last accessed November 20, 2004.

Soede W. (2000). The array mic designed for people who want to communicate in noise. Etymotic Research, Elk Grove, IL.

Soede W, Bilsen FA, Berkhout AJ, et al. (1993). Directional hearing aid based on array technology. Scand Audiol Suppl (Copen) 38:20-27.

Stone MA, Moore BCJ. (1999). Tolerable hearing aid delays. I. Estimation of limits imposed by the auditory path alone using simulated hearing losses. Ear Hear 20(3):182-192.

Stone MA, Moore BC. (2002). Tolerable hearing aid delays. II. Estimation of limits imposed during speech production. Ear Hear 23(4):325-338.

Stone MA, Moore BC. (2003). Tolerable hearing aid delays. III. Effects on speech production and perception of across-frequency variation in delay. Ear Hear 24(2):175-183.

Studebaker G, Cox R, Formby C. (1980). The effect of environment on the directional performance of head-worn hearing aids. In Studebaker G, Hochberg I (eds): Acoustical Factors Affecting Hearing Aid Performance. Baltimore, MD: University Park Press.

Surr RK, Walden BE, Cord MT, et al. (2002). Influence of environmental factors on hearing aid microphone preference. J Am Acad Audiol 13(6):308-322.

Tellier N, Arndt H, Luo H. (2003). Speech or noise? Using signal detection and noise reduction. Hear Rev 10(5):48-51.
Tillman TW, Carhart R, Olsen WO. (1970). Hearing aid efficiency in a competing speech situation. J Speech Hear Res 13(4):789-811.

Thompson SC. (1999). Dual microphones or directional-plus-omni: Which is best? In Kochkin S, Strom KE (eds): High Performance Hearing Solutions, 3 (Suppl) to Hearing Review 6(1):31-35.

Thompson SC. (2003). Tutorial on microphone technologies for directional hearing aids. Hear J 56(11):14-21.

Valente M. (1999). Use of microphone technology to improve user performance in noise. Trends Amplif 4(3):112-135.

Valente M, Mispagel KM. (2004). Performance of an automatic adaptive dual-microphone ITC digital hearing aid. Hear Rev 11(2):42-46, 71.

Valente M, Fabry D, Potts L. (1995). Recognition of speech in noise with hearing aids using dual microphones. J Am Acad Audiol 6:440-449.

Valente M, Fabry D, Potts L, Sandlin R. (1998). Comparing the performance of the Widex Senso digital hearing aid with analog hearing aids. J Am Acad Audiol 9(5):342-360.

Valente M, Schuchman G, Potts LG, Beck LB. (2000). Performance of dual-microphone in-the-ear hearing aids. J Am Acad Audiol 11(4):181-189.

Van Dijkhuizen JN, Festen JM, Plomp R. (1991). The effect of frequency-selective attenuation on the speech-reception threshold of sentences in conditions of low-frequency noise. J Acoust Soc Am 90(2 Pt 1):885-894.

Walden BE, Surr RK, Cord MT, et al. (2004). Predicting hearing aid microphone preference in everyday listening. J Am Acad Audiol 15:365-396.

Walden BE, Surr RK, Cord MT, et al. (2000). Comparison of benefits provided by different hearing aid technologies. J Am Acad Audiol 11(10):540-560.

Wouters J, Vanden Berghe J, Maj J-B. (2002). Adaptive noise suppression for a dual-microphone hearing aid. Int J Audiol 41:401-407.

Wouters J, Litere L, Van Wieringen A. (1999). Speech intelligibility in noisy environments with one and two microphone hearing aids. Audiology 38:91-98.
