
Echo suppressor

An echo suppressor or acoustic echo suppressor is a telecommunications device used to reduce the echo heard on long telephone circuits, particularly circuits that traverse satellite links. Echo suppressors were developed in the 1950s in response to the first use of satellites for telecommunications, but they have since been largely supplanted by better performing echo cancellers. Echo suppressors work by detecting a voice signal going in one direction on a circuit, and then inserting a great deal of loss in the other direction. Usually the echo suppressor at the far-end of the circuit adds this loss when it detects voice coming from the near-end of the circuit. This added loss prevents the speaker from hearing his own voice.
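The switching behaviour described above can be sketched as follows. This is a minimal illustration only: the frame size, energy threshold, and amount of inserted loss are assumptions made here, and real suppressors use more robust speech detectors plus hangover timers.

```python
# Minimal sketch of echo-suppressor switching logic (illustrative only).
# Assumes 8 kHz mono frames and a simple energy threshold as the voice detector;
# real suppressors use more robust level tracking and hangover timers.
import numpy as np

FRAME = 160                          # 20 ms at 8 kHz
VOICE_THRESHOLD = 1e-4               # mean-square energy treated as "voice present" (assumed)
SUPPRESSION_LOSS = 10 ** (-40 / 20)  # insert roughly 40 dB of loss when suppressing

def is_voice(frame: np.ndarray) -> bool:
    """Crude voice-activity decision based on frame energy."""
    return float(np.mean(frame ** 2)) > VOICE_THRESHOLD

def suppress(near_to_far: np.ndarray, far_to_near: np.ndarray) -> np.ndarray:
    """Attenuate the far-to-near path whenever the near-end talker is active,
    so the near-end talker does not hear their own delayed echo."""
    out = np.asarray(far_to_near, dtype=float).copy()
    for start in range(0, len(out) - FRAME + 1, FRAME):
        near = np.asarray(near_to_far[start:start + FRAME], dtype=float)
        if is_voice(near):
            out[start:start + FRAME] *= SUPPRESSION_LOSS
    return out
```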

Limitations
While effective, this approach leads to several problems:

Double-talk: It is fairly normal in conversation for both parties to speak at the same time, at least briefly. Because each echo suppressor will then detect voice energy coming from the far-end of the circuit, the effect would ordinarily be for loss to be inserted in both directions at once, effectively blocking both parties. To prevent this, echo suppressors can be set to detect voice activity from the near-end speaker and to refrain from inserting loss (or to insert a smaller loss) when both the near-end speaker and far-end speaker are talking. This, of course, temporarily defeats the primary purpose of having an echo suppressor at all.

Clipping: Since the echo suppressor is alternately inserting and removing loss, there is frequently a small delay when a new speaker begins talking that results in clipping the first syllable from that speaker's speech.

Dead-set: If the far-end party on a call is in a noisy environment, the near-end speaker will hear that background noise while the far-end speaker is talking, but the echo suppressor will suppress this background noise when the near-end speaker starts talking. The sudden absence of the background noise gives the near-end user the impression that the line has gone dead.

These effects may be frustrating for both parties to a call, although the suppressor effectively deals with echo. In response to this, AT&T Bell Labs developed echo canceler theory in the early 1960s, which then resulted in laboratory echo cancelers in the late 1960s and commercial echo cancelers in the 1970s.

Current uses
In modern times, the main use of an AES (over an AEC) lies in the VoIP sector. This is primarily because AECs require fast hardware, usually in the form of a digital signal processor (DSP). For the PC market, and especially for the embedded VoIP market, this cost in MHz comes at a premium. On embedded platforms, it is not unusual to find a wideband codec (such as AMR-WB or G.722) incorporated in place of an AEC. That said, many embedded VoIP solutions do have a fully functional AEC. Examples of AES in VoIP include X-Ten Eyebeam, X-Lite, and Skype.

Echo cancellation
The term echo cancellation is used in telephony to describe the process of removing echo from a voice communication in order to improve voice quality on a telephone call. In addition to improving subjective quality, this process increases the capacity achieved through silence suppression by preventing echo from traveling across a network.

Echo cancellation involves first recognizing the originally transmitted signal that re-appears, with some delay, in the transmitted or received signal. Once the echo is recognized, it can be removed by 'subtracting' it from the transmitted or received signal. This technique is generally implemented using a digital signal processor (DSP), but can also be implemented in software.

In telephony, "echo" is very much like what one would experience yelling in a canyon: a reflected, delayed copy of the original voice heard some time later. On a telephone, if the delay is fairly significant (more than a few hundred milliseconds), it is considered annoying. If the delay is very small (tens of milliseconds or less), the phenomenon is called sidetone and, while not objectionable to humans, can interfere with the communication between data modems.[citation needed]

In the earlier days of telecommunications, echo suppression was used to reduce the objectionable nature of echoes to human users. One person speaks while the other listens, and they speak back and forth. An echo suppressor attempts to determine which is the primary direction and allows that channel to go forward. In the reverse channel, it places attenuation to block or "suppress" any signal on the assumption that the signal is echo. Naturally, such a device is not perfect. There are cases where both ends are active, and other cases where one end replies faster than the echo suppressor can switch directions, so it cannot always keep the echo attenuated while allowing the remote talker to reply without attenuation.

Echo cancellers are the replacement for earlier echo suppressors that were initially developed in the 1950s to control echo caused by the long delay on satellite telecommunications circuits. Initial echo canceller theory was developed at AT&T Bell Labs in the 1960s.[1] The concept of an echo canceller is to synthesize an estimate of the echo from the talker's signal, and subtract that synthesis from the return path instead of switching attenuation into/out of the path. This technique requires adaptive signal processing to generate a signal accurate enough to effectively cancel the echo, where the echo can differ from the original due to various kinds of degradation along the way.

Rapid advances in the implementation of digital signal processing allowed echo cancellers to be made smaller and more cost-effective. In the 1990s, echo cancellers were implemented within voice switches for the first time (in the Northern Telecom DMS-250) rather than as standalone devices. The integration of echo cancellation directly into the switch meant that echo cancellers could be reliably turned on or off on a call-by-call basis, removing the need for separate trunk groups for voice and data calls. Today's telephony technology often employs echo cancellers in small or handheld communications devices via a software voice engine, which provides cancellation of either acoustic echo or the residual echo introduced by a far-end PSTN gateway system; such systems typically cancel echo reflections with up to 64 milliseconds of delay. Voice messaging and voice response systems which accept speech for caller input use echo cancellation while speech prompts are played, to prevent the system's own speech recognition from falsely recognizing the echoed prompts.

Examples of echo are found in everyday surroundings such as:

- Hands-free car phone systems
- A standard telephone or cellphone in speakerphone or hands-free mode
- Dedicated standalone "conference phones"
- Installed room systems which use ceiling speakers and microphones on the table
- Physical coupling (vibrations of the loudspeaker transfer to the microphone via the handset casing)

In most of these cases, direct sound from the loudspeaker (not the person at the far end, otherwise referred to as the talker) enters the microphone almost unaltered. The difficulties in cancelling echo stem from the alteration of the original sound by the ambient space. These changes can include certain frequencies being absorbed by soft furnishings, and reflection of different frequencies at varying strength. Since their invention at AT&T Bell Labs,[1] echo cancellation algorithms have been improved and honed. Like all echo cancelling processes, these first algorithms were designed to anticipate the signal which would inevitably re-enter the transmission path, and cancel it out. The acoustic echo cancellation (AEC) process works as follows:
1. A far-end signal is delivered to the system.
2. The far-end signal is reproduced.
3. The far-end signal is filtered and delayed to resemble the near-end signal.
4. The filtered far-end signal is subtracted from the near-end signal.
5. The resultant signal represents sounds present in the room excluding any direct or reverberated sound.
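One common way to realise steps 3 to 5 is a normalised LMS (NLMS) adaptive filter. The sketch below is illustrative only: the filter length and step size are arbitrary choices, not values taken from any particular product.

```python
# Sketch of steps 3-5 above using a normalised LMS (NLMS) adaptive filter.
# x: far-end samples (the loudspeaker feed); d: near-end microphone samples
# containing the acoustic echo. Parameters (taps, mu, eps) are illustrative.
import numpy as np

def nlms_echo_cancel(x, d, taps=256, mu=0.5, eps=1e-6):
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(taps)          # adaptive FIR model of loudspeaker, room and microphone
    e = np.zeros(len(d))        # residual (near-end sound minus the echo estimate)
    xbuf = np.zeros(taps)       # most recent far-end samples, newest first
    for n in range(len(d)):
        xbuf[1:] = xbuf[:-1]
        xbuf[0] = x[n]
        y = w @ xbuf            # step 3: filtered/delayed far-end signal (echo estimate)
        e[n] = d[n] - y         # step 4: subtract the estimate from the near-end signal
        w += (mu / (eps + xbuf @ xbuf)) * e[n] * xbuf  # adapt towards a smaller residual
    return e                    # step 5: what remains is the local (non-echo) sound
```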

The primary challenge for an echo canceller is determining the nature of the filtering to be applied to the far-end signal such that it resembles the resultant near-end signal. The filter is essentially a model of the loudspeaker, microphone, and the room's acoustical attributes. By using the far-end signal as the stimulus, modern systems use an adaptive filter and can 'converge' from nothing to 55 dB of cancellation in around 200 ms.[citation needed]

Until recently, echo cancellation only needed to apply to the voice bandwidth of telephone circuits. PSTN calls transmit frequencies between 300 Hz and 3 kHz, the range required for human speech intelligibility. Videoconferencing is one area where full-bandwidth audio is transceived. In this case, specialised products are employed to perform echo cancellation.

Echo suppression may have the side effect of removing valid signals from the transmission. This can cause audible signal loss that is called "clipping" in telephony, although the effect is more like a "squelch" than amplitude clipping. In an ideal situation, then, echo cancellation alone will be used. However, this is insufficient in many applications, notably software phones on networks with long delay and meager throughput. Here, echo cancellation and suppression can work in conjunction to achieve acceptable performance.

Modems
Echo control on voice-frequency data calls that use dial-up modems may cause data corruption. Some telephone devices disable echo suppression or echo cancellation when they detect the 2100 or 2225 Hz "answer" tones associated with such calls, in accordance with ITU-T Recommendation G.164 or G.165.

In the 1990s most echo cancellation was done inside the modems themselves, of type V.32 and later. In voiceband modems this allowed using the same frequencies in both directions simultaneously, greatly increasing the data rate. As part of connection negotiation, each modem sent line probe signals, measured the echoes, and set up its delay lines. Echoes in this case did not include long echoes caused by acoustic coupling, but did include short echoes caused by impedance mismatches in the 2-wire local loop to the telephone exchange.

After the turn of the century, DSL modems also made extensive use of automated echo cancellation. Though they used separate incoming and outgoing frequencies, these frequencies were beyond the voiceband for which the cables were designed, and they often suffered attenuation distortion due to bridge taps and incomplete impedance matching. Deep, narrow frequency gaps often resulted that could not be made usable by echo cancellation. These were detected and mapped out during connection negotiation.
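The answer-tone detection mentioned above can be sketched with a Goertzel filter. The frame length and threshold below are assumptions made for illustration; a compliant G.164/G.165 tone disabler would also check the tone's duration (and, where required, its periodic phase reversals).

```python
# Sketch of detecting the 2100 Hz "answer" tone that tells echo control to disable itself.
# Uses a Goertzel filter on 8 kHz audio; frame size and threshold are illustrative, and a
# compliant detector would also check tone duration (and phase reversals where required).
import numpy as np

def goertzel_power(frame: np.ndarray, freq_hz: float, fs: float = 8000.0) -> float:
    """Return the power of `frame` at `freq_hz` using the Goertzel recurrence."""
    k = 2.0 * np.cos(2.0 * np.pi * freq_hz / fs)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in frame:
        s = sample + k * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - k * s_prev * s_prev2

def looks_like_answer_tone(frame: np.ndarray, threshold_ratio: float = 0.8) -> bool:
    """Crude check: most of the frame's energy is concentrated at 2100 Hz."""
    total = float(np.sum(frame ** 2)) + 1e-12
    tone_fraction = goertzel_power(frame, 2100.0) / (0.5 * len(frame) * total)
    return tone_fraction > threshold_ratio

t = np.arange(400) / 8000.0
print(looks_like_answer_tone(np.sin(2 * np.pi * 2100.0 * t)))  # True for a clean 2100 Hz tone
```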

Crosstalk
In electronics, crosstalk (XT) is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit, part of a circuit, or channel, to another.

Crosstalk in cabling
In structured cabling, crosstalk can refer to electromagnetic interference from one unshielded twisted pair to another twisted pair, normally running in parallel.
Near end crosstalk (NEXT): Interference between two pairs in a cable, measured at the same end of the cable as the interfering transmitter.[1]
Power sum near end crosstalk (PSNEXT): A NEXT measurement which includes the sum of the crosstalk contributions of all adjacent pairs.[1]
Far end crosstalk (FEXT): Interference between two pairs of a cable, measured at the other end of the cable with respect to the interfering transmitter.[1]
Equal level far end crosstalk (ELFEXT): An FEXT measurement with attenuation compensation.[1]
Alien crosstalk (AXT): Interference caused by other cables routed close to the cable of interest.[2]
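Power-sum quantities such as PSNEXT combine the individual pair-to-pair figures on a power basis. The short sketch below shows that combination; the dB values in the example are invented purely for illustration.

```python
# Sketch of the "power sum" in PSNEXT: individual NEXT losses (in dB, higher is better)
# from each disturbing pair are converted to linear power, summed, and converted back.
# The example values are made up purely for illustration.
import math

def power_sum_db(next_losses_db):
    """Combine per-pair NEXT loss figures into a single power-sum loss in dB."""
    total_power = sum(10 ** (-loss / 10.0) for loss in next_losses_db)
    return -10.0 * math.log10(total_power)

# NEXT loss of the victim pair against each of the three other pairs in a 4-pair cable:
print(power_sum_db([42.0, 45.0, 47.0]))  # PSNEXT comes out a few dB worse than the worst pair
```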

Other examples
In telecommunication or telephony, crosstalk is often distinguishable as pieces of speech or signaling tones leaking from other people's connections.[3] If the connection is analog, twisted pair cabling can often be used to reduce the effects of crosstalk. Alternatively, the signals can be converted to digital form, which is much less susceptible to crosstalk.

In wireless communication, crosstalk is often denoted co-channel interference, and is related to adjacent-channel interference.[clarification needed]

In integrated circuit design, crosstalk normally refers to a signal affecting another nearby signal. Usually the coupling is capacitive, and to the nearest neighbor, but other forms of coupling and effects on signals further away are sometimes important, especially in analog designs. See signal integrity for tools used to measure and prevent this problem, and substrate coupling for a discussion of crosstalk conveyed through the integrated circuit substrate. There is a wide variety of possible fixes, with increased spacing, wire re-ordering, and shielding being the most common.

In stereo audio reproduction, crosstalk can refer to signal leaking across from one program channel to another. This is an electrical effect and can be quantified with a crosstalk measurement.

In full-field optical coherence tomography, "crosstalk" refers to the phenomenon that, due to highly scattering objects, multiple scattered photons reach the image plane and generate a coherent signal after traveling a path length that matches that of the sample depth to within a coherence length.

In stereoscopic 3D displays, "crosstalk" refers to the incomplete isolation of the left and right image channels so that one leaks or bleeds into the other - like a double exposure, which produces a ghosting effect.

Time-division multiplexing
Time-division multiplexing (TDM) is a type of digital (or rarely analog) multiplexing in which two or more bit streams or signals are transferred apparently simultaneously as sub-channels in one communication channel, but physically take turns on the channel. The time domain is divided into several recurrent time slots of fixed length, one for each sub-channel. A sample, byte, or data block of sub-channel 1 is transmitted during time slot 1, sub-channel 2 during time slot 2, etc. One TDM frame consists of one time slot per sub-channel plus a synchronization channel and sometimes an error correction channel before the synchronization. After the last sub-channel, error correction, and synchronization, the cycle starts all over again with a new frame, starting with the second sample, byte, or data block from sub-channel 1, etc.

It is often practical to combine a set of low-bit-rate streams, each with a fixed and pre-defined bit rate, into a single high-speed bit stream that can be transmitted over a single channel. This technique is called time division multiplexing (TDM) and has many applications, including wireline telephone systems and some cellular telephone systems. The main reason to use TDM is to take advantage of existing transmission lines. It would be very expensive if each low-bit-rate stream were assigned a costly physical channel (say, an entire fiber optic line) that extended over a long distance.

Consider, for instance, a channel capable of transmitting 192 kbit/sec from Chicago to New York. Suppose that three sources, all located in Chicago, each have 64 kbit/sec of data that they want to transmit to individual users in New York. As shown in Figure 7-2, the high-bit-rate channel can be divided into a series of time slots, and the time slots can be alternately used by the three sources. The three sources are thus capable of transmitting all of their data across the single, shared channel. Clearly, at the other end of the channel (in this case, in New York), the process must be reversed (i.e., the system must divide the 192 kbit/sec multiplexed data stream back into the original three 64 kbit/sec data streams, which are then provided to three different users). This reverse process is called demultiplexing.

Figure 7-2: Time division multiplexing.
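A byte-interleaved version of the Chicago-to-New York example can be sketched as below. The frame layout here carries one byte per sub-channel and omits the synchronization and error-correction slots for simplicity, so it illustrates only the interleaving itself.

```python
# Sketch of byte-interleaved TDM for three 64 kbit/sec sources sharing a 192 kbit/sec link.
# Each frame carries one byte per sub-channel; sync and error-correction slots are omitted
# for simplicity, so this is an illustration of the interleaving only.

def tdm_multiplex(streams: list[bytes]) -> bytes:
    """Interleave equal-length byte streams: one byte from each stream per frame."""
    frames = zip(*streams)                     # frame n = (s1[n], s2[n], s3[n], ...)
    return bytes(b for frame in frames for b in frame)

def tdm_demultiplex(channel: bytes, n_streams: int) -> list[bytes]:
    """Reverse the interleaving at the far end (the demultiplexer in New York)."""
    return [channel[i::n_streams] for i in range(n_streams)]

sources = [b"AAAA", b"BBBB", b"CCCC"]          # three 64 kbit/sec sources in Chicago
line = tdm_multiplex(sources)                  # b"ABCABCABCABC" on the shared 192 kbit/sec line
assert tdm_demultiplex(line, 3) == sources     # recovered for the three users in New York
```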

Choosing the proper size for the time slots involves a trade-off between efficiency and delay. If the time slots are too small (say, one bit long) then the multiplexer must be fast enough and powerful enough to be constantly switching between sources (and the demultiplexer must be fast enough and powerful enough to be constantly switching between users). If the time slots are larger than one bit, data from each source must be stored (buffered) while other sources are using the channel. This storage will produce delay. If the time slots are too large, then a significant delay will be introduced between each source and its user. Some applications, such as teleconferencing and videoconferencing, cannot tolerate long delays.

Figure 7-3: Time division multiplexing on a T1 line.

Figure 7-4: Multiplexing input lines with different transmission speeds.

Figure 7-5: Statistical TDM.

Figure 7-6: Structure of a typical statistical TDM packet.

Frequency Division Multiplexing


In many communication systems, a single, large frequency band is assigned to the system and is shared among a group of users. Examples of this type of system include:

1. A microwave transmission line connecting two sites over a long distance. Each site has a number of sources generating independent data streams that are transmitted simultaneously over the microwave link.

2. AM or FM radio broadcast bands, which are divided among many channels or stations. The stations are selected with the radio dial by tuning a variable-frequency filter. (We examined AM and FM in Chapter 6.)

3. A satellite system providing communication between a large number of ground stations that are separated geographically but that need to communicate at the same time. The total bandwidth assigned to the satellite system must be divided among the ground stations.

4. A cellular radio system that operates in full-duplex mode over a given frequency band. The earlier cellular telephone systems, for example AMPS, used analog communication methods. The bandwidth for these systems was divided into a large number of channels. Each pair of channels was assigned to two communicating end-users for full-duplex communications.

Frequency division multiplexing (FDM) means that the total bandwidth available to the system is divided into a series of nonoverlapping frequency sub-bands that are then assigned to each communicating source and user pair. Figures 7-7a and 7-7b show how this division is accomplished for a case of three sources at one end of a system that are communicating with three separate users at the other end. Note that each transmitter modulates its source's information into a signal that lies in a different frequency sub-band (Transmitter 1 generates a signal in the frequency sub-band between 92.0 MHz and 92.2 MHz, Transmitter 2 generates a signal in the sub-band between 92.2 MHz and 92.4 MHz, and Transmitter 3 generates a signal in the sub-band between 92.4 MHz and 92.6 MHz). The signals are then transmitted across a common channel.


Figure 7-7a: A system using frequency division multiplexing.

Figure 7-7b: Spectral occupancy of signals in an FDM system.

At the receiving end of the system, bandpass filters are used to pass the desired signal (the signal lying in the appropriate frequency sub-band) to the appropriate user and to block all the unwanted signals. To ensure that the transmitted signals do not stray outside their assigned sub-bands, it is also common to place appropriate passband filters at the output stage of each transmitter. It is also appropriate to design an FDM system so that the bandwidth allocated to each sub-band is slightly larger than the bandwidth needed by each source. This extra bandwidth, called a guardband, allows systems to use less expensive filters (i.e., filters with fewer poles and therefore less steep rolloffs).
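A scaled-down numerical sketch of this arrangement follows. The carrier frequencies are reduced from the 92 MHz example so the simulation stays small, and the message tones, sample rate, and filter orders are arbitrary illustrative choices.

```python
# Numerical sketch of FDM: three message signals are placed in adjacent sub-bands and one
# of them is recovered with a bandpass filter plus coherent detection. Frequencies are
# scaled down from the 92 MHz broadcast example; all numeric choices are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000.0                                  # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
carriers = [20_000.0, 25_000.0, 30_000.0]       # one carrier per source, 5 kHz sub-band spacing
messages = [np.cos(2 * np.pi * f * t) for f in (300.0, 500.0, 700.0)]

# Transmitters: each source modulates its own carrier; the channel carries the sum.
channel = sum(m * np.cos(2 * np.pi * fc * t) for m, fc in zip(messages, carriers))

# Receiver for user 2: bandpass filter selecting its sub-band, then coherent demodulation.
b, a = butter(4, [23_000.0, 27_000.0], btype="bandpass", fs=fs)
subband = filtfilt(b, a, channel)
baseband = subband * np.cos(2 * np.pi * carriers[1] * t)
b_lp, a_lp = butter(4, 2_000.0, btype="low", fs=fs)
recovered = 2 * filtfilt(b_lp, a_lp, baseband)  # approximately messages[1]
print(np.max(np.abs(recovered[2000:-2000] - messages[1][2000:-2000])))  # small residual error
```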

FDM has both advantages and disadvantages relative to TDM. The main advantage is that unlike TDM, FDM is not sensitive to propagation delays. Channel equalization techniques needed for FDM systems are therefore not as complex as those for TDM systems. Disadvantages of FDM include the need for bandpass filters, which are relatively expensive and complicated to construct and design (remember that these filters are usually used in the transmitters as well as the receivers). TDM, on the other hand, uses relatively simple and less costly digital logic circuits. Another disadvantage of FDM is that in many practical communication systems, the power amplifier in the transmitter has nonlinear characteristics (linear amplifiers are more complex to build), and nonlinear amplification leads to the creation of out-of-band spectral components that may interfere with other FDM channels. Thus, it is necessary to use more complex linear amplifiers in FDM systems.


Grade of service
In telecommunication engineering, and in particular teletraffic engineering, the quality of voice service is specified by two measures: the grade of service (GoS) and the quality of service (QoS).

Grade of service is the probability of a call in a circuit group being blocked or delayed for more than a specified interval, expressed as a vulgar fraction or decimal fraction. This is always with reference to the busy hour, when the traffic intensity is the greatest. Grade of service may be viewed independently from the perspective of incoming versus outgoing calls, and is not necessarily equal in each direction or between different source-destination pairs.

On the other hand, the quality of service which a single circuit is designed or conditioned to provide, e.g. voice grade or program grade, is called the quality of service. Quality criteria for such circuits may include equalization for amplitude over a specified band of frequencies or, in the case of digital data transported via analogue circuits, equalization for phase. Criteria for mobile quality of service in cellular telephone circuits include the probability of abnormal termination of the call.

What is Grade of Service and how is it measured?


When a user attempts to make a telephone call, the routing equipment handling the call has to determine whether to accept the call, reroute the call to alternative equipment, or reject the call entirely. Rejected calls occur as a result of heavy traffic loads (congestion) on the system and can result in the call either being delayed or lost. If a call is delayed, the user simply has to wait for the traffic to decrease; if a call is lost, it is removed from the system.[1]

The Grade of Service is one aspect of the quality a customer can expect to experience when making a telephone call.[2] In a loss system, the Grade of Service is described as the proportion of calls that are lost due to congestion in the busy hour.[3] For a lost-call system, the Grade of Service can be measured using Equation 1:[4]

GoS = (number of lost calls) / (number of offered calls)

For a delayed call system, the Grade of Service is measured using three separate terms:[1]

The mean delay of delayed calls: describes the average time a user spends waiting for a connection if their call is delayed.
The overall mean delay: describes the average time a user spends waiting for a connection, whether or not their call is delayed.
The probability that a user may be delayed longer than time t while waiting for a connection. Time t is chosen by the telecommunications service provider so that they can measure whether their services conform to a set Grade of Service.

Where and when is Grade of Service measured?


The Grade of Service can be measured using different sections of a network. When a call is routed from one end to another, it will pass through several exchanges. If the Grade of Service is calculated based on the number of calls rejected by the final circuit group, then the Grade of Service is determined by the final circuit group blocking criteria. If the Grade of Service is calculated based on the number of rejected calls between exchanges, then the Grade of Service is determined by the exchange-to-exchange blocking criteria.[1] The Grade of Service should be calculated using both the access networks and the core networks as it is these networks that allow a user to complete an end-to-end connection.[4] Furthermore, the Grade of Service should be calculated from the average of the busy hour traffic intensities of the 30 busiest traffic days of the year. This will cater for most scenarios as the traffic intensity will seldom exceed the reference level.

The grade of service is a measure of the ability of a user to access a trunk system during the busiest hour. The busy hour is based upon customer demand at the busiest hour during a week, month, or year.

Class of Service
Different telecommunications applications require different Qualities of Service. For example, if a telecommunications service provider decides to offer different qualities of voice connection, then a premium voice connection will require a better connection quality compared to an ordinary voice connection. Thus different Qualities of Service are appropriate, depending on the intended use. To help telecommunications service providers to market their different services, each service is placed into a specific class. Each Class of Service determines the level of service required.[4] To identify the Class of Service for a specific service, the network's switches and routers examine the call based on several factors. Such factors can include:[2]

- The type of service and priority due to precedence
- The identity of the initiating party
- The identity of the recipient party

Quality of Service in broadband networks


In broadband networks, the Quality of Service is measured using two criteria. The first criterion is the probability of packet losses or delays in already accepted calls. The second criterion refers to the probability that a new incoming call will be rejected or blocked. To avoid the former, broadband networks limit the number of active calls so that packets from established calls will not be lost due to new calls arriving. As in circuit-switched networks, the Grade of Service can be calculated for individual switches or for the whole network.[5]

Erlang's lost call assumptions: To calculate the Grade of Service of a specified group of circuits or routes, Agner Krarup Erlang used a set of assumptions that relied on the network losing calls when all circuits in a group were busy. These assumptions are:[4]

- All traffic through the network is pure-chance traffic, i.e. all call arrivals and terminations are independent random events.
- There is statistical equilibrium, i.e. the average number of calls does not change.
- There is full availability of the network, i.e. every outlet from a switch is accessible from every inlet.
- Any call that encounters congestion is immediately lost.

From these assumptions Erlang developed the Erlang-B formula which describes the probability of congestion in a circuit group. The probability of congestion gives the Grade of Service experienced.[4]

Calculating the Grade of Service


To determine the Grade of Service of a network when the traffic load and number of circuits are known, telecommunications network operators make use of Equation 2, the Erlang-B equation:[4]

B = (A^N / N!) / (A^0/0! + A^1/1! + ... + A^N/N!)

where A = expected traffic intensity in erlangs and N = number of circuits in the group. This equation allows operators to determine whether each of their circuit groups meets the required Grade of Service, simply by monitoring the reference traffic intensity. For delay networks, the Erlang-C formula allows network operators to determine the probability of delay depending on peak traffic and the number of circuits.
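The Erlang-B equation is usually evaluated with a numerically stable recurrence rather than with the factorials directly; a sketch of that, together with the Erlang-C probability of delay mentioned above, is shown below. The traffic figures in the example are invented.

```python
# Erlang-B blocking probability (the Grade of Service of a lost-call group), computed with
# the usual numerically stable recurrence, plus the Erlang-C probability that a call is
# delayed. The example traffic figures are made up for illustration.

def erlang_b(traffic_erlangs: float, circuits: int) -> float:
    """Probability that an offered call finds all `circuits` busy (Erlang-B)."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def erlang_c(traffic_erlangs: float, circuits: int) -> float:
    """Probability that an offered call has to wait (Erlang-C); valid for A < N."""
    b = erlang_b(traffic_erlangs, circuits)
    rho = traffic_erlangs / circuits
    return b / (1.0 - rho + rho * b)

print(erlang_b(20.0, 30))   # GoS of 30 circuits offered 20 E of busy-hour traffic (just under 1%)
print(erlang_c(20.0, 30))   # probability of delay for the same load in a delay system
```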

EPABX (Electronic Private Automatic Branch Exchange)


The electronic private automatic branch exchange (EPABX) is equipment that has made day-to-day working in offices much simpler, especially in the area of communication. The EPABX may be defined as a switching system that provides both the internal and external switching functions of an organisation. The selection of an EPABX is a difficult task and requires deep knowledge of the traffic pattern of the office. By using an EPABX, both the internal and external needs of the organisation are fully served. With the advent of powerful microprocessors and advancements in the field of computers, the EPABX now offers versatile features: for example, a hotline can be established between the boss and their immediate subordinates.

Call transfer and call forwarding are further features that give users mobility. Auto-conferencing and automatic redialling of numbers found engaged on the first attempt are some of the other advancements in EPABX features. The selection of an EPABX for an organisation should be preceded by a thorough study of the needs of the office. The exchange should support features like voice DISA and auto attendant; this helps in doing away with a receptionist or an attendant. Further, the specifications should ensure inbuilt paging, auto fax homing, hot outward dialing, remote dialing, remote servicing, and auto shut dynamic lock.

What is a PABX exchange


PABX stands for Private Automatic Branch Exchange. In practice it refers to a telephone exchange used for business or office applications, as opposed to one operated by a common carrier or telephone company for many different businesses or for the general public. PABX systems are used to make connections among the internal telephones of a private organization or institution, and are generally used for business-oriented applications. A PABX is also connected to the public switched telephone network through trunk lines; in practice this is called the PSTN line. Because the system interconnects telephones, fax machines, modems, and many other devices, the usual term "extension" refers to an end point on the branch.

(i)Basic Blocks Of The PABX exchange:


The PABX exchange uses normal wire connections for its telecommunication system, but it also uses optical fiber lines. The basic blocks of the PABX exchange are given below.

(ii)TNT Input lines

For the PABX exchange, 8 input lines are taken from the TNT. These lines are rented, at about 100 taka per line, and they form the input of the AX controller. Another 8 lines are also used as inputs to the AX controller, but those lines are carried to the building through optical fiber.

(iii)AX Controller

The AX controller is the major part of the PABX exchange. It mainly converts the input TNT lines for use in the telecommunication system. There are 12 ONSP cards inside the AX controller, which is made in China. The ONSP cards of the AX controller are divided into 2 parts: the 1st part consists of 3 ONSP cards, which have yellow-colored input and output lines, and the 2nd part consists of 9 ONSP cards, which have gray-colored output lines.

(iv)ONSP Card

The ONSP card is the major component of the AX controller. An ONSP card can take a maximum of 4 inputs from outside and give a maximum of 12 or 24 outputs. The 1st 3 ONSP cards can give 12 output lines each, and the remaining 9 ONSP cards can give 24 output lines each. These lines are connected through wiring to the telephone sets in the different rooms of the university.

(v)ASU
The ASU is a supporting device for the AX controller. There are 2 ONSP cards in the ASU, and these 2 ONSP cards can provide a maximum of 24 output lines.

(vi)Calculation of Total Number of Lines

Another 300 lines are managed in much the same way for internal communication.
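As a rough cross-check of the figures quoted in this section, the tally below adds up the stated card capacities. The card counts and the 12/24 lines per ONSP card come from the text above, but treating each of the 2 ASU cards as supplying 24 lines is an assumption made here so that the numbers can be summed.

```python
# Rough tally of the output lines quoted in this section. The 3 + 9 split of ONSP cards and
# the per-card capacities are taken from the text; counting 24 lines per ASU card is an
# assumption made here for illustration.
yellow_cards = 3 * 12      # first group of ONSP cards, 12 lines each
grey_cards = 9 * 24        # second group of ONSP cards, 24 lines each
asu_lines = 2 * 24         # ASU cards, assumed 24 lines each
print(yellow_cards + grey_cards + asu_lines)   # 300 internal lines in total
```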

(vii)Maintenance
The whole PABX exchange system is maintained by navigator software, which is mainly IP based. If any problem occurs, it is resolved through this software. If anyone wants to call outside the system area, they first call the operator sitting in the exchange room, who then sets up the desired connection. If a caller does not know the called subscriber's number, they also call the exchange room to find the number.


m-derived filter
m-derived filters or m-type filters are a type of electronic filter designed using the image method. They were invented by Otto Zobel in the early 1920s.[1] This filter type was originally intended for use with telephone multiplexing and was an improvement on the existing constant k type filter.[2] The main problem being addressed was the need to achieve a better match of the filter into the terminating impedances. In general, all filters designed by the image method fail to give an exact match, but the m-type filter is a big improvement with a suitable choice of the parameter m. The m-type filter section has a further advantage in that there is a rapid transition from the cut-off frequency of the pass band to a pole of attenuation just inside the stop band. Despite these advantages, there is a drawback with m-type filters: at frequencies past the pole of attenuation, the response starts to rise again, and m-types have poor stop band rejection. For this reason, filters designed using m-type sections are often designed as composite filters with a mixture of k-type and m-type sections, and with different values of m at different points, to get the optimum performance from both types.

Fig.: m-derived series general filter half section.
Fig.: m-derived shunt low-pass filter half section.

The building block of m-derived filters, as with all image impedance filters, is the "L" network, called a half-section and composed of a series impedance Z and a shunt admittance Y. The m-derived filter is a derivative of the constant k filter. The starting point of the design is the values of Z and Y derived from the constant k prototype, which satisfy

k² = Z/Y

where k is the nominal impedance of the filter, or R0. The designer now multiplies Z and Y by an arbitrary constant m (0 < m < 1). There are two different kinds of m-derived section: series and shunt. To obtain the m-derived series half section, the designer determines the impedance that must be added to 1/(mY) to make the image impedance ZiT the same as the image impedance of the original constant k section; the required additional impedance can be derived from the general formula for image impedance.[9]

To obtain the m-derived shunt half section, an admittance is added to 1/(mZ) to make the image impedance Zi the same as the image impedance of the original half section; the required additional admittance can be derived in the same way.[10]

The general arrangements of these circuits are shown in the diagrams to the right along with a specific example of a low pass section. A consequence of this design is that the m-derived half section will match a k-type section on one side only. Also, an m-type section of one value of m will not match another m-type section of another value of m except on the sides which offer the Zi of the k-type.[11]
Operating frequency

For the low-pass half section shown, the cut-off frequency of the m-type is the same as that of the k-type and is given by

ωc = 2/√(LC)

The pole of attenuation occurs at

ω∞ = ωc/√(1 - m²)
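As a worked illustration, the sketch below computes element values for a series m-derived low-pass section built on a constant-k prototype. The prototype formulas L = R0/(π·fc) and C = 1/(π·R0·fc), and the m-derived element values (series arms of mL/2 with a shunt branch of mC in series with (1 - m²)L/(4m)), are standard textbook results assumed here rather than taken from this document; the numerical values are arbitrary.

```python
# Sketch: element values for a series m-derived low-pass T-section built on a constant-k
# prototype. The prototype formulas L = R0/(pi*fc), C = 1/(pi*R0*fc) and the m-derived
# element values (mL/2 series arms; shunt branch of mC in series with (1-m^2)L/(4m)) are
# standard textbook results assumed here; R0, fc and m are arbitrary example values.
import math

R0 = 600.0        # nominal impedance k (ohms)
fc = 3400.0       # cut-off frequency (Hz)
m = 0.6           # a common choice, placing the attenuation pole just above cut-off

# Constant-k prototype (full-section values)
L = R0 / (math.pi * fc)             # henries
C = 1.0 / (math.pi * R0 * fc)       # farads

# Series m-derived T-section elements
series_arm = m * L / 2.0                      # each series inductor
shunt_cap = m * C                             # capacitor in the shunt branch
shunt_ind = (1.0 - m ** 2) * L / (4.0 * m)    # inductor in series with the shunt capacitor

f_infinity = fc / math.sqrt(1.0 - m ** 2)     # pole of attenuation, just inside the stop band
print(series_arm, shunt_cap, shunt_ind, f_infinity)
```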


Constant k filter
Constant k filters, also k-type filters, are a type of electronic filter designed using the image method. They are the original and simplest filters produced by this methodology and consist of a ladder network of identical sections of passive components. Historically, they are the first filters that could approach the ideal filter frequency response to within any prescribed limit with the addition of a sufficient number of sections. However, they are rarely considered for a modern design, the principles behind them having been superseded by other methodologies which are more accurate in their prediction of filter response.

Constant k low-pass filter half section. Here the inductance L is equal to Ck².

Constant k band-pass filter half section. L1 = C2k² and L2 = C1k².

The building block of constant k filters is the half-section "L" network, composed of a series impedance Z and a shunt admittance Y. The "k" in "constant k" is the value given by[6]

k² = Z/Y

Thus, k will have units of impedance, that is, ohms. It is readily apparent that in order for k to be constant, Y must be the dual impedance of Z. A physical interpretation of k can be given by observing that k is the limiting value of Zi as the size of the section (in terms of the values of its components, such as inductances, capacitances, etc.) approaches zero, while keeping k at its initial value. Thus, k is the characteristic impedance, Z0, of the transmission line that would be formed by these infinitesimally small sections. It is also the image impedance of the section at resonance, in the case of band-pass filters, or at ω = 0 in the case of low-pass filters.[7] For example, the pictured low-pass half-section has Z = jωL and Y = jωC, so that

k = √(L/C)

Elements L and C can be made arbitrarily small while retaining the same value of k. Z and Y, however, are both approaching zero, and from the formulae (below) for image impedances, both image impedances approach k in this limit.


Image impedance

The image impedances of the section are given by[8]

ZiT = √(Z/Y) · √(1 + ZY/4)

and

ZiΠ = √(Z/Y) / √(1 + ZY/4)

Provided that the filter does not contain any resistive elements, the image impedance in the pass band of the filter is purely real and in the stop band it is purely imaginary. For example, for the pictured low-pass half-section,[9]

ZiT = √(L/C) · √(1 - ω²LC/4)

The transition occurs at a cut-off frequency given by

ωc = 2/√(LC)

Below this frequency, the image impedance is real,

ZiT = k √(1 - (ω/ωc)²)

Above the cut-off frequency the image impedance is imaginary,

ZiT = ±jk √((ω/ωc)² - 1)
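A small numerical illustration of these relations follows. R0 and fc are arbitrary example values, and the element formulas L = R0/(π·fc) and C = 1/(π·R0·fc) for the constant-k low-pass prototype are standard results assumed here rather than taken from this document.

```python
# Numerical illustration of the constant-k low-pass relations above: the image impedance
# Z_iT is real (close to k at low frequency) below cut-off and purely imaginary above it.
# R0 and fc are arbitrary example values; the prototype element formulas are assumed.
import cmath, math

R0 = 600.0                              # nominal impedance k (ohms)
fc = 3400.0                             # desired cut-off frequency (Hz)
L = R0 / (math.pi * fc)                 # full-section inductance
C = 1.0 / (math.pi * R0 * fc)           # full-section capacitance

def z_iT(f_hz: float) -> complex:
    """Mid-series image impedance k*sqrt(1 - (w/wc)^2), evaluated as a complex number."""
    w = 2.0 * math.pi * f_hz
    wc = 2.0 / math.sqrt(L * C)
    return R0 * cmath.sqrt(1.0 - (w / wc) ** 2)

print(z_iT(300.0))    # well inside the pass band: essentially real, close to 600 ohms
print(z_iT(3000.0))   # near cut-off: still real but well below 600 ohms
print(z_iT(5000.0))   # in the stop band: purely imaginary
```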

