
The basics of solid state devices.

Aug 1, 1995 12:00 PM, DeDad, John A.


A device frequently used in solid-state equipment such as lighting dimmers and variable speed
controls is the silicon controlled rectifier (SCR). As shown in Fig. 1, it's a semiconductor device
having three electrodes: an anode, a cathode, and a gate. An SCR's anode and cathode are similar
to those of an ordinary semiconductor diode. (See Back To Basics - Part 1, April 1995 issue for
detailed discussion of a diode.)
How does an SCR differ from a diode? Well, for one thing, it has the aforementioned gate
electrode, which is its control point. (More about this later.) For another, it will not pass
significant current, even when forward biased, unless the anode voltage equals or exceeds the
forward breakover voltage. Once this breakover voltage is reached, an SCR will switch ON and
become highly conductive.
SCR characteristic curve
When an SCR is reverse biased and its gate is not connected, its voltage-current
characteristic curve is as shown in Fig. 2. (See Back To Basics-Part 1, April 1995 issue for
discussion of forward and reverse bias.) In this mode, an SCR operates like a regular zener or
avalanche diode. (See Back To Basics-Part 2, May 1995 issue.) In other words, there is a small
amount of current flow until avalanche is reached, after which the current increases dramatically.
And, as is the case with a zener diode, this current can cause damage if thermal runaway begins.
When an SCR is forward biased, there's a small current, the forward blocking current. This
current will stay relatively constant, at least until the forward blocking voltage is reached. At this
point, which is called the forward avalanche region, the current will increase rapidly. Here, an
SCR's resistance is very small. In fact, an SCR acts the same as a closed switch here, with the
current limited only by any external load resistance. As such, a short in an SCR's load circuit will
destroy the SCR if inadequate overload protection is provided.
Gate control
As mentioned earlier, an SCR works much like a mechanical switch: it's either ON or OFF. When
the applied voltage on an SCR reaches or exceeds its forward breakover voltage (V_BRF), the SCR
fires (is ON). It will stay ON as long as the current stays above the holding current value; it will
turn OFF when the voltage across it drops to a value too low to maintain the holding current.
How does an SCR's gate electrode come into play here? Well, when the gate is forward biased
and current begins to flow in the gate-cathode junction, V_BRF is reduced. The higher the
forward bias, the lower the breakover voltage needed to get the SCR to conduct. This is shown in Fig. 3.

Once an SCR is turned ON by its gate current, the gate loses control of the SCR's forward
current. Even with its gate current completely removed, an SCR will stay ON until its anode
voltage is removed, or until the anode voltage is reduced enough that the current can no longer
maintain the holding current level.
SCR applications
Basically, an SCR is used as a DC switch because of its many advantages over mechanical DC
switching. These include arcless switching, low forward voltage drop, rapid switching time, and
no moving parts. An SCR can be used for AC switching, although two SCRs are needed.
Varying power to a load is perhaps an SCR's most prominent application. This is because of its
ability to turn ON at different points in its conducting cycle, which makes it useful for varying the
amount of power delivered to a load. This type of variable control is called phase control. Don't
confuse the term "phase" as used here with that pertaining to power distribution systems. Here,
"phase" refers to the time relationship between two events, in this case, between trigger pulse and
the point in the conducting cycle at which the pulse occurs.
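To make the idea of phase control concrete, here is a minimal sketch (ours, not from the original article) that estimates the fraction of power delivered to a purely resistive load by a single SCR in a half-wave phase-control circuit as a function of the firing (trigger) angle. The formula follows from the RMS value of the conducted portion of a sine wave; the load type and the angles chosen are assumptions for illustration only.

    import math

    def halfwave_power_fraction(alpha_deg):
        """Fraction of maximum half-wave power delivered to a resistive load
        when the SCR is triggered at angle alpha (0 deg = start of the positive
        half-cycle, 180 deg = no conduction at all)."""
        a = math.radians(alpha_deg)
        # P(alpha)/P(0) = [(pi - alpha) + sin(2*alpha)/2] / pi
        return ((math.pi - a) + math.sin(2 * a) / 2) / math.pi

    for alpha in (0, 45, 90, 135, 180):
        print(f"firing angle {alpha:3d} deg -> {halfwave_power_fraction(alpha):.0%} of full power")

Delaying the trigger pulse (a larger firing angle) therefore reduces the delivered power smoothly from 100% down to zero, which is exactly the dimming or speed-control behavior described above.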
Testing an SCR
You can "rough" test SCRs using an ohmmeter and a test circuit, as shown in Fig. 4, and the
following steps. If an SCR does not respond as indicated for each of these steps, it's defective
and should be replaced.
Step 1. Set the ohmmeter on the "R x 100" scale. Connect the ohmmeter's negative lead to the
SCR's cathode and its positive lead to the SCR's anode. The ohmmeter should read infinity.
(Resistance will actually be over 250,000 ohms.)
Step 2. Close the switch. This will short circuit the gate to the anode. The ohmmeter should read
almost zero ohms. (Resistance will actually be about 10 to 50 ohms; this range of readings will
not register on the "R x 100" scale.) Open the switch and the ohmmeter should still read zero
ohms.
Step 3. Reconnect the ohmmeter's leads: positive lead to the SCR's cathode and negative lead
to the SCR's anode. The ohmmeter should read infinity. (Resistance will actually be over
250,000 ohms.)
Step 4. Close the switch. This will short circuit the gate to the anode. The resistance reading
should remain high because the SCR is reverse-biased and, therefore, can't conduct.
Step 5. Open the switch. The resistance should remain high because the SCR is reversed-biased
and has no gate current.

1996 NEC issued with zone concept.

Aug 1, 1995 12:00 PM


The hearing
The Council held a hearing on the question on July 19, 1995. Speaking in favor of sustaining the
floor action and including the zone concept were Don Zipse and Craig Wellman, who had
submitted formal complaints to the Council. Under NFPA rules, a complaint is "any request
submitted in writing to the Standards Council for a reversal or modification of any action taken
by any Technical Committee, Technical Correlating Committee, the Association, or the Standards
Council, at any time in the document development process."
They, along with others, made the point that the concept was well supported by widespread
international experience. Due to world-wide acceptance, it would promote international
harmonization of standards. It could increase safety in the high-hazard Zone 0 areas, where
power wiring is prohibited.
Bill Wusinich, representing the IBEW, had also submitted a formal complaint, in his case
opposing the floor amendment. He was joined by brother union member Jim Naughton and also
by Joe Ross of NEMA and Lon Ballard of Crouse Hinds. They argued that there is significant
confusion over the concept that could lead to improper installations. They questioned if the
proposal was completely thought through. For example, Article 501 is about five times the length
of new Article 505 on the same subject. Joe Pipkin, who works for OSHA and sits on the
Correlating Committee, expressed the opinion that OSHA wouldn't go along with the new
approach.
Although the panel had said the new approach shouldn't be intermixed with traditional
classification procedures, the actual text doesn't match that intention. For example, Sec. 501-11
provides specific rules for mixers that travel in and out of open mixing vats, which is normally a
Zone 0 location (hazardous vapors routinely present in ignitable concentrations); would
designers be able to describe this as Class I, Div. 1 and then classify immediately adjacent areas
as Class I Zone 1 simply based on which classification is more convenient, or even jointly
classify the same area?
The concept has been discussed for over 25 years, beginning with a 1969 proposal to include it in
the 1971 NEC. The arguments at that time are striking in how closely they mirror
contemporary views on the same topic. Those in favor mentioned international trade and
analogized to the subdivision of hazardous locations in the 1947 NEC (the advent of Div. 1 and
Div. 2) so the tightest requirements would be targeted at areas with the greatest hazard. Those
opposed questioned whether the effort was really needed, since the present system had proved its
safety.
Don Zipse pleaded with the Council to proceed, noting that Article 780 ("Smart House") went
into the Code before any facilities using those concepts had ever been built. If that could go
ahead, why not a concept that had been used for a quarter of a century in extensive areas of the

world? Surely if there were problems, and he acknowledged that there were, they could be
resolved over the normal course of the standards making process, using Tentative Interim
Amendments (TIAs) if necessary.
The decision
The Council voted to uphold the action at the Annual Meeting, and to correlate that action with
parallel action on related comments. The result is that a number of public comments (14-13, -30,
-45, -171a, -174, -174a, -177, -178, -179, -182), which had been reported as "reject" are now in
place as accepted at the panel meeting.
In its decision, the Council noted the long history of this proposal in the NEC revision process
and the widespread use of the concept throughout much of the world. The Council noted that at
the end of the process, it had received the support of both the Association membership and the
Code Making Panel. Therefore, the Council declared that technical consensus had been achieved.
The Council noted that the Correlating Committee had not cited any technical objections to the
concept, but that instead the basis for its objections lay in correlation problems with other
actions. The Council's actions on the related proposals resolved some of those issues, and the
others will be addressed in future actions.
In his complaint, Mr. Wellman also raised the issue of how the Council judges Technical Committee
balance (particularly on CMP 14). He felt that the user classification should be allotted greater
representation (up to 50%). The user members of CMP 14 had consistently voted unanimously in
favor of the new Zone concept, and they had been thwarted by other interests at times.
The Council voted to deny this portion of the complaint and to reaffirm support for the present
procedures, with no interest group having more than one third of a committee. The Council
concluded that "the need for a balance of interests on committees continues to be served by this
rule."
The issues in this appeal are so important, and the opposing sides so intransigent, that it is a virtual
certainty that the Council decision will be appealed to the NFPA Board of Directors. If that
happens, the 1996 NEC will still be printed as released by the Council, but a disclaimer will be
printed in the front of the book notifying users of the pending appeal.

Troubleshooting signal attenuation in a CCTV system.

Aug 1, 1995 12:00 PM, Lewis, Warren H.

This case history shows how decibel knowledge, a handheld oscilloscope, and intuitive thinking
can be used to solve a troublesome problem.
Now that we have a somewhat complete understanding of the decibel and how it applies to
various types of signals, we can apply this knowledge, along with the many modern test
instruments available today, in troubleshooting any electronic equipment operation problems that
may come our way.
As an example of how this may work, let's review a case history involving closed circuit
television, coaxial cable, and shoddy workmanship.
Background site information
We were called in by a client who complained about a very poor picture generated by a security
camera on a closed circuit television (CCTV) video monitor on a simple point-to-point link. The
picture had degraded from good to unusable over a short period of time and really got bad in the
few days following a recent lightning and rainstorm. The general opinion was that lightning had
somehow caused the problem.
Upon inspection, we found that the CCTV system consisted of National Television System
Committee (NTSC) standard video generated and displayed in black-and-white. The main
monitoring point of the CCTV system was a loading dock; a camera in a weatherproof housing,
mounted on a 10-ft high post atop the building's roof, was used to monitor that location.
The camera was connected to a video monitor in a security shack some 200 ft away via a 75 ohm
coaxial cable, which was routed down into the shack by means of a vent pipe type of entry.
Power to the camera was provided via a low-voltage DC link on another coaxial cable, which
was installed right alongside the coaxial cable used to transport the video signal. Both the
monitor and camera low-voltage DC supplies were simply plugged into a wall outlet convenient
to the operator at the guard's shack.
Symptom
The picture on the video monitor looked as though someone had turned the contrast control all
the way in one direction so that there was no contrast at all; the picture looked washed-out and
was barely visible on the screen, which was a nearly uniform light gray.
Testing procedures carried out
Our first task was to go up to the camera at the roof and see what the video signal looked like as
it exited the camera. This test was much aided by the fact that our handheld, 50 MHz bandwidth,
solid state, digitizing oscilloscope with LCD display had an internal battery pack and did not
require any AC power for operation.
First test. First, the coaxial cable was disconnected at the camera and a BNC style "TEE" fitting
was installed. This fitting was equipped with a 75 ohm terminator resistor on one leg. Then, we
connected our handheld oscilloscope into the remaining open end of the TEE. The result, as

shown in Fig. 1, was a healthy NTSC composite video signal. Conclusion: the camera was
clearly putting out a good signal, which was about 1.8V peak-to-peak across the 75 ohm
termination. (There is also a DC component with the AC video signal.)
We then placed our handheld oscilloscope into its meter mode, took the above signal at the camera
(into the 75 ohm load) as a zero dB reference, and stored it into memory. This is shown in
Fig. 2, where +000.1 dBV DC is taken as being close enough to zero to do the job. Now, the
"good" signal right out of the camera was available to be used again and again as a comparison
with signals we would measure at different locations. We then would be able to see how much
loss of signal occurred along the path, all of which was supposed to be a consistent 75 ohms.
The TEE was removed and the 75 ohm coaxial cable was reconnected.
Second test. The next test was made at the video monitor end of the cable and right at the point
where the cable was connected to the monitor. Again, the TEE was used, but this time no 75 ohm
termination resistor was used with it since the TEE was attached to both the monitor and the
cable. Thus, there was a fairly good 75 ohm load on everything. The result of this test was that
almost no video signal could be seen on our handheld oscilloscope's screen.
We then changed the oscilloscope's vertical scale from 500 mV/cm to 100 mV/cm and another
measurement was taken, which is shown in Fig. 3. As you can see, the video signal is simply
attenuated but does not appear to be distorted in any way that is easy to see. Conclusion: the
signal loss was occurring along the 75 ohm cable path, or was it?
A video monitor has been known to "load down" a signal due to an internal failure in its input
circuit; as such, we didn't want to rule this possibility out. A quick test with the TEE and the 75
ohm termination resistor in place of the video monitor quickly ruled out this possibility; the
signal was essentially unchanged from that shown in Fig. 3. Now, we really could conclude that
the signal's loss was occurring along the 75 ohm cable.
Third test. We next placed our handheld oscilloscope into its meter-mode, while maintaining the
connection to the TEE at the junction of the cable and video monitor. This allowed us to take a
relative dB measurement reading, as shown in Fig. 4, using the original zero-level as the
reference. (Remember, we did this at the camera end to establish a comparison reference.) Here
we see that a -13.7 dB loss exists. This loss represents a voltage ratio of 4.84:1; in other words,
only about 1V arrives at the monitor for every 4.84V put into the cable!
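As a quick check of that arithmetic (our own illustration, not part of the original test procedure), converting a relative dB reading back into a voltage ratio is a one-line calculation:

    import math

    def db_to_voltage_ratio(db):
        """Convert a relative decibel reading to a voltage ratio (dB = 20*log10(V1/V2))."""
        return 10 ** (abs(db) / 20)

    loss_db = -13.7   # relative reading taken at the monitor end of the cable
    print(f"{abs(loss_db)} dB corresponds to a voltage ratio of about "
          f"{db_to_voltage_ratio(loss_db):.2f}:1")
    # -> 13.7 dB corresponds to a voltage ratio of about 4.84:1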
How much signal attenuation should you expect on a 200-ft long, 75 ohm coaxial cable? A quick
look at the coaxial cable manufacturer's Master Catalog gave us the approximate answer: around
2 dB of loss at 10 MHz for 200 ft of RG-59/U type cable as used in CATV applications. The
whole attenuation chart is shown in the accompanying table below.
What we were seeing in this path was more than 11 dB loss over and above that stated in the
manufacturer's literature. Also, the baseband video we were looking at shouldn't have a lot of
really high-frequency content in it; thus, the cable probably shouldn't attenuate as much as 2 dB
(per manufacturer's literature) for 200 ft in any case.
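To put the numbers side by side (our own illustration, using the roughly 2 dB per 200 ft catalog figure quoted above as the assumed baseline), the excess loss works out as follows:

    # Assumed baseline: ~2 dB per 200 ft of RG-59/U at 10 MHz (catalog figure quoted above).
    catalog_loss_db = 2.0       # expected loss for this 200-ft run
    measured_loss_db = 13.7     # relative loss measured at the monitor end

    excess_db = measured_loss_db - catalog_loss_db
    print(f"expected ~{catalog_loss_db} dB, measured {measured_loss_db} dB, "
          f"excess ~{excess_db:.1f} dB")
    # -> expected ~2.0 dB, measured 13.7 dB, excess ~11.7 dB

That excess of nearly 12 dB is what pointed the investigation away from normal cable attenuation and toward a physical fault along the run.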

Oh yes, since the manufacturer's information was provided only in dB form, what would we have
done if we didn't understand dB and weren't working in terms of dB on our handheld
oscilloscope? You guessed it. We would have had no idea what was "normal" and what was not
on a coaxial cable run of the type being investigated. All we would have had was some
guesswork, which is not a very good way to go in most cases.
Further analysis
What was happening on the cable? The BNC connector at the video monitor end was inspected
and it looked OK, except that it seemed to be a little wet after it was handled and the cable was
flexed.
Following this, the same examination of the BNC at the camera end also failed to show any
problems. We also made sure that the connections were well protected from the environment by
the camera's enclosure.
Was the moisture a clue? Was it significant, or not? Past experience with coaxial cables with
water inside of them showed that this condition caused severe signal attenuation.
Back to the rooftop we went to make a closer examination of the 75 ohm cable and its route back
to the video monitor. First, we looked at the vent pipe, the rooftop penetration through which the
cable was passed. We found that it was not equipped with a weatherhead and that the cable was
simply stuck down into it from its open top. Sealing was done with some kind of putty or
caulking material and it looked as though it was really dried-out. Thus, water (from the storm,
remember?) could follow the cable's sheath down into the building around the bad seal.
But how did this condition let the water into the cable? We pulled the sealing material out of the
vent pipe and then hauled the coaxial cable up out of it. About 10 feet down, we found a
connection made up of two BNC fittings and a male-male adapter. The end of the cable going
into the bottom BNC fitting from the building was mostly pulled out of the connector and the
braid/sheath was fully exposed to any water flowing down the cable from above. In fact, the
arrangement was a pretty good funnel for the water to flow into the cable between the outer
sheath/shield and the inner dielectric material. Corrosion was also rampant in the damaged
connector set since it had not been sealed from moisture in any way. Obviously, this was not
good for reliable signal transport.
Where did this splice come from? After a little discussion with the personnel, we learned that the
camera came from the factory with about 10 ft of cable. Rather than throw this cable away, it was
simply kept in place and used by connecting it to the end of the cable being routed from the
video monitor. There was no explanation as to why such a poor rooftop penetration was made;
nobody would own up to it while we were there.
Solution
The whole existing 75 ohm coaxial cable run was replaced with one continuous length. Where
this cable came from and what its quality was, we didn't know and couldn't find out; it might

have been surplus stock from somewhere (World War II?). After installation of the "new" cable,
the signal at the video monitor end was again checked with the test TEE and the monitor in
place. This signal is shown in Fig. 5. Here we see that there is still some attenuation, but nowhere
near as much as before.
Again, using our handheld oscilloscope in its meter-mode, we made a relative dB measurement
reading using the original zero-level as the reference. The "new" cable's signal loss, as shown in
Fig. 6, is about -4.5 dB. Compared to the previous dB measurement readings (Figs. 2 and 4),
this amount of signal loss is acceptable in this application, as was evidenced by the good picture
on the video monitor.
ARTIFAX
EC&M article:
"An Introduction To The Decibel," July 1995. Cast: Article cost $9.95. Order No. 2258. Orders
are taken via facsimile machines only. To order by fax dial 800-234-5709. (Have a credit card
and your fax number ready when you dial by fax.)
Warren H. Lewis is President of Lewis Consulting Services, Inc., San Juan Capistrano, Calif. and
Honorary Chairman of EC&M's Harmonics and Power Quality Steering Committee.

Providing four megs of power to protect computers.

Aug 1, 1995 12:00 PM, Bender, Gayland J.
Fast track design and construction, thorough testing, and adjustments as needed provide power
assurance to client's data operations.
A strong concern for power reliability at a data site is satisfied by an electrical design that
includes installed on-site power, dual primary feeders, and a full UPS with N+1 redundancy of
components (no single point of failure causing system shutdown). The on-site power consists of
two engine-generator (EG) sets, with each generator serving as backup to one of the dual primary
feeders. These same generators also can operate in parallel with utility service.
How the system works
The peaking/emergency electrical system is designed for five modes of operation. A review of
the system one-line diagram shown on page 24 will help you understand these modes.

Normal off-line. In this mode, the generators are not operating. The 13.8kV feeder breakers are
closed, energizing the client's step-down transformers from the respective utility feeders. The
13.8kV bus tie breaker is open. The generator systems must be in an automatic switching posture
during this mode.
Peaking mode. In this mode, the generators are operating in parallel with the utility's 13.8kV
system to provide peak power. The 13.8kV feeder breakers as well as the 480V generator
breakers are closed. Alternatively, one 13.8kV feeder breaker is open, the other closed, and the
13.8kV bus tie breaker is closed. The generator's controls are operating to provide fixed power
output (usually equal to the prime rating) as well as adjusting the power factor (remote
controllable, usually set to near unity).
Feeder transferred mode. In this mode, the generators may or may not be operating. The power
system configuration calls for one of the 13.8kV feeder breakers to be open, the other closed, and
the 13.8kV bus tie breaker closed, thus supplying both the client's transformers from one utility
primary feeder.
Emergency stand-by power mode. In this mode, the generators are operating asynchronously from the
utility's 13.8kV system, providing power to the client. The 13.8kV feeder breakers and bus tie
breaker are open. The 480V generator breakers are closed. This mode is initiated automatically
as a result of loss of normal power to the client (loss of both of the utility feeders). The generator
controls regulate the power output and the power factor. In this mode, the generators are not
operating in parallel and the client's tie breaker between Busses MSB-1 and 2 is open.
Manual mode. During this mode, the 13.8kV switchgear may be placed in any configuration
desired, and the generators may be run manually, through local control switches. All automatic
control aspects of the switchgear system and the engine generator sets are suspended.
Benefits of flexible operation
The option of parallel operation of each of the two generator sets with each of the two primary
feeders was a main design objective and meets several operational conditions. First, parallel
operation allows both generators to supply power to a single utility feeder in peaking mode. If,
while operating in the peaking mode with power flowing into the utility grid, utility power is lost
on one of the primary feeders, the engine generator controls switch to an emergency stand-by
mode. The affected 13.8kV feeder breaker opens. If an outlet exists through the other utility
13.8kV feeder, the 13.8kV bus tie breaker is synchronized closed, paralleling the two generator
sets on the healthy utility feeder. Each primary feeder is sized to handle the entire 4000kVA load.
This methodology allows maximum power delivery to the utility grid during a time when the
utility may be in great need of power generation.
An emergency 13.8kV loop feeder is provided between the client's two 2000kW transformers in
the event one of the primary feeders between the EG set switchgear and client's respective
transformer is lost. This loop feeder can be quickly connected between the transformer with the
healthy feeder and the other transformer after the dead feeder is disconnected. Both transformers
contain loop-feed bushings on the primary with the emergency feeder "parked" in each

transformer primary section. The use of this emergency loop feeder allows connection of the
client's entire load to the two EG sets in the event of total loss of utility power. When the
generators are paralleled, procedures allow for synchronization and equal load sharing of the
client's load.
While these abnormal conditions may be very unlikely, the criticality of the data site's load and
the utility's need for generated power during peak loads easily justifies the added expenses of
providing for parallel operation of the generators.
Electrical design parameters
Several important considerations impacted the electrical design of the peaking/emergency
electrical system.
Load factors. The facility was able to shed data processing load down to a required continuous
operation limit of 2000kW. As such, 2000kW was the lowest module of power requiring
continuous service to the data site. This estimated demand led to the selection of two 1825kW
prime (2000kW standby) rated EGs.
On-site power generation. During discussions with the local utility at the initial design stages, a
dispersed generation program (on-site power) was identified. This program provides for the
utility company to construct, own, and operate prime rated EG capacity located on a client's
property for the dual purpose of providing standby emergency power to the client as well as
prime peaking power to the utility. This program is part of an overall demand side management
objective of the local utility and one that is carried out with concurrence of the State Utility
Commission. A fixed dollar amount per prime kW is invested by the utility, with the remaining costs
contributed by the utility's customer. All operational and maintenance costs of the EG sets,
including fuel, are borne by the local utility, which has ownership rights to the equipment. The
customer has purchase rights after an agreed-upon time, normally 20 to 25 years.
System voltage. The serving utility had a standing agreement with a local dealer of EG sets for
complete system construction, assembly, and delivery to site of such units. The dealer had
successful experience with these installations.
The original intent was to order the generators with 13.8kV output to match the incoming
primary feeder voltage for parallel operation. The EG manufacturer, however, didn't offer a
13.8kV set. Because they were available and could be promptly delivered, and because the sets
had proven reliability, two 2596-hp turbo-charged diesels were specified through the utility, each
connected to a 480V generator. This selection required that the generator output voltage be
stepped up to 13.8kV for direct parallel connection to the utility grid. Thus, two 2000kVA step-up
transformers were needed to bring the 480V generator power up to the 13.8kV level. These
mineral oil-filled pad-mounted transformers were installed simultaneously with the 15kV
switchgear. Each transformer is connected in series with its respective generator set and power
breaker in the switchgear.

The 13.8kV primary power is then connected to two facility-owned, 2000kVA, 13.8kV/480V,
pad-mounted transformers located adjacent to the data site.
Meeting site conditions
The two EG sets are each housed in separate weatherproof insulated enclosures, which were
constructed by a local specialty switchgear shop. The EG sets were shipped directly to the local
shop and fully assembled with controls inside the enclosures. The enclosures contain electric
heating for the rugged Minnesota winters and ample ventilation for the hot and humid summers.
Each EG set contains a 3000-gal bladder tank located in the sub-base, with double wall
containment in the event of a leak. External sound attenuation hoods and oversized internal
radiators are included to reduce the running noise while keeping the engines cool in midsummer. It was determined from previous experience that more ground-level noise is generated
by high tip speed of the radiator fan blades than from the muffled exhaust; hence, sound
attenuation hoods were provided. All assemblies were delivered to the site on flatbed trailers,
hoisted in place with a large crane, and installed within days.
In the meantime, the 15kV class switchgear, which includes utility-grade relays and meters,
programmable logic controllers, drawout circuit breakers, monitors, gauges, etc., was constructed
by a local specialty switchgear shop. It was installed inside a separate weatherproof enclosure,
interconnected, and tested before leaving the shop. The entire assembly was delivered to the site
and set on a prepoured concrete pad within a matter of hours. To expedite construction, the
concrete pads, primary cables, and manholes were constructed just prior to delivery of the
switchgear and step-up transformers.
Equipment scheduling
The purchase order for the 13.8kV, twin peaking, 4000kW, generator-transformer system was
placed with the local engine generator dealer on March 15. This dealer subcontracted the
electrical apparatus (transformers, switchgear, etc.) with various other dealers. The schedule
called for switchgear and transformers to be delivered to the site by June 25 and switchgear in
service by July 16 of the same year. The EG sets also were to be delivered to the site by July 16.
The full system was to be in service by August 27. This schedule provided the client with
permanent dual-primary power by mid-July, and gave it additional standby EG-set power by the
end of August.
Fortunately, the state's Pollution Control Agency (PCA) (see sidebar on page 30) granted
preapproval for concrete pad construction in May, with final permit approval granted June 1.
This was a very aggressive schedule requiring a high degree of coordination among many
parties. A rainy spring nearly ruined the underground primary feeder installation and concrete
pads construction schedule. However, the team approach to construction paid off to everyone's
benefit: the schedule was met and the project came in within budget.

The local utility was especially cooperative and provided specialists from several of its divisions
to help assure timely delivery of equipment by working with the engine generator vendor, by
assisting with the installation of equipment, and by performing some testing services.
Testing the components
Testing of the switchgear was initiated even before it left the assembly shop. Once in place and
connected to the utility primary feeders, load transfer testing was done on each feeder
individually. With the client's scheduled relocation date near, the switchgear was permanently
connected to the client's step-down transformers before the generators were on site. This way, the
client's regular load could be powered up, the data processing equipment connected for a trial
run, and internal adjustment procedures carried out.
Shortly after permanent power was established, the EG sets were transferred to the site and
hoisted in place. The generators were load bank tested before final connection to the switchgear.
Following testing of the EG sets, the next step was to cut over the generator feeders to the
switchgear and test the entire system simulating an actual power outage. By this time, the client's
data site was up and running. Therefore, any planned outage had to occur between the hours of
12:30 a.m. and 6:00 a.m., one Sunday a month, when the client was performing internal data
processing maintenance. And, there had to be advance notification.
The first full system test and cutover occurred on an early Sunday morning in mid-August. A
detailed time-based schedule of events and testing sequence was prepared, with alternate backup
routines established in the event the actual systems test resulted in equipment failure or damage.
Temporary site lighting was set up and backup personnel placed on call. As each feeder was
cut over, the power transfer sequence was tested first using a dummy 120V source of power; then
each primary feeder was connected to the system.
In mid October, the generators were run in parallel with the utility for 10 hrs at full load output,
including multiple start/stop sequences. Since this operation did not require an outage, the
generators were run during normal hours.
On October 31, the generators were again tested early in the morning. A voltage potential of 59V
was detected on Phase C to ground at the No. 2 generator output, and the test was halted. Subsequent
investigation pointed to a C-phase ground detection lamp with an incorrect voltage rating, which
created a low impedance path to ground, as the suspected source of the problem. This caused the bulb to burn
out, clearing the low impedance path before any protective devices operated. The problem was
corrected by using a higher wattage ground detection resistor and matched ground detection
lamps having equal voltage and wattage. At this point, the system still had not been tested with
the primary feeders actually shut off. Loss of utility power had been simulated by opening the
incoming power breaker.
Finally, on the early morning of December 19, the two primary feeders were sequentially
interrupted at their respective riser poles, resulting first with total load transfer to the remaining
primary feeder and then independent load transfer to respective generators upon startup and
stabilization. No. 1 generator started and picked up its respective load of about 600kW within 24

sec. No. 2 generator failed to start right away and was manually restarted after a quick fuel-mixture adjustment was made. It also then picked up its respective load.
Unfortunately, No. 2 generator exhibited a load imbalance: 500kW on Phase A, 300kW on Phase
B, and 450kW on Phase C. Since the actual load was a balanced 3-phase load, something was
wrong within the on-site generation power transfer system. After about 30 min of generator run
time, the individual loads were automatically retransferred back to the utility and the generators
initiated orderly shutdown. Follow-up investigation revealed that the B-phase fuse connecting
the generator to the switchgear bus had become disconnected from its holding clip.
The team learned from this project that, even with thorough systems testing prior to leaving the
factory or assembly shop, and with extensive testing on site prior to actual load pickup, small
unanticipated problems can still occur when the real load is transferred during an actual loss of
utility power. Therefore, you should specify and demand full systems operational testing under
all scenarios, including pulling the plug on the incoming utility feeders. Monthly manual
exercising of the plant under no load, but with maintenance personnel on site and alert, will also
allow timely identification of nuisance problems without risking the loss of power to a critical
load.
It was during this same cold December morning testing that the need for sound attenuation hoods
on the generator enclosures was fully identified. Previous generator testing in late summer and
fall occurred with leaves on the trees and other landscape vegetation helping to absorb the sound.
But with all vegetation gone and the air still and cold, sound is transmitted long distances. It's not
uncommon for generators to have a loading less than their full ratings, as occurred with this
project as well. Under lightly loaded conditions, the engine exhaust noise is lower than at full
load. However, the radiator fan blades rotate the same speed regardless of load. It's the noise of
the fan blades (high tip speed) that created the need for the sound attenuation hoods. This is the
same condition that causes an airplane's propeller to sound so much louder on engine run up and
takeoff on a cold, still day.
Monitoring helps client keep track of status of equipment
The facility receives limited operational information directly from the switchgear for each of the
two power modules for the following conditions.
* Run in peak mode.
* Run in standby mode.
* Fire detection alarm.
* System abnormal summary alarm.
* Programmable controller failure.

This information is imported directly to the facility monitoring system along with electrical load
data, which includes voltage, kW load, kVA load, and power factor for each power module. This
allows operating personnel to know what's going on with the power and assist the utility in
monitoring local site conditions.
A review of the facility's incoming power monitoring logs showed utility loss on one of the
primary feeders 14 times between June of one year and April of the following year. Twelve of
these outages occurred during the months of June through August. The other primary feeder
experienced eight outages during the same time period, with five of these during the June-through-August summer months.
Recent utility power outage
On a recent Sunday, at 7:42 a.m., another outage occurred. One of the local utility's primary
feeder cables faulted inside the utility main circuit breaker cubicle, causing the respective
substation feeder breaker to trip. Upon reclosure of the substation breaker, the nearby utility pole
cutouts opened due to the faulted cable. As a precaution, the remaining feeder's breaker was
opened upon hearing some crackling sounds coming from its cubicle.
Unfortunately, the utility had the engine-generator controls in lockout mode, which prevented
automatic start of each of the generator sets. The customer started preparing for an orderly
shutdown of the datasite because UPS battery power was rapidly being used up and because
computer rooms were getting hot due to loss of cooling.
The utility was immediately contacted upon loss of power and they quickly switched the
generator sets into automatic operation. Within 15 min of initial loss of power, both sets were
running, delivering full power to the datasite, with the faulted feeder disconnected. The faulted
feeder was repaired and the circuit breaker line-side bussing replaced within a week. Full utility
operation commenced five days later.
TERMS TO KNOW
Demand side management: A process for reducing the demand on the power generation facility
(usually the local utility) by the power user (utility customer). Various strategies can be used
such as synchronizing the operation of large motors so that they do not operate concurrently.
Peak shaving: Reducing electrical power usage by a facility during a period when the serving
utility is experiencing a heavy demand for its power, and/or, by providing on-site power to help
the utility meet its power requirements.
Prime power: This is the rating for continuous operation of an engine-generator set (often, in lieu
of purchased power from a utility) and represents the highest electrical power output available
for unlimited hours per year, less time for maintenance.
Standby power: This is the rating of an engine-generator set when used as a secondary source of
electrical power. This rating is based on the set operating 24 hours per day for the duration of the

outage of the primary power source. Because there is only limited operating service of the set,
the rating of the electrical output is higher than for the rating of the set when operating in prime
power mode. Operation at the standby rating results in greater mechanical wear rates and greater
stress on the mechanical and electrical components.
PROJECT BACKGROUND
It's enlightening to see a complex project come together at breakneck speed, one that includes
installation of on-site power with microprocessor control of the power, for the mutual benefit of
both the client and the serving utility. The client, a Fortune 100 company, decided early in
December 1992 to build a major computer datasite and office support facility that would
accommodate nearly 1200 employees, consolidating its local work force. The company was
growing very rapidly and needed to obtain the facility quickly. Therefore the client decided to
lease an existing building in lieu of building a new one.
A vacant 340,000 sq ft manufacturing and storage building was found in a nearby community
and remodel plans were immediately initiated. The schedule required a new 80,000 sq ft
computer site, part of the overall project, to be completely operational within 8 months, with
design professionals and contractors quickly selected for the team. Our firm was chosen to carry
out the electrical and mechanical engineering, other than the engineering associated with the
computer room systems. Fast track construction techniques were used to provide "hypertrack"
construction.
Schematic design plans were started in early January 1993 with long lead equipment identified
first. Tentative orders were placed with escape cancellation clauses in the event the lease of the
building wasn't resolved in time to meet the client's schedule. Since the computer datasite was
the driving force behind the project, the total project power needs were assembled and reliability
of local utility power was analyzed. It was determined that two primary feeders, each from a
different substation, were desired. However, the costs and construction time needed to obtain
power from two separate substations were prohibitive. Therefore, power from two separate feeders
served by a single nearby substation was agreed upon. Analysis of existing utility power in the
immediate area revealed that there had been multiple outages within the past 5 years, one caused
by an auto collision with a utility pole at the corner of the project site.
A voluminous contract covering the engine-generator sets and the utility service being offered
was worked out between the lawyers representing the utility company and our client. The whole
project had a very tight time frame and the contract was resolved at a critical time that just
avoided a time delay while the project permit was being reviewed by the state's PCA. The state
legislators had recently enacted a requirement for all new pollution contributors, such as engine-generator sets, boilers, etc., to have the owner, or his or her agent, submit a highly detailed plan
for review by the local PCA. Approval of the plan was required before any construction could
begin, with stiff monetary penalties levied for early construction without a PCA permit.
Therefore, the utility was extra cautious not to start any construction until the permit
was approved, even though the project deadline was fast approaching.

Both the client and the serving utility benefited from this joint project. With new power
generation plants costing upward of $2000 per kW to build, and a number of years for regulatory
approval and construction, procuring 4MW of peaking power for $250 per kW is a bargain and a
savings to the utility. Likewise, the client has 4MW of available standby power on site with an
up-front contribution of about $188 per kW, but without the costs and headaches of maintaining,
operating, or replacing the system. Even 100% of the preventative maintenance and fuel costs are
borne by the utility. This allows the client to concentrate on its core business with the peace of
mind that highly reliable power will always be available to maintain its datasite operations.
Credits:
Architect: Ankeny, Kell, Richter & Walsh Electrical Contractor: Electric Repair & Construction
Co. Engine-Generator Vendor: Ziegler Power Systems Co. Utility Company: Northern States
Power Co. Computer Room System Engineering: Hypertect, Inc.
SUGGESTED READINGS
EC&M Articles
"Mobile Generators Power Up Newark Airport," February '95 issue. "When Standby Systems
Are Emergency Systems," May '94 issue. Cost: $9.95 for articles. Order No. 2248. Orders are
taken via facsimile machines only. To order by fax dial 800-234-5709. (Have a credit card and
your fax number ready when you call.)
Gayland J. Bender, P.E. is Chief Electrical Engineer for Lundquist, Wilmar, Potvin & Bender,
Inc., Consulting Engineers, St. Paul, Minn.

Multiple generators provide power for peak shaving/emergency systems.

Aug 1, 1995 12:00 PM, Lawrie, Robert J.
Modular 4160V generators slash $1 million off annual electric bills and assure emergency power.
Versatile monitoring, control, and PLC systems integrate for total building control.
To provide peak-shaving/emergency power for the new International Concourse E at Atlanta's
Hartsfield airport, eight 1250kVA, 1100kW self-contained engine-generator modules are
installed in a dual parallel 4160V scheme. For peak shaving duty, the generators can supply
8.8MW of prime power in parallel with the utility. In the emergency power mode, they can
supply 10MW of emergency power to all critical loads and numerous selected loads as desired.

The single-line diagram shown on page 36 reveals how a very high reliability level of power is
attained. The accompanying photos and related data provide details of the selected equipment
and installation methods. In addition, the photos and diagram serve as a guided photo tour of the
site. On the diagram, numbered arrows point to key system components; the position of each
arrow indicates the angle of view as shown in the appropriate photo. Please refer to the diagram
and photos as you read on.
Dual design assures dependability
After several feasibility studies, the engineering firm of Stevens & Wilkinson (S&W) of Georgia,
Inc., Atlanta, selected 4160V distribution mainly because of the system's heavy loads and
demand as well as long runs to and within the very large building. A lower voltage level would
result in excessive voltage drop and losses.
An unusual but highly effective scheme for power distribution and the emergency power system
combines redundant features of building power distribution with the generator power supply. The
key is the use of dual sources at all voltage levels, dual feeds, tie switches, etc. The distribution
system design utilizes a dual network, redundant supply, and primary distribution that works in
coordination with a secondary-selective scheme.
It's important to note that the Concourse receives power in a dual utility/generator supply
arrangement. Power comes from either of two entirely separate utility sources, or from either of
two separate engine-generator sources, each of which supplies medium-voltage (MV) power via
multiple feeders to two separate main 4160V switchgear assemblies. From the two main MV
switchgear line-ups, dual feeders supply 4160V to seven double-ended 4160/480/277V
substations, each with two 2500kVA liquid-filled transformers. The substations are furnished
with tie circuit breakers and automatic transfer switches as needed.
Multiple generators provide redundancy for emergency power as well as for diversification when
they are on peak shaving. The eight generators are normally at rest, except when called on for
peak-shaving duty. (A utility power failure automatically places the generators in an emergency
power mode.)
In the event that a utility power supply fails, potential transformers (PTs) sense the utility supply
loss and send a signal to generator controls, starting all generators. At the same time, all
nonessential loads are shed. When two generators on Systems A or B have stabilized, circuit
breakers are closed to reenergize the substations. If the critical loads demand more power,
additional generators synchronize and come online as needed.
Integrated monitoring and control
Installed monitoring and control systems are numerous and integrate to provide automatic
control, monitoring, and alarms. Included are four major systems with numerous subsystems that
all work well together.
* Generator control system (action initiation).

* Power monitoring system (action initiation).


* Building management system (monitors and provides automatic control of HVAC and other
systems).
* Ground-fault protection and monitoring, radio and phone communications, and coordinated
relays that protect and control power distribution when on normal, emergency, or peak-shaving
operation.
Also incorporated in the system are lighting controls and energy management functions.
The heart of all monitoring and control for the entire power system is a central control
room, which is furnished with a number of 486DX computers running "Windows-type" programs.
These programs bring up single-line diagrams of various
portions of the power system and allow the operator to call up real-time readings of volts, amps,
demand, kW, etc. at any component. Stored in the computer memory are minimum and
maximum values and other data.
Ancillary equipment such as color printers, modems, and radio and communications equipment
are included. Similar computer arrangements are installed in strategic locations throughout the
1.3 million sq ft building as well as in other parts of the huge airport complex. Printouts of data
or diagrams provide a record of all activity on any system as desired, such as minimum or
maximum demand and the date on which it occurred. Results of scheduled testing of the
generators and emergency-power system are recorded for regular reference.
The generator control system is complex. When normal utility power fails, a PT sends a 24V
signal to the generator controls, which initiate the following actions.
* Open utility circuit breakers.
* Start and parallel all generators.
* Transmit a DC signal to the central and remote computer control. If any type of malfunction
occurs, an alarm sounds and appropriate action (automatic or manual) takes place as required.
On peak shaving, a similar control system called an I/O system is activated. This system includes
reverse current, overcurrent, and over- and under-voltage relays. These relays send an
appropriate analog signal to the I/O board, where the signal is converted for application to a
computer for monitoring or action. The system will initiate an alarm if a generator has a problem
or can open or close circuit breakers at either the 4160V or 480V level.
TERMS TO KNOW
Peak Shaving: The reduction of electrical power usage by a facility during a period when the
serving utility is experiencing a heavy demand for power. To help the utility to meet its power
demand requirements, on site power at the customer's site can be utilized.

Prime Power: The rating of an engine generator set based on its continuous operation. It
represents the highest electrical power output available for unlimited hours per year, less the time
required for normal maintenance.

Detecting moisture in dry type transformers.


Aug 1, 1999 12:00 PM, Campbell, Dean
Resistance testing verifies moisture presence; simple dryout methods eliminate the problem.
Failure of dry type transformers can occur during operation when moisture is present in the
windings. Experience has shown this to be a particular problem in transformers ranging in size
from 500kVA and larger; and with a primary voltage greater than 600V. As such, you should test
these transformers before energizing to verify that windings are dry. This is especially important
for units that have been stored and/or deenergized in locations with high humidity, dampness, or
wide temperature fluctuations. If this testing is not done, catastrophic failure may occur.
Testing for moisture presence
The test method used in determining whether or not windings have taken on moisture is
relatively simple. All you have to do is test for resistance between individual windings and
between each winding and ground. (On existing transformers that are connected, make sure to
disconnect primary and secondary leads from everything, including feeders, secondary bus,
lightning arresters, etc. before beginning this test.) This should be done for both the primary and
secondary windings.
The test voltage should be a maximum of 1kV above the rated winding voltage, unless other
voltage limits are recommended by the transformer manufacturer. The minimum resistance
readings taken should be as recommended by the manufacturer before energizing the
transformer. If manufacturer recommendations are unavailable, the minimum resistance readings
at 68 [degrees] F should at least be 20,000 ohms per volt for the rated voltage of the transformer
coils being tested. If any of the tests result in a reading less than these recommendations, the
transformer should be dried out and retested before being energized and put into service.
For example, on the primary side of a 13.8kV/480V transformer, the test voltage should be a
maximum of 14,800V and the minimum acceptable resistance readings should be at least
13,800V x 20,000 ohms, or 276 megohms. The recommended resistance readings are very
dependent on the ambient temperature during testing. Correction factor tables are available for
other ambient temperatures. One source for these tables is the InterNational Electrical Testing
Association's (NETA's) MTS-1993 standard, Maintenance Testing Specification for Electrical
Power Distribution Equipment and Systems. (Call 1-303-697-8441.)

A resistance reading less than 20,000 ohms per volt would suggest moisture in the windings,
unless the reading is extremely low (less than 100 ohms per volt), which could suggest a short
circuit. If a short circuit is suspected, further testing should be done.
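As a simple illustration of the decision logic just described (our sketch; the 20,000 ohms-per-volt and 100 ohms-per-volt figures are the rule-of-thumb thresholds quoted above, and actual limits should come from the manufacturer or the NETA tables):

    def evaluate_winding_resistance(rated_volts, measured_ohms):
        """Rough classification of an insulation-resistance reading at 68 degrees F,
        using the rule-of-thumb thresholds quoted in the article."""
        minimum_ohms = 20_000 * rated_volts      # e.g. 13,800V x 20,000 = 276 megohms
        if measured_ohms < 100 * rated_volts:
            return "extremely low reading - possible short circuit, investigate further"
        if measured_ohms < minimum_ohms:
            return "below minimum - suspect moisture, dry out and retest"
        return "acceptable - energize as soon as possible"

    print(evaluate_winding_resistance(13_800, 300e6))   # acceptable
    print(evaluate_winding_resistance(13_800, 50e6))    # suspect moisture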
The above testing can be done with a megohmmeter or a high potential (hi-pot) tester. A hi-pot,
with a knowledgeable individual running the tests, is the safest due to the controlled rate of
increase in voltage.
For reference, the voltage (applied volts) divided by the microamps (leakage current read from
hi-pot tester) is approximately equal to megohms.
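Expressed as a calculation (our illustration; the leakage reading shown is a hypothetical value, not one from the article):

    applied_volts = 14_800        # hi-pot test voltage from the example above
    leakage_microamps = 40.0      # hypothetical reading from the hi-pot tester

    resistance_megohms = applied_volts / leakage_microamps   # volts / microamps = megohms
    print(f"approximately {resistance_megohms:.0f} megohms")  # -> approximately 370 megohms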
Leakage current should remain constant over time at any constant voltage; if it does not, the test
should be immediately discontinued to avoid a failure.
If test results are OK, you should energize the unit as soon as possible.
If the results of the tests are marginally less than these readings, it may be possible to energize
without drying out the transformer, but an electrical engineer or trained testing technician should
be consulted to make such a decision.
Drying out a transformer
Drying out a transformer should be done as per the manufacturer's recommendations. If no
recommendations are available, you can use either of the following methods.
Method 1. Place a 60W to 100W incandescent lamp under the front and back of each coil and
leave them on for a minimum of two weeks if possible. Then, retest as per above and decide
whether or not more drying out is required.
Method 2. Disconnect the primary and secondary leads from everything, including feeders,
bussing, lightning arresters, etc. Then, short all the load ends of the secondary windings together
but not to ground. If you have to use a shorting jumper, the calculation below will help you
determine its size. If you can, just bolt all the secondary leads together.
Connect the primary leads to a voltage source as determined by the calculations below. If
possible, the drying out period should be one week at a minimum. The transformer should then
be retested as noted above. The resultant resistance readings will determine whether or not more
drying out is required.
This method is probably more reliable than Method 1 in uniformly drying out the entire
transformer.
Voltage source calculation
Step 1. Determine maximum primary dryout voltage with secondary shorted.

The maximum primary dryout voltage with the secondary shorted (V_MPDV) is found by using
the following equation.
V_MPDV = V_P x Z (Eq. 1)
where V_P = primary voltage (volts)
Z = transformer impedance / 100
Step 2. Determine connected primary dryout voltage.
The connected primary dryout voltage (V_CPDV) must be equal to or less than V_MPDV.
Therefore, depending on the available voltages at the respective site, a choice is made.
Step 3. Determine primary amps with connected primary dryout voltage applied.
First, calculate the primary amps with the normal application voltage applied ([I.sub.P]) using
the following equation.
[I.sub.P] = VA / ([V.sub.P] x 1.732) (Eq. 2)
Next, calculate the transformer's primary current with [V.sub.CPDV] applied ([I.sub.CPDA]) using the
following equation.
[I.sub.CPDA] = ([V.sub.CPDV] X [I.sub.P]) / [V.sub.MPDV] (Eq. 3)
This value is then used to determine appropriate circuit overcurrent protection and feeder sizing.
Step 4. Determine the magnitude of current flowing in the secondary jumper.
The current in the secondary jumper ([I.sub.SJ]) must be equal to or greater than the secondary
amps with [V.sub.CPDV] applied. To determine [I.sub.SJ], we use the following equation.
[I.sub.SJ] = ([V.sub.P] X [I.sub.CPDA]) / [V.sub.S] (Eq. 4)
where [V.sub.S] = secondary voltage.
This looks complicated but really isn't. Let's do a sample calculation to see how easy it is.
Sample calculation
Suppose we have a 1500kVA, 13.8kV-480/277V transformer with an impedance of 8% that
needs drying out. What is the maximum primary dryout voltage (with secondary shorted)
needed? What is the magnitude of current the secondary jumper will have to conduct?

Step 1. Determine maximum primary dryout voltage with secondary shorted.


Using Equation 1, we have:
[V.sub.MPDV] = [V.sub.P] X Z
= 13,800 x 0.08 = 1104V
Step 2. Determine connected primary dryout voltage.
Since [V.sub.MPDV] is 1104V, the next lower readily available voltage is 480V. Therefore,
[V.sub.CPDV] is 480V.
Step 3. Determine primary amps with connected primary dryout voltage applied.
First, we have to find the transformer's primary current with normal application voltage applied
([I.sub.P]) by using Equation 2.
[I.sub.P] = VA / ([V.sub.P] x 1.732)
= 1,500,000 / (13,800 x 1.732)
= 62.8A
We then insert this value into Equation 3 and solve for [I.sub.CPDA].
[I.sub.CPDA] = ([V.sub.CPDV] X [I.sub.P]) / [V.sub.MPDV]
= (480 x 62.8) / 1104 = 27.3A
Therefore, we should connect the primary windings to a 480V, 35A, 3P breaker (or fuse) for dry
out.
Step 4. Determine the magnitude of current flowing in the secondary jumper.
We insert [I.sub.CPDA] as determined from Step 3 into Equation 4 and calculate the amount of
current that will flow in the shorting jumper ([I.sub.SJ]) as follows.
[I.sub.SJ] = ([V.sub.P] X [I.sub.CPDA]) / [V.sub.S]
= (13,800 X 27.3) / 480 = 785A
Therefore, we need a jumper of 785A minimum capacity to short the secondary windings of the
transformer to each other (but not to ground).
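For readers who want to run their own numbers, here is a minimal Python sketch of Steps 1 through 4 (Eqs. 1 through 4); the function name is ours, and the 1.732 factor assumes a 3-phase transformer, as in the example above.

import math


def dryout_calculation(kva, primary_v, secondary_v, impedance_pct, connected_dryout_v):
    """Steps 1-4 of the dryout voltage-source calculation (Eqs. 1-4)."""
    # Step 1: maximum primary dryout voltage with the secondary shorted (Eq. 1)
    v_mpdv = primary_v * (impedance_pct / 100)

    # Step 2: the connected dryout voltage must not exceed V_MPDV
    if connected_dryout_v > v_mpdv:
        raise ValueError("Connected dryout voltage exceeds the maximum allowed")

    # Step 3: full-load primary amps (Eq. 2), then primary amps at the dryout voltage (Eq. 3)
    i_p = kva * 1000 / (primary_v * math.sqrt(3))
    i_cpda = connected_dryout_v * i_p / v_mpdv

    # Step 4: current in the secondary shorting jumper (Eq. 4)
    i_sj = primary_v * i_cpda / secondary_v
    return v_mpdv, i_p, i_cpda, i_sj


# Sample calculation: 1500kVA, 13.8kV-480/277V, 8% impedance, 480V dryout source
v_mpdv, i_p, i_cpda, i_sj = dryout_calculation(1500, 13_800, 480, 8, 480)
print(round(v_mpdv), round(i_p, 1), round(i_cpda, 1), round(i_sj))
# 1104 62.8 27.3 784 -- the 785A above comes from rounding I_CPDA to 27.3A first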

We strongly recommend that you check with the transformer manufacturer for methods,
voltage levels, resistance readings, dryout procedures, etc.

Implement a disaster recovery plan for


telecom systems.
Aug 1, 1995 12:00 PM, Knisley, Joseph R.
Since telecom/network wiring is becoming so prevalent in facilities, what can be done to prevent
complete shutdown in case of a disaster?
The loss of elements that support the transport of information (voice, data, image, and other
signals) from one location to another can devastate operations at a facility. Therefore, you should
develop a disaster recovery plan in case such a catastrophe occurs. Even though this plan may be
well thought out, bringing back a downed network will not be a simple task. What cables are still
intact? Can rerouting accomplish anything? Can critical segments be remapped? These and many
other questions must be answered in great detail and as quickly as possible after a disaster
occurs.
Suggested design alternatives
Before we discuss restoration procedures, let's look at some system design concepts that could be
used in conjunction with setting up our disaster plan.
Alternate telephone service facility. Since the telephone service facility (where outside telephone
network cables come into a building) is subject to damage from a variety of causes, an alternate
entrance facility capable of handling part of the external telecommunications network is
recommended. This service facility should be in a different part of the building. Thus, if the
communication link to the telephone central office(s) is lost at the first entrance facility, some
circuits will still be available at the alternate location.
Additionally, if a roof space is available, you may want to modify your existing system to
accommodate satellite and microwave systems, which could be used to bypass the underground
conduit service entirely.
Seismic design. If your area has even a small potential for seismic activity, the conduit and
pathway systems should be designed to survive a credible event. Seismic and vibration restraints
(springs, bracing, and aircraft cable) should be added to the structural support elements to keep
cable systems and equipment in place.
You should segment entrance conduit runs to eliminate the possibility of a long conduit acting as
a battering ram during seismic activity. This can be accomplished by using flexible couplings at

some conduit joints to provide freedom of movement. Also, wall and slab penetrations should be
designed to permit conduit movement independent of the building.
Use trapeze hangers and two-plane bracing to support overhead ladder racks or cable trays.
House telecom/local area network equipment in seismic-rated cabinets.
Parallel backbones and telecom closets. Another design alternative is the use of two telecom
closets per floor along with two smaller-pair-count parallel backbone cables rather than one large
pair count cable. The chance of a single backbone cable being damaged is greater than the
chance that two backbone cables would be damaged at the same time. Each of these cables
should have separate routes or shafts so that if one riser shaft is damaged, the other can be used
to pull in new backbone cable.
Spare capacity. Spare capacity for future growth normally is not designed into a telecom
infrastructure. However, since cable costs are such a small part of the total installation costs, this
is an excellent way of providing spare media should temporary connections be needed.
Fire stops. Make sure that all wall and floor penetrations for cables and conduits have fire
stopping materials installed. This will help prevent the spread of fire and smoke. While floor slab
openings are generally fire-stopped when sleeves are installed in a new facility, this practice is
often omitted when additional penetrations are made in an existing facility.
Record keeping. Document and keep up-to-date your cable plant. A good cable plant
administration system makes it easy to identify both damaged and undamaged cables, thereby
eliminating the difficult task of identifying cables under a time constraint. Also, record keeping
makes it easy to assign spare capacity as replacement for damaged circuits.
Security. Access to network components should be controlled. This may range from locks and
remote monitoring to guards and access codes. Where feasible, cable should be routed through
secure parts of a building; outside cables should be buried to limit access. Manholes, handholes,
pull boxes, and pedestals located outside should have strong, tamper-proof locking mechanisms.
Testing and restoring a network
Unless complete plant destruction occurs, there will be parts of a cable plant that are still
functional. A visual inspection of wiring closets and wall plates can help you determine the
condition of cable ends. With this information, you can decide what to test. This also will help in
determining what sections of the cable plant to repair first. Also, you should compare the cable
plant records with notes made during this inspection. This information can be used in making
estimates about damage and preparing for cable evaluation.
Obviously, your first priority is to test the cables and separate the usable from the damaged. This
can be done with hand-held testing equipment. Several types of compact reasonably priced test
tools are available for copper cable evaluation.

An intelligent loopback plug and a signal injector are two devices used to identify conductors in
a multiple cable run.
The signal injector does just what its name implies: it injects tones onto telephone and/or copper
data lines. Usually, three distinct tones are generated so that several test devices can be used on the
same line without creating confusion. An inductive tracer, or probe, is then used to identify the
tone in a specific wire, usually from within 12 in. of the wire, without piercing the
wire's insulation. The main benefit is that the tone can be traced through dry wall, wood, or other
nonmetallic surfaces.
This type of wire tracing test equipment is capable of checking for line polarity, continuity and
ringing current in telephone lines and is also suited for twisted pair cable, multi-conductor cable,
speaker wire, coaxial cable, alarm cable, and local area network (LAN) cable.
Handheld cable and network analyzers can do performance tests (Category 3, 4, or 5) to ensure
that the network cable passes traffic properly. Most handheld testers support battery-powered
printers; thus, testing can be done even if building power is not available.
In many locations, fiberoptic network restoration must also be considered. The equipment most
often used to do an end-to-end test is a loss test set (test light source and a power meter). It's
available as a piece of integrated equipment or as two separate components. The first step in fiber
restoration is to locate the damage. Measure the system power level with a power meter and if
the level is below that specified for the fiber, use an optical time domain reflectometer (OTDR).
An OTDR transmits pulsed light signals down the fiber, providing component loss and
reflectance information.
Although an OTDR can be used for troubleshooting, it has inherent limitations. It has dead zones
or blind spots following reflective events. Event dead zones refer to the minimum distance from
the start of the fiber where a reflective event can be distinguished.
Another handheld tool, the visual fault locator, is useful within OTDR dead zones. The fault
locator is a visible light source semiconductor diode with a wavelength of 650 nanometers; it
emits a red beam down the fiber. For breaks and significant fault points, the light is visible
through a 3-mm coated jacket.
Using EIA/TIA standards
EIA/TIA 568, Commercial Building Telecommunications Wiring Standard, and 569,
Commercial Building Standard for Telecommunications Pathways and Spaces, are used as the
basis for voice/data design so that a structured cabling system can be developed. EIA/TIA-606, The Administration Standard for the Telecommunications Infrastructure of Commercial
Buildings, is recommended as the basis for documentation. This standard tells us how to label
and document the elements of the 568 and the 569 standards, such as the media (copper twisted
pair, fiber, etc.), the pathways, and the spaces in the building containing the equipment.
TERMS TO KNOW

Attenuation: Deterioration of the strength of signals as they pass through a transmission medium
(e.g. through cables, outlets, connecting hardware, patch panels, etc.).
Backbone: A term referring to certain cabling segments used to provide connectivity over long
distances within buildings as well as between buildings in a campus. It also refers to certain
network architectures used to connect multiple sub networks to one another.
SUGGESTED READING
Standards:
EIA/TIA-568, Commercial Building Telecommunications Wiring Standard.
EIA/TIA-569, Commercial Building Standard for Telecommunications Pathways and Spaces.
TIA/EIA-606, Administration Standard for Telecommunications Infrastructure of Commercial
Buildings.
Order copies of the standards from:
Global Engineering Documents, 156 Inverness Way East, Englewood, Colo. 80112-5704.
Phones: 1-800-624-3974, 1-303-792-2181; Fax 1-303-397-2633. EIA Engineering Publications
Office, 2001 Pennsylvania Ave. N.W., Washington, D.C. 20006. Phone: 1-202-457-4963.
EC&M Artifax:
* "The EIA/TIA 568 Cabling Standard," October 1993 issue. "What To Know About EIA/TIA
606," December 1993 issue. "What To Know About EIA/TIA 569," February 1994 issue.
Cost: A set of these articles cost $14.95. Order No. 2208. Orders are taken via facsimile
machines only. To order by fax dial 800-234-5709. (Have a credit card and your fax number
ready when you dial by fax.)
* EC&M's Voice/Data Engineering/Installation Guide, May 1993 issue.
Cost: Guide cost $14.95. Order No. 2218. Orders are taken via facsimile machines only. To order
by fax dial 800-234-5709. (Have a credit card and your fax number ready when you dial by fax.)
* EC&M's Voice/Data Engineering/Installation Guide, May 1994 issue. Cost: Guide cost $14.95.
Order No. 2228. Orders are taken via facsimile machines only. To order by fax dial 800-234-5709. (Have a credit card and your fax number ready when you dial by fax.)
* EC&M's Voice/Data Engineering/Installation Guide, May 1995 issue.
Cost: Guide cost $14.95. Order No. 2238. Orders are taken via facsimile machines only. To order
by fax dial 800-234-5709. (Have a credit card and your fax number ready when you dial by fax.)

New developments in patch panel technology.

Aug 1, 1995 12:00 PM, McElroy, Mark W.


Three types of patch panels (wireless, electronic, and intelligent) have benefits and drawbacks,
depending on the application.
When it comes to making a move, add, or change in a voice/data cable plant, sending a
technician to perform physical changes at a patch panel in a wiring closet is one of the most
time- and cost-intensive operations in the overall support of a communications system. In fact,
one industry research group recently estimated that the cost to maintain local area networks
(LANs) consumes 84% of a network manager's budget, 44% of which is spent on physical
management and troubleshooting.
There are new types of patch panels currently available that can help reduce the time and cost
associated with these activities. These "unconventional" patch panels each have their benefits
and drawbacks, depending on the application at hand.
Some definitions
Before getting into the subject of what's out there in the way of electronic patch panels, let's first
attempt to define the term. An electronic patch panel is a device that enables cross-connecting
electronically; that is, without the necessity of making by hand any physical changes using patch
cords or other cross-connect components. This definition also implies the ability to make such
changes from a centralized station or terminal such that the patch panels controlled in this
manner can be remotely monitored and/or configured by a central operator. The promise of this
technology could be very appealing, since it would effectively eliminate the need to dispatch
technicians to wiring closets every time a cross-connect of some kind (i.e., a move, add, or
change) is required.
The ideal electronic patch panel should also be very flexible and very intelligent. It should not
only enable the making of distributed cross-connections from a central point, but should also
support mixed media cross-connections as well. For example, you may, in some cases, want to
cross-connect horizontal unshielded twisted pair cables (UTP) to fiber in the backbone or, at a
minimum, make UTP-to-UTP connections. In any event, the ability to support opto-electronic
transitions in media would seem to be of some value in certain cases.
In practice, an intelligent, electronic patch panel would give a network manager the ability to
make cross-connections from any circuit in horizontal wiring to any circuit in the backbone
subsystem without ever having to leave his or her desk. Moreover, the ideal system would also
enable cross-connections between two or more circuits in the horizontal subsystem, up to and
including daisy-chained circuits when required.
An example

Let's consider a simple example. Let's say we're dealing with a situation where a well-designed
structured cabling system is in place. And let's say that this system provides for a standard
faceplate having four RJ-45 jacks at each workstation. At one such station, you therefore would
have four hard-wired 4-pair connections back to the wiring closet. Now, let's assume that each
user's connection to the wiring closet is terminated onto what we have described as electronic
patch panels. Let's also assume that the cabling system in our example also features the usual
mix of copper and fiber in the backbone (copper for voice, and fiber for everything else).
Now, if we look at a typical cross-connect situation for an average user in this scenario, we might
expect to find that one RJ-45 connection at the wiring-closet end is cross-connected to a circuit
that supports "data," while another is cross-connected to a circuit that supports "voice." The third
and fourth jacks might not be in use, but could be at some future time, if and when applications
appear.
Let's assume that a need comes up for the third jack at a workstation, thereby requiring the cable
manager to establish a new cross-connection to support a new application, a dial tone line to a
desktop for a fax machine, for example. Rather than dispatching a technician with a set of tools
and, hopefully, accurate documentation, the cable manager in this scenario (assuming electronic
patch panels are in place) simply turns to the cable management/patch panel control system and
makes the change on his or her screen within seconds of having received the request. Once
completed, the change is instantly recorded, thereby updating the cable documentation in the
same stroke.
Our example not only illustrates the power of electronics when applied to an otherwise physical
task, but also illustrates the overlap of this technology with another one: cable management
systems. But while cable management systems are a relatively mature technology at this point,
the same can not be said for electronic patch panels. There are, however, some interesting
developments out there that suggest it's time to start paying attention to this emerging
technology. Let's take a look at what products are currently available to you.
Product classifications
A high level review of what's out there in terms of what we'll generally refer to as
"unconventional" patch panel technology yields products in at least three different categories. In
spite of their differences (which we'll describe), all products in these categories have one thing in
common: they are designed to minimize, if not totally eliminate, the need to physically make
moves, adds, and changes as traditionally done by technicians using patch cords and cross-connect wires in wiring closets.
The three categories of products, according to our view of the world, are as follows.
* Wireless patch panels
* Electronic patch panels
* Intelligent patch panels

Wireless patch panels. Products in this category fit the description of "unconventional" patch
panels, but are far from electronic or intelligent. They are, in fact, passive devices. This is an
important distinction compared to the other two categories, which themselves may be wireless,
but are not passive.
Wireless patch panels are also wireless only to the extent that they avoid the use of external patch
cords to achieve cross-connections. Instead, they rely on internal cross-connections using hard-wired modules that connect "input conductors," or jacks, to "output conductors." This results in a
straight-through cross-connection of media such that each conductor's identification is
maintained all the way through the cross-connected circuit (e.g., the Tip of pair 1 is maintained;
Ring of pair 1 is maintained; Tip of pair 2 is maintained; etc.). Wireless patch panels designed for
copper connections are usually based on 2-, 4-, or 8-wire configurations (i.e., for UTP cable
plants).
With wireless patch panels, the assumption is that "normal" prevails for most jacks most of the
time, and patch cords should only be used to make changes or to deal with the exceptions. Patch
cords, therefore, can be used with wireless systems, but only when deviations from the norm are
required. Insertion of a patch cord into a wireless patch panel breaks the internal cross-connection, thereby freeing the input channel for reassignment to another output channel at the
user's discretion.
All of the physical administration of a wireless patch panel must still be performed manually
by technicians inside wiring closets. Wireless systems, according to our classification here,
are therefore not addressable by remote devices and can not be controlled from a central
management station of any kind. Wireless systems are also built using passive components and
are not electrified in any way. (In terms of appearance, wireless patch panels closely resemble
conventional patch panels and are comprised of RJ-type jacks on their front panels.)
Electronic patch panels. Unlike wireless patch panels, electronic patch panels are active in the
sense that they are electrified. [ILLUSTRATION FOR FIGURE 1 OMITTED]. They, therefore,
are capable of sensing, capturing, and storing certain operating or status conditions, which can
then be reported to the network manager for proper handling. Electronic patch panels, again
according to our classification, can not, however, be managed from a remote, centralized station,
and still require physical handling by technicians inside wiring closets.
Examples of functionality common to electronic patch panels include "sensing" of patch cord
insertion or removal from a jack, after which insertion/removal "events" are captured and
reported to a network manager. Other examples include the ability to turn LED lights on or off
around specific jacks to guide technicians performing cross-connect work when changes are
required. In general, a well-rounded electronic patch panel system will provide a variety of status
and change-oriented reports, and may also be used to sense other systems of importance in
managing the physical environment in wiring closets (e.g., cooling fans, thermometers, alarm
systems, etc.).
Intelligent patch panels. This category of "unconventional" patch panel, as shown in Fig. 2,
includes devices with all of the attributes of wireless and electronic systems. As its name implies,

the intelligent patch panel is fully manageable from a remote, centralized station. Products that
fall into this category support a variety of automated functions including the following.
* Centralized online control: The ability to make cross-connection changes and assignments
from a remote, centralized station, thereby eliminating the need to send technicians to wiring
closets. This also implies the total elimination of patch cords, since all cross-connections are
made through internal electronics.
* System monitoring and reporting: The ability to activate certain sensor functions, as in the case
of electronic patch panels, along with powerful reporting capabilities for network management.
* Automated recordkeeping: Fully integrated cable management system such that moves, adds,
or changes are controlled by the system as well as recorded for full "as-built" documentation
reporting capabilities. Systems of this type make heavy use of user-friendly graphical user
interfaces (GUIs) as well.
* Disaster recovery: The ability to withstand power failures using backup power supplies, robust
memory systems, and secondary/back-up path selection.
* Security: The ability to password-protect configurations and associated databases is also
common to these and all other intelligent systems.
Equipment analogous to intelligent patch panels
In many respects, the concept and functions of intelligent patch panels are very analogous to
other forms of switches and intelligent hubs. In the voice arena, every PBX or telephone system
out there minimally performs internal cross-connections between stations (extensions) and trunks
(phone lines). Even the process of establishing station-to-station connections or multistation or
conference calls resembles the same kind of flexible connectivity implied by our definition of
intelligent patch panels.
In the world of high speed data networks, intelligent hubs (sometimes referred to as wiring
concentrators) also provide dynamic internal cross-connections when used in conjunction with
remote, centralized management software. Using systems of this type, a network manager can
electronically group several network users together in one "logical" network on one day, and on
the next, totally redefine the group such that a different mix of users results. This can all be done
without having to dispatch technicians to wiring closets.
Performance considerations
In spite of the general appeal of the products discussed here, none of them offers a panacea from a
functional standpoint, and all have their limitations. First of all, all of the products currently
on the market are media-bound. In other words, they are all either copper-only or fiber-only in their makeup. This may represent a drawback in cases where users require multimedia
connectivity such as needing to extend copper circuits over long distances where fiber may be
the preferred choice.

All of the systems currently on the market also appear to be rather application-bound. In other
words, the intended applications seem to be restricted to data-only requirements. This appears to
be less the case with the wireless patch panels and more the case with the others. The granularity
of treatment required for voice where, for example, 1-pair cross-connections are often required,
would appear to be either impractical or cost-prohibitive based on how these systems are built
and configured.
And lastly, even when it comes to data network performance, this technology is still somewhat
behind the rest of the industry. If we assume that the expectation in the industry is that 100-megabit-per-second (Mbps), or Category 5 UTP, performance levels are required, products in
each of the three categories discussed above roughly break out as shown in Table 1.
What's interesting about this performance analysis is that only the "unconventional" patch panel
that still relies on "conventional" cross-connection methods rates at the industry-standard level of
Category 5 performance. This is not surprising since Category 5 systems are very sensitive to
deviations in crosstalk and attenuation. Unconventional methods of establishing and maintaining
circuits appear to have a ways to go before these alternative technologies can stand up to the task
of supporting continuous data streams at 100 Mbps. At this point in time, only one manufacturer
of "intelligent" patch panels indicates that its products have been tested successfully at 100
Mbps, but have not yet been certified as such. This could prove to be problematic in the short
run, since all of the industry standards for certification are based on the use of conventional
connecting hardware, not unconventional electronic switches.
In any case, the nature of the products described here as electronic patch panels is such that all
electronic functionality is essentially nonintrusive. In other words, the key functions of sensing
and reporting really do not interfere with or participate in the cross-connections themselves. And
certainly the presence of LED lights and so forth are external to the connectivity scheme inherent
to these devices. Thus, it's no surprise that performance for these products is in line with the
mainstream of traditional patch panels, and is higher than that of products in the other two categories.
Cost considerations
In looking at cost, we chose one representative product from each of the three categories defined
above. Our attempt to reduce the comparison to equal terms is based on the notion of a typical
cost per port. Our findings are shown in Table 2.
Well, as they say, "you get what you pay for." The lesson here is that with wireless and electronic
patch panels, you're still paying for the performance of physical moves, adds, and changes by
people (dispatching technicians to wiring closets, their salaries, etc.). With fully intelligent patch
panels, you're not. Thus, the real costs of wireless and electronic systems also include the
traditional cost of the human resources required to work with them. But given the still low levels
of performance for intelligent systems coupled with their severe application constraints (data
only), we would conclude that these systems have a long way to go before they begin to
outweigh the flexibility offered by traditional connecting hardware schemes.

Nevertheless, it's definitely time to start watching this technology. In this business, everything
can change in the course of a week!

The case of the soon-to-be overloaded neutral


conductor.
Aug 1, 1995 12:00 PM, Moravek, James
Odd triplen harmonic currents and neutral overloading go hand-in-hand.
A facility manager of a data center was informed by the Information Systems Department that
additional file servers would be installed and would be powered by an existing panelboard.
Therefore, an analysis of the panelboard, its feeder, and its loading needed to be done.
To find out if a problem would occur with the proposed load addition, the facility manager asked
the maintenance electrician to review the annual testing and maintenance reports on the feeder of
the existing panelboard. The feeder in question consisted of three 500kcmil phase conductors, a
1/0 AWG grounded (neutral) conductor, and a 1/0 AWG isolated grounding conductor. All
conductors had 75 [degrees] C insulation. The data center's feeders were installed according to the
practices of the time, which included reducing the size of the neutral conductor in the feeder. The annual testing
and maintenance report indicated loads of 99A, 130A, and 77A respectively for the phase
conductors and 130A for the neutral conductor. The load on the feeder after the addition of the
new file servers was estimated to be 132A, 182A, and 101A respectively for the phase
conductors and 197A for the neutral conductor.
The electrician noted that an overload problem would occur, not with the phase conductors but
with the neutral conductor. The phase conductors would be loaded to approximately 60% of their
ampacity (B phase as worst case) while the neutral would be overloaded to 131% of its ampacity.
Detailing the problem
The problem was more than an imbalance on the phase conductors. Using the following
equation, the anticipated (normally expected) neutral current ([I.sub.N]) is 46A for the present
load and 71A for the estimated revised load.
[I.sub.N] = square root of ([I.sub.A.sup.2] + [I.sub.B.sup.2] + [I.sub.C.sup.2] - [I.sub.A][I.sub.B] - [I.sub.B][I.sub.C] - [I.sub.C][I.sub.A])
The measured neutral current of 130A is obviously more than the 46A calculated.
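As a quick check on those figures, the equation can be evaluated directly. The Python sketch below (function name ours) reproduces the anticipated values from the measured and estimated phase loads.

import math


def anticipated_neutral_current(i_a, i_b, i_c):
    """Neutral current expected from phase imbalance alone
    (fundamental 60 Hz currents only, no harmonics)."""
    return math.sqrt(i_a**2 + i_b**2 + i_c**2 - i_a*i_b - i_b*i_c - i_c*i_a)


print(round(anticipated_neutral_current(99, 130, 77)))    # 46A for the present load
print(round(anticipated_neutral_current(132, 182, 101)))  # 71A for the estimated load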
The cause of this problem was harmonics. The load on the existing panelboard consisted of line-to-neutral connected nonlinear loads. The current drawn by this type of load consists primarily of odd triplen harmonic
currents (e.g. 3rd, 9th, 15th, 21st, etc.). These currents add in the neutral conductor and can
cause measured neutral currents as much as twice the phase conductor currents.
How can neutral currents be twice as great as phase conductor currents? The rules change on
power systems having odd triplen harmonic currents. Let's investigate this further.
Nonsinusoidal currents
The neutral current consists of the imbalance of the phase conductor currents ([I.sub.A],
[I.sub.B], and [I.sub.C]). If these currents consist only of the fundamental (60 Hz) current, there
is a 120 [degrees] phase shift between each phase and the summation of these currents at every
instant in time is zero.
Graphically, this is shown in Fig. 1 below. Here, three phase currents are superimposed on a
graph. By stopping time at the noted instant, the amplitudes of each of the currents when added
together will equal zero. As a result, the neutral current is zero.
If the phase conductor currents contain both fundamental and odd triplen harmonic currents, the
result is very different. Odd triplen currents are zero sequence currents in that they are in phase
and will add in the neutral conductor. The harmonic current with the largest profile in the odd
triplen harmonic currents is usually the third harmonic. Fig. 2 shows us the sinusoidal waveforms
of each of the phase currents, along with that of the third harmonic for all three phases. While the
fundamental (60 Hz) currents of the phases cancel each other, the third harmonic (180 Hz)
currents of each of the phases add together. Fig. 3 shows how these currents flow in a 3-phase, 4-wire schematic.
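The cancellation-versus-addition behavior is easy to demonstrate numerically. The sketch below assumes balanced, equal-amplitude phase currents with an arbitrary 0.3 per-unit third harmonic; the 60 Hz components sum to zero in the neutral while the 180 Hz components add arithmetically.

import numpy as np

t = np.linspace(0, 1 / 60, 1000)   # one 60 Hz cycle
w = 2 * np.pi * 60                 # fundamental angular frequency
shift = 2 * np.pi / 3              # 120-degree displacement between phases

phase_currents = []
for k in range(3):
    fundamental = np.sin(w * t - k * shift)        # 60 Hz component (1.0 per unit)
    third = 0.3 * np.sin(3 * (w * t - k * shift))  # 180 Hz component, in phase for all three phases
    phase_currents.append(fundamental + third)

neutral = phase_currents[0] + phase_currents[1] + phase_currents[2]
print(round(float(np.max(np.abs(neutral))), 2))  # ~0.9 per unit: only the triplens (3 x 0.3) remain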
When the third harmonic is present, a distorted voltage waveform will result and will provide a
characteristic signature of the nonlinear load. Fig. 4 shows how the combination of sinusoidal
voltage waveforms and third harmonic waveforms create harmonically distorted waveforms.
Possible solutions
In relation to our potentially overloaded neutral conductor, three options were developed to
address this problem.
* Replace the feeder with one that has full size neutral.
* Use a neutral filter to reduce the anticipated neutral current within the rating of the conductor.
* Relabel the isolated grounding conductor and reconnect it as a parallel grounded conductor.
The first option was not considered due to the problems and time involved in replacing the feeder
in this particular application. Financially, the data center could not afford the extended outage
required to replace the feeder.

The second option was seriously considered. The neutral filter uses a zig-zag transformer design
that reduces odd triplen harmonic currents by a factor of 7.5 or greater. It uses the phase shift
characteristics of a transformer to cause cancellation of the odd triplen harmonics. This would
reduce the anticipated neutral current to a value less than the current rating of the conductor. The
filter was reasonably priced and could be installed with little downtime on the feeder. The
disadvantage of the filter was that it had to be installed adjacent to the affected panelboard. This
took up floor space and added load to the center's cooling system.
Though the third option was a late entry, it proved to be the winner. The isolated grounding
conductor (same size as the neutral conductor) was never used as intended. Other feeders to the
data center did not have grounding conductors; instead, the metal raceway system was used as a
ground. After discussions with the electrical inspector and the facility manager, the decision was
made to relabel the isolated grounding conductor, disconnect it, and reconnect it as a parallel
grounded conductor. This doubled the capacity of the neutral. The consensus was that this
situation was no worse than other feeders in the system. And, the work could be accomplished
with minimum downtime on the feeder. The anticipated neutral current would now be
approximately 65% of the capacity of the conductor.
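A rough capacity check of the chosen option, assuming a 150A ampacity for a 1/0 AWG copper conductor with 75 degree C insulation (the typical table value), is sketched below; it reproduces the 131% and roughly 65% loading figures.

neutral_ampacity = 150            # assumed ampacity of one 1/0 AWG copper, 75 deg C conductor (A)
estimated_neutral_current = 197   # from the revised load estimate (A)

single_loading = estimated_neutral_current / neutral_ampacity
parallel_loading = estimated_neutral_current / (2 * neutral_ampacity)

print(f"{single_loading:.0%}")    # 131% -- a single 1/0 neutral would be overloaded
print(f"{parallel_loading:.0%}")  # 66% -- roughly the 65% figure cited above once paralleled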
James Moravek, P.E. is Vice President of Hammel Green & Abrahamson, an architectural and
engineering firm in Minneapolis, Minn.

Contractor safety plan offers multiple


rewards.
Aug 1, 1995 12:00 PM, Lawrie, Robert J.
Comprehensive electrical safety program reduces accidents, builds morale, increases
productivity, and boosts the bottom line.
Electrical safety is of paramount importance at Nordling Dean Electric Co., Inc., Chatham, N.J.
According to Jerry Murphy, president of the firm, electrical safety plays a major role in company
operations because of the many benefits imparted to all concerned.
First and foremost is the health and welfare of employees, particularly on-site electricians who
face possible accidents every day. Murphy says that when employees know that we care enough
to plan, install, and enforce a total safety program, they appreciate it and reward our efforts by
returning a sincere effort to comply with the program. It's a win-win situation.

Accidents cause injury to personnel, impact the family, and result in lost time as well as damage
to property. When accidents happen, morale drops, efficiency suffers, insurance rates go up,
customers are unhappy, and everybody loses.
On the other hand, a well-planned safety program benefits everyone.
Multi-faceted safety program
The safety program was planned and organized by Carl Dumont, vice president of construction.
Over the years, the program has expanded, developed further, and diversified to cover
essentially all aspects of safety. Some of the major parts of the safety program are as follows.
Corporate statement of policy. The firm has published its Safety Manual, a public document
that includes the following corporate statement of policy.
Safety of employees and the public is a top priority that will not be compromised. Every effort
will be made to prevent accidents by timely recognition and correction of unsafe conditions and
unsafe practices. The firm will comply with all laws concerning safety and health enforced by
local, state or federal authorities.
All-level safety coordination team. A safety coordination committee consists of four key
members from all levels in the company. All members have multiple duties; however, each has
specific duties. For example, Abe Bawarshi, vice president of engineering, coordinates and
updates the entire program with special emphasis on the Safety Manual. Leon Baptiste and
Charlie McCormick, project managers, monitor and review safety at job sites, especially work
procedures, to assure maximum safety. Roger Dumont, warehouse manager, keeps tabs on all
protective equipment, test instruments, and safety items. He regularly sends out rubber gloves,
blankets, and instruments to be tested for reliability. Bill Davi, journeyman electrician, teaches
apprentice safety classes.
Job start-up and site inspections. As soon as a job is scheduled, the superintendent and foreman
meet to discuss accident prevention. Job-site conditions are examined and any danger spots are
pointed out. Foremen make regular inspections of the site looking for any accident-prone
locations.
Supervisor/foreman meetings. At these meetings, which are held at least quarterly, supervisors
and foremen discuss safety data, experiences, accidents, and/or problems in detail. Their
objective is to increase safety and reduce accidents. These meetings accomplish much the same
as the job start-up and site inspection procedures do.
Tool box meetings. At each job site, a ten-min. meeting is held by the project supervisor or job
foreman each week. Accidents or near accidents are reviewed and actions to prevent recurrence
are discussed. Everyone is encouraged to contribute ideas that would enhance work safety.
Accident investigation. Accidents will happen, however. When they do, the Safety Coordination
Team makes a full investigation, obtains all facts, details, and possible causes, and then takes

suitable action to prevent recurrence. The team uses a thorough accident investigation procedure
to assure a meticulous inquiry. The procedure examines the sequence of actions, the condition of
protective equipment and whether it was used properly, etc. The job foreman is responsible for
assignment of men who are trained or competent for each kind of work. A comprehensive
questioning guide is used and an accident report must be filled out and sent to top management.
Seminars/training. Leon Baptiste, power quality engineer, also serves as a
safety director. One of his responsibilities is to attend OSHA and safety seminars on a regular schedule to be certain
that work procedures and the safety manual incorporate the latest data concerning safety.
Training also includes sessions for all company personnel held at least twice a year. Training of
journeymen is an ongoing process in the field, as they observe and use protective items,
instruments, the safety manual, safety checklists, etc.
Safety manual. The company's 110-page safety manual is a mainstay of the safety plan. It covers
all common safety situations and provides guidance for essentially any safety circumstance that
might exist. For example, chapters in the manual discuss OSHA regulations, accident
investigation procedures, first aid, training, fire prevention and control, lockout and tagout
procedures, ground-fault protection on construction sites, just to name a few.
Safe procedure checklists. For common or repetitive work, certain hazardous work, and to meet
OSHA rules, "Safe Procedure Checklists" must be read and used to guide work; completed
checklist must be submitted to central accident record-keeping when the work is completed.
Working in confined space is especially hazardous; permits are required and checklists for the
work must be used. The same is true for work procedures such as work performed with
hazardous materials.
Protective equipment, tools, test instruments. All protective items are listed on a main index,
signed out to job sites, and tested upon return and/or on a schedule. Typical equipment includes
hard hats, hearing protection, glasses, face shields, respiratory equipment, first-aid kits,
stretchers, ladders, scaffolds, power tools, test instruments, rubber blankets, gloves, and high-voltage testers.
OSHA training. The firm keeps up to date with the latest OSHA standards by sending safety
directors to OSHA seminars and carrying out all regulations as required. Information concerning
OSHA rules is included in chapters in the Safety Manual. Typical OSHA rules and activities
include displaying OSHA posters in company buildings and strict use of signs, labels, color
codes, and posters, etc. to warn of hazards. Additional regulations relate to maintaining detailed
records (using the OSHA log and summary of occupational injuries and illnesses), maintaining a
list of all hazardous chemicals present at the company or at work sites, training relative to
hazardous chemical materials, and the use of safety data sheets.
Management review. Top level officers in the firm review all reports pertinent to the safety
program and are well versed in all aspects of the safety program on the job. They are aware of
the benefits that accrue, as mentioned earlier. As a result, a motivation program has been instituted.

Incentive plan. The firm started a program that motivates and rewards the foreman who has the
best safety record each year. The award is given at the company annual holiday party. This award
program reaffirms the company's dedication to an effective safety program and helps to motivate
foremen to do their best.

Grounding remote pump stations.


Aug 1, 1995 12:00 PM, Hartwell, Frederic P.
Efforts to ensure continuity of power must not create potential hazards through improper
grounding techniques.
Remote pumping stations, each with their own local disconnecting means and grounding
electrodes, frequently cause difficulties in applying appropriate grounding rules. Often a property
owner and the utility establish a remote service point where the power will be metered, and the
owner takes over from there, providing the wiring to the remote stations. In some cases there
may be from four to a dozen installations stretching out up to a mile or more in total distance. In
the specific case involved here, the system is 480V corner grounded. Some utilities require this
type of service in cases where there are no ground detectors in place.
At the point of supply to the customer wiring system, there will be a meter and a fused
disconnect (or circuit breaker). This is the service disconnect for the installation, and it also
allows the utility to troubleshoot and for overload protection for the transformer. Where the
utility's primary distribution includes a grounded conductor (the usual case), there may be a 4wire drop to the customer's pole. One option some utilities use is to connect the messenger wire
to a driven ground at their distribution transformer only. The owner then usually distributes
power from the meter location with three separate, insulated overhead conductors.
We received a report that when motors are connected to the load side of this distribution, there
are instances of fault currents passing through local electrodes with magnitudes high
enough to literally bake the soil at the electrode dry. The result is an extremely high electrode
resistance, and the frames of local motors and switchgear can be operating with a dangerous
touch voltage.
When this happens, some owners treat the downstream equipment as though it were connected to
an ungrounded system in order to increase reliability of service on these untended remote
installations. This means that although there is a grounding electrode at each equipment location,
there is no connection between the grounded phase conductor and this electrode.
According to the report we received, the owner, in cooperation with the utility, decided to lift the
grounded phase off the center lug of the meter socket, and insulate it. In addition, he proposed to
connect the load end of the messenger wire to a driven ground at his service pole. The meter

enclosure and disconnect will be grounded through a short piece of solid copper, bonded to the
pole ground. The proposed solution is shown in the drawing.
The EC&M Panel's response
We strongly disagree with the proposed solution. The system is a chimera, an incongruous
combination of grounded and ungrounded distribution practices that fails to offer the minimum
safety requirements of either.
Lifting the grounded phase from the service grounding terminals does not make the system
ungrounded. A fault in the service equipment will still return to the transformer over the pole
ground wire and the quadruplex messenger. Note, however, that the pole ground may not have
the required size and would not be "routed with the phase conductors" as required in Sec. 250-23(b).
The owner's grounding system
However, our real concerns arise from what the description implies about the grounding system.
Grounding electrodes are part of a ground reference arrangement that stabilizes the voltage to
ground and dissipates surges. In performing this function, they are never intended to carry
significant amounts of current for extended periods of time. At utilization voltages, they must
never be relied upon for the return of fault current.
Fault current from an insulation failure can only bake an electrode if it is denied the properly
constituted conductive path back to the system source, as required for these systems. In this case,
that path would be over the grounded phase conductor by way of the main bonding jumper at
each structure disconnecting means. According to the description, the customers are omitting
these bonding jumpers in order to treat the system as though it were ungrounded. However, the
system is grounded at the transformer. And where distribution systems are grounded at any point,
Sec. 250-23(b) and Sec. 250-53(b) require that the grounded circuit conductor be brought to the
service disconnecting means and bonded to the enclosure and any equipment grounding
conductors.
Sec. 250-24(a) requires similar bonding for the downstream pumps, because the pumping
stations would be considered "structures" in the application of that rule.
In both cases (service and remote station), you must make a bond to the grounding electrode
conductor, but the primary fault current path is over the grounded circuit conductor, which
should be identified according to the rules in Art. 200.
If you don't make this bonding connection, then an insulation failure that energizes conductive
materials connected to earth through an electrode, with no other return path available, will drive
current through the electrode in accordance with Ohm's law. If the ground rod resistance is 25
ohms, about 19A will flow through the electrode (480V / 25 ohms = 19.2A). This is far too little
current to trip the feeder overcurrent protective device.

Since power is equal to [I.sup.2]R, this fault will dissipate about 9kW of heat. Most of the
electrode resistance is concentrated in the soil immediately surrounding the rod, and that is
therefore where the heat will be concentrated. If the fault continues, it will indeed bake the
ground to the point that the electrode is worthless (and, incidentally, in violation of Sec. 250-84
at locations with a single electrode). As the electrode resistance rises, the touch voltage on local
conductive surfaces will approach 480V (depending on local voltage gradients), an extreme hazard.
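As a rough check on these numbers, the following short calculation (a sketch we have added for
illustration only, assuming the 25-ohm rod and a solid 480V line-to-ground insulation failure
described above) works through the arithmetic:

```python
# A minimal sketch, assuming a single 25-ohm ground rod and a solid 480V
# line-to-ground insulation failure.  The 25-ohm figure and the 480V system
# voltage come from the discussion above; the variable names are illustrative.

system_voltage = 480.0   # volts, line-to-ground fault on the corner-grounded system
rod_resistance = 25.0    # ohms, assumed electrode resistance

fault_current = system_voltage / rod_resistance        # Ohm's law: ~19.2 A
heat_dissipated = fault_current ** 2 * rod_resistance  # I^2 x R: ~9.2 kW

print(f"Fault current through the electrode: {fault_current:.1f} A")
print(f"Heat dissipated in the soil around the rod: {heat_dissipated / 1000:.1f} kW")

# 19.2 A is far below the trip point of a typical feeder overcurrent device,
# so the fault can persist indefinitely, baking the soil and driving both the
# electrode resistance and the touch voltage upward.
```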
A true ungrounded alternative
If the owner truly wants the continuity advantages of an ungrounded system, then by Code he or
she may have it. But the owner must wire it per the Code, without using identified phase
conductors and with overcurrent protective devices in all three phase conductors. And the owner
must make whatever arrangements the utility requires to connect it. On installations like this,
where exposed overhead conductors run for great distances, lightning arresters should be
provided at each pump location. Lightning surges are often more destructive on ungrounded
systems because there is no circuit path to ground.
With a ground fault on an ungrounded system, the system simply becomes corner grounded at
the point of the fault. The touch voltages on the connected enclosures do not increase
significantly above ground. There is time to arrange an orderly shutdown and correct the
problem. Normally, if a second fault occurs on a different phase, the resulting short circuit
through the intervening equipment grounding paths simply opens one or more overcurrent
protective devices. In this case, however, ground detectors take on additional practical
significance.
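To illustrate why the first fault is tolerable, here is a short sketch (our addition, assuming an
idealized, symmetrical 480V system and a solid fault on phase A) that works out the
phase-to-earth voltages before and after the fault:

```python
import cmath
import math

# Illustrative phasor sketch: on an ungrounded 480V system, the first solid
# ground fault pulls the faulted phase to earth potential while the other two
# phases rise to full line voltage to earth.  Values assume perfect symmetry.

V_LL = 480.0
V_LN = V_LL / math.sqrt(3)   # ~277 V, nominal phase-to-earth by symmetry

# Phase voltages referenced to the floating neutral point
phases = {name: cmath.rect(V_LN, math.radians(angle))
          for name, angle in (("A", 0), ("B", -120), ("C", 120))}

# Before the fault, earth sits at roughly the neutral potential
before = {name: abs(v) for name, v in phases.items()}

# After a solid fault on phase A, earth is tied to phase A's potential
after = {name: abs(v - phases["A"]) for name, v in phases.items()}

print("Phase-to-earth before fault:", {k: round(v) for k, v in before.items()})
print("Phase-to-earth after fault on A:", {k: round(v) for k, v in after.items()})
# Before: ~277 V on each phase.  After: A ~0 V, B and C ~480 V.
# Enclosures bonded at the faulted point stay near earth potential, which is
# why there is time for an orderly shutdown, and why a detector is needed to
# know the fault exists at all.
```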
However unlikely, if such a ground fault is not corrected at one location and another fault occurs
on a different phase at a remote location, current will attempt to flow between the electrodes
connected at each. This will produce a result similar to that described for the improperly
connected grounded system. Overcurrent devices may well not operate. The phase-to-phase
voltage between the faulted phases will drop by varying amounts, depending on electrode
resistance. This may shorten the life of the motors connected to the distribution as well as
increase shock hazards over time. The enormous amount of energy wasted while such a fault
is in progress will be metered and charged to the customer.
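To put rough numbers on this double-fault scenario, the following sketch (ours, for illustration
only) assumes 25-ohm electrodes at each of the two faulted locations and neglects conductor and
soil-path impedance between them:

```python
# A rough sketch of the double-fault case: two remote ground faults on
# different phases, with the only return path through the two local
# electrodes in series via earth.  Electrode resistances are assumptions.

V_between_faulted_phases = 480.0   # volts, phase-to-phase
R_electrode_1 = 25.0               # ohms, assumed
R_electrode_2 = 25.0               # ohms, assumed

loop_resistance = R_electrode_1 + R_electrode_2              # series path through earth
loop_current = V_between_faulted_phases / loop_resistance    # ~9.6 A

heat_per_electrode = loop_current ** 2 * R_electrode_1       # ~2.3 kW at each rod
energy_per_day_kwh = 2 * heat_per_electrode * 24 / 1000      # ~110 kWh/day, all metered

print(f"Current circulating between the two electrodes: {loop_current:.1f} A")
print(f"Heat at each electrode: {heat_per_electrode / 1000:.1f} kW")
print(f"Wasted (and billed) energy per day: {energy_per_day_kwh:.0f} kWh")
```

Again, the circulating current is far too small to open an overcurrent device, yet large enough to
bake both electrodes and to run up the owner's bill for as long as the faults persist.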
The NEC does not require ground detection on ungrounded systems, but a fine print note in Sec.
250-5(b) refers to its usefulness. We think the NEC applies, and should be enforced, on all of the
wiring downstream of the service point. It might be possible, for example, to rig a radio-operated
monitoring device that would alert a central station when a phase ground occurs.
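The NEC does not describe any such device, and we are not aware of a specific product for this
application; purely as a hypothetical sketch, the detection logic such a monitor might apply to
measured phase-to-earth voltages could look something like this (thresholds and the alert
mechanism are illustrative assumptions only):

```python
# Hypothetical sketch only, not a Code requirement or a real product: one way
# a remote monitor might decide that a phase-to-ground fault has occurred on
# an ungrounded 480V system, based on measured phase-to-earth voltages.
# The thresholds and the print statement (standing in for a radio link to a
# central station) are illustrative assumptions.

NOMINAL_LL = 480.0
LOW_THRESHOLD = 0.25 * NOMINAL_LL    # a phase pulled well below normal
HIGH_THRESHOLD = 0.85 * NOMINAL_LL   # the other phases rise toward line voltage


def grounded_phase(phase_to_earth_volts):
    """Return the name of the phase that appears faulted to ground, if any."""
    low = [p for p, v in phase_to_earth_volts.items() if v < LOW_THRESHOLD]
    high = [p for p, v in phase_to_earth_volts.items() if v > HIGH_THRESHOLD]
    if len(low) == 1 and len(high) == 2:
        return low[0]
    return None


# Example readings: a healthy system (~277 V each) versus phase B faulted
for readings in ({"A": 277, "B": 276, "C": 278},
                 {"A": 478, "B": 5, "C": 481}):
    fault = grounded_phase(readings)
    if fault:
        print(f"ALERT central station: phase {fault} grounded, readings {readings}")
    else:
        print(f"System normal: {readings}")
```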
EDITOR'S NOTE:
These answers are given by our panel of experts. I am chairing this panel, and the other panel
members include Bill Summers, James Stallcup, and Dan Leaf. The opinion expressed is that of
the panel. If a panelist disagrees with the majority opinion, his explanation is printed following
the answer. Although authoritative, the answers printed here are not, and cannot be relied on as,
formal interpretations of the National Electrical Code.
