
Plant Performance Test

Source: http://www.powermag.com/how-to-conduct-a-plant-performance-test/

Completing a power plant's start-up and commissioning usually means pushing the prime contractor to
wrap up the remaining punch list items and getting the new operators trained. Staffers are tired of the long
hours they've put in and are looking forward to settling into a work routine.

Just when the job site is beginning to look like an operating plant, a group of engineers arrives with
laptops in hand, commandeers the only spare desk in the control room, and begins to unpack boxes of
precision instruments. In a fit of controlled confusion, the engineers install the instruments, find primary
flow elements, and make the required connections. Wires are dragged back to the control room and
terminated at a row of neatly arranged laptops. When the test begins, the test engineers stare at their
monitors as if they were watching the Super Bowl and trade comments in some sort of techno-geek
language. The plant performance test has begun (Figure 1).
1. Trading spaces. This is a typical setup of data acquisition computers used during a plant performance
test. Courtesy: McHale & Associates

Anatomy of a test

The type and extent of plant performance testing activities are typically driven by the project
specifications or the turnkey contract. They also usually are linked to a key progress payment milestone,
although the value of the tests goes well beyond legalese. The typical test is designed to verify power and
heat rate guarantees that are pegged to an agreed-upon set of operating conditions. Sounds simple, right?
But the behind-the-scenes work to prepare for a test on which perhaps millions of dollars are at stake
beyond the contract guarantees almost certainly exceeds your expectations (see box).

Performance test economics are overpowering

Consider a 500-MW facility with a heat rate of 7,000 Btu/kWh. When operating at baseload with an 80%
capacity factor, the plant will consume over 24 million mmBtu per year. At a fuel cost of $8/mmBtu,
that's nearly $200 million in fuel costs for the year.

If an instrumentation or control error raises the heat rate of the facility by 0.5%, that would cost the plant
an additional $1 million each year. If, on the other hand, a misreported heat rate causes the facility to be
dispatched 0.5% less often, reducing the capacity factor to 79.5%, the losses in revenue at $50/MWh
would amount to nearly $1.1 million for the year.

Performance tests can bring the right people together at the facility to identify losses in performance and
to recapture or prevent such losses in facility profits.
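
The arithmetic behind these figures is simple enough to script. The short Python sketch below reproduces the numbers in this box; the plant size, heat rate, capacity factor, fuel price, and energy price are the example values quoted above, not data from any particular facility.

# Back-of-the-envelope economics for the 500-MW example above.
capacity_mw = 500.0             # net plant output, MW
heat_rate_btu_per_kwh = 7_000.0
capacity_factor = 0.80          # baseload operation
fuel_price = 8.0                # $/mmBtu
energy_price = 50.0             # $/MWh

generation_mwh = capacity_mw * 8760.0 * capacity_factor
fuel_mmbtu = generation_mwh * 1_000.0 * heat_rate_btu_per_kwh / 1.0e6
annual_fuel_cost = fuel_mmbtu * fuel_price

# A 0.5% heat-rate error shows up directly as 0.5% more fuel cost.
heat_rate_penalty = annual_fuel_cost * 0.005

# A half-point drop in capacity factor (80% to 79.5%) shows up as lost energy revenue.
lost_revenue = capacity_mw * 8760.0 * 0.005 * energy_price

print(f"Annual fuel burn:        {fuel_mmbtu / 1e6:.1f} million mmBtu")
print(f"Annual fuel cost:        ${annual_fuel_cost / 1e6:.0f} million")
print(f"0.5% heat-rate penalty:  ${heat_rate_penalty / 1e6:.2f} million per year")
print(f"Half-point capacity hit: ${lost_revenue / 1e6:.2f} million per year")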

Long before arriving on site, the test team will have:

Gathered site information.

Reviewed the plant design for the adequacy and proper placement of test taps and for the type and
location of primary flow elements.

Developed plant mathematical models and test procedures.

Met with the plant owner, contractor, and representatives of major original equipment
manufacturers (OEMs) to iron out the myriad details not covered by contract specifications.
Experienced owners will have made sure that the plant operations staff is included in these
meetings.

Tests are normally conducted at full-load operation for a predetermined period of time. The test team
collects the necessary data and runs them through the facility correction model to obtain preliminary
results. Usually within a day, a preliminary test report or letter is generated to allow the owner to declare
"substantial completion" and commence commercial operation. The results for fuel sample analysis
(and/or ash samples) are usually available within a couple of weeks, allowing the final customer report to
be finished and submitted.
The art and science of performance testing require very specialized expertise and experience that take
years to develop. The science of crunching data is defined by industry standards, but the art rests in the
ability to spot data inconsistencies, subtle instrument errors, skewed control systems, and operational
miscues. The experienced tester can also quickly determine how the plant must be configured for the tests
and can answer questions such as: Will the steam turbine be in pressure control or at valves wide open in
sliding-pressure mode? Which control loops need to be in manual or automatic during testing? At what
level should the boiler or duct burners be fired?

For the novice, it's easy to miss a 0.3% error in one area and an offsetting 0.4% error in another area that
together yield a poor result if they aren't resolved and accounted for. With millions of dollars on the line,
the results have to be rock solid.

Mid-term exams

There are many reasons to evaluate the performance of a plant beyond meeting contract guarantees. For
example, a performance test might be conducted on an old plant to verify its output and heat rate prior to
an acquisition to conclusively determine its asset value. Other performance tests might verify capacity
and heat rate for the purpose of maintaining a power purchase agreement, bidding a plant properly into a
wholesale market, or confirming the performance changes produced by major maintenance or component
upgrades.

Performance tests are also an integral part of a quality performance monitoring program. If conducted
consistently, periodic performance tests can quantify nonrecoverable degradation and gauge the success
of a facility's maintenance programs. Performance tests also can be run on individual plant components to
inform maintenance planning. If a component is performing better than expected, the interval between
maintenance activities can be extended. If the opposite is the case, additional inspection or repair items
may be added to the next outage checklist.

Whatever the reason for a test, its conduct should be defined by industry-standard specifications such as
the Performance Test Codes (PTCs) published by the American Society of Mechanical Engineers
(ASME), whose web site (www.asme.org) has a complete list of available codes. Following the PTCs
allows you to confidently compare today's and tomorrow's results for the same plant or equipment. Here,
repeatability is the name of the game.

The PTCs don't anticipate how to test every plant configuration but, rather, set general guidelines. As a
result, some interpretation of the code's intent is always necessary. In fact, the PTCs anticipate variations
in test conditions and reporting requirements in a code-compliant test. The test leader must thoroughly
understand the codes and the implications of how they are applied to the plant in question. Variances must
be documented, and any test anomalies must either be identified and corrected before starting the test or
be accounted for in the final test report.

A performance test involves much more than just taking data and writing a report. More time is spent in
planning and in post-test evaluations of the data than on the actual test. Following is a brief synopsis
describing the process of developing and implementing a typical performance test. Obviously, the details
of a particular plant and the requirements of its owner should be taken into account when developing a
specific test agenda.

Planning for the test


The ASME PTCs are often referenced in equipment purchase and/or engineering, procurement, and
construction (EPC) contracts to provide a standard means of determining compliance with performance
guarantees. The ASME codes are developed by balanced committees of users, manufacturers,
independent testing agencies, and other parties interested in following best engineering practices. They
include instructions for designing and executing performance tests at both the overall plant level and the
component level.

Planning a performance test begins with defining its objective(s): the validation of contractual guarantees
for a new plant and/or the acquisition of baseline data for a new or old plant. As mentioned, part of
planning is making sure that the plant is designed so it can be tested. Design requirements include
defining the physical boundaries for the test, making sure that test ports and permanent instrumentation
locations are available and accessible, and ensuring that flow metering meets PTC requirements (if
applicable).

After the design of the plant is fixed, the objectives of testing must be defined and documented along with
a plan for conducting the test and analyzing its results. A well-written plan will include provisions for
both expected and unexpected test conditions.

Understanding guarantees and corrections

The most common performance guarantees are the power output and heat rate that the OEM or contractor
agrees to deliver. Determining whether contractual obligations have been met can be tricky. For example,
a plant may be guaranteed to have a capacity of 460 MW at a heat rate of 6,900 Btu/kWh, but only under
a fixed set of ambient operating conditions (reference conditions). Typical reference conditions may be a
humid summer day with a barometric pressure of 14.64 psia, an ambient temperature of 78°F, and relative
humidity of 80%.

The intent of testing is to confirm whether the plant performs as advertised under those specific
conditions. But how do you verify that a plant has met its guarantees when the test must be done on a dry
winter day, with a temperature of 50°F and 20% relative humidity? The challenging part of performance
testing is correcting the results for differences in atmospheric conditions. OEMs and contractors typically
provide ambient correction factors as a set of correction curves or formulas for their individual
components. But it is often up to the performance test engineers to integrate the component information
into the overall performance correction curves for the facility.
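
Mechanically, applying the corrections is just multiplication once the curves exist. The sketch below is a minimal illustration of the idea for the inlet-temperature correction only; the curve is a hypothetical placeholder built from the rough 1%-per-3°F rule of thumb discussed later in this article, not any OEM's actual correction curve, and real corrections also cover pressure, humidity, and other factors.

# Correct a measured output back to a 78 F reference condition using a
# made-up linear correction curve (real OEM curves are polynomials or tables).
REF_TEMP_F = 78.0

def output_correction_factor(ambient_temp_f: float) -> float:
    # Hypothetical curve: roughly 1% change in output per 3 F of ambient temperature.
    return 1.0 + (ambient_temp_f - REF_TEMP_F) * (0.01 / 3.0)

measured_output_mw = 475.0   # hypothetical reading on a 50 F test day
test_temp_f = 50.0

corrected_output_mw = measured_output_mw * output_correction_factor(test_temp_f)
print(f"Corrected output at reference conditions: {corrected_output_mw:.1f} MW")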

The reference conditions for performance guarantees are unique to every site. A simple-cycle gas
turbine's ratings assume its operation under International Organization for Standardization (ISO)
conditions: 14.696 psia, 59°F, and relative humidity of 60%. The condition of the inlet air has the biggest
impact on gas turbine-based plants because the mass flow of air through the turbines (and consequently
the power
they can produce) is a function of pressure, temperature, and humidity. Performance guarantees for steam
plants also depend on air mass flow, but to a lesser extent.

The barometric pressure reference condition is normally set to the average barometric pressure of the site.
If a gas turbine plant is sited at sea level, its barometric pressure reference is 14.696 psia. For the same
plant at an altitude of 5,000 feet, the reference would be 12.231 psia, and its guaranteed output would be
much lower.
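
The 12.231-psia figure follows directly from the standard atmosphere. As a quick check, the snippet below evaluates the U.S. Standard Atmosphere pressure-altitude relation for the troposphere; it is a general-purpose formula used here for illustration, not a requirement of any test code.

# Barometric pressure versus altitude from the U.S. Standard Atmosphere.
def std_atm_pressure_psia(altitude_ft: float) -> float:
    return 14.696 * (1.0 - 6.87559e-6 * altitude_ft) ** 5.2559

print(std_atm_pressure_psia(0.0))      # 14.696 psia at sea level
print(std_atm_pressure_psia(5000.0))   # about 12.23 psia at 5,000 feet, matching the text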

The relative humidity reference condition may or may not have a significant bearing on plant
performance. In gas turbine plants the effect is not large (unless the inlet air is conditioned), but it still
must be accounted for. The effect of humidity, however, is more pronounced on cooling towers. Very
humid ambient air reduces the rate at which evaporation takes place in the tower, lowering its cooling
capacity. Downstream effects are an increase in steam turbine backpressure and a reduction in the
turbine-generator's gross capacity.

The most important correction for gas turbine plant performance tests involves compressor inlet air
temperature. Although a site's barometric pressure typically varies by no more than 10% over a year, its
temperatures may range from 20°F to 100°F over the period. Because air temperature has a direct effect
on air density, temperature variation changes a unit's available power output. For a typical heavy-duty
frame gas turbine, a 3-degree change in temperature can affect its capacity by 1%. A temperature swing
of 30 degrees could raise or lower power output by as much as 10%. The effect can be even more
pronounced in aeroderivative engines.

ISO-standard operating conditions or site-specific reference conditions are almost impossible to achieve
during an actual test. Accordingly, plant contractors and owners often agree on a base operating condition
that is more in line with normal site atmospheric conditions. For example, a gas turbine plant built in
Florida might be tested at reference conditions of 14.6 psia, 78°F, and 80% relative humidity. Establishing a realistic set of
reference conditions increases the odds that conditions during a performance test will be close to the
reference conditions. Realistic reference conditions also help ensure that the guarantee is representative of
expected site output.

Establishing site-specific reference conditions also reduces the magnitude of corrections to measurements.
When only small corrections are needed to relate measured performance from the actual test conditions to
the reference conditions, the correction methods themselves become less prone to question, raising
everyone's comfort level with the quality of the performance test results.

Beyond site ambient conditions, the PTCs define numerous other correction factors that the test designer
must consider. Most are site-specific and include:

Generator power factor.

Compressor inlet pressure (after losses across the filter house).

Turbine exhaust pressure (due to the presence of a selective catalytic reduction system or heat-
recovery steam generator).

Degradation/fired hours, recoverable and unrecoverable.

Process steam flow (export and return).

Blowdown (normally isolated during testing).

Cooling water temperature (if using once-through cooling, or if the cooling tower is outside the
test boundary).

Condenser pressure (if the cooling water cycle is beyond the test boundary).

Abnormal auxiliary loads (such as heat tracing or construction loads).

Fuel supply conditions, including temperature and/or composition.


Choose the right instrumentation

Instrumentation used to record test measurements should be selected based on a pre-test uncertainty
analysis (see "Understanding test uncertainty"). This analysis is important to fine-tune the instrumentation
to ensure that the quality of the test meets expectations. The test instruments themselves are usually a
combination of temporary units installed specifically for testing, permanently installed plant
instrumentation, and utility instrumentation (billing or revenue metering). Temporary instruments are
typically installed to make key measurements that have a significant impact on results and where higher
accuracy is needed to reduce the uncertainty of test results. Among the advantages of using a piece of
temporary instrumentation is that it has been calibrated specifically for the performance test in question
following National Institute of Standards and Technology (NIST) procedures.

Another benefit of installing temporary instrumentation is to verify the readings of permanent plant
instruments. Plant instrumentation typically lacks NIST-traceable calibration or has been calibrated by
technicians who are more concerned with operability than with accurate performance testing. There's a
good reason for the former: Performing a code-level calibration on plant instrumentation can be more
expensive than installing temporary test instrumentation. An additional benefit of a complete temporary
test instrumentation setup is that the instrumentation, signal conditioning equipment, and data acquisition
system are often calibrated as a complete loop, as is recommended in PTC-46 (Overall Plant
Performance).

All performance instruments should be installed correctly, and any digital readings should be routed to a
central location. Choosing a good performance data center is very important. A performance command
center should be out of the way of site operations yet close enough to observe plant instrumentation input
and operation.

Obviously, performance instrument readings should be checked against those of plant instruments, where
available. This is one of the most important checks that can be made prior to a performance test. When a
performance tester can get the same result from two different instruments that were installed to
independent test ports and calibrated separately, there's a good chance the measurement is accurate. If
there's a difference between the readings that is close to or exceeds instrument error, something is likely
to be amiss.

Typically, when plant guarantees are tied to corrected output and heat rate, the two most important
instrument readings are measured power and fuel flow. If either is wrong, the test results will be wrong.
For example, say you're testing a unit whose expected output is 460 MW. The plant instrument is
accurate to within ±1%, and the test instrument is even more accurate: ±0.3%. In this case, the tester
prefers to see the two readings well within 1% of each other (4.6 MW), but they still may be as far apart
as 5.98 MW (1.3%) and technically be within the instruments' uncertainty.
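
That comparison is easy to automate. The sketch below applies the simple screen implied by the example in this paragraph: the gap between the plant meter and the test meter should stay inside the sum of the two instruments' stated uncertainties. The readings themselves are hypothetical.

# Screen the plant power meter against the calibrated test meter for the
# 460-MW example: plant meter +/-1%, test meter +/-0.3%.
expected_mw = 460.0
max_allowed_gap_mw = (0.010 + 0.003) * expected_mw   # 5.98 MW, as noted above

plant_reading_mw = 463.5   # hypothetical plant-instrument reading
test_reading_mw = 459.2    # hypothetical test-instrument reading

gap_mw = abs(plant_reading_mw - test_reading_mw)
print(f"Gap: {gap_mw:.2f} MW (allowed: {max_allowed_gap_mw:.2f} MW)")
if gap_mw > max_allowed_gap_mw:
    print("Readings disagree by more than combined instrument error; something is likely amiss.")
else:
    print("Readings agree within combined instrument error.")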

When setting up for a performance test, it is not uncommon to find errors in permanent plant
instrumentation, control logic, or equipment installation. These errors can influence the operation of a
generating unit, for example by causing over- or underfiring of a boiler or gas turbine and significantly
impacting the units output and heat rate. In cases where the impact on actual operation continues
undetected, the corrected test report values may still be in error due to corrections made based on faulty
instrument readings. If these reported values are used as the basis of facility dispatch, a small error could
have an enormous impact on the plant's bottom line, ranging from erroneous fuel nominations to the
inability to meet a capacity commitment.
Understanding test uncertainty

Uncertainty is a measure of the quality of the test or calculation result. A pre-test uncertainty
analysis can be used to design a test to meet predefined uncertainty limits. A post-test uncertainty
analysis should be performed to verify that those uncertainty limits were met and to determine the
impact of any random scatter recorded in the test data.

Each input to the calculation must be analyzed for its impact on the final result. This impact is
identified as the sensitivity of the result to that input. For example, if inlet air temperature changes
by 3 degrees F, and the corrected output changes by 1%, the sensitivity is 1% per 3 degrees F or
0.33%/degree F.

The instrumentation information is used to identify the systematic error potential for each input.
For example, a precision 4-wire resistance-temperature detector can measure inlet air temperature
with an accuracy of ±0.18°F, based on information provided by the manufacturer and as
confirmed during periodic calibrations.

During a test run, multiple recordings are made for any given parameter, and there will be scatter
in the data. The amount of scatter in the data is an indication of the random error potential for
each input. For example, during a 1-hour test run, the inlet air temperature may be recorded as an
average of 75°F, with a standard deviation in the measurements of 0.6°F.

If more than one sensor is used to measure a parameter, there also will be variances between
sensors based on location. These variances may be due to the variances either in the
instrumentation or in the actual parameter measured. For example, if air temperature is being
measured by an array of sensors, there may be effects due to ground warming or exhaust vents in
the area, either of which would affect the uncertainty of the bulk average measurement. These
variances will affect the average and standard deviation values for that parameter. Spatial
variances are added into the systematic error potential, based on the deviation of each location
from the average value for all locations.

Now that we've defined the three separate inputs to the uncertainty determination (sensitivity, A;
systematic error potential/uncertainty, B; and random error potential/uncertainty, C), it's time to
put on our statisticians' hats.

The terms can be combined in the following equation:

Uncertainty = SQRT[(A x B)² + (t x A x C)²]

The "t" value on the right side of the equation is known as the Student-t factor and is based on the
number of degrees of freedom (or number of data points recorded) in the data set. For a 95%
confidence interval and data taken at 1-minute intervals for a 60-minute test run, the value of "t" is
2.0. If data are taken less frequently (such as at 2-minute intervals), fewer recordings are made and
therefore either the test run must be longer (which is not recommended, because ambient
conditions may change) or the value of "t" will increase.
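
Plugging the inlet-air-temperature example from this box into the equation above looks like this in code. The values are the ones already quoted (about 0.33% per degree sensitivity, ±0.18°F systematic, 0.6°F scatter, t = 2.0); whether the raw scatter or the scatter divided by the square root of the number of readings belongs in the random term is a detail governed by PTC 19.1, so treat the printed number as illustrative only.

import math

A = 1.0 / 3.0   # sensitivity: about 0.33% output change per degree F
B = 0.18        # systematic uncertainty of the RTD, degrees F
C = 0.6         # scatter (standard deviation) of the recorded temperatures, degrees F
t = 2.0         # Student-t factor for roughly 60 one-minute readings at 95% confidence

# Evaluate the equation exactly as written above.
uncertainty_pct = math.sqrt((A * B) ** 2 + (t * A * C) ** 2)
print(f"Inlet-temperature contribution to corrected-output uncertainty: {uncertainty_pct:.2f}%")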

The example given above is for a single parameter, such as inlet air temperature, and its effect on
corrected output. For each correction made, the same process must be carried out to determine the
sensitivity, systematic uncertainty, and random uncertainty of the corrected result with respect to that
correction parameter (such as barometric pressure or relative humidity).

Once each individual uncertainty has been identified, they can be combined to determine the
overall uncertainty of the corrected result. Combining the individual uncertainties is a three-step
process:

Determine the total systematic uncertainty as the square root of the sum of the squares for
all the individual systematic uncertainties.
Determine the total random uncertainty as the square root of the sum of the squares for all
the individual random uncertainties.
Combine the total systematic uncertainty and total random uncertainty as follows: Total
uncertainty = SQRT[(systematic_total)² + (t x random_total)²].
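
The sketch below walks through those three steps for a handful of hypothetical per-parameter contributions; the individual values are invented for illustration and do not come from any real test.

import math

# Hypothetical per-parameter contributions to corrected-output uncertainty, in percent.
systematic = [0.25, 0.10, 0.06]   # e.g. power metering, fuel flow, inlet temperature
random_ = [0.05, 0.08, 0.04]
t = 2.0                           # Student-t factor for the data set

def rss(values):
    # Square root of the sum of the squares.
    return math.sqrt(sum(v * v for v in values))

systematic_total = rss(systematic)
random_total = rss(random_)
total_uncertainty = math.sqrt(systematic_total ** 2 + (t * random_total) ** 2)
print(f"Total corrected-output uncertainty: +/-{total_uncertainty:.2f}%")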

The result of the analysis is an expression stated in terms of the uncertainty calculated for an
individual instrument or the overall system. We might normally say, "The inlet air temperature is
75°F," but when including an uncertainty analysis of a temperature measurement system, a more
accurate statement would be, "We are 95% certain that the inlet air temperature is between 74.6°F
and 75.4°F."

Once again, the value for "t" will depend on the design of the test, including the number of
sensors and the frequency of data recordings. Additional information on the Student-t factor as
well as a discussion of how to determine uncertainty can be found in ASME PTC 19.1 (Test
Uncertainty).

Conduct the test

The performance test should always be conducted in accordance with its approved procedure. Any
deviations should be discussed and documented to make sure their impact is understood by all parties. If
the test is conducted periodically, it is important to know what deviations were allowed in previous tests
to understand if any changes in performance might have been due to equipment changes or simply to the
setup of the test itself.

Calibrated temporary instrumentation should be installed in the predetermined locations, and calibration
records for any plant or utility instrumentation should be reviewed. Check any data collection systems for
proper resolution and frequency and do preliminary test runs to verify that all systems are operating
properly.

The performance test should be preceded by a walk-down of the plant to verify that all systems are
configured and operating correctly. It's important to verify that plant operations are in compliance with
the test procedure because equipment disposition, operating limits, and load stability affect the results.
Data can then be collected for the time periods defined in the test procedure and checked for compliance
with all test stability criteria. Once data have been collected and the test has been deemed complete, the
results can be shared with all interested parties.
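
A typical stability screen is nothing more than comparing the scatter in a few key channels against the limits written into the test procedure. The sketch below shows the idea; the channel names and limits are placeholders, not values taken from PTC 46 or any specific procedure.

import statistics

# Hypothetical one-minute readings collected over a test run, keyed by channel.
readings = {
    "gross_power_mw": [459.8, 460.2, 460.0, 459.7, 460.3, 460.1],
    "inlet_temp_f":   [74.8, 75.1, 75.0, 74.9, 75.2, 75.0],
}

# Placeholder stability limits: maximum allowed deviation of any reading from the run average.
limits = {"gross_power_mw": 2.0, "inlet_temp_f": 1.0}

for channel, values in readings.items():
    average = statistics.mean(values)
    worst = max(abs(v - average) for v in values)
    status = "stable" if worst <= limits[channel] else "UNSTABLE"
    print(f"{channel}: average {average:.2f}, worst deviation {worst:.2f} -> {status}")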

Because the short preliminary test may be the most important part of the process, be sure to allow
sufficient time for it in the test plan. The preliminary test must be done during steady-state conditions
following load stabilization or when the unit is operating at steady state during the emissions testing
program. The preliminary test has three objectives: to verify all data systems, to make sure manual data
takers are reading the correct instruments and meters, and to have the data pass a "sanity check."

After the test data have been collected, the readings should be entered into the correction model as soon
as possible and checked for test stability criteria (as defined by the test procedure). At this point,
depending on the correction methods, the test director may be able to make a preliminary analysis of the
results. If the numbers are way out of whack with expected values, a good director will start looking for
explanations: possibly errors in the recorded data or something in the operational setup of the unit itself.
Though everyone is concerned when a unit underperforms, a unit that performs unexpectedly well may
have problems that have been overlooked. For example, a unit whose corrected test results indicate a 5%
capacity margin may need to have its metering checked and rectified, or it may have been mistuned and
left in an overfired condition.

Although an overtuned gas turbine may produce more megawatt-hours during initial operations, the gain
comes with a price: increasing degradation of the unit's hot section, shortening parts' lives and increasing
maintenance costs. The most common mistake in testing is acceptance of results that are too good. If
results are bad, everyone looks for the problem. If the results are above par, everyone is happy,
especially the plant owner, who seems to have gotten a "super" machine. However, there's a reason for
every excursion beyond expected performance limits, for better or worse.

If all the pretest checks are done properly, the actual performance test should be uneventful and
downright boring. It should be as simple as verifying that test parameters (load, stability, etc.) are being
met. This is where the really good performance testers make their work look easy. They appear to have
nothing to do during the test, and that's true because they planned it that way. Having done all the "real"
work beforehand, they can now focus on making sure that nothing changes during the test that may affect
the stability of the data.

Analyze the results

Almost immediately after the performance test (and sometimes even before it is complete), someone is
sure to ask, "Do you have the results yet?" Everyone wants to know if the unit passed. As soon as
practical, the performance group should produce a preliminary report describing the test and detailing the
results. Data should be reduced to test run averages and scrutinized for any spurious outliers. Redundant
instrumentation should be compared, and instrumentation should be verified or calibrated after the test in
accordance with the requirements of the procedure and applicable test codes.

The test runs should be analyzed following the methods outlined in the test procedure. Results from
multiple test runs can be compared with one another for the sake of repeatability. PTC 46 (Overall Plant
Performance) outlines criteria for overlap of corrected test results. For example, if there are three test
runs, a quality test should demonstrate that the overlap is well within the uncertainty limits of the test.
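
One way to apply an overlap check is to band each run's corrected result with the test uncertainty and confirm that the bands intersect. The sketch below does that for three hypothetical runs; PTC 46 states the actual acceptance criteria, so this is only a plausibility screen.

# Check that corrected results from three hypothetical runs overlap within the test uncertainty.
corrected_output_mw = [458.9, 459.6, 459.2]   # hypothetical corrected result for each run
uncertainty_mw = 1.5                          # hypothetical +/- test uncertainty

lower = [x - uncertainty_mw for x in corrected_output_mw]
upper = [x + uncertainty_mw for x in corrected_output_mw]

# The bands all intersect if the highest lower bound is below the lowest upper bound.
if max(lower) <= min(upper):
    print("All runs overlap within the test uncertainty; repeatability looks good.")
else:
    print("Runs do not overlap; investigate before averaging the results.")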

Once test analysts are satisfied that the results were proper, the test report can be written to communicate
them. This report should describe any exceptions to the test procedure that may have been required due to
the conditions of the facility during the test. In the event that the results of the performance test are not as
expected, the report may also suggest potential next steps to rectify them.

For sites where the fuel analysis is not available online or in real time, a preliminary efficiency and/or
heat rate value may be reported based on a fuel sample taken days or even weeks before the test.
Depending on the type and source of the fuel, this preliminary analysis may differ significantly from
that of the fuel burned during the test. It's important to understand that preliminary heat rate and
efficiency results are often subject to significant changes. Once the fuel analyses are available for the fuel
samples taken during the test, a final report can be prepared and presented to all interested parties.

Tina L. Toburen, PE, is manager of performance monitoring and Larry Jones is a testing consultant
for McHale & Associates. Toburen can be reached at 425-557-8758 or tina.toburen@mchale.org; Jones
can be reached at 865-588-2654 or larry.jones@mchale.org.
