
INTERNATIONAL JOURNAL OF CLIMATOLOGY
Int. J. Climatol. 28: 219–233 (2008)
Published online 1 June 2007 in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/joc.1519

Complicated ENSO models do not significantly outperform very simple ENSO models

Halmar Halide (a) and Peter Ridd (b)*
(a) Australian Institute of Marine Science, PMB3, MC, Townsville, Australia
(b) School of Mathematical and Physical Sciences, James Cook University, Townsville, Australia
ABSTRACT: An extremely simple univariate statistical model called IndOzy was developed to predict El Niño-Southern Oscillation (ENSO) events. The model uses five delayed-time inputs of the Niño 3.4 sea surface temperature anomaly (SSTA) index to predict up to 12 months in advance. The prediction skill of the model was assessed using both short- and long-term indices and compared with other operational dynamical and statistical models. Using ENSO-CLIPER (climatology and persistence) as a benchmark, only a few statistical models, including IndOzy, are considered skillful for short-range prediction. None of the models, however, differs significantly from the benchmark model at seasonal Lead-3 to Lead-6, and none shows any skill, even against a no-skill random forecast, at seasonal Lead-7. When using the Niño 3.4 SSTA index from 1856 to 2005, the ultra-simple IndOzy gives useful predictions up to a 4 month lead, and is only slightly less skillful than the best dynamical model, LDEO5. That a model as simple as IndOzy, which can be run in a few seconds on a standard office computer, can perform comparably with far more complicated models raises some philosophical questions about modelling extremely complicated systems such as ENSO. It seems evident that much of the complexity of many models does little to improve the accuracy of prediction. If larger and more complex models do not perform significantly better than an almost trivially simple model, then future models that use even larger data sets and much greater computer power may not lead to significant improvements in either dynamical or statistical models. Investigating why simple models perform so well may help point the way to improved models. For example, analysing dynamical models by successively stripping away their complexity can focus attention on the parameters most important for a good prediction. Copyright © 2007 Royal Meteorological Society

KEY WORDS El Niño; climate forecast; statistical model

Received 30 March 2006; Revised 8 February 2007; Accepted 8 February 2007
1. Introduction
There has been considerable research into modelling
and predicting the El Niño–Southern Oscillation (ENSO)
phenomenon due to the huge economic costs of extreme
ENSO events. The ability to accurately predict ENSO
events a few months in advance has been shown to
benet the sheries, animal husbandry and agricultural
sectors (Adams et al., 1995; Lehodey et al., 1997; Solow
et al., 1998; Jochec et al., 2001; Chen et al., 2002;
Letson et al., 2005). For example, using an accurate
ENSO forecast in managing a salmon shery resulted
in a considerable productivity increase (Costello et al.,
1998). An increased production of 79% was obtained
when an ENSO prediction was incorporated into grazing
strategies in the beef industry in Australia (McKeon
et al., 2000). Similar benefits were also observed in other
sectors, such as natural gas purchase (Changnon et al., 2000), hydropower price (Hamlet et al., 2002), the insurance industry (Chichilnisky and Heal, 1998), and disease risk (Bouma et al., 1997).

* Correspondence to: Peter Ridd, School of Mathematical and Physical Sciences, James Cook University, Townsville, 4811, Australia. E-mail: peter.ridd@jcu.edu.au
Halmar Halide is on post-doctoral leave from the Physics Department, Hasanuddin University, Makassar, Indonesia.
Not surprisingly, there are many models for predicting
ENSO (Barnston et al., 1999), and they have greatly
varying degrees of complexity. These models can be divided into two categories: statistical models and physical/dynamical models. In statistical prediction, the objective is to find an optimal predictor (input) model that gives a better fit to a predictand (output), regardless of any causal relationship between them. Belonging to this category are models such as the canonical correlation analysis (Barnston and Ropelewski, 1992) and the Markov (Xue et al., 2000) models of NCEP, the linear-inverse model of NOAA-CDC (Penland and Magorian, 1993), the constructed-analog models (van den Dool, 1994), the climatology and persistence model (Knaff and Landsea, 1997), the UBC nonlinear canonical correlation analysis (Hsieh, 2001), and the regression model of Florida State University (Clarke and van Gorder, 2001).
Physical/dynamical models apply the governing
laws of fluids and thermodynamics to describe the
phenomenon. Some of the models in this category are:
the Lamont Doherty model (Zebiak and Cane, 1987),
the hybrid dynamical model of SIO/MPI (Barnett et al.,
1993), the European Centre for Medium-range Weather
Forecast model (Molteni et al., 1996), the NCEP coupled
model (Ji et al., 1996), the JMA GCM model (Shibata
et al., 1999), the KMA-SNU model (Kang and Kug,
2000), the BMRC CGCM (Wang et al., 2002) and the
AGCM of NISPP/NASA (Bacmeister and Suarez, 2002).
There also exist hybrid models that combine an ocean
model with a statistical atmosphere. In this case, a wind
stress is produced from sea surface temperature anomaly
(SSTA) data using various statistical techniques ranging
from CCA/linear regression and EOF analysis to neural
network methods. The resulting wind stress is then used
to drive the ocean model (e.g., Barnett et al., 1993;
Balmaseda et al., 1994; Syu et al., 1995; Tang and Hsieh,
2002).
The predictive skills of the above models have been
reported by Kerr (1998, 2000, 2002), Latif et al. (1998),
Barnston et al. (1999), Landsea and Knaff (2000), and
Goddard et al. (2001). These studies show that the skill of
both types of models remains problematic, irrespective of
whether model success is gauged relative to a benchmark
model or for the forecasting of a single ENSO
event. In addition, it is also important to assess prediction
skill within a period that includes both ENSO (El
Niño/La Niña) and non-ENSO events (Barnston et al.,
1996), in order to assess occasions when prediction might
give false alarms (Chen et al., 2004).
In this study, we have developed a very simple
statistical model called IndOzy, which was designed
to predict the Niño 3.4 SSTA. Although there are some new
facets of this model that have not been used in other
ENSO models, we do not make any great claims about the
model. Instead, the model was developed to demonstrate
how a very simple model can make surprisingly robust
predictions and to raise the question of why the more
elegant statistical and dynamical models do not perform
even better than they presently do.
2. Method
2.1. Data
The IndOzy model was developed to predict the future monthly Niño 3.4 index using past monthly Niño 3.4 data. No other data is used as input to the model, and thus the model can, in this regard, be considered extremely simple. The Niño 3.4 index is obtained by averaging the sea surface temperature (SST) in the region 5°N–5°S latitude and 120°W–170°W longitude, and subtracting the average of the SST data from 1971 to 2000, on a month-by-month basis.
Two Niño 3.4 SST data sets of different lengths were used in the analysis to address the statistical significance of the prediction skill. These two sets of data are standard sets commonly used in other studies of El Niño prediction, and they are used in this work so that a comparison can be made between the simple model presented in this paper and other, more complicated models. The first data set (Data Set NOAA) was the monthly Niño 3.4 data from January 1950 to December 2005. This data set was compiled by the Climate Prediction Center and is available at http://www.cpc.ncep.noaa.gov/data/indices/sstoi.indices. The monthly observed data was converted into a 3 month seasonal grouping, such as the April-May-June (AMJ) averaged Niño 3.4 index, to comply with the ENSO predictions issued by many institutions and collected on the International Research Institute website (at http://iri.columbia.edu/climate/ENSO/currentinfo/archive/index.html). In order to compare the IndOzy model with other models, the IRI monthly summary of 3 monthly averaged SST Niño 3.4 index predictions at several lead times was used. It is important to note how lead times are defined in this evaluation. For a summary released in March 2002, the AMJ 2002 and May-June-July (MJJ) 2002 values are referred to as the Lead-1 and Lead-2 predictions, respectively. In this study, the model comparison starts with the AMJ 2002 season and ends with the September-October-November (SON) 2005 season. Although this is only a short data set, predictions from a large suite of models are available, and thus it is convenient for determining the comparative skill of the IndOzy model. The physical models are: the Lamont model (Zebiak and Cane, 1987), the SIO model (Barnett et al., 1993), the European Centre for Medium-range Weather Forecast model (Molteni et al., 1996), the NCEP coupled model (Ji et al., 1996), and the AGCM of NISPP/NASA (Bacmeister and Suarez, 2002). The statistical models include the canonical correlation analysis (Barnston and Ropelewski, 1992) and the Markov (Xue et al., 2000) models of NCEP, the linear-inverse model of NOAA-CDC (Penland and Magorian, 1993), the constructed-analog models (van den Dool, 1994), the climatology and persistence (CLIPER) model (Knaff and Landsea, 1997), and the UBC nonlinear canonical correlation analysis (Hsieh, 2001).
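The 3 month seasonal grouping used above (e.g. AMJ) amounts to a running three-month mean of the monthly index; a minimal sketch (our own illustration):

```python
import numpy as np

def seasonal_mean(monthly):
    # Running 3 month mean: element i averages months i, i+1 and i+2,
    # e.g. the AMJ value is the mean of the April, May and June values.
    m = np.asarray(monthly, dtype=float)
    return (m[:-2] + m[1:-1] + m[2:]) / 3.0
```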
The second Niño 3.4 data set (Data Set KAPLAN) was obtained from the extended monthly SSTA Niño 3.4, analysed from the ICOADS collection (Kaplan et al., 1998). This data runs from January 1856 to December 2005 and is available at http://ingrid.ldeo.columbia.edu/SOURCES/.Indices/.nino/.EXTENDED/.NINO34/. IndOzy model skill using this data is compared to that of the Zebiak–Cane LDEO5 model (Anderson, 2004; Chen et al., 2004). This data set covers the period that is also covered by Data Set NOAA, and the data are very similar, but not precisely the same, for the overlapping period, 1950–2005.
2.2. IndOzy model description
As mentioned previously, the aim of the paper was to deliberately develop a very simple model for ENSO prediction and to compare its performance with more elegant and complicated statistical models. IndOzy applies a method of state-space reconstruction of a dynamical system using delayed coordinates (Packard et al., 1980; Takens, 1981; Sauer, 1994; Abarbanel and Lall, 1996). In our ENSO case, the dynamical system to be reconstructed refers to a value of the Niño 3.4 index at some time
t in the future (the predictand), while the delayed coordinates are the set of predictors (previous values of the index). In other words, the delayed coordinates are simply the already measured data points that are used to make a prediction. The number of delay coordinates, D, is related to an attractor dimension, d, by Takens' theorem (i.e. D = 2d + 1). For example, a chaotic time series with d = 2 has D = 5, i.e. there will be five space variables to reconstruct the series (Gautama et al., 2004). Since the ENSO phenomenon also exhibits chaotic behaviour (Tziperman et al., 1997; Samelson and Tziperman, 2001), the model is set to have D = 5. In fact, in their attempt to find the best state-space parameters for predicting geophysical time series, including ENSO, Regonda et al. (2005) found that D ranges from 2 to 5 and the delay time in each embedded space ranges from 11 to 21. Even though their model was apparently able to correctly predict the time evolution of the phenomenon around the peaks in the years 1982, 1984, 1997, and 1999, its skill still needs to be compared with that of other models using appropriate skill measures. Further details on how to implement this dynamical reconstruction principle in the IndOzy model are described in Appendix A.
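Since Appendix A is not reproduced here, the following is only a minimal sketch of the delay-coordinate idea: D = 5 delayed values of the index are mapped to a value some lead ahead, here with a linear least-squares fit of weights and a bias (IndOzy's actual fitting procedure may differ):

```python
import numpy as np

def fit_delay_model(series, D=5, lead=1):
    # Build (D delayed coordinates -> value `lead` steps ahead) pairs
    # and fit weights plus a bias by linear least squares.
    s = np.asarray(series, dtype=float)
    X = np.array([s[t - D + 1 : t + 1] for t in range(D - 1, len(s) - lead)])
    y = s[D - 1 + lead :]
    A = np.column_stack([X, np.ones(len(y))])   # last column carries the bias
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_ahead(series, w, D=5):
    # Predict from the D most recent values of the series.
    x = np.append(np.asarray(series, dtype=float)[-D:], 1.0)
    return float(x @ w)

# Sanity check on a synthetic sinusoid, for which a linear delay map is exact.
t = np.arange(400)
s = np.sin(2 * np.pi * t / 48)
w = fit_delay_model(s[:300], D=5, lead=3)
pred = predict_ahead(s[:300], w, D=5)   # should closely match s[302]
```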
2.3. Skill measures
Three skill measures were used to assess the prediction
skill of an ENSO model. The two commonly used
measures for evaluating ENSO prediction are the Pearson
correlation coefficient and the root-mean-squared error
(RMSE). In this study, another skill measure commonly
used in weather prediction, the Peirce score, is also applied
to assess the ENSO prediction. These score measures are
described in Appendix B.
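For concreteness, the three measures can be computed as below. The Peirce score here uses a two-category (event/no-event) contingency table with a hypothetical 0.5 °C threshold; the category definitions in the paper's Appendix B may differ:

```python
import numpy as np

def skill_measures(obs, pred, threshold=0.5):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]               # Pearson correlation
    rmse = np.sqrt(np.mean((pred - obs) ** 2))     # root-mean-squared error
    o, p = obs >= threshold, pred >= threshold     # observed / predicted events
    hits = np.sum(p & o)
    false_alarms = np.sum(p & ~o)
    misses = np.sum(~p & o)
    correct_neg = np.sum(~p & ~o)
    # Peirce skill score = hit rate minus false-alarm rate
    pss = hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
    return r, rmse, pss
```

A no-skill random forecast has an expected Peirce score of zero, which is why the paper compares each model's score against the no-skill benchmark.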
3. Results
3.1. Predictions using data set NOAA (1950–2005)
3.1.1. Out-of-sample prediction
The IndOzy model was first run and tested using Data Set
NOAA. The training data set for the model starts from
the 3 month averaged SSTA index of January-February-
March (JFM) 1950 up to SON 1995. The training data
set was used to calculate weights and biases for each
prediction lead. These weights and biases were then used
to produce an out-of-sample prediction for the period
1995–2005. Data from 1995 to 2005 was not used to
train the model.
Seasonal out-of-sample ENSO prediction results, along with observational data, are presented in Figure 1. Here, we only plot predictions for seasonal Lead-1, Lead-3, Lead-5, and Lead-7. The model predicts the major events, such as the 1997–1998 El Niño and the 2000–2001 La Niña, and the predictions decay at longer lead times, as expected. However, the model produces spurious results at longer lead times, mostly during normal conditions and the 1999–2000 La Niña. The lower skill for near-normal conditions than for El Niño and La Niña conditions is in agreement with previous findings (van den Dool and Toth, 1991; Mason and Mimmack, 2002; Kirtman, 2003). However, the spurious prediction at longer leads for the 1999–2000 La Niña event could be due to the inability of the model to capture the fast-changing dynamics of the event, i.e. from strong El Niño to strong La Niña. This failure during that particular event is also experienced by many models (Kirtman, 2003).

Figure 1. Time series of out-of-sample prediction plotted for 120 data points (n = 120). The IndOzy model predicts up to seven seasons ahead. Here, we only plot Lead-1 (average of the 1st, 2nd, and 3rd monthly Niño 3.4 SST anomaly), Lead-3 (average of the 3rd, 4th and 5th monthly anomaly), Lead-5 (average of the 5th, 6th and 7th monthly anomaly) and Lead-7 (average of the 7th, 8th and 9th monthly anomaly). This figure is available in colour online at www.interscience.wiley.com/ijoc
It is interesting to note that the predictions of the El Niño events become cooler with increasing lead times. For example, for the 1997/1998 event, the seasonal Lead-1 prediction is 0.3 °C too cool, while for seasonal Leads 3, 5, and 7 the predictions are 0.8, 1.5, and 2 °C too cool, respectively. Generally, a prediction might be expected sometimes to over-predict and sometimes to under-predict; however, for the peak events, IndOzy does not do this.
It is apparent that the model predictions do not diverge from the present value by more than about 1 °C, and thus it is unlikely for the model to predict an extreme event such as the peak of an El Niño from conditions when the temperature is low, i.e. a few months before a major El Niño event. On the other hand, at seasonal Lead-1 before a large El Niño peaks, the present temperature is already elevated, and thus the small deviation of the prediction from the present value still allows a result similar to the peak values to be achieved. We do not fully understand why the predictions of the peaks are always too cool.
The degradation of the model performance with increasing lead time is most evident in the scatter plot diagrams (Figure 2). The correlation coefficient, RMSE and Peirce skill score (and its error estimate for both the IndOzy model and the no-skill random prediction) are plotted in Figure 3. In Figure 3, the IndOzy prediction has skill only up to Lead-5, since the Peirce scores of the seasonal Lead-6 and Lead-7 predictions overlap with those of the no-skill random forecast. We suggest that a prediction is considered useful if its Peirce skill score is higher than that of the no-skill forecast.
We also investigate whether or not the occurrence
and strength of ENSO events affect prediction skill.
In order to do that, the out-of-sample forecast is
divided into two parts, each containing 60 points
(n = 60). The first part is from October-November-December (OND) 1995 to SON 2000, which includes the strongest ENSO event, that of 1997–1998 (Anderson, 2004; McPhaden, 2004), while the second consists mostly of normal years, from OND 2000 to SON 2005.
The result using the Peirce skill score is presented in
Table I.
Table I shows that the strength of ENSO events affects the prediction skill in two respects. First, periods dominated by strong ENSO events have a higher score than weak and normal periods at each lead time. Second, the presence of strong ENSO events also extends the skill to a slightly longer lead, while the absence of such events limits this skill. For instance, the IndOzy model has useful skill up to seasonal Lead-5 during the OND 1995–SON 2000 period, whereas during the normal years, i.e. the OND 2000–SON 2005 period, useful skill extends only to seasonal Lead-3. It is evident that care must be taken when evaluating model performance, as the particular data set used for model testing can have a significant effect on the apparent model performance.
Figure 2. Scatter plot of IndOzy out-of-sample prediction against observation for seasonal Lead-1, Lead-3, Lead-5, and Lead-7 (n = 120). This figure is available in colour online at www.interscience.wiley.com/ijoc
Figure 3. IndOzy out-of-sample prediction skill against observation for the seven seasonal leads (n = 120). Using the Peirce score (right-hand diagram), seasonal Lead-6 and Lead-7 have no skill, since each overlaps with the no-skill prediction. This figure is available in colour online at www.interscience.wiley.com/ijoc
Table I. Peirce skill score measures (score ± error estimate) for different sections of the out-of-sample prediction.

Prediction         OND 1995–SON 2000 (n = 60)      OND 2000–SON 2005 (n = 60)
horizon            IndOzy         No-skill         IndOzy         No-skill
Seasonal Lead-1    0.84 ± 0.10    0.01 ± 0.15      0.75 ± 0.09    0.00 ± 0.13
Seasonal Lead-2    0.67 ± 0.12    0.01 ± 0.15      0.50 ± 0.12    0.00 ± 0.13
Seasonal Lead-3    0.54 ± 0.13    0.01 ± 0.15      0.32 ± 0.12    0.00 ± 0.13
Seasonal Lead-4    0.53 ± 0.13    0.01 ± 0.14      0.18 ± 0.13    0.00 ± 0.13
Seasonal Lead-5    0.29 ± 0.14    0.01 ± 0.14      0.03 ± 0.13    0.00 ± 0.13
Seasonal Lead-6    0.18 ± 0.14    0.01 ± 0.14      0.08 ± 0.14    0.00 ± 0.14
Seasonal Lead-7    0.03 ± 0.14    0.01 ± 0.14      0.08 ± 0.13    0.00 ± 0.13
3.1.2. Comparisons with other models
In this section, the prediction skill of the IndOzy model is compared with the results of several dynamical and statistical models. Predictions of these models are available for the period from 2002 to 2005, and are obtained from the various institutes mentioned in the introduction; all are publicly available through the IRI website. The IndOzy model was run using the training period from 1950 to 1995. The results of all the models for seasonal Lead-1 to Lead-4 are presented in Figure 4(a)–(d), and the skills for all seasonal leads are presented in Tables II–IV.
For short lead times (Figure 4(a) and (b)), most models, except the IndOzy model, predict ENSO peaks much earlier and weaker than they should. On the other hand, IndOzy predicts the peak only after its occurrence, with a time delay of up to two seasonal lead times; its predicted amplitude, however, is quite close to the observed value. This results in the IndOzy model having higher skill at one and two seasonal lead times than the other models, including the no-skill random and ENSO-CLIPER predictions, in all skill measures (Tables II–IV). Other models that perform well in this short-range prediction are the National Centers for Environmental Prediction (NCEP) Markov model, the constructed-analog (CA) model of van den Dool, the UBC nonlinear CCA statistical model, and the hybrid SIO dynamical model. On the other hand, when using correlation coefficients as a measure of skill (Table IV), the CA, CCA and SIO models perform poorly, with coefficients below 0.5.
At longer leads, i.e. seasonal Lead-3 to Lead-6, the IndOzy model lost its skill, even in comparison with the no-skill random forecast. The CA model of van den Dool suffered a similar fate. On the other hand, some
Table II. Skill measure for all ENSO models using RMSE (°C). A dash (–) means that the model does not have a prediction at that particular lead time.

Model       Lead-1   Lead-2   Lead-3   Lead-4   Lead-5   Lead-6   Lead-7
NASA        1.01     0.95     0.89     0.86     0.84     0.84     0.81
NCEP CM     0.59     0.52     0.47     0.44     0.49     –        –
SIO         0.50     0.45     0.40     0.43     0.46     0.47     0.46
LAMONT      0.61     0.56     0.47     0.43     0.49     –        –
ECMWF       0.62     –        –        –        –        –        –
NCEP/MKV    0.47     0.38     0.29     0.30     0.34     0.36     0.41
NOAA/CDC    0.65     0.71     0.77     0.79     0.83     0.85     0.85
DOOL CA     0.55     0.55     0.62     0.69     0.73     0.78     0.81
NCEP/CCA    0.41     0.43     0.48     0.51     0.47     0.46     0.44
CLIPER      0.60     0.58     0.58     0.56     0.57     0.62     0.63
UBC         0.54     0.49     0.46     0.50     0.50     0.49     0.45
IndOzy      0.24     0.39     0.49     0.58     0.71     0.75     0.72
Table III. Skill measure using the Peirce skill score (score ± error) for all ENSO models. An asterisk (*) marks a score whose skill differs significantly from that of the no-skill prediction. A dash (–) means that the model does not have a prediction at that lead time.

Model       Lead-1         Lead-2         Lead-3         Lead-4         Lead-5         Lead-6         Lead-7
NASA        0.14 ± 0.15    0.31 ± 0.15    0.04 ± 0.16    0.08 ± 0.16    0.37 ± 0.15*   0.24 ± 0.16    0.21 ± 0.16
NCEP CM     0.12 ± 0.15    0.25 ± 0.15    0.20 ± 0.16    0.34 ± 0.15*   0.32 ± 0.15*   –              –
SIO         0.18 ± 0.15    0.31 ± 0.15*   0.27 ± 0.15    0.39 ± 0.15*   0.37 ± 0.15*   0.40 ± 0.15*   0.21 ± 0.16
LAMONT      0.05 ± 0.15    0.13 ± 0.15    0.26 ± 0.15    0.39 ± 0.15*   0.37 ± 0.15*   –              –
ECMWF       0.07 ± 0.15    –              –              –              –              –              –
NCEP/MKV    0.42 ± 0.15*   0.37 ± 0.14*   0.42 ± 0.15*   0.29 ± 0.15    0.26 ± 0.15    0.17 ± 0.16    0.18 ± 0.16
NOAA/CDC    0.32 ± 0.15*   0.27 ± 0.15    0.02 ± 0.16    0.10 ± 0.16    0.10 ± 0.16    0.11 ± 0.16    0.01 ± 0.16
DOOL CA     0.26 ± 0.15    0.40 ± 0.14*   0.39 ± 0.15*   0.43 ± 0.14*   0.16 ± 0.16    0.16 ± 0.16    0.10 ± 0.17
NCEP/CCA    0.20 ± 0.15    0.12 ± 0.15    0.09 ± 0.16    0.10 ± 0.16    0.10 ± 0.16    0.05 ± 0.16    0.10 ± 0.17
CLIPER      0.05 ± 0.15    0.06 ± 0.16    0.29 ± 0.15    0.37 ± 0.15*   0.32 ± 0.15*   0.21 ± 0.16    0.20 ± 0.16
UBC         0.31 ± 0.15*   0.40 ± 0.14*   0.15 ± 0.16    0.07 ± 0.15    0.21 ± 0.16    0.14 ± 0.16    0.00 ± 0.17
IndOzy      0.76 ± 0.10*   0.46 ± 0.14*   0.29 ± 0.15    0.27 ± 0.15    0.11 ± 0.16    0.01 ± 0.16    0.07 ± 0.17
CONSENSUS   0.46 ± 0.14*   0.36 ± 0.14*   0.55 ± 0.13*   0.58 ± 0.13*   0.37 ± 0.15*   0.37 ± 0.15*   0.41 ± 0.15*
Table IV. Skill measures for all ENSO models using the Pearson correlation. A prediction is considered useful when the correlation is greater than 0.5. A dash (–) means that the model does not have a prediction at that lead time.

Model       Lead-1   Lead-2   Lead-3   Lead-4   Lead-5   Lead-6   Lead-7
NASA        0.08     0.15     0.29     0.41     0.45     0.42     0.36
NCEP CM     0.18     0.33     0.41     0.45     0.29     –        –
SIO         0.33     0.46     0.56     0.52     0.43     0.42     0.40
LAMONT      0.26     0.38     0.55     0.62     0.51     –        –
ECMWF       0.04     –        –        –        –        –        –
NCEP/MKV    0.48     0.62     0.78     0.78     0.72     0.71     0.60
NOAA/CDC    0.25     0.16     0.05     0.06     0.06     0.05     0.06
DOOL CA     0.42     0.49     0.52     0.52     0.53     0.52     0.52
NCEP/CCA    0.57     0.54     0.42     0.35     0.44     0.49     0.51
CLIPER      0.14     0.27     0.37     0.43     0.51     0.53     0.50
UBC         0.34     0.40     0.44     0.36     0.31     0.31     0.45
IndOzy      0.88     0.70     0.55     0.38     0.11     0.00     0.17
Figure 4. IndOzy model and other model predictions against observation: (a) seasonal Lead-1; (b) seasonal Lead-2; (c) seasonal Lead-3; (d) seasonal Lead-4.
dynamical models and another statistical model start to obtain much higher skill at longer lead times. These are the Lamont, NCEP coupled, and NASA dynamical models and the ENSO-CLIPER statistical model. The poor skill of the van den Dool CA model and the high skill of the NASA model under the Peirce skill score (Table III) are in contradiction to the correlation measures (Table IV). Some earlier investigators have rejected correlation as a criterion for this purpose (see, for instance, Woodcock (1976), Harvey et al. (1992) and references therein). At seasonal Lead-7, all skill measures show that none of the models has any skill, even against the no-skill random prediction.
In some instances, predictions can be improved by
generating a consensus forecast, i.e. averaging the outputs
of the different models at each prediction lead (Table III).
This consensus/ensemble technique is widely used in
climate forecasting (Thompson, 1977; Hagedorn et al.,
2005). It can be observed in Table III that the consensus
forecast provides a skill significantly improved over the
no-skill random prediction.
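The consensus described here is simply an ensemble mean over whichever models issue a forecast at a given lead; a minimal sketch, with missing forecasts encoded as NaN (our own convention):

```python
import numpy as np

def consensus_forecast(model_predictions):
    # Average the available model outputs at each prediction lead,
    # skipping models that issued no forecast (encoded as NaN).
    return np.nanmean(np.asarray(model_predictions, dtype=float), axis=0)
```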
It is interesting to note that many model results at short lead times are surprisingly poor, i.e. only a few models are considered skillful against both the CLIPER and the no-skill random forecasts. This is similar to the situation in which the skill of both dynamical and statistical models was found to be lower than that of persistence, as reported by Latif et al. (1998) and Goddard et al. (2001).
One reason for the relatively poor performance of the
models may be that the prediction set used does not
contain a major ENSO event, as found in other studies (e.g. van den Dool and Toth, 1991; Mason and Mimmack, 2002). Using the COLA anomaly coupled model, Kirtman (2003) showed that near-normal conditions have lower prediction skill than El Niño and La Niña conditions at lead times up to 12 months.
3.2. Predictions using data set KAPLAN (1856–2005)
In this section, the IndOzy model is used to predict
Niño 3.4 indices using the extended data set that has
also been used with the Zebiak–Cane model (Chen et al.,
periods using the remaining data as training data. For
example, the model was run to predict the Ni no 3.4 index
from 1916 to 1935 using training data from the period 1856–1915 together with the period from 1936 to 2005. Thus, a slightly different training set was used for each of the 20 year prediction periods. It should be noted that this is a slightly different approach from that of Chen et al. (2004), who used the period from 1976 to 1995 to tune their model and the rest of the data for comparison with model predictions.
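The successive 20 year hold-out scheme described above can be sketched as follows (indices are months; 240 months = 20 years; the helper is our own illustration, and the paper's final 1981–2005 period, being longer than 20 years, would need special-casing):

```python
def holdout_blocks(n_months, block=240):
    # Yield (train, test) index lists: each 240 month block in turn is the
    # test period, and all remaining months form the training period.
    for start in range(0, n_months - block + 1, block):
        test = list(range(start, start + block))
        train = [i for i in range(n_months) if i < start or i >= start + block]
        yield train, test
```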
Figure 5 illustrates the correlation coefficient and RMSE for the predictions over the various 20 year periods. The results for each period are broadly the same, with rapidly decreasing correlation coefficients and increasing RMSE as the lead period is increased. These results are also similar to those of Chen et al. (2004). The similarity is shown when we plot the average and standard deviation of the correlation and RMSE at each lead time for the seven 20 year period predictions (Figure 6). Both models show similar correlation up to a 5 month lead. Beyond this lead, the IndOzy prediction has lower correlation than the LDEO5. In terms of RMSE, however, the two models do not differ significantly at any lead time.
The prediction skill of the IndOzy model is also evaluated using the Peirce skill score in Figure 7. It shows that the IndOzy model has skill only up to a 4 month lead, i.e. the skill is better than that of the no-skill random prediction up to and including the 4 month lead.
Finally, a time series of the IndOzy prediction for the period from 1981 to 2005 is shown in Figure 8. Visual comparison of the data and prediction seems to indicate a high degree of model skill, even for Lead-6, for which, as shown above, the model in fact has no skill. Close inspection reveals that the predictions have a very similar waveform to the data, but are slightly delayed in time and smaller in amplitude. Because the data is plotted on a time scale that is 20 years long, the slight delay in the predicted peaks appears very small, and the predicted waveform appears to follow the data very closely. This raises an important point about assessing the performance of a model based on representations such as those in Figure 8, which carry a real possibility of giving a false impression of the model performance.
4. Discussion and conclusion
IndOzy is an extremely simple model that makes use of only the previous Niño 3.4 SSTA data to make predictions. Despite its simplicity, IndOzy has been shown to perform favourably when compared with other, more elaborate and complex models. Using a short period of test data, for which other model predictions were available, the performance of IndOzy for short seasonal
Figure 5. IndOzy out-of-sample prediction skill against observation up to 12 month leads, measured using the Pearson correlation (gridded left-hand diagram) and RMSE (right-hand diagram). Each solid line in both diagrams represents the skill for one of the seven 20 year periods and the 1981–2005 period. This figure is available in colour online at www.interscience.wiley.com/ijoc

Figure 6. IndOzy and LDEO5 (Chen et al., 2004) model prediction skill obtained by averaging the Pearson correlation (left diagram) and RMSE (right diagram) up to 12 month leads for the seven 20 year periods, excluding the 1981–2005 period. The standard deviation is plotted as an error bar. This figure is available in colour online at www.interscience.wiley.com/ijoc
lead times (Lead-1 and Lead-2) was found to be superior to that of the other models. However, by Lead-3 the skill of the model had dropped to be no better than the no-skill random prediction.

When tested against the longer data set, IndOzy was found to have skill up to 4 or 5 month leads and performed only slightly worse than the more complicated model of Chen et al. (2004), although, to achieve this, a longer training data set was used than that used by Chen et al. (2004).
The fact that a model as simple as IndOzy can perform comparably with the far more complicated
Figure 7. IndOzy out-of-sample prediction skill against observation up to 12 month leads, measured using the Peirce skill score. Each line in the diagram represents the score, along with an error bar, for one of the seven 20 year periods and the 1981–2005 period. The no-skill prediction, depicted by crosses, is also plotted with an error bar. This figure is available in colour online at www.interscience.wiley.com/ijoc
Figure 8. Time series of the IndOzy model prediction at Lead-1, Lead-3, and Lead-6 months during the 1981–2005 period. This figure is available in colour online at www.interscience.wiley.com/ijoc
models raises some philosophical questions about modelling extremely complicated systems such as ENSO. Why is it that a model that uses minimal data and that can run on an office desktop computer can perform on par with much more complicated and computationally intensive codes? Dynamical models that use the basic laws of physics are the most scientifically satisfying approach to predicting any phenomenon. In the case of ENSO prediction, these models use extremely detailed descriptions of the atmosphere and the ocean. Many of the statistical models use a large number of inputs in order to make predictions. In the long run, these large dynamical and statistical models offer the best hope of increasing the range of ENSO prediction, but the lack of major improvement in the prediction ability of these present models over a simple model such as IndOzy leads one to speculate that much of the complexity of many of these models is of little value at present.
This raises the question of which input data and physical processes are most important for the best prediction. The success of IndOzy, which uses only the historical Niño 3.4 data, indicates that some of the data used in more complicated models may contribute insignificantly to model accuracy. In addition, for the dynamical models, it seems likely that some of the physical processes that are included also contribute little.
We suggest that in order for the large models to progress so that they become significantly more useful than simple models such as IndOzy, much more attention needs to be focused on the most sensitive input data, or combinations of input data (Zhang et al., 2005; AchutaRao and Sperber, 2006; Fei et al., 2006). Additionally, for dynamical models, the way forward may not be simply to increase computer power and reduce grid sizes, but to focus more on determining which physical processes, i.e. model parameterizations and coupling strategies, matter most (Wu and Kirtman, 2005; Power and Colman, 2006). It is evident from this paper that even the largest models, running on some of the largest computers in the world, are presently not performing much better than models that can be run on a desktop computer. There seems to be no reason why using even faster computers will do any better than present-day supercomputers unless a fundamental breakthrough is made in understanding the processes that trigger ENSO events (Eisenman et al., 2005; Kondrashov et al., 2005; Perez et al., 2005; Saynisch et al., 2006; Vecchi et al., 2006). Such an understanding would allow more targeted data collection to drive the models and would focus modifications to dynamical model codes on those processes that matter most. Also, in order to solve the problem of sensitivity to initial conditions, higher-quality input data are likely required before significant improvement in dynamical model performance can be expected.
Acknowledgements
We thank the IRI-Columbia Univ. and the CPC-NCEP-NOAA for making the seasonal ENSO predictions and the Niño 3.4 index available to the public. We express our gratitude to Dr A. Kaplan of LDEO Columbia University for helping us find the extended Niño 3.4 SSTA data set. The critical comments by two anonymous reviewers considerably helped us focus and improve the paper. HH would also like to express his gratitude to the Australian Government for the AUSAID PhD scholarship and the ACIAR postdoctoral fellowship, and also to Dr D. McKinnon for his continuous encouragement.
A1. Appendix A
A1.1. IndOzy model
Let the data be represented by the vector X, with the input data elements represented by X(k), where k = 1, . . . , n, and n is the number of data points in the time series. These data could be either the monthly Niño 3.4 index
or the 3-monthly averaged Niño 3.4. To predict the value of X at a time P steps ahead of the present time t, i.e. X(t + P), we need as predictors five elements of the previous data, i.e. [X(t), X(t − P), X(t − 2P), X(t − 3P), X(t − 4P)]. This form of time delaying follows Lapedes and Farber (1987) and Jang (1993) and references therein. For example, when predicting 4 time steps ahead (X(t + 4)), the predictor elements are [X(t), X(t − 4), X(t − 8), X(t − 12), X(t − 16)]. It is clear that the model uses simpler predictors than those of the CLIPER model (Knaff and Landsea, 1997). The latter uses historical ENSO events from 1950 to 1994, persistence, and trends in recently observed SSTA data (Barnston et al., 1999; Landsea and Knaff, 2000). Having determined our predictor/predictand pairs, we then use a simple linear neural network method to configure the relation between the pairs.
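The delayed-input construction above can be sketched in Python (the paper itself used MATLAB); `lagged_pairs` is an illustrative helper name, not from the original code:

```python
def lagged_pairs(x, P):
    """Build IndOzy-style predictor/predictand pairs for lead P.

    Predictors for X(t+P) are [X(t), X(t-P), X(t-2P), X(t-3P), X(t-4P)].
    """
    pairs = []
    for t in range(4 * P, len(x) - P):
        predictors = [x[t - j * P] for j in range(5)]
        pairs.append((predictors, x[t + P]))
    return pairs

# For P = 4, the first usable time is t = 16, so the predictors are
# x[16], x[12], x[8], x[4], x[0] and the predictand is x[20].
series = list(range(30))              # stand-in for the Nino 3.4 SSTA series
preds, target = lagged_pairs(series, 4)[0]
print(preds, target)                  # [16, 12, 8, 4, 0] 20
```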
The linear neural network (LNN) applied in this study differs from the more complex neural networks used for other ENSO predictions (Elsner and Tsonis, 1992; Hsieh and Tang, 1998) in two respects. First, it consists of only an input and an output layer, without any hidden layer. Second, it does not make use of the back-propagation technique for finding the optimum weights that reduce the error. Instead, it employs the least-mean-square error (LMS) or Widrow-Hoff algorithm (Hagan et al., 1996). The algorithm adjusts the weights and biases of the network to minimize the error, i.e. the difference between the data and the prediction (Demuth et al., 2005). It can be noted that for an LNN the weights can be viewed as analogous to the regression coefficients in a regression analysis, and the bias to the intercept of the regression line on the dependent axis.
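The Widrow-Hoff rule is a single gradient step per sample; a minimal Python sketch (the learning rate and the factor-of-2 convention are illustrative assumptions, not the toolbox's defaults):

```python
def lms_update(w, b, x, target, lr=0.01):
    """One Widrow-Hoff (LMS) step: move weights/bias down the squared-error gradient."""
    y = sum(wi * xi for wi, xi in zip(w, x)) + b    # linear neuron output
    e = target - y                                  # prediction error
    w = [wi + 2 * lr * e * xi for wi, xi in zip(w, x)]
    b = b + 2 * lr * e
    return w, b

# Repeated passes over a training pair shrink the error toward the LMS solution.
w, b = [0.0] * 5, 0.0
for _ in range(200):
    w, b = lms_update(w, b, [1.0, 0.5, 0.2, 0.1, 0.0], 0.8)
print(round(sum(wi * xi for wi, xi in zip(w, [1.0, 0.5, 0.2, 0.1, 0.0])) + b, 3))  # 0.8
```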
The model is implemented as follows. The Niño 3.4 index data set was divided into two parts of unequal lengths, called the training and testing data sets. The training data were used to determine the weights and biases relating the five lagged inputs/predictors to the predictand output. The lagged inputs during model training, I_train = {X(t), X(t − P), X(t − 2P), X(t − 3P), X(t − 4P)}, and an output, O_train = {X(t + P)}, become inputs for the MATLAB algorithm called NEWLIN. This subroutine calculates the strength of the input-output pair relations, i.e. their weights, and the bias using the LMS algorithm. Let us call the resulting weights and bias W = {W_t, W_{t−P}, . . ., W_{t−4P}} and b, respectively. These weights and the bias are then used to give an out-of-sample prediction, O_test, from a set of predictors I_test of the testing data set using SIM, another MATLAB subroutine, which linearly transforms the inputs and adds the bias, i.e. O_test = W · I_test + b.
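The whole train/predict flow can be approximated in Python in place of NEWLIN and SIM; the synthetic series, learning rate, and epoch count below are illustrative assumptions, not the paper's settings:

```python
import math

def train_test_lnn(x, P, n_train, lr=0.001, epochs=500):
    """Fit a 5-input linear neuron on the first n_train points, then predict out of sample."""
    w, b = [0.0] * 5, 0.0
    # training pairs: predictors [x(t), x(t-P), ..., x(t-4P)] -> target x(t+P)
    train = [([x[t - j * P] for j in range(5)], x[t + P])
             for t in range(4 * P, n_train - P)]
    for _ in range(epochs):                       # Widrow-Hoff (LMS) sweeps
        for inp, target in train:
            y = sum(wi * xi for wi, xi in zip(w, inp)) + b
            e = target - y
            w = [wi + 2 * lr * e * xi for wi, xi in zip(w, inp)]
            b += 2 * lr * e
    # out-of-sample predictions: O_test = W . I_test + b
    preds = []
    for t in range(n_train, len(x) - P):
        inp = [x[t - j * P] for j in range(5)]
        preds.append(sum(wi * xi for wi, xi in zip(w, inp)) + b)
    return preds

series = [math.sin(2 * math.pi * m / 12) for m in range(240)]  # toy periodic "SSTA"
out = train_test_lnn(series, P=3, n_train=200)
print(len(out))   # one prediction per remaining test month: 37
```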
B1. Appendix B
B1.1. Skill measures
There are two commonly used deterministic measures for evaluating ENSO predictions: the Pearson correlation coefficient and the RMSE. Other approaches have also been applied in ENSO model verification, such as the RPSS (rank probability skill score) and the ROC (receiver operating characteristic) (Mason and Mimmack, 2002; Kirtman, 2003). In this study, another type of probabilistic skill measure commonly used in weather prediction, the Peirce score, is applied to assess the ENSO predictions. This skill measure can readily be tested against a no-skill random forecast. The formulae for both types of skill measure are presented below.

Table BI. Contingency table for the Yes/No ENSO forecast.

ENSO event forecast    ENSO event observed
                       Yes                No
Yes                    a (hit)            b (false alarm)
No                     c (miss)           d (correct rejection)
B1.2. Pearson correlation and RMSE
Prediction skill for most of the ENSO models can be expressed as the RMSE and the correlation coefficient. These measures of skill are defined as follows (Wilks, 1995). The root-mean-square error (RMSE) is defined as

\mathrm{RMSE} = \left[ \frac{1}{n} \sum_{m=1}^{n} (p_m - o_m)^2 \right]^{1/2}  \qquad (B1)
where p_m and o_m are the mth predicted and observed values, respectively (m = 1, 2, . . . , n), and the correlation coefficient is

r = \frac{\sum_{m=1}^{n} (p_m - \bar{p})(o_m - \bar{o})}{\left[\sum_{m=1}^{n} (p_m - \bar{p})^2\right]^{1/2} \left[\sum_{m=1}^{n} (o_m - \bar{o})^2\right]^{1/2}}  \qquad (B2)

where \bar{p} and \bar{o} are the mean values of the prediction and observation, respectively.
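Both deterministic measures translate directly into code; a minimal Python sketch of Equations (B1) and (B2):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error, Equation (B1)."""
    n = len(pred)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)

def pearson_r(pred, obs):
    """Pearson correlation coefficient, Equation (B2)."""
    n = len(pred)
    pbar, obar = sum(pred) / n, sum(obs) / n
    num = sum((p - pbar) * (o - obar) for p, o in zip(pred, obs))
    den = (math.sqrt(sum((p - pbar) ** 2 for p in pred))
           * math.sqrt(sum((o - obar) ** 2 for o in obs)))
    return num / den

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))        # sqrt(4/3), about 1.1547
print(round(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # 1.0
```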
B1.3. Peirce score and its error estimates
The Peirce skill score (Woodcock, 1976, 1981; Stephenson, 2000) ranges from −1 to 1, and has been demonstrated by Stephenson (2000) to have good properties as a skill measure because of its fairness against forecast hedging, i.e. a forecaster's tendency to favour a particular event. It also has a complementary symmetry: interchanging event and non-event does not affect the skill. One useful aspect of the Peirce score is that both an error estimate and a random-forecast score are available. It is thus possible to test whether a particular model has skill superior to a no-skill random forecast. The score is based on a categorical type of forecast, such as the Yes/No forecast. A contingency table for the Yes/No categorical forecast is shown in Table BI.
In Table BI, a, b, c, and d refer, respectively, to the number of times the event (either El Niño or La Niña) is forecast and also observed, the event is forecast but did not occur, the event is not forecast but did occur, and the event is neither forecast nor observed. El Niño and La Niña events are defined, respectively, as those occasions when the 3-month average of the Niño 3.4 SSTA is equal to or above 0.5 °C, or equal to or below −0.5 °C (McPhaden, 2004). It can be noted that when El Niño was predicted but La Niña was observed, or vice versa, a category b was assigned.
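These event definitions and the category-b convention can be coded directly; a Python sketch in which `classify` and the sample values are illustrative:

```python
def classify(ssta):
    """Map a 3-month-average Nino 3.4 SSTA to an ENSO category (0.5 C thresholds)."""
    if ssta >= 0.5:
        return "El Nino"
    if ssta <= -0.5:
        return "La Nina"
    return "neutral"

def contingency(pred, obs):
    """Count a (hit), b (false alarm), c (miss), d (correct rejection).

    A predicted El Nino observed as La Nina (or vice versa) counts as b.
    """
    a = b = c = d = 0
    for p, o in zip(pred, obs):
        pe, oe = classify(p), classify(o)
        if pe != "neutral" and pe == oe:
            a += 1
        elif pe != "neutral":          # event forecast, but not that event observed
            b += 1
        elif oe != "neutral":          # event observed but not forecast
            c += 1
        else:
            d += 1
    return a, b, c, d

print(contingency([0.7, 0.6, -0.8, 0.1], [0.9, -0.6, 0.2, 0.0]))  # (1, 2, 0, 1)
```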
Having determined the parameter values of the Yes/No forecast, i.e. a, b, c, and d, based on the contingency table (Table BI), prediction skills and their error estimates were calculated for the model. The formulae for the skill score and its error estimate are from Stephenson (2000):

\mathrm{PSS} = \frac{ad - bc}{(a + c)(b + d)}  \qquad (B3)

e_{\mathrm{PSS}} = \left[ \frac{n^2 - 4(a + c)(b + d)\,\mathrm{PSS}^2}{4n(a + c)(b + d)} \right]^{1/2}  \qquad (B4)

where the total number of predictions and observations is n = a + b + c + d.
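Equations (B3) and (B4) in code; a minimal Python sketch:

```python
import math

def peirce_score(a, b, c, d):
    """Peirce skill score, Equation (B3)."""
    return (a * d - b * c) / ((a + c) * (b + d))

def peirce_error(a, b, c, d):
    """Standard error of the PSS, Equation (B4)."""
    n = a + b + c + d
    pss = peirce_score(a, b, c, d)
    return math.sqrt((n ** 2 - 4 * (a + c) * (b + d) * pss ** 2)
                     / (4 * n * (a + c) * (b + d)))

# A perfect forecast (no misses or false alarms) scores 1.
print(peirce_score(10, 0, 0, 10))   # 1.0
```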
The prediction skills obtained from all the ENSO models, including the IndOzy model, are compared against a random no-skill forecast (Stephenson, 2000). The no-skill forecast parameters are obtained by performing the following transformation on the a, b, c, d values resulting from the model predictions (Woodcock, 1976; Stephenson, 2000):

a_r = (a + c)(a + b)/n  \qquad (B5)
b_r = (b + d)(a + b)/n  \qquad (B6)
c_r = (a + c)(c + d)/n  \qquad (B7)
d_r = (b + d)(c + d)/n  \qquad (B8)

The skill scores of this random forecast are calculated by replacing a, b, c, and d by a_r, b_r, c_r, and d_r in the above skill score formulae.
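The transformation can be verified in code: the randomized counts preserve the marginal totals, and their Peirce score is zero by construction. A Python sketch with illustrative counts:

```python
def random_forecast(a, b, c, d):
    """Expected contingency counts for a no-skill random forecast, Equations (B5)-(B8)."""
    n = a + b + c + d
    ar = (a + c) * (a + b) / n
    br = (b + d) * (a + b) / n
    cr = (a + c) * (c + d) / n
    dr = (b + d) * (c + d) / n
    return ar, br, cr, dr

# Forecast and observation become independent while the marginals are kept,
# so ar*dr - br*cr = 0 and the PSS of the randomized counts is exactly 0.
ar, br, cr, dr = random_forecast(12, 3, 5, 20)
print((ar * dr - br * cr) / ((ar + cr) * (br + dr)))  # 0.0
```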
References
Abarbanel HDI, Lall U. 1996. Nonlinear dynamics of the Great Salt Lake: system identification and prediction. Climate Dynamics 12: 287–297.
AchutaRao K, Sperber KR. 2006. ENSO simulation in coupled ocean-atmosphere models: are the current models better? Climate Dynamics 27: 1–15.
Adams RM, Bryant KJ, McCarl BA, Legler DM, O'Brien J, Solow A, Weiher R. 1995. Value of improved long-range weather information. Contemporary Economic Policy 13: 10–19.
Anderson D. 2004. Testing time for El Niño. Nature 428: 709–711.
Bacmeister JT, Suarez MJ. 2002. Wind stress simulation and the equatorial momentum budget in an AGCM. Journal of the Atmospheric Sciences 59: 3051–3073.
Balmaseda MA, Anderson DLT, Davey MK. 1994. ENSO prediction using a dynamical ocean model coupled to a statistical atmosphere. Tellus 46A: 497–511.
Barnett TP, Latif M, Graham N, Flugel M, Pazan S, White W. 1993. ENSO and ENSO-related predictability. Part 1: Prediction of equatorial Pacific sea surface temperature with a hybrid coupled ocean-atmosphere model. Journal of Climate 6: 1545–1566.
Barnston AG, Ropelewski CF. 1992. Prediction of ENSO episodes using canonical correlation analysis. Journal of Climate 7: 1316–1345.
Barnston AG, Glantz MH, He Y. 1999. Predictive skill of statistical and dynamical climate models in SST forecasts during the 1997-98 El Niño episode and the 1998 La Niña onset. Bulletin of the American Meteorological Society 80: 217–243.
Barnston AG, van den Dool HM, Zebiak SE, Barnett TP, Ming J, Rodenhuis DR, Cane MA, Leetmaa A, Graham NE, Ropelewski CR, Kousky VE, O'Lenic EA, Livezey RE. 1996. Long-lead seasonal forecasts: Where do we stand? Bulletin of the American Meteorological Society 75: 2097–2114.
Bouma MJ, Poveda G, Rojas W, Chavasse D, Quiñones M, Cox J, Patz J. 1997. Predicting high-risk years for malaria in Colombia using parameters of El Niño-Southern Oscillation. Tropical Medicine and International Health 2: 1122–1127.
Changnon D, Ritsche M, Elyea K, Shelton S, Schramm K. 2000. Integrating climate forecasts and natural gas supply information into a natural gas purchasing decision. Meteorological Applications 7: 211–216.
Chen C-C, McCarl B, Hill H. 2002. Agricultural value of ENSO information under alternative phase definition. Climatic Change 54: 305–325.
Chen D, Cane MA, Kaplan A, Zebiak SE, Huang D. 2004. Predictability of El Niño over the past 148 years. Nature 428: 733–736.
Chichilnisky G, Heal G. 1998. Managing unknown risks: the future of global reinsurance. Journal of Portfolio Management 24: 85–91.
Clarke AJ, van Gorder S. 2001. ENSO prediction using an ENSO trigger and a proxy for western equatorial Pacific warm pool movement. Geophysical Research Letters 28: 579–582.
Costello CJ, Adams RM, Polasky S. 1998. The value of El Niño forecasts in the management of salmon: a stochastic dynamic assessment. American Journal of Agricultural Economics 80: 765–777.
Demuth H, Beale M, Hagan M. 2005. Neural Network Toolbox for Use with MATLAB. The MathWorks Inc: Natick, MA, USA.
Eisenman I, Yu L, Tziperman E. 2005. Westerly wind bursts: ENSO's tail rather than the dog? Journal of Climate 18: 5224–5238.
Elsner JB, Tsonis AA. 1992. Nonlinear prediction, chaos and noise. Bulletin of the American Meteorological Society 73: 49–60.
Fei Z, Jiang Z, Zhang R-H, Guangqing Z. 2006. Improved ENSO forecasts by assimilating sea surface temperature observations into an intermediate coupled model. Advances in Atmospheric Sciences 23: 615–624.
Gautama T, Mandic DP, Van Hulle MM. 2004. The delay vector variance method for detecting determinism and nonlinearity in time series. Physica D 190: 167–176.
Goddard L, Mason SJ, Zebiak SE, Ropelewski CF, Basher R, Cane MA. 2001. Current approaches to seasonal-to-interannual climate predictions. International Journal of Climatology 21: 1111–1152.
Hagan MT, Demuth HB, Beale M. 1996. Neural Network Design. PWS Publishing Company: Boston.
Hagedorn R, Doblas-Reyes FJ, Palmer TN. 2005. The rationale behind the success of multi-model ensembles in seasonal forecasting. I. Basic concept. Tellus 57A: 219–233.
Hamlet AF, Huppert D, Lettenmaier DP. 2002. Economic value of long-lead streamflow forecasts for Columbia River hydropower. Journal of Water Resources Planning and Management-ASCE 128: 91–101.
Harvey LO, Hammond KR, Lusk CM, Ross EFM. 1992. The application of signal detection theory to weather forecasting behavior. Monthly Weather Review 120: 863–883.
Hsieh WW. 2001. Nonlinear canonical correlation analysis of the tropical Pacific climate variability using a neural network approach. Journal of Climate 14: 2528–2539.
Hsieh WW, Tang B. 1998. Applying neural network models to prediction and data analysis in meteorology and oceanography. Bulletin of the American Meteorological Society 79: 1855–1870.
Jang J-SR. 1993. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics 23: 665–685.
Ji M, Leetmaa A, Kousky VE. 1996. Coupled model forecasts of ENSO during the 1980s and 1990s at the National Meteorological Centre. Journal of Climate 9: 3105–3120.
Jochec KG, Mjelde JE, Lee AC, Conner JR. 2001. Use of seasonal climate forecasts in rangeland-based livestock operations in West Texas. Journal of Applied Meteorology 40: 1629–1639.
Kang I-S, Kug J-S. 2000. An El Niño prediction system with an intermediate ocean and statistical atmosphere. Geophysical Research Letters 27: 1167–1170.
Kaplan A, Cane MA, Kushnir Y, Clement AC, Blumenthal MB, Rajagopalan B. 1998. Analyses of global sea surface temperature 1856–1991. Journal of Geophysical Research 103: 18567–18589.
Kerr RA. 1998. Models win big in forecasting El Niño. Science 280: 522–523.
Kerr RA. 2000. Second thoughts on skill of El Niño predictions. Science 290: 257–258.
Kerr RA. 2002. Signs of success in forecasting El Niño. Science 297: 497–499.
Kirtman BP. 2003. The COLA anomaly coupled model: Ensemble ENSO prediction. Monthly Weather Review 131: 2324–2341.
Knaff JA, Landsea CW. 1997. An El Niño-Southern Oscillation CLImatology and PERsistence (CLIPER) forecasting scheme. Weather and Forecasting 12: 633–652.
Kondrashov D, Kravtsov S, Robertson AW, Ghil M. 2005. A hierarchy of data-based ENSO models. Journal of Climate 18: 4425–4444.
Landsea CW, Knaff JA. 2000. How much skill was there in forecasting the very strong 1997-98 El Niño? Bulletin of the American Meteorological Society 81: 2107–2119.
Lapedes A, Farber R. 1987. Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling. Technical Report LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, New Mexico.
Latif M, Anderson D, Barnett T, Cane M, Kleeman R, Leetmaa A, O'Brien J, Rosati A, Schneider E. 1998. A review of the predictability and prediction of ENSO. Journal of Geophysical Research 103: 14375–14394.
Lehodey P, Bertignac M, Hampton J, Lewis A, Picaut J. 1997. El Niño Southern Oscillation and tuna in the western Pacific. Nature 389: 715–718.
Letson D, Podesta GP, Messina CD, Ferreyra RA. 2005. The uncertain value of perfect ENSO phase forecasts: Stochastic agricultural prices and intra-phase climatic variations. Climatic Change 69: 163–196.
Mason SJ, Mimmack GM. 2002. Comparison of some statistical methods of probabilistic forecasting of ENSO. Journal of Climate 15: 8–29.
McKeon G, Ash A, Hall W, Smith MS. 2000. Simulation of grazing strategies for beef production in north-east Queensland. In Applications of Seasonal Climate Forecasting in Agricultural and Natural Ecosystems: The Australian Experience, Hammer GL, Nicholls N, Mitchell C (eds). Kluwer Academic Publishers: Dordrecht; 227–252.
McPhaden MJ. 2004. Evolution of the 2002/03 El Niño. Bulletin of the American Meteorological Society 85: 677–695.
Molteni F, Buizza R, Palmer TN, Petroliagis T. 1996. The ECMWF ensemble prediction system: methodology and validation. Quarterly Journal of the Royal Meteorological Society 122: 73–119.
Packard NH, Crutchfield JP, Farmer JD, Shaw RS. 1980. Geometry from a time series. Physical Review Letters 45: 712–716.
Penland C, Magorian T. 1993. Prediction of Niño 3 sea surface temperatures using linear inverse modeling. Journal of Climate 6: 1067–1076.
Perez CL, Moore AM, Zavala-Garay J, Kleeman R. 2005. A comparison of the influence of additive and multiplicative stochastic forcing on a coupled model of ENSO. Journal of Climate 18: 5066–5085.
Power S, Colman R. 2006. Multi-year predictability in a coupled general circulation model. Climate Dynamics 26: 247–272.
Regonda SK, Rajagopalan B, Lall U, Clark M, Moon Y-I. 2005. Local polynomial method for ensemble forecast of time series. Nonlinear Processes in Geophysics 12: 397–406.
Samelson RM, Tziperman E. 2001. Instability of the chaotic ENSO: The growth-phase predictability barrier. Journal of the Atmospheric Sciences 58: 3613–3625.
Sauer T. 1994. Time series prediction by using delay coordinate embedding. In Time Series Prediction: Forecasting the Future and Understanding the Past, Weigend AS, Gershenfeld NA (eds). Addison-Wesley Publishing Company: Reading; 175–193.
Saynisch J, Kurths J, Maraun D. 2006. A conceptual ENSO model under realistic noise forcing. Nonlinear Processes in Geophysics 13: 275–285.
Shibata K, Yoshimura H, Ohizumi M, Hosaka M, Sugi M. 1999. A simulation of troposphere, stratosphere and mesosphere with an MRI/JMA98 GCM. Papers in Meteorology and Geophysics 50: 15–53.
Solow A, Adams RF, Bryant KJ, O'Brien JJ, McCarl BA, Nayda W, Weiher R. 1998. The value of improved ENSO prediction to U.S. agriculture. Climatic Change 39: 47–60.
Stephenson DB. 2000. Use of the "odds ratio" for diagnosing forecast skill. Weather and Forecasting 15: 221–232.
Syu HH, Neelin JD, Gutzler D. 1995. Seasonal and interannual variability in a hybrid coupled GCM. Journal of Climate 8: 2121–2143.
Takens F. 1981. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Lecture Notes in Mathematics 898, Rand DA, Young LS (eds). Springer-Verlag: Berlin; 366–381.
Tang Y, Hsieh WW. 2002. Hybrid coupled models of the tropical Pacific. II. ENSO prediction. Climate Dynamics 19: 343–353.
Thompson PD. 1977. How to improve accuracy by combining independent forecasts. Monthly Weather Review 105: 228–229.
Tziperman E, Scher H, Zebiak SE, Cane MA. 1997. Controlling spatiotemporal chaos in a realistic El Niño prediction model. Physical Review Letters 79: 1034–1037.
van den Dool HM. 1994. Searching for analogues, how long must we wait? Tellus 46A: 314–324.
van den Dool HM, Toth Z. 1991. Why do forecasts for "near normal" often fail? Weather and Forecasting 6: 76–85.
Vecchi GA, Wittenberg AT, Rosati A. 2006. Reassessing the role of stochastic forcing in the 1997–1998 El Niño. Geophysical Research Letters 33: L01706, DOI: 10.1029/2005GL024738.
Wang G, Kleeman R, Smith N, Tseitkin F. 2002. The BMRC coupled general circulation model ENSO forecast system. Monthly Weather Review 130: 975–991.
Wilks DS. 1995. Statistical Methods in the Atmospheric Sciences. Academic Press: San Diego.
Woodcock F. 1976. The evaluation of yes/no forecasts for scientific and administrative purposes. Monthly Weather Review 104: 1209–1214.
Woodcock F. 1981. Hanssen and Kuipers discriminant related to the utility of yes/no forecasts. Monthly Weather Review 109: 172–173.
Wu R, Kirtman BP. 2005. Roles of Indian and Pacific Ocean air-sea coupling in tropical atmospheric variability. Climate Dynamics 25: 155–170.
Xue Y, Leetmaa A, Ji M. 2000. ENSO predictions with Markov models: the impact of sea level. Journal of Climate 13: 849–871.
Zebiak SE, Cane MA. 1987. A model El Niño-Southern Oscillation. Monthly Weather Review 115: 2262–2278.
Zhang S, Harrison MJ, Wittenberg AT, Rosati A, Anderson JL, Balaji V. 2005. Initialization of an ENSO forecast system using a parallelized ensemble filter. Monthly Weather Review 133: 3176–3201.