FUEL PROCESSING TECHNOLOGY 89 (2008) 13–20

www.elsevier.com/locate/fuproc

Prediction of coal grindability based on petrography, proximate and ultimate analysis using multiple regression and artificial neural network models

S. Chehreh Chelgani a, James C. Hower b, E. Jorjani a,⁎, Sh. Mesroghli a, A.H. Bagherieh a

a Department of Mining Engineering, Research and Science Campus, Islamic Azad University, Poonak, Hesarak, Tehran, Iran
b Center for Applied Energy Research, University of Kentucky, 2540 Research Park Drive, Lexington, KY 40511, USA

ARTICLE INFO

Article history:
Received 27 April 2007
Received in revised form 8 June 2007
Accepted 22 June 2007

Keywords:
Hardgrove grindability index
Coal petrography
Coal rank
Ultimate and proximate analysis
Artificial neural network

ABSTRACT

The effects of proximate and ultimate analysis, maceral content, and coal rank (Rmax) on the Hardgrove Grindability Index (HGI) have been investigated for a wide range of Kentucky coal samples, with calorific values from 4320 to 14,960 BTU/lb (10.05 to 34.80 MJ/kg), using multivariable regression and artificial neural network (ANN) methods. The stepwise least-squares method shows that linear relationships between HGI and the input sets (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax achieve correlation coefficients (R2) of 0.77, 0.75, and 0.81, respectively. The ANN, which adequately recognized the characteristics of the coal samples, can predict HGI with correlation coefficients of 0.89, 0.89, and 0.95, respectively, in the testing process. It was determined that ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax are the best predictors for the estimation of HGI with both the multivariable regression (R2 = 0.81) and the artificial neural network (R2 = 0.95) methods. The ANN-based prediction method, as used in this paper, can be further employed as a reliable and accurate method for Hardgrove Grindability Index prediction.

© 2007 Elsevier B.V. All rights reserved.

1. Introduction

Grindability of coal is an important technological parameter in assessing the relative hardness of coals of varying ranks and grades during comminution [1]. It is usually determined by the Hardgrove Grindability Index (HGI), which is of great interest since it is used as a predictive tool to determine the performance capacity of industrial pulverizers in power station boilers [2]. HGI reflects coal hardness, tenacity, and fracture behavior and is directly related to coal rank, megascopic coal lithology, microscopic maceral associations, and the type and distribution of minerals [3]. Grinding properties are important in mining applications since lower-HGI (harder to grind) lithotypes will require a greater energy input [4–6].

Although the HGI testing device is not costly, the measuring procedure needed to obtain an HGI value is time consuming. Therefore, some researchers have investigated the prediction of HGI based on proximate analysis, petrography, and vitrinite maximum reflectance using regression [7–10].

An artificial neural network (ANN) is an empirical modeling tool whose operation is analogous to the behavior of biological neural structures [11]. Neural networks are powerful tools with the ability to identify highly complex underlying relationships from input–output data alone [12]. Over the last 10 years, artificial neural networks (ANNs), and in particular feed-forward artificial neural networks (FANNs), have been extensively studied as process models, and their use in industry has been growing rapidly [13].

⁎ Corresponding author. Tel.: +98 912 1776737; fax: +98 21 44817194.
E-mail address: esjorjani@yahoo.com (E. Jorjani).

0378-3820/$ – see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.fuproc.2007.06.004
Li et al. [14] discussed neural network analyses using 67 coals of a wide rank range for the prediction of the HGI on the basis of the proximate analysis. A problem with their analysis was the use of a rank range which spanned the reversal of HGI values in the medium volatile bituminous rank range. Bagherieh et al. [15] used vitrinite, inertinite, liptinite, Rmax, fusinite, ash, and sulfur analyses on 195 sets of data and improved the ANN prediction to R2 = 0.92 for the testing data.

The aim of the present work is to assess the properties of more than 600 Kentucky coals with reference to the HGI and its possible variation with respect to vitrinite maximum reflectance, proximate and ultimate analysis, and coal petrography, using multivariable regression (SPSS software package), and to improve the results with an artificial neural network (MATLAB software package).

This work is an attempt to answer the following questions: (a) Is there a suitable multivariable relationship between vitrinite maximum reflectance, petrography, and proximate and ultimate analysis with HGI for a wide range of Kentucky coals? (b) Can we improve the correlation of predicted HGI with actual measured HGI by using an artificial neural network?

2. Experimental data

A mathematical model requires a comprehensive database covering a wide variety of coal types. Such a model will be capable of predicting HGI with a high degree of accuracy. The data used to test the proposed approaches are from studies conducted at the University of Kentucky Center for Applied Energy Research. A total of more than 600 sets of data were used.

3. Results and discussion

3.1. Multivariable correlation of HGI with macerals, Rmax, proximate and ultimate analysis

3.1.1. Proximate analysis
Sengupta [1] examined the relation between the proximate analysis (moisture, ash, volatile matter, and fixed carbon) and HGI and found a nonlinear second-order regression equation with a correlation coefficient (r) of 0.93 [1]. A problem with that analysis was the use of all four parameters, fixed carbon, moisture, volatile matter, and ash. It is not necessary to use all four parameters since, by definition, the four parameters are a closed system, adding to 100% [16].

In the current study, it was found that the use of moisture, ash, volatile matter, and total sulfur achieves the best results for predicting the HGI. The ranges of the input variables to the HGI prediction for the 633 Kentucky samples are shown in Table 1. By a least-squares method, the correlation coefficients of moisture (M), ash (A), volatile matter (V), and total sulfur with the HGI value were determined to be +0.184, +0.107, −0.160, and +0.619, respectively. The results show that higher moisture and total sulfur contents in coal can result in higher HGI, and higher volatile matter content in coal results in lower HGI.

Table 1 – The ranges of variables in coal samples (as determined)

Variable            Min    Max    Mean   Standard deviation
Moisture            0.80   13.2   3.95   2.66
Ash                 0.64   59.8   10.3   8.55
Volatile matter     16.6   46.4   34.8   3.92
Total sulfur        0.30   7.60   1.84   1.56
Carbon              28.9   83.5   70.8   8.44
Hydrogen            2.48   6.05   5.15   0.52
Nitrogen            0.09   2.34   1.49   0.24
Oxygen              0.00   20.6   10.4   3.14
Resinite            0.00   7.8    0.64   0.64
Exinite             0.6    52.9   6.27   4.36
Macrinite           0.0    12.5   0.25   0.81
Micrinite           0.0    34.9   2.86   2.63
Semifusinite        0.0    47.1   5.52   5.70
Rmax                0.4    1.1    0.87   0.16

The following equation resulted between HGI and the proximate analysis:

HGI = 102.69 + 4.227 Stotal − 1.634 V − 0.569 A − 0.237 M,  R2 = 0.77    (1)

The distribution of the difference between the HGI predicted from Eq. (1) and the actual determined HGI is shown in Fig. 1.

Fig. 1 – Distribution of difference between actual HGI and estimated (Eq. (1)).
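To make the fitting step concrete, the sketch below shows how a multivariable least-squares fit of the form of Eq. (1) could be reproduced in Python with NumPy. It is a minimal sketch: the sample values are placeholders rather than the Kentucky data set, and the variable names are ours, not the authors'.

```python
import numpy as np

# Placeholder proximate analyses (one row per coal sample):
# columns are total sulfur, volatile matter, ash, moisture (wt.%).
X = np.array([
    [1.2, 36.1,  8.4, 3.1],
    [0.8, 33.5, 12.0, 2.4],
    [2.9, 35.0,  6.2, 4.8],
    [1.6, 30.2, 15.3, 2.0],
    [3.4, 37.8,  9.9, 5.5],
    [0.5, 31.0, 20.0, 1.5],
])
hgi = np.array([52.0, 44.0, 61.0, 40.0, 66.0, 38.0])  # measured HGI per sample

# Ordinary least squares with an intercept, the same form as Eq. (1).
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, hgi, rcond=None)

# Coefficient of determination (R2) of the fit.
predicted = A @ coeffs
r2 = 1.0 - np.sum((hgi - predicted) ** 2) / np.sum((hgi - hgi.mean()) ** 2)
print("intercept, b_Stotal, b_V, b_A, b_M:", coeffs)
print("R2:", r2)
```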
3.1.2. Ultimate analysis and moisture
Vuthaluru et al. [17] studied the effects of moisture and coal blending on HGI for Collie coal of Western Australia, finding a significant effect of moisture content on HGI [17].

In the current study, the best correlation was found between ln Stotal, ln ((oxygen + nitrogen)/carbon) (ln ((O + N)/C)), hydrogen (H), ash (A), and moisture (M) with HGI. By a least-squares method, the correlation coefficients of ln Stotal, ln ((O + N)/C), and H with the HGI value were determined to be +0.662, +0.198, and −0.263, respectively. The results show that higher hydrogen content in coal can result in lower HGI and higher ln Stotal results in higher HGI.
The following equation resulted between HGI and the ultimate analysis:

HGI = 77.162 + 3.994 ln(Stotal) − 10.920 H + 1.904 M − 0.424 A − 11.765 ln((O + N)/C),  R2 = 0.75    (2)

The distribution of the difference between the HGI predicted from Eq. (2) and the actual determined HGI is shown in Fig. 2.

Fig. 2 – Distribution of difference between actual HGI and estimated (Eq. (2)).
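As a usage illustration, a small helper applying the coefficients of Eq. (2), including the logarithmic feature transforms, is sketched below. The function name and the example input values are hypothetical, chosen to lie inside the ranges of Table 1.

```python
import math

def hgi_from_ultimate(s_total, hydrogen, moisture, ash, oxygen, nitrogen, carbon):
    """Estimate HGI from an ultimate analysis (wt.%) using the coefficients of Eq. (2)."""
    ln_s = math.log(s_total)                         # ln(Stotal)
    ln_onc = math.log((oxygen + nitrogen) / carbon)  # ln((O + N)/C)
    return (77.162 + 3.994 * ln_s - 10.920 * hydrogen
            + 1.904 * moisture - 0.424 * ash - 11.765 * ln_onc)

# Hypothetical coal sample within the ranges of Table 1.
print(hgi_from_ultimate(s_total=1.8, hydrogen=5.1, moisture=4.0,
                        ash=10.0, oxygen=10.0, nitrogen=1.5, carbon=71.0))
```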
[22,23], principal curves [24], partial least squares methods [25],
3.1.3. Petrography and Rmax as well as the visualization of process data in several major
Hardgrove grindability index is primarily a function of the
ways, to name but a few. In addition, the argument that neural
maceral composition, more precisely the mix of macerals. The
networks are really highly parallelized neurocomputers or
greater amount of Liptinite macerals such as sporinite, cutinite,
hardware devices and should therefore be distinguished from
resinite, and alginite, from spores, leaf cuticles, resins, and algae,
respectively, particularly in combination with finely-dispersed
inertinite macerals, can result in a lower grindability index [18].
HGI is not simply a function of the maceral content though.
Through the rank range present through most of the Central
Appalachians, HGI will increase with an increase in rank. The
influence of mineral matter on HGI is also complex [18].
The relationship between HGI and coal petrography was
studied by Hsieh [19], Chandra and Maitra [20], Hower et al. [7],
Hower and Wild [8], Hower [9], and Trimble and Hower [10].
Trimble and Hower evaluated the influence of macerals
microlithotypes on HGI and on pulverizer performance in
different reflectance range [10].
Hower and Wild examined 656 Kentucky coal samples
to determine the relationship between proximate and ultimate
analysis, petrography, and vitrinite maximum reflectance with
HGI for both eastern and western Kentucky. For eastern
Kentucky, the subject of the investigations in this paper, they
found that HGI could be predicted as following equation [8]:
HGI ¼ 37:41  10:22lnðliptiniteÞ þ 28:18Rmax þ Stotal
R2 ¼ 0:64: ð3Þ
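A comparable helper for the petrographic regression of Eq. (4) is sketched below; again the function and the input values are illustrative, with maceral contents expressed as in Table 1.

```python
import math

def hgi_from_petrography(exinite, semifusinite, micrinite, macrinite, resinite, r_max):
    """Estimate HGI from petrographic inputs using the coefficients of Eq. (4)."""
    return (48.175 - 7.679 * math.log(exinite) + 13.269 * r_max
            + 0.137 * semifusinite - 0.584 * micrinite
            + 1.237 * macrinite - 1.171 * resinite)

# Hypothetical sample within the ranges of Table 1.
print(hgi_from_petrography(exinite=6.0, semifusinite=5.0, micrinite=3.0,
                           macrinite=0.3, resinite=0.6, r_max=0.9))
```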
3.2. Artificial neural network

Neural networks can be seen as a legitimate part of statistics that fits snugly in the niche between parametric and non-parametric methods [21]. They are non-parametric, since they generally do not require the specification of explicit process models, but they are not quite as unstructured as some statistical methods in that they adhere to a general class of models. In this context, neural networks have been used to extend, rather than replace, regression models, principal component analysis [22,23], principal curves [24], and partial least squares methods [25], as well as the visualization of process data in several major ways, to name but a few. In addition, the argument that neural networks are really highly parallelized neurocomputers or hardware devices and should therefore be distinguished from statistical or other pattern recognition algorithms is not entirely convincing; in the vast majority of cases neural networks are simulated on single-processor machines, and there is no reason why other methods cannot also be simulated or executed in a similar way (and indeed are) [21].

Artificial neural networks (ANNs) are simplified systems that simulate the intelligent behavior exhibited by animals by mimicking the types of physical connections occurring in their brains [26]. Derived from their biological counterparts, ANNs are based on the concept that a highly interconnected system of simple processing elements (also called "nodes" or "neurons") can learn the complex nonlinear interrelationships existing between the input and output variables of a data set [27].

The main advantage of an ANN is the ability to model a problem by the use of examples (i.e., data driven), rather than by describing it analytically. ANNs are also very powerful in effectively representing complex nonlinear systems, and can be considered a nonlinear statistical identification technique [11].

For developing a nonlinear ANN model of a system, the feed-forward architecture, namely the MLP, is most commonly used. This network usually consists of a hierarchical structure of three layers described as the input, hidden, and output layers, comprising I, J, and K processing nodes, respectively. At times, two hidden layers (Fig. 4) are used between the input and output layers of the network. Each node in the input layer is linked to all the nodes in the hidden layer using weighted {wij} connections. Similar connections exist between the hidden and output layers, as well as between the hidden layer-I and hidden layer-II nodes [26]. Feed-forward networks consist of N layers using the dotprod weight function, the netsum net input function, and the specified transfer functions [28]. The first layer has weights coming from the input, and each subsequent layer has weights coming from the previous layer. All layers have biases, and the last layer is the network output [28].

Fig. 4 – FANN architecture with two hidden layers.

Table 2 – Details of ANN-based HGI models

Model no.  Basis          Model inputs                                                      Training set size  Test set size  I  J   K
I          As determined  Moisture, total sulfur, volatile matter, ash                      400                232            4  12  –
II         As determined  Carbon, hydrogen, oxygen + nitrogen, ln (Stotal), moisture        400                200            5  12  –
III        As determined  Resinite, micrinite, macrinite, ln (exinite), semifusinite, Rmax  400                201            3  5   6

I = No. of input nodes; J = No. of nodes in the first hidden layer; K = No. of nodes in the second hidden layer.

Table 3 – Statistical analysis of HGI generalization performance of ANN-based models

Model  Correlation coefficient (train set)  Correlation coefficient (test set)
I      0.82                                 0.89
II     0.81                                 0.89
III    0.86                                 0.95
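The authors built their models with the MATLAB Neural Network Toolbox [28]; as a rough, library-agnostic sketch of the same idea (a feed-forward MLP with two hidden layers trained by back propagation), the following uses scikit-learn on placeholder data. The layer sizes, solver settings, and random arrays are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: rows are coal samples, columns are the six Model III inputs
# (ln(exinite), semifusinite, micrinite, macrinite, resinite, Rmax).
rng = np.random.default_rng(0)
X_train, X_test = rng.random((400, 6)), rng.random((200, 6))
y_train, y_test = 40 + 30 * rng.random(400), 40 + 30 * rng.random(200)

# Feed-forward MLP with two hidden layers, trained by gradient-based
# error back propagation; the hidden-layer sizes here are illustrative.
model = MLPRegressor(hidden_layer_sizes=(5, 6), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(X_train, y_train)

print("train R2:", model.score(X_train, y_train))
print("test  R2:", model.score(X_test, y_test))
```

With random placeholder data the scores are meaningless; the point is only the network structure and the train/test split sizes mirroring Table 2.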
Back propagation can train multilayer feed-forward networks with differentiable transfer functions to perform function approximation, pattern association, and pattern classification. The term back propagation refers to the process by which the derivatives of the network error, with respect to the network weights and biases, are computed. This process can be used with a number of different optimization strategies [28].

However, the numbers of nodes (J, K) in the hidden layers are adjustable parameters, whose magnitudes are governed by issues such as the desired prediction accuracy and generalization performance of the ANN model. In order that the MLP network accurately approximates the nonlinear relationship existing between its inputs and outputs, it is trained such that a pre-specified error function is minimized. This training procedure essentially aims at obtaining an optimal set of network connection weights that minimizes the pre-specified error function [29].

In this study, two ANN models (models I and II) were developed by considering one hidden layer, and the third (Model III) by considering two hidden layers in the MLP architecture, with training using the EBP algorithm (Table 2). According to Eqs. (1), (2) and (4), the selected variables were determined as the best variables for the prediction of HGI; therefore, these variables were used as inputs to the ANN for the improvement of the HGI prediction.

Neural network training can be made more efficient by certain pre-processing steps. In the present work, all input data (before feeding to the network) and the output data (in models I and III) in the training phase were scaled, so that they varied in the range of 0 and 1, using the mean and standard deviation:

pn = (Ap − meanAps) / stdAp    (5)

where Ap is the actual parameter, meanAps is the mean of the actual parameters, stdAp is the standard deviation of the actual parameters, and pn is the normalized parameter (input) [28].
While the training set was used in the EBP-algorithm-based iterative minimization of the error, the test set was used after each training iteration to assess the generalization ability of the MLP model.

The prediction and generalization performances of ANN models I, II, and III were compared with the results of Eqs. (1), (2) and (4), respectively. The results are shown in Table 3. The training process was stopped after 3000 epochs for models I and II and after 5000 epochs for Model III. The performance function used was the mean square error (MSE), the average squared error between the network-predicted outputs and the target outputs, which was 0.18, 6.47, and 0.14 for the training data of models I to III, respectively. Figs. 5–7 and 11(a,b,c) show the predicted data using the FANN versus the actual data in the testing process. The distributions of the difference between the HGI calculated from the described ANN procedures and the actual determined HGIs are shown in Figs. 8–10. The above-described results suggest that ANNs, owing to their excellent nonlinear modeling ability, are a better alternative to linear models for the prediction of the HGI of coals.

Fig. 5 – Predicted HGI by neural network versus actual measured HGI in testing process (Model I).
Fig. 6 – Predicted HGI by neural network versus actual measured HGI in testing process (Model II).
Fig. 7 – Predicted HGI by neural network versus actual measured HGI in testing process (Model III).
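For completeness, the two figures of merit quoted above (MSE as the training performance function and R2 as the generalization measure) can be computed as follows; the measured and predicted values are hypothetical.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error between target and network-predicted outputs."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination between measured and predicted HGI."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical measured vs. network-predicted HGI values for a few test samples.
measured = [46.0, 52.0, 61.0, 39.0]
predicted = [44.8, 53.1, 59.7, 40.2]
print(mse(measured, predicted), r_squared(measured, predicted))
```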

4. Technical considerations

According to Eq. (1), which presents the relation of HGI with moisture, volatile matter, ash, and total sulfur for 632 coal samples, the correlation coefficient between the regression-estimated HGI and the actual determined HGI is R2 = 0.77.
Fig. 5 shows a better correlation coefficient (R2 = 0.89) than the regression for estimating HGI with the test data sets using the FANN, in which 400 data sets were used for training and 232 data sets were used for the test. In related work, Li et al. [14] applied neural network analyses, a generalized regression neural network (GRNN), using only 67 coal samples, with 61 data sets for training and six data sets for the test, in the prediction of HGI on the basis of the proximate analysis. As noted above, their study was flawed because of the use of coals on both sides of the medium volatile bituminous reversal of HGI. Also, Sengupta [1] examined the relation between the proximate analysis and HGI and found a correlation coefficient (r) of 0.93 [1]. The problem with that work was the use of all four parameters, fixed carbon, moisture, volatile matter, and ash, which are a closed system adding to 100% [16]. As a result, in the present work the interrelationship between coal properties was considered, achieving a higher correlation (R2 = 0.89) and avoiding the problems mentioned in the previous works.

Fig. 8 – Graphical comparison of experimental HGIs with those estimated by ANN model-I (panel a), ANN model-II (panel b), ANN model-III (panel c).
Fig. 9 – Distribution of difference between actual HGI and estimated by neural network (Model I).
Fig. 10 – Distribution of difference between actual HGI and estimated by neural network (Model II).

In Eq. (2), the relation of HGI with ln Stotal, hydrogen, moisture, ash, and ln ((oxygen + nitrogen)/carbon) was presented. To our knowledge, this is the first time that these parameters have been used to predict HGI using multivariable regression and ANNs (400 and 200 data sets were used for training and testing, respectively). The high correlation coefficient of R2 = 0.89 for the HGI prediction by the ANN is evidence that the proposed neural network model can accurately estimate the HGI with the ultimate analysis and moisture as the predictors.

In Eq. (4), which presents the relation of HGI with ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax for 601 coal samples, the correlation coefficient between the regression-estimated HGI and the actual determined HGI is R2 = 0.81. As related work, Hower and Wild [8] studied the relationship between sulfur, petrography, and vitrinite maximum reflectance with HGI for eastern Kentucky coals and found a correlation of 0.64, for which liptinite, reflectance, and sulfur emerged as significant predictors. In this work, a better correlation coefficient (0.81) was achieved in a linear equation in which ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax were the predictors.

Fig. 7 shows a better correlation coefficient (R2 = 0.95) than the regression for estimating HGI, with the test data sets using the FANN, in which 400 data sets were used for training and 201 data sets were used for the test. Bagherieh et al. [15] applied generalized regression neural network analyses using 195 coal samples, with 148 data sets for training and 33 data sets for the test, in the
prediction of HGI on the basis of the petrography. The correlation coefficient (R2) of the predicted HGI with the actual determined HGI was 0.92 for the testing data. In the current work, a wide range of coal samples (201 data sets) was used for testing, and the results were improved by the FANN to R2 = 0.95, which is the highest correlation coefficient reported until now.

Fig. 11 – Distribution of difference between actual HGI and estimated by neural network (Model III).

According to the above significant results, it can be concluded that the proposed multiple regression formulas (Eqs. (1), (2) and (4)) and the ANN procedures yield significant predictions of HGI. As a comparison between the inputs to the models, the coal macerals and Rmax are better predictors in the regression and ANN procedures than the others (Table 3).

5. Conclusions

• Three data sets, (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax, were found to be the best constituents of multivariable regressions for the prediction of HGI.
• Higher moisture content in coal can result in higher HGI, and higher volatile matter content in coal results in lower HGI. No other parameters of set (a) were significant.
• An increase in hydrogen content in coal can result in lower HGI, and higher ln (Stotal) results in higher HGI.
• Higher ln (exinite), semifusinite, and resinite contents in coal decrease HGI. An increase in micrinite results in higher HGI. No other macerals were significant.
• The proposed multivariable equations:
○ Eq. (1), with the moisture, ash, volatile matter, and total sulfur input set, achieved R2 = 0.77.
○ Eq. (2), with the ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture input set, resulted in R2 = 0.75.
○ Eq. (4), with the ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax input set, resulted in the best regression correlation reported until now (R2 = 0.81).
• The FANN procedures used to improve the correlation coefficients between the predicted HGIs and the actual determined
HGIs, with resulting R2 values of 0.89, 0.89, and 0.95 for the input sets (a), (b), and (c) respectively, had not been previously reported.
• ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax are the best predictors for the estimation of HGI by both the multivariable regression and the artificial neural network methods.

REFERENCES

[1] A.N. Sengupta, An assessment of grindability index of coal, Fuel Processing Technology 76 (1) (2002) 1–10.
[2] X. Sun, Combustion Experiment Technology and Method for Coal Fired Furnace, China Electricity and Power Press, Beijing, 2001.
[3] S. Ural, M. Akyildiz, Studies of relationship between mineral matter and grinding properties for low-rank coal, International Journal of Coal Geology 60 (2004) 81–84.
[4] M.-Th. Mackrowsky, C. Abramski, Kohlenpetrographische Untersuchungsmethoden und ihre praktische Anwendung, Feuerungstechnik 31 (3) (1943) 49–64.
[5] J.T. Peters, N. Schapiro, R.J. Gray, Know your coal, Transactions of the American Institute of Mining and Metallurgical Engineers 223 (1962) 1–6.
[6] J.C. Hower, G.T. Lineberry, The interface of coal lithology and coal cutting: study of breakage characteristics of selected Kentucky coals, Journal of Coal Quality 7 (1988) 88–95.
[7] J.C. Hower, A.M. Graese, J.G. Klapheke, Influence of microlithotype composition on Hardgrove Grindability Index for selected Kentucky coals, International Journal of Coal Geology 7 (1987) 227–244.
[8] J.C. Hower, G.D. Wild, Relationships between Hardgrove Grindability Index and petrographic composition for high-volatile bituminous coals from Kentucky, Journal of Coal Quality 7 (1988) 122–126.
[9] J.C. Hower, Interrelationship of coal grinding properties and coal petrology, Minerals and Metallurgical Processing 15 (3) (1998) 1–16.
[10] A.S. Trimble, J.C. Hower, Studies of relationship between coal petrology and grinding properties, International Journal of Coal Geology 54 (2002) 253–260.
[11] H.M. Yao, H.B. Vuthaluru, M.O. Tade, D. Djukanovic, Artificial neural network-based prediction of hydrogen content of coal in power station boilers, Fuel 84 (2005) 1535–1542.
[12] S. Haykin, Neural Networks, a Comprehensive Foundation, 2nd ed., Prentice Hall, USA, 1999.
[13] L.H. Ungar, E.J. Hartman, J.D. Keeler, G.D. Martin, Process modelling and control using neural networks, American Institute of Chemical Engineers Symposium Series 92 (1996) 57–66.
[14] P. Li, Y. Xiong, D. Yu, X. Sun, Prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 84 (2005) 2384–2388.
[15] A.H. Bagherieh, J.C. Hower, A.R. Bagherieh, E. Jorjani, Studies of the relationship between petrography and grindability for Kentucky coals using artificial neural network, International Journal of Coal Geology (in press).
[16] J.C. Hower, Letter to the editor, discussion: prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 85 (2006) 1307–1308.
[17] H.B. Vuthaluru, R.J. Brooke, D.K. Zhang, H.M. Yan, Effect of moisture and coal blending on Hardgrove Grindability Index of Western Australian coal, Fuel Processing Technology 81 (2003) 67–76.
[18] J.C. Hower, C.F. Eble, Coal quality and coal utilization, Energy Minerals Division Hourglass 30 (7) (February 1996) 1–8.
[19] S.-S. Hsieh, Effects of bulk-components on the grindability of coals, Ph.D. dissertation, The Pennsylvania State University, University Park, 1976.
[20] U. Chandra, A. Maitra, A study on the effect of vitrinite content on coal pulverization and preparation, Journal of Indian Academy of Geosciences 19 (2) (1976) 9.
[21] C. Aldrich, Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods, Elsevier, 2002, p. 5.
[22] M.A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal 37 (2) (1991) 233–243.
[23] M.A. Kramer, Autoassociative neural networks, Computers and Chemical Engineering 16 (4) (1992) 313–328.
[24] D. Dong, T.J. McAvoy, Nonlinear principal component analysis based on principal curves and neural networks, Computers and Chemical Engineering 20 (1996) 65–78.
[25] S. Qin, T.J. McAvoy, Nonlinear PLS modeling using neural networks, Computers and Chemical Engineering 16 (1992) 379–391.
[26] S.U. Patel, B.J. Kumar, Y.P. Badhe, B.K. Sharma, S. Saha, S. Biswas, A. Chaudhury, S.S. Tambe, B.D. Kulkarni, Estimation of gross calorific value of coals using artificial neural networks, Fuel 86 (2007) 334–344.
[27] S.S. Tambe, B.D. Kulkarni, P.B. Deshpande, Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical and Biological Sciences, Simulation and Advanced Controls, Louisville, KY, 1996.
[28] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB, Handbook, 2002.
[29] D. Rumelhart, G. Hinton, R. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533–536.
