Article history:
Received 27 April 2007
Received in revised form 8 June 2007
Accepted 22 June 2007

Keywords: Hardgrove grindability index; Coal petrography; Coal rank; Ultimate and proximate analysis; Artificial neural network

Abstract: The effects of proximate and ultimate analysis, maceral content, and coal rank (Rmax) on the Hardgrove Grindability Index (HGI) have been investigated for a wide range of Kentucky coal samples, with calorific values from 4320 to 14960 BTU/lb (10.05 to 34.80 MJ/kg), using multivariable regression and artificial neural network (ANN) methods. The stepwise least squares method shows that linear relationships between HGI and the input sets (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax achieve correlation coefficients (R²) of 0.77, 0.75, and 0.81, respectively. The ANN, which adequately captured the characteristics of the coal samples, predicts HGI with correlation coefficients of 0.89, 0.89, and 0.95, respectively, in the testing process. It was determined that ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax form the best predictor set for the estimation of HGI by both multivariable regression (R² = 0.81) and the ANN method (R² = 0.95). The ANN-based prediction method used in this paper can therefore be employed as a reliable and accurate method for Hardgrove Grindability Index prediction.

© 2007 Elsevier B.V. All rights reserved.
1. Introduction

Grindability of coal is an important technological parameter in assessing the relative hardness of coals of varying ranks and grades during comminution [1]. It is usually expressed by the Hardgrove Grindability Index (HGI), which is of great interest since it is used as a predictive tool to determine the performance capacity of industrial pulverizers in power station boilers [2]. HGI reflects the coal's hardness, tenacity, and fracture behavior and is directly related to the coal rank, megascopic coal lithology, microscopic maceral associations, and the type and distribution of minerals [3]. Grinding properties are important in mining applications since lower-HGI (harder to grind) lithotypes require a greater energy input [4–6].

Although the HGI testing device is not costly, the measuring procedure to obtain an HGI value is time consuming. Therefore, some researchers have investigated the prediction of HGI from proximate analysis, petrography, and vitrinite maximum reflectance using regression [7–10].

Artificial neural networks (ANNs) are empirical modeling tools analogous in behavior to biological neural structures [11]. Neural networks are powerful tools with the ability to identify underlying, highly complex relationships from input–output data alone [12]. Over the last 10 years, artificial neural networks, and in particular feed-forward artificial neural networks (FANNs), have been extensively studied as process models, and their use in industry has been growing rapidly [13].
doi:10.1016/j.fuproc.2007.06.004
Fuel Processing Technology 89 (2008) 13–20
(SF), micrinite (MI), macrinite (MA), resinite (R), and Rmax are the variables that form the best constituents of the multivariable regression. The ranges of the petrography components for the Kentucky samples are shown in Table 1.

By the least squares method, the correlation coefficients of ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax with HGI are −0.814, −0.360, +0.588, −0.090, −0.448, and −0.116, respectively. The results show that an increase in the ln (exinite), semifusinite, and resinite contents of a coal decreases HGI, whereas an increase in micrinite results in higher HGI.
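The two statistics quoted above, single-variable correlation coefficients with HGI and a multivariable least squares fit, can be sketched as follows. This is a minimal illustration on made-up data (the Kentucky data set itself is not reproduced here); the synthetic coefficients only mimic the signs of the correlations reported above.

```python
import numpy as np

# Made-up petrographic predictors for 30 hypothetical coal samples.
rng = np.random.default_rng(0)
n = 30
ln_exinite = rng.normal(2.5, 0.5, n)
semifusinite = rng.normal(10.0, 3.0, n)
micrinite = rng.normal(4.0, 1.5, n)

# Synthetic HGI that falls with ln(exinite) and rises with micrinite,
# mimicking the signs reported in the text (-0.814 and +0.588).
hgi = (60.0 - 8.0 * ln_exinite - 0.5 * semifusinite
       + 2.0 * micrinite + rng.normal(0.0, 1.0, n))

# Single-variable (Pearson) correlation of each predictor with HGI.
r_exinite = np.corrcoef(ln_exinite, hgi)[0, 1]    # negative
r_micrinite = np.corrcoef(micrinite, hgi)[0, 1]   # positive

# Multivariable least squares fit: HGI = b0 + b1*ln(exinite) + ...
A = np.column_stack([np.ones(n), ln_exinite, semifusinite, micrinite])
beta, *_ = np.linalg.lstsq(A, hgi, rcond=None)

# R^2 of the fit, the statistic reported for the regression equations.
r2 = 1.0 - np.sum((hgi - A @ beta) ** 2) / np.sum((hgi - hgi.mean()) ** 2)
```

A stepwise procedure, as used in the paper, would add or drop predictors one at a time and keep those that improve R² significantly.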
The resulting multivariable equation between these parameters and HGI is Eq. (4).
The following equation resulted between HGI and ultimate analysis:

HGI = 77.162 + 3.994 ln (Stotal) − 10.920 H + 1.904 M − 0.424 A − 11.765 ln ((O + N)/C), R² = 0.75. (2)

The distribution of the difference between the HGI predicted from Eq. (2) and the actual determined HGI is shown in Fig. 2.

3.1.3. Petrography and Rmax
The Hardgrove grindability index is primarily a function of the maceral composition, more precisely the mix of macerals. A greater amount of liptinite macerals such as sporinite, cutinite, resinite, and alginite (derived from spores, leaf cuticles, resins, and algae, respectively), particularly in combination with finely dispersed inertinite macerals, can result in a lower grindability index [18]. HGI is not simply a function of the maceral content, though. Through the rank range present through most of the Central Appalachians, HGI increases with increasing rank. The influence of mineral matter on HGI is also complex [18].

The relationship between HGI and coal petrography was studied by Hsieh [19], Chandra and Maitra [20], Hower et al. [7], Hower and Wild [8], Hower [9], and Trimble and Hower [10]. Trimble and Hower evaluated the influence of macerals and microlithotypes on HGI and on pulverizer performance over different reflectance ranges [10].

Hower and Wild examined 656 Kentucky coal samples to determine the relationship between proximate and ultimate analysis, petrography, and vitrinite maximum reflectance with HGI for both eastern and western Kentucky. For eastern Kentucky, the subject of the investigations in this paper, they found that HGI could be predicted by the following equation [8]:

HGI = 37.41 − 10.22 ln (liptinite) + 28.18 Rmax + Stotal, R² = 0.64. (3)

In the present work, macerals and Rmax were used as inputs to the SPSS software, and it was found that ln (exinite), semifusinite

Fig. 3 – Distribution of difference between actual HGI and estimated HGI (Eq. (4)).

Neural networks can be seen as a legitimate part of statistics that fits snugly in the niche between parametric and non-parametric methods [21]. They are non-parametric, since they generally do not require the specification of explicit process models, but they are not quite as unstructured as some statistical methods in that they adhere to a general class of models. In this context, neural networks have been used to extend, rather than replace, regression models, principal component analysis [22,23], principal curves [24], and partial least squares methods [25], as well as the visualization of process data in several major ways, to name but a few. In addition, the argument that neural networks are really highly parallelized neurocomputers or hardware devices and should therefore be distinguished from
statistical or other pattern recognition algorithms is not entirely convincing. In the vast majority of cases neural networks are simulated on single-processor machines, and there is no reason why other methods cannot also be simulated or executed in a similar way (and indeed they are) [21].

Artificial neural networks (ANNs) are simplified systems that simulate the intelligent behavior exhibited by animals by mimicking the types of physical connections occurring in their brains [26]. Derived from their biological counterparts, ANNs are based on the concept that a highly interconnected system of simple processing elements (also called "nodes" or "neurons") can learn the complex nonlinear interrelationships existing between the input and output variables of a data set [27].

The main advantage of an ANN is the ability to model a problem from examples (i.e., data driven) rather than by describing it analytically. ANNs are also very powerful for representing complex nonlinear systems effectively, and an ANN can be considered a nonlinear statistical identification technique [11].

For developing a nonlinear ANN model of a system, a feed-forward architecture, namely the MLP, is most commonly used. This network usually consists of a hierarchical structure of three layers described as the input, hidden, and output layers, comprising I, J, and K processing nodes, respectively. At times, two hidden layers (Fig. 4) are used between the input and output layers of the network. Each node in the input layer is linked to all the nodes in the hidden layer using weighted {wij} connections. Similar connections exist between the hidden and output layers, as well as between hidden layer-I and hidden layer-II nodes [26]. Feed-forward networks consist of N layers using the dot product weight function, the netsum net input function, and the specified transfer functions [28]. The first layer has weights coming from the input; each subsequent layer has weights coming from the previous layer. All layers have biases. The last layer is the network output [28].

Table 2 – Details of ANN-based HGI models

Model no. | Basis | Model inputs | Training set size | Test set size | I | J | K
I | As determined | Moisture, total sulfur, volatile matter, ash | 400 | 232 | 4 | 12 | –
II | As determined | Carbon, hydrogen, oxygen + nitrogen, ln (Stotal), moisture | 400 | 200 | 5 | 12 | –
III | As determined | Resinite, micrinite, macrinite, ln (exinite), semifusinite, Rmax | 400 | 201 | 3 | 5 | 6

I = No. of input nodes; J = No. of nodes in the first hidden layer; K = No. of nodes in the second hidden layer.

Table 3 – Statistical analysis of HGI generalization performance of ANN-based models

Model | Correlation coefficient (train set) | Correlation coefficient (test set)
I | 0.82 | 0.89
II | 0.81 | 0.89
III | 0.86 | 0.95
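The feed-forward computation described above (weighted {wij} connections, a bias at every layer, and transfer functions applied to the net input) can be sketched as follows. The layer sizes and random weights are illustrative placeholders, not the trained networks of models I to III.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # A common differentiable transfer function for the hidden layers.
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative node counts for input, hidden-I, and hidden-II layers.
I, J, K = 6, 5, 6

# Weight matrices {wij} and biases for each layer (random placeholders).
w1, b1 = rng.normal(size=(J, I)), rng.normal(size=J)
w2, b2 = rng.normal(size=(K, J)), rng.normal(size=K)
w3, b3 = rng.normal(size=(1, K)), rng.normal(size=1)

x = rng.normal(size=I)          # one normalized input pattern

# Each layer: dot-product net input plus bias, then the transfer function.
h1 = sigmoid(w1 @ x + b1)       # hidden layer I
h2 = sigmoid(w2 @ h1 + b2)      # hidden layer II
hgi_pred = w3 @ h2 + b3         # linear output node: the HGI estimate
```

Stacking two sigmoid hidden layers before a linear output is what lets the network approximate the nonlinear input–output mapping.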
Fig. 5 – Predicted HGI by neural network versus actual measured HGI in the testing process (Model I).
Fig. 7 – Predicted HGI by neural network versus actual measured HGI in the testing process (Model III).
Back propagation can train multilayer feed-forward networks with differentiable transfer functions to perform function approximation, pattern association, and pattern classification. The term back propagation refers to the process by which the derivatives of the network error, with respect to the network weights and biases, are computed. This process can be used with a number of different optimization strategies [28]. However, the numbers of nodes (J, K) in the hidden layers are adjustable parameters, whose magnitudes are governed by issues such as the desired prediction accuracy and generalization performance of the ANN model. In order that the MLP network accurately approximates the nonlinear relationship existing between its inputs and outputs, it is trained such that a pre-specified error function is minimized; this training procedure essentially aims at obtaining an optimal set of network connection weights that minimizes the error function [29].

In this study, two ANN models (models I and II) were developed with one hidden layer, and a third (Model III) with two hidden layers, in the MLP architecture, trained using the EBP algorithm (Table 2). According to Eqs. (1), (2) and (4), the selected variables had been determined as the best variables for the prediction of HGI; these variables were therefore used as inputs to the ANN to improve the HGI prediction.

Neural network training can be made more efficient by certain pre-processing steps. In the present work, all inputs (before being fed to the network) and the output data (in models I and III) were scaled in the training phase using the mean and standard deviation:

pn = (Ap − meanAps) / stdAp (5)

where Ap is the actual parameter value, meanAps is the mean of the actual parameter values, stdAp is their standard deviation, and pn is the normalized parameter [28].

While the training set was used in the EBP algorithm-based iterative minimization of the error, the test set was used after each training iteration to assess the generalization ability of the MLP model.

The prediction and generalization performances of ANN models I, II and III were compared with the results of Eqs. (1), (2) and (4), respectively; the results are shown in Table 3. The training process was stopped after 3000 epochs for models I and II and after 5000 epochs for Model III. The performance function used was the mean square error (MSE), the average squared error between the network predicted outputs and the target outputs, which was 0.18, 6.47, and 0.14 on the training data for models I to III, respectively.
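The Eq. (5) scaling and the MSE performance function can be sketched as follows, with made-up numbers rather than the paper's measured HGI values.

```python
import numpy as np

# Hypothetical actual parameter values Ap (e.g. measured HGIs).
ap = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 53.0])

# Eq. (5): pn = (Ap - meanAps) / stdAp
pn = (ap - ap.mean()) / ap.std()

# After scaling, the data have zero mean and unit standard deviation.
# The MSE performance function compares network outputs with targets:
outputs = np.array([-1.0, 0.2, 0.9, -0.6, 1.7, 0.1])  # hypothetical net outputs
mse = np.mean((outputs - pn) ** 2)                    # mean square error
```

Note that Eq. (5) is a standardization (zero mean, unit standard deviation), so a network trained on scaled targets must have its predictions rescaled back before comparison with raw HGI values.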
Figs. 5–7 and 11(a,b,c) show the predicted HGI from the FANN versus the actual data in the testing process. The distributions of the difference between the HGI calculated by the described ANN procedures and the actual determined HGIs are shown in Figs. 8–10. These results suggest that ANNs, owing to their excellent nonlinear modeling ability, are a better alternative to the linear models for the prediction of the HGI of coals.
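A minimal version of the EBP training loop described above, gradient descent on the MSE for a small feed-forward network, can be sketched as follows. The data are synthetic and the network is far smaller than models I to III; this is a sketch of the principle, not the MATLAB toolbox setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: 20 patterns with 4 inputs, scaled targets.
X = rng.normal(size=(20, 4))
y = X @ np.array([0.5, -1.0, 0.3, 0.8])
y = (y - y.mean()) / y.std()              # Eq. (5)-style scaling

J = 8                                     # hidden nodes
w1, b1 = rng.normal(size=(4, J)) * 0.5, np.zeros(J)
w2, b2 = rng.normal(size=J) * 0.5, 0.0
lr = 0.05                                 # learning rate

def forward(X):
    h = np.tanh(X @ w1 + b1)              # hidden layer transfer function
    return h, h @ w2 + b2                 # linear output node

mse0 = np.mean((forward(X)[1] - y) ** 2)  # MSE before training

for _ in range(500):                      # training epochs
    h, out = forward(X)
    err = out - y                         # derivative of MSE w.r.t. output
    # Back-propagate the error derivatives to every weight and bias.
    gw2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, w2) * (1 - h ** 2) # tanh'(z) = 1 - tanh(z)^2
    gw1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    w1 -= lr * gw1; b1 -= lr * gb1        # gradient descent step
    w2 -= lr * gw2; b2 -= lr * gb2

mse1 = np.mean((forward(X)[1] - y) ** 2)  # MSE falls as training proceeds
```

In the paper's procedure the loop would also evaluate the test set after each iteration to monitor generalization, stopping after a fixed number of epochs.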
4. Technical considerations

Fig. 8 – Graphical comparison of experimental HGIs with those estimated by ANN model-I (panel a), ANN model-II (panel b), and ANN model-III (panel c).
Fig. 10 – Distribution of difference between actual HGI and estimated by neural network (Model II).
Fig. 11 – Distribution of difference between actual HGI and estimated by neural network (Model III).
prediction of HGI on the basis of the petrography. The best correlation coefficient (R²) between the predicted and the actual determined HGI was 0.92 for the testing data. In the current work, a wide range of coal samples (201 data sets) was used for testing, and the results were improved by the FANN to R² = 0.95, the highest correlation coefficient reported to date.

According to these results, it can be concluded that the proposed multiple regression formulas (Eqs. (1), (2) and (4)) and the ANN procedures yield significant predictions of HGI. Comparing the inputs to the models, the coal macerals and Rmax are better predictors than the others in both the regression and the ANN procedures (Table 3).

5. Conclusions

• Three data sets, (a) moisture, ash, volatile matter, and total sulfur; (b) ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture; and (c) ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax, were found to be the best constituents of the multivariable regression for the prediction of HGI.
• Higher moisture content in coal can result in higher HGI, and higher volatile matter content results in lower HGI. No other set (a) parameters were significant.
• An increase in hydrogen content in coal can result in lower HGI, and higher ln (Stotal) results in higher HGI.
• Higher ln (exinite), semifusinite, and resinite contents in coal decrease HGI. An increase in micrinite results in higher HGI. No other macerals were significant.
• The proposed multivariable equations:
  ○ Eq. (1), with the moisture, ash, volatile matter, and total sulfur input set, achieved R² = 0.77.
  ○ Eq. (2), with the ln (total sulfur), hydrogen, ash, ln ((oxygen + nitrogen)/carbon), and moisture input set, resulted in R² = 0.75.
  ○ Eq. (4), with the ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax input set, resulted in the best regression correlation reported to date (R² = 0.81).
• The FANN procedures improved the correlation coefficients between the predicted and the actual determined
HGIs, with resulting R² values of 0.89, 0.89, and 0.95 for the input sets (a), (b) and (c), respectively, which had not been previously reported.
• ln (exinite), semifusinite, micrinite, macrinite, resinite, and Rmax are the best predictors for the estimation of HGI by both the multivariable regression and the artificial neural network methods.

REFERENCES

[1] A.N. Sengupta, An assessment of grindability index of coal, Fuel Processing Technology 76 (1) (2002) 1–10.
[2] X. Sun, Combustion Experiment Technology and Method for Coal Fired Furnace, China Electricity and Power Press, Beijing, 2001.
[3] S. Ural, M. Akyildiz, Studies of relationship between mineral matter and grinding properties for low-rank coal, International Journal of Coal Geology 60 (2004) 81–84.
[4] M.-Th. Mackowsky, C. Abramski, Kohlenpetrographische Untersuchungsmethoden und ihre praktische Anwendung, Feuerungstechnik 31 (3) (1943) 49–64.
[5] J.T. Peters, N. Schapiro, R.J. Gray, Know your coal, Transactions of the American Institute of Mining and Metallurgical Engineers 223 (1962) 1–6.
[6] J.C. Hower, G.T. Lineberry, The interface of coal lithology and coal cutting: study of breakage characteristics of selected Kentucky coals, Journal of Coal Quality 7 (1988) 88–95.
[7] J.C. Hower, A.M. Graese, J.G. Klapheke, Influence of microlithotype composition on Hardgrove Grindability Index for selected Kentucky coals, International Journal of Coal Geology 7 (1987) 227–244.
[8] J.C. Hower, G.D. Wild, Relationships between Hardgrove Grindability Index and petrographic composition for high-volatile bituminous coals from Kentucky, Journal of Coal Quality 7 (1988) 122–126.
[9] J.C. Hower, Interrelationship of coal grinding properties and coal petrology, Minerals and Metallurgical Processing 15 (3) (1998) 1–16.
[10] A.S. Trimble, J.C. Hower, Studies of relationship between coal petrology and grinding properties, International Journal of Coal Geology 54 (2002) 253–260.
[11] H.M. Yao, H.B. Vuthaluru, M.O. Tade, D. Djukanovic, Artificial neural network-based prediction of hydrogen content of coal in power station boilers, Fuel 84 (2005) 1535–1542.
[12] S. Haykin, Neural Networks, a Comprehensive Foundation, 2nd ed., Prentice Hall, USA, 1999.
[13] L.H. Ungar, E.J. Hartman, J.D. Keeler, G.D. Martin, Process modelling and control using neural networks, American Institute of Chemical Engineers Symposium Series 92 (1996) 57–66.
[14] P. Li, Y. Xiong, D. Yu, X. Sun, Prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 84 (2005) 2384–2388.
[15] A.H. Bagherieh, J.C. Hower, A.R. Bagherieh, E. Jorjani, Studies of the relationship between petrography and grindability for Kentucky coals using artificial neural network, International Journal of Coal Geology (in press).
[16] J.C. Hower, Letter to the editor, discussion: prediction of grindability with multivariable regression and neural network in Chinese coal, Fuel 85 (2006) 1307–1308.
[17] H.B. Vuthaluru, R.J. Brooke, D.K. Zhang, H.M. Yan, Effect of moisture and coal blending on Hardgrove Grindability Index of Western Australian coal, Fuel Processing Technology 81 (2003) 67–76.
[18] J.C. Hower, C.F. Eble, Coal quality and coal utilization, Energy Minerals Division Hourglass 30 (7) (February 1996) 1–8.
[19] S.-S. Hsieh, Effects of bulk-components on the grindability of coals, Ph.D. dissertation, The Pennsylvania State University, University Park, 1976.
[20] U. Chandra, A. Maitra, A study on the effect of vitrinite content on coal pulverization and preparation, Journal of Indian Academy of Geosciences 19 (2) (1976) 9.
[21] C. Aldrich, Exploratory Analysis of Metallurgical Process Data with Neural Networks and Related Methods, Elsevier, 2002, p. 5.
[22] M.A. Kramer, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal 37 (2) (1991) 233–243.
[23] M.A. Kramer, Autoassociative neural networks, Computers and Chemical Engineering 16 (4) (1992) 313–328.
[24] D. Dong, T.J. McAvoy, Nonlinear principal component analysis based on principal curves and neural networks, Computers and Chemical Engineering 20 (1996) 65–78.
[25] S. Qin, T.J. McAvoy, Nonlinear PLS modeling using neural networks, Computers and Chemical Engineering 16 (1992) 379–391.
[26] S.U. Patel, B.J. Kumar, Y.P. Badhe, B.K. Sharma, S. Saha, S. Biswas, A. Chaudhury, S.S. Tambe, B.D. Kulkarni, Estimation of gross calorific value of coals using artificial neural networks, Fuel 86 (2007) 334–344.
[27] S.S. Tambe, B.D. Kulkarni, P.B. Deshpande, Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical and Biological Sciences, Simulation and Advanced Controls, Louisville, KY, 1996.
[28] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB, User's Guide, The MathWorks, 2002.
[29] D. Rumelhart, G. Hinton, R. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533–536.