
energies

Article
An Approach to State of Charge Estimation of
Lithium-Ion Batteries Based on Recurrent Neural
Networks with Gated Recurrent Unit
Chaoran Li, Fei Xiao and Yaxiang Fan *
National Key Laboratory of Science and Technology on Vessel Integrated Power System, Naval University of
Engineering, Wuhan 430033, China; lichaoranhg@163.com (C.L.); xfeyninger@gmail.com (F.X.)
* Correspondence: fanyaxiang@126.com; Tel.: +86-1557-599-0344

Received: 26 March 2019; Accepted: 24 April 2019; Published: 26 April 2019

Abstract: State of charge (SOC) represents the amount of electricity stored and is calculated and
used by battery management systems (BMSs). However, SOC cannot be observed directly, and SOC
estimation is a challenging task due to the battery’s nonlinear characteristics when operating in
complex conditions. In this paper, based on advanced deep learning techniques, an SOC estimation
approach for Lithium-ion batteries using a recurrent neural network with a gated recurrent
unit (GRU-RNN) is introduced, in which observable variables such as voltage, current, and temperature
are directly mapped to SOC estimation. The proposed technique requires no model or knowledge of
the battery’s internal parameters and is able to estimate SOC at various temperatures by using a single
set of self-learned network parameters. The proposed method is evaluated on two public datasets
of vehicle drive cycles and another high rate pulse discharge condition dataset with mean absolute
errors (MAEs) of 0.86%, 1.75%, and 1.05%. Experimental results show that the proposed method is
accurate and robust.

Keywords: state of charge estimation; lithium-ion batteries; gated recurrent unit; recurrent neural
networks; neural networks

1. Introduction
Compared with fossil fuel-based energy, renewable energy has become more and more attractive
because it cannot be depleted and does not contribute to global warming or the greenhouse effect.
As the energy buffer in renewable energy systems, energy storage can increase the electric power
efficiency of customers and enable load shifting [1]. Lithium-ion batteries (LiB) have become the first
choice of energy storage components [2] due to their advantages of high energy density, long cycle
life, and low self-discharge rate. An LiB has a very high energy density, so a great deal of energy can
be stored in it, because the electrodes of an LiB are made of lightweight lithium, which is a highly
reactive element. This high energy density allows a 1 kg LiB to store the same amount of energy as
6 kg of lead-acid batteries. Additionally, an LiB can handle hundreds of charge and discharge cycles.
Moreover, the charge lost by an LiB is as low as 5% per month, whereas the Nickel–Metal Hydride
(NiMH) battery loses 20% of its charge per month. Therefore, applications in portable electronics,
electrified vehicles, and stationary energy storage are developing rapidly and depend heavily on
LiB technology.
As one of the most important states to be tracked in a battery, state of charge (SOC) is defined as
a ratio of the residual capacity to the maximum available capacity, which represents the amount of
electricity stored. SOC not only provides reference information for the endurance of the battery but is
also the basis of the battery management system (BMS) of the battery pack. BMS is beneficial to the
effective utilization of the battery energy and the optimal management of the upper energy system.
However, SOC cannot be observed directly and needs to be indirectly estimated by the observable
variables such as voltage, current, and temperature. A key issue that must be addressed is how to
establish the nonlinear mapping relation between the observable variables and SOC. Additionally, the
highly dynamic operating conditions of the battery bring another challenge to SOC estimation due to
unpredictable charging and discharging.
Traditionally, two widely used SOC estimation methods are the ampere-hour counting method [3]
and the open-circuit voltage method [4]. Generally speaking, the latter requires the battery to rest
for a long time to reach equilibrium in order to obtain the open circuit voltage (OCV) versus the
SOC curve. For the ampere-hour counting method, on the other hand, the initial SOC needs to be
known, and the method suffers from accumulated errors. Both methods have their limitations, and researchers have
devoted a lot of effort to put forward new SOC estimation methods, including the model-based method
and the data-driven method. With the observable parameters, the model-based methods estimate
the SOC state by combining an electrical circuit model (ECM) [5] or an electrochemical model with
adaptive filters, such as Kalman filters (KF) [6–9], particle filters [10,11], and adaptive observers [12,13].
Plett et al. [6] first proposed the extended Kalman filter (EKF)-based SOC estimation method for LiB
in hybrid electric vehicles. The EKF-based methods [7–9] have been improved with the electrical
circuit model or the filter method. In references [10,11] and references [12,13], the particle filter and
the adaptive observer were respectively utilized to replace the Kalman filters to improve the accuracy
of the SOC. Nevertheless, the SOC estimation accuracy of the model-based method depends very
much on the accuracy of the battery model. Actually, researchers have difficulty in characterizing the
non-linear characteristics of batteries. In addition, the parameters of different types of batteries are
different under varying conditions. There are a lot of parameters to be identified in a battery model,
which runs up against the limits of hardware computation ability. The data-driven methods try to find the
non-linear relationship between the observable variables and the battery states from a statistical point
of view. These studies have used conventional machine learning techniques such as fuzzy logic [14],
support vector machines (SVMs) [15,16], and neural networks [17]. In reference [14], fuzzy logic
was used to overcome the problem of battery over-discharge and associated damage resulting from
inaccurate estimations of SOC. Alvarez et al. [15] applied the SVM to SOC estimation
of a Li-ion battery. However, the model was validated only under constant pulse charging/discharging
working conditions. Similarly, Hu et al. [16] suggested the radial basis function (RBF) kernel to find
the optimal parameters for the SVM and applied it to more complicated working conditions. Because
the neural network is well suited for modeling complex and nonlinear systems, it has been used
for SOC estimation for several years. In reference [17], a feedforward neural network (FNN) was
used for SOC estimation in full-electric-vehicle application, and terminal voltage estimation of the
batteries was achieved by the SOC estimation results. With the real-time measurement of voltage,
current, temperature, and the internal resistance value, Guo et al. [18] proposed another FNN method,
called the back propagation neural network (BPNN), to reduce the SOC estimation error significantly.
Similarly, Dong et al. [19] used multiple variables as inputs and SOC as the output to train a BPNN.
By employing a load-classifying neural network, reference [20] categorized battery operation modes as
idle, charge, and discharge, and then estimated the SOC. Recently, in reference [21] and reference [22],
Hannan et al. introduced a backtracking search algorithm (BSA) and lighting search algorithm (LSA) to
improve the SOC estimation accuracy of the BPNN and nonlinear autoregressive with the exogenous
input-based neural network (NARXNN), respectively.
Despite many successful research studies on the FNN-based SOC estimation, significant problems
remain unresolved. For instance, the current SOC is not only related to the current observable variables
but is also associated with the historical observable variables. However, the output of the FNN depends
only on the current input, so it is inaccurate to use the FNN to process time sequence problems such
as SOC estimation. Moreover, the input of the FNN is fixed and cannot be arbitrarily changed, which
makes it difficult to estimate SOC at any given moment. More recently, the most popular machine
learning technique, deep learning [23] (or deep neural networks), has achieved excellent results for the
solution of important problems across a wide spectrum of domains [24–27]. Especially in the field
of LiB state estimation, various deep/machine learning approaches have been employed to estimate
parameters of LiB, such as the SOC, state of health (SOH), and remaining useful life (RUL) [28,29]. The
key to their success is the increase in the amount of available computation and data. As one of the key
technologies of deep learning, the recurrent neural network (RNN) [30,31] is quite different from the
FNN. It can use the internal state (memory) to learn features and time dependencies from sequential
data and has been widely used in natural language processing, time series forecasting, and system
modeling. In reference [32], the RNN was used for lead-acid battery modeling and SOC estimation
simultaneously. Similarly, Zhao et al. [33] presented a method to combine battery modeling and SOC
estimation based on the RNN. The method of Park et al. [34] was also based on the RNN by combining
it with the single particle model to build a hybrid battery model.
Different from the RNN-based methods mentioned above, we propose an SOC estimation method based on
the gated recurrent unit recurrent neural network (GRU-RNN) to build nonlinear mapping between
the observable variables and SOC. Specifically, the GRU-RNN is an improved form of the simple RNN
to overcome the short term dependence problem of the simple RNN. The main contributions of our
work are as follows: (1) the GRU-RNN can directly characterize the non-linear relationships between
voltage, current, temperature, and SOC without the help of a battery model; (2) the GRU-RNN can
achieve SOC estimation in various operating conditions with only a set of network parameters, while
the other existing methods need different models and parameters for different conditions; (3) the
GRU-RNN can self-learn network parameters by adaptive gradient descent algorithms. Compared
with electrochemical models and equivalent circuit models that contain differential equations, the
GRU-RNN is free from requiring a large amount of work to hand-engineer and parameterize; (4) two
public datasets of vehicle drive cycles are used to demonstrate the effectiveness of the proposed
method. Moreover, another high rate pulse discharge condition dataset of an 18Ah battery is collected
to validate the GRU-RNN’s ability for SOC estimation in extreme conditions.
After a brief introduction, Section 2 is a detailed presentation of the proposed approach.
In Section 3, three testing datasets recorded respectively under complex discharge conditions, mixed
charge-discharge conditions, and high rate pulse discharge conditions are described. Then, the
experimental validation of the whole method is exhibited in Section 4. Finally, Section 5 concludes
the paper.

2. GRU-RNN for SOC Estimation


The recurrent neural network (RNN) was first put forward in the 1990s in the form of Elman
and Jordan networks [35], which are also known as the “simple recurrent networks” (SRN). Different
from the FNN, the RNN can use the internal state as the memory of the network. Therefore, the
current state is jointly impacted by the current input and the previous state. This structure enables
the RNN to deal with the time sequence problem by storing, remembering, and processing past
complex signals for a period. RNNs have been widely used in natural language processing, time
series forecasting, and system modeling. However, long time series and complex hidden layers may
lead to gradients exploding and vanishing during back-propagation processes. This is called the
long-term dependencies problem, and plenty of improved RNNs were proposed to solve the problem
by designing the gating mechanism to control gradients information propagation. Among all the
improved RNNs, the GRU-RNN is not only able to capture long-term sequential dependencies but also
has a simple structure. Moreover, compared with other RNNs, it is more robust to vanishing gradients
and has lower memory requirements. The illustration of the GRU-RNN cell is shown in Figure 1.
Figure 1. Illustration of the gated recurrent unit memory cell. ht−1 is the output of the previous hidden
layer node, xt is the input of the current hidden layer node, ht is the output of the current hidden layer
node, and zt and rt are the outputs of the update gate and the reset gate, respectively.
The forward propagation of the GRU-RNN is computed by Equations (1) to (4). Among
these functions, Equation (1) represents the "update" operation of the GRU-RNN, Equations (2) and
(3) represent the "reset" operation of the GRU-RNN, and Equation (4) represents the "output"
operation of the GRU-RNN.

zt = σ(wz · [ht−1, xt] + bz)  (1)

rt = σ(wr · [ht−1, xt] + br)  (2)

h̃t = tanh(wh̃ · [rt ⊙ ht−1, xt] + bh̃)  (3)

ht = (1 − zt) ⊙ ht−1 + zt ⊙ h̃t  (4)

In Equations (1) to (4), ⊙ represents element-wise multiplication; w is the weight parameter;
b is the bias parameter; σ(·) is the gate activation function, which is set as the sigmoid function shown
in Equation (5); tanh(·) is the output activation function, which is set as the tanh function shown in
Equation (6). The derivatives of sigmoid and tanh are functions of the original function, thus the
derivatives can be calculated by the original functions.

σ(x) = 1 / (1 + exp(−x))  (5)

tanh(x) = (exp(x) − exp(−x)) / (exp(x) + exp(−x))  (6)
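To make the cell mechanics concrete, the following minimal NumPy sketch implements one forward step of Equations (1) to (4) using the sigmoid and tanh activations of Equations (5) and (6). The weight shapes and the toy usage at the end are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):                       # Equation (5)
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_forward(x_t, h_prev, w_z, w_r, w_h, b_z, b_r, b_h):
    """One forward step of the GRU memory cell, Equations (1)-(4).

    x_t    : input vector at timestep t, e.g. [V_t, I_t, T_t]
    h_prev : previous hidden state h_{t-1}
    w_*    : weight matrices of shape (hidden_size, hidden_size + input_size)
    b_*    : bias vectors of shape (hidden_size,)
    """
    concat = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    z_t = sigmoid(w_z @ concat + b_z)                 # update gate, Eq. (1)
    r_t = sigmoid(w_r @ concat + b_r)                 # reset gate,  Eq. (2)
    concat_r = np.concatenate([r_t * h_prev, x_t])    # [r_t (elementwise) h_{t-1}, x_t]
    h_tilde = np.tanh(w_h @ concat_r + b_h)           # candidate state, Eq. (3)
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde        # new hidden state, Eq. (4)
    return h_t

# Toy usage: 3 inputs (V, I, T), 4 hidden units, random weights.
rng = np.random.default_rng(0)
hidden, inputs = 4, 3
w = {k: rng.standard_normal((hidden, hidden + inputs)) * 0.1 for k in "zrh"}
b = {k: np.zeros(hidden) for k in "zrh"}
h = np.zeros(hidden)
for x in rng.standard_normal((5, inputs)):            # a short sequence
    h = gru_cell_forward(x, h, w["z"], w["r"], w["h"], b["z"], b["r"], b["h"])
```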
By introducing the above operations, the GRU-RNN can learn the long-term sequential dependencies.
Only the forward propagation is required to obtain the SOC at each timestep during the testing stage.
At the end of each forward propagation, the loss function L of the GRU-RNN is calculated as follows:

L = (1/n) Σ_{t=1}^{n} |yt − y′t|  (7)

where yt and y′t are the real value and the estimated value at timestep t, respectively, and n is the length of
the sequence. Based on the gradient of the loss function, the GRU-RNN updates the weights and
biases by backward propagation. Since the GRU-RNN is a structure through time, it needs a different
back propagation method, called back propagation through time (BPTT), to train the network.
Suppose that the intermediate variable δt is defined as:

δt = ∂L/∂ht  (8)
The errors in BPTT can be represented as Equation (9), and the errors passed to the upper layer
can be represented as Equation (10).

δ^l_{t−1} = δz,t wzh,t + δr,t wrh,t + δh̃,t wh̃h,t + δt ⊙ (1 − zt)  (9)

δ^{l−1}_t = (δz,t wzx,t + δr,t wrx,t + δh̃,t wh̃x,t) ⊙ f′^{(l−1)}(χ^{l−1}_t)  (10)

where f^{(l−1)} is the activation function of the layer l − 1 neural network nodes, and χ^{l−1}_t is the weighted
output of layer l − 1. δz,t, δr,t, and δh̃,t can be represented as:

δz,t = δt ⊙ (h̃t − ht−1) ⊙ zt ⊙ (1 − zt)  (11)

δr,t = δt ⊙ (∂h̃t/∂rt) ⊙ zt ⊙ rt ⊙ (1 − rt)  (12)

δh̃,t = δt ⊙ zt ⊙ (1 − h̃t²)  (13)

Gradients of weights and biases can be represented as:

∂L/∂wzh,t = δz,t ht−1,  ∂L/∂wrh,t = δr,t ht−1,  ∂L/∂wh̃h,t = δh̃,t ht−1
∂L/∂wzx,t = δz,t xt,  ∂L/∂wrx,t = δr,t xt,  ∂L/∂wh̃x,t = δh̃,t xt  (14)
∂L/∂bz,t = δz,t,  ∂L/∂br,t = δr,t,  ∂L/∂bh̃,t = δh̃,t
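In practice, these BPTT gradients do not need to be derived by hand: the Keras/TensorFlow stack used later in Section 4.1 obtains them by automatic differentiation. The minimal sketch below (toy shapes and layer sizes are assumptions for illustration; this is not the authors' code) shows the forward pass of Equations (1)-(4), the loss of Equation (7), and one call that yields all the gradients of Equations (8)-(14).

```python
import tensorflow as tf

gru = tf.keras.layers.GRU(units=8)        # toy hidden size for illustration
dense = tf.keras.layers.Dense(1)          # maps the hidden state to a single SOC value
x = tf.random.normal([4, 20, 3])          # 4 sequences, 20 timesteps, features [V, I, T]
y_true = tf.random.uniform([4, 1])        # toy SOC targets

with tf.GradientTape() as tape:
    h = gru(x)                            # forward propagation, Equations (1)-(4)
    y_pred = dense(h)
    loss = tf.reduce_mean(tf.abs(y_true - y_pred))    # loss of Equation (7)

# Back propagation through time: gradients w.r.t. every weight and bias, Equations (8)-(14).
grads = tape.gradient(loss, gru.trainable_variables + dense.trainable_variables)
```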
The forward propagation, the loss function calculation, and the error back propagation
mentioned above constitute a complete training process of the GRU-RNN.
As shown in Figure 2, the inputs of the proposed GRU-RNN based SOC estimation method are
the terminal voltage Vt, the current It, and the temperature Tt of the battery at timestep t. The output
of the GRU-RNN is the SOCt of the battery at timestep t. ht(1), ht(2), ..., ht(k) are the hidden layer
nodes of the GRU-RNN, where k represents the number of hidden layer nodes and needs to be set in
advance. The expansion diagram of a single hidden layer node in the GRU-RNN is shown in Figure 3.
Hidden layer nodes take the variables [V1, I1, T1], ..., [VT−1, IT−1, TT−1], [VT, IT, TT] of the time series
t = 1, 2, ..., T as inputs, and the SOC at t = T is the output. The other hidden layer nodes work in the
same way.

Figure 2. Architecture of the gated recurrent unit recurrent neural network (GRU-RNN) for state of
charge (SOC) estimation.

Figure 3. A single hidden layer node in the GRU-RNN unfolded in time.
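As Figure 3 suggests, each training sample fed to a hidden node is a fixed-length window of [V, I, T] measurements whose label is the SOC at the window's final step. The short sketch below builds such windows from synchronized measurement logs; the function name and array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def make_sequences(v, i, t, soc, timestep):
    """Slice synchronized measurement logs into fixed-length training samples.

    Each sample is a (timestep, 3) window of [V, I, T]; its label is the SOC at
    the last step of the window, matching the unfolded structure of Figure 3.
    """
    features = np.stack([v, i, t], axis=1)            # shape (N, 3)
    xs, ys = [], []
    for end in range(timestep, len(features) + 1):
        xs.append(features[end - timestep:end])       # window of length `timestep`
        ys.append(soc[end - 1])                        # SOC at the final timestep
    return np.asarray(xs), np.asarray(ys)

# Toy usage with random logs standing in for a drive-cycle record.
rng = np.random.default_rng(1)
n = 5000
X, y = make_sequences(rng.random(n), rng.random(n), rng.random(n), rng.random(n), timestep=1000)
# X.shape == (4001, 1000, 3), y.shape == (4001,)
```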

The processes of the SOC estimation method based on the GRU-RNN are as follows:
Step I: Normalize the testing dataset and divide it into a training set and a validation set.
Step II: Set the parameters of the input layer nodes, hidden layer nodes, and output layer nodes of the
GRU-RNN; select the form of the activation functions, loss functions, and optimization algorithms [36].
Step III: Set the hyperparameters of the GRU-RNN including the timestep, sampling interval,
batch size, and iteration. Initialize the weights and biases of the GRU-RNN.
Step IV: Select the evaluation function and train the GRU-RNN with the training set. The training
set, consisting of voltage, current, temperature, and measured SOC values, is fed into the initialized
GRU-RNN, and the network parameters can be self-learned according to Equations (1) to (14).
Step V: Validate the effectiveness of the GRU-RNN for SOC estimation with the validation set. The
validation set also consists of voltage, current, temperature, and measured SOC values. The voltage,
current, and temperature measured values in the validation set are fed into the trained GRU-RNN to
obtain the estimated SOC values. By inputting the measured SOC values in the validation set and the
estimated SOC values of the GRU-RNN into the evaluation function, the effectiveness of the GRU-RNN
can be evaluated.
3. Testing Datasets of LiB

In this section, we present the testing data in detail, which are the key to the GRU-RNN for SOC
estimation. In practice, the operating conditions of LiBs are complex and changeable. Therefore, the
testing conditions of LiBs should cover the operating conditions as far as possible. In order to satisfy
the diversity of operating conditions, three testing datasets of LiBs were selected to verify the validity
of the proposed method in this paper. The first two testing datasets are public datasets, which
were collected by testing Panasonic 18650PF battery cells [37] and Samsung 18650-20R battery cells [38]
with a series of vehicle drive cycles at three different temperatures. The third testing dataset is a new
dataset that was collected under high rate pulsed discharge conditions at three different temperatures.

3.1. The Panasonic 18650PF Dataset

The Panasonic 18650PF dataset [37] was collected by the Department of Mechanical Engineering at
McMaster University in Ontario, Canada. The Panasonic 18650PF battery cell has a nominal voltage and
capacity of 3.6 V and 2.9 Ah, respectively. As shown in Figure 4, nine drive cycles, namely Cycle 1,
Cycle 2, Cycle 3, Cycle 4, the Supplemental Federal Test Procedure Driving Schedule (US06), the Highway
Fuel Economy Test (HWFET), the Urban Dynamometer Driving Schedule (UDDS), the Los Angeles 92
(LA92), and Neural Network (NN), were applied to the same battery cell at 100% battery level at three
different temperatures (0 °C, 10 °C, and 25 °C). Among the nine drive cycles, US06, HWFET, UDDS,
and LA92 are four common drive cycles of electric vehicles. US06 is a driving condition of electric
vehicles with high acceleration. In the HWFET, electric vehicles are tested at speeds under 60 miles/hour.
The UDDS is used for the driving conditions of light vehicles. The LA92 was developed for emission
reduction. Cycle 1, Cycle 2, Cycle 3, Cycle 4, and NN are randomly mixed data composed of
US06, HWFET, UDDS, and LA92. Voltage, current, capacity, and cell temperature were recorded at a
0.1-second interval, and more than five million data points were collected. A more detailed description
can be found in reference [37]; the maximum values of the currents were determined by the LiB
operation conditions.
Figure 4. Current profiles measured in Cycle 1, Cycle 2, Cycle 3, Cycle 4, the Supplemental Federal Test
Procedure Driving Schedule (US06), the Highway Fuel Economy Test (HWFET), the Urban
Dynamometer Driving Schedule (UDDS), the Los Angeles 92 (LA92), and Neural Network (NN).
3.2. The Samsung 18650-20R Dataset

The Samsung 18650-20R dataset [38] was collected by the Center for Advanced Life Cycle
Engineering (CALCE) at the University of Maryland. A 2.9-Ah, 3.2-V Samsung 18650-20R cylindrical LiB
was tested with four drive cycles: the Beijing Dynamic Stress Test (BJDST), the Federal Urban Driving
Schedule (FUDS), US06, and the Dynamic Stress Test (DST). The current profiles of the drive cycles
are shown in Figure 5. The maximum values of the currents were determined by the LiB operation
conditions. All tests were performed at 80% battery level and 50% battery level at 0 °C, 25 °C, and
45 °C. The voltage, current, capacity, and cell temperature were recorded with a one-second time step,
and 1.4 million data points were collected. In contrast to the Panasonic 18650PF dataset, the data of
charge, pause, and discharge in this dataset were recorded continuously.

Figure 5. Current profiles measured in the Beijing Dynamic Stress Test (BJDST), the Federal Urban
Driving Schedule (FUDS), US06, and the Dynamic Stress Test (DST).
3.3. High Rate Pulse Discharge Condition Dataset Collection
To validate the GRU-RNN's ability for SOC estimation in extreme working conditions, a new
dataset of an 18-Ah, 3.65-V battery was collected under high rate pulse discharge conditions at three
different temperatures (0 °C, 25 °C, and 45 °C). Similar to the two datasets mentioned above, the test
equipment included a host computer, a battery cycler, and a thermal chamber. The test execution
steps are: 1) set the temperature to 0 °C and charge the battery cell fully to 100% SOC using the CC
(constant current)-CV (constant voltage) profile at a 1 C rate; 2) rest the battery cell for 1 h and discharge
the battery cell at a pulse current of 30 C with a 0.5 s width every 0.6 s for 80 cycles; 3) rest the battery
cell for 1 h and repeat Steps 1 through 2 for temperatures of 45 °C and 25 °C; 4) repeat Steps 1 through
3 two times to obtain three sets of data at each temperature. The voltage, current, capacity, and cell
temperature of the discharge stage were recorded at a 0.01-second interval, and 90 thousand data
points were collected. This dataset is called the high rate pulse condition dataset throughout the rest of
the paper. The corresponding voltage and current profiles of the high rate pulse discharge conditions
are shown in Figure 6. From Figure 6, we can observe a significant voltage drop, which is due to the
internal resistance of the battery. The differences in temperature lead to differences in the internal
resistance of the battery, which further lead to differences in voltage drops [39,40].
Figure 6. Voltage and current profiles of high rate pulse discharge conditions: (a) voltage at 0 °C;
(b) voltage at 25 °C; (c) voltage at 45 °C; (d) current.

4. SOC Estimation Results

In this section, we present the experimental settings, evaluation criteria, and experimental results.
Specifically, the Panasonic 18650PF dataset, the Samsung 18650-20R dataset, and the high-rate pulse
discharge dataset are used to evaluate the performance of the GRU-RNN for SOC estimation in
complex and changeable discharge conditions, mixed charge and discharge conditions, and extreme
conditions, respectively.

4.1. Experimental Settings


We conduct our experiments on a desktop with an Intel Core i7-8700k 3.2 GHz CPU, an NVIDIA GeForce
GTX 1070Ti (8 GB on-board memory) GPU, and 16 GB RAM. The proposed method is implemented
in Python and Keras with a TensorFlow backend. The architecture of the GRU-RNN is very simple:
it consists of one input layer, one hidden layer, one full connection layer, and one output layer.
The nodes of the input layer, the hidden layer, the full connection layer, and the output layer are set as
3, 1000, 50, and 1, respectively. Specifically, the full connection layer transforms the multiple outputs of
the hidden layer into a single SOC value. After setting up the structure of the neural network and the
loss function, the network needs to be trained to obtain its parameters. Training is a process of
minimizing the loss function, and Adam is an adaptive optimizer used for this minimization. The Adam
optimizer dynamically adjusts the parameter updates by using first-order and second-order moment
estimates of the gradients. The parameters of the network are optimized using the Adam optimizer [41]
with a learning rate of 0.0001, a first-order momentum attenuation coefficient β1 of 0.9, a second-order
momentum attenuation coefficient β2 of 0.999, and mini-batches of size 72. In addition, the timestep and
the iteration are initialized as 1000 and 100, respectively, and are discussed further in Section 4.3.3.
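A minimal Keras sketch of this configuration is given below. Only the layer sizes, the Adam hyperparameters, the batch size, and the MAE loss come from this section; everything else (variable names and the commented-out training call) is an illustrative assumption, not the authors' released code.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1000, 3)),     # timestep = 1000, input features [V, I, T]
    tf.keras.layers.GRU(1000),           # hidden layer with 1000 nodes
    tf.keras.layers.Dense(50),           # full connection layer with 50 nodes
    tf.keras.layers.Dense(1),            # single SOC output
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.9, beta_2=0.999)
model.compile(optimizer=optimizer, loss="mean_absolute_error")   # loss of Equation (7)
# model.fit(X_train, y_train, batch_size=72, epochs=100, validation_data=(X_val, y_val))
```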
Appropriate data normalization can make the training process of the GRU-RNN more efficient
and robust. Moreover, data normalization can remove negative scale effects and improve the convergence
rate. In this paper, data are normalized to the range [−1, 1], as shown in Equation (15):

xnorm = 2(x − xmin)/(xmax − xmin) − 1  (15)

where xmax and xmin are the maximum and minimum values of the data, x represents the initial data, and
xnorm represents the data after normalization.
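Equation (15) can be applied per input channel, for example as in the short NumPy helper below (written for illustration; not taken from the paper):

```python
import numpy as np

def normalize(x):
    """Scale a measurement channel to [-1, 1] following Equation (15)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0
```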

4.2. Evaluation Criteria


In order to evaluate the accuracy of the GRU-RNN for SOC estimation of the battery, the mean
absolute error (MAE) and the maximum error (MAX) are used as the evaluation criteria below.

MAE = (1/n) Σ_{t=1}^{n} |yt − y′t|  (16)

MAX = max_t |yt − y′t|  (17)

where yt and y′t are the real value and the estimated value at timestep t, respectively, and n is the length of
the sequence.
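The two criteria of Equations (16) and (17) can be computed directly from the measured and estimated SOC sequences; the following short NumPy helpers are an illustrative sketch:

```python
import numpy as np

def mae(y_true, y_est):
    """Mean absolute error of Equation (16)."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_est)))

def max_error(y_true, y_est):
    """Maximum absolute error of Equation (17)."""
    return np.max(np.abs(np.asarray(y_true) - np.asarray(y_est)))
```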

4.3. SOC Estimation under Changeable Discharge Conditions


In this section, the performance of the GRU-RNN for SOC estimation is validated with the
Panasonic 18650PF dataset. In these experiments, the effects of single-temperature data, multi-temperature
data, the size of the training set, and the hyperparameters are discussed. Additionally, some comparisons of SOC
estimation between the GRU-RNN and the RNN are made in this section. For the Panasonic 18650PF
dataset, the data of Cycle 1, Cycle 2, Cycle 3, Cycle 4, and NN are used for training the GRU-RNN, and
the data of US06, HWFET, UDDS, and LA92 are used for validation.

4.3.1. SOC Estimation Trained on Single-Temperature Data


In this experiment, we split the Panasonic 18650PF dataset into three subsets depending on the
three different temperatures. The GRU-RNN can self-learn network parameters by the Adam optimizer,
which frees researchers from establishing battery models and identifying parameters. Then,
the three single-temperature subsets are used to evaluate the performance of the GRU-RNN individually.

The SOC estimation curves trained on single-temperature data are shown in Figure 7, and quantitative
results are shown in Table 1. In each subgraph of Figure 7, the first row displays the estimated curves,
and the second row displays the estimated error curves.
Figure 7. SOC estimation curves trained on single-temperature data: (a) 0 °C; (b) 10 °C; (c) 25 °C.
(b) 10 C; (c) 25 ◦ C.

Table 1. SOC estimation result of the GRU-RNN trained on single-temperature data.

Temperature   Operating Condition   MAE (%)   MAX (%)
0 °C          HWFET                 2.12      5.58
              LA92                  1.13      4.13
              UDDS                  0.71      5.67
              US06                  1.01      6.16
10 °C         HWFET                 2.02      5.57
              LA92                  1.08      3.79
              UDDS                  1.05      6.37
              US06                  1.27      5.64
25 °C         HWFET                 1.50      3.8
              LA92                  0.62      3.14
              UDDS                  0.32      2.00
              US06                  1.86      10.64
Total                               1.22      10.64

MAE = mean absolute error, MAX = maximum error.
By simple calculation with the results from Table 1, the average MAE and MAX of the SOC estimation
results are 1.24%, 1.36%, and 1.08%, and 6.16%, 6.37%, and 10.64% at 0 °C, 10 °C, and 25 °C, respectively.
Therefore, the average MAE and MAX of the SOC estimation results at the three ambient temperatures
are 1.22% and 10.64%, which proves that the GRU-RNN can directly characterize the non-linear
relationships between voltage, current, temperature, and SOC without a battery model, and the
proposed model achieves satisfactory performance in estimating SOC for the single-temperature data.

4.3.2. SOC Estimation Trained on Multi-Temperature Data

In the last experiment, the performance of the GRU-RNN was evaluated with single-temperature
data. However, the network parameters learned with the training set at 0 °C could not be used for
SOC estimation with the validation set at 10 °C or 25 °C. Therefore, a set of network parameters needs
to be learned for the data at each temperature, which seriously increases the computational complexity
and memory consumption. To address this problem, in this experiment, we utilize the whole training
dataset at 0 °C, 10 °C, and 25 °C to train one GRU-RNN and validate this network on HWFET, LA92,
UDDS, and US06 at 0 °C, 10 °C, and 25 °C. The estimation curves are shown in Figure 8, and the
quantitative results are shown in Table 2. It can be concluded that the average MAE and MAX of the
SOC estimation results are 0.99%, 0.96%, and 0.63%, and 7.59%, 4.79%, and 3.08% at 0 °C, 10 °C, and
25 °C, respectively. The experiment results prove that the GRU-RNN can achieve SOC estimation in
various operating conditions with only one set of network parameters. Obviously, the higher
temperatures result in a lower MAE and a lower MAX because the performance of the battery improves
as the ambient temperature increases. This is because when the ambient temperature approaches room
temperature (25 °C), the battery has excellent cycle stability. Compared with the estimation results at a
single temperature, the average MAE decreases from 1.22% to 0.86%, and the MAX decreases from
10.64% to 7.59% due to the increased diversity of the training set. For the following experiments, the
network parameters are trained with multi-temperature data.
decreases multi-temperature
from 10.64% to 7.59% due data.
to the increasing diversity of the training
set. For the following experiments, the network parameters are trained with multi-temperature data.
HWFET LA92 UDDS US06
100 100 100 100
GRU-RNN GRU-RNN GRU-RNN GRU-RNN
SOC/ %

SOC/ %

SOC/ %

SOC/ %

Measured Measured Measured Measured


50 50 50 50

0 0 0 0
0 2 4 6 0 5 10 0 5 10 15 0 1 2 3 4
Time/0.1s 4 Time/0.1s 4 Time/0.1s 4 Time/0.1s 4
10 10 10 10
HWFET LA92 UDDS US06
SOC error/ %

SOC error/ %

SOC error/ %

SOC error/ %

10 10 10 10

5 5 5 5

0 0 0 0
0 2 4 6 0 5 10 0 5 10 15 0 1 2 3 4
Time/0.1s 4 Time/0.1s 4 Time/0.1s 4 Time/0.1s 4
10 10 10 10

(a)
HWFET LA92 UDDS US06
100 100 100 100
GRU-RNN GRU-RNN GRU-RNN GRU-RNN
SOC/ %

SOC/ %

SOC/ %

SOC/ %

Measured Measured Measured Measured


50 50 50 50

0 0 0
0 2 4 6 8 0 5 10 15 0 0.5 1 1.5 2 2.5 0 1 2 3 4 5
Time/0.1s 4 Time/0.1s 4 Time/0.1s 5 Time/0.1s 4
10 10 10 10
HWFET LA92 UDDS US06
SOC error/ %

SOC error/ %

SOC error/ %

SOC error/ %

10 10 10 10

5 5 5 5

0 0 0 0
0 2 4 6 8 0 5 10 15 0 0.5 1 1.5 2 2.5 0 1 2 3 4 5
Time/0.1s 4 Time/0.1s 4 Time/0.1s 5 Time/0.1s 4
10 10 10 10

(b)

Figure 8. Cont.
Energies 2019, 12, 1592 12 of 22
Energies 2019, 12, x FOR PEER REVIEW 12 of 22

HWFET LA92 UDDS US06


100 100 100 100
SOC/ % GRU-RNN GRU-RNN GRU-RNN GRU-RNN

SOC/ %

SOC/ %

SOC/ %
Measured Measured Measured Measured
50
Energies 50
2019, 12, x FOR PEER REVIEW 50 50 12 of 22
0 0 0 0
0 2 HWFET
4 6 8 0 5 LA92 10 15 0 0.5 UDDS
1 1.5 2 2.5 0 1 US06
2 3 4 5
100 4 100 4 100 5 100 4
Time/0.1sGRU-RNN
10 Time/0.1s 10
GRU-RNN Time/0.1s 10
GRU-RNN Time/0.1s
GRU-RNN10
% %

% %

SOC/ %

SOC/ %
HWFET Measured LA92 Measured UDDSMeasured US06Measured
SOC/

SOC/
1050 50 50 5010

SOC error/ % SOC error/ %

SOC error/ %
10 10
SOC error/

SOC error/ % SOC error/


50 0 2 4 6 8 50 0 5 10 15
0
50 0.5 1 1.5 2 2.5
0
05 1 2 3 4 5
Time/0.1s 4 Time/0.1s 4 Time/0.1s 5 Time/0.1s 4
10 10 10 10
0 0 0 0
0 2 HWFET
4 6 8 0 5 LA92 10 15 0 0.5 UDDS
1 1.5 2 2.5 0 1 US06
2 3 4 5
SOC error/ %

SOC error/ %
10 4
10 4
10 5
10 4
Time/0.1s 10 Time/0.1s 10 Time/0.1s 10 Time/0.1s 10
5 5 (c) 5 5

0 0 0 0
0 Figure
Figure 8.4 SOC
8.2 SOC 6estimation
estimation
8 0curves
curves 5trained
trained onmulti-temperature
10on multi-temperature
15 0 0.5 1 data:
data:
1.5 (a) 0 2.5
2 (a) °C,
0 ◦(b) 10 °C,
C,0 (b)1 10 (c) 25(c)
◦2 C, 3 °C25
4 ◦ C5
4 4 Time/0.1s 5 4 Time/0.1s Time/0.1s Time/0.1s
10 10 10 10

Table 2. SOC estimation result of the GRU-RNN trained on multi-temperature data.

Temperature   Operating Condition   MAE (%)   MAX (%)
0 °C          HWFET                 1.31      5.07
              LA92                  0.60      3.48
              UDDS                  0.77      5.71
              US06                  1.27      7.59
10 °C         HWFET                 1.11      4.45
              LA92                  0.47      3.63
              UDDS                  1.12      4.79
              US06                  1.13      4.41
25 °C         HWFET                 0.71      2.90
              LA92                  0.32      2.78
              UDDS                  0.82      2.68
              US06                  0.68      3.08
Total                               0.86      7.59

4.3.3. Influences of Hyperparameters on GRU-RNNs for SOC Estimation
4.3.3. Influences of Hyperparameters on GRU-RNNs for SOC Estimation 0.86 7.59
In this sub-section, we analyze the impact of the hyperparameters of the GRU-RNN on the
estimation results, which specifically include the timestep and the iteration. In detail, the timestep
represents the depth in time of the input layer of the GRU-RNN, and the iteration is the number of the
GRU-RNN's training cycles. In the experiments, various values of the timestep and the iteration are
applied, and the SOC estimation results in the forms of MAX and MAE are calculated.
The values of the timestep are set as 250, 500, and 1000, respectively, and the other network
hyperparameters are consistent with those in Section 4.1. The training times of the GRU-RNN are
12,641 s, 25,012 s, and 51,732 s, respectively. It can be found that the training time is proportional to
the value of the timestep. The estimated results are shown in Figure 9 and Table 3. It can be observed
that the performance of the GRU-RNN increases with the timestep. The reason is that the input layer
of a network with a larger depth in time can learn longer term dependencies of the historical data,
which is consistent with the long term dependencies of battery SOC.
[Figure 9: SOC and SOC error curves versus time under HWFET, LA92, UDDS, and US06 for timestep = 250, 500, and 1000, compared with the measured SOC; panels (a)–(c) correspond to 0 °C, 10 °C, and 25 °C.]
Figure 9. SOC estimation curves with different values of timestep: (a) 0 °C; (b) 10 °C; (c) 25 °C.

Table 3. SOC estimation results with different values of timestep.

Temperature   Operating Condition   Timestep = 250        Timestep = 500        Timestep = 1000
                                    MAE (%)   MAX (%)     MAE (%)   MAX (%)     MAE (%)   MAX (%)
0 °C          HWFET                 1.83      6.11        2.12      5.63        1.31      5.07
              LA92                  1.09      6.04        1.02      3.87        0.60      3.48
              UDDS                  1.01      6.67        0.72      4.31        0.77      5.71
              US06                  2.22      11.53       1.96      9.10        1.27      7.59
10 °C         HWFET                 2.16      9.40        1.22      5.25        1.11      4.45
              LA92                  1.12      6.61        0.61      6.74        0.47      3.63
              UDDS                  1.19      8.91        1.79      5.49        1.12      4.79
              US06                  1.65      7.95        0.78      4.68        1.13      4.41
25 °C         HWFET                 1.76      6.38        0.80      3.64        0.71      2.90
              LA92                  0.66      5.58        0.51      3.41        0.32      2.78
              UDDS                  0.52      2.86        1.11      2.83        0.82      2.68
              US06                  1.18      5.29        0.62      3.57        0.68      3.08
Total                               1.37      11.53       1.11      9.10        0.86      7.59
Another experiment is performed to measure the impact of different values of iteration on the estimation results. In detail, the values of iteration are set as 20, 40, 60, 80, and 100, respectively, and other network hyperparameters are consistent with those in Section 4.1. The training times of the GRU-RNN are 9979 s, 20,575 s, 29,623 s, 40,982 s, and 51,732 s, respectively. Again, it can be found that the training time is proportional to the value of iteration. The estimated curves and the corresponding bar graph of the estimation results are shown in Figures 10 and 11. It is immediately observable that the more iterations are performed, the better the accuracy becomes. The MAE decreases from 1.46% to 0.86% and the MAX decreases from 11.23% to 7.59% when the value of iteration increases from 20 to 100. This is mainly because, as the value of iteration increases, the training data sets are randomly reused by the GRU-RNN, which increases the diversity of the training samples.
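A minimal training-loop sketch of this effect is given below. It is an assumption-laden illustration rather than the exact configuration of Section 4.1: the hidden size, learning rate, batch size, and dummy data are placeholders. In Keras terms, the iteration plays the role of the number of epochs, and shuffling at every epoch is what reuses the training data in a different order.

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions; the actual settings follow Section 4.1.
timestep, n_features, hidden_units = 250, 3, 256

model = tf.keras.Sequential([
    tf.keras.layers.GRU(hidden_units, input_shape=(timestep, n_features)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # SOC expressed in [0, 1]
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# Each epoch (the "iteration" here) reshuffles and reuses the whole training set,
# which is why larger iteration values expose the network to more diverse batches.
x_train = np.random.rand(64, timestep, n_features).astype("float32")  # dummy windows
y_train = np.random.rand(64, 1).astype("float32")                     # dummy SOC labels
model.fit(x_train, y_train, epochs=20, batch_size=16, shuffle=True, verbose=0)
```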
[Figure 10: SOC and SOC error curves versus time under HWFET, LA92, UDDS, and US06 for iteration = 20, 60, and 100, compared with the measured SOC; panels (a)–(c) correspond to 0 °C, 10 °C, and 25 °C.]
Figure 10. SOC estimation curves with different values of iteration: (a) 0 °C; (b) 10 °C; (c) 25 °C.

[Figure 11: bar graph of MAE and MAX (%) for iteration = 20, 40, 60, 80, and 100.]
Figure 11. Performances at different values of iteration.

4.3.4. Influences of the Size of Training Data on GRU-RNN for SOC Estimation
In order to analyze the influences of the training data size on the GRU-RNN for SOC estimation, we record the estimation result of the GRU-RNN trained on different sizes of mixed drive cycles. In detail, the GRU-RNN is trained on one to five mixed drive cycles at various temperatures. Accordingly, the sizes of the training data are three, six, nine, 12, and 15, respectively. The other network hyperparameters are consistent with those in Section 4.1. For display purposes, only the estimated curves with sizes of three, nine, and 15 are shown in Figure 12.
[Figure 12: SOC and SOC error curves versus time under HWFET, LA92, UDDS, and US06 for training data sizes of 3, 9, and 15, compared with the measured SOC; panels (a)–(c) correspond to 0 °C, 10 °C, and 25 °C.]
Figure 12. SOC estimation curves with different sizes of training data: (a) 0 °C; (b) 10 °C; (c) 25 °C.

The corresponding bar graphs of the estimation results are shown in Figure 13. It is immediately observable that, in this case, the performance of the GRU-RNN increases with the size of the training data. It should also be noted that the GRU-RNN can achieve a MAE below 1.5% when training is conducted on nine or more training data sets. This means the proposed GRU-RNN is robust for SOC estimation even when the training data are limited.
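A minimal sketch of how training sets of different sizes can be assembled is given below; the data layout is an assumption made for illustration. With three ambient temperatures and one to five drive cycles per temperature, the resulting sizes are 3, 6, 9, 12, and 15.

```python
import numpy as np

def build_training_set(cycles_by_temp, n_cycles):
    """Concatenate the first `n_cycles` drive cycles recorded at each temperature.

    cycles_by_temp: dict mapping temperature -> list of (features, soc) arrays.
    """
    feats, socs = [], []
    for cycles in cycles_by_temp.values():
        for features, soc in cycles[:n_cycles]:
            feats.append(features)
            socs.append(soc)
    return np.concatenate(feats), np.concatenate(socs)

# Hypothetical layout: three ambient temperatures, five recorded cycles each.
cycles = {t: [(np.random.rand(100, 3), np.random.rand(100)) for _ in range(5)]
          for t in (0, 10, 25)}
x9, y9 = build_training_set(cycles, n_cycles=3)   # size-9 training set
```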
[Figure 13: bar graph of MAE and MAX (%) versus the amount of training data sets (3, 6, 9, 12, and 15).]
Figure 13. Performances at different sizes of training data.
4.3.5. Comparisons between GRU-RNN and RNN
In order
4.3.5. to demonstrate
Comparisons betweenthe advantages
GRU-RNN and RNNof the adopted GRU-RNN, we replace the GRU-RNN
In order to demonstrate the advantages of the adopted GRU-RNN, we replace the GRU-RNN
with an original RNN. The same network architecture as the GRU-RNN is adopted by the RNN
withInan original
order RNN. The same
to demonstrate network architecture
the advantages of the adopted as theGRU-RNN,
GRU-RNNwe is adopted
replace the by the RNN for
GRU-RNN
for a fair comparison. The training times of the RNN and the GRU-RNN are 30,152 s and 51,732 s,
a fair
with ancomparison.
original RNN. TheThe training times of architecture
same network the RNN and the GRU-RNN
as the GRU-RNNisare 30,152by
adopted s and
the RNN51,732fors,
respectively. The The
arespectively.
fair comparison.
GRU-RNN
GRU-RNN
The training
takes a longer
takes
timesa longer time
of thetime
RNN
tototrain theGRU-RNN
trainthe
and the networkparameters
network parameters
are 30,152than
than
s andthe the
RNN
51,732
RNNdue
s,
due
to a more complex
to a more complex
respectively. structure, as
structure,takes
The GRU-RNN shown
as shown in
a longer Figure
in Figure 1. Figure
time to1.train
Figure 14 compares
the14network
compares the performance
the performance
parameters than theof of the
the RNN
RNN due RNN
and the a GRU-RNN.
toand more complexItstructure,
the GRU-RNN. isItclear that,
is clear compared
asthat,
shown compared
in Figurewith
with the
the RNN
1. Figure RNN (bluecurve),
(blue
14 compares curve),
the thethe GRU-RNN
GRU-RNN
performance the(red
of(red curve)
RNN curve)
has smaller
hasthe
and SOC
smaller errors.
SOC
GRU-RNN. ItThe
errors. detailed
The
is clear detailed comparison
comparison
that, compared withresults
results
the RNN are(blue
are shown
shown ininTable
curve), Table 4. 4. It can
It can
the GRU-RNN be
(redseen
be seen that that
curve)the the
GRU-RNNGRU-RNN
has smaller outperforms
SOC
outperforms errors.theThethe
RNN RNN inin
detailed boththe
comparison
both theMAE
MAE andare
results
and the MAX.
shown
the MAX. inThe GRU-RNN
Table
The 4. It can be
GRU-RNN is more
seen suitable
that
is more the
suitable
thanRNN
thanGRU-RNN
the the RNN for SOC
outperforms
for SOC estimation
the RNNwith
estimation inwith
both long-term
the MAE
long-term dependencies.
and the MAX.The
dependencies. TheSOC
The SOCofof
GRU-RNN LiBs
LiBsis amore
issequence that that
suitable
a sequence
increases over time, and the historical sequences have influence on the current sequences. Intoorder
increases
than the over
RNN time,
for SOC and the historical
estimation with sequences
long-term have influence
dependencies. on the
The current
SOC of sequences.
LiBs is a In
sequence order
that
accurately
increases overestimate
time, and thethecurrent SOC of LiBs, as
historical many historical sequences as possibleInshould be
to accurately estimate the current SOCsequences
of LiBs, as have
manyinfluence on the
historical current sequences.
sequences as possible order to
should be
taken intoestimate
accurately account. the Thecurrent
RNN cannot SOC ofdeal with
LiBs, as long
manysequences
historicalwell becauseasofpossible
sequences exploding gradient
should be
taken into account. The RNN cannot deal with long sequences well because of exploding gradient
and vanishing
taken into account.gradient.
The RNN However,cannot thedeal
GRU-RNN
with long can realize the
sequences control
well of the
because ofcurrent
exploding input and the
gradient
and vanishing
historical
and vanishinggradient.
state by GRU,
gradient. However, the
as introduced
However, theGRU-RNN
in Section 2.
GRU-RNN canrealize
can realizethethe control
control of the
of the current
current inputinput
and theand the
historical statestate
historical by GRU,
by GRU, as asintroduced
introducedin inSection
Section 2.2.
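For illustration, a minimal sketch of such a like-for-like comparison is given below, using Keras layers as stand-ins; the hidden size and input shape are placeholders rather than the exact configuration of Section 4.1. Only the recurrent cell changes between the two estimators, so any accuracy difference can be attributed to the gating mechanism.

```python
import tensorflow as tf

def build_estimator(cell="gru", timestep=250, n_features=3, hidden_units=256):
    """Same topology with either a plain RNN cell or a GRU cell for a fair comparison."""
    Recurrent = tf.keras.layers.GRU if cell == "gru" else tf.keras.layers.SimpleRNN
    return tf.keras.Sequential([
        Recurrent(hidden_units, input_shape=(timestep, n_features)),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # SOC in [0, 1]
    ])

rnn_model = build_estimator("rnn")   # prone to vanishing/exploding gradients on long sequences
gru_model = build_estimator("gru")   # gates regulate the current input and the historical state
```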
[Figure 14: SOC and SOC error curves versus time under HWFET, LA92, UDDS, and US06 for the RNN and the GRU-RNN, compared with the measured SOC; panels (a)–(c) correspond to 0 °C, 10 °C, and 25 °C.]
Figure 14. Performance comparison of the recurrent neural network (RNN) and the GRU-RNN: (a) 0 °C; (b) 10 °C; (c) 25 °C.
Table 4. Quantitative comparison of the RNN and the GRU-RNN.
Temperature   Operating Condition   RNN                   GRU-RNN
                                    MAE (%)   MAX (%)     MAE (%)   MAX (%)
0 °C          HWFET                 11.06     23.70       1.31      5.07
              LA92                  7.84      31.27       0.60      3.48
              UDDS                  4.87      32.06       0.77      5.71
              US06                  14.75     37.92       1.27      7.59
10 °C         HWFET                 6.75      14.58       1.11      4.45
              LA92                  4.31      20.11       0.47      3.63
              UDDS                  3.32      17.84       1.12      4.79
              US06                  8.96      26.89       1.13      4.41
25 °C         HWFET                 5.69      11.29       0.71      2.90
              LA92                  4.08      15.29       0.32      2.78
              UDDS                  3.48      12.15       0.82      2.68
              US06                  7.32      21.08       0.68      3.08
Total                               6.87      37.92       0.86      7.59
4.4. SOC Estimation under Mixed Charge-Discharge Conditions
In order to validate the performance of the proposed GRU-RNN for SOC estimation over the whole working states of the battery, we evaluate the proposed method on the Samsung 18650-20R dataset, in which the data of charge, pause, and discharge are recorded continuously. A detailed description of the Samsung 18650-20R dataset can be found in Section 3.2. Two sets of data at each ambient temperature under FUDS, US06, and BJDST are used in this experiment. Half of the data are used for training, and the other half are used for validating.
The SOC estimation curves are calculated and compared with the measured SOC curves in Figure 15. The curves are similar but slightly differ from each other. The quantitative results are shown in Table 5 with a total average MAE of 1.75% and a maximum MAX of 7.04%. This proves that the GRU-RNN can accurately estimate the SOC under mixed charge-discharge conditions.
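For reference, the two error metrics reported throughout this section can be computed as in the following minimal sketch, assuming SOC is expressed as a fraction in [0, 1] so that the errors come out in percentage points.

```python
import numpy as np

def soc_errors(estimated, measured):
    """Return the mean absolute error (MAE) and maximum error (MAX) in percent."""
    err = np.abs(np.asarray(estimated) - np.asarray(measured)) * 100.0
    return err.mean(), err.max()

# Example with synthetic values.
mae, max_err = soc_errors([0.51, 0.48, 0.45], [0.50, 0.49, 0.47])
print(f"MAE = {mae:.2f}%, MAX = {max_err:.2f}%")
```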
[Figure 15: SOC and SOC error curves versus time under FUDS, US06, and BJDST for the GRU-RNN, compared with the measured SOC; panels (a)–(c) correspond to 0 °C, 25 °C, and 45 °C.]
Figure 15. SOC estimation curves under mixed charge-discharge conditions: (a) 0 °C; (b) 25 °C; (c) 45 °C.

Table 5. SOC estimation results under mixed charge-discharge conditions.
Temperature   Operating Condition   MAE (%)   MAX (%)
0 °C          FUDS                  2.28      7.04
              US06                  2.19      5.72
              BJDST                 1.84      4.73
25 °C         FUDS                  0.86      3.13
              US06                  2.56      4.03
              BJDST                 3.17      5.11
45 °C         FUDS                  1.05      3.91
              US06                  0.83      2.43
              BJDST                 0.98      2.87
Total                               1.75      7.04
4.5. SOC Estimation under High Rate Pulse Discharge Conditions
The last experiment is conducted under high rate pulse discharge conditions with the high rate pulse discharge condition dataset to validate the GRU-RNN's ability for SOC estimation under extreme conditions. A set of data for each temperature is used for training, and another set of data for each temperature is used for validating. The SOC estimation curves and the corresponding quantitative results are shown in Figure 16 and Table 6, respectively. It is clear that the GRU-RNN achieves excellent results with MAE = 1.05% and MAX = 2.22% on the high rate pulse discharge condition dataset, which indicates the feasibility and validity of this method.
[Figure 16: SOC and SOC error curves versus time for the GRU-RNN and the measured SOC under high rate pulse discharge conditions; panels (a)–(c) correspond to 0 °C, 25 °C, and 45 °C.]

Figure 16. SOC estimation curves under high rate pulse discharge conditions: (a) 0 °C; (b) 25 °C; (c) 45 °C.

Table 6. SOC estimation results under high rate pulse discharge conditions.
Temperature   MAE (%)   MAX (%)
0 °C          0.93      1.97
25 °C         1.32      2.22
45 °C         0.91      2.04
Total         1.05      2.22

In summary, the proposed SOC estimation approach for LiBs using the GRU-RNN has been evaluated in complex and changeable discharge conditions, mixed charge and discharge conditions, and extreme conditions by the Panasonic 18650PF dataset, the Samsung 18650-20R dataset, and the high-rate pulse discharge dataset. The MAEs of the experiment results are 0.86%, 1.75%, and 1.05%, and the MAXs are 7.59%, 7.04%, and 2.22%. This proves that the proposed method is accurate and robust. In addition, the GRU-RNN can self-learn network parameters by the Adam optimizer, which frees researchers from establishing battery models and identifying their parameters.

5. Conclusions
In this work, an accurate and robust SOC estimation approach is developed for LiBs. The proposed
strategy is based on a machine learning framework and leverages the GRU-RNN to establish the
nonlinear mapping relation between the observable variables and SOC. Both the qualitative and the
quantitative results on three challenging datasets prove the feasibility and advantages of the proposed
method. Two public datasets of vehicle drive cycles and another high rate pulse discharge condition
dataset are used to evaluate the performance of the GRU-RNN for SOC estimation in complex and
changeable discharge conditions, mixed charge and discharge conditions, and extreme conditions,
respectively. In addition, the following works are accomplished in this paper: (a) the proposed
GRU-RNN method can directly characterize the non-linear relationships between voltage, current,
temperature, and SOC without battery models; (b) the proposed GRU-RNN method is proven to have
the ability to estimate SOC at various temperatures by using a single set of network parameters; (c) the
proposed GRU-RNN method is proven to self-learn network parameters without requiring a great
deal of work to hand-engineer and parameterize; (d) the influences of the GRU-RNN hyperparameters
and training data size on the performance for SOC estimation are analyzed. The experimental results
demonstrate that the performance of the GRU-RNN increases with hyperparameters such as the timestep
and iteration as well as the training data size. The increase of timestep represents the increase of
sequence span during the GRU-RNN training, and the increase of iteration represents the increase of
diversity of the GRU-RNN training samples. It is also proven that the model still has high estimation
accuracy, even though the amount of data is small; (e) the comparison results between the RNN and
the GRU-RNN show that the GRU-RNN can achieve a higher accuracy and overcome the problem
of long-term dependencies in the RNN. In summary, the proposed SOC estimation method for LiBs
based on the GRU-RNN has been widely validated and achieved good results. To extend this work,
the future plan might entail conducting battery lifetime experiments to obtain battery aging data for
evaluating the accuracy of the proposed method after a large number of cycles and optimizing the
datasets for testing with complete mission profiles. In addition, future work will implement the proposed method in prototype hardware for the BMS.

Author Contributions: Conceptualization, C.L., F.X. and Y.F.; Data curation, C.L.; Formal analysis, F.X.; Funding
acquisition, F.X.; Investigation, Y.F.; Methodology, C.L. and Y.F.; Project administration, F.X.; Resources, F.X.;
Software, C.L. and Y.F.; Supervision, F.X.; Validation, C.L. and Y.F.; Visualization, C.L.; Writing—original draft,
C.L.; Writing—review & editing, F.X. and Y.F.
Funding: This research was funded by National Defense Science Innovation Zone Project and National Natural
Science Foundation, grant number 51807200.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the
study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to
publish the results.


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
