In (6), $\|\cdot\|_*$ denotes the nuclear norm. It is believed that the nuclear norm can describe structural information more effectively than the $L_1$-norm or the $L_2$-norm. To see this, we rearrange the pixels of the error image E and obtain the image F. The $L_2$-norm values of the matrices E and F are equal (the value is 33.96), but their nuclear norm values are different (91.99 for E and 98.52 for F). For previous 2-D methods based on the $L_2$- or $L_1$-norm, the measure of the error image is still based on pixel values, so the structural information of the error image cannot be revealed.
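This effect is easy to reproduce numerically. The sketch below (using a random stand-in matrix, not the paper's actual E and F) shows that rearranging the entries of a matrix preserves its Frobenius norm but generally changes its nuclear norm:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((32, 32))                 # stand-in for an error image
F = rng.permutation(E.ravel()).reshape(E.shape)   # same pixels, rearranged

print(np.linalg.norm(E, 'fro'), np.linalg.norm(F, 'fro'))  # identical
print(np.linalg.norm(E, 'nuc'), np.linalg.norm(F, 'nuc'))  # generally different
```

The Frobenius norm depends only on the multiset of pixel values, whereas the nuclear norm (the sum of singular values) also depends on how those values are arranged, which is why it can sense the structure of the error image.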
B. Algorithm

We discuss how to solve (6) in this section. Motivated by [25], we convert the nuclear norm optimization problem into an F-norm ($L_2$-norm) optimization problem. To this end, let us give the following lemma.

Lemma 1 [25]: For a matrix $X \in \mathbb{R}^{p \times q}$, one has
$$\|X\|_* = \left\|\left(XX^T\right)^{-1/4}X\right\|_F^2. \qquad (7)$$
Lemma 1 represents the nuclear norm in the form of the F-norm, and provides a basis for solving our model. In Lemma 1, the $\alpha$-th power of a matrix $X$ of rank $r$ is defined by
$$X^\alpha = U\Sigma^\alpha V^T, \quad \Sigma^\alpha = \mathrm{diag}\left(\sigma_1^\alpha, \ldots, \sigma_r^\alpha\right) \qquad (8)$$
where $U\Sigma V^T$ is the singular value decomposition of $X$ and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$.
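As a quick numerical sanity check of Lemma 1 (our illustration, not part of [25]), one can evaluate both sides of (7), computing the fractional matrix power through the decomposition in (8):

```python
import numpy as np

def frac_power_sym(M, alpha, eps=1e-12):
    """M^alpha for a symmetric PSD matrix M, via its eigendecomposition
    (the symmetric special case of definition (8))."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, eps, None)        # guard against tiny/negative eigenvalues
    return (V * w**alpha) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))

lhs = np.linalg.norm(X, 'nuc')                                # ||X||_*
rhs = np.linalg.norm(frac_power_sym(X @ X.T, -0.25) @ X)**2   # ||(XX^T)^{-1/4} X||_F^2
print(lhs, rhs)                      # the two values agree up to numerical error
```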
From Lemma 1, the objective function in model (6) can be rewritten as
$$J(P) = \sum_{i=1}^{s}\left\|W_i\left(A_i - A_iPP^T\right)\right\|_F^2 \qquad (9)$$
where $W_i$ is the weight matrix, defined by
$$W_i = \left[\left(A_i - A_iPP^T\right)\left(A_i - A_iPP^T\right)^T\right]^{-1/4}. \qquad (10)$$
Now, we use the iteratively reweighted method to solve our model. The procedure consists of the following two alternating steps.

1) Given $W_i = W_i^k$, update $P$ by
$$P^{k+1} = \arg\min_P \sum_{i=1}^{s}\left\|W_i\left(A_i - A_iPP^T\right)\right\|_F^2 \quad \text{s.t. } P^TP = I_r. \qquad (11)$$

2) Given $P = P^{k+1}$, update $W_i$ by
$$W_i^{k+1} = \left[\left(A_i - A_iPP^T\right)\left(A_i - A_iPP^T\right)^T\right]^{-1/4}. \qquad (12)$$

The key step is to solve the optimization problem (11). Its objective function can be rewritten as
$$\begin{aligned}
J(P) &= \sum_{i=1}^{s}\left\|W_i\left(A_i - A_iPP^T\right)\right\|_F^2\\
&= \sum_{i=1}^{s}\mathrm{Tr}\left[W_iA_i\left(I - PP^T\right)A_i^TW_i^T\right]\\
&= \sum_{i=1}^{s}\mathrm{Tr}\left(W_iA_iA_i^TW_i^T\right) - \sum_{i=1}^{s}\mathrm{Tr}\left(PP^TA_i^TW_i^TW_iA_i\right)\\
&= \sum_{i=1}^{s}\mathrm{Tr}\left(A_i^TW_i^TW_iA_i\right) - \sum_{i=1}^{s}\mathrm{Tr}\left(P^TA_i^TW_i^TW_iA_iP\right)
\end{aligned} \qquad (13)$$
where the second equality is derived from the fact that the matrix $I - PP^T$ is symmetric and idempotent. Denote $D = \sum_{i=1}^{s} A_i^TW_i^TW_iA_i$; since the first term of (13) does not depend on $P$, problem (11) can be rewritten as
$$P^{k+1} = \arg\max_P \mathrm{Tr}\left(P^TDP\right) \quad \text{s.t. } P^TP = I_r. \qquad (14)$$
So, $P^{k+1}$ is the matrix formed by the $r$ orthonormal eigenvectors of $D$ corresponding to the $r$ largest eigenvalues.
Now, we consider how to update (12) efficiently. Let $X_i = A_i - A_iPP^T$; then $W_i$ can be rewritten as $W_i = (X_iX_i^T)^{-1/4}$. When some of the singular values of $X_iX_i^T$ become small, the computation of $W_i$ becomes ill-conditioned. To improve the stability of the algorithm, let us replace $X_i$ by its $\varepsilon$-stabilization $(X_i)_\varepsilon$. The $\varepsilon$-stabilization of a matrix $X$ is defined by
$$X_\varepsilon = U\Sigma_\varepsilon V^T, \quad \Sigma_\varepsilon = \mathrm{diag}\left(\max\{\sigma_i, \varepsilon\}_{i=1:r}\right). \qquad (15)$$
However, for a fixed $\varepsilon$, we would no longer expect the algorithm to converge to the nuclear norm solution of (6). We select $\varepsilon_i^k = \min\{\varepsilon_i^{k-1}, \sigma_K(X_i^k)\}$ at step $k$, and then one may hope for both stability and convergence toward the solution of (6). Based on the above, we update $W_i$ by
$$W_i^{k+1} = \left[\left(A_i - A_iP^{k+1}(P^{k+1})^T\right)_{\varepsilon_i^k}\left(\left(A_i - A_iP^{k+1}(P^{k+1})^T\right)_{\varepsilon_i^k}\right)^T\right]^{-1/4}. \qquad (16)$$
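Each weight update (16) then costs one SVD per training image. A sketch, assuming $\sigma_K(\cdot)$ in the rule above denotes the $K$-th largest singular value (helper name ours):

```python
import numpy as np

def stabilized_weight(X, eps):
    """W = (X_eps X_eps^T)^(-1/4), with X_eps the eps-stabilization of (15)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s, eps)     # sigma_i <- max(sigma_i, eps)
    # X_eps X_eps^T = U diag(s^2) U^T, so its -1/4 power is U diag(s^-1/2) U^T.
    return (U * s**-0.5) @ U.T
```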
The algorithm is summarized in Algorithm 1. The convergence of the iteratively reweighted algorithm can be guaranteed when the constraint is linear [25]. Fig. 4 shows that the objective function value of N-2-DPCA converges well; generally speaking, its variation is less than $10^{-6}$ once the number of iterations exceeds 10.

After obtaining the projection matrix $P$ by Algorithm 1, for a given image sample $A$, the feature matrix $B$ of the image sample is obtained by $B = AP$. The feature matrix $B$ is used to represent image $A$ for classification.
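Putting the pieces together, the sketch below assembles the whole iteration from the helpers above (our assembly, with arbitrary defaults for $K$, the initial $\varepsilon$, the initial weights, and the iteration count; the paper's Algorithm 1 is the authoritative statement) and ends with the feature extraction $B = AP$:

```python
import numpy as np

def n2dpca(As, r, K=5, n_iter=20, eps0=1e-2):
    """Iteratively reweighted N-2-DPCA; returns the n x r projection P.
    Uses update_projection() and stabilized_weight() defined above."""
    m = As[0].shape[0]
    Ws = [np.eye(m) for _ in As]               # assumed initialization W_i^0 = I
    eps = [eps0] * len(As)
    for _ in range(n_iter):
        P = update_projection(As, Ws, r)       # P-step, (14)
        for i, A in enumerate(As):
            X = A - A @ P @ P.T                # X_i = A_i - A_i P P^T
            sigma = np.linalg.svd(X, compute_uv=False)            # descending
            eps[i] = min(eps[i], sigma[min(K, len(sigma)) - 1])   # eps_i^k rule
            Ws[i] = stabilized_weight(X, eps[i])                  # W-step, (16)
    return P

# Feature extraction for classification: B_i = A_i P
# features = [A @ P for A in As]
```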
