
IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 36, NO. 12, DECEMBER 2017

Low-Dose Lung CT Image Restoration Using Adaptive Prior Features From Full-Dose Training Database

Yuanke Zhang, Member, IEEE, Junyan Rong, Hongbing Lu, Member, IEEE, Yuxiang Xing, Member, IEEE, and Jing Meng

Abstract — The valuable structure features in full-dose computed tomography (FdCT) scans can be exploited as prior knowledge for low-dose CT (LdCT) imaging. However, lacking the capability to adaptively represent local characteristics of the structures of interest in the LdCT image may result in poor preservation of details/textures. This paper explores a novel prior knowledge retrieval and representation paradigm, called the adaptive prior features assisted restoration algorithm, for better restoration of low-dose lung CT images by capturing local features from FdCT scans adaptively. The innovation lies in the construction of an offline training database and an online patch-search scheme integrated with principal component analysis (PCA). Specifically, the offline training database is composed of 3-D patch samples extracted from existing full-dose lung scans. For online patch-search, 3-D patches with structure similar to the noisy target patch are first selected from the database as the training samples. Then, PCA is applied on the training samples to retrieve their local prior principal features adaptively. By employing the principal features to decompose the noisy target patch and using an adaptive coefficient shrinkage technique for inverse transformation, the noise of the target patch can be efficiently removed and the detailed texture can be well preserved. The effectiveness of the proposed algorithm was validated by CT scans of patients with lung cancer. The results show that it can achieve a noticeable gain over some state-of-the-art methods in terms of noise suppression and details/textures preservation.

Index Terms — Low-dose CT, lung CT imaging, restoration, prior features, principal component analysis (PCA).

Manuscript received August 7, 2017; revised September 20, 2017; accepted September 21, 2017. Date of publication September 27, 2017; date of current version November 29, 2017. This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFC0107400, in part by the China Postdoctoral Science Foundation under Grant 2017M613348, in part by the National Natural Science Foundation of China under Grant 61572283, Grant 81230035, and Grant 11275104, and in part by the Award Foundation Project of Excellent Young Scientists in Shandong Province under Grant BS2014DX005. (Corresponding authors: Hongbing Lu; Yuxiang Xing.)

Y. Zhang is with the School of Biomedical Engineering, Fourth Military Medical University, Xi'an 710032, China, and also with the School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China (e-mail: yuankezhang@163.com).

J. Rong and H. Lu are with the School of Biomedical Engineering, Fourth Military Medical University, Xi'an 710032, China (e-mail: junyanrong@126.com; luhb@fmmu.edu.cn).

Y. Xing is with the Department of Engineering Physics, Tsinghua University, Beijing 100084, China (e-mail: xingyx@mail.tsinghua.edu.cn).

J. Meng is with the School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China (e-mail: qfmj@163.com).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TMI.2017.2757035

I. INTRODUCTION

NOWADAYS, the risk of radiation-induced genetic, cancerous and other diseases is of significant concern to patients and operators [1]-[3]. Minimizing X-ray exposure to patients has been one of the major efforts in the CT field. A simple and cost-effective method to reduce radiation exposure is to lower the X-ray tube current and/or exposure time (mAs) during data acquisition. However, low-mAs acquisition protocols can be highly detrimental to image quality, resulting in images with serious noise and streak artifacts [4]. To address this problem, various noise-reduction strategies have been proposed, including statistics-based iterative reconstruction (SIR) [5], [6], sinogram domain denoising [7]-[9] and image domain denoising [10]-[12].

In CT imaging, the reconstructed images from the same patient, or even from different patients, generally share similar structures and corresponding texture characteristics for a specific tissue type. The rich structure information in high-quality full-dose CT (FdCT) scans can be exploited as prior knowledge for low-dose CT (LdCT) imaging, which has become a noticeable research interest recently. Up to now, many prior-knowledge-assisted LdCT restoration algorithms have been developed [13]-[27]. These algorithms can be categorized into two groups in terms of the prior knowledge retrieval and representation paradigm.

The first group shares the common idea of first registering a previous FdCT scan of the same patient with the LdCT scan, and then incorporating the intensity information of local pixels in the FdCT scan as prior knowledge to regularize the corresponding pixels in the LdCT scan [13]-[21]. While these methods have been successful in many cases, such a previous FdCT scan of the same patient may not always be available. Moreover, since these approaches extract pixel intensity information directly from the FdCT scan as prior knowledge, they are usually sensitive to the accuracy of image registration.


The other group is based on feature learning techniques. It usually captures the global features of FdCT training samples through an information learning process and then incorporates the learned features as prior knowledge to assist LdCT imaging [22]-[27]. Since the features are learned from training samples, the FdCT scans can be obtained from different patients, and they do not need to be pre-registered with the LdCT scans. For instance, Xu et al. [22] and Chen et al. [23], [24] adopted dictionary learning and sparse representation techniques [25] to train a redundant dictionary from image patch samples, and then incorporated the trained dictionary into an objective function for image restoration. Wu et al. [26] proposed a sophisticated feature constrained compressed sensing reconstruction algorithm (FCCS) for the incomplete-data problem. The FCCS utilized a robust principal component analysis (PCA) to learn the main features of several FdCT images similar to the target image, and then combined the total variation constraint and the feature constraint into an objective function for reconstruction. More recently, Zhang et al. [27] adapted the Markov random field (MRF) model to capture the tissue textures from a previous FdCT scan and incorporated the texture as prior knowledge for Bayesian reconstruction of the LdCT image.

In the present feature-learning-based LdCT image restoration paradigm [22]-[27], the features are retrieved from all FdCT image samples without training sample selection and are integrated into a global optimization scheme. It is therefore difficult to represent the local characteristics of a target image adaptively. For a specific region in an LdCT image, the local structure and texture pattern may be quite different from those of other regions. Therefore, locally adaptive prior features would be of great help for the preservation of details/textures in the LdCT image.

In lung imaging tasks such as lung nodule detection, the details/textures of CT images are highly desirable for clinical diagnosis. This study aims to explore a novel prior knowledge retrieval and representation paradigm that captures local features adaptively from FdCT samples, for the restoration of low-dose lung CT images with detailed textures. The innovation is the construction of an offline training database from FdCT scans and the use of an online patch-search scheme integrated with PCA. Specifically, the offline training database is composed of 3D patches extracted from full-dose lung CT scans of different patients. Given a 3D target patch to be restored in the LdCT image, the online patch-search first selects patches whose structure is similar to the target patch from the database as the training samples, and then PCA is employed on the training patches to retrieve the local principal features adaptively. By employing the learned features to decompose the target patch and using an adaptive coefficient shrinkage technique for inverse transformation, the noise can be efficiently removed and the detailed textures can be well preserved.

The rest of this paper is organized as follows. In Section II, the proposed adaptive prior features assisted (APFA) restoration scheme is presented in detail. Section III evaluates the proposed scheme with both numerical simulations and clinical studies. Finally, a discussion and conclusion are given in Section IV.

II. METHODS

The proposed APFA scheme is illustrated by the flowchart shown in Fig. 1. The flowchart contains four major steps: offline training database construction, online patch-search, online retrieval of local principal priors by PCA, and online target patch decomposition with adaptive coefficient shrinkage.

Fig. 1. 2D illustration of the flowchart of the proposed adaptive prior features assisted (APFA) algorithm. (a) 2D illustration of the patch window and search window used in this study. (b) Flowchart of the APFA process. In this example, we set K = 7 and L = 51.

In the following subsections, we describe each step in detail.

A. Offline Training Database Construction

In the APFA scheme, a 3D patch, modeled as a voxel and its K × K × K cubic nearest neighbors, is used to reflect the local spatial structure of a voxel. To construct the offline training database, several full-dose lung scans were first gathered from different patients. For each patient's FdCT scans, 3D patches were extracted automatically with a sliding distance of one voxel over the 3D volume. Then all the 3D patches from the different patients were stacked into a reference training set.

Theoretically, a large-scale training database consisting of millions of patches could better reflect all possible structures appearing in lung CT images. However, a larger scale would greatly increase the computational burden of the online search for each target patch. Considering the structural redundancy of the patches in the entire reference set, we instead use a randomly selected subset of it as the training database. In practice, a small subset consisting of 0.5% of the patches in the entire reference set, which was constructed from 432 full-dose lung CT slices of 10 patients, could achieve almost the same restoration as the entire set, when other parameters were set the same. The influence of the database size on the proposed algorithm will be further evaluated in Subsection III.B.4).a), where the database size can be further reduced.
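The construction step can be prototyped in a few lines. The sketch below is written in Python/NumPy purely for illustration (the authors report a MATLAB implementation later in the paper); the function name and the "fraction" argument are our own choices, not the paper's.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def build_training_database(fd_volumes, K=3, fraction=0.005, seed=0):
    """Extract all overlapping K x K x K patches (one-voxel sliding distance)
    from full-dose volumes and randomly keep `fraction` of them (Subsection II.A)."""
    rng = np.random.default_rng(seed)
    kept = []
    for vol in fd_volumes:                                # each vol: 3D ndarray in HU
        windows = sliding_window_view(vol, (K, K, K))     # all patches, as views
        patches = windows.reshape(-1, K ** 3)             # one flattened patch per row
        mask = rng.random(patches.shape[0]) < fraction    # random subset, e.g. 0.5%
        kept.append(patches[mask].copy())
    return np.concatenate(kept, axis=0)                   # (num_samples, K**3)

Subsampling inside the loop keeps the memory footprint bounded to the retained patches rather than the full patch set of each volume.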
B. Online Patch Search

1) Search Similar Patches From the Offline Training Database: By representing a 3D patch as a column vector, the i-th patch in the training database is denoted by x_i^Ref = [x_{i,1}^Ref, x_{i,2}^Ref, …, x_{i,M}^Ref]^T, where M = K³, K is the patch size, and i indexes the patches in the training database. The target patch to be restored in the LdCT image (shown in Fig. 1(a) with a 2D illustration) is modeled as a random vector variable and denoted by x = [x_1, x_2, …, x_M]^T. Here, the noise in a small neighborhood of an LdCT image is assumed to be approximately additive, independent and identically distributed. Although such a noise assumption may violate the true noise statistics of LdCT images, the experimental results show that the noise residual caused by this violation can be nearly neglected, especially for lung tissues (please see the details of the experiments in Subsection III.B). Based on this assumption, the noisy version of the target patch can be expressed as

    x^Ld = x + n    (1)

where x^Ld = [x_1^Ld, x_2^Ld, …, x_M^Ld]^T is the noisy patch variable, and n = [n_1, n_2, …, n_M]^T with n_k being the noise of a voxel in the patch, with zero mean and constant variance σ².

On the basis of the noise assumption, the Euclidean distance d_i between the noisy target patch x^Ld and a patch x_i^Ref in the training database can be derived as

    d_i = (1/M) ||x_i^Ref − x^Ld||₂² = (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k^Ld)²
        = (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k − n_k)² ≈ (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k)² + σ²    (2)

To search for patches similar to the target patch in the training database, we can select x_i^Ref as a training sample candidate for x^Ld if

    d_i ≤ T + σ²    (3)

where T is a preset threshold. The method for estimating the local noise variance σ² will be given later in Subsection II.D.3).

With the selected training sample candidates, two situations have to be considered. One is that the number of candidates, N̄, is too small to guarantee a robust estimation of the prior local features, which may introduce false prior structures into the restored LdCT image. The other is that N̄ is too large, which greatly increases the computational burden. Through extensive experiments, we found that for a patch vector of size K³ × 1 (K is the window size of a patch), selecting at least the M (M = K³) most similar candidates as training samples could provide a relatively robust estimation of the prior local features. Furthermore, the restoration performance would hardly improve even with more than 3M candidates. Based on this experimental observation, the following criterion is proposed in this study to select training samples for x^Ld:

Criterion 1:
(1) If M ≤ N̄ ≤ 3M, all the candidates are used as training samples;
(2) If N̄ < M, there are not enough similar patches in the offline training database. Instead, Ñ noisy similar patches extracted from the local search window of the LdCT image are used as training samples (please refer to the next section for a detailed description);
(3) If N̄ ≥ 3M, the 3M most similar candidates are selected as the training samples.

Suppose that finally N training samples are selected; we denote them by a two-dimensional matrix

    X^Ref = (x_1^Ref, x_2^Ref, …, x_N^Ref)    (4)

where each training sample is a column vector of length M, x_i^Ref = [x_{i,1}^Ref, x_{i,2}^Ref, …, x_{i,M}^Ref]^T, i = 1, 2, …, N.
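A minimal sketch of this selection step, assuming the training database is stored with one flattened patch per row, could look as follows; the names and the fallback behaviour for case (2) of Criterion 1 are illustrative only.

import numpy as np

def select_training_samples(target, database, sigma2, T=4e3):
    """Select clean training samples for one noisy target patch following
    Eqs. (2)-(3) and Criterion 1.  target: (M,) noisy patch; database:
    (num_patches, M) reference patches; sigma2: local noise variance."""
    M = target.size
    d = np.mean((database - target) ** 2, axis=1)      # Eq. (2): mean squared distance
    candidates = np.where(d <= T + sigma2)[0]          # Eq. (3): noise-compensated threshold
    n_bar = candidates.size
    if n_bar < M:                                      # Criterion 1(2): not enough similar
        return None                                    # patches in the offline database
    if n_bar > 3 * M:                                  # Criterion 1(3): keep the 3M closest
        candidates = candidates[np.argsort(d[candidates])[:3 * M]]
    return database[candidates]                        # Criterion 1(1): M <= N_bar <= 3M

When the function returns None, the caller would substitute the Ñ noisy patches gathered from the local search window, as described in the next subsection and in Subsection II.E.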

2) Search Similar 3D Patches From a Local Search Window of the LdCT Image Volume: For each 3D noisy target patch x^Ld, we select its similar 3D patches in a local search window. Here the local search window is defined as an L × L × L (L > K) local cubic neighborhood surrounding the target patch to be denoised, as shown in Fig. 1(a) (with a 2D illustration).

There are two purposes for searching similar patches in the LdCT image volume. One is to use them for estimating the variance of the noise present on the target patch. More details on the local noise variance estimation method will be given later in Subsection II.D.3). The other is to use them as alternative training samples in case not enough similar patches are found in the offline training database. More details on this situation will be given later in Subsection II.E.

To select patches similar to x^Ld, the Euclidean distance between x^Ld and each patch in the 3D local search window is computed, and then the Ñ most similar patches are selected. For the number of similar patches Ñ, some references have suggested Ñ = (8 ∼ 10) × K² as a good choice for 2D patch selection [28]. In our practice, setting Ñ = 3K³ could achieve a relatively robust estimation of the local statistics in the 3D mode.

C. Local Principal Features of Prior Samples Retrieved by PCA

PCA is a useful technique for retrieving the major features of image signals [29]. When applied to training samples, PCA can provide locally adaptive principal features of these patch signals.

Given the training samples X^Ref, their principal features can be retrieved using PCA by the following two steps:

Step 1: Compute the average patch μ^Ref of all training samples by

    μ^Ref = (1/N) Σ_{i=1}^{N} x_i^Ref    (5)

and obtain the centralized patches x̄_i^Ref with x̄_i^Ref = x_i^Ref − μ^Ref, i = 1, 2, …, N.

Step 2: Calculate the covariance matrix of the training dataset by C = 1/(N − 1) X̄^Ref (X̄^Ref)^T and decompose the covariance matrix C as

    C = Φ Λ Φ^T    (6)

where Λ = diag{λ_1, λ_2, …, λ_M} and Φ = [ϕ_1, ϕ_2, …, ϕ_M]. The term λ_k (k = 1, 2, …, M) is the k-th eigenvalue of C, sorted in descending order. ϕ_k (k = 1, 2, …, M) is the corresponding eigenvector, which forms the desired principal features.
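Under the same assumptions as in the earlier sketches (NumPy, one patch per row), the two steps reduce to a mean, a covariance and an eigendecomposition:

import numpy as np

def local_pca_features(samples):
    """Retrieve the local principal features of the training samples
    (Steps 1-2 of Subsection II.C).  samples: (N, M) matrix, one patch per row."""
    mu = samples.mean(axis=0)                           # Eq. (5): average patch
    centered = samples - mu                             # centralized patches
    C = centered.T @ centered / (samples.shape[0] - 1)  # covariance matrix, (M, M)
    eigval, eigvec = np.linalg.eigh(C)                  # Eq. (6): C = Phi Lambda Phi^T
    order = np.argsort(eigval)[::-1]                    # eigenvalues in descending order
    return mu, eigval[order], eigvec[:, order]          # mu, lambda_k, phi_k as columns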

D. Target Patch Decomposition and Adaptive Coefficient Shrinkage

1) Target Patch Decomposition: A locally adaptive M-dimensional PCA subspace can be spanned by the orthogonal principal directions {ϕ_k : k = 1, …, M}. The noisy target patch x^Ld can then be decomposed by projecting it onto the subspace using

    y^Ld = Φ^T (x^Ld − μ^Ref)    (7)

to get the coefficient vector y^Ld = [y_1^Ld, y_2^Ld, …, y_M^Ld]^T of x^Ld.

2) Adaptive Coefficient Shrinkage: Before using the adaptive coefficient shrinkage to suppress the noise in y^Ld, we need to analyze its signal and noise contributions first. As discussed in Subsection II.B.1), the noise in a small local neighborhood is assumed to be approximately additive, independent and identically distributed. Accordingly, the noisy target patch decomposition (i.e., Eq. (7)) can be rewritten as

    y^Ld = Φ^T (x + n − μ^Ref) = Φ^T (x − μ^Ref) + Φ^T n = y + n_y    (8)

where y = [y_1, y_2, …, y_M]^T and n_y = [n_{y,1}, n_{y,2}, …, n_{y,M}]^T are the principal coefficient vectors of the signal x and the noise n, respectively.

One important property of the PCA subspace is that most of the signal power concentrates on the first few components, while the noise power is distributed more evenly. Based on this property, the linear minimum mean square-error estimation (LMMSE) can be used as a coefficient shrinkage technique to suppress the noise in y^Ld, as used by Zhang et al. [28] and Muresan et al. [30] in their works. The LMMSE estimator for the k-th component of y is

    ŷ_k = w_k · y_k^Ld,   k = 1, 2, …, M    (9)

where the shrinkage coefficient is

    w_k = E[y_k²] / (E[y_k²] + E[n_{y,k}²])    (10)

Here, the symbol E[·] represents the statistical expectation. As y^Ld is centralized, E[y_k²] and E[n_{y,k}²] can be viewed as the signal variability/power and the noise variability/power of the k-th component of the target patch x^Ld in the PCA subspace. In flat zones, the signal power is much smaller than the noise power, so that w_k is close to 0. Hence the noise in y can be efficiently suppressed by using Eq. (9).

As the principal vectors Φ are retrieved from clean patch samples, the decomposed eigenvalues (as in Eq. (6)) mainly represent the signal variability of each component. Based on this property of the PCA decomposition [29], we can derive that

    E[y_k²] ≈ Λ(k, k) = λ_k    (11)

where λ_k is the k-th eigenvalue of C, as in Eq. (6).

On the basis of the noise assumption within a patch (i.e., independent and identically distributed), we can easily derive that

    E[n_{y,k}²] = σ²    (12)

where σ² is the noise variance of the voxels in the target patch. On the basis of Eqs. (11)-(12), the shrinkage coefficient in Eq. (10) can be estimated by

    w_k = λ_k / (λ_k + σ²)    (13)
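Given the quantities retrieved above, the decomposition of Eq. (7), the LMMSE shrinkage of Eqs. (9) and (13), and the transform back to the image domain amount to only a few lines. The function below is an illustrative sketch under those assumptions, not the authors' implementation:

import numpy as np

def restore_patch(target, mu, eigval, eigvec, sigma2):
    """Decompose a noisy target patch in the local PCA subspace, shrink the
    coefficients with the LMMSE weights of Eq. (13), and transform back."""
    y_ld = eigvec.T @ (target - mu)       # Eq. (7): projection onto the subspace
    w = eigval / (eigval + sigma2)        # Eq. (13): adaptive shrinkage coefficients
    y_hat = w * y_ld                      # Eq. (9): LMMSE estimate of the coefficients
    return eigvec @ y_hat + mu            # inverse transform back to the image domain

In practice these lines are applied to every target patch of the volume, and the overlapping estimates are later combined voxel by voxel.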

3) Noise Estimation: To estimate σ² of a noisy patch adaptively, we use a PCA-based local noise estimation method proposed by Manjón et al. [31], using the similar patches selected from the local search window of the LdCT image.

As described in Subsection II.B.2), the Ñ similar 3D patches selected from the local search window can be treated as noisy observations of the target patch variable x^Ld. We denote the noisy patches as

    X^Ld = (x_1^Ld, x_2^Ld, …, x_Ñ^Ld)    (14)

where each column vector of the matrix represents a noisy patch, expressed as x_i^Ld = [x_{i,1}^Ld, x_{i,2}^Ld, …, x_{i,M}^Ld]^T, i = 1, 2, …, Ñ. With X^Ld, the PCA-based local noise estimation method can be summarized in the following three steps:

Step 1: Calculate the covariance matrix C̄ of the noisy patches, decompose C̄ as C̄ = Φ̄ Λ̄ Φ̄^T, and project the centralized noisy patch set X̄^Ld onto Φ̄ as Y^Ld = Φ̄^T X̄^Ld. Please refer to Subsection II.C for the details of the PCA transformation.

Step 2: Denote the row vectors of the coefficient matrix Y^Ld by Y_k^Ld = [y_{1,k}^Ld, y_{2,k}^Ld, …, y_{Ñ,k}^Ld] (k = 1, 2, …, M), and compute the variability of Y_k^Ld using v_k = 1/(Ñ − 1) Σ_{i=1}^{Ñ} (y_{i,k}^Ld)².

Step 3: Estimate the noise variance using [31]

    T_1 = 2 · median({√v_k , k = 1, 2, …, M}),
    V = { v_j | v_j ∈ {v_k , k = 1, 2, …, M} and √v_j < T_1 },
    σ̂² = β · median(V)    (15)

where β is a scalar and the operator median[·] extracts the median of a data set.

4) Inverse Transform to the Image Domain: After the adaptive shrinkage of all coefficients of the target patch x in the PCA subspace, we have ŷ = [ŷ_1, ŷ_2, …, ŷ_M]^T. By transforming ŷ back to the image domain, the restored target patch can be obtained as

    x̂ = Φ ŷ + μ^Ref    (16)

Applying the above steps (Subsections II.B-D) to each 3D target patch, all the patches in the image volume can be estimated. For each voxel, there are multiple estimates from neighboring overlapped patches. We estimate the final intensity value of each voxel by averaging its estimates from all patches overlapping it, as done in Refs. 28 and 31.
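The sketch below follows our reading of Steps 1-3 and Eq. (15); the thresholding constant and the handling of degenerate cases are therefore assumptions rather than a verified reproduction of Ref. 31.

import numpy as np

def estimate_local_noise(noisy_patches, beta=0.8):
    """Estimate the local noise variance from the similar noisy patches of the
    search window, following Steps 1-3 and Eq. (15) (after Manjon et al. [31])."""
    n_tilde = noisy_patches.shape[0]
    centered = noisy_patches - noisy_patches.mean(axis=0)
    C = centered.T @ centered / (n_tilde - 1)           # covariance of the noisy patches
    _, eigvec = np.linalg.eigh(C)
    Y = centered @ eigvec                               # Step 1: PCA coefficients
    v = (Y ** 2).sum(axis=0) / (n_tilde - 1)            # Step 2: per-component variability
    T1 = 2.0 * np.median(np.sqrt(v))                    # Step 3: threshold on the std scale
    V = v[np.sqrt(v) < T1]                              # keep noise-dominated components
    return beta * np.median(V)                          # corrected variance estimate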
E. An Alternative if No Similar Patch Is Found in the Training Database

As indicated by the criterion given in Subsection II.B.1), it is essential to detect whether there are enough similar patches in the training database, in order to avoid introducing false prior structures into the restoration process.

For the case that not enough similar reference patches are found in the database, we instead use the noisy similar patches extracted from the local search window, X^Ld = (x_1^Ld, x_2^Ld, …, x_Ñ^Ld), as the training samples for PCA. The rest of the denoising steps are essentially the same as those described in Subsections II.C-D, except for the estimation of E[y_k²] in Eq. (11). As in this case the principal directions Φ are trained on the noisy patch samples, the eigenvalue Λ(k, k), or λ_k, represents the variability of both the signal and noise contributions to the corresponding component. Instead of using λ_k to approximate E[y_k²] in Eq. (11), we estimate E[y_k²] by

    E[y_k²] = max(0, Λ(k, k) − σ²) = max(0, λ_k − σ²)    (17)

Thus, the shrinkage coefficient w_k can be computed by

    w_k = max(0, λ_k − σ²) / λ_k    (18)
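The modified shrinkage of Eqs. (17)-(18) is a one-line change with respect to Eq. (13); the small guard against a zero eigenvalue in the sketch below is our own addition.

import numpy as np

def shrinkage_from_noisy_training(eigval, sigma2):
    """Shrinkage coefficients when the principal directions are trained on noisy
    patches: the eigenvalues then carry both signal and noise power, so sigma^2
    is subtracted before normalization (Eqs. (17)-(18))."""
    signal_power = np.maximum(0.0, eigval - sigma2)     # Eq. (17)
    return signal_power / np.maximum(eigval, 1e-12)     # Eq. (18), guarded against zero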
F. Parameter Selection

The algorithm parameters include the number of training samples N, the number of noisy similar patches Ñ, the scalar β, the patch size K, the search window size L, and the threshold T. For the parameters N and Ñ, the selection method and suggested values have been given in Subsection II.B. The scalar β corresponds to the correction factor in the noise estimation process. In Ref. 31, β = 1.66 was suggested as a good choice. In our practice, setting β = 0.8 could achieve satisfying results for lung CT images.

As in any patch-based algorithm, the patch size K influences the tradeoff between noise reduction and details/textures preservation. Generally, a larger patch size would result in well-smoothed restorations with details/textures obliterated, while a smaller patch size may preserve details/textures better but leave some residual noise. For better details/textures preservation in lung CT images, a small patch size is preferred. We experimentally studied the proposed APFA algorithm using various configurations of the patch size K and the local search window size L, and found that the setting of K = 3 and L = 15 leads to satisfying results in both details/textures preservation and noise reduction for images with a relatively low noise level (such as 120 kVp/20 mAs lung images), while the setting of K = 4 and L = 21 works well for images with a relatively high noise level (such as 120 kVp/14 mAs lung images).

The threshold T is a parameter that determines whether a 3D patch in the offline training database can be selected as a training sample candidate. In Eq. (3), since the noise level of the noisy target patch is accounted for by σ², T generally depends on the intensity range of a specific image. For CT images, since the voxel intensity is expressed in Hounsfield Units (HU), the intensity range of lung tissues is quite consistent, no matter under which scanning condition they were acquired. Considering that the proposed APFA employs the learned principal features of the training samples as prior knowledge, instead of using voxel intensities directly, the restoration process is less sensitive to the threshold T. In this study, with experiments using training databases of different scales and consisting of training patches acquired with different kernels/scanner models, we found that T = 4 × 10³ could achieve satisfying restoration results for lung CT images.

III. EXPERIMENTAL RESULTS

Full-dose and low-dose CT scans acquired from clinical CT-guided lung nodule needle biopsy studies were used to evaluate and validate the proposed APFA algorithm. Thirteen patients (denoted patient #1 to patient #13) were recruited under informed consent after approval by the Institutional Review Board. For patients #1 to #12, the full-dose and low-dose CT scans were acquired on a Siemens Sensation 16 CT scanner at an X-ray tube voltage of 120 kVp and tube currents of 100 mAs and 20 mAs, respectively. For patient #13, the low-dose CT scan was acquired at 120 kVp and 14 mAs for the purpose of evaluating the proposed algorithm with ultra-low-dose CT imaging, and corresponding 120 kVp/40 mAs scans from the same patient scanned three

months ago were assumed as a reference. FBP with the Siemens kernels "B60f" and "B31f" was employed to reconstruct the clinical images, respectively. The kernel "B31f" can be viewed as a vendor-based noise reduction method for CT images. Compared with kernel "B31f", kernel "B60f" provides CT images with more details but more artifacts/noise. Other scanning parameters were as follows: 0.5 s per gantry rotation, helical pitch of 1 mm, 16 × 0.75 mm collimation, 5 mm slice thickness, 2 mm reconstruction slice thickness, 1 mm reconstruction slice interval, without using automatic exposure control (AEC). The reconstructed images were of 512 × 512 size. The averaged CT dose indexes (CTDIvol) recorded for the 120 kVp/100 mAs, 120 kVp/20 mAs and 120 kVp/14 mAs scans were 10.31 mGy, 2.09 mGy and 1.52 mGy, respectively. The 13 patients were classified into a training group (10 patients: patient #1 to patient #10) and a validation group (3 patients: patient #11 to patient #13).

In addition, to evaluate how the training dataset affects the restoration result when it is constructed from scans acquired with a different scanner model and/or different reconstruction parameters, two patients' FdCT scans were downloaded from the "Give a Scan" open access dataset [32] (denoted patient #I and patient #II). Their scanning information is listed in Table I.

TABLE I. Scanning information of the downloaded scans.

Based on these patient scans, both numerical simulations and clinical studies were performed to evaluate the performance of the proposed algorithm. For comparison, the performance of the well-known LPG-PCA algorithm [28] and the global dictionary learning (DL) algorithm [24], [25] was also evaluated.

A. Numerical Simulation Studies

1) Data Acquisition: The full-dose scan of a patient from the validation group (patient #11) was used as a numerical reference phantom (shown in Figs. 2(a1)-(a2)). In all simulations, a fan-beam geometry was used with imaging parameters the same as presented above. The noise-free sinogram was obtained based on the full-dose phantom. To simulate noisy sinograms, we added noise to the noise-free sinograms using the simulation method described in Ref. 33 (N_{0i} = 5 × 10⁴ and σ_e² = 10 pA², approximately corresponding to the noise level of a 20 mAs tube current acquisition). The traditional FBP with the Ramp convolution kernel was employed to reconstruct the images.

10 full-dose volume scans (432 slices in total, reconstructed by FBP "B60f") of patients #1 to #10 in the training group were used to construct the reference set. A small subset with 0.5% of the patches randomly selected from the entire set formed the offline training database used in the simulation studies (318672 3D patches).
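One common way to realize this kind of protocol, consistent with the Poisson-plus-Gaussian-electronic-noise model analyzed in Ref. 33, is sketched below; the clipping of non-positive measurements and the parameter names are our own choices, and the exact procedure of Ref. 33 may differ in detail.

import numpy as np

def simulate_low_dose_sinogram(sino, N0=5e4, sigma_e2=10.0, seed=0):
    """Add Poisson photon noise and zero-mean Gaussian electronic noise to a
    noise-free sinogram of line integrals, then convert back to line integrals."""
    rng = np.random.default_rng(seed)
    counts = N0 * np.exp(-sino)                                   # expected photon counts
    noisy = rng.poisson(counts).astype(float)
    noisy += rng.normal(0.0, np.sqrt(sigma_e2), size=sino.shape)  # electronic noise
    noisy = np.clip(noisy, 1.0, None)                             # avoid log of <= 0
    return np.log(N0 / noisy)                                     # noisy line integrals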

2) Visual Evaluation: For comparison, the LPG-PCA and the DL algorithms were implemented. In the LPG-PCA, the local noise variance was estimated using the method presented in Subsection II.D.3) with K = 3, L = 15 and β = 0.8. In the DL, the global dictionary was trained using patches extracted from the reference images as described in Ref. 24, with a patch size of 8 × 8 and an atom number of 256, as recommended in Ref. 24. In the proposed APFA algorithm, the parameters were set as K = 3, L = 15, β = 0.8 and T = 4 × 10³. All CT images were displayed in two windows, i.e., the lung window [−1024, −24] HU and the soft tissue window [−160, 240] HU, respectively.

Fig. 2. Processing results of the simulated phantom. (a1)-(a2): the FdCT image (FBP reconstruction with the Ramp filter from the noise-free sinogram). (b1)-(b2): the LdCT image (FBP reconstruction with the Ramp filter from the noisy sinogram). (c1)-(c2): the proposed APFA processed result. (d1)-(d2): the LPG-PCA processed result. (e1)-(e2): the DL processed result. The first column ((a1)-(e1)) shows the images with the soft tissue display window of [−160, 240] HU. The second column ((a2)-(e2)) shows the images of the selected regions (outlined by blue dotted rectangles in (a1)) with a lung display window of [−1024, −24] HU.

Fig. 3. Zoomed-in views and the difference images. The first row shows the zoomed images of a selected region with a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 2(a2)) with the display window of [−1024, −24] HU. The lung nodule is outlined by the red dotted rectangle in Fig. 2(b2). The second row shows the difference images of ROI I between the FBP and the proposed APFA, LPG-PCA and DL, respectively. The display window is [−100, 200] HU.

Figures 2 and 3 show the processing results and associated difference images of one slice of the scans. It can be observed that with LPG-PCA and DL the noise was suppressed, but severe streak artifacts could still be observed in both the soft tissue regions and the lung regions. We can observe in Figs. 2(c1)-(c2) and Fig. 3 that the proposed APFA algorithm performs much better in both noise suppression and details/textures preservation compared with LPG-PCA and DL, and produces a visual effect similar to the full-dose reference scan, especially for the ROI of the lung nodule (indicated by the blue arrow in Fig. 2(a2)).

To further compare the performance differences of the LPG-PCA, DL and APFA algorithms, intensity profiles through the lung nodule (along the red line in Fig. 2(a2)) are drawn in Fig. 4. We can see that the profile of the APFA matches that of the ground truth better, indicating that the proposed APFA performs better in preserving the details/textures of the lung nodule.

Fig. 4. Intensity profiles along the vertical red line labeled in Fig. 2(a2). (a)-(d) Comparison between the ground truth and the FBP, the proposed APFA, the LPG-PCA and the DL processed results, respectively.

3) Quantitative Evaluation: The performance of the proposed algorithm was quantitatively evaluated using the root mean square error (RMSE) and the structural similarity index (SSIM) of a region containing a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 2(a2)). The RMSE was employed to reflect the difference between the processed result and the ground truth and to characterize the restoration accuracy. The SSIM [34] was employed to specifically gauge the preservation of perceptually salient information. These metrics are defined as

    RMSE = √( Σ_{i=1}^{I} (x_i − r_i)² / I )    (19)

    SSIM = (2·x̄·r̄ + c₁)(2σ_xr + c₂) / ((x̄² + r̄² + c₁)(σ_x² + σ_r² + c₂))    (20)

where r and x denote the same ROI of the reference phantom and of the processing result to be evaluated, respectively, x̄ and r̄ are the mean intensities of each ROI, σ_x and σ_r are the standard deviations, and σ_xr is the covariance between the phantom and the processed image. I represents the number of pixels in an ROI. c₁ and c₂ are small constants with c₁ = (K₁ L_s)² and c₂ = (K₂ L_s)², where L_s is the dynamic range of the image (set to 4095 for the range from −1024 HU to 3071 HU in our study), and K₁ and K₂ were set to 0.01 and 0.03, respectively, based on Ref. 34.

The corresponding results for the different methods are shown in Fig. 5. It is obvious that the proposed APFA exhibits the best result, with the lowest RMSE and the highest SSIM.
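A direct transcription of Eqs. (19)-(20) into code, computing a single-window SSIM over the whole ROI as the equation suggests (rather than the sliding-window variant of Ref. 34), could look like this:

import numpy as np

def rmse(x, r):
    """Root mean square error over an ROI, Eq. (19)."""
    return float(np.sqrt(np.mean((x - r) ** 2)))

def ssim_roi(x, r, L_s=4095.0, K1=0.01, K2=0.03):
    """Single-window SSIM between an ROI and its reference, Eq. (20)."""
    c1, c2 = (K1 * L_s) ** 2, (K2 * L_s) ** 2
    mx, mr = x.mean(), r.mean()
    cov = np.mean((x - mx) * (r - mr))
    num = (2.0 * mx * mr + c1) * (2.0 * cov + c2)
    den = (mx ** 2 + mr ** 2 + c1) * (x.var() + r.var() + c2)
    return float(num / den)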

Fig. 5. Performance comparison of the different algorithms by the SSIM and RMSE metrics, respectively.

4) Normal Vector Flow Measures: The texture similarity of the lung nodule region (indicated by the red dotted box in Fig. 2(b2)) between the different methods can be quantitatively evaluated by the normal vector flow (NVF) [27]. In an NVF image, the gradual changes of the intensities in the desired image appear as ordered arrows, while the noise in the image appears as disordered arrows. The corresponding NVF images are shown in Fig. 6. The NVF image of the full-dose reference (shown in Fig. 6(a)) was drawn as the ground truth. Fig. 6(b) shows the NVF image of the FBP reconstruction, where strong noise is present. We can observe from Figs. 6(c)-(e) that for the selected ROI, the LPG-PCA, DL and APFA algorithms can all suppress the disordered arrows in the uniform regions. In the NVF image of the proposed APFA method, the ordered arrows match those of the ground truth best, illustrating that the fine textures of the nodule were better preserved.

Fig. 6. NVF images of the lung nodule region (indicated by the red dotted rectangular box in Fig. 2(b2)). (a)-(e) correspond to the FdCT image, the LdCT image, and the images processed by the proposed APFA, the LPG-PCA and the DL, respectively.

5) Haralick Texture Measures: In order to further verify the improvement of the APFA method in texture preservation over the LPG-PCA and DL methods, the Haralick texture feature measure [35] was adopted in this study. Haralick texture features were extracted from the region (ROI I) indicated by the orange dotted box in Fig. 2, which was selected in the lung region (with a nodule). The corresponding ROI of the FdCT image served as the baseline. We extracted 13 Haralick texture features from the ROI and then computed the Euclidean distance between the features of the reference and those of the images processed by the FBP, the proposed APFA, the LPG-PCA and the DL, respectively. A shorter distance indicates better texture preservation. The corresponding results are shown in Table II. It is obvious that the gain provided by the proposed APFA in preserving the textures of the lung tissues is noticeable.

TABLE II. Normalized Haralick texture distance between the reference image and the processed results in Fig. 2.
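A possible implementation of this distance, assuming the mahotas package for the 13 Haralick features and a simple gray-level quantization of the HU values (the paper specifies neither its quantization nor the normalization used for Table II), is sketched below:

import numpy as np
import mahotas  # one possible provider of the 13 Haralick features

def haralick_distance(roi, roi_ref, levels=64):
    """Euclidean distance between the 13 Haralick features of a processed ROI and
    those of the full-dose reference ROI (Subsection III.A.5), after quantizing
    both ROIs to a common set of gray levels over the reference intensity range."""
    lo, hi = float(roi_ref.min()), float(roi_ref.max())

    def quantize(img):
        q = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        return (q * (levels - 1)).astype(np.uint8)

    feat = mahotas.features.haralick(quantize(roi), return_mean=True)
    feat_ref = mahotas.features.haralick(quantize(roi_ref), return_mean=True)
    return float(np.linalg.norm(feat - feat_ref))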

B. Clinical Studies With the Low-Dose CT Scans

1) Data Acquisition: In this pilot clinical study, a 120 kVp/20 mAs low-dose scan from a patient in the validation group (patient #12) was chosen for the evaluations (shown in Figs. 7(b1)-(b2)). The corresponding slices of a 120 kVp/100 mAs full-dose scan from the same patient were used as the gold standard, as shown in Figs. 7(a1)-(a2). The FBP with the Siemens kernel "B60f" was employed to reconstruct both the full-dose and low-dose scans.

Fig. 7. Processing results of the real 120 kVp/20 mAs low-dose scan. (a1)-(a2): the FdCT image (FBP "B60f" reconstruction from the 120 kVp/100 mAs full-dose sinogram). (b1)-(b2): the LdCT image (FBP "B60f" reconstruction from the low-dose sinogram). (c1)-(c2): the proposed APFA processed result. (d1)-(d2): the FBP reconstruction with the Siemens commercial smoothing kernel "B31f" from the low-dose sinogram. (e1)-(e2): the LPG-PCA processed result. (f1)-(f2): the DL processed result. The first column ((a1)-(f1)) shows the images with the soft tissue display window of [−160, 240] HU. The second column ((a2)-(f2)) shows the images of the selected regions (outlined by blue dotted rectangles in (b1)) with a lung display window of [−1024, −24] HU.

2) Comparison With Other Methods: The LPG-PCA and the DL algorithms were used for comparison. In addition, the FBP reconstruction of the LdCT scan with the Siemens kernel "B31f" was employed as a vendor-based noise reduction method for comparison. For the proposed APFA, the same offline training database as described in Subsection III.A.1) was used. The parameters were the same as those presented in Subsection III.A.2).

Figures 7 and 8 show the processing results and associated difference images of one slice of the scans. We can observe in Figs. 7 and 8 that, compared with the LPG-PCA and DL algorithms, the proposed APFA algorithm performs much better in both noise suppression and details/textures preservation. It can also be seen in Figs. 7 and 8 that the FBP with "B31f" filtering achieves a smoother reconstruction result, but the details/textures are seriously blurred, especially in the small lung nodule regions. Again, our method produces the visual effect most similar to the full-dose reference scan.

Fig. 8. Zoomed-in views and the difference images. (a)-(f) are the zoomed images of a selected region with a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 7(b2)) with the display window of [−1024, −24] HU. The lung nodule is indicated by the blue arrow in Fig. 7(b2). (g)-(j) are the difference images of ROI I between the LdCT image and the images processed by the APFA, FBP "B31f", LPG-PCA and DL, respectively. The display window is [−100, 200] HU.

3) Evaluation by Radiologist Experts: 32 slices of the 120 kVp/100 mAs full-dose scans from patient #12 and the corresponding slices of the 120 kVp/20 mAs low-dose scans (after being reconstructed/processed by FBP "B60f", FBP "B31f", LPG-PCA, DL and the proposed APFA, respectively) were scored by three radiologists independently based on their overall impression, in terms of noise reduction and details/textures preservation. For a fair comparison, we first mixed all the full-dose images and the reconstructed/processed low-dose images into one image set. Then an image was randomly selected from the image set and displayed with the lung window of [−1024, −24] HU for each radiologist's

scoring, one by one. The scores ranged from 0 (worst) to 10 (best). Finally, the average score of each radiologist was computed for each image subset. The resulting scores are reported in Table III. They indicate that the proposed APFA outperformed the FBP "B31f", LPG-PCA and DL methods greatly. More importantly, they indicate that the APFA can achieve almost the same image quality as the full-dose lung scan from the radiologists' point of view.

TABLE III. Radiologists' scoring of the image quality.

4) Influence of the Offline Training Database on the APFA Performance:

a) Influence of the database size: The offline training database consisting of all 3D patches extracted from 10 full-dose volume scans (432 slices in total, from patients #1 to #10) served as a reference set (63735822 3D patches in total) to evaluate the influence of the database size on the APFA performance.

We randomly chose subsets consisting of 0.01%, 0.05%, 0.1%, 0.5%, 1%, 10% and 50% of the patches of the entire reference set, respectively, to form training databases of different scales. Then the performance of the APFA algorithm using the different training databases was compared with that using the entire reference set (with all other parameters being the same as used previously). The quantitative SSIM values shown in Fig. 9 indicate that, compared with using all training patches, there is no significant degradation in the image quality of the restored images when using a small set of training patches. They also indicate that with a training database containing more than 0.5% of the patches, the image quality of the restorations is almost identical.

Fig. 9. Comparison of the performance with different scales of the training database by the SSIM metric.

To further illustrate the influence of the database size on the restoration, only 3 slices were randomly selected from the full-dose scan of patient #1. Considering the approximate symmetry of a CT slice, we selected all 3 × 3 × 3 3D patches only from the right half of the 3 slices, as shown in Fig. 10. By randomly selecting a subset of 0.5% of all these 3D patches, we obtained a relatively small training database with only 671 training samples. Fig. 11 gives the result of the APFA algorithm with this small training database.

Fig. 10. The image regions used for extracting the 3D training samples.

Fig. 11. Processing results of the proposed APFA with the training database containing 671 samples. The last image is the difference image of ROI I between the LdCT image and the APFA processed result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

It indicates that even with such a small training database of 671 patches, the APFA algorithm could achieve a relatively satisfactory result compared with the other traditional methods (as shown in Figs. 7(d1)-(f2)). In addition, it can be observed from Fig. 11 that in the restored LdCT image, no false prior information from the training samples was introduced, while no important details were lost with APFA.

b) Influence of the reconstruction kernel: To evaluate how the reconstruction kernel used for the training samples impacts the performance of the proposed algorithm, we used the same 432 slices of 10 full-dose scans to construct the training database, but all slices were reconstructed with the FBP "B31f" kernel, as shown in Figs. 12(a1)-(a3). The construction method is the same as that described above.

Fig. 12. Processing results with FBP "B31f" reconstructed training samples. (a1)-(a3) are the FdCT images (FBP "B31f" reconstruction from the full-dose sinogram). (b1)-(b3) are the processing results of the proposed APFA with FBP "B31f" reconstructed training samples. (b4) is the difference image of ROI I between the LdCT image and the APFA processed result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

The processing results and the difference image are shown in Figs. 12(b1)-(b4). We can observe that with the FBP "B31f" reconstructed training samples, our method produces a visual effect similar to the FBP "B31f" reconstructed full-dose reference for soft tissues. For lung tissues, our method appears better in details/textures preservation compared with the full-dose reference reconstructed directly by FBP "B31f".

c) Influence of the scanner model: Two patients' full-dose lung CT scans downloaded from the "Give a Scan" open access dataset [32] were used for this evaluation. The scanner models of the two scan series are a GE LightSpeed VCT CT scanner and a GE HiSpeed CT scanner, respectively. Please refer to Table I for detailed information on the acquisition parameters of the downloaded scans. The offline training database was constructed from 0.5% of the patches (58357 patches in total) randomly chosen from all patches extracted from the two CT volumes. The processing results of the APFA with the training database from the GE scanners are shown in Fig. 13. The results indicate that even with training samples acquired by different scanner models, our method still works well and produces a satisfying result.

Fig. 13. Processing results of the proposed APFA with the training samples from the GE scanners. The last image is the difference image of ROI I between the LdCT image and the APFA processed result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

C. Clinical Studies With the 120 kVp/14 mAs Ultra-Low-Dose CT Scan

We further evaluated the APFA algorithm using a 120 kVp/14 mAs ultra-low-dose CT scan from patient #13 (shown in Figs. 14(b1)-(b2)). The corresponding

120 kVp/40 mAs scan from the same patient was acquired after three months and used as a reference (shown in Figs. 14(a1)-(a2)). The ultra-low-dose scan was reconstructed by FBP "B60f" and FBP "B31f", respectively. For better restoration of the noisy images, patches of size 4 × 4 × 4 were used in this experiment, and offline training databases with FBP "B60f" reconstructed training samples and with FBP "B31f" reconstructed training samples from the 432 slices of the 10 full-dose scans were used for the APFA algorithm, respectively. The training database construction method is the same as that described in Subsection III.A.1).

The FBP reconstruction of the LdCT scan with the Siemens smoothing kernel "B31f", the LPG-PCA and the DL were used for comparison. For the LPG-PCA and the APFA algorithms, the parameters were both set as β = 1.2, K = 4 and L = 21. The other parameters were the same as those presented in Subsection III.A.2). Figures 14 and 15 show the processing results and associated difference images of one slice of the scans.

Fig. 14. Processing results of the real 120 kVp/14 mAs ultra-low-dose scan. (a1)-(a2): the reference image (FBP "B60f" reconstruction from the 120 kVp/40 mAs sinogram). (b1)-(b2): the LdCT image (FBP "B60f" reconstruction from the low-dose sinogram). (c1)-(c2): the proposed APFA processed result with FBP "B60f" reconstructed training samples. (d1)-(d2): the proposed APFA processed result with FBP "B31f" reconstructed training samples. (e1)-(e2): the FBP reconstruction with the Siemens commercial smoothing kernel "B31f" from the low-dose sinogram. (f1)-(f2): the LPG-PCA processed result. (g1)-(g2): the DL processed result. The first column ((a1)-(g1)) shows the images with a soft tissue display window of [−160, 240] HU. The second column ((a2)-(g2)) shows the images of the selected regions (outlined by blue dotted rectangles in (b1)) with a lung display window of [−1024, −24] HU.

Fig. 15. Zoomed-in views and the difference images. (a)-(g) are the zoomed images of the selected region with a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 14(b2)) with the display window of [−1024, −24] HU. The lung nodule is indicated by the blue arrow in Fig. 14(b2). (a): the reference image. (b): the LdCT image. (c): APFA with FBP "B60f" training samples. (d): APFA with FBP "B31f" training samples. (e): FBP "B31f" reconstruction. (f): LPG-PCA. (g): DL. (h)-(l) are the difference images of ROI I between the LdCT image and the images processed by the APFA with FBP "B60f" training samples, APFA with FBP "B31f" training samples, FBP "B31f", LPG-PCA and DL, respectively. The display window is [−100, 200] HU.

We can observe in Figs. 14(b1)-(b2) that in the ultra-low-dose scan, the noise and streak artifacts are extremely strong and can submerge some anatomical structures. In this situation, the LPG-PCA and DL methods are not only ineffective in suppressing the artifacts, but also lead to structure obscurity. From Figs. 14(c1)-(c2), we can observe that although the proposed APFA with training samples reconstructed by FBP "B60f" produces a restoration with more residual artifacts than that using FBP "B31f" filtering, it preserves the details/textures much better, especially for the ROI of the lung nodule (please see Figs. 15(c) and (h)). It can be seen in Figs. 14(d1)-(d2) that, compared with the FBP using "B31f" filtering, the proposed APFA with samples reconstructed by FBP "B31f" performs better in noise/artifact suppression and also achieves a better performance in details/textures preservation (please see Figs. 15(d) and (i)). In conclusion, the proposed APFA algorithm still achieves a better result than the other traditional methods for the ultra-low-dose lung CT scans.

IV. DISCUSSION AND CONCLUSION

This study developed a novel prior knowledge retrieval and representation paradigm to capture locally adaptive features from an FdCT training database, which can better preserve the detailed textures of low-dose lung CT images. The proposed paradigm contains four major components: the offline training database construction, the online patch-search, the online PCA retrieval of local prior features, and the online target patch decomposition with adaptive coefficient shrinkage. Through the offline training database with clean 3D patches extracted from existing full-dose lung scans of different patients, valuable prior knowledge of high quality can be obtained, avoiding repeated scans of the same patient. The online patch-search and the use of PCA on the training samples guarantee that the retrieved prior features reflect the principal information of the target patch adaptively. In this way, the local details/textures of the low-dose lung CT image can be better preserved. In addition, the local noise can be estimated from similar patches within a local search window, making the coefficient shrinkage more optimal.

In this study, we introduce a local-noise-related threshold to select training sample candidates from the training database. We also propose a criterion to judge whether the candidates can be used as training samples or not. In case the number of training samples found in the training database is insufficient, we instead use similar patches from the local search window of the LdCT image volume as the training samples. This strategy guarantees that no false prior structures are introduced from the FdCT images.

In this study, we assume that the noise in a small local search neighborhood is approximately additive, independent and identically distributed. For a small local region of a natural image, such a noise assumption is usually applicable. However, for CT images, the violation of the true noise statistics of LdCT images by this assumption needs further justification. The violation of the noise assumption may result in some noise residuals in the filtering result, as illustrated in Fig. 8(c) and Fig. 15(c). By using high-quality training samples, such as those reconstructed by FBP "B31f",

the noise residuals can be further removed, as illustrated in Fig. 12(b3) and Fig. 15(d).

Both digital simulations and clinical data from lung nodule needle biopsy studies were used to evaluate the proposed algorithm. The results indicate that compared with other methods such as FBP "B31f" filtering, LPG-PCA and DL, the proposed algorithm can achieve better performance in both noise/artifact suppression and details/textures preservation for low-dose lung CT images. We also evaluated the influence of the offline training database on the performance of the APFA algorithm. The results show that the proposed algorithm still works well with training samples from a different reconstruction kernel/scanner model or with a small training database.

In the proposed algorithm, most of the computational cost lies in the prior patch-search. A smaller training database is desirable for practical use. To tackle this problem, a randomly chosen subset of the entire set of training patches was used as the training database, based on the fact that there are many redundant structures among the patches in the entire set. The experimental study indicates that with a small subset consisting of only 671 patches, there is no significant degradation in image quality compared with using the entire set. We have implemented the proposed algorithm with the Matlab parallel pool on a PC workstation configured with an Intel Xeon CPU (20 cores, 2.5 GHz) and 64 GB of RAM. With a patch size of 3 × 3 × 3 and a training database of 318672 patches, the algorithm took approximately 82 minutes to process a 512 × 512 × 30 CT volume. Based on the quantitative results shown in Fig. 9, the computational load can be further reduced by using a smaller subset, such as 0.1% of the patches of the entire set, with relatively satisfying performance.

In this study, we focus on lung imaging where high-contrast lung nodules are of main interest. It is worth mentioning that a 20-30 mAs exposure is normally suggested as a clinically pertinent setting by previous studies [36], [37]. However, according to our study of ultra-low-dose imaging with 14 mAs, the image quality can be greatly improved with the assistance of prior features, indicating that further dose reduction beyond the modern clinical protocols is possible.

Though the proposed APFA algorithm has proved effective in many cases, including those listed above, it may fail for some extremely low-dose lung CT images. The limitation of this method has also been noticed. In some regions of FBP reconstructions, noise may result in structure-like signals which cannot be completely removed by our APFA. The residual error could be significant in low-contrast areas. If the dose keeps decreasing, resulting in higher noise, the performance of the APFA method will degrade gradually. Firstly, FBP reconstructions will suffer from photon starvation and exhibit more and more streaking artifacts, which might be mistakenly treated as structures in the similar-patch-searching step and thus preserved. Secondly, with high noise, low-contrast features will be overwhelmed by noise in the coefficient shrinkage step of Eq. (13) and thus be difficult to recover. In these situations, we should expect an overall poor image quality or unexpected differences from the full-dose reconstruction.

In future work we will incorporate the proposed adaptive prior feature retrieval paradigm into the statistical reconstruction framework. Prior feature retrieval takes advantage of the anatomical characteristics of objects. However, it is difficult to accurately take the statistical information in the projection data into account. The assumption of a locally independent, additive and identical noise distribution in the image domain is indirect and not accurate enough. Hence, we aim to combine this adaptive prior with raw data statistics so that a comprehensive optimization can be done for ultra-low-dose CT reconstruction. Further improvements in noise reduction and feature preservation are expected.

REFERENCES

[1] D. J. Brenner and E. J. Hall, "Cancer risks from CT scans: Now we have data, what next?" Radiology, vol. 265, no. 2, pp. 330-331, Nov. 2012.
[2] O. W. Linton and F. A. Mettler, "National conference on dose reduction in CT, with an emphasis on pediatric patients," Amer. J. Roentgenol., vol. 181, no. 2, pp. 321-329, Aug. 2003.
[3] D. J. Brenner and E. J. Hall, "Computed tomography—An increasing source of radiation exposure," New England J. Med., vol. 357, no. 22, pp. 2277-2284, Nov. 2007.
[4] J. Hsieh, "Adaptive streak artifact reduction in computed tomography resulting from excessive X-ray photon noise," Med. Phys., vol. 25, no. 11, pp. 2139-2147, Nov. 1998.
[5] I. A. Elbakri and J. A. Fessler, "Statistical image reconstruction for polyenergetic X-ray computed tomography," IEEE Trans. Med. Imag., vol. 21, no. 2, pp. 89-99, Feb. 2002.
[6] J. Wang, T. Li, and L. Xing, "Iterative image reconstruction for CBCT using edge-preserving prior," Med. Phys., vol. 36, no. 1, pp. 252-260, Jan. 2009.
[7] P. J. L. Riviere, J. Bian, and P. A. Vargas, "Penalized-likelihood sinogram restoration for computed tomography," IEEE Trans. Med. Imag., vol. 25, no. 8, pp. 1022-1036, Aug. 2006.
[8] J. Wang, T. Li, H. Lu, and Z. Liang, "Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography," IEEE Trans. Med. Imag., vol. 25, no. 10, pp. 1272-1283, Oct. 2006.
[9] Y. Zhang, J. Zhang, and H. Lu, "Statistical sinogram smoothing for low-dose CT with segmentation-based adaptive filtering," IEEE Trans. Nucl. Sci., vol. 57, no. 5, pp. 2587-2598, Oct. 2010.
[10] A. Schilham, B. van Ginneken, H. Gietema, and M. Prokop, "Local noise weighted filtering for emphysema scoring of low-dose CT images," IEEE Trans. Med. Imag., vol. 25, no. 4, pp. 451-463, Apr. 2006.
[11] A. Borsdorf, R. Raupach, T. Flohr, and J. Hornegger, "Wavelet based noise reduction in CT-images using correlation analysis," IEEE Trans. Med. Imag., vol. 27, no. 12, pp. 1685-1703, Dec. 2008.
[12] Y. Chen et al., "Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means," Phys. Med. Biol., vol. 57, no. 9, pp. 2667-2688, Apr. 2012.
[13] H. Yu, S. Zhao, E. A. Hoffman, and G. Wang, "Ultra-low dose lung CT perfusion regularized by a previous scan," Acad. Radiol., vol. 16, no. 3, pp. 363-373, Mar. 2009.
[14] P. Thériault-Lauzier, J. Tang, and G.-H. Chen, "Prior image constrained compressed sensing: Implementation and performance evaluation," Med. Phys., vol. 39, no. 1, pp. 66-80, Jan. 2012.
[15] H. Dang, A. S. Wang, M. S. Sussman, J. H. Siewerdsen, and J. W. Stayman, "dPIRPLE: A joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images," Phys. Med. Biol., vol. 59, pp. 4799-4826, Aug. 2014.
[16] J. Ma et al., "Low-dose computed tomography image restoration using previous normal-dose scan," Med. Phys., vol. 38, pp. 5713-5731, Oct. 2011.
[17] H. Zhang et al., "Iterative reconstruction for X-ray computed tomography using prior-image induced nonlocal regularization," IEEE Trans. Biomed. Eng., vol. 61, no. 9, pp. 2367-2378, Sep. 2014.
[18] W. Xu, S. Ha, and K. Mueller, "Database-assisted low-dose CT image restoration," Med. Phys., vol. 40, no. 3, p. 031109, Mar. 2013.
[19] L. Ouyang, T. Solberg, and J. Wang, "Noise reduction in low-dose cone beam CT by incorporating prior volumetric image information," Med. Phys., vol. 39, pp. 2569-2577, May 2012.

[20] H. Zhang et al., "Deriving adaptive MRF coefficients from previous normal-dose CT scan for low-dose image reconstruction via penalized weighted least-squares minimization," Med. Phys., vol. 41, no. 4, pp. 041916-1-041916-15, Apr. 2014.
[21] H. Lee, L. Xing, R. Davidi, R. Li, J. Qian, and R. Lee, "Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints," Phys. Med. Biol., vol. 57, no. 8, pp. 2287-2307, Apr. 2012.
[22] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, "Low-dose X-ray CT reconstruction via dictionary learning," IEEE Trans. Med. Imag., vol. 31, no. 9, pp. 1682-1697, Sep. 2012.
[23] Y. Chen et al., "Artifact suppressed dictionary learning for low-dose CT image processing," IEEE Trans. Med. Imag., vol. 33, no. 12, pp. 2271-2292, Dec. 2014.
[24] Y. Chen et al., "Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing," Phys. Med. Biol., vol. 58, pp. 5803-5820, Aug. 2013.
[25] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736-3745, Dec. 2006.
[26] D. Wu, L. Li, and L. Zhang, "Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database," Phys. Med. Biol., vol. 58, pp. 4047-4070, May 2013.
[27] H. Zhang et al., "Extracting information from previous full-dose CT scan for knowledge-based Bayesian reconstruction of current low-dose CT images," IEEE Trans. Med. Imag., vol. 35, no. 3, pp. 860-870, Mar. 2016.
[28] L. Zhang, W. Dong, D. Zhang, and G. Shi, "Two-stage image denoising by principal component analysis with local pixel grouping," Pattern Recognit., vol. 43, no. 4, pp. 1531-1549, Apr. 2010.
[29] I. T. Jolliffe, Principal Component Analysis, 2nd ed. New York, NY, USA: Springer-Verlag, 2002, pp. 29-59.
[30] D. D. Muresan and T. W. Parks, "Adaptive principal components and image denoising," in Proc. ICIP, 2003, pp. I101-I104.
[31] J. V. Manjón, P. Coupé, and A. Buades, "MRI noise estimation and denoising using non-local PCA," Med. Image Anal., vol. 22, no. 1, pp. 35-47, May 2015.
[32] Lung Cancer Alliance, Washington, DC, USA. Lung Cancer Datasets, in Give a Scan. Accessed: May 20, 2017. [Online]. Available: http://www.giveascan.org
[33] J. Ma et al., "Variance analysis of X-ray CT sinograms in the presence of electronic noise background," Med. Phys., vol. 39, pp. 4051-4065, Jul. 2012.
[34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
[35] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610-621, Nov. 1973.
[36] H. Rusinek et al., "Pulmonary nodule detection: Low-dose versus conventional CT," Radiology, vol. 209, no. 1, pp. 243-249, Oct. 1998.
[37] S. Itoh et al., "Lung cancer screening: Minimum tube current required for helical CT," Radiology, vol. 215, no. 1, pp. 175-183, Apr. 2000.
