Abstract — The valuable structure features in full-dose computed tomography (FdCT) scans can be exploited as prior knowledge for low-dose CT (LdCT) imaging. However, without the capability to adaptively represent the local characteristics of the structures of interest in the LdCT image, the details/textures of the LdCT image may be poorly preserved. This paper explores a novel prior knowledge retrieval and representation paradigm, called the adaptive prior features assisted restoration algorithm, which aims at better restoration of low-dose lung CT images by adaptively capturing local features from FdCT scans. The innovation lies in the construction of an offline training database and an online patch-search scheme integrated with principal component analysis (PCA). Specifically, the offline training database is composed of 3-D patch samples extracted from existing full-dose lung scans. For the online patch-search, 3-D patches whose structure is similar to the noisy target patch are first selected from the database as training samples. Then, PCA is applied to the training samples to adaptively retrieve their local prior principal features. By employing the principal features to decompose the noisy target patch and using an adaptive coefficient shrinkage technique for the inverse transformation, the noise of the target patch can be efficiently removed and the detailed texture well preserved. The effectiveness of the proposed algorithm was validated with CT scans of patients with lung cancer. The results show that it achieves a noticeable gain over several state-of-the-art methods in terms of noise suppression and details/textures preservation.

Index Terms — Low-dose CT, lung CT imaging, restoration, prior features, principal component analysis (PCA).

Manuscript received August 7, 2017; revised September 20, 2017; accepted September 21, 2017. Date of publication September 27, 2017; date of current version November 29, 2017. This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFC0107400, in part by the China Postdoctoral Science Foundation under Grant 2017M613348, in part by the National Natural Science Foundation of China under Grant 61572283, Grant 81230035, and Grant 11275104, and in part by the Award Foundation Project of Excellent Young Scientists in Shandong Province under Grant BS2014DX005. (Corresponding authors: Hongbing Lu; Yuxiang Xing.)

Y. Zhang is with the School of Biomedical Engineering, Fourth Military Medical University, Xi'an 710032, China, and also with the School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China (e-mail: yuankezhang@163.com).

J. Rong and H. Lu are with the School of Biomedical Engineering, Fourth Military Medical University, Xi'an 710032, China (e-mail: junyanrong@126.com; luhb@fmmu.edu.cn).

Y. Xing is with the Department of Engineering Physics, Tsinghua University, Beijing 100084, China (e-mail: xingyx@mail.tsinghua.edu.cn).

J. Meng is with the School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China (e-mail: qfmj@163.com).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TMI.2017.2757035

0278-0062 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

I. INTRODUCTION

Nowadays, the risk of radiation-induced genetic, cancerous, and other diseases is of significant concern to patients and operators [1]–[3]. Minimizing X-ray exposure to patients has therefore been one of the major efforts in the CT field. A simple and cost-effective way to reduce radiation exposure is to lower the X-ray tube current and/or exposure time (mAs) during data acquisition. However, low-mAs acquisition protocols can be highly detrimental to image quality, resulting in images with serious noise and streak artifacts [4]. To address this problem, various noise-reduction strategies have been proposed, including statistics-based iterative reconstruction (SIR) [5], [6], sinogram-domain denoising [7]–[9], and image-domain denoising [10]–[12].

In CT imaging, the reconstructed images from the same patient, or even from different patients, generally share similar structures and corresponding texture characteristics for a specific tissue type. The rich structure information in high-quality full-dose CT (FdCT) scans can therefore be exploited as prior knowledge for low-dose CT (LdCT) imaging, which has recently become a noticeable research interest. Up to now, many prior-knowledge-assisted LdCT restoration algorithms have been developed [13]–[27]. These algorithms can be categorized into two groups in terms of the prior knowledge retrieval and representation paradigm.

The first group shares the common idea of first registering a previous FdCT scan of the same patient with the LdCT scan, and then incorporating the intensity information of local pixels in the FdCT scan as prior knowledge to regularize the corresponding pixels in the LdCT scan [13]–[21]. While these methods have been successful in many cases, such a previous FdCT scan of the same patient may not always be available. Moreover, since these approaches extract pixel intensity information directly from the FdCT scan as prior knowledge, they are usually sensitive to the accuracy of the image registration.

The other group is based on feature learning. It usually captures the global features of FdCT training samples through an information learning process and then incorporates the learned features as prior knowledge to assist the LdCT imaging [22]–[27]. Since the features are learned from training samples, the FdCT scans can be obtained
ZHANG et al.: LOW-DOSE LUNG CT IMAGE RESTORATION USING ADAPTIVE PRIOR FEATURES 2511
Fig. 1. 2D illustration of the flowchart of the proposed adaptive prior features assisted (APFA) algorithm. (a) 2D illustration of the patch window and search window used in this study. (b) Flowchart of the APFA process. In this example, we set K = 7 and L = 51.
from different patients, and they do not need to be pre-registered with the LdCT scans. For instance, Xu et al. [22] and Chen et al. [23], [24] adopted dictionary learning and sparse representation techniques [25] to train a redundant dictionary from image patch samples, and then incorporated the trained dictionary into an objective function for image restoration. Wu et al. [26] proposed a sophisticated feature-constrained compressed sensing reconstruction algorithm (FCCS) for the incomplete-data problem. The FCCS utilized a robust principal component analysis (PCA) to learn the main features of several FdCT images that were similar to the target image, and then combined the total variation constraint and the feature constraint into an objective function for reconstruction. More recently, Zhang et al. [27] adapted the Markov random field (MRF) model to capture tissue textures from a previous FdCT scan and incorporated the texture as prior knowledge for Bayesian reconstruction of the LdCT image.

In the present feature-learning-based LdCT image restoration paradigm [22]–[27], the features are retrieved from all the FdCT image samples without training sample selection and are integrated into a global optimization scheme. It would thus be difficult to represent the local characteristics of a target image adaptively. For a specific region in an LdCT image, the local structure and texture pattern may be quite different from those of other regions. Therefore, locally adaptive prior features would be of great help for the preservation of details/textures in the LdCT image.

In lung imaging tasks such as lung nodule detection, the details/textures of CT images are highly desirable for clinical diagnosis. This study aims to explore a novel prior knowledge retrieval and representation paradigm that captures local features adaptively from FdCT samples, for the restoration of low-dose lung CT images with detailed textures. The innovation is the construction of an offline training database from FdCT scans and the use of an online patch-search scheme integrated with PCA. Specifically, the offline training database is composed of 3D patches extracted from full-dose lung CT scans of different patients. Given a 3D target patch to be restored in the LdCT image, the online patch-search first selects patches whose structure is similar to the target patch from the database as training samples, and then PCA is employed on the training patches to retrieve the local principal features adaptively. By employing the learned features to decompose the target patch and using an adaptive coefficient shrinkage technique for the inverse transformation, the noise can be efficiently removed and the detailed textures well preserved.

The rest of this paper is organized as follows. Section II presents the proposed adaptive prior features assisted (APFA) restoration scheme in detail. Section III evaluates the proposed scheme with both numerical simulations and clinical studies. Finally, a discussion and conclusion are given in Section IV.

II. METHODS

The proposed APFA scheme is illustrated by the flowchart shown in Fig. 1. The flowchart contains four major steps: offline training database construction, online patch-search, online retrieval of local principal priors by PCA, and online target patch decomposition and adaptive coefficient shrinkage.
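The first of these steps, offline training database construction, amounts to dense patch extraction followed by random subsampling. Below is a minimal NumPy sketch; the function names, array layout, and subsampling interface are our own illustration under the paper's description, not the authors' code:

```python
import numpy as np

def extract_patches(volume, K):
    """Slide a K x K x K window with stride 1 (a sliding distance of one
    voxel) over a CT volume and return the patches as columns of length
    M = K**3, one column per patch."""
    Z, Y, X = volume.shape
    patches = [
        volume[z:z + K, y:y + K, x:x + K].ravel()
        for z in range(Z - K + 1)
        for y in range(Y - K + 1)
        for x in range(X - K + 1)
    ]
    return np.stack(patches, axis=1)            # shape (M, num_patches)

def build_training_database(volumes, K=3, keep_fraction=0.005, seed=0):
    """Stack the patches of several full-dose volumes and randomly keep a
    small fraction (the paper reports that about 0.5% suffices)."""
    rng = np.random.default_rng(seed)
    db = np.concatenate([extract_patches(v, K) for v in volumes], axis=1)
    n_keep = max(1, int(keep_fraction * db.shape[1]))
    keep = rng.choice(db.shape[1], size=n_keep, replace=False)
    return db[:, keep]
```

The random subset keeps the online search tractable while, per the paper's observation, exploiting the heavy structural redundancy among neighboring patches.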
2512 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 36, NO. 12, DECEMBER 2017
A. Offline Training Database Construction

In the APFA scheme, a 3D patch, modeled as a voxel together with its K × K × K cubic nearest neighbors, is used to reflect the local spatial structure of that voxel. To construct the offline training database, several full-dose lung scans were first gathered from different patients. For each patient's FdCT scans, 3D patches were extracted automatically with a sliding distance of one voxel over the 3D volume. All the 3D patches from the different patients were then stacked into a reference training database.

Theoretically, a large-scale training database consisting of millions of patches could better reflect all the structures possibly present in lung CT images. However, a larger scale would greatly increase the computational burden of the online search for each target patch. Considering the structural redundancy of the patches in the entire reference set, we instead use a randomly selected subset of it as the training database. In practice, a small subset consisting of 0.5% of the patches of the entire set, which was constructed from 432 full-dose lung CT slices of 10 patients, achieved almost the same restoration as the entire set when the other parameters were kept the same. The influence of the database size on the proposed algorithm will be further evaluated in Subsection III.B.4).a), where the database size is reduced even further.

B. Online Patch Search

1) Search Similar Patches From the Offline Training Database: By representing a 3D patch as a column vector, the i-th patch in the training database is denoted by x_i^Ref = [x_{i,1}^Ref, x_{i,2}^Ref, ..., x_{i,M}^Ref]^T, where M = K^3, K is the patch size, and i runs over the patches in the training database. The target patch to be restored in the LdCT image (shown in Fig. 1(a) with a 2D illustration) is modeled as a random vector variable, denoted by x = [x_1, x_2, ..., x_M]^T. Here, the noise in a small neighborhood of an LdCT image is assumed to be approximately additive, independent, and identically distributed. Although this assumption may violate the true noise statistics of LdCT images, the experimental results show that the noise residual caused by this violation is nearly negligible, especially for lung tissues (see the detailed experimental information in Subsection III.B). Under this assumption, the noisy version of the target patch can be expressed as

x^Ld = x + n   (1)

where x^Ld = [x_1^Ld, x_2^Ld, ..., x_M^Ld]^T is the noisy patch variable, and n = [n_1, n_2, ..., n_M]^T, with n_k being the noise of a voxel in the patch, with zero mean and constant variance σ².

On the basis of the noise assumption, the Euclidean distance d_i between the noisy target patch x^Ld and a patch x_i^Ref in the training database can be expanded as

d_i = (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k^Ld)² = (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k − n_k)² ≈ (1/M) Σ_{k=1}^{M} (x_{i,k}^Ref − x_k)² + σ².   (2)

To search for patches similar to the target patch in the training database, we select x_i^Ref as a training sample candidate for x^Ld if

d_i ≤ T + σ²   (3)

where T is a preset threshold. The method for estimating the local noise variance σ² will be given in Subsection II.D.3).

With the selected training sample candidates, two situations have to be considered. One is that the number of candidates, N̄, is too small to guarantee a robust estimation of the prior local features, which may introduce false prior structures into the restored LdCT image. The other is that N̄ is too large, which greatly increases the computational burden. Through extensive experiments, we found that for a patch vector of size K³ × 1 (K is the window size of a patch), selecting at least the M (M = K³) most similar candidates as training samples provides a relatively robust estimation of the prior local features. Furthermore, the restoration performance is hardly improved with more than 3M candidates. Based on these observations, the following criterion is proposed for selecting the training samples for x^Ld:

Criterion 1:
(1) If M ≤ N̄ ≤ 3M, all the candidates are used as training samples;
(2) If N̄ < M, there are not enough similar patches in the offline training database. Instead, Ñ noisy similar patches extracted from the local search window of the LdCT image are used as training samples (see the next section for a detailed description);
(3) If N̄ ≥ 3M, the 3M most similar candidates are selected as training samples.

Suppose that finally N training samples are selected; we denote them by the two-dimensional matrix

X^Ref = (x_1^Ref, x_2^Ref, ..., x_N^Ref),   (4)

where each training sample is a column vector of length M, x_i^Ref = [x_{i,1}^Ref, x_{i,2}^Ref, ..., x_{i,M}^Ref]^T, i = 1, 2, ..., N.

2) Search Similar 3D Patches From a Local Search Window of the LdCT Image Volume: For each 3D noisy target patch x^Ld, we select its similar 3D patches within a local search window, defined as an L × L × L (L > K) cubic neighborhood surrounding the target patch to be denoised, as shown in Fig. 1(a) (with a 2D illustration).

There are two purposes for searching for similar patches in the LdCT image volume. One is to use them for estimating the variance of the noise present on the target patch; details of the local noise variance estimation method are given in Subsection II.D.3). The other is to use them as alternative
training samples when not enough similar patches are found in the offline training database; details of this situation are given in Subsection II.E.

To select patches similar to x^Ld, the Euclidean distance between x^Ld and each patch in the 3D local search window is computed, and the Ñ most similar patches are then selected. For the number of similar patches Ñ, some references have suggested Ñ = (8 ∼ 10) × K² as a good choice for 2D patch selection [28]. In our practice, setting Ñ = 3K³ achieves a relatively robust estimation of the local statistics in the 3D case.

Projecting the noisy patch onto the retrieved principal directions gives

y^Ld = y + n_y

where y = [y_1, y_2, ..., y_M]^T and n_y = [n_{y,1}, n_{y,2}, ..., n_{y,M}]^T are the principal coefficient vectors of the signal x and the noise n, respectively.

One important property of the PCA subspace is that most of the signal power concentrates in the first few components, while the noise power is distributed more evenly. Based on this property, the linear minimum mean-square-error (LMMSE) estimator can be used as a coefficient shrinkage technique to suppress the noise in y^Ld, as done by Zhang et al. [28] and Muresan et al. [30]. The LMMSE estimator for the k-th component of y is

ŷ_k = w_k · y_k^Ld,  with  w_k = E[y_k²]/(E[y_k²] + σ²).   (11)

3) Estimation of the Local Noise Variance: Using the Ñ similar patches found in the local search window, the local noise variance is estimated as

T_1 = median({v_k, k = 1, 2, ..., M}),
V = {v_j : v_j ∈ {v_k, k = 1, 2, ..., M}, v_j < T_1},   (15)
σ̂² = β · median(V),

where β is a scalar and the operator median[·] extracts the median of a data set.

4) Inverse Transform to the Image Domain: After the adaptive shrinkage of all the coefficients of the target patch x in the PCA subspace, we have ŷ = [ŷ_1, ŷ_2, ..., ŷ_M]^T. By transforming ŷ back to the image domain, the restored target patch is obtained as

x̂ = P ŷ + μ^Ref   (16)

where P is the matrix whose columns are the retrieved principal directions and μ^Ref is the mean of the training samples.

Applying the above steps (Subsections II.B–D) to each 3D target patch, all the patches in the image volume can be estimated. For each voxel, there are then multiple estimates from the neighboring overlapping patches; the final intensity value of each voxel is computed by averaging its estimates from all the patches that overlap it, as done in [28] and [31].

E. An Alternative if No Similar Patch Is Found in the Training Database

As indicated by the criterion given in Subsection II.B.1), it is essential to detect whether there are enough similar patches in the training database, so as to avoid introducing false prior structures into the restoration process.

When not enough similar reference patches are found in the database, we instead use the noisy similar patches extracted from the local search window, X^Ld = (x_1^Ld, x_2^Ld, ..., x_Ñ^Ld), as the training samples for PCA. The rest of the denoising steps are essentially the same as those described in Subsections II.C–D, except for the estimation of E[y_k²] in Eq. (11). Since the principal directions are in this case trained from noisy patch samples, the eigenvalue Λ(k, k), or λ_k, represents the variability of both the signal and noise contributions to the corresponding component. Instead of using λ_k to approximate E[y_k²] in Eq. (11), we estimate E[y_k²] by

E[y_k²] = max(0, Λ(k, k) − σ²) = max(0, λ_k − σ²).   (17)

Thus, the shrinkage coefficient w_k is computed as

w_k = max(0, λ_k − σ²)/λ_k.   (18)

The patch size K involves a trade-off between noise removal and details/textures preservation. Generally, a larger patch size results in well-smoothed restorations with the details/textures obliterated, while a smaller patch size may preserve the details/textures better but leave some noise. For better details/textures preservation in lung CT images, a small patch size is therefore preferred. We experimentally studied the proposed APFA algorithm with various configurations of the patch size K and the local search window size L, and found that the setting K = 3 and L = 15 leads to satisfactory results in both details/textures preservation and noise reduction for images with a relatively low noise level (such as 120 kVp/20 mAs lung images), while the setting K = 4 and L = 21 works well for images with a relatively high noise level (such as 120 kVp/14 mAs lung images).

The threshold T determines whether a 3D patch in the offline training database can be selected as a training sample candidate. In Eq. (3), since the noise level of the noisy target patch is accounted for by σ², T essentially depends on the intensity range of the specific image. For CT images, since the voxel intensity is expressed in Hounsfield units (HU), the intensity range of lung tissues is quite consistent, regardless of the scanning condition under which they were acquired. Moreover, since the proposed APFA employs the learned principal features of the training samples as prior knowledge, instead of using the voxel intensities directly, the restoration process is not very sensitive to the threshold T. In this study, through experiments using training databases of different scales, consisting of training patches acquired with different kernels/scanner models, we found that T = 4 × 10³ achieves satisfactory restoration results for lung CT images.

III. EXPERIMENTAL RESULTS

Full-dose and low-dose CT scans acquired in clinical CT-guided lung nodule needle biopsy studies were used to evaluate and validate the proposed APFA algorithm. Thirteen patients (denoted patient #1 to patient #13) were recruited with informed consent after approval by the Institutional Review Board. For patients #1 to #12, the full-dose and low-dose CT scans were acquired on a Siemens Sensation 16 CT scanner at an X-ray tube voltage of 120 kVp and tube currents of 100 mAs and 20 mAs, respectively. For patient #13, the low-dose CT scan was acquired at 120 kVp and 14 mAs, for the purpose of evaluating the proposed algorithm in ultra-low-dose CT imaging, and corresponding 120 kVp/40 mAs scans from the same patient scanned three
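Looking back at Section II before the experiments, the online restoration loop — candidate selection via Eqs. (2)–(3) and Criterion 1, followed by PCA decomposition, the shrinkage of Eq. (18), and the inverse transform of Eq. (16) — can be sketched compactly. This is our own NumPy illustration under the stated i.i.d. noise assumption, using the noisy-training-sample shrinkage of Eq. (18) throughout for simplicity; it is not the authors' implementation, and the fallback of Subsection II.E is only signalled, not implemented:

```python
import numpy as np

def select_training_samples(x_ld, db, sigma2, T=4e3):
    """Candidate test d_i <= T + sigma^2 (Eqs. (2)-(3)) plus Criterion 1.
    `db` holds reference patches as columns; returns the selected columns,
    or None when fewer than M candidates survive (Subsection II.E case)."""
    M = x_ld.size
    d = np.mean((db - x_ld[:, None]) ** 2, axis=0)   # d_i of Eq. (2)
    idx = np.flatnonzero(d <= T + sigma2)            # Eq. (3)
    if idx.size < M:                                 # case (2): fall back
        return None
    if idx.size > 3 * M:                             # case (3): keep 3M closest
        idx = idx[np.argsort(d[idx])[:3 * M]]
    return db[:, idx]                                # case (1) or trimmed (3)

def pca_shrink_restore(x_ld, samples, sigma2):
    """PCA decomposition of the noisy patch, coefficient shrinkage with
    w_k = max(0, lambda_k - sigma^2) / lambda_k (Eq. (18)), and the inverse
    transform x_hat = P y_hat + mu^Ref (Eq. (16))."""
    mu = samples.mean(axis=1)                        # mu^Ref
    lam, P = np.linalg.eigh(np.cov(samples, bias=True))
    y_ld = P.T @ (x_ld - mu)                         # patch in the PCA subspace
    w = np.maximum(0.0, lam - sigma2) / np.maximum(lam, 1e-12)
    return P @ (w * y_ld) + mu                       # restored patch
```

For a full volume, each voxel's final value would then be the average of its estimates from all overlapping restored patches, as described in Subsection II.D.4).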
TABLE I
SCANNING INFORMATION OF THE DOWNLOADED SCANS
A. Numerical Simulation Studies

1) Data Acquisition: The full-dose scan of a patient from the validation group (patient #11) was used as a numerical reference phantom (shown in Figs. 2(a1)–(a2)). In all simulations, a fan-beam geometry was used with the same imaging parameters as presented above. The noise-free sinogram was obtained from the full-dose phantom. To simulate noisy sinograms, we added noise to the noise-free sinograms using the simulation method described in [33] (N_{0,i} = 5 × 10⁴ and σ_e² = 10, approximately corresponding to the noise level acquired with a 20 mAs tube current). The traditional FBP with the Ramp convolution kernel was employed to reconstruct the images.

Ten full-dose volume scans (432 slices in total, reconstructed by FBP "B60f") of patients #1 to #10 in the training group were used to construct the reference set. A small subset, with 0.5% of the patches randomly selected from the entire set, formed the offline training database used in the simulation studies (318 672 3D patches in total).

Fig. 2. Processing results of the simulated phantom. (a1)–(a2): the FdCT image (FBP reconstruction with the Ramp filter from the noise-free sinogram). (b1)–(b2): the LdCT image (FBP reconstruction with the Ramp filter from the noisy sinogram). (c1)–(c2): the proposed APFA result. (d1)–(d2): the LPG-PCA result. (e1)–(e2): the DL result. The first column ((a1)–(e1)) shows the images with the soft tissue display window of [−160, 240] HU. The second column ((a2)–(e2)) shows the selected regions (outlined by blue dotted rectangles in (a1)) with a lung display window of [−1024, −24] HU.

2) Visual Evaluation: For comparison, the LPG-PCA and DL algorithms were implemented. In the LPG-PCA, the local noise variance was estimated using the method presented in Subsection II.D.3) with K = 3, L = 15, and β = 0.8. In the DL, the global dictionary was trained using patches extracted from the reference images as described in [24], with a patch size of 8 × 8 and 256 atoms, as recommended in [24]. In the proposed APFA algorithm, the parameters were set to K = 3, L = 15, β = 0.8, and T = 4 × 10³. All CT images were displayed in two windows, i.e., the lung window of [−1024, −24] HU and the soft tissue window of [−160, 240] HU, respectively.

Fig. 3. Zoomed-in views and the difference images. The first row shows the zoomed images of a selected region with a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 2(a2)) with the display window of [−1024, −24] HU. The lung nodule is outlined by the red dotted rectangle in Fig. 2(b2). The second row shows the difference images of ROI I between the FBP result and the proposed APFA, LPG-PCA, and DL results, respectively. The display window is [−100, 200] HU.
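The noise injection described in 1) above is commonly implemented as Poisson photon statistics plus Gaussian electronic noise on the transmission counts, followed by a re-log. The sketch below states this generic model from the low-dose simulation literature; the exact formulation and units of [33] are not reproduced here, so treat the parameter names as illustrative:

```python
import numpy as np

def simulate_low_dose_sinogram(p_clean, N0=5e4, sigma_e2=10.0, seed=0):
    """Corrupt a noise-free sinogram `p_clean` (line integrals): transmitted
    counts are Poisson with mean N0*exp(-p), electronic noise is zero-mean
    Gaussian with variance sigma_e2, and the noisy sinogram is -log(I/N0)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(N0 * np.exp(-p_clean)).astype(float)
    counts += rng.normal(0.0, np.sqrt(sigma_e2), size=p_clean.shape)
    counts = np.maximum(counts, 1.0)     # guard the logarithm
    return -np.log(counts / N0)
```

Lowering N0 in this model mimics lowering the tube current (mAs), which is how the 20 mAs noise level is approximated in the simulations.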
Figures 2 and 3 show the processing results and the associated difference images for one slice of the scans. It can be observed that with LPG-PCA and DL the noise was suppressed, but severe streak artifacts remain visible in both the soft tissue regions and the lung regions. We can observe in Figs. 2(c1)–(c2) and Fig. 3 that the proposed APFA algorithm performs much better in both noise suppression and details/textures preservation than LPG-PCA and DL, and produces a visual effect similar to that of the full-dose reference scan, especially for the ROI of the lung nodule (indicated by the blue arrow in Fig. 2(a2)).

To further compare the performance of the LPG-PCA, DL, and APFA algorithms, intensity profiles through the lung nodule (along the red line in Fig. 2(a2)) are drawn in Fig. 4. We can see that the profile of the APFA result matches that of the ground truth best, which indicates that the proposed APFA performs better in preserving the details/textures of the lung nodule.

Fig. 4. Intensity profiles along the vertical red line labeled in Fig. 2(a2). (a)–(d) Comparison between the ground truth and the FBP, the proposed APFA, the LPG-PCA, and the DL results, respectively.

3) Quantitative Evaluation: The performance of the proposed algorithm was quantitatively evaluated using the root mean square error (RMSE) and the structural similarity index (SSIM) of a region containing a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 2(a2)). The RMSE was employed to reflect the difference between the processed result and the ground truth and thus characterize the restoration accuracy. The SSIM [34] was employed to specifically gauge the preservation of perceptually salient information. These metrics are defined as

RMSE = sqrt( (1/I) Σ_{i=1}^{I} (x_i − r_i)² )   (19)

SSIM = (2 x̄ r̄ + c₁)(2 σ_{xr} + c₂) / ((x̄² + r̄² + c₁)(σ_x² + σ_r² + c₂))   (20)

where r and x denote the same ROI of the reference phantom and of the processed result to be evaluated, respectively, r̄ and x̄ are the mean intensities of each ROI, σ_r and σ_x are the standard deviations, and σ_{xr} is the covariance between the phantom and the processed image. I represents the number of pixels in a ROI. c₁ and c₂ are small constants with c₁ = (K₁ L_s)² and c₂ = (K₂ L_s)², where L_s is the dynamic range of the image (set to 4095 for the range from −1024 HU to 3071 HU in our study), and K₁ and K₂ were set to 0.01 and 0.03, respectively, following [34].

The corresponding results for the different methods are shown in Fig. 5. It is obvious that the proposed APFA exhibits the best result, with the lowest RMSE and the highest SSIM.

4) Normal Vector Flow Measures: The texture similarity of the lung nodule region (indicated by the red dotted box in Fig. 2(b2)) between the different methods can be quantitatively evaluated by the normal vector flow (NVF) [27].
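Eqs. (19)–(20) are straightforward to compute; a short sketch follows. Note that Eq. (20) is the single-window (whole-ROI) form of SSIM rather than the usual sliding-window variant, with the constants quoted above:

```python
import numpy as np

def rmse(x, r):
    """Root mean square error over a ROI, Eq. (19)."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    return np.sqrt(np.mean((x - r) ** 2))

def ssim(x, r, L_s=4095.0, K1=0.01, K2=0.03):
    """Whole-ROI structural similarity, Eq. (20)."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    c1, c2 = (K1 * L_s) ** 2, (K2 * L_s) ** 2
    mx, mr = x.mean(), r.mean()
    cov = np.mean((x - mx) * (r - mr))               # sigma_xr
    return ((2 * mx * mr + c1) * (2 * cov + c2)
            / ((mx ** 2 + mr ** 2 + c1) * (x.var() + r.var() + c2)))
```

With identical inputs, RMSE is 0 and SSIM is 1; a processed ROI closer to the reference gives a lower RMSE and a higher SSIM.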
Fig. 6. NVF images of the lung nodule region (indicated by the red dotted rectangular box in Fig. 2(b2)). (a)–(e) correspond to the FdCT image, the LdCT image, and the images processed by the proposed APFA, the LPG-PCA, and the DL, respectively.
Fig. 8. Zoomed-in views and the difference images. (a)–(f) are the zoomed images of a selected region with a lung nodule (ROI I, outlined by the orange dotted rectangle in Fig. 7(b2)) with the display window of [−1024, −24] HU. The lung nodule is indicated by the blue arrow in Fig. 7(b2). (g)–(j) are the difference images of ROI I between the LdCT image and the images processed by the APFA, FBP "B31f", LPG-PCA, and DL, respectively. The display window is [−100, 200] HU.
TABLE II
NORMALIZED HARALICK TEXTURE DISTANCE BETWEEN THE REFERENCE IMAGE AND THE PROCESSED RESULTS IN FIG. 2
the validation group (patient #12) were chosen for the evaluations (shown in Figs. 7(b1)–(b2)). The corresponding slices of a 120 kVp/100 mAs full-dose scan from the same patient were used as the gold standard, as shown in Figs. 7(a1)–(a2). The FBP with the Siemens kernel "B60f" was employed to reconstruct both the full-dose and low-dose scans.

2) Comparison With Other Methods: The LPG-PCA and DL algorithms were used for comparison. In addition, the FBP reconstruction of the LdCT scan with the Siemens kernel "B31f" was employed as a vendor-based noise reduction method for comparison. For the proposed APFA, the same offline training database as described in Subsection III.A.1) was used. The parameters were the same as those presented in Subsection III.A.2).

Fig. 9. Comparison of the performance with different scales of the training database, measured by the SSIM metric.

Fig. 10. The image regions used for extracting the 3D training samples.

TABLE III
RADIOLOGISTS' SCORING OF THE IMAGE QUALITY
Figures 7 and 8 show the processing results and the associated difference images for one slice of the scans. We can observe in Figs. 7 and 8 that, compared with the LPG-PCA and DL algorithms, the proposed APFA algorithm performs much better in both noise suppression and details/textures preservation. It can also be seen in Figs. 7 and 8 that the FBP with "B31f" filtering achieves a smoother reconstruction result, but the details/textures are seriously blurred, especially in the small lung nodule regions. Again, our method produces the visual effect most similar to the full-dose reference scan.

3) Evaluation by Radiologist Experts: 32 slices of the 120 kVp/100 mAs full-dose scans from patient #12 and the corresponding slices of the 120 kVp/20 mAs low-dose scans (after being reconstructed/processed by FBP "B60f", FBP "B31f", LPG-PCA, DL, and the proposed APFA, respectively) were scored by three radiologists independently, based on their overall impression in terms of noise reduction and details/textures preservation. For a fair comparison, we first mixed all the full-dose images and the reconstructed/processed low-dose images into one image set. Images were then randomly selected from the image set and displayed with the lung window of [−1024, −24] HU for each radiologist's scoring, one by one. The scores ranged from 0 (worst) to 10 (best). Finally, the average score of each radiologist was computed for each image subset. The resulting scores are reported in Table III. They indicate that the proposed APFA greatly outperformed the FBP "B31f", LPG-PCA, and DL methods. More importantly, they indicate that the APFA can achieve almost the same image quality as the full-dose lung scan from the radiologists' point of view.

Fig. 11. Processing results of the proposed APFA with the training database containing 671 samples. The last image is the difference image of ROI I between the LdCT image and the APFA result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

Fig. 12. Processing results with FBP "B31f" reconstructed training samples. (a1)–(a3) are the FdCT image (FBP "B31f" reconstruction from the full-dose sinogram). (b1)–(b3) are the results of the proposed APFA with FBP "B31f" reconstructed training samples. (b4) is the difference image of ROI I between the LdCT image and the APFA result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

Fig. 13. Processing results of the proposed APFA with training samples from the GE scanner. The last image is the difference image of ROI I between the LdCT image and the APFA result. The soft tissue display window and the lung display window are [−160, 240] HU and [−1024, −24] HU, respectively.

4) Influence of the Offline Training Database on the APFA Performance:
a) Influence of the database size: The offline training database consisting of all the 3D patches extracted from 10 full-dose volume scans (432 slices in total, from patients #1 to #10) served as a reference set (63 735 822 3D patches in total) to evaluate the influence of the database size on the APFA performance. We randomly chose a subset consisting of 0.01%,
2520 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 36, NO. 12, DECEMBER 2017
the noise residuals can be further removed, as illustrated in Fig. 12(b3) and Fig. 15(d).

Both digital simulations and clinical data from lung nodule needle biopsy studies were used to evaluate the proposed algorithm. The results indicate that, compared with other methods such as FBP “B31f” filtering, LPG-PCA and DL, the proposed algorithm achieves better performance in both noise/artifact suppression and detail/texture preservation for low-dose lung CT images. We also evaluated the influence of the offline training database on the performance of the APFA algorithm. The results show that the proposed algorithm still works well with training samples from a different reconstruction kernel/scanner model or with a small training database.

In future work, we will incorporate the proposed adaptive prior feature retrieval paradigm into the statistical reconstruction framework. Prior feature retrieval takes advantage of the anatomical characteristics of objects. However, it is difficult to accurately take the statistical information in the projection data into account: the assumption of locally independent, additive and identically distributed noise in the image domain is indirect and not accurate enough. Hence, we aim to combine this adaptive prior with raw-data statistics so that a comprehensive optimization can be performed for ultra-low-dose CT reconstruction. Further improvement in noise reduction and feature preservation is expected.
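The pipeline summarized in these conclusions (an offline patch database built from full-dose scans, an online search for structurally similar patches, and PCA with coefficient shrinkage) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names are invented, the sliding-window extraction and random subsampling follow the description in the text, and the paper's actual shrinkage rule, Eq. (13), is not reproduced, so a generic Wiener-type (LMMSE) shrinkage of the PCA coefficients stands in for it.

```python
import numpy as np

def build_patch_database(volume, patch=3, fraction=0.001, seed=0):
    """Offline step (illustrative): extract all overlapping patch**3 cubes
    from a full-dose volume, flatten them, and keep only a random fraction,
    exploiting the redundancy among patches."""
    z, y, x = volume.shape
    coords = [(i, j, k)
              for i in range(z - patch + 1)
              for j in range(y - patch + 1)
              for k in range(x - patch + 1)]
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(coords),
                      size=max(1, int(len(coords) * fraction)),
                      replace=False)
    return np.stack([volume[i:i + patch, j:j + patch, k:k + patch].ravel()
                     for i, j, k in (coords[p] for p in keep)])

def denoise_patch(noisy_patch, database, k=50, sigma=0.05):
    """Online step (illustrative): select the k database patches most similar
    to the noisy target, run PCA on them, and shrink the target's PCA
    coefficients (a Wiener-type stand-in for Eq. (13))."""
    # 1) Online patch search: k nearest training samples in Euclidean distance.
    dists = np.sum((database - noisy_patch) ** 2, axis=1)
    train = database[np.argsort(dists)[:k]]

    # 2) PCA on the selected training samples (eigendecomposition of their
    #    sample covariance; eigenvalues come back in ascending order).
    mean = train.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(train - mean, rowvar=False))

    # 3) Project the noisy patch, attenuate each coefficient, reconstruct.
    coef = eigvec.T @ (noisy_patch - mean)
    signal_var = np.maximum(eigval - sigma ** 2, 0.0)  # estimated clean variance
    coef *= signal_var / (signal_var + sigma ** 2)     # Wiener-type shrinkage
    return eigvec @ coef + mean
```

A full restoration would slide `denoise_patch` over every 3 × 3 × 3 patch of the LdCT volume and aggregate the overlapping estimates; that outer loop, and the paper's actual Eq. (13), are omitted here.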
In the proposed algorithm, most of the computational cost lies in the prior patch search. A smaller training database is therefore desirable for practical use. To address this, a randomly chosen subset of the entire set of training patches was used as the training database, based on the fact that there are many redundant structures among the patches in the entire set. The experimental study indicates that with a smaller subset consisting of only 671 patches, there is no significant degradation in image quality compared with using the entire set. We implemented the proposed algorithm with the Matlab parallel pool on a PC workstation configured with an Intel Xeon CPU (20 cores, 2.5 GHz) and 64 GB of RAM. With a patch size of 3 × 3 × 3 and a training database of 318,672 patches, the algorithm took approximately 82 minutes to process a 512 × 512 × 30 CT volume. Based on the quantitative results shown in Fig. 9, the computational load can be further reduced by using a smaller subset, such as 0.1% of the patches in the entire set, with relatively satisfying performance.

In this study, we focus on lung imaging, where high-contrast lung nodules are of main interest. It is worth mentioning that 20-30 mAs exposure is normally suggested as a clinically pertinent setting by previous studies [36], [37]. However, according to our study of ultra-low-dose imaging at 14 mAs, the image quality can be greatly improved with the assistance of prior features, indicating that further dose reduction below modern clinical protocols is possible.

Though the proposed APFA algorithm has proved effective in many cases, including those listed above, it may fail for some extremely low-dose lung CT images. The limitations of this method have also been noticed. In some regions of FBP reconstructions, noise may produce structure-like signals that cannot be completely removed by our APFA. The residual error could be significant in low-contrast areas. If the dose keeps decreasing, resulting in higher noise, the performance of the APFA method will degrade gradually. First, FBP reconstructions will suffer from photon starvation and produce more and more streak artifacts, which might be mistakenly treated as structures in the similar-patch-search step and thus preserved. Second, with high noise, low-contrast features will be overwhelmed by noise in the coefficient-shrinkage step of Eq. (13) and thus be difficult to recover. In these situations, one should expect overall poor image quality or unexpected differences from the full-dose reconstruction.

REFERENCES

[1] D. J. Brenner and E. J. Hall, “Cancer risks from CT scans: Now we have data, what next?” Radiology, vol. 265, no. 2, pp. 330–331, Nov. 2012.
[2] O. W. Linton and F. A. Mettler, “National conference on dose reduction in CT, with an emphasis on pediatric patients,” Amer. J. Roentgenol., vol. 181, no. 2, pp. 321–329, Aug. 2003.
[3] D. J. Brenner and E. J. Hall, “Computed tomography—An increasing source of radiation exposure,” New England J. Med., vol. 357, no. 22, pp. 2277–2284, Nov. 2007.
[4] J. Hsieh, “Adaptive streak artifact reduction in computed tomography resulting from excessive X-ray photon noise,” Med. Phys., vol. 25, no. 11, pp. 2139–2147, Nov. 1998.
[5] I. A. Elbakri and J. A. Fessler, “Statistical image reconstruction for polyenergetic X-ray computed tomography,” IEEE Trans. Med. Imag., vol. 21, no. 2, pp. 89–99, Feb. 2002.
[6] J. Wang, T. Li, and L. Xing, “Iterative image reconstruction for CBCT using edge-preserving prior,” Med. Phys., vol. 36, no. 1, pp. 252–260, Jan. 2009.
[7] P. J. L. Riviere, J. Bian, and P. A. Vargas, “Penalized-likelihood sinogram restoration for computed tomography,” IEEE Trans. Med. Imag., vol. 25, no. 8, pp. 1022–1036, Aug. 2006.
[8] J. Wang, T. Li, H. Lu, and Z. Liang, “Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography,” IEEE Trans. Med. Imag., vol. 25, no. 10, pp. 1272–1283, Oct. 2006.
[9] Y. Zhang, J. Zhang, and H. Lu, “Statistical sinogram smoothing for low-dose CT with segmentation-based adaptive filtering,” IEEE Trans. Nucl. Sci., vol. 57, no. 5, pp. 2587–2598, Oct. 2010.
[10] A. Schilham, B. van Ginneken, H. Gietema, and M. Prokop, “Local noise weighted filtering for emphysema scoring of low-dose CT images,” IEEE Trans. Med. Imag., vol. 25, no. 4, pp. 451–463, Apr. 2006.
[11] A. Borsdorf, R. Raupach, T. Flohr, and J. Hornegger, “Wavelet based noise reduction in CT-images using correlation analysis,” IEEE Trans. Med. Imag., vol. 27, no. 12, pp. 1685–1703, Dec. 2008.
[12] Y. Chen et al., “Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means,” Phys. Med. Biol., vol. 57, no. 9, pp. 2667–2688, Apr. 2012.
[13] H. Yu, S. Zhao, E. A. Hoffman, and G. Wang, “Ultra-low dose lung CT perfusion regularized by a previous scan,” Acad. Radiol., vol. 16, no. 3, pp. 363–373, Mar. 2009.
[14] P. Thériault-Lauzier, J. Tang, and G.-H. Chen, “Prior image constrained compressed sensing: Implementation and performance evaluation,” Med. Phys., vol. 39, no. 1, pp. 66–80, Jan. 2012.
[15] H. Dang, A. S. Wang, M. S. Sussman, J. H. Siewerdsen, and J. W. Stayman, “dPIRPLE: A joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images,” Phys. Med. Biol., vol. 59, pp. 4799–4826, Aug. 2014.
[16] J. Ma et al., “Low-dose computed tomography image restoration using previous normal-dose scan,” Med. Phys., vol. 38, pp. 5713–5731, Oct. 2011.
[17] H. Zhang et al., “Iterative reconstruction for X-ray computed tomography using prior-image induced nonlocal regularization,” IEEE Trans. Biomed. Eng., vol. 61, no. 9, pp. 2367–2378, Sep. 2014.
[18] W. Xu, S. Ha, and K. Mueller, “Database-assisted low-dose CT image restoration,” Med. Phys., vol. 40, no. 3, p. 031109, Mar. 2013.
[19] L. Ouyang, T. Solberg, and J. Wang, “Noise reduction in low-dose cone beam CT by incorporating prior volumetric image information,” Med. Phys., vol. 39, pp. 2569–2577, May 2012.
[20] H. Zhang et al., “Deriving adaptive MRF coefficients from previous normal-dose CT scan for low-dose image reconstruction via penalized weighted least-squares minimization,” Med. Phys., vol. 41, no. 4, pp. 041916-1–041916-15, Apr. 2014.
[21] H. Lee, L. Xing, R. Davidi, R. Li, J. Qian, and R. Lee, “Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints,” Phys. Med. Biol., vol. 57, no. 8, pp. 2287–2307, Apr. 2012.
[22] Q. Xu, H. Yu, X. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose X-ray CT reconstruction via dictionary learning,” IEEE Trans. Med. Imag., vol. 31, no. 9, pp. 1682–1697, Sep. 2012.
[23] Y. Chen et al., “Artifact suppressed dictionary learning for low-dose CT image processing,” IEEE Trans. Med. Imag., vol. 33, no. 12, pp. 2271–2292, Dec. 2014.
[24] Y. Chen et al., “Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing,” Phys. Med. Biol., vol. 58, pp. 5803–5820, Aug. 2013.
[25] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, Dec. 2006.
[26] D. Wu, L. Li, and L. Zhang, “Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database,” Phys. Med. Biol., vol. 58, pp. 4047–4070, May 2013.
[27] H. Zhang et al., “Extracting information from previous full-dose CT scan for knowledge-based Bayesian reconstruction of current low-dose CT images,” IEEE Trans. Med. Imag., vol. 35, no. 3, pp. 860–870, Mar. 2016.
[28] L. Zhang, W. Dong, D. Zhang, and G. Shi, “Two-stage image denoising by principal component analysis with local pixel grouping,” Pattern Recognit., vol. 43, no. 4, pp. 1531–1549, Apr. 2010.
[29] I. T. Jolliffe, Principal Component Analysis, 2nd ed. New York, NY, USA: Springer-Verlag, 2002, pp. 29–59.
[30] D. D. Muresan and T. W. Parks, “Adaptive principal components and image denoising,” in Proc. ICIP, 2003, pp. I101–I104.
[31] J. V. Manjón, P. Coupé, and A. Buades, “MRI noise estimation and denoising using non-local PCA,” Med. Image Anal., vol. 22, no. 1, pp. 35–47, May 2015.
[32] Lung Cancer Alliance, Washington, DC, USA. Lung Cancer Datasets, in GIVE A SCAN. Accessed: May 20, 2017. [Online]. Available: http://www.giveascan.org
[33] J. Ma et al., “Variance analysis of X-ray CT sinograms in the presence of electronic noise background,” Med. Phys., vol. 39, pp. 4051–4065, Jul. 2012.
[34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[35] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610–621, Nov. 1973.
[36] H. Rusinek et al., “Pulmonary nodule detection: Low-dose versus conventional CT,” Radiology, vol. 209, no. 1, pp. 243–249, Oct. 1998.
[37] S. Itoh et al., “Lung cancer screening: Minimum tube current required for helical CT,” Radiology, vol. 215, no. 1, pp. 175–183, Apr. 2000.