
IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 27, NO. 1, JANUARY 2008

Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels Direction Matched Filter
Aliaa Abdel-Haleim Abdel-Razik Youssif, Atef Zaki Ghalwash, and Amr Ahmed Sabry Abdel-Rahman Ghoneim*
Abstract: Optic disc (OD) detection is a main step in developing automated screening systems for diabetic retinopathy. In this paper we present a method to automatically detect the position of the OD in digital retinal fundus images. The method starts by normalizing luminosity and contrast throughout the image using illumination equalization and adaptive histogram equalization methods, respectively. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. Hence, a simple matched filter is proposed to roughly match the direction of the vessels at the OD vicinity. The retinal vessels are segmented using a simple and standard 2-D Gaussian matched filter. Consequently, a vessels direction map of the segmented retinal vessels is obtained using the same segmentation algorithm. The segmented vessels are then thinned and filtered using local intensity to finally represent the OD-center candidates. The difference between the proposed matched filter, resized into four different sizes, and the vessels' directions at the surrounding area of each OD-center candidate is measured. The minimum difference provides an estimate of the OD-center coordinates. The proposed method was evaluated using a subset of the STARE project's dataset, containing 81 fundus images of both normal and diseased retinas, initially used by literature OD detection methods. The OD-center was detected correctly in 80 out of the 81 images (98.77%). In addition, the OD-center was detected correctly in all 40 images (100%) of the publicly available DRIVE dataset.

Index Terms: Biomedical image processing, fundus image analysis, matched filter, optic disc (OD), retinal imaging, telemedicine.

Fig. 1. (a) Typical normal fundus image, showing the properties of a normal optic disc (bright ovoid shape on the left-hand side). (b) Fundus diagnosed as having high-severity retinal/subretinal exudates [9].

I. INTRODUCTION

The optic disc (OD) is considered one of the main features of a retinal fundus image (Fig. 1), and methods have been described for its automatic detection [1], [2]. OD detection is a key preprocessing component in many algorithms designed for the automatic extraction of retinal anatomical structures and lesions [3], and thus an associated module of most retinopathy screening systems. The OD often serves as a landmark for other fundus features; for example, the fairly constant distance between the OD and the macula-center (fovea) can be used as a priori knowledge to help estimate the location of the macula [1],

Manuscript received February 19, 2007; revised May 2, 2007. Asterisk indicates corresponding author. A. A.-H. A.-R. Youssif and A. Z. Ghalwash are with the Department of Computer Science, the Faculty of Computers and Information, Helwan University, Cairo, Egypt (e-mail: aliaay@helwan.edu.eg; aghalwash@edara.gov.eg). *A. A. S. A.-R. Ghoneim is with the Department of Computer Science, the Faculty of Computers and Information, Helwan University, Cairo, Egypt (e-mail: amr_ghoneim@yahoo.com). Digital Object Identifier 10.1109/TMI.2007.900326

[3]. The OD has also been used as an initial point for retinal vasculature tracking methods [3], [4]; large vessels found in the OD vicinity can serve as seeds for vessel tracking. Also, the OD rim (boundary) causes false responses for linear blood vessel filters [5]. A change in the shape, color, or depth of the OD is an indicator of various ophthalmic pathologies, especially glaucoma [6]; thus, the OD dimensions are used to measure abnormal features due to certain retinopathies, such as glaucoma and diabetic retinopathy [4], [7]. Furthermore, the OD can initially be recognized as one or more candidate exudate regions (one of the lesions occurring in diabetic retinopathy [6]) due to the similarity of its color to the yellowish exudates [Fig. 1(b)]. Identifying and removing the OD improves the classification of exudate regions [8]. The OD is the brightest feature of the normal fundus, and it has an approximately, slightly vertically oval (elliptical) shape. In colored fundus images, the OD appears as a bright yellowish or white region (Fig. 1). The OD is the exit region of the blood vessels and the optic nerves from the retina, and is also characterized by a relatively pale appearance owing to the nerve tissue underlying it. Measured relative to the retinal fundus image, it occupies about one seventh of the entire image [1]. Alternatively, according to [4], the OD size varies from one person to another, occupying about one tenth to one fifth of the image. The process of automatically detecting/localizing the OD aims only to correctly detect the centroid (center point) of the OD. On the other hand, disc boundary detection aims to correctly segment the OD by detecting the boundary between the retina and the nerve head (neuroretinal rim). Some methods estimated the contour (boundary) of the OD as a circle or an ellipse (e.g., [1], [3], [7], and [10]). Other methods have

0278-0062/$25.00 © 2007 IEEE


been proposed for the exact detection of the OD contour (e.g., snakes, which have the ability to bridge edge discontinuities [6]). After stating the motivations for localizing the OD, defining the OD, and highlighting the difference between disc detection and disc boundary detection, the remaining part of the paper is organized as follows. Most of the available methods for automatic OD detection are reviewed in Section II. In Section III, a description of the material used is given. Section IV presents the proposed algorithm. The results are presented and discussed in Sections V and VI, respectively. Finally, conclusions and further work are found in Section VII.

II. OD DETECTION METHODS: A LITERATURE REVIEW

Although the OD has well-defined features and characteristics, localizing the OD automatically and in a robust manner is not a straightforward process, since the appearance of the OD may vary significantly due to retinal pathologies. Consequently, in order to detect the OD effectively, the various methods developed should consider the variation in appearance, size, and location among different images [11]. The appearance of the yellowish OD region is characterized by a relatively rapid variation in intensity, because of the dark blood-filled vessels beside the bright nerve fibers. Sinthanayothin et al. [1], [12] detected the OD by identifying the area with the highest average variation among adjacent pixels, using a window size equal to that of the OD. The images were preprocessed using an adaptive local contrast enhancement method applied to the intensity component. Instead of using the average variance in intensity, and assuming that the bright-appearing retinopathies (e.g., exudates) are far from reaching the OD size, Walter and Klein [13] approximated the OD center as the center of the largest, brightest connected object in a fundus image. They obtained a binary image including all the bright regions by simply thresholding the intensity image.
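The highest-local-variation idea of Sinthanayothin et al. can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the window size and the use of local variance (via the running-mean identity Var = E[x²] − E[x]²) are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variation_window_od(intensity, win=15):
    """Locate the OD as the centre of the window with the highest local
    intensity variation (sketch of the Sinthanayothin et al. approach;
    `win` would normally be set to the expected OD size)."""
    img = intensity.astype(float)
    # Local variance over a win x win window: E[x^2] - (E[x])^2.
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    local_var = np.clip(mean_sq - mean ** 2, 0.0, None)
    y, x = np.unravel_index(np.argmax(local_var), local_var.shape)
    return x, y
```

On a real fundus image, the vessel/nerve-fiber contrast inside the OD makes this window stand out, which is exactly the property the review paragraph describes.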
In [14], Chrástek et al. applied an averaging filter to the green-band image, and located the OD roughly at the point of the highest average intensity. Again, brightness was used by Li and Chutatape [4], [6] in order to find the OD candidate regions for their model-based approach. Pixels with the highest 1% gray-levels in the intensity image were selected; obviously, these pixels were mainly from areas in the OD or bright lesions. The selected pixels were then clustered, and small clusters were discarded. A disc-space (OD model) was created by applying principal component analysis (PCA) to a training set of 10 intensity-normalized square subimages manually cropped around the OD. Then, for each pixel in the candidate regions, the PCA transform was applied through a window at different scales. The OD was detected as the region with the smallest Euclidean distance to its projection onto the disc-space. Another model-based (template matching) approach was employed by Osareh et al. [8], [15], [16] to approximately locate the OD. Initially, the images were normalized by applying histogram specification, and then the OD regions from 25 color-normalized images were averaged to produce a gray-level template. The normalized correlation coefficient was then used to find the best match between the template and all the candidate pixels in the given image.
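The normalized-correlation template matching used by Osareh et al. reduces to scoring each candidate patch against a mean OD template. A minimal sketch follows; the function names and the candidate-list interface are illustrative, and candidates are assumed to lie far enough from the image border for a full patch to be cut out.

```python
import numpy as np

def ncc(template, patch):
    """Normalized correlation coefficient between a template and an
    equally sized image patch (value in [-1, 1])."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def best_match(image, template, candidates):
    """Return the candidate (x, y) whose surrounding patch correlates
    best with the template."""
    th, tw = template.shape
    best_score, best_xy = -2.0, None
    for x, y in candidates:
        patch = image[y - th // 2: y - th // 2 + th,
                      x - tw // 2: x - tw // 2 + tw]
        score = ncc(template, patch)
        if score > best_score:
            best_score, best_xy = score, (x, y)
    return best_xy
```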

One more template matching approach is the Hausdorff-based template matching used by Lalonde et al. [10], together with pyramidal decomposition and confidence assignment. In the beginning, multiresolution processing was employed through pyramidal decomposition, which allowed large-scale object tracking; small bright retinal lesions (e.g., exudates) vanish at lower resolutions, facilitating the search for the OD region with few false candidates. A simple confidence value was calculated for all the OD candidate regions, representing the ratio between the mean intensity inside the candidate region and in its neighborhood. The Canny edge detector and a Rayleigh-based threshold were then applied to the green-band image regions corresponding to the candidate regions, constructing a binary edge map. Finally, the edge map regions were matched to a circular template with different radii using the Hausdorff distance. Another confidence value (the number of overlapped template pixels divided by the total number of template pixels) was calculated for all the regions having a Hausdorff distance between the edge map and the template less than a certain threshold value. The region having the highest total confidence value was considered the OD. Pyramidal decomposition aids significantly in detecting large areas of bright pixels that probably coincide with the OD, but it can simply be fooled by large areas of bright pixels that may occur near the image's borders due to uneven illumination. For that reason, Frank ter Haar [11] applied illumination equalization to the green-band of the image, and then a resolution pyramid was created using a simple Haar-based discrete wavelet transform. Finally, the brightest pixel at the fifth level of the resolution pyramid was chosen to correspond to the OD area.
Frank ter Haar [11] proposed an alternative to the latter method based on the pyramidal decomposition of both the vasculature and the green-band, where the fifth levels of the resolution pyramids of both the illumination-equalized green-band and the binary vessel segmentation are summed, and the highest value corresponds to the OD center. The Hough transform, a technique capable of finding geometric shapes within an image, was also employed to detect the OD. In [7], Abdel-Ghafar et al. employed the circular Hough transform (CHT) to detect the OD, which has a roughly circular shape. The retinal vasculature in the green-band image was suppressed using the closing morphological operator. The Sobel operator and a simple threshold were then used to extract the edges in the image. The CHT was finally applied to the edge points, and the largest circle was found consistently to correspond to the OD. Barrett et al. [17] also proposed using the Hough transform to localize the OD. Their method was implemented by [11] in two different ways. In the first, the Hough transform was applied only to pixels on or close to the retinal vasculature in a binary image of the vasculature obtained by [18]. The binary vasculature was dilated in order to increase the possible OD candidates. Alternatively, in the second method, Frank ter Haar [11] applied the Hough transform once again, but only to the brightest 0.35% of the fuzzy convergence image obtained by [19], [20]. Once more, dilation was applied to the convergence image to overcome the gaps created by small vessels [11].
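The circular Hough transform behind these methods is a voting scheme: every edge point votes for all centres lying at distance r from it, and the accumulator peak gives the circle. The sketch below is didactic, not the implementation of [7] or [11]; the angular sampling and radius sweep are assumptions.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Return the best (cx, cy, r): for each trial radius, every edge
    point (x, y) votes for all candidate centres at distance r, and the
    strongest accumulator peak over all radii wins."""
    h, w = shape
    thetas = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    best_votes, best_circle = 0, None
    for r in radii:
        acc = np.zeros((h, w), dtype=int)
        for (x, y) in edge_points:
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes
        peak = acc.max()
        if peak > best_votes:
            iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
            best_votes, best_circle = peak, (ix, iy, r)
    return best_circle
```

In practice an optimized CHT (or a gradient-directed variant) replaces this brute-force accumulator, but the voting logic is the same.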


TABLE I OD DETECTION RESULTS FOR THE PROPOSED AND LITERATURE REVIEWED METHODS

Fig. 2. Schematic drawing of the retinal vasculature orientations [11].

The shape, color, and size of the OD show large variance, especially in the presence of retinopathies, and therefore detection methods based on these properties were shown to be weak and impractical [11]. An alternative property to be examined is the retinal vasculature. The OD is the entrance point for both the optic nerve and the few main blood vessels, which split into many smaller vessels that spread around the retina. As a result, most of the recently proposed techniques try to utilize the information provided by the retinal vasculature. Fuzzy convergence [19], [20] is a novel voting-type algorithm developed by Hoover and Goldbaum in order to determine the origination of the retinal vasculature (the convergence point), and thus localize the OD; this convergence is considered the only consistently visible property of the OD. The inputs to the fuzzy convergence algorithm were six binary vessel segmentations (each at a different scale) obtained from the green-band image. Each vessel was modeled by a fuzzy segment, which contributes to a cumulative voting image (a convergence image) where each pixel equals the number of fuzzy segments on which the pixel lies. Finally, the convergence image was smoothed and thresholded to determine the strongest point(s) of convergence. If the final result was inconclusive, the green-band image was illumination equalized, and Fisher's linear discriminant was applied to regions containing the brightest pixels to detect the OD. Frank ter Haar [11] proposed two methods using a vessel-branch network constructed from a binary vessel image. In the first, he searched for the branch with the most vessels in order to locate the OD. Alternatively, in the second method, he searched the constructed vessel-branch network for all paths; since the end points of all paths represent a degree of convergence, the area where more path-endings were located probably coincides with the OD location.
The Hough transform was then applied to these areas (having the highest path-ending convergence) to detect the OD. One last method employed by Frank ter Haar [11] was based on fitting the vasculature orientations on a directional model. Since, starting at the OD, the retinal vasculature follows approximately the same divergence pattern in all retinal images (Fig. 2), any segmented vasculature can be fitted on the directional model representing the common divergence pattern in order to detect the OD. The directional model (DM) was created using the vessel segmentations of 80 training images. For all the vessel pixels in each image, the orientation was calculated to form a directional

vessel map. The 80 directional maps were then manually aligned, and the DM was created by averaging (at each pixel) all the corresponding orientation values. The right-eye DM was reflected in order to create a left-eye DM. Finally, each pixel in an input vasculature was aligned to the OD center in both DMs, and the distance between the fitted input vasculature and both DMs was measured. The pixel having the minimal distance to both DMs is selected as the OD location. This method recorded the best success rate among all 15 automatic OD-detection methods evaluated by Frank ter Haar [11] (Table I).


Closely related to vasculature fitting on a directional model, Foracchia et al. [21] identified the position of the OD using a geometrical model of the vessel structure. Once again, this method was based on the fact that, in all images, the retinal vasculature originates from the OD following a similar directional pattern. In this method, the main vessels originating from the OD were geometrically modeled using two parabolas. Consequently, the OD position can be located as the common vertex of the two parabolas (i.e., the convergence point of the retinal vasculature). After the complete model of vessel directions was created, the vasculature of the input image was extracted, and the difference between the model directions and the extracted vessel directions was minimized using the weighted residual sum of squares (RSS) and a simulated annealing (SA) optimization algorithm. In [22], Goldbaum et al. combined three OD properties that jointly located the OD: namely, the convergence of the blood vasculature towards the OD, the OD's appearance as a bright disc, and the large vessels entering the OD from above and below. Using most of the OD properties previously mentioned, Lowell et al. [5] carefully designed an OD template which was then correlated with the intensity component of the fundus image using the full Pearson-R correlation. This detection filter (template) consists of a Laplacian of Gaussian with a vertical channel carved out of the middle, corresponding to the main blood vessels departing the OD vertically. Instead, Tobin et al. [23] proposed a method that mainly relied on vasculature-related OD properties. A Bayesian classifier (trained using 50 images) was used to classify each pixel in red-free images as OD or not-OD, using probability distributions describing the luminance across the retina and the density, average thickness, and average orientation of the vasculature.
Abràmoff and Niemeijer [24] generally used the same features for OD detection, but the features were measured in a special fashion. Then, kNN regression (trained using 100 images) was used to estimate the OD location. To the best of our knowledge, the latter two methods are the only supervised methods proposed in the literature for OD detection.

III. MATERIAL

Two publicly available datasets were used to test the proposed method. The main dataset is a subset of the STARE project's dataset [9]. The subset contains 81 fundus images that were initially used by Hoover and Goldbaum [19] for evaluating their automatic OD localization method. The images were captured using a TopCon TRV-50 fundus camera at a 35° field-of-view (FOV), and subsequently digitized at 605 × 700 pixels, 24 bits per pixel [19]. The dataset contains 31 images of normal retinas and 50 of diseased retinas. The dataset was also used by Foracchia et al. [21]. Moreover, Frank ter Haar [11] used it for comparing 15 different OD-detection methods. Reported results using STARE are shown in Table I. The second dataset used is the DRIVE dataset [25], established to facilitate comparative studies on retinal vasculature segmentation. The dataset consists of a total of 40 color fundus photographs used for making actual clinical diagnoses, where 33 photographs do not show any sign of diabetic retinopathy and seven show signs of mild early diabetic retinopathy. The

768 × 584 pixel, 24-bit color images are in compressed JPEG format, and were acquired using a Canon CR5 nonmydriatic 3CCD camera with a 45° FOV.

IV. PROPOSED METHOD

This study, inspired by the work of Frank ter Haar [11], Hoover and Goldbaum [19], and Foracchia et al. [21], presents a method for the automatic detection of the OD. The proposed method comprises several steps. Initially, a binary mask is generated. Then the illumination and contrast throughout the image are equalized. Finally, the retinal vasculature is segmented, and the directions of the vessels are matched to the proposed filter, which represents the expected vessel directions in the OD vicinity.

A. Mask Generation

Mask generation aims to label the pixels belonging to the (semi)circular retinal fundus region-of-interest (ROI) in the entire image, and to exclude the background of the image from further calculations and processing [3]. We used the method proposed by Frank ter Haar [11], who applied an empirically chosen threshold to the image's red-band. The morphological operators (opening, closing, and erosion) were then applied, respectively, to the result of the preceding step using a 3 × 3 square kernel to give the final ROI mask [Fig. 4(a)].

B. Illumination Equalization

The illumination in retinal images is nonuniform due to the variation of the retina response or the nonuniformity of the imaging system. Vignetting and other forms of uneven illumination make the typical analysis of retinal images impractical. In addition, Aliaa et al. [26] showed that uneven illumination negatively affects the process of localizing successful OD candidates. To overcome the nonuniform illumination, Hoover and Goldbaum [19] adjusted (equalized) each pixel using the following equation:

p'(x, y) = p(x, y) + m − A_W(x, y)    (1)

where m is the desired average intensity (128 in an 8-bit grayscale image) and A_W(x, y) is the mean intensity value of the pixels within a window W around (x, y). The mean intensities are smoothed using the same windowing.
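The illumination equalization of (1) amounts to adding, at each pixel, the difference between the desired mean and a running-window local mean. A minimal sketch, assuming scipy's `uniform_filter` for the running mean and omitting the five-pixel ROI shrink near the border described in the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def equalize_illumination(green, desired_mean=128, win=40):
    """Equation (1): p' = p + m - A_W, where A_W is the mean intensity
    over a running win x win window (40 x 40 following [11])."""
    g = green.astype(float)
    local_mean = uniform_filter(g, size=win)   # A_W(x, y)
    return np.clip(g + desired_mean - local_mean, 0, 255)
```

Applied to a vignetted green band, this flattens the slow illumination gradient while keeping local vessel contrast, which is why it is done before segmentation.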
Instead of using a variable window size as applied by [19], we used a running window of only one size (40 × 40), as applied by [11]. Consequently, since the number of pixels used when calculating the local average intensity in the center is larger than the number of pixels used near the border, the ROI of the retinal images is shrunk by five pixels to discard the pixels near the border [11]. Illumination equalization (1) is applied to the green-band image [Fig. 4(b) and (c)].

C. Adaptive Histogram Equalization (AHE)

Wu et al. [27] applied AHE to normalize and enhance the contrast within fundus images. They found it more effective than classical histogram equalization, especially when detecting small blood vessels characterized by low contrast levels. AHE


Fig. 3. Proposed matched filter for the vessels' direction at the OD vicinity.

is applied to an illumination-equalized inverted green-band image, as proposed in [27], where each pixel p is adapted using the following equation:

p_AHE = 255 · [ (1/M²) Σ_{p' ∈ R(p)} s(p − p') ]^r    (2)

where R(p) denotes the pixel p's neighborhood (a square window with side length M), s(d) = 1 if d > 0, and s(d) = 0 otherwise. The values of M and r were empirically chosen by [27] to be 81 and 8, respectively [Fig. 4(d)].

D. Retinal Blood Vessels Segmentation

To segment the retinal blood vessels, we used the simple and standard edge fitting algorithm proposed by Chaudhuri et al. [28], where the similarity between a predefined 2-D Gaussian template and the fundus image is maximized. Twelve 15 × 15 filters (templates) were generated to model the retinal vasculature along all different orientations (0° to 165°) with an angular resolution of 15°, and were then applied to each pixel, where only the maximum of their responses is kept. In order to generate a binary vessel/nonvessel image [Fig. 4(e)], the maximum responses are thresholded using the global threshold selection algorithm proposed by Otsu [29]. Instead of applying the 12 templates to an averaged green-band image as suggested by [28], applying them to the adaptively histogram-equalized image significantly improves the segmentation algorithm and increases the sensitivity and specificity of the detected vessels [30]. A vessels direction map (VDM) can be obtained from the segmentation algorithm by recording the direction of the template that achieved the maximum response at each pixel. Then, for all the pixels labeled as nonvessel, the corresponding values in the VDM can be assigned to not-a-number (NaN) in order to exclude them from further processing.

E. Vessels Direction Matched Filter

A matched filter describes the expected appearance of a desired signal, for purposes of comparative modeling [31]. Thus, in order to detect the OD, a simple vessels direction matched filter is proposed to roughly match the direction of the vessels at the OD vicinity (Fig. 3).
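The 2-D Gaussian matched-filter segmentation of Section IV-D, and the VDM it yields for free, can be sketched as follows. This is an illustrative reconstruction: the Gaussian width, segment length, and the Chaudhuri-style −exp(−x²/2σ²) cross-profile are assumptions, and the Otsu thresholding step on the maximum response is omitted here.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_kernels(size=15, sigma=2.0, length=9, n_angles=12):
    """Twelve 15 x 15 zero-mean kernels, one per 15-degree orientation:
    a Gaussian valley across the vessel, constant along it."""
    half = size // 2
    u, v = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1))
    kernels = []
    for k in range(n_angles):
        th = np.deg2rad(15 * k)
        x = u * np.cos(th) + v * np.sin(th)    # across-vessel coordinate
        y = -u * np.sin(th) + v * np.cos(th)   # along-vessel coordinate
        ker = np.where(np.abs(y) <= length / 2,
                       -np.exp(-x ** 2 / (2 * sigma ** 2)), 0.0)
        ker -= ker.mean()                      # zero response on flat areas
        kernels.append(ker)
    return kernels

def segment_vessels(image):
    """Maximum response over all orientations; the argmax orientation
    (in degrees) forms the vessels direction map (VDM). Thresholding
    the max response (Otsu in the paper) gives the binary image."""
    responses = np.stack([convolve(image.astype(float), k)
                          for k in matched_filter_kernels()])
    max_resp = responses.max(axis=0)
    vdm = responses.argmax(axis=0) * 15
    return max_resp, vdm
```

Because the argmax orientation comes out of the same pass that produces the responses, no separate direction-estimation algorithm is needed, which is the advantage noted in the Discussion.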
The 9 × 9 template is resized using bilinear interpolation to sizes 241 × 81, 361 × 121, 481 × 161, and 601 × 201 to match the structure of the vessels at different scales. These sizes are specifically tuned for STARE and DRIVE, but they can be easily adjusted to other datasets. The difference between all four templates (in the single given

Fig. 4. Proposed method applied to the fundus image in Fig. 4(h). (a) ROI mask generated. (b) Green-band image. (c) Illumination equalized image. (d) Adaptive histogram equalization. (e) Binary vessel/non-vessel image. (f) Thinned version of the preceding binary image. (g) Final OD-center candidates. (h) OD detected successfully using the proposed method (white cross, right-hand side).

direction) and a VDM is calculated, and the pixel having the least accumulated difference is selected as the OD center [Fig. 4(h)]. To reduce the computational burden, the matched filters are applied only to candidate pixels picked from the fundus image. The binary vessel/nonvessel image is thinned [Fig. 4(f)], reducing the pixels labeled as vessels to the vessel centerlines. All remaining vessel-labeled pixels that are not within a 41 × 41 square centered on one of the highest 4% intensity pixels in the illumination-equalized image are relabeled as nonvessel pixels [Fig. 4(g)]. This final step aims only to reduce the number of OD candidates, and thus altering the size of the square or the number of highest-intensity pixels has no significant effect. The remaining vessel-labeled pixels are potential OD centers, and are thus selected as candidates for applying the four sizes of the matched filter.
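The candidate-scoring step can be sketched as below. The 9 × 9 template of Fig. 3 is not reproduced here, so a constant "vertical" direction template stands in for it as a loud assumption; the bilinear resize, the NaN-masked direction difference, and the minimum-over-candidates selection follow the text.

```python
import numpy as np
from scipy.ndimage import zoom

# Placeholder for the 9 x 9 direction template of Fig. 3 (degrees).
TEMPLATE = np.full((9, 9), 90.0)

def direction_difference(vdm, template, cx, cy):
    """Mean absolute direction difference between a resized template and
    the VDM patch centred at candidate (cx, cy). NaN (non-vessel) VDM
    entries are ignored; candidates are assumed far enough from the
    border for a full patch to be extracted."""
    th, tw = template.shape
    patch = vdm[cy - th // 2: cy - th // 2 + th,
                cx - tw // 2: cx - tw // 2 + tw]
    valid = ~np.isnan(patch)
    if not valid.any():
        return np.inf
    d = np.abs(patch[valid] - template[valid])
    d = np.minimum(d, 180.0 - d)     # orientations wrap modulo 180
    return d.mean()

def detect_od(vdm, candidates,
              sizes=((241, 81), (361, 121), (481, 161), (601, 201))):
    """Pick the candidate with the least difference over all template
    sizes (the four sizes tuned for STARE/DRIVE by default)."""
    best, best_xy = np.inf, None
    for (h, w) in sizes:
        t = zoom(TEMPLATE, (h / 9.0, w / 9.0), order=1)  # bilinear resize
        for (x, y) in candidates:
            score = direction_difference(vdm, t, x, y)
            if score < best:
                best, best_xy = score, (x, y)
    return best_xy
```

Using the mean rather than the raw sum of differences keeps scores comparable across the four template sizes; how the paper normalizes across sizes is not stated, so this is a design assumption.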


Fig. 5. Results of the proposed method (white cross represents the estimated OD center). (a) Only case where the OD detection method failed. (b)-(h) Results of the proposed method on the images shown in [21].

Fig. 6. Results of the proposed method using the DRIVE dataset (white cross represents the estimated OD center).

V. RESULTS

The proposed method achieved a success rate of 98.77% (i.e., the OD was detected correctly in 80 out of the 81 images contained in the STARE dataset). The estimated OD center is considered correct if it is positioned within 60 pixels of the manually identified center, as proposed in [19] and [21]. The average distance (for the 80 successful images) between the estimated OD center and the manually identified center was 26 pixels. The only case in which the OD was not correctly detected [Fig. 5(a)] was due to uneven crescent-shaped illumination near the border, which biased the OD candidates and affected the vessel segmentation algorithm. OD detection results for the proposed and literature-reviewed methods are summarized in Table I. In addition, for comparison, Fig. 5(b)-(h) shows the OD detection results of the proposed method on the images shown in [21]. The OD was also detected correctly in all of the 40 DRIVE dataset images (a 100% success rate) using the proposed method (Fig. 6). The average distance between the estimated and the manually identified OD centers was 17 pixels.

VI. DISCUSSION

A Matlab prototype was implemented for the procedure applying the matched filters, where runs needed on average 3.5 min per image on a laptop (Intel Centrino 1.7-GHz CPU and 512 MB RAM). Though using 2-D matched filters for retinal vessel segmentation and OD detection involves more computation than the other literature-reviewed methods, the use of such template-based methods eases implementation on specialized high-speed hardware and/or parallel processors. Since the OD is characterized as the brightest anatomical structure in a retinal image, the highest 1% intensity pixels should contain areas in the OD [6]; unfortunately, due to uneven illumination (vignetting in particular), the OD may appear darker than other retinal regions, especially since retinal images are often captured with the fovea appearing in the middle of the image and the


OD to one side [19]. Illumination equalization significantly normalized the luminosity across the fundus image, increasing the number of OD candidates within the OD vicinity. Selecting the highest 4% intensity pixels guaranteed the presence of OD candidates within the OD area. Moreover, using high-intensity pixels to filter segmented vessels implicitly searches for areas of high variance, as proposed in [1]. However, applying intensity to segmented vessels as proposed is much more robust compared to OD localization methods based on intensity variation or on intensity values alone; the latter methods are not straightforward once the OD loses its distinct appearance due to nonuniform illumination or pathologies. AHE transformed the image into a more appropriate appearance for the application of the segmentation algorithm. Though the 2-D matched filter achieved poor results compared to other retinal vessel segmentation methods [18], applying it to the AHE image significantly improved its performance. A clear advantage of using the 2-D matched filter for vessel segmentation is the ability to obtain the VDM implicitly during segmentation, without any additional algorithm as proposed in [23] and [24]. The proposed 2-D vessels direction matched filter was successful and robust in representing the directional model of the retinal vessels surrounding the OD. Resizing the filter into four sizes (see Section IV-E) aimed to capture the vertical longitudinal structure of the retinal vessels, as proposed by Tobin et al. [23], where the convolution mask was roughly one OD diameter wide and three OD diameters tall. The proposed filter is obviously simpler than the directional model proposed by [11], the parabolic geometrical model proposed by [21], and the fuzzy convergence approach proposed by [19]. In addition, using the STARE dataset, which is full of tough pathological cases, the proposed method achieved better results compared to the reviewed methods (Table I).

VII.
CONCLUSION AND FUTURE WORK

The paper presented a simple method for OD detection using a 2-D vessels direction matched filter. The proposed approach achieved better results compared to those reported in the literature. Extensions of this study could include the following. 1) Enhancing the performance of the vessel segmentation algorithm, which will significantly affect the performance and efficiency of the proposed method; this can be achieved by employing other preprocessing techniques, or by employing postprocessing steps as proposed in [1]. 2) Using other OD properties or vascular-related OD properties besides intensity and variance (e.g., vessel density and diameter) to reduce the OD-center candidates, which will further enhance the performance. 3) Examining the performance of the existing OD detection methods using other large, benchmark, publicly available datasets to achieve more comprehensive results. 4) Choosing other vessel segmentation algorithms where the VDM can be implicitly obtained (e.g., the angles of the maximum 2-D Gabor wavelet response used in [32] and [33] can simply be used as a VDM).

ACKNOWLEDGMENT

The authors would like to thank their fellow authors of references [1], [3], [9], [18], and [28] for their support in acquiring the materials and resources needed to conduct the present study.

REFERENCES
[1] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated localisation of the optic disk, fovea, and retinal blood vessels from digital colour fundus images," Br. J. Ophthalmol., vol. 83, no. 8, pp. 902–910, 1999.
[2] T. Teng, M. Lefley, and D. Claremont, "Progress towards automated diabetic ocular screening: A review of image analysis and intelligent systems for diabetic retinopathy," Med. Biol. Eng. Comput., vol. 40, pp. 2–13, 2002.
[3] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, "Procedure to detect anatomical structures in optical fundus images," in Proc. Conf. Med. Imag. 2001: Image Process., San Diego, CA, Feb. 19–22, 2001, pp. 1218–1225.
[4] H. Li and O. Chutatape, "Automatic location of optic disc in retinal images," in Proc. IEEE Int. Conf. Image Process., Oct. 7–10, 2001, vol. 2, pp. 837–840.
[5] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, Feb. 2004.
[6] H. Li and O. Chutatape, "A model-based approach for automated feature extraction in fundus images," in Proc. 9th IEEE Int. Conf. Computer Vision (ICCV'03), 2003, vol. 1, pp. 394–399.
[7] R. A. Abdel-Ghafar, T. Morris, T. Ritchings, and I. Wood, "Detection and characterisation of the optic disk in glaucoma and diabetic retinopathy," presented at the Med. Image Understand. Anal. Conf., London, U.K., Sep. 23–24, 2004.
[8] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Classification and localisation of diabetic-related eye disease," in Proc. 7th Eur. Conf. Computer Vision (ECCV), ser. LNCS, May 2002, vol. 2353, pp. 502–516.
[9] STARE Project Website, Clemson Univ., Clemson, SC [Online]. Available: http://www.ces.clemson.edu/~ahoover/stare
[10] M. Lalonde, M. Beaulieu, and L. Gagnon, "Fast and robust optic disk detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Trans. Med. Imag., vol. 20, no. 11, pp. 1193–1200, Nov. 2001.
[11] F. ter Haar, "Automatic localization of the optic disc in digital colour images of the human retina," M.S. thesis, Utrecht University, Utrecht, The Netherlands, 2005.
[12] C. Sinthanayothin, "Image analysis for automatic diagnosis of diabetic retinopathy," Ph.D. dissertation, University of London (King's College London), London, U.K., 1999.
[13] T. Walter and J.-C. Klein, "Segmentation of color fundus images of the human retina: Detection of the optic disc and the vascular tree using morphological techniques," in Proc. 2nd Int. Symp. Med. Data Anal., 2001, pp. 282–287.
[14] R. Chrástek, M. Wolf, K. Donath, G. Michelson, and H. Niemann, "Optic disc segmentation in retinal images," in Bildverarbeitung für die Medizin 2002, pp. 263–266, 2002.
[15] A. Osareh, "Automated identification of diabetic retinal exudates and the optic disc," Ph.D. dissertation, Department of Computer Science, Faculty of Engineering, University of Bristol, Bristol, U.K., 2004.
[16] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, "Comparison of colour spaces for optic disc localisation in retinal images," in Proc. 16th Int. Conf. Pattern Recognition, 2002, pp. 743–746.
[17] S. F. Barrett, E. Naess, and T. Molvik, "Employing the Hough transform to locate the optic disk," Biomed. Sci. Instrum., vol. 37, pp. 81–86, 2001.
[18] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abràmoff, "Comparative study of retinal vessel segmentation methods on a new publicly available database," in Proc. SPIE Med. Imag., J. M. Fitzpatrick and M. Sonka, Eds., 2004, vol. 5370, pp. 648–656.
[19] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951–958, Aug. 2003.
[20] A. Hoover and M. Goldbaum, "Fuzzy convergence," in Proc. IEEE Computer Soc. Conf. Computer Vis. Pattern Recognit., Santa Barbara, CA, 1998, pp. 716–721.
[21] M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure," IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1189–1195, Oct. 2004.
[22] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter, and R. Jain, "Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images," in Proc. IEEE Int. Congress Image Process., Los Alamitos, CA, 1996, vol. 3, pp. 695–698.

[23] K. W. Tobin, E. Chaum, V. P. Govindasamy, T. P. Karnowski, and O. Sezer, "Characterization of the optic disc in retinal imagery using a probabilistic approach," in Proc. SPIE Med. Imag. 2006: Image Process., J. M. Reinhardt and J. P. W. Pluim, Eds., 2006, vol. 6144, pp. 1088–1097.
[24] M. D. Abràmoff and M. Niemeijer, "The automatic detection of the optic disc location in retinal images using optic disc location regression," in Proc. IEEE EMBC 2006, Aug. 2006, pp. 4432–4435.
[25] Research Section, Digital Retinal Image for Vessel Extraction (DRIVE) Database, University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE
[26] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, "A comparative evaluation of preprocessing methods for automatic detection of retinal anatomy," in Proc. 5th Int. Conf. Informatics Syst. (INFOS2007), Mar. 24–26, 2007, pp. 24–30.
[27] D. Wu, M. Zhang, J.-C. Liu, and W. Bauman, "On the adaptive detection of blood vessels in retinal images," IEEE Trans. Biomed. Eng., vol. 53, no. 2, pp. 341–343, Feb. 2006.
[28] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. Med. Imag., vol. 8, no. 3, pp. 263–269, Sep. 1989.

[29] N. Otsu, "A threshold selection method from gray level histograms," IEEE Trans. Syst., Man, Cybern., vol. SMC-9, no. 1, pp. 62–66, Jan. 1979.
[30] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, "Comparative study of contrast enhancement and illumination equalization methods for retinal vasculature segmentation," presented at the 3rd Cairo Int. Biomed. Eng. Conf. (CIBEC'06), Cairo, Egypt, Dec. 21–24, 2006.
[31] A. Hoover, V. Kouznetsova, and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203–210, Mar. 2000.
[32] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar Jr., H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1214–1222, Sep. 2006.
[33] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, "Automatic segmentation of the retinal vasculature using a large-scale support vector machine," in Proc. 2007 IEEE Pacific Rim Conf. Commun., Computers Signal Process., Victoria, BC, Canada, Aug. 22–24, 2007.
