Foveal avascular zone segmentation in optical coherence tomography angiography images using a deep learning approach

In this retrospective comparative study, 104 eyes of 69 diabetic patients with different stages of DR and 12 eyes from 12 healthy subjects were selected for the training/validation database. Thirty-seven eyes (10 eyes from 6 normal subjects and 27 eyes from 19 diabetic patients) were used for the final evaluation of the trained system. The study was approved by the Iran University of Medical Sciences Ethics Committee (IR.IUMS.REC.1398.078) and adhered to the tenets of the Declaration of Helsinki. Informed consent was obtained from all participants.

All OCTA images were obtained using the RTVue XR 100 Avanti instrument (version 2017.1.0.151; Optovue, Inc., Fremont, CA, USA). Images of patients with significant media opacity, refractive error beyond ±3 diopters of spherical equivalent, or image quality lower than 5 were excluded from the study. The inner retinal slab, from the internal limiting membrane (ILM) to an offset of 9 µm below the outer plexiform layer (OPL), was automatically segmented in en face 3 × 3 mm OCTA images. The ILM and OPL segmentations were manually corrected if needed, as described elsewhere9,10.

Automated FAZ measurements were performed using AngioVue, the device’s built-in commercial software, which measures the FAZ area automatically via the “Measure: FAZ” tool. Upon detection of the FAZ, the software adds a yellow overlay delineating its border to the en face image of the inner retinal slab. In addition to the FAZ area, the signal strength of the images, the presence of cystic changes in the foveal center, and the presence of diabetic macular edema (thickness greater than 320 µm) were recorded.

The raw OCTA images were then exported and transferred to ImageJ software (http://imagej.nih.gov/ij/; provided in the public domain by the National Institutes of Health, Bethesda, MD, USA) for manual measurements. All manual measurements were conducted by a skilled grader (RM) and rechecked by another independent grader (PA).
In case of any dispute, a senior grader (KGF) corrected the outline of the FAZ area. All manual measurements were performed before running the deep learning method.

Model training

A total of 126 en face OCTA images (104 with diabetic retinopathy and 22 from healthy subjects) were used as the training dataset. The ground-truth pixel labeling was based on manual segmentation of the FAZ, which divided the pixels into FAZ and non-FAZ labels.

Detectron2, an open-source modular object detection library developed by the Facebook AI Research (FAIR) team11, was used for deep learning-based image segmentation. Detectron2 is a software system that implements state-of-the-art object detection algorithms in three distinct blocks that perform semantic and instance segmentation. The first block is based on the Feature Pyramid Network (FPN)12 implemented on a ResNet-50 network13. The FPN extracts features at predefined spatial resolutions to construct a feature pyramid, parallel to selected feature maps in the forward layers of the underlying convolutional neural network (CNN) but containing rich semantics at all levels. In the second block, a Cascade/Mask R-CNN on top of the FPN is used for segmentation: the proposed regions of interest undergo an operation called Region of Interest Align (RoIAlign) before Mask R-CNN is applied to each pyramid level separately. In the final block, a lightweight dense prediction branch is used on top of the same FPN features to merge the different layers into a pixel-wise output. A simplified flowchart of the model is illustrated in Fig. 1.

Figure 1. Simplified flowchart of the deep learning model simulating the steps for a single image.

The CNN, pre-trained on the COCO dataset14, was implemented using Python (version 3.6) on a cloud computing service (Google Colab).
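Although the training code itself is not published with the paper, a Detectron2 pipeline of the kind described above (ResNet-50 + FPN backbone, Mask R-CNN head, COCO-pretrained weights, single foreground class) is typically configured along the following lines. This is a configuration sketch only: the dataset names, batch size, and solver settings are illustrative assumptions, not the authors' reported values.

```python
# Sketch of a Detectron2 config for single-class (FAZ) instance
# segmentation with a COCO-pretrained Mask R-CNN R-50-FPN model.
# Dataset names and solver values are hypothetical placeholders.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# ResNet-50 backbone with FPN, as described in the text
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
# Start from COCO-pretrained weights (transfer learning)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("faz_train",)  # hypothetical registered dataset
cfg.DATASETS.TEST = ("faz_test",)    # hypothetical registered dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # single class: FAZ
cfg.SOLVER.IMS_PER_BATCH = 2         # assumed batch size
cfg.SOLVER.BASE_LR = 0.00025         # assumed learning rate
cfg.SOLVER.MAX_ITER = 1000           # assumed iteration budget

# Training would then be launched with Detectron2's DefaultTrainer:
# from detectron2.engine import DefaultTrainer
# trainer = DefaultTrainer(cfg); trainer.resume_or_load(); trainer.train()
```

Random horizontal flipping, mentioned below as the augmentation strategy, is applied by Detectron2's default training data loader.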
Data augmentation strategies, including random flips, were used to compensate for the relatively small sample size and to avoid model overfitting.

Model metrics

Thirty-seven en face OCTA images of 10 normal eyes and 27 eyes with diabetic retinopathy were used as the testing dataset for validation of the model. The measured FAZ area was recorded for every subject in the training and testing datasets.

To evaluate the accuracy of instance segmentation, the predicted FAZ masks of the training and test datasets were exported from the trained model. The Dice similarity coefficient (DSC) was then calculated based on the following formula15:

$$\text{DSC} = \frac{2\left( A \cap B \right)}{A + B}$$

where A is the predicted mask and B is the ground-truth mask based on manual segmentation. The DSC was evaluated for each pair of images (prediction and ground truth) separately, and the mean DSC across the training and testing datasets was calculated. This index is the most popular metric for quantifying similarity in image segmentation and provides an excellent estimate of the pixel overlap between the two images.

Statistical analysis

All statistical analyses were conducted using SPSS version 22.0 (IBM Corp., Armonk, NY, USA) and Excel 2013 (Microsoft, Redmond, WA, USA). Bland–Altman plots with 95% limits of agreement (95% LoA) were used to illustrate the agreement between measurements; GraphPad Prism version 8.0 was used for plotting the Bland–Altman graphs. In addition, the correlation coefficient was calculated to evaluate the consistency of FAZ measurements. To address inter-eye correlation in the enrolled bilateral cases, a generalized estimating equation (GEE) was used to assess factors affecting the differences observed between FAZ measurements from the different methods. A P value < 0.05 was considered significant.
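The DSC formula and the Bland–Altman 95% limits of agreement (mean difference ± 1.96 SD) can both be sketched in a few lines of NumPy. The masks and measurement pairs below are toy examples, not study data, and the function names are illustrative.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks
    A (prediction) and B (ground truth)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)
    between two sets of paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    m, s = diff.mean(), diff.std(ddof=1)
    return m, (m - 1.96 * s, m + 1.96 * s)

# Toy 4x4 masks: prediction overlaps ground truth on 2 of 3 pixels each
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[1, 1:4] = True    # 3 predicted pixels
truth[1, 0:3] = True   # 3 ground-truth pixels; overlap = 2
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667

# Toy paired FAZ-area measurements (mm²) from two methods
m, (lo, hi) = bland_altman_limits([1.0, 2.0, 3.0, 4.0],
                                  [1.1, 1.9, 3.2, 3.8])
print(m, lo, hi)
```

A DSC of 1.0 indicates identical masks; the index equals the intersection-over-union (IoU) only in the limit of perfect overlap, which is why the two metrics should not be conflated.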
