Abstract
Objective To construct a pathological image diagnostic model of cervical adenocarcinoma in situ (CAIS) based on deep learning algorithms.
Methods Pathological sections of CAIS lesions and of normal cervical canal glands from patients with chronic cervicitis, archived in the Department of Pathology, Shengjing Hospital of China Medical University from January 2019 to December 2021, were retrospectively collected. After image acquisition, the images were randomly divided into a training set, a validation set, and a test set at a ratio of 4∶3∶3. Using the training and validation sets, six network models (VGG16, VGG19, Inception V3, Xception, ResNet50, and DenseNet201) underwent transfer learning and parameter tuning to build binary convolutional neural network classifiers capable of recognizing CAIS pathological images; the models were then combined in pairs to build ensemble learning models. On the test set, the performance of the single models and the ensemble models in recognizing CAIS pathological images was evaluated by operation time, accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC).
Results A total of 104 pathological sections of CAIS and 90 sections of normal cervical canal glands from patients with chronic cervicitis met the inclusion and exclusion criteria. In all, 500 pathological images of CAIS and 500 of normal cervical canal glands were collected, of which 400, 300, and 300 images were assigned to the training, validation, and test sets, respectively. Among the six models, ResNet50 achieved the highest accuracy (87.33%), precision (90.00%), F1 score (86.90%), and AUC (0.87), the second-highest recall (84.00%), and a comparatively short operation time (2062.04 s), giving the best overall performance; VGG19 ranked second, while Inception V3 and Xception performed worst. Among the six ensemble learning models, the ResNet50 + DenseNet201 ensemble performed best overall, with accuracy, precision, recall, F1 score, and AUC of 89.67%, 84.67%, 94.07%, 89.12%, and 0.90, respectively; the VGG19 + ResNet50 ensemble ranked second.
Conclusions Constructing CAIS pathological image recognition models with deep learning algorithms is feasible, and the ResNet50 model shows relatively high overall performance. Ensemble learning can improve the recognition of pathological images over single models.
Author contributions: Liu Chang designed the study, collected clinicopathological data, performed pathological diagnosis and interpretation, and drafted the manuscript; Zheng Yuchao designed the study, built the models, and analyzed the data; Xie Wenqian collected clinicopathological data and acquired the images; Li Chen and Li Xiaohan supervised the study design and revised the manuscript.
Conflicts of interest: All authors declare no conflicts of interest.
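The 4∶3∶3 random split described in the Methods can be sketched in plain Python. This is a minimal illustration only; the fixed seed and the use of index lists are assumptions, not details from the study:

```python
import random

def split_dataset(items, ratios=(4, 3, 3), seed=42):
    """Shuffle items and split them into train/validation/test by the given ratios."""
    rng = random.Random(seed)  # fixed seed for reproducibility (assumption)
    items = list(items)
    rng.shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 1000 images (500 CAIS + 500 normal) -> 400 / 300 / 300
train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 400 300 300
```

With 1000 pooled images, this split reproduces the 400/300/300 set sizes reported in the Results.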
Figure 6  Confusion matrices of the ensemble learning models for test set image recognition
CAIS, normal: same as in Figure 4
Table 1  Evaluation metrics of the six models for test set image recognition

Model         Operation time (s)  Accuracy (%)  Precision (%)  Recall (%)  F1 score (%)  AUC
VGG16         2071.33             80.33         86.40          72.00       78.55         0.80
VGG19         2147.58             84.67         84.67          84.67       84.67         0.85
Inception V3  2115.85             64.67         61.83          76.67       68.45         0.65
ResNet50      2062.04             87.33         90.00          84.00       86.90         0.87
Xception      2061.44             65.00         73.68          46.67       57.14         0.65
DenseNet201   2124.49             80.33         80.13          80.67       80.40         0.80
AUC: area under the curve

Table 2  Evaluation metrics of the six ensemble learning models for test set image recognition

Ensemble learning model  Accuracy (%)  Precision (%)  Recall (%)  F1 score (%)  AUC
VGG16 + VGG19            83.67         77.33          88.55       82.56         0.84
VGG16 + ResNet50         86.00         79.33          91.54       85.00         0.86
VGG16 + DenseNet201      86.33         82.00          89.78       85.71         0.86
VGG19 + ResNet50         88.33         87.33          89.12       88.22         0.88
VGG19 + DenseNet201      87.33         86.00          88.36       87.16         0.87
ResNet50 + DenseNet201   89.67         84.67          94.07       89.12         0.90
AUC: same as in Table 1
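As a sanity check on Table 1, the F1 score is the harmonic mean of precision and recall, so the reported F1 values can be reproduced directly from the reported precision and recall (a minimal sketch, not code from the study):

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall; inputs and output in percent."""
    return 2 * precision * recall / (precision + recall)

# ResNet50: precision 90.00%, recall 84.00% -> F1 86.90% (Table 1)
print(round(f1_score(90.00, 84.00), 2))  # 86.9
# VGG16: precision 86.40%, recall 72.00% -> F1 78.55% (Table 1)
print(round(f1_score(86.40, 72.00), 2))  # 78.55
```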
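The abstract does not specify how the paired models in Table 2 are combined. A common choice for pairing two binary classifiers is soft voting, i.e. averaging their predicted probabilities before thresholding; the sketch below is illustrative only and the per-image probabilities are invented, not the authors' method or data:

```python
def soft_vote(probs_a, probs_b, threshold=0.5):
    """Average the positive-class probabilities of two models,
    then threshold the mean to obtain the ensemble's binary predictions."""
    avg = [(a + b) / 2 for a, b in zip(probs_a, probs_b)]
    return [1 if p >= threshold else 0 for p in avg]

# Hypothetical per-image CAIS probabilities from two models
resnet50_probs = [0.92, 0.30, 0.55, 0.10]
densenet201_probs = [0.80, 0.60, 0.35, 0.20]
print(soft_vote(resnet50_probs, densenet201_probs))  # [1, 0, 0, 0]
```

Averaging lets a confident model outvote a borderline one, which is one way an ensemble can improve recall over either single model, as seen for ResNet50 + DenseNet201 in Table 2.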