
Image Recognition Method of Cervical Adenocarcinoma in Situ Based on Deep Learning

LIU Chang, ZHENG Yuchao, XIE Wenqian, LI Chen, LI Xiaohan

Citation: LIU Chang, ZHENG Yuchao, XIE Wenqian, LI Chen, LI Xiaohan. Image Recognition Method of Cervical Adenocarcinoma in Situ Based on Deep Learning[J]. Medical Journal of Peking Union Medical College Hospital, 2023, 14(1): 159-167. doi: 10.12290/xhyxzz.2022-0109

doi: 10.12290/xhyxzz.2022-0109

    Corresponding authors:

    LI Chen, E-mail: lichen@bmie.neu.edu.cn

    LI Xiaohan, E-mail: li_xiaohan1975@hotmail.com

  • CLC number: R737
  • Abstract:   Objective  To construct a pathological image diagnosis model for cervical adenocarcinoma in situ (CAIS) based on deep learning algorithms.  Methods  Pathological slides of lesion tissue from CAIS patients and of normal endocervical glands from patients with chronic cervicitis, archived in the Department of Pathology of Shengjing Hospital of China Medical University from January 2019 to December 2021, were collected retrospectively. After image acquisition, the images were randomly divided into training, validation and test sets at a ratio of 4∶3∶3. The training and validation sets were used for transfer-learning training and parameter tuning of six network models (VGG16, VGG19, Inception V3, Xception, ResNet50 and DenseNet201) to construct binary convolutional neural network classifiers capable of recognizing CAIS pathological images; these models were then combined to build ensemble learning models. On the test set, the models' performance in recognizing CAIS pathological images was evaluated by computation time, accuracy, precision, recall, F1 score and area under the receiver operating characteristic curve (AUC).  Results  In total, 104 pathological slides from CAIS patients and 90 slides of normal endocervical glands from patients with chronic cervicitis met the inclusion and exclusion criteria, from which 500 CAIS images and 500 normal endocervical gland images were collected; 400, 300 and 300 images were assigned to the training, validation and test sets, respectively. Among the six models, ResNet50 achieved the highest accuracy (87.33%), precision (90.00%), F1 score (86.90%) and AUC (0.87), the second-highest recall (84.00%) and a relatively short computation time (2062.04 s), giving the best overall performance; VGG19 ranked second, and Inception V3 and Xception performed worst. Among the six ensemble learning models, the ensemble of ResNet50 and DenseNet201 performed best overall, with accuracy, precision, recall, F1 score and AUC of 89.67%, 84.67%, 94.07%, 89.12% and 0.90, respectively; the ensemble of VGG19 and ResNet50 ranked second.  Conclusions  Constructing a CAIS pathological image recognition model with deep learning algorithms is feasible, and the ResNet50 model shows relatively high overall performance. Ensemble learning can improve the recognition performance of a single model on pathological images.
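    The article does not publish its training code; the following minimal sketch, written in Python with TensorFlow/Keras (an assumption, since the abstract does not name the framework), illustrates the kind of transfer-learning binary classifier described in the Methods, shown here with the ResNet50 backbone. The directory layout, image size and hyperparameters are illustrative assumptions rather than the authors' settings, and the other five backbones can be swapped in the same way.

        # Minimal transfer-learning sketch (assumed framework: TensorFlow/Keras; settings are illustrative).
        import tensorflow as tf
        from tensorflow.keras import layers, models

        IMG_SIZE = (224, 224)   # assumed input size
        BATCH = 32

        # Assumed directory layout: data/{train,val}/{CAIS,normal}/*.png
        train_ds = tf.keras.utils.image_dataset_from_directory(
            "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
        val_ds = tf.keras.utils.image_dataset_from_directory(
            "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

        # ImageNet-pretrained backbone with the classification head removed (transfer learning).
        backbone = tf.keras.applications.ResNet50(
            include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
        backbone.trainable = False   # freeze pretrained weights; fine-tuning can be enabled later

        inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
        x = tf.keras.applications.resnet50.preprocess_input(inputs)
        x = backbone(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)   # binary output: CAIS vs. normal glands
        model = models.Model(inputs, outputs)

        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(train_ds, validation_data=val_ds, epochs=20)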
    Author contributions: LIU Chang was responsible for study design, collection of clinicopathological data, pathological diagnosis and interpretation, and manuscript writing; ZHENG Yuchao was responsible for study design, model construction and data analysis; XIE Wenqian was responsible for clinicopathological data collection and image acquisition; LI Chen and LI Xiaohan supervised the study design and reviewed the manuscript.
    Conflicts of interest: All authors declare no conflicts of interest.
  • Figure 1  Histopathological images of cervical adenocarcinoma in situ (×100)

    A. normal endocervical tissue (HE staining, arrow); B. cervical adenocarcinoma in situ tissue (HE staining, arrow); C. immunohistochemistry showing strong positive P16 expression; D. immunohistochemistry showing positive carcinoembryonic antigen; E. immunohistochemistry showing negative estrogen receptor; F. immunohistochemistry showing an elevated Ki-67 index (approximately 90%)

    Figure 2  Training results of the six models

    Figure 3  Receiver operating characteristic curves of the six models on the test set images

    Figure 4  Confusion matrices of the six models on the test set images

    CAIS: cervical adenocarcinoma in situ; normal: normal endocervical glands

    Figure 5  Receiver operating characteristic curves of the ensemble learning models

    Figure 6  Confusion matrices of the ensemble learning models on the test set images

    CAIS, normal: same as Figure 4

    Table 1  Evaluation metrics of the six models on the test set images

    Model  Computation time (s)  Accuracy (%)  Precision (%)  Recall (%)  F1 score (%)  AUC
    VGG16 2071.33 80.33 86.40 72.00 78.55 0.80
    VGG19 2147.58 84.67 84.67 84.67 84.67 0.85
    Inception V3 2115.85 64.67 61.83 76.67 68.45 0.65
    ResNet50 2062.04 87.33 90.00 84.00 86.90 0.87
    Xception 2061.44 65.00 73.68 46.67 57.14 0.65
    DenseNet201 2124.49 80.33 80.13 80.67 80.40 0.80
    AUC: area under the curve
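    The metrics reported in Table 1 (accuracy, precision, recall, F1 score and AUC) can be reproduced from a model's test-set predictions with standard library calls. The sketch below assumes scikit-learn and uses toy label and probability arrays as placeholders; in practice, y_true and y_prob would come from the test set and the trained model.

        # Sketch of the evaluation metrics used above (assumes scikit-learn; arrays are toy placeholders).
        import numpy as np
        from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                     f1_score, roc_auc_score, confusion_matrix)

        y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                           # 1 = CAIS, 0 = normal
        y_prob = np.array([0.91, 0.12, 0.74, 0.35, 0.08, 0.46, 0.88, 0.61])   # predicted P(CAIS)
        y_pred = (y_prob >= 0.5).astype(int)                                  # hard labels at a 0.5 threshold

        print("Accuracy :", accuracy_score(y_true, y_pred))
        print("Precision:", precision_score(y_true, y_pred))
        print("Recall   :", recall_score(y_true, y_pred))
        print("F1 score :", f1_score(y_true, y_pred))
        print("AUC      :", roc_auc_score(y_true, y_prob))                    # AUC uses probabilities
        print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))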

    Table 2  Evaluation metrics of the six ensemble learning models on the test set images

    Ensemble learning model  Accuracy (%)  Precision (%)  Recall (%)  F1 score (%)  AUC
    VGG16+VGG19 83.67 77.33 88.55 82.56 0.84
    VGG16+ResNet50 86.00 79.33 91.54 85.00 0.86
    VGG16+DenseNet201 86.33 82.00 89.78 85.71 0.86
    VGG19+ResNet50 88.33 87.33 89.12 88.22 0.88
    VGG19+DenseNet201 87.33 86.00 88.36 87.16 0.87
    ResNet50+DenseNet201 89.67 84.67 94.07 89.12 0.90
    AUC: same as Table 1
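    The abstract states that pairs of trained models were combined into ensemble models but does not detail the combination rule. A common choice, assumed here for illustration only, is soft voting: averaging the per-image predicted probabilities of the two models before thresholding. The model objects and dataset named in the usage comment are hypothetical.

        # Soft-voting ensemble sketch (an assumed combination rule; the paper's exact method may differ).
        def ensemble_predict(model_a, model_b, test_ds):
            """Average the sigmoid outputs of two trained Keras models over the same test dataset."""
            prob_a = model_a.predict(test_ds).ravel()
            prob_b = model_b.predict(test_ds).ravel()
            return (prob_a + prob_b) / 2.0

        # Hypothetical usage with trained ResNet50 and DenseNet201 models:
        # y_prob = ensemble_predict(resnet50_model, densenet201_model, test_ds)
        # y_pred = (y_prob >= 0.5).astype(int)   # evaluate with the metrics shown after Table 1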
Publication history
  • Received: 2022-03-10
  • Accepted: 2022-05-26
  • Published online: 2022-09-20
  • Issue date: 2023-01-30

