An Automatic Quantitative Analysis Method of Ki-67 Index for Breast Cancer Immunohistochemistry Based on Fusion of Spatial and Multi-scale Features
Abstract:
Objective To propose an intelligent quantitative analysis method of the Ki-67 index for breast cancer immunohistochemical whole slide images (WSI). Methods Pathological sections of patients with breast cancer diagnosed and treated at Peking Union Medical College Hospital from January to December 2020 were retrospectively collected and scanned at 40× magnification into WSIs. Two pathologists manually interpreted the Ki-67 index according to the guidelines formulated by the International Breast Cancer Ki-67 Working Group in 2019, and these readings served as the gold standard. The WSIs were randomly divided into data sets A and B at a ratio of 5:8 (data set A was further split into training, validation, and test sets at a ratio of 7:1:2). After pathologists manually annotated the hot spot regions in the WSIs of data set A, 2000 patches of 512×512 pixels were randomly cropped from each WSI under the 40× field of view, and 50 of these patches were randomly selected for tumor-cell labeling and Ki-67 index calculation.
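The patch-sampling step described above can be sketched as follows. This is a minimal illustration only, assuming the annotated 40× region has already been read into a numpy array (in practice a WSI library such as OpenSlide would supply it); the function name `random_patches` is hypothetical:

```python
import numpy as np

def random_patches(region: np.ndarray, n: int, size: int = 512, seed: int = 0):
    """Randomly crop n size×size patches from an annotated region (H×W×3).

    A simplified stand-in for the paper's sampling of 2000 patches per WSI;
    the region array here substitutes for pixels read from the slide at 40x.
    """
    rng = np.random.default_rng(seed)
    h, w = region.shape[:2]
    patches = []
    for _ in range(n):
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        patches.append(region[y:y + size, x:x + size])
    return patches

# Stand-in for a 40x hot spot region of a WSI
region = np.zeros((2048, 2048, 3), dtype=np.uint8)
patches = random_patches(region, n=5)
print(len(patches), patches[0].shape)  # 5 (512, 512, 3)
```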
A conditional random field model was used to fuse the spatial features of the patches, a hot spot recognition model was built on features extracted by a pre-trained ResNet34, and its performance (accuracy) was evaluated on the test set. Within the hot spot regions, 10 fields of view were randomly selected under the high-power field of view (×40), and the model automatically classified the cells and calculated the mean Ki-67 index. Taking manual interpretation as the gold standard, the accuracy of the model's Ki-67 index estimates on data set B was calculated, and the Bland-Altman method was used to evaluate the agreement between manual interpretation and model analysis. Results A total of 132 pathological sections of patients with breast cancer meeting the inclusion and exclusion criteria were selected: 50 WSIs in data set A (35, 5, and 10 in the training, validation, and test sets, containing 70 000, 10 000, and 20 000 patches, respectively) and 82 in data set B. The model's average accuracy for hot spot identification on the test set was 81.5%, and its accuracy for Ki-67 index calculation on data set B was 90.2%. Bland-Altman analysis showed good agreement between the manually interpreted and model-calculated Ki-67 indices. Conclusion The intelligent quantitative analysis method of the Ki-67 index proposed in this study has high accuracy and can assist pathologists in achieving efficient interpretation of the Ki-67 index.
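The evaluation described above averages a per-field Ki-67 index (positive tumor cells over all tumor cells) across the 10 sampled fields and checks model-versus-manual agreement with the Bland-Altman method. A minimal sketch under those assumptions; the helper names and all counts/readings below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def ki67_index(positive_counts, total_counts):
    """Mean Ki-67 index over fields: positive tumor cells / all tumor cells."""
    pos = np.asarray(positive_counts, dtype=float)
    tot = np.asarray(total_counts, dtype=float)
    return float(np.mean(pos / tot))

def bland_altman(manual, model):
    """Bias and 95% limits of agreement between two sets of readings."""
    diff = np.asarray(model, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical cell counts for 10 random 40x fields of one WSI
positives = [30, 22, 41, 18, 25, 33, 28, 19, 36, 24]
totals = [100, 90, 120, 80, 95, 110, 105, 85, 115, 100]
index = ki67_index(positives, totals)

# Hypothetical paired Ki-67 readings (fractions) for several slides
manual = [0.30, 0.15, 0.42, 0.08, 0.25]
model = [0.28, 0.17, 0.40, 0.10, 0.27]
bias, loa_low, loa_high = bland_altman(manual, model)
```

Good agreement corresponds to a bias near zero with narrow limits of agreement containing most paired differences.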
Key words: breast cancer / immunohistochemistry / Ki-67 index / quantitative analysis
Author contributions: Xiong Xuechun implemented the artificial intelligence analysis workflow and drafted the manuscript; Wu Huanwen collected and annotated the pathological images and drafted the manuscript; Ren Fei conceived the topic and revised the manuscript; Cui Li provided technical guidance on the analysis method; Liang Zhiyong designed the pathological diagnosis workflow and evaluated the results; Zhao Ze supervised the intelligent analysis method, designed the deep-learning schemes, and revised and reviewed the manuscript. Conflicts of interest: all authors declare no conflicts of interest.
Figure 5 Overall framework of the hot spot region recognition model for breast cancer WSI
WSI: same as in Figure 2
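The framework combines per-patch ResNet34 features with a conditional random field that fuses spatial context, so that neighboring patches on the WSI grid influence each other's hot spot label. The sketch below is only a simplified numpy stand-in for that idea (iterative neighbor smoothing of per-patch probabilities, in the spirit of mean-field CRF inference), not the paper's actual model; `fuse_spatial` and its parameters are hypothetical:

```python
import numpy as np

def fuse_spatial(prob_grid: np.ndarray, weight: float = 0.5, iters: int = 5):
    """Smooth per-patch hot spot probabilities over the WSI patch grid.

    Each iteration blends every patch's probability with the average of its
    4-connected neighbours, encouraging spatially coherent hot spot regions.
    """
    p = prob_grid.astype(float).copy()
    for _ in range(iters):
        padded = np.pad(p, 1, mode="edge")  # replicate edges
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        p = (1 - weight) * p + weight * neigh
    return p

# Toy 3x3 grid of per-patch probabilities (e.g. from a patch classifier):
# the isolated low value at the centre gets pulled up by its neighbours.
grid = np.array([[0.90, 0.80, 0.10],
                 [0.85, 0.20, 0.05],
                 [0.90, 0.80, 0.10]])
fused = fuse_spatial(grid)
```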