Volume 13 Issue 4
Jul.  2022
Citation: XIONG Xuechun, WU Huanwen, REN Fei, CUI Li, LIANG Zhiyong, ZHAO Ze. An Automatic Quantitative Analysis Method of Ki-67 Index for Breast Cancer Immunohistochemistry Based on Fusion of Spatial and Multi-scale Features[J]. Medical Journal of Peking Union Medical College Hospital, 2022, 13(4): 581-589. doi: 10.12290/xhyxzz.2022-0158

An Automatic Quantitative Analysis Method of Ki-67 Index for Breast Cancer Immunohistochemistry Based on Fusion of Spatial and Multi-scale Features

doi: 10.12290/xhyxzz.2022-0158
Funds:

National Key Research and Development Program of China 2021YFF1201005

Strategic Priority Research Program of the Chinese Academy of Sciences XDA16021400

Chinese Academy of Sciences Network Security and Informatization Special Application Demonstration Project CAS-WX2021SF-0101

More Information
  • Corresponding author: LIANG Zhiyong, E-mail: liangzy@pumch.cn; ZHAO Ze, E-mail: zhaoze@ict.ac.cn
  • Received Date: 2022-03-28
  • Accepted Date: 2022-05-26
  • Publish Date: 2022-07-30
  •   Objective  To propose an intelligent quantitative analysis method for the Ki-67 index on breast cancer immunohistochemical whole slide images (WSI).
      Methods  Pathological sections from patients with breast cancer diagnosed and treated at Peking Union Medical College Hospital from January 2020 to December 2020 were retrospectively collected and scanned at ×40 magnification to produce WSIs. The Ki-67 index was interpreted manually by 2 pathologists according to the guidelines formulated by the International Breast Cancer Ki-67 Working Group in 2019, and this result was taken as the gold standard. The WSIs were randomly divided into data sets A and B at a ratio of 5:8, and data set A was further randomly divided into training, validation and test sets at a ratio of 7:1:2. After the hot-spot areas in the WSIs of data set A were manually annotated, 2000 patches of 512×512 pixels were randomly cropped from each WSI at ×40 magnification, and 50 of these patches were randomly selected for tumor-cell labeling and Ki-67 index calculation. A hot-spot recognition model was constructed by extracting patch features with a pretrained ResNet34 model and fusing the spatial features of neighboring patches with a conditional random field, and its performance (accuracy) was evaluated on the test set. Within the identified hot-spot areas, 10 high-power fields (×40) were randomly selected; the model automatically classified the cells and calculated the mean Ki-67 index. With manual interpretation as the gold standard, the accuracy of the model's Ki-67 index estimates on data set B was calculated, and the Bland-Altman method was used to evaluate the agreement between manual interpretation and model analysis.
      Results  A total of 132 pathological sections from patients with breast cancer that met the inclusion and exclusion criteria were selected. Data set A contained 50 WSIs (35, 5 and 10 in the training, validation and test sets, comprising 70 000, 10 000 and 20 000 patches, respectively) and data set B contained 82 WSIs. The mean accuracy of the model in identifying hot spots on the test set was 81.5%, and the accuracy of its Ki-67 index calculation on data set B was 90.2%. Bland-Altman analysis showed good agreement between the Ki-67 index obtained by manual interpretation and that calculated by the model.
      Conclusion  The intelligent quantitative analysis method for the Ki-67 index proposed in this study has high accuracy and can assist pathologists in achieving efficient interpretation of the Ki-67 index.
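As a rough illustration of the patch-level pipeline described in Methods, the sketch below scores 512×512 patches with a ResNet34 backbone and then smooths the per-patch predictions over the WSI grid. This is not the authors' implementation: the neighbor-averaging step is a simple mean-field-style stand-in for the conditional random field used in the paper, and the class names, hyperparameters and torchvision version are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the hot-spot recognition
# step: a ResNet34 backbone scores each 512x512 patch, and the per-patch
# probabilities are then fused with their neighbors on the WSI patch grid.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class PatchScorer(nn.Module):
    """ResNet34 backbone with a binary head: hot spot vs. background."""

    def __init__(self) -> None:
        super().__init__()
        # torchvision >= 0.13; the paper used a pretrained ResNet34
        backbone = torchvision.models.resnet34(weights="IMAGENET1K_V1")
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)
        self.backbone = backbone

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, 3, 512, 512) -> per-patch logits (N, 2)
        return self.backbone(patches)


def smooth_patch_grid(logits: torch.Tensor, grid_hw: tuple, iters: int = 3,
                      pairwise_weight: float = 0.5) -> torch.Tensor:
    """Fuse each patch's prediction with its 4-neighbors on the WSI grid.

    A crude stand-in for the CRF inference described in the paper: each patch's
    class probabilities are repeatedly mixed with the average of its neighbors.
    """
    h, w = grid_hw  # logits must contain h * w patches in row-major order
    probs = F.softmax(logits, dim=-1).reshape(1, h, w, 2).permute(0, 3, 1, 2)
    kernel = torch.tensor([[0.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 0.0]]) / 4.0
    kernel = kernel.view(1, 1, 3, 3).repeat(2, 1, 1, 1).to(probs)
    for _ in range(iters):
        neighbor_avg = F.conv2d(probs, kernel, padding=1, groups=2)
        probs = (1 - pairwise_weight) * probs + pairwise_weight * neighbor_avg
        probs = probs / probs.sum(dim=1, keepdim=True)  # renormalize
    return probs.permute(0, 2, 3, 1).reshape(h * w, 2)
```

In this sketch a WSI is represented as an h×w grid of patches; in practice the backbone would first be fine-tuned on the annotated hot-spot patches of data set A, and the smoothed grid probabilities would then be thresholded to delineate hot-spot regions.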
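The evaluation step can likewise be sketched with the standard definitions (generic code, not taken from the study): the Ki-67 index of a field is the fraction of Ki-67-positive tumor cells among all tumor cells, the slide-level value is the mean over the 10 selected high-power fields, and Bland-Altman agreement is summarized by the mean difference (bias) and its 95% limits of agreement. The numeric values in the usage example are placeholders, not data from the study.

```python
# Minimal sketch of the Ki-67 index calculation and Bland-Altman agreement
# analysis, under the usual definitions (bias +/- 1.96 SD of the differences).
import numpy as np


def ki67_index(n_positive_tumor_cells: int, n_tumor_cells: int) -> float:
    """Ki-67 index of one high-power field: positive tumor cells / all tumor cells."""
    return n_positive_tumor_cells / n_tumor_cells


def bland_altman(manual: np.ndarray, model: np.ndarray):
    """Return (bias, lower limit, upper limit) for paired Ki-67 measurements."""
    diff = model - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


# Usage with placeholder values (illustrative only, not results from the study):
# per-field (positive, total) tumor-cell counts from one slide's 10 fields,
# averaged to a slide-level index, then paired manual vs. model readings.
fields = [(32, 180), (40, 200), (25, 150)]
slide_index = float(np.mean([ki67_index(p, t) for p, t in fields]))
bias, lo, hi = bland_altman(np.array([0.20, 0.35, 0.18]),
                            np.array([0.22, 0.33, 0.17]))
print(f"slide Ki-67 = {slide_index:.3f}, bias = {bias:.3f}, LoA = [{lo:.3f}, {hi:.3f}]")
```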