Volume 13 Issue 4
Jul. 2022
Citation: ZHOU Yanyan, DENG Yang, BAO Ji, BU Hong. Trusted Artificial Intelligence for Pathology: From Theory to Practice[J]. Medical Journal of Peking Union Medical College Hospital, 2022, 13(4): 525-529. doi: 10.12290/xhyxzz.2022-0184

Trusted Artificial Intelligence for Pathology: From Theory to Practice

doi: 10.12290/xhyxzz.2022-0184
Funds:

  • Technological Innovation Project of Chengdu New Industrial Technology Research Institute (2017-CY02-00026-GX)
  • 1·3·5 Project for Disciplines of Excellence Clinical Research Incubation Project, West China Hospital, Sichuan University (20HXFH029)
  • 1·3·5 Project for Disciplines of Excellence, West China Hospital (ZYGD18012)

More Information
  • Corresponding author: BAO Ji, E-mail: baoji@scu.edu.cn
  • Received Date: 2022-04-06
  • Accepted Date: 2022-05-26
  • Available Online: 2022-06-10
  • Publish Date: 2022-07-30
  • Abstract: Artificial intelligence (AI) has gradually permeated every aspect of pathology research, yet its practical application still faces several problems. 1. Research institutions rightly emphasize data privacy, so data remain locked in isolated "data islands," which limits the data available for training AI models. 2. Existing AI models lack interpretability, so users cannot understand their decisions and human-computer interaction is difficult. 3. Current models make insufficient use of multimodal data, which limits further improvement of their predictive performance. To address these challenges, we propose introducing the latest techniques of trusted artificial intelligence (TAI) into pathological AI research, in three respects. 1. Share data securely: with federated learning, each institution contributes only the results of local training rather than the data themselves, breaking down data islands and greatly increasing the amount of data usable for training without compromising data security. 2. Make AI interpretable: graph neural networks can emulate the way pathologists learn to make diagnoses, so that the model itself is interpretable. 3. Fuse multimodal information: knowledge graphs can integrate and deepen the analysis of more diverse and comprehensive data sources, yielding more accurate models. Through these three measures, pathological AI can become reliable and controllable, with responsibility clearly assigned, thereby promoting its development and clinical application. Minimal illustrative sketches of the three techniques are given below.
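To make the federated-learning point concrete, the following is a minimal sketch of federated averaging, not the authors' implementation: several simulated hospitals fit a simple logistic-regression model on their own private data and share only the updated weights, which a central server averages. The helper names (train_locally, federated_round) and the toy data are invented for illustration.

    # Minimal federated-averaging sketch (illustrative, not the authors' implementation):
    # each simulated hospital fits a logistic-regression model on its own private data
    # and shares only the updated weights; raw data never leave the institution.
    import numpy as np

    def train_locally(global_weights, features, labels, lr=0.1, epochs=5):
        """One client's local update on data that stays on site."""
        w = global_weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-features @ w))          # sigmoid predictions
            grad = features.T @ (preds - labels) / len(labels)   # log-loss gradient
            w -= lr * grad
        return w, len(labels)

    def federated_round(global_weights, client_data):
        """Server step: aggregate client weights, weighted by local sample counts."""
        updates, sizes = [], []
        for features, labels in client_data:                     # one tuple per hospital
            w, n = train_locally(global_weights, features, labels)
            updates.append(w)
            sizes.append(n)
        return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

    # Toy run with three simulated hospitals; real inputs would be slide-level features.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50).astype(float))
               for _ in range(3)]
    weights = np.zeros(8)
    for _ in range(10):
        weights = federated_round(weights, clients)
    print("aggregated weights after 10 rounds:", np.round(weights, 3))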
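The interpretability argument can be sketched in the same spirit. Below, tissue patches are treated as nodes of a graph, spatially adjacent patches are connected, and a per-patch importance weight pools the graph into a slide-level prediction, so the contribution of each region can be read off. All features and weights are random placeholders; a real model would learn them from annotated slides, and this toy architecture only stands in for the graph neural networks the abstract refers to.

    # Toy graph-neural-network sketch: tissue patches are nodes, spatial neighbours are
    # connected, and a per-patch importance weight pools the graph into a slide-level
    # prediction, making the model's attention over regions inspectable.
    import numpy as np

    rng = np.random.default_rng(1)

    X = rng.normal(size=(6, 4))            # 6 patches x 4 features (from a CNN encoder in practice)
    A = np.array([[1, 1, 0, 0, 0, 0],      # adjacency over neighbouring patches,
                  [1, 1, 1, 0, 0, 0],      # including self-loops
                  [0, 1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 1, 0],
                  [0, 0, 0, 1, 1, 1],
                  [0, 0, 0, 0, 1, 1]], dtype=float)
    A = A / A.sum(axis=1, keepdims=True)   # each node averages its neighbourhood

    W_msg = rng.normal(size=(4, 4))        # message-passing weights (learned in practice)
    w_attn = rng.normal(size=4)            # node-importance weights (learned in practice)
    w_out = rng.normal(size=4)             # slide-level classifier weights

    H = np.tanh(A @ X @ W_msg)             # one round of neighbourhood aggregation
    scores = np.exp(H @ w_attn)
    alpha = scores / scores.sum()          # per-patch importance, sums to 1
    slide_embedding = alpha @ H            # importance-weighted pooling
    logit = slide_embedding @ w_out

    print("patch importance:", np.round(alpha, 3))   # which patches drove the prediction
    print("slide-level logit:", round(float(logit), 3))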
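Finally, a knowledge graph for multimodal fusion can be as simple as a store of subject-relation-object triples that links morphology, immunohistochemistry, and clinical variables for one case. The entity and relation names below are hypothetical; the sketch only shows how heterogeneous findings end up in one queryable structure that a downstream model can consume.

    # Toy knowledge-graph sketch: heterogeneous findings are stored as
    # subject-relation-object triples so one queryable structure covers all modalities.
    # Entity and relation names are hypothetical examples.
    from collections import defaultdict

    class KnowledgeGraph:
        def __init__(self):
            self.by_subject = defaultdict(set)

        def add(self, subject, relation, obj):
            self.by_subject[subject].add((relation, obj))

        def facts_about(self, subject):
            """All triples known about one entity."""
            return sorted(self.by_subject[subject])

    kg = KnowledgeGraph()
    kg.add("case_001", "has_morphology", "invasive_ductal_carcinoma")   # slide finding
    kg.add("case_001", "has_grade", "grade_2")
    kg.add("case_001", "er_status", "positive")                         # immunohistochemistry
    kg.add("case_001", "her2_status", "negative")
    kg.add("case_001", "patient_age", "54")                             # clinical record

    # A fusion model would consume this unified view instead of separate, unlinked tables.
    for relation, obj in kg.facts_about("case_001"):
        print(f"case_001 --{relation}--> {obj}")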