Trusted Artificial Intelligence for Pathology: From Theory to Practice
Abstract
Artificial intelligence (AI) has gradually been integrated into every aspect of pathology research. However, several problems arise in the practical application of pathological AI. 1. Research institutions place great emphasis on protecting data privacy, which gives rise to data islands and hinders the training of AI models. 2. The lack of interpretability of existing AI models makes them hard for users to understand and impedes human-computer interaction. 3. AI models make insufficient use of multimodal data, which makes it difficult to further improve their predictive performance. To address these challenges, we propose introducing the latest techniques of trusted artificial intelligence (TAI) into existing pathological AI research, embodied in the following three aspects. 1. Secure data sharing. We aim to break down data islands while maintaining data protection: with federated learning, participating institutions share only the results of local model training rather than the data itself, greatly increasing the amount of data available for training without compromising data security. 2. Interpretable AI. Graph neural networks are used to simulate the process by which pathologists learn pathological diagnosis, making the model itself interpretable. 3. Multimodal information fusion. Knowledge graph techniques are used to integrate and analyze more diverse and comprehensive data sources in order to derive more accurate models. Through these three aspects, we can achieve reliable and controllable pathological AI and clarify responsibility through trusted pathological AI technology, thereby promoting the development and clinical application of pathological AI.
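
To make the data-sharing idea concrete, the following is a minimal sketch of federated averaging, assuming a toy logistic-regression model and three hypothetical institutions; the function names (train_local, federated_round) and all numbers are illustrative assumptions, not the implementation described in this work. Each site trains on its own private data and only the model parameters are exchanged with the server.

    # Minimal federated-averaging (FedAvg) sketch: sites share model weights,
    # never raw pathology data. Toy model and names are illustrative assumptions.
    import numpy as np

    def train_local(weights, X, y, lr=0.1, epochs=20):
        """A few epochs of logistic-regression gradient descent on one site's data."""
        w = weights.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
            grad = X.T @ (p - y) / len(y)         # gradient of the log-loss
            w -= lr * grad
        return w                                   # only parameters leave the site

    def federated_round(global_w, sites):
        """One communication round: local training, then weighted averaging by sample count."""
        local_ws, sizes = [], []
        for X, y in sites:
            local_ws.append(train_local(global_w, X, y))
            sizes.append(len(y))
        return np.average(local_ws, axis=0, weights=np.array(sizes, dtype=float))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Three hypothetical institutions with private features and labels.
        sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
        w = np.zeros(5)
        for _ in range(10):
            w = federated_round(w, sites)          # only weights are exchanged
        print("aggregated global weights:", w)

In this sketch the server never sees any site's (X, y); it only averages the locally trained weight vectors, which is the property that lets institutions collaborate without uploading the data itself.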