
Explainable AI in Healthcare: Leveraging Machine Learning and Knowledge Representation for Personalized Treatment Recommendations

Author(s): Shafiqul Islam, Tofayel Gonee Manik, Mohammad Moniruzzaman, Abu Saleh Muhammad Saimon, Sharmin Sultana, Mohammad Muzahidur Rahman Bhuiyan, Sazzat Hossain, Kamal Ahmed
Subject(s): Health and medicine and law, ICT Information and Communications Technologies
Published by: Transnational Press London
Keywords: Explainable AI (XAI); Machine Learning; Personalized Treatment Recommendations; Knowledge Representation; Knowledge Graphs; SHAP; Clinical Decision Support Systems; Healthcare AI

Summary/Abstract: This research presents an advanced framework that combines Explainable Artificial Intelligence (XAI), machine learning algorithms, and knowledge representation techniques to improve personalized treatment recommendations in healthcare. Random Forest, XGBoost, and Deep Neural Networks (DNN) are used to predict optimal treatment plans, while SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) provide means of explaining the models. A method is implemented that uses knowledge graphs together with the SNOMED CT and UMLS ontologies to structure patient data and disease-treatment relationships. The proposed framework is trained and tested on the MIMIC-III and eICU Collaborative Research Databases, utilizing over 50,000 patient records to assess its performance. Model performance is evaluated using accuracy, F1-score, and AUC-ROC, with SHAP scores used to measure model explainability. Results show a 25% improvement in interpretability ratings by healthcare professionals and a 17.6% improvement in predictive accuracy over traditional AI models. This study bridges the representation gap in AI-driven recommendations and brings them closer to aiding clinical decision-making, improving transparency and trust in AI-assisted healthcare. While integrating knowledge graphs and explainable AI techniques can improve model performance and clinician adoption, training AI on limited human insight risks perpetuating biased practices. Future research will continue with real-world clinical trials and expand the framework to multi-institutional datasets for wider applicability.
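To illustrate the SHAP idea the abstract relies on, the following is a minimal, stdlib-only sketch that computes exact Shapley values for a toy additive risk model with three hypothetical features. The model, feature names, and effect sizes are all illustrative assumptions, not the paper's actual predictor; production SHAP tooling approximates these same averages at scale.

```python
from itertools import permutations

# Toy "model": predicted treatment-risk score given a subset of features.
# Hypothetical effect sizes for illustration only.
def risk_score(features_present):
    base = 0.10  # baseline risk with no features observed
    effects = {"age": 0.20, "bp": 0.15, "glucose": 0.05}
    score = base + sum(effects[f] for f in features_present)
    if "age" in features_present and "bp" in features_present:
        score += 0.04  # interaction term between age and blood pressure
    return score

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over every possible ordering of feature arrival."""
    phi = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = set()
        for f in order:
            before = value_fn(present)
            present.add(f)
            phi[f] += value_fn(present) - before
    return {f: total / len(orderings) for f, total in phi.items()}

phi = shapley_values(["age", "bp", "glucose"], risk_score)
# Efficiency property: the attributions sum to f(all) - f(none).
```

The efficiency property shown in the final comment is what makes such attributions clinically readable: every point of predicted risk is accounted for by some feature, so a clinician can see exactly which patient variables drove a recommendation.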

  • Issue Year: 5/2025
  • Issue No: 1
  • Page Range: 1541-1559
  • Page Count: 19
  • Language: English