A STUDY OF INTERPRETABLE AI AND ML WITHIN HEALTHCARE SYSTEMS
DOI:
https://doi.org/10.63001/tbs.2025.v20.i02.S2.pp786-792
Keywords:
Artificial Intelligence, Machine Learning, Big Data Analytics, Internet of Things, Pharmaceuticals, Radiology
Abstract
The healthcare sector is particularly sensitive because it concerns individuals' lives, so decisions must be made with great care and grounded in robust evidence. Nevertheless, most AI and ML systems are intricate and offer little clarity about how problems are solved or the rationale behind proposed decisions. This lack of interpretability is a primary factor hindering the widespread adoption of certain AI and ML models in practical settings such as healthcare. It would therefore be advantageous for AI and ML models to furnish explanations that empower physicians to make informed, data-driven decisions, ultimately enhancing the quality of service. Recently, numerous initiatives have been undertaken to propose interpretable machine learning (IML) models that are more user-friendly and applicable in real-world scenarios. This paper delivers a thorough survey of interpretable AI and ML models and their applications in healthcare. It addresses the essential characteristics and theoretical foundations required for developing IML, emerging technologies, and the top ten application areas within healthcare.