The growing success of Machine Learning (ML) has led to significant improvements in predictive models, facilitating their integration into various application fields, especially healthcare. However, ML still has limitations and drawbacks, such as a lack of interpretability that prevents users from understanding how certain decisions are made. This drawback is known as the "black-box" problem: the internal workings of certain ML techniques cannot be interpreted, which discourages their use. In a highly regulated and risk-averse context such as healthcare, although "trust" is not synonymous with decision and adoption, trusting an ML model is essential for its adoption. Many clinicians and health researchers feel uncomfortable with black-box ML models, even when these achieve high degrees of diagnostic or prognostic accuracy. Therefore, an increasing amount of research is being devoted to explaining how these models work. Our study focuses on the Random Forest (RF) model, one of the best-performing and most widely used ML methodologies across fields of research, from the hard sciences to the humanities. In healthcare and in the evaluation of health policies, its use is limited by the impossibility of interpreting the causal links between predictors and response. This explains the need to develop new techniques, tools, and approaches for reconstructing the causal relationships and interactions between the predictors and the response used in an RF model. Our research performs a machine learning experiment on several medical datasets, comparing two methodologies, inTrees and NodeHarvest, which are the main approaches in the rule-extraction framework. The contribution of our study is to identify, among the rule-extraction approaches, the best proposal for guiding the choices of decision-makers in the health domain.
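To make the rule-extraction idea concrete, the following is a minimal illustrative sketch (not the paper's inTrees or NodeHarvest implementation, which are R packages): it trains a Random Forest with scikit-learn on a public medical dataset and enumerates each tree's root-to-leaf paths as human-readable IF-THEN rules, the raw material that methods like inTrees then prune and rank. The dataset choice and all function names here are illustrative assumptions.

```python
# Illustrative sketch only: enumerate candidate decision rules from a
# trained Random Forest. inTrees/NodeHarvest would further prune, simplify,
# and rank such rules; this code stops at raw extraction.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def extract_rules(estimator, feature_names):
    """Walk one decision tree and return its root-to-leaf paths as
    (condition string, predicted class) pairs."""
    t = estimator.tree_
    rules = []

    def recurse(node, conditions):
        if t.children_left[node] == -1:  # -1 marks a leaf node
            pred = int(t.value[node][0].argmax())
            rules.append((" AND ".join(conditions) or "TRUE", pred))
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        recurse(t.children_left[node], conditions + [f"{name} <= {thr:.3f}"])
        recurse(t.children_right[node], conditions + [f"{name} > {thr:.3f}"])

    recurse(0, [])
    return rules

data = load_breast_cancer()
rf = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
rf.fit(data.data, data.target)

# Pool the rules of every tree in the forest into one candidate rule set.
all_rules = [r for est in rf.estimators_
             for r in extract_rules(est, data.feature_names)]
print(len(all_rules), "candidate rules; first:", all_rules[0])
```

Each extracted rule is a conjunction of threshold conditions on the predictors, which is exactly the interpretable unit that rule-extraction frameworks evaluate by frequency, error, and length before presenting a compact rule list to the decision-maker.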