Artificial intelligence (AI) technologies have been widely applied in medicine and healthcare, and explainable AI (XAI) has been proposed to make such applications more transparent and efficient. This study applies several simple cross-domain tools and techniques, including common expressions (with linguistic terms), color management, traceable aggregation, and segmented distance diagrams, to improve the explainability of AI applications in healthcare. Four AI-based hospital recommendation applications were studied to illustrate the applicability of the proposed methodology, and the explainability of each application was evaluated before and after improvement for comparison. According to the experimental results, these AI-based hospital recommendation methods could be better explained by modifying their explanations with simple, cross-domain tools.
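To give a concrete sense of two of the tools named above, the sketch below maps a numeric recommendation score to a linguistic term and a color code. The function name, thresholds, labels, and colors are illustrative assumptions for this example, not values taken from the study.

```python
# Hypothetical sketch: expressing a numeric AI recommendation score
# with a linguistic term (common expression) and a color code (color
# management). All thresholds and labels are assumed for illustration.

def explain_score(score: float) -> tuple[str, str]:
    """Map a score in [0, 1] to a (linguistic term, color) pair."""
    if score >= 0.75:
        return ("highly recommended", "green")
    if score >= 0.5:
        return ("recommended", "yellow")
    if score >= 0.25:
        return ("weakly recommended", "orange")
    return ("not recommended", "red")

print(explain_score(0.82))  # ('highly recommended', 'green')
```

Presenting a raw score this way lets a patient or clinician read the recommendation at a glance, which is the kind of explainability improvement the study evaluates.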