XAI-IDS: A Transparent and Interpretable Framework for Robust Cybersecurity Using Explainable Artificial Intelligence
Abstract
The rapid evolution of cyberattacks, and of DDoS attacks in particular, has outpaced many intrusion detection systems (IDSs) that lack interpretability and transparency, which hinders their adoption in real-world, high-stakes environments. To our knowledge, this is the first joint use of CNNs with both SHAP and LIME for XAI-IDS, evaluated comparatively across a range of intrusion types and a combination of XAI methods. The proposed method begins with acquisition and preprocessing of the NSL-KDD dataset, in which only DDoS-labeled records are retained, followed by data cleaning through normalization, feature selection, and label encoding. The dataset is split into training and testing sets, and a 1D CNN is trained with optimized hyperparameters and early stopping to distinguish DDoS attacks from normal traffic. The model achieves high predictive performance, with 94% accuracy, 93% precision, 95% recall, and a 94% F1-score, demonstrating strong classification ability with few false positives and false negatives. SHAP is then applied for global feature importance and LIME for individual predictions, yielding human-understandable explanations of the model's decisions. This dual explainability not only cultivates trust and accountability but also supports auditing and compliance in sensitive industries. Overall, XAI-IDS shows that combining deep learning with post-hoc interpretability is a promising approach to building trustworthy cybersecurity systems. Future work will target real-time deployment and multi-class detection in federated and edge learning frameworks.
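
As an illustration of the pipeline summarized above, the following sketch (not taken from the paper) shows how a 1D CNN with early stopping might be trained on preprocessed NSL-KDD records using Keras. The variables X, y, and feature_names are assumed placeholders for the selected numeric features, the binary labels (DDoS vs. normal), and the feature names; the layer sizes and hyperparameters are illustrative, not the authors' reported configuration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelEncoder, MinMaxScaler
    from tensorflow import keras
    from tensorflow.keras import layers

    # X: selected numeric NSL-KDD features (DDoS and normal records only),
    # y: corresponding labels; both are assumed to be prepared beforehand.
    X_scaled = MinMaxScaler().fit_transform(X)      # normalization
    y_binary = LabelEncoder().fit_transform(y)      # encode the two classes as 0/1

    X_train, X_test, y_train, y_test = train_test_split(
        X_scaled, y_binary, test_size=0.2, stratify=y_binary, random_state=42)

    # A Conv1D layer expects a channel axis: (samples, features, 1).
    X_train = X_train[..., np.newaxis]
    X_test = X_test[..., np.newaxis]

    model = keras.Sequential([
        layers.Conv1D(64, kernel_size=3, activation="relu",
                      input_shape=X_train.shape[1:]),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(32, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Early stopping halts training once validation loss stops improving
    # and restores the best weights seen so far.
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)
    model.fit(X_train, y_train, validation_split=0.1,
              epochs=100, batch_size=128, callbacks=[early_stop])

The dual explanation step could then be approximated as below, reusing the model and data splits from the previous sketch. The abstract does not state which SHAP explainer variant was used; this sketch uses the model-agnostic KernelExplainer for global feature importance and LIME's LimeTabularExplainer for a single prediction.

    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer

    # Drop the channel axis so both explainers see 2-D tabular data.
    X_train_2d = X_train.squeeze(-1)
    X_test_2d = X_test.squeeze(-1)

    def predict_proba(x):
        # Wrap the CNN so it returns probabilities for both classes.
        p = model.predict(x[..., np.newaxis], verbose=0)
        return np.hstack([1.0 - p, p])

    # Global view: SHAP values over a small background sample
    # (KernelExplainer is slow, so only a subset of test flows is explained).
    background = shap.sample(X_train_2d, 100)
    shap_explainer = shap.KernelExplainer(lambda x: predict_proba(x)[:, 1], background)
    shap_values = shap_explainer.shap_values(X_test_2d[:200])
    shap.summary_plot(shap_values, X_test_2d[:200], feature_names=feature_names)

    # Local view: LIME fits an interpretable surrogate around one test flow.
    lime_explainer = LimeTabularExplainer(
        X_train_2d, feature_names=feature_names,
        class_names=["Normal", "DDoS"], discretize_continuous=True)
    explanation = lime_explainer.explain_instance(
        X_test_2d[0], predict_proba, num_features=10)
    print(explanation.as_list())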
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.