Author: Farah Syazwani
Journal Name: International Journal of Scientific Research & Engineering Trends
Volume: 7, Issue: 6
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical paradigm in enhancing trust, transparency, and accountability in cybersecurity systems. As cyber threats become increasingly sophisticated, traditional black-box machine learning models often fail to provide interpretable insights into their decision-making processes, thereby limiting their adoption in high-stakes environments. This review explores the integration of explainable AI techniques within cybersecurity frameworks, focusing on how interpretability improves threat detection, incident response, and risk assessment. The article highlights key methodologies such as feature attribution, model-agnostic explanations, and rule-based learning that enable analysts to understand and validate model outputs. Additionally, the role of XAI in regulatory compliance and ethical AI deployment is examined, emphasizing the need for transparency in automated decision systems. Challenges such as trade-offs between accuracy and interpretability, adversarial manipulation of explanations, and scalability issues are also discussed. Emerging trends, including hybrid explainability approaches and human-in-the-loop systems, are presented as promising directions for future research. By bridging the gap between complex machine learning models and human understanding, XAI holds significant potential to transform cybersecurity decision-making into a more reliable and interpretable process. This review provides a comprehensive overview of current advancements and outlines future pathways for integrating explainable intelligence into cybersecurity infrastructures.
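The model-agnostic feature-attribution methods the abstract mentions can be illustrated with a minimal permutation-importance sketch: shuffle one input feature at a time and measure how much the model's output changes. Everything below is a hypothetical toy (the `threat_score` function, feature names, and event data are invented for illustration and are not from the article):

```python
import random

# Hypothetical black-box threat scorer: higher score = more suspicious.
# A real deployment would wrap a trained ML model here.
def threat_score(event):
    score = 0.0
    score += 0.5 if event["failed_logins"] > 5 else 0.0
    score += 0.3 if event["bytes_out"] > 1_000_000 else 0.0
    score += 0.2 if event["off_hours"] else 0.0
    return score

def permutation_importance(model, events, feature, trials=100, seed=0):
    """Model-agnostic attribution: repeatedly shuffle one feature across
    events and average the absolute change in the model's output."""
    rng = random.Random(seed)
    baseline = [model(e) for e in events]
    total_delta = 0.0
    for _ in range(trials):
        values = [e[feature] for e in events]
        rng.shuffle(values)  # break the feature's link to each event
        perturbed = [dict(e, **{feature: v}) for e, v in zip(events, values)]
        scores = [model(e) for e in perturbed]
        total_delta += sum(abs(a - b) for a, b in zip(scores, baseline)) / len(events)
    return total_delta / trials

# Toy event log (illustrative values only).
events = [
    {"failed_logins": 12, "bytes_out": 2_000_000, "off_hours": True},
    {"failed_logins": 1,  "bytes_out": 50_000,    "off_hours": False},
    {"failed_logins": 8,  "bytes_out": 10_000,    "off_hours": False},
    {"failed_logins": 0,  "bytes_out": 3_000_000, "off_hours": True},
]

for feature in ("failed_logins", "bytes_out", "off_hours"):
    print(f"{feature}: {permutation_importance(threat_score, events, feature):.3f}")
```

Because the procedure only queries the model's inputs and outputs, it applies to any classifier, which is the sense in which such explanations are "model-agnostic"; an analyst can use the resulting ranking to validate that a detector relies on plausible signals rather than spurious ones.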