IDS-attention: an efficient algorithm for intrusion detection systems using attention mechanism

Bibliographic details
Main authors: FatimaEzzahra Laghrissi, Samira Douzi, Khadija Douzi, Badr Hssina
Format: Article
Language: English
Published: SpringerOpen, 2021
Online access: https://doaj.org/article/43294da0d6c24bb7957af3fd39148911
Description
Abstract: Network attacks are illegal activities against digital resources within an organizational network, carried out with the express intention of compromising systems. A cyber attack can be directed by individuals, communities, states, or even an anonymous source. Hackers commonly conduct network attacks to alter, damage, or steal private data. Intrusion detection systems (IDS) are the best and most effective techniques for tackling these threats. An IDS is a software application or hardware device that monitors traffic for malicious activity or policy breaches. Moreover, IDSs are designed to be deployed in different environments and can be either host-based or network-based: a host-based intrusion detection system is installed on the client computer, while a network-based intrusion detection system is located on the network. IDSs based on deep learning have been used in the past few years and have proved their effectiveness. However, these approaches produce a high false negative rate, which degrades the performance and potency of network security. In this paper, a detection model based on long short-term memory (LSTM) and an attention mechanism is proposed. Furthermore, we used four reduction algorithms, namely Chi-Square, UMAP, Principal Component Analysis (PCA), and Mutual Information, and we evaluated the proposed approaches on the NSL-KDD dataset. The experimental results demonstrate that using attention with all features and using PCA with 3 components gave the best performance, reaching accuracies of 99.09% and 98.49% for binary and multiclass classification, respectively.
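
As a rough illustration of the pipeline the abstract describes (feature reduction followed by an LSTM classifier with attention), the sketch below combines scikit-learn's PCA with a small Keras model. It is not the authors' implementation: the layer sizes, the dot-product self-attention pooling, the length-1 sequence reshaping, and the placeholder data are all illustrative assumptions.

```python
# Minimal sketch of the pipeline described in the abstract:
# PCA reduction to 3 components followed by an LSTM + attention classifier.
# NOT the authors' implementation; layer sizes, the self-attention pooling,
# and the placeholder data are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from tensorflow.keras import layers, Model


def build_ids_attention_model(n_features: int, n_classes: int) -> Model:
    """LSTM with dot-product self-attention over a length-1 'sequence' per record."""
    inputs = layers.Input(shape=(1, n_features))        # each flow record as one time step
    h = layers.LSTM(64, return_sequences=True)(inputs)  # LSTM hidden states
    context = layers.Attention()([h, h])                # self-attention (query = value = h)
    pooled = layers.GlobalAveragePooling1D()(context)   # collapse the sequence axis
    outputs = layers.Dense(n_classes, activation="softmax")(pooled)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Placeholder data standing in for preprocessed NSL-KDD records (41 features).
X = np.random.rand(1000, 41).astype("float32")
y = np.random.randint(0, 2, size=1000)                  # binary labels: normal vs. attack

# PCA reduction to 3 components, as in the reported best-performing run.
X_reduced = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
X_seq = X_reduced.reshape(-1, 1, 3)                     # (samples, timesteps=1, features=3)

model = build_ids_attention_model(n_features=3, n_classes=2)
model.fit(X_seq, y, epochs=1, batch_size=64, verbose=0)
```

For multiclass classification over the NSL-KDD attack categories, the same sketch would simply use integer class labels and a larger `n_classes`; the binary and multiclass accuracies quoted in the abstract come from the paper, not from this placeholder setup.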