
Multi-Layer Reconstruction Errors Autoencoding and Density Estimate for Network Anomaly Detection


Ruikun Li, Yun Li, Wen He, Lirong Chen and Jianchao Luo

1 University of Electronic Science and Technology of China, Chengdu, 611730, China

2 Guangdong Weichen Information Technology Co., Ltd., Guangzhou, 510000, China

3 Chongqing Changan Automobile Corporation, Chongqing, 400000, China

ABSTRACT Anomaly detection is an important method for intrusion detection. In recent years, unsupervised methods have been widely researched because they do not require labeling. For example, a nonlinear autoencoder can use reconstruction errors to attain the discrimination threshold. This method is not effective when the model complexity is high or the data contains noise. Detecting the density of compressed features in a hidden layer can reduce the influence of noise on threshold selection, because the density of abnormal data in hidden layers is smaller than that of normal data. However, compressed features may lose some of the high-dimensional distribution information of the original data. In this paper, we present an efficient framework for unsupervised anomaly detection, which includes network data capturing, processing, feature extraction, and anomaly detection. We employ a deep autoencoder to obtain compressed features and multi-layer reconstruction errors, and feed them to a Gaussian mixture model to estimate the density. The proposed approach is trained and tested on multiple current intrusion detection datasets and real network scenes, and its performance indicators, namely precision, recall, and F1-score, are better than those of other autoencoder models.

KEYWORDS Anomaly detection; deep autoencoder; density estimate

1 Introduction

In the last few years, numerous network attacks have emerged. An intrusion detection system acts as a barrier for a computer system and can detect network anomalies. Typical intrusion detection methods include anomaly detection and misuse detection. The latter monitors network traffic or system activities for known misuse behaviors and can add known simple attacks to the model; however, it cannot detect unknown attacks. Anomaly detection identifies anomalies by establishing rules for normal behavior. With the extensive application of deep learning, unsupervised anomaly detection methods are gaining increasing prominence [1,2] because they can be trained on unlabeled samples and can detect unknown attacks.

Normal behavior rules can be established without labeling by using unsupervised anomaly detection, and an appropriate value can be set as a detection threshold. Supervised learning has a number of shortcomings, which can be enumerated as follows: 1) it requires manual labeling, which is time-consuming and costly; 2) manually labeled data may be misclassified, which can adversely affect training; 3) training data for supervised classification cannot cover all types of attacks, which makes it challenging to identify new attack methods [3,4].

Although unsupervised models can detect unknown attack behavior, it is difficult to select a suitable detection threshold, and detection often suffers from low accuracy and a high false alarm rate. Such models therefore cannot attain the requisite effect and need improvement.

Owing to the stated limitations, we propose a lightweight and network-based anomaly detection framework, MEAEDE, which comprises a deep autoencoder (DAE) and a density estimation network (DEN). To estimate the density of data for anomaly detection, we derive multi-layer reconstruction errors (MRE) together with the correlation coefficient of the input and output of the DAE. As the data in the latent layer has a low dimension, its density is easy to compute; normal data in the latent layer forms a standard feature space, while abnormal data forms another feature space, so the two can be easily distinguished. The modules of our framework include network data capturing, data management, feature extraction, and an anomaly detector. The DAE is deployed in the detector for data compression and reconstruction, and a Gaussian mixture model (GMM) is then used to calculate the energy of the features from the DAE to obtain the detection threshold. Our framework has the following characteristics:

1) Packet capture: We use network cards to capture packets to obtain real-time data; to avoid data accumulation in memory, we quickly process and discard the processed data.

2) Unsupervised model: Compressed features, MRE, and the correlation coefficient between the input and reconstruction of the DAE are first gathered and then fed to the GMM for energy detection; data is classified as anomalous if its energy exceeds a threshold. Our experimental results prove that MRE can compensate for the information loss in the latent layer, which in turn enhances the performance of anomaly detection.

In a DAE, abnormal data cannot be represented effectively by compressed features; therefore, the reconstruction error is generally used to detect anomalies. However, samples used for model training may contain noise, which could introduce problems during threshold acquisition. Density estimation in a hidden layer can mitigate this impact but can also cause loss of pertinent sample information. With the aim of compensating the hidden-layer features of the autoencoder for density estimation in the GMM, we gather the compressed features, MRE, and the correlation coefficient between the raw input and the reconstructed features, feed them to the GMM for density detection, and finally construct an appropriate objective function to jointly optimize the sample loss. Furthermore, we adopt the acquired energy value as the standard for obtaining a discrimination threshold. The experimental results show that there is a strong correlation between these factors, which can effectively separate normal and abnormal data.

In brief,the contributions of this paper can be enumerated as follows:

1. We propose a lightweight network-based anomaly detection framework, MEAEDE, which includes data capturing, efficient data processing, feature extraction, and anomaly detection. First, packets in the network are captured and parsed into bidirectional flows by a Data Capturer module; next, time-related features are extracted by a Feature Extractor module and sent to an Anomaly Detector module to determine whether the data is normal. During this entire procedure, a Data Manager module is responsible for continuously capturing packets and removing processed packets.

2. Our framework can efficiently process and manage data, including rapid feature extraction and prompt discarding of processed data; the experimental results show that MEAEDE trains quickly and can be easily deployed in a network.

3. MRE and other features from the DAE are employed to form new features, which are further fed into the GMM to calculate energy and acquire the detection threshold. Our experimental results show that these features can greatly improve the performance of network anomaly detectors. We discuss how MRE improves the detection results of the autoencoder in Section 3.

The rest of this paper is organized as follows: Section 2 discusses related work in the domain of unsupervised anomaly detection. Section 3 describes the architecture of MEAEDE and the details of its components. Section 4 presents the evaluation, including experimental settings and test results. Section 5 outlines the conclusion and future work.

2 Related Work

There are two mainstream approaches to intrusion detection, based on supervised and unsupervised methods. In the past, supervised learning methods were widely researched because of their high accuracy, but they also have obvious shortcomings. First, the training data needs to be manually labeled; it is difficult to define abnormal and normal data and differentiate between them. Moreover, supervised methods perform poorly when detecting unseen attacks, and thus unsupervised methods have been extensively researched in the past few years.

2.1 Conventional Anomaly Detection

Previous studies based on unsupervised anomaly detection take various approaches, which can be briefly categorized into distance-based, reconstruction-based, and probabilistic methods [2]. Probabilistic methods such as GMM [5] and kernel density estimation (KDE) [6] are used to estimate the density of normal points; a point is classified as an anomaly if it has a lower density than a predefined value. Reconstruction-based methods, such as principal component analysis (PCA) and autoencoder methods, assume that anomalies cannot be effectively reconstructed from a compressed low-dimensional representation; therefore, points with higher reconstruction error may be detected as anomalies. PCA is used to reduce the dimension of input data when the samples have a high dimension [7,8]. This method can reduce the cost when the data has a high complexity. However, a drawback of PCA is that the new features obtained after dimension reduction do not retain all information from the original space, and it is hard to determine the dimension of the subspace and the detection threshold. Distance-based methods assume that abnormal data lies far from a clustered set, while normal data gathers within an acceptable cluster. In [9], the authors use an unsupervised clustering method to establish a classification center for the data and calculate the distance of samples from the center to detect anomalies. However, feature vectors with a higher dimension have a high computational complexity, and it is not advisable to represent features simply by computing distances.

2.2 Autoencoder-Based Approaches

State-of-the-art machine learning methods are widely used in anomaly detection [10]. With developments in deep learning, a few unsupervised methods, such as generative adversarial networks (GANs) and autoencoders, have been used in anomaly detection. In [11], the authors apply a GAN model for anomaly detection and detect anomalies with a trained generator. In [12], anomaly detection is accomplished using a variational autoencoder predicated on a gradient-based fingerprinting technique. In [13], the authors use layers of autoencoders to calculate the mean square error between samples and their reconstructed features in order to detect anomalies effectively in data captured from a real environment.

In the last few years, methods that obtain reconstruction errors from a DAE have achieved effective results [14,15]. In these methods, normal data is first used for training and its reconstruction is obtained; test data is then used to measure the deviation from the normal data of the trained model; if the deviation exceeds a predetermined threshold, the data is classified as an anomaly. However, these methods rely only on reconstruction errors, and the effect is not ideal when the training samples contain noise or the model has a high complexity. The reconstruction error of anomalies may be hidden within the established threshold range because of the presence of noise. Therefore, some scholars have proposed hidden-layer density estimation methods, because the density of abnormal data in the hidden layer is lower than that of normal data. In [16], the authors combine an autoencoder and density estimation for anomaly detection; their approach is based on the feature density of the hidden layer in the autoencoding network. First, the DAE network is utilized to acquire the subspace of the original data; next, a Gaussian kernel function is used to obtain the likelihood of the data points. This method can efficiently estimate the density of a sample in the low-dimensional hidden layer.

2.3 Mixed Approaches Combining Autoencoder and Density Estimation

The aforementioned methods consider either only the density of the hidden layer or only the reconstruction error, and fail to exploit their inter-relation when predicting the sample density; thus the resultant effect is not ideal when the training samples contain noise. In [17], the authors propose a deep autoencoding Gaussian mixture model (DAGMM) by combining a DAE with a GMM. The reconstruction error and the compressed features in the hidden layer are adopted to construct new features, the energy is acquired after feeding the new features into the GMM, and desirable results are obtained. However, the discrimination threshold needs to be set in accordance with prior knowledge of the dataset, and thereby the approach lacks generalization ability. In addition, density estimation on the compressed layer loses some high-level information, and thus the method needs to be improved for actual scenarios.

Inspired by the density estimation method, we adopt a DAE and gather the compressed features, MRE, and the correlation coefficient of input and reconstruction as new features, and then construct a suitable energy detection network to calculate the energy of the new features. Because normal data has lower energy (i.e., higher density) during training, the energy value is considered a part of the loss function, so the threshold for judging anomalies is obtained through joint optimization. Section 3 explains why we adopt MRE, the correlation coefficient, and the compressed features. In addition, we propose a complete and efficient network-based anomaly detection framework, optimized in data capturing, package management, feature extraction, and anomaly detection. Precision, recall, and F1-score are used as indicators for evaluating the proposed model's performance. The obtained experimental results confirm the advantages of our model.

3 Proposed Model

The framework of anomaly detection in this paper includes the Data Capturer, Data Manager, Feature Extractor, and Anomaly Detector modules and is depicted in Fig. 1.

3.1 Data Capturer

The Data Capturer continuously captures data from the real network. The captured data can reflect information such as the source and purpose of a user. For example, an attacker who wants to attack a server using a denial-of-service (DoS) attack or a distributed denial-of-service (DDoS) attack will create a fake IP address and then send a TCP SYN packet to the server. The server responds with a SYN-ACK, but the final confirmation packet never arrives, which causes the server to wait, consume resources, and eventually achieves the purpose of the DoS.

Figure 1: The framework and components of MEAEDE

The characteristics of a packet can reflect its purpose to a certain extent, so it is possible to determine whether the data is abnormal by analyzing the packets. In this paper, libpcap is used to capture the original binary packets, and tcpdump then calls the libpcap interface and parses the acquired raw packets into a standard package format for use by the Data Manager.

3.2 Data Manager

The captured packets are saved in ".pcap" files after parsing. In order to improve the efficiency of memory management and feature extraction, we revise the tcpdump source code so that it can not only capture packets continuously but also discard processed data promptly after feature extraction. In our experimental results in Section 4, an average extraction time of 14 s is obtained for every 100 k ".pcap" file.

3.3 Feature Extractor

Changes in applications and services pose challenges to traffic analysis and make it difficult to handle encrypted information such as VPN traffic. Various protocols or services are used by different applications. Time-related feature extraction adapts well to changes in network services [18] and can be used to distinguish between different applications. We extract bidirectional flows from the parsed packets, where a flow is defined as the sequence of packets sharing the same 5-tuple (source IP, destination IP, source port, destination port, and protocol), and then estimate time-related information from the flows as features. In a bidirectional flow, the first packet determines the forward (source to destination) and backward (destination to source) directions, so time-related statistics can be calculated separately for each direction. Since TCP flows end upon receiving a FIN packet and both TCP and UDP flows also end on a flow timeout, the flow timeout is unified to 15 s in this paper.

The collective procedure of the Data Capturer and Feature Extractor is illustrated in Fig. 2, where t1~tn are the start times of the flows, and the packet vector is a ".pcap" file. If the time interval exceeds 15 s or a FIN flag is received, the flow is sent to the Feature Extractor for further analysis.

Figure 2: The procedure of the Data Capturer and Feature Extractor

At the beginning, we initialize n flows in the flow vector F = (f1, f2, ..., fn) and then use the revised tcpdump to capture packets continuously; if the size of the packet vector exceeds 10 k, the packet file is moved and a new vector is created with the same name, while each packet is assigned to its own flow. A time vector is also created to record the previous timestamp of each flow; if the current timeout exceeds 15 s or the FIN flag of a TCP packet is received, the flow is sent to the Feature Extractor for further analysis, as sketched below.
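To make the flow-management step concrete, the following Python sketch mimics the procedure under simplifying assumptions: packets are already parsed into dictionaries with src_ip, dst_ip, src_port, dst_port, protocol, timestamp, and tcp_flags fields (hypothetical names), and the 10 k packet-vector rotation handled by the modified tcpdump is omitted.

from collections import defaultdict

FLOW_TIMEOUT = 15.0  # seconds, as stated above

def flow_key(pkt):
    # Bidirectional 5-tuple key: both directions of a flow map to the same key.
    a = (pkt["src_ip"], pkt["src_port"])
    b = (pkt["dst_ip"], pkt["dst_port"])
    return (min(a, b), max(a, b), pkt["protocol"])

def aggregate(packets, emit):
    # Group parsed packets into bidirectional flows; emit a flow to the
    # Feature Extractor when it times out or a TCP FIN flag is observed.
    flows = defaultdict(list)   # key -> list of packets in the flow
    last_seen = {}              # key -> timestamp of the previous packet
    for pkt in packets:         # packets are assumed sorted by timestamp
        k, t = flow_key(pkt), pkt["timestamp"]
        if k in flows and t - last_seen[k] > FLOW_TIMEOUT:
            emit(flows.pop(k))  # timed out: close the old flow first
            last_seen.pop(k)
        flows[k].append(pkt)
        last_seen[k] = t
        if pkt.get("tcp_flags", 0) & 0x01:  # FIN bit set -> flow is finished
            emit(flows.pop(k))
            last_seen.pop(k)
    for flow in flows.values():  # flush whatever is still open at the end
        emit(flow)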

We then extract features from the flows, including time-related statistical information (mean and variance of packet length), packet number, packet length, and TCP flag bits, to form the input features of the anomaly detector. In total, we acquire the 49 features shown in Tab. 1, and the extracted features are normalized to 0-1 for training and testing of the anomaly detector.

3.4 Anomaly Detector

Owing to the merits of the unsupervised model summarized in Section 1, a DAE is used in this paper. However, the method of acquiring the threshold on the basis of the reconstruction error alone is sensitive to the selection of parameters and can easily be affected by noise. Moreover, the compressed features are not taken into account, which in turn hinders the detection effect. Therefore, the MRE, the compressed features in the hidden layer, and the correlation coefficient of input and reconstruction are gathered to form new features. The GMM is further used to calculate the sample energy, and we design appropriate loss functions and use stochastic gradient descent (SGD) to jointly optimize the loss of each component. Details are introduced in the following sections.

Table 1: Description and value of features in a bidirectional flow

The procedure of acquiring MRE from the autoencoder is illustrated in Fig. 3, wherein the green neurons denote hidden-layer neurons, yellow neurons are compressed features, and blue neurons are input and output data points. In Figs. 4 and 5, the MRE distribution of the KDDCUP99 and NSL-KDD datasets in a seven-layer autoencoder (including four layers of the encoder and three layers of the decoder) shows that the symmetric reconstruction errors can clearly separate normal points and anomalies. In Figs. 4 and 5, cost1 represents the error between the first hidden layer of the encoder and the second layer of the decoder, cost2 represents the error between the second hidden layer of the encoder and the first hidden layer of the decoder, and cost3 is the error between the input data and the output data of the decoder. Therefore, in our experiment, described in Section 4, we combine MRE with other features to acquire a new feature vector for density estimation with the GMM in the last epoch of the training procedure, and the results reveal that MRE is a powerful feature in our network-based anomaly detection framework.
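As an illustration of how these symmetric per-layer errors can be computed, the Python sketch below assumes a mirrored DAE whose encoder and decoder are given as lists of PyTorch layers (enc_layers and dec_layers are hypothetical names); the paper's own procedure is given later in Algorithm 1.

import torch

def multilayer_reconstruction_errors(enc_layers, dec_layers, x):
    # RMSE between each encoder activation and the decoder activation of the
    # same width (the decoder is assumed to mirror the encoder).
    enc_acts = [x]                       # activations, starting with the input
    h = x
    for layer in enc_layers:
        h = layer(h)
        enc_acts.append(h)
    dec_acts = []
    for layer in dec_layers:
        h = layer(h)
        dec_acts.append(h)
    errors = []
    for k in range(len(enc_acts) - 1):       # skip the latent code itself
        a = enc_acts[k]                      # encoder-side activation
        b = dec_acts[len(dec_acts) - 1 - k]  # symmetric decoder-side activation
        errors.append(torch.sqrt(torch.mean((a - b) ** 2, dim=1)))
    return torch.stack(errors, dim=1)        # shape: (batch, number of layer pairs)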

3.4.1 Loss of DAE

Generally, a DAE consists of an encoder and a decoder; the encoder compresses the original data into a latent representation, while the decoder tries to reconstruct the original data from the latent layer. Let x_i be an original data point; we can then get a latent representation h_i = enc(x_i; θ), where θ denotes the parameters of the encoder. Furthermore, the reconstruction vector can be expressed as y_i = dec(h_i; γ), where γ denotes the parameters of the decoder. We define the objective function of the DAE as

L_1(\theta,\gamma) = \frac{1}{m}\sum_{i=1}^{m}\lVert x_i - y_i\rVert_2^2, (1)

where m represents the count of input data points.
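A one-line version of this objective, written with PyTorch tensors for a mini-batch x and its reconstruction y (a sketch, not the authors' code):

import torch

def dae_loss(x, y):
    # L1(theta, gamma) = (1/m) * sum_i || x_i - y_i ||_2^2
    return torch.mean(torch.sum((x - y) ** 2, dim=1))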

Figure 3: The procedure of MRE, where green points denote hidden-layer neurons, yellow points are compressed features, and blue points are input and output data points

Figure 4: Reconstruction error distribution under KDDCUP99 with two hidden layers of the encoder and the decoder

3.4.2 New Features

To represent the reconstruction error, we use the root mean square error (RMSE), which is the arithmetic square root of the mean squared difference between input and reconstruction and is defined as

e_i = \sqrt{\frac{1}{d}\sum_{j=1}^{d}\left(x_i^{(j)} - y_i^{(j)}\right)^2}, (2)

where d denotes the dimension of the input, i indexes the i-th data point, and j indexes the j-th dimension of a layer. Pearson's correlation coefficient is obtained by dividing the covariance between input and reconstruction by the product of their standard deviations and is expressed as

\rho_i = \frac{\mathrm{cov}(x_i, y_i)}{\sigma_{x_i}\,\sigma_{y_i}}, (3)

where cov(·) is the covariance of x_i and y_i, and σ is the standard deviation of a vector. Further, we can obtain the MRE using Algorithm 1.
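The two quantities can be computed per sample as follows (a PyTorch sketch, where x and y are a batch of inputs and reconstructions):

import torch

def rmse(x, y):
    # e_i = sqrt( (1/d) * sum_j (x_ij - y_ij)^2 ), one value per sample
    return torch.sqrt(torch.mean((x - y) ** 2, dim=1))

def pearson(x, y, eps=1e-12):
    # rho_i = cov(x_i, y_i) / (sigma_x_i * sigma_y_i), one value per sample
    xc = x - x.mean(dim=1, keepdim=True)
    yc = y - y.mean(dim=1, keepdim=True)
    cov = (xc * yc).mean(dim=1)
    std = x.std(dim=1, unbiased=False) * y.std(dim=1, unbiased=False)
    return cov / (std + eps)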

Figure 5: Reconstruction error distribution under NSL-KDD with two hidden layers of the encoder and the decoder

We can then acquire c_i = [h_i, e_i, ρ_i, m_i], where h_i denotes the compressed features, e_i is the RMSE, ρ_i is the correlation coefficient, and m_i is the MRE.

We build a DEN to calculate the density of a sample; it receives the new feature c from the DAE and generates a K-dimensional distribution denoted \hat{π}, where K is the number of components of the GMM. To calculate the energy value, we feed \hat{π} into the GMM. In addition, we adopt GELU [19] as the activation function for the DAE and DEN; it can be defined as

\mathrm{GELU}(x) = x\,\Phi(x) = x\,P(X \le x), \quad X \sim \mathcal{N}(0,1), (4)

where Φ is the standard Gaussian cumulative distribution function. As x decreases, the probability of the input being kept approaches 0; as x increases, the activation value also increases. This type of activation not only retains the probabilistic interpretation but also retains the dependence on the input.
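A sketch of such a DEN in PyTorch, following the FC-GELU-Dropout-FC-softmax layout listed in Section 4.3 (the input size of 5 is taken from that configuration and is otherwise an assumption):

import torch.nn as nn

class DEN(nn.Module):
    # Maps the combined feature c to a K-dimensional soft membership pi_hat.
    def __init__(self, in_dim=5, hidden=10, n_components=5, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, n_components), nn.Softmax(dim=1),
        )

    def forward(self, c):
        return self.net(c)  # pi_hat: (batch, K) mixture membership estimates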

3.4.3 GMM

A GMM is a linear combination of multiple Gaussian functions. Assuming the density of anomalies to be lower than that of normal data, we obtain an energy threshold from the normal training data, and input data is classified as abnormal if its energy exceeds the threshold. Fig. 6 shows the energy distribution of normal data and anomalies under DDoS.

Figure 6: Energy distribution of normal data and anomalies under DDoS attack, where yellow points are anomalies, while purple points denote normal data

For input c, the probability prediction of the components generated by the GMM is expressed as

\hat{\pi} = \mathrm{DEN}(c;\omega), (5)

where ω denotes the parameters of the DEN. Thus, we define the probability of c as

p(c) = \sum_{k=1}^{K} \phi_k\,\mathcal{N}(c \mid \mu_k, \Sigma_k), (6)

where N(c|μ_k, Σ_k) is the k-th component of the GMM and φ_k is the mixture coefficient with the limitation

\sum_{k=1}^{K} \phi_k = 1, \quad \phi_k \ge 0. (7)

Given m samples from the DAE, we can get the average probability (mixture weight), mean value, and covariance of each component using the DEN and GMM as

\hat{\phi}_k = \frac{1}{m}\sum_{i=1}^{m}\hat{\pi}_{ik}, \quad \hat{\mu}_k = \frac{\sum_{i=1}^{m}\hat{\pi}_{ik}\,c_i}{\sum_{i=1}^{m}\hat{\pi}_{ik}}, \quad \hat{\Sigma}_k = \frac{\sum_{i=1}^{m}\hat{\pi}_{ik}\,(c_i-\hat{\mu}_k)(c_i-\hat{\mu}_k)^{\top}}{\sum_{i=1}^{m}\hat{\pi}_{ik}}. (8)

Furthermore, we can define the sample energy as

E(c) = -\log\left(\sum_{k=1}^{K}\hat{\phi}_k\,\frac{\exp\!\left(-\frac{1}{2}(c-\hat{\mu}_k)^{\top}\hat{\Sigma}_k^{-1}(c-\hat{\mu}_k)\right)}{\sqrt{\left|2\pi\hat{\Sigma}_k\right|}}\right). (9)

Finally, we get the objective function as

L(\theta,\gamma,\omega) = L_1(\theta,\gamma) + \frac{\lambda_1}{m}\sum_{i=1}^{m}E(c_i) + \lambda_2\sum_{k=1}^{K}\sum_{j=1}^{d}\frac{1}{\hat{\Sigma}_{k,jj}}, (10)

where θ, γ, and ω denote the parameters of the DAE, DEN, and GMM, respectively, and d is the dimension of the compressed features. Finally, the training objective is to minimize Eq. (10).
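The following PyTorch sketch spells out Eqs. (8)-(10) in the DAGMM style that the paper builds on; it is an illustrative implementation under that assumption, not the authors' released code.

import math
import torch

def gmm_parameters(pi_hat, c):
    # Eq. (8): mixture weights, means, and covariances from soft memberships.
    n_k = pi_hat.sum(dim=0)                                   # (K,)
    phi = n_k / c.size(0)                                     # average membership
    mu = (pi_hat.t() @ c) / n_k.unsqueeze(1)                  # (K, d) weighted means
    diff = c.unsqueeze(1) - mu.unsqueeze(0)                   # (m, K, d)
    sigma = torch.einsum("mk,mki,mkj->kij", pi_hat, diff, diff) / n_k.view(-1, 1, 1)
    return phi, mu, sigma

def sample_energy(c, phi, mu, sigma, eps=1e-6):
    # Eq. (9): E(c) = -log sum_k phi_k * N(c | mu_k, Sigma_k).
    d = c.size(1)
    sigma = sigma + eps * torch.eye(d)                        # keep covariances invertible
    diff = c.unsqueeze(1) - mu.unsqueeze(0)                   # (m, K, d)
    maha = torch.einsum("mki,kij,mkj->mk", diff, torch.inverse(sigma), diff)
    log_det = torch.logdet(2 * math.pi * sigma)               # (K,)
    log_prob = torch.log(phi + eps) - 0.5 * (maha + log_det)  # (m, K)
    return -torch.logsumexp(log_prob, dim=1)                  # higher energy = rarer sample

def objective(x, y, energy, sigma, lam1=0.1, lam2=1e-4):
    # Eq. (10): reconstruction loss + lam1 * mean energy + lam2 * covariance penalty.
    recon = torch.mean(torch.sum((x - y) ** 2, dim=1))
    penalty = torch.sum(1.0 / torch.diagonal(sigma, dim1=1, dim2=2))
    return recon + lam1 * energy.mean() + lam2 * penalty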

In order to select a suitable threshold, we track an energy parameter Ê with initial value −Inf, which records the maximum energy observed during training. We then set an adjustment factor ϵ ∈ (0, ∞). Hence, the threshold is T = ϵ·Ê. The training procedure of the anomaly detector is presented in Algorithm 2, and the testing process is shown in Algorithm 3.
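A sketch of this threshold rule, assuming the energies have already been computed batch by batch on the (assumed normal) training data:

def select_threshold(energy_batches, epsilon=0.8):
    # T = epsilon * max energy observed during training (epsilon in (0, inf)).
    e_max = float("-inf")
    for e in energy_batches:          # each e is a tensor of per-sample energies
        e_max = max(e_max, float(e.max()))
    return epsilon * e_max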

Algorithm 1: The computing procedure of MRE
Input: Training data set S, epochs N, number of DAE layers L, layer parameters θ, γ of the encoder and decoder
Output: MRE set M
Set M = ∅
for epoch from 1 to N do
    if epoch == N then
        for x_i ∈ S do
            for j from 2 to L do
                x_i^(j) = enc(x_i^(j−1); θ), y_i^(j) = dec(x_i^(L−1−j); γ)
                e_i^(j) = sqrt( (1/d_j) Σ_{l=1}^{d_j} (x_{i,l}^(j) − y_{i,l}^(j))^2 )
            Append the per-layer errors e_i = (e_i^(2), ..., e_i^(L)) into M

4 Evaluation

In this section, we evaluate the proposed MEAEDE using three classical intrusion detection datasets: KDDCUP99 10% [20], NSL-KDD [21], and CIC-IDS-2017 [22]. At the same time, our network model is compared with other autoencoder-based approaches, including DAE and DAGMM. In our experiment, the DAE baseline uses only the reconstruction error to derive its threshold.

4.1 Datasets

1) KDDCUP99 contains 41 dimensions, of which 34 are continuous and 7 are categorical attributes. Each data point in KDDCUP99 is labeled as either normal or a specific attack belonging to four broad categories: DoS, user to root (U2R), remote to local (R2L), and probe.

2) NSL-KDD has the same features as KDDCUP99, except that redundant data is filtered. The dataset consists of two files: KDDTrain+ and KDDTest+.

3) CIC-IDS-2017 was generated by the CIC laboratory of Canada, and the types of attacks collected are very extensive. The network data was captured for five days, from Monday to Friday, with nearly 80 dimensions. The dataset consists of eight files, including the Monday benign sample; Tuesday BForce, SFTP, and SSH; Wednesday DoS and Heartbleed attacks (Slowloris, Slowhttptest, Hulk, and GoldenEye); Thursday web and infiltration attacks (Web BForce, XSS, and Sql Inject; Infiltration Dropbox Download and Cool disk); and Friday DDoS LOIT, Botnet ARES, and PortScans (sS, sT, sF, sX, sN, sP, sV, sU, sO, sA, sW, sR, sL, and B).

4.2 Metrics

To evaluate our model, we use the following three common information retrieval evaluation metrics:

1) Precision (Pr): the ratio of correctly classified attack flows (TP) to all flows classified as attacks (TP + FP).

2) Recall (Rc): the ratio of correctly classified attack flows (TP) to all actual attack flows (TP + FN).

3) F-Measure (F1): a harmonic combination of Pr and Rc into a single measure. These metrics can be defined as

\mathrm{Pr} = \frac{TP}{TP+FP}, \quad \mathrm{Rc} = \frac{TP}{TP+FN}, \quad F1 = \frac{2\cdot \mathrm{Pr}\cdot \mathrm{Rc}}{\mathrm{Pr}+\mathrm{Rc}}.
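Computed from the flow-level confusion counts above, the three metrics are straightforward; the helper below is a small illustrative sketch.

def precision_recall_f1(tp, fp, fn):
    # Pr = TP/(TP+FP), Rc = TP/(TP+FN), F1 = 2*Pr*Rc/(Pr+Rc)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1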

Algorithm 2: The procedure for training the anomaly detector
Input: Training data set S, epochs N
Output: MEAEDE model, Ê
θ, γ ← parameters of the encoder and decoder of the DAE
ω ← parameters of the DEN
for epoch from 1 to N do
    for x_i ∈ S do
        h_i = enc(x_i; θ), y_i = dec(h_i; γ)
        ρ_i = cov(x_i, y_i) / (σ_{x_i}·σ_{y_i}), e_i = sqrt( (1/d) Σ_{j=1}^{d} (x_i^(j) − y_i^(j))^2 )
        compute m_i as in Algorithm 1
        c_i ← [h_i, e_i, ρ_i, m_i]
        \hat{π} = DEN(c_i; ω)
        \hat{φ}, \hat{μ}, \hat{Σ} ← update parameters of the GMM as in Eq. (8)
        E_i = E(c_i; \hat{φ}, \hat{μ}, \hat{Σ}); if E_i > Ê: Ê ← E_i
        Minimize L(θ, γ, ω); θ, γ, ω ← update parameters using SGD

4.3 Parameter Discussion

In order to optimally fit these datasets, we use slightly different parameters for data of different dimensions. Owing to the existence of categorical values, KDDCUP99 10% and NSL-KDD use one-hot encoding, and their inputs have 118 and 122 dimensions, respectively. After discarding the "flow packet number/s" and "flow byte number/s" features, 77 features are selected from CIC-IDS-2017. We find in our exploration that the DAE yields the expected result with four hidden layers in the encoder; when the number of layers increases further, the effect does not improve significantly and may even decrease. Thus, the parameters of the different networks are set as follows:

KDDCUP99 (NSL-KDD):

DAE: FC(118(122), 60, GELU)-FC(60, 30, GELU)-FC(30, 10, GELU)-FC(10, 3, none)-FC(3, 10, GELU)-FC(10, 30, GELU)-FC(30, 60, GELU)-FC(60, 118(122), none).

DEN: FC(5, 10, GELU)-Drop(0.5)-FC(10, 5, softmax).

CIC-IDS-2017:

DAE: FC(77, 40, GELU)-FC(40, 20, GELU)-FC(20, 10, GELU)-FC(10, 3, none)-FC(3, 10, GELU)-FC(10, 20, GELU)-FC(20, 40, GELU)-FC(40, 77, none).

DEN: FC(5, 10, GELU)-Drop(0.5)-FC(10, 5, softmax).
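For reference, the KDDCUP99/NSL-KDD configuration above can be written directly as PyTorch modules (PyTorch is an assumption; the paper does not name its framework):

import torch.nn as nn

def build_dae(in_dim=118):
    # KDDCUP99 layout; NSL-KDD uses in_dim=122, CIC-IDS-2017 uses a 77-40-20-10-3 layout.
    encoder = nn.Sequential(
        nn.Linear(in_dim, 60), nn.GELU(),
        nn.Linear(60, 30), nn.GELU(),
        nn.Linear(30, 10), nn.GELU(),
        nn.Linear(10, 3),               # latent layer, no activation
    )
    decoder = nn.Sequential(
        nn.Linear(3, 10), nn.GELU(),
        nn.Linear(10, 30), nn.GELU(),
        nn.Linear(30, 60), nn.GELU(),
        nn.Linear(60, in_dim),          # output layer, no activation
    )
    return encoder, decoder

def build_den():
    return nn.Sequential(
        nn.Linear(5, 10), nn.GELU(),
        nn.Dropout(0.5),
        nn.Linear(10, 5), nn.Softmax(dim=1),
    )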

We uniformly set λ1 = 0.1 and λ2 = 0.0001, and the learning rate to 0.0001 in every model. As can be seen in Eq. (10), the objective loss function of MEAEDE has three components: L1 represents the reconstruction error from the DAE, the second component represents the energy function, and the third represents the penalty on the covariance matrices. A large value of λ1 may overwhelm L1 and make the reconstruction term in Eq. (10) unimportant, so the expected representation of the input data cannot be derived. When λ1 has a small value, the GMM will not be trained well because the weight of the estimation network is too small. When λ2 has a large value, the covariances of the GMM become large, which is not desirable because most samples then receive large values. When λ2 has a small value, the regularization is too weak to counter the singularity effect. In our experiment, we find that setting the values of these hyper-parameters (the weights of the three components) to 1, 0.1, and 0.0001 leads to a remarkable performance across the three datasets.

Algorithm 3: The procedure of executing the anomaly detector
Input: Testing data set S, adjustment factor ϵ
Output: Anomalies
Set T = ϵ·Ê
for x_i in data set S do
    h_i = enc(x_i; θ), y_i = dec(h_i; γ)
    c_i ← [h_i, e_i, ρ_i, m_i]
    \hat{π} = DEN(c_i; ω)
    E_i = E(c_i; \hat{φ}, \hat{μ}, \hat{Σ})
    if E_i > T: Alert

4.4 Detection Results

For KDDCUP99 and NSL-KDD, we calculated the average of each indicator. The experimental results are encapsulated in Tabs. 2 and 3. The results show that MEAEDE outperforms the other autoencoder-based models. With a simple threshold selection, it is hard for the DAE model to compete with the other two models because of the information loss in the hidden layer. DAGMM uses the cosine similarity and Euclidean distance between the input and output data of the DAE, and thus it outperforms DAE. MEAEDE additionally derives MRE, the correlation coefficient, and the compressed features, and shows a sharp improvement over DAGMM, so MRE has a powerful effect in compensating for the loss of information in the hidden layer.

It is worth mentioning that the threshold of each model is set according to its optimal results. For DAE, we set the threshold T as the maximum reconstruction error multiplied by a value between 0 and 1, and for DAGMM and MEAEDE, we select threshold parameters according to the different datasets to obtain optimal results.

Table 2: Detection results on KDDCUP99 10% dataset

Table 3: Detection results on NSL-KDD dataset

For CIC-IDS-2017, we select six files from the entire dataset. A 1:1 training and testing ratio was adopted because the data is imbalanced in the other two files. Due to the existence of infinite values, we remove the "flow bytes/s" and "flow packets/s" attributes, so 77 features of each data point are finally selected, and the results are shown in Tabs. 4-6. On Friday's DDoS attack, MEAEDE and DAGMM perform equally well, while DAGMM performs inadequately on Friday-PortScan and web-attacks. By contrast, MEAEDE performs more satisfactorily most of the time. It is worth noting that DAE is stable across all datasets, and its average value over the three metrics is better than DAGMM's, but its recognition accuracy does not meet expectations.

Table 4: MEAEDE detection results on CIC-IDS2017 dataset

To evaluate the training time of MEAEDE, we compute the average time consumed during training over 50 epochs. Tab. 7 presents the details for each dataset. Evidently, MEAEDE has an average training time of about 200 s on every dataset, so it can be deployed and reset efficiently in a number of applications.

Table 5: DAGMM detection results on CIC-IDS2017 dataset

Table 6: DAE detection results on CIC-IDS2017 dataset

Table 7: Average time consumed in training datasets

We also evaluate our model under actual network attacks using our feature extractor module. We launch several types of attacks, such as DDoS, port scan, and injection, against five LAN hosts using eight different computers (the traffic contains 30% normal data and 70% attacks). We then capture packets at the network card, with an average feature extraction time of 14 s for every 100 k ".pcap" file. In order to correctly label the captured data, we use specific hosts to issue only attacks or only normal requests, and record them. Tab. 8 presents the results depicting the performance of our model. Unfortunately, the performance is not as satisfactory as on the CIC-IDS-2017 dataset. In that dataset, the data ratio we selected is 1:1, while in the data we collected the ratio is obviously different, which is why the detection efficiency is not satisfactory. Thus, the ratio of normal data to anomalies can disturb the accuracy of MEAEDE.

Table 8: Detection results in the actual network environment with ϵ of 0.8

5 Conclusion

In this paper, we propose a lightweight anomaly detection framework, which includes network packet capturing, efficient processing, and a feature extraction module that has a good ability to express network packets and flows. We transform the extracted information into time-related features for the Anomaly Detector module. This module combines MRE, the correlation coefficient of input and output, and the compressed features from the DAE to form a new feature vector, which is fed into a DEN to obtain a K-component vector and then into the GMM to get an energy value. The system reports an anomaly if the computed value exceeds a predetermined threshold. We train and test our model on several intrusion detection datasets and compare three common information retrieval evaluation metrics with DAE and DAGMM. With the improvement in feature extraction, the proposed model outperforms other autoencoder-based models and can further improve the performance of anomaly detection. Furthermore, after training only with normal data, it can identify various unknown attacks such as DDoS, web-attack, and port-scan with desirable performance.

Notwithstanding its adequate performance, the framework has a few limitations regarding the input data distribution; it is only suitable for network-based packets and flows. In addition, the ratio of normal data to anomalies can disturb the performance of MEAEDE. In the future, it is necessary to improve the feature extraction ability and make the model more robust. It is also imperative to study better methods for the threshold selection of the GMM, which could efficiently identify more attack types to adapt to complex network environments.

Acknowledgement: We would like to thank TopEdit (www.topeditsci.com) for the English language editing of this manuscript.

Funding Statement: This work is supported by the Introducing Program of Dongguan for Leading Talents in Innovation and Entrepreneur (Dongren Han [2018], No. 738).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
