
Segmentation and Classification of Stomach Abnormalities Using Deep Learning

Computers, Materials & Continua, 2021, Issue 10

Javeria Naz, Muhammad Attique Khan, Majed Alhaisoni, Oh-Young Song, Usman Tariq and Seifedine Kadry

1 Department of Computer Science, HITEC University Taxila, Taxila, Pakistan

2 College of Computer Science and Engineering, University of Ha’il, Ha’il, Saudi Arabia

3 Department of Software, Sejong University, Gwangjin-gu, Seoul, Korea

4 College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia

5 Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway

Abstract: An automated system is proposed for the detection and classification of GI abnormalities. The proposed method operates as two pipeline procedures: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and the channels are then merged through mutual-information and pixel-based techniques to produce the segmented image. In the proposed classification task, texture and deep learning features are extracted. The transfer learning (TL) approach is used for the extraction of deep features, and the Local Binary Pattern (LBP) method for texture features. An entropy-based feature selection approach then selects the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets, Private and KVASIR, achieving accuracies of 99.8% for the Private dataset and 86.4% for the KVASIR dataset. The results confirm that the proposed method is effective in detecting and classifying GI abnormalities and exceeds the compared methods.

Keywords: Gastrointestinal tract; contrast stretching; segmentation; deep learning; feature selection

1 Introduction

Computerized detection of human diseases has been an emerging research domain for the last two decades [1,2]. In medical imaging, numerous remarkable methods have been developed for automated medical diagnosis systems [3]. Stomach infection is one of the most common types and causes a large number of deaths every year [4]. The most common disease related to the stomach is colorectal cancer, which affects both women and men [5]. Colorectal cancer comprises three significant infections: polyp, ulcer, and bleeding. In 2015, about 132,000 cases of colorectal cancer were recorded in the USA [6]. A survey conducted in 2017 showed that approximately 21% of the USA population suffers from gastrointestinal infections, and 765,000 deaths were attributed to maladies of the stomach [7]. According to the 2018 global cancer report for 20 regions of the world, the total estimated number of cancer cases is 18.1 million; among them, 9.2% of deaths and 6.1% of new cases relate to colorectal cancer. According to the American Cancer Society, approximately 27,510 new stomach cancer cases of both genders (10,280 women and 17,230 men) were observed in the US in 2019, along with a total of 11,140 deaths (4,340 women and 6,800 men) due to colorectal cancer [8].

GI infections can be easily cured if they are diagnosed at an early stage. Because the small bowel has a complex structure, push gastroscopy is not considered the best choice for diagnosing small-bowel infections such as bleeding, polyp, and ulcer [9]. Traditional endoscopy is an invasive method that is avoided by endoscopists and not recommended to patients because of its high level of discomfort [10]. These problems were resolved by a new technology introduced in the year 2000, namely Wireless Capsule Endoscopy (WCE) [11]. WCE is a small pill-shaped device consisting of batteries, a camera, and a light source [12]. While passing through the gastrointestinal tract (GIT), a WCE capsule captures almost 50,000 images before being excreted through the anus. Most malignant diseases of the GIT, such as bleeding, polyp, and ulcer, are diagnosed through WCE. WCE has proved to be an authentic modality for painless investigation and examination of the GIT [13]. This technique is more convenient than traditional endoscopies; moreover, it provides better diagnostic accuracy for bleeding and tumor detection, specifically in the small intestine [14]. However, it is very difficult for an expert physician to thoroughly examine all of the captured images, as this is a chaotic and time-consuming task; the manual analysis of WCE frames is not easy and takes much time [15]. To resolve this problem, researchers are introducing computer-aided diagnosis (CAD) methods [16] that help doctors detect disease more accurately in less time. A typical CAD system consists of five major steps: image pre-processing, feature extraction, optimal feature selection, feature fusion, and classification. The extraction of useful features is one of the most significant tasks; CAD systems extract numerous features for accurate diagnosis, including texture [17], color [18], and others [19]. Not all extracted features are useful; some may be irrelevant. It is therefore essential to reduce the feature vector by removing irrelevant features for better efficiency.

In this work, to detect and classify GI abnormalities, two pipeline procedures are considered: bleeding segmentation and GI abnormality classification. The significant contributions of the work are: i) in the bleeding detection step, a hybrid technique is implemented in which the original image is split into three channels and thresholding is applied to each color channel; pixel-by-pixel matching is then performed, a mask is generated for each channel, and all mask images are finally combined into one image for the final bleeding segmentation; ii) in the classification procedure, transfer learning is utilized for extracting deep learning features. The original images are enhanced using a unified method that combines a chrominance weight map, gamma correction, haze reduction, and YCbCr color conversion. Deep learning features are then extracted using a pre-trained CNN model, and LBP features are also obtained for the textural information of each abnormal region; iii) a new method named entropy-controlled ensemble learning is proposed, which selects the best learning features for correct prediction as well as fast execution. The selected features are assembled into one vector using a concatenation approach; iv) the performance of the proposed method is validated with several combinations of features, and many classification methods are also used to validate the selected feature vector.

2 Related Work

Several machine learning and computer vision-based techniques have been introduced for the diagnosis of human diseases such as lung cancer, brain tumor, and GIT infections from WCE images [20,21]. The stomach is one of the most significant organs of the human body, and its most conspicuous diseases are ulcer, bleeding, and polyp. In [22], the authors utilized six features of different color spaces for the classification of non-ulcer and ulcerous regions. The color spaces used are CMYK, YUV, RGB, HSV, LAB, and XYZ. After feature extraction, the cross-correlation method is utilized for fusion, and 97.89% accuracy is obtained using a support vector machine (SVM) for classification. Suman et al. [23] proposed a new method for classifying bleeding images from non-bleeding ones, based mainly on statistical color features obtained from RGB images. Charfi et al. [24] presented another methodology for colon irregularity detection from WCE images utilizing variance, LBP, and DWT features. They used a multilayer perceptron (MLP) and SVM for classification; the method performed better than existing ones, achieving 85.86% accuracy with linear SVM and 89.43% with MLP. In [25], the authors proposed a CAD method for bleeding classification using unsupervised and supervised learning algorithms. Souaidi et al. [26] proposed a unique approach named multiscale complete LBP (MS-CLBP) for ulcer detection, based mainly on the Laplacian pyramid and completed LBP (CLBP). In this method, ulcer detection is performed in two color spaces (YCbCr and RGB), using the G channel of RGB and the Cr channel of YCbCr. Classification is performed using SVM, attaining average accuracies of 93.88% and 95.11% on the two datasets. According to the survey conducted by Fan et al. [27], different pre-trained deep learning models have covered numerous aspects of the medical imaging domain, and many researchers have utilized CNN models for the accurate classification and segmentation of diseases or infections.

In contrast, images belonging to the same category should share the same learned features; the overall recognition accuracy achieved by this method is 98%. Sharif et al. [28] used contrast-enhanced color features for segmenting the infected region from the image. Geometric features are then extracted from the resulting segmented portion. Two deep CNN models, VGG19 and VGG16, are also used in this method: the extracted deep features of both models are fused using the Euclidean fisher vector (EFV) and later combined with the geometric characteristics to obtain strong features. Conditional entropy is employed on the resulting feature vector for optimal feature selection, which is classified using the KNN classifier, achieving a highest accuracy of 99.42%. Diamantis et al. [29] proposed a novel method named LB-FCN (Look-Behind Fully Convolutional Neural Network), which outperformed existing methods and achieved better GI abnormality detection results. Alaskar et al. [30] utilized GoogleNet and AlexNet for classifying ulcer and non-ulcer images. Khan et al. [31] suggested a technique for ulcer detection and GI abnormality classification, using ResNet101 for feature extraction; they optimized the features with a distance fitness function along with grasshopper optimization and achieved 99.13% classification accuracy with C-SVM. The literature shows that CAD systems for disease recognition mostly rely on handcrafted features (shape, color, and texture information). However, impressed by the performance of CNNs in other domains, some researchers have employed CNN models in the medical field for disease segmentation and classification [32,33]. Inspired by these studies, we utilize deep learning for better classification accuracy.

3 Proposed Methodology

A hybrid architecture is proposed in this work for the automated detection and classification of stomach abnormalities. The proposed architecture follows two pipeline procedures: bleeding abnormality segmentation and GI infection classification. The proposed bleeding segmentation procedure is illustrated in Fig. 1. First, the RGB bleeding images are selected from the database; then all three channels are extracted and thresholding is applied. The output images produced by the threshold function are compared pixel-wise and used to generate a mask for each channel. Later, the masks of all three channels are combined, and as a result a segmented image is obtained. A detailed description of each step is given below.

Figure 1: Flow diagram of the proposed segmentation method

3.1 Bleeding Abnormality Segmentation

Consider X(i,j) to be an original RGB WCE image of dimension 256×256×3, and let δR, δG, and δB denote the extracted red, green, and blue channels, respectively, as illustrated in Fig. 2. After splitting the channels, Otsu thresholding is applied to each channel. Mathematically, the thresholding process is defined as follows.

Figure 2: Visualization of the original image and the extracted channels δR, δG, and δB

In Otsu segmentation, threshold selection is based on a cost function and can be calculated as:
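A standard statement of Otsu's criterion, consistent with the cost-function description above (our reconstruction of the omitted equation), is:

$$ t_i^{*}=\arg\max_{t}\;\omega_{0}(t)\,\omega_{1}(t)\big[\mu_{0}(t)-\mu_{1}(t)\big]^{2},\qquad \phi(y_i)(x,y)=\begin{cases}1, & \delta_i(x,y)\geq t_i^{*}\\ 0, & \text{otherwise}\end{cases} $$

Here ω0(t) and ω1(t) are the probabilities of the two classes separated by threshold t, and μ0(t) and μ1(t) are their mean intensities.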

where φ(yi) is a thresholded image and i indexes the three thresholded images for the channels δR, δG, and δB. After that, each channel is compared with its corresponding thresholded channel, and a mask is generated for each color channel. Later, the information of all channels is combined through the following equation. Let φ(y1), φ(y2), and φ(y3) denote the thresholded images of the red, green, and blue channels, respectively.

Suppose φ(y1) ∈ S1, φ(y2) ∈ S2, and φ(y3) ∈ S3 represent the three thresholded channels, as shown in Fig. 3; then the fusion formula is defined as follows:
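One consistent reading of this fusion, as a pixel-wise intersection of the three binary masks, is:

$$ P(R_{1}\cap R_{2}\cap R_{3})(i,j)=\phi(y_{1})(i,j)\,\wedge\,\phi(y_{2})(i,j)\,\wedge\,\phi(y_{3})(i,j) $$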

where P(R1 ∩ R2 ∩ R3) denotes the final bleeding-segmented image; visual results are shown in Fig. 4.

Figure 3: Extracted thresholded channels
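As an illustration of the procedure above, the following Python sketch (our illustrative code using OpenCV, not the authors' implementation; the function name is hypothetical) thresholds each 8-bit channel with Otsu's method and fuses the resulting masks by pixel-wise intersection:

```python
import cv2
import numpy as np

def segment_bleeding(rgb_image: np.ndarray) -> np.ndarray:
    """Threshold each color channel of an 8-bit RGB image with Otsu's
    method, then fuse the three binary masks by pixel-wise intersection,
    i.e. P(R1 ∩ R2 ∩ R3)."""
    masks = []
    for c in range(3):  # channels delta_R, delta_G, delta_B
        channel = rgb_image[:, :, c]
        # Otsu picks the threshold that maximizes between-class variance
        _, mask = cv2.threshold(channel, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        masks.append(mask)
    # keep only the pixels flagged in all three channel masks
    return cv2.bitwise_and(masks[0], cv2.bitwise_and(masks[1], masks[2]))
```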

3.2 Abnormalities Classification

In the classification task, we present a deep learning-based architecture for classifying GI frames into bleeding, ulcer, and healthy classes. This task consists of three steps: enhancement of the original images, deep feature extraction, and selection of robust features for classification. The flow diagram is presented in Fig. 5. The mathematical description of the proposed classification task is given below.

Figure 4: Segmentation output. (a) Original image, (b) detected red spots, (c) proposed binarized segmented image, (d) ground-truth image

3.2.1 Contrast Enhancement

WCE images may suffer from non-uniform lighting, low visibility, diminished colors, blurring, and low contrast of image characteristics [34]. In the very first step, we apply a chromatic weight map to improve the saturation of the images, since color can be an essential indicator of image quality. Let X(i,j) be the input image of dimension 256×256. For each input image, the distance of each pixel's saturation from the top of the saturation range is calculated to obtain this weight map. Mathematically, the chromatic weight map is expressed as follows:
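A standard Gaussian-distance form of the saturation weight map, consistent with the variables defined next (our reconstruction, where S(i,j) is the per-pixel saturation), is:

$$ X_{c}(i,j)=\exp\!\left(-\,\frac{\big(S(i,j)-S_{\max}\big)^{2}}{2\sigma^{2}}\right) $$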

where Xc(i,j) is the output image of the chromatic weight map, σ represents the standard deviation with a default value of 0.3, and Smax denotes the highest value of the saturation range; therefore, only a small weight is allocated to pixels whose saturation is far from Smax. After the chromatic weight computation, gamma correction is applied to control the brightness and contrast ratio of the image Xc(i,j). Generally, input and output values range between 0 and 1. In the proposed preprocessing approach, we use γ = 0.4 because, at this value, images are enhanced suitably without losing important information. Mathematically, gamma correction is formulated as follows:
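In its standard power-law form, matching the description below, gamma correction can be written as:

$$ X_{GM}(i,j)=A\cdot\big(V_{X_{c}}(i,j)\big)^{\gamma},\qquad \gamma=0.4,\; A=1 $$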

where A is a constant (generally A = 1) and VXc(i,j) is the positive real value of the chromatic-weight-map image, which is raised to the power γ and multiplied by A. Input and output typically range from 0 to 1. We then apply image dehazing, whose main objective is to recover a flawless image from a hazy one, although it may not fully capture the essential features of hazy images. Mathematically, image dehazing is formulated as follows:
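The widely used atmospheric-scattering model, matching the convex-combination description below (our reconstruction of the omitted equation), is:

$$ X_{GM}(m,n)=J(m,n)\,t\big(X_{GM}(m,n)\big)+A\,\Big(1-t\big(X_{GM}(m,n)\big)\Big) $$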

A hazy image can thus be represented as a convex combination of the image radiance J and the atmospheric light A, where XGM(m,n) is a pixel of the hazy image XGM and t(XGM(m,n)) is its transmission. Consequently, by recovering the image radiance J(m,n) from the hazy pixel XGM(m,n), the problem of image hazing is resolved. The visual effects of this formulation are illustrated in Fig. 6. Finally, the YCbCr color transformation is applied and the Y channel is selected based on peak pixel values; this channel is further utilized for the feature extraction task.

Figure 5: Flow diagram of the proposed classification method

Figure 6: (a) Output images of gamma correction, (b) output images of haze reduction
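To make the enhancement chain concrete, the following Python sketch (illustrative only; the chromatic-weight-map and haze-reduction steps are omitted) applies the γ = 0.4 power-law correction and extracts the luminance channel after color conversion:

```python
import cv2
import numpy as np

def enhance(image_bgr: np.ndarray, gamma: float = 0.4, A: float = 1.0) -> np.ndarray:
    """Gamma-correct an 8-bit BGR image and return its luminance channel."""
    normalized = image_bgr.astype(np.float32) / 255.0   # inputs scaled to [0, 1]
    corrected = A * np.power(normalized, gamma)          # X_GM = A * V^gamma
    corrected8 = np.clip(corrected * 255.0, 0, 255).astype(np.uint8)
    # OpenCV orders the channels as Y, Cr, Cb; index 0 is the luminance Y
    ycrcb = cv2.cvtColor(corrected8, cv2.COLOR_BGR2YCrCb)
    return ycrcb[:, :, 0]
```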

3.2.2 Feature Extraction (FE)

Deep Learning Features: For deep learning features, we utilize a pre-trained InceptionV3 model, trained on the enhanced WCE dataset. The InceptionV3 model consists of 316 layers and 350 connections; Fig. 7 gives a generalized visual representation of its layers. The input size for this model is 299 × 299 × 3, and the output of each layer feeds into the next. It consists of a series of layers, such as a scaling layer, average-pooling layers, convolutional layers, depth-concatenation layers, ReLU layers, batch-normalization layers, max-pooling layers, a softmax layer, and a classification layer. Each layer in the InceptionV3 model has several parameters.

Figure 7: Detailed model of InceptionV3

Using this model, the ratio used for training and testing is 70:30. The feature extraction process is conducted using the transfer learning method: in TL, the pre-trained model is retrained on GI WCE images. For this purpose, an input layer and an output layer are required; we use the first convolution layer as input and select the average-pool layer as output. After activation of the average-pool layer, a feature vector of dimension 1×2048 is obtained; for N images, the vector is N×2048.
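The extraction step can be sketched in Python as follows (an approximation under stated assumptions: the paper's pipeline is MATLAB-based, whereas this snippet uses recent torchvision and simply reads the 2048-dimensional pooled output of a pre-trained InceptionV3; `deep_features` is an illustrative name):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained InceptionV3; replacing the classifier with Identity exposes
# the 2048-d vector produced after the global average-pool layer.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),          # InceptionV3 expects 299 x 299 x 3
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x)                     # shape (1, 2048); N images -> (N, 2048)
```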

Texture-Oriented Features: For texture-oriented features, we extract Local Binary Patterns (LBP). LBP is a significant method used for object identification and detection [35]. LBP codes are built from bitwise transitions from 0 to 1 and 1 to 0. LBP takes a grayscale image as input and then computes the mean and variance of the neighborhood intensities of each pixel. The mathematical representation of LBP is formulated as follows:
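A standard formulation of the LBP operator and its variance-based contrast measure, consistent with the symbols defined below (our reconstruction), is:

$$ \mathrm{LBP}_{P,R}=\sum_{p=0}^{P-1}s\big(d_{p}(P)-t\big)\,2^{p},\qquad s(x)=\begin{cases}1,&x\geq 0\\0,&x<0\end{cases},\qquad K_{C}=\frac{1}{P}\sum_{p=0}^{P-1}\big(k_{p}-\bar{k}\big)^{2} $$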

Here, the number of neighborhood intensities is represented by P, R is the radius, kp is the variance term of the neighboring pixel intensities, and KC represents the contrast of intensity calculated from (P,R).

where the neighboring pixels dn(P) are compared with the central pixel t. This yields a feature vector of dimension 1×59 for one image and N×59 for N images.
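A brief sketch of how such a 59-bin descriptor can be computed (assuming the non-rotation-invariant uniform LBP with P = 8 neighbors, which yields exactly 59 pattern codes; scikit-image is used here only for illustration):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(gray: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Compute a normalized histogram of uniform LBP codes (1 x 59 for P = 8)."""
    codes = local_binary_pattern(gray, P, R, method="nri_uniform")
    n_bins = P * (P - 1) + 3                 # 59 bins when P = 8
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist.astype(float) / max(hist.sum(), 1)
```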

3.2.3 Feature Selection

After the extraction of the texture and deep learning features, the next phase is optimal feature selection. In this work, we utilize Shannon entropy along with an ensemble learning classifier for best-feature selection, following a heuristic approach. The Shannon entropy is computed from both vectors separately, and a target function is set based on the mean value of the original entropy vectors. Features equal to or higher than the mean are selected as robust features and passed to the ensemble classifier; this process continues until the error rate of the ensemble classifier falls below 0.1. Mathematically, Shannon entropy is given by the following equations.

Let nkj be the number of occurrences of tj in the category ck, fkj the frequency of tj in this category, and E(tj) the Shannon entropy of the term tj:
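In standard form, consistent with these definitions (our reconstruction of the omitted equations):

$$ f_{kj}=\frac{n_{kj}}{\sum_{k'}n_{k'j}},\qquad E(t_{j})=-\sum_{k}f_{kj}\,\log_{2}f_{kj} $$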

Through this process, approximately 50% of the features are removed from both vectors (deep learning and texture-oriented). Later, these selected vectors are fused into one vector by a simple concatenation approach, as follows. Let A and B be two feature spaces defined on the sample space Ω, where A represents the selected LBP features and B represents the selected InceptionV3 features. For an arbitrary sample ξ ∈ Ω, the corresponding two feature vectors are αlbp ∈ A and βincep ∈ B. The fusion of ξ is then defined by γ = (αlbp, βincep); if αlbp is n-dimensional and βincep is m-dimensional, the fused feature vector γ is (n+m)-dimensional. All combined feature vectors form an (n+m)-dimensional feature space.
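A minimal sketch of the selection and fusion steps, under our reading of the text (per-feature Shannon entropy estimated from a histogram of its values, the mean entropy as the target function, and simple concatenation for fusion; the iterative re-selection driven by the ensemble's error rate, stopped below 0.1, is omitted here):

```python
import numpy as np
from scipy.stats import entropy

def select_by_entropy(features: np.ndarray) -> np.ndarray:
    """features: (N, d). Keep columns whose Shannon entropy is at or above
    the mean column entropy (the target function described in the text)."""
    ent = np.array([
        entropy(np.histogram(features[:, j], bins=32, density=True)[0])
        for j in range(features.shape[1])
    ])
    return features[:, ent >= ent.mean()]

def fuse(alpha_lbp: np.ndarray, beta_incep: np.ndarray) -> np.ndarray:
    """Serial fusion: gamma = (alpha_lbp, beta_incep), an (n+m)-d vector."""
    return np.concatenate([select_by_entropy(alpha_lbp),
                           select_by_entropy(beta_incep)], axis=1)
```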

4 Experimental Results and Comparison

Two datasets are used in this work for the assessment of the proposed GI infection detection and classification method. The description of each dataset is as follows:

KVASIR Dataset contains a total of 4000 images, verified by expert doctors [36]. Eight different classes are included in this dataset: Dyed-Lifted-Polyp (DLP), Dyed-Resection-Margin (DRM), Esophagitis (ESO), Normal-Cecum (NC), Normal-Pylorus (NP), Normal Z-Line (NZL), Polyps (P), and Ulcerative-Colitis (UCE). Each class contains 500 images of varying resolution, from 720×576 up to 1920×1072 pixels. Sample images are shown in Fig. 8a.

Private Dataset was collected from the COMSATS Computer Vision Lab [37] and includes a total of 2326 clinical sample images in three categories: ulcer, bleeding, and healthy. The image size is 512×512. Some sample images are presented in Fig. 8b.

Figure 8: Sample images selected from the datasets: (a) KVASIR dataset [36], (b) Private dataset [37]

4.1 Results

A detailed description of the classification results in quantitative and graphical form is given in this section. Seven different classifiers are used for the evaluation of the proposed methods: Linear Discriminant (L-Disc), Fine Tree (F-Tree), Cubic SVM (C-SVM), Medium Gaussian SVM (M-G-SVM), Linear SVM (L-SVM), Fine KNN (F-KNN), and Ensemble Subspace Discriminant Analysis (ESDA). Different performance measures are utilized, including Specificity (SPE), FNR, Precision (PRE), FPR, Sensitivity (SEN), Accuracy (ACC), F1-Score, Jaccard index, and Dice. All tests are implemented in MATLAB 2019b on a Core i5 7th-generation machine with 4 GB RAM; an 8 GB graphics card is also used for the evaluation of results.

Bleeding Segmentation Results: To validate segmentation accuracy, 20 sample images from the Private dataset are randomly chosen for bleeding segmentation. Ground-truth images designed by an expert physician are utilized for the segmentation accuracy calculation: the ground-truth images are compared with the proposed segmented images pixel by pixel. The proposed bleeding method achieves a highest accuracy of 93.39%; other calculated measures are the Jaccard index (96.58%) and FNR (6.61%). The accuracy of the selected images is given in Tab. 1, which presents the Jaccard index, Dice, FNR, and time (s) for all 20 selected images. The average Dice rate is 87.59%, which is good for bleeding segmentation.

Table 1: Bleeding segmentation results

Classification Results: For classification results, experiments are performed for the selection and fusion processes separately. As mentioned above, two datasets, KVASIR and Private, are used for the evaluation of the proposed method; the Private dataset includes a total of 2326 RGB images in three classes, namely ulcer, bleeding, and healthy. Initially, robust deep learning features are selected using the proposed selection approach. For these features, a maximum accuracy of 99.7% is achieved with the ESDA classifier, as the numerical results in Tab. 2 show. The other calculated measures are SEN (99.7%), SPE (99.88%), PRE (99.70%), F1 score (99.70%), FPR (zero), and FNR (0.3%). The accuracy of ESDA can also be validated from Tab. 3. Besides, the execution time of each classifier is noted during testing, as given in Tab. 2: the best noted time is 16.818 s for the ESDA classifier, and the worst execution time is 26.166 s for the M-G-SVM classifier.

Table 2: Proposed classification results on selected deep learning features for the Private dataset. The maximum accuracy is in bold.

Table 3: Confusion matrix using selected InceptionV3 features for the Private dataset

Tab. 4 shows the classification results using robust deep learning features for the KVASIR dataset. Ten-fold cross-validation was used for testing, and a best accuracy of 86.6% was achieved with the ESDA classifier. The other calculated measures of the ESDA classifier are SEN (86.62%), SPE (98.08%), PRE (87.08%), F1 score (86.60%), FPR (0.04), and FNR (13.4%). The accuracy of this classifier can also be validated from Tab. 5. The minimum noted time is 27.67 s for F-Tree, versus 43.01 s for the ESDA classifier; however, the accuracy of the ESDA classifier is better than that of all other methods.

Later, the selected deep learning and LBP features are fused and evaluated. Results for the Private dataset using the fused vector are presented in Tab. 6. The maximum accuracy after the fusion process reaches 99.8% for the ESDA classifier, which also shows better performance than the other classifiers in terms of the remaining measures: SEN (99.83%), SPE (99.92%), PRE (99.80%), F1 score (99.81%), FPR (zero), and FNR (0.2%). The accuracy of this classifier can also be validated from Tab. 7. Comparison with Tab. 2 shows that the results improve slightly after fusion, but the computational time increases by an average of 4 to 5 s. The best noted time for this experiment is 17.03 s, while the worst is 42.39 s. In conclusion, ESDA performs better than the other methods.

Table 4: Proposed classification results on selected deep learning features for the KVASIR dataset. The maximum accuracy is in bold.

Table 5: Confusion matrix using selected InceptionV3 features for the KVASIR dataset

Table 6: Classification results of the proposed classification architecture for the Private dataset. The maximum accuracy is in bold.

The fused vector is also applied to the KVASIR dataset, achieving a maximum accuracy of 87.8% with the ESDA classifier. Tab. 8 shows the results of the proposed architecture; the other calculated measures of this classifier are SEN (86.4%), SPE (98.06%), PRE (86.99%), F1 score (86.63%), FPR (0.02), and FNR (13.6%). Moreover, Tab. 9 shows the confusion matrix of this classifier, which supports the reported ESDA accuracy: the diagonal values show the correct predictions for each abnormality, while the remaining values contribute to the FNR. Accuracy improves after the fusion process at the cost of a small increase in execution time: the best noted time is 26.06 s, while the ESDA classifier takes 50.70 s. Overall, it can be concluded that the ESDA classifier shows the best performance.

Table 7: Confusion matrix of the proposed classification architecture for the Private dataset

Table 8: Classification results of the proposed classification architecture for the KVASIR dataset. The maximum accuracy is in bold.

Table 9: Confusion matrix using selected fused LBP and InceptionV3 features for the KVASIR dataset

4.2 Comparison with Existing Methods

The review above shows that researchers have proposed several methods for GI abnormality detection and classification. Most existing CAD methods are based on traditional techniques and algorithms, relying only on color or texture information, although some methods use a combination of features and some are based on deep CNN features. Despite the many existing CAD methods, limitations remain in the older approaches, such as low contrast of captured frames, the similar color of infected and healthy regions, the problem of proper color-model selection, hazy images, and redundant information. These limitations motivated us to develop a robust method for GI abnormality detection and classification with better accuracy. The proposed deep learning method is evaluated on two datasets, Private and KVASIR, achieving accuracies of 99.80% and 87.80%, respectively. The comparison with existing techniques is given in Tab. 10; it is conducted based on the abnormality name or dataset because most GI datasets are private, so we mainly focus on disease type. The table shows that the proposed architecture gives better accuracy as well as execution time.

Table 10: Comparison with existing methods

5 Conclusion

In this article, we proposed a deep learning architecture for the detection and classification of GI abnormalities. The proposed architecture consists of two pipeline procedures, detection and classification. In the detection task, the bleeding region is segmented by fusing three separate channels. In the classification task, deep learning and texture-oriented features are extracted, and the best features are selected using the Shannon-entropy-controlled ESDA classifier; the selected features are concatenated and then classified. In the evaluation phase, the segmentation process achieves an average accuracy of over 87% for abnormal bleeding regions. For classification, the accuracy on the Private dataset is 99.80%, while on the KVASIR dataset it is 87.80%. The results show that the proposed selection method performs better than existing techniques and that the fusion process is effective for problems with more classes, such as the KVASIR dataset classification. In addition, texture features have a high impact on disease classification, and their fusion with deep learning features addresses the issue of texture variation. In future studies, we will focus on ulcer segmentation through deep learning.

Funding Statement: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
