
Gastric Tract Disease Recognition Using Optimized Deep Learning Features

Computers, Materials & Continua, 2021, Issue 8

Zainab Nayyar, Muhammad Attique Khan, Musaed Alhussein, Muhammad Nazir, Khursheed Aurangzeb, Yunyoung Nam, Seifedine Kadry and Syed Irtaza Haider

1 Department of Computer Science, HITEC University, Taxila, 47040, Pakistan

2 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia

3 Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

4 Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Beirut, Lebanon

Abstract: Artificial intelligence aids for healthcare have received a great deal of attention. Approximately one million patients with gastrointestinal diseases have been diagnosed via wireless capsule endoscopy (WCE). Early diagnosis facilitates appropriate treatment and saves lives. Deep learning-based techniques have been used to identify gastrointestinal ulcers, bleeding sites, and polyps. However, small lesions may be misclassified. We developed a deep learning-based best-feature method to classify various stomach diseases evident in WCE images. Initially, we use hybrid contrast enhancement to distinguish diseased from normal regions. Then, a pretrained model is fine-tuned, and further training is done via transfer learning. Deep features are extracted from the last two layers and fused using a vector length-based approach. We improve the genetic algorithm using a fitness function and kurtosis to select optimal features that are graded by a classifier. We evaluate a database containing 24,000 WCE images of ulcers, bleeding sites, polyps, and healthy tissue. The cubic support vector machine classifier was optimal; the average accuracy was 99%.

Keywords: Stomach cancer; contrast enhancement; deep learning; optimization; features fusion

1 Introduction

Stomach (gastric) cancer can develop anywhere in the stomach [1] and is curable if detected and treated early [2], for example, before the cancer spreads to the lymph nodes [3]. The incidence of stomach cancer varies globally. In 2019, the USA reported 27,510 cases (17,230 males and 10,280 females) with 11,140 fatalities (6,800 males and 4,340 females) [4]. In 2018, 26,240 new cases and 10,800 deaths were reported in the USA (https://www.cancer.org/research/cancer-facts-statistics/allcancer-facts-figures/cancer-facts-figures-2020.html). In Australia, approximately 2,462 cases were diagnosed in 2019 (1,613 males and 849 females) with 1,287 deaths (780 males and 507 females) (www.canceraustralia.gov.au/affected-cancer/cancer-types/stomach-cancer/statistics). Regular endoscopy and wireless capsule endoscopy (WCE) are used to detect stomach cancer [5,6]. Typically, WCE yields 57,000 frames, and all must be checked [7]. Manual inspection is difficult and must be performed by an expert [8]. Automatic classification of stomach conditions has therefore been attempted [9]. Image preprocessing is followed by feature extraction, fusion, and classification [10], and image contrast is enhanced by contrast stretching [11]. The most commonly used features are color, texture, and shape. Some researchers have fused selected features to enhance diagnostic accuracy [12]. Recent advances in deep learning have greatly improved performance [13].

The principal conventional techniques used to detect stomach cancer are least-squares saliency transformation (LSST), saliency-based methods, contour segmentation, and color transformation [14]. Kundu et al. [15] sought to automate WCE frame evaluation employing LSST followed by probabilistic model-fitting; LSST detected the initially optimal coefficient vectors. A saliency/best-features method was used by Khan et al. [16] to classify stomach conditions using a neural network; the average accuracy was 93%. Khan et al. [7] employed deep learning to identify stomach diseases. Deep features were extracted from both original WCE images and segmented stomach regions; the latter were important in terms of model training. Alaskar et al. [17] established a fully automated method of disease classification. Pretrained deep models (AlexNet and GoogleNet) were used for feature extraction, and a softmax classifier was used for classification. Fusing the data processed by the two pretrained models enhanced accuracy. Khan et al. [10] used deep learning to classify stomach disease, employing Mask RCNN for segmentation and fine-tuning of ResNet101; the Grasshopper approach was used for feature optimization. Selected features were classified using a multiclass support vector machine (SVM). Wang et al. [18] presented a deep learning approach featuring superpixel segmentation. Initially, each image was divided into multiple slices and superpixels were computed. The superpixels were used to segment lesions and train a convolutional neural network (CNN) that extracted deep learning features and performed classification. The features of segmented lesions were found to be more useful than those of the original images. Xing et al. [19] extracted features from globally averaged pooled layers and fused them with the hyperplane features of a CNN model to classify ulcers; the accuracy was better than that afforded by any single model. Most studies have trained on segmented regions, which improves accuracy but imposes a high computational burden. Thus, most existing techniques are sequential, comprising disease segmentation, feature extraction, reduction, and classification; they focus on initial disease detection to extract useful features, which are then reduced. The limitations include mistaken disease detection and elimination of relevant features.

In the medical field, data imbalances compromise classification. In addition, various stomach conditions have similar colors, and redundant and irrelevant features must be removed. In this paper, we report the development of a deep learning-based automated system employing a modified genetic algorithm (GA) to accurately detect stomach ulcers, polyps, bleeding sites, and healthy tissue.

Our primary contributions are as follows. We develop a new hybrid method for color-based disease identification: initially, a bottom-hat filter is applied, the product is fused with the YCbCr color space, and dehazed colors are used for further enhancement. A pretrained AlexNet model is fine-tuned and further trained using transfer learning. Deep learning features are extracted from FC layers 6 and 7 and fused using a vector length-based approach. Finally, an improved GA that incorporates fitness- and kurtosis-controlled activation functions is developed.

The remainder of this paper is organized as follows. Our proposed methodology is presented in Section 2. The results and a discussion follow in Section 3. Conclusions and suggestions for future work are presented in Section 4.

2 Proposed Methodology

Fig. 1 shows the architecture of the proposed method. Initial database images are processed via a hybrid approach that facilitates color-based identification of diseased and healthy regions. AlexNet was fine-tuned via transfer learning and further trained using stomach features. A cross entropy-based activation function was employed for feature extraction from the last two layers; these features were fused using a vector length approach. The GA was modified employing both a fitness function and kurtosis. Several classifiers were tested on several datasets; the outcomes were both numerical and visual.

Figure 1: Architecture of the proposed methodology

2.1 Color-Based Disease Identification

Early, accurate disease identification is essential [20,21]. Segmentation is commonly used to identify skin and stomach cancers [22]. We sought to identify stomach conditions in WCE images. To this end, we employed color-based discrimination of healthy and diseased regions; the latter appear black or near-black after transformation. We initially applied bottom-hat filtering and then dehazing; the output was passed to the YCbCr color space for final visualization. Mathematically, this process is presented as follows.

Given that Δ(x) is a database of four classes c1, c2, c3, and c4, consider X(i,j) ∈ Δ(x), an input image of dimension N × M × k, where N = 256, M = 256, and k = 3. Bottom-hat filtering is applied to the image X(i,j) as follows:
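A standard bottom-hat formulation, consistent with the symbols defined below, is:

$$X_{bot}(i,j) = \big(X \cdot s\big)(i,j) - X(i,j)$$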

where the bottom-hat image is represented by X_bot(i,j), s is a structuring element of size 21, and · is the closing operator. To generate the color, a dehazing formulation is applied to X_bot(i,j) as follows [23]:
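A common dehazing recovery, written with the symbols defined below (the exact formulation of [23] is assumed), is:

$$X_{haz}(i,j) = \frac{X_{bot}(i,j) - Light}{t(x)} + Light$$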

Here, X_haz(i,j) represents the haze-reduced image, of the same dimension as the input image; Light represents the internal (atmospheric) color of the image; and t(x) is the transparency, with values in [0,1]. Then, the YCbCr color transformation is applied to X_haz(i,j) for final infected-region discrimination. The YCbCr color transformation is defined by the following formula [24].
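The standard YCbCr conversion (the constants of the exact variant used in [24] are assumed) is:

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$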

Here, the red, green, and blue channels are denoted R, G, and B, respectively. The visual output of this transformation is shown in Fig. 2: the top row shows original WCE images of different infections, and the dark areas in the bottom row are the identified disease-infected regions. These resultant images are utilized in the next step for deep learning feature extraction.

Figure 2: Visual representation of contrast stretching results

2.2 Convolutional Neural Network

A CNN is a form of deep learning that facilitates object recognition in medical imaging [25], object classification [26], agriculture [27], action recognition [28], and other fields [29]. Classification is a major issue in all of these domains. Unlike most classification algorithms, a CNN does not require significant preprocessing. A CNN features three principal hierarchical layer types: the first two (convolution and pooling) are used for feature extraction (learning weights and biases), and the last, usually fully connected, layer derives the final output. In this study, we use a pretrained version of AlexNet as the CNN.

2.2.1 Modified AlexNet Model

AlexNet [30] facilitates fast training and reduces over-fitting. The AlexNet model has five convolutional layers and three fully connected layers. All layers employ the max-out activation function, and the last two use a softmax function for final classification [31]. Each input is of dimension 227 × 227 × 3. The dataset is denoted Δ, and the training data are represented by A_cd ∈ Δ; each A_cd belongs to the real numbers R.
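A first-layer mapping consistent with the symbols defined below (a plausible reconstruction, not the paper's exact typesetting) is:

$$h^{(1)} = s\left(m^{(1)} A_{cd} + \rho^{(1)}\right)$$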

Here, s(·) denotes the ReLU activation function, and ρ^(1) denotes the bias vector. m^(1) denotes the weights of the first layer and is defined as follows:
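Its dimension can be written (as an assumed reconstruction, with F^(0) denoting the input dimension) as:

$$m^{(1)} \in \mathbb{R}^{F^{(1)} \times F^{(0)}}$$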

where F denotes the fully connected layer. The input of each layer is the output of the previous layer. This process is shown in mathematical form below.
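A layer-wise recursion consistent with these symbols is:

$$h^{(n)} = s\left(m^{(n)} h^{(n-1)} + \rho^{(n)}\right)$$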

Here, h^(n−1) and h^(n) are the second-to-last and last fully connected layers, respectively. Moreover, m^(n) ∈ R^{F^(n) × F^(n−1)}; therefore, h^(Z) denotes the last fully connected layer, which helps extract the high-level features.
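The training objective takes the standard cross-entropy form:

$$W(a) = -\sum_{v=1}^{U} p_v \log Q_v$$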

Here, W(a) denotes the cross-entropy function, U indicates the overall number of classes indexed by v, p is the true label distribution, and Q is the predicted probability. The overall architecture of AlexNet is shown in Fig. 3.

Figure 3: Architecture of AlexNet model

2.2.2 Transfer Learning

Transfer learning [32] is used to further train a model that is already trained, improving model performance. Given a source input with its learning task and a target with learning task (m, r), where r ≪ m, y_K1 and y_e1 denote the source and target training-data labels, respectively. We fine-tuned the AlexNet architecture and removed the last layer (Fig. 4). Then, we added a new layer featuring ulcers, polyps, bleeding sources, and normal tissue; these are the target labels. Fig. 5 shows that the source data were derived from ImageNet and that the source model was AlexNet, with 1,000 classes/labels. The modified model featured the four classes above and was fine-tuned. Transfer learning delivered the new knowledge to create a modified CNN used for feature extraction.

Figure 4: Fine-tuning of the original AlexNet model

Figure 5: Transfer learning for stomach infection classification
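As an illustration (not the authors' MATLAB/MatConvNet implementation), the fine-tuning step can be sketched in PyTorch; the torchvision layer indices and the four-class head below are assumptions, valid for a recent torchvision release:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # ulcer, polyp, bleeding, healthy

# Load AlexNet pretrained on ImageNet (1,000 source classes).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Fig. 4: remove the final 1,000-way layer and add a new 4-class layer.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Transfer learning: freeze the convolutional base, train the new head.
for p in model.features.parameters():
    p.requires_grad = False

# Cross-entropy loss, matching the function W(a) above.
criterion = nn.CrossEntropyLoss()

# FC6/FC7 activations (classifier[1] and classifier[4] in torchvision's
# AlexNet) can later be read out as the deep feature vectors F1 and F2.
```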

2.3 Feature Extraction & Fusion

Feature extraction is vital; the extracted features represent the input object [33]. We extracted deep learning features from layers FC6 and FC7; mathematically, the vectors are F_1 and F_2. The original feature sizes were N × 4096; that is, 4,096 features were extracted for each image. However, the accuracies of the individual vectors were inadequate. Thus, we combined the features into a single vector, fusing information based on vector length, as follows.
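A length-based serial fusion consistent with the resulting dimension is:

$$F_{fu} = \begin{pmatrix} F_1 & F_2 \end{pmatrix} \in \mathbb{R}^{N \times 8192}$$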

The resultant feature length is N × 8192. This vector is long, and many features will be redundant or irrelevant. We minimized this issue by applying a mean-threshold function that compares each feature to the mean. Mathematically, this process is expressed as follows.
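A thresholding rule matching this description (the notation is assumed) is:

$$\widetilde{F}(k) = \begin{cases} F_{fu}(k), & F_{fu}(k) \geq m \\ \text{discard}, & \text{otherwise} \end{cases} \qquad m = \operatorname{mean}\left(F_{fu}\right)$$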

This shows that fused-vector features ≥ m are selected before proceeding to the next step; the other features are ignored. Then, the optimal features are chosen using an improved GA (IGA).

2.4 Modified Genetic Algorithm

A GA [34] is an evolutionary algorithm applied to identify optimal solutions among a set of candidate solutions. In other words, a GA is a heuristic search algorithm that organizes the best solutions within a search space. GAs involve five steps: initialization/population initialization, crossover, mutation, selection, and reproduction.

Initialization. The maximum number of iterations, population size, crossover percentage, offspring number, mutation percentage, number of mutants, and mutation and selection rates are initialized. Here, the iteration number is 100, the population size is 20, the mutation rate is 0.2, the crossover rate is 0.5, and the selection pressure is 7.

Population Initialization. We initialize the GA population (here, of size 20). Each member of the population is selected randomly in terms of its fused vector and evaluated using a fitness function; here, the softmax function with the fine k-nearest neighbor (F-KNN) method is used. Non-selected features undergo crossover and mutation.

Crossover. Crossover mirrors chromosomal recombination: parents are combined to create a child. Here, the uniform crossover rate is 0.5. Mathematically, crossover can be expressed as follows.
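A uniform crossover consistent with the symbols defined below (the binary-mask form is assumed) is:

$$C_1 = u \odot P_1 + (1 - u) \odot P_2, \qquad C_2 = u \odot P_2 + (1 - u) \odot P_1$$

where ⊙ denotes element-wise selection by the random mask u.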

Here, P_1 and P_2 are the selected parents, and u is a random value that is initially set to 1. This process is shown visually in Fig. 6.

Figure 6: Architecture of crossover

Mutation. To impart unique characteristics to the offspring, one mutation is created in each offspring generated by crossover; the mutation rate is 0.2. Then, we used the Roulette Wheel (RW) [35] method to select chromosomes. RW selection is probability-based.
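A common exponential weighting for Eq. (16), written with the symbols described below (this specific form is an assumption), is:

$$P(y_\delta) = \frac{\exp\left(-\beta_1 \, y_\delta / O_l\right)}{\sum_{\delta} \exp\left(-\beta_1 \, y_\delta / O_l\right)}$$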

In Eq. (16), the sorted population is y_δ, the last population is O_l, and β_1 is the selection pressure, set to 7. When mutation is complete, a new generation is selected.

Selection and Reproduction. Crossover and mutation facilitate chromosome selection by the RW method; thus, the selection pressure is moderate rather than high or low. All offspring engage in reproduction, and fitness values are then computed using the fitness function, with the error rate as the measure of interest. The old generation is then updated. The chromosomes are illustrated in Fig. 7.

This process continues until no further iteration is possible. The vector obtained remains high-dimensional. To reduce its length, we added an activation function based on kurtosis: the kurtosis value is computed after iteration is complete and used to filter the selected features (chromosomes); those that do not fulfill the activation criterion are discarded. Mathematically, this can be expressed as follows:
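Kurtosis takes the standard form, with the selection gate written under an assumed threshold τ:

$$\kappa(f) = \frac{E\left[(f - \mu_f)^4\right]}{\sigma_f^4}, \qquad F_{sel} = \left\{ f : \kappa(f) \geq \tau \right\}$$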

The final selected vector is passed to several machine learning classifiers for classification. In this study, the final vector dimension is N × 1726.

Figure 7: Demonstration of chromosomes
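The following compact sketch, written in Python rather than the paper's MATLAB, illustrates the improved GA of this section: error-based F-KNN fitness, uniform crossover (rate 0.5), bit-flip mutation (rate 0.2), roulette-wheel selection with pressure β1 = 7, and the final kurtosis gate. The helper names, the 3-fold fitness cross-validation, and the mean-kurtosis threshold are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Fitness = F-KNN classification error on the selected feature columns."""
    if not mask.any():
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=1)  # 'fine' KNN
    return 1.0 - cross_val_score(knn, X[:, mask], y, cv=3).mean()

def iga_select(X, y, pop_size=20, iters=100, p_cross=0.5, p_mut=0.2, beta=7.0):
    n = X.shape[1]
    pop = np.random.rand(pop_size, n) < 0.5            # random binary chromosomes
    for _ in range(iters):
        costs = np.array([fitness(c, X, y) for c in pop])
        # Roulette-wheel selection with pressure beta (exponential weighting).
        w = np.exp(-beta * costs / (costs.max() + 1e-12))
        idx = np.random.choice(pop_size, size=pop_size, p=w / w.sum())
        parents = pop[idx]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):            # uniform crossover
            u = np.random.rand(n) < p_cross
            children[i, u], children[i + 1, u] = parents[i + 1, u], parents[i, u]
        flip = np.random.rand(pop_size, n) < p_mut     # bit-flip mutation
        pop = np.where(flip, ~children, children)
    best = pop[np.argmin([fitness(c, X, y) for c in pop])]
    # Kurtosis-controlled activation: among GA-selected features, keep those
    # whose (excess) kurtosis meets a mean-kurtosis threshold (assumed rule).
    cols = np.flatnonzero(best)
    k = kurtosis(X[:, cols], axis=0)
    return cols[k >= k.mean()]
```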

3 Results and Analysis

3.1 Experimental Setup

We used 4,000 WCE images and employed 10 classifiers: the Cubic SVM, Quadratic SVM, Linear SVM, Coarse Gaussian SVM, Medium Gaussian SVM, Fine KNN, Medium KNN, Weighted KNN, Cosine KNN, and Bagged Tree. Of the complete dataset, 70% was used for training and 30% for testing, with 10-fold cross-validation. We used a Core i7 CPU with 14 GB of RAM and a 4 GB graphics card. Coding employed MATLAB 2020a and MatConvNet (for deep learning). We measured sensitivity, precision, the F1-score, the false-positive rate (FPR), the area under the curve (AUC), accuracy, and computational time.
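For reference, a minimal Python analogue of this protocol (the experiments themselves used MATLAB's classifiers) might look as follows; X_selected and y denote the GA-selected features and labels, and the class-name ordering is illustrative:

```python
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 70/30 split, as in the experimental setup (X_selected, y assumed available).
X_train, X_test, y_train, y_test = train_test_split(
    X_selected, y, test_size=0.30, stratify=y, random_state=0)

clf = SVC(kernel="poly", degree=3)  # a 'Cubic SVM' analogue
clf.fit(X_train, y_train)
print(classification_report(
    y_test, clf.predict(X_test),
    target_names=["bleeding", "healthy", "polyp", "ulcer"]))
```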

3.2 Results

The results are shown in Tab. 1. The Cubic SVM achieved an accuracy of 99.2%; its sensitivity, precision, and F1-score were all 99.00%, the FPR was 0.002, the AUC was 1.00, and the computational time was 83.79 s. The Quadratic SVM achieved an accuracy of 99.6%, with associated metrics (in the above order) of 98.75%, 99.00%, 99.00%, 0.002, 1.000, and 78.52 s. The Cosine KNN, Weighted KNN, Medium KNN, Fine KNN, MG SVM, Coarse Gaussian SVM, Linear SVM, and Bagged Tree accuracies were 97.0%, 98.0%, 96.7%, 98.9%, 98.9%, 93.3%, 96.9%, and 96.8%, respectively. The Cubic SVM scatterplot of the original test features is shown in Fig. 8; the first panel refers to the original data and the second to the Cubic SVM predictions. The good Cubic SVM performance is confirmed by the confusion matrix shown in Fig. 9: bleeding was accurately predicted 99% of the time, as were healthy tissue and ulcers, and the polyp figure was >99%. The ROC plots of the Cubic SVM are shown in Fig. 10.

Next, we applied our improved GA. The results are shown in Tab. 2. The top accuracy (99.8%) was afforded by the Cubic SVM, with sensitivity of 99.00%, precision of 99.25%, F1-score of 99.12%, FPR of 0.00, AUC of 1.00, and a time of 211.90 s. The Fine KNN achieved an accuracy of 99.0%, with values (in the above order) of 99.0%, 99.25%, 99.12%, 0.00, 1.00, and 239.08 s. The Cosine KNN, Weighted KNN, Medium KNN, Quadratic SVM, MG SVM, Coarse Gaussian SVM, Linear SVM, and Bagged Tree achieved accuracies of 99.0%, 99.5%, 98.7%, 99.6%, 99.6%, 96.2%, 98.3%, and 98.3%, respectively. The Cubic SVM scatterplot of the original test features is shown in Fig. 11; the first panel refers to the original data and the second to the Cubic SVM predictions. The good Cubic SVM performance is confirmed by the confusion matrix shown in Fig. 12, in which the four classes are healthy tissue, bleeding sites, ulcers, and polyps. Bleeding was accurately predicted 99% of the time, healthy tissue slightly less than 99% of the time, and ulcers and polyps more than 99% of the time. The ROC plots of the Cubic SVM are shown in Fig. 13.

Table 1: Classification accuracy of the proposed optimal feature selection algorithm (testing feature results)

Figure 8: Scatter plot of testing features after applying the GA

Figure 9: Confusion matrix of the Cubic SVM for the proposed method

Figure 10: ROC plots for selected stomach cancer classes using the Cubic SVM after applying the GA

Table 2: Classification accuracy of the proposed optimal feature selection algorithm using training features

Figure 11: Scatter plot of training features after applying the GA

Figure 12: Confusion matrix of the Cubic SVM

Figure 13: ROC plots for selected stomach cancer classes using the Cubic SVM after applying the GA

3.3 Comparison with Existing Techniques

In this section, we compare the proposed method with existing techniques (Tab. 3). In a previous study [7], CNN feature extraction, fusion of different features, selection of the best features, and classification were used to detect ulcers in WCE images; the dataset was collected at the POF Hospital, Wah Cantt, Pakistan, and the accuracy was 99.5%. Another study [9] described handcrafted and deep CNN feature extraction from the Kvasir, CVC-ClinicDB, ETIS-Larib PolypDB, and a private dataset; the accuracy was 96.5%. In another study [15], an LSST technique using probabilistic model-fitting was used to evaluate a WCE dataset; the accuracy was 98%. Our method employs deep learning and a modified GA. We used the private POF Hospital dataset and the Kvasir and CVC datasets to identify ulcers, polyps, bleeding sites, and healthy tissue. The accuracy was 99.8%, and the computational time was 211.90 s. Our method outperforms the existing techniques.

Table 3: Proposed method's accuracy compared with published techniques

4 Conclusion

We automatically identify various stomach diseases using deep learning and an improved GA. WCE image contrast is enhanced using a new color discrimination-based hybrid approach that distinguishes diseased from healthy regions, facilitating later feature extraction. We fine-tuned the pretrained AlexNet deep learning model for the classifications of interest and employed transfer learning to further train it. We fused features extracted from two layers, which improved local and global information. We removed redundant features by modifying the GA fitness function and using kurtosis to select the best features; this improved accuracy and minimized computational time. The principal limitation of this work is that the features remain high-dimensional, which increases computational cost. We will address this problem by employing DarkNet and MobileNet, two recent deep learning models [36,37]. Localizing the disease before feature extraction would also accelerate execution.

Acknowledgement: The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no. RG-1438-034. The authors thank the Deanship of Scientific Research and RSSU at King Saud University for their technical support.

Funding Statement: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
