
Screening of COVID-19 Patients Using Deep Learning and IoT Framework

Computers, Materials & Continua, December 2021

Harshit Kaushik,Dilbag Singh,Shailendra Tiwari,Manjit Kaur,Chang-Won Jeong,Yunyoung Nam and Muhammad Attique Khan

1Manipal University Jaipur,India

2Bennett University,Greater Noida,India

3Thapar Institute of Engineering and Technology,Patiala,India

4Medical Convergence Research Center,Wonkwang University,Iksan,Korea

5Department of Computer Science and Engineering,Soonchunhyang University,Asan,Korea

6Department of Computer Science,HITEC University,Taxila,Pakistan

Abstract: In March 2020, the World Health Organization declared the coronavirus disease (COVID-19) outbreak a pandemic due to its uncontrolled global spread. Reverse transcription polymerase chain reaction is a laboratory test that is widely used for the diagnosis of this deadly disease. However, the limited availability of testing kits and qualified staff and the drastically increasing number of cases have hampered massive testing. To handle COVID-19 testing problems, we apply the Internet of Things and artificial intelligence to achieve self-adaptive, secure, and fast resource allocation, real-time tracking, remote screening, and patient monitoring. In addition, we implement a cloud platform for efficient spectrum utilization. Thus, we propose a cloud-based intelligent system for remote COVID-19 screening using cognitive-radio-based Internet of Things and deep learning. Specifically, a deep learning technique recognizes radiographic patterns in chest computed tomography (CT) scans. To this end, contrast-limited adaptive histogram equalization is applied to an input CT scan followed by bilateral filtering to enhance the spatial quality. The image quality assessment of the CT scan is performed using the blind/referenceless image spatial quality evaluator. Then, a deep transfer learning model, VGG-16, is trained to diagnose a suspected CT scan as either COVID-19 positive or negative. Experimental results demonstrate that the proposed VGG-16 model outperforms existing COVID-19 screening models regarding accuracy, sensitivity, and specificity. The results obtained from the proposed system can be verified by doctors and sent to remote places through the Internet.

Keywords: Medical image analysis; transfer learning; VGG-16; image processing system pipeline; quantitative evaluation; Internet of Things

1 Introduction

In 2019, Wuhan city in China emerged as the epicenter of a deadly virus denominated severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the coronavirus disease (COVID-19) [1]. COVID-19 is a highly contagious disease that was declared a pandemic by the World Health Organization on March 11, 2020 [2,3]. According to the Centers for Disease Control and Prevention of the USA, COVID-19 is a respiratory illness whose common symptoms include dry cough, fever, muscle pain, and fatigue [4]. Due to the unavailability of any effective therapeutic vaccine or drug, COVID-19 may affect people's lungs, breathing tract, and other body organs and systems [5]. Therefore, advanced methods are being developed to handle this pandemic using recent technologies such as artificial intelligence and the Internet of Things (IoT) [6].

COVID-19 can be accurately diagnosed by reverse transcription polymerase chain reaction (RT-PCR), which requires nose and throat swab samples [7]. As SARS-CoV-2 has RNA (ribonucleic acid), its genetic material can be converted into DNA (deoxyribonucleic acid) for detection, in the process known as reverse transcription. However, RT-PCR has a low sensitivity, and the high number of false negatives can be counterproductive to mitigating the pandemic. In addition, RT-PCR is a time-consuming test [8]. Alternatively, radiological imaging can be used to overcome these problems [9]. For instance, chest computed tomography (CT) scans have enabled a higher sensitivity and thus fewer false negatives than RT-PCR for COVID-19 detection [10,11].

Radiographic patterns obtained from chest CT scans can be detected using image processing and artificial intelligence techniques [12,13]. Although CT scanners are widely available in most medical centers, radiologists face a burden because they are required to determine radiographic patterns of COVID-19 and then contact patients for treatment and further analysis. This process is time-consuming and error-prone. Hence, an automatic detection system for remote COVID-19 screening should be developed. Such a system may incorporate cloud computing, IoT, and deep learning technologies to minimize human labor and reduce the screening time [14]. Deep learning has shown great success in medical imaging [15,16]. For remote screening, IoT networking can be used to acquire the samples and transmit the results to other locations for further analysis. Moreover, cognitive radio can be adopted for optimum utilization of the communication spectrum according to the environment and the frequency channels available.

We use cloud computing, cognitive-radio-based IoT, and deep learning for remote COVID-19 screening to improve detection and to provide high-quality-of-service wireless communication by intelligently exploiting the available transmission bandwidth. Using the proposed system, samples can be obtained and analyzed remotely to inform patients about their results through communication networks, which can also support follow-up and medical assistance. To the best of our knowledge, scarce research on COVID-19 detection from chest CT scans using deep learning and IoT has been conducted [17], and the significance of efficient communication between devices for data collection and diagnosis has been neglected.

The proposed IoT-based deep learning model for COVID-19 detection has the following characteristics:

• Contrast-limited adaptive histogram equalization (CLAHE) and bilateral filters are used for preprocessing to enhance the spatial and spectral information of CT scans.

• The blind/referenceless image spatial quality evaluator (BRISQUE) is used to determine the preprocessing performance.

• The VGG-16 deep transfer learning model is trained to classify suspected CT scans as either COVID-19 positive or negative.

• An IoT-enabled cloud platform transmits detection information remotely. Specifically, the platform uses deep learning to recognize radiographic patterns and transfers the results to remote locations through communication networks.

• For better resource allocation and efficient spectrum utilization between the IoT devices, cognitive radio is adopted for intelligent self-adapting operation through the transmission media.

The performance of the proposed COVID-19 detection model is compared with similar state-of-the-art deep transfer learning models, such as XceptionNet, DenseNet, and a convolutional neural network (CNN).

The remainder of this paper is organized as follows. In Section 2, we analyze related work. Section 3 presents the formulation of the proposed model and the implementation of the COVID-19 detection system. Experimental results and the corresponding discussion are presented in Section 4. Finally, we draw conclusions in Section 5.

2 Related Work

Chest CT scans have been used for early detection of COVID-19 using various pattern recognition and deep learning techniques [18]. Ahmed et al. [19] proposed a fractional-order marine predators algorithm implemented in an artificial neural network for X-ray image classification to diagnose COVID-19. The algorithm achieved 98% accuracy using a hybrid classification model. Li et al. [12] analyzed COVID-19 manifestations in chest CT scans of approximately 100 patients, finding that multifocal peripheral ground glass and mixed consolidation are highly suspicious features of COVID-19. Rajinikanth et al. [20] developed an Otsu-based system to segment COVID-19 infection regions on CT scans and used the pixel ratio of infected regions to determine the disease severity.

Features related to COVID-19 on CT scans cannot be extracted manually. Singh et al. [8] studied CT scans and classified them using a differential-evolution-based CNN, which outperformed previous models by 2.09% regarding sensitivity. Li et al. [21] conducted a severity-based classification on CT scans of 78 patients and found a total severity score of 0.918 for diagnosing severe–critical COVID-19 cases. However, no clinical information of the patients was included in the study, and the dataset was imbalanced due to the smaller number of severe COVID-19 cases.

Various IoT-enabled technologies related to COVID-19 have been proposed. Adly et al. [5] reviewed various contributions to prevent the spread of COVID-19. They found that artificial intelligence has been used for COVID-19 modeling and simulation, resource allocation, and cloud-based applications with the support of networking devices. Sun et al. [22] proposed a four-layer IoT and blockchain method to obtain remote information from patients, such as electrocardiography signals and motion signals. The signals were transmitted via a secure network to the application layer of a mobile device. They focused on securing transmission using blockchain hash functions, thereby preventing threats and attacks during communication. Ahmed et al. [23] reviewed the cognitive Internet of Medical Things focusing on the COVID-19 pandemic. Machine-to-machine communication was performed using a wireless network.

Overall, CT scans seem suitable for accurate COVID-19 diagnosis. This motivated us to develop a system based on deep learning and IoT techniques for COVID-19 detection from chest CT scans.

3 Proposed COVID-19 Detection System

3.1 Motivation

Recently, various image processing, deep learning, and IoT-based techniques have been used to screen suspected patients with COVID-19 from their CT scans by identifying representative radiographic patterns. However, the available datasets are neither extensive nor diverse. Moreover, few studies have addressed remote COVID-19 diagnosis using machine learning and intelligent networking for optimal spectrum utilization during communication. Thus, we introduce a cognitive-radio-based deep learning technique leveraging IoT technologies for remote COVID-19 screening from CT scans. The machine-to-machine connection in the network is designed to enable contactless communication between institutions, doctors, and patients.

3.2 Developed System

Fig. 1 shows the block diagram of the proposed intelligent cognitive radio network cycle for efficient resource management and communication across channels. Cognitive-radio-based models efficiently use the bandwidth to enable machine-to-machine communication. The cycle begins with the real-time monitoring of samples via different network devices and health applications. To ensure robust transmission, we analyze various environmental characteristics, such as throughput and bit rate. To achieve a high quality of service, we use different supervised and unsupervised methodologies during decision-making for the efficient selection of the available bandwidth during communication.

Figure 1: Proposed Internet of Things (IoT)-based cognitive cycle

We adopt a deep transfer learning model for COVID-19 detection. The results are wirelessly transmitted to other systems after adapting the parameters to the available spectrum of the cognitive radio networks, and the cycle continues. Fig. 2 shows the block diagram of the deep learning technique. First, in the diagnostic system, a preprocessing pipeline is established by applying CLAHE, a bilateral filter, and morphological operations to each CT scan. To determine the image quality, the BRISQUE is applied to the preprocessed images. In this study, we split the dataset into training, validation, and test sets. Then, a modified VGG-16 model was used for feature extraction, and CT scans were classified as either COVID-19 positive or negative. The acquired results were transmitted to other devices via IoT networking for further analysis.

The proposed COVID-19 detection method consists of four main steps, namely, CLAHE, morphological operations, bilateral filtering, and deep transfer learning, as detailed below.

Step 1:Contrast limited adaptive histogram equalization:

CLAHE is an adaptive image enhancement technique [24]. Individual mapping is performed on a specified neighborhood of a pixel, whose size must be set manually during computation [25]. In some cases, noise is amplified in the neighborhood of a pixel owing to homogeneous intensities, resulting in high-peak histograms. CLAHE prevents noise amplification by defining a clip limit, whose value can be determined by setting the limiting slope of the intensity mapping. In this study, we considered a 4 × 4 tile grid with a clip limit of 2. Histogram bins above this limit were clipped, and the excess was distributed uniformly. In addition, bilinear interpolation was applied to recombine the tiles for mapping [26]. Fig. 3 compares the CLAHE configuration used in this study with uniform histogram equalization. Tile-based equalization using CLAHE (Fig. 3c) preserves the regional integrity of the candidate features indicated by arrows. In Fig. 3b, these features are hardly visible because they were equalized according to the global contrast. Thus, CLAHE outperforms similar methods regarding contrast enhancement.
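The clip-and-redistribute step that distinguishes CLAHE from plain histogram equalization can be sketched on a single tile as follows. This is a minimal NumPy illustration, not the authors' implementation; production code would typically use a library routine such as OpenCV's createCLAHE, and the convention of expressing the clip limit as a multiple of the mean bin height is an assumption here.

```python
import numpy as np

def clipped_equalization(tile, clip_limit=2.0, n_bins=256):
    """Histogram-equalize one tile with CLAHE-style clipping.

    Bins above the clip limit (a multiple of the mean bin height)
    are clipped and the excess is redistributed uniformly before
    the CDF-based intensity mapping is built.
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    limit = clip_limit * hist.mean()
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / n_bins  # redistribute excess
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * 255).astype(np.uint8)        # intensity mapping
    return lut[tile]

rng = np.random.default_rng(0)
tile = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)  # low-contrast tile
out = clipped_equalization(tile)
print(int(out.min()), int(out.max()))  # intensity range is stretched
```

Clipping keeps the mapping slope bounded, so homogeneous regions are not over-amplified; the full algorithm then blends neighboring tile mappings with bilinear interpolation, as described above.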

Figure 3: Illustration of the applied histogram equalization technique on chest CT-scan images. (a) The low-contrast original image showing symptoms of COVID-19, (b) the image after applying uniform histogram equalization, (c) the image after applying CLAHE

Step 2:Morphological operations and bilateral filtering:

The next operation in the image processing pipeline is shrinking pixels using morphological filtering. Morphological operations use a structuring element known as a kernel to transform an image by using its texture, shape, and other geometric features [26]. The kernel is an N × N integer matrix that performs pixel erosion. Structuring element z is fit over grayscale image i to produce output image g:

g(x, y) = min_{(s,t) ∈ z} i(x + s, y + t)

When the kernel is superimposed over the pixels inside image i, the center pixel is retained (value of 1) only if the kernel fits entirely within the region, and the remaining pixels are deleted (value of 0). The boundaries of the individual regions can be determined by subtracting the eroded image from the original image. The kernel size and maximum number of iterations are hyperparameters that can be tuned manually.
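The erosion and boundary-extraction steps above can be sketched as follows. This is a minimal NumPy illustration of grayscale erosion with a square kernel (the paper does not give the authors' code; libraries such as OpenCV's erode perform the same operation efficiently):

```python
import numpy as np

def erode(image, k=3):
    """Grayscale erosion with a k×k square structuring element:
    each output pixel is the minimum over its kernel neighborhood."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 255                      # a 3×3 bright square region
eroded = erode(img, k=3)
boundary = img - eroded                  # region boundary, as described above
print(int(eroded.sum()))                 # only the centre pixel survives: 255
```

Subtracting the eroded image from the original leaves exactly the one-pixel-wide boundary of the bright region, which is the boundary-extraction trick the text describes.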

Then, bilateral filtering is applied to the images for edge preservation and denoising. The bilateral filter replaces the intensity of each pixel with the neighboring weighted average intensity. The spatial and intensity terms can be defined as follows:

B_q = (1 / W_p) Σ_{p ∈ S} G_σs(||p − q||) G_σr(||I_p − I_q||) I_p

W_p = Σ_{p ∈ S} G_σs(||p − q||) G_σr(||I_p − I_q||)

where B denotes the resulting image, W_p and p represent the normalization term and image pixel, respectively, q represents the coordinates of the pixel to be filtered (i.e., current pixel), I_q and I_p are the intensities of pixels q and p, respectively, G_σs and G_σr are the spatial kernel and range kernel (intensity term), respectively, and S denotes a window centered at pixel q. In addition, term ||p − q|| controls the influence of the distant pixels of the image, and term ||I_p − I_q|| controls the pixels whose intensity differs from that of the center pixel. High values of σ_s lead to Gaussian blur instead of an edge-preserving bilateral filter. Thus, a proper value should be determined. Fig. 4 illustrates the proposed image processing pipeline.
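A direct, unoptimized implementation of the bilateral filter equation above is sketched below for illustration (the σ values and window radius are arbitrary choices, not the paper's; practical pipelines would use a library routine such as OpenCV's bilateralFilter):

```python
import numpy as np

def bilateral(I, sigma_s=2.0, sigma_r=25.0, radius=3):
    """Bilateral filter: each output pixel is the normalized sum of
    neighbors weighted by a spatial Gaussian G_sigma_s and a range
    (intensity) Gaussian G_sigma_r, as in the equation above."""
    H, W = I.shape
    I = I.astype(np.float64)
    out = np.zeros_like(I)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))    # G_sigma_s
    padded = np.pad(I, radius, mode="edge")
    for y in range(H):
        for x in range(W):
            window = padded[y:y + 2*radius + 1, x:x + 2*radius + 1]
            range_w = np.exp(-(window - I[y, x])**2 / (2 * sigma_r**2))  # G_sigma_r
            w = spatial * range_w
            out[y, x] = (w * window).sum() / w.sum()         # normalized average
    return out

# A noisy step edge: the filter denoises each side but keeps the edge sharp,
# because the range kernel suppresses cross-edge contributions.
rng = np.random.default_rng(1)
img = np.hstack([np.full((16, 8), 50.0), np.full((16, 8), 200.0)])
noisy = img + rng.normal(0, 5, img.shape)
smooth = bilateral(noisy)
```

With the intensity gap (150) far larger than σ_r (25), pixels across the edge receive near-zero weight, which is exactly the edge-preserving behavior the text contrasts with plain Gaussian blur.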

Figure 4: Proposed image processing pipeline

Step 3.Feature extraction:

In this study, we modified a VGG-16 model for automatic feature extraction and COVID-19 detection from CT scans. The VGG-16 model allows transferring learned weights and minimizes the computation time. The modified model accepts an input of dimension N_i × 224 × 224 × 3, where N_i is the number of three-channel RGB images (CT scans) of 224 × 224 pixels. The model consists of numerous partially connected layers. The VGG-16 model improves on AlexNet for transfer learning owing to its 3 × 3 convolution kernels with a predefined stride of 1. In addition, various 1 × 1 convolutions are used in the VGG-16 model to reduce the filter dimensionality and introduce nonlinearity. Maximum pooling of size 2 × 2 downsamples the data to reduce the model complexity during both training and updating of the model weights in subsequent layers. In the modified VGG-16 model, the last few dense layers of the original model are removed and replaced by an average pooling layer, as shown in Fig. 5. In addition, the fully connected layers are replaced by another average pooling layer. Fig. 6 shows the modified architecture of the VGG-16 model.
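The effect of replacing the dense head with average pooling can be illustrated with a mock feature map. The 7 × 7 × 512 shape is the standard VGG-16 convolutional output for a 224 × 224 input; the two-class weight matrix below is hypothetical, standing in for the classification head that follows the pooling layer:

```python
import numpy as np

def global_average_pool(features):
    """Average-pool each channel of a feature map to a single value,
    as done by the pooling layer that replaces VGG-16's dense layers."""
    return features.mean(axis=(0, 1))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Mock final VGG-16 convolutional output for one 224x224x3 input: 7x7x512.
rng = np.random.default_rng(0)
feature_map = rng.random((7, 7, 512))

pooled = global_average_pool(feature_map)   # 512-dimensional feature vector
W = rng.normal(0, 0.01, (512, 2))           # hypothetical 2-class head
probs = softmax(pooled @ W)                 # [P(COVID-positive), P(negative)]
print(pooled.shape)                         # (512,)
```

Pooling rather than flattening into dense layers cuts the parameter count of the head dramatically (512 × 2 weights instead of 7 × 7 × 512 × 4096), which is one common motivation for this modification.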

Figure 5: Demonstration of feature downsampling by an AveragePooling2D filter (red block)

Figure 6: Architecture of the proposed modified VGG-16 model

4 Performance Evaluation

To validate the performance of the proposed VGG-16 model, it was compared with existing transfer learning techniques (e.g., DenseNet-201, XceptionNet) on the SARS-CoV-2 CT-scan dataset [27].

4.1 Data Acquisition

The SARS-CoV-2 CT-scan dataset [27] was used to evaluate the proposed COVID-19 detection system. Fig. 7 shows the interpretation of a sample chest CT scan with COVID-19 positive features indicated by arrows. A total of 1688 COVID-19 chest CT scans were used for evaluation. Fig. 8 shows sample CT scans from the dataset. The samples were split into 65% for the training set, 20% for the validation set, and 15% for the test set. To determine the model performance, it should be tested on unseen data. Therefore, the dataset was split for training, validation, and testing. To further improve the diversity of the CT scans for training, we used data augmentation by applying rotation, shifting, and flipping operations (Fig. 9). The validation and test sets were not augmented, to determine the detection error on real data.
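The 65/20/15 split described above can be reproduced as follows (the shuffling seed is an arbitrary choice, not taken from the paper; on 1688 samples the ratios yield 1097/337/254 images):

```python
import random

def split_dataset(samples, train=0.65, val=0.20, test=0.15, seed=42):
    """Shuffle and split samples into train/validation/test sets with
    the paper's ratios. Only the training portion would later be
    augmented (rotation, shifting, flipping)."""
    assert abs(train + val + test - 1.0) < 1e-9
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train, n_val = int(n * train), int(n * val)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1688))
print(len(train_set), len(val_set), len(test_set))  # 1097 337 254
```

Keeping the validation and test partitions free of augmented copies, as the paper does, ensures the reported error reflects unseen, unmodified scans.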

Figure 7: Axial views of a chest CT scan with characteristics of COVID-19 (+ve) disease (a–c). The red arrows indicate the regions of abnormal manifestations. (a) Onset of lung lesion formation near the centre. (b) Peripherally distributed consolidation with ground glass opacities (GGO). (c) Shadow of GGO in the lower right region of the CT scan

Figure 8: Sample images of COVID-19 chest CT scans from the SARS-CoV-2 dataset

Figure 9: Sample images of data augmentation of the CT-scan images (rotation angle = 74 degrees)

4.2 Experimental Setup

The proposed COVID-19 detection method was implemented on a computer equipped with an Intel Core i7 processor, 8 GB RAM, and an NVIDIA GeForce GTX 1060 graphics card. The model weights were updated using stochastic gradient descent with a learning rate of 9 × 10^−2. The learning rate was reduced exponentially by a factor of one-hundredth after each epoch to approach the global optimum during backpropagation. This method prevents missing the global optimum and falling into a local minimum. The number of epochs was set to 20. The images enhanced after preprocessing were analyzed using the BRISQUE [28].
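Read literally, the schedule above corresponds to lr_t = lr_0 · decay^t with decay = 0.01. The exact decay settings the authors used are not specified, so the factor below is an assumption following their wording:

```python
def exponential_decay(lr0=9e-2, decay=0.01, epochs=20):
    """Exponential learning-rate schedule: lr_t = lr0 * decay**t.
    The decay factor follows the paper's wording ("a factor of
    one-hundredth after each epoch"); the authors' actual schedule
    may differ."""
    return [lr0 * decay**t for t in range(epochs)]

lrs = exponential_decay()
print(lrs[0])  # 0.09
```

Equivalent schedules are available out of the box in most frameworks (e.g., an exponential-decay learning-rate schedule passed to an SGD optimizer), so hand-rolling this is only for illustration.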

4.3 Quantitative Analysis

We analyzed the performance of the proposed system quantitatively using the BRISQUE, model accuracy, loss error, sensitivity, and specificity. The analysis was performed on the CT scans after image enhancement and deep transfer learning.

4.3.1 Analysis of Image Enhancement Phase

The BRISQUE is used for automatic image analysis in the spatial domain. We implemented image quality assessment in MathWorks MATLAB 2019a. The BRISQUE has been applied to magnetic resonance images [29], demonstrating evaluation reliability for medical images. Fig. 10 shows the image quality results of a chest CT scan before and after enhancement. The BRISQUE score falls from 135.25 to 46.25 after enhancement, representing a 65% decrease that demonstrates the effectiveness of the proposed image enhancement pipeline, which provides suitable images for COVID-19 detection.
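The BRISQUE score itself was computed in MATLAB. As an illustration of what the evaluator measures, the sketch below implements only its first stage in NumPy: the mean-subtracted contrast-normalized (MSCN) coefficients, whose natural-scene statistics the full method models with generalized Gaussian fits and a trained regressor (those later stages are omitted here):

```python
import numpy as np

def local_gaussian(img, sigma=7/6, radius=3):
    """Separable Gaussian filtering with edge padding."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.array([np.convolve(r, g, mode="valid") for r in pad])
    return np.array([np.convolve(c, g, mode="valid") for c in rows.T]).T

def mscn(img, C=1.0):
    """Mean-subtracted contrast-normalised (MSCN) coefficients:
    (I - mu) / (sigma_local + C), the local normalisation on which
    BRISQUE's spatial-domain features are built."""
    img = img.astype(np.float64)
    mu = local_gaussian(img)
    var = np.maximum(local_gaussian(img**2) - mu**2, 0.0)
    return (img - mu) / (np.sqrt(var) + C)

rng = np.random.default_rng(0)
ct_like = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
coef = mscn(ct_like)
print(coef.shape)  # (32, 32)
```

BRISQUE's premise is that these coefficients follow characteristic distributions for natural, undistorted images; distortions shift the distribution, which the trained regressor maps to a quality score (lower is better).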

Figure 10: Image enhancement analysis of the original image and the processed image by assigning a BRISQUE score. (a) BRISQUE score = 135.25, (b) BRISQUE score = 46.25

4.3.2 Analysis of Deep Transfer Learning Phase

We then analyzed the modified VGG-16 model quantitatively. Deep transfer learning for COVID-19 detection has been applied in various studies [11–13]. We considered the large-scale SARS-CoV-2 CT-scan dataset with 1688 samples for evaluation and obtained various performance measures, such as the F1 score, recall, sensitivity, specificity, precision, receiver operating characteristic (ROC) curve, and area under the curve (AUC). The proposed deep learning model yielded a training accuracy of 96%, a validation accuracy of 95.8%, and a test accuracy of 95.26%.

Fig. 11 shows the confusion matrices of the modified VGG-16 model and similar models, including DenseNet, XceptionNet, and a CNN. The proposed VGG-16 model outperforms the other models with more true positives and fewer false negatives. A false negative indicates that a patient with COVID-19 was incorrectly predicted as not presenting the disease. In a pandemic context, many false negatives could be dangerous owing to the unaware spread of the disease. The proposed model had a false negative ratio of only 0.04.

Figure 11: Comparative visualization of the confusion matrix analysis. (a) Proposed VGG-16 DTL model, (b) CNN model, (c) DenseNet DTL model, (d) XceptionNet DTL model

We also determined the sensitivity and specificity to test the reliability of the proposed model. Sensitivity (Sn), also known as the true positive rate or recall, measures the proportion of actual positive cases that are predicted correctly. It can be expressed as:

Sn = TP / (TP + FN)

where TP and FN denote the numbers of true positives and false negatives, respectively.

Specificity (Sp) represents the true negative rate, that is, the proportion of people not having COVID-19 who are correctly identified as healthy. It can be expressed as:

Sp = TN / (TN + FP)

where TN and FP denote the numbers of true negatives and false positives, respectively.

An accurate and dependable model should achieve high sensitivity and specificity. The proposed model attained Sn and Sp values of 95.37% and 95.15%, respectively. Thus, the proposed model provides high rates of true positives and true negatives. In addition, the precision relates the true positives and false positives as follows:

Precision = TP / (TP + FP)

The F1 score is another important measure to validate models with imbalanced classes. It is the harmonic mean of precision and recall and therefore evaluates a model by balancing these two measures:

F1 = 2 × (Precision × Recall) / (Precision + Recall)
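The four measures defined above follow directly from confusion-matrix counts. The counts in the example below are illustrative only, not taken from the paper's confusion matrices:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision, and F1 score from
    confusion-matrix counts, per the definitions above."""
    sn = tp / (tp + fn)                         # sensitivity / recall
    sp = tn / (tn + fp)                         # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * sn / (precision + sn)  # harmonic mean
    return sn, sp, precision, f1

# Hypothetical counts chosen only to illustrate the computation.
sn, sp, precision, f1 = classification_metrics(tp=120, fp=6, tn=118, fn=6)
print(f"Sn={sn:.4f} Sp={sp:.4f} P={precision:.4f} F1={f1:.4f}")
```

Because the F1 score is a harmonic mean, it stays low unless precision and recall are both high, which is why it is the preferred summary on imbalanced classes.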

Tab. 1 lists the results of the proposed and similar COVID-19 detection models. The proposed VGG-16 model achieved better F1 score, precision, and recall values than the comparison models. In addition, the proposed model achieved training, validation, and test accuracies of 96%, 95.8%, and 95.26%, respectively. The specificity values of the XceptionNet and DenseNet models were 87% and 80%, respectively, being much lower than the specificity of the proposed model, indicating fewer false positives provided by the proposed model. Although the CNN achieved the highest sensitivity of 99%, its accuracy and specificity were relatively low, being 93% and 86%, respectively.

Table 1: Comparative analysis between the existing and the proposed COVID-19 screening models

To further analyze the proposed model, we obtained the learning curves to identify aspects such as overfitting, underfitting, and bias. Fig. 12 shows the loss curves during training and testing. The proposed model suitably converged after 20 epochs in both phases. In Fig. 12b, during epochs 2 and 3 of testing, the proposed VGG-16 and XceptionNet showed an uneven convergence of loss. This may be due to random classification or noise. In subsequent epochs, the testing error decreased uniformly with negligible fluctuations. Remarkably, the proposed VGG-16 model achieved the lowest loss curve, with the least training error of 0.16 and testing error of 0.17.

Figure 12: Training and validation loss analysis during 20 epochs of the competitive models. (a) Training loss, (b) validation loss

The ROC curve of the proposed model shows the tradeoff between the true and false positive rates. It also indicates the separability between COVID-19 positive and negative cases [8] in the binary classifier. Fig. 13 shows the ROC curve of the proposed model. The model achieved an AUC of 0.99, being superior to that of the other deep transfer learning models (i.e., XceptionNet and DenseNet).
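The AUC reported above can also be computed without tracing the full ROC curve, using the equivalent rank-statistic formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The toy scores below are hypothetical; in practice a library routine (e.g., scikit-learn's roc_curve/auc) would be used on the classifier's predicted probabilities:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive outranks
    the negative, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores: perfectly separated classes give AUC = 1.0.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.90, 0.60, 0.55, 0.20, 0.10]
print(auc_score(labels, scores))  # 1.0
```

An AUC of 0.5 corresponds to chance-level ranking, while the 0.99 reported for the proposed model means positive and negative scans are almost perfectly separable by the model's score.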

Figure 13: Receiver operating characteristic curve of the proposed VGG-16 model with AUC = 0.9952

4.4 Comparisons with Other Deep Transfer Learning Models

Tab. 2 lists the performance of the proposed model and the state-of-the-art MODE-CNN [8] and DRE-Net CNN [30] models. The proposed model outperforms these models regarding accuracy, sensitivity, and specificity. Thus, the proposed model can be used to accurately detect COVID-19 from CT scans.

Table 2: Comparative analysis among the existing and proposed COVID-19 screening models

5 Conclusions

We propose a system for remote screening of COVID-19 using deep learning techniques and IoT-based cognitive radio networks. The adaptability of the cloud-based platform for COVID-19 screening would enable contactless interaction with the patients by sending the samples and results to medical institutions and other locations via IoT networking devices for further analysis. As bandwidth availability is a major issue, cognitive-radio-based transmission is suitable because it can adapt to environmental conditions, enabling effective communication between connected IoT devices. The cognitive-radio-based spectrum utilization technique combined with artificial intelligence may largely reduce the diagnostic and transmission latency. We recognize radiographic patterns using a modified VGG-16 model. To ensure a high quality of service during feature extraction, a separate image enhancement pipeline is implemented. The proposed model successfully classifies CT scans as corresponding to COVID-19 positive or negative cases with an accuracy of 95.26%. The model also reduces the false negative rate to 0.05%, outperforming similar models. The accuracy achieved by the proposed VGG-16 model is higher than that of the transfer learning models DenseNet and XceptionNet by 15.26% and 8.26%, respectively.

Various limitations of our study remain to be addressed. When networking devices are combined with artificial intelligence, security breaches and malicious attacks by hackers become major concerns. In future work, the IoT-based system can be endowed with a secure mode for data transmission using recent technologies such as blockchain for data encryption, thus preventing unauthorized data access during communication. Deep learning models achieve high performance with large datasets; thus, a larger dataset for COVID-19 screening may further improve detection. The clinical information of the patients, such as age, medical history, and stage of disease, was not available, impeding the severity evaluation of COVID-19 positive cases based solely on the CT scans. In future work, we will extend the system to classify CT scans and assign a severity score. The results will be transmitted through a cloud-based platform for contactless COVID-19 screening.

Funding Statement:This study was supported by the grant of the National Research Foundation of Korea (NRF 2016M3A9E9942010), the grants of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), and the Soonchunhyang University Research Fund.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
