
An Automated Real-Time Face Mask Detection System Using Transfer Learning with Faster-RCNN in the Era of the COVID-19 Pandemic

2022-08-24
Computers, Materials & Continua, 2022, Issue 5

Maha Farouk S.Sabir,Irfan Mehmood,Wafaa Adnan Alsaggaf,Enas Fawai Khairullah,Samar Alhuraiji,Ahmed S.Alghamdi and Ahmed A.Abd El-Latif

1Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Saudi Arabia

2Centre for Visual Computing, Faculty of Engineering and Informatics, University of Bradford, Bradford, U.K.

3Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, P.O. Box 23713, Saudi Arabia

4Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia

5Department of Cybersecurity, College of Computer Science and Engineering, University of Jeddah, Saudi Arabia

6Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, 32511, Egypt

Abstract: Today, due to the COVID-19 pandemic, the entire world is facing a serious health crisis. According to the World Health Organization (WHO), people in public places should wear a face mask to control the rapid transmission of COVID-19. The governmental bodies of different countries have made wearing a face mask compulsory in public places, but it is very difficult to monitor people manually in overcrowded areas. This research focuses on enforcing one of the important preventative measures against COVID-19 in public places by presenting an automated system that localizes masked and unmasked human faces within an image or video of an area, assisting in this outbreak of COVID-19. This paper demonstrates a transfer learning approach with the Faster-RCNN model to detect faces that are masked or unmasked. The proposed framework is built by fine-tuning the state-of-the-art deep learning model, Faster-RCNN, and has been validated on a publicly available dataset named the Face Mask Dataset (FMD), achieving the highest average precision (AP) of 81% and the highest average recall (AR) of 84%. This shows the strong robustness and capability of the Faster-RCNN model to detect individuals with masked and un-masked faces. Moreover, this work applies to real-time detection and can be implemented in any public service area.

Keywords: COVID-19; deep learning; Faster-RCNN; object detection; transfer learning; face mask

1 Introduction

The governmental response of different nations to control the rapid global spread of COVID-19 was to take necessary preventative measures [1] to avoid a majorly disruptive impact on economic and normal day-to-day activities. In various countries where an increased curve of COVID-19 cases was recorded, a lockdown of several months was implemented as a direct response [2]. To minimize people's exposure to the novel virus, many authorities such as the World Health Organization (WHO) have laid down several preventative measures and guidelines, one such being that all citizens in public places should wear a face mask [3,4].

Before the COVID-19 pandemic, only a minority of people wore face masks, mainly in an attempt to protect themselves from air pollution. Many health professionals, including doctors and nurses, also wore face masks during their operational practices. In addition to wearing face masks, social distancing, i.e., maintaining a distance of 3 ft from any other individual, was suggested [4]. According to the WHO, COVID-19 is a global pandemic, and throughout the world there are up to 22 million infected cases. Many positive cases are usually found in crowded places [4]. Due to the pernicious effect COVID-19 has on people [5], it has become a serious health and economic problem worldwide [6]. According to [7], in more than 180 countries there are six million infected cases with a death rate of 3%. The reason behind this rapid spread of the disease is a lack of adherence to the suggested preventative measures, especially in overcrowded, highly populated areas. The usage of Personal Protective Equipment (PPE) has also been recommended by the WHO; the production of PPE, however, is very limited in many countries [8]. In addition to COVID-19, other diseases, including Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), are also large-scale respiratory diseases that occurred in recent years [9,10]. It is reported by Liu et al. [11] that the exponential growth in COVID-19 cases exceeds that of SARS. Therefore, the top priority of governments is public health [12]. So, in order to help the global effort, the detection of face masks is a very crucial task.

Many scientists prescribed that these respiratory diseases can be prevented by wearing face masks [13]. Previous studies also show that the spread of all respiratory diseases can easily be reduced by wearing face masks [14–16]. Fortunately, Leung et al. [17] observed that the use of surgical face masks also prevents the spread of coronavirus. N95 and surgical masks block the spread of SARS at effective rates of about 91% and 68%, respectively [18]. So, throughout the world, there are many countries where wearing masks is mandated by governmental law. Many private organizations also follow the guidelines on wearing masks [19]. Furthermore, many public service providers only provide services to customers who adhere to the face mask-wearing policy [12]. These rules and laws are imposed by governments in response to the exponential growth and spread of the virus, and it is difficult to ensure that people are following the rules. There are many challenges and risks faced by different policymakers in controlling the transmission of COVID-19 [20]. To track people who violate the rules, a robust automated system is needed. In France, the surveillance cameras of the Paris Metro system are integrated with new AI software to track the face masks of passengers [21]. Similarly, new software developed by the French startup DatakaLab [22] produces statistical data by recognizing people who are not wearing face masks, which helps different authorities predict potential COVID-19 outbreaks. This need is also recognized in our research, and we have developed an automated system that is well suited to detecting real-time violations by individuals not adhering to mask-wearing policies, in turn assisting supervisory bodies. In the era of Artificial Intelligence (AI) [23], various Deep Learning (DL) [24–28] and Machine Learning (ML) techniques [29] are available to design systems that help prevent the transmission of this novel global pandemic [30]. Many techniques are used to design early prediction systems that can help to forecast the spread of the disease. This allows for the implementation of various controlling and monitoring strategies adopted to prevent the further spread of this disease. Many emerging technologies, including the Internet of Things (IoT), AI, DL, and ML, are used to diagnose complex diseases and forecast their early prediction, as with COVID-19 [31–35]. Many researchers have exploited AI's power to quickly detect COVID-19 infections [36], including diagnosis of the virus through chest X-rays. Furthermore, face mask detection refers to the task of finding the location of the face and then detecting masked or unmasked faces [37]. Currently, there are many applications of face recognition and object detection in domains such as education [38], autonomous driving [39], surveillance, and so on [40].

In this research work, we have mainly considered one of the important preventative measures, namely face masks, to control the rapid transmission of COVID-19. Our proposed model is based on the Faster-RCNN object detection model, and it is suitable for detecting violations, i.e., persons who are not wearing a face mask. The main contributions of this research work are given below:

·A novel deep learning model based on transfer learning with Faster-RCNN to automatically detect and localize masked and un-masked faces in images and videos

·A detailed analysis of the proposed model on the primary challenging MS COCO evaluation metrics is also performed to measure the performance of the model

·This technique has not been used previously, and experimental analysis shows the capability of Faster-RCNN in localizing masked and un-masked faces

·A detailed analysis on real-time video of different frame rates is also performed, which shows the capability of our proposed system on real-time videos.

·This system can be integrated with several surveillance cameras and assist different countries and establishments to monitor people in crowded areas and prevent the spread of disease.

The rest of the paper is organized as follows: Section 2 explains related work, Section 3 presents the methodology, and Section 4 explains the results and comparative analysis, followed by the conclusion. Moreover, some sample images of the FMD dataset are given in Fig. 1.

Figure 1: Sample images of the Face Mask Dataset (FMD)

2 Related Work

Towards object detection and image recognition [41], many applicative advancements have taken place [40,42]. In various works, the main focus is on image reconstruction tasks and face recognition for identity verification; the main objective of this work, however, is to detect the faces of individuals who are wearing or refraining from wearing masks. This is done in a real-time capacity, at various locations, with a focus on ensuring public safety from viruses and, secondly, on detecting individuals who violate the rules imposed by establishments or supervisory bodies.

Qin et al. [43] proposed the SRCNet classification network for the identification of face masks with an accuracy of 98.7%. In their work, they used three categories: "correct wearing of facemask", "incorrect wearing of facemask", and faces with "no mask". Ejaz et al. [44] used Principal Component Analysis (PCA) to recognize persons with masked and un-masked faces. It was observed in this research that PCA is not capable of identifying faces with a mask, as its accuracy decreased to 68.75%. Similarly, Park et al. [45] proposed a method to remove sunglasses from a human face and then reconstruct the removed region using a recursive error compensation method.

Li et al. [46] used the object detection algorithm YOLOv3 to detect faces, based on darknet-19, which is a deep network architecture. For training purposes, they used the WIDER FACE and CelebA databases, and later on they validated their model on the FDDB database, achieving accuracy results of 93.9%. In the same way, Din et al. [47] used a Generative Adversarial Network (GAN)-based model to remove face masks from facial images; the region covered by the face mask is then reconstructed using the GAN. Nieto-Rodríguez et al. [48] proposed an automated system to detect the presence of surgical masks in operating rooms. The main objective of this system is to generate an alarm when a face mask is not worn by medical staff. This work achieved 95% accuracy. Khan et al. [49] proposed and designed an interactive model that can remove multiple objects, such as a microphone, from a given facial image; the removed region is later reconstructed using a GAN. Hussain et al. [50] use the VGG16 architecture for recognizing and classifying emotions from the face. In this work, the KDEF database is used for training VGG16, achieving an accuracy of 88%. Loey et al. [51] proposed a hybrid method for face mask classification which includes transfer learning models and traditional machine learning models. The proposed method is divided into two phases: the first phase deals with feature extraction using ResNet-50, while classification is performed in the second stage with Support Vector Machines (SVM), decision trees, and ensemble methods. They used three benchmark datasets for the validation of their proposed model and achieved the highest accuracy of 99.64% with SVM. Ge et al. [52] proposed a model along with a dataset to recognize masked and un-masked faces in the wild. MAFA, a large face mask dataset that includes 35,806 faces with a mask, is introduced in this work. Moreover, they also proposed a model named LLE-CNN, which is based on convolutional neural networks with three major modules: proposal, embedding, and verification. They achieved an average precision of 76.1% with LLE-CNNs.

Furthermore, some researchers have proposed different methods for detecting accessories on faces by employing image features and deep learning methods. These accessories commonly include glasses and hats. Jing et al. [53] proposed a method of glasses detection in a small area between the eyes using edge information in the image. Some traditional machine learning algorithms, including SVM and K-Nearest-Neighbor (KNN), are also used in the detection of different accessories from facial images [54–56]. Recently, deep learning methods capable of extracting abstract and high-level information have been widely used in the detection of face accessories [57,58]. The different kinds of face masks on the face can also be considered facial accessories. Moreover, the conversion of low-quality images to high-quality images is necessary to increase the performance of classification and object detection methods [59–63]. For surveillance monitoring, Uiboupin et al. [64] adopted the approach of Super-Resolution (SR)-based networks that utilize sparse representation for improving the performance of face recognition. Zou et al. [65] also applied SR to low-resolution images to improve the performance of facial recognition and showed that there is a significant improvement in recognition performance when a face recognition model is combined with an SR network. Na et al. [62] improve object detection and classification performance by introducing SR networks on cropped candidate regions. However, these SR methods are based on high-level representations or features of the face for improving the accuracy of face recognition. In the case of facial image classification, especially regarding the automated detection of face masks, there have not been any reports published on improvements in the classification of facial images by employing deep-learning-based SR networks combined with classification networks.

It is evident from the above context that there is a very limited number of articles and research works on face mask detection, and there is a need to improve existing methods. Additional experimentation and research on currently unused algorithms is also required. So, in the battle against COVID-19, we contribute to the body of mask recognition techniques using the approach of transfer learning with the Faster-RCNN model.

3 Proposed Methodology

The proposed methodology is described below; Fig. 2 shows the architecture diagram of our proposed methodology.

Figure 2:Schematic representation of the transfer learning with Faster-RCNN

3.1 Faster-RCNN Architecture

Faster-RCNN is an extension of the Fast-RCNN model, and it consists of two modules. The first module is the Region Proposal Network (RPN), which is simply a convolutional neural network that proposes different regions from an image. The second module is the detector, which detects different objects based on the region proposals extracted by the first module. Together, the two modules form a single, unified network for object detection. An attention mechanism [66] helps the RPN tell the Faster-RCNN network where to look in the image.

3.2 Region Proposal Network(RPN)

The input to the region proposal network can be an image of any size, and its output is a set of rectangular region proposals, each with its own objectness score. This score, generated for each region, indicates whether the region contains an object or not. To generate region proposals, a small network slides over the convolutional feature map output. An n×n spatial window is taken as input by this small network. Every sliding window is mapped to a lower-dimensional feature (256-d for ZF-net [67] and 512-d for VGG, with ReLU [68] following). These features are passed to two fully connected layers, named the box-regression layer and the box-classification layer.

3.3 Anchors

At the location of each sliding window, multiple region proposals are predicted simultaneously, and for each location the maximum number of possible proposals is denoted by k. All of these k proposals are relative to k reference boxes, which are called anchors. Each anchor is associated with an aspect ratio and a scale, and it is centered at the sliding window.
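As a concrete illustration, the k anchors at a single sliding-window position can be enumerated as follows. The scales and aspect ratios below are illustrative placeholders, not the exact values used in our configuration:

```python
# Minimal sketch: enumerate k = len(scales) * len(ratios) anchor boxes
# centered at one sliding-window position (cx, cy). Scales and aspect
# ratios here are chosen for illustration only.

def make_anchors(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return k anchors as (x1, y1, x2, y2) boxes centered at (cx, cy)."""
    anchors = []
    for s in scales:            # anchor area is s * s
        for r in ratios:        # r = height / width
            w = s / r ** 0.5    # solve w * h = s^2 with h = r * w
            h = s * r ** 0.5
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

anchors = make_anchors(100, 100)
print(len(anchors))  # 9 anchors: 3 scales x 3 aspect ratios
```

With three scales and three aspect ratios this yields k = 9 reference boxes per location, the configuration used in the original Faster-RCNN paper.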

3.4 Loss Functions

To train the RPN network, a binary class label (being an object or not) is assigned to each anchor. The objective function to be minimized for an image is defined as:

L({pi}, {ti}) = (1/Ncls) Σi Lcls(pi, pi*) + λ (1/Nreg) Σi pi* Lreg(ti, ti*)   (1)

In the above equation, i represents the index of an anchor in a mini-batch, and for each anchor, pi represents the predicted probability (output score) of the anchor being an object. The ground-truth label pi* is one if the anchor is positive and zero if it is negative. In simple words, the first term in Eq. (1) is the classification loss over two classes, object vs. not object. Similarly, the regression loss of the bounding boxes is represented by the second term in Eq. (1). The four bounding box coordinates predicted by the model are represented by ti, while the ground-truth coordinates associated with a positive anchor are represented by ti*. The classification loss over two classes is represented by Lcls, and Lreg(ti, ti*) = R(ti − ti*) is used for the regression loss, where R represents the robust loss function (smooth L1) as defined in [69]. The regression loss term pi* Lreg is activated only for positive anchors (pi* = 1) and is disabled if pi* = 0. The outputs of the fully connected layers cls and reg comprise {pi} and {ti}, respectively. The two terms are weighted by λ, a balancing parameter, and normalized by Ncls and Nreg, respectively. Ncls is the normalization parameter of the mini-batch, and Nreg is the normalization parameter of the regression loss, which is equal to the number of anchor locations. Moreover, for bounding box regression, the following parameterization of the four coordinates is adopted [24]:

tx = (x − xa)/wa,   ty = (y − ya)/ha    (2)
tw = log(w/wa),     th = log(h/ha)      (3)
tx* = (x* − xa)/wa, ty* = (y* − ya)/ha  (4)
tw* = log(w*/wa),   th* = log(h*/ha)    (5)

where x, y, w, and h denote the box center coordinates and the box width and height. The predicted box, the anchor box, and the ground-truth box are denoted by x, xa, and x*, respectively (and likewise for y, w, and h). This can be thought of as bounding-box regression from an anchor box to a nearby ground-truth box. More specifically, x, y, w, h represent the coordinates, width, and height of the predicted box; xa, ya, wa, ha those of the anchor box; and x*, y*, w*, h* those of the ground-truth box.
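The coordinate parameterization described above and the smooth-L1 robust loss R can be sketched in a few lines of plain Python; the box format (center x, center y, width, height) and the helper names are ours, for illustration only:

```python
import math

# Sketch of the anchor-relative box parameterization and the smooth-L1
# robust loss R used inside L_reg. Boxes are (x, y, w, h) with (x, y)
# the box center; `anchor` plays the role of (xa, ya, wa, ha).

def encode(box, anchor):
    """Compute t = (tx, ty, tw, th) of `box` relative to `anchor`."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def smooth_l1(d):
    """Robust loss R: quadratic near zero, linear in the tails."""
    return 0.5 * d * d if abs(d) < 1 else abs(d) - 0.5

def reg_loss(t_pred, t_star):
    """L_reg(t, t*) = sum of R over the four coordinate differences."""
    return sum(smooth_l1(p - g) for p, g in zip(t_pred, t_star))
```

When the predicted box coincides with its anchor, all four components of t are zero, and the regression loss between identical encodings is zero, as expected.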

3.5 Sharing of Features

Several ways can be used to train Faster-RCNN with shared features. One is alternating training, whereby the RPN is trained first and the region proposals it generates are then used to train Fast-RCNN. The alternative is approximate joint training, in which the gradient with respect to the proposal box coordinates is ignored; exact joint training would require an RoI pooling layer that is differentiable with respect to the box coordinates.

3.6 Transfer Learning Using Faster-RCNN

In this work, we utilize a transfer learning approach with Faster-RCNN. We start with a Faster-RCNN model pre-trained on the COCO-2017 dataset and then fine-tune the last layer to train the model on our custom dataset with the required number of classes [70]. The classifier is replaced with our classes, which are "mask" faces, "un-masked" faces, and a background class. The backbone network used here is ResNet-50. The layers of ResNet-50 are not trained further and are kept frozen. Usually, in transfer learning by fine-tuning, the layers of the pre-trained network are kept frozen to prevent weight modification and avoid loss of the information contained in the pre-trained layers during further training. The feature-generation layers are fixed, and only the cls and reg layers are changed. The total number of input channels specified is 256.

Moreover, several different anchor boxes, say n, are given for each pixel with certain aspect ratios. The specified anchor sizes and scales are 32, 24, 24, 16, and 8, respectively. The optimizer is set to Stochastic Gradient Descent (SGD). The learning rate is 0.005, the momentum and weight decay values are 0.9 and 0.0005, and the number of epochs is set to 20.
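The optimizer settings stated above can be expressed directly; a dummy parameter stands in here for the trainable layers of the detector:

```python
import torch

# Sketch of the training hyper-parameters described above: SGD with
# lr=0.005, momentum=0.9 and weight_decay=0.0005, run for 20 epochs.
# A single dummy parameter stands in for the unfrozen detector layers.

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)
num_epochs = 20
```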

4 Results and Experiment Analysis

4.1 Dataset

In this research, the Face Mask Dataset (FMD) is used, which comprises 853 images and their corresponding XML annotation files. Some image samples of the FMD dataset are shown in Fig. 1. Random horizontal flip augmentation is also applied to the images of the training set. Furthermore, the experimentation is performed in Python on Google Colab with a GPU.
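For detection data, a random horizontal flip must transform the bounding boxes together with the image: for an image of width W, a box (x1, y1, x2, y2) maps to (W − x2, y1, W − x1, y2). A minimal sketch, with our own helper name and corner-format box convention:

```python
import random

# Sketch: random horizontal flip for detection annotations. Boxes are
# (x1, y1, x2, y2) in pixel coordinates; image_width is W.

def random_hflip(image_width, boxes, p=0.5, rng=random.random):
    """Flip boxes with probability p; returns (flipped?, new_boxes)."""
    if rng() >= p:
        return False, boxes
    flipped = [(image_width - x2, y1, image_width - x1, y2)
               for (x1, y1, x2, y2) in boxes]
    return True, flipped

# Forcing a flip (rng always below p) on a 640-pixel-wide image:
_, out = random_hflip(640, [(10, 20, 110, 220)], rng=lambda: 0.0)
print(out)  # [(530, 20, 630, 220)]
```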

4.2 Results and Discussions

To evaluate the Faster-RCNN model, the COCO-2017 evaluation metrics are used, which include the Intersection over Union (IOU) score at different thresholds and the computation of average precision and recall. The IOU score is the area of overlap between the actual and predicted bounding boxes divided by the area of their union. To determine the value of IOU at which an object is considered to be inside the predicted bounding box, different thresholds are used. The challenge datasets PascalVoc and MS COCO show that a 0.5 IOU threshold is good enough. IOU is defined by Eq. (6):

IOU = area(Bp ∩ Bgt) / area(Bp ∪ Bgt)   (6)

Bp and Bgt are the predicted and actual (ground-truth) bounding boxes. A detection is considered a True Positive (TP) if it has an IOU greater than the threshold. A False Positive (FP) is a wrong detection, where the IOU score is less than the threshold. A case in which the ground truth is not detected is a False Negative (FN), while correctly producing no detection where there is no object is a True Negative (TN). Tab. 1 shows the average precision from IOU threshold 0.50 to 0.95 with a step size of 0.05, with areas considered as "small", "medium", "large", and "all". This is a primary and very challenging metric. The Average Precision (AP) in our experiment for small objects in the image, covering an area of less than 32² pixels, is 0.37; in this scenario the people are standing very far away from the camera. The AP for objects with an area greater than 32² and less than 96², the medium objects in the image, is 0.52, while an area greater than 96² corresponds to large objects, for which the AP is 0.81. Similarly, for areas equal to "all", it is 0.42. The maximum number of detections per image considered in our experiment is 100. Moreover, with this primary challenging metric, our Faster-RCNN model has achieved the highest AP of 0.81.
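The IOU score can be computed directly for corner-format boxes; the helper below is our own illustration of the standard formula:

```python
def iou(bp, bgt):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(bp[0], bgt[0]), max(bp[1], bgt[1])
    ix2, iy2 = min(bp[2], bgt[2]), min(bp[3], bgt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_g = (bgt[2] - bgt[0]) * (bgt[3] - bgt[1])
    return inter / (area_p + area_g - inter)

# Two 10x10 boxes overlapping by half: IOU = 50 / 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333..., below a 0.5 threshold
```

A detection whose IOU against its matched ground-truth box exceeds the chosen threshold (e.g. 0.5) is counted as a TP; otherwise as an FP.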

Table 1:Average precision at different scales and thresholds

Similarly, the Average Recall (AR) values are also reported for this primary challenging metric. Tab. 2 shows the AR values. If we consider a maximum of 100 detections per image with an area greater than 96², then the AR is 0.84. Furthermore, AR values for IOU thresholds from 0.50 to 0.95, with a step size of 0.05, are also given in Tab. 2. For the medium objects, having an area greater than 32² and less than 96², the AR value is 0.60. Similarly, for small objects, it is 0.44.

Table 2:Average recall at different scales and thresholds

Furthermore, if maximum detections of 1, 10, and 100 are considered for "all" areas, then the AR achieved is 0.20, 0.46, and 0.50, respectively, as shown in Tab. 3.

Table 3:Average recall at a rate of different maximum detections

The other MS COCO evaluation metric, which is identical to that of PascalVoc, is the AP at an IOU threshold of 0.5. In this case, the AP is 0.71, considering 100 detections per image with an area equal to "all". Another strict metric of MS COCO is the AP at an IOU threshold of 0.75. The AP achieved in this case is 0.47 with a maximum of 100 detections per image, as shown in Tab. 4.

Table 4:Average precision at PascalVoc metric and strict metric of MS COCO

Moreover, during the training of Faster-RCNN, the different loss functions are plotted against the number of epochs and the number of steps. Fig. 3 shows the classifier loss Lcls and the bounding box regression loss Lbox. The objectness loss for region proposals and the regression box loss in the RPN network are shown in Fig. 4. The graphs in Figs. 3 and 4 show the loss values over epochs. Similarly, the values of the same losses over the steps of each epoch are plotted in Figs. 5 and 6.

Figure 3:Graph of Lcls and Lbox losses vs.Epochs

4.3 Analysis of Real-Time Video

In the proposed work, we have also considered the detection of face masks in real-time videos. A video consists of a stack of frames passing per second, usually referred to as fps. If a real-time camera captures 30 frames per second, then there are 30 images per second in which the model needs to detect persons with masked and un-masked faces. The total time for our model to process each frame of video is 0.17 s. A time analysis on videos of different fps is shown in Tab. 5.
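Assuming the 0.17 s per-frame detection time reported above, a rough back-of-the-envelope sketch shows the implied throughput (about 1/0.17 ≈ 5.9 frames per second) and how many incoming frames arrive while one frame is still being processed; the helper is illustrative, not part of the system:

```python
# Sketch: with a fixed per-frame detection time, a camera running at a
# higher frame rate delivers frames faster than they can be processed,
# so the surplus frames must be buffered or skipped.

SECONDS_PER_FRAME = 0.17  # measured detection time per frame (from Tab. 5)

def frames_to_skip(camera_fps, seconds_per_frame=SECONDS_PER_FRAME):
    """Frames that arrive while one frame is being processed."""
    return max(0, round(camera_fps * seconds_per_frame) - 1)

for fps in (24, 30, 60):
    print(fps, frames_to_skip(fps))
```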

It is observed from the above table that the time available per frame decreases as the number of frames per second increases. A frame rate of 30 fps is the one used in most videos.

Figure 4:Graph of RPN loses vs.no of epochs

Figure 5:Graph of Lcls and Lbox losses vs.no of steps per epoch

Figure 6:Loss of RPN network vs.no of steps per epoch

Table 5:Time Analysis for real-time videos

4.4 Comparison with Related Works

The outcomes of our proposed Faster-RCNN with transfer learning are elaborated in the previous sections. Our approach of using a transfer-learning-based Faster-RCNN with ResNet-50 performs better in real-time face mask detection than previous models. Previous research articles mostly focus on the classification of masked and unmasked faces. The comparison of our approach to other models in this area is shown in Tab. 6. Ge et al. [52] proposed LLE-CNN, a CNN with three modules, and achieved an average precision of 76.1% on the MAFA dataset, which is a dataset of real face masks. The first module of LLE-CNN is the proposal module, which is responsible for extracting candidate facial regions by combining two pre-trained CNNs; all extracted regions are represented with high-dimensional descriptors. The second module, the embedding module, uses a Locally Linear Embedding (LLE) algorithm to convert these descriptors to similarity-based descriptors. Lastly, the verification module is employed for the identification of candidate faces, followed by a unified CNN that jointly performs classification and regression tasks. Moreover, Loey et al. [51] use a hybrid approach to perform classification of masked and un-masked faces and achieve a classification accuracy of 99.64%. In their work, they use a deep learning approach utilizing the ResNet-50 architecture for feature extraction, followed by traditional ML algorithms for classification; these algorithms include decision trees, SVM, and ensemble learning. Similarly, Ejaz et al. [44] also perform a face mask classification task and achieve an accuracy of about 70% using the PCA algorithm. In the presented research, object detection techniques are utilized, and the highest AP and AR achieved are 81% and 84%, respectively. We have analyzed the performance of our approach under the strict primary challenging metrics of MS COCO with different scales, IOU thresholds, and numbers of detections per image. Moreover, some examples of detection results are shown in Fig. 7.

Table 6:A comparative analysis of the proposed framework with existing work

Figure 7:Detection results on images

5 Conclusion

In this paper, we proposed an automated system for the real-time detection of face masks to act as a preventative measure in controlling the rapid spread of COVID-19. This system helps the policymakers of different governmental authorities track and monitor people who are not wearing face masks in public places in a bid to prevent the spread of the virus. Many countries have published COVID-19 case statistics demonstrating that the spread of COVID-19 is higher than expected in crowded areas. The proposed model is based on a transfer learning approach with Faster-RCNN and achieved the highest AP and AR of 81% and 84%, respectively. We analyzed the performance of the proposed work with twelve primary challenging metrics of MS COCO. Furthermore, a detailed analysis of real-time videos of different frame rates is also presented. This work can be improved and extended by adding more diversity to the dataset and by applying other object detection algorithms, such as the Single Shot Detector (SSD), in comparison with Faster-RCNN. Moreover, a generalized face recognition system for faces wearing masks can also be implemented.

Funding Statement: This work was supported by King Abdulaziz University under grant number IFPHI-033-611-2020.

Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
