
Air-Ground Integrated Low-Energy Federated Learning for Secure 6G Communications

ZTE Communications, 2022, Issue 4 (2022-02-07)

WANG Pengfei, SONG Wei, SUN Geng, WEI Zongzheng, ZHANG Qiang

(1. School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China; 2. School of Computer Science and Technology, Jilin University, Changchun 130015, China; 3. Key Laboratory of Symbolic Computing and Knowledge Engineering, Jilin University, Changchun 130015, China)

Abstract: Federated learning (FL) is a distributed machine learning approach that can provide secure 6G communications while preserving user privacy. In 6G communications, unmanned aerial vehicles (UAVs) are widely used as FL parameter servers to collect and broadcast model parameters, owing to their easy deployment and high flexibility. However, limited energy restricts the adoption of UAV-enabled FL applications. We propose an air-ground integrated low-energy federated learning framework that minimizes the overall communication energy consumption while maintaining the quality of the FL model. Specifically, we propose a hierarchical FL framework in which base stations (BSs) separately aggregate the model parameters updated by their surrounding users and send the aggregated parameters to the server, thereby reducing communication energy consumption. In addition, we optimize the deployment of UAVs with a deep Q-network approach to minimize their transmission and movement energy, thus improving the energy efficiency of the air-ground integrated system. Evaluation results show that the proposed method reduces system energy consumption while maintaining the accuracy of the FL model.

Keywords: federated learning; 6G communications; privacy preserving; secure communication

1 Introduction

Even though 5G specifications are still being developed, the sixth generation (6G) of mobile communications has already attracted great attention from both academia and industry[1]. Compared with 5G, 6G[2] will achieve faster speeds, higher energy efficiency, wider coverage, etc. However, the wireless channel used for 6G is usually open, which gives wireless users the freedom to communicate but introduces security risks at the same time[3]. For example, communication content can easily be eavesdropped on or tampered with[4]. Meanwhile, data service providers collect large amounts of user information[5], which leads to frequent leaks of private data. These factors threaten the data security of 6G users.

Federated learning (FL) is a distributed machine learning framework[6]. In FL, participants train the model on local datasets and upload the obtained model parameters, rather than private user data, to the parameter server, which aggregates the parameters to obtain the updated global model. Owing to the distributed nature of FL, users benefit from the global model while keeping their data in their own hands[7–8]. Therefore, utilizing FL at the 6G edge protects user data, making users more willing to participate and to fully contribute the value of their local datasets to the training of the global model[9]. In recent years, some studies have integrated FL into wireless communication to improve privacy and security[10–12], but they still face many practical problems, e.g., low deployment flexibility in terrestrial communication networks and high communication costs.

Unmanned aerial vehicles (UAVs) have the advantages of high flexibility and mobility, which open up more possibilities for FL. Specifically, a UAV can easily provide air-ground integrated line-of-sight communication and effectively extend the transmission range of terahertz signals in 6G networks. As a result, the air-ground integrated network (AGIN) has gradually become a trend for 6G development, aiming to provide users with ubiquitous connectivity and seamless global coverage. In this paper, we consider the organic combination of the air-ground integrated network and FL in the 6G network. We utilize UAVs as parameter servers for FL to collect model updates from dispersed users, providing wider coverage while protecting the private data of 6G users. However, in 6G communications, the framework faces the challenge of limited energy for mobile users as well as for UAVs[13–14]. Specifically, users are reluctant to spend too much energy on the FL process, and the UAV does not have a constant energy source to support multiple rounds of model transfer and aggregation, which may delay updates of the global model. Therefore, to achieve a sustainable FL solution, the energy efficiency of the system has to be considered. Existing solutions that optimize the energy efficiency of air-ground integrated FL[15] generally focus on UAV scheduling optimization and resource allocation, in which mobile devices need to communicate directly with the server, which may increase energy consumption.

In this paper, we propose air-ground integrated low-energy federated learning (AGILFL). Specifically, we use terrestrial base stations (BSs) as message middleware between users and the UAV parameter server: each BS separately aggregates the model parameter updates from its surrounding users and sends the aggregated parameters to the server, thus reducing communication energy consumption. In addition, a deep Q-network (DQN) is adopted to optimize the deployment of UAVs, further reducing the overall energy consumption. To implement this procedure, we face the challenge that in some dynamic scenarios the users' locations are not fixed[16], which can overload a BS when too many users move within its range. In such a case, users are required to send model parameters directly to the UAV server. To ensure that 6G communication remains highly reliable, we predict the BS load situation in advance and perform emergency scheduling of the UAV. Our main contributions are summarized as follows:

1) We propose AGILFL, a framework that integrates AGIN and FL, which is devised to provide low-energy FL for secure 6G communications.

2) We use hierarchical aggregation to significantly reduce the communication energy consumption of AGILFL by using BSs as middleware between users and the UAV parameter server. Each BS collects and aggregates the updated parameters of the users within its coverage area and sends the aggregated parameters to the UAV server for a second aggregation. With this approach, we reduce the aggregation workload of the UAV server and the redundant communication between the UAV and users.

3) To ensure the reliability of 6G communication, we predict the BS load situation in advance and urgently dispatch the UAV to cope with extreme situations, e.g., scenarios with a high density of smart devices such as weekend supermarket promotions and concerts.

4) Extensive evaluation experiments are conducted on the MNIST dataset to demonstrate the effectiveness of our proposed method. Experiments show that our method improves the system's overall energy efficiency while maintaining the model's accuracy, outperforming the comparison algorithms.

The remainder of this paper is organized as follows. Section 2 presents current research combining FL and wireless networks, with consideration of their energy consumption. Section 3 provides an overview of FL and presents the system model and problem formulation of this paper. The DQN and our allocation strategy for the UAV are introduced in Section 4. Section 5 verifies the effectiveness of AGILFL through experiments. Finally, we summarize the contributions and experiments of this paper and present future work in Section 6.

2 Related Work

FL enables a large number of users to train a machine learning (ML) model together in a distributed manner; as a result, it provides a secure and effective training model for ubiquitous 6G intelligence. Recently, studies have explored how FL can be integrated into wireless networks while considering their energy consumption. TRAN et al.[17] proposed a wireless FL model that implemented a trade-off between FL learning time and user energy consumption. HAMER et al.[18] proposed another FL approach that reduces the costs of server-to-client and client-to-server communications by building an ensemble of pre-trained base predictors. However, the above studies are limited to terrestrial networks.

ZENG et al.[19] first investigated the possibility of implementing FL on UAVs. They formulated an optimization problem that considers the limited energy of UAVs and designed algorithms to optimize the convergence performance of FL, thus reducing the energy consumption of UAVs in the system. SHIRI et al.[20] proposed an algorithm that combines channel allocation and device scheduling optimization to reduce the communication among large swarms of UAVs. PHAM et al.[21] proposed a sustainable federated learning framework that uses UAVs to wirelessly power the devices of energy-limited FL participants while improving the energy efficiency of the UAVs. However, none of the above methods considers integrating UAVs into terrestrial communication networks.

To make full use of UAVs, QU et al.[22] first proposed a conceptual framework of air-ground integrated federated learning (AGIFL) to give FL greater flexibility, thus enhancing the much-needed artificial intelligence in 6G communication networks. JING et al.[23] verified for the first time the feasibility of deploying FL between UAVs and the terrestrial network through a practical platform based on AGIFL. However, neither of them solves the problem of the huge energy consumption of the system.

In summary, few extant studies have considered how to reduce the energy consumption of AGIFL. In addition, the above approaches require terminal nodes to communicate directly with the parameter server, which may increase transmission costs. Therefore, in this paper, we propose AGILFL, in which BSs are used as message middleware between users and the UAV parameter server in the FL system. We also use the DQN algorithm to optimize the location of the UAV and minimize the total energy consumption of its movement and transmission, so that AGIFL can effectively reduce the energy consumption of the system.

3 Preliminaries

3.1 Federated Learning

FL is a distributed ML approach that trains shared models while protecting individual privacy. In FL, many participants cooperatively train the global model through a parameter server that aggregates model parameter updates[24]. Participants download the latest global model from the parameter server in each communication round, train the model on their own devices using local datasets, and then upload the updated parameters of the trained model to the server. The server then aggregates (e.g., using FedAvg[6]) the collected updates to obtain a new global model. In this process, users benefit from the global model while keeping their data in their own hands.

Note that the above process will be repeated until the global model reaches convergence.
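For illustration, the aggregation step described above can be sketched as a dataset-size-weighted average of client parameters, as in FedAvg (variable names are illustrative, not the paper's notation):

```python
# Minimal sketch of FedAvg aggregation: the server averages the clients'
# parameter vectors, weighting each client by its local dataset size.
def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (lists of floats)."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    global_params = [0.0] * n_params
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for k in range(n_params):
            global_params[k] += weight * params[k]
    return global_params

# Two clients with equal data sizes: the result is the plain mean.
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 100]))  # [2.0, 3.0]
```

In a full round, each client would first run local SGD on its own data before its parameters enter this average.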

3.2 System Model

In this paper, we consider an air-ground integrated 6G communication FL system that can protect the privacy of 6G users, as shown in Fig. 1. It consists of a UAV server, m users (e.g., mobile users, Internet of Things devices, and UAVs carrying data), and n BSs. These devices are randomly distributed in the air-ground domain. We define the set of users as U = {u1, u2, …, um}, the UAV server as V, and the n BSs as B = {b1, …, bn}, and the model size of FL training is Ω.

▲Figure 1. Overview of AGILFL's framework

where bi, gi, pi, and N0 represent the transmission bandwidth, channel gain, transmission power, and noise power density, respectively. To ensure that Ω is transmitted within the upload time ti, the constraint Ω ≤ Ri·ti needs to be satisfied. In this case, the energy E^u2b_i transmitted to the BS and the energy E^u2v_i transmitted to the UAV are expressed in Eq. (3).
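As an illustration, assuming the standard Shannon-rate upload model commonly used in such formulations (the paper's exact Eqs. (2)–(3) may differ), the rate and upload energy can be computed as:

```python
import math

# Hedged sketch: assumes the upload rate R_i follows the Shannon formula and
# that upload energy is transmission power times upload time (Omega / R_i).
# Symbols follow the definitions in the text above; values are illustrative.
def upload_rate(b, g, p, n0):
    """R_i = b * log2(1 + g * p / (b * n0)): achievable upload rate in bit/s."""
    return b * math.log2(1.0 + g * p / (b * n0))

def upload_energy(p, omega, rate):
    """E = p * Omega / R_i: energy (J) to upload a model of omega bits."""
    return p * omega / rate

# Example: 1 MHz bandwidth with received SNR of 1 gives roughly 1 Mbit/s,
# so uploading a 1 Mbit model at 0.1 W costs about 0.1 J.
r = upload_rate(b=1e6, g=1e-6, p=0.1, n0=1e-13)
e = upload_energy(p=0.1, omega=1e6, rate=r)
```

The constraint Ω ≤ Ri·ti then simply checks that `omega <= r * t_i` for the allotted upload time.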

When transferring global model parameters, we have the following restrictions. There is path loss during the transmission of global model parameters; that is, the power gradually decreases as the transmission distance increases. The corresponding relationship is expressed in Eq. (4).

where Ps and Pr represent the power at the sender and the receiver, respectively, ls and lr represent the locations of the sender and the receiver, respectively, d(·) is the distance function, and α is the influence factor under different environments. We also limit the minimum received power of all devices to pmin.
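A minimal sketch of this distance-based attenuation, assuming a power-law path-loss model of the form Pr = Ps / d(ls, lr)^α (the paper's exact Eq. (4) may differ):

```python
# Hedged sketch of the path-loss relation: received power decays with
# distance raised to an environment-dependent exponent alpha.
def received_power(p_s, d, alpha=2.0):
    """P_r = P_s / d**alpha; alpha = 2.0 approximates free space."""
    return p_s / (d ** alpha)

# A 1 W signal at distance 10 is received at 0.01 W under alpha = 2.
p_r = received_power(1.0, 10.0)
```

The feasibility check implied by pmin is then just `received_power(...) >= p_min` for every link in the system.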

UAV V acts as the global model manager of the FL system. It is responsible for sending global model parameters to, and receiving them from, surrounding users and base stations, aggregating locally trained models, and updating the global model parameters. We assume that UAV V flies at a fixed height H and moves only in the horizontal direction. Suppose the position of the UAV is l = (x, y) and the position after moving is l' = (x', y'). According to Ref. [26], the energy of UAV V's movement is expressed in Eq. (5).

where vh is the velocity in the horizontal direction and PH represents the power consumed by horizontal movement. PH can be expressed in Eq. (6).

where Pp is the power consumed to overcome the UAV's own skin-friction drag, and its calculation formula is shown as follows.

where CD is the drag coefficient, cb is the rotor chord, S is the frontal area of the UAV, w is the angular velocity, β is the rotor disk radius, and ρ denotes the fluid density of air. PI is the power consumed to redirect air and generate lift that compensates for the weight of the aircraft, and its specific formula is expressed as follows.
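The structure of Eq. (5) can be sketched as movement energy = horizontal power × flight time (distance / vh), with PH = Pp + PI supplied as an input rather than reconstructed from Eqs. (6)–(8), whose rendered forms are not reproduced here:

```python
import math

# Hedged sketch of Eq. (5): the UAV flies at fixed height, so movement
# energy is the horizontal power P_H times the flight time dist / v_h.
def movement_energy(l, l_new, v_h, p_h):
    """Energy for UAV V to fly from l=(x, y) to l_new=(x', y') at speed v_h."""
    dist = math.dist(l, l_new)   # horizontal displacement
    return p_h * dist / v_h      # energy = power * time

# Flying 5 units at 5 units/s with 100 W of horizontal power costs 100 J.
e_move = movement_energy((0.0, 0.0), (3.0, 4.0), v_h=5.0, p_h=100.0)
```

In the overall objective, this term is added to the transmission energies of Eq. (3) to give the total system cost.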

3.3 Problem Formulation

Our goal is to optimize the UAV position to minimize the energy consumed by the entire FL system while protecting the privacy of 6G users. Because parameter aggregation and global model updating are necessary tasks of the FL system, their energy is not considered in the minimization. Aiming to optimize the energy of the AGILFL system, we focus on the optimization problem in Eq. (9).

where xij indicates whether user ui sends local model parameters to BS bj, yi indicates whether user ui sends local model parameters to UAV V, E^b2v_j denotes the energy required for BS bj to transmit to UAV V, li represents the location of user ui, lj the location of BS bj, l the initial location of UAV V, l' the location of UAV V after it moves, and lmax the maximum movement range of UAV V.

In the problem, Constraint (9a) limits the values of xij and yi; Constraint (9b) states that each user sends its model parameters to either a BS or the UAV; Constraints (9c) and (9d) limit the transmission time and rate to ensure that the FL model of size Ω is transmitted within the upload time ti or tj; Constraint (9e) limits the movement range of the UAV; Constraints (9f), (9g), and (9h) require that the signal power received by all devices be higher than the minimum power.

4 Allocation Strategy of UAV

In this section, we detail the strategy for UAV deployment. The algorithm we propose in this paper consists of two separate processes: model training and model application. We first train the model with the DQN algorithm to obtain the output Q-network model. Then we continuously update the environmental state of UAV V and feed it into the Q-network to make the optimal action decision for the current state.

4.1 Deep Q-Network

The DQN algorithm is a reinforcement learning method combining deep learning and Q-learning, which has both the powerful feature-perception capability of deep learning and the trial-and-error learning advantage of reinforcement learning. In the DQN algorithm, Q(s, a) represents the value assessment of action a taken by the agent in state s, and the agent selects the action with the highest Q value in order to obtain a higher reward. In the Q-learning method, a Q-table is used to store the Q values of the actions in each state. However, the disadvantage of Q-learning is that the Q-table takes up a lot of memory in a more complex state space, and the computation is also complicated. Compared with traditional Q-learning, DQN can approximate the Q values of the current state in a huge state space, as in Eq. (10).

where Qθ(s, a) is a neural network with parameter θ, called the Q-network, whose output is an estimate of Q.

DQN introduces two improvements to overcome the problems of unstable learning targets and excessive correlation of consecutive samples: 1) experience replay and 2) a target Q-network. In this context, the goal of the training process is to minimize the loss function, which is the mean-square error between the target Q value and the Q value, as expressed in Eq. (11).

where θi is the parameter of the Q-network and Yi is the target Q value, expressed in Eq. (12).

where θ⁻ is the parameter of the target Q-network. By fixing the target network's parameters for a period of training time, the stability of the target Q value can be guaranteed.
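Since Eqs. (11)–(12) follow the canonical DQN formulation, the target and per-sample loss can be sketched as follows (the paper's exact equations are not reproduced, so this mirrors the standard form):

```python
# Hedged sketch of Eqs. (11)-(12): the Bellman target uses the frozen
# target network's Q values for the next state; the loss is the squared
# error between that target and the online Q-network's estimate.
def dqn_target(reward, q_next, gamma=0.99, done=False):
    """Y_i = r + gamma * max_a' Q_theta_minus(s', a'); just r if terminal."""
    return reward if done else reward + gamma * max(q_next)

def mse_loss(target, q_value):
    """L(theta_i) = (Y_i - Q_theta_i(s, a))**2 for a single sample."""
    return (target - q_value) ** 2

# Target for reward 1.0, next-state target-network Q values [0.5, 2.0]:
y = dqn_target(1.0, [0.5, 2.0], gamma=0.9)   # 1.0 + 0.9 * 2.0 = 2.8
loss = mse_loss(y, 2.0)                       # (2.8 - 2.0)**2
```

Freezing the parameters inside `dqn_target` for many updates is exactly what keeps Yi stable during training.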

4.2 Allocation Strategy

We use the DQN algorithm to determine the 3D position of UAV V, thereby minimizing its communication and movement costs. The DQN algorithm predicts the value of the agent's actions through a deep neural network, allowing the agent to obtain a higher return in subsequent decisions. Specifically, in our method, UAV V needs to decide on an appropriate working position among a large number of distributed BSs, which is a complex task scenario. The many environmental elements of such complex scenes not only lengthen the training cycle and slow down model convergence but also bring the problem of sparse rewards, which prevents the model from working properly. To solve the potential sparse-reward problem, we propose an energy field model that abstracts the various parameters of the environment and simplifies the UAV state representation, thus speeding up model convergence and avoiding sparse rewards. The energy field is modeled as in Eq. (13).

where ε is a weight parameter used to control the order of magnitude of the energy, di is the Euclidean distance between V and bi, Li is the load situation of bi, Di is the amount of data that bi needs to transmit to V, and Uri is the number of users connected to bi. The formula calculates the energy at the UAV's location, and the total energy is the sum of the sub-energies contributed by all BSs. A higher value of E means more users and higher BS loads near the point, and hence a greater need for UAV V to serve there. This energy field model can guide UAV V to fly to a more suitable working area and also generate the corresponding decision for high-load situations in the area.
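A heavily hedged sketch of this energy field: the exact functional form of Eq. (13) is not reproduced here, so the combination below only mirrors the description, i.e., a per-BS term that grows with load Li, pending data Di, and connected users Uri, decays with distance di, and is scaled by ε:

```python
import math

# Illustrative energy field: sums one term per BS; the way L_i, D_i, and
# Ur_i are combined (a plain sum here) is an assumption, not Eq. (13).
def energy_field(uav_pos, bs_list, epsilon=1.0):
    """Field value E at uav_pos; higher means more nearby demand."""
    total = 0.0
    for bs in bs_list:
        d = math.dist(uav_pos, bs["pos"]) + 1e-9  # avoid division by zero
        total += epsilon * (bs["load"] + bs["data"] + bs["users"]) / d
    return total

bss = [{"pos": (0.0, 0.0), "load": 5.0, "data": 10.0, "users": 20.0}]
# The field is larger when the UAV is closer to a loaded BS.
near = energy_field((1.0, 0.0), bss)
far = energy_field((10.0, 0.0), bss)
```

Whatever the exact form, the key property used by the reward function is that moving toward demand increases E.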

The state space of UAV V is composed of its spatial coordinates, current energy consumption power, user coverage, and BS coverage. For the action space of UAV V, we define six possible actions: forward, backward, left, right, up, and down, denoted as a1, a2, …, a6, respectively. If the transmission energy consumption of UAV V is higher than the previous energy consumption, UAV V needs to change its location. In this case, the agent must make behavioral decisions based on the state of its environment. UAV V takes action aj based on the decision and transfers to a new state s', while receiving a reward or punishment according to the reward rule to optimize the agent's behavioral decisions.

The purpose of this section is to determine suitable UAV locations to reduce the energy loss of the mission and also to perform emergency scheduling for possible regional loads (such as large sporting events, supermarket promotions, concerts, etc.). Combined with the energy field model proposed above, we propose the reward function in Eq. (14).

where ΔE is the change in energy at the UAV's location: when the energy increases, meaning that UAV V has flown to a more suitable position, the agent is rewarded, and otherwise it is punished; ω is a weight parameter that controls the order of magnitude of the reward; Dsum is the total amount of data transferred by the system.

The training is performed in a simulated environment; the specific details are shown in Algorithm 1. The target Q-network and the Q-network are first initialized to predict the Q value of the previous step and the current Q value, respectively. In each training epoch, the environmental status of UAV V is first updated, including pedestrian movement, BS model aggregation, BS load, user model training, system energy consumption, etc. The current state s of the UAV is then determined from the external state and input into the Q-network to obtain the Q values of all actions. Action a is selected for execution by the greedy method, the UAV state s changes to s' after the action is executed, and reward rt is obtained. The quadruple (s, a, rt, s') is stored in replay buffer M, and batches are sampled from M to train the Q-network. After that, the target Q-network is updated from the Q-network, and finally the trained Q-network model is output. Although training UAV V requires some energy, the energy consumption of the proposed DQN method is much smaller, and even negligible, compared with the traditional greedy scheme[27].
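The control flow of Algorithm 1 can be sketched as follows, with the environment and Q-network stubbed out (all names are illustrative; only the epsilon-greedy selection and replay-buffer handling mirror the text):

```python
import random
from collections import deque

# Hedged sketch of Algorithm 1's loop mechanics: epsilon-greedy action
# selection over the six movement actions, plus replay-buffer storage
# and sampling. The actual Q-network and simulator are not reproduced.
ACTIONS = range(6)  # forward, backward, left, right, up, down (a1..a6)

def select_action(q_values, eps):
    """Explore with probability eps; otherwise pick the argmax-Q action."""
    if random.random() < eps:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: q_values[a])

replay = deque(maxlen=1000)  # replay buffer M

def store_and_sample(transition, batch_size=4):
    """Store (s, a, r_t, s') and draw a training batch once M is warm."""
    replay.append(transition)
    if len(replay) < batch_size:
        return []
    return random.sample(replay, batch_size)

# With eps = 0 the selection is purely greedy over the Q values.
assert select_action([0, 0, 9, 0, 0, 0], eps=0.0) == 2
```

Periodically copying the Q-network's weights into the target network, as described above, completes one training epoch.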

Algorithm 2 shows the process of applying our DQN model, which continuously updates the environment state during system operation and then calculates the required energy consumption of the system. If the energy consumption is greater than the threshold value, UAV V is required to move; in this process, UAV V constantly updates its state and inputs the state into the Q-network. UAV V makes action decisions based on the Q-network output and then updates the environment state until the maximum number of steps is reached.
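The application phase can be sketched as a threshold-gated greedy loop (function names, the threshold, and the step callback are illustrative assumptions, not the paper's interface):

```python
# Hedged sketch of Algorithm 2: the UAV only moves when measured energy
# consumption exceeds a threshold; it then repeatedly feeds its state to
# the trained Q-network and executes the greedy action until max_steps.
def apply_policy(energy, threshold, q_of_state, step, state, max_steps=10):
    """Return the sequence of greedy actions taken; empty if no move needed."""
    actions = []
    if energy <= threshold:
        return actions              # consumption acceptable: UAV stays put
    for _ in range(max_steps):
        q_values = q_of_state(state)
        a = max(range(len(q_values)), key=lambda i: q_values[i])
        actions.append(a)
        state = step(state, a)      # environment transition (stubbed)
    return actions

# Below the threshold, the policy issues no movement actions at all.
no_move = apply_policy(5.0, 10.0, lambda s: [0.0] * 6, lambda s, a: s, state=0)
```

This separation means the (comparatively expensive) Q-network is only consulted when the energy budget is actually being violated.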

5 Experiment and Evaluation

In this section, we evaluate the performance of our proposed algorithm. Firstly, we introduce the default settings, datasets, benchmarks, and metrics in detail. Secondly, we evaluate the utility of AGILFL in terms of overall energy. Finally, we evaluate the utility of AGILFL in terms of average resource utilization and model accuracy.

5.1 Default Settings

We consider an FL system composed of users, BSs, and a UAV. To reduce the parameter search space, we set up 100 users, 5 BSs, and 1 UAV. Each trainable device trains locally using the LeNet-5 model. The maximum number of global model iterations is set to 200, optimized by the mini-batch stochastic gradient descent (SGD) optimizer with a minimum mini-batch size of 50. During model training, the learning rate is set to 0.03, and cross-entropy is used as the loss function. The maximum epoch for training the UAV mobility model is set to 200, and the maximum epoch for training the FL model is set to 40.

5.2 Dataset

We use the well-known image classification dataset MNIST, which is composed of 70 000 grayscale pictures of 28 × 28 pixels, each corresponding to a digit from 0 to 9. In the MNIST dataset, 55 000 pictures are used as the training set, 5 000 as the validation set, and 10 000 as the test set. In our experiment, we evenly distribute the 55 000 training pictures among 50 users, so each device contains 1 100 pictures. The UAV parameter server holds the 10 000 test pictures for evaluating the trained model.
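The IID split described above can be sketched as an even partition of sample indices among the users (the helper name is illustrative; it assumes the training-set size divides evenly by the user count):

```python
# Minimal sketch of the even (IID) data split: each of the n_users gets a
# contiguous, equal-sized shard of the training-sample indices.
def partition(n_samples, n_users):
    """Evenly split sample indices among users (assumes divisibility)."""
    per_user = n_samples // n_users
    return [list(range(i * per_user, (i + 1) * per_user))
            for i in range(n_users)]

shards = partition(55000, 50)  # 50 shards of 1 100 indices each
```

A non-IID variant would instead sort by label before sharding, but the text describes only the even split.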

5.3 Benchmarks

Firstly, to evaluate the advantage of the AGILFL system, we choose the FL system without BSs and multi-hop transmission (MHT) with BSs as benchmarks; all have the same number of users. Secondly, in assessing the advantage of AGILFL's UAV training, we use plain DQN training and random movement as benchmarks. Finally, to show that AGILFL achieves accuracy without degradation, we use FL and ML as benchmarks. AGILFL, ML, and FL use the same amount of data for training, and AGILFL and FL have the same number of users.

5.4 Metrics

We adopt the total energy consumption as an evaluation metric, that is, the energy consumed by the whole system in transmission and UAV movement during each round of FL training. In assessing the UAV training performance of AGILFL, we use the reward function during training as a metric. Finally, we also use accuracy as a metric to evaluate the impact on FL model accuracy.

5.5 Results Analysis

We uniformly generate five BSs in a 200 × 200 × 200 air-ground integrated area. To evaluate the impact of user growth on total energy consumption, we randomly generate 100 to 500 users in the region and calculate the total energy consumption. Fig. 2 shows the energy comparison of AGILFL and the other benchmarks in the AGIFL system. AGILFL reduces total energy consumption by using the BSs as caching devices and by controlling the UAV to find the best position. This experiment shows that AGILFL reduces the overall energy by 11.9% and 18.4%, respectively, compared with the other two algorithms.

The UAV, trained with the DQN reinforcement learning network, is then placed in the AGIFL system. It starts from a random point and moves in the FL system according to the learned movement strategy. Fig. 3 shows the trajectory of the UAV in AGILFL. Each step of the UAV's movement maximizes the reward function: at every step, the UAV moves toward the BSs and stays close to the central BS. We can also see that the UAV does not stray far from users or base stations, avoiding wasted energy.

▲Figure 2. Performance of total energy consumption

▲Figure 3. Movement trajectory of UAV

Under the default settings, we evaluate the performance of the UAV movement strategy. Fig. 4 shows the performance of our optimization algorithm. In the AGIFL system, we compare AGILFL, the plain DQN algorithm, and random movement in terms of the reward function. AGILFL adopts DQN with experience replay; it learns the optimal parameters faster than the plain DQN algorithm, and experience replay makes the training more stable. Compared with the other two algorithms, AGILFL improves the reward function by 59.5% and 13.5%, respectively.

Under the default settings, we evaluate the accuracy of the model trained on users' data in different scenarios. Fig. 5 shows the accuracy performance of AGILFL, FL, and ML. AGILFL reduces total energy consumption without causing serious accuracy degradation. Therefore, we propose AGILFL as a user-friendly, privacy-preserving, and low-energy FL framework.

▲Figure 4. Performance of reward

▲Figure 5. Performance of accuracy

6 Conclusions and Future Work

In this paper, we investigate the problem of improving the energy efficiency of AGIFL and propose the AGILFL framework, which guarantees the privacy of 6G users. Specifically, in AGILFL, we use a hierarchical aggregation method to improve the energy efficiency of communication by using BSs as middleware between users and the UAV parameter server. At the same time, to keep 6G communication highly reliable, we predict overloaded BSs in advance and perform emergency scheduling of the UAV. We use the DQN algorithm to optimize the position of the UAV to minimize the overall energy consumption of UAV movement and communication. Finally, through simulation experiments, our proposed method is shown to be effective: compared with the baselines, AGILFL reduces the overall energy by 11.9% and 18.4%, respectively, and improves the reward function by 59.5% and 13.5%, respectively.

This paper does not explore how to reduce the energy consumption of local computing for 6G users in the AGILFL framework. In the mechanism we designed, we should also consider a replacement option for when the UAV is almost out of power. Our future work will focus on addressing these issues and exploring the possibility of applying our solutions at a large scale in real-world environments.
