
A Tensor-Based Enhancement Algorithm for Depth Video

YAO Meng-qi, ZHANG Wei-zhong

Science & Technology Vision (科技視界), 2018, Issue 5. Published 2018-05-07.

【Abstract】In order to repair the dark holes in Kinect depth video, we propose a tensor-based depth hole-filling method. First, we preprocess the original depth video with a weighted moving average system. Then, we reconstruct the low-rank and sparse tensors of the video using the tensor recovery method, through which the coarse motion saliency can be initially separated from the background. Finally, we construct a fourth-order tensor for the moving-target part by grouping similar patches, which allows us to formulate video denoising and hole filling as a low-rank tensor completion problem. In the proposed algorithm, the tensor model preserves the spatial structure of the video, and block processing overcomes the information loss of traditional frame-by-frame video processing. Experimental results show that our method significantly improves the quality of depth video and is strongly robust.

【Key words】Depth video; Tensor; Tensor recovery; Kinect

CLC number: TN919.81  Document code: A  Article ID: 2095-2457(2018)05-0079-003

1 Introduction

With the development of depth sensing techniques, depth data has been increasingly used in computer vision, image processing, stereo vision, 3D reconstruction, object recognition, etc. As a carrier of human activities, video contains a wealth of information and has become an important way to obtain real-time information about the outside world. However, due to limitations of the device itself, the acquisition environment, lighting, and other factors, depth video always contains noise and dark holes, so video quality is far from satisfactory.

For two-dimensional videos, the traditional approaches to denoising and repair adopt frame-based filtering methods[1]. However, consecutive frames contain a great deal of redundant information, and processing them independently discards the temporal correlation between frames. Representing the whole video as a tensor instead preserves the completeness of the video's inherent structure.

2 Tensor-based Enhancement Algorithm for Depth Video

2.1 A weighted moving average system[2]

When Kinect captures video, the measured depth values constantly fluctuate, even at the same pixel position in the same scene. This is called the flickering effect, and it is caused by random noise. To avoid this effect, we take the following measures:

1)Use a queue, representing a discrete set of data, to save the previous N frames of the depth video.

2)Assign weights to the N frames along the time axis: the closer a frame is to the current frame, the larger its weight.

3)Take the weighted average of the depth frames in the queue as the new depth frame.

In this process, the weights and the value of N can be adjusted to achieve the best results.
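As an illustration, the three steps above can be sketched in Python. This is a minimal sketch: the default frame count, the linearly increasing weights, and all function names are our own illustrative choices, not values from the paper.

```python
from collections import deque
import numpy as np

def make_wma_filter(n_frames=4, weights=None):
    """Temporal weighted-moving-average filter over the last n_frames depth frames.

    Frames closer in time to the current frame receive larger weights,
    which suppresses the flickering effect caused by random depth noise.
    """
    if weights is None:
        # Illustrative choice: linearly increasing weights,
        # oldest frame gets 1, newest frame gets n_frames.
        weights = np.arange(1, n_frames + 1, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    queue = deque(maxlen=n_frames)  # step 1: queue of the previous N frames

    def filter_frame(depth_frame):
        queue.append(np.asarray(depth_frame, dtype=np.float64))
        w = weights[-len(queue):]        # step 2: weights for the frames present so far
        stack = np.stack(queue, axis=0)  # shape (k, H, W)
        # Step 3: weighted average of the queued frames is the new depth frame.
        return np.tensordot(w / w.sum(), stack, axes=1)

    return filter_frame
```

In use, one would call `smooth = make_wma_filter(4)` once and then feed each incoming depth frame through `smooth(frame)`; the queue fills up over the first N calls.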

2.2 Low-rank tensor recovery model

Low-rank tensor recovery[3] is also known as high-order robust principal component analysis (High-order RPCA). The model can automatically identify damaged elements in the data and restore the original data. The details are as follows: the original data tensor D is decomposed into the sum of the low-rank tensor L and the sparse tensor S, i.e., D=L+S.

The tensor recovery can be represented as the following optimization problem:

min(L,S) Trank(L)+λ‖S‖0, s.t. D=L+S,(1)

where D,L,S∈R^(I1×I2×…×IN), Trank(L) is the Tucker rank of tensor L, ‖S‖0 is the number of nonzero entries of S, and λ>0 is a regularization parameter.

The above tensor recovery problem can be relaxed into the following convex optimization problem:

min(L,S) Σi‖L(i)‖* + λ‖S‖1, s.t. D=L+S,(2)

where L(i) denotes the mode-i unfolding of L, ‖·‖* is the nuclear norm, and ‖·‖1 is the l1 norm.

Aiming at the optimization problem in (2), typical solutions[4] are the Accelerated Proximal Gradient (APG) algorithm and the Augmented Lagrange Multiplier (ALM) algorithm. In consideration of the accuracy and fast convergence speed of the ALM algorithm, we use it to solve this optimization problem and generalize it to tensors. According to (2), we formulate the augmented Lagrange function:

L(L,S,Y,μ)=Σi‖L(i)‖* + λ‖S‖1 + ⟨Y,D−L−S⟩ + (μ/2)‖D−L−S‖F²,

where Y is the Lagrange multiplier tensor and μ>0 is a penalty parameter.

2.3 Similar patches matching

Consecutive frames of a video are highly similar, so the tensor constructed from the video has a strong low-rank property[5]. For a moving object in the current frame, if the scene is not switched, similar parts should appear in the preceding and following frames. For each frame, set an image patch b(i,j) with size a×a as the reference patch. Then set a search window B(i,j) of size l·(a×a)×f centered on the reference patch, where l is a positive integer and f is the number of original video frames. The similarity criterion between patches is expressed by the MSE, which is defined as

MSE=(1/N)Σ(Cij−Rij)²,

where N=a×a denotes the size of patch b(i,j), Cij is the pixel value of the patch currently being tested, and Rij is the pixel value of the reference patch. The smaller the value of the MSE, the better the two patches match. We search B(i,j) for image patches b(x,y) that are similar to the reference patch and put their coordinate values in the set Ω(i,j):

Ω(i,j)={(x,y): MSE(b(i,j),b(x,y))≤t},

where t is a threshold that can be tuned according to the experimental environment. When the MSE is less than or equal to t, we conclude that the test patch and the reference patch are similar and add the test patch to the set Ω(i,j). The first n most similar patches are then stacked to define a patch tensor of size a×a×n.
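The patch grouping described above can be sketched in Python. This is an illustrative implementation under our own assumptions: the exhaustive window scan, the search-radius convention, and all function names are our choices, and a practical system would restrict or accelerate the search.

```python
import numpy as np

def mse(patch_a, patch_b):
    """Similarity criterion: mean squared error between two a*a patches."""
    return np.mean((patch_a.astype(np.float64) - patch_b.astype(np.float64)) ** 2)

def group_similar_patches(frames, ref_frame_idx, i, j, a=6, l=2, t=25.0, n=30):
    """Collect up to n patches similar to the reference patch b(i,j).

    Scans a spatial window of radius l*a pixels around (i, j) in every
    frame (the temporal search over all f frames of Sec. 2.3), keeps
    patches whose MSE against the reference is at most the threshold t,
    and stacks the n best matches into a patch tensor of shape (a, a, k).
    """
    ref = frames[ref_frame_idx][i:i + a, j:j + a]
    candidates = []
    half = l * a  # search radius in pixels (our convention for the window size)
    for f_idx, frame in enumerate(frames):
        H, W = frame.shape
        for x in range(max(0, i - half), min(H - a, i + half) + 1):
            for y in range(max(0, j - half), min(W - a, j + half) + 1):
                patch = frame[x:x + a, y:y + a]
                d = mse(ref, patch)
                if d <= t:  # membership test for the set of similar patches
                    candidates.append((d, f_idx, x, y, patch))
    candidates.sort(key=lambda c: c[0])  # best (smallest MSE) matches first
    return np.stack([p for *_, p in candidates[:n]], axis=-1)
```

Stacking groups from all reference patches along one more mode yields the fourth-order tensor mentioned in the abstract, whose completion performs the denoising and hole filling.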

3 Experiment

3.1 Experiment setting

The experiment uses three videos for testing. Some color image frames of the test videos are shown in Figure 1.

Fig.1. Test videos captured from the Kinect sensor. (a) The background is simple; the moving target is a man. (b) The background is complex; the moving targets are two men far from the camera. (c) The background is cluttered; the moving target is the man in a red T-shirt, near the camera.

3.2 Parameter setting

In the same experimental environment, we compare our method with VBM3D[6] and RPCA. For the VBM3D and RPCA algorithms, we use the source code provided by the literature to obtain their best results. For our algorithm, all parameters are set empirically so that it achieves its best results. In all tests, the parameters are set as follows: the number of test frames is 120; the number of similar patches is 30; the patch size is 6×6; the maximum number of iterations is 180; the tolerance thresholds are ε1=10^-5 and ε2=5×10^-8. We use the Peak Signal-to-Noise Ratio (PSNR)[7] to quantitatively measure the quality of the denoised video frames, and the visual effect of the video enhancement can be observed directly.

3.3 Experiment results

In order to measure the quality of the processed images, we refer to the PSNR value. The unit of PSNR is dB, and the larger the value, the better the quality. As can be seen from Table 1, in the same experimental environment, the proposed method outperforms the other methods on all three groups of test videos. Fig.2 shows the enhancement results for the moving objects after removing the background with our method.

As we can see from Figure 3, the proposed method removes noise very well and largely restores the texture structure of the video. The video enhancement effect is satisfactory.

Fig.2. The enhancement results of the moving objects after removing the background with our method. (a)(b)(c) Depth frame screenshots of original depth videos a, b, and c. (d)(e)(f) The enhancement results of the moving objects in videos a, b, and c respectively.

Fig.3. Depth video enhancement results. (a)(b)(c) Depth frame screenshots of original depth videos a, b, and c respectively. (d)(e)(f) The enhancement results in videos a, b, and c respectively.

Fig.4. Comparison results (partial enlarged views) of our method and the other methods (VBM3D and RPCA). (a)(b)(c) Enhancement results of depth videos a, b, and c with our method. (d)(e)(f) The same with VBM3D. (g)(h)(i) The same with RPCA.

We compare the results of our method with those of VBM3D and RPCA. To make the experimental results clearer, we show partial enlarged views. By comparison, our method is superior to the other methods in denoising, hole repair, and edge preservation.

4 Conclusion

In this paper, we propose a tensor-based enhancement algorithm for depth video, combining the tensor recovery model with video patch grouping. Experimental results show that the proposed method can effectively remove interference noise while maintaining edge information, and it is superior to traditional methods in depth video processing.

References

[1]Liu J, Gong X. Guided inpainting and filtering for Kinect depth maps[C]. IEEE International Conference on Pattern Recognition, 2012:2055-2058.

[2]Zhang X, Wu R. Fast depth image denoising and enhancement using a deep convolutional network[C]//Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016: 2499-2503.

[3]Xie J, Feris R S, Sun M T. Edge-guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438.

[4]Wright J, Ganesh A, Min K, Ma Y. Compressive principal component pursuit[C]. IEEE International Symposium on Information Theory (ISIT), 2012; submitted to Information and Inference, 2012.

[5]Chang Y J, Chen S F, Huang J D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities.[J]. Research in Developmental Disabilities, 2011, 32(6):2566-2570.

[6]Bang J Y, Ayaz S M, Danish K, et al. 3D Registration Using Inertial Navigation System And Kinect For Image-Guided Surgery[J]. 2015, 977(8):1512-1515.

[7]Wang Z, Hu J, Wang S, Lu T. Trilateral constrained sparse representation for Kinect depth hole filling[J]. Pattern Recognition Letters, 2015, 65: 95-102.
