1. College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, P.R. China; 2. Tandon School of Engineering, New York University, New York 11201, USA
Abstract: Scene matching navigation is a research focus in the field of autonomous navigation, but the real-time performance of image matching algorithms is often insufficient for real navigation systems. Therefore, this paper proposes a fast image matching algorithm. The algorithm improves the traditional line segment extraction algorithm and combines it with the Delaunay triangulation method. By combining the geometric features of points and lines, image feature redundancy is reduced. Then, an error-based confidence criterion is analyzed and the matching process is completed. The simulation results show that the proposed algorithm still works under rotations within 3° and small scale variations. In addition, the matching time is less than 0.5 s when the image size is 256 pixel × 256 pixel. The proposed algorithm is suitable for autonomous navigation systems with dense feature distributions and high real-time requirements.
Key words: image matching; navigation; Hough transform; Delaunay triangulation; combined feature
Image matching is the process of matching reference images against measured images. With the development of image sensors and computer vision technology, image matching is widely used in aircraft navigation, augmented reality, pattern recognition and other fields. In the field of navigation, image-based navigation has a greater advantage in acquiring information than other methods (e.g. inertial navigation, GPS navigation) [1], so scene matching navigation is widely used in aerospace navigation, intelligent robots and military applications [2]. Research on image matching algorithms is therefore of great significance for the development of autonomous navigation.
Image matching algorithms can be roughly classified into gray-based [3-5] and feature-based [6-7] methods. In the navigation process, algorithms based on gray-scale correlation are not fully applicable, because the result of image matching is limited by shooting conditions and time constraints, and the pre-stored map in a navigation system has certain gray-scale and geometric deformation differences from the actual aerial image. Feature-based matching algorithms are generally robust to illumination changes, rotation and even scale changes of scenes in the image, so they are more suitable for autonomous navigation. A feature-based matching algorithm first extracts the features in the image, and then establishes the correspondence between the two images. The difficulty of this approach lies in automatic, stable and consistent feature extraction and matching. At the same time, in the case of navigation, there is also a high demand for real-time image matching [8]. Lowe [9] proposed the scale-invariant feature transform (SIFT) matching algorithm, which performs well under rotation and scale change, so it has received extensive attention. However, the SIFT feature extraction process is time consuming and can hardly meet navigation requirements. Bay et al. [10] proposed the speeded-up robust features (SURF) algorithm. Its feature point detection is also based on the scale space and is robust to illumination. The calculation speed of the SURF algorithm is about three times that of the SIFT algorithm. However, when using the SURF algorithm to match feature points, mismatches occur easily. At present, many domestic and foreign researchers are working to solve this problem. In 2011, Rublee et al. [11] proposed an algorithm for fast feature point extraction and description called ORB (oriented FAST and rotated BRIEF), which combines the FAST feature extraction algorithm and the BRIEF feature description algorithm to greatly improve the operation speed. However, this algorithm is not robust to scale transformation. In addition, its robustness to blur and rotation is slightly weaker than that of the SURF algorithm. In the actual navigation environment, especially in urban areas, there are many feature points in the background. These feature points produce a huge amount of computation, which seriously slows down the matching process. Under such a computational load, even the fast ORB algorithm can hardly meet navigation needs. Therefore, research on image matching algorithms has its particularity in the field of autonomous navigation.
In order to meet real-time requirements, domestic and foreign scholars have tried to use line features for image processing. Zhang et al. [12] proposed a fast and high-precision image matching method based on EDLines line features. Wu [13] proposed a plane tracking algorithm based on edge matching for the planar target tracking problem of video mobile piloting. Chen et al. [14] applied the line segment matching method to three-dimensional reconstruction from unmanned aerial camera imaging. Cao et al. [15] used line features to optimize 3D models, improving the visual effect while maintaining the plane and edge features of the models. Konovalenko et al. [16] applied line extraction to drone navigation and achieved good experimental results. Kunina et al. [17] used the Hough transform to extract straight lines in the image, fitted them with the least squares method, and combined the RANSAC algorithm to use line features in UAV image navigation. Wu et al. [18] used a new line detection method, which directly carries out contour differencing in the image plane to enhance robustness to noise and blur. Although line segment features can greatly reduce the feature density, line segment extraction algorithms still suffer from problems such as low extraction precision, and their matching accuracy is lower than that of traditional point feature algorithms. In practical applications, the improvement and combination of line segment and point feature algorithms therefore have important research value and significance.
This paper improves the Hough transform and the Delaunay triangulation algorithm, and proposes a fast matching algorithm based on combined line segment features, in order to improve accuracy and robustness and to meet the real-time requirements of autonomous navigation systems in image matching. The simulation results show that the proposed algorithm not only removes redundant features, but also improves accuracy compared with traditional line feature extraction. Besides, it has good real-time performance and can meet the specific requirements of image matching in autonomous navigation systems.
The Hough transform is a feature extraction technique in image processing. It can be used to detect lines and curves in images [19]. The core idea is a transformation of the coordinate system.
The Hough transform is concerned with the identification of lines. It determines whether edge points form a straight line by mapping each point to its family of candidate lines and counting the number of corresponding intersecting sinusoids in the polar coordinate system. On this basis, the progressive probabilistic Hough transform (PPHT) randomly samples edge points and uses an accumulator threshold to determine lines, which speeds up the computation [20].
The probabilistic Hough transform is used to extract line segments in the edge binary map [21]. Then a threshold is used to eliminate very short line segments; this threshold should be determined by the specific matching image size and feature distribution. In this way, lines $l_1, l_2, \ldots, l_t$ are obtained. After that, the corresponding midpoints $(p_1, p_2, \ldots, p_t)$ of the line segments are extracted in turn. The next step is to calculate the coordinates of the corresponding midpoints $(p_1(x_1, y_1), p_2(x_2, y_2), \ldots, p_t(x_t, y_t))$ and the dip angles relative to the horizontal line of the image $(\alpha_1, \alpha_2, \ldots, \alpha_t)$. Since the Hough transform may introduce certain errors in the calculation process, several points at both ends of each line segment can be selected to calculate the midpoint coordinates and the segment inclination, reducing the calculation error. In this paper, five points are taken at each end of every line segment. Taking the line segment $l_1$ as an example, the sampled points are $o_{i1}(x_1, y_1), o_{i2}(x_2, y_2), \ldots, o_{i(n-1)}(x_{n-1}, y_{n-1}), o_{in}(x_n, y_n)$. All the points on the line can be extracted by the polar feature conversion of the edge binary image. The transform is implemented by quantizing the Hough parameter space into finite intervals or accumulator cells; the specific implementation principles are given in Ref.[20]. Since the minimum line length threshold is 20 in our tests, the case $n - 5 \le 5$ never occurs; that is, the points used for calculation are selected from both ends of the line segment, as shown in Fig.1.

Fig.1 Schematic diagram of the calculation of line segment information
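To make the extraction step concrete, here is a minimal sketch using OpenCV's cv2.HoughLinesP, its implementation of the PPHT. The Canny thresholds, accumulator threshold and maximum gap are assumed values for illustration; the 20-pixel minimum length follows the setting used later in the experiments.

```python
import cv2
import numpy as np

def extract_segments(gray, min_len=20):
    """Extract line segments from a grayscale image with the progressive
    probabilistic Hough transform. Returns an (N, 4) array of segment
    endpoints (x1, y1, x2, y2)."""
    # Edge binary map; Canny thresholds are assumed values.
    edges = cv2.Canny(gray, 50, 150)
    # PPHT: random sampling of edge points with an accumulator threshold.
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=min_len, maxLineGap=5)
    return np.empty((0, 4)) if segs is None else segs.reshape(-1, 4)
```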
Let
$$x_i = \frac{1}{10}\sum_{j=1}^{5}\left(x_j + x_{n-j+1}\right) \tag{1}$$
$$y_i = \frac{1}{10}\sum_{j=1}^{5}\left(y_j + y_{n-j+1}\right) \tag{2}$$
where $(x_i, y_i)$ is the coordinate of the midpoint of the line segment.
In the schematic diagram, $\alpha_i$ represents the angle between the current black line segment and the horizontal axis of the image coordinate system. The dip angle $\alpha_i$ is calculated as follows
$$\alpha_i = \frac{1}{5}\sum_{j=1}^{5}\alpha_j \tag{3}$$
where $\alpha_j$ is the angle between the horizontal axis and the line connecting the $j$th and the $(n-j+1)$th points. For example, when $j = 3$ in Fig.1, $\alpha_j$ represents the angle between the horizontal axis, marked with a blue dotted line, and the red line connecting the 3rd point and the 3rd-to-last point. The calculation of $\alpha_j$ is expressed as
$$\alpha_j = \arctan\frac{y_{n-j+1} - y_j}{x_{n-j+1} - x_j} \tag{4}$$
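As a sketch of Eqs.(1—4), the snippet below estimates a segment's midpoint and dip angle from the sampled points. Treating the midpoint as the mean of the five points taken from each end and averaging the pairwise angles is our reading of the scheme above, not necessarily the authors' exact implementation.

```python
import numpy as np

def midpoint_and_dip(points, m=5):
    """Estimate midpoint and dip angle of a segment from the m points
    nearest each end (m = 5 in the paper), following Eqs.(1-4)."""
    pts = np.asarray(points, dtype=float)        # all n points, ordered along the segment
    n = len(pts)
    head, tail = pts[:m], pts[n - m:]
    x_mid, y_mid = np.concatenate([head, tail]).mean(axis=0)   # Eqs.(1,2)
    # Eq.(4): angle of the line joining the j-th and (n-j+1)-th points.
    alphas = [np.arctan2(tail[m - 1 - j, 1] - head[j, 1],
                         tail[m - 1 - j, 0] - head[j, 0])
              for j in range(m)]
    alpha = np.degrees(np.mean(alphas))          # Eq.(3): averaged dip angle
    return (x_mid, y_mid), alpha
```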
Traverse the information of all line segments, and compare their pairwise distances and dip-angle differences. The distance $s_{k,l}$ $(1 \le k \le t,\ 1 \le l \le t,\ k \ne l)$ between two line segments is described by the distance from the midpoint of one segment to the other segment. For two line segments, when the differences of their dip angles and distances are both less than given thresholds, they are judged to be one straight line. The weighted average of the segments' information then replaces their true values, with the extracted segment lengths as weights. For example, when two line segments $l_1, l_2$ satisfy Eqs.(5, 6), they are determined to be the same line segment.
$$\left|\alpha_1 - \alpha_2\right| \le \sigma_1 \tag{5}$$
$$\frac{\left|Ax_1 + By_1 + C\right|}{\sqrt{A^2 + B^2}} \le \sigma_2 \tag{6}$$
where $\sigma_1$ and $\sigma_2$ are set thresholds, $(x_1, y_1)$ and $(x_2, y_2)$ are the midpoint coordinates of the two line segments, $\alpha_1$ and $\alpha_2$ are their dip angles, and $A$, $B$ and $C$ are the linear equation parameters of $l_2$, which satisfy $Ax_2 + By_2 + C = 0$. The two line segments are combined into a new line segment $l_{\mathrm{new}}$ with the following parameters
$$k_{\mathrm{new}} = k_1 + k_2 \tag{7}$$
$$\alpha_{\mathrm{new}} = \frac{k_1\alpha_1 + k_2\alpha_2}{k_1 + k_2} \tag{8}$$
$$\left(x_{\mathrm{new}}, y_{\mathrm{new}}\right) = \left(\frac{k_1 x_1 + k_2 x_2}{k_1 + k_2},\ \frac{k_1 y_1 + k_2 y_2}{k_1 + k_2}\right) \tag{9}$$
where $k_1$, $k_2$ and $k_{\mathrm{new}}$ represent the lengths of line segments $l_1$, $l_2$ and $l_{\mathrm{new}}$, respectively; $\alpha_1$, $\alpha_2$ and $\alpha_{\mathrm{new}}$ represent the dip angles of $l_1$, $l_2$ and $l_{\mathrm{new}}$, respectively; and $(x_{\mathrm{new}}, y_{\mathrm{new}})$ represents the coordinate of the new midpoint. Although $(x_{\mathrm{new}}, y_{\mathrm{new}})$ is not the true midpoint, this weighted averaging preserves the geometrical characteristics of the feature distribution and makes it easier to construct subsequent combinations of features.
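The merging test and weighted update can be sketched as follows. The tuple representation of a segment (midpoint, dip angle, length, plus the line parameters A, B, C for the second segment) is an assumption, and the threshold defaults mirror the experimental values quoted later, with sigma2 read as a pixel distance.

```python
import math

def merge_if_same_line(seg1, seg2, sigma1=2.5, sigma2=3.0):
    """Check Eqs.(5,6) and, if both hold, return the merged segment of
    Eqs.(7-9). seg1 is (x_mid, y_mid, alpha_deg, length); seg2 additionally
    carries its supporting-line parameters (A, B, C)."""
    (x1, y1, a1, k1), (x2, y2, a2, k2, A, B, C) = seg1, seg2
    if abs(a1 - a2) > sigma1:                                 # Eq.(5): dips differ too much
        return None
    if abs(A * x1 + B * y1 + C) / math.hypot(A, B) > sigma2:  # Eq.(6): segments too far apart
        return None
    k_new = k1 + k2                                           # Eq.(7): lengths add
    a_new = (k1 * a1 + k2 * a2) / k_new                       # Eq.(8): length-weighted dip
    x_new = (k1 * x1 + k2 * x2) / k_new                       # Eq.(9): weighted midpoint
    y_new = (k1 * y1 + k2 * y2) / k_new
    return (x_new, y_new, a_new, k_new)
```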
The simulation experiment is carried out using the improved feature extraction algorithm and compared with the traditional Hough transform line extraction algorithm. The results are shown in Figs.2—5.
From Figs.2—5 we can see that:
(1) Using the traditional Hough transform algorithm to extract lines often causes extraction abnormalities.
(2) If an extracted edge is too wide, several very close edges may be obtained. These edges are incorrect extraction results. As shown in Fig.3, the upper edge of the riverside road is extracted as two line segments (marked by green and cyan line segments, respectively).

Fig.3 Error extraction in the traditional algorithm where the edge is too wide
(3) In noisy environments, segments extracted from long edges often break into several parts. As shown in Fig.4, the edges of the bridge break into two segments, marked by yellow and purple, during the extraction process.

Fig.4 Extraction error where line segments break in traditional algorithms
(4) Fig.5 shows a curve with a large arc at the edge of the scene. Such curves are sometimes recognized as edges due to algorithm error, which causes matching errors. As shown in Fig.5, the blue and green line segments are extracted in error.

Fig.2 Simulation results of the improved line segment extraction algorithm

Fig.5 Error extraction in the traditional algorithm where a curve is identified as a line segment
Therefore, the experimental results prove that the proposed line segment extraction algorithm can effectively reduce the erroneous extractions described above.
In order to improve the accuracy of line segment matching and to construct robust features, we introduce the Delaunay triangulation to describe the geometric relationship between line segments, and further propose matching of combined features based on line segment features. The schematic diagram of the combined features is shown in Fig.6.

Fig.6 Schematic diagram of combined features
In Fig.6, $\alpha$ is the angle between the characteristic line segment and the horizontal axis of the image, and $\alpha \in [-90°, 90°]$. When the left end of the line segment is lower than the right end, $\alpha$ is defined as positive; when the left end is higher than the right end, $\alpha$ is negative.
Regard the midpoints of the obtained line segments as vertices and use them to construct the Delaunay triangulation [22]. As shown in Fig.6, the sides of a triangle in the mesh are marked as $m_1$, $m_2$, $m_3$, respectively. The extended lines of the corresponding edge segments intersect at $Q_1$, $Q_2$, $Q_3$. Connecting these three points yields three lines $l_1$, $l_2$ and $l_3$, and $\alpha_1$, $\alpha_2$, $\alpha_3$ are the angles between the horizontal line and $l_1$, $l_2$ and $l_3$, respectively. We define these angles as the basic matching elements $\Delta_i$.
$$\Delta_i = \alpha_i \quad (i = 1, 2, 3) \tag{10}$$
$$S = \left(\Delta_1, \Delta_2, \Delta_3\right) \tag{11}$$
Each Delaunay triangle contains three elements $\Delta_i$ $(i = 1, 2, 3)$, and $\Delta_1$, $\Delta_2$, $\Delta_3$ form a three-dimensional combined feature $S$. Traversing the midpoints of all line segments in this way yields several combined features $S_1, S_2, \ldots, S_n$.
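The construction can be sketched with scipy.spatial.Delaunay as below: triangulate the midpoints, intersect the supporting lines of each triangle's three segments pairwise to obtain $Q_1$, $Q_2$, $Q_3$, and record the dip angles of the lines connecting those intersections as $S = (\Delta_1, \Delta_2, \Delta_3)$. The helper functions are our own assumed formulation of the geometry in Fig.6.

```python
import numpy as np
from scipy.spatial import Delaunay

def line_params(x, y, alpha_deg):
    """Supporting line through (x, y) with dip angle alpha, as (A, B, C) of Ax+By+C=0."""
    a = np.radians(alpha_deg)
    A, B = np.sin(a), -np.cos(a)
    return A, B, -(A * x + B * y)

def intersect(p, q):
    """Intersection point of two (A, B, C) lines (assumed non-parallel)."""
    M = np.array([[p[0], p[1]], [q[0], q[1]]])
    return np.linalg.solve(M, -np.array([p[2], q[2]]))

def combined_features(mids, dips):
    """Build combined features S = (Delta_1, Delta_2, Delta_3) from the
    Delaunay triangulation of segment midpoints, following Fig.6."""
    tri = Delaunay(np.asarray(mids, dtype=float))
    feats = []
    for i, j, k in tri.simplices:
        L = [line_params(mids[v][0], mids[v][1], dips[v]) for v in (i, j, k)]
        Q = [intersect(L[0], L[1]), intersect(L[1], L[2]), intersect(L[2], L[0])]
        # Dip angle of each line connecting two intersections, folded into [0, 180).
        deltas = [np.degrees(np.arctan2(Q[b][1] - Q[a][1],
                                        Q[b][0] - Q[a][0])) % 180.0
                  for a, b in ((0, 1), (1, 2), (2, 0))]
        feats.append(tuple(deltas))
    return feats
```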
The combined feature $S$ contains the angle information of the three segments, which have a specific geometric relationship, and the intersection information of their extension lines ($Q_1$, $Q_2$, $Q_3$ in Fig.6). The combined features obtained by directly searching every three straight lines and the combined feature map obtained by the Delaunay triangulation are shown in Fig.7 and Fig.8, respectively. It can be seen that the Delaunay-based method reduces the computational complexity of describing the geometric relationship of the line segments, and avoids a large amount of repeated description.

Fig.7 Combined features obtained by directly searching every three straight lines

Fig.8 Combined feature map obtained by Delaunay triangulation
Moreover, because the Delaunay triangle network has good properties such as stability and disjointness, local errors have minimal impact on the overall network. Compared with the fully-connected triangular network, the Delaunay triangulation can improve the stability of matching.
For each pair of matched combined features, there are three sets of matching segments serving as elements of image matching. In addition, the intersection coordinate $p_t(x, y)$ of the extension lines corresponding to the combined feature can be obtained by simple geometric calculation. These intersections can also be used as matching elements, and the confidence of each coordinate corresponds to its angle $\Delta_t$: under the same extraction error, the closer the angle $\Delta$ is to 90°, the smaller the position error $\varepsilon$ of the intersection of the extended lines. Therefore, under the same matching threshold, when $\Delta$ is closer to 90°, the confidence of the intersection coordinate matching is higher. The analysis is illustrated in Fig.9.

Fig.9 Schematic diagram of error analysis
As shown in Fig.9, the lines $p$, $q$ intersect at point $P_1$ with intersection angle $\Delta_1$. Due to the error in line segment extraction, $p$ has a dip error $\mathrm{d}\xi$ with respect to $q$. Denote the center of rotation by $O$, and let $Q$ be the foot of the perpendicular from $O$ to $q$. Denote the length of $OQ$ by $s$; the length of $QP$ changes as the error angle $\xi$ changes. The lengths of $OP$ and $QP$ are denoted by the variables $v$ and $w$. The two straight lines actually intersect at point $P_2$ with angle $\Delta_2$. According to the geometric relationship of the combined features, the angle $\Delta$ between the straight lines ranges over $[0°, 180°]$, and we also have
$$v = \frac{s}{\sin\Delta} \tag{12}$$
$$w = \frac{s}{\tan\Delta} \tag{13}$$
$$\Delta_2 = \Delta_1 + \mathrm{d}\xi \tag{14}$$
From Eqs.(12—14), the offset error $\varepsilon$ of the line intersection $P$ can be calculated as
$$\varepsilon = \left|w_1 - w_2\right| = s\left|\cot\Delta_1 - \cot\left(\Delta_1 + \mathrm{d}\xi\right)\right| \approx \frac{s}{\sin^2\Delta}\,\mathrm{d}\xi = s\,\csc^2\Delta\ \mathrm{d}\xi \tag{15}$$
The curve of the error function $\varepsilon$-$\Delta$ is shown in Fig.10.

Fig.10 Curve of error function ε-Δ
From the error function, it can be concluded that the position error $\varepsilon$ of the line intersection point takes its minimum value at $\Delta = \pi/2$; that is, under the same extraction error, the closer the two lines are to perpendicular, the smaller the position error of the intersection point and the higher the confidence.
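As a quick numerical check of the reconstructed error model, the snippet below evaluates $\varepsilon(\Delta) = s\,\csc^2\Delta\ \mathrm{d}\xi$ for a fixed small extraction error: the minimum indeed lies at $\Delta = 90°$, and the error grows rapidly toward 0° and 180°.

```python
import numpy as np

# Evaluate eps(Delta) = s * csc(Delta)^2 * d_xi with s = 1 and a 0.5 deg
# extraction error; the minimum lies at Delta = 90 deg, as Eq.(15) predicts.
d_xi = np.radians(0.5)
for deg in (15, 30, 60, 90, 120, 150, 165):
    eps = d_xi / np.sin(np.radians(deg)) ** 2
    print(f"Delta = {deg:3d} deg -> eps = {eps:.4f}")
```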
Sort the angles stored in each combined feature $S_i$ of the two images $A$ and $B$ to be matched. Select the angle closest to 90° as the highest-confidence feature element $\Delta_{i\mathrm{Max}}$, which satisfies $|90° - \Delta_{i\mathrm{Max}}| = \min\{|90° - \Delta_j|\},\ j = 1, 2, 3$; and select the angle closest to 0° or 180° as the lowest-confidence feature element $\Delta_{i\mathrm{Min}}$, which satisfies $|90° - \Delta_{i\mathrm{Min}}| = \max\{|90° - \Delta_k|\},\ k = 1, 2, 3$. The remaining element is marked as $\Delta_{i\mathrm{Mid}}$, and each element is marked with its corresponding confidence, which satisfies
$$V_t = \frac{1}{\csc^2\Delta_t}, \qquad V_{\mathrm{Max}} \ge V_{\mathrm{Mid}} \ge V_{\mathrm{Min}} \tag{16}$$
Next, traverse the elements $\Delta_t^A$ and $\Delta_u^B$ in each pair of combined features $S^A$ and $S^B$. When we have
$$\left|\Delta_t^A - \Delta_u^B\right| \le \varepsilon_1 \tag{17}$$
it shows that $\Delta_t^A$ matches $\Delta_u^B$, and the mean of the confidences of the two elements is taken as
$$V_k = \frac{V_t^A + V_u^B}{2} \tag{18}$$
In Eqs.(17, 18), $\varepsilon_1$ is a threshold set according to the experimental requirements, $t$ and $u$ each represent one of the subscripts Max, Min and Mid, and $k = 1, 2, 3$ indexes the three comparisons. If all three angles of a combined feature match, the three mean confidences are accumulated. If the accumulated total confidence is bigger than the set threshold, the combined features can be considered to match. In this experiment, $\varepsilon_1 = 1.5$, $V_0 = 1.1$, and the confidence is $V_t = 1/\csc^2\Delta_t$.
In general, the steps of the matching process are as follows:
(1) For each combined feature $S$, obtain the matching elements $\Delta_{\mathrm{Max}}$, $\Delta_{\mathrm{Mid}}$ and $\Delta_{\mathrm{Min}}$ and the corresponding confidences $V_{\mathrm{Max}}$, $V_{\mathrm{Mid}}$ and $V_{\mathrm{Min}}$.
(2) Calculate the differences between the matching elements of two combined features using Eq.(17). If Eq.(17) is satisfied for all three pairs of matching elements, use Eq.(18) to compute the three mean confidences $V_1$, $V_2$ and $V_3$. If Eq.(17) is not satisfied, compare other feature pairs.
(3) Add up $V_1$, $V_2$ and $V_3$. If the accumulated total confidence is bigger than the set threshold $V_0$, the combined features can be considered to match. A sketch of these steps follows.
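Here is a minimal sketch of the three steps, assuming each combined feature is stored as a dict of its three ranked angles in degrees; the confidence follows $V = 1/\csc^2\Delta$ and the defaults follow the experimental settings $\varepsilon_1 = 1.5$ and $V_0 = 1.1$.

```python
import math

def confidence(delta_deg):
    """Confidence of an angle element: V = 1 / csc(Delta)^2 = sin(Delta)^2."""
    return math.sin(math.radians(delta_deg)) ** 2

def features_match(SA, SB, eps1=1.5, V0=1.1):
    """Steps (1)-(3): compare the three ranked angle elements of two combined
    features with Eq.(17), average confidences with Eq.(18), and accept the
    match when the accumulated confidence exceeds V0."""
    total = 0.0
    for key in ("Max", "Mid", "Min"):
        if abs(SA[key] - SB[key]) > eps1:                       # Eq.(17) fails
            return False
        total += (confidence(SA[key]) + confidence(SB[key])) / 2  # Eq.(18)
    return total > V0

# Hypothetical example: two nearly identical combined features.
SA = {"Max": 88.0, "Mid": 55.0, "Min": 20.0}
SB = {"Max": 88.9, "Mid": 54.2, "Min": 20.6}
print(features_match(SA, SB))   # True for these values
```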

Fig.12 Matching results of optical image

Fig.13 Matching results of synthetic aperture radar image
The flow chart of the matching algorithm is shown in Fig.11. Experimental results of the matching algorithm are shown in Figs.12—14, where red lines represent the extracted line features, yellow lines describe the Delaunay triangle network constructed by our algorithm, and purple lines are the extension lines of the line features. The intersections of the extension lines in each combined feature are marked with cyan points, which convey the angle information of the combined features. These angles correspond to $\Delta_1$, $\Delta_2$ and $\Delta_3$ in each combined feature $S$. In particular, some extension-line intersections that fall outside the image are not drawn, but they still participate in the matching process.

Fig.11 Flow chart of proposed algorithm

Fig.14 Matching result when the image has a 3° rotation
In order to verify the effectiveness of the proposed algorithm, the experiment is performed on a PC with an Intel Core(TM) i5-3337U CPU at 1.8 GHz and 12 GB of memory. The program development environment is Visual Studio 2013. In the experiment, the two matched images are 256 pixel × 256 pixel and 128 pixel × 128 pixel, respectively, and the minimum length threshold of line segment extraction is 20 pixels. The constraint parameters for line segment screening are set as $\sigma_1 = 2.5°$ and $\sigma_2 = 3$, and the matching thresholds are $\varepsilon_1 = 1.5$ and $V_0 = 1.1$. Multiple sets of experiments are performed with image shifting, small-amplitude rotation and scaling, and the results are compared with those of the SIFT and ORB algorithms.
Table 1 shows the average matching results for different images and transformation situations. Table 2 shows the comparison results of the three algorithms. The results show that the proposed algorithm can effectively extract combined feature information from images and maintain matching accuracy under certain rotation and scaling. According to the experimental results, the upper limit of rotation is 5°, and the upper and lower scaling limits are 2 and 0.7 times, respectively. Once these limits are exceeded, the extracted features can no longer be matched correctly and effectively. However, the run time of the proposed algorithm is much shorter than that of the traditional algorithms, which better meets navigation needs. Given these characteristics, the proposed algorithm can be applied to scene navigation tasks based on sequence images, which require the algorithm to be robust to small-amplitude rotation and scale changes. However, the proposed algorithm still needs improvement to adapt to stricter conditions.

Table 1 Matching results for different images and transformation situations

Table 2 Comparison results of three algorithms
To further show the advantages of the proposed algorithm, the comparison of features between the SIFT, ORB and proposed algorithms is shown in Fig.15.

Fig.15 Feature quantity comparison of three algorithms
According to the above results, the matching time of the proposed algorithm is mostly distributed within 0.2—0.5 s. The calculation speed of the proposed algorithm is faster than those of the ORB and SIFT algorithms. In addition, the feature distribution of the proposed algorithm is more uniform than that of the ORB algorithm, and in the case of a large amount of feature information, the proposed algorithm can reduce redundant information while ensuring matching speed. Besides, the matching accuracy is kept within 1.5 pixels, and it can be maintained under 3° rotation and small-amplitude scale variation. The results show that the proposed algorithm has certain robustness and high accuracy, and is applicable to cases with dense feature distributions and strict matching-speed requirements.
A fast matching algorithm based on line segment geometry is proposed. Line segment features are extracted based on the Hough transform, and the Delaunay triangulation method is introduced to combine point and line features to realize the search and matching process. Experiments show that the algorithm reduces feature information redundancy and has good robustness and accuracy. In addition, the algorithm has good real-time performance and high speed, meeting navigation and positioning requirements. However, there is still room for improvement. For example, we need to consider how to overcome the inaccuracy of line segment extraction and line segmentation ambiguity, and how to extend the scope of the method.