Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature

In this paper, an improved multi-sensor image matching algorithm based on line segment features is proposed. First, the line segments in the image are extracted and optimized, and virtual feature points are constructed from them. Then a set of candidate affine transformation matrices between the two images is obtained from different combinations of virtual feature points, and a matching degree function is constructed, by which the affine transform relationship between the two images is preliminarily determined and the matching point set is obtained. Finally, false matching points are eliminated with the RANSAC algorithm and the final transformation matrix is obtained to accomplish accurate matching. The experimental results show that the matching accuracy of the proposed algorithm reaches 80%, while the speed of the algorithm is significantly improved.


Introduction
The technology of multi-sensor image matching is a new technology developed on the basis of multiple disciplines, and it has important application value in civil, military, medical, and other fields [1]. In a scene matching aided navigation system, the reference image and the real-time image are matched to determine the real-time position of the UAV and to correct the errors of the inertial navigation system, so that the UAV can fly along a predetermined track [2]. The difficulties a multi-sensor image matching algorithm must overcome are that the imaging mechanisms differ, the gray-level correlation is small, and there are many other differences between the images, such as scale, view angle, brightness, and resolution [3]. In recent years, many scholars at home and abroad have done a great deal of research in the field of multi-sensor image matching [4,5,6]. At present, the commonly used scene matching algorithms fall into three categories: region-based image matching, image matching based on the transform domain, and feature-based image matching [10]. Region-based methods mainly include region similarity matching and mutual information matching. In these methods, all pixels of the image are involved in the matching operation, and it is difficult to achieve a high matching success rate because the gray difference between multi-sensor images is too large. Transform-domain methods mainly include the Fourier-Mellin transform (FMT) algorithm and the phase congruency algorithm. These algorithms are not suitable for images with large scale change or translation and are sensitive to image deformation; in addition, their ability to adapt to distortion or content changes is poor [7]. Feature-based matching algorithms mainly include point feature matching [8], edge feature matching [9] and line segment feature matching [10,11,12]. Point-feature algorithms achieve high precision when the gray difference is small, but they are invalid for images with a large gray difference. Edge-based matching has high accuracy, but its time complexity is too high. Line-segment-based matching offers features that are easy to extract and low time complexity, but it is difficult to guarantee matching accuracy when the gray difference is large.
A single image feature is difficult to apply in the field of multi-sensor image matching, so the main approach is to combine line segment features and point features. In literature [11], control points are constructed from the extracted line features, and a matching degree function is established to obtain the matched control points. In literature [10], line pairs are constructed from the extracted line segments, and a similarity measure function is established to obtain the matching line pairs. However, when these two methods construct the objective function, the length and angle information of the line segments is used directly; therefore, matching errors occur when there is a scaling or rotation transformation between the images. In literature [12], control points are constructed from the extracted line features, and the affine transformation relation between the images is obtained from the control points, so that the two images to be matched are first aligned in geometric space, which eliminates the influence of the affine transformation between the images; the matching line pairs are then obtained. However, this method obtains the optimal affine transformation matrix by cyclic traversal, which results in high time complexity.
Aiming at the problems in literature [12], this paper presents a matching algorithm based on an improved line segment feature, applied to visible and SAR images of typical artificial targets. The algorithm mainly includes line segment extraction, line segment optimization, construction of virtual feature points, and feature matching. First, the line segments in the image are extracted and optimized; during optimization, some short line segments and collinear line segments are removed, and virtual feature points are constructed from the reserved line segments. Then a set of candidate affine transformation matrices between the two images is obtained from different combinations of virtual feature points, and a matching degree function is constructed, by which the affine transform relationship between the two images is preliminarily determined and the matching point set is obtained. Finally, false matching points are eliminated with the RANSAC algorithm and the final transformation matrix is obtained to achieve accurate matching.
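As an illustration of the final outlier-elimination step, a generic RANSAC loop might look like the following minimal sketch; the function names, thresholds, and iteration count are assumptions, not the paper's implementation.

```python
import math
import random

# Generic RANSAC outlier rejection: `fit` builds a model from a minimal
# sample (for the paper's use case, three point pairs would determine an
# affine transform) and `error` measures how well one match fits a model.
# All parameter defaults here are illustrative assumptions.
def ransac(matches, fit, error, sample_size=3, iters=200, thresh=2.0, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = rng.sample(matches, sample_size)
        model = fit(sample)
        if model is None:  # degenerate sample, e.g. collinear points
            continue
        inliers = [m for m in matches if error(model, m) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:
        # refit the model on the full inlier set for a more stable estimate
        best_model = fit(best_inliers) or best_model
    return best_model, best_inliers
```

In the paper's pipeline, `fit` would solve an affine matrix from three point correspondences and `error` would be the reprojection distance of a matched point pair.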

Line segments extraction
The existing line segment extraction algorithms mainly include the Hough transform [13], line detection based on edge detection operators [15] and the LSD line extraction algorithm [14]. Owing to the application background, a matching algorithm applied to aerial scenes must meet real-time requirements in terms of time complexity. Among the three algorithms above, the time complexity of the Hough transform is too high for real-time use. The algorithms based on edge detection operators have low detection accuracy, and it is difficult for them to extract complete line segments. The LSD algorithm obtains sub-pixel accuracy results in linear time. In this paper, the LSD algorithm is therefore used to extract the line segments in the image.
The LSD algorithm is based on the gradient value and direction of each pixel in the image, excluding the influence of pixels whose gradient values are too small. It merges pixels with similar gradient direction and adjacent position into candidate line segments, and the final line segments are determined by calculating the similarity of these candidates. In this paper, the visible image is taken as the reference image, and the SAR image is used as the measured image. The line segment sets extracted from the reference image and the measured image are recorded as S1 and S1', respectively. The result of using LSD to extract line segments is shown in Figure 1.

Line segments optimization
The method proposed in literature [12] can obtain a good matching result, but it has high time complexity. Therefore, the extracted line segments are optimized in this paper, which greatly reduces the time complexity of the algorithm. First, since the line segments extracted by the LSD method do not intersect each other, some complete long segments are divided into several shorter segments. In order to preserve the original segment information in the image, these short segments need to be merged. At the same time, the image feature used in this paper is obtained from the intersection points of the straight lines through the extracted line segments; the intersections of two collinear segments with any other segment coincide, so multiple collinear segments lead to many repeated calculations. We therefore remove collinear segments before the feature is constructed.

Line segments merging
For two given line segments s and s', the distance function Dρθ(s, s') is defined according to literature [16]. As shown in Figure 2, the angle Δθ and the vertical distance Δd of the two line segments are calculated. The vertical distance is defined as Δd = max(d, d'), where d is the distance from the midpoint of segment s to the line through s', and d' is the distance from the midpoint of s' to the line through s. The distance between the two line segments is then defined by combining Δθ and Δd, and two segments are regarded as collinear when both quantities are below given thresholds (formula (2)). At the same time, for two collinear line segments l1 and l2 that meet this condition, if they have no overlapping region and the distance between the two nearest endpoints among the four endpoints of the segments is greater than the threshold, which is set to 3 in this paper, the two segments are not merged. As shown in Figure 3, for three collinear line segments {l1, l2, l3}, only l1 and l2 should be merged; l2 and l3 should not. Thus the merging rule is obtained: two line segments are merged if they are collinear and the distance between the two nearest endpoints among their four endpoints is less than the threshold. After merging, the sets of line segments in the reference and measured images are recorded as S2 and S2', respectively, as shown in Figure 4 (a)(b).
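The merging test above can be sketched in code. This is a minimal sketch: the collinearity thresholds ANGLE_T and DIST_T are assumed values; only the endpoint-gap threshold of 3 comes from the paper.

```python
import math

ANGLE_T = math.radians(3.0)   # assumed angle threshold for collinearity
DIST_T = 2.0                  # assumed vertical-distance threshold
GAP_T = 3.0                   # endpoint-gap threshold from the paper

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def point_to_line(p, seg):
    """Perpendicular distance from point p to the infinite line through seg."""
    (x1, y1), (x2, y2) = seg
    px, py = p
    length = math.hypot(x2 - x1, y2 - y1)
    return abs((x2 - x1) * (y1 - py) - (y2 - y1) * (x1 - px)) / length

def delta_theta(s, t):
    dt = abs(angle(s) - angle(t)) % math.pi
    return min(dt, math.pi - dt)  # fold to [0, pi/2]

def delta_d(s, t):
    # the paper's definition: max of the two midpoint-to-line distances
    return max(point_to_line(midpoint(s), t), point_to_line(midpoint(t), s))

def collinear(s, t):
    return delta_theta(s, t) < ANGLE_T and delta_d(s, t) < DIST_T

def endpoint_gap(s, t):
    # smallest distance among the four endpoint pairings
    return min(math.hypot(p[0] - q[0], p[1] - q[1]) for p in s for q in t)

def should_merge(s, t):
    # merging rule: collinear, and nearest endpoints closer than the threshold
    return collinear(s, t) and endpoint_gap(s, t) < GAP_T
```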

Collinear segments elimination
The image feature used in this paper is obtained from the intersection points of the straight lines through the extracted line segments. The intersections of two collinear segments with any other segment coincide, so multiple collinear segments lead to many repeated calculations. We therefore eliminate the collinear line segments from the merged segment set. The specific method is as follows: for any two line segments in the image, we determine whether they are collinear according to formula (2); if they are, we calculate the length of each segment and retain the longer one. After excluding collinear line segments, the sets of line segments in the reference and measured images are recorded as S3 and S3', respectively, as shown in Figure 4 (c)(d).
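The keep-the-longer rule can be sketched as follows; the function name and the greedy longest-first strategy are illustrative choices, with the pairwise collinearity test (formula (2)) supplied by the caller.

```python
import math

def seg_len(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(seg[1][0] - seg[0][0], seg[1][1] - seg[0][1])

def remove_collinear(segments, collinear):
    """Keep only the longest segment out of each group of mutually
    collinear segments. `collinear` is a pairwise predicate such as
    the condition of formula (2)."""
    kept = []
    # process longest-first, so any shorter collinear segment is dropped
    for seg in sorted(segments, key=seg_len, reverse=True):
        if not any(collinear(seg, k) for k in kept):
            kept.append(seg)
    return kept
```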

Line segment reservation
After the collinear line segments are removed, there are still a large number of line segments in the image, and the time complexity still cannot meet the requirements. Therefore, from the segments obtained above, we retain the longest 8% of the line segments to construct features. After reservation, the sets of line segments in the reference and measured images are recorded as S4 and S4', respectively.
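A minimal sketch of the reservation step; the 8% fraction is the paper's value, while the rounding and the keep-at-least-one rule are assumptions.

```python
import math

def reserve_longest(segments, fraction=0.08):
    """Keep the longest `fraction` of the segments by descending length
    (the paper keeps the longest 8%); at least one segment is kept."""
    def seg_len(s):
        (x1, y1), (x2, y2) = s
        return math.hypot(x2 - x1, y2 - y1)
    k = max(1, round(len(segments) * fraction))
    return sorted(segments, key=seg_len, reverse=True)[:k]
```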

Constructing virtual feature points
Visible images and SAR images share the same stable line segment features, but the position and length of these segments are often inconsistent, which makes direct matching difficult. However, the line segments show stable geometric properties in both kinds of images, such as parallelism and intersection. We therefore construct virtual feature points using the geometric properties of the line segments, and establish the matching function based on these virtual feature points.
If three non-collinear line segments or their extension lines do not intersect at the same point, they can form a triangle, as shown in Figure 5. A group of three line segments {a, b, c} intersects at the points {P0, P1, P2}. Taking these three points as a set of virtual feature points, they do not necessarily correspond to actual corners in the image, but they reflect the geometric properties of the line segments. In this paper, sets of triangles are constructed from S4 and S4', respectively. At the same time, we screen out the triangles that do not meet the requirements and finally obtain the triangle sets G1 and G1'. The screening rule is to remove triangles whose area is less than 20 or whose smallest angle is less than 5 degrees; the area threshold 20 and the angle threshold 5 degrees are empirical values. For each pair of triangles taken from G1 and G1', there are three vertex correspondences, as shown in Figure 6, and an affine transformation matrix can be obtained from each corresponding relation. If M and M' triangles are obtained in the triangle sets G1 and G1', respectively, the number of candidate affine transformation matrices is N = 3MM'. For the assignment {P0, P1, P2} → {P0', P1', P2'}, the equations are

xi' = a1 xi + a2 yi + a3,  yi' = a4 xi + a5 yi + a6,  i = 0, 1, 2,

where (xi, yi) and (xi', yi') are the coordinates of the points in the reference image and the measured image, respectively, and the six coefficients a1, ..., a6 determine the affine transformation T1.
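The construction of virtual feature points, the triangle screening, and the solve for an affine matrix from one vertex assignment can be sketched as follows. The area and angle thresholds are the paper's empirical values; the helper names and the Cramer's-rule solver are assumptions.

```python
import math

def line_intersection(s, t):
    """Intersection of the infinite lines through segments s and t
    (None if the lines are parallel)."""
    (x1, y1), (x2, y2) = s
    (x3, y3), (x4, y4) = t
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def triangle_ok(p0, p1, p2, min_area=20.0, min_angle_deg=5.0):
    """Screening rule: reject triangles that are too small or too thin."""
    area = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p2[0] - p0[0]) * (p1[1] - p0[1])) / 2.0
    if area < min_area:
        return False
    pts = (p0, p1, p2)
    for i in range(3):
        a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        u = (b[0] - a[0], b[1] - a[1])
        v = (c[0] - a[0], c[1] - a[1])
        cosang = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        if math.degrees(math.acos(max(-1.0, min(1.0, cosang)))) < min_angle_deg:
            return False
    return True

def affine_from_3pts(src, dst):
    """Solve x' = a1*x + a2*y + a3, y' = a4*x + a5*y + a6 from three
    point correspondences (Cramer's rule on a 3x3 system)."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    def solve3(rows, rhs):
        D = det3(rows)
        sols = []
        for col in range(3):
            m = [r[:] for r in rows]
            for i in range(3):
                m[i][col] = rhs[i]
            sols.append(det3(m) / D)
        return sols
    rows = [[x, y, 1.0] for (x, y) in src]
    a1, a2, a3 = solve3(rows, [p[0] for p in dst])
    a4, a5, a6 = solve3(rows, [p[1] for p in dst])
    return (a1, a2, a3, a4, a5, a6)
```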

Constructing matching degree function
After obtaining a candidate affine transformation matrix, we use the attributes of the extracted line segments to construct the matching degree function, which measures the matching degree of the two images under that affine transformation. Finally, the affine transformation matrix with the maximum value of the matching degree function is chosen as the initial matching result.
Because there are only a few line segments in S4', it cannot be guaranteed that every line segment in S4 finds a corresponding segment there, so each segment in S4 is matched against the full transformed segment set. The distance between two line segments is obtained by formula (2); according to this distance function, the matching degree of two line segments is defined so that the smaller the distance between the two segments, the higher their matching degree. Furthermore, the matching degrees of all matched segment pairs are summed to obtain the matching degree of the two images under the given affine transformation.
In addition, in order to evaluate this method objectively, the proposed method, the SIFT matching method, the mutual information method and the matching method proposed in literature [12] are quantitatively compared on two groups of experiments, as shown in Table 1. It can be seen from the table that the matching time of the proposed method is much shorter than that of the method in literature [12], while high accuracy is maintained.
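The scoring of candidate matrices described above can be sketched as follows. The paper's exact matching-degree formula is not reproduced here; the 1/(1 + distance) form and the endpoint-based segment distance are assumed stand-ins that preserve the stated property that a smaller distance gives a higher matching degree.

```python
import math

def apply_affine(T, p):
    a1, a2, a3, a4, a5, a6 = T
    x, y = p
    return (a1 * x + a2 * y + a3, a4 * x + a5 * y + a6)

def transform_segment(T, seg):
    return (apply_affine(T, seg[0]), apply_affine(T, seg[1]))

def seg_distance(s, t):
    """Assumed segment distance: sum of endpoint distances under the
    better of the two endpoint pairings."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(d(s[0], t[0]) + d(s[1], t[1]),
               d(s[0], t[1]) + d(s[1], t[0]))

def matching_degree(ref_segs, meas_segs, T):
    """Transform the measured segments by T, then for each reference
    segment take the best-matching transformed segment and sum the
    per-pair matching degrees (assumed form: 1 / (1 + distance))."""
    warped = [transform_segment(T, s) for s in meas_segs]
    return sum(max(1.0 / (1.0 + seg_distance(r, w)) for w in warped)
               for r in ref_segs)

def best_transform(ref_segs, meas_segs, candidates):
    # initial matching result: the candidate with the maximum matching degree
    return max(candidates, key=lambda T: matching_degree(ref_segs, meas_segs, T))
```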

Conclusion
In this paper, we propose an improved matching algorithm for multi-sensor images based on line segment features, in order to overcome the large gray difference between multi-sensor images caused by their different imaging mechanisms. The experimental results show that the proposed algorithm adapts well to image rotation and scaling, and its matching speed is greatly improved compared with the method in literature [12]. In addition, since the algorithm is based on line segments and artificial scenes contain a large number of them, it is well suited to such scenes; however, in natural scenes where line segments are difficult to extract, such as oceans and grasslands, the performance of the algorithm will be greatly affected.

Figure 2. The angle Δθ and the vertical distance Δd of two line segments, from which the collinear condition of two line segments (formula (2)) is obtained.

Figure 5. Three line segments or their extension lines form a triangle.

After getting the triangle sets, we match every pair of triangles. According to clockwise order, there are three kinds of corresponding relations between the three vertices of a triangle P0P1P2 in the triangle set G1 and the three vertices of a triangle P0'P1'P2' in the set G1'.

Figure 6. Correspondence of the three groups of two triangles.

Suppose M and M' triangles are obtained in the triangle sets G1 and G1', respectively.

Since only a few line segments remain in S4', a segment in S4 may not find a line segment that meets the requirements in S4'. So we select the full line segment set S1'. The line segment set S1'' is obtained by transforming S1' with the matrix T, and the reference image and the measured image are thereby aligned in geometric space. Then, for each segment in S4, we find the line segment with the closest matching degree in S1'', and obtain the matching degree between the measured image and the reference image under the affine transformation; first, the pairwise distance is computed by the distance function of formula (2).

ITM Web of Conferences 11, 05001 (2017), IST2017. DOI: 10.1051/itmconf/20171105001

Table 1. Performance comparison of the four algorithms.