Journal of System Simulation ›› 2021, Vol. 33 ›› Issue (7): 1638-1646. doi: 10.16182/j.issn1004731x.joss.20-0218

• Simulation Modeling Theory and Methods •

Position and Attitude Estimation Based on Combination Matching in Calibration Area

Cai Peng1,2, Shen Chaoping1,2, Li Hongyan1,2

  1. Aeronautical Engineering Institute, Jiangsu Aviation Technical College, Zhenjiang 212134, China;
    2. Zhenjiang Key Laboratory of UAV Application Technology, Jiangsu Aviation Technical College, Zhenjiang 212134, China
  • Received: 2020-04-28  Revised: 2020-06-03  Online: 2021-07-18  Published: 2021-07-20
  • About the author: Cai Peng (1977-), male, Ph.D., associate professor; research interests include computer graphics, computer vision, and UAV application technology. E-mail: caipeng568@163.com
  • Funding:
    Zhenjiang Science and Technology Program (GY2018029); college-level key projects (JATC19010107, JATC20020101, JATC20010104)


Abstract: Scene-matching visual navigation typically relies on hardware to measure the camera's distance and attitude. A position and attitude estimation method based on combination matching of feature points in a calibration area is proposed. The method selects the optimal set of Scale-Invariant Feature Transform (SIFT) matching points within the calibration area of the real-time image, computes the local ground coordinates of the SIFT matching points by linear interpolation inside a triangle, and then obtains the camera position and attitude of the real-time image by space resection. This avoids the drawbacks of measuring the camera's distance and attitude with hardware and extends the applicability of scene-matching visual navigation. Experimental results show that the camera position and attitude computed by this method are close to the true values.

Keywords: position and attitude estimation, SIFT feature matching, calibration area, combination matching of feature points, space resection

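The triangle-interior linear interpolation step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SIFT matching and space-resection stages are omitted, and all coordinate values (image-plane vertices, ground coordinates) are hypothetical. Given three matched points whose local ground coordinates are known (a triangle), the ground coordinates of another matched point inside the triangle are interpolated from its image-plane barycentric weights.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric weights of image point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    w_c = 1.0 - w_a - w_b
    return w_a, w_b, w_c

def interpolate_ground(p_img, tri_img, tri_ground):
    """Linearly interpolate the ground coordinates of p_img inside the triangle.

    tri_img: three image-plane vertices; tri_ground: their known ground coordinates.
    """
    w = barycentric_weights(p_img, *tri_img)
    return tuple(
        sum(wi * v[k] for wi, v in zip(w, tri_ground))
        for k in range(len(tri_ground[0]))
    )

if __name__ == "__main__":
    # Hypothetical data: image-plane triangle and the matched points' ground coordinates.
    tri_img = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
    tri_ground = [(10.0, 20.0, 0.0), (60.0, 20.0, 0.0), (10.0, 70.0, 0.0)]
    # Interpolate the ground coordinates of a point inside the image-plane triangle.
    print(interpolate_ground((100.0 / 3, 100.0 / 3), tri_img, tri_ground))
```

In the paper's pipeline, the interpolated ground coordinates of many such matched points would then feed the space-resection stage (e.g. a PnP-style solve) to recover the camera position and attitude.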

