Journal of System Simulation ›› 2025, Vol. 37 ›› Issue (5): 1210-1221. doi: 10.16182/j.issn1004731x.joss.23-1593

• Research Paper •

An Extended Image Features Based Uncalibrated Visual Servoing Method

Zhang Shuzhen1, Cheng Yukun1, Liu Yangbo1, Zha Fusheng2

  1. School of Electrical and Mechanical Engineering, Lanzhou University of Technology, Lanzhou 730050, China
  2. State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
  • Received: 2023-12-28 Revised: 2024-03-19 Online: 2025-05-20 Published: 2025-05-23
  • Contact: Cheng Yukun
  • About the first author: Zhang Shuzhen (1969-), female, associate professor, Ph.D.; her research interests include special robots and intelligent industrial robots.
  • Supported by:
    National Natural Science Foundation of China (52265065); National Key R&D Program of China (2020YFB13134)


Abstract:

To address the problems that traditional uncalibrated visual servoing relies on estimating the image Jacobian matrix and that the motions of the camera's degrees of freedom are coupled, an uncalibrated visual servoing method based on extended image features is proposed, building on image-based uncalibrated visual servoing. By analyzing the relationship between image features and camera pose changes during visual servoing, the servoing process in image space is decomposed into four basic processes: translation, stretching, rotation, and scaling. By analyzing how the image features change during servoing, extended image features are introduced to complement the meaning of the traditional image features: the image centroid coordinates, the relative length of a line segment, the distance between two points, and the direction angle are selected to correspond to the motion of each camera degree of freedom, and the robot motion is controlled directly by the image feature errors, realizing decoupled visual servoing that does not depend on the image Jacobian matrix. Comparative simulation experiments on the CoppeliaSim platform show that, compared with traditional calibrated visual servoing, the proposed method reduces the target image position error, the camera position error, and the camera orientation error by 88%, 94%, and 93%, respectively. Physical experiments further verify the effectiveness of the algorithm.

Keywords: robot, uncalibrated visual servoing, extended image features, feature selection, motion decoupling
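To make the decoupling idea concrete, the sketch below is a minimal illustration of how extended image features can drive individual camera motions through independent proportional gains, without estimating an image Jacobian. The feature definitions, gain values, function names, and the restriction to four camera degrees of freedom (in-plane translation, depth, and rotation about the optical axis) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch (not the paper's exact control law): each extended image
# feature is paired with one camera motion and driven by its own proportional
# gain, so no image Jacobian estimate is required.

def extended_features(pts):
    """pts: (N, 2) array of tracked image points in pixels."""
    centroid = pts.mean(axis=0)                    # image centroid (u, v)
    dist = np.linalg.norm(pts[1] - pts[0])         # distance between two points
    d = pts[1] - pts[0]
    angle = np.arctan2(d[1], d[0])                 # direction angle of the line
    return centroid, dist, angle

def control_step(pts_cur, pts_des, k_t=0.002, k_z=0.5, k_w=0.8):
    """Return a camera velocity command [vx, vy, vz, wz] from feature errors."""
    c_cur, s_cur, a_cur = extended_features(pts_cur)
    c_des, s_des, a_des = extended_features(pts_des)

    v_xy = k_t * (c_des - c_cur)                   # centroid error -> in-plane translation
    v_z = k_z * (s_des / s_cur - 1.0)              # relative length error -> depth motion
    w_z = k_w * np.arctan2(np.sin(a_des - a_cur),  # wrapped angle error -> rotation about
                           np.cos(a_des - a_cur))  # the optical axis
    return np.array([v_xy[0], v_xy[1], v_z, w_z])

# Example: compute one velocity command that drives four tracked points
# toward their desired image positions.
pts_des = np.array([[300., 220.], [340., 220.], [340., 260.], [300., 260.]])
pts_cur = np.array([[120., 400.], [190., 380.], [200., 450.], [130., 470.]])
print(control_step(pts_cur, pts_des))
```

Because each feature error feeds only its own degree of freedom, the gains can be tuned independently; this is the sense in which the servo loop is decoupled in the abstract above.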


CLC Number: