Hello, I'm Xiaozhi. A colleague told me today that I should post something with more depth, so today I'm publishing the notes I took on a hand-eye calibration paper a while back.

Many readers who followed Xiaozhi's earlier article on hand-eye calibration already know how to perform it. But to be a real technical expert, it is not enough to know how to use it; you should also understand the principle behind the calibration. So today Xiaozhi will walk you through the Tsai-Lenz algorithm, the most widely used method in the hand-eye calibration process.

If you are not yet familiar with hand-eye calibration, you can scan the QR code at the end of the article to follow Xiaozhi's public account and reply "hand-eye calibration" in the background to get the tutorial for free.

  • The paper annotated today is: A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration, which mainly explains a method for computing the hand-eye calibration
  • The method applies both to the eye-to-hand case (camera mounted off the arm) and to the eye-in-hand case (camera mounted on the hand)
  • In this paper, the gripper = robot end effector = robot end

Why is hand-eye calibration needed? What exactly is hand-eye calibration?

When we want to combine a robot arm with vision for grasping, the pose of the object in space is obtained through the camera. However, this pose is expressed in the camera coordinate system and cannot be used directly. If we want the robot's end effector to reach the target position, we need to know the target's pose in the robot coordinate system (usually the robot base frame).

We can derive it as follows:

The pose of the target in the robot base coordinate system is obtained by chaining three transforms: the pose of the target in the camera coordinate system -> the pose of the camera in the gripper (end) coordinate system -> the pose of the gripper (end) in the robot base coordinate system.
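
To make the chain concrete, here is a minimal sketch using 4x4 homogeneous transforms in Python; the variable names and placeholder numbers are mine and only illustrate how the three poses compose:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder values: in practice these come from the hand-eye calibration result,
# the vision program, and the robot SDK respectively.
base_T_gripper   = make_transform(np.eye(3), [0.4, 0.0, 0.3])    # gripper (end) pose in robot base frame
gripper_T_camera = make_transform(np.eye(3), [0.0, 0.05, 0.08])  # camera pose in gripper frame (hand-eye result)
camera_T_target  = make_transform(np.eye(3), [0.0, 0.0, 0.5])    # target pose in camera frame (from vision)

# Chain the three transforms to express the target in the robot base frame.
base_T_target = base_T_gripper @ gripper_T_camera @ camera_T_target
print(base_T_target[:3, 3])  # target position the robot controller can actually use
```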

How do we obtain the three poses on the right-hand side?

  • The pose of the object in the camera can be obtained by a vision recognition program (see the earlier article on using an ArUco calibration board for the robot perception part)
  • The pose of the gripper (end) in the robot base coordinate system can be read from the robot teach pendant or the accompanying SDK
  • The pose of the camera in the gripper (end) coordinate system is computed by the hand-eye calibration program (see the sketch after this list)
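
To show how these three ingredients fit together, here is a minimal self-check sketch using OpenCV's calibrateHandEye, which includes a Tsai-Lenz solver. All poses below are synthetic, generated from a known ground truth, so the numbers are purely illustrative:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def random_pose():
    """Random 4x4 homogeneous transform (rotation from a random axis-angle vector)."""
    R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, (3, 1)))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

# Ground-truth camera pose in the gripper frame (the unknown we want to recover)
# and a fixed calibration-plate pose in the robot base frame.
gripper_T_cam = random_pose()
base_T_plate = random_pose()

R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []
for _ in range(10):  # ten robot stations; the plate and the robot base stay fixed
    base_T_gripper = random_pose()  # would come from the teach pendant / SDK
    cam_T_plate = np.linalg.inv(gripper_T_cam) @ np.linalg.inv(base_T_gripper) @ base_T_plate
    R_gripper2base.append(base_T_gripper[:3, :3])
    t_gripper2base.append(base_T_gripper[:3, 3].reshape(3, 1))
    R_target2cam.append(cam_T_plate[:3, :3])              # would come from the vision program
    t_target2cam.append(cam_T_plate[:3, 3].reshape(3, 1))

# Solve AX = XB for the camera pose in the gripper frame with the Tsai-Lenz method.
R_est, t_est = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
print("rotation error:", np.linalg.norm(R_est - gripper_T_cam[:3, :3]))
print("translation error:", np.linalg.norm(t_est.ravel() - gripper_T_cam[:3, 3]))
```

With noise-free synthetic data both errors should be close to zero; with a real robot and camera you would feed in the measured poses instead.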

Therefore, for the robot arm to reach a spatial pose recognized by vision, we must know the pose relationship between the camera and the robot arm's end effector. Hand-eye calibration is precisely the process of calibrating this pose relationship between the robot end and the camera.

What is AX = XB? What is it used for?

Continuing with the figure from the original post:

The figure shows the transformation relationships between the robot arm and the camera coordinate systems. The subscript i denotes the pose relationship between the robot end and the camera at time i, and j denotes the same relationship at time j.

Note: the positions of the calibration plate and the robot base remain unchanged between times i and j.

Hgi is expressed using the pose of the calibration plate in the camera.
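
Since the formulas appear only as images in the original post, here is the standard eye-in-hand derivation of AX = XB as a reference, written with H_{g,i} for the gripper pose in the robot base frame, H_{c,i} for the calibration-plate pose in the camera frame at time i, and H_{cg} for the camera pose in the gripper frame. This notation is my own reading of the setup, not necessarily the paper's:

```latex
% The calibration plate and the robot base are fixed, so the plate pose in the
% base frame is the same at times i and j:
H_{g,i}\, H_{cg}\, H_{c,i} \;=\; H_{g,j}\, H_{cg}\, H_{c,j}

% Left-multiply by H_{g,j}^{-1} and right-multiply by H_{c,i}^{-1}:
\bigl(H_{g,j}^{-1}\, H_{g,i}\bigr)\, H_{cg}
  \;=\; H_{cg}\, \bigl(H_{c,j}\, H_{c,i}^{-1}\bigr)

% Which is the classic hand-eye equation
A\,X \;=\; X\,B,
\qquad
A = H_{g,j}^{-1}\, H_{g,i} \text{ (gripper motion)},\quad
B = H_{c,j}\, H_{c,i}^{-1} \text{ (camera motion)},\quad
X = H_{cg}.
```

Tsai-Lenz then solves this equation in two steps: first the rotation part of X, using the rotation axes of A and B, and then the translation part from a linear least-squares system.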

Because the derivation involves a large number of formulas that are hard to typeset here, the original post presents them as images; if you need them, follow the original link:

Mp.weixin.qq.com/s?__biz=Mzk…

Final words

This is my first paper note. Next time I will walk you through a paper on path planning algorithms, so stay tuned.

I am Xiaozhi, a veteran in the field of robotics, currently working as a robot algorithm engineer at a unicorn company in Shenzhen.

I learned programming in junior high school, started learning robotics in senior high school, and competed in robotics competitions in college, earning 20,000+ RMB a month in prize money.

At present, I run a public account where I publish robot learning guides, paper notes, and work experience. Welcome to follow Xiaozhi, exchange ideas, and learn robotics together.