I recently did a visual positioning project, eye-to-hand (camera mounted off the robot), using HALCON 4-point calibration (actually 3 points are enough, because 3 non-collinear points determine a plane). In addition, if the 4-point calibration result is accurate enough, there is no need for a 9-point calibration.
First, let’s talk about why calibration is needed and what it does:
1. When is camera calibration required? In general you should do it, unless the single-camera field of view is smaller than about 20 mm or the system accuracy requirement is only a few mm.
Many beginners are unclear about the concept of calibration, cannot tell the coordinate systems apart, and do not understand the difference between camera calibration and camera-to-manipulator calibration; this article explains these points.
We usually speak of two kinds of camera calibration. The first is the calibration of the camera's own parameters, generally done with Zhang's calibration method; its purpose is to correct the distortion of the camera itself, and the correction parameters are then used to rectify the image before it is processed further. There is plenty of material about this on the Internet, so I will not explain it here. Manipulator positioning generally does not perform this calibration, because the camera distortion is quite small and the accuracy meets most requirements.
This article introduces the second kind, the calibration between the camera and the manipulator. Its function is to establish the relationship between the camera coordinate system and the manipulator coordinate system, in other words to give the manipulator eyes so that it goes exactly where it is told.
The most commonly used method is 9-point calibration. It is adopted because more points give higher accuracy, but more is not always better, since every extra point makes the calibration more tedious. The 9 points should be chosen near the middle of the camera's field of view, not too close to the edge, so that they cover the region where the parts to be located will appear.
2. The essence of the calibration is an affine transformation, mapping one coordinate system onto another;
3. At least three non-collinear points are required; in theory, the more points the better. A 9-point calibration, for example, adds redundancy and can average out part of the error, such as error caused by mechanical installation or by camera rotation/tilt; the accuracy can reach about 0.05 mm (a minimal sketch follows below).
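Below is a minimal HDevelop sketch of such a 9-point calibration; the nine pixel/manipulator coordinate pairs are made-up placeholders for illustration, not values from this project.

* 9-point calibration sketch: map pixel coordinates to manipulator (mm) coordinates.
* The nine point pairs below are placeholders only.
PixX := [100, 1000, 1900, 100, 1000, 1900, 100, 1000, 1900]
PixY := [100, 100, 100, 1000, 1000, 1000, 1900, 1900, 1900]
RobX := [10.0, 20.0, 30.0, 10.0, 20.0, 30.0, 10.0, 20.0, 30.0]
RobY := [10.0, 10.0, 10.0, 20.0, 20.0, 20.0, 30.0, 30.0, 30.0]
* least-squares affine fit over all nine pairs; the redundancy averages out part of the error
vector_to_hom_mat2d (PixX, PixY, RobX, RobY, HomMat2D9)
* transform a located pixel point into manipulator coordinates
affine_trans_point_2d (HomMat2D9, 1000, 1000, Qx, Qy)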
Second, let’s look at my 4-point calibration, focusing on the vector_to_hom_mat2d operator:
* 4-point calibration: pixel coordinates (pxX, pxY) -> manipulator coordinates (mmX, mmY)
pxX := gen_tuple_const(4, 0)
pxY := gen_tuple_const(4, 0)
mmX := gen_tuple_const(4, 0)
mmY := gen_tuple_const(4, 0)
* measured pixel coordinates of the four calibration marks (pxX values to be filled in)
pxY := [368, 2262, 2102, 385]
* corresponding manipulator coordinates in mm
* mmX := [74.53, 76.53, 131.07,
mmY := [68.498, 124.3, 119.69, 69.116]
* vector_to_similarity supports rotation, translation and scaling only
vector_to_similarity (pxX, pxY, mmX, mmY, HomMat2D_Not)
* [Recommended] vector_to_hom_mat2d computes an affine transformation matrix from three or
* more point pairs; besides rotation, translation and scaling it also supports skew (oblique cutting).
* Approximate an affine transformation from point correspondences.
vector_to_hom_mat2d (pxX, pxY, mmX, mmY, HomMat2D)
* hom_mat2d_to_affine_par: compute the affine transformation parameters from a
* homogeneous 2D transformation matrix.
* HomMat2D (input): affine transformation matrix
* Sx (output): scaling factor in the x direction (for a transformation from image space
*              to physical space, this is the physical size of one pixel in x)
* Sy (output): scaling factor in the y direction (likewise, the pixel size in y)
* Phi (output): rotation angle
* Theta (output): skew (slant) angle
* Tx (output): translation in the x direction
* Ty (output): translation in the y direction
hom_mat2d_to_affine_par (HomMat2D, Sx, Sy, Phi, Theta, Tx, Ty)
* radian-to-degree conversion
tuple_deg (Phi, DegPhi)
tuple_deg (Theta, DegTheta)
* read_tuple ('d:\\1.tup', HomMat2D)
* transform a few test points (image corners and centre) into manipulator coordinates
Row := [0, 3664, 3664, 0, 3664/2]
Column := [0, 2748, 0, 2748, 2748/2]
affine_trans_point_2d (HomMat2D, Row, Column, Qx, Qy)
stop ()
The operator hom_mat2d_to_affine_par works backwards from the affine transformation matrix to recover its individual parameters (scale, rotation, skew and translation).
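The commented read_tuple line in the code above hints at persisting the calibration result. Here is a small sketch, reusing that file path only as an example, of saving the matrix once after calibration and loading it again in the run-time program:

* save the calibration matrix once, right after calibration
write_tuple (HomMat2D, 'd:\\1.tup')
* ... later, in the run-time program, load it and apply it to located pixel points
read_tuple ('d:\\1.tup', HomMat2D)
affine_trans_point_2d (HomMat2D, Row, Column, Qx, Qy)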
Third, a problem encountered: what should be done when the camera needs to move during processing?
www.ihalcon.com/read-16036….
The specific scenario is as follows:
1. The camera is installed above the manipulator on a guide rail; it is not fixed, can move along the X direction, and has to be moved to positions A, B and C to take pictures.
2. The manipulator is below the camera and can move in the X and Y directions.
3. In theory the camera's X rail and the manipulator's X axis are parallel, but in the actual installation some mechanical error is inevitable.
I now park the camera at point A, take a picture, and use the 4-point calibration to get the affine transformation matrix M1. The question is: when the camera is parked at point B or point C, is it necessary to do a separate 4-point calibration at each position? Or is there an easier way that simply reuses the matrix M1 obtained at point A? And if so, how do I use M1 at B and C?
Answer: take a picture at point B, calculate (x, y) with the old matrix M1, and the actual mechanical coordinate is then (x + (B - A), y), where B - A is the camera's travel along the rail from A to B. It is just a translation.
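A minimal sketch of that answer (variable names such as CamTravelX and HomMat2D_A are hypothetical), assuming M1 was obtained at position A and the camera rail moves only along X:

* CamTravelX := mechanical travel of the camera from A to the current position (B or C), in mm
affine_trans_point_2d (HomMat2D_A, PixRow, PixCol, Qx, Qy)
* the camera movement is a pure translation along X; Y is unchanged
RobotX := Qx + CamTravelX
RobotY := Qy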
—
Eye-to-hand references
www.ihalcon.com/read-13820-…