Camera Posture Estimation Using An ArUco Board
Preparation
Today, let’s test on an ArUco board, instead of a single marker or a diamond marker. Again, you need to make sure your camera has already been calibrated. In the coding section, it is assumed that you can successfully load the camera calibration parameters.
Coding
The code can be found at OpenCV Examples.
First of all
We need to make sure cv2.so is on our Python search path; cv2.so is the OpenCV binding for Python.
```python
import sys
```
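A minimal sketch of how the cv2.so directory might be added to the search path; the path below is purely hypothetical and should be replaced with wherever your OpenCV Python binding was actually installed:

```python
# Hypothetical location of cv2.so -- replace with your actual install directory.
sys.path.append("/usr/local/lib/python3.5/site-packages")
```

If cv2 is already importable (for example, installed through pip), this step can be skipped.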
Then, we import some packages to be used.
```python
import os
```
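Based on how they are used later in the post, the remaining imports would plausibly be numpy, cv2, and the aruco submodule; this is a sketch, not necessarily the author's exact list:

```python
import numpy as np
import cv2
from cv2 import aruco
```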
Secondly
Again, we need to load all of the camera calibration parameters, including cameraMatrix, distCoeffs, etc.:
```python
calibrationFile = "calibrationFileName.xml"
```
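A minimal sketch of loading these with cv2.FileStorage, assuming the node names camera_matrix and distortion_coefficients; adjust them to whatever your calibration file actually stores:

```python
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
# Node names are assumptions -- match them to your calibration file.
camera_matrix = calibrationParams.getNode("camera_matrix").mat()
dist_coeffs = calibrationParams.getNode("distortion_coefficients").mat()
```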
If you are using a calibrated fisheye camera, as we are, two extra parameters have to be loaded from the calibration file.
```python
r = calibrationParams.getNode("R").mat()
```
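A sketch of loading the second one, assuming the new (rectified) camera matrix is stored under a node named new_camera_matrix; that node name is an assumption:

```python
# The node name below is an assumption -- match it to your calibration file.
new_camera_matrix = calibrationParams.getNode("new_camera_matrix").mat()
```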
Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):
```python
image_size = (1920, 1080)
```
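A minimal sketch of the call itself, reusing the variable names from the sketches above (camera_matrix, dist_coeffs, r, new_camera_matrix):

```python
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    camera_matrix, dist_coeffs, r, new_camera_matrix,
    image_size, cv2.CV_16SC2)
```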
Thirdly
In our test, the dictionary aruco.DICT_6X6_1000 is adopted as the marker dictionary from which the grid board is constructed. The board is of size 5×7:
```python
aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_1000)
```
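If you need a printable image of such a board, a sketch along these lines should work with the pre-4.7 cv2.aruco API used throughout this post; the nominal dimensions, output size, and file name are illustrative choices:

```python
# Nominal marker length / separation in arbitrary units, only for rendering.
board_to_print = aruco.GridBoard_create(5, 7, 4, 1, aruco_dict)
boardImage = board_to_print.draw((500, 700))   # output size in pixels (w, h)
cv2.imwrite("aruco_board_5x7.png", boardImage)
```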
After the board is printed, the edge length of a single marker and the separation between two neighbouring markers are measured and stored in the variables markerLength and markerSeparation, which are then used to create the 5×7 grid board.
```python
markerLength = 40  # Here, our measurement unit is centimetre.
```
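A sketch of the remaining two lines of this step, continuing from markerLength and using the pre-4.7 aruco API; the markerSeparation value is an assumed placeholder and must be replaced with your own measurement:

```python
markerSeparation = 8   # Assumed placeholder -- measure your own print-out.
board = aruco.GridBoard_create(5, 7, markerLength, markerSeparation, aruco_dict)
```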
Meanwhile, we create the ArUco detector parameters with their default values.
```python
arucoParams = aruco.DetectorParameters_create()
```
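The defaults are usually sufficient, but individual fields on the returned object can be tuned if detection struggles; one illustrative example (field and constant names as in OpenCV 3.3 through 4.6):

```python
# Optional tuning example: sub-pixel corner refinement tends to improve
# board pose accuracy at a small cost in speed.
arucoParams.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
```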
Finally
Now, let’s test on a video stream, in this case a .mp4 file. We first load the video file and initialize a video capture handle.
```python
videoFile = "aruco_board_57.mp4"
```
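A minimal sketch of initializing the capture handle from that file:

```python
cap = cv2.VideoCapture(videoFile)   # Initialize the video capture handle.
if not cap.isOpened():
    raise IOError("Cannot open video file: " + videoFile)
```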
Then, we calculate the camera posture frame by frame:
```python
while True:
```
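A hedged sketch of what the full per-frame loop might look like, built from the variables defined in the sketches above (map1/map2, board, camera_matrix, dist_coeffs) and the pre-4.7 cv2.aruco API (newer OpenCV versions replace aruco.drawAxis with cv2.drawFrameAxes). This is one plausible way to write it, not necessarily the author's original code:

```python
while True:
    ret, frame = cap.read()
    if not ret:            # End of the video file.
        break

    # Undistort the fisheye frame with the pre-computed maps.
    frame = cv2.remap(frame, map1, map2,
                      interpolation=cv2.INTER_LINEAR,
                      borderMode=cv2.BORDER_CONSTANT)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect the individual markers belonging to the board.
    corners, ids, rejected = aruco.detectMarkers(gray, aruco_dict,
                                                 parameters=arucoParams)

    if ids is not None and len(ids) > 0:
        # Estimate the board pose from all detected markers at once.
        rvec = np.zeros((3, 1))
        tvec = np.zeros((3, 1))
        retval, rvec, tvec = aruco.estimatePoseBoard(
            corners, ids, board, camera_matrix, dist_coeffs, rvec, tvec)
        if retval > 0:
            # Draw the estimated world axes onto the frame.
            aruco.drawAxis(frame, camera_matrix, dist_coeffs,
                           rvec, tvec, markerLength)

    cv2.imshow("aruco board pose", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # Press 'q' to quit early.
        break
```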
The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the test camera. At the end of the code, we release the video capture handle and destroy all open windows.
```python
cap.release()            # When everything is done, release the capture.
cv2.destroyAllWindows()  # Close all windows opened by cv2.imshow().
```