ChArUco is an integrated marker that combines a chessboard with aruco markers. The code is also very similar to the code in our previous blog on the aruco board.
Coding
The code can be found at OpenCV Examples, and the code in the first two subsections is exactly the same as in our previous blogs, so we'll skip those two subsections from here on.
First of all
Exactly the same as in previous blogs.
Secondly
Exactly the same as in previous blogs.
Thirdly
The dictionary aruco.DICT_6X6_1000 is combined with a chessboard to construct a ChArUco board. Our experimental board is of size 5x7, which looks like:
After having this ChArUco board printed, the side length of a chessboard square and the side length of an aruco marker (displayed in the white cells of the chessboard) are to be measured and stored in the two variables squareLength and markerLength, which are used to create the 5x7 board.
squareLength = 40  # Here, our measurement unit is centimetre.
markerLength = 30  # Here, our measurement unit is centimetre.
board = aruco.CharucoBoard_create(5, 7, squareLength, markerLength, aruco_dict)
Meanwhile, create the aruco detector parameters with default values.
arucoParams = aruco.DetectorParameters_create()
Finally
Now, let’s test on a video stream, a .mp4 file.
videoFile = "charuco_board_57.mp4"
cap = cv2.VideoCapture(videoFile)
Then, we calculate the camera posture frame by frame:
if ids is not None:  # if there is at least one marker detected
    charucoretval, charucoCorners, charucoIds = aruco.interpolateCornersCharuco(corners, ids, frame_remapped_gray, board)
    im_with_charuco_board = aruco.drawDetectedCornersCharuco(frame_remapped, charucoCorners, charucoIds, (0, 255, 0))
    retval, rvec, tvec = aruco.estimatePoseCharucoBoard(charucoCorners, charucoIds, board, camera_matrix, dist_coeffs)  # posture estimation from a charuco board
    if retval:
        im_with_charuco_board = aruco.drawAxis(im_with_charuco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
else:
    im_with_charuco_board = frame_remapped
cv2.imshow("charucoboard", im_with_charuco_board)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break
else:
    break
The drawn axis simply shows the world coordinate frame's origin and orientation estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.
cap.release()  # when everything is done, release the capture
cv2.destroyAllWindows()
Today, let’s test on an aruco board, instead of a single marker or a diamond marker. Again, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.
If you are using a calibrated fisheye camera as we are, two extra parameters are to be loaded from the calibration file.
r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()
Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap(), supposing the images to be processed are 1080p.
After having this aruco board printed, the side length of a marker and the separation between two neighbouring markers are to be measured and stored in the two variables markerLength and markerSeparation, which are used to create the 5x7 grid board.
markerLength = 40  # Here, our measurement unit is centimetre.
markerSeparation = 8  # Here, our measurement unit is centimetre.
board = aruco.GridBoard_create(5, 7, markerLength, markerSeparation, aruco_dict)
Meanwhile, create the aruco detector parameters with default values.
arucoParams = aruco.DetectorParameters_create()
Finally
Now, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.
videoFile = "aruco_board_57.mp4"
cap = cv2.VideoCapture(videoFile)
Then, we calculate the camera posture frame by frame:
if ids is not None:  # if there is at least one marker detected
    im_with_aruco_board = aruco.drawDetectedMarkers(frame_remapped, corners, ids, (0, 255, 0))
    retval, rvec, tvec = aruco.estimatePoseBoard(corners, ids, board, camera_matrix, dist_coeffs)  # posture estimation from the aruco board
    if retval != 0:
        im_with_aruco_board = aruco.drawAxis(im_with_aruco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
else:
    im_with_aruco_board = frame_remapped
cv2.imshow("arucoboard", im_with_aruco_board)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break
else:
    break
The drawn axis simply shows the world coordinate frame's origin and orientation estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.
cap.release()  # when everything is done, release the capture
cv2.destroyAllWindows()
Very similar to our previous post Camera Posture Estimation Using A Single aruco Marker, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.
If you are using a calibrated fisheye camera as we are, two extra parameters are to be loaded from the calibration file.
r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()
Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap(), supposing the images to be processed are 1080p.
The dictionary aruco.DICT_6X6_250 is to be loaded. Although current OpenCV provides four groups of aruco patterns (4x4, 5x5, 6x6 and 7x7), it seems OpenCV's Python bindings do NOT provide a function named drawCharucoDiamond(). Therefore, we have to refer to the C++ tutorial Detection of Diamond Markers, and we directly use the particular diamond marker from that tutorial:
After having this aruco diamond marker printed, the side length of its chessboard squares and the side length of its aruco markers are to be measured and stored in the two variables squareLength and markerLength.
squareLength = 40  # Here, our measurement unit is centimetre.
markerLength = 25  # Here, our measurement unit is centimetre.
Meanwhile, create the aruco detector parameters with default values.
arucoParams = aruco.DetectorParameters_create()
Finally
This time, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.
videoFile = "aruco_diamond.mp4"
cap = cv2.VideoCapture(videoFile)
Then, we calculate the camera posture frame by frame:
if ids is not None:  # if there is at least one marker detected
    diamondCorners, diamondIds = aruco.detectCharucoDiamond(frame_remapped_gray, corners, ids, squareLength/markerLength)  # second, detect diamond markers
    if len(diamondCorners) >= 1:  # if there is at least one diamond detected
        im_with_diamond = aruco.drawDetectedDiamonds(frame_remapped, diamondCorners, diamondIds, (0, 255, 0))
        rvec, tvec = aruco.estimatePoseSingleMarkers(diamondCorners, squareLength, camera_matrix, dist_coeffs)  # posture estimation from a diamond
        im_with_diamond = aruco.drawAxis(im_with_diamond, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
else:
    im_with_diamond = frame_remapped
    if cv2.waitKey(2) & 0xFF == ord('q'):  # press 'q' to quit
        break
else:
    break
The drawn axis simply shows the world coordinate frame's origin and orientation estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.
cap.release()  # when everything is done, release the capture
cv2.destroyAllWindows()
Before you start coding, you need to ensure your camera has already been calibrated. (Camera calibration is covered in our blog as well.) In the coding section, it's assumed that you can successfully load the camera calibration parameters.
Since we are testing a calibrated fisheye camera, two extra parameters are to be loaded from the calibration file.
r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()
Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap(), supposing the images to be processed are 1080p.
A dictionary is to be loaded first. Current OpenCV provides four groups of aruco patterns: 4x4, 5x5, 6x6 and 7x7. Here, aruco.DICT_6X6_1000 is arbitrarily selected as our example, which looks like:
Hi, everyone. This is Nobody from Longer Vision Technology. I've come back to life, at least half-life, and I've finally decided to write something, either useful or useless. I hope my blogs will be able to help some pure researchers, as well as students, in the field of Computer Vision & Machine Vision. By the way, our products will go on sale soon, so please keep an eye on our blogs. Thank you…