Before you start coding, make sure your camera has already been calibrated. (Camera calibration is covered in our blog as well.) The coding section assumes that you can successfully load the camera calibration parameters.
The code can be found at OpenCV Examples.
First, we import the packages we will use.
We now load all of the camera calibration parameters, including cameraMatrix, distCoeffs, and so on. For example, your code might begin like this:
calibrationFile = "calibrationFileName.xml"
Since we are testing a calibrated fisheye camera, two extra parameters need to be loaded from the calibration file.
r = calibrationParams.getNode("R").mat()
Afterwards, two mapping matrices are pre-computed by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):
image_size = (1920, 1080)
Next, a marker dictionary is loaded. Current OpenCV provides four families of ArUco patterns: 4X4, 5X5, 6X6, and 7X7. Here, aruco.DICT_6X6_1000 is chosen arbitrarily as our example, which looks like:
aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )
After printing this square ArUco marker, measure its edge length and store it in the variable markerLength.
markerLength = 20  # Here, our measurement unit is the centimetre.
Meanwhile, create the ArUco detector parameters with their default values.
arucoParams = aruco.DetectorParameters_create()
Next, estimate the camera poses. Here, we test on a sequence of images rather than a video stream. We first list all the file names in order.
imgDir = "imgSequence" # Specify the image directory
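One way to collect the file names is sketched below; the directory name comes from the text, but the .jpg extension is an assumption, so adjust the pattern to your data.

```python
import os
import glob

imgDir = "imgSequence"  # Specify the image directory

# Sort so that frames are processed in sequence order.
imgFileNames = sorted(glob.glob(os.path.join(imgDir, "*.jpg")))
nbOfImgs = len(imgFileNames)
```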
Then, we compute the camera pose frame by frame:
for i in range(0, nbOfImgs):
The drawn axes are simply the world coordinate axes, positioned and oriented as estimated from the images taken by the test camera.