Preparation

Traditionally, a camera is calibrated using a chessboard pattern. Existing documentation already discusses camera calibration in detail, for example, the OpenCV-Python Tutorials.

Coding

Our code can be found at OpenCV Examples.

First of all

Unlike estimating camera postures, which deals with the extrinsic parameters, camera calibration computes the intrinsic parameters. In this case, there is NO need to measure the cell size of the chessboard. The adopted chessboard is just an ordinary chessboard, as shown below:

pattern_chessboard

Secondly

Required packages need to be imported.

import numpy as np
import cv2
import yaml

Thirdly

Some initialization work needs to be done, including: 1) defining the termination criteria used later when refining the corner locations to sub-pixel accuracy; 2) initializing the object point coordinates.

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

Fourthly

After localizing the 2D chessboard corners in 10 frames (10 can be changed to any positive integer you wish), the camera matrix and distortion coefficients can be calculated by invoking the function calibrateCamera. Here, we are testing on a real-time camera stream.

cap = cv2.VideoCapture(0)
found = 0
while(found < 10):  # Here, 10 can be changed to whatever number you like to choose
    ret, img = cap.read()  # Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)  # Certainly, every loop objp is the same, in 3D.
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (7,6), corners2, ret)
        found += 1

    cv2.imshow('img', img)
    cv2.waitKey(10)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
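
As a quick sanity check (this snippet is our own addition, not part of the original tutorial), you may re-project the object points with the estimated parameters and compare them against the detected corners; a mean re-projection error well below one pixel usually indicates a reasonable calibration.

mean_error = 0
for i in range(len(objpoints)):
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    mean_error += cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / len(projected)
print("mean reprojection error:", mean_error / len(objpoints))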

Finally

Write the calculated calibration parameters into a YAML file. One detail is a bit tricky: bear in mind that you MUST call tolist() to transform each numpy array into a plain list before dumping.

# It's very important to transform the matrix to list.

data = {'camera_matrix': np.asarray(mtx).tolist(), 'dist_coeff': np.asarray(dist).tolist()}

with open("calibration.yaml", "w") as f:
yaml.dump(data, f)

Additionally

You may use the following piece of code to load the calibration parameters from file “calibration.yaml”.

with open('calibration.yaml') as f:
    loadeddict = yaml.safe_load(f)  # safe_load avoids the Loader warning raised by plain yaml.load in newer PyYAML

mtxloaded = loadeddict.get('camera_matrix')
distloaded = loadeddict.get('dist_coeff')
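
Note that the loaded values are plain Python lists. If you intend to feed them back into OpenCV functions (e.g. cv2.undistort), convert them back to numpy arrays first; a minimal sketch:

mtxloaded = np.asarray(mtxloaded)    # back to a 3x3 numpy array
distloaded = np.asarray(distloaded)  # back to a distortion-coefficient array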

Preparation

Open a bash terminal, and type in the following commands:

$ python
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> help (cv2.aruco)

Then, you will be able to see all the doc contents for cv2.aruco:

Help on module cv2.aruco in cv2:

NAME
cv2.aruco

FUNCTIONS
Board_create(...)

Board_create(objPoints, dictionary, ids) -> retval

CharucoBoard_create(...)

CharucoBoard_create(squaresX, squaresY, squareLength, markerLength, dictionary) -> retval

DetectorParameters_create(...)

DetectorParameters_create() -> retval

Dictionary_create(...)

Dictionary_create(nMarkers, markerSize) -> retval

Dictionary_create_from(...)

Dictionary_create_from(nMarkers, markerSize, baseDictionary) -> retval

Dictionary_get(...)

Dictionary_get(dict) -> retval

GridBoard_create(...)

GridBoard_create(markersX, markersY, markerLength, markerSeparation, dictionary[, firstMarker]) -> retval

calibrateCameraAruco(...)

calibrateCameraAruco(corners, ids, counter, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs

calibrateCameraArucoExtended(...)

calibrateCameraArucoExtended(corners, ids, counter, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, stdDeviationsIntrinsics[, stdDeviationsExtrinsics[, perViewErrors[, flags[, criteria]]]]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs, stdDeviationsIntrinsics, stdDeviationsExtrinsics, perViewErrors

calibrateCameraCharuco(...)

calibrateCameraCharuco(charucoCorners, charucoIds, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs

calibrateCameraCharucoExtended(...)

calibrateCameraCharucoExtended(charucoCorners, charucoIds, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, stdDeviationsIntrinsics[, stdDeviationsExtrinsics[, perViewErrors[, flags[, criteria]]]]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs, stdDeviationsIntrinsics, stdDeviationsExtrinsics, perViewErrors

custom_dictionary(...)

custom_dictionary(nMarkers, markerSize) -> retval

custom_dictionary_from(...)

custom_dictionary_from(nMarkers, markerSize, baseDictionary) -> retval

detectCharucoDiamond(...)

detectCharucoDiamond(image, markerCorners, markerIds, squareMarkerLengthRate[, diamondCorners[, diamondIds[, cameraMatrix[, distCoeffs]]]]) -> diamondCorners, diamondIds

detectMarkers(...)

detectMarkers(image, dictionary[, corners[, ids[, parameters[, rejectedImgPoints]]]]) -> corners, ids, rejectedImgPoints

drawAxis(...)

drawAxis(image, cameraMatrix, distCoeffs, rvec, tvec, length) -> image

drawDetectedCornersCharuco(...)

drawDetectedCornersCharuco(image, charucoCorners[, charucoIds[, cornerColor]]) -> image

drawDetectedDiamonds(...)

drawDetectedDiamonds(image, diamondCorners[, diamondIds[, borderColor]]) -> image

drawDetectedMarkers(...)

drawDetectedMarkers(image, corners[, ids[, borderColor]]) -> image

drawMarker(...)

drawMarker(dictionary, id, sidePixels[, img[, borderBits]]) -> img

drawPlanarBoard(...)

drawPlanarBoard(board, outSize[, img[, marginSize[, borderBits]]]) -> img

estimatePoseBoard(...)

estimatePoseBoard(corners, ids, board, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess]]]) -> retval, rvec, tvec

estimatePoseCharucoBoard(...)

estimatePoseCharucoBoard(charucoCorners, charucoIds, board, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess]]]) -> retval, rvec, tvec

estimatePoseSingleMarkers(...)

estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs[, rvecs[, tvecs]]) -> rvecs, tvecs

getPredefinedDictionary(...)

getPredefinedDictionary(dict) -> retval

interpolateCornersCharuco(...)

interpolateCornersCharuco(markerCorners, markerIds, image, board[, charucoCorners[, charucoIds[, cameraMatrix[, distCoeffs[, minMarkers]]]]]) -> retval, charucoCorners, charucoIds

refineDetectedMarkers(...)

refineDetectedMarkers(image, board, detectedCorners, detectedIds, rejectedCorners[, cameraMatrix[, distCoeffs[, minRepDistance[, errorCorrectionRate[, checkAllOrders[, recoveredIdxs[, parameters]]]]]]]) -> detectedCorners, detectedIds, rejectedCorners, recoveredIdxs

DATA
DICT_4X4_100 = 1
DICT_4X4_1000 = 3
DICT_4X4_250 = 2
DICT_4X4_50 = 0
DICT_5X5_100 = 5
DICT_5X5_1000 = 7
DICT_5X5_250 = 6
DICT_5X5_50 = 4
DICT_6X6_100 = 9
DICT_6X6_1000 = 11
DICT_6X6_250 = 10
DICT_6X6_50 = 8
DICT_7X7_100 = 13
DICT_7X7_1000 = 15
DICT_7X7_250 = 14
DICT_7X7_50 = 12
DICT_ARUCO_ORIGINAL = 16

FILE
(built-in)

Preparation

A widely used asymmetric circle grid pattern can be found in the OpenCV 2.4 documentation. As in previous blogs, the camera needs to be calibrated beforehand. For this asymmetric circle grid example, a sequence of images (instead of a video stream) is tested.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is on our system path; cv2.so is the OpenCV Python binding.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used (NO ArUco).

import os
import cv2
import numpy as np

Secondly

We now load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. For example, your code might look like the following:

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

Since we are testing a calibrated fisheye camera, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)
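
These two maps are then applied to every incoming frame with cv2.remap() before any detection is attempted. A minimal usage sketch (here, img stands for any frame you have already loaded):

undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)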

Thirdly

The circle pattern is to be loaded.

asymmetric_circle_grid

Here, in our case, the object points of this asymmetric circle grid pattern are defined manually as follows:

# Original blob coordinates
objectPoints = np.zeros((44, 3)) # In this asymmetric circle grid, 44 circles are adopted.
objectPoints[0] = (0 , 0 , 0)
objectPoints[1] = (0 , 72 , 0)
objectPoints[2] = (0 , 144, 0)
objectPoints[3] = (0 , 216, 0)
objectPoints[4] = (36 , 36 , 0)
objectPoints[5] = (36 , 108, 0)
objectPoints[6] = (36 , 180, 0)
objectPoints[7] = (36 , 252, 0)
objectPoints[8] = (72 , 0 , 0)
objectPoints[9] = (72 , 72 , 0)
objectPoints[10] = (72 , 144, 0)
objectPoints[11] = (72 , 216, 0)
objectPoints[12] = (108, 36, 0)
objectPoints[13] = (108, 108, 0)
objectPoints[14] = (108, 180, 0)
objectPoints[15] = (108, 252, 0)
objectPoints[16] = (144, 0 , 0)
objectPoints[17] = (144, 72 , 0)
objectPoints[18] = (144, 144, 0)
objectPoints[19] = (144, 216, 0)
objectPoints[20] = (180, 36 , 0)
objectPoints[21] = (180, 108, 0)
objectPoints[22] = (180, 180, 0)
objectPoints[23] = (180, 252, 0)
objectPoints[24] = (216, 0 , 0)
objectPoints[25] = (216, 72 , 0)
objectPoints[26] = (216, 144, 0)
objectPoints[27] = (216, 216, 0)
objectPoints[28] = (252, 36 , 0)
objectPoints[29] = (252, 108, 0)
objectPoints[30] = (252, 180, 0)
objectPoints[31] = (252, 252, 0)
objectPoints[32] = (288, 0 , 0)
objectPoints[33] = (288, 72 , 0)
objectPoints[34] = (288, 144, 0)
objectPoints[35] = (288, 216, 0)
objectPoints[36] = (324, 36 , 0)
objectPoints[37] = (324, 108, 0)
objectPoints[38] = (324, 180, 0)
objectPoints[39] = (324, 252, 0)
objectPoints[40] = (360, 0 , 0)
objectPoints[41] = (360, 72 , 0)
objectPoints[42] = (360, 144, 0)
objectPoints[43] = (360, 216, 0)
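
Typing out all 44 points by hand is error-prone; as an equivalent shortcut (a sketch of our own, assuming the same 36-unit row offset and 72-unit column spacing used above), the grid can also be generated programmatically:

objectPoints = np.zeros((44, 3))
for idx in range(44):
    row, col = idx // 4, idx % 4  # 11 rows of 4 circles each
    objectPoints[idx] = (row * 36, col * 72 + (row % 2) * 36, 0)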

In our case, the distance between two neighbouring circle centres (in the same column) is measured as 72 centimetres. Meanwhile, the axes at the origin are defined as well, with respective lengths of 360, 240 and 120 centimetres, matching the values used below.

axis = np.float32([[360,0,0], [0,240,0], [0,0,-120]]).reshape(-1,3)

Fourthly

Since we are going to use OpenCV's SimpleBlobDetector for blob detection, the SimpleBlobDetector's parameters are to be created beforehand. The parameter values can be adjusted to suit your own testing environment. The termination criteria used later by cornerSubPix to refine the detected centres are also created at the same time.

# Setup SimpleBlobDetector parameters.
blobParams = cv2.SimpleBlobDetector_Params()

# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255

# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64 # minArea may be adjusted to suit for your experiment
blobParams.maxArea = 2500 # maxArea may be adjusted to suit for your experiment

# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1

# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87

# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01

# Create a detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)

# Create the iteration criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
###################################################################################################
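
As a quick sanity check (our own addition; "circle_grid_sample.png" is just a placeholder file name), the detector should report roughly 44 blobs on a clean, frontal view of the printed pattern:

sampleImg = cv2.imread("circle_grid_sample.png", cv2.IMREAD_GRAYSCALE)
sampleKeypoints = blobDetector.detect(sampleImg)
print(len(sampleKeypoints), "blobs detected")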

Finally

Now we can estimate the camera postures. Here, we are testing a sequence of images rather than a video stream. We first list all file names in sequence.

imgDir = "imgSequence"  # Specify the image directory
imgFileNames = [os.path.join(imgDir, fn) for fn in next(os.walk(imgDir))[2]]
nbOfImgs = len(imgFileNames)

Then, we calculate the camera posture frame by frame:

for i in range(0, nbOfImgs):
    img = cv2.imread(imgFileNames[i], cv2.IMREAD_COLOR)
    imgRemapped = cv2.remap(img, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)  # for fisheye remapping
    imgRemapped_gray = cv2.cvtColor(imgRemapped, cv2.COLOR_BGR2GRAY)  # blobDetector.detect() requires a gray image

    keypoints = blobDetector.detect(imgRemapped_gray)  # Detect blobs.

    # Draw detected blobs as green circles. This helps cv2.findCirclesGrid().
    im_with_keypoints = cv2.drawKeypoints(imgRemapped, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4,11), None, flags = cv2.CALIB_CB_ASYMMETRIC_GRID)  # Find the circle grid

    if ret == True:
        corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11,11), (-1,-1), criteria)  # Refine the corner locations.

        # Draw and display the corners.
        im_with_keypoints = cv2.drawChessboardCorners(im_with_keypoints, (4,11), corners2, ret)

        # 3D posture
        if len(corners2) == len(objectPoints):
            retval, rvec, tvec = cv2.solvePnP(objectPoints, corners2, camera_matrix, dist_coeffs)

            if retval:
                projectedPoints, jac = cv2.projectPoints(objectPoints, rvec, tvec, camera_matrix, dist_coeffs)  # project 3D points to the image plane
                projectedAxis, jacAxis = cv2.projectPoints(axis, rvec, tvec, camera_matrix, dist_coeffs)  # project the axes to the image plane
                for p in projectedPoints:
                    p = np.int32(p).reshape(-1,2)
                    cv2.circle(im_with_keypoints, (p[0][0], p[0][1]), 3, (0,0,255))
                origin = tuple(corners2[0].ravel())
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[0].ravel()), (255,0,0), 2)
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[1].ravel()), (0,255,0), 2)
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[2].ravel()), (0,0,255), 2)

    cv2.imshow("circlegrid", im_with_keypoints)  # display

    cv2.waitKey(2)

The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the testing camera.
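
If you also want the camera position in the board's coordinate frame (not shown in the original code), it can be recovered from the solvePnP output with cv2.Rodrigues; a minimal sketch:

R, _ = cv2.Rodrigues(rvec)        # rotation vector -> rotation matrix
camera_position = -R.T.dot(tvec)  # camera centre expressed in world (board) coordinates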

Preparation

ChArUco is an integrated marker pattern that combines a chessboard with ArUco markers. The code is also very similar to the code in our previous blog on the aruco board.

Coding

The code can be found at OpenCV Examples. The code in the first two subsections is exactly the same as what's written in our previous blogs, so we will skip the details of those two subsections from now on.

First of all

Exactly the same as in previous blogs.

Secondly

Exactly the same as in previous blogs.

Thirdly

The dictionary aruco.DICT_6X6_1000 is combined with a chessboard to construct a ChArUco board. The board used in this experiment is of size 5X7, which looks like:

charuco.DICT_6X6_1000.board57

aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )

After having this ChArUco board printed, the edge lengths of a chessboard square and of an aruco marker (displayed in the white cells of the chessboard) are to be measured and stored in the two variables squareLength and markerLength, which are used to create the 5X7 board.

squareLength = 40   # Here, our measurement unit is centimetre.
markerLength = 30 # Here, our measurement unit is centimetre.
board = aruco.CharucoBoard_create(5, 7, squareLength, markerLength, aruco_dict)
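
If you still need to print the board, the CharucoBoard object can render itself to an image (a sketch; the 500x700 pixel output size and the file name are our own choices, not from the original post):

boardImage = board.draw((500, 700))          # render the board to a grayscale image
cv2.imwrite("charuco_board_5x7.png", boardImage)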

Meanwhile, create the aruco detector parameters with default values.

arucoParams = aruco.DetectorParameters_create()

Finally

Now, let’s test on a video stream, a .mp4 file.

videoFile = "charuco_board_57.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read()  # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)  # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers
        aruco.refineDetectedMarkers(frame_remapped_gray, board, corners, ids, rejectedImgPoints)

        if ids is not None:  # if there is at least one marker detected
            charucoretval, charucoCorners, charucoIds = aruco.interpolateCornersCharuco(corners, ids, frame_remapped_gray, board)
            im_with_charuco_board = aruco.drawDetectedCornersCharuco(frame_remapped, charucoCorners, charucoIds, (0,255,0))
            retval, rvec, tvec = aruco.estimatePoseCharucoBoard(charucoCorners, charucoIds, board, camera_matrix, dist_coeffs)  # posture estimation from a charuco board
            if retval == True:
                im_with_charuco_board = aruco.drawAxis(im_with_charuco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
        else:
            im_with_charuco_board = frame_remapped

        cv2.imshow("charucoboard", im_with_charuco_board)

        if cv2.waitKey(2) & 0xFF == ord('q'):
            break
    else:
        break

The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the testing camera.
At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Today, let’s test on an aruco board, instead of a single marker or a diamond marker. Again, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is on our system path; cv2.so is the OpenCV Python binding.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

Again, we need to load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. :

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

If you are using a calibrated fisheye camera like us, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

In our test, the dictionary aruco.DICT_6X6_1000 is adopted as the unit pattern to construct a grid board. The board is of size 5X7, which looks like:

aruco.DICT_6X6_1000.board57

aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )

After having this aruco board printed, the edge length of a single aruco marker and the distance between two neighbouring markers are to be measured and stored in the two variables markerLength and markerSeparation, which are used to create the 5X7 grid board.

markerLength = 40   # Here, our measurement unit is centimetre.
markerSeparation = 8 # Here, our measurement unit is centimetre.
board = aruco.GridBoard_create(5, 7, markerLength, markerSeparation, aruco_dict)
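
Similarly, the grid board can be rendered for printing with aruco.drawPlanarBoard (listed in the help output earlier); the 500x700 pixel output size and the file name below are our own choices:

boardImage = aruco.drawPlanarBoard(board, (500, 700))  # render the grid board to an image
cv2.imwrite("aruco_board_5x7.png", boardImage)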

Meanwhile, create the aruco detector parameters with default values.

arucoParams = aruco.DetectorParameters_create()

Finally

Now, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.

videoFile = "aruco\_board\_57.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read()  # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)  # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers
        aruco.refineDetectedMarkers(frame_remapped_gray, board, corners, ids, rejectedImgPoints)

        if ids is not None:  # if there is at least one marker detected
            im_with_aruco_board = aruco.drawDetectedMarkers(frame_remapped, corners, ids, (0,255,0))
            retval, rvec, tvec = aruco.estimatePoseBoard(corners, ids, board, camera_matrix, dist_coeffs)  # posture estimation from the aruco board
            if retval != 0:
                im_with_aruco_board = aruco.drawAxis(im_with_aruco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
        else:
            im_with_aruco_board = frame_remapped

        cv2.imshow("arucoboard", im_with_aruco_board)

        if cv2.waitKey(2) & 0xFF == ord('q'):
            break
    else:
        break

The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the testing camera.
At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Very similar to our previous post Camera Posture Estimation Using A Single aruco Marker, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is on our system path; cv2.so is the OpenCV Python binding.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

Again, we need to load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. :

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

If you are using a calibrated fisheye camera like us, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

The dictionary aruco.DICT_6X6_250 is to be loaded. Although current OpenCV provides four groups of aruco patterns (4X4, 5X5, 6X6 and 7X7), it seems OpenCV Python does NOT provide a function named drawCharucoDiamond(). Therefore, we have to refer to the C++ tutorial Detection of Diamond Markers, and we directly use the particular diamond marker shown in that tutorial:

aruco.DICT_6X6_250.diamond

aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_250 )

After having this aruco diamond marker printed, the edge lengths of its chessboard square and of the embedded aruco marker are to be measured and stored in the two variables squareLength and markerLength.

squareLength = 40   # Here, our measurement unit is centimetre.
markerLength = 25 # Here, our measurement unit is centimetre.

Meanwhile, create the aruco detector parameters with default values.

arucoParams = aruco.DetectorParameters_create()

Finally

This time, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.

videoFile = "aruco_diamond.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read()  # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)  # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers

        if ids is not None:  # if there is at least one marker detected
            diamondCorners, diamondIds = aruco.detectCharucoDiamond(frame_remapped_gray, corners, ids, squareLength/markerLength)  # Second, detect diamond markers
            if len(diamondCorners) >= 1:  # if there is at least one diamond detected
                im_with_diamond = aruco.drawDetectedDiamonds(frame_remapped, diamondCorners, diamondIds, (0,255,0))
                rvec, tvec = aruco.estimatePoseSingleMarkers(diamondCorners, squareLength, camera_matrix, dist_coeffs)  # posture estimation from a diamond
                im_with_diamond = aruco.drawAxis(im_with_diamond, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
            else:
                im_with_diamond = frame_remapped
        else:
            im_with_diamond = frame_remapped

        cv2.imshow("diamondLeft", im_with_diamond)  # display

        if cv2.waitKey(2) & 0xFF == ord('q'):  # press 'q' to quit
            break
    else:
        break

The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the testing camera.
At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Before start coding, you need to ensure your camera has already been calibrated. (Camera calibration is covered in our blog as well.) In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is on our system path; cv2.so is the OpenCV Python binding.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

We now load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. For example, your code might look like the following:

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

Since we are testing a calibrated fisheye camera, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling cv2.fisheye.initUndistortRectifyMap() (assuming the images to be processed are 1080p):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

A dictionary is to be loaded. Current OpenCV provides four groups of aruco patterns: 4X4, 5X5, 6X6 and 7X7. Here, aruco.DICT_6X6_1000 is selected as our example, which looks like:

aruco.DICT_6X6_1000

aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )

After having this aruco square marker printed, the edge length of this particular marker is to be measured and stored in a variable markerLength.

markerLength = 20 # Here, our measurement unit is centimetre.
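
If you still need to generate the marker image for printing, aruco.drawMarker (listed in the help output earlier) can be used; marker id 0, the 200-pixel side length and the file name are arbitrary choices of ours:

markerImage = aruco.drawMarker(aruco_dict, 0, 200)  # render marker id 0 at 200x200 pixels
cv2.imwrite("aruco_marker_0.png", markerImage)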

Meanwhile, create the aruco detector parameters with default values.

arucoParams = aruco.DetectorParameters_create()

Finally

Now we can estimate the camera postures. Here, we are testing a sequence of images rather than a video stream. We first list all file names in sequence.

imgDir = "imgSequence"  # Specify the image directory
imgFileNames = [os.path.join(imgDir, fn) for fn in next(os.walk(imgDir))[2]]
nbOfImgs = len(imgFileNames)

Then, we calculate the camera posture frame by frame:

for i in range(0, nbOfImgs):
    img = cv2.imread(imgFileNames[i], cv2.IMREAD_COLOR)
    imgRemapped = cv2.remap(img, map1, map2, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)  # for fisheye remapping
    imgRemapped_gray = cv2.cvtColor(imgRemapped, cv2.COLOR_BGR2GRAY)  # aruco.detectMarkers() requires a gray image
    corners, ids, rejectedImgPoints = aruco.detectMarkers(imgRemapped_gray, aruco_dict, parameters=arucoParams)  # Detect aruco markers
    if ids is not None:  # if at least one aruco marker is detected
        rvec, tvec = aruco.estimatePoseSingleMarkers(corners, markerLength, camera_matrix, dist_coeffs)  # For a single marker
        imgWithAruco = aruco.drawDetectedMarkers(imgRemapped, corners, ids, (0,255,0))
        imgWithAruco = aruco.drawAxis(imgWithAruco, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
    else:  # if no aruco marker is detected
        imgWithAruco = imgRemapped  # assign the remapped frame to imgWithAruco directly

    cv2.imshow("aruco", imgWithAruco)  # display

    if cv2.waitKey(2) & 0xFF == ord('q'):  # if 'q' is pressed, quit.
        break

The drawn axes are just the world coordinate origin and orientation estimated from the images taken by the testing camera.
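
Incidentally, since tvec expresses the marker position in the camera frame, the camera-to-marker distance (in the same unit as markerLength, centimetres here) can be read off directly; a small sketch for the first detected marker:

distance = np.linalg.norm(tvec[0])  # Euclidean distance from the camera to the first marker
print("distance to marker:", distance, "cm")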

Hi, everyone. This is Nobody from Longer Vision Technology. I come back to life, at least, half life. And finally, I decided to write something, either useful, or useless. Hope my blogs will be able to help some of the pure researchers, as well as the students, in the field of Computer Vision & Machine Vision. By the way, our products will be put on sale soon. Keep an eye on our blogs please. Thank you…
