Camera Calibration Using a Chessboard

Preparation

Traditionally, a camera is calibrated using a chessboard. Existing documentation has already covered camera calibration in detail, for example, the OpenCV-Python Tutorials.

Coding

Our code can be found at OpenCV Examples.

First of all

Unlike estimating camera poses, which deals with the extrinsic parameters, camera calibration computes the intrinsic parameters. In this case, there is NO need to measure the cell size of the chessboard. The adopted chessboard is just an ordinary chessboard:

[Image: pattern_chessboard]
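
For orientation, here is a minimal sketch (with made-up numbers) of what the intrinsic parameters look like: a 3x3 camera matrix holding the focal lengths and the principal point in pixels, plus a vector of distortion coefficients. These are the quantities calibrateCamera estimates later in this post.

import numpy as np

# Illustrative values only -- a real calibration will produce different numbers.
# fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
camera_matrix = np.array([[fx, 0.0, cx],
                          [0.0, fy, cy],
                          [0.0, 0.0, 1.0]])
# Distortion coefficients in OpenCV's (k1, k2, p1, p2, k3) order.
dist_coeff = np.zeros(5)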

Secondly

Required packages need to be imported.

import numpy as np
import cv2
import yaml

Thirdly

Some initialization work needs to be done, including: 1) defining the termination criteria used when refining the corner locations to sub-pixel accuracy later on; 2) initializing the object point coordinates.

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
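
As a quick check, printing the first few rows of objp shows the unit-spaced grid that the 7x6 pattern above produces; because the grid is unit-spaced rather than measured in millimetres, the intrinsics still come out in pixels.

print(objp[:3])
# [[0. 0. 0.]
#  [1. 0. 0.]
#  [2. 0. 0.]]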

Fourthly

After detecting the grid of 2D chessboard corners in 10 frames (10 can be changed to any positive integer you wish), the camera matrix and distortion coefficients can be calculated by invoking the function calibrateCamera. Here, we are testing on a real-time camera stream.

cap = cv2.VideoCapture(0)
found = 0
while found < 10:  # Here, 10 can be changed to whatever number you like to choose
    ret, img = cap.read()  # Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)  # Certainly, every loop objp is the same, in 3D.
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (7,6), corners2, ret)
        found += 1

    cv2.imshow('img', img)
    cv2.waitKey(10)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
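
Optionally, you can sanity-check the result by computing the mean re-projection error, along the lines of the OpenCV-Python tutorial; values well below one pixel usually indicate a good calibration. A minimal sketch:

# Re-project the object points with the estimated parameters and compare
# against the detected corners.
mean_error = 0
for i in range(len(objpoints)):
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    mean_error += cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / len(projected)
print("mean re-projection error:", mean_error / len(objpoints))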

Finally

Write the calculated calibration parameters into a YAML file. Here it is a bit tricky: you MUST call tolist() to transform each NumPy array into a plain Python list, because PyYAML has no built-in representer for NumPy arrays and yaml.dump would otherwise fail.

# It's very important to transform the matrices to lists.
data = {'camera_matrix': np.asarray(mtx).tolist(), 'dist_coeff': np.asarray(dist).tolist()}

with open("calibration.yaml", "w") as f:
    yaml.dump(data, f)

Additionally

You may use the following piece of code to load the calibration parameters from the file "calibration.yaml".

with open('calibration.yaml') as f:
    # safe_load avoids the explicit Loader argument that newer PyYAML requires for yaml.load
    loadeddict = yaml.safe_load(f)

mtxloaded = loadeddict.get('camera_matrix')
distloaded = loadeddict.get('dist_coeff')
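
Note that the loaded values are plain Python lists. A minimal usage sketch (assuming a hypothetical test image test.png) converts them back to NumPy arrays before handing them to OpenCV, for example to undistort an image:

import numpy as np
import cv2

mtx = np.asarray(mtxloaded)
dist = np.asarray(distloaded)

img = cv2.imread('test.png')               # hypothetical test image
undistorted = cv2.undistort(img, mtx, dist)
cv2.imwrite('test_undistorted.png', undistorted)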