OpenCV - undistort image and create point cloud based on it - python

I captured around 40 images with a RealSense camera, which gave me RGB and corresponding aligned depth images. With rs.getintrinsic() I got the intrinsic matrix of the camera. But there is still a distortion, which can be seen in the point cloud that can easily be generated from the depth image. Here you can see it on the right side: PointCloud of a Plane in depth image
The point cloud represents a plane.
Now I calculated the distortion coefficients of the camera with cv.calibrateCamera(..., intrinsic_RS_matrix, flags= cv2.CALIB_USE_INTRINSIC_GUESS|cv2.CALIB_FIX_FOCAL_LENGTH|cv2.CALIB_FIX_PRINCIPAL_POINT), using all 40 RGB images.
Based on the newly calculated distortion I compute the new camera matrix with cv2.getOptimalNewCameraMatrix() and the undistorted new RGB and depth images with cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix).
Now I want to compute the point cloud from the new undistorted depth image. But which camera matrix should I use: the newCameraMatrix or the old one from rs.getintrinsic()?
I also used alpha=0, so there is no cropping of the image. But if I used alpha=1 there would be cropping. In that case, should I use the cropped image or the uncropped one?
Here is the full Code for calculating the distortion and newCameraMatrix:
import cv2
import numpy as np
import os
import glob

checkerboard = (6, 10)
criteria = (cv2.TERM_CRITERIA_EPS +
            cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Vector for 3D points
threedpoints = []
# Vector for 2D points
twodpoints = []

# 3D points real world coordinates
objectp3d = np.zeros((1, checkerboard[0]*checkerboard[1], 3), np.float32)
objectp3d[0, :, :2] = np.mgrid[0:checkerboard[0], 0:checkerboard[1]].T.reshape(-1, 2) * 30

prev_img_shape = None

path = r"..."
resolution = "1280_720"
_, dates, _ = next(os.walk(path))

images = glob.glob(path)
print(len(images))

for filename in images:
    image = cv2.imread(filename)
    grayColor = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(image, checkerboard, flags=cv2.CALIB_CB_ADAPTIVE_THRESH)

    if ret == True:
        threedpoints.append(objectp3d)

        # Refining pixel coordinates for given 2d points.
        corners2 = cv2.cornerSubPix(grayColor, corners, (11, 11), (-1, -1), criteria)
        twodpoints.append(corners2)

        # Draw and display the corners
        image = cv2.drawChessboardCorners(image, checkerboard, corners2, ret)

print("detected corners: ", len(twodpoints))

K_RS = np.load(r"path to RS intrinsic")

ret, matrix, distortion, r_vecs, t_vecs = cv2.calibrateCamera(
    threedpoints, twodpoints, grayColor.shape[::-1], cameraMatrix=K_RS, distCoeffs=None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_FOCAL_LENGTH | cv2.CALIB_FIX_PRINCIPAL_POINT)  # None, None
def loadUndistortedImage(filename, cameraMatrix, distCoeffs):
    image = cv2.imread(filename, -1)

    # setup enlargement and offset for new image
    imageShape = image.shape  # image.size
    imageSize = (imageShape[1], imageShape[0])

    # create a new camera matrix with the principal point offset according to the offset above
    newCameraMatrix, roi = cv2.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize,
                                                         0, imageSize)  # alpha = 0

    # create undistortion maps
    R = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

    outputImage = cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix)

    roi_x, roi_y, roi_w, roi_h = roi
    cropped_outputImage = outputImage[roi_y: roi_y + roi_h, roi_x: roi_x + roi_w]

    fixed_filename = r"..."
    cv2.imwrite(fixed_filename, outputImage)
    return newCameraMatrix

# Undistort the images, then save the restored images
newmatrix = loadUndistortedImage(r'...', matrix, distortion)

I would suggest using the uncropped image, which has the same width and height as the original images used for the camera calibration. The cropped one will have a different image shape/size.
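For reference, here is a minimal back-projection sketch (not the asker's code) of how the point cloud can be built from the undistorted, uncropped depth image with the pinhole model. It assumes the depth is in millimetres, invalid pixels are 0, and K is the matrix that was actually used as the new camera matrix in cv2.undistort (i.e. newCameraMatrix with alpha=0):
import numpy as np

def depth_to_pointcloud(depth, K):
    # Back-project every valid depth pixel: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth measurement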

Related

Why are the distortion coefficients changing when calibrating camera?

I took an iPhone video of my computer monitor with a chessboard on it. During the video I did not change any of the camera settings, just simply moved my phone around.
From the video, I saved two screenshots where the grid was fully in view:
I calibrated both images using the code below and I got two very different sets of distortion coefficients... why are they different if it's the exact same camera?
import cv2
import numpy as np
import os
import glob

# Define the dimensions of checkerboard
CHECKERBOARD = (15, 22)

# stop the iteration when specified
# accuracy, epsilon, is reached or
# specified number of iterations are completed.
criteria = (cv2.TERM_CRITERIA_EPS +
            cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Vector for 3D points
threedpoints = []
# Vector for 2D points
twodpoints = []

# 3D points real world coordinates
objectp3d = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objectp3d[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)
prev_img_shape = None

# Extracting path of individual image stored
# in a given directory. Since no path is
# specified, it will take current directory
# jpg files alone
images = glob.glob('*.jpg')
print(images)

for filename in images:
    image = cv2.imread(filename)
    grayColor = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    # If desired number of corners are
    # found in the image then ret = true
    ret, corners = cv2.findChessboardCorners(
        grayColor, CHECKERBOARD,
        cv2.CALIB_CB_ADAPTIVE_THRESH
        + cv2.CALIB_CB_FAST_CHECK +
        cv2.CALIB_CB_NORMALIZE_IMAGE)
    print("return: " + ret.__str__())

    # If desired number of corners can be detected then,
    # refine the pixel coordinates and display
    # them on the images of checker board
    if ret == True:
        threedpoints.append(objectp3d)

        # Refining pixel coordinates
        # for given 2d points.
        corners2 = cv2.cornerSubPix(
            grayColor, corners, (11, 11), (-1, -1), criteria)
        twodpoints.append(corners2)

        # Draw and display the corners
        image = cv2.drawChessboardCorners(image,
                                          CHECKERBOARD,
                                          corners2, ret)

    cv2.imshow('img', image)
    cv2.waitKey(0)

cv2.destroyAllWindows()

h, w = image.shape[:2]

# Perform camera calibration by
# passing the value of above found out 3D points (threedpoints)
# and its corresponding pixel coordinates of the
# detected corners (twodpoints)
ret, matrix, distortion, r_vecs, t_vecs = cv2.calibrateCamera(
    threedpoints, twodpoints, grayColor.shape[::-1], None, None)

# Displaying required output
print(" Camera matrix:")
print(matrix)

print("\n Distortion coefficient:")
print(distortion)
Distortion Coefficients for images 1 and 2 respectively:
Distortion coefficient:
[[ 1.15092474e-01 2.51065895e+00 2.16077891e-03 4.76654910e-03
-3.40419245e+01]]
Distortion coefficient:
[[ 2.50995880e-01 -6.98047707e+00 1.14468356e-03 -1.10525114e-02
1.43212364e+02]]

OpenCV image alignment how to rotate after homography

I'm a novice at OpenCV. Currently I'm following this tutorial on image alignment, and I have the following image and template for testing.
scanned image (test_image.jpg):
template image (template.jpg):
and the following Python code:
from __future__ import print_function
import cv2
import numpy as np

MAX_FEATURES = 500
GOOD_MATCH_PERCENT = 0.15

def alignImages(im1, im2):
    # Convert images to grayscale
    im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

    # Match features.
    matcher = cv2.DescriptorMatcher_create(
        cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = list(matcher.match(descriptors1, descriptors2, None))

    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)

    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None)
    cv2.imwrite("matches.jpg", imMatches)

    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

    # Use homography
    height, width, channels = im2.shape
    im1Reg = cv2.warpPerspective(im1, h, (width, height))

    return im1Reg, h

if __name__ == '__main__':
    # Read reference image
    refFilename = "template.jpg"
    print("Reading reference image : ", refFilename)
    imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)

    # Read image to be aligned
    imFilename = "test_image.jpg"
    print("Reading image to align : ", imFilename)
    im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

    print("Aligning images ...")
    # Registered image will be stored in imReg.
    # The estimated homography will be stored in h.
    imReg, h = alignImages(im, imReference)

    # Write aligned image to disk.
    outFilename = "aligned.jpg"
    print("Saving aligned image : ", outFilename)
    cv2.imwrite(outFilename, imReg)

    # Print estimated homography
    print("Estimated homography : \n", h)
I get the following results after running the script:
matches.jpg:
UPDATE:
I was able to get the aligned image when I increased the number of ORB features to 2000:
aligned.jpg
But the homography is still not rotating the image. How can I rotate the image to the same position as the template?
There are two forms of homography mapping (forward and backward), but if you have already found the homography, applying it can be done without using opencv as follows:
import numpy as np
from scipy.interpolate import griddata

# creating the homogeneous coordinates
src_h, src_w, _ = src_image.shape
values = np.matrix.reshape(src_image, (-1, 3), order='F')
yy, xx = np.meshgrid(np.arange(src_h), np.arange(src_w))
input_flat = np.concatenate((xx.reshape((1, -1)), yy.reshape((1, -1)), np.ones_like(xx.reshape((1, -1)))), axis=0)

# applying the homography and converting back to homogeneous coordinates
points = np.matmul(homography, input_flat)
points_homogeneous = points[0:2, :] / points[2, :]

# interpolating the result to nicely fit the grid coordinates
dst_image_shape = [400, 400]  # could be any number here
yy, xx = np.meshgrid(np.arange(dst_image_shape[1]), np.arange(dst_image_shape[0]))
src_image_warp = griddata(np.transpose(points_homogeneous), values, (yy, xx), method='linear')

# numerical rounding
src_image_warp[np.isnan(src_image_warp)] = 0
src_image_warp[src_image_warp > 255] = 255
src_image_warp = np.uint8(src_image_warp)
Note that this is done for a 1-channel image; for an RGB image this has to be done for each channel separately. In addition, this could be made to run faster by interpolating only the relevant coordinates, since the interpolation is the most time-consuming operation.
With opencv this can be done by:
import cv2
image_dst = cv2.warpPerspective(image_src, homography, size) # size is a tuple (width, height) of the destination image
Read more on homographies and the opencv implementation here.
Finding the homography
The homography can be found without using opencv, but that requires knowledge of linear algebra and the explanation is a bit lengthy; if needed I will post it as an edit. For any practical case however, the homography can be found using opencv as follows:
homography, status = cv2.findHomography(pts_src, pts_dst)
where pts_src are coordinates in the original image and pts_dst are their matching locations in the destination image. Since you already found the point pairs, this will yield you the homography (opencv optimizes the homography for minimal distortion in the backward operation, which is the correct way to perform homography computations).
You have a homography h calculated from findHomography and you can use warpPerspective to transform the template to have the same perspective as the photo.
Now you just need to invert the homography, and apply it to the photo instead of the template.
Either use np.linalg.inv for that, or pass the WARP_INVERSE_MAP flag to warpPerspective instead.
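A minimal sketch of those two equivalent options (assuming, as in this answer, that h maps the template into the photo's perspective, and that im and imReference are the photo and template from the question's script):
import cv2
import numpy as np

# Output in the template's frame, so use the template's size.
height, width = imReference.shape[:2]

# Option 1: invert the homography explicitly and warp the photo with the inverse.
h_inv = np.linalg.inv(h)
warped = cv2.warpPerspective(im, h_inv, (width, height))

# Option 2: let warpPerspective apply the inverse via the WARP_INVERSE_MAP flag.
warped = cv2.warpPerspective(im, h, (width, height),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)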

How to project a chessboard in camera coordinates to real-world coordinates in OpenCV using rvecs and tvecs outputs of cv.calibrateCamera?

Situation
Following the Camera Calibration tutorial in OpenCV I managed to get an undistorted image of a checkerboard using cv.calibrateCamera:
Original image: (named image.tif on my computer)
Code:
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((12*13, 3), np.float32)
objp[:, :2] = np.mgrid[0:12, 0:13].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d point in real world space
imgpoints = []  # 2d points in image plane.

img = cv.imread('image.tif')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Find the chess board corners
ret, corners = cv.findChessboardCorners(gray, (12, 13), None)

# If found, add object points, image points (after refining them)
if ret == True:
    objpoints.append(objp)
    corners2 = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    imgpoints.append(corners)

    # Draw and display the corners
    cv.drawChessboardCorners(img, (12, 13), corners2, ret)
    cv.imshow('img', img)
    cv.waitKey(2000)

cv.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

# Plot undistorted
h, w = img.shape[:2]
newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
dst = cv.undistort(img, mtx, dist, None, newcameramtx)

# crop the image
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
plt.figure()
plt.imshow(dst)
plt.savefig("undistorted.png", dpi=300)
plt.close()
Undistorted image:
The undistorted image indeed has straight lines. However, in order to test the calibration procedure I would like to further transform the image into real-world coordinates using the rvecs and tvecs outputs of cv.calibrateCamera. From the documentation:
rvecs: Output vector of rotation vectors (Rodrigues) estimated for each pattern view (e.g. std::vector<cv::Mat>>). That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.
tvecs: Output vector of translation vectors estimated for each pattern view, see parameter description above.
Question: How can I manage this? It would be great if the answers include a working code that outputs the transformed image.
Expected output
The image I expect should look something like this, where the red coordinates correspond to the real-world coordinates of the checkerboard (notice the checkerboard is a rectangle in this projection):
What I have tried
Following the comment of @Christoph Rackwitz, I found this post, where they explain that the homography matrix H relating the 3D real-world coordinates (of the chessboard) to the 2D image coordinates is given by:
H = K [R1 R2 t]
where K is the camera calibration matrix, R1 and R2 are the first two columns of the rotational matrix and t is the translation vector.
I tried to calculate this from:
K we already have it as the mtx from cv.calibrateCamera.
R1 and R2 from rvecs after converting it to a rotational matrix (because it is given in Rodrigues decomposition): cv.Rodrigues(rvecs[0])[0].
t should be tvecs.
In order to calculate the homography from the image coordinates to the 3D real world coordinates then I use the inverse of H.
Finally I use cv.warpPerspective to display the projected image.
Code:
R = cv.Rodrigues(rvecs[0])[0]
tvec = tvecs[0].squeeze()
H = np.dot(mtx, np.concatenate((R[:,:2], tvec[:,None]), axis = 1) )/tvec[-1]
plt.imshow(cv.warpPerspective(dst, np.linalg.inv(H), (dst.shape[1], dst.shape[0])))
But this does not work; I get the following picture:
Any ideas where the problem is?
Related questions:
How do I obtain the camera world position from calibrateCamera results?
Homography from 3D plane to plane parallel to image plane
OpenCV Camera Calibration mathematical background
Coordinate transformation in OpenCV
transform 3d camera coordinates to 3d real world coordinates with opencv
Every camera has its own intrinsic parameters connecting 2D image coordinates with the 3D real world. You need to solve a set of linear equations to find them out, or look at the camera specification parameters provided by the manufacturer.
Furthermore, if you want to warp your surface to be parallel to the image border, use homography transformations. You need the projective one. scikit-image has prepared tools for parameter estimation.
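A minimal sketch of that scikit-image route (the coordinates, output size, and the variable image are placeholders; src holds four pixel positions of the surface's corners and dst the axis-parallel rectangle they should map to):
import numpy as np
from skimage import transform

# Placeholder corner pixels in the image (x, y) and their target rectangle (x, y).
src = np.array([[127, 58], [587, 155], [464, 437], [144, 344]], dtype=np.float64)
dst = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float64)

# Estimate the projective (homography) transform from the four point pairs.
tform = transform.ProjectiveTransform()
tform.estimate(src, dst)

# skimage's warp expects the inverse map (output -> input coordinates).
# "image" is assumed to be the already-loaded, undistorted image.
warped = transform.warp(image, tform.inverse, output_shape=(300, 400))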
The Concept
Detect the corners of the chessboard using the cv2.findChessboardCorners() method. Then, define an array for the destination point for each corner point in the image. Use the triangle warping technique to warp the image from the chessboard corner points to the points in the array defined for the destination locations.
The Code
import cv2
import numpy as np

def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    for pt in points:
        subdiv.insert(tuple(map(int, pt)))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]

def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts

def warp(img1, img2, pts1, pts2):
    img2 = img2.copy()
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask
    return img2

img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
ret, corners = cv2.findChessboardCorners(gray, (12, 13), None)
corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

x, y, w, h, r, c = 15, 40, 38, 38, 12, 13
pts1 = np.int32(corners2.squeeze())
arr2 = np.tile(np.arange(c), r).reshape((r, c))
arr1 = np.tile(np.arange(r), c).reshape((c, r))
arr = np.dstack((arr1[:, ::-1] * h + y, arr2.T * w + x))
pts2 = arr.reshape((r * c, 2))

cv2.imshow("result", warp(img, np.zeros_like(img), pts1, pts2))
cv2.waitKey(0)
The Output
Here is the output image:
For the input image of:
The Explanation
Import the necessary libraries:
import cv2
import numpy as np
Define a function, triangles, that will take in an array of coordinates, points, and yield lists of 3 indices of the array for triangles that will cover the area of the original array of coordinates:
def triangles(points):
    points = np.where(points, points, 1)
    subdiv = cv2.Subdiv2D((*points.min(0), *points.max(0)))
    for pt in points:
        subdiv.insert(tuple(map(int, pt)))
    for pts in subdiv.getTriangleList().reshape(-1, 3, 2):
        yield [np.where(np.all(points == pt, 1))[0][0] for pt in pts]
Define a function, crop, that will take in an image array, img, and an array of three coordinates, pts. It will return a rectangular segment of the image just large enough to fit the triangle formed by the three points, along with the array of three coordinates shifted to be relative to the top-left corner of that crop:
def crop(img, pts):
    x, y, w, h = cv2.boundingRect(pts)
    img_cropped = img[y: y + h, x: x + w]
    pts[:, 0] -= x
    pts[:, 1] -= y
    return img_cropped, pts
Define a function, warp, that will take in 2 image arrays, img1 and img2, and 2 arrays of coordinates, pts1 and pts2. It will utilize the triangles function defined before to iterate through the triangles from the first array of coordinates, the crop function defined before to crop both images at the coordinates corresponding to the triangle indices, and the cv2.warpAffine() method to warp the image at the current triangle of the iteration:
def warp(img1, img2, pts1, pts2):
    img2 = img2.copy()
    for indices in triangles(pts1):
        img1_cropped, triangle1 = crop(img1, pts1[indices])
        img2_cropped, triangle2 = crop(img2, pts2[indices])
        transform = cv2.getAffineTransform(np.float32(triangle1), np.float32(triangle2))
        img2_warped = cv2.warpAffine(img1_cropped, transform, img2_cropped.shape[:2][::-1], None, cv2.INTER_LINEAR, cv2.BORDER_REFLECT_101)
        mask = np.zeros_like(img2_cropped)
        cv2.fillConvexPoly(mask, np.int32(triangle2), (1, 1, 1), 16, 0)
        img2_cropped *= 1 - mask
        img2_cropped += img2_warped * mask
    return img2
Read in the image of the distorted chessboard, convert it to grayscale and detect the corners of the chessboard:
img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
ret, corners = cv2.findChessboardCorners(gray, (12 ,13), None)
corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
Define an array of the destination points for each corner detected. If you plot each corner along with their corresponding index in the array, you will see that they are in this order:
So our destination array must be in that order, or we will end up with unreadable results. The x, y, w, h, r, c below will be the destination array of coordinates' top-left x, y position, each square's width & height, and the number of rows & columns of points in the board:
x, y, w, h, r, c = 15, 40, 38, 38, 12, 13
pts1 = np.int32(corners2.squeeze())
arr2 = np.tile(np.arange(c), r).reshape((r, c))
arr1 = np.tile(np.arange(r), c).reshape((c, r))
arr = np.dstack((arr1[:, ::-1] * h + y, arr2.T * w + x))
pts2 = arr.reshape((r * c, 2))
Finally, show the warped part of the image on a blank image:
cv2.imshow("result", warp(img, np.zeros_like(img), pts1, pts2))
cv2.waitKey(0)
In the end, I did not manage to achieve it with the outputs of cv.calibrateCamera, but instead I did something simpler inspired by @Ann Zen's answer. In case it may help someone, I will just post it here.
I transform both the image and some data points in the image to the new coordinates given by the chessboard reference frame using only the four corner points.
Input:
undistorted.png
Code:
import numpy as np
import cv2 as cv

image = cv.imread('undistorted.png')

# Paint some points in blue
points = np.array([[200, 300], [400, 300], [500, 200]])
for i in range(len(points)):
    cv.circle(image, tuple(points[i].astype('int64')), radius=0, color=(255, 0, 0), thickness=10)
cv.imwrite('undistorted_withPoints.png', image)

# Put pixels of the chess corners: top left, top right, bottom right, bottom left.
cornerPoints = np.array([[127, 58], [587, 155], [464, 437], [144, 344]], dtype='float32')

# Find base of the rectangle given by the chess corners
base = np.linalg.norm(cornerPoints[1] - cornerPoints[0])

# Height has 11 squares while base has 12 squares.
height = base/12*11

# Define new corner points from base and height of the rectangle
new_cornerPoints = np.array([[0, 0], [int(base), 0], [int(base), int(height)], [0, int(height)]], dtype='float32')

# Calculate matrix to transform the perspective of the image
M = cv.getPerspectiveTransform(cornerPoints, new_cornerPoints)
new_image = cv.warpPerspective(image, M, (int(base), int(height)))

# Function to get data points in the new perspective from points in the image
def calculate_newPoints(points, M):
    new_points = np.einsum('kl, ...l->...k', M, np.concatenate([points, np.broadcast_to(1, (*points.shape[:-1], 1))], axis=-1))
    return new_points[..., :2] / new_points[..., 2][..., None]

new_points = calculate_newPoints(points, M)

# Paint new data points in red
for i in range(len(new_points)):
    cv.circle(new_image, tuple(new_points[i].astype('int64')), radius=0, color=(0, 0, 255), thickness=5)
cv.imwrite('new_undistorted.png', new_image)
Outputs:
undistorted_withPoints.png
new_undistorted.png
Explanation:
I paint some data points in the original picture that I also want to transform.
With another program I look for the pixels of the corners of the chess (I skip the outer rows and columns).
I calculate the height and base in pixels of the rectangle defined by the corners.
I define from the rectangle the new corners in the chessboard coordinates.
I calculate the matrix M to do the perspective transformation.
I do the transformation for the image and for the data points following the documentation of cv.warpPerspective:
I paint the transformed data points in red.

Image Rectification from single camera using two points

I'm trying to implement image rectification. I was using software which is not available anymore. To rectify the image, the software used the height of the camera (h), the distance of two points (d1, d2) from the camera, and the corresponding lines in the image for those reference points (Line1, Line2).
So the variables are:
h (camera elevation);
Line1, Line2 (row pixel)
d1, d2 (Distance in meters from the camera)
Configuration:
I tried to implement some code using OpenCV (Python), but the final result is not the same as the software's. I wrote one script to calibrate the camera and a second to undistort the image, and then I want to apply the rectification.
The problem is that I'm using a single camera (taking photos of a landscape) that is fixed, with a fixed focal length and focus which I can't change anymore.
Can someone tell me a good way to perform the rectification using the same approach as the software, or another valid solution?
My code for the calibration is
import cv2
import numpy as np
import glob

# Numbers of corners
n_w = 9
n_h = 6
patternSize = (n_w, n_h)

# SIZE OF THE WINDOW TO IMPROVE THE COORDINATES OF CORNERS
windowSize = (11, 11)

# TERMINATION CRITERIA
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

objp = np.zeros((n_h * n_w, 3), dtype=np.float32)
objp[:, :2] = np.mgrid[0:n_w, 0:n_h].T.reshape(-1, 2)

# LISTS OF POINTS
objpoints = []
imgpoints = []

# GET ALL IMAGES
images = glob.glob('*.jpg')

for fname in images:
    img = cv2.imread(fname)
    # IMAGE ON GRAY SCALE
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # FIND CORNERS
    retval, corners = cv2.findChessboardCorners(gray_img, patternSize, None)
    if retval == True:
        print('Looping through image %s' % fname)
        objpoints.append(objp)
        cv2.cornerSubPix(gray_img, corners, windowSize, (-1, -1), criteria)
        imgpoints.append(corners)
        cv2.drawChessboardCorners(img, patternSize, corners, retval)
        cv2.imshow('ChessBoard Image %s' % fname, img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

print("------START CALIBRATION.....")
ret, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray_img.shape[::-1],
                                                                  None, None)
print(ret)
print(cameraMatrix)
print(distCoeffs)

print('---SAVING CALIBRATION DATA')
np.savez('calibration_data', RMS=ret, distCoeffs=distCoeffs, cameraMatrix=cameraMatrix)

if ret <= 1.0:
    print('''-----GOOD CALIBRATION''')
The code to remove the distortion is:
import cv2
import numpy as np
import os
import glob

# LOAD CALIBRATION DATA
calibrationData = np.load('calibration_data.npz')
distCoeffs = calibrationData['distCoeffs']
cameraMatrix = calibrationData['cameraMatrix']
calibrationData.close()

# LOAD IMAGES
images = glob.glob('/*.jpg')

for i, fname in enumerate(images):
    img = cv2.imread(fname)
    # UNDISTORT
    undistorted_img = cv2.undistort(img, cameraMatrix, distCoeffs, None)
    # SAVE IMAGE (dirname: output directory, defined elsewhere in the asker's script)
    cv2.imwrite(os.path.join(dirname, 'Undistorted_%05d.jpg' % i), undistorted_img)
    cv2.imshow('Undistorted Image %s' % fname, undistorted_img)
The first idea to rectify the image was to find the 4 corners of a trapezoid (an A4 paper) inside the real-world image and compute a transformation matrix given the 4 points of a rectangle (the real dimensions of an A4). But I think that is a wrong approach.
To do this I wrote this code:
# load image
img_Trap = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

# points on the image (corners of an A4 paper)
ptsTrap = np.array(((1556, 1050), (1556, 1050), (2189, 1677), (1425, 1723)), dtype=np.float32)

img_Rect = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE)

# corners of an A4 (keeping the aspect ratio)
ptsRect = np.array(((1980, 1381), (2189, 1381), (2189, 1677), (1980, 1677)), dtype=np.float32)

# transformation matrix
T = cv2.getPerspectiveTransform(ptsTrap, ptsRect)
print(T)

# warp perspective (dsize is (width, height), so the shape order is swapped)
warp = cv2.warpPerspective(img_Trap, T, (img_Rect.shape[1], img_Rect.shape[0]))
cv2.imwrite('warpimage.png', warp)

Camera Calibration with OpenCV - How to adjust chessboard square size?

I am working on a camera calibration program using the OpenCV/Python example (from: OpenCV Tutorials) as a guidebook.
Question: How do I tailor this example code to account for the size of a square on a particular chessboard pattern? My understanding of the camera calibration process is that this information must somehow be used; otherwise the values given by:
cv2.calibrateCamera()
will be incorrect.
Here is the portion of my code that reads in image files and runs through the calibration process to produce the camera matrix and other values.
import cv2
import numpy as np
import glob

"""
Corner Finding
"""
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Prepare object points, like (0,0,0), (1,0,0), ....,(6,5,0)
objp = np.zeros((5*5, 3), np.float32)
objp[:, :2] = np.mgrid[0:5, 0:5].T.reshape(-1, 2)

# Arrays to store object points and image points from all images
objpoints = []
imgpoints = []
counting = 0

# Import Images
images = glob.glob('dir/sub dir/Images/*')

for fname in images:
    img = cv2.imread(fname)  # Read images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert to grayscale

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (5, 5), None)

    # if found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners)

        # Draw and display corners
        cv2.drawChessboardCorners(img, (5, 5), corners, ret)
        counting += 1
        print(str(counting) + ' Viable Image(s)')

        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

# Calibrate Camera
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
Here, if your square size is, say, 30 mm, then multiply this value with objp[:,:2], like this:
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)*30 # 30 mm size of square
objp[:,:2] is a set of checkerboard corner points given as (0,0), (0,1), (0,2) .... (8,5). The (0,0) point is the upper-left-most square corner and (8,5) is the lower-right-most square corner. In this case these points have no unit, but if we multiply them by the square size (for example 30 mm), they become (0,0), (0,30), ..... (240,150), which are in real-world units. Your translation vectors will be in mm units in this case.
From here: https://docs.opencv.org/4.5.1/dc/dbb/tutorial_py_calibration.html
What about the 3D points from real world space? Those images are taken from a static camera and chess boards are placed at different locations and orientations. So we need to know (X,Y,Z) values. But for simplicity, we can say chess board was kept stationary at XY plane, (so Z=0 always) and camera was moved accordingly. This consideration helps us to find only X,Y values. Now for X,Y values, we can simply pass the points as (0,0), (1,0), (2,0), ... which denotes the location of points. In this case, the results we get will be in the scale of size of chess board square. But if we know the square size, (say 30 mm), we can pass the values as (0,0), (30,0), (60,0), ... . Thus, we get the results in mm. (In this case, we don't know square size since we didn't take those images, so we pass in terms of square size).
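For the 5x5 pattern used in the question, the change amounts to one line; a minimal sketch (square_size is a placeholder value in millimetres):
import numpy as np

square_size = 30.0  # placeholder: measured edge length of one square, in mm

objp = np.zeros((5*5, 3), np.float32)
objp[:, :2] = np.mgrid[0:5, 0:5].T.reshape(-1, 2) * square_size
# The rest of the calibration code stays unchanged; the returned tvecs
# are then expressed in millimetres instead of "squares".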
