OpenCV - Calibrate fisheye lens error (Ill-conditioned matrix) - python

I am trying to calibrate a fisheye lens following these instructions:
https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0
where you can find the full code I'm using for the calibration part.
I arrive at this point:
N_OK = len(objpoints)
K = np.zeros((3, 3))
D = np.zeros((4, 1))
rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_OK)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_OK)]
rms, _, _, _, _ = \
    cv2.fisheye.calibrate(
        objpoints,
        imgpoints,
        gray.shape[::-1],
        K,
        D,
        rvecs,
        tvecs,
        calibration_flags,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    )
print("Found " + str(N_OK) + " valid images for calibration")
print("DIM=" + str(_img_shape[::-1]))
print("K=np.array(" + str(K.tolist()) + ")")
print("D=np.array(" + str(D.tolist()) + ")")
I get this error:
Traceback (most recent call last)
<ipython-input-10-deaca9981fe4> in <module>()
13 tvecs,
14 calibration_flags,
---> 15 (cv2.TERM_CRITERIA_EPS+cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
16 )
17 print("Found " + str(N_OK) + " valid images for calibration")
error: C:\ci\opencv_1512688052760\work\modules\calib3d\src\fisheye.cpp:1414:
error: (-3) CALIB_CHECK_COND - Ill-conditioned matrix for input array 0 in
function cv::internal::CalibrateExtrinsics
I don't understand what's going on, and I could find only very little information about this error online. Has anyone experienced something similar and knows how to solve it?
Thanks
These are the images of the checkerboard I'm using:
https://i.stack.imgur.com/goHIg.jpg
https://i.stack.imgur.com/kja3O.jpg
https://i.stack.imgur.com/XbIzh.jpg
https://i.stack.imgur.com/uQ9Gr.jpg
https://i.stack.imgur.com/N8aLJ.jpg
https://i.stack.imgur.com/JMmmZ.jpg
https://i.stack.imgur.com/QJ8wn.jpg
https://i.stack.imgur.com/NmiFQ.jpg
https://i.stack.imgur.com/sQHdd.jpg
https://i.stack.imgur.com/DNBuD.jpg
https://i.stack.imgur.com/vFtOo.jpg
https://i.stack.imgur.com/lhkF5.jpg

I think it is because your variable calibration_flags has CALIB_CHECK_COND set.
Try disabling this flag. Without it I was able to undistort your images (see links below).
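For example, if your calibration_flags follows the linked tutorial, this is a sketch of the same flag set with the check dropped (adjust to whatever flags you actually use):

# calibration_flags from the tutorial, minus cv2.fisheye.CALIB_CHECK_COND:
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW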
I am not sure what this check is for (the documentation is not very explicit). This flag rejects some images¹ from my GoPro HERO3 even when the chessboard is visible and detected. In my case, one image out of 20 does not pass this test. That image has the chessboard close to the left border.
https://i.stack.imgur.com/m2WF6.jpg
https://i.stack.imgur.com/KiTRz.jpg
https://i.stack.imgur.com/MhHyN.jpg
https://i.stack.imgur.com/pSiyG.jpg
https://i.stack.imgur.com/drXSL.jpg
https://i.stack.imgur.com/DDze1.jpg
https://i.stack.imgur.com/b6l8f.jpg
https://i.stack.imgur.com/9MrAk.jpg
https://i.stack.imgur.com/WYmg5.jpg
https://i.stack.imgur.com/rmJ5Q.jpg
https://i.stack.imgur.com/K8k8Y.jpg
https://i.stack.imgur.com/MiBER.jpg
¹ In OpenCV versions >= 3.4.1 the error message tells you which image is failing the test.

I did not find a way to do this in Python, so I manually checked the images with the chessboard near the edge and deleted them one by one until the error was gone.

As @Ahmadiah mentioned, the "ill-conditioned" error can happen when the checkerboard falls near the edge of the image. One way to handle this is to remove images one by one and retry whenever they cause calibration to fail. Here is an example where we do that:
def calibrate_fisheye(all_image_points, all_true_points, image_size):
    """ Calibrate a fisheye camera from matching points.
    :param all_image_points: Sequence[Array(N, 2)[float32]] of (x, y) image coordinates of the points. (see cv2.findChessboardCorners)
    :param all_true_points: Sequence[Array(N, 3)[float32]] of (x, y, z) points. (If from a grid, just put (x, y) on a regular grid and z=0.)
        Note that each of these sets of points can be in its own reference frame.
    :param image_size: The (size_y, size_x) of the image.
    :return: (rms, mtx, dist, rvecs, tvecs) where
        rms: float - The root-mean-squared error
        mtx: array[3x3] - A 3x3 camera intrinsics matrix
        dist: array[4x1] - A (4x1) array of distortion coefficients
        rvecs: Sequence[array[N,3,1]] of estimated rotation vectors for each set of true points
        tvecs: Sequence[array[N,3,1]] of estimated translation vectors for each set of true points
    """
    assert len(all_true_points) == len(all_image_points)
    all_true_points = list(all_true_points)  # Because we'll modify it in place
    all_image_points = list(all_image_points)
    while True:
        assert len(all_true_points) > 0, "There are no valid images from which to calibrate."
        try:
            rms, mtx, dist, rvecs, tvecs = cv2.fisheye.calibrate(
                objectPoints=[p[None, :, :] for p in all_true_points],
                imagePoints=[p[:, None, :] for p in all_image_points],
                image_size=image_size,
                K=np.zeros((3, 3)),
                D=np.zeros((4, 1)),
                flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_CHECK_COND + cv2.fisheye.CALIB_FIX_SKEW,
                criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6),
            )
            print('Found a calibration based on {} well-conditioned images.'.format(len(all_true_points)))
            return rms, mtx, dist, rvecs, tvecs
        except cv2.error as err:
            try:
                # Parse index of invalid image from error message (use err.message on Python 2)
                idx = int(str(err).split('array ')[1][0])
                all_true_points.pop(idx)
                all_image_points.pop(idx)
                print("Removed ill-conditioned image {} from the data. Trying again...".format(idx))
            except IndexError:
                raise err
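For completeness, here is a minimal usage sketch for the function above (the board size, folder path, and corner-detection details are assumptions you will need to adapt to your setup):

import glob
import cv2
import numpy as np

CHECKERBOARD = (6, 9)  # inner corners; an assumption, match your board
objp = np.zeros((CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

all_true_points, all_image_points = [], []
image_size = None
for fname in glob.glob('calibration/*.jpg'):  # hypothetical folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape  # (size_y, size_x), as the docstring above expects
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if found:
        all_true_points.append(objp)                     # Array(N, 3)
        all_image_points.append(corners.reshape(-1, 2))  # Array(N, 2)

rms, mtx, dist, rvecs, tvecs = calibrate_fisheye(all_image_points, all_true_points, image_size)
print('RMS error: {}'.format(rms))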

My OpenCV version is 4.0+. I solved it by swapping the image-size argument:
rms, _, _, _, _ = \
    cv2.fisheye.calibrate(
        objpoints,
        imgpoints,
        # gray.shape[::-1],
        gray.shape,
        K,
        D,
        rvecs,
        tvecs,
        calibration_flags,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    )

Related

Image stitching problem using Python and OpenCV

I got the output below after stitching the result of 24 images with the next, 25th image. Before that, the stitching was good.
Is anyone aware of why or when stitching output comes out like this, and what might cause it?
The stitching code follows the standard stitching steps: finding keypoints and descriptors, matching points, calculating the homography, and then warping the images. But I do not understand why this output occurs.
The core part of the stitching is below:
detector = cv2.SIFT_create(400)

# find the keypoints and descriptors with SIFT
gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
ret1, mask1 = cv2.threshold(gray1, 1, 255, cv2.THRESH_BINARY)
kp1, descriptors1 = detector.detectAndCompute(gray1, mask1)

gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
ret2, mask2 = cv2.threshold(gray2, 1, 255, cv2.THRESH_BINARY)
kp2, descriptors2 = detector.detectAndCompute(gray2, mask2)

keypoints1Im = cv2.drawKeypoints(image1, kp1, None, color=(0, 0, 255), flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
keypoints2Im = cv2.drawKeypoints(image2, kp2, None, color=(0, 0, 255), flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)

# BFMatcher with default params
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(descriptors2, descriptors1, k=2)

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)

print(str(len(good)) + " Matches were Found")

if len(good) <= 10:
    return image1

matches = copy.copy(good)

matchDrawing = util.drawMatches(gray2, kp2, gray1, kp1, matches)

# Aligning the images
src_pts = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)[0]

h1, w1 = image1.shape[:2]
h2, w2 = image2.shape[:2]
pts1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
pts2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
pts2_ = cv2.perspectiveTransform(pts2, H)
pts = np.concatenate((pts1, pts2_), axis=0)
# print("pts:", pts)
[xmin, ymin] = np.int32(pts.min(axis=0).ravel() - 0.5)
[xmax, ymax] = np.int32(pts.max(axis=0).ravel() + 0.5)
t = [-xmin, -ymin]
Ht = np.array([[1, 0, t[0]], [0, 1, t[1]], [0, 0, 1]])  # translate

result = cv2.warpPerspective(image2, Ht.dot(H), (xmax - xmin, ymax - ymin))

resizedB = np.zeros((result.shape[0], result.shape[1], 3), np.uint8)
resizedB[t[1]:t[1] + h1, t[0]:w1 + t[0]] = image1

# Now create a mask of the warped region and create its inverse mask also
img2gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 0, 255, cv2.THRESH_BINARY)

kernel = np.ones((5, 5), np.uint8)
k1 = (kernel == 1).astype('uint8')
mask = cv2.erode(mask, k1, borderType=cv2.BORDER_CONSTANT)

mask_inv = cv2.bitwise_not(mask)

difference = cv2.bitwise_or(resizedB, resizedB, mask=mask_inv)
result2 = cv2.bitwise_and(result, result, mask=mask)
result = cv2.add(result2, difference)
Edit:
This image shows the match drawing while stitching image 25 to the result of the first 24 images:
And here is the match drawing before that:
I have 97 images in total to stitch. If I stitch images 24 and 25 separately, they stitch properly. Stitching is also fine if I start from the 23rd image onwards, but the problem appears when I start stitching from the 1st image. I am not able to understand the problem.
Result after stitching the 23rd image:
Result after stitching the 24th image:
The result after stitching the 25th image is as above, which went wrong.
Strange observation: if I stitch images 23, 24, and 25 separately with the same code, they stitch fine. If I stitch the images from 23 through 97, they stitch fine. But somehow, if I start stitching from the 1st image, it breaks while stitching the 25th. I do not understand why this happens.
I have tried different combinations: different keypoint detection and extraction methods, matching methods, homography calculations, and warping code, but none of those combinations worked. Something is missing or wrong in the combination of steps, and I am not able to figure it out.
Sorry for this long question. As I am completely new to this, I am not able to explain things properly. Thanks for your help and guidance.
Stitched result of images 23, 24, and 25 separately with the SAME code:
With different code (which gives black lines between stitches), if I stitch all 97 images, the 25th goes up in the stitching as shown below (right corner point):
Firstly, I was not able to recreate your problem and solve it, as the images were too big for my system to process. However, I had faced the same problem in my panorama stitching project, so I am sharing the reason behind it and my approach to solving it. Hope this helps you too.
Here's what my problem looked like when I stitched 4 images together, just like you did.
As you can see, the 4th image was getting distorted a lot, which must not happen. The same thing happened with you, but on a greater level.
Now, here's my output when I stitched 8 images after some image pre-processing.
After some pre-processing on the input images, I was able to stitch 8 images together perfectly without any distortion.
To understand the exact reason behind this kind of distortion, watch this video by Joseph Redmon from 50:26 to 1:07:23.
As suggested in the video, we first have to project the images onto a cylinder, then unroll them, and then stitch these unrolled images together.
Below is the initial input image (left) and the image after projection onto a cylinder and unrolling (right).
For your problem, as you are using satellite images, I guess projection onto a sphere would work better than a cylinder, but you will have to give it a try.
Sharing below my code for projecting an image onto a cylinder and unrolling it, for reference. The mathematics behind it is the same as given in the video.
def Convert_xy(x, y):
    global center, f

    xt = (f * np.tan((x - center[0]) / f)) + center[0]
    yt = ((y - center[1]) / np.cos((x - center[0]) / f)) + center[1]

    return xt, yt


def ProjectOntoCylinder(InitialImage):
    global w, h, center, f
    h, w = InitialImage.shape[:2]
    center = [w // 2, h // 2]
    f = 1100  # 1100 field; 1000 Sun; 1500 Rainier; 1050 Helens

    # Creating a blank transformed image
    TransformedImage = np.zeros(InitialImage.shape, dtype=np.uint8)

    # Storing all coordinates of the transformed image in 2 arrays (x and y coordinates)
    AllCoordinates_of_ti = np.array([np.array([i, j]) for i in range(w) for j in range(h)])
    ti_x = AllCoordinates_of_ti[:, 0]
    ti_y = AllCoordinates_of_ti[:, 1]

    # Finding corresponding coordinates of the transformed image in the initial image
    ii_x, ii_y = Convert_xy(ti_x, ti_y)

    # Rounding off the coordinate values to get exact pixel values (top-left corner)
    ii_tl_x = ii_x.astype(int)
    ii_tl_y = ii_y.astype(int)

    # Finding transformed image points whose corresponding
    # initial image points lie inside the initial image
    GoodIndices = (ii_tl_x >= 0) * (ii_tl_x <= (w - 2)) * \
                  (ii_tl_y >= 0) * (ii_tl_y <= (h - 2))

    # Removing all the outside points from everywhere
    ti_x = ti_x[GoodIndices]
    ti_y = ti_y[GoodIndices]

    ii_x = ii_x[GoodIndices]
    ii_y = ii_y[GoodIndices]

    ii_tl_x = ii_tl_x[GoodIndices]
    ii_tl_y = ii_tl_y[GoodIndices]

    # Bilinear interpolation
    dx = ii_x - ii_tl_x
    dy = ii_y - ii_tl_y

    weight_tl = (1.0 - dx) * (1.0 - dy)
    weight_tr = (dx) * (1.0 - dy)
    weight_bl = (1.0 - dx) * (dy)
    weight_br = (dx) * (dy)

    TransformedImage[ti_y, ti_x, :] = (weight_tl[:, None] * InitialImage[ii_tl_y, ii_tl_x, :]) + \
                                      (weight_tr[:, None] * InitialImage[ii_tl_y, ii_tl_x + 1, :]) + \
                                      (weight_bl[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x, :]) + \
                                      (weight_br[:, None] * InitialImage[ii_tl_y + 1, ii_tl_x + 1, :])

    # Getting x coordinate to remove black region from right and left in the transformed image
    min_x = min(ti_x)

    # Cropping out the black region from both sides (using symmetry)
    TransformedImage = TransformedImage[:, min_x:-min_x, :]

    return TransformedImage, ti_x - min_x, ti_y
You just have to call the function ProjectOntoCylinder and pass it an image to get the resultant image and the coordinates of white pixels in the mask image. Use the code below to call this function and get the mask image.
# Applying Cylindrical projection on Image
Image_Cyl, mask_x, mask_y = ProjectOntoCylinder(Image)
# Getting Image Mask
Image_Mask = np.zeros(Image_Cyl.shape, dtype=np.uint8)
Image_Mask[mask_y, mask_x, :] = 255
Here are links to my project and its detailed documentation for reference:
Part 1:
Source Code,
Documentation
Part 2:
Source Code,
Documentation

OpenCV Stereo Camera Calibration Error: Assertion failed

I know this question has been asked a few times, but the answers don't solve my problem.
I want to calibrate a pair of cameras to use as a stereo input.
But when I run the code, I get this error message:
OpenCV(3.4.1) Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in collectCalibrationData, file /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp, line 3133
Traceback (most recent call last):
File "/Users/MyName/Pycharm/Project/calibration.py", line 342, in <module>
TERMINATION_CRITERIA )
cv2.error: OpenCV(3.4.1) /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp:3133: error: (-215) nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in function collectCalibrationData
My code is:
def distortion_matrix(path, objpoints, imgpoints):
    for item in os.listdir(path):
        if item.endswith(".jpg"):
            cap = cv2.VideoCapture(path + item, cv2.CAP_IMAGES)
            ret, img = cap.read()  # Capture frame-by-frame
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            keypoints = blobDetector.detect(gray)  # Detect blobs.
            im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 255, 0),
                                                  cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
            im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
            ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4, 11), None,
                                               flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
            if ret == True:
                objpoints.append(objp)
                corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11, 11), (-1, -1),
                                            criteria)
                imgpoints.append(corners2)
            cap.release()

_, leftCameraMatrix, leftDistortionCoefficients, _, _, objpoints0, imgpoints0 = distortion_matrix("./calibration/left/", objpoints0, imgpoints0)
_, rightCameraMatrix, rightDistortionCoefficients, _, _, objpoints1, imgpoints1 = distortion_matrix("./calibration/right/", objpoints1, imgpoints1)

(_, _, _, _, _, rotationMatrix, translationVector, _, _) = cv2.stereoCalibrate(objp, imgpoints0, imgpoints1,
                                                                               leftCameraMatrix, leftDistortionCoefficients,
                                                                               rightCameraMatrix, rightDistortionCoefficients,
                                                                               imageSize, None, None, None, None,
                                                                               cv2.CALIB_FIX_INTRINSIC, TERMINATION_CRITERIA)
Most times when this error gets thrown, the message seems to refer to arrays (imgpoints and objpoints) that are empty or not evenly filled.
But in the end both have length 20 (I scan 20 images, so this seems right), and every cell of the array holds 44 arrays (the circle grid I use has 44 points, so this also seems right).
Edit:
My objp, imgpoints, and objpoints are defined like this:
objp = np.zeros((np.prod(pattern_size), 3), np.float32)
objp[0] = (0, 0, 0)
objp[1] = (0, 2, 0)
objp[2] = (0, 4, 0)
objp[3] = (0, 6, 0)
...
objpoints0 = []
objpoints1 = []
imgpoints0 = []
imgpoints1 = []
Edit 2:
If NUM_IMAGES stands for the number of images, I think I've got it now, but only when I add the new axis after I call distortion_matrix().
Then the code is able to complete. I need to test the results, but at least this problem seems to be solved.
Thank you very much.
You said you are doing stereo calibration. Is there any case where some of the points on your grid are not visible from the other camera? This error may appear when one of your views is unable to detect all points on the calibration pattern. Three points to consider:
1- Make sure your object points are 3D.
2- Make sure your left points, right points, and object points have the same size (number of views).
3- Make sure your left points, right points, and object points have the same number of points at each index of the list.
Edit: Your object points objp must contain a list/vector of 3D points; currently its shape is something like (44, 3), but it must be (NUM_IMAGES, 44, 3). You can achieve this with objp = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0).
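For illustration, a minimal sketch of that reshaping (the 20 views and the 44-point asymmetric grid are just the numbers from this question):

import numpy as np

NUM_IMAGES = 20          # number of calibration views
pattern_size = (4, 11)   # 44-point asymmetric circle grid
objp = np.zeros((np.prod(pattern_size), 3), np.float32)  # one view's 3D points, filled as in the question
# Replicate the same pattern once per view so stereoCalibrate sees one entry per image:
objpoints = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0)
print(objpoints.shape)   # (20, 44, 3)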

How to use camera parameters to calculate world distance in cm between pixels?

I calibrated my camera using, separately, ROS, OpenCV, and Matlab. People say that I need extrinsic parameters to calculate the real distance in cm between pixels in the image. ROS does not provide the extrinsic parameters explicitly; it provides a 3x4 projection matrix, which is the product of the intrinsic and extrinsic parameters.
That is why I calibrated my camera again with OpenCV and Matlab to get the extrinsic parameters. Although I searched for how to calculate the real distance in cm between pixels (i.e., from (x1, y1) to (x2, y2)), I could not figure out how to calculate it, nor which parameters to use. I want to use OpenCV to calculate the distance between pixels and write the output to a txt file so that I can use it to move my robot.
For example, here is a sample array of pixel outputs for the path:
array([[ 4.484375 , 799.515625 ],
[ 44.484375 , 487. ],
[255.296875 , 476.68261719],
[267.99707031, 453.578125 ],
[272.484375 , 306. ],
[403.484375 , 300.515625 ],
[539.484375 , 296.515625 ],
[589.421875 , 270.00292969],
[801.109375 , 275.18554688],
[819. , 467.515625 ]])
I want to find the real distance in cm between these pixels, in order.
The OpenCV code that calculates the parameters:
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

cbrow = 6
cbcol = 9

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((cbrow * cbcol, 3), np.float32)
objp[:, :2] = np.mgrid[0:cbcol, 0:cbrow].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d point in real world space
imgpoints = []  # 2d points in image plane.

images = glob.glob('C:\Users\Ender\Desktop\CalibrationPhotos\*.jpg')

for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (cbcol, cbrow), None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        print "%s: success" % fname
        objpoints.append(objp)
        cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners)

        # Draw and display the corners
        cv2.drawChessboardCorners(img, (cbcol, cbrow), corners, ret)
        cv2.imshow('img', img)
        cv2.waitKey(150)
    else:
        print "%s: failed" % fname
        cv2.imshow('img', img)
        cv2.waitKey(1)

cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

print "mtx"
print mtx
print "dist"
print dist
#print "rvecs"
#print rvecs
#print "tvecs"
#print tvecs

np.savez("CalibData.npz", mtx=mtx, dist=dist, rvecs=rvecs, tvecs=tvecs)

# UNDISTORTION
img = cv2.imread('C:\Users\Ender\Desktop\Maze\Maze Images\Partial Maze Images-Raw\Raw7.jpg')
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

dst = cv2.undistort(img, mtx, dist, None, newcameramtx)

# crop the image
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('calibresult.jpg', dst)

# Re-projection errors
total_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    total_error += error

print "total error: ", total_error / len(objpoints)
What is the way of doing this?

Image Rectification from single camera using two points

I'm trying to implement image rectification. I was using a piece of software which is not available anymore. To rectify the image, the software used the height of the camera (h), the distances of two points from the camera (d1, d2), and the corresponding rows in the image for those reference points (Line1, Line2).
So the variables are:
h (camera elevation);
Line1, Line2 (row pixel)
d1, d2 (Distance in meters from the camera)
Configuration:
I tried to implement some code using OpenCV (Python), but the final result is not the same as the software's. I wrote one script to calibrate the camera and a second to undistort the image, and then I want to apply the rectification.
The problem is that I'm using a single camera (taking photos of a landscape) that is fixed, with a fixed focal length and focus which I can't change anymore.
Can someone tell me a good way to perform the rectification the way the software did, or another valid solution?
My code for the calibration is:
# Number of corners
n_w = 9
n_h = 6
patternSize = (n_w, n_h)

# SIZE OF THE WINDOW TO IMPROVE THE COORDINATES OF CORNERS
windowSize = (11, 11)

# TERMINATION CRITERIA
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

objp = np.zeros((n_h * n_w, 3), dtype=np.float32)
objp[:, :2] = np.mgrid[0:n_w, 0:n_h].T.reshape(-1, 2)

# LISTS OF POINTS
objpoints = []
imgpoints = []

# GET ALL IMAGES
images = glob.glob('*.jpg')

for fname in images:
    img = cv2.imread(fname)

    # IMAGE ON GRAY SCALE
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # FIND CORNERS
    retval, corners = cv2.findChessboardCorners(gray_img, patternSize, None)
    if retval == True:
        print 'Looping through image %s' % fname
        objpoints.append(objp)
        cv2.cornerSubPix(gray_img, corners, windowSize, (-1, -1), criteria)
        imgpoints.append(corners)
        cv2.drawChessboardCorners(img, patternSize, corners, retval)
        cv2.imshow('ChessBoard Image %s' % fname, img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

print "------START CALIBRATION....."
ret, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray_img.shape[::-1],
                                                                  None, None)
print ret
print cameraMatrix
print distCoeffs

print '---SAVING CALIBRATION DATA'
np.savez('calibration_data', RMS=ret, distCoeffs=distCoeffs, cameraMatrix=cameraMatrix)

if ret <= 1.0:
    print '''-----GOOD CALIBRATION'''
The code to remove the distortion is:
# LOAD CALIBRATION DATA
calibrationData = np.load('calibration_data.npz')
distCoeffs = calibrationData['distCoeffs']
cameraMatrix = calibrationData['cameraMatrix']
calibrationData.close()

# LOAD IMAGES
images = glob.glob('/*.jpg')

for i, fname in enumerate(images):
    img = cv2.imread(fname)

    # UNDISTORT
    undistorted_img = cv2.undistort(img, cameraMatrix, distCoeffs, None)

    # SAVE IMAGE
    cv2.imwrite(os.path.join(dirname, 'Undistorted_%05d.jpg' % i), undistorted_img)
    cv2.imshow('Undistorted Image %s' % fname, undistorted_img)
My first idea for rectifying the image was to find the 4 corners of a trapezoid (an A4 sheet of paper) in the real-world image and compute a transformation matrix given the 4 points of a rectangle (the real dimensions of an A4). But I think that is the wrong approach.
To do this I wrote this code:
# load image
img_Trap = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

# points on the image (corners of an A4 paper)
ptsTrap = np.array(((1556, 1050), (1556, 1050), (2189, 1677), (1425, 1723)), dtype=np.float32)

img_Rect = cv2.imread('image2.png', cv2.IMREAD_GRAYSCALE)

# corners of an A4 (preserving the aspect ratio)
ptsRect = np.array(((1980, 1381), (2189, 1381), (2189, 1677), (1980, 1677)), dtype=np.float32)

# transformation matrix
T = cv2.getPerspectiveTransform(ptsTrap, ptsRect)
print T

# warp perspective (dsize is (width, height), so the numpy shape must be reversed)
warp = cv2.warpPerspective(img_Trap, T, img_Rect.shape[:2][::-1])
cv2.imwrite('warpimage.png', warp)

My stereo rectified (OpenCV) images are wrong. PYTHON

I am using OpenCV to compute a disparity map of a scene. I have already calibrated the stereo camera by finding the intrinsic parameters individually with cv2.calibrateCamera and then using cv2.stereoCalibrate to find the rotation matrix and the translation vector.
I am copying my calibration code, but I think my problem is not there:
import numpy as np
import cv2

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.000001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpointsL = []
imgpointsL = []
objpointsR = []
imgpointsR = []

imgR = cv2.imread('right2.jpg', 0)

# Find the chess board corners
ret, cornersR = cv2.findChessboardCorners(imgR, (7, 6), None)

# If found, add object points, image points (after refining them)
if ret == True:
    objpointsR.append(objp)
    cv2.cornerSubPix(imgR, cornersR, (11, 11), (-1, -1), criteria)
    imgpointsR.append(cornersR)

imgL = cv2.imread('left3.jpg', 0)
#grayL = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# Find the chess board corners
ret, cornersL = cv2.findChessboardCorners(imgL, (7, 6), None)

# If found, add object points, image points (after refining them)
if ret == True:
    objpointsL.append(objp)
    cv2.cornerSubPix(imgL, cornersL, (11, 11), (-1, -1), criteria)
    imgpointsL.append(cornersL)

# Intrinsic parameters
distCoeffsR = np.array([1.191755160158372399e-02, -8.585146447098485067e-03, 8.429413399383550720e-04, -6.222973871460263460e-05, -7.966310474599684957e-03])
distCoeffsL = np.array([-1.627558337813042599e-02, 2.409982163230293128e-01, 4.443126374210568282e-03, 1.288079049351137243e-03, -3.177831292965794807e-01])
cameraMatrixR = np.matrix('3.252248978261580987e+02 0 3.269955537627058106e+02; 0 3.228400384496266042e+02 2.341068611530280350e+02; 0 0 1')
cameraMatrixL = np.matrix('4.570360097428241488e+02 0 3.465188967298854550e+02; 0 4.573286269805292363e+02 2.691439570063795372e+02; 0 0 1')

retval, cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR, (640, 480))
Now I call cv2.stereoRectify:
lFrame = cv2.imread('izquierda.jpg')
rFrame = cv2.imread('derecha.jpg')
w, h = lFrame.shape[:2] # both frames should be of same shape

# Perform stereo rectification
R1, R2, P1, P2, Q, validPixROI1, validPixROI2 = cv2.stereoRectify(cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR, (w, h), R, T, cv2.CALIB_ZERO_DISPARITY, 0, (0, 0))

# compute the undistort and rectify maps
mapxL, mapyL = cv2.initUndistortRectifyMap(cameraMatrixL, distCoeffsL, R1, P1, (w, h), cv2.CV_32FC1)
mapxR, mapyR = cv2.initUndistortRectifyMap(cameraMatrixR, distCoeffsR, R2, P2, (w, h), cv2.CV_32FC1)

dstL = cv2.remap(lFrame, mapxL, mapyL, cv2.INTER_LINEAR)
dstR = cv2.remap(rFrame, mapxR, mapyR, cv2.INTER_LINEAR)

while (True):
    cv2.imshow('Left normal', lFrame)
    cv2.imshow('Right normal', rFrame)
    cv2.imshow('Left rectify', dstL)
    cv2.imshow('Right rectify', dstR)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
And these are the rectified images:
Left rectify
Right rectify
Can anyone help me, please? I am getting stuck on this...
Try switching height and width, i.e., change this line of your code:
w, h = lFrame.shape[:2] # both frames should be of same shape
to this:
h, w = lFrame.shape[:2] # both frames should be of same shape
I ran into the same problem and this helped me. I think it happens because OpenCV's methods expect the second element of a numpy array's shape to be the width and the first to be the height.
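A quick illustration of the shape ordering (plain NumPy, nothing OpenCV-specific):

import numpy as np

img = np.zeros((480, 640, 3), np.uint8)  # a 640x480 image: shape is (rows, cols, channels)
h, w = img.shape[:2]  # h = 480 (height), w = 640 (width)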
Hope it helps.
I was also facing a similar problem, so I compared all my matrices (obtained from Python) with the matrices obtained from Matlab's stereo calibration app. I found out that it was happening due to incorrect calibration: I had obtained wrong camera distortion matrices, which was causing this error. You can read my full answer here: Python 2.7/OpenCV 3.3: Error in cv2.initUndistortRectifyMap. Not showing undistort rectified images
