Fisheye calibration OpenCV Python

I am using a fisheye camera and I would like to calibrate it and correct its barrel distortion using OpenCV or any other library in Python. I have tried different methods using the classic chessboard pattern across several images.
I have been following this approach:
# Prepare object points, like (0,0,0), (1,0,0), (2,0,0), ..., (7,5,0)
object_p = np.zeros((self.__ny*self.__nx, 3), np.float32)
object_p[:, :2] = np.mgrid[0:self.__nx, 0:self.__ny].T.reshape(-1, 2)  # x, y coordinates
object_points = []  # 3D points in real-world space
image_points = []   # 2D points in the image plane
for image_filename in calibration_filenames:
    # Read in each image
    image = cv2.imread(image_filename)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # RGB is standard in matplotlib
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (self.__nx, self.__ny), None)
    # If found, store and draw the corners
    if ret == True:
        object_points.append(object_p)  # how the board should look
        image_points.append(corners)    # how it actually looks
        # Draw and display the corners
        cv2.drawChessboardCorners(image, (self.__nx, self.__ny), corners, ret)
        plt.figure()
        plt.imshow(image)
        plt.show()

# Do the calibration
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(object_points, image_points, gray.shape[::-1], None, None)
And then this to correct my image:
cv2.undistort(image, mtx, dist, None, mtx)
The main problem is that, though the correction performs well on some images, it is very sensitive to the camera location: a centimetre-level change in camera position produces a very different correction (see below).
Any idea how I could control this zoom effect so that the grid area is always kept in my corrected image?
I am aware that this question may be similar to the one here.

Your calibration is poorly estimated: as you can see, straight lines in the scene grid are not transformed to straight lines in the (allegedly) undistorted image.
Once you have a better calibration, you can compute (using the inverse mapping) the location of the 4 sides of the undistorted image in the original ones. These define your usable image boundaries, and can guide the placement of the camera with respect to the scene.
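Since the lens is a fisheye, the pinhole model behind cv2.calibrateCamera often fits poorly, which would explain the unstable results. Below is a minimal sketch using OpenCV's cv2.fisheye module instead, reusing the object_points/image_points collected above (the reshape and the flag choices are assumptions, untested on the asker's data); the balance parameter is a direct control over the zoom/crop trade-off in the undistorted image:
# reshape object points the way the fisheye API expects: (1, N, 3) per view
obj_points_f = [p.reshape(1, -1, 3) for p in object_points]
K = np.zeros((3, 3))
D = np.zeros((4, 1))
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    obj_points_f, image_points, gray.shape[::-1], K, D, flags=flags)
# balance=0.0 crops to only valid pixels, balance=1.0 keeps the full original view
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, gray.shape[::-1], np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, gray.shape[::-1], cv2.CV_16SC2)
undistorted = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)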

Related

OpenCV: Projecting 3D points from Homography Matrix

I am trying to project points on an image after estimating its pose as shown in this OpenCV tutorial, but for any textured "base image" instead of a chessboard image.
To match the base image with a frame from my webcam, I'm using ORB and FLANN and it is working perfectly. The 2D border around the image is correctly rendered with cv2.findHomography and cv2.perspectiveTransform(border_points, homography_matrix) to map the points and cv2.polylines to draw them.
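For reference, here is a hedged reconstruction of that working border step (base_image and frame are placeholder names for the matched template and the webcam frame, not the asker's actual variables):
import cv2
import numpy as np

# corners of the base image, in the order expected by perspectiveTransform
h, w = base_image.shape[:2]
border_points = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
# homography from the ORB/FLANN matches, robust to outliers via RANSAC
homography_matrix, mask = cv2.findHomography(target_pts, webcam_pts, cv2.RANSAC, 5.0)
# map the border into webcam-frame coordinates and draw it
projected = cv2.perspectiveTransform(border_points, homography_matrix)
cv2.polylines(frame, [np.int32(projected)], True, (0, 255, 0), 3, cv2.LINE_AA)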
Now I should be able to draw 3D axes on the image's surface using the homography matrix, but I'm stuck: I don't know how to do it, and I haven't found how to do it anywhere. What am I missing here?
# target_pts and webcam_pts are arrays of 2D points generated by ORB
homography_matrix, _ = cv2.findHomography(target_pts, webcam_pts, cv2.RANSAC, 5.0)
_, rvecs, tvecs, _ = cv2.decomposeHomographyMat(homography_matrix, calibration_matrix)
axis = np.float32([[3,0,0], [0,3,0], [0,0,-3]]).reshape(-1,3)
# TODO: cv2.Rodrigues() for rvecs ?
axis_points, _ = cv2.projectPoints(axis, rvecs[0], tvecs[0], calibration_matrix, calibration_coefficients)
# Draw axis_points on image
# ...
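One direction worth noting, loosely following the linked pose-estimation tutorial: instead of decomposing the homography, estimate the pose directly with cv2.solvePnP from the four planar corners of the base image, then project the axis points. An untested sketch, reusing border_points and homography_matrix from the sketch above:
# the base image's corners as 3D points on the Z = 0 plane (same order as border_points)
object_corners = np.float32([[0, 0, 0], [0, h - 1, 0], [w - 1, h - 1, 0], [w - 1, 0, 0]])
# their 2D locations in the webcam frame, via the homography
image_corners = cv2.perspectiveTransform(border_points, homography_matrix)
ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners,
                              calibration_matrix, calibration_coefficients)
# origin plus three axis endpoints, scaled to roughly a third of the image width
axis = np.float32([[0, 0, 0], [w*0.3, 0, 0], [0, w*0.3, 0], [0, 0, -w*0.3]])
axis_points, _ = cv2.projectPoints(axis, rvec, tvec,
                                   calibration_matrix, calibration_coefficients)
# axis_points[0] is the projected origin; draw a line from it to each of the others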

Finding bright spots in a image using opencv

I want to find the bright spots in the above image and tag them with some symbol. I have tried the Hough circle transform that OpenCV provides, but it throws an assertion error when I run the code. I also tried the Canny edge detector, also from OpenCV, and it gives an assertion error as well. I would like to know if there is some method to get this done, or how to avoid those errors.
I am new to OpenCV and any help would be really appreciated.
P.S. - I can also use scikit-image if necessary, so if this can be done with scikit-image, please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np
image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# threshold at the mean intensity (note: np.where with 1.0/0.0 yields a float64
# image, which may be what triggers the Laplacian assertion error)
binary_image = np.where(gray_image > np.mean(gray_image), 1.0, 0.0)
binary_image = cv2.Laplacian(binary_image, cv2.CV_8UC1)
If you are only going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding and then find connected components. The example code below draws a filled circle at the center of every bright spot in the image.
import cv2
import numpy as np

image = cv2.imread("image1.png")

# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4

# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black / non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles at the center of each component
# (see connectedComponentsWithStats for the layout of the components tuple)
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)
cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is the resulting image:
Edit: connectedComponentsWithStats takes a binary image as input and returns the groups of connected pixels in that image. If you wanted to implement that function yourself, a naive way would be:
1- Scan the image pixels from top-left to bottom-right until you encounter a non-zero pixel that does not yet have a label (id).
2- When you encounter such a pixel, search all of its neighbours recursively (with 4-connectivity you check UP, LEFT, DOWN, RIGHT; with 8-connectivity you also check the diagonals) until you finish that region, assigning each pixel the same label. Then increase your label counter.
3- Continue scanning from where you left off. A sketch of this procedure is shown below.
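A minimal sketch of that scan, using an explicit queue instead of recursion to avoid Python's recursion limit on large regions:
import numpy as np
from collections import deque

def label_components(binary, connectivity=4):
    # naive connected-component labeling over a 2D binary array
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity also checks the diagonals
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1            # start a new region
                labels[y, x] = next_label
                queue = deque([(y, x)])
                while queue:               # flood-fill the whole region
                    cy, cx = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
    return next_label, labels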

Python cv2 edge and contour detection

I am trying to detect bubbles on an OMR sheet which looks something like this:
My code for edge detection and contour display is referenced from here. However, before finding the actual contours, I am trying to detect the edges, but somehow I am not able to set the correct parameter values.
This is what I get:
Code:
from imutils.perspective import four_point_transform
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

def auto_canny(image, sigma=0.50):
    # compute the median of the single-channel pixel intensities
    v = np.median(image)
    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)
    # return the edged image
    return edged

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
args = vars(ap.parse_args())

image = cv2.imread(args["image"])
r = 500.0 / image.shape[1]
dim = (500, int(image.shape[0] * r))
# perform the actual resizing of the image and show it
image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
equalized_img = cv2.equalizeHist(gray)
cv2.imshow('Equalized', equalized_img)
# cv2.waitKey(0)
blurred = cv2.GaussianBlur(equalized_img, (7, 7), 0)
# edged = cv2.Canny(equalized_img, 30, 160)
edged = auto_canny(blurred)
cv2.imshow('edged', edged)
cv2.waitKey(0)
How can I get all the 90*4 circles?
You should use the Hough transform to search for circles. This method projects every white pixel as a circle and tries to get as many overlapping pixels as possible. You'll have to specify the expected radii of the circles to be found in the image.
Left - original image
Top-right - each white pixel is projected as a red circle - they are too small to find an intersecting point
Bottom-right - the green circle is larger, and all the intersecting points meet exactly at the middle of the circle! Both radius and position are returned by cvHoughCircles
This person dealt with blob detection (that's what finding circles is called, I think) using cvHoughCircles on a cvCanny-ized image (read the OP's update):
OpenCV: Error in cvHoughCircles usage
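A minimal sketch with cv2.HoughCircles; the file name is a placeholder and the radius/accumulator parameters are guesses that must be tuned to the actual bubble size:
import cv2
import numpy as np

image = cv2.imread("omr_sheet.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# dp: accumulator resolution, minDist: minimum spacing between bubble centers
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                           param1=120, param2=20, minRadius=5, maxRadius=12)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(image, (x, y), r, (0, 255, 0), 2)
cv2.imshow("circles", image)
cv2.waitKey(0)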
You need to improve your contour detection.
Possibly not by changing the detection itself, but by better pre-processing in the earlier stages.
Contour detection works better with more contrast and color separation in the image. If you haven't yet, threshold your image with techniques like simple thresholding, adaptive thresholding, or smarter techniques like Otsu's. Check the OpenCV documentation here.
Besides that, your case may eventually need more advanced techniques like "Adaptive Thresholding Using the Integral Image", described here.
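A small sketch of that pre-processing, assuming the grayscale OMR sheet from the question (the file name is a placeholder):
import cv2

gray = cv2.imread("omr_sheet.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# Otsu's method picks the global threshold automatically from the histogram
_, otsu = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# adaptive thresholding handles uneven illumination across the sheet
adaptive = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 11, 2)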

Camera Calibration with OpenCV - How to adjust chessboard square size?

I am working on a camera calibration program using the OpenCV/Python example (from: OpenCV Tutorials) as a guidebook.
Question: How do I tailor this example code to account for the size of a square on a particular chessboard pattern? My understanding of the camera calibration process is that this information must somehow be used; otherwise the values given by:
cv2.calibrateCamera()
will be incorrect.
Here is the portion of my code that reads in image files and runs through the calibration process to produce the camera matrix and other values.
import cv2
import numpy as np
import glob

"""
Corner Finding
"""
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Prepare object points, like (0,0,0), (1,0,0), ..., (4,4,0)
objp = np.zeros((5*5, 3), np.float32)
objp[:, :2] = np.mgrid[0:5, 0:5].T.reshape(-1, 2)

# Arrays to store object points and image points from all images
objpoints = []
imgpoints = []

counting = 0

# Import Images
images = glob.glob('dir/sub dir/Images/*')

for fname in images:
    img = cv2.imread(fname)  # Read images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (5,5), None)
    # If found, add object points and image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)
        # Draw and display the corners
        cv2.drawChessboardCorners(img, (5,5), corners, ret)
        counting += 1
        print(str(counting) + ' Viable Image(s)')
        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

# Calibrate Camera
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
Here, if your square size is, say, 30 mm, multiply objp[:,:2] by that value, like this:
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)*30 # 30 mm square size
objp[:,:2] is the set of checkerboard corner points given as (0,0), (0,1), (0,2) ... (8,5), where (0,0) is the upper-left-most square corner and (8,5) is the lower-right-most. In this case these points have no unit, but if we multiply them by the square size (for example 30 mm), they become (0,0), (0,30), ..., (240,150), which are real-world units. Your translation vectors will then be in mm.
From here: https://docs.opencv.org/4.5.1/dc/dbb/tutorial_py_calibration.html
What about the 3D points from real world space? Those images are taken from a static camera and chess boards are placed at different locations and orientations. So we need to know (X,Y,Z) values. But for simplicity, we can say chess board was kept stationary at XY plane, (so Z=0 always) and camera was moved accordingly. This consideration helps us to find only X,Y values. Now for X,Y values, we can simply pass the points as (0,0), (1,0), (2,0), ... which denotes the location of points. In this case, the results we get will be in the scale of size of chess board square. But if we know the square size, (say 30 mm), we can pass the values as (0,0), (30,0), (60,0), ... . Thus, we get the results in mm. (In this case, we don't know square size since we didn't take those images, so we pass in terms of square size).
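To make the scaling concrete, here is a small illustration, assuming a 30 mm square as above:
import numpy as np

square_size = 30.0  # mm
objp = np.zeros((9*6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square_size
# objp now spans 0-240 mm in one direction and 0-150 mm in the other, so the
# tvecs returned by cv2.calibrateCamera are also expressed in millimetres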

OpenCV findContours only finds the border of the image

I'm working on a project to track a laser and a photodiode with a camera attached to a raspberry pi. The pi will send instructions to an arduino, which will reorient the laser until I get a response from the photodiode. Right now, I'm working on the camera aspect of the process.
I'm trying to find the contours of my image so that I can match them with the general contours of the objects I'll be using, but my findContours() only gives me the border of my image.
I wish I could post the images, but I don't have enough rep. The Canny edge output is black and white: white lines on a black background. The image with the contours drawn on it is the captured image, but with a drawn border and no other contours.
Here's my code:
def DED(grayImg):  # Edge Detection, returns image array
    minInt, maxInt, minLoc, maxLoc = cv2.minMaxLoc(grayImg)  # Grayscale: min/max intensity and locations
    beam = cv2.mean(grayImg)  # Find the mean intensity in the image
    mean = float(beam[0])
    CannyOfTuna = cv2.Canny(grayImg, (mean + minInt)/2, (mean + maxInt)/2)  # Finds edges using thresholding and the Canny edge process
    return CannyOfTuna

def con2z(Gray, ogImage):  # Find contours from the Canny edge image, draw onto the original
    lines, pyramids = cv2.findContours(Gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    gimmeGimme = cv2.drawContours(ogImage, lines, -1, (128,255,0), 3)  # draw contours on
    # The -1 signifies ALL contours will be drawn.
    return lines

with picamera.PiCamera() as camera:
    camera.resolution = (640,480)
    out = camera.capture('output.jpg')  # Camera start
output = cv2.imread('output.jpg')
grayput = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)  # Grayscale
cv2.imwrite('gray.jpg', grayput)
cans = DED(grayput)  # Canny Edge
cv2.imwrite('Canny.jpg', cans)
lines = con2z(grayput, output)  # Contours please
print(lines)
cv2.imwrite('contours.jpg', output)
EDIT: Here are the two photos
http://imgur.com/EVeMVdm,QLoYa2o#0
http://imgur.com/EVeMVdm,QLoYa2o#1
findContours returns the tuple (image, contours, hierarchy).
So in your case, try this as the left-hand side of your findContours call: _, lines, pyramids = cv2.findContours
EDIT:
Sorry, that was not the solution; the one below worked for me.
Replace grayput with cans in the con2z function call. findContours expects a binary image, which grayput is not.
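Applied to the question's code, the fix would look like this:
cans = DED(grayput)          # Canny edge image - a binary image
lines = con2z(cans, output)  # find contours on the edges, not the grayscale capture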
