How to undistort a cropped fisheye image using OpenCV - python

I am trying to undistort a fisheye image using OpenCV. I have already calibrated the camera and I get a decent undistorted image with the following code (I only posted the relevant part):
img = cv2.imread('img.jpeg')  # path of the picture to undistort
h, w = img.shape[:2]
# K and D come from the earlier fisheye calibration
Knew = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (w, h), None)
Knew[(0, 1), (0, 1)] = 0.3 * Knew[(0, 1), (0, 1)]  # scale the focal lengths to widen the view
mapx, mapy = cv2.fisheye.initUndistortRectifyMap(K, D, None, Knew, (w, h), cv2.CV_32FC1)
img_undistorted = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
cv2.imshow('undistorted', img_undistorted)
cv2.waitKey(0)
cv2.destroyAllWindows()
Now I have to crop the fisheye image like this:
These photos are only for demonstration purposes.
I used the mapping function of a fisheye lens to determine the point where I crop on the right side. However, if I use the same code as above (and the same camera matrix), the output is a really weird, distorted picture.
I have also thought about undistorting first and then cropping, but I am not able to calculate exactly where to crop in that case, so I have to crop the image first.
So how do I correctly undistort a fisheye image that has been cropped asymmetrically?

When you shift and crop the image, the fisheye camera calibration parameters no longer fit the new image.
Maybe you could try to make the image symmetrical again by padding it with zero values, as in the sketch below.
The point is to keep the center point (the principal point) at the same position in both images.
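A minimal sketch of that padding idea, assuming the crop offsets x0, y0 and the original image size orig_w, orig_h are known (these names are illustrative, not from the question):
import cv2

# K, D: calibration of the ORIGINAL, uncropped camera
# x0, y0: top-left corner of the crop in original image coordinates
cropped = cv2.imread('cropped.jpeg')
ch, cw = cropped.shape[:2]

# Pad the crop back to the original frame so the principal point in K
# is valid again; the padded pixels are zero (black).
padded = cv2.copyMakeBorder(cropped,
                            y0, orig_h - y0 - ch,   # top, bottom
                            x0, orig_w - x0 - cw,   # left, right
                            cv2.BORDER_CONSTANT, value=0)

# Then undistort exactly as with the full image.
Knew = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (orig_w, orig_h), None)
mapx, mapy = cv2.fisheye.initUndistortRectifyMap(K, D, None, Knew, (orig_w, orig_h), cv2.CV_32FC1)
undistorted = cv2.remap(padded, mapx, mapy, cv2.INTER_LINEAR)
Equivalently, you can keep the cropped image and instead subtract the crop offset from the principal point entries K[0, 2] and K[1, 2] before building the maps.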
Reference:
http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#fisheye-estimatenewcameramatrixforundistortrectify
http://answers.opencv.org/question/64614/fisheyeundistortimage-doesnt-work-what-wrong-with-my-code/

Related

Approach to apply canny edge detector on a video instead of a still image

As part of my current project, I need to perform edge detection on a given video using the Canny edge detector. I know how to do this on a still image, but I am interested to know whether there is any approach other than simply applying Canny to all the frames one after the other.
I am adding the sample code for the Canny edge detector on a still image.
I tried the same approach on a video by reading the frames one by one, and I got the output as well.
I am actually interested to know whether this frame-by-frame approach is the only one to follow for a video, or whether there is another approach.
# Reading a still image (image_path, kernel_size, sigmaX, sigmaY and the
# thresholds are defined elsewhere in my script)
image = cv2.imread(image_path)
# Converting to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Applying a Gaussian low-pass filter; sigmaY must be passed by keyword,
# otherwise it is interpreted as the dst argument
gauss_image = cv2.GaussianBlur(gray_image, kernel_size, sigmaX, sigmaY=sigmaY)
# Applying the Canny edge detector
edges = cv2.Canny(gauss_image, lower_threshold, upper_threshold)
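For reference, a minimal sketch of that frame-by-frame approach on a video (the file name and parameter values are placeholders); as far as the standard OpenCV API goes, cv2.Canny operates on single images, so a per-frame loop like this is the usual pattern:
import cv2

cap = cv2.VideoCapture('video.mp4')
while True:
    ret, frame = cap.read()
    if not ret:                      # end of stream
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    cv2.imshow('edges', edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()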

Crop the object from an infrared picture. RStudio/Python

I am trying to crop only the liver from this picture. I have tried everything but cannot find any useful information about it. I am using Python/RStudio. After removing the unnecessary parts, I also want to get the pixels/intensity of the new image. Any help would be appreciated. Please check one of the images; this is roughly what I want to crop.
UPDATE:
I am trying to crop the main image based on edges I got from the canny edge detector. Is there any way to crop the main image based on edges? Please check the images.
Liver Image
Canny Edge Detection
Well, if your images are static, of the same size, and taken from the same angle, then the simple script below should suffice:
import cv2
import numpy as np

img = cv2.imread("cMyp9.png")          # the image with the white mask
mask = np.where(img == 255)            # indices of the white (mask) pixels
img2 = cv2.imread("your_next_image.png")
img2[mask] = 255                       # apply the mask to the new image
Now your img2 is a cropped (masked) version.
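For the update (cropping the main image based on the Canny edges), one possible approach, not from the answer above, is to take the largest external contour of the edge map and crop its bounding box; a minimal sketch (the file name and thresholds are placeholders):
import cv2

img = cv2.imread('liver.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Largest external contour of the edge map, assumed to be the liver outline
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)
cropped = img[y:y + h, x:x + w]        # axis-aligned crop around the edges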

Detect rectanglular signature fields in document scans using OpenCV

I am trying to extract big rectangular boxes from document images with signatures in them. Since I don't have training data (for deep learning), I want to cut out the rectangular boxes (3 in all images) from these images using OpenCV.
Here is what I tried:
import numpy as np
import cv2

img = cv2.imread('S-0330-444-20012800.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [cnt], 0, (26, 60, 232), -1)
cv2.imshow('img', img)
cv2.waitKey(0)
sample image
With the above code I get a lot of small square-like contours (around 152) and, of course, not the 3 boxes.
Replies appreciated. [sample image is attached]
I would suggest you read up on template matching. There is also a good OpenCV tutorial on this.
For your use case, the idea would be to generate a stereotyped image of a rectangular box with the same shape (width/height ratio) as the boxes found on your documents. Depending on whether your input images always show the document at the same scale or not, you would either need to resize the inputs to keep their magnification constant, or you would need to operate with a template bank (e.g. an array of box templates at various scales).
Briefly, you would then cross-correlate the template box(es) with the input image and (in case of well-matched scaling) would find ideally relatively sharp peaks indicating the centers of your document boxes.
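A minimal sketch of that cross-correlation step with cv2.matchTemplate (the file names and the 0.8 threshold are placeholder assumptions):
import cv2
import numpy as np

img = cv2.imread('document.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('box_template.png', cv2.IMREAD_GRAYSCALE)
th, tw = template.shape[:2]

# Normalized cross-correlation; peaks mark candidate matches
# (np.where gives the top-left corner of each match)
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)
for x, y in zip(xs, ys):
    cv2.rectangle(img, (x, y), (x + tw, y + th), 0, 2)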
In the code above, use image pyramids (to merge unwanted contour noise) in combination with cv2.findContours. After that, filtering the list of contours by area with cv2.contourArea will leave only the bigger squares; a sketch follows below.
There is also an alternative solution. Looking at the images, we can see that the signature text is usually bigger than the printed text in that ROI, so we can filter out contours smaller than the signature contours and extract only the signature.
It is always good to remove noise before using cv2.findContours, e.g. with dilation, erosion, or blurring.
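A sketch of that area filtering added to the question's code (the cut-off value is an assumption to tune for your scan resolution):
import cv2

img = cv2.imread('S-0330-444-20012800.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # light denoising first
ret, thresh = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY_INV)

contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
min_area = 10000                               # assumed cut-off
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4 and cv2.contourArea(cnt) > min_area:
        cv2.drawContours(img, [cnt], 0, (26, 60, 232), 3)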

Fisheye camera calibration opencv

I need to evaluate whether a camera is viewing a 3D real object. To do so, I have the 3D model of the world I am moving in and the pose of the robot my camera is attached to. So far, so good; the camera coordinates will be
[x, y, z]' = R X + T
where X is the real object position and R, T are the rotation and translation given by the robot pose. The camera I am using is a 170° FOV camera, and I need to calibrate it in order to convert these [x, y, z] into pixel coordinates I can evaluate. If the pixel coordinates are bigger than (0, 0) and smaller than (width, height), I will consider that the camera is looking at the object.
Can I do a similar test without the conversion to pixel coordinates? I guess not, so I am trying to calibrate the fisheye camera with https://bitbucket.org/amitibo/pyfisheye/src, which is a wrapper over the faulty opencv 3.1.0 fisheye model.
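Once the calibration yields K and D, the pixel test I described would look something like this minimal sketch using cv2.fisheye.projectPoints (the point value, the pose R, T, and the image size are placeholders from my setup):
import cv2
import numpy as np

Xw = np.array([0.5, 0.2, 3.0]).reshape(3, 1)   # placeholder object position

Xc = R @ Xw + T                     # object in camera coordinates
rvec, _ = cv2.Rodrigues(R)          # rotation matrix -> Rodrigues vector
pts, _ = cv2.fisheye.projectPoints(Xw.reshape(1, 1, 3), rvec, T, K, D)
u, v = pts[0, 0]

# Visible if it projects inside the frame and lies in front of the camera
visible = (0 <= u < width) and (0 <= v < height) and Xc[2, 0] > 0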
Here is one of my calibration images:
Using the simplest test (https://bitbucket.org/amitibo/pyfisheye/src/default/example/test_fisheye.py), this is the comparison with the undistorted image:
It looks really nice, and here is the undistorted:
How can I get the whole "butterfly" undistorted image? I am currently seeing the lower border...

OpenCV Python - Image distortion, radial and perspective

I have a distorted picture in which, without distortion, the points A, B, C and D would form a square of 1 cm × 1 cm.
I tried to use a homography to correct it, but it distorts the lines AD and BC, as you can see in the figure.
Do you have an idea how could I correct that?
Thanks a lot!
Marie- coder beginner
PS: for info, the image is taken in a tube with an endoscope camera that has a large field of view, allowing it to picture the tube almost all the way around the camera. I will use the 1 cm × 1 cm square to estimate root growth from several pictures taken over time.
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

if __name__ == '__main__':
    # Read source image.
    im_src = cv2.imread('points2.jpg', cv2.IMREAD_COLOR)
    # Four points of the miniR image
    pts_src = np.array([[742, 223], [806, 255], [818, 507], [753, 517]], dtype=float)
    # Read destination image.
    im_dst = cv2.imread('rectangle.jpg', cv2.IMREAD_COLOR)
    # Four points of the square
    pts_dst = np.array([[200, 200], [1000, 200], [1000, 1000], [200, 1000]], dtype=float)
    # Calculate the homography
    h, status = cv2.findHomography(pts_src, pts_dst)
    # Warp source image to destination based on the homography
    im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1], im_dst.shape[0]))
    cv2.imwrite('corrected2.jpg', im_out)
    # Display images
    cv2.imshow("Source Image", im_src)
    cv2.imshow("Destination Image", im_dst)
    cv2.imshow("Warped Source Image", im_out)
    cv2.waitKey(0)
A homography is a projective transformation. As such it can only map straight lines to straight lines. The straight sides of your input curvilinear quadrangle are correctly rectified, but there is no way that you can straighten the curved sides using a projective transform.
In the photo you posted it may be reasonable to assume that the overall geometry is approximately a cylinder, and the "vertical" lines are parallel to the axis of the cylinder. So they are approximately straight, and a projective transformation (the camera projection) will map them to straight lines. The "horizontal" lines are the images of circles, or ellipses if the cylinder is squashed. A projective transformation will map ellipses (in particular, circles) into ellipses. So you could proceed by fitting ellipses. See this other answer for hints.
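A minimal sketch of the ellipse-fitting step with cv2.fitEllipse (the sample points are synthetic placeholders; in practice you would sample them along one detected curved grid line):
import cv2
import numpy as np

# Synthetic points along one curved "horizontal" line
t = np.linspace(0, np.pi, 20)
pts = np.stack([400 + 300 * np.cos(t),      # x-coordinates
                300 + 80 * np.sin(t)],      # y-coordinates
               axis=1).astype(np.float32)

(cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pts)
# (cx, cy): center, (ax1, ax2): full axis lengths, angle: rotation in degrees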
I found a solution using GDAL. We can use two chessboard images: one is imaged with the device that creates the distortion, and the other remains unchanged, i.e. without distortion. With the help of QGIS you create a file associating each distorted point with its undistorted counterpart. For that you add a Ground Control Point at each intersection, using a defined grid interval (e.g. 100 px), and export the resulting GCPs as pointsfile.points.
After that, you can use a batch file that a collaborator created here. It uses GDAL to geo-correct the images.
You just need to put the images you would like to transform (JPG format) into the root directory of the repo and run bash warp.sh. This will output the re-transformed images into the out/ directory.
