Crop the object from an infrared picture. RStudio/Python - python

I am trying to crop only the liver from this picture. I have tried everything but cannot find any useful information about it. I am using Python/RStudio. After removing the unnecessary part, I also want to get the pixel intensities of the new image. Any help would be appreciated. Please check one of the images; this is roughly what I want to crop.
UPDATE:
I am trying to crop the main image based on the edges I got from the Canny edge detector. Is there any way to crop the main image based on those edges? Please check the images.
Liver Image
Canny Edge Detection

Well, if your images are static, the same size, and taken from the same angle, then the simple script below should suit your needs:
import cv2
import numpy as np

# reference image in which everything except the liver is white
img = cv2.imread("cMyp9.png")
# indices of all white (background) pixels
mask = np.where(img == 255)
# blank out the same background region in the new image
img2 = cv2.imread("your_next_image.png")
img2[mask] = 255
Now img2 is a cropped version of the new image.
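If you also need the pixel intensities of the remaining region (the second part of the question), you can read them directly from the masked array. A minimal sketch, assuming a grayscale image and that the background was blanked to white as above:
import cv2
import numpy as np

img2 = cv2.imread("your_next_image.png", cv2.IMREAD_GRAYSCALE)
# treat everything that is not pure white as part of the object
liver = img2[img2 < 255]
print(liver.mean(), liver.min(), liver.max())  # basic intensity statistics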

Related

Approach to apply canny edge detector on a video instead of a still image

As part of my current project, I need to perform edge detection on a given video using the Canny edge detector. I know how to do this on a still image, but I am interested to know whether there is any other approach for a video instead of simply applying Canny to all the frames one after the other.
I am adding sample code for the Canny edge detector on a still image.
I tried the same approach on a video by reading all the frames one by one, and I got the output as well.
I am actually interested to know whether this is the only approach to follow in the case of a video, or whether there is another approach for a video.
import cv2

# reading a still image
image = cv2.imread(image_path)
# converting to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# applying a Gaussian low-pass filter
# (note: sigmaY must be passed by keyword, since the fourth
# positional argument of GaussianBlur is dst, not sigmaY)
gauss_image = cv2.GaussianBlur(gray_image, kernel_size, sigmaX, sigmaY=sigmaY)
# applying the Canny edge detector
edges = cv2.Canny(gauss_image, lower_threshold, upper_threshold)
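For reference, the frame-by-frame approach described above would look like this; a minimal sketch with a placeholder file name and assumed blur/threshold values:
import cv2

cap = cv2.VideoCapture("input_video.mp4")  # placeholder path
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 100, 200)  # assumed thresholds
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()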

Detect rectangular signature fields in document scans using OpenCV

I am trying to extract big rectangular boxes from document images with signatures in them. Since I don't have training data (for deep learning), I want to cut the rectangular boxes (3 in all images) from these images using OpenCV.
Here is what I tried:
import numpy as np
import cv2

img = cv2.imread('S-0330-444-20012800.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# inverse binary threshold (flag 1 == cv2.THRESH_BINARY_INV)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
# note: in OpenCV 4.x findContours returns (contours, hierarchy)
contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:  # four vertices -> roughly rectangular
        cv2.drawContours(img, [cnt], 0, (26, 60, 232), -1)
cv2.imshow('img', img)
cv2.waitKey(0)
sample image
With the above code, I get a lot of small square-like contours (around 152 of them), and of course not the 3 boxes.
Replies appreciated. [sample image is attached]
I would suggest you read up on template matching. There is also a good OpenCV tutorial on this.
For your use case, the idea would be to generate a stereotyped image of a rectangular box with the same shape (width/height ratio) as the boxes found on your documents. Depending on whether your input images always show the document at the same scale, you would either need to resize the inputs to keep their magnification constant, or you would need to operate with a template bank (e.g. an array of box templates at various scales).
Briefly, you would then cross-correlate the template box(es) with the input image and (in the case of well-matched scaling) would ideally find relatively sharp peaks indicating the locations of your document boxes.
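A minimal sketch of that cross-correlation step with cv2.matchTemplate; the file names and the 0.8 score threshold are assumptions:
import cv2
import numpy as np

img = cv2.imread('document.jpg', cv2.IMREAD_GRAYSCALE)          # hypothetical scan
template = cv2.imread('box_template.png', cv2.IMREAD_GRAYSCALE)  # hypothetical box template
h, w = template.shape
# normalized cross-correlation; peaks mark the best template alignments
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)  # assumed similarity threshold
for x, y in zip(xs, ys):
    # (x, y) is the top-left corner of a matched box
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)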
In the question's code, use image pyramids (to merge unwanted contour noise) together with cv2.findContours. After that, filtering the list of contours by area with cv2.contourArea will leave only the bigger squares, as sketched below.
There is also an alternative solution. Looking at the images, we can see that the signature text is usually bigger than the printed text in that ROI, so we can filter out contours smaller than the signature contours and extract only the signature.
It's always good to remove noise before using cv2.findContours, e.g. with dilation, erosion, blurring, etc.
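A minimal sketch of that area filter, reusing gray from the question's code; the 10000 cutoff is a placeholder you would tune for your scans:
# gray is the grayscale image from the question's code above
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # denoise before thresholding
ret, thresh = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY_INV)
contours, h = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# keep only contours big enough to be one of the three boxes
# (10000 is a placeholder cutoff, not a value from the question)
big_boxes = [cnt for cnt in contours if cv2.contourArea(cnt) > 10000]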

Opencv - Extracting data from in-game images

I need some help with an OpenCV project I'm working on. I'm taking images from a computer game (in this case, Fortnite), and I would like to extract different elements from them, e.g. the timer value, quantities of materials, health and shield, etc.
Currently I perform a series of image preprocessing functions until I get a binary image, followed by locating the contours in the image and then sending those contours to a machine learning algorithm (K-Nearest-Neighbours).
I am able to succeed in a lot of cases, but there are some images where I don't manage to find some of the contours, and therefore I don't find all the data.
An important thing to note is that I use the same preprocessing pipeline for all images, because I'm looking for as robust of a solution that I can manage.
I would like to know what I can do to improve the performance of my program:
Is KNN a good model for this sort of task, or are there other models that might give me better results?
Is there any way to recognise characters without locating contours?
How can I make my preprocessing pipeline as robust as possible, given the fact that there is a lot of variance in the background across all images?
My goal is to process the images as fast as possible, starting with a minimum of 2 images per second.
Thanks in advance for any help or advice you can give me!
Here is an example image before preprocessing
Here is the image after preprocessing; in this example I cannot find the contour for the 4 on the right side.
Quite simply, enlarging the image might help, since it increases the dark border of the number.
I threw together some code that does that. The result could be improved, but my point here is to show that the 4 can now be detected as a contour. To increase efficiency, I only selected contours within a certain size range.
Also, since it is part of the HUD, the location on screen is usually the same. If so, you can get a great performance increase by only selecting the area with values (described here), as I have done manually.
Finally, since the numbers have a consistent shape, you could try matchShapes as an alternative to kNN to recognize the numbers. I don't know how they compare in performance though, so you'll have to try that out yourself.
Result:
Code:
import numpy as np
import cv2

# load image
img = cv2.imread("fn2.JPG")
# enlarge image
img = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
# convert to grayscale (imread returns BGR, so use COLOR_BGR2GRAY)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# create mask using threshold
ret, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
# find contours in mask (OpenCV 4.x returns two values;
# 3.x returned an extra image as the first value)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw contours within a certain size range on the image
for cnt in contours:
    if 200 < cv2.contourArea(cnt) < 3000:
        cv2.drawContours(img, [cnt], 0, (255, 0, 0), 2)
# show images
cv2.imshow("Mask", mask)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
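For the matchShapes suggestion, a minimal sketch of comparing one detected contour against a digit template; the template file is hypothetical, and lower scores mean more similar shapes:
import cv2

# hypothetical binarized template of a known digit
template = cv2.imread("digit_4_template.png", cv2.IMREAD_GRAYSCALE)
t_contours, _ = cv2.findContours(template, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# cnt is one of the contours found in the game screenshot above
score = cv2.matchShapes(t_contours[0], cnt, cv2.CONTOURS_MATCH_I1, 0.0)
# the template with the lowest score across all digits is the best guess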

How to undistort a cropped fisheye image using OpenCV

I am trying to undistort a fisheye image using OpenCV. I already calibrated the camera and am getting a decent undistorted image with the following code (I just posted the relevant part):
import cv2

img = cv2.imread('img.jpeg')  # image path of pic to undistort
h, w = img.shape[:2]
# K and D are the camera matrix and distortion coefficients from calibration
Knew = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (w, h), None)
# scale the focal lengths (Knew[0,0] and Knew[1,1]) to zoom out
Knew[(0, 1), (0, 1)] = 0.3 * Knew[(0, 1), (0, 1)]
mapx, mapy = cv2.fisheye.initUndistortRectifyMap(K, D, None, Knew, (w, h), 5)  # 5 == cv2.CV_32FC1
img_undistorted = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
cv2.imshow('undistorted', img_undistorted)
cv2.waitKey(0)
cv2.destroyAllWindows()
Now I have to crop the fisheye image like this:
These photos are only for demonstration purposes.
I used the mapping function of the fisheye lens to determine the point where to crop on the right side. However, if I use the same code as above (and the same camera matrix), the output is a really weirdly distorted picture.
I have also thought about undistorting first and then cropping, but I am not able to calculate exactly where to crop that way, so I have to crop the image first.
So how do I correctly undistort a not symmetrically cropped fisheye image?
When you move and crop the image, the fisheye calibration parameters no longer fit the new image.
Maybe you could try to make the image symmetrical again by padding it with zero values.
The point is to make the center point of both images sit at the same position.
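A minimal sketch of that padding idea with cv2.copyMakeBorder; the pad width is a placeholder you would compute so the principal point from K lands back at the image center:
import cv2

img_cropped = cv2.imread('cropped.jpeg')  # hypothetical cropped fisheye image
pad_right = 200  # placeholder: the width that the crop removed
img_padded = cv2.copyMakeBorder(img_cropped, 0, 0, 0, pad_right,
                                cv2.BORDER_CONSTANT, value=(0, 0, 0))
# then undistort img_padded with the same K, D and remap code as above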
Reference:
http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#fisheye-estimatenewcameramatrixforundistortrectify
http://answers.opencv.org/question/64614/fisheyeundistortimage-doesnt-work-what-wrong-with-my-code/

Is it possible to mask an image in Python Imaging Library (PIL)?

I have some traffic camera images, and I want to extract only the pixels on the road. I have used remote sensing software before where one could specify an operation like
img1 * img2 = img3
where img1 is the original image and img2 is a straight black-and-white mask. Essentially, the white parts of the image would evaluate to
img1 * 1 = img3
and the black parts would evaluate to
img1 * 0 = img3
And so one could take a slice of the image and let all of the non-important areas go to black.
Is there a way to do this using PIL? I can't find anything similar to the image algebra I'm used to seeing. I have experimented with the blend function, but that just fades the images together. I've read up a bit on numpy and it seems like it might be capable of this, but I'd like to know for sure that there is no straightforward way of doing it in PIL before I go diving in.
Thank you.
The Image.composite method can do what you want. The first image should be a constant value representing the masked-off areas, and the second should be the original image; the third is the mask.
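A minimal sketch of that, with placeholder file names, following the argument order described above (white in the mask marks the masked-off areas):
from PIL import Image

img = Image.open("traffic.png")              # original image
mask = Image.open("mask.png").convert("L")   # black-and-white mask
black = Image.new("RGB", img.size, 0)        # constant value for masked-off areas
# composite takes pixels from the first image where the mask is white
# and from the second image where the mask is black
result = Image.composite(black, img, mask)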
You can use the PIL library to mask the images. Add an alpha channel to img2, as you can't just paste this image over img1; otherwise you won't see what is underneath. You need to add an alpha value:
img2.putalpha(128)  # 0 would be completely transparent; 255 keeps the image fully opaque
Then you can mask the two images together:
img1.paste(im=img2, box=(0, 0), mask=img2)
