Python: Extract contours/boundaries into a new image using OpenCV

I have a binary image with a lot of blobs in different shapes. I use Python and OpenCV, and I have rotated bounding boxes and "more exact" contours. Here is the code. mult2 is a binary image and "gray" is the same image under a different name (bad, I know).
# "rect" gives bounding box centroid coordinates, width/length, angle
ret, contours, hierarchy = cv2.findContours(gray,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
if 5<cv2.contourArea(cnt)<50000:
# Draw smaller objects with rectangular boxes (note that I actually draw all objects and not only the small ones atm)
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(mult2,[box],0,(0,0,255),2)
if 1000<cv2.contourArea(cnt)<500000:
# Draw bigger objects with help from perimeter lenght ish something
epsilon = 0.004*cv2.arcLength(cnt,True)
approx = cv2.approxPolyDP(cnt,epsilon,True)
cv2.drawContours(mult2,[approx],0,(255,0,0),2)
And the result is this image.
In this example both the bounding boxes and the contours are drawn, but that does not matter at the moment. What I want is to draw, or extract, the bounding boxes (in blue) so that they "replace" the blobs. I want to be able to do the same thing with the contours (in red) and place them in the same image, so for a given blob I get either its bounding box or its contour. How do I do this?
Edit: It cannot be seen here, but in the original binary image there are places where the bounding boxes overlap each other. For example, one blob can have a big U-shape, and sometimes there is a blob inside the U, so to speak. In that case I want the contour of the U and the bounding box of the blob inside it. How to separate those is already solved; my problem is to extract them. Thank you.
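One possible approach (a rough sketch, not a confirmed solution): draw the chosen shape for each blob onto a fresh blank image instead of on top of the blobs. The contour code is reused from the question above; use_box is a hypothetical helper standing in for the already-solved logic that decides between box and contour for a given blob.
canvas = np.zeros_like(gray)  # blank image of the same size as the input
for cnt in contours:
    if use_box(cnt):  # hypothetical: your existing box-vs-contour decision
        rect = cv2.minAreaRect(cnt)
        box = np.int0(cv2.boxPoints(rect))
        cv2.drawContours(canvas, [box], 0, 255, 2)
    else:
        epsilon = 0.004 * cv2.arcLength(cnt, True)
        approx = cv2.approxPolyDP(cnt, epsilon, True)
        cv2.drawContours(canvas, [approx], 0, 255, 2)
Drawing with thickness 2 keeps outlines only; a thickness of -1 would draw the shapes filled, if you want them to stand in for the blobs completely.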

Related

How to show the biggest rectangle in OpenCV Haar classifier

I have already trained positive and negative images of the side view of a car using Haar cascade object detection. Now, when I use the cascade XML file to predict cars in images, I get multiple rectangles.
Now:
1) Why am I getting multiple rectangles around my object?
2) How do I show only the largest rectangle detected in the image?
Output Image
This is the type of output that I am getting on every image.
Code
car_cascade = cv2.CascadeClassifier('data/cascade.xml')
img = cv2.imread('test/46.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cars = car_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in cars:
    img = cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Piglet's answer will help you set a threshold for the minimum / maximum size, but if you wanted to find the largest bounding box in the image, you could do something like this:
import numpy as np

areas = [w * h for x, y, w, h in cars]
i_biggest = np.argmax(areas)
biggest = cars[i_biggest]
Here, we're doing the following:
Calculating all bounding box areas using a list comprehension
Finding the index of areas with the largest value and storing it in i_biggest
Using this index to extract the biggest (largest-area) rectangle from cars
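To then display only that detection, a minimal follow-up sketch reusing img and biggest from above:
x, y, w, h = biggest
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)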
As the function name cv2.CascadeClassifier.detectMultiScale already suggests, and as the documentation says:
Detects objects of different sizes in the input image
Also from the documentation:
Python: cv2.CascadeClassifier.detectMultiScale(image[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize]]]]]) → objects
minSize – Minimum possible object size. Objects smaller than that are
ignored.
So either you filter the list of resulting rectangles by size, or you prevent small objects from being detected in the first place by setting the minSize parameter.
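For example, a sketch of the second option (the 100x40 minimum size is an assumed value; tune it to your images):
cars = car_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5, minSize=(100, 40))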

How to crop an image from screenshot with the use of OpenCV?

I recently began studying image processing and took on a task where I need to crop an image from a mobile Instagram screenshot using OpenCV. I need to find the edges of the image with contours and crop it, but I'm not sure how to do this correctly.
I've tried to look up some examples like these:
How to crop biggest rectangle out of an image
https://www.quora.com/How-can-I-detect-an-object-from-static-image-and-crop-it-from-the-image-using-openCV
How to detect edge and crop an image in Python
How to crop rectangular shapes in an image using Python
But I still don't understand how to do it in my case.
Basically I have images like these:
https://imgur.com/a/VbwCdkO
and
https://imgur.com/a/Mm69i35
And the result should be like this:
https://imgur.com/a/Bq6Zjw0
https://imgur.com/a/AhzOkWS
The screenshots used are only from the mobile version of Instagram, and it can be assumed that the images are always rectangular.
And if there are more than one image like here:
https://imgur.com/a/avv8Wvv
Then only one of the two is cropped (which one doesn't matter).
For example:
https://imgur.com/a/a4KnRKC
Thanks!
One of the prominent features in your screenshot images is the white background color. Everything appears on top of it, even the user image. So we will try to segment out the background, which leaves us with smaller components such as the Instagram icon, likes, etc. Then we pick the largest element, assuming that the user image is the largest element present on the screen. Then we simply find the cv2.boundingRect() of the largest contour and crop the screenshot accordingly:
import cv2
import numpy as np
img = cv2.imread("/path/to/img.jpg")
# Keep only near-white pixels (the background)
white_lower = np.asarray([230, 230, 230])
white_upper = np.asarray([255, 255, 255])
mask = cv2.inRange(img, white_lower, white_upper)
# Invert the mask so the foreground elements become white
mask = cv2.bitwise_not(mask)
Now we will find contours in this mask and select the largest one.
im, cnt, hierarchy = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest_contour = max(cnt, key=lambda x: cv2.contourArea(x))
bounding_rect = cv2.boundingRect(largest_contour)
cropped_image = img[bounding_rect[1]:bounding_rect[1] + bounding_rect[3],
                    bounding_rect[0]:bounding_rect[0] + bounding_rect[2]]

How to center MRI images

I work on MRIs. The problem is that the images are not always centered. In addition, there are often black bands around the patient's body.
I would like to be able to remove the black borders and center the patient's body like this:
I have already tried to determine the edges of the patient's body by reading the pixel values, but I haven't come up with anything very conclusive.
In fact, my solution works on only 50% of the images... I don't see any other way to do it...
Development environment: Python3.7 + OpenCV3.4
I'm not sure this is the standard or most efficient way to do this, but it seems to work:
# Load image as grayscale (since it's b&w to start with)
im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold it. I tried a few pixel values, and got something reasonable at min = 5
_,thresh = cv2.threshold(im,5,255,cv2.THRESH_BINARY)
# Find contours:
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# Put all contours together and reshape to (_,2).
# The first "column" will be your x values of your contours, and second will be y values
c = np.vstack(contours).reshape(-1,2)
# Extract the most left, most right, uppermost and lowermost point
xmin = np.min(c[:,0])
ymin = np.min(c[:,1])
xmax = np.max(c[:,0])
ymax = np.max(c[:,1])
# Use those as a guide of where to crop your image
crop = im[ymin:ymax, xmin:xmax]
cv2.imwrite('cropped.jpg', crop)
What you get in the end is this:
There are multiple ways to do this, and this answer is pretty much computer vision tips and tricks.
If the mass is in the center, and the area outside it is always going to be black, you can threshold the image and then find the edge pixels as you already do. I'd add a 10-pixel margin to the crop to adjust for variances in the thresholding process, as in the sketch below.
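A sketch of that margin, reusing im, xmin, xmax, ymin, and ymax from the previous answer (the clamping keeps the crop inside the image bounds):
pad = 10
h, w = im.shape[:2]
crop = im[max(ymin - pad, 0):min(ymax + pad, h),
          max(xmin - pad, 0):min(xmax + pad, w)]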
Or, if the body is always of a similar size, you can find the centroid of the blob (the white area in the thresholded image) and then crop a fixed area around it.
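And a minimal sketch of the centroid approach, assuming the thresholded image thresh from the earlier snippet; the 400x400 crop window is an assumed size:
M = cv2.moments(thresh, True)  # True: treat the input as a binary image
cx = int(M['m10'] / M['m00'])  # centroid x
cy = int(M['m01'] / M['m00'])  # centroid y
half = 200                     # assumed half-size of the fixed crop
crop = im[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]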

Edge detection on dim edges using Python

I want to find dim edges using Python.
Input images (100 x 100):
It consists of several horizontal boards: top, middle, bottom.
I want to find the middle board's bounding box, like this:
I used several edge detection methods (prewitt_x, sobel_x, cv2.findContours), but none of them detect it well, because the edge between the black region and the board region is dim.
How can I find a bounding box like the red box?
The code below is an example using prewitt_x and cv2.findContours:
import cv2
import numpy as np
img = cv2.imread('my_dir/my_img.bmp', 0)  # read as grayscale
# prewitt_x
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
img_prewittx = cv2.filter2D(img, -1, kernelx)
# img was loaded as grayscale, so the filtered result is already single-channel;
# the original cv2.COLOR_BGR2GRAY conversion is unnecessary (and would fail here)
cv2.imwrite('my_outdir/my_outimg.bmp', img_prewittx)
# cv2.findContours
image, contours, hierarchy = cv2.findContours(img_prewittx, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(cnt) for cnt in contours]
print(rects)
In fact, I don't want to use a slower method like the Canny detector.
Help me :)
My suggestion:
use a simple edge detection filter such as Prewitt
project horizontally (sum of the pixels in every row)
analyze the resulting profile to detect the regions of low/high activity and delimit the desired slabs.
You can also try the maximum along rows instead of the sum.
But don't expect miracles; this is a hard problem.
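A minimal sketch of that pipeline, assuming a grayscale input img; the 50% activity cutoff is an assumed value to tune:
import cv2
import numpy as np

img = cv2.imread('my_dir/my_img.bmp', 0)                  # grayscale input
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
edges = cv2.filter2D(img, -1, kernelx)                    # Prewitt-style horizontal edge filter
profile = edges.sum(axis=1)                               # horizontal projection: one value per row
# profile = edges.max(axis=1)                             # alternative: maximum along rows
active = profile > 0.5 * profile.max()                    # rows with high edge activity (assumed cutoff)
rows = np.where(active)[0]
print("high-activity rows span", rows.min(), "to", rows.max())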

Rectangular bounding boxes around objects in monochrome images in python?

I have a set of two monochrome images [attached] where I want to put rectangular bounding boxes around both of the persons in each image. I understand that cv2.dilate may help, but most of the examples I see focus on detecting one rectangle containing the maximum pixel intensities, so essentially they put one big rectangle on the image. I would like to have two separate rectangles.
UPDATE:
This is my attempt:
import numpy as np
import cv2
import matplotlib.pyplot as plt

im = cv2.imread('splinet.png', 0)
print(im.shape)
kernel = np.ones((50, 50), np.uint8)
dilate = cv2.dilate(im, kernel, iterations=10)
ret, thresh = cv2.threshold(im, 127, 255, 0)
im3, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
plt.imshow(im, cmap='Greys_r')
#plt.imshow(im3, cmap='Greys_r')
for i in range(0, len(contours)):
    if i % 2 == 0:
        cnt = contours[i]
        #mask = np.zeros(im2.shape, np.uint8)
        #cv2.drawContours(mask, [cnt], 0, 255, -1)
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(im, (x, y), (x+w, y+h), (255, 255, 0), 5)
        plt.imshow(im, cmap='Greys_r')
        cv2.imwrite(str(i) + '.png', im)
cv2.destroyAllWindows()
And the output is attached below. As you can see, lots of small boxes are drawn, and the result is not very clear either.
The real problem in your question lies in the selection of the optimal threshold for the monochrome image.
To do that, calculate the median of the grayscale image (the second image in your post) and set the threshold level 33% above this median value; any pixel below this threshold is zeroed out during binarization.
This is what I got:
Now performing morphological dilation followed by contour operations you can highlight your region of interest with a rectangle.
Note:
Never set a manual threshold as you did. The threshold can vary between images, so always opt for a threshold based on the median of the image.
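A minimal sketch of the whole pipeline under those assumptions; the file name and dilation kernel size are made-up values:
import cv2
import numpy as np

im = cv2.imread('monochrome.png', 0)                    # assumed input file
thresh_val = 1.33 * np.median(im)                       # 33% above the median gray level
_, thresh = cv2.threshold(im, thresh_val, 255, cv2.THRESH_BINARY)
kernel = np.ones((15, 15), np.uint8)                    # assumed kernel size
dilated = cv2.dilate(thresh, kernel, iterations=1)      # merge nearby white regions
_, contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(im, (x, y), (x + w, y + h), 255, 2)   # one box per detected blob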
