I have an image of a circular-shaped mask, which is essentially a colored circle within a black image.
I want to remove all the blank space around the mask, such that the boundaries of the image align with the circle as such:
I've written a script that does this by scanning every column and row until a pixel with a value greater than 0 appears. Searching from left to right, right to left, top to bottom, and bottom to top gives me the mask boundaries, allowing me to crop the original image. Here is the code:
import numpy as N
from tqdm import tqdm

ROWS, COLS, _ = img.shape

BORDER_RIGHT = (0, 0)
BORDER_LEFT = (0, 0)
right_found = False
left_found = False

# find borders of blank space for removal.
# left and right border
print('Searching for Right and Left corners')
for col in tqdm(range(COLS), position=0, leave=True):
    for row in range(ROWS):
        if left_found and right_found:
            break
        # searching from left to right
        if not left_found and N.sum(img[row][col]) > 0:
            BORDER_LEFT = (row, col)
            left_found = True
        # searching from right to left
        if not right_found and N.sum(img[row][-col]) > 0:
            BORDER_RIGHT = (row, img.shape[1] + (-col))
            right_found = True

BORDER_TOP = (0, 0)
BORDER_BOTTOM = (0, 0)
top_found = False
bottom_found = False

# top and bottom borders
print('Searching for Top and Bottom corners')
for row in tqdm(range(ROWS), position=0, leave=True):
    for col in range(COLS):
        if top_found and bottom_found:
            break
        # searching top to bottom
        if not top_found and N.sum(img[row][col]) > 0:
            BORDER_TOP = (row, col)
            top_found = True
        # searching bottom to top
        if not bottom_found and N.sum(img[-row][col]) > 0:
            BORDER_BOTTOM = (img.shape[0] + (-row), col)
            bottom_found = True

# crop left and right borders
new_img = img[:, BORDER_LEFT[1]:BORDER_RIGHT[1], :]
# crop top and bottom borders
new_img = new_img[BORDER_TOP[0]:BORDER_BOTTOM[0], :, :]
I was wondering whether there is a more efficient way to do this. With larger images, this can be quite time-consuming, especially if the mask is small relative to the original image shape. Thanks!
Assuming you have only this object inside the image, there are two ways to do this:
You can threshold the image, then use numpy.where to find all locations that are non-zero, then use numpy.min and numpy.max on the appropriate row and column locations that come out of numpy.where to give you the bounding rectangle.
You can first find the contour points of the object after you threshold with cv2.findContours. This should result in a single contour, so once you have these points you put this through cv2.boundingRect to return the top-left corner of the rectangle followed by the width and height of its extent.
The first method works well, and efficiently, if there is a single object. The second works even if there is more than one object, but you have to know which contour belongs to the object of interest; you then index into the output of cv2.findContours and pipe that through cv2.boundingRect to get the rectangular dimensions of the object of interest.
However, the takeaway is that either of these methods is much more efficient than the approach you have proposed where you are manually looping over each row and column and calculating sums.
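To give a rough sense of the difference, here is a small illustrative timing sketch of my own (the image size, circle position, and radius are made up for the example); it builds a large mask and finds the bounding box with the vectorised approach in a single pass:

import time
import numpy as np

# Synthetic 4000 x 4000 mask with a small circle far from the origin,
# roughly the worst case for the row/column scanning loops
h = w = 4000
yy, xx = np.ogrid[:h, :w]
mask = (yy - 3500) ** 2 + (xx - 3500) ** 2 <= 100 ** 2

t0 = time.perf_counter()
r, c = np.where(mask)
bbox = (r.min(), c.min(), r.max(), c.max())
print(bbox, 'found in', time.perf_counter() - t0, 'seconds')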
Pre-processing
These steps are common to both methods. In summary, we read in the image, convert it to grayscale, then threshold it. I didn't have access to your original image, so I read it in from Stack Overflow and cropped it so that the axes are not showing. This applies to the second method as well.
Here's a reconstruction of your image where I've taken a snapshot.
First I'll read in the image directly from the Internet as well as import the relevant packages I need to get the job done:
import skimage.io as io
import numpy as np
import cv2
img = io.imread('https://i.stack.imgur.com/dj1a8.png')
Thankfully, scikit-image has a method that reads in images directly from the Internet: skimage.io.imread.
After, I'm going to convert the image to grayscale, then threshold it:
img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
im = img_gray > 40
I use OpenCV's cv2.cvtColor to convert the image from colour to grayscale. After that, I threshold the image so that any intensity above 40 is set to True and everything else is set to False. I chose the threshold of 40 by trial and error until the resulting mask appeared circular. Taking a look at this image we get:
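As an aside, if you would rather not hand-tune the threshold, Otsu's method picks one automatically from the image histogram. A minimal sketch, assuming the same img_gray as above:

_, im_otsu = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
im = im_otsu > 0  # same boolean form as the manual threshold above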
Method #1
As I illustrated above, use numpy.where on the thresholded image, then use numpy.min and numpy.max to find the appropriate top-left and bottom-right corners and crop the image:
(r, c) = np.where(im == 1)
min_row, min_col = np.min(r), np.min(c)
max_row, max_col = np.max(r), np.max(c)
im_crop = img[min_row:max_row+1, min_col:max_col+1]
numpy.where for a 2D array will return a tuple of row and column locations that are non-zero. The minimum row and column location corresponds to the top-left corner of the bounding rectangle; similarly, the maximum row and column location corresponds to the bottom-right corner. What's nice is that numpy.min and numpy.max work in a vectorised fashion, meaning that they operate on entire NumPy arrays in a single sweep. This logic is used above, then we index into the original colour image to crop out the range of rows and columns that contain the object of interest. im_crop contains that result. Note that 1 must be added to the maximum row and column when indexing, since the end indices of a slice are exclusive; adding 1 ensures we include the pixel locations at the bottom-right corner of the rectangle.
We therefore get:
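An equivalent variant, sketched here under the same variable names, projects the mask onto each axis with numpy.any and reads off the first and last True index; it avoids materialising every non-zero coordinate:

rows = np.any(im, axis=1)
cols = np.any(im, axis=0)
# np.argmax on a boolean array returns the index of the first True
min_row, max_row = np.argmax(rows), len(rows) - 1 - np.argmax(rows[::-1])
min_col, max_col = np.argmax(cols), len(cols) - 1 - np.argmax(cols[::-1])
im_crop = img[min_row:max_row+1, min_col:max_col+1]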
Method #2
We will use cv2.findContours to find all contour points of all objects in the image. Because there's a single object, only one contour should result, so we use this contour to pipe into cv2.boundingRect to find the top-left corner of the bounding rectangle of the object, combined with its width and height to crop out the image.
cnt, _ = cv2.findContours(im.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
x, y, w, h = cv2.boundingRect(cnt[0])
im_crop = img[y:y+h, x:x+w]
Take note that we have to convert the thresholded image into an unsigned 8-bit integer image, as that is the type the function expects. Furthermore, we use cv2.RETR_EXTERNAL as we only want to retrieve the coordinates of the outer perimeter of any objects we see in the image. We also use cv2.CHAIN_APPROX_NONE to return every possible contour point on the object. cnt is a list of contours that were found in the image. The size of this list should be 1, so we index into it directly and pipe this into cv2.boundingRect. We then use the top-left corner of the rectangle, combined with its width and height, to crop out the object.
We therefore get:
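One caveat: the number of outputs from cv2.findContours differs across OpenCV versions (three in OpenCV 3.x, two in 2.x and 4.x). A version-agnostic way to grab the contour list:

res = cv2.findContours(im.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = res[0] if len(res) == 2 else res[1]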
Full Code
Here's the full code listing from start to finish. I've left comments below to delineate what methods #1 and #2 are. For now, method #2 has been commented out, but you can decide whichever one you want to use by simply commenting and uncommenting the relevant code.
import skimage.io as io
import cv2
import numpy as np
img = io.imread('https://i.stack.imgur.com/dj1a8.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
im = img_gray > 40
# Method #1
(r, c) = np.where(im == 1)
min_row, min_col = np.min(r), np.min(c)
max_row, max_col = np.max(r), np.max(c)
im_crop = img[min_row:max_row+1, min_col:max_col+1]
# Method #2
#cnt, _ = cv2.findContours(im.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#x, y, w, h = cv2.boundingRect(cnt[0])
#im_crop = img[y:y+h, x:x+w]
Related
I have an image that contains only a tiled shape, with everywhere else black. However, this tiled pattern can be shifted/offset anywhere in the image, particularly over the image borders. Knowing that this shape can fit inside the image after offsetting it and leaving the borders black, how can I calculate how many pixels it needs to be offset in the x and y coordinates for that to happen, in an optimized way?
Input image
Desired output after offset/shifting
My thought was to get the connected components in the image, check which labels touch the border, calculate the longest distance along each axis between the shapes on the border, and offset along each axis by those values. It could work, but I feel like there should be smarter ways.
So here are the details of what I put in my comment for doing this with Python/OpenCV/Numpy. Is this what you want?
Read the input
Convert to gray
Threshold to binary
Count the number of white pixels in each column and store in array
Find the first and last black (zero count) element in the array
Get the center x values
Crop the image into left and right parts at the center x
Stack them together horizontally in the opposite order
Save the result
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('black_white.jpg')
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
# count number of white pixels in columns as new array
count = np.count_nonzero(thresh, axis=0)
# get first and last x coordinate where black (count==0)
first_black = np.where(count==0)[0][0]
last_black = np.where(count==0)[0][-1]
# compute x center
black_center = (first_black + last_black) // 2
print(black_center)
# crop into two parts
left = img[0:hh, 0:black_center]
right = img[0:hh, black_center:ww]
# combine them horizontally after swapping
result = np.hstack([right, left])
# write result to disk
cv2.imwrite("black_white_rolled.jpg", result)
# display it
cv2.imshow("RESULT", result)
cv2.waitKey(0)
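As a side note, since the crop-and-hstack above is just a circular shift of the columns, numpy.roll does the same thing in one call (a negative shift moves content left):

result = np.roll(img, -black_center, axis=1)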
I have binary images where rectangles are placed randomly and I want to get the positions and sizes of those rectangles.
If possible I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image and on the right the image I get after applying scipy's find_objects() (as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.measurements.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.measurements.find_objects(labels)
img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and form more complex structures, this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already be a scipy or opencv method which does this.
I would be glad to know if somebody has an idea on how to tackle this problem or even better knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner : upper-right corner) in the image. The condition is that when I redraw those filled rectangles, I want to get exactly the same image as before. If possible, the number of rectangles should be minimal.
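For reference, a small helper of my own to check the exact-reconstruction condition, assuming bboxes is a list of slice tuples like those returned by find_objects:

import numpy as np

def check_reconstruction(image, bboxes):
    # redraw the filled rectangles and compare against the original mask
    redrawn = np.zeros_like(image)
    for bb in bboxes:
        redrawn[bb] = 1
    return np.array_equal(redrawn.astype(bool), image.astype(bool))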
Here is the code for generating sample images (and a more complex example original vs scipy)
import numpy as np
def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size-rectangle_limits[0]+1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1]+1,
                                  size=(n_obstacles, n_dim))
    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)
    img = np.zeros((grid_size,)*n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img
img = random_rectangle_image(grid_size=64, n_obstacles=30,
rectangle_limits=[4, 10])
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it is now, it only marks the rectangles but does not report back coordinates or counts. This is to visualize the algorithm alone.
It does not need any external libraries except for PIL, to load and access the left side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image
def fill_rect(pixels, xp, yp, w, h):
    # fill the interior red, then draw an orange border
    for y in range(h):
        for x in range(w):
            pixels[xp+x, yp+y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp+y] = (255, 192, 0, 255)
        pixels[xp+w-1, yp+y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp+x, yp] = (255, 192, 0, 255)
        pixels[xp+x, yp+h-1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x+width < maxx and pixels[x+width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down, extending the rectangle while the full row stays black
    while y+height < maxy:
        row_black = True
        for w in range(x, x+width):
            if pixels[w, y+height] != (0, 0, 0, 255):
                row_black = False
                break
        if not row_black:
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)
image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print (width,height)
for y in range(16, height-15):
    for x in range(16, width-15):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)
image.show()
From the output
you can observe that the detection algorithm can be improved: for example, the two "obvious" top-left rectangles are split into 3. Similarly, the larger structure in the center contains one more rectangle than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or to store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this. Currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may then either be black or red. Off the cuff I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.
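If you do want the coordinates rather than just the visualization, one small change of my own would be to have find_rect return the rectangle after filling, and collect the results in the main loop:

# assumes find_rect is changed to end with: return (x, y, width, height)
rects = []
for y in range(16, height-15):
    for x in range(16, width-15):
        if pixels[x, y] == (0, 0, 0, 255):
            rects.append(find_rect(pixels, x, y, width, height))
print(len(rects), 'rectangles found:', rects)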
I have a set of two monochrome images [attached] where I want to put rectangular bounding boxes for both the persons in each image. I understand that cv2.dilate may help, but most of the examples I see are focusing on detecting one rectangle containing the maximum pixel intensities, so essentially they put one big rectangle in the image. I would like to have two separate rectangles.
UPDATE:
This is my attempt:
import numpy as np
import cv2
import matplotlib.pyplot as plt

im = cv2.imread('splinet.png', 0)
print(im.shape)
kernel = np.ones((50, 50), np.uint8)
dilate = cv2.dilate(im, kernel, iterations=10)
ret, thresh = cv2.threshold(im, 127, 255, 0)
im3, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
plt.imshow(im, cmap='Greys_r')
#plt.imshow(im3, cmap='Greys_r')
for i in range(0, len(contours)):
    if i % 2 == 0:
        cnt = contours[i]
        #mask = np.zeros(im2.shape, np.uint8)
        #cv2.drawContours(mask, [cnt], 0, 255, -1)
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 255, 0), 5)
plt.imshow(im, cmap='Greys_r')
cv2.imwrite(str(i) + '.png', im)
cv2.destroyAllWindows()
The output is attached below. As you can see, small boxes are being made, and it's not very clear either.
The real problem in your question lies in selecting the optimal threshold for the monochrome image.
To do that, calculate the median of the grayscale image (the second image in your post), then set the threshold level 33% above this median value. Any value below this threshold is set to black; anything above it is set to white.
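A minimal sketch of that thresholding rule, assuming the grayscale image is loaded from splinet.png (the filename is taken from your snippet):

import cv2
import numpy as np

gray = cv2.imread('splinet.png', cv2.IMREAD_GRAYSCALE)
med = np.median(gray)                    # median of the grayscale image
thresh_val = 1.33 * med                  # 33% above the median
_, binary = cv2.threshold(gray, thresh_val, 255, cv2.THRESH_BINARY)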
This is what I got:
Now performing morphological dilation followed by contour operations you can highlight your region of interest with a rectangle.
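Sketched roughly, and continuing from the binary image above (the kernel size is a guess you would tune per image):

kernel = np.ones((15, 15), np.uint8)
dilated = cv2.dilate(binary, kernel, iterations=1)
res = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = res[0] if len(res) == 2 else res[1]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)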
Note:
Never set a manual threshold as you did. The right threshold varies between images, so always opt for a threshold based on the median of the image.
I am using opencv (the cv2 module) in Python to recognize objects in a video. In each frame, I want to extract a particular region, i.e. the contour. After learning from the opencv docs, I have the following code snippet:
# np is numpy module, contours are expected results,
# frame is each frame of the video
# Iterate through the contours.
for contour in contours:
    # Compute the bounding box for the contour, draw
    # it on the frame, and update the text.
    x, y, w, h = cv2.boundingRect(contour)
    # Find the mask and build a histogram for the object.
    mask = np.zeros(frame.shape[:2], np.uint8)
    mask[y:h, x:w] = 255
    masked_img = cv2.bitwise_and(frame, frame, mask=mask)
    obj_hist = cv2.calcHist([masked_img], [0], None, [256], [0, 256])
However, when I use matplotlib to show the masked_img, it returns a dark image. The obj_hist has only one element with number greater than 0, which is the first one. What is wrong?
The problem is the way you are setting the values in your mask. Specifically this line:
mask[y:h, x:w] = 255
You are trying to slice into each dimension of the image by using y:h and x:w to set up the mask. The left of the colon is the starting row or column, and the right of the colon denotes the end row or column. Given that you start at y, you need to offset by h using the same reference y... the same goes for x and w.
Slicing where the right value of the colon is less than the left does not modify the array at all, and that is mostly why you aren't getting any output: you never modify the mask, so it stays all zeroes.
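You can see this no-op behaviour in a tiny example:

import numpy as np

a = np.zeros(5, dtype=int)
a[3:1] = 9     # empty slice: stop < start, so nothing is written
print(a)       # [0 0 0 0 0]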
You probably meant to do:
mask[y:y+h, x:x+w] = 255
This will properly set the proper region given by cv2.boundingRect to white (255).
I followed this tutorial from official documentation. I run their code:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0,255,0), 3)
That is OK: no errors, but nothing is displayed. I want to display the result they got, as shown in their picture:
How can I display the result of the contours like that (just the left result or the right one)?
I know I must use cv2.imshow(something) but how in this specific case ?
First off, that example only shows you how to draw contours with the simple approximation. Bear in mind that even if you draw the contours with the simple approximation, it will be visualized as having a blue contour drawn completely around the rectangle as seen in the left image. You will not be able to get the right image by simply drawing the contours onto the image. In addition, you want to compare two sets of contours - the simplified version on the right with its full representation on the left. Specifically, you need to replace the cv2.CHAIN_APPROX_SIMPLE flag with cv2.CHAIN_APPROX_NONE to get the full representation. Take a look at the OpenCV doc on findContours for more details: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours
In addition, even though you draw the contours onto the image, it doesn't display the results. You'll need to call cv2.imshow for that. However, drawing the contours themselves will not show you the difference between the full and simplified version. The tutorial mentions that you need to draw circles at each contour point so we shouldn't use cv2.drawContours for this task. What you should do is extract out the contour points and draw circles at each point.
As such, create two images like so:
# Your code
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
## Step #1 - Detect contours using both methods on the same image
contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
### Step #2 - Reshape to 2D matrices
contours1 = contours1[0].reshape(-1,2)
contours2 = contours2[0].reshape(-1,2)
### Step #3 - Draw the points as individual circles in the image
img1 = im.copy()
img2 = im.copy()
for (x, y) in contours1:
    cv2.circle(img1, (x, y), 1, (255, 0, 0), 3)
for (x, y) in contours2:
    cv2.circle(img2, (x, y), 1, (255, 0, 0), 3)
Take note that the above code is for OpenCV 2. For OpenCV 3, there is an additional output from cv2.findContours: the first output, which you can ignore in this case:
_, contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
_, contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
Now let's walk through the code slowly. The first part of the code is what you provided. Now we move onto what is new.
Step #1 - Detect contours using both methods
Using the thresholded image, we detect contours using both the full and simple approximations. This gets stored in two lists, contours1 and contours2.
Step #2 - Reshape to 2D matrices
The contours themselves get stored as a list of NumPy arrays. For the simple image provided, there should only be one contour detected, so extract out the first element of the list, then use numpy.reshape to reshape the 3D matrices into their 2D forms where each row is a (x, y) point.
Step #3 - Draw the points as individual circles in the image
The next step would be to take each (x, y) point from each set of contours and draw them on the image. We make two copies of the original image in colour form, then we use cv2.circle and iterate through each pair of (x, y) points for both sets of contours and populate two different images - one for each set of contours.
Now, to get the figure you see above, there are two ways you can do this:
Create an image that stores both of these results together side by side, then show this combined image.
Use matplotlib, combined with subplot and imshow so that you can display two images in one window.
I'll show you how to do it using both methods:
Method #1
Simply stack the two images side by side, then show the image after:
out = np.hstack([img1, img2])
# Now show the image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
I stack them horizontally so that they are a combined image, then show this with cv2.imshow.
Method #2
You can use matplotlib:
import matplotlib.pyplot as plt
# Spawn a new figure
plt.figure()
# Show the first image on the left column
plt.subplot(1,2,1)
plt.imshow(img1[:,:,::-1])
# Turn off axis numbering
plt.axis('off')
# Show the second image on the right column
plt.subplot(1,2,2)
plt.imshow(img2[:,:,::-1])
# Turn off the axis numbering
plt.axis('off')
# Show the figure
plt.show()
This should display both images in separate subfigures within an overall figure window. If you take a look at how I'm calling imshow here, you'll see that I am swapping the RGB channels because OpenCV reads in images in BGR format. If you want to display images with matplotlib, you'll need to reverse the channels so that the images are in RGB format, which is what matplotlib expects.
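Equivalently, if you prefer an explicit conversion over the [:,:,::-1] reversal, OpenCV's cv2.cvtColor can do the swap:

plt.imshow(cv2.cvtColor(img1, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()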
To address your question in your comments, you would take which contour structure you want (contours1 or contours2) and search the contour points. contours is a list of all possible contours, and within each contour is a 3D matrix that is shaped in a N x 1 x 2 format. N would be the total number of points that represent the contour. I'm going to remove the singleton second dimension so we can get this to be a N x 2 matrix. Also, let's use the full representation of the contours for now:
points = contours1[0].reshape(-1,2)
I am going to assume that your image only has one object, hence my indexing into contours1 with index 0. I unravel the matrix so that it becomes a single row vector, then reshape the matrix so that it becomes N x 2. Next, we can find the minimum point by:
min_x = np.argmin(points[:,0])
min_point = points[min_x,:]
np.argmin finds the location of the smallest value in an array that you supply. In this case, we want to operate along the x coordinate, or the columns. Once we find this location, we simply index into our 2D contour point array and extract out the contour point.
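The same idea extends to the other extreme points of the contour, using the N x 2 points array from above:

leftmost = points[np.argmin(points[:, 0]), :]
rightmost = points[np.argmax(points[:, 0]), :]
topmost = points[np.argmin(points[:, 1]), :]
bottommost = points[np.argmax(points[:, 1]), :]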
You should add cv2.imshow("Title", img) at the end of your code. It should look like this:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0,255,0), 3)
cv2.imshow("title", im)
cv2.waitKey()
Add these 2 lines at the end:
cv2.imshow("title", im)
cv2.waitKey()
Also, be aware that you have img instead of im in your last line.