Crop subimage from image with Edge Detection - python

So basically I have an image with whitespace and text above it.
The output should be only the picture, without the text and the whitespace. The best example would probably be a meme:
I believe I would have to get the corner coordinates and then use something like Pillow's Image.crop(corner_coordinates).
How could I implement this?
Edit: So I tried a little bit. I used the Canny edge detection algorithm (OpenCV). I'm now getting the desired edges, but also the edges from the text. Would be nice if someone could help me :)

You may find the bounding rectangle of the largest contour that is not white.
I suggest using the following stages:
Convert image from BGR to Gray.
Convert from gray to a binary image.
Use automatic threshold (use cv2.THRESH_OTSU flag) and invert polarity.
The result is white color where original image is dark, and black where image is bright.
Find contours using cv2.findContours() (as Mark Setchell commented).
Finding the outer contour is a simpler solution than detecting edges.
Find the bounding rectangle of the contour with the maximum area.
Crop the bounding rectangle from the input image.
I used NumPy array slicing instead of using pillow.
Here is the code:
import cv2
# Read input image
img = cv2.imread('img.jpg')
# Convert from BGR to Gray.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Convert to binary image using automatic threshold and invert polarity
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Find contours on thresh
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # Use index [-2] to be compatible with OpenCV 3 and 4
# Get contour with maximum area
c = max(cnts, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(c)
# Crop the bounding rectangle (use .copy() to get a copy instead of a view on the slice).
crop = img[y:y+h, x:x+w, :].copy()
# Draw red rectangle for testing
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 0, 255), thickness = 2)
# Show result
cv2.imshow('img', img)
cv2.imshow('crop', crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result images: crop (the cropped output), img (the input with the red bounding rectangle drawn), and thresh (the binary image).

Related

Python: How to cut out an area with specific color from image (OpenCV, Numpy)

So I've been trying to code a Python script which takes an image as input and then cuts out a rectangle with a specific background color. However, what causes a problem for my coding skills is that the rectangle is not at a fixed position in every image (the position will be random).
I do not really understand how to manage the numpy functions. I also read something about OpenCV, but I'm totally new to it. So far I just cropped the images through the ".crop" function, but then I would have to use fixed values.
This is how the input image could look and now I would like to detect the position of the yellow rectangle and then crop the image to its size.
Help is appreciated, thanks in advance.
Edit: @MarkSetchell's way works pretty well, but I found an issue with a different test picture. The problem with the other picture is that there are 2 small pixels with the same color at the top and the bottom of the picture, which cause errors or a bad crop.
Updated Answer
I have updated my answer to cope with specks of noisy outlier pixels of the same colour as the yellow box. This works by running a 3x3 median filter over the image first to remove the spots:
#!/usr/bin/env python3
import numpy as np
from PIL import Image, ImageFilter
# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
orig = na.copy() # Save original
# Median filter to remove outliers
im = im.filter(ImageFilter.MedianFilter(3))
na = np.array(im) # Re-make the array from the filtered image so the search ignores the specks
# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83], axis=2))
top, bottom = yellowY[0], yellowY[-1]
left, right = yellowX[0], yellowX[-1]
print(top,bottom,left,right)
# Extract Region of Interest from unblurred original
ROI = orig[top:bottom+1, left:right+1] # +1 because slicing excludes the end index
Image.fromarray(ROI).save('result.png')
Original Answer
Ok, your yellow colour is rgb(247,213,83), so we want to find the X,Y coordinates of all yellow pixels:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))
# Find first and last row containing yellow pixels
top, bottom = yellowY[0], yellowY[-1]
# Find first and last column containing yellow pixels
left, right = yellowX[0], yellowX[-1]
# Extract Region of Interest
ROI = na[top:bottom+1, left:right+1] # +1 because slicing excludes the end index
Image.fromarray(ROI).save('result.png')
You can do the exact same thing in Terminal with ImageMagick:
# Get trim box of yellow pixels
trim=$(magick image.png -fill black +opaque "rgb(247,213,83)" -format %@ info:)
# Check how it looks
echo $trim
251x109+101+220
# Crop image to trim box and save as "ROI.png"
magick image.png -crop "$trim" ROI.png
If still using ImageMagick v6 rather than v7, replace magick with convert.
What I see is dark and light gray areas on the sides and top, a white area, and a yellow rectangle with gray triangles inside the white area.
The first stage I suggest is converting the image from RGB color space to HSV color space.
The S channel in HSV space is the color saturation channel.
All colorless pixels (gray/black/white) are zero, and yellow pixels are above zero in the S channel.
Next stages:
Apply threshold on S channel (convert it to binary image).
The yellow pixels go to 255, and all others go to zero.
Find contours in thresh (find only the outer contour - only the rectangle).
Invert polarity of the pixels inside the rectangle.
The gray triangles become 255, and other pixels are zeros.
Find contours in thresh - find the gray triangles.
Here is the code:
import numpy as np
import cv2
# Read input image
img = cv2.imread('img.png')
# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]
# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Find contours in thresh (find only the outer contour - only the rectangle).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
# Mark rectangle with green line
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
# Assume there is only one contour, get the bounding rectangle of the contour.
x, y, w, h = cv2.boundingRect(contours[0])
# Invert polarity of the pixels inside the rectangle (on thresh image).
thresh[y:y+h, x:x+w] = 255 - thresh[y:y+h, x:x+w]
# Find contours in thresh (find the triangles).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
# Iterate triangle contours
for c in contours:
    if cv2.contourArea(c) > 4:  # Ignore very small contours
        # Mark triangle with blue line
        cv2.drawContours(img, [c], -1, (255, 0, 0), 2)
# Show result (for testing).
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Intermediate results (shown as images): the S channel in HSV color space, thresh (S after thresholding), thresh after inverting the polarity of the rectangle, and the final result with the rectangle and triangles marked.
Update:
In case there are some colored dots on the background, you can crop the largest colored contour:
import cv2
import imutils # https://pypi.org/project/imutils/
# Read input image
img = cv2.imread('img2.png')
# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]
cv2.imwrite('s.png', s)
# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Find contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = imutils.grab_contours(cnts)
# Find the contour with the maximum area.
c = max(cnts, key=cv2.contourArea)
# Get bounding rectangle
x, y, w, h = cv2.boundingRect(c)
# Crop the bounding rectangle out of img
out = img[y:y+h, x:x+w, :].copy()
Result: the bounding rectangle of the largest colored contour, cropped out of the image.
In OpenCV you can use inRange. This basically makes whatever color is in the range you specified white, and the rest black. This way all the yellow will be white.
Here is the documentation: https://docs.opencv.org/3.4/da/d97/tutorial_threshold_inRange.html
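A minimal sketch of that idea (the filename and tolerance are illustrative; note that OpenCV loads images as BGR, so yellow rgb(247,213,83) becomes (83,213,247)):
import cv2
import numpy as np
img = cv2.imread('image.png') # OpenCV loads images as BGR
# Allow a small tolerance around BGR (83,213,247)
lower = np.array([73, 203, 237])
upper = np.array([93, 223, 255])
mask = cv2.inRange(img, lower, upper) # yellow pixels -> 255, rest -> 0
x, y, w, h = cv2.boundingRect(mask) # bounding box of the non-zero pixels
crop = img[y:y+h, x:x+w]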

How to approximate jagged edges as lines using Python OpenCV?

I am trying to find accurate locations for the corners on ink blotches as seen below:
My idea is to fit lines to the edges and then find where they intersect. As of now, I've tried using cv2.approxPolyDP() with various values of epsilon to approximate the edges; however, this doesn't look like the way to go. My cv2.approxPolyDP code gives the following result:
Ideally, this is what I want to produce (drawn in Paint):
Are there CV functions in place for this sort of problem? I've considered using Gaussian blurring before the threshold step, although that method does not seem like it would be very accurate for corner finding. Additionally, I would like this to be robust to rotated images, so filtering for vertical and horizontal lines won't necessarily work without other considerations.
Code:
import numpy as np
from PIL import ImageGrab
import cv2
def process_image4(original_image):  # Douglas-Peucker approximation
    # Convert to black and white threshold map
    gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    (thresh, bw) = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Convert bw image back to colored so that red, green and blue contour lines are visible, draw contours
    modified_image = cv2.cvtColor(bw, cv2.COLOR_GRAY2BGR)
    contours, hierarchy = cv2.findContours(bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(modified_image, contours, -1, (255, 0, 0), 3)
    # Contour approximation
    try:  # Just to be sure it doesn't crash while testing!
        for cnt in contours:
            epsilon = 0.005 * cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, epsilon, True)
            # cv2.drawContours(modified_image, [approx], -1, (0, 0, 255), 3)
    except:
        pass
    return modified_image

def screen_record():
    while True:
        screen = np.array(ImageGrab.grab(bbox=(100, 240, 750, 600)))
        image = process_image4(screen)
        cv2.imshow('window', image)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

screen_record()
A note about my code: I'm using screen capture so that I can process these images live. I have a digital microscope that can display live feed on a screen, so the constant screen recording will allow me to sample from the video feed and locate the corners live on the other half of my screen.
Here's a potential solution using thresholding + morphological operations:
Obtain binary image. We load the image, grayscale, blur with cv2.bilateralFilter(), then Otsu's threshold
Morphological operations. We perform a series of morphological open and close to smooth the image and remove noise
Find distorted approximated mask. We find the rotated bounding rectangle of the object with cv2.minAreaRect() and cv2.boxPoints(), then draw this onto a mask
Find corners. We use the Shi-Tomasi Corner Detector already implemented as cv2.goodFeaturesToTrack() for corner detection. Take a look at this for an explanation of each parameter
Here's a visualization of each step:
Binary image -> Morphological operations -> Approximated mask -> Detected corners
Here are the corner coordinates:
(103, 550)
(1241, 536)
Here's the result for the other images
(558, 949)
(558, 347)
Finally for the rotated image
(201, 99)
(619, 168)
Code
import cv2
import numpy as np
# Load image, bilaterial blur, and Otsu's threshold
image = cv2.imread('1.png')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray,9,75,75)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,10))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
# Find distorted rectangle contour and draw onto a mask
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
rect = cv2.minAreaRect(cnts[0])
box = cv2.boxPoints(rect)
box = np.intp(box) # np.intp, since the older np.int0 alias was removed in NumPy 2.0
cv2.drawContours(image,[box],0,(36,255,12),4)
cv2.fillPoly(mask, [box], (255,255,255))
# Find corners
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(mask,4,.8,100)
offset = 25
for corner in corners:
    x, y = corner.ravel()
    x, y = int(x), int(y) # OpenCV drawing functions need integer coordinates
    cv2.circle(image, (x, y), 5, (36,255,12), -1)
    cv2.rectangle(image, (x - offset, y - offset), (x + offset, y + offset), (36,255,12), 3)
    print("({}, {})".format(x, y))
cv2.imshow('image', image)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.imshow('mask', mask)
cv2.waitKey()
Note: The idea for the distorted bounding box came from a previous answer in How to find accurate corner positions of a distorted rectangle from blurry image
After seeing the description of the corners, here is what I would recommend:
by any method, find the gross location of the corners (for instance by looking for the extreme values of (±X+±Y, ±X+±Y) or (±X, ±Y)).
consider a strip that joins two corners, with a certain width. Extract the pixels in that strip, on a portion close to the corner, rotate to make it horizontal and average the values along horizontals.
you will obtain a gray profile that tells you the accurate position of the edge, at the mean between the background and foreground intensities.
repeat on all four edges and at both ends. This will give you four accurate corners, by intersection. A sketch of the profile step follows below.
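As a rough illustration of the profile step, here is a minimal sketch (assuming a grayscale strip already extracted and rotated so the edge runs roughly horizontally, dark above and bright below; the function name edge_row is hypothetical):
import numpy as np
def edge_row(strip):
    # strip: 2D grayscale array whose edge is roughly horizontal.
    # Average each row, then find where the averaged profile crosses the
    # midpoint between background and foreground intensities.
    profile = strip.astype(float).mean(axis=1) # one averaged value per row
    mid = (profile.min() + profile.max()) / 2.0 # half-way intensity level
    i = int(np.argmax(profile >= mid)) # first row at or past the mid level
    if i == 0:
        return 0.0
    p0, p1 = profile[i - 1], profile[i]
    return (i - 1) + (mid - p0) / (p1 - p0) # linear interpolation between samples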

How to detect the colors of detected shapes OpenCV

I wrote the code below to detect 3D shapes in an image and it works correctly.
Now I need to detect the colors inside the shapes and calculate them.
Could anyone point me where should I start with color detection?
Code for shape detection below, maybe it will be of use:
import cv2
import numpy as np
rawImage = cv2.imread('image.png') # filename assumed; the original snippet did not show the read
cv2.imshow('Original Image',rawImage)
cv2.waitKey(0)
hsv = cv2.cvtColor(rawImage, cv2.COLOR_BGR2HSV)
cv2.imshow('HSV Image',hsv)
cv2.waitKey(0)
hue, saturation, value = cv2.split(hsv)
cv2.imshow('Saturation Image',saturation)
cv2.waitKey(0)
retval, thresholded = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imshow('Thresholded Image',thresholded)
cv2.waitKey(0)
medianFiltered = cv2.medianBlur(thresholded,5)
cv2.imshow('Median Filtered Image',medianFiltered)
cv2.waitKey(0)
cnts, hierarchy = cv2.findContours(medianFiltered, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    first = cv2.drawContours(rawImage, [c], -1, (0, 255, 0), 2)
    second = cv2.circle(rawImage, (cX, cY), 1, (255, 255, 255), -1)
cv2.imshow('Objects Detected',rawImage)
cv2.waitKey(0)
Basic idea:
(1) Convert the image to HSV color space;
(2) Threshold the `S` channel to find color regions;
(3) Calculate the average HSV for each color-region mask, then convert it into BGR.
For the example image, each stage is shown as an image: the HSV conversion, the thresholded S channel, and the average color computed per region. A minimal sketch of the idea follows below.
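Here is a minimal sketch of that idea (the filename and minimum contour area are illustrative; cv2.mean computes the average inside a mask):
import cv2
import numpy as np
img = cv2.imread('img.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
s = hsv[:, :, 1]
# Threshold S: colored regions -> 255, gray/white/black -> 0
_, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in cnts:
    if cv2.contourArea(c) < 10: # skip tiny specks
        continue
    mask = np.zeros(s.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, -1) # filled mask of this region
    mean_hsv = cv2.mean(hsv, mask=mask)[:3] # average HSV inside the mask
    pixel = np.uint8([[mean_hsv]]) # 1x1 "image" so cvtColor can convert it
    mean_bgr = cv2.cvtColor(pixel, cv2.COLOR_HSV2BGR)[0, 0]
    print('average BGR:', mean_bgr)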
Some links that may be useful:
1. Just detect color regions:
(1) How to detect colored patches in an image using OpenCV?
(2) OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
2. Detect a specific color in HSV:
(1) for green: How to define a threshold value to detect only green colour objects in an image :Opencv
(2) for orange: Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)
(3) for red, mind the H wrap-around: How to find the RED color regions using OpenCV?
3. If you want to crop a polygon mask:
(1) Cropping Concave polygon from Image using Opencv python

drawcontours on different position

I would like to draw contours in the middle of a blank image. I don't know how to set the location where the contour will be drawn. This is the line I use:
cv2.drawContours(bimg, c, -1, 255, 1)
bimg is the blank image, c is the contour I've extracted from an image. I believe I can move the contour by manipulating c, but I don't understand how c is actually structured.
You can look at the official documentation of OpenCV for contours. This code can be used to find the contours of a thresholded image and draw them on a white background in red:
import cv2
import numpy as np
img = cv2.imread('image_name.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh1 = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY) # threshold the grayscale image
cnts = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2] # [-2] is compatible with OpenCV 3 and 4
bgr = np.ones((img.shape[0], img.shape[1], 3), dtype='uint8')*255 # white 3-channel background of the same size as the input
cv2.drawContours(bgr, cnts, -1, (0,0,255), 1) # draw the contours in red
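To actually move the contour, note that c is just a NumPy array of points with shape (N, 1, 2) holding (x, y) pairs, so you can shift it by adding an offset; cv2.drawContours also accepts an offset argument. A minimal sketch (assuming bimg and c from the question):
import cv2
import numpy as np
# c has shape (N, 1, 2): N points, each an (x, y) pair
x, y, w, h = cv2.boundingRect(c)
H, W = bimg.shape[:2]
# Offset that moves the contour's bounding box to the image center
dx = W // 2 - (x + w // 2)
dy = H // 2 - (y + h // 2)
shifted = c + np.array([dx, dy]) # shift every point
cv2.drawContours(bimg, [shifted], -1, 255, 1)
# Equivalent: let drawContours apply the shift itself
# cv2.drawContours(bimg, [c], -1, 255, 1, offset=(dx, dy))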

How to crop the internal area of a contour?

I am working on retinal fundus images. The image consists of a circular retina on a black background. With OpenCV, I have managed to get a contour which surrounds the whole circular retina. What I need is to crop out the circular retina from the black background.
It is unclear in your question whether you want to actually crop out the information that is defined within the contour or mask out the information that isn't relevant to the contour chosen. I'll explore what to do in both situations.
Masking out the information
Assuming you ran cv2.findContours on your image, you will have received a structure that lists all of the contours available in your image. I'm also assuming that you know the index of the contour that was used to surround the object you want. Assuming this is stored in idx, first use cv2.drawContours to draw a filled version of this contour onto a blank image, then use this image to index into your image to extract out the object. This logic masks out any irrelevant information and only retains what is important - which is defined within the contour you have selected. The code to do this would look something like the following, assuming your image is a grayscale image stored in img:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
If you actually want to crop...
If you want to crop the image, you need to define the minimum spanning bounding box of the area defined by the contour. You can find the top left and lower right corner of the bounding box, then use indexing to crop out what you need. The code will be the same as before, but there will be an additional cropping step:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Now crop
(y, x) = np.where(mask == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
out = out[topy:bottomy+1, topx:bottomx+1]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
The cropping code works such that when we define the mask to extract out the area defined by the contour, we additionally find the smallest horizontal and vertical coordinates, which define the top left corner of the contour. We similarly find the largest horizontal and vertical coordinates, which define the bottom right corner of the contour. We then use indexing with these coordinates to crop what we actually need. Note that this performs cropping on the masked image - that is, the image that removes everything but the information contained within the largest contour.
Note with OpenCV 3.x
It should be noted that the above code assumes you are using OpenCV 2.4.x. Take note that in OpenCV 3.x, the definition of cv2.findContours has changed. Specifically, the output is a three-element tuple, where the first element is the source image, while the other two are the same as in OpenCV 2.4.x. Therefore, simply change the cv2.findContours statement in the above code to ignore the first output:
_, contours, _ = cv2.findContours(...) # Your call to find contours
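Note that OpenCV 4.x reverted to returning two values, so a version-agnostic pattern (a small sketch, not part of the original answer) is to take the second-to-last return value, which is the contour list in every version:
import cv2
# assuming a binary image stored in thresh
res = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = res[-2] # contours are second-to-last in OpenCV 2.4.x, 3.x and 4.x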
Here's another approach to crop out a rectangular ROI. The main idea is to obtain a binary image of the retina with Otsu's threshold, find contours, and then extract the ROI using Numpy slicing. Assuming you have an input image like this:
Extracted ROI
import cv2
# Load image, convert to grayscale, and find edges
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1]
# Find contour and sort by contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
# Find bounding box and extract ROI
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    break
cv2.imshow('ROI',ROI)
cv2.imwrite('ROI.png',ROI)
cv2.waitKey()
This is a pretty simple way. Mask the image with transparency.
Read the image
Make a grayscale version.
Otsu Threshold
Apply morphology open and close to thresholded image as a mask
Put the mask into the alpha channel of the input
Save the output
Input
Code
import cv2
import numpy as np
# load image and convert to grayscale
img = cv2.imread('retina.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold input image using otsu thresholding as mask and refine with morphology
ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((9,9), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# put mask into alpha channel of result
result = img.copy()
result = cv2.cvtColor(result, cv2.COLOR_BGR2BGRA)
result[:, :, 3] = mask
# save resulting masked image
cv2.imwrite('retina_masked.png', result)
Output
