I am working with videos that have borders (margins) around them. Some have them along all 4 sides, some along the left and right only, and some along the top and bottom only. The width of these margins is also not fixed.
I am extracting frames from these videos, such as the two examples shown here. Both of them contain borders on the top and bottom.
Can anyone please suggest some methods to remove these borders from these images (preferably in Python)?
I came across some methods, like this one on Stack Overflow, but it deals with the ideal situation where the borders are perfectly black (0,0,0). In my case, they may not be pitch black and may also contain jittery noise.
Any help/suggestions would be highly appreciated.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to grayscale and invert
Threshold
Apply morphology to remove small black or white regions then invert again
Get the contour of the one region
Get the bounding box of that contour
Use numpy slicing to crop that area of the image to form the resulting image
Save the resulting image
import cv2
import numpy as np
# read image
img = cv2.imread('gymnast.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# invert gray image
gray = 255 - gray
# gaussian blur
blur = cv2.GaussianBlur(gray, (3,3), 0)
# threshold
thresh = cv2.threshold(blur,236,255,cv2.THRESH_BINARY)[1]
# apply close and open morphology to fill tiny black and white holes
kernel = np.ones((5,5), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# invert thresh
thresh = 255 - thresh
# get contours (presumably just one around the nonzero pixels)
# then crop it to bounding rectangle
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
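# (cv2.findContours returns 2 values in OpenCV 4.x and 3 values in 3.x; the next line handles both cases)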
contours = contours[0] if len(contours) == 2 else contours[1]
cntr = contours[0]
x,y,w,h = cv2.boundingRect(cntr)
crop = img[y:y+h, x:x+w]
cv2.imshow("IMAGE", img)
cv2.imshow("THRESH", thresh)
cv2.imshow("CROP", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save cropped image
cv2.imwrite('gymnast_crop.png',crop)
Input:
Thresholded and cleaned image:
Cropped Result:
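Since the frames come from videos with a constant border, one option is to compute the box on a single representative frame as above and then reuse it for every frame. Below is a minimal sketch of that idea, assuming OpenCV's VideoCapture, a hypothetical file name, and that the (x, y, w, h) box from cv2.boundingRect above is valid for all frames:
import cv2

def crop_video_frames(path, box):
    # apply one fixed (x, y, w, h) crop box to every frame of a video
    x, y, w, h = box
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame[y:y+h, x:x+w])
    cap.release()
    return frames

# usage (hypothetical file name): cropped = crop_video_frames('gymnast_video.mp4', (x, y, w, h))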
Related
I want to crop the image to only extract the text sections. There are thousands of them with different sizes, so I can't hardcode the coordinates. I'm trying to remove the unwanted lines on the left and on the bottom. How can I do this?
Original
Expected
Determine the minimum bounding box by finding all the non-zero points in the image, then crop your image using that bounding box. Finding contours is more time-consuming and unnecessary here, especially because your text is axis-aligned. You can accomplish your goal by combining cv2.findNonZero and cv2.boundingRect.
Hope this works:
import numpy as np
import cv2
img = cv2.imread(r"W430Q.png")
# Read in the image and convert to grayscale
img = img[:-20, :-20] # Perform pre-cropping
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = 255*(gray < 50).astype(np.uint8) # To invert the text to white
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, np.ones((2, 2), dtype=np.uint8))  # Perform noise filtering
coords = cv2.findNonZero(gray) # Find all non-zero points (text)
x, y, w, h = cv2.boundingRect(coords) # Find minimum spanning bounding box
# Crop the image - note we do this on the original image
rect = img[y:y+h, x:x+w]
cv2.imshow("Cropped", rect) # Show it
cv2.waitKey(0)
cv2.destroyAllWindows()
In the code above, the fourth line of code (gray = 255*(gray < 50)) is where I set the threshold below 50 to make the dark text white. Because this comparison outputs a boolean image, I convert it to uint8 and then scale by 255; the text is effectively inverted.
Then, using cv2.findNonZero, we find all of the non-zero locations in this image. We then pass this to cv2.boundingRect, which returns the top-left corner of the bounding box as well as its width and height. Finally, we can use this to crop the image. Note that this is done on the original image, not the inverted version.
Here's a simple approach:
Obtain binary image. Load the image, grayscale, Gaussian blur, then Otsu's threshold to obtain a binary black/white image.
Remove horizontal lines. Since we're trying to only extract text, we remove horizontal lines to aid us in our next step so incorrect contours will not merge together.
Merge text into a single contour. The idea is that characters which are adjacent to each other are part of the wall of text. So we can dilate individual contours together to obtain a single contour to extract.
Find contours and extract ROI. We find contours, sort contours by area, then extract the largest contour ROI using Numpy slicing.
Here's the visualization of each step:
Binary image -> Removed horizontal lines in green
Dilate to combine into a single contour -> Detected ROI to extract in green
Result
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3, 3), 0)
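# (with THRESH_OTSU the threshold value of 0 passed below is ignored; Otsu's method picks it automatically)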
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))
detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Dilate to merge into a single contour
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,30))
dilate = cv2.dilate(thresh, vertical_kernel, iterations=3)
# Find contours, sort for largest contour and extract ROI
cnts, _ = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 4)
    ROI = original[y:y+h, x:x+w]
    break
cv2.imshow('image', image)
cv2.imshow('dilate', dilate)
cv2.imshow('thresh', thresh)
cv2.imshow('ROI', ROI)
cv2.waitKey()
This is my image, and this is what I want to get, but the problem is that I am not able to enclose the contour, and I am not sure how I should add these dots. Does OpenCV have any such function to handle this?
So basically:
The first problem is how to enclose this image.
Second, how to add the dots.
Thank you
Here is one way to do that in Python/OpenCV. However, I cannot close your dotted outline without connecting separate regions. But it will give you some idea of how to proceed with most of what you want to do.
If you manually add a few more dots to your input image where there are large gaps, then the morphology kernel can be made smaller, such that it can connect the regions without merging separate parts that should remain isolated.
Read the input
Convert to grayscale
Threshold to binary
Apply morphology close to try to close the dotted outline. Unfortunately it connected separate regions.
Get the external contours
Draw white filled contours on a black background as a mask
Draw a single black circle on a white background
Tile out the circle image to the size of the input
Mask the tiled circle image with the filled contour image
Save results
Input:
import cv2
import numpy as np
import math
# read input image
img = cv2.imread('island.png')
hh, ww = img.shape[:2]
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]
# use morphology to close figure
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (35,35))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# find contours and draw them filled as a white mask on a black background
mask = np.zeros_like(thresh)
contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    cv2.drawContours(mask, [cntr], 0, 255, -1)
# create a single tile as black circle on white background
circle = np.full((11,11), 255, dtype=np.uint8)
circle = cv2.circle(circle, (7,7), 3, 0, -1)
# tile out the tile pattern to the size of the input
numht = math.ceil(hh / 11)
numwd = math.ceil(ww / 11)
tiled_circle = np.tile(circle, (numht,numwd))
tiled_circle = tiled_circle[0:hh, 0:ww]
# composite tiled_circle with mask
result = cv2.bitwise_and(tiled_circle, tiled_circle, mask=mask)
# save result
cv2.imwrite("island_morph.jpg", morph)
cv2.imwrite("island_mask.jpg", mask)
cv2.imwrite("tiled_circle.jpg", tiled_circle)
cv2.imwrite("island_result.jpg", result)
# show images
cv2.imshow("morph", morph)
cv2.imshow("mask", mask)
cv2.imshow("tiled_circle", tiled_circle)
cv2.imshow("result", result)
cv2.waitKey(0)
Morphology connected image:
Contour Mask image:
Tiled circles:
Result:
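As noted above, if a few extra dots are added by hand to bridge the largest gaps, the closing kernel can be shrunk so that separate regions are not merged. A minimal sketch of that variant (the 15x15 size is only an illustrative guess and depends on how big the remaining gaps are):
# smaller closing kernel: large enough to bridge the (now smaller) gaps,
# but small enough not to merge regions that should stay separate
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15,15))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)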
This is Wendy, with an image-processing-related question.
I have to select the bubbles properly to analyze their size and calculate their velocity across the image sequences.
original image
image sequences
In the picture, the largest shape is a droplet, and the droplet contains lots of tiny bubbles inside it.
So far, the questions are:
1. Because the shapes of the bubbles are broken, few contours are detected, while lots of edges are detected. How can I close the edges into contours (like a coin: I just want the outline of each bubble and don't want the lines inside the bubbles)?
current progress
Code (detect contours):
import cv2
import numpy as np

# read image, convert to grayscale, blur, and detect edges
img = cv2.imread("1000.tif")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3,3), 0)
edged = cv2.Canny(blurred, 5, 60)
cv2.imshow('edged', edged)
cv2.waitKey(0)

# find and draw the external contours
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = img.copy()
cv2.drawContours(result, contours, -1, (0,255,0), 1)
cv2.imwrite("contours.tif", result)
cv2.imshow("contours", result)
cv2.waitKey(0)
Code (fill holes):
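# flood fill from a corner of the edge image, invert the result, then OR it with the edges to fill the enclosed holes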
im_floodfill = edged.copy()
h, w = edged.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(im_floodfill, mask, (0,0), 255)
im_floodfill_inv = cv2.bitwise_not(im_floodfill)
im_out = edged | im_floodfill_inv
cv2.imshow("Thresholded Image", edged)
cv2.imshow("Floodfilled Image", im_floodfill)
cv2.imshow("Inverted Floodfilled Image", im_floodfill_inv)
cv2.imshow("Foreground", im_out)
cv2.waitKey(0)
cv2.imwrite("fill.jpg", im_out)
The result will be:
2. The velocity of the bubbles is my objective. I've tried putting the binary image sequences into ImageJ and using TrackMate (a tool for automated and semi-automated particle tracking). It tracks with a fixed bubble size setting; however, the bubble sizes vary quite a lot in my case, so is there any method to track the moving bubbles?
If this isn't clear enough please let me know and I can try and be more specific. Thank you for your time.
Here is one way in Python/OpenCV.
Read the input
Convert to gray
Threshold
Fill the holes with morphology close
Get the contours and the largest contour (though presumably just one)
Draw the contour on a copy of the input
Save the results
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('vapor.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
#thresh = cv2.threshold(gray,5,255,cv2.THRESH_BINARY)[1]
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# apply close morphology to fill white circles
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (45,45))
thresh_cleaned = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get largest contour
contours = cv2.findContours(thresh_cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# draw contour
result = img.copy()
cv2.drawContours(result,[big_contour],0,(0,0,255),1)
# save image with points drawn
cv2.imwrite('vapor_thresh.jpg',thresh)
cv2.imwrite('vapor_cleaned_thresh.jpg',thresh_cleaned)
cv2.imwrite('vapor_contour.jpg',result)
cv2.imshow("thresh", thresh)
cv2.imshow("thresh_cleaned", thresh_cleaned)
cv2.imshow("contour", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold image:
Morphology filled image:
Contour image:
I have the following image, which is a scan of a printed paper with 4 images. I printed 4 images on the same sheet of paper to save printing resources:
However, now I need to extract each image and create an individual image file for each of them. Is there any easy way of doing that with Python, MATLAB, or any other programming language?
Here is one way to do that in Python/OpenCV. It requires that the pictures' colors at their edges be sufficiently different from the background color. If so, you can threshold the image, then get the contours and use their bounding boxes to crop out each image.
Read the input
Threshold based on the background color
Invert the threshold, so that the background is black
Apply morphology open and close to fill the picture regions and remove noise
Get the external contours
For each contour, get its bounding box and crop the input image and save it to disk
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('4faces.jpg')
# threshold on background color
lowerBound = (230,230,230)
upperBound = (255,255,255)
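# inRange keeps pixels whose B, G and R values all fall within [230,255], i.e. the near-white background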
thresh = cv2.inRange(img, lowerBound, upperBound)
# invert so background black
thresh = 255 - thresh
# apply morphology to ensure regions are filled and remove extraneous noise
kernel = np.ones((7,7), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((11,11), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# get bounding boxes and crop input
i = 1
for cntr in contours:
    # get bounding boxes
    x,y,w,h = cv2.boundingRect(cntr)
    crop = img[y:y+h, x:x+w]
    cv2.imwrite("4faces_crop_{0}.png".format(i), crop)
    i = i + 1
# save threshold
cv2.imwrite("4faces_thresh.png",thresh)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image after morphology cleaning:
Cropped Images:
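One caveat: cv2.findContours does not return the regions in any particular reading order, so the numbering of the cropped files is arbitrary. If the order matters, one option is to sort the contours by bounding-box position before the loop, for example top-to-bottom and then left-to-right:
# sort contours by bounding-box position: top-to-bottom, then left-to-right
contours = sorted(contours, key=lambda c: (cv2.boundingRect(c)[1], cv2.boundingRect(c)[0]))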
I am stitching multiple images into one. In an intermediate step, I get an image like:
It isn't necessarily the case that the image starts at the left and has the black region on the right. I want to obtain a rectangular image from this one that doesn't contain the black region. That is, something like:
Can someone please suggest a way to do that?
Here is one way to crop out the excess black on the right side of the image:
Read the image
Convert to grayscale
Threshold
Apply closing and opening morphology to remove small black and white spots.
Get the surrounding contour
Crop to the contour's bounding rectangle
Save the result
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('road.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
_,thresh = cv2.threshold(gray,5,255,cv2.THRESH_BINARY)
# apply close and open morphology to fill tiny black and white holes
kernel = np.ones((5,5), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# get contours (presumably just one around the nonzero pixels)
# then crop it to bounding rectangle
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    crop = img[y:y+h,x:x+w]
# show cropped image
cv2.imshow("CROP", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save cropped image
cv2.imwrite('road_crop.png',crop)
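If the threshold ever leaves more than one external contour, a safer variant is to crop to the largest one (as done in the earlier examples) rather than whichever contour the loop happens to visit last:
# safer variant: crop to the largest contour by area
big_contour = max(contours, key=cv2.contourArea)
x,y,w,h = cv2.boundingRect(big_contour)
crop = img[y:y+h, x:x+w]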