How can I isolate a basketball court more effectively using OpenCV (Python)?

My goal is to try and isolate the court in the below frame and to outline it:
I'm using OpenCV for Python and here are my results after taking the following steps:
Converting the image to HSV
Isolating pixels within a given hue range
Developing a bitwise-AND mask
Using Canny edge detection
Here is my mask:
And here is the result from my Canny edge detector:
As you can see, my Canny detector performed very poorly and there is a lot of noise in my mask. I tried techniques such as erosion and dilation, but they didn't help much.
What else can I do to make sure when I pass the mask along to the Hough Line Transformer, it will actually be able to detect the edges of the court?
Here is some code for reference:
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('imgs/bulls.jpg')
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Sample of the court color in BGR, converted to HSV to get its hue
court_color = np.uint8([[[160, 221, 248]]])
hsv_court_color = cv2.cvtColor(court_color, cv2.COLOR_BGR2HSV)
hue = hsv_court_color[0][0][0]

# Define a range around the court hue in HSV
lower_color = np.array([hue - 10, 10, 10])
upper_color = np.array([hue + 10, 255, 255])

# Threshold the HSV image to keep only the court color
mask = cv2.inRange(hsv_img, lower_color, upper_color)

# Bitwise-AND mask and original image
res = cv2.bitwise_and(img, img, mask=mask)

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.title('Original Image'), plt.show()
plt.imshow(mask, cmap='Greys'), plt.title('Mask'), plt.savefig('imgs/mask.jpg'), plt.show()

# Erosion
kernel = np.ones((2, 2), np.uint8)
erosion = cv2.erode(mask, kernel, iterations=5)

# Dilation
dilation = cv2.dilate(mask, kernel, iterations=3)

# Opening
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Closing
closing = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
EDIT: I am attempting to replicate this research: web.stanford.edu/class/ee368/Project_Spring_1415/Reports/…. I want to isolate the court by detecting the straight lines that outline it so that I can eventually use homography to find coordinates of players on the court.
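For reference, once the court outline is found, the homography step is a standard perspective transform. A minimal sketch, assuming the four court corners have already been detected (all pixel coordinates below are made-up placeholders, not values from my image):
import cv2
import numpy as np

# Four court corners in the image (hypothetical placeholder values),
# matched to real court coordinates in feet (an NBA court is 94 x 50)
img_corners = np.float32([[120, 300], [850, 290], [980, 620], [40, 640]])
court_corners = np.float32([[0, 0], [94, 0], [94, 50], [0, 50]])

H = cv2.getPerspectiveTransform(img_corners, court_corners)

# Map a player's image position (placeholder) onto court coordinates
player_img = np.float32([[[500, 450]]])
player_court = cv2.perspectiveTransform(player_img, H)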

Detecting Hough lines on the image is your best bet here, because court colors vary from place to place and so do camera settings. Detecting the lines, plus some further processing using uniform color patches, should let you segment the court region with reasonable accuracy.
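A minimal sketch of that direction (the Canny and Hough parameters below are guesses to tune on your footage, not verified values):
import cv2
import numpy as np

img = cv2.imread('imgs/bulls.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Run Canny on the grayscale frame rather than on the noisy mask
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# Probabilistic Hough transform; a long minLineLength favors court boundaries
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)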

Related

How to prevent inpainting blocks and improve coloring

I want to remove all the text from this image using inpainting. I tried various methods and eventually found that I can get results by running OCR and then masking the image using thresholding.
processedImage = preprocess(partOFimg)
mask = np.ones(img.shape[:2], dtype="uint8") * 255
for c in cnts:
    cv2.drawContours(mask, [c], -1, 0, -1)
img = cv2.inpaint(img, mask, 7, cv2.INPAINT_TELEA)
Preprocess operations:
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
dilation = cv2.dilate(thresh1, rect_kernel, iterations=1)
edged = cv2.Canny(dilation, 50, 100)
cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
mask = np.ones(img.shape[:2], dtype="uint8") * 255
When I run the above code, this is the output image I get. As you can see, it forms blocks of a different color over the image, which I want to prevent. The masks often don't form well, and when the text is white the preprocessing doesn't work properly.
How do I prevent these blocks of other colors from forming on the image?
Grayed sub-image:
Thresholded sub-image:
Masked image:
EDIT 1:
I've managed to get a better result by noticing that my threshold is the best mask I can get. I then performed the masking and inpainting process three times with different masks, inverting the mask on some passes, because in some cases the required mask is the inverse. I still think it needs improvement; if I choose a different image the results are not as good.
Python/OpenCV inpaint methods are generally not appropriate for your type of image. They work best on thin (scratch-like) regions, not large blocks. You really need an exemplar-based method such as https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/criminisi_tip2004.pdf, but OpenCV does not have one.
However, I suspect the OpenCV methods do work here because you are filling with constant colors (green), not texture. So it is best to get a mask of just the letters (characters), not rectangular blocks around the words. To show you what I mean, here is my Python/OpenCV approach.
Input:
Read the input
Threshold on the green sign
Apply morphology to close it up and keep as mask1
Apply the mask to the image to blacken out the outside of the sign
Threshold on the white in this new image and keep as mask2
Apply morphology dilate to enlarge it slightly and save as mask3
Do the inpaint
Save the results
import cv2
import numpy as np
# read input
img = cv2.imread('airport_sign.jpg')
# threshold on green sign
lower = (30,80,0)
upper = (70,120,20)
thresh = cv2.inRange(img, lower, upper)
# apply morphology close
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (135,135))
mask1 = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# apply mask to img
img2 = img.copy()
img2[mask1==0] = (0,0,0)
# threshold on white
#gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
#mask2 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
lower = (120,120,120)
upper = (255,255,255)
mask2 = cv2.inRange(img2, lower, upper)
# apply morphology dilate
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
mask3 = cv2.morphologyEx(mask2, cv2.MORPH_DILATE, kernel)
# do inpainting
result1 = cv2.inpaint(img,mask3,11,cv2.INPAINT_TELEA)
result2 = cv2.inpaint(img,mask3,11,cv2.INPAINT_NS)
# save results
cv2.imwrite('airport_sign_mask.png', mask3)
cv2.imwrite('airport_sign_inpainted1.png', result1)
cv2.imwrite('airport_sign_inpainted2.png', result2)
# show results
cv2.imshow('thresh',thresh)
cv2.imshow('mask1',mask1)
cv2.imshow('img2',img2)
cv2.imshow('mask2',mask2)
cv2.imshow('mask3',mask3)
cv2.imshow('result1',result1)
cv2.imshow('result2',result2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Mask 3:
Inpaint 1 (Telea):
Inpaint 2 (NS):

Slicing of a scanned image based on large white spaces

I am planning to split the questions out of this PDF document. The challenge is that the questions are not evenly spaced: the first question occupies an entire page, the second does too, while the third and fourth together make up one page. Slicing it manually would take ages, so I thought to split the document into images and work on those. Is there a way to take an image like this
and split into individual components like this?
This is a classic situation for dilate. The idea is that adjacent text corresponds with the same question while text that is farther away is part of another question. Whenever you want to connect multiple items together, you can dilate them to join adjacent contours into a single contour. Here's a simple approach:
Obtain binary image. Load the image, convert to grayscale, Gaussian blur, then Otsu's threshold to obtain a binary image.
Remove small noise and artifacts. We create a rectangular kernel and morph open to remove small noise and artifacts in the image.
Connect adjacent words together. We create a larger rectangular kernel and dilate to merge individual contours together.
Detect questions. From here we find contours, sort them from top-to-bottom using imutils.sort_contours(), filter with a minimum contour area, obtain the bounding rectangle coordinates, and highlight the detected questions. We then crop each question using Numpy slicing and save the ROI image.
Otsu's threshold to obtain a binary image
Here's where the interesting part happens. We assume that adjacent text/characters are part of the same question, so we merge individual words into a single contour. A question is a section of words that are close together, so we dilate to connect them all.
Individual questions highlighted in green
Top question
Bottom question
Saved ROI questions (assumption is from top-to-bottom)
Code
import cv2
from imutils import contours
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove small artifacts and noise with morph open
open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, open_kernel, iterations=1)
# Create rectangular structuring element and dilate
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
dilate = cv2.dilate(opening, kernel, iterations=4)
# Find contours, sort from top to bottom, and extract each question
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
(cnts, _) = contours.sort_contours(cnts, method="top-to-bottom")
# Get bounding box of each question, crop ROI, and save
question_number = 0
for c in cnts:
    # Filter by area to ensure it's not noise
    area = cv2.contourArea(c)
    if area > 150:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)
        question = original[y:y+h, x:x+w]
        cv2.imwrite('question_{}.png'.format(question_number), question)
        question_number += 1
cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()
We may solve it using (mostly) morphological operations:
Read the input image as grayscale.
Apply thresholding with inversion.
Automatic thresholding using cv2.THRESH_OTSU works well.
Apply an opening morphological operation to remove small artifacts (using the kernel np.ones((1, 3))).
Dilate horizontally with very long horizontal kernel - make horizontal lines out of the text lines.
Apply closing vertically - create two large clusters.
The size of the vertical kernel should be tuned according to the typical gap.
Find connected components with statistics.
Iterate the connected components and crop the relevant area in the vertical direction.
Complete code sample:
import cv2
import numpy as np
img = cv2.imread('scanned_image.png', cv2.IMREAD_GRAYSCALE)  # Read image as grayscale

thresh = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1]  # Apply automatic thresholding with inversion
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, np.ones((1, 3), np.uint8))  # Apply opening morphological operation for removing small artifacts
thresh = cv2.dilate(thresh, np.ones((1, img.shape[1]), np.uint8))  # Dilate horizontally - make horizontal lines out of the text lines
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, np.ones((50, 1), np.uint8))  # Apply closing vertically - create two large clusters

nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, 4)  # Find connected components with statistics

parts_list = []

# Iterate connected components:
for i in range(1, nlabel):
    top = int(stats[i, cv2.CC_STAT_TOP])  # Topmost y coordinate of the connected component
    height = int(stats[i, cv2.CC_STAT_HEIGHT])  # Height of the connected component
    roi = img[top-5:top+height+5, :]  # Crop the relevant part of the image (add 5 extra rows at top and bottom)
    parts_list.append(roi.copy())  # Add the cropped area to a list
    cv2.imwrite(f'part{i}.png', roi)  # Save the image part for testing
    cv2.imshow(f'part{i}', roi)  # Show part for testing

# Show image and thresh for testing
cv2.imshow('img', img)
cv2.imshow('thresh', thresh)
cv2.waitKey()
cv2.destroyAllWindows()
Results:
Stage 1:
Stage 2:
Stage 3:
Stage 4:
Top area:
Bottom area:

Obtain complete pattern of shapes with OpenCV

I'm working in a script using different OpenCV operations for processing an image with solar panels in a house roof. My original image is the following:
After processing the image, I get the edges of the panels as follows:
As you can see, some rectangles are broken due to the reflection of the Sun in the picture.
I would like to know if it's possible to fix those broken rectangles, maybe by using the pattern of those which are not broken.
My code is the following:
import cv2
import numpy as np

# Load image
color_image = cv2.imread("google6.jpg")
cv2.imshow("Original", color_image)
# Convert to gray
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
# Apply various filters
img = cv2.GaussianBlur(img, (5, 5), 0)
img = cv2.medianBlur(img, 5)
img = img & 0x88  # Keep only bit planes 7 and 3
img = cv2.fastNlMeansDenoising(img, h=10)
# Invert to binary
ret, thresh = cv2.threshold(img, 127, 255, 1)
# Perform morphological erosion
kernel = np.ones((5, 5),np.uint8)
erosion = cv2.morphologyEx(thresh, cv2.MORPH_ERODE, kernel, iterations=2)
# Invert image and blur it
ret, thresh1 = cv2.threshold(erosion, 127, 255, 1)
blur = cv2.blur(thresh1, (10, 10))
# Perform another threshold on blurred image to get the central portion of the edge
ret, thresh2 = cv2.threshold(blur, 145, 255, 0)
# Perform morphological erosion to thin the edge by ellipse structuring element
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
contour = cv2.morphologyEx(thresh2, cv2.MORPH_ERODE, kernel1, iterations=2)
# Get edges
final = cv2.Canny(contour, 249, 250)
cv2.imshow("final", final)
I have tried to modify all the filters I'm using in order to reduce as much as possible the effect of the Sun in the original picture, but that is as far as I have been able to go.
I'm generally happy with the result of all those filters (although any advice is welcome), so I'd like to work on the black-and-white image I showed, which is already smooth enough for the post-processing I need to do.
Thanks!
The pattern is not broken in the original image, so it being broken in your binarized result must mean your binarization is not optimal.
You apply threshold() to binarize the image, and then Canny() to the binary image. The problems here are:
Thresholding removes a lot of information; it should always be the last step of any processing pipeline. Anything you lose there, you've lost for good.
Canny() should be applied to a gray-scale image, not a binary image.
The Canny edge detector is an edge detector, but you want to detect lines, not edges. See here for the difference.
So, I suggest starting from scratch.
The Laplacian of Gaussian is a very simple line detector. I took these steps:
Read in image, convert to grayscale.
Apply Laplacian of Gaussian with sigma = 2.
Invert (negate) the result and then set negative values to 0.
This is the output:
From here, it should be relatively straight-forward to identify the grid pattern.
I don't post code because I used MATLAB for this, but you can accomplish the same result in Python with OpenCV, here is a demo for applying the Laplacian of Gaussian in OpenCV.
This is Python + OpenCV code to replicate the above:
import cv2
color_image = cv2.imread("/Users/cris/Downloads/L3RVh.jpg")
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
out = cv2.GaussianBlur(img, (0, 0), 2) # Note! Specify size of Gaussian by the sigma, not the kernel size
out = cv2.Laplacian(out, cv2.CV_32F)
_, out = cv2.threshold(-out, 0, 1e9, cv2.THRESH_TOZERO)
However, it looks like OpenCV doesn't linearize (apply gamma correction) when converting from BGR to gray, whereas the conversion function I used when creating the image above does. I think this gamma correction might have improved the results a bit by reducing the response to the roof tiles.
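To approximate that linearization in Python, a minimal sketch (the 2.2 exponent is a common sRGB approximation, an assumption on my part rather than the exact curve used above):
import cv2
import numpy as np

color_image = cv2.imread("google6.jpg")

# Undo the (approximate) sRGB gamma so the grayscale conversion
# operates on linear intensities
linear = np.power(color_image.astype(np.float32) / 255.0, 2.2)
gray = cv2.cvtColor(linear, cv2.COLOR_BGR2GRAY)  # float image in [0, 1]

# Then continue with the Laplacian of Gaussian as above
out = cv2.GaussianBlur(gray, (0, 0), 2)
out = cv2.Laplacian(out, cv2.CV_32F)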

How to remove image noise using OpenCV in Python?

I am working with skin images for the recognition of skin blemishes, and the presence of noise, mainly hairs, makes this work more complicated.
I have an example image on which I try to highlight only the skin spot, but due to the large number of hairs the algorithm is not effective. I would like help developing an algorithm to remove or reduce the amount of hair so that I can highlight only my area of interest (ROI), the spots.
Algorithm used to highlight skin blemishes:
import numpy as np
import cv2
#Read the image and perform threshold
img = cv2.imread('IMD006.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray,5)
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
#Search for contours and select the biggest one
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
#Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
#Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
#Display the result
cv2.imwrite('IMD006.png', res)
#cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Example image used:
How to deal with these noises to the point of improving my region of interest?
This is quite a difficult task because the hair goes over your ROI (the mole). I don't know how to remove it from the mole itself, but I can help remove the background, as in the picture without hairs. For removing hairs from the mole, I suggest you search for "removing watermarks from images" and "deep neural networks", and perhaps train a model to remove the hairs (note that this will be quite difficult).
That being said, for removing the background you can start from the same detection code you already have. You will get a binary image like this:
Your region is now crossed by thin white lines (hairs) that overlap your contour (the ROI), and cv2.findContours() would pick them up too because they are connected. But the white lines are quite thin, so you can remove them by performing opening (cv2.morphologyEx()) on the image. Opening is erosion followed by dilation, so eroding with a big enough kernel size makes the white lines disappear:
Now you have a white spot with some surrounding noise, which you can connect to it by performing another dilation (cv2.dilate()):
To make the ROI a bit smoother you can blur the image with cv2.blur():
After that you can apply another threshold and search for the biggest contour. The final result:
Hope it helps a bit. Cheers!
Example code:
import numpy as np
import cv2
# Read the image and perform an OTSU threshold
img = cv2.imread('hair.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Remove hair with opening
kernel = np.ones((5,5),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# Combine surrounding noise with ROI
kernel = np.ones((6,6),np.uint8)
dilate = cv2.dilate(opening,kernel,iterations=3)
# Blur the image for smoother ROI
blur = cv2.blur(dilate,(15,15))
# Perform another OTSU threshold and search for biggest contour
ret, thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x signature; OpenCV 3.x returns three values
cnt = max(contours, key=cv2.contourArea)
# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)
# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt],-1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()

How to take a threshold of only a part of the image with OTSU thresholding?

I'm using OTSU threshold on a dilated and eroded image as shown below:
k = np.ones((5,5),np.float32)
d = cv2.dilate(self.img, k, iterations=10)
e = cv2.erode(d, k, iterations=10)
self.thresh = cv2.threshold(e, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
This is the eroded and dilated image, and the one that gets thresholded:
I only want the circular bright region in the middle to be obtained from thresholding, but instead I'm getting this result:
How do I go about thresholding such that I only get the circular region in the middle, which also seems like the brightest (visually) part of the image?
Note: To avoid playing around with different values, I want to stick to OTSU thresholding, but I'm open to ideas.
You can apply dilate and erode filters to this image, but in the other order: erode first and then dilate. This suppresses the bright areas in the upper part of the image, and the threshold method will then give a better result.
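A minimal sketch of that idea (kernel size and iteration counts are guesses to tune, mirroring the question's code):
import cv2
import numpy as np

img = cv2.imread('LDxOj.jpg', 0)

# Erode first to suppress the thin bright structures near the top,
# then dilate to restore the large bright blob in the middle
k = np.ones((5, 5), np.uint8)
e = cv2.erode(img, k, iterations=10)
d = cv2.dilate(e, k, iterations=10)

thresh = cv2.threshold(d, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]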
You can try a gradient based approach. Below I've used the morphological gradient. I apply Otsu thresholding to this gradient image, followed by a similar amount of morphological closing (10 iterations), then take the morphological gradient of the resulting image.
Now the regions are easy to detect. You can filter out the circular region from the contours using, for example, an area-based approach: from the bounding-box dimensions of a contour you can estimate a radius, then compare the resulting circle area to the contour area (see the sketch after the code below).
I don't know how general this approach would be for your collection.
Gradient image: intensity values scaled for visualization
Binarized gradient image
Closed image
Gradient
import cv2

im = cv2.imread('LDxOj.jpg', 0)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
morph = cv2.morphologyEx(im, cv2.MORPH_GRADIENT, kernel)
_, bw = cv2.threshold(morph, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
morph2 = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel, anchor = (-1, -1), iterations = 10)
morph3 = cv2.morphologyEx(morph2, cv2.MORPH_GRADIENT, kernel)
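A sketch of the area-based filter mentioned above, continuing from the variables in the code (the 0.2 tolerance is an arbitrary assumption to tune):
import numpy as np

contours = cv2.findContours(morph3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    radius = (w + h) / 4.0                 # Radius estimate from the bounding box
    circle_area = np.pi * radius ** 2
    # Keep contours whose area is close to that of the estimated circle
    if abs(cv2.contourArea(c) - circle_area) / circle_area < 0.2:
        cv2.circle(im, (x + w // 2, y + h // 2), int(radius), 255, 2)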
