Remove noise from image using OpenCV - Python

I have these images
I want to remove the noise from these images so I can convert them to text using pytesseract. The noise is only in blue, so I tried to remove the blue from the image, but the results are still not good.
This is what I did
import cv2
import pytesseract
# Load the input image (path assumed; the original snippet does not show the load)
img = cv2.imread("input.png")
# Extract the blue channel (OpenCV loads images as BGR)
blue = img[:, :, 0]
# Apply Otsu thresholding to the blue channel
thresh = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Perform morphological opening to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 1))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=7)
# Apply blur to smooth out the image (median blur currently disabled)
blur = opening  # cv2.medianBlur(opening, 1)
cv2.imwrite("/Users/arjunmalik/Desktop/blur.png", blur)
display("/Users/arjunmalik/Desktop/blur.png")  # IPython/Jupyter display helper
The result was still noisy, and the OCR output was FL1S4y.

As stated by Sembei, you need to use a closing operator, which is a must in a situation like this, because you want to close the black points on the object to improve the image quality.
Solution:
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4,4))
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
You can modify your code with these lines to achieve the following output for the second image.
Output:
Result
You might need to change the size of the kernel for different input images.
Thoughts:
I think it would be better to do character segmentation first, before applying the closing operator, to achieve the best results.
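For completeness, here is a rough sketch of how the closing step could slot into the original pipeline before handing the result to pytesseract. The input path, the re-inversion, and the --psm 7 setting (single line of text) are assumptions you may need to adjust:
import cv2
import pytesseract
# Load the screenshot and take the blue channel, as in the question (path assumed)
img = cv2.imread("input.png")
blue = img[:, :, 0]
# Otsu threshold, inverted so the text becomes white on black
thresh = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Closing with a 4x4 rectangle fills the small black gaps inside the characters
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 4))
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
# Re-invert so pytesseract sees dark text on a light background
clean = cv2.bitwise_not(closing)
print(pytesseract.image_to_string(clean, config="--psm 7"))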

Related

How to decide on the kernel to use for dilations (OpenCV/Python)?

I'm very new to OpenCV. Recently, I've been trying to compare two images of rails, one with a train and one without. After the comparison, I apply a threshold, and there are some 'holes' in the white regions which I do not want. Currently, I am using dilation with 4 iterations and the kernel set to None, which defaults to a 3x3 by my understanding.
How do I decide what sort of kernel to use so that the dilation does a better job of making the white region continuous? It would also be nice if I could remove the small white blobs in the background. Here is the code:
import cv2
import imutils

# img2 is the frame without the train, img3 the frame with it (paths assumed)
img2 = cv2.imread("rails_empty.jpg")
img3 = cv2.imread("rails_train.jpg")

resized = imutils.resize(img2, width=1050)
resized2 = imutils.resize(img3, width=1050)
grayA = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(resized2, cv2.COLOR_BGR2GRAY)
grayA = cv2.GaussianBlur(grayA, (7, 7), 0)
grayB = cv2.GaussianBlur(grayB, (7, 7), 0)
# Absolute difference between the frames, then threshold and dilate
frameDelta = cv2.absdiff(grayA, grayB)
thresh = cv2.threshold(frameDelta, 20, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.dilate(thresh, None, iterations=4)  # kernel=None defaults to a 3x3 square
Complete beginner in this, so even general tips/advice to improve these comparisons would be vastly appreciated!
Perhaps this will give you some idea about morphology in Python/OpenCV. First I use a square "open" kernel about the size of the small white spots to remove them. Then I use a horizontal rectangle "close" kernel about the size of the black gap to fill it. "Open" removes small white regions (filling gaps in the black background), and "close" removes small black regions (filling gaps in the white foreground).
Input:
import cv2
import numpy as np
# read image as grayscale
img = cv2.imread('blob3.png', cv2.IMREAD_GRAYSCALE)
# threshold to binary
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)[1]
# apply morphology open with square kernel to remove small white spots
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (19,19))
morph1 = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# apply morphology close with horizontal rectangle kernel to fill horizontal gap
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (101,1))
morph2 = cv2.morphologyEx(morph1, cv2.MORPH_CLOSE, kernel)
# write results
cv2.imwrite("blob3_morph1.png", morph1)
cv2.imwrite("blob3_morph2.png", morph2)
# show results
cv2.imshow("thresh", thresh)
cv2.imshow("morph1", morph1)
cv2.imshow("morph2", morph2)
cv2.waitKey(0)
Morphology Square Open:
Morphology Rectangle Close:
Alternate Morphology Square Close:
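On the original kernel question: the structuring element should roughly match the size and shape of the feature you want to remove or fill. As a hedged sketch for the rail images (the elliptical shape, the 9x9 and 5x5 sizes, and the re-loaded threshold path are all assumptions to tune), you could replace the None kernel with an explicit one and use closing so the white region fills in without growing much:
import cv2
# Binary difference image from the question's pipeline (path assumed)
thresh = cv2.imread("thresh.png", cv2.IMREAD_GRAYSCALE)
# An elliptical kernel tends to give smoother blobs than the default 3x3 square
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
# Closing (dilate then erode) fills holes inside the white region without enlarging it much
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=2)
# A small opening afterwards removes leftover white specks in the background
small = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(closed, cv2.MORPH_OPEN, small)
cv2.imshow("cleaned", cleaned)
cv2.waitKey(0)
cv2.destroyAllWindows()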

Obtain complete pattern of shapes with OpenCV

I'm working on a script that uses different OpenCV operations to process an image of solar panels on a house roof. My original image is the following:
After processing the image, I get the edges of the panels as follows:
It can be seen that some of the rectangles are broken due to the reflection of the Sun in the picture.
I would like to know if it's possible to fix those broken rectangles, maybe by using the pattern of those which are not broken.
My code is the following:
import cv2
import numpy as np

# Load image
color_image = cv2.imread("google6.jpg")
cv2.imshow("Original", color_image)
# Convert to gray
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
# Apply various filters
img = cv2.GaussianBlur(img, (5, 5), 0)
img = cv2.medianBlur(img, 5)
img = img & 0x88 # 0x88
img = cv2.fastNlMeansDenoising(img, h=10)
# Invert to binary
ret, thresh = cv2.threshold(img, 127, 255, 1)
# Perform morphological erosion
kernel = np.ones((5, 5),np.uint8)
erosion = cv2.morphologyEx(thresh, cv2.MORPH_ERODE, kernel, iterations=2)
# Invert image and blur it
ret, thresh1 = cv2.threshold(erosion, 127, 255, 1)
blur = cv2.blur(thresh1, (10, 10))
# Perform another threshold on blurred image to get the central portion of the edge
ret, thresh2 = cv2.threshold(blur, 145, 255, 0)
# Perform morphological erosion to thin the edge by ellipse structuring element
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
contour = cv2.morphologyEx(thresh2, cv2.MORPH_ERODE, kernel1, iterations=2)
# Get edges
final = cv2.Canny(contour, 249, 250)
cv2.imshow("final", final)
I have tried to modify all the filters I'm using in order to reduce as much as possible the effect of the Sun in the original picture, but that is as far as I have been able to go.
I'm generally happy with the result of all those filters (although any advice is welcome), so I'd like to work on the black/white image I showed, which is already smooth enough for the post-processing I need to do.
Thanks!
The pattern is not broken in the original image, so it being broken in your binarized result must mean your binarization is not optimal.
You apply threshold() to binarize the image, and then Canny() to the binary image. The problems here are:
Thresholding removes a lot of information; it should always be the last step of any processing pipeline. Anything you lose here, you've lost for good.
Canny() should be applied to a gray-scale image, not a binary image.
The Canny edge detector is an edge detector, but you want to detect lines, not edges. See here for the difference.
So, I suggest starting from scratch.
The Laplacian of Gaussian is a very simple line detector. I took these steps:
Read in image, convert to grayscale.
Apply Laplacian of Gaussian with sigma = 2.
Invert (negate) the result and then set negative values to 0.
This is the output:
From here, it should be relatively straightforward to identify the grid pattern.
I don't post code because I used MATLAB for this, but you can accomplish the same result in Python with OpenCV, here is a demo for applying the Laplacian of Gaussian in OpenCV.
This is Python + OpenCV code to replicate the above:
import cv2
color_image = cv2.imread("/Users/cris/Downloads/L3RVh.jpg")
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
out = cv2.GaussianBlur(img, (0, 0), 2) # Note! Specify size of Gaussian by the sigma, not the kernel size
out = cv2.Laplacian(out, cv2.CV_32F)
_, out = cv2.threshold(-out, 0, 1e9, cv2.THRESH_TOZERO)
However, it looks like OpenCV doesn't linearize (apply gamma correction) when converting from BGR to gray, whereas the conversion function I used when creating the image above does. I think this gamma correction might have improved the results a bit by reducing the response to the roof tiles.
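If you want to approximate that linearization in OpenCV as well, here is a minimal sketch. It assumes a plain gamma of 2.2 rather than the exact sRGB transfer curve, which has an additional linear segment near black:
import cv2
import numpy as np
color_image = cv2.imread("/Users/cris/Downloads/L3RVh.jpg")
# Approximate sRGB linearization with gamma 2.2 (an assumption)
linear = np.power(color_image.astype(np.float32) / 255.0, 2.2)
# Grayscale conversion on the linearized image, then the same LoG as above
gray = cv2.cvtColor(linear, cv2.COLOR_BGR2GRAY)
out = cv2.GaussianBlur(gray, (0, 0), 2)
out = cv2.Laplacian(out, cv2.CV_32F)
_, out = cv2.threshold(-out, 0, 1e9, cv2.THRESH_TOZERO)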

How to remove noise in image OpenCV, Python?

I have some cropped images and I need images that have black text on a white background. First I apply adaptive thresholding and then I try to remove noise. I have tried a lot of noise removal techniques, but when the image changed, the techniques I used failed.
The best method for converting image color to binary for my images is Adaptive Gaussian Thresholding. Here is my code:
import cv2

im_gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.GaussianBlur(im_gray, (5, 5), 1)
th = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 3, 2)
I need the values, the decimal separator (dot), and the postfix letters to come out smooth and readable. How can I do this?
Before binarization, it is necessary to correct the nonuniform illumination of the background. For example, like this:
import cv2

image = cv2.imread('9qBsB.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Estimate the background by dilation, then divide it out to flatten the illumination
se = cv2.getStructuringElement(cv2.MORPH_RECT, (8, 8))
bg = cv2.morphologyEx(image, cv2.MORPH_DILATE, se)
out_gray = cv2.divide(image, bg, scale=255)
out_binary = cv2.threshold(out_gray, 0, 255, cv2.THRESH_OTSU)[1]

cv2.imshow('binary', out_binary)
cv2.imwrite('binary.png', out_binary)
cv2.imshow('gray', out_gray)
cv2.imwrite('gray.png', out_gray)
cv2.waitKey(0)
Result:
You can do slightly better using division normalization in Python/OpenCV.
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("license_plate.jpg")
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# blur
blur = cv2.GaussianBlur(gray, (0,0), sigmaX=33, sigmaY=33)
# divide
divide = cv2.divide(gray, blur, scale=255)
# otsu threshold
thresh = cv2.threshold(divide, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# write result to disk
cv2.imwrite("hebrew_text_division.jpg", divide)
cv2.imwrite("hebrew_text_division_threshold.jpg", thresh)
cv2.imwrite("hebrew_text_division_morph.jpg", morph)
# display it
cv2.imshow("gray", gray)
cv2.imshow("divide", divide)
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division Image:
Thresholded Image:
Morphology Cleaned Image:
I'm assuming that you are preprocessing the image for OCR (Optical Character Recognition).
I had a project to detect license plates, and these were the steps I did; you can apply them to your project. After greying the image, try applying histogram equalization to the image; this allows the areas in the image with lower contrast to gain a higher contrast. Then blur the image to reduce the noise in the background. Next, apply edge detection on the image, making sure that noise is sufficiently removed, as edge detection is susceptible to it. Lastly, apply closing (dilation then erosion) on the image to close all the small holes inside the words.
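A minimal sketch of that sequence is below; the file name, the Canny thresholds, and the kernel size are assumptions you would tune for your own plates:
import cv2
# Load and grey the image (file name assumed)
img = cv2.imread("plate.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Equalize the histogram so low-contrast areas gain contrast
equalized = cv2.equalizeHist(gray)
# Blur to reduce background noise before edge detection
blurred = cv2.GaussianBlur(equalized, (5, 5), 0)
# Edge detection (thresholds are assumptions)
edges = cv2.Canny(blurred, 50, 150)
# Closing (dilation then erosion) seals the small holes inside the words
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
cv2.imshow("closed", closed)
cv2.waitKey(0)
cv2.destroyAllWindows()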
Instead of erode and dilate separately, you can use cv2.morphologyEx, which is basically both in one:
import cv2
from matplotlib import pyplot as plt
# img_grey: the greyscale input image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 2))
morphology_img = cv2.morphologyEx(img_grey, cv2.MORPH_OPEN, kernel, iterations=1)
plt.imshow(morphology_img, 'Greys_r')
See the OpenCV tutorial on morphological transformations for the other available operations.

Computer vision: Creating mask of hand using OpenCv and Python

I am trying to create a mask of the hand (black = background, white = hand).
This is the first original image [1]:
This is my code:
import cv2
import numpy as np

hand = cv2.imread('hand.jpg')
blur = cv2.GaussianBlur(hand, (3, 3), 0)
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
mask2 = cv2.inRange(hsv, np.array([2, 0, 0]), np.array([20, 255, 255]))
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(mask2, kernel, iterations=1)
erosion = cv2.erode(dilation, kernel, iterations=1)
filtered = cv2.GaussianBlur(erosion, (5, 5), 0)
ret, thresh = cv2.threshold(filtered, 90, 255, 0)
cv2.imshow('Threshold', thresh)
cv2.waitKey(0)
And the result is:
But I need have better result - like that:
What should I do?
[EDIT]
second image with different background:
Result using @Rotem's code:
[1] Ajay Kumar, IIT Delhi Palmprint Image Database version 1.0, 2007.
You may solve it by applying threshold on the red color channel.
The background color is dark blue and green, while the hand color is bright and tends toward red, so using only the red color channel may give better results than converting to HSV.
The solution below uses the following stages:
Extract red color channel - use the red channel as Grayscale image.
Apply binary threshold using automatically selected threshold (using cv2.THRESH_OTSU parameter).
Use "opening" morphological operation for clearing some small dots (noise).
Opening is equivalent to applying erode and then dilate.
Use "closing" morphological operation for closing small gaps (I chose disk shaped mask).
Closing is equivalent to applying dilate and then erode.
Here is the code:
import cv2
img = cv2.imread('hand.jpg')
# Extract red color channel (because the hand color is more red than the background).
gray = img[:, :, 2]
# Apply binary threshold using automatically selected threshold (using cv2.THRESH_OTSU parameter).
ret, thresh_gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Use "opening" morphological operation for clearing some small dots (noise)
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3)))
# Use "closing" morphological operation for closing small gaps
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9)))
# Display result:
cv2.imshow('thresh_gray', cv2.resize(thresh_gray, (thresh_gray.shape[1]//2, thresh_gray.shape[0]//2)))
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I think you may improve the result by finding the contour of the hand and approximating it with a polygon that has fewer vertices (but that might be over-fitting).
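A hedged sketch of that contour idea (the reloaded mask path and the epsilon factor are assumptions; too large an epsilon will over-simplify the outline):
import cv2
import numpy as np
# thresh_gray: the cleaned binary mask produced by the code above (path assumed)
thresh_gray = cv2.imread("thresh_gray.png", cv2.IMREAD_GRAYSCALE)
# Find the largest external contour, assumed to be the hand (OpenCV 4.x return signature)
contours, _ = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
# Approximate the contour with a polygon; epsilon controls how aggressively it simplifies
epsilon = 0.005 * cv2.arcLength(hand, True)  # factor is an assumption
poly = cv2.approxPolyDP(hand, epsilon, True)
# Draw the simplified polygon as a filled mask
mask = np.zeros_like(thresh_gray)
cv2.drawContours(mask, [poly], -1, 255, thickness=cv2.FILLED)
cv2.imshow("polygon mask", mask)
cv2.waitKey(0)
cv2.destroyAllWindows()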

How can I isolate a basketball court more effectively using OpenCV?

My goal is to try and isolate the court in the below frame and to outline it:
I'm using OpenCV for Python and here are my results after taking the following steps:
Converting the image to HSV
Isolating pixels within a given hue range
Developing a bitwise-AND mask
Using Canny edge detection
Here is my mask:
And here is the result from my Canny edge detector:
As you can see, my Canny detector performed very poorly and there is a lot of noise in my mask. I tried certain techniques including erosion and dilation but they didn't help too much.
What else can I do to make sure when I pass the mask along to the Hough Line Transformer, it will actually be able to detect the edges of the court?
Here is some code for reference:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('imgs/bulls.jpg')
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
court_color = np.uint8([[[160,221,248]]])
hsv_court_color = cv2.cvtColor(court_color, cv2.COLOR_BGR2HSV)
hue = hsv_court_color[0][0][0]
# define range of blue color in HSV
lower_color = np.array([hue - 10,10,10])
upper_color = np.array([hue + 10,255,255])
# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv_img, lower_color, upper_color)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img,img, mask= mask)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.title('Original Image'), plt.show()
plt.imshow(mask, cmap='Greys'), plt.title('Mask'), plt.savefig('imgs/mask.jpg'), plt.show()
# Erosion
kernel = np.ones((2,2),np.uint8)
erosions2 = cv2.erode(mask,kernel,iterations = 5)
# Dilation
dilation = cv2.dilate(mask,kernel,iterations = 3)
# Opening
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Closing
closing = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
EDIT: I am attempting to replicate this research: web.stanford.edu/class/ee368/Project_Spring_1415/Reports/…. I want to isolate the court by detecting the straight lines that outline it so that I can eventually use homography to find coordinates of players on the court.
Detecting Hough lines on the image is your best bet in this case, because court colors can change from place to place, and so can the camera settings. Detecting the lines, plus some further processing using uniform color patches, should allow you to segment the court region with some accuracy.
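As a rough sketch of that Hough step (the Canny thresholds and the HoughLinesP parameters are assumptions to tune for your footage; the mask is rebuilt exactly as in the question):
import cv2
import numpy as np
img = cv2.imread('imgs/bulls.jpg')
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Rebuild the same hue-based mask as in the question
court_color = np.uint8([[[160, 221, 248]]])
hue = cv2.cvtColor(court_color, cv2.COLOR_BGR2HSV)[0][0][0]
mask = cv2.inRange(hsv_img, np.array([hue - 10, 10, 10]), np.array([hue + 10, 255, 255]))
# Light cleanup, then edges and probabilistic Hough lines (parameters are assumptions)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100, minLineLength=200, maxLineGap=50)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow('court lines', img)
cv2.waitKey(0)
cv2.destroyAllWindows()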
