Remove black header section of image using Python OpenCV

I need to remove the blackened sections in multiple parts of an image using Python OpenCV.
I tried denoising, which doesn't give satisfactory results.
E.g. I need to remove the blackened part in the table header (image below) and convert the header background to white, with the contents in black.
Can anyone help me with choosing the correct library or approach to overcome this?

It is difficult to filter out the dotted pattern because, as you can see, it clearly overlaps the text. I see at least two options: 1) Exploit the periodic nature of the pattern and carry out frequency filtering (a rough sketch of this idea is included at the end of this answer). 2) Try a simpler approach using a morphological hit-or-miss operation on the target pixels, aiming to isolate them.
Let's check out option 2. The noise has a very distinctive pattern. If you work with the binary image where all the blobs are colored in white, the pattern you are looking for is a white pixel (1) surrounded by 8 black pixels (0):
[ 0, 0, 0 ]
[ 0, 1, 0 ]
[ 0, 0, 0 ]
The hit-or-miss operation can be used to locate and isolate pixel patterns. Here's a good post if you want to learn more about it. For now, let's work on the code:
//Read the input image, as normal:
std::string imagePath = "C://opencvImages//tableTest.png";
cv::Mat testImage = cv::imread( imagePath );
//Convert the image to grayscale:
cv::Mat grayImage;
cv::cvtColor( testImage, grayImage, cv::COLOR_BGR2GRAY );
//Get the binary image via otsu:
cv::Mat binaryImage;
cv::threshold( grayImage, binaryImage, 0, 255,cv::THRESH_OTSU );
//Invert the image, as we will be working on white blobs:
binaryImage = 255 - binaryImage;
//Prepare the target kernel. This is where you define the pattern of
//pixels you are looking for
//Keep in mind that -1 -> black and 1 -> white
cv::Mat kernel = ( cv::Mat_<int>(3, 3) <<
-1, -1, -1,
-1, 1, -1,
-1, -1, -1
);
//perform the hit or miss operation:
cv::Mat hitMissMask;
cv::morphologyEx( binaryImage, hitMissMask, cv::MORPH_HITMISS, kernel );
This is the mask you get:
Now, just subtract this mask from the original (binary) image and you get this:
As you can see, part of the column header gets in the way of the operation. If you want a white background and black blobs, just invert the image:
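For completeness, here is a very rough Python sketch of option 1 (frequency filtering). It is only an illustration of the idea, not part of the original answer: it assumes the dot grid shows up as isolated peaks in the spectrum, and the file name and the magnitude/radius thresholds are arbitrary assumptions:
import cv2
import numpy as np
# Sketch only: suppress strong isolated peaks in the spectrum (where a regular
# dot grid concentrates its energy) while keeping the low-frequency content.
gray = cv2.imread('tableTest.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
dft = np.fft.fftshift(np.fft.fft2(gray))
rows, cols = gray.shape
y, x = np.ogrid[:rows, :cols]
dist = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
magnitude = np.abs(dft)
peaks = (magnitude > 20 * np.median(magnitude)) & (dist > 20)  # assumed thresholds
dft[peaks] = 0
restored = np.real(np.fft.ifft2(np.fft.ifftshift(dft)))
restored = np.clip(restored, 0, 255).astype(np.uint8)
cv2.imwrite('frequency_filtered.png', restored)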

Here's a modified version of @eldesgraciado's approach to filter the dotted pattern, using a morphological hit-or-miss operation on the target pixels, in Python. The difference is that instead of subtracting the mask from the binary image, which decreases text quality, we dilate the binary image and then bitwise-and it with the input image to retain the text quality.
Obtain binary image. Load image, grayscale, Otsu's threshold
Perform morphological hit-or-miss operation. We create a dot pattern kernel with np.array, then use cv2.filter2D to convolve the image
Remove dots. We cv2.bitwise_xor the mask with the binary image
Fix damaged text pixels. We cv2.dilate, then cv2.bitwise_and the finalized mask with the input image and color the background pixels white
Binary image
Dot mask
Remove dots
Dilate to fix damaged text pixels from the thresholding process
Result
Code
import cv2
import numpy as np
# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Perform morphological hit or miss operation
kernel = np.array([[-1,-1,-1], [-1,1,-1], [-1,-1,-1]])
dot_mask = cv2.filter2D(thresh, -1, kernel)
# Bitwise-xor mask with binary image to remove dots
result = cv2.bitwise_xor(thresh, dot_mask)
# Dilate to fix damaged text pixels
# since the text quality has decreased from thresholding
# then bitwise-and with input image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
dilate = cv2.dilate(result, kernel, iterations=1)
result = cv2.bitwise_and(image, image, mask=dilate)
result[dilate==0] = [255,255,255]
cv2.imshow('dot_mask', dot_mask)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.imshow('dilate', dilate)
cv2.waitKey()

Related

Remove noise from image using OpenCV

I have these images
I want to remove noise from these images so I can convert them into text using pytesseract. The noise is only in blue colour, so I tried to remove the blue from the image. The results are still not good.
This is what I did
import cv2
import pytesseract
# Load the input image (filename assumed here) and extract the blue channel
img = cv2.imread("image.png")
blue = img[:, :, 0]
# Apply thresholding to the blue channel
thresh = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# Perform morphological operations to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,1))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=7)
# Apply blur to smooth out the image
blur = opening#cv2.medianBlur(opening, 1)
cv2.imwrite("/Users/arjunmalik/Desktop/blur.png",blur)
display("/Users/arjunmalik/Desktop/blur.png")
The result was
The OCR results were FL1S4y.
As stated by Sembei, you need to use a closing operator, which is a must for a situation like this, because you want to close the black points on the object to improve the image quality.
Solution:
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4,4))
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=1)
You can modify your code to this one to achieve the following output for the second image.
Output:
Result
You might need to change the size of the kernel for different input images.
Thoughts:
I think it'd be better if you do the character segmentation first, before applying the closing operator, in order to achieve the finest results (see the sketch below).
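A minimal sketch of that segmentation-first idea, assuming the inverted Otsu binary image from the question; the file name, kernel size, and minimum blob area are illustrative assumptions, not values from the answer:
import cv2
# Sketch only: find probable character blobs first, then close each one individually.
blue = cv2.imread('image.png')[:, :, 0]           # assumed input file
thresh = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4,4))
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w * h < 50:                                # skip specks that are clearly not characters
        continue
    roi = thresh[y:y+h, x:x+w]
    thresh[y:y+h, x:x+w] = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('closed_per_character.png', thresh)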

How do I denoise straight lines

So I have images of a bunch of straight lines forming various shapes. However, these straight lines have a tendency to not be quite straight, because the underlying source often draws them between pixels. In that case the source produces extra pixels next to a line, and I would like those extra pixels to be removed.
Here we have a source image:
And here we have the same image denoised:
I'm messing around with Hough line transforms and de-noising algorithms, but none of these are working well, and it feels like there should be a good way of fixing this that uses the specific fact that these are lines, rather than treating them as normal salt-and-pepper noise.
I'm working with Python right now, but answers in other languages are acceptable.
I thought of the same solution as commented by Cris:
Convert the image to binary image.
Apply morphological opening with np.ones((3, 15)) kernel - keeping only horizontal lines.
Apply morphological opening with np.ones((15, 3)) kernel - keeping only vertical lines.
Apply bitwise OR between the above matrices to form a mask.
Apply the mask on the image - pixels that are zero in the mask get zero values.
Here is a code sample:
import numpy as np
import cv2
img = cv2.imread('input.png') # Read input image (BGR color format)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Convert to grayscale
thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1] # Convert to binary (only zeros and 255)
horz = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, np.ones((3, 15))) # Keep only horizontal lines
vert = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, np.ones((15, 3))) # Keep only vertical lines
mask = cv2.bitwise_or(horz, vert) # Unite horizontal and vertical lines to form a mask
res = cv2.bitwise_and(img, img, mask=mask) # Place zeros where mask is zero
# Show the result
cv2.imshow('mask', mask)
cv2.imshow('res', res)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
mask:
res:
The result is not perfect, and not generalized.
You may get better results using for loops.
Example: delete pixels that have many horizontal pixels below them, but only a few next to them (repeat for left, right, and bottom); a sketch of that idea follows.
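Here is a rough sketch of that loop-based idea, assuming the binary image from the code above; the window width and the two counting thresholds are arbitrary assumptions:
import cv2
import numpy as np
# Sketch only: remove white pixels that sit just above a long horizontal run but have
# few white neighbors in their own row. The same idea can be repeated for the other
# three directions.
gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]
pruned = thresh.copy()
rows, cols = thresh.shape
for y in range(rows - 1):
    for x in range(7, cols - 7):
        if thresh[y, x] == 0:
            continue
        below = np.count_nonzero(thresh[y + 1, x - 7:x + 8])   # horizontal run in the row below
        beside = np.count_nonzero(thresh[y, x - 7:x + 8]) - 1  # white pixels in the pixel's own row
        if below >= 12 and beside <= 2:                        # assumed thresholds
            pruned[y, x] = 0
cv2.imwrite('pruned.png', pruned)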

Obtain complete pattern of shapes with OpenCV

I'm working on a script that uses different OpenCV operations to process an image of solar panels on a house roof. My original image is the following:
After processing the image, I get the edges of the panels as follows:
You can see how some rectangles are broken due to the reflection of the Sun in the picture.
I would like to know if it's possible to fix those broken rectangles, maybe by using the pattern of those that are not broken.
My code is the following:
import cv2
import numpy as np
# Load image
color_image = cv2.imread("google6.jpg")
cv2.imshow("Original", color_image)
# Convert to gray
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
# Apply various filters
img = cv2.GaussianBlur(img, (5, 5), 0)
img = cv2.medianBlur(img, 5)
img = img & 0x88 # 0x88
img = cv2.fastNlMeansDenoising(img, h=10)
# Invert to binary
ret, thresh = cv2.threshold(img, 127, 255, 1)
# Perform morphological erosion
kernel = np.ones((5, 5),np.uint8)
erosion = cv2.morphologyEx(thresh, cv2.MORPH_ERODE, kernel, iterations=2)
# Invert image and blur it
ret, thresh1 = cv2.threshold(erosion, 127, 255, 1)
blur = cv2.blur(thresh1, (10, 10))
# Perform another threshold on blurred image to get the central portion of the edge
ret, thresh2 = cv2.threshold(blur, 145, 255, 0)
# Perform morphological erosion to thin the edge by ellipse structuring element
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
contour = cv2.morphologyEx(thresh2, cv2.MORPH_ERODE, kernel1, iterations=2)
# Get edges
final = cv2.Canny(contour, 249, 250)
cv2.imshow("final", final)
I have tried to modify all the filters I'm using in order to reduce as much as possible the effect of the Sun in the original picture, but that is as far as I have been able to go.
I'm in general happy with the result of all those filters (although any advice is welcome), so I'd like to work on the black/white image I showed, which is already smooth enough for the post-processing I need to do.
Thanks!
The pattern is not broken in the original image, so it being broken in your binarized result must mean your binarization is not optimal.
You apply threshold() to binarize the image, and then Canny() to the binary image. The problems here are:
Thresholding removes a lot of information; it should always be the last step of any processing pipeline. Anything you lose here, you've lost for good.
Canny() should be applied to a gray-scale image, not a binary image.
The Canny edge detector is an edge detector, but you want to detect lines, not edges. See here for the difference.
So, I suggest starting from scratch.
The Laplacian of Gaussian is a very simple line detector. I took these steps:
Read in image, convert to grayscale.
Apply Laplacian of Gaussian with sigma = 2.
Invert (negate) the result and then set negative values to 0.
This is the output:
From here, it should be relatively straight-forward to identify the grid pattern.
I don't post code because I used MATLAB for this, but you can accomplish the same result in Python with OpenCV, here is a demo for applying the Laplacian of Gaussian in OpenCV.
This is Python + OpenCV code to replicate the above:
import cv2
color_image = cv2.imread("/Users/cris/Downloads/L3RVh.jpg")
img = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
out = cv2.GaussianBlur(img, (0, 0), 2) # Note! Specify size of Gaussian by the sigma, not the kernel size
out = cv2.Laplacian(out, cv2.CV_32F)
_, out = cv2.threshold(-out, 0, 1e9, cv2.THRESH_TOZERO)
However, it looks like OpenCV doesn't linearize (apply gamma correction) when converting from BGR to gray, unlike the conversion function I used when creating the image above. I think this gamma correction might have improved the results a bit by reducing the response to the roof tiles.
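If you want to approximate that linearization in Python, here is a rough sketch; the sRGB decoding formula and the Rec. 709 luma weights are standard, but they are an assumption about what the MATLAB conversion did, not part of the original answer:
import cv2
import numpy as np
# Sketch only: undo the sRGB gamma before forming the grayscale image, then apply
# the same Laplacian of Gaussian as above. OpenCV loads images in BGR channel order.
bgr = cv2.imread("/Users/cris/Downloads/L3RVh.jpg").astype(np.float32) / 255.0
linear = np.where(bgr <= 0.04045, bgr / 12.92, ((bgr + 0.055) / 1.055) ** 2.4)
gray = (0.0722 * linear[..., 0] + 0.7152 * linear[..., 1] + 0.2126 * linear[..., 2]).astype(np.float32)
out = cv2.GaussianBlur(gray, (0, 0), 2)
out = cv2.Laplacian(out, cv2.CV_32F)
_, out = cv2.threshold(-out, 0, 1e9, cv2.THRESH_TOZERO)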

OpenCV: Contour detection of shadowed image before OCR

I am trying to OCR the picture of documents and my current approach is
Read an image as a grayscale
Binarize it thresholding
Warp perspective along the contours obtained from cv2.findContours()
The above works well if the image is not shadowed. Now I want to get the contours of shadowed pictures. My first attempt is to use cv2.adaptiveThreshold for step 2. The adaptive threshold successfully weakened the shadow, but the resulting image lost the contrast between the paper and the background. That made it impossible for cv2 to find the contours of the paper. So I need to use another method to remove the shadow.
Is there any way to remove shadow maintaining the background colour?
For reference, here is the sample picture I am processing with various approaches. From left, I did:
grayscale
thresholding
adaptive thresholding
normalization
My goal is to obtain the second picture without shadow.
Please note that I actually have a temporary solution specific to this picture, which is to process the shadowed part of the picture separately. However, it is not a general solution for shadowed pictures, as its performance depends on the size, shape, and position of the shadow, so please suggest other methods.
This is the original picture.
Here is one way in Python/OpenCV using division normalization, optionally followed by sharpening and/or thresholding.
Input:
import cv2
import numpy as np
import skimage.filters as filters
# read the image
img = cv2.imread('receipt.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (95,95), 0)
# divide gray by morphology image
division = cv2.divide(gray, smooth, scale=255)
# sharpen using unsharp masking
sharp = filters.unsharp_mask(division, radius=1.5, amount=1.5, multichannel=False, preserve_range=False)
sharp = (255*sharp).clip(0,255).astype(np.uint8)
# threshold
thresh = cv2.threshold(sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# save results
cv2.imwrite('receipt_division.png',division)
cv2.imwrite('receipt_division_sharp.png',sharp)
cv2.imwrite('receipt_division_thresh.png',thresh)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division:
Sharpened:
Thresholded:

How to remove noise in image OpenCV, Python?

I have some cropped images and I need images that have black text on a white background. First I apply adaptive thresholding and then I try to remove noise. I have tried a lot of noise removal techniques, but when the image changes, the techniques I used fail.
The best method for converting image color to binary for my images is Adaptive Gaussian Thresholding. Here is my code:
import cv2
im_gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.GaussianBlur(im_gray, (5,5), 1)
th = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,3,2)
I need smooth values, the decimal separator (dot), and the postfix letters. How can I do this?
Before binarization, it is necessary to correct the nonuniform illumination of the background. For example, like this:
import cv2
image = cv2.imread('9qBsB.jpg')
image=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
se=cv2.getStructuringElement(cv2.MORPH_RECT , (8,8))
bg=cv2.morphologyEx(image, cv2.MORPH_DILATE, se)
out_gray=cv2.divide(image, bg, scale=255)
out_binary=cv2.threshold(out_gray, 0, 255, cv2.THRESH_OTSU )[1]
cv2.imshow('binary', out_binary)
cv2.imwrite('binary.png',out_binary)
cv2.imshow('gray', out_gray)
cv2.imwrite('gray.png',out_gray)
Result:
You can do slightly better using division normalization in Python/OpenCV.
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("license_plate.jpg")
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# blur
blur = cv2.GaussianBlur(gray, (0,0), sigmaX=33, sigmaY=33)
# divide
divide = cv2.divide(gray, blur, scale=255)
# otsu threshold
thresh = cv2.threshold(divide, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# write result to disk
cv2.imwrite("hebrew_text_division.jpg", divide)
cv2.imwrite("hebrew_text_division_threshold.jpg", thresh)
cv2.imwrite("hebrew_text_division_morph.jpg", morph)
# display it
cv2.imshow("gray", gray)
cv2.imshow("divide", divide)
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division Image:
Thresholded Image:
Morphology Cleaned Image:
I'm assuming that you are preprocessing the image for OCR (Optical Character Recognition).
I had a project to detect license plates, and these were the steps I did; you can apply them to your project. After graying the image, try applying histogram equalization, which allows the areas of the image with lower contrast to gain higher contrast. Then blur the image to reduce the noise in the background. Next, apply edge detection on the image, making sure that noise is sufficiently removed, as edge detection is susceptible to it. Lastly, apply closing (dilation then erosion) on the image to close all the small holes inside the words. A rough sketch of that pipeline is shown below.
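A minimal sketch of that pipeline; the input file name, blur size, Canny thresholds, and kernel size are illustrative assumptions, not values from the original answer:
import cv2
# Sketch only: equalize -> blur -> edge detection -> closing, as described above.
gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)        # assumed input file
equalized = cv2.equalizeHist(gray)                           # raise contrast in flat areas
blurred = cv2.GaussianBlur(equalized, (5, 5), 0)             # suppress background noise
edges = cv2.Canny(blurred, 100, 200)                         # edge detection
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)    # dilation then erosion
cv2.imwrite("closed_edges.png", closed)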
Instead of a separate erode and dilate, you can check this, which is basically both in one.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,2))
morphology_img = cv2.morphologyEx(img_grey, cv2.MORPH_OPEN, kernel,iterations=1)
plt.imshow(morphology_img,'Greys_r')
Reference: OpenCV Morphological Transformations tutorial.
