I'm trying to remove horizontal and vertical lines in this image in order to have more distinct text areas.
I'm using the below code, which follows this guide
import cv2
import math
import numpy as np

image = cv2.imread('image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Light blur to reduce noise before adaptive thresholding
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(
    blurred, 255,
    cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV,
    25, 15
)
# Create the images that will be used to extract the horizontal and vertical lines
horizontal = np.copy(thresh)
vertical = np.copy(thresh)
# Specify size on horizontal axis
cols = horizontal.shape[1]
horizontal_size = math.ceil(cols / 20)
# Create structure element for extracting horizontal lines through morphology operations
horizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontal_size, 1))
# Apply morphology operations
horizontal = cv2.erode(horizontal, horizontalStructure)
horizontal = cv2.dilate(horizontal, horizontalStructure)
# Show extracted horizontal lines
cv2.imwrite("horizontal.jpg", horizontal)
# Specify size on vertical axis
rows = vertical.shape[0]
verticalsize = math.ceil(rows / 20)
# Create structure element for extracting vertical lines through morphology operations
verticalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (1, verticalsize))
# Apply morphology operations
vertical = cv2.erode(vertical, verticalStructure)
vertical = cv2.dilate(vertical, verticalStructure)
After this, I know I would need to isolate the lines and mask the original image with the white lines; however, I'm not really sure how to proceed.
Does anyone have any suggestion?
Jeru's answer already gives you what you want. But I wanted to add an alternative that is maybe a bit more general than what you have so far.
You are converting the color image to gray-value, then apply adaptive threshold in an attempt to find lines. You filter this to get only the long horizontal and vertical lines, then use that mask to paint the original image white at those locations.
Here we look for all lines, and remove them from the image by painting them with whatever the surrounding color is. This process does not involve thresholding at all; all morphological operations are applied to the channels of the color image.
Ideally we'd use color morphology, but implementations of that are rare. Mathematical morphology is based on maximum and minimum operations, and the maximum or minimum of a color triplet (i.e. a vector) is not well defined.
So instead we apply the following procedure to each of the three color channels independently. This should produce results that are good enough for this application:
Extract the red channel: take the input RGB image, and extract the first channel. This is a gray-value image. We'll call this image channel.
Apply a top-hat filter to detect the thin structures: the difference between a closing with a small structuring element (SE) applied to channel, and channel (a closing is a dilation followed by an erosion with the same SE; you're using this to find lines as well). We'll call this output thin. thin = closing(channel) - channel. This step is similar to your local thresholding, but no actual threshold is applied. The resulting intensities indicate how dark the lines are w.r.t. the background. If you add thin to channel, you'll fill in these thin structures. The size of the SE here determines what is considered "thin".
Filter out the short lines, to keep only the long ones: apply an opening with a long horizontal SE to thin, and an opening with a long vertical SE to thin, and take the maximum of the two results. We'll call this lines. Note that this is the same process you used to generate horizontal and vertical. Instead of adding them together as Jeru suggested, we take the maximum. This makes it so that output intensities still match the contrast in channel. (In Mathematical Morphology parlance, the supremum of openings is an opening.) The length of the SEs here determines what is long enough to be a line.
Fill in the lines in the original image channel: now simply add lines to channel. Write the result to the first channel of the output image.
Repeat the same process with the other two channels.
Using DIPlib this is quite a simple script:
import diplib as dip
input = dip.ImageReadTIFF('/home/cris/tmp/T4tbM.tif')
output = input.Copy()
for ii in range(0, 3):
    channel = output.TensorElement(ii)
    thin = dip.Closing(channel, dip.SE(5, 'rectangular')) - channel
    vertical = dip.Opening(thin, dip.SE([100, 1], 'rectangular'))
    horizontal = dip.Opening(thin, dip.SE([1, 100], 'rectangular'))
    lines = dip.Supremum(vertical, horizontal)
    channel += lines  # overwrites output image
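If DIPlib isn't available, the same per-channel procedure can be approximated with OpenCV. This is only a sketch: the filename is a placeholder, and the SE sizes are carried over from the DIPlib version above; assume an 8-bit BGR input.
import cv2
import numpy as np

img = cv2.imread('input.png')  # placeholder filename
out = img.copy()
se_small = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
se_vert = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 100))   # 1 wide, 100 tall
se_horz = cv2.getStructuringElement(cv2.MORPH_RECT, (100, 1))   # 100 wide, 1 tall
for ii in range(3):
    channel = out[:, :, ii].copy()
    # Black top-hat: closing(channel) - channel highlights thin dark structures
    closed = cv2.morphologyEx(channel, cv2.MORPH_CLOSE, se_small)
    thin = cv2.subtract(closed, channel)
    # Keep only long runs, then take the pixel-wise maximum of the two openings
    vertical = cv2.morphologyEx(thin, cv2.MORPH_OPEN, se_vert)
    horizontal = cv2.morphologyEx(thin, cv2.MORPH_OPEN, se_horz)
    lines = cv2.max(vertical, horizontal)
    # Adding the line intensities back fills in the dark lines
    out[:, :, ii] = cv2.add(channel, lines)
cv2.imwrite('lines_removed.png', out)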
Edit:
Increasing the size of the first SE (set to 5 above) so that it is large enough to also remove the thicker gray bar in the middle of the example image causes part of the block containing the inverted text "POWERLIFTING" to be left in thin.
To filter out those parts as well, we can change the definition of thin as follows:
notthin = dip.Closing(channel, dip.SE(11, 'rectangular'), ["add max"])
notthin = dip.MorphologicalReconstruction(notthin, channel, 1, "erosion")
thin = notthin - channel
That is, instead of thin=closing(channel)-channel, we do thin=reconstruct(closing(channel))-channel. The reconstruction simply expands selected (not thin) structures so that where part of a structure was selected, now the full structure is selected. The only thing that is now in thin are lines that are not connected to thicker structures.
I've also added "add max" as a boundary condition -- this causes the closing to expand the area outside the image with white, so that lines touching the edges of the image are still detected as lines.
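OpenCV has no built-in morphological reconstruction, but for readers following along without DIPlib it can be emulated by iterating a geodesic erosion until stability. A minimal sketch, assuming channel is a uint8 grayscale array (the "add max" boundary condition above is not replicated here):
import cv2
import numpy as np

def reconstruct_by_erosion(marker, mask):
    # Erode the marker repeatedly, never letting it drop below the mask,
    # until the result stops changing
    kernel = np.ones((3, 3), np.uint8)
    while True:
        eroded = cv2.max(cv2.erode(marker, kernel), mask)
        if np.array_equal(eroded, marker):
            return marker
        marker = eroded

closed = cv2.morphologyEx(channel, cv2.MORPH_CLOSE,
                          cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11)))
notthin = reconstruct_by_erosion(closed, channel)
thin = cv2.subtract(notthin, channel)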
To elaborate more, here is what to do:
First, add the resulting images of vertical and horizontal. This will give you an image containing both the horizontal and vertical lines. Since both the images are of type uint8 (unsigned 8-bit integer) adding them won't be a problem:
res = vertical + horizontal
Finally, mask the resulting image obtained above with the original 3-channel image. This can be accomplished using cv2.bitwise_and:
fin = cv2.bitwise_and(image, image, mask = cv2.bitwise_not(res))
A sample for removing horizontal lines.
Sample image:
import cv2
import numpy as np
img = cv2.imread("Image path", 0)
if len(img.shape) != 2:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
else:
    gray = img
gray = cv2.bitwise_not(gray)
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 15, -2)
horizontal = np.copy(bw)
cols = horizontal.shape[1]
horizontal_size = cols // 30
horizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontal_size, 1))
horizontal = cv2.erode(horizontal, horizontalStructure)
horizontal = cv2.dilate(horizontal, horizontalStructure)
cv2.imwrite("horizontal_lines_extracted.png", horizontal)
horizontal_inv = cv2.bitwise_not(horizontal)
cv2.imwrite("inverse_extracted.png", horizontal_inv)
masked_img = cv2.bitwise_and(gray, gray, mask=horizontal_inv)
masked_img_inv = cv2.bitwise_not(masked_img)
cv2.imwrite("masked_img.jpg", masked_img_inv)
=> horizontal_lines_extracted.png:
=> inverse_extracted.png
=> masked_img.jpg (resultant image after masking)
Do you want something like this?
import cv2
import numpy as np

image = cv2.imread('image.jpg', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret,binary = cv2.threshold(gray, 170, 255, cv2.THRESH_BINARY)#|cv2.THRESH_OTSU)
V = cv2.Sobel(binary, cv2.CV_8U, dx=1, dy=0)
H = cv2.Sobel(binary, cv2.CV_8U, dx=0, dy=1)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
V = cv2.morphologyEx(V, cv2.MORPH_DILATE, kernel, iterations = 2)
H = cv2.morphologyEx(H, cv2.MORPH_DILATE, kernel, iterations = 2)
rows,cols = image.shape[:2]
mask = np.zeros(image.shape[:2], dtype=np.uint8)
contours = cv2.findContours(V, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[1]  # [1] assumes the OpenCV 3.x return signature
for cnt in contours:
    (x, y, w, h) = cv2.boundingRect(cnt)
    # manipulate these values to change accuracy
    if h > rows / 2 and w < 10:
        cv2.drawContours(mask, [cnt], -1, 255, -1)
contours = cv2.findContours(H, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[1]
for cnt in contours:
    (x, y, w, h) = cv2.boundingRect(cnt)
    # manipulate these values to change accuracy
    if w > cols / 2 and h < 10:
        cv2.drawContours(mask, [cnt], -1, 255, -1)
mask = cv2.morphologyEx(mask, cv2.MORPH_DILATE, kernel, iterations = 2)
image[mask == 255] = (255,255,255)
So I have found a solution by using part of Jeru's suggestion. Eventually I will need to continue processing the image in binary mode, so I figured I might as well keep it that way.
First, add the resulting images of vertical and horizontal. This will give you an image containing both the horizontal and vertical lines. Since both the images are of type uint8 (unsigned 8-bit integer) adding them won't be a problem:
res = vertical + horizontal
Then, subtract res from the original input image thresh, which was used to find the lines. This will remove the white lines, and the result can then be used to apply some other morphology transformations.
fin = thresh - res
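One caveat, for anyone reusing this: with NumPy uint8 arrays the - operator wraps around on underflow. Here res is a subset of thresh, so plain subtraction happens to be safe, but cv2.subtract, which saturates at 0, is the more robust choice:
fin = cv2.subtract(thresh, res)  # clamps would-be-negative pixels to 0 instead of wrapping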
Related
I've been researching and trying a couple of functions to get what I want, and I feel like I might be overthinking it.
One version of my code is below. The sample image is here.
My end goal is to find the angle (yellow) of the approximated line with respect to the frame (green line), as shown in the "Final" image.
I haven't even got to the angle portion of the program yet.
The results I was obtaining from the code below are shown in the "Canny", "Closed" and "Small Removed" images.
Anybody have a better way of creating the difference and establishing the estimated line?
Any help is appreciated.
import cv2
import numpy as np
pX = 512
pY = 768
img = cv2.imread('IMAGE LOCATION', cv2.IMREAD_COLOR)
imgS = cv2.resize(img, (pX, pY))
aimg = cv2.imread('IMAGE LOCATION', cv2.IMREAD_GRAYSCALE)
# Blur image to reduce noise and resize for viewing
blur = cv2.medianBlur(aimg, 5)
rblur = cv2.resize(blur, (384, 512))
canny = cv2.Canny(rblur, 120, 255, 1)
cv2.imshow('canny', canny)
kernel = np.ones((2, 2), np.uint8)
#fringeMesh = cv2.dilate(canny, kernel, iterations=2)
#fringeMesh2 = cv2.dilate(fringeMesh, None, iterations=1)
#cv2.imshow('fringeMesh', fringeMesh2)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closed', closing)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(closing, connectivity=8)
#connectedComponentswithStats yields every separated component with information on each of them, such as size
sizes = stats[1:, -1]
nb_components = nb_components - 1
min_size = 200 #num_pixels
fringeMesh3 = np.zeros(output.shape)
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        fringeMesh3[output == i + 1] = 255
#contours, _ = cv2.findContours(fringeMesh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#cv2.drawContours(fringeMesh3, contours, -1, (0, 255, 0), 1)
cv2.imshow('final', fringeMesh3)
#cv2.imshow("Natural", imgS)
#cv2.imshow("img", img)
cv2.imshow("aimg", aimg)
cv2.imshow("Blur", rblur)
cv2.waitKey()
cv2.destroyAllWindows()
You can fit a straight line to the first white pixel you encounter in each column, starting from the bottom.
I had to trim your image because you shared a screen grab of it with a window decoration, title and frame rather than your actual image:
import cv2
import math
import numpy as np
# Load image as greyscale
im = cv2.imread('trimmed.jpg', cv2.IMREAD_GRAYSCALE)
# Get index of first white pixel in each column, starting at the bottom
yvals = (im[::-1,:]>200).argmax(axis=0)
# Make the x values 0, 1, 2, 3...
xvals = np.arange(0,im.shape[1])
# Fit a line of the form y = mx + c
z = np.polyfit(xvals, yvals, 1)
# Convert the slope to an angle
angle = np.arctan(z[0]) * 180/math.pi
Note 1: The value of z (the result of fitting) is:
array([ -0.74002694, 428.01463745])
which means the equation of the line you are looking for is:
y = -0.74002694 * x + 428.01463745
i.e. the y-intercept is at row 428 from the bottom of the image.
Note 2: Try to avoid JPEG format as an intermediate format in image processing - it is lossy and changes your pixel values - so where you have thresholded and done your morphology you are expecting values of 255 and 0, JPEG will lossily alter those values and you end up testing for a range or thresholding again.
Your 'Closed' image seems to quite clearly segment the two regions, so I'd suggest you focus on turning that boundary into a line that you can do something with. Connected components analysis and contour detection don't really provide any useful information here, so aren't necessary.
One quite simple approach to finding the line angle is to find the first white pixel in each row. To get only the rows that are part of your diagonal, don't include rows where that pixel is too close to either side (e.g. within 5%). That gives you a set of points (pixel locations) on the boundary of your two types of grass.
From there you can either do a linear regression to get an equation for the straight line, or you can get two points by averaging the x values for the top and bottom half of the rows, and then calculate the gradient angle from that.
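A minimal sketch of that row-based idea, assuming closing is the binary image from your "Closed" step (the 5% margin is the assumption mentioned above):
import numpy as np

h, w = closing.shape
margin = int(0.05 * w)
points = []
for y in range(h):
    xs = np.where(closing[y] > 0)[0]            # white pixels in this row
    if xs.size and margin < xs[0] < w - margin:
        points.append((xs[0], y))               # first white pixel, if not too close to a side
points = np.array(points)
# Fit x as a linear function of y, then convert the slope to an angle
m, c = np.polyfit(points[:, 1], points[:, 0], 1)
angle = np.degrees(np.arctan(m))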
An alternative approach would be doing another morphological close with a very large kernel, to end up with just a solid white region and a solid black region, which you could turn into a line with canny or findContours. From there you could either get some points by averaging, use the endpoints, or given a smooth enough result from a large enough kernel you could detect the line with hough lines.
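And a sketch of that alternative; the kernel size and Hough parameters are guesses you would need to tune:
import cv2
import numpy as np

big_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (51, 51))
solid = cv2.morphologyEx(closing, cv2.MORPH_CLOSE, big_kernel)  # one white region, one black
edges = cv2.Canny(solid, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]  # strongest detected segment
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))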
I'm new to OpenCV and I'm trying to remove all these diagonal parallel lines that are noise in my image.
I have tried using HoughLinesP after some erosion/dilation, keeping only the lines with an angle near 135 degrees, but the result is poor.
import cv2
import math
import numpy as np

img = cv2.imread('images/dungeon.jpg')
ret,img = cv2.threshold(img,180,255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(5,5))
eroded = cv2.erode(img,element)
dilate = cv2.dilate(eroded, element)
skeleton = cv2.subtract(img, dilate)
gray = cv2.cvtColor(skeleton,cv2.COLOR_BGR2GRAY)
minLineLength = 10
# Pass these as keyword arguments; positionally the 5th argument is the output array
lines = cv2.HoughLinesP(gray, 1, np.pi / 180, threshold=1, minLineLength=minLineLength, maxLineGap=0.5)
for line in lines:
    for x1, y1, x2, y2 in line:
        angle = math.atan2(y2 - y1, x2 - x1)
        if -0.1 < angle < 0.1:
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 1)
cv2.imshow("result", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
My thinking here was to detect these lines in order to remove them afterwards, but I'm not even sure that's a good way to do this.
I guess you are trying to get the contours of the walls, right? Here’s a possible path to the solution using mainly spatial filtering. You will still need to clean the results to get where you want. The idea is to try and compute a mask of the parallel lines (high-frequency noise) of the image and calculate the difference between the (binary) input and this mask. These are the steps:
Convert the input image to grayscale
Apply Gaussian Blur to get rid of the high-frequency noise you are trying to eliminate
Get a binary image of the blurred image
Apply area filters to get rid of everything that is not noise, to get a noise mask
Compute the difference between the original binary mask and the noise mask
Clean up the difference image
Compute contours on this image
Let’s see the code:
import cv2
import numpy as np
# Set image path
path = "C://opencvImages//"
fileName = "map.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Apply Gaussian Blur:
blurredImage = cv2.GaussianBlur(grayscaleImage, (3, 3), cv2.BORDER_DEFAULT)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(blurredImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Save a copy of the binary mask
binaryCopy = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
This is the output:
Up until now you get this binary mask. The process so far has smoothed the noise, creating thick black blobs where the noise is located. Again, the idea is to generate a noise mask that can be subtracted from this image.
Let’s apply an area filter and try to remove the big white blobs, which are NOT the noise we are interested in isolating. I’ll define the function towards the end; for now I just want to present the general idea:
# Set the minimum pixels for the area filter:
minArea = 50000
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, binaryImage)
The filter will suppress every white blob whose area is below the minimum threshold, keeping only the largest ones. The value is big because in this particular case everything but those large blobs is what we want to isolate. This is the result:
We have a pretty solid mask. Let’s subtract this from the original binary mask we created earlier:
# Get the difference between the binary image and the mask:
imgDifference = binaryImage - filteredImage
This is what we get:
The difference image has some small noise. Let’s apply the area filter again to get rid of it. This time with a more traditional threshold value:
# Set the minimum pixels for the area filter:
minArea = 20
# Perform an area filter on the binary blobs:
filteredImage = areaFilter(minArea, imgDifference)
Cool. This is the final mask:
Just for completeness. Let’s compute contours on this input, which is very straightforward:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(filteredImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contours on the mask image:
cv2.drawContours(binaryCopy, contours, -1, (0, 255, 0), 3)
Let’s see the result:
As you can see, it is not perfect. However, there's still some room for improvement; perhaps you can polish this idea a little bit more to get to a potential solution. Here's the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)
    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0)
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
    # Filter the labeled pixels based on the remaining labels,
    # assign pixel intensity to 255 (uint8) for the remaining pixels
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')
    return filteredImage
I have an image which I have binarized, and the problem is that I want to remove the white spots in that image which are not connected in a loop, i.e. small white dots. The reason is that I want to take a measurement of that section, like shown here.
I have tried some OpenCV morphology functions like erode, open, and close, but the results are not what I require. I have also tried using Canny edges, but some diagonal lines which I want for further processing are also gone. Here is the result of both thresh (left) and canny (right).
I have read that pruning is a process in which we remove pixels which are not connected, but I don't know how it works. Is there a function in OpenCV for that?
# gray_img, lower and upper are assumed to be defined earlier
th3 = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
th3 = ~th3
th3 = cv2.dilate(th3, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
th3 = cv2.erode(th3, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
edged = cv2.Canny(gray_img, lower, upper)
This answer explains how to measure the diameter of the drill bit:
The first step is to read the image as single channel (grayscale) and find the edges using a Canny filter:
img = cv2.imread('/home/stephen/Desktop/drill.jpg',0)
canny = cv2.Canny(img, 100, 100)
Then find the edges of the drill in each column using np.argmax():
diameters = []
# Iterate through each column in the image; the Canny edge map is what
# argmax scans here (the original snippet read from img, but the edge map
# is presumably what was intended, since canny was otherwise unused)
for col in range(img.shape[1] - 1):
    img_col = canny[:, col:col+1]
    # Index of the first edge pixel from the top, and from the bottom
    start = np.argmax(img_col)
    end = np.argmax(np.flip(img_col))
    diameters.append(end + start)
    # Draw the background runs above and below the drill in this column
    a = col, 0
    b = col, start
    c = col, img.shape[0]
    d = col, img.shape[0] - end
    cv2.line(img, a, b, 123, 1)
    cv2.line(img, c, d, 123, 1)
The diameter of the drill in each column is plotted here:
All of this assumes that you have an aligned image. I used this code to align the image.
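The alignment code linked above isn't reproduced here; as a rough substitute, one common deskewing recipe rotates the image by the angle of the minimum-area rectangle around the foreground pixels. A sketch, with the filename and threshold as assumptions (note that minAreaRect angle conventions differ between OpenCV versions):
import cv2
import numpy as np

img = cv2.imread('drill.jpg', 0)  # placeholder filename
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
coords = np.column_stack(np.where(bw > 0)).astype(np.int32)
angle = cv2.minAreaRect(coords)[-1]  # angle of the tightest rotated bounding box
if angle < -45:
    angle += 90
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
aligned = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC,
                         borderMode=cv2.BORDER_REPLICATE)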
I have an image file with text which I want to extract using OCR.
But it has a diagonal overlapping line of text over it (top right), like the attached image.
I remove this line using:
image = cv2.imread(image_path)
image = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = cv2.GaussianBlur(image, (5, 5), 0)
image = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY)[1] # 100 here as the diagonal line is grey
This results in an image like the one attached.
Notice the thick characters for shear stress, it is one of the regions where the diagonal line overlapped.
Now I apply OCR. However, the previous steps remove some pixels. For instance, the e in edge dislocation is not complete.
This leads to poor results, like "edve dislocation". I tried erosion and dilation, but with no significant improvement.
Is there any way to fill up the holes in characters?
Is there any way to reduce thickness of the characters which overlap with the diagonal line?
In an 8-bit image, pixel values range from 0 for dark areas (black) up to 255 for light areas (white).
So one thing you can try (I'm also not sure about this):
img = cv2.imread(image_path,0)
new_img = img.copy()
new_img[new_img<=230] = 0 ## just try to change that 230 value to anywhere b/w 150 to 230
then try to use OCR to check if it really worked.
Apply this to the result of your image after removing the overlapping line.
I have this image with tables where I want to remove the tabular structure from the image so that it can work more effectively with Tesseract. I used the following code to create a boundary around the table (and individual cells) so that it can be deleted.
img = cv2.imread('bfir.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
img1 = np.ones(img.shape, dtype=np.uint8)*255
ret,thresh = cv2.threshold(gray,127,255,1)
(_,contours,h) = cv2.findContours(thresh,1,2)
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        cv2.drawContours(img1, [cnt], 0, (0, 255, 0), 2)
This draws green lines around the table like this image.
Next, I tried the cv2.subtract method to subtract the table from the image, somewhat like this.
final_img = cv2.subtract(img1, img)
But this didn't work as I expected, and it gives me a grayscale image with the table still in it. Link
I just want the original image in B&W with the table removed. I am using OpenCV for the first time, so I don't know what I am doing wrong, and I am sorry for the long post, but if anybody can help with how to go about this, or just point me in the right direction on how to remove the table, that would be very much appreciated.
EDIT:
As suggested by RobAu, it could also work by simply drawing the contours in white in the first place, but I don't know how to do that without losing the rest of the data in the preprocessing stage.
You could try to simply overwrite the cells that represent the borders. This can be done by creating a mask image, and then using that as a reference for where to overwrite pixels in the original.
This can be done with:
mask_image = np.zeros(img.shape[0:2], np.uint8)
cv2.drawContours(mask_image, contours, -1, color=255, thickness=2)
border_points = np.array(np.where(mask_image == 255)).transpose()
background = [0, 0, 0] # Change this to the colour you want
for point in border_points:
    img[point[0], point[1]] = background
Update:
You could use the 3-channel image you already created as the mask, but that slightly complicates the algorithm. The mask image I propose is better suited to the task, but I will try to adapt it to your code:
# Create your mask image as usual...
border_points = np.array(np.where(img1[:,:,1] == 255)).transpose() # Only look at channel 2
background = [0, 0, 0] # Change this to the colour you want
for point in border_points:
    img[point[0], point[1]] = background
Update to do as #RobAu suggested (quicker than my previous methods):
line_thickness = 3 # Change this value until it looks the best.
cv2.drawContours(img, contours, -1, color=(0,0,0), thickness=line_thickness )
Please note I didn't test this code. So it might need some further fiddling.
As a reference to the comments on this question, this is an example of code that locates rectangles and creates a new image for each one; it was an attempt at creating individual images from a picture of shredded paper. Some of the values will need to be changed for it to locate rectangles of the right size.
There is also some code for tracking the sizes of the images. The code is made up of roughly 50% what I have written and 50% Stack Overflow help.
import cv2
import numpy as np
fileName = ['9','8','7','6','5','4','3','2','1','0']
img = cv2.imread('#YOUR IMAGE#')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(gray,kernel,iterations = 2)
kernel = np.ones((4,4),np.uint8)
dilation = cv2.dilate(erosion,kernel,iterations = 2)
edged = cv2.Canny(dilation, 30, 200)
_, contours, hierarchy = cv2.findContours(edged, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(cnt) for cnt in contours]
rects = sorted(rects,key=lambda x:x[1],reverse=True)
i = -1
j = 1
y_old = 5000
x_old = 5000
for rect in rects:
    x, y, w, h = rect
    area = w * h
    print('width: %d and height: %d' % (w, h))
    if w > 50 and h > 500:
        print('abs:')
        print(abs(x_old - x))
        if abs(x_old - x) > 0:
            print('writing')
            x_old = x
            x, y, w, h = rect
            out = img[y+10:y+h-10, x+10:x+w-10]
            cv2.imwrite('assets/newImage' + fileName[i] + '.jpg', out)
            j += 1
    if (y_old - y) > 1000:
        i += 1
        y_old = y
Even though the given input image links are not working, and so I obviously can't know whether the following is what you asked for, I learnt something from your question while working on removing table structure lines from an image, and I'd like to share it for future readers.
I followed the steps provided in opencv documentation to remove the lines.
But that only removed the horizontal lines. When I tried to remove vertical lines, the result image only had the vertical lines. The text in the table was not there.
Then I came across your question and saw final_img = cv2.subtract(img1, img) in it. I tried that and it worked great.
Here are the steps that I followed:
import sys
import cv2 as cv
import numpy as np

# show_wait_destroy is a small helper from the OpenCV tutorial:
def show_wait_destroy(winname, img):
    cv.imshow(winname, img)
    cv.waitKey(0)
    cv.destroyWindow(winname)

# Load the image (the tutorial reads the filename from the command line)
src = cv.imread(sys.argv[1], cv.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
    print('Error opening image: ' + sys.argv[1])
    sys.exit(1)
# Show source image
cv.imshow("src", src)
# [load_image]
# [gray]
# Transform source image to gray if it is not already
if len(src.shape) != 2:
    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
else:
    gray = src
# Show gray image
# show_wait_destroy("gray", gray)
# [gray]
# [bin]
# Apply adaptiveThreshold at the bitwise_not of gray, notice the ~ symbol
gray = cv.bitwise_not(gray)
bw = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_MEAN_C,
                          cv.THRESH_BINARY, 15, -2)
# Show binary image
# show_wait_destroy("binary", bw)
# [bin]
# [init]
# Create the images that will be used to extract the horizontal and vertical lines
horizontal = np.copy(bw)
vertical = np.copy(bw)
# [horiz]
# [vert]
# Specify size on vertical axis
rows = vertical.shape[0]
verticalsize = rows // 10  # integer division; the SE size must be an int
# Create structure element for extracting vertical lines through morphology operations
verticalStructure = cv.getStructuringElement(cv.MORPH_RECT, (1, verticalsize))
# Apply morphology operations
vertical = cv.erode(vertical, verticalStructure)
vertical = cv.dilate(vertical, verticalStructure)
# [init]
# [horiz]
# Specify size on horizontal axis
cols = horizontal.shape[1]
horizontal_size = cols // 30  # integer division here as well
# Create structure element for extracting horizontal lines through morphology operations
horizontalStructure = cv.getStructuringElement(cv.MORPH_RECT, (horizontal_size, 1))
# Apply morphology operations
horizontal = cv.erode(horizontal, horizontalStructure)
horizontal = cv.dilate(horizontal, horizontalStructure)
lines_removed = cv.subtract(gray, vertical + horizontal)
show_wait_destroy("lines_removed", ~lines_removed)
Input:
Output:
A few things that I changed from the sources:
verticalsize = rows // 10: here, I do not understand the significance of the number 10. In the documentation, 30 was used. I got better results with 10. I guess the smaller the divisor, the larger the structuring element, and since we are targeting straight lines here, reducing the number works.
In the documentation, vertical lines are processed after horizontal lines. I reversed the order.
I swapped the parameters to cv2.subtract(). I used cv2.subtract(img, img1).