Here's an example image:
I would like to extract text that has a text-decoration/styling of strikethrough.
So for the above image, I would like to extract: de location
How would I do this?
Here's what I have so far using OpenCV and Python:
import cv2
import numpy as np
import matplotlib.pyplot as plt

im = cv2.imread('image.png')

# Close with a wide horizontal kernel to merge text into horizontal bands
kernel = np.ones((1, 44), np.uint8)
morphed = cv2.morphologyEx(im, cv2.MORPH_CLOSE, kernel)
plt.imshow(morphed)
This gives me the horizontal lines:
I am new to image processing, so I am having a difficult time isolating only the text that has strikethroughs.
Bonus: Along with the strikethrough text, I would like to also extract the neighboring text, so that I can correctly style/mark the strikethrough text back in context with the other text.
UPDATE 1:
Based on the first answer, I did the following:
import cv2
import matplotlib.pyplot as plt

# Load image, convert to grayscale, Otsu's threshold
image = cv2.imread('image.png')
result = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Detect horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=10)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(result, [c], -1, (36,255,12), 2)
plt.imshow(result)
I was able to get this image:
I tried playing with the values for the horizontal kernel, but had no luck.
UPDATE 2:
I modified the above snippet further and got this:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load image, convert to grayscale, Otsu's threshold
image = cv2.imread('image.png')
result = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

kernel = np.ones((4,2), np.uint8)
erosion = cv2.erode(thresh, kernel, iterations=1)
dilation = cv2.dilate(thresh, kernel, iterations=1)
trans = dilation
# plt.imshow(erosion)

# Detect horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (8,1))
detect_horizontal = cv2.morphologyEx(trans, cv2.MORPH_OPEN, horizontal_kernel, iterations=10)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(result, [c], -1, (36,255,12), 2)
plt.imshow(result)
I was able to get this image:
And this solution applies to my other image types as well:
This is not a 100% accurate solution (it failed to catch the de strikethrough text), but I like the performance so far.
Now, I am struggling with how to check whether the neighboring pixels are black or white in order to isolate the strikethrough.
One way you can achieve this (a rough code sketch follows these steps):
Binarise the image (https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html)
Find horizontal lines (Horizontal Line detection with OpenCV)
For each line, check whether the pixels just above and below it are white or not
If there are non-white pixels both above and below, that region corresponds to a strikethrough
Do a connected component labeling of the image (connected component labeling in python)
Check which labels the previously detected lines fall in and mask those labels to get the strikethrough text.
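Here's a minimal sketch of those steps with OpenCV and NumPy; the filename, kernel width, and 3-pixel margins are assumptions you would tune for your images:

import cv2
import numpy as np

# Binarise (inverted, so ink is white); the filename is an assumption
img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find horizontal lines with a wide, flat kernel
line_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (30,1))
lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, line_kernel)

# Keep only lines that have ink just above and below them (strikethroughs)
cnts = cv2.findContours(lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
strike = np.zeros_like(thresh)
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    above = thresh[max(y - 3, 0):y, x:x + w]
    below = thresh[y + h:y + h + 3, x:x + w]
    if above.any() and below.any():
        cv2.drawContours(strike, [c], -1, 255, -1)

# Connected component labeling: keep the components the strike lines touch
num, labels = cv2.connectedComponents(thresh)
keep = np.unique(labels[strike > 0])
keep = keep[keep != 0]  # drop the background label
result = np.where(np.isin(labels, keep), 255, 0).astype(np.uint8)
cv2.imwrite('strikethrough_text.png', result)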
You can use a property of the strikethrough such as its thickness: the strikethrough line is thinner than the underline. Thin lines can be selected by morphology, and the connected components they belong to can be restored by morphological reconstruction.
import cv2

img = cv2.imread('juFpe.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,5))
kernel2 = cv2.getStructuringElement(cv2.MORPH_RECT, (8,8))

# Opening with a 5-px-tall kernel removes strokes less than 5 px thick;
# what survives is the thick content
detect_thin = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
detect_thin = cv2.morphologyEx(detect_thin, cv2.MORPH_DILATE, kernel2)
# Pixels present in thresh but not in the dilated thick content: thin lines
marker = cv2.compare(detect_thin, thresh, cv2.CMP_LT)

# Morphological reconstruction: repeatedly dilate the marker but clip it
# to the original image, until it stops growing
while True:
    tmp = marker.copy()
    marker = cv2.dilate(marker, kernel2)
    marker = cv2.min(thresh, marker)
    difference = cv2.subtract(marker, tmp)
    if cv2.countNonZero(difference) == 0:
        break
cv2.imwrite('lines.png', marker)
Result:
I would like to detect the contour of the completed form in this scan.
Ideally I would want to find the corners of the table painted with red.
My final goal is to detect that the whole document was scanned and that the four corners are within the boundaries of the scan.
I used OpenCV from Python, but it was not able to find the contour of the big container.
Any ideas?
With the observation that the form can be identified using the table grid, here's a simple approach:
Obtain binary image. Load the image, grayscale, Gaussian blur, then adaptive threshold to get a binary image
Find horizontal sections. We create a horizontal kernel, find horizontal table lines, and draw them onto a mask
Find vertical sections. We create a vertical kernel, find vertical table lines, and draw them onto a mask
Fill text document body and morph open. We perform morph operations to close the table, then find contours and fill the mask to obtain the contour of the shape. This step fulfills your needs, since you could just find contours on the mask, but we can go further and extract only the desired sections.
Perform four-point perspective transform. We find contours, sort for the largest contour, filter using contour approximation, then perform a four-point perspective transform to obtain a bird's-eye view of the image.
Here are the results:
Input image
Detected contour to extract highlighted in green
Output after 4-point perspective transform
Code
import cv2
import numpy as np
from imutils.perspective import four_point_transform

# Load image, create mask, grayscale, and adaptive threshold
image = cv2.imread('1.jpg')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 3)

# Find horizontal sections and draw on mask
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (80,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255,255,255), -1)

# Find vertical sections and draw on mask
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, (255,255,255), -1)

# Fill text document body
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_kernel, iterations=3)
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(mask, [c], -1, 255, -1)

# Perform morph operations to remove noise
# Find contours and sort for largest contour
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, close_kernel, iterations=5)
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
    # Perform contour approximation; a four-point contour is the document
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break

# Obtain bird's-eye view of image
warped = four_point_transform(image, displayCnt.reshape(4, 2))

cv2.imwrite('mask.png', mask)
cv2.imwrite('thresh.png', thresh)
cv2.imwrite('warped.png', warped)
cv2.imwrite('opening.png', opening)
What about using the Hough transform with a narrow direction range to find the verticals and horizontals? If you are lucky, those that you need will be the longest, and after selecting them you can reconstruct the rectangle.
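If it helps, here's a minimal sketch of that idea using cv2.HoughLinesP; the filename, Canny thresholds, and angle/length cutoffs are assumptions to tune:

import cv2
import numpy as np

image = cv2.imread('scan.jpg')  # filename is an assumption
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform, keeping only near-horizontal and
# near-vertical segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=10)
horizontals, verticals = [], []
for line in (lines if lines is not None else []):
    x1, y1, x2, y2 = line[0]
    angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    length = np.hypot(x2 - x1, y2 - y1)
    if angle < 5:
        horizontals.append((length, (x1, y1, x2, y2)))
    elif angle > 85:
        verticals.append((length, (x1, y1, x2, y2)))

# With luck, the two longest lines in each direction bound the table,
# and intersecting them gives the four corners
horizontals.sort(reverse=True)
verticals.sort(reverse=True)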
I have a problem with my Python code. I want to do image processing on chest X-rays in order to obtain the lung pattern, but my results still have small stains. How do I get rid of these small objects?
And this is my code:
import cv2
import numpy as np
from skimage import morphology
im = cv2.imread('image.jpg')
ret, thresh = cv2.threshold(im, 150, 255, cv2.THRESH_BINARY)
kernel = np.ones((5, 5), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
cleaned = morphology.remove_small_objects(opening, min_size=62, connectivity=2)
cv2.imshow("cleaned", cleaned)
cv2.waitKey(0)
P.S.: When I try with MATLAB, the small objects can be removed with this code:
K = bwareaopen(~K, 1500); % Remove small objects (areas) of fewer than 1500 pixels
and that code removes small objects well.
You can filter using contour area, then apply morphological closing to fill the small holes in the image. Here's the result:
import cv2

# Load image, convert to grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Filter using contour area and remove small noise
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area < 5500:
        cv2.drawContours(thresh, [c], -1, (0,0,0), -1)

# Morph close and invert image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
close = 255 - cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=2)

cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.waitKey()
From the documentation for bwareaopen, you can find the algorithm used by the method:
Determine the connected components:
CC = bwconncomp(BW, conn);
Compute the area of each component:
S = regionprops(CC, 'Area');
Remove small objects:
L = labelmatrix(CC);
BW2 = ismember(L, find([S.Area] >= P));
You could simply follow these steps to get to the result.
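For reference, here is a rough OpenCV/NumPy equivalent of those three steps, assuming a binary 0/255 uint8 input; the function name and min_area default are illustrative:

import cv2
import numpy as np

def bwareaopen_like(binary, min_area=1500):
    # 1. Determine the connected components (like bwconncomp)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # 2. Compute the area of each component (like regionprops 'Area')
    areas = stats[:, cv2.CC_STAT_AREA]
    # 3. Keep only labels whose area is at least min_area (like ismember/find)
    keep = np.where(areas >= min_area)[0]
    keep = keep[keep != 0]  # label 0 is the background
    return np.where(np.isin(labels, keep), 255, 0).astype(np.uint8)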
I am trying to extract handwritten characters from field boxes.
My desired output would be the character segments with the boxes removed. So far, I've tried defining contours and filtering by area, but that hasn't yielded any good results.
import cv2
import numpy as np

# Reading image and binarization
im = cv2.imread('test.png')
char_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
char_bw = cv2.adaptiveThreshold(char_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 75, 10)

# Applying erosion and dilation
kernel = np.ones((5,5), np.uint8)
img_erosion = cv2.erode(char_bw, kernel, iterations=1)
img_dilation = cv2.dilate(img_erosion, kernel, iterations=1)

# Find Canny edges
edged = cv2.Canny(img_dilation, 100, 200)

# Finding contours
edged_copy = edged.copy()
cnts = cv2.findContours(edged_copy, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
print("Number of Contours found = " + str(len(cnts)))

# Draw all contours
cv2.drawContours(im, cnts, -1, (0, 255, 0), 3)

# Filter using area and save each crop
for no, c in enumerate(cnts):
    area = cv2.contourArea(c)
    if area > 100:
        (x, y, w, h) = cv2.boundingRect(c)
        crop = im[y:y+h, x:x+w]
        cv2.imwrite(f'./cnts/cnt-{no}.png', crop)
Here's a simple approach:
Obtain binary image. We load the image, enlarge it using imutils.resize(), convert to grayscale, and perform Otsu's thresholding to obtain a binary image
Remove horizontal lines. We create a horizontal kernel, perform morphological opening, and remove the horizontal lines using cv2.drawContours
Remove vertical lines. We create a vertical kernel, perform morphological opening, and remove the vertical lines using cv2.drawContours
Here's a visualization of each step:
Binary image
Detected lines/boxes to remove highlighted in green
Result
Code
import cv2
import numpy as np
import imutils

# Load image, enlarge, convert to grayscale, Otsu's threshold
image = cv2.imread('1.png')
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Remove horizontal
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (255,255,255), 5)

# Remove vertical
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,25))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(image, [c], -1, (255,255,255), 5)

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
I have data in a structured table image. The data looks like this:
I tried to extract the text from this image using this code:
import pytesseract
from PIL import Image
value=Image.open("data/pic_table3.png")
text = pytesseract.image_to_string(value, lang="eng")
print(text)
and here is the output:
EA Domains
Traditional role
Future role
Technology e Closed platforms ¢ Open platforms
e Physical e Virtualized
Applicationsand |e Proprietary e Inter-organizational
Integration e Siloed composite
e P2P integrations applications
e EAI technology e Software asa Service
e Enterprise Systems e Service-Oriented
e Automating transactions Architecture
e “Informating”
interactions
However, the expected output should be aligned according to the columns and rows. How can I do that?
You must preprocess the image to remove the table lines and dots before throwing it into OCR. Here's an approach using OpenCV.
Load image, grayscale, and Otsu's threshold
Remove horizontal lines
Remove vertical lines
Dilate to connect text and remove dots using contour area filtering
Bitwise-and to reconstruct image
OCR
Here's the processed image:
Result from Pytesseract
EA Domains Traditional role Future role
Technology Closed platforms Open platforms
Physical Virtualized
Applications and Proprietary Inter-organizational
Integration Siloed composite
P2P integrations applications
EAI technology Software as a Service
Enterprise Systems Service-Oriented
Automating transactions Architecture
“‘Informating”
interactions
Code
import cv2
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Load image, grayscale, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, (0,0,0), 2)

# Remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,15))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, (0,0,0), 3)

# Dilate to connect text, then remove dots using contour area filtering
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10,1))
dilate = cv2.dilate(thresh, kernel, iterations=2)
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    if area < 500:
        cv2.drawContours(dilate, [c], -1, (0,0,0), -1)

# Bitwise-and to reconstruct image
result = cv2.bitwise_and(image, image, mask=dilate)
result[dilate==0] = (255,255,255)

# OCR (--psm 6 assumes a single uniform block of text)
data = pytesseract.image_to_string(result, lang='eng', config='--psm 6')
print(data)

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.imshow('dilate', dilate)
cv2.waitKey()
You might want to detect the cells first, as shown in this image. You can do it using a Hough line transform, a technique available in OpenCV. After that, you can use the detected lines to select each ROI and then extract the text for each cell.
For a detailed explanation, kindly visit my blog post.
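In the meantime, here's a rough sketch of the per-cell idea, building the grid with morphology instead of the Hough transform; the filename and size cutoffs are assumptions:

import cv2
import numpy as np
import pytesseract

image = cv2.imread('1.png')  # filename is an assumption
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Build a grid mask from detected horizontal and vertical lines
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,15))
grid = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, h_kernel) | cv2.morphologyEx(thresh, cv2.MORPH_OPEN, v_kernel)

# Each hole in the grid is a cell; OCR each cell's ROI separately
cnts = cv2.findContours(255 - grid, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    # Skip tiny regions and the page-sized background region
    if 20 < w < image.shape[1] - 10 and h > 20:
        cell = image[y:y+h, x:x+w]
        text = pytesseract.image_to_string(cell, config='--psm 6').strip()
        print((x, y, w, h), text)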