Pytesseract unable to recognize characters in binary images - python

Using various methods I have changed an image captcha to look somewhat like this
However, when using Pytesseract OCR, the package is unable to identify any characters, and I think it is due to the line above the letters.
script.py
cv2.imwrite(filename, imgOP)
text = pytesseract.image_to_string(Image.open(filename))
For this image, the output in the console is empty.
However, when I tried another image (given below), I got the output
PGKQKf
which is wrong again because of the line above the letter T.
I have used various techniques to clean the images such as erosion, dilation and also Probabilistic Hough Transform (result given below)
# Hough Line Transform
import cv2
import numpy as np

img = cv2.imread('Output1.png')
edges = cv2.Canny(img, 1000, 1500)
minLineLength = 0
maxLineGap = 10000000000
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 15, minLineLength, maxLineGap)
for x in range(0, len(lines)):
    for x1, y1, x2, y2 in lines[x]:
        cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 2)
cv2.imwrite('houghlines3.jpg', img)
The image after the transformation looks somewhat like this.
No other combination of minLineLength and maxLineGap values works either.
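For what it's worth, here is a rough sketch of the erosion/dilation idea mentioned earlier, using a wide horizontal kernel to pick out the thin line and subtract it; the kernel sizes, and the assumption that the text is white on a black background, are guesses rather than details from the original attempt:
import cv2
import numpy as np

# Binarised captcha; assumes white characters on a black background
img = cv2.imread('Output1.png', cv2.IMREAD_GRAYSCALE)

# A wide, flat kernel responds to the thin horizontal line but not to the letters
line_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))

# Morphological opening keeps only the line, which is then subtracted
detected_line = cv2.morphologyEx(img, cv2.MORPH_OPEN, line_kernel)
cleaned = cv2.subtract(img, detected_line)

# A small dilation repairs letter strokes nicked by the subtraction
repair_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
cleaned = cv2.dilate(cleaned, repair_kernel, iterations=1)

cv2.imwrite('cleaned.png', cleaned)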
How should I proceed? I have looked at various techniques for making Tesseract more accurate, but I am confused about which one I should use.
Other than Tesseract, are there any other techniques that could be applied to get the desired results?
I had also thought of creating a mask: using an online tool, I converted the image into 0s and 1s (given below). But how do I go about using it to identify the characters?
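As a very rough sketch (the file name, the 0/1 orientation, and the --psm value are all assumptions, not details from the question), the mask could be scaled back to a normal image and handed to Tesseract like this:
import numpy as np
import pytesseract
from PIL import Image

# Hypothetical 0/1 mask produced by the online tool (1 = text pixel, 0 = background)
mask = np.loadtxt('mask.txt', dtype=np.uint8)

# Scale to 0/255 so Tesseract sees black text on a white background
img = Image.fromarray((1 - mask) * 255)

# --psm 7 treats the image as a single line of text, which often suits captchas
text = pytesseract.image_to_string(img, config='--psm 7')
print(text)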

Related

Detecting handwritten lines using OpenCV

I'm trying to detect underlines and boxes in the following image:
For example, this is the output I'm aiming for:
Here's what I've attempted:
import cv2
import numpy as np
# Load image
img = cv2.imread('document.jpg')
# Convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply Gaussian blur to reduce noise
blur = cv2.GaussianBlur(gray, (3, 3), 0)
# Threshold the image
ret, thresh = cv2.threshold(blur, 127, 255, cv2.THRESH_TRUNC)
# Apply Canny Edge Detection
edges = cv2.Canny(thresh, 155, 200)
# Use HoughLinesP to detect lines
lines = cv2.HoughLinesP(edges, rho=1, theta=1*np.pi/180, threshold=100, minLineLength=100, maxLineGap=50)
# Draw lines on image
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 4)
However, this is the result I get:
Here are my thoughts regarding this problem:
I might need to use Otsu's thresholding (or adaptive thresholding) to provide a proper binary image to cv2.Canny(). However, I doubt this is the core issue. Here is how the image looks with the current thresholding applied:
cv2.threshold() already does a decent job separating the notes from the page.
Once I get HoughLinesP() to properly draw all the lines (and not the weird scribbles it's currently outputting), I can write some sort of box-detector function to find the boxes based on the intersections (or near-intersections) of four lines. As for underlines, I simply need to detect horizontal lines in the output of HoughLinesP(), which shouldn't be difficult: for any given line, check whether its two y coordinates are within some small range of each other (a sketch of this check follows below).
So the fundamental problem I have is this: how do I get HoughLinesP() to output smoother lines and not the current mess it's giving so I can then move forward with detecting boxes and lines?
Additionally, do my proposed methods for finding boxes and underlines make sense from an efficiency standpoint? Does OpenCV provide a better way for achieving what I want to accomplish?
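For illustration, the y-coordinate check mentioned above could look roughly like this (the tolerance is a placeholder, and `lines` is assumed to be the output of the cv2.HoughLinesP call above):
def horizontal_segments(lines, y_tolerance=5):
    # Keep only near-horizontal Hough segments as underline candidates;
    # y_tolerance is a placeholder, not a tuned value
    keep = []
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if abs(int(y2) - int(y1)) <= y_tolerance:
            keep.append((x1, y1, x2, y2))
    return keep

# e.g. underlines = horizontal_segments(lines)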

Merging two external edges with Python

I'm working on a project in which I have to merge two images of the cross-section of an object. Most of the time, because of different perspectives and some noise, the two images don't merge exactly: at some points there are two lines where there should be one. I want to turn them into a single line using approaches such as averaging or interpolation, but I don't know how to do that. I'm working with OpenCV. Note that I also have separate images of each cross-section, in the same position they occupy in the merged image. The thickness of each line is one pixel.
I couldn't come up with an optimal solution, but if you detect the lines and play with the coordinates you may get something similar:
The shape is not ideal, but it should give you an intuition about where to start.
I used createFastLineDetector to find the lines and then drew the second image into the first one.
I used the default parameters from the documentation:
_length_threshold = 10
_distance_threshold = 1.41421356
_canny_th1 = 50
_canny_th2 = 50
_canny_aperture_size = 3
_do_merge = False
Maybe you can enhance the image by changing the above parameters.
Code:
import cv2
edge1 = cv2.imread('edge1.png')
edge2 = cv2.imread('edge2.png')
gray1 = cv2.cvtColor(edge1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(edge2, cv2.COLOR_BGR2GRAY)
d = cv2.ximgproc.createFastLineDetector(_do_merge=True)
lines1 = d.detect(gray1)
lines2 = d.detect(gray2)
# Draw the detected lines from each image onto edge1, shifted vertically
# so the two sets of lines come together
for current_line1 in lines1:
    (x11, y11, x12, y12) = current_line1[0]
    cv2.line(edge1, (int(x11), int(y11 - 5)), (int(x12), int(y12 - 5)), (255, 255, 255), 3)
for current_line2 in lines2:
    (x11, y11, x12, y12) = current_line2[0]
    cv2.line(edge1, (int(x11), int(y11 - 15)), (int(x12), int(y12 - 15)), (255, 255, 255), 3)

Detecting palm lines with OpenCV in Python

I'm studying OpenCV with Python by working on a project that aims to detect palm lines.
What I have done is basically use Canny edge detection and then apply Hough line detection on the edges, but the outcome is not very good.
Here is the source code I am using:
import cv2
import numpy as np

# save_image_file() is the author's own helper for writing intermediate images
original = cv2.imread(file)
img = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
save_image_file(img, "gray")
img = cv2.equalizeHist(img)
save_image_file(img, "equalize")
img = cv2.GaussianBlur(img, (9, 9), 0)
save_image_file(img, "blur")
img = cv2.Canny(img, 40, 80)
save_image_file(img, "canny")
lined = np.copy(original) * 0
lines = cv2.HoughLinesP(img, 1, np.pi / 180, 15, np.array([]), 50, 20)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(lined, (x1, y1), (x2, y2), (0, 0, 255))
save_image_file(lined, "lined")
output = cv2.addWeighted(original, 0.8, lined, 1, 0)
save_image_file(output, "output")
I tried different combinations of Gaussian kernel size and Canny low/high thresholds, but the outcome either has too much noise or is missing (part of) the major lines. The picture above is already the best I have got so far.
Is there anything I should do to improve the result, or would any other approach work better?
Any help would be appreciated!
What you are looking for is really experimental. You have already done the most important part. I suggest you tune your parameters to get a reasonable (if noisy) number of lines, then do some filtering:
- using morphological filters,
- classification of lines (according to their length, how well they fit on contrasted areas, etc.),
- improving your categories by dividing the palm area (without fingers) into a grid (4x4, where the four vertical finger corners can define the configuration of the grid),
- calculating the gradient image,
- the orientation of the lines may help as well.
Also search for the algorithm "cumulative level lines detection"; it can help with the certainty of the detected lines.
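As a minimal sketch of the length/orientation filtering suggested above (all thresholds are placeholders, not tuned values), the output of the HoughLinesP call from the question could be post-processed like this:
import numpy as np

def filter_lines(lines, min_length=40, max_angle_deg=60):
    # Keep segments that are long enough and not too close to vertical;
    # both thresholds are placeholders to be tuned on real images
    kept = []
    for line in lines:
        x1, y1, x2, y2 = line[0]
        length = np.hypot(x2 - x1, y2 - y1)
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if length >= min_length and min(angle, 180 - angle) <= max_angle_deg:
            kept.append((x1, y1, x2, y2))
    return kept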

How to detect street lines in satellite aerial image using openCV?

I am really new to OpenCV. I have to detect street lines in satellite imagery.
This is the original image.
I applied different thresholds and I am able to differentiate the background and the foreground.
retval, threshold = cv2.threshold(img1,150, 255, cv2.THRESH_BINARY)
plt.imshow(threshold)
After this, I smoothed the image to remove noise for HoughLines.
# Initialize output
out = cv2.cvtColor(threshold, cv2.COLOR_GRAY2BGR)
# Median blurring to get rid of the noise; invert image
threshold_blur = 255 - cv2.medianBlur(threshold, 5)
plt.imshow(threshold_blur,cmap='gray', vmin = 0, vmax = 255)
Now when I apply Hough lines, this is what I get:
# Detect and draw lines
lines = cv2.HoughLinesP(threshold_blur, 1, np.pi/180, 10, minLineLength=10, maxLineGap=100)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
plt.imshow(out)
I need lines like this:
Why is it not working, and what should I do?
I'm not sure you would want to do this with only OpenCV; you may also want to use something like TensorFlow.
But if you want to use only OpenCV, here are a few options:
You could try dilating and eroding, then use findContours and loop through those contours to find objects with a certain minimum length, and use those to create a line from A to B (a rough sketch of this follows below).
Another solution would be to train a cascade to detect the parts of the image that are street sections and connect those points. This is a lot more time-consuming, though.
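Here is a rough sketch of the first option (dilate/erode, then findContours and keep the long contours); every kernel size and threshold below is a guess, and the input is assumed to be the binarised road mask from the question:
import cv2
import numpy as np

# Hypothetical input: the binarised road mask from the question, saved to disk
mask = cv2.imread('threshold.png', cv2.IMREAD_GRAYSCALE)

# Close small gaps and remove speckle with dilate/erode
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.dilate(mask, kernel, iterations=2)
mask = cv2.erode(mask, kernel, iterations=2)

# Find contours, keep only the long ones, then fit a line through each
out = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.arcLength(cnt, False) < 200:  # minimum length is a guess
        continue
    vx, vy, x0, y0 = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    # Draw the fitted line across the bounding box of the contour
    x, y, w, h = cv2.boundingRect(cnt)
    if abs(vx) > 1e-6:
        y_left = int(y0 + (x - x0) * vy / vx)
        y_right = int(y0 + (x + w - x0) * vy / vx)
        cv2.line(out, (x, y_left), (x + w, y_right), (0, 0, 255), 2)

cv2.imwrite('streets.png', out)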

OpenCV houghLinesP parameters

I am having difficulty finding the lines on a chessboard in this image using HoughLinesP with OpenCV in Python.
In an attempt to understand the parameters of HoughLinesP, I have come up with the following code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
from matplotlib import image as image

I = image.imread('chess.jpg')
G = cv2.cvtColor(I, cv2.COLOR_BGR2GRAY)

# Canny Edge Detection:
Threshold1 = 150
Threshold2 = 350
FilterSize = 5
E = cv2.Canny(G, Threshold1, Threshold2, FilterSize)

Rres = 1
Thetares = 1*np.pi/180
Threshold = 1
minLineLength = 1
maxLineGap = 100
lines = cv2.HoughLinesP(E, Rres, Thetares, Threshold, minLineLength, maxLineGap)
N = lines.shape[0]
for i in range(N):
    x1 = lines[i][0][0]
    y1 = lines[i][0][1]
    x2 = lines[i][0][2]
    y2 = lines[i][0][3]
    cv2.line(I, (x1, y1), (x2, y2), (255, 0, 0), 2)

plt.figure(), plt.imshow(I), plt.title('Hough Lines'), plt.axis('off')
plt.show()
The problem I am having is that this picks up only one line. If I reduce the maxLineGap to 1, it picks up thousands.
I understand why this might be but how do I pick a suitable set of parameters to get all these co-linear lines to merge? Am I missing something?
I would like to keep the code simple as I am using it as an example of this function in action.
Thanks in advance for any help!
Update: This works perfectly with HoughLines.
And there don't seem to be any edge detection issues, as Canny is working just fine.
However, I still need to get HoughLinesP to work. Any ideas??
Images here: Results
OK, I finally found the problem and thought I would share the solution for anyone else driven nuts by this. The issue is that the HoughLinesP function has an extra parameter, "lines", which is redundant because it duplicates the function's return value:
cv2.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]])
This causes issues when the parameters are passed positionally, because they end up being read in the wrong order. To avoid confusion with the order of the parameters, the simplest solution is to specify them inside the function call as keyword arguments, like so:
lines = cv2.HoughLinesP(E, rho=1, theta=1*np.pi/180, threshold=100, minLineLength=100, maxLineGap=50)
This totally fixed my problem and I hope it will help others.
edges: Output of the edge detector.
lines: A vector to store the coordinates of the start and end of the line.
rho: The resolution of the parameter rho in pixels.
theta: The resolution of the parameter theta in radians.
threshold: The minimum number of intersecting points to detect a line.
Sample application
import cv2
import numpy as np
img = cv2.imread('sudoku.png', cv2.IMREAD_COLOR)
# Convert the image to gray-scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the edges in the image using canny detector
edges = cv2.Canny(gray, 50, 200)
# Detect points that form a line
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=10, maxLineGap=250)
# Draw lines on the image
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (255, 0, 0), 3)
# Show result
img = cv2.resize(img, dsize=(600, 600))
cv2.imshow("Result Image", img)
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
cv2.HoughLinesP(image, rho, theta, threshold, np.array([]), minLineLength=xx, maxLineGap=xx)
This will also work.
It is not a HoughLinesP issue; that method simply detects all the lines in the picture and returns them to you.
To get the lines you want, you will need to smooth the image before you use the method. However, if you smooth too much, there won't be any edges left for HoughLinesP to detect.
You can learn more about OpenCV's smoothing effects here.
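For illustration, a minimal before/after of that smoothing step might look like this (the kernel size and thresholds below are arbitrary, not tuned for the chessboard image):
import cv2
import numpy as np

img = cv2.imread('chess.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Smoothing first gives Canny cleaner, less fragmented edges
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 150, 350)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=50)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
cv2.imwrite('chess_lines.jpg', img)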
