Our team set up a vision system with a camera, a microscope and a tunable lens to look at the internal surface of a cone.
Visually speaking, the camera takes 12 images per cone, with each image covering 30 degrees.
Now we've collected many sample images and want to make sure each "fan" (as shown below) spans at least 30 degrees.
Is there any way in Python, with cv2 or other packages, to measure this central angle? Thanks.
Here is one way to do that in Python/OpenCV.
Read the image
Convert to gray
Threshold
Use morphology open and close to smooth and fill out the boundary
Apply Canny edge extraction
Separate the edge image into a top edge and a bottom edge by blackening the opposite half for each
Fit lines to the top and bottom edges
Compute the angle of each edge
Compute the difference between the two angles
Draw the lines on the input
Save the results
Input:
import cv2
import numpy as np
import math
# read image
img = cv2.imread('cone_shape.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray,11,255,cv2.THRESH_BINARY)[1]
# apply open then close to smooth boundary
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13,13))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((33,33), np.uint8)
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)
# apply canny edge detection
edges = cv2.Canny(morph, 150, 200)
hh, ww = edges.shape
hh2 = hh // 2
# split edge image in half vertically and blacken opposite half
top_edge = edges.copy()
top_edge[hh2:hh, 0:ww] = 0
bottom_edge = edges.copy()
bottom_edge[0:hh2, 0:ww] = 0
# get coordinates of white pixels in top and bottom
# note: need to transpose y,x in numpy to x,y for opencv
top_white_pts = np.argwhere(top_edge.transpose()==255)
bottom_white_pts = np.argwhere(bottom_edge.transpose()==255)
# fit lines to the white pixels
# (x,y) is a point on the line, (vx,vy) is a unit vector along the line
# flatten() unpacks the 4x1 arrays returned by fitLine into plain scalars
vx1, vy1, x1, y1 = cv2.fitLine(top_white_pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
vx2, vy2, x2, y2 = cv2.fitLine(bottom_white_pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
# compute the angle of each line from its direction vector
top_angle = math.degrees(math.atan(vy1/vx1))
bottom_angle = math.degrees(math.atan(vy2/vx2))
print(top_angle, bottom_angle)
# cone angle is the difference
cone_angle = math.fabs(top_angle - bottom_angle)
print(cone_angle)
# draw lines on input
lines = img.copy()
p1x1 = int(x1-1000*vx1)
p1y1 = int(y1-1000*vy1)
p1x2 = int(x1+1000*vx1)
p1y2 = int(y1+1000*vy1)
cv2.line(lines, (p1x1,p1y1), (p1x2,p1y2), (0, 0, 255), 1)
p2x1 = int(x2-1000*vx2)
p2y1 = int(y2-1000*vy2)
p2x2 = int(x2+1000*vx2)
p2y2 = int(y2+1000*vy2)
cv2.line(lines, (p2x1,p2y1), (p2x2,p2y2), (0, 0, 255), 1)
# save resulting images
cv2.imwrite('cone_shape_thresh.jpg',thresh)
cv2.imwrite('cone_shape_morph.jpg',morph)
cv2.imwrite('cone_shape_edges.jpg',edges)
cv2.imwrite('cone_shape_lines.jpg',lines)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("edges", edges)
cv2.imshow("top edge", top_edge)
cv2.imshow("bottom edge", bottom_edge)
cv2.imshow("lines", lines)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image:
Morphology processed image:
Edge Image:
Lines on input:
Cone Angle (in degrees):
42.03975696357633
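Since the point of the exercise is to verify that each fan spans at least 30 degrees, a minimal follow-up check on the measured value might look like this (the bare 30-degree cutoff is taken from the question; any tolerance you add on top of it is your own call):

# hedged sketch: pass/fail check against the 30-degree requirement
MIN_ANGLE = 30.0
if cone_angle >= MIN_ANGLE:
    print("OK: fan spans %.2f degrees (>= %.1f)" % (cone_angle, MIN_ANGLE))
else:
    print("FAIL: fan spans only %.2f degrees (< %.1f)" % (cone_angle, MIN_ANGLE))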
That sounds feasible. You need to do some preprocessing and filtering to figure out what works, and there is probably some tweaking involved.
There are three approaches that could work.
1.)
The basic idea is to somehow get two lines and measure the angle between them.
Pick a threshold that defines the outer black region (outside the central angle) and set all values below it to zero.
This will also set some of the blurry stripes inside the central angle to zero so we have to try to "heal" them away. This is done by using Morphological Transformations. You can read about them here and here.
You could try the operation Closing, but I don't know if it fixes stripes. Usually it fixes dots or scratches. This answer seems to indicate that it should work on lines.
Maybe at that point apply some Gaussian blurring and do the thresholding again. Then try some edge or line detection.
It's basically trial and error; you have to see what works.
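As a rough sketch of this first approach (the filename, threshold value, and kernel sizes below are assumptions you would have to tune for your images):

import cv2
import numpy as np

# read the fan image as grayscale ('fan.jpg' is a placeholder filename)
gray = cv2.imread('fan.jpg', cv2.IMREAD_GRAYSCALE)
# zero out the dark outer region; the threshold of 40 is a guess to tune
bw = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)[1]
# closing to "heal" the dark stripes inside the central angle
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
# blur, re-threshold, then try edge or line detection
smoothed = cv2.GaussianBlur(closed, (5, 5), 0)
bw2 = cv2.threshold(smoothed, 127, 255, cv2.THRESH_BINARY)[1]
edges = cv2.Canny(bw2, 50, 150)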
2.)
Another thing that could work is to try to use the arc-like scratches, maybe even strengthen them, and use the Hough Circle Transform. I think it detects arcs as well.
Just try it and see what the function returns. In the best case there are several circles / arcs that you can use to estimate the central angle.
There are several approaches on arc detection here on StackOverflow or here.
I am not sure if that's the same in all your images, but the one above looks like it has some thin green and pink arcs that seem to stretch all along the central angle. You could filter for that color, then convert to grayscale.
This question might be helpful.
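A minimal sketch of the Hough circle idea (every parameter value here is an assumption to experiment with):

import cv2
import numpy as np

gray = cv2.imread('fan.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename
gray = cv2.medianBlur(gray, 5)
# HoughCircles fits full circles, but strong arcs along the fan can still vote
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in np.uint16(np.around(circles))[0]:
        print("circle center=(%d,%d) radius=%d" % (x, y, r))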
3.)
Apply an edge filter, e.g. Canny (skimage.feature.canny).
Try several sigmas and post the images in your question, then we can try to think on how to continue.
What could work is to calculate the convex hull around all points that are part of an edge. Then get the two lines that form the central angle from the convex hull.
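A sketch of the convex-hull idea (the edge thresholds and the polygon tolerance are assumptions):

import cv2
import numpy as np

gray = cv2.imread('fan.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename
edges = cv2.Canny(gray, 50, 150)
# collect all edge points as (x, y) and wrap them in a convex hull
pts = np.argwhere(edges > 0)[:, ::-1].astype(np.int32)
hull = cv2.convexHull(pts)
# simplify the hull to a few vertices; the two long straight sides of the
# fan should survive, and the angle between them is the central angle
approx = cv2.approxPolyDP(hull, 0.01 * cv2.arcLength(hull, True), True)
print(approx.reshape(-1, 2))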
I'm studying OpenCV with Python by working on a project that aims to detect palm lines.
What I have done is basically use Canny edge detection and then apply Hough line detection on the edges, but the outcome is not very good.
Here is the source code I am using:
import cv2
import numpy as np

original = cv2.imread(file)
img = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
save_image_file(img, "gray")
img = cv2.equalizeHist(img)
save_image_file(img, "equalize")
img = cv2.GaussianBlur(img, (9, 9), 0)
save_image_file(img, "blur")
img = cv2.Canny(img, 40, 80)
save_image_file(img, "canny")
lined = np.copy(original) * 0
lines = cv2.HoughLinesP(img, 1, np.pi / 180, 15, np.array([]), 50, 20)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(lined, (x1, y1), (x2, y2), (0, 0, 255))
save_image_file(lined, "lined")
output = cv2.addWeighted(original, 0.8, lined, 1, 0)
save_image_file(output, "output")
I tried different parameter sets for the Gaussian kernel size and the Canny low/high thresholds, but the outcome either has too much noise or misses (part of) the major lines. The picture above is the best I have gotten so far.
Is there anything I should do to get result improved, or any other approach would get better result?
Any help would be appreciated!
What you are looking for is really experimental. You have already done the most important part. I suggest that you tune your parameters to get a reasonable, if somewhat noisy, set of lines, then you can do some filtering:
- using morphological filters,
- classification of lines according to their lengths, fit on contrasted areas, etc. (see the sketch below),
- improving your categories by dividing the palm area (without fingers) into a grid (4x4, where the four vertical finger corners can define the configuration of the grid),
- calculating the gradient image,
- orientation of lines may help as well.
Search for the algorithm "cumulative level lines detection"; it can help with the certainty of detected lines.
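As a concrete example of the length/orientation classification suggested above, a hedged sketch (both thresholds are placeholders to tune):

import numpy as np

def filter_lines(lines, min_len=40, max_angle_deg=60):
    # keep lines from cv2.HoughLinesP that are long enough and not too steep;
    # short, steep segments are more likely noise than palm lines
    kept = []
    for line in lines:
        x1, y1, x2, y2 = line[0]
        length = np.hypot(x2 - x1, y2 - y1)
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if length >= min_len and min(angle, 180 - angle) <= max_angle_deg:
            kept.append(line)
    return kept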
I am really new to OpenCV. I have to detect street lines in satellite imagery.
this is the original image
I applied different thresholds and I am able to differentiate background and foreground:
retval, threshold = cv2.threshold(img1,150, 255, cv2.THRESH_BINARY)
plt.imshow(threshold)
After this, I smoothed the image to remove noise for the Hough lines step:
# Initialize output
out = cv2.cvtColor(threshold, cv2.COLOR_GRAY2BGR)
# Median blurring to get rid of the noise; invert image
threshold_blur = 255 - cv2.medianBlur(threshold, 5)
plt.imshow(threshold_blur,cmap='gray', vmin = 0, vmax = 255)
Now when I apply Hough lines, I get this:
# Detect and draw lines
lines = cv2.HoughLinesP(threshold_blur, 1, np.pi/180, 10, minLineLength=10, maxLineGap=100)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
plt.imshow(out)
I need lines like this
Why is it not working, and what should I do?
I'm not sure you would want to do this with OpenCV alone; you may also want to use something like TensorFlow.
But if you want to use only OpenCV, here are a few options:
You could try dilating and eroding, then use findContours and loop through those contours to find objects with a certain minimum length, and use that to create a line from A to B, as in the sketch below.
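A minimal sketch of that first option (the kernel size and length threshold are assumptions; threshold_blur is the binary image from the question):

import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(threshold_blur, kernel, iterations=1)
mask = cv2.erode(mask, kernel, iterations=1)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
for cnt in contours:
    if cv2.arcLength(cnt, False) > 100:  # minimum street-segment length, a guess
        pts = cnt.reshape(-1, 2)
        a = pts[pts[:, 0].argmin()]  # leftmost contour point (A)
        b = pts[pts[:, 0].argmax()]  # rightmost contour point (B)
        cv2.line(out, (int(a[0]), int(a[1])), (int(b[0]), int(b[1])), (0, 0, 255), 2)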
Another solution would be to train a cascade to detect the parts of the images that are street sections and connect those points. This is a lot more time-consuming, though.
I am attempting to pull text from a few hundred JPGs that contain information on capital punishment records; the JPGs are hosted by the Texas Department of Criminal Justice (TDCJ). Below is an example snippet with personally identifiable information removed.
I've identified the underlines as being the impediment to proper OCR--if I go in, screenshot a sub-snippet and manually white-out lines, the resulting OCR through pytesseract is very good. But with underlines present, it's extremely poor.
How can I best remove these horizontal lines? What I have tried:
Started on OpenCV doc's walkthrough: Extract horizontal and vertical lines by using morphological operations. Got stuck pretty quickly, because I know zero C++.
Followed along with Removing Horizontal Lines in image - ended up with an illegible string.
Followed along with Removing long horizontal/vertical lines from edge image using OpenCV - wasn't able to get the intuition behind sizing the array of zeros here.
Tagging this question with c++ in the hope that someone could help translate Step 5 of the docs walkthrough to Python. I've tried a batch of transformations such as the Hough Line Transform, but I am feeling around in the dark within a library and area I have zero prior experience with.
import cv2
# Inverted grayscale
img = cv2.imread('rsnippet.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.bitwise_not(img)
# Transform inverted grayscale to binary
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY, 15, -2)
# An alternative; Not sure if `th` or `th2` is optimal here
th2 = cv2.threshold(img, 170, 255, cv2.THRESH_BINARY)[1]
# Create corresponding structure element for horizontal lines.
# Start by cloning th/th2.
horiz = th.copy()
r, c = horiz.shape
# Lost after here - not understanding intuition behind sizing/partitioning
All the answers so far seem to be using morphological operations. Here's something a bit different. This should give fairly good results if the lines are horizontal.
For this I use a part of your sample image shown below.
Load the image, convert it to gray scale and invert it.
import cv2
import numpy as np
import matplotlib.pyplot as plt
im = cv2.imread('sample.jpg')
gray = 255 - cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Inverted gray-scale image:
If you scan a row in this inverted image, you'll see that its profile looks different depending on the presence or the absence of a line.
plt.figure(1)
plt.plot(gray[18, :] > 16, 'g-')
plt.axis([0, gray.shape[1], 0, 1.1])
plt.figure(2)
plt.plot(gray[36, :] > 16, 'r-')
plt.axis([0, gray.shape[1], 0, 1.1])
Profile in green is a row where there's no underline, red is for a row with underline. If you take the average of each profile, you'll see that red one has a higher average.
So, using this approach you can detect the underlines and remove them.
for row in range(gray.shape[0]):
    avg = np.average(gray[row, :] > 16)
    if avg > 0.9:
        cv2.line(im, (0, row), (gray.shape[1]-1, row), (0, 0, 255))
        cv2.line(gray, (0, row), (gray.shape[1]-1, row), (0, 0, 0), 1)
cv2.imshow("gray", 255 - gray)
cv2.imshow("im", im)
Here are the detected underlines in red, and the cleaned image.
tesseract output of the cleaned image:
Convthed as th(
shot once in the
she stepped fr<
brother-in-lawii
collect on life in
applied for man
to the scheme i|
The reason for using only part of the image should be clear by now: since personally identifiable information has been removed from the original image, the threshold wouldn't have worked there. But this should not be a problem when you apply it in your processing. Sometimes you may have to adjust the thresholds (16, 0.9).
The result does not look very good with parts of the letters removed and some of the faint lines still remaining. Will update if I can improve it a bit more.
UPDATE:
Made some improvements: cleaned up and linked the missing parts of the letters. I've commented the code, so I believe the process is clear. You can also check the resulting intermediate images to see how it works. The results are a bit better.
tesseract output of the cleaned image:
Convicted as th(
shot once in the
she stepped fr<
brother-in-law. ‘
collect on life ix
applied for man
to the scheme i|
tesseract output of the cleaned image:
)r-hire of 29-year-old .
revolver in the garage ‘
red that the victim‘s h
{2000 to kill her. mum
250.000. Before the kil
If$| 50.000 each on bin
to police.
python code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
im = cv2.imread('sample2.jpg')
gray = 255 - cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# prepare a mask using Otsu threshold, then copy from original. this removes some noise
__, bw = cv2.threshold(cv2.dilate(gray, None), 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
gray = cv2.bitwise_and(gray, bw)
# make copy of the low-noise underlined image
grayu = gray.copy()
imcpy = im.copy()
# scan each row and remove lines
for row in range(gray.shape[0]):
    avg = np.average(gray[row, :] > 16)
    if avg > 0.9:
        cv2.line(im, (0, row), (gray.shape[1]-1, row), (0, 0, 255))
        cv2.line(gray, (0, row), (gray.shape[1]-1, row), (0, 0, 0), 1)
cont = gray.copy()
graycpy = gray.copy()
# after contour processing, the residual will contain small contours
residual = gray.copy()
# find contours
contours, hierarchy = cv2.findContours(cont, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for i in range(len(contours)):
    # find the bounding box of the contour
    x, y, w, h = cv2.boundingRect(contours[i])
    if 10 < h:
        cv2.drawContours(im, contours, i, (0, 255, 0), -1)
        # if bounding box height is above the threshold, remove the contour from the residual image
        cv2.drawContours(residual, contours, i, (0, 0, 0), -1)
    else:
        cv2.drawContours(im, contours, i, (255, 0, 0), -1)
        # if bounding box height is at or below the threshold, remove the contour from the gray image
        cv2.drawContours(gray, contours, i, (0, 0, 0), -1)
# now the residual only contains small contours. open it to remove thin lines
st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
residual = cv2.morphologyEx(residual, cv2.MORPH_OPEN, st, iterations=1)
# prepare a mask for residual components
__, residual = cv2.threshold(residual, 0, 255, cv2.THRESH_BINARY)
cv2.imshow("gray", gray)
cv2.imshow("residual", residual)
# combine the residuals. we still need to link the residuals
combined = cv2.bitwise_or(cv2.bitwise_and(graycpy, residual), gray)
# link the residuals
st = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (1, 7))
linked = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, st, iterations=1)
cv2.imshow("linked", linked)
# prepare a mask from the linked image
__, mask = cv2.threshold(linked, 0, 255, cv2.THRESH_BINARY)
# copy region from low-noise underlined image
clean = 255 - cv2.bitwise_and(grayu, mask)
cv2.imshow("clean", clean)
cv2.imshow("im", im)
One can try this.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('img_provided_by_op.jpg', 0)
img = cv2.bitwise_not(img)
# (1) clean up noises
kernel_clean = np.ones((2,2),np.uint8)
cleaned = cv2.erode(img, kernel_clean, iterations=1)
# (2) Extract lines
kernel_line = np.ones((1, 5), np.uint8)
clean_lines = cv2.erode(cleaned, kernel_line, iterations=6)
clean_lines = cv2.dilate(clean_lines, kernel_line, iterations=6)
# (3) Subtract lines
cleaned_img_without_lines = cleaned - clean_lines
cleaned_img_without_lines = cv2.bitwise_not(cleaned_img_without_lines)
plt.imshow(cleaned_img_without_lines)
plt.show()
cv2.imwrite('img_wanted.jpg', cleaned_img_without_lines)
Demo
The method is based on the answer by Zaw Lin. He/she identified lines in the image and just did subtraction to get rid of them. However, we cannot just subtract lines here because we have letters e, t, E, T, - containing lines as well! If we just subtract horizontal lines from the image, e will be nearly identical to c. - will be gone...
Q: How do we find lines?
To find lines, we can make use of erode function. To make use of erode, we need to define a kernel. (You can think of a kernel as a window/shape that functions operate on.)
The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel are 1; otherwise it is eroded (made to zero). -- (Source)
To extract lines, we define a kernel, kernel_line as np.ones((1, 5)), [1, 1, 1, 1, 1]. This kernel will slide through the image and erode pixels that have 0 under the kernel.
More specifically, while the kernel is applied to one pixel, it will capture the two pixels to its left and two to its right.
[X X Y X X]
^
|
Applied to Y, `kernel_line` captures Y's neighbors. If any of them is not
0, Y will be set to 0.
Horizontal lines will be preserved under this kernel, while pixels that don't have horizontal neighbors will disappear. This is how we capture lines with the following line of code.
clean_lines = cv2.erode(cleaned, kernel_line, iterations=6)
Q: How do we avoid extracting lines within e, E, t, T, and -?
We will combine erosion and dilation with iteration parameter.
clean_lines = cv2.erode(cleaned, kernel_line, iterations=6)
You might have noticed the iterations=6 part. The effect of this parameter will make the flat part in e, E, t, T, - disappear. This is because while we apply the same operation multiple times, the boundary part of these lines would be shrinking. (Applying the same kernel, only the boundary part will meet 0s and become 0 as the result.) We use this trick to make the lines in these characters disappear.
This, however, comes with a side effect that the long underline part that we want to get rid of also shrinks. We can grow it with dilate!
clean_lines = cv2.dilate(clean_lines, kernel_line, iterations=6)
Contrary to erosion, which shrinks an image, dilation makes it larger. While we still have the same kernel, kernel_line, if any part under the kernel is 1, the target pixel will be 1. Applying this, the boundary will grow back. (The parts in e, E, t, T, - won't grow back if we pick the parameters carefully so that they disappear at the erosion step.)
With this additional trick, we can successfully get rid of the lines without hurting e, E, t, T, and -.
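A toy demonstration of this shrink-then-grow behavior, on a synthetic one-row image with a short "letter-like" run and a long "underline-like" run (the run lengths and iteration count are chosen just for illustration):

import cv2
import numpy as np

row = np.zeros((1, 60), np.uint8)
row[0, 5:13] = 255    # 8 px run, like the horizontal bar of an 'e' or 't'
row[0, 20:55] = 255   # 35 px run, like an underline

kernel_line = np.ones((1, 5), np.uint8)
# each erosion trims 2 px from each end of a run; 3 iterations trim 6 px per end
eroded = cv2.erode(row, kernel_line, iterations=3)
restored = cv2.dilate(eroded, kernel_line, iterations=3)

print(np.count_nonzero(restored[0, 5:13]))   # 0  -> the short run is gone
print(np.count_nonzero(restored[0, 20:55]))  # 35 -> the long run grew back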
Since most of the lines to be detected in your source are long horizontal lines, this is similar to my other answer: Find single color, horizontal spaces in image.
This is the source image:
Here are my two main steps to remove the long horizontal line:
Do morph-close with long line kernel on the gray image
kernel = np.ones((1,40), np.uint8)
morphed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
then the morphed image contains the long lines:
Invert the morphed image, and add to the source image:
dst = cv2.add(gray, (255-morphed))
and you get the image with the long lines removed:
Simple enough, right? There are still some small line segments left, but I think they have little effect on OCR. Notice that almost all characters keep their original shape, except g, j, p, q, y, Q, which may look a little different. But modern OCR tools such as Tesseract (with LSTM technology) can deal with such simple confusion.
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
Total code to save removed image as line_removed.png:
#!/usr/bin/python3
# 2018.01.21 16:33:42 CST
import cv2
import numpy as np
## Read
img = cv2.imread("img04.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## (1) Create long line kernel, and do morph-close-op
kernel = np.ones((1,40), np.uint8)
morphed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("line_detected.png", morphed)
## (2) Invert the morphed image, and add to the source image:
dst = cv2.add(gray, (255-morphed))
cv2.imwrite("line_removed.png", dst)
Update # 2018.01.23 13:15:15 CST:
Tesseract is a powerful tool for OCR. Today I installed tesseract-4.0 and pytesseract, then ran OCR with pytesseract on my result line_removed.png.
import cv2
import pytesseract
img = cv2.imread("line_removed.png")
print(pytesseract.image_to_string(img, lang="eng"))
This is the result, and it looks fine to me.
Convicted as the triggerman in the murder—for—hire of 29—year—old .
shot once in the head with a 357 Magnum revolver in the garage of her home at ..
she stepped from her car. Police discovered that the victim‘s husband,
brother—in—law, _ ______ paid _ $2,000 to kill her, apparently so .. _
collect on life insurance policies totaling $250,000. Before the killing, .
applied for additional life insurance policies of $150,000 each on himself and his wife
to the scheme in three different statements to police.
was
and
could
had also
. confessed
A few suggestions:
Given that you're starting with a JPEG, don't compound the loss. Save your intermediate files as PNGs. Tesseract copes with those just fine.
Scale the image 2x (using cv2.resize) before handing it to Tesseract; see the sketch after this list.
Try detecting and removing the black underline. (This question might help). Doing that while preserving descenders might be tricky.
Explore Tesseract command-line options, of which there are many (and they're horribly documented, some requiring dives into C++ source to try to understand them). It's looking like ligatures are causing some grief. IIRC (it's been a while), there's a setting or two that might help.
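To make the first two suggestions concrete, a hedged sketch (the --psm value is just one option to experiment with, not a known fix for this particular image):

import cv2
import pytesseract

img = cv2.imread('rsnippet.jpg', cv2.IMREAD_GRAYSCALE)
# upscale 2x before OCR; INTER_CUBIC is a reasonable choice for text
big = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
# save intermediates as PNG to avoid compounding JPEG loss
cv2.imwrite('rsnippet_2x.png', big)
# page-segmentation mode 6 assumes a single uniform block of text
print(pytesseract.image_to_string(big, config='--psm 6'))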