Context: I am trying to find the directional heading from a small image of a compass. By directional heading I mean: if the red (north) point is 90 degrees counter-clockwise from the top, the viewer is facing east; 180 degrees is south, 270 is west, 0 is north, etc. I understand there are limitations with such a small, blurry image, but I'd like to be as accurate as possible. The compass is overlaid on street-view imagery, meaning the background is noisy and unpredictable.
The first strategy I thought of was to find the red pixel that is furthest away from the center and calculate the directional heading from that. The math is simple enough.
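For illustration, a minimal sketch of that calculation, assuming a hypothetical list red_pixels of (x, y) coordinates already classified as red and a known compass center (cx, cy):
import math

def heading_from_tip(red_pixels, cx, cy):
    # the red pixel furthest from the center is assumed to be the needle tip
    tip_x, tip_y = max(red_pixels, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    # counter-clockwise angle from "up"; note that image y grows downwards
    return math.degrees(math.atan2(cx - tip_x, cy - tip_y)) % 360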
The tough part for me is differentiating the red pixels from everything else, especially because almost any color could appear in the background.
My first thought was to black out the completely transparent parts to eliminate everything but the white transparent ring and the tips of the compass.
True Compass Values: 35.9901, 84.8366, 104.4101
These values are taken from the source code.
I then used this solution to find the closest RGB value to a user-given list of colors. After calibrating the list of colors, I was able to create a list that found some of the compass's innermost pixels. This yielded the correct result within +/- 3 degrees. However, when I tried altering the list to include every pixel of the red compass tip, there would be background pixels that registered as "red" and therefore messed up the calculation.
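For reference, a minimal sketch of the kind of closest_color() lookup meant here; the candidate list is purely illustrative, not my calibrated one:
import numpy as np

def closest_color(pixel, colors):
    # return the entry of `colors` with the smallest Euclidean distance to `pixel` in RGB space
    colors = np.asarray(colors, dtype=float)
    dists = np.sqrt(((colors - np.asarray(pixel, dtype=float)) ** 2).sum(axis=1))
    return tuple(int(v) for v in colors[np.argmin(dists)])

# illustrative list: the two known needle reds plus a darker, blur-affected variant
candidates = [(184, 42, 42), (204, 47, 48), (120, 35, 35)]
print(closest_color((190, 50, 45), candidates))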
I have manually found the end of the tip using this tool, and the result always ends up within +/- 1 degree (0.5 in most cases), so I hope this should be possible.
The original RGB values of the red in the compass are (184, 42, 42) and (204, 47, 48), but the images are screenshots of a video, which leaves the tip/edge pixels blurred and blackish/greyish.
Is there a better way of going about this than the closest_color() method? If so, what? If not, how can I calibrate a list of colors that will work?
If you don't have hard time constraints (e.g. live detection from video), and are willing to switch to NumPy, OpenCV, and scikit-image, you might use template matching. You can derive quite a good template (and mask) from the image of the needle you provided. In a loop, you iterate angles from 0° to 360° at the desired resolution – the finer the resolution, the longer the whole procedure takes – and perform the template matching. For each angle, you save the value of the best match, and finally you search for the best score over all angles.
That'd be my code:
import cv2
import numpy as np
from skimage.transform import rotate

# Set up template (and mask) for template matching
templ = cv2.resize(cv2.imread('templ_compass.png')[2:-2, :], (23, 69))
templ = cv2.cvtColor(templ, cv2.COLOR_BGR2BGRA)

# Build a symmetric alpha channel by blending the template with its vertical flip,
# then threshold it to obtain a binary mask of the needle
templ[..., 3] = cv2.cvtColor(
    cv2.addWeighted(templ[..., :3], 0.5,
                    cv2.flip(templ[..., :3], 0), 0.5, 0), cv2.COLOR_BGR2GRAY)
templ[..., 3] = cv2.threshold(templ[..., 3], 254, 255, cv2.THRESH_BINARY_INV)[1]

# Collect image file names
images = ['compass_36.png', 'compass_85.png', 'compass_104.png']

# Initialize angles and minimum values (float, since matchTemplate scores are floats)
angles = np.arange(0, 360, 1)
min_vals = np.zeros_like(angles, dtype=np.float64)

# Iterate image file names
for image in images:

    # Read image
    img = cv2.imread(image).astype(np.float32) / 255

    # Iterate angles
    for i_a, angle in enumerate(angles):

        # Rotate template and mask
        templ_rot = rotate(templ.copy(), angle, resize=True).astype(np.float32)

        # Actual template matching
        result = cv2.matchTemplate(img, templ_rot[..., :3], cv2.TM_SQDIFF,
                                   mask=templ_rot[..., 3])

        # Save minimum value
        min_vals[i_a] = cv2.minMaxLoc(result)[0]

    # Find best match angle
    best_match_idx = np.argmin(min_vals)
    print('{}: {}'.format(image, angles[best_match_idx]))
And, these are the results:
compass_36.png: 37
compass_85.png: 85
compass_104.png: 104
If you switch the angle resolution to angles = np.arange(0, 360, 0.5), you get:
compass_36.png: 36.5
compass_85.png: 85.0
compass_104.png: 104.5
Setting up the template involved some manual work, e.g. properly cropping the needle, getting an appropriate size, and deriving a good mask.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.19041-SP0
Python: 3.9.1
PyCharm: 2021.1.1
NumPy: 1.20.3
OpenCV: 4.5.2
scikit-image: 0.18.1
----------------------------------------
Related
I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions and cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part. I want to fill it with white.
Original: Original image
Rotated: Rotated image
You can try compositing the rotated image onto a white background via Image.composite() to get rid of the black bars/borders.
from PIL import Image
img = Image.open(r"Image_Path").convert("RGBA")
angle = 30
img = img.rotate(angle)
new_img = Image.new('RGBA', img.size, 'white')
Alpha_Image = Image.composite(img, new_img, img)
Alpha_Image = Alpha_Image.convert(img.mode)
Alpha_Image.show()
The above code takes in an image, converts it to mode RGBA (alpha is required for this process), and then rotates it by 30 degrees. After that it creates an empty RGBA Image object of the same dimensions as the original image, with every pixel set to 255 in each channel (i.e. pure white for RGB, and full opacity for alpha/transparency). It then composites the rotated image onto this white image, using the rotated image's own transparency as the mask. This yields the desired result, where the black bars/edges are replaced by white. In the end we convert the image back to the original color space.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me (with my tools I always get a light gray "border" around the rotated image that interferes with filling):
add a border on the non-rotated image and use the fill color with that border.
The bordering operation is lossless and filling will be exact (and easy).
rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
calculate the size of the rotated border using trigonometry. The result will not be exact (i.e. "131.12 pixel"). Usually you can do this in reverse, starting with an exact border on the rotated image and calculating the border you need to add, and adjust the border width so that the nonrotated border is exact. Example: with a rotated border of 170 pixels you get a nonrotated border of 140.3394 pixels. So you use a 510 pixel rotated border, resulting in the need to add a 421.018 pixel nonrotated border. This is close enough to 421 pixels that it is acceptable.
remove the rotated border.
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image.
It has the drawback that you end up with a more massive rotation, with higher memory expenditure and computation time, especially if you use larger borders to increase precision. A rough sketch of the whole procedure follows below.
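A rough sketch of this procedure with Pillow (the file name and border widths are placeholders; the exact rotated-border arithmetic from step 3 is reduced to a fixed value here):
from PIL import Image, ImageOps

img = Image.open("input.png")
angle = 30
pre_border = 200    # border added before rotation (hypothetical value)
post_border = 170   # border removed after rotation, from the trigonometry above (hypothetical value)

# 1. add a white border on the non-rotated image (lossless, exact fill)
bordered = ImageOps.expand(img, border=pre_border, fill="white")

# 2. rotate the bordered image; the seam now lies inside the white border
rotated = bordered.rotate(angle, expand=True)

# 3./4. remove the (approximately computed) rotated border
w, h = rotated.size
result = rotated.crop((post_border, post_border, w - post_border, h - post_border))
result.save("rotated_white.png")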
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). It works for me (you might need to tweak it a little):
from PIL import Image

im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65, 200, 210)
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp")
and the output
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049 # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4): # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x-j)*cos)+1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0]-j-1, d.size[1]-k-1)
            if i in [2, 3]:
                y, x = (d.size[0] - j-2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()
I work on MRIs. The problem is that the images are not always centered. In addition, there are often black bands around the patient's body.
I would like to be able to remove the black borders and center the patient's body like this:
I have already tried to determine the edges of the patient's body by reading the pixel table but I haven't come up with anything very conclusive.
In fact my solution works on only 50% of the images... I don't see any other way to do it...
Development environment: Python3.7 + OpenCV3.4
I'm not sure this is the standard or most efficient way to do this, but it seems to work:
import cv2
import numpy as np

# Load image as grayscale (since it's b&w to start with)
im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)

# Threshold it. I tried a few pixel values, and got something reasonable at min = 5
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)

# Find contours (OpenCV 3.x returns three values here)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Put all contours together and reshape to (_,2).
# The first "column" will be the x values of your contours, and the second the y values
c = np.vstack(contours).reshape(-1, 2)

# Extract the leftmost, rightmost, uppermost and lowermost points
xmin = np.min(c[:, 0])
ymin = np.min(c[:, 1])
xmax = np.max(c[:, 0])
ymax = np.max(c[:, 1])

# Use those as a guide of where to crop your image
crop = im[ymin:ymax, xmin:xmax]
cv2.imwrite('cropped.jpg', crop)
What you get in the end is this:
There are multiple ways to do this, and this answer is pretty much a collection of computer vision tips and tricks.
If the mass is in the center, and the area outside is always going to be black, you can threshold the image and then find the edge pixels as you already are. I'd add 10 pixels to the border to adjust for variances in the thresholding process.
Or if the body is always similarly sized, you can find the centroid of the blob (white area in the threshold image), and then crop a fixed area around it.
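A minimal sketch of that centroid-based crop, assuming the same 'im.jpg' input as above and a hypothetical fixed crop window of 400x400 pixels:
import cv2

im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)

# centroid of the white blob, from the image moments
M = cv2.moments(thresh, binaryImage=True)
cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])

# fixed-size crop around the centroid, clamped to the image borders
half = 200
y0, y1 = max(cy - half, 0), min(cy + half, im.shape[0])
x0, x1 = max(cx - half, 0), min(cx + half, im.shape[1])
cv2.imwrite('cropped_centered.jpg', im[y0:y1, x0:x1])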
I am trying to detect silver balls reflecting the environment with OpenCV:
With black balls, I successfully did it by detecting circles:
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray,(5,5),0);
gray = cv2.medianBlur(gray,5)
gray = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,3.5)
kernel = np.ones((3,3),np.uint8)
gray = cv2.erode(gray,kernel,iterations = 1)
gray = cv2.dilate(gray,kernel,iterations = 1)
circles = cv2.HoughCircles(gray, cv.CV_HOUGH_GRADIENT, 1, 260, \
param1=30, param2=65, minRadius=0, maxRadius=0)
But when using the program with silver balls, we don't get any result.
When looking at the edges calculated by the program, the edges of the ball are quite sharp. But the code is not recognizing any ball.
How do I improve the detection rate of the silver ball? I think of two ways doing that:
- Improving edge calculation
- Make the circle detection accept an image with unclear edges
Is that possible? What is the best way of doing so?
Help is very appreciated.
You have to tune your parameters. The HoughCircles function does a good job of detecting circles (even with gaps). Note that HoughCircles performs an internal binarization using Canny edge detection, so you don't have to do any thresholding yourself.
Given your image above, the code
import cv2
from matplotlib import pyplot as plt
import numpy as np
PATH = 'path/to/the/image.jpg'
img = cv2.imread(PATH, cv2.IMREAD_GRAYSCALE)
plt.imshow(img, cmap='gray')
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=130, param2=30, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in circles[0]:
        c = plt.Circle((x, y), r, fill=False, lw=3, ec='C1')
        plt.gca().add_patch(c)
plt.gcf().set_size_inches((12, 8))
plt.show()
yields the result
What do the different parameters mean?
The function signature is defined as
cv.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
image and circles are self-explanatory and will be skipped.
method
Specifies the variant of the Hough algorithm that is used internally. As stated in the documentation, only HOUGH_GRADIENT is supported at the moment. This method utilizes the 21HT (p.2, THE 2-1 HOUGH TRANSFORM) algorithm. The major advantage of this variant lies in the reduction of memory usage. The standard way of detecting circles using the Hough transform requires a search in a 3D Hough space (x, y and radius). However, with 21HT the Hough space is reduced to only 2 dimensions, which lowers the memory consumption by a fair amount.
dp
The dp parameter sets the inverse accumulator resolution. A good explanation can be found here. Note that this explanation uses the standard Hough transform as an example, but the effect is the same for 21HT. The accumulator for 21HT is just a bit different than for the standard HT.
minDist
Simply specifies the minimum distance between the centers of detected circles. In the code example above it's set to 20, which means that the centers of two detected circles have to be at least 20 pixels away from each other. I'm not sure how OpenCV filters the circles out, but scanning the source code it looks like circles with weaker matches are thrown out.
param1
Specifies the thresholds that are passed to the Canny Edge algorithm. Basically it's called like cv2.Canny(image, param1 / 2, param1).
param2
This paragraph should probably be validated by someone who is more familiar with the opencv source code.
param2 specifies the accumulator threshold. This value decides how complete a circle has to be in order to count as a valid circle. I'm not sure in which unit the parameter is given, though. But (scanning the source code again) it looks like it is an absolute vote threshold (meaning that it is directly affected by different radii).
The image below shows different circles (or what can be identified as a circle). The further you go to the right, the lower the threshold has to be in order to detect that circle.
minRadius and maxRadius
Simply limits the circle search in the range [minRadius, maxRadius] for the radius. This is useful (and can increase performance) if you can approximate (or know) the size of the circles you are searching for.
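For example, assuming the balls are roughly 40 to 80 pixels in radius (illustrative values), the call from the code above could be restricted like this:
import cv2

# same image as above; the radius limits are illustrative guesses for ~40-80 px balls
img = cv2.imread('path/to/the/image.jpg', cv2.IMREAD_GRAYSCALE)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=130, param2=30, minRadius=40, maxRadius=80)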
I'm working on a project in which I have to detect traffic lights (circles, obviously). I am currently working with a sample image I picked up from a spot, but after all my efforts I can't get the code to detect the proper circle (light).
Here is the code:-
# import the necessary packages
import numpy as np
import cv2

image = cv2.imread('circleTestsmall.png')
output = image.copy()

# Apply Gaussian blur to smooth the image
blur = cv2.GaussianBlur(image, (9, 9), 0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", output)
cv2.imshow('Blur', blur)
cv2.waitKey(0)
The image in which I want to detect the circle-
This is what the output image is:-
I tried playing with the Gaussian blur radius values and the minDist parameter in the Hough transform, but didn't have much success.
Can anybody point me in the right direction?
P.S- Some out of topic questions but crucial ones to my project-
1. My computer takes about 6-7 seconds to show the final image. Is it my code or my computer that's slow? My specs: Intel i3 M350 2.6 GHz (first gen), 6 GB RAM, Intel HD Graphics 1000 1625 MB.
2. Will the hough transform work on a binary thresholded image directly?
3. Will this code run fast enough on a Raspberry Pi 3 to be realtime? (I gotta mount it on a moving autonomous robot.)
Thank you!
First of all you should restrict your parameters a bit.
Please refer to: http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghcircles
At least set reasonable values for min and max radius. Try to find that one particular circle first. If you succeed increase your radius tolerance.
The Hough transform is a brute-force method. It will try every possible radius for every edge pixel in the image. That's why it is not very suitable for real-time applications, especially if you do not provide proper parameters and input. You have no radius limits at the moment, so you will calculate hundreds, if not thousands, of circles for every pixel...
In your case the traffic light also is not very round, so the accumulated result won't be very good. Try finding highly saturated, bright, compact blobs of a reasonable size instead. It should be faster and more robust.
You can further reduce processing time if you restrict the image size. I guess you can assume that the traffic light will always be in the upper half of your image, so omit the lower half. Traffic lights will always be green, red or yellow; remove everything that is not of those colors... I think you get what I mean.
I think that you should first perform a color segmentation based on the stoplight colors. It will tremendously reduce the ROI. Then you can apply the Hough Transform on the ROI edges only (because you want the contour).
Another restriction: only accept circles where the inside color is homogeneous. This would throw out all the false hits in the above example.
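A rough sketch of that color-segmentation idea, assuming a BGR input image named 'traffic.png' and red lights; the HSV ranges and Hough parameters are illustrative guesses that would need tuning:
import cv2

img = cv2.imread('traffic.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# red wraps around the hue axis, so combine two hue ranges
mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))

# keep only the segmented region, then run the Hough transform on it
masked = cv2.bitwise_and(img, img, mask=mask)
gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200,
                           param1=100, param2=20, minRadius=5, maxRadius=60)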
I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time, and even when it finds lines, they are not even close to the actual image.
import cv2
import numpy as np

img = cv2.imread('image_with_edges.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)

element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)

edges = cv2.Canny(b,10,100,apertureSize = 3)

lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()

for x1,y1,x2,y2 in lines:
    for index, (x3,y3,x4,y4) in enumerate(lines):
        if y1==y2 and y3==y4: # Horizontal Lines
            diff = abs(y1-y3)
        elif x1==x2 and x3==x4: # Vertical Lines
            diff = abs(x1-x3)
        else:
            diff = 0
        if diff < 10 and diff is not 0:
            del lines[index]

gridsize = (len(lines) - 2) / 2

cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
cv2.imwrite('houghlines3.jpg',img)
Input Image:
Output Image: (see the Red Line):
#ljetibo Try this with:
c_6.jpg
There's quite a bit wrong here so I'll just start from the beginning.
Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.
The manual mentions that
cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst
the special value THRESH_OTSU may be combined with one of the above
values. In this case, the function determines the optimal threshold
value using the Otsu’s algorithm and uses it instead of the specified
thresh .
I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc.); unfortunately that manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then it applies THRESH_BINARY, I believe.
Imagine this as if you're taking an image of a cathedral or a tall building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the church will be darker, that is, towards the middle and left side of the histogram.
Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image, Otsu's algorithm supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:
After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:
flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or was it to remove noise? In any case, you never assigned the result of the erosion to anything. Numpy arrays, which is the way images are represented, are mutable, but that's not the way the syntax works:
cv2.erode(src, kernel, [optionalOptions] ) → dst
So you have to write:
b = cv2.erode(b,element)
Ok, now for the element and how erosion works. Erosion drags a kernel over an image. The kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created
cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.
What erosion does is, first, retrieve all the pixel brightness values from the original image where the kernel element overlapping the image segment has a "1". Then it finds the minimal value among the retrieved pixels and replaces the anchor with that value.
What this means, in your case, is that you drag the [1] matrix over the image, compare whether the source image pixel brightness is larger than, equal to or smaller than itself, and then replace it with itself.
If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is that thing that "doesn't fit in" with the surroundings. So if you compare your centre pixel with its surroundings and find it doesn't fit, it's most likely noise.
Additionally, I've said it replaces the anchor with the minimal value retrieved by the kernel. Numerically, minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels. Erosion would replace the 255 valued white pixels with 0 valued black pixels if they're in the reach of the kernel. In any case it shouldn't be of a shape (1,1), ever.
>>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]], dtype=uint8)
If we erode the second image with a 3x3 rectangular kernel, we get the image below.
Ok, now we got that out of the way, next thing you do is you find edges using Canny edge detection. The image you get from that is:
Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course, there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after doing it right would be this:
Now, since you never described your exact idea, and my best guess is that you want the parallels and meridians, you'll have more luck on maps with a smaller scale, because those aren't lines to begin with; they are curves. Additionally, is there a specific reason to use the probabilistic Hough? The "regular" Hough doesn't suffice?
Sorry for the too-long post, hope it helps a bit.
The text here was added as a request for clarification from the OP on Nov. 24th, because there's no way to fit the answer into a character-limited comment.
I'd suggest the OP asks a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.
There are several ways to detect curves but none of them are easy. In the order of simplest-to-implement to hardest:
Use the RANSAC algorithm. Develop a formula describing the nature of the long. and lat. lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator, with the equator being the perfectly straight line, but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find and then programmatically define the model you want to try to fit to the curves. Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math. (A small sketch follows after this list.)
A bit harder to do would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, you can check out the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1., because it depends on being able to re-create a grid with enough details and objects on it that cv can identify the structures on the image you're trying to warp it to. It requires you to do math similar to 1. and just a bit of coding to compose the end solution out of several different functions.
To actually do it: there are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment, then group all the found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are just random. See this paper on the matter. Out of all the approaches this one is the hardest because it requires quite a bit of solo coding and some math about the method.
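As a small illustration of option 1., here is a rough sketch that uses scikit-image's ransac and CircleModel (rather than any SciPy class) to fit one latitude curve as a circle segment; the file name, Canny thresholds and RANSAC parameters are placeholders you'd need to tune:
import cv2
import numpy as np
from skimage.measure import ransac, CircleModel

# hypothetical input: edge pixels assumed to belong to a single latitude curve
edges = cv2.Canny(cv2.imread('map.jpg', cv2.IMREAD_GRAYSCALE), 10, 100)
points = np.column_stack(np.nonzero(edges)[::-1]).astype(float)  # (x, y) pairs

# robustly fit a circle model; thresholds and trial count are illustrative
model, inliers = ransac(points, CircleModel, min_samples=3,
                        residual_threshold=2, max_trials=1000)
xc, yc, radius = model.params
print('fitted circle: centre=({:.1f}, {:.1f}), r={:.1f}'.format(xc, yc, radius))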
There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to do it more easily, I don't know. If you ask a new question, one that hasn't been closed as answered already, you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.
To show you what you can do with just the Hough transform, check out the code below:
import cv2
import numpy as np

def draw_lines(hough, image, nlines):
    n_x, n_y = image.shape
    #convert to color image so that you can see the lines
    draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)

    for (rho, theta) in hough[0][:nlines]:
        try:
            x0 = np.cos(theta)*rho
            y0 = np.sin(theta)*rho
            pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
                    int(y0 + (n_x+n_y)*np.cos(theta)) )
            pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
                    int(y0 - (n_x+n_y)*np.cos(theta)) )
            alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
            alphdeg = alph*180/np.pi
            #OpenCv uses weird angle system, see: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
            if abs( np.cos( alph - 180 )) > 0.8: #0.995:
                cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
            if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0,0,255), 2)
        except:
            pass

    cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                [cv2.IMWRITE_PNG_COMPRESSION, 12])

img = cv2.imread('a.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
cv2.imwrite("1tresh.jpg", b)

element = np.ones((3,3))
b = cv2.erode(b,element)
cv2.imwrite("2erodedtresh.jpg", b)

edges = cv2.Canny(b,10,100,apertureSize = 3)
cv2.imwrite("3Canny.jpg", edges)

hough = cv2.HoughLines(edges, 1, np.pi/180, 200)
draw_lines(hough, b, 100)
As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you have several detected lines that behave like tangents on the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with the lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines a lot of it just looks like junk (especially that highest detected latitude line that seems too "angled"; in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line detection algorithm.