Hough transform line follower - python

Okay, I want to make a program to detect a line from a camera stream. This is for a line follower robot. If the robot knows the angle of two parallel lines, it knows in which direction it must drive.
I perform the following steps:
Make frame gray
Gaussian blur
Canny edge
Hough transform
The first problem is that when there are no lines (or only a few lines), the program terminates.
I don't know how to solve that.
Also, I want to get the angle of the line(s), and I want to get the distance between 2 parallel lines (and know which 2 lines are parallel).
Here is my very simple code; it is based on most of the examples on the internet:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
ret = cap.set(3, 640)   # frame width
ret = cap.set(4, 480)   # frame height

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gauss = cv2.GaussianBlur(gray, (3, 3), 0)
    edges = cv2.Canny(gauss, 0, 150, apertureSize=3)   # use the blurred image for edge detection
    lines = cv2.HoughLines(edges, 1, np.pi/180, 50)
    for rho, theta in lines[0]:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow('edges', edges)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # needed so the windows actually update
        break

Maybe a try/except block can solve that:
while True:
    try:
        'your code'
    except:
        'other code'
This way an error wouldn't end the program, and you could decide what to do instead.
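A more direct fix than catching the exception is to check whether cv2.HoughLines returned anything before looping over it. The sketch below is only an illustration (not from the answer above) of how the body of your while loop could handle that; it also shows how you might read the angle of each line from theta and estimate the distance between two parallel lines from their rho values:
lines = cv2.HoughLines(edges, 1, np.pi/180, 50)
if lines is None or len(lines) < 2:
    # No (or too few) lines in this frame: skip it instead of crashing
    continue
detected = [(rho, theta) for rho, theta in lines[:, 0]]   # flatten (N, 1, 2) into (rho, theta) pairs
for rho, theta in detected:
    angle_deg = np.degrees(theta)   # theta = 0 means a vertical line, 90 degrees means horizontal
    print('line angle (degrees):', angle_deg)
# Two lines are roughly parallel when their theta values are close; for parallel
# lines the distance between them is simply the difference of their rho values.
(rho1, theta1), (rho2, theta2) = detected[0], detected[1]   # compare the first two as an example
if abs(theta1 - theta2) < np.radians(5):    # 5 degree tolerance, an arbitrary choice
    print('parallel lines, distance in pixels:', abs(rho1 - rho2))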

Related

HoughLines method of OpenCV is giving only 1 line as output

I am using the HoughLines method for line detection, but it gives only one line as output, which I think is the line with the most votes. I tried the solution from "In opencv using houghlines prints only one line", but it is taking a lot of time and is not working.
My code is:
folderInPath = "DevanagariHandwrittenCharacterDataset/Test"
folderOutPath = "DevanagariHandwrittenCharacterDataset/Test_Lines"
for folderPath in os.listdir(folderInPath):
inPath = "DevanagariHandwrittenCharacterDataset/Test/" + folderPath
#os.mkdir(os.path.join(folderOutPath, folderPath+'_lines'))
outPath = os.path.join(folderOutPath, folderPath+'_lines')
dirs = "DevanagariHandwrittenCharacterDataset/Test/" + folderPath
for imagePath in os.listdir(dirs):
# imagePath contains name of the image for eg. 46214.png
inputPath = os.path.join(inPath, imagePath)
# inputPath contains the full directory name for eg. character_1_ka/46214.png|
# Reading the required image in which operations are to be done.
# Make sure that the image is in the same directory in which this python program is
img = cv2.imread(inputPath)
# Convert the img to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Apply edge detection method on the image
edges = cv2.Canny(gray,50,150,apertureSize = 3)
# This returns an array of r and theta values
lines = cv2.HoughLines(edges,1,np.pi/180, 5)
for line in lines:
# The below for loop runs till r and theta values are in the range of the 2d array
for r,theta in line:
# Stores the value of cos(theta) in a
a = np.cos(theta)
# Stores the value of sin(theta) in b
b = np.sin(theta)
# x0 stores the value rcos(theta)
x0 = a*r
# y0 stores the value rsin(theta)
y0 = b*r
# x1 stores the rounded off value of (rcos(theta)-1000sin(theta))
x1 = int(x0 + 1000*(-b))
# y1 stores the rounded off value of (rsin(theta)+1000cos(theta))
y1 = int(y0 + 1000*(a))
# x2 stores the rounded off value of (rcos(theta)+1000sin(theta))
x2 = int(x0 - 1000*(-b))
# y2 stores the rounded off value of (rsin(theta)-1000cos(theta))
y2 = int(y0 - 1000*(a))
# cv2.line draws a line in img from the point(x1,y1) to (x2,y2). (0,0,255) denotes the colour of the line to be
#drawn. In this case, it is red.
cv2.line(img,(x1,y1), (x2,y2), (0,0,255),2)
# fullOutPath contains the path of the output
fullOutPath = os.path.join(outPath, 'lines_'+imagePath)
# All the changes made in the input image are finally written on a new image houghlines.jpg
cv2.imwrite(fullOutPath, img)
print("Done " + folderPath)
For information, my input images are 32 x 32 pixel characters from the Hindi language. Does anyone have suggestions or a solution for this?
I am attaching one of the images from my dataset; there are several images like it.
Your code is generally correct, except that you didn't choose suitable parameters for the line detection: lines = cv2.HoughLines(edges,1,np.pi/180, 5).
Replacing this call with lines = cv2.HoughLines(edges,1,np.pi/90, 18)
will give you this result:
Note: if you want to detect more or fewer lines, you have to change the parameters accordingly.
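For context, the parameters that matter here are the distance resolution (in pixels), the angle resolution (in radians) and the accumulator threshold (minimum number of votes a line needs). A rough sketch of how you might expose them for tuning (the variable names are my own, not from the answer):
rho_resolution   = 1            # distance resolution of the accumulator, in pixels
theta_resolution = np.pi / 90   # angle resolution of the accumulator, in radians (2 degrees)
vote_threshold   = 18           # minimum accumulator votes for a line to be returned

lines = cv2.HoughLines(edges, rho_resolution, theta_resolution, vote_threshold)
# Lowering vote_threshold (or coarsening theta_resolution) returns more lines;
# raising it returns fewer, stronger lines.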

Detecting and isolating lines in an image

I'm trying to write a piece of code that can detect and isolate straight lines from an image. I'm using the opencv library, together with Canny edge detection and Hough transformation to achieve this. So far I've come up with the following:
import numpy as np
import cv2

# Reading the image
img = cv2.imread('sudoku-original.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
# Line detection
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
for rho, theta in lines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('linesDetected.jpg', img)
In theory this code snippet should do the job, but unfortunately it doesn't. The resulting picture clearly shows only one line being found. I'm not quite sure what I'm doing wrong here and why it's only detecting one specific line. Could someone possibly figure out the problem here?
Although OpenCV's Hough Transform tutorial uses just one loop, the shape of lines is actually (N, 1, 2), so when you use lines[0] you only get one rho and one theta, which gives you only one line. Therefore, my suggestion is to use a double loop (below) or some numpy slice magic to keep it to a single loop (see the sketch after the example). To get the whole grid detected, as Dan Masek mentioned, you will need to play with the edge detection logic. Maybe see the solution that uses HoughLinesP.
for item in lines:
    for rho, theta in item:
        ...
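As for the "numpy slice magic", one way (my wording, not the answerer's) to keep a single loop is to drop the singleton middle dimension of the (N, 1, 2) array:
# Single-loop alternative: collapse the (N, 1, 2) array to (N, 2) first.
for rho, theta in lines[:, 0]:   # or lines.reshape(-1, 2)
    a = np.cos(theta)
    b = np.sin(theta)
    x0, y0 = a * rho, b * rho
    x1, y1 = int(x0 + 1000 * (-b)), int(y0 + 1000 * a)
    x2, y2 = int(x0 - 1000 * (-b)), int(y0 - 1000 * a)
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)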

How to select specific parts of a uniformly Colored Figure in openCV Python?

I have an image of a blue colored cross drawn in MS Paint.
https://imgur.com/cMjZrra
I want to be able to extract the four separate arms of the cross from the image, and store them in four separate images.
What I have tried is to use the cv2.inRange() method to detect the color blue, as per the following code:
import cv2
import numpy as np

img = cv2.imread("PECross.png")
blue = [([250, 0, 0], [255, 0, 0])]
for (lower, upper) in blue:
    lower = np.array(lower, dtype="uint8")
    upper = np.array(upper, dtype="uint8")
    mask = cv2.inRange(img, lower, upper)
    output = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("Out", output)
cv2.waitKey(0)
cv2.destroyAllWindows()
and then display the extracted blue colors. It extracts the entire cross because it's colored uniformly, but I want to extract the four arms of the cross separately.
What code do I need to add, to extract the four arms of the cross separately?
Here is the code. It uses Hough lines to detect the lines, then simply crops the image regions given by those lines:
import cv2
import numpy as np

img = cv2.imread('PECross.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

lines_coords = []
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
for rho, theta in lines.squeeze():
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    lines_coords.append((x1, y1, x2, y2))

# The four detected lines, in the order returned by HoughLines
vertical_left = lines_coords[0]
vertical_right = lines_coords[1]
horizontal_up = lines_coords[2]
horizontal_down = lines_coords[3]

# Crop each arm using the relevant line coordinate
x1, y1, x2, y2 = vertical_left
left_arm = img[:, :x1]
x1, y1, x2, y2 = vertical_right
right_arm = img[:, x1:]
x1, y1, x2, y2 = horizontal_up
upper_arm = img[:y1, :]
x1, y1, x2, y2 = horizontal_down
lower_arm = img[y1:, :]

cv2.imwrite('left_arm.png', left_arm)
cv2.imwrite('right_arm.png', right_arm)
cv2.imwrite('upper_arm.png', upper_arm)
cv2.imwrite('lower_arm.png', lower_arm)

How can I detect only the white borders in this image using Python and OpenCV

How can I detect the white lines alone from the above image using OpenCV in Python?
Which method can I use, or is there a built-in function available for this purpose?
The Hough transform didn't work well in this case.
The final output should look like this:
It seems like the Hough transform actually does a good job. I took the example shown here and introduced only a small modification for the extraction of edges.
Load image and convert to grayscale:
import numpy as np
import cv2
import scipy.ndimage as ndi
img = cv2.imread(path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Then I applied a median filter to get rid of some high frequency edges/noise like the net in the image:
smooth = ndi.filters.median_filter(gray, size=2)
Result looks like this:
Then I applied a simple threshold to extract the white lines. Alternatively, you could apply e.g. a Laplace filter to extract edges; however, simple thresholding conveniently neglects e.g. the horizontal line at the back of the tennis court:
edges = smooth > 180
Result:
Then perform the Hough line transform similar to this example.
lines = cv2.HoughLines(edges.astype(np.uint8), 0.5, np.pi/180, 120)
for rho, theta in lines[0]:
    print(rho, theta)
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

# Show the result
cv2.imshow("Line Detection", img)
cv2.waitKey(0)
You can play around with the accuracies and modify the result. I ended up choosing the parameters 0.5 and 120. Overall result:
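For completeness, the Laplace-filter alternative mentioned above could look roughly like this; this is my sketch rather than part of the answer, and the threshold value is a guess that would need tuning:
# Hedged sketch of the Laplace-filter alternative for edge extraction
laplace = cv2.Laplacian(smooth.astype(np.uint8), cv2.CV_64F, ksize=3)
edges = (np.abs(laplace) > 50).astype(np.uint8)   # guessed threshold; tune for your image
# These edges could then be passed to cv2.HoughLines exactly as above.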

Distance between cv2 lines and centre of screen - python

I'm using OpenCV to detect the distance between two lines and their position relative to the centre point of an image. It doesn't need to be an exact distance, just a contextual value of some sort (pixels would be fine).
My code, which I have working to detect the two lines, is this:
import PIL
import time
import io
import picamera
import cv2
import numpy as np

image_counter = 0

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.resolution = (340, 240)
    time.sleep(2)
    while True:
        try:
            stream = io.BytesIO()
            image_counter += 1
            camera.capture(stream, format='png')
            data = np.fromstring(stream.getvalue(), dtype=np.uint8)
            image = cv2.imdecode(data, 1)
            grey_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            edge_image = cv2.Canny(grey_image, 50, 150, apertureSize=3)
            lines = cv2.HoughLines(edge_image, 1, np.pi/180, 95)
            if lines is not None:   # guard against frames with no detected lines
                for rho, theta in lines[0]:
                    a = np.cos(theta)
                    b = np.sin(theta)
                    x0 = a*rho
                    y0 = b*rho
                    x1 = int(x0 + 1000*(-b))
                    y1 = int(y0 + 1000*(a))
                    x2 = int(x0 - 1000*(-b))
                    y2 = int(y0 - 1000*(a))
                    cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.imwrite('lined_image_' + str(image_counter) + '.png', image)
        except:
            print('loop error')
It detects lines such as in this image:
I've been trying to work out how to do this numerically, but my approach is convoluted and probably wrong; there must be an easier way, but I can't see it with my inexperience using OpenCV.
How can I find the distance between the centre point of the image and the innermost red lines you see? (Measured at the point where each line intersects the horizontal line that passes through both of them and the image's centre point.)
Thanks!
If you were to use HoughLinesP, you'd directly get the start and end points of the lines. Then, with Dx = (x2 - x1) and Dy = (y2 - y1), your required distance d from the centre point (x0, y0) is

d = |Dy*x0 - Dx*y0 + x2*y1 - y2*x1| / sqrt(Dx^2 + Dy^2)
If you intend to stick with HoughLines, you can easily transform rho and theta to get the equation of a line, and use one of the many formulae described here, which is also where the above formula has been borrowed from.
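For illustration, a minimal sketch of the HoughLinesP route, reusing edge_image and image from the question above (the minLineLength/maxLineGap values are my own guesses and would need tuning):
# Hedged sketch: distance from the image centre to each HoughLinesP segment
height, width = image.shape[:2]
cx, cy = width / 2.0, height / 2.0          # centre point of the image

segments = cv2.HoughLinesP(edge_image, 1, np.pi/180, 95,
                           minLineLength=50, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        # Perpendicular distance from (cx, cy) to the infinite line through the segment
        d = abs(dy * cx - dx * cy + x2 * y1 - y2 * x1) / np.sqrt(dx**2 + dy**2)
        print('distance from centre (pixels):', d)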
