I'm using OpenCV to detect the distance between two lines and their position relative to the centre point of an image. It doesn't need to be an exact distance - just a contextual value of some sort (pixels would be fine).
The code I have working to detect the two lines is this:
import PIL
import time
import io
import picamera
import cv2
import numpy as np

image_counter = 0

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.resolution = (340, 240)
    time.sleep(2)

    while True:
        try:
            stream = io.BytesIO()
            image_counter += 1
            camera.capture(stream, format='png')
            data = np.fromstring(stream.getvalue(), dtype=np.uint8)
            image = cv2.imdecode(data, 1)
            grey_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            edge_image = cv2.Canny(grey_image, 50, 150, apertureSize=3)
            lines = cv2.HoughLines(edge_image, 1, np.pi/180, 95)
            if lines is not None:
                for rho, theta in lines[0]:
                    a = np.cos(theta)
                    b = np.sin(theta)
                    x0 = a*rho
                    y0 = b*rho
                    x1 = int(x0 + 1000*(-b))
                    y1 = int(y0 + 1000*(a))
                    x2 = int(x0 - 1000*(-b))
                    y2 = int(y0 - 1000*(a))
                    cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.imwrite('lined_image_' + str(image_counter) + '.png', image)
        except:
            print('loop error')
It detects lines such as those in this image:
I've been trying to work out how to do this numerically, but it's convoluted and probably wrong - there must be an easier way, but I can't see it with my inexperience using OpenCV.
How can I find the distance between the centre point of the image and the innermost red lines you see? (at the point where each line intersects the horizontal line that passes through the image's centre point)
Thanks!
If you were to use HoughLinesP, you'd directly get the start and end points of the lines. Then, with Dx = (x2 - x1) and Dy = (y2 - y1), your required distance d from the centre point (x0, y0) to the line through (x1, y1) and (x2, y2) is

d = |Dy*x0 - Dx*y0 + x2*y1 - y2*x1| / sqrt(Dx^2 + Dy^2)

If you intend to stick with HoughLines, you can easily transform rho and theta to get the equation of a line, and use one of the many formulae described here, which is also where the above formula has been borrowed from.
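As a rough illustration of that approach (my own sketch, not part of the original answer; distance_to_centre is a made-up helper, and it assumes edge_image is your Canny output and (cx, cy) is the image centre):

import numpy as np
import cv2

def distance_to_centre(edge_image, cx, cy):
    # Probabilistic Hough transform returns segment endpoints directly.
    segments = cv2.HoughLinesP(edge_image, 1, np.pi / 180, 95,
                               minLineLength=50, maxLineGap=10)
    distances = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            dx, dy = x2 - x1, y2 - y1
            # Perpendicular distance from (cx, cy) to the infinite line
            # through (x1, y1) and (x2, y2).
            d = abs(dy * cx - dx * cy + x2 * y1 - y2 * x1) / np.hypot(dx, dy)
            distances.append(d)
    return distances

You could then take the smallest values for the innermost lines, or keep the signed numerator if you also care about which side of the centre each line lies on.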
Related
I'm trying to rotate an image that is visibly rotated, so that it ends up straight.
I'm using HoughLines with OpenCV.
Here is the image, with the code below (working in Google Colab):
import numpy as np
import cv2
from scipy import ndimage
from google.colab.patches import cv2_imshow
image1 = cv2.imread('/content/rotate.png')
gray=cv2.cvtColor(image1,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
canimg = cv2.Canny(gray, 50, 200)
lines= cv2.HoughLines(canimg, 1, np.pi/180.0, 250, np.array([]))
#lines= cv2.HoughLines(edges, 1, np.pi/180, 80, np.array([]))
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(image1, (x1, y1), (x2, y2), (0, 0, 255), 2)
    print(theta)
    print(rho)

cv2_imshow(image1)
cv2_imshow(edges)
This is the output:
theta: 0.9773844
rho: 311.0
So, when I try to rotate this image with this line and then show it:
img_rotated = ndimage.rotate(image1, theta)
cv2_imshow(img_rotated)
This is the output:
This result does not match the rotation that would be needed to make the frame horizontal.
Any advice? What am I doing wrong?
In ndimage.rotate the angle is expected in degrees, while theta from HoughLines is in radians, so convert it:
img_rotated = ndimage.rotate(image1, 180*theta/3.1415926)
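As a small extra note (my own sketch, not from the original answer), np.degrees does the same conversion without the hard-coded constant:

import numpy as np
from scipy import ndimage

# theta comes from cv2.HoughLines above and is in radians
img_rotated = ndimage.rotate(image1, np.degrees(theta))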
I want to split this image into multiple images based on the black lines
I use cv2.HoughLines to get some lines, merging them to avoid overlapping lines.
And here is my drawing code:
# After getting lines from cv2.HoughLines()
for line in lines:
    rho, theta = line
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * (a))
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * (a))
    cv2.line(image, (x1, y1), (x2, y2), (0, 200, 0), 2)

cv2.imwrite('results/result.jpg', image)
Here's the result:
I wonder how I can split the image into multiple smaller images along those green lines.
Suppose image is the variable holding the image read by OpenCV as an ndarray:
image = cv2.imread(image_filepath)
Now suppose lines is the variable assigned from the Hough line transformation, like:
lines = cv2.HoughLinesP(...)
Get its shape:
a,b,c = lines.shape
Initialize a list to collect the coordinates and append each line's endpoints:
line_coords_list = []
for i in range(a):
    line_coords_list.append([(lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3])])
Now, loop through the list of coordinates, crop the main image, and write each crop out with some filename:
temp_img = image[start_y_coordinate : end_y_coordinate, start_x_coordinate : end_x_coordinate]
temp_name = image_filepath[:-4] + "_" + str(start_y_coordinate) + "_" + str(end_y_coordinate) + "_" + str(start_x_coordinate) + "_" + str(end_x_coordinate) + ".png"
cv2.imwrite(temp_name, temp_img)
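To make the cropping step concrete, here is a rough sketch of my own (an assumption on top of the answer: the dividing lines are roughly horizontal, and line_coords_list holds the [(x1, y1), (x2, y2)] endpoint pairs built above). It sorts the lines by their y position and cuts the image into horizontal strips between consecutive lines:

def split_by_horizontal_lines(image, line_coords_list, min_height=10):
    # Use the average y of each segment's endpoints as the cut position.
    cut_ys = sorted(int((p1[1] + p2[1]) / 2) for p1, p2 in line_coords_list)
    cut_ys = [0] + cut_ys + [image.shape[0]]
    strips = []
    for y_top, y_bottom in zip(cut_ys, cut_ys[1:]):
        if y_bottom - y_top >= min_height:  # skip slivers left by near-duplicate lines
            strips.append(image[y_top:y_bottom, :])
    return strips

# hypothetical usage:
# for i, strip in enumerate(split_by_horizontal_lines(image, line_coords_list)):
#     cv2.imwrite('part_' + str(i) + '.png', strip)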
If you are using cv2.HoughLines(...), then you probably have to find contours in the image instead, using:

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, blackAndWhite = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
_, contours, h = cv2.findContours(blackAndWhite, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # note: OpenCV 4.x returns only (contours, hierarchy)

and then loop through the contours:

for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    line_coords_list.append((x, y, w, h))

Here, in each bounding rectangle the third and fourth items are the width and height respectively, so end_y_coordinate = y + h and end_x_coordinate = x + w.
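As a quick illustration (again my own sketch, not part of the original answer), the (x, y, w, h) boxes collected this way could then be cropped and written out like this:

# line_coords_list is assumed to hold (x, y, w, h) tuples from cv2.boundingRect
for i, (x, y, w, h) in enumerate(line_coords_list):
    crop = image[y:y + h, x:x + w]
    cv2.imwrite(image_filepath[:-4] + "_crop_" + str(i) + ".png", crop)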
See "region of interest"
(Region of Interest opencv python - StackOverflow)
Read this to get x/y:
(Hough Line Transform - OpenCV Python Tutorials 1 documentation)
I am having trouble with Hough Line transformation. I am trying to identify the major lines in a kitchen. I first just used Canny, but it was picking up more noise than I wanted and wasn't picking up the meeting of the wall and ceiling. However, the Hough Line transformation is only identifying one line that it should not be identifying at all. Any help would be appreciated.
My input:
kitchen_sample.jpg
My output:
kitchen_lines.jpg
And here is my code:
import cv2
import numpy as np
image = cv2.imread('kitchen_sample.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)
for rho, theta in lines[0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * a)
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * a)
    cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('kitchen_lines.jpg', image)
You were probably looking at the old OpenCV tutorial page, which has a mistake in it (or something changed between versions; I haven't tracked opencv-python closely).
Here's a new & correct one
All you need to change is replace
for rho, theta in lines[0]:
with
for line in lines:
    rho, theta = line[0]
But it would still take you some time to get the desired output.
What I would recommend is using HoughLinesP, which would easily give you what you likely need:
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=100, maxLineGap=10)

for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
How can I detect just the white lines from the above image using OpenCV in Python?
Which method can I use, or is there any inbuilt function available for this purpose?
Hough's transform didn't work well in this case.
The final output should look like these:
It seems like the Hough transform actually does a good job. I took the example shown here and introduced only a small modification to the edge extraction:
Load image and convert to grayscale:
import numpy as np
import cv2
import scipy.ndimage as ndi
img = cv2.imread(path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Then I applied a median filter to get rid of some high frequency edges/noise like the net in the image:
smooth = ndi.filters.median_filter(gray, size=2)
Result looks like this:
Then I applied a simple threshold to extract the white lines. Alternatively, you could apply e.g. a Laplace filter to extract edges; however, simple thresholding will neglect e.g. the horizontal line at the back of the tennis court:
edges = smooth > 180
Result:
Then perform the Hough line transform similar to this example.
lines = cv2.HoughLines(edges.astype(np.uint8), 0.5, np.pi/180, 120)
for rho, theta in lines[0]:
    print(rho, theta)
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
# Show the result
cv2.imshow("Line Detection", img)
cv2.waitKey(0)
You can play around with the accuracy parameters (the rho resolution and the accumulator threshold) and modify the result. I ended up choosing the values 0.5 and 120. Overall result:
Okay, I want to make a program to detect a line from a camera stream. This is for a line-follower robot, so if the robot knows the angle of two parallel lines, it knows which way it must drive.
I perform the following steps:
Make frame gray
Gaussian blur
Canny edge
Hough transform
The first problem is that when there are no lines (or only a few), the program terminates with an error.
I don't know how to solve that.
Also, I want to get the angle of the line(s), and I want to get the distance between 2 parallel lines (and to know which 2 lines are parallel).
Here is my very simple code; it combines most of the examples from the internet:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
ret = cap.set(3,640)
ret = cap.set(4,480)
while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gauss = cv2.GaussianBlur(gray, (3, 3), 0)
    edges = cv2.Canny(gray, 0, 150, apertureSize=3)
    lines = cv2.HoughLines(edges, 1, np.pi/180, 50)

    for rho, theta in lines[0]:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)

    cv2.imshow('frame', edges)
    cv2.imshow('frame', frame)
Maybe try/except can solve that:
while True:
    try:
        'your code'
    except:
        'other code'
This way an error wouldn't end the program, and you could decide what to do instead.
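Applied to the code above, a minimal sketch of that idea (my own illustration, not from the original answer, reusing cap from the question's code) could look like this; alternatively, checking if lines is not None before the loop avoids the exception entirely:

while True:
    ret, frame = cap.read()
    try:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 0, 150, apertureSize=3)
        lines = cv2.HoughLines(edges, 1, np.pi/180, 50)
        # HoughLines returns None when no line clears the threshold, so the
        # loop below raises and we fall through to except instead of crashing.
        for line in lines:
            rho, theta = line[0]
            # ... draw the line as in the question's code ...
    except:
        # no (or too few) lines in this frame; keep the loop running
        pass
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break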