"global name 'range_iter_len' is not defined" when using Numba - python

I was working on heavy image manipulation with OpenCV and had the idea of using the GPU instead of the CPU to speed up the computation. I'm applying a Pointillism effect to the images, which involves quite a bit of math and takes significant time per image. Here's the code:
@jit
def toPointillismPainting(img):
    stroke_scale = int(math.ceil(max(img.shape) / 250))
    # print("Automatically chosen stroke scale: %d" % stroke_scale)
    gradient_smoothing_radius = int(round(max(img.shape) / 50))
    # print("Automatically chosen gradient smoothing radius: %d" % gradient_smoothing_radius)

    # convert the image to grayscale to compute the gradient
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # print("Computing color palette...")
    palette = ColorPalette.from_image(img, 20)
    # print("Extending color palette...")
    palette = palette.extend([(0, 50, 0), (15, 30, 0), (-15, 30, 0)])

    # print("Computing gradient...")
    gradient = VectorField.from_gradient(gray)
    # print("Smoothing gradient...")
    gradient.smooth(gradient_smoothing_radius)

    # print("Drawing image...")
    # create a "cartoonized" version of the image to use as a base for the painting
    res = cv2.medianBlur(img, 11)
    # define a randomized grid of locations for the brush strokes
    grid = randomized_grid(img.shape[0], img.shape[1], scale=3)
    batch_size = 10000

    # bar = progressbar.ProgressBar()
    for h in range(0, len(grid), batch_size):
        # get the pixel colors at each point of the grid
        pixels = np.array([img[x[0], x[1]] for x in grid[h:min(h + batch_size, len(grid))]])
        # precompute the probabilities for each color in the palette
        # lower values of k mean more randomness
        color_probabilities = compute_color_probabilities(pixels, palette, k=9)

        for i, (y, x) in enumerate(grid[h:min(h + batch_size, len(grid))]):
            color = color_select(color_probabilities[i], palette)
            angle = math.degrees(gradient.direction(y, x)) + 90
            length = int(round(stroke_scale + stroke_scale * math.sqrt(gradient.magnitude(y, x))))
            # draw the brush stroke
            cv2.ellipse(res, (x, y), (length, stroke_scale), angle, 0, 360, color, -1, cv2.LINE_AA)

    return res
This returned an error which doesn't happen when I'm not using the @jit decorator:
global name 'range_iter_len' is not defined
Traceback (most recent call last):
File "[projectname]", line 201, in <module>
res = toPointillismPainting(img)
NameError: global name 'range_iter_len' is not defined
removed temporary images
While debugging, I realized that this range_iter_len variable is not used anywhere in my project or in the Pointillism code itself. I only found a single GitHub issue about it on Numba, and it doesn't seem to have the same root cause. Any help would be appreciated, and I will update the question with any important points I missed or once I find the solution.
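Update (not a confirmed fix): since Numba's nopython mode can't compile OpenCV calls or my ColorPalette/VectorField objects, a bare @jit falls back to object mode and tries to loop-lift the for loops, which seems to be where the internal range_iter_len name comes from. One workaround I'm experimenting with is to jit only a small, purely numeric helper and keep the cv2 and palette work in plain Python. A minimal sketch (the helper and how it would slot into the loop are illustrative assumptions, not the original code):
import math
import numpy as np
from numba import jit

@jit(nopython=True)
def stroke_lengths(magnitudes, stroke_scale):
    # magnitudes: 1-D float array of gradient magnitudes sampled at the grid points
    out = np.empty(magnitudes.shape[0], dtype=np.int64)
    for i in range(magnitudes.shape[0]):
        # same formula as in the loop above, rounded to the nearest integer
        out[i] = int(stroke_scale + stroke_scale * math.sqrt(magnitudes[i]) + 0.5)
    return out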

Related

Python - Map coordinates into list of lines and arcs

I'm trying to parse an array of coordinates (which represents a closed shape) into a set of lines and arcs in Python (I'm using OpenCV for edge detection).
What I'm trying to achieve, briefly, is to turn the coordinates that draw this example image:
[Example shape]
into this set of lines and arcs:
[Set of arcs]
Obviously, the arcs are not as well defined as in the image, but are more like "pixelated" arcs.
Is there any utility which can help with this kind of processing?
Let's load the image as grayscale, threshold it to black and white and invert colors, erode it a little, use Canny edge detection, then Hough lines detection (mostly just following this tutorial):
import cv2
import numpy as np
import math
import random
src = cv2.imread("s34I0.png", cv2.IMREAD_GRAYSCALE)
thr, bw = cv2.threshold(src, 128, 255, cv2.THRESH_BINARY_INV)
eroded = cv2.erode(bw, np.ones((5, 5), np.uint8))
canny = cv2.Canny(src, 50, 200, None, 3)
lines = cv2.HoughLines(canny, 1, np.pi / 180, 150, None, 0, 0)
lines = [list(x[0]) for x in lines]
def draw_line(img, line, color, thickness):
    rho, the = line
    a = math.cos(the)
    b = math.sin(the)
    x0 = a * rho
    y0 = b * rho
    pt1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * (a)))
    pt2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * (a)))
    cv2.line(img, pt1, pt2, color, thickness, cv2.LINE_AA)
We have, unfortunately, two parallel lines detected for every straight segment. Let's replace each such pair of close parallel lines with their mid-line:
lines_ = []
def midline(line1, line2):
    return [(x + y) / 2 for x, y in zip(line1, line2)]

used = []
for l1 in lines:
    if l1 in used: continue
    for l2 in lines:
        if l2 in used: continue
        if l1 is l2: continue
        if (abs(l1[0] - l2[0]) < 20) and (abs(l1[1] - l2[1]) < 1):
            lines_.append(midline(l1, l2))
            used.append(l1)
            used.append(l2)
            continue
lines = lines_
Now, let's create binary masks for our straight lines. For every straight line, we create a temporary binary black image (all the pixel values are zeros), then draw the line over it as a thick white line (same or slightly thicker than the lines on the original image). Then we logical-AND the original thresholded image and the temporary line image, so we get the pixels common for both - that is the binary mask for the line.
line_masks = []
for i, line in enumerate(lines):
    line_img = np.zeros(bw.shape)
    draw_line(line_img, line, 255, 10)  # 10 pixel thick white line
    common = np.logical_and((bw != 0), (line_img != 0))
    line_masks.append(common)
Remove the masked pixels from the original black and white image, so only the arcs should remain. Unfortunately, some garbage remains, because the lines in the original image aren't perfect. To get rid of that, we could've drawn our Hough lines thicker (say, 15, or 20 pixels instead of 10), but then they take too much of the arc pixels. Instead, we could erode-dilate the resulting image a little, to get rid of the junk:
for lm in line_masks:
    bw[lm] = 0
bw = cv2.erode(bw, np.ones((5, 5), np.uint8))
bw = cv2.dilate(bw, np.ones((5, 5), np.uint8))
Let's create binary masks for the arcs. There's no function in OpenCV to detect arcs, but for this case we could use detection of connected components:
arc_masks = []
num, labels = cv2.connectedComponents(bw)
for i in range(1, num):
    arc_masks.append(labels == i)
Now that we have the masks, let's visualize them by drawing over the original image. Lines are going to have random shades of green, arcs - of blue:
line_colors = [(0, random.randint(127, 256), 0) for _ in line_masks]
arc_colors = [(random.randint(127, 256), 0, 0) for _ in arc_masks]
dst = cv2.imread("s34I0.png")
for color, mask in zip(line_colors, line_masks):
    dst[mask] = color
for color, mask in zip(arc_colors, arc_masks):
    dst[mask] = color

RGB to Grayscale (Average Method) Python

I'm supposed to write a method that converts an RGB image to Grayscale by using the "average method" where I take the average of the 3 colors (not the weighted method or luminosity method). I then must display the original RGB image and grayscale image next to each other (concatenated). The language I'm writing in is Python. This is what my code looks like currently.
import numpy as np
import cv2

def getRed(redVal):
    return '#%02x%02x%02x' % (redVal, 0, 0)

def getGreen(greenVal):
    return '#%02x%02x%02x' % (0, greenVal, 0)

def getBlue(blueVal):
    return '#%02x%02x%02x' % (0, 0, blueVal)

# Grayscale = (R + G + B / 3)
# For each pixel,
# 1- Get pixels red, green, and blue
# 2- Calculate the average value
# 3- Set each of red, green, and blue values to average value
def average_method(img):
    for p in img:
        red = p.getRed()
        green = p.getGreen()
        blue = p.getBlue()
        average = (red + green + blue) / 3
        p.setRed(average)
        p.setGreen(average)
        p.setBlue(average)

def main():
    img1 = cv2.imread('html/images/sun.jpeg')
    img1 = cv2.resize(img1, (0, 0), None, .50, .50)
    img2 = average_method(img1)
    img2 = np.stack(3 * [img2], axis=2)
    numpy_concat = np.concatenate((img1, img2), 1)
    cv2.imshow('Numpy Concat', numpy_concat)
    cv2.waitKey(0)
    cv2.destroyAllWindows

if __name__ =="__main__":
    main()
The commented portion above the average_method function lists the steps that I must follow.
When I try to run the code, I get
File "test.py", line 38, in <module>
main()
File "test.py", line 30, in main
img2 = average_method(img1)
File "test.py", line 15, in average_method
red = p.getRed()
AttributeError: 'numpy.ndarray' object has no attribute 'getRed'
I thought that defining the getRed, getGreen, and getBlue functions above would make them recognizable in my average_method function (I got those functions online, so I hope they're right). I'm also not sure what this has to do with numpy.ndarray. If anyone could help me fill in the average_method function with code that follows the commented steps correctly, I would really appreciate it.
EDIT:::
New code looks like this:
import cv2
import numpy as np
def average_method(img):
    for p in img:
        gray = sum(p)/3
        for i in range(3):
            p[i] = gray

def main():
    img1 = cv2.imread('html/images/sun.jpeg')
    img1 = cv2.resize(img1, (0, 0), None, .50, .50)
    img2 = average_method(img1)
    img2 = np.stack(3 * [img2], axis=2)
    numpy_concat = np.concatenate((img1, img2), 1)
    cv2.imshow('Numpy Concat', numpy_concat)
    cv2.waitKey(0)
    cv2.destroyAllWindows

if __name__ =="__main__":
    main()
I now get the error
File "test.py", line 50, in <module>
main()
File "test.py", line 43, in main
img2 = np.stack(3 * [img2], axis=2)
File "<__array_function__ internals>", line 5, in stack
File "C:\Users\myname\AppData\Local\Programs\Python\Python38-32\lib\site-packages\numpy\core\shape_base.py", line 430, in stack
axis = normalize_axis_index(axis, result_ndim)
numpy.AxisError: axis 2 is out of bounds for array of dimension 1
I have that line "img2 = np.stack(3 * [img2], axis=2)" because I was previously told on Stack Overflow that I need it, since img2 is now a greyscale (single-channel) image while img1 is still color (three-channel), and this line is supposed to fix that. But now it seems like there is something wrong with it?
In Java, the for loop you highlighted would be called an "enhanced for loop". Python doesn't need a separate construct for this, because its regular for loop is already that concise.
The Python equivalent of the line in question would be:
for p in img:
No need to state object class or anything like that.
EDIT: After OP changed question
The problem now is that you're not calling the functions correctly. p is an array containing RGB values. To call the functions as you defined them above, do:
for p in img:
    red = getRed(p[0])
    green = getGreen(p[1])
    blue = getBlue(p[2])
    average = (red + green + blue) / 3
    p[0] = average
    p[1] = average
    p[2] = average
Remember, when you moved the code to Python you stopped working in an object-oriented style: pixels don't come with methods that you can call like that anymore.
However, as pointed out by Guimoute in the comments, the code can be much simpler if you get rid of the get[Color] functions and do the following:
for p in img:
    gray = sum(p)/3
    for i in range(3):
        p[i] = gray
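As a side note (my observation, not part of the original answer): iterating for p in img actually yields whole image rows rather than single pixels, and average_method doesn't return anything, which is why the np.stack call later fails. A fully vectorized sketch that returns a single-channel image and feeds the rest of the question's main() (path and names taken from the question):
import cv2
import numpy as np

def average_method(img):
    # unweighted average of the three colour channels, kept as uint8
    return img.mean(axis=2).astype(np.uint8)

img1 = cv2.imread('html/images/sun.jpeg')
img1 = cv2.resize(img1, (0, 0), None, .50, .50)
img2 = average_method(img1)              # single-channel grayscale image
img2 = np.stack(3 * [img2], axis=2)      # back to 3 channels so it can sit next to img1
numpy_concat = np.concatenate((img1, img2), axis=1)
cv2.imshow('Numpy Concat', numpy_concat)
cv2.waitKey(0)
cv2.destroyAllWindows()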

How can I extract image segment with specific color in OpenCV?

I work with logos and other simple graphics in which there are no gradients or complex patterns. My task is to extract segments containing letters and other elements from the logo.
To do this, I determine the background color and then walk through the picture in order to segment it. Here is my code for better understanding:
import sys

import cv2 as cv
import numpy as np

MAXIMUM_COLOR_TRANSITION_DELTA = 100  # 0 - 765

def expand_segment_recursive(image, unexplored_foreground, segment, point, color):
    height, width, _ = image.shape
    # Unpack coordinates from point
    py, px = point
    # Create list of pixels to check
    neighbourhood_pixels = [(py, px + 1), (py, px - 1), (py + 1, px), (py - 1, px)]
    allowed_zone = unexplored_foreground & np.invert(segment)
    for y, x in neighbourhood_pixels:
        # Add pixel to segment if its coordinates are within the image shape and its color differs
        # from the segment color by no more than MAXIMUM_COLOR_TRANSITION_DELTA
        if y in range(height) and x in range(width) and allowed_zone[y, x]:
            color_delta = np.sum(np.abs(image[y, x].astype(np.int) - color.astype(np.int)))
            print(color_delta)
            if color_delta <= MAXIMUM_COLOR_TRANSITION_DELTA:
                segment[y, x] = True
                segment = expand_segment_recursive(image, unexplored_foreground, segment, (y, x), color)
            allowed_zone = unexplored_foreground & np.invert(segment)
    return segment

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Pass image as the argument to use the tool")
        exit(-1)

    IMAGE_FILENAME = sys.argv[1]
    print(IMAGE_FILENAME)

    image = cv.imread(IMAGE_FILENAME)
    height, width, _ = image.shape

    # To filter the background I use the median value of the image, as the background in most cases takes > 50% of the image area.
    background_color = np.median(image, axis=(0, 1))
    print("Background color: ", background_color)

    # Create foreground mask to find segments in it (TODO: Optimize this part)
    foreground = np.zeros(shape=(height, width, 1), dtype=np.bool)
    for y in range(height):
        for x in range(width):
            if not np.array_equal(image[y, x], background_color):
                foreground[y, x] = True

    unexplored_foreground = foreground
    for y in range(height):
        for x in range(width):
            if unexplored_foreground[y, x]:
                segment = np.zeros(foreground.shape, foreground.dtype)
                segment[y, x] = True
                segment = expand_segment_recursive(image, unexplored_foreground, segment, (y, x), image[y, x])
                cv.imshow("segment", segment.astype(np.uint8) * 255)
                while cv.waitKey(0) != 27:
                    continue
Here is the desired result:
At the end of the run I expect 13 separate extracted segments (for this particular image). Instead I get RecursionError: maximum recursion depth exceeded, which is not surprising, as expand_segment_recursive() can be called for every pixel of the image, and even with a small image resolution of 600x500 that means up to 300K calls.
My question is: how can I get rid of the recursion in this case and possibly optimize the algorithm with NumPy or OpenCV?
You can actually use a thresholded image (binary) and connectedComponents to do this job in a couple of steps. Also, you may use findContours or other methods.
Here is the code:
import numpy as np
import cv2
# load image as greyscale
img = cv2.imread("hp.png", 0)
# puts 0 to the white (background) and 255 in other places (greyscale value < 250)
_, thresholded = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY_INV)
# gets the labels and the amount of labels, label 0 is the background
amount, labels = cv2.connectedComponents(thresholded)
# lets draw it for visualization purposes
preview = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
print (amount) #should be 3 -> two components + background
# draw label 1 blue and label 2 green
preview[labels == 1] = (255, 0, 0)
preview[labels == 2] = (0, 255, 0)
cv2.imshow("frame", preview)
cv2.waitKey(0)
At the end, the thresholded image will look like this:
and the preview image (the one with the colored segments) will look like this:
With the masks you can always use NumPy functions to get things like the coordinates of the segments you want, or to color them (like I did with preview).
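For example, a minimal sketch (assuming the labels array from the code above, and picking label 1) of pulling pixel coordinates and a bounding box out of a segment mask with NumPy:
mask = (labels == 1)                       # boolean mask for the segment with label 1
ys, xs = np.nonzero(mask)                  # row/column coordinates of the segment's pixels
x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
print("segment 1 bounding box:", (x0, y0, x1, y1))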
UPDATE
To get the differently colored segments separated, you may try to create a "border" between the segments. Since they are plain colors and not gradients, you can run an edge detector like Canny and then paint the detected edges black in the image:
import numpy as np
import cv2
img = cv2.imread("total.png", 0)
# background to black
img[img>=200] = 0
# get edges
canny = cv2.Canny(img, 60, 180)
# make them thicker
kernel = np.ones((3,3),np.uint8)
canny = cv2.morphologyEx(canny, cv2.MORPH_DILATE, kernel)
# apply edges as border in the image
img[canny==255] = 0
# same as before
amount, labels = cv2.connectedComponents(img)
preview = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
print (amount) #should be 14 -> 13 components + background
# color them randomly
for i in range(1, amount):
    preview[labels == i] = np.random.randint(0, 255, size=3, dtype=np.uint8)
cv2.imshow("frame", preview )
cv2.waitKey(0)
The result is:

Detecting a moving object with a moving camera (monitoring one area, mounted on a drone)

def run(self):
    while True:
        _ret, frame = self.cam.read()
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        vis = frame.copy()
        if len(self.tracks) > 0:
            img0, img1 = self.prev_gray, frame_gray
            p0 = np.float32([tr[-1] for tr in self.tracks]).reshape(-1, 1, 2)
            p1, _st, _err = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None, **lk_params)
            p0r, _st, _err = cv2.calcOpticalFlowPyrLK(img1, img0, p1, None, **lk_params)
            d = abs(p0-p0r).reshape(-1, 2).max(-1)
            good = d < 1
            new_tracks = []
            for i in range(len(p1)):
                A.append(math.sqrt((p1[i][0][0])**2 + (p1[i][0][1])**2))
            counts, bins, bars = plt.hist(A)
            for tr, (x, y), good_flag in zip(self.tracks, p1.reshape(-1, 2), good):
                if not good_flag:
                    continue
                tr.append((x, y))
                if len(tr) > self.track_len:
                    del tr[0]
                new_tracks.append(tr)
                cv2.circle(vis, (x, y), 2, (0, 255, 0), -1)
            self.tracks = new_tracks
            cv2.polylines(vis, [np.int32(tr) for tr in self.tracks], False, (0, 255, 0))
            draw_str(vis, (20, 20), 'track count: %d' % len(self.tracks))
        if self.frame_idx % self.detect_interval == 0:
            mask = np.zeros_like(frame_gray)
            mask[:] = 255
            for x, y in [np.int32(tr[-1]) for tr in self.tracks]:
                cv2.circle(mask, (x, y), 5, 0, -1)
            p = cv2.goodFeaturesToTrack(frame_gray, mask=mask, **feature_params)
            if p is not None:
                for x, y in np.float32(p).reshape(-1, 2):
                    self.tracks.append([(x, y)])
        self.frame_idx += 1
        self.prev_gray = frame_gray
        cv2.imshow('lk_track', vis)
        ch = cv2.waitKey(1)
        if ch == 27:
            break
I am using lk_track.py from the OpenCV samples to try and detect a moving object. I am trying to find the camera motion using the histogram of magnitudes of the optical flow vectors, and then calculate the average over similar values, which should be directly proportional to the camera motion. I have calculated the magnitudes of the vectors and saved them in a list A. Can someone suggest how to find the most frequent similar values from it and calculate the average for only those values?
I created a toy problem to model the approach of binarizing the images by optical flow. This is a massively simplified view of the problem, but gives the general idea well. I'll split the problem up into a few chunks and give functions for them. If you're working directly with video, there will be a lot of additional code needed of course, and I just hardcoded a lot of values that you'll need to turn into parameters.
The first function just generates the image sequence. The view translates through a scene while an object appears stationary in the frame, which of course means the object is actually moving in the opposite direction of the camera.
import numpy as np
import cv2
def gen_seq():
    """Generate motion sequence with an object"""
    scene = cv2.GaussianBlur(np.uint8(255*np.random.rand(400, 500)), (21, 21), 3)
    h, w = 400, 400
    step = 4
    obj_mask = np.zeros((h, w), np.bool)
    obj_h, obj_w = 50, 50
    obj_x, obj_y = 175, 175
    obj_mask[obj_y:obj_y+obj_h, obj_x:obj_x+obj_w] = True
    obj_data = np.uint8(255*np.random.rand(obj_h, obj_w)).ravel()
    imgs = []
    for i in range(0, 1+w//step, step):
        img = scene[:, i:i+w].copy()
        img[obj_mask] = obj_data
        imgs.append(img)
    return imgs

# generate image sequence
imgs = gen_seq()

# display images
for img in imgs:
    cv2.imshow('Image', img)
    k = cv2.waitKey(100) & 0xFF
    if k == ord('q'):
        break
cv2.destroyWindow('Image')
So here's the basic image sequence visualized. I just used a random scene, translated through, and added a random object in the center.
Great! Now we need to calculate the flow between each frame. I used dense flow here, but sparse flow would be more robust for actual images.
def find_flows(imgs):
"""Finds the dense optical flows"""
optflow_params = [0.5, 3, 15, 3, 5, 1.2, 0]
prev = imgs[0]
flows = []
for img in imgs[1:]:
flow = cv2.calcOpticalFlowFarneback(prev, img, None, *optflow_params)
flows.append(flow)
prev = img
return flows
# find optical flows between images
flows = find_flows(imgs)
# display flows
h, w = imgs[0].shape[:2]
hsv = np.zeros((h, w, 3), dtype=np.uint8)
hsv[..., 1] = 255
for flow in flows:
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv[..., 0] = ang*180/np.pi/2
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('Flow', rgb)
k = cv2.waitKey(100) & 0xFF
if k == ord('q'):
break
cv2.destroyWindow('Flow')
Here I colorized the flow based on its angle and magnitude. The angle determines the color and the magnitude determines the intensity/brightness of the color. This is the same view used in the OpenCV tutorial on dense optical flow.
Then, we need to binarize this flow so that we get two distinct sets of pixels based on how they're moving. In the sparse case, this works out the same except you will get two distinct sets of features.
def label_flows(flows):
"""Binarizes the flows by direction and magnitude"""
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
flags = cv2.KMEANS_RANDOM_CENTERS
h, w = flows[0].shape[:2]
labeled_flows = []
for flow in flows:
flow = flow.reshape(h*w, -1)
comp, labels, centers = cv2.kmeans(flow, 2, None, criteria, 10, flags)
n = np.sum(labels == 1)
camera_motion_label = np.argmax([labels.size-n, n])
labeled = np.uint8(255*(labels.reshape(h, w) == camera_motion_label))
labeled_flows.append(labeled)
return labeled_flows
# binarize the flows
labeled_flows = label_flows(flows)
# display binarized flows
for labeled_flow in labeled_flows:
cv2.imshow('Labeled Flow', labeled_flow)
k = cv2.waitKey(100) & 0xFF
if k == ord('q'):
break
cv2.destroyWindow('Labeled Flow')
The annoying thing here is that the labels are assigned randomly, i.e. they will be different for each frame; if you visualized the binary image, it would flip between black and white randomly. I'm only using binary labels, 0 and 1, so what I did was treat the label that is assigned to more pixels as the "camera motion label", set that label to white in the resulting images and the other label to black. That way the camera motion label is the same in each frame. This may need to be much more sophisticated for working on a video feed.
But here we have it, a binarized flow where the color is just showing the two distinct sets of flow vectors.
Now if we wanted to find the target in this flow, we could invert the image and find the connected components of the binary image. The inversion will make the camera motion the background label (0). Then each of the black blobs will be white and will be labeled, and we could find the blob relating to the largest component which, in this case, will be the target. That will give a mask around the target, and we can draw the contours of that mask on the original images to see the target being detected. I'll also cut the borders of the image off before finding the connected components so edge effects from dense flow are ignored.
def find_target_in_labeled_flow(labeled_flow):
    labeled_flow = cv2.bitwise_not(labeled_flow)
    bw = 10
    h, w = labeled_flow.shape[:2]
    border_cut = labeled_flow[bw:h-bw, bw:w-bw]
    conncomp, stats = cv2.connectedComponentsWithStats(border_cut, connectivity=8)[1:3]
    target_label = np.argmax(stats[1:, cv2.CC_STAT_AREA]) + 1
    img = np.zeros_like(labeled_flow)
    img[bw:h-bw, bw:w-bw] = 255*(conncomp == target_label)
    return img

for labeled_flow, img in zip(labeled_flows, imgs[:-1]):
    target_mask = find_target_in_labeled_flow(labeled_flow)
    display_img = cv2.merge([img, img, img])
    contours = cv2.findContours(target_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
    display_img = cv2.drawContours(display_img, contours, -1, (0, 255, 0), 2)
    cv2.imshow('Detected Target', display_img)
    k = cv2.waitKey(100) & 0xFF
    if k == ord('q'):
        break
And of course this could get some cleaning up, and you won't be doing exactly this for sparse flow. You could just define a region of interest around the tracked points.
Now, there is still a lot of work to do. You have a binarized flow, and you can probably safely assume that the label which occurs most frequently corresponds to the camera motion (like I did). However, you'll have to make sure that the other label is the object you're interested in tracking, and you'll have to keep track of it between flows so that if it stops moving you still know where it is while the camera moves. When you do the k-means step, you'll want to make sure that the centers from k-means are "far enough" apart to know whether the object is moving or not.
The basic steps for that would be, from the starting frame of the video:
If the two centers are "close", then you can assume your object is either not in the scene or not moving in the scene (a rough check for this is sketched after this list).
Once the centers are split far enough apart, you'll have found the object to track. Keep track of the location of the object.
During tracking of the object, verify that the location is near a prediction. You can use the optical flow velocity vectors from the previous frame to predict the location of each pixel/feature in the new frame, so make sure your predictions agree with your tracking result.
If the object stops moving, the centers from k-means should be close. Keep track of the optical flow vectors around the object location and follow them to predict where the object is once it resumes moving, and again verify the detected location with this prediction.
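For the "far enough apart" check on the k-means centers mentioned above, a rough sketch (the threshold is an illustrative assumption you would tune for your footage, using the centers returned by cv2.kmeans in label_flows):
import numpy as np

def object_is_moving(centers, min_separation=2.0):
    # centers: the 2x2 array of cluster centers returned by cv2.kmeans on the flow vectors.
    # If the two flow clusters are separated by more than min_separation (in pixels/frame,
    # purely illustrative), assume an independently moving object is present.
    return np.linalg.norm(centers[0] - centers[1]) > min_separation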
I've never used these methods before, so I'm not sure how robust they are. The typical approach, HOOF or "Histogram of Oriented Optical Flow", is much more advanced than this (see the seminal paper here). Instead of just binarizing, the idea is to use the histograms from each frame as a probability distribution, and the way this probability distribution changes over time can be analyzed with tools from time-series analysis, which I assume gives a more robust framework for this approach.
With @alkasm's answer, to avoid the following error:
(-215:Assertion failed) npoints > 0 in function 'drawContours'
simply replace:
contours = cv2.findContours(target_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
with
contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
I can't post this as a comment below that answer because my account is new and has low reputation.
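If the script needs to run under both OpenCV 3.x and 4.x, a small version-agnostic variant (just an illustration, not from either answer) is:
ret = cv2.findContours(target_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x returns (contours, hierarchy)
contours = ret[0] if len(ret) == 2 else ret[1]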

Value Error float NAN to integer Hough Lines

I have read a few articles and watched videos on lane detection, and decided to learn how it works.
I'm completely new to OpenCV, so kindly forgive me for dumb doubts.
I took the Udacity open-source project to develop lane detection, but I'm not able to execute the code: I'm getting a ValueError which I'm not able to understand.
Code:
import numpy as np
import cv2
import math
import matplotlib.pyplot as plt

def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)
    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    imshape = img.shape
    left_x1 = []
    left_x2 = []
    right_x1 = []
    right_x2 = []
    y_min = img.shape[0]
    y_max = int(img.shape[0] * 0.611)
    for line in lines:
        for x1, y1, x2, y2 in line:
            if ((y2 - y1) / (x2 - x1)) < 0:
                mc = np.polyfit([x1, x2], [y1, y2], 1)
                left_x1.append(np.int(np.float((y_min - mc[1])) / np.float(mc[0])))
                left_x2.append(np.int(np.float((y_max - mc[1])) / np.float(mc[0])))
                # cv2.line(img, (xone, imshape[0]), (xtwo, 330), color, thickness)
            elif ((y2 - y1) / (x2 - x1)) > 0:
                mc = np.polyfit([x1, x2], [y1, y2], 1)
                right_x1.append(np.int(np.float((y_min - mc[1])) / np.float(mc[0])))
                right_x2.append(np.int(np.float((y_max - mc[1])) / np.float(mc[0])))
                # cv2.line(img, (xone, imshape[0]), (xtwo, 330), color, thickness)
    l_avg_x1 = np.int(np.nanmean(left_x1))
    l_avg_x2 = np.int(np.nanmean(left_x2))
    r_avg_x1 = np.int(np.nanmean(right_x1))
    r_avg_x2 = np.int(np.nanmean(right_x2))
    # print([l_avg_x1, l_avg_x2, r_avg_x1, r_avg_x2])
    cv2.line(img, (l_avg_x1, y_min), (l_avg_x2, y_max), color, thickness)
    cv2.line(img, (r_avg_x1, y_min), (r_avg_x2, y_max), color, thickness)

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len,
                            maxLineGap=max_line_gap)
    line_img = np.zeros(img.shape, dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img

def process_image(img):
    img_test = grayscale(img)
    img_test = gaussian_blur(img_test, 7)
    img_test = canny(img_test, 50, 150)
    imshape = img.shape
    vertices = np.array([[(100, imshape[0]), (400, 330), (600, 330), (imshape[1], imshape[0])]], dtype=np.int32)
    img_test = region_of_interest(img_test, vertices)
    rho = 2  # distance resolution in pixels of the Hough grid
    theta = np.pi / 180  # angular resolution in radians of the Hough grid
    threshold = 55  # minimum number of votes (intersections in Hough grid cell)
    min_line_length = 40  # minimum number of pixels making up a line
    max_line_gap = 100  # maximum gap in pixels between connectable line segments
    line_image = np.copy(img) * 0  # creating a blank to draw lines on
    img_test = hough_lines(img_test, rho, theta, threshold, min_line_length, max_line_gap)
    return img_test

img = cv2.imread("sy1.jpg")
res = process_image(img)
plt.imshow(res)
The Resulting Error:
/Users/ViditShah/anaconda/envs/py27/bin/python /Users/ViditShah/Downloads/untitled1/gist.py
/Users/ViditShah/Downloads/untitled1/gist.py:85: RuntimeWarning: Mean of empty slice
r_avg_x1 = np.int(np.nanmean(right_x1))
Traceback (most recent call last):
File "/Users/ViditShah/Downloads/untitled1/gist.py", line 122, in <module>
res = process_image(img)
File "/Users/ViditShah/Downloads/untitled1/gist.py", line 117, in process_image
img_test = hough_lines(img_test, rho, theta, threshold, min_line_length, max_line_gap)
File "/Users/ViditShah/Downloads/untitled1/gist.py", line 100, in hough_lines
draw_lines(line_img, lines)
File "/Users/ViditShah/Downloads/untitled1/gist.py", line 85, in draw_lines
r_avg_x1 = np.int(np.nanmean(right_x1))
ValueError: cannot convert float NaN to integer
Process finished with exit code 1
I'm using Python 2.7. Please guide me.
Yours sincerely,
Vidit Shah
One possibility is division by zero creating NaNs when you calculate the gradient; try filtering out segments where x1 == x2. This potential source of errors will crop up only rarely.
The more important issue is that the threshold in the Hough transform (55) is set too high for the structure of your code: if the Hough Lines stage does not identify any lines, you have nothing to average and nothing to plot.
You can get around this by either lowering the threshold (losing quality in your line detection for the cases where it does work) or adjusting something else in your code, for example adding error handling or pre-processing the image differently so that the Hough step always outputs lines.
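A rough sketch of how both guards could look inside draw_lines (this reuses the names from the question; skipping the draw when a side has no segments, and returning early when Hough finds nothing, are assumptions about the desired behaviour, not part of the original code):
import numpy as np
import cv2

def draw_lines(img, lines, color=(255, 0, 0), thickness=10):
    if lines is None:
        return  # Hough found nothing; nothing to draw
    y_min = img.shape[0]
    y_max = int(img.shape[0] * 0.611)
    left_x1, left_x2, right_x1, right_x2 = [], [], [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:
                continue  # vertical segment: slope is undefined, skip to avoid dividing by zero
            m, c = np.polyfit([x1, x2], [y1, y2], 1)
            if m < 0:
                left_x1.append(int((y_min - c) / m))
                left_x2.append(int((y_max - c) / m))
            elif m > 0:
                right_x1.append(int((y_min - c) / m))
                right_x2.append(int((y_max - c) / m))
    # only average and draw a side if it actually collected segments,
    # so the mean never sees an empty list
    if left_x1 and left_x2:
        cv2.line(img, (int(np.mean(left_x1)), y_min), (int(np.mean(left_x2)), y_max), color, thickness)
    if right_x1 and right_x2:
        cv2.line(img, (int(np.mean(right_x1)), y_min), (int(np.mean(right_x2)), y_max), color, thickness)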
