Python OpenCV: inRange() stopped working without change

I am currently doing real-time object detection of an orange ball with a Raspberry Pi 3 Model B. The code below is supposed to take a frame and then, with the cv2.inRange() function, filter the image by colour in RGB (BGR) order. Then I apply dilation and erosion to remove noise, find the contours, and draw them. This code worked until now; however, when I ran it today without changing it, I got the following error:
Traceback (most recent call last):
File "/home/pi/Desktop/maincode.py", line 12, in <module>
mask = cv2.inRange(frame, lower, upper)
error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/core/src/arithm.cpp:2701: error: (-209) The lower bounary is neither an array of the same size and same type as src, nor a scalar in function inRange
Any help would be really awesome, because I am new to OpenCV, have spent a lot of time programming this, and I have a robotics competition in 5 days.
Thank you in advance
import cv2
import cv2.cv as cv
import numpy as np

capture = cv2.VideoCapture(0)

while capture.isOpened:
    ret, frame = capture.read()
    im = frame
    lower = np.array([0, 100, 150], dtype='uint8')
    upper = np.array([10, 180, 255], dtype='uint8')
    mask = cv2.inRange(frame, lower, upper)
    eroded = cv2.erode(mask, np.ones((7, 7)))
    dilated = cv2.dilate(eroded, np.ones((7, 7)))
    contours, hierarchy = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
    cv2.imshow('colors', im)
    cv2.waitKey(1)

The error you receive almost certainly means you got an empty image (or an image whose size does not match what you expect).
Webcam captures in OpenCV often start with one or a couple of black/empty frames (flaky drivers). Because it goes by so fast you don't notice it, but it will break any processing you do on that frame. Therefore, I recommend you check each frame before running any calculations on it. Just add this right after your capture.read() line:
if ret == True:
Note: make sure (by printing to the console or similar) that this only happens when you start capturing. If it happens regularly (empty frames in the middle of a run), something else may be wrong, possibly with the webcam itself. Check it on another computer as well.
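For completeness, here is a minimal sketch of the loop from the question with that check folded in. It assumes OpenCV 2.4 as in the traceback (where findContours returns two values); the colour thresholds are just the ones from the question, and the kernels are made explicitly uint8.

import cv2
import numpy as np

capture = cv2.VideoCapture(0)

# BGR thresholds copied from the question; tune them for your ball.
lower = np.array([0, 100, 150], dtype='uint8')
upper = np.array([10, 180, 255], dtype='uint8')

while capture.isOpened():
    ret, frame = capture.read()
    if not ret or frame is None:
        # Skip empty frames instead of passing them to inRange()
        continue

    mask = cv2.inRange(frame, lower, upper)
    eroded = cv2.erode(mask, np.ones((7, 7), np.uint8))
    dilated = cv2.dilate(eroded, np.ones((7, 7), np.uint8))
    contours, hierarchy = cv2.findContours(dilated, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 3)
    cv2.imshow('colors', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break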

Related

Kernel dies when running .detector (openCV)

I'm currently working through an OpenCV course on Udemy and have run into a problem where my kernel dies. I tried eliminating the code line by line to find the cause, and it fails when it reaches the line keypoints = detector.detect(image). I'm a bit of an amateur at this kind of thing, but I would appreciate some feedback as to why this could be occurring. Here's the code I'm working with:
import cv2
import numpy as np;
# Read image
image = cv2.imread("images/Sunflowers.jpg")
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector()
# Detect blobs.
keypoints = detector.detect(image)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of
# the circle corresponds to the size of blob
blank = np.zeros((1,1))
blobs = cv2.drawKeypoints(image, keypoints, blank, (0, 255, 255),
                          cv2.DRAW_MATCHES_FLAGS_DEFAULT)
# Show keypoints
cv2.imshow("Blobs", blobs)
cv2.waitKey(0)
cv2.destroyAllWindows()
Please replace:
detector = cv2.SimpleBlobDetector()
with:
detector = cv2.SimpleBlobDetector_create()
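In OpenCV 3+ the blob detector is constructed through the _create() factory rather than the plain constructor. If the same notebook has to run on machines with different OpenCV versions, a small version guard is one way to handle it (a sketch; it simply inspects cv2.__version__):

import cv2

major = int(cv2.__version__.split('.')[0])
if major < 3:
    # OpenCV 2.4 style constructor
    detector = cv2.SimpleBlobDetector()
else:
    # OpenCV 3+ factory function
    detector = cv2.SimpleBlobDetector_create()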

OpenCV Python Feature Detection: how to provide a mask? (SIFT)

I am building a simple project in Python3, using OpenCV3, trying to match jigsaw pieces to the "finished" jigsaw image. I have started my tests by using SIFT.
I can extract the contour of the jigsaw piece and crop the image, but since most of the high frequencies reside, of course, around the edge of the piece (where the piece ends and the floor starts), I want to pass a mask to the SIFT detectAndCompute() method, forcing the algorithm to look for keypoints only within the piece.
test_mask = np.ones(img1.shape, np.uint8)
kp1, des1 = sift.detectAndCompute(img1, mask = test_mask)
After passing a test mask (to make sure it's uint8), I get the following error:
kp1, des1 = sift.detectAndCompute(img1,mask = test_mask)
cv2.error: /home/pyimagesearch/opencv_contrib/modules/xfeatures2d/src/sift.cpp:772: error: (-5) mask has incorrect type (!=CV_8UC1) in function detectAndCompute
From my research, uint8 is just an alias for CV_8U, which is the same as CV_8UC1. Couldn't find any code sample passing a mask to any feature detection algorithm in Python.
Thanks to Miki I've managed to find a bug.
It turned out that my original mask, which I created using threshold operations, was a 3-channel image ([rows], [cols], 3) even though it looked binary, so it couldn't be accepted as a mask.
Check the type and shape first (it has to be uint8 and single-channel, i.e. [rows, cols]):
print(mask.dtype)
print(mask.shape)
Convert the mask to gray if it's still 3-channel:
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
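Putting those points together, a minimal sketch might look like this. The file name and threshold value are placeholders, and SIFT_create assumes OpenCV 3 with the opencv_contrib xfeatures2d module, as in the error path above.

import cv2
import numpy as np

img1 = cv2.imread('piece.jpg')          # placeholder file name
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

# Build the mask from the grayscale image so it is already single-channel.
# The threshold value is a placeholder; use whatever separates piece from floor.
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

print(mask.dtype, mask.shape)           # expect uint8 and (rows, cols)

sift = cv2.xfeatures2d.SIFT_create()    # OpenCV 3 + opencv_contrib
kp1, des1 = sift.detectAndCompute(img1, mask)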

Python/OpenCV - Colored Droplets Recognition and Tracking

I'm a semi-noob at video analysis.
I have a Petri dish with some colored droplets inside, and I must detect them and keep track of their position, area and color.
First I want to detect the Petri dish itself (maybe using HoughCircles) and define an ROI to work on later.
The problem is that my dish detection is very "noisy": the program detects many circles (and I only need the one corresponding to the dish), and it never detects the right one.
Here is my code:
import cv2
import numpy as np

def main():
    cap = cv2.VideoCapture("dropletsS.wmv")
    cv2.namedWindow("prova")
    while(1):
        ret, RGBframe = cap.read()
        grayFrame = cv2.cvtColor(RGBframe, cv2.COLOR_BGR2GRAY)
        grayFrame = cv2.medianBlur(grayFrame, 7)
        circles = cv2.HoughCircles(grayFrame, cv2.HOUGH_GRADIENT, 50, 50)
        for c in circles[0, :]:
            cv2.circle(RGBframe, (c[0], c[1]), c[2], (0, 255, 0), 2)
        cv2.imshow("prova", RGBframe)
        cv2.imshow("grigio", grayFrame)
        cv2.waitKey(10)

if __name__ == "__main__":
    main()
And here is the result.
Does anyone have any suggestions? Suggestions on how I can later identify and track the droplets are welcome too.
Thanks in advance!
It's kind of hard to come up with a solution without having much of an idea of what the dish actually looks like, but I'll try to help you anyway.
If the problem is what I think it is, then you can probably open and dilate your image to join all the discontinuous blobs.
Do the following before you apply the Hough transform:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)) #declare outside while
grayFrame = cv2.morphologyEx(grayFrame, cv2.MORPH_OPEN, kernel)
grayFrame = cv2.dilate(grayFrame, kernel, iterations = 2)
Let me know if the output is the image you want; a sketch of the full pipeline follows below. Also play around with the parameters to get the required result: you can change the dimensions of the MORPH_ELLIPSE kernel and the number of dilation iterations. Increasing either of them increases the degree of dilation, so more of the blobs join up, and vice versa.
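For reference, here is an untested sketch of the whole loop with the morphology added before HoughCircles and the detection constrained by radius. minDist, param2, minRadius and maxRadius are placeholder values that should be tuned to roughly match the dish size in pixels, so that droplet-sized circles are rejected.

import cv2

cap = cv2.VideoCapture("dropletsS.wmv")
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # declared outside the loop

while True:
    ret, RGBframe = cap.read()
    if not ret:
        break
    grayFrame = cv2.cvtColor(RGBframe, cv2.COLOR_BGR2GRAY)
    grayFrame = cv2.medianBlur(grayFrame, 7)
    grayFrame = cv2.morphologyEx(grayFrame, cv2.MORPH_OPEN, kernel)
    grayFrame = cv2.dilate(grayFrame, kernel, iterations=2)

    # Placeholder parameters: the radius range should bracket the dish,
    # and a large minDist keeps nearby spurious circles from being reported.
    circles = cv2.HoughCircles(grayFrame, cv2.HOUGH_GRADIENT, dp=1, minDist=500,
                               param1=100, param2=60,
                               minRadius=150, maxRadius=400)
    if circles is not None:
        # Keep only the first detected circle as the dish candidate
        x, y, r = circles[0, 0]
        cv2.circle(RGBframe, (int(x), int(y)), int(r), (0, 255, 0), 2)

    cv2.imshow("prova", RGBframe)
    if cv2.waitKey(10) & 0xFF == 27:
        break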

Extract foreground from individual frames using opencv for python

The problem
I'm working with a camera that posts a snapshot to the web every 5 seconds or so. The camera is monitoring a line of people. I'd like my script to be able to tell me how long the line of people is.
What I've tried
At first, I thought I could do this using BackgroundSubtractorMOG, but this is just producing a black image. Here's my code for that, modified to use an image instead of a video capture:
import numpy as np
import cv2
frame = cv2.imread('sample.jpg')
fgbg = cv2.BackgroundSubtractorMOG()
fgmask = fgbg.apply(frame)
cv2.imshow('frame', fgmask)
cv2.waitKey()
Next, I looked at foreground extraction on an image, but this is interactive and doesn't suit my use case of needing the script to tell me how long the line of people is.
I also tried to use peopledetect.py, but since the image of the line is from an elevated position, that script doesn't detect any people.
I'm brand new to opencv, so any help is greatly appreciated. I can supply any additional details upon request.
Note:
I'm not so much looking for someone to solve the overall problem, as I am just trying to figure out a way to separate out the people from the background. However, I am open to approaching the problem a different way if you think you have a better solution.
EDIT: Here's a sample image as requested:
I figured it out! #QED helped me get there. Basically, you can't do this with just one image. You need AT LEAST 2 frames to compare so the algorithm can tell what's different (foreground) and what's the same (background). So I took 2 frames and looped through them to "train" the algorithm. Here's my code:
import numpy as np
import cv2

i = 1
while(1):
    fgbg = cv2.BackgroundSubtractorMOG()
    while(i < 3):
        print 'img' + str(i) + '.jpg'
        frame = cv2.imread('img' + str(i) + '.jpg')
        fgmask = fgbg.apply(frame)
        cv2.imshow('frame', fgmask)
        i += 1
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
cv2.destroyAllWindows()
And here's the result from 2 consecutive images!
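As a side note, cv2.BackgroundSubtractorMOG() is the OpenCV 2.4 API. On OpenCV 3+ the plain MOG variant moved to opencv-contrib, but MOG2 in the main module works the same way; a rough equivalent (file names are placeholders) would be:

import cv2

fgbg = cv2.createBackgroundSubtractorMOG2()

for name in ('img1.jpg', 'img2.jpg'):    # placeholder file names
    frame = cv2.imread(name)
    if frame is None:
        raise IOError('could not read ' + name)
    fgmask = fgbg.apply(frame)           # each call updates the background model

cv2.imshow('foreground', fgmask)
cv2.waitKey(0)
cv2.destroyAllWindows()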

cv2.imshow and numpy.dstack core dumped

I am trying to stack two images together so I can show both in a single window.
The first image is the original 3-channel image; the second one is a gray version.
I did the color conversion with cv2.cvtColor, transformed it back to 3 channels with numpy.dstack,
and when I try cv2.imshow, it gives me a "core dumped" error.
Am I missing something? I need both images to have the same number of channels to stack them
with numpy.hstack. This happens on a 64-bit Ubuntu machine.
import cv2
import numpy as np
img = cv2.imread("/home/bernie/Dropbox/Python/Opencv/lena512.jpg")
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = np.dstack((gray,gray,gray))
#res = np.hstack((img,gray))
print gray.dtype
print gray.shape
cv2.imshow('gray',gray)
#cv2.imshow('res',res)
cv2.waitKey()
Addition:
On the other hand, using
gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
in line 7 works without complaints, so I will stick with this for now. This means there is a difference
between the cv2.cvtColor result and the numpy.dstack result when turning a 1-channel image into a
3-channel one with equal values.
As suggested in the comments, try using cv2.merge since apparently it's strided differently from np.dstack:
gray = cv2.merge([gray]*3)
See #fraxel's link for more info
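For reference, here is a sketch of the original script with the suggested cv2.merge fix applied, using the same hard-coded image path as the question (untested on the Ubuntu machine that crashed):

import cv2
import numpy as np

img = cv2.imread("/home/bernie/Dropbox/Python/Opencv/lena512.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Either of these produces a 3-channel image that cv2.imshow accepts:
gray3 = cv2.merge([gray] * 3)                      # suggested fix
# gray3 = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)   # the workaround from the question

res = np.hstack((img, gray3))
cv2.imshow('res', res)
cv2.waitKey(0)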
