I am trying to make a face tracker that combines Haar cascade classification with Lucas-Kanade tracking on good features. However, I keep getting an error, and I cannot figure out what it means or how to solve it.
Can anyone help me here?
Error:
line 110, in <module>
cv2.imshow('frame',img)
error: /build/buildd/opencv-2.4.8+dfsg1/modules/highgui/src/window.cpp:269:
error: (-215)size.width>0 && size.height>0 in function imshow
Code:
from matplotlib import pyplot as plt
import numpy as np
import cv2

face_classifier = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

# params for ShiTomasi corner detection
feature_params = dict( maxCorners = 200,
                       qualityLevel = 0.01,
                       minDistance = 10,
                       blockSize = 7 )

# Parameters for lucas kanade optical flow
lk_params = dict( winSize = (15,15),
                  maxLevel = 2,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

# Create some random colors
color = np.random.randint(0,255,(100,3))

# Take first frame and find corners in it
ret, old_frame = cap.read()
cv2.imshow('Old_Frame', old_frame)
cv2.waitKey(0)
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
restart = True
#while restart == True:
face = face_classifier.detectMultiScale(old_gray, 1.2, 4)

if len(face) == 0:
    print "This is empty"

for (x,y,w,h) in face:
    focused_face = old_frame[y: y+h, x: x+w]

cv2.imshow('Old_Frame', old_frame)

face_gray = cv2.cvtColor(old_frame,cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(focused_face,cv2.COLOR_BGR2GRAY)

corners_t = cv2.goodFeaturesToTrack(gray, mask = None, **feature_params)
corners = np.int0(corners_t)

print corners

for i in corners:
    ix,iy = i.ravel()
    cv2.circle(focused_face,(ix,iy),3,255,-1)
    cv2.circle(old_frame,(x+ix,y+iy),3,255,-1)

plt.imshow(old_frame),plt.show()

# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)

while(1):
    ret,frame = cap.read()
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # calculate optical flow
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, corners_t, None, **lk_params)

    # Select good points
    good_new = p1[st==1]
    good_old = corners_t[st==1]

    # draw the tracks
    print "COLORING TIME!"
    for i,(new,old) in enumerate(zip(good_new,good_old)):
        print i
        print color[i]
        a,b = new.ravel()
        c,d = old.ravel()
        mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
        frame = cv2.circle(frame,(a, b),5,color[i].tolist(),-1)
        if i == 99:
            break

    img = cv2.add(frame,mask)
    cv2.imshow('frame',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1,1,2)

cv2.destroyAllWindows()
cap.release()
This error message
error: (-215)size.width>0 && size.height>0 in function imshow
simply means that imshow() is not getting a video frame from the input device.
You can try using
cap = cv2.VideoCapture(1)
instead of
cap = cv2.VideoCapture(0)
and see if the problem persists.
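As a quick sanity check (a minimal sketch; the helper function name here is made up), you can probe a few device indices and only call imshow() once a frame has actually been returned:

import cv2

def open_first_working_camera(max_index=3):
    # Hypothetical helper: try a few device indices until one opens and delivers a frame.
    for index in range(max_index + 1):
        cap = cv2.VideoCapture(index)
        ok, frame = cap.read()
        if cap.isOpened() and ok and frame is not None:
            print("Using camera index", index)
            return cap
        cap.release()
    raise RuntimeError("No working camera found")

cap = open_first_working_camera()
ret, frame = cap.read()
if ret:                        # only show the frame if the read succeeded
    cv2.imshow('frame', frame)
    cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()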
I had the same problem; the fix is to check ret when capturing the video:
import numpy as np
import cv2

# Capture video from file
cap = cv2.VideoCapture('video1.avi')

while True:
    ret, frame = cap.read()
    if ret == True:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame', gray)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
I had this problem.
Solution: update the path of the image.
If the path contains escape sequences (for example \n, \t or \a), it gets corrupted. Therefore, replace every backslash "\" with a forward slash "/"; this does not create an error and fixes the issue of reading the path.
Also double-check the file path/name: any typo in the name or path gives the same error.
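A minimal sketch of that check (the file name below is only a placeholder for your own path): a raw string or forward slashes keep "\t", "\n" and friends from being treated as escape sequences, and testing for None catches a bad path before imshow() fails.

import cv2

img = cv2.imread(r"C:\temp\new_image.png")   # raw string keeps backslashes literal
# img = cv2.imread("C:/temp/new_image.png")  # forward slashes work on Windows too

if img is None:                              # imread() returns None for a bad path
    raise IOError("Image not found - check the path and file name")

cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()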
You have to add a delay before reading from the camera.
Example Code:
import cv2
import numpy as np
import time

cam = cv2.VideoCapture(0)
time.sleep(2)   # give the camera a moment to warm up before the first read

while True:
    ret, frame = cam.read()
    cv2.imshow('webcam', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
I have also met this issue. In my case, the image path was wrong, so the image I read was None. After I corrected the image path, I could show it without any issue.
In these two lines:
mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
frame = cv2.circle(frame,(a, b),5,color[i].tolist(),-1)
try instead:
cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
cv2.circle(frame,(a, b),5,color[i].tolist(),-1)
I had the same problem: the variables were being returned empty.
I also met this error message on a Raspberry Pi 3. My solution, found after searching on Google, was to reload the camera kernel module; hope it can help you:
sudo modprobe bcm2835-v4l2
By the way, for this error please also check whether your camera and file path are actually usable.
That error also shows up when the video has played fine and the script is about to finish: imshow() receives empty frames once all frames have been consumed, so the error is always thrown at the end.
That is especially the case if you are playing a short (few-second) video file and don't notice that the video actually played in the background (behind your code editor), after which the script ends with that error.
while cap.isOpened():
    ret, img = cap.read()
    print img
    if img is None:   # have all the frames been consumed?
        break         # yes, so end the program
    #gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imshow('img2', img)
cv2.circle and cv2.line were not working for me: mask and frame were both being returned as None. Assigning the return value of these functions (line and circle) works in OpenCV 3 but not in older versions, where they return None.
I use ssh to connect to a remote server and run Python code that calls cv2.VideoCapture(0) to capture the remote webcam, and I encountered this error message:
error: (-215)size.width>0 && size.height>0 in function imshow
Finally, I had to grant my user account access to /dev/video0 (which is my webcam device), and the error message was gone. Use usermod to add the user to the video group:
usermod -a -G video user
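If you want to verify the permissions from Python before retrying (a small sketch added here, not part of the original answer; the device path is the one mentioned above):

import os

device = "/dev/video0"                         # webcam device from the answer above
print("readable:", os.access(device, os.R_OK))
print("writable:", os.access(device, os.W_OK))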
This is a problem with resource consumption or with choosing the wrong camera.
My suggestion is to restart the kernel, clear the output, and run it again; it works then.
Although this is an old thread, I got this error as well, and the solution that worked for me is not mentioned here.
Simply put, in my case the webcam was still in use in the background, as I could see from its LED being on. I have not yet been able to reproduce the issue, so I'm not sure a simple cv2.VideoCapture(0).release() would have solved it. I'll edit this post if and when I find out.
For me, a restart of my PC solved the issue, without changing anything in the code.
This error can also occur if you slice with a negative index and pass the resulting empty array to imshow, so check whether any of your coordinates went negative.
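For example (a minimal sketch with made-up numbers): if a detection box starts above or to the left of the image, the ROI slice can come out with zero width, and imshow() then raises exactly this error.

import numpy as np
import cv2

img = np.zeros((480, 640, 3), np.uint8)   # dummy frame, just for illustration

x, y, w, h = -20, 30, 100, 100            # x went negative, e.g. from a detector
roi = img[y:y + h, x:x + w]
print(roi.shape)                          # width is 0 here -> imshow() would raise (-215)

x, y = max(x, 0), max(y, 0)               # clamp the box to the image first
roi = img[y:y + h, x:x + w]
if roi.size > 0:
    cv2.imshow('roi', roi)
    cv2.waitKey(0)
cv2.destroyAllWindows()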
I was facing the same problem while trying to open images whose names contained spaces and special characters such as ´.
After renaming the images to remove the spaces and special characters, everything worked perfectly.
Check if you have "opencv_ffmpeg330.dll" in the root directory of Python 2.7, or of whichever Python version you are using. If not, you will find it in "....\OpenCV\opencv\build\bin".
Choose the appropriate version, copy the DLL into the root directory of your Python installation, and re-run the program.
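To confirm whether your OpenCV build can see FFMPEG at all, you can also print the build information from Python (a small check added here, not part of the original answer):

import cv2

# Look for the FFMPEG entry under "Video I/O" in the build configuration.
for line in cv2.getBuildInformation().splitlines():
    if "FFMPEG" in line:
        print(line.strip())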
Simply use an image extension like .jpeg or .png.
I'm relatively new to scripting. (I know quite a bit, but I also don't know quite a bit.)
I'm trying to have a simple script use OpenCV-Python to subtract two frames from a webcam and draw a bounding box around the changed pixels. The issue is that when I try to define the boundingRect (x,y,w,h = cv2.boundingRect(contours)), it gives the error:
Message=OpenCV(4.5.3) :-1: error: (-5:Bad argument) in function 'boundingRect'
> Overload resolution failed:
> - array is not a numpy array, neither a scalar
> - Expected Ptr<cv::UMat> for argument 'array'
I've been searching around for quite a while, but there seem to be very few people who've had my issue, and pretty much none of them had solutions that worked.
Here's my code:
import cv2
from time import sleep as wait
import numpy as np

lastFrame = "foobaz"
i = 0

# My webcam is on index 1; this isn't (at least shouldn't be) the issue. Make sure to set it back to 0 if you are testing.
vid = cv2.VideoCapture(1)

# A placeholder black image for the 'subtract' imshow window
black = np.zeros((512,512,3), np.uint8)

while(True):
    wait(0.5)
    ret, frame = vid.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurframe = cv2.GaussianBlur(frame,(25,25),0)

    # Makes sure lastFrame has been assigned; if not, use the placeholder black image.
    if lastFrame != "foobaz":
        # Subtracts the current frame and the last frame to find the difference.
        subFrame = cv2.subtract(blurframe,lastFrame)
    else:
        subFrame = black

    # Assigns the next lastFrame
    lastFrame = blurframe

    # Gets the threshold of the subtracted image
    ret,thresh1 = cv2.threshold(subFrame,40,255,cv2.THRESH_BINARY)

    # Sets the thresholded image to grayscale if the loop ran for the first time.
    if i == 0:
        thresh1 = cv2.cvtColor(thresh1, cv2.COLOR_BGR2GRAY)
        i += 1

    # This is where the issues arise. I'm trying to apply a bounding box using a contour, but it always errors at line 44.
    contours = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
    print(len(contours))
    x,y,w,h = cv2.boundingRect(contours)
    cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)

    cv2.imshow('frame',frame)
    cv2.imshow('subtract',thresh1)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I saw that some other posts had the contour type() so here it is:
type(contours) = <class 'list'>
CLOSED: I found out the issue. You have to iterate contours for it to work.
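A minimal sketch of that fix (the dummy image below is made up so the snippet runs on its own; the [0] index assumes OpenCV 4, as in the question): findContours() returns a list of contours, and boundingRect() wants a single contour, so iterate over the list instead of passing it whole.

import cv2
import numpy as np

# Dummy binary image standing in for thresh1 from the question.
thresh1 = np.zeros((200, 200), np.uint8)
cv2.rectangle(thresh1, (50, 50), (120, 150), 255, -1)
frame = cv2.cvtColor(thresh1, cv2.COLOR_GRAY2BGR)

contours = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)    # one contour at a time
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('boxes', frame)
cv2.waitKey(0)
cv2.destroyAllWindows()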
So, this piece of code seems to find a circle using my webcam pretty easily. However, I'd like it to also draw a circle whenever one is found, instead of simply closing the program. I tried to add a cv2.circle(parameters...) call to the code, but it didn't work. Can anyone help me?
import cv2
import numpy as np
import sys

color = (0,0,255)
cap = cv2.VideoCapture(0)

while(True):
    gray = cv2.medianBlur(cv2.cvtColor(cap.read()[1], cv2.COLOR_BGR2GRAY),5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 10, minRadius = 1, maxRadius = 20)
    if circles != None: print "Circle There !"
    cv2.imshow('video',gray)
    if cv2.waitKey(1) == 27:  # ESC key
        break

cap.release()
cv2.destroyAllWindows()
You are probably going to want to mess with the parameters in the cv2.HoughCircles function (you may even be able to delete some of the params entirely), but this should work.
cap = cv2.VideoCapture(0)

while(True):
    gray = cv2.medianBlur(cv2.cvtColor(cap.read()[1], cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 10,
                               param1=150, param2=40, minRadius=0, maxRadius=1000)
    if circles is not None:          # 'circles != None' misbehaves on NumPy arrays
        for i in circles[0,:]:
            # draw the outer circle
            cv2.circle(gray, (i[0], i[1]), i[2], (0,255,0), 2)
            # draw the center of the circle
            cv2.circle(gray, (i[0], i[1]), 2, (0,0,255), 3)
    cv2.imshow('video', gray)
    if cv2.waitKey(1) == 27:  # ESC key
        break

cap.release()
cv2.destroyAllWindows()
I am trying to write code using OpenCV-Python that automatically gets the Canny threshold values instead of setting them manually every time.
img= cv2.imread('micro.png',0)
output = np.zeros(img.shape, img.dtype)
# Otsu's thresholding
ret2,highthresh = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
lowthresh=0.1*highthresh
edges = cv2.Canny(img,output,lowthresh,highthresh)
cv2.imshow('canny',edges)
I am getting this error:
"File "test2.py", line 14, in <module>
edges = cv2.Canny(img,output,lowthresh,highthresh)
TypeError: only length-1 arrays can be converted to Python scalars"
Can anyone help me sort out this error? Thanks in advance.
It seems like cv2.threshold returns the detected edges, and Canny applies them to the image. The code below worked for me and gave me some nice detected edges in my image.
import cv2
cv2.namedWindow('canny demo')
img= cv2.imread('micro.png',0)
ret2,detected_edges = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
edges = cv2.Canny(detected_edges,0.1,1.0)
dst = cv2.bitwise_and(img,img,mask = edges)
cv2.imshow('canny',dst)
if cv2.waitKey(0) == 27:
    cv2.destroyAllWindows()
You are running:
cv2.Canny(img,output,lowthresh,highthresh)
It is looking for
cv2.Canny(img,lowthresh,highthresh,output)
I think the ordering changed in some version, because I have seen references to both.
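A minimal sketch combining that argument order with the Otsu idea from the question ('micro.png' is just the file name taken from the question):

import cv2

img = cv2.imread('micro.png', 0)
if img is None:
    raise IOError("Could not read micro.png")

# Otsu's threshold becomes the high Canny threshold; the low one is a fraction of it.
high_thresh, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
low_thresh = 0.1 * high_thresh

# Thresholds come right after the image; the edge map is the return value.
edges = cv2.Canny(img, low_thresh, high_thresh)

cv2.imshow('canny', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()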
Below is the output.
The problem areas are marked with red circles. There are lots of such lines in the image; I have marked only 3 of them. How can I remove these lines? Is there any filter that removes them? I think this is due to the presence of corners.
You can use a median filter to remove that artifact.
Try this code and see the difference while changing the value of the trackbar (mask size).
import cv2
import numpy as np

def nothing(x):
    pass

# image windows
cv2.namedWindow('image')
cv2.namedWindow('image2')

# create trackbars for the mask size
cv2.createTrackbar('mask','image',1,10,nothing)
cv2.createTrackbar('mask2','image2',1,10,nothing)

# loading images
img = cv2.imread('VfgSN.png')  # load your image with its path
gray_im = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

while True:
    # get current positions of the trackbars
    m = cv2.getTrackbarPos('mask','image')
    m2 = cv2.getTrackbarPos('mask2','image2')

    median = cv2.medianBlur(gray_im,(2*m+1))
    median2 = cv2.medianBlur(img,(2*m2+1))

    cv2.imshow('image',median)
    cv2.imshow('image2',median2)

    # press ESC to stop
    k = cv2.waitKey(1) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
Resultant image with mask size 5:
In grayscale (with the same mask size):
But then it also affects the other parts of the image. If you don't want that to happen to the rest of the image, you can apply the filter to your region of interest only, as in the sketch below.
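A minimal sketch of that idea (the coordinates below are made up; pick the region that actually contains the unwanted lines in your image):

import cv2

img = cv2.imread('VfgSN.png')                        # image name from the code above
gray_im = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

x, y, w, h = 100, 50, 120, 80                        # hypothetical region of interest
roi = gray_im[y:y + h, x:x + w]
gray_im[y:y + h, x:x + w] = cv2.medianBlur(roi, 5)   # filter only the ROI

cv2.imshow('roi filtered', gray_im)
cv2.waitKey(0)
cv2.destroyAllWindows()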
Hope it helps.