I'm trying to read a video file in opencv (python 2.7), and I just copied the example in the opencv tutorial, but nothing happens:
import numpy as np
import cv2
cap = cv2.VideoCapture('input.mp4')
while(cap.isOpened()):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The function cap.isOpened() always returns False. I have already tried using an absolute path as the argument to VideoCapture, but I still get the same result. What am I getting wrong?
Maybe your OpenCV version is not properly installed. You can check your build information with print cv2.getBuildInformation() and look for any odd or missing components.
I would suggest rebuilding it, or installing it via Anaconda to be sure you are not missing any package.
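For example, a quick way to dump the build configuration and check the Video I/O section for missing backends such as FFMPEG:
import cv2

# Print the full build configuration; the Video I/O section lists
# which backends (e.g. FFMPEG) this build can use to decode videos.
print(cv2.getBuildInformation())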
You need to give the video's location explicitly, or move the video into the directory you run the script from. Use the full path of the video file.
For example:
cap = cv2.VideoCapture("D:\\Video Folder\\input.mp4")
I believe this would solve this issue.
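A small sketch to rule out a wrong path first; the folder and file name here are placeholders for your own:
import os
import cv2

# Placeholder path - replace with the actual location of your video.
path = "D:\\Video Folder\\input.mp4"
print("File exists:", os.path.isfile(path))

cap = cv2.VideoCapture(path)
print("Opened:", cap.isOpened())
cap.release()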
My problem is that when I try to run this code on my Mac, the camera turns on its green light but no window ever opens. I have no idea why this is happening. I have tried a lot of things, but nothing worked for me; I suspect the latest Apple update broke something, because it used to work before.
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)

    # Display the resulting frame
    cv2.imshow('frame',frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
On a Mac you have to run the script from the built-in Terminal app, since, to my knowledge, no other terminal currently prompts for the camera permission needed by the cv2 library.
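A minimal check, assuming the built-in camera at index 0: if the capture never opens, or read() keeps returning False, the script most likely has not been granted camera permission.
import cv2

cap = cv2.VideoCapture(0)  # default built-in camera
if not cap.isOpened():
    print("Camera could not be opened - check the camera permission.")
else:
    ret, frame = cap.read()
    print("Got a frame:", ret)
cap.release()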
When I try to open and play a video, the code runs and finishes in 0.2 seconds without any errors and without ever playing the video.
import numpy as np
import cv2
cap = cv2.VideoCapture(r'C:\Users\Hayden\Desktop\test.mp4')
while(cap.isOpened()):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame',gray)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The only recommendation I've seen for this is to change the waitKey parameter to 1, but that doesn't make a difference right now.
I appreciate any and all help!
I would recommend you use os.startfile.
You have to import os. Say I had a file in the directory C:\Users\me\Videos; then I would do:
os.startfile('C:\\Users\\me\\Videos')
Note that os.startfile is only available on Windows. If you get a Unicode escape error in the path, use double backslashes, as shown above, or a raw string.
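A minimal sketch, assuming Windows and a placeholder file path; it hands the video to the system's default player:
import os

# Placeholder path - replace with the actual location of your video.
path = 'C:\\Users\\me\\Videos\\test.mp4'
if os.path.isfile(path):
    os.startfile(path)  # opens the file with the default player (Windows only)
else:
    print("File not found:", path)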
I was trying the OpenCV background subtraction code, but I kept getting this error message:
unable to stop the stream: invalid argument
I have commented out all the other code for now, and the only code I have left is:
import cv2
cap = cv2.VideoCapture('test.avi')
But the error message remains the same when I run it.
I am using an Ubuntu system.
What should I do?
It seems to be a problem with the test.avi file itself... can you please check with this file, which I just tested right now?
http://www.engr.colostate.edu/me/facil/dynamics/files/drop.avi
import cv2
cap = cv2.VideoCapture('drop.avi')
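A small diagnostic sketch, assuming drop.avi was saved next to the script and an OpenCV 3+ build; it reports whether the file opened and how many frames OpenCV sees:
import cv2

cap = cv2.VideoCapture('drop.avi')
print("Opened:", cap.isOpened())
print("Frame count:", cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()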
The code is OK and should work. You can check your dependencies, and as one last check that your OpenCV is installed properly, run the official example code:
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
This code will show your webcam. If it works, I'm quite sure your problem is with your file.
I ran this (first) example, which launches the webcam of my laptop so that I can see myself on the screen.
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I installed OpenBr on Ubuntu 14.04 LTS and I run successfully this command on a picture of myself:
br - gui -algorithm ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png
The above command, run in the Terminal, displays my picture and draws a square around my face (face detection); it also highlights my eyes in green.
My Dream:
I wonder if there is a way to combine this command with the short program above, so that when the webcam is launched I can see my face surrounded by the green rectangle?
Why do I need this ?
I found similar programs in pure OpenCV/Python for this purpose. However, I will later need more than simple face detection, and I judge that OpenBR will save me a lot of headache. That is why I am looking for a way to run the command line somewhere inside the code above, as a first but big step.
Hints:
The frame in the code corresponds to myself.png in the command line. The solution should pass frame in place of myself.png to the command line from within the program itself.
Thank you very much in advance.
EDIT:
After correcting the typos in @Xavier's solution I get no errors. However, the program does not behave as I want:
First, the camera is launched and I see myself, but my face is not detected with a green rectangle. Secondly, when I press a key to exit, the program does not exit; it shows me a still picture of myself with my face detected, and only a second key press exits the program. My goal is to see my face detected while the camera is running.
You do not need OpenBR for this at all.
Just see OpenCV's Python face-detection tutorial.
Something like this should work:
import numpy as np
import cv2
import os
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        # Save the last frame and run OpenBR's face detection on it
        cv2.imwrite("/home/nakkini/Desktop/myself.png", gray)
        os.system('br - gui -algorithm ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png')
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
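For the live detection asked about in the edit, here is a sketch of the pure-OpenCV route the tutorial describes; it assumes a build that ships the Haar cascade files (cv2.data.haarcascades is available in recent opencv-python packages) and draws a green rectangle around each detected face on every frame instead of calling OpenBR:
import cv2

# Load the bundled frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Detection works on grayscale images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Draw a green rectangle around every detected face.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()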
I would like to access my webcam from Python.
I tried using the VideoCapture extension (tutorial), but it didn't work very well for me: I had to work around problems such as it being a bit slow with resolutions above 320x230, and it sometimes returning None for no apparent reason.
Is there a better way to access my webcam from Python?
OpenCV has support for getting data from a webcam, and it comes with Python wrappers by default; you also need to install numpy for the OpenCV Python extension (called cv2) to work.
As of 2019, you can install both of these libraries with pip:
pip install numpy
pip install opencv-python
More information on using OpenCV with Python.
An example copied from Displaying webcam feed using opencv and python:
import cv2
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
if vc.isOpened(): # try to get the first frame
    rval, frame = vc.read()
else:
    rval = False

while rval:
    cv2.imshow("preview", frame)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key == 27: # exit on ESC
        break
vc.release()
cv2.destroyWindow("preview")
GStreamer can handle webcam input. If I remember correctly, there are Python bindings for it!
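If your OpenCV build was compiled with GStreamer support, you can also feed a GStreamer pipeline straight into cv2.VideoCapture; this is only a rough sketch, and the pipeline string and device path are assumptions that may need adjusting for your system:
import cv2

# Hypothetical V4L2 webcam pipeline; adjust the device path as needed.
pipeline = 'v4l2src device=/dev/video0 ! videoconvert ! appsink'
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("Opened via GStreamer:", cap.isOpened())
cap.release()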