Problems using webcam in Python + OpenCV

I am using the following code to access my webcam using OpenCV + Python...
import cv
cv.NamedWindow('webcam_feed', cv.CV_WINDOW_AUTOSIZE)
cam = cv.CaptureFromCAM(-1)
I am then getting the following error in the console...
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
I was originally using,
cv.CaptureFromCAM(0)
to access the same camera and had the same issue, so I switched to -1 because it is supposed to pick up any webcam.
I also tested to see if Ubuntu recognizes the webcam and it does. I am using v4l2 for the webcam.
EDIT:
I am using the following code to display a video feed, but it seems to show only one image captured by the webcam instead of a continuous video feed...
import cv
cv.NamedWindow('webcam_feed', cv.CV_WINDOW_AUTOSIZE)
cam = cv.CaptureFromCAM(-1)
feed = cv.QueryFrame(cam)
cv.ShowImage("webcam_feed", feed)
cv.WaitKey(-1)

WOW, I answered my own question 15 minutes after posting this. I did some research, and the reason the webcam grabs only one image is the...
cv.WaitKey(-1)
This waits indefinitely for a key press and doesn't allow the contents of the window to refresh. I set the number to 10...
cv.WaitKey(10)
and it worked beautifully. I also tried 100, but saw no difference. I only saw a difference when the number was 1000. I use 1 because it seems to run the smoothest.
Here is the full code to display a webcam feed:
import cv
cv.NamedWindow("webcam", 1)
cam = cv.CaptureFromCAM(-1)
while True:
    feed = cv.QueryFrame(cam)
    cv.ShowImage("webcam", feed)
    cv.WaitKey(1)

I believe you need to put
frame = cv.QueryFrame(cam)
cv.ShowImage("Webcam Feed", frame)
in a loop to continuously update the image shown in the window. That is, the frame from cv.QueryFrame is a static image, not a continuous video.
If you want to be able to exit with a key press, test cv.WaitKey with a small timeout in the loop too.
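For instance, a minimal sketch of that loop using the same legacy cv API (assuming Esc, key code 27, as the exit key):
import cv

cv.NamedWindow("Webcam Feed", cv.CV_WINDOW_AUTOSIZE)
cam = cv.CaptureFromCAM(-1)
while True:
    frame = cv.QueryFrame(cam)
    cv.ShowImage("Webcam Feed", frame)
    # WaitKey returns the pressed key code, or -1 if the timeout expires
    if cv.WaitKey(10) == 27:
        break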

For me, running the following command as root saved my time:
xhost +
Note that you need to close the terminal and open a new one afterwards.


Python: Create video out of processed images

I have a video where the front view of a car was recorded. The file is an .mp4 and I want to process the individual frames so I can extract more information (objects, lane lines, etc.). The problem is, when I want to create a video out of the processed files, I get an error. Here is what I have done so far:
Opened the video with cv2.VideoCapture() - Works fine
Saved the single frames of the video with cv2.imwrite() - Works fine
Creating a video out of single frames with cv2.VideoWriter() - Works fine
Postprocessing the video with cv2.cvtColor(), cv2.GaussianBlur() and cv2.Canny() - Works fine
Creating a video out of the processed images - Does not work.
Here is the code I used:
def process_image(image):
    gray = functions.grayscale(image)
    blur_gray = functions.gaussian_blur(gray, 5)
    canny_blur = functions.canny(blur_gray, 100, 200)
    return canny_blur

process_on = 0
count = 0
video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200))
vidcap = cv2.VideoCapture('input.mp4')
success, image = vidcap.read()
success = True
while success:
    processed = process_image(image)
    video.write(processed)
This is the error I get:
OpenCV Error: Assertion failed (img.cols == width && img.rows == height*3) in cv::mjpeg::MotionJpegWriter::write, file D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp, line 834
Traceback (most recent call last):
File "W:/Roborace/03_Information/10_Data/01_Montreal/camera/right_front_camera/01_Processed/Roborace_CAMERA_create_video.py", line 30, in
video.write(processed)
cv2.error: D:\Build\OpenCV\opencv-3.2.0\modules\videoio\src\cap_mjpeg_encoder.cpp:834: error: (-215) img.cols == width && img.rows == height*3 in function cv::mjpeg::MotionJpegWriter::write
My suggestion is: the normal images have 3 channels because of the RGB color format, while the processed images have only one channel. How can I adjust for this in the cv2.VideoWriter function?
Thanks for your help
The VideoWriter() class only writes color images, not grayscale images, unless you are on Windows, which it looks like you might be, judging from the paths in your output. On Windows, you can pass the optional argument isColor=0 or isColor=False to the VideoWriter() constructor to write single-channel images. Otherwise, the simple solution is to just stack your grayscale frames into a three-channel image (you can use cv2.merge([gray, gray, gray])) and write that.
From the VideoWriter() docs:
Parameters:
isColor – If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
So by default, isColor=True and the flag cannot be changed on a non-Windows system. So simply doing:
video.write(cv2.merge([processed, processed, processed]))
should patch you up. Even though the Windows variant allows writing grayscale frames, it may be better to use this second method for platform independence.
Also, as Zindarod mentions in the comments below, there are a number of other possible issues with your code here. I'm assuming you've pasted modified code that you weren't actually running, or code that you would otherwise have modified; if that's the case, please only post minimal, complete, and verifiable code examples.
First and foremost, your loop has no end condition, so it's indefinite. Secondly, you're hard-coding the frame size but VideoWriter() does not simply resize the images to that size. You must provide the size of the frame that you will pass into the VideoWriter(). Either resize the frame to the same size before writing to be sure, or create your VideoWriter using the frame size as defined in your VideoCapture() device (using the .get() methods for the frame size properties).
Additionally, you're reading only the first frame outside the loop. Maybe that was intentional, but if you want to process each frame of the video, you'll need to of course read them in a loop, process them, and then write them.
Lastly, you should have better error checking in your code. For example, see the OpenCV "Getting Started with Video" Python tutorial. The "Saving a Video" section has the proper checks and balances: run the loop while the video capture device is opened, and process and write the frame only if it was read properly; once the video is out of frames, the .read() method will return False, which allows you to break out of the loop and then close the capture and writer devices. Note the ordering here: the VideoCapture() device will still be "opened" even when you've read the last frame, so you need to exit the loop by checking the result of each read.
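Putting those fixes together, a minimal sketch of the corrected loop might look like this (assuming the process_image() function from the question is in scope; the writer's frame size is taken from the capture device rather than hard-coded):
import cv2

vidcap = cv2.VideoCapture('input.mp4')
# take the frame size from the capture device instead of hard-coding it
width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
video = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (width, height))

while vidcap.isOpened():
    success, image = vidcap.read()  # read every frame inside the loop
    if not success:
        break  # out of frames: the end condition for the loop
    processed = process_image(image)  # single-channel (grayscale) result
    # stack the grayscale frame into 3 channels so VideoWriter accepts it
    video.write(cv2.merge([processed, processed, processed]))

vidcap.release()
video.release()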
Add the isColor=False argument to the VideoWriter.
Adjusting VideoWriter this way will solve the issue.
Code:
video= cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (1600, 1200), isColor=False)

Moviepy unable to read duration of file

I have been using Moviepy to combine several shorter video files into hour-long files. Some small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when trying to create a clip using the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, it seems to be an issue when checking the duration of the file in ffmpeg_reader.py, where it looks for the duration parameter in the video file. However, since the file never finished recording properly, this information is missing. I'm not very familiar with the way video files are structured, so I am unsure how to proceed from here.
You're correct. This issue arises commonly when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
The final solution that worked for me involved specifying the path to ImageMagick's binary file as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I specifically defined it in config_defaults.py that it started working:
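(The snippet itself was cut off in the post; the edit in moviepy's config_defaults.py would look roughly like the line below, where the ImageMagick path is a hypothetical example that you would adjust to your own installation.)
# in moviepy/config_defaults.py: point moviepy at the binary explicitly
# (the path below is a hypothetical example)
IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-6.9.9-Q16\convert.exe"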
I solved it in a simpler way: with the help of VLC, I converted the file to the format "MPEG4 xxx TV/device",
and you can now use your new file with Python without any problem.
Here xxx = 720p or
xxx = 1080p;
everything depends on your choice of output format.
I already answered this question in the GitHub issue: https://github.com/Zulko/moviepy/issues/116
This issue appears when the VideoFileClip(file) function from moviepy looks for the duration parameter in the video file and it is missing. To avoid this (in the case of those corrupted files), you should make sure the total frame count is not zero before calling the function: clip = mp.VideoFileClip("/home/test/"+file)
So, I handled it in a simpler way using cv2.
The idea:
find out the total number of frames
if the frame count is zero, call the cv2 writer and generate a temporary copy of the video clip.
mix the audio from the original video into the copy.
replace the original video with the copy and delete the copy.
then call clip = mp.VideoFileClip("/home/test/"+file)
Clarification: Since the OpenCV VideoWriter does not encode audio, the new copy will not contain audio, so it is necessary to extract the audio from the original video and mix it into the copy before replacing the original video with it.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking Video {count} Frames {frames} fps: {fps}')
For a corrupted file this will return 0 frames, but it should still return at least the frame rate (fps).
Now we can add the check to avoid the error, and handle it by making a temporary video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi", cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video with the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video with the copy (os.replace moves the temporary file over the original, so the copy is removed in the same step):
import os

os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
I had the same problem and found a solution.
I don't know why, but if we write the path as a raw string, path = r'<path>', instead of the escaped form ("F:\\path"), we get no error.
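A minimal sketch of the difference (the file path is a hypothetical example):
import moviepy.editor as mp

# raw string: backslashes are not treated as escape characters
clip = mp.VideoFileClip(r'F:\videos\test.mp4')
# escaped form that caused trouble:
# clip = mp.VideoFileClip("F:\\videos\\test.mp4")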
Just open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
delete the code there, and replace it with the version I provide on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
clip1 = VideoFileClip('path')
c = clip1.duration
print(c)

Read Frames from RTSP Stream in Python

I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the "video file" with the "RTSP URL" of the actual camera.
The example below worked on an AXIS IP camera.
(This was not working in previous versions of OpenCV; it works on OpenCV 3.4.1 on Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop if no frame arrives (stream dropped)
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC python bindings (you can install it with pip install python-vlc) and play the stream:
import time
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
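For instance, a minimal sketch of the processing side using cv2 (reading the snapshot file written above; the processing step is just a placeholder):
import cv2

# load the latest snapshot written by the VLC player
img = cv2.imread('.snapshot.tmp.png')
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # ...run your motion detection or other analysis on gray here...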
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. For the OpenCV documentation, visit: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules available I believe) for handling the motion detection.
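As a starting point for that, here is a minimal frame-differencing sketch with OpenCV (the stream URL is a placeholder and the thresholds are arbitrary assumptions):
import cv2

cap = cv2.VideoCapture("rtsp://your-stream-url")
prev_gray = None
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
    if prev_gray is not None:
        # pixels that changed between consecutive frames indicate motion
        diff = cv2.absdiff(prev_gray, gray)
        motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(motion_mask) > 500:  # arbitrary sensitivity
            print("motion detected")
    prev_gray = gray
cap.release()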
Here is yet one more option.
It's much more complicated than the other answers.
But this way, with just one connection to the camera, you could "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
Of course, this is only for the case where you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
The server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000

# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4

# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")

ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4

# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you got interested, check out some more in https://elsampsa.github.io/valkka-examples/
Reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below will capture the video frames and save them to a folder (in the current working directory)
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # make sure the output folder exists

# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" will be saved. Or you can manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use it like this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IPAddress)")

Can't open video with opencv2

I'm trying to grab the images from a video file but I can't succeed to open it and I don't know why.
Below is a code sample that prints False where I'm expecting to get True. I don't get why I can't open this simple video file; any lead would be very much appreciated!
I tried with a relative path first then moved to an absolute path to see if anything changed and it's still the same...
import cv2

video = cv2.VideoCapture()
path = "C:\\Users\\Leo\\Dropbox\\Projet VISORD\\TP3\\video.mpg"
print video.open(path)
The codecs that cv2 supports out of the box are limited. A few of the formats can be found at the link below. I haven't tried them all yet.
http://opencv.willowgarage.com/wiki/documentation/cpp/highgui/VideoWriter
I've had some luck with the MP42 codec. I had to convert my camera's .mp4 (H.264) file to an .avi in the correct format.
I am using the ffmpeg tool at the moment:
ffmpeg -i input.mp4 -codec:v msmpeg4v2 output.avi
This still leaves something to be desired, as it loses resolution, so I am working toward a better solution. I only just started at this myself.
The following code works for me:
import cv2
Load the video file:
capture = cv2.VideoCapture('videos/my_video.avi')
Frame is the image you want, flag is success/failure:
flag, frame = capture.read()
Loop through the video's frames:
while True:
    flag, frame = capture.read()
    if flag == 0:
        break
    cv2.imshow("Video", frame)
    key_pressed = cv2.waitKey(10)  # Escape to exit
    if key_pressed == 27:
        break
However, MPEG is a compressed format, which means that you need the correct codecs installed and might have to do some more work to handle the conversion. You can read about the different supported video formats in the OpenCV VideoCodec documentation.
(If you just want a simple working example, try using an .avi file and see if it works for you.)
Had a similar problem. Try changing
path = "C:\\Users\\Leo\\Dropbox\\Projet VISORD\\TP3\\video.mpg"
to
path = "C:/Users/Leo/Dropbox/Projet VISORD/TP3/video.mpg"
and see if it works.

Problem with Opencv and Python

I'm new to Python and OpenCV, and I tried the following code to save an image to my computer from my webcam:
import cv

if __name__ == '__main__':
    pCapturedImage = cv.CaptureFromCAM(1)
    rospy.sleep(0.5)
    pSaveImg = cv.QueryFrame(pCapturedImage)
    cv.SaveImage("test.jpg", pSaveImg)
But when I try to open it, I find that the JPEG is empty.
Could someone please help?
Also, I tried a program to show what my webcam is seeing:
import cv

if __name__ == '__main__':
    cv.NamedWindow("camera", 1)
    capture = cv.CaptureFromCAM(0)
    while True:
        img = cv.QueryFrame(capture)
        cv.ShowImage("camera", img)
        if cv.WaitKey(10) == 27:
            break
    cv.DestroyWindow("camera")
But when I run it, I get an application that just shows me a gray screen.
Could someone help with this too?
Thanks.
Have you tried the demo programs? They show how to use the webcam among many other things.
For the first problem, I am not familiar with using cameras in opencv, but I got it to work by opening the capture (capture.open(device_id) in the code below)
Here is a working Python sample (I use the newer C++-based interface: imread, imwrite, VideoCapture, etc., which you can find in the OpenCV docs listed as "cv2" where it is available for Python):
import cv2
capture = cv2.VideoCapture() # this is the newer c++ interface
capture.open(0) # Use your device id; I think this is what you are missing.
image = capture.read()[1]
cv2.imwrite("test.jpg", image)
I got your second sample also working just by using open on the capture object:
import cv2

cv2.namedWindow("camera", 1)  # this is where you will put the video images
capture = cv2.VideoCapture()
capture.open(0)  # again, use your own device id
while True:
    img = capture.read()[1]
    cv2.imshow("camera", img)
    if cv2.waitKey(10) == 27:  # waiting for the esc key
        break
cv2.destroyWindow("camera")
