I am trying to grab frames from a webcam and write them to a video file. It works and the video shows something, but it is useless.
The script is quite long, so I will pick out the pieces relevant to this problem:
import cv
from time import localtime  # missing import in the original snippet

capture = cv.CaptureFromCAM(1) # from webcam
frame = cv.QueryFrame(capture)
newvideo = 'Videos/%d_%d_%d_%d_%d_%d.avi' % (localtime()[0], localtime()[1], localtime()[2], localtime()[3], localtime()[4], localtime()[5])
video = cv.CreateVideoWriter(newvideo, cv.CV_FOURCC('D','I','V','X'), 30, cv.GetSize(frame), 1)
while 1:
    frame = cv.QueryFrame(capture)
    cv.WriteFrame(video, frame)
    key = cv.WaitKey(int((1/30.)*1000) + 1)
Tip: start coding defensively and check the return values of the calls you make. For instance:
video = cv.CreateVideoWriter(newvideo, cv.CV_FOURCC('D','I','V','X'), 30, cv.GetSize(frame), 1)
if not video:
    print "Error in creating video writer"
    sys.exit(1)
This might be a codec-related problem, so try to create your video with other codecs:
video = cv.CreateVideoWriter(newvideo, cv.CV_FOURCC('F','L','V','1'), 30, cv.GetSize(frame), 1)
It might also be a good idea to update the codecs you have installed.
So I've been scouring GitHub looking for answers but haven't found a solution yet, so I will be grateful for any help!
I am trying to make a DIY trail camera. I have an IP camera providing an RTSP feed, and I want to capture this feed and take photos based on a PIR motion sensor (HC-SR50).
I am running this off a Raspberry Pi remotely. However, the image is stuck on the first frame: the script saves the first image from the RTSP feed, then saves and outputs that same image over and over, whilst imshow() shows the live feed fine (this is commented out below as it was interrupting the code).
I figured out that imshow() was also stuck at first, and managed to resolve this by searching this site (see code).
I am using TAPO cameras.
The issue seems to be in the while loop where pir.wait_for_motion() begins.
Thanks in advance for any help!
from gpiozero import MotionSensor
import cv2
from datetime import datetime
import time
import getpass
**So this part works OK:**

# rtsp in
rtsp_url = 'rtsp://user:pass@IP/stream2'
#vlc-in

# output
writepath = "OUTPUTPATH"

pir = MotionSensor(4)
cap = cv2.VideoCapture(rtsp_url)
frameRate = cap.get(cv2.CAP_PROP_FPS)  # property index 5 is CAP_PROP_FPS
The following was just to show that the RTSP feed was working. It is commented out for now, as it blocked the rest of the code from running, so this part isn't really necessary:
# while cap.isOpened():
#     flags, frame = cap.read()
#     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#     cv2.startWindowThread()
#     cv2.imshow('RGB OUTPUT', gray)
#     key = cv2.waitKey(1)
# cv2.destroyAllWindows()
The part below is where the problem seems to be; I can't figure out how to keep the frames moving from the RTSP feed.
# Image Capture
while cap.isOpened():
    pir.wait_for_motion()
    print("Motion")
    ret, frame = cap.read()
    if ret != True:
        break
    cc1 = datetime.now()
    c2 = cc1.strftime("%Y")
    c3 = cc1.strftime("%M")  # note: %M is minutes; %m would be the month
    c4 = cc1.strftime("%D")
    c5 = cc1.strftime("%H%M%S")
    hello = "image" + c2 + c3 + c5
    hellojoin = "".join(hello.split())
    # photo write
    cv2.imwrite(f'{writepath}/{hellojoin}.png', frame)
    print("image saved!")
    pir.wait_for_no_motion()
cap.release()
cv2.destroyAllWindows()
I wanted a PIR motion sensor to capture images from the RTSP feed based on activity in front of the sensor; basically acting as a trail camera/camera trap would.
I'm trying to save a video stream and make a timelapse of it. I know how to read a video from the computer (for example, mp4 format) and make a timelapse (I tried, and I succeeded). Now I'm trying to do the same with a video stream.
I have a link to an m3u8 video, https://wstream.comanet.cz:1943/live/Vrchlabi-sjezdovka2.stream_360p/playlist.m3u8, and I'm reading the video like this:
(Initialization:)
import cv2 as cv
import os  # needed for the directory check below

url = 'https://wstream.comanet.cz:1943/live/Vrchlabi-sjezdovka2.stream_360p/playlist.m3u8'
video = cv.VideoCapture(url)  # pass the url variable, not the literal string "url"
fourcc = cv.VideoWriter_fourcc(*'mp4v')
height = video.get(cv.CAP_PROP_FRAME_HEIGHT)
width = video.get(cv.CAP_PROP_FRAME_WIDTH)
outp = cv.VideoWriter(save_path, fourcc, 60, (int(width), int(height)))
dir_stream = "streamframes"
if not os.path.exists(dir_stream):
    os.makedirs(dir_stream)
if not video.isOpened():
    print("Error opening video file.")
(And the main part, a loop:)
name = 0
while video.isOpened():
    ret, frame = video.read()
    name += 1
    filename = f"{dir_stream}/{name}.jpg"
    cv.imwrite(filename, frame)
    f = cv.imread(filename)
    outp.write(f)
I don't like this way of saving the video... anyway, it's not completely wrong and it kind of works, but after a few read frames (sometimes 153, sometimes 212, etc.), ret comes back False and the code gets stuck in an infinite loop. After I stopped the program, the video was saved and I could play it; it was just short (because it did not record as long as I wanted, due to the infinite loop).
In the "while" part I also tried...:
while video.isOpened():
    ret, frame = video.read()
    if ret:
        frame_r = cv.imread(frame)
        outp.write(frame_r)
... it's much nicer, but cv.imread(frame) does not work (imread expects a filename, not a frame), so I did it the dirty way you can see above.
My goal is to record an online stream with Python for some time (e.g. 5 hours). I don't need to record at 30 fps; 1 fps is also fine.
Can anyone help me with this problem, or offer some advice? Why does ret return False after some seemingly random time? Do you have another solution? I'll be really thankful for your help; it's for my school project.
I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the video file with the RTSP URL of an actual camera.
The example below worked on an AXIS IP camera.
(This was not working for a while in previous versions of OpenCV; it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC python bindings (you can install it with pip install python-vlc) and play the stream:
import vlc
player=vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
import time  # missing import in the original snippet

while 1:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. See the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules available I believe) for handling the motion detection.
Here is yet one more option.
It's much more complicated than the other answers.
But this way, with just one connection to the camera, you can "fork" the same stream simultaneously to several processes, to the screen, recast it into multicast, write it to disk, etc.
Of course, this is just in case you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
Server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *
# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000
# define rgb image dimensions
width = 1920//4
height = 1080//4
# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()
livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)
# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)
# stop threads
livethread.stopCall()
avthread.stopCall()
print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient
width = 1920//4
height = 1080//4
# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)
while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you got interested, check out some more in https://elsampsa.github.io/valkka-examples/
Hi, reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below will capture the video frames and save them to a folder (in the current working directory)
dirname = 'myfolder'
# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    else:
        cv2.imshow('frame', frame)
        # The received "frame" will be saved. Or you can manipulate "frame" as per your needs.
        name = "rec_frame" + str(count) + ".jpg"
        cv2.imwrite(os.path.join(dirname, name), frame)
        count += 1
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
Use it like this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IP address)")
In my project, I want to save streaming video.
import cv2

if __name__ == "__main__":
    camera = cv2.VideoCapture(0)
    while True:
        f, img = camera.read()
        cv2.imshow("webcam", img)
        if cv2.waitKey(5) != -1:
            break
Using the above code it is possible to stream video from the webcam. How do I write this streaming video to a file?
You can simply save the grabbed frames into images:
camera = cv2.VideoCapture(0)
i = 0
while True:
    f, img = camera.read()
    cv2.imshow("webcam", img)
    if cv2.waitKey(5) != -1:
        break
    cv2.imwrite('{0:05d}.jpg'.format(i), img)
    i += 1
or to a video like this:
camera = cv2.VideoCapture(0)
video = cv2.VideoWriter('video.avi', -1, 25, (640, 480))
while True:
    f, img = camera.read()
    video.write(img)
    cv2.imshow("webcam", img)
    if cv2.waitKey(5) != -1:
        break
video.release()
When creating VideoWriter object, you need to provide several parameters that you can extract from the input stream. A tutorial can be found here.
In Ubuntu, create a video from given pictures using the following code:
os.system('ffmpeg -f image2 -r 8 -i %05d.bmp -vcodec mpeg4 -y movie3.mp4')
where the pictures are named 00000.bmp, 00001.bmp, 00002.bmp, etc.
If you really want to use one of the codecs provided by your PC to compress the frames, you should set the 2nd parameter of cv2.VideoWriter([filename, fourcc, fps, frameSize[, isColor]]) to the flag value -1. This will show you a list of the compression codecs available on your PC.
In my case, the codec provided by Intel is named IYUV or I420. I don't know about other manufacturers. See the fourcc webpage.
Set this information as follow
fourcc = cv2.cv.CV_FOURCC('I','Y','U','V')
# or
fourcc = cv2.cv.CV_FOURCC('I','4','2','0')
# setting all the information
out = cv2.VideoWriter('output1.avi', fourcc, 20, (640, 480))
Remember two small parameters which gave me a big headache:
Don't forget the cv2.cv prefix
Introduce the correct frame Size
For everything else, you can use the code provided by Ekalic
I have a webcam video recorder program built with Python, OpenCV and FFmpeg.
It works OK, except that the color of the video is more blue than reality. The problem seems to come from the color format of the images.
It seems that OpenCV is giving BGR images while ffmpeg+libx264 is expecting YUV420p. I've read that YUV420p corresponds to YCbCr.
OpenCV has no conversion from BGR to YCbCr; it only has a conversion to YCrCb.
I have done some searching and tried different alternatives to convert the OpenCV image into something that ffmpeg+libx264 would accept. None is working. At this point, I am a bit lost and I would appreciate any pointer that could help me fix this color issue.
You are right, the default pixel format of OpenCV is BGR.
The equivalent format on the ffmpeg side would be BGR24, so you don't need to convert it to YUV420p if you don't want to.
This post shows how to use a python application to capture frames from the webcam and write the frames to stdout. The purpose is to invoke this app on the cmd-line and pipe the result directly to the ffmpeg application, which stores the frames on the disk. Quite clever indeed!
capture.py:
import cv, sys

cap = cv.CaptureFromCAM(0)
if not cap:
    sys.stdout.write("failed CaptureFromCAM")

while True:
    if not cv.GrabFrame(cap):
        break
    frame = cv.RetrieveFrame(cap)
    sys.stdout.write(frame.tostring())
And the command to be executed on the shell is:
python capture.py | ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - -an -f avi -r 30 foo.avi
Where:
-r gives the frame rate coming off the camera
-an says "don't encode audio"
I tested this solution on my Mac OS X with OpenCV 2.4.2.
EDIT:
In case you haven't tried it, here is how to record from the camera and use OpenCV itself to write the video to an mp4 file on disk:
import cv, sys

cap = cv.CaptureFromCAM(0) # 0 is for /dev/video0
if not cap:
    sys.stdout.write("!!! Failed CaptureFromCAM")
    sys.exit(1)

frame = cv.RetrieveFrame(cap)
if not frame:
    sys.stdout.write("!!! Failed to retrieve first frame")
    sys.exit(1)

# Unfortunately, the following instruction returns 0
#fps = cv.GetCaptureProperty(cap, cv.CV_CAP_PROP_FPS)
fps = 25.0 # so we need to hardcode the FPS
print "Recording at: ", fps, " fps"

frame_size = cv.GetSize(frame)
print "Video size: ", frame_size

writer = cv.CreateVideoWriter("out.mp4", cv.CV_FOURCC('F', 'M', 'P', '4'), fps, frame_size, True)
if not writer:
    sys.stdout.write("!!! Error in creating video writer")
    sys.exit(1)

while True:
    if not cv.GrabFrame(cap):
        break
    frame = cv.RetrieveFrame(cap)
    cv.WriteFrame(writer, frame)

cv.ReleaseVideoWriter(writer)
cv.ReleaseCapture(cap)
I've tested this with Python 2.7 on Mac OS X and OpenCV 2.4.2.
Have you tried switching the Cb/Cr channels in OpenCV using split and merge?
Have you checked the conversion formulas in http://en.wikipedia.org/wiki/YCbCr?
The libx264 codec is able to process BGR images, so there is no need to convert to YCbCr, and no need to give a specific pix_fmt to ffmpeg. I was using RGB, and that was causing the blueish effect on the video.
The solution was simply to use the original image returned by the camera without any conversion. :)
I tried this in my previous investigation and it was crashing the app. The solution is to copy the frame returned by the camera:
frame = opencv.QueryFrame(camera)
if not frame:
    return None, None

# RGB: use this one for displaying on the screen
im_rgb = opencv.CreateImage(self.size, opencv.IPL_DEPTH_8U, 3)
opencv.CvtColor(frame, im_rgb, opencv.CV_BGR2RGB)

# BGR: use this one for the video
im_bgr = opencv.CreateImage(self.size, opencv.IPL_DEPTH_8U, 3)
opencv.Copy(frame, im_bgr)

return im_rgb, im_bgr
I've already answered this here, but my VidGear Python library automates the whole process of pipelining OpenCV frames into FFmpeg, and it also robustly handles the format conversion. Here's a basic Python example:
# import libraries
from vidgear.gears import WriteGear
import cv2

# define (codec, CRF, preset) FFmpeg tweak parameters for the writer
output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}

# open live webcam video stream on first index (i.e. 0) device
stream = cv2.VideoCapture(0)

# define writer with output filename 'Output.mp4'
writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)

# infinite loop
while True:
    # read frames
    (grabbed, frame) = stream.read()

    # check if frame is empty
    if not grabbed:
        # if True, break the infinite loop
        break

    # {do something with frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write a modified frame to writer
    writer.write(gray)

    # show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed, break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.release()

# safely close writer
writer.close()
Source: https://github.com/abhiTronix/vidgear/wiki/Compression-Mode:-FFmpeg#2-writegear-classcompression-mode-with-opencv-directly
You can check out full VidGear Docs for more advanced applications and exciting features.
Hope that helps!