Extract a single image from an RTSP stream using ffmpeg - Python

I'm looking for a way to extract one single image from an RTSP stream. My current solution does this with OpenCV, but I want to optimize it, so I'm going with a solution I found here: Handle large number of rtsp cameras without real-time constraint.
The drawback is that ffmpeg opens a whole video stream, so this is no more efficient than OpenCV. I want to fetch one single image with ffmpeg, but currently I'm not able to do that. I get the error
"ValueError: cannot reshape array of size 0 into shape (160,240,3)" - so there is no image in the frame. I'm not completely convinced that my ffmpeg_cmd is correct.
import cv2
import numpy as np
import subprocess as sp

# Use public RTSP streaming for testing:
in_stream = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4"

# Use OpenCV for getting the resolution of the input video
cap = cv2.VideoCapture(in_stream)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Release VideoCapture - it was used just for getting the video resolution
cap.release()

# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
FFMPEG_BIN = "ffmpeg"
ffmpeg_cmd = [FFMPEG_BIN,
              '-vframes', '1',
              '-rtsp_transport', 'tcp'
              '-i', in_stream, '-c:v', 'mjpeg', '-f', 'image2pipe', 'pipe:']

process = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE)
raw_frame = process.stdout.read(width * height * 3)
frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, 3))
process.stdout.close()
process.wait()
cv2.destroyAllWindows()
If someone has a completely different solution, that's OK. I need something that is more efficient than doing it with OpenCV.
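For reference, here is a minimal, untested sketch of a command list that should make the fixed-size read work: it asks ffmpeg for raw BGR output, so the pipe carries exactly width * height * 3 bytes per frame. Note also that the list above is missing a comma after 'tcp', so 'tcp' and '-i' are concatenated into a single argument. The sketch reuses in_stream, width and height from the code above:

import subprocess as sp
import numpy as np

ffmpeg_cmd = ['ffmpeg',
              '-rtsp_transport', 'tcp',   # note the comma: 'tcp' and '-i' must stay separate items
              '-i', in_stream,
              '-frames:v', '1',           # output option: stop after a single frame
              '-f', 'rawvideo', '-pix_fmt', 'bgr24', 'pipe:']
process = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE)
raw_frame = process.stdout.read(width * height * 3)   # exactly one raw BGR frame
process.wait()
frame = np.frombuffer(raw_frame, np.uint8).reshape((height, width, 3))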

Related

OpenCV saving empty video - Python

I am facing a problem recording a video from a camera. I am using Python and OpenCV to do so. I have my images in QImage format; I convert them to numpy arrays in order to display and stream them. When capturing video from the cam (using OpenCV's VideoCapture), everything works fine.
When I try to record a video and save it to a folder (using OpenCV's VideoWriter_fourcc), I get no errors but I get an empty video. I did a lot of searching to find the problem but I couldn't.
Here's the code that I use to record a video:
import cv2

fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
# img is a numpy array
videoWriter = cv2.VideoWriter('video.avi', fourcc, 20, (img.shape[0], img.shape[1]))
while True:
    videoWriter.write(img)
videoWriter.release()
I tried to change the frame rate, the frameSize, the extension of the video, and the codec, but nothing worked.
I am so desperate. I appreciate any help and suggestions I can get.
Thank you
The issue is that you mixed up width and height as you passed them to VideoWriter.
VideoWriter wants (width, height) for the frameSize argument.
Note that width = img.shape[1] and height = img.shape[0]
Giving VideoWriter a, say, 1920x1080 frame to write, while having promised 1080x1920 frames in the constructor, will fail, but silently.
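For completeness, a minimal sketch of the fixed constructor call, assuming img is the numpy frame from the question:

import cv2

fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
height, width = img.shape[:2]  # img.shape is (height, width, channels)
# VideoWriter expects frameSize as (width, height)
videoWriter = cv2.VideoWriter('video.avi', fourcc, 20, (width, height))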

Using an FFMPEG command to read frames and show them using the imshow function in OpenCV

I am trying to get the frame using the ffmpeg command and show it using the OpenCV function cv2.imshow(). This snippet gives a black-and-white image from the RTSP stream link; the output is shown in the linked screenshot [output of FFmpeg].
I have tried the ffplay command, but it displays the image directly; I am not able to access the frame or apply any image processing.
import cv2
import numpy
import subprocess as sp

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    raw_image = pipe.stdout.read(420*360*3)
    # transform the bytes read into a numpy array
    image = numpy.fromstring(raw_image, dtype='uint8')
    image = image.reshape((360, 420, 3))
    cv2.imshow('hello', image)
    cv2.waitKey(1)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()
You're using the wrong output format; it should be -f rawvideo. That should fix your primary problem. The current -f image2pipe wraps the RGB data in an image format (I don't know which one; maybe BMP, since the rawvideo codec is being used?) and is thus not shown correctly. See the sketch after these tips.
Other tips:
If your data is grayscale, use -pix_fmt gray and read 420*360 bytes at a time.
I don't know the difference in speed, but I use np.frombuffer instead of np.fromstring.
pipe.stdout.flush() is a dangerous move IMO, as the buffer may hold a partial frame. Consider setting bufsize to an exact integer multiple of the frame size in bytes.
If you expect processing to take much longer than the input frame rate, you may want to reduce the output frame rate with -r to match the processing rate (to avoid extraneous data transfer from ffmpeg to Python).
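Putting the main fix together, here is an untested sketch of the corrected pipeline, keeping the question's ffmpeg path, stream URL and 420x360 frame size:

import subprocess as sp
import numpy as np

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'rawvideo',            # raw frames on the pipe, no image-format wrapping
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']
frame_bytes = 420 * 360 * 3
# bufsize chosen as an exact integer multiple of the frame size, per the tip above
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=frame_bytes * 10)
raw_image = pipe.stdout.read(frame_bytes)    # read exactly one frame
image = np.frombuffer(raw_image, dtype=np.uint8).reshape((360, 420, 3))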

Sending OpenCV output to VLC stream

This has been keeping me busy for a good part of the afternoon and I haven't been able to get it to work, but I feel like I'm really close.
I've got OpenCV set up, taking the video feed from a webcam. To be able to access this video feed (with the OpenCV overlay) I want to pipe the output of the OpenCV Python script to a VLC stream. I managed to get the stream up and running and can connect to it. VLC resizes to the correct aspect ratio and resolution, so it gets some correct data, but the image I get is just jitter:
python opencv.py | cvlc --demux=rawvideo --rawvid-fps=30 --rawvid-width=320 --rawvid-height=240 --rawvid-chroma=RV24 - --sout "#transcode{vcodec=h264,vb=200,fps=30,width=320,height=240}:std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=:8081/stream.flv}" &
The output of the script is a constant video feed sent to stdout, as follows:
import sys
from imutils.video import WebcamVideoStream

vs = WebcamVideoStream(src=0)
while True:
    frame = vs.read()
    sys.stdout.write(frame.tostring())
The above example is a dumbed-down version of the script I'm using; as seen, I'm making use of the imutils library: https://github.com/jrosebr1/imutils
If anyone could give me a nudge in the right direction I would appreciate it greatly. My guess is that stdout.write(frame.tostring()) is not what VLC expects, but I haven't been able to figure it out myself.
The following works for me under Python 3:
import numpy as np
import sys
import cv2

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        sys.stdout.buffer.write(frame.tobytes())
    else:
        break
cap.release()
And the command line (my webcam has a different resolution, and I only display the result, but you did not have problems with that):
python opencv.py | vlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=640 --rawvid-height=480 --rawvid-chroma=RV24 - --sout "#display"
Of course, this requires a conversion from BGR to RGB, as the former is the default in OpenCV.
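For example, the write in the loop above could become (just a sketch of that conversion):

rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV delivers BGR; swap to RGB for VLC's RV24
sys.stdout.buffer.write(rgb.tobytes())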
This worked for me, though I am sending to an RTSP stream and not using the imutils library:
import numpy as np
import sys
import cv2

input_rtsp = "rtsp://10.10.10.9:8080"
cap = cv2.VideoCapture(input_rtsp)
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        sys.stdout.write(frame.tostring())
    else:
        break
cap.release()
Then, on the command line:
python opencv.py | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1280 --rawvid-height=720 --rawvid-chroma=RV24 - --sout "#transcode{vcodec=h264,vb=200,fps=25,width=1280,height=720}:rtp{dst=10.10.10.10,port=8081,sdp=rtsp://10.10.10.10:8081/test.sdp}"
Note that you do not need to convert OpenCV's BGR to RGB here.

Can you "stream" images to ffmpeg to construct a video, instead of saving them to disk?

My work recently involves programmatically making videos. In Python, the typical workflow looks something like this:
import subprocess, Image, ImageDraw

for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    img.save("%07d.png" % i)
subprocess.call(["ffmpeg", "-y", "-r", str(frames_per_second), "-i", "%07d.png",
                 "-vcodec", "mpeg4", "-qscale", "5", "-r", str(frames_per_second), "video.avi"])
This workflow creates an image for each frame in the video and saves it to disk. After all images have been saved, ffmpeg is called to construct a video from all of the images.
Saving the images to disk (not the creation of the images in memory) consumes the majority of the cycles here, and does not appear to be necessary. Is there some way to perform the same function, but without saving the images to disk? So, ffmpeg would be called and the images would be constructed and fed to ffmpeg immediately after being constructed.
OK, I got it working, thanks to LordNeckbeard's suggestion to use image2pipe. I had to use JPEG encoding instead of PNG because image2pipe with PNG doesn't work on my version of ffmpeg. The first script is essentially the same as your question's code, except I implemented simple image creation that just makes images going from black to red. I also added some code to time the execution.
Serial execution:
import subprocess, Image

fps, duration = 24, 100
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save("%07d.jpg" % i)
subprocess.call(["ffmpeg", "-y", "-r", str(fps), "-i", "%07d.jpg",
                 "-vcodec", "mpeg4", "-qscale", "5", "-r", str(fps), "video.avi"])
Parallel execution (with no images saved to disk):
import Image
from subprocess import Popen, PIPE

fps, duration = 24, 100
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'mjpeg', '-r', '24', '-i', '-',
           '-vcodec', 'mpeg4', '-qscale', '5', '-r', '24', 'video.avi'], stdin=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save(p.stdin, 'JPEG')
p.stdin.close()
p.wait()
The results are interesting; I ran each script 3 times to compare performance:
serial:
12.9062321186
12.8965060711
12.9360799789
parallel:
8.67797684669
8.57139396667
8.38926696777
So it seems the parallel version is about 1.5 times faster.
imageio supports this directly. It uses FFMPEG and the Video Acceleration API, making it very fast:
import imageio

writer = imageio.get_writer('video.avi', fps=fps)
for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    writer.append_data(img)
writer.close()
This requires the ffmpeg plugin, which can be installed using e.g. pip install imageio[ffmpeg].
I'm kind of late, but the VidGear Python library's WriteGear API automates the process of pipelining OpenCV frames into FFmpeg on any platform in real time, with hardware-encoder support, while providing the same opencv-python syntax. Here's a basic Python example:
# import required libraries
from vidgear.gears import WriteGear
import cv2

# define (codec, CRF, preset) FFmpeg tweak parameters for the writer
output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}

# open live webcam video stream on first index (i.e. 0) device
stream = cv2.VideoCapture(0)

# define writer with output filename 'Output.mp4'
writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)

# infinite loop
while True:
    # read frames
    (grabbed, frame) = stream.read()
    # check if frame is empty
    if not grabbed:
        # if True, break the infinite loop
        break
    # {do something with the frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # write the modified frame to the writer
    writer.write(gray)
    # show output window
    cv2.imshow("Output Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key press
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()
# safely close the video stream
stream.release()
# safely close the writer
writer.close()
Source: https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/usage/#using-compression-mode-with-opencv
You can check out VidGear Docs for more advanced applications and features.

Making a video with opencv and ffmpeg. How to find the right color format?

I have a webcam video recorder program built with Python, OpenCV and ffmpeg.
It works OK, except that the color of the video is more blue than reality. The problem seems to come from the color format of the images.
It seems that OpenCV is giving BGR images while ffmpeg+libx264 is expecting YUV420p. I've read that YUV420p corresponds to YCbCr.
OpenCV has no conversion from BGR to YCbCr. It only has a conversion to YCrCb.
I have done some searching and tried different alternatives to convert the OpenCV image to something that could be OK for ffmpeg+libx264. None is working. At this point I am a bit lost, and I would appreciate any pointer that could help me fix this color issue.
You are right, the default pixel format of OpenCV is BGR.
The equivalent format on the ffmpeg side would be BGR24, so you don't need to convert it to YUV420p if you don't want to.
This post shows how to use a python application to capture frames from the webcam and write the frames to stdout. The purpose is to invoke this app on the cmd-line and pipe the result directly to the ffmpeg application, which stores the frames on the disk. Quite clever indeed!
capture.py:
import cv, sys

cap = cv.CaptureFromCAM(0)
if not cap:
    sys.stdout.write("failed CaptureFromCAM")
while True:
    if not cv.GrabFrame(cap):
        break
    frame = cv.RetrieveFrame(cap)
    sys.stdout.write(frame.tostring())
And the command to be executed in the shell is:
python capture.py | ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - -an -f avi -r 30 foo.avi
Where:
-r gives the frame rate coming off the camera
-an says "don't encode audio"
I tested this solution on my Mac OS X with OpenCV 2.4.2.
EDIT:
In case you haven't tried to record from the camera and use OpenCV to write the video to an mp4 file on the disk, here we go:
import cv, sys

cap = cv.CaptureFromCAM(0)  # 0 is for /dev/video0
if not cap:
    sys.stdout.write("!!! Failed CaptureFromCAM")
    sys.exit(1)

frame = cv.RetrieveFrame(cap)
if not frame:
    sys.stdout.write("!!! Failed to retrieve first frame")
    sys.exit(1)

# Unfortunately, the following instruction returns 0
#fps = cv.GetCaptureProperty(cap, cv.CV_CAP_PROP_FPS)
fps = 25.0  # so we need to hardcode the FPS
print "Recording at: ", fps, " fps"

frame_size = cv.GetSize(frame)
print "Video size: ", frame_size

writer = cv.CreateVideoWriter("out.mp4", cv.CV_FOURCC('F', 'M', 'P', '4'), fps, frame_size, True)
if not writer:
    sys.stdout.write("!!! Error in creating video writer")
    sys.exit(1)

while True:
    if not cv.GrabFrame(cap):
        break
    frame = cv.RetrieveFrame(cap)
    cv.WriteFrame(writer, frame)

cv.ReleaseVideoWriter(writer)
cv.ReleaseCapture(cap)
I've tested this with Python 2.7 on Mac OS X and OpenCV 2.4.2.
Have you tried switching the Cb/Cr channels in OpenCV using split and merge?
Have you checked the conversion formulas at http://en.wikipedia.org/wiki/YCbCr?
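For illustration, a sketch of that chroma swap using the modern cv2 API (frame here is assumed to be a BGR numpy image; the posts above use the old cv module):

import cv2

ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)  # OpenCV's native order is YCrCb
y, cr, cb = cv2.split(ycrcb)
swapped = cv2.merge((y, cb, cr))                  # reorder the chroma planes to Y, Cb, Cr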
The libx264 codec is able to process BGR images. There is no need to use any conversion to YCbCr, and no need to give a specific pix_fmt to ffmpeg. I was using RGB and it was causing the blueish effect on the video.
The solution was simply to use the original image returned by the camera without any conversion. :)
I had tried this in my previous investigation and it was crashing the app. The solution for that is to copy the frame returned by the camera:
frame = opencv.QueryFrame(camera)
if not frame:
    return None, None

# RGB: use this one for displaying on the screen
im_rgb = opencv.CreateImage(self.size, opencv.IPL_DEPTH_8U, 3)
opencv.CvtColor(frame, im_rgb, opencv.CV_BGR2RGB)

# BGR: use this one for the video
im_bgr = opencv.CreateImage(self.size, opencv.IPL_DEPTH_8U, 3)
opencv.Copy(frame, im_bgr)

return im_rgb, im_bgr
I've already answered this here, but my VidGear Python library automates the whole process of pipelining OpenCV frames into FFmpeg and also robustly handles the format conversion. The basic Python example is the same WriteGear snippet shown in the earlier answer above.
Source: https://github.com/abhiTronix/vidgear/wiki/Compression-Mode:-FFmpeg#2-writegear-classcompression-mode-with-opencv-directly
You can check out the full VidGear Docs for more advanced applications and exciting features.
Hope that helps!
