I'm trying very hard to save a frame from an IP cam. The cam streams H.264 (MP4/AVC, according to VLC) and supports RTSP and ONVIF, so I can see the stream in VLC.
I want to capture the frame on a headless Raspberry Pi.
I can receive the RTP frames with this Python script: https://code.google.com/p/python-mjpeg-over-rtsp-client/downloads/detail?name=rtsp_mjpeg_client-0.1.zip&can=2&q=
But since my cam is not streaming MJPEG, I can't use its JPEG creation.
I tried several other solutions:
headless Selenium (too slow)
live555 (couldn't get it to run)
OpenCV (doesn't seem to record a stream?)
Do you have any other suggestions?
I made it work with FFmpeg and some shell scripts.
FFmpeg is able to read the stream and create a JPEG for every frame in one line:
ffmpeg -i rtsp://$user:$pw@$ip:554 -f image2 -vf fps=3 ${name}_%03d.jpg -loglevel quiet
This limits the output to 3 fps, which is enough for me.
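If you want to trigger the same thing from Python on the headless Pi instead of a shell script, a minimal subprocess wrapper might look like this (the credentials, IP, and filename pattern are placeholders, not values from the question):

import subprocess

def grab_frames(user, pw, ip, name, fps=3):
    # Same ffmpeg call as above, just assembled from Python.
    url = "rtsp://{}:{}@{}:554".format(user, pw, ip)
    cmd = ["ffmpeg", "-loglevel", "quiet",
           "-i", url,
           "-f", "image2",
           "-vf", "fps={}".format(fps),
           "{}_%03d.jpg".format(name)]
    subprocess.run(cmd, check=True)

grab_frames("admin", "secret", "192.168.0.42", "cam")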
I'm trying to use PyAV to output video to a V4L2 loopback device (/dev/video1), but I can't figure out how to do it. PyAV uses avformat_write_header() from libav* (the FFmpeg bindings).
I've been able to get ffmpeg to output to the v4l2 device from the CLI, but not from Python.
Found the solution. The way to do this is (see the sketch after the list):
Set the container format to v4l2
Set the stream format as "rawvideo"
Set the frame rate (if it's a live stream, set it 1 fps higher than the source so that you don't get an error)
Set the pixel format to either RGB24 or YUV420
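Putting those four steps together, a minimal PyAV sketch might look like the following (the device path, resolution, and the dummy frame source are assumptions for illustration, not part of the original answer):

import numpy as np
import av

WIDTH, HEIGHT, FPS = 640, 480, 30

# Container format "v4l2", stream codec "rawvideo", pixel format RGB24,
# frame rate set one above the (hypothetical) live source, per the steps above.
container = av.open('/dev/video1', mode='w', format='v4l2')
stream = container.add_stream('rawvideo', rate=FPS + 1)
stream.width = WIDTH
stream.height = HEIGHT
stream.pix_fmt = 'rgb24'

for _ in range(300):
    # Dummy frame; in practice this would come from your actual video source.
    img = np.random.randint(0, 256, (HEIGHT, WIDTH, 3), dtype=np.uint8)
    frame = av.VideoFrame.from_ndarray(img, format='rgb24')
    for packet in stream.encode(frame):
        container.mux(packet)

container.close()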
I'm looking for a way to continuously stream audio from a server; the main issue is that the server-side code will receive many URLs to stream audio from. There will also be instances where the URL is swapped live and a new piece of audio is streamed instead. I have not yet found a solution that wouldn't involve downloading each file before streaming it, which would defeat the live feature.
I've attempted to use VLC for Python, but it wouldn't let me change the URL being streamed on the fly. I've also attempted to use PyAudio, but I haven't been able to get the correct audio format, let alone swap the source of the audio.
An example link (fair warning, it will autoplay): audio
To make a continuous stream that is sent to clients, you'll need to break this project into two halves.
Playout
You need something to decode the source streams from their compressed formats to a non-compressed standardized format that you can manipulate... raw PCM samples. Use a child process and have it output to STDOUT so you can get that data in your Python script. You can use VLC for this if you want, but FFmpeg is pretty easy:
ffmpeg -i "http://example.com/stream" -ar 48000 -ac 2 -f f32le -acodec pcm_f32le -
That will output raw PCM to STDOUT as 32-bit floats, in stereo, at 48 kHz. Once in this standard format, you can arbitrarily join streams. So, when you're done playing one stream, just kill the process, switch to the next, and start playing back samples from the new one.
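For example, from Python you could spawn that command with subprocess and read the PCM off its stdout in fixed-size chunks (a sketch; the URL and chunk size are arbitrary placeholders):

import subprocess

def open_source(url):
    # Decode any source stream to raw 32-bit float stereo PCM at 48 kHz on stdout.
    cmd = ["ffmpeg", "-i", url,
           "-ar", "48000", "-ac", "2",
           "-f", "f32le", "-acodec", "pcm_f32le", "-"]
    return subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

source = open_source("http://example.com/stream")
CHUNK = 48000 * 2 * 4 // 10  # roughly 100 ms of stereo float samples

while True:
    pcm = source.stdout.read(CHUNK)
    if not pcm:
        break  # stream ended; kill the process and open the next URL here
    # ... pass `pcm` on to the encoding half described below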
Encoding
You want to create a single PCM stream that you can then re-encode with an external encoder, basically the reverse of what you did on playout. Again, something FFmpeg can do for you:
ffmpeg -f f32le -ar 48000 -ac 2 -i - -f opus -acodec libopus icecast://...
You'll note that in this output example I suggested sending the stream off to Icecast. Icecast is a decent streaming server you can use. If you'd rather output directly over HTTP, you can, but if you're playing this stream out to more than one listener, I'd suggest letting Icecast or similar take care of that for you.
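On the Python side, the encoder half can be another subprocess that you write the joined PCM into (again just a sketch; the Icecast URL, mount point, and credentials are placeholders):

import subprocess

encoder = subprocess.Popen(
    ["ffmpeg",
     "-f", "f32le", "-ar", "48000", "-ac", "2", "-i", "-",  # raw PCM on stdin
     "-f", "opus", "-acodec", "libopus",
     "icecast://source:hackme@localhost:8000/stream.opus"],
    stdin=subprocess.PIPE,
)

# Inside the playout loop, forward each PCM chunk to the encoder:
# encoder.stdin.write(pcm)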
I'm quite new to video streaming and opencv in general.
I wanted to stream my computations to another device via rtsp from a raspberry pi 3 using h264.
I tried writing to a pipe using popen with FFmpeg to an ffserver, and with VLC creating RTSP servers to stream the content. Unfortunately I get huge lag in the stream; the best I could do was get it down to 3 seconds.
Is there any way to achieve this? I'm open to consider other technologies.
Thank you
RTMP is not the best way to achieve low latency (< 5 s).
I suggest using FFmpeg with pure RTP to stream the video to an RTSP server, or using GStreamer directly with Gst-RTSP-server; both are open-source solutions in C.
Latency will also be impacted by your encoder and the hardware it runs on.
This question has more information.
I would recommend using RTMP instead. Latency can be as low as hundreds of milliseconds.
Another thing to consider is that VLC and other clients will introduce a video delay due to internal buffering by the player. Look for the option to not buffer the video and you should be able to shave off a couple of seconds from the video latency.
With ffplay you can try the following:
ffplay -fflags nobuffer rtmp://your.server.ip/path/to/stream -loglevel verbose
If you transmux to DASH or HLS, you can also expect to introduce more latency into the video stream.
I need to use ffmpeg/avconv to pipe JPEG frames to a Python PIL (Pillow) Image object, using gst as an intermediary*. I've been searching everywhere for this answer without much luck. I think I'm close, but I'm stuck. I'm using Python 2.7.
My ideal pipeline, launched from python, looks like this:
ffmpeg/avconv (as h264 video)
Piped ->
gst-streamer (frames split into jpg)
Piped ->
PIL Image object
I have the first few steps under control as a single command that writes .jpgs to disk as furiously fast as the hardware will allow.
That command looks something like this:
command = [
    "ffmpeg",
    "-f video4linux2",
    "-r 30",
    "-video_size 1280x720",
    "-pixel_format 'uyvy422'",
    "-i /dev/video0",
    "-vf fps=30",
    "-f H264",
    "-vcodec libx264",
    "-preset ultrafast",
    "pipe:1 -",
    "|",  # pipe to GStreamer
    "gst-launch-1.0 fdsrc !",
    "video/x-h264,framerate=30/1,stream-format=byte-stream !",
    "decodebin ! videorate ! video/x-raw,framerate=30/1 !",
    "videoconvert !",
    "jpegenc quality=55 !",
    "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg"
]
This will successfully write frames to disk if run with popen or os.system.
But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read the frames, as they are written, in a file-like buffer that can then be read by PIL.
Something like this:
import subprocess as sp
import shlex
import StringIO

clean_cmd = shlex.split(" ".join(command))
pipe = sp.Popen(clean_cmd, stdout=sp.PIPE, bufsize=10**8)

while pipe:
    raw = pipe.stdout.read()
    buff = StringIO.StringIO()
    buff.write(raw)
    buff.seek(0)
    # Open or do something clever...
    im = Image.open(buff)
    im.show()
    pipe.flush()
This code doesn't work - I'm not even sure I can use "while pipe" in this way. I'm fairly new to using buffers and piping in this way.
I'm not sure how I would know that an image has been written to the pipe or when to read the 'next' image.
Any help would be greatly appreciated in understanding how to read the images from a pipe rather than to disk.
This is ultimately a Raspberry Pi 3 pipeline, and in order to increase my frame rates I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method, as opposed to running H.264 video directly from the camera chip.
I assume the ultimate goal is to handle a USB camera at a high frame rate on Linux, and the following addresses this question.
First, while a few USB cameras support H.264, the Linux driver for USB cameras (UVC driver) currently does not support stream-based payloads, which includes H.264, see "UVC Feature" table on the driver home page. User space tools like ffmpeg use the driver, so have the same limitations regarding what video format is used for the USB transfer.
The good news is that if a camera supports H.264, it almost certainly supports MJPEG, which is supported by the UVC driver and compresses well enough to support 1280x720 at 30 fps over USB 2.0. You can list the video formats supported by your camera using v4l2-ctl -d 0 --list-formats-ext. For a Microsoft Lifecam Cinema, e.g., 1280x720 is supported at only 10 fps for YUV 4:2:2 but at 30 fps for MJPEG.
For reading from the camera, I have good experience with OpenCV. In one of my projects, I have 24(!) Lifecams connected to a single Ubuntu 6-core i7 machine, which does real-time tracking of fruit flies using 320x240 at 7.5 fps per camera (and also saves an MJPEG AVI for each camera to have a record of the experiment). Since OpenCV directly uses the V4L2 APIs, it should be faster than a solution using ffmpeg, gst-streamer, and two pipes.
Bare bones (no error checking) code to read from the camera using OpenCV and create PIL images looks like this:
import cv2
from PIL import Image

cap = cv2.VideoCapture(0)  # /dev/video0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    ...  # do something with the PIL image
Final note: you likely need to build the v4l version of OpenCV to get compression (MJPEG), see this answer.
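If your OpenCV build does support it, you can request MJPEG (and the resolution and frame rate) before reading; whether the request actually takes effect depends on the camera and the build, so treat this as a sketch:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)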
Problem Outline
I have an h264 real-time video stream (I'll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)
Things I've tried, and their failure modes:
Send the stream directly from one Process1 to Process2 over a named fifo pipe:
I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, and (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).
I had hoped to use OpenCV's VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file, and then passing that as the file path. This doesn't help me, because I need to do my analysis in real time (i.e. I can't save the stream to a file before reading it in to OpenCV).
Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:
I attempted this using the command:
cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo
The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.
Additionally, on the Process2 side, I still don't understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.
If I were to know that I was getting the correct size chunk of data, I would still not know how to transform it into an OpenCV-compatible array because I don't understand the format it's coming over in. It's a string, but when I print it out it looks like this:
��_M~0A0����tQ,\%��e���f/�H�#Y�p�f#�Kus�} F����ʳa�G������+$x�%V�� }[����Wo �1'̶A���c����*�&=Z^�o'��Ͽ� SX-ԁ涶V&H|��$
~��<�E�� ��>�����u���7�����cR� �f�=�9 ��fs�q�ڄߧ�9v�]�Ӷ���& gr]�n�IRܜ�檯����
� ����+ �I��w�}� ��9�o��� �w��M�m���IJ ��� �m�=�Soՙ}S �>j �,�ƙ�'���tad =i ��WY�FeC֓z �2�g�;EXX��S��Ҁ*, ���w� _|�&�y��H��=��)� ���Ɗ3# �h���Ѻ�Ɋ��ZzR`��)�y�� c�ڋ.��v�!u���� �S�I#�$9R�Ԯ0py z ��8 #��A�q�� �͕� ijc �bp=��۹ c SqH
Converting from base64 doesn't seem to help. I also tried:
array = np.fromstring(data, dtype=np.uint8)
which does convert to an array, but not one of a size that makes sense based on the 640x368x3 dimensions of the frames I'm trying to decode.
Using decoders such as Broadway.js to convert the h264 stream:
These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
Clarification about what I'm NOT trying to do:
I've found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.
Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.
System Specs
Raspberry Pi 3 running Raspbian Jessie
Additional Detail
I've tried to generalize the problem I'm having in my question. If it's useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I've entered that specific problem as an Issue in the node-bebop repository.
Thanks for reading; I really appreciate any help anyone can give!
I was able to solve opening a Parrot Anafi stream with OpenCV (built with FFMPEG) in Python by setting the following environment variable:
export OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;udp"
FFMPEG defaults to TCP transport, but the feed from the drone is UDP so this sets the correct mode for FFMPEG.
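If you prefer not to rely on the shell environment, you should be able to set the same variable from Python before the capture is created (a sketch, untested):

import os
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"  # set before opening the capture

import cv2
# ... then open cv2.VideoCapture(<stream URI>, cv2.CAP_FFMPEG) exactly as below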
Then use:
import cv2

cap = cv2.VideoCapture(<stream URI>, cv2.CAP_FFMPEG)
ret, frame = cap.read()
while ret:
    cv2.imshow('frame', frame)
    # do other processing on frame...
    ret, frame = cap.read()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
as usual.
This should also work with a Parrot Bebop, but I don't have one to test it.
There are some suggestions online for piping the h264 stream into the opencv program using standard in:
some-h264-stream | ./opencv-program
where opencv-program contains something like:
VideoCapture cap("/dev/stdin");
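A hedged Python equivalent of the same idea, assuming the decoder's output is piped to the script's standard input (the shell command is a placeholder as above):

# some-h264-stream | python opencv_program.py
import cv2

cap = cv2.VideoCapture("/dev/stdin")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... analyze the frame here
cap.release()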