I need to record an RTSP video stream to a file in Python, and I want to be able to stop the recording at any time, depending on the situation. I used subprocess to drive the ffmpeg command line, but it sometimes breaks: the video keeps recording and I end up with a huge file.
Is there a good way to dump an RTSP video stream to a file, with the ability to stop the recording at any time after it has started?
Any help is much appreciated.
ADDED:
Preferably with an API similar to PiCamera, like so:
import picamera
from time import sleep
camera = picamera.PiCamera()
camera.start_recording('video.h264')
sleep(5)
camera.stop_recording()
The only difference is that the video source is an IP camera.
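One possible approach (a minimal sketch, not a tested solution): wrap ffmpeg in a small class that exposes start_recording/stop_recording, and stop it by writing 'q' to ffmpeg's stdin so the output file is finalized cleanly, falling back to terminate() if it does not exit. The RTSP URL, output name and timeout below are placeholder assumptions.

import subprocess
from time import sleep

class RtspRecorder:
    def __init__(self, rtsp_url):
        self.rtsp_url = rtsp_url
        self.proc = None

    def start_recording(self, output_path):
        # -c copy dumps the stream without re-encoding; ffmpeg listens for 'q' on stdin
        self.proc = subprocess.Popen(
            ['ffmpeg', '-y', '-i', self.rtsp_url, '-c', 'copy', output_path],
            stdin=subprocess.PIPE)

    def stop_recording(self):
        if self.proc is None:
            return
        try:
            self.proc.communicate(input=b'q', timeout=10)  # ask ffmpeg to quit cleanly
        except subprocess.TimeoutExpired:
            self.proc.terminate()  # fall back if ffmpeg does not exit in time
            self.proc.wait()
        self.proc = None

recorder = RtspRecorder('rtsp://192.168.1.10:554/stream')
recorder.start_recording('video.mp4')
sleep(5)
recorder.stop_recording()

Copying the stream (-c copy) avoids re-encoding, so the recording host only has to remux the data.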
I would like to stream the audio of a YouTube video in Python. youtube-dl lets me download a video (audio in my case), but that process can take some time. My objective is to stream the audio 'dynamically', as I would when watching a video on YouTube: start playing the audio while the rest of it is still downloading.
I know that the youtube-dl command line program can stream videos to media players such as VLC, for example:
youtube-dl -o - -- "[videoID]" | vlc -
I could create a subprocess and execute that command, but I would prefer a cleaner way, if possible.
I would expect to have some sort of data that I can transmit to an audio device later on. I don't need to store the audio in a file, but it's not a big deal if there is a temporary file.
This is unfortunately not possible: youtube-dl does not expose an API that makes this straightforward. Internally it opens the output file (or stdout) for writing itself, and it is not written to allow easy switching of the output stream.
It's probably easier to just run it as a subprocess and pipe its output if you really want this functionality.
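A rough sketch of that subprocess approach (the video ID is a placeholder, and the chunks still need to be decoded before they can reach an audio device): have youtube-dl write the audio to stdout and read it in chunks while the download is still running.

import subprocess

proc = subprocess.Popen(
    ['youtube-dl', '-f', 'bestaudio', '-o', '-', '--', 'VIDEO_ID'],  # placeholder ID
    stdout=subprocess.PIPE)

while True:
    chunk = proc.stdout.read(4096)  # returns b'' once the download is finished
    if not chunk:
        break
    # hand each chunk to your audio decoder/player here while youtube-dl keeps downloading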
I am very new to the cv2 library, and since I recently discovered that it cannot play audio, I wanted to know whether there is another library that will let me play audio in sync with the video (preferably without needing a separate audio file, just using the video's own audio).
It turns out there is a very simple method, which I found after some more research; unfortunately, it requires a separate audio file that matches the video:
First, import simpleaudio as sa (install it with pip if necessary), then add this piece of code to your cv2 video player, before the while loop but after the line that defines the video file:
wave_obj = sa.WaveObject.from_wave_file("AudioFile.wav")
wave_obj.play()
Then you must manually adjust the waitKey value until the audio matches the video; otherwise playback may be too fast or too slow. A value close to 25 usually works, and if the audio ends abruptly, try increasing the value by one until it matches.
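For context, here is a minimal sketch of how that snippet fits into a basic cv2 playback loop (the file names are placeholders, and 25 is the waitKey delay you would tune):

import cv2
import simpleaudio as sa

cap = cv2.VideoCapture('video.mp4')  # the line that defines the video file
wave_obj = sa.WaveObject.from_wave_file('AudioFile.wav')  # matching audio track
wave_obj.play()  # start the audio just before the frame loop

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('player', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):  # adjust 25 until audio and video line up
        break

cap.release()
cv2.destroyAllWindows()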
I want to process the audio data from a sound card (i.e. a USB external sound card). For example, I want to record all the sound being played, such as a YouTube clip running alongside a PC game. However, PyAudio only lets me record audio from the microphone. Can someone suggest a library to try? Thanks so much!
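One library worth looking at is soundcard, which exposes loopback devices (WASAPI on Windows, PulseAudio monitors on Linux) that capture whatever an output device is playing. A rough sketch, assuming a recent version of that library:

import soundcard as sc

# view the default speaker as a loopback "microphone" that records what it plays
speaker = sc.default_speaker()
loopback = sc.get_microphone(speaker.name, include_loopback=True)

# record about five seconds of system audio into a numpy array
data = loopback.record(samplerate=48000, numframes=48000 * 5)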
I'm quite new to video streaming and opencv in general.
I want to stream my computations from a Raspberry Pi 3 to another device over RTSP using H.264.
I tried writing to a pipe with popen, both with ffmpeg feeding an ffserver and with VLC creating RTSP servers to stream the content. Unfortunately I have a huge lag in the stream; the best I could get it down to was 3 seconds.
Is there any way to achieve this? I'm open to considering other technologies.
Thank you
RTMP is not the best way to achieve low latency (< 5 s).
I suggest you use FFmpeg with pure RTP to stream the video to an RTSP server, or use GStreamer with Gst-RTSP-server directly; both are open-source solutions in C.
Latency will also be affected by your encoder and the hardware it runs on.
This question has more information.
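As an illustration of the FFmpeg route (a sketch only: the resolution, frame rate and server address are assumptions, and it presumes an RTSP server such as mediamtx/rtsp-simple-server is already listening), you can pipe processed OpenCV frames into ffmpeg with low-latency x264 settings:

import subprocess
import cv2

width, height, fps = 640, 480, 30
cap = cv2.VideoCapture(0)  # or whatever source feeds your computations

ffmpeg = subprocess.Popen(
    ['ffmpeg',
     '-f', 'rawvideo', '-pix_fmt', 'bgr24',
     '-s', f'{width}x{height}', '-r', str(fps), '-i', '-',
     '-c:v', 'libx264', '-preset', 'ultrafast', '-tune', 'zerolatency',
     '-f', 'rtsp', 'rtsp://192.168.1.20:8554/live'],
    stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... your per-frame computations go here ...
    frame = cv2.resize(frame, (width, height))
    ffmpeg.stdin.write(frame.tobytes())

The ultrafast/zerolatency pair keeps the encoder's own delay down; the remaining latency then comes mostly from the network and the player's buffering.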
I would recommend using RTMP instead. Latency can be as low as a few hundred milliseconds.
Another thing to consider is that VLC and other clients introduce additional delay due to internal buffering in the player. Look for the option to disable buffering and you should be able to shave a couple of seconds off the video latency.
With ffplay you can try the following:
ffplay -fflags nobuffer rtmp://your.server.ip/path/to/stream -loglevel verbose
If you transmux to DASH or HLS, expect that to introduce even more latency into the stream.
I'm trying to have my Raspberry Pi capture short videos and save each one to a file named with the date and time. I have a short Python script that captures video, but it seems to overwrite the file with the latest recording instead of saving each capture to a separate file.
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    camera.start_recording('/home/pi/Desktop/video.h264')
    time.sleep(60)
    camera.stop_recording()
    camera.stop_preview()
Of course it will.
Since you're always using the same path and file name, each recording overwrites whatever was in video.h264 before. You need to change the script to avoid this.
You could put the recording date and time in the file name; that will practically never be the same twice, and you can rename the files later if you want. The current date and time are available from the standard time module.
If you run into problems with that, we can help you.
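A minimal sketch of that idea, building the file name from the current date and time with time.strftime (the path and format are just examples):

import time
import picamera

with picamera.PiCamera() as camera:
    # e.g. /home/pi/Desktop/video-2021-06-01-123000.h264, different for every run
    filename = time.strftime('/home/pi/Desktop/video-%Y-%m-%d-%H%M%S.h264')
    camera.start_preview()
    camera.start_recording(filename)
    time.sleep(60)
    camera.stop_recording()
    camera.stop_preview()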