I have a project that involves video processing. I have many clients that send live video and one service that receives and processes the videos. Each client is a Raspberry Pi that captures video and sends it to the service. The service is a Python module that receives the video from all clients and processes it. Suppose a client sends video at 24 FPS and the service can only process 8 FPS: the service would then need to keep the newest frame and drop the others, in this case keeping 1 frame out of every 3 and dropping the other 2. Audio is not required.
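For what it's worth, that keep-the-newest-frame behaviour can be sketched with a one-slot queue, independent of whatever transport library is used (on_frame, process_loop, and process are hypothetical names for illustration):

import queue

latest = queue.Queue(maxsize=1)  # one-slot buffer: holds only the newest frame

def on_frame(frame):
    # Called by the receiver at ~24 FPS: discard the stale frame, keep the new one
    try:
        latest.get_nowait()
    except queue.Empty:
        pass
    latest.put(frame)

def process_loop():
    # Runs at ~8 FPS and always sees the newest available frame
    while True:
        frame = latest.get()
        process(frame)  # placeholder for the actual processing step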
So, what I want to know is whether there is a Python library for transmitting live video streams over UDP.
Thanks!
The question is not crystal clear, but I deduce the following:
You have several RTSP cameras sending, say, H.264 video.
On your Raspberry Pi you have processes that receive and decode the video. In addition to that, you want to do some image analysis, etc. on the decoded (YUV or RGB) bitmaps.
Your Raspberry Pi can't keep up with the pace, so the processes analyzing the video will miss some frames.
Right?
There's a Python library that can stream video from several sources, decode the video, and distribute it among Python multiprocesses. Check out the tutorial at
http://www.valkka.fi
The learning curve might be a bit steep, but if you follow the tutorial from lesson 1, you'll be fine.
(disclaimer: I did that)
Related
To keep it simple, I'm trying to copy the functionality of this project to use in my own: https://github.com/deepch/RTSPtoWebRTC
Using Go, this program takes an RTSP stream and redirects the raw stream data to a port,
then it generates an SDP packet that a WebRTC client consumes through POST and grabs the video. This results in instant streaming, with no noticeable latency.
Trying to implement this in Python has proven to be very difficult. So far, the only way I can see to interact with RTSP is to use OpenCV, and I can't figure out a way to get the raw stream data from it.
I could use some tips on libraries that do exactly that, taking an RTSP stream and preparing it for WebRTC transport; really, anything helps.
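One option worth knowing about (not mentioned in the thread, so treat this as an assumption about your needs): the aiortc library implements WebRTC in Python, and its MediaPlayer helper wraps FFmpeg, so it can open an RTSP URL directly. A rough sketch of the offer side, with a placeholder URL and the browser-side SDP exchange left out:

from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def make_offer():
    pc = RTCPeerConnection()
    player = MediaPlayer('rtsp://camera-address/stream')  # placeholder URL
    if player.video:
        pc.addTrack(player.video)  # forward the RTSP video track into WebRTC
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    return pc  # pc.localDescription.sdp is the offer SDP to hand to the client

# Usage (e.g. inside an async web handler): pc = await make_offer()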
I am trying to show a live webcam video stream on a webpage, and I have a working draft. However, I am not satisfied with the performance and am looking for a better way to do the job.
I have a webcam connected to a Raspberry Pi and a web server, which is a simple Python Flask server. Webcam images are captured using OpenCV and encoded as JPEG. Those JPEGs are then sent to one of the server's UDP ports. What I have done up to this point is something like homemade MJPEG (Motion JPEG) streaming.
On the server side, I have a simple Python script that continuously reads the UDP port and puts the JPEG image into an HTML5 canvas. That is fast enough to create the perception of a live stream.
Problems:
This barely compresses the video. Actually, it does not compress the video at all; it only reduces the size of each frame by encoding it as JPEG, so there is no inter-frame compression.
The FPS is low, and the quality of the stream is not that good either.
It is not a major point for now, but UDP is not a secure way to stream video.
The server is busy picking images from UDP; it needs a threaded server design.
Alternatives:
I have used FFmpeg before to convert video formats and also to stream pre-recorded video. I guess it is possible to encode (say, H.264) and stream live webcam video using ffmpeg or avconv. (Encoding)
Is this applicable on a Raspberry Pi?
VLC is able to play live videos streamed over the network. (Stream)
Is there any media player to embed in HTML/JavaScript that handles a network stream the way VLC does?
I have read about HLS (HTTP Live Streaming) and MPEG-DASH.
Do these apply to this case? If they do, how should I use them?
Is there any other way to show a live stream on a webpage?
I have read that RTSP is a secure protocol.
What is the best practice for a transport-layer protocol in video streaming?
I will try to answer as many of your listed "Problems" as I can:
FPS is low and also quality of the stream is not that good.
Since you are using Flask, you can use the immensely powerful url_for function to display your streamed frames directly in an img tag. Here is how you can achieve this:
On the webpage, add <img src="{{ url_for('video_feed') }}"> where you want your feed to be streamed.
At the backend, do the following:
from flask import Flask, Response

app = Flask(__name__)

def gen(camera):
    # Yield JPEG frames forever as parts of a multipart HTTP response
    while True:
        frame = camera.read()  # camera is your capture object; read() must return JPEG bytes
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(camera), mimetype='multipart/x-mixed-replace; boundary=frame')
Now, the img tag will display the image served at the video_feed endpoint, which is of type multipart; hence, it will keep asking for newer parts (newer frames, in your case).
I have found this approach to be quite fast!
Credits: Miguel Grinberg's blog post
It is not a major point for now but UDP is not a secure way to stream video.
Most video streaming devices don’t encrypt the video they stream, because doing so is computationally expensive. So while you might connect to an embedded web server on the device via HTTPS, and you probably have to log into the device to control it, all that security is limited to the “control plane.” The video itself will almost certainly be transmitted unencrypted. If it complies with an open standard, it is probably being sent over RTP/RTSP or over HTTP Live Streaming (HLS).
Source: Thoughts on Streaming Videos Securely
Server is busy with image picking from UDP. Needs threaded server design.
Using the above approach, I was able to interact with the server even while streaming the video. With the Flask development web server, you can pass threaded=True to your app.run() call, or use --with-threads if you use the Flask CLI.
You can also use Gunicorn in combination with gevent or eventlet to have further control over your threads.
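For example, a minimal sketch of enabling threading on the development server (assuming app is the Flask app from the snippet above; the host and port are arbitrary):

if __name__ == '__main__':
    # threaded=True lets Flask serve other requests while a stream is open
    app.run(host='0.0.0.0', port=5000, threaded=True)

In production you might instead run something like gunicorn -k gevent -w 1 app:app.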
You could use FFmpeg to encode the video stream to H.264 and mux it into an MP4 container, which can then be used directly in an HTML5 video element.
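As a rough sketch of such an FFmpeg invocation driven from Python (the input URL, encoder settings, and output path are placeholders, not anything prescribed above):

import subprocess

# Re-encode a source to H.264 in a fragmented MP4, which an HTML5 <video>
# element can play progressively (paths/URLs below are placeholders)
subprocess.run([
    'ffmpeg',
    '-i', 'rtsp://camera-address/stream',     # placeholder input
    '-c:v', 'libx264',                        # H.264 video encoding
    '-preset', 'veryfast',
    '-movflags', 'frag_keyframe+empty_moov',  # fragmented MP4, playable while growing
    '-f', 'mp4',
    'output.mp4',
])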
The objective: download the videos being streamed from any website (not just YouTube).
To do that, Python could monitor the network traffic, isolate the video stream, and then write that stream to a file.
I have two questions:
Is this doable in Python?
How to isolate (identify) the packets that belong to the video stream?
The objective is to download videos being streamed from any website.
OK, so the first thing is that there are many different ways video is streamed over the internet. Some sites use HTTP Live Streaming, some use RTMP, multicast UDP, etc., so your application is going to need to be versatile in handling different streaming protocols.
Python could monitor the network traffic. Perhaps tcpdump could be called from Python, so you could listen on the specific interface through which the video traffic is flowing.
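A hypothetical sketch of that step (the interface name and capture filter are assumptions, and tcpdump usually requires root privileges):

import subprocess

# Capture packets on the interface carrying the video traffic and write
# them to a pcap file for offline parsing (interface and filter are guesses)
subprocess.run([
    'tcpdump',
    '-i', 'eth0',           # interface to listen on (assumption)
    '-w', 'capture.pcap',   # write raw packets for later analysis
    'tcp port 80 or tcp port 443 or udp',  # hypothetical traffic filter
])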
Then, after you save the capture, parse the capture file for specific types of packets. You will need to research how to reassemble the packets from the capture to recreate the video stream.
I would pick one website and see if you can automate downloading its video; once that is working, pick other sites that use different streaming protocols and add that functionality to your application one at a time.
I am doing a project on media (more like video conferencing).
The problem is, although I am able to send text/string data from one peer to another, I am still not sure about video files.
Using GStreamer, I am able to capture a video stream from my webcam and, after encoding it (H.264), write the video stream into an actual MP4 container directly using a file sink.
Now my problem is that I am not sure how to read the video files, since they contain both audio and video streams, and how to convert them into a transport stream so I can transmit them as packets
(I am able to send a very small JPEG file, though).
I am using the socket module and implementing UDP.
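For reference, the capture-and-record pipeline described above looks roughly like this with GStreamer's Python bindings (a sketch; the device path and element choices are assumptions):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# webcam -> H.264 encode -> MP4 mux -> file sink, as described in the question
pipeline = Gst.parse_launch(
    'v4l2src device=/dev/video0 ! videoconvert ! '
    'x264enc tune=zerolatency ! mp4mux ! filesink location=out.mp4'
)
pipeline.set_state(Gst.State.PLAYING)
# (a real program would send an EOS event before stopping, so mp4mux can finalize the file)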
If you want to send a video (with audio) to a peer on the network, you would be better off using RTP (Real-time Transport Protocol), which works on top of UDP. RTP provides timestamps and payload profiles, which help you synchronize the audio and video sent through two ports.
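For illustration, the fixed 12-byte RTP header (RFC 3550) can be built by hand and prepended to each UDP payload; in practice a library (e.g. GStreamer's rtph264pay element) does this for you. A minimal sketch, with payload type 96 assumed as a dynamic type:

import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    # RFC 3550 fixed header: V=2, no padding, no extension, zero CSRCs
    byte0 = 0x80
    byte1 = (marker << 7) | payload_type
    return struct.pack('!BBHII', byte0, byte1, seq, timestamp, ssrc)

# Usage: sock.sendto(rtp_header(seq, ts, ssrc) + payload, (host, port))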
I am working on a project where I create an image with a Raspberry Pi and a bunch of sensors.
I need to encode that image so that it can be sent through an XBee Pro S2B and then decoded on a Windows/Ubuntu machine. I can't just send the raw sensor data to the Windows/Ubuntu computer; I need the image itself. I need this to be in Python if possible, but any language is okay.
Because of the low bandwidth of the XBee module, you should really design your solution to push raw data from the Raspberry Pi and have the receiving end build the image. If that doesn't fit your model due to some other design restriction, then take a look at the open-source python-xbee project.
You'll have to come up with a protocol for sending the data, since you just get binary streams between the two devices. Since the stream ensures data integrity, you just need to work out framing and ensure that frames arrive in the correct order.
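A minimal sketch of such a framing scheme (the chunk size and header layout are assumptions; check your module's maximum payload size):

import struct

CHUNK_SIZE = 64  # assumed payload budget per XBee frame

def make_frames(image_bytes):
    # Tag each chunk with (index, total) so the receiver can verify ordering
    # and knows when the whole image has arrived
    total = (len(image_bytes) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for i in range(total):
        chunk = image_bytes[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        yield struct.pack('!HH', i, total) + chunk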
The sending node can make use of the Transmit Status frame to trigger sending of the next frame. Transmit Status acknowledges that the remote node received the frame.
If you're open to using C, consider this ANSI C XBee Host Library on GitHub.