How to implement a livestream with playback functionality? - python

I am trying to implement a video player / live stream on a web page that lets me play back from a timestamp in the same player. What I have now is a simple Flask app that consumes image data from my IP camera's RTSP stream and streams it to an endpoint, which I then use as the src of an <img> tag to display it.
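For reference, the streaming endpoint described above typically emits a multipart/x-mixed-replace ("MJPEG") response. A minimal sketch of that framing, assuming the frames are already JPEG-encoded bytes (e.g. pulled from the RTSP stream with OpenCV) and using "frame" as a placeholder boundary name:

```python
# Sketch of MJPEG (multipart/x-mixed-replace) framing for a stream
# endpoint; frame_source is any iterable of JPEG-encoded bytes.
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    # One multipart body part: boundary, headers, payload, trailing CRLF.
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

def mjpeg_stream(frame_source):
    # A generator like this can be wrapped in a Flask Response with
    # mimetype "multipart/x-mixed-replace; boundary=frame".
    for jpeg_bytes in frame_source:
        yield mjpeg_part(jpeg_bytes)
```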
For the playback, I'm guessing that I will need some video player that constantly reads from a video file. However, I am unsure if that can be done while new frames are written into the same video file.
What will I need to implement such a player that streams and offers playback?

Related

Is there any way to fluently receive audio and send it to the backend

I want to create a web application (Flask: a flashcard AI), part of which is a bot that needs to interact with the user directly through speech recognition and text-to-speech. I have pyttsx3 and speech_recognition installed for that. Where I'm confused is how I'm supposed to get the user's audio as input and then send it to the backend. I've tried looking up YouTube tutorials and asking other people about this; the only success I've had is learning about navigator.mediaDevices.getUserMedia. I want to make the communication fluent, and I'll have to send the data to the back-end as well. I'm not sure how to send it to the back-end and capture the user media fluently. I could use navigator.mediaDevices.getUserMedia and convert the result into an audio file (not sure how to do that yet, but I think I'll figure it out eventually, and having the user upload an audio recording wouldn't be nice at all), but then that would take up a lot of space in the database.
If you just want to trigger some action based on voice, you can use the Web Speech API.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API
This API will give you text-based transcripts, which you can easily store in the database.
If you need to store audio on the server side, you would convert it to some lossy format like mp3 or aac to save space.
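One common way to do that conversion server-side is to shell out to the ffmpeg CLI. A minimal sketch, assuming ffmpeg is installed on the server; the file paths and bitrate are placeholders:

```python
# Sketch of server-side conversion to a lossy format via the ffmpeg CLI.
import subprocess

def to_mp3_command(src_path, dst_path, bitrate="128k"):
    # -y overwrites the output file if it already exists;
    # -b:a sets the audio bitrate of the encoded output.
    return ["ffmpeg", "-y", "-i", src_path, "-b:a", bitrate, dst_path]

def convert(src_path, dst_path):
    subprocess.run(to_mp3_command(src_path, dst_path), check=True)
```

The same pattern works for aac output; ffmpeg picks the encoder from the destination file extension.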

Images to HTTP Live Streaming

I have a series of video frames as images that have been processed using some algorithm. I now want to stream these in a format that Google Glass can read (instructions and formats).
How would I go about encoding these images into a video and serving them as an HTTP Live Streaming video URL in Python?
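One common approach is to hand the frame images to ffmpeg's HLS muxer and then serve the resulting playlist and segments as static files. A sketch of building that command from Python, assuming ffmpeg with libx264 is installed; the frame filename pattern, frame rate, and output path are placeholders:

```python
# Sketch: encode numbered frame images into an HLS stream with ffmpeg.
import subprocess

def hls_command(pattern="frame_%04d.jpg", fps=25, out="stream/playlist.m3u8"):
    return ["ffmpeg",
            "-framerate", str(fps), "-i", pattern,   # image sequence input
            "-c:v", "libx264", "-pix_fmt", "yuv420p",  # broadly compatible H.264
            "-f", "hls",
            "-hls_time", "4",        # target segment length in seconds
            "-hls_list_size", "0",   # keep all segments in the playlist
            out]

def encode(pattern, fps, out):
    subprocess.run(hls_command(pattern, fps, out), check=True)
```

Any static file server (even `python -m http.server`) can then serve the .m3u8 playlist and .ts segments as the HTTP Live Streaming URL.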

Stream live videos

How can I stream a live movie or video so my clients can connect to a webpage and watch? I would prefer that it's in SWF form, so maybe I could load a file into the SWF. This isn't a webcam type of thing; for example, I might want to stream a video I made a while ago and allow all my users to watch it at once, without it restarting every time they refresh. Could this be done in Python, PHP, etc.? Or does it involve a separate program? I am using Ubuntu on my dedicated server. I need the audio to be live as well.
Thanks
"stream a live movie or video" and "stream a video I made a while ago" are contradictory, what do you really mean by "live"? Are you referring to a "broadcasted video" (i.e. may be prerecorded, but everyone will be viewing from the same stream) or "live video" (i.e. viewing events as it happens, being live implies broadcast).
There are services like Justin.TV or Ustream.TV that allow you to broadcast a video stream from live cameras, live screen capture, and/or prerecorded videos. They have their own (Flash-based or HTML5-based) web players that can handle live streams, and you will need to use their broadcasting software to manage the live stream. There is also YouTube Live if you are a YouTube partner.

Creating a python audio player using QWebView and the HTML5 Audio API

I am seriously new to Python and my first project is quite ambitious :D
I'm trying to create an audio player using a QWebView and the HTML5 Audio API.
I want to use Phonon to actually play the media, but I'd like to be able to use the HTML5 Audio API to make an equalizer, like the one in Winamp.
I can get Phonon to play an audio file no problem, but is there a way to connect the audio output to my JavaScript so that I can play around with the different channels etc.?
Is it even the best way? I mean, would doing it this way limit the formats available to my player to those supported by WebKit, or would I still be able to play any format Phonon is able to play? (I'm assuming here, that Phonon would stream a raw/decoded version of the audio to my JavaScript, which I could then use via the Audio API)
If this isn't possible I could make a simple JavaScript wrapper around a Phonon AudioOutput object I suppose?
Any thoughts?
I haven't worked with the Qt framework, but peeking at the QWebView docs, it seems there's no readily available solution for communicating with the window object.
If you want to work with a familiar protocol, I suggest you look at the Flask microframework. It's basically a small piece of opinionated code in which all the application behavior is provided by functions that receive HTTP request objects and return response objects. Here's the official streaming documentation so you can get an idea of what building a streaming response looks like.
It seems you've figured out how to generate the output, so you'd only need to run the built-in Flask server at runtime and transport the audio data to your JavaScript client over HTTP.

Writing a Python Music Streamer

I would like to implement a server in Python that streams music in MP3 format over HTTP. I would like it to broadcast the music such that a client can connect to the stream and start listening to whatever is currently playing, much like a radio station.
Previously, I've implemented my own HTTP server in Python using SocketServer.TCPServer (yes I know BaseHTTPServer exists, just wanted to write a mini HTTP stack myself), so how would a music streamer be different architecturally? What libraries would I need to look at on the network side and on the MP3 side?
The mp3 format was designed for streaming, which makes some things simpler than you might have expected. The data is essentially a stream of audio frames with built-in boundary markers, rather than a file header followed by raw data. This means that once a client is expecting to receive audio data, you can just start sending it bytes from any point in an existing mp3 source, whether it be live or a file, and the client will sync up to the next frame it finds and start playing audio. Yay!
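That frame-sync behavior is easy to see in code: each mp3 frame header starts with an 11-bit sync pattern (eleven set bits), so a decoder dropped into the middle of a stream just scans forward for it. A simplified sketch (real decoders validate more of the header than this):

```python
# Sketch: find the next mp3 frame boundary by scanning for the
# 11-bit sync pattern (0xFF followed by a byte whose top 3 bits are set).
def next_frame_offset(data, start=0):
    i = data.find(b"\xff", start)
    while i != -1 and i + 1 < len(data):
        if data[i + 1] & 0xE0 == 0xE0:  # upper 3 bits set -> 11 sync bits total
            return i
        i = data.find(b"\xff", i + 1)
    return -1  # no frame boundary found
```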
Of course, you'll have to give clients a way to set up the connection. The de-facto standard is the SHOUTcast (ICY) protocol. This is very much like HTTP, but with status and header fields just different enough that it isn't directly compatible with Python's built-in http server libraries. You might be able to get those libraries to do some of the work for you, but their documented interfaces won't be enough to get it done; you'll have to read their code to understand how to make them speak SHOUTcast.
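To make the "just different enough" concrete, here's a sketch of what an ICY response header looks like compared to HTTP: the status line says "ICY 200 OK" and the metadata lives in icy-* headers. The station name and bitrate below are placeholder values:

```python
# Sketch of a minimal ICY (SHOUTcast) response header.
def icy_response_header(name="My Station", bitrate=128, metaint=0):
    lines = [
        "ICY 200 OK",            # ICY status line instead of HTTP/1.x
        "icy-name:%s" % name,
        "icy-br:%d" % bitrate,   # bitrate in kbps
        "content-type:audio/mpeg",
    ]
    if metaint:
        # Interval (in bytes) between in-band metadata blocks,
        # sent only if the client asked for metadata.
        lines.append("icy-metaint:%d" % metaint)
    return ("\r\n".join(lines) + "\r\n\r\n").encode("latin-1")
```

After sending this header, the server just writes raw mp3 bytes on the same socket.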
Here are a few links to get you started:
https://web.archive.org/web/20220912105447/http://forums.winamp.com/showthread.php?threadid=70403
https://web.archive.org/web/20170714033851/https://www.radiotoolbox.com/community/forums/viewtopic.php?t=74
https://web.archive.org/web/20190214132820/http://www.smackfu.com/stuff/programming/shoutcast.html
http://en.wikipedia.org/wiki/Shoutcast
I suggest starting with a single mp3 file as your data source, getting the client-server connection setup and playback working, and then moving on to issues like live sources, multiple encoding bit rates, inband meta-data, and playlists.
Playlists are generally either .pls or .m3u files, and essentially just static text files pointing at the URL for your live stream. They're not difficult and not even strictly necessary, since many (most?) mp3 streaming clients will accept a live stream URL with no playlist at all.
As for architecture, the field is pretty much wide open. You have as many options as there are for HTTP servers. Threaded? Worker processes? Event driven? It's up to you. To me, the more interesting question is how to share the data from a single input stream (the broadcaster) with the network handlers serving multiple output streams (the players). In order to avoid IPC and synchronization complications, I would probably start with a single-threaded, event-driven design. In Python 2, a library like gevent will give you very good I/O performance while allowing you to structure your code in a very understandable way. In Python 3, I would prefer asyncio coroutines.
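The broadcaster/players fan-out can be sketched in asyncio with one queue per connected client; the class and queue size below are illustrative, not a fixed design:

```python
# Sketch: one input stream fanned out to many output streams.
import asyncio

class Broadcaster:
    def __init__(self):
        self.clients = set()  # one asyncio.Queue per connected player

    def subscribe(self):
        q = asyncio.Queue(maxsize=32)
        self.clients.add(q)
        return q

    def unsubscribe(self, q):
        self.clients.discard(q)

    def feed(self, chunk):
        # Called by the input-stream reader for each mp3 chunk.
        for q in list(self.clients):
            try:
                q.put_nowait(chunk)
            except asyncio.QueueFull:
                pass  # slow client: drop the chunk (or disconnect it)
```

Each connection handler would then loop on `await queue.get()` and write the chunk to its socket; the QueueFull branch is where you decide your policy for clients that can't keep up.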
Since you already have good python experience (given you've already written an HTTP server) I can only provide a few pointers on how to extend the ground-work you've already done:
Prepare your server to deal with request headers like Accept-Encoding, Range, TE (Transfer Encoding), etc. An MP3-over-HTTP player (e.g. VLC) is nothing but an mp3 player that knows how to "speak" HTTP and "seek" to different positions in the file.
Use wireshark or tcpdump to sniff the actual HTTP requests VLC makes when playing an mp3 over HTTP, so you know what request headers you'll be receiving and can implement them.
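The Range header is the one doing the heavy lifting for seeking. A sketch of parsing "Range: bytes=..." into an inclusive (start, end) byte span, covering the common forms you'll see from players (this simplified version handles a single range only, not multipart ranges):

```python
# Sketch: parse a "Range: bytes=start-end" header for mp3 seeking.
def parse_range(header, file_size):
    """Return an inclusive (start, end) pair, or None if absent/invalid."""
    if not header or not header.startswith("bytes="):
        return None
    spec = header[len("bytes="):].split("-", 1)
    try:
        start = int(spec[0]) if spec[0] else None
        end = int(spec[1]) if len(spec) > 1 and spec[1] else None
    except ValueError:
        return None
    if start is None:            # suffix form "bytes=-N": last N bytes
        if end is None:
            return None
        return (max(file_size - end, 0), file_size - 1)
    if end is None or end >= file_size:
        end = file_size - 1      # open-ended "bytes=N-": to end of file
    return (start, end) if start <= end else None
```

On a valid range, reply 206 Partial Content with a matching Content-Range header and send only those bytes.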
Good luck with your project!
You'll want to look into serving m3u or pls files. That should give you a file format that players understand well enough to hit your http server looking for mp3 files.
A minimal m3u file would just be a simple text file with one song url per line. Assuming you've got the following URLs available on your server:
/playlists/<playlist_name/playlist_id>
/songs/<song_name/song_id>
You'd serve a playlist from the url:
/playlists/myfirstplaylist
And the contents of the resource would be just:
/songs/1
/songs/mysong.mp3
A player (like Winamp) will be able to open the URL to the m3u file on your HTTP server and will then start streaming the first song on the playlist. All you'll have to do to support this is serve the mp3 file just like you'd serve any other static content.
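The static-serving side really can be that small. A stdlib-only sketch, assuming the .m3u playlist and mp3 files sit in a directory (the directory name and port are placeholders):

```python
# Sketch: serve the m3u playlist and mp3s as plain static content.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files out of ./media (placeholder directory).
handler = partial(SimpleHTTPRequestHandler, directory="media")

def make_server(port=8000):
    return HTTPServer(("", port), handler)

# make_server().serve_forever()
# Then point Winamp at http://yourhost:8000/myfirstplaylist.m3u
```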
Depending on how many clients you want to support you may want to look into asynchronous IO using a library like Twisted to support tons of simultaneous streams.
Study these before getting too far:
http://wiki.python.org/moin/PythonInMusic
Specifically
http://edna.sourceforge.net/
You'll want to have a .m3u or .pls file that points at a static URI (e.g. http://example.com/now_playing.mp3), then give clients mp3 data starting from wherever you are in the song when they ask for that file. There are probably a bunch of minor issues I'm glossing over here. However, as forest points out, you can just start streaming the mp3 data from any byte.
