I'm trying to add HLS playback to a Python application. Currently, the most appropriate path seems to be LibVLC and its Python bindings, as it's the only multimedia library I've found for Python which can play MPEG transport streams out of the box (or at all). I'm open to other suggestions, though.
However, I also need my application to handle the fetching of the MPEG TS chunks from the HLS manifest itself, in order to set an appropriate user agent, manage proxy settings and store cookies between HTTP requests. Therefore, I have a thread downloading HLS chunks and adding them to a queue, which then feeds them into a BytesIO instance. I can easily enough save that instance to disk to emulate download functionality, but my question is, how can I feed the data from a BytesIO stream into LibVLC in order to play the stream in realtime?
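To illustrate, the download side currently looks roughly like this (simplified; chunk_urls, the user agent string and the use of requests are stand-ins for my real manifest handling):

import io
import queue
import threading

import requests  # placeholder HTTP client; anything with session/cookie support works

chunk_urls = []  # filled in from the parsed HLS manifest
chunk_queue = queue.Queue()

def fetch_chunks(session, urls):
    # Download each MPEG-TS chunk listed in the HLS manifest and queue its bytes.
    for chunk_url in urls:
        resp = session.get(chunk_url, timeout=10)
        resp.raise_for_status()
        chunk_queue.put(resp.content)
    chunk_queue.put(None)  # sentinel: no more chunks

session = requests.Session()
session.headers['User-Agent'] = 'MyPlayer/1.0'  # custom user agent
threading.Thread(target=fetch_chunks, args=(session, chunk_urls), daemon=True).start()

# Drain the queue into a BytesIO; the open question is how to hand this to LibVLC.
buffer = io.BytesIO()
while True:
    chunk = chunk_queue.get()
    if chunk is None:
        break
    buffer.write(chunk)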
I've tried using ctypes along with libvlc_media_new_callbacks (see my previous question here), but didn't get very far. I've also tried passing file descriptors of temporary files or pipes created using os.pipe, but VLC doesn't seem to be able to inherit and access these. If I save each chunk to its own temporary file and then queue them up in VLC, there is a gap in playback between each one. So I'm a bit stuck.
Any help would be very much appreciated!
Related
I need to read a really big JSONL file from a URL; the approach I am using is as follows:
import json
import urllib.request

bulk_status_info = _get_bulk_info(shop)
url = bulk_status_info.get('bulk_info').get('url')
file = urllib.request.urlopen(url)
# iterate over the HTTP response line by line
for line in file:
    print(json.loads(line.decode("utf-8")))
However, my CPU and memory are limited, so that brings me to two questions:
Is the file loaded all at once, or is there some buffering mechanism to prevent memory from overflowing?
In case my task fails, I want to restart from the place where it failed. Is there some sort of cursor I can save? Note that things like seek or tell do not work here, since it is not an actual file.
Some additional info: I am using Python 3 and urllib.
The file is not loaded into memory all at once: urllib reads it from the network packet by packet as you iterate, but this buffering is abstracted away. If you want closer access to how the data is read, the requests library makes it explicit.
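For example, with requests (an assumption here, not part of your original code) you can stream the response and decode one JSON line at a time, so only a small buffer is held in memory:

import json
import requests  # assumed to be available

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # skip keep-alive blank lines
            print(json.loads(line.decode("utf-8")))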
Generally there is no way to resume the downloading of a webpage, or any file request for that matter, unless the server specifically supports it. That would require the server to allow a start point to be specified; this is the case for video streaming protocols.
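If the server does support it (over HTTP that means honouring the Range request header), a rough sketch of resuming from a saved byte offset could look like this:

import json
import urllib.request

bytes_read = 0  # persist this value somewhere durable between runs

# Ask the server to start from the saved offset; if it ignores the Range
# header it will simply send the whole file again (status 200 instead of 206).
req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % bytes_read})
with urllib.request.urlopen(req) as resp:
    for line in resp:
        bytes_read += len(line)  # lines are bytes, including the newline
        print(json.loads(line.decode("utf-8")))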
I am working on a project that needs to extract audio from a stream transmitted as .ts (MPEG-2 Transport Stream) files.
Currently I first have to save each file to the file system, then open it with moviepy to convert it to WAV audio.
The stream is transmitted in real time, and there are multiple .ts files to process every second; moviepy is too slow to open and convert each of them in real time.
So I wonder if I can do the whole extraction of audio from the MPEG-TS in memory; avoiding file system IO may speed up the process. How can I do it?
You can possibly try the ffmpeg-python package, whose output function lets you request a .wav file (see the ffmpeg synopsis for the available output options: https://ffmpeg.org/ffmpeg.html#Synopsis). Most flags in the synopsis page are offered by the package; I haven't yet encountered one that is not.
ffmpeg-python bindings documentation
Example code:
import ffmpeg

# url points at the .ts source; save_location is the path for the WAV output
audio_input = ffmpeg.input(url)
audio_output = ffmpeg.output(audio_input.audio, save_location, format='wav')
audio_output.run()
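Since the question is about avoiding file system IO: ffmpeg-python can also write to stdout, so the WAV data can be captured in memory. A rough sketch (url again being the .ts source):

import ffmpeg

# Decode the TS input and capture the WAV output in memory instead of a file.
wav_bytes, _ = (
    ffmpeg
    .input(url)
    .output('pipe:', format='wav')
    .run(capture_stdout=True)
)
# wav_bytes now holds the complete WAV data, e.g. for io.BytesIO(wav_bytes).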
I would like to stream the audio of a YouTube video in Python. youtube-dl allows me to download a video (audio in my case), but that process can take some time. My objective is to be able to stream the audio 'dynamically', like I would by visiting a YouTube video in the browser: start playing the audio while the rest of it is still downloading.
I know that the youtube-dl command line program can stream videos to media players such as VLC, for example:
youtube-dl -o - -- "[videoID]" | vlc -
I could create a subprocess and execute that command, but I would prefer a cleaner way, if possible.
I would expect to have some sort of data that I can transmit to an audio device later on. I don't need to store the audio in a file, but it's not a big deal if there is a temporary file.
This is unfortunately not possible directly: youtube-dl does not expose an API that makes this straightforward. It opens the output file (or stdout) for writing internally, and it isn't written to allow for easy switching of the output stream.
It's probably easier to just subprocess it and pipe its output if you really want this functionality.
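A rough sketch of that approach, assuming video_id holds the YouTube video ID; here the audio data is read back into Python in chunks so it can be fed to a player or decoder as it arrives:

import subprocess

video_id = 'VIDEO_ID'  # placeholder

# Let youtube-dl write the best audio stream to stdout instead of a file.
ytdl = subprocess.Popen(
    ['youtube-dl', '-f', 'bestaudio', '-o', '-', '--', video_id],
    stdout=subprocess.PIPE,
)

while True:
    chunk = ytdl.stdout.read(64 * 1024)  # read 64 KiB at a time as it downloads
    if not chunk:
        break
    # feed `chunk` to an audio decoder / playback device here

ytdl.wait()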
In VLC I can do "Media -> Open Network Stream" and under "Please enter a network URL" I can put udp://#239.129.8.16:1234.
This opens a local UDP video stream.
VLC is not related to my question, I have put it just for clarification.
How can I connect to the "udp://#239.129.8.16:1234" network stream in Python, grab an image from it (a screenshot) and save it to a file?
I think neither network programming nor Python is the focus of your question here. At the core, you need to feed a video decoder with the binary data stream, make the video decoder collect a sufficient amount of data for decoding a single frame, let the decoder save this, and abort the operation.
I am quite sure that the command line tool avconv from the libav project can do everything you need. All you need to do is dig into the rather complex documentation and find the right command line parameters for your application scenario. From a quick glance, it looks like you will need, for instance:
‘-vframes number (output)’
Set the number of video frames to record. This is an alias for -frames:v.
Also, you should definitely search the docs for this sentence:
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option
It also looks like avconv can directly read from a UDP stream.
Note that VLC, the example you gave, uses libav at the core. Also note that if you really need to execute all this "from Python", then spawn avconv from within Python using the subprocess module.
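A minimal sketch of that (the multicast address is the one from your question; avconv and ffmpeg accept the same flags here, and the output filename is arbitrary):

import subprocess

# Grab a single decoded frame from the multicast stream and save it as an image.
subprocess.check_call([
    'avconv',                           # or 'ffmpeg'
    '-i', 'udp://@239.129.8.16:1234',   # avconv/ffmpeg use @ for multicast addresses
    '-vframes', '1',                    # stop after one video frame, as described above
    'screenshot.jpg',
])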
I'm searching for a library / module that can transcode an MP3 (other formats are a plus) to OGG, on the fly.
What I need this for: I'm writing a relatively small web app, for personal use, that will allow people to listen their music via a browser. For the listening part, I intend to use the new and mighty <audio> tag. However, few browsers support MP3 in there. Live transcoding seems like the best option because it doesn't waste disk space (like if I were to convert the entire music library) and I will not have performance issues since there will be at most 2-3 listeners at the same time.
Basically, I need to feed it an MP3 (or whatever else) and then get a file-like object back that I can pass back to my framework (flask, by the way) to feed to the client.
Stuff I've looked at:
gstreamer -- seems overkill, although has good support for a lot of formats; documentation lacks horribly
timeside -- looks nice and simple to use, but again it has a lot of stuff I don't need (graphing, analyzing, UI...)
PyMedia -- last updated: 01 Feb 2006...
Suggestions?
You know, there's no shame in using subprocess to call external utilities. For example, you could construct pipes like:
#!/usr/bin/env python
import subprocess

# Decode the MP3 to WAV on stdout with mpg123...
frommp3 = subprocess.Popen(['mpg123', '-w', '-', '/tmp/test.mp3'], stdout=subprocess.PIPE)
# ...and pipe that WAV stream into oggenc, which writes Ogg Vorbis to its stdout.
toogg = subprocess.Popen(['oggenc', '-'], stdin=frommp3.stdout, stdout=subprocess.PIPE)
frommp3.stdout.close()  # so mpg123 gets SIGPIPE if oggenc exits early

with open('/tmp/test.ogg', 'wb') as outfile:
    while True:
        data = toogg.stdout.read(1024 * 100)
        if not data:
            break
        outfile.write(data)
In fact, that's probably your best approach anyway. Consider that on a multi-CPU system, the MP3 decoder and OGG encoder will run in separate processes and will probably be scheduled on separate cores. If you tried to do the same with a single-threaded library, you could only transcode as fast as a single core could handle it.
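To tie this back to the Flask part of the question: instead of writing to a file, the encoder's stdout can be streamed straight to the client. A rough sketch (the route, the input path and the MIME type are illustrative):

import subprocess
from flask import Flask, Response

app = Flask(__name__)

@app.route('/stream.ogg')
def stream_ogg():
    # Same mpg123 -> oggenc pipeline as above, but the output is yielded to the client.
    frommp3 = subprocess.Popen(['mpg123', '-w', '-', '/tmp/test.mp3'],
                               stdout=subprocess.PIPE)
    toogg = subprocess.Popen(['oggenc', '-'], stdin=frommp3.stdout,
                             stdout=subprocess.PIPE)
    frommp3.stdout.close()

    def generate():
        while True:
            data = toogg.stdout.read(1024 * 100)
            if not data:
                break
            yield data

    return Response(generate(), mimetype='audio/ogg')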