I'm writing an application that uses the Python Gstreamer bindings to play audio, but I'm now trying to also just decode audio -- that is, I'd like to read data using a decodebin and receive a raw PCM buffer. Specifically, I want to read chunks of the file incrementally rather than reading the whole file into memory.
Some specific questions: How can I accomplish this with Gstreamer? With pygst specifically? Is there a particular "sink" element I need to use to read data from the stream? Is there a preferred way to read data from a pygst Buffer object? How do I go about controlling the rate at which I consume data (rather than just entering a "main loop")?
To get the data back in your application, the recommended way is appsink.
Start from a simple audio player like this one, replace the oggdemux/vorbisdec with decodebin plus a capsfilter with caps = "audio/x-raw-int", and change autoaudiosink to appsink. Set "emit-signals" to True and connect the "new-buffer" signal to a Python function; that function will receive decoded chunks of PCM/int data. The rate of decoding depends on how fast you consume: since "new-buffer" is emitted in a GStreamer thread context, you can simply sleep or block in the handler to slow the decoding down.
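A rough sketch of that setup. Note this uses the GStreamer 1.0 / PyGObject API, where the signal is "new-sample" rather than 0.10's "new-buffer" and the raw caps are "audio/x-raw,format=S16LE"; the filename is hypothetical:

```python
# Sketch: decode a file to raw PCM chunks with decodebin + appsink.
# Assumes GStreamer 1.0 and PyGObject are installed.
PIPELINE = (
    'filesrc location={path} ! decodebin ! audioconvert ! '
    'capsfilter caps="audio/x-raw,format=S16LE" ! '
    'appsink name=sink emit-signals=true max-buffers=10'
)

def build_pipeline_desc(path):
    """Return the gst-launch-style pipeline description for a file."""
    return PIPELINE.format(path=path)

def main(path):
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst
    Gst.init(None)

    pipeline = Gst.parse_launch(build_pipeline_desc(path))
    sink = pipeline.get_by_name('sink')

    def on_new_sample(appsink):
        # Runs in a GStreamer thread; blocking here slows decoding down.
        sample = appsink.emit('pull-sample')
        buf = sample.get_buffer()
        ok, info = buf.map(Gst.MapFlags.READ)
        if ok:
            pcm_bytes = info.data   # raw S16LE PCM for this chunk
            buf.unmap(info)
            # process pcm_bytes here
        return Gst.FlowReturn.OK

    sink.connect('new-sample', on_new_sample)
    pipeline.set_state(Gst.State.PLAYING)
```

With max-buffers set, appsink also applies backpressure upstream if you consume slowly, so the decoder cannot run arbitrarily far ahead of you.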
Related
I'd like to ask for some advice about real-time audio data processing.
At the moment, I have a simple server and client using Python sockets, which send and receive audio data from a microphone until I stop them (4096 bytes per packet, but it could be much more).
I saw two kinds of different analysis:
realtime: perform analysis on each X bytes packet and send back result in response
after receiving a lot of bytes (for example, every hour), append these bytes and store them in a DB. When the microphone is stopped, concatenate all the previous chunks and perform some actions on the result (like creating a waveform plot image for the recorded session).
For this kind of usage, which kind of self-hosted DB can I use?
How can I concatenate these large volumes of data at regular intervals and add them to the DB?
For only 6 minutes, I received something like 32 MB of data. Maybe I should put each chunk into Redis as soon as I receive it, rather than keeping it in a Python object. Another option would be to serialize the audio data as base64. I'm just afraid of losing speed, since I'm currently using TCP to send the data.
Thanks for your help!
On your question about the size: is there any reason not to compress the audio data? It's very easy, and 32 MB for 6 minutes of uncompressed mono audio is normal. You could store smaller chunks and/or append incoming chunks to a bigger file. Have a look at these; they might help you:
https://realpython.com/playing-and-recording-sound-python/
How to join two wav files using python?
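The concatenation part of that linked question can be sketched with the stdlib wave module; the filenames are hypothetical, and all chunks must share the same sample rate, sample width, and channel count:

```python
# Minimal sketch: append several recorded WAV chunks into one file.
import wave

def join_wavs(chunk_paths, out_path):
    """Concatenate the WAV files in chunk_paths into out_path."""
    with wave.open(out_path, 'wb') as out:
        for i, path in enumerate(chunk_paths):
            with wave.open(path, 'rb') as chunk:
                if i == 0:
                    # Copy rate/width/channels from the first chunk.
                    out.setparams(chunk.getparams())
                out.writeframes(chunk.readframes(chunk.getnframes()))
```

Because this streams frames chunk by chunk, it never holds more than one chunk in memory, which matters at 32 MB per 6 minutes.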
I am working on a project that needs to extract audio from a stream transmitted as .ts (MPEG-2 Transport Stream) files.
Currently I first have to save each file to the file system, then open it with MoviePy to convert it to WAV audio.
The stream is transmitted in real time, and there are multiple .ts files to process every second; MoviePy is too slow to open and convert each of them in real time.
So I wonder if I can do the whole audio-extraction process in memory; avoiding file-system IO may speed it up. How can I do it?
You can try the ffmpeg-python package: pass your options to the output function and specify a .wav output file. See https://ffmpeg.org/ffmpeg.html#Synopsis — most flags on the synopsis page are offered by the package; I haven't yet encountered one that is not.
python-ffmpeg python bindings documentation
Example code:
import ffmpeg

audio_input = ffmpeg.input(url)                  # url of the .ts stream
audio_output = ffmpeg.output(audio_input, save_location, format='wav')
audio_output.run()
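Since the goal is to avoid file-system IO: ffmpeg-python can also write to stdout via the 'pipe:' output, so the WAV bytes never touch disk. A sketch, assuming the ffmpeg binary and the ffmpeg-python package are installed; the URL is hypothetical:

```python
# Sketch: decode a .ts stream straight to in-memory WAV bytes via a pipe.
import ffmpeg

def ts_to_wav_bytes(url):
    """Return the decoded WAV file as bytes, without touching disk."""
    out, _ = (
        ffmpeg
        .input(url)
        .output('pipe:', format='wav')   # write WAV to stdout, not a file
        .run(capture_stdout=True, quiet=True)
    )
    return out  # raw WAV bytes, usable with io.BytesIO
```

The returned bytes can be wrapped in io.BytesIO and handed to any library that accepts a file-like object.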
I'm writing VoIP software in Python, trying to recreate the protocol of a specific ham-radio program, which uses the GSM audio codec.
Python has no easy way to play GSM files, but I have at least managed to convert one, so I know it is possible.
I use myfile.write(data3) to write the network stream to a .gsm file on the hard drive.
Then I use PySoundFile to convert it to a WAV file:
data, samplerate = sf.read('temppi.gsm')
sf.write('temppi.wav', data, samplerate)
After that I can play it with PyAudio, but this adds a huge delay; it needs to happen on the fly, not after a whole audio packet has arrived.
My question: how can I play the stream on the fly with SoundFile? I searched Google and everything is about converting files; is there really no way to play it directly on the fly? Any suggestions are welcome. Thanks, and happy new year :)
EDIT:
Now I have it on the fly, but this is bad, and it produces a lot of chunking sounds:
# here we start the aaniulos ("audio out") thread
if ekabitti == b'\x01':
    dataaa = self.socket.recv(198)
    data3 = io.BytesIO(bytes(dataaa))
    while True:
        global aani
        #global data3
        if aani:
            print('Audio stopped, quitting..')  # was Finnish: 'Ääni saije lopetetaan..'
            break
        data, samplerate = sf.read(io.BytesIO(bytes(data3.getbuffer())),
                                   format='RAW', channels=1, samplerate=8000,
                                   dtype='int16', subtype='GSM610', endian='FILE')
        virtuaalifilu = io.BytesIO()
        sf.write(virtuaalifilu, data, 8000, format='WAV', subtype='PCM_16')
        sound_file = io.BytesIO(bytes(virtuaalifilu.getbuffer()))
        print('streaming audio to the speakers now!!!')  # was Finnish
    stream.stop_stream()
    stream.close()
    return
Since you omit much detail, I can only guess how your implementation works, but it sounds like you are not doing it correctly. My guess is that the huge delay you experience comes from sending too much audio in each packet, maybe even a whole audio file. To achieve audio streaming with low latency, you basically need to follow this crude scheme:
At the sender:
Record audio to a buffer.
Continuously slice the buffer in chunks of a pre-defined length, e.g. 20 milliseconds.
Encode each chunk with a suitable audio codec, e.g. GSM.
Send each chunk in a packet to the receiver, preferably using a datagram based protocol like UDP.
At the receiver:
Read packets from network when available.
Decode each packet to raw audio data and put it in an audio buffer.
Continuously play audio from the audio buffer.
If using UDP as transfer protocol you also need to handle packet losses and out-of-order packets. Depending on the latency requirements you could probably also use (or at least try) TCP to send each audio chunk.
To achieve continuous audio recording and playback sounddevice seems to be a good alternative. For recording, check out InputStream or RawInputStream. For playback, have a look at OutputStream or RawOutputStream.
It might be possible to still use SoundFile to convert from GSM codec to raw audio, but you need to do that for each chunk. And the chunk must be quite small, e.g. 20 milliseconds.
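The chunks involved are small: at 8 kHz, 16-bit mono (matching the GSM case above), a 20-millisecond chunk is only 320 bytes. The sender-side slicing can be sketched in pure Python, no audio library needed:

```python
# Slice a raw PCM byte buffer into fixed 20 ms chunks for sending.
# Assumes 8 kHz, 16-bit (2-byte) mono audio.
SAMPLE_RATE = 8000
SAMPLE_WIDTH = 2          # bytes per sample (int16)
CHUNK_MS = 20
CHUNK_BYTES = SAMPLE_RATE * SAMPLE_WIDTH * CHUNK_MS // 1000  # 320

def slice_chunks(pcm: bytes):
    """Yield successive CHUNK_BYTES-sized chunks; the last may be short."""
    for i in range(0, len(pcm), CHUNK_BYTES):
        yield pcm[i:i + CHUNK_BYTES]
```

Each yielded chunk is what gets codec-encoded and sent as one datagram; the receiver decodes and appends each one to its playback buffer.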
I'm looking for a way to continuously stream audio from a server. The main issue is that the server-side code will receive many URLs to stream audio from. There will also be instances where the URL is swapped live and a new piece of audio is streamed instead. I have not yet found a solution that doesn't involve downloading each file before streaming it, which would break the live feature.
I've attempted to use VLC for Python, but it doesn't allow changing the URL being streamed on the fly. I've also attempted to use PyAudio, but I haven't been able to get the correct audio format, let alone swap the audio source.
An example link (fair warning, it will autoplay): audio
To make a continuous stream that is sent to clients, you'll need to break this project into two halves.
Playout
You need something to decode the source streams from their compressed formats to a non-compressed standardized format that you can manipulate... raw PCM samples. Use a child process and have it output to STDOUT so you can get that data in your Python script. You can use VLC for this if you want, but FFmpeg is pretty easy:
ffmpeg -i "http://example.com/stream" -ar 48000 -ac 2 -f f32le -acodec pcm_f32le -
That will output raw PCM to STDOUT as 32-bit floats, in stereo, at 48 kHz. Once in this standard format, you can arbitrarily join streams. So, when you're done playing one stream, just kill the process, switch to the next, and start playing back samples from the new one.
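A sketch of driving that decode command from a Python script (assumes an ffmpeg binary on PATH; the URL and the handle_samples callback are hypothetical):

```python
# Spawn ffmpeg as a child process and read raw PCM from its stdout.
import subprocess

def decode_cmd(url):
    """ffmpeg command decoding `url` to f32le stereo 48 kHz on stdout."""
    return ['ffmpeg', '-i', url,
            '-ar', '48000', '-ac', '2',
            '-f', 'f32le', '-acodec', 'pcm_f32le', '-']

def play_stream(url, handle_samples, frames=1152):
    """Read PCM from ffmpeg in fixed-size reads; kill to switch sources."""
    bytes_per_frame = 4 * 2                     # 32-bit float x 2 channels
    proc = subprocess.Popen(decode_cmd(url), stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    try:
        while True:
            chunk = proc.stdout.read(frames * bytes_per_frame)
            if not chunk:
                break
            handle_samples(chunk)               # feed your mixer/output here
    finally:
        proc.kill()                             # source swap: kill and respawn
```

Swapping the live URL is then just returning from play_stream and calling it again with the new URL; since every source is reduced to the same raw format, the join is seamless from the encoder's point of view.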
Encoding
You want to create a single PCM stream that then you can re-encode with some external encoder, basically in reverse from what you did on playout. Again, something FFmpeg can do for you:
ffmpeg -f f32le -ar 48000 -ac 2 -i - -f opus -acodec libopus icecast://...
Now, you'll note the output example here, I suggested sending this off to Icecast. Icecast is a decent streaming server you can use. If you'd rather just output directly over HTTP, you can. But if you're playing this stream out to more than one listener, I'd suggest letting Icecast or similar take care of it for you.
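The encoding half can be sketched the same way, writing the joined PCM stream into ffmpeg's stdin (the Icecast URL is hypothetical; assumes an ffmpeg binary on PATH):

```python
# Pipe raw PCM into ffmpeg's stdin for encoding to Opus and sending to Icecast.
import subprocess

def encode_cmd(icecast_url):
    """ffmpeg command reading f32le stereo 48 kHz from stdin, encoding to Opus."""
    return ['ffmpeg', '-f', 'f32le', '-ar', '48000', '-ac', '2', '-i', '-',
            '-f', 'opus', '-acodec', 'libopus', icecast_url]

def start_encoder(icecast_url):
    """Spawn the encoder; write PCM bytes to proc.stdin as they are produced."""
    return subprocess.Popen(encode_cmd(icecast_url), stdin=subprocess.PIPE)
```

The playout side writes every decoded chunk to this process's stdin, so listeners hear one continuous stream regardless of how many sources were stitched together upstream.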
Is there a way to capture and write very fast serial data to a file?
I'm using a 32kSPS external ADC and a baud rate of 2000000 while printing in the following format: adc_value (32bits) \t millis()
This results in ~15 prints every 1 ms. Unfortunately, every single solution I have tried fails to capture and store real-time data to a file. This includes: Processing sketches, Tera Term, Serial Port Monitor, PuTTY and some Python scripts. All of them are unable to log the data in real time.
Arduino Serial Monitor on the other hand is able to display real time serial data, but it's unable to log it in a file, as it lacks this function.
Here's a screenshot of the Arduino serial monitor with the incoming data:
One problem is probably that you write to the file each time you receive a new record. That wastes a lot of time on writing.
Instead, collect the data into buffers, and when a buffer is about to overflow, write the whole buffer in a single, as low-level as possible, write call.
And to avoid stalling the reception of data too much, you could use threads and double buffering: receive data in one thread and write it to a buffer. When the buffer is about to overflow, signal a second thread and switch to a second buffer. The other thread takes the full buffer, writes it to disk, and waits for the next buffer to become full.
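That double-buffering scheme can be sketched with a queue handing filled buffers from the receiving thread to a writer thread (the buffer size and record format are illustrative):

```python
# Sketch: double-buffered logging. The receiving thread appends records to
# the active buffer; full buffers are handed to a writer thread that flushes
# each one to disk in a single write call.
import queue
import threading

BUFFER_SIZE = 64 * 1024   # flush threshold, tune to your data rate

class DoubleBufferedWriter:
    def __init__(self, path):
        self._file = open(path, 'wb')
        self._buf = bytearray()
        self._full = queue.Queue()
        self._writer = threading.Thread(target=self._drain, daemon=True)
        self._writer.start()

    def write(self, record: bytes):
        """Called from the receiving thread for each incoming record."""
        self._buf += record
        if len(self._buf) >= BUFFER_SIZE:
            self._full.put(self._buf)     # hand the full buffer off
            self._buf = bytearray()       # switch to a fresh buffer

    def _drain(self):
        while True:
            buf = self._full.get()
            if buf is None:               # sentinel: shut down
                break
            self._file.write(buf)         # one big write per buffer

    def close(self):
        self._full.put(self._buf)         # flush the partial buffer
        self._full.put(None)
        self._writer.join()
        self._file.close()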
After trying more than 10 possible solutions to this problem, including dedicated serial-capture software, Python scripts, MATLAB scripts, and some C-project alternatives, the only one that kinda worked for me proved to be MegunoLink Pro.
It does not achieve the full 32kSPS potential of the ADC, rather around 12-15kSPS, but it is still much better than anything I've tried.
Not achieving the full 32kSPS might also be limited by the Serial.print() method that I'm using for printing values to the serial console. By the way, the platform I've been using is ESP32.
Later edit: don't forget to edit MegunoLinkPro.exe.config file in the MegunoLink Pro install directory in order to add further baud rates, like 1000000 or 2000000. By default it is limited to 500000.