I'd like to ask for some advice about real-time audio data processing.
For the moment, I have created a simple server and client using Python sockets that send and receive audio data from the microphone until I stop them (4096 bytes per packet, but it could be much more).
I am considering two different kinds of analysis:
realtime: perform analysis on each X-byte packet and send the result back in the response
batch: after receiving a lot of bytes (for example, every hour), append them and store them in a DB. When the microphone is stopped, concatenate all the previous chunks and perform some actions on the result (like creating a waveform plot image for the recorded session).
For this kind of usage, which self-hosted DB can I use?
And how can I concatenate these large volumes of data at regular intervals and add them to the DB?
For only 6 minutes, I received something like 32 MB of data. Maybe I should put each chunk into Redis as soon as I receive it, rather than keeping it in a Python object. Another option would be to serialize the audio data as base64. I'm just afraid of losing speed, since I'm currently using TCP to send the data.
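For what it's worth, the Redis idea I have in mind would look roughly like this (a minimal sketch assuming a local Redis server and the redis-py client; the key names are made up):

    import redis

    r = redis.Redis(host='localhost', port=6379)

    def on_chunk(session_id, chunk):
        # Append each raw packet to a per-session list as soon as it arrives
        r.rpush('audio:' + session_id, chunk)

    def finish_session(session_id):
        # When the microphone stops, concatenate every chunk in order
        return b''.join(r.lrange('audio:' + session_id, 0, -1))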
Thanks for your help !
On your question about the size: is there any reason not to compress the audio data? It's very easy, and 32 MB for 6 minutes of uncompressed mono audio is normal. You could store smaller chunks and/or append incoming chunks to a bigger file. Have a look at this, it might help you:
https://realpython.com/playing-and-recording-sound-python/
See also this related question: How to join two wav files using python?
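For the append-to-a-bigger-file route, here is a minimal sketch with the standard-library wave module, assuming raw 16-bit mono PCM at 44.1 kHz (which matches your ~32 MB per 6 minutes):

    import wave

    wav = wave.open('session.wav', 'wb')
    wav.setnchannels(1)       # mono
    wav.setsampwidth(2)       # 16-bit samples
    wav.setframerate(44100)

    def on_packet(packet):
        # Append each 4096-byte packet as it arrives; nothing piles up in memory
        wav.writeframes(packet)

    # ...and when the microphone is stopped:
    # wav.close()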
I'm writing VoIP software in Python, trying to recreate the protocol of a specific ham radio program, which uses the GSM audio codec.
Python has no easy way to play GSM files, but I managed to at least convert a file with it, so I know it is possible.
I use myfile.write(data3) on the network stream to write a .gsm file to the hard drive.
Then I use PySoundFile to convert it to a WAV file:
data, samplerate = sf.read('temppi.gsm')
sf.write('temppi.wav', data, samplerate)
After that I can play it with PyAudio, but there is a huge delay; it needs to play on the fly, not after the whole audio packet has come in.
My question: how can I play the stream directly, on the fly, with SoundFile? I tried searching Google, but everything is only about converting files; is there really no way to play it directly on the fly? Any suggestions on what I could do? Thanks, and happy new year :)
EDIT:
Now I have it playing on the fly, but it's bad, and it makes a lot of chunking sounds:
    # Here we start the audio output thread ("aaniulos" = audio output)
    if ekabitti == b'\x01':
        while True:
            global aani
            if aani:
                print('Stopping the audio stream..')
                break
            # Read one GSM-encoded packet from the network
            dataaa = self.socket.recv(198)
            if not dataaa:
                break
            # Decode this chunk to raw 16-bit PCM
            data, samplerate = sf.read(io.BytesIO(dataaa), format='RAW',
                                       channels=1, samplerate=8000,
                                       dtype='int16', subtype='GSM610',
                                       endian='FILE')
            # Re-encode the chunk as an in-memory WAV file for playback
            virtuaalifilu = io.BytesIO()
            sf.write(virtuaalifilu, data, 8000, format='WAV', subtype='PCM_16')
            sound_file = io.BytesIO(virtuaalifilu.getbuffer())
            print('Streaming sound to the speakers now!!!')
            # ...sound_file is then written to the PyAudio stream...
        stream.stop_stream()
        stream.close()
        return
Since you omit much detail, I can only guess how your implementation works, but it sounds like you are not doing it correctly. My guess is that the huge delay you experience is because you are sending too much audio in each packet, maybe even a whole audio file. To achieve audio streaming with low latency you basically need to follow this crude scheme:
At the sender:
Record audio to a buffer.
Continuously slice the buffer in chunks of a pre-defined length, e.g. 20 milliseconds.
Encode each chunk with a suitable audio codec, e.g. GSM.
Send each chunk in a packet to the receiver, preferably using a datagram based protocol like UDP.
At the receiver:
Read packets from network when available.
Decode each packet to raw audio data and put it in an audio buffer.
Continuously play audio from the audio buffer.
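As a rough illustration of this scheme, here is a minimal sketch using sounddevice and UDP; the address, sample rate, and chunk length are assumptions, and the codec step is left as a comment:

    import socket
    import sounddevice as sd

    ADDR = ('127.0.0.1', 5005)    # assumed receiver address
    RATE, CHANNELS = 8000, 1
    CHUNK = int(RATE * 0.020)     # 20 ms = 160 frames

    def sender():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        def on_audio(indata, frames, time, status):
            # One 20 ms chunk per UDP packet; encode with your codec here
            sock.sendto(bytes(indata), ADDR)
        with sd.RawInputStream(samplerate=RATE, channels=CHANNELS,
                               dtype='int16', blocksize=CHUNK,
                               callback=on_audio):
            input('Recording; press Enter to stop\n')

    def receiver():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(ADDR)
        with sd.RawOutputStream(samplerate=RATE, channels=CHANNELS,
                                dtype='int16') as out:
            while True:
                packet, _ = sock.recvfrom(4096)   # decode with your codec here
                out.write(packet)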
If using UDP as transfer protocol you also need to handle packet losses and out-of-order packets. Depending on the latency requirements you could probably also use (or at least try) TCP to send each audio chunk.
To achieve continuous audio recording and playback sounddevice seems to be a good alternative. For recording, check out InputStream or RawInputStream. For playback, have a look at OutputStream or RawOutputStream.
It might still be possible to use SoundFile to convert from the GSM codec to raw audio, but you need to do that for each chunk, and the chunks must be quite small, e.g. 20 milliseconds.
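Combining the two ideas, a sketch of per-chunk decoding with SoundFile feeding a RawOutputStream, assuming 198-byte GSM610 packets at 8 kHz mono as in your question (sock is the already-connected socket; untested for codec framing details):

    import io
    import soundfile as sf
    import sounddevice as sd

    with sd.RawOutputStream(samplerate=8000, channels=1, dtype='int16') as out:
        while True:
            packet = sock.recv(198)          # one small GSM-encoded chunk
            if not packet:
                break
            # Decode just this chunk to raw PCM...
            pcm, _ = sf.read(io.BytesIO(packet), format='RAW', channels=1,
                             samplerate=8000, dtype='int16',
                             subtype='GSM610', endian='FILE')
            # ...and queue it for playback immediately
            out.write(pcm.tobytes())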
I have a process that continuously reads data off of a bus and sends the data (converted into strings) via a pipe to a GUI. I'm using multiple processes (the multiprocessing library), where the GUI and the read process are separate.
The GUI then takes those strings and displays them in a textbox. I want the GUI to read incoming data at 1 ms intervals (after(1, read_data)). However, I think the strings are too big, so the data moves through the pipe too slowly, and that in turn makes the textbox display the data too slowly. I'm trying to get a live view of the information on the bus, so fast timing is necessary.
bus data --> read process --> GUI (GUI prints the data strings in a textbox)
How do I reduce the lag or slowdown between the reading process and the GUI? How long does it take for data to travel through a pipe?
An example of a string would be:
"######################################################################\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\n######################################################################\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\n######################################################################\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\n######################################################################\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAA\n"
I've tried sending the data as one entire string. I've also tried sending it in chunks, but it's still slow because both ends of the pipe then have to perform multiple iterations of reads or writes.
What can I do to maintain fast sending and receiving (within the 0.1ms to 4ms range)?
EDIT:
I'm updating a textbox in Tkinter every 25 ms with the data received.
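For reference, a minimal sketch of the setup described, with the poll batched so each 25 ms tick drains everything waiting in the pipe (read_from_bus is a hypothetical stand-in for the actual bus read):

    import multiprocessing as mp
    import tkinter as tk

    def read_bus(conn):
        # Runs in the separate read process
        while True:
            conn.send(read_from_bus())   # hypothetical bus read, returns a string

    def poll_pipe():
        # Drain everything currently waiting in the pipe, then redraw once;
        # batching the reads keeps the GUI from falling behind
        while conn.poll():
            textbox.insert(tk.END, conn.recv())
        textbox.see(tk.END)
        root.after(25, poll_pipe)

    if __name__ == '__main__':
        conn, child = mp.Pipe(duplex=False)
        mp.Process(target=read_bus, args=(child,), daemon=True).start()
        root = tk.Tk()
        textbox = tk.Text(root)
        textbox.pack()
        root.after(25, poll_pipe)
        root.mainloop()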
Is there a way to capture and write very fast serial data to a file?
I'm using a 32 kSPS external ADC and a baud rate of 2000000 while printing in the following format: adc_value (32 bits) \t millis()
This results in ~15 prints every 1 ms. Unfortunately, every single solution I have tried fails to capture and store the data to a file in real time. This includes Processing sketches, TeraTerm, Serial Port Monitor, PuTTY, and some Python scripts; all of them are unable to log the data in real time.
The Arduino Serial Monitor, on the other hand, is able to display the serial data in real time, but it cannot log it to a file, as it lacks that function.
[Screenshot of the Arduino Serial Monitor with the incoming data]
One problematic thing is probably that you perform a write each time you receive a new record. That wastes a lot of time on writing.
Instead, try to collect the data into buffers, and when a buffer is about to overflow, write the whole buffer out in a single, as-low-level-as-possible write call.
And so as not to stall the receiving of data too much, you could use threads and double-buffering: receive data in one thread and write it into a buffer. When the buffer is about to overflow, signal a second thread and switch to a second buffer. The other thread takes the full buffer, writes it to disk, and waits for the next buffer to become full.
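A minimal sketch of that double-buffering idea with the standard threading and queue modules (BUF_SIZE, receiving, and read_record are assumptions to be replaced with your serial-port specifics):

    import threading
    import queue

    BUF_SIZE = 1 << 20          # 1 MiB per buffer (an assumption; tune it)
    full_buffers = queue.Queue()

    def writer(path):
        # Takes full buffers and writes each one in a single low-level call
        with open(path, 'wb', buffering=0) as f:
            while True:
                buf = full_buffers.get()
                if buf is None:             # sentinel: capture finished
                    break
                f.write(buf)

    threading.Thread(target=writer, args=('capture.bin',), daemon=True).start()

    current = bytearray()
    while receiving:                        # hypothetical: data keeps arriving
        current += read_record()            # hypothetical serial-port read
        if len(current) >= BUF_SIZE:
            full_buffers.put(bytes(current))  # hand off and switch buffers
            current = bytearray()

    full_buffers.put(bytes(current))        # flush the last partial buffer
    full_buffers.put(None)                  # stop the writer thread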
After trying more than 10 possible solutions to this problem, including dedicated serial-capture software, Python scripts, MATLAB scripts, and some alternative C projects, the only one that kind of worked for me proved to be MegunoLink Pro.
It does not achieve the full 32 kSPS potential of the ADC, rather around 12-15 kSPS, but that is still much better than anything else I've tried.
Not reaching the full 32 kSPS might also be a limitation of the Serial.print() method I'm using for printing values to the serial console. By the way, the platform I've been using is the ESP32.
Later edit: don't forget to edit the MegunoLinkPro.exe.config file in the MegunoLink Pro install directory in order to add further baud rates, such as 1000000 or 2000000. By default it is limited to 500000.
Hope you're well and thanks for reading.
I've been revisiting an old project, leveraging Plotly to stream data out of MySQL, with Python in between the two. I've never had great luck with plot.ly (which I'm sure says more about my understanding than about their platform): streams/iframes seem to stall over time, and I'm not apt enough to troubleshoot it completely.
My current symptom is this: plots arbitrarily stall. I'm pushing data, but the iframe isn't updating.
My current workaround is to refresh the browser every X minutes.
That works, but it's aggravating, because I don't understand why the visual is stalling in the first place (is it me, is it them, etc.).
As I was reviewing some of the documentation, specifically this link:
https://plot.ly/streaming/
I noticed they call out NOT continually opening and closing streams, and say that heartbeats should be sent every so often to keep things alive/fresh.
Here's what I'm currently calling every 10 minutes:
pullData(mysql)
format data
open(plotly.stream1)
write data to plotly.stream1
close(plotly.stream1)
open(plotly.stream2)
write data to plotly.stream2
close(plotly.stream2)
Based on what I'm reading, it sounds like I should actually execute the script once on startup, keep the streams open, and heartbeat() them every 15 or so seconds between the actual write() calls, like this:
open(plotly.stream1)
open(plotly.stream2)

every 10 minutes:
    pullData(mysql)
    format data
    write data to plotly.stream1
    write data to plotly.stream2

while not pulling and writing:
    every 15 seconds:
        heartbeat(plotly.stream1)
        heartbeat(plotly.stream2)

if error:
    close(plotly.stream1)
    close(plotly.stream2)
Please excuse the pseudocode mess; I'm just trying to convey the idea. Does anyone have any advice? I started down my original path of opening, writing, and closing based on the streaming example, but that's a one-time write. The other example is a constant stream of data. I'm somewhere in between the two.
Furthermore: is this train of thought even related to the iframe not refreshing? Part of me believes the symptom is unrelated to my idea; the data is getting to plot.ly fine, and it's my session that's expiring, or the iframe "connection" that's going stale. Even if the symptom is unrelated, at least I'll have made my source code a bit cleaner and more appropriate.
Any advice is greatly appreciated!
Thanks
-justin
Plotly will close a stream that is inactive for more than 60 seconds. You must send a newline down the streaming channel (a heartbeat) to keep it open; I recommend doing so every 30 seconds.
Your first code example may not work as expected because the client-side websocket (which connects the plot to our system) may close when your first source stream (the stream that connects your script to our system) exits. When you disconnect a source stream, a signal is sent to our system that lets it know your stream is now inactive. If a new source stream does not reconnect quickly, we close the connected client websockets.
Now, when your script gets more data and opens a new stream, it will successfully stream data to our system, but the client-side websocket, now closed, will not pass the data on to the plot. We do cache a certain number of points for you behind the scenes, so that when you refresh the page the websocket reconnects and you get the last n points (where n is set by max-points in the API call).
This is why sending the heartbeat is important. We keep the source stream open and that in turn ensures that all the connected Clients keep their websockets open.
This isn't necessarily the most robust behaviour for a streaming platform to have, and we will likely improve it in the future. For now, though, you will likely see better results by implementing the approach in your second example.
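In Python, that pattern could look roughly like this (a sketch using the legacy plotly streaming API; the tokens and the pull_data_from_mysql helper are hypothetical):

    import time
    import plotly.plotly as py

    stream1 = py.Stream('token1')   # hypothetical stream tokens
    stream2 = py.Stream('token2')
    stream1.open()
    stream2.open()

    last_pull = 0
    while True:
        if time.time() - last_pull >= 600:     # every 10 minutes
            data = pull_data_from_mysql()      # hypothetical helper
            stream1.write(data['trace1'])
            stream2.write(data['trace2'])
            last_pull = time.time()
        else:
            stream1.heartbeat()                # a newline down the channel
            stream2.heartbeat()
        time.sleep(15)                         # well under the 60 s timeout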
Hope that helped!
I'm writing an application that uses the Python Gstreamer bindings to play audio, but I'm now trying to also just decode audio -- that is, I'd like to read data using a decodebin and receive a raw PCM buffer. Specifically, I want to read chunks of the file incrementally rather than reading the whole file into memory.
Some specific questions: How can I accomplish this with Gstreamer? With pygst specifically? Is there a particular "sink" element I need to use to read data from the stream? Is there a preferred way to read data from a pygst Buffer object? How do I go about controlling the rate at which I consume data (rather than just entering a "main loop")?
To get the data back in your application, the recommended way is appsink.
Based on a simple audio player like this one, replace the oggdemux/vorbisdec with decodebin plus a capsfilter with caps = "audio/x-raw-int", and change autoaudiosink to appsink. Then connect the "new-buffer" signal to a Python function and set "emit-signals" to True; the function will receive decoded chunks of PCM/int data. The rate of decoding will depend on how fast you can decode and consume. Since the "new-buffer" signal is emitted in the GStreamer thread context, you can simply sleep/wait in that function to control or slow down the decoding speed.
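A rough sketch of that wiring with the old pygst (GStreamer 0.10) bindings; the pipeline string and file URI are assumptions:

    import gobject
    import gst

    gobject.threads_init()

    pipeline = gst.parse_launch(
        'uridecodebin uri=file:///tmp/song.ogg ! audioconvert ! '
        'capsfilter caps=audio/x-raw-int ! '
        'appsink name=sink emit-signals=true sync=false')

    def on_new_buffer(appsink):
        buf = appsink.emit('pull-buffer')   # one decoded chunk of PCM/int data
        pcm = str(buf)                      # the raw bytes of the buffer
        # Consume pcm here; sleeping in this callback throttles decoding,
        # since it runs in the GStreamer thread context.

    sink = pipeline.get_by_name('sink')
    sink.connect('new-buffer', on_new_buffer)
    pipeline.set_state(gst.STATE_PLAYING)

    gobject.MainLoop().run()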