Threading priority workaround method - python

I have a Python application where I use threading for I/O-bound tasks (reading from two separate input sensors). I am aware that because of the GIL it is not possible to set a thread's priority, but my problem must be common enough that someone has thought of a decent workaround. When I run the application it uses the maximum computational power of the CPU, and I assume that is the problem, but I cannot avoid using the CPU's full potential.
Now for the issue: I know that a specific sensor sends data every ~24 ms (this may drift over time). However, the application reads the data at, for example, the following times:
Available data at time (s): 4.361776
Available data at time (s): 4.3772116
Available data at time (s): 4.4171033
Available data at time (s): 4.4250908
Available data at time (s): 4.4596746
Available data at time (s): 4.5154788
Available data at time (s): 4.5154788
Available data at time (s): 4.5254734
That is, on average there is 24 ms between measurements, but they are read in "clumps". Does anyone have a workaround for this? I know I could implement a sort of "guessing" algorithm to estimate the actual measurement time based on the times of previous measurements, but that seems like a solution prone to unexpected errors.
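One version of that "guessing" approach, sketched below under the assumption that the nominal period really is a known constant: treat each read time as an upper bound on the true measurement time and fit the tightest evenly spaced grid that stays at or below every arrival. (PERIOD and the function name are illustrative; a slowly drifting period would need periodic re-fitting over a sliding window.)

```python
PERIOD = 0.024  # nominal sensor period in seconds (assumed known, ~24 ms)

def nominal_times(arrivals):
    """Reconstruct evenly spaced measurement times from clumped read times.

    Reading i was produced at roughly t0 + i*PERIOD and read at
    arrivals[i] >= its production time, so the tightest consistent
    grid start is min(arrivals[i] - i*PERIOD).
    """
    t0 = min(t - i * PERIOD for i, t in enumerate(arrivals))
    return [t0 + i * PERIOD for i in range(len(arrivals))]
```

Applied to the log above, this yields a strictly 24 ms spaced series in which no estimated time postdates the corresponding read.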

Related

Time variable in script became negative during execution

I have a Python program running on my Raspberry Pi 3B doing a bunch of image processing and so on. I wanted to gather some measurements from the program by writing them to a .csv file, along with the corresponding time for each measurement. I used time.clock() (see the code snippet below) to get the time before each write operation, but somewhere between 2147 and 2148 seconds the time becomes negative (see the table below). I expect some kind of overflow occurred, but I'm having trouble understanding how it overflowed. The Raspberry Pi is a 32-bit system, and as I understand it, time.clock() returns a float. Shouldn't the time variable have overflowed only at much larger values, or is the time variable in this case not 32 bits?
Since then, I've read various threads indicating that time.time() might have been a better method for this use case, and I may switch in future tests, but I want to see what I can do with the values I've gathered so far. I believe I can do some processing on the logged times to "de-overflow" them, for lack of a better word, and use them as is. Any thoughts?
import time
import csv

def somefunction(someX, someY, csvwriter):
    t = time.clock()
    x = somefunc(someX)
    y = somefunc(someY)
    csvwriter.writerow([t, x, y])
    return
Time (s)        X value     Y value
2146.978524     -0.0019     0.00032
2147.30423      -0.00191    0.00023
-2147.336675    -0.00182    0.00034
-2147.000555    -0.00164    0.00037
I doubt this is a 32-bit issue. The third bullet point near the beginning of the Python 3.7 documentation of the time module says:
The functions in this module may not handle dates and times before the epoch or far in the future. The cut-off point in the future is determined by the C library; for 32-bit systems, it is typically in 2038.
That said, I don't really know what the problem is. Perhaps using the time.perf_counter() or time.process_time() functions instead would avoid the issue.
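For what it's worth, the numbers do fit one plausible explanation: on POSIX systems CLOCKS_PER_SEC is 1,000,000, so if the underlying C clock() counts microseconds in a signed 32-bit clock_t, it wraps at 2**31 µs ≈ 2147.48 s, exactly where the sign flips in the table. Under that assumption (it is an assumption, not something the time module documents), the logged values can be "de-overflowed" by adding one full wrap period at each backwards jump. A minimal sketch:

```python
WRAP = 2**32 / 1_000_000  # 4294.967296 s, assuming a 32-bit clock_t counting microseconds

def unwrap(times):
    """Make a wrapped time series monotonic again.

    Whenever the raw series jumps backwards by more than half a wrap,
    assume the counter overflowed and add one wrap period from then on.
    """
    out, offset, prev = [], 0.0, None
    for t in times:
        if prev is not None and t < prev - WRAP / 2:
            offset += WRAP
        prev = t
        out.append(t + offset)
    return out
```

Run over the logged column, this turns -2147.336675 into about 2147.63, which slots neatly after 2147.30423.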

python accurate timing of multiple sensors

I have a hardware setup (Nvidia Jetson) with multiple sensors which I read out and store for later processing. I use multiprocessing to read the sensors in separate processes. I essentially read the data and add a time to it so I can later on interpolate the data of the different sensors so they "sync up".
To store the time I use time.perf_counter(), because I read that it is an accurate option. I don't necessarily need the time since the epoch, but I do need the time in all the processes to advance at the same speed as a wall clock (so real time, not CPU time). Does this work with perf_counter(), or do I need a different function?
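As a sketch of the "sync up" step, assuming timestamps that advance at wall-clock rate and are comparable across processes: on Linux, CPython's time.monotonic() and time.perf_counter() both read CLOCK_MONOTONIC, which is system-wide, so timestamps taken in different processes share a timebase there; that is platform-dependent, so verify it on the Jetson. read_fn below is a hypothetical sensor-read callable, not part of any real API:

```python
import time

def read_samples(read_fn, n, period=0.01):
    # Tag each reading with a monotonic timestamp taken in this process.
    samples = []
    for _ in range(n):
        samples.append((time.monotonic(), read_fn()))
        time.sleep(period)
    return samples

def interp_at(samples, t):
    # Linearly interpolate a (timestamp, value) series at time t, so
    # readings from different sensors can be compared on one timeline.
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled range")
```

With each sensor's series captured this way, evaluating every series at the same set of times gives the synchronized view described in the question.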

Is Python sometimes simply not fast enough for a task?

I noticed a lack of good soundfont-compatible synthesizers written in Python. So, a month or so ago, I started some work on my own (for reference, it's here). Making this was also a challenge that I set for myself.
I keep coming up against the same problem again and again and again, summarized by this:
To play sound, a stream of data with a more-or-less constant rate of flow must be sent to the audio device
To synthesize sound in real time based on user input, little-to-no buffering can be used
Thus, there is a cap on the amount of time one 'buffer generation loop' can take
Python, as a language, simply cannot run fast enough to synthesize sound within this time limit
The problem is not my code, or at least I've tried to optimize it to extreme levels: using local variables in time-sensitive parts of the code, avoiding attribute lookups (dots) inside loops, using itertools for iteration, using C-implemented built-ins like max, changing thread-switching parameters, doing as few calculations as possible, making approximations; the list goes on.
Using Pypy helps, but even that starts to struggle after not too long.
It's worth noting that, at best, my synth can currently play about 25 notes simultaneously. But this isn't enough. Fluidsynth, a synth written in C, caps the number of notes per instrument at 128, and it supports multiple instruments at a time.
Is my assertion that Python simply cannot be used to write a synthesizer correct? Or am I missing something very important?
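The time budget those bullet points imply can be made concrete. With illustrative numbers (a 256-frame buffer at 44.1 kHz; both are assumptions, not figures taken from the project):

```python
SAMPLE_RATE = 44_100   # frames per second, a typical audio rate
BUFFER_FRAMES = 256    # hypothetical buffer size
NOTES = 25             # simultaneous notes, as reported above

budget = BUFFER_FRAMES / SAMPLE_RATE          # seconds to generate one buffer
per_note = budget / NOTES                     # budget per note per buffer
per_sample = per_note / BUFFER_FRAMES         # budget per note per output frame

print(f"{budget * 1e3:.2f} ms per buffer")          # ~5.80 ms
print(f"{per_sample * 1e9:.0f} ns per note-frame")  # ~907 ns
```

Under a microsecond per note per output frame is on the order of a handful of CPython bytecode operations, which is why pure-Python inner loops hit a wall here regardless of micro-optimizations.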

Does python iterate at a constant speed?

I'm writing some code to take sensor readings from GPIO and record them against time.
To make sure each measurement corresponds to a specific time, I want to know whether Python iterates at a constant speed (so that the gap between iterations is constant), and what its minimum time gap between iterations is.
If not, can someone let me know how to make the time gap constant?
Thank you!
No, Python does not and cannot iterate at constant speed.
Python is just another process on your Raspberry Pi, and your OS is responsible for allocating it time to run on the CPU (called multi-tasking). Other processes are also allotted time. This means Python is never going to be running all the time, and any processing times will depend on what the other processes are doing.
Iteration itself is also delegated to specific types; how the next item is produced varies widely, so even if Python were given constant access to the CPU, iteration speed would still vary. Whatever you do in your loop body also takes time and, unless the inputs and outputs are always exactly the same, will almost certainly take a variable amount of time to do the work.
Instead of trying to time your loops, measure time with time.time() or timeit.default_timer (depending on how precise you need to be; on your Raspberry Pi it'll be the same function) inside the loop and adjust your actions based on that.
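A sketch of that advice: schedule each reading against the clock rather than counting iterations, computing every deadline from the start time so per-iteration jitter does not accumulate into drift. INTERVAL and read_fn are placeholders, not values from the question:

```python
import time

INTERVAL = 0.024  # desired period between readings, in seconds

def sample_loop(read_fn, n):
    # Deadlines are start + i*INTERVAL, so one slow iteration does not
    # push every later reading off schedule.
    start = time.monotonic()
    readings = []
    for i in range(1, n + 1):
        readings.append((time.monotonic() - start, read_fn()))
        remaining = start + i * INTERVAL - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return readings
```

Each returned pair carries the actual elapsed time of its reading, so downstream code can use the measured timestamps instead of assuming perfect spacing.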

Python Accurate Sampling

I am trying to sample CPU registers every millisecond and calculate the frequency. For an accurate measurement, I need the sampling interval to be very accurate. I have been using time.sleep() to achieve this, but sleep is not very accurate below 1 second.
What I would like to do is set up a counter and sample when that counter reaches a certain value and where the counter is incremented at an accurate rate. I am running Python 2.6. Does anyone have any suggestions?
I suspect there are several Python packages out there that help with what you want. I also suspect Python is not the right tool for that purpose.
There is the timeit module.
There is the time module with clock(), which is NOT a wall clock but a CPU-usage clock (for the application that initialized it). It returns a floating-point value showing some 12+ digits below the ones place, e.g. 1.12345678912345. Python floats are not known for their accuracy, and the return value of time.clock() is not something I personally trust as accurate.
There are other Python introspection tools you can google for, like inspect, itertools, and others, that time processes; however, I suspect their accuracy depends on running averages over many iterations of measuring the same thing.
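For sub-millisecond periods, a common pattern (not tied to any package, and only as precise as the OS scheduler allows) is to sleep through most of each interval and busy-wait the final stretch, since a bare time.sleep() typically overshoots by the OS timer quantum. A sketch using time.perf_counter(), which unlike time.clock() is a high-resolution clock suited to interval timing; note perf_counter() is Python 3.3+, so on 2.6 the nearest Unix equivalent is time.time():

```python
import time

PERIOD = 0.001        # 1 ms target sampling period
SPIN_MARGIN = 0.0005  # busy-wait the final 0.5 ms for precision

def precise_sample(read_fn, n):
    # Sleep coarsely toward each deadline, then spin on the clock
    # until the deadline actually passes.
    t0 = time.perf_counter()
    out = []
    for i in range(n):
        deadline = t0 + i * PERIOD
        while True:
            remaining = deadline - time.perf_counter()
            if remaining <= 0:
                break
            if remaining > SPIN_MARGIN:
                time.sleep(remaining - SPIN_MARGIN)
        out.append((time.perf_counter() - t0, read_fn()))
    return out
```

The trade-off is that the spin phase burns CPU; widening SPIN_MARGIN tightens timing at the cost of more busy-waiting.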
