I have a Python program running on my Raspberry Pi 3B doing a bunch of image processing and so on. I wanted to gather some measurements from the program by writing them into a .csv file, together with the time of each measurement. I used time.clock() (see the code snippet below) to get the time before each write operation, but somewhere between 2147 and 2148 seconds, the time becomes negative (see the table below). I expect some kind of overflow occurred, but I'm having trouble understanding in which manner it overflowed. The Raspberry Pi is a 32-bit system, and as I understand it, time.clock() returns a float. Shouldn't the time variable have overflowed only at much larger values, or is the time variable in this case not 32 bits?
Since then, I've read various threads indicating that time.time() might have been a better method for this use case, and I might switch to it in future tests, but I want to see what I can do with the values I've gathered so far. I believe I can do some processing on the logged time to "de-overflow" it, for lack of a better word, and use it as is. Any thoughts?
import time
import csv

def somefunction(someX, someY, csvwriter):
    t = time.clock()               # timestamp taken just before the write
    x = somefunc(someX)            # somefunc: placeholder for the processing
    y = somefunc(someY)
    csvwriter.writerow([t, x, y])
    return
Time (s)        X value     Y value
2146.978524     -0.0019     0.00032
2147.30423      -0.00191    0.00023
-2147.336675    -0.00182    0.00034
-2147.000555    -0.00164    0.00037
I doubt this is a 32-bit issue. The third bullet point near the beginning of the Python 3.7 documentation of the time module says:
The functions in this module may not handle dates and times before the epoch or far in the future. The cut-off point in the future is determined by the C library; for 32-bit systems, it is typically in 2038.
That said, I don't really know what the problem is. Perhaps using the time.perf_counter() or time.process_time() functions instead would avoid the issue.
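Still, 2147.48 seconds is suspiciously close to 2^31 / 10^6, so one guess is that the underlying C clock() value (a signed 32-bit clock_t counting microseconds on some systems) wrapped around. If that's what happened, the values you've already logged can be unwrapped after the fact. A minimal sketch, assuming exactly one signed 32-bit wrap of a microsecond counter:

WRAP_S = 2**32 / 1e6   # one full wrap of a 32-bit microsecond counter: 4294.967296 s

def unwrap(t):
    # A reading that went negative is off by exactly one wrap.
    return t + WRAP_S if t < 0 else t

print(unwrap(-2147.336675))   # -> 2147.630621, which plausibly follows 2147.30423

For runs long enough to wrap more than once, you would instead count the wraps by watching for backward jumps in the logged times.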
I am currently working on an IoT project in which I am trying to interface my Raspberry Pi 3 with an HX711 so that I can read weight measurements from my load cell, which has a range of 200 kg.
For the Python code, I tried this Python library from GitHub.
According to the description of that repository, I first calibrated the HX711 (calibration.py) using a known 5 kg weight, which gave me the offset and scale. I then copied them into example_python3.py.
But I keep getting varying readings from the load cell, as shown in the following screenshot from the Raspberry Pi window:
I am getting this output with a 5 kg load on the cell. I have repeated this calibrate-and-check loop many times, but my output still varies.
This is the code that I was using:
import RPi.GPIO as GPIO
import time
import sys
from hx711 import HX711

# Force Python 3 ###########################################################
if sys.version_info[0] != 3:
    raise Exception("Python 3 is required.")
############################################################################

GPIO.setwarnings(False)

hx = HX711(5, 6)

def cleanAndExit():
    print("Cleaning...")
    GPIO.cleanup()
    print("Bye!")
    sys.exit()

def setup():
    """
    code run once
    """
    # Pasted offset and scale I got from calibration
    hx.set_offset(8608276.3125)
    hx.set_scale(19.828315054835493)

def loop():
    """
    code run continuously
    """
    try:
        val = hx.get_grams()
        print(val)
        hx.power_down()
        time.sleep(0.001)
        hx.power_up()
    except (KeyboardInterrupt, SystemExit):
        cleanAndExit()

##################################
if __name__ == "__main__":
    setup()
    while True:
        loop()
Unfortunately I do not have an HX711 so I cannot test your code. But I can give some pointers that might help.
My main question is: why does your code contain
hx.power_down()
time.sleep(0.001)
hx.power_up()
in the loop? According to the datasheet, the output settling time (i.e. the time from power up, reset, input channel change, or gain change to valid, stable output data) is 400 ms. So if you power down the HX711 after every reading, every reading will be unstable!
Furthermore, how much deviation between readings do you expect? Your values currently fluctuate roughly between 4990 and 5040, a spread of 50, i.e. about 1%. That's not really bad. Unfortunately, the accuracy of the HX711 is not specified in the datasheet, so I can't tell whether this is "correct" or "wrong", but you should check what to expect before assuming something is wrong. The datasheet mentions an input offset drift of 0.2 mV, while the full-scale differential input voltage (at gain 128) is 20 mV. That's 1% too. (That might be a coincidence, but it's worth diving into if you want to be sure.)
Did you check the timing of your serial communication? The code just toggles I/O pins without any specific timing, while the datasheet requires PD_SCK to be high for at least 0.2 us and at most 50 us. Toggling faster might result in incorrect readings, while toggling slower might reset the device (keeping PD_SCK high for longer than 60 us puts it into power-down mode). See, for example, this C implementation, which includes a fix for fast CPUs; the Python library you're using does not include this fix.
I'm not sure how you would enforce this timing on a Raspberry Pi, though. It seems you're simply lucky if you get this working reliably on a Raspberry Pi (or any other non-real-time platform), because if your code is interrupted at a bad time, the reading may fail (see this comment for example).
I've also read a few reports from people stating that the HX711 needs to "warm up" for 2-3 minutes, after which the readings become more stable.
Finally, the issue could also be hardware related. There seem to be many low-quality boards. For example, there is this known design fault that might be related.
NB: Also note that your offset (8,608,276.3125) can't be correct. The HX711 returns a 24-bit two's complement value, i.e. a value between -8,388,608 and +8,388,607. Your value is outside that range. The reason you got this value is that the library you're using does not handle the data coding correctly. See this discussion. There are several forks of the repository in which this was fixed, for example this one. Correctly read, the value would have been -8,168,939.6875 (your value minus 2^24). This bug won't affect the accuracy, but it can produce incorrect results for certain weights.
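For reference, the fix itself is small. A minimal sketch (the function name is mine, not the library's):

def to_signed_24bit(raw):
    # Interpret a raw 24-bit HX711 reading as two's complement.
    if raw & 0x800000:      # bit 23 is the sign bit
        raw -= 1 << 24      # subtract 2^24 to get the negative value
    return raw

print(to_signed_24bit(0xFFFFFF))   # -> -1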
Just a final note for everyone thinking that the precision is 24-bit and thus it should return very reliable readings: precision is not the same as accuracy. Just because the device returns 24 bits does not mean that those bits are correct. How close the value is to the real value (the actual weight) depends on many other factors. Note that the library that you use by default reads the weight 16 times and averages the result. That shouldn't be necessary at all if the device is so accurate!
My advice (a revised sketch follows this list):
- Remove the power-down from your loop (you may want to use it once in your setup, though).
- Wait at least 400 ms before your first reading.
- Fix the data conversion for two's complement logic if you want to use the full range.
- Do not expect the device to be accurate to 24 bits.
- If you still want to improve the accuracy of the readings, you might want to dive into the hardware- or timing-related issues (the datasheet and these GitHub issues contain a lot of information).
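As an illustration of the first two points, a minimal restructuring of your setup() and loop() might look like this (untested, since I don't have the hardware; it reuses hx, cleanAndExit, and the calibration values from your question):

def setup():
    # Run once: power up, let the output settle, then configure.
    hx.power_up()
    time.sleep(0.4)                    # datasheet: ~400 ms settling time
    hx.set_offset(8608276.3125)        # your calibration values (but see
    hx.set_scale(19.828315054835493)   # the NB above about the offset)

def loop():
    # Run continuously: read without power-cycling the HX711.
    try:
        val = hx.get_grams()
        print(val)
        time.sleep(0.1)                # pace the readings; no power_down/power_up
    except (KeyboardInterrupt, SystemExit):
        cleanAndExit()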
I noticed a lack of good soundfont-compatible synthesizers written in Python. So, a month or so ago, I started some work on my own (for reference, it's here). Making this was also a challenge that I set for myself.
I keep coming up against the same problem again and again, summarized by this:
- To play sound, a stream of data with a more-or-less constant rate of flow must be sent to the audio device.
- To synthesize sound in real time based on user input, little to no buffering can be used.
- Thus, there is a cap on how long one 'buffer generation loop' can take (see the worked example below).
- Python, as a language, simply cannot run fast enough to synthesize sound within this time limit.
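To make that time limit concrete, here is the back-of-the-envelope calculation with illustrative numbers (the sample rate and buffer size are typical values, not taken from my synth):

SAMPLE_RATE = 44100   # Hz, a common audio sample rate
BUFFER_SIZE = 256     # frames per buffer, a typical low-latency size

budget_ms = BUFFER_SIZE / SAMPLE_RATE * 1000
print("%.2f ms to generate each buffer" % budget_ms)   # ~5.80 ms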
The problem is not my code, or at least, I've tried to optimize it to extreme levels: using local variables in time-sensitive parts of the code, avoiding attribute lookups (dots) inside loops, using itertools for iteration, using C-implemented built-ins like max, changing thread-switching parameters, doing as few calculations as possible, making approximations; the list goes on.
Using PyPy helps, but even that starts to struggle before long.
It's worth noting that, at best, my synth can currently play about 25 notes simultaneously. But this isn't enough: Fluidsynth, a synth written in C, caps the number of notes per instrument at 128, and it also supports multiple instruments at a time.
Is my assertion that Python simply cannot be used to write a synthesizer correct? Or am I missing something very important?
I have a Python application where I use threading for the I/O-bound tasks (reading from two separate input sensors). I am aware that, because of the GIL, it is not possible to set a priority on a thread; however, my problem must be common enough that someone has thought of a decent workaround. As it runs, the application uses the maximum computational power of the CPU, and I assume that is the problem, but I cannot avoid using its full capacity.
Now for the issue: I know that a specific sensor sends data every ~24 ms (this might drift over time). However, the application reads the data at, for example, the following times:
Available data at time (s): 4.361776
Available data at time (s): 4.3772116
Available data at time (s): 4.4171033
Available data at time (s): 4.4250908
Available data at time (s): 4.4596746
Available data at time (s): 4.5154788
Available data at time (s): 4.5154788
Available data at time (s): 4.5254734
This is, on average, basically 24 ms between measurements, but they are read in "clumps". Does anyone have a workaround for this problem? I know I could implement a sort of "guessing" algorithm to estimate the actual time of each measurement based on the times of the previous measurements (a sketch of the idea follows below), but that seems like a solution prone to unexpected errors.
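For clarity, the kind of estimation I mean would look roughly like this (SAMPLE_PERIOD is the nominal 24 ms; this is just the idea, not tested code):

SAMPLE_PERIOD = 0.024   # expected time between sensor samples

def reconstruct_times(arrival_times):
    # Assume samples are evenly spaced and use each arrival time only as
    # an upper bound: a reading can arrive late, but never before it was
    # taken. Clock drift is not handled here.
    estimates = [arrival_times[0]]
    for arrival in arrival_times[1:]:
        estimates.append(min(estimates[-1] + SAMPLE_PERIOD, arrival))
    return estimates

print(reconstruct_times([4.361776, 4.3772116, 4.4171033, 4.4250908]))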
I'm writing some code to get sensor readings from GPIO against time.
To make sure the measurements correspond to specific times, I want to know whether Python iterates at a constant speed (so that the gap between iterations is constant), and what its minimum time gap between iterations is.
If it doesn't, can someone let me know how to make the time gap constant?
Thank you!
No, Python does not and can not iterate at constant speed.
Python is just another process on your Raspberry Pi, and your OS is responsible for allocating it time to run on the CPU (called multi-tasking). Other processes also get allotted time. This means Python is never going to be running all the time, and any processing times are going to depend on what the other processes are doing.
Iteration itself is also delegated to specific types; how the next item is produced varies widely, and even if Python were given constant access to the CPU, iteration would still vary. Whatever you do in your loop body also takes time and, unless the inputs and outputs are always exactly the same, will almost certainly take a variable amount of time.
Instead of trying to time your loops, measure time with time.time() or timeit.default_timer (depending on how precise you need to be; on your Raspberry Pi it'll be the same function) inside the loop and adjust your actions based on that.
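For example, a minimal sketch of that pattern (read_sensor and handle are placeholders for your GPIO read and whatever processing you do):

import time

start = time.time()
while True:
    reading = read_sensor()          # placeholder: your GPIO read
    elapsed = time.time() - start    # seconds since the loop started
    handle(reading, elapsed)         # placeholder: log or process the pair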
I am trying to sample CPU registers every millisecond and calculate the frequency. To get an accurate measurement, I need the sampling interval to be very accurate. I have been using time.sleep() to achieve this, but sleep is not very accurate past 1 second.
What I would like to do is set up a counter, incremented at an accurate rate, and sample when that counter reaches a certain value. I am running Python 2.6. Does anyone have any suggestions?
I suspect there are several Python packages out there that help with what you want. I also suspect Python is not the right tool for that purpose.
There is the timeit module.
There is the time module with clock(), which is NOT a wall clock but a CPU-usage clock (for the application that first calls time.clock()). It returns a floating-point value with some 12+ digits after the decimal point, e.g. 1.12345678912345. Python floats are not known for their accuracy, and the return value of time.clock() is not something I personally trust to be accurate.
There are other Python introspection tools that you can google for, like inspect, itertools, and others, that time processes. However, I suspect their accuracy depends on running averages over many iterations of measuring the same thing.
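One common workaround, whatever package you end up with, is to schedule each sample against an absolute deadline rather than sleeping a fixed interval, so timing errors don't accumulate. A rough sketch (sample() is a placeholder; OS scheduling and sleep granularity still limit how close to 1 ms you can actually get):

import time

INTERVAL = 0.001                        # target period: 1 ms
next_deadline = time.time() + INTERVAL
while True:
    sample()                            # placeholder: read your CPU registers
    next_deadline += INTERVAL
    delay = next_deadline - time.time()
    if delay > 0:
        time.sleep(delay)               # sleep to the absolute deadline
    # if delay <= 0 we are behind schedule and run again immediately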