I used the example code from https://circuitpython.readthedocs.io/projects/ads1x15/en/latest/examples.html to read a voltage.
In the last line of the Python code, I set the time.sleep interval to 1/3300 s.
I have the following questions:
In the time column, the time-step comes out to approximately 0.02 s, but the expected time-step is 1/3300 s. Why does this happen?
How do I ensure that the sampling frequency, i.e. the time-step between two successive data points, stays at exactly 3300 Hz?
How do I ensure that the first time-data point starts at 0?
Can somebody please clarify these points?
The sampling rate of the ADS1015 is 3300 samples/sec only in continuous mode, and only when sampling a single channel at a time.
There are two steps here:
Ensure your ADC is in continuous sampling mode.
Putting it in continuous mode would be something like adc.mode = 0, provided your library supports it. I have used this one https://github.com/adafruit/Adafruit_ADS1X15 and it does support it.
Ensure that the data rate in the config register is set to 3300 (page 16 of the datasheet at https://cdn-shop.adafruit.com/datasheets/ads1015.pdf). A sketch of both steps follows below.
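A minimal sketch of both settings, assuming the CircuitPython adafruit_ads1x15 library linked in the question (the pin and channel choices are illustrative, not prescribed):

import board
import busio
import adafruit_ads1x15.ads1015 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
from adafruit_ads1x15.ads1x15 import Mode

i2c = busio.I2C(board.SCL, board.SDA)
ads = ADS.ADS1015(i2c)

ads.mode = Mode.CONTINUOUS   # step 1: continuous conversion mode
ads.data_rate = 3300         # step 2: fastest ADS1015 data rate (SPS)

chan = AnalogIn(ads, ADS.P0) # stick to one channel; switching costs time

readings = [chan.value for _ in range(1000)]  # each read returns the latest conversion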
Even that would mostly not be enough: getting the full potential out of the ADC also needs a host that can handle large amounts of data on its I2C bus, and something like a Raspberry Pi is mostly not powerful enough.
Using faster languages like C/C++ would also help.
You have at least 3 problems and need to read the time module docs.
time.time is not guaranteed to be accurate to more than a second. In the following, run in the IDLE shell on Windows 10, multiple time.time() calls return the same time.
>>> for i in range(30):
    print(time.perf_counter(), time.time(), time.perf_counter())
8572.4002846 1607086901.7035756 8572.4002855
8572.4653746 1607086901.756807 8572.4653754
8572.4706208 1607086901.7724454 8572.4706212
8572.4755909 1607086901.7724454 8572.4755914
8572.4806756 1607086901.7724454 8572.4806759
... # time.time continues repeating 3 or 4 times.
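The coarse granularity is advertised by the clocks themselves and can be inspected with the standard library's time.get_clock_info:

import time

# inspect the advertised resolution of each clock
print(time.get_clock_info('time'))          # wall clock; typically ~0.0156 s resolution on Windows
print(time.get_clock_info('perf_counter'))  # high-resolution performance counter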
time.sleep(t) has a system-dependent minimum delay, even when t is much smaller than that minimum. On Windows, it is about .015 seconds. There is no particular upper limit if there is other system activity.
>>> for i in range(5):
    print(time.perf_counter())
    time.sleep(.0000001)
9125.1041623
9125.1188101
9125.134417
9125.1565579
9125.1722012
Printing to IDLE's shell is slower than running a program directly with Python (from the command line) and printing to the system console. For one thing, IDLE runs user code in a separate process, adding interprocess overhead. For another, IDLE is a GUI program, and the GUI framework (tk via tkinter) adds more overhead. IDLE is designed for learning Python and developing Python programs; it is not optimized for running them.
If user code outputs to a tkinter GUI it creates in the same process, avoiding the interprocess delay, the minimum interval is much shorter: about .0012 seconds in this particular example.
>>> import tkinter as tk
>>> r = tk.Tk()
>>> t = tk.Text(r)
>>> t.pack()
>>> for i in range(5):
    t.insert('insert', f'{time.perf_counter()}\n')
    r.update()
# In the text widget...
9873.6484271
9873.6518752
9873.6523338
9873.6527421
9873.6532307
Recently, when I was fiddling with Python in different IDEs/shells, I was surprised by the performance differences among them.
The code I wrote is a simple for-loop printing the numbers 0 through 999. When executed in Python IDLE or Windows PowerShell it took about 16 seconds to finish, while PyCharm finished it almost immediately, within about 500 ms.
I'm wondering why the difference is so huge.
for x in range(0, 1000, 1):
    print(x)
The time to execute the loop is almost zero. The time you're seeing elapse is due to the printing, which is tied to the output facilities of the particular shell you are using. For example, the sort of buffering it does, maybe the graphics routines being used to render the text, etc. There is no practical application for printing numbers in a loop as fast as possible to a human-readable display, so perhaps you can try the same test writing to a file instead. I expect the times will be more similar.
On my laptop your code takes 4.8 milliseconds if writing to the terminal. It takes only 460 microseconds if writing to a file.
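If you want to reproduce the comparison yourself, a minimal sketch (absolute numbers will vary with machine and shell):

import sys
import time

def bench(stream):
    # time 1000 prints to the given stream
    start = time.perf_counter()
    for x in range(1000):
        print(x, file=stream)
    stream.flush()
    return time.perf_counter() - start

print('terminal:', bench(sys.stdout), 's', file=sys.stderr)
with open('out.txt', 'w') as f:
    print('file:', bench(f), 's', file=sys.stderr)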
TL;DR: run stupid benchmarks, get stupid times.
IDLE is written in Python and uses tkinter, which wraps tcl/tk. By default, IDLE runs your code in a separate process, with output sent through a socket for display in IDLE's Shell window. So there is extra overhead for each print call. For me, on a years-old Windows machine, the 1000 line prints take about 3 seconds, or 3 milliseconds per print.
If you print the 1000 lines with one print call, as with
print('\n'.join(str(i) for i in range(1000)))
the result may take a bit more than 3 milliseconds, but it is still subjectively almost 'instantaneous'.
Note: in 3.6.7 and 3.7.1, single 'large' prints, where 'large' can be customized by the user, are squeezed down to a label that can be expanded either in-place or in a separate window.
I'm trying to get a looping call to run every 2 seconds. Sometimes I get the desired behaviour, but other times I have to wait up to ~30 seconds, which is unacceptable for my application's purposes.
I reviewed this SO post and found that LoopingCall might not be reliable for this by default. Is there a way to fix this?
My usage/reason for needing a consistent ~2 seconds:
The function I am calling scans an image (using CV2) for a dollar value and if it finds that amount it sends a websocket message to my point of sale client. I can't have customers waiting 30 seconds for the POS terminal to ask them to pay.
My source code is very long and not well commented as of yet, so here is a short example of what I'm doing:
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

# scan the image for sales every 2 seconds
def scanForSale():
    print("Now Scanning for sale requests")

# retrieve a new image every 2 seconds
def getImagePreview():
    print("Loading Image From Capture Card")

lc = LoopingCall(scanForSale)
lc.start(2)
lc2 = LoopingCall(getImagePreview)
lc2.start(2)
reactor.run()
I'm using a Raspberry Pi 3 for this application, which is why I suspect it hangs for so long. Can I utilize multithreading to fix this issue?
Raspberry Pi is not a real time computing platform. Python is not a real time computing language. Twisted is not a real time computing library.
Any one of these by itself is enough to eliminate the possibility of a guarantee that you can run anything once every two seconds. You can probably get close but just how close depends on many things.
The program you included in your question doesn't actually do much. If this program can't reliably print each of the two messages once every two seconds then presumably you've overloaded your Raspberry Pi - a Linux-based system with multitasking capabilities. You need to scale back your usage of its resources until there are enough available to satisfy the needs of this (or whatever) program.
It's not clear whether multithreading will help - however, I doubt it. It's not clear because you've only included an over-simplified version of your program. I would have to make a lot of wild guesses about what your real program does in order to think about making any suggestions of how to improve it.
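That said, if the CV2 scan itself is what blocks the reactor, one standard Twisted pattern (a sketch, not a real-time guarantee) is to push the blocking work onto the reactor's thread pool with deferToThread so the event loop stays responsive:

from twisted.internet import reactor
from twisted.internet.task import LoopingCall
from twisted.internet.threads import deferToThread

def scanForSale():
    # the blocking CV2 work would happen here, off the reactor thread
    print("Now Scanning for sale requests")

def scanInThread():
    # returning the Deferred makes LoopingCall wait for the scan to finish
    return deferToThread(scanForSale)

lc = LoopingCall(scanInThread)
lc.start(2)
reactor.run()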
Input: array of float time values (in seconds) relative to program start. [0.452, 0.963, 1.286, 2.003, ... ]. They are not evenly spaced apart.
Desired Output: Output text to console at those times (i.e. printing '#')
My question is: what is the best design pattern for this? Below is my naive solution using time.time.
import time

times = [0.452, 0.963, 1.286, 2.003]
start_time = time.time()
for event_time in times:
    while True:
        if time.time() - start_time >= event_time:
            print('#')
            break
The above feels intuitively wrong with that busy loop (even if it's in its own thread).
I'm leaning towards scheduling but want to make sure there aren't better design options: Executing periodic actions in Python
There is also the timer object: timers
Edit: events only need 10 ms precision, so +/- 10 ms from the exact event time.
A better pattern than busy waiting might be to use time.sleep(). This suspends execution rather than using the CPU.
import time

time_diffs = [0.452, 0.511, 0.323, 0.716]
for diff in time_diffs:
    time.sleep(diff)
    print('#')
Threading can also be used to similar effect. However, both of these solutions only work if the action you perform at each event takes negligible time (perhaps not true of printing); a drift-resistant variant is sketched below.
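One way to keep the schedule from drifting when the action is not instantaneous is to sleep until each absolute target time rather than sleeping the raw differences; a minimal sketch, reusing the event times from the question:

import time

times = [0.452, 0.963, 1.286, 2.003]  # offsets from program start
start = time.perf_counter()
for event_time in times:
    # sleep only for whatever remains until the target instant
    remaining = event_time - (time.perf_counter() - start)
    if remaining > 0:
        time.sleep(remaining)
    print('#')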
That being said, no pattern is going to work if you are after 10 ms precision and want to use Python on a standard OS. I recommend this question on real-time operation with Python, which explains that GUI events (i.e. printing to a screen) are too slow and unreliable for that level of precision, that the typical OSs where Python runs do not guarantee that level of precision, and that Python's garbage collection and memory management also play havoc with 'real-time' events.
I am working on a project using a Raspberry Pi 3 B where I get data from an IR sensor (Sharp GP2Y0A21YK0F) through an MCP3008 ADC and display it in real time using the PyQtGraph library.
However, it seems that I am getting very few samples, and the graph is not as smooth as I expect.
I am using the Adafruit Python MCP3008 Library and the function mcp.read_adc(0) to get the data.
Is there a way to measure the sample rate in Python?
Thank you
Hugo Oliveira
I would suggest setting up some next-level buffering, ideally via multiprocessing (see multiprocessing and GUI updating - Qprocess or multiprocessing?), to get a better handle on how fast you can access the data. Currently you're using a QTimer to poll, which only gets 3 raw reads every 50 msec, so you're REALLY limiting yourself artificially via the timer.

I haven't used the MCP3008, but a quick look at some of their code suggests you'll have to set up some sample testing to try things out, or investigate further for better documentation. The question is the behavior of the mcp.read_adc(0) method: is it blocking or non-blocking, and if non-blocking, does it return stale data when there's no new data, etc.? Ideally it would block: then, from a timing sense, you could just set up a loop on it and time the delta between successive returns to determine how fast you're able to get new samples (a sketch of that timing loop follows below). If it's non-blocking, you would want it to return null when there are no new samples, and only return actual samples when they are new. You'll have to play around with it and see how it behaves.

At any rate, once you get the secondary thread set up to just poll mcp.read_adc(0), you can use the update() timer to collect the latest buffer and plot it. I also don't know the implications of multithreading / multiprocessing on the Raspberry Pi (see the general discussion here: Multiprocessing vs Threading Python), but anything should be better than the QTimer polling.
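A minimal sketch of that timing loop, assuming the Adafruit_MCP3008 library from the question (the software-SPI pin numbers are placeholders for your own wiring):

import time
import Adafruit_MCP3008

# software-SPI pins below are placeholders; match them to your wiring
mcp = Adafruit_MCP3008.MCP3008(clk=18, cs=25, miso=23, mosi=24)

n = 1000
start = time.perf_counter()
for _ in range(n):
    mcp.read_adc(0)   # read channel 0
elapsed = time.perf_counter() - start
print('approximate sample rate: %.0f samples/sec' % (n / elapsed))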
I would like to perform a measurement and plot a graph while the measurement is running. The measurement takes quite some time in Python (it has to retrieve data over a slow connection), and the problem is that the graph freezes while measuring. The measurement consists of setting a center wavelength and then measuring some signal.
My program looks something like this:
# this is just some arbitrary library that has the functions set_wavelength
# and perform_measurement
from measurement_module import set_wavelength, perform_measurement
from pylab import *

xdata = np.linspace(600, 1000, 30)  # this will be the x axis
ydata = np.zeros(len(xdata))        # this will be the y data, filled in during the measurement

for i in range(len(xdata)):
    # this call takes approx 1 s
    set_wavelength(xdata[i])
    # this takes approx 10 s
    ydata[i] = perform_measurement(xdata)
    # now I would like to plot the measured data
    plot(xdata, ydata)
    draw()
This works when it is run in IPython with the -pylab switch, but while the measurement is running the figure freezes. How can I modify the behaviour to get an interactive plot while measuring?
You cannot simply use pylab.ion(), because Python is busy while performing the measurements.
regards,
Dirk
You can, though it may be a bit awkward, run the data gathering as a separate process. I find Popen in the subprocess module quite handy. Let that data-gathering script save its results to disk somewhere, and use
Popen.poll()
to check whether it has completed.
It ought to work.
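A sketch of that pattern (the script name is a placeholder):

import subprocess
import time

# launch the data-gathering script as a separate process
# ('gather_data.py' is a placeholder name)
proc = subprocess.Popen(['python', 'gather_data.py'])

while proc.poll() is None:   # poll() returns None while the child is running
    # here you would read the partial results the child has saved to disk
    # and redraw the plot
    time.sleep(0.5)          # avoid a busy loop between checks

print('data gathering finished with return code', proc.returncode)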
I recommend buffering the data in large chunks and rendering/re-rendering when the buffer fills up. If you want it to be non-blocking, look at greenlets.
from gevent.greenlet import Greenlet
import copy

def render(buffer):
    '''
    do rendering stuff
    '''
    pass

# not_finished and connection are placeholders for your own loop
# condition and data source
buff = ''
while not_finished:
    buff = connection.read()
    g = Greenlet(render, copy.deepcopy(buff))
    g.start()
Slow input and output is the perfect occasion to use threads and queues in Python. Threads have their limitations, but this is a case where they work easily and effectively.
Outline of how to do this:
Generally the GUI (e.g., the matplotlib window) needs to be in the main thread, so do the data collection in a second thread. In the data thread, check for new data coming in (and if you do this in some kind of infinite polling loop, put in a short time.sleep to release the thread occasionally). Then, whenever needed, let the main thread know that there's new data to be processed/displayed.

Exactly how to do this depends on the details of your program and your GUI. You could just use a flag in the data thread that you check from the main thread, or a threading.Event, or, e.g., if you have a wx backend for matplotlib, wx.CallAfter is easy. I recommend looking through one of the many Python threading tutorials to get a sense of it; threading with a GUI usually has a few extra issues too, so do a quick search on threading with your particular backend.

This sounds cumbersome as I explain it so briefly, but it's really pretty easy and powerful, and it will be smoother than, e.g., reading and writing to the same file from different processes. A compact sketch of the pattern follows below.
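In this sketch, read_instrument is a stand-in for your slow measurement; the queue hands samples from the data thread to the main thread, which owns the plot:

import queue
import threading
import time

import matplotlib.pyplot as plt

def read_instrument():
    # stand-in for the slow measurement; replace with your own code
    time.sleep(1.0)
    return time.perf_counter()

def collect(q, n):
    # data thread: push each new sample onto the queue
    for _ in range(n):
        q.put(read_instrument())

n = 10
q = queue.Queue()
threading.Thread(target=collect, args=(q, n), daemon=True).start()

# main thread owns the GUI: poll the queue and redraw as data arrives
plt.ion()
xs, ys = [], []
while len(ys) < n:
    try:
        ys.append(q.get(timeout=0.1))
        xs.append(len(ys))
        plt.plot(xs, ys, 'b.-')
        plt.pause(0.01)   # lets the GUI event loop run
    except queue.Empty:
        pass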
Take a look at Traits and Chaco, Enthought's type system and plotting library. They provide a nice abstraction to solve the problem you're running into. A Chaco plot will update itself whenever any of its dependencies change.