I've been writing a program for workstation automation in a laboratory. One of the instruments I communicate with is called a beam profiler; it reads light intensity along two orthogonal directions (x, y). Once the input is read, I need to convert it to a 2D image; for that I use numpy's meshgrid, and I'm able to obtain my desired output.
For clarity, see the image below. The two Gaussian lines along the x and y axes are my raw input, and the colored figure is the result after processing with meshgrid.
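A minimal sketch of that meshgrid step, with made-up Gaussian profiles standing in for the real readings:
import numpy as np

# Two 1D intensity profiles, one per axis (illustrative data only)
x_int = np.exp(-np.linspace(-3, 3, 75) ** 2)  # Gaussian along x
y_int = np.exp(-np.linspace(-3, 3, 75) ** 2)  # Gaussian along y

# meshgrid broadcasts the profiles onto a grid; their product is the 2D image
xx, yy = np.meshgrid(x_int, y_int)
zz = xx * yy  # shape (75, 75), ready for imshow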
I divide my software into two parts for this. First, I create a second Qt thread that initializes the device and runs in a loop acquiring and processing the data. This thread then sends a signal to the main thread with the values.
In the main thread I receive the values, plot the graph, and update the GUI.
It is already working; the problem is that once I start the beam profiler readings, the software gets slower as time passes. At first I thought it was the data processing, but that doesn't make sense: it runs in the second thread, and there is no lag right when I start the device.
It behaves as if it were accumulating the data in memory and slowing down, which is odd since I'm using the set_data and draw methods for plotting.
Note: if I stop the device readings inside my software, the lag stops; if I start them again, everything is fine at first but lags again as time passes.
Any help is much appreciated!
Data acquisition thread code:
class ThreadGraph(QtCore.QThread):
    _signalValues = QtCore.pyqtSignal(float, float, float, float, float, float, float, float)
    _signalGraph = QtCore.pyqtSignal(np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray)
    _signalError = QtCore.pyqtSignal(str)
    BEAMstatus = QtCore.pyqtSignal(str)

    def __init__(self, parent=None):
        super(ThreadGraph, self).__init__(parent)
        self.slit = 0
        self.state = False

    # Thread starts
    def run(self):
        self.init()  # Device initialization (not relevant, therefore omitted)
        time.sleep(0.1)
        while self.state:  # Thread loop (data acquisition)
            self.emitValues()  # Function to get the data and emit it
            time.sleep(0.016)
            self.emitGraph()  # Process data into 2D and emit
        try:  # When the while loop is over, terminate the thread
            self.beam.close(self.session)
        except RuntimeError as err:
            print err
        self.quit()

    def emitGraph(self):  # Use the acquired data to generate the 2D image and emit it
        xx, yy = np.meshgrid(self.slit_data_int[self.slit][0::10], self.slit_data_int[self.slit + 1][0::10])
        zz = xx * yy
        self._signalGraph.emit(
            self.slit_data_pos[self.slit][0::10],
            self.slit_data_int[self.slit][0::10],
            self.slit_data_pos[self.slit + 1][0::10],
            self.slit_data_int[self.slit + 1][0::10],
            zz
        )

    def emitValues(self):
        try:  # Try to get data from the device (data is stored in calculation_result)
            self.slit_data_pos, self.slit_data_int, self.calculation_result, self.power, self.power_saturation, self.power_intensities = self.beam.get_slit_scan_data(self.session)
        except RuntimeError as err:
            self._signalError.emit(str(err))
            return
        else:  # Emit data to the GUI main thread
            self._signalValues.emit(
                self.calculation_result[self.slit].peakPosition,
                self.calculation_result[self.slit + 1].peakPosition,
                self.calculation_result[self.slit].peakIntensity,
                self.calculation_result[self.slit + 1].peakIntensity,
                self.calculation_result[self.slit].centroidPosition,
                self.calculation_result[self.slit + 1].centroidPosition,
                self.calculation_result[self.slit].gaussianFitDiameter,
                self.calculation_result[self.slit + 1].gaussianFitDiameter
            )
Main GUI code:
class BP209_class(QtGui.QWidget):
    def __init__(self, vbox, slit25, slit5, peakposx, peakposy, peakintx, peakinty, centroidposx, centroidposy, mfdx, mfdy):
        QtGui.QWidget.__init__(self)
        # Initialize a bunch of GUI variables
        self.matplotlibWidget = MatplotlibWidget('2d')
        self.vboxBeam = vbox
        self.vboxBeam.addWidget(self.matplotlibWidget)
        self.vboxBeam.addWidget(self.matplotlibWidget.canvastoolbar)
        # Create the thread and connect its signals
        self.thread = ThreadGraph(self)
        self.thread._signalError.connect(self.Error_Handling)
        self.thread._signalValues.connect(self.values_update)
        self.thread._signalGraph.connect(self.graph_update)
        self.thread.BEAMstatus.connect(self.Status)
        # Initialize variables for the plots
        self.zz = np.zeros([750, 750])
        self.im = self.matplotlibWidget.axis.imshow(self.zz, cmap=cm.jet, origin='upper', vmin=0, vmax=1, aspect='auto', extent=[-5000, 5000, -5000, 5000])
        self.pv, = self.matplotlibWidget.axis.plot(np.zeros(750), np.zeros(750), color="white", alpha=0.6, lw=2)
        self.ph, = self.matplotlibWidget.axis.plot(np.zeros(750), np.zeros(750), color="white", alpha=0.6, lw=2)
        self.matplotlibWidget.figure.subplots_adjust(left=0.00, bottom=0.01, right=0.99, top=1, wspace=None, hspace=None)
        self.matplotlibWidget.axis.set_xlim([-5000, 5000])
        self.matplotlibWidget.axis.set_ylim([-5000, 5000])

    def __del__(self):  # Stop the thread
        self.thread.state = False
        self.thread.wait()

    def start(self):  # Start or stop the thread
        if not self.thread.state:
            self.thread.state = True
            self.thread.start()
        else:
            self.thread.state = False
            self.thread.wait()

    # Slot that receives data from the device and plots it
    def graph_update(self, slit_samples_positionsX, slit_samples_intensitiesX, slit_samples_positionsY, slit_samples_intensitiesY, zz):
        self.pv.set_data(np.divide(slit_samples_intensitiesX, 15) - 5000, slit_samples_positionsX)
        self.ph.set_data(slit_samples_positionsY, np.divide(slit_samples_intensitiesY, 15) - 5000)
        self.im.set_data(zz)
        self.im.autoscale()
        self.matplotlibWidget.canvas.draw()
Edit: I also have a camera attached to my system, and I display its feed in the GUI using OpenCV. I noticed that if I start the camera, the beam profiler's FPS drops to almost half. So maybe a Qt paint optimization would be the way to go?
Calls to canvas.draw() are expensive. You are likely acquiring data faster than the drawing commands can complete, so paint events queue up and your plot appears to lag. This blog post details a method that avoids calling canvas.draw() and can be used to speed up real-time Matplotlib plotting.
If this is still not fast enough, you may have to lower the acquisition rate, implement some form of frame-skipping mechanism, or use a different plotting library better optimised for speed.
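For reference, a minimal sketch of that blitting idea applied to the graph_update slot from the question; it assumes the three artists were created with animated=True and that self.background was cached once after the first full draw:
def graph_update(self, posX, intX, posY, intY, zz):
    canvas = self.matplotlibWidget.canvas
    axis = self.matplotlibWidget.axis
    # the static background is assumed cached once, after the first draw:
    #   self.background = canvas.copy_from_bbox(axis.bbox)
    self.pv.set_data(np.divide(intX, 15) - 5000, posX)
    self.ph.set_data(posY, np.divide(intY, 15) - 5000)
    self.im.set_data(zz)
    self.im.autoscale()
    canvas.restore_region(self.background)  # restore the cached background
    axis.draw_artist(self.im)               # redraw only the dynamic artists
    axis.draw_artist(self.pv)
    axis.draw_artist(self.ph)
    canvas.blit(axis.bbox)                  # push just the axes region to screen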
I wrote a script that models the evolution of a pandemic (with graphs and scatter plots).
I tried several libraries to display the results in real time (8 countries x 500 particles):
Matplotlib (not fast enough)
PyQtGraph (better but still not fast enough)
OpenGL (good, but I did not find how to use it in 2D efficiently, using subplots, titles, legends...)
Bokeh (good, but the scatter plots "blink" each time their particles change color. Code is here if you are interested)
That is why I am now turning to VisPy.
I am using a Visualizer class to display the results, with app.Timer().connect to handle the real-time side. The pandemic code is here.
import sys
import numpy as np
from Pandemic import *
from vispy.plot import Fig
from vispy import app

class Visualizer:
    def __init__(self, world):
        self.fig = Fig()
        self.world = world
        self.traces = {}
        # Scatter plots, one subplot per country
        for idx, c in world.countries.items():
            pos_x = idx % self.world.nb_cols
            pos_y = idx // self.world.nb_cols
            subplot = self.fig[pos_y, pos_x]
            data = np.array([c.x_coord, c.y_coord]).reshape(-1, 2)
            self.traces[idx] = subplot.plot(data, symbol='o', width=0, face_color=c.p_colors, title='Country {}'.format(idx + 1))

    def display(self):
        for idx, c in self.world.countries.items():
            data = np.array([c.x_coord, c.y_coord]).reshape(-1, 2)
            self.traces[idx].set_data(data, face_color=c.p_colors)

    def update(self, event):
        self.world.update(quarantine=False)
        self.display()

    def animation(self):
        self.timer = app.Timer()
        self.timer.connect(self.update)
        self.timer.start(0)
        self.start()

    def start(self):
        if sys.flags.interactive != 1:
            self.status = app.run()

if __name__ == '__main__':
    w = World(move=0.001)
    for i in range(8):
        w.add_country(nb_S=500)
    v = Visualizer(w)
    v.animation()
The scatter plots "blink" each time their particles change color, just as with Bokeh. Am I doing something wrong?
Is there a more efficient way to do real-time display, maybe using vispy.gloo or vispy.scene? (It is slower than pyqtgraph.opengl for the moment.)
We can plot efficiently in real time by using the vispy.gloo module to leverage the power of the GPU. Here is one way of doing it:
1) Build a class that inherits from vispy.app.Canvas.
2) Create an OpenGL Program whose inputs are shaders. This object lets us link our data to shader variables; each dot on the canvas depends on these variable values (describing its coordinates, color, etc.). Note that some things, such as displaying text (titles, labels, etc.), are much harder than with the Matplotlib library. Here is a deeper explanation of the process.
3) Set a timer connected to the function we want to call repeatedly (the real-time side).
The vispy.scene module, dedicated to high-level visualization interfaces for scientists, is still experimental; maybe that is why my first code had some bugs.
Here is my new code.
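For reference, a minimal sketch of the vispy.gloo pattern described above (the shader code and the random-walk update are illustrative, not from the linked code):
import numpy as np
from vispy import app, gloo

VERT = """
attribute vec2 a_position;
void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
    gl_PointSize = 3.0;
}
"""

FRAG = """
void main() {
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
"""

class ScatterCanvas(app.Canvas):
    def __init__(self):
        app.Canvas.__init__(self, keys='interactive')
        # 1) the canvas, 2) the program linking our data to shader variables
        self.program = gloo.Program(VERT, FRAG)
        self.positions = np.random.uniform(-1, 1, (500, 2)).astype(np.float32)
        self.program['a_position'] = self.positions
        # 3) the timer driving the real-time updates
        self.timer = app.Timer(connect=self.on_timer, start=True)
        self.show()

    def on_draw(self, event):
        gloo.clear('black')
        self.program.draw('points')

    def on_timer(self, event):
        # Move the particles and re-upload their positions to the GPU
        self.positions += np.random.normal(0, 0.005, self.positions.shape).astype(np.float32)
        self.program['a_position'] = self.positions
        self.update()

if __name__ == '__main__':
    ScatterCanvas()
    app.run()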
I need to plot, in real time, a series of floating-point numbers from the serial port. The values are separated by the '\n' character, so the data sequence looks like this:
x1
x2
x3
...
How would you plot the data?
I am using an Arduino board, the data rate is 200 samples/s, and my PC runs Windows 7 64-bit.
I think a good choice is to use the pyqtgraph library. I started from the Plotting.py example in pyqtgraph (plenty more examples are available after installing pyqtgraph and then running python3 -m pyqtgraph.examples), but I don't know how to adapt this code to my needs (see below).
Thank you very much in advance.
from pyqtgraph.Qt import QtGui, QtCore
import numpy as np
import pyqtgraph as pg

# Set up the graphical window, its title and size
win = pg.GraphicsWindow(title="Sample process")
win.resize(1000, 600)
win.setWindowTitle('pyqtgraph example')

# Enable antialiasing for prettier plots
pg.setConfigOptions(antialias=True)

# Random data process
p6 = win.addPlot(title="Updating plot")
curve = p6.plot(pen='y')
data = np.random.normal(size=(10, 1000))  # 10 rows of 1000 random samples each

# Plot counter
ptr = 0

# Function for updating the data display
def update():
    global curve, data, ptr, p6
    curve.setData(data[ptr % 10])
    if ptr == 0:
        p6.enableAutoRange('xy', False)  # stop auto-scaling after the first data set is plotted
    ptr += 1

# Update the data display every 50 ms
timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(50)

## Start Qt event loop unless running in interactive mode or using PySide.
if __name__ == '__main__':
    import sys
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
Here is code that works fine. The main work happens in the update() function: it reads the input value from the serial port, updates the array Xm (which holds the input values), and then updates the associated curve.
This code was posted for the sake of simplicity and works only for low data rates (less than about 100 samples/s). For higher data rates, the update() function should be modified to read a set of values (instead of a single one) from the serial port and append the whole set to the array Xm; see the sketch after the code below.
I hope this answer is useful for you, and thank you very much for your help!
# Import libraries
from numpy import *
from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
import serial

# Create the serial port object
portName = "COM12"  # replace this port name with yours!
baudrate = 9600
ser = serial.Serial(portName, baudrate)

### START QtApp #####
app = QtGui.QApplication([])  # you MUST do this once (initialize things)
####################

win = pg.GraphicsWindow(title="Signal from serial port")  # creates a window
p = win.addPlot(title="Realtime plot")  # creates empty space for the plot in the window
curve = p.plot()  # create an empty "plot" (a curve to plot)

windowWidth = 500  # width of the window displaying the curve
Xm = linspace(0, 0, windowWidth)  # create the array that will contain the relevant time series
ptr = -windowWidth  # set first x position

# Realtime data plot. Each time this function is called, the data display is updated
def update():
    global curve, ptr, Xm
    Xm[:-1] = Xm[1:]  # shift the data one sample to the left
    value = ser.readline()  # read a line (single value) from the serial port
    Xm[-1] = float(value)  # put the newest value at the end of the array
    ptr += 1  # update the x position for displaying the curve
    curve.setData(Xm)  # set the curve with this data
    curve.setPos(ptr, 0)  # shift the x axis so the curve scrolls
    QtGui.QApplication.processEvents()  # you MUST process the plot now

### MAIN PROGRAM #####
# this is a brutal infinite loop calling your realtime data plot function
while True:
    update()

### END QtApp ####
pg.QtGui.QApplication.exec_()  # you MUST put this at the end
##################
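For higher data rates, a hedged sketch of the modified update(); it reuses ser, Xm, windowWidth, ptr and curve from the code above, relies on pyserial's inWaiting(), and assumes complete lines keep arriving promptly:
def update():
    global curve, ptr, Xm
    batch = []
    while ser.inWaiting():  # drain every line currently buffered
        batch.append(float(ser.readline()))
    if batch:
        batch = batch[-windowWidth:]  # keep at most one window of samples
        Xm = roll(Xm, -len(batch))    # shift left by the size of the batch
        Xm[-len(batch):] = batch      # append the whole batch at once
        ptr += len(batch)
        curve.setData(Xm)
        curve.setPos(ptr, 0)
    QtGui.QApplication.processEvents()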
The best way to deal with this may be to run a separate "worker" thread to process your data and then update the graph. I believe you can do it with QThread.
I don't know the exact reason why, but apparently .processEvents() is not the best way to solve this problem.
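A minimal sketch of that QThread idea (SerialWorker and on_sample are made-up names): the worker does the blocking serial reads, and the GUI thread plots in a slot:
from pyqtgraph.Qt import QtCore

class SerialWorker(QtCore.QThread):
    newSample = QtCore.pyqtSignal(float)

    def __init__(self, ser, parent=None):
        super(SerialWorker, self).__init__(parent)
        self.ser = ser
        self.running = True

    def run(self):
        # blocking reads happen here, off the GUI thread
        while self.running:
            self.newSample.emit(float(self.ser.readline()))

# In the GUI thread, connect the signal to a slot that shifts Xm and
# calls curve.setData(Xm), then start the worker:
#   worker = SerialWorker(ser)
#   worker.newSample.connect(on_sample)
#   worker.start()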
So I'm trying to plot real-time data with PyQt. I have it working in a sense, but matplotlib seems to slow it down a lot; if I reduce the number of plots, I can get the sample rate I want. I have two timer events, one that gathers data and another that plots, with a ratio of 10 to 1.
Searching for a fix, I found out about blitting with Matplotlib on SO and was led to tutorials like this one. The problem I'm seeing is that they only deal with the plotting part; every attempt I have made at sampling and then plotting a portion of the gathered data ends in a crash.
So an outline of what I'd like to do would be this:
class graph(ParentMplCanvas):
    def __init__(self, *args, **kwargs):
        self.axes = fig.add_subplot(111)
        self.x = range(1000)
        self.data = np.zeros(1000)
        self.i = 0
        # set a timer for the data to be sampled once every 10 ms
        self.updateData()
        self.line, = self.axes.plot(self.x, self.data, animated=True)
        # Update the plot every second
        self.gTimer = fig.canvas.new_timer(interval=1000)
        self.gTimer.add_callback(self.update_figure)
        self.gTimer.start()

    def updateData(self):
        self.i += 1
        # append with 0's if self.i > 1000
        self.data[self.i] = self.funcToGrabCurrentValFromDevice()
        self.updateTimer()

    def updateTimer(self):
        self.dTimer = Timer(0.01, self.updateData)
        self.dTimer.start()

class ApplicationWindow(gui.QMainWindow):
    # some stuff to create docked windows and put the above graph in a
    # docked window; see [how I did it here][2]
    pass
Maybe I am just not understanding the blitting, but in every example I'm seeing, they already have all the data up front. Any time I've tried to access just a portion of the data, it seems to crash the program. I want to plot a 100-sample region at a time and have it continuously update.
Where I am lost:
How do I properly write update_figure so that I can plot the last 100 (or n) data points that were sampled?
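A hedged sketch of one way update_figure could look with blitting, under the assumptions that self.background was cached once after the first draw and that self.i tracks the index of the latest sample:
def update_figure(self, n=100):
    # self.background is assumed to have been cached once, e.g.:
    #   self.background = fig.canvas.copy_from_bbox(self.axes.bbox)
    start = max(0, self.i - n)
    window = self.data[start:self.i]  # the last n sampled points
    self.line.set_data(range(len(window)), window)
    canvas = self.axes.figure.canvas
    canvas.restore_region(self.background)  # restore the cached empty axes
    self.axes.draw_artist(self.line)        # redraw only the animated line
    canvas.blit(self.axes.bbox)             # blit just the axes region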
What is the fastest Python mechanism for getting data read off a serial port to a separate process that plots that data?
I am plotting EEG data in real time that I read off a serial port. The serial-port reading and packet-unpacking code works fine: if I read and store the data, then later plot the stored data, it looks great. Like this:
(Note: the device generates a test sine wave for debugging.)
I am using pyqtgraph for the plotting. Updating the plot in the same process that reads the serial data is not an option, because the slight delay between serial read() calls causes the serial buffer to overflow and bad checksums ensue. pyqtgraph has provisions for rendering the graph in a separate process, which is great, but the bottleneck seems to be the inter-process communication. I have tried various configurations of Pipe() and Queue(), all of which result in laggy, flickering graph updates. So far, the smoothest, most consistent method of getting new values from the serial port to the graph is through shared memory, like so:
from pyqtgraph.Qt import QtGui
import pyqtgraph as pg
from multiprocessing import Process, Array, Value, Pipe
from serial_interface import EEG64Board
from collections import deque

def serialLoop(val):
    eeg = EEG64Board(port='/dev/ttyACM0')
    eeg.openSerial()
    eeg.sendTest('1')  # tells the eeg device to start sending data
    while True:
        data = eeg.readEEG()  # returns an array of the 8 latest values, one per channel
        if data != False:  # readEEG() returns False on a bad checksum
            val.value = data[7]

val = Value('d', 0.0)
q = deque([], 500)

def graphLoop():
    global val, q
    plt = pg.plot(q)
    while True:
        q.append(val.value)
        plt.plot(q, clear=True)
        QtGui.QApplication.processEvents()

serial_proc = Process(target=serialLoop, args=(val,), name='serial_proc')
serial_proc.start()

try:
    while True:
        graphLoop()
except KeyboardInterrupt:
    print('interrupted')
The above code performs real-time plotting by simply pulling the latest value recorded by serialLoop and appending it to a deque. While the plot updates smoothly, it only grabs about 1 in 4 values, as seen in the resulting plot:
So, what multi-process or thread structure would you recommend, and what form of IPC should be used between them?
Update:
I am receiving 2,000 samples per second. I figure that if I update the display at 100 fps and add 20 new samples per frame, then I should be good. What is the best Python multithreading mechanism for implementing this?
This may not be the most efficient approach, but the following code achieves 100 fps for one plot, or 20 fps for 8 plots. The idea is very simple: share an array, an index, and a lock. The serial process fills the array and increments the index while it holds the lock; the plotting process periodically grabs all of the new values from the array and resets the index, again under the lock.
from pyqtgraph.Qt import QtGui
import pyqtgraph as pg
from multiprocessing import Process, Array, Value, Lock
from serial_interface import EEG64Board
from collections import deque

def serialLoop(arr, idx, lock):
    eeg = EEG64Board(port='/dev/ttyACM0')
    eeg.openSerial()
    eeg.sendTest('1')  # tells the eeg device to start sending data
    while True:
        data = eeg.readEEG()  # returns an array of the 8 latest values, one per channel
        if data != False:  # readEEG() returns False on a bad checksum
            lock.acquire()
            for i in range(8):
                arr[i][idx.value] = data[i]
            idx.value += 1
            lock.release()
    eeg.sendTest('2')

arr = [Array('d', range(1024)) for i in range(8)]
idx = Value('i', 0)
q = [deque([], 500) for i in range(8)]
iq = deque([], 500)
lock = Lock()

lastUpdate = pg.ptime.time()
avgFps = 0.0

def graphLoop():
    global q, lock, arr, iq, lastUpdate, avgFps
    win = pg.GraphicsWindow()
    plt = list()
    for i in range(8):
        plt += [win.addPlot(row=(i + 1), col=0, colspan=3)]
    #iplt = pg.plot(iq)
    counter = 0
    while True:
        lock.acquire()
        #time.sleep(.01)
        for i in range(idx.value):  # drain every new sample...
            for j in range(8):
                q[j].append(arr[j][i])
        idx.value = 0  # ...and reset the shared index
        lock.release()
        for i in range(8):
            plt[i].plot(q[i], clear=True)
        QtGui.QApplication.processEvents()
        counter += 1
        # keep a running average of the frame rate
        now = pg.ptime.time()
        fps = 1.0 / (now - lastUpdate)
        lastUpdate = now
        avgFps = avgFps * 0.8 + fps * 0.2

serial_proc = Process(target=serialLoop, args=(arr, idx, lock), name='serial_proc')
serial_proc.start()

graphLoop()
serial_proc.terminate()
I wrote a Python script that uses a heuristic to cluster 2D points in space. I'm representing each cluster with a different color.
Presently, the structure of my program is:
def cluster():
    while True:
        <do_some_work>
        if <certain_condition_is_met>:
            print "ADDED a new cluster:", cluster_details
        if <breaking_condition_is_met>:
            break
    return Res

def plot_cluster(result):
    <chooses a unique color for each cluster, and calls
     pyplot.plot(x_coods, y_coods)
     for each cluster>

def driver_function():
    result = cluster()
    plot_cluster(result)
    pyplot.show()
That is, presently, I just obtain the final image of the clustered points, where each cluster is shown in a different color.
However, I need to create an animation of how the program proceeds, i.e., something like this: initially, all points are the same color, say blue; then, inside the cluster() function, instead of simply printing "ADDED a new cluster", the color of the points in the new cluster should change in the image already on screen.
Is there any way I can generate a video of such a program using matplotlib?
I saw an example of
`matplotlib.animation.FuncAnimation( ..., animate, ...)`
but it repeatedly calls an animate function that should return plottable values, which I don't think my program can do.
Is there any way to obtain such a video of how this program proceeds?
Getting this to work the way you want will require a bit of refactoring, but I think something like this will work:
import matplotlib.pyplot as plt
from matplotlib import animation

class cluster_run(object):
    def __init__(self, ...):
        # whatever setup you want
        self.Res = None

    def reset(self):
        # clears all work and starts from scratch
        pass

    def run(self):
        self.reset()
        while True:
            #<do_some_work>
            if <certain_condition_is_met>:
                print "ADDED a new cluster:", cluster_details
                yield data_to_plot
            if <breaking_condition_is_met>:
                break
        self.Res = Res

class cluster_plotter(object):
    def __init__(self):
        self.fig, self.ax = plt.subplots(1, 1)

    def plot_cluster(self, data_to_plot):
        # does whatever plotting you want:
        # unpack data_to_plot into x_coords, y_coords and draw them
        ln = self.ax.plot(x_coords, y_coords)
        return ln

cp = cluster_plotter()
cr = cluster_run()

writer = animation.writers['ffmpeg'](fps=30, bitrate=16000, codec='libx264')
ani = animation.FuncAnimation(cp.fig, cp.plot_cluster, cr.run())
ani.save('out.mp4', writer=writer)
cp.plot_cluster(cr.Res)
Would it be sufficient to use pyplot.savefig('[frame].png'), where [frame] is the sequential frame number of your plot, and then stitch these images together using a codec such as ffmpeg?
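A minimal sketch of that approach (the save_frame helper and the ffmpeg flags are illustrative):
import matplotlib.pyplot as plt

frame = 0
def save_frame():
    # call this each time a cluster is added, instead of just printing
    global frame
    plt.savefig('frame%04d.png' % frame)  # zero-padded so the files sort correctly
    frame += 1

# Afterwards, stitch the images on the command line (assumes ffmpeg is installed):
#   ffmpeg -framerate 10 -i frame%04d.png -c:v libx264 out.mp4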