animating image stack with vispy - python

I'm trying to migrate from MATLAB to Python, and one of the things I frequently rely on during development in MATLAB is the ability to rapidly visualize slices of a datacube by looping through layers and calling drawnow, e.g.
tst = randn(1000,1000,100);
for n = 1:size(tst, 3)
    imagesc(tst(:,:,n));
    drawnow;
end
When I tic/toc this in MATLAB it shows that the figure is updating at about 28 fps. In contrast, when I try to do this using matplotlib's imshow() command it runs at a snail's pace in comparison, even using set_data().
import matplotlib as mp
import matplotlib.pyplot as plt
import numpy as np
tmp = np.random.random((1000,1000,100))
myfig = plt.imshow(tmp[:,:,0], aspect='auto')
for i in np.arange(0, tmp.shape[2]):
    myfig.set_data(tmp[:,:,i])
    plt.title(str(i))
    plt.pause(0.001)
On my computer this runs at about 16 fps at the default (very small) figure size, and if I resize it to match the size of the MATLAB figure it slows down to about 5 fps. From some older threads I saw a suggestion to use glumpy, and I installed it along with the appropriate packages and libraries (glfw, etc.); the package itself works fine, but it no longer supports the easy image visualization suggested in a previous thread.
I then downloaded vispy, and I can make an image with it using code from this thread as a template:
import sys
from vispy import scene
from vispy import app
import numpy as np
canvas = scene.SceneCanvas(keys='interactive')
canvas.size = 800, 600
canvas.show()
# Set up a viewbox to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()
# Create the image
img_data = np.random.random((800,800, 3))
image = scene.visuals.Image(img_data, parent=view.scene)
view.camera.set_range()
# unsuccessfully tacked on the end to see if I can modify the figure.
# Does nothing.
img_data_new = np.zeros((800,800, 3))
image = scene.visuals.Image(img_data_new, parent=view.scene)
view.camera.set_range()
Vispy seems very fast and this looks like it will get me there, but how do you update the canvas with new data? Thank you,

See the ImageVisual.set_data method:
# Create the image
img_data = np.random.random((800,800, 3))
image = scene.visuals.Image(img_data, parent=view.scene)
view.camera.set_range()
# Generate new data :
img_data_new = np.zeros((800,800, 3))
img_data_new[400:, 400:, 0] = 1. # red square
image.set_data(img_data_new)
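For animating a whole stack, a minimal sketch (assuming a (height, width, n_frames) float datacube; the array and frame rate here are just illustrative) is to drive ImageVisual.set_data from a vispy app.Timer so the event loop keeps redrawing:
import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
view = canvas.central_widget.add_view()

cube = np.random.random((800, 800, 100)).astype(np.float32)  # stand-in datacube
image = scene.visuals.Image(cube[:, :, 0], parent=view.scene)
view.camera = scene.PanZoomCamera(aspect=1)
view.camera.set_range()

frame = [0]
def on_timer(event):
    # advance to the next slice and push it to the texture
    frame[0] = (frame[0] + 1) % cube.shape[2]
    image.set_data(cube[:, :, frame[0]])
    canvas.update()

timer = app.Timer(interval=1 / 30., connect=on_timer, start=True)

if __name__ == '__main__':
    app.run()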

Related

How to remove or replace the PyVista window icon?

How can I remove or change the PyVista render window's icon? I have also tried searching the docs for this but didn't find any answers.
This is not currently supported directly in PyVista, but this is a great idea and I'll open a pull request to implement this once a major refactor of render windows is done.
In the meantime you can use raw VTK: the SetIcon() method on render windows. According to the docs this only works on Windows and Linux, though.
As of PyVista 0.36.1 you have direct access to plotter.ren_win, which is a VTK render window object. According to the docs the icon should be a vtkImageData; in practical PyVista terms this means a UniformGrid with dimensions (n, m, 1).
Some experimentation suggests that the icon has to have uint8 active scalars of shape (n_points, 3) or (n_points, 4), but I could only get the icon to actually show up on my Linux machine with the latter setup. It seems that non-square icons get tiled to a square shape, so you have to crop your image to a square first. Finally, you need to call ren_win.Render() before setting the icon, otherwise problems arise (on my Linux machine: a segmentation fault).
Here's a small example:
import numpy as np
import pyvista as pv
from pyvista import examples
# example icon: cropped puppy mesh turned from RGB to RGBA
icon = examples.download_puppy().extract_subset([0, 1199, 0, 1199, 0, 0])
data = np.empty((icon.n_points, 4), dtype=np.uint8)
data[:, :-1] = icon.point_data['JPEGImage']
data[:, -1] = 255 # pad with full opacity
icon.point_data['JPEGImage'] = data
# create a plotter with a dummy mesh and set its icon
plotter = pv.Plotter()
plotter.add_mesh(pv.Dodecahedron())
ren_win = plotter.ren_win # render window
ren_win.Render() # important against segfault
ren_win.SetIcon(icon)
plotter.show()
With this my bottom panel looks like this:
It also works for my window switcher:
(Interestingly, the window title in the title bar is "PyVista" which is the default title in pyvista.Plotter.__init__(), but in the window switcher I see "Vtk". I don't know why this is but I'll also try to see if we can fix this.)
Opacity handling seems to work too:
# add opacity in a nontrivial pattern
i, j = np.indices(icon.dimensions[:-1])
alpha = ((np.sin(2*i/icon.dimensions[0]*2*np.pi) * np.cos(j/icon.dimensions[1]*2*np.pi)) * 255).round().astype(np.uint8)
icon.point_data['JPEGImage'][:, -1] = alpha.ravel()
with this icon the window switcher looks like this:
It looks funky but that's just because the opacity pattern itself is funky. Transparency shows up as the switcher's semitransparent background colour on my system.

Why does matplotlib / Qt auto close plots?

Before plotting using matplotlib, you must specify your display's DPI if you have a high DPI display, since otherwise the image is too small. I have a 4K display, so I definitely need to do this. (I think that matplotlib should automatically do this for you, but that is another topic...)
As a first attempt to specify the DPI, consider the code below. It manually specifies the display's DPI and then creates and plots a test DataFrame:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
# method #1: manually specify my display's DPI:
dpi = 163 # this value is valid for my Dell U2718Q 4K (3840 x 2160) display
plt.rcParams["figure.dpi"] = dpi
print("plt.matplotlib.rcParams[\"figure.dpi\"] = " + str(plt.matplotlib.rcParams["figure.dpi"]))
# define a test DataFrame (here, chose to calculate sin and cos over their range of 2 pi):
n = 100
x = (2 * np.pi / n) * np.arange(n)
df = pd.DataFrame({
    "sin(x)": np.sin(x),
    "cos(x)": np.cos(x),
})
# plot the DataFrame:
df.plot(figsize = (12, 8), title = "sin and cos", grid = True, color = ["red", "green"])
When I put the code above into a file and run it all at once in PyCharm, everything behaves exactly as expected: the script completes without error, the plot is generated at the correct size, and the plot remains open in a window after the script ends.
So far, so good.
But the code above is brittle: run it on a computer with a different display DPI, and the image will not be sized correctly.
Doing a web search, I found this link, which has code that claims to automatically determine your display's DPI. My (slight) adaptation of the code is this:
# method #2: call code to determine my display's DPI (only works if the backend is Qt)
if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    qApp = QtWidgets.QApplication(sys.argv)
    plt.matplotlib.rcParams["figure.dpi"] = qApp.desktop().physicalDpiX()
If I modify my file to use the code above ("method #2") instead of the manual DPI setting ("method #1"), I find that the script completes without error, but the plot only comes up for a brief instant before being automatically closed!
By successively commenting out lines in the "method #2" code, starting with the last and working backwards, I have determined that the culprit is the call to QtWidgets.QApplication(sys.argv).
In particular, if I reduce the "method #2" code to just this
if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    QtWidgets.QApplication(sys.argv)
I get this plot auto close behavior.
Another defect is that the original "method #2" code calculates the DPI of my monitor, a Dell U2718Q, to be 160, when it really is 163: in this link go to p. 3 / 4 and look at the Pixels per inch (PPI) spec.
Does anyone know of a solution to this?
Better code to determine the DPI?
A modification of the "method #2" code which will not cause plots to auto close?
Is this a bug that needs to be reported to matplotlib or Qt?
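(One hedged, untested sketch of a possible mitigation, not a confirmed answer: Qt allows only one QApplication per process, so instead of constructing a second one you can reuse an existing instance via QApplication.instance() and only create one if none exists, then query the DPI from it. The variable names follow the "method #2" snippet above.)
import sys
import matplotlib.pyplot as plt

if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    qApp = QtWidgets.QApplication.instance()  # reuse an existing QApplication if there is one
    if qApp is None:
        qApp = QtWidgets.QApplication(sys.argv)
    plt.matplotlib.rcParams["figure.dpi"] = qApp.desktop().physicalDpiX()
Whether this avoids the auto-close behavior depends on which QApplication ends up owning the figure window, so treat it as a starting point rather than a fix.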

fast way to display video from arrays in jupyter-lab

I'm trying to display a video from some arrays in a notebook in JupyterLab. The arrays are produced at runtime. What method of displaying the images can deliver a (relatively) high framerate? Using matplotlib and imshow is a bit slow. The pictures are around 1.8 megapixels each.
Below is a very small example to illustrate what I want to achieve:
while(True):  # should run at least 30 times per second
    array = get_image()  # returns RGBA numpy array
    show_frame(array)    # function I search for
The fastest way (for debugging purposes, for instance) is to use matplotlib inline and the matplotlib animation package. Something like this worked for me:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
from IPython.display import HTML

# np array with shape (frames, height, width, channels)
video = np.array([...])

fig = plt.figure()
im = plt.imshow(video[0,:,:,:])
plt.close()  # this is required to not display the generated image

def init():
    im.set_data(video[0,:,:,:])

def animate(i):
    im.set_data(video[i,:,:,:])
    return im

anim = animation.FuncAnimation(fig, animate, init_func=init, frames=video.shape[0],
                               interval=50)
HTML(anim.to_html5_video())
The video will be reproduced in a loop with a specified framerate (in the code above I set the interval to 50 ms, i.e., 20 fps).
Please note that this is a quick workaround and that IPython.display has a Video class (you can find the documentation here) that allows you to reproduce a video from a file or from a URL (e.g., from YouTube).
So you might also consider storing your data locally and leveraging the built-in Jupyter video player.
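A minimal sketch of that route, assuming imageio with an ffmpeg backend is available (the file name and frame array here are purely illustrative): encode the frames once, then play them with the notebook's HTML5 player.
import numpy as np
import imageio
from IPython.display import Video

frames = (np.random.random((60, 1080, 1920, 3)) * 255).astype(np.uint8)  # stand-in frames
imageio.mimwrite("preview.mp4", frames, fps=30)  # encode once at the desired framerate
Video("preview.mp4", embed=True)                 # built-in Jupyter video player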

Excessive memory usage in Matplotlib imshow

I've got a PyQt4 application that displays medium-sized images in a Matplotlib figure. The test image that I'm displaying is about 5 MB (2809 x 1241 pixels). I read in the data using GDAL, by the way. The image is read into an array with nodata values masked out, and this is then displayed with normalized values and a specified colormap.
It seems to use an inordinate amount of memory to display a 5 MB file. What I'm seeing is that it takes about 140 MB of memory to display this image read in at full resolution (the application uses 60 MB of memory with imshow commented out, versus 206 MB with it). The problem gets worse as images are displayed in multiple figures, since each one uses an additional 200 MB of memory. With about 3 or 4 figures displayed, the application starts bogging down as the memory usage gets into the 700-900 MB range.
I understand that matplotlib has to store all the pixels even though it displays only a downsampled subset to match the screen resolution. I'll probably end up writing routines to only read in enough pixels to match the figure size. But since this application will be displaying up to 8 maps on 8 separate screens, I'm concerned about it still using excessive memory.
So my questions are:
1) Does this seem like an inordinate amount of memory to be using for displaying a simple colormapped image? It does to me.
2) Is there something I could be doing to decrease this memory usage? For example, using integer datatypes, releasing memory, etc.
3) What other strategies should I be using to deal with this memory usage? For example, downsampling (might not be very effective at full screen resolution, 1900x1200), switching to a 64-bit architecture, etc.
Thanks,
Code below
import sys, os, random
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import matplotlib
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar
from matplotlib.figure import Figure
import matplotlib.colors as colors
import numpy as np
from osgeo import gdal, gdalconst
gridfile = r"i:\vistrails\workingfiles\secondseason\secondseason_workfile_2012_02_28b\brt_1\brt_prob_map.tif"
class AppForm(QMainWindow):
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        self.create_main_frame()
        ds = gdal.Open(gridfile, gdal.GA_ReadOnly)
        ary = ds.GetRasterBand(1).ReadAsArray(buf_ysize=500, buf_xsize=300)
        ndval = ds.GetRasterBand(1).GetNoDataValue()
        rasterdata = np.ma.masked_array(ary, mask=(ary == ndval))
        del ary
        self.axes.imshow(rasterdata, cmap=matplotlib.cm.jet)
        del rasterdata

    def create_main_frame(self):
        self.main_frame = QWidget()
        # Create the mpl Figure and FigCanvas objects.
        # 5x4 inches, 100 dots-per-inch
        self.dpi = 100
        self.fig = Figure((5.0, 4.0), dpi=self.dpi)
        self.canvas = FigureCanvas(self.fig)
        self.canvas.setParent(self.main_frame)
        self.axes = self.fig.add_subplot(111)
        self.mpl_toolbar = NavigationToolbar(self.canvas, self.main_frame)
        vbox = QVBoxLayout()
        vbox.addWidget(self.canvas)
        vbox.addWidget(self.mpl_toolbar)
        self.main_frame.setLayout(vbox)
        self.setCentralWidget(self.main_frame)

def main():
    app = QApplication(sys.argv)
    form = AppForm()
    form.show()
    app.exec_()

if __name__ == "__main__":
    main()
Memory issues with the use of imshow() have been noticed before, as here.
1/ Upgrade
As mentioned here, upgrading to the latest version of mpl may fix the problem.
2/ PIL
As an alternative, you may make use of the PIL library.
For jpg files, imshow() uses PIL if it is installed. You can also use the PIL module directly, as documented here.
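A minimal sketch of the PIL route (the array shape, target size, and variable names here are illustrative, not from the question): downsample the raster to roughly the figure's pixel size before handing it to imshow, so matplotlib only has to hold the reduced array.
import numpy as np
from PIL import Image

full = (np.random.random((1241, 2809)) * 255).astype(np.uint8)  # stand-in for the raster band
target = (500, 400)  # figure canvas size in pixels (5 x 4 inches at 100 dpi)

small = np.asarray(Image.fromarray(full).resize(target, Image.BILINEAR))
# then, inside the Qt app: self.axes.imshow(small, cmap=matplotlib.cm.jet)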

Matplotlib in Python - Drawing shapes and animating them

So I'm representing a token ring network (doing the simulation in SimPy). I'm a total newbie to matplotlib, but I was told that it'd be really good for representing my simulation visually.
So I googled around and found out how to draw shapes and lines, using add_patch and add_line respectively on the axes (I believe). So now I have this output, which is absolutely fine:
(can't post images yet!!)
http://img137.imageshack.us/img137/7822/screenshot20100121at120.png
But I'm getting this using the pylab.show() function, and what I think I want is to achieve this using the pylab.plot() function so that I can then update it as my simulation progresses using pylab.draw() afterward.
My code is as follows:
plab.ion()
plab.axes()
for circ in self.circleList:
    plab.gca().add_patch(circ)
for line in self.lineList:
    plab.gca().add_line(line)
plab.axis('scaled')
plab.show()
Where circleList and lineList are lists containing the circles and lines on the diagram
I'm probably misunderstanding something simple here, but I can't actually find any examples that aren't overtly graph based that use the plot() function.
Clarification:
How can I get that same output, using pylab.plot() instead of pylab.show() ?
Replicating your image using the plot method:
from pylab import *
points = []
points.append((-0.25, -1.0))
points.append((0.7, -0.7))
points.append((1,0))
points.append((0.7,1))
points.append((-0.25,1.2))
points.append((-1,0.5))
points.append((-1,-0.5))
points.append((-0.25, -1.0))
a_line = plot(*zip(*points))[0]
a_line.set_color('g')
a_line.set_marker('o')
a_line.set_markerfacecolor('b')
a_line.set_markersize(30)
axis([-1.5,1.5,-1.5,1.5])
show()
EDIT BASED ON COMMENTS
This uses the Python multiprocessing library to run the matplotlib animation in a separate process. The main process uses a queue to pass data to it, which then updates the plot image.
# general imports
import random, time
from multiprocessing import Process, Queue

# for matplotlib
import numpy as np
import matplotlib
matplotlib.use('GTKAgg')  # do this before importing pylab
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

def matplotLibAnimate(q, points):
    # set up initial plot
    fig = plt.figure()
    ax = fig.add_subplot(111)
    circles = []
    for point in points:
        ax.add_patch(Circle(point, 0.1))
    a_line, = ax.plot(*zip(*points))
    a_line.set_color('g')
    a_line.set_lw(2)

    currentNode = None
    def animate(currentNode=currentNode):
        while 1:
            newNode = q.get()
            if currentNode: currentNode.remove()
            circle = Circle(newNode, 0.1)
            currentNode = ax.add_patch(circle)
            circle.set_fc('r')
            fig.canvas.draw()

    # start the animation
    import gobject
    gobject.idle_add(animate)
    plt.show()

# initial points
points = ((-0.25, -1.0), (0.7, -0.7), (1, 0), (0.7, 1), (-0.25, 1.2), (-1, 0.5), (-1, -0.5), (-0.25, -1.0))

q = Queue()
p = Process(target=matplotLibAnimate, args=(q, points,))
p.start()

# feed animation data
while 1:
    time.sleep(random.randrange(4))
    q.put(random.sample(points, 1)[0])
Of course, after doing this I think you'll be better served by whatnick's image solution. I'd create my own GUI and not use matplotlib's built-in widget. I'd then "animate" my GUI by generating PNGs and swapping them.
It sounds like Mark has the answer you were looking for, but if you decide to go with whatnick's approach and build an animation from individual pngs, here is the code to implement Amit's suggestion to use mencoder (from http://en.wikibooks.org/wiki/Mplayer):
mencoder mf://*.png -mf w=400:h=400 -ovc lavc -lavcopts vcodec=xvid -of avi -o output.avi
The core technique is to update the data of the elements being rendered using set_data, and then call draw(). See if your circle and line elements have set_data functions. Otherwise you can use pyvtk. The other option is to render and save the plots to png files and later build an animation from those.
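As a minimal sketch of that update-then-draw idea applied to the ring above (the coordinates come from the earlier answer; the token colour and update loop are illustrative): keep one "token" circle and move its center from node to node, redrawing after each move. Patches expose center/xy rather than set_data, which is why the prose says to check what your elements support.
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

points = [(-0.25, -1.0), (0.7, -0.7), (1, 0), (0.7, 1),
          (-0.25, 1.2), (-1, 0.5), (-1, -0.5)]

plt.ion()
fig, ax = plt.subplots()
ax.plot(*zip(*(points + points[:1])), color='g', marker='o',
        markerfacecolor='b', markersize=30)
token = Circle(points[0], 0.1, fc='r')  # the circulating token
ax.add_patch(token)
ax.axis([-1.5, 1.5, -1.5, 1.5])

for node in points * 3:      # circulate the token around the ring a few times
    token.center = node      # move the patch instead of calling set_data
    fig.canvas.draw()
    plt.pause(0.2)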
