I've got a PyQt4 application that displays medium-sized images in a Matplotlib figure. The test image I'm displaying is about 5 MB (2809 x 1241 pixels). I read the data in using GDAL. The image is read into an array with nodata values masked out, and this is then displayed with normalized values and a specified colormap.
It seems to use an inordinate amount of memory to display a 5 MB file. What I'm seeing is that it takes about 140 MB of memory to display this image read in at full resolution (the application uses 60 MB with imshow commented out, versus 206 MB with it). The problem gets worse as images are displayed in multiple figures, since each one uses an additional 200 MB of memory. With about 3 or 4 figures displayed, the application starts bogging down as memory usage gets into the 700-900 MB range.
I understand that matplotlib has to store all the pixels even though it displays only a downsampled subset to match the screen resolution. I'll probably end up writing routines that read in only enough pixels to match the figure size. But since this application will be displaying up to 8 maps on 8 separate screens, I'm concerned about it still using excessive memory.
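Something like this hypothetical helper is what I have in mind (a sketch only; GDAL resamples on read when a smaller buffer is requested):
from osgeo import gdal

def read_to_fit(path, max_width_px, max_height_px):
    """Read only roughly as many pixels as the figure can display."""
    ds = gdal.Open(path, gdal.GA_ReadOnly)
    band = ds.GetRasterBand(1)
    scale = min(max_width_px / float(ds.RasterXSize),
                max_height_px / float(ds.RasterYSize), 1.0)
    # GDAL resamples during the read when buf_xsize/buf_ysize are smaller than the raster
    return band.ReadAsArray(buf_xsize=int(ds.RasterXSize * scale),
                            buf_ysize=int(ds.RasterYSize * scale))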
So my questions are:
1) Does this seem like an inordinate amount of memory to be using for displaying a simple colormapped image? It does to me.
2) Is there something I could be doing to decrease this memory usage? For example, using integer datatypes or releasing memory (see the sketch just after this list).
3) What other strategies should I be using to deal with this memory usage? For example downsampling (which might not be very effective at full-screen resolution, 1900x1200), switching to a 64-bit architecture, etc.
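To make question 2 concrete, here is the kind of integer conversion I have in mind (a sketch only; I'm not sure how much of matplotlib's internal copying it actually avoids):
import numpy as np

def to_uint8(masked):
    """Rescale a masked float raster to 0-255 so the array handed to imshow is 1 byte per pixel."""
    lo, hi = masked.min(), masked.max()           # masked (nodata) entries are ignored
    scaled = (masked - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)                # the mask is carried over to the result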
Thanks,
Code below
import sys, os, random
from PyQt4.QtCore import *
from PyQt4.QtGui import *

import matplotlib
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar
from matplotlib.figure import Figure
import matplotlib.colors as colors

import numpy as np
from osgeo import gdal, gdalconst

gridfile = r"i:\vistrails\workingfiles\secondseason\secondseason_workfile_2012_02_28b\brt_1\brt_prob_map.tif"


class AppForm(QMainWindow):
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        self.create_main_frame()

        # read the raster band and mask out the nodata values
        ds = gdal.Open(gridfile, gdal.GA_ReadOnly)
        ary = ds.GetRasterBand(1).ReadAsArray(buf_ysize=500, buf_xsize=300)
        ndval = ds.GetRasterBand(1).GetNoDataValue()

        rasterdata = np.ma.masked_array(ary, mask=(ary == ndval))
        del ary

        self.axes.imshow(rasterdata, cmap=matplotlib.cm.jet)
        del rasterdata

    def create_main_frame(self):
        self.main_frame = QWidget()

        # Create the mpl Figure and FigCanvas objects.
        # 5x4 inches, 100 dots-per-inch
        self.dpi = 100
        self.fig = Figure((5.0, 4.0), dpi=self.dpi)
        self.canvas = FigureCanvas(self.fig)
        self.canvas.setParent(self.main_frame)
        self.axes = self.fig.add_subplot(111)

        self.mpl_toolbar = NavigationToolbar(self.canvas, self.main_frame)

        vbox = QVBoxLayout()
        vbox.addWidget(self.canvas)
        vbox.addWidget(self.mpl_toolbar)
        self.main_frame.setLayout(vbox)

        self.setCentralWidget(self.main_frame)


def main():
    app = QApplication(sys.argv)
    form = AppForm()
    form.show()
    app.exec_()


if __name__ == "__main__":
    main()
Memory issues with the use of imshow() have been noticed before, as here.
1/ Upgrade
As mentioned here, upgrading to the latest version of mpl may fix the problem.
2/ PIL
As an alternative, you may make use of the PIL library.
For jpg files, imshow() uses PIL if it is installed. You can also use the PIL module directly, as documented here.
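For instance, a sketch (assuming Pillow is installed; the file name is only a placeholder) that downsamples with PIL before the array ever reaches imshow:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

img = Image.open("large_map.tif")   # placeholder path
img.thumbnail((1200, 1200))         # in-place downsample, preserves aspect ratio
plt.imshow(np.asarray(img), cmap=plt.cm.jet)
plt.show()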
Related
I am using PyQtGraph and am really enjoying it, but have hit upon an issue that may force me to move to something else.
I am displaying medical images (CT/MRI etc.) as numpy 2D or 3D arrays in the ImageView, which gives the nice slider view for volume data. The problem is these images are often low-res (256x256), and when viewed on large monitors or just zoomed in they look blocky and horrible.
How can I show these images antialiased? This seems to be possible as mentioned here:
How can anti-aliasing be enabled in a pyqtgraph ImageView?
and a few other places suggesting all you need to do is:
import pyqtgraph as pg
pg.setConfigOptions(antialias=True)
and enable antialiasing in the graphics view, which I assume would be this:
myImageViewWidget = pg.ImageView(parent=None)
myImageViewWidget.ui.graphicsView.setAntialiasing(True)
But this doesn't seem to do anything different in my code. What am I doing wrong?
I'm using Windows 10 (but need it to work on macOS / Darwin), Python 3.7, PySide2 (5.15.12), and PyQtGraph 0.12.3.
'Minimum' code to reproduce the issue (not quite minimal, but I want to keep ImageView subclassed, as that's how I have it in my code):
import sys
from PySide2.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QMainWindow,
    QWidget,
)
import pyqtgraph as pg
import numpy as np

pg.setConfigOptions(antialias=True)


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.cw = QWidget(self)
        self.cw.setAutoFillBackground(True)
        self.setCentralWidget(self.cw)
        self.layout = QHBoxLayout()
        self.cw.setLayout(self.layout)

        self.ImgWidget = MyImageWidget(parent=self)
        self.layout.addWidget(self.ImgWidget)

        self.show()


class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.ui.histogram.hide()
        self.ui.roiBtn.hide()
        self.ui.menuBtn.hide()
        self.ui.graphicsView.setAntialiasing(True)

        # 5 frames of 50x50 random noise
        img = (1000 * np.random.normal(size=(5, 50, 50))) - 500
        self.setImage(img)


def main():
    app = QApplication()
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
What you're referring to is not antialiasing.
Antialiasing smooths the portions of an image that cannot precisely "fit" a single physical pixel.
What you are seeing is in fact the opposite: each source pixel is large enough to be shown as it is, a square that possibly occupies several physical pixels.
What you probably want is a blur effect, which can be achieved through a QGraphicsBlurEffect set on the self.imageItem of the view:
class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        # ... (same setup as above)
        # QGraphicsBlurEffect comes from QtWidgets (PySide2.QtWidgets here)
        self.blurEffect = QGraphicsBlurEffect(blurRadius=1.1)
        self.imageItem.setGraphicsEffect(self.blurEffect)
Note that since the image item is always scaled and the blur effect is proportional, you may need to adjust the blur radius depending on the displayed resolution, possibly to even smaller values (but still bigger than 1.0).
Before plotting using matplotlib, you must specify your display's DPI if you have a high DPI display, since otherwise the image is too small. I have a 4K display, so I definitely need to do this. (I think that matplotlib should automatically do this for you, but that is another topic...)
As a first attempt to specify the DPI, consider the code below. It manually specifies the display's DPI and then creates and plots a test DataFrame:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
# method #1: manually specify my display's DPI:
dpi = 163 # this value is valid for my Dell U2718Q 4K (3840 x 2160) display
plt.rcParams["figure.dpi"] = dpi
print("plt.matplotlib.rcParams[\"figure.dpi\"] = " + str(plt.matplotlib.rcParams["figure.dpi"]))
# define a test DataFrame (here, chose to calculate sin and cos over their range of 2 pi):
n = 100
x = (2 * np.pi / n) * np.arange(n)
df = pd.DataFrame({
    "sin(x)": np.sin(x),
    "cos(x)": np.cos(x),
})
# plot the DataFrame:
df.plot(figsize = (12, 8), title = "sin and cos", grid = True, color = ["red", "green"])
When I put the code above into a file and run it all at once in PyCharm, everything behaves exactly as expected: the script completes without error, the plot is generated at the correct size, and the plot remains open in a window after the script ends.
So far, so good.
But the code above is brittle: run it on a computer with a different display DPI, and the image will not be sized correctly.
Doing a web search, I found this link, which has code that claims to automatically determine your display's DPI. My (slight) adaptation of the code is this:
# method #2: call code to determine my display's DPI (only works if the backend is Qt)
if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    qApp = QtWidgets.QApplication(sys.argv)
    plt.matplotlib.rcParams["figure.dpi"] = qApp.desktop().physicalDpiX()
If I modify my file to use the code above ("method #2") instead of the manual DPI setting ("method #1"), I find that the script completes without error, but the plot only comes up for a brief instant before being automatically closed!
By successively commenting out lines in the "method #2" code, starting with the last and working backwards, I have determined that the culprit is the call to QtWidgets.QApplication(sys.argv).
In particular, if I reduce the "method #2" code to just this
if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    QtWidgets.QApplication(sys.argv)
I still get this plot auto-close behavior.
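One variation I have considered, but not verified to avoid the auto-close, is to reuse an existing QApplication instance instead of always constructing a new one (the instance() check and fallback here are my own additions):
if plt.get_backend() == "Qt5Agg":
    from matplotlib.backends.qt_compat import QtWidgets
    qApp = QtWidgets.QApplication.instance()   # reuse an existing app if there is one
    if qApp is None:
        qApp = QtWidgets.QApplication(sys.argv)
    plt.matplotlib.rcParams["figure.dpi"] = qApp.desktop().physicalDpiX()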
Another defect is that the original "method #2" code calculates the DPI of my monitor, a Dell U2718Q, to be 160, when it really is 163: in this link, go to p. 3 / 4 and look at the Pixels per inch (PPI) spec.
Does anyone know of a solution to this?
Better code to determine the DPI?
A modification of the "method #2" code which will not cause plots to auto close?
Is this a bug that needs to be reported to matplotlib or Qt?
I am using pyqtgraph to plot some data and noticed that when I move the plot from my laptop screen to a second monitor, the scaling on the plot is affected:
[screenshots: laptop monitor vs. external monitor]
Notice that the axes got "compressed", and the plot is no longer scaled properly on the second monitor.
I found others reporting similar issues on the web, but could not find any real solution. One suggestion was to make the monitors' resolutions the same. I don't like this solution because I'd have to sacrifice laptop resolution to accommodate my lower-resolution external monitor.
The other solution I found was to add the line app.setAttribute(QtCore.Qt.AA_Use96Dpi) right after instantiating the QApplication in the main function, as shown below, to allegedly have Qt ignore the OS's DPI settings:
def main():
    import sys
    app = QtWidgets.QApplication(sys.argv)
    app.setAttribute(QtCore.Qt.AA_Use96Dpi)
    MainWindow = GraphWindow()
    MainWindow.show()
    sys.exit(app.exec_())
This seems at first to work, because the plotted data is scaled properly on the axes. However, it doesn't really work: adding this line affected the scaling of the axes on the laptop, as shown below (the same data is now plotted on axes that span 0 to 7000 on the x-axis and -2 to -26 dB on the y-axis):
[screenshot: laptop plot after adding AA_Use96Dpi]
but did "fix" the issue when moving the plot onto the second monitor to look like the first "original" laptop plot shown above.
This is particularly worrisome, because in the case of the laptop output after the app.setAttribute(QtCore.Qt.AA_Use96Dpi) instruction "looks" right, but misrepresents the actual data. I could have easily missed this had included this instruction when I first plotted the data.
What is the right way to have the plot accurately display regardless of the OS's DPI setting and monitor resolutions? It is very strange that the plotted data seems disassociated with the axis values.
Here is a minimal reproducible sample:
from PyQt5 import QtWidgets, QtCore
from pyqtgraph import PlotWidget, plot
import pyqtgraph as pg
import sys  # We need sys so that we can pass argv to QApplication
import os
from numpy.random import seed
from numpy.random import randint


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, *args, **kwargs):
        super(MainWindow, self).__init__(*args, **kwargs)
        self.graphWidget = pg.PlotWidget()
        self.setCentralWidget(self.graphWidget)

        x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        seed(1)
        y = randint(5, 35, 10)

        # plot data: x, y values
        self.graphWidget.plot(x, y)


def main():
    app = QtWidgets.QApplication(sys.argv)
    app.setAttribute(QtCore.Qt.AA_Use96Dpi)
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
The setAttribute solution never worked for me in that way, and the windll manipulation makes the GUI blurry...
Adding the following two lines before app = QApplication(sys.argv) solved my problem:
QApplication.setHighDpiScaleFactorRoundingPolicy(Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)
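For context, here is a minimal placement sketch (assuming PyQt5 and Qt 5.14 or later; adjust the imports for PySide2):
import sys
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication

# both calls must come before the QApplication is constructed
QApplication.setHighDpiScaleFactorRoundingPolicy(
    Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)

app = QApplication(sys.argv)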
Answers can be found here: https://github.com/pyqtgraph/pyqtgraph/issues/756
Quick Summary of this issue:
There are essentially two ways to solve this problem.
Make your app DPI-aware (by Androwei)
import ctypes
import platform

def make_dpi_aware():
    if int(platform.release()) >= 8:
        ctypes.windll.shcore.SetProcessDpiAwareness(True)

# add this code before "app = QtWidgets.QApplication(sys.argv)"
make_dpi_aware()
Set Qt.HighDpiScaleFactorRoundingPolicy to PassThrough (by andybarry)
# add this code before "app = QtWidgets.QApplication(sys.argv)"
QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
I have tried both, and they both work perfectly! Thanks to these contributors. Hope you can find this useful as well.
So I was able to successfully embed matplotlib into my PyQt5 program, except I am running into a problem where my code causes a matplotlib window to pop up and close while the plot for the embedded widget is being generated. I was able to track down where the problem comes from, but I am stuck on how to fix it.
def getHexabinData(self, shotsDf):
    # returns the object type of the shot / makes hexbin
    shotsHex = plt.hexbin(-shotsDf.LOC_X, shotsDf.LOC_Y,
                          extent=(-250, 250, 422.5, -47.5), cmap='Blues', gridsize=45, marginals=True, visible=False)
    print('done')
    # grabs object of hexbin of all shots

    makeDf = shotsDf[shotsDf.SHOT_MADE_FLAG == 1]
    # grabs the data frame of all the makes
    makesHex = plt.hexbin(-makeDf.LOC_X, makeDf.LOC_Y,
                          extent=(-250, 250, 422.5, -47.5), cmap=plt.cm.Reds, gridsize=45, marginals=True, visible=False)
    print('done')

    plt.close()
    # close the hexbin plot

    pctsByHex = np.true_divide(makesHex.get_array(), shotsHex.get_array())
    pctsByHex[np.isnan(pctsByHex)] = 0  # convert NaN values to 0

    sizesByHex = len(shotsHex.get_array()) * [0]
    sizesByHex = self.getSizeHexByZone(shotsDf, sizesByHex)
    sizesByHex = sizesByHex * 120
    # size 210 for figsize(12,11)

    print('hexes done')
    return shotsHex, pctsByHex, sizesByHex
I've traced the problem to the function above, which belongs to a separate class in a separate file that uses the following module:
import matplotlib.pyplot as plt
# instead of these imported modules below for the pyqt5 program
from matplotlib.patches import Circle, Rectangle, Arc
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import (
    FigureCanvasQTAgg as FigureCanvas,
    NavigationToolbar2QT as NavigationToolbar)
Apologies if this question is way too specific a problem. I've tried:
plt.close()
plt.hexbin(....visible=False)
but I still get this random matplotlib widget popup that opens and closes itself until the embedded widget shows the updated plot. Is there a fix for this, or something I am not seeing?
Do not use import matplotlib.pyplot as plt when you integrate Matplotlib in PyQt. The pyplot module has its own event loop and maintains its own list of windows. This clashes with PyQt, as you are now experiencing.
So remove the plt.close() statement. Instead just close the Qt window when needed.
A good example on how to integrate without using pyplot can be found here.
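For illustration, a minimal sketch (the helper name and random data are made up) that computes the hexbin arrays on a plain Figure which is never attached to a window:
import numpy as np
from matplotlib.figure import Figure

def hexbin_arrays(x, y):
    """Compute hexbin counts without pyplot, so no stray window can appear."""
    fig = Figure()              # plain Figure: not attached to any GUI backend
    ax = fig.add_subplot(111)
    hexes = ax.hexbin(x, y, extent=(-250, 250, 422.5, -47.5), gridsize=45)
    return hexes.get_array(), hexes.get_offsets()

counts, centers = hexbin_arrays(np.random.randn(500) * 100, np.random.randn(500) * 100)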
I'm trying to migrate from MATLAB to Python and one of the things I frequently rely on during development in Matlab is the ability to rapidly visualize slices of a datacube by looping through layers and calling drawnow, e.g.
tst = randn(1000,1000,100);
for n = 1:size(tst, 3)
    imagesc(tst(:,:,n));
    drawnow;
end
When I tic/toc this in MATLAB it shows that the figure is updating at about 28 fps. In contrast, when I try to do the same thing with matplotlib's imshow() command it runs at a snail's pace in comparison, even using set_data().
import matplotlib as mp
import matplotlib.pyplot as plt
import numpy as np

tmp = np.random.random((1000,1000,100))
myfig = plt.imshow(tmp[:,:,0], aspect='auto')   # start from the first slice
for i in np.arange(0, tmp.shape[2]):
    myfig.set_data(tmp[:,:,i])
    mp.pyplot.title(str(i))
    mp.pyplot.pause(0.001)
On my computer this runs at about 16 fps at the default (very small) size, and if I resize it to be larger, the same size as the MATLAB figure, it slows down to about 5 fps. In some older threads I saw a suggestion to use glumpy, and I installed it along with all of the appropriate packages and libraries (glfw, etc.); the package itself works fine, but it no longer supports the easy image visualization that was suggested in a previous thread.
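For reference, a blitting-based variant of the loop above (a sketch only; I have not benchmarked it) avoids redrawing the axes on every frame:
import matplotlib.pyplot as plt
import numpy as np

tmp = np.random.random((1000, 1000, 100))

fig, ax = plt.subplots()
im = ax.imshow(tmp[:, :, 0], aspect='auto', animated=True)
plt.show(block=False)
plt.pause(0.1)                                   # let the window appear first
background = fig.canvas.copy_from_bbox(ax.bbox)  # cache everything except the image

for i in range(tmp.shape[2]):
    fig.canvas.restore_region(background)
    im.set_data(tmp[:, :, i])
    ax.draw_artist(im)                           # redraw only the image artist
    fig.canvas.blit(ax.bbox)
    fig.canvas.flush_events()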
I then downloaded vispy, and I can make an image with it using code from this thread as a template:
import sys
from vispy import scene
from vispy import app
import numpy as np
canvas = scene.SceneCanvas(keys='interactive')
canvas.size = 800, 600
canvas.show()
# Set up a viewbox to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()
# Create the image
img_data = np.random.random((800,800, 3))
image = scene.visuals.Image(img_data, parent=view.scene)
view.camera.set_range()
# unsuccessfully tacked on the end to see if I can modify the figure.
# Does nothing.
img_data_new = np.zeros((800,800, 3))
image = scene.visuals.Image(img_data_new, parent=view.scene)
view.camera.set_range()
Vispy seems very fast and this looks like it will get me there, but how do you update the canvas with new data? Thank you,
See the ImageVisual.set_data method:
# Create the image
img_data = np.random.random((800,800, 3))
image = scene.visuals.Image(img_data, parent=view.scene)
view.camera.set_range()
# Generate new data :
img_data_new = np.zeros((800,800, 3))
img_data_new[400:, 400:, 0] = 1. # red square
image.set_data(img_data_new)
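To animate a whole stack of frames, here is a timer-driven sketch (illustrative names, default vispy app backend assumed) that calls set_data on each tick:
from vispy import app, scene
import numpy as np

canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
image = scene.visuals.Image(np.random.random((800, 800, 3)).astype(np.float32),
                            parent=view.scene)
view.camera.set_range()

def on_timer(event):
    # push a new random frame on every tick
    image.set_data(np.random.random((800, 800, 3)).astype(np.float32))
    canvas.update()

timer = app.Timer(interval=0.0, connect=on_timer, start=True)

if __name__ == '__main__':
    app.run()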