I am using PyQtGraph and am really enjoying it, but have hit upon an issue that may force me to move to something else.
I am displaying medical images (CT/MRI etc.) as numpy 2D or 3D arrays in the ImageView, which gives a nice slider view for volume data. The problem is that these images are often low-res (256x256), and when viewed on large monitors or just zoomed in they look blocky and horrible.
How can I show these images antialiased? This seems to be possible as mentioned here:
How can anti-aliasing be enabled in a pyqtgraph ImageView?
and a few other places suggesting all you need to do is:
import pyqtgraph as pg
pg.setConfigOptions(antialias=True)
and enable antialiasing in the graphics view, which I assume would be this:
myImageViewWidget = pg.ImageView(parent=None)
myImageViewWidget.ui.graphicsView.setAntialiasing(True)
But this doesn't seem to do anything different in my code. What am I doing wrong?
I'm using Windows 10 (but I need it to work on macOS/Darwin too), Python 3.7, PySide2 (5.15.12) and PyQtGraph 0.12.3.
'Minimal' code to reproduce the issue (not quite minimal, but I want to keep ImageView subclassed, as that's how I have it in my code):
import sys

from PySide2.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QMainWindow,
    QWidget,
)

import pyqtgraph as pg
import numpy as np

pg.setConfigOptions(antialias=True)


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.cw = QWidget(self)
        self.cw.setAutoFillBackground(True)
        self.setCentralWidget(self.cw)
        self.layout = QHBoxLayout()
        self.cw.setLayout(self.layout)
        self.ImgWidget = MyImageWidget(parent=self)
        self.layout.addWidget(self.ImgWidget)
        self.show()


class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.ui.histogram.hide()
        self.ui.roiBtn.hide()
        self.ui.menuBtn.hide()
        self.ui.graphicsView.setAntialiasing(True)

        # 5 frames of 50x50 random noise
        img = (1000 * np.random.normal(size=(5, 50, 50))) - 500
        self.setImage(img)


def main():
    app = QApplication()
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
What you're referring to is not antialiasing.
Antialiasing "smoothens" the portions of an image that cannot "fit" precisely a single physical pixel.
What you are seeing is in fact the opposite, as each source pixel is actually large enough to be shown as it is: a square that possibly occupies more physical pixels.
What you probably want is a blur effect, which can be achieved through a QGraphicsBlurEffect set on the self.imageItem of the view:
from PySide2.QtWidgets import QGraphicsBlurEffect


class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        # ... (the rest of your __init__ as above)
        self.blurEffect = QGraphicsBlurEffect(blurRadius=1.1)
        self.imageItem.setGraphicsEffect(self.blurEffect)
Note that since the image item is always scaled and the blur effect is proportional to that scaling, you might need to adjust the blur radius to smaller values depending on the displayed resolution (but keep it above 1.0).
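If you want to tune or disable the blur at runtime (say, from a settings dialog), a minimal sketch building on the class above could expose it through a helper; setSmoothing() is just a hypothetical name, while setBlurRadius() and setEnabled() are standard QGraphicsBlurEffect/QGraphicsEffect methods:

class MyImageWidget(pg.ImageView):
    # ... __init__ as above, creating self.blurEffect ...

    def setSmoothing(self, radius):
        # hypothetical helper: any positive radius updates the blur,
        # zero or a negative value disables the effect entirely
        if radius > 0:
            self.blurEffect.setBlurRadius(radius)
            self.blurEffect.setEnabled(True)
        else:
            self.blurEffect.setEnabled(False)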
I have a QDial widget and I want to beautify its circular edge by adding a QLabel, as in the following figure. However, I think this puts the QLabel on top, and the QDial no longer works!
Below is my simple code.
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import sys


class Window(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setGeometry(200, 200, 500, 500)
        self.UiComponents()
        self.show()

    def UiComponents(self):
        dial = QDial(self)
        dial.setGeometry(150, 150, 200, 200)
        label_1 = QLabel('', self)
        label_1.move(168, 168)
        label_1.resize(164, 164)
        label_1.setStyleSheet("border: 4px solid gray; border-radius: 82px;")


App = QApplication(sys.argv)
window = Window()
sys.exit(App.exec())
The "main" problem is that you're adding the label over the dial, so it won't be able to receive mouse events.
A theoretical solution could be to use label_1.setAttribute(Qt.WA_TransparentForMouseEvents), but that won't be a good idea, for the following reasons:
widget geometries should normally be managed by a layout manager, so you cannot rely on a "guess" done by trial and error: as soon as the window is resized, all geometries will change and you'll end up with a floating circle that will make everything worse;
even assuming you get the positioning right by intercepting the resize event with an event filter, you'd need to manually reset the stylesheet every time and ensure that it's properly aligned, but that cannot be guaranteed, because different size policies and other widgets could change the final radius of the dial;
what you see on your screen is almost never what users will see on theirs, for lots of reasons including the current OS and QStyle in use; see the following screenshots taken with three common Qt styles (Breeze, Oxygen and Windows):
Unfortunately, QDial has never received much care from developers, as it's a scarcely used widget that is hard to adapt for custom usage. As such, it doesn't support many appearance features, and there's no stylesheet configuration either.
If you want to change the look of the dial, the only safe possibility is to subclass it, override its paintEvent() and paint it on your own.
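As a rough, hypothetical sketch of that approach (PyQt5, to match the question): let QStyle paint the dial as usual, then draw a gray ring over it. The ring geometry is only an approximation of the dial's groove and would need tuning:

from PyQt5.QtWidgets import QDial
from PyQt5.QtGui import QPainter, QPen, QColor


class RingDial(QDial):
    # illustrative subclass: default dial painting plus a circular border
    def paintEvent(self, event):
        super().paintEvent(event)  # let QStyle draw the standard dial first
        painter = QPainter(self)
        painter.setRenderHint(QPainter.Antialiasing)
        painter.setPen(QPen(QColor('gray'), 4))
        # fit the ring inside the widget, leaving room for the pen width
        side = min(self.width(), self.height())
        radius = side // 2 - 4
        painter.drawEllipse(self.rect().center(), radius, radius)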
I am using pyqtgraph to plot some data and noticed that when I move the plot from my laptop screen to a second monitor, the scaling on the plot is affected:
laptop monitor:
external monitor:
notice that the axes got "compressed", and the plot is no longer scaled properly on the second monitor.
I found others reporting similar issues on the web, but could not find any real solution. One solution suggested was to make the monitors' resolutions the same. I don't like this solution because I'd have to sacrifice laptop resolution to accommodate my lower resolution external monitor.
The other solution I found was to add the line app.setAttribute(QtCore.Qt.AA_Use96Dpi) to the main loop, right after instantiating the QApplication as shown below, to allegedly have Qt ignore the OS's DPI settings:
def main():
    import sys
    app = QtWidgets.QApplication(sys.argv)
    app.setAttribute(QtCore.Qt.AA_Use96Dpi)
    MainWindow = GraphWindow()
    MainWindow.show()
    sys.exit(app.exec_())
This seems at first to work, because the plotted data is scaled properly on the axes. However, it doesn't really work: adding this line affected the scaling of the axes on the laptop as shown below (the same data is now plotted on axes that span 0 to 7000 on the x-axis and -2 to -26 dB on the y-axis), but it did "fix" the issue when moving the plot onto the second monitor, so that it looks like the first "original" laptop plot shown above.
This is particularly worrisome, because the laptop output after the app.setAttribute(QtCore.Qt.AA_Use96Dpi) instruction "looks" right but misrepresents the actual data. I could have easily missed this had I included the instruction when I first plotted the data.
What is the right way to have the plot display accurately regardless of the OS's DPI setting and monitor resolutions? It is very strange that the plotted data seems dissociated from the axis values.
Here is a minimal reproducible example:
from PyQt5 import QtWidgets, QtCore
from pyqtgraph import PlotWidget, plot
import pyqtgraph as pg
import sys  # We need sys so that we can pass argv to QApplication
import os
from numpy.random import seed
from numpy.random import randint


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self, *args, **kwargs):
        super(MainWindow, self).__init__(*args, **kwargs)
        self.graphWidget = pg.PlotWidget()
        self.setCentralWidget(self.graphWidget)

        x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        seed(1)
        y = randint(5, 35, 10)

        # plot data: x, y values
        self.graphWidget.plot(x, y)


def main():
    app = QtWidgets.QApplication(sys.argv)
    app.setAttribute(QtCore.Qt.AA_Use96Dpi)
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
The setAttribute solution never worked for me that way, and the windll manipulation makes the GUI blurry...
Adding the following two lines before app = QApplication(sys.argv) solved my problem:
QApplication.setHighDpiScaleFactorRoundingPolicy(Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)
Answers can be found here: https://github.com/pyqtgraph/pyqtgraph/issues/756
Quick Summary of this issue:
There are essentially two ways to solve this problem.
Make your app DPI-aware (by Androwei)
import ctypes
import platform

def make_dpi_aware():
    if int(platform.release()) >= 8:
        ctypes.windll.shcore.SetProcessDpiAwareness(True)

# add this code before "app = QtWidgets.QApplication(sys.argv)"
make_dpi_aware()
set Qt.HighDpiScaleFactorRoundingPolicy to PassThrough (by andybarry)
# add this code before "app = QtWidgets.QApplication(sys.argv)"
QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
I have tried both, and they both work perfectly! Thanks to these contributors. Hope you can find this useful as well.
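For reference, a minimal sketch of where these calls would sit relative to the QApplication in the question's main(); MainWindow is the class from the minimal reproducible example above, and the rounding-policy API requires Qt 5.14 or later:

import sys
from PyQt5 import QtWidgets, QtCore


def main():
    # both calls must run before the QApplication instance is created
    QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(
        QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough)
    QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling, True)

    app = QtWidgets.QApplication(sys.argv)
    window = MainWindow()  # MainWindow from the minimal reproducible example
    window.show()
    sys.exit(app.exec_())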
I am trying to make a PyQt5 application where the user can click on a graphics view window to place a rectangle. I managed to scale the window using the fitInView method, and that works all right, but whenever the window size changes, the relative size of newly placed objects changes as well. How can I make everything relative to the window/screen/graphicsView size? I tried to use the scene width, but it did weird things with scale and position when the window was resized.
Here is the class for my main window. gv is the graphics view and Ui_MainWindow is from Qt Designer.
class Window(Ui_MainWindow, QMainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.scene = QGraphicsScene()
        self.scene.setSceneRect(QRectF(0, 0, 100, 100))
        self.gv.setScene(self.scene)
        self.gv.fitInView(0, 0, 100, 100, Qt.KeepAspectRatio)

    def resizeEvent(self, e):
        self.gv.fitInView(0, 0, 100, 100, Qt.KeepAspectRatio)
Here is my custom graphics view class:
from PyQt5.QtWidgets import QGraphicsView
from PyQt5.QtGui import QPolygonF
from PyQt5.QtCore import QPoint, QRectF


class GraphicsView(QGraphicsView):
    def __init__(self, window):
        super().__init__(window)

    def mousePressEvent(self, QMouseEvent):
        rect = self.mapToScene(QMouseEvent.x(), QMouseEvent.y(), 5, 5)
        # rect = self.mapToScene(QMouseEvent.x(), QMouseEvent.y(), self.width()/20, self.height()/20)
        self.scene().addPolygon(rect)
As a side question, am I doing this the recommended way, i.e. using fitInView, mapToScene, etc.?
Also, when I set the scene rectangle, what values should I be choosing in place of 100? That was just arbitrary.
EDIT: I managed to achieve something similar to what I want by using the graphics view width as a base variable, as shown in the commented line in the code above. However, I now have the problem that resizing the window reveals areas of the scene that were not previously visible. Is there a way to stretch the scene, or something like that, to make sure it always fills the graphics view?
I need to perform a lot of operations on an image, so I use OpenCV. OpenCV is very efficient at image processing, but it is not great for building a GUI, so I decided to use PyQt to draw a custom GUI and OpenCV to process my image.
I created a very simple program, taken directly from the documentation: it reads a jpg picture and saves it in PNG format when the key s is pressed.
My aim is to replace the key s with a button that performs the same action using PyQt. I also want the window displayed by PyQt to behave like OpenCV's: namely, the imshow() function displays a window that fits the image size.
Here is my OpenCV simple code:
import numpy
import cv2


class LoadImage:
    def loadImage(self):
        img = cv2.imread('photo.jpg')
        cv2.imshow('Image on a window', img)
        k = cv2.waitKey(0)
        if k == 27:
            cv2.destroyAllWindows()
        elif k == ord('s'):
            cv2.imwrite('photopng.png', img)
            cv2.destroyAllWindows()


if __name__ == "__main__":
    LI = LoadImage()
    LI.loadImage()
Output:
Here is a simple PyQt code to draw a simple window:
import sys
from PyQt4 import QtGui


class DrawWindow:
    def drawWindow(self):
        app = QtGui.QApplication(sys.argv)
        w = QtGui.QWidget()
        # w.resize(250, 250)
        w.move(300, 300)
        w.setWindowTitle("Simple Window")
        w.show()
        sys.exit(app.exec_())


if __name__ == "__main__":
    DW = DrawWindow()
    DW.drawWindow()
How can I mix the 2 codes to reach my goal?
I modified some code based on your post. I didn't use OpenCV to render the image; instead, the image is converted to a QImage and drawn in paintEvent(), and keyPressEvent() captures the user input.
# -*- coding: utf-8 -*-
import numpy
import cv2
from PyQt4.QtGui import *
from PyQt4.QtCore import *


class MyDialog(QDialog):
    def __init__(self, parent=None):
        super(MyDialog, self).__init__(parent)
        self.cvImage = cv2.imread(r'cat.jpg')
        height, width, byteValue = self.cvImage.shape
        byteValue = byteValue * width
        cv2.cvtColor(self.cvImage, cv2.COLOR_BGR2RGB, self.cvImage)
        self.mQImage = QImage(self.cvImage, width, height, byteValue, QImage.Format_RGB888)

    def paintEvent(self, QPaintEvent):
        painter = QPainter()
        painter.begin(self)
        painter.drawImage(0, 0, self.mQImage)
        painter.end()

    def keyPressEvent(self, QKeyEvent):
        super(MyDialog, self).keyPressEvent(QKeyEvent)
        if 's' == QKeyEvent.text():
            cv2.imwrite("cat2.png", self.cvImage)
        else:
            app.exit(1)


if __name__ == "__main__":
    import sys

    app = QApplication(sys.argv)
    w = MyDialog()
    w.resize(600, 400)
    w.show()
    app.exec_()
You can create a QImage directly from the image data read by OpenCV, convert that to a QPixmap using QPixmap.fromImage and then use that to set the pixmap of a QLabel using the setPixmap method.
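A minimal sketch of that route, assuming a photo.jpg next to the script (PyQt4 to match the question); the key points are the BGR-to-RGB conversion and keeping the NumPy array alive while the QImage is in use:

import cv2
from PyQt4.QtGui import QApplication, QImage, QLabel, QPixmap

app = QApplication([])

bgr = cv2.imread('photo.jpg')               # OpenCV loads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # QImage expects RGB order
height, width, channels = rgb.shape
# rgb must stay referenced as long as the QImage wraps its buffer
qimage = QImage(rgb, width, height, channels * width, QImage.Format_RGB888)

label = QLabel()
label.setPixmap(QPixmap.fromImage(qimage))
label.resize(width, height)                 # the window fits the image size
label.show()

app.exec_()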
This page shows how to view a webcam feed using PySide and OpenCV: https://gist.github.com/bsdnoobz/8464000. I made it work for videos as well, by commenting out these two lines
self.capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, self.video_size.width())
self.capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, self.video_size.height())
and instead using
resized_bgr_frame = cv2.resize(
    bgr_frame,
    (self.video_size.width(), self.video_size.height()),
    interpolation=cv2.INTER_CUBIC if self.video_size.width() > bgr_frame.shape[1] else cv2.INTER_AREA)
on the video frame (where bgr_frame is the frame that was obtained with self.capture.read()).
(Note that there are some differences in the naming and namespace conventions for the OpenCV constants here, perhaps due to differing cv2 versions.)
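If you run into those constant-name differences, a small compatibility shim like the following can paper over them (these are the actual OpenCV 2.x vs 3.x+ spellings):

import cv2

capture = cv2.VideoCapture(0)

# OpenCV 2.x exposed capture properties under cv2.cv.CV_*,
# OpenCV 3 and later moved them directly under cv2.*
try:
    width_prop = cv2.CAP_PROP_FRAME_WIDTH         # OpenCV 3+
    height_prop = cv2.CAP_PROP_FRAME_HEIGHT
except AttributeError:
    width_prop = cv2.cv.CV_CAP_PROP_FRAME_WIDTH   # OpenCV 2.x
    height_prop = cv2.cv.CV_CAP_PROP_FRAME_HEIGHT

capture.set(width_prop, 640)
capture.set(height_prop, 480)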
I've got a PyQt4 application that displays medium-sized images in a Matplotlib figure. The test image that I'm displaying is about 5 MB (2809 x 1241 pixels). I read in the data using GDAL, by the way. The image is read into an array with nodata values masked out, and is then displayed with normalized values and a specified colormap.
It seems to use an inordinate amount of memory to display a 5 MB file. What I'm seeing is that it takes 140 MB of memory to display this image read in at full resolution (the application uses 60 MB of memory with imshow commented out, vs. 206 MB with it). The problem gets worse as images are displayed in multiple figures, as each one uses an additional 200 MB of memory. At about 3 or 4 figures displayed, the application starts bogging down as the memory usage gets into the 700-900 MB range.
I understand that matplotlib has to store all the pixels even though it displays only a downsampled subset to match the screen resolution. I'll probably end up writing routines to only read in enough pixels to match the figure size. But since this application will be displaying up to 8 maps on 8 separate screens, I'm concerned about it still using excessive memory.
So my questions are:
1) Does this seem like an inordinate amount of memory to be using for displaying a simple colormapped image? It does to me.
2) Is there something I could be doing to decrease this memory usage? For example using integer datatypes, releasing memory, etc.
3) What other strategies should I be using to deal with this memory usage? For example downsampling (might not be very effective at a full screen resolution of 1900x1200), switching to a 64-bit architecture, etc.
Thanks,
Code below
import sys, os, random
from PyQt4.QtCore import *
from PyQt4.QtGui import *

import matplotlib
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar
from matplotlib.figure import Figure
import matplotlib.colors as colors

import numpy as np
from osgeo import gdal, gdalconst

gridfile = r"i:\vistrails\workingfiles\secondseason\secondseason_workfile_2012_02_28b\brt_1\brt_prob_map.tif"


class AppForm(QMainWindow):
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        self.create_main_frame()

        ds = gdal.Open(gridfile, gdal.GA_ReadOnly)
        ary = ds.GetRasterBand(1).ReadAsArray(buf_ysize=500, buf_xsize=300)
        ndval = ds.GetRasterBand(1).GetNoDataValue()

        rasterdata = np.ma.masked_array(ary, mask=(ary == ndval))
        del ary

        self.axes.imshow(rasterdata, cmap=matplotlib.cm.jet)
        del rasterdata

    def create_main_frame(self):
        self.main_frame = QWidget()

        # Create the mpl Figure and FigCanvas objects.
        # 5x4 inches, 100 dots-per-inch
        #
        self.dpi = 100
        self.fig = Figure((5.0, 4.0), dpi=self.dpi)
        self.canvas = FigureCanvas(self.fig)
        self.canvas.setParent(self.main_frame)

        self.axes = self.fig.add_subplot(111)
        self.mpl_toolbar = NavigationToolbar(self.canvas, self.main_frame)

        vbox = QVBoxLayout()
        vbox.addWidget(self.canvas)
        vbox.addWidget(self.mpl_toolbar)
        self.main_frame.setLayout(vbox)
        self.setCentralWidget(self.main_frame)


def main():
    app = QApplication(sys.argv)
    form = AppForm()
    form.show()
    app.exec_()


if __name__ == "__main__":
    main()
Memory issues with the use of imshow() have been noticed before, as here.
1/ Upgrade
As mentioned here, upgrading to the latest version of mpl may fix the problem.
2/ PIL
As an alternative, you may make use of the PIL library.
When it comes to jpg files, imshow() uses PIL if it is installed. You can also use the PIL module directly, as documented here.
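For example, a minimal sketch of reading a downsampled copy with PIL before handing it to imshow(), so matplotlib only keeps the reduced array in memory (this assumes PIL/Pillow can read your TIFF; the nodata masking from the question would still need to be reapplied, and the path is illustrative):

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

img = Image.open(r'brt_prob_map.tif')         # illustrative path
img.thumbnail((1024, 1024), Image.ANTIALIAS)  # in-place downsample, keeps aspect ratio
                                              # (use Image.LANCZOS on newer Pillow versions)
ary = np.asarray(img)

plt.imshow(ary, cmap='jet')
plt.show()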