I need to perform a lot of operations on an image, so I used OpenCV. OpenCV is very efficient at image processing, but it is not well suited to building a GUI, so I decided to use PyQt to draw a custom GUI and OpenCV to process my image.
I created a very simple program, picked directly from the documentation. It simply reads a jpg picture and saves it in png format when the key s is pressed.
My aim is to replace the key s with a PyQt button that performs the same action. I also want the window displayed by PyQt to behave like OpenCV's: namely, the function imshow() displays a window that fits the image size.
Here is my OpenCV simple code:
import numpy
import cv2

class LoadImage:
    def loadImage(self):
        img = cv2.imread('photo.jpg')
        cv2.imshow('Image on a window', img)
        k = cv2.waitKey(0)
        if k == 27:  # Esc: quit without saving
            cv2.destroyAllWindows()
        elif k == ord('s'):  # s: save as png, then quit
            cv2.imwrite('photopng.png', img)
            cv2.destroyAllWindows()

if __name__ == "__main__":
    LI = LoadImage()
    LI.loadImage()
Output: (screenshot of the image displayed in a window that fits its size)
Here is a simple PyQt program that draws a window:
import sys
from PyQt4 import QtGui

class DrawWindow:
    def drawWindow(self):
        app = QtGui.QApplication(sys.argv)
        w = QtGui.QWidget()
        #w.resize(250, 250)
        w.move(300, 300)
        w.setWindowTitle("Simple Window")
        w.show()
        sys.exit(app.exec_())

if __name__ == "__main__":
    DW = DrawWindow()
    DW.drawWindow()
How can I combine the two programs to reach my goal?
I modified some code based on your post. I didn't use OpenCV to render the image; instead I render it with a QImage in the paintEvent, and use keyPressEvent to capture the user input.
# -*- coding: utf-8 -*-
import sys
import cv2
from PyQt4.QtGui import *
from PyQt4.QtCore import *

class MyDialog(QDialog):
    def __init__(self, parent=None):
        super(MyDialog, self).__init__(parent)
        self.cvImage = cv2.imread(r'cat.jpg')
        height, width, byteValue = self.cvImage.shape
        byteValue = byteValue * width
        # QImage expects RGB byte order, so convert the BGR data in place
        cv2.cvtColor(self.cvImage, cv2.COLOR_BGR2RGB, self.cvImage)
        self.mQImage = QImage(self.cvImage, width, height, byteValue, QImage.Format_RGB888)

    def paintEvent(self, QPaintEvent):
        painter = QPainter()
        painter.begin(self)
        painter.drawImage(0, 0, self.mQImage)
        painter.end()

    def keyPressEvent(self, QKeyEvent):
        super(MyDialog, self).keyPressEvent(QKeyEvent)
        if 's' == QKeyEvent.text():
            # self.cvImage now holds RGB data, but imwrite expects BGR,
            # so convert back before saving to avoid swapped channels
            cv2.imwrite("cat2.png", cv2.cvtColor(self.cvImage, cv2.COLOR_RGB2BGR))
        else:
            app.exit(1)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = MyDialog()
    w.resize(600, 400)
    w.show()
    app.exec_()
You can create a QImage directly from the image data read by OpenCV, convert it to a QPixmap using QPixmap.fromImage, and then use that to set the pixmap of a QLabel via its setPixmap method.
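For example, here is a minimal sketch of that approach, assuming PyQt4 and the photo.jpg file from the question:

import sys
import cv2
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)

cv_img = cv2.imread('photo.jpg')                  # BGR array of shape (h, w, 3)
cv_img = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)  # QImage expects RGB order
height, width, channels = cv_img.shape
qimage = QtGui.QImage(cv_img, width, height, channels * width,
                      QtGui.QImage.Format_RGB888)

# A QLabel resizes to its pixmap by default, so the window fits the image
label = QtGui.QLabel()
label.setPixmap(QtGui.QPixmap.fromImage(qimage))
label.show()
sys.exit(app.exec_())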
This page shows how to view a webcam feed using PySide and OpenCV: https://gist.github.com/bsdnoobz/8464000. I made it work for videos as well by commenting out these two lines
self.capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, self.video_size.width())
self.capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, self.video_size.height())
and instead using
resized_bgr_frame = cv2.resize(
    bgr_frame,
    (self.video_size.width(), self.video_size.height()),
    interpolation=cv2.INTER_CUBIC if self.video_size.width() > bgr_frame.shape[1] else cv2.INTER_AREA)
on the video frame (where bgr_frame is the frame that was obtained with self.capture.read()).
(Note that there are some differing naming and namespace conventions for the OpenCV constants going on here, perhaps due to differing cv2 versions.)
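Put together, the frame-update method might look roughly like the sketch below. This is a hedged reconstruction, not the gist's exact code: the names display_video_stream, self.capture, self.video_size and self.image_label follow the linked gist and may differ in your version (QImage and QPixmap come from PySide.QtGui there).

def display_video_stream(self):
    # Read a frame, scale it to the widget size, and show it in the label
    ret, bgr_frame = self.capture.read()
    if not ret:
        return
    # Scale in software instead of configuring the capture device:
    # cubic interpolation when enlarging, area interpolation when shrinking
    resized_bgr_frame = cv2.resize(
        bgr_frame,
        (self.video_size.width(), self.video_size.height()),
        interpolation=cv2.INTER_CUBIC
        if self.video_size.width() > bgr_frame.shape[1]
        else cv2.INTER_AREA)
    rgb_frame = cv2.cvtColor(resized_bgr_frame, cv2.COLOR_BGR2RGB)
    image = QImage(rgb_frame, rgb_frame.shape[1], rgb_frame.shape[0],
                   rgb_frame.strides[0], QImage.Format_RGB888)
    self.image_label.setPixmap(QPixmap.fromImage(image))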
Related
My current program is in Python and uses OpenCV. I rely on webcam captures and I am processing every captured frame:
import cv2

# use the webcam
cap = cv2.VideoCapture(0)
while True:
    # read a frame from the webcam
    ret, img = cap.read()
    # transform image
I would like to make a Kivy interface (or another graphical user interface) with buttons while keeping the existing webcam-capture functionality.
I found this example:
https://kivy.org/docs/examples/gen__camera__main__py.html
but it doesn't explain how to acquire the webcam image so it can be processed with OpenCV.
I found an older example:
http://thezestyblogfarmer.blogspot.it/2013/10/kivy-python-script-for-capturing.html
It saves screenshots to disk using the 'screenshot' function. I can then read the saved files and process them, but this seems like an unnecessary step.
What else can I try?
Found this example here: https://groups.google.com/forum/#!topic/kivy-users/N18DmblNWb0
It converts the OpenCV captures to Kivy textures, so you can apply any kind of cv transformation before displaying the frame in your Kivy interface.
__author__ = 'bunkus'
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from kivy.clock import Clock
from kivy.graphics.texture import Texture

import cv2

class CamApp(App):
    def build(self):
        self.img1 = Image()
        layout = BoxLayout()
        layout.add_widget(self.img1)

        # opencv2 stuff
        self.capture = cv2.VideoCapture(0)
        cv2.namedWindow("CV2 Image")
        Clock.schedule_interval(self.update, 1.0 / 33.0)
        return layout

    def update(self, dt):
        # display image from cam in opencv window
        ret, frame = self.capture.read()
        cv2.imshow("CV2 Image", frame)
        # convert it to texture
        buf1 = cv2.flip(frame, 0)
        buf = buf1.tostring()
        texture1 = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
        # if working on RASPBERRY PI, use colorfmt='rgba' here instead,
        # but stick with "bgr" in blit_buffer
        texture1.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
        # display image from the texture
        self.img1.texture = texture1

if __name__ == '__main__':
    CamApp().run()
    cv2.destroyAllWindows()
Note: I have no clue how OpenCV works, but I found camera_opencv.py, which means there is an easy way to work with it.
As you can see in the camera example, this is the default way, and when you look in __init__.py for camera you can see opencv among the providers, so perhaps it works with OpenCV out of the box. Check the log to see whether OpenCV is detected as a provider. You should see CameraOpenCV written somewhere if it is detected, and it should show itself when capturing an image.
If, however, you want to work with OpenCV directly (i.e. cap.read() and similar), then you need to write your own handler for the provider or add more options to the camera_opencv file.
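If you stay with the built-in Camera widget, one possible way to still get frames for OpenCV processing is to read the widget's texture back into a numpy array. This is a rough, untested sketch; ProviderCamApp and process_frame are made-up names, and the RGBA layout of texture.pixels is worth verifying on your platform:

import numpy as np
from kivy.app import App
from kivy.clock import Clock
from kivy.uix.camera import Camera

class ProviderCamApp(App):
    def build(self):
        # the Camera widget picks a provider itself (opencv, gi, ...)
        self.cam = Camera(play=True, resolution=(640, 480))
        Clock.schedule_interval(self.process_frame, 1.0 / 30.0)
        return self.cam

    def process_frame(self, dt):
        texture = self.cam.texture
        if texture is None:
            return
        # texture.pixels is an RGBA byte buffer; reshape it into an image
        frame = np.frombuffer(texture.pixels, dtype=np.uint8)
        frame = frame.reshape(texture.height, texture.width, 4)
        # ... process `frame` with cv2 here ...

if __name__ == '__main__':
    ProviderCamApp().run()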
I am using PyQtGraph and am really enjoying it, but have hit upon an issue that may force me to move to something else.
I am displaying medical images (CT/MRI etc.) as numpy 2D or 3D arrays in the ImageView, which gives the nice slider view for volume data. The problem is these images are often low-res (256x256), and when viewed on large monitors or simply zoomed in they look blocky and horrible.
How can I show these images antialiased? This seems to be possible as mentioned here:
How can anti-aliasing be enabled in a pyqtgraph ImageView?
and a few other places, which suggest that all you need to do is:
import pyqtgraph as pg
pg.setConfigOptions(antialias=True)
and enable antialiasing in the graphics view, which I assume would be this:
myImageViewWidget = pg.ImageView(parent=None)
myImageViewWidget.ui.graphicsView.setAntialiasing(True)
But this doesn't seem to do anything different in my code. What am I doing wrong?
I'm using Windows 10 (but need it to work on MacOs - Darwin), Python 3.7, PySide 2 (5.15.12) and PyQtGraph 0.12.3
'Minimum' code to reproduce the issue (not quite minimal, but I want to keep ImageView subclassed as that's how I have it in my code):
import sys

from PySide2.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QMainWindow,
    QWidget,
)
import pyqtgraph as pg
import numpy as np

pg.setConfigOptions(antialias=True)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.cw = QWidget(self)
        self.cw.setAutoFillBackground(True)
        self.setCentralWidget(self.cw)
        self.layout = QHBoxLayout()
        self.cw.setLayout(self.layout)
        self.ImgWidget = MyImageWidget(parent=self)
        self.layout.addWidget(self.ImgWidget)
        self.show()

class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.ui.histogram.hide()
        self.ui.roiBtn.hide()
        self.ui.menuBtn.hide()
        self.ui.graphicsView.setAntialiasing(True)
        # 5 frames of 50x50 random noise
        img = (1000 * np.random.normal(size=(5, 50, 50))) - 500
        self.setImage(img)

def main():
    app = QApplication()
    main = MainWindow()
    main.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
What you're referring to is not antialiasing.
Antialiasing "smooths" the portions of an image that cannot precisely "fit" a single physical pixel.
What you are seeing is in fact the opposite: each source pixel is large enough to be shown as it is, a square that possibly occupies several physical pixels.
What you probably want is a blur effect, which can be achieved through a QGraphicsBlurEffect set on the self.imageItem of the view:
from PySide2.QtWidgets import QGraphicsBlurEffect

class MyImageWidget(pg.ImageView):
    def __init__(self, parent=None):
        super().__init__(parent)
        # ...
        self.blurEffect = QGraphicsBlurEffect(blurRadius=1.1)
        self.imageItem.setGraphicsEffect(self.blurEffect)
Note that since the image item is always scaled and the blur effect is proportional, you might need to adjust the blur radius to even smaller values depending on the shown resolution (but still bigger than 1.0).
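For instance, the radius can be tweaked at runtime through the standard QGraphicsBlurEffect setter (the 1.05 value here is just an illustrative guess):

# e.g. soften the blur when the image is shown near its native resolution
self.blurEffect.setBlurRadius(1.05)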
I am using Python 3.9.1 and PyQt6. I want to create a window with a blurred background, which should look something like this:
Blurred Window Background Demo
It would be helpful if anybody could provide code for this.
the real deal:
python -m pip install BlurWindow
import sys
from PySide2.QtWidgets import *
from PySide2.QtCore import *

from BlurWindow.blurWindow import blur

class MainWindow(QWidget):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setAttribute(Qt.WA_TranslucentBackground)
        self.resize(500, 400)
        blur(self.winId())
        self.setStyleSheet("background-color: rgba(0, 0, 0, 0)")

if __name__ == '__main__':
    app = QApplication(sys.argv)
    mw = MainWindow()
    mw.show()
    sys.exit(app.exec_())
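Since the question targets PyQt6, here is a hedged adaptation of the same example; PyQt6 uses scoped enums and exec() instead of exec_(), but whether BlurWindow supports your platform/Qt combination is worth checking:

import sys
from PyQt6.QtWidgets import QApplication, QWidget
from PyQt6.QtCore import Qt
from BlurWindow.blurWindow import blur

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        # enums are scoped in PyQt6
        self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground)
        self.resize(500, 400)
        blur(self.winId())
        self.setStyleSheet("background-color: rgba(0, 0, 0, 0)")

if __name__ == '__main__':
    app = QApplication(sys.argv)
    mw = MainWindow()
    mw.show()
    sys.exit(app.exec())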
Taken from KDE Plasma, it's now easy to do this with Python.
For details, check this out: Watch this carefully
You will need to use the provided library called fluentapp.
For a project made with Python - size 95 MB.
You will need to extract it from the project and use the reference guide provided.
I have already tried it; it's cool and enhances the beauty of your app.
The syntax is easy, e.g.:
import fluentapp.pyqt6.windowtools as wingui

wingui.setWindowAlpha("0.5")  # make the window transparent
wingui.addGaussianBlur(radius=20, cover=False)
# If you want an additional layer for dark and light themes,
# set cover=True for the dark theme.

# your code here ...
Using PyQt (Python 3.7, on Ubuntu 18.04), I've created a GUI in which I render some VTK stuff that I want to create a video of. Therefore, I need to QWidget.grab() the main window and save it to a png file (so I can assemble the images into a video file).
This works fine for everything on the window (including widgets like sliders, comboboxes, etc. and matplotlib images) except for the rendered VTK object (QVTKRenderWindowInteractor), which is replaced in the picture by a black box or sometimes white noise.
I've made a minimal example of the problem, hoping that it was the complexity of the whole program that was the main issue. Unfortunately, even for minimal code, the VTK part of the window is still not rendered.
Here is a minimal example of my problem, where clicking on the "Print" button should grab the image and save it to a PNG file.
import sys
from PyQt5.QtWidgets import QVBoxLayout, QPushButton, QWidget, QApplication
import vtk
from vtk.qt.QVTKRenderWindowInteractor import QVTKRenderWindowInteractor

def print_screen_callback(w):
    w.grab().save("non_satisfying_printed_window.png")

# Create a small interface that shows a cylinder and allows printing it to a PNG file
app = QApplication(sys.argv)
w = QWidget()
w.show()
lay = QVBoxLayout()
w.setLayout(lay)

# Create and populate the vtk widget
ren = vtk.vtkRenderer()
w_vtk = QVTKRenderWindowInteractor()
lay.addWidget(w_vtk)
w_vtk.Initialize()
w_vtk.Start()
w_vtk.GetRenderWindow().AddRenderer(ren)

cylinder = vtk.vtkCylinderSource()
cylinder.SetResolution(8)
cylinderMapper = vtk.vtkPolyDataMapper()
cylinderMapper.SetInputConnection(cylinder.GetOutputPort())
cylinderActor = vtk.vtkActor()
cylinderActor.SetMapper(cylinderMapper)
ren.AddActor(cylinderActor)

# Add print screen button
but = QPushButton("Print")
lay.addWidget(but)
but.released.connect(lambda: print_screen_callback(w))

# Run the application
app.exec()
The expected output is, of course, an image of the actual window when the user presses the "Print" button.
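As a side note not taken from the original post: QWidget.grab() typically misses OpenGL-rendered contents, and the usual VTK-side technique for capturing a render window is vtkWindowToImageFilter. A hedged sketch of capturing just the VTK viewport (which would still need to be composited with the grab of the rest of the window):

def print_vtk_callback(w_vtk):
    # capture the OpenGL framebuffer of the VTK render window directly
    w2i = vtk.vtkWindowToImageFilter()
    w2i.SetInput(w_vtk.GetRenderWindow())
    w2i.Update()
    writer = vtk.vtkPNGWriter()
    writer.SetFileName("vtk_render_window.png")
    writer.SetInputConnection(w2i.GetOutputPort())
    writer.Write()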
I am trying to make a PyQt5 application where the user can click on a graphics view window in order to place a rectangle. I managed to scale the window using the fitInView method and that works alright, but whenever the window size is changed, the relative size of a newly placed object changes as well. How can I make everything relative to the window/screen/graphicsView size? I tried to use the scene width, but it did weird things with scale and position when the window was resized.
Here is the class for my main window. gv is the graphics view and Ui_MainWindow is from Qt Designer.
class Window(Ui_MainWindow, QMainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.scene = QGraphicsScene()
        self.scene.setSceneRect(QRectF(0, 0, 100, 100))
        self.gv.setScene(self.scene)
        self.gv.fitInView(0, 0, 100, 100, Qt.KeepAspectRatio)

    def resizeEvent(self, e):
        self.gv.fitInView(0, 0, 100, 100, Qt.KeepAspectRatio)
Here is my custom graphics view class:
from PyQt5.QtWidgets import QGraphicsView
from PyQt5.QtGui import QPolygonF
from PyQt5.QtCore import QPoint, QRectF

class GraphicsView(QGraphicsView):
    def __init__(self, window):
        super().__init__(window)

    def mousePressEvent(self, QMouseEvent):
        rect = self.mapToScene(QMouseEvent.x(), QMouseEvent.y(), 5, 5)
        # rect = self.mapToScene(QMouseEvent.x(), QMouseEvent.y(), self.width()/20, self.height()/20)
        self.scene().addPolygon(rect)
As a side question, am I currently doing this as recommended, e.g. using fitInView and mapToScene?
Also, when I set the scene rectangle, what values should I choose in place of 100? That was just arbitrary.
EDIT: I managed to achieve something similar to what I want by using the graphics view width as a base variable, as shown in the commented-out line in the code above. However, I now have the problem that resizing the window reveals areas of the scene that were not previously visible. Is there a way to stretch the scene, or something like that, to make sure it always fills the graphics view?
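One thing to try for the EDIT, as a hedged sketch rather than a confirmed fix: drop the aspect-ratio constraint so the scene rect is always stretched to fill the viewport.

def resizeEvent(self, e):
    super().resizeEvent(e)
    # IgnoreAspectRatio stretches the scene rect to fill the whole view,
    # at the cost of non-uniform scaling when the window is not square
    self.gv.fitInView(self.scene.sceneRect(), Qt.IgnoreAspectRatio)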