Setting QPixmap size to fill space in QGridLayout - python

I have been trying to display an image in a QGridLayout and resize it according to the space available to it. However, I can't seem to get the right size.
I have created a class ImageWidget(QWidget), which is what is referenced as self below; an instance of it, ImageWidget0, is added to the QGridLayout Layout with Layout.addWidget(ImageWidget0, 0, 0).
This is how I'm resizing the pixmap, and it is displaying the image in the layout at the specified size with no problems:
self.label = QLabel(self)
self.pixmap = self.pixmap.scaled(image_size[0], image_size[1], Qt.KeepAspectRatio)
self.label.setPixmap(self.pixmap)
self.show()
However, different methods of assigning image_size give undesirable results. Both of the following, using the size of the widget and the size of the label respectively, produce an image_size of (100, 30), which is too small:
image_size = (self.width(), self.height())
image_size = (self.label.width(), self.label.height())
So I tried to get the size of the rows and columns instead, using the only methods I could find in the QGridLayout documentation, but the following (perhaps unsurprisingly) both produce an image_size of (0, 0):
image_size = (self.parent().layout.columnMinimumWidth(0), self.parent().layout.rowMinimumHeight(0))
image_size = (self.parentWidget().layout.columnMinimumWidth(0), self.parentWidget().layout.rowMinimumHeight(0))
To be clear: the layout itself is NOT CHANGING across these different pixmap sizes. If image_size is manually set too large, the image clips (which is fine), but when it is too small, it just shrinks within the available space. This is what makes me think the problem doesn't lie in the layout adjusting to the pixmap size.
I can post more code if required, but I was trying to keep this succinct and to the point.

As @musicamante pointed out, it is pointless to use the widget size at construction, as the layout has not yet been set. I got around this by adding a "load image" button, which adds the image once the layout has been created, allowing me to set the size correctly using:
image_size = (self.width(), self.height())
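For completeness, here is a minimal sketch of that approach, assuming PyQt5; the class body and names like load_image and 'image.png' are illustrative, not the original code:
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import (QApplication, QGridLayout, QLabel,
                             QPushButton, QVBoxLayout, QWidget)

class ImageWidget(QWidget):
    def __init__(self, path, parent=None):
        super().__init__(parent)
        self.pixmap = QPixmap(path)
        self.label = QLabel(self)
        QVBoxLayout(self).addWidget(self.label)

    def load_image(self):
        # By the time the button is clicked the layout has been applied,
        # so width()/height() report the real cell size.
        scaled = self.pixmap.scaled(self.width(), self.height(),
                                    Qt.KeepAspectRatio)
        self.label.setPixmap(scaled)

app = QApplication([])
window = QWidget()
Layout = QGridLayout(window)
ImageWidget0 = ImageWidget('image.png')  # hypothetical image path
Layout.addWidget(ImageWidget0, 0, 0)
button = QPushButton('Load image')
button.clicked.connect(ImageWidget0.load_image)
Layout.addWidget(button, 1, 0)
window.show()
app.exec_()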

Related

Dimensions of screenshot don't match dimensions of resized window in Pywin32

I am trying to screenshot a Microsoft Edge window using pywin32. This screenshot will then be fed to a machine learning algorithm to play a game in Microsoft Edge. As you might guess, the program will be taking screenshots many times, so I needed the screenshot to be as fast as possible. To increase the speed, my program resizes the Microsoft Edge window to a small resolution (specifically, 600 by 600). However, the screenshot doesn't show the entire window even though I have moved it to a specified location.
My program:
import win32gui
import win32ui
import win32con
import win32api
from PIL import Image
import time
# grab a handle to the main desktop window
hdesktop = win32gui.GetDesktopWindow()
# determine the size of all monitors in pixels
width = 600
height = 600
left = 0
top = 0
# set window to correct location
print("You have 3 second to click the desired window!")
for i in range(3, 0, -1):
    print(i)
    time.sleep(1)
hwnd = win32gui.GetForegroundWindow()
win32gui.MoveWindow(hwnd, 0, 0, width, height, True)
# create a device context
desktop_dc = win32gui.GetWindowDC(hdesktop)
img_dc = win32ui.CreateDCFromHandle(desktop_dc)
# create a memory based device context
mem_dc = img_dc.CreateCompatibleDC()
# create a bitmap object
screenshot = win32ui.CreateBitmap()
screenshot.CreateCompatibleBitmap(img_dc, width, height)
mem_dc.SelectObject(screenshot)
# copy the screen into our memory device context
mem_dc.BitBlt((0, 0), (width, height), img_dc, (left, top), win32con.SRCCOPY)
bmpinfo = screenshot.GetInfo()
bmpstr = screenshot.GetBitmapBits(True)
im = Image.frombuffer(
    'RGB',
    (bmpinfo['bmWidth'], bmpinfo['bmHeight']),
    bmpstr, 'raw', 'BGRX', 0, 1)
im.show()
# free our objects
mem_dc.DeleteDC()
win32gui.DeleteObject(screenshot.GetHandle())
My program first moves and resizes the desired window (taken from win32gui.GetForegroundWindow()) with win32gui.MoveWindow(hwnd, 0, 0, width, height, True). Then it tries to screenshot the window by taking the whole desktop window (hdesktop = win32gui.GetDesktopWindow()) and cropping it to the desired coordinates (mem_dc.BitBlt((0, 0), (width, height), img_dc, (left, top), win32con.SRCCOPY)). I then convert the win32 screenshot to a PIL image so I can look at it. Note that the desired coordinates are the SAME coordinates used to move the window in the first place. However, when I run this program, the screenshot doesn't capture the entire window!
I have tried looking at the documentation of the MoveWindow and BitBlt functions, but I couldn't find the issue. The destination and source rectangle parameters are supposed to be (0, 0), because of the MoveWindow call. The width and height parameters are the same. I also tried experimenting with the bRepaint parameter, but it didn't make a difference.
Any Suggestions?
After experimenting with this question a little bit more, I finally found the problem.
In the comments, I said that ctypes.windll.shcore.SetProcessDpiAwareness(1) doesn't work. However, it did. When I upscale the height and width, the dimensions of the screenshot and the window match perfectly. The reason why width and height don't work for smaller dimensions (I was originally setting width and height to 500) is that Microsoft Edge doesn't allow it. If the requested width falls below a certain threshold, the actual width of the window snaps to the smallest width Microsoft Edge allows. An easy workaround was to set the width and height to a larger resolution, and it worked!
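A minimal sketch of that fix, assuming Windows 8.1 or later (shcore is unavailable on older versions); the 800x800 size is an illustrative value above Edge's minimum:
import ctypes
import win32gui

# Opt in to DPI awareness BEFORE any window or DC work, so MoveWindow
# and BitBlt operate in the same physical-pixel coordinate space.
ctypes.windll.shcore.SetProcessDpiAwareness(1)

# Use a size above the browser's minimum; Edge silently clamps
# requests below a certain threshold (500 was too small here).
width, height = 800, 800

hwnd = win32gui.GetForegroundWindow()
win32gui.MoveWindow(hwnd, 0, 0, width, height, True)
# ...then capture with BitBlt exactly as in the program above.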
Thank you so much to everyone in the comments, especially @IInspectable.

How to make QPixmap display an SVG?

I have a big SVG (70000 * 2000) and I want to display it completely.
I used QPixmap, and I found the rendering was incomplete.
This is my code:
self.img = QPixmap('test.svg')
self.scaled_img = self.img

def paintEvent(self, e):
    painter = QPainter()
    painter.begin(self)
    self.draw_img(painter)
    painter.end()

def draw_img(self, painter):
    painter.drawPixmap(self.point, self.scaled_img)
According to the QPainter documentation:
If you are using coordinates with Qt's raster-based paint engine, it is important to note that, while coordinates greater than +/- 2^15 can be used, any painting performed with coordinates outside this range is not guaranteed to be shown; the drawing may be clipped.
This seems to be a limitation valid for QImage and QPixmap too, as explained in the answer to QImage/QPixmap size limitations?, which means that the image will only be rendered up to 32767x32767.
You may want to try using QSvgRenderer.render(painter, rect) in order to scale it, or setViewBox() and then the basic render() to show portions of it.
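A minimal sketch of the QSvgRenderer route, assuming PyQt5 and the question's 'test.svg'; the widget class is illustrative:
from PyQt5.QtCore import QRectF
from PyQt5.QtGui import QPainter
from PyQt5.QtSvg import QSvgRenderer
from PyQt5.QtWidgets import QApplication, QWidget

class SvgWidget(QWidget):
    def __init__(self):
        super().__init__()
        self.renderer = QSvgRenderer('test.svg')

    def paintEvent(self, e):
        painter = QPainter(self)
        # Scales the whole SVG into the widget rect; call
        # self.renderer.setViewBox(...) first to show only a portion.
        self.renderer.render(painter, QRectF(self.rect()))

app = QApplication([])
widget = SvgWidget()
widget.resize(800, 600)
widget.show()
app.exec_()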

PyQt how to get correct window size

I'm trying to get the window size, but I'm getting an incorrect size: first I set the window geometry to 800 x 480, but the result is 640 x 480. Here is the code:
class MainWindow(QtGui.QWidget):
    def __init__(self):
        super(MainWindow, self).__init__()
        width = self.width()
        height = self.height()
        self.setGeometry(10, 10, 800, 480)
self.geometry().width() and height() return the same results. Why is that?
EDIT: You first need self.setGeometry, and only after that self.width() / height() (thanks to @eyllanesc).
The problem is that you are probably reading the size inside the __init__ method, where it will always be the default 640x480 because the widget has not been painted yet.
You should check its size after the object has been fully created and shown. For example, read its size inside mousePressEvent and you will see that it has the size you set.
Alternatively, create the object and check its size afterwards, from outside the class.
The point is that the .width() and .height() of a QWidget are only updated after the widget is shown and its paint has been executed.
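A minimal sketch of both points, using the PyQt4-style imports implied by the question (the same applies to PyQt5's QtWidgets):
from PyQt4 import QtGui

class MainWindow(QtGui.QWidget):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setGeometry(10, 10, 800, 480)
        # setGeometry has already run, so this prints 800 480:
        print(self.width(), self.height())

app = QtGui.QApplication([])
window = MainWindow()
window.show()
# After show() the widget is laid out and painted, so querying the
# size from outside the constructor is also reliable:
print(window.width(), window.height())
app.exec_()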

Get the size of a viewport in a QScrollArea in PyQt

We are writing a program to display a video from a microscope. The size of the video is 2208 * 1648, which is much bigger than the available screen size, so I use a QScrollArea to show it.
self.scrollArea = QScrollArea(self)
self.scrollArea.setBackgroundRole(QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
self.scrollArea.setWidgetResizable(True)
This scales the video to the size of the viewport, so initially the video is shown completely in the viewport of the scroll area without scrollbars. Now it must be possible to zoom in on a particular part of the video, so a zoom factor can be entered. When this is done, the following code is executed:
def scaleImage(self, factor):
    self.scrollArea.setWidgetResizable(False)
    self.imageLabel.resize(factor * self.imageLabel.pixmap().size())
    self.adjustScrollBar(self.scrollArea.horizontalScrollBar(), factor)
    self.adjustScrollBar(self.scrollArea.verticalScrollBar(), factor)
When the actual video is shown, the images are sent to the label using the following function
def set_image(self, pixmap):
    self.imageLabel.set_image(pixmap, self.current_width, self.current_height)
    self.imageLabel.resize(self.current_zoomfactor * self.imageLabel.pixmap().size())
This works. However, there are several problems:
1. setWidgetResizable does not honor the aspect ratio.
2. When a user zooms out, eventually the picture becomes smaller than the viewport, which is not what I want.
3. I tried to get the size of the viewport using
sizehint = self.scrollArea.viewportSizeHint()
This does give a size that looks more or less correct; however, when I resize the window, the result does not change.
Thanks a lot in advance
Before the resize, the zoom factor is calculated:
def handle_min_zoomfactor(self, new_zoomfactor):
    self.minimal_zoom = self.scrollArea.viewport().width() / self.sensor_width
    if new_zoomfactor < self.minimal_zoom:
        new_zoomfactor = self.minimal_zoom
    if new_zoomfactor > 1.0:
        new_zoomfactor = 1
After that, scaleImage is called.
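As a hedged sketch, clamping with both viewport dimensions would keep the aspect ratio and stop the image from ever shrinking below the viewport (problems 1 and 2 above); sensor_height is an assumed attribute alongside the question's sensor_width:
def handle_min_zoomfactor(self, new_zoomfactor):
    viewport = self.scrollArea.viewport()
    # The larger of the two ratios is the smallest zoom at which the
    # image still fills the viewport in both directions.
    self.minimal_zoom = max(viewport.width() / self.sensor_width,
                            viewport.height() / self.sensor_height)
    return max(self.minimal_zoom, min(new_zoomfactor, 1.0))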

How Do You Resize an Image in Python Using Pyglet

I'm new to Pyglet (and Stack Overflow) and cannot seem to find out how to resize images.
'pipe.png' is the image that I am trying to adjust the size of.
With this code, the image is not fully shown because the window size is too small.
I would like to adjust the size of the image so that it fits inside of the window.
The current size of 'pipe.png' is 100x576.
import pyglet

window = pyglet.window.Window()
pyglet.resource.path = ["C:\\"]
pyglet.resource.reindex()
pipe = pyglet.resource.image('pipe.png')

@window.event
def on_draw():
    window.clear()
    pipe.blit(0, 0)

pyglet.app.run()
EDIT:
I ended up finding out the answer here:
http://pyglet.org/doc-current/programming_guide/image.html#simple-image-blitting
The solution is:
imageWidth = 100
imageHeight = 100
imageName.width = imageWidth
imageName.height = imageHeight
This would adjust the image to display as 100x100.
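Applied to the question's code, a minimal sketch might look like this (note the caveat in the next answer: on newer pyglet versions changing .width/.height may have no effect):
import pyglet

window = pyglet.window.Window()
pyglet.resource.path = ["C:\\"]
pyglet.resource.reindex()
pipe = pyglet.resource.image('pipe.png')

# Force the displayed size; here the 100x576 image is shown as 100x100.
pipe.width = 100
pipe.height = 100

@window.event
def on_draw():
    window.clear()
    pipe.blit(0, 0)

pyglet.app.run()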
Came across this oldie, so this is for whichever lone ranger ends up here just as I did. Changing .width and .height won't do much in many cases (or at all these days?).
In order to successfully change an image's displayed resolution, you need to modify its .scale attribute.
Here's a snippet of code that I use to resize an image:
import pyglet
from pyglet.gl import *

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
image = pyglet.image.load('test.png')
height, width = 800, 600  # Desired resolution

# The min()/max() mumbo jumbo is there to honor the smallest requested
# resolution: the smallest resolution given is the limit of, say, the
# window size that the image has to fit in, therefore we can't honor
# the largest resolution or the image would pop outside of the region.
image.scale = min(min(image.height, height) / max(image.height, height),
                  min(image.width, width) / max(image.width, width))

# Usually not needed, and should not be tampered with, but to work
# around various bugs when using sprite inheritance on a user-defined
# class, these values need to be updated manually:
image.width = width
image.height = height
image.texture.width = width
image.texture.height = height
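If you are drawing through a sprite anyway, a hedged alternative is pyglet's documented Sprite.scale attribute; 'test.png' and the 800x600 window are illustrative values carried over from above:
import pyglet

window = pyglet.window.Window(800, 600)
image = pyglet.image.load('test.png')
sprite = pyglet.sprite.Sprite(image, x=0, y=0)

# Scale uniformly so the image fits inside the window on both axes.
sprite.scale = min(window.width / image.width,
                   window.height / image.height)

@window.event
def on_draw():
    window.clear()
    sprite.draw()

pyglet.app.run()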
