PyQt: how to get the correct window size - python

I am trying to get the window size, but I am getting an incorrect result: I set the window geometry to 800 x 480, yet the reported size is 640 x 480. Here is the code:
class MainWindow(QtGui.QWidget):
    def __init__(self):
        super(MainWindow, self).__init__()
        width = self.width()
        height = self.height()
        self.setGeometry(10, 10, 800, 480)
self.geometry().width() and self.geometry().height() return the same results. Why is that?
EDIT: call self.setGeometry() first, and only then self.width() / self.height() (thanks to @eyllanesc).

The problem is that you are probably reading the size inside the __init__ method, where it will always be the default 640x480 because the widget's paint has not been executed yet.
Try reading the size after the object has been fully created. For example, query it inside mousePressEvent and you will see that it has the size you set.
Alternatively, create the object and then check its size from outside it.
The .width() and .height() of a QWidget are only updated after the widget is shown and its paint has been executed.

Related

Setting QPixmap size to fill space in QGridLayout

I have been trying to display an image in a QGridLayout and resize it according to the space available to it. However, I can't seem to get the right size.
I have created a class ImageWidget(QWidget), which is what is referenced as self below; an instance of it, ImageWidget0, is added to the QGridLayout Layout with Layout.addWidget(ImageWidget0, 0, 0).
This is how I'm resizing the pixmap, and it is displaying the image in the layout at the specified size with no problems:
self.label = QLabel(self)
self.pixmap = self.pixmap.scaled(image_size[0], image_size[1], Qt.KeepAspectRatio)
self.label.setPixmap(self.pixmap)
self.show()
However different methods of assigning image_size are giving undesirable results. Both of the following, using the size of the widget and the size of the label respectively, produce an image_size of (100, 30), which is too small:
image_size = (self.width(), self.height())
image_size = (self.label.width(), self.label.height())
So I tried to get the size of the rows and columns instead, using the only methods I could find in the QGridLayout documentation, but the following (perhaps unsurprisingly) both produce an image_size of (0, 0):
image_size = (self.parent().layout.columnMinimumWidth(0), self.parent().layout.rowMinimumHeight(0))
image_size = (self.parentWidget().layout.columnMinimumWidth(0), self.parentWidget().layout.rowMinimumHeight(0))
To be clear: the layout itself is NOT CHANGING for all these different pixmap sizes. If image_size is manually set too large, the image clips out (which is fine), but when it is too small, it just shrinks in the available space. This is what makes me think the problem doesn't lie in the layout adjusting to the pixmap size.
I can post more code if required, but I was trying to keep this succinct and to the point.
As @musicamante pointed out, it is pointless to use the widget size at construction time, as the layout has not yet been set. I got around this by adding a "load image" button, which adds the image once the layout has been created, allowing me to set the size correctly using:
image_size = (self.width(), self.height())

How to make QPixmap display the svg?

I have a big SVG (70000 x 2000) and I want to display it completely.
I used QPixmap, but I found the displayed image was incomplete.
This is my code:
self.img = QPixmap('test.svg')
self.scaled_img = self.img

def paintEvent(self, e):
    painter = QPainter()
    painter.begin(self)
    self.draw_img(painter)
    painter.end()

def draw_img(self, painter):
    painter.drawPixmap(self.point, self.scaled_img)
According to the QPainter documentation:
If you are using coordinates with Qt's raster-based paint engine, it is important to note that, while coordinates greater than +/- 2^15 can be used, any painting performed with coordinates outside this range is not guaranteed to be shown; the drawing may be clipped.
This seems to be a limitation valid for QImage and QPixmap too, as explained in the answer to QImage/QPixmap size limitations?, which means that the image will only be rendered up to 32767x32767.
You may want to try using QSvgRenderer.render(painter, rect) in order to scale it, or setViewBox() and then the basic render() to show portions of it.
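To make the limit concrete, here is a small arithmetic sketch (plain Python, no Qt required; the helper name is made up for illustration) showing why a 70000-pixel-wide image overflows the signed 16-bit coordinate range while its height does not:

```python
# The raster paint engine is limited to signed 16-bit coordinates,
# so pixels beyond 2**15 - 1 = 32767 may be clipped.
RASTER_COORD_LIMIT = 2**15 - 1  # 32767

def visible_extent(size, limit=RASTER_COORD_LIMIT):
    """Portion of a dimension that is guaranteed to be rendered."""
    return min(size, limit)

svg_width, svg_height = 70000, 2000  # the SVG size from the question
print(visible_extent(svg_width))   # 32767 -> the width is clipped
print(visible_extent(svg_height))  # 2000  -> the height fits
```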

Get the size of a viewport in a QScrollArea in PyQt

We are writing a program to display a video from a microscope. The size of the video is 2208 x 1648, which is much bigger than the available screen size, so I use a QScrollArea to show it.
self.scrollArea = QScrollArea(self)
self.scrollArea.setBackgroundRole(QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
self.scrollArea.setWidgetResizable(True)
This scales the video to the size of the viewport, so initially the video is shown completely in the viewport of the scroll area, without scrollbars. Now it must be possible to zoom in on a particular part of the video, so a zoom factor can be entered. When this is done, the following code is executed:
def scaleImage(self, factor):
    self.scrollArea.setWidgetResizable(False)
    self.imageLabel.resize(factor * self.imageLabel.pixmap().size())
    self.adjustScrollBar(self.scrollArea.horizontalScrollBar(), factor)
    self.adjustScrollBar(self.scrollArea.verticalScrollBar(), factor)
When the actual video is shown, the images are sent to the label using the following function:
def set_image(self, pixmap):
    self.imageLabel.set_image(pixmap, self.current_width, self.current_height)
    self.imageLabel.resize(self.current_zoomfactor * self.imageLabel.pixmap().size())
This works. However, there are several problems:
1. setWidgetResizable does not honor the aspect ratio.
2. When a user zooms out, eventually the picture becomes smaller than the viewport, which is not what I want.
3. I tried to get the size of the viewport using
sizehint = self.scrollArea.viewportSizeHint()
Then I do get a size that looks more or less correct; however, when I resize the window, the result does not change.
Thanks a lot in advance
Before the resize the zoom factor is calculated:
def handle_min_zoomfactor(self, new_zoomfactor):
    self.minimal_zoom = self.scrollArea.viewport().width() / self.sensor_width
    if new_zoomfactor < self.minimal_zoom:
        new_zoomfactor = self.minimal_zoom
    if new_zoomfactor > 1.0:
        new_zoomfactor = 1
After that scaleImage is called.
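The clamping logic above can be sketched as a pure function, testable without Qt (the function name and the example viewport width are made up; sensor_width follows the question's code):

```python
def clamp_zoom(new_zoomfactor, viewport_width, sensor_width):
    """Clamp the requested zoom between 'fit the viewport width' and 1:1,
    mirroring handle_min_zoomfactor above."""
    minimal_zoom = viewport_width / sensor_width
    return max(minimal_zoom, min(new_zoomfactor, 1.0))

# A 1104-pixel-wide viewport showing the 2208-pixel-wide video:
print(clamp_zoom(0.1, 1104, 2208))  # 0.5 -> can't zoom out past "fit width"
print(clamp_zoom(2.0, 1104, 2208))  # 1.0 -> can't zoom in past 1:1
```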

How can I make all the images in a window scale down or up with the size of the monitor

So I am making a game in Python and pygame, and I have the window set up like this:
display = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
which makes the size of the window about 1334 x 800, so I based all the sprites and backgrounds on that screen size. But as you know, not everyone has the same sized screen as me, so my question is: how can I make images scale with how big the monitor screen is?
(P.S The game is in fullscreen mode)
First, how do you get the resolution and the scaling factor?
This is tricky, because someone's screen may not have the same aspect ratio as your 1334x800. You can letterbox (in various different ways) or stretch the sprites; you need to decide what you want, but I'll show one letterboxing possibility:
NOMINAL_WIDTH, NOMINAL_HEIGHT = 1334., 800.
surface = display.get_surface()
width, height = surface.get_width(), surface.get_height()
xscale = width / NOMINAL_WIDTH
yscale = height / NOMINAL_HEIGHT
if xscale < 1 and yscale < 1:
    scale = max(xscale, yscale)
elif xscale > 1 and yscale > 1:
    scale = min(xscale, yscale)
else:
    scale = 1.0
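Wrapped as a function, the factor selection above can be tried without a display (the function name and the example screen sizes are made up for illustration):

```python
NOMINAL_WIDTH, NOMINAL_HEIGHT = 1334.0, 800.0

def letterbox_scale(width, height):
    """Pick one uniform scale factor, following the rules above."""
    xscale = width / NOMINAL_WIDTH
    yscale = height / NOMINAL_HEIGHT
    if xscale < 1 and yscale < 1:    # screen smaller in both axes: shrink
        return max(xscale, yscale)
    elif xscale > 1 and yscale > 1:  # screen larger in both axes: grow
        return min(xscale, yscale)
    return 1.0                       # mixed case: leave everything at 1:1

print(letterbox_scale(1920, 1080))  # 1.35 (limited by the height axis)
print(letterbox_scale(667, 400))    # 0.5
```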
Now, how do you scale each sprite and background?
Well, first, are you sure you want to? It may be simpler to just transform the whole surface. Whether this is slower or faster is hard to predict without testing (and probably not relevant anyway), but it will definitely look better (because any interpolation, dithering, antialiasing, etc. happens after compositing, instead of before—unless you're going for that 8-bit look, of course, in which case it will destroy the look…). You can do this by compositing everything to an off-screen surface of 1334x800 (or, better, scaling everything up by a constant factor), then transforming that surface for display. (Note that the transform methods include an optional DestSurface argument. You can use this to directly transform from the offscreen surface to the display's surface.)
But let's assume you want to do it the way you asked.
You can do this when loading the sprites. For example:
def rescale(surf, scale):
    # smoothscale requires integer sizes, so truncate the scaled values
    new_width, new_height = int(surf.get_width() * scale), int(surf.get_height() * scale)
    return pygame.transform.smoothscale(surf, (new_width, new_height))

class ScaledSprite(pygame.sprite.Sprite):
    def __init__(self, path, scale):
        pygame.sprite.Sprite.__init__(self)
        self.image = rescale(pygame.image.load(path), scale)
        self.rect = self.image.get_rect()
And the same for the backgrounds.
From this SO question, you can get the size of the monitor with:
infoObject = pygame.display.Info()
which gets the width and height of the screen as infoObject.current_w and infoObject.current_h.
You can then use these values to scale everything appropriately.

Pyglet: How to change resolution when you go fullscreen?

I'm using Pyglet and I have a small program that includes an object moving over a background. Both are represented by images (png and jpg).
I've created a non-fullscreen window with size 800x600 and it works fine, but when I toggle to fullscreen... the background and the object keep the same size as before, and the rest of the screen is filled with black (empty color).
What I want to do is to "scale" the images or change the resolution when I toggle fullscreen mode.
I've read the documentation, but I can't find the answer to this.
I know that with Pygame, this problem solves itself automatically (if you change the window size, everything rescales automatically)... but how do you do this with pyglet?
This is my relevant code:
import pyglet

WIDTH = 800
HEIGHT = 600

working_dir = '/where/i/have/my/images/'

window = pyglet.window.Window(WIDTH, HEIGHT)
background = pyglet.image.load(working_dir + 'background.jpg')
flying_thing = pyglet.image.load(working_dir + 'flying_thing.png')

@window.event
def on_draw():
    window.clear()
    background.blit(0, 0)
    flying_thing.blit(WIDTH // 2, HEIGHT // 2)

@window.event
def on_key_press(symbol, modifiers):
    if symbol == pyglet.window.key.SPACE:
        window.set_fullscreen(not window.fullscreen)

pyglet.app.run()
You can try this code changing working_dir, background.jpg and flying_thing.png to a working directory of yours and two images in it.
I haven't tried it, but from the pyglet docs, blit supports width and height. Its signature is:
blit(self, x, y, z=0, width=None, height=None)
Have you tried using
background.blit(0, 0, width=window.width, height=window.height)
instead? (I'm not sure window.width changes on fullscreen, let's see...)
This answer can also be relevant to your question: https://stackoverflow.com/a/11183462/931303.
