How to make QPixmap display the svg? - python

I have a big SVG (70000 * 2000) and I want to display it completely.
I used QPixmap, but the image it shows is incomplete.
This is my code:
self.img = QPixmap('test.svg')
self.scaled_img = self.img

def paintEvent(self, e):
    painter = QPainter()
    painter.begin(self)
    self.draw_img(painter)
    painter.end()

def draw_img(self, painter):
    painter.drawPixmap(self.point, self.scaled_img)

According to the QPainter documentation:
If you are using coordinates with Qt's raster-based paint engine, it is important to note that, while coordinates greater than +/- 2^15 can be used, any painting performed with coordinates outside this range is not guaranteed to be shown; the drawing may be clipped.
This seems to be a limitation valid for QImage and QPixmap too, as explained in the answer to QImage/QPixmap size limitations?, which means that the image will only be rendered up to 32767x32767.
You may want to try using QSvgRenderer.render(painter, rect) in order to scale it, or setViewBox() and then the basic render() to show portions of it.
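If the full 70000-pixel width really must end up in pixmaps, one workaround is to render the SVG in tiles that each stay under the raster engine's limit, selecting each region with setViewBox() before calling render(). The tiling arithmetic itself is straightforward; a minimal sketch (tile_rects is a hypothetical helper, not a Qt API):

```python
def tile_rects(width, height, max_side=32767):
    """Split a width x height area into tiles no larger than max_side
    on either axis (the raster paint engine's practical limit)."""
    tiles = []
    for y in range(0, height, max_side):
        for x in range(0, width, max_side):
            w = min(max_side, width - x)
            h = min(max_side, height - y)
            tiles.append((x, y, w, h))
    return tiles

# A 70000 x 2000 SVG needs three horizontal tiles:
print(tile_rects(70000, 2000))
```

Each (x, y, w, h) tuple could then be fed to QSvgRenderer.setViewBox() and rendered into its own pixmap.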

Related

Setting QPixmap size to fill space in QGridLayout

I have been trying to display an image in a QGridLayout, and resize it according to the space available to it. However I can't seem to get the right size.
I have created a class ImageWidget(QWidget), which is what is referenced as self below; an instance of it, ImageWidget0, is added to the QGridLayout Layout with Layout.addWidget(ImageWidget0, 0, 0).
This is how I'm resizing the pixmap, and it is displaying the image in the layout at the specified size with no problems:
self.label = QLabel(self)
self.pixmap = self.pixmap.scaled(image_size[0], image_size[1], Qt.KeepAspectRatio)
self.label.setPixmap(self.pixmap)
self.show()
However different methods of assigning image_size are giving undesirable results. Both of the following, using the size of the widget and the size of the label respectively, produce an image_size of (100, 30), which is too small:
image_size = (self.width(), self.height())
image_size = (self.label.width(), self.label.height())
So I tried to get the size of the rows and columns instead using the only methods I could find in the QGridLayout documentation, but the following (perhaps unsurprisingly) both produce an image_size of (0, 0):
image_size = (self.parent().layout.columnMinimumWidth(0), self.parent().layout.rowMinimumHeight(0))
image_size = (self.parentWidget().layout.columnMinimumWidth(0), self.parentWidget().layout.rowMinimumHeight(0))
To be clear: the layout itself is NOT CHANGING for all these different pixmap sizes. If image_size is manually set too large, the image clips out (which is fine), but when it is too small, it just shrinks in the available space. This is what makes me think the problem doesn't lie in the layout adjusting to the pixmap size.
I can post more code if required, but I was trying to keep this succinct and to the point.
As @musicamante pointed out, it is pointless to use the widget size at construction time, as the layout has not yet been set. I got around this by adding a "load image" button, which could add the image once the layout had been created, allowing me to set the size correctly using:
image_size = (self.width(), self.height())
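Qt.KeepAspectRatio scaling can be reproduced with plain arithmetic, which is handy for sanity-checking what size the pixmap should come out at for a given slot. A minimal sketch (keep_aspect_ratio is a made-up helper, not a Qt API, and Qt's own integer rounding may differ by a pixel):

```python
def keep_aspect_ratio(src_w, src_h, max_w, max_h):
    """Largest size fitting inside max_w x max_h while preserving
    the src_w:src_h aspect ratio, like Qt.KeepAspectRatio."""
    scale = min(max_w / src_w, max_h / src_h)
    return (round(src_w * scale), round(src_h * scale))

# A 1920x1080 pixmap squeezed into the (100, 30) slot from the question
# is limited by height, which is why it comes out tiny:
print(keep_aspect_ratio(1920, 1080, 100, 30))
```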

QGraphicsItem paint delay

What could be the possible reason for this? When I zoom in the QGraphicsView and move the QGraphicsItem, I get this weird result. It does update if I zoom or pan the view again, or if I focus on other widgets. I'm using PySide. The painter function is this:
def paint(self, painter, option, widget):
    if self.isSelected():
        brush = self.highlight_brush
        pen = self.highlight_pen
    else:
        brush = self.dormant_brush
        pen = self.dormant_pen
    painter.setBrush(brush)
    painter.setPen(pen)
    painter.drawRect(0, 0, 100, 100)
Why does this happen even for this basic paint event? The problem is not seen if there is no pen; if I increase the pen width, the issue becomes disturbingly visible.
I don't know the actual cause of these rendering artifacts, but updating the view during mouseMoveEvent did fix the issue:
def mouseMoveEvent(self, event):
    QGraphicsView.mouseMoveEvent(self, event)
    if self.scene().selectedItems():
        self.update()
The error you are seeing is probably because parts of what you are drawing are outside the bounding rectangle. My guess is you are using the same values to calculate the rectangle you are drawing as you are to calculate the bounding rectangle. Applying a pen then will make the drawn rectangle wider than the bounds and so will result in the smearing you are seeing.
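This can be put in numbers: a pen is centred on the outline, so half its width falls outside the rectangle passed to drawRect, and boundingRect must cover that overhang. A small sketch of the area actually painted (painted_extent is a hypothetical helper using plain tuples instead of QRectF):

```python
def painted_extent(x, y, w, h, pen_width):
    """Rectangle actually touched by drawRect with a centred pen:
    the stroke extends pen_width / 2 beyond each edge."""
    half = pen_width / 2.0
    return (x - half, y - half, w + pen_width, h + pen_width)

# drawRect(0, 0, 100, 100) with a 4 px pen paints outside (0, 0, 100, 100):
print(painted_extent(0, 0, 100, 100, 4))  # (-2.0, -2.0, 104, 104)
```

Any boundingRect smaller than this leaves stale pixels behind when the item moves, which is exactly the smearing described above.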
I had the same problem. This is my solution:
As @Nathan Mooth said, the problem was that I was drawing outside of the boundingRect, so I just made my rounded rectangle (what I'm drawing in the paint() method) 10 units less in width and height than the boundingRect:
# Setup Rect
frameRect = self.boundingRect()
frameRect.setWidth(self.boundingRect().width() - 10)
frameRect.setHeight(self.boundingRect().height() - 10)
This is how it was looking before (GIF):
This is how it looks now (GIF):
Note: I added color selection and changed the color of the drop shadow. So it looks a bit different.

Qt5 QGraphicsScene: how to draw on the foreground pixel by pixel

I am using PyQt5. I am making a program of robots moving in a maze.
For that, I use QGraphicsScene. I add objects like QRect to represent robots. The background is set via setBackgroundBrush and loaded from a PNG image (black represents impassable terrain):
def update_background(self):
    qim = QImage(self.model.map.shape[1], self.model.map.shape[0], QImage.Format_RGB32)
    for x in range(0, self.model.map.shape[1]):
        for y in range(0, self.model.map.shape[0]):
            qim.setPixel(x, y, qRgb(self.model.map[y, x], self.model.map[y, x], self.model.map[y, x]))
    pix = QPixmap.fromImage(qim)  # PyQt5 has no QPixmap(QImage) overload
    self.scene.setBackgroundBrush(QBrush(pix))
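As an aside, the per-pixel setPixel loop is slow for large maps; qRgb itself is just byte packing, as this sketch shows (pack_rgb32 is a hypothetical stand-in for qRgb). In practice one would assemble a whole bytes buffer this way and hand it to the QImage constructor in one call:

```python
def pack_rgb32(r, g, b):
    """What qRgb(r, g, b) returns: 0xAARRGGBB with alpha forced to 255."""
    return (0xFF << 24) | (r << 16) | (g << 8) | b

print(hex(pack_rgb32(255, 0, 0)))  # 0xffff0000
```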
What I want to do now is to visualize the work of a pathfinding algorithm (I use A* for now). Like a red line that connects the robot with its destination bending over obstacles. This line is stored as a list of (X,Y) coords. I wanted to iterate over the list and paint pixel by pixel on the scene. However I don't know how to do that - there is no "drawPixel" method. Of course, I can add a hundred of small rectangles of 1x1 size. However I will have to redraw them if the route changes.
I thought about creating an image with the paths and placing it in the foreground, but I cannot make the foreground transparent. That was not a problem with the background (because it is at the back). I considered using this function:
http://doc.qt.io/qt-5/qpixmap.html#setAlphaChannel
But it is deprecated, and it refers to QPainter. I don't know what QPainter is, and I am not sure I am heading in the right direction at all.
Please give advice!
So, the question is what is the correct and efficient way to draw routes built by robots?
class RobotPathItem(QGraphicsItem):
    def __init__(self, path):
        super().__init__()
        qpath = []
        for xy in path:
            qpath.append(QPoint(xy[0], xy[1]))
        self.path = QPolygon(qpath)
        if path:
            print(path[0])

    def paint(self, painter, option, qwidget=None):
        painter.drawPoints(self.path)

    def boundingRect(self):
        return QRectF(0, 0, 520, 520)
There's no drawPixel, but QPainter has a drawPoint or drawPoints (which would be a lot more efficient in this case, I think). You'll need to create a custom graphics item that contains your list of points and iterates through your list of QPointF values and draws them. When you add points to the list, be sure to recalculate the bounding rectangle. For example, if you had a RobotPathItem (derived from QGraphicsItem), your paint method might look something like:
void RobotPathItem::paint (QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    QPen pen;
    // ... set up your pen color, etc., here
    painter->setPen (pen);
    painter->drawPoints (points, points.count ());
}
This is assuming that "points" is a QList or QVector of QPointF.
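The "recalculate the bounding rectangle" advice can be sketched without Qt: from a list of (x, y) coordinates, take the min/max on each axis and pad by half the pen width so edge strokes are not clipped. (path_bounds is a hypothetical helper returning a plain (x, y, w, h) tuple rather than a QRectF.)

```python
def path_bounds(points, pen_width=1.0):
    """Axis-aligned bounds of a point list, padded by half the pen
    width on every side so strokes at the edge are not clipped."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    pad = pen_width / 2.0
    return (min(xs) - pad, min(ys) - pad,
            (max(xs) - min(xs)) + pen_width,
            (max(ys) - min(ys)) + pen_width)

print(path_bounds([(0, 0), (10, 5), (4, 20)], pen_width=2.0))
```

Calling prepareGeometryChange() before the bounds change is what tells the scene to repaint the old area; a fixed 520x520 rect works but forces needless repainting.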

Get the size of a viewport in a QScrollArea in PyQt

We are writing a program to display a video from a microscope. The size of the video is 2208 * 1648, which is much bigger than the available screen size, so I use a QScrollArea to show it.
self.scrollArea = QScrollArea(self)
self.scrollArea.setBackgroundRole(QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
self.scrollArea.setWidgetResizable(True)
This scales the video to the size of the viewport, so initially the video is shown completely in the viewport of the scroll area without scrollbars. Now it must be possible to zoom in on a particular part of the video, so a zoom factor can be entered. When this is done, the following code is executed:
def scaleImage(self, factor):
    self.scrollArea.setWidgetResizable(False)
    self.imageLabel.resize(factor * self.imageLabel.pixmap().size())
    self.adjustScrollBar(self.scrollArea.horizontalScrollBar(), factor)
    self.adjustScrollBar(self.scrollArea.verticalScrollBar(), factor)
When the actual video is shown, the images are sent to the label using the following function:
def set_image(self, pixmap):
    self.imageLabel.set_image(pixmap, self.current_width, self.current_height)
    self.imageLabel.resize(self.current_zoomfactor * self.imageLabel.pixmap().size())
This works. However there are several problems:
1. setWidgetResizable does not honor the aspect ratio
2. When a user zooms out, eventually the picture becomes smaller than the viewport, which is not what I want.
3. I tried to get the size of the viewport using
sizehint = self.scrollArea.viewportSizeHint()
Then I do get a size which looks more or less correct, however when I resize the window the result does not change.
Thanks a lot in advance
Before the resize the zoom factor is calculated:
def handle_min_zoomfactor(self, new_zoomfactor):
    self.minimal_zoom = self.scrollArea.viewport().width() / self.sensor_width
    if new_zoomfactor < self.minimal_zoom:
        new_zoomfactor = self.minimal_zoom
    if new_zoomfactor > 1.0:
        new_zoomfactor = 1.0
After that scaleImage is called.
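The clamping in handle_min_zoomfactor boils down to a range clamp that can be factored out and checked on its own. A sketch (clamp_zoom is a hypothetical helper; with the 2208-wide sensor from the question, a 1104-pixel viewport gives a minimum zoom of 0.5):

```python
def clamp_zoom(new_zoomfactor, viewport_width, sensor_width):
    """Keep the zoom between 'image exactly fills the viewport' and 1:1."""
    minimal_zoom = viewport_width / sensor_width
    return max(minimal_zoom, min(new_zoomfactor, 1.0))

print(clamp_zoom(0.1, 1104, 2208))  # below minimum -> 0.5
print(clamp_zoom(2.0, 1104, 2208))  # above 1:1 -> 1.0
```

Recomputing minimal_zoom from the current viewport().width() on every call (rather than caching it) is what keeps the lower bound correct after the window is resized.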

Pyglet: How to change resolution when you go fullscreen?

I'm using Pyglet and I have a little program that includes an object moving over a background. Both of them are represented by images (PNG and JPG).
I've created a non-fullscreen window with size 800x600 and it works fine, but when I toggle to fullscreen... background and object have the same size as before and the rest of the screen is filled with black (empty color).
What I want to do is to "scale" the images or change the resolution when I toggle fullscreen mode.
I've read the documentation, but I can't find the answer to this.
I know that with Pygame, this problem solves itself automatically (if you change the window size, everything rescales automatically)... but how do you do this with pyglet?
This is my relevant code:
import pyglet

WIDTH = 800
HEIGHT = 600
working_dir = '/where/i/have/my/images/'

window = pyglet.window.Window(WIDTH, HEIGHT)
background = pyglet.image.load(working_dir + 'background.jpg')
flying_thing = pyglet.image.load(working_dir + 'flying_thing.png')

@window.event
def on_draw():
    window.clear()
    background.blit(0, 0)
    flying_thing.blit(WIDTH // 2, HEIGHT // 2)

@window.event
def on_key_press(symbol, modifiers):
    if symbol == pyglet.window.key.SPACE:
        window.set_fullscreen(not window.fullscreen)

pyglet.app.run()
You can try this code changing working_dir, background.jpg and flying_thing.png to a working directory of yours and two images in it.
I didn't try it, but from the pyglet docs, blit supports width and height. Its signature is
blit(self, x, y, z=0, width=None, height=None)
Have you tried using
background.blit(0, 0, width=window.width, height=window.height)
instead? (I'm not sure window.width changes on fullscreen, let's see...)
This answer can also be relevant to your question: https://stackoverflow.com/a/11183462/931303.
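A common way to handle the resolution change is to keep drawing in the original 800x600 coordinates, compute one uniform scale factor plus centring offsets when entering fullscreen, and pass scaled width/height to blit. A sketch of the arithmetic (fit_scale is a made-up helper, not part of pyglet):

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    """Uniform scale factor that fits src into dst without distortion,
    plus the offsets that centre it (letter/pillarboxing)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    off_x = (dst_w - src_w * scale) / 2
    off_y = (dst_h - src_h * scale) / 2
    return scale, off_x, off_y

# An 800x600 scene on a 1920x1080 screen is height-limited and pillarboxed:
print(fit_scale(800, 600, 1920, 1080))
```

In on_draw one would then blit the background at (off_x, off_y) with width=src_w * scale and height=src_h * scale, and apply the same transform to sprite positions.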
