I lay out a bunch of nodes on a QGraphicsScene. The nodes are basic ellipses (QGraphicsEllipseItems). It works reasonably well.
However, I would like to know how to size the ellipses. Currently I have a hard-coded radius of 80 units. This works fine when there are a few hundred ellipses, but when I have a few thousand it looks all wrong, as they are too small for the scale of the scene.
Conversely, when there are only a few dozen, the scene is smaller and the ellipses are way too large.
I am looking for a formula that balances the size of an ellipse against the number of ellipses on the scene and the scale of the scene.
Also, as I zoom in and out, I would like the ellipses to remain appropriately sized.
Can anyone advise on how to best achieve a balanced arrangement?
The scene has a certain bounding rectangle that encloses all graphics items you put into it, while the view has a certain size on the screen.
Between the scene and the view there is a transformation matrix (a 3x3 QTransform handling scaling, rotation, shear and translation). You can get it with QGraphicsView.transform().
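For example, the current scale factors can be read straight off that matrix (a quick sketch, assuming view is a QGraphicsView):
t = view.transform()
sx, sy = t.m11(), t.m22()  # horizontal and vertical scale factors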
Now, if you put more ellipses into your plot, increasing the scene size, but still want to see all of them, you must zoom out, and accordingly the widths of the ellipses will shrink too.
You don't want that. Okay, so why not resize them (according to the current scaling factor) every time the scale changes? Well, this is probably not very efficient.
The better solution is to not change the scale of the view, but just scale the positions manually while keeping the zoom fixed. That way no property of the items except their position has to be changed.
Example (using PySide and Python 3 but easily adjustable to PyQt and Python 2):
from PySide import QtGui, QtCore
import random
class MyGraphicsView(QtGui.QGraphicsView):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def wheelEvent(self, event):
        # pick a scaling factor depending on the wheel direction
        if event.delta() > 0:
            scaling = 1.1
        else:
            scaling = 1 / 1.1
        # reposition all items in the scene; only the rect origins are
        # scaled, the widths and heights stay fixed
        for item in self.scene().items():
            r = item.rect()
            item.setRect(QtCore.QRectF(r.x() * scaling, r.y() * scaling,
                                       r.width(), r.height()))
app = QtGui.QApplication([])
scene = QtGui.QGraphicsScene()
scene.setSceneRect(-200, -200, 400, 400)
for i in range(100):
    rect = QtCore.QRectF(random.uniform(-180, 180), random.uniform(-180, 180), 10, 10)
    scene.addEllipse(rect)
view = MyGraphicsView(scene)
view.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
view.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
view.resize(400, 400)
view.show()
app.exec_()
With the mouse wheel you can now scale the positions while the ellipses keep their size.
As for the balance between the size and the number of ellipses: that is your choice, there is no generic rule. I recommend not making the ellipses larger than the distance between them, or they will overlap. In general I work with scene coordinates that map 1:1 to the pixel size of the view (as in the example above: the view is 400 pixels wide and the scene rectangle is 400 units wide). Then I can easily imagine what an ellipse of size 10 will be, namely 10 pixels. If I want more I use more, and if I want less I use less, but there is no rule for it; it's up to what you want.
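If you still want a starting formula, here is one possible heuristic (my own sketch, not a canonical rule): derive the radius from the average space available per ellipse, i.e. a fraction of sqrt(scene area / number of ellipses):
import math

def suggested_radius(scene_rect, n_items, fill=0.4):
    # average spacing per ellipse, assuming a roughly uniform layout
    spacing = math.sqrt(scene_rect.width() * scene_rect.height() / n_items)
    # stay well below the spacing so neighbours don't overlap
    return fill * spacing / 2
This keeps the radius proportional to the typical distance between ellipses, whatever the scene size and item count.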
Related
I am using PyQt5 to make a program of robots moving in a maze.
For that, I use QGraphicsScene. I add objects like rectangles (QRect) to represent robots. The background is set via setBackgroundBrush and loaded from a PNG image (black represents impassable terrain):
def update_background(self):
    qim = QImage(self.model.map.shape[1], self.model.map.shape[0], QImage.Format_RGB32)
    for x in range(0, self.model.map.shape[1]):
        for y in range(0, self.model.map.shape[0]):
            qim.setPixel(x, y, qRgb(self.model.map[y, x], self.model.map[y, x], self.model.map[y, x]))
    pix = QPixmap.fromImage(qim)
    self.scene.setBackgroundBrush(QBrush(pix))
What I want to do now is to visualize the work of a pathfinding algorithm (I use A* for now), like a red line that connects the robot to its destination, bending around obstacles. This line is stored as a list of (X, Y) coordinates. I wanted to iterate over the list and paint it pixel by pixel on the scene, but I don't know how to do that; there is no "drawPixel" method. Of course, I could add a hundred small rectangles of 1x1 size, but then I would have to redraw them whenever the route changes.
I also thought about creating an image with the paths and placing it in the foreground. However, I cannot make the foreground transparent; that was not a problem with the background (because it is at the back). I considered using this function:
http://doc.qt.io/qt-5/qpixmap.html#setAlphaChannel
But it is deprecated, and it refers to QPainter. I don't know what QPainter is, and I am not sure I am heading in the right direction at all.
Please advise!
So, the question is: what is the correct and efficient way to draw routes built by robots?
Here is what I have so far:
class RobotPathItem(QGraphicsItem):
    def __init__(self, path):
        super().__init__()
        qpath = []
        for xy in path:
            qpath.append(QPoint(xy[0], xy[1]))
        self.path = QPolygon(qpath)
        if path:
            print(path[0])

    def paint(self, painter, option, qwidget=None):
        painter.drawPoints(self.path)

    def boundingRect(self):
        return QRectF(0, 0, 520, 520)
There's no drawPixel, but QPainter has drawPoint and drawPoints (the latter would be a lot more efficient in this case, I think). You'll need to create a custom graphics item that holds your list of points and draws them in its paint method. When you add points to the list, be sure to recalculate the bounding rectangle. For example, if you had a RobotPathItem (derived from QGraphicsItem), your paint method might look something like:
void RobotPathItem::paint (QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    QPen pen;
    // ... set up your pen color, etc., here
    painter->setPen (pen);
    painter->drawPoints (points.constData (), points.count ());
}
This is assuming that "points" is a QList or QVector of QPointF.
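In PyQt5 terms (the library the question uses), the same idea might look like this; a sketch only, assuming path is a list of (x, y) tuples, with the bounding rectangle derived from the points rather than hard-coded:
from PyQt5.QtCore import QPointF
from PyQt5.QtGui import QPen, QPolygonF
from PyQt5.QtWidgets import QGraphicsItem

class RobotPathItem(QGraphicsItem):
    def __init__(self, path):
        super().__init__()
        self.points = QPolygonF([QPointF(x, y) for x, y in path])

    def paint(self, painter, option, widget=None):
        pen = QPen()
        # ... set up your pen color, width, etc. here
        painter.setPen(pen)
        painter.drawPoints(self.points)

    def boundingRect(self):
        # derive the bounding rect from the points; pad by a pixel for the pen
        return self.points.boundingRect().adjusted(-1, -1, 1, 1)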
We are writing a program to display a video from a microscope. The size of the video is 2208 x 1648, which is much bigger than the available screen size. So I use a QScrollArea to show it.
self.scrollArea = QScrollArea(self)
self.scrollArea.setBackgroundRole(QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
self.scrollArea.setWidgetResizable(True)
This scales the video to the size of the viewport, so initially the video is shown completely in the viewport of the scroll area without scrollbars. Now it must be possible to zoom in on a particular part of the video, so a zoom factor can be entered. When this is done, the following code is executed:
def scaleImage(self, factor):
    self.scrollArea.setWidgetResizable(False)
    self.imageLabel.resize(factor * self.imageLabel.pixmap().size())
    self.adjustScrollBar(self.scrollArea.horizontalScrollBar(), factor)
    self.adjustScrollBar(self.scrollArea.verticalScrollBar(), factor)
When the actual video is shown, the images are sent to the label using the following function:
def set_image(self, pixmap):
    self.imageLabel.set_image(pixmap, self.current_width, self.current_height)
    self.imageLabel.resize(self.current_zoomfactor * self.imageLabel.pixmap().size())
This works. However, there are several problems:
1. setWidgetResizable does not honor the aspect ratio.
2. When a user zooms out, eventually the picture becomes smaller than the viewport, which is not what I want.
3. I tried to get the size of the viewport using
sizehint = self.scrollArea.viewportSizeHint()
and I do get a size which looks more or less correct; however, when I resize the window, the result does not change.
Thanks a lot in advance
Before the resize the zoom factor is calculated:
def handle_min_zoomfactor(self, new_zoomfactor):
    self.minimal_zoom = self.scrollArea.viewport().width() / self.sensor_width
    if new_zoomfactor < self.minimal_zoom:
        new_zoomfactor = self.minimal_zoom
    if new_zoomfactor > 1.0:
        new_zoomfactor = 1
After that scaleImage is called.
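Point 2 (and, indirectly, point 1) could be addressed by clamping against both viewport dimensions instead of only the width. A sketch, assuming a sensor_height attribute analogous to the existing sensor_width:
def handle_min_zoomfactor(self, new_zoomfactor):
    # sensor_height is assumed here, by analogy with sensor_width
    min_x = self.scrollArea.viewport().width() / self.sensor_width
    min_y = self.scrollArea.viewport().height() / self.sensor_height
    # the larger of the two keeps the image covering the viewport in
    # both directions, preserving the video's aspect ratio
    self.minimal_zoom = max(min_x, min_y)
    return min(max(new_zoomfactor, self.minimal_zoom), 1.0)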
I have a problem controlling the size of objects in network plots made with igraph. The documentation of the plot command says:
bbox: The bounding box of the plot. This must be a tuple containing the desired width and height of the plot. The default plot is 600 pixels wide and 600 pixels high.
arrow_size: Size (length) of the arrowhead on the edge if the graph is directed, relative to 15 pixels.
vertex_size: Size of the vertex in pixels.
So to my understanding, all these arguments represent numbers of pixels.
Therefore, multiplying all of them by, say, a factor of 2, I would expect the image to scale uniformly with this factor.
Consider the following minimal example in Python:
from igraph import Graph, plot
def visualize(res=1.0):
    g = Graph([(0, 1), (1, 0)], directed=True)
    layout = g.layout_fruchterman_reingold()
    plot(g, target='plot.png',
         layout=layout,
         bbox=(120 * res, 120 * res),
         vertex_size=5 * res,
         arrow_size=10 * res)
This plots a trivial graph. However, going from res=1.0 to res=2.0, the arrows and vertices become smaller relative to the image size.
How is that possible?
Just a wild guess, but could the stroke width account for the difference? The default stroke width is 1 unit, and you don't seem to scale the stroke width. Try setting vertex_frame_width=res in the call to plot().
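For instance, the plot call from the question with the frame width scaled along with everything else:
plot(g, target='plot.png',
     layout=layout,
     bbox=(120 * res, 120 * res),
     vertex_size=5 * res,
     arrow_size=10 * res,
     vertex_frame_width=res)  # scale the vertex outline too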
So I am making a game in Python and pygame and I have the window set up like this:
display = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
which makes the size of the window about 1334 x 800. I based all the sprites and backgrounds on that size of screen, but as you know, not everyone has the same sized screen as me. So my question is: how can I make images scale with how big the monitor screen is?
(P.S. The game is in fullscreen mode.)
First, how do you get the resolution and the scaling factor?
This is tricky, because someone's screen may not have the same aspect ratio as your 1334x800. You can letterbox (in various different ways) or stretch the sprites; you need to decide what you want, but I'll show one letterboxing possibility:
NOMINAL_WIDTH, NOMINAL_HEIGHT = 1334., 800.

surface = pygame.display.get_surface()
width, height = surface.get_width(), surface.get_height()
xscale = width / NOMINAL_WIDTH
yscale = height / NOMINAL_HEIGHT
if xscale < 1 and yscale < 1:
    scale = max(xscale, yscale)
elif xscale > 1 and yscale > 1:
    scale = min(xscale, yscale)
else:
    scale = 1.0
Now, how do you scale each sprite and background?
Well, first, are you sure you want to? It may be simpler to just transform the whole surface. Whether this is slower or faster is hard to predict without testing (and probably not relevant anyway), but it will definitely look better (because any interpolation, dithering, antialiasing, etc. happens after compositing, instead of before—unless you're going for that 8-bit look, of course, in which case it will destroy the look…). You can do this by compositing everything to an off-screen surface of 1334x800 (or, better, scaling everything up by a constant factor), then transforming that surface for display. (Note that the transform methods include an optional DestSurface argument. You can use this to directly transform from the offscreen surface to the display's surface.)
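A minimal sketch of that whole-surface approach (assuming display is the fullscreen surface from set_mode; note that smoothscale stretches rather than letterboxes when the aspect ratios differ):
# draw everything at the nominal resolution on an off-screen surface
offscreen = pygame.Surface((1334, 800))
# ... blit all sprites and backgrounds onto offscreen here ...

# scale the finished frame directly onto the display surface, using the
# optional destination-surface argument
pygame.transform.smoothscale(offscreen, display.get_size(), display)
pygame.display.flip()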
But let's assume you want to do it the way you asked.
You can do this when loading the sprites. For example:
def rescale(surf, scale):
    # smoothscale needs integer dimensions
    new_width = int(surf.get_width() * scale)
    new_height = int(surf.get_height() * scale)
    return pygame.transform.smoothscale(surf, (new_width, new_height))

class ScaledSprite(pygame.sprite.Sprite):
    def __init__(self, path, scale):
        pygame.sprite.Sprite.__init__(self)
        self.image = rescale(pygame.image.load(path), scale)
        self.rect = self.image.get_rect()
And the same for the backgrounds.
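For example (the file name here is a placeholder):
player = ScaledSprite('player.png', scale)
all_sprites = pygame.sprite.Group(player)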
From this SO question, you can get the size of the monitor with:
infoObject = pygame.display.Info()
which gives the width and height of the screen as infoObject.current_w and infoObject.current_h.
You can then use these values to scale everything appropriately.
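Combining that with the letterboxing snippet above (a sketch; taking the min() always fits the nominal 1334x800 inside the screen):
info = pygame.display.Info()
scale = min(info.current_w / NOMINAL_WIDTH, info.current_h / NOMINAL_HEIGHT)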
I'm using PyGame's graphics library to display graphics on an LED display.
My LED display is 25x7. For testing without hardware, I've been able to use PyGame's display to view a 40x amplified version:
screen = pygame.display.set_mode((1000,280))
I'd like to actually draw boxes, rects, etc. on a 25x7 grid and then project it onto the larger 1000x280 window on my computer, which will look like a 25x7 grid of 40x40 "pixels". That way, if I start working with hardware, I can skip the large projection and just write 25x7 = 175 pixels worth of data out to my LEDs.
I came up with a solution, but it's prohibitively slow:
xDim = 25
yDim = 7
pixelMultiplier = 40

surf = pygame.Surface((xDim, yDim))
bigSurf = pygame.Surface((pixelMultiplier * xDim, pixelMultiplier * yDim))

# ... manipulate surf ...

# map surf to bigSurf
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # write to all pixels in larger surface
        for i in range(pixelMultiplier):
            for j in range(pixelMultiplier):
                bigSurf.set_at((pixelMultiplier * x + i, pixelMultiplier * y + j), tmpColor)
Is there some built-in PyGame function that does this efficiently?
Right after writing the post, this dawned on me:
Create a surface to represent the pixel, and just blit that to the larger surface. Instead of running through 1600 set_at() calls for each of the 175 pixels, use a built-in function that already does that.
# map surf to larger version
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # create square surface to represent pixel
        pixelSurf = pygame.Surface((pixelMultiplier, pixelMultiplier))
        pixelSurf.fill(tmpColor)
        # blit big pixel onto bigSurf
        bigSurf.blit(pixelSurf, (pixelMultiplier * x, pixelMultiplier * y))
This runs much faster, but I'm still not sure it's the optimal solution.
Edit - Best Method
PyGame does have a built-in function!
pygame.transform.scale(surface,(newX,newY))
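Applied to the example above, the whole mapping collapses to one call. Plain scale() uses nearest-neighbour sampling, so each of the 175 logical pixels stays a crisp 40x40 block rather than being smoothed:
bigSurf = pygame.transform.scale(surf, (pixelMultiplier * xDim, pixelMultiplier * yDim))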