Qt5 QGraphicsScene: how to draw on the foreground pixel by pixel - Python

I am using PyQt5. I am making a program in which robots move through a maze.
For that, I use QGraphicsScene. I add objects like QRect to represent robots. The background is set via setBackgroundBrush and loaded from a png image (black represents impassable terrain):
def update_background(self):
    qim = QImage(self.model.map.shape[1], self.model.map.shape[0], QImage.Format_RGB32)
    for x in range(0, self.model.map.shape[1]):
        for y in range(0, self.model.map.shape[0]):
            qim.setPixel(x, y, qRgb(self.model.map[y, x], self.model.map[y, x], self.model.map[y, x]))
    pix = QPixmap.fromImage(qim)
    self.scene.setBackgroundBrush(QBrush(pix))
What I want to do now is visualize the work of a pathfinding algorithm (I use A* for now), like a red line that connects the robot with its destination, bending around obstacles. This line is stored as a list of (X, Y) coords. I wanted to iterate over the list and paint it pixel by pixel on the scene. However, I don't know how to do that - there is no "drawPixel" method. Of course, I could add a hundred small 1x1 rectangles, but then I would have to redraw them whenever the route changes.
I thought about creating an image containing the paths and setting it as the foreground, but I cannot make the foreground transparent. That was not a problem for the background (because it is at the back). I considered using this function:
http://doc.qt.io/qt-5/qpixmap.html#setAlphaChannel
But it is deprecated, and it refers to QPainter. I don't know what QPainter is, and I am not sure I am heading in the right direction at all.
Please give advice! So, the question is: what is the correct and efficient way to draw the routes built by the robots?
Here is my current attempt:
class RobotPathItem(QGraphicsItem):
    def __init__(self, path):
        super().__init__()
        qpath = []
        for xy in path:
            qpath.append(QPoint(xy[0], xy[1]))
        self.path = QPolygon(qpath)
        if path:
            print(path[0])

    def paint(self, painter, option, qwidget=None):
        painter.drawPoints(self.path)

    def boundingRect(self):
        return QRectF(0, 0, 520, 520)

There's no drawPixel, but QPainter has a drawPoint or drawPoints (which would be a lot more efficient in this case, I think). You'll need to create a custom graphics item that contains your list of points and iterates through your list of QPointF values and draws them. When you add points to the list, be sure to recalculate the bounding rectangle. For example, if you had a RobotPathItem (derived from QGraphicsItem), your paint method might look something like:
void RobotPathItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    QPen pen;
    // ... set up your pen color, etc., here
    painter->setPen(pen);
    painter->drawPoints(points, points.count());
}
This is assuming that "points" is a QList or QVector of QPointF.
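Translated to PyQt5, a minimal sketch of such an item might look like the following (the set_path helper and the attribute names are illustrative, not part of your code; prepareGeometryChange is what keeps the bounding rect in sync when the route changes):

from PyQt5.QtCore import QPointF, Qt
from PyQt5.QtGui import QPen, QPolygonF
from PyQt5.QtWidgets import QGraphicsItem

class RobotPathItem(QGraphicsItem):
    def __init__(self, path):
        super().__init__()
        # path is the list of (x, y) tuples produced by the pathfinder
        self.points = QPolygonF([QPointF(x, y) for x, y in path])

    def set_path(self, path):
        # Notify the scene before the geometry changes, then rebuild the polygon
        self.prepareGeometryChange()
        self.points = QPolygonF([QPointF(x, y) for x, y in path])
        self.update()

    def boundingRect(self):
        # Derive the bounding rect from the points instead of hard-coding it
        return self.points.boundingRect()

    def paint(self, painter, option, widget=None):
        painter.setPen(QPen(Qt.red))
        painter.drawPoints(self.points)

When the A* result changes, you would call set_path with the new list of coordinates instead of recreating the item.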

Related

Blitting subsurface from tilesheet is doing weird things

I am coding a prototype platformer in pygame. I'm using a .png as a tilesheet: I load it and then build a list, tileset.tiles, of all the different tile textures in it. I then use three layers of .csv tilemaps to associate every tile in the grid with its own corresponding texture. I bake all of the tile layers onto a map surface once, and then blit this surface every frame.
The problem is that the outcome is not as expected: apparently not all of the tiles are properly blitted onto the surface. The problem seems to arise when the same subsurface has to be blitted a second time by the load method in the Room class. It's not clear to me what exactly causes this. I have tried playing around with the .csv files, and it seems that different arrangements of tiles, even across layers, influence what is actually rendered on screen. I've added screenshots to illustrate this better. For reference, the id number 8 corresponds to a blue square texture, which should be the sky. The other numbers correspond to several different textures.
bottom layer csv
middle layer csv
top layer csv
outcome
Changing the first tile of the middle layer makes no difference (tile 0 corresponds to the flower texture):
alternate middle layer
outcome
Or, if I try and change the first few tiles of the bottom layer:
alternate bottom layer
outcome
Generally, if I change some tile number in a csv file, weird things happen and other seemingly random tiles get blitted. Also, I am able to manually place tiles at any position on the screen without any problems (bypassing the load method and directly blitting subsurfaces from the Tileset class), so I think that the Tileset class is working properly.
Here's the full code:
import pygame as pg

class Tileset:
    def __init__(self, img: pg.Surface):
        self.tiles = []
        self.img = img
        self.loadtiles()

    def loadtiles(self):
        for i in range(16):
            for n in range(16):
                currentimg = self.img.subsurface(32*n, 32*i, 32, 32)
                self.tiles.append(currentimg.copy())

class Tile(pg.sprite.Sprite):
    def __init__(self, image: pg.Surface, position: tuple):
        self.img = image
        self.pos = position

    def draw(self, surface: pg.Surface):
        surface.blit(self.img, self.pos)

class Room:
    def __init__(self, id, size: tuple):
        self.id = id
        self.layers = [[], [], []]
        self.size = (size[0]*32, size[1]*32)

    # Call when a new room must be loaded: reads room csv, stores tile info, overwrites drawn map with new map
    def load(self, map):
        map = pg.Surface(self.size)
        for layer in self.layers:
            with open('levels/final/room' + str(self.id) + '_' + str(self.layers.index(layer)) + '.csv') as file:
                data = file.readlines()
            for rrrow in data:  # unprocessed row
                rrow = rrrow.strip('\n').split(',')  # semi processed row
                row = []  # processed row
                for rtile in rrow:  # unprocessed tile in row
                    if rtile != -1:
                        tileimg = tileset.tiles[int(rtile)]
                        tile = Tile(tileimg, (rrow.index(rtile)*32, data.index(rrrow)*32))  # process tile
                        tile.draw(map)  # draw tile on current tilemap
                        row.append(tile)  # store tile in row
                layer.append(row)  # store row in layer (to use later for collisions)
        return map

### pygame loop setup (incomplete, shouldn't matter) ###
res = (32*30, 32*20)
scr = pg.display.set_mode(res)
tileset = Tileset(pg.image.load('graphics\stock.png'))
room0 = Room(0, (30, 20))
map = pg.Surface((0, 0))
map = room0.load(map)

running = True
while running:
    scr.fill((0, 0, 0))
    scr.blit(map, (0, 0))
    pg.display.flip()
I have just solved it. The problem was the use of the .index method to determine the position of each tile: .index returns the first occurrence of that value in the list, not the actual one, so any time a tile appeared a second time in the same row, or two rows looked the same, that tile/row would just be positioned on top of its previous copy. Solved by using i and n as counters for the tile and row positions instead of the .index method.
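For reference, a rough sketch of the corrected loop, using enumerate counters for the layer, row and column indices instead of .index (the counter names are illustrative rather than the i and n mentioned above, and the int(rtile) != -1 check is an assumption about how the -1 sentinel was meant to be handled, since the CSV cells are strings):

def load(self, map):
    map = pg.Surface(self.size)
    for layer_index, layer in enumerate(self.layers):
        with open('levels/final/room' + str(self.id) + '_' + str(layer_index) + '.csv') as file:
            data = file.readlines()
        for row_index, raw_row in enumerate(data):
            cells = raw_row.strip('\n').split(',')
            row = []
            for col_index, rtile in enumerate(cells):
                if int(rtile) != -1:  # assumed "empty" sentinel; CSV cells are strings
                    tileimg = tileset.tiles[int(rtile)]
                    # position comes from the loop counters, not from .index
                    tile = Tile(tileimg, (col_index * 32, row_index * 32))
                    tile.draw(map)
                    row.append(tile)
            layer.append(row)
    return map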

Best way to render pixels to the screen in python?

I'm writing an interactive (zoom/pan) Mandelbrot set viewer in Python, and I'm having some performance issues. I'm currently using pyglet and PyOpenGL to render the pixels since I like how it handles mouse events. I generate the pixel values using numpy, and after some searching on stack exchange/docs/other places, I'm currently using glDrawPixels to draw the pixels. The application is horribly slow, taking ~1.5s to draw. I've heard that using textures is much faster, but I have no experience with them, and learning that much OpenGL seems like it should be unnecessary. Another approach I have considered is using vertex lists and batched rendering with pyglet, but it seems wrong to create a new GL_POINT at every single pixel on the screen. Am I going about this all wrong? Is there a better way to render something to the screen when pixels change so frequently? Code below:
# this code all is in a class that subclasses pyglet.window.Window

# called every 1/10.0 seconds, update 10 frames
def update_region(self):
    # this code just computes new mandelbrot detail
    if self.i < self.NUM_IT:
        for _ in range(10):  # do 10 iterations every update, can be customizable
            self.z = np.where(np.absolute(self.z) < self.THRESHOLD,
                              self.z ** 2 + self.reg, self.z)
            self.pixels = np.where(
                (self.pixels == self.NUM_IT) & (np.absolute(self.z) > self.THRESHOLD),
                self.i, self.pixels)
            self.i = self.i + 1

def update_frame(self, x, y):
    self.update_region()
    # color_pixels is what will actually be rendered
    self.color_pixels = self.cmap.to_rgba(self.pixels).flatten()

def on_draw(self):  # draw method called every update (set to .1s)
    start = time.time()
    glClear(GL_COLOR_BUFFER_BIT)
    glDrawPixels(2 * self.W, 2 * self.H, GL_RGBA, GL_FLOAT,
                 (GLfloat * len(self.color_pixels))(*self.color_pixels))
    glutSwapBuffers()
    print('on_draw took {0} seconds'.format(time.time() - start))
Are you sure it's glDrawPixels slowing you down? In your code for update_frame there's a cmap.to_rgba(), which I assume is mapping the single value calculated by the Mandelbrot iteration into an RGBA value, and then whatever .flatten() does. Copying the entire image twice won't help.
For drawing raster images that don't need 3D scaling, pyglet has the image module and .blit().
You are right that a vertex list of points would not help.
Loading the image into a texture would be a bit more OpenGL code, but not too much. You could then zoom in OpenGL, and do the Mandelbrot -> RGB conversion on the GPU as it is drawn in a fragment shader.
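As a rough sketch of the image/blit route (the helper name is illustrative, and it assumes the RGBA array from cmap.to_rgba is kept as an H x W x 4 float array rather than flattened):

import numpy as np
import pyglet

def rgba_to_image(rgba_float):
    """Turn an H x W x 4 float array (values 0..1) into a pyglet image."""
    h, w = rgba_float.shape[:2]
    # Convert once to 8-bit bytes; far less data than a tuple of GLfloats.
    data = (rgba_float * 255).astype(np.uint8).tobytes()
    # A negative pitch flips the rows so row 0 ends up at the top.
    return pyglet.image.ImageData(w, h, 'RGBA', data, pitch=-w * 4)

# inside on_draw():
#     self.clear()
#     rgba_to_image(self.cmap.to_rgba(self.pixels)).blit(0, 0)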

Drawing sprites at an offset in pyglet very slow

I am making a simple project in Python with pyglet that involves drawing a large number of entities and tiles to the screen, typically ~77K tiles at a time. To do this, I use two batches and make every tile a sprite, where its x,y position on the screen is its x,y position in the world.
The problem comes when I try to implement some sort of side-scrolling feature. To avoid asking an XY question, I figure I should just ask what the best way to do this is.
I have tried many ways to increase performance:
Moving around glViewport(), but batches do not draw entities outside of the original size.
Updating all the sprites' coordinates every tick, which is insanely slow.
Drawing all sprites to another Texture, but I haven't found anything in the documentation about this; the blit_into method gives me "Cannot blit to a texture".
The camera class; update() is called every tick:
class Camera:
    def __init__(self):
        self.world_sprites = []
        self.world_x = 0
        self.world_y = 0

    def add_sprite(self, spr: tools.WorldSprite):
        self.world_sprites.append(spr)

    def update(self):
        for spr in self.world_sprites:
            spr.update_camera(self)
The update_camera method inside the WorldSprite class:
def update_camera(self, cam):
    self._x = self.px - cam.world_x
    self._y = self.py - cam.world_y
    self._update_position()
It works, it's just very, very slow.
Sorry if this is a big question.

Scaling positions of GraphicsItems on a GraphicsScene without changing other properties

I lay out a bunch of nodes on a QGraphicsScene. The nodes are basic ellipses (QGraphicsEllipseItems). It works reasonably well.
However, I would like to know how to size the ellipses. Currently I have a hard-coded radius of 80 units. This works fine when there are a few hundred ellipses, but when I have a few thousand it looks all wrong, as they are too small for the scale of the scene.
Conversely, when there are only a few tens of ellipses, the scene is smaller and the ellipses are way too large.
I am looking for a formula that better balances the size of an ellipse with the number of ellipses on the scene and the scale of the scene.
Also as I zoom in and out I would like the ellipses to remain appropriately sized.
Can anyone advise on how to best achieve a balanced arrangement?
The scene has a certain bounding rectangle that encloses all graphics items you put into it, while the view has a certain size on the screen.
Between the scene and the view there is a transformation matrix (handling scaling, rotation, shear and translation). You can get it with QGraphicsView.transform().
Now, if you put more ellipses into your scene, increasing the scene size, but still want to see all of them, you must zoom out, and accordingly the widths of the ellipses will shrink too.
You don't want that. Okay, so why not resize them (according to the current scaling factor) every time the scale changes? Well, that is probably not very efficient.
The better solution is to not change the scale of the view, but just scale the positions manually while keeping the zoom fixed. That way, no properties of the items except their positions have to be changed.
Example (using PySide and Python 3 but easily adjustable to PyQt and Python 2):
from PySide import QtGui, QtCore
import random

class MyGraphicsView(QtGui.QGraphicsView):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def wheelEvent(self, event):
        if event.delta() > 0:
            scaling = 1.1
        else:
            scaling = 1 / 1.1
        # reposition all items in scene
        for item in self.scene().items():
            r = item.rect()
            item.setRect(QtCore.QRectF(r.x() * scaling, r.y() * scaling, r.width(), r.height()))

app = QtGui.QApplication([])

scene = QtGui.QGraphicsScene()
scene.setSceneRect(-200, -200, 400, 400)
for i in range(100):
    rect = QtCore.QRectF(random.uniform(-180, 180), random.uniform(-180, 180), 10, 10)
    scene.addEllipse(rect)

view = MyGraphicsView(scene)
view.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
view.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
view.resize(400, 400)
view.show()

app.exec_()
With the mousewheel you can scale the positions and then it looks like this:
As for the balance between the size and the number of ellipses - well, it's your choice; there is no generic rule. I recommend not making the ellipses larger than the distance between them, or they will overlap. In general I work with scene coordinates that are 1:1 with the pixel size of the view (as in the example above: 400 pixels width of the view, 400 units width of the scene rectangle). Then I can easily imagine what an ellipse of size 10 will be, namely 10 pixels. If I want more, I use more; if I want less, I use less. But there is no rule for it; it's up to what you want.

drawPie() with customized borders

Is it possible to draw a pie shape with no border along the arc, but with borders along the straight lines? I have attached a picture below:
Currently I have implemented this by first calling drawPie() with painter.setPen(QtCore.Qt.NoPen), and then using QLineF to draw the lines separately, based on the center and angles of the pie shape.
But the problem is that the line positions do not stay in sync with the pie shape if the angles are not multiples of 90 degrees. Attached is another picture showing the problem.
Is there a simple/elegant way to do this?
Thanks!
Assuming your custom Pie is a subclassed QGraphicsRectItem, you could try something like this:
class CustomPie(QtGui.QGraphicsRectItem):
    angle = 2000

    def paint(self, painter, option, widget):
        # Create the path to draw the lines
        path = QtGui.QPainterPath()
        path.moveTo(self.rect().width()/2, self.rect().height()/2)
        path.lineTo(self.rect().width(), self.rect().height()/2)
        path.arcMoveTo(self.rect(), self.angle/16)  # arcMoveTo in degrees
        path.lineTo(self.rect().width()/2, self.rect().height()/2)

        # draw a pie with no Pen
        painter.setPen(QtGui.QPen(QtCore.Qt.NoPen))
        painter.setBrush(QtGui.QBrush(QtCore.Qt.lightGray))
        painter.drawPie(self.rect(), 0, self.angle)

        # Draw the path with a custom Pen
        painter.setPen(QtGui.QPen(QtCore.Qt.black, 2))
        painter.drawPath(path)
Here we override paint to draw a Pie and a path (actually quite similar to your own method).
You would have to override __init__ as well (angle as a class attribute is probably not what you want) but that's the idea.
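For completeness, a minimal sketch of what such an __init__ might look like (the parameter names are illustrative; the span is stored in 1/16ths of a degree, which is what drawPie expects and why paint above divides by 16 for arcMoveTo):

class CustomPie(QtGui.QGraphicsRectItem):
    def __init__(self, rect, span_degrees, parent=None):
        super().__init__(rect, parent)
        # drawPie() measures its span in 1/16ths of a degree
        self.angle = int(span_degrees * 16)

    # paint() as shown above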
