Best way to render pixels to the screen in Python?

I'm writing an interactive (zoom/pan) Mandelbrot set viewer in Python, and I'm having some performance issues. I'm currently using pyglet and PyOpenGL to render the pixels since I like how pyglet handles mouse events. I generate the pixel values using numpy and, after some searching on Stack Exchange, the docs, and other places, I'm currently using glDrawPixels to draw the pixels. The application is horribly slow, taking ~1.5 s per draw. I've heard that using textures is much faster, but I have no experience with them, and learning that much OpenGL seems like it should be unnecessary. Another approach I have considered is using vertex lists and batched rendering with pyglet, but it seems wrong to create a new GL_POINT at every single pixel on the screen. Am I going about this all wrong? Is there a better way to render something to the screen when the pixels change so frequently? Code below:
import time

import numpy as np
from OpenGL.GLUT import glutSwapBuffers
from pyglet.gl import (GL_COLOR_BUFFER_BIT, GL_FLOAT, GL_RGBA,
                       GLfloat, glClear, glDrawPixels)

# this code all lives in a class that subclasses pyglet.window.Window

# called every 1/10.0 seconds, updates 10 frames
def update_region(self):
    # this code just computes new mandelbrot detail
    if self.i < self.NUM_IT:
        for _ in range(10):  # do 10 iterations every update, can be customizable
            self.z = np.where(np.absolute(self.z) < self.THRESHOLD,
                              self.z ** 2 + self.reg, self.z)
            self.pixels = np.where(
                (self.pixels == self.NUM_IT) &
                (np.absolute(self.z) > self.THRESHOLD),
                self.i, self.pixels)
            self.i = self.i + 1

def update_frame(self, x, y):
    self.update_region()
    # color_pixels is what will actually be rendered
    self.color_pixels = self.cmap.to_rgba(self.pixels).flatten()

def on_draw(self):  # draw method called every update (set to .1s)
    start = time.time()
    glClear(GL_COLOR_BUFFER_BIT)
    glDrawPixels(2 * self.W, 2 * self.H, GL_RGBA, GL_FLOAT,
                 (GLfloat * len(self.color_pixels))(*self.color_pixels))
    glutSwapBuffers()  # note: pyglet windows normally flip automatically
    print('on_draw took {0} seconds'.format(time.time() - start))

Are you sure it's glDrawPixels slowing you down? In your code for update_frame there's a cmap.to_rgba() call, which I assume maps each value calculated by the Mandelbrot loop to an RGBA tuple, and then a .flatten() on top of that. Copying the entire image, twice, won't help.
For drawing raster images that don't need 3D scaling, pyglet has the image module and .blit()
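For example, the question's RGBA array could be wrapped in a pyglet ImageData and blitted directly. This is only a sketch, not the poster's code: it assumes the colormap output is first converted to uint8 bytes, which also avoids the slow per-element GLfloat conversion in on_draw:

import numpy as np
import pyglet

# cmap and pixels as in the question; to_rgba gives floats in [0, 1]
rgba = (cmap.to_rgba(pixels) * 255).astype(np.uint8)   # shape (H, W, 4)
image = pyglet.image.ImageData(rgba.shape[1], rgba.shape[0],
                               'RGBA', rgba.tobytes())
image.blit(0, 0)   # call in on_draw(); no per-element ctypes conversion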
You are right that a vertex list of points would not help.
Loading the image into a texture would be a bit more OpenGL code, but not too much. You could then zoom in OpenGL, and do the Mandelbrot -> RGB conversion on the GPU as it is drawn in a fragment shader.
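Even without writing a shader, zooming is nearly cheap once the image is in a texture: in pyglet 1.x, Texture.blit accepts a target width and height, so the GPU does the scaling. A sketch, continuing from the ImageData above (window is assumed to be the pyglet window):

texture = image.get_texture()
# stretch the texture to fill the window; interpolation happens on the GPU
texture.blit(0, 0, width=window.width, height=window.height)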

Related

Weird things happening when making image get bigger on mouse hover in Pygame

I am making a scene where there is a thumbs-up image that is supposed to get bigger on mouse hover, and shrink back to normal size when the mouse is no longer hovering.
This is how I make the thumbs-up image:
thumbs_up_image = pygame.image.load("./plz_like.png")
thumbs_up_rect = thumbs_up_image.get_rect(
    topleft=(screen.get_width() // 2 - thumbs_up_image.get_width() + 75,
             screen.get_height() // 2 + thumbs_up_image.get_height() - 225))
And this is how I make it get bigger:
if thumbs_up_rect.collidepoint(pygame.mouse.get_pos()):
    thumbs_up_image = pygame.transform.scale(
        thumbs_up_image, [n + 50 for n in thumbs_up_image.get_size()])
    thumbs_up_rect = thumbs_up_image.get_rect()
This is how the image is blitted:
screen.blit(thumbs_up_image, thumbs_up_rect)
The problem is that when I hover on the thumbs-up image, it first goes to the top-left corner of the screen. Then, when I hover on it again, it gets super big and pixelated.
What am I doing wrong?
I managed to figure it out by myself.
This is how I do it:
First, I prepared a bigger version of the image and its rect, as shown below:
big_thumbs_image = pygame.transform.scale(
    thumbs_up_image, [i + 50 for i in thumbs_up_image.get_size()])
big_thumbs_image_rect = thumbs_up_image.get_rect(
    topleft=(screen.get_width() // 2 - thumbs_up_image.get_width() + 55,
             screen.get_height() // 2 + thumbs_up_image.get_height() - 250))
Then, when the small image's rect collides with the mouse, blit the bigger image:
if thumbs_up_rect.collidepoint(pygame.mouse.get_pos()):
    screen.blit(big_thumbs_image, big_thumbs_image_rect)
You are not showing the code that actually renders the image to the screen. But basically: you are not saving the original image, so at each hover event it will grow and grow (and it will grow once per frame, if that code runs in the main loop).
You need a variable to hold the original image, one to tell your code the image has already been resized, and an else clause on this if to restore the original image: pygame won't do that for you.
Also, when you call get_rect() on the image, its top-left position will always be (0, 0); you have to move that top-left corner to a suitable coordinate. Taking the rectangle center of the original sprite (wherever the data of its location on the screen is kept) and setting the same center on the new rect should work.
And finally, prefer rotozoom over scale: the Pygame documentation is clear that rotozoom uses the better (filtered) scaling algorithm of the two. A combined sketch of all of this follows below.
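Putting those points together, a minimal corrected hover block might look like this. Treat it as a sketch, not the poster's code: original_image, the screen variable, and the 1.25 zoom factor are assumptions:

import pygame

original_image = pygame.image.load("./plz_like.png")
thumbs_up_rect = original_image.get_rect(center=(400, 300))  # position assumed

# inside the main loop:
if thumbs_up_rect.collidepoint(pygame.mouse.get_pos()):
    thumbs_up_image = pygame.transform.rotozoom(original_image, 0, 1.25)
else:
    thumbs_up_image = original_image  # restore: pygame won't do it for you
# re-center so the image grows in place instead of jumping to (0, 0)
thumbs_up_rect = thumbs_up_image.get_rect(center=thumbs_up_rect.center)
screen.blit(thumbs_up_image, thumbs_up_rect)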
Try using this pygame function:
pygame.transform.rotozoom(Surface, angle, scale)
I also had some issues with pixelation in a game, but this seemed to fix them.

Drawing sprites at an offset in pyglet very slow

I am making a simple project in Python with pyglet that involves drawing a large number of entities and tiles to the screen, typically ~77K tiles at a time. To do this, I use two batches and make every tile a sprite, whose x, y position on the screen is its x, y position in the world.
The problem comes when I try to implement some sort of side-scrolling feature. To avoid asking an XY question, I figure I should just ask what the best way to do this is.
I have tried many ways to increase performance:
Moving glViewport() around, but batches do not draw entities outside of the original size.
Updating all the sprites' coordinates every tick, which is insanely slow.
Drawing all sprites to another texture, but I haven't found anything in the documentation about this; the blit_into method gives me "Cannot blit to a texture."
The camera class; update() is called every tick:
class Camera:
    def __init__(self):
        self.world_sprites = []
        self.world_x = 0
        self.world_y = 0

    def add_sprite(self, spr: tools.WorldSprite):
        self.world_sprites.append(spr)

    def update(self):
        for spr in self.world_sprites:
            spr.update_camera(self)
The update_camera method inside the WorldSprite class:
def update_camera(self, cam):
    self._x = self.px - cam.world_x
    self._y = self.py - cam.world_y
    self._update_position()
It works, it's just very, very slow.
Sorry if this is a big question.
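For reference, with pyglet's 1.x fixed-function pipeline the usual way around per-sprite updates is to keep sprite coordinates in world space and translate the whole view once per frame. A minimal sketch; the camera and batch attribute names are assumed, not from the question:

from pyglet.gl import GL_MODELVIEW, glLoadIdentity, glMatrixMode, glTranslatef

def on_draw(self):
    self.clear()
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    # one translate per frame replaces ~77K per-sprite position updates
    glTranslatef(-self.camera.world_x, -self.camera.world_y, 0)
    self.batch.draw()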

Wrapping and extending python cairo library with cython

I am writing a Python program to do graphic animations using the Python cairo/rsvg libraries. I have hit a speed issue and want to accelerate some parts of the code by moving some of the rendering into a Cython library.
Ideally I would like to augment the cairo library methods to add a few that are optimized for my needs.
For example, I have a function that draws an SVG on screen, centered at a point, with a bounding box that forces its size. This function is usually called from an external loop that draws tens of SVGs, and it is one of the most expensive in my code:
def svg(ctx, path, pos, angle, width, height):
    """Draws an svg file at coordinates pos, at a specific angle, with a
    maximum bounding box of size width x height."""
    if width == 0 or height == 0:
        return
    svg = rsvg.Handle(file=path)  # any svg file
    ctx.save()
    # put in position
    ctx.translate(pos.x, pos.y)
    # rotate
    ctx.rotate(angle)
    # resize
    if svg.props.width != width or svg.props.height != height:
        ratiow = (width * 1.0) / (svg.props.width * 1.0)
        ratioh = (height * 1.0) / (svg.props.height * 1.0)
        ctx.scale(ratiow, ratioh)
    # move center to 0,0 of image
    ctx.translate(-svg.props.width / 2, -svg.props.height / 2)
    # draw image
    svg.render_cairo(ctx)
    ctx.restore()
What I would like to do is write a Cython function that, given the list of SVGs, will draw them all at once on the screen.
Could this kind of optimization be done with Cython?
From what I understand it would be extremely beneficial to define a type for the cairo context object (ctx), but what is the correct way to do it?
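In principle, yes. pycairo ships a C header (pycairo.h) whose PycairoContext struct exposes the underlying cairo_t pointer, so a Cython module can extract that pointer once and then drive cairo's C API directly inside the loop. A rough, untested sketch of the idea; the rsvg rendering is left as the existing Python call, and moving it to librsvg's C API would be the next step:

# draw_svgs.pyx -- rough sketch, assuming pycairo.h and cairo.h are on the
# include path; error handling and the rsvg C declarations are elided
cdef extern from "cairo.h":
    ctypedef struct cairo_t:
        pass
    void cairo_save(cairo_t *cr)
    void cairo_restore(cairo_t *cr)
    void cairo_translate(cairo_t *cr, double tx, double ty)
    void cairo_rotate(cairo_t *cr, double angle)

cdef extern from "pycairo.h":
    ctypedef struct PycairoContext:
        cairo_t *ctx

def draw_svgs(ctx, items):
    """items: a list of (handle, x, y, angle) tuples prepared in Python."""
    # pull the raw cairo_t* out of the pycairo Context once
    cdef cairo_t *cr = (<PycairoContext *> <void *> ctx).ctx
    cdef double x, y, angle
    for handle, x, y, angle in items:
        cairo_save(cr)
        cairo_translate(cr, x, y)
        cairo_rotate(cr, angle)
        handle.render_cairo(ctx)  # still a Python call; see note above
        cairo_restore(cr)

Whether this pays off depends on where the time actually goes: if most of it is spent inside rsvg's own rendering, typing ctx will not buy much by itself.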

Map small surface onto larger surface in PyGame

I'm using PyGame's graphics library to display graphics on an LED display.
My LED display is 25x7. For testing without hardware, I've been able to use PyGame's display to view a 40x amplified version:
screen = pygame.display.set_mode((1000,280))
I'd like to actually draw boxes, rects, etc. on a 25x7 grid and then project it onto the larger 1000x280 window on my computer, which will look like a 25x7 grid of 40x40 "pixels". That way, when I start working with hardware, I can skip the large projection and just write 25x7 = 175 pixels' worth of data out to my LEDs.
I came up with a solution, but it's prohibitively slow:
xDim = 25
yDim = 7
pixelMultiplier = 40
surf = pygame.Surface((xDim, yDim))
bigSurf = pygame.Surface((pixelMultiplier * xDim, pixelMultiplier * yDim))

# ... manipulate surf ...

# map surf to bigSurf
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # write to all pixels in larger surface
        for i in range(pixelMultiplier):
            for j in range(pixelMultiplier):
                bigSurf.set_at((pixelMultiplier * x + i,
                                pixelMultiplier * y + j), tmpColor)
Is there some built-in PyGame function that does this efficiently?
Right after writing the post, this dawned on me:
Create a surface to represent one pixel, and just blit that to the larger surface. Instead of running 1600 set_at() calls for each of the 175 pixels, do it with a built-in function that already fills that region.
# map surf to larger version
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # create square surface to represent pixel
        pixelSurf = pygame.Surface((pixelMultiplier, pixelMultiplier))
        pixelSurf.fill(tmpColor)
        # blit big pixel onto bigSurf
        bigSurf.blit(pixelSurf, (pixelMultiplier * x, pixelMultiplier * y))
This runs much faster, but I'm still not sure it's the optimal solution.
Edit - Best Method
PyGame does have a built-in function!
pygame.transform.scale(surface, (newX, newY))
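transform.scale also accepts an existing destination surface, so the upscale can reuse bigSurf instead of allocating a new Surface every frame. A one-line sketch using the names from the question:

# draw on the 25x7 surf, then upscale straight into the 1000x280 bigSurf
pygame.transform.scale(surf, (pixelMultiplier * xDim, pixelMultiplier * yDim), bigSurf)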

How to speed up drawing many circles in Python PIL

In the Python PIL library, I'm using
>>> draw.ellipse((x - r, y - r, x + r, y + r))
to draw circles as nodes. However, since there are thousands of nodes to draw, this takes too long.
Is there a faster way to draw all my nodes?
For background: I'm drawing a tree-like graph, with circles as nodes.
If you draw that many ellipses every cycle, it will be slow.
Is there a specific reason you need to use PIL? From your question details, I am not sure PIL is suited for your task: you need a graphics library, not an image manipulation library. There is a difference.
PyGame is an SDL wrapper suited for high-performance drawing routines.
Some tricks include drawing to a surface once and then only redrawing dirty regions. One tutorial puts it this way:
Most people new to graphics programming use the first option - they update the whole screen every frame. The problem is that this is unacceptably slow for most people.
The solution is called 'dirty rect animation'. Instead of updating the whole screen every frame, only the parts that changed since the last frame are updated.
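In pygame that pattern comes down to collecting the rects you actually touched and passing only those to pygame.display.update. A sketch; the moved list and node.image/node.rect names are assumed:

import pygame

dirty = []
for node in moved:
    # blit returns the screen area it changed, ready for display.update
    dirty.append(screen.blit(node.image, node.rect))
pygame.display.update(dirty)  # repaint only those rects, not the whole screen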
I found that image.paste() improves speed by about 5x. There is a transparency problem with overlapping circles, so the mask makes it a little slower.
from PIL import Image, ImageDraw
from PIL.ImageOps import invert

# IMG_MODE, FG_COLOR, BG_COLOR and NODE_COLOR are module-level constants

def init_circle(r):
    center = [r] * 2
    im = Image.new(IMG_MODE, [r * 2] * 2, FG_COLOR)
    draw = ImageDraw.Draw(im)
    draw.setink(BG_COLOR)
    draw.ellipse((1, 1, r * 2 - 1, r * 2 - 1), fill=NODE_COLOR)
    mask = invert(im.convert('L'))
    return im.crop((0, 0) + (r * 2, r * 2)), mask
To use it:
im.paste(circle, (x, y, x + 2 * r, y + 2 * r), mask=mask)
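Putting it together, the per-node cost becomes a single paste. A sketch; node_positions is an assumed name for the list of top-left corners:

circle, mask = init_circle(r)
for x, y in node_positions:
    im.paste(circle, (x, y, x + 2 * r, y + 2 * r), mask=mask)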
