I'm writing an application that shows a list of messages in a PyGame window; to make things prettier, I decided to add a nice fade-in/out effect to the text.
Problem is: SDL only allows per-pixel or per-surface alpha, not both. So I wrote a function that recalculates the pixel alphas based on the desired "layer" opacity, but it's of course very, very, very slow (even on a Core i7 @ 2.7GHz, it drops to ~10 FPS):
def apply_alpha(surface, alpha):
    # Scale every pixel's alpha by the given factor (0.0 - 1.0).
    for x in xrange(surface.get_width()):
        for y in xrange(surface.get_height()):
            color = surface.get_at((x, y))
            color.a = int(color.a * alpha)
            surface.set_at((x, y), color)
    return surface
Full example here: https://gist.github.com/rshk/5072173 (not the real application, just a relevant example)
Is there any way to speed things up? I see it's possible to directly access the underlying buffer, which avoids instantiating Color objects, but updating the alphas that way, via the BufferProxy.write() method, seems to be even slower.
Maybe something using hardware acceleration would help? I think the ideal here would be to apply the alpha while blitting the surface onto the background, but I'm not very expert in this kind of thing.
Note: changing the text color is not an option, as the background might not always be black.
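For comparison, the same per-pixel scaling can be vectorized with pygame.surfarray instead of a Python double loop (a minimal sketch, assuming NumPy is installed and the surface has per-pixel alpha):

import numpy as np
import pygame

def apply_alpha_fast(surface, alpha):
    # pixels_alpha returns a live numpy view of the surface's alpha channel,
    # so the whole multiply happens in C instead of a Python double loop.
    alpha_view = pygame.surfarray.pixels_alpha(surface)
    alpha_view[:] = (alpha_view * alpha).astype(np.uint8)
    del alpha_view   # release the surface lock held by the view
    return surface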
As this might serve as future reference for somebody: I did as suggested by @Xymostech and created another surface with the same size as the rendered text, applying the alpha on that surface with .set_alpha().
Updated version here: https://gist.github.com/rshk/5072375
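A minimal sketch of that approach (the colorkey trick assumes the text background can be keyed out as black; all names here are illustrative):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont(None, 48)

text = font.render("Hello, fade!", True, (255, 255, 255))

# set_alpha() is ignored on per-pixel-alpha surfaces (pygame 1.x), so blit
# the text onto a plain intermediate surface and fade that one instead.
holder = pygame.Surface(text.get_size())
holder.set_colorkey((0, 0, 0))   # treat black as transparent
holder.blit(text, (0, 0))

holder.set_alpha(128)            # whole-layer opacity, 0..255
screen.blit(holder, (50, 50))
pygame.display.flip()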
Related
I'm having some performance issues in pygame, so I'm trying to optimize the rendering.
Currently, I'm blitting the background image to the display buffer:
self.display.blit(self.bg, (0, 0))
Instead, I'm looking for a way to replace the buffer with a copy of the background surface, and draw over that. This way, I don't have to blit a large image every frame, saving me some time.
Is there any way to do so?
It doesn't matter that much how often you blit something to the screen surface, since the display only gets updated once you call pygame.display.update or pygame.display.flip.
If you're sure that blitting the whole background image to the screen surface is a bottleneck in your game, you can try the following things:
a) Instead of blitting the whole background every frame, use the clear() function to "erase" your sprites from the screen.
b) Instead of calling pygame.display.flip or pygame.display.update without an argument, call pygame.display.update with the list of the areas of the screen that have changed, which is returned by the draw() function (maybe in combination with clear(); see the sketch after this list).
c) Create your display surface with the FULLSCREEN, DOUBLEBUF and HWSURFACE flags.
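A rough sketch of (a) and (b) combined, using a RenderUpdates sprite group (the screen, background, and running names are illustrative stand-ins for your own objects):

import pygame

# A group whose draw() returns the list of rects that changed.
allsprites = pygame.sprite.RenderUpdates()

# Blit the full background exactly once, up front.
screen.blit(background, (0, 0))
pygame.display.flip()

while running:
    allsprites.clear(screen, background)  # erase sprites by restoring background
    allsprites.update()
    dirty = allsprites.draw(screen)       # rects that actually changed
    pygame.display.update(dirty)          # repaint only those areas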
But as I already said: make sure you know where your bottle neck is. Some common performance pitfalls are: loading images multiple times from disk, font rendering, using different pixel formats (e.g. not calling convert/convert_alpha on surfaces created from images) and generally the lack of caching.
(also note that python/pygame is generally not the first choice when creating graphically demanding games)
Without seeing your code it will be hard to tell where the bottleneck really is. Sloth is correct on his points as a way to optimize. In my experience, all images should be pre-processed outside of the main game loop by drawing them onto their own surfaces. Blitting surfaces to surfaces is much faster than blitting images to surfaces.
# Done once, outside the game loop (get_image is the asset loader used here;
# img_rect's topleft is assumed to be (0, 0)).
img_surface = pygame.Surface((img_rect.width, img_rect.height), pygame.SRCALPHA)
img_surface.fill((0, 0, 0, 0))  # start fully transparent
img_surface.blit(get_image("my_image.png"), img_rect)
This happens outside the game loop. In the game loop you blit the image surface to your display surface. If you really want to evaluate your code, use cProfile. It will help you nail down exactly where the bottleneck is. Do as much pre-processing outside the main game loop as possible.
Python getting meaningful results from cProfile
This link really helped me understand cProfile. Sloth is also correct in that pygame is limited, so you are going to need to optimize everything as much as you can. That is where cProfile comes in.
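A minimal way to run it (main_loop is a hypothetical stand-in for your game's entry point):

import cProfile
import pstats

cProfile.run('main_loop()', 'profile_stats')  # profile one run of the game

stats = pstats.Stats('profile_stats')
stats.sort_stats('cumulative').print_stats(20)  # show the 20 biggest offenders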
...Also, make sure you are only drawing things that are visible to the user. That can really help improve performance.
Background
I'm working in python/pyglet to create a procedurally generated terrain.
I'm visualizing it with pyglet by running a lambda function for each x and y coordinate on the screen and then painting that pixel in the color I want.
This is not very efficient.
I have a sneaking suspicion that there is a way to create a rectangular object and tell the GL engine to render a texture upon it, giving the GL engine a lambda function that will return the color in exchange for an x and y coordinate.
Pseudo pyglet example
def get_color(pos):
    x, y = pos
    color = x * y % 255
    return [color, color, color]
width = 1680
height = 1024
win = window.Window(fullscreen=False, vsync=True, width=width, height=height)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
gluOrtho2D(0, win.width, 0, win.height)
glTextureMapper(get_color)  # imagined API: the call I wish existed
win.clear()
points = [0, 0, width, 0, width, height, 0, height]
pyglet.graphics.draw(int(len(points) / 2), GL_QUADS, ('v2f', points))
TLDR Question
How do I color a polygon using a lambda function? Is this a shader?
Pyglet examples are preferred, but examples in any language are welcome
This is probably not a complete answer, but it might get you a bit further.
Also keep in mind that, oddly enough, I've never worked much with textures at all.
What you might be able to do is create a texture group for any set of quads and add them into a batch (though I'm not entirely sure why you need a lambda function).
class TextureGroup(pyglet.graphics.Group):
    def __init__(self, texture):
        super(TextureGroup, self).__init__()
        self.texture = texture

    def set_state(self):
        # Bind this group's texture before its vertex lists are drawn.
        glEnable(self.texture.target)
        glBindTexture(self.texture.target, self.texture.id)

    def unset_state(self):
        glDisable(self.texture.target)
vertex_mem = {}
batch = pyglet.graphics.Batch()
texture_group = TextureGroup(texture)  # texture comes from your image/atlas
# A quad needs 4 vertices; points holds their 8 flattened x,y floats.
vertex_list = batch.add(4, pyglet.gl.GL_QUADS, texture_group, ('v2f', points))
vertex_mem[(x, y)] = vertex_list       # remember the quad by its grid position
This is one way to optimize how textures are applied to certain faces, and adding those faces to a batch renders them a lot faster and more dynamically.
If you need to update the vertices, you can always access vertex_list.vertices and change the x,y pair for that particular face (a one-liner example follows).
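For example (indices and names are illustrative; each vertex occupies two consecutive floats in the flattened array):

# Move the quad's first corner; .vertices supports slice assignment.
vertex_list.vertices[0:2] = [new_x, new_y]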
You can use the custom group to create certain textures "on the fly", or in this case pre-render certain textures which you could stitch together to create "procedural" textures.
What @Ripi2 says is also a good tip: read up on how procedural texture mapping works and what it actually is. Most graphics you see in sand-box games are really clever re-use of textures to create the illusion of on-demand graphics.
The last and final tip is that you will most likely need to look into shaders; they are by far the fastest way to auto-generate "textures" by manipulating and creating turbulence in them. Anything else will be heavily taxing on your system, because you will need to iterate over the pixel area in some way.
But again, this is not my area of expertise, only a fellow traveler's bits and pieces of information gathered over the years.
As pointed out, strictly speaking OpenGL cannot draw rectangles (quads), but you can always use a diagonal between two corners to make two triangles. (There can be some differences in interpolation between the corners, but they probably won't affect you.)
And while I'm being pedantic: you're not coloring pixels, you're coloring points on your 3D terrain object's surface. (I'm assuming you want the terrain to keep the same colors when viewed from any angle or distance.)
OK, what I hope will be the real answer. Is your procedural terrain generation coloring done once, or does it change dynamically?
If you only create and color the terrain once, use a texture map. You'll need to learn how to assign texture coordinates at the corners of the terrain (TL;DR: they'll be (0.0,0.0), (1.0,0.0), (1.0,1.0), (0.0, 1.0) ) and how to use glTexImage2D to transfer an image to the GPU.
Texture maps are just 2D raster images. Pyglet should be able to handle this for you; if not, the Pillow or Python Imaging Library packages can.
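A rough sketch of the one-time approach, assuming NumPy is available (the color function is the x * y % 255 toy from the question), letting pyglet do the glTexImage2D work for you:

import numpy as np
import pyglet

width, height = 1680, 1024
window = pyglet.window.Window(width=width, height=height)

# Evaluate the color function once, on the CPU, into an RGB byte array.
xs, ys = np.meshgrid(np.arange(width), np.arange(height))
gray = (xs * ys % 255).astype(np.uint8)
pixels = np.dstack([gray, gray, gray])   # shape (height, width, 3)

# ImageData wraps the raw bytes; get_texture() uploads them to the GPU once.
image = pyglet.image.ImageData(width, height, 'RGB', pixels.tobytes())
texture = image.get_texture()

@window.event
def on_draw():
    window.clear()
    texture.blit(0, 0)   # the GPU samples the texture; no per-pixel Python loop

pyglet.app.run()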
If you want the terrain to change colors on the fly, you need a fragment shader, a small function written in OpenGL Shading Language that executes every time the terrain is drawn. And due to the way OpenGL works you'll also have to write a vertex shader that handles the 3D coordinate transformations.
This will mean more thinking and coding than the texture map approach. But everything in OpenGL (or DirectX, or RenderMan, …) is done with shaders these days, so you might as well start learning. And shaders let you do any kind of procedural modelling and animation, not just colors. For example, you could dynamically generate your terrain heights as well as colors.
Hope this helps.
I'm trying to create a 360-degree camera, just like Google Street View cameras.
(This is my whole code, if you are interested.)
I have a particular kind of perspective equation that maps pixel [xold, yold] to [xnew, ynew] according to the alpha and beta angles given as inputs.
To simplify that equation, and my question, assume I'm just trying to rotate an image.
Now my question is: how do I rotate an image by applying the rotation equation to each pixel, very fast, in pygame or any other interactive shell:
xnew = xold * cos(alpha) - yold * sin(alpha)
ynew = xold * sin(alpha) + yold * cos(alpha)
Assume pygame.transform.rotate() is not available
Read the following words from pygame.org:
http://www.pygame.org/docs/ref/surface.html
"There is support for pixel access for the Surfaces. Pixel access on hardware surfaces is slow and not recommended. Pixels can be accessed using the get_at() and set_at() functions. These methods are fine for simple access, but will be considerably slow when doing of pixel work with them. If you plan on doing a lot of pixel level work, it is recommended to use a pygame.PixelArray object for direct pixel access of surfaces, which gives an array like view of the surface. For involved mathematical manipulations try the pygame.surfarray module for accessing surface pixel data using array interfaces module (It’s quite quick, but requires NumPy.)"
pygame.Surface.set_at((x,y),Color) is definitely the easiest way to do it, but for performance (which is what you asked), you must use pygame.PixelArray or pygame.surfarray.
I can't do the coding for you because I'm short on time, but these websites will point you in the right direction:
http://www.pygame.org/docs/ref/pixelarray.html#pygame.PixelArray
http://www.pygame.org/docs/ref/surfarray.html#module-pygame.surfarray
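As a rough sketch of the surfarray route, applying your rotation equation as an inverse mapping over the whole image at once (the wrap-around with % is a simplification; a real remap would clip or mask out-of-range coordinates):

import numpy as np
import pygame

def rotate_surface(surface, alpha):
    # Copy the pixels into a (width, height, 3) numpy array.
    src = pygame.surfarray.array3d(surface)
    w, h = src.shape[0], src.shape[1]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h), indexing='ij')
    # Inverse mapping: for every destination pixel, find its source pixel,
    # so the whole image is remapped in a few vectorized operations.
    xold = (xs * np.cos(alpha) + ys * np.sin(alpha)).astype(int) % w
    yold = (-xs * np.sin(alpha) + ys * np.cos(alpha)).astype(int) % h
    return pygame.surfarray.make_surface(src[xold, yold])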
Good luck with your coding!
Given that you are trying to simulate a 3D environment, it would be extremely hard to beat a PyOpenGL solution performance-wise. From what I saw when I ran your code, it looks like you are implementing a "skybox", where the viewer sits inside a virtual cube. OpenGL is meant for 3D computations like this, so you do not need to manually shift pixels one at a time; instead, let the GPU do that for you while you just pass in a series of vertices and textures. If you need really complicated equations that manipulate every single pixel on the screen, you could then use GLSL shaders to do that work on the GPU in parallel. Let me know if you want me to elaborate on this approach, as it would be very different from your current code.
This question is related to this other one.
In my program (which uses pygame to draw objects on the video) I have two representation of my world:
A physical one, which I use for all the calculations involved in the simulation, in which objects are located on a 1000x1000 metre surface.
A visual one, which I use to draw on the screen, in which my objects are located in a window measuring 100x100 pixels.
What I want to achieve is to be able to pass to my pygame drawing functions (which normally accept inputs in pixels) my physical/real-word coordinates. In other words, I would like to be able to say:
Draw a 20m radius circle at coordinates (200m, 500m)
using the precise pygame syntax:
pygame.draw.circle(surface, (255,255,255), (200,500), 20)
and get my circle of 2px radius centred on pixel (20, 50).
Please note that this question is about a native pygame way to do this, not about some sort of workaround to achieve that result (if you want to answer that, you should take a look at the question I already mentioned instead).
Thanks in advance for your time and support.
There is no native pygame way to do this.
You may be misunderstanding the function of pygame. It is not for drawing vector objects. It is for writing pixels into video surfaces.
Since you have vector objects, you must define how they will be converted into pixels. Doing this is not a workaround - it's how you are intended to use pygame.
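For illustration only, this is the kind of explicit conversion meant here (the 100 px / 1000 m ratio comes from the question; the helper name is made up):

SCALE = 100.0 / 1000.0   # pixels per metre: a 100x100 px window shows 1000x1000 m

def px(metres):
    """Convert a world-space length or coordinate to pixels."""
    return int(round(metres * SCALE))

# "Draw a 20 m radius circle at (200 m, 500 m)":
pygame.draw.circle(surface, (255, 255, 255), (px(200), px(500)), px(20))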
Since it seems that PyGame developers do not hang around here too much, I brought the question to the pygame mailing list, where it originated a monster thread and the issue was debated at large.
The summary would be:
At present there is not such a feature.
There is interest in implementing it, or at least in trying to implement it...
...although it is not a priority of the core devs in any way.
There is more than one way to skin a cat:
should the scaling happen both ways (inputting coordinates and reading them back)?
how to deal with lines that have no thickness but should still be visible?
how to deal with the visibility of objects at the edge of the image? which of their points should be taken as the reference to decide whether a pixel should be lit for them?
and more (see linked thread).
I am using python-clutter 1.0.
My question, in the form of a challenge
Write code to allow zooming up to a CairoTexture actor by pressing a key, in steps such that at each step the actor can be re-drawn (by cairo) so that the image remains high-res but still scales as expected, without re-sizing the actor.
Think of something like Inkscape and how you can zoom into the vectors; how the vectors remain clean at any magnification. Put a path (bunch of cairo line_to commands, say) onto an CairoTexture actor and then allow the same trick to happen.
More detail
I am aiming at a small SVG editor that uses groups of actors, each devoted to one path. I 'zoom' by using SomeGroup.set_depth(z) and then making z bigger/smaller. All fine so far. However, the closer the actor(s) get to the camera, the more the texture is stretched to fit their new apparent size.
I can't seem to find a way to get Clutter to do both:
Leave the actor's actual size static (i.e. what it started as.)
Swap out its underlying surface for a larger one (on zooming in) that I can then re-draw the path onto (using a cairo matrix to perform the scaling of the context.)
If I use set_size or set_surface_size, the actor gets larger, which is not intended. I only want its surface (the underlying pixel data) to get larger.
(I'm not sure of the terminology for this, mipmapping perhaps? )
Put another way: a polygon is getting larger, increase the size of its texture array so that it can map onto the larger polygon.
I have even tried an end-run around clutter by keeping a second surface (using pycairo) that I re-create at the apparent size of the actor (get_transformed_size), and then using clutter's set_from_rgb_data to point it at my second surface, forcing a re-size of the surface but not of the actor's dimensions.
The problem with this is that a) clutter ignores the new size and only draws into the old width/height, and b) the RGBA vs ARGB32 thing causes a kind of colour meltdown.
I'm open to any alternative ideas; I hope I'm standing in the woods missing all the trees!
Well, despite all my tests and hacks, it was right under my nose all along.
Thanks to Neil on the clutter-project list, here's the scoop:
CT = SomeCairoTextureActor()

# Record the old size, once:
old_width, old_height = CT.get_size()

# Start a loop:
#   Do stuff to the depth of CT (or its parent)
...
# Get the apparent width and height (absolute size in pixels)
appr_w, appr_h = CT.get_transformed_size()
# Re-make the surface at the new apparent size
CT.set_surface_size(appr_w, appr_h)
# Crunch the actor back down to its old size,
# but leave the texture surface at the larger size!
CT.set_size(old_width, old_height)
# ...and loop back again
The surface size and the size of the actor don't have to be the same. The surface size is just by default the preferred size of the actor. You can override the preferred size by just setting the size on the actor. If the size of the actor is different from the surface size then the texture will be squished to fit in the actor size (which I think is what you want).
Nice to put this little mystery to bed. Thanks clutter list!