Procedural GL textures - python

Background
I'm working in python/pyglet to create a procedurally generated terrain.
I'm visualizing it with pyglet by running a lambda function for each x and y coordinate on the screen and then painting that pixel in the color I want it.
This is far from optimal.
I have a sneaking suspicion that there is a way to create a rectangular object and tell the GL engine to render a texture upon it, giving the GL engine a lambda function that will return the color in exchange for an x and y coordinate.
Pseudo pyglet example
def get_color(pos):
    x, y = pos
    color = x * y % 255
    return [color, color, color]
width = 1680
height = 1024
win = window.Window(fullscreen=False, vsync=True, width=width, height=height)
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
gluOrtho2D(0, win.width, 0, win.height)
glTextureMapper(get_color)  # imaginary API: map get_color over the texture
win.clear()
points = [0, 0, width, 0, width, height, 0, height]
pyglet.graphics.draw(int(len(points) / 2), GL_QUADS, ('v2f', points))
TLDR Question
How do I color a polygon using a lambda function? Is this a shader?
Pyglet examples are preferred, but examples in any language are welcome

This is probably not a complete answer, but it might get you a bit further.
Also keep in mind that, weirdly enough, I've never worked much with textures at all.
What you might be able to do is create a texture group for any set of quads and add them into a batch - although I'm not entirely sure why you need a lambda function here.
class TextureGroup(pyglet.graphics.Group):
    def __init__(self, texture):
        super(TextureGroup, self).__init__()
        self.texture = texture

    def set_state(self):
        glEnable(self.texture.target)
        glBindTexture(self.texture.target, self.texture.id)

    def unset_state(self):
        glDisable(self.texture.target)

vertex_mem = {}
batch = pyglet.graphics.Batch()
texture_group = TextureGroup(texture)  # texture loaded elsewhere
vertex_list = batch.add(4, pyglet.gl.GL_QUADS, texture_group, ('v2f', points))
vertex_mem[(x, y)] = vertex_list
This is one way to optimize how textures are added to certain faces, and it also adds those faces into a batch, rendering them a lot faster and more dynamically.
If you need to update the vertices, you can always access the vertex list's vertices attribute and change the x, y pairs for that particular face.
You can use the custom group to create certain textures "on the fly", or in this case pre-render certain textures which you could stitch together to create "procedural" textures.
What @Ripi2 says is also a good tip: read up on how procedural texture mapping works and what it actually is. Most graphics you see in sandbox games are actually clever re-use of textures to create the illusion of on-demand graphics.
The last and final tip is that you will most likely need to look into shaders. They are by far the fastest way to auto-generate "textures" by manipulating and creating turbulence in existing textures; anything else will be heavily taxing on your system, because you will need to iterate over the pixel area in some way.
But again, this is not my area of expertise - only a fellow traveler's bits and pieces of information gathered over the years.

As pointed out, strictly speaking OpenGL cannot draw rectangles (quads), but you can always use a diagonal between two corners to make two triangles. (There can be some differences in interpolation between corners, but they probably won't affect you.)
And while I'm being pedantic: you're not coloring pixels, you're coloring points on your 3D terrain object's surface. (I'm assuming you want the terrain to keep the same colors when viewed from any angle or distance.)
OK, what I hope will be the real answer. Is your procedural terrain generation coloring done once, or does it change dynamically?
If you only create and color the terrain once, use a texture map. You'll need to learn how to assign texture coordinates at the corners of the terrain (TL;DR: they'll be (0.0,0.0), (1.0,0.0), (1.0,1.0), (0.0, 1.0) ) and how to use glTexImage2D to transfer an image to the GPU.
Texture maps are just 2D raster images. Pyglet should be able to handle this for you; if not, the Pillow or Python Imaging Library packages can.
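As a hedged sketch of that approach (plain Python, no GL yet): evaluate the per-pixel function once into a raster byte buffer, which is exactly the kind of data glTexImage2D or pyglet's image API expects. The helper names here are made up for illustration:

```python
def get_color(x, y):
    # Same rule as the question's pseudo-code: greyscale x*y mod 255
    c = (x * y) % 255
    return bytes((c, c, c))

def make_texture_bytes(width, height):
    # Row-major RGB byte string, 3 bytes per pixel
    return b"".join(get_color(x, y) for y in range(height) for x in range(width))

data = make_texture_bytes(64, 64)
# In pyglet you could then upload it once, e.g.:
# texture = pyglet.image.ImageData(64, 64, 'RGB', data).get_texture()
```

The texture is generated once on the CPU and reused every frame, instead of re-running the function per pixel per frame.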
If you want the terrain to change colors on the fly, you need a fragment shader, a small function written in OpenGL Shading Language that executes every time the terrain is drawn. And due to the way OpenGL works you'll also have to write a vertex shader that handles the 3D coordinate transformations.
This will mean more thinking and coding than the texture map approach. But everything in OpenGL (or DirectX, or RenderMan, ...) is done with shaders these days, so you might as well start learning. And shaders let you do any kind of procedural modelling and animation, not just colors; for example, you could dynamically generate your terrain heights as well as colors.
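For a flavour of what that fragment shader might look like, here is a minimal GLSL sketch held in a Python string (as pyglet shader examples usually are). The uv input, the surface_size uniform and the version line are illustrative assumptions, not tested against a real pipeline:

```python
# A fragment shader reproducing the question's x*y % 255 rule per fragment.
# 'uv' is assumed to be interpolated texture coordinates handed over by a
# matching vertex shader, scaled here up to pixel coordinates.
fragment_src = """
#version 330 core
in vec2 uv;
out vec4 frag_color;
uniform vec2 surface_size;

void main() {
    vec2 p = uv * surface_size;
    float c = mod(p.x * p.y, 255.0) / 255.0;
    frag_color = vec4(c, c, c, 1.0);
}
"""
```

The GPU runs this once per covered pixel every frame, which is exactly the "lambda per coordinate" idea from the question, but in hardware.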
Hope this helps.


Draw extruded 3d text with pyopengl?

What is the best way to draw 3d text with pyopengl, preferably with modern opengl ?
There seem to be quite a few examples, but mostly in old-style opengl and not for python
First you need a text outline, which requires a font.
On Windows you can use WinGDI fonts by creating a dummy context, selecting a created font on it and getting the outlines of every character. For TrueType fonts you load glyphs into the face (e.g. with FreeType) and simply access their outlines.
For the front and back face of your 3D text, you need to convert the outlines/contours to triangles. Either use the good old GLU tessellator, the relatively new tessellation shaders of OpenGL 4, or a geometry library like CGAL. Then simply draw the triangles for the front, and again with some additional depth for the back.
For the sides you simply span rectangles behind the outline segments/lines, with height equal to your desired text depth - these are your text sides. You can simply use a depth vector and transform everything afterwards, or calculate the orthogonal vector from two segments.
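The side-wall step can be sketched in a few lines of Python. The outline format (a closed list of 2D points) and the fixed -z extrusion direction are assumptions for illustration:

```python
def extrude_sides(outline, depth):
    """For each outline segment, emit one quad spanning from z=0 (front)
    to z=-depth (back): the side walls of the extruded text."""
    quads = []
    n = len(outline)
    for i in range(n):
        x0, y0 = outline[i]
        x1, y1 = outline[(i + 1) % n]  # wrap around to close the contour
        quads.append((
            (x0, y0, 0.0), (x1, y1, 0.0),        # front edge
            (x1, y1, -depth), (x0, y0, -depth),  # matching back edge
        ))
    return quads

# A unit square outline extruded to depth 2 gives four side quads:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sides = extrude_sides(square, 2.0)
```

The same quads work for any character contour the tessellator handed you, holes included, since each contour just contributes its own ring of walls.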
You'll notice that this all can be expensive, so don't hesitate to cache as much as possible.
I referred to C sources because I'm more familiar with those, but I bet there are Python equivalents to port my explanations to.

Pygame - surface fade in/out

I'm writing an application that shows a list of messages coming up on a PyGame window; to make things prettier, I decided to add a nice fade in/out effect to the text.
Problem is: SDL only allows per-pixel or per-surface alpha, not both. So I wrote a function that recalculates the pixel alphas based on the wanted "layer" opacity, but it's of course very, very, very slow (even on a Core i7 @ 2.7GHz, it's dropping to ~10FPS):
def apply_alpha(surface, alpha):
    for x in xrange(surface.get_width()):
        for y in xrange(surface.get_height()):
            color = surface.get_at((x, y))
            color.a = int(color.a * alpha)
            surface.set_at((x, y), color)
    return surface
Full example here: https://gist.github.com/rshk/5072173 (not the real application, just a relevant example)
Is there any way to speed things up? I see it's possible to directly access the underlying buffer, which avoids instantiation of Color objects, but it seems to be even slower to update the alphas that way, calling the BufferProxy.write() method.
Maybe something using hardware acceleration would help? I think the ideal here would be to apply the alpha while blitting the surface onto the background, but I'm not very expert in this kind of things..
note: changing the text color is not an option, as the background might not always be black.
As this might serve as future reference for somebody, I did as suggested by @Xymostech: create another surface with the same size as the rendered text and apply the alpha on that surface, using .set_alpha().
Updated version here: https://gist.github.com/rshk/5072375
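For reference, the per-pixel math the slow loop performs can be expressed directly on a raw RGBA byte buffer; this is a pure-Python illustration, and with pygame you would obtain such a buffer through something like Surface.get_view() or surfarray rather than get_at/set_at:

```python
def scale_alpha(rgba, alpha):
    """Scale the alpha channel of an RGBA byte buffer in one pass
    (every 4th byte), leaving the color channels untouched."""
    out = bytearray(rgba)
    for i in range(3, len(out), 4):
        out[i] = int(out[i] * alpha)
    return out

# Two pixels: red at alpha 200, green at alpha 100
buf = bytes([255, 0, 0, 200, 0, 255, 0, 100])
faded = scale_alpha(buf, 0.5)
```

Working on the flat buffer avoids creating a Color object per pixel, which is where much of the original loop's time goes.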

How to draw a pixel on the screen directly?

I'm wanting to do something like the following:
...
pixel[0,0] = [ 254, 0, 0 ] # Draw R at pixel x0y0
pixel[2,1] = [ 0, 254, 0 ] # Draw G at pixel x2y1
pixel[4,2] = [ 0, 0, 254 ] # Draw B at pixel x4y2
...
I hope to display many different configurations of pixels and colours in a short space of time -- writing to an intermediary file would be too expensive.
How should I best go about achieving this goal in Python?
Direct answer:
This can only be done with OS-specific APIs. Some OSes do not allow changing pixels on the screen directly.
On Windows, you can use the pywin32 libraries to get the screen's device context with a dc = GetDC(0) call, then paint pixels with SetPixel(dc, x, y, color).
import win32gui
import win32api
dc = win32gui.GetDC(0)
red = win32api.RGB(255, 0, 0)
win32gui.SetPixel(dc, 0, 0, red) # draw red at 0,0
Of course, what you paint this way can be erased at any moment.
Right answer:
Painting pixel by pixel is the slowest way to paint something. For example, on Windows, creating an image in memory and then painting it in one operation is an order of magnitude faster than painting with SetPixel.
If you need speed, use some user interface library for Python, for example the Tkinter module or PyQt. Create a window and an image in memory, then paint the image onto the window. If you need to manipulate pixels, manipulate them in the image and repaint every time.
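One hedged sketch of that pattern using nothing but the standard library: build the whole frame as an in-memory binary PPM image, which Tkinter's PhotoImage can then display in one shot. The sparse pixel dict mirrors the question's pixel[x, y] idea; the function name is made up for illustration:

```python
def frame_to_ppm(pixels, width, height):
    """Pack a sparse {(x, y): (r, g, b)} dict into a binary PPM image:
    one buffer for the whole frame instead of one call per pixel."""
    header = ("P6 %d %d 255\n" % (width, height)).encode()
    body = b"".join(
        bytes(pixels.get((x, y), (0, 0, 0)))
        for y in range(height) for x in range(width)
    )
    return header + body

pixels = {
    (0, 0): (254, 0, 0),  # R at x0y0
    (2, 1): (0, 254, 0),  # G at x2y1
    (4, 2): (0, 0, 254),  # B at x4y2
}
ppm = frame_to_ppm(pixels, 5, 3)
# In Tkinter: photo = tkinter.PhotoImage(data=ppm); label["image"] = photo
```

Regenerating this buffer and repainting once per frame is the "image in memory" approach the answer recommends.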
Although it does not do exactly what you ask (paint "by pixel"), I like to use Python, GTK and Cairo.
In this case, you create a GTK Window, add a DrawingArea, create a Cairo context, and do your paint operations.
The difference is that you have a continuous canvas with floating-point coordinates, and instead of pixels you draw geometric shapes with strokes and fills. You could even draw one-pixel-sized rectangles and fill them, but that would not be as fast.
Also, you can take any raster object (image, pixbuffer) and render it all at once. Then you could generate images in a fast sequence and render their pixels all at once, similar to a movie.
A working example can be seen in a previous answer here (second snippet):
quickest way to get started with cairo
I'm not sure if I get your question right, but I believe you should investigate some framework or middleware that gives you access to the screen. I would suggest PyGame or Pyglet, or maybe even Panda. It may be a bit of overkill, because Pygame and Panda are frameworks that let you develop games while Pyglet is a wrapper around OpenGL in Python, but they will let you control pixels, windows and the like with ease.

Zooming into a Clutter CairoTexture while re-drawing

I am using python-clutter 1.0
My question in the form of a challenge
Write code to allow zooming up to a CairoTexture actor, by pressing a key, in steps such that at each step the actor can be re-drawn (by cairo) so that the image remains high-res but still scales as expected, without re-sizing the actor.
Think of something like Inkscape and how you can zoom into the vectors; how the vectors remain clean at any magnification. Put a path (bunch of cairo line_to commands, say) onto an CairoTexture actor and then allow the same trick to happen.
More detail
I am aiming at a small SVG editor which uses groups of actors. Each actor is devoted to one path. I 'zoom' by using SomeGroup.set_depth(z) and then make z bigger/smaller. All fine so far. However, the closer the actor(s) get to the camera, the more the texture is stretched to fit their new apparent size.
I can't seem to find a way to get Clutter to do both:
Leave the actor's actual size static (i.e. what it started as.)
Swap-out its underlying surface for larger ones (on zooming in) that I can then re-draw the path onto (and use a cairo matrix to perform the scaling of the context.)
If I use set_size or set_surface_size, the actor gets larger, which is not intended. I only want its surface (underlying data) to get larger.
(I'm not sure of the terminology for this; mipmapping, perhaps?)
Put another way: a polygon is getting larger, increase the size of its texture array so that it can map onto the larger polygon.
I have even tried an end-run around Clutter by keeping a second surface (using pycairo) that I re-create at the apparent size of the actor (get_transformed_size), and then using Clutter's set_from_rgb_data pointed at my second surface, forcing a re-size of the surface but not of the actor's dimensions.
The problem with this is that a) Clutter ignores the new size and only draws into the old width/height, and b) the RGBA vs ARGB32 thing kind of causes a colour meltdown.
I'm open to any alternative ideas; I hope I'm just standing in the woods missing all the trees!
\d
Well, despite all my tests and hacks, it was right under my nose all along.
Thanks to Neil on the clutter-project list, here's the scoop:
CT = SomeCairoTextureActor()

# Record the old size, once:
old_width, old_height = CT.get_size()

# Start a loop:
#   do stuff to the depth of CT (or its parent)
#   ...

# Get the apparent width and height (absolute size in pixels)
appr_w, appr_h = CT.get_transformed_size()

# Make the surface the new size
CT.set_surface_size(appr_w, appr_h)

# Crunch the actor back down to the old size,
# but leave the texture surface larger
CT.set_size(old_width, old_height)

# loop back again
"The surface size and the size of the actor don't have to be the same. The surface size is just by default the preferred size of the actor. You can override the preferred size by just setting the size on the actor. If the size of the actor is different from the surface size then the texture will be squished to fit in the actor size (which I think is what you want)."
Nice to put this little mystery to bed. Thanks clutter list!
\d
