OpenGL: Access Array Texture in GLSL - python

I was trying to texture a cube in PyOpenGL when I ran into an issue. PyOpenGL only supports 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES. I need at least version 1.40, because layout qualifiers were introduced in 1.40 and I need them to get the texture coordinates. I'm trying to texture each side of a cube differently using array textures. Here are the relevant snippets from my code.
Loading the array texture:
def load_texture_array(path, width, height):
    teximg = pygame.image.load(path)
    texels = teximg.get_buffer().raw
    texture = GLuint(0)
    layerCount = 6
    mipLevelCount = 1
    glGenTextures(1, texture)
    glBindTexture(GL_TEXTURE_2D_ARRAY, texture)
    # allocate immutable storage for all 6 layers, then upload the texels
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevelCount, GL_RGBA8, width, height, layerCount)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, layerCount, GL_RGBA, GL_UNSIGNED_BYTE, texels)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    return texture  # the caller needs the id to bind the texture later
My vertex shader:
#version 130
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
out vec3 color;
out vec2 TexCoord;
void main() {
    gl_Position = vec4(aPos, 1.0);
    color = aColor;
    TexCoord = aTexCoord;
}
My fragment shader:
#version 130
out vec4 FragColor;
in vec3 color;
in vec2 TexCoord;
uniform sampler2D texture;
void main()
{
    FragColor = texture(texture, TexCoord);
}
If you need any more information, just ask in the comments and I will add it.

PyOpenGL only supports 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES [...]
No. PyOpenGL supports OpenGL ES versions from 1.0 to 3.1 and all desktop OpenGL versions from 1.1 to 4.4 (see About PyOpenGL). You have confused (desktop) OpenGL with OpenGL ES. Compare the OpenGL specification (Khronos OpenGL registry) and the OpenGL ES specification (Khronos OpenGL ES registry).
In any case, the "supported" version refers only to the OpenGL API. The OpenGL context version depends only on the graphics card and its driver. You can use any GLSL version that is supported by the current OpenGL context. PyOpenGL guarantees that the OpenGL API is implemented up to version 4.4, but you can also use a higher version; it just means that OpenGL functions added later may be missing from the PyOpenGL API.
Log the current version after creating the OpenGL window and making the context current:
print(glGetString(GL_VENDOR))
print(glGetString(GL_RENDERER))
print(glGetString(GL_VERSION))
print(glGetString(GL_SHADING_LANGUAGE_VERSION))
Vertex shader input layout qualifiers are not supported in OpenGL Shading Language 1.30. You have to switch to OpenGL Shading Language 1.40:
#version 130
#version 140
Anyway, you've already mentioned that in your question: "I need at least version 1.40 because in 1.40 layout was introduced [...]"
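If you want to verify which GLSL version the context accepts, you can compile the shaders with PyOpenGL's helper module. This is a minimal sketch, assuming your shader sources are stored in the strings vertex_src and fragment_src:

from OpenGL.GL import *
from OpenGL.GL.shaders import compileProgram, compileShader

# compileShader raises a RuntimeError containing the driver's compile log
# if "#version 140" (or anything else in the source) is rejected
program = compileProgram(
    compileShader(vertex_src, GL_VERTEX_SHADER),
    compileShader(fragment_src, GL_FRAGMENT_SHADER))
glUseProgram(program)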
The GLSL sampler type has to match the texture target. The proper sampler type for a (floating point) GL_TEXTURE_2D_ARRAY is sampler2DArray (see Sampler types):
uniform sampler2D texture;
uniform sampler2DArray texture;
For two-dimensional array textures, three-dimensional texture coordinates are required (see texture):
FragColor = texture(texture, TexCoord);
FragColor = texture(texture, vec3(TexCoord.st, index));
index is the index of the layer addressed in the array texture. You can provide the index through a uniform variable or even as a 3rd component of the texture coordinate attribute.
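Putting it all together, the corrected fragment shader could look like the following. This is a minimal sketch: the uniform is renamed to u_texture (a variable named texture hides the built-in texture function on some compilers), and the layer is selected through a hypothetical int uniform named layer:

fragment_src = """
#version 140
out vec4 FragColor;
in vec3 color;
in vec2 TexCoord;
uniform sampler2DArray u_texture; // renamed so it does not hide texture()
uniform int layer;                // which of the 6 layers (cube faces) to sample
void main()
{
    FragColor = texture(u_texture, vec3(TexCoord.st, float(layer)));
}
"""

After linking and using the program, you would select e.g. the 4th face with:

glUniform1i(glGetUniformLocation(program, "layer"), 3)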

Related

opengl fragment shader storing color in 16 bytes

I use the following standard fragment shader:
#version 330
in vec4 v_color;
out vec4 FragColor;
void main()
{
    FragColor = v_color;
}
With the following standard vertex shader:
#version 330
uniform mat4 u_view; // View matrix
uniform mat4 u_projection; // Projection matrix
in vec4 a_color;
in vec3 a_position;
out vec4 v_color;
void main()
{
    gl_Position = u_projection * u_view * vec4(a_position, 1.0);
    v_color = a_color;
}
My problem is that I use 16 bytes to represent the color, which wastes memory on the CPU side while generating the vertices.
Can I reduce the size of the color to 4 bytes?
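A common way to do this (a sketch of the general technique, not code from this post) is to store each color as four unsigned bytes and let OpenGL normalize them to [0, 1] when the attribute is read; the in vec4 a_color declaration in the shader stays unchanged. Here, color_vbo and a_color_loc are assumed to come from glGenBuffers and glGetAttribLocation:

import ctypes
import numpy as np
from OpenGL.GL import *

# one RGBA color per vertex: 4 bytes instead of 16 (four floats)
colors = np.array([[255, 0, 0, 255],
                   [0, 255, 0, 255]], dtype=np.uint8)
glBindBuffer(GL_ARRAY_BUFFER, color_vbo)
glBufferData(GL_ARRAY_BUFFER, colors, GL_STATIC_DRAW)
# GL_TRUE tells OpenGL to normalize the bytes to floats in [0.0, 1.0]
glVertexAttribPointer(a_color_loc, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, ctypes.c_void_p(0))
glEnableVertexAttribArray(a_color_loc)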

PyOpenGL Constant Size 2D Labels for 3D Objects

I'm trying to make labels for some 3d objects, with an icon/triangle to show you where the object is, plus some text describing what the object is.
Essentially, I want to 1. display text using PyOpenGL, 2. have the text + icon stay at a constant size on the screen, and 3. still be able to move them around the screen.
(I looked around a bit and looked at orthographic projections, but I'm not sure that's what I should be using...)
I have not used OpenGL very much at all, so this might be a dumb question!
Any help is much appreciated.
A nice start would be to store your icon + text inside a quad.
There are plenty of good tutorials on "font rendering" which will help you create the desired quad.
You can load your icon into an OpenGL texture; once that is done, you will have to create your quad and associate a vertex and fragment shader with it. The difficult part will be to give your quad a fixed position that is not linked to your 3D scene.
In video games, when you want to draw the UI or the HUD (health bar, minimap), you draw them at the end, on top of everything. Those elements don't need the MVP or projection matrices you might be familiar with. All the magic happens in the vertex shader, which is responsible for setting the position of all your elements; its output should be in the [-1, 1] range for all coordinates:
-1 -> left, bottom, near
1 -> right, top, far
This space is called NDC (normalized device coordinates); see the diagram at https://antongerdelan.net/opengl/raycasting.html
Your job is to output values in this range. If you want your quad to be half the width and a quarter of the height of the screen, centered in the middle, you can store this information in the vertices you send to your shader.
GLfloat vertices[] = {-0.5, -0.25, 0,   // bottom left corner
                      -0.5,  0.25, 0,   // top left corner
                       0.5,  0.25, 0,   // top right corner
                       0.5, -0.25, 0};  // bottom right corner
The vertex shader (some people use pixel-sized quads with an orthographic projection instead):
#version 330
in vec3 pos;
in vec2 uv;
out vec2 fuv;
void main()
{
    // a translation uniform (e.g. uTranslation) may be needed to move the quad on screen at runtime
    gl_Position = vec4(pos.x, pos.y, -1.0, 1.0); // set it to the near plane (-1)
    fuv = uv;
}
The fragment shader:
#version 330
in vec2 fuv;
out vec4 color;
uniform sampler2D renderedTexture;
void main() {
    color = texture(renderedTexture, fuv);
}
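To keep the label attached to a 3D object while its on-screen size stays constant, one option (a sketch under assumed names, not from the answer above) is to project the object's position to NDC on the CPU every frame and feed it to the vertex shader through a uniform vec2 uTranslation; gl_Position would then become vec4(pos.xy + uTranslation, -1.0, 1.0), so the quad keeps its NDC size but follows the object:

import numpy as np
from OpenGL.GL import *

def world_to_ndc(world_pos, view, projection):
    # clip space position, then the perspective divide
    clip = projection @ view @ np.array([*world_pos, 1.0])
    return clip[:3] / clip[3]

# every frame: obj_pos, view and projection are your scene data (assumed names)
ndc = world_to_ndc(obj_pos, view, projection)
glUniform2f(glGetUniformLocation(program, "uTranslation"), ndc[0], ndc[1])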

Drawing a crosshair in PyOpenGL

I have a working camera in a 3D scene in OpenGL, and I decided I want to draw a crosshair in the middle. To do that, I want to use a separate shader for the crosshair (and eventually, HUD) and the 3D scene. I managed to make something work using glDrawArrays and a VBO with the line vertices in it.
The problem with this setup is that the cross extends all the way to the borders of the window, even though I specified small coordinates so that it is just in the center of the screen.
Here are my shaders (in order, vertex shader and fragment shader):
#version 450 core
layout (location = 0) in vec2 aPos;
void main() {
    gl_Position = vec4(aPos, 0, 0);
}
#version 450 core
out vec4 FragColor;
void main() {
    FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
As you can see, they are extremely simple and do literally nothing with the coordinates. The drawing process looks something like this:
crosshair = np.array([
    -0.02, 0,
    0.02, 0,
    0, -0.02,
    0, 0.02], dtype='float32')
vbo_2d, vao_2d = glGenBuffers(1), glGenVertexArrays(1)
glBindVertexArray(vao_2d)
glBindBuffer(GL_ARRAY_BUFFER, vbo_2d)
glBufferData(GL_ARRAY_BUFFER, crosshair, GL_STATIC_DRAW)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 8, ctypes.c_void_p(0))
glEnableVertexAttribArray(0)
while not window.check_if_closed():
    # render 3d stuff with another shader program and other vbos and vaos
    glBindVertexArray(0)
    shader_program_2d.use()
    glBindVertexArray(vao_2d)
    glDrawArrays(GL_LINES, 0, 4)
    # swap buffers etc.
This is what my window looks like:
Thanks in advance!
The issue is the computation of the homogeneous clip space coordinate in the vertex shader. The normalized device space coordinate is computed by dividing the .xyz components of gl_Position by its .w component (the perspective divide).
Since the .w component is 0, the resulting coordinates become infinite (except where a coordinate is itself 0), which is why the cross stretches to the borders of the window. For example, (0.02, 0, 0, 0) divides out to infinity on x, while with .w = 1.0 it stays at the intended 0.02. Set the .w component to 1.0:
gl_Position = vec4(aPos, 0, 0);
gl_Position = vec4(aPos, 0, 1.0);

Pygame and PyOpenGL quad texturing problem

I'm trying to texture a quad and to understand how this little sample works. My code is not original; it's mixed together from various examples.
Texture: https://jamesmwake.files.wordpress.com/2015/10/uv_texture_map.jpg?w=660
My questions:
When I change GL_TEXTURE_MIN_FILTER to GL_TEXTURE_MAG_FILTER in glTexParameteri, the texture disappears. Why?
When I change GL_LINEAR to GL_NEAREST, nothing happens. The used texture's resolution was changed to 300x300 px. Why is that?
How can I make mipmaps and then use them?
The loadImage() function makes a texture. How does PyOpenGL know which texture should be used in the makeQuad() function?
Code:
import pygame
from pygame.locals import *
from OpenGL.GL import *
from OpenGL.GLU import *

def loadImage():
    img = pygame.image.load("checker_texture_downsized.jpg")
    textureData = pygame.image.tostring(img, "RGB", 1)
    width = img.get_width()
    height = img.get_height()
    bgImgGL = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, bgImgGL)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureData)
    glEnable(GL_TEXTURE_2D)

def makeQuad():
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0)
    glVertex2f(25, 25)
    glTexCoord2f(0, 1)
    glVertex2f(25, 775)
    glTexCoord2f(1, 1)
    glVertex2f(775, 775)
    glTexCoord2f(1, 0)
    glVertex2f(775, 25)
    glEnd()

def main():
    pygame.init()
    display = (1280, 800)
    pygame.display.set_mode(display, DOUBLEBUF|OPENGL)
    gluOrtho2D(0, 1280, 0, 800)
    loadImage()
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
        makeQuad()
        pygame.display.flip()

main()
Note that drawing by glBegin/glEnd sequences, the fixed-function matrix stack, and the fixed-function per-vertex light model have been deprecated for decades.
Read about the Fixed Function Pipeline and see Vertex Specification and Shader for a state-of-the-art way of rendering.
When I change GL_TEXTURE_MIN_FILTER to GL_TEXTURE_MAG_FILTER in glTexParameteri the texture disappears. Why?
The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR. If you don't change it and you don't create mipmaps, then the texture is not "complete" and will not be "shown". See glTexParameter.
See OpenGL 4.6 API Compatibility Profile Specification; 8.17 Texture Completeness; page 306
A texture is said to be complete if all the texture images and texture parameters required to utilize the texture for texture application are consistently defined.
... a texture is complete unless any of the following conditions hold true:
The minification filter requires a mipmap (is neither NEAREST nor LINEAR), and the texture is not mipmap complete.
When I change GL_LINEAR to GL_NEAREST, nothing happens. The used texture's resolution changed to 300x300px. Why is that?
If the texture is smaller than the region it is mapped to, then the minification filter has no effect, but the magnification filter does. If you set GL_TEXTURE_MAG_FILTER to GL_NEAREST, then the texels are not interpolated any more:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
How can I make mipmaps and then use them?
Mipmaps can be generated by glGenerateMipmap:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureData)
glGenerateMipmap(GL_TEXTURE_2D)
The loadImage() function makes a texture. How does PyOpenGL know which texture should be used in the makeQuad() function?
OpenGL is a state engine. Each state is kept until you change it again, even beyond frames. Since you have bound the texture in loadImage
glBindTexture(GL_TEXTURE_2D, bgImgGL)
the texture object bgImgGL is the one currently bound to texture unit 0, and this texture is used for drawing.
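Because the binding is global state, you could also make makeQuad() draw a different image simply by binding another texture first; a tiny sketch, where second_texture is a hypothetical id created the same way as bgImgGL:

glBindTexture(GL_TEXTURE_2D, second_texture)  # hypothetical second texture object
makeQuad()  # now samples second_texture, without changing makeQuad itself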

OpenGL (pyglet) issue with glTexImage2D (striped texture)

Trying to use glTexImage2D() to pass a programmatically generated displacement map to a vertex shader, but for some reason the vertices don't seem to be getting uniform values from the uploaded image, even though every value in the array being uploaded is the same.
The python code is as follows:
pix = []
for x in range(256):
    for y in range(128):
        pix.append(200)  # pix is just [200, 200, 200, ...]
pix = (GLuint * len(pix))(*pix)
disp_tex_id = GLuint(0)
glGenTextures(1, byref(disp_tex_id))
glActiveTexture(GL_TEXTURE1)
glEnable(GL_TEXTURE_2D)
glBindTexture(GL_TEXTURE_2D, disp_tex_id)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 128, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pix)
shader.uniformi('disp_texture', 1)
And the relevant shader code is here:
[vertex]
#version 110
#extension GL_EXT_gpu_shader4 : require
uniform sampler2D disp_texture;
varying vec4 dispColor;
void main() {
    gl_Position = ftransform(); // required to place the vertex; presumably omitted from the snippet
    gl_TexCoord[0] = gl_MultiTexCoord0;
    dispColor = texture2D(disp_texture, gl_TexCoord[0].st);
}
[fragment]
#version 110
varying vec4 dispColor;
void main() {
    gl_FragColor = dispColor;
}
I'm currently just displaying the displacement on the sphere as a color value, but instead of a uniformly gray sphere, it's striped.
That striping you are experiencing is due to incorrectly sizing the data in your pix array.
GL_LUMINANCE is a single component image format and the size of your component here is 1-byte (GL_UNSIGNED_BYTE in glTexImage2D (...)).
You are effectively only giving color to one out of every 4 of your texels because you are creating an array of GLuints (where the highest 24-bits are all 0) and then telling GL that you are passing it a GLubyte array.
To resolve this, you need to change:
pix = (GLuint * len(pix))(*pix)
To this instead:
pix = (GLubyte * len(pix))(*pix)
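For illustration (my addition, not part of the original answer), the size mismatch is easy to verify with ctypes, and numpy offers a compact alternative for building the pixel data:

import ctypes
print(ctypes.sizeof(GLuint))   # 4 -> three zero bytes follow each 200
print(ctypes.sizeof(GLubyte))  # 1 -> matches GL_UNSIGNED_BYTE

# equivalent data built with numpy; with pyglet's raw ctypes bindings,
# pass pix.ctypes.data as the pointer argument:
import numpy as np
pix = np.full(256 * 128, 200, dtype=np.ubyte)
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 128, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, pix.ctypes.data)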
