OpenGL (pyglet) issue with glTexImage2D (striped texture) - python

I'm trying to use glTexImage2D() to pass a programmatically generated displacement map to a vertex shader, but the vertices don't seem to be reading a uniform value from the uploaded image, even though every value in the array being uploaded is the same.
The Python code is as follows:
pix = []
for x in range(256):
    for y in range(128):
        pix.append(200)  # pix is just [200, 200, 200, ...]
pix = (GLuint * len(pix))(*pix)

disp_tex_id = GLuint(0)
glGenTextures(1, byref(disp_tex_id))
glActiveTexture(GL_TEXTURE1)
glEnable(GL_TEXTURE_2D)
glBindTexture(GL_TEXTURE_2D, disp_tex_id)
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 128, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pix)
shader.uniformi('disp_texture', 1)
And the relevant shader code is here:
[vertex]
#version 110
#extension GL_EXT_gpu_shader4 : require
uniform sampler2D disp_texture;
varying vec4 dispColor;
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    dispColor = texture2D(disp_texture, gl_TexCoord[0].st);
}

[fragment]
#version 110
varying vec4 dispColor;
void main() {
    gl_FragColor = dispColor;
}
I'm currently just displaying the displacement on the sphere as a color value, but instead of a uniformly gray sphere, it's striped.

That striping you are experiencing is due to incorrectly sizing the data in your pix array.
GL_LUMINANCE is a single component image format and the size of your component here is 1-byte (GL_UNSIGNED_BYTE in glTexImage2D (...)).
You are effectively only giving color to one out of every 4 of your texels: you are creating an array of GLuints (where, for a value like 200, the highest 24 bits are all 0) and then telling GL that you are passing it a GLubyte array, so GL reads each 4-byte GLuint as four 1-byte texels, one texel of 200 followed by three texels of 0. Those zero texels are the stripes you are seeing.
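To see why that shows up as stripes rather than a uniformly dark image, you can dump the raw bytes of such an array (a quick check, assuming a little-endian machine):

import ctypes
vals = (ctypes.c_uint * 4)(200, 200, 200, 200)
print(bytes(vals))
# b'\xc8\x00\x00\x00\xc8\x00\x00\x00\xc8\x00\x00\x00\xc8\x00\x00\x00'
# Read back as GL_UNSIGNED_BYTE, only every 4th texel is 200 (0xC8); the three
# zero bytes in between become the dark texels that form the stripes.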
To resolve this, you need to change:
pix = (GLuint * len(pix))(*pix)
To this instead:
pix = (GLubyte * len(pix))(*pix)
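Alternatively, a short sketch of the same upload with numpy handling the 1-byte packing (this assumes numpy is available; otherwise the GLubyte change above is all you need):

import numpy as np
pix = np.full((128, 256), 200, dtype=np.uint8)            # rows x columns, one byte per texel
buf = (GLubyte * (256 * 128)).from_buffer_copy(pix)       # ctypes buffer that pyglet's GL bindings accept
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 128, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, buf)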

Related

Anchoring Rendered 3d Model to CharUcoBoard with OpenCV to OpenGL Pose Estimation

I am having difficulty spotting the issue in my code with regard to an OpenCV-to-OpenGL pseudo augmented reality program. I say "pseudo" because the constraint is taking images and Mat objects from OpenCV and converting them to OpenGL.
I have working models rendered and have used cv2.Rodrigues and camera calibration to come up with translation and rotation vectors; I have inverted the matrix and taken the transpose for proper display in the OpenGL context. The issue is that the model gets warped when I rotate my CharUcoBoard and begins to drift in the opposite direction when I move my camera. I assume it has something to do with the fact that I am using outdated tech on an unsupported platform (macOS Big Sur, 11.6). Unfortunately I do not have time to switch platforms or use more abstracted software, so I am stuck with what I have.
I have images, but since this is a new account I am unable to post them directly.
I have tried several methods, including hard-coding my calibrated camera parameters, modifying the initial identity matrix to coincide with my projection matrix, and adding additional calibration images from all manner of angles, rotations and camera distances.
I've tried using glOrtho and glMultMatrixf to no avail. Perhaps I missed something in my code.
The math part:
self.INVERSE_MATRIX = np.array([[ 1.0,  1.0,  1.0,  1.0],
                                [-1.0, -1.0, -1.0, -1.0],
                                [-1.0, -1.0, -1.0, -1.0],
                                [ 1.0,  1.0,  1.0,  1.0]])

def _boardHandle(self, image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    res = cv2.aruco.detectMarkers(gray, dictionary, parameters=arucoParams)
    corners = res[0]
    ids = res[1]
    if ids is not None:
        res2 = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
        charucoRetval = res2[0]
        charucoCorners = res2[1]
        charucoIds = res2[2]
        # with camera calibration
        retval, rvecs, tvecs = cv2.aruco.estimatePoseCharucoBoard(
            charucoCorners, charucoIds, board,
            self.mtx, self.dist, self.rvecs, self.tvecs)
        if retval:
            rmtx = cv2.Rodrigues(rvecs)[0]
            view_matrix = np.array([[rmtx[0][0], rmtx[0][1], rmtx[0][2], tvecs[0]],
                                    [rmtx[1][0], rmtx[1][1], rmtx[1][2], tvecs[1]],
                                    [rmtx[2][0], rmtx[2][1], rmtx[2][2], tvecs[2]],
                                    [0.0, 0.0, 0.0, 1.0]], dtype=object)
            view_matrix = view_matrix * self.INVERSE_MATRIX
            view_matrix = np.transpose(view_matrix)
            return view_matrix
    return self.INVERSE_MATRIX
The rendering pipeline and frame grabbing:
def _draw_scene(self):
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()

    # get image from webcam
    image = self.webcam.get_current_frame()

    # convert image to OpenGL texture format
    bg_image = cv2.flip(image, 0)
    bg_image = Image.fromarray(bg_image)
    ix = bg_image.size[0]
    iy = bg_image.size[1]
    bg_image = bg_image.tobytes("raw", "BGRX", 0, -1)

    # create background texture
    glBindTexture(GL_TEXTURE_2D, self.bgTexture)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, ix, iy, 0, GL_RGBA, GL_UNSIGNED_BYTE, bg_image)

    # draw background
    glBindTexture(GL_TEXTURE_2D, self.bgTexture)
    glPushMatrix()
    glTranslatef(0.0, 0.0, -10.0)
    self._draw_background()
    glPopMatrix()

    # handle boards/pieces
    image = self._boardHandle(image)
    # drawing the other pieces
    glutSwapBuffers()
The above blocks handle the camera calibration files, the initial creation of the modelview matrix, and the rendering of the chessboard. I had originally assumed that estimating the pose and then using it in a modelview matrix would give me an accurate rendering, which does seem to be the case when the camera is just shy of centered above the board. I'm relatively new to OpenGL, however, and may have missed something crucial.
I know display lists are outdated as well, but at this point I have been unable to make the above work with VAOs, VBOs or EBOs. That will have to be a future project for me.
Any help, pointers or resources will be appreciated.

Pygame and PyOpenGL quad texturing problem

I'm trying to texture a quad and to understand how this little sample works. My code is not original; it's mixed together from various examples.
Texture: https://jamesmwake.files.wordpress.com/2015/10/uv_texture_map.jpg?w=660
My questions:
1. When I change GL_TEXTURE_MIN_FILTER to GL_TEXTURE_MAG_FILTER in glTexParameteri, the texture disappears. Why?
2. When I change GL_LINEAR to GL_NEAREST, nothing happens. The texture's resolution was changed to 300x300 px. Why is that?
3. How can I make mipmaps and then use them?
4. The loadImage() function makes a texture. How does PyOpenGL know which texture should be used in the makeQuad() function?
Code:
import pygame
from pygame.locals import *
from OpenGL.GL import *
from OpenGL.GLU import *

def loadImage():
    img = pygame.image.load("checker_texture_downsized.jpg")
    textureData = pygame.image.tostring(img, "RGB", 1)
    width = img.get_width()
    height = img.get_height()
    bgImgGL = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, bgImgGL)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureData)
    glEnable(GL_TEXTURE_2D)

def makeQuad():
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0)
    glVertex2f(25, 25)
    glTexCoord2f(0, 1)
    glVertex2f(25, 775)
    glTexCoord2f(1, 1)
    glVertex2f(775, 775)
    glTexCoord2f(1, 0)
    glVertex2f(775, 25)
    glEnd()

def main():
    pygame.init()
    display = (1280, 800)
    pygame.display.set_mode(display, DOUBLEBUF | OPENGL)
    gluOrtho2D(0, 1280, 0, 800)
    loadImage()
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        makeQuad()
        pygame.display.flip()

main()
Note that drawing with glBegin/glEnd sequences, the fixed-function matrix stack and fixed-function per-vertex lighting have been deprecated for decades.
Read about the Fixed Function Pipeline and see Vertex Specification and Shader for a state-of-the-art way of rendering.
When I change GL_TEXTURE_MIN_FILTER to GL_TEXTURE_MAG_FILTER in glTexParameteri the texture disappears. Why?
The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR. If you don't change it and you don't create mipmaps, then the texture is not "complete" and will not be "shown". See glTexParameter.
See OpenGL 4.6 API Compatibility Profile Specification; 8.17 Texture Completeness; page 306
A texture is said to be complete if all the texture images and texture parameters required to utilize the texture for texture application are consistently defined.
... a texture is complete unless any of the following conditions hold true:
The minification filter requires a mipmap (is neither NEAREST nor LINEAR), and the texture is not mipmap complete.
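A minimal fix for this case, then, is to set both filters instead of replacing one with the other (a sketch using the same texture target as loadImage):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)  # avoids the mipmap requirement
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)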
When I change GL_LINEAR to GL_NEAREST, nothing happens. The used texture's resolution changed to 300x300px. Why is that?
If the texture is smaller than the region it is mapped onto, the minification filter has no effect, but the magnification filter does. If you set GL_TEXTURE_MAG_FILTER to GL_NEAREST, then the texels are no longer interpolated:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
How can I make mipmaps and then use them?
Mipmaps can be generated by glGenerateMipmap:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureData)
glGenerateMipmap(GL_TEXTURE_2D)
The loadImage() function makes a texture. How does PyOpenGL know which texture should be used in the makeQuad() function?
OpenGL is a state engine. Each state is kept until you change it again, even beyond frames. Since you have bound the texture in loadImage
glBindTexture(GL_TEXTURE_2D, bgImgGL)
the texture object currently bound to texture unit 0 is bgImgGL. This texture is used for drawing.
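This only matters once there is more than one texture; then you rebind before each draw. A small sketch (it assumes bgImgGL is kept around, e.g. returned from loadImage, and secondTexture is a hypothetical second texture object):

glBindTexture(GL_TEXTURE_2D, bgImgGL)        # quad drawn with the checker texture
makeQuad()
glBindTexture(GL_TEXTURE_2D, secondTexture)  # the same quad, now sampling the other texture
makeQuad()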

Python From array of floats to texture

I need to create a texture from a matrix of floats ([0..1]). The texture should show grey squares, but it only shows a white rectangle :(
I have this code:
def _generate_image(self):
    i_len = len(self._data) * Config().get_pixels_per_tile()
    j_len = len(self._data[0]) * Config().get_pixels_per_tile()
    data = ''.join([chr(int(c*255)) for f in self._data for c in f
                    for _ in range(3 * Config().get_pixels_per_tile())])
    print data
    return ImageFromData(data, j_len, i_len, GL_RGB, GL_UNSIGNED_BYTE)

class ImageFromData(object):
    def __init__(self, data, width, height, colors, type_):
        self.w = width
        self.h = height
        self.image = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, self.image)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glTexImage2D(GL_TEXTURE_2D, 0, colors, width, height, 0, colors, type_, data)
Thanks!
I think you might be making a mistake while converting your image to a string in that nasty-looking set of nested for loops you have there :-)
So your original data are two nested lists of float intensity values for rows and columns of pixels, and you're going to copy them three times to fill the RGB channels? I think you want this:
def pixlist2string(pixlist):
    tmp = []
    for row in pixlist:
        for pix in row:
            for channel in xrange(3):
                tmp.append(chr(int(255 * pix)))
    return ''.join(tmp)
This is a very roundabout way of doing things, though. Converting to 8-bit integers is unnecessary - just tell glTexImage2D that the input data type is GL_FLOAT and give it normalised values between 0 and 1. Similarly, there's no need to duplicate input pixels in order to fill RGB channels if you set the input format to single-channel (GL_INTENSITY, GL_LUMINANCE, GL_RED, etc.).
I would also strongly recommend using Numpy arrays to hold your input pixel data. Then you can just pass the array itself to glTexImage2D without fiddling around with string conversion.
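A minimal sketch of that approach, assuming PyOpenGL plus numpy and a 2D float array with values in [0, 1] (GL_LUMINANCE is kept only to match the legacy pipeline used here):

import numpy as np
from OpenGL.GL import *

def texture_from_floats(pix):
    # pix: 2D numpy float array, shape (height, width), values in [0, 1]
    pix = np.ascontiguousarray(pix, dtype=np.float32)
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
    h, w = pix.shape
    # single-channel float upload: no 8-bit conversion, no RGB duplication
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_FLOAT, pix)
    return tex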

Mapping a texture to a quad opengl and pyglet

I am trying to map a texture to a quad in pyglet using the opengl functions but the quad is just showing up as white.
My code looks like this:
import pyglet
from pyglet.gl import *
glEnable(GL_TEXTURE_2D)
image = pyglet.image.load("redbrick.png")
texture = image.get_texture()
and my draw function:
def on_draw():
    window.clear()
    glBindTexture(GL_TEXTURE_2D, 13)
    glBegin(GL_QUADS)
    glTexCoord2i(0, 0)
    glVertex2i(0, 0)
    glTexCoord2i(1, 0)
    glVertex2i(100, 0)
    glTexCoord2i(1, 1)
    glVertex2i(100, 100)
    glTexCoord2i(0, 1)
    glVertex2i(0, 100)
    glEnd()
Is there something I am doing wrong or missing out for it to be drawn as a white quad?
I see a few things that may be wrong. It would be nice to have the full code in something like a pastebin, but I can't comment to ask, so...
Your texture needs to be made into an OpenGL texture. You first need to convert your image into a raw data format: when you load the image, call image.get_data() (compressed into a single line below). This isn't the most efficient way, but it is a simple example.
After binding the texture, set the filtering parameters.
Then hand the data to the video card with glTexImage2D.
data = pyglet.image.load("redbrick.png").get_data()
texture_id = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, texture_id)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data)
After that you should be able to use this texture by binding texture_id; 13 could be nothing, for all I can see. Provide all the code and I could probably revise.
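For completeness, a usage sketch of the draw function with the texture bound by name instead of the hard-coded 13 (this assumes texture_id from the snippet above is kept in scope):

def on_draw():
    window.clear()
    glBindTexture(GL_TEXTURE_2D, texture_id)  # bind the name glGenTextures returned
    glBegin(GL_QUADS)
    glTexCoord2i(0, 0)
    glVertex2i(0, 0)
    glTexCoord2i(1, 0)
    glVertex2i(100, 0)
    glTexCoord2i(1, 1)
    glVertex2i(100, 100)
    glTexCoord2i(0, 1)
    glVertex2i(0, 100)
    glEnd()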

Render to texture - level of detail

I've got a problem with my render-to-texture process. When I render the scene with width = height = 512 it has nearly no errors, but the smaller the texture and scene get, the more errors appear.
The error is that regions of the texture are black, which makes no sense.
Here are some screenshots
512*512: http://www.ld-host.de/uploads/images/d9452fa0ba28494830fd96f0f15b9eba.png
128*128: http://www.ld-host.de/uploads/images/a39c141282a622f086d4a96b070a56a3.png
Here is my code how I render to texture and use the texture later
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
gluLookAt(0,0,200,0,0,-1,0,1,0)
self.fbos = glGenFramebuffersEXT(1)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, self.fbos)
self.depthbuffers = glGenRenderbuffersEXT(1)
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, self.depthbuffers)
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height)
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, self.depthbuffers)
self.textures = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, self.textures)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, None);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, self.textures, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, self.fbos);
glEnable(GL_CLIP_PLANE0)
glEnable(GL_CLIP_PLANE1)
glClipPlane(GL_CLIP_PLANE0, (0,0,1,-1 * self.start + self.diff))
glClipPlane(GL_CLIP_PLANE1, (0,0,-1,self.start))
# render the mesh
glTranslatef(-64,-64,-64)
glEnableClientState(GL_VERTEX_ARRAY) # Enable something in OpenGL
glEnableClientState(GL_COLOR_ARRAY) # Enable something in OpenGL
glBindBuffer(GL_ARRAY_BUFFER,self.vbo[1])
glColorPointer(3,GL_FLOAT,0,None) # Tell OpenGL that it contains only ColorValues
#Now the vertex Buffer with positions
glBindBuffer(GL_ARRAY_BUFFER,self.vbo[0])
glVertexPointer(3,GL_FLOAT,0,None) # Tell OpenGL that it contains the Positions for each Points
glDrawArrays(GL_TRIANGLES,0,len(self.verts)+len(self.color)) # Merge both
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_CLIP_PLANE0)
glDisable(GL_CLIP_PLANE1)
glBindTexture(GL_TEXTURE_2D,0)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
#Plane for showing texture
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
gluLookAt(0,0,1,0,0,0,0,1,0)
glClearColor(1,1,1,0)
glEnable(GL_TEXTURE_2D)
glBindTexture(GL_TEXTURE_2D, self.textures)
glBegin(GL_QUADS)
glNormal3f(0.0, 1.0, 0.0)
glTexCoord2f(0.0, 0.0)
glVertex3f(-1, 1, -1)
glTexCoord2f(1.0, 0.0)
glVertex3f(1, 1, -1)
glTexCoord2f(1.0, 1.0)
glVertex3f(1, -1, -1)
glTexCoord2f(0.0, 1.0)
glVertex3f(-1, -1, -1)
glEnd();
glDisable(GL_TEXTURE_2D)
pygame.display.flip()
How can I achieve better quality for the smaller texture? The mesh has no holes, but the texture sometimes does.
I don't know what exactly you're expecting to see. You're drawing triangles/cubes/etc. that are clearly smaller than a pixel/sample. Therefore, not all of them are going to be visible.
This is a standard aliasing problem: triangles that don't cover the center of a pixel/sample will not be visible. That's the nature of rasterization, and the only way to fix aliasing is to increase the number of samples you use. You can render at a higher resolution and downscale, or you could use MSAA or another real anti-aliasing technique.
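A rough supersampling sketch along those lines, reusing the question's FBO setup (the 4x factor, the window_width/window_height names, and glGenerateMipmap being available alongside the EXT calls are all assumptions):

ss = 4  # supersampling factor
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width * ss, height * ss)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width * ss, height * ss, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
glViewport(0, 0, width * ss, height * ss)       # render the mesh into the larger FBO
# ... render to the FBO exactly as before ...
glViewport(0, 0, window_width, window_height)   # back to the window for the textured quad
glBindTexture(GL_TEXTURE_2D, self.textures)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)
glGenerateMipmap(GL_TEXTURE_2D)                 # averages the extra samples down when the quad is drawn small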
