I'm working on a voxel engine based on raycasting, and I need a way to display my frame, which is a nested list of shape (100x100x3). The only idea I had was to create an image in pyglet using pyglet.image.create(width, height) and then modify its data using image.set_data("RGB", width*3, data). The problem is how to organize my data, created like this: numpy.zeros([100,100,3]), so it can be used in set_data(). I have tried data.tobytes() and ctypes, but I always get an image littered with random particles.
See numpy.zeros. numpy.zeros([100,100,3]) generates an array of floats. You should generate an array of uint8:
numpy.zeros([100,100,3])
numpy.zeros([100,100,3], dtype = numpy.uint8)
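For completeness, here is a minimal sketch of the whole round trip, building the bytes for pyglet directly with ImageData (one possible alternative to create() + set_data()); the negative pitch tells pyglet the rows run top to bottom rather than its default bottom-to-top order:
import numpy as np
import pyglet

width, height = 100, 100
frame = np.zeros((height, width, 3), dtype=np.uint8)  # uint8, not the float64 default
frame[:, :, 0] = 255  # e.g. fill the red channel

# each row is width * 3 bytes; the negative pitch flips the image so row 0 is the top
image = pyglet.image.ImageData(width, height, 'RGB', frame.tobytes(), pitch=-width * 3)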
I'm trying to access pixel data from an SDL_Surface in PySDL2. I have been able to access the width and height of the surface using surface.contents.h, but surface.contents.pixels only returns an int. Looking at the API reference, I believe this is a pointer to the pixel array. I only need to read the pixels, not overwrite them.
surface = sdl2.sdlttf.TTF_RenderText_Blended(
    font, contents.encode(), sdl2.SDL_Color(*colour))
surf = surface.contents
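One way to read those bytes is to cast the raw address with ctypes. A minimal sketch, assuming the 32 bit surface that TTF_RenderText_Blended produces:
import ctypes

pitch, height = surf.pitch, surf.h                     # pitch = bytes per row, possibly padded
buf_type = ctypes.c_uint8 * (pitch * height)
pixel_bytes = ctypes.cast(surf.pixels, ctypes.POINTER(buf_type)).contents

# pixel_bytes[y * pitch + x * bytes_per_pixel] is the first byte of pixel (x, y);
# for a blended TTF surface bytes_per_pixel is 4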
I want to load a mesh file (.obj), then use the trimesh.sample.sample_surface_even() function to get some points on the surface, turn the resulting points back into a mesh, and save them back as an .obj file.
My problem is that I don't know how to turn the samples back into a mesh that can be saved. Can somebody tell me, step by step, what I should do to achieve that goal?
Here is my code so far:
import numpy as np
import trimesh
mesh = trimesh.load_mesh('mesh10.obj')
sampledmesh = trimesh.sample.sample_surface_even(mesh, 500)
#? How to turn sampledmesh back into a mesh?
sampledmesh.export('mesh10_export.obj')
You can use the submesh function on the sampled face indices, which is the second element in the returned tuple:
sampledmesh = trimesh.sample.sample_surface_even(mesh,500)
sampled_submesh = mesh.submesh([sampledmesh[1]])[0]
submesh returns an array of meshes, but here we just have one, so we take the first mesh.
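Putting it together with the code from the question, a sketch of the whole pipeline might look like this (sample_surface_even returns a (points, face_indices) tuple):
import trimesh

mesh = trimesh.load_mesh('mesh10.obj')

# returns (points, face_indices); the indices refer to faces of the original mesh
points, face_idx = trimesh.sample.sample_surface_even(mesh, 500)

# build a new mesh from the faces that were hit by the samples and export it
sampled_submesh = mesh.submesh([face_idx])[0]
sampled_submesh.export('mesh10_export.obj')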
I am trying to change the values of a grayscale PNG image and then create a new image from those changed values.
The first step I took was using a Python implementation of libpng.
With that I was able to get a list of all the pixel values of the PNG. It is a grayscale image, so the values were from 0 to 100. I run a quick algorithm to change the values and then try to create a new PNG file from them. The data is all on a single line, so I use a regex to format it into a 2D array.
I attempted to use this code that I found here:
from PIL import Image
import numpy as np
pixels = [[100,0,0],[0,100,0],[0,0,100]]
# Convert the pixels into an array using numpy
array = np.array(pixels)
print(array)
# Use PIL to create an image from the new array of pixels
new_image = Image.fromarray(array, 'L')
new_image.save('testing.png')
But I guess the format PIL uses is different from what libpng produces, because instead of getting an image that looks like 3 white pixels on a diagonal, I only get 1 white pixel in the top left. So either I change the values I am getting from libpng so that numpy works, or I find something in libpng that will let me change the values directly and create the new file.
The error in your code is that you are not setting the right data type for the array. If you call array = np.array(pixels, dtype='uint8') then your code will work.
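For reference, the corrected version of the snippet above would be (same code, just with an explicit dtype):
from PIL import Image
import numpy as np

pixels = [[100, 0, 0], [0, 100, 0], [0, 0, 100]]

# mode 'L' expects 8-bit unsigned integers; without the dtype, numpy builds an
# int64 array and fromarray misreads its raw buffer, which is why only the
# top-left pixel came out right
array = np.array(pixels, dtype='uint8')

new_image = Image.fromarray(array, 'L')
new_image.save('testing.png')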
I'm writing a program, part of which reads an image from a file and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
import pygame
from scipy import misc, ndimage

def image_and_sharpen_array(file_name):
    # read the image data and return it, along with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image, 3)
    edge = ndimage.gaussian_filter(blurred, 1)
    alpha = 20
    out = blurred + alpha * (blurred - edge)
    return image, out

# get image data
scan, sharpen = image_and_sharpen_array('foo.jpg')
w, h, c = scan.shape

# set up pygame
pygame.init()
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan)
pygame.display.update()
And the image is displayed on the screen, only rotated and inverted. Is this due to differences between misc.imread and pygame, or to something wrong in my code?
Is there another way to do this? The majority of solutions I have read involve saving the figure and then reading it with pygame.
I often use the numpy swapaxes() method.
In this case we only need to swap the x and y axes (axes 0 and 1) before displaying the array:
return image.swapaxes(0,1),out
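Applied at the call site instead of inside the function, the same idea would look something like this (a sketch reusing the names from the question):
# scipy loads images as (rows, cols, channels) == (height, width, 3),
# while pygame.surfarray expects (width, height, 3), hence the swap
scan, sharpen = image_and_sharpen_array('foo.jpg')
scan = scan.swapaxes(0, 1)

screen = pygame.display.set_mode(scan.shape[:2])
pygame.surfarray.blit_array(screen, scan)
pygame.display.update()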
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every lib has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed; that's how PyGame displays numpy arrays. There are many ways to make it look 'correct'. In fact there are many ways even to display the array, which gives you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering I could ever achieve (note that for earlier versions this will not work!).
This function will fill the surface with array:
def put_array(surface, myarr):  # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that is not working, use this; it should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You probably still won't get exactly what you want: the image may be transposed or have swapped color channels. The idea is to manage your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the openCV library (cv2.imread(filename)). openCV opens images in BGR order by default, and it has a lot of conversion functions. If I remember correctly, when writing directly to the surface buffer, BGR is the correct order for a 24 bit surface and BGRA for a 32 bit surface. So you can try to put the image array which you get out of the file with this function and blit it to the screen.
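Following that suggestion, a rough sketch might look like the following; note that it assumes the display surface really ends up as 24 bit and that its rows have no extra padding, which is not guaranteed on every platform:
import cv2
import pygame

img = cv2.imread('foo.jpg')  # BGR, shape (height, width, 3), dtype uint8

pygame.init()
# request a 24 bit display so the raw BGR bytes line up with the surface buffer
screen = pygame.display.set_mode((img.shape[1], img.shape[0]), 0, 24)

put_arr(screen, img)         # write the raw bytes straight into the surface buffer
pygame.display.update()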
There are other ways to draw arrays, e.g. here is a whole set of helper functions: http://www.pygame.org/docs/ref/surfarray.html
But I would not recommend using it, since surfaces are not meant for direct pixel manipulation; you will probably get lost in references.
Small tip: for a 'signalling test', use a picture in which the orientation and the color channels are easy to tell apart, so you will immediately see if something is wrong: just load it as an array and try to render it.
My suggestion is to use the pygame.transform module. It has flip and rotate methods, which you can combine to get whatever transformation you need; look up the docs on this.
My recommendation is to render the output image to a new Surface, apply the transformations, and then blit to the display.
temp_surf = pygame.Surface((w, h))
pygame.surfarray.blit_array(temp_surf, scan)
# transform temp_surf here, e.g. with pygame.transform.rotate()/flip()
screen.blit(temp_surf, (0, 0))
I have no idea why this is. It is probably something to do with the order in which the axes are transferred from a 2d array to a pygame Surface.
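As an illustration, the transform step above might be filled in like this (a sketch; the exact rotate/flip combination depends on how the source array is laid out, so it may take some experimentation):
# a transpose is equivalent to a 90 degree rotation followed by a flip
temp_surf = pygame.transform.rotate(temp_surf, -90)
temp_surf = pygame.transform.flip(temp_surf, True, False)

screen.blit(temp_surf, (0, 0))
pygame.display.update()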