I'm working on a voxel engine based on raycasting, and I need a way to display my frame, which is a nested list of shape (100x100x3). The only idea I had was to create an image in pyglet using pyglet.image.create(width, height) and then modify its data using image.set_data("RGB", width*3, data). The problem I have is how to organize my data, created with numpy.zeros([100,100,3]), so it can be used in set_data(). I have tried data.tobytes() and ctypes, but I always get an image littered with random particles.
See numpy.zeros. numpy.zeros([100,100,3]) generates an array of floats. You should generate an array of uint8:
numpy.zeros([100, 100, 3])                      # float64 by default
numpy.zeros([100, 100, 3], dtype=numpy.uint8)   # 8-bit values, as an RGB image expects
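For the pyglet side, a minimal sketch of feeding such an array into set_data() might look like the following (the negative pitch is an assumption that the numpy rows should be drawn top-to-bottom; drop the minus sign if the image comes out vertically flipped):

import numpy as np
import pyglet

width, height = 100, 100

# 8-bit RGB frame buffer, values 0-255 per channel
frame = np.zeros((height, width, 3), dtype=np.uint8)
frame[:, :, 0] = 255  # e.g. fill the red channel

img = pyglet.image.create(width, height)
# pitch = bytes per row; a negative pitch tells pyglet the rows run top-to-bottom
img.set_data('RGB', -width * 3, frame.tobytes())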
I've got a problem. I'm trying to create an image from binary data which I got from a hyperspectral camera. The file I have is in BSQ uint16 format. From the documentation I found out that the images contained in the file (.dat) have a resolution of 1024x1024 and that there are 24 images in total. The whole point is to form a kind of "cube" which I want to use in the future to create a multi-layered orthomosaic.
I would also like to add that I am completely new to Python, but I try to keep up to date with everything I need. I hope that everything I have written is clear and understandable.
At first I tried to use the NumPy library to create a 3D array, but I ended up with an arrangement of random pixels.
from PIL import Image
import numpy as np

file = open('Sequence 1_000021.dat', 'rb')
myarray = np.fromfile(file, dtype=np.uint16)
print('Size of new array', ":", len(myarray))
con_array = np.reshape(myarray, (24, 1024, 1024), 'C')
naPIL = Image.fromarray(con_array[1, :, :])
naPIL.save('naPIL.tiff')
The result: (image of random-looking pixels)
Example of the image which I want to achieve: (thumbnail image)
As suspected, it's just byte order; I get a sensible-looking image when running the following code in a Jupyter notebook:
import numpy as np
from PIL import Image
# open as big-endian, convert to native order, then reshape as appropriate
raw = np.fromfile(
    './Sequence 1_000021.dat', dtype='>u2'
).astype('uint16').reshape((24, 1024, 1024))

# display inline
Image.fromarray(raw[1, :, :])
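If the eventual goal is the 24-band "cube" mentioned in the question, one option (this relies on Pillow's multi-frame TIFF support, and the file name is just an example) is to write all bands into a single multi-page TIFF:

# save all 24 bands of the cube as pages of one TIFF file
frames = [Image.fromarray(raw[i, :, :]) for i in range(raw.shape[0])]
frames[0].save('cube.tiff', save_all=True, append_images=frames[1:])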
I have a pong game that is a simple pygame environment. I would like to get the numpy array of the image at each instant from the pygame display. I am unable to find a method which gives the numpy array of the entire display. Thanks.
Use pygame.surfarray.pixels2d, pygame.surfarray.array2d, pygame.surfarray.pixels3d, or pygame.surfarray.array3d to create a pixel array from a pygame.Surface object (see pygame.surfarray module):
import pygame
# [...]
window = pygame.display.set_mode()
array2D = pygame.surfarray.array2d(window)
array3D = pygame.surfarray.array3d(window)
While the array* functions create copies of the pixels from a Surface object, the pixels* functions create a new array that directly references the pixel values in the Surface.
The *2d versions generate a two-dimensional array in which each pixel is represented by a single integer value. The *3d versions generate a three-dimensional array in which each color channel is represented by a separate integer value.
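Note that surfarray arrays are indexed (x, y[, channel]). If downstream code expects the usual (row, column, channel) image layout, a transpose is needed; a small sketch (the window size here is just an example):

import numpy as np
import pygame

window = pygame.display.set_mode((400, 300))
frame = pygame.surfarray.array3d(window)   # shape (width, height, 3)
frame = np.transpose(frame, (1, 0, 2))     # shape (height, width, 3)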
I want to load a mesh file (.obj), then use the trimesh.sample.sample_surface_even() function to get some points on the surface, turn the resulting points back into a mesh, and save that as an .obj file.
My problem is that I don't know how to turn the samples back into a mesh that can be saved. Can somebody tell me, step by step, what I should do to achieve that goal?
Here is my code so far:
import numpy as np
import trimesh
mesh = trimesh.load_mesh('mesh10.obj')
sampledmesh = trimesh.sample.sample_surface_even(mesh, 500)
#? How to turn sampledmesh back into a mesh?
sampledmesh.export('mesh10_export.obj')
You can use the submesh function on the sampled face indices, which is the second element in the returned tuple:
sampledmesh = trimesh.sample.sample_surface_even(mesh,500)
sampled_submesh = mesh.submesh([sampledmesh[1]])[0]
submesh returns an array of meshes, but here we just have one, so we take the first mesh.
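Putting the question's code and this answer together, a complete sketch (same file names as above) could look like:

import trimesh

mesh = trimesh.load_mesh('mesh10.obj')
samples, face_indices = trimesh.sample.sample_surface_even(mesh, 500)

# keep only the faces that were hit by the sampling, as a new mesh
sampled_submesh = mesh.submesh([face_indices])[0]
sampled_submesh.export('mesh10_export.obj')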
I am trying to change the values of a grayscale PNG image and then create a new image from those changed values.
The first step I took was using a Python implementation of libpng.
With that I was able to get a list of all the pixel values of the PNG. It is a grayscale image, so the values were from 0 to 100. I run a quick algorithm to change the values and then try to create a new PNG file from them. It's all on a single line, so I use a regex to format it into a 2D array.
I attempted to use this approach that I found here:
from PIL import Image
import numpy as np
pixels = [[100,0,0],[0,100,0],[0,0,100]]
# Convert the pixels into an array using numpy
array = np.array(pixels)
print(array)
# Use PIL to create an image from the new array of pixels
new_image = Image.fromarray(array, 'L')
new_image.save('testing.png')
But I guess the format that PIL uses is different from what libpng produces, because instead of getting an image that looks like 3 white pixels placed diagonally, I only get 1 white pixel in the top left. So either I change the values I am getting from libpng so that numpy works, or I find something in libpng that will allow me to change the values directly and create the new file.
The error in your code is that you are not setting the right data type for the array. np.array(pixels) defaults to a platform integer type (typically int64), so Image.fromarray(array, 'L') misreads the buffer, because mode 'L' expects one byte per pixel. If you call array = np.array(pixels, dtype='uint8') then your code will work.
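Applied to the snippet above, the corrected lines are just:

# 8-bit values, which is what mode 'L' expects
array = np.array(pixels, dtype='uint8')
new_image = Image.fromarray(array, 'L')
new_image.save('testing.png')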
I'm writing some code, part of which reads an image source and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
def image_and_sharpen_array(file_name):
    # read the image data and return it, with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image, 3)
    edge = ndimage.gaussian_filter(blurred, 1)
    alpha = 20
    out = blurred + alpha * (blurred - edge)
    return image, out
# get image data
scan, sharpen = image_and_sharpen_array('foo.jpg')
w, h, c = scan.shape

# set up pygame
pygame.init()
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan)
pygame.display.update()
The image is displayed on the screen, but rotated and inverted. Is this due to differences between misc.imread and pygame, or is it due to something wrong in my code?
Is there another way to do this? The majority of the solutions I read involved saving the figure and then reading it back with pygame.
I often use the numpy swapaxes() method:
In this case we only need to swap the x and y axes (axis numbers 0 and 1) before displaying the array:
return image.swapaxes(0, 1), out
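With that change in place, the rest of the original snippet works unchanged; a quick check (same foo.jpg as in the question):

scan, sharpen = image_and_sharpen_array('foo.jpg')  # scan is now (width, height, channels)
w, h, c = scan.shape
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan)           # orientation now matches the source image
pygame.display.update()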
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every library has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed; that's the way pygame displays numpy arrays. There are many ways to make it look 'correct'. Actually, there are many ways even to display the array at all, which gives you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering that I could ever achieve (note that for earlier versions this will not work!).
This function will fill the surface with the array:
def put_array(surface, myarr):  # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that is not working, use this; it should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You will probably still not get what you want: the image may be transposed or have swapped color channels. The idea is to manage your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the OpenCV library (cv2.imread(filename)). OpenCV opens images in BGR order by default, and it has a lot of conversion functions. If I remember correctly, when writing directly to a surface buffer, BGR is the correct order for a 24-bit surface and BGRA for a 32-bit surface. So you can try to put the image array which you get out of the file with this function and blit it to the screen.
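As a rough sketch of that idea (my own assumption of how the pieces fit together; whether the bytes line up exactly depends on the surface's pitch and pixel format, so treat it as a starting point rather than a guaranteed recipe):

import cv2
import pygame

img = cv2.imread('foo.jpg')            # BGR byte order, shape (height, width, 3)
pygame.init()
screen = pygame.display.set_mode((img.shape[1], img.shape[0]))
surface = pygame.Surface(screen.get_size(), depth=24)
put_arr(surface, img)                  # helper defined above
screen.blit(surface, (0, 0))
pygame.display.update()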
There are other ways to draw arrays, e.g. here is the whole set of helper functions: http://www.pygame.org/docs/ref/surfarray.html
But I would not recommend using them, since surfaces are not meant for direct pixel manipulation; you will probably get lost in references.
Small tip: to do a 'signalling test', use a picture like this, so you will immediately see if something is wrong; just load it as an array and try to render it.
My suggestion is to use the pygame.transform module. It has flip and rotate methods, which you can combine to get whatever transformation you need; look up the docs on this.
My recommendation is to draw the output image to a new Surface, then apply the transformations, and blit the result to the display.
temp_surf = pygame.Surface((w, h))
pygame.surfarray.blit_array(temp_surf, scan)
# ... transform temp_surf here ...
screen.blit(temp_surf, (0, 0))
I have no idea why this happens. It is probably something to do with the order in which the axes are transferred from a 2D array to a pygame Surface.
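If the only problem is the transposed axes, one rotate/flip combination that amounts to a transpose (my own sketch; verify the orientation against your source image) is:

# scan has shape (rows, cols, 3) as returned by imread
rows, cols = scan.shape[:2]
screen = pygame.display.set_mode((cols, rows))
temp_surf = pygame.Surface((rows, cols))            # matches scan's axis order
pygame.surfarray.blit_array(temp_surf, scan)
# rotating 90 degrees counterclockwise and then flipping vertically is
# equivalent to transposing the image, which undoes the axis swap
temp_surf = pygame.transform.flip(pygame.transform.rotate(temp_surf, 90), False, True)
screen.blit(temp_surf, (0, 0))
pygame.display.update()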