Accessing SDL_Surface Pixel Array in PySDL2

I'm trying to access pixel data from an SDL_Surface in PySDL2. I have been able to access the width and height of the surface using surface.contents.h, but surface.contents.pixels only returns an int. Looking at the API reference, I believe this is a pointer to the pixel array. I only need to read the pixels, not overwrite them.
surface = sdl2.sdlttf.TTF_RenderText_Blended(
    font, contents.encode(), sdl2.SDL_Color(*colour))
surf = surface.contents
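For reference, one way to read those pixels, assuming the blended surface has the usual 32-bit ARGB layout that TTF_RenderText_Blended produces (a sketch building on the code above, not a complete answer):
import ctypes

# treat the raw buffer as an array of packed 32-bit pixels;
# pitch is the byte width of one row, so there are pitch // 4 pixels per row
count = (surf.pitch // 4) * surf.h
pixels = ctypes.cast(surf.pixels, ctypes.POINTER(ctypes.c_uint32 * count)).contents
x, y = 0, 0
value = pixels[y * (surf.pitch // 4) + x]   # packed pixel at (x, y)
PySDL2's sdl2.ext module also provides pixels2d/pixels3d helpers that wrap this kind of cast in a numpy view, if numpy is available.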

Related

Get image array from display in pygame

I have a pong game that is a simple pygame environment. I would like to get the numpy array of the displayed image at each frame from pygame.display. I am unable to find a method that gives the numpy array of the entire display. Thanks.
Use pygame.surfarray.pixels2d, pygame.surfarray.array2d, pygame.surfarray.pixels3d, or pygame.surfarray.array3d to create a pixel array from a pygame.Surface object (see pygame.surfarray module):
import pygame
# [...]
window = pygame.display.set_mode()
array2D = pygame.surfarray.array2d(window)
array3D = pygame.surfarray.array3d(window)
While the array* functions create copies of the pixels from a Surface object, the pixels* functions create a new array that directly references the pixel values in a Surface object.
The *2d versions generate a two-dimensional array in which each pixel is represented by a single integer value. The *3d versions generate a three-dimensional array in which each color channel is represented by a separate integer value.
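A small way to see the difference, assuming a 32-bit display surface (the green-pixel write is only there to show that pixels2d references the Surface while array3d does not):
import pygame

pygame.init()
window = pygame.display.set_mode((320, 240))
window.fill((255, 0, 0))

copy3d = pygame.surfarray.array3d(window)    # independent copy, shape (320, 240, 3)
view2d = pygame.surfarray.pixels2d(window)   # view referencing the mapped pixel integers

view2d[0, 0] = window.map_rgb((0, 255, 0))   # this changes the Surface itself
del view2d                                   # drop the view so the Surface is unlocked again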

How to present numpy array into pygame surface?

I'm writing code that, among other things, reads an image source and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
from scipy import misc, ndimage

def image_and_sharpen_array(file_name):
    # read the image data and return it, with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image, 3)
    edge = ndimage.gaussian_filter(blurred, 1)
    alpha = 20
    out = blurred + alpha * (blurred - edge)
    return image, out
#get image data
scan,sharpen = image_and_sharpen_array('foo.jpg')
w,h,c = scan.shape
#setting up pygame
pygame.init()
screen = pygame.display.set_mode((w,h))
pygame.surfarray.blit_array(screen,scan)
pygame.display.update()
The image is displayed on the screen, but rotated and inverted. Is this due to differences between misc.imread and pygame, or is it due to something wrong in my code?
Is there another way to do this? The majority of solutions I read involved saving the figure and then reading it with pygame.
I often use the numpy swapaxes() method:
In this case we only need to swap the x and y axes (axes 0 and 1) before displaying the array:
return image.swapaxes(0,1),out
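Applied to the code from the question, the fix would look roughly like this (a sketch reusing the question's variable names):
import pygame

scan, sharpen = image_and_sharpen_array('foo.jpg')
h, w = scan.shape[:2]                        # imread gives (rows, cols, channels)

pygame.init()
screen = pygame.display.set_mode((w, h))
pygame.surfarray.blit_array(screen, scan.swapaxes(0, 1))   # rows/cols -> width/height
pygame.display.update()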
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every lib has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed. That's how pygame displays numpy arrays. There are many ways to make it look 'correct'. Actually there are many ways to display the array at all, which give you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering I could ever achieve. (Note: this will not work for earlier versions!)
This function fills the surface from an array:
def put_array(surface, myarr):  # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that does not work, use this; it should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You will probably still not get exactly what you want, so it will be transposed or have swapped color channels. The idea is to keep your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the OpenCV library (cv2.imread(filename)). OpenCV opens images in BGR order by default, and it has a lot of conversion functions. If I remember correctly, when writing directly to the surface buffer, BGR is the correct order for a 24-bit surface and BGRA for a 32-bit surface. So you can try to put the image array that you read from the file into the surface with this function and blit it to the screen.
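As a rough sketch of that workflow (the conversion flags are standard cv2; whether the raw buffer write lines up exactly also depends on the surface pitch matching the array row size):
import cv2

img_bgr = cv2.imread('foo.jpg')                       # OpenCV loads as BGR, shape (rows, cols, 3)
img_bgra = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2BGRA)  # add an alpha channel for a 32-bit surface
# img_bgra can then be written into a matching 32-bit surface with put_arr() above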
There are other ways to draw arrays, e.g. the whole set of helper functions at http://www.pygame.org/docs/ref/surfarray.html. But I would not recommend using them, since surfaces are not meant for direct pixel manipulation; you will probably get lost in references.
Small tip: to do a 'signal test', use a distinctive test picture, so you will immediately see if something is wrong; just load it as an array and try to render it.
My suggestion is to use the pygame.transform module. It has flip and rotate functions that you can combine to get whatever transformation you need; look up the docs on this.
My recommendation is to write the output image to a new Surface, apply the transformations, and then blit to the display.
temp_surf = pygame.Surface((w,h))
pygame.surfarray.blit_array(temp_surf, scan)
# transform temp_surf here (flip / rotate as needed)
screen.blit(temp_surf, (0,0))
I have no idea why this happens; it is probably something to do with the order in which the axes are transferred from a 2D array to a pygame Surface.
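For the transposed result described in the question, the transform step on temp_surf could look something like this (the exact rotate/flip combination is a guess; adjust until the output matches):
temp_surf = pygame.transform.rotate(temp_surf, -90)          # rotate 90 degrees clockwise
temp_surf = pygame.transform.flip(temp_surf, True, False)    # then mirror horizontally
screen.blit(temp_surf, (0, 0))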

How to get the picture size in pygame (python)

If I'm using an image, how can I find out the image's size?
Is there a function to get the picture's height and width?
EDIT: Of course I loaded the image into the program with pygame.image.load(PATH).
I believe you need to load the image as a Surface before you can get its width and height. You do that with foo = pygame.image.load(PATHNAME).
Then you can get the width and height by creating a Rectangle with foo.get_rect() and asking the rectangle.
import pygame
foo = pygame.image.load(PATH).get_rect().size
print(foo)
This returns the size as a tuple.
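For what it's worth, a loaded Surface also exposes its dimensions directly, so the rectangle step is optional:
import pygame

image = pygame.image.load(PATH)
print(image.get_size())                       # (width, height) tuple
print(image.get_width(), image.get_height())  # or each dimension separately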

Crop an image in Python: Image.crop() vs. Image.transform()?

I want to crop an image by selecting a subregion of it to turn into a new image file.
Based on what I see in the PIL docs, I am trying to decide between 2 approaches.
For both approaches:
# assume I already have a PIL-opened image called original_image
# assume I have a crop_box tuple for the area to crop, e.g. (0, 0, 100, 100)
Approaches
Cropping the original image directly, and then transforming the returned crop area into a new file. It seems that I'm giving up all the mode data / other metadata about the original image, which would then have to be re-loaded?
cropped_image = original_image.crop(crop_box)
# process cropped version as needed
Do an image.transform() in which I select a sub-region
# assume I have an output size of (100, 100)
cropped_image = original_image.transform(size, Image.EXTENT, crop_box)
# process cropped version as needed
Is one approach preferable to the other in terms of speed, data preservation, or another important factor that I'm missing?
The following is from the PIL.Image documentation under the transform function:
It is slightly slower than crop
Plus crop is simpler and shows your intent. That's what I'd use.
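A minimal sketch of the crop route end to end (file names are just illustrative):
from PIL import Image

original_image = Image.open('input.jpg')
crop_box = (0, 0, 100, 100)                   # (left, upper, right, lower)

cropped_image = original_image.crop(crop_box)
cropped_image.save('cropped.jpg')             # write the cropped region out as a new file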

opencv zoom function strange results

I am trying to write a zoom function that looks something like this:
centre = ((im.width-1)/2, (im.height-1)/2)
width = int(im.width/(2.0*level))
height = int(im.height/(2.0*level))
rect = (centre[0]-width, centre[1]-height, width*2, height*2)
dst = cv.GetSubRect(im, rect)
cv.Resize(dst, im)
When I use exactly what is written above, I get an odd result: the bottom half of the resulting image is distorted and blurry. However, when I replace the line cv.Resize(dst, im) with
size = cv.CloneImage(im)
cv.Resize(dst, size)
im = size
it works fine. Why is this? Is there something fundamentally wrong with the way I am performing the zoom?
cv.Resize requires source and destination to be separate memory locations.
In the first snippet of your code, you use cv.GetSubRect to generate an object pointing to the area of the image that you wish to zoom in on. Here the new object is NOT pointing to a new memory location; it points to a memory location that is a subset of the original object.
Since cv.Resize requires the two memory locations to be different, what you get is the result of undefined behaviour.
In the second part of your code you fulfil this criterion by using cv.CloneImage: you first create a copy of im (i.e. size; a blank image of the same size would have worked as well) and then use cv.Resize to resize dst and write the resulting image into size.
My advice is to go through a function's documentation before using it.
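For comparison, the same zoom written with the modern cv2 API naturally keeps the crop (a numpy slice) and the resized result in separate arrays; this is a sketch, not the original cv code:
import cv2

def zoom(im, level):
    # crop a centred window 1/level the size of the image, then resize it
    # back to full size into a brand-new array
    h, w = im.shape[:2]
    cw, ch = int(w / (2.0 * level)), int(h / (2.0 * level))
    cx, cy = w // 2, h // 2
    roi = im[cy - ch:cy + ch, cx - cw:cx + cw]
    return cv2.resize(roi, (w, h))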
