use a color lookup table with pygame - python

I created a color lookup table in pygame (here just gray levels):
LUT = np.empty([256,3],np.uint8)
for gg in range(0,256):
    LUT[gg,:] = [gg,gg,gg]
Then I want to use this lookup table to fill a 3D pygame pixel array from a 2D matrix, for example with noise:
screenWidth = 800
screenHeight = 480
win = pygame.display.set_mode((screenWidth,screenHeight), 32)
buffer = pygame.surfarray.pixels3d(win)
I = (255*np.random.rand(screenWidth,screenHeight)).astype(int)
buffer = LUT[I,:]
In appearance everything seems right: I get a 3D pixel array and each value is what I expect. But this pixel array is not displayed in the window "win" I created (it remains black). What puzzles me is that if I fill the same pixel array element by element:
for rr in range(0,screenWidth):
    for cc in range(0,screenHeight):
        buffer[rr,cc,:] = LUT[I[rr,cc],:]
Then it works fine, but it's a lot slower. I can find no difference between the pixel arrays filled by the two techniques (both are regular (800, 480, 3) uint8 arrays, as I would expect), and if I display them with matplotlib's imshow the image looks fine in both cases. Does anyone know why?

The nested loop
for rr in range(0,screenWidth):
    for cc in range(0,screenHeight):
        buffer[rr,cc,:] = LUT[I[rr,cc],:]
can be substituted by:
buffer[:,:,:] = LUT[I,:]
buffer = LUT[I,:] doesn't change the elements of buffer. The statement creates a new array and binds the variable buffer to that new array object, leaving the surface's pixel array untouched. The elements of buffer have to be set by slicing (see numpy - Basic Slicing and Indexing):
lutBuffer = LUT[I,:]
buffer[:,:,:] = lutBuffer
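For reference, here is a minimal end-to-end sketch (with the question's dimensions and assumed imports) that fills the display surface in place and then shows it. Note that the pixels3d view keeps the surface locked, so the reference is dropped before flipping:

import numpy as np
import pygame

# Minimal sketch, assuming the question's setup: grayscale LUT, random index
# matrix, in-place slice assignment into the display surface.
pygame.init()
screenWidth, screenHeight = 800, 480
win = pygame.display.set_mode((screenWidth, screenHeight), 0, 32)

LUT = np.empty((256, 3), np.uint8)
for gg in range(256):
    LUT[gg, :] = [gg, gg, gg]

I = (255 * np.random.rand(screenWidth, screenHeight)).astype(int)

buffer = pygame.surfarray.pixels3d(win)
buffer[:, :, :] = LUT[I, :]   # write through the view into the surface
del buffer                    # release the lock the array view holds on the surface
pygame.display.flip()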

Related

How to get a matrix consisting of arrays of pixels and RGB?

I tried using the PIL library to get a matrix of pixels together with their RGB values, but I only get a one-dimensional array and I don't understand how to form a matrix from it:
img = Image.open("E:\\1f9114.png").convert('RGB')
obj = img.load()
width, height = img.size
for j in range(height):
    for i in range(width):
        matrix1=[i,j,obj[i,j]]
        print(matrix1)
    print()
I know that the matrix can turn out huge and that an ordinary list can't cope with it. I hope somebody can help, as this is important to me.
There are several issues with this code snippet:
matrix1 is overwritten on every iteration. If you want to add pixels to an existing list, use list.append().
img.getdata() should be used to obtain a one-dimensional raw pixel list from the image.
Here is an example (adapted from here) that loads the pixels into a two-dimensional list of (r, g, b) tuples, built using a list comprehension and slices.
pixels = list(img.getdata())
matrix1 = [pixels[i*width:(i+1)*width] for i in range(height)]
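For illustration, assuming the same img as above, the result can then be indexed as matrix1[row][column]:

# Hypothetical usage of the matrix built above:
r, g, b = matrix1[0][0]                  # (r, g, b) of the top-left pixel
print(len(matrix1), len(matrix1[0]))     # number of rows (height), number of columns (width)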

PyCairo and modifying pixel data

I'm using PyCairo to draw some vector images programmatically from a Python script. It works fine. But now I'd like to access the pixel data and do some further processing on them at the pixel level (things like blur or other raster effects) and then continue using that image surface with PyCairo to draw some more vector shapes.
I found the get_data() method in the cairo.ImageSurface class, but I'm not sure how to use it, because the documentation is very cryptic about it. It just says that it returns something called a "Python buffer", but there are no code examples of how this can actually be used in a real application.
Can anyone provide example code showing how to get a grip on those pixels in that "Python buffer" thingamajig? (Preferably without copying the entire image back and forth from/to PyCairo surfaces.)
The data is the raw pixel data: a memoryview over the underlying ImageSurface. How it is interpreted depends on the surface format.
Let's consider only the RGB24 pixel format for simplicity's sake. Each pixel is stored as a 32-bit quantity with the top 8 bits unused and red, green and blue in the remaining 24 bits. Note that get_data() exposes the raw bytes, so the per-byte channel order follows the platform's endianness (on a little-endian machine the bytes come out as blue, green, red, unused).
The pixels are stored row by row: the first row comes first, the second row after it, and so on.
There might be additional padding at the end of each row, which is why the stride of the data is a crucial property. To get the byte offset of a specific row y we compute y * stride; to that we add the x coordinate times the pixel width in bytes, 4.
This is all illustrated in the following small python program that draws a white rectangle on a black background.
import cairo

width, height = 100, 100
surface = cairo.ImageSurface(cairo.Format.RGB24, width, height)
data = surface.get_data()

for y in range(50 - 20, 50 + 20):
    for x in range(50 - 20, 50 + 20):
        index = y * surface.get_stride() + x * 4
        data[index] = 255      # first colour byte
        data[index + 1] = 255  # second colour byte
        data[index + 2] = 255  # third colour byte (all three 255, so the pixel is white either way)

surface.write_to_png("im.png")
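Since the question also asks about raster effects such as blur, here is a hedged sketch (not part of the original answer) that views the same surface memory as a numpy array, so the changes are written straight back into the surface without copying. It assumes the surface, width and height from the snippet above:

import numpy as np

surface.flush()                              # finish pending cairo drawing first
stride = surface.get_stride()
buf = surface.get_data()

# Reinterpret the raw bytes as (height, stride // 4, 4) uint8, no copy involved.
arr = np.ndarray(shape=(height, stride // 4, 4), dtype=np.uint8, buffer=buf)

# Example raster effect: a crude 5-point box blur on every channel.
blurred = arr.astype(np.float32)
blurred = (blurred
           + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
           + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
arr[:, :, :] = blurred.astype(np.uint8)

surface.mark_dirty()                         # tell cairo the pixels were modified externally
surface.write_to_png("im_blurred.png")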

Trying to plot Mandelbrot set - no matter what, always get plain black image

First, here's the code I have:
from PIL import Image as im
import numpy as np
def mandelbrot_iteration(c):
    iters = 0
    while abs(c)<2 and iters<200:
        c=c**2+c
        iters+=1
    return iters
HEIGHT = 400
WIDTH = 500
diag = im.new('L',(WIDTH, HEIGHT))
pix = diag.load()
x_pts = np.arange(-2,2,4/WIDTH)
y_pts = np.arange(-2,2,4/HEIGHT)
for x in x_pts:
    for y in y_pts:
        pix[x+2,y+2]=mandelbrot_iteration(complex(x,y))
diag.save("Fractal.png")
I thought this was quite straightforward. I check how many iterations each point on a grid of complex numbers takes to grow past an absolute value of 2 and plot these values as a colour at each point (with 200 being the cutoff, assuming the sequence doesn't diverge). In the range specified there should definitely be some non-trivial structure, but no matter what I try, the image comes out plain black.
Also this method of generating images has almost zero documentation. I've searched a lot, and this:
im.load()
Allocates storage for the image and loads it from the file (or from
the source, for lazy operations). In normal cases, you don’t need to
call this method, since the Image class automatically loads an opened
image when it is accessed for the first time.
(New in 1.1.6) In 1.1.6 and later, load returns a pixel access object
that can be used to read and modify pixels. The access object behaves
like a 2-dimensional array, so you can do:
pix = im.load()
print pix[x, y]
pix[x, y] = value
Access via this object is a lot faster than getpixel and putpixel
Is everything I can find about it (no examples out there either), which is very frustrating. I imagine the line pix[x+2,y+2] is at fault. The '+2's are there to stop the "out of range" errors, but, having tried some examples, I have no idea what it does with the input numbers to generate a colour. I did find that 'L' when the image is created should make a greyscale image, but I have no idea what range pix[x,y] expects or anything. Everything came out black...
The immediate problem is that your scale is off.
In the line pix[x+2,y+2]=..., with your ranges for x and y, the only pixels being drawn lie in the range 0..4. Since the last few pixels drawn are black, the entire top-left 4x4 square is black (and the rest is 0 – also black – by default, for a new image).
That can be fixed like this:
from PIL import Image as im
import numpy as np
def mandelbrot_iteration(c):
    iters = 0
    while abs(c)<2 and iters<200:
        c=c**2+c
        iters+=1
    return iters
HEIGHT = 400
WIDTH = 500
diag = im.new('L',(WIDTH, HEIGHT))
pix = diag.load()
x_pts = np.arange(-2,2,4.0/WIDTH)
y_pts = np.arange(-2,2,4.0/HEIGHT)
for x in x_pts:
    for y in y_pts:
        pix[WIDTH*(x+2)/4.0,HEIGHT*(y+2)/4.0]=mandelbrot_iteration(complex(x,y))
diag.show()
although the result is not yet a good Mandelbrot...
With hcs' comment "mandelbrot iteration should be z=0, while abs(z)<2, z=z**2+c" applied, you'd use this code:
from PIL import Image as im
import numpy as np
def mandelbrot_iteration(c):
    iters = 0
    z = 0
    while abs(z)<2 and iters<200:
        z=z**2+c
        iters+=1
    return iters
HEIGHT = 400
WIDTH = 500
diag = im.new('L',(WIDTH, HEIGHT))
pix = diag.load()
x_pts = np.arange(-2,2,4.0/WIDTH)
y_pts = np.arange(-2,2,4.0/HEIGHT)
for x in x_pts:
    for y in y_pts:
        pix[WIDTH*(x+2)/4.0,HEIGHT*(y+2)/4.0]=mandelbrot_iteration(complex(x,y))
# diag.show()
diag.save("Fractal.png")
and lo and behold, a true Mandelbrot pops up:
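As a side note (not part of the accepted fix), the same image can be produced much faster by vectorising the escape-time loop with numpy and handing the resulting uint8 array to PIL. A hedged sketch:

import numpy as np
from PIL import Image as im

# Vectorised escape-time sketch: iterate all grid points at once.
HEIGHT, WIDTH, MAX_ITERS = 400, 500, 200

ys, xs = np.meshgrid(np.linspace(-2, 2, HEIGHT), np.linspace(-2, 2, WIDTH), indexing='ij')
c = xs + 1j * ys
z = np.zeros_like(c)
iters = np.zeros(c.shape, dtype=np.uint8)

for _ in range(MAX_ITERS):
    active = np.abs(z) < 2          # points that have not escaped yet
    z[active] = z[active] ** 2 + c[active]
    iters[active] += 1

im.fromarray(iters, mode='L').save("Fractal_vectorized.png")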

Converting an image in pygame to an 2D array of RGB values

How can I convert a surface object in pygame to a 2-dimensional array of RGB values (one value for every pixel)? I have read the documentation on PixelArrays and Surfarrays and I cannot seem to find an answer to my question. Examples are more than welcome.
The pygame documentation says that, given a Surface object, you can create a PixelArray wrapper that provides direct 2D array access to its pixels, like this:
pxarray = pygame.PixelArray(surface)
Logically a PixelArray object is a 2-dimensional array of RGB values stored as integers.
A PixelArray pixel item can be assigned a raw integer value, a
pygame.Color instance (an object for color representations), or a (r, g, b[, a]) tuple.
pxarray[x, y] = 0xFF00FF
pxarray[x, y] = pygame.Color(255, 0, 255)
pxarray[x, y] = (255, 0, 255)
It also mentions:
However, only a pixel’s integer value is returned. So, to compare a pixel to a
particular color, the color needs to be first mapped using the Surface.map_rgb()
method of the Surface object for which the PixelArray was created.
This means that reading a pixel (rather than assigning to it) gives you an integer, so to compare a pixel against a particular colour you first need to map that colour to an integer with the Surface.map_rgb() method of the Surface the PixelArray was created from, as in the following conditional:
# Check if the first pixel at the top-left corner is blue
if pxarray[0, 0] == surface.map_rgb((0, 0, 255)):
    ...
Hope this helps.
After wrapping a Surface in a PixelArray, the Surface.unmap_rgb() method can convert the pixel values back into pygame.Color objects.
For example:
import pygame
pygame.init()
image = pygame.image.load('example.png')
pixel = pygame.PixelArray(image)
center = int(image.get_width()/2), int(image.get_height()/2)
colour = image.unmap_rgb(pixel[center])
print(str(colour))
Edit: To do the complete conversion of all these pygame.Color objects into a two-dimensional array of RGB tuples, you could use this code:
import pygame
pygame.init()
image = pygame.image.load('example.png')
pixel = pygame.PixelArray(image)
color_array = [[image.unmap_rgb(pixel[x, y]) for x in range(0, image.get_width())] for y in range(0, image.get_height())]
rgb_array = [[(column.r, column.g, column.b) for column in row] for row in color_array]
print(str(rgb_array))
Note that if you were to recover RGB tuples via map_rgb (as suggested in the other answer), you might have to compare every pixel against up to 16 million mapped values (one map_rgb call for each of the 256 * 256 * 256 possible RGB colours); wanting the alpha value as well would multiply the possibilities by another 256. Another strategy using map_rgb would be to build a lookup table of the (256 ** n) possible values and check each pixel against it. While that is possible, the unmap_rgb approach presented here is much simpler and more efficient.
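As an alternative (not mentioned in either answer above), if numpy is available, pygame.surfarray can produce the RGB values directly as an array, which avoids the PixelArray mapping entirely. A minimal sketch with a placeholder file name:

import pygame
import pygame.surfarray  # explicit import; requires numpy

# 'example.png' is a placeholder path.
pygame.init()
image = pygame.image.load('example.png')

rgb_array = pygame.surfarray.array3d(image)   # numpy array, shape (width, height, 3), dtype uint8
rgb_array = rgb_array.transpose(1, 0, 2)      # optionally reorder to (height, width, 3)
print(rgb_array.shape, rgb_array[0, 0])       # first pixel's (r, g, b)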

Finding coordinates of brightest pixel in an image and entering them into an array

I have been asked to write a program to find 'stars' in an image by converting the image file to a numpy array and generating an array of the coordinates of the brightest pixels in the image above a specified threshold (representing background interference).
Once I have located the brightest pixel in the image I must record its x,y coordinates, and set the value of that pixel and surrounding 10X10 pixel area to zero, effectively removing the star from the image.
I already have helper code which converts the image to an array, and I have attempted to tackle the problem as follows.
I have defined a variable
Max = array.max()
and used a while loop:
while Max >= threshold:
    coordinates = numpy.where(array == Max)  # find the maximum value
However, I want this to loop over the whole array and collect all of the coordinates, not just find the first maximum, and also remove each maximum when it is found by setting the surrounding 10x10 area to zero. I have thought about using a for loop to do this but am unsure how to use it, since I am new to Python.
I would appreciate any suggestions,
Thanks
There are a number of different ways to do it with just numpy, etc.
There's the "brute force" way:
from PIL import Image
import numpy as np

im = Image.open('test.bmp')
data = np.array(im)

threshold = 200
window = 5  # This is the "half" window...
ni, nj = data.shape
new_value = 0

for i, j in zip(*np.where(data > threshold)):
    istart, istop = max(0, i-window), min(ni, i+window+1)
    jstart, jstop = max(0, j-window), min(nj, j+window+1)
    data[istart:istop, jstart:jstop] = new_value
Or the faster approach...
from PIL import Image
import numpy as np
import scipy.ndimage

im = Image.open('test.bmp')
data = np.array(im)

threshold = 200
window = 10  # This is the "full" window...
new_value = 0

mask = data > threshold
mask = scipy.ndimage.uniform_filter(mask.astype(float), size=window)
mask = mask > 0
data[mask] = new_value
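And for the loop the question actually describes (record each maximum's coordinates, then blank the area around it), a hedged sketch along the same lines, assuming a 2D grayscale data array as above:

import numpy as np

def find_stars(data, threshold, window=5):
    """Return (row, col) coordinates of maxima above `threshold`,
    blanking a (2*window+1) square around each one as it is found."""
    data = data.copy()                  # keep the original image intact
    coordinates = []
    ni, nj = data.shape
    while data.max() >= threshold:
        i, j = np.unravel_index(np.argmax(data), data.shape)
        coordinates.append((i, j))
        istart, istop = max(0, i - window), min(ni, i + window + 1)
        jstart, jstop = max(0, j - window), min(nj, j + window + 1)
        data[istart:istop, jstart:jstop] = 0
    return coordinates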
Astrometry.net will do this for you:
If you have astronomical imaging of the sky with celestial coordinates
you do not know—or do not trust—then Astrometry.net is for you. Input
an image and we'll give you back astrometric calibration meta-data,
plus lists of known objects falling inside the field of view.
We have built this astrometric calibration service to create correct,
standards-compliant astrometric meta-data for every useful
astronomical image ever taken, past and future, in any state of
archival disarray. We hope this will help organize, annotate and make
searchable all the world's astronomical information.
You don't even have to upload the images to their website. You can download the source. It is licensed under the GPL and uses NumPy, so you can muck around with it if you need to.
Note that you will need to first convert your bitmap to one of the following: JPEG, GIF, PNG, or FITS image.
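A bitmap can be converted with PIL first, for example (assuming the same 'test.bmp' as above):

from PIL import Image

Image.open('test.bmp').save('test.png')  # PNG is one of the formats Astrometry.net accepts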
