I have an image stored in RGBA format as a 3d numpy array in python, i.e.
image = np.zeros((500, 500, 4), dtype=np.int16)
would be a transparent, black 500x500 square.
I would like to be able to quickly fill the image with a uniform color. For instance, fill_img(some_instance_with_img, (255, 0, 0, 255)) would fill the image stored in some_instance_with_img with opaque red. The following code does the trick, assuming self is an instance that stores its image as self.image:
def fill_img(self, color):
    color = np.array(color)
    shape = self.image.shape
    self.image = np.tile(color, (shape[0] * shape[1])).reshape(shape)
However, it creates a brand new array and simply reassigns self.image to this new array. What I would like to do is avoid this intermediate array. If np.tile had an out argument, it would look like:
def fill_img(self, color):
    color = np.array(color)
    shape = self.image.shape
    np.tile(color, (shape[0] * shape[1]), out=self.image)
    self.image.reshape(shape)
but np.tile does not support an out parameter. It feels like I am just missing something, although it is possible that this behavior doesn't exist. Any help would be appreciated. Thanks.
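For what it's worth, a minimal sketch of an in-place version (the name fill_img_inplace is just illustrative, and it assumes the array keeps its (height, width, 4) shape and dtype) is to use broadcasting assignment instead of np.tile, so no intermediate array is created:

import numpy as np

def fill_img_inplace(self, color):
    # The length-4 color broadcasts across the last axis, writing into
    # the existing buffer rather than allocating a new array.
    self.image[...] = color

# hypothetical usage: fill_img_inplace(some_instance_with_img, (255, 0, 0, 255))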
It's easy to create a color image with a constant scalar across all channels:
height, width = 3, 4
shape = (height, width)
num_channels = 3
scalar_value = 0.5
image = np.full((*shape, num_channels), scalar_value)
Is there an easy way to create an image with a constant color vector?
vector_value = (0.3, 0.4, 0.5) # e.g. (red, green, blue)
image = create_new(shape, vector_value)
This could be done using a custom function:
def create_new(shape, vector_value):
    image = np.empty((*shape, len(vector_value)))
    image[...] = vector_value
    return image
but I am wondering if this can be done using a simple numpy expression.
The documentation for np.full is a little misleading (read: incorrect). It accepts any broadcastable value as fill_value, not just a scalar. That means you can just do
np.full((*shape, len(vector_value)), vector_value)
I tested this with numpy version 1.17.3, and I'm not sure when that changed. My guess is that if you go back far enough, the documentation held true, and fill_value could only be a scalar.
There is currently an issue open to update the documentation: https://github.com/numpy/numpy/issues/14837
This seems to work, although it creates a view onto the vector rather than a separate image array:
image = np.broadcast_to(vector_value, (*shape, num_channels))
Perhaps this one, although it is a bit long:
image = np.broadcast_to(vector_value, (*shape, num_channels)).copy()
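To make the view/copy distinction concrete, a small check (assuming the shapes above) shows that the broadcast result still shares memory with vector_value and is read-only, while the .copy() is independent and writable:

import numpy as np

shape = (3, 4)
num_channels = 3
vector_value = np.array((0.3, 0.4, 0.5))

view = np.broadcast_to(vector_value, (*shape, num_channels))
copied = view.copy()

print(np.shares_memory(view, vector_value))    # True: still backed by vector_value
print(view.flags.writeable)                    # False: assigning into it raises ValueError
print(np.shares_memory(copied, vector_value))  # False: independent buffer
print(copied.flags.writeable)                  # True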
I have image A with dimension (512, 512, 3).
I want to find all the pixels that are != [255, 255, 255].
Given those pixels, I want to color the same coordinates in another image B.
What am I doing wrong?
indices = np.where(imgA != [255, 255, 255])
imgB[indices] = [0, 0, 0]
This template should get you on the right path:
from PIL import Image

picture = Image.open(path_to_picture)
width, height = picture.size

for x in range(width):
    for y in range(height):
        current_color = picture.getpixel((x, y))
        if current_color[0:3] != (255, 255, 255):
            picture.putpixel((x, y), (***, ***, ***) + (current_color[-1],))

picture.save(path_to_new_picture)
Note here that getpixel() will return a tuple that contains the RGBA values for the given pixel. In this example, I am assuming that you are retaining the alpha value and simply modifying the RGB values of the current pixel.
You need to loop over each pixel in the image.
imgA != [255, 255, 255] does not compare whole pixels: broadcasting the (3,) array against the (512, 512, 3) array gives an element-wise comparison, so you get a (512, 512, 3) boolean array with one entry per channel rather than one per pixel.
Even if your images are not built from numpy matrices, this point still applies. If you run into performance issues, use cython for faster for loops.
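If you would rather stay vectorized in numpy, here is a minimal sketch (assuming imgA and imgB are (512, 512, 3) uint8 arrays) that reduces the per-channel comparison to a per-pixel mask with np.any:

import numpy as np

# Hypothetical arrays standing in for the images in the question.
imgA = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
imgB = np.zeros_like(imgA)

# True wherever at least one channel differs from white.
mask = np.any(imgA != [255, 255, 255], axis=-1)

# Boolean indexing assigns whole pixels at those coordinates.
imgB[mask] = [0, 0, 0]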
I have calculated a matrix of RGB triples for an image and I would like to know the most straightforward technique to display them in an interactive window. I suspect that pygame will be involved. Techniques that minimize the use of pip will be given preference.
result = numpy.zeros([height, width, 3], dtype=numpy.uint8)
pyopencl.enqueue_copy(queue, result, result_g).wait()
surface = pygame.display.set_mode((width, height), pygame.DOUBLEBUF)
# now what?
The solution I was able to get working was this:
result = numpy.zeros([height, width, 3], dtype=numpy.uint8)
pyopencl.enqueue_copy(queue, result, result_g).wait()
surface = pygame.display.set_mode((width, height), pygame.DOUBLEBUF)
result2 = numpy.transpose(result, (1, 0, 2))
pygame.pixelcopy.array_to_surface(surface, result2)
pygame.display.flip()
The transpose is only necessary because my opencl kernel computed an image arranged as result[y,x,:] = (r,g,b) whereas array_to_surface expects result[x,y,:] (which is backwards from how most framebuffers work). I could alter my opencl kernel to store things in column-major order if I wanted to avoid the transpose.
This solution only works because my surface is the exact same dimensions as my image. I will upvote any other good solutions that work when the surface and the pixel matrix are different dimensions (because someone might find this article when searching for that).
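For the case where the surface and the pixel matrix have different dimensions, one option (a sketch, not something I have wired into the code above) is to build a temporary surface from the array with pygame.surfarray and let pygame scale it onto the display:

import numpy
import pygame

pygame.init()
screen = pygame.display.set_mode((1280, 960), pygame.DOUBLEBUF)

# Hypothetical image data; surfarray expects (width, height, 3) ordering.
rgb = numpy.zeros((640, 480, 3), dtype=numpy.uint8)

temp = pygame.surfarray.make_surface(rgb)                  # surface sized like the array
scaled = pygame.transform.scale(temp, screen.get_size())   # stretch to the display size
screen.blit(scaled, (0, 0))
pygame.display.flip()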
It is difficult to answer exactly without knowing what the code you have shown does, but something like this:
for c in range(width):
    for d in range(height):
        color = tuple(result[d, c])
        pygame.draw.line(surface, color, (c, d), (c + 1, d))
I am getting some frame data from a web cam in the form of rgb values.
import numpy as np
frame = get_video()
print(np.shape(frame))
The output is (480, 640, 3). Now I want to construct image from these values. So, I want to use
im = Image.new("RGB", (480, 640), frame)
But, here the third argument takes a tuple. I get this error
SystemError: new style getargs format but argument is not a tuple
So, my question is what is the best way to convert this frame data to a tuple, so that I can construct my image.
I am assuming here that you are importing the class Image from PIL.
The documentation for Image.new(), accessed with the console command Image.new?, is:
In [3]: Image.new?
Type:        function
Namespace:   Interactive
File:        /usr/lib/python2.7/dist-packages/PIL/Image.py
Definition:  Image.new(mode, size, color=0)
Docstring:   Create a new image
The third parameter is an RGB color, such as (255, 255, 255), used to fill the whole image. You can't initialize individual pixels with this function.
I am also assuming that frame is a 3D array. As your output suggests, it is 480 rows by 640 columns of RGB tuples.
I don't know if there is a simpler way to do this, but I would set the pixel values by the putpixel() function, something like:
im = Image.new("RGB", (480, 640), (0, 0, 0))  # initialize a 480x640 black image
for i in range(480):
    for j in range(640):
        im.putpixel((i, j), tuple(frame[i][j]))
Always check the docstring via console, it saves a lot of time. I also recommend using ipython as your console.
I found this implementation to be faster:
from PIL import Image

im = Image.new("RGB", (480, 640), (0, 0, 0))  # initialize a 480x640 black image
while True:
    frame = get_video()
    im = Image.fromarray(frame)
    im.save('some-name', 'JPEG')
How can I convert a surface object in pygame to a 2-dimensional array of RGB values (one value for every pixel)? I have read the documentation on PixelArrays and Surfarrays and I cannot seem to find an answer to my question. Examples are more than welcome.
The pygame documentation says that given a surface object, you can create a PixelArray wrapper to provide direct 2D array access to its pixels by calling the module's PixelArray() method like this:
pxarray = pygame.PixelArray(surface)
Logically a PixelArray object is a 2-dimensional array of RGB values stored as integers.
A PixelArray pixel item can be assigned a raw integer value, a
pygame.Color instance (an object for color representations), or a (r, g, b[, a]) tuple.
pxarray[x, y] = 0xFF00FF
pxarray[x, y] = pygame.Color(255, 0, 255)
pxarray[x, y] = (255, 0, 255)
It also mentions:
However, only a pixel’s integer value is returned. So, to compare a pixel to a
particular color, the color needs to be first mapped using the Surface.map_rgb()
method of the Surface object for which the PixelArray was created.
This means that whenever you're reading a pixel's value rather than assigning to the array, you'll need to convert the color you want to compare against into an integer with the Surface.map_rgb() method of the Surface the PixelArray was created from, as is done in the following conditional:
# Check, if the first pixel at the topleft corner is blue
if pxarray[0, 0] == surface.map_rgb((0, 0, 255)):
...
Hope this helps.
After wrapping a Surface in a PixelArray, the unmap_rgb function can convert the pixels into pygame.Color objects.
For example:
import pygame
pygame.init()
image = pygame.image.load('example.png')
pixel = pygame.PixelArray(image)
center = int(image.get_width()/2), int(image.get_height()/2)
colour = image.unmap_rgb(pixel[center])
print(str(colour))
Edit: To do the complete conversion of all these pygame.Color objects into a two-dimensional array of RGB tuples, you could use this code:
import pygame
pygame.init()
image = pygame.image.load('example.png')
pixel = pygame.PixelArray(image)
color_array = [[image.unmap_rgb(pixel[x, y]) for x in range(0, image.get_width())] for y in range(0, image.get_height())]
rgb_array = [[(column.r, column.g, column.b) for column in row] for row in color_array]
print(str(rgb_array))
Note that if you were to rely on map_rgb (as suggested in the other answer) to recover full RGB tuples, you might have to test each pixel against up to 16 million mapped values (one for each of the 256 * 256 * 256 possible RGB colors), and keeping the alpha values as well would multiply the possibilities by another 256. Another strategy using map_rgb would be to build a lookup table of the (256 ** n) possible values and check each pixel against it. While that is possible, the unmap_rgb approach presented here seems much more efficient.
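If your pygame build has numpy support, the surfarray module gives another route to a per-pixel RGB array; a short sketch, reusing the hypothetical 'example.png' from above:

import pygame

pygame.init()
image = pygame.image.load('example.png')

# array3d copies the surface into a (width, height, 3) numpy array of RGB values.
rgb_array = pygame.surfarray.array3d(image)

# e.g. the RGB triple at the centre pixel
w, h = image.get_size()
print(rgb_array[w // 2, h // 2])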