I want to crop an image by selecting a subregion of it to turn into a new image file.
Based on what I see in the PIL docs, I am trying to decide between 2 approaches.
For both approaches:
# assume I already have a PIL-opened image called original_image
# assume I have a crop_box tuple for the area to crop, e.g. (0, 0, 100, 100)
Approaches
Cropping the original image directly, and then turning the returned crop region into a new file. It seems that I would be giving up all the mode data / other metadata of the original image, which would then have to be re-loaded?
cropped_image = original_image.crop(crop_box)
# process cropped version as needed
Do an image.transform() in which I select a sub-region
# assume I have an output size of (100, 100)
cropped_image = original_image.transform(size, Image.EXTENT, crop_box)
# process cropped version as needed
Is one approach preferable to the other in terms of speed, data preservation, or another important factor that I'm missing?
The following is from the PIL.Image documentation under the transform function:
It is slightly slower than crop
Plus crop is simpler and shows your intent. That's what I'd use.
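For what it's worth, crop() hands back an Image in the same mode as the original, so there is no mode data to re-load. A minimal sketch (assuming Pillow and a hypothetical example.jpg):
from PIL import Image

original_image = Image.open("example.jpg")  # hypothetical input file
crop_box = (0, 0, 100, 100)

cropped_image = original_image.crop(crop_box)
print(original_image.mode, cropped_image.mode)  # same mode, e.g. "RGB RGB"
cropped_image.save("cropped.jpg")  # the sub-region written out as its own file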
Related
I am trying to create a pipeline in which I first render an image using the Blender Python API (I am using Blender 2.90) and then perform some image processing in Python. I want to fetch the image directly from Blender without first writing the rendered image to disk and then loading it again. I ran the following code within the Blender GUI to do so:
import bpy
import numpy as np
import PIL.Image as Image
from skimage.util import img_as_ubyte
resolution_x = 512
resolution_y = 512
# render settings
scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "path/to/good_image.png"
# create Viewer Layer in Compositor
scene.use_nodes = True
tree = scene.node_tree
nodes = tree.nodes
links = tree.links
for node in nodes:
    nodes.remove(node)
render_layer_node = nodes.new('CompositorNodeRLayers')
viewer_node = nodes.new('CompositorNodeViewer')
links.new(viewer_node.inputs[0], render_layer_node.outputs[0])
# render scene and get pixels from Viewer Node
bpy.ops.render.render(write_still=True)
pixels = bpy.data.images['Viewer Node'].pixels
# do some processing and save
img = np.flip(img_as_ubyte(np.array(pixels[:]).reshape((resolution_y, resolution_x, 4))), axis=0)
Image.fromarray(img).save("path/to/bad_image.png")
Problem: The image I get from the Viewer Node is much darker (bad image) than the image saved in the conventional way (good image). Does anyone have an idea why this happens and how to fix it? Does Blender maybe treat pixel values differently than I expect?
Some additional information:
Before conversion to uint8, the values of the alpha channel within the dark image are 1.0 (as they actually should be). Background values in the dark image are not 0.0 or negative (as one might guess from appearance), but 0.05...
What I tried:
I thought that pixels might be scaled within range -1 to 1, so I rescaled the pixels to range 0 to 1 before transforming to uint8... Did not lead to the correct image either :(
It's because the image that you get from the Viewer Node is the one "straight from compositing" before color management takes place. You can have a look at the documentation here: this image is still in the linear space.
Your good_image.png on the other hand is obtained after transformation into the "Display Space" (see diagram in the doc). Hence it was transformed into a log-space, maybe gamma-corrected, etc.
Finally, you can get an image that is close to (though slightly different from) the good image from the Viewer Node by calling bpy.data.images['Viewer Node'].save_render(filepath) instead, but there is no way to directly extract the color-managed version without rendering to a file first. You can probably do it yourself by adding PyOpenColorIO to your script and applying the color management from that module.
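If an approximate conversion is good enough, another option (my own sketch, not from the original answer) is to apply the standard linear-to-sRGB transfer function to the Viewer Node pixels before the uint8 conversion; note that Blender's default Filmic view transform is more involved than plain sRGB, so this will not match good_image.png exactly:
import numpy as np
from skimage.util import img_as_ubyte

def linear_to_srgb(linear):
    # piecewise sRGB transfer function, applied channel-wise to floats in [0, 1]
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    linear * 12.92,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

# pixels, resolution_x and resolution_y as in the question's snippet
rgba = np.array(pixels[:]).reshape((resolution_y, resolution_x, 4))
rgba[..., :3] = linear_to_srgb(rgba[..., :3])
img = np.flip(img_as_ubyte(rgba), axis=0)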
I have built a stitching program that is able to stitch any number of images, but they have to be in a specific order, which is a problem for my image sets (I am working with 1452 images). My goal is to create an unordered stitcher. I am convinced that the issue arises when I am physically (well, virtually) stitching the images to one another.
Here is code for my stitching (assume keypoints found are accurate as well as homography)
def stitchMatches(self,image1,image2,homography):
    #gather x and y axis of images that will be stitched
    height1, width1 = image1.shape[0], image1.shape[1]
    height2, width2 = image2.shape[0], image2.shape[1]
    #create blank image that will be large enough to hold stitched image
    blank_image = np.zeros(((width1 + width2),(height1 + height2),3),np.uint8)
    #stitch image two into the resulting image while using blank_image
    #to create a large enough frame for images
    result = cv2.warpPerspective((image1),homography,blank_image.shape[0:2])
    #numpy notation for slicing a matrix together allows you to see the image
    result[0:image2.shape[0], 0:image2.shape[1]] = image2
    return result
I believe the issue is with these two lines.
result = cv2.warpPerspective((image1),homography,blank_image.shape[0:2])
result[0:image2.shape[0], 0:image2.shape[1]] = image2
When I run the code on an ordered image set (image 1 is the north-east most) this is the result.
However when I run the images in the reverse order I get this result.
To my understanding, this is because I have set the resulting image shape to map to image 2, but I haven't been able to develop a workaround.
Here is a photo of multiple images showing the same issue of the images getting cropped out.
Any help is appreciated.
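One common way to avoid the cropping (a sketch of the standard approach, not taken from this thread) is to push the corners of image1 through the homography, compute the bounding box of both images, and prepend a translation so nothing lands at negative coordinates:
import cv2
import numpy as np

def stitch_with_offset(image1, image2, homography):
    # corners of both images, with image1's corners pushed through the homography
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    warped_corners1 = cv2.perspectiveTransform(corners1, homography)
    all_corners = np.concatenate((warped_corners1, corners2), axis=0)

    # bounding box of the combined result
    x_min, y_min = np.int32(all_corners.min(axis=0).ravel() - 0.5)
    x_max, y_max = np.int32(all_corners.max(axis=0).ravel() + 0.5)

    # translate so the top-left of the bounding box lands at (0, 0)
    translation = np.array([[1, 0, -x_min],
                            [0, 1, -y_min],
                            [0, 0, 1]], dtype=np.float64)
    output_size = (x_max - x_min, y_max - y_min)  # warpPerspective expects (width, height)

    result = cv2.warpPerspective(image1, translation.dot(homography), output_size)
    result[-y_min:h2 - y_min, -x_min:w2 - x_min] = image2
    return result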
After searching for a few hours, I ended up on this link. A little background information follows.
I'm capturing live frames of a running embedded device via a hardware debugger. The captured frames are stored as raw binary files, without headers or format. After looking at the above link and understanding, albeit perfunctorily, NumPy and Matplotlib, I was able to convert the raw binary data to an image successfully. This is important because I'm not sure if the link to the raw binary file will help anyone.
I use this code:
import matplotlib.pyplot as plt # study documentation
import numpy as np # " "
iFile = "FramebufferL0_0.bin" # Layer-A
shape = (430, 430) # length and width of the image
dtype = np.dtype('<u2') # unsigned 16 bit little-endian.
oFile = "FramebufferL0_0.png"
fid = open(iFile, 'rb')
data = np.fromfile(fid, dtype)
image = data.reshape(shape)
plt.imshow(image, cmap = "gray")
plt.savefig(oFile)
plt.show()
Now, the image I'm showing is black and white because the color map is gray-scale (right?). The actual captured frame is NOT black and white. That is, the image I see on my embedded device is "colorful".
My question is: how can I calculate the actual color of each pixel from the raw binary file? Is there a way I can get the actual color map of the image from the raw binary? I looked into this example and I'm sure that, if I'm able to calculate the R, G and B channels (and alpha too), I'll be able to recreate the exact image. Example code would be of much help.
An RGBA image has 4 channels, one for each color and one for the alpha value. The binary file seems to have a single channel, as you don't report an error when performing the data.reshape(shape) operation (the shape for the corresponding RGBA image would be (430, 430, 4)).
I see two potential reasons:
The image actually does have colour information, but when you are grabbing the data you are only grabbing one of the four channels.
The image is actually a gray-scale image, but the embedded device shows a pseudocolor image, creating the illusion of colour information. Without knowing which colourmap is being used, it is hard to help you, other than to point you towards matplotlib.pyplot.colormaps(), which lists all the colour maps already available in matplotlib.
Could you
a) explain the exact source / type of imaging modality, and
b) show a photo of the output of the embedded device?
PS: Also, at least in my hands, the pasted binary file seems to have a size of 122629, which is incongruent with an image shape of (430,430).
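One more thing worth checking, since the dtype is 16 bits per pixel: many embedded framebuffers use a packed RGB565 layout. This is purely an assumption on my part (the question does not say which pixel format the device uses), but unpacking it would look roughly like this:
import numpy as np
from PIL import Image

data = np.fromfile("FramebufferL0_0.bin", dtype=np.dtype('<u2'))
pixels = data.reshape((430, 430))  # assumes the shape from the question holds

# unpack RGB565: 5 bits red, 6 bits green, 5 bits blue per 16-bit word
r = ((pixels >> 11) & 0x1F) * 255 // 31
g = ((pixels >> 5) & 0x3F) * 255 // 63
b = (pixels & 0x1F) * 255 // 31

rgb = np.dstack((r, g, b)).astype(np.uint8)
Image.fromarray(rgb).save("FramebufferL0_0_rgb565.png")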
I'm writing code, part of which reads an image source and displays it on the screen for the user to interact with. I also need the sharpened image data. I use the following to read the data and display it in pygame:
from scipy import misc, ndimage
import pygame

def image_and_sharpen_array(file_name):
    #read the image data and return it, with the sharpened image
    image = misc.imread(file_name)
    blurred = ndimage.gaussian_filter(image,3)
    edge = ndimage.gaussian_filter(blurred,1)
    alpha = 20
    out = blurred + alpha*(blurred - edge)
    return image,out

#get image data
scan,sharpen = image_and_sharpen_array('foo.jpg')
w,h,c = scan.shape

#setting up pygame
pygame.init()
screen = pygame.display.set_mode((w,h))
pygame.surfarray.blit_array(screen,scan)
pygame.display.update()
And the image is displayed on the screen, only rotated and inverted. Is this due to differences between misc.imread and pygame, or is this due to something wrong in my code?
Is there another way to do this? The majority of solutions I read involved saving the figure and then reading it with pygame.
I often use the numpy swapaxes() method. In this case we only need to swap the x and y axes (axis numbers 0 and 1) before displaying the array:
return image.swapaxes(0,1),out
I thought technico provided a good solution - just a little lean on info. Assuming get_arr() is a function that returns the pixel array:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
new_surf = pygame.pixelcopy.make_surface(pixl_arr)
screen.blit(new_surf, (dest_x, dest_y))
Alternatively, if you know that the image will always be of the same dimensions (as in iterating through frames of a video or gif file), it would be more efficient to reuse the same surface:
pixl_arr = get_arr()
pixl_arr = numpy.swapaxes(pixl_arr, 0, 1)
pygame.pixelcopy.array_to_surface(old_surf, pixl_arr)
screen.blit(old_surf, (dest_x, dest_y))
YMMV, but so far this is working well for me.
Every library has its own way of interpreting image arrays. By 'rotated' I suppose you mean transposed; that is how pygame presents numpy arrays. There are many ways to make it look 'correct', and there are also several ways to display the array itself, which give you full control over channel representation and so on. In pygame version 1.9.2, this is the fastest array rendering that I could ever achieve (note that for earlier versions this will not work!).
This function will fill the surface with the array:
def put_array(surface, myarr): # put array into surface
    bv = surface.get_view("0")
    bv.write(myarr.tostring())
If that is not working, use this instead; it should work everywhere:
# put array data into a pygame surface
def put_arr(surface, myarr):
    bv = surface.get_buffer()
    bv.write(myarr.tostring(), 0)
You will probably still not get exactly what you want: the result may be transposed or have swapped color channels. The idea is to manage your arrays in the form that suits this surface buffer. To find out the correct channel order and axis order, use the OpenCV library (cv2.imread(filename)). OpenCV opens images in BGR order by default and has a lot of conversion functions. If I remember correctly, when writing directly to a surface buffer, BGR is the correct order for a 24-bit surface and BGRA for a 32-bit surface. So you can try to take the image array you get out of a file, put it into the surface with this function, and blit it to the screen.
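As a rough sketch of that idea (my own example, assuming a 32-bit display surface and the BGRA byte order described above; picture.png is a placeholder file name), reading a file with OpenCV and pushing it into the buffer could look like this:
import cv2
import pygame

pygame.init()
img = cv2.imread("picture.png")                  # OpenCV loads this as BGR, shape (h, w, 3)
h, w = img.shape[:2]
screen = pygame.display.set_mode((w, h), 0, 32)  # request a 32-bit surface

bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)     # add an alpha channel -> BGRA
put_arr(screen, bgra)                            # helper defined above writes into the surface buffer
pygame.display.update()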
There are other ways to draw arrays, e.g. here is a whole set of helper functions: http://www.pygame.org/docs/ref/surfarray.html
But I would not recommend using them, since surfaces are not meant for direct pixel manipulation; you will probably get lost in references.
Small tip: to do a 'signalling test', use a picture like this. That way you will immediately see if something is wrong: just load it as an array and try to render it.
My suggestion is to use the pygame.transform module. It has flip and rotate functions, which you can combine to get whatever transformation you need. Look up the docs on this.
My recommendation is to write the output image to a new Surface, then apply the transformations and blit it to the display.
temp_surf = pygame.Surface((w,h))
pygame.surfarray.blit_array(temp_surf, scan)
# transform temp_surf here, e.g. with pygame.transform
screen.blit(temp_surf, (0,0))
I have no idea why this is. It is probably something to do with the order in which the axes are transferred from a 2d array to a pygame Surface.
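As a concrete example of the transform step above (my own sketch; whether you need a rotation, a flip, or both depends on how your data actually comes out):
# e.g. undo a transpose-like display: rotate by -90 degrees, then mirror horizontally
temp_surf = pygame.transform.rotate(temp_surf, -90)
temp_surf = pygame.transform.flip(temp_surf, True, False)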
I create an image with PIL:
I need to fill in the empty space (depicted as black). I could easily fill it with a static color, but what I'd like to do is fill the pixels in with nearby colors. For example, the first pixel after the border might be a Gaussian blur of the filled-in pixels. Or perhaps a push-pull type algorithm as described in The Lumigraph, Gortler et al.
I need something that is not too slow because I have to run this on many images. I have access to other libraries, like numpy, and you can assume that I know the borders or a mask of the outside region or inside region. Any suggestions on how to approach this?
UPDATE:
As suggested by belisarius, opencv's inpaint method is perfect for this. Here's some python code that uses opencv to achieve what I wanted:
import Image, ImageDraw, cv
im = Image.open("u7XVL.png")
pix = im.load()
#create a mask of the background colors
# this is slow, but easy for example purposes
mask = Image.new('L', im.size)
maskdraw = ImageDraw.Draw(mask)
for x in range(im.size[0]):
    for y in range(im.size[1]):
        if pix[(x,y)] == (0,0,0):
            maskdraw.point((x,y), 255)
#convert image and mask to opencv format
cv_im = cv.CreateImageHeader(im.size, cv.IPL_DEPTH_8U, 3)
cv.SetData(cv_im, im.tostring())
cv_mask = cv.CreateImageHeader(mask.size, cv.IPL_DEPTH_8U, 1)
cv.SetData(cv_mask, mask.tostring())
#do the inpainting
cv_painted_im = cv.CloneImage(cv_im)
cv.Inpaint(cv_im, cv_mask, cv_painted_im, 3, cv.CV_INPAINT_NS)
#convert back to PIL
painted_im = Image.fromstring("RGB", cv.GetSize(cv_painted_im), cv_painted_im.tostring())
painted_im.show()
And the resulting image:
A method with nice results is the Navier-Stokes Image Restoration. I know OpenCV has it, don't know about PIL.
Your example:
I did it with Mathematica.
Edit
As per your request, the code is:
i = Import["http://i.stack.imgur.com/uEPqc.png"];
Inpaint[i, ColorNegate@Binarize@i, Method -> "NavierStokes"]
The ColorNegate@ ... part creates the replacement mask.
The filling is done with just the Inpaint[] command.
Depending on how you're deploying this application, another option might be to use GIMP's Python interface to do the image manipulation.
The doc page I linked to is oriented more towards writing GIMP plugins in Python rather than interacting with a background GIMP instance from a Python app, but I'm pretty sure that's also possible (it's been a while since I played with the GIMP/Python interface; I'm a little hazy).
You can also create the mask with the function CreateImage(), for instance:
inpaint_mask = cv.CreateImage(cv.GetSize(im), 8, 1)
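For reference, with the newer cv2 API the whole mask-and-inpaint step can be done directly on numpy arrays. A sketch, assuming (as in the example above) that the region to fill is exactly black:
import cv2
import numpy as np

img = cv2.imread("u7XVL.png")  # loaded as a BGR numpy array

# mask: 255 where every channel is 0 (the black fill region), 0 elsewhere
mask = np.all(img == 0, axis=2).astype(np.uint8) * 255

painted = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
cv2.imwrite("u7XVL_inpainted.png", painted)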