Pillow module - hue change when cropping and saving (no conversion) - python

I want to crop an image and save it, but the image's hue changes significantly after saving, even though I save in the same format. Why is this happening? I'm not converting it at any point in the process.
Here is my code:
def square_crop(resized_image):
    print "square crop"
    width, height = resized_image.size
    print width, height
    edge_length = 3600
    if(width >= height):
        print("width>=height")
        left = (width - edge_length)/2
        top = (height - edge_length)/2
        right = (width + edge_length)/2
        bottom = (height + edge_length)/2
        squared_image = resized_image.crop((left, top, right, bottom))
        squared_image.save('squared.png')
And the confusing part is that this code uses the same image and saves it without hue change, so the cropping function must have an issue:
def image_resize(image):
    print "image resize"
    width, height = image.size
    print width, height
    if(width > 3601 and height > 3601):
        print width, height
        if(width >= height):
            ratio = float(width)/float(height)
            w = 3601 * ratio
            h = 3601.0
            print ratio, w, h
            resized_image = image.resize([int(w), int(h)])
            resized_image.save("resized.png")
        else:
            ratio = float(height)/float(width)
            print(ratio)
            w = 3601.0
            h = 3601 * ratio
            print ratio, w, h
            resized_image = image.resize([int(w), int(h)])
            resized_image.save("heic1509a_resized.png")
*EDIT: When I import a .jpg file and save it as .jpg, both functions show the same hue issue. Same with .tif.
**EDIT: I've also noticed that for some images this red color loss does not happen. I truly have no idea what is going on. I will leave the before and after screenshots so you can see for yourself.
Before - After
***EDIT: The problem comes from the color space: the images that changed color when saved were encoded in the ProPhoto RGB color space (ROMM RGB, Reference Output Medium Metric).
I am using GIMP 2 to convert them to sRGB first without losing color, but I would like to find a way to do that automatically from Python.
I will post any new updates on this issue.

The problem was that when I saved the file, PIL dropped the image's color profile (ROMM RGB), so the pixel data was interpreted in a different color space (RGB or sRGB) and basically every color changed.
All you have to do is preserve the color profile of the image and you're fine. If you want to actually convert to another color space, you should look at the OpenCV library.
I can't explain it in much detail because I am just scratching the surface on this. Here is the code that solved the issue:
resized_image.save('resized.jpg',  # file name
                   format='JPEG',  # format of the file
                   quality=100,    # compression quality
                   icc_profile=resized_image.info.get('icc_profile', ''))  # preserve the ICC profile of the photo (dropping it was what caused the problem)
Here is a link to a more in-depth answer: LINK
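For the automatic conversion the asker wanted, Pillow itself ships a color-management module, ImageCms (a wrapper around littlecms), which can remap ProPhoto RGB pixel data into sRGB using the profile embedded in the file. A sketch, assuming the profile lives in the image's `icc_profile` info key (the function name `to_srgb` is mine):

```python
import io
from PIL import Image, ImageCms

def to_srgb(image):
    """Convert an image to sRGB using its embedded ICC profile.
    If the image carries no profile, return it unchanged."""
    icc = image.info.get('icc_profile')
    if not icc:
        return image
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc))
    dst_profile = ImageCms.createProfile('sRGB')
    # profileToProfile remaps the pixel data into sRGB, so the saved file
    # looks right even in viewers that assume sRGB and ignore profiles
    return ImageCms.profileToProfile(image, src_profile, dst_profile,
                                     outputMode='RGB')
```

After converting this way, the file can be saved without embedding the original profile at all, since the data is already sRGB.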

Related

Resize image in Python so it can be saved without being stretched

Does anyone know how I can reproduce the CSS property `object-fit: scale-down` while generating a PDF, when images come in different aspect ratios? I tried fixing only one dimension, and also choosing the CSS based on shape: if the height is more than 20% greater than the width, move the image into a vertical rectangle box; if the width is more than 20% greater than the height, move it into a horizontal box; and if the difference is under 20%, move it into a square box. None of that works and the images end up stretched. Any logic or workaround would be helpful; if possible I want to use PIL to achieve this.
Language: python3
Libraries used: jinja, xhtml2pdf, PIL
sample jinja code that is getting converted into pdf later
<img src="{{path}}" style="height:{{height}};width:{{width}};" />
Python code tried
try:
    # logo fixes for different sizes of logo
    im = Image.open('something')
    width, height = im.size
    # logic: if the height exceeds the width by more than 20%, it is a vertical
    # image; if the width exceeds the height by more than 20%, it falls under
    # the horizontal category; the default is 2cm x 2cm for a square image
    if height + (20*width)/100 > width and height != width:  # horizontal
        data['width'] = '2cm'
        data['height'] = '4cm'
    elif width + (20*height)/100 > height and height != width:  # vertical
        data['width'] = '4cm'
        data['height'] = '2cm'
    else:  # default
        data['width'] = '2cm'
        data['height'] = '2cm'
except Exception as imageerror:
    data['width'] = '2cm'
    data['height'] = '2cm'
try:
    image = Image.open('something')
    image.thumbnail((700, 700), Image.ANTIALIAS)
    image.save('something', 'JPEG', quality=100)  # replace existing file
    width, height = image.size  # Image.size is (width, height)
except Exception as e:
    # size extraction failed, print e
    pass
This way we can provide maximum boundaries for my images; PIL takes care of the resizing and gives back the width and height of the image that fits within those boundaries. Problem solved.
I couldn't quite follow your logic with the 30%, but something like this would work:
from PIL import Image

image = Image.open('./image.png')
width, height = image.size
if width * 1.3 > height:
    new_height = width
    new_width = width
else:
    new_height = height
    new_width = height
image.resize([new_width, new_height]).save('./new_image.png')

How can I resize images given a certain ratio?

I have many images of different sizes in my directory, and I want to resize them all by a certain ratio, say 0.25 or 0.2; it should be a variable I can control from my code, and I want the resulting images to be written to another directory.
I looked into this approach supplied by this previous question How to resize an image in python, while retaining aspect ratio, given a target size?
Here is my approach,
aspectRatio = currentWidth / currentHeight
height * width = area
So,
height * (height * aspectRatio) = area
height² = area / aspectRatio
height = sqrt(area / aspectRatio)
At that point we know the target height, and width = height * aspectRatio.
Ex:
area = 100 000
height = sqrt(100 000 / (700/979)) = 373.974
width = 373.974 * (700/979) = 267.397
but it lacks lots of details, for example: how to apply these sizes back to the image, which libraries to use, and so on.
Edit: looking more into the docs, img.resize looks ideal (although I also noticed .thumbnail), but I can't find a proper example for a case like mine.
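The derivation above translates directly into code. Here is a minimal sketch (the function name `resize_to_area` is mine, not from any library; the example numbers match the 700x979 image and 100 000-pixel area used above):

```python
import math
from PIL import Image

def resize_to_area(img, area):
    """Resize img so it has roughly `area` pixels, keeping aspect ratio."""
    width, height = img.size
    aspect = width / height                         # aspectRatio = currentWidth / currentHeight
    new_height = int(round(math.sqrt(area / aspect)))  # height = sqrt(area / aspectRatio)
    new_width = int(round(new_height * aspect))        # width = height * aspectRatio
    return img.resize((new_width, new_height))
```

With a 700x979 input and area = 100 000, this lands on roughly 267x374, matching the worked example.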
from PIL import Image
ratio = 0.2
img = Image.open('/home/user/Desktop/test_pic/1-0.png')
hsize = int((float(img.size[1])*float(ratio)))
wsize = int((float(img.size[0])*float(ratio)))
img = img.resize((wsize,hsize), Image.ANTIALIAS)
img.save('/home/user/Desktop/test_pic/change.png')
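Since the question asks for a whole directory in and a different directory out, the snippet above can be wrapped in a small batch loop. A sketch, with `src_dir`/`dst_dir` as assumed parameter names and only a few common extensions handled:

```python
from pathlib import Path
from PIL import Image

def resize_directory(src_dir, dst_dir, ratio=0.2):
    """Resize every image in src_dir by `ratio` and write it to dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in src.iterdir():
        if path.suffix.lower() not in ('.png', '.jpg', '.jpeg'):
            continue
        img = Image.open(path)
        wsize = int(img.size[0] * ratio)
        hsize = int(img.size[1] * ratio)
        # LANCZOS is the high-quality filter (ANTIALIAS is its old name)
        img.resize((wsize, hsize), Image.LANCZOS).save(dst / path.name)
```

Calling `resize_directory('/home/user/Desktop/test_pic', '/home/user/Desktop/out', 0.2)` would then process the whole folder with the ratio under your control.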
You can create your own small routine to resize and resave pictures:
import cv2

def resize(oldPath, newPath, factor):
    """Resize image at 'oldPath' in both dimensions by the same 'factor'.
    Store as 'newPath'."""
    def r(image, f):
        """Resize 'image' by 'f' in both dimensions."""
        # image.shape is (height, width, ...), but cv2.resize expects (width, height)
        newDim = (int(f*image.shape[1]), int(f*image.shape[0]))
        return cv2.resize(image, newDim, interpolation=cv2.INTER_AREA)
    cv2.imwrite(newPath, r(cv2.imread(oldPath), factor))
And test it like so:
# load and resize (local) pic, save as new file (adapt paths to your system)
resize(r'C:\Pictures\2015-08-05 10.58.36.jpg', r'C:\Pictures\mod.jpg', 0.4)
# show opened modified image
cv2.imshow("...", cv2.imread(r'C:\Pictures\mod.jpg'))
# wait for keypress for display to close
cv2.waitKey(0)
You should add some error handling, e.g. for:
no image at given path
image not readable (file path permissions)
image not writeable (file path permissions)

PIL merge of two images with alpha channels - not working as expected

There are a bunch of questions here on SO that answer the present question; however, the output is not as expected.
The goal is to merge two RGBA images. The information on the alpha channel of each image is not the same.
The current (simplified) code is:
from PIL import Image
image = '1.png'
watermark = '2.png'
wmark = Image.open(watermark)
img = Image.open(image)
img.paste(wmark, (0, 0), wmark)
img.save("result.png", "PNG")
The two images are:
Background
Foreground
Expected output
Actual result
In case you don't see the difference, here are the alpha channels (inverted for better visualization) of the final versions.
Expected result - alpha channel
Actual result - alpha channel
So with that said, is there any way of doing this or am I doing something wrong?
EDIT - clarification following @zenpoy's comment:
If the foreground image has a certain amount of opacity, I want that to be taken into account when superimposing both images, but I don't want the alpha channel of the second image to be added to the first. Much like putting a piece of glass (the foreground image) in front of a paper image (background).
In other words, if the background image was RGB instead of RGBA, the final image should have no alpha information.
From your initial description, the following idea seems to be equivalent. Let X, Y be two RGBA images. Merge X and Y considering the bands RGB from X and the bands RGBA from Y, producing an image Z. Set the band A in Z to that of the band A in X. This contradicts your final statement, but it seems to give the expected output in this situation.
So, this is the code:
image = '1.png'
watermark = '2.png'
wmark = Image.open(watermark)
img = Image.open(image)

ia, wa = None, None
if len(img.getbands()) == 4:
    ir, ig, ib, ia = img.split()
    img = Image.merge('RGB', (ir, ig, ib))
if len(wmark.getbands()) == 4:
    wa = wmark.split()[-1]

img.paste(wmark, (0, 0), wmark)
if ia:
    if wa:
        # XXX This seems to solve the contradiction, discard if unwanted.
        ia = max_alpha(wa, ia)
    img.putalpha(ia)
img.save('result.png')
where the function max_alpha is:
def max_alpha(a, b):
    # Assumption: 'a' and 'b' are the same size
    im_a = a.load()
    im_b = b.load()
    width, height = a.size
    alpha = Image.new('L', (width, height))
    im = alpha.load()
    for x in xrange(width):
        for y in xrange(height):
            im[x, y] = max(im_a[x, y], im_b[x, y])
    return alpha
This new function seems to take into consideration the contradiction mentioned.
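As a side note, Pillow already ships a C-speed pixel-wise maximum for two same-size single-band images: `ImageChops.lighter`, which keeps the lighter (larger) value at each pixel. It could replace the Python loop above:

```python
from PIL import Image, ImageChops

def max_alpha_fast(a, b):
    """Pixel-wise maximum of two single-band images of the same size;
    equivalent to the explicit loop, but runs inside Pillow's C core."""
    return ImageChops.lighter(a, b)
```

For large alpha channels this avoids iterating every pixel in Python, which is the slow part of the loop-based version.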

PIL: enlarge an image

I'm having trouble getting PIL to enlarge an image. Large images get scaled down just fine, but small images won't get bigger.
# get the ratio of the change in height of this image
# by dividing by the height of the first image
s = h / float(image.size[1])
# calculate the new dimensions of the image
new_size = tuple([int(x*s) for x in image.size])
# if this image height is larger than the image we are sizing to
if image.size[1] > h:
    # make a thumbnail of the image using the new image size
    image.thumbnail(new_size)
    by = "thumbnailed"
    # add the image to the images list
    new_images.append(image)
else:
    # otherwise try to blow up the image - doesn't work
    new_image = image.resize(new_size)
    new_images.append(new_image)
    by = "resized"
logging.debug("image %s from: %s to %s" % (by, str(image.size), str(new_size)))
Both resize and transform methods properly resize images.
size_tuple = im.size
x1 = y1 = 0
x2, y2 = size_tuple
# resize
im = im.resize(size_tuple)
# transform
im = im.transform(size_tuple, Image.EXTENT, (x1,y1,x2,y2))
If you are having the same problem I described, try it on another machine. There must have been something wrong with the Python installation on my server; it worked fine on my local machine.
Here is a working example of how to resize an image in either direction with OpenCV:
import cv2

original_image = cv2.imread('original_image.jpg', 0)
original_height, original_width = original_image.shape[:2]
factor = 2
# cv2.resize expects the new size as (width, height)
resized_image = cv2.resize(original_image,
                           (int(original_width*factor), int(original_height*factor)),
                           interpolation=cv2.INTER_CUBIC)
cv2.imwrite('resized_image.jpg', resized_image)
Simple as that. Use cv2.INTER_CUBIC to enlarge (factor > 1) and cv2.INTER_AREA to make images smaller (factor < 1).

In Python, Python Image Library 1.1.6, how can I expand the canvas without resizing?

I am probably looking for the wrong thing in the handbook, but I want to take an image object and expand its canvas without resizing (stretching/squishing) the original image.
Toy example: imagine a blue rectangle, 200 x 100, then I perform some operation and I have a new image object, 400 x 300, consisting of a white background upon which a 200 x 100 blue rectangle rests. Bonus if I can control in which direction this expands, or the new background color, etc.
Essentially, I have an image to which I will be adding iteratively, and I do not know what size it will be at the outset.
I suppose it would be possible for me to grab the original object, make a new, slightly larger object, paste the original on there, draw a little more, then repeat. It seems like it might be computationally expensive. However, I thought there would be a function for this, as I assume it is a common operation. Perhaps I assumed wrong.
The ImageOps.expand function will expand the image, but it adds the same number of pixels on every side.
The best way is simply to make a new image and paste:
newImage = Image.new(mode, (newWidth,newHeight))
newImage.paste(srcImage, (x1,y1,x1+oldWidth,y1+oldHeight))
If performance is an issue, make your original image bigger than needed and crop it after the drawing is done.
Based on interjay's answer:
#!/usr/bin/env python
from PIL import Image
import math

def resize_canvas(old_image_path="314.jpg", new_image_path="save.jpg",
                  canvas_width=500, canvas_height=500):
    """
    Resize the canvas of old_image_path.

    Store the new image in new_image_path. Center the image on the new canvas.

    Parameters
    ----------
    old_image_path : str
    new_image_path : str
    canvas_width : int
    canvas_height : int
    """
    im = Image.open(old_image_path)
    old_width, old_height = im.size

    # Center the image
    x1 = int(math.floor((canvas_width - old_width) / 2))
    y1 = int(math.floor((canvas_height - old_height) / 2))

    mode = im.mode
    if len(mode) == 1:  # L, 1
        new_background = (255)
    if len(mode) == 3:  # RGB
        new_background = (255, 255, 255)
    if len(mode) == 4:  # RGBA, CMYK
        new_background = (255, 255, 255, 255)

    newImage = Image.new(mode, (canvas_width, canvas_height), new_background)
    newImage.paste(im, (x1, y1, x1 + old_width, y1 + old_height))
    newImage.save(new_image_path)

resize_canvas()
You might consider a rather different approach to your image... build it out of tiles of a fixed size. That way, as you need to expand, you just add new image tiles. When you have completed all of your computation, you can determine the final size of the image, create a blank image of that size, and paste the tiles into it. That should reduce the amount of copying you're looking at for completing the task.
(You'd likely want to encapsulate such a tiled image into an object that hid the tiling aspects from the other layers of code, of course.)
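A minimal sketch of that tiled approach (the class, tile size, and method names are my own invention for illustration, not a library API):

```python
from PIL import Image

TILE = 256  # assumed fixed tile edge length

class TiledImage:
    """Grow an image by adding fixed-size tiles; compose once at the end."""

    def __init__(self, mode='RGB', background=(255, 255, 255)):
        self.mode, self.background = mode, background
        self.tiles = {}  # (col, row) -> tile Image

    def tile(self, col, row):
        # Create tiles lazily as drawing reaches them
        if (col, row) not in self.tiles:
            self.tiles[(col, row)] = Image.new(self.mode, (TILE, TILE),
                                               self.background)
        return self.tiles[(col, row)]

    def compose(self):
        # The final size is only known once all drawing is done
        cols = max(c for c, _ in self.tiles) + 1
        rows = max(r for _, r in self.tiles) + 1
        out = Image.new(self.mode, (cols * TILE, rows * TILE), self.background)
        for (c, r), t in self.tiles.items():
            out.paste(t, (c * TILE, r * TILE))
        return out
```

Drawing code would ask for `tile(col, row)` and draw onto it; only `compose()` pays the cost of one big allocation and one paste per tile, instead of reallocating the whole canvas on every expansion.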
This code will enlarge a smaller image, preserving aspect ratio, then center it on a standard sized canvas. Also preserves transparency, or defaults to gray background.
Tested with P mode PNG files.
The final.show() and break at the bottom are there for debugging; remove those lines and the hashtag on final.save(...) to loop over all files and save.
Could parameterize canvas ratio and improve flexibility, but it served my purpose.
"""
Resize ... and reconfigures. images in a specified directory
Use case: Images of varying size, need to be enlarged to exaxtly 1200 x 1200
"""
import os
import glob
from PIL import Image
# Source directory plus Glob file reference (Windows)
source_path = os.path.join('C:', os.sep, 'path', 'to', 'source', '*.png')
# List of UNC Image File paths
images = glob.glob(source_path)
# Destination directory of modified image (Windows)
destination_path = os.path.join('C:', os.sep, 'path', 'to', 'destination')
for image in images:
original = Image.open(image)
# Retain original attributes (ancillary chunks)
info = original.info
# Retain original mode
mode = original.mode
# Retain original palette
if original.palette is not None:
palette = original.palette.getdata()[1]
else:
palette = False
# Match original aspect ratio
dimensions = original.getbbox()
# Identify destination image background color
if 'transparency' in info.keys():
background = original.info['transparency']
else:
# Image does not have transparency set
print(image)
background = (64)
# Get base filename and extension for destination
filename, extension = os.path.basename(image).split('.')
# Calculate matched aspect ratio
if dimensions[2] > dimensions[3]:
width = int(1200)
modifier = width / dimensions[2]
length = int(dimensions[3] * modifier)
elif dimensions[3] > dimensions[2]:
length = int(1200)
modifier = length / dimensions[3]
width = int(dimensions[2] * modifier)
else:
width, length = (1200, 1200)
size = (width, length)
# Set desired final image size
canvas = (1200, 1200)
# Calculate center position
position = (
int((1200 - width)/2),
int((1200 - length)/2),
int((1200 - width)/2) + width,
int((1200 - length)/2) + length
)
# Enlarge original image proportionally
resized = original.resize(size, Image.LANCZOS)
# Then create sized canvas
final = Image.new(mode, canvas, background)
# Replicate original properties
final.info = info
# Replicate original palatte
if palette:
final.putpalette(palette)
# Cemter paste resized image to final canvas
final.paste(resized, position)
# Save final image to destination directory
final.show()
#final.save("{}\\{}.{}".format(destination_path, filename, extension))
break