I'm using PIL to convert a transparent PNG image uploaded with Django to a JPG file. The output looks broken.
Source file
Code
Image.open(object.logo.path).save('/tmp/output.jpg', 'JPEG')
or
Image.open(object.logo.path).convert('RGB').save('/tmp/output.png')
Result
Both ways, the resulting image looks like this:
Is there a way to fix this? I'd like to have white background where the transparent background used to be.
Solution
Thanks to the great answers, I've come up with the following function collection:
from PIL import Image
import numpy as np
def alpha_to_color(image, color=(255, 255, 255)):
"""Set all fully transparent pixels of an RGBA image to the specified color.
This is a very simple solution that might leave over some ugly edges, due
to semi-transparent areas. You should use alpha_composite_with_color instead.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
x = np.array(image)
r, g, b, a = np.rollaxis(x, axis=-1)
r[a == 0] = color[0]
g[a == 0] = color[1]
b[a == 0] = color[2]
x = np.dstack([r, g, b, a])
return Image.fromarray(x, 'RGBA')
def alpha_composite(front, back):
"""Alpha composite two RGBA images.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
front -- PIL RGBA Image object
back -- PIL RGBA Image object
"""
front = np.asarray(front)
back = np.asarray(back)
result = np.empty(front.shape, dtype='float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
falpha = front[alpha] / 255.0
balpha = back[alpha] / 255.0
result[alpha] = falpha + balpha * (1 - falpha)
old_setting = np.seterr(invalid='ignore')
result[rgb] = (front[rgb] * falpha + back[rgb] * balpha * (1 - falpha)) / result[alpha]
np.seterr(**old_setting)
result[alpha] *= 255
    np.clip(result, 0, 255, out=result)  # clip in place
# astype('uint8') maps np.nan and np.inf to 0
result = result.astype('uint8')
result = Image.fromarray(result, 'RGBA')
return result
def alpha_composite_with_color(image, color=(255, 255, 255)):
"""Alpha composite an RGBA image with a single color image of the
specified color and the same size as the original image.
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
back = Image.new('RGBA', size=image.size, color=color + (255,))
return alpha_composite(image, back)
def pure_pil_alpha_to_color_v1(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
NOTE: This version is much slower than the
alpha_composite_with_color solution. Use it only if
numpy is not available.
Source: http://stackoverflow.com/a/9168169/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
def blend_value(back, front, a):
        return (front * a + back * (255 - a)) // 255  # integer division keeps the channel values as ints
def blend_rgba(back, front):
result = [blend_value(back[i], front[i], front[3]) for i in (0, 1, 2)]
return tuple(result + [255])
im = image.copy() # don't edit the reference directly
p = im.load() # load pixel array
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x, y] = blend_rgba(color + (255,), p[x, y])
return im
def pure_pil_alpha_to_color_v2(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
Simpler, faster version than the solutions above.
Source: http://stackoverflow.com/a/9459208/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
image.load() # needed for split()
background = Image.new('RGB', image.size, color)
background.paste(image, mask=image.split()[3]) # 3 is the alpha channel
return background
Performance
The simple non-compositing alpha_to_color function is the fastest solution, but it leaves behind ugly borders because it does not handle semi-transparent areas.
Both the pure PIL and the numpy compositing solutions give great results, but alpha_composite_with_color is much faster (8.93 msec) than pure_pil_alpha_to_color_v1 (79.6 msec). If numpy is available on your system, that's the way to go. (Update: the new pure PIL version, pure_pil_alpha_to_color_v2, is the fastest of all the solutions mentioned, at 1.1 msec.)
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_to_color(i)"
10 loops, best of 3: 4.67 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_composite_with_color(i)"
10 loops, best of 3: 8.93 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color(i)"
10 loops, best of 3: 79.6 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color_v2(i)"
10 loops, best of 3: 1.1 msec per loop
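For reference, here is a minimal usage sketch (assuming the functions above are defined in the same module and that logo.png is the uploaded file): composite onto white with the fastest variant, then save as JPEG.
from PIL import Image
im = Image.open('logo.png').convert('RGBA')  # force RGBA so the alpha channel is present
rgb = pure_pil_alpha_to_color_v2(im)         # composite onto white; returns an RGB image
rgb.save('/tmp/output.jpg', 'JPEG', quality=80)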
Here's a version that's much simpler - not sure how performant it is. Heavily based on some django snippet I found while building RGBA -> JPG + BG support for sorl thumbnails.
from PIL import Image
png = Image.open(object.logo.path)
png.load() # required for png.split()
background = Image.new("RGB", png.size, (255, 255, 255))
background.paste(png, mask=png.split()[3]) # 3 is the alpha channel
background.save('foo.jpg', 'JPEG', quality=80)
Result at 80% JPEG quality:
Result at 50% JPEG quality:
By using Image.alpha_composite, the solution by Yuji 'Tomita' Tomita becomes simpler. This code can also avoid a "tuple index out of range" error if the PNG has no alpha channel.
from PIL import Image
png = Image.open(img_path).convert('RGBA')
background = Image.new('RGBA', png.size, (255, 255, 255))
alpha_composite = Image.alpha_composite(background, png)
alpha_composite.convert('RGB').save('foo.jpg', 'JPEG', quality=80)  # JPEG cannot store alpha, so convert to RGB before saving
The transparent parts mostly have the RGBA value (0, 0, 0, 0). Since JPEG has no transparency, those pixels end up as (0, 0, 0), which is black.
Around the circular icon, there are pixels with nonzero RGB values where A = 0. So they look transparent in the PNG, but funny-colored in the JPG.
You can set all pixels where A == 0 to have R = G = B = 255 using numpy like this:
from PIL import Image
import numpy as np
FNAME = 'logo.png'
img = Image.open(FNAME).convert('RGBA')
x = np.array(img)
r, g, b, a = np.rollaxis(x, axis = -1)
r[a == 0] = 255
g[a == 0] = 255
b[a == 0] = 255
x = np.dstack([r, g, b, a])
img = Image.fromarray(x, 'RGBA')
img.convert('RGB').save('/tmp/out.jpg')  # drop the alpha channel before writing JPEG
Note that the logo also has some semi-transparent pixels used to smooth the edges around the words and icon. Saving to jpeg ignores the semi-transparency, making the resultant jpeg look quite jagged.
A better quality result could be made using imagemagick's convert command:
convert logo.png -background white -flatten /tmp/out.jpg
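If you would rather call ImageMagick from Python, here is a minimal sketch using the standard library (this assumes the convert binary is on your PATH):
import subprocess
subprocess.run(['convert', 'logo.png', '-background', 'white', '-flatten', '/tmp/out.jpg'], check=True)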
To make a nicer quality blend using numpy, you could use alpha compositing:
from PIL import Image
import numpy as np
def alpha_composite(src, dst):
'''
Return the alpha composite of src and dst.
Parameters:
src -- PIL RGBA Image object
dst -- PIL RGBA Image object
The algorithm comes from http://en.wikipedia.org/wiki/Alpha_compositing
'''
# http://stackoverflow.com/a/3375291/190597
# http://stackoverflow.com/a/9166671/190597
src = np.asarray(src)
dst = np.asarray(dst)
out = np.empty(src.shape, dtype = 'float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
src_a = src[alpha]/255.0
dst_a = dst[alpha]/255.0
out[alpha] = src_a+dst_a*(1-src_a)
old_setting = np.seterr(invalid = 'ignore')
out[rgb] = (src[rgb]*src_a + dst[rgb]*dst_a*(1-src_a))/out[alpha]
np.seterr(**old_setting)
out[alpha] *= 255
    np.clip(out, 0, 255, out=out)  # clip in place
# astype('uint8') maps np.nan (and np.inf) to 0
out = out.astype('uint8')
out = Image.fromarray(out, 'RGBA')
return out
FNAME = 'logo.png'
img = Image.open(FNAME).convert('RGBA')
white = Image.new('RGBA', size = img.size, color = (255, 255, 255, 255))
img = alpha_composite(img, white)
img.convert('RGB').save('/tmp/out.jpg')  # drop the alpha channel before writing JPEG
Here's a solution in pure PIL.
def blend_value(under, over, a):
    return (over * a + under * (255 - a)) // 255  # integer division so putpixel gets ints
def blend_rgba(under, over):
return tuple([blend_value(under[i], over[i], over[3]) for i in (0,1,2)] + [255])
white = (255, 255, 255, 255)
im = Image.open(object.logo.path).convert('RGBA')
p = im.load()
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x,y] = blend_rgba(white, p[x,y])
im.save('/tmp/output.png')
It's not broken. It's doing exactly what you told it to; those pixels are black with full transparency. You will need to iterate across all pixels and convert ones with full transparency to white.
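A minimal sketch of that idea in plain PIL (assuming the upload is an RGBA PNG; the paths are placeholders):
from PIL import Image
im = Image.open('logo.png').convert('RGBA')
# replace fully transparent pixels with opaque white, leave everything else alone
pixels = [(255, 255, 255, 255) if px[3] == 0 else px for px in im.getdata()]
im.putdata(pixels)
im.convert('RGB').save('/tmp/output.jpg', 'JPEG')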
import numpy as np
from PIL import Image
def convert_image(image_file):
image = Image.open(image_file) # this could be a 4D array PNG (RGBA)
original_width, original_height = image.size
np_image = np.array(image)
    new_image = np.zeros((np_image.shape[0], np_image.shape[1], 3), dtype=np.uint8)
    # create a 3-channel (RGB) array; uint8 so it can be displayed or saved as an image
for each_channel in range(3):
new_image[:,:,each_channel] = np_image[:,:,each_channel]
# only copy first 3 channels.
# flushing
np_image = []
return new_image
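A hypothetical usage sketch for the function above (the file names are placeholders; note that this simply drops the alpha channel, so formerly transparent areas come out black):
from PIL import Image
rgb_array = convert_image('logo.png')
Image.fromarray(rgb_array, 'RGB').save('/tmp/out.jpg')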
from PIL import Image
import numpy as np

def fig2img(fig):
    """
    Convert a Matplotlib figure to a PIL Image in RGBA format and return it.

    fig -- a matplotlib figure
    Returns a Python Imaging Library (PIL) image.
    """
    # put the figure pixmap into a numpy array
    buf = fig2data(fig)
    h, w, d = buf.shape
    return Image.frombytes("RGBA", (w, h), buf.tobytes())

def fig2data(fig):
    """
    Convert a Matplotlib figure to a numpy 3D array with RGBA channels and return it.

    fig -- a matplotlib figure
    Returns a numpy 3D array of RGBA values.
    """
    # draw the renderer
    fig.canvas.draw()
    # get the ARGB buffer from the figure and reshape it to (rows, cols, 4)
    w, h = fig.canvas.get_width_height()
    buf = np.frombuffer(fig.canvas.tostring_argb(), dtype=np.uint8).reshape(h, w, 4)
    # canvas.tostring_argb gives the pixmap in ARGB order; roll the alpha
    # channel to the end so the buffer is in RGBA order
    buf = np.roll(buf, 3, axis=2)
    return buf
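A small usage sketch (assuming matplotlib with the default Agg backend; the figure content is arbitrary):
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0])
img = fig2img(fig)                                  # PIL RGBA image of the figure
img.convert('RGB').save('/tmp/figure.jpg', 'JPEG')  # JPEG has no alpha, so convert first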
from PIL import Image
import numpy as np

def rgba2rgb(img, c=(0, 0, 0), path='foo.jpg', is_already_saved=False, if_load=True):
if not is_already_saved:
background = Image.new("RGB", img.size, c)
background.paste(img, mask=img.split()[3]) # 3 is the alpha channel
background.save(path, 'JPEG', quality=100)
is_already_saved = True
if if_load:
if is_already_saved:
im = Image.open(path)
return np.array(im)
else:
raise ValueError('No image to load.')
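A hypothetical usage sketch (the path and colour are just examples):
from PIL import Image
png = Image.open('logo.png').convert('RGBA')
arr = rgba2rgb(png, c=(255, 255, 255), path='/tmp/out.jpg')  # saves the JPEG and returns it as a numpy array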
Given a binary image, what is the fastest and most Pythonic way to convert the image to RGB and then modify its pixels?
I have these two ways, but neither feels good to me:
def get_mask(rgb_image_path):
mask = np.array(Image.open(rgb_image_path).convert('L'), dtype = np.float32) # Mask should be Grayscale so each value is either 0 or 255
    mask[mask == 255.0] = 1.0  # wherever there is 255, convert it to 1 (1 == 255 == white)
return mask
def approach1(mask):
mask = np.logical_not(mask)
mask = mask.astype(np.uint8)
mask = mask*255
red = mask.copy()
blue = mask.copy()
green = mask.copy()
red[red == 0] = 26
blue[blue == 0] = 237
green[green == 0] = 160
mask = np.stack([red, blue, green], axis = -1)
plt.imshow(mask)
def approach2(mask):
mask = np.logical_not(mask)
mask = mask.astype(np.uint8)
mask = np.stack([mask,mask,mask], axis = -1)
mask = mask*255
width,height, channels = mask.shape
for i in range(width):
for j in range(height):
if mask[i][j][0] == 0:
mask[i][j] = (26,237,160)
plt.imshow(mask)
Below is the Image
I suppose the most simple way is this:
def mask_coloring(mask):
expected_color = (26, 237, 160)
color_mask = np.zeros((mask.shape[0], mask.shape[1], 3), dtype=np.uint8)
color_mask[mask == 255.0, :] = expected_color
plt.imshow(color_mask)
By the way, similar approach can be found here.
The easiest way to do this is to take advantage of the fact that you have, and want, only two colours, and to use a fast, space-efficient palette image. That way you don't need to iterate over all the pixels or store 3 bytes per pixel:
from PIL import Image
# Open your image
im = Image.open('oqX2g.gif')
# Create a new palette where the 1st entry is black (0,0,0) and the 2nd is your colour (26,237,160) then pad out to 256 RGB entries
palette = [0,0,0] + [26,237,160] + [0,0,0] * 254
# Put palette into image and save
im.putpalette(palette)
im.save('result.png')
Read more about palette images here.
What if you want the black background to become magenta, and the foreground green? Oh, and you want it as an RGB Numpy array?
Just change the palette and convert:
from PIL import Image
import numpy as np
# Open your image, push in new palette
im = Image.open('oqX2g.gif')
palette = [255,0,255] + [0,255,0] + [0,0,0] * 254
im.putpalette(palette)
# Make RGB Numpy array
na = np.array(im.convert('RGB'))
I'm trying to make the black pixels in the background of an image transparent. For this, I wrote the function below. I then overlay the image (a car in the center surrounded by black pixels) on a copy of itself and move the first one slowly. I was expecting the first image to reveal the copy underneath without showing the rectangular boundary since that part should have been transparent. However, I don't get the desired effect. Does anyone know what I'm missing?
def makeImageTransparent(img):
img = img.convert("RGBA")
datas = img.getdata()
newData = []
ii = 0
for item in datas:
if item[0] == 0 and item[1] == 0 and item[2] == 0:
newData.append((0, 0, 0, 0))
ii = ii + 1
else:
newData.append(item)
    print(ii)
img.putdata(newData)
return img
I couldn't figure out how to make the transparency work, so I just created my own paste method that updated the bytes of the image I was pasting to directly and this got me the desired effect.
def pasteImage(img, bigim, posn):
pixdata = img.load()
width, height = img.size
mainpixdata = bigim.load()
    for y in range(height):
        for x in range(width):
if pixdata[x, y] != (0, 0, 0, 0):
mainpixdata[x+posn[0], y+posn[1]] = pixdata[x,y]
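For what it's worth, the built-in paste() can achieve the same effect if you pass the RGBA image itself as the mask argument, so its alpha channel is used for blending. A rough sketch (the file names and position are placeholders):
from PIL import Image
car = makeImageTransparent(Image.open('car.png'))
scene = Image.open('background.png').convert('RGBA')
scene.paste(car, (100, 50), car)  # third argument: use car's alpha channel as the paste mask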
I have no background in image-processing. I am interested in getting the difference between these two images.
After writing the following code :
from PIL import Image
from PIL import ImageChops
im1 = Image.open("1.png")
im2 = Image.open("2.png")
diff = ImageChops.difference(im2, im1)
diff.save("diff.png")
I get this output :-
I am looking for some customisations here :
1) I want to label the differences in the output in different colours: things that come from 1.png and things that come from 2.png should get different colours.
2) The background should be white.
3) I want my output to have axes and axis labels. Would that be possible somehow?
You probably can't do this with the high-level difference method, but it's quite easy if you compare the images pixel by pixel yourself. Quick attempt:
Code:
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
im1 = Image.open("im1.jpeg").convert('1') # binary image for pixel evaluation
rgb1 = Image.open("im1.jpeg").convert('RGB') # RGB image for border copy
p1 = im1.load()
prgb1 = rgb1.load()
im2 = Image.open("im2.jpeg").convert('1') # binary image for pixel evaluation
p2 = im2.load()
width = im1.size[0]
height = im1.size[1]
imd = Image.new("RGB", im1.size)
draw = ImageDraw.Draw(imd)
dest = imd.load()
fnt = ImageFont.truetype('/System/Library/Fonts/OpenSans-Regular.ttf', 20)
for i in range(0, width):
for j in range(0, height):
# border region: just copy pixels from RGB image 1
if j < 30 or j > 538 or i < 170 or i > 650:
dest[i,j] = prgb1[i,j]
# pixel is only set in im1, make red
elif p1[i,j] == 255 and p2[i,j] == 0:
dest[i,j] = (255,0,0)
# pixel is only set in im2, make blue
elif p1[i,j] == 0 and p2[i,j] == 255:
dest[i,j] = (0,0,255)
# unchanged pixel/background: make white
else:
dest[i,j] = (255,255,255)
draw.text((700, 50),"blue", "blue", font=fnt)
draw.text((700, 20),"red", "red", font=fnt)
imd.show()
imd.save("diff.png")
This assumes that the images are the same size and have identical axes.
What I want to do is crop out the white lines above a given Instagram screenshot. I tried doing that by finding the center of the image and going up, line by line, until I found the first line entirely white. Any idea why my code is not working?
from PIL import Image
image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
entireWhiteLine = set()
entireWhiteLine.add(im.getpixel((0, 0)))
terminateUpperCrop = 1
while terminateUpperCrop != 2 :
for i in range(centerLine, 1, -1) :
entireLine = set()
upperBorder = i - 1
for j in range(0, width, 1) :
entireLine.add((im.getpixel((i, j))))
if entireLine == im.getpixel((0,0)):
box = (0, upperBorder, width, height)
crop = im.crop((box))
crop.save('test2.png')
terminateUpperCrop = 2
Your getpixel() call actually has its coordinates the wrong way around, so in effect you were scanning for the left edge rather than the top. You could use the following approach instead: it counts the white pixels in each row, and if that count equals the image width, you know the whole row is white.
from PIL import Image
image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
white = (255, 255, 255)
for y in range(centerLine, 0, -1) :
    if len([1 for x in range(width) if im.getpixel((x, y)) == white]) == width:
box = (0, y, width, height)
crop = im.crop((box))
crop.save('test2.png')
break