I'm using PIL to convert a transparent PNG image uploaded with Django to a JPG file. The output looks broken.
Source file
Code
Image.open(object.logo.path).save('/tmp/output.jpg', 'JPEG')
or
Image.open(object.logo.path).convert('RGB').save('/tmp/output.png')
Result
Both ways, the resulting image looks like this:
Is there a way to fix this? I'd like to have white background where the transparent background used to be.
Solution
Thanks to the great answers, I've come up with the following function collection:
import Image
import numpy as np
def alpha_to_color(image, color=(255, 255, 255)):
"""Set all fully transparent pixels of an RGBA image to the specified color.
This is a very simple solution that might leave some ugly edges behind, due
to semi-transparent areas. You should use alpha_composite_with_color instead.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
x = np.array(image)
r, g, b, a = np.rollaxis(x, axis=-1)
r[a == 0] = color[0]
g[a == 0] = color[1]
b[a == 0] = color[2]
x = np.dstack([r, g, b, a])
return Image.fromarray(x, 'RGBA')
def alpha_composite(front, back):
"""Alpha composite two RGBA images.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
front -- PIL RGBA Image object
back -- PIL RGBA Image object
"""
front = np.asarray(front)
back = np.asarray(back)
result = np.empty(front.shape, dtype='float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
falpha = front[alpha] / 255.0
balpha = back[alpha] / 255.0
result[alpha] = falpha + balpha * (1 - falpha)
old_setting = np.seterr(invalid='ignore')
result[rgb] = (front[rgb] * falpha + back[rgb] * balpha * (1 - falpha)) / result[alpha]
np.seterr(**old_setting)
result[alpha] *= 255
np.clip(result, 0, 255, out=result)  # clip in place; the return value would otherwise be discarded
# astype('uint8') maps np.nan and np.inf to 0
result = result.astype('uint8')
result = Image.fromarray(result, 'RGBA')
return result
def alpha_composite_with_color(image, color=(255, 255, 255)):
"""Alpha composite an RGBA image with a single color image of the
specified color and the same size as the original image.
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
back = Image.new('RGBA', size=image.size, color=color + (255,))
return alpha_composite(image, back)
def pure_pil_alpha_to_color_v1(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
NOTE: This version is much slower than the
alpha_composite_with_color solution. Use it only if
numpy is not available.
Source: http://stackoverflow.com/a/9168169/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
def blend_value(back, front, a):
return (front * a + back * (255 - a)) // 255  # integer division keeps the channel values as ints
def blend_rgba(back, front):
result = [blend_value(back[i], front[i], front[3]) for i in (0, 1, 2)]
return tuple(result + [255])
im = image.copy() # don't edit the reference directly
p = im.load() # load pixel array
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x, y] = blend_rgba(color + (255,), p[x, y])
return im
def pure_pil_alpha_to_color_v2(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
Simpler, faster version than the solutions above.
Source: http://stackoverflow.com/a/9459208/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
image.load() # needed for split()
background = Image.new('RGB', image.size, color)
background.paste(image, mask=image.split()[3]) # 3 is the alpha channel
return background
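For completeness, here is a minimal usage sketch of the helpers above (assuming they are importable from the same module; object.logo.path is the uploaded PNG from the question, and the modern Pillow import form is used):
from PIL import Image  # on old PIL this would simply be `import Image`

png = Image.open(object.logo.path).convert('RGBA')
flattened = alpha_composite_with_color(png)           # RGBA image composited onto white
flattened.convert('RGB').save('/tmp/output.jpg', 'JPEG')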
Performance
The simple non-compositing alpha_to_color function is the fastest solution, but leaves behind ugly borders because it does not handle semi-transparent areas.
Both the pure PIL and the numpy compositing solutions give great results, but alpha_composite_with_color is much faster (8.93 msec) than pure_pil_alpha_to_color (79.6 msec). If numpy is available on your system, that's the way to go. (Update: The new pure PIL version is the fastest of all mentioned solutions.)
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_to_color(i)"
10 loops, best of 3: 4.67 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_composite_with_color(i)"
10 loops, best of 3: 8.93 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color(i)"
10 loops, best of 3: 79.6 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color_v2(i)"
10 loops, best of 3: 1.1 msec per loop
Here's a much simpler version; I'm not sure how performant it is. It's heavily based on a Django snippet I found while building RGBA -> JPG + background support for sorl-thumbnail.
from PIL import Image
png = Image.open(object.logo.path)
png.load() # required for png.split()
background = Image.new("RGB", png.size, (255, 255, 255))
background.paste(png, mask=png.split()[3]) # 3 is the alpha channel
background.save('foo.jpg', 'JPEG', quality=80)
Result at quality=80
Result at quality=50
By using Image.alpha_composite, the solution by Yuji 'Tomita' Tomita becomes simpler. This code also avoids a 'tuple index out of range' error if the PNG has no alpha channel.
from PIL import Image
png = Image.open(img_path).convert('RGBA')
background = Image.new('RGBA', png.size, (255, 255, 255))
alpha_composite = Image.alpha_composite(background, png)
alpha_composite.convert('RGB').save('foo.jpg', 'JPEG', quality=80)  # JPEG cannot store alpha, so convert to RGB first
The transparent parts mostly have the RGBA value (0, 0, 0, 0). Since JPEG has no transparency, the JPEG value ends up as (0, 0, 0), which is black.
Around the circular icon, there are pixels with nonzero RGB values where A = 0. So they look transparent in the PNG, but funny-colored in the JPG.
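You can verify this with a couple of lines of numpy (a quick sketch; 'logo.png' stands in for the uploaded file):
import numpy as np
from PIL import Image

arr = np.array(Image.open('logo.png').convert('RGBA'))
fully_transparent = arr[..., 3] == 0
# count the invisible pixels and check whether any of them carry non-zero RGB values
print(fully_transparent.sum(), (arr[fully_transparent][:, :3] != 0).any())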
You can set all pixels where A == 0 to have R = G = B = 255 using numpy like this:
import Image
import numpy as np
FNAME = 'logo.png'
img = Image.open(FNAME).convert('RGBA')
x = np.array(img)
r, g, b, a = np.rollaxis(x, axis = -1)
r[a == 0] = 255
g[a == 0] = 255
b[a == 0] = 255
x = np.dstack([r, g, b, a])
img = Image.fromarray(x, 'RGBA')
img.convert('RGB').save('/tmp/out.jpg')  # JPEG cannot store the alpha channel
Note that the logo also has some semi-transparent pixels used to smooth the edges around the words and icon. Saving to JPEG ignores the semi-transparency, making the resulting JPEG look quite jagged.
A better-quality result can be obtained with ImageMagick's convert command:
convert logo.png -background white -flatten /tmp/out.jpg
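If you'd rather call ImageMagick from Python than from the shell, a minimal sketch with subprocess (assuming the convert binary is on your PATH):
import subprocess

subprocess.check_call(
    ['convert', 'logo.png', '-background', 'white', '-flatten', '/tmp/out.jpg'])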
To make a nicer quality blend using numpy, you could use alpha compositing:
import Image
import numpy as np
def alpha_composite(src, dst):
'''
Return the alpha composite of src and dst.
Parameters:
src -- PIL RGBA Image object
dst -- PIL RGBA Image object
The algorithm comes from http://en.wikipedia.org/wiki/Alpha_compositing
'''
# http://stackoverflow.com/a/3375291/190597
# http://stackoverflow.com/a/9166671/190597
src = np.asarray(src)
dst = np.asarray(dst)
out = np.empty(src.shape, dtype = 'float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
src_a = src[alpha]/255.0
dst_a = dst[alpha]/255.0
out[alpha] = src_a+dst_a*(1-src_a)
old_setting = np.seterr(invalid = 'ignore')
out[rgb] = (src[rgb]*src_a + dst[rgb]*dst_a*(1-src_a))/out[alpha]
np.seterr(**old_setting)
out[alpha] *= 255
np.clip(out, 0, 255, out=out)  # clip in place; the return value would otherwise be discarded
# astype('uint8') maps np.nan (and np.inf) to 0
out = out.astype('uint8')
out = Image.fromarray(out, 'RGBA')
return out
FNAME = 'logo.png'
img = Image.open(FNAME).convert('RGBA')
white = Image.new('RGBA', size = img.size, color = (255, 255, 255, 255))
img = alpha_composite(img, white)
img.convert('RGB').save('/tmp/out.jpg')  # JPEG cannot store the alpha channel
Here's a solution in pure PIL.
def blend_value(under, over, a):
return (over*a + under*(255-a)) // 255  # integer division keeps the channel values as ints
def blend_rgba(under, over):
return tuple([blend_value(under[i], over[i], over[3]) for i in (0,1,2)] + [255])
white = (255, 255, 255, 255)
im = Image.open(object.logo.path)
p = im.load()
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x,y] = blend_rgba(white, p[x,y])
im.save('/tmp/output.png')
It's not broken. It's doing exactly what you told it to; those pixels are black with full transparency. You will need to iterate across all pixels and convert ones with full transparency to white.
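For example, a minimal pure-PIL sketch of that idea; it only touches fully transparent pixels, so semi-transparent edges are left as they are ('logo.png' stands in for the uploaded file):
from PIL import Image

img = Image.open('logo.png').convert('RGBA')
pixels = img.load()
for y in range(img.size[1]):
    for x in range(img.size[0]):
        r, g, b, a = pixels[x, y]
        if a == 0:  # fully transparent: replace with opaque white
            pixels[x, y] = (255, 255, 255, 255)
img.convert('RGB').save('/tmp/out.jpg', 'JPEG')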
import numpy as np
from PIL import Image

def convert_image(image_file):
    image = Image.open(image_file)  # this could be a 4-channel (RGBA) PNG
    original_width, original_height = image.size
    np_image = np.array(image)
    # create a 3-channel array and copy only the first 3 channels (the alpha channel is dropped)
    new_image = np.zeros((np_image.shape[0], np_image.shape[1], 3), dtype=np.uint8)
    for each_channel in range(3):
        new_image[:, :, each_channel] = np_image[:, :, each_channel]
    # release the intermediate array
    np_image = []
    return new_image
from PIL import Image
import numpy as np

def fig2img(fig):
    """
    @brief Convert a Matplotlib figure to a PIL Image in RGBA format and return it
    @param fig a matplotlib figure
    @return a Python Imaging Library (PIL) image
    """
    # put the figure pixmap into a numpy array
    buf = fig2data(fig)
    h, w, d = buf.shape
    return Image.frombytes("RGBA", (w, h), buf.tobytes())

def fig2data(fig):
    """
    @brief Convert a Matplotlib figure to a numpy array with RGBA channels and return it
    @param fig a matplotlib figure
    @return a numpy 3D array of shape (height, width, 4) with RGBA values
    """
    # draw the renderer
    fig.canvas.draw()
    # Get the ARGB buffer from the figure
    w, h = fig.canvas.get_width_height()
    buf = np.frombuffer(fig.canvas.tostring_argb(), dtype=np.uint8).copy()
    buf.shape = (h, w, 4)
    # canvas.tostring_argb gives the pixmap in ARGB order.
    # Roll the alpha channel to the end to get RGBA.
    buf = np.roll(buf, 3, axis=2)
    return buf
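A minimal usage sketch for the two helpers above (assuming a Matplotlib Agg-style backend, which provides tostring_argb):
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(2, 2))
plt.plot([0, 1, 2], [0, 1, 0])
rgba_img = fig2img(fig)                              # PIL Image in RGBA mode
rgba_img.convert('RGB').save('/tmp/fig.jpg', 'JPEG')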
def rgba2rgb(img, c=(0, 0, 0), path='foo.jpg', is_already_saved=False, if_load=True):
if not is_already_saved:
background = Image.new("RGB", img.size, c)
background.paste(img, mask=img.split()[3]) # 3 is the alpha channel
background.save(path, 'JPEG', quality=100)
is_already_saved = True
if if_load:
if is_already_saved:
im = Image.open(path)
return np.array(im)
else:
raise ValueError('No image to load.')
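A possible way to call it (a sketch; the file names are placeholders):
from PIL import Image

png = Image.open('logo.png').convert('RGBA')
rgb_array = rgba2rgb(png, c=(255, 255, 255), path='/tmp/logo_on_white.jpg')
print(rgb_array.shape)  # (height, width, 3)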
I am using two different ways to resize an image, but all three windows (the original and the two resized versions) look exactly the same...
What am I doing wrong that no scaling occurs?
import cv2 as cv
import numpy as np
path = "resources/Shapes.png"
img = cv.imread(path)
cv.imshow("img", img)
res1 = cv.resize(img, None, fx = 2, fy = 2, interpolation = cv.INTER_CUBIC)
cv.imshow("res1", res1)
height, width = img.shape[:2]
res2 = cv.resize(img, (2 * width, 2 * height), interpolation = cv.INTER_CUBIC)
cv.imshow("res2", res2)
k = cv.waitKey(0)
Just putting this here for future reference:
The code above works; the issue is that imshow does not always show the image at its true size. By saving the resized images, or simply comparing res1.shape with img.shape, you can see the true size of the image.
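For example, a quick way to check (a short sketch reusing the same file path as above):
import cv2 as cv

img = cv.imread("resources/Shapes.png")
res1 = cv.resize(img, None, fx=2, fy=2, interpolation=cv.INTER_CUBIC)
print(img.shape, res1.shape)      # the resized array really is twice as large
cv.imwrite("res1.png", res1)      # open the saved file to see it at its true size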
I am looking to display some images in OpenCV Python with titles and borders around each subplot, something like this (courtesy of the following Stack Overflow post: OpenCV (Python) video subplots):
WHAT I WANT:
But I only manage to get this with the adapted code below.
import cv2
im1 = cv2.imread('Lenna.png')
final_frame = cv2.hconcat((im1, im1))
cv2.imshow('lena', final_frame)
WHAT I HAVE
Is it possible to obtain this using OpenCV?
I know a workaround would be to put text on the images, but that's not what I want because it will cover important information that way.
UPDATE
My bad, I didn't specify this initially: I have 4 subplots (so 4 different images), not two like in the example. Also, I want the solution to be as fast as possible, since I am working with video (time restrictions).
I have a pretty quick and dirty solution. You can refine it to suit your needs. I have the explanation alongside the code as well:
import cv2
import numpy as np
img1 = cv2.imread('lena.jpg')
#--- Here I am creating the border---
black = [0,0,0] #---Color of the border---
constant=cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_CONSTANT,value=black )
cv2.imshow('constant',constant)
You can find many other options for different border types in the OpenCV documentation for copyMakeBorder.
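For instance, two of the other border modes look like this (a small sketch; the border sizes are arbitrary):
replicated = cv2.copyMakeBorder(img1, 10, 10, 10, 10, cv2.BORDER_REPLICATE)
reflected = cv2.copyMakeBorder(img1, 10, 10, 10, 10, cv2.BORDER_REFLECT)
cv2.imshow('replicate', replicated)
cv2.imshow('reflect', reflected)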
#--- Here I created a violet background to include the text ---
violet= np.zeros((100, constant.shape[1], 3), np.uint8)
violet[:] = (255, 0, 180)
#--- I then concatenated it vertically to the image with the border ---
vcat = cv2.vconcat((violet, constant))
cv2.imshow('vcat', vcat)
#--- Now I included some text ---
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(vcat,'FRAME',(30,50), font, 2,(0,0,0), 3, 0)
cv2.imshow('Text', vcat)
#--- I finally concatenated both the above images horizontally---
final_img = cv2.hconcat((vcat, vcat))
cv2.imshow('Final', final_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The general idea would be to create a new canvas that is slightly wider and taller than the input (width + width/20 and height + height/8 below), write some text as a heading, and place the input image below it:
import cv2
import numpy as np
img = cv2.imread("/Users/anmoluppal/Downloads/Lenna.png")
height, width, ch = img.shape
new_width, new_height = width + width // 20, height + height // 8
# Create a new canvas with the new width and height.
canvas = np.ones((new_height, new_width, ch), dtype=np.uint8) * 125
# Now place the original image on the canvas at the given offsets.
padding_top, padding_left = 60, 10
if padding_top + height < new_height and padding_left + width < new_width:
    canvas[padding_top:padding_top + height, padding_left:padding_left + width] = img
else:
    print("The given padding exceeds the limits.")
text1 = "Sample Image 1"
text2 = "Sample Image 2"
img1 = cv2.putText(canvas.copy(), text1, (int(0.25*width), 30), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0))
img2 = cv2.putText(canvas.copy(), text2, (int(0.25*width), 30), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 0))
final = cv2.hconcat((img1, img2))
cv2.imwrite("./debug.png", final)
I used the other answers to make a generalizable function which works for arbitrary row/columns:
def cvSubplot(imgs, # 2d np array of imgs (each img an np arrays of depth 1 or 3).
pad=10, # number of pixels to use for padding between images. must be even
titles=None, # (optional) np array of subplot titles
win_name='CV Subplot' # name of cv2 window
):
'''
Makes cv2 based subplots. Useful to plot image in actual pixel size
'''
rows, cols = imgs.shape
subplot_shapes = np.array([list(map(np.shape, x)) for x in imgs])
sp_height, sp_width, depth = np.max(np.max(subplot_shapes, axis=0), axis=0)
title_pad = 30
if titles is not None:
pad_top = pad + title_pad
else:
pad_top = pad
frame = np.zeros((rows*(sp_height+pad_top), cols*(sp_width+pad), depth ))
for r in range(rows):
for c in range(cols):
img = imgs[r, c]
h, w, _ = img.shape
y0 = r * (sp_height+pad_top) + pad_top//2
x0 = c * (sp_width+pad) + pad//2
frame[y0:y0+h, x0:x0+w, :] = img
if titles is not None:
frame = cv2.putText(frame, titles[r, c], (x0, y0-title_pad//4), cv2.FONT_HERSHEY_COMPLEX, .5, (255,255,255))
cv2.imshow(win_name, frame)
cv2.waitKey(0)
Below is an example usage:
import cv2
import numpy as np
a1 = np.random.random((40,400,1))
a2 = np.random.random((200,200,1))
a3 = np.random.random((100,100,1))
a4 = np.random.random((300,150,1))
a5 = np.random.random((100,150,1))
filler = np.zeros((0,0,1))
titles = np.array([['A', 'B', 'C'], ['D', 'E', 'Filler']])
imgs = np.array([[a1, a2, a3], [a4, a5, filler]])
cvSubplot(imgs, pad=20, titles=titles)
That script produces the following cv2 image:
What I want to achieve is to programmatically create a two-dimensional color ramp represented by a 256x256 matrix of color values. The expected result can be seen in the attached image. What I have as a starting point are the 4 corner colors of the matrix, from which the remaining 254 colors in between should be interpolated. While I had some success interpolating the colors along one axis, the two-dimensional calculation gives me some bad headaches. While the image seems to have a non-linear color gradient, I would be happy with a linear one.
If you could give me some hints on how to do this with numpy or other tools, I'll be more than thankful.
Here's a super short solution using the zoom function from scipy.ndimage. I define a 2x2 RGB image with the initial colors (here random ones) and simply zoom it to 256x256; order=1 makes the interpolation linear. Here is the code:
import numpy as np
import matplotlib.pyplot as plt
im=(np.random.rand(2,2,3)*255).astype(np.uint8)
from scipy.ndimage import zoom  # scipy.ndimage.interpolation is the deprecated location
zoomed=zoom(im,(128,128,1),order=1)
plt.subplot(121)
plt.imshow(im,interpolation='nearest')
plt.subplot(122)
plt.imshow(zoomed,interpolation='nearest')
plt.show()
Output:
Here are 3 ways to do this bilinear interpolation. The first version does all the arithmetic in pure Python, the second uses PIL image composition, the third uses Numpy to do the arithmetic. As expected, the pure Python is significantly slower than the other approaches. The Numpy version (which was derived from code written by Andras Deak) is almost as fast as the PIL version for small images, but for larger images the PIL version is noticeably faster.
I also tried using jadsq's scaling technique in PIL but the results were not good - I suspect that PIL's interpolation code is a little buggy.
If you wanted to create lots of these bilinear gradient images of the same size, the PIL technique has another advantage: once you've created the composition masks you don't need to rebuild them for every image (there's a short sketch of that after the script's output below).
#!/usr/bin/env python3
''' Simple bilinear interpolation
Written by PM 2Ring 2016.09.14
'''
from PIL import Image
from math import floor
import numpy as np
def color_square0(colors, size):
tl, tr, bl, br = colors
m = size - 1
r = range(size)
def interp_2D(tl, tr, bl, br, x, y):
u0, v0 = x / m, y / m
u1, v1 = 1 - u0, 1 - v0
return floor(0.5 + u1*v1*tl + u0*v1*tr + u1*v0*bl + u0*v0*br)
data = bytes(interp_2D(tl[i], tr[i], bl[i], br[i], x, y)
for y in r for x in r for i in (0, 1, 2))
return Image.frombytes('RGB', (size, size), data)
# Fastest
def color_square1(colors, size):
#Make an Image of each corner color
tl, tr, bl, br = [Image.new('RGB', (size, size), color=c) for c in colors]
#Make the composition mask
mask = Image.new('L', (size, size))
m = 255.0 / (size - 1)
mask.putdata([int(m * x) for x in range(size)] * size)
imgt = Image.composite(tr, tl, mask)
imgb = Image.composite(br, bl, mask)
return Image.composite(imgb, imgt, mask.transpose(Image.TRANSPOSE))
# This function was derived from code written by Andras Deak
def color_square2(colors, size):
tl, tr, bl, br = map(np.array, colors)
m = size - 1
x, y = np.mgrid[0:size, 0:size]
x = x[..., None] / m
y = y[..., None] / m
data = np.floor(x*y*br + (1-x)*y*tr + x*(1-y)*bl + (1-x)*(1-y)*tl + 0.5)
return Image.fromarray(np.array(data, dtype = 'uint8'), 'RGB')
color_square = color_square1
#tl = (255, 0, 0)
#tr = (255, 255, 0)
#bl = (0, 0, 255)
#br = (0, 255, 0)
tl = (108, 115, 111)
tr = (239, 239, 192)
bl = (124, 137, 129)
br = (192, 192, 175)
colors = (tl, tr, bl, br)
size = 256
img = color_square(colors, size)
img.show()
#img.save('test.png')
Output:
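As noted above, the composition masks only depend on the image size, so when generating many gradients of the same size you can build them once and reuse them. A small sketch of that idea (it reuses the PIL import from the script above; the colour sets are arbitrary):
size = 256
mask = Image.new('L', (size, size))
mask.putdata([int(255.0 / (size - 1) * x) for x in range(size)] * size)
maskt = mask.transpose(Image.TRANSPOSE)

def color_square_reusing_masks(colors):
    # same compositing as color_square1, but with the masks built only once
    tl, tr, bl, br = [Image.new('RGB', (size, size), color=c) for c in colors]
    imgt = Image.composite(tr, tl, mask)
    imgb = Image.composite(br, bl, mask)
    return Image.composite(imgb, imgt, maskt)

color_sets = [
    ((255, 0, 0), (255, 255, 0), (0, 0, 255), (0, 255, 0)),
    ((108, 115, 111), (239, 239, 192), (124, 137, 129), (192, 192, 175)),
]
for i, colors in enumerate(color_sets):
    color_square_reusing_masks(colors).save('gradient_%d.png' % i)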
Just for fun, here's a simple GUI program using Tkinter which can be used to generate these gradients.
#!/usr/bin/env python3
''' Simple bilinear colour interpolation
using PIL, in a Tkinter GUI
Inspired by https://stackoverflow.com/q/39485178/4014959
Written by PM 2Ring 2016.09.15
'''
import tkinter as tk
from tkinter.colorchooser import askcolor
from tkinter.filedialog import asksaveasfilename
from PIL import Image, ImageTk
DEFCOLOR = '#d9d9d9'
SIZE = 256
#Make the composition masks
mask = Image.new('L', (SIZE, SIZE))
m = 255.0 / (SIZE - 1)
mask.putdata([int(m * x) for x in range(SIZE)] * SIZE)
maskt = mask.transpose(Image.TRANSPOSE)
def do_gradient():
imgt = Image.composite(tr.img, tl.img, mask)
imgb = Image.composite(br.img, bl.img, mask)
img = Image.composite(imgb, imgt, maskt)
ilabel.img = img
photo = ImageTk.PhotoImage(img)
ilabel.config(image=photo)
ilabel.photo = photo
def set_color(w, c):
w.color = c
w.config(background=c, activebackground=c)
w.img = Image.new('RGB', (SIZE, SIZE), color=c)
def show_color(w):
c = w.color
newc = askcolor(c)[1]
if newc is not None and newc != c:
set_color(w, newc)
do_gradient()
def color_button(row, column, initcolor=DEFCOLOR):
b = tk.Button(root)
b.config(command=lambda w=b:show_color(w))
set_color(b, initcolor)
b.grid(row=row, column=column)
return b
def save_image():
filetypes = [('All files', '.*'), ('PNG files', '.png')]
fname = asksaveasfilename(title="Save Image",filetypes=filetypes)
if fname:
ilabel.img.save(fname)
print('Saved image as %r' % fname)
else:
print('Cancelled')
root = tk.Tk()
root.title("Color interpolation")
coords = ((0, 0), (0, 2), (2, 0), (2, 2))
tl, tr, bl, br = [color_button(r, c) for r,c in coords]
ilabel = tk.Label(root, relief=tk.SUNKEN)
do_gradient()
ilabel.grid(row=1, column=1)
b = tk.Button(root, text="Save", command=save_image)
b.grid(row=3, column=1)
root.mainloop()
Here's a very short way to do it with ImageMagick which is installed on most Linux distros and is available for OSX and Windows. There are also Python bindings. Anyway, just at the command line, create a 2x2 square with the colours at the 4 corners of your image, then let ImageMagick expand and interpolate up to the full size:
convert \( xc:"#59605c" xc:"#ebedb3" +append \) \
\( xc:"#69766d" xc:"#b3b3a0" +append \) \
-append -resize 256x256 result.png
The first line makes a 1x1 pixel of each of your top-left and top-right corners and appends the two side by side. The second line makes a 1x1 pixel of each of your bottom-left and bottom-right corners and appends them side by side. The final line appends the bottom row below the top row and enlarges by interpolation to 256x256.
If you want to better understand what's going on, here is the same basic image but scaled up using nearest neighbour rather than interpolation:
convert \( xc:"#59605c" xc:"#ebedb3" +append \) \
\( xc:"#69766d" xc:"#b3b3a0" +append \) \
-append -scale 20x20 result.png
I'm looking for the most efficient way to 'square' an image for use as an icon. For example, I've got a .png file whose dimensions are 24x20. I don't want to change the 'image' part of the image in any way; I just want to add transparent pixels to the edge of the image so it becomes 24x24. My research suggests that I need to create a transparent 24x24 canvas, paste my image onto it, then save the result. I'm working in wxPython and was wondering if anyone could guide me through the process. Better yet, I also have PIL installed, and was wondering if there isn't a built-in way of doing this. It seems like the kind of operation that would be carried out fairly regularly, but none of the ImageOps methods quite fit the bill.
Use image.paste to paste the image on a transparent background:
import Image
FNAME = '/tmp/test.png'
top = Image.open(FNAME).convert('RGBA')
new_w = new_h = max(top.size)
background = Image.new('RGBA', size = (new_w,new_h), color = (0, 0, 0, 0))
background.paste(top, (0, 0))
background.save('/tmp/result.png')
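If you would rather have the 24x20 image centred in the square canvas instead of stuck to the top-left corner, compute the paste offset first (a small variation on the snippet above, with the same placeholder file names):
from PIL import Image

top = Image.open('/tmp/test.png').convert('RGBA')
new_w = new_h = max(top.size)
background = Image.new('RGBA', size=(new_w, new_h), color=(0, 0, 0, 0))
# centre the original image inside the square canvas
offset = ((new_w - top.size[0]) // 2, (new_h - top.size[1]) // 2)
background.paste(top, offset)
background.save('/tmp/result_centered.png')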
You could do it with a numpy array pretty easily, something like this:
import matplotlib.pyplot as plt
import numpy as np
im1 = plt.imread('your_im.png')
im0 = np.zeros((24, 24, 4), dtype=im1.dtype)
im0[2:-2,:,:] = im1
plt.imsave('your_new_im.png', im0)
Here's a pure wxPython implementation.
import wx
app = wx.PySimpleApp()
# load input bitmap
bitmap = wx.Bitmap('input.png')
# compute dimensions
width, height = bitmap.GetSize()
size = max(width, height)
dx, dy = (size - width) / 2, (size - height) / 2
# create output bitmap
new_bitmap = wx.EmptyBitmap(size, size)
dc = wx.MemoryDC(new_bitmap)
dc.SetBackground(wx.Brush(wx.Colour(255, 0, 255)))
dc.Clear()
dc.DrawBitmap(bitmap, dx, dy)
del dc
# save output
image = wx.ImageFromBitmap(new_bitmap)
image.SetMaskColour(255, 0, 255)
image.SaveFile('output.png', wx.BITMAP_TYPE_PNG)