Compositing two images with Python Wand

I need to use Python Wand (the ImageMagick bindings for Python) to create a composite image, but I'm having some trouble figuring out how to do anything other than simply copy-pasting the foreground image into the background image. I have two images, both JPEGs: a cat on a white background, and a room. I want to remove the white background of the cat and then paste it onto the room. Answers for other Python imaging modules, like PIL, are also fine; I just need something to automate the composition process. Thanks in advance.

You can achieve this using the Image.composite() method:
import urllib2
from wand.image import Image
from wand.display import display

fg_url = 'http://i.stack.imgur.com/Mz9y0.jpg'
bg_url = 'http://i.stack.imgur.com/TAcBA.jpg'

bg = urllib2.urlopen(bg_url)
with Image(file=bg) as bg_img:
    fg = urllib2.urlopen(fg_url)
    with Image(file=fg) as fg_img:
        bg_img.composite(fg_img, left=100, top=100)
    fg.close()
    display(bg_img)
bg.close()

For those that stumble across this in the future: what you probably want to do is change the white color in the cat image to transparent before doing the composition. This should be achievable using the transparent_color() method of Image, something like fg_img.transparent_color(wand.color.Color('#FFF')), probably also with a fuzz parameter.
See:
http://www.imagemagick.org/Usage/compose/
http://docs.wand-py.org/en/latest/wand/image.html
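As a rough sketch of how the two steps could fit together (the file names, the offset, and the fuzz value are assumptions you would tune for your own images):

from wand.image import Image
from wand.color import Color

# Sketch only: assumes local copies of the two JPEGs.
with Image(filename='room.jpg') as bg_img:
    with Image(filename='cat.jpg') as fg_img:
        # Make (near-)white pixels fully transparent; fuzz controls how far
        # a pixel may deviate from pure white and is given in quantum units.
        fg_img.transparent_color(Color('#FFF'), alpha=0.0,
                                 fuzz=int(fg_img.quantum_range * 0.1))
        # Paste the cut-out cat onto the room at the chosen offset.
        bg_img.composite(fg_img, left=100, top=100)
    bg_img.save(filename='composited.jpg')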

OpenCV imread transparency gone

I have an image (a captcha) that I download from the web.
When I load it into OpenCV it seems to lose its properties, or simply mixes the transparent background with the dark/black colors:
Currently the code does nothing but load it and write it out again:
captchaImg = cv2.imread('captcha1.png')
cv2.imwrite("captcha2.png", captchaImg)
I have also tried loading with options 0, 1, 2, and 3, but the result is the same.
Using the named imread flag constants might help. I do the equivalent of
captchaImg = cv2.imread('captcha1.png', cv2.IMREAD_UNCHANGED)
which reads the alpha channel (if there is one). In the REPL, cv2.IMREAD_UNCHANGED evaluates to -1.
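If you then want to flatten that alpha channel onto a white background entirely in OpenCV, a rough sketch (assuming the PNG really does carry a fourth channel; the file names are the question's) could look like this:

import cv2
import numpy as np

# Read with the alpha channel preserved (BGRA for a PNG with transparency).
img = cv2.imread('captcha1.png', cv2.IMREAD_UNCHANGED)

if img is not None and img.ndim == 3 and img.shape[2] == 4:
    bgr = img[:, :, :3].astype(np.float32)
    alpha = img[:, :, 3:4].astype(np.float32) / 255.0
    # Blend the colour channels over a white canvas using the alpha channel.
    white = np.full_like(bgr, 255.0)
    flattened = bgr * alpha + white * (1.0 - alpha)
    cv2.imwrite('captcha2.png', flattened.astype(np.uint8))
else:
    # No alpha channel: write the image back out unchanged.
    cv2.imwrite('captcha2.png', img)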
Well, this is a problem with OpenCV, and it does have an OpenCV solution, but it is somewhat involved, so I went ahead and used another library (PIL) that I was going to use anyway.
Basically, you put a white image behind the transparent one, and that solves the problem.
The code is the following:
from PIL import Image

image = Image.open("captcha1.png")
image = image.convert("RGBA")  # convert() returns a new image; keep the result
canvas = Image.new('RGBA', image.size, (255, 255, 255, 255))  # empty canvas colour (r, g, b, a)
canvas.paste(image, mask=image)  # paste the image onto the canvas, using its alpha channel as mask
canvas.save("captcha1.png", format="PNG")
I hope it helps someone with the same problem.

How do I create an animated gif in Python using Wand?

The instructions are simple enough in the Wand docs for reading a sequenced image (e.g. animated gif, icon file, etc.):
>>> from wand.image import Image
>>> with Image(filename='sequence-animation.gif') as image:
... len(image.sequence)
...but I'm not sure how to create one.
In Ruby this is easy using RMagick, since you have ImageLists. (see my gist for an example.)
I tried creating an Image (as the "container") and instantiating each SingleImage with an image path, but I'm pretty sure that's wrong, especially since the constructor documentation for SingleImage doesn't look like it's intended for use by the end user.
I also tried creating a wand.sequence.Sequence and going from that angle, but hit a dead-end as well. I feel very lost.
The best examples are located in the unit tests shipped with the code, e.g. wand/tests/sequence_test.py.
To create an animated GIF with Wand, load each frame into the sequence, and then set the per-frame delay and the optimize handling after all frames are loaded.
from wand.image import Image

with Image() as wand:
    # Add new frames into the sequence
    with Image(filename='1.png') as one:
        wand.sequence.append(one)
    with Image(filename='2.png') as two:
        wand.sequence.append(two)
    with Image(filename='3.png') as three:
        wand.sequence.append(three)
    # Create a progressively longer delay for each frame
    for cursor in range(3):
        with wand.sequence[cursor] as frame:
            frame.delay = 10 * (cursor + 1)
    # Set the layer type
    wand.type = 'optimize'
    wand.save(filename='animated.gif')
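A slightly more generic variant of the same idea, assuming the frame file names live in a list and a uniform delay is wanted (frame.delay is in ticks, 1/100 of a second by default for GIFs):

from wand.image import Image

frame_files = ['frame0.png', 'frame1.png', 'frame2.png']  # assumed names

with Image() as animation:
    for path in frame_files:
        with Image(filename=path) as frame:
            animation.sequence.append(frame)
    # Give every frame the same delay; the with-block syncs the change back.
    for index in range(len(animation.sequence)):
        with animation.sequence[index] as frame:
            frame.delay = 20
    animation.type = 'optimize'
    animation.save(filename='animated.gif')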

Overlaying coloured pixels on top of an image using Python Imaging Library

Okay, first things first: this is a near duplicate of this question.
However, the issue I am facing is slightly different in a critical way.
In my application, I read a generic file name, load that image, and display it. Where it gets tricky is that I have to overlay the appearance of it being 'highlighted'. To do this, I was using the Image.blend() function, blending it with a plain yellow image.
However, when using blend, I kept fighting the error that the two images are not compatible for blending. To get around it, I opened the sample image I had in Paint, pasted yellow over the whole thing, and saved it as a copy.
It just occurred to me that this will fail when a different type of image is read in by file name. Remember, this needs to be generic.
So my question is: Instead of making a copy of the image manually, can I get python to generate one by copying the image and modifying it so it is solid yellow? Note: I do not need to save it after, so just making it happen is enough.
Unfortunately, I am not allowed to share my code, but hopefully the following will give an idea of what I need:
from PIL import Image
desiredWidth = 800
desiredHeight = 600
primaryImage = Image.open("first.jpg").resize((desiredWidth, desiredHeight), Image.ANTIALIAS)
# This is the thing I need fixed:
highlightImage = Image.open("highlight.jpg").resize((desiredWidth, desiredHeight), Image.ANTIALIAS)
toDisplay = Image.blend(primaryImage, highlightImage, 0.3)
Thanks to anyone who can help.
Sounds like you want to make a new image:
fill_color = (255, 255, 0)  # define the colour as an (R, G, B) tuple
highlightImage = Image.new(primaryImage.mode,  # same mode as the primary
                           primaryImage.size,  # same size as the primary
                           fill_color)          # and the colour defined above
This creates a new image with the same mode and size as the already-opened image, but filled with a solid colour. Cheers.
Also, if you are using Pillow instead of the original PIL, you can even get the colour by name:
from PIL.ImageColor import getcolor
overlay = 'yellow'
fill_color = getcolor(overlay, primaryImage.mode)
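Putting the pieces together with the blend call from the question, a minimal sketch (the file name, target size, and 0.3 blend factor are simply carried over from the question) might look like:

from PIL import Image
from PIL.ImageColor import getcolor

desiredWidth, desiredHeight = 800, 600

# Open the generic image and normalise its size, as in the question.
primaryImage = Image.open("first.jpg").resize(
    (desiredWidth, desiredHeight), Image.ANTIALIAS)

# Build the solid-yellow highlight in the same mode and size instead of
# loading a hand-made "highlight.jpg".
fill_color = getcolor('yellow', primaryImage.mode)
highlightImage = Image.new(primaryImage.mode, primaryImage.size, fill_color)

toDisplay = Image.blend(primaryImage, highlightImage, 0.3)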

PyGame image save colors distorted

So I have implemented the following screenshot functionality in my game, just to log progress and things like that. This is my code:
pygame.image.save(screen, save_file)
Pretty basic. I recently upgraded to python 3.3 and have since been having the issue of distorted colors using this function. Here is what I mean:
Distorted Color:
So it looks quite nice, but it isn't how it's supposed to be. This is the actual image:
Is this a known issue, or is it just me? Are there any fixes for it, or is it just a broken function at the moment? I am using pygame 1.9.2pre, and I am assuming it is just a bug with the pre-release, but I was having issues using any other version of pygame with Python 3.3.
Some users have reported difficulty with saving images as PNGs:
I only get .tga files even when I specify .png. Very frustrating.
If you use .PNG (uppercase), it will result in an invalid file (at least on my win32). Use .png (lowercase) instead.
PNG does not seem to work; I am able to get a preview of it in Thunar, but everywhere else it says that it is not a valid PNG.
Saving in a different format may be helpful. For example, BMP is a simple format, so it's unlikely that Pygame's implementation will be buggy.
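A minimal sketch of that workaround (pygame chooses the output format from the file extension; the file name here is just a placeholder):

import pygame

# Save the current display surface as BMP instead of PNG.
screen = pygame.display.get_surface()
pygame.image.save(screen, "screenshot.bmp")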
If you really want to save as PNG, you can reverse the distortion by swapping the red channel with the green one. This is fairly easy. For example, using PIL:
from PIL import Image

im = Image.open("screenshot.png")
width, height = im.size
pix = im.load()
for i in range(width):
    for j in range(height):
        r, g, b = pix[i, j]
        pix[i, j] = (g, r, b)
im.save("output.png")
Or you can save as BMP and convert to PNG after the fact:
from PIL import Image
im = Image.open("screenshot.bmp")
im.save("screenshot.png")
For future reference, this trick worked for me:
from PIL import Image

# Grab the frame as an array, swap the first two axes (pygame's surfarray is
# width-major), and reverse the channel order before handing it to PIL.
imgdata = pygame.surfarray.array3d(screen).transpose([1, 0, 2])[:, :, ::-1]
Image.fromarray(imgdata).save('output.png')

How can I find out why PIL isn't drawing the font correctly?

Here's the code I'm using:
from PIL import Image, ImageFont, ImageDraw
import sys

img = Image.new("RGBA", (300, 300))
draw = ImageDraw.Draw(img)
font = ImageFont.truetype(sys.argv[1], 30)
draw.text((0, 100), "world", font=font, fill="red")
del draw
img.save(sys.argv[2], "PNG")
and here's the image that results:
(For some reason I can't make the image show on SO, so here is the link: http://www.freeimagehosting.net/image.php?976a0d3eaa.png)
The thing is, I don't understand why it isn't drawing the font correctly. I should be able to read the word "world" off of it; it's like the picture has been cut in half or something. Does anyone have any clue?
EDIT: after balpha's comment, I decided to try another font. I'm only interested in TTF fonts, so I tried another one, and it worked. This is kind of strange. The original font I tried to run this with is Beautiful ES. I'm curious whether you can reproduce the same image on your computers, and whether you happen to know the reason why this is.
PIL uses the FreeType 2 library, so most likely it is an issue with the font file; for example, it could have bad metrics defined (e.g. check the OS/2-related ones by opening the font in FontForge).
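As a quick sanity check (a sketch only; the font path is a placeholder), you can size the canvas from the text size that the font itself reports, which makes metric problems in a particular .ttf easier to spot:

from PIL import Image, ImageFont, ImageDraw

# Hypothetical font path; substitute the .ttf you are testing.
font = ImageFont.truetype("BeautifulES.ttf", 30)

text = "world"
width, height = font.getsize(text)  # size reported by the font's metrics

# Draw onto a canvas sized from those metrics, with padding, so the glyphs
# are not clipped by a fixed-size image.
img = Image.new("RGBA", (width + 20, height + 20), (255, 255, 255, 255))
draw = ImageDraw.Draw(img)
draw.text((10, 10), text, font=font, fill="red")
img.save("font-test.png", "PNG")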
