There are a couple of similar questions on SO, but none of them really helped. Basically I am trying to resize a simple PNG image, as seen here:
http://media.spiralknights.com/wiki-images/3/3e/Equipment-Proto_Sword_icon.png
(from the mmo Spiral Knights, copyright Three Rings Entertainment)
I had originally implemented a utility that uses these images in PHP, and the resizing there worked perfectly well; I used the method described on the imagecopyresampled page in PHP's documentation.
Honestly I can't even get it to resize better in Photoshop, but the results in Python are almost disastrous. I consistently get a halo effect, and I believe this is at least in part due to the actual RGBA values of the transparent pixels. Here, this picture tells it better:
(the second to last resize was just a suggestion I saw on another forum to resize first to twice the final size, and it DID help at least a little, but not enough)
The image is already in RGBA mode when it is being resized.
As you can see, the PHP and Photoshop resizes are halo-free. Honestly everything but the PHP resize does TOO much work; I like the minimal palette in the PHP image (if you look at the larger versions you can see that the PHP resize uses fewer in-between colours), but I could live with the way Photoshop has resized it, or even the inner part of the Python resize, but the halo is unacceptable.
It seems to me -- and correct me if I'm wrong -- that PHP and Photoshop know not to use the colour of fully transparent pixels when interpolating, but Python is using that light border, which is otherwise transparent, in its resize.
Unfortunately there are a lot of different icons that I need to resize, with varying profiles, so they're not all as simple as this circular one, but this was just the first one I was using while experimenting.
It's not much of a code question in and of itself, but if you need something to look at then this is the gist:
>>> import Image
>>> img = Image.open('swordorig.png')
>>> img
<PngImagePlugin.PngImageFile image mode=RGBA size=256x256 at 0x2A3AF58>
>>> img.resize((36,36), Image.ANTIALIAS).save('swordresize.png')
Eventual question being: is there a way to tell PIL NOT to use the colour of a pixel that has an alpha of 0 while resampling?
Thanks to everyone for answering! And especially to rotoglup for telling me the term for what I was actually looking for. Going through the Stack Overflow pages for premultiplied alpha in PIL, I found a solution that works; here is an updated sheet of resize graphics:
The gray bars are just to give an impression of what they would look like on a lighter background.
https://stackoverflow.com/a/6882161/1189554
Thanks to madlag
So my code ends up looking like this:
>>> import Image, numpy
>>> img = Image.open('swordorig.png')
>>> premult = numpy.fromstring(img.tostring(), dtype=numpy.uint8)
>>> alphaLayer = premult[3::4] / 255.0
>>> premult[::4] *= alphaLayer
>>> premult[1::4] *= alphaLayer
>>> premult[2::4] *= alphaLayer
>>> img = Image.fromstring("RGBA", img.size, premult.tostring())
>>> img.resize((36,36), Image.ANTIALIAS).save('swordresize.png')
And you get the icon that is on the bottom of the sheet. Still more colours than the PHP version, and a very mild ring, but overall much, much nicer. I'm sure the algorithm could be tweaked to improve it even more. Thanks again to everyone for responding!
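For anyone reading this on a current Pillow, the same premultiplication can be sketched with a numpy array and Image.fromarray, since tostring/fromstring and Image.ANTIALIAS no longer exist there; this is only a rough equivalent of the snippet above, using the same file names:

# rough modern-Pillow equivalent of the snippet above
import numpy
from PIL import Image

img = Image.open('swordorig.png').convert('RGBA')
arr = numpy.asarray(img).astype(numpy.float64)

# premultiply the RGB channels by alpha so fully transparent
# pixels contribute no colour during resampling
alpha = arr[..., 3:4] / 255.0
arr[..., :3] *= alpha

premult = Image.fromarray(arr.round().astype(numpy.uint8), 'RGBA')
premult.resize((36, 36), Image.LANCZOS).save('swordresize.png')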
Preamble
I'm displaying an image from a Django back-end into a 26 x 26px sized (css) <img> tag.
I can do this two ways:
Resize on the back-end (Pillow) to 26x26 (cropping to keep the aspect ratio)
Send the full size image and let CSS do the sizing
Issue
Using the LANCZOS algorithm for resizing and saving the result as 100% quality Jpg, the displayed image looks so much worse than the one resized by the browser.
Why is that, and is there any way to fix it?
EDIT: Testing on a Retina display
EDIT: Bicubic looks very similar to Lanczos
This article [1] says that Retina (high-DPI) displays try to display twice as many pixels in the same area, an area that is itself measured in pixels, the pixels you actually declare in CSS.
So this is where my confusion came from. The solution was to generate the thumbnails at twice the size being displayed.
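In other words, for a 26x26 CSS box on a 2x display the thumbnail needs to be rendered at 52x52. A minimal Pillow sketch of that idea (the input and output file names are placeholders):

from PIL import Image, ImageOps

CSS_SIZE = (26, 26)   # size declared in CSS
SCALE = 2             # device pixel ratio of the Retina display

# ImageOps.fit crops to the target aspect ratio and resizes in one step
thumb = ImageOps.fit(Image.open('avatar.jpg'),
                     (CSS_SIZE[0] * SCALE, CSS_SIZE[1] * SCALE),
                     method=Image.LANCZOS)
thumb.save('avatar@2x.jpg', quality=100)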
In my experience, PIL/Pillow (hereafter simply “Pillow”) can behave very differently with small images than it does with large ones – not just in its resizing operations but generally – so you might as well test out all of the resampling methods Pillow offers, e.g.:
# q.v. https://gist.github.com/fish2000/d85befaf289c664b6a9f44d1b56e57da#file-asscat-py-L129-L134
from PIL import Image

# q.v. PIL.Image constants of the same (yet uppercased) names:
interpolation_methods = frozenset({
    "box",
    "bilinear", "bicubic",
    "hamming", "lanczos",
    "nearest" })

def interpol(name):
    """ Return a PIL/Pillow image interpolation method constant by name """
    return getattr(Image, name.upper())

size = (26, 26)
avatar = Image.open(…) # load your source avatar image

methods = (interpol(method) for method in interpolation_methods)
scaled = (avatar.resize(size, resample=method) for method in methods)

# you can save these out for more granular inspection:
previews = list(scaled)

for preview in previews:
    preview.show()
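If on-screen preview windows are awkward to compare, the “save these out” comment above could be taken literally and each variant written to disk instead, reusing the names from the snippet above (the file-name pattern is just illustrative):

# write each variant to disk, tagged with its method name:
for name in interpolation_methods:
    avatar.resize(size, resample=interpol(name)).save("avatar-26-{0}.png".format(name))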
… keep in mind that Image.NEAREST can yield surprisingly decent results for small sizes – that, and the fact that Pillow is not in any way Adobe® Photoshop™, and as such can’t really be tasked with replicating results you may have gotten out of same.
But so, w/r/t whether or not to scale using CSS (or any other client-based method): it’s always better to send fewer bytes over the wire, if possible – but that doesn’t mean it can’t be done. I’m a perfectionist, personally, but if pressed for time or money I am not pedantic about it.
So, as the topic says, I want to write some text on a transparent image.
Or, to be more specific, I want to write different text at multiple specific positions. The point is, I want to use a custom font (Coolvetica) and (here is my problem) I want it to be anti-aliased. Unfortunately, PIL does not support font anti-aliasing, as you might already know, but also see here:
PIL example image.
So as you see, PIL not only has an issue with the coordinates, but also does not really support font anti-aliasing and adds an irritating black-ish border to the text. It is not very usable for me.
So after some searching, I found Python Wand. The output is a lot better, but the anti-aliasing does not lead to the result I want: Wand example image. The curves are not as smooth as I want and there are some weird "pimples", like the dot at the end of the "r".
My code to use wand is this:
from wand.color import Color
from wand.drawing import Drawing
from wand.image import Image as wandimage

with Drawing() as draw:
    with wandimage(filename="Skispringen_Score.png") as img:
        draw.font_family = 'coolvetica.ttf'
        draw.text_antialias = True
        draw.fill_color = Color("rgba(255, 255, 255, 255)")
        for postionName, scorePosition in scoreContent.items():
            if isinstance(currentScoreText[postionName], int) and currentScoreText[postionName] > 9:
                center = 2
            else:
                center = 0
            draw.font_size = fontsize[scorePosition["size"]]
            draw.text(scorePosition["x"] - center, scorePosition["y"], str(currentScoreText[postionName]))
        draw(img)
        img.save(filename='wand-image.png')
And here is an example of what the text should actually look like: PHP example image.
This image was generated with PHP and the text looks as smooth as I want it to.
I also tried to use cairo, but the documentation is really not that good (examples especially are rare) and I don't even know how to write any text with it, nor how to set a custom font.
The actual output image has a resolution of 1280x720 and is - apart from the text - completely transparent; the image is just an overlay for a video stream.
Do you have any idea how to get a nice looking text onto the image with python?
You could try pycairo. Here are some code examples https://github.com/pygobject/pycairo/tree/master/examples
The reference docs for cairo are for C.
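To give an idea of what that looks like in Python, here is a rough pycairo sketch that draws white, anti-aliased text onto a fully transparent 1280x720 surface (the coordinates, text, and output file name are placeholders); note that cairo's "toy" text API selects a font by family name, so Coolvetica would have to be installed rather than loaded from a .ttf path:

import cairo

# transparent 1280x720 overlay, as described in the question
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1280, 720)
ctx = cairo.Context(surface)

# the "toy" font API picks an installed font by family name
ctx.select_font_face("Coolvetica",
                     cairo.FONT_SLANT_NORMAL,
                     cairo.FONT_WEIGHT_NORMAL)
ctx.set_font_size(48)
ctx.set_source_rgba(1.0, 1.0, 1.0, 1.0)   # opaque white

ctx.move_to(100, 100)   # placeholder position
ctx.show_text("123")    # text is anti-aliased by default

surface.write_to_png("cairo-overlay.png")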
If you want even better results, you could render the image at 4 times the size and downsample it to the desired size.
You can also try some filters, like a Gaussian blur.
For pixel-perfect images you will have to use the same rendering engine that your PHP function is using.
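The supersampling suggestion above could look roughly like this with Pillow doing the downscale, assuming the overlay has already been rendered at four times the final resolution (the file names here are placeholders):

from PIL import Image

# hypothetical 4x render, e.g. 5120x2880 for a final 1280x720 overlay
big = Image.open('overlay-4x.png')
small = big.resize((big.width // 4, big.height // 4), Image.LANCZOS)
small.save('overlay.png')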
I'm fairly new to PIL and having an issue with some image processing. I'm just trying to resize an image to a different resolution using PIL:
resized_hd = image.resize((hd_width, hd_height), Image.ANTIALIAS)
However, the resized picture does not look as good/bright as the original one.
Original (5184*3456) -> http://d31d9cjolqcwln.cloudfront.net/San+Francisco/sutro+baths.jpg
Resized (2048*1366) -> http://d31d9cjolqcwln.cloudfront.net/San+Francisco/HD-sutro+baths.jpg
Any guess?
I think it's because one of the two images (the original one) has an embedded color profile, which the other one (the resized one) doesn't have. I discovered this by trying to open the original image in gimp. The color profile will cause advanced viewers to make corrections, in theory to fix differences like the ones between paper and screen. When I view the two images in a simple viewer that doesn't know about color profiles, they are really the same brightness. It looks like the PIL library ignores the color profile and doesn't carry it onto the resized image.
I can't help you more precisely than that, though, as I don't know much about color profiles. There might be tools to copy the profile over.
Kudos to Armin for his suggestion.
PIL allows you to attach an ICC profile to a resized image using the following code:
# im1 is the original image; grab its embedded colour profile
icc_profile = im1.info.get('icc_profile')
im6 = im1.resize((hd_width, hd_height), Image.ANTIALIAS)
## this one will preserve the colours
im6.save("colorok-" + image, icc_profile=icc_profile)
## this one won't
im6.save("nop-" + image)
My end goal is to find a color in an image (in this case white) and replace it with another color (based on certain circumstances). So, as a certain variable changes the white is replaced with a specific color.
However, to get there I'm currently just playing with images (using tutorials, etc.). I'm trying to use the code below to print the color palette of an image. My understanding is that the colors are a "tuple" of 3 integers representing RGB: 0 = darkest, 255 = white. So, the image I'm testing with is a black and white image. I'm expecting something like "(0,0,0),(255,255,255)". So, I figured if I could get this far, then I could write code to replace the "(255,255,255)" with the appropriate color.
I mentioned the end goal because I'm very aware that my approach might not be the best, and that perhaps someone has a better way I can go about this. If not, I'd at least like to be able to print a string referencing the colors an image contains. The "NOFRAME" was a great piece of advice I found on this site, as I'm not actually "using" the images/graphics - just using their attributes.
image1=r"C:\Python27\Lib\site-packages\pygame\examples\data\image1.jpg"
image2=r"C:\Python27\Lib\site-packages\pygame\examples\data\image2.png"
import pygame, sys
from pygame.locals import *
pygame.init()
pygame.display.set_mode((1,1), pygame.NOFRAME)
background = pygame.image.load(image1).convert()
mouse_c=pygame.image.load(image2).convert_alpha()
colorpal = pygame.Surface.get_palette(mouse_c)
print colorpal
What you want to do is going to require a lot of pixel level work, so I recommend that you use a pygame.PixelArray object for direct pixel access of the surface(s).
For what you specifically want to do, it sounds like you could use the pygame.PixelArray.replace() method.
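A rough sketch of that approach, reusing the image2 path from the question; the target colour (white) and the replacement colour are just examples:

import pygame

image2 = r"C:\Python27\Lib\site-packages\pygame\examples\data\image2.png"

pygame.init()
pygame.display.set_mode((1, 1), pygame.NOFRAME)
surface = pygame.image.load(image2).convert_alpha()

# wrap the surface in a PixelArray and swap every white pixel
# for the replacement colour in a single call
pxarray = pygame.PixelArray(surface)
pxarray.replace((255, 255, 255), (0, 128, 255))
del pxarray   # release the surface lock before using the surface again

pygame.image.save(surface, 'recoloured.png')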
You can get individual pixels on image surfaces by using the get_at method of Pygame surfaces - as in mouse_c.get_at((0,0)).
This is strictly what you are asking for -- but it won't suit any real-world needs, as it is extremely, and I mean extremely, slow to process all the pixels of an image by calling just this.
You could take the image buffer with get_buffer and interpret the data there, raw, or pass the buffer along to a function written in native code to get some speed.
Still, if your goal is to replace one color with another in real time, you can resort to using indexed images - that way, each image has a color table; you just modify, say, color number 10 to be (0, 0, 255), and as you render the image, all occurrences of that color become blue. This is fast -- not as efficient as back in the 8-bit video-game days when this was done by hardware -- but it will be orders of magnitude faster than any substitution you try to make in pure Python code.
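A sketch of that indexed-colour idea with pygame, reusing the image1 path from the question; palette slot 10 is only an example, you would inspect get_palette() to find the slot that actually holds the colour you want to swap:

import pygame

image1 = r"C:\Python27\Lib\site-packages\pygame\examples\data\image1.jpg"

pygame.init()
pygame.display.set_mode((1, 1), pygame.NOFRAME)

# convert to an 8-bit indexed surface so the image carries a palette
indexed = pygame.image.load(image1).convert(8)

# changing one palette slot recolours every pixel that references it
indexed.set_palette_at(10, (0, 0, 255))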
I want to load a number of images from the hard drive and place them on a larger white background. And I want to do it in Python. I am wondering what the best way of doing that is. I am using a Windows machine and I can use any library I want. Any pointer to a webpage or sample code that points me in a good direction would be appreciated.
Something like this:
A very popular image processing library for Python is PIL. The official PIL tutorial might be useful, especially the section about "Cutting, Pasting and Merging Images".
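A minimal sketch of that "paste onto a larger white canvas" idea with PIL/Pillow; the canvas size, file names, and paste positions are placeholders:

from PIL import Image

# white canvas big enough to hold the loaded images
canvas = Image.new('RGB', (800, 600), 'white')

# hypothetical file names and top-left paste positions
for path, position in [('photo1.jpg', (20, 20)), ('photo2.jpg', (420, 20))]:
    canvas.paste(Image.open(path), position)

canvas.save('collage.png')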
PIL isn't enough; also try the aggdraw library with PIL.
But aggdraw also isn't enough: it works poorly with transparency. I mean a 0.5-1 pixel gray fringe around opaque objects over the transparent area.