I'm using the Python Wand module (version 0.4.3) to convert an image stored in a PDF to PNG. The final PNG quality is great when I save the image at its original width and height. But when I try to save it at a smaller size, the final PNG gets blurry and the quality is not that great.
The difference between the two images is shown here. The top image is converted at the original size (10800x7200 px). The second one is scaled to 1250x833 px.
Is there any way I can improve the second image? I played with different filter and blur settings, but could not get the image quality I want. Any help is greatly appreciated.
Code I used to convert PDF to png in its original size:
from wand.image import Image
from wand.color import Color

def pdf_to_png(pdf_name, res):
    with Image(filename=pdf_name, resolution=res) as img:
        with Image(width=img.width, height=img.height, background=Color("white")) as bg:
            bg.composite(img, 0, 0)
            bg.save(filename="Drawing_improved_wand.png")
pdf_to_png('Drawing_1.pdf', 300)
Code for resized png:
with Image(filename="Drawing_1.pdf", resolution=(300, 300)) as img:
    with Image(width=1250, height=833, background=Color("white")) as bg:
        img.resize(1250, 833, filter='undefined', blur=1)
        img.format = 'png'
        bg.composite(img, 0, 0)
        bg.save(filename='Drawing_improved_wand1250x833.png')
This is likely an artifact of how ImageMagick rasterizes PDF text and vectors, and not because of anything you're doing wrong. The large PNG likely has the same problems as the small one, but since its resolution is almost an order of magnitude higher, the effects become imperceptible.
If the file looks good when exported to the large PNG, I would use that PNG for further processing (like scaling down) rather than the PDF.
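A minimal sketch of that two-step workflow, using Pillow rather than Wand for the downscale step (the filenames are from the question; downscale_png is a name introduced here):

```python
from PIL import Image

def downscale_png(src, dst, size=(1250, 833)):
    """Scale an already-rasterised high-resolution PNG down with Lanczos."""
    with Image.open(src) as img:
        img.resize(size, Image.LANCZOS).save(dst)

# downscale_png("Drawing_improved_wand.png", "Drawing_improved_wand1250x833.png")
```

Downscaling the clean 10800x7200 raster avoids re-rasterising the PDF at a low resolution, which is where the blur comes from.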
Have you tried setting blur < 1?
For example:
img.resize(1250, 833, filter='undefined', blur=0.1)
I am trying to downsize images with no form of resampling. I want it to fully maintain hard edges, and not add any blur / antialiasing whatsoever. Essentially, exactly like the photoshop "Hard edges" resampling mode. However, each and every one of Pillow's built in resampling methods gives me some kind of blur or aliasing effects.
The code I'm currently using is as follows:
from PIL import Image

fileName = "big.jpg"
for i in range(6):
    filename16 = "small" + str(i) + ".jpg"
    img = Image.open(fileName)
    img16 = img.resize((16, 16), resample=i)
    img16.save(filename16)
Original image:
The results
Resized image to 16x16 by using;
Photoshop's "Nearest neighbour - Hard edges"
0 Pillow Nearest Neighbor
1 Pillow Lanczos
2 Pillow Linear
3 Pillow Bicubic
4 Pillow Box
5 Pillow Hamming
Side by side, it can clearly be seen that every resampling method changes something about the image.
How would I be able to get a result like the one achieved by Photoshop's "hard edges" option in Pillow?
Thanks!
Make sure to save your images as .png files. .jpg files are lossy and don't compress hard lines very well, since they are designed to store regular photos. If you look at the raw pixel data, you will see a sharp transition between zones when you interpolate with NEAREST, but once the image is saved as .jpg, that all goes out the window.
Here's what the raw data is when you downsize with NEAREST:
Here's what the .jpg data looks like:
You can see the transition is filled with noise.
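A minimal sketch of that advice (hypothetical filenames; hard_edge_resize is a name introduced here, and NEAREST sampling plus a lossless PNG keeps the hard transitions intact):

```python
from PIL import Image

def hard_edge_resize(src, dst, size=(16, 16)):
    """Downscale with nearest-neighbour sampling and save losslessly."""
    img = Image.open(src)
    small = img.resize(size, Image.NEAREST)  # picks source pixels, no blending
    small.save(dst, format="PNG")            # lossless, unlike JPEG
    return small
```

Saving the same result as .jpg would reintroduce ringing noise around the edges, which is exactly the artefact described above.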
I am trying to save a grayscale image using matplotlib's savefig(). I find that the PNG file saved by savefig() is a bit different from the image displayed while the code runs: the displayed image contains more details than the saved figure.
How can I save the plot in such a manner that all details are kept in the output image?
My code is given below:
import cv2
import matplotlib.pyplot as plt
plt.figure(1)
img_DR = cv2.imread('image.tif', 0)
edges_DR = cv2.Canny(img_DR,20,40)
plt.imshow(edges_DR,cmap = 'gray')
plt.savefig('DR.png')
plt.show()
The input file ('image.tif') can be found here.
Following is the output image which is generated when the code is running:
Below is the saved image:
Although the two aforementioned images denote the same picture, one can notice that they are slightly different. A keen look at the circular periphery of the two images shows that they are different.
Save the actual image to file, not the figure. The DPI between the figure and the actual created image from your processing will be different. Since you're using OpenCV, use cv2.imwrite. In your case:
cv2.imwrite('DR.png', edges_DR)
Use the PNG format as JPEG is lossy and would thus give you a reduction in quality to promote small file sizes. If accuracy is the key here, use a lossless compression standard and PNG is one example.
If you are somehow opposed to using OpenCV, Matplotlib has an equivalent image writing method called imsave which has the same syntax as cv2.imwrite:
plt.imsave('DR.png', edges_DR, cmap='gray')
Note that I am enforcing the colour map to be grayscale for imsave as it is not automatically inferred like how OpenCV writes images to file.
Since you are using cv2 to load the image, why not use it to save the image as well?
I think the command you are looking for is :
cv2.imwrite('gray.jpg', gray_image)
Using a DPI that matches the image size seems to make a difference.
The image is of size width=2240 and height=1488 (img_DR.shape). Using fig.get_size_inches() I see that the image size in inches is array([7.24, 5.34]). So an appropriate dpi is about 310 since 2240/7.24=309.4 and 1488/5.34=278.65.
Now I do plt.savefig('DR.png', dpi=310) and get
One experiment to do would be to choose a high enough DPI, calculate the figure's height and width in inches (for example width_inch = width_pixel/DPI), set the figure size using plt.figure(figsize=(width_inch, height_inch)), and see whether the displayed image itself increases or decreases in quality.
Hope this helps.
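The arithmetic above can be captured in a tiny helper (a sketch; native_dpi is a name introduced here, not part of matplotlib):

```python
def native_dpi(img_width_px, fig_width_in):
    """DPI at which the saved figure's width matches the image's pixel width."""
    return round(img_width_px / fig_width_in)

# For the question's 2240 px wide image on a 7.24 in wide figure:
# plt.savefig('DR.png', dpi=native_dpi(2240, 7.24))   # dpi = 309
```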
I'm trying to resize images while keeping the aspect ratio. I use PIL's thumbnail method for this, with the Image.ANTIALIAS filter.
You can check my code here:
image = Image.open(design.design.path)
format = image.format
image = ImageOps.mirror(image)
new_size = (241, 241)
image.thumbnail(new_size, Image.ANTIALIAS)
image.save(response, format)
This code works, however quality is lost after thumbnail(). I can see it by zooming in on the saved image: there are visible pixels at the corners of the image that I don't see on the original. This is seen even better when I print out the resized image.
You can check out sample images here: http://imgur.com/a/ifZoU
Please tell me if you need anything else
Image.save has options for you:
img.save(fpath, 'PNG', optimize=True)  # for JPEG output, add quality=95 or so
If you are making thumbnails of a pixel-based image, of course you will lose quality. Re-sizing this kind of image (as opposed to vector images) simply throws information away - there is no way to recover it.
If you need to view the image at full size, you have to preserve the original image, and keep the scaled-down version to be shown only where it is needed - use the original for everything else.
https://en.wikipedia.org/wiki/Raster_graphics
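A sketch of that workflow with Pillow (make_display_copy is a hypothetical name; thumbnail() only modifies the in-memory copy, so the file on disk is preserved automatically, and Image.LANCZOS is the same filter as the question's Image.ANTIALIAS):

```python
from PIL import Image

def make_display_copy(src, dst, size=(241, 241)):
    """Write a scaled-down copy for display; the original file is untouched."""
    with Image.open(src) as img:
        img.thumbnail(size, Image.LANCZOS)  # keeps aspect ratio, fits in size
        img.save(dst)
```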
I'm working on a tool that cuts a large image into smaller tiles using ImageMagick via Python. And I need all the tiles to be on the same format (png, 8 or 16 bits).
In most cases it works just fine, but on monochromatic tiles ImageMagick compresses the picture when writing the file. For instance, pure black tiles are compressed to a 1-bit picture.
I use the plain save method, as explained in the docs.
I found no documentation about this autocompressing feature nor any way to avoid this.
Is there a workaround for this or a way I can avoid this happening?
edit:
For instance, if I use this code to load a 24-bit RGB picture:
from wand.image import Image

img = Image(filename='http://upload.wikimedia.org/wikipedia/commons/6/68/Solid_black.png')
print(img.type)
I get this as type
bilevel
If I add this,
img.type = 'grayscale'
print(img.type)
Once again I get
bilevel
If I try to force the pixel depth like this,
img.depth = 16
print(img.type)
print(img.depth)
I get:
bilevel
16
I thought that maybe it had actually changed the depth, but once I save the image, it becomes 1-bit again.
So it seems to me that ImageMagick just automatically compresses the picture and I have no control over it. It even refuses to change the image type.
Any ideas to avoid this? Any way to force the pixel depth?
I have a hundred 10x10 px images, and I want to combine them into a big 100x100 image. I'm using the Image library to first create a blank image and then paste in the smaller images:
blank = Image.new('P',(100,100))
blank.paste(im,box)
The smaller images are in color, but the resulting image turns out in all grayscale. Is there a fix or workaround for this?
It's probably something to do with using a palette type image (mode P). Is there a specific reason you are doing this? If not, try passing 'RGB' as the first argument.
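A minimal sketch of that fix, with synthetic stand-ins for the hundred tiles ('RGB' instead of 'P' keeps the pasted tiles in colour):

```python
from PIL import Image

# Hypothetical stand-ins for the hundred 10x10 colour tiles.
tiles = [Image.new('RGB', (10, 10), (i * 2, 100, 255 - i * 2)) for i in range(100)]

combined = Image.new('RGB', (100, 100))   # 'RGB', not 'P'
for i, tile in enumerate(tiles):
    combined.paste(tile, ((i % 10) * 10, (i // 10) * 10))
```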