I'm trying to convert JPG images into the newer WebP format. I used PIL for that and everything works perfectly!
But my problem is that when the program converts a JPG file of 134 KB, the resulting WebP is still 108 KB.
How can I reduce the size of the WebP image further? I am willing to lower the quality.
My code looks like this:
from PIL import Image
import glob
import os
# open the source JPEG, make sure it is RGB, and save it as WebP
image = Image.open('my-image.jpg')
image = image.convert('RGB')
image.save('my-image.jpg.webp', 'webp')
Does anybody know how we can again decrease the size of the converted WebP images?
Set the quality parameter while saving the image.
image.save('my-image.jpg.webp', 'webp', optimize=True, quality=10)
Save the picture with the desired quality. The higher the quality value, the better the image looks and the less it is compressed (larger file); lower values compress more aggressively and produce a smaller file.
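If you want to convert a whole folder in one go, here is a small sketch that reuses the glob and os imports from the question (the folder path and the quality=60 and method=6 values are just placeholders to experiment with; method is Pillow's WebP encoding-effort setting, 0 = fastest to 6 = slowest/best):
from PIL import Image
import glob
import os

for jpg_path in glob.glob('images/*.jpg'):                    # placeholder folder
    webp_path = os.path.splitext(jpg_path)[0] + '.webp'
    with Image.open(jpg_path) as im:
        im.convert('RGB').save(webp_path, 'webp', quality=60, method=6)
    print(webp_path, os.path.getsize(webp_path), 'bytes')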
Related
I have a small animation (pyglet) of which I take a screenshot that I then want to feed into a CNN. At the moment I have to save the screenshot (the colour buffer image) to disk and immediately load it again so that the image is in the right format for the CNN (PIL format).
For better performance I would like to skip this round trip and avoid saving the image at all. Here is my code:
pyglet.image.get_buffer_manager().get_color_buffer().save('screenshot.png')
image = tensorflow.keras.utils.load_img(
    'screenshot.png', color_mode='rgb', target_size=(256, 256),
    interpolation='nearest', keep_aspect_ratio=False)
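One way to skip the file round trip (a sketch rather than tested code; the exact ImageData.get_data() arguments depend on your pyglet version) is to read the colour buffer's raw bytes and build the PIL image in memory:
import pyglet
import numpy as np
from PIL import Image

# grab the colour buffer as raw RGBA bytes instead of saving it to disk
buf = pyglet.image.get_buffer_manager().get_color_buffer()
img_data = buf.get_image_data()
raw = img_data.get_data('RGBA', img_data.width * 4)

# build a PIL image in memory and bring it to the size the CNN expects
pil_img = Image.frombytes('RGBA', (img_data.width, img_data.height), raw)
pil_img = pil_img.convert('RGB').resize((256, 256))

# OpenGL buffers are stored bottom-up, so flip vertically before feeding the CNN
arr = np.flipud(np.asarray(pil_img))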
I have never worked with TIF images. I have a specific problem where I need to process a satellite raster in .TIF format, which I then need to upload to Google Earth Engine.
But when I convert the image into a NumPy array to apply some image-processing techniques, it loses its georeferencing. Is there any way of preserving that in the output image?
from PIL import Image
import numpy
import pylab
from pylab import imshow

Image.MAX_IMAGE_PIXELS = None
ImagenTotal = numpy.asarray(Image.open('Well/newnew3.tif'))
imshow(ImagenTotal[10:-10, 10:-10, :])
pylab.savefig('foo{}.tif')
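One common way to keep the georeferencing (a sketch, not from the original post; it uses the rasterio package instead of PIL, and the output file name is a placeholder) is to read the raster's profile, process the pixel array, and write the result back with the same profile:
import rasterio

with rasterio.open('Well/newnew3.tif') as src:
    profile = src.profile          # carries the CRS, affine transform, dtype, ...
    data = src.read()              # shape: (bands, rows, cols)

processed = data                   # apply your image processing here (same shape)
# note: if you crop or resample, remember to update profile['transform'] as well

with rasterio.open('newnew3_processed.tif', 'w', **profile) as dst:
    dst.write(processed)           # the output keeps the georeferencing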
I used Mask-RCNN and TensorFlow in a project and have completed many trainings in the past.
In the most recent one, the loss curve in TensorBoard oscillates with a large amplitude. I changed some settings such as batch_size within a small range, but in my opinion that was useless.
This is the first time I trained on .png images; I used .jpg before this training. Could something be wrong there?
Thank you in advance.
Of course there is a difference between the PNG and JPEG formats: PNG supports lossless compression, whereas JPEG always uses lossy compression. Therefore the same image can become numerically different if we save it in a different format.
Example:
Take the following PNG image (make sure to download it to reproduce this example). We will save this image as JPG and then compare the PNG image with the JPG image:
https://upload.wikimedia.org/wikipedia/commons/6/6c/Belton_Garden.png
Code:
import cv2
import numpy as np

# read the original PNG unchanged
img_png = cv2.imread('./Belton_Garden.png', -1)

# re-encode it as JPEG and read it back
cv2.imwrite('./Belton_Garden.jpg', img_png)
img_jpg = cv2.imread('./Belton_Garden.jpg', -1)

# absdiff avoids the uint8 wrap-around that a plain subtraction would produce
diff = cv2.absdiff(img_png, img_jpg)
cv2.imwrite('diff.png', diff)
print('diff', diff)
The difference between the PNG and the JPG image looks like this:
Conclusion:
Will different image formats (jpg, png) affect the training results of Mask-RCNN?
As you can see, they may affect your results. For this reason it is a good idea to be consistent with the image format and remove this source of error.
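To put a number on the difference instead of only inspecting the diff image, a few summary statistics can be printed as well (a small addition reusing the arrays from the code above):
# summary statistics of the per-pixel difference computed above
print('mean absolute difference:', np.mean(diff))
print('max absolute difference:', np.max(diff))
print('fraction of changed pixels:', np.count_nonzero(diff) / diff.size)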
I am trying to save a grayscale image using matplotlib's savefig(). I find that the PNG file saved by savefig() is a bit different from the image that is displayed while the code runs. The displayed image contains more detail than the saved figure.
How can I save the output plot so that all details are stored in the output image?
My code is given below:
import cv2
import matplotlib.pyplot as plt

plt.figure(1)
img_DR = cv2.imread('image.tif', 0)      # load the image as grayscale
edges_DR = cv2.Canny(img_DR, 20, 40)     # Canny edge detection
plt.imshow(edges_DR, cmap='gray')
plt.savefig('DR.png')
plt.show()
The input file ('image.tif') can be found here.
Following is the output image generated when the code runs:
Below is the saved image:
Although both images show the same picture, they are slightly different; a close look at the circular periphery of the two images reveals the difference.
Save the actual image to file, not the figure. The DPI of the figure and of the actual image created by your processing will be different. Since you're using OpenCV, use cv2.imwrite. In your case:
cv2.imwrite('DR.png', edges_DR)
Use the PNG format, as JPEG is lossy and would reduce quality in exchange for smaller file sizes. If accuracy is key here, use a lossless compression standard, and PNG is one example.
If you are somehow opposed to using OpenCV, Matplotlib has an equivalent image writing method called imsave which has the same syntax as cv2.imwrite:
plt.imsave('DR.png', edges_DR, cmap='gray')
Note that I am forcing the colour map to grayscale for imsave, as it is not inferred automatically the way it is when OpenCV writes images to file.
Since you are using cv2 to load the image, why not also use it to save the image?
I think the command you are looking for is:
cv2.imwrite('gray.jpg', gray_image)
Using a DPI that matches the image size seems to make a difference.
The image is of size width=2240 and height=1488 (img_DR.shape). Using fig.get_size_inches() I see that the image size in inches is array([7.24, 5.34]). So an appropriate dpi is about 310 since 2240/7.24=309.4 and 1488/5.34=278.65.
Now I do plt.savefig('DR.png', dpi=310) and get
One experiment would be to choose a high enough DPI, compute the figure's width and height in inches (for example width_inch = width_pixel / DPI), set the figure size with plt.figure(figsize=(width_inch, height_inch)), and check whether the displayed image itself gains or loses quality; a sketch is given below.
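A minimal sketch of that experiment (the dpi value and file name are the ones from this thread; adjust them to your image):
import cv2
import matplotlib.pyplot as plt

img_DR = cv2.imread('image.tif', 0)
edges_DR = cv2.Canny(img_DR, 20, 40)

dpi = 310                                        # roughly matches 2240 px / 7.24 in
height_px, width_px = edges_DR.shape
plt.figure(figsize=(width_px / dpi, height_px / dpi), dpi=dpi)
plt.imshow(edges_DR, cmap='gray')
plt.axis('off')                                  # drop the axes so only the image is saved
plt.savefig('DR.png', dpi=dpi, bbox_inches='tight', pad_inches=0)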
Hope this helps.
I'm using the Python Wand module (version 0.4.3) to convert an image stored in a PDF to PNG. The final PNG quality is great when I save the image at its original width and height. But when I try to save it at a smaller size, the final PNG gets blurry and the quality is not that great.
The difference between the two images is shown here. The top image is converted at the original size (10800x7200 px). The second one is scaled to 1250x833 px.
Is there any way I can improve the second image? I played with different filter and blur settings, but could not get the image quality I want. Any help is greatly appreciated.
Code I used to convert the PDF to PNG at its original size:
def pdf_to_png(pdf_name, res):
    with Image(filename=pdf_name, resolution=res) as img:
        with Image(width=img.width, height=img.height, background=Color("white")) as bg:
            bg.composite(img, 0, 0)
            bg.save(filename="Drawing_improved_wand.png")

pdf_to_png('Drawing_1.pdf', 300)
Code for resized png:
with Image(filename="Drawing_1.pdf", resolution=(300,300)) as img:
with Image(width=1250, height=833, background=Color("white")) as bg:
img.resize(1250, 833,filter='undefined', blur=1)
img.format = 'png'
bg.composite(img,0,0)
bg.save(filename='Drawing_improved_wand1250x833.png')
This is likely due to an inefficiency in how ImageMagick handles rasterization from PDF text and vectors, and not because of anything you're doing wrong. The large PNG probably has the same problems as the small one, but since its resolution is almost an order of magnitude higher, the artefacts become imperceptible.
If the exported large PNG looks good, I would use it for the further processing (such as scaling down) instead of the PDF, for example:
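A sketch of that approach with Wand (the 'lanczos' filter is just one suggestion; any of Wand's resize filters can be tried here):
from wand.image import Image
from wand.color import Color

# rasterize the PDF at high resolution, flatten it onto white, then scale down
with Image(filename='Drawing_1.pdf', resolution=300) as img:
    with Image(width=img.width, height=img.height, background=Color('white')) as bg:
        bg.composite(img, 0, 0)
        bg.resize(1250, 833, filter='lanczos')
        bg.format = 'png'
        bg.save(filename='Drawing_improved_wand1250x833_lanczos.png')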
Have you tried setting blur < 1?
For example:
img.resize(1250, 833, filter='undefined', blur=0.1)