Compressing PIL image without saving the file - python

I am having trouble compressing an image in Python without saving it to disk. The image has a save function, as described here, which optimizes the image while saving it. Is it possible to use the same procedure without saving the image? I want to do it like another Python function, e.g.:
image = image.quantize()  # but this reduces the quality a lot
Thanks in advance :)

In PIL or OpenCV the image is just a large matrix holding its pixel values. If you want to do something with the image (e.g. display it), the function needs to know all the pixel values, and thus needs the decompressed image.
However, there is a way to keep the image compressed in memory until you really need to do something with it. Have a look at this answer: How can I load an image in Python, but keep it compressed?
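As a concrete illustration of keeping the compressed bytes in memory: PIL's save accepts any file-like object, so an io.BytesIO buffer can stand in for a file on disk. This is only a sketch; the image contents and quality value here are placeholders.

```python
import io
from PIL import Image

# Hypothetical input image; any PIL image works here.
img = Image.new("RGB", (640, 480), color=(120, 30, 200))

# Write the compressed JPEG bytes into an in-memory buffer
# instead of a file on disk.
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=60, optimize=True)
compressed_bytes = buf.getvalue()

# Re-open the compressed image directly from memory when needed.
buf.seek(0)
reloaded = Image.open(buf)
```

The image stays compressed until `reloaded` is actually decoded, which is the behavior the linked answer describes.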

Related

Need help creating JPEG generational degradation code

I am currently creating a generation-loss script for .jpeg images.
Theory: .jpg is a lossy compression format (for the most part), i.e. every time an image is converted to .jpg some of the original image's data is lost in the process. This results in smaller file sizes, but due to the loss of data the image is of lower quality than the original. In most use cases the degradation in quality is negligible, but if this process is carried out many times, the pixel data gets compressed (lost) so often that we end up with little more than random noise.
I have tried doing it with PIL and cv2, but had no success.
What I tried: opening an image (say, one in .png format) and converting it into a .jpg, then converting the image (now a .jpg) back to .png, so that the process can be repeated several times.
My reasoning is that since we are converting the original image into a JPEG, some data should be lost each time.
I am displaying the image using cv2.imshow() because the window stays active until destroyed explicitly, or until a cv2.destroyWindow()/cv2.destroyAllWindows() call is encountered.
I expected the image to show up and its quality to gradually decrease as the program runs, but for some reason the image stays the same. So I am hoping someone can help me create the code from scratch (as my current efforts are in vain).
P.S.: The reason I didn't post any code is that it's more of a bodge than anything concrete, and does nothing towards achieving the objective, so uploading it would only waste others' time analysing it.
The flaw in your theory is here:
every time the image is converted to .jpg some contents/data of the original image is lost in the process.
If you have already converted to JPEG and you recompress with the same settings, you might not lose any data.
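One way to make the loss actually accumulate is to vary the quality between passes, so the encoder must re-quantize each generation. A minimal sketch with PIL, using an in-memory buffer and made-up quality values:

```python
import io
from PIL import Image

# Hypothetical starting image; in practice this would be the opened .png.
img = Image.new("RGB", (64, 64), color=(200, 50, 50))

# Re-encode as JPEG repeatedly. Alternating the quality between passes
# forces the encoder to re-quantize each time, so data really is lost;
# re-saving at identical settings can reproduce essentially the same file.
for i in range(50):
    buf = io.BytesIO()
    quality = 70 if i % 2 == 0 else 90
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    img = Image.open(buf)
    img.load()  # force a full decode while the buffer is still alive
```

After the loop, `img` holds the 50th-generation image, which can then be displayed or diffed against the original.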

How to generate JP2K distorted images with different compression ratios in Python

I am working with the glymur package right now. Is there any method by which I can control the compression ratio of the generated image?
im=Image.open("F:/t.bmp")
jp2=glymur.Jp2k("F:/distortion_images/2.jp2",data=array(im),numres=())
x=glymur.Jp2k("F:/distortion_images/2.jp2")
img=Image.fromarray(x[:])
img.show()
I think I need to add some parameters after numres, but it turned out to be the same image; maybe my numres value is not valid.
Can anyone help with this, or suggest other methods?

Save image without rescaling

I have a 2D array and I need to save it as an image. What's the best way to do it without rescaling? I want to read the image back afterwards and check that the values have been saved correctly. I am saving it as a bmp to avoid compression issues, but other formats should also be fine.
To save an image you can use SciPy's imsave function:
imsave(path, image)
EDIT: To save an image as bmp, just choose the file extension in path accordingly.
EDIT2: To prevent intensity normalization you can use
scipy.misc.toimage(image, cmin=0, cmax=255, mode='I').save("image.png")
You can use mode 'I' to save your image in a specific integer format. Just be sure that your input is of type uint16.
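Since SciPy's image helpers have since been deprecated, a lossless round trip can also be sketched directly with PIL and NumPy; the array contents and file name below are made up:

```python
import numpy as np
from PIL import Image

# Hypothetical 2D array of 16-bit values to round-trip losslessly.
arr = np.arange(0, 65536, 256, dtype=np.uint16).reshape(16, 16)

# A uint16 array maps to a 16-bit integer image mode, and PNG stores
# 16-bit grayscale losslessly, so no rescaling is introduced.
Image.fromarray(arr).save("array.png")

# Read it back and verify the values survived unchanged.
back = np.asarray(Image.open("array.png"), dtype=np.uint16)
assert np.array_equal(back, arr)
```

The final assertion is the "check that the values have been saved correctly" step from the question.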

Reading a .JPG Image and Saving it without file size change

I want to write a Python script that reads a .jpg picture, alters some of its RGB components and saves it again, without changing the picture's file size.
I tried to load the picture using OpenCV and PyGame. However, when I tried a simple load/save with three different functions, the resulting images were greater in size than the initial image. This is the code I used:
>>> import cv, pygame # Importing OpenCV & PyGame libraries.
>>> image_opencv = cv.LoadImage('lena.jpg')
>>> image_opencv_matrix = cv.LoadImageM('lena.jpg')
>>> image_pygame = pygame.image.load('lena.jpg')
>>> cv.SaveImage('lena_opencv.jpg', image_opencv)
>>> cv.SaveImage('lena_opencv_matrix.jpg', image_opencv_matrix)
>>> pygame.image.save(image_pygame, 'lena_pygame.jpg')
The original size was 48.3K, and the resulting sizes are 75.5K, 75.5K and 49.9K.
Am I missing something that makes the original picture size change, even though all I did was a load/save?
And is there a better library to use than OpenCV or PyGame?
JPEG is a lossy image format. When you open and save one, you’re encoding the entire image again. You can adjust the quality settings to approximate the original file size, but you’re going to lose some image quality regardless. There’s no general way to know what the original quality setting was, but if the file size is important, you could guess until you get it close.
The size of a JPEG output depends on 3 things:
The dimensions of the original image. In your case these are the same for all 3 examples.
The color complexity within the image. An image with a lot of detail will be bigger than one that is totally blank.
The quality setting used in the encoder. In your case you used the defaults, which appear to be higher for OpenCV vs. PyGame. A better quality setting will generate a file that's closer to the original (less lossy) but larger.
Because of the lossy nature of JPEG some of this is slightly unpredictable. You can save an image with a particular quality setting, open that new image and save it again at the exact same quality setting, and it will probably be slightly different in size because of the changes introduced when you saved it the first time.
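The effect of the quality setting on output size can be sketched with PIL (a different library from the cv/pygame calls above; the noise image is a made-up stand-in for lena.jpg):

```python
import io
from PIL import Image

# Hypothetical stand-in for lena.jpg: noise gives the encoder real work.
img = Image.effect_noise((320, 240), 64).convert("RGB")

# Encode the same pixels at several quality settings and compare sizes;
# guessing a quality close to the original encoder's setting gets the
# output file size close to the original too.
sizes = {}
for quality in (50, 75, 95):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = len(buf.getvalue())
```

A higher quality setting yields a larger file, which is why the OpenCV and PyGame defaults produced different sizes from the same source image.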

Can you reduce memory consumption by ReportLab when embedding very large images, or is there a Python PDF toolkit that can?

Right now reportlab is making our PDFs fine most of the time. However, when one file gets several large images (125 files with a total on-disk size of 7 MB), we end up running out of memory and crashing while trying to build a PDF that should ultimately be smaller than 39 MB. The problem stems from:
elif mode not in ('L','RGB','CMYK'):
im = im.convert('RGB')
self.mode = 'RGB'
Where nice b&w (bitonal) images are converted to RGB, and when you have images with dimensions around 2595x3000 they consume a lot of memory. (Not sure why they consume 2 GB, but that point is moot.) When we add them to reportlab our entire Python memory footprint is about 50 MB; then, when we call
doc.build(elements, canvasmaker=canvasmaker)
Memory usage skyrockets as we go from bitonal PNGs to RGB and then render them onto the page.
While I try to see if I can figure out how to inject bitonal images into reportlab PDFs, I thought I would see if anyone else had an idea of how to fix this problem either in reportlab or with another tool.
We have a working PDF maker using PODOFO in C++, one of my possible solutions is to write a script/outline for that tool that will simply generate the PDF in a subprocess and then return that via a file or stdout.
Short of redoing PIL, you are out of luck. The images are converted internally by PIL to 24-bit color TIFFs. This is not something you can easily change.
We switched to Podofo and generate the PDF outside of python.
