import Image
from numpy import zeros, asarray
YUV = zeros((240, 320, 3), dtype='uint8')
im = Image.fromarray(YUV, mode="YCbCr")
blah = asarray(im)
When I run this (IPython 0.10.1 on Python 2.7.1), Python seems to read memory it shouldn't be reading. Sometimes it crashes and sometimes it doesn't, but I can reliably make it crash by increasing the 320x240 of zeros to, say, 3200x2400, and/or by calling blah.copy(). Also, if I do:
from matplotlib import pyplot as p
p.subplot(221); p.imshow(blah[:,:,0])
p.subplot(222); p.imshow(blah[:,:,1])
p.subplot(223); p.imshow(blah[:,:,2])
p.subplot(224); p.imshow(blah[:,:,3])
p.gray()
p.show()
I start to see junk memory appearing in blah at around row 180. What's going on here? Am I converting from a PIL Image to a NumPy array in a bad way? What is the difference between using array(im) and asarray(im), and which is preferred? (Note that in the former case it does still crash sometimes, but it seems more stable and shows less junk.)
I noticed that your image is 3-channel YCbCr, but you're displaying 4 channels. It turns out the "junk data" problem is caused by a bug in PIL's array interface, which returns a spurious 4th channel; a fix was committed in Nov 2010.
I ran your test case under PIL 1.1.7 and saw the noise. I commented out the 224 subplot and re-ran your test using the latest PIL trunk code: a proper 3-channel array is produced, with no noise. The crashing may be related as well, but I couldn't reproduce it in my environment.
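On the array(im) vs asarray(im) part of the question: np.array copies its input by default, while np.asarray avoids the copy when it can. That matches the observation that array(im) is a bit more stable: the copy at least freezes a private snapshot of the data. A minimal sketch of the difference, shown on a plain NumPy array rather than a PIL image:

```python
import numpy as np

src = np.zeros((240, 320, 3), dtype=np.uint8)

view = np.asarray(src)  # no copy: asarray returns the input itself
copy = np.array(src)    # copy=True by default: an independent buffer

print(view is src)   # True
print(copy is src)   # False
copy[0, 0, 0] = 1
print(src[0, 0, 0])  # 0 -- writes to the copy don't touch src
```

As a workaround for the spurious extra channel, slicing before copying (e.g. np.array(im)[:, :, :3]) keeps only the real data.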
I'm currently trying to make pyvips work for a project where I need to manipulate images of a "big but still sensible" size, between 1920x1080 and 40000x40000.
The installation went well, but these particular two lines sometimes work and sometimes don't:
img = pyvips.Image.new_from_file('global_maps/MapBigBig.png')
img.write_to_file('global_maps/MapTest.png')
It seems that for the biggest images, I get the following error message when writing the image back (loading works fine):
pyvips.error.Error: unable to call VipsForeignSavePngFile
pngload: arithmetic overflow
vips2png: unable to write to target global_maps/MapFishermansRowHexTest.png
I say "it seems" because the following lines work perfectly well (with a size of 100,000 x 100,000, far bigger than the problematic images):
size = 100000
test = pyvips.Image.black(size, size, bands=3)
test.write_to_file('global_maps/Test.png')
I could not find an answer anywhere; do you have any idea what I'm doing wrong?
EDIT:
Here is a link to an image that does not work (it weighs 102 MB).
This image was created with pyvips from an image 40 times smaller, like this:
img = pyvips.Image.new_from_file('global_maps/MapNormal.png')
out = img.resize(40, kernel='linear')
out.write_to_file('global_maps/MapBigBig.png')
And it can be read with Paint 3D or GIMP.
I found your error message in libspng:
https://github.com/randy408/libspng/blob/master/spng/spng.c#L5989
It looks like it's triggered when the decompressed image size would overflow your process's pointer size (i.e. not fit in a 32-bit address space). If I try a 32-bit libvips on Windows I see:
$ ~/w32/vips-dev-8.12/bin/vips.exe copy MapFishermansRowHexBigBig.png x2.png
pngload: arithmetic overflow
vips2png: unable to write to target x2.png
But a 64-bit libvips on Windows works fine:
$ ~/vips-dev-8.12/bin/vips.exe copy MapFishermansRowHexBigBig.png x.png
$
So I think switching to a 64-bit libvips build would probably fix your problem. You'll need a 64-bit python too, of course.
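To check which flavor of Python you're running, the pointer size can be read directly; a quick sketch:

```python
import struct
import sys

bits = struct.calcsize("P") * 8  # size of a C pointer, in bits
print(bits)                      # 64 on a 64-bit build, 32 on a 32-bit one
print(sys.maxsize > 2 ** 32)     # True only on 64-bit builds
```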
I also think this is probably a libspng bug (or misfeature?), since you can read >4 GB images on a 32-bit machine as long as you don't try to read them all in one go (libvips reads in chunks, so it should be fine). I'll open an issue on the libspng repo.
I'm trying to save a 32-bit floating point image (stored as a Numpy array) as a TIFF file using tifffile.py.
import numpy as np
import tifffile
image = np.random.rand(500, 500, 3).astype(np.float32)
tifffile.imsave('image.tiff', image)
However, when viewing the output of the above code in Eye of Gnome, the image is entirely blank.
I think the problem is that not all tools support multi-channel TIFFs with 32 bits per channel. For example, as far as I can tell, Python's PIL library does not. But I think tifffile.py does: if I use your code I get a TIFF that opens, and looks reasonable, in GIMP.
From what I read, Photoshop can read 32-bit TIFFs too. So I think the TIFF file contains your image, but whether it works for you or not depends on what you want to do with it next.
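If the goal is just a file that any viewer shows correctly, one option is to convert the float data to 8-bit before saving. A minimal sketch with NumPy (assuming, as in the question, values in [0, 1)):

```python
import numpy as np

image = np.random.rand(500, 500, 3).astype(np.float32)  # floats in [0, 1)

# Map [0, 1) onto 0..255 and truncate to 8-bit; every viewer handles uint8 RGB.
image8 = np.clip(image * 255.0, 0, 255).astype(np.uint8)

print(image8.dtype, image8.shape)
```

tifffile.imsave('image8.tiff', image8) would then write a conventional 8-bit TIFF, at the cost of the extra precision.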
This question might be relevant too, although it's about 16-bit integers rather than floats: Python: Read and write TIFF 16 bit, three channel, colour images
I'm a film photographer who deals a lot with cropping/image resizing. Because I shoot film, I have to scan my negatives and crop each frame out of the batch scan. My scanner scans four strips of six images each (24 frames/crops per scan).
A friend of mine wrote me a script for Python that automatically crops images based on inputted coordinates. The script works well but it has problems in the file format of the exported images.
From the scan, each frame should produce a 37 MB TIFF at 240 DPI (when I crop and export in Adobe Lightroom). Instead, the cropper outputs a 13 MB, 72 DPI TIFF.
Terminal (I'm on a Mac) warns me about a "decompression bomb" whenever I run the cropper. My friend is stumped and suggested I ask Stack Overflow.
I've no Python experience. I can provide the code he wrote and the commands Terminal gives me.
Thoughts?
This would be greatly appreciated and a huge HUGE timesaver.
THANK YOU!
ERROR MESSAGE: /Library/Python/2.7/site-packages/PIL/Image.py:2192: DecompressionBombWarning: Image size (208560540 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
PIL is merely trying to protect you. It will not open overly large images, since that could be a vector of attack: a malicious user could give you a small file that expands to use up all available memory. Quoting from the PIL.Image.open() documentation:
Warning: To protect against potential DOS attacks caused by “decompression bombs” (i.e. malicious files which decompress into a huge amount of data and are designed to crash or cause disruption by using up a lot of memory), Pillow will issue a DecompressionBombWarning if the image is over a certain limit.
Since you are not a malicious user and are not accepting images from anyone else, you can simply disable the limit:
from PIL import Image
Image.MAX_IMAGE_PIXELS = None
Setting Image.MAX_IMAGE_PIXELS to None disables the check altogether. You can also set it to a (high) integer value; the default is 1024 * 1024 * 1024 // 4 // 3, nearly 90 million pixels, or about 250 MB of uncompressed data for a 3-channel image.
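The arithmetic behind that default, as a quick check:

```python
# Pillow's default MAX_IMAGE_PIXELS value and the memory it corresponds to.
default_limit = 1024 * 1024 * 1024 // 4 // 3
print(default_limit)                          # 89478485 pixels
mib_3channel = default_limit * 3 / 1024 ** 2  # 3 bytes per pixel for RGB
print(round(mib_3channel))                    # ~256 MiB uncompressed
```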
Note that for PIL versions up to 4.3.0, all that happens by default is that a warning is issued. You can also disable the warning:
import warnings
from PIL import Image
warnings.simplefilter('ignore', Image.DecompressionBombWarning)
Inversely, if you want to prevent such images from being loaded altogether, turn the warning into an exception:
import warnings
from PIL import Image
warnings.simplefilter('error', Image.DecompressionBombWarning)
and you can then expect Image.DecompressionBombWarning to be raised as an exception whenever you pass in an image that would otherwise demand a lot of memory.
As of PIL v5.0.0 (released Jan 2018), images that use twice the number of pixels as the MAX_IMAGE_PIXELS value will result in a PIL.Image.DecompressionBombError exception.
Note that these checks also apply to the Image.crop() operation (you can create a larger image by cropping), and you need to use PIL version 6.2.0 or newer (released in October 2019) if you want to benefit from this protection when working with GIF or ICO files.
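A self-contained way to see the error fire (a sketch, assuming Pillow 5.0 or newer): shrink MAX_IMAGE_PIXELS so that even a tiny in-memory PNG exceeds twice the limit, which is the threshold at which the warning becomes DecompressionBombError:

```python
import io
from PIL import Image

# A 10x10 PNG built entirely in memory: 100 pixels.
buf = io.BytesIO()
Image.new("RGB", (10, 10)).save(buf, format="PNG")
buf.seek(0)

old_limit = Image.MAX_IMAGE_PIXELS
Image.MAX_IMAGE_PIXELS = 10  # artificially tiny limit, for demonstration only
try:
    Image.open(buf)
    result = "opened"
except Image.DecompressionBombError:
    result = "bomb error"  # 100 > 2 * 10, so Pillow refuses to open it
finally:
    Image.MAX_IMAGE_PIXELS = old_limit

print(result)
```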
From the Pillow docs:
Warning: To protect against potential DOS attacks caused by "decompression bombs" (i.e. malicious files which decompress into a huge amount of data and are designed to crash or cause disruption by using up a lot of memory), Pillow will issue a DecompressionBombWarning if the image is over a certain limit. If desired, the warning can be turned into an error with warnings.simplefilter('error', Image.DecompressionBombWarning) or suppressed entirely with warnings.simplefilter('ignore', Image.DecompressionBombWarning). See also the logging documentation to have warnings output to the logging facility instead of stderr.
I've got a very strange problem with python-qrcode.
I've had it working in our dev environment for a while now, without any issues. We use it to create two QR codes both of which contain URLs of almost exactly the same length (one contains an extra letter and two extra slashes). It's crucial that these two codes be exactly the same size.
Since we setup python-qrcode about five months ago, every single qrcode we have generated has been exactly the same size without fail. However, we've now pushed everything through to a production server and suddenly we have a problem.
Basically, one of the codes we generate is bigger than normal (this is the one with the three extra characters). The other code is the correct size. The two codes are generated using exactly the same function, we just pass the different URL to be encoded.
On my local machine and on our dev server, all the qrcodes are exactly the same size (including the one with the extra characters), but on the production server, the longer one is bigger while the other is correct.
We use Git version control, so all the files/functions etc are identical between the servers. The only difference between the setups is the version of Ubuntu (12.04 vs 12.10 on the production server), but I can't see why that would cause this issue.
If both codes were bigger, I could understand it, but I can't work out why one would be bigger than the other on only one server.
If anyone can make any suggestion as to where to start working this out, I'd be very grateful!
EDIT:
Here's the relevant code:
myQrGenerator = qrcode.QRCode(
    version=QRCODE_SIZE,
    error_correction=qrcode.constants.ERROR_CORRECT_M,
    box_size=QRCODE_BOX_SIZE,
    border=QRCODE_BORDER_SIZE,
)
myQrGenerator.add_data('%s%s/' % (theBaseUrl, str(theHash)))
myQrGenerator.make(fit=True)
We get those variables from local_settings.py.
After a lengthy discussion it was established that the two servers used different URLs. The one that produced a larger QR code (larger in QR modules, and consequently in image pixels) overflowed: the predefined version could not store enough bits, and qrcode made the data fit by bumping up the version, increasing the amount of data the code could store.
To fix this, fit was set to False to act as a constraint against overflows, and version was increased to accommodate more bits from the start. box_size can be decreased a bit to maintain, more or less, the original image size.
Probably a difference in the way PIL is installed on the box. Looking at the python-qrcode source, it does:
try:
    from PIL import Image, ImageDraw
except ImportError:
    import Image, ImageDraw
See what happens when you do:
from PIL import Image, ImageDraw
on each machine. Either way, if it really isn't a bug in the application code (make doubly sure the same code is on each box), it sounds like it's down to some difference in the way PIL builds itself on Ubuntu 12.10 vs 12.04, probably due to different headers/libs being used at compile time. Look into making the PIL installation consistent with the other boxes.
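A small probe you can run on each machine to see which import path wins and which version gets picked up (a sketch; probe_pil is just an illustrative name):

```python
def probe_pil():
    """Report whether PIL imports as a package, as a top-level module, or not at all."""
    try:
        import PIL
        from PIL import Image, ImageDraw  # packaged install (Pillow, or PIL as a package)
        return "package: " + getattr(PIL, "__version__", "unknown")
    except ImportError:
        pass
    try:
        import Image, ImageDraw  # old-style top-level PIL install
        return "top-level: " + getattr(Image, "VERSION", "unknown")
    except ImportError:
        return "no PIL found"

print(probe_pil())
```

If the two boxes print different lines, you've found your discrepancy.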
Hello all,
I am working on a program which determines the average colony size of yeast from a photograph, and it is working fine with the .bmp images I tested it on. The program uses pygame, and might use PIL later.
However, the camera/software combo we use in my lab will only save 16-bit grayscale TIFFs, and pygame does not seem to recognize 16-bit TIFFs, only 8-bit. I have been reading up for the last few hours on easy ways around this, but even the Python Imaging Library does not seem to work with 16-bit TIFFs; when I try, I get "IOError: cannot identify image file".
import Image
img = Image.open("01 WT mm.tif")
My ultimate goal is to have this program be user-friendly and easy to install, so I'm trying to avoid adding additional modules or requiring people to install ImageMagick or something.
Does anyone know a simple workaround to this problem using freeware or pure Python? I don't know too much about images: bit-depth manipulation is out of my scope. But I am fairly sure that I don't need all 16 bits, and that probably only around 8 of them actually carry real data anyway. In fact, I once used ImageMagick to try to convert them, and this resulted in an all-white image; I've since read that I should use "-auto-level" because the data does not actually span the full 16-bit range.
I greatly appreciate your help, and apologize for my lack of knowledge.
P.S.: Does anyone have any tips on how to make my Python program easy for non-programmers to install? Is there a way, for example, to somehow bundle it with Python and pygame so it's only one install? Can this be done for both Windows and Mac? Thank you.
EDIT: I tried to open it in GIMP, and got 3 errors:
1) Incorrect count for field "DateTime" (27, expecting 20); tag trimmed
2) Sorry, can not handle images with 12-bit samples
3) Unsupported layout, no RGBA loader
What does this mean and how do I fix it?
py2exe is the way to go for packaging up your application if you are on a Windows system.
Regarding the 16-bit TIFF issue:
This example http://ubuntuforums.org/showthread.php?t=1483265 shows how to convert for display using PIL.
Now for the unasked part of the question: when doing image analysis, you want to maintain the highest dynamic range possible for as long as possible in your image manipulations; you lose less information that way. As you may or may not be aware, PIL provides many filters/transforms that let you enhance the contrast of an image, even out light levels, or perform edge detection. A future direction you might want to consider is displaying the original image (scaled to 8-bit, of course) alongside a version that has been processed for edge detection.
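The "scaled to 8-bit" step can be a simple contrast stretch in NumPy; this is also essentially what ImageMagick's auto-level pass does for the all-white 16-bit scans mentioned earlier (a sketch, assuming a 2-D uint16 frame; autolevel_to_uint8 is an illustrative name):

```python
import numpy as np

def autolevel_to_uint8(img16):
    """Stretch whatever range the data actually occupies onto 0..255."""
    lo = int(img16.min())
    hi = int(img16.max())
    span = max(hi - lo, 1)  # guard against dividing by zero on a flat image
    scaled = (img16.astype(np.float64) - lo) * (255.0 / span)
    return scaled.round().astype(np.uint8)

# Example: 12 bits of real data sitting in a 16-bit container.
frame = np.random.randint(0, 4096, size=(64, 64), dtype=np.uint16)
frame8 = autolevel_to_uint8(frame)
print(frame8.dtype, frame8.min(), frame8.max())
```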
Check out http://code.google.com/p/pyimp/wiki/screenshots for some more examples and sample code.
I would look at pylibtiff, which has a pure python tiff reader.
For bundling, your best bet is probably py2exe and py2app.
This is actually a two-part question:
1) 16-bit image data mangling in Python: I usually use GDAL + NumPy. That might be too much for your requirements; you can use PIL + NumPy instead.
2) Release engineering Python apps can get messy. Depending on how complex your app is, you may get away with py2deb, py2app, or py2exe. Learning distutils will help too.