I'm trying to make a simple application that fetches radar images from NOAA, then stacks them to make an image for display. I've never used the Pillow library before, but I do have some experience with Python.
The images are GIF files with transparent parts. You can see example images at the URLs in my code below. I have a white background image on my computer, plus an overlay made by stacking several of the overlays from their site into a single transparent image.
Here is my current code:
from PIL import Image
import urllib.request
urllib.request.urlretrieve('http://radar.weather.gov/ridge/Warnings/Short/EWX_Warnings_0.gif', 'warnings_now.gif')
urllib.request.urlretrieve('http://radar.weather.gov/ridge/RadarImg/N0R/EWX_N0R_0.gif', 'radar_now.gif')
radar = Image.open('radar_now.gif')
warnings = Image.open('warnings_now.gif')
background = Image.open('Background.gif')
overlay = Image.open('Overlay.gif')
background.paste(radar, (0,0), radar)
background.paste(overlay, (0,0), overlay)
background.paste(warnings, (0,0), warnings)
background.save('radar_final.gif', 'GIF')
I've read in many places, both here and on other sites, that to properly stack transparent images you need to call image_name.paste(transparent_image, (0,0), transparent_image), and that by passing the image itself as the third parameter, PIL will interpret its alpha channel as a mask. However, whenever I run this, I get the following error instead of an output image.
Traceback (most recent call last):
File "radar2.py", line 9, in <module>
background.paste(radar, (0,0), radar)
File "/usr/lib64/python3.3/site-packages/PIL/Image.py", line 1314, in paste
self.im.paste(im, box, mask.im)
ValueError: bad transparency mask
I've searched all over for a solution, and I've only come across solutions for a similar error with PNG images, where they were converted to RGBA mode. When I tried this with my images, I got an identical error (except for line numbers). I wasn't expecting it to work anyway, as I've read that GIF images are only P or L mode.
These are all P mode images, and they all have one alpha layer. I also tried splitting off the alpha layer to use as a mask, but that also resulted in a ValueError: bad transparency mask.
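For what it's worth, a workaround that often comes up for this exact error (an assumption on my part, not verified against these specific NOAA files) is to convert each P-mode GIF to RGBA first, so the palette transparency becomes a real alpha channel that paste() can use as a mask. A minimal sketch with small synthetic in-memory images standing in for the downloaded frames:

```python
from PIL import Image

# Stand-ins for the downloaded layers. In the real script this would be
# Image.open('radar_now.gif').convert('RGBA'), which turns the GIF's
# palette transparency into a true alpha channel.
background = Image.new('RGBA', (4, 4), (255, 255, 255, 255))  # white canvas
radar = Image.new('RGBA', (4, 4), (0, 0, 0, 0))               # fully transparent
radar.putpixel((1, 1), (255, 0, 0, 255))                      # one opaque "echo"

# With an RGBA image as the third argument, its alpha channel is the mask.
background.paste(radar, (0, 0), radar)
print(background.getpixel((1, 1)))  # (255, 0, 0, 255): opaque pixel pasted
print(background.getpixel((0, 0)))  # (255, 255, 255, 255): white untouched

# Flatten back to palette mode before saving as GIF.
final = background.convert('P')
```

The same convert('RGBA') call would apply to the radar, warnings, and overlay layers alike before the paste sequence in the question.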
I can't seem to find much about transparent GIF images and Pillow. There is one suggestion that keeps popping up about converting to PNG, but its solutions don't seem relevant here.
Thanks for any help!
Note: This is on a 64-bit Gentoo Linux system running Python 3.3 and whatever is the most recent version of Pillow in the Gentoo repos.
I'm running into an interesting issue with OpenCV in Python. I noticed that the luminance in the red channel is significantly reduced when I load an image using OpenCV and immediately save it without doing any further processing.
I know that JPEG is a lossy format, and that I should expect quality to degrade if I read/write a bunch of times. However, if I open the image in Preview (macOS) and export from there, I don't see a difference, even if I export at the lowest possible quality. If I use OpenCV to do the exact same thing, the difference in the red channel is enormous (much more than you would expect from the compression).
I tried changing all the flags (IMWRITE_JPEG_QUALITY, IMWRITE_JPEG_LUMA_QUALITY, IMREAD_ANYCOLOR, IMREAD_UNCHANGED, etc.). I could not find any combination that prevents this effect.
To test whether it was just the red channel, I tried this with a picture of a green apple (no difference) and a picture of an orange (red reduction noticeable by eye). To make sure it has nothing to do with the image file itself (corrupted?), I also took a screenshot of the orange, exported it as a new JPEG file, and tried the read/write in Python; again the color was changed.
I attached the images before/after of the orange. The difference becomes very clear if you open both images in their own tab and switch back and forth.
If I read an already processed image (with reduced red) using opencv, and export it a second time, the red is not reduced any further. It only happens when the image is processed by opencv the first time.
This is my code:
import cv2
loc = '/imagetest/'
img = cv2.imread(loc + 'orange.jpg', cv2.IMREAD_UNCHANGED)
cv2.imwrite(loc + 'orange_processed.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 100])
I have a project that uses Node.js to upload pictures to a cloud server. Then a Python script rotates and resizes them with PIL. For some reason, there are always a few images (jpg/jpeg) that throw an error about the image being truncated:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/PIL/Image.py", line 1960, in save
self.load()
File "/usr/local/lib/python3.4/dist-packages/PIL/ImageFile.py", line 238, in load
len(b))
OSError: image file is truncated (28 bytes not processed)
From my research, most people just use try/except to find these bad images and get new ones from the source (which in my case are Android phones and iPhones). However, I am able to open these pictures in the browser and in other image programs, and they look just fine! So it seems like most applications can deal with a certain amount of image corruption. I am aware that I can force PIL to process truncated images, but then they have nasty areas of grey:
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
Is there a way to avoid these (sometimes huge %) grey areas or actually have PIL fix the image?
If not, can I add that as a feature request? I might even be able to take on the challenge, but I don't know how difficult it is. Would anyone be willing to guide me?
It would be fantastic if there were just a tiny (28-byte) grey box (or one filled with the colors of the nearby pixels) covering what is truncated, instead of the entire rest of the image. Some images that I process end up about 90% grey box.
Or, does anyone know of another Python module or CLI application (maybe GIMP) that can deal with a minor corruption issue and re-save the image?
I have tried re-opening the image when there is an error, but that doesn't seem to help.
Thank you!
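One low-tech triage step (my own suggestion, not something PIL provides) relies on the fact that a well-formed JPEG stream ends with the EOI marker FF D9; files missing it can be flagged for re-upload before PIL ever raises. A sketch, demonstrated with stand-in byte strings rather than real photos:

```python
import os
import tempfile

def jpeg_looks_complete(path):
    """Heuristic: a well-formed JPEG stream ends with the EOI marker FF D9.

    Trailing padding after EOI is common, so scan the file's tail rather
    than only the final two bytes.
    """
    with open(path, 'rb') as f:
        data = f.read()
    return b'\xff\xd9' in data[-1024:]

# Demo files: fake "JPEG" byte strings, one complete and one cut off.
good = tempfile.NamedTemporaryFile(suffix='.jpg', delete=False)
good.write(b'\xff\xd8 payload \xff\xd9'); good.close()
bad = tempfile.NamedTemporaryFile(suffix='.jpg', delete=False)
bad.write(b'\xff\xd8 payload cut off'); bad.close()

ok_good = jpeg_looks_complete(good.name)
ok_bad = jpeg_looks_complete(bad.name)
print(ok_good, ok_bad)  # True False

os.unlink(good.name); os.unlink(bad.name)
```

This only detects truncation; repairing the missing region (rather than rendering it grey) would still require re-fetching the original or a dedicated JPEG recovery tool.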
I use cv2.imread to read a PNG file in Python. When I then use cv2.imwrite to immediately save the image, I find that the colours in the image have changed slightly. I am trying to perform character recognition on this image, and the OCR performs far less well on the image in Python than on the original image.
The first image is the original, and the second is the saved one with OpenCV.
We can see that the green has changed slightly, and whilst this does not seem important, it affects the OCR; I therefore imagine that other changes are happening to the PNG. Does anyone know why this might be and how I can resolve it?
The code is as follows:
img = cv2.imread('file.png')
cv2.imwrite('out.png', img)
When I run file.png in tesseract for character recognition I get great results but when I run out.png in tesseract far less words get recognised correctly.
When you have a .png image file, you ought to read it in a way that preserves all of its channels.
I downloaded your image and did some analysis myself.
First, I read the image as you did:
img = cv2.imread('file.png')
img.shape returns (446, 864, 3), i.e. an image with 3 channels.
Next I read the same image using cv2.IMREAD_UNCHANGED:
img = cv2.imread('file.png', cv2.IMREAD_UNCHANGED)
img.shape returns (446, 864, 4), i.e. an image with 4 channels.
.png files can carry an additional transparency (alpha) channel. So the next time you come across a .png file, read it using the cv2.IMREAD_UNCHANGED flag.
UPDATE:
Listing the various ways to read an image:
for var in dir(cv2):
    if var.startswith('IMREAD'):
        print(var)
returns:
IMREAD_ANYCOLOR
IMREAD_ANYDEPTH
IMREAD_COLOR
IMREAD_GRAYSCALE
IMREAD_LOAD_GDAL
IMREAD_UNCHANGED
The current goal of what I am writing is to download an image from S3, open it using Pillow, and show it. I am able to download the image fine; it can be viewed properly in my photos app. However, when I try to use the image with PIL, all of the pixel values are black. It does, however, retain the shape of the image, which tells me that the image is at least being read. The code is shown below:
s3.Bucket(bucket).download_file(key, key) # downloaded perfectly fine
img = Image.open(key)
img.show() # shows all black, with the image's dimensions
I know I can read from bytes, but that gives me a 1-D array of all the bytes instead of the dimensions needed for an image.
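On the "read from bytes" point: wrapping the raw bytes in io.BytesIO lets Image.open parse them as a normal image file, dimensions intact, rather than treating them as a flat pixel array. A sketch with synthetic data in place of the S3 download (the bucket/key code above is the poster's):

```python
import io
from PIL import Image

# Pretend these bytes came from s3.Object(bucket, key).get()['Body'].read().
original = Image.new('RGB', (120, 80), (10, 200, 30))
buf = io.BytesIO()
original.save(buf, 'PNG')
raw_bytes = buf.getvalue()

# Image.open accepts any file-like object and keeps the 2-D structure.
img = Image.open(io.BytesIO(raw_bytes))
print(img.size)              # (120, 80)
print(img.getpixel((0, 0)))  # (10, 200, 30)
```

If the image opened this way still renders black, the problem is more likely in the file contents (e.g. a partial download) than in how it is being opened.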
I am trying to build an app that does two things:
Get a live feed from a webcam and display it using OpenCV. (I tried an IP camera but gave up; it's still not working.)
Plot a chart on the basis of the video input.
The webcam feed is working; I am able to display it using imshow() and namedWindow().
I have made the chart using Tkinter.
I want to combine the two outputs above in a single frame. Is it possible to do so, and what do I use to embed them in a single window?
Please note I am using Python and developing on Windows.
You can combine two or more output windows into a single output window using NumPy's stacking functions.
Reference links:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html
http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html#numpy.vstack
Sample code:
import cv2
import numpy as np

# Read the two images to combine; they must have identical dimensions.
img1 = cv2.imread('Bird1.jpg')
img2 = cv2.imread('Bird2.jpg')

# Place the images side by side and display them in a single window.
img_stack = np.hstack((img1, img2))
cv2.imshow('Image Stack', img_stack)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note:
You can combine any number of output windows into a single one. To do this, the input images' height, width, and number of channels must be the same.
By channels I mean: if one image is in RGB mode, all images should be in RGB mode. You cannot combine an RGB image and a grayscale image into a single window.
Like images, you may also stack video frames.