Working with TIFFs (import, export) in Python using numpy

I need a python method to open and import TIFF images into numpy arrays so I can analyze and modify the pixel data and then save them as TIFFs again. (They are basically light intensity maps in greyscale, representing the respective values per pixel)
I couldn't find any documentation on PIL methods concerning TIFF. I tried to figure it out, but only got "bad mode" or "file type not supported" errors.
What do I need to use here?

First, I downloaded a test TIFF image called a_image.tif from this page. Then I opened it with PIL like this:
>>> from PIL import Image
>>> im = Image.open('a_image.tif')
>>> im.show()
This showed the rainbow image. To convert to a numpy array, it's as simple as:
>>> import numpy
>>> imarray = numpy.array(im)
We can see that the size of the image and the shape of the array match up:
>>> imarray.shape
(44, 330)
>>> im.size
(330, 44)
And the array contains uint8 values:
>>> imarray
array([[  0,   1,   2, ..., 244, 245, 246],
       [  0,   1,   2, ..., 244, 245, 246],
       [  0,   1,   2, ..., 244, 245, 246],
       ...,
       [  0,   1,   2, ..., 244, 245, 246],
       [  0,   1,   2, ..., 244, 245, 246],
       [  0,   1,   2, ..., 244, 245, 246]], dtype=uint8)
Once you're done modifying the array, you can turn it back into a PIL image like this:
>>> Image.fromarray(imarray)
<Image.Image image mode=L size=330x44 at 0x2786518>
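To write the modified array back to disk as a TIFF, you can call save() on the resulting image; a minimal sketch (the output filename is just an example):
>>> out = Image.fromarray(imarray)
>>> out.save('a_image_modified.tif')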

I use matplotlib for reading TIFF files:
import matplotlib.pyplot as plt
I = plt.imread(tiff_file)
and I will be of type ndarray.
According to the documentation, though, it is actually PIL that works behind the scenes when handling TIFFs, since matplotlib only reads PNGs natively; but this has been working fine for me.
There's also a plt.imsave function for saving.
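For saving, a minimal sketch with plt.imsave (the output filename and the grayscale colormap are assumptions for single-channel data):
plt.imsave('out.tif', I, cmap='gray')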

You could also use GDAL to do this. I realize that it is a geospatial toolkit, but nothing requires you to have a cartographic product.
There are precompiled GDAL binaries for Windows (assuming Windows here).
To access the array:
from osgeo import gdal
dataset = gdal.Open("path/to/dataset.tiff", gdal.GA_ReadOnly)
for x in range(1, dataset.RasterCount + 1):
    band = dataset.GetRasterBand(x)
    array = band.ReadAsArray()
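Writing an array back out with GDAL goes through a driver; a minimal sketch, assuming a single-band array like the one read above (the output filename and Float32 type are just examples):
driver = gdal.GetDriverByName('GTiff')
rows, cols = array.shape
out_ds = driver.Create('output.tif', cols, rows, 1, gdal.GDT_Float32)
out_ds.GetRasterBand(1).WriteArray(array)
out_ds.FlushCache()
out_ds = None  # closing the dataset writes the file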

PyLibTiff worked better for me than PIL, which as of January 2023 still doesn't support color images with more than 8 bits per color.
from libtiff import TIFF

tif = TIFF.open('filename.tif')  # open tiff file in read mode
# read an image in the current TIFF directory as a numpy array
image = tif.read_image()
# read all images in a TIFF file:
for image in tif.iter_images():
    pass

# open the file again in write mode and write the array back
tif = TIFF.open('filename.tif', mode='w')
tif.write_image(image)
You can install PyLibTiff with
pip3 install numpy libtiff
The readme of PyLibTiff also mentions the tifffile library but I haven't tried it.

For image stacks, I find it easier to use scikit-image for reading and matplotlib for showing or saving. I have handled 16-bit TIFF image stacks with the following code.
from skimage import io
import matplotlib.pyplot as plt
# read the image stack
img = io.imread('a_image.tif')
# show the image
plt.imshow(img,cmap='gray')
plt.axis('off')
# save the image
plt.savefig('output.tif', transparent=True, dpi=300, bbox_inches="tight", pad_inches=0.0)
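Note that plt.savefig re-renders the figure (axes, colormap, DPI) rather than writing the raw pixel values. If you want to keep the original 16-bit data, a minimal sketch using scikit-image's own writer (the output filename is just an example):
io.imsave('output_raw.tif', img)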

You can also use pytiff of which I am the author.
import pytiff

with pytiff.Tiff("filename.tif") as handle:
    part = handle[100:200, 200:400]

# multipage tif
with pytiff.Tiff("multipage.tif") as handle:
    for page in handle:
        part = page[100:200, 200:400]
It's a fairly small module and may not have as many features as other modules, but it supports tiled TIFFs and BigTIFF, so you can read parts of large images.

There is a nice package called tifffile which makes working with .tif or .tiff files very easy.
Install package with pip
pip install tifffile
Now, to read .tif/.tiff file in numpy array format:
import tifffile
image = tifffile.imread('path/to/your/image')
# type(image) = numpy.ndarray
If you want to save a numpy array as a .tif/.tiff file:
tifffile.imwrite('my_image.tif', my_numpy_data, photometric='rgb')
or, using the older imsave alias (now deprecated in favor of imwrite):
tifffile.imsave('my_image.tif', my_numpy_data)
You can read more about this package here.
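tifffile also handles multi-page stacks directly; a minimal sketch, assuming a 16-bit grayscale stack (the shape and filenames here are made up):
import numpy as np
import tifffile

stack = np.zeros((10, 512, 512), dtype=np.uint16)  # 10 pages of 512x512
tifffile.imwrite('stack.tif', stack)  # writes a multi-page TIFF
pages = tifffile.imread('stack.tif')  # reads back as a (10, 512, 512) array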

Using cv2
import cv2
image = cv2.imread('tiff_file.tif')
cv2.imshow('tif image', image)
cv2.waitKey(0)
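Note that with the default flag cv2.imread converts everything to an 8-bit, 3-channel BGR image; to keep the file's original bit depth and channel count, and to write a result back out, a minimal sketch (the filenames are just examples):
image = cv2.imread('tiff_file.tif', cv2.IMREAD_UNCHANGED)  # preserve dtype and channels
cv2.imwrite('tiff_out.tif', image)  # write the array back to a TIFF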

If you want to save the TIFF with GeoTIFF encoding, you can use the rasterio package.
A simple example:
import numpy as np
import rasterio

out = np.random.randint(low=10, high=20, size=(360, 720)).astype('float64')
new_dataset = rasterio.open('test.tiff', 'w', driver='GTiff',
                            height=out.shape[0], width=out.shape[1],
                            count=1, dtype=str(out.dtype))
new_dataset.write(out, 1)
new_dataset.close()
For more detail about writing numpy arrays to GeoTIFF with rasterio, see: https://gis.stackexchange.com/questions/279953/numpy-array-to-gtiff-using-rasterio-without-source-raster
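A proper GeoTIFF also carries a coordinate reference system and an affine transform; a minimal sketch of passing them to rasterio.open (the EPSG code and the origin/resolution values here are made-up assumptions, not from the original answer):
from rasterio.transform import from_origin

transform = from_origin(west=-180.0, north=90.0, xsize=0.5, ysize=0.5)
new_dataset = rasterio.open('test_geo.tiff', 'w', driver='GTiff',
                            height=out.shape[0], width=out.shape[1],
                            count=1, dtype=str(out.dtype),
                            crs='EPSG:4326', transform=transform)
new_dataset.write(out, 1)
new_dataset.close()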

I recommend using the Python bindings to OpenImageIO; it's the standard for dealing with various image formats in the VFX world. I've often found it more reliable than PIL when reading various compression types.
import OpenImageIO as oiio
inp = oiio.ImageInput.open("/path/to/image.tif")
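To actually get the pixels into a numpy array, a minimal sketch continuing from the open call above (read_image returns the pixel data as a numpy array; closing the handle afterwards is good practice):
pixels = inp.read_image()
inp.close()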

Another method of reading TIFF files is to use the TensorFlow I/O API:
import tensorflow as tf
import tensorflow_io as tfio

image = tf.io.read_file(image_path)
tf_image = tfio.experimental.image.decode_tiff(image)
print(tf_image.shape)
Output:
(512, 512, 4)
tensorflow documentation can be found here
For this module to work, a Python package called tensorflow-io has to be installed.
However, I couldn't find a way to view the output tensor (after converting it to an ndarray), as the output image had 4 channels. I tried converting it using cv2.cvtColor() with the flag cv2.COLOR_BGRA2BGR after looking at this post, but still wasn't able to view the image.
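One way around the 4-channel output is to simply drop the alpha channel by slicing before viewing; a minimal sketch, assuming eager execution so .numpy() is available on the tensor:
rgb = tf_image[..., :3].numpy()  # drop the alpha channel, keep RGB
print(rgb.shape)  # e.g. (512, 512, 3)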

None of the answers to this question worked for me, so I found another way to view tif/tiff files:
import rasterio
from matplotlib import pyplot as plt
src = rasterio.open("ch4.tif")
plt.imshow(src.read(1), cmap='gray')
The code above will let you view the TIFF file. Also check the following to be sure:
type(src.read(1)) #see that src.read(1) is a numpy array
src.read(1) #prints the matrix

Related

convert .nii to .tif using imwrite, it saves a black image instead of the image

I want to convert .nii images to .tif to train my model using U-Net.
1. I looped through all images in the folder.
2. I looped through all slices within each image.
3. I saved each slice as .tif.
The training images are converted successfully. However, the labels (masks) are all saved as black images. I want to successfully convert those masks from .nii to .tif, but I don't know how. I read that it could have something to do with brightness, but I didn't fully understand the idea, so I haven't been able to solve the problem.
The only reason for this conversion is to be able to train my model. Feel free to suggest a better idea, if anyone can share a way to feed the network with the .nii format directly.
import glob
import re
from pathlib import Path

import imageio
import nibabel as nib
import numpy as np

for filepath in glob.iglob('data/Task04_Hippocampus/labelsTr/*.nii.gz'):
    a = nib.load(filepath).get_fdata()
    a = a.astype('int8')
    base = Path(filepath).stem
    base = re.sub('.nii', '', base)
    x, y, z = a.shape
    for i in range(0, z):
        newimage = a[:, :, i]
        imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif', newimage)
Unless you absolutely have to use TIFF, I would strongly suggest using the NIfTI format for a number of important reasons:
Image values are often not arbitrary. For example, in CT images the values correspond to x-ray attenuation (check out this Wikipedia page). TIFF, which is likely to scale the values in some way, is not suitable for this.
NIfTI also contains a header which has crucial geometric information needed to correctly interpret the image, such as the resolution, slice thickness, and direction.
You can directly extract a numpy.ndarray from NIfTI images using SimpleITK. Here is a code snippet:
import SimpleITK as sitk
import numpy as np
img = sitk.ReadImage("your_image.nii")
arr = sitk.GetArrayFromImage(img)
slice_0 = arr[0,:,:] # this is a 2D axial slice as a np.ndarray
As an aside: the reason the images in which you stored your masks look black is that in NIfTI format labels have a value of 1 (and the background is 0). If you directly convert to TIFF, a value of 1 is very close to black when interpreted as an RGB value - another reason to avoid TIFF!
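If you do still need TIFF masks, the usual workaround is to rescale the labels before writing so they span the visible range; a minimal sketch, assuming binary 0/1 labels and reusing the names from the loop in the question:
newimage = a[:, :, i]
imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif',
                (newimage * 255).astype('uint8'))  # 0 stays black, 1 becomes white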

Python PIL read/open TIFF is black only

I try to read a TIFF file with pillow/PIL (7.2.0) in Python (3.8.3), e.g. this image.
The resulting file seems to be corrupted:
from PIL import Image
import numpy as np
myimage = Image.open('moon.tif')
myimage.mode
# 'L'
myimage.format
# 'TIFF'
myimage.size
# (358, 537)
# so far all good, but:
np.array(myimage)
# shows only zeros in the array, likewise
np.array(myimage).sum()
# 0
It doesn't seem to be a problem of the conversion to numpy array only, since if I save it to a jpg (myimage.save('moon.jpg')) the resulting jpg image has the appropriate dimensions but is all black, too.
Where did I go wrong, or is it a bug?
I am not an expert in coding, but I had the same problem and found that the TIFF file has 4 layers: R, G, B and alpha. When you convert it using PIL it appears black.
Try viewing a single channel, e.g. plt.imshow(np.array(myimage)[:, :, 0]).
You could also remove the alpha layer by reading the image (I used plt.imread('image')) and then keeping only the first three channels: image = image[:, :, :3]. Now it's an RGB image.
I don't know if I answered your question, but I felt this info might be of help.
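If the file really is RGBA, a simpler way to drop the alpha channel is to let PIL do the conversion; a minimal sketch using the file from the question:
from PIL import Image
import numpy as np

rgb = Image.open('moon.tif').convert('RGB')  # drops any alpha channel
arr = np.array(rgb)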

Matplotlib: Missing channel using imread

When I try to load an image that has three channels with matplotlib, the numpy shape command reports only one channel. This produces the following image:
Here is the code I used:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
img = mpimg.imread('dolphin.png')
plt.imshow(img)
plt.show()
img.shape
(320, 500)
I also followed the matplotlib image tutorial which uses the same commands as above.
Loading the image with opencv the result is an image with three channels, as expected.
import cv2
imgcv = cv2.imread('dolphin.png')
plt.imshow(imgcv)
plt.show()
imgcv.shape
(320, 500, 3)
I am using Python 3.5.6 with anaconda.
Here is a short output of the conda list command:
...
matplotlib 3.0.0
...
opencv3 3.1.0
...
pillow 5.2.0
...
The original image I used:
Am I missing a package, or is there another command to load a *.png file? Everything seems to work with *.jpg images.
As I see it, matplotlib's imread correctly reads in the image. If the image contains only a single channel, the resulting numpy array will be 2D. If the image contains 3 or 4 channels, the numpy array will be 3D.
Taking the dolphin image from the question you get
plt.imread("https://i.stack.imgur.com/cInHj.png").shape
> (320, 500)
Concerning the stinkbug image from the matplotlib documentation there is indeed a little problem. The image you see is a grey scale image as well,
plt.imread("https://matplotlib.org/_images/stinkbug.png").shape
> (375, 500)
However, the tutorial claims it to be a 3-channel image. This is correct from the point of view of the tutorial, because it takes the image from the doc folder of the GitHub repository.
plt.imread("https://raw.githubusercontent.com/matplotlib/matplotlib/master/doc/_static/stinkbug.png").shape
> (375, 500, 3)
The problem is that the documentation is built through sphinx and sphinx-gallery and in addition may use some other libraries. In the course of this, the image is not copied in its raw format to the output folder. This problem has already been reported here; the reason has not yet been fully tracked down.
In any case, the remaining open question is then, why does cv2.imread give you a 3D array for a greyscale image?
From the OpenCV imread documentation:
Second argument is a flag which specifies the way image should be read.
cv2.IMREAD_COLOR : Loads a color image. Any transparency of image will be neglected. It is the default flag.
cv2.IMREAD_GRAYSCALE : Loads image in grayscale mode
cv2.IMREAD_UNCHANGED : Loads image as such including alpha channel
Note Instead of these three flags, you can simply pass integers 1, 0 or -1 respectively.
So here you need to specify yourself, which mode you want to use.
Let's verify:
import cv2
import urllib.request as req
dolphinurl ="https://i.stack.imgur.com/cInHj.png"
stinkbugweburl = "https://matplotlib.org/_images/stinkbug.png"
stinkbuggiturl = "https://raw.githubusercontent.com/matplotlib/matplotlib/master/doc/_static/stinkbug.png"
def printshape(url, flags=cv2.IMREAD_COLOR):
    req.urlretrieve(url, "image_name.png")
    im = cv2.imread("image_name.png", flags)
    print(im.shape)

printshape(dolphinurl)
printshape(stinkbugweburl)
printshape(stinkbuggiturl)
This prints
(320, 500, 3)
(375, 500, 3)
(375, 500, 3)
while if you specify greyscale,
printshape(dolphinurl, 0)
printshape(stinkbugweburl, 0)
printshape(stinkbuggiturl, 0)
it'll print
(320, 500)
(375, 500)
(375, 500)
In that sense it's up to the user to decide how they want to read in the image.
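If downstream code expects a 3-channel array even for a grayscale file, one option (a common numpy idiom, not from the original answer) is to replicate the single channel; a minimal sketch, where img is the 2D array returned by mpimg.imread:
import numpy as np
img3 = np.stack([img, img, img], axis=-1)  # (320, 500) -> (320, 500, 3)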

GeoTIFF issue with opening in PIL

Every time I open a GeoTIFF image of an orthophoto in Python (I tried PIL, matplotlib, scipy, OpenCV), the image gets messed up. Some corners are being cropped, although the image retains its original shape. If I manually convert the TIFF to, for instance, a PNG in Photoshop and load that, it works correctly. So it seems like PIL has some trouble handling TIFF files with objects that do not fill the entire canvas. Does anyone have a solution for this problem?
Part of original Image:
After opening:
It would have been really nice if you had put a link to the figure that you are using (if it's free). I downloaded a sample GeoTIFF image from here, and I used GDAL to open it.
The shape of the geotiff.ReadAsArray() is (3, 1024, 2048) so I convert it to (1024, 2048, 3) (RGB) and open it with imshow:
from osgeo import gdal
gdal.UseExceptions()
import matplotlib.pyplot as plt
import numpy as np

geotiff = gdal.Open('/home/vafanda/Downloads/test.tif')
geotiff_arr = geotiff.ReadAsArray()
print(np.shape(geotiff_arr))
geotiff_shifted = np.rollaxis(geotiff_arr, 0, 3)
print("Dimension converted to: ")
print(np.shape(geotiff_shifted))
plt.imshow(geotiff_shifted)
plt.show()
result:

Read 16-bit PNG image file using Python

I'm trying to read a PNG image file written in 16-bit data type. The data should be converted to a NumPy array. But I have no idea how to read the file in '16-bit'. I tried with PIL and SciPy, but they converted the 16-bit data to 8-bit when they load it. Could anyone please let me know how to read data from a 16-bit PNG file and convert it to NumPy array without changing the datatype?
The following is the script that I used.
from scipy import misc
import numpy as np
from PIL import Image
#make a png file
a = np.zeros((1304,960), dtype=np.uint16)
a[:] = np.arange(960)
misc.imsave('16bit.png',a)
#read the png file using scipy
b = misc.imread('16bit.png')
print "scipy:" ,b.dtype
#read the png file using PIL
c = Image.open('16bit.png')
d = np.array(c)
print "PIL:", d.dtype
I'd recommend using opencv:
pip install opencv-python
and
import cv2
image = cv2.imread('16bit.png', cv2.IMREAD_UNCHANGED)
In contrast to OpenImageIO, OpenCV can be installed from pip.
The time required to read a single 4000x4000 PNG is about the same as with PIL, but PIL uses more CPU and requires additional time to convert the data back to uint16.
I have the same problem here. I even tested it with 16-bit images I created myself. All of them were opened correctly when I loaded them with the png package, and the output of 'file' looked okay as well.
Opening them with PIL always led to 8-bit numpy arrays.
Working with Python 2.7.6 on Linux, btw.
Like this it works for me:
import png
import numpy as np

reader = png.Reader('path-to-16bit-png')
pngdata = reader.read()
px_array = np.array(map(np.uint16, pngdata[2]))
print(px_array.dtype)
Maybe someone can give more information under which circumstances the former approach worked? (as this one is pretty slow)
Thanks in advance.
The simplest solution I've found:
When I open a 16-bit monochrome PNG with Pillow, it doesn't open correctly in I;16 mode; Image.mode is reported as I (32 bits).
So the best way is to convert it to a numpy array: the data comes in as dtype="int32", so we convert it to dtype="uint16".
import numpy as np
from PIL import Image
im = Image.fromarray(np.array(Image.open(name)).astype("uint16"))
print("Image mode: ", im.mode)
Tested in Python 3.6.8 with Pillow 6.1.0
This happens because PIL does not support 16-bit data, as explained here: http://effbot.org/imagingbook/concepts.htm
I use a workaround with the osgeo gdal package (which can read PNG).
#Import
import numpy as np
from osgeo import gdal
#Read in PNG file as 16-bit numpy array
lon_offset_px=0
lat_offset_px=0
fn = 'filepath'
gdo = gdal.Open(fn)
band = gdo.GetRasterBand(1)
xsize = band.XSize
ysize = band.YSize
png_array = gdo.ReadAsArray(lon_offset_px, lat_offset_px, xsize, ysize)
png_array = np.array(png_array)
This will return
png_array.dtype
dtype('uint16')
A cleaner way I found is using the skimage package.
from skimage import io
im = io.imread(jpg)
Where 'im' will be a numpy array.
Note: I haven't tested this with PNG but it works with TIFF files
I'm using the png module:
First, install pypng:
>pip install pypng
Then
import png
import numpy as np

reader = png.Reader('16bit.png')
data = reader.asDirect()
pixels = data[2]
image = []
for row in pixels:
    row = np.asarray(row)
    row = np.reshape(row, [-1, 3])
    image.append(row)
image = np.stack(image, 1)
print(image.dtype)
print(image.shape)
Another option to consider, based on Mr. Fridy's answer, is to load it using pypng like this:
import png
pngdata = png.Reader("path/to/16bit.png").read_flat()
img = np.array(pngdata[2]).reshape((pngdata[1], pngdata[0], -1))
You can install pypng using pip:
pip install pypng
The dtype from png.Reader.read_flat() is correctly uint16 and the reshaping of the np.ndarray puts it into (height, width, channels) format.
I've been playing with this image using PIL version 5.3.0:
It reads the data just fine:
>>> image = Image.open('/home/jcomeau/Downloads/grayscale_example.png')
>>> image.mode
'I'
>>> image.getextrema()
(5140, 62708)
>>> image.save('/tmp/test.png')
and it saves in the right mode, however the contents are not identical:
jcomeau@aspire:~$ diff /tmp/test.png ~/Downloads/grayscale_example.png
Binary files /tmp/test.png and /home/jcomeau/Downloads/grayscale_example.png differ
jcomeau@aspire:~$ identify /tmp/test.png ~/Downloads/grayscale_example.png
/tmp/test.png PNG 85x63 85x63+0+0 16-bit sRGB 6.12KB 0.010u 0:00.000
/home/jcomeau/Downloads/grayscale_example.png PNG 85x63 85x63+0+0 16-bit sRGB 6.14KB 0.000u 0:00.000
However, image.show() always converts to 8-bit grayscale, clamped at 0 and 255, so it's useless for seeing what you've got at any stage of the transformation. While I could write a routine to do so, and perhaps even monkeypatch .show(), I just run the display command in another xterm.
>>> image.putdata([n - 32768 for n in image.getdata()])
>>> image.getextrema()
(-27628, 29940)
>>> image.save('/tmp/test2.png')
note that converting to mode I;16 doesn't help:
>>> image.convert('I;16').save('/tmp/test3.png')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jcomeau/.local/lib/python2.7/site-packages/PIL/Image.py", line 1969, in save
save_handler(self, fp, filename)
File "/home/jcomeau/.local/lib/python2.7/site-packages/PIL/PngImagePlugin.py", line 729, in _save
raise IOError("cannot write mode %s as PNG" % mode)
IOError: cannot write mode I;16 as PNG
You can also use the excellent OpenImageIO library's Python API.
import OpenImageIO as oiio
img_input = oiio.ImageInput.open("test.png") # Only reads the image header
pix = img_input.read_image(format="uint16") # Reads the pixels into a Numpy array
OpenImageIO is used extensively in the VFX industry, so most Linux distros come with a native package for it. Unfortunately the otherwise excellent documentation is in PDF format (I personally prefer HTML); look for it in /usr/share/doc/OpenImageIO.
The imageio library supports 16-bit images:
from imageio import imread, imwrite
import numpy as np
from PIL import Image
#make a png file
a = np.arange(65536, dtype=np.uint16).reshape(256,256)
imwrite('16bit.png',a)
#read the png file using imageio
b = imread('16bit.png')
print("imageio:" ,b.dtype)
#imageio: uint16
#read the png file using PIL
c = Image.open('16bit.png')
d = np.array(c)
print("PIL:", d.dtype)
# PIL: int32
Using imagemagick:
>> identify 16bit.png
16bit.png PNG 256x256 256x256+0+0 16-bit Grayscale Gray 502B 0.000u 0:00.000
I suspect your "16 bit" PNG is not 16-bit. (If you're on Linux or Mac you could run file 16bit.png and see what it says.)
When I use PIL and numpy I get a 32-bit array with 16-bit values in it:
import PIL.Image
import numpy
image = PIL.Image.open('16bit.png')
pixel = numpy.array(image)
print "PIL:", pixel.dtype
print max(max(row) for row in pixel)
the output is:
PIL: int32
65535
