TIFF image saving a black image - Python

I have loaded a uint8 TIFF image and tried to convert it to a uint16 TIFF image.
The conversion itself works, but when I save the image using tiff.imsave the exported file is a completely black image. I am not sure what I am doing wrong. Kindly guide.
from skimage.io import imread
from skimage.io import imshow
import numpy as np
import cv2
import PIL
import glob, os
import tifffile as tiff

Image_input = tiff.imread("Input.tif")          # uint8 input
imshow(Image_input)
Image_output = Image_input.astype(np.uint16)    # cast to uint16 (pixel values unchanged)
imshow(Image_output)
tiff.imsave('Output.tif', Image_output)         # saved file looks completely black

In a uint8 image, all the pixels lie in the range 0..255. In a uint16 image, all the pixels lie in the range 0..65535.
So the brightest pixel (255) in your input image is only 255/65535, i.e. about 0.4% of full brightness, which is nearly black in your output image.
You will probably want to scale your image, multiplying by 255 or shifting left 8 bits, to give it comparable brightness.
Note that there is no gain in luminosity resolution from this operation, so I hope there is some other, unmentioned purpose to moving to 16-bit.
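For illustration, a minimal sketch of that scaling, reusing the file names from the question; shifting left by 8 bits multiplies by 256 (255 becomes 65280), while multiplying by 257 maps 255 exactly to 65535:
import numpy as np
import tifffile as tiff

image_input = tiff.imread("Input.tif")                     # uint8, values 0..255
image_output = image_input.astype(np.uint16) << 8          # shift left 8 bits: 255 -> 65280
# or: image_output = image_input.astype(np.uint16) * 257   # exact: 255 -> 65535
tiff.imsave("Output.tif", image_output)
Either way the saved 16-bit image has a brightness comparable to the 8-bit original.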

Related

Convert .nii to .tif using imwrite: it saves a black image instead of the image

I want to convert .nii images to .tif to train my model using U-Net.
1. I looped through all images in the folder.
2. I looped through all slices within each image.
3. I saved each slice as .tif.
The training images are converted successfully. However, the labels (masks) are all saved as black images. I want to convert those masks from .nii to .tif correctly, but I don't know how. I read that it could be something to do with brightness, but I didn't understand the idea clearly, so I haven't been able to solve the problem.
The only reason for this conversion is to be able to train my model. Feel free to suggest a better idea, if anyone can share a way to feed the network with the .nii format directly.
from pathlib import Path
import glob
import re
import imageio
import nibabel as nib
import numpy as np

for filepath in glob.iglob('data/Task04_Hippocampus/labelsTr/*.nii.gz'):
    a = nib.load(filepath).get_fdata()
    a = a.astype('int8')
    base = Path(filepath).stem          # strips the .gz suffix
    base = re.sub('.nii', '', base)     # strips the remaining .nii suffix
    x, y, z = a.shape
    for i in range(0, z):
        newimage = a[:, :, i]
        imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif', newimage)
Unless you absolutely have to use TIFF, I would strongly suggest using the NIfTI format, for a number of important reasons:
Image values are often not arbitrary. For example, in CT images the values correspond to x-ray attenuation (check out this Wikipedia page). TIFF, which is likely to scale the values in some way, is not suitable for this.
NIfTI also contains a header which has crucial geometric information needed to correctly interpret the image, such as the resolution, slice thickness, and direction.
You can directly extract a numpy.ndarray from NIfTI images using SimpleITK. Here is a code snippet:
import SimpleITK as sitk
import numpy as np
img = sitk.ReadImage("your_image.nii")
arr = sitk.GetArrayFromImage(img)
slice_0 = arr[0,:,:] # this is a 2D axial slice as a np.ndarray
As an aside: the reason your saved mask images look black is that in NIfTI format the labels have a value of 1 (and the background is 0). If you convert directly to TIFF, a value of 1 is very close to black when interpreted as a pixel intensity - another reason to avoid TIFF!
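If you do still need TIFF masks for training, a minimal sketch of making the labels visible when writing them (the array and file name here are made up for illustration, and binary 0/1 labels are assumed):
import numpy as np
import imageio

# Hypothetical mask slice: background 0, label 1, as described above.
mask_slice = np.zeros((64, 64), dtype=np.uint8)
mask_slice[20:40, 20:40] = 1

visible = mask_slice * 255                 # map label 1 -> 255 so the TIFF is not near-black
imageio.imwrite('mask_slice.tif', visible)
With more than two label values you would scale or colour-map them instead, and remember to undo the scaling before training.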

Some greyscale images imported using OpenCV are displayed as black, some as greyscale. Why?

I am facing a problem I do not understand.
I have greyscale images where the object of interest is grey and white, and the background is black.
Some images are displayed as just black, though, when I import them into Python using OpenCV.
Some images are displayed correctly.
See the code and examples below:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from imantics import Polygons, Mask
import imantics as imcs
import skimage
from shapely.geometry import Polygon as Pollygon
import matplotlib.image as mpimg
import PIL
mask1 = cv2.imread('mask1.jpg', 0)   # 0 == cv2.IMREAD_GRAYSCALE
mask2 = cv2.imread('mask2.jpg', 0)
cv2.imshow("title", mask1)
cv2.waitKey()
cv2.imshow("title", mask2)
cv2.waitKey()
Mask1 is displayed correctly.
Mask2 is not displayed correctly (only black).
I am performing further processing of the images, e.g. finding the contours of the grey and white areas, which is why I need to understand why there seems to be a problem with some of the imported images.
I figured out the answer.
The problem was due to the image size.
If I import mask2 as
mask2 = cv2.imread('mask2.jpg', 64)
it is displayed correctly; the flag 64 is cv2.IMREAD_REDUCED_GRAYSCALE_8, which loads the image at 1/8 of its original size. Previously the window was only showing a small part of the full-size image, which happened to be black.
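A short sketch of the same idea using the named constant instead of the magic number 64, plus an alternative that keeps the full resolution but lets the display window be resized (file name as in the question):
import cv2

# 64 == cv2.IMREAD_REDUCED_GRAYSCALE_8: load greyscale at 1/8 of the original size
mask2 = cv2.imread('mask2.jpg', cv2.IMREAD_REDUCED_GRAYSCALE_8)
cv2.imshow("title", mask2)
cv2.waitKey()

# Alternative: load at full size and make the window resizable so it fits on screen
full = cv2.imread('mask2.jpg', cv2.IMREAD_GRAYSCALE)
cv2.namedWindow("title", cv2.WINDOW_NORMAL)
cv2.imshow("title", full)
cv2.waitKey()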

How to change the grey scale value of a region in an image?

I am new to Python and not really sure how to attack this problem.
What I am trying to do is to take a black and white image and change the value of the edge (x pixels thick) from 255 to some other greyscale value.
I need to do this to a set of png images inside of a folder. All images will be geometric (mostly a combination of straight lines) no crazy curves or patterns. Using Python 3.
Please check the images.
A typical file will look like this:
https://drive.google.com/open?id=13ls1pikNsO7ZbsHatC6cOr4O6Fj0MPOZ
I think this is what you want. The comments should explain pretty well what is going on:
#!/usr/bin/env python3
import numpy as np
from PIL import Image, ImageFilter
from skimage.morphology import dilation, square
# Open input image and ensure it is greyscale
image = Image.open('XYbase.png').convert('L')
# Find the edges
edges = image.filter(ImageFilter.FIND_EDGES)
# Convert edges to Numpy array and dilate (fatten) with our square structuring element
selem = square(6)
fatedges = dilation(np.array(edges),selem)
# Make Numpy version of our original image and set all fatedges to brightness 128
imnp = np.array(image)
imnp[np.nonzero(fatedges)] = 128
# Convert Numpy image back to PIL image and save
Image.fromarray(imnp).save('result.png')
So, starting from the sample input image, the (intermediate) edges image shows just the detected outlines, and in the result those outlines are fattened and set to mid-grey (128).
If you want the outlines fatter/thinner, increase/decrease the 6 in:
selem = square(6)
If you want the outlines lighter/darker, increase/decrease the 128 in:
imnp[np.nonzero(fatedges)] = 128
Keywords: image, image processing, fatten, thicken, outline, trace, edge, highlight, Numpy, PIL, Pillow, edge, edges, morphology, structuring element, skimage, scikit-image, erode, erosion, dilate, dilation.
I can interpret your question in a much simpler way, so I thought I'd answer that simpler question too. Maybe you already have a grey-ish edge around your shapes (like the Google drive files you shared) and just want to change all pixels that are neither black nor white into a different colour - and the fact that they are edges is irrelevant. That is much easier:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open input image and ensure it is greyscale
image = Image.open('XYBase.png').convert('L')
# Make Numpy version
imnp = np.array(image)
# Set all pixels that are neither black nor white to 220
imnp[(imnp>0) & (imnp<255)] = 220
# Convert Numpy image back to PIL image and save
Image.fromarray(imnp).save('result.png')

Image resizing changes color and texture

I'm cutting up a large image into smaller overlapping patches, and this 500x500x3 gray square is part of the background.
I'm loading the image into numpy, attempting to scale it down to 100x100x3, and then saving the image using the following code:
import numpy as np
from scipy import misc
import skimage.transform
from matplotlib.image import imread
im = imread(originalPath)
arr = np.array(im)
resized = skimage.transform.resize(arr,(100,100,3))
misc.imsave("resized.tif", resized)
The saved image is the correct size, but becomes purple and pink rather than the original gray.
I've also tried other rescaling methods including
skimage.transform.downscale_local_mean(arr, (5,5,1))
skimage.measure.block_reduce(arr, block_size=(5,5,1), func=np.mean)
and they all produce slightly varying versions of the same purple-ish square.
I have one main question and one side question:
How do I rescale to have a square with accurate colors?
Am I using the correct rescaling method?
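One thing worth checking, as a sketch rather than a definitive answer: skimage.transform.resize returns a float image in the 0..1 range, and scipy.misc.imsave then rescales whatever values it is given, which can distort the colours of a nearly uniform patch. Keeping explicit control of the range and writing uint8 data avoids that rescaling (the path below is a made-up placeholder for the question's originalPath, and imageio stands in for the deprecated scipy.misc.imsave):
import imageio
import skimage.transform
from skimage import img_as_ubyte
from matplotlib.image import imread

originalPath = "patch.tif"                               # placeholder for the question's path
im = imread(originalPath)
resized = skimage.transform.resize(im, (100, 100, 3))    # float result in 0..1
imageio.imwrite("resized.tif", img_as_ubyte(resized))    # convert back to uint8 before saving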

Why does a JPEG image become a 2D array after being loaded?

I have a jpeg image as follows:
Now I want to load this image to do image processing. I use the following code:
from scipy import misc
import numpy as np
im = misc.imread('logo.jpg')
Because the image is a coloured one, I would expect im is a 3D matrix. However, im.shape gives me a 2D matrix:
(150, 150)
I tried another way of loading image as follows:
from PIL import Image
jpgfile = Image.open("logo.jpg")
But jpgfile also has the size of 150x150.
My question is: what's wrong with my code, or is my understanding of RGB images wrong?
Thank you very much.
From the docs here: http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imread.html, specify mode='RGB' to get the red, green and blue values. Without it, the output appears to default to a single greyscale value per pixel.
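A minimal sketch of that suggestion, assuming the logo.jpg file from the question (note that scipy.misc.imread was removed in later SciPy releases, so this only runs on older installs):
from scipy import misc

im = misc.imread('logo.jpg', mode='RGB')   # force conversion to 3-channel RGB
print(im.shape)                            # expected: (150, 150, 3)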
