Is there a 3D version of scikit-image's measure.regionprops? - python

I have a three dimensional binary image for which I am working on determining the two-point cluster function. The first step to doing this is to define all of the connected regions within the image. I have successfully done so with skimage as:
from skimage.measure import label
from skimage.morphology import remove_small_objects

min_size = 64  # discard connected regions smaller than this many voxels
label_file = label(newim_int, return_num=True, connectivity=2)
image_clean = remove_small_objects(label_file[0], min_size=min_size, connectivity=2, in_place=True)
label_file = label(image_clean, return_num=True, connectivity=2)
(A bunch of code to read in the file, etc. is omitted between the min_size = and label_file = lines.)
I would now like to know the distribution of sizes of the regions labeled here. Unfortunately, skimage.measure.regionprops tells me it only works for 2D images. Is there another way to do this?
Thanks

regionprops has already started being expanded to include 3D images; at present some of the properties raise a NotImplementedError.
Fortunately, area already works for 3D, so you should be able to use that property.
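For example, a minimal sketch on a random volume standing in for your data (assuming a reasonably recent scikit-image):
import numpy as np
from skimage.measure import label, regionprops
# Random 3D binary volume standing in for the real data
volume = np.random.rand(50, 50, 50) > 0.8
labels = label(volume, connectivity=2)
# area is the voxel count of each labelled region, so this gives the size distribution
sizes = [region.area for region in regionprops(labels)]
print(sorted(sizes)[-5:])  # the five largest regions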

Related

convert .nii to .tif using imwrite, it saves a black image instead of the image

I want to convert .nii images to .tif to train my model using U-Net.
1. I looped through all images in the folder.
2. I looped through all slices within each image.
3. I saved each slice as .tif.
The training images are converted successfully. However, the labels (masks) are all saved as black images. I want to convert those masks from .nii to .tif correctly, but I don't know how. I read that it could be something to do with brightness, but I didn't fully understand the idea, so I haven't been able to solve the problem.
The only reason for this conversion is to be able to train my model. Feel free to suggest a better idea, e.g. if anyone can share a way to feed the network the .nii format directly.
import glob
import re
from pathlib import Path

import imageio
import nibabel as nib
import numpy as np
for filepath in glob.iglob('data/Task04_Hippocampus/labelsTr/*.nii.gz'):
    a = nib.load(filepath).get_fdata()
    a = a.astype('int8')
    base = Path(filepath).stem       # strips the '.gz'
    base = re.sub('.nii', '', base)  # strip the remaining '.nii'
    x, y, z = a.shape
    for i in range(z):
        newimage = a[:, :, i]        # one 2D slice of the volume
        imageio.imwrite('data/Task04_Hippocampus/masks/' + base + '_' + str(i) + '.tif', newimage)
Unless you absolutely have to use TIFF, I would strongly suggest using the NIfTI format, for a number of important reasons:
Image values are often not arbitrary. For example, in CT images the values correspond to x-ray attenuation (check out this Wikipedia page). TIFF, which is likely to scale the values in some way, is not suitable for this.
NIfTI also contains a header which has crucial geometric information needed to correctly interpret the image, such as the resolution, slice thickness, and direction.
You can directly extract a numpy.ndarray from NIfTI images using SimpleITK. Here is a code snippet:
import SimpleITK as sitk
import numpy as np
img = sitk.ReadImage("your_image.nii")
arr = sitk.GetArrayFromImage(img)
slice_0 = arr[0,:,:] # this is a 2D axial slice as a np.ndarray
As an aside: the reason the images where you stored your masks look black is that in the masks the labels have a value of 1 (and the background is 0). If you convert directly to TIFF, a value of 1 is very close to black when interpreted as a grayscale/RGB value - another reason to avoid TIFF!
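If you do need to inspect TIFF masks by eye, you can rescale the label values first. A minimal sketch (the tiny array and the output path are placeholders):
import numpy as np
import imageio
# Placeholder label slice; in practice this is one slice of your volume
mask = np.array([[0, 1], [2, 0]], dtype=np.uint8)
# Spread the labels over the full 0-255 range so they are visible
scale = 255 // max(int(mask.max()), 1)
scaled = (mask.astype(np.uint16) * scale).astype(np.uint8)
imageio.imwrite('mask_visible.tif', scaled)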

How to change FITS images' angular resolution in astropy?

I have two FITS images at different wavelengths, with different angular resolutions.
I want to convolve the higher-resolution image down to match the lower-resolution one.
I have tried astropy.convolution.convolve and astropy.convolution.Gaussian2DKernel.
The resolution is 0.184" at 1600 nm and 0.124" at 606 nm, so I think the resolution of the matching kernel should be 0.136" (subtracting the two in quadrature). I then tried the following code:
from astropy.io import fits
from astropy.convolution import Gaussian2DKernel, convolve

kernel = Gaussian2DKernel(x_stddev=0.136)
hdu = fits.open('/Users/lpr/Data/fits/pridata/goodsn_f606/606.fits')[0]
img = hdu.data
astropy_conv = convolve(img, kernel)
hdu.data = astropy_conv
hdu.writeto('/Users/lpr/Data/fits/expdata/CONVOLIMAGE/convolved_606.fits')
print('done')
Of course, that's wrong: the resolution of the higher-resolution (606) image is almost unchanged. Then I realized that I was convolving two different kinds of quantity - one is flux (or electrons/s), the other is the kernel.
Now I don't know how to match the higher-resolution image to the lower one. Thank you for answering my question!
I think the first issue is that the standard deviation of your kernel should be given in pixels, not in arcseconds.
You may then be interested in two packages that can compute the matching kernel between two PSFs:
the first one is in photutils: https://photutils.readthedocs.io/en/stable/psf_matching.html
the second one is a dedicated package: https://pypher.readthedocs.io/en/latest/
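As a rough sketch of the unit conversion (treating the quoted resolutions as Gaussian FWHMs; the pixel scale below is a placeholder you should take from your FITS header, e.g. from the CD matrix):
import numpy as np
from astropy.convolution import Gaussian2DKernel
PIXEL_SCALE = 0.06  # arcsec/pixel -- placeholder, read yours from the header
# FWHM of the Gaussian that degrades 0.124" resolution to 0.184"
fwhm_match = np.sqrt(0.184**2 - 0.124**2)     # ~0.136 arcsec
sigma_pix = fwhm_match / 2.355 / PIXEL_SCALE  # FWHM -> sigma, arcsec -> pixels
kernel = Gaussian2DKernel(x_stddev=sigma_pix)
# then convolve(img, kernel) as in the question, now with sigma in pixels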

How do I implement MATLAB's edge() function in Python with OpenCV or skimage?

I want to replicate MATLAB's edge() detection function in Python.
There are two Python functions I know of that implement the Canny filter:
import cv2
edges = cv2.Canny(image, threshold1, threshold2)  # the two thresholds must be supplied
and
from skimage import feature
edges = feature.canny(image)
However, neither of these Python functions computes the filter's high and low thresholds in the same manner MATLAB does, as described here.
MATLAB Code
farm = imread('small_farms.JPG');  % change this to the file path of the image
imshow(farm);              % show the original image
gfarm = rgb2gray(farm);
figure, imshow(gfarm);     % show the grayscaled image
A = medfilt2(gfarm, [4 4]);  % 4x4 median filter
figure, imshow(A);
B = edge(A, 'log');        % Laplacian-of-Gaussian edge detection
figure, imshow(B, []);
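For the 'log' (Laplacian of Gaussian) method used in that MATLAB snippet, here is a hedged Python sketch; the sigma and the zero-crossing rule are simplifications, so expect output close to, but not identical to, MATLAB's:
import numpy as np
from scipy import ndimage
def log_edges(gray, sigma=2.0):
    # Approximate MATLAB's edge(A, 'log'): LoG filter, then zero crossings
    response = ndimage.gaussian_laplace(gray.astype(float), sigma=sigma)
    edges = np.zeros(response.shape, dtype=bool)
    # A zero crossing is where the response changes sign between neighbours
    edges[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
    edges[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
    return edges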

Better way to compare images in Python

I am using scikit-image's SSIM to calculate how similar two pictures are, and it works fine with one exception. When there are a lot of white pixels (let's say a pure white background with a very simple black outlined shape), it reports the images as very similar even when the shapes are in fact very different.
I tried looking for other questions about this but couldn't find one that accurately answered my question.
Some code:
from skimage.measure import compare_ssim  # in newer scikit-image: skimage.metrics.structural_similarity
import numpy as np
import cv2

# With SSIM, compares image A to image B, and returns the result.
def compare_images(imageA, imageB):
    return compare_ssim(imageA, imageB)

# Loads an image with a given filepath with imread.
def load_images(filepath):
    picture = cv2.imread(filepath)
    # Convert the image to grayscale
    return cv2.cvtColor(picture, cv2.COLOR_BGR2GRAY)

# Compare the images
original = load_images("images/images.png")
contrast = load_images("images/download.png")
result = compare_images(original, contrast)
print(result)
Mind you, I am just a Python novice. Any help would be welcome.
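One workaround worth trying (a sketch, not a complete answer): crop each grayscale image to the bounding box of its non-white pixels before running SSIM, so the flat background stops dominating the score. The 240 cutoff is an assumption, and both crops would still need resizing to a common shape before compare_ssim:
import numpy as np
def crop_to_content(gray, white_cutoff=240):
    # Bounding box of all pixels darker than the near-white cutoff
    ys, xs = np.where(gray < white_cutoff)
    if ys.size == 0:  # blank image, nothing to crop
        return gray
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]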

How can I calculate the perimeter of an object in an image?

I have an image. I want to get the perimeter of every object in my image. For example, in this image, the perimeter of an object is 33 (the number of pixels at its edges).
I have written the following algorithm, but it is very slow.
Does anyone have an idea for increasing the speed of the algorithm?
What I have tried:
def cal_perimeter_object(object, image):
    peri_ = 0
    for pixel_ in image:
        if pixel_is_in_neigbor_of_object() is True:
            peri_ += 1
    return peri_
As mentioned in the comment by @Piinthesky, the first step is to get a boolean (or labelled) image in which you know the label of the object whose contour you want. There are a number of ways of doing this, the simplest of which is thresholding. Once you have your labelled image, you can find the perimeter in a number of ways - e.g. the number of pixels along the border. To give you a head start, here is a way to do it on the image you linked. I have used scikit-image, but there are other Python libraries you could use.
# If your Python version is not 3.x, uncomment the line below
# from __future__ import print_function
from skimage.measure import label, regionprops
import skimage.io as io

# Read in the image (enter the path where you downloaded it below)
im = io.imread('/home/kola/Downloads/perimeter.png')

# To simplify things, use only the first channel and threshold it
# to get a boolean image
bw = im[:, :, 0] > 230
regions = regionprops(bw.astype(int))
print(regions[0].perimeter)
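If instead you want exactly the asker's definition - the count of object pixels touching the background - rather than regionprops' contour-length estimate, a vectorized sketch:
import numpy as np
from scipy import ndimage
def border_pixel_count(bw):
    # Object pixels that disappear under erosion touch the background
    eroded = ndimage.binary_erosion(bw)
    return int(np.count_nonzero(bw & ~eroded))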
