Medical imaging (PyRadiomics) with .nii.gz files - python

I am trying to implement the package:
https://pyradiomics.readthedocs.io/en/latest/usage.html
It looks super simple, but they expect .nrrd files.
My files are .nii.gz. How do I solve this?
Also, has anyone tried to apply PyRadiomics to TCIA data? If so, can I see your GitHub repo or Jupyter Notebook?
Thanks a lot.

You could first load the NIfTI file into a numpy array and then write it out as NRRD, using the nrrd and nibabel packages:
import numpy as np
import nibabel as nib
import nrrd
# Load the NIfTI image
example_filename = "image.nii.gz"
image = nib.load(example_filename)
# Turn it into a numpy array
array = np.array(image.dataobj)
# Save as NRRD
nrrd_path_to = "image.nrrd"
nrrd.write(nrrd_path_to, array)
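Note that writing only the raw array this way drops the NIfTI header's spacing, origin, and direction. A sketch that preserves that geometry, assuming SimpleITK is available (it is a PyRadiomics dependency) and using the same placeholder file names:
import SimpleITK as sitk
# Read the NIfTI image; geometry metadata stays with the image object
img = sitk.ReadImage("image.nii.gz")
# Write it back out as NRRD
sitk.WriteImage(img, "image.nrrd")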

Although the examples are in .nrrd, PyRadiomics uses SimpleITK for image operations. This allows PyRadiomics to support a whole range of image formats, including .nii.gz. You don't have to convert them.
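For example, a minimal sketch along the lines of the linked usage page (image.nii.gz and mask.nii.gz are placeholder file names for your image and segmentation mask):
from radiomics import featureextractor
# The extractor reads both files through SimpleITK, so .nii.gz works directly
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("image.nii.gz", "mask.nii.gz")
for name, value in features.items():
    print(name, value)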

The DWIConverter converts diffusion-weighted MR images in DICOM series into nrrd format for analysis in Slicer. It parses the DICOM header to extract necessary information about measurement frame, diffusion weighting directions, b-values, etc, and write out a nrrd image. For non-diffusion weighted DICOM images, it loads in an entire DICOM series and writes out a single DICOM volume in a .nhdr/.raw pair.
So converting your .nii.gz files to the NRRD format by way of DICOM is one possibility using this tool. You can also look at SlicerDMRI, which is a similar module.

Related

How do you open .NPY files?

How do I open .NPY files in python so that I can read them?
I've been trying to run some code I found, but it outputs .npy files so I can't tell if it's working.
*.npy files are binary files to store numpy arrays. They are created with
import numpy as np
data = np.random.normal(0, 1, 100)
np.save('data.npy', data)
And read in like
import numpy as np
data = np.load('data.npy')
Late reply but I think NPYViewer is a tool that can help you, as it allows you to quickly visualize the contents of .npy files without having to write code. It also has options to visualize 2D .npy arrays as grayscale images as well as 3D point clouds.
Reference: https://github.com/csmailis/NPYViewer

Resize CSV data using Python and Keras

I have CSV files that I need to feed to a Deep-Learning network. Currently my CSV files are of size 360*480, but the network restricts them to be of size 224*224. I am using Python and Keras for the deep-learning part. So how can I resize the matrices?
I was thinking that since the aspect ratio is 3:4, I could resize them to 224:(224*4/3) = 224:299 and then crop the width of the matrix to 224. But I cannot find a suitable function to do that. Please suggest.
I think you're looking for cv2.resize() if you're using images.
If not, try numpy.ndarray.resize()
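A hedged sketch of both options (mat.csv is a placeholder for your 360x480 file):
import numpy as np
import cv2
# Hypothetical 360x480 matrix loaded from the CSV
data = np.loadtxt("mat.csv", delimiter=",").astype(np.float32)
# Option 1: cv2.resize interpolates between values (treats the matrix as an image)
resized = cv2.resize(data, (224, 224), interpolation=cv2.INTER_LINEAR)
# Option 2: ndarray.resize just truncates or zero-pads the flattened data, no interpolation
truncated = data.copy()
truncated.resize((224, 224), refcheck=False)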
Image processing
If you want to do nontrivial alterations to the data as images (e.g. interpolating between pixel values, assuming that they represent photographs), then you might want to use proper image processing libraries for that. You'd need to treat them not as raw matrices (CSVs of numbers) but convert them to RGB images, do the transformations you desire, and convert them back to a numpy matrix.
OpenCV (https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html)
or Pillow (https://pillow.readthedocs.io/en/3.1.x/reference/Image.html) might be useful to do that.
I found a short and simple way to solve this. This uses the Python Imaging Library/Pillow.
import csv
import numpy as np
from PIL import Image
matrix = np.array(list(csv.reader(open('./path/mat.csv', "r"), delimiter=","))).astype("uint8") #read csv
imgObj = Image.fromarray(matrix) #convert matrix to Image object
resized_imgObj = imgObj.resize((224,224)) #resize Image object
imgObj.show()
resized_imgObj.show()
resized_matrix = np.asarray(resized_imgObj) #convert Image object back to matrix
The numpy module also has a resize function, but it is not as useful as the approach above.
When I tried it, the resized matrix had lost all the detail of the original. This is probably because numpy.ndarray.resize doesn't interpolate; missing entries are simply filled with zeros.
So, for this case, Image.resize() is more useful.
You could also convert the csv file to a flat list, truncate the list, convert it to a numpy array, and then use np.reshape.
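A rough sketch of that idea (mat.csv is a placeholder; note this simply drops values rather than resampling):
import csv
import numpy as np
with open("mat.csv", "r") as f:
    values = [float(v) for row in csv.reader(f, delimiter=",") for v in row]
values = values[:224 * 224]  # truncate to exactly 224*224 entries
matrix = np.array(values).reshape((224, 224))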

How to extract patches from a .tiff file to feed as input to Alexnet implemented in Keras?

I am trying to use transfer learning to extract features. My dataset has images in .tiff format. How do I feed patches of this format to the AlexNet implemented in Keras?
Are you able to feed any other format of data to AlexNet, or do you need a complete guide to feeding images into a neural network in Keras? If you can feed a numpy array containing the pixel values of the images, then you can follow the steps below to read a TIFF image and get a numpy array out of it. If you want a complete tutorial, you should mention that in the question.
Using the PIL library
import numpy as np
from PIL import Image
im = Image.open('a_image.tif')
im_array = np.array(im)  # pixel values as a numpy array
Using matplotlib
import matplotlib.pyplot as plt
I = plt.imread(tiff_file)
Let us know if you need any help with it.
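If the question is about cutting each image into patches, here is a rough sketch; the 224x224 patch size is only an assumption (pick whatever input size your Keras AlexNet implementation expects):
import numpy as np
from PIL import Image
im_array = np.array(Image.open('a_image.tif'))
patch_h, patch_w = 224, 224  # assumed network input size
patches = []
for y in range(0, im_array.shape[0] - patch_h + 1, patch_h):
    for x in range(0, im_array.shape[1] - patch_w + 1, patch_w):
        patches.append(im_array[y:y + patch_h, x:x + patch_w])
patches = np.stack(patches)  # shape: (num_patches, 224, 224[, channels])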

Loading Custom Dataset into TensorFlow CNN

We are using TensorFlow and python to create a custom CNN that will classify images into one of several categories. We have created our CNN based on this tutorial: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py
Instead of reading in a pre-existing dataset like the MNIST dataset used in the tutorial, we would like to read in all images from multiple folders. The name of each folder is the label associated with all the images in that folder. Unfortunately, we're very new to Python and TensorFlow. Could someone point us in the right direction, either with a tutorial or some basic code?
Thank you so much!
Consider using the glob package. It allows you to import multiple files in subdirectories easily using patterns. https://docs.python.org/2/library/glob.html
import glob
import matplotlib.pyplot as plt
import numpy as np
images = glob.glob(<file pattern>)
img_list = [plt.imread(image) for image in images]
img_array = np.stack(tuple(img_list))
I haven't tested this so there might be errors but it should make a 3-d numpy array of images (each image a 2-d array). Is this the format you were looking for?
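Since your labels come from the folder names, here is a hedged extension of that idea (assuming a hypothetical layout like data/<label>/<image>.png):
import glob
import os
import matplotlib.pyplot as plt
import numpy as np
paths = glob.glob("data/*/*.png")  # hypothetical layout: data/<label>/<image>.png
images = np.stack([plt.imread(p) for p in paths])  # assumes all images share one size
labels = [os.path.basename(os.path.dirname(p)) for p in paths]  # folder name = label
# Map label strings to integer class ids for TensorFlow
class_names = sorted(set(labels))
label_ids = np.array([class_names.index(l) for l in labels])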

Convert DICOM to TIFF

I'm new to Python, so forgive my ignorance if I don't have all the info correct. I'm trying to run through a directory and convert all the DICOM files within to TIFF files. I have gotten the search functionality to work, but I am having a hard time saving the images as TIFFs. I'm using the pydicom library to read in the DICOM and manipulate the header information. Also, I have tried using the save_as function in pydicom to save to TIFF, but I would rather use the save function in PIL to properly set the compression of the TIFF. I think the problem is that I can't/don't understand how to extract the actual image data from a DICOM and place it in a new image. Any help would be greatly appreciated... Cheers
Python 2.7
PIL 1.1.7
Pydicom 0.9.6
I found an answer online to the same query some time back. I don't remember the site, but since I applied it to my code, I'm sharing it here for others as well:
import pylab
import dicom
ImageFile=dicom.read_file(<SourceFilePath>) #Path to "*.dcm" file
pylab.imshow(ImageFile.pixel_array,cmap=pylab.cm.bone) #to view image
or if you want to save the image then instead use:
pylab.imsave('<DestinationFilePath>',ImageFile.pixel_array,cmap=pylab.cm.bone)
imsave will save the image in .png format by default, though. You can specify the desired format in imsave() if it is supported.
Hope it is useful.
If you know how to use PIL to save image data as .tiff, this example should help you to pass image data from pydicom to PIL (there is more here in the comments).
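Putting the two together, here is a hedged sketch for a newer pydicom (where dcmread replaced read_file); the rescaling to 8-bit and the LZW compression choice are assumptions to adjust for your data:
import numpy as np
import pydicom
from PIL import Image
ds = pydicom.dcmread("input.dcm")  # hypothetical source file
pixels = ds.pixel_array.astype(np.float32)
# Scale to 0-255 so the data fits an 8-bit grayscale TIFF (assumption)
rng = pixels.max() - pixels.min()
pixels = 255 * (pixels - pixels.min()) / (rng if rng else 1)
img = Image.fromarray(pixels.astype(np.uint8))
img.save("output.tif", compression="tiff_lzw")  # PIL lets you choose the compression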
