I am trying to use MRI brain imaging data in a deep learning model. Currently my image has 4 dimensions, as shown below, but I would like to retain only the T1c modality, because my model input should be single-channel 3D MRIs (T1c).
I tried using the Nibabel package, as shown below:
import glob
import nibabel as nib

ff = glob.glob('imagesTr\*')
a = nib.load(ff[0])
a.shape
This returns the below output
I am also pasting the header info of 'a'
From this, which dimension identifies the MRI modality (T1, T2, T1c, FLAIR, etc.), and how can I retain only T1c? Can you please help?
First you need to identify the order of the images stored in the 4th dimension.
The header will probably help:
print(a.header)
Next, to keep only one modality you can use this:
data = a.get_fdata()
modality_1 = data[:,:,:,0]
EDIT 1:
Based on the website of the challenge:
All BraTS multimodal scans are available as NIfTI files (.nii.gz) and
describe a) native (T1) and b) post-contrast T1-weighted (T1Gd), c)
T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery
(FLAIR) volumes, and were acquired with different clinical protocols
and various scanners from multiple (n=19) institutions, mentioned as
data contributors here.
and
The provided data are distributed after their pre-processing, i.e.
co-registered to the same anatomical template, interpolated to the
same resolution (1 mm^3) and skull-stripped.
So the header will not help in this case (equal dimensions for all modalities due to preprocessing).
If you are looking for the post-contrast T1-weighted (T1Gd) images, that is the second volume along the 4th dimension (index 1), so use:
data = a.get_fdata()
modality_1 = data[:,:,:,1]
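If you also want to write the extracted volume back out as its own single-channel NIfTI file (so downstream tools see a plain 3D image), a minimal nibabel sketch could look like this; the output filename below is just an example:

import nibabel as nib

data = a.get_fdata()
t1gd = data[:, :, :, 1]                      # the post-contrast T1-weighted volume

# wrap the 3D array in a new NIfTI image, reusing the original affine
out_img = nib.Nifti1Image(t1gd, affine=a.affine)
nib.save(out_img, 'case0_t1gd.nii.gz')       # hypothetical output filename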
Additionally, you can visualize each 3D volume (data[:,:,:,0], data[:,:,:,1], data[:,:,:,2], data[:,:,:,3]) and verify this ordering yourself.
See here: https://gofile.io/?c=fhoZTu
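For reference, a quick way to eyeball the four volumes yourself is to plot a middle slice of each with matplotlib; a minimal sketch, reusing the image `a` loaded above:

import matplotlib.pyplot as plt

data = a.get_fdata()                     # shape (X, Y, Z, 4)
mid = data.shape[2] // 2                 # a middle axial slice

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for i, ax in enumerate(axes):
    ax.imshow(data[:, :, mid, i].T, cmap='gray', origin='lower')
    ax.set_title(f'volume {i}')
    ax.axis('off')
plt.show()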
It's not possible to identify the type of MRI from the NIfTI header alone. You would need the original DICOM images to derive this type of information.
You can, however, visually check your images and compare the contrast/colour of the white matter, grey matter and ventricles to figure out if your image is T1, T2, FLAIR, etc. For instance in a T1-image you would expect darker grey matter, lighter white matter and black CSF. In a T2 image you would expect lighter grey matter, darker white matter and white CSF. A FLAIR is the same as T2 but with 'inverted' CSF.
See some example brain images here: https://casemed.case.edu/clerkships/neurology/Web%20Neurorad/t1t2flairbrain.jpg
That being said, you seem to have a 4-dimensional image, which suggests some sort of time series, so I would assume your data is DTI or fMRI or something like it.
It's also not possible to transform one type of MRI into another, so if your data set is not already T1, then there is no way to use it in a model that expects clean T1 data.
I would strongly encourage you to learn more about MRI and the type of data you are working with. Otherwise it will be impossible to interpret your results.
Related
I made a TIFF image based on a 3D model of a wood sheet. (x, y, z) represents a point in 3D space; I simply map (x, y) to a pixel position in the image and z to the greyscale value of that pixel. It worked as I had imagined. Then I ran into a low-resolution problem when I tried to print it: the TIFF gets badly pixelated as soon as it is zoomed out. My research suggests that I need to increase the resolution of the image, so I tried a few super-resolution algorithms found online, including this one: https://learnopencv.com/super-resolution-in-opencv/
The final image did get much larger (10+ times bigger in either dimension), but the same problem persists: it gets pixelated as soon as it is zoomed out, about the same as the original image.
It looks like the quality of an image depends not only on its resolution but on something else as well. By quality I mean how clear the wood texture is in the image, and how sharp the texture remains when I enlarge it. Can anyone shed some light on this? Thank you.
original tif
The algorithm-generated TIFF is too large to include here (32 MB)
Gigapixel enhanced tif
Update - Here is a recently achieved result with a GAN-based solution.
It has restored/invented some of the wood-grain details, but the models need to be retrained.
In short, it is possible to do this via deep-learning reconstruction like the super-resolution package you referred to, but you should understand what something like this is trying to do and whether it is fit for purpose.
Generic algorithms like that super-resolution model are trained on a wide variety of images to "guess" at details that are not present in the original image, typically using generative training methods such as pairing low- and high-resolution versions of the same image as training data.
As a contrived example, say you are trying to up-res a picture of someone's face (CSI zoom-and-enhance style!). From the algorithm's perspective, if a black circle is always present inside a white blob of a certain shape (i.e. a pupil in an eye), then the next time the algorithm sees the same shape it will guess that there should be a black circle and fill in a black pupil. However, this does not mean that there is detail in the original photo suggesting a black pupil.
In your case, you are trying to do a very specific type of up-resing, and algorithms trained on generic data will probably not be good for this type of work. They will try to "guess" what detail should be filled in, but based on a very generic and diverse set of source data.
If this is a long-term project, you should look into training a model on your specific use case, which should yield much better results. Otherwise, simple techniques like smoothing will make your image less "blocky", but they will not be able to "guess" details that aren't present.
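To illustrate that "simple smoothing" route, here is a minimal OpenCV sketch (the filenames and the 4x factor are placeholders); it makes the print look less blocky, but, as discussed above, it cannot invent wood-grain detail that was never captured:

import cv2

# filenames are placeholders
img = cv2.imread('woodsheet.tif', cv2.IMREAD_UNCHANGED)

# plain interpolation: 4x larger, cubic resampling
up = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# a light blur to soften the remaining blockiness
smooth = cv2.GaussianBlur(up, (3, 3), 0)

cv2.imwrite('woodsheet_4x_smoothed.tif', smooth)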
Let's assume I have a small dataset and I want to implement data augmentation. First I run image segmentation (after which the image is a binary image) and then apply data augmentation. Is this a good way to do it?
For image augmentation in semantic and instance segmentation, you have to either leave the positions of the objects in the image unchanged (by manipulating colours, for example) or modify those positions consistently by applying translations and rotations.
So yes, this approach works, but you have to take into consideration the type of data you have and what you are looking to achieve. Data augmentation isn't a ready-to-go process that gives good results everywhere.
In case you have:
Semantic segmentation: each pixel at row i and column j is labeled with its enclosing object. This means having your main image I and a label image L of the same size, linking every pixel to its object label. In this case, your data augmentation is applied to both I and L, giving a pair of identically transformed images.
Instance segmentation: here we generate a mask for every instance in the original image, and the augmentation is applied to all of them, including the original; from these transformed masks we get our new instances.
EDIT:
Take a look at CLoDSA (Classification, Localization, Detection and Segmentation Augmentor) it may help you implement your idea.
If your dataset is small, you should add data augmentation during training. It is important to change the original image and the targets (masks) in the same way!
For example, if an image is rotated 90 degrees, its mask should also be rotated 90 degrees. Since you are using the Keras library, you should check whether ImageDataGenerator also transforms the target images (masks) along with the inputs. If it doesn't, you can implement the augmentations yourself; this repository shows how it is done with OpenCV:
https://github.com/kochlisGit/random-data-augmentations
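To illustrate the "same transform for inputs and masks" point with Keras, one common pattern is to create two ImageDataGenerator instances with identical parameters and pass both the same seed, so image and mask batches stay in sync. A sketch, assuming `images` and `masks` are matching 4D NumPy arrays (N, H, W, C):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# identical augmentation parameters for inputs and masks
aug_args = dict(rotation_range=90, horizontal_flip=True, vertical_flip=True)
image_gen = ImageDataGenerator(**aug_args)
mask_gen = ImageDataGenerator(**aug_args)

seed = 42  # the same seed keeps both generators in sync
image_flow = image_gen.flow(images, batch_size=8, seed=seed)
mask_flow = mask_gen.flow(masks, batch_size=8, seed=seed)

# each step yields an augmented image batch and the identically transformed masks
train_flow = zip(image_flow, mask_flow)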
I am new to medical imaging. I am dealing with MRI images, namely T2 and DWI.
I loaded both images with nib.load, yet each sequence has a different number of slices (volume, depth?). If I select one slice (z coordinate) in one sequence, how can I get the corresponding slice in the other sequence? ITK does it correctly, so maybe something in the NIfTI header could help?
Thank you so much for reading! I also tried interpolation, but it did not work.
If ITK does it correctly, why not just use ITK?
If you insist on hand-rolling your own index <-> physical space conversion routines, take a look at how ITK computes those matrices and the publicly accessible methods which use them. For being specific to NIFTI, take a look at ITK's NIFTI reader which sets the relevant metadata in itk::Image.
Read the NIfTI headers, namely the affines, and use them to build a transformation that maps T2 voxel coordinates to DWI voxel coordinates.
Hint: nibabel.
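To make that concrete, here is a minimal sketch of the voxel-to-voxel mapping with nibabel (the filenames and the example voxel index are placeholders, and it assumes the two series are already co-registered in scanner space):

import numpy as np
import nibabel as nib

t2 = nib.load('t2.nii.gz')                   # placeholder filenames
dwi = nib.load('dwi.nii.gz')

ijk_t2 = np.array([64, 64, 20, 1.0])         # (i, j, k, 1) in T2 voxel space

# voxel -> scanner (world) coordinates via the T2 affine
xyz = t2.affine @ ijk_t2

# world -> DWI voxel coordinates via the inverse DWI affine
ijk_dwi = np.linalg.inv(dwi.affine) @ xyz

print(np.round(ijk_dwi[:3]).astype(int))     # nearest DWI voxel; its k gives the slice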
I have a set of images in a folder, where each image either has a square shape or a triangle shape on a white background (like this and this). I would like to separate those images into different folders (note that I don't care about detecting whether image is a square/triangle etc. I just want to separate those two).
I am planning to use more complex shapes in the future (e.g. pentagons, or other non-geometric shapes) so I am looking for an unsupervised approach. But the main task will always be clustering a set of images into different folders.
What is the easiest/best way to do it? I looked at image clustering algorithms, but they cluster colors/shapes inside a single image. In my case I simply want to separate the image files based on the shapes they contain.
Any pointers/help is appreciated.
You can follow this method:
1. Create a look-up table of the shapes you are using in the images
2. Do template matching against the images stored in a single folder
3. According to the result of the template matching, store them in different folders (a rough sketch follows below)
4. You can create the folders beforehand and just adjust the paths in the program accordingly
I hope this helps
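A rough sketch of steps 1-3 with OpenCV's matchTemplate; the folder names and file patterns are placeholders, and it assumes each template is no larger than the images being sorted:

import glob
import os
import shutil
import cv2

# hypothetical layout: templates/*.png hold one reference image per shape,
# images/*.png are the files to sort into per-shape folders
templates = {}
for path in glob.glob('templates/*.png'):
    name = os.path.splitext(os.path.basename(path))[0]
    templates[name] = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

for path in glob.glob('images/*.png'):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # score every template against the image and keep the best match
    scores = {name: cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED).max()
              for name, tmpl in templates.items()}
    best = max(scores, key=scores.get)
    os.makedirs(best, exist_ok=True)
    shutil.move(path, os.path.join(best, os.path.basename(path)))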
It's really going to depend on what your data set looks like (e.g., what your shape images look like) and how robust you want your solution to be. The tricky part is going to be extracting features from each shape image that produce a clustering result you're satisfied with. A few ideas:
You could compute SIFT features for each images and then cluster the images based on those features: http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
If you don't want to go the SIFT route, you could try something like HOG: http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
A somewhat more naive approach: if the shapes are always at the same scale and the background colour is fixed, you could get rid of the background and cluster the images based on shape area (e.g., the number of pixels taken up by the shape), as sketched below.
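A sketch of that area-based idea; the file pattern, the near-white threshold, and the choice of two K-Means clusters are all assumptions here:

import glob
import numpy as np
from matplotlib.pyplot import imread
from sklearn.cluster import KMeans

paths = glob.glob('shapes/*.png')              # placeholder file pattern
areas = []
for p in paths:
    img = imread(p)                            # PNGs load as floats in [0, 1]
    if img.ndim == 3:
        img = img[..., :3].mean(axis=2)        # collapse to greyscale
    areas.append((img < 0.9).sum())            # count non-white (shape) pixels

# two clusters because there are two shape types; adjust as needed
labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(areas).reshape(-1, 1))
for p, label in zip(paths, labels):
    print(label, p)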
I would like to ask for your help. I am a student, and for academic research I'm designing a system in which one of the modules is responsible for comparing low-resolution simple images (img, jpg, jpeg, png, gif). However, I need guidance on whether I can write an implementation in Python and how to get started. Perhaps some of you have worked on something like this and could share your knowledge.
Issue 1 - simple version
The input data must be compared with the patterns (reference images), and the output will contain the degree of similarity (a percentage) and the pattern image to which the given input is most similar. In this version, the assumption is that the input image is not modified in any way (i.e. not rotated, tilted, etc.).
Issue 2 - difficult version
The input data must be compared with the patterns (reference images), and the output will contain the degree of similarity (a percentage) and the pattern image to which the given input is most similar. In this version, the assumption is that the input image can be rotated.
Can some of you tell me what I need to do and how to start? I will appreciate any help.
As a starter, you could read in the images using matplotlib, or the python imaging library (PIL).
Comparing to a pattern could be done with a cross-correlation, which you could compute using scipy or numpy. As you only have a few pixels, I would go for numpy, which does not use Fourier transforms.
import pylab as P
import numpy as N

# read the images (JPEGs come in as H x W x 3 uint8 arrays)
im1 = P.imread('4Fsjx.jpg').astype(float)
im2 = P.imread('xUHhB.jpg').astype(float)

# collapse the colour channels to greyscale
if im1.ndim == 3:
    im1 = im1.mean(axis=2)
if im2.ndim == 3:
    im2 = im2.mean(axis=2)

# do the cross-correlation (N.correlate only accepts 1-D input, so flatten first)
corr = N.correlate(im1.ravel(), im2.ravel(), mode='full')

# a (rough) measure for similarity then is the correlation peak:
sim = corr.max()
Please note, this is a very quick and dirty approach and you should spend quite some thought on how to improve it, not even including the rotation that you mentioned. Anyhow, this code can read in your images and give you a measure of similarity, although the correlation only works on greyscale data, hence the conversion above. I hope it gives you something to start from.
Here is a start as some pseudo code. I would strongly recommend getting numpy/scipy to help with this.
import glob
import imageio

# read the template images:
files = glob.glob('*.templates')
listOfImages = []
for elem in files:
    # scipy.misc.imread has been removed from SciPy; imageio.imread is a drop-in replacement
    imagea = imageio.imread(elem)
    listOfImages.append(imagea)

# read the input/test image (targetImageName is a placeholder for your test image path)
targetImage = imageio.imread(targetImageName)
Now loop through listOfImages and compute the "distance" to the target image. Note that this is probably the hardest part: how will you decide whether two images are similar? Using direct pixel comparisons? Image histograms? Some image alignment metric (this would be useful for your difficult version)? A few simple gotchas: I noticed that your uploaded images were different sizes; if the images are of different sizes, you will have to sweep one over the other. Also, can the images be scaled? Then you will need either a scale-invariant metric or to sweep over different scales as well.
# keep track of the minimum distance
minDistance = Distance(targetImage, listOfImages[0])
minIndex = 0
for index, elem in enumerate(listOfImages):
    currentDistance = Distance(targetImage, elem)
    if currentDistance < minDistance:
        minDistance = currentDistance
        minIndex = index
The distance function is where the challenges are, but I'll leave that for you.
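For completeness, one deliberately naive choice for that Distance placeholder, if you start with direct pixel comparison, could be a mean squared difference over the overlapping region (a sketch, not a robust metric):

import numpy as np

def Distance(imageA, imageB):
    # crop both images to their common overlap so the shapes match
    h = min(imageA.shape[0], imageB.shape[0])
    w = min(imageA.shape[1], imageB.shape[1])
    a = np.asarray(imageA, dtype=float)[:h, :w]
    b = np.asarray(imageB, dtype=float)[:h, :w]
    # mean squared pixel difference: lower means more similar
    return ((a - b) ** 2).mean()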