Easy way to horizontally flip images in dataset with json labels? - python

I'm using TensorFlow 2.0 with Python to train an image classifier. I'm training the model with model_main_tf2.py, and I have a dataset of images for training and testing. The images were annotated with the LabelMe tool in Python, which lets me create polygon masks for a Mask R-CNN.
What I'd like to do is generate duplicates of all the training and test images by flipping them horizontally. I can already do that easily in Python, but I also want to flip the polygon coordinates in the JSON files that LabelMe generates, to save myself from re-annotating the flipped images. Is there a tool that allows me to do this?
Thanks

Since this question is under the Python tag, I assume you want this done in Python. Flipping can be done in NumPy, PIL, or OpenCV (your choice).
import numpy as np
from PIL import Image

image = np.asarray(Image.open("input.png"))  # some image loaded as a numpy array
print(type(image))
# <class 'numpy.ndarray'>

flipped_image_h = np.flip(image, axis=1)  # flip horizontally; np.fliplr also works
flipped_image_v = np.flip(image, axis=0)  # flip vertically; np.flipud also works

# Save the flipped images
Image.fromarray(flipped_image_h).save("flipped_h.png")
Image.fromarray(flipped_image_v).save("flipped_v.png")
See the NumPy docs for more info.

I don't think there's an explicit tool for this. You just need to write code that opens your JSON files and makes the changes yourself, as sketched below.
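For example, a minimal sketch of the idea, assuming a typical LabelMe JSON with a "shapes" list of polygons and an "imageWidth" field (check your files' exact layout before relying on this; the file names are hypothetical):

import json

with open("image1.json") as f:
    data = json.load(f)

width = data["imageWidth"]

# Horizontal flip: mirror every x coordinate across the image width
for shape in data["shapes"]:
    shape["points"] = [[width - x, y] for x, y in shape["points"]]

# Point the annotation at the flipped image and save a new JSON file
data["imagePath"] = "image1_flipped.png"
with open("image1_flipped.json", "w") as f:
    json.dump(data, f, indent=2)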

Related

Data augmentation after image segmentation

Let's assume I have a small dataset. I want to implement data augmentation. First I apply image segmentation (after which the image will be binary), and then implement data augmentation. Is this a good way?
For image augmentation in segmentation and instance segmentation, you either leave the positions of the objects in the image unchanged (by manipulating colors, for example), or you modify those positions by applying translations and rotations.
So yes, this way works, but you have to take into consideration the type of data you have and what you want to achieve. Data augmentation isn't a ready-to-go process that gives good results everywhere.
In case you have:
Semantic segmentation: each pixel of your image, at row i and column j, is labeled with its enclosing object. This means having your main image I and a label image L of the same size, linking every pixel to its object label. In this case, your data augmentation is applied to both I and L, giving a matched pair of transformed images (see the sketch below).
Instance segmentation: here we generate a mask for every instance of the original image, and the augmentation is applied to all of them, including the original; from these transformed masks we get our new instances.
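A minimal sketch of the semantic case, assuming image and label are NumPy arrays with the same height and width (the function name is mine):

import numpy as np

def augment_pair(image, label):
    # Any geometric transform must hit the image and its mask identically
    if np.random.rand() < 0.5:
        image, label = np.fliplr(image), np.fliplr(label)
    k = np.random.randint(4)  # rotate by a random multiple of 90 degrees
    return np.rot90(image, k), np.rot90(label, k)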
EDIT:
Take a look at CLoDSA (Classification, Localization, Detection and Segmentation Augmentor) it may help you implement your idea.
In case your dataset is small, you should add data augmentation during training. It is important to change the original image and the targets (masks) in the same way!
For example, if an image is rotated 90 degrees, then its mask should also be rotated 90 degrees. Since you are using the Keras library, you should check whether ImageDataGenerator also transforms the target images (masks) along with the inputs. If it doesn't, you can implement the augmentations yourself. This repository shows how it is done with OpenCV:
https://github.com/kochlisGit/random-data-augmentations
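As a sketch of one documented Keras pattern (assuming images and masks are NumPy arrays of shape (N, H, W, C) and (N, H, W, 1)), you can pair two ImageDataGenerator instances with identical arguments and the same seed, so both streams get the same random transforms:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_gen_args = dict(rotation_range=90, horizontal_flip=True)
image_gen = ImageDataGenerator(**data_gen_args)
mask_gen = ImageDataGenerator(**data_gen_args)

seed = 1  # same seed => same random transform for each batch
image_flow = image_gen.flow(images, batch_size=32, seed=seed)
mask_flow = mask_gen.flow(masks, batch_size=32, seed=seed)
train_flow = zip(image_flow, mask_flow)  # yields matching (image, mask) batches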

Load 3D NIfTI images and save all slices for axial, coronal, sagittal?

I have some 3D NIfTI datasets of brain MRI scans (FLAIR, T1, T2, ...).
The FLAIR scans, for example, are 144x512x512 with a voxel size of 1.1 x 0.5 x 0.5, and I want 2D slices from the axial, coronal and sagittal views, which I use as input for my CNN.
What I want to do:
Read the .nii files with nibabel, convert them to NumPy arrays, and store the axial, coronal and sagittal slices as 2D PNGs.
What I tried:
- Used the med2image Python library
- Wrote my own Python script with nibabel, NumPy and image
PROBLEM: the axial and coronal pictures are somehow stretched in one direction; sagittal works out as it should.
I debugged the Python script and used Matplotlib to show the array I get after
import nibabel

image = nibabel.load(inputfile)
image_array = image.get_fdata()
by using, for example:
import matplotlib.pyplot as plt

plt.imshow(image_array[:, :, 250])
plt.show()
and found out that the data is already stretched at that point.
I figured out how to get the desired output with
header = image.header
sX = header['pixdim'][1]
sY = header['pixdim'][2]
sZ = header['pixdim'][3]
plt.imshow(image_array[:, :, 250], aspect=sX/sZ)
But how can I apply something like aspect when saving my image? Or is there a way to load the .nii file with such parameters in the first place, so I get data I can work with directly?
It looks like the pixel dimensions are not taken into account when nibabel loads the .nii image, and unfortunately I haven't found a way to solve this.
I found out it doesn't make a difference for training my ML model whether the pictures are stretched or not, since I also do this in data augmentation.
Opening the NIfTI volumes in Slicer or MRIcroGL showed the volumes as expected, since these programs take the header into account. The predictions were also perfectly fine (even though the pictures were somehow "stretched" when saved slice-wise).
Still, it annoyed me to look at stretched pictures, so I just implemented some resizing with cv2:
import os
import cv2
import numpy

def saveSlice(img, fname, path):
    img = numpy.uint8(img * 255)  # scale [0, 1] intensities to 8-bit
    fout = os.path.join(path, f'{fname}.png')
    # IMAGE_WIDTH and IMAGE_HEIGHT are constants defined elsewhere in the script
    img = cv2.resize(img, dsize=(IMAGE_WIDTH, IMAGE_HEIGHT), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(fout, img)
    print(f'[+] Slice saved: {fout}', end='\r')
The results are really good and it works well for me.
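A possible alternative (my suggestion, not something from the answer above): nibabel can resample the volume to isotropic voxels right after loading, so slices from every axis already have square pixels:

import nibabel
from nibabel import processing

img = nibabel.load(inputfile)  # `inputfile` as in the question
# Resample to 1 mm isotropic voxels; order=1 is linear interpolation
iso = processing.resample_to_output(img, voxel_sizes=(1.0, 1.0, 1.0), order=1)
iso_array = iso.get_fdata()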

YOLO V4 Tiny - Making more photos from one annotated image

I am trying to make a YOLOv4-tiny custom dataset using Google Colab. I am using labelImg.py for image annotation, which is shown in https://github.com/tzutalin/labelImg.
I have annotated one image, as shown below.
The .txt file with the annotated coordinates looks like the following:
0 0.580859 0.502083 0.303906 0.404167
I only have one class, the calculator class. I want to use this one image to produce 4 more annotated images, rotating the annotated image 45 degrees each time and creating a new annotated image plus a .txt coordinate file. I have seen something like this done in Roboflow, but I can't figure out how to do it manually with a Python script. Is it possible? If so, how?
You can look into the repo and article below for Python-based data augmentation, including rotation, shearing, resizing, translation, flipping, etc.
https://github.com/Paperspace/DataAugmentationForObjectDetection
https://blog.paperspace.com/data-augmentation-for-bounding-boxes/
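The core trick in those references can be sketched as follows (my own OpenCV-based sketch, not code from the repo): rotate the image about its centre, transform the four box corners with the same matrix, and enclose them in a new axis-aligned box, which necessarily grows for 45-degree rotations:

import cv2
import numpy as np

def rotate_image_and_yolo_box(image, box, angle_deg):
    # box = (cls, cx, cy, bw, bh) in normalized YOLO format
    h, w = image.shape[:2]
    cls, cx, cy, bw, bh = box
    x1, y1 = (cx - bw / 2) * w, (cy - bh / 2) * h
    x2, y2 = (cx + bw / 2) * w, (cy + bh / 2) * h
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]])

    # Rotate about the centre, growing the canvas so nothing is clipped
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    cos, sin = abs(M[0, 0]), abs(M[0, 1])
    new_w, new_h = int(h * sin + w * cos), int(h * cos + w * sin)
    M[0, 2] += new_w / 2 - w / 2
    M[1, 2] += new_h / 2 - h / 2
    rotated = cv2.warpAffine(image, M, (new_w, new_h))

    # Transform the corners, then take the axis-aligned box around them
    pts = cv2.transform(corners.reshape(-1, 1, 2), M).reshape(-1, 2)
    (x_min, y_min), (x_max, y_max) = pts.min(axis=0), pts.max(axis=0)
    return rotated, (cls,
                     (x_min + x_max) / 2 / new_w, (y_min + y_max) / 2 / new_h,
                     (x_max - x_min) / new_w, (y_max - y_min) / new_h)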
If you are using AlexeyAB's darknet repo for YOLOv4, there are some augmentations you can use to increase training data size and variation:
https://github.com/AlexeyAB/darknet/wiki/CFG-Parameters-in-the-%5Bnet%5D-section
Look into the Data augmentation section, where various augmentations for object detection are defined; you can enable them by adding them to the yolo cfg file.

Mask RCNN: How to add region annotation based on manually segmented image?

There is an implementation of Mask R-CNN on GitHub by Matterport.
I'm trying to train it on my data. I'm adding polygons to images with this tool, drawing them manually, but I already have the manually segmented image below (the black and white one).
My questions are:
1) When adding JSON annotations for region data, is there a way to use that pre-segmented image below?
2) Is there a way to train on my data without adding JSON annotations, using the manually segmented images instead? The tutorials and posts I've seen all use JSON annotations for training.
3) This algorithm's output is obviously an image with masks; is there a way to get black and white output for the segmentations?
Here's the code I'm working on in Google Colab.
Original Repo
My Fork
Manually segmented image
I think questions 1 and 2 point to the same solution: you need to convert your masks to JSON annotations. For that, I suggest reading this link, posted in the repository of the cocodataset. There you can read about a repository you could use for exactly this. You could also use the COCO PythonAPI directly, calling the methods defined here.
For question 3, a mask is already a binary image, so you can show it as black and white pixels.
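As a minimal sketch of the mask-to-polygon step (file names are hypothetical), OpenCV's findContours can extract the polygon outlines from a binary mask, which you can then write into whatever JSON layout your training code expects:

import json
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
# OpenCV 4 returns (contours, hierarchy)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

regions = []
for contour in contours:
    # Simplify the outline slightly, then flatten to x/y coordinate lists
    approx = cv2.approxPolyDP(contour, 1.0, True).squeeze(1)
    regions.append({
        "all_points_x": approx[:, 0].tolist(),
        "all_points_y": approx[:, 1].tolist(),
    })

with open("annotations.json", "w") as f:
    json.dump(regions, f)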

Resize CSV data using Python and Keras

I have CSV files that I need to feed to a deep-learning network. Currently my CSV files are of size 360x480, but the network requires inputs of size 224x224. I am using Python and Keras for the deep-learning part. How can I resize the matrices?
I was thinking that, since the aspect ratio is 3:4, I could resize them to 224:(224*4/3) = 224:299 and then crop the width of the matrix to 224. But I cannot find a suitable function to do that. Please suggest.
I think you're looking for cv2.resize() if you're working with images.
If not, try numpy.ndarray.resize().
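For example, assuming the CSV holds a plain 360x480 matrix of numbers (the path is hypothetical):

import cv2
import numpy as np

matrix = np.loadtxt("./path/mat.csv", delimiter=",").astype(np.float32)
# Note: cv2.resize takes (width, height), not (rows, columns)
resized = cv2.resize(matrix, (224, 224), interpolation=cv2.INTER_LINEAR)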
Image processing
If you want to do nontrivial alterations to the data as images (i.e. interpolating between pixel values, assuming they represent photographs), then you may want to use proper image-processing libraries. You'd need to treat the data not as raw matrices (CSVs of numbers) but convert it to RGB images, apply the transformations you want, and convert back to a NumPy matrix.
OpenCV (https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html)
or Pillow (https://pillow.readthedocs.io/en/3.1.x/reference/Image.html) might be useful to do that.
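For the resize-then-crop idea from the question specifically, a Pillow sketch (path hypothetical): scale to height 224 keeping the 3:4 aspect, then center-crop the width down to 224.

import numpy as np
from PIL import Image

matrix = np.loadtxt("./path/mat.csv", delimiter=",").astype("uint8")
img = Image.fromarray(matrix).resize((299, 224))  # PIL uses (width, height)
left = (299 - 224) // 2
cropped = img.crop((left, 0, left + 224, 224))    # center-crop width to 224
resized_matrix = np.asarray(cropped)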
I found a short and simple way to solve this using the Python Imaging Library/Pillow.
import csv
import numpy as np
from PIL import Image

matrix = np.array(list(csv.reader(open('./path/mat.csv', "r"), delimiter=","))).astype("uint8")  # read csv
imgObj = Image.fromarray(matrix)            # convert matrix to Image object
resized_imgObj = imgObj.resize((224, 224))  # resize Image object
imgObj.show()
resized_imgObj.show()
resized_matrix = np.asarray(resized_imgObj) # convert Image object back to matrix
While the numpy module also has a resize function, it is not as useful here.
When I tried it, the resized matrix had lost all the intricacies and aesthetic aspects of the original matrix. This is probably because numpy.ndarray.resize doesn't interpolate; missing entries are filled with zeros.
So, for this case, Image.resize() is more useful.
You could also convert the CSV file to a list, truncate the list, convert the list to a numpy array, and then use np.reshape.
