Data augmentation while preserving existing labels - python

Problem statement
I am working on a project using a YOLO model to detect tools in a given picture, i.e. hammer, screwdriver, bolt, etc. I have only 100 pictures as a training dataset and I have labelled them using polygons. I decided to augment the data with the code given below. I got 500 new images, but the problem is that I don't want to label them again. I am looking for a way to have the label polygons adjust along with the augmented images, so that I get the polygon data without doing the labelling again. In short, I want to preserve the labels during the image augmentation process.
Code used for Augmentation
import os
from PIL import Image, ImageEnhance

Brightness = [0.7, 0.8, 1]  # different brightness levels
Rotation = [10, -10]        # different rotation angles, in degrees

# Main source folder
main_path = r"./Augmentation"
# Output folder for the augmented images
out_path = os.path.join(main_path, 'augmented_Images')
os.makedirs(out_path, exist_ok=True)

# Read and iterate over all images in the source directory
for name in os.listdir(main_path):
    src = os.path.join(main_path, name)
    if not os.path.isfile(src):
        continue  # skip sub-directories such as the output folder
    image = Image.open(src)
    # Apply each rotation to the image
    for j in Rotation:
        rotated = image.rotate(j).convert("RGB")
        enhancer = ImageEnhance.Brightness(rotated)
        # Apply each brightness level to the rotated image
        for i in Brightness:
            out_name = '{}_{}_{}'.format(i, j, name)
            enhancer.enhance(i).save(os.path.join(out_path, out_name))
            print('image has been augmented successfully')

Look into imgaug. The augmentations from this module also transform the labels. One more thing: what you are doing right now is offline augmentation. You might want to look at online augmentation instead. Then every epoch the pictures are augmented in a different way and you only train on the augmented pictures. This way you don't need a lot of disk space.
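As a minimal imgaug sketch (the file name and polygon vertices below are made up; in practice you would load your own polygon labels):

import imageio
import imgaug.augmenters as iaa
from imgaug.augmentables.polys import Polygon, PolygonsOnImage

image = imageio.imread("hammer_01.jpg")  # hypothetical input image

# One Polygon per labelled object, built from its (x, y) vertices
polys = PolygonsOnImage(
    [Polygon([(10, 10), (120, 15), (100, 90)])],  # example vertices
    shape=image.shape,
)

# Roughly the same transforms as the PIL loop above
seq = iaa.Sequential([
    iaa.Affine(rotate=(-10, 10)),  # random rotation
    iaa.Multiply((0.7, 1.0)),      # random brightness
])

# imgaug applies identical transforms to the image and its polygons
image_aug, polys_aug = seq(image=image, polygons=polys)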
If you are using YOLOv4 with Darknet, image augmentation is performed automatically.

Related

Update the yolov5 annotation ".txt" files of cropped images

I want to crop images that are already annotated in yolov5 format (".txt"), but the coordinates will change after cropping, so how can I update them? The crop region itself will also be one of the classes in the annotation .txt file. For example: there are two models I want to train. The first will just detect the odometer, and the second will detect the digits in it, so the first model will crop the odometer image and send it to the second. Therefore, for training the second model I need the coordinates corrected, because I already have the full annotations ready and don't want to redo the annotation on more than 2k cropped images.
As I have mentioned in the comments, I think I can write that as an answer as well.
You can crop your images with the --save-crop flag of yolov5. And then for the annotations, you can actually give the full image size as the bounding box sizes. I suggest you check out this thread as well.
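For the coordinate bookkeeping itself, a minimal sketch (the function and variable names are illustrative; boxes are YOLO-normalized (class, x_center, y_center, w, h) and the crop is a pixel rectangle):

def remap_yolo_box(box, img_w, img_h, crop):
    """Re-express one YOLO box (cls, xc, yc, w, h, normalized to the
    full image) in the coordinate frame of a pixel crop (x0, y0, x1, y1)."""
    cls, xc, yc, w, h = box
    x0, y0, x1, y1 = crop
    cw, ch = x1 - x0, y1 - y0
    # To absolute pixel coordinates in the full image
    px, py = xc * img_w, yc * img_h
    pw, ph = w * img_w, h * img_h
    # Shift into the crop's frame and re-normalize to the crop size
    return cls, (px - x0) / cw, (py - y0) / ch, pw / cw, ph / ch

# e.g. a digit box inside a 400x150 odometer crop taken at (800, 600)
print(remap_yolo_box((1, 0.55, 0.62, 0.02, 0.05), 1920, 1080, (800, 600, 1200, 750)))

Boxes that fall partly or fully outside the crop still need to be clipped or dropped; that bookkeeping is omitted here.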

Resize COCO bounding boxes for object detection

So here is my first question here. I am preparing a dataset for object detection. I have done the following things so far:
I have an original picture (size w4000 x h3000).
I used the annotation platform Roboflow to annotate it in COCO format, with close to 250 objects in the picture.
Roboflow returned a downscaled picture (2048x1536) with a respective json file with the annotations in COCO format.
Then, to obtain a dataset from my original picture (as I have a lot of objects and the picture is big enough), I decided to tile the original picture into patches of 224x224. For this purpose, I upscaled it a bit (to 4032x3136) to be able to slice it evenly, obtaining 252 pictures.
QUESTIONS
How can I resize the bounding boxes of the Roboflow 2048x1536 picture to my original picture (4032x3136)?
Once the bounding boxes are resized to my original-size picture, how can I resize them again, adapting them to each of my 224x224 patches created by slicing the original picture?
Thank you!!
It sounds like the ultimate goal is to have tiled 224x224 images from the source 4032x3136 images with bounding boxes correctly updated.
In Roboflow, at least, you can add tiling as a preprocessing step to your original 4032x3136 images. The images will be broken into the number of tiles you select (2x2, 3x3, NxM). The bounding boxes will be correctly updated to cover the objects across each individual tile as well.
To reimplement in code from what you've described, you would need to:
Upscale your 2048x1536 images to 4032x3136
Scale the bounding boxes accordingly
Break the images into 224x224 tiles using something like PIL
Update the annotations into coordinates on the respective tiles, one annotation set per tile (see the sketch below)
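A compact sketch of steps 2-4, assuming COCO-style [x, y, width, height] boxes in the 2048x1536 frame (all helper names are illustrative):

from PIL import Image

SRC_W, SRC_H = 2048, 1536
DST_W, DST_H = 4032, 3136
TILE = 224

def scale_box(box):
    """Scale one COCO [x, y, w, h] box from 2048x1536 up to 4032x3136."""
    sx, sy = DST_W / SRC_W, DST_H / SRC_H
    x, y, w, h = box
    return [x * sx, y * sy, w * sx, h * sy]

def tile_image_and_boxes(img, boxes):
    """Cut the upscaled image into 224x224 tiles and assign each box
    (clipped) to every tile it intersects."""
    img = img.resize((DST_W, DST_H))
    for ty in range(0, DST_H, TILE):
        for tx in range(0, DST_W, TILE):
            tile = img.crop((tx, ty, tx + TILE, ty + TILE))
            tile_boxes = []
            for x, y, w, h in map(scale_box, boxes):
                # Intersect the box with this tile
                ix0, iy0 = max(x, tx), max(y, ty)
                ix1, iy1 = min(x + w, tx + TILE), min(y + h, ty + TILE)
                if ix1 > ix0 and iy1 > iy0:
                    tile_boxes.append([ix0 - tx, iy0 - ty, ix1 - ix0, iy1 - iy0])
            yield tile, tile_boxes

Whether to keep boxes that are heavily clipped at tile borders (a sliver of an object) is a design choice; many pipelines drop boxes whose visible area falls below some threshold.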

Data augmentation after image segmentation

Let's assume I have a small dataset. I want to implement data augmentation. First I run image segmentation (after which the image will be a binary image) and then implement data augmentation. Is this a good way?
For image augmentation in segmentation and instance segmentation, you either have to leave the positions of the objects in the image unchanged, for example by manipulating only colors, or modify those positions by applying translations and rotations.
So, yes, this way works, but you have to take into consideration the type of data you have and what you are looking to achieve. Data augmentation isn't a ready-to-go process with good results everywhere.
In case you have:
Semantic segmentation: each pixel of your image, at row i and column j, is labeled with its enclosing object. This means having your main image I and a label image L of the same size, linking every pixel to its object label. In this case, your data augmentation is applied to both I and L, producing a pair of identically transformed images (see the sketch below).
Instance segmentation: here we generate a mask for every instance in the original image, and the augmentation is applied to all of them, including the original; from these transformed masks we get our new instances.
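For the semantic case, a minimal OpenCV sketch (assuming I is an RGB array and L an integer label map of the same height and width):

import cv2

def rotate_pair(image, label, angle):
    """Rotate image I and its label map L with the same transform, so
    every pixel keeps its class. Nearest-neighbour interpolation keeps
    the labels discrete instead of blending class ids at edges."""
    h, w = label.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image_r = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
    label_r = cv2.warpAffine(label, M, (w, h), flags=cv2.INTER_NEAREST)
    return image_r, label_r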
EDIT:
Take a look at CLoDSA (Classification, Localization, Detection and Segmentation Augmentor); it may help you implement your idea.
In case your dataset is small, you should add data augmentation during training. It is important to change the original image and the targets (masks) in the same way!
For example, if an image is rotated 90 degrees, then its mask should also be rotated 90 degrees. Since you are using the Keras library, you should check whether the ImageDataGenerator also changes the target images (masks) along with the inputs. If it doesn't, you can implement the augmentations yourself. This repository shows how it is done with OpenCV:
https://github.com/kochlisGit/random-data-augmentations
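One common Keras pattern (a sketch, separate from the linked repository) is to drive two ImageDataGenerators with the same seed, so images and masks receive identical random transforms; `images` and `masks` are assumed to be matching NumPy arrays:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_args = dict(rotation_range=90, horizontal_flip=True)
image_gen = ImageDataGenerator(**data_args)
mask_gen = ImageDataGenerator(**data_args)

seed = 42  # identical seed => identical random transforms per batch
image_flow = image_gen.flow(images, batch_size=8, seed=seed)
mask_flow = mask_gen.flow(masks, batch_size=8, seed=seed)

train_flow = zip(image_flow, mask_flow)  # yields (image_batch, mask_batch)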

Add chroma noise to image

I'm training a deep neural network to improve the quality of images. The images contain some specific types of noise that I want to reduce/remove by means of a deep learning model. In order to do so, I'm using a huge dataset of similar clear high-res images with barely any noise, adding the specific types of noise to them, and training the network to regenerate the original image (a custom autoencoder network). With one of the several noise types this works very well so far. Without going too far into the details, adding that particular type of noise was easy.
Now I need to add another noise type to the images, more precisely: chroma noise like in the following image (the bottom right one): link
How do I artificially generate and add chroma noise to an image in Python? I can use the full range of image processing packages, PIL, numpy, OpenCV, torchvision...
You need to convert the image to a colorspace such as HSV or CIE Lab. You then add noise to the chromaticity channels (a and b in Lab, or H and S in HSV). Finally, convert back to RGB.
This colorspace conversion step is very common and most image toolkits should have that functionality.
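A minimal OpenCV sketch, assuming an 8-bit BGR input; `sigma` is an arbitrary noise-strength parameter:

import cv2
import numpy as np

def add_chroma_noise(img_bgr, sigma=8.0):
    """Add Gaussian noise to the a/b (chromaticity) channels only,
    leaving the L (lightness) channel untouched."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    noise = np.random.normal(0.0, sigma, size=lab.shape[:2] + (2,))
    lab[..., 1:] = np.clip(lab[..., 1:] + noise, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)

Real chroma noise tends to be blotchy (low-frequency), so blurring the noise array before adding it, e.g. with cv2.GaussianBlur, often looks more realistic than per-pixel noise.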

TensorFlow: Labeling summary images with filenames

I'm training a CNN to learn an unusual task where the labels for each image depend on the other images in the minibatch. Further, the data undergo a number of transformations and pass through a bunch of different threads and queues before the net can actually access them.
I'd like to be able to validate that the label-to-image mapping is correct. However, it doesn't seem possible to get TensorFlow to surface the filename for each image summary as part of tf.image_summary(...). Does anyone know if it's possible to do this? Because the labeling regime is so unusual, you cannot tell immediately from the image whether the label itself is correct; I need the filename to recover the label properly. I can provide the filenames along with the images themselves and their labels in the queue without any problem.
Edit: To clarify, I'm using TensorBoard to display the image summaries.
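One workaround (a sketch only, written against the TF1-era graph API the question uses; tf.image_summary later became tf.summary.image) is to burn the filename into the image pixels before summarizing, assuming the queue also yields a string filename tensor alongside each image (image_batch and filename_batch below are placeholders for those dequeued tensors):

import numpy as np
import tensorflow as tf
from PIL import Image, ImageDraw

def draw_filename(image, filename):
    # Burn the filename into the top-left corner of a uint8 HWC image
    pil = Image.fromarray(image)
    ImageDraw.Draw(pil).text((4, 4), filename.decode("utf-8"), fill=(255, 0, 0))
    return np.array(pil)

# image_batch: uint8 [N, H, W, 3]; filename_batch: string [N]
labeled = tf.map_fn(
    lambda args: tf.py_func(draw_filename, args, tf.uint8),
    (image_batch, filename_batch),
    dtype=tf.uint8,
)
labeled.set_shape(image_batch.get_shape())  # py_func drops shape information
summary = tf.summary.image("inputs_with_filenames", labeled, max_outputs=8)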
