Update the YOLOv5 annotation ".txt" files of cropped images - Python

I want to crop images that are already annotated in YOLOv5 format (".txt" files), but cropping changes the coordinates, so how can I update them? The crop region itself is also one of the classes in the annotation .txt file. For example, I want to train two models: the first just detects the odometer, and the second detects the digits inside it. For the second model, the first model will crop the odometer region and pass it on, so for training the second model I need these coordinates corrected. I already have the full annotations ready and don't want to redo the annotation on more than 2k cropped images.

As I mentioned in the comments, I think I can write that as an answer as well.
You can crop your images with YOLOv5's --save-crop flag. Then, for the annotations of the crops, you can simply use the full image size as the bounding box size. I suggest you check out this thread as well.
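If you instead want to remap the existing digit annotations into the coordinate frame of each cropped odometer image, a minimal sketch could look like this (it assumes YOLO-normalized "class x_center y_center width height" labels and a known crop box in pixels; all names are illustrative):

def remap_labels(label_path, img_w, img_h, crop_x1, crop_y1, crop_x2, crop_y2):
    # Remap YOLO-format labels from the full image into a crop's coordinate frame.
    crop_w, crop_h = crop_x2 - crop_x1, crop_y2 - crop_y1
    new_lines = []
    with open(label_path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            # convert normalized full-image coords to pixels
            xc, yc = float(xc) * img_w, float(yc) * img_h
            w, h = float(w) * img_w, float(h) * img_h
            # shift into the crop's frame
            xc, yc = xc - crop_x1, yc - crop_y1
            # skip boxes whose center falls outside the crop
            if not (0 <= xc <= crop_w and 0 <= yc <= crop_h):
                continue
            # re-normalize to the crop size
            new_lines.append(
                f"{cls} {xc / crop_w:.6f} {yc / crop_h:.6f} {w / crop_w:.6f} {h / crop_h:.6f}"
            )
    return new_lines

Boxes that straddle the crop border would still need clipping; this sketch only keeps boxes whose centers fall inside the crop.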

Related

Resize COCO bounding boxes for object detection

So here is my first question. I am preparing a dataset for object detection. I have done the following things so far:
I have an original picture (size w4000 x h3000).
I used the annotation platform Roboflow to annotate it in COCO format, with close to 250 objects in the picture.
Roboflow returned a downscaled picture (2048x1536) with a respective json file with the annotations in COCO format.
Then, to obtain a dataset from my original picture (as I have a lot of objects and the picture is big enough), I decided to tile the original picture in patches of 224x224. For this purpose, I upscaled a bit (4032x3136) to be able to slice it properly, obtaining 252 pictures.
QUESTIONS
How can I resize the bounding boxes of the Roboflow 2048x1536 picture to my original picture (4032x3136)?
Once the b.boxes are resized to my original size picture, how can I resize them again, adapting the size to each of my patches (224x224) created by slicing the original picture?
Thank you!!
It sounds like the ultimate goal is to have tiled 224x224 images from the source 4032x3136 images with bounding boxes correctly updated.
In Roboflow, at least, you can add tiling as a preprocessing step to your original 4032x3136 images. The images will be broken into the number of tiles you select (2x2, 3x3, and so on). The bounding boxes will be correctly updated to cover the objects across each individual tile as well.
To reimplement this in code from what you've described, you would need to (see the sketch after this list):
Upscale your 2048x1536 images to 4032x3136
Scale the bounding boxes accordingly
Break the images into 224x224 tiles using something like PIL
Update the annotations so they are expressed in the coordinates of the respective tiles; one annotation per tile
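A minimal sketch of steps 2 and 4, assuming COCO-style [x, y, width, height] boxes in pixels and non-overlapping 224x224 tiles (the scale factors and names are illustrative):

TILE = 224
SX, SY = 4032 / 2048, 3136 / 1536  # scale factors from the annotated size to the tiled size

def scale_box(box):
    # Step 2: scale a COCO [x, y, w, h] box from 2048x1536 up to 4032x3136
    x, y, w, h = box
    return [x * SX, y * SY, w * SX, h * SY]

def assign_to_tile(box):
    # Step 4: find the 224x224 tile containing the box center and express the box in that tile's frame
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    col, row = int(cx // TILE), int(cy // TILE)
    local = [x - col * TILE, y - row * TILE, w, h]   # may still need clipping to the tile
    return col, row, local

Boxes that span several tiles would need to be split and clipped per tile rather than assigned to a single one.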

Data augmentation while preserving existing labels

Problem statement
I am working on a project using a YOLO model to detect tools in a given picture, e.g. hammers, screwdrivers, bolts, etc. I only have 100 pictures as a training dataset and I have labelled them using polygons. I decided to augment the data with the code given below. I got 500 new images, but the problem is that I don't want to label them again. I am looking for any way to have the label bounding boxes (polygons) adjusted (preserved) along with the augmented images, so that I can get the polygon data without doing the labelling again. In short, I want to preserve the labels during the image augmentation process.
Code used for Augmentation
import os
from PIL import Image, ImageEnhance

Brightness = [0.7, 0.8, 1]   # different brightness levels
Rotation = [10, -10]         # different rotation angles (degrees)

# Main source folder
main_path = r"./Augmentation"
# Directory listing from the main path
dir = os.listdir(main_path)[1:]

# Read and iterate over all images in the directory
for name in dir:
    image = Image.open(os.path.join(main_path, name))
    # Apply each rotation to the image
    for j in Rotation:
        rotated = image.rotate(j)
        rotated_rgb = rotated.convert("RGB")
        apply_br = ImageEnhance.Brightness(rotated_rgb)
        # Apply each brightness level to the rotated image
        for i in Brightness:
            Lightness = apply_br.enhance(i)
            # Save the augmented image
            Lightness.save(os.path.join(main_path, 'augmented_Images', str(i) + str(j) + name))
            print('image has been augmented successfully')
Look into imgaug. The augmentations from this module also augment the labels. One more thing: what you are doing right now is offline augmentation. You might want to look at online augmentation instead; then every epoch the pictures are augmented in a different way, and you only train on the augmented pictures. This way you don't need a lot of disk space.
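For example, a minimal sketch of how imgaug can carry bounding boxes through an augmentation pipeline similar to the one above (the file name and box coordinates are made up; polygons work the same way via imgaug's Polygon and PolygonsOnImage classes):

import imageio
import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

image = imageio.imread("hammer_01.jpg")           # made-up example image
bbs = BoundingBoxesOnImage([
    BoundingBox(x1=40, y1=60, x2=300, y2=420)     # made-up label for the hammer
], shape=image.shape)

# Rotation and brightness ranges similar to the offline loop above
seq = iaa.Sequential([
    iaa.Affine(rotate=(-10, 10)),
    iaa.Multiply((0.7, 1.0)),
])

# Image and boxes are transformed together, so the labels stay aligned
image_aug, bbs_aug = seq(image=image, bounding_boxes=bbs)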
If you are using YOLOv4 with Darknet, image augmentation is performed automatically.

Object detection: is the object in the photo? - Python

I am trying to detect plants in photos. I've already labeled the photos that contain plants (with labelImg), but I don't understand how to train the model with background-only photos, so that when there is no plant the model can tell me so.
Do I need to set the labeled box to the size of the image?
P.S. I'm new to ML, so don't be rude, please :)
I recently had a problem where all my training images were zoomed in on the object. This meant that the training images all had very little background information. Since object detection models use space outside bounding boxes as negative examples of these objects, this meant that the model had no background knowledge. So the model knew what objects were, but didn't know what they were not.
So I disagree with @Rika, since sometimes background images are useful. In my case, it worked to introduce background images.
As I already said, object detection models use non-labeled space in an image as negative examples of a certain object. So you have to save annotation files without bounding boxes for your background images. In the software you are using here (labelImg), you can use "Verify Image" so that it saves an annotation file for the image without any boxes. That file says the image should be included in training but has no bounding box information, and the model uses it as negative examples.
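If you prefer to script it instead, a minimal sketch that writes empty YOLO-style .txt label files for a folder of background images (the folder paths are made up; other frameworks expect an object-free annotation file in their own format instead):

import os

image_dir = "backgrounds/images"   # made-up input folder of background images
label_dir = "backgrounds/labels"   # made-up output folder for empty label files
os.makedirs(label_dir, exist_ok=True)

for fname in os.listdir(image_dir):
    stem, ext = os.path.splitext(fname)
    if ext.lower() in (".jpg", ".jpeg", ".png"):
        # An empty .txt file marks the image as a pure negative (no objects)
        open(os.path.join(label_dir, stem + ".txt"), "w").close()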
In your case, you don't need to do anything in that regard. Just grab the detection data that you created and train your network with it. When it comes to testing, you usually set a confidence threshold for the bounding boxes, because you may get lots of them and you only want the ones with the highest confidence.
Then you get/show the ones with the highest bounding box confidences and there you go: you get your detection result and you can do whatever you want, like cropping the objects using the bounding box coordinates you get.
If there are no plants, your network will likely produce bounding boxes with confidence below your threshold (very low confidence), and then you just ignore them.

Mask RCNN: How to add region annotation based on manually segmented image?

There is an implementation of Mask RCNN on GitHub by Matterport.
I'm trying to train it on my data. I'm adding polygons to images with this tool. I'm drawing the polygons manually, but I already have the manually segmented image below (the black and white one).
My questions are:
1) When adding json annotation for region data, is there a way to use that pre-segmented image below?
2) Is there a way to train my data for this algorithm without adding json annotations, using the manually segmented images instead? The tutorials and posts I've seen use json annotations to train.
3) This algorithm's output is an image with masks, obviously; is there a way to get a black and white output for the segmentations?
Here's the code that I'm working on google colab.
Original Repo
My Fork
Manually segmented image
I think both questions 1 and 2 refer to the same solution: you need to convert your masks to json annotations. For that, I would suggest you read this link, posted in the repository of the cocodataset. There you can read about this repository, which you could use for what you need. You could also use the COCO Python API directly, calling the methods defined here.
For question 3, a mask is already a binary image (so you can show it as black and white pixels).
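As an illustration of the mask-to-annotation step for questions 1 and 2, a minimal sketch using the COCO Python API (pycocotools) to turn a black-and-white mask into an RLE-encoded annotation dict (the file name and IDs are made up):

import numpy as np
from PIL import Image
from pycocotools import mask as maskUtils

# Load the black-and-white segmentation as a binary mask (made-up file name)
mask = np.array(Image.open("segmented.png").convert("L")) > 127

# Encode the mask as COCO run-length encoding (RLE)
rle = maskUtils.encode(np.asfortranarray(mask.astype(np.uint8)))

annotation = {
    "id": 1,                 # made-up annotation id
    "image_id": 1,           # made-up image id
    "category_id": 1,        # made-up category id
    "segmentation": {"size": rle["size"], "counts": rle["counts"].decode("ascii")},
    "area": float(maskUtils.area(rle)),
    "bbox": [float(v) for v in maskUtils.toBbox(rle)],
    "iscrowd": 0,
}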

How to label images without using bounding boxes?

I'm trying to create a custom dataset using my own images. These images were cropped from log data such as below:
https://drive.google.com/open?id=1x0oWiVZ9KOw5P0gIMxQNxO-ajdrGy7Te
I want it to be able to detect high vibration such as below:
https://drive.google.com/open?id=1tUjthjGG1c23kTCQZOgedcsx99R_a_z3
I have around 300 images of the high vibration case in one folder. The picture is like below:
https://drive.google.com/open?id=1IG_-wRJxe-_TOYfSxHjRq5UBWMn9mO1k
I wanted to do exactly like https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9. In this example the image dataset was hand-labeled manually with labelImg.
However, I don't see why I need to draw a box for images that only have one object in them and could use the frame of the image as the bounding box.
Please advise how I can create the dataset and process the images without manually drawing the bounding boxes (since each image contains a single object), and how to draw bounding boxes in batch for images that contain one object (i.e. using the frame of the image as the bounding box)?
It sounds like you want to do 'image classification' as opposed to 'object detection'; it may be easier to make a script that generates XML files containing the image's width and height as the bounding box dimensions, as opposed to using labelImg.
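A minimal sketch of such a script, generating a Pascal VOC-style XML file per image with the full image frame as the single bounding box (folder paths and the class name are made up):

import os
from PIL import Image

IMAGE_DIR = "high_vibration/images"      # made-up input folder
XML_DIR = "high_vibration/annotations"   # made-up output folder
CLASS_NAME = "high_vibration"            # made-up class name

TEMPLATE = """<annotation>
  <filename>{fname}</filename>
  <size><width>{w}</width><height>{h}</height><depth>3</depth></size>
  <object>
    <name>{cls}</name>
    <bndbox><xmin>1</xmin><ymin>1</ymin><xmax>{w}</xmax><ymax>{h}</ymax></bndbox>
  </object>
</annotation>
"""

os.makedirs(XML_DIR, exist_ok=True)
for fname in os.listdir(IMAGE_DIR):
    if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    # Use the whole image frame as the bounding box
    w, h = Image.open(os.path.join(IMAGE_DIR, fname)).size
    xml = TEMPLATE.format(fname=fname, w=w, h=h, cls=CLASS_NAME)
    with open(os.path.join(XML_DIR, os.path.splitext(fname)[0] + ".xml"), "w") as f:
        f.write(xml)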
