Splitting an already labelled image for object detection - python

I have a large geotiff file of elevation data that I would like to use for object detection. I've labelled the objects (originally as a .shp) and converted the labels into a single geojson.
From reading object detection tutorials it seems that I need to split this large image into multiple images for training/testing. Is there a way to do this using the original labels, so that I don't need to re-label each smaller image?
If anyone has any useful tutorials or end-to-end examples of preparing satellite data for object detection, that would also be really helpful.

The GeoJSON file you have should contain the co-ordinates needed to compute a bounding box for each labelled portion of the original image. (If you want to know how to do that, see here: https://gis.stackexchange.com/a/313023/120175.) Once you have a bounding box, you can use any imaging library (Pillow or Pillow-SIMD) to extract the sub-image it covers, keeping the label from the same GeoJSON feature that supplied the coordinates. You can operate on the crops in memory or save them to disk (they can be treated as independent images in their own right), and the resulting images can be used for training.
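Here's a minimal sketch of that idea. It assumes the label polygons are in the same CRS as the raster, and it uses rasterio instead of Pillow so the geographic coordinates can be converted to pixel windows; the file names are placeholders:

```python
import json

import rasterio
from rasterio.windows import from_bounds
from shapely.geometry import shape

# Placeholder file names -- substitute your own.
with open("labels.geojson") as f:
    features = json.load(f)["features"]

with rasterio.open("elevation.tif") as src:
    for i, feat in enumerate(features):
        # Bounding box of the labelled polygon, in the raster's CRS.
        left, bottom, right, top = shape(feat["geometry"]).bounds
        window = from_bounds(left, bottom, right, top, transform=src.transform)
        chip = src.read(window=window)  # shape: (bands, rows, cols)

        # Write each chip out as its own georeferenced image.
        profile = src.profile.copy()
        profile.update(height=chip.shape[1], width=chip.shape[2],
                       transform=src.window_transform(window))
        with rasterio.open(f"chip_{i}.tif", "w", **profile) as dst:
            dst.write(chip)
```

In practice you would usually pad the window a little so the object isn't flush against the chip's edge.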

Related

Convert segmented images to COCO JSON

I have images saved as PNG containing the objects I want to generate COCO-style segmentation annotations for. Take this image of a bulb as an example: bulb
Now let's say I want to generate segmentation annotations in COCO format from this image automatically with a program. How can I do it?
One thing we could do is write a program that stores a fixed number of co-ordinates along the object's edges and then uses them in the segmentation field of the .json file, but that creates a problem: the number of co-ordinates needed to accurately capture an object's boundary differs from object to object.
Any kind of help is greatly appreciated.
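One common way around the fixed point-count problem is to trace the mask's contours and simplify them adaptively rather than sampling a fixed number of points. A minimal sketch with OpenCV, assuming the mask is a clean binary PNG (the file name is hypothetical):

```python
import cv2

# Hypothetical file: a black-and-white mask of the bulb.
mask = cv2.imread("bulb_mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Trace object boundaries (OpenCV 4 return signature).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

annotations = []
for contour in contours:
    # Simplify the boundary adaptively: epsilon scales with the perimeter,
    # so the number of points grows or shrinks with the object's complexity.
    epsilon = 0.002 * cv2.arcLength(contour, True)
    polygon = cv2.approxPolyDP(contour, epsilon, True)
    x, y, w, h = cv2.boundingRect(contour)
    annotations.append({
        "segmentation": [polygon.flatten().tolist()],  # COCO: [x1, y1, x2, y2, ...]
        "bbox": [x, y, w, h],
        "area": float(cv2.contourArea(contour)),
        "iscrowd": 0,
    })
```

You would still need to wrap these in COCO's top-level images/annotations/categories structure with the appropriate IDs.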

YOLO V4 Tiny - Making more photos from one annotated image

I am trying to make a YOLOv4-tiny custom data set using Google Colab. I am using labelImg.py for image annotations, which is shown in https://github.com/tzutalin/labelImg.
I have annotated one image as shown below.
The .txt file with the annotated coordinates looks as follows:
0 0.580859 0.502083 0.303906 0.404167
I only have one class, the calculator class. I want to use this one image to produce 4 more annotated images, rotating the annotated image 45 degrees each time to create a new annotated image and a new .txt coordinate file. I have seen something like this done in Roboflow, but I can't figure out how to do it manually with a Python script. Is it possible to do it? If so, how?
You can look into the repo and article below for Python-based data augmentation, including rotation, shearing, resizing, translation, flipping, etc.
https://github.com/Paperspace/DataAugmentationForObjectDetection
https://blog.paperspace.com/data-augmentation-for-bounding-boxes/
If you are using AlexeyAB's darknet repo for YOLOv4, there are also some built-in augmentations you can use to increase training data size and variation.
https://github.com/AlexeyAB/darknet/wiki/CFG-Parameters-in-the-%5Bnet%5D-section
Look into the Data augmentation section, where various augmentations defined for object detection can be enabled by adding them to the YOLO cfg file.
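For the manual route, the usual trick is to apply the same affine rotation to the image and to the four corners of each box, then keep the axis-aligned box that encloses the rotated corners. A minimal sketch (not taken from either link above), assuming OpenCV and YOLO-format labels:

```python
import cv2
import numpy as np

def rotate_with_boxes(image, boxes, angle_deg):
    """Rotate image and YOLO boxes (class, cx, cy, w, h, all normalised)
    about the image centre; return the rotated image and enclosing boxes."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))

    new_boxes = []
    for cls, cx, cy, bw, bh in boxes:
        cx, cy, bw, bh = cx * w, cy * h, bw * w, bh * h
        corners = np.array([[cx - bw / 2, cy - bh / 2],
                            [cx + bw / 2, cy - bh / 2],
                            [cx + bw / 2, cy + bh / 2],
                            [cx - bw / 2, cy + bh / 2]])
        # Transform the corners with the same matrix used on the image,
        # then take the axis-aligned box around them, clipped to the image.
        moved = np.hstack([corners, np.ones((4, 1))]) @ M.T
        x0, y0 = moved.min(axis=0).clip([0, 0], [w, h])
        x1, y1 = moved.max(axis=0).clip([0, 0], [w, h])
        # Back to normalised YOLO format.
        new_boxes.append((cls, (x0 + x1) / 2 / w, (y0 + y1) / 2 / h,
                          (x1 - x0) / w, (y1 - y0) / h))
    return rotated, new_boxes

image = cv2.imread("calculator.jpg")  # hypothetical file name
boxes = [(0, 0.580859, 0.502083, 0.303906, 0.404167)]
for angle in (45, 90, 135, 180):
    img_rot, boxes_rot = rotate_with_boxes(image, boxes, angle)
```

Note that the axis-aligned box enclosing a rotated box is looser than the original; that is inherent to rotation augmentation with axis-aligned labels.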

Mask RCNN: How to add region annotation based on manually segmented image?

There is an implementation of Mask R-CNN on GitHub by Matterport.
I'm trying to train my data for it. I'm adding polygons on images with this tool, drawing them manually, but I already have the manually segmented image below (the black-and-white one).
My questions are:
1) When adding JSON annotations for region data, is there a way to use that pre-segmented image below?
2) Is there a way to train my data for this algorithm without adding JSON annotations, using the manually segmented images instead? The tutorials and posts I've seen all train from JSON annotations.
3) This algorithm's output is obviously an image with masks; is there a way to get black-and-white output for the segmentations?
Here's the code I'm working on in Google Colab.
Original Repo
My Fork
Manually segmented image
I think questions 1 and 2 come down to the same solution: you need to convert your masks to JSON annotations. For that, I suggest you read this link, posted in the repository of the cocodataset. There you can read about a repository you could use for what you need. You could also use the COCO PythonAPI directly, calling the methods defined there.
For question 3, a mask is already a binary image, so you can display it as black and white pixels.
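As a rough illustration of the mask-to-annotation step, here is a minimal sketch using the COCO PythonAPI's RLE encoder; the file name is hypothetical, and you would still need to add image and category IDs to match your dataset:

```python
import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

# Hypothetical file: your manually segmented black-and-white image.
binary = (np.array(Image.open("segmented.png").convert("L")) > 127).astype(np.uint8)

# The encoder expects a Fortran-ordered uint8 array.
rle = mask_utils.encode(np.asfortranarray(binary))
rle["counts"] = rle["counts"].decode("ascii")  # bytes -> str so json.dump works

annotation = {
    "segmentation": rle,
    "area": float(mask_utils.area(rle)),
    "bbox": mask_utils.toBbox(rle).tolist(),  # [x, y, width, height]
    "iscrowd": 0,
}
```

Depending on which data loader you use with the Matterport repo, you may need polygon segmentations rather than RLE; the contour-tracing approach from the previous question covers that case.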

Google TensorFlow Object Detection API on Windows - Labelling Images

I have been using LabelImg to create the XML files in PASCAL VOC format. There is a prebuilt binary version which makes it really easy to start drawing bounding boxes around objects in the images.
https://github.com/tzutalin/labelImg
At times, the bounding boxes end up covering unwanted pixels. I am looking for a more accurate tool on Windows that can help me draw polygons, or use some sort of point-to-point tool, to annotate objects around their edges. I came across one that can be used on Mac OS X:
https://rectlabel.com/
Also, is it true that the TensorFlow Object Detection API only supports bounding-box annotations?

Detect blob in very noisy image

I am working with a stack of noisy images, trying to isolate a blob in an image section. Below you can see the starting image, loaded and plotted with Python, and the same image after some editing with GIMP.
What I want to do is isolate and recognise the blob inside the light blue circle, editing the image and then using something like ndimage.label. Do you have any suggestions on how to edit the image? Thanks.
The background looks quite even, so you should be able to isolate the main object using thresholding, which lets you use array masking to identify regions within the main object. I would have a go with some tools from scikit-image to see where that gets you: http://scikit-image.org/docs/dev/auto_examples/
I would try Gaussian/median filtering followed by thresholding and gap filling. You could also try random walker segmentation, or perhaps texture classification might be more useful. Once you have a list of smaller objects within the main object, you can filter them by shape, size, roundness, etc.
http://scikit-image.org/docs/dev/auto_examples/plot_label.html#example-plot-label-py
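A minimal sketch of that filter-threshold-label pipeline with scikit-image; the file name, filter size, and shape thresholds are all assumptions to tune against your data:

```python
import numpy as np
from skimage import filters, measure, morphology
from skimage.io import imread

# Hypothetical file: one slice of the noisy stack.
image = imread("slice.png", as_gray=True)

# Smooth the noise, then split foreground from the even background.
smoothed = filters.gaussian(image, sigma=2)
binary = smoothed > filters.threshold_otsu(smoothed)

# Fill small gaps and drop speckles before labelling.
cleaned = morphology.remove_small_holes(binary, area_threshold=64)
cleaned = morphology.remove_small_objects(cleaned, min_size=64)

# Label connected regions, then keep only compact, blob-like ones.
labels = measure.label(cleaned)
for region in measure.regionprops(labels):
    if region.area < 100 or region.perimeter == 0:
        continue
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    if roundness > 0.6:
        print(region.label, region.centroid, region.area)
```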
