Get rectangular shape from very noisy image OpenCV Python

Need to get rectangular shapes from a noisy color segmented image.
The problem is that sometimes the object isn't uniformly the correct color, causing holes in the image, and sometimes reflections of the object in the background cause noise/false positives for the color segmentation.
The object could be in any position of the image and of any unknown rectangular size; the holes can occur anywhere inside the object, and the noise can occur on any side of it.
The only known constant is that the object is rectangular in shape.
What's the best way to filter out the noise to the left of the object and get a bounding box around the object?
Using erosion would remove the detail from the bottom of the object and would cause the size of the bounding box to be wrong.

I can't comment because of my rep, but I think you could try to analyse the colored image using other color spaces. Create an upper and a lower bound of the color you want until it selects the object, leaving you with less noise, which you can then filter with erode/dilate/opening/closing.
For example, in my project I wanted to find the bounding box of a color-changing green rectangle, so I tried a lot of different color spaces with a lot of different upper/lower bounds until I finally got something workable. Here is a nice read on what I'm talking about: Docs
You can also try filtering the object by area after dilating it (you dilate first so the closer points connect to one another while the more distant ones, which are the noise, don't, creating a big rectangle with lots of noise; you then filter by a large area).
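Something along these lines (a minimal sketch; the file name, HSV bounds and kernel size are placeholders you would tune for your own object, and OpenCV 4.x is assumed for findContours):

```python
import cv2
import numpy as np

# Hypothetical input and bounds -- tune for your own object and color space.
img = cv2.imread("segmented.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower = np.array([35, 50, 50])      # example lower HSV bound
upper = np.array([85, 255, 255])    # example upper HSV bound
mask = cv2.inRange(hsv, lower, upper)

# Close holes inside the object, then open to drop isolated noise.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Keep the largest remaining blob and take its bounding box.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```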

One method is to take histogram projections along both the horizontal and vertical axes, and select the intersection of the ranges that have high projections.
The projections are just totals of object pixels in each row and each column. When you are looking for only one rectangle, the values indicate the probability of the row/column belonging to the rectangle.
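A rough sketch of that idea (the 0.5 cut-off fraction is an arbitrary choice for illustration, not something prescribed here):

```python
import numpy as np

def bounding_box_from_projections(mask, frac=0.5):
    """mask: binary image where object pixels are non-zero."""
    mask = (mask > 0).astype(np.uint8)
    col_proj = mask.sum(axis=0)   # object pixels per column
    row_proj = mask.sum(axis=1)   # object pixels per row

    # Keep rows/columns whose projection exceeds a fraction of the peak.
    cols = np.where(col_proj > frac * col_proj.max())[0]
    rows = np.where(row_proj > frac * row_proj.max())[0]

    x0, x1 = cols.min(), cols.max()
    y0, y1 = rows.min(), rows.max()
    return x0, y0, x1, y1         # bounding box of the dominant rectangle
```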

Related

Python - Segmenting an ROI with many smaller objects inside

I am working with an image containing a lot of small objects formed of hexagons, which are roughly inside a rectangular figure.
See image here:
There are also areas of noise outside this rectangle with the same pixel intensity, which I want to disregard with future functions. I have 2 questions regarding this:
How can I create a segmentation/ROI to only consider the objects/shapes inside that rectangular figure? I tried using Canny and contouring, as well as methods to try and create bounding boxes, but in each of them I always segment the individual objects directly in the entire image, and I can't eliminate the outside noise as a preliminary step.
How can I identify the number of white hexagons inside the larger rectangle? My original idea was to find the area of each of the individual objects I would obtain inside the rectangle (using contouring), sort them from smallest to largest (so the smallest area would correspond to a single hexagon), and then divide all the areas by the hexagonal area to get counts, which I could sum together. Is there an easier way to do this?
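A rough sketch of that counting idea (hypothetical; it assumes a binary mask of the hexagons inside the rectangle is already available and that the smallest blob really is a single hexagon):

```python
import cv2

# Placeholder file name: a binary mask of the hexagons inside the rectangle.
mask = cv2.imread("hexagons_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

areas = sorted(cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 0)
single_hex_area = areas[0]          # smallest blob ~ one hexagon

# Estimate how many hexagons each blob contains and sum the estimates.
count = sum(int(round(a / single_hex_area)) for a in areas)
print("Estimated number of hexagons:", count)
```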

Calculating positions of objects as (x,y) on a known platform (opencv-python)

I have a platform whose dimensions I know. I would like to get the positions of objects placed on it as (x,y) while looking through the webcam, the origin being the top-left corner of the platform. However, I can only look at it from a low angle: example
I detect the objects using Otsu thresholding. I want to use the bottom edge of the bounding rectangles and then proportion it with respect to the corners (the best I can think of), but I don't know how to implement it. I tried warp perspective, but it enlarges the objects too much. image with threshold // attempt of warp perspective
Any help or suggestion would be appreciated.
Don't use warp perspective to transform the image to make the table cover the complete image as you did here.
While performing perspective transformations in image processing, try not to transform the image too much.
Below is the image with your table marked with red trapezium that you transformed.
Now try to transform it into a perfect rectangle but you do not want to transform it too much as you did. One way is to transform the trapezium to a rectangle by simply adjusting the shorter edge's vertices to come directly above the lower edge's vertices as shown in the image below with green.
This way, things far from the camera will be skewed only a little with respect to width. This will give better results. An even better way would be to decrease the size of the lower edge a little and increase the size of the upper edge a little. This will skew objects kept on the table more evenly, as shown below.
Now, as you know the real dimensions of the table and the dimensions of the rectangle in the image, you can do the mapping. Using this, you can determine the exact position of the objects kept on the table.
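Something like the following (a sketch only; the corner coordinates and table dimensions are made-up placeholders, and it follows the first suggestion above of moving the upper corners directly above the lower ones):

```python
import cv2
import numpy as np

# Hypothetical table corners in the camera image (a trapezium),
# ordered top-left, top-right, bottom-right, bottom-left.
src = np.float32([[210, 120], [430, 120], [500, 360], [140, 360]])

# Gentle target rectangle: keep the lower edge fixed and move the upper
# corners directly above the lower ones.
dst = np.float32([[140, 120], [500, 120], [500, 360], [140, 360]])

M = cv2.getPerspectiveTransform(src, dst)

TABLE_W_CM, TABLE_H_CM = 60.0, 40.0     # assumed real table size
rect_w = dst[1][0] - dst[0][0]
rect_h = dst[2][1] - dst[1][1]

def image_to_table(pt):
    """Map an image point (e.g. bottom-centre of a bounding box) to table (x, y) in cm."""
    p = cv2.perspectiveTransform(np.float32([[pt]]), M)[0][0]
    x_cm = (p[0] - dst[0][0]) / rect_w * TABLE_W_CM
    y_cm = (p[1] - dst[0][1]) / rect_h * TABLE_H_CM
    return x_cm, y_cm

print(image_to_table((320, 350)))
```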

How to remove islands smaller than a given radius and specify minimum gap between islands

I have some data that I have segmented into a number of classes in Python 3.5. The resulting image is the following
Original image
There are a lot of small islands within the data that I want to remove. I've circled a few of them to give an idea below, but there are many.
Original image with some noise circled
The idea is that I want to be able to specify a minimum radius below which a noise island should be deleted.
I have tried a few different approaches using scikit-image morphology filters. I have tried combining grayscale closing and opening filters (shown in the image below), and I have also tried using the remove_small_objects filter and treating each pair of classes separately and combining them at the end. They do work at removing the noise islands (shown below), but this creates a new problem. There are thin boundaries between some of the islands, which I don't want either!
Image with grayscale opening applied and thin areas circled
So basically, I want to remove the noise islands but also have a minimum gap between each island.
Any help would be much appreciated.
EDIT:
Some clarification on the desired result:
The desired result is to obtain an image where there are no blobs smaller than a certain radius, and also that the thin boundaries between blobs below a certain thickness are filled in or removed. A mockup of what I'm looking for is shown here:
Desired result
Dilation and erosion (the processes that are performed by opening and closing) are the standard approach for filtering small noise patches, but for long and snaky regions they can cause the issues you are experiencing with the creation of thin regions and boundaries. Rather than dilation and erosion, you can instead try filtering based on explicit connected component size. Skimage has the function skimage.morphology.label which labels your connected components, and skimage.morphology.remove_small_objects which removes any connected components with size below a certain threshold.
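A minimal sketch of that, assuming the segmentation is a 2-D array of integer class labels and treating one class at a time; the min_size value is a placeholder you would derive from your minimum radius (roughly pi * r**2 pixels):

```python
import numpy as np
from skimage import morphology

def clean_class(seg, class_id, min_size=200):
    """Remove connected components of one class smaller than min_size pixels."""
    mask = seg == class_id
    labeled = morphology.label(mask, connectivity=2)     # label connected components
    cleaned = morphology.remove_small_objects(labeled, min_size=min_size)
    return cleaned > 0                                   # boolean mask of surviving blobs
```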

Calculate a color threshold from within a contour dynamically

I have an input image which consists of 3 colors. These colors are circular in shape and nested.
The image is similar to this :
https://www.google.ie/search?q=red+yellow+blue+nested+circles&client=ms-unknown&tbm=isch&tbs=rimg:CbQTsOKsM7yhIkCaDOdJHzqnN2Xk-DhItFHm0Zqt6wMB32Tm1CzyzQ7wrXERbVqngEyMBzO57J8UuHLak9WPqWfjV7kgvdJ47BJlKhIJmgznSR86pzcR8SW2ldYWlqIqEgll5Pg4SLRR5hG-6WlMFrVBvioSCdGaresDAd9kEVFfCyyB-AgqKhIJ5tQs8s0O8K0RT790ELynuK8qEglxEW1ap4BMjBHBPar4Jd2NtioSCQczueyfFLhyEY7iP_13IGcsOKhIJ2pPVj6ln41cRTMOWeqZE5oYqEgm5IL3SeOwSZREray5kAy-dzw%3D%3D&tbo=u&ved=0ahUKEwjWzfSC6IbXAhXFbBoKHW3GBrUQuIIBCCM#imgrc=5tQs8s0O8K32hM:
I will be working with several images like this. The image is always of the same thing; however, due to different cameras, lighting, and even printer differences, the actual colors can vary, though it will always be red, yellow, green in the order shown. By using HSV and thresholds I can easily determine upper and lower values for each color. However, if I change to a different set of images, these values no longer work.
My idea to overcome this is to look for contours first in the image.
For each contour I would like to get an upper and lower threshold. Using a combination of Canny, Gaussian blur and contours I am able to draw a contour around each color; from testing, this seems general enough for the purpose.
Where I'm stuck is getting the threshold values from within the contours. Is this possible? Or is there simpler logic I am overlooking to achieve this?
At present I'm using Python, but the language is secondary.
Forget about the contours, they will make things harder than necessary.
A better approach could be to classify the pixels using k-means. Initialize with at least three clusters, centered around green, yellow and red. Maybe add one centered on white, for the background.
After convergence you should have the exact colors, together with segmentation.
https://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm
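A minimal sketch with OpenCV's built-in k-means (note it uses k-means++ initialization rather than hand-picked centers, which is a simplification of the suggestion above; the file name and cluster count are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("nested_circles.png")            # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)

# Four clusters: the three ring colors plus the background.
K = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# centers holds the colors actually found in this image,
# and labels gives a per-pixel segmentation.
segmentation = labels.reshape(img.shape[:2])
print(centers.astype(np.uint8))                   # BGR cluster colors
```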

Clipping image/remove background programmatically in Python

How to go from the image on the left to the image on the right programmatically using Python (and maybe some tools, like OpenCV)?
I made this one by hand using an online tool for clipping. I am a complete noob in image processing (especially in practice). I was thinking of applying some edge or contour detection to create a mask, which I would apply later on the original image to paint everything else (except the region of interest) black. But I failed miserably.
The goal is to preprocess a dataset of very similar images, in order to train a CNN binary classifier. I tried to train it by just cropping the image close to the region of interest, but the noise is so high that the CNN learned absolutely nothing.
Can someone help me do this preprocessing?
I used OpenCV's implementation of the watershed algorithm to solve your problem. You can find out how to use it if you read this great tutorial, so I will not explain it in a lot of detail.
I selected four points (markers). One is located on the region that you want to extract, one is outside and the other two are within lower/upper part of the interior that does not interest you. I then created an empty integer array (the so-called marker image) and filled it with zeros. Then I assigned unique values to pixels at marker positions.
The image below shows the marker positions and marker values, drawn on the original image:
I could also select more markers within the same area (for example several markers that belong to the area you want to extract) but in that case they should all have the same values (in this case 255).
Then I used watershed. The first input is the image that you provided and the second input is the marker image (zero everywhere except at marker positions). The algorithm stores the result in the marker image; the region that interests you is marked with the value of the region marker (in this case 255):
I set all pixels that did not have the 255 value to zero. I dilated the obtained image three times with a 3x3 kernel. Then I used the dilated image as a mask for the original image (I set all pixels outside the mask to zero), and this is the result I got:
You will probably need some kind of method that will find the markers automatically. The difficulty of this task depends heavily on the set of input images. In some cases, the method can be really straightforward and simple (as in the tutorial linked above), but sometimes this can be a tough nut to crack. I can't recommend anything more specific because I don't know what your images look like in general (you only provided one). :)
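For reference, a minimal sketch of the procedure described above; the marker coordinates are hypothetical placeholders that would have to fall inside the corresponding regions of your own image, and the file name is assumed:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")                  # placeholder file name

# Marker image: zero everywhere, unique values at the marker positions.
markers = np.zeros(img.shape[:2], dtype=np.int32)
markers[400, 300] = 255      # inside the region to extract (hypothetical position)
markers[50, 50] = 1          # background
markers[200, 300] = 2        # unwanted interior, upper part
markers[600, 300] = 3        # unwanted interior, lower part

# Watershed floods the regions; the result is written back into `markers`.
cv2.watershed(img, markers)

# Keep only the region flooded from the 255 marker, grow it slightly,
# then use it as a mask on the original image.
mask = (markers == 255).astype(np.uint8)
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
result = cv2.bitwise_and(img, img, mask=mask)
```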
