Detect blob in very noisy image - python

I am working with a stack of noisy images, trying to isolate a blob in an image section. Below you can see the starting image, loaded and plotted with python, and the same image after some editing with Gimp.
What I want to do is to isolate and recognise the blob inside the light blue circle, editing the image and then using something like ndimage.label. Do you have any suggestion on how to edit the image? Thanks.

The background looks quite even, so you should be able to isolate the main object using thresholding, which lets you use array masking to identify regions within it. I would have a go with some tools from scikit-image and see where that gets you: http://scikit-image.org/docs/dev/auto_examples/
I would try Gaussian/median filtering followed by thresholding and filling gaps. Or you could try random walker segmentation, or perhaps texture classification might be more useful. Once you have a list of smaller objects within the main object, you can filter them by shape, size, roundness, etc.
http://scikit-image.org/docs/dev/auto_examples/plot_label.html#example-plot-label-py
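A minimal sketch of the filter-threshold-label pipeline suggested above, using scipy.ndimage (the synthetic image, threshold value, and filter size are assumptions standing in for the real data):

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for the noisy image section (the real data is not shown).
rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, (100, 100))
image[40:60, 40:60] += 0.5  # a bright "blob" on an even background

# Smooth to suppress noise, then threshold against the background level.
smoothed = ndimage.median_filter(image, size=5)
mask = smoothed > 0.4

# Label connected regions and keep the largest one as the blob.
labels, n = ndimage.label(mask)
if n > 0:
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    blob = labels == (np.argmax(sizes) + 1)
```

With real data you would tune the filter size and threshold to the background level of your stack, and then filter the labelled regions by size or roundness as described.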


Is there a way to discern an object from the background with OpenCV?

I always wanted to have a device that, from a live camera feed, could detect an object, create a 3D model of it, and then identify it. It would work a lot like the Scanner tool from Subnautica. Imagine my surprise when I found OpenCV, a free-to-use computer vision tool for Python!
My first step is to get the computer to recognize that there is an object at the center of the camera feed. To do this, I found a Canny() function that could detect edges and display them as white lines in a black image, which should make a complete outline of the object in the center. I also used the floodFill() function to fill in the black zone between the white lines with gray, which would show that the computer recognizes that there is an object there. My attempt is in the following image.
The red dot is the center of the live video.
The issue is that the edge lines can have holes in them due to a blur between two colors, which can range from individual pixels to entire missing lines. As a result, the gray gets out and doesn't highlight me as the only object, and instead highlights the entire wall as well. Is there a way to fill those missing pixels in or is there a better way of doing this?
Welcome to SO and the exciting world of machine vision!
What you are describing is a very classical problem in the field, and not a trivial one at all. It depends heavily on the shape and appearance of what you define as the object of interest and the overall structure, homogeneity and color of the background. Remember, the computer has no concept of what an "object" is, the only thing it 'knows' is a matrix of numbers.
In your example, you might start out by selecting the background area by color (or hue; look up HSV). Everything else is your object. This is what classical greenscreening techniques do, and it only works with (a) a homogeneous background that does not share a color with your object and (b) a single object or multiple non-overlapping objects.
The problem with your edge-based approach is that you won't reliably get a closed edge, and deciding where the inside and outside of the object are can get tricky.
Advanced ways to do this would get you into Neural Network territory, but maybe try to get the basics down first.
Here are two links to tutorials on converting color spaces and extracting contours:
https://docs.opencv.org/4.x/df/d9d/tutorial_py_colorspaces.html
https://docs.opencv.org/3.4/d4/d73/tutorial_py_contours_begin.html
If you got that figured out, look into stereo vision or 3D imaging in general, and that subnautica scanner might just become reality some day ;)
Good luck!

Mask RCNN: How to add region annotation based on manually segmented image?

There is an implementation of Mask RCNN on GitHub by Matterport.
I'm trying to train it on my data. I'm drawing polygons on images manually with this tool, but I already have the manually segmented image below (the black-and-white one).
My questions are:
1) When adding json annotation for region data, is there a way to use that pre-segmented image below?
2) Is there a way to train my data for this algorithm, without adding json annotation and use manually segmented images? The tutorials and posts I've seen uses json annotations to train.
3) This algorithm's output is obviously an image with masks; is there a way to get black-and-white output for the segmentations?
Here's the code that I'm working on google colab.
Original Repo
My Fork
Manually segmented image
I think questions 1 and 2 have the same solution: you need to convert your masks to json annotations. For that, I suggest reading this link, posted in the repository of the cocodataset. There you can read about this repository, which you could use for what you need. You could also use the Coco PythonAPI directly, calling the methods defined here.
For question 3, a mask is already a binary image (therefore, you can show it as black and white pixels).

OpenCV how to replace cut out object with background

I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting the picture from another does not work either because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtract the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so it overlaps the first.
Replace the region with the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
You can try this library for background subtraction issues; there are Python wrappers for it. https://github.com/andrewssobral/bgslibrary

Remove background and noise from image

I'm trying to remove the background from a video and get binary (or 8-bit) images where the value of the moving object is 1 and the static background is 0.
something like this:
At first I tried getting the difference absDiff() between a running average accumulateWeighted() and the current frame, but the result was not what I expected (only the edges were 1 and the inside of the moving object was 0).
So I went for createBackgroundSubtractorMOG2 and createBackgroundSubtractorMOG, but these are not good either (same problem).
Is there a way to get the whole moving object?
The Mixture of Gaussians method is not going to solve all your problems. A common problem is sensitivity to lighting conditions, e.g. shadows being attached to the extracted foreground object. If the image scenario (background) stays roughly the same, you can refine your results with some image processing.
If the background is similar to the one in the attached image, try building a color histogram in HSI space, create an image of the extracted foreground object (not the mask, the actual colored image), and remove pixels whose color is similar to the floor (a technique known from skin-detection methods). That way you can remove some of the shadows attached to the person/objects.
Also, if real-time processing is not crucial in your application, you could use more sophisticated background/foreground detection like SubSENSE.

How can I use PIL to crop a select area based on face detection?

Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. The problem is that these images are meant to show people's faces, so I need to crop automatically based on them.
I know face detection is a difficult concept so I'm thinking of using the face.com API http://developers.face.com/tools/#faces/detect which is fine for what I want to do.
I'm just a little stuck on how I would use this data to crop a select area based on the majority of faces.
Can anybody help?
Joe
There is a library for Python that has a concept of smart cropping and, among other options, can use face detection to do smarter cropping.
It uses opencv under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have some rectangle that you want to excise from an image, here's what I might try first:
(optional) If the image is large, do a rough square crop centered on the face with dimensions sqrt(2) larger than the longer edge (if rectangular). Worst-case (45° rotation), it will still grab everything important.
Rotate based on the face orientation, something like rough_crop.rotate(math.degrees(math.atan(ydiff / xdiff))) (trig is fun).
Do a final crop. If you did the initial crop, the face should be centered, otherwise you'll have to transform (rotate) all your old coordinates to the new image (more trig!).
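The "majority of faces" part can be handled by cropping to the union of all detected face boxes with some padding. A PIL sketch (the (left, top, width, height) box format is an assumption about what the detection API returns; the margin and output size are placeholders):

```python
from PIL import Image

def crop_to_faces(img, faces, out_size=(200, 200), margin=0.3):
    """Crop to the bounding box covering all detected faces.
    `faces` is a list of (left, top, width, height) boxes, e.g. as
    returned by a face-detection API (exact format is an assumption)."""
    left = min(x for x, y, w, h in faces)
    top = min(y for x, y, w, h in faces)
    right = max(x + w for x, y, w, h in faces)
    bottom = max(y + h for x, y, w, h in faces)
    # Pad the box, clamped to the image bounds.
    pad_x = int((right - left) * margin)
    pad_y = int((bottom - top) * margin)
    box = (max(0, left - pad_x), max(0, top - pad_y),
           min(img.width, right + pad_x), min(img.height, bottom + pad_y))
    return img.crop(box).resize(out_size)
```

Note that resizing to a fixed size distorts the aspect ratio; for a website thumbnail you might instead expand the padded box to the target aspect ratio before cropping.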
