I'm trying to perform image segmentation using Morphological Snakes, but the input images contain something like a halo that is interfering with the output. Here you can see an example image, and its corresponding segmentation should look like this. As can be seen in the first image, the object of interest is the one in the center, but around it there is what I call a gray halo, and since the difference between the pixel values is small I haven't found a way to perform the segmentation.
I've tried preprocessing methods such as morphological operations and a Gaussian filter, but I didn't obtain the expected results. I've also tried to segment the image using adaptive thresholding and Otsu, but that didn't work either because, as I said before, the difference between the pixel values is small (that's why I moved to morphological snakes, which work better but still not well enough for my case).
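For reference, a minimal sketch of the kind of pipeline described above, using skimage's morphological snakes; the file name and all parameter values are assumptions and would need tuning for the halo:

from skimage import io, img_as_float
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

# Hypothetical input file; load as a grayscale float image
img = img_as_float(io.imread("example.png", as_gray=True))

# Gaussian pre-filter to suppress some of the halo's texture
smoothed = gaussian(img, sigma=2)

# Morphological Chan-Vese snake; the smoothing/lambda values are guesses,
# and it is exactly the low contrast between halo and object that makes
# these parameters hard to choose
seg = morphological_chan_vese(smoothed, 200,
                              init_level_set="checkerboard",
                              smoothing=3, lambda1=1, lambda2=1)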
I want to know if there's any preprocessing method that could help in my case; maybe I haven't found the correct combination of them yet. I need to remove or clean up the halo, or to highlight the object in the center.
Thanks in advance to anyone who can help me.
I have raw microscopy images like this:
I want to segment the objects; as you can see, some of them are really close together and there is a wide range of intensity values.
background: 700 a.u.
fluorescent shapes: from 7000 to 32000 a.u.
To segment them I use Otsu binary thresholding, with no prior processing of the image (shown here with OpenCV):
thresh, imgthresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The result is pretty good, but it still fails to detect the brightest shapes as individual objects.
I have tried a lot of things: the watershed algorithm, image preprocessing (blurring), erosion, adaptive thresholding, but nothing works properly, since the main problem is the range of fluorescence values in the image.
Any smart idea on how to solve this?
Because your data have such a large range of intensity values, single histogram-based methods applied to the whole image (e.g. Otsu) are going to have trouble accomplishing this task. I think your best bet is going to be either:
threshold_multiotsu: choose the number of classes based on the number of 'clusters' of intensities. Unfortunately, you will likely need to alter the number of classes on an image-by-image basis, so this isn't super robust.
threshold_local: I know you said that you tried this, but you might revisit it and alter the block_size parameter until you get something that looks reasonable. Based on your example images (and assuming a little bit about why the objects in your example images are green), it looks like objects in close spatial proximity to one another generally have similar intensity values. Furthermore, you likely won't have to tune the parameters as much as you would in option 1. A sketch of both options follows below.
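A minimal sketch of both options, assuming a single-channel image file; the file name, the number of classes and the block size are illustrative and will need tuning:

import numpy as np
from skimage import io
from skimage.filters import threshold_multiotsu, threshold_local

image = io.imread("microscopy.tif")               # hypothetical input file

# Option 1: multi-Otsu with, e.g., 3 classes (background, dim, bright)
thresholds = threshold_multiotsu(image, classes=3)
regions = np.digitize(image, bins=thresholds)     # label map 0 .. classes-1
mask_multiotsu = regions > 0                      # everything brighter than background

# Option 2: local (adaptive) thresholding; block_size must be odd
local_thresh = threshold_local(image, block_size=151, method="gaussian")
mask_local = image > local_thresh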
I suspect that these will be the simplest and most straightforward approaches, but you could also delve into identifying the object edges with something from skimage.feature and then filling the objects. Maybe something like the blob-detection example here: https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html. This will be a bit more involved, but these methods should be more robust at identifying objects with widely varying intensity values.
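A small illustration of that blob-detection route; the sigma range and threshold below are guesses to tune, and the file name is an assumption:

import numpy as np
from skimage import io
from skimage.feature import blob_log

image = io.imread("microscopy.tif").astype(float)  # hypothetical input file
image /= image.max()                               # normalize to [0, 1]

# Laplacian-of-Gaussian blob detection; each row of the result is (row, col, sigma)
blobs = blob_log(image, min_sigma=3, max_sigma=30, num_sigma=10, threshold=0.05)
for r, c, sigma in blobs:
    print(f"blob at ({r:.0f}, {c:.0f}), radius ~ {sigma * np.sqrt(2):.1f} px")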
If all else fails, you can try a couple of SOTA packages. The main ones I am thinking of are https://github.com/stardist/stardist and https://github.com/MouseLand/cellpose, but these seem like a bit of overkill based on your example data here.
I always wanted to have a device that, from a live camera feed, could detect an object, create a 3D model of it, and then identify it. It would work a lot like the Scanner tool from Subnautica. Imagine my surprise when I found OpenCV, a free-to-use computer vision tool for Python!
My first step is to get the computer to recognize that there is an object at the center of the camera feed. To do this, I used the Canny() function, which detects edges and displays them as white lines on a black image; this should produce a complete outline of the object in the center. I also used the floodFill() function to fill the black zone between the white lines with gray, which would show that the computer recognizes that there is an object there. My attempt is shown in the following image.
The red dot is the center of the live video.
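For reference, here is roughly what the attempt above looks like in code; a minimal sketch, assuming a single captured frame and illustrative Canny thresholds:

import cv2
import numpy as np

frame = cv2.imread("frame.png")            # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 50, 150)           # white edge lines on a black image

h, w = edges.shape
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger
center = (w // 2, h // 2)                  # the red dot: center of the frame
cv2.floodFill(edges, mask, center, 128)    # fill the enclosed region with gray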
The issue is that the edge lines can have holes in them due to blur between two colors, ranging from individual missing pixels to entire missing lines. As a result, the gray leaks out and doesn't highlight me as the only object; instead it highlights the entire wall as well. Is there a way to fill in those missing pixels, or is there a better way of doing this?
Welcome to SO and the exciting world of machine vision!
What you are describing is a very classical problem in the field, and not a trivial one at all. It depends heavily on the shape and appearance of what you define as the object of interest, and on the overall structure, homogeneity and color of the background. Remember, the computer has no concept of what an "object" is; the only thing it 'knows' is a matrix of numbers.
In your example, you might start by selecting the background area by color (or hue; look up HSV). Everything else is your object. This is what classical greenscreening techniques do, and it only works with (a) a homogeneous background that does not share a color with your object and (b) a single object or multiple non-overlapping objects.
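A minimal sketch of that idea, assuming the background is a roughly uniform, pale, low-saturation wall; the HSV range below is purely illustrative:

import cv2
import numpy as np

frame = cv2.imread("frame.png")              # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Illustrative range for a pale, low-saturation background
lower = np.array([0, 0, 120])
upper = np.array([180, 60, 255])
background = cv2.inRange(hsv, lower, upper)  # white where the background is

object_mask = cv2.bitwise_not(background)    # everything else is the object
result = cv2.bitwise_and(frame, frame, mask=object_mask)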
The problem with your edge-based approach is that you can't rely on getting a closed edge, and deciding where the inside and the outside of the object are can get tricky.
Advanced ways to do this would take you into neural-network territory, but maybe try to get the basics down first.
Here are two links to tutorials on converting color spaces and extracting contours:
https://docs.opencv.org/4.x/df/d9d/tutorial_py_colorspaces.html
https://docs.opencv.org/3.4/d4/d73/tutorial_py_contours_begin.html
Once you have that figured out, look into stereo vision or 3D imaging in general, and that Subnautica scanner might just become reality some day ;)
Good luck!
After performing a bunch of preprocessing steps, I have this image. I'd like to remove the tiny islands of noise from the image. What I've noticed is that these noisy pixels are directly connected to fewer than two pixels in their neighborhood. Is there a way to extract just the logo and the letters "PUSH TO OPEN" without the noisy pixels?
I've already tried basic morphological operations such as erosion (cv2.erode) and opening, to no avail.
I apologize if any part of my question is unclear, as I'm a beginner at OpenCV. Any help is appreciated!
I'm not sure, but it looks like salt-and-pepper noise. You can remove it with a smoothing filter, such as a median filter, in your preprocessing step. Also, since you're new to OpenCV, you might want to look at the OpenCV tutorials.
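For example, a median filter sketch in OpenCV; the file name and kernel size are assumptions:

import cv2

img = cv2.imread("preprocessed.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# 3x3 median filter: each pixel becomes the median of its neighborhood,
# which removes isolated salt-and-pepper pixels while keeping the letters intact
cleaned = cv2.medianBlur(img, 3)

cv2.imwrite("cleaned.png", cleaned)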
I have two images: one that contains a box and one without it. There is a small vertical disparity between the two pictures, since the camera was not in the same spot and was translated a bit. I want to cut out the box and fill the hole with information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so that it overlaps the first.
Replace the region containing the object you want to remove with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images contain moving objects (e.g. people) in different places, but otherwise it should not be hard to tweak this code. A rough sketch of the three steps is given below.
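A sketch of the three steps using ORB features and RANSAC; the file names and the box location are assumptions, and the scene is assumed to be roughly planar:

import cv2
import numpy as np

img_with_box = cv2.imread("with_box.jpg")        # hypothetical file names
img_without = cv2.imread("without_box.jpg")
gray1 = cv2.cvtColor(img_with_box, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img_without, cv2.COLOR_BGR2GRAY)

# 1. Match features and estimate a homography with RANSAC
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(gray1, None)
kp2, des2 = orb.detectAndCompute(gray2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the second (empty) image into the frame of the first
h, w = img_with_box.shape[:2]
warped = cv2.warpPerspective(img_without, H, (w, h))

# 3. Copy warped pixels into the region covered by the box (placeholder location)
box_mask = np.zeros((h, w), np.uint8)
box_mask[100:300, 200:400] = 255
result = img_with_box.copy()
result[box_mask > 0] = warped[box_mask > 0]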
You can try this library for background subtraction problems: https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.
This is a follow-up question to my previous question.
(Finding areas that are too thin using morphological opening on black and white images)
After reading and implementing the suggestions from Shai and rayryeng I have another issue.
The algorithm also finds the ends of pointy shapes, and I need to disregard those, since every triangle ends in a really thin area.
For example:
The algorithm finds the trident's shaft and the small part in the middle, which is great. But it also finds the tip of the trident at the top right, which is simply the end of a shape.
Any ideas on how to identify those kinds of cases would be greatly appreciated.
You might want to consider applying the bwmorph operation 'endpoints' to the 'skel' of your template. These two morphological operations should help you identify the "pointy" shapes in your input image and thus exclude them from the "thin regions" you highlight.
If you are using OpenCV, you may find this example of a morphological skeleton operation useful. It also seems like pymorph could prove useful for you.
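For a Python route, a rough equivalent of bwmorph's 'skel' + 'endpoints' with scikit-image; this assumes your template is already a boolean image called template:

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

skel = skeletonize(template)                  # template: boolean shape image

# An endpoint is a skeleton pixel with exactly one skeleton neighbor
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbor_count = convolve(skel.astype(int), kernel, mode="constant")
endpoints = skel & (neighbor_count == 1)
# Thin regions that contain an endpoint can then be discarded as "pointy" shape ends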