I am putting together an OpenCV script to analyze immunohistochemically stained heart tissue. Our staining procedure marks cell types expressing certain plasma-membrane proteins with pigments visible under a light microscope, which we use to photograph the tissue.
So far, I've succeeded in segmenting the images into different layers based on color range, using a modified version of the frequently cited color segmentation script from the OpenCV tutorials (http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html).
A screenshot of the original image:
B-Cell layer displayed:
At this point, I would like to calculate the ratio of the area of B-cells to unstained tissue. This requires extracting the background tissue layer, which I likewise attempted based on color range:
Obviously, these results leave much to be desired.
Does anyone have ideas on how to approach this problem? Again, I would like to segment the background (transparent) tissue layer, which is unfortunately fairly sponge-like in texture. My goal is to create a mask representative of the area of unstained tissue. It seems a blur technique is necessary to fill the gaps in the tissue, but the loss of accuracy this approach entails is obvious.
In the sample image, the channels look highly correlated. If you apply decorrelation stretching to the image, you should be able to see more detail. In my blog post I've implemented decorrelation stretching in C++ (unfortunately not Python).
Using the sample code in the blog I did the following to segment the cell region:
dstretch the CIE Lab image with the following targetMean and targetSigma:
float mu[3] = {128.0f, 128.0f, 128.0f};  // target mean for the L, a, b channels
float sd[3] = {128.0f, 5.0f, 5.0f};      // target sigma for the L, a, b channels
Mat mean = Mat(3, 1, CV_32F, mu);
Mat sigma = Mat(3, 1, CV_32F, sd);
Convert the dstretched CIE Lab image back to BGR.
Erode this BGR image with a 3x3 rectangular structuring element once.
Apply kmeans clustering to this eroded image with k = 2.
I don't know how good this segmentation is. You can probably get a better segmentation by trying different values for the above parameters (mean, sigma, structuring-element size and the number of erosion iterations).
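For reference, here is a hedged Python sketch of steps 3 and 4 above (erosion followed by k-means), assuming dstretched_bgr holds the decorrelation-stretched image already converted back to BGR:

import cv2
import numpy as np

# Step 3: erode once with a 3x3 rectangular structuring element
eroded = cv2.erode(dstretched_bgr, np.ones((3, 3), np.uint8), iterations=1)

# Step 4: k-means clustering of the pixel colors with k = 2
pixels = eroded.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)
segmented = labels.reshape(eroded.shape[:2])  # 0/1 cluster label per pixel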
(Following images are not to the original scale)
Original:
dstretched CIE Lab converted back to BGR:
Eroded:
kmeans with k = 2:
I am trying to paste an object with a completely tight, known mask onto an image, so it should be easy, but without some post-processing I get artefacts at the border. I want to use Poisson blending to reduce the artefacts; it is implemented in OpenCV as seamlessClone.
import cv2
import matplotlib.pyplot as plt

# User-provided tight mask array tight_mask of dtype uint8 (50x50x3):
# white pixels lie on the object, all others are black
tight_mask
# Object obj to paste, a 50x50x3 uint8 color image
obj
# User-provided image im, which is large (512x512) with a mostly uniform background
im

# Center of the location in im where obj is pasted (assumed: image center)
center = (im.shape[1] // 2, im.shape[0] // 2)

# Two different modes of Poisson blending, which give approximately the same result
normal_clone = cv2.seamlessClone(obj, im, tight_mask, center, cv2.NORMAL_CLONE)
mixed_clone = cv2.seamlessClone(obj, im, tight_mask, center, cv2.MIXED_CLONE)

plt.imshow(normal_clone, interpolation="none")
plt.imshow(mixed_clone, interpolation="none")
However, with the code above, I only get images where the pasted objects are extremely transparent. They are obviously well blended, but so much so that they fade away like ghosts.
I was wondering if I am the only one to have such issues and, if not, what the alternatives are in terms of Poisson blending.
Do I have to reimplement it from scratch to modify the blending factor (is that even possible?), or is there another way? Do I have to use dilation on the mask to lessen the blending? Can I enhance the contrast somehow afterwards?
In fact, Poisson blending uses the gradient information in the image being pasted to blend it into the target image.
It turns out that if the mask is completely tight, the gradient at the border is artificially interpreted as null.
That is why the algorithm ignores the border completely and produces ghosts.
The solution is therefore to use a larger mask, obtained by dilating the original mask with morphological operations, so that it includes some background.
Care must be taken when choosing the color of the included background: if the contrast is too high, the gradient will be too strong and the image will not blend well.
Using a color like gray is a good starting point.
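As a minimal sketch of that fix (reusing tight_mask, obj, im and center from the question, with an assumed kernel size that you would tune):

import cv2
import numpy as np

# Dilate the tight mask so the blended region includes a thin ring of
# background around the object; the 11x11 kernel is an assumption to tune.
kernel = np.ones((11, 11), np.uint8)
loose_mask = cv2.dilate(tight_mask, kernel, iterations=1)

blended = cv2.seamlessClone(obj, im, loose_mask, center, cv2.NORMAL_CLONE)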
I am using an object segmentation dataset with the following information:
Introduced: IROS 2012
Device: Kinect v1
Description: 111 RGBD images of stacked and occluding objects on table.
Labelling: Per-pixel segmentation into objects.
Link to the page: http://www.acin.tuwien.ac.at/?id=289
I am trying to use the depth map provided by the dataset. However, it seems the depth map is completely black.
Original image for the above depth map
I tried to do some preprocessing and normalised the image so that the depth map could be visualised in the form of a gray image.
import cv2
import numpy as np

img_depth = cv2.imread("depth_map.png", -1)  # depth_map.png has uint16 data type
depth_array = np.array(img_depth, dtype=np.float32)
frame = cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)
cv2.imwrite('capture_depth.png', (frame * 255).astype(np.uint8))
The result of doing this preprocessing is:
In one of the posts on Stack Overflow, I read that these black patches are the regions where the depth map was not defined.
If I have to use this depth map, what is the best possible way to fill these undefined regions? (I am thinking of filling these regions with k-nearest neighbours, but feel there could be better ways.)
Are there any RGB-D datasets that do not have such problems, or do these kinds of problems always exist? What are the best ways to tackle them?
Thanks in advance!
Pretty much every 3D imaging technology will produce data with invalid or missing points. Lack of texture, too-steep slopes, occlusion, transparency, reflections... you name it.
There is no magic solution to filling these holes. You'll need some sort of interpolation, or you can replace missing points based on some model.
The internet is full of methods for filling holes. Most techniques for intensity images can be successfully applied to depth images.
It will depend on your application, your requirements and what you know about your objects.
Data quality in 3D is a question of time, money and the right combination of object and technology.
Areas that absorb or scatter the Kinect IR (like glossy surfaces or sharp edges) are assigned a zero pixel value (indicating non-calculated depth). A method to approximately fill the non-captured data around these areas is to use the statistical median of a 5x5 window. This method works just fine for Kinect depth images. Example implementations for Matlab and C# can be found in the links.
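A minimal Python sketch of that idea, assuming depth is a uint16 array in which 0 marks undefined pixels (note that the window median here also includes the zeros, so this is only an approximation):

import cv2
import numpy as np

def fill_zeros_with_median(depth, ksize=5):
    # Median of each 5x5 neighbourhood (medianBlur supports 16-bit input
    # for kernel sizes 3 and 5)
    median = cv2.medianBlur(depth, ksize)
    filled = depth.copy()
    filled[depth == 0] = median[depth == 0]  # fill only the undefined pixels
    return filled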
How to go from the image on the left to the image on the right programmatically using Python (and maybe some tools, like OpenCV)?
I made this one by hand using an online clipping tool. I am a complete noob in image processing (especially in practice). I was thinking of applying some edge or contour detection to create a mask, which I would later apply to the original image to paint everything else (except the region of interest) black. But I failed miserably.
The goal is to preprocess a dataset of very similar images, in order to train a CNN binary classifier. I tried to train it by just cropping the image close to the region of interest, but the noise is so high that the CNN learned absolutely nothing.
Can someone help me do this preprocessing?
I used OpenCV's implementation of the watershed algorithm to solve your problem. You can find out how to use it by reading this great tutorial, so I will not explain it in much detail.
I selected four points (markers). One is located in the region that you want to extract, one is outside it, and the other two are within the lower and upper parts of the interior that do not interest you. I then created an empty integer array (the so-called marker image) filled with zeros, and assigned unique values to the pixels at the marker positions.
The image below shows the marker positions and marker values, drawn on the original image:
I could also select more markers within the same area (for example, several markers belonging to the area you want to extract), but in that case they should all have the same value (in this case 255).
Then I used watershed. The first input is the image that you provided and the second is the marker image (zero everywhere except at the marker positions). The algorithm stores the result in the marker image; the region that interests you is marked with the value of the region marker (in this case 255):
I set all pixels that did not have the value 255 to zero. I dilated the obtained image three times with a 3x3 kernel, then used the dilated image as a mask for the original image (I set all pixels outside the mask to zero). This is the result I got:
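A rough sketch of those steps, with hypothetical marker coordinates that you would pick for your own image:

import cv2
import numpy as np

img = cv2.imread("input.png")  # the original image (assumed filename)

# Marker image: int32, zero everywhere except at the four marker positions
markers = np.zeros(img.shape[:2], dtype=np.int32)
markers[210, 260] = 255  # inside the region to extract (hypothetical position)
markers[20, 20] = 1      # outside the region
markers[120, 260] = 2    # upper uninteresting interior
markers[300, 260] = 3    # lower uninteresting interior

cv2.watershed(img, markers)  # the result is written back into markers

mask = np.uint8(markers == 255) * 255  # keep only the region of interest
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
result = cv2.bitwise_and(img, img, mask=mask)  # everything outside goes black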
You will probably need some kind of method that finds the markers automatically. The difficulty of this task depends heavily on the set of input images. In some cases the method can be really straightforward and simple (as in the tutorial linked above), but sometimes it can be a tough nut to crack. I can't recommend anything specific because I don't know what your images look like in general (you only provided one). :)
I have a morphological problem I am attempting to solve using OpenCV. I have two images:
Mask
Seed
In the mask image, I am trying to retain only the blobs marked by the seed image and to remove the rest.
Underneath I am posting the mask and seed images.
Mask Image :
Seed Image :
To further illustrate the problem, I have zoomed into the image and created a subplot.
In this example, the plot on the right is the seed image and the plot on the left is the mask image. At the end of the operation, I would like to have the elephant-trunk-shaped blob on the left as the result, as it is marked by the seed coordinates (left).
Bit-wise operations will give me only the overlapping regions between seed and mask (the result is just the same square-shaped blob).
One possible solution is to use opening by reconstruction; however, OpenCV doesn't have an implementation of it.
OpenCV - Is there an implementation of marker based reconstruction in opencv
Any pointers are appreciated!
Alright, thank you everyone who has taken the time to view this post. I was unable to find a solution to this particular problem within OpenCV, so I resorted to using the pymorph library.
https://pythonhosted.org/pymorph/
The function infrec (inf-reconstruction) does exactly what I wanted.
pymorph.infrec(f, g, Bc={3x3 cross})
infrec creates the image y by an infinite number of recursive iterations (iterations until stability) of the dilation of f by Bc, conditioned to g. We say that y is the inf-reconstruction of g from the marker f. For algorithms and applications, see Vinc:93b.
Parameters :
f : Marker image (gray or binary).
g : Conditioning image (gray or binary).
Bc : Connectivity Structuring element (default: 3x3 cross).
Returns :
y : Image
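A minimal usage sketch, under the assumption that seed and mask are binary NumPy arrays of the same shape (seed marks the blobs to keep):

import pymorph

# seed: marker image (blobs to keep); mask: conditioning image (all blobs)
kept_blobs = pymorph.infrec(seed, mask)  # default Bc is the 3x3 cross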
Hope this helps others facing similar hurdles.
Thank you
I am attempting to use machine learning (namely random forests) for image segmentation. The classifier uses a number of different pixel-level features to classify pixels as either edge or non-edge pixels. I recently applied my classifier to a set of images that are pretty difficult to segment even manually (Image segmentation based on edge pixel map) and am still working on obtaining reasonable contours from the resulting probability map. I also applied the classifier to an easier set of images and am obtaining quite good predicted outlines (Rand index > 0.97) when I adjust the threshold to 0.95. I am interested in improving the segmentation result by filtering contours extracted from the probability map.
Here is the original image:
The expert outlines:
The probability map generated from my classifier:
This can be further refined when I convert the image to binary based on a threshold of 0.95:
I tried filling holes in the probability map, but that left me with a lot of noise and sometimes merged nearby cells. I also tried contour finding in OpenCV, but this didn't work either, as many of the contours are not completely connected: a few pixels are missing here and there in the outlines.
Edit: I ended up using Canny edge detection on the probability map.
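For reference, a minimal sketch of that edit, assuming prob_map is the probability map as a float array in [0, 1] (the hysteresis thresholds are assumptions to tune):

import cv2
import numpy as np

prob8 = np.uint8(np.clip(prob_map, 0, 1) * 255)  # scale probabilities to 8-bit
edges = cv2.Canny(prob8, 50, 150)                # thresholds to tune per dataset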
The initial image seems to be well contrasted, and I guess we can simply threshold it to obtain a good estimate of the cells. Here is a morphological, area-based filtering of the thresholded image:
Threshold:
Area-based opening filter (this needs to be set based on your dataset of cells under study):
Area-based closing filter (this also needs to be set based on your dataset of cells under study):
Contours using I-Erosion(I):
Code snippet:
% C is the input image
C10 = C > 10; % threshold depends on the average contrast in your dataset
C10_areaopen = bwareaopen(C10, 2500); % area opening removes small components that are not cells
C10_areaopenclose = ~bwareaopen(~C10_areaopen, 100); % area closing fills small holes
se = strel('disk', 1);
figure, imshow(C10_areaopenclose - imerode(C10_areaopenclose, se)) % inner contour
To get smoother shapes, I guess fine opening operations can be performed on the filtered images, thus removing any concave parts of the cells. Also, for cells that are touching, one could use the distance function and a watershed over the distance function to obtain segmentations of the individual cells: http://www.ias-iss.org/ojs/IAS/article/viewFile/862/765
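A hedged OpenCV sketch of that distance-plus-watershed idea (in Python rather than the MATLAB above), assuming binary is the filtered 0/255 uint8 mask:

import cv2
import numpy as np

# Peaks of the distance transform serve as one marker per cell core
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = np.uint8(sure_fg)

_, markers = cv2.connectedComponents(sure_fg)  # one label per cell core
markers = markers + 1                          # background becomes label 1
markers[(binary > 0) & (sure_fg == 0)] = 0     # region left for watershed to decide

cv2.watershed(cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR), markers)
# markers now holds one label per separated cell, with -1 on boundaries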
I guess this can also be used on your probability/confidence maps to perform nonlinear, area-based filtering.