Image line detection for a grayscale image not working - Python

Input image
This is a 3000x3000 greyscale image, and I would like to get the coordinates of the diagonal components in the image.
I tried the Hough transform and pylsd (line segment detection), but neither works as I hoped. Here are some of the unsatisfactory outcomes:
result with too much junk
I would like to detect as many true diagonals as possible with a minimum amount of junk, ideally controlled by a simple parameter such as the length above which a cluster of pixels can be labeled as a line. Any suggestions or tips would be appreciated.
Either Python or R is preferred (not MATLAB).
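One way to express the "length above which a cluster can be labeled as a line" idea is the minLineLength parameter of OpenCV's probabilistic Hough transform. The following is only a minimal sketch, not a tested solution for this image; the filename and the threshold/length values are placeholders:

import cv2
import numpy as np

# "input.png" stands in for the 3000x3000 greyscale image.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Binarize so faint background pixels do not cast votes.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# minLineLength is the length threshold; maxLineGap merges nearby collinear fragments.
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=200, maxLineGap=10)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("lines.png", vis)

Raising minLineLength discards short junk segments, while maxLineGap lets broken collinear fragments merge into a single detected line.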


How to (generally!) deal with out-of-range output pixel destinations

I'm working on a perspective transform application involving transforming 3D points to 2D camera pixels. It is a purely mathematical model, because I'm preparing to use it on hardware that I don't really have access to (so I'm making up focal length and offset values for the intrinsic camera matrix).
When I do the mapping, depending on the xyz location of the camera, I get huge differences in where my transformed image ends up, and I have to make the matrix into which I'm writing the pixels really large. (I'm mapping an image of 1000x1000 pixels to an image of about 600x600 pixels, but it's located around coordinate 6000, so I have to make my output matrix 7000x7000, which takes a long time to plt.imshow.) I have no use for the actual location of the pixels, because I'm only concerned with what the remapped image looks like.
I was wondering how people dealt with this issue:
I can think of just cropping the image down to the area that is non-zero (where my pixels are actually mapped to), as in:
How to crop a numpy 2d array to non-zero values?
but that still requires me to spend space and time allocating a 7000x7000 destination matrix.
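A cheaper variant of that idea is to shift the projected coordinates before rasterizing, so the destination array is only as large as the occupied region. The sketch below is not from the question; rasterize_cropped, uv, and colors are hypothetical names for the projected pixel coordinates and their source intensities:

import numpy as np

# uv: (N, 2) array of projected pixel coordinates (x, y); colors: (N,) source intensities.
def rasterize_cropped(uv, colors):
    ij = np.round(uv).astype(int)
    origin = ij.min(axis=0)                 # top-left corner of the occupied region
    ij -= origin                            # shift so the output starts at (0, 0)
    w, h = ij[:, 0].max() + 1, ij[:, 1].max() + 1
    out = np.zeros((h, w), dtype=colors.dtype)
    out[ij[:, 1], ij[:, 0]] = colors        # nearest-neighbour scatter into a tight array
    return out, origin                      # origin records where the crop sits in the full frame

The returned origin keeps track of where the crop sits in the full output frame, in case the absolute location is ever needed.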

How to create a closed region around cluster of pixels?

I am doing some image processing in Python 3.5.2. After some work I have segmented an image using Support Vector Machines (used as a pixel-wise classification task). As expected after training, when I try to predict a new image I get some pixels mislabeled. I only have two classes for the segmentation, so the result works as a mask with 1 in the desired region and 0 elsewhere.
An example predicted mask looks like this:
EDIT:
Here is the link for this image (saved using cv2.imwrite()):
https://i.ibb.co/74nxLvZ/img.jpg
As you can see, there is a big region with some holes in it; the holes are false negative (FN) pixel predictions. Also, there are some false positive (FP) pixels outside that big region.
I want to be able to get a mask for that big region alone, and filled. Therefore I've been thinking about using some clustering method like DBSCAN or K-means to create clusters on these data points, hopefully getting a cluster for the big region. Do you have any suggestions on the matter?
Now, assume I have those clusters. How can I fill the holes in the big region? I would want to create some sort of figure/polygon/ROI around that big region and then get all the pixels inside. Can anyone shed some light on how to achieve this?
Somehow I would want something like this:
Hope I made myself clear. If I wasn't, let me know in the comments. Hope someone can help me figure this out.
Thanks in advance
You can in fact use DBSCAN to cluster the data points, especially when you don't know the number of clusters you are trying to get.
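For instance (this snippet is not part of the original answer, and the eps/min_samples values are placeholder assumptions), clustering the coordinates of the predicted-1 pixels and keeping the largest cluster could look like this:

from sklearn.cluster import DBSCAN
import numpy as np

# mask is assumed to be the 0/1 prediction mask.
points = np.column_stack(np.nonzero(mask))               # (row, col) of the predicted-1 pixels
labels = DBSCAN(eps=3, min_samples=10).fit_predict(points)
valid = set(labels) - {-1}                                # -1 is DBSCAN's noise label
largest = max(valid, key=lambda l: np.sum(labels == l))   # keep the biggest cluster
big_region = points[labels == largest]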
Then, you can get the contour of the region you want to fill. In this case the big white region with holes.
import cv2
import numpy as np

# im_gray is the binary mask you have; with OpenCV 4.x findContours returns (contours, hierarchy)
cnt, _ = cv2.findContours(im_gray, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)
You can loop through cnt to select the correct contour. Then, assuming you "know" the contour you want, you can use the function cv2.approxPolyDP() from OpenCV
taken from the OpenCV tutorial:
It approximates a contour shape to another shape with less number of vertices
depending upon the precision we specify. It is an implementation of Douglas-Peucker
algorithm.
# maxPoly is the contour you selected from cnt
epsilon = 0.001
approxPoly = cv2.approxPolyDP(np.array(maxPoly), epsilon, closed=True)
epsilon is an accuracy parameter: the maximum distance from the original contour to the approximated contour. As suggested in the documentation (link above), you can use epsilon = 0.1*cv2.arcLength(cnt, True). In this case I used the value 0.001.
Once you have this, you can just draw it:
poligon_mask = np.zeros(im_gray.shape, dtype=np.uint8)
# contourIdx=-1 draws every contour in the list; thickness=cv2.FILLED fills the polygon
cv2.drawContours(poligon_mask, [approxPoly], -1, 255, cv2.FILLED)
Hope this helps.

Clipping image/remove background programmatically in Python

How to go from the image on the left to the image on the right programmatically using Python (and maybe some tools, like OpenCV)?
I made the one on the right by hand using an online clipping tool. I am a complete noob in image processing (especially in practice). I was thinking of applying some edge or contour detection to create a mask, which I would then apply to the original image to paint everything else (except the region of interest) black. But I failed miserably.
The goal is to preprocess a dataset of very similar images, in order to train a CNN binary classifier. I tried to train it by just cropping the image close to the region of interest, but the noise is so high that the CNN learned absolutely nothing.
Can someone help me do this preprocessing?
I used OpenCV's implementation of the watershed algorithm to solve your problem. You can find out how to use it in this great tutorial, so I will not explain it in much detail.
I selected four points (markers). One is located in the region that you want to extract, one is outside, and the other two are in the lower and upper parts of the interior that do not interest you. I then created an empty integer array (the so-called marker image) and filled it with zeros. Then I assigned unique values to the pixels at the marker positions.
The image below shows the marker positions and marker values, drawn on the original image:
I could also select more markers within the same area (for example several markers that belong to the area you want to extract) but in that case they should all have the same values (in this case 255).
Then I used watershed. The first input is the image that you provided and the second input is the marker image (zero everywhere except at marker positions). The algorithm stores the result in the marker image; the region that interests you is marked with the value of the region marker (in this case 255):
I set all pixels that did not have the value 255 to zero. I dilated the obtained image three times with a 3x3 kernel. Then I used the dilated image as a mask for the original image (I set all pixels outside the mask to zero), and this is the result I got:
You will probably need some kind of method that finds the markers automatically. The difficulty of this task depends heavily on the set of input images. In some cases the method can be really straightforward and simple (as in the tutorial linked above), but sometimes it can be a tough nut to crack. I can't recommend anything concrete because I don't know what your images look like in general (you only provided one). :)
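For reference, a rough sketch of the pipeline described above. This is not the answerer's code; the filename, marker positions, and marker values are placeholder assumptions, since in the answer they were picked by hand:

import cv2
import numpy as np

img = cv2.imread("input.jpg")                # placeholder filename
h, w = img.shape[:2]

markers = np.zeros((h, w), dtype=np.int32)   # the "marker image"
markers[h // 2, w // 2] = 255                # inside the region to extract (placeholder position)
markers[10, 10] = 1                          # background (placeholder position)
markers[h // 5, w // 2] = 64                 # uninteresting upper interior (placeholder position)
markers[4 * h // 5, w // 2] = 128            # uninteresting lower interior (placeholder position)

cv2.watershed(img, markers)                  # the result is written back into markers

mask = np.uint8(markers == 255) * 255        # keep only the region labelled 255
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("result.jpg", result)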

OpenCV - Retaining only marked blobs in python

I have a morphological problem I am attempting to solve using OpenCV. I have two images.
Mask
Seed
In the mask image I am trying to retain only the blobs marked by the seed image and to remove the rest.
Below I am posting the mask and seed images.
Mask Image :
Seed Image :
To further illustrate the problem, I have zoomed into the image and created a subplot.
In this example the plot on the right is the seed image and the plot on the left is the mask image. At the end of the operation I would like to have the elephant-trunk-shaped blob on the left as the result, since it is marked by the seed coordinates.
Bitwise operations will give me only the overlapping regions between seed and mask (the result is the same square-shaped blob).
One possible solution is to use opening by reconstruction; however, OpenCV doesn't have an implementation of it.
OpenCV - Is there an implementation of marker based reconstruction in opencv
Any pointers are appreciated!
Alright, thank you everyone who has taken the time to view this post. I was unable to find a solution to this particular problem within OpenCV, so I resorted to using the pymorph library.
https://pythonhosted.org/pymorph/
The inf-reconstruction function does exactly what I wanted:
pymorph.infrec(f, g, Bc={3x3 cross})
infrec creates the image y by an infinite number of recursive iterations (iterations until stability) of the dilation of f by Bc conditioned to g. We say the y is the inf-reconstruction of g from the marker f. For algorithms and applications, see Vinc:93b.
Parameters :
f : Marker image (gray or binary).
g : Conditioning image (gray or binary).
Bc : Connectivity Structuring element (default: 3x3 cross).
Returns :
y : Image
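A minimal usage sketch (not from the original answer), assuming mask.png and seed.png stand in for the binary images posted above:

import cv2
import numpy as np
import pymorph

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE) > 0   # conditioning image g
seed = cv2.imread("seed.png", cv2.IMREAD_GRAYSCALE) > 0   # marker image f

# Reconstruct the mask from the seed: only blobs touched by a seed pixel survive.
marked = pymorph.infrec(seed, mask)

cv2.imwrite("marked_blobs.png", np.uint8(marked) * 255)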
Hope this helps others traveling through similar hurdles.
Thank you

Drawing a boundary around heterogeneously textured tissue in a microscopy study

I am in the process of putting together an OpenCV script to analyze immunohistochemically stained heart tissue. Our staining procedure renders cell types expressing certain proteins in their plasma membranes with pigments visible under a light microscope, which we use to photograph the images.
So far, I've succeeded in segmenting the images into different layers based on color range, using a modified version of the frequently cited color segmentation script available through the OpenCV community (http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html).
A screen shot of the original image:
B-Cell layer displayed:
At this point, I would like to calculate the ratio of the area of B-cells to unstained tissue. This prompted an extraction of the background cell layer, again based on color range:
Obviously, these results leave much to be desired.
Does anyone have ideas on how to approach this problem? Again, I would like to segment the background (transparent) tissue layer, which is unfortunately fairly sponge-like in texture. My goal is to create a mask representative of the area of unstained tissue. It seems a blur technique is necessary to fill the gaps in the tissue, but the loss in accuracy this approach entails is obvious.
In the sample image, the channels look highly correlated. If you apply decorrelation stretching to the image you should be able to see more detail. Here in my blog post I've implemented decorrelation stretching in C++ (unfortunately not Python).
Using the sample code in the blog I did the following to segment the cell region:
dstretch the CIE Lab image with the following targetMean and targetSigma:
float mu[3] = {128.0f, 128.0f, 128.0f};
float sd[3] = {128.0f, 5.0f, 5.0f};
Mat mean = Mat(3, 1, CV_32F, mu);
Mat sigma = Mat(3, 1, CV_32F, sd);
Convert the dstretched CIE Lab image back to BGR.
Erode this BGR image with a 3x3 rectangular structuring element once.
Apply kmeans clustering to this eroded image with k = 2.
I don't know how good this segmentation is. I think it is possible to get a better segmentation by trying different values for the above parameters (mean, sigma, structuring element size and number of times the image is eroded).
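For the erosion and k-means steps, a rough Python equivalent (not the answerer's code, whose dstretch implementation is C++) could look like this; "dstretched_bgr.png" is assumed to be the already decorrelation-stretched image converted back to BGR:

import cv2
import numpy as np

img = cv2.imread("dstretched_bgr.png")

# Erode once with a 3x3 rectangular structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
eroded = cv2.erode(img, kernel, iterations=1)

# k-means with k = 2 on the pixel colours.
samples = eroded.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
segmented = labels.reshape(img.shape[:2]).astype(np.uint8) * 255
cv2.imwrite("kmeans_k2.png", segmented)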
(Following images are not to the original scale)
Original:
dstretched CIE Lab converted back to BGR:
Eroded:
kmeans with k = 2:
