Image segmentation based on edge pixel map [closed] - python

I have trained a classifier in Python for classifying pixels in an image of cells as edge or non-edge. I've used it successfully on a few image datasets but am running into problems with this particular dataset, which seems pretty ambiguous even to the human eye. I don't know of any existing automated technique that can segment it accurately.
After prediction I obtain the following image:
I am relatively new to image processing and am unsure how to proceed with actually obtaining the final segmentations of the cells. I have briefly tried a few different techniques - namely the circular Hough transform, level sets, skeletonization, contour finding - but none has really done the trick. Am I just not tuning the parameters correctly, or is there a better technique out there?
Here are the correct outlines, by the way, for reference.
And the original image:
And the continuous probability map:

Very nice work on boundary detection. I used to work on similar segmentation problems.
Theory:
Once you have obtained your edge map, where e(i,j) indicates the "edge-iness" of pixel (i,j), you would like a segmentation of the image that respects the edge map as much as possible.
In order to formulate this "respect the edge map" requirement more formally, I suggest you look at the Correlation clustering (CC) functional:
The CC functional assesses the quality of a segmentation based on pair-wise relations between neighboring pixels: whether they should be in the same cluster (no edge between them) or in different clusters (there is an edge between them).
Take a look at the example at section 7.1 of the aforementioned paper.
CC is used for similar segmentation problems in medical (neuronal) imaging as well, see e.g., here.
Practice
Once you convince yourself that CC is indeed an appropriate formulation for your problem, there is still the question of how exactly to convert your binary edge map into an affinity matrix that CC can process. Bear in mind that CC needs as input a (usually sparse) adjacency matrix with positive entries for pairs of pixels assumed to belong to the same segment, and negative entries for pairs of pixels assumed to belong to different segments.
Here's my suggestion:
The edges in your edge map look quite thick and are not well localized. I suggest non-max suppression or morphological thinning as a pre-processing stage.
Once you have better localized edges, ignore the "edge" pixels and only work with the "non-edge" pixels; let's call them "active".
Two active pixels that are next to each other: there is no "edge" pixel between them, so they should be together. The adjacency matrix entries for immediate neighbors should therefore be positive.
Consider three pixels on a line, with the two endpoints being "active" pixels: if the middle one is an edge, then the two active pixels should not belong to the same cluster and the corresponding entries in the adjacency matrix should be negative. If the middle pixel is also active, then the corresponding entries in the adjacency matrix should be positive.
Considering all possible neighboring pairs and triplets (inducing a 24-connected grid graph) allows you to construct an affinity matrix with positive and negative entries suitable for CC; a code sketch of this construction follows after these steps.
Given the matrix, you should search for the segmentation with the best CC score (the optimization stage). I have Matlab code for this here. You can also use the excellent openGM package.
The optimization will result in a partition of the active pixels only; you can map it back to the input image domain, leaving the edge pixels unassigned to any segment.
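As a rough illustration of the pair/triplet rules above (not the Matlab code mentioned), here is a sketch that builds such a sparse affinity matrix from a thinned, binary edge map; the weights w_pos/w_neg are illustrative placeholders, and the resulting matrix still has to be handed to a CC solver such as openGM.

import numpy as np
import scipy.sparse as sp

def cc_affinity(edge, w_pos=1.0, w_neg=-1.0):
    # edge: 2D boolean array, True at (thinned) edge pixels
    H, W = edge.shape
    active = ~edge                              # "active" = non-edge pixels
    lin = lambda y, x: y * W + x                # linear pixel index
    rows, cols, vals = [], [], []

    for y in range(H):
        for x in range(W):
            if not active[y, x]:
                continue
            for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
                y1, x1 = y + dy, x + dx          # immediate neighbor
                y2, x2 = y + 2 * dy, x + 2 * dx  # neighbor at distance 2
                if 0 <= y1 < H and 0 <= x1 < W and active[y1, x1]:
                    # pair rule: two adjacent active pixels -> same segment
                    rows.append(lin(y, x)); cols.append(lin(y1, x1)); vals.append(w_pos)
                if 0 <= y2 < H and 0 <= x2 < W and active[y2, x2]:
                    # triplet rule: the sign depends on whether the middle pixel is an edge
                    w = w_neg if edge[y1, x1] else w_pos
                    rows.append(lin(y, x)); cols.append(lin(y2, x2)); vals.append(w)

    A = sp.coo_matrix((vals, (rows, cols)), shape=(H * W, H * W))
    return (A + A.T).tocsr()                    # symmetric matrix for the CC solver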

Looking at the picture of edge/non-edge pixels from the classifier, we can see that the gradient image of your input already basically gives the result of the classifier you have learned. The confidence map, however, shows a good solution, except that:
1. the cells are connected level sets with varying sizes.
2. you have noisy bright spots inside the cells that cause false outputs from the classifier (maybe some smoothing could be considered).
3. it would probably be easier to characterize the interior of each cell: the grayscale variations, the average size. Learning these distributions would probably get you better detection results. Topologically we have a set of low grayscale values nested within high grayscale values.
To achieve this, one could use graph cuts with a GMM model for the unary costs and a learned gradient distribution for the pairwise term.
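If you want to prototype that graph-cut + GMM idea quickly, OpenCV's GrabCut uses a similar combination (GMM colour models for the unary costs and a contrast-sensitive pairwise term), although its pairwise term is fixed rather than learned. A minimal sketch, with the file name, rectangle and iteration count as placeholders:

import numpy as np
import cv2

img = cv2.imread('cells.png')                          # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)              # internal GMM state (background)
fgd_model = np.zeros((1, 65), np.float64)              # internal GMM state (foreground)
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)  # rough box around the cells

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# keep pixels labelled as (probable) foreground
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite('cells_foreground.png', fg)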

I think your Hough transform is a good idea. One thing you should try (if you don't already) is to threshold your image before you run it through your transform, though the article I just linked seems to cover only binary thresholding. What this might do is exaggerate the differences between the edge and the background, so the edge might be easier to detect. Basically, apply a function (in the form of a filter which operates on the pixel's value) to each pixel.
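A minimal sketch of that threshold-then-Hough idea, assuming the probability map is saved as a grayscale image; the threshold and all HoughCircles parameters are guesses that would need tuning for these cells:

import cv2
import numpy as np

prob = cv2.imread('probability_map.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file
_, binary = cv2.threshold(prob, 128, 255, cv2.THRESH_BINARY)

blurred = cv2.GaussianBlur(binary, (5, 5), 0)       # HoughCircles prefers a smoothed input
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=30,
                           param1=100, param2=25, minRadius=15, maxRadius=60)

vis = cv2.cvtColor(prob, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(vis, (x, y), r, (0, 255, 0), 1)  # draw the detected cell outlines
cv2.imwrite('hough_cells.png', vis)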
Another thing you can try is active contours. Basically, you lay down some circles and they move through the image until they find what you're looking for.
My last idea is maybe try a wavelet transform. These seem to work pretty well at picking out boundaries and borders in images. Hope these ideas can get you started.

Related

How to identify particles in this complex image?

I have been trying Python+OpenCV for quite a long time already and have followed many tutorials in order to identify particles in the following image:
My ultimate goal is to identify every particle; from there I will be able to e.g. count the number of particles, calculate a size distribution, etc.
I have already tried to customize many examples from several sites.
I got good hints based on:
How to define the markers for Watershed in OpenCV?
Counting particles using image processing in python
However, I was not able to achieve decent results.
How can I identify particles in this image using Python and OpenCV?
IMO, the only hope to get meaningful results is to use the fact that the particles are round. By using some homogeneity criterion, you could find candidate particle centers, and from these grow contours in such a way that they remain round and stop at edges. An option could be to draw rays from the seed point, find the closest edge points and use a robust fit of a circle or an ellipse.
Reject the shapes that are too far from roundness. This should allow you to find the unoccluded particles. Then you can continue the game from other seed points, this time growing contours that can be occluded by the already detected particles. (When an edge is hit, if it is known to belong to a particle, ignore it.)
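A rough sketch of the ray-casting idea, under assumptions: edges is a binary edge map, seed is a candidate centre from some homogeneity criterion, and the circle fit is a simple algebraic (Kasa) least-squares fit rather than a truly robust one:

import numpy as np

def ray_hits(edges, seed, n_rays=36, max_len=100):
    # walk along rays from the seed until the first edge pixel is hit
    h, w = edges.shape
    cy, cx = seed
    hits = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        for r in range(1, max_len):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if not (0 <= y < h and 0 <= x < w):
                break
            if edges[y, x]:
                hits.append((x, y))
                break
    return np.array(hits, dtype=float)

def fit_circle(points):
    # algebraic (Kasa) circle fit: x^2 + y^2 = 2ax + 2by + c, with r^2 = c + a^2 + b^2
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

def roundness_ok(points, centre, r, tol=0.15):
    # reject candidates whose hit points deviate too much from the fitted circle
    d = np.hypot(points[:, 0] - centre[0], points[:, 1] - centre[1])
    return np.std(d) < tol * r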
Let's pretend the goal is to get an estimated number of particles. Also, let's assume those particles are spheres.
With that being said, it should be possible to build a model based on highlights, shadows, and halftones to make the final result as accurate as it can be.
As a simple proof of concept, segmentation based on the highlights can be verified.
The initial result doesn't seem promising, but a tiny change of the contrast improves it:
This should be enough to get an estimated number of particles and to apply more advanced models to the identified regions.
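A hedged proof-of-concept of that highlight idea: stretch the contrast, threshold the bright highlights, and count connected blobs. Every numeric parameter below is a guess:

import cv2
import numpy as np

gray = cv2.imread('particles.png', cv2.IMREAD_GRAYSCALE)     # hypothetical file
stretched = cv2.convertScaleAbs(gray, alpha=1.5, beta=-40)   # the "tiny change of contrast"
_, highlights = cv2.threshold(stretched, 200, 255, cv2.THRESH_BINARY)

# remove isolated bright pixels before counting
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
highlights = cv2.morphologyEx(highlights, cv2.MORPH_OPEN, kernel)

n_labels, labels = cv2.connectedComponents(highlights)
print('estimated particle count:', n_labels - 1)             # label 0 is the background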

Using external pose estimates to improve stationary marker contour tracking

Suppose that I have an array of sensors that allows me to come up with an estimate of my pose relative to some fixed rectangular marker. I thus have an estimate as to what the contour of the marker will look like in the image from the camera. How might I use this to better detect contours?
The problem that I'm trying to overcome is that sometimes, the marker is occluded, perhaps by a line cutting across it. As such, I'm left with two contours that if merged, would yield the marker. I've tried opening and closing to try and fix the problem, but it isn't robust to the different types of lighting.
One approach that I'm considering is to use the predicted contour, and perform a local convolution with the gradient of the image, to find my true pose.
Any thoughts or advice?
The obvious advantage of having a pose estimate is that it restricts the image region for searching your target.
Next, if your problem is occlusion, you need to model that explicitly rather than just try to paper over it with image processing tricks: add to your detector's objective function a term that expresses what your target may look like when partially occluded. This can be either an explicit "occluded appearance" model, or implicit - e.g. an algorithm that is able to recognize visible portions of the target independently of the whole.
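One way to act on both points, sketched under assumptions: predicted_corners are the marker corners projected into the image through your pose estimate, the ROI around them restricts the search, and the contour fragments found inside it are merged and scored against the predicted shape with cv2.matchShapes:

import cv2
import numpy as np

def find_marker(binary_img, predicted_corners, margin=20, max_score=0.3):
    pred = predicted_corners.astype(np.int32).reshape(-1, 1, 2)
    x, y, w, h = cv2.boundingRect(pred)
    x0, y0 = max(0, x - margin), max(0, y - margin)
    roi = binary_img[y0:y + h + margin, x0:x + w + margin]    # restricted search region

    # OpenCV 4.x return signature; 3.x returns (image, contours, hierarchy)
    contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # merge the fragments (e.g. a marker split by an occluding line) and take
    # their joint convex hull as the candidate marker outline
    hull = cv2.convexHull(np.vstack(contours))

    # a low matchShapes score means the candidate resembles the predicted contour
    score = cv2.matchShapes(hull, pred, cv2.CONTOURS_MATCH_I1, 0.0)
    if score > max_score:
        return None
    return hull + np.array([x0, y0])              # shift back to full-image coordinates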

Find edges of images

I have software that generates several images like the following four images:
Does an algorithm exist that detects the (horizontal & vertical) edges and creates a binary output like this?
If possible I'd like to implement this with numpy and scipy. I already tried to implement an algorithm, but I failed because I couldn't find a place to start. I also tried to use a neural network for this, but that seems to be overkill and still does not work perfectly.
The simplest thing to try is to:
Convert your images to binary images (by a simple threshold)
Apply the Hough transform (OpenCV, Matlab have it already implemented)
In the Hough transform results, detect the peaks at angles of 0 degrees and +/-90 degrees (vertical and horizontal lines).
In OpenCV and Matlab, there are extra options for the Hough transform which allow you to fill the gaps between two disconnected segments belonging to the same straight line. You may need a few extra operations to post-process your results, but the main steps should be these ones.
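A minimal sketch of these steps with OpenCV, keeping only near-horizontal and near-vertical lines; the threshold, Hough parameters and 5-degree tolerance are placeholders:

import cv2
import numpy as np

gray = cv2.imread('generated.png', cv2.IMREAD_GRAYSCALE)      # hypothetical file
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# probabilistic Hough transform; maxLineGap bridges small gaps on the same line
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)

output = np.zeros_like(gray)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 5 or abs(angle - 90) < 5:                   # keep ~horizontal / ~vertical
            cv2.line(output, (x1, y1), (x2, y2), 255, 1)
cv2.imwrite('edges_binary.png', output)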

Detect grid nodes using OpenCV (or using something else)

I have a grid in my pictures (they come from a camera). After binarization they look like this (red is 255, blue is 0):
What is the best way to detect grid nodes (crosses) on these pictures?
Note: the grid is distorted non-uniformly from cell to cell.
Update:
Some examples of different grids and their distortions before binarization:
In cases like this I first try to find the best starting point.
So, first I thresholded your image (I could also have skeletonized it and only then thresholded, but that way some data is lost irrecoverably):
Then I tried loads of tools to emphasize the most prominent features in bulk. Finally, playing with Gimp's G'MIC plugin, I found this:
Based on the above I prepared a universal pattern that looks like this:
Then I just got a part of this image:
To help determine angle I made local Fourier freq graph - this way you can obtain your pattern local angle:
Then you can use a simple trick that runs fast on modern GPUs: compute a difference like this (a miss case):
When there is a hit, the difference is minimal; what I had in mind when talking about local maxima refers more or less to how the resulting difference should be treated. It wouldn't be wise to weight the difference outside the pattern circle the same as the difference inside, due to scale-factor sensitivity, so the inside region containing the cross should be weighted more in the algorithm. Nevertheless, the pattern differenced with the image looks like this:
As you can see, it's possible to differentiate between a hit and a miss. What is crucial is to set a proper tolerance and use the Fourier frequencies to obtain the angle (with thresholded images the Fourier spectrum usually follows the overall orientation of the analyzed image).
The above way can be later complemented by Harris detection, or Harris detection can be modified using above patterns to distinguish two to four closely placed corners.
Unfortunately, all of these techniques are scale-dependent in this case and should be adjusted accordingly.
There are also other approaches to your problem, for instance by watershedding it first, then getting regions, then disregarding foreground, then simplifying curves, then checking if their corners form a consecutive equidistant pattern. But to my nose it would not produce correct results.
One more thing - libgmic is the G'MIC library, from which you can use the transformations shown above directly or through bindings, or take the algorithms and rewrite them in your app.
I suppose that this can be a potential answer (actually mentioned in comments): http://opencv.itseez.com/2.4/modules/imgproc/doc/feature_detection.html?highlight=hough#houghlinesp
There can also be other ways using skimage tools for feature detection.
But actually, instead of the Hough transform, which could contribute to a lot of bloat and a lack of precision (it assumes straight lines), I would suggest trying Harris corner detection - http://docs.opencv.org/2.4/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html .
This can be further adjusted to your specific issue (the nodes are cross corners, so the local maxima should depend on the cross-like distribution). Then some curve approximation can be done based on the points obtained.
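A sketch of that Harris-based idea; goodFeaturesToTrack with the Harris score is used here for convenience, and the quality/distance parameters are guesses that depend on the grid scale:

import cv2
import numpy as np

gray = cv2.imread('grid_binary.png', cv2.IMREAD_GRAYSCALE)    # hypothetical file
gray = cv2.GaussianBlur(gray, (5, 5), 0)                      # soften binarization noise

corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.05,
                                  minDistance=10, blockSize=7,
                                  useHarrisDetector=True, k=0.04)

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
if corners is not None:
    for x, y in corners.reshape(-1, 2).astype(int):
        cv2.circle(vis, (x, y), 3, (0, 0, 255), -1)           # mark candidate grid nodes
cv2.imwrite('grid_nodes.png', vis)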
Maybe you could calculate Hough lines and determine their intersections. The OpenCV documentation can be found here

Image registration using python and cross-correlation

I have two images showing exactly the same content: 2D Gaussian-shaped spots. I call these two 16-bit PNG files "left.png" and "right.png". But as they are obtained through slightly different optical setups, the corresponding spots (physically the same) appear at slightly different positions. Meaning the right one is slightly stretched, distorted, or so, in a non-linear way. Therefore I would like to get the transformation from left to right.
So for every pixel on the left side with its x- and y-coordinate I want a function giving me the components of the displacement-vector that points to the corresponding pixel on the right side.
In a former approach I tried to get the positions of the corresponding spots to obtain the relative distances deltaX and deltaY. I then fitted these distances to a second-order Taylor expansion of T(x,y), giving me the x- and y-components of the displacement vector for every pixel (x,y) on the left, pointing to the corresponding pixel (x',y') on the right.
To get a more general result I would like to use normalized cross-correlation. For this I multiply every pixel value from the left with a corresponding pixel value from the right and sum over these products. The transformation I am looking for should connect the pixels that maximize the sum. So when the sum is maximized, I know that I multiplied the corresponding pixels.
I really tried a lot with this, but didn't manage. My question is if somebody of you has an idea or has ever done something similar.
import numpy as np
from PIL import Image  # the bare "import Image" only works with the legacy PIL package

# load the two 16-bit images as numpy arrays
left = np.array(Image.open('left.png'))
right = np.array(Image.open('right.png'))

# normalization (http://en.wikipedia.org/wiki/Cross-correlation#Normalized_cross-correlation)
left = (left - left.mean()) / left.std()
right = (right - right.mean()) / right.std()
Please let me know if I can make this question more clear. I still have to check out how to post questions using latex.
Thank you very much for input.
[left.png] http://i.stack.imgur.com/oSTER.png
[right.png] http://i.stack.imgur.com/Njahj.png
I'm afraid, in most cases 16-bit images appear just black (at least on systems I use) :( but of course there is data in there.
UPDATE 1
I'll try to clarify my question. I am looking for a vector field of displacement vectors that point from every pixel in left.png to the corresponding pixel in right.png. My problem is that I am not sure about the constraints I have.
The relation is r' = r + d(r), where vector r (components x and y) points to a pixel in left.png and vector r' (components x' and y') points to the corresponding pixel in right.png; for every r there is a displacement vector d(r).
What I did earlier was to find components of the vector field d manually and fit them to a second-degree polynomial, i.e. I fitted delta-x(x,y) and delta-y(x,y) each as a polynomial of second order in x and y.
Does this make sense to you? Is it possible to get all the delta-x(x,y) and delta-y(x,y) with cross-correlation? The cross-correlation should be maximized if the corresponding pixels are linked together through the displacement vectors, right?
UPDATE 2
So the algorithm I was thinking of is as follows:
Deform right.png
Get the value of cross-correlation
Deform right.png further
Get the value of cross-correlation and compare to value before
If it's greater, it was a good deformation; if not, revert the deformation and try something else
Once the cross-correlation value is maximized, I know what the deformation is :)
About the deformation: could one first do a shift along the x- and y-directions to maximize the cross-correlation, then in a second step stretch or compress in an x- and y-dependent way, and in a third step deform quadratically in x and y, repeating this procedure iteratively? I really have a problem doing this with integer coordinates. Do you think I would have to interpolate the picture to obtain a continuous distribution? I have to think about this again :( Thanks to everybody for taking part :)
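A sketch of that iterative idea, restricted for simplicity to an affine deformation: scipy.ndimage.affine_transform interpolates the image (so non-integer shifts are no problem) and a general-purpose optimizer maximizes the normalized cross-correlation; the parameterization and the optimizer choice are assumptions, not a recommendation:

import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    # normalized cross-correlation of two equally sized arrays
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

def warp(img, params):
    # params = [a11, a12, a21, a22, ty, tx] describing an affine deformation
    a11, a12, a21, a22, ty, tx = params
    return ndimage.affine_transform(img, np.array([[a11, a12], [a21, a22]]),
                                    offset=[ty, tx], order=1)  # bilinear interpolation

def cost(params, left, right):
    return -ncc(left, warp(right, params))       # minimize the negative correlation

def register(left, right):
    x0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # start from the identity transform
    res = optimize.minimize(cost, x0, args=(left, right), method='Nelder-Mead')
    return res.x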
OpenCV (and with it the Python OpenCV binding) has a StarDetector class which implements this algorithm.
As an alternative you might have a look at the OpenCV SIFT class, which stands for Scale Invariant Feature Transform.
Update
Regarding your comment, I understand that the "right" transformation will maximize the cross-correlation between the images, but I don't understand how you choose the set of transformations over which to maximize. Maybe if you know the coordinates of three matching points (either by some heuristics or by choosing them by hand), and if you expect affinity, you could use something like cv2.getAffineTransform to have a good initial transformation for your maximization process. From there you could use small additional transformations to have a set over which to maximize. But this approach seems to me like re-inventing something which SIFT could take care of.
To actually transform your test image you can use cv2.warpAffine, which also can take care of border values (e.g. pad with 0). To calculate the cross-correlation you could use scipy.signal.correlate2d.
Update
Your latest update did indeed clarify some points for me. But I think that a vector field of displacements is not the most natural thing to look for, and this is also where the misunderstanding came from. I was thinking more along the lines of a global transformation T which, applied to any point (x,y) of the left image, gives (x',y')=T(x,y) on the right side, but T has the same analytical form for every pixel. For example, this could be a combination of a displacement, rotation, scaling, maybe some perspective transformation. I cannot say whether it is realistic or not to hope to find such a transformation; this depends on your setup, but if the scene is physically the same on both sides I would say it is reasonable to expect some affine transformation. This is why I suggested cv2.getAffineTransform. It is of course trivial to calculate your displacement vector field from such a T, as this is just T(x,y)-(x,y).
The big advantage would be that you have only very few degrees of freedom for your transformation, instead of, I would argue, 2N degrees of freedom in the displacement vector field, where N is the number of bright spots.
If it is indeed an affine transformation, I would suggest some algorithm like this:
identify three bright and well isolated spots on the left
for each of these three spots, define a bounding box so that you can hope to identify the corresponding spot within it in the right image
find the coordinates of the corresponding spots, e.g. with some correlation method as implemented in cv2.matchTemplate or by also just finding the brightest spot within the bounding box.
once you have three matching pairs of coordinates, calculate the affine transformation which transforms one set into the other with cv2.getAffineTransform.
apply this affine transformation to the left image, as a check if you found the right one you could calculate if the overall normalized cross-correlation is above some threshold or drops significantly if you displace one image with respect to the other.
if you wish and still need it, calculate the displacement vector field trivially from your transformation T.
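A sketch of this recipe under assumptions: left and right are float32 grayscale arrays, spots holds three hand-picked (x, y) seeds well inside the left image, and box/search set the template and search window half-sizes:

import cv2
import numpy as np

def match_spot(left, right, x, y, box=20, search=60):
    # locate the left-image spot at (x, y) inside a search window of the right image
    template = left[y - box:y + box, x - box:x + box]
    window = right[y - search:y + search, x - search:x + search]
    res = cv2.matchTemplate(window, template, cv2.TM_CCORR_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(res)
    # best-match top-left corner -> spot centre, in full-image coordinates
    return (x - search + mx + box, y - search + my + box)

def estimate_affine(left, right, spots):
    src = np.array(spots, dtype='float32')
    dst = np.array([match_spot(left, right, x, y) for x, y in spots], dtype='float32')
    return cv2.getAffineTransform(src, dst)

# usage sketch (spot coordinates are hypothetical):
# M = estimate_affine(left, right, [(120, 80), (400, 90), (250, 350)])
# warped = cv2.warpAffine(left, M, (left.shape[1], left.shape[0]))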
Update
It seems cv2.getAffineTransform expects an awkward input data type 'float32'. Let's assume the source coordinates are (sxi,syi) and destination (dxi,dyi) with i=0,1,2, then what you need is
src = np.array( ((sx0,sy0),(sx1,sy1),(sx2,sy2)), dtype='float32' )
dst = np.array( ((dx0,dy0),(dx1,dy1),(dx2,dy2)), dtype='float32' )
result = cv2.getAffineTransform(src,dst)
I don't think a cross correlation is going to help here, as it only gives you a single best shift for the whole image. There are three alternatives I would consider:
Do a cross correlation on sub-clusters of dots. Take, for example, the three dots in the top right and find the optimal x-y shift through cross-correlation. This gives you the rough transform for that region. Repeat for as many clusters as you can to obtain a reasonable map of your transformations. Fit this with your Taylor expansion and you might get reasonably close. However, for your cross-correlation to work at all, the difference in displacement between spots must be less than the extent of a spot, else you can never get all spots in a cluster to overlap simultaneously with a single displacement. Under these conditions, option 2 might be more suitable.
If the displacements are relatively small (which I think is a condition for option 1), then we might assume that for a given spot in the left image, the closest spot in the right image is the corresponding spot. Thus, for every spot in the left image, we find the nearest spot in the right image and use that as the displacement in that location. From the 40-something well distributed displacement vectors we can obtain a reasonable approximation of the actual displacement by fitting your Taylor expansion.
This is probably the slowest method, but might be the most robust if you have large displacements (and option 2 thus doesn't work): use something like an evolutionary algorithm to find the displacement. Apply a random transformation, compute the remaining error (you might need to define this as the sum of the smallest distances between spots in your original and transformed images), and improve your transformation with those results. If your displacements are rather large you might need a very broad search, as you'll probably get lots of local minima in your landscape.
I would try option 2 as it seems your displacements might be small enough to easily associate a spot in the left image with a spot in the right image.
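A sketch of option 2 under assumptions: left_spots and right_spots are (N, 2) arrays of detected spot centres (from some peak finder), nearest neighbours define the correspondences, and a second-order polynomial is fitted to the displacements by least squares:

import numpy as np
from scipy.spatial import cKDTree

def fit_displacement(left_spots, right_spots):
    # nearest-neighbour correspondences (only valid for small displacements)
    tree = cKDTree(right_spots)
    _, idx = tree.query(left_spots)
    d = right_spots[idx] - left_spots            # displacement vector per spot

    # design matrix for a second-order polynomial in (x, y)
    x, y = left_spots[:, 0], left_spots[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef_x, *_ = np.linalg.lstsq(A, d[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, d[:, 1], rcond=None)
    return coef_x, coef_y

def displacement(coef_x, coef_y, x, y):
    # evaluate the fitted displacement field at pixel (x, y)
    basis = np.array([1.0, x, y, x * x, x * y, y * y])
    return basis @ coef_x, basis @ coef_y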
Update
I assume your optics induce non linear distortions and having two separate beampaths (different filters in each?) will make the relationship between the two images even more non-linear. The affine transformation PiQuer suggests might give a reasonable approach but can probably never completely cover the actual distortions.
I think your approach of fitting to a low order Taylor polynomial is fine. This works for all my applications with similar conditions. Highest orders probably should be something like xy^2 and x^2y; anything higher than that you won't notice.
Alternatively, you might be able to calibrate the distortions for each image first, and then do your experiments. This way you are not dependent on the distribution of your dots, but can use a high-resolution reference image to get the best description of your transformation.
Option 2 above still stands as my suggestion for getting the two images to overlap. This can be fully automated and I'm not sure what you mean when you want a more general result.
Update 2
You comment that you have trouble matching dots in the two images. If this is the case, I think your iterative cross-correlation approach may not be very robust either. You have very small dots, so overlap between them will only occur if the difference between the two images is small.
In principle there is nothing wrong with your proposed solution, but whether it works or not strongly depends on the size of your deformations and the robustness of your optimization algorithm. If you start off with very little overlap, then it may be hard to find a good starting point for your optimization. Yet if you have sufficient overlap to begin with, then you should have been able to find the deformation per dot first, but in a comment you indicate that this doesn't work.
Perhaps you can go for a mixed solution: find the cross correlation of clusters of dots to get a starting point for your optimization, and then tweak the deformation using something like the procedure you describe in your update. Thus:
For an NxN pixel segment, find the shift between the left and right images
Repeat for, say, 16 of those segments
Compute an approximation of the deformation using those 16 points
Use this as the starting point of your optimization approach
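A sketch of steps 1-3 under assumptions: skimage's phase_cross_correlation supplies the per-tile shift, tiles with no signal are skipped by a crude standard-deviation test, and the resulting tile shifts can be fed into a polynomial fit like the one sketched earlier:

import numpy as np
from skimage.registration import phase_cross_correlation

def tile_shifts(left, right, tile=128):
    centres, shifts = [], []
    H, W = left.shape
    for y0 in range(0, H - tile + 1, tile):
        for x0 in range(0, W - tile + 1, tile):
            a = left[y0:y0 + tile, x0:x0 + tile]
            b = right[y0:y0 + tile, x0:x0 + tile]
            if a.std() < 1e-3:                    # skip tiles without spots
                continue
            shift, error, _ = phase_cross_correlation(a, b, upsample_factor=10)
            centres.append((x0 + tile / 2, y0 + tile / 2))
            shifts.append((shift[1], shift[0]))   # (row, col) shift -> (dx, dy)
    return np.array(centres), np.array(shifts)

# centres, shifts = tile_shifts(left, right)
# fit these with the second-order polynomial as before, then use the result
# as the starting point of the iterative optimization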
You might want to have a look at bunwarpj which already does what you're trying to do. It's not python but I use it in exactly this context. You can export a plain text spline transformation and use it if you wish to do so.
