I have software that generates several images like the following four images:
Does an algorithm exist that detects the (horizontal & vertical) edges and creates a binary output like this?
If possible I'd like to implement this with numpy and scipy. I already tried to implement an algorithm myself, but I couldn't find a good place to start. I also tried using a neural network, but that seems like overkill and still doesn't work perfectly.
The simplest thing to try is to:
Convert your images to binary images (by a simple threshold)
Apply the Hough transform (OpenCV and Matlab already have it implemented)
In the Hough transform results, detect peaks at angles of 0° and ±90° (vertical and horizontal lines)
In OpenCV and Matlab you have extra options for the Hough transform that allow you to fill the gaps between two disconnected segments belonging to the same straight line. You may need a few extra post-processing operations on the results, but these are the main steps.
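A minimal sketch of those steps with OpenCV and NumPy (the file name, threshold value, and Hough parameters are placeholders to tune):
import cv2
import numpy as np

# Load the generated image and convert it to a binary image by a simple threshold
img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # placeholder path
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Probabilistic Hough transform; maxLineGap bridges small gaps within a line
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=10)

# Keep only (nearly) horizontal and vertical segments and draw them as the binary output
output = np.zeros_like(binary)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
        if angle < 5 or angle > 175 or abs(angle - 90) < 5:
            cv2.line(output, (x1, y1), (x2, y2), 255, 1)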
Goal:
For the past two weeks I've been trying to figure out how to convert the following image:
To one that looks like this (may not match exactly, as this image was taken at a different time):
Lens Correction (necessary?):
The first thing I noticed is that simply slicing the image and overlaying the four parts wouldn't work perfectly, as the curvature of certain lines does not match. For instance, the mid-court line bends left in the second slice and bends right in the third slice. This bending looks like a barrel distortion so I tried using both a parameterized lens correction function (passing k1, k2, and k3 to OpenCV) and using lensfun. Since the lensfun database does not include my camera make or model (it's an AXIS camera) and I do not know the make or model of the lens (it's manufactured as part of the camera), I wrote a small script to dump test images using various lenses with various parameters, then skimmed through the thousands of output images until I found one that looked like it had relatively straight lines:
This correction was done using the "Samyang 12mm f/2.8 Fish-Eye ED AS NCS" lens with a "Canon EOS 10D" camera in lensfun. It's probably not perfect, but I figured it was close enough to move on to step two.
Once the lens distortion was corrected, the second issue was that the same line in two slices pointed in different directions, which should be correctable with a simple perspective transform. So I began a long quest to figure out the proper parameters for this transform.
Failed Attempts:
1. Using SciPy
I started by writing a cost function to judge the "quality" of a given set of parameters (overlapped pixels should match) and applying SciPy's solver to figure it out. I made several tweaks to my cost function (applying a Gaussian blur, scaling down the image, grayscaling the image, using the Sobel operator to get a gradient, looking only at the pixels on either side of a "seam" after overlapping instead of the whole overlap region, etc.), but it always failed to find a good solution. The results looked worse than the original camera image most of the time:
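For reference, a stripped-down sketch of that kind of setup (the shift-only parameterization and the overlap width here are illustrative, not the exact code I used):
import cv2
import numpy as np
from scipy import optimize

def seam_cost(params, left, right, overlap=100):
    # Mismatch inside the assumed overlap region when `right` is shifted by (dx, dy).
    # The real search was over perspective-transform parameters, not just a shift.
    dx, dy = params
    h, w = left.shape[:2]
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    shifted = cv2.warpAffine(right, M, (w, h))
    a = cv2.GaussianBlur(left[:, w - overlap:].astype(np.float32), (5, 5), 0)
    b = cv2.GaussianBlur(shifted[:, :overlap].astype(np.float32), (5, 5), 0)
    return float(np.mean((a - b) ** 2))

# res = optimize.minimize(seam_cost, x0=[0.0, 0.0],
#                         args=(left_slice, right_slice), method='Nelder-Mead')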
2. Using math
When that failed I tried applying math to compute the proper perspective transform. I know the FOV of the camera (from the spec sheet), I know the image width and height, I know the sensor size (from the spec sheet), and using a protractor I measured the angles between the lenses. Using the pinhole model I then calculated the expected (x,y) values of points on the image plane and what transform would be necessary to correct them. The results looked better than SciPy, but were still dismal.
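In code form, the pinhole relation boils down to something like this (the FOV, width, and angle values below are placeholders, not the actual spec-sheet numbers):
import numpy as np

fov_deg = 100.0    # placeholder horizontal field of view from the spec sheet
width_px = 1920    # placeholder image width of one slice

# Focal length in pixels from the horizontal FOV
f_px = (width_px / 2.0) / np.tan(np.radians(fov_deg / 2.0))

# Under the pinhole model, a point seen at angle theta from the optical axis
# lands at this x offset from the image centre
theta_deg = 20.0   # placeholder angle (e.g. measured between lenses)
x_px = f_px * np.tan(np.radians(theta_deg))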
3. Using OpenCV's Stitcher
After this I tried using OpenCV's built-in Stitcher class. However it failed to stitch together slices 2 and 3 due to insufficient overlap between the images (and about 10% of the time it even failed to stitch together slices 1 and 2, presumably because of the non-deterministic nature of RANSAC). Even when it did succeed, the stitch wasn't that great:
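For reference, the Stitcher usage is essentially this (OpenCV 4.x naming; the slice paths are placeholders):
import cv2

# slices: list of BGR images, one per camera view
slices = [cv2.imread(p) for p in ('slice1.png', 'slice2.png', 'slice3.png', 'slice4.png')]  # placeholder paths

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)  # cv2.createStitcher() on older 3.x builds
status, pano = stitcher.stitch(slices)
if status != cv2.Stitcher_OK:
    print('Stitching failed with status', status)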
4. Using ORB and OpenCV's findHomography
Most recently I tried using ORB with a mask (only looking for features in the overlap region) and OpenCV's findHomography function to create a custom version of the Stitcher. While the matches seemed promising, the resulting stitch was still sub-optimal:
I'm beginning to suspect that my methodology (slice -> lens correct -> perspective transform -> overlay) is flawed and there's a better way to do this.
5. Updated ORB / findHomography
I updated my feature detection to eliminate any matches where the Y coordinates differed drastically (e.g. matching the white of the table to the white of the lights). After doing this my number of matched features fell from ~110 to ~55, but the homography was improved significantly. Here's the stitch that results for slices 1/2 and 2/3 with the update:
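In rough outline, the updated matching step looks something like this (a sketch only; the mask width and Y tolerance are illustrative values, not my exact ones):
import cv2
import numpy as np

def stitch_pair(left, right, overlap=200, max_dy=20):
    # Align `right` onto `left` using ORB features restricted to the overlap strips
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # Masks so ORB only looks for features in the overlap region
    mask_l = np.zeros(gray_l.shape, np.uint8)
    mask_l[:, -overlap:] = 255
    mask_r = np.zeros(gray_r.shape, np.uint8)
    mask_r[:, :overlap] = 255

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray_l, mask_l)
    kp2, des2 = orb.detectAndCompute(gray_r, mask_r)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Drop matches whose Y coordinates differ drastically (e.g. table matched to lights)
    good = [m for m in matches
            if abs(kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1]) < max_dy]

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right slice into the left slice's frame and overlay the left slice
    pano = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
    pano[:, :left.shape[1]] = left
    return pano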
Until someone can tell me that I'm going about this all wrong, I'm going to keep pursuing this strategy with the following added step:
Slice image
Lens correct each slice
Perspective transform slice 2 or 3 so that the side line is horizontal and the mid-court line is vertical
Use ORB + match filtering + findHomography to iteratively align and then stitch adjacent slices
Ultimately, when it's all said and done, I want to compute a mapping from input pixels to output pixels so that we're not doing all of this complex work (lens correction, ORB, findHomography, etc.) per frame. We'll do it once per camera, save the mapping to a file somewhere, and then map the input video to an output video frame-by-frame in real time using cv2.remap.
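Once that mapping exists, applying it per frame would look roughly like this (the file and stream names are placeholders):
import cv2
import numpy as np

# Computed once per camera by chaining lens correction and the homographies,
# then saved to disk (e.g. with np.savez)
maps = np.load('camera_mapping.npz')          # placeholder file name
map_x = maps['map_x'].astype(np.float32)      # for each output pixel, the source x
map_y = maps['map_y'].astype(np.float32)      # for each output pixel, the source y

cap = cv2.VideoCapture('input_stream.mp4')    # placeholder source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stitched = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    # ... write or display `stitched` at the full 30 fps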
Note:
The second image I posted showing the "expected output" comes directly from the camera in question. It can be configured to return the first image at 30 fps, or the second image at 10 fps. We wish to perform the stitching off-camera on a more powerful computer so we can get 30 fps but still have the single image.
AXIS provides an SDK for doing the stitching off-camera, but this SDK is Windows-only and most of our tech stack is Linux and most of our development machines are Mac OS. I have used a Windows computer to try and look into the stitching SDK they provide, however I had no luck getting it to compile and run. Their sample code kept throwing errors and I've never had any luck getting Visual Studio or C++ to play nicely for me.
My suggestion is to train an autoencoder. Use the first image as the input and the second one as the target output, as in a denoising autoencoder:
Note that you may lose resolution if you make the bottleneck in the middle layer too small.
Variational autoencoders introduce a latent vector but work on the same principle.
You can adapt this code:
from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, Activation, UpSampling2D, MaxPooling2D, Reshape
from keras.optimizers import SGD

# input_shape, learning_rate, momentum and decay_rate are placeholders you define
denoise = Sequential()
denoise.add(Conv2D(20, (3, 3), padding='valid', input_shape=input_shape))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(UpSampling2D(size=(2, 2)))
denoise.add(Conv2D(20, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(Conv2D(20, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(MaxPooling2D(pool_size=(3, 3)))
denoise.add(Conv2D(4, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(Reshape((28, 28, 1)))

sgd = SGD(lr=learning_rate, momentum=momentum, decay=decay_rate, nesterov=False)
denoise.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
denoise.summary()

# Noisy input images in, clean target images out
denoise.fit(x_train_noisy, x_train, epochs=50, batch_size=30, verbose=1)
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify / read all possible words present on the board.
An example image -
Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.
After Perspective transform -
The algorithm that I am using is as follows -
Apply adaptive thresholding to the grayscale image of the Scrabble board.
Dilate / Close the image, find the largest contour in the given image, then find the convex hull, and completely fill the area enclosed by the convex hull.
Find the boundary points (contour) of the resulting image, apply contour approximation to get the corner points, then apply the perspective transform (a condensed sketch of these steps follows below).
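A condensed sketch of those three steps (the threshold block size, kernel size, and output size are illustrative values):
import cv2
import numpy as np

def find_board(image, out_size=800):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Step 1: adaptive thresholding
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    # Step 2: close gaps so the board becomes one blob, take the largest contour
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=3)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x return values
    largest = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(largest)
    # Step 3: approximate the hull to four corners and warp
    approx = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
    if len(approx) != 4:
        return None                    # this is where the circular-base images fail
    src = approx.reshape(4, 2).astype(np.float32)   # corners should be ordered TL, TR, BR, BL
    dst = np.float32([[0, 0], [out_size, 0], [out_size, out_size], [0, out_size]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_size, out_size))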
Corner Points found -
This approach works for images like these. But, as you can see, many square boards have a base that is curved at the top and bottom. Sometimes the base is a big circular board, and with these images my approach fails. Example images and outputs -
Board with Circular base:
Points found using above approach:
I can post more such problematic images, but this image should give you an idea about the problem that I am dealing with. My question is -
How do I find the rectangular board when a circular board is also present in the image?
Some points I would like to state -
I tried using Hough lines to detect the lines in the image, find the largest vertical line(s), and then find their intersections to get the corner points. Unfortunately, because of the tiles, the lines come out distorted / disconnected, so this attempt failed.
I have also tried applying contour approximation to all the contours found in the image (I was assuming that the large rectangle, too, would be a contour), but that approach failed as well.
I have implemented my solution in OpenCV-Python. Since the approach is what matters here, and the question was becoming a tad too long, I didn't post the relevant code.
I am willing to share more such problematic images as well, if it is required.
Thank you!
EDIT1
@Silencer's answer has been mighty helpful to me for identifying letters in the image, but I want to accurately find the placement of the words in the image. Hence, I feel identifying the rows and columns is necessary, and I can do that only when a perspective transform is applied to the board.
I wrote an answer on MSER text detection:
Trying to Plot OpenCV's MSER regions using matplotlib
The code generates the following results on your images.
You can give it a try.
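The gist of that approach is MSER region detection; a minimal sketch (the image path is a placeholder, and the full code is in the linked answer):
import cv2

img = cv2.imread('board.jpg')                  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Draw bounding boxes around the detected regions (candidate letter blobs)
vis = img.copy()
for x, y, w, h in bboxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)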
I think @Silencer has already given a quite promising solution.
But to perform the perspective transform, you mentioned that you already tried using Hough lines to find the largest rectangle, and that it fails because of the tiles.
Given that you have a large image dataset (maybe more than 1000 images), you could also give a deep-learning-based approach a shot, where you train a model with images as input and the corresponding rectangle boundary point coordinates as output.
I have a grid in my pictures (they come from a camera). After binarization they look like this (red is 255, blue is 0):
What is the best way to detect grid nodes (crosses) on these pictures?
Note: the grid is distorted non-uniformly from cell to cell.
Update:
Some examples of different grids and their distortions before binarization:
In cases like this I first try to find the best starting point.
So, first I thresholded your image (I could also have skeletonized it and only then thresholded, but that way some data is lost irrecoverably):
Then I tried loads of tools to emphasize the most prominent features in bulk. Finally, playing with GIMP's G'MIC plugin, I found this:
Based on the above I prepared a universal pattern that looks like this:
Then I just got a part of this image:
To help determine the angle, I made a local Fourier frequency graph - this way you can obtain the local angle of your pattern:
Then you can use a simple trick that runs fast on modern GPUs - compute a difference like this (missed case):
When there is a hit, the difference is minimal; what I had in mind when talking about local maxima refers more or less to how the resulting difference should be treated. It wouldn't be wise to weight the difference outside the pattern circle the same as the difference inside, due to scale-factor sensitivity; the inside, with the cross, should be weighted more in the algorithm used. Nevertheless, the pattern differenced with the image looks like this:
As you can see, it's possible to differentiate between a hit and a miss. What is crucial is to set a proper tolerance and to use the Fourier frequencies to obtain the angle (with thresholded images the Fourier spectrum usually follows the overall orientation of the analyzed image).
The above approach can later be complemented by Harris corner detection, or Harris detection can be modified using the above patterns to distinguish two to four closely placed corners.
Unfortunately, all of these techniques are scale-dependent in this case and should be adjusted accordingly.
There are also other approaches to your problem, for instance watershedding the image first, then extracting regions, disregarding the foreground, simplifying the curves, and checking whether their corners form a consecutive equidistant pattern. But to my nose that would not produce correct results.
One more thing - libgmic is the G'MIC library, from which you can use the transformations shown above directly or through bindings, or take the algorithms and rewrite them in your app.
I suppose that this can be a potential answer (actually mentioned in comments): http://opencv.itseez.com/2.4/modules/imgproc/doc/feature_detection.html?highlight=hough#houghlinesp
There may also be other ways using skimage's feature detection tools.
But actually, instead of the Hough transform, which could contribute to huge bloat and a lack of precision (it assumes straight lines), I would suggest trying Harris corner detection - http://docs.opencv.org/2.4/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html .
This can be further adjusted to your specific issue (you are looking for cross corners, so the local maxima should depend on the cross-like distribution). Then some curve approximation can be done based on the points obtained.
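A minimal Harris example on the binarized grid (the blockSize, ksize, and threshold factor are values to tune):
import cv2
import numpy as np

img = cv2.imread('grid_binary.png', cv2.IMREAD_GRAYSCALE)   # placeholder path
harris = cv2.cornerHarris(np.float32(img), blockSize=5, ksize=3, k=0.04)

# Keep only strong local responses as candidate grid nodes
nodes = np.argwhere(harris > 0.01 * harris.max())            # (row, col) coordinates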
Maybe you could compute Hough lines and determine their intersections. The OpenCV documentation can be found here.
I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, the object does not necessarily have to be a shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to thin all edges with something like skeletonization before computing the integral image, to make the result insensitive to wide lines.
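A sketch of the density computation with an integral image (the window size, step, and density threshold are values you would have to tune):
import cv2
import numpy as np

binary = cv2.imread('circuit.png', cv2.IMREAD_GRAYSCALE)   # placeholder path
ink = (binary < 128).astype(np.uint8)                      # 1 where the drawing is black

# Integral image: the sum of `ink` over any rectangle in O(1)
ii = cv2.integral(ink)                                     # shape (h + 1, w + 1)

def density(x, y, w, h):
    # Fraction of black pixels inside the window at (x, y) of size (w, h)
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return s / float(w * h)

# Moving window: flag windows whose ink density exceeds a threshold
win, step, thresh = 40, 10, 0.25                           # illustrative values
h, w = ink.shape
dense = [(x, y) for y in range(0, h - win, step)
                for x in range(0, w - win, step)
                if density(x, y, win, win) > thresh]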
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
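For example, the two transforms would look roughly like this in OpenCV (the edge thresholds, radius range, and accumulator parameters are guesses to tune):
import cv2
import numpy as np

img = cv2.imread('circuit.png', cv2.IMREAD_GRAYSCALE)      # placeholder path

# Hough circle transform for the round component
blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=80)

# Probabilistic Hough line transform for the wires
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=30, maxLineGap=5)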
I have trained a classifier in Python for classifying pixels in an image of cells as edge or non edge. I've used it successfully on a few image datasets but am running into problems with this particular dataset, which seems pretty ambiguous even to the human eye. I don't know of any existing automated technique that can segment it accurately.
After prediction I obtain the following image:
I am relatively new to image processing and am unsure with how to proceed with actually obtaining the final segmentations of the cells. I have briefly tried a few different techniques - namely Hough circular transform, level sets, skeletonization, contour finding - but none have really done the trick. Am I just not tuning the parameters correctly or is there a better technique out there?
Here are the correct outlines, by the way, for reference.
And the original image:
And the continuous probability map:
Very nice work on boundary detection. I used to work on similar segmentation problems.
Theory:
Once you have obtained your edge map, where e(i,j) indicates the degree of "edge-iness" of pixel (i,j), you would like a segmentation of the image that respects the edge map as much as possible.
In order to formulate this "respect the edge map" in a more formal fashion I suggest you look at the Correlation clustering (CC) functional:
The CC functional assesses the quality of a segmentation based on pairwise relations between neighboring pixels: whether they should be in the same cluster (no edge between them) or in different clusters (there is an edge between them).
Take a look at the example at section 7.1 of the aforementioned paper.
CC is used for similar segmentation problems in medical (neuronal) imaging as well, see e.g., here.
Practice:
Once you convince yourself that CC is indeed an appropriate formulation for your problem, there is still the question of how exactly to convert your binary edge map into an affinity matrix that CC can process. Bear in mind that CC needs as input a (usually sparse) adjacency matrix with positive entries for pairs of pixels assumed to belong to the same segment, and negative entries for pairs of pixels assumed to belong to different segments.
Here's my suggestion:
The edges in your edge map look quite thick and not well localized. I suggest non-max suppression or morphological thinning as a pre-processing stage.
Once you have better localized edges, ignore the "edge" pixels and only work with the "non-edge" pixels; let's call them "active".
Two active pixels that are next to each other: there is no "edge" pixel between them, so they should be together. Hence the adjacency matrix for immediate neighbors should have positive entries.
Consider three pixels on a line where the two endpoints are "active" pixels: if the middle one is an edge, then the two active pixels should not belong to the same cluster - the corresponding entries in the adjacency matrix should be negative. If the middle pixel is also active, then the corresponding entries in the adjacency matrix should be positive.
Considering all possible neighboring pairs and triplets (inducing a 24-connected grid graph) allows you to construct an affinity matrix with positive and negative entries suitable for CC (see the sketch after this list).
Given this matrix, you should search for the segmentation with the best CC score (the optimization stage). I have Matlab code for this here. You can also use the excellent openGM package.
The optimization will result in a partition of the active pixels only; you can map it back to the input image domain, leaving the edge pixels unassigned to any segment.
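A small sketch of how such an affinity matrix could be assembled from the edge map (only horizontal pairs and triplets are shown; the weights and connectivity are illustrative):
import numpy as np
import scipy.sparse as sp

def build_affinity(edge_map, w_pos=1.0, w_neg=-1.0):
    # edge_map: boolean array, True on the (thinned) "edge" pixels.
    # Returns a sparse affinity matrix over the "active" (non-edge) pixels.
    h, w = edge_map.shape
    active = ~edge_map
    idx = -np.ones((h, w), dtype=np.int64)
    idx[active] = np.arange(active.sum())       # index of each active pixel

    rows, cols, vals = [], [], []
    for y in range(h):
        for x in range(w - 2):
            a, m, b = idx[y, x], idx[y, x + 1], idx[y, x + 2]
            if a < 0:
                continue
            if m >= 0:                           # immediate active neighbour: positive entry
                rows.append(a); cols.append(m); vals.append(w_pos)
                if b >= 0:                       # middle pixel active too: endpoints positive
                    rows.append(a); cols.append(b); vals.append(w_pos)
            elif b >= 0:                         # edge pixel between two active pixels: negative
                rows.append(a); cols.append(b); vals.append(w_neg)
    n = int(active.sum())
    return sp.coo_matrix((vals, (rows, cols)), shape=(n, n))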
Looking at the picture of the edge/non-edge pixels from the classifier, we can see that the gradient image of your input already basically gives the result of the classifier you have learnt. The confidence map, however, shows a good solution, except that:
1. They are connected level sets with varying sizes.
2. You have noisy bright spots inside the cells that cause false outputs from the classifier (maybe some smoothing could be considered).
3. I guess it would probably be easier to characterize the interior of each cell: the grayscale variations, the average size. Learning these distributions would probably get you better detection results. Topologically, we have a set of low grayscale values nested inside larger grayscale values.
To perform this, one could use graph cuts with a GMM model for the unary costs and a learnt gradient distribution for the pairwise term.
I think your Hough transform is a good idea. One thing you should try (if you don't already) is to threshold your image before you run it through the transform, though the article I just linked only covers binary thresholding. What this might do is exaggerate the differences between the edges and the background, so they might be easier to detect. Basically, apply a function (in the form of a filter that operates on the pixel's value) to each pixel.
Another thing you can try is active contours. Basically, you lay down some circles and they move through the image until they find what you're looking for.
My last idea is to try a wavelet transform. These seem to work pretty well at picking out boundaries and borders in images. I hope these ideas can get you started.