OpenCV Best method to automatically draw a straight line through points - python

What OpenCV Python tool would be best to draw a straight line between multiple 'points' on an image?
Something like a best-fit line, but for a thresholded image rather than a plot.
Thanks
source image
processed image (binary threshold)
each black dot is supposed to be an individual crop
what I'd like to have

I'm afraid Hough Lines will fail here. You could try RANSAC to get the best estimated line, erase its inlier points, and run the next RANSAC. Though you can still run into trouble: exactly how many clusters do you want to recognize? You could set a minimum sample score for a line model. You'd also need to filter out horizontal lines.
There are RANSAC implementations in both scikit-learn and scikit-image.
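Here is a minimal sketch of that iterative RANSAC idea using scikit-learn's RANSACRegressor. It assumes the thresholded crops are white on black, the rows are closer to vertical than horizontal, and you know roughly how many rows to expect; the file name, residual threshold and row count below are placeholders to tune.

```python
import numpy as np
import cv2
from sklearn.linear_model import RANSACRegressor

binary = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
ys, xs = np.nonzero(binary)                 # coordinates of the crop pixels
pts = np.column_stack([xs, ys]).astype(float)

n_rows = 5                                  # expected number of crop rows (guess)
lines = []
for _ in range(n_rows):
    if len(pts) < 10:
        break
    # Fit x = a*y + b so near-vertical rows are handled well.
    ransac = RANSACRegressor(residual_threshold=15)
    ransac.fit(pts[:, 1:2], pts[:, 0])
    inliers = ransac.inlier_mask_
    y0, y1 = pts[inliers, 1].min(), pts[inliers, 1].max()
    x0, x1 = ransac.predict([[y0]])[0], ransac.predict([[y1]])[0]
    lines.append(((int(x0), int(y0)), (int(x1), int(y1))))
    pts = pts[~inliers]                     # erase the inliers, fit the next row

vis = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
for p0, p1 in lines:
    cv2.line(vis, p0, p1, (0, 0, 255), 2)
cv2.imwrite("rows.png", vis)
```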

If you haven't tried them yet, start with the OpenCV functions HoughLines() and HoughLinesP() to detect lines in an image.
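For reference, a minimal HoughLinesP call looks like this; the input file name and all parameter values are placeholders you would have to tune for your image.

```python
import cv2
import numpy as np

binary = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)

vis = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("hough_lines.png", vis)
```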

Related

Detection of spots on an image, 3D-plotting of the detected spots, then calculation - Python

Can someone help me detect spots on an image like the one in the picture, using Python?
After detecting the spots, a red circle is drawn around each one and the result is plotted, as shown in the attached pictures.
This is the 3D plot of the detected spots.
You can do this using a Gaussian Mixture Model. I don't think there is an implementation in SciPy, but there is one in scikit-learn.
Here is a tutorial on this.
(from my answer to this question)
Or, if your spots always look similar, I would recommend template matching. There is a function for this in OpenCV, and also one in scikit-image.
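A small template-matching sketch with OpenCV's matchTemplate; "spot.png" stands for a hypothetical crop of one example spot, and the 0.7 score threshold is just a starting point.

```python
import cv2
import numpy as np

image = cv2.imread("spots.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
template = cv2.imread("spot.png", cv2.IMREAD_GRAYSCALE)    # one example spot
h, w = template.shape

result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.7)            # keep matches above the threshold

vis = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
for x, y in zip(xs, ys):
    # Draw a red circle centred on each matched template position.
    cv2.circle(vis, (x + w // 2, y + h // 2), max(h, w) // 2, (0, 0, 255), 2)
cv2.imwrite("detected_spots.png", vis)
```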

Corner detection in Python OpenCV for extraction of squares

I am working on a form extraction module which detects text in specific segments of an image. So far I am able to remove the text and retain only the bounding box in the image.
My next step was to extract each box in the image. To do that I am trying to detect corners in the image, but this is where I am stuck. I tried template matching; this was the result. Although the results look promising, the drawback is that this method is very time-consuming, and a few corners are still not detected.
I also tried Shi-Tomasi Corner Detector after dilating the image.
What would be the best approach to solve this problem?
I suggest you detect the lines instead, e.g. using Hough transform, followed by edge chaining, followed by robust line fitting on each chain.
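One way to read that suggestion in code: find near-horizontal and near-vertical segments with HoughLinesP and take their crossings as corner candidates. The intersection test below is a simplification of the edge-chaining and line-fitting steps, and the parameter values are assumptions to tune.

```python
import cv2
import numpy as np

binary = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(binary, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=5)

horizontal, vertical = [], []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        bucket = horizontal if abs(y2 - y1) < abs(x2 - x1) else vertical
        bucket.append((x1, y1, x2, y2))

# Corner candidates: crossings that lie on (or within a few pixels of)
# both a horizontal and a vertical segment.
corners = []
for hx1, hy1, hx2, hy2 in horizontal:
    y = (hy1 + hy2) // 2
    for vx1, vy1, vx2, vy2 in vertical:
        x = (vx1 + vx2) // 2
        if (min(hx1, hx2) - 5 <= x <= max(hx1, hx2) + 5
                and min(vy1, vy2) - 5 <= y <= max(vy1, vy2) + 5):
            corners.append((int(x), int(y)))
```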

Segmentation of lines, words and characters from a document's image

I am working on a project where I have to read a document from an image. In the initial stage I will read machine-printed documents and then eventually move to images of handwritten documents. However, I am doing this for learning purposes, so I don't intend to use APIs like Tesseract.
I intend to do in steps:
Preprocessing (blurring, thresholding, erosion & dilation)
Character Segmentation
OCR (or ICR in later stages)
I am working on the character segmentation right now. I recently tried it using horizontal and vertical histogram projections, but I was not able to get good results for some of the fonts, such as the one shown in the image.
Is there any other method or algorithm to do the same?
Any help will be appreciated!
Edit 1:
The result I got after detecting blobs using cv2.SimpleBlobDetector.
The result I got after using cv2.findContours.
A first option is deskewing, i.e. measuring the skew angle and compensating for it. You can achieve this, for instance, with Gaussian filtering or erosion in the horizontal direction, so that the characters widen and come into contact. Then binarize and thin, or find the lower edges of the blobs (or directly the directions of the blobs). You will get slightly oblique line segments which give you the skew direction.
When you know the skew direction, you can counter-rotate to perform the deskewing. The vertical histogram will then reliably separate the lines, and you can use a horizontal histogram in each of them.
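A minimal sketch of that deskewing idea, assuming white ink on a black background: the characters are merged horizontally so each text line becomes one blob, the blob directions give the skew angle, and the image is counter-rotated before taking the projection histograms. The kernel size, area threshold and file name are placeholders.

```python
import cv2
import numpy as np

binary = cv2.imread("page_binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Widen characters horizontally until they touch within a line.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
merged = cv2.dilate(binary, kernel)

contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
angles = []
for c in contours:
    if cv2.contourArea(c) < 500:          # skip small specks (threshold is a guess)
        continue
    vx, vy, _, _ = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if vx < 0:                            # normalise the direction sign
        vx, vy = -vx, -vy
    angles.append(np.degrees(np.arctan2(vy, vx)))

skew = float(np.median(angles)) if angles else 0.0
h, w = binary.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
deskewed = cv2.warpAffine(binary, M, (w, h), flags=cv2.INTER_NEAREST)

# Projection histograms on the deskewed image then separate lines and characters.
line_hist = deskewed.sum(axis=1)
```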
A second option, IMO much better, is to binarize the characters and perform blob detection. Proximity analysis of the bounding boxes will then let you form chains of characters; these chains give you the lines and, where the spacing is larger, delimit the words.
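A compact sketch of that bounding-box proximity idea: group character blobs into lines by vertical overlap, then split words wherever the horizontal gap is large. The area filter, gap threshold and file name are assumptions to tune for your scans.

```python
import cv2

binary = cv2.imread("page_binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]

# Group boxes into text lines: a box joins a line if it overlaps it vertically.
boxes.sort(key=lambda b: b[1])                    # sort by top edge (y)
lines = []
for x, y, w, h in boxes:
    for line in lines:
        ly, lh = line[0][1], line[0][3]
        if y < ly + lh and ly < y + h:            # vertical overlap
            line.append((x, y, w, h))
            break
    else:
        lines.append([(x, y, w, h)])

# Within each line, split into words where the horizontal gap is large.
for line in lines:
    line.sort(key=lambda b: b[0])
    words, current = [], [line[0]]
    for prev, box in zip(line, line[1:]):
        gap = box[0] - (prev[0] + prev[2])
        if gap > 15:                              # word-gap threshold (guess)
            words.append(current)
            current = [box]
        else:
            current.append(box)
    words.append(current)
```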

How do I use OpenCV to fit a line to a curve like this

I am wondering how I can achieve this.
This is the line I want to fit to the image.
I have tried the Hough line transform, but I am unable to achieve this result; I got something like this:
Properties of the line:
The line has to be tangent to the black region, not cutting through it.
You can find contours, apply the convex hull algorithm, then select the edge you need.
The image is already thresholded, so you can call findContours() directly.
This will give you a detailed list of points for the contours (the retrieval flag lets you keep only the outer contours, and from those you can pick the largest).
If you want to simplify it, you can use approxPolyDP().
Play with the epsilon parameter to get a simpler path.
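A short sketch of that contour / convex hull / approxPolyDP pipeline; the file name is hypothetical and the 0.01 epsilon factor is just a value to play with.

```python
import cv2

binary = cv2.imread("curve_binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)        # keep the biggest contour

hull = cv2.convexHull(largest)                      # convex outline of the region
epsilon = 0.01 * cv2.arcLength(hull, True)
simplified = cv2.approxPolyDP(hull, epsilon, True)  # fewer vertices, simpler path

vis = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
cv2.polylines(vis, [simplified], isClosed=True, color=(0, 0, 255), thickness=2)
cv2.imwrite("hull_edges.png", vis)
```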
I'm not sure how a single line can be a tangent to the majority of this complex outline.

Image Segmentation based on Pixel Density

I need some help developing code that segments a binary image into components of a certain pixel density. I've been doing some research into OpenCV algorithms, but before developing my own algorithm for this, I wanted to ask around to make sure it hasn't already been done.
For instance, I have code that imports this picture as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case), though the object does not necessarily have to be a regular shape.
The solution I thought of was to use pixel density: most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out this way?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window at varying scales. This would be very similar to how face detection works, but using only one super-simple feature.
It might be beneficial to thin all edges with something like skeletonization before computing the integral image, to make the result insensitive to wide lines.
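A sketch of that idea: optionally thin the strokes, build the integral image of the ink pixels, then slide a window and keep regions whose ink density exceeds a threshold. The window size, step and density cut-off are assumptions, and the thinning call needs the opencv-contrib-python package.

```python
import cv2
import numpy as np

binary = cv2.imread("circuit_binary.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
ink = (binary > 0).astype(np.uint8)

# Optional: thin wide strokes so long lines contribute little density.
if hasattr(cv2, "ximgproc"):
    ink = cv2.ximgproc.thinning(ink * 255) // 255

integral = cv2.integral(ink)                 # (h+1, w+1) summed-area table

win, step, min_density = 60, 15, 0.12        # window size, stride, density cut-off
h, w = ink.shape
dense_regions = []
for y in range(0, h - win, step):
    for x in range(0, w - win, step):
        s = (integral[y + win, x + win] - integral[y, x + win]
             - integral[y + win, x] + integral[y, x])
        if s / float(win * win) > min_density:
            dense_regions.append((x, y, win, win))
```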
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
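A minimal HoughCircles example for the circle part; the grayscale input, blur size and the radius/param values are placeholders to adapt to your image.

```python
import cv2
import numpy as np

gray = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
blurred = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=40, minRadius=10, maxRadius=80)

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(vis, (x, y), r, (0, 0, 255), 2)
cv2.imwrite("circles.png", vis)
```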
