This question already has answers here:
how to remove straight lines or non-curvical lines in a canny image
(2 answers)
Closed 6 years ago.
I have the following image. My task is to examine the shape of worms (enclosed in rectangles) and classify them as dead/alive - the ones that are rod-like (straight) are dead and the curved ones are alive.
I have used adaptive thresholding on the source image and drawn the contours and bounding rectangles using OpenCV 3.1.0 in Python 2.7.1. What would be the simplest, and perhaps most efficient, way to achieve the above aim?
(isConvex() certainly doesn't work :p)
One approach would be to compare the length of the contour (or worm) to the diagonal distance of the bounding box (upper left corner to bottom right corner distance). The straight ones will have a length closer to this distance than the curvy ones will.
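For illustration, a minimal sketch of that comparison in OpenCV-Python; the 2.5 ratio threshold is a guess (a straight worm's closed contour runs roughly twice the diagonal) and would need tuning on real data:

```python
import cv2
import numpy as np

def is_dead(contour, ratio_threshold=2.5):
    # Perimeter of the closed contour around the worm.
    perimeter = cv2.arcLength(contour, True)
    # Diagonal of the upright bounding box (corner-to-corner distance).
    x, y, w, h = cv2.boundingRect(contour)
    diagonal = np.hypot(w, h)
    # A rod-like worm's contour hugs the diagonal, so the ratio stays
    # near 2; a curled worm's perimeter is much longer than its box.
    return perimeter / diagonal < ratio_threshold
```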
A syntax called 'gcode' is used to tell CNC engraving machines how to move.
An example of gcode is as follows :
G00 Z1 F800 (lift z axis by 1mm)
G00 X49.546785 Y-11.48703 F800 (Go to these coordinates at 800 mm/min)
G01 Z-0.35 F100 (Penetrate. Lower the tool into the work by 0.35mm)
G03 X49.126859 Y-11.464812 I-0.385599 J-3.308393 F80 (cut an anticlockwise arc at 80 mm/min)
(arc ends at X,Y, arc centre is x+i, y+j)
etc.
As you can see, we can describe the movement of the tool in straight lines (G0, G1) and in arcs (G2, G3) using coordinates in the x, y and z axes.
Using this mechanism we can draw (engrave) paths; often they are closed paths, as below:
In this image we see a closed path (letter a) in black. The green outline is the same path but 'scaled upwards' and the red path is the same path but scaled downwards.
In Inkscape we can do this using the 'dynamic offset' tool.
I am looking for an algorithm I can apply to gcode (as described above) to scale paths as described.
My first thought is literally just to scale every single line and arc:
Say we are scaling by 'n'%
Essentially we would make every line n% longer, and every arc n% bigger.
But what would the resulting path centre on?
Does anyone know the name of this algorithm, or have any links or examples of how to achieve this in, say, SVG or any other coordinate-based system (preferably in Python, if possible)?
Addendum:
The process of scaling polygons inwards and outwards largely goes by two distinct names: 'dilation' and 'offsetting'.
See here for a near answer to this question
As given in the comments, Dilation, Erosion, Opening and Closing are standard morphology operations. In fact, the graphic at Wikipedia gives details that are quite similar to what you have.
The difference is that the inside of the object is included in the dilation and erosion. Just alter the structuring element size and you can subtract the images to get the traces that you want.
The erosion and dilation are simpler forms of morphology, so look at those first to understand the algorithms. They are implemented in OpenCV which has Python bindings; however, they are fairly simple to code.
It may be possible to use XOR to get the outline without image subtraction, but simply taking the perimeter of the opening and closing gives the outline I think you are looking for. Dilation and erosion will give slightly different paths. You could also use Voronoi partitioning as a cruder alternative. The main difference is how corners and other junctions are handled.
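As a minimal sketch of the morphology route in OpenCV-Python (the file name is illustrative, and the structuring element size sets the offset distance):

```python
import cv2
import numpy as np

# Binary image of the filled glyph: white shape on a black background.
shape = cv2.imread("glyph.png", cv2.IMREAD_GRAYSCALE)
_, shape = cv2.threshold(shape, 127, 255, cv2.THRESH_BINARY)

# The structuring element's radius controls how far the path is offset.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

dilated = cv2.dilate(shape, kernel)  # outward offset (the green path)
eroded = cv2.erode(shape, kernel)    # inward offset (the red path)

# Subtracting the images leaves just the offset bands/traces.
outer_band = cv2.subtract(dilated, shape)
inner_band = cv2.subtract(shape, eroded)
```

Tracing the boundary of dilated or eroded (e.g. with cv2.findContours) then gives coordinate paths that could be converted back into G00/G01/G03 moves.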
This question already has answers here:
OpenCV Python: Draw minAreaRect ( RotatedRect not implemented)
(5 answers)
Find Rotated Rectangle in OpenCV Python
(1 answer)
Closed 2 years ago.
I have been messing with Python and OpenCV on some images, and kind of got the result I want. Here is an example of the output image.
So the important part is that OpenCV has drawn red rectangles around the objects it has identified (I don't want to share what the original image was, so yes, those gray patches are all that remains after running a lot of functions on the image).
Now what I want to do is expand the rectangles (there are 2 here, but 3 or 4 are also possible) so they are as large as possible without touching. I just can't seem to come up with any kind of algorithm to do that. Any suggestions?
This question already has answers here:
Split text lines in scanned document
(2 answers)
Closed 4 years ago.
I have generated the edges using the Canny edge detector and now want to crop the source image to those edges.
Is there any way to get:
the left-most and top-most 255-pixel location of the image,
the right-most and top-most 255-pixel location of the image,
the left-most and bottom-most 255-pixel location of the image,
the right-most and bottom-most 255-pixel location of the image,
and crop the image based on those locations,
using OpenCV or any other Python library?
There may be better solutions, but I think you can implement a simple algorithm.
Start by drawing a square that FULLY captures the set of edge pixels. Then slowly bring in the sides, one at a time, until each encounters a 255 pixel. Once you've fully pulled in all 4 sides, you will have your desired area to crop.
You could also use four simple (one-liner) for loops to check for the "first white pixel" from each side. Since pixel x-positions start at the top-left, use x++ to scan forward and x-- to scan backwards (from the right side).
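Both ideas reduce to finding the extreme white-pixel coordinates, which NumPy can do in one pass; a minimal sketch (file names and Canny thresholds are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("source.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Row (y) and column (x) coordinates of every 255 (edge) pixel.
ys, xs = np.where(edges == 255)

# The four extremes give the tightest box around the edges.
top, bottom = ys.min(), ys.max()
left, right = xs.min(), xs.max()

cropped = img[top:bottom + 1, left:right + 1]
cv2.imwrite("cropped.png", cropped)
```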
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify / read all possible words present on the board.
An example image -
Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.
After Perspective transform -
The algorithm that I am using is as follows -
Apply adaptive thresholding to the grayscale image of the Scrabble board.
Dilate / Close the image, find the largest contour in the given image, then find the convex hull, and completely fill the area enclosed by the convex hull.
Find the boundary points (contour) of the resultant image, then apply contour approximation to get the corner points, then apply the perspective transform.
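For reference, a rough sketch of those three steps in OpenCV-Python; the threshold parameters, kernel size, output size and corner ordering are all illustrative assumptions, not the asker's actual values:

```python
import cv2
import numpy as np

gray = cv2.imread("board.jpg", cv2.IMREAD_GRAYSCALE)

# Step 1: adaptive threshold.
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

# Step 2: close, take the largest contour, fill its convex hull.
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# OpenCV 4.x; in 3.x, findContours returns three values.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))
mask = np.zeros_like(gray)
cv2.drawContours(mask, [hull], -1, 255, cv2.FILLED)

# Step 3: approximate the filled shape to 4 corners, then warp.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
approx = cv2.approxPolyDP(contours[0], 0.02 * cv2.arcLength(contours[0], True), True)
if len(approx) == 4:
    src = approx.reshape(4, 2).astype(np.float32)
    # Assumes the corners come out in a consistent order; in practice
    # they should be sorted (e.g. top-left, top-right, ...) first.
    dst = np.float32([[0, 0], [499, 0], [499, 499], [0, 499]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(gray, M, (500, 500))
```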
Corner Points found -
This approach works with images like these. But, as you can see, many square boards have a base, which is curved at the top and the bottom. Sometimes, the base is a big circular board. And with these images my approach fails. Example images and outputs -
Board with Circular base:
Points found using above approach:
I can post more such problematic images, but this image should give you an idea about the problem that I am dealing with. My question is -
How do I find the rectangular board when a circular board is also present in the image?
Some points I would like to state -
I tried using Hough lines to detect the lines in the image, find the largest vertical line(s), and then find their intersections to detect the corner points. Unfortunately, because of the tiles, all lines seem to be distorted / disconnected, and hence my attempts have failed.
I have also tried to apply contour approximation to all the contours found in the image ( I was assuming that the large rectangle, too, would be a contour ), but that approach failed as well.
I have implemented the solution in OpenCV-Python. Since the approach is what matters here, and the question was becoming a tad too long, I didn't post the relevant code.
I am willing to share more such problematic images as well, if it is required.
Thank you!
EDIT1
@Silencer's answer has been mighty helpful to me for identifying letters in the image, but I want to accurately find the placement of the words in the image. Hence, I feel identifying the rows and columns is necessary, and I can do that only when a perspective transform is applied to the board.
I wrote an answer on MSER text detection:
Trying to Plot OpenCV's MSER regions using matplotlib
The code generates the following results on your images.
You can give it a try.
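For context, the core of that MSER approach looks roughly like this (the file name is illustrative; older OpenCV 3.x builds use a slightly different detectRegions signature):

```python
import cv2

img = cv2.imread("board.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# MSER picks out stable high-contrast blobs, which letters tend to be.
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)

# Draw a bounding box around each detected region.
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite("mser_regions.png", img)
```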
I think @Silencer has already given quite a promising solution.
But to perform the perspective transform: as you have mentioned, you already tried Hough lines to find the largest rectangle, and it fails because of the tiles present.
Given that you have a large image dataset, perhaps more than 1000 images, you could also give a deep-learning-based approach a shot, where you train a model with images as input and the corresponding rectangle corner coordinates as outputs.
I need to detect the different shapes in the image, such as ellipses and circles with a dashed border.
Using OpenCV causes some trouble; e.g., OpenCV detects not one contour but a number of small separated lines (contours).
In the case of an ellipse with a dashed border, I need it to be a single contour, not separate objects.
What would you suggest to solve this problem?
I also need to recognize the ellipses (or circles) with a solid border and the ellipses (circles) with a dashed border:
Ellipse with solid border:
Ellipse with dashed border:
The Hough transform should not have any trouble with either dashed or solid borders, or a mix thereof. I have used it to detect dashed lines in one of my projects, and ellipses should not be much harder.
OpenCV by default only implements the Hough transform for circles and straight lines but there is a blog entry with code on how to adapt it for elliptical shapes.
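For the circular cases, a minimal HoughCircles sketch; every parameter value here is illustrative and needs tuning:

```python
import cv2
import numpy as np

img = cv2.imread("dashed_circle.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # smooth so gaps between dashes matter less

# The accumulator collects votes over the whole circumference, so a
# dashed border still gathers enough votes to be detected.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=0)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
cv2.imwrite("detected.png", img)
```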
The better idea here is to use a specific transform to detect circles and ellipses rather than a general "find contour" call.
I am not well enough acquainted with OpenCV to know whether this sort of detection is built in there - it should be.
An internet search for the more generic "image ellipse detection" shows it is not a settled matter - unlike square and rectangle detection - so whatever OpenCV already contains will have to be tuned up.
(The basic idea is that the underlying algorithm creates a new n-dimensional space in which every possible parameterized ellipse for the target image can be represented (with, for example, an axis each for the x and y center coordinates, the w and h radii, and one for rotation), and fills in hits in this matrix according to the pixel contrast in the target image.) Sebastian's answer puts the correct name to this approach, which I could not recall: "Hough transform", and provides the needed links on how it works and how to extend it in OpenCV.