How to find a four point polygon approximation with OpenCV? - python

So I have a problem that requires me to get a perspective transform on a series of numbers. However, in order to get the four point transform, I need the correct points to send as parameters to the function. I couldn't find any method that solves this problem, and I've tried the convex hull (it returns more than four points) and minAreaRect (it only returns a rectangle).

I don't have a lot of experience with OCR, but I would hope all the text segments live on the same perspective plane.
If so, how about using a simplified convex hull (e.g. convexHull() followed by approxPolyDP()) on one of the seven connected components to get the four points / compute the perspective, then applying the same unwarp to a scaled quad that encloses all the components? (Probably not perfect, but close.)
Hopefully the snippets in this answer will help.
I really hope the same perspective transformation can be applied to each yellow text connected component.
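A minimal sketch of that idea, assuming OpenCV 4.x and a binary mask of a single text component (the filename and the 300x100 output size are made-up placeholders):

import cv2
import numpy as np

# assumed input: binary mask of one connected text component (hypothetical file)
mask = cv2.imread("component_mask.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)              # largest blob in the mask
hull = cv2.convexHull(cnt)

# tighten the polygon approximation until only four corners remain
eps = 0.01 * cv2.arcLength(hull, True)
quad = cv2.approxPolyDP(hull, eps, True)
while len(quad) > 4:
    eps *= 1.5
    quad = cv2.approxPolyDP(hull, eps, True)

if len(quad) == 4:
    # note: src corners must be ordered to match dst (e.g. TL, TR, BR, BL)
    src = quad.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [300, 0], [300, 100], [0, 100]])   # arbitrary target size
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(mask, M, (300, 100))

The same M (or one computed from a scaled quad enclosing all the components) could then be applied to each component with warpPerspective.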

Related

Applying Homographies to Remove Perspective Distortion

When calculating a homography, information about the camera usually has to be provided. Is there any straightforward technique to achieve perspective correction without actually having the camera's properties?
Are there papers on that?
A standard technique is calibration with a target.
To identify a (planar) homography, four points suffice. Place a high-contrast rectangle on the viewed plane, take an image, and locate the rectangle's corners in the image (pixel coordinates). You can do this by image processing or just manually. Then choose the pixel coordinates where you would like the corners to map after correction.
This will allow you to write a system of eight equations in the eight unknown parameters of the homography. Fortunately, this system is easily linearized and the solution is unique.
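As a concrete example, the four-point case is a single call in OpenCV (the coordinates below are placeholders, not measured values):

import cv2
import numpy as np

# pixel coordinates of the rectangle's corners as seen in the distorted image (placeholders)
src = np.float32([[105, 212], [480, 190], [495, 420], [90, 440]])
# where those corners should map after correction (a 400x250 rectangle here)
dst = np.float32([[0, 0], [400, 0], [400, 250], [0, 250]])

H = cv2.getPerspectiveTransform(src, dst)     # solves the 8-equation linear system
# corrected = cv2.warpPerspective(img, H, (400, 250))

With more than four correspondences, cv2.findHomography fits the homography in a least-squares (or RANSAC) sense instead.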

How to find "regions" in a contour OpenCV?

Say we have the following contour information from OpenCV contours:
What I mean by a "region" is a subset of the contour with low directional variation.
So, for example, these could be regions in the provided example:
One way to detect these could be to do a local neighbourhood comparison of the dot products of the tangent at each point (i.e., see how much the tangent changes locally).
I was wondering, however, if there is a better way to do this using OpenCV directly, rather than doing the vector operations myself.
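A rough sketch of that tangent-comparison idea (the window size and angle threshold are arbitrary guesses, not tuned values):

import numpy as np

def split_by_direction(contour, window=5, max_angle_deg=10.0):
    """Split a closed contour into segments of low directional variation."""
    pts = contour.reshape(-1, 2).astype(float)            # handles OpenCV's (N, 1, 2) shape
    # tangent estimated by central differences over a small window (wraps around)
    tangents = np.roll(pts, -window, axis=0) - np.roll(pts, window, axis=0)
    angles = np.unwrap(np.arctan2(tangents[:, 1], tangents[:, 0]))
    # break wherever the local direction changes by more than the threshold
    breaks = np.where(np.abs(np.degrees(np.diff(angles))) > max_angle_deg)[0]
    return np.split(pts, breaks + 1)                       # list of "regions"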
- When your region boundaries are always near-vertical or near-horizontal, consider preprocessing the image with morphological filters (erode, dilate) to isolate the vertical and horizontal runs, then merge the results to find the alternating colour on the region boundaries.
- When your directions can go anywhere, it's more complicated! One option would be to retrieve coordinates from your pixels with the help of Hough lines; see
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
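For the Hough-line route, a hedged sketch (the input filename and all parameter values are guesses that would need tuning):

import cv2
import numpy as np

img = cv2.imread("contour_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(img, 50, 150)

# probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # segments with similar angles can then be grouped into the "regions"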

Sort points in 2D space to make a spline

I have a sequence of points which are distributed in 2D space. They represent a shape but they are not ordered. So, I can plot them as points to give an idea of the shape, but if I plot the line connecting them, I miss the shape because the order of points is not the right order of connection.
I'm wondering, how can I put them in the right order such that, if I connect them one by one in sequence, I get a spline showing the shape they represent? I found and tried the convex hull in Matlab but with no results. The shape could be complex, for example a star, and with the convex hull I get a shape that is far too simplified (many points are not taken into account).
Thanks for help!
EDIT
The image could be anything. I've randomly created one to show you a possible case, with some parts that cut into the shape; the points can also be at different distances from each other.
I've tried the convex hull function in Matlab, and that's what I get. Every time the contour has a "sharp corner", I miss it and the final shape is not what I'm looking for. Also, the Matlab function has no parameters I can set to change the convex hull result (at least I can't see any in the help).
hull = convhull(coords(:,1),coords(:,2));
plot(coords(hull,1),coords(hull,2),'.r');
You need to somehow order your points so they form a sequence; in the case of your drawing example, the points can likely be ordered by always moving to the nearest not-yet-used point, starting at one end (you'll probably have to provide the starting point).
Then you can draw a spline, maybe using Chaikin's algorithm for curves, which will locally approximate a Bézier curve.
You need to start working on this, and post another question with your code, if you are having difficulties.
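A minimal sketch of that greedy nearest-neighbour ordering in Python/numpy (the starting index is assumed to be known):

import numpy as np

def order_by_nearest_neighbour(points, start=0):
    """Greedily order 2D points so each one is followed by its nearest unused point."""
    pts = np.asarray(points, dtype=float)
    remaining = list(range(len(pts)))
    order = [remaining.pop(start)]
    while remaining:
        last = pts[order[-1]]
        dists = np.linalg.norm(pts[remaining] - last, axis=1)
        order.append(remaining.pop(int(np.argmin(dists))))
    return pts[order]

Once the points are in order, a Chaikin pass or an interpolating spline over the ordered list gives the smooth curve.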
Alpha shapes may perform better than convex hulls for this problem. An alpha shape will touch all the points on the exterior of a point cloud, and can even carve out holes.
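If no ready-made alpha-shape library is at hand, the idea can be sketched with scipy's Delaunay triangulation (the alpha value is a parameter you would have to tune to the point spacing):

import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Return the boundary edges (index pairs) of the alpha shape of 2D points."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a = np.linalg.norm(pts[ib] - pts[ic])
        b = np.linalg.norm(pts[ia] - pts[ic])
        c = np.linalg.norm(pts[ia] - pts[ib])
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5
        if a * b * c / (4.0 * area) < alpha:          # keep triangles with a small circumradius
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                edge_count[tuple(sorted(e))] += 1
    # edges used by exactly one kept triangle lie on the alpha-shape boundary
    return [e for e, n in edge_count.items() if n == 1]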
But for complicated shape reconstruction, I would recommend trying a beta-skeleton based approach, discussed in https://people.eecs.berkeley.edu/~jrs/meshpapers/AmentaBernEppstein.pdf
See more details on β-Skeleton at https://en.wikipedia.org/wiki/Beta_skeleton
Quote from the linked article:
The circle-based β-skeleton may be used in image analysis to reconstruct the shape of a two-dimensional object, given a set of sample points on the boundary of the object (a computational form of the connect the dots puzzle where the sequence in which the dots are to be connected must be deduced by an algorithm rather than being given as part of the puzzle).
it is possible to prove that the choice β = 1.7 will correctly reconstruct the entire boundary of any smooth surface, and not generate any edges that do not belong to the boundary, as long as the samples are generated sufficiently densely relative to the local curvature of the surface
Cheers

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, the object does not necessarily have to be a shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
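A minimal sketch of the integral-image density idea, assuming black-on-white line art (the window size and density threshold are arbitrary, and the filename is a placeholder):

import cv2
import numpy as np

img = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input
binary = (img < 128).astype(np.uint8)                        # 1 where the ink is
# optionally thin wide lines here first (e.g. a skeletonization step) as suggested above

integral = cv2.integral(binary)                              # (H+1, W+1) summed-area table
w = 40                                                       # sliding-window size (guess)
# density of ink pixels inside every w x w window, via 4 table lookups per window
dens = (integral[w:, w:] - integral[:-w, w:]
        - integral[w:, :-w] + integral[:-w, :-w]) / (w * w)
dense_mask = dens > 0.25                                     # arbitrary density threshold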
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
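For example, both transforms applied to a grayscale scan (the filename and every parameter value are guesses that would need tuning):

import cv2
import numpy as np

img = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input
blurred = cv2.medianBlur(img, 5)

# circles (e.g. the voltage-source symbol): returned as (x, y, r) triples
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=80)

# straight wire segments
edges = cv2.Canny(blurred, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)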

How do you calculate the area of a series of random points?

So I'm working on a piece of code to take positional data for an RC crop-duster plane and compute the total surface area traversed (without double-counting any area). I cannot figure out how to calculate the area for a given period of operation.
Given the following table, calculate the area the points cover.
x,y
1,2
1,5
4,3
6,6
3,4
3,1
Any ideas? I've browsed Green's theorem and I'm left without a practical concept with which to code.
Thanks for any advice.
- Build the convex hull from the given points (algorithms are described here; see a very nice Python demo + src).
- Calculate its area (Python code is here).
Someone mathier than me may have to verify the information here, but it looks legit: http://www.wikihow.com/Calculate-the-Area-of-a-Polygon and it is fairly easy to apply in code.
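That page describes the shoelace formula; a direct Python version, assuming the vertices are given in order around the polygon:

def polygon_area(vertices):
    """Shoelace formula; vertices must be ordered around the polygon."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0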
I'm not entirely sure that you're looking for "Surface area" as much as you're looking for Distance. It seems like you want to calculate the distance between one point and the next for that list. If that's the case, simply use the Distance Formula.
If the plane drops a constant width of dust while flying between those points, then the area is simply the distance between those points times the width of the spray.
If your points are guaranteed to be on an integer grid - as they are in your example - (and you really are looking for enclosed area) would Pick's Theorem help?
You will have to divide the complex polygon approximately into standard polygons (triangles, rectangles, etc.) and then find the area of each of them. This is just like regular integration (the only difference is that you have yet to find a formula that approximates your data).
The above applies if you assume that your data forms a closed polygon.
Use QHull to triangulate the region, then sum the areas of the resulting triangles.
Python now conveniently has a library that implements the method Lior provided. https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html will calculate the convex hull for any N-dimensional space and calculate the area/volume for you as well. See the example and the return value attributes towards the bottom of the page for details.
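For example, using the coordinates from the question (note that for 2D input ConvexHull.volume is the enclosed area, while ConvexHull.area is the perimeter):

import numpy as np
from scipy.spatial import ConvexHull

pts = np.array([[1, 2], [1, 5], [4, 3], [6, 6], [3, 4], [3, 1]])
hull = ConvexHull(pts)
print(hull.volume)    # enclosed area (2D)
print(hull.area)      # perimeter (2D)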
