Determine the rotation of 3 planar points - OpenCV - Python

I'm working with OpenCV and I want to track the position and movement of 3 points (LEDs) with my webcam.
What I really want to know is the rotation: whether it's rotating to the left or to the right.
My problem is that I'm only working in 2D; I only capture XY coordinates. That means the motion always looks mirrored whether I turn left or right.
I don't know how to tell the two apart. Since there are 3 points, I'm treating them as a triangle. I know how to calculate angles, area, height, etc., but all of this is useless because every change is symmetrical in both directions.
Is there some function in OpenCV or Python for this? I'm pretty sure it has an easy solution, but my head is about to explode.
I attach some pictures.
I can't attach code because it isn't a code problem; the problem is that I don't know how to approach it.
Things that I have tried:
Comparing the coordinates of the points between the current frame and the previous one
Comparing barycenters
Comparing angles
Comparing the areas of the points (I think the solution may lie here because, although it's 2D, you can see that one of the bottom points looks bigger than the other).
[front][1]
[Turn right][2]
[Turn left][3]
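Building on the area idea in the list above, here is a minimal sketch of how one might act on it (not a tested solution: the file name, the brightness threshold, and the mapping from "bigger blob" to turn direction are all assumptions to calibrate for the actual setup):

import cv2

# Threshold the frame so the three bright LEDs become blobs.
# "frame.png" and the threshold of 200 are placeholders to tune.
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
# Keep the three largest blobs: the LEDs.
leds = sorted(contours, key=cv2.contourArea, reverse=True)[:3]

def centroid(c):
    m = cv2.moments(c)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# The two blobs with the largest y are the bottom LEDs; order them
# left to right by x.
leds.sort(key=lambda c: centroid(c)[1])
bottom = sorted(leds[1:], key=lambda c: centroid(c)[0])

# The LED that appears bigger is closer to the camera, which is the
# asymmetry the 2D coordinates alone cannot give you. Which turn
# direction each case corresponds to is an assumption to verify.
if cv2.contourArea(bottom[0]) > cv2.contourArea(bottom[1]):
    print("left LED closer -> turned one way")
else:
    print("right LED closer -> turned the other way")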
It's my first post, and sorry for my English! Thanks!!

Related

How to determine the orientation of a camera without OpenCV's recoverPose() method?

I want to determine the orientation of the camera for each frame in a video. I'm looking at the cv2.recoverPose() method, but I have two issues with it:
It requires the essential matrix. The only way to find E with OpenCV is by passing 5 points to cv2.findEssentialMat(), which is a lot of points! I would rather use just 2 points to find the orientation. I believe there are other ways of estimating it, but that leads me to my second problem.
These "recovered poses" seem to be estimates and not all that accurate. Maybe I'm wrong. How accurate is it?
One unique thing about my situation is that I know the 3D positions of both the camera's center of projection and any reference points the camera may be looking at. I know what you're thinking: if I have the 3D locations, why can't I determine the orientation? Just assume that it's not feasible to do so directly. I think I could use cv2.projectPoints() or some similar method to determine the orientation of the camera, but I'm not exactly sure how.
Anyone have ideas?
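For what it's worth, since the 3D positions of the reference points are known, cv2.solvePnP() recovers the camera rotation (and translation) from 3D-2D correspondences without touching the essential matrix; note it needs at least four correspondences, not two. A minimal sketch where every number is a made-up placeholder:

import cv2
import numpy as np

# Four known 3D reference points (coplanar here) and the pixels
# where they appear in the frame; all values are placeholders.
object_points = np.array([[0, 0, 0],
                          [1, 0, 0],
                          [1, 1, 0],
                          [0, 1, 0]], dtype=np.float64)
image_points = np.array([[300, 300],
                         [400, 300],
                         [400, 200],
                         [300, 200]], dtype=np.float64)

# Assumed camera intrinsics (focal length and principal point).
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation: world -> camera frame
print(R)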

edge detection of an image and saving cells of a grid

picture example
I have recently started learning Python with the Spyder IDE and I'm a bit lost, so I'm asking for advice.
The thing is that I need to program an algorithm that, given a random image representing a board with black spots on it (in the picture I uploaded it is a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so I can work with them.
I know that OpenCV handles images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand from your question, you need as output the grid dimensions of the matrix in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem:
Use Canny + corner detection to get the intersections of the lines
With the coordinates of the corners you can form your regions of interest, crop each one individually, and save it as a new image
For the grid you can check the X's and the Y's of the coordinates; for example, you will have something like ((50, 30), (50, 35), (50, 40)), and from this you can tell that those 3 points line up on the same grid line. I would encourage you to allow an error margin, as the points might not all be at exactly the same coordinate, but they should not differ by much. A sketch of these steps follows below.
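A rough sketch of the steps above (the input file name and the detector parameters are guesses to tune for your board image):

import cv2
import numpy as np

# Hypothetical input path; replace with your board image.
img = cv2.imread("board.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect grid-line intersections as corners (parameter values are
# assumptions to adjust).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=20)
corners = corners.reshape(-1, 2)

def cluster(values, tol=5):
    # Group nearly-equal coordinates, allowing the error margin
    # mentioned above.
    values = np.sort(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] <= tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [int(np.mean(g)) for g in groups]

xs = cluster(corners[:, 0])
ys = cluster(corners[:, 1])
print(f"Grid is {len(xs) - 1} x {len(ys) - 1}")

# Crop each cell between consecutive grid lines and save it.
for i in range(len(ys) - 1):
    for j in range(len(xs) - 1):
        cell = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        cv2.imwrite(f"cell_{i}_{j}.png", cell)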
Good luck!

What is causing this unusual cull and how can I fix it?

I am an extreme OpenGL novice, just trying to hack something together for a personal project. When I enabled GL_CULL_FACE I mostly got what I wanted, except that a big triangular chunk is now missing from my cube!
What might be happening here and how can I fix it? I made this cube with 6 GL_QUADS, so I never expected to be missing a triangle like this...
Each polygon has a "front" and a "back" side, and when culling is on you only see polygons whose "front" faces the camera. By default, OpenGL decides which side is the front from the winding order of the vertices as projected on screen: counter-clockwise means front-facing.
The fact that this face is getting culled from this angle suggests that it is wound the wrong way, i.e. its normal points into the cube instead of out of it; to flip it around, reverse the order in which you specify its vertices.
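If it helps, here is a minimal sketch using PyOpenGL and GLUT (assuming both are installed) that shows the winding rule in isolation; listing the same four vertices in the reverse order makes the quad disappear once GL_CULL_FACE is on:

from OpenGL.GL import *
from OpenGL.GLUT import *

def draw():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_QUADS)
    # Counter-clockwise as seen by the camera => front face, drawn.
    # Specifying these four vertices in the opposite order would make
    # the quad back-facing, and GL_CULL_FACE would drop it.
    glVertex2f(-0.5, -0.5)
    glVertex2f( 0.5, -0.5)
    glVertex2f( 0.5,  0.5)
    glVertex2f(-0.5,  0.5)
    glEnd()
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutCreateWindow(b"cull demo")
glEnable(GL_CULL_FACE)  # only front-facing polygons are rendered
glFrontFace(GL_CCW)     # the default: counter-clockwise = front
glutDisplayFunc(draw)
glutMainLoop()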

How to get the largest rectangle inside a contour?

I'd like to ask if there's a better or faster alternative way to get the largest rectangle inside an almost rectangular contour.
The rectangle should be aligned to both x and y axis and should be completely inside the rectangular contour. That means it would not contain any external white pixels, yet occupy the largest area in the contour.
Test image is here:
I've tried these two, but I'm looking for a faster and neater way to go about this.
I also tried going through the points of the contour and taking the minimum and maximum points like in here, but of course that just gives similar results to what cv2.boundingRect already does.
Maybe this is a bit of lateral thinking, but looking at your examples and spec: why not fill in the white pixels contiguous with the outside of the bounding box instead (like a 'paint pot' brush in a Paint-type application)?
E.g. (red pixels being the ones you would turn black from white):
You could probably even limit the process to the outer N pixels.
============================
So how might one implement this? It is essentially a version of the "flood fill" algorithm used in pixel graphics programs, except that instead of starting from a single seed pixel you start from every point on the edge of the outside bounding rectangle. You start filling in and build a stack of points you need to come back to, because you can't necessarily follow every area at once and may need to backtrack.
You can look that algorithm up, but a 'pure' version will be very stack-heavy if you push every point you can't follow right now, particularly when starting with the whole boundary of the shape.
I haven't implemented it this way, but my first thought would be to scan from a boundary inwards, taking a whole line of pixels at a time: mark all the 'white' pixels in the first row with a new third colour, then on the next row fill all the white pixels touching the previously marked pixels, and so on. (It doesn't matter whether you mark the changed pixels with a third colour, a mask, an alpha channel, or whatever, but you must be able to tell newly filled-in pixels from the old black ones.)
As you go, you need to check for any 'stranded' areas where you need to work backwards to fill in white areas that are not directly connected to the outside:
Start filling from the edge...
Watch out for stranded areas - if you find one, scan backwards to fill it before going on from where you were (you may need to recurse if your stranded area turns back on itself again, though in your particular application this shouldn't be a huge issue, unlike in some graphics applications)
And continue, not forgetting to fill in from the other edges if required (see the note below), until you come to a row with no further pixels to fill and no more back-filling to do. Then restart from the far side of the image, as you need a backward pass to catch anything on that side.
For a practical implementation there is some thinking to do. Your examples will have a lot of filling at the edges but not much by way of complex internal shapes to follow, which keeps things simple. But you need to work from all 4 sides to do it efficiently - perhaps working inwards as a series of concentric rectangles rather than one side at a time. That means more complexity in the design, but it is massively more efficient in this example.
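For comparison, OpenCV's built-in flood fill can do the whole "fill inwards from the outside" pass in one call, sidestepping the stack management entirely. A sketch, assuming a binary black/white input where "contour.png" is a placeholder name:

import cv2
import numpy as np

img = cv2.imread("contour.png", cv2.IMREAD_GRAYSCALE)
_, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Pad with one white pixel all round so every outside white region
# is connected to a single seed at (0, 0).
padded = cv2.copyMakeBorder(img, 1, 1, 1, 1,
                            cv2.BORDER_CONSTANT, value=255)

# Flood-fill the outside white area with black; interior white
# pixels survive because they are not connected to the border.
mask = np.zeros((padded.shape[0] + 2, padded.shape[1] + 2), np.uint8)
cv2.floodFill(padded, mask, (0, 0), 0)

result = padded[1:-1, 1:-1]  # strip the padding again
cv2.imwrite("filled.png", result)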
Food for thought anyhow.

Modify polygons so that they don't overlap and area stays the same

I have a set of polygons and they can overlap with each other, like this:
I want to modify them in such a way that they don't overlap and the resulting surface area stays the same. Something like this:
It is okay if the shape or the position changes. The main thing is that they should not overlap with each other and the area should not change much (I know the area changed a little in the second image, but I drew it manually, so let's just assume that the areas did not change).
I am trying to do it programmatically with the help of Python. Basically I stored polygons in a PostGIS database and with the help of a script I want to retrieve them and modify them.
I am very new to GIS and thus this seems like a difficult task.
What is the correct way of doing it? Is there an algorithm that solves this kind of problem?
Take a look at ST_Buffer and try passing a negative float as the second argument (the number of degrees to reduce the radius by):
SELECT ST_Buffer(the_geom, -0.01) AS geom
Be careful with negative buffers, as you could run into issues if the buffer size exceeds the radius; see here.
Here is what I did:
Iterated over all the polygons and found the overlapping ones. Then I moved each offending polygon in different directions and picked the best direction by calculating which move left the minimum overlapping area. I then simply kept moving the polygon in that best direction until there was no overlapping area. A sketch of this idea follows below.
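A minimal sketch of that approach with Shapely (the two polygons are made-up examples; in the real script they would come from PostGIS). Translation preserves each polygon's area, which satisfies the constraint in the question:

from shapely.geometry import Polygon
from shapely.affinity import translate

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(3, 1), (7, 1), (7, 5), (3, 5)])

directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
step = 0.5

# Naive loop: try one step in each direction and keep the move that
# leaves the smallest remaining overlap, until the overlap is gone.
while a.intersection(b).area > 0:
    b = min(
        (translate(b, dx * step, dy * step) for dx, dy in directions),
        key=lambda p: a.intersection(p).area,
    )

print(b.bounds)  # b has been shifted until it no longer overlaps a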
