How to determine the orientation of a camera without OpenCV's recoverPose() method? - python

I want to determine the orientation of the camera for each frame in a video. I'm looking at the cv2.recoverPose() method, but I have two issues with it:
It requires the essential matrix. The only way to find E with OpenCV is by passing at least 5 points to cv2.findEssentialMat(), which is a lot of points! I would rather use just 2 points to find the orientation. I believe there are other ways of estimating it, but that leads me to my second problem.
These "recovered poses" seem to be estimates and not all that accurate. Maybe I'm wrong; how accurate is it?
One unique thing about my circumstance is that I know the 3D position of both the camera's center of projection and any reference points that the camera may be looking at. I know what you're thinking: if I have the 3D location, why can't I determine the orientation? Just assume that it's not reasonable to do so. I think that I could use cv2.projectPoints() or some similar method to determine the orientation of the camera, but I'm not exactly sure how.
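For concreteness, this is the kind of thing I imagine, a minimal sketch using cv2.solvePnP (the point arrays and intrinsics below are made-up placeholders):

import numpy as np
import cv2

# Known 3D reference points in world coordinates (placeholder values,
# coplanar so solvePnP's default method works with only 4 points).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)

# Where those points appear in the current frame, in pixels (placeholders).
image_points = np.array([[300.0, 260.0],
                         [420.0, 255.0],
                         [425.0, 150.0],
                         [305.0, 145.0]], dtype=np.float64)

# Camera intrinsics; assuming a calibrated camera with no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix: world -> camera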
Anyone have ideas?

Related

Flickering depth map OpenCV

After some attempts I managed to get a pretty accurate disparity map of the scene I am filming with my stereo camera: after rectifying and calibrating the cameras, I computed the disparity with OpenCV's StereoSGBM function. I also applied a Weighted Least Squares (WLS) filter to the final result, which gave me something much more homogeneous and nicer looking:
However, there is still a "flickering" in the depth map that needs to be fixed: stable objects change their depth grey value from frame to frame, making the information unreliable. I read that this is a common problem but have not found a way to solve it.
The depth map recalculates depths for each frame, while something time-consistent is needed. Any idea how to solve this?
I don't know if you found a solution for this, but I'm experiencing a similar problem. What I understand so far is that this "flickering" is mainly caused by the normalization of the depth values.
In my case I noticed that when there are blobs, the upper and lower depth values span a wide range, and this leads to a different normalization in every frame. How do you normalize the depth map? This can be relevant!
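For example, normalizing each frame by its own min/max (e.g. cv2.normalize with NORM_MINMAX) rescales the grey values whenever the frame's extremes change, which looks like flickering; normalizing against a fixed disparity range keeps a given depth at the same grey value across frames. A minimal sketch (the disparity array and range values are stand-ins):

import numpy as np
import cv2

# Stand-in for a disparity map from stereo.compute() (float values).
disparity = np.random.uniform(0.0, 96.0, (240, 320)).astype(np.float32)

# Per-frame normalization: rescales against this frame's min/max,
# so grey values jump between frames -> flickering.
flicker = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Fixed-range normalization: map a constant disparity interval to 0-255.
MIN_DISP, MAX_DISP = 0.0, 128.0  # assumed known range for your rig
stable = np.clip((disparity - MIN_DISP) / (MAX_DISP - MIN_DISP), 0.0, 1.0)
stable = (stable * 255).astype(np.uint8)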
Another thing I suggest you investigate is the parameters of the stereo algorithm. StereoSGBM has a lot of parameters to play with; try different combinations of them.
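For reference, these are the main knobs I mean; the values below are just a common starting point, not a recommendation:

import cv2

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # search range; must be divisible by 16
    blockSize=5,               # matching window size (odd)
    P1=8 * 3 * 5 ** 2,         # smoothness penalty for small disparity steps
    P2=32 * 3 * 5 ** 2,        # smoothness penalty for large disparity steps
    disp12MaxDiff=1,           # left-right consistency tolerance
    uniquenessRatio=10,        # reject ambiguous matches
    speckleWindowSize=100,     # filter out small disparity blobs
    speckleRange=2,
)
# disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0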
P.S.: If you find a solution for this, I would be more than happy to know how you figured it out; if you can share it, I would appreciate it. These are just some ideas and starting points that are, in my opinion, the major causes.

Know rotation of 3 planar points - OpenCV

I'm working with OpenCV and I want to track the position and movement of 3 points (LEDs) with my webcam.
What I really want to know is the rotation: whether it's rotating to the left or to the right.
My problem is that I'm only working in 2D; I just capture XY coordinates. That means I always get a mirrored behavior when it turns left or right.
I don't know how to tell these apart. Since they are 3 points, I treat them as a triangle; I know how to calculate angles, area, height, etc., but all of this is useless because the changes are always symmetrical in both directions.
Is there some function in OpenCV or Python for this? I'm pretty sure it has an easy solution, but my head is going to explode.
I attach some pictures.
I can't attach code because it isn't a code problem; the problem is that I don't know how to approach it.
Things that I have tried:
Compare the coordinates of the points in the current frame and the previous one
Compare barycentres
Compare angles
Compare the areas of the points (I think the solution may be here, because even though it's 2D, you can see one of the bottom points looking bigger than the other; a rough sketch of this idea follows).
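A minimal sketch of that area comparison, assuming the three LED blobs have already been found as contours (e.g. via cv2.findContours on a thresholded frame; the helper name is hypothetical):

import cv2

def nearer_bottom_led(contours):
    """Compare the apparent sizes of the two bottom LED blobs.

    The nearer LED should look bigger, which breaks the left/right
    mirror symmetry. Mapping "nearer" to "turned left" or "turned
    right" still depends on the actual geometry of the setup.
    """
    blobs = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        blobs.append((cy, cx, cv2.contourArea(c)))
    # The two blobs lowest in the image (largest y) are the bottom pair.
    bottom = sorted(blobs)[-2:]
    left, right = sorted(bottom, key=lambda b: b[1])  # order by x
    if left[2] > right[2]:
        return "left LED looks nearer"
    if right[2] > left[2]:
        return "right LED looks nearer"
    return "roughly facing front"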
[Images: front, turn right, turn left]
It's my first post, so sorry for my English! Thanks!!

Get 3D coordinates from two 2D frames

I couldn't find a proper answer to my problem on the Web, so I'll ask it here. Let's say we're given two 2D photos of the same place taken from slightly different angles. I've chosen a set of points (edge detection) and found the correspondences between them (which point on one photo is which on the other). Now I need to somehow find the world coordinates of these points in 3D.
For the last 5 hours I've read a lot about it, but I still can't understand what steps I should follow. I've tried to estimate the motion of the camera by applying recoverPose to an essential matrix and the two sets of points on each frame. I can't understand what the rotation and translation matrices that recoverPose returned actually give me. What should I do in order to achieve my goal?
I also know the calibration matrix of my camera (I use the KITTI dataset). I've read the OpenCV documentation but still don't understand.
It's monocular vision.
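For context, a minimal sketch of the standard monocular pipeline around recoverPose (the point arrays below are placeholders for matched features, and the intrinsics are only KITTI-like values, an assumption here). Note that with a single camera the translation, and hence the triangulated points, are only recovered up to an unknown scale:

import numpy as np
import cv2

# Placeholders: in practice pts1/pts2 come from your feature matching
# and K from the KITTI calibration files.
rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 640, (50, 2)).astype(np.float64)
pts2 = pts1 + rng.uniform(-2, 2, (50, 2))
K = np.array([[718.8, 0.0, 607.2],
              [0.0, 718.8, 185.2],
              [0.0, 0.0, 1.0]])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Projection matrices: first camera at the origin, second at [R|t].
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Triangulate (inputs are 2xN), then dehomogenize to Nx3 points.
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # up to scale, in the first camera's frame

The R and t from recoverPose relate the two camera positions; stacking them into projection matrices is what lets cv2.triangulatePoints turn the 2D matches into 3D coordinates.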

How to determine and extract the surface points of a 3D object?

I have a 3D object which is not hollow, so there are many 3D points. How would you determine which of the points of such an object (especially one with a very curvaceous surface) are on the surface? I understand how to extract them, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know of.
Or, better, an understanding of what is considered to be a surface point, which I don't know either and for which I couldn't (yet) develop any proper definition myself.
I know I can do triangulation to get the surface. But I don't know what to do next, as I will then be left with a set of triangles, some of which are on the surface and some of which are not... but again, I have no definition of what counts as being on the surface and what does not...
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) to do that. One of the efficient ones is Quickhull. There's a ConvexHull function in scipy.spatial.
Here are the details, with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There's also a Wikipedia page (again, this is not the only method to compute convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
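For completeness, a minimal usage sketch (random points standing in for your object's point set):

import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(500, 3)          # stand-in 3D point cloud

hull = ConvexHull(points)
surface_points = points[hull.vertices]   # points lying on the hull
triangles = hull.simplices               # triangular facets (index triples)

print(f"{len(surface_points)} of {len(points)} points are on the hull")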
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there's no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent from Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result of scipy.spatial.Voronoi like this (as suggested here):
from scipy.spatial import Voronoi

vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # Ignore ridges with a vertex at infinity (marked by index -1).
    if all(x >= 0 for x in vpair):
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            # create_line is my Maya-side helper that draws a segment.
            create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm):
Am I missing something? Can anyone suggest the proper pipeline and the algorithms I have to implement to create such patterns?
I've seen your question since you posted it, but I didn't have a real answer for you. However, since I see you still haven't received any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to the one in your second image; that part seems fine. I don't know how SciPy handles this, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you'll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
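In SciPy's case, a vertex index of -1 in vor.ridge_vertices marks an end at infinity. A minimal sketch that separates those ridges out; clipping them instead would require deriving the outgoing direction from vor.ridge_points:

import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(30, 3)   # stand-in input points
vor = Voronoi(points)

finite, infinite = [], []
for vpair in vor.ridge_vertices:
    # -1 means this ridge has a vertex at infinity.
    (infinite if -1 in vpair else finite).append(vpair)

print(f"{len(finite)} finite ridges, {len(infinite)} reaching infinity")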
Points generation:
The Poisson disc is fine as some sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radii.
When you create a Voronoi diagram out of them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries on the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular, ball-like mesh. You can also use similar points but add some random offset to control the level of surface irregularity (see the sketch after these steps).
Take your computed diagram and, for each edge, create a few points along that edge - this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region, to get a result similar to your image.
Take the points from both stages and compute the diagram once more.
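A minimal sketch of the two-sphere input idea (the counts, radii, and jitter below are arbitrary choices):

import numpy as np

def sphere_points(n, radius, jitter=0.0, rng=None):
    """Roughly uniform points on a sphere, with optional random offset."""
    rng = rng or np.random.default_rng()
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v *= radius
    if jitter:
        v += rng.normal(scale=jitter, size=v.shape)
    return v

rng = np.random.default_rng(0)
outer = sphere_points(200, 1.0, jitter=0.02, rng=rng)
inner = outer * 0.8              # scaled copy -> regular 'ball' boundary
points = np.vstack([outer, inner])
# ... feed `points` into Voronoi(points) as in the question's snippet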
Mesh generation:
So far this doesn't look like your target images yet. In fact it may be really hard to achieve with production quality (for a Maya plugin), but I see some tricks that may help.
What I would try first is to take all the edges and extrude some circle along them. You may modulate the circle size to make it slightly bigger at the ends. Then do a Boolean 'OR' between all those meshes and apply some Mesh Smooth at the end.
This may give you similar results, but you'll need to be careful at the mesh intersections; they can get ugly and need some special treatment.
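A sketch of the tube-extrusion geometry for one edge, in plain NumPy rather than any Maya API (the radii and segment count are arbitrary; capping and the Boolean union are left out):

import numpy as np

def tube_along_edge(p0, p1, r0=0.05, r1=0.05, segments=8):
    """Vertex rings of an open tube from p0 to p1 (radii may differ).

    Returns (ring0, ring1): two (segments, 3) arrays; connecting
    matching ring vertices with quads gives the tube's side wall.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal frame (u, v) perpendicular to the edge.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(axis @ helper) > 0.9:        # edge nearly parallel to helper
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    ang = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    circle = np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v)
    return p0 + r0 * circle, p1 + r1 * circle

ring0, ring1 = tube_along_edge([0, 0, 0], [0, 0, 1], r0=0.05, r1=0.07)

Making one end radius slightly larger (r1 > r0 here) gives the thickened joints mentioned above.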
