Straight Edges Curved After Triangulation OpenCV - python

I am working on a project using OpenCV to create a 3D image from 2D images. Currently I am just working with two images to simplify things.
What I am confused about is that my 4D (homogeneous) triangulated points currently appear to make some sense given my point matches. Despite this, some points that lie on straight surfaces are being mapped onto curved lines. Is there a reason for this? I assume it could be because the object at these points is relatively far from the camera, which I imagine would degrade the precision of the triangulation. Is this intuition correct, and will the problem go away if I take more images? What is also interesting is that when I reproject the 4D points back to 3D, they are no longer curved.
Below are my images showing the coordinate axes, matched points, 4D reconstruction, and then the re-projected 3D results.
The 3D triangulated points make some sense to me. My guess is that the curved surface is the front of the fish tank. The blob almost along the negative x-axis with negative z-coordinates is from the bottle. The rest of the points appear to be from the checkerboard grid/noise.
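A likely explanation, consistent with the observation that the curvature disappears after conversion back to 3D: cv2.triangulatePoints returns 4-row homogeneous coordinates, and plotting the raw (x, y, z) rows without dividing by the fourth component w applies a different projective scale to each point, which can bend a straight line. A minimal numpy sketch of the effect, with made-up data (three collinear points given different w values):

```python
import numpy as np

# Hypothetical 4xN homogeneous points, in the layout cv2.triangulatePoints
# returns: rows are (x*w, y*w, z*w, w). The true 3D points here are
# (0,1,0), (1,1,0), (2,1,0) -- perfectly collinear -- but each column
# carries a different projective scale w.
w = np.array([1.0, 2.0, 4.0])
true_pts = np.array([[0.0, 1.0, 2.0],
                     [1.0, 1.0, 1.0],
                     [0.0, 0.0, 0.0]])
pts_4d = np.vstack([true_pts * w, w])

# Plotting the raw (x, y, z) rows "bends" the line:
raw = pts_4d[:3]          # columns (0,1,0), (2,2,0), (8,4,0) -- not collinear

# Dividing by w (what cv2.convertPointsFromHomogeneous does) recovers it:
pts_3d = pts_4d[:3] / pts_4d[3]

# Collinearity check on the dehomogenized points:
v1 = pts_3d[:, 1] - pts_3d[:, 0]
v2 = pts_3d[:, 2] - pts_3d[:, 0]
print(np.allclose(np.cross(v1, v2), 0))
```

Distant points do also triangulate less precisely (small baseline-to-depth ratio), and more views help average that error down, but the systematic "curving" of straight edges in the 4D plot is the projective-scale artifact above, not noise.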

Related

Get a transparent colormap in 3D with Python

I have been looking for a Python library that would allow me, from a set of points in 3D and an RGB color associated with each point, to get a transparent surface (i.e., one with some degree of transparency). The idea would be to be able to display (and manipulate/rotate) things similar to the image below:
The atoms and bonds are inside a smooth 3D surface constructed from a series of points, each with an RGB color.
I could get a rough Poisson reconstruction running with Mayavi, but the colors appeared very pixelated and I couldn't find a way to make the surface transparent. I could obtain a lot of the features I wanted with Open3D (I actually place these objects inside crystal structures, so I need to represent bonds, atoms, crystal edges, axes and so on), but here again I couldn't find a Poisson reconstruction algorithm to recreate the smooth surface from the points, nor any functionality to make a surface transparent. Any suggestion would be appreciated.
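For the transparency part alone (not the Poisson reconstruction), matplotlib's plot_trisurf accepts an alpha value, which may be enough for a quick look at the data. A minimal sketch with a synthetic point set standing in for the real one (the bumpy-dome data and file name are made up):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive rotation
import matplotlib.pyplot as plt

# Synthetic stand-in for the point set: a bumpy dome sampled at random (x, y).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
z = np.exp(-(x**2 + y**2))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# alpha < 1 makes the triangulated surface transparent; cmap colors it by height.
surf = ax.plot_trisurf(x, y, z, cmap="viridis", alpha=0.4, linewidth=0)
fig.savefig("transparent_surface.png")
```

Note that plot_trisurf triangulates in the (x, y) plane, so it only suits height-field-like surfaces; for a closed surface around a molecule a genuine mesh reconstruction would still be needed.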

Calculate 3D Plane that Rests on a 3D Surface

I have about 300,000 points defining my 3D surface. I would like to know: if I dropped an infinitely stiff sheet onto this surface, what would the equation of the resulting plane be? I know I need to find the 3 points the sheet would rest on, as those define a plane, but I'm not sure how to find those 3 points out of the ~300,000. You can assume this 3D surface is very bumpy and that the sheet will most likely lie on 3 "hills".
Edit: Some more background. This is point-cloud data from a scan of a nearly flat 3D surface. What I would like to know is how this object would rest if I flipped it over and put it on a completely flat surface. I realize the object may be able to rest on the table in several different ways depending on its density and thickness, but you can assume the number of ways is finite, and I would like to know all of them just in case.
Edit: After looking at some point-cloud libraries, I'm thinking of computing the curvature using a k-d tree (via SciPy), keeping only the regions with negative curvature (there should be 3 or more of them), and then using some combinatorics and iteration over those regions to find the correct 3 points for the plane(s).
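An alternative to the curvature search: a rigid sheet dropped from above can only come to rest on a face of the point cloud's upper convex hull, so scipy.spatial.ConvexHull reduces the ~300,000 candidates to a handful of triangles. A sketch on synthetic stand-in data (the bumpy height field below is made up):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Synthetic stand-in for the scanned surface: a bumpy, nearly flat height field.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(3000, 2))
z = 0.2 * np.sin(xy[:, 0]) * np.cos(xy[:, 1]) + 0.05 * rng.standard_normal(3000)
pts = np.column_stack([xy, z])

hull = ConvexHull(pts)

# Keep only upward-facing hull triangles: a sheet dropped from above can
# only rest on a face whose outward normal has a positive z component.
up_faces = [s for s, eq in zip(hull.simplices, hull.equations) if eq[2] > 0]

# Each face gives a candidate plane through three of the original points.
p0, p1, p2 = pts[up_faces[0]]
normal = np.cross(p1 - p0, p2 - p0)
if normal[2] < 0:                 # orient the normal upward
    normal = -normal
d = -normal.dot(p0)               # plane: normal . x + d = 0
print(len(up_faces), "candidate resting planes")
```

For the flipped-over resting question, the same idea applies to the lower hull (eq[2] < 0); stability of each candidate would additionally require the projection of the center of mass to fall inside the contact triangle.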

How to plot 2-D navigation data points on a floor map using Matlab/Python?

I want to show the tracking result of my indoor localization algorithm with respect to the ground-truth reference path on the floor map. The floor plan and the walking route representing the ground truth are as follows:
Here, the red line is the ground-truth route. The horizontal side of the image represents the x-axis and is compressed to save space (the original x-axis extent is much larger). The vertical side is the y-axis and is drawn to scale.
I want to draw the localization estimation points (2-D) on it. I tried to do it using Origin. I got the following image.
As seen in the figure above, the plot does not match the floor plan precisely (using a log scale on the y-axis compresses it, but that does not yield a complete solution in my case).
To summarize:
What I have: (a) A set of 2-D coordinate points from each localization algorithm (I'm comparing my method with two other methods, so there are 3 sets of 2-D coordinate points) and (b) a floor plan image.
What I want: To plot the sets of 2-D coordinate points on the floor plan image.
If anyone could drop a sample Matlab/python code to plot the 2-D coordinates, I'd highly appreciate it.
Thank you.
To plot on top of an image, you have to provide the necessary scaling information. This can be achieved using the image function, passing x, y and C: https://de.mathworks.com/help/matlab/ref/image.html?s_tid=doc_ta
I don't know how your floor plan is scaled, but the resulting code should be something like:
image(x,y,C) % x and y provide the scaling information, C is the image.
hold on
plot(...) % code you already have
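The Python equivalent uses matplotlib, where the extent parameter of imshow plays the same role as the x/y arguments to MATLAB's image. A sketch with made-up floor dimensions and track coordinates (replace the synthetic image with plt.imread of your floor plan):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

# Stand-in for the floor-plan image; replace with plt.imread("floorplan.png").
floor = np.ones((200, 600, 3))

fig, ax = plt.subplots()
# extent=(x_min, x_max, y_min, y_max) maps image pixels to floor coordinates,
# so tracks in the same units (e.g. metres) overlay correctly.
ax.imshow(floor, extent=(0, 60, 0, 20), origin="lower")

# Hypothetical estimated track from one localization algorithm;
# repeat ax.plot for each of the three methods being compared.
track = np.array([[5, 5], [15, 6], [25, 10], [40, 12]])
ax.plot(track[:, 0], track[:, 1], "r-o", label="method 1")
ax.legend()
fig.savefig("overlay.png")
```

With origin="lower" the y-axis increases upward, matching ordinary floor coordinates rather than image row indices.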

How to create the vision of a 2D camera looking at a 2D world

In a 3D world, a camera looking out will create a 2D representation of what it sees, AKA a picture. Looking at a 3D cube would produce this 2D image:
However, inside a 2D world, a camera looking out will create a 1D line of pixels that will represent what it sees.
The camera in this scene:
Would produce this 1D "image":
Think of the camera looking out. It would be unable to see all of the pink shape, because most of it is obstructed by the red shape. It would only see the part that is unobstructed. Also, objects further away appear smaller.
How can I create the view of a 2D camera looking out at a 2D world, creating a 1D image?
I am looking for a method, preferably in Python, to accomplish this. I am trying to do this for some 2D creatures I am simulating. I want their input to be 1D array that represents their view of the world.
Pick a focal point at the creature. Draw a circle around it. Subdivide the circle for each pixel in your image resolution, and cast a ray from the focal point to the circle point and out into the world. Find the points of intersection between that ray and the world objects. Get the color of the closest one to set the color in the image at that pixel. Repeat for each pixel.
This gives a 360-degree perspective view, which might simplify things for a simulated creature. If you want a more directional view, just use an arc instead of a circle. 45 degrees seems reasonable.
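The ray-casting recipe above can be sketched in numpy, with the world modeled as colored line segments (the wall data and function names here are hypothetical):

```python
import numpy as np

def ray_segment_t(origin, direction, a, b):
    """Distance along the ray to segment a-b, or None if no hit."""
    v1 = origin - a
    v2 = b - a
    v3 = np.array([-direction[1], direction[0]])  # perpendicular to the ray
    denom = v2.dot(v3)
    if abs(denom) < 1e-12:
        return None                               # ray parallel to segment
    t = (v2[0] * v1[1] - v2[1] * v1[0]) / denom   # distance along the ray
    u = v1.dot(v3) / denom                        # position along segment, 0..1
    return t if t >= 0 and 0 <= u <= 1 else None

def render_1d(origin, fov, heading, n_pixels, segments):
    """Cast one ray per pixel; each pixel gets the colour of the nearest hit."""
    image = [(0, 0, 0)] * n_pixels                # background colour
    angles = heading + np.linspace(-fov / 2, fov / 2, n_pixels)
    for i, ang in enumerate(angles):
        d = np.array([np.cos(ang), np.sin(ang)])
        best = np.inf
        for a, b, colour in segments:
            t = ray_segment_t(origin, d, np.asarray(a, float), np.asarray(b, float))
            if t is not None and t < best:        # keep the closest object
                best, image[i] = t, colour
    return image

# A red wall directly in front of a camera at the origin looking along +x.
world = [((5.0, -10.0), (5.0, 10.0), (255, 0, 0))]
view = render_1d(np.array([0.0, 0.0]), np.radians(45), 0.0, 9, world)
```

Because nearby objects subtend more rays, the "further away appears smaller" effect falls out of the angular sampling for free; to also give the creatures depth perception you could return best alongside the colour.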

Get 3D coordinates from two 2D frames

I couldn't find a proper answer to my problem on the web, so I'll ask it here. Let's say we're given two 2D photos of the same place taken from slightly different angles. I've chosen a set of points (via edge detection) and found correspondences between them (which point on one photo matches which on the other). Now I need to somehow find the world coordinates of these points in 3D.
For the last 5 hours I've read a lot about it, but I still can't understand which steps I should follow. I've tried to estimate the motion of the camera using the function recoverPose applied to an essential matrix and the two sets of points on each frame. I can't understand what it gives me once I know the rotation and translation matrices (that recoverPose returned). What should I do in order to achieve my goal?
I also know the calibration matrix of my camera (I use the KITTI dataset). I've read the OpenCV documentation but still don't understand.
It's monocular vision.
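The R and t from recoverPose are exactly what is needed for the next step: build the two projection matrices P1 = K[I|0] and P2 = K[R|t], then triangulate each correspondence (in OpenCV, cv2.triangulatePoints followed by division by the homogeneous coordinate). A numpy-only sketch of the same linear (DLT) triangulation, on synthetic data (the intrinsics, pose, and 3D point below are made up):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2 (pixels)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize

# Made-up calibration and a small sideways camera motion.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([[-1.0], [0.0], [0.0]])                 # world->cam2, as recoverPose returns
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])    # first camera at the origin
P2 = K @ np.hstack([R, t])

# A known 3D point, projected into both views to fake the matched pixels.
X_true = np.array([0.5, -0.2, 8.0])
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

One caveat for monocular vision: the t from recoverPose is only known up to scale (it is returned as a unit vector), so the reconstruction is in arbitrary units unless you fix the scale from outside knowledge (for KITTI, e.g. the known distance travelled between frames).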
