Get a transparent colormap in 3D with Python

I have been looking for a Python library that, given a set of 3D points with an RGB color associated with each point, would let me build a transparent surface (i.e., one with some degree of transparency). The idea is to be able to display (and manipulate/rotate) something similar to the image below:
The atoms and bonds are inside a 3D surface that is smooth and constructed from a series of points, each with an RGB color.
I could get a rough Poisson reconstruction running with Mayavi, but the colors appeared very pixelated and I couldn't find a way to make the surface transparent. I could obtain many of the features I need with Open3D (I actually place these objects inside crystal structures, so I need to represent bonds, atoms, crystal edges, axes, and so on), but here again I couldn't find a Poisson reconstruction algorithm to recreate the smooth surface from the points, nor any functionality to make a surface transparent. Any suggestion would be appreciated.
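A minimal sketch of one possible route, assuming a recent Open3D release (roughly 0.13 or later) that exposes Poisson surface reconstruction and a transparency shader in the newer rendering API; the point and color arrays below are random placeholders for the real data:

    import numpy as np
    import open3d as o3d

    # Placeholder data standing in for the real points and RGB colors.
    points = np.random.rand(2000, 3)
    colors = np.random.rand(2000, 3)          # RGB values in [0, 1]

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    pcd.estimate_normals()                    # Poisson needs oriented normals
    pcd.orient_normals_consistent_tangent_plane(30)

    # Smooth surface from the points; recent Open3D versions interpolate the
    # point colors onto the mesh vertices.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    mesh.compute_vertex_normals()

    # Transparency via the newer rendering API (shader and class names assumed
    # from recent releases; the legacy viewer has no per-mesh alpha).
    mat = o3d.visualization.rendering.MaterialRecord()
    mat.shader = "defaultLitTransparency"
    mat.base_color = [1.0, 1.0, 1.0, 0.5]     # the last value is the alpha
    o3d.visualization.draw([{"name": "surface", "geometry": mesh, "material": mat}])

Mayavi's mlab.triangular_mesh also accepts an opacity keyword, so if the Poisson step is done elsewhere, the resulting mesh can be rendered semi-transparently there as well.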

Related

Straight Edges Curved After Triangulation OpenCV

I am working on a project using OpenCV to create a 3D image from 2D images. Currently I am just working with two images to simplify things.
What confuses me is that my 4D triangulated points mostly appear consistent with my point matches, yet some points that lie on straight surfaces are being mapped onto curved lines. Is there a reason for this? I assume it could be because the object at these points is relatively far from the camera, which I imagine degrades the precision of the triangulation. Is this intuition correct, and if I take more images, will this problem eventually resolve itself? What is also interesting is that when I reproject the 4D points back to 3D, they are no longer curved.
Below are my images showing the coordinate axes, matched points, 4D reconstruction, and the re-projected 3D results.
The 3D triangulated points make some sense to me. My guess is that the curved surface is the front of the fish tank. The blob which lies almost along the negative x-axis with negative z-coordinates is from the bottle. The rest of the points appear to come from the checkerboard grid/noise.
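One hedged explanation, consistent with the observation that the re-projected 3D points come out straight: cv2.triangulatePoints returns a 4xN array of homogeneous coordinates, and plotting those values directly, without dividing by the w component, introduces a projective distortion that can bend straight edges. A small self-contained sketch (the camera matrices and matched points below are synthetic placeholders):

    import cv2
    import numpy as np

    # Synthetic stand-ins for the real data: a simple stereo pair with a known
    # 3D point projected into both views, so triangulatePoints has valid input.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera 1
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])    # camera 2

    X = np.array([[0.2, -0.1, 2.0, 1.0]]).T              # true 3D point (homogeneous)
    pts1 = (P1 @ X)[:2] / (P1 @ X)[2]                    # 2x1 image point, view 1
    pts2 = (P2 @ X)[:2] / (P2 @ X)[2]                    # 2x1 image point, view 2

    points4d = cv2.triangulatePoints(P1, P2, pts1, pts2) # 4xN homogeneous output

    # Divide by the homogeneous w before plotting; the raw 4D values are only
    # defined up to scale, so visualising them directly warps straight edges.
    points3d = (points4d[:3] / points4d[3]).T            # Nx3 Euclidean points
    print(points3d)                                      # recovers ~[0.2, -0.1, 2.0]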

Calculate 3D Plane that Rests on a 3D Surface

I have about 300,000 points defining my 3D surface. I would like to know, if I dropped an infinitely stiff sheet onto this surface, what the equation of the resulting plane would be. I know I need to find the 3 points the sheet would rest on, since those define the plane, but I'm not sure how to find those 3 points out of the ~300,000. You can assume the surface is very bumpy and that the sheet will most likely rest on 3 "hills".
Edit: Some more background. This is point-cloud data from a scan of a 3D surface which is nearly flat. What I would like to know is how this object would rest if I flipped it over and put it on a completely flat surface. I realize the object may be able to rest on the table in several different ways depending on its density and thickness, but you can assume the number of ways is finite, and I would like to know all of them just in case.
Edit: After looking at some point-cloud libraries, I'm thinking of computing the curvature using a k-d tree (via SciPy) and keeping only the regions with negative curvature; there should be 3+ such regions, so some combinatorics plus iteration should give the correct 3 points for the plane(s).
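A hedged alternative to the curvature idea: a rigid flat sheet (or a flat table, once the object is flipped over) can only touch the scan at vertices of the point cloud's convex hull, so the candidate contact planes are exactly the hull facets. A rough SciPy sketch, with synthetic points standing in for the ~300,000 scanned ones:

    import numpy as np
    from scipy.spatial import ConvexHull

    # Synthetic stand-in for the scan: a bumpy, nearly flat surface.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(5000, 2))
    z = 0.05 * np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + 0.01 * rng.normal(size=5000)
    points = np.column_stack([xy, z])

    hull = ConvexHull(points)

    # hull.equations has one row [nx, ny, nz, d] per facet, with an outward unit
    # normal, so the facet's plane is nx*x + ny*y + nz*z + d = 0.
    candidates = []
    for simplex, (nx, ny, nz, d) in zip(hull.simplices, hull.equations):
        if nz > 0.9:              # keep facets facing up, toward the dropped sheet
            candidates.append((simplex, (nx, ny, nz, d)))

    # Each candidate gives the indices of its support points and its plane
    # equation. A further stability test (does the sheet's weight, or the flipped
    # object's centroid, project inside the support triangle?) would narrow these
    # down to the ways the object can actually rest.
    for simplex, eq in candidates[:5]:
        print("support point indices:", simplex, "plane:", np.round(eq, 4))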

Aruco marker pose estimation on curved surface

I assume the pose estimate from aruco markers is only valid when they are affixed to a flat surface. What about attaching them to a slightly curved surface? Is there any way to correct for the surface curvature in the resulting pose?
Yes, you should be able to get a pose estimate for a curved surface using an Aruco board, though it may be physically difficult to construct and measure. Aruco boards do not need to be planar; they can describe any arrangement of markers in 3D space. So the following steps should work (a rough code sketch follows below):
attach markers to your curved surface (which may be a challenge if the surface is not developable).
calculate, or directly measure, the 3D positions of the physical markers' corners in your preferred Cartesian coordinate system.
define an Aruco board using the markers' 3D corner positions and IDs.
tune the Aruco detection parameters (at least adaptive threshold and polygonal approximation) to give robust marker detection in the presence of curvature in the marker edges and localised lighting variations due to the curved surface.
once marker detection is reliable, use estimatePoseBoard() to get the pose estimate of the board, and hence the curved surface, in the same Cartesian coordinate system. estimatePoseBoard() finds a best-fit solution for the pose while considering all visible markers simultaneously.
Note: I haven't actually tried this.
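A minimal sketch of the steps above, assuming the pre-4.7 cv2.aruco API (newer OpenCV releases moved these calls to cv2.aruco.ArucoDetector and cv2.aruco.Board); the corner coordinates, IDs, calibration values, and file name are all placeholders:

    import cv2
    import numpy as np

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    # Measured 3D corner positions of each physical marker on the curved surface:
    # one 4x3 float32 array per marker (top-left, top-right, bottom-right,
    # bottom-left) in your chosen Cartesian frame, in consistent units.
    obj_points = [
        np.array([[0.00, 0.00, 0.000], [0.04, 0.00, 0.002],
                  [0.04, 0.04, 0.002], [0.00, 0.04, 0.000]], dtype=np.float32),
        np.array([[0.06, 0.00, 0.005], [0.10, 0.00, 0.010],
                  [0.10, 0.04, 0.010], [0.06, 0.04, 0.005]], dtype=np.float32),
    ]
    ids = np.array([[0], [1]], dtype=np.int32)
    board = cv2.aruco.Board_create(obj_points, dictionary, ids)

    params = cv2.aruco.DetectorParameters_create()
    params.adaptiveThreshWinSizeMax = 45          # tune for local lighting changes
    params.polygonalApproxAccuracyRate = 0.05     # tolerate slightly curved edges

    image = cv2.imread("frame.png")               # placeholder capture of the markers
    if image is None:
        raise SystemExit("replace frame.png with a real image")
    camera_matrix = np.eye(3)                     # use your real calibration here
    dist_coeffs = np.zeros(5)

    corners, found_ids, _ = cv2.aruco.detectMarkers(image, dictionary, parameters=params)
    if found_ids is not None and len(found_ids) > 0:
        used, rvec, tvec = cv2.aruco.estimatePoseBoard(
            corners, found_ids, board, camera_matrix, dist_coeffs, None, None)
        if used > 0:
            print("board pose:", rvec.ravel(), tvec.ravel())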

contour edges determined by points in matplotlib

I was wondering if there is a way to get the shape of the contour of a region determined by several points. For instance, in the image below, I show a collection of points as a background (in gray), but it does not look very nice, so I would like to determine automatically the edges or contour that the points delimit, and just plot the shape of the background instead of plotting thousands of points to show it.
Edit: As kindly pointed out by @heltonbiker, the χ-shape described in http://www.geosensor.net/papers/duckham08.PR.pdf would do the job perfectly; however, I still have no clue how to implement it. Any help would be highly appreciated!
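Not the χ-shape algorithm itself, but a closely related alpha shape built from a Delaunay triangulation may be enough to recover the boundary; the alpha value below is a placeholder that needs tuning to the typical point spacing:

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_shape_edges(points, alpha):
        """Boundary edges of the alpha shape of a set of 2D points.

        Keeps Delaunay triangles whose circumradius is below 1/alpha and
        returns the edges (as index pairs) that lie on exactly one kept triangle.
        """
        tri = Delaunay(points)
        edge_count = {}
        for ia, ib, ic in tri.simplices:
            pa, pb, pc = points[ia], points[ib], points[ic]
            a = np.linalg.norm(pb - pa)
            b = np.linalg.norm(pc - pb)
            c = np.linalg.norm(pa - pc)
            s = (a + b + c) / 2.0
            area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5  # Heron's formula
            if a * b * c / (4.0 * area) < 1.0 / alpha:                 # circumradius test
                for i, j in ((ia, ib), (ib, ic), (ic, ia)):
                    key = (min(i, j), max(i, j))
                    edge_count[key] = edge_count.get(key, 0) + 1
        return [edge for edge, n in edge_count.items() if n == 1]

    # Example with random points; plot the returned index pairs as line segments.
    pts = np.random.rand(1000, 2)
    boundary = alpha_shape_edges(pts, alpha=5.0)
    print(len(boundary), "boundary edges")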

Image Segmentation based on Pixel Density

I need some help developing code that segments a binary image into components of a certain pixel density. I've done some research into OpenCV algorithms, but before developing my own algorithm for this, I wanted to ask around to make sure it hasn't been done already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case), though the objects do not necessarily have to be a particular shape.
The solution I thought of was to use pixel density: most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment them out on that basis?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window at varying scales. This would be very similar to how face detection works, but using only one super-simple feature.
It might be beneficial to narrow all edges with something like skeletonization before computing the integral image, so that the result is insensitive to wide lines.
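A minimal sketch of that idea, using a synthetic drawing in place of the real schematic; the window size, step and density threshold are placeholder values that would need tuning:

    import cv2
    import numpy as np

    # Synthetic stand-in for the binary schematic: thin lines plus one dense blob.
    img = np.full((200, 300), 255, np.uint8)
    cv2.line(img, (0, 100), (299, 100), 0, 1)
    cv2.line(img, (150, 0), (150, 199), 0, 1)
    cv2.circle(img, (220, 60), 15, 0, -1)          # the "component": a dense region

    ink = (img < 128).astype(np.uint8)             # 1 where the pixel is ink
    integral = cv2.integral(ink)                   # (H+1, W+1) summed-area table

    def window_density(x, y, w, h):
        """Fraction of ink pixels in the w x h window whose top-left corner is (x, y)."""
        s = (integral[y + h, x + w] - integral[y, x + w]
             - integral[y + h, x] + integral[y, x])
        return s / float(w * h)

    # Slide a window (one scale shown; repeat with other sizes for multi-scale)
    # and keep the positions whose ink density exceeds a threshold.
    win, step, thresh = 30, 5, 0.3
    dense = [(x, y) for y in range(0, ink.shape[0] - win, step)
                    for x in range(0, ink.shape[1] - win, step)
                    if window_density(x, y, win, win) > thresh]
    print(len(dense), "dense windows found")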
OpenCV has some functionality for finding contours that can arrange the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
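For reference, a hedged sketch of reading the contour hierarchy (OpenCV 4.x return signature assumed; 3.x returns an extra value first):

    import cv2
    import numpy as np

    # Synthetic image with one shape nested inside another.
    img = np.zeros((200, 200), np.uint8)
    cv2.rectangle(img, (20, 20), (180, 180), 255, 2)   # outer loop of "wires"
    cv2.circle(img, (100, 100), 30, 255, -1)           # "component" nested inside

    contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # hierarchy[0][i] = [next, previous, first_child, parent]; parent == -1 means
    # the contour is top-level, so nesting can be read straight from this table.
    for i, (nxt, prev, child, parent) in enumerate(hierarchy[0]):
        print(f"contour {i}: parent={parent}, first_child={child}")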
If I understand correctly, you want to detect the lines and the circle in your image, right?
If so, have a look at the Hough line transform and the Hough circle transform.
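A hedged sketch of both transforms on a synthetic image; every threshold and radius below is a placeholder that would need tuning on the real picture:

    import cv2
    import numpy as np

    # Synthetic stand-in: one long line and one circle, white on black.
    img = np.zeros((300, 400), np.uint8)
    cv2.line(img, (0, 150), (399, 150), 255, 2)
    cv2.circle(img, (200, 150), 40, 255, 2)

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=50, maxLineGap=10)

    # Hough circle transform works on a grayscale image (it runs Canny internally).
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=25, minRadius=20, maxRadius=80)

    print(0 if lines is None else len(lines), "line segments")
    print(0 if circles is None else circles.shape[1], "circles")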
