I have an organized point cloud file of an assembly, generated using FreeCAD, and I am using the Open3D Python library to visualize it. I am wondering whether there is a method to resize (shrink) this point cloud while keeping it organized and in its original shape. That is, I want to reduce the spacing between the points so that they move closer together and the cloud compresses while maintaining its shape.
I don't think this is the same as downsampling. What I want to do next is combine the two point clouds so that the shrunken point cloud appears to sit inside the bigger one.
I am working in Python, storing these point clouds in lists etc., and visualizing with the Open3D library.
Any advice is welcome. Thanks in advance!
Have you tried scaling it?
For example, converting the units from meters to millimeters: multiply or divide each coordinate by the same constant (e.g. 1000 for the m <-> mm conversion).
Then center each point cloud (subtract its mean from every point) and merge them. You can adjust the centroids and the scaling for better output.
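A minimal Open3D sketch of that recipe (assuming Open3D >= 0.10; the file name is a hypothetical placeholder):

```python
import copy
import open3d as o3d

pcd = o3d.io.read_point_cloud("assembly.ply")  # hypothetical file name

# Make a copy and shrink it to half size around its own centroid.
# scale() multiplies every coordinate; the point ORDER (organization)
# is untouched, so the cloud keeps its shape, just smaller and denser.
small = copy.deepcopy(pcd)
small.scale(0.5, center=small.get_center())

# Move the shrunken copy onto the centroid of the original so it
# appears nested inside the bigger cloud.
small.translate(pcd.get_center() - small.get_center())

# Merging two clouds is just the + operator in Open3D.
merged = pcd + small
o3d.visualization.draw_geometries([merged])
```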
Related
I am trying to generate depth images from a 3D point cloud. So far I have found a lot of information on how to generate a point cloud from depth images, but now I need to do it in reverse. I want to do it in Python.
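For the reverse projection, a minimal numpy sketch under a simple pinhole-camera assumption (the intrinsics fx, fy, cx, cy and the image size below are made-up placeholders, not from the question):

```python
import numpy as np

def points_to_depth(points, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                    width=640, height=480):
    """Project an Nx3 cloud (camera coordinates, z > 0) to a depth image."""
    depth = np.full((height, width), np.inf)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(x[valid] * fx / z[valid] + cx).astype(int)
    v = np.round(y[valid] * fy / z[valid] + cy).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # Keep the nearest point per pixel (a simple z-buffer).
    np.minimum.at(depth, (v[inside], u[inside]), zv[inside])
    depth[np.isinf(depth)] = 0.0  # pixels with no points
    return depth
```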
I am working on a cold-storage management application. In this project, I will display all the temperatures of the storage we manage. I have set up some sensors in the storage that tell me the temperature at their locations. (Think of the whole storage space as a box, i.e. a rectangular parallelepiped.) Temperatures are stored in 3D, e.g. temperature[x][y][z] = -25 (°C).
I don't know the temperature at points that are not located at a sensor, so I calculate those values by trilinear interpolation. Although I have solved this problem, I want to add more sensors at arbitrary locations in my storage (more than the 8 points used by trilinear interpolation) to increase accuracy. When I do that, I have to divide the whole storage space into smaller cells, each with a sensor at every corner (the 8 corners needed for trilinear interpolation), and collect all the results in one 3D array as the final result for the whole storage.
My problem is that the solution described above is too slow. Can you tell me how to interpolate with more than 8 point values over the storage space, or a better approach that does not require dividing the storage space?
Thanks, my friends, and sorry for my bad English writing skills.
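One way to avoid the manual subdivision, as a sketch (assuming SciPy is acceptable; the sensor positions and temperatures below are made up): scipy.interpolate.griddata triangulates the scattered sensor positions once and interpolates linearly anywhere inside their convex hull, so the 8-corner cells are not needed.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical sensor data: (x, y, z) positions and temperatures in degrees C.
sensor_xyz = np.array([[0, 0, 0], [4, 0, 0], [0, 3, 0], [0, 0, 2],
                       [4, 3, 0], [4, 0, 2], [0, 3, 2], [4, 3, 2],
                       [2, 1.5, 1]], dtype=float)
sensor_temp = np.array([-25, -24, -26, -25, -23, -24, -26, -25, -24.5])

# Query any points (or a whole grid) in one call; points outside the
# sensors' convex hull come back as NaN with the linear method.
query = np.array([[1.0, 1.0, 0.5], [3.0, 2.0, 1.5]])
temps = griddata(sensor_xyz, sensor_temp, query, method='linear')
print(temps)
```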
I have two 3D point clouds with correspondences between points.
I'm trying to find the best affine transformation between these two clouds, and I finally want to obtain:
- Rotation
- Shear
- Scale
- Translation
Currently I'm using Coherent Point Drift, but I know there is probably a better solution based on least squares (probably Procrustes analysis); however, with that approach I don't know how to extract more than just the scale and rotation.
Also, is there a library to do this easily in Python 3?
Thank you in advance.
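With known one-to-one correspondences, one least-squares sketch (not CPD; X and Y below are hypothetical Nx3 corresponding clouds) is to solve for the full affine map directly, then split its linear part with a polar decomposition: the orthogonal factor is the rotation, and the symmetric factor holds scale and shear together.

```python
import numpy as np
from scipy.linalg import polar

def fit_affine(X, Y):
    # Solve [X 1] @ M ~= Y in the least-squares sense.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    M, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
    A = M[:3].T          # 3x3 linear part (rotation * shear * scale)
    t = M[3]             # translation
    R, K = polar(A)      # R orthogonal (rotation), K symmetric (scale + shear)
    return R, K, t

# Usage: R, K, t = fit_affine(X, Y), so that Y ~= X @ (R @ K).T + t
```

Note that if det(R) is -1 the orthogonal factor contains a reflection rather than a pure rotation, which may need separate handling.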
I am using Windows 10, Python and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans such as .stl, .obj and .ply. I can currently capture a point cloud of the human body with a Kinect v2, and I have the scanned 3D body in .stl, .obj and .ply formats.
I need some ideas and information about this: I don't know how to analyze the data I have or how to calculate what I want.
Here I found an example of what I am trying to do, but it doesn't need to be as perfectly stable as that; it's for a school assignment. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I capture the 3D scanned object with the Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points to the interested region
Segmenting the shape
Extracting the parameters
If you're interested in Python only, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). Please see the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will produce results similar to those shown in the image, assuming the frame has been chosen properly. If not, the point cloud can easily be transformed.
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle2D, Circle3D and Cylinder models. More details regarding usage and the API are here. The method chosen can be a simple SAC (Sample Consensus) method such as RANSAC (Random SAC), or an advanced method such as LMedS (Least Median of Squares) or MLESAC (Maximum Likelihood Estimation SAC).
Extracting the parameters
All models have a radius field, which can be used to compute the circumference with the standard formula 2*pi*r.
Disclaimer: note that these models assume the cross-section is a circle, not an ellipse, and that the cylinders are right cylinders. So if the measured body part (arm or bicep) is not circular, the computed value might be far from the ground truth in extreme cases.
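For the last step, here is a rough numpy sketch that avoids PCL entirely (assumptions: the body part is roughly aligned with the z axis and pts is an Nx3 array): take a thin horizontal slice at the height of interest, fit a circle to it by algebraic least squares, and return 2*pi*r.

```python
import numpy as np

def circumference_at_height(pts, z0, thickness=0.01):
    # Keep only points inside a thin slab around height z0.
    ring = pts[np.abs(pts[:, 2] - z0) < thickness / 2]
    x, y = ring[:, 0], ring[:, 1]
    # Kasa circle fit: solve 2*a*x + 2*b*y + c = x^2 + y^2 for (a, b, c),
    # where (a, b) is the center and c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return 2 * np.pi * r
```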
We are beginning a project to visualize the results of a finite volume (FV) calculation using Google Earth. The FV data is essentially 2d (lat/long) data consisting of a Cartesian array of values (sea surface height, for example). Each value should be mapped to a color from some colormap, and then displayed as a single mesh cell in a gridded array suitable for Google Earth. The Cartesian array could be 100x100 or larger.
My question is: do we construct polygons for each mesh cell C_{ij} in the array, assigning a color corresponding to the q_{ij} value for that cell? This would seem to create a huge KML file if the coordinates of the four corners of every mesh cell must be described (e.g. 10,000 polygons).
Or are there KML tools we could use that would allow us to specify, for example, the lower and upper coordinates of the array, a generic mesh cell size (e.g. dX, dY values), and the array of q data (or, equivalently, colours) that should be used to fill the "patch"?
Alternatively, we could create an image file containing, for example, a rendered image of our data array (created by some other means), and then reference it from the KML file.
Our aim is to use PyKML for this project.
Any suggestions would be very helpful.
After much digging around, I think I now have a better understanding of what Google Earth can and cannot do (or is not designed to do). It seems that Google Earth is not designed as a visualization tool for numerical data. That does not mean it cannot be done, but one must create the image files elsewhere and then overlay them onto Google Earth. For example, this link provides instructions for visualizing the output of a fire modeling code:
http://www.openwfm.org/wiki/Visualization_in_Google_Earth
The instructions there suggest how pseudocolor plots can be used, in at least one special case, to visualize output in Google Earth.
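Following that conclusion, a minimal PyKML sketch of the overlay step (the image file name and the bounding box below are hypothetical): render the q array to a PNG by other means (e.g. with matplotlib), then emit a GroundOverlay that drapes the image over the grid's lat/long box.

```python
from lxml import etree
from pykml.factory import KML_ElementMaker as KML

# GroundOverlay referencing a pre-rendered image of the data array.
doc = KML.kml(
    KML.GroundOverlay(
        KML.name("sea surface height"),
        KML.Icon(KML.href("ssh_overlay.png")),   # rendered elsewhere
        KML.LatLonBox(
            KML.north("47.0"), KML.south("46.0"),
            KML.east("-123.0"), KML.west("-124.0"),
        ),
    )
)

with open("ssh_overlay.kml", "wb") as f:
    f.write(etree.tostring(doc, pretty_print=True))
```

This keeps the KML tiny regardless of the array size, since all 10,000+ cells live in the single raster image rather than in individual polygons.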