I am trying to generate depth images from a 3D point cloud. So far I have found a lot of information on how to generate a point cloud from depth images, but now I need to do it in reverse. I want to do it in Python.
I have an organized point cloud file of an assembly generated using FreeCAD, and I am using the open3d Python library to visualize it. I wonder whether there is a method to shrink this point cloud to a smaller size while keeping it organized in its original shape. That is, I want to reduce the spacing between the points so that they move closer together and the cloud compresses while maintaining its shape.
I think this shouldn't be the same as downsampling. What I want to do next is combine the two point clouds so that the shrunken point cloud object appears to be inside the bigger point cloud object.
I am working in Python, storing these point clouds in lists and so on, and visualizing them with the open3d library.
Any advice is welcome. Thanks in advance!
Have you tried scaling it?
For example, to convert the units from meters to millimeters, multiply or divide each coordinate by the same constant (1000 for the m <-> mm conversion).
Then, center each point cloud (subtract the mean from each point) and merge them. You can adjust their centroids and the scaling for better output.
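A minimal sketch of that with open3d and numpy (the file names and the 0.5 scale factor are placeholders):

```python
import numpy as np
import open3d as o3d

# Load the two point clouds (file names are assumptions).
outer = o3d.io.read_point_cloud("assembly_outer.ply")
inner = o3d.io.read_point_cloud("assembly_inner.ply")

# Shrink the inner cloud about its own centroid: this reduces the
# spacing between points uniformly, so the shape is preserved.
inner.scale(0.5, center=inner.get_center())

# Center both clouds on the origin so the shrunken one sits inside the other.
outer.translate(-outer.get_center())
inner.translate(-inner.get_center())

# Merge them into a single cloud and visualize.
merged = o3d.geometry.PointCloud()
merged.points = o3d.utility.Vector3dVector(
    np.vstack([np.asarray(outer.points), np.asarray(inner.points)]))
o3d.visualization.draw_geometries([merged])
```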
I am looking for an algorithm to convert a 3D point cloud to a depth image in Python, especially for 3D face point clouds. Can you suggest some tools or algorithms?
Thanks in advance.
I am using Windows 10, Python, and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans in formats like .stl, .obj, and .ply. I can currently get the point cloud of a human body with a Kinect v2, and I have scanned 3D human bodies in .stl, .obj, and .ply formats.
I need some ideas and information about it. I don't know how to analyze the data I have or how to calculate what I want.
Here is an example of what I am trying to do, but it doesn't need to be as stable as that; it's for a school assignment. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I get the 3D scanned object with a Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points to the region of interest
Segmenting the shape
Extracting the parameters
If you're interested in Python only, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). Please see the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough filters can be used for this. They will produce results similar to what is shown in the image, assuming the reference frame has been chosen properly. If not, the point cloud can easily be transformed.
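PCL's CropBox isn't readily available from Python; as an assumption, an equivalent axis-aligned crop can be sketched with open3d (the file name and bounds are placeholders you would tune for your scan's frame and units):

```python
import open3d as o3d

# Load the scanned body (file name is a placeholder).
pcd = o3d.io.read_point_cloud("body_scan.ply")

# Keep only the points inside an axis-aligned box around the region of
# interest, e.g. a thin slab around the waist.
bbox = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-0.5, -0.5, 0.9),
                                           max_bound=(0.5, 0.5, 1.1))
cropped = pcd.crop(bbox)
o3d.visualization.draw_geometries([cropped])
```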
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle 2D, Circle 3D, and Cylinder models. More details regarding usage and the API are here. The method chosen can be a simple SAC (Sample Consensus) method like RANSAC (Random SAC), or an advanced method like LMedS (Least Median of Squares) or MLESAC (Maximum Likelihood Estimation SAC).
Extracting the parameters
All models have a radius field, which can be used to find the circumference using the standard formula (2 * pi * r).
Disclaimer: Please note that these models assume the shape is a circle, not an ellipse, and that the cylinders are right cylinders. So if the measured object (arm or bicep) is not circular, the computed value might not be close to the ground truth in extreme cases.
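Since PCL's SAC models aren't readily available from Python, a rough numpy alternative is to slice the cloud at the height of interest and least-squares-fit a 2D circle; a sketch under that assumption (points is an N x 3 array, and the slice height and thickness are placeholders):

```python
import numpy as np

def slice_circumference(points, z0, thickness=0.01):
    """Estimate a circumference by fitting a circle to a thin horizontal
    slice of the point cloud (Kasa algebraic least-squares fit)."""
    # Keep only points within a thin slab around height z0.
    mask = np.abs(points[:, 2] - z0) < thickness / 2
    xy = points[mask, :2]

    # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in a least-squares sense,
    # where (a, b) is the circle centre and r^2 = a^2 + b^2 + c.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)

    radius = np.sqrt(a ** 2 + b ** 2 + c)
    return 2 * np.pi * radius
```

This carries the same caveat as the disclaimer above: it assumes the cross-section is roughly circular.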
I've been doing a lot of image processing recently in Python using OpenCV, and I've worked all this while with 2D images in the generic BGR style.
Now, I'm trying to figure out how to incorporate depth and work with depth information as well.
I've seen the documentation on creating simple point clouds using the left and right images of a stereo camera, but I was hoping to gain some intuition about depth-based cameras themselves, like the Kinect.
What kind of camera should I use for this purpose and, more importantly, how do I process these images in Python? I can't find much documentation on handling RGBD images in OpenCV.
If you want to work with depth-based cameras you can go for Time of Flight (ToF) cameras like the picoflexx and picomonstar. They will give you X, Y, and Z values, where the X and Y values are the distances of a point from the camera centre (as in 2D space) and Z gives you the direct (not perpendicular) distance of that point from the camera centre.
For these cameras and this kind of 3D data processing you can use the Point Cloud Library (PCL).
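Whatever camera you pick, once you have a depth frame as a numpy array you can recover X, Y, Z values yourself with the pinhole model; a minimal sketch, assuming the depth image stores perpendicular (z-buffer) depth and that you know the intrinsics fx, fy, cx, cy from your camera's calibration:

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metres) to an (H*W) x 3 array
    of X, Y, Z camera-space coordinates using the pinhole model."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)
```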
I was implementing depth map construction, the Python code for which is available here: OpenCV Docs - depthMap. I succeeded in getting the depth map shown in the docs for their given image pair (left and right stereo images), tsukuba_l.png and tsukuba_r.png. I then considered testing my own image pairs, so I took a pair of images with my mobile phone, as shown below:
When I run the code, I get a depth map that looks like this:
I tried playing with numDisparities and blockSize, but it didn't help me get a better map.
I thought of checking the source of cv2.StereoBM_create in the master branch on GitHub, but couldn't find it online. Can you help me with a way to build depth maps for custom images taken by me? Is there a way to play with the parameters, or can you at least give me the link to the GitHub master module that has all the stereo-related modules? Thank you.
I guess you did not rectify the images, which is fundamental for stereo matching.
You should first calibrate your stereo system (if you took the images with a mobile phone, every image pair you take will have a different transform; the two cameras need to always have the same transformation between each other) and then rectify the images. That way they are projected onto the same plane, and the stereo matching algorithm can look for correspondences in the other image along the same rows.
Check the docs for stereoRectify(); you will see some images as an example of the rectification process.
By the way, there is another Python example based on the Semi-Global Block Matching algorithm in opencv/samples/python/stereo_match.py.
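A condensed sketch of that pipeline with OpenCV (the calibration file name is hypothetical; K1/D1/K2/D2/R/T are assumed to come from a prior cv2.stereoCalibrate run on a chessboard):

```python
import cv2
import numpy as np

# Assumed: calibration results saved earlier from cv2.stereoCalibrate
# (K1/K2 camera matrices, D1/D2 distortion coefficients, R/T extrinsics).
calib = np.load("stereo_calib.npz")  # hypothetical file name
K1, D1, K2, D2, R, T = (calib[k] for k in ("K1", "D1", "K2", "D2", "R", "T"))

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
size = left.shape[::-1]  # (width, height)

# Rectify so that epipolar lines become image rows.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
map_left = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map_right = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
left_r = cv2.remap(left, *map_left, cv2.INTER_LINEAR)
right_r = cv2.remap(right, *map_right, cv2.INTER_LINEAR)

# Match on the rectified pair; numDisparities must be a multiple of 16,
# and StereoSGBM returns fixed-point disparities scaled by 16.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left_r, right_r).astype(np.float32) / 16.0
```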