I need to visualize a set of hairstyles in 3D using open3d (or any similar package, if that makes things easier). In particular, I am looking for a way to visualize one of the hairstyles of the USC-HairSalon dataset available here.
Each hairstyle consists of 10,000 hair strands, with each strand made of 100 points. In Blender, hairstyles can be represented in 3D using the script available here.
Do you have any suggestions for an efficient way to represent all hair strands? Each hairstyle is stored in a .data file and needs to be processed before being rendered.
To load the reference head, I am using a few lines of code:
import open3d as o3d
mesh = o3d.io.read_triangle_mesh("./USC-HairSalon_dataset/Raw_data/head_model.obj")
mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])
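One efficient option is to pack all strands into a single open3d.geometry.LineSet, so the whole hairstyle is one geometry and one draw call. The sketch below assumes the .data layout is little-endian binary: an int32 strand count, then per strand an int32 point count followed by that many float32 (x, y, z) triples — verify this against the dataset's own documentation before relying on it.

```python
import struct

import numpy as np


def load_strands(path):
    """Parse a USC-HairSalon .data file into a list of (n, 3) arrays.

    Assumed layout (little-endian): int32 strand count, then per strand
    an int32 point count followed by that many float32 (x, y, z) triples.
    """
    strands = []
    with open(path, "rb") as f:
        (n_strands,) = struct.unpack("<i", f.read(4))
        for _ in range(n_strands):
            (n_pts,) = struct.unpack("<i", f.read(4))
            pts = np.frombuffer(f.read(12 * n_pts), dtype="<f4").reshape(n_pts, 3)
            strands.append(pts)
    return strands


def strands_to_lineset(strands):
    """Pack all strands into one open3d LineSet (a single geometry)."""
    import open3d as o3d  # imported here so the parser works without open3d

    points, lines, offset = [], [], 0
    for pts in strands:
        points.append(pts)
        # connect consecutive points within the same strand only
        lines.extend([offset + i, offset + i + 1] for i in range(len(pts) - 1))
        offset += len(pts)
    line_set = o3d.geometry.LineSet()
    line_set.points = o3d.utility.Vector3dVector(np.vstack(points).astype(np.float64))
    line_set.lines = o3d.utility.Vector2iVector(np.asarray(lines))
    return line_set
```

Then `o3d.visualization.draw_geometries([mesh, strands_to_lineset(load_strands("strands00001.data"))])` draws the hair together with the head mesh (the filename is only an example).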
I am still a beginner with open3d and still reading the documentation. Nowhere does there seem to be a clear explanation of the Armadillo mesh class in open3d, nor of the knot mesh.
Why are they different types of meshes, and in which cases should one use them?
The reason behind the existence of these classes is explained in their GitHub repository:
The Dataset classes in Open3D are designed for convenient access to "built-in" example and test data.
Example usages of Armadillo mesh include surface deformations (ARAP), mesh optimization (Laplacian Mesh Optimization), skeleton extraction (Skeleton Extraction by Mesh Contraction), and key point detection (Intrinsic shape signatures (ISS)).
I haven't used a knot mesh before, but I believe that it is useful for applications in knot theory.
If you want to visualize the meshes and understand the difference better, you can do it the following way.
import open3d as o3d
knot_data = o3d.data.KnotMesh()
mesh = o3d.io.read_triangle_mesh(knot_data.path)
mesh.compute_vertex_normals() # for better visualization
o3d.visualization.draw_geometries([mesh])
armadillo_data = o3d.data.ArmadilloMesh()
mesh = o3d.io.read_triangle_mesh(armadillo_data.path)
mesh.compute_vertex_normals() # for better visualization
o3d.visualization.draw_geometries([mesh])
I have a STEP file which I converted into an STL file. I need the STL format to be able to extract certain features out of the 3D mesh. Further, I want to visualize the 3D model in an interactive manner, i.e. the user should be able to rotate and scale the model. The code I found, which uses Open3D, doesn't work so far:
import open3d as o3d
def visualize(mesh):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(mesh)
    vis.run()
    vis.destroy_window()
mesh = o3d.io.read_triangle_model(path_stl_file)
visualize(mesh)
Any idea how to do it the right way? Thx.
Argh, I used the wrong importer. The function to be used is:
mesh = o3d.io.read_triangle_mesh(path_stl_file)
read_triangle_model is meant for loading a full model (mesh plus materials) and returns a TriangleMeshModel, which Visualizer.add_geometry does not accept.
Currently, I use MATLAB extensively for analyzing experimental scientific data (mostly time traces and images). However, again and again I keep running into fundamental problems with the MATLAB language, and I would like to switch to Python. One feature of MATLAB is holding me back, however: its ability to add datatips to plots and images.
For a line plot, the datatip is a small window next to one of the data points that shows its coordinates. This is very useful for quickly seeing where data points are and what their values are. Of course, this can also be done by inspecting the vectors that were used to plot the line, but that is slightly more cumbersome and becomes a headache when analyzing loads of data. For example, say we quickly want to know for which value of x we get y = 0.6; moving the datatip around gives a rough estimate very quickly.
For images, the datatip shows the x and y coordinates, but also the greyscale value (called "index" by MATLAB) and the RGB color. I'm mainly interested in the greyscale value here. Suppose we want to know the coordinates of the bottom tip of the pupil of the cat's eye. A datatip allows one to simply click that point and copy the coordinates (either manually or programmatically). Alternatively, one would have to write an image-processing script to find that pixel location, which is not worthwhile for a one-time analysis.
The plotting library for Python that I'm most familiar with, and that is commonly called the most flexible, is matplotlib. An old Stack Overflow question seems to indicate that this can be done using mpldatacursor, and another module seems to be mplcursors. These libraries do not seem to be compatible with Spyder, however, limiting their usability. Also, I imagine many Python programmers would use a feature like datatips, so it seems odd to have to rely on a third-party module.
Now on to the actual question: Is there any module (or simple piece of code that I could put in my personal library) to get the equivalent of MATLAB's datatips in all figures generated by a python script?
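For reference, a "simple piece of code for a personal library" of the kind asked about can be built on matplotlib's own pick-event machinery, with no third-party module. This is an illustrative sketch, not an established library function: the helper name `add_datatips` and the formatting are my own choices.

```python
import matplotlib.pyplot as plt


def add_datatips(ax, line, radius=5):
    """Clicking within `radius` points of a data point of `line` shows a
    MATLAB-style datatip (an annotation) with that point's coordinates."""
    line.set_picker(radius)  # enable picking with the given tolerance
    annot = ax.annotate(
        "", xy=(0, 0), xytext=(12, 12), textcoords="offset points",
        bbox=dict(boxstyle="round", fc="lightyellow"),
        arrowprops=dict(arrowstyle="->"),
    )
    annot.set_visible(False)

    def on_pick(event):
        if event.artist is not line:
            return
        i = event.ind[0]  # index of the nearest picked data point
        x, y = line.get_xdata()[i], line.get_ydata()[i]
        annot.xy = (x, y)
        annot.set_text(f"x = {x:.4g}\ny = {y:.4g}")
        annot.set_visible(True)
        ax.figure.canvas.draw_idle()

    ax.figure.canvas.mpl_connect("pick_event", on_pick)
    return annot
```

Usage with hypothetical data: `fig, ax = plt.subplots()`, `(line,) = ax.plot(x, y, "o-")`, `add_datatips(ax, line)`, `plt.show()`. Unlike MATLAB, nothing is attached by default, so the helper has to be called once per line you want datatips on.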
I have an organized point cloud file of an assembly generated using FreeCAD, and I am using the open3d Python library to visualize it. I wonder whether there is a method to shrink this point cloud to a smaller size while keeping it organized in its original shape. That is, I want to reduce the spacing between the points so that they become closer and the cloud compresses while maintaining its shape.
I think this is not the same as downsampling. What I want to do next is combine the two point clouds so that the shrunken point cloud appears to sit within the bigger one.
I am working on python using the approach of storing these point clouds in lists etc. and visualizing using open3D python library.
Any advice is welcome, thanks first!
Have you tried scaling it?
For example, converting the units from meters to millimeters - multiply/divide each coordinate by the same constant (for example - 1000 for m<->mm conversion).
Then, center each point cloud (subtract the mean from each point) and merge them. You can adjust their centroid and the scaling for better output.
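Since the clouds are already stored as lists, the scale-about-centroid and merge steps above can be sketched with plain numpy. The helper names `shrink` and `nest` are illustrative, not library functions:

```python
import numpy as np


def shrink(points, factor):
    """Scale a point cloud about its centroid: the shape is preserved,
    only the spacing between points changes (factor < 1 shrinks)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return centroid + factor * (pts - centroid)


def nest(outer, inner, factor=0.5):
    """Shrink `inner`, move it onto the centroid of `outer`, and merge,
    so the smaller cloud sits inside the bigger one."""
    outer = np.asarray(outer, dtype=float)
    small = shrink(inner, factor)
    small += outer.mean(axis=0) - small.mean(axis=0)
    return np.vstack([outer, small])
```

The merged array can be handed back to open3d with `pcd.points = o3d.utility.Vector3dVector(merged)` before calling `draw_geometries`. Open3d's own `pcd.scale(factor, center=pcd.get_center())` does the same scaling directly on a PointCloud.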
Turtlebot exploration using a pregenerated map
I am working on the exploration strategy of a turtlebot that is supposed to navigate a known environment (with the help of a pre-built map) and searches for an object (e.g. a red ball).
The map has been generated using gmapping and now I am trying to implement a strategy for exploring the known map. Although generating random points on the map and then performing a 360° scan at these points is an option, I am not too enthusiastic about this method, since it does not guarantee that the object will be located if it exists.
What I would like to have is a set of points, such that after visiting all these points and performing a 360° scan at each of those points, the entire accessible/visible area of the map is covered by the robot.
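That requirement is a set-cover problem, and one way to approach it without extracting polygons at all is a greedy heuristic run directly on the occupancy grid: repeatedly pick the candidate cell that sees the most still-uncovered free cells. This is an illustrative sketch (not an existing ROS package), assuming the map is a boolean numpy array with True marking obstacles; set cover is NP-hard, and greedy only gives an approximate (logarithmic-factor) solution.

```python
import numpy as np


def visible(grid, p, q):
    """True if the straight segment between cells p and q crosses no
    obstacle. grid: 2D bool array, True = obstacle. Sampling-based ray cast."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = int(np.linalg.norm(q - p)) * 2 + 1  # ~2 samples per cell traversed
    for t in np.linspace(0.0, 1.0, n):
        r, c = np.round(p + t * (q - p)).astype(int)
        if grid[r, c]:
            return False
    return True


def greedy_guards(grid, candidates, targets, max_range=np.inf):
    """Greedily pick scan points: each round, take the candidate cell that
    sees the most still-uncovered target cells (a 360-degree scan there)."""
    uncovered = set(map(tuple, targets))
    guards = []
    while uncovered:
        best, best_seen = None, set()
        for cand in map(tuple, candidates):
            seen = {t for t in uncovered
                    if np.hypot(t[0] - cand[0], t[1] - cand[1]) <= max_range
                    and visible(grid, cand, t)}
            if len(seen) > len(best_seen):
                best, best_seen = cand, seen
        if best is None:  # remaining cells are not visible from any candidate
            break
        guards.append(best)
        uncovered -= best_seen
    return guards
```

Visiting the returned cells (converted from grid indices to map coordinates) and scanning 360° at each covers every target cell that is visible from some candidate; `max_range` can model the sensor's limited range.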
I felt that the art gallery algorithm (described here: http://cs.smith.edu/~orourke/books/ArtGalleryTheorems/Art_Gallery_Full_Book.pdf) also achieves something similar, and hence tried to implement it in my scenario.
The art gallery problem requires polygons and I am finding it quite difficult to generate polygons from maps generated by gmapping on the real robot (although for simpler maps it's possible).
I have tried using the triangle module in Python to create convex hulls of clusters of objects, but that is not too useful in a noisy environment like the one linked below. (I converted the .pgm file into a .png by plotting only obstacle points.)
obstacle points
Note: My turtlebot does not have a laser scanner; it just has an Asus Xtion Pro Live.
I would really appreciate some help regarding the following:
Is there an easier way of exploring a known map with a turtlebot (preferably some package in ROS)?
If the art gallery algorithm is a viable option, then how should I obtain accurate polygons from the scan-points of the obstacles?
Thank you in advance.