I am new to 3D geometry. I am trying to extract a mesh from a .glb file using the Python library trimesh, but I couldn't figure out the proper way of doing it.
My requirement is that I need a 3D mesh (as an adjacency matrix) of an object so that I can apply non-Euclidean convolutional operators to it.
Any suggestions on what I should be looking at?
You can use trimesh.load to load your glTF file. Note that the return type depends on the file type of your model: for glTF files it will return an instance of trimesh.Scene. The scene has all sorts of attributes, like the camera and lights, but also the geometries; this is because glTF files can contain more than just model data. Each geometry is an instance of trimesh.Trimesh, the base class for geometries, which has an edges_sparse property that represents the adjacency matrix of the model.
To put it all together:
import trimesh

scene = trimesh.load("some_file.glb")
geometries = list(scene.geometry.values())
geometry = geometries[0]
adjacency_matrix = geometry.edges_sparse
It's a bit tedious to figure this out from the documentation. I tend to look at the source code or turn on the debugger in my IDE:
trimesh.Scene: https://github.com/mikedh/trimesh/blob/master/trimesh/scene/scene.py
trimesh.Trimesh: https://github.com/mikedh/trimesh/blob/master/trimesh/base.py
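If you just want one merged mesh rather than a scene, trimesh.load can also concatenate everything at load time via its force keyword. A minimal sketch (force='mesh' is documented in recent trimesh releases; check your version):

import trimesh

# force='mesh' asks trimesh to concatenate all geometry in the scene into
# a single Trimesh instead of returning a Scene.
mesh = trimesh.load("some_file.glb", force="mesh")

# COO sparse matrix of shape (len(vertices), len(vertices)) with an entry
# per edge, i.e. the vertex adjacency matrix.
adjacency_matrix = mesh.edges_sparse
print(adjacency_matrix.shape)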
Related
I'm trying to write a script that creates an envelope polygon feature for an existing feature class that contains multiple features. The output needs to be a single envelope polygon for all features combined, instead of one polygon for each individual feature.
The output should be a new feature class (a shapefile or a geodatabase feature class) with a single envelope polygon for the entire feature class, not a polygon per feature. I need to work with just the input feature class and/or the geometries of the input features, and the script should work on any feature class (any geometry type, coordinate system, etc.).
I've been using a shapefile that represents the Hawaii islands for testing.
This is my code so far:
import arcpy
arcpy.env.workspace = "C:/Shapefiles"
fc = "hawaii.shp"
ofc = "hawaii_env"
with arcpy.da.SearchCursor(fc, "SHAPE@") as cursor:
    for row in cursor:
        print(row[0].extent)
        polygon = [arcpy.Polygon(arcpy.Array(
            [arcpy.Point(row[0].extent.XMin, row[0].extent.YMin),
             arcpy.Point(row[0].extent.XMax, row[0].extent.YMin),
             arcpy.Point(row[0].extent.XMax, row[0].extent.YMax),
             arcpy.Point(row[0].extent.XMin, row[0].extent.YMax)]))]

arcpy.CopyFeatures_management(polygon, ofc)
When I run the script, the shapefile it produces contains just one polygon, but I don't believe it is working correctly: the one polygon only covers one island of Hawaii. I'm also not sure what exact output I should expect or what to look for in ArcGIS Pro.
Any code to try or advice would be very helpful, thank you.
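A likely cause: polygon is rebuilt on every cursor iteration, so only the envelope of the last feature read survives to CopyFeatures. A minimal sketch of one way to get a single envelope for the whole feature class, using arcpy.Describe (variable names follow the question):

import arcpy

arcpy.env.workspace = "C:/Shapefiles"
fc = "hawaii.shp"
ofc = "hawaii_env"

# Describe().extent is the combined extent of the entire feature class,
# so no per-feature accumulation is needed.
desc = arcpy.Describe(fc)
ext = desc.extent
corners = arcpy.Array([
    arcpy.Point(ext.XMin, ext.YMin),
    arcpy.Point(ext.XMax, ext.YMin),
    arcpy.Point(ext.XMax, ext.YMax),
    arcpy.Point(ext.XMin, ext.YMax),
])
envelope = arcpy.Polygon(corners, desc.spatialReference)
arcpy.CopyFeatures_management([envelope], ofc)

The Minimum Bounding Geometry tool (arcpy.management.MinimumBoundingGeometry with geometry_type="ENVELOPE" and group_option="ALL") should produce the same result in a single call.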
I have a STEP file which I converted into an STL file. I need the STL format to be able to extract certain features out of the 3D mesh. Further, I want to visualize the 3D model in an interactive manner, i.e. the user should be able to rotate and scale the model. The code I found, which uses Open3D, doesn't work so far:
import open3d as o3d
def visualize(mesh):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(mesh)
    vis.run()
    vis.destroy_window()

mesh = o3d.io.read_triangle_model(path_stl_file)
visualize(mesh)
Any idea how to do it the right way? Thx.
Argh, I used the wrong importer. The function to be used is:
read_triangle_mesh(path_stl_file)
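Putting the corrected importer back into the snippet from the question (the compute_vertex_normals() call is an extra step, not from the original answer, that typically fixes flat shading for STL meshes):

import open3d as o3d

path_stl_file = "model.stl"  # hypothetical path

# read_triangle_mesh returns an open3d.geometry.TriangleMesh, which is what
# Visualizer.add_geometry expects; read_triangle_model returns a scene-level
# model object instead.
mesh = o3d.io.read_triangle_mesh(path_stl_file)
mesh.compute_vertex_normals()  # STL files often lack normals

vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(mesh)
vis.run()  # interactive window: rotate with the mouse, scroll to zoom
vis.destroy_window()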
I am trying to annotate my point cloud data. I found a number of tools, but could only access the demo version of the 3D point cloud tool by Supervisely. Once the annotation is complete (by drawing 3D boxes around the objects), the output is a JSON file containing, for each box, the class and the global position, orientation, and dimensions. How do I use this file to assign labels to the points inside these 3D boxes? I want the output in either .pcd or .bin format.
The JSON output looks as follows:
[{"id":36698,"name":"vel_1558647824006782.pcd","annotations":[{"className":"Car","geometryType":"cuboid","classId":957,"geometry":{"position":{"x":9.539855967959713,"y":18.342023271012913,"z":0.43944128482454614},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":5.691547052392309,"y":1.6625674002633986,"z":1.757779283656416}}},{"className":"ground","geometryType":"cuboid","classId":958,"geometry":{"position":{"x":28.890481890779242,"y":8.463823613489927,"z":-1.0314986175132965},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":96.34273328620523,"y":18.714553504372063,"z":1.0544185995045456}}}]}]
I thought of using PCL's CropBox filter, but is there any other way to do it? It would also help if someone could point me to other point cloud annotation tools that might be better suited to this problem.
I was able to write a C++ program that reads the JSON file and uses PCL's CropBox filter to solve the problem. The methodology is as follows:
1) Read the JSON file using nlohmann/json:
#include <fstream>
#include <nlohmann/json.hpp>
using json = nlohmann::json;

std::ifstream ifs("somepath.json");
json j = json::parse(ifs);
2) Extract the "position" (the centroid of the cuboid), the "rotation" (orientation) of the cuboid, the "dimensions" of the cuboid, and the "className" for each box. The code below shows a way to extract the position data into a std::vector (i iterates over the boxes):
std::vector<float> position {
j[0]["annotations"][i]["geometry"]["position"]["x"].get<float>(),
j[0]["annotations"][i]["geometry"]["position"]["y"].get<float>(),
j[0]["annotations"][i]["geometry"]["position"]["z"].get<float>()};
3) Get the maximum and minimum (x, y, z) coordinates of the box's vertices; these go as input to the CropBox filter. (Note: the min/max point does not have to coincide with a single vertex: XMin is the minimum x over all 8 vertices, YMin is the minimum y over all 8 vertices, and so on.)
4) Use cropbox filter from PCL. This allows you to get indices of all the points inside the given box. Examples can be found here and here.
5) Depending upon the class of the box, assign different colors to the points at those indices of the point cloud.
6) Save the point cloud.
This is a generalized way of labeling every point of a point cloud from an input JSON file that contains the position, dimensions, and orientation of the 3D cuboids.
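For reference, the same methodology can be sketched in Python with Open3D instead of PCL, where an OrientedBoundingBox plays the role of the CropBox filter. This is an alternative I put together, not the original C++ program, and the file names and class colors are hypothetical:

import json

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("vel_1558647824006782.pcd")
pcd.paint_uniform_color([0.5, 0.5, 0.5])  # gray for unlabeled points
colors = np.asarray(pcd.colors)

class_colors = {"Car": [1.0, 0.0, 0.0], "ground": [0.0, 1.0, 0.0]}

with open("somepath.json") as f:
    annotations = json.load(f)[0]["annotations"]

for ann in annotations:
    geom = ann["geometry"]
    center = np.array([geom["position"][k] for k in "xyz"])
    extent = np.array([geom["dimensions"][k] for k in "xyz"])
    rotation = np.array([geom["rotation"][k] for k in "xyz"])
    # Oriented box built from the annotation; its point query replaces
    # PCL's CropBox.
    R = o3d.geometry.get_rotation_matrix_from_xyz(rotation)
    box = o3d.geometry.OrientedBoundingBox(center, R, extent)
    idx = box.get_point_indices_within_bounding_box(pcd.points)
    colors[idx] = class_colors.get(ann["className"], [0.0, 0.0, 1.0])

o3d.io.write_point_cloud("labeled.pcd", pcd)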
I have the (X, Y) coordinates of polygons, as shown in the attached image of pores. How can I import them into Abaqus, and how can I create a surface that subtracts the internal shapes (polygon 1, polygon 2, ...) from the external shape (the frame, a rectangle): Rectangle - poly1 - poly2 - ...?
Try recording a macro: manually create the model in Abaqus/CAE. I assume you're doing a 2-D model, so your geometry will probably be built as follows:
1. A solid rectangle.
2a. A planar cut on the rectangle, created through a sketch.
2b. Alternatively, you can use the "create wire" feature to input your shape coordinates (right-click to read the coordinates from a text file). Use the merge scheme "Separate wire". Then use the "Partition" tool to partition the holes, and use "Geometry Edit" to remove the undesired faces.
Anyway, the macro recorder will help you prototype the code that you need to generate. Once you have that, you'll need typical Python file open/read operations to read the input coordinates and develop the program that generates the model and mesh (a sketch of this route is below). Or you can excruciatingly generate your model manually using the method outlined above. There are a lot of ways to skin this chicken.
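To give a flavor of the scripted route, here is a minimal sketch of generating the part from coordinates with the Abaqus Python API. The file name pores.txt and its layout (one "x,y" pair per line, polygons separated by blank lines, the outer rectangle first) are assumptions for illustration, not something from the question:

from abaqus import mdb
from abaqusConstants import TWO_D_PLANAR, DEFORMABLE_BODY

def read_polygons(path):
    # Read blank-line-separated blocks of "x,y" lines into point lists.
    polygons, current = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                if current:
                    polygons.append(current)
                    current = []
            else:
                x, y = map(float, line.split(','))
                current.append((x, y))
    if current:
        polygons.append(current)
    return polygons

polygons = read_polygons('pores.txt')

model = mdb.models['Model-1']
sketch = model.ConstrainedSketch(name='pores', sheetSize=200.0)
for poly in polygons:
    # Draw each polygon as a closed loop of lines. Closed inner loops in
    # the same sketch become holes when the shell is created, which gives
    # the Rectangle - poly1 - poly2 - ... subtraction directly.
    pts = poly + [poly[0]]
    for p1, p2 in zip(pts[:-1], pts[1:]):
        sketch.Line(point1=p1, point2=p2)

part = model.Part(name='PorousPlate', dimensionality=TWO_D_PLANAR,
                  type=DEFORMABLE_BODY)
part.BaseShell(sketch=sketch)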
The marching cubes class does not generate anything after I pass vtkImageData to it.
I am trying to generate a surface from a 3D numpy array. I used evtk to export the array to a .vti (structured points) file, stored as point data, and then read the file back with vtkXMLImageDataReader. After I pass the reader's output (vtkImageData) to the vtkMarchingCubes class, the class returns a vtkPolyData with no points or cells. Since vtkImageData is what the VTK wiki's examples use, the marching cubes class should have no problem handling it.
I tried exporting the vtkImageData generated by the reader, and the output can be visualized in ParaView without any problem.
I am just wondering: are there any special requirements for the marching cubes filter in VTK to generate the surface? It would be even better if the 3D numpy array could be converted directly into a data type suitable for the marching cubes class.
Here is my code to read the .vti file and pass it to marching cubes:
import vtk

dataReader = vtk.vtkXMLImageDataReader()
dataReader.SetFileName('model_name.vti')
dataReader.Update()

contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputConnection(dataReader.GetOutputPort())
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()
If you wish to do this in the Python/numpy world, I would suggest using the marching cubes implementation from here (or look at the isosurface routines in pyqtgraph, mayavi, ...). Although there are many tutorials on moving data to and from VTK, if you are already programming with numpy, I wouldn't mix the two methodologies.
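As a concrete instance of the numpy-native route, here is a minimal sketch using the marching cubes implementation in scikit-image (one possible choice among the implementations mentioned; the toy volume is made up for illustration):

import numpy as np
from skimage import measure

# Toy binary volume: a solid cube inside a 32^3 grid.
volume = np.zeros((32, 32, 32), dtype=float)
volume[8:24, 8:24, 8:24] = 1.0

# Extract the iso-surface at level 0.5 (between the 0 and 1 values).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)  # (n_vertices, 3) and (n_triangles, 3)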
Although this question is quite old already, I ran into a very similar problem.
As YC.Chui pointed out in his comment, we need to set the scalar values in the read data. For me this helped:
import vtk

reader = vtk.vtkXMLImageDataReader()
reader.SetFileName('model_name.vti')
reader.Update()

readData = reader.GetOutput()
# Promote the first point-data array to the active scalars; the marching
# cubes filter contours the active scalars and produces nothing if none
# are set.
readData.GetPointData().SetScalars(readData.GetPointData().GetArray(0))
I am quite new to VTK and have absolutely no idea why this solves the problem and whether there is a better way. Feel free to comment and correct me!
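For completeness, here is a sketch (not from the original answer) of how the fix plugs back into the pipeline from the question above. Because readData was modified in place after the reader ran, the filter is fed the data object directly with SetInputData rather than SetInputConnection:

contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputData(readData)  # use the modified data object, not the reader's port
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()

result = contacts.GetOutput()
# With the active scalars set, these counts should now be non-zero.
print(result.GetNumberOfPoints(), result.GetNumberOfCells())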