Does marching cubes in vtk require specific input? - python

The marching cubes class does not generate anything after I pass a vtkImageData to it.
I am trying to generate a surface from a 3D numpy array. I used evtk to export the array to a .vti (structured points) file, stored as point data, and then read the file with vtkXMLImageDataReader. After passing the reader's output (vtkImageData) to the vtkMarchingCubes class, the class returns a vtkPolyData with no points or cells. Since vtkImageData is used in the VTK wiki's example, the MarchingCubes class should have no problem handling it.
I tried exporting the ImageData generated by the reader, and the output can be visualized in ParaView without any problem.
I am just wondering whether there are any special requirements for the marching cubes class in VTK to generate the surface. It would be even better if the 3D numpy array could be converted directly to a data type suitable for MarchingCubes.
Here is my code to read the .vti file and pass it to marching cubes:
import vtk

dataReader = vtk.vtkXMLImageDataReader()
dataReader.SetFileName('model_name.vti')
dataReader.Update()
# Extract the surface of the label with value 1
contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputConnection(dataReader.GetOutputPort())
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()

If you wish to stay in the Python/numpy world, then I would suggest using the marching cubes implementation from here (or look at the isosurface routines in pyqtgraph, mayavi, ...). Although there are many tutorials on moving data to and from VTK, if you are already programming with numpy, I wouldn't mix the two methodologies.
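As a concrete illustration of that suggestion (not necessarily the implementation linked above), here is a minimal sketch with scikit-image's numpy-native marching cubes; the volume contents and the 0.5 threshold level are placeholders for a binary mask:
import numpy as np
from skimage import measure

# Toy binary volume: a filled box inside a 30^3 grid (placeholder data)
volume = np.zeros((30, 30, 30), dtype=np.uint8)
volume[10:20, 10:20, 10:20] = 1

# Extract the isosurface at level 0.5, i.e. the boundary of the box
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)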

Although this question is quite old already, I ran into a very similar problem.
As YC.Chui pointed out in his comment, we need to set the scalar values in the read data. For me this helped:
import vtk

reader = vtk.vtkXMLImageDataReader()
reader.SetFileName('model_name.vti')
reader.Update()
readData = reader.GetOutput()
# Make the first point-data array the active scalars, which is what marching cubes contours on
readData.GetPointData().SetScalars(readData.GetPointData().GetArray(0))
I am quite new to VTK and have absolutely no idea why this solves the problem and whether there is a better way. Feel free to comment and correct me!
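Regarding the wish in the question to feed the 3D numpy array to MarchingCubes directly, here is a minimal sketch using vtk.util.numpy_support; the array shape, label value, and Fortran-order flattening are assumptions for a simple (x, y, z)-indexed labeled volume, not taken from the original data:
import numpy as np
import vtk
from vtk.util.numpy_support import numpy_to_vtk

# Placeholder labeled volume, indexed as (x, y, z)
labels = np.zeros((30, 30, 30), dtype=np.uint8)
labels[10:20, 10:20, 10:20] = 1

image = vtk.vtkImageData()
image.SetDimensions(labels.shape)
# VTK expects the x index to vary fastest, hence Fortran-order flattening
vtk_array = numpy_to_vtk(labels.ravel(order='F'), deep=True,
                         array_type=vtk.VTK_UNSIGNED_CHAR)
image.GetPointData().SetScalars(vtk_array)

contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputData(image)
contacts.SetValue(0, 1)
contacts.Update()
print(contacts.GetOutput().GetNumberOfPoints())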

pygalmesh setting the proper variables and explanation

For my research I am using pygalmesh to create a mesh from an array. The problem I am trying to solve is a vascular network: in it, there are multiple vessels that are not meshed while the surrounding tissue is meshed. I am able to create the array, but I am having trouble setting the correct variables. In the README documentation there are a lot of variables for which I can't find any information.
For example :
max_edge_size_at_feature_edges=0.025,
min_facet_angle=25,
max_radius_surface_delaunay_ball=0.1,
max_facet_distance=0.001
Is there a file where they explain all those variables, i.e. what they actually change in the mesh, and which ones can be passed?
My current goal with meshing is to reduce the number of 2D elements around my vessels, to reduce my array dimension in the later computation.
Thanks!
PS: If there are other meshing alternatives that you know of that can mimic pygalmesh meshing from an array and are easy to use, let me know!
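For reference, a minimal sketch of how size parameters like these are passed when meshing a labeled array with pygalmesh.generate_from_array; the array contents, voxel size, and threshold values below are placeholders, not taken from the question, and coarser limits generally mean fewer elements:
import numpy as np
import pygalmesh

# Placeholder labeled volume: 0 = background, 1 = tissue
vol = np.zeros((50, 50, 50), dtype=np.uint8)
vol[10:40, 10:40, 10:40] = 1

voxel_size = (0.1, 0.1, 0.1)

# Larger max_facet_distance / max_cell_circumradius -> fewer, coarser elements
mesh = pygalmesh.generate_from_array(
    vol,
    voxel_size,
    max_facet_distance=0.05,
    max_cell_circumradius=0.3,
)
mesh.write("tissue.vtu")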

Generate a PointCloud object in Open3D from RealSense data

I'm trying to convert data captured from an Intel RealSense device into an Open3D PointCloud object that I then need to process. For the moment I only have the rosbag sample files to work with, but I think a similar procedure should be used with the direct stream from the device. So far I managed to read and display the files using pyrealsense2 and OpenCV.
Now I'm trying to use Open3D. For example (using the Structured Light Short-Range Example):
import open3d as o3d
bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_image = bag_reader.next_frame()
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))
This results in an error saying:
TypeError: create_from_rgbd_image(): incompatible function arguments.
So I tried to create an open3d.geometry.RGBDImage manually with the "correct" format:
import numpy as np
raw_rgb = np.array(rgbd_image.color)
raw_depth = np.array(rgbd_image.depth)
new_rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(raw_rgb), o3d.geometry.Image(raw_depth))
Now there is no TypeError when creating the PointCloud, but the result is completely wrong. Visualizing it with
o3d.visualization.draw_geometries([pcd])
shows a badly distorted cloud, when in reality it should look like the actual captured scene (note that I'm only processing the first frame, while the rosbag file is a video).
I also tried to mess around with the intrinsic and extrinsic matrices using the parameters extracted from the rosbag file, but the point cloud still looks really messed up.
What am I doing wrong? What is the way to correctly create a point cloud in Open3D from RealSense data?
For those who encounter the same problem: the point is that functions from two different branches are mixed here, one from the legacy API and one from the tensor API. In this case o3d.t.io is used together with o3d.geometry. You need to use either o3d.io + o3d.geometry, or o3d.t.io + o3d.t.geometry. Another option is converting tensor data with to_legacy()/from_legacy().
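A minimal sketch of the conversion option, turning the tensor-API frame into a legacy RGBDImage before building the point cloud; the PrimeSense default intrinsics are only a placeholder and should be replaced by the actual RealSense intrinsics for a geometrically correct cloud:
import open3d as o3d

bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_t = bag_reader.next_frame()          # o3d.t.geometry.RGBDImage (tensor API)

# Convert to the legacy API so it matches the legacy PointCloud factory
rgbd_legacy = rgbd_t.to_legacy()

# Placeholder intrinsics; use the camera's real intrinsics for a correct cloud
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_legacy, intrinsic)
o3d.visualization.draw_geometries([pcd])
bag_reader.close()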

Object tracking without openCV

I am trying to build an algorithm to detect some objects and track them over time. My input data is a TIFF multi-stack file, which I read as a numpy array. I apply a U-Net model to create a binary mask and then identify the coordinates of the individual objects using scipy.
Up to here everything more or less works, but I just cannot get my head around the tracking. I have a dictionary where the keys are the frame numbers and the values are lists of tuples. Each tuple contains the coordinates of one object.
Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbour tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics).
The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack TIFF files (OpenCV can't).
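A minimal sketch of how the per-frame coordinates described in the question could be linked into tracks with trackpy; the dictionary contents, search_range, and memory values are placeholders:
import pandas as pd
import trackpy as tp

# detections: {frame_number: [(x, y), ...]}, as described in the question
detections = {0: [(10.0, 12.0), (40.0, 41.0)],
              1: [(11.0, 12.5), (39.0, 42.0)]}

rows = [{"frame": f, "x": x, "y": y}
        for f, coords in detections.items() for (x, y) in coords]
features = pd.DataFrame(rows)

# Link detections across frames by proximity within search_range pixels;
# memory lets an object vanish for a few frames and still keep its ID
tracks = tp.link(features, search_range=15, memory=3)
print(tracks.head())   # adds a 'particle' column with the track ID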

Read a vtr file transform it and write it again creates no file

I am completely new to dealing with VTK (/vtr/vtu/...) outside of ParaView, but I want to make my workflow a bit easier, so I want to do some data transformation outside of ParaView.
Basically I have two simulations, but the origin and the axes are different. The origin difference changes every time step, so I want to transform my files so that they are aligned before opening them in ParaView; that way I don't constantly have to change the values in the Transform filter when I want to look at a different time step. I first tried to achieve this by just rotating and translating one file.
My approach is as follows (based on a lot of other webpages, so unfortunately I can no longer track down what came from where):
import vtk
reader = vtk.vtkXMLRectilinearGridReader()
reader.SetFileName(file_name)
reader.Update()
data = reader.GetOutput()
transform = vtk.vtkTransform()
transform.RotateZ(90)
transform.Translate(2.34375, 4.6875, 2.34375)
transformFilter = vtk.vtkTransformFilter()
transformFilter.SetTransform(transform)
transformFilter.SetInputData(data)
transformFilter.Update()
writer = vtk.vtkXMLRectilinearGridWriter()
writer.SetInputData(transformFilter.GetOutput())
writer.SetFileName("Output.vtr")
writer.Update()
Now I don't get any errors, but no file is created either, and I don't know where I go wrong. Any help is highly appreciated.
(By the way, I tried this answer and that actually does create a file.)
EDIT: Maybe I found out why it goes wrong, but I still don't know how to fix it. If I print data it says vtkRectilinearGrid, while if I print transformFilter.GetOutput() it says vtkStructuredGrid. I thought the transform filter would keep the same grid type, but apparently not. Does anyone have an idea how to make it keep the same grid type?
A vtkRectilinearGrid is oriented along the main axes. This allows some optimizations, like having implicit point coordinates.
The output of the Transform filter cannot be converted to a vtkRectilinearGrid, mainly because you can no longer assume its orientation. Points cannot be implicit as in the RectilinearGrid; the object stores each of them explicitly.
As you noticed, the solution is to change how you write your data: you can write a .vts file with vtkXMLStructuredGridWriter.
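A minimal sketch of that fix, reusing the transform values from the question and swapping in the structured grid writer; the input file name is a placeholder:
import vtk

reader = vtk.vtkXMLRectilinearGridReader()
reader.SetFileName("input.vtr")   # placeholder file name
reader.Update()

transform = vtk.vtkTransform()
transform.RotateZ(90)
transform.Translate(2.34375, 4.6875, 2.34375)

transformFilter = vtk.vtkTransformFilter()
transformFilter.SetTransform(transform)
transformFilter.SetInputConnection(reader.GetOutputPort())
transformFilter.Update()

# The transformed output is a vtkStructuredGrid, so use the matching writer
writer = vtk.vtkXMLStructuredGridWriter()
writer.SetInputData(transformFilter.GetOutput())
writer.SetFileName("Output.vts")
writer.Write()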

Scale an .stl file using python script

I have an .stl file and I want to write a Python script to scale it. So for example, if I have a file with a cube of dimensions 2x2x2 mm and I set my scale to 10, the script should spit out an .stl file with a cube of dimensions 20x20x20 mm.
I searched and found the numpy-stl library for modifying .stl files. However, I can't seem to find any function that scales an .stl. Can someone point me in the right direction as to what to use?
I know this is an old question, but you should be able to directly modify the mesh.x, mesh.y and mesh.z attributes to accomplish this.
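A minimal sketch of that approach with numpy-stl; the file names and scale factor are placeholders:
from stl import mesh  # numpy-stl

scale = 10.0
your_mesh = mesh.Mesh.from_file('cube.stl')

# Scale about the origin by multiplying every vertex coordinate
your_mesh.x *= scale
your_mesh.y *= scale
your_mesh.z *= scale

your_mesh.save('cube_scaled.stl')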
I also stumbled across this post while trying to get an answer to the same problem. I assume you are using numpy-stl.
My solution was to create a rotation matrix R using the scaling parameter instead of rotation angles, and then use mesh.rotate_using_matrix(R, point=centroid).
It seems to work for me.
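A sketch of how that matrix trick might look with numpy-stl's rotate_using_matrix as used in the answer above; the uniform scale factor and the centroid computation are assumptions for illustration:
import numpy as np
from stl import mesh  # numpy-stl

scale = 10.0
your_mesh = mesh.Mesh.from_file('cube.stl')

# Diagonal matrix with the scale factor in place of a rotation,
# applied about the centroid so the part grows in place
R = np.diag([scale, scale, scale])
centroid = your_mesh.points.reshape(-1, 3).mean(axis=0)
your_mesh.rotate_using_matrix(R, point=centroid)

your_mesh.save('cube_scaled_about_centroid.stl')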
