I want to do some 3D model processing using Python.
I was told 3D models are made up of 4D matrices. Is it possible to import a 3D model from MeshLab or Blender or some other software and convert it into a matrix or something of the sort so I could do some processing?
Yes, it is possible. You need the plyfile library. Since you mentioned MeshLab, I am assuming the file format you're trying to import is .ply. Use the code below.
from plyfile import PlyData

# read the PLY file; each element exposes its properties as numpy arrays
data = PlyData.read('my_data.ply')
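From there you can turn the vertex data into an ordinary NumPy matrix for processing. A minimal sketch, assuming the file has a standard vertex element with x/y/z properties:

import numpy as np

# stack the per-vertex properties into an (N, 3) coordinate matrix
vertex = data['vertex']
points = np.column_stack((vertex['x'], vertex['y'], vertex['z']))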
While you can use a 4×4 transformation matrix (homogeneous coordinates) to apply multiple transformations in one step, for the most part I would say it is more common to access each property either as an array of three or four values or to access each element directly.
import bpy
from math import radians

obj = bpy.context.active_object

# translate by offsetting each location component
obj.location.x += 1.0
obj.location.y -= 0.2
obj.location.z += 0.8

# set the rotation directly (Euler angles, in radians)
obj.rotation_euler = (radians(45.0), radians(15.8), radians(0.0))
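For reference, the 4×4-matrix route mentioned above looks like this in Blender (a minimal sketch; Blender 2.8+ uses @ for matrix multiplication):

import bpy
from math import radians
from mathutils import Matrix

obj = bpy.context.active_object

# compose a translation and a rotation into one 4x4 transform
transform = Matrix.Translation((1.0, -0.2, 0.8)) @ Matrix.Rotation(radians(45.0), 4, 'Z')

# apply both in a single step by updating the object's world matrix
obj.matrix_world = transform @ obj.matrix_world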
Blender includes a full Python interpreter, so you should be able to do most of what you want inside Blender without having to export any data.
If you want to try Blender, you can get more Blender-specific help at blender.stackexchange.com.
I need to visualize a set of hairstyles in 3D using open3d (or any similar package, if that makes things easier). In particular, I am looking for a way to visualize one of the hairstyles of the USC-HairSalon dataset available here.
Each hairstyle consists of 10,000 hair strands, with each strand made of 100 points. In Blender, hairstyles can be represented in 3D using the script available here.
Do you have any suggestions on what could be an efficient way to represent all hair strands? Each hairstyle is stored in a .data file and needs to be processed before being rendered.
To load the reference head, I am using a few lines of code:
import open3d as o3d
mesh = o3d.io.read_triangle_mesh("./USC-HairSalon_dataset/Raw_data/head_model.obj")
mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])
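One direction I'm considering is packing all strands into a single open3d.geometry.LineSet. A rough sketch of what I mean (load_strands is a hypothetical helper that would parse a .data file into an (n_strands, 100, 3) NumPy array; the parsing itself is not shown):

import numpy as np

# hypothetical helper: parse a .data file into an (n_strands, 100, 3) array
strands = load_strands("./USC-HairSalon_dataset/Raw_data/some_hairstyle.data")

n_strands, n_points, _ = strands.shape
points = strands.reshape(-1, 3)

# connect consecutive points within each strand, never across strands
idx = np.arange(n_strands * n_points).reshape(n_strands, n_points)
lines = np.concatenate([np.column_stack((row[:-1], row[1:])) for row in idx])

line_set = o3d.geometry.LineSet(
    o3d.utility.Vector3dVector(points),
    o3d.utility.Vector2iVector(lines))
o3d.visualization.draw_geometries([mesh, line_set])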
I have a STEP file which I converted into an STL file. I need the STL format to be able to extract certain features out of the 3D mesh. Further, I want to visualize the 3D model in an interactive manner, i.e. the user should be able to rotate and scale the model. The code I found, which uses Open3D, doesn't work so far:
import open3d as o3d
def visualize(mesh):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(mesh)
    vis.run()
    vis.destroy_window()
mesh = o3d.io.read_triangle_model(path_stl_file)
visualize(mesh)
Any idea how to do it the right way? Thanks.
Argh, I used the wrong importer. The function to be used is:
read_triangle_mesh(path_stl_file)
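Putting it together with the visualize helper from the question:

mesh = o3d.io.read_triangle_mesh(path_stl_file)
mesh.compute_vertex_normals()  # optional, but gives proper shading
visualize(mesh)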
I'm trying to convert data captured from an Intel RealSense device into an Open3D PointCloud object that I then need to process. For the moment I only have the rosbag sample files to work with, but I think a similar procedure should be used with the direct stream from the device. So far I managed to read and display the files using pyrealsense2 and OpenCV.
Now I'm trying to use Open3D. For example (using the Structured Light Short-Range Example):
import open3d as o3d
bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_image = bag_reader.next_frame()
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))
This results in an error saying:
TypeError: create_from_rgbd_image(): incompatible function arguments.
So I tried to create an open3d.geometry.RGBDImage manually, in the "correct" format:
import numpy as np
raw_rgb = np.array(rgbd_image.color)
raw_depth = np.array(rgbd_image.depth)
new_rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(raw_rgb), o3d.geometry.Image(raw_depth))
Now there is no TypeError when creating the PointCloud, but the result is completely wrong; it displays like this:
o3d.visualization.draw_geometries([pcd])
When in reality it should look something like this (note that I'm only processing the first frame, while the rosbag file is actually a video):
I also tried to mess around with the intrinsic and extrinsic matrices using the parameters extracted from the rosbag file, but the point cloud still looks really messed up.
What am I doing wrong? What's the way to correctly recreate a point cloud in Open3D from RealSense data?
For those who encounter the same problem: the point is that functions from different branches are mixed here, one from the legacy branch and the other from the tensor branch. In this case, o3d.t.io was used together with o3d.geometry. You need to use either o3d.io + o3d.geometry, or o3d.t.io + o3d.t.geometry. Another option is to convert the Tensor data using to_legacy()/from_legacy().
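For example, to stay on the legacy branch you can convert the tensor frame first (a sketch of the to_legacy() route; method names can vary slightly between Open3D versions):

bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_t = bag_reader.next_frame()  # tensor-based (o3d.t.geometry) RGBDImage

# convert to a legacy RGBDImage before calling legacy geometry functions
rgbd = rgbd_t.to_legacy()

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))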
I am trying to build an algorithm to detect some objects and track them over time. My input data is a multi-stack TIFF file, which I read as a NumPy array. I apply a U-Net model to create a binary mask and then identify the coordinates of single objects using scipy.
Up to here everything more or less works, but I just cannot get my head around the tracking. I have a dictionary where keys are the frame numbers and values are lists of tuples. Each tuple contains the coordinates of one object.
Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbor tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics later on).
The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack TIFF files (OpenCV can't).
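For anyone with a similar frame-to-detections dictionary, the linking step looks roughly like this (a sketch; detections stands for the dictionary described in the question, and search_range/memory need tuning to your data):

import pandas as pd
import trackpy as tp

# detections: {frame_number: [(x, y), ...]}, as described in the question
rows = [(frame, x, y) for frame, pts in detections.items() for x, y in pts]
df = pd.DataFrame(rows, columns=['frame', 'x', 'y'])

# link detections into trajectories; search_range is the maximum expected
# displacement per frame, memory lets objects disappear for a few frames
linked = tp.link(df, search_range=15, memory=3)
# 'linked' now carries a 'particle' column with a track id per detection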
I am new to 3D geometry. I am trying to extract a mesh from a .glb file using the Python library trimesh, but I couldn't figure out the proper way of doing that.
My requirement is that I need a 3D mesh (as an adjacency matrix) of an object so that I can apply non-Euclidean convolutional operators on it.
Any suggestions on what I should be looking at?
You can use trimesh.load to load your glTF file. Note that the return type depends on the file type of your model file. For glTF files it will return an instance of trimesh.Scene. The scene has all sorts of attributes like the camera and lights, but also the geometries. This is because glTF files can contain more than just model data. Each geometry is an instance of trimesh.Trimesh, which is the base class for geometries and has an edges_sparse property representing the adjacency matrix of the model.
To put it all together:
import trimesh

scene = trimesh.load("some_file.glb")
geometries = list(scene.geometry.values())
geometry = geometries[0]
adjacency_matrix = geometry.edges_sparse
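A quick sanity check on the result (in current trimesh versions edges_sparse is a scipy.sparse matrix with one row/column per vertex):

print(adjacency_matrix.shape)   # (n_vertices, n_vertices)
print(geometry.vertices.shape)  # (n_vertices, 3)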
It's a bit tedious to figure this out using the documentation. I tend to look at the source code or turn on the debugger in my IDE:
trimesh.Scene: https://github.com/mikedh/trimesh/blob/master/trimesh/scene/scene.py
trimesh.Trimesh: https://github.com/mikedh/trimesh/blob/master/trimesh/base.py