I'm trying to convert data captured from an Intel RealSense device into an Open3D PointCloud object that I then need to process. For the moment I only have the rosbag sample files to work with, but I think a similar procedure should be used with the direct stream from the device. So far I managed to read and display the files using pyrealsense2 and OpenCV.
Now I'm trying to use Open3D. For example (using the Structured Light Short-Range Example):
import open3d as o3d
bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_image = bag_reader.next_frame()
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_image,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))
This results in an error saying:
TypeError: create_from_rgbd_image(): incompatible function arguments.
So I tried to create an o3d.geometry.RGBDImage manually in the "correct" format:
import numpy as np
raw_rgb = np.array(rgbd_image.color)
raw_depth = np.array(rgbd_image.depth)
new_rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    o3d.geometry.Image(raw_rgb), o3d.geometry.Image(raw_depth))
Now there is no TypeError when creating the PointCloud, but the result is completely wrong when I display it:
o3d.visualization.draw_geometries([pcd])
When in reality it should look like the actual scene (note that I'm only processing the first frame, even though the rosbag file is a video).
I also tried to mess around with the intrinsic and extrinsic matrices using the parameters extracted from the rosbag file, but the point cloud still looks completely wrong.
What am I doing wrong? What's the way to correctly recreate a point cloud in Open3D from RealSense data?
For those who encounter the same problem: the issue is that functions from two different branches of the API are mixed here, one from the legacy branch and one from the Tensor branch (o3d.t.io together with o3d.geometry). You need to use either o3d.io + o3d.geometry, or o3d.t.io + o3d.t.geometry. Another option is to convert Tensor data with to_legacy()/from_legacy().
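A minimal sketch of both routes, assuming a recent Open3D build. The 3x3 intrinsic matrix below is a placeholder; for real data read the calibration from the bag (e.g. bag_reader.metadata.intrinsics, if your version exposes it), and note that older releases name the conversion to_legacy_rgbd_image() instead of to_legacy().

import open3d as o3d

bag_reader = o3d.t.io.RSBagReader()
bag_reader.open("structured.bag")
rgbd_t = bag_reader.next_frame()  # o3d.t.geometry.RGBDImage (Tensor branch)

# Route 1: stay in the Tensor branch (o3d.t.io + o3d.t.geometry).
# Placeholder pinhole intrinsics; replace with the values from the bag.
intrinsics = o3d.core.Tensor([[525.0,   0.0, 319.5],
                              [  0.0, 525.0, 239.5],
                              [  0.0,   0.0,   1.0]])
pcd_t = o3d.t.geometry.PointCloud.create_from_rgbd_image(rgbd_t, intrinsics)

# Route 2: convert to the legacy branch and keep using o3d.geometry as in the question.
rgbd_legacy = rgbd_t.to_legacy()
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
    rgbd_legacy,
    o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault))

o3d.visualization.draw_geometries([pcd])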
Related
I have a STEP file which I converted into an STL file. I need the STL format to be able to extract certain features out of the 3D mesh. Further, I want to visualize the 3D model in an interactive manner, i.e. the user should be able to rotate and scale the model. The code I found, which uses Open3D, doesn't work so far:
import open3d as o3d

def visualize(mesh):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(mesh)
    vis.run()
    vis.destroy_window()

mesh = o3d.io.read_triangle_model(path_stl_file)
visualize(mesh)
Any idea how to do it the right way? Thx.
Argh, I used the wrong importer. The function to be used is:
read_triangle_mesh(path_stl_file)
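For completeness, a minimal sketch of the corrected snippet; the compute_vertex_normals() call is an optional addition (STL files carry no vertex normals, so the mesh renders flat without it), not part of the original fix.

import open3d as o3d

mesh = o3d.io.read_triangle_mesh(path_stl_file)
mesh.compute_vertex_normals()  # optional: lets the viewer shade the STL properly
visualize(mesh)                # the visualize() helper from the question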
I am trying to build up an algorithm to detect some objects and track them over time. My input data is a tif multi-stack file, which I read as a np array. I apply a U-Net model to create a binary mask and then identify the coordinates of single objects using scipy.
Up to here everything kind of works, but I just cannot get my head around the tracking. I have a dictionary where keys are the frame numbers and values are lists of tuples; each tuple contains the coordinates of one object.
Now I have to link the objects together over time, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbor tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced matching metrics).
The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack TIFF files (OpenCV can't).
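A rough sketch of what the linking step might look like with the dictionary-of-coordinates structure described in the question; the column names are the ones trackpy expects, while search_range and memory are values you would tune for your data, not taken from the original answer.

import pandas as pd
import trackpy as tp

# detections = {frame_number: [(x, y), (x, y), ...], ...}  as in the question
rows = [{'frame': frame, 'x': x, 'y': y}
        for frame, coords in detections.items()
        for x, y in coords]
features = pd.DataFrame(rows)

# Link detections across frames into trajectories.
tracks = tp.link(features, search_range=15, memory=3)
# 'tracks' now has a 'particle' column identifying each object over time.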
I want to do some 3D model processing using Python.
I was told 3D models are made up of 4D matrices. Is it possible to import a 3D model from MeshLab or Blender or some other software and convert it into a matrix or something of the sort so I could do some processing?
Yes, it is possible. You need the plyfile library. Since you mention MeshLab, I am assuming the file format you're trying to import is .ply. Use the code below.
from plyfile import PlyData
data = PlyData.read('my_data.ply')
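From there the vertex coordinates can be pulled out as numpy arrays; a small sketch, assuming the usual 'x', 'y', 'z' vertex properties (adjust the names if your file differs):

import numpy as np
from plyfile import PlyData

data = PlyData.read('my_data.ply')
vertex = data['vertex']
points = np.column_stack([vertex['x'], vertex['y'], vertex['z']])
print(points.shape)  # (number_of_vertices, 3)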
While you can use a 4x4 matrix to apply multiple transformations in one step, for the most part I would say it is more common to access each property either as an array of three or four values or to access each element directly.
import bpy
from math import radians  # needed for the rotation below

obj = bpy.context.active_object
obj.location.x += 1.0
obj.location.y -= 0.2
obj.location.z += 0.8
obj.rotation_euler = (radians(45.0), radians(15.8), radians(0.0))
Blender includes a full python interpreter, so you should be able to do most of what you want using blender without having to export any data.
If you want to try blender, you can get more blender specific help at blender.stackexchange.
I have two stereo images that I'd like to use to compute a depth map. While I unfortunately do not know C/C++, I do know python-- so when I found this tutorial, I was optimistic.
Unfortunately, the tutorial appears to be somewhat out of date. It not only needs to be tweaked to run at all (renaming 'createStereoBM' to 'StereoBM') but when it does run, it doesn't give a good result, even on the example stereo-images that were used in the tutorial itself.
Here's an example:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('Yeuna9x.png',0)
imgR = cv2.imread('SuXT483.png',0)
stereo = cv2.StereoBM(1, 16, 15)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity,'gray')
plt.show()
The result:
This looks very different from what the author of the tutorial achieves:
(source: opencv.org)
Tweaking the parameters does not improve matters. All documentation I've been able to find is for the original C version of OpenCV, not the Python library equivalent. I unfortunately haven't been able to use this to improve things.
Any help would be appreciated!
You have the images the wrong way around.
Look at the images: the tin behind the lamp lets you work out the camera locations of the two shots.
Just change this:
imgR = cv2.imread('Yeuna9x.png', 0)   # was imgL
imgL = cv2.imread('SuXT483.png', 0)   # was imgR
If you look at the image in the tutorial which they say is the left frame, it is the same as your right one.
Here's my result after the change.
It is possible that you need to keep adjusting the parameters of the block matching algorithm.
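As a sketch of what that tuning might look like with the current Python bindings (StereoBM_create replaces the old StereoBM constructor; the parameter values are starting points to experiment with, not tuned results):

import cv2

imgR = cv2.imread('Yeuna9x.png', 0)
imgL = cv2.imread('SuXT483.png', 0)

# numDisparities must be a multiple of 16 and blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)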
Have a look at this blog article: https://erget.wordpress.com/2014/03/13/building-an-interactive-gui-with-opencv/
The article's author has composed a set of classes to make the process of calibrating the cameras more streamlined than the opencv tutorial. These classes are available as pypi package: https://github.com/erget/StereoVision
Hope this helps :)
The camera is translated vertically instead of horizontally. Rotate the images 90 degrees, then try. (Prove it to yourself by rotating the screen. I just picked up my laptop and turned it on its edge.)
You mention different software; perhaps a row-major/column-major kind of thing between the original and pyOpenCV.
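If you want to test the rotation idea, one way to rotate both inputs before matching (clockwise here is a guess; flip the direction if the result looks mirrored):

import cv2

imgL = cv2.rotate(cv2.imread('Yeuna9x.png', 0), cv2.ROTATE_90_CLOCKWISE)
imgR = cv2.rotate(cv2.imread('SuXT483.png', 0), cv2.ROTATE_90_CLOCKWISE)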
The marching cubes class does not generate anything after I pass an ImageData to it.
I am trying to generate a surface from a 3D numpy array. I used evtk to export the array to a .vti (Structured Points) file, stored as point data, and then read the file with vtkXMLImageDataReader. After passing the reader's output (vtkImageData) to the vtkMarchingCubes class, the class returns a vtkPolyData with no points or cells. Since vtkImageData is used in the VTK wiki's example, the marching cubes class should have no problem handling it.
I tried to export the ImageData generated by the Reader, the output can be visualized by ParaView without any problem.
I'm just wondering whether there are any special requirements for marching cubes in VTK to generate the surface. It would be even better if the 3D numpy array could be converted directly into a data type suitable for MarchingCubes.
Here is my code to read the .vti file and pass it to marching cubes:
dataReader = vtk.vtkXMLImageDataReader()
dataReader.SetFileName('model_name.vti')
dataReader.Update()
contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputConnection(dataReader.GetOutputPort())
contacts.SetValue(0,1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()
If you wish to do this in the python/numpy world, then I would suggest using the marching cubes implementation from here (or look at isosurface routines in pyqtgraph, mayavi ...). Although there are many tutorials on moving data to and from VTK, if you are already programming with numpy, I wouldn't mix the two methodologies.
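As one concrete numpy-world option (a suggestion of mine, not necessarily the implementation linked above), scikit-image ships skimage.measure.marching_cubes, which works directly on the array:

import numpy as np
from skimage import measure

volume = np.load('volume.npy')  # placeholder for the 3D numpy array from the question
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)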
Although this question is quite old already, I ran into a very similar problem.
As YC.Chui pointed out in his comment, we need to set the scalar values in the read data. For me this helped:
reader = vtk.vtkXMLImageDataReader()
reader.SetFileName('model_name.vti')
reader.Update()
readData = reader.GetOutput()
readData.GetPointData().SetScalars(readData.GetPointData().GetArray(0))
I am quite new to VTK and have absolutely no idea why this solves the problem and whether there is a better way. Feel free to comment and correct me!
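One likely reason this helps (a guess on my part, not from the original answer): the marching cubes filter contours the active point scalars, and the XML reader loads the array but does not always mark it as active. With the scalars set, the patched data object can be fed straight back into the pipeline from the question, using SetInputData instead of SetInputConnection since we now hold a data object rather than an algorithm output:

contacts = vtk.vtkDiscreteMarchingCubes()
contacts.SetInputData(readData)  # readData from the snippet above
contacts.SetValue(0, 1)
contacts.ComputeNormalsOn()
contacts.Update()
result = contacts.GetOutput()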