DICOM Volume to Point Cloud using SimpleITK and Open3D - python

I need to convert a knee MRI into a point cloud representation.
Here I load a series:
import SimpleITK as sitk
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames(path)
reader.SetFileNames(dicom_names)
sag = reader.Execute()
sag = sitk.Cast(sag, sitk.sitkFloat32)
Then I want to keep the real physical positions of all my voxels:
vol = sitk.GetArrayFromImage(sag)
rx=sag.GetSpacing()[0]
ry=sag.GetSpacing()[2]
rz=sag.GetSpacing()[1]
origin=sag.GetOrigin()
So the sag image information is:
origin: (117.90852282166, -103.56080762947, 53.280393663713)
size: (320, 320, 32)
spacing: (0.4375, 0.4375, 3.300000171769498)
direction: (0.13914594144969092, 0.03131872101712905, -0.9897765124976096, 0.9902718853820294, -0.0044006872607998656, 0.13907633505939634, -3.951777152039084e-09, -0.9994997607130699, -0.031626386681315836)
Now I want to convert this volume into a point cloud using Open3D.
I read the Open3D documentation (http://www.open3d.org/docs/0.9.0/tutorial/Basic/working_with_numpy.html) and tried this:
z1, y1, x1 = np.meshgrid(np.arange(vol.shape[0]) * rx + origin[0],
                         np.arange(vol.shape[1]) * ry + origin[1],
                         np.arange(vol.shape[2]) * rz + origin[2],
                         indexing='ij')
print(np.size(x1))
XYZ=np.zeros((np.size(x1),3))
XYZ[:,0] = np.reshape(x1, -1)
XYZ[:,1] = np.reshape(y1, -1)
XYZ[:,2] = np.reshape(z1, -1)
# Pass xyz to Open3D.o3d.geometry.PointCloud and visualize
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(XYZ)
o3d.io.write_point_cloud("voltest.ply", pcd)
But it's not what I want: everything comes out flattened, and I want a 3D representation of the whole volume.
I don't have any segmentation, which is why I take the whole volume.
I looked for solutions with VTK (https://vtk.org/doc/nightly/html/classvtkConvertToPointCloud.html) and the Point Cloud Library (https://pointclouds.org/), but I couldn't work out how to do it.
I hope what I want to do is clear. Do you have any suggestions?
Thanks a lot
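For reference, one way to keep the true physical positions is to build the index-to-physical mapping from the origin, spacing and direction matrix yourself (this is what sag.TransformIndexToPhysicalPoint does voxel by voxel). A minimal sketch, assuming sag is the image loaded above; the percentile threshold is only an illustrative way to avoid writing a solid block of 3.3 million points:

import numpy as np
import SimpleITK as sitk
import open3d as o3d

# physical = origin + D @ (spacing * index), where D is the 3x3 direction matrix
D = np.array(sag.GetDirection()).reshape(3, 3)
spacing = np.array(sag.GetSpacing())
origin = np.array(sag.GetOrigin())

nx, ny, nz = sag.GetSize()                       # SimpleITK size is (x, y, z)
ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing='ij')
idx = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3).astype(np.float64)
xyz = origin + (idx * spacing) @ D.T             # one physical point per voxel

# GetArrayFromImage returns (z, y, x); transpose so it matches the (x, y, z) index order
intensity = sitk.GetArrayFromImage(sag).transpose(2, 1, 0).reshape(-1)
keep = intensity > np.percentile(intensity, 75)  # arbitrary threshold, adjust or drop

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz[keep])
o3d.io.write_point_cloud("voltest_physical.ply", pcd)

This keeps the anisotropic spacing and the oblique direction cosines, so the cloud is no longer flattened along one axis.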

Related

depthImg from getCameraImage() is full of 'nan'

This is the code I'm using the generate the image:
pos_vec = [.15, .15, .15]
up_vec = camera_upvec(pos_vec)
viewMat = p.computeViewMatrix(
    cameraEyePosition=pos_vec,
    cameraTargetPosition=[0, 0, 0],
    cameraUpVector=up_vec)
projMat = p.computeProjectionMatrixFOV(
    fov=70.25,
    aspect=1.0,
    nearVal=0,
    farVal=3.1)
width, height, rgbImg, depthImg, segImg = p.getCameraImage(
    width=1280,
    height=720,
    viewMatrix=viewMat,
    projectionMatrix=projMat)
print(depthImg)
this is my camera_upvec function if that helps:
import numpy as np
from numpy import cos, sin, arccos, arcsin

def camera_upvec(pos_vec):
    theta = arccos(pos_vec[-1])
    sintheta = sin(theta)
    phi = arccos(pos_vec[0]/sintheta)
    u1 = np.array([cos(theta)*cos(phi), cos(theta)*sin(phi), -sin(theta)])
    # u2 = np.array([-sin(phi), cos(phi), 0])
    return -u1
And this is what the GUI with the images looks like:
[Screenshot: GUI with image visuals]
Is there something obvious going wrong that anyone can spot?
I tried different camera angles and other nearVal/farVal settings; I expected depthImg not to be full of NaN values.
I figured it out, btw. It was because nearVal needs to be a nonzero value. I'm guessing pybullet divides by nearVal at some point.
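For completeness, a sketch (the standard OpenGL depth linearization, not a pybullet-specific API) of how the [0, 1] depth buffer maps back to metric distance, which also shows why near must be nonzero:

import numpy as np

def depth_buffer_to_metric(depth_buffer, near, far):
    # getCameraImage returns a normalized depth buffer in [0, 1]; this recovers
    # metric distance from the camera. With near = 0 the numerator is 0 and the
    # background value depth_buffer = 1 gives 0/0, i.e. NaN.
    depth_buffer = np.asarray(depth_buffer, dtype=np.float64)
    return far * near / (far - (far - near) * depth_buffer)

# example with assumed values near=0.1, far=3.1 (the question used nearVal=0)
print(depth_buffer_to_metric(np.array([0.0, 0.5, 1.0]), near=0.1, far=3.1))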

How to convert octree to voxel grid with open3d?

I found to_voxel_grid in the Open3D documentation, but it didn't show anything after visualization. I'm wondering if I used the function the wrong way, and how to fix it.
import open3d as o3d
import numpy as np
N = 2000
armadillo = o3d.data.ArmadilloMesh()
mesh = o3d.io.read_triangle_mesh(armadillo.path)
pcd = mesh.sample_points_poisson_disk(N)
pcd.scale(1 / np.max(pcd.get_max_bound() - pcd.get_min_bound()), center=pcd.get_center())
pcd.colors = o3d.utility.Vector3dVector(np.random.uniform(0, 1, size=(N, 3)))
octree = o3d.geometry.Octree(max_depth=4)
octree.convert_from_point_cloud(pcd, size_expand=0.01)
voxel_grid = octree.to_voxel_grid()
o3d.visualization.draw_geometries([voxel_grid])
This is the result when checking with voxel_grid.get_voxels():
[Voxel with grid_index: (-2147483648, -2147483648, -2147483648), color: (0.361422, 0.375204, 0.305094)]
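A workaround that avoids the octree round-trip is to build the voxel grid straight from the point cloud. A minimal sketch continuing from the code above; the 0.05 voxel size is just an assumed value for the unit-scaled Armadillo:

voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.05)
print(len(voxel_grid.get_voxels()))   # should report many voxels with sane grid indices
o3d.visualization.draw_geometries([voxel_grid])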

How can I align / register two meshes in Open3d? (Python)

I have two .ply files that contain meshes of objects that are similar in shape. They are initially unaligned. I would like to achieve global registration for the two mesh objects. Is there a way I can do this without having to first import the point cloud data, do global registration, and then reconstruct the mesh?
I have tried the steps listed in the open3d documentation (http://www.open3d.org/docs/0.12.0/tutorial/pipelines/global_registration.html) and it works well for the point clouds. However, reconstructing a mesh from the point clouds is challenging, as they are a relatively complex shape, so I would like to avoid that.
Thank you in advance!
The main idea is that you don't need to reconstruct a mesh from the point cloud.
Call the data you have mesh_a, mesh_b, pcl_a, pcl_b.
If pcl_a/b is extracted directly from mesh_a/b, or pcl_a/b and mesh_a/b share the same transformation matrix, you can simply apply the transformation obtained from the point cloud registration to the mesh.
If your point cloud data has no relation to your mesh data, you can first sample mesh_a/b into point clouds and register those, or use the mesh vertices directly as a point cloud. The rest of the work is the same as in the first case.
Here is an example of the second case.
import copy
import numpy as np
import open3d as o3d

def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp])

def preprocess_point_cloud(pcd, voxel_size):
    print(":: Downsample with a voxel size %.3f." % voxel_size)
    pcd_down = pcd.voxel_down_sample(voxel_size)
    radius_normal = voxel_size * 2
    print(":: Estimate normal with search radius %.3f." % radius_normal)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
    radius_feature = voxel_size * 5
    print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    return pcd_down, pcd_fpfh

def execute_global_registration(source_down, target_down, source_fpfh,
                                target_fpfh, voxel_size):
    distance_threshold = voxel_size * 1.5
    print(":: RANSAC registration on downsampled point clouds.")
    print("   Since the downsampling voxel size is %.3f," % voxel_size)
    print("   we use a liberal distance threshold %.3f." % distance_threshold)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, True,
        distance_threshold,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
            o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(distance_threshold)
        ], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result

def main():
    voxel_size = 0.01
    print(":: Load two mesh.")
    target_mesh = o3d.io.read_triangle_mesh('bunny.ply')
    source_mesh = copy.deepcopy(target_mesh)
    source_mesh.rotate(source_mesh.get_rotation_matrix_from_xyz((np.pi / 4, 0, np.pi / 4)), center=(0, 0, 0))
    source_mesh.translate((0, 0.05, 0))
    draw_registration_result(target_mesh, source_mesh, np.identity(4))
    print(":: Sample mesh to point cloud")
    target = target_mesh.sample_points_uniformly(1000)
    source = source_mesh.sample_points_uniformly(1000)
    draw_registration_result(source, target, np.identity(4))
    source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
    target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
    result_ransac = execute_global_registration(source_down, target_down,
                                                source_fpfh, target_fpfh,
                                                voxel_size)
    print(result_ransac)
    draw_registration_result(source_down, target_down, result_ransac.transformation)
    draw_registration_result(source_mesh, target_mesh, result_ransac.transformation)

if __name__ == '__main__':
    main()
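If the RANSAC alignment needs tightening, a common follow-up (a sketch only, e.g. appended at the end of main(); the 0.4 factor is an assumed value) is a local ICP refinement seeded with the global result, after which the refined transform can be applied to the mesh itself:

    # refine with point-to-plane ICP on the downsampled clouds (they already have normals)
    result_icp = o3d.pipelines.registration.registration_icp(
        source_down, target_down, voxel_size * 0.4, result_ransac.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    source_mesh.transform(result_icp.transformation)   # move the mesh into alignment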

Using a Data Converter to Display 3D Volume as Images

I would like to write a data converter tool. I need to analyze the bitstream in a file to display 2D cross-sections of a 3D volume.
The dataset I am trying to view can be found here: https://figshare.com/articles/SSOCT_test_dataset_for_OCTproZ/12356705.
It's the file titled: burned_wood_with_tape_1664x512x256_12bit.raw (832 MB)
I would greatly appreciate some direction, and I'm willing to award a bounty if I could get some software to display the dataset as images using a data conversion.
As I'm totally new to this concept, I don't have code to show for this problem. However, here's a little something I tried using inspiration from other questions on SO:
import rawpy
import imageio
path = "Datasets/burned_wood_with_tape_1664x512x256_12bit.raw"
for item in path:
    item_path = path + item
    raw = rawpy.imread(item_path)
    rgb = raw.postprocess()
    rawpy.imshow(rgb)
Below I implemented the following visualization.
The example RAW file burned_wood_with_tape_1664x512x256_12bit.raw consists of 1664 samples per A-scan, 512 A-scans per B-scan, 16 B-scans per buffer, 16 buffers per volume, and 2 volumes in this file. Each sample is encoded as a 2-byte unsigned integer in little-endian order; only the 12 higher bits are used, the lower 4 bits contain zeros. Samples are centered approximately around 2^15; to be precise, the data has these stats: min 0, max 47648, mean 32757, standard deviation 454.5.
I draw gray images of size 1664 x 512; there are in total 16 * 16 * 2 = 512 such images (frames) in the file. I draw the animated frames on screen using the matplotlib library and also render the animation into a GIF file. One example of a rendered GIF (at reduced quality) is shown after the code.
To render/draw images at a different resolution you need to change the code line with plt.rcParams['figure.figsize']. This figure size is (width_in_inches, height_in_inches); by default DPI (dots per inch) equals 100, meaning that if you want a resulting GIF of resolution 720x265 you need to set the figure size to (7.2, 2.65). The resulting GIF contains an animation of slightly smaller resolution, because axes and padding are included in the figure size.
The code below needs its pip modules installed once with the command python -m pip install numpy matplotlib.
# Needs: python -m pip install numpy matplotlib
def oct_show(file, *, begin = 0, end = None):
    import os, numpy as np, matplotlib, matplotlib.pyplot as plt, matplotlib.animation
    plt.rcParams['figure.figsize'] = (7.2, 2.65) # (4.8, 1.75) (7.2, 2.65) (9.6, 3.5)
    sizeX, sizeY, cnt, bits = 1664, 512, 16 * 16 * 2, 12
    stepX, stepY = 16, 8
    fps = 5
    try:
        fsize, opened_here = None, False
        if type(file) is str:
            fsize = os.path.getsize(file)
            file, opened_here = open(file, 'rb'), True
        by = (bits + 7) // 8
        if end is None and fsize is not None:
            end = fsize // (sizeX * sizeY * by)
        imgs = []
        file.seek(begin * sizeY * sizeX * by)
        a = file.read((end - begin) * sizeY * sizeX * by)
        a = np.frombuffer(a, dtype = np.uint16)
        a = a.reshape(end - begin, sizeY, sizeX)
        amin, amax, amean, stdd = np.amin(a), np.amax(a), np.mean(a), np.std(a)
        print('min', amin, 'max', amax, 'mean', round(amean, 1), 'std_dev', round(stdd, 3))
        a = (a.astype(np.float32) - amean) / stdd
        a = np.maximum(0.1, np.minimum(a * 128 + 128.5, 255.1)).astype(np.uint8)
        a = a[:, :, :, None].repeat(3, axis = -1)
        fig, ax = plt.subplots()
        plt.subplots_adjust(left = 0.08, right = 0.99, bottom = 0.06, top = 0.97)
        for i in range(a.shape[0]):
            title = ax.text(
                0.5, 1.02, f'Frame {i}',
                size = plt.rcParams['axes.titlesize'],
                ha = 'center', transform = ax.transAxes,
            )
            imgs.append([ax.imshow(a[i], interpolation = 'antialiased'), title])
        ani = matplotlib.animation.ArtistAnimation(plt.gcf(), imgs, interval = 1000 // fps)
        print('Saving animated frames to GIF...', flush = True)
        ani.save(file.name + '.gif', writer = 'imagemagick', fps = fps)
        print('Showing animated frames on screen...', flush = True)
        plt.show()
    finally:
        if opened_here:
            file.close()

oct_show('burned_wood_with_tape_1664x512x256_12bit.raw')
Example output GIF:
I don't think it's a camera RAW file that rawpy/LibRaw supports at all.
If you try this code:
import rawpy
import imageio
path = 'Datasets/burned_wood_with_tape_1664x512x256_12bit.raw'
raw = rawpy.imread(path)
rgb = raw.postprocess()
You will get the following error:
----> 5 raw = rawpy.imread(path)
6 rgb = raw.postprocess()
~\Anaconda3\envs\py37tf2gpu\lib\site-packages\rawpy\__init__.py in imread(pathOrFile)
18 d.open_buffer(pathOrFile)
19 else:
---> 20 d.open_file(pathOrFile)
21 return d
rawpy\_rawpy.pyx in rawpy._rawpy.RawPy.open_file()
rawpy\_rawpy.pyx in rawpy._rawpy.RawPy.handle_error()
LibRawFileUnsupportedError: b'Unsupported file format or not RAW file'
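If the goal is just to see the cross-sections, numpy can read the raw binary directly, without rawpy. A minimal sketch along the same lines as the other answer; the frame layout comes from the dataset description, and the 8-bit rescale is an arbitrary choice:

import numpy as np
import imageio

sizeX, sizeY = 1664, 512        # samples per A-scan, A-scans per B-scan
path = 'Datasets/burned_wood_with_tape_1664x512x256_12bit.raw'
vol = np.memmap(path, dtype='<u2', mode='r')     # 16-bit little-endian samples, not loaded all at once
frames = vol.reshape(-1, sizeY, sizeX)           # 512 frames for the 832 MB file

frame = frames[0].astype(np.float32)             # first B-scan
frame = (255 * (frame - frame.min()) / max(1.0, float(frame.max() - frame.min()))).astype(np.uint8)
imageio.imwrite('bscan_0000.png', frame)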

Regridding regular netcdf data

I have a netcdf file containing global sea-surface temperatures. Using matplotlib and Basemap, I've managed to make a map of this data, with the following code:
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
filename = '/Users/Nick/Desktop/SST/SST.nc'
fh = Dataset(filename, mode='r')
lons = fh.variables['LON'][:]
lats = fh.variables['LAT'][:]
sst = fh.variables['SST'][:].squeeze()
fig = plt.figure()
m = Basemap(projection='merc', llcrnrlon=80.,llcrnrlat=-25.,urcrnrlon=150.,urcrnrlat=25.,lon_0=115., lat_0=0., resolution='l')
lon, lat = np.meshgrid(lons, lats)
xi, yi = m(lon, lat)
cs = m.pcolormesh(xi,yi,sst, vmin=18, vmax=32)
m.drawmapboundary(fill_color='0.3')
m.fillcontinents(color='0.3', lake_color='0.3')
cbar = m.colorbar(cs, location='bottom', pad="10%", ticks=[18., 20., 22., 24., 26., 28., 30., 32.])
cbar.set_label('January SST (' + u'\u00b0' + 'C)')
plt.savefig('SST.png', dpi=300)
The problem is that the data is very high resolution (a 9 km grid), which makes the resulting image quite noisy. I would like to put the data onto a lower-resolution grid (e.g. 1 degree), but I'm struggling to work out how this could be done. I followed a worked solution trying to use the matplotlib griddata function by inserting the code below into the example above, but it resulted in 'ValueError: condition must be a 1-d array'.
xi, yi = np.meshgrid(lons, lats)
X = np.arange(min(x), max(x), 1)
Y = np.arange(min(y), max(y), 1)
Xi, Yi = np.meshgrid(X, Y)
Z = griddata(xi, yi, z, Xi, Yi)
I'm a relative beginner to Python and matplotlib, so I'm not sure what I'm doing wrong (or what a better approach might be). Any advice appreciated!
If you regrid your data to a coarser lat/lon grid using e.g. bilinear interpolation, this will result in a smoother field.
The NCAR ClimateData guide has a nice introduction to regridding (general, not Python-specific).
The most powerful implementation of regridding routines available for Python is, to my knowledge, the Earth System Modeling Framework (ESMF) Python interface (ESMPy). If this is a bit too involved for your application, you should look into:
- EarthPy tutorials on regridding (e.g. using Pyresample, cKDTree, or Basemap).
- Turning your data into an Iris cube and using Iris' regridding functions.
Perhaps start by looking at the EarthPy regridding tutorial using Basemap, since you are using it already.
The way to do this in your example would be
from mpl_toolkits import basemap
from netCDF4 import Dataset
import numpy as np

filename = '/Users/Nick/Desktop/SST/SST.nc'
with Dataset(filename, mode='r') as fh:
    lons = fh.variables['LON'][:]
    lats = fh.variables['LAT'][:]
    sst = fh.variables['SST'][:].squeeze()

lons_sub, lats_sub = np.meshgrid(lons[::4], lats[::4])
sst_coarse = basemap.interp(sst, lons, lats, lons_sub, lats_sub, order=1)
This performs bilinear interpolation (order=1) on your SST data onto a sub-sampled grid (every fourth point). Your plot will look more coarse-grained afterwards. If you do not like that, interpolate back onto the original grid with e.g.
sst_smooth = basemap.interp(sst_coarse, lons_sub[0,:], lats_sub[:,0], *np.meshgrid(lons, lats), order=1)
I usually run my data through a Laplace filter for smoothing. Perhaps you could try the function below and see if it helps with your data. The function can be called with or without a mask (e.g land/ocean mask for ocean data points). Hope this helps. T
# Laplace filter for 2D field with/without mask
# M = 1 on  - cells used
# M = 0 off - grid cells not used
# Default is without masking
import numpy as np

def laplace_X(F, M):
    jmax, imax = F.shape
    # Add strips of land
    F2 = np.zeros((jmax, imax+2), dtype=F.dtype)
    F2[:, 1:-1] = F
    M2 = np.zeros((jmax, imax+2), dtype=M.dtype)
    M2[:, 1:-1] = M
    MS = M2[:, 2:] + M2[:, :-2]
    FS = F2[:, 2:]*M2[:, 2:] + F2[:, :-2]*M2[:, :-2]
    return np.where(M > 0.5, (1-0.25*MS)*F + 0.25*FS, F)

def laplace_Y(F, M):
    jmax, imax = F.shape
    # Add strips of land
    F2 = np.zeros((jmax+2, imax), dtype=F.dtype)
    F2[1:-1, :] = F
    M2 = np.zeros((jmax+2, imax), dtype=M.dtype)
    M2[1:-1, :] = M
    MS = M2[2:, :] + M2[:-2, :]
    FS = F2[2:, :]*M2[2:, :] + F2[:-2, :]*M2[:-2, :]
    return np.where(M > 0.5, (1-0.25*MS)*F + 0.25*FS, F)

# The mask may cause laplace_X and laplace_Y to not commute
# Take average of both directions
def laplace_filter(F, M=None):
    if M is None:  # use "is None": comparing an array with == None is ambiguous
        M = np.ones_like(F)
    return 0.5*(laplace_X(laplace_Y(F, M), M) +
                laplace_Y(laplace_X(F, M), M))
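A hypothetical usage with the SST field from the question, treating masked (land) points as off cells:

# sst is the (masked) 2D field read from the netCDF file above
mask = (~np.ma.getmaskarray(sst)).astype(float)   # 1 = ocean (use), 0 = land (skip)
sst_smooth = laplace_filter(np.ma.filled(sst, 0.0), mask)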
To answer your original question regarding scipy.interpolate.griddata, too:
Have a close look at the parameter specs for that function (e.g. in the SciPy documentation) and make sure that your input arrays have the right shapes. You might need to do something like
import numpy as np
points = np.vstack([a.flat for a in np.meshgrid(lons,lats)]).T # (n,D)
values = sst.ravel() # (n)
etc.
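For example, the call itself could then look like this (a sketch; the 1-degree target grid below covers the region from your Basemap call, and with a masked SST array you may need to drop masked points from points/values first):

from scipy.interpolate import griddata

lon_new = np.arange(80., 150., 1.)        # 1-degree target grid
lat_new = np.arange(-25., 25., 1.)
Xi, Yi = np.meshgrid(lon_new, lat_new)
sst_1deg = griddata(points, values, (Xi, Yi), method='linear')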
If you are working on Linux, you can achieve this using nctoolkit (https://nctoolkit.readthedocs.io/en/latest/).
You have not stated the latlon extent of your data, so I will assume it is a global dataset. Regridding to 1 degree resolution would require the following:
import nctoolkit as nc
filename = '/Users/Nick/Desktop/SST/SST.nc'
data = nc.open_data(filename)
data.to_latlon(lon = [-179.5, 179.5], lat = [-89.5, 89.5], res = [1,1])
# visualize the data
data.plot()
Look at this example with xarray...
use the ds.interp method and specify the new latitude and longitude values.
http://xarray.pydata.org/en/stable/interpolation.html#example
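A minimal sketch of what that looks like here, assuming the coordinate names in the file are LON and LAT as in the plotting code above (adjust the names and the target range to your data):

import numpy as np
import xarray as xr

ds = xr.open_dataset('/Users/Nick/Desktop/SST/SST.nc')
ds_1deg = ds.interp(LON=np.arange(80., 150., 1.0), LAT=np.arange(-25., 25., 1.0))  # 1-degree grid
sst_1deg = ds_1deg['SST'].squeeze()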
