I need to convert a knee MRI into a point cloud representation.
Here is how I load a series:
import SimpleITK as sitk
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames(path)
reader.SetFileNames(dicom_names)
sag = reader.Execute()
sag = sitk.Cast(sag, sitk.sitkFloat32)
Then I want to keep the real physical positions of all my voxels:
vol = sitk.GetArrayFromImage(sag)
rx=sag.GetSpacing()[0]
ry=sag.GetSpacing()[2]
rz=sag.GetSpacing()[1]
origin=sag.GetOrigin()
The sag image's metadata is:
origin: (117.90852282166, -103.56080762947, 53.280393663713)
size: (320, 320, 32)
spacing: (0.4375, 0.4375, 3.300000171769498)
direction: (0.13914594144969092, 0.03131872101712905, -0.9897765124976096, 0.9902718853820294, -0.0044006872607998656, 0.13907633505939634, -3.951777152039084e-09, -0.9994997607130699, -0.031626386681315836)
Now I want to convert this volume into point clouds using open3D.
I read the Open3D documentation (http://www.open3d.org/docs/0.9.0/tutorial/Basic/working_with_numpy.html) and I tried this:
z1, y1, x1 = np.meshgrid(np.arange(vol.shape[0]) * rx + origin[0],
                         np.arange(vol.shape[1]) * ry + origin[1],
                         np.arange(vol.shape[2]) * rz + origin[2],
                         indexing='ij')
print(np.size(x1))
XYZ=np.zeros((np.size(x1),3))
XYZ[:,0] = np.reshape(x1, -1)
XYZ[:,1] = np.reshape(y1, -1)
XYZ[:,2] = np.reshape(z1, -1)
# Pass xyz to Open3D.o3d.geometry.PointCloud and visualize
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(XYZ)
o3d.io.write_point_cloud("voltest.ply", pcd)
But it's not what I want because everything comes out flattened, and I want a 3D representation of my whole volume.
I don't have any segmentation, that's why I take all the volume.
I looked for solutions with vtk (https://vtk.org/doc/nightly/html/classvtkConvertToPointCloud.html) and the Point Cloud Library (https://pointclouds.org/), but I couldn't work out how to do it with either.
I hope what I want to do is clear. Do you have any suggestions?
Thanks a lot
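For reference, a minimal sketch of how the direction matrix could be folded into the coordinate computation, assuming SimpleITK's convention that physical_point = origin + direction @ diag(spacing) @ index (and reusing the sag image and o3d import from above):
import numpy as np
# Index triples (i, j, k) for every voxel, in SimpleITK's (x, y, z) order
nx, ny, nz = sag.GetSize()
idx = np.indices((nx, ny, nz)).reshape(3, -1).T  # shape (N, 3)
# physical_point = origin + D @ S @ index
D = np.array(sag.GetDirection()).reshape(3, 3)
S = np.diag(sag.GetSpacing())
pts = idx @ (D @ S).T + np.array(sag.GetOrigin())  # shape (N, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
This should be equivalent to calling sag.TransformContinuousIndexToPhysicalPoint on every index, just vectorized.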
I am trying to generate a 3D contour plot from data stored as lists of two angles, phi2 and theta, in degrees. I have 88 data points in total. I am trying to fit the joint multivariate normal PDF using scipy.stats.multivariate_normal and then plot it. But the attached code does not work: it gives me errors saying that z is 1D and has to be 2D.
Could anybody be so kind as to point me toward getting a decent density surface and/or contour from the data I have, and help fix this code? Thank you in advance.
This is my code:
phi2 = [68.74428813, 73.81435267, 66.13791645, 178.54309657, 179.52273055, 161.05564169,
157.29079181, 191.92405566, 91.49774385, 96.19566795, 70.59561195, 119.9603657,
120.22305924, 98.52577754, 102.37894512, 100.12088791, 150.21004667, 139.18249739,
139.09246089, 89.51031839, 88.39689092, 136.47397506, 286.26056406, 283.74464006,
290.17913953, 286.74459786, 284.86706369, 328.13937238, 275.44219073, 303.47499211,
260.52134486, 259.35788745, 306.90146741, 11.20622691, 10.78220574, 19.15446087,
12.15462016, 13.58160662, 3.83673279, 0.12494051, 17.73139875, 8.53784067, 16.50118845,
2.53838974, 233.88019465, 234.93195189, 229.57996459, 233.07447083, 233.59862002,
231.18392245, 207.88397566, 237.31741345, 183.95293031, 179.42872881, 213.32271268,
140.7533708, 150.16895446, 130.61256041, 130.89734197, 128.63260154, 12.06830893,
200.28087782, 189.90378416, 62.39275508, 58.30936802, 205.64840358, 277.30394295,
287.76441089, 284.93518941, 265.89041707, 265.04884345, 343.86712163, 9.14315768,
341.43239609, 259.68283323, 260.00152679, 319.65245694, 341.08153124, 328.45596486,
336.02665804, 334.51276135, 334.8480636, 14.23480894, 12.53212715, 326.89899848,
42.62591188, 45.9396189, 335.39967741]
theta = [162.30883091334002, 162.38681345640427, 159.9615814653753, 174.16782637794842,
174.2151437560705, 176.40150466571572, 172.99139904772483, 175.92043493594562,
170.54952038009057, 172.72436078059172, 157.8929621077973, 168.98842698365024,
171.98480108104968, 157.1025039563731, 158.00939405227624, 157.85195861050553,
171.7970456599733, 173.88542939027778, 174.13060483554227, 157.06302225640127,
156.68490146086768, 174.10583004433656, 12.057892850177469, 22.707446760473047,
10.351988334104147, 10.029845365897357, 9.685493520484972, 7.903767103756965,
2.4881977395826027, 5.95349444674959, 30.507921155263, 30.63344201861564,
12.408566633469452, 3.9720259901877712, 4.65662142520097, 4.638183341072918,
4.106777084823232, 4.080743212101051, 4.747614837690929, 5.50356343278645,
3.5832926179292923, 3.495358074328152, 2.980060059242138, 5.785575733164003,
172.46901133841854, 172.2062576963548, 173.0410300278859, 174.06303865166896,
174.21162725364357, 170.0470319897294, 174.10752252682713, 171.23903792872886,
172.86412623832285, 174.4850965754363, 172.82274147050111, 176.9008741019669,
177.0080169547876, 171.90883294152246, 173.22247813491, 173.4304905772758,
89.63634206258786, 175.70086864635368, 175.71009499829492, 162.5980851129683,
162.16583875715634, 175.35616287818408, 4.416907543506939, 4.249480386717373,
5.265265803392446, 21.091392446454336, 21.573883985068303, 7.135649687137961,
5.332884425609576, 1.4184699545284118, 24.487533963462965, 25.63021267148377,
5.005913657707176, 7.562769691801299, 7.52664594699765, 7.898159135060811,
7.167861631741688, 7.018092266267269, 5.939275995893341, 5.975608665369072,
7.138904478798905, 9.93153808410636, 9.415946863231648, 7.154298332687937]
import sys, os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy import loadtxt
import matplotlib
import math
from scipy.stats import multivariate_normal
from astropy.stats import circcorrcoef
from astropy import units as u
from scipy.stats import circvar
from scipy.stats import circmean
phi2 = np.array(phi2)
theta = np.array(theta)
angle1 = np.radians(phi2)
angle2 = np.radians(theta)
# Obtain the circular variance
var_angle1 = circvar(angle1)
var_angle2 = circvar(angle2)
# Obtain circular mean from scipy
mean_angle1 = circmean(angle1)
mean_angle2 = circmean(angle2)
# Obtain circular correlation, then covariance, between both angles (in radians)
corr = circcorrcoef(angle1, angle2)
covar = corr * np.sqrt(var_angle1*var_angle2)
# Create the covar matrix
covar_matrix = np.array([[var_angle1, covar], [covar, var_angle2]])
# Obtain circular prob
delta = covar / (var_angle1 * var_angle2)
S = ((angle1-mean_angle1)/var_angle1) + ((angle2-mean_angle2)/var_angle2) - ((2*delta*
(angle1-mean_angle1)*(angle2-mean_angle2))/(var_angle1*var_angle2))
# Obtain exponential of PDF
exp = -1 * S / (2 * (1 - delta**2))
# Calculate the PDF
#prob = (1/(2*np.pi*var_angle1*var_angle2*np.sqrt(1-(delta**2)))) * np.e**exp
prob = multivariate_normal([mean_angle1, mean_angle2], covar_matrix)
# Create the stacking
pos = np.dstack((angle1, angle2))
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.contourf(angle1, angle2, prob.pdf(pos))
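Not a full fix, but for the reported error specifically: contourf expects 2D X, Y, and Z arrays, so the fitted PDF has to be evaluated on a regular grid rather than at the 88 data points. A minimal sketch, assuming the prob object defined above:
# Evaluate the fitted PDF on a regular 2D grid so contourf gets 2D arrays
xg, yg = np.meshgrid(np.linspace(0, 2*np.pi, 200),
                     np.linspace(0, 2*np.pi, 200))
zg = prob.pdf(np.dstack((xg, yg)))  # shape (200, 200)
fig, ax = plt.subplots()
ax.contourf(xg, yg, zg)
ax.scatter(angle1, angle2, s=5, color='k')  # overlay the raw angles
plt.show()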
I'd like to randomly rotate an image tensor (B, C, H, W) around its center (a 2D rotation, I think?). I would like to avoid NumPy and Kornia, so that I basically only need to import from the torch module. I'm also not using torchvision.transforms, because I need it to be autograd compatible. Essentially I'm trying to create an autograd-compatible version of torchvision.transforms.RandomRotation() for visualization techniques like DeepDream (so I need to avoid artifacts as much as possible).
import torch
import math
import random
import torchvision.transforms as transforms
from PIL import Image
# Load image
def preprocess_simple(image_name, image_size):
    Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
    image = Image.open(image_name).convert('RGB')
    return Loader(image).unsqueeze(0)
# Save image
def deprocess_simple(output_tensor, output_name):
    output_tensor.clamp_(0, 1)
    Image2PIL = transforms.ToPILImage()
    image = Image2PIL(output_tensor.squeeze(0))
    image.save(output_name)
# Somehow rotate tensor around its center
def rotate_tensor(tensor, radians):
    ...
    return rotated_tensor
# Get a random angle within a specified range
r_degrees = 5
angle_range = list(range(-r_degrees, r_degrees + 1))
n = random.randint(angle_range[0], angle_range[-1])
# Convert angle from degrees to radians
ang_rad = n * math.pi / 180
# test_tensor = preprocess_simple('path/to/file', (512,512))
test_tensor = torch.randn(1,3,512,512)
# Rotate input tensor somehow
output_tensor = rotate_tensor(test_tensor, ang_rad)
# Optionally use this to check rotated image
# deprocess_simple(output_tensor, 'rotated_image.jpg')
Some example outputs of what I'm trying to accomplish:
So the grid generator and the sampler are sub-modules of the Spatial Transformer (Jaderberg et al.). These sub-modules are not trainable; they let you apply a learnable, as well as a non-learnable, spatial transformation.
Here I take these two submodules and use them to rotate an image by theta using PyTorch's functions torch.nn.functional.affine_grid and torch.nn.functional.grid_sample (these functions are implementations of the generator and the sampler, respectively):
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
def get_rot_mat(theta):
    theta = torch.tensor(theta)
    return torch.tensor([[torch.cos(theta), -torch.sin(theta), 0],
                         [torch.sin(theta), torch.cos(theta), 0]])
def rot_img(x, theta, dtype):
    rot_mat = get_rot_mat(theta)[None, ...].type(dtype).repeat(x.shape[0], 1, 1)
    grid = F.affine_grid(rot_mat, x.size()).type(dtype)
    x = F.grid_sample(x, grid)
    return x
#Test:
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
#im should be a 4D tensor of shape B x C x H x W with type dtype, range [0,255]:
plt.imshow(im.squeeze(0).permute(1,2,0)/255) #To plot it im should be 1 x C x H x W
plt.figure()
#Rotation by np.pi/2 with autograd support:
rotated_im = rot_img(im, np.pi/2, dtype) # Rotate image by 90 degrees.
plt.imshow(rotated_im.squeeze(0).permute(1,2,0)/255)
In the example above, assume we take our image, im, to be a dancing cat in a skirt:
rotated_im will be a 90-degree CCW rotated dancing cat in a skirt:
And this is what we get if we call rot_img with theta equal to np.pi/4:
And the best part is that it's differentiable w.r.t. the input and has autograd support! Hooray!
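To tie this back to the original goal (a RandomRotation-style transform with autograd), a small wrapper around rot_img should be enough; this is only a sketch, with the degree range as an assumed parameter:
import math, random
# Pick a random angle in [-max_deg, max_deg] and rotate with autograd intact
def random_rot_img(x, max_deg, dtype):
    theta = math.radians(random.uniform(-max_deg, max_deg))
    return rot_img(x, theta, dtype)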
With torchvision it should be simple:
import torchvision.transforms.functional as TF
angle = 30
x = torch.randn(1,3,512,512)
out = TF.rotate(x, angle)
For example if x is:
out with a 30 degree rotation is (NOTE: counterclockwise):
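On the artifact concern from the question: rotate resamples with nearest-neighbor by default, and in newer torchvision versions you should be able to request bilinear interpolation instead, which tends to produce smoother edges (depending on your torchvision version):
out = TF.rotate(x, angle, interpolation=TF.InterpolationMode.BILINEAR)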
There is a PyTorch function for that:
x = torch.tensor([[0, 1],
                  [2, 3]])
x = torch.rot90(x, 1, [0, 1])
>> tensor([[1, 3],
           [0, 2]])
Here are the docs: https://pytorch.org/docs/stable/generated/torch.rot90.html (note that rot90 only rotates by multiples of 90 degrees, so it won't cover arbitrary random angles).
I have an arbitrary transformation matrix, for example:
sig =[[2,1],[1,1]]
With this code, I could transform a circle with r=1:
import numpy as np
import math as mt
from matplotlib.pyplot import *
sig =[[2,1],[1,1]]
ndiv=100
r=1.0
theta=np.linspace(0,2*np.pi,ndiv)
x=r*np.cos(theta)
y=r*np.sin(theta)
fig = figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax.plot(x,y,'b.')
x_transf=np.zeros(ndiv)
y_transf=np.zeros(ndiv)
direc=np.zeros(ndiv)
N=np.zeros(ndiv)
v=[0.0,0.0]
w=[0.0,0.0]
for i in range(ndiv):
    v[0] = x[i]
    v[1] = y[i]
    w = np.dot(sig, v)
    x_transf[i] = w[0]
    y_transf[i] = w[1]
    N[i] = mt.sqrt(x_transf[i]**2 + y_transf[i]**2)
ax.plot(x_transf,y_transf,'g+')
axis('equal')
grid('on')
Now I need to transform a rectangle (square) using this transformation matrix:
M = [[sqrt(th), -1.5*th],[2*sin(th),cos(th)]] #th is an angle between 0 and 2pi
I also need to find the angle that produces the biggest area. How can I do this?
Here's an example of how to apply the transformation. As an example and a check I also applied a normal rotation (in red):
import matplotlib.pyplot as plt
import numpy as np
from numpy import sin, cos, sqrt, pi
r0 = np.array([[.5, .2], [1.5,.2], [1.5,1.2], [.5,1.2], [.5,.2]])
th = .05*pi
R = np.array([[cos(th), -sin(th)], [sin(th), cos(th)]])
M = np.array([[sqrt(th), -1.5*th],[2*sin(th),cos(th)]])
r1 = R.dot(r0.T).T
r2 = M.dot(r0.T).T
plt.plot(r0[:,0], r0[:,1], "bo-")
plt.plot(r1[:,0], r1[:,1], "ro-")
plt.plot(r2[:,0], r2[:,1], "go-")
plt.xlim(-0.2, 1.8)
plt.ylim(-0., 2.)
plt.show()
As for finding the largest area, you could either derive this analytically, or numerically calculate the area of the rotated rectangle and maximize it, using, say, scipy.optimize.
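To make the numeric route concrete: a linear map scales areas by |det(M)|, so the transformed rectangle's area is |det(M(th))| times the original area, and it suffices to maximize that determinant. A minimal sketch with scipy.optimize (a coarse grid scan over th would also work and is safer against local optima):
from scipy.optimize import minimize_scalar
def neg_area_scale(th):
    # The area scale factor of a linear map is |det(M)|
    M = np.array([[sqrt(th), -1.5*th], [2*sin(th), cos(th)]])
    return -abs(np.linalg.det(M))
res = minimize_scalar(neg_area_scale, bounds=(0, 2*pi), method='bounded')
print(res.x, -res.fun)  # best angle and its area scale factor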
My goal is to trace drawings that have a lot of separate shapes in them and to split these shapes into individual images. The drawings are black on white. I'm quite new to numpy, opencv & co., but here is my current thought:
scan for black pixels
black pixel found -> watershed
find watershed boundary (as polygon path)
continue searching, but ignore points within the already found boundaries
I'm not very good at this kind of thing; is there a better way?
First I tried to find the rectangular bounding box of the watershed results (this is more or less a collage of examples):
from numpy import *
import numpy as np
from scipy import ndimage
np.set_printoptions(threshold=np.inf)  # print full arrays; np.nan is rejected by newer NumPy
a = np.zeros((512, 512)).astype(np.uint8) #unsigned integer type needed by watershed
y, x = np.ogrid[0:512, 0:512]
m1 = ((y-200)**2 + (x-100)**2 < 30**2)
m2 = ((y-350)**2 + (x-400)**2 < 20**2)
m3 = ((y-260)**2 + (x-200)**2 < 20**2)
a[m1+m2+m3]=1
markers = np.zeros_like(a).astype(int16)
markers[0, 0] = 1
markers[200, 100] = 2
markers[350, 400] = 3
markers[260, 200] = 4
res = ndimage.watershed_ift(a.astype(uint8), markers)
unique(res)
B = argwhere(res.astype(uint8))
(ystart, xstart), (ystop, xstop) = B.min(0), B.max(0) + 1
tr = a[ystart:ystop, xstart:xstop]
print(tr)
Somehow, when I use the original array (a), argwhere seems to work, but after the watershed (res) it just outputs the complete array again.
The next step could be to find the polygon path around the shape, but the bounding box would be great for now!
Please help!
@Hooked has already answered most of your question, but I was in the middle of writing this up when he answered, so I'll post it in the hopes that it's still useful...
You're trying to jump through a few too many hoops. You don't need watershed_ift.
You can use scipy.ndimage.label to differentiate separate objects in a boolean array and scipy.ndimage.find_objects to find the bounding box of each object.
Let's break things down a bit.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
def draw_circle(grid, x0, y0, radius):
    ny, nx = grid.shape
    y, x = np.ogrid[:ny, :nx]
    dist = np.hypot(x - x0, y - y0)
    grid[dist < radius] = True
    return grid
# Generate 3 circles...
a = np.zeros((512, 512), dtype=bool)  # np.bool was removed in newer NumPy
draw_circle(a, 100, 200, 30)
draw_circle(a, 400, 350, 20)
draw_circle(a, 200, 260, 20)
# Label the objects in the array.
labels, numobjects = ndimage.label(a)
# Now find their bounding boxes (This will be a tuple of slice objects)
# You can use each one to directly index your data.
# E.g. a[slices[0]] gives you the original data within the bounding box of the
# first object.
slices = ndimage.find_objects(labels)
#-- Plotting... -------------------------------------
fig, ax = plt.subplots()
ax.imshow(a)
ax.set_title('Original Data')
fig, ax = plt.subplots()
ax.imshow(labels)
ax.set_title('Labeled objects')
fig, axes = plt.subplots(ncols=numobjects)
for ax, sli in zip(axes.flat, slices):
    ax.imshow(labels[sli], vmin=0, vmax=numobjects)
    tpl = 'BBox:\nymin:{0.start}, ymax:{0.stop}\nxmin:{1.start}, xmax:{1.stop}'
    ax.set_title(tpl.format(*sli))
fig.suptitle('Individual Objects')
plt.show()
Hopefully that makes it a bit clearer how to find the bounding boxes of the objects.
Use the ndimage library from scipy. The function label places a unique tag on each connected block of pixels above a threshold, which identifies the distinct clusters (shapes). Starting with your definition of a:
from scipy import ndimage
image_threshold = .5
label_array, n_features = ndimage.label(a>image_threshold)
# Plot the resulting shapes
import pylab as plt
plt.subplot(121)
plt.imshow(a)
plt.subplot(122)
plt.imshow(label_array)
plt.show()
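Since the end goal was to split the shapes into individual images, a possible follow-up using ndimage.find_objects on the label array (the filename pattern is just illustrative):
from scipy.ndimage import find_objects
# Crop each labeled shape to its bounding box and save it separately
for i, sli in enumerate(find_objects(label_array), start=1):
    shape_img = (label_array[sli] == i)
    plt.imsave('shape_{}.png'.format(i), shape_img, cmap='gray')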