Volumetric slice plot of a parallelepiped volume data - python

I have parallelepiped volume data defined by three vectors:
a 2.468000 0.000000 0.000000
b -1.234000 2.137351 0.000000
c 0.000000 0.000000 32.000000
My grid has 40 x 40 x 500 points along the a, b and c axes respectively. As you can see, the three vectors are not mutually orthogonal, and this causes a lot of problems when reading the grid.
My original plan was to read my raw data and then extract several volumetric slices to be processed with SageMath to produce really nice pictures. Unfortunately, when I looked through the Python documentation for something like that, I found several commands that can be used for an orthogonal volume (ndgrid, easyviz.slice_) but none for non-orthogonal volume data.
In the SciTools package and NumPy I found the following commands.
NumPy provides:
mgrid
ogrid
meshgrid
SciTools provides:
ndgrid
boxgrid
I also looked among the matplotlib functions, but there are so many that I simply gave up.
Is there some friendly programmer who can point me in the right direction?

What exactly are you after?
You can easily create an array of shape (40, 40, 500, 3) where the item at position [i, j, k] is a 3-element array holding the x, y, z coordinates of the corresponding parallelepiped grid point, as follows:
import numpy as np

a = np.array([2.468000, 0.000000, 0.000000]).reshape(1, 1, 1, 3)
b = np.array([-1.234000, 2.137351, 0.000000]).reshape(1, 1, 1, 3)
c = np.array([0.000000, 0.000000, 32.000000]).reshape(1, 1, 1, 3)

A = np.linspace(0, 1, num=40).reshape(40, 1, 1, 1)
B = np.linspace(0, 1, num=40).reshape(1, 40, 1, 1)
C = np.linspace(0, 1, num=500).reshape(1, 1, 500, 1)

grid = a * A + b * B + c * C
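From there, extracting a volumetric slice is ordinary NumPy indexing. A minimal sketch, assuming your scalar values live in a (40, 40, 500) array called values (a hypothetical name, not part of the original question):

k = 250                       # hypothetical slice index along the c axis
slice_xyz = grid[:, :, k, :]  # shape (40, 40, 3): x, y, z of every grid point in that plane
slice_vals = values[:, :, k]  # shape (40, 40): the scalar data on the same plane
# slice_xyz[..., 0], slice_xyz[..., 1] and slice_vals can then be handed to a plotting tool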


Plotly: How to plot a tetrahedron volume

I have a set of xyz points and a set of tetrahedra, where each node of a tetrahedron points to an index in the points table.
I need to plot the tetrahedra with a corresponding color based on the tag attribute.
points
Index | x   | y   | z
------+-----+-----+----
0     | x_1 | y_1 | z_1
1     | x_2 | y_2 | z_2
...   | ... | ... | ...

tetrahedrons
Index | a      | b      | c      | d      | tag
------+--------+--------+--------+--------+----
0     | a_1.pt | b_1.pt | c_1.pt | d_1.pt | 9
1     | a_2.pt | b_2.pt | c_2.pt | d_2.pt | 0
...   | ...    | ...    | ...    | ...    | ...
I have tried using the Mesh3d API, but it does not allow for a 4th vertex.
I can plot something like the code below, but it does not have all the faces of the tetrahedron.
go.Figure(data=[
    go.Mesh3d(
        x=mesh_pts.x, y=mesh_pts.y, z=mesh_pts.z,
        i=tagged_th.a, j=tagged_th.b, k=tagged_th.c,
    ),
]).show()
I think the Volume or Isosurface plots might work, but I'm not sure how to convert my data into a format those APIs can consume.
I can't hide the fact that, a few minutes ago, I wasn't even aware of the i, j, k parameters. But, still, I know that Mesh3d draws triangles, not tetrahedra. You need to take advantage of those i, j, k parameters to control which triangles are drawn. But it is still your job to say which triangles need to be drawn so that the result looks like tetrahedra.
Yes, there are 4 triangles per tetrahedron. If you wish to draw all four, you need to explicitly pass i, j, k for all 4; you can't just pass i, j, k plus a nonexistent l and expect plotly to understand that this means 4 triangles.
If a, b, c and d are the 4 vertices of a tetrahedron, then the 4 triangles you need to draw are the 4 combinations of 3 vertices from those, that is bcd, acd, abd and abc.
Let's write this in 4 rows
bcd
acd
abd
abc
^^^
|||
||\------ k
|\------- j
\-------- i
So if now a, b, c and d are lists of n vertices, then i, j, k must be lists 4 times longer:
i=b + a + a + a
j=c + c + b + b
k=d + d + d + c
Application: let's define 2 tetrahedra, one sitting on the apex of the other, using your dataframe format.
import plotly.graph_objects as go
import pandas as pd

mesh_pts = pd.DataFrame({'x': [0, 1, 0, 0, 1, 0, 0],
                         'y': [0, 0, 1, 0, 0, 1, 0],
                         'z': [0, 0, 0, 1, 1, 1, 2]})
tagged_th = pd.DataFrame({'a': [0, 3],
                          'b': [1, 4],
                          'c': [2, 5],
                          'd': [3, 6],
                          'tag': [0, 1]})

# And from there, just create a list of triangles, made of the 4 combinations
# of 3 points taken from the list of tetrahedron vertices
go.Figure(data=[
    go.Mesh3d(
        x=mesh_pts.x,
        y=mesh_pts.y,
        z=mesh_pts.z,
        i=pd.concat([tagged_th.a, tagged_th.a, tagged_th.a, tagged_th.b]),
        j=pd.concat([tagged_th.b, tagged_th.b, tagged_th.c, tagged_th.c]),
        k=pd.concat([tagged_th.c, tagged_th.d, tagged_th.d, tagged_th.d]),
        intensitymode='cell',
        intensity=pd.concat([tagged_th.tag, tagged_th.tag, tagged_th.tag, tagged_th.tag])
    )
]).show()
I don't see what you mean by "does not allow for a 4th vertex". Here is an example with two tetrahedra:
import plotly.graph_objects as go
import plotly.io as pio
import numpy as np

i = np.array([0, 0, 0, 1])
j = np.array([1, 2, 3, 2])
k = np.array([2, 3, 1, 3])

fig = go.Figure(data=[
    go.Mesh3d(
        x=[0, 1, 2, 0, 4, 5, 6, 4],
        y=[0, 0, 1, 2, 0, 0, 1, 2],
        z=[0, 2, 2, 3, 4, 2, 4, 1],
        i=np.concatenate((i, i + 4)),
        j=np.concatenate((j, j + 4)),
        k=np.concatenate((k, k + 4)),
        facecolor=["red", "red", "red", "red", "green", "green", "green", "green"]
    )
])
pio.write_html(fig, file="tetrahedra.html", auto_open=True)

Rearrange 3D array in python

I have large binary 3D data and I want to rearrange it into a sequence of values, in the order obtained by traversing the original data as sub-arrays (blocks) of size 4x4x4.
For example, if the data were 2D, I would want to rearrange it from 2x2 sub-arrays:
example image
I used simple loops for this, but just iterating over the loops took far too long. I am trying to use some NumPy functions to do this, but I am new to SciPy.
My code looks like this
import numpy as np

x, y, z = 1200, 800, 400
data = np.fromfile(file_name, dtype=np.float32)
data.shape = (z, y, x)

new_data = np.empty(shape=x*y*z, dtype=np.float32)
index = 0
for zz in range(0, z, 4):
    for yy in range(0, y, 4):
        for xx in range(0, x, 4):
            for zShift in range(4):
                for yShift in range(4):
                    for xShift in range(4):
                        new_data[index] = data[zz+zShift][yy+yShift][xx+xShift]
                        index += 1
new_data.tofile(output)
However, this takes a lot of time. Any better implementation ideas?
As I said, the code works as intended; however, I need a smarter, more Pythonic way to achieve my output.
Thank you!
import numpy as np

x, y, z = 1200, 800, 400
data = np.empty([x, y, z])

# numpy infers the dimension given as -1
out = data.reshape(-1, 4, 4, 4)
out.shape
>>> (6000000, 4, 4, 4)
Perform the following test, for smaller data and block size:
import numpy as np

x, y, z = 4, 4, 4   # Dimensions
stp = 2             # Block size (in each dimension)

# Create the test array
arr = np.arange(x * y * z).reshape((x, y, z))
And to create a list of "blocks", run:
new_data = []
for xx in range(0, x, stp):
    for yy in range(0, y, stp):
        for zz in range(0, z, stp):
            print('Index:', xx, yy, zz)
            obj = arr[xx:xx+stp, yy:yy+stp, zz:zz+stp].copy()
            print(obj)
            new_data.append(obj)
In the target version of your code:
restore the original values of x, y and z,
read the array from your source,
change stp back to 4,
drop the test printouts.
Note also that your code adds individual elements to new_data,
only iterating over blocks of size 4 x 4 x 4,
whereas you wrote that you want a sequence of smaller arrays
(i.e. slices) of size 4 x 4 x 4, which is what my code does.
So if you need a list of slices (smaller arrays), not a single
4-D array, use my code.
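If you do want a fully vectorized equivalent of the original loop order (blocks traversed in z, y, x order, then the elements inside each block), a minimal sketch using reshape and transpose, assuming data already has shape (z, y, x) with each dimension divisible by 4, would be:

z, y, x = data.shape
blocks = data.reshape(z//4, 4, y//4, 4, x//4, 4)  # split each axis into (block index, offset)
blocks = blocks.transpose(0, 2, 4, 1, 3, 5)       # bring the three block indices to the front
new_data = blocks.reshape(-1)                     # or blocks.reshape(-1, 4, 4, 4) for a list of blocks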

Python homogeneous to inhomogeneous plot line

I found an article which is about epipolar geometry.
I calculated the fundamental matrix. Now I am trying to find the line on which a corresponding point lies, as described in the article:
I calculated the line, which is in homogeneous coordinates. How could I plot this line onto the picture, like in the example? I thought about transforming the line from homogeneous to inhomogeneous coordinates. I think this can be achieved by dividing x and y by z.
For example, homogeneous:
x=0.0295
y=0.9996
z=-265.1531
to inhomogeneous:
x = 0.0295 / -265.1531
y = 0.9996 / -265.1531
so:
x = -0.0001112564778612809
y = -0.0037698974667842843
Those numbers seem wrong to me because they're so small. Is this the correct approach?
How could I plot my result into an image?
The x, y and z you have are the parameters of the "Epipolar Lines" equation that appears under the "line in the image" formula in the slides, but labelled a, b and c respectively, i.e.:
au + bv + c = 0
Solutions to this are points on the line. E.g. in Python I'd define x as some points along the picture's x-axis and solve for y:
import numpy as np

F = np.array([
    [-0.00310695, -0.0025646, 2.96584],
    [-0.028094, -0.00771621, 56.3813],
    [13.1905, -29.2007, -9999.79],
])
p_l = np.array([
    [343.53],
    [221.70],
    [  1.0],
])

lt = F @ p_l

# if you want to normalise
lt /= np.sqrt(sum(lt[:2] ** 2))

# should give your values [0.0295, 0.9996, -265.2]
print(lt)

a, b, c = lt.ravel()
x = np.array([0, 400])
y = -(x*a + c) / b
and then just draw a line between these points
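For instance, a minimal sketch of the drawing step with matplotlib, assuming the picture has already been loaded into an array named img (a hypothetical name, not part of the original answer):

import matplotlib.pyplot as plt

plt.imshow(img)       # the image the epipolar line belongs to (hypothetical variable)
plt.plot(x, y, 'r-')  # the two (x, y) points computed above define the line
plt.show()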

Numpy.dot dot product function for statsmodels

I am learning the statsmodels.api module in order to use Python for regression analysis, so I started from the simple OLS model.
In econometrics, the model is: y = Xb + e
where X is N x K, b is K x 1, and e is N x 1, so y is N x 1. This is perfectly fine from a linear algebra point of view.
But I followed the tutorial from statsmodels as follows:
import numpy as np
import statsmodels.api as sm

nsample = 100                       # total obs is 100
x = np.linspace(0, 10, 100)         # using np.linspace(start, stop, number)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)  # draw numbers from a normal distribution,
                                    # default mu = 0 and std. dev = 1; size set by user
# e is n x 1
# Now, we add the constant/intercept term to X
X = sm.add_constant(X)
# Now, we compute y
y = np.dot(X, beta) + e
So this generates the correct answer. But I have a question about the creation of beta = np.array([1, 0.1, 10]). If we check this beta:
beta.shape
(3,)
it has a dimension of (3,); the same goes for y and e, but not X:
X.shape
(100,3)
e.shape
(100,)
y.shape
(100,)
So I tried initializing arrays in the following three ways:
o = np.array([1, 2, 3])
o1 = np.array([[1], [2], [3]])
o2 = np.array([[1, 2, 3]])
print(o.shape)
print(o1.shape)
print(o2.shape)
----------------
(3,)
(3, 1)
(1, 3)
If I use beta = np.array([[1], [2], [3]]), which is (3, 1), then np.dot(X, beta) gets me a wrong answer, although the dimensions seem to work.
If I use np.array([[1, 2, 3]]), which is a row vector, the dimensions don't match for the dot product, in NumPy or in linear algebra.
So I am wondering why, for an N x K dot K x 1 product, we have to use a (N, K) dot (K,) instead of a (N, K) dot (K, 1). What makes np.array([1, 0.1, 10]) work with numpy.dot() while np.array([[1], [0.1], [10]]) doesn't?
Thank you very much.
Some update:
Sorry about the confusion; the data in the statsmodels code is randomly generated, so I fixed X and got the following:
import numpy as np

f = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]])
o = np.array([1, 2, 3])
o1 = np.array([[1], [2], [3]])
o2 = np.array([[1, 2, 3]])
print(o.shape)
print(o1.shape)
print(o2.shape)
print("---------")
print(np.dot(f, o))
print(np.dot(f, o1))

r1 = np.dot(f, o)
r2 = np.dot(f, o1)
type1 = type(np.dot(f, o))
type2 = type(np.dot(f, o1))
tf = type1 is type2
tf2 = type1 == type2
print(type1)
print(type2)
print(tf)
print(tf2)
-------------------------
(3,)
(3, 1)
(1, 3)
---------
[14 32 50 68 86]
[[14]
[32]
[50]
[68]
[86]]
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
True
True
Sorry again for the confusion and inconvenience; they both worked fine.
Python/NumPy is not a matrix-based language the way Matlab, Octave or Scilab are; those follow the rules of matrix multiplication strictly and have no 1-D arrays. So
np.dot(f, o1) ---------> f * o1 in Matlab/Octave/Scilab
np.dot(f, o) ----------> has no strict Matlab/Octave/Scilab counterpart, because o is a 1-D array of shape (3,)
NumPy treats the 1-D argument o as a vector, which is why np.dot(f, o) works and returns a 1-D result. Python/NumPy also has 'broadcasting', the rules for how arrays of different shapes combine in elementwise operations; you will have to consult the docs for the details.
In Python/NumPy, * is not a matrix operator. You can find out what broadcasting gives for:
print(f*o)
print(f*o1)
print(f*o2)
More recently, Python/NumPy introduced the matrix operator @. You might find out what happens with
print(f @ o)
print(f @ o1)
print(f @ o2)
Does this give you some idea?
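As a side note on the "wrong answer" mentioned in the question: with a (3, 1) beta, np.dot(X, beta) has shape (100, 1), and adding the (100,) array e then broadcasts the pair to a (100, 100) array instead of the intended (100,) vector. A minimal sketch of just the shapes (the array values here are placeholders, not the tutorial's data):

import numpy as np

X = np.ones((100, 3))
e = np.zeros(100)
beta_flat = np.array([1, 0.1, 10])       # shape (3,)
beta_col = np.array([[1], [0.1], [10]])  # shape (3, 1)

print((np.dot(X, beta_flat) + e).shape)  # (100,)     -- what the tutorial intends
print((np.dot(X, beta_col) + e).shape)   # (100, 100) -- (100, 1) + (100,) broadcasts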

Organize 3D points and find them by distance from a position

I'm working in a 3D context. I have some objects in this space that are represented by their x, y, z positions.
# My object names (in my real context they are pheromone "points")
A = 1
B = 2
C = 3
D = 4

# My current way of storing their positions
pheromones_positions = {
    (25, 25, 60): [A, D],
    (10, 90, 30): [B],
    (5, 85, 8): [C],
}
My objective is to find which points (pheromones) are within a given distance of a given location. I do this simply with:
from math import sqrt

def calc_distance(a, b):
    return sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2 + (a[2]-b[2])**2)

def found_in_dict(search, points, distance):
    for point in points:
        if calc_distance(search, point) <= distance:
            return points[point]

founds = found_in_dict((20, 20, 55), pheromones_positions, 10)
# found [1, 4] (A and D)
But with a lot of pheromones this is very slow (testing them one by one...). How can I organize these 3D positions so that I can find "positions within a distance of a given position" more quickly?
Are there algorithms or libraries (NumPy?) that can help me with this?
You should compute all (squared) distances at once. With NumPy you can simply subtract the target point of size 1x3 from the (nx3) array of all position coordinates and sum the squared coordinate differences to obtain a list with n elements:
import numpy as np

squaredDistances = np.sum((np.array(list(pheromones_positions.keys())) - (20, 20, 55))**2, axis=1)
idx = np.where(squaredDistances < 10**2)[0]
for i in idx:
    print(list(pheromones_positions.values())[i])
Output:
[1, 4]
By the way: since your return statement is inside the for-loop over all points, it stops iterating after finding the first match, so you might miss a second or third match.
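If the number of pheromones gets large, a spatial index can avoid computing every distance. This is not from the original answer, just a sketch using scipy.spatial.cKDTree, assuming the same pheromones_positions dictionary:

from scipy.spatial import cKDTree

keys = list(pheromones_positions.keys())
tree = cKDTree(keys)                                  # build the index once
for i in tree.query_ball_point((20, 20, 55), r=10):   # indices of points within radius 10
    print(pheromones_positions[keys[i]])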
