How to convert a Cartesian problem into a cylindrical problem? - python

I display a gyroid structure (TPMS) in a Cartesian system using PyVista. I am now trying to display the structure in cylindrical coordinates. PyVista does display something cylindrical, but the unit cell length does not appear to be uniform (there is no reason for it to change, since my parameter "a" is constant). The distortion appears especially along z, and I don't understand why (see image).
Here is a part of my code. Thank you for your help.
import pyvista as pv
import numpy as np
from numpy import cos, sin, pi

lattice_par = 1.0  # Unit cell length
a = (2*pi)/lattice_par
b = 0.8  # Volume fraction parameter (was undefined below)
res = 200j
r, theta, z = np.mgrid[0:2:res, 0:2*pi:res, 0:4:res]
# consider using non-equidistant r for uniformity

def GyroidCyl(r, theta, z, b=0.8):
    return (sin(a*(r*cos(theta) - 1))*cos(a*(r*sin(theta) - 1))
            + sin(a*(r*sin(theta) - 1))*cos(a*(z - 1))
            + sin(a*(z - 1))*cos(a*(r*cos(theta) - 1))
            - b)

vol3 = GyroidCyl(r, theta, z, b)

# compute Cartesian coordinates for grid points
x = r * cos(theta)
y = r * sin(theta)

grid = pv.StructuredGrid(x, y, z)
grid["vol3"] = vol3.flatten()
contours3 = grid.contour([0])  # isosurface = 0

pv.set_plot_theme('document')
p = pv.Plotter()
p.add_mesh(contours3, scalars=contours3.points[:, 2], show_scalar_bar=False,
           interpolate_before_map=True, show_edges=False,
           smooth_shading=False, render=True)
p.show_axes_all()
p.add_floor()
p.show_grid()
p.add_title('Gyroid in cylindrical coordinates')
p.add_text('Volume Fraction Parameter = ' + str(b))
p.show(window_size=[2040, 1500])

So you've noted in comments that you're trying to replicate something like the strategy explained in this paper. What they do is take a regular gyroid unit cell and then transform it to build a cylindrical shell. If igloos were cylindrical, a gyroid cell would be a single snow brick: put the bricks next to one another, stack them in a column, and you get a cylinder.
Since I can't use figures from the paper we'll have to recreate some ourselves. So you have to start from a regular gyroid defined by the implicit function
cos(x) sin(y) + cos(y) sin(z) + cos(z) sin(x) = 0
(or some variation thereof). Here's how a single unit cell looks:
import pyvista as pv
import numpy as np

res = 100j
a = 2*np.pi
x, y, z = np.mgrid[0:a:res, 0:a:res, 0:a:res]

def Gyroid(x, y, z):
    return np.cos(x)*np.sin(y) + np.cos(y)*np.sin(z) + np.cos(z)*np.sin(x)

# compute implicit function
fun_values = Gyroid(x, y, z)

# create grid for contouring
grid = pv.StructuredGrid(x, y, z)
grid["vol3"] = fun_values.ravel('F')
contours3 = grid.contour([0])  # isosurface for 0

# plot the contour, i.e. the gyroid
pv.set_plot_theme('document')
plotter = pv.Plotter()
plotter.add_mesh(contours3, scalars=contours3.points[:, -1],
                 show_scalar_bar=False)
plotter.add_bounding_box()
plotter.enable_terrain_style()
plotter.show_axes()
plotter.show()
Using the "unit cell" term implies there's an underlying infinite lattice, which can be built by stacking these (rectangular) unit cells neatly next to one another. With some imagination we can convince ourselves that this is true. Or we can look at the formula and note that due to the trigonometric functions the function is periodic in x, y and z, with period 2*pi. This also tells us that we can transform the unit cell to have arbitrary rectangular dimensions by introducing lattice parameters a, b and c:
cos(kx x) sin(ky y) + cos(ky y) sin(kz z) + cos(kz z) sin(kx x) = 0, where
kx = 2 pi/a
ky = 2 pi/b
kz = 2 pi/c
(These kx, ky and kz quantities are called wave vectors in solid state physics.)
The corresponding change only affects the header:
res = 100j
a, b, c = lattice_params = 1, 2, 3
kx, ky, kz = [2*np.pi/lattice_param for lattice_param in lattice_params]
x, y, z = np.mgrid[0:a:res, 0:b:res, 0:c:res]

def Gyroid(x, y, z):
    return (  np.cos(kx*x)*np.sin(ky*y)
            + np.cos(ky*y)*np.sin(kz*z)
            + np.cos(kz*z)*np.sin(kx*x))
This is where we start. What we have to do is take this unit cell, bend it so that it corresponds to a 30-degree circular arc on a cylinder, and stack the cylinder using this unit. According to the paper, they used 12 unit cells to create a circle in a plane (hence the 30-degree magic number), and stacked three such circular bands on top of each other to build the cylinder.
The actual mapping is also fairly clearly explained in the paper. Whereas your original x, y and z parameters of the function essentially interpolated between [0, a], [0, b] and [0, c], respectively, in the new setup x interpolates in the radius range [r1, r2], y interpolates in the angular range [0, pi/6] and z is just z. (In the paper x and y seem to be reversed with respect to this convention, but this shouldn't matter. If it matters, that's left as an exercise to the reader.)
So what we need to do is more or less keep the current grid points, but transform the corresponding x, y and z grid points so that they lie on a cylinder instead. Here's one take:
import pyvista as pv
import numpy as np

res = 100j
a, b, c = lattice_params = 1, 1, 1
kx, ky, kz = [2*np.pi/lattice_param for lattice_param in lattice_params]
r_aux, phi, z = np.mgrid[0:a:res, 0:b:res, 0:3*c:res]

# convert r_aux range to actual radii
r1, r2 = 1.5, 2
r = r2/a*r_aux + r1/a*(1 - r_aux)

def Gyroid(x, y, z):
    return (  np.cos(kx*x)*np.sin(ky*y)
            + np.cos(ky*y)*np.sin(kz*z)
            + np.cos(kz*z)*np.sin(kx*x))

# compute data for cylindrical gyroid
# r_aux plays the role of x, phi * 12 that of y (12 cells per turn), z is z
fun_values = Gyroid(r_aux, phi * 12, z)

# compute Cartesian coordinates for grid points
x = r * np.cos(phi*ky)
y = r * np.sin(phi*ky)
grid = pv.StructuredGrid(x, y, z)
grid["vol3"] = fun_values.ravel('F')
contours3 = grid.contour([0])

# plot cylindrical gyroid
pv.set_plot_theme('document')
plotter = pv.Plotter()
plotter.add_mesh(contours3, scalars=contours3.points[:, -1],
                 show_scalar_bar=False)
plotter.add_bounding_box()
plotter.show_axes()
plotter.enable_terrain_style()
plotter.show()
If you want to look at a single transformed unit cell in the cylindrical setting, use a single domain of phi and z for the function and only convert to 1/12 of a full circle for the grid points:
fun_values = Gyroid(r_aux, phi, z/3)
# compute Cartesian coordinates for grid points
x = r * np.cos(phi*ky/12)
y = r * np.sin(phi*ky/12)
grid = pv.StructuredGrid(x, y, z/3)
But it's not easy to see the curvature in the (no longer a) unit cell:

Related

Evaluate function in points inside half sphere and plot slices in Python

I am trying to evaluate a function that depends on the radius from the center of a sphere to any point inside half a sphere.
I start by defining three arrays corresponding to the points along the radius, the elevation and azimuthal angles. In a for loop I compute the x, y and z coordinates to evaluate the function.
I am not sure if I am doing the mapping properly. I need to store the values of the evaluated function in a 3D matrix corresponding to the x, y, and z coordinates to plot slices in a postprocessing step, but I am stuck identifying how I can define the size of my function matrix.
In Cartesian coordinates it is really easy, since one can link every coordinate to a dimension of the matrix. That's why I need some guidance on how I can slice the matrix, since I don't have a 3D matrix of Cartesian coordinates. How can I construct this matrix from the spherical coordinates?
Any help will be more than appreciated!
Here is my (unfruitful) attempt:
import numpy as np

beta = 1
rho = np.linspace(0, 1, 20)
phi = np.linspace(0, 2*np.pi, 20)
theta = np.linspace(0, np.pi/2, 10)
f = np.empty([len(theta), len(theta), len(phi)], dtype=complex)
for i in range(len(rho)):
    for j in range(len(phi)):
        for k in range(len(theta)):
            x = rho[i] * np.sin(theta[k]) * np.cos(phi[j])
            y = rho[i] * np.sin(theta[k]) * np.sin(phi[j])
            z = rho[i] * np.cos(theta[k])
            R = np.sqrt(x**2 + y**2 + z**2)
            f[k, i, j] = -1j*((z/R)/(z/R + beta)) * (np.exp(1j*k*R)/R)
You just have a typo: the second dimension is again len(theta) instead of len(rho). It should be
f = np.empty([len(theta), len(rho), len(phi)], dtype=complex)
Note also that, if I am not mistaken, you don't need R at all; it's just rho[i].
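If it helps, here is a sketch of how the triple loop could be vectorized with NumPy broadcasting. It reproduces the loop above, including its use of the loop index k inside the exponential; if k was actually meant to be a physical wavenumber, substitute that constant for k_idx. Note that, with R = rho, the integrand here happens not to depend on phi at all:

import numpy as np

beta = 1
rho = np.linspace(0, 1, 20)
phi = np.linspace(0, 2*np.pi, 20)
theta = np.linspace(0, np.pi/2, 10)

# axes ordered (theta, rho, phi) to match f[k, i, j] above
TH, RHO, PHI = np.meshgrid(theta, rho, phi, indexing='ij')

R = RHO                  # R = sqrt(x**2 + y**2 + z**2) is just rho
z_over_R = np.cos(TH)    # z/R = cos(theta)

# loop index k along the theta axis, kept for equivalence with the loop
k_idx = np.arange(len(theta))[:, None, None]

# rho = 0 still divides by zero, exactly as in the loop version
f = -1j * (z_over_R / (z_over_R + beta)) * np.exp(1j * k_idx * R) / R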

How to use a smooth curve to link points approximately distributing in a circle?

I have a set of twelve points, which center at (0, 0) and distribute approximately in a circle, at the interval of 30 degrees, shown in the image.
The twelve points
I want to use a smooth curve to link (go through) them like the image below (I drew the red line by hand).
a hand-drawn curve in red
I want to make it in python or matlab. I have tried some interpolation methods for the upper half and lower half separately, and wanted to combine them as a complete curve. However, the results always overshoot.
Thank you for any suggestions!
I think the key here is to note that you have to consider it as a parametrized curve in 2d, not just a 1d to 2d function. Furthermore since it should be something like a circle, you need an interpolation method that supports periodic boundaries. Here are two methods for which this applies:
% set up toy data
t = linspace(0, 2*pi, 10);
t = t(1:end-1);
a = 0.08;
b = 0.08;
x = cos(t+a*randn(size(t))) + b*randn(size(t));
y = sin(t+a*randn(size(t))) + b*randn(size(t));
plot(x, y, 'ok');
% fourier interpolation
z = x+1i*y;
y = interpft(z, 200);
hold on
plot(real(y), imag(y), '-.r')
% periodic spline interpolation
z = [z, z(1)];
n = numel(z);
t = 1:n;
pp = csape(t, z, 'periodic');
ts = linspace(1, n, 200);
y = ppval(pp, ts);
plot(real(y), imag(y), ':b');
Thanks for the suggestions from @flawr. Following @flawr's answer, I implemented the periodic spline interpolation in Python (I am still working on implementing the Fourier interpolation in Python). Here is the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline

# set up toy data
t = np.linspace(0, 2*np.pi, 10)
t = t[0:-1]
a = 0.08
b = 0.08
x = np.cos(t + a * np.random.normal(size=len(t))) + b * np.random.normal(size=len(t))
y = np.sin(t + a * np.random.normal(size=len(t))) + b * np.random.normal(size=len(t))
plt.scatter(x, y)

# periodic spline interpolation: close the curve by repeating the first point
z = x + 1j * y
z = np.append(z, z[0])
t = np.arange(len(z))
cs = CubicSpline(t, z, bc_type='periodic')
xs = np.linspace(0, len(z), 200)
y_new = cs(xs)
plt.plot(y_new.real, y_new.imag)
plt.show()
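For the Fourier part, a possible NumPy analogue of MATLAB's interpft is to zero-pad the FFT spectrum in the middle. This is only a sketch; note that it expects the open point sequence, i.e. without the duplicated first point:

import numpy as np

def interpft(x, n_out):
    """Trigonometric (periodic) interpolation by zero-padding the FFT,
    analogous to MATLAB's interpft. Expects n_out >= len(x)."""
    x = np.asarray(x, dtype=complex)
    n_in = x.size
    X = np.fft.fft(x)
    Xp = np.zeros(n_out, dtype=complex)
    h = n_in // 2
    if n_in % 2 == 0:
        Xp[:h] = X[:h]
        Xp[h] = X[h] / 2            # split the shared Nyquist bin
        Xp[n_out - h] = X[h] / 2
        Xp[n_out - h + 1:] = X[h + 1:]
    else:
        Xp[:h + 1] = X[:h + 1]
        Xp[n_out - h:] = X[h + 1:]
    return np.fft.ifft(Xp) * (n_out / n_in)

# usage with the toy data above (x, y before closing the curve):
# zf = interpft(x + 1j*y, 200)
# plt.plot(zf.real, zf.imag, '-.r')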

Inverse of numpy.gradient function

I need to create a function which would be the inverse of the np.gradient function, where the Vx, Vy arrays (velocity component vectors) are the input and the output would be an array of anti-derivatives (arrival times) at the data points x, y.
I have data on a (x,y) grid with scalar values (time) at each point.
I have used the numpy gradient function and linear interpolation to determine the gradient vector Velocity (Vx,Vy) at each point (See below).
I have achieved this by:
#LinearTriInterpolator applied to a delaunay triangular mesh
LTI= LinearTriInterpolator(masked_triang, time_array)
#Gradient requested at the mesh nodes:
(Vx, Vy) = LTI.gradient(triang.x, triang.y)
The first image below shows the velocity vectors at each point, and the point labels represent the time value which formed the derivatives (Vx,Vy)
The next image shows the resultant scalar value of the derivatives (Vx,Vy) plotted as a colored contour graph with associated node labels.
So my challenge is:
I need to reverse the process!
Using the gradient vectors (Vx,Vy) or the resultant scalar value to determine the original Time-Value at that point.
Is this possible?
Knowing that the numpy.gradient function is computed using second-order accurate central differences at the interior points and first- or second-order accurate one-sided (forward or backward) differences at the boundaries, I am sure there is a function which would reverse this process.
I was thinking that taking a line integral between the original point (t=0 at x1,y1) and any point (xi,yi) over the Vx,Vy plane would give me the sum of the velocity components. I could then divide this value by the distance between the two points to get the time taken.
Would this approach work? And if so, which numpy integration function would be best applied?
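As a quick 1D sanity check of this idea (a sketch with made-up data): np.gradient can indeed be undone, up to the integration constant, by a cumulative trapezoidal sum:

import numpy as np

x = np.linspace(0, 10, 101)
t = np.sin(x) + 0.1*x**2                 # a known "arrival time" profile
v = np.gradient(t, x)                    # forward model

# inverse: cumulative trapezoid, then fix the integration constant
t_rec = np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1])/2 * np.diff(x))])
t_rec += t[0]
print(np.max(np.abs(t_rec - t)))         # small discretization error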
An example of my data can be found here [http://www.filedropper.com/calculatearrivaltimefromgradientvalues060820]
Your help would be greatly appreciated
EDIT:
Maybe this simplified drawing might help understand where I'm trying to get to.
EDIT:
Thanks to @Aguy who has contributed to this code. I have tried to get a more accurate representation using a meshgrid with a spacing of 0.5 x 0.5 m and calculating the gradient at each mesh point, but I am not able to integrate it properly. I also have some edge effects which are affecting the results, and I don't know how to correct them.
import numpy as np
from scipy import interpolate
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolating the Time values

# Format data
X = np.ravel(xgrid)
Y = np.ravel(ygrid)
zs = np.ravel(grid_z1)
Z = zs.reshape(X.shape)

# Calculate Gradient
(dx, dy) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # velocity ms/m
Velocity_dy = dy / stepy  # velocity ms/m
Resultant = (Velocity_dx**2 + Velocity_dy**2)**0.5  # Resultant scalar value ms/m
Resultant = np.ravel(Resultant)

# Plot original data F(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, Arrival_Time, color='r')
ax.plot_trisurf(X, Y, Z)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Time (ms)')
pyplot.show()

# Plot the derivative f'(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X, Y, Resultant, color='r', s=0.2)
ax.plot_trisurf(X, Y, Resultant)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Velocity (ms/m)')
pyplot.show()

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2],
                                       dxintegral[i, j],
                                       -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
Now np.gradient is applied at every mesh node: (dx, dy) = np.gradient(grid_z1)
Now in my process I would analyse the gradient values above and make some adjustments (there are some unusual edge effects being created that I need to rectify), and would then integrate the values to get back to a surface which would be very similar to f(x,y) shown above.
I need some help adjusting the integration function:
# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2],
                                       dxintegral[i, j],
                                       -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
And now I need to calculate the new 'Time' values at the original (x,y) point locations.
UPDATE (08-09-20): I am getting some promising results using the help from @Aguy. The results can be seen below (with the blue contours representing the original data, and the red contours representing the integrated values).
I am still working on an integration approach which can remove the inaccuracies at the areas of min(y) and max(y).
from matplotlib.tri import (Triangulation, UniformTriRefiner,
                            CubicTriInterpolator, LinearTriInterpolator,
                            TriInterpolator, TriAnalyzer)
import pandas as pd
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate

# -------------------------------------------------------------------------
# STEP 1: Import data from Excel file, and set variables
# -------------------------------------------------------------------------
df_initial = pd.read_excel(
    r'C:\Users\morga\PycharmProjects\venv\Development\Trial'
    r'.xlsx')
Input data can be found here: link
df_initial = df_initial.sort_values(by='Delay', ascending=True)  # Update dataframe and sort by Delay
x = df_initial['X'].to_numpy()
y = df_initial['Y'].to_numpy()
Arrival_Time = df_initial['Delay'].to_numpy()

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolating the Time values

# Calculate Gradient (velocity ms/m)
(dy, dx) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # x velocity component ms/m
Velocity_dy = dy / stepy  # y velocity component ms/m

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)  # An array of 0's with the same shape as dxintegral
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum(
            [dxintegral[0, len(xx) // 2], dyintegral[i, len(xx) // 2],
             dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral[np.isnan(dx)] = np.nan
min_value = np.nanmin(valintegral)
valintegral = valintegral + (min_value * -1)

## Plot results
fig = plt.figure()
ax = fig.add_subplot()
ax.scatter(x, y, color='black', s=7, zorder=3)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.contour(xgrid, ygrid, valintegral, levels=50, colors='red', zorder=2)
ax.contour(xgrid, ygrid, grid_z1, levels=50, colors='blue', zorder=1)
ax.set_aspect('equal')
plt.show()
TL;DR
You have multiple challenges to address in this issue, mainly:
Potential reconstruction (scalar field) from its gradient (vector field).
But also:
Observations on a concave hull with a non-rectangular grid;
Numerical 2D line integration and numerical inaccuracy.
It seems this can be solved by choosing an ad hoc interpolant and a smart way to integrate (as pointed out by @Aguy).
MCVE
First, let's build an MCVE to highlight the above-mentioned key points.
Dataset
We recreate a scalar field and its gradient.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt

def f(x, y):
    return x**2 + x*y + 2*y + 1

Nx, Ny = 21, 17
xl = np.linspace(-3, 3, Nx)
yl = np.linspace(-2, 2, Ny)
X, Y = np.meshgrid(xl, yl)
Z = f(X, Y)
zl = np.arange(np.floor(Z.min()), np.ceil(Z.max())+1, 2)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)
V = np.hypot(dZdx, dZdy)
The scalar field looks like:
axe = plt.axes(projection='3d')
axe.plot_surface(X, Y, Z, cmap='jet', alpha=0.5)
axe.view_init(elev=25, azim=-45)
And, the vector field looks like:
axe = plt.contour(X, Y, Z, zl, cmap='jet')
axe.axes.quiver(X, Y, dZdx, dZdy, V, units='x', pivot='tip', cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Indeed, the gradient is normal to the potential levels. We also plot the gradient magnitude:
axe = plt.contour(X, Y, V, 10, cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Raw field reconstruction
If we naively reconstruct the scalar field from the gradient:
SdZx = np.cumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.cumsum(dZdy, axis=0)*np.diff(yl)[0]
Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        Zhat[i,j] += np.sum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
Zhat += Z[0,0] - Zhat[0,0]
We can see that the global result is roughly correct, but the levels are less accurate where the gradient magnitude is low:
Interpolated field reconstruction
If we increase the grid resolution and pick a specific interpolant (as is usual when dealing with mesh grids), we can get a finer field reconstruction:
r = np.stack([X.ravel(), Y.ravel()]).T
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel())
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel())

Nx, Ny = 200, 200
xli = np.linspace(xl.min(), xl.max(), Nx)
yli = np.linspace(yl.min(), yl.max(), Ny)
Xi, Yi = np.meshgrid(xli, yli)
ri = np.stack([Xi.ravel(), Yi.ravel()]).T

dZdxi = Sx(ri).reshape(Xi.shape)
dZdyi = Sy(ri).reshape(Xi.shape)

SdZxi = np.cumsum(dZdxi, axis=1)*np.diff(xli)[0]
SdZyi = np.cumsum(dZdyi, axis=0)*np.diff(yli)[0]
Zhati = np.zeros(SdZxi.shape)
for i in range(Zhati.shape[0]):
    for j in range(Zhati.shape[1]):
        Zhati[i,j] += np.sum([SdZyi[i,0], -SdZyi[0,0], SdZxi[i,j], -SdZxi[i,0]])
Zhati += Z[0,0] - Zhati[0,0]
Which definitely performs way better:
So basically, increasing the grid resolution with an ad hoc interpolant may help you get a more accurate result. The interpolant also solves the need for a regular rectangular grid from a triangular mesh to perform the integration.
Concave and convex hull
You have also pointed out inaccuracy on the edges. That results from the combination of the interpolant choice and the integration methodology. The integration methodology fails to properly compute the scalar field when it reaches a concave region with few interpolated points. The problem disappears when choosing a mesh-free interpolant able to extrapolate.
To illustrate it, let's remove some data from our MCVE:
q = np.full(dZdx.shape, False)
q[0:6,5:11] = True
q[-6:,-6:] = True
dZdx[q] = np.nan
dZdy[q] = np.nan
Then the interpolant can be constructed as follows:
q2 = ~np.isnan(dZdx.ravel())
r = np.stack([X.ravel(), Y.ravel()]).T[q2,:]
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()[q2])
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()[q2])
Performing the integration, we see that in addition to the classical edge effect, we do have less accurate values in concave regions (wavy dash-dot lines where the hull is concave), and we have no data outside the convex hull, as Clough-Tocher is a mesh-based interpolant:
Vl = np.arange(0, 11, 1)
axe = plt.contour(X, Y, np.hypot(dZdx, dZdy), Vl, cmap='jet')
axe.axes.contour(Xi, Yi, np.hypot(dZdxi, dZdyi), Vl, cmap='jet', linestyles='-.')
axe.axes.set_aspect('equal')
axe.axes.grid()
So basically the errors we are seeing in the corners are most likely due to an integration issue combined with an interpolation limited to the convex hull.
To overcome this we can choose a different interpolant, such as RBF (radial basis functions), which is able to create data outside the convex hull:
Sx = interpolate.Rbf(r[:,0], r[:,1], dZdx.ravel()[q2], function='thin_plate')
Sy = interpolate.Rbf(r[:,0], r[:,1], dZdy.ravel()[q2], function='thin_plate')
dZdxi = Sx(ri[:,0], ri[:,1]).reshape(Xi.shape)
dZdyi = Sy(ri[:,0], ri[:,1]).reshape(Xi.shape)
Notice the slightly different interface of this interpolator (mind how parameters are passed).
The result is the following:
We can see that the region outside the convex hull can be extrapolated (RBFs are mesh-free). So choosing an ad hoc interpolant is definitely a key point in solving your problem. But we still need to be aware that extrapolation may perform well but is somewhat meaningless and dangerous.
Solving your problem
The answer provided by @Aguy is perfectly fine, as it sets up a clever way to integrate that is not disturbed by missing points outside the convex hull. But as you mentioned, there is inaccuracy in concave regions inside the convex hull.
If you wish to remove the edge effect you detected, you will have to resort to an interpolant able to extrapolate as well, or find another way to integrate.
Interpolant change
Using RBF interpolant seems to solve your problem. Here is the complete code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import interpolate

df = pd.read_excel('./Trial-Wireup 2.xlsx')
x = df['X'].to_numpy()
y = df['Y'].to_numpy()
z = df['Delay'].to_numpy()

r = np.stack([x, y]).T
#S = interpolate.CloughTocher2DInterpolator(r, z)
#S = interpolate.LinearNDInterpolator(r, z)
S = interpolate.Rbf(x, y, z, epsilon=0.1, function='thin_plate')

N = 200
xl = np.linspace(x.min(), x.max(), N)
yl = np.linspace(y.min(), y.max(), N)
X, Y = np.meshgrid(xl, yl)

#Zp = S(np.stack([X.ravel(), Y.ravel()]).T)
Zp = S(X.ravel(), Y.ravel())
Z = Zp.reshape(X.shape)

dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)

SdZx = np.nancumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.nancumsum(dZdy, axis=0)*np.diff(yl)[0]

Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        #Zhat[i,j] += np.nansum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
        Zhat[i,j] += np.nansum([SdZx[0,N//2], SdZy[i,N//2], SdZx[i,j], -SdZx[i,N//2]])
Zhat += Z[100,100] - Zhat[100,100]

lz = np.linspace(0, 5000, 20)
axe = plt.contour(X, Y, Z, lz, cmap='jet')
axe = plt.contour(X, Y, Zhat, lz, cmap='jet', linestyles=':')
axe.axes.plot(x, y, '.', markersize=1)
axe.axes.set_aspect('equal')
axe.axes.grid()
Which graphically renders as follows:
The edge effect is gone because the RBF interpolant can extrapolate over the whole grid. You can confirm it by comparing the results of the mesh-based interpolants.
Linear
Clough Tocher
Integration variable order change
We can also try to find a better way to integrate and mitigate the edge effect, e.g. by changing the integration variable order:
Zhat[i,j] += np.nansum([SdZy[N//2,0], SdZx[N//2,j], SdZy[i,j], -SdZy[N//2,j]])
With a classic linear interpolant, the result is quite correct, but we still have an edge effect at the bottom left corner:
As you noticed, the problem occurs in the middle of the axis, in the region where the integration starts and lacks a reference point.
Here is one approach:
First, in order to do the integration, it's good to be on a regular grid. Using the variable names x and y here as shorthand for your triang.x and triang.y, we can first create a grid:
import numpy as np
n = 200 # Grid density
stepx = (max(x) - min(x)) / n
stepy = (max(y) - min(y)) / n
xspace = np.arange(min(x), max(x), stepx)
yspace = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xspace, yspace)
Then we can interpolate dx and dy on the grid using the same LinearTriInterpolator function:
fdx = LinearTriInterpolator(masked_triang, dx)
fdy = LinearTriInterpolator(masked_triang, dy)
dxgrid = fdx(xgrid, ygrid)
dygrid = fdy(xgrid, ygrid)
Now comes the integration part. In principle, any path we choose should get us to the same value. In practice, since there are missing values and different densities, the choice of path is very important to get a reasonably accurate answer.
Below I choose to integrate over dxgrid in the x direction from 0 to the middle of the grid at n/2, then integrate over dygrid in the y direction from 0 to the i point of interest, and then over dxgrid again from n/2 to the point j of interest. This is a simple way to make sure most of the integration path is inside the bulk of the available data, by simply picking a path that goes mostly through the "middle" of the data range. Other considerations would lead to different path selections.
So we do:
dxintegral = np.nancumsum(dxgrid, axis=1) * stepx
dyintegral = np.nancumsum(dygrid, axis=0) * stepy
and then (by somewhat brute force for clarity):
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(n):
    for j in range(n):
        valintegral[i, j] = np.ma.sum([dxintegral[0, n // 2],
                                       dyintegral[i, n // 2],
                                       dxintegral[i, j],
                                       -dxintegral[i, n // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
valintegral would be the result up to an arbitrary constant which can help put the "zero" where you want.
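For example, to pin that constant to a known reference node (the index and value here are hypothetical):

i0, j0 = 0, n // 2                             # hypothetical reference node
valintegral += t_known - valintegral[i0, j0]   # t_known: known arrival time there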
With your data shown here:
ax.tricontourf(masked_triang, time_array)
This is what I'm getting reconstructed when using this method:
ax.contourf(xgrid, ygrid, valintegral)
Hopefully this is somewhat helpful.
If you want to revisit the values at the original triangulation points, you can use interp2d on the valintegral regular grid data.
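Note that scipy.interpolate.interp2d is deprecated in recent SciPy releases; here is a sketch of the same lookup with RegularGridInterpolator instead, assuming the xspace, yspace and valintegral arrays from above (with the mask filled first, since valintegral is a masked array):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# valintegral is indexed [row, column] = [y, x], hence the (yspace, xspace) order
sampler = RegularGridInterpolator((yspace, xspace),
                                  np.ma.filled(valintegral, np.nan),
                                  bounds_error=False, fill_value=np.nan)
time_at_nodes = sampler(np.column_stack([y, x]))  # y, x: triangulation nodes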
EDIT:
In reply to your edit, your adaptation above has a few errors:
Change the line (dx,dy) = np.gradient(grid_z1) to (dy,dx) = np.gradient(grid_z1)
In the integration loop change the dyintegral[i, len(yy) // 2] term to dyintegral[i, len(xx) // 2]
Better to replace the line valintegral = valintegral * np.isfinite(dxintegral) with valintegral[np.isnan(dx)] = np.nan
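Put together, the corrected lines would read:

(dy, dx) = np.gradient(grid_z1)          # axis 0 (y) comes first

# inside the loop:
valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                               dyintegral[i, len(xx) // 2],
                               dxintegral[i, j],
                               -dxintegral[i, len(xx) // 2]])

# after the loop:
valintegral[np.isnan(dx)] = np.nan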

Why is this randomly generated spherical point cloud not uniformly distributed?

I'm trying to simulate radiation emitting from a point source. To do this, given the coordinates of a source and the desired length of emitted rays, I randomly generate a direction vector in spherical coordinates, convert it to cartesian, and return the correct end point. However, when I run this, and visualize the resulting point cloud (consisting of all the randomly generated end points) in Blender, I see that it's more densely populated at the "poles" of the sphere. I'd like the points to be uniformly distributed along the sphere. How can I achieve this?
The random generation function:
import math
import random
import numpy as npy  # alias kept to match sphericalToCartesian below

def getRadiationEmissionLineSeg(p, t):
    if p.size == 4:
        # polar angle spans [0, pi] from +Z axis to -Z axis
        # azimuthal angle spans [0, 2*pi] orthogonal to the zenith (in the XY plane)
        theta = math.pi * random.random()
        phi = 2 * math.pi * random.random()
        # use r = 1 to get a unit direction vector
        v = sphericalToCartesian(1, theta, phi)
        # parametric vector form: vec = p + tv
        # p = point that lies on vector (origin point in case of a ray)
        # t = parameter (-inf, inf) for lines, [0, inf) for rays
        # v = direction vector (must be normalized)
        return p + t * v
The spherical coordinates -> cartesian conversion function:
def sphericalToCartesian(r, theta, phi):
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return npy.array([x, y, z, 0])
When you transform points via spherical coordinates, the circle that is the image of [0, 2pi] x {theta} gets smaller and smaller as the polar angle theta approaches 0 or pi. Since theta is uniformly distributed, there will be more points near the poles. This can be seen in the image of a transformed grid.
If you want to generate uniformly distributed points on a sphere, you can use the fact that if you cut a sphere with two parallel planes, the area of the strip of spherical surface between the planes depends only on the distance between the planes. Hence, you can get a uniform distribution on the sphere using two uniformly distributed random variables:
a z coordinate between -r and r,
an angle phi in [0, 2pi) corresponding to a longitude.
Then you can easily calculate the x and y coordinates.
Example code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
r = 1
n = 1000
z = np.random.random(n)*2*r - r
phi = np.random.random(n)*2*np.pi
x = np.sqrt(r**2 - z**2)*np.cos(phi)
y = np.sqrt(r**2 - z**2)*np.sin(phi)
fig = plt.figure(figsize=(8, 8))
ax = plt.axes(projection='3d')
ax.scatter(x, y, z)
plt.show()
Results for n=100,250,1000:
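A quick numerical check of the strip argument (a sketch): for points uniform on the unit sphere, the z coordinates must themselves be uniform, whereas a uniformly drawn polar angle piles mass near the poles:

import numpy as np
import matplotlib.pyplot as plt

n = 100_000
theta = np.pi * np.random.random(n)      # naive: uniform polar angle
z_naive = np.cos(theta)
z_uniform = np.random.random(n)*2 - 1    # corrected: uniform z

plt.hist(z_naive, bins=50, density=True, alpha=0.5, label='uniform theta')
plt.hist(z_uniform, bins=50, density=True, alpha=0.5, label='uniform z')
plt.axhline(0.5, color='k', linestyle='--')  # target density on [-1, 1]
plt.legend()
plt.show()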

Plane fitting to 4 (or more) XYZ points

I have 4 points which are very nearly coplanar: they form the 1,4-Dihydropyridine cycle.
I need to calculate the distance from C3 and N1 to the plane defined by C1-C2-C4-C5.
Calculating the distance is OK, but fitting the plane is quite difficult for me.
1,4-DHP cycle:
1,4-DHP cycle, another view:
from numpy import zeros, sqrt

# coordinates (XYZ) of C1, C2, C4 and C5
x = [0.274791784, -1.001679346, -1.851320839, 0.365840754]
y = [-1.155674199, -1.215133985, 0.053119249, 1.162878076]
z = [1.216239624, 0.764265677, 0.956099579, 1.198231236]

# plane equation Ax + By + Cz + D = 0
# non-fitted plane
abcd = [0.506645455682, -0.185724560275, -1.43998120646, 1.37626378129]

# creating distance variable
distance = zeros(4, float)

# calculating distance from point to plane
for i in range(4):
    distance[i] = (x[i]*abcd[0] + y[i]*abcd[1] + z[i]*abcd[2] + abcd[3]) / sqrt(abcd[0]**2 + abcd[1]**2 + abcd[2]**2)
print(distance)

# calculating squares
squares = distance**2
print(squares)
How can I minimize sum(squares)? I have tried least squares, but it is too hard for me.
That sounds about right, but you should replace the nonlinear optimization with an SVD. The following creates the moment-of-inertia tensor, M, and then SVDs it to get the normal to the plane. This should be a close approximation to the least-squares fit, and it is much faster and more predictable. It returns the point-cloud centroid and the normal.
import numpy as np
from numpy.linalg import svd

def planeFit(points):
    """
    p, n = planeFit(points)

    Given an array, points, of shape (d, ...)
    representing points in d-dimensional space,
    fit a d-dimensional plane to the points.
    Return a point, p, on the plane (the point-cloud centroid),
    and the normal, n.
    """
    points = np.reshape(points, (np.shape(points)[0], -1))  # Collapse trailing dimensions
    assert points.shape[0] <= points.shape[1], "There are only {} points in {} dimensions.".format(points.shape[1], points.shape[0])
    ctr = points.mean(axis=1)
    x = points - ctr[:, np.newaxis]
    M = np.dot(x, x.T)  # Could also use np.cov(x) here.
    return ctr, svd(M)[0][:, -1]
For example: Construct a 2D cloud at (10, 100) that is thin in the x direction and 100 times bigger in the y direction:
>>> pts = np.diag((.1, 10)).dot(np.random.randn(2, 1000)) + np.reshape((10, 100), (2, -1))
The fitted plane is very nearly at (10, 100), with a normal very nearly along the x axis.
>>> planeFit(pts)
(array([ 10.00382471, 99.48404676]),
array([ 9.99999881e-01, 4.88824145e-04]))
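Back to the original task (distances of C3 and N1 from the fitted plane): the signed distance of a point q from the plane returned by planeFit is just the projection of q - p onto the unit normal n. The coordinate names below are hypothetical placeholders:

import numpy as np

def point_plane_distance(q, p, n):
    # n from planeFit is already unit length (a column of an orthogonal matrix)
    return np.dot(np.asarray(q) - np.asarray(p), n)

# pts: 3 x 4 array of the C1, C2, C4, C5 coordinates from the question
# p, n = planeFit(pts)
# d_C3 = point_plane_distance(C3_xyz, p, n)   # C3_xyz: hypothetical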
Least squares should fit a plane easily. The equation for a plane is: ax + by + c = z. So set up matrices like this with all your data:

    | x_0  y_0  1 |
A = | x_1  y_1  1 |
    |     ...     |
    | x_n  y_n  1 |

and

    | a |
x = | b |
    | c |

and

    | z_0 |
B = | z_1 |
    | ... |
    | z_n |

In other words: Ax = B. Now solve for x, which contains your coefficients. But since you have more than 3 points, the system is over-determined, so you need to use the left pseudo-inverse. So the answer is:

| a |
| b | = (A^T A)^-1 A^T B
| c |
And here is some simple Python code with an example:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
N_POINTS = 10
TARGET_X_SLOPE = 2
TARGET_y_SLOPE = 3
TARGET_OFFSET = 5
EXTENTS = 5
NOISE = 5
# create random data
xs = [np.random.uniform(2*EXTENTS)-EXTENTS for i in range(N_POINTS)]
ys = [np.random.uniform(2*EXTENTS)-EXTENTS for i in range(N_POINTS)]
zs = []
for i in range(N_POINTS):
zs.append(xs[i]*TARGET_X_SLOPE + \
ys[i]*TARGET_y_SLOPE + \
TARGET_OFFSET + np.random.normal(scale=NOISE))
# plot raw data
plt.figure()
ax = plt.subplot(111, projection='3d')
ax.scatter(xs, ys, zs, color='b')
# do fit
tmp_A = []
tmp_b = []
for i in range(len(xs)):
tmp_A.append([xs[i], ys[i], 1])
tmp_b.append(zs[i])
b = np.matrix(tmp_b).T
A = np.matrix(tmp_A)
fit = (A.T * A).I * A.T * b
errors = b - A * fit
residual = np.linalg.norm(errors)
print("solution: %f x + %f y + %f = z" % (fit[0], fit[1], fit[2]))
print("errors:")
print(errors)
print("residual: {}".format(residual))
# plot plane
xlim = ax.get_xlim()
ylim = ax.get_ylim()
X,Y = np.meshgrid(np.arange(xlim[0], xlim[1]),
np.arange(ylim[0], ylim[1]))
Z = np.zeros(X.shape)
for r in range(X.shape[0]):
for c in range(X.shape[1]):
Z[r,c] = fit[0] * X[r,c] + fit[1] * Y[r,c] + fit[2]
ax.plot_wireframe(X,Y,Z, color='k')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
The solution for your points:
0.143509 x + 0.057196 y + 1.129595 = z
The fact that you are fitting a plane is only slightly relevant here. What you are trying to do is minimize a particular function starting from a guess. For that, use scipy.optimize. Note that there is no guarantee that this is the globally optimal solution, only a locally optimal one. A different initial condition may converge to a different result; this works well if you start close to the local minimum you are seeking.
I've taken the liberty of cleaning up your code, taking advantage of numpy's broadcasting:
import numpy as np
from scipy.optimize import leastsq

# coordinates (XYZ) of C1, C2, C4 and C5
XYZ = np.array([
    [0.274791784, -1.001679346, -1.851320839, 0.365840754],
    [-1.155674199, -1.215133985, 0.053119249, 1.162878076],
    [1.216239624, 0.764265677, 0.956099579, 1.198231236]])

# initial guess of the plane
p0 = [0.506645455682, -0.185724560275, -1.43998120646, 1.37626378129]

def f_min(X, p):
    plane_xyz = p[0:3]
    distance = (plane_xyz*X.T).sum(axis=1) + p[3]
    return distance / np.linalg.norm(plane_xyz)

def residuals(params, signal, X):
    return f_min(X, params)

sol = leastsq(residuals, p0, args=(None, XYZ))[0]
print("Solution: ", sol)
print("Old Error: ", (f_min(XYZ, p0)**2).sum())
print("New Error: ", (f_min(XYZ, sol)**2).sum())
This gives:
Solution: [ 14.74286241 5.84070802 -101.4155017 114.6745077 ]
Old Error: 0.441513295404
New Error: 0.0453564286112
This returns the 3D plane coefficients along with the RMSE of the fit.
The plane is provided in a homogeneous coordinate representation, meaning its dot product with the homogeneous coordinates of a point produces the distance between the two.
import numpy as np

def fit_plane(points):
    assert points.shape[1] == 3
    centroid = points.mean(axis=0)
    x = points - centroid[None, :]
    U, S, Vt = np.linalg.svd(x.T @ x)
    normal = U[:, -1]
    origin_distance = normal @ centroid
    rmse = np.sqrt(S[-1] / len(points))
    return np.hstack([normal, -origin_distance]), rmse
Minor note: the SVD can also be directly applied to the points instead of the outer product matrix, but I found it to be slower with NumPy's SVD implementation.
U, S, Vt = np.linalg.svd(x.T, full_matrices=False)
rmse = S[-1] / np.sqrt(len(points))
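A small usage sketch (synthetic points, not from the question) showing the homogeneous-coordinate convention:

import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
pts[:, 2] = 0.5*pts[:, 0] - 0.2*pts[:, 1] + 3.0   # points on a known plane

plane, rmse = fit_plane(pts)
q = np.array([0.0, 0.0, 0.0])
d = plane @ np.append(q, 1.0)   # signed distance of q from the plane
print(d, rmse)                  # rmse ~ 0 for exact plane data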
Another way, aside from SVD, to quickly reach a solution while dealing with outliers (when you have a large data set) is RANSAC:
import numpy as np

def fit_plane(voxels, iterations=50, inlier_thresh=10):  # voxels: x, y, z
    inliers, planes = [], []
    xy1 = np.concatenate([voxels[:, :-1], np.ones((voxels.shape[0], 1))], axis=1)
    z = voxels[:, -1].reshape(-1, 1)
    for _ in range(iterations):
        random_pts = voxels[np.random.choice(voxels.shape[0], voxels.shape[1] * 10, replace=False), :]
        plane_transformation, residual = fit_pts_to_plane(random_pts)
        # count inliers by absolute residual against the threshold
        inliers.append((np.abs(z - np.matmul(xy1, plane_transformation)) <= inlier_thresh).sum())
        planes.append(plane_transformation)
    return planes[np.array(inliers).argmax()]

def fit_pts_to_plane(voxels):  # x y z (m x 3)
    # https://math.stackexchange.com/questions/99299/best-fitting-plane-given-a-set-of-points
    xy1 = np.concatenate([voxels[:, :-1], np.ones((voxels.shape[0], 1))], axis=1)
    z = voxels[:, -1].reshape(-1, 1)
    fit = np.matmul(np.matmul(np.linalg.inv(np.matmul(xy1.T, xy1)), xy1.T), z)
    errors = z - np.matmul(xy1, fit)
    residual = np.linalg.norm(errors)
    return fit, residual
Here's one way. If your points are P[1]..P[n], compute their mean M and subtract it from each, getting points p[1]..p[n]. Then compute C = Sum{ p[i]*p[i]' } (the "covariance" matrix of the points). Next diagonalise C, that is, find an orthogonal U and a diagonal E so that C = U*E*U'. If your points are indeed on a plane, then one of the eigenvalues (i.e. the diagonal entries of E) will be very small (with perfect arithmetic it would be 0). In any case, if the j-th of these is the smallest, then let N = (A, B, C) be the j-th column of U and compute D = -M'*N. These parameters define the "best" plane, the one such that the sum of the squares of the distances from the P[i] to the plane is least.
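Here is a short sketch of that recipe in NumPy (my translation, using eigh since the scatter matrix is symmetric):

import numpy as np

def plane_from_points(P):
    """P: (n, 3) array. Returns (A, B, C, D) for A*x + B*y + C*z + D = 0."""
    M = P.mean(axis=0)                 # centroid
    p = P - M
    S = p.T @ p                        # "covariance" (scatter) matrix
    eigvals, U = np.linalg.eigh(S)     # eigenvalues in ascending order
    N = U[:, 0]                        # direction of least scatter = normal
    D = -M @ N
    return N[0], N[1], N[2], D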
