Matplotlib 3D plot - parametric curve "wraparound" from certain perspectives - python

I have been writing a Python script (GitHub LINK) for visualizing asteroid/comet/meteoroid orbits. The script also plots the positions of the planets and their orbits.
It works just right for orbits with small semi-major axes (i.e. "smaller" orbits). But when I have an orbit that goes way beyond Neptune (e.g. that of a Halley-type comet), a weird "wraparound" (for lack of a better word) effect appears from certain perspectives.
Let me show you what I mean:
Image compilation: http://i.imgur.com/onSZG8s.png
This image shows the plot from a perspective where it does not break.
When you rotate the same plot a bit to the right, it is as if the orbit folded in half and reversed its direction!
And if you look at the plot from a great distance, you can see that the ellipse is plotted as it should be.
And here is a minimal version of the code with which the issue can be reproduced. The "wraparound" occurs only when the camera's line of sight is nearly parallel to the plane of the large orbit.
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt

def orbitalElements2Cartesian(a, e, I, peri, node, E):
    """ Convert orbital elements to Cartesian coordinates in the Solar System.

    Args:
        a (float): semi-major axis (AU)
        e (float): eccentricity
        I (float): inclination (degrees)
        peri (float): longitude of perihelion (degrees)
        node (float): longitude of ascending node (degrees)
        E (ndarray): eccentric anomaly (radians)
    """

    # The source of the equations used:
    # http://farside.ph.utexas.edu/teaching/celestial/Celestialhtml/node34.html

    # Check if the orbit is parabolic or hyperbolic and clamp it to a near-parabolic ellipse
    if e >= 1:
        e = 0.99999999

    # Convert degrees to radians
    I, peri, node = map(np.radians, [I, peri, node])

    # True anomaly
    theta = 2*np.arctan(np.sqrt((1.0 + e)/(1.0 - e))*np.tan(E/2.0))

    # Distance from the Sun to the point on the orbit
    r = a*(1.0 - e*np.cos(E))

    # Cartesian coordinates
    x = r*(np.cos(node)*np.cos(peri + theta) - np.sin(node)*np.sin(peri + theta)*np.cos(I))
    y = r*(np.sin(node)*np.cos(peri + theta) + np.cos(node)*np.sin(peri + theta)*np.cos(I))
    z = r*np.sin(peri + theta)*np.sin(I)

    return x, y, z

if __name__ == '__main__':

    # Example orbital elements
    # a, e, incl, peri, node
    orb_elements = np.array([
        [2.363, 0.515, 4.0, 205.0, 346.1],
        [0.989, 0.089, 3.1, 55.6, 21.2],
        [0.898, 0.460, 1.3, 77.1, 331.2],
        [104.585332285, 0.994914, 89.3950, 130.8767, 282.4633]
        ])

    # Set up the plot
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')

    # Eccentric anomaly (full range)
    E = np.linspace(-np.pi, np.pi, 100)

    # Plot the given orbits
    for i, orbit in enumerate(orb_elements):
        a, e, I, peri, node = orbit

        # Take extra steps in E if the orbit is very large
        if a > 50:
            E = np.linspace(-np.pi, np.pi, int((a/20.0)*100))

        # Get the orbit in Cartesian space
        x, y, z = orbitalElements2Cartesian(a, e, I, peri, node, E)

        # Plot the orbit
        ax.plot(x, y, z, c='#32CD32')

    # Add limits (in AU)
    ax.set_xlim3d(-5, 5)
    ax.set_ylim3d(-5, 5)
    ax.set_zlim3d(-5, 5)

    plt.tight_layout()
    plt.show()
I am a bit dumbfounded by this and cannot seem to find a proper solution. I would greatly appreciate some help!

matplotlib isn't great for complex 3D plots in my experience (I've had similar strange behaviour with out-of-axis values). Something like mayavi could be worth considering, as it's designed for 3D plots...
A possible workaround is given in this blog: basically, just set out-of-axis values to np.NaN for your required axis. If I add the following to your example,
for r in [x, y, z]:
    for i in np.arange(len(r)):
        if r[i] < -5 or r[i] > 5:
            x[i] = np.NaN
            y[i] = np.NaN
            z[i] = np.NaN
it removes the wraparound.
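For larger arrays the same masking can be done without Python loops. A minimal vectorized sketch, assuming the x, y, z arrays and the ±5 AU limits from the example above:

import numpy as np

# Mask every point where any coordinate leaves the plotted cube
out_of_bounds = (np.abs(x) > 5) | (np.abs(y) > 5) | (np.abs(z) > 5)
x[out_of_bounds] = np.nan
y[out_of_bounds] = np.nan
z[out_of_bounds] = np.nan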

I had similar issues and wanted to make something a bit more user-friendly. I moved all of the functions in this library over to JavaScript and created a WebGL interface in Three.js which lets you do what you want here, but also plots the location of the asteroid/comet with animation via time functions. You just need a web browser to use it. Check it out :)
http://rankinstudio.com/asteroids/asteroids.html


Inverse of numpy.gradient function

I need to create a function which would be the inverse of the np.gradient function: the Vx, Vy arrays (velocity component vectors) are the input, and the output would be an array of anti-derivatives (arrival times) at the data points x, y.
I have data on an (x, y) grid with scalar values (time) at each point.
I have used the numpy gradient function and linear interpolation to determine the gradient vector velocity (Vx, Vy) at each point (see below).
I have achieved this by:
# LinearTriInterpolator applied to a Delaunay triangular mesh
LTI = LinearTriInterpolator(masked_triang, time_array)

# Gradient requested at the mesh nodes:
(Vx, Vy) = LTI.gradient(triang.x, triang.y)
The first image below shows the velocity vectors at each point, and the point labels represent the time values which formed the derivatives (Vx, Vy).
The next image shows the resultant scalar value of the derivatives (Vx, Vy) plotted as a colored contour graph with associated node labels.
So my challenge is:
I need to reverse the process!
I need to use the gradient vectors (Vx, Vy), or the resultant scalar value, to determine the original time value at that point.
Is this possible?
Knowing that the numpy.gradient function is computed using second-order accurate central differences in the interior points and either first- or second-order accurate one-sided (forward or backward) differences at the boundaries, I am sure there is a function which would reverse this process.
I was thinking that taking a line integral between the original point (t=0 at x1,y1) and any point (xi,yi) over the Vx, Vy plane would give me the sum of the velocity components. I could then divide this value by the distance between the two points to get the time taken.
Would this approach work? And if so, which numpy integrate function would be best applied?
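In 1-D, the relationship I am after is easy to check; a minimal sketch using scipy.integrate.cumulative_trapezoid, which recovers the original values from their gradient up to a constant:

import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0, 10, 101)
t = np.sin(x) + 0.1 * x**2               # original "time" field
v = np.gradient(t, x)                    # forward direction: gradient

t_rec = cumulative_trapezoid(v, x, initial=0) + t[0]  # inverse: integral
print(np.max(np.abs(t_rec - t)))         # small discretization residual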
An example of my data can be found here [http://www.filedropper.com/calculatearrivaltimefromgradientvalues060820]
Your help would be greatly appreciated
EDIT:
Maybe this simplified drawing might help show where I'm trying to get to.
EDIT:
Thanks to @Aguy who has contributed to this code. I have tried to get a more accurate representation using a meshgrid of spacing 0.5 x 0.5 m and calculating the gradient at each mesh point, however I am not able to integrate it properly. I also have some edge effects which are affecting the results, and I don't know how to correct them.
import numpy as np
from scipy import interpolate
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolate the Time values

# Format data
X = np.ravel(xgrid)
Y = np.ravel(ygrid)
zs = np.ravel(grid_z1)
Z = zs.reshape(X.shape)

# Calculate gradient
(dx, dy) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # velocity ms/m
Velocity_dy = dy / stepy  # velocity ms/m
Resultant = (Velocity_dx**2 + Velocity_dy**2)**0.5  # Resultant scalar value ms/m
Resultant = np.ravel(Resultant)

# Plot the original data f(x,y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, Arrival_Time, color='r')
ax.plot_trisurf(X, Y, Z)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Time (ms)')
pyplot.show()

# Plot the derivative f'(x,y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X, Y, Resultant, color='r', s=0.2)
ax.plot_trisurf(X, Y, Resultant)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Velocity (ms/m)')
pyplot.show()

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
            dyintegral[i, len(yy) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
Now np.gradient is applied at every mesh node: (dx, dy) = np.gradient(grid_z1)
Now in my process I would analyse the gradient values above and make some adjustments (there are some unusual edge effects being created which I need to rectify), and would then integrate the values to get back to a surface which would be very similar to f(x,y) shown above.
I need some help adjusting the integration function:
# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1)*stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0)*stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
            dyintegral[i, len(yy) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
And now I need to calculate the new 'Time' values at the original (x,y) point locations.
UPDATE (08-09-20): I am getting some promising results using the help from @Aguy. The results can be seen below (with the blue contours representing the original data, and the red contours representing the integrated values).
I am still working on an integration approach which can remove the inaccuracies at the areas of min(y) and max(y).
from matplotlib.tri import (Triangulation, UniformTriRefiner,
    CubicTriInterpolator, LinearTriInterpolator, TriInterpolator, TriAnalyzer)
import pandas as pd
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate

# -------------------------------------------------------------------------
# STEP 1: Import data from Excel file, and set variables
# -------------------------------------------------------------------------
df_initial = pd.read_excel(
    r'C:\Users\morga\PycharmProjects\venv\Development\Trial'
    r'.xlsx')
Input data can be found here: link
df_initial = df_initial.sort_values(by='Delay', ascending=True)  # Update dataframe and sort by Delay
x = df_initial['X'].to_numpy()
y = df_initial['Y'].to_numpy()
Arrival_Time = df_initial['Delay'].to_numpy()

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolate the Time values

# Calculate gradient (velocity ms/m)
(dy, dx) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # x velocity component ms/m
Velocity_dy = dy / stepy  # y velocity component ms/m

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)  # Array of zeros with the same shape as dxintegral
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum(
            [dxintegral[0, len(xx) // 2], dyintegral[i, len(xx) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral[np.isnan(dx)] = np.nan
min_value = np.nanmin(valintegral)
valintegral = valintegral + (min_value * -1)

## Plot results
fig = plt.figure()
ax = fig.add_subplot()
ax.scatter(x, y, color='black', s=7, zorder=3)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.contour(xgrid, ygrid, valintegral, levels=50, colors='red', zorder=2)
ax.contour(xgrid, ygrid, grid_z1, levels=50, colors='blue', zorder=1)
ax.set_aspect('equal')
plt.show()
TL;DR:
You have multiple challenges to address in this issue, mainly:
Potential reconstruction (scalar field) from its gradient (vector field)
But also:
Observation on a concave hull with a non-rectangular grid;
Numerical 2D line integration and numerical inaccuracy.
It seems these can be solved by choosing an ad hoc interpolant and a smart way to integrate (as pointed out by @Aguy).
MCVE
First, let's build an MCVE to highlight the key points mentioned above.
Dataset
We recreate a scalar field and its gradient.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
def f(x, y):
    return x**2 + x*y + 2*y + 1
Nx, Ny = 21, 17
xl = np.linspace(-3, 3, Nx)
yl = np.linspace(-2, 2, Ny)
X, Y = np.meshgrid(xl, yl)
Z = f(X, Y)
zl = np.arange(np.floor(Z.min()), np.ceil(Z.max())+1, 2)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)
V = np.hypot(dZdx, dZdy)
The scalar field looks like:
axe = plt.axes(projection='3d')
axe.plot_surface(X, Y, Z, cmap='jet', alpha=0.5)
axe.view_init(elev=25, azim=-45)
And, the vector field looks like:
axe = plt.contour(X, Y, Z, zl, cmap='jet')
axe.axes.quiver(X, Y, dZdx, dZdy, V, units='x', pivot='tip', cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Indeed, the gradient is normal to the potential levels. We also plot the gradient magnitude:
axe = plt.contour(X, Y, V, 10, cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Raw field reconstruction
If we naively reconstruct the scalar field from the gradient:
SdZx = np.cumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.cumsum(dZdy, axis=0)*np.diff(yl)[0]
Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        Zhat[i,j] += np.sum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
Zhat += Z[0,0] - Zhat[0,0]
We can see the global result is roughly correct, but levels are less accurate where the gradient magnitude is low:
Interpolated field reconstruction
If we increase the grid resolution and pick a specific interpolant (as is usual when dealing with a mesh grid), we can get a finer field reconstruction:
r = np.stack([X.ravel(), Y.ravel()]).T
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel())
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel())
Nx, Ny = 200, 200
xli = np.linspace(xl.min(), xl.max(), Nx)
yli = np.linspace(yl.min(), yl.max(), Ny)
Xi, Yi = np.meshgrid(xli, yli)
ri = np.stack([Xi.ravel(), Yi.ravel()]).T
dZdxi = Sx(ri).reshape(Xi.shape)
dZdyi = Sy(ri).reshape(Xi.shape)
SdZxi = np.cumsum(dZdxi, axis=1)*np.diff(xli)[0]
SdZyi = np.cumsum(dZdyi, axis=0)*np.diff(yli)[0]
Zhati = np.zeros(SdZxi.shape)
for i in range(Zhati.shape[0]):
    for j in range(Zhati.shape[1]):
        Zhati[i,j] += np.sum([SdZyi[i,0], -SdZyi[0,0], SdZxi[i,j], -SdZxi[i,0]])
Zhati += Z[0,0] - Zhati[0,0]
Which definitely performs way better:
So basically, increasing the grid resolution with an ad hoc interpolant may help you get a more accurate result. The interpolant also addresses the need to get a regular rectangular grid from a triangular mesh in order to perform the integration.
Concave and convex hull
You have also pointed out inaccuracy on the edges. This results from the combination of the interpolant choice and the integration methodology. The integration methodology fails to properly compute the scalar field when it reaches a concave region with few interpolated points. The problem disappears when choosing a mesh-free interpolant which is able to extrapolate.
To illustrate it, let's remove some data from our MCVE:
q = np.full(dZdx.shape, False)
q[0:6,5:11] = True
q[-6:,-6:] = True
dZdx[q] = np.nan
dZdy[q] = np.nan
Then the interpolant can be constructed as follows:
q2 = ~np.isnan(dZdx.ravel())
r = np.stack([X.ravel(), Y.ravel()]).T[q2,:]
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()[q2])
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()[q2])
Performing the integration, we see that in addition to the classical edge effect we have less accurate values in concave regions (swingy dot-dash lines where the hull is concave), and we have no data outside the convex hull, as Clough-Tocher is a mesh-based interpolant:
Vl = np.arange(0, 11, 1)
axe = plt.contour(X, Y, np.hypot(dZdx, dZdy), Vl, cmap='jet')
axe.axes.contour(Xi, Yi, np.hypot(dZdxi, dZdyi), Vl, cmap='jet', linestyles='-.')
axe.axes.set_aspect('equal')
axe.axes.grid()
So basically the errors we are seeing in the corners are most likely due to an integration issue combined with an interpolation limited to the convex hull.
To overcome this we can choose a different interpolant, such as an RBF (Radial Basis Function) kernel, which is able to create data outside the convex hull:
Sx = interpolate.Rbf(r[:,0], r[:,1], dZdx.ravel()[q2], function='thin_plate')
Sy = interpolate.Rbf(r[:,0], r[:,1], dZdy.ravel()[q2], function='thin_plate')
dZdxi = Sx(ri[:,0], ri[:,1]).reshape(Xi.shape)
dZdyi = Sy(ri[:,0], ri[:,1]).reshape(Xi.shape)
Notice the slightly different interface of this interpolator (mind how parameters are passed).
The result is the following:
We can see that the region outside the convex hull can be extrapolated (RBFs are mesh-free). So choosing an ad hoc interpolant is definitely a key point in solving your problem. But we still need to be aware that extrapolation may perform well but is somewhat meaningless and dangerous.
Solving your problem
The answer provided by @Aguy is perfectly fine, as it sets up a clever way to integrate that is not disturbed by missing points outside the convex hull. But, as you mentioned, there is inaccuracy in concave regions inside the convex hull.
If you wish to remove the edge effect you detected, you will have to resort to an interpolant able to extrapolate as well, or find another way to integrate.
Interpolant change
Using an RBF interpolant seems to solve your problem. Here is the complete code:
import pandas as pd

df = pd.read_excel('./Trial-Wireup 2.xlsx')
x = df['X'].to_numpy()
y = df['Y'].to_numpy()
z = df['Delay'].to_numpy()
r = np.stack([x, y]).T

#S = interpolate.CloughTocher2DInterpolator(r, z)
#S = interpolate.LinearNDInterpolator(r, z)
S = interpolate.Rbf(x, y, z, epsilon=0.1, function='thin_plate')

N = 200
xl = np.linspace(x.min(), x.max(), N)
yl = np.linspace(y.min(), y.max(), N)
X, Y = np.meshgrid(xl, yl)

#Zp = S(np.stack([X.ravel(), Y.ravel()]).T)
Zp = S(X.ravel(), Y.ravel())
Z = Zp.reshape(X.shape)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)

SdZx = np.nancumsum(dZdx, axis=1)*np.diff(xl)[0]
SdZy = np.nancumsum(dZdy, axis=0)*np.diff(yl)[0]

Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        #Zhat[i,j] += np.nansum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
        Zhat[i,j] += np.nansum([SdZx[0,N//2], SdZy[i,N//2], SdZx[i,j], -SdZx[i,N//2]])
Zhat += Z[100,100] - Zhat[100,100]

lz = np.linspace(0, 5000, 20)
axe = plt.contour(X, Y, Z, lz, cmap='jet')
axe = plt.contour(X, Y, Zhat, lz, cmap='jet', linestyles=':')
axe.axes.plot(x, y, '.', markersize=1)
axe.axes.set_aspect('equal')
axe.axes.grid()
Which graphically renders as follows:
The edge effect is gone because the RBF interpolant can extrapolate over the whole grid. You can confirm it by comparing the result with mesh-based interpolants:
Linear
Clough Tocher
Integration variable order change
We can also try to find a better way to integrate and mitigate the edge effect, e.g. by changing the integration variable order:
Zhat[i,j] += np.nansum([SdZy[N//2,0], SdZx[N//2,j], SdZy[i,j], -SdZy[N//2,j]])
With a classic linear interpolant, the result is quite correct, but we still have an edge effect in the bottom left corner:
As you noticed, the problem occurs at the middle of the axis, in the region where the integration starts and lacks a reference point.
Here is one approach:
First, in order to be able to do the integration, it's good to be on a regular grid. Using the variable names x and y here as short for your triang.x and triang.y, we can first create a grid:
import numpy as np
n = 200 # Grid density
stepx = (max(x) - min(x)) / n
stepy = (max(y) - min(y)) / n
xspace = np.arange(min(x), max(x), stepx)
yspace = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xspace, yspace)
Then we can interpolate dx and dy on the grid using the same LinearTriInterpolator function:
fdx = LinearTriInterpolator(masked_triang, dx)
fdy = LinearTriInterpolator(masked_triang, dy)
dxgrid = fdx(xgrid, ygrid)
dygrid = fdy(xgrid, ygrid)
Now comes the integration part. In principle, any path we choose should get us to the same value. In practice, since there are missing values and different densities, the choice of path is very important to get a reasonably accurate answer.
Below I choose to integrate over dxgrid in the x direction from 0 to the middle of the grid at n/2, then integrate over dygrid in the y direction from 0 to the point of interest i, then over dxgrid again from n/2 to the point of interest j. This is a simple way to make sure most of the path of integration is inside the bulk of the available data, by picking a path that goes mostly through the "middle" of the data range. Other considerations would lead to different path selections.
So we do:
dxintegral = np.nancumsum(dxgrid, axis=1) * stepx
dyintegral = np.nancumsum(dygrid, axis=0) * stepy
and then (by somewhat brute force for clarity):
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(n):
    for j in range(n):
        valintegral[i, j] = np.ma.sum([dxintegral[0, n // 2], dyintegral[i, n // 2], dxintegral[i, j], -dxintegral[i, n // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
valintegral would be the result, up to an arbitrary constant, which you can use to put the "zero" where you want.
With your data shown here:
ax.tricontourf(masked_triang, time_array)
This is what I'm getting reconstructed when using this method:
ax.contourf(xgrid, ygrid, valintegral)
Hopefully this is somewhat helpful.
If you want to revisit the values at the original triangulation points, you can use interp2d on the valintegral regular grid data.
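Note that interp2d is deprecated in recent SciPy versions; a sketch of the same resampling with scipy.interpolate.RegularGridInterpolator, assuming the xspace, yspace, and valintegral variables from above and the original node coordinates x, y:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Convert the masked result to a plain array with NaNs for missing cells
vals = np.ma.filled(valintegral, np.nan)

# Note the (y, x) axis order: vals has shape (len(yspace), len(xspace))
sampler = RegularGridInterpolator((yspace, xspace), vals,
                                  bounds_error=False, fill_value=np.nan)

# Sample the integrated surface back at the original triangulation points
time_at_nodes = sampler(np.column_stack([y, x]))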
EDIT:
In reply to your edit, your adaptation above has a few errors:
Change the line (dx,dy) = np.gradient(grid_z1) to (dy,dx) = np.gradient(grid_z1)
In the integration loop change the dyintegral[i, len(yy) // 2] term to dyintegral[i, len(xx) // 2]
Better to replace the line valintegral = valintegral * np.isfinite(dxintegral) with valintegral[np.isnan(dx)] = np.nan
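Putting the three fixes together, the integration block of your adaptation would read (a sketch, assuming the variables defined in your code above):

(dy, dx) = np.gradient(grid_z1)
Velocity_dx = dx / stepx
Velocity_dy = dy / stepy

dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy

valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
            dyintegral[i, len(xx) // 2], dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral[np.isnan(dx)] = np.nan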

Test whether points are inside ellipses, without using Matplotlib?

I'm working on a Python-based data analysis. I have some x-y data points, and some ellipses, and I want to determine whether points are inside any of the ellipses. The way that I've been doing this works, but it's kludgy. As I think about distributing my software to other people, I find myself wanting a cleaner way.
Right now, I'm using matplotlib.patches.Ellipse objects. Matplotlib Ellipses have a useful method called contains_point(). You can work in data coordinates on a Matplotlib Axes object by calling Axes.transData.transform().
The catch is that I have to create a Figure and an Axes object to hold the Ellipses. And when my program runs, an annoying Matplotlib Figure object will get rendered, showing the Ellipses, which I don't actually need to see. I have tried several methods to suppress this output. I have succeeded in deleting the Ellipses from the Axes, using Axes.clear(), resulting in an empty graph. But I can't get Matplotlib's pyplot.close(fig_number) to delete the Figure itself before calling pyplot.show().
Any advice is appreciated, thanks!
Inspired by how a carpenter draws an ellipse using two nails and a piece of string, here is a numpy-friendly implementation to test whether points lie inside given ellipses.
One of the definitions of an ellipse is that the sum of the distances to the two foci is constant, equal to the width (or the height, if that is larger) of the ellipse. The distance between the center and the foci is sqrt(a*a - b*b), where a and b are half of the width and height. Using that distance and a rotation by the desired angle gives the locations of the foci. numpy.linalg.norm can be used to calculate the distances using numpy's efficient array operations.
After the calculations, a plot is generated to visually check whether everything went correctly.
import numpy as np
from numpy.linalg import norm # calculate the length of a vector
x = np.random.uniform(0, 40, 20000)
y = np.random.uniform(0, 20, 20000)
xy = np.dstack((x, y))
el_cent = np.array([20, 10])
el_width = 28
el_height = 17
el_angle = 20
# distance between the center and the foci
foc_dist = np.sqrt(np.abs(el_height * el_height - el_width * el_width) / 4)
# vector from center to one of the foci
foc_vect = np.array([foc_dist * np.cos(el_angle * np.pi / 180), foc_dist * np.sin(el_angle * np.pi / 180)])
# the two foci
el_foc1 = el_cent + foc_vect
el_foc2 = el_cent - foc_vect
# for each x,y: calculate z as the sum of the distances to the foci;
# np.ravel is needed to change the array of arrays (of 1 element) into a single array
z = np.ravel(norm(xy - el_foc1, axis=-1) + norm(xy - el_foc2, axis=-1) )
# points are exactly on the ellipse when the sum of distances is equal to the width
# z = np.where(z <= max(el_width, el_height), 1, 0)
# now create a plot to check whether everything makes sense
from matplotlib import pyplot as plt
from matplotlib import patches as mpatches
fig, ax = plt.subplots()
# show the foci as red dots
plt.plot(*el_foc1, 'ro')
plt.plot(*el_foc2, 'ro')
# create a filter to separate the points inside the ellipse
filter = z <= max(el_width, el_height)
# draw all the points inside the ellipse with the plasma colormap
ax.scatter(x[filter], y[filter], s=5, c=z[filter], cmap='plasma')
# draw all the points outside with the cool colormap
ax.scatter(x[~filter], y[~filter], s=5, c=z[~filter], cmap='cool')
# add the original ellipse to verify that the boundaries match
ellipse = mpatches.Ellipse(xy=el_cent, width=el_width, height=el_height, angle=el_angle,
                           facecolor='None', edgecolor='black', linewidth=2,
                           transform=ax.transData)
ax.add_patch(ellipse)
ax.set_aspect('equal', 'box')
ax.autoscale(enable=True, axis='both', tight=True)
plt.show()
The simplest solution here is to use shapely.
If you have an array of shape Nx2 containing a set of vertices (xy), then it is trivial to construct the appropriate shapely.geometry.Polygon object and check whether an arbitrary point or set of points (points) is contained within it:
import shapely.geometry as geom

ellipse = geom.Polygon(xy)
for p in points:
    if ellipse.contains(geom.Point(p)):
        ...  # the point lies inside the ellipse
Alternatively, if the ellipses are defined by their parameters (i.e. rotation angle, semi-major and semi-minor axes), then the array containing the vertices must be constructed first and the same process applied. I would recommend using the polar form relative to the center, as this is the most compatible with how shapely constructs the polygons.
import shapely.geometry as geom
from shapely import affinity

n = 360
a = 2
b = 1
angle = 45

theta = np.linspace(0, np.pi*2, n)
r = a * b / np.sqrt((b * np.cos(theta))**2 + (a * np.sin(theta))**2)
xy = np.stack([r * np.cos(theta), r * np.sin(theta)], 1)
ellipse = affinity.rotate(geom.Polygon(xy), angle, 'center')

for p in points:
    if ellipse.contains(geom.Point(p)):
        ...  # the point lies inside the ellipse
This method is advantageous because it supports any properly defined polygon, not just ellipses; it doesn't rely on matplotlib methods to perform the containment checking; and it produces very readable code (which is often important when "distributing [one's] software to other people").
Here is a complete example (with added plotting to show it working)
import shapely.geometry as geom
from shapely import affinity
import matplotlib.pyplot as plt
import numpy as np

n = 360
theta = np.linspace(0, np.pi*2, n)

a = 2
b = 1
angle = 45.0

r = a * b / np.sqrt((b * np.cos(theta))**2 + (a * np.sin(theta))**2)
xy = np.stack([r * np.cos(theta), r * np.sin(theta)], 1)
ellipse = affinity.rotate(geom.Polygon(xy), angle, 'center')
x, y = ellipse.exterior.xy

# Create a Nx2 array of points at grid coordinates throughout
# the ellipse extent
rnd = np.array([[i, j] for i in np.linspace(min(x), max(x), 50)
                       for j in np.linspace(min(y), max(y), 50)])

# Filter for points which are contained in the ellipse
res = np.array([p for p in rnd if ellipse.contains(geom.Point(p))])

plt.plot(x, y, lw=1, color='k')
plt.scatter(rnd[:, 0], rnd[:, 1], s=50, color=(0.68, 0.78, 0.91))
plt.scatter(res[:, 0], res[:, 1], s=15, color=(0.12, 0.67, 0.71))
plt.show()
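If there are many points, the Python-level loop over ellipse.contains can become the bottleneck. With Shapely 2.x the containment test can be vectorized; a sketch assuming the ellipse polygon and the Nx2 rnd array from the example above:

import shapely  # Shapely >= 2.0

# One vectorized call instead of a Python loop over geom.Point objects
inside = shapely.contains_xy(ellipse, rnd[:, 0], rnd[:, 1])
res = rnd[inside]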

How to generate random and lattice points inside of an irregular object?

I have an irregular 3D object and want to know the surface of this object. The object can be either convex or non-convex. I can get the surface of this object by applying any method like marching cubes, surface contour, or isosurface.
All of these methods give me a triangulated mesh, which basically contains edges and vertices.
My task is to generate random and lattice points inside the object.
How should I check whether my point is inside or outside?
Any suggestion?
Thanks a lot.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure, io
from skimage.draw import ellipsoid
import skimage as sk
import random

# Sphere centers (assumed values; O and O2 are not defined in the original post)
O = np.array([25, 25, 25])
O2 = np.array([30, 30, 30])

I = np.zeros((50, 50, 50), dtype=float)
for i in range(50):
    for j in range(50):
        for k in range(50):
            dist = np.linalg.norm([i, j, k] - O)
            if dist < 8:
                I[i, j, k] = 0.8  # random.random()
            dist = np.linalg.norm([i, j, k] - O2)
            if dist < 16:
                I[i, j, k] = 1  # random.random()

verts, faces, normals, values = measure.marching_cubes_lewiner(I, 0.7)

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
mesh = Poly3DCollection(verts[faces])
mesh.set_edgecolor('k')
ax.add_collection3d(mesh)
plt.show()
# Now forget the above code, and suppose I have only the verts and
# faces information. How do I generate random points inside this data?
Data = verts[faces]
???????
For random points inside the closed shape:
1. Select a linear density of samples
2. Make a bounding box enclosing the shape
3. Select an entry point on the box
4. Select an exit point, compute direction cosines (wx, wy, wz). Find all segments inside the shape along the ray
5. Start the ray from the entry point
6. Get to the first segment and set it to pstart
7. Sample a length s from the exponential distribution with the selected linear density (see the sketch after this list)
8. Find the point pend = pstart + s (wx, wy, wz)
9. If it is in the first segment, store it, and make pstart = pend. Go to step 7.
10. If it is not, go to the start of the next segment, and set it to pstart. Go to step 7. If there is no segment left, you're done with one ray; go to step 3 and generate another ray.
11. Generate some predefined number of rays, collect all stored points, and you're done.
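Step 7 is the only statistically subtle part. A minimal sketch of sampling points along one segment of a ray, with hypothetical values for the density, segment, and direction (none of these names are from the original post):

import numpy as np

rng = np.random.default_rng()
density = 2.0                         # linear density: points per unit length (assumed)
p_start = np.array([0.0, 0.0, 0.0])   # where the ray enters the segment (assumed)
w = np.array([1.0, 0.0, 0.0])         # direction cosines (wx, wy, wz) (assumed)
seg_length = 3.0                      # length of the segment inside the shape (assumed)

points, t = [], 0.0
while True:
    t += rng.exponential(1.0 / density)  # gaps between samples are exponential
    if t > seg_length:
        break                            # passed the end of this segment
    points.append(p_start + t * w)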
I am sharing the code which I have written. It might be useful for others interested in a similar kind of problem. This is not optimized code: as the grid spacing value decreases, the computation time increases. It also depends on the number of triangles in the mesh. Any suggestion for optimizing or improving the code is welcome. Thanks.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import numpy as np
#from mayavi import mlab

# verts: numpy array of vertices (triangulated mesh)
# faces: numpy array of faces (triangulated mesh)

# This function is taken from here:
# https://www.erikrotteveel.com/python/three-dimensional-ray-tracing-in-python/
def ray_intersect_triangle(p0, p1, triangle):
    # Tests if a ray starting at point p0, in the direction
    # p1 - p0, will intersect with the triangle.
    #
    # arguments:
    # p0, p1: numpy.ndarray, both with shape (3,) for x, y, z.
    # triangle: numpy.ndarray, shaped (3,3), with each row
    #     representing a vertex and three columns for x, y, z.
    #
    # returns:
    #     0.0 if ray does not intersect triangle,
    #     1.0 if it will intersect the triangle,
    #     2.0 if starting point lies in the triangle.
    v0, v1, v2 = triangle
    u = v1 - v0
    v = v2 - v0
    normal = np.cross(u, v)
    b = np.inner(normal, p1 - p0)
    a = np.inner(normal, v0 - p0)

    # Here is the main difference with the code in the link.
    # Instead of returning if the ray is in the plane of the
    # triangle, we set rI, the parameter at which the ray
    # intersects the plane of the triangle, to zero so that
    # we can later check if the starting point of the ray
    # lies on the triangle. This is important for checking
    # if a point is inside a polygon or not.
    if (b == 0.0):
        # ray is parallel to the plane
        if a != 0.0:
            # ray is outside but parallel to the plane
            return 0
        else:
            # ray is parallel and lies in the plane
            rI = 0.0
    else:
        rI = a / b

    if rI < 0.0:
        return 0

    w = p0 + rI * (p1 - p0) - v0
    denom = np.inner(u, v) * np.inner(u, v) - \
        np.inner(u, u) * np.inner(v, v)

    si = (np.inner(u, v) * np.inner(w, v) - \
        np.inner(v, v) * np.inner(w, u)) / denom
    if (si < 0.0) | (si > 1.0):
        return 0

    ti = (np.inner(u, v) * np.inner(w, u) - \
        np.inner(u, u) * np.inner(w, v)) / denom
    if (ti < 0.0) | (si + ti > 1.0):
        return 0

    if (rI == 0.0):
        # point 0 lies ON the triangle. If checking for
        # point inside polygon, return 2 so that the loop
        # over triangles can stop, because it is on the
        # polygon, thus inside.
        return 2

    return 1
def bounding_box_of_mesh(triangle):
    return [np.min(triangle[:, 0]), np.max(triangle[:, 0]),
            np.min(triangle[:, 1]), np.max(triangle[:, 1]),
            np.min(triangle[:, 2]), np.max(triangle[:, 2])]

def boundingboxoftriangle(triangle, x, y, z):
    localbox = [np.min(triangle[:, 0]), np.max(triangle[:, 0]),
                np.min(triangle[:, 1]), np.max(triangle[:, 1]),
                np.min(triangle[:, 2]), np.max(triangle[:, 2])]
    #print('local', localbox)
    for i in range(1, len(x)):
        if (x[i-1] <= localbox[0] < x[i]):
            x_min = i-1
        if (x[i-1] < localbox[1] <= x[i]):
            x_max = i
    for i in range(1, len(y)):
        if (y[i-1] <= localbox[2] < y[i]):
            y_min = i-1
        if (y[i-1] < localbox[3] <= y[i]):
            y_max = i
    for i in range(1, len(z)):
        if (z[i-1] <= localbox[4] < z[i]):
            z_min = i-1
        if (z[i-1] < localbox[5] <= z[i]):
            z_max = i
    return [x_min, x_max, y_min, y_max, z_min, z_max]
spacing = 5  # grid spacing
boundary = bounding_box_of_mesh(verts)
print(boundary)

x = np.arange(boundary[0] - 2*spacing, boundary[1] + 2*spacing, spacing)
y = np.arange(boundary[2] - 2*spacing, boundary[3] + 2*spacing, spacing)
z = np.arange(boundary[4] - 2*spacing, boundary[5] + 2*spacing, spacing)

Grid = np.zeros((len(x), len(y), len(z)), dtype=int)
print(Grid.shape)

data = verts[faces]

xarr = []
yarr = []
zarr = []

# The actual number of grid points is very high, so checking whether every
# grid point is inside or outside is inefficient. So, I am looking only for
# those grid points which are near the mesh boundary. This reduces the time,
# and the internal grid points can be interpolated easily later on.
for i in range(len(data)):
    #print('\n', data[i])
    AABB = boundingboxoftriangle(data[i], x, y, z)  ## axis-aligned bounding box
    #print(AABB)
    for gx in range(AABB[0], AABB[1] + 1):
        if gx not in xarr:
            xarr.append(gx)
    for gy in range(AABB[2], AABB[3] + 1):
        if gy not in yarr:
            yarr.append(gy)
    for gz in range(AABB[4], AABB[5] + 1):
        if gz not in zarr:
            zarr.append(gz)

print(len(xarr), len(yarr), len(zarr))

center = np.array([np.mean(verts[:, 0]), np.mean(verts[:, 1]), np.mean(verts[:, 2])])
print(center)

fw = open('Grid_value_output_spacing__.dat', 'w')
p1 = center  # np.array([0,0,0])
for i in range(len(xarr)):
    for j in range(len(yarr)):
        for k in range(len(zarr)):
            p0 = np.array([x[xarr[i]], y[yarr[j]], z[zarr[k]]])
            for go in range(len(data)):
                value = ray_intersect_triangle(p0, p1, data[go])
                if value > 0:
                    Grid[i, j, k] = value
                    break
            fw.write(str(xarr[i]) + '\t' + str(yarr[j]) + '\t' + str(zarr[k]) + '\t' +
                     str(x[xarr[i]]) + '\t' + str(y[yarr[j]]) + '\t' + str(z[zarr[k]]) + '\t' +
                     str(Grid[i, j, k]) + '\n')
    print(i)
fw.close()

# If the grid value is greater than 0, the point is inside the triangulated mesh.
# I am writing the values of only the ambiguous grid points near the boundary.
# Grid points deeper inside the mesh can be interpolated easily from the above information.
# If the grid spacing is very small, then generating random points inside the
# mesh is equivalent to choosing random grid points.

How to generate new points as offsets with a Gaussian distribution for some points in spherical coordinates in Python

I am working with some points in spherical coordinates. I need to generate new points as error points for them, as a kind of offset of the old points.
Each new point should be at a specific distance from the old one, with that distance distributed according to a Gaussian distribution. The angle of the new point relative to the old one is not important. I am trying to generate new points in the r direction only; phi and theta (spherical coordinates) do not matter.
To generate new points distributed by a Gaussian function, I tried numpy.random.normal(mean, std, ...). But it generates 1D random points around a mean value, and the mean value is a real number. In my case I need a way to specify the position of the old point, and I have one given standard deviation for this distance from the original points.
Honestly, I don't have a copy of my code. It is on the university's server. But let's assume I have an array of size 100*3 containing the spherical (or Cartesian) coordinates of some points on the surface of a cylinder. In the spherical case, the first column holds the radius value, the second column theta, and the third phi for each point. Now I want to generate random points from them using a Gaussian distribution, with a given standard deviation. The only important thing is that the new points generated by the Gaussian distribution are constrained in the r value; the position of the points in terms of theta and phi does not matter.
When I tried numpy.random.normal(mean, std, ...), it generated some random points around the mean value. That does not help me. I want new points around my old ones with the given STD.
Any idea would be appreciated.
This is code similar to mine, written by Ophion: How to generate regular points on cylindrical surface
def make_cylinder(radius, length, nlength, alpha, nalpha, center, orientation):
    # Create the length array
    I = np.linspace(0, length, nlength)

    # Create alpha array, avoiding duplication of endpoints
    # The conditional should be changed to meet your requirements
    if int(alpha) == 360:
        A = np.linspace(0, alpha, num=nalpha, endpoint=False)/180*np.pi
    else:
        A = np.linspace(0, alpha, num=nalpha)/180*np.pi

    # Calculate X and Y
    X = radius * np.cos(A)
    Y = radius * np.sin(A)

    # Tile/repeat indices so all unique pairs are present
    pz = np.tile(I, nalpha)
    px = np.repeat(X, nlength)
    py = np.repeat(Y, nlength)
    points = np.vstack((pz, px, py)).T

    # Shift to center
    shift = np.array(center) - np.mean(points, axis=0)
    points += shift

    # Orient tube to new vector
    # Grabbed from an old unutbu answer
    def rotation_matrix(axis, theta):
        a = np.cos(theta/2)
        b, c, d = -axis*np.sin(theta/2)
        return np.array([[a*a+b*b-c*c-d*d, 2*(b*c-a*d), 2*(b*d+a*c)],
                         [2*(b*c+a*d), a*a+c*c-b*b-d*d, 2*(c*d-a*b)],
                         [2*(b*d-a*c), 2*(c*d+a*b), a*a+d*d-b*b-c*c]])

    ovec = orientation / np.linalg.norm(orientation)
    cylvec = np.array([1, 0, 0])
    if np.allclose(cylvec, ovec):
        return points

    # Get orthogonal axis and rotation
    oaxis = np.cross(ovec, cylvec)
    rot = np.arccos(np.dot(ovec, cylvec))
    R = rotation_matrix(oaxis, rot)
    return points.dot(R)
Now, calling the function:
points = make_cylinder(3, 5, 5, 360, 10, [0,2,0], [1,0,0])
sigma = 0.5  # given STD
offset_points = np.random.normal(np.mean(points[:,0]), sigma, size=(n,3))
If I'm not mistaken, you want random points on a spherical manifold with a Gaussian distribution of distances from the center. If so, then the latter problem is solved by sampling Gaussian values of the radius using numpy.random.normal.
Getting random spherical points is a little bit trickier, but here's some code to do it (and a description of the math behind it at Wolfram MathWorld):
import numpy as np
import math as m

num_points = 500
U = np.random.random(num_points)
V = np.random.random(num_points)

def spherical_to_cartesian(vec):
    '''
    Convert spherical polar coordinates to cartesian coordinates:
    See the definition of spherical_cartesian_to_polar.

    @param vec: A vector of the 3 polar coordinates (r, u, v)
    @return: (x, y, z)
    '''
    (r, u, v) = vec
    x = r * m.sin(u) * m.cos(v)
    y = r * m.sin(u) * m.sin(v)
    z = r * m.cos(u)
    return [x, y, z]

radius = 1.
points = np.array([spherical_to_cartesian([radius, 2 * np.pi * u, np.arccos(2*v - 1)]) for u, v in zip(U, V)])

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(points[:,0], points[:,1], points[:,2], 'o')
Which will give you points like this:
Now if you want them to have normally distributed radii, you just need to substitute your randomly generated values into the list comprehension which uses the variable radius, like this:
radii = np.random.normal(10, 3, 100)
points = np.array([spherical_to_cartesian([r, 2 * np.pi * u, np.arccos(2*v - 1)]) for r,u,v in zip(radii, U,V)])
Is this more or less what you're looking for?
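For the original use case of jittering existing points in r only, the same idea applies directly to the coordinate array. A minimal sketch, assuming a hypothetical (N, 3) array pts of spherical coordinates (r, theta, phi) and the given sigma:

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # given standard deviation

# hypothetical points on a cylinder-like surface: columns are (r, theta, phi)
pts = np.column_stack([np.full(100, 10.0),
                       rng.uniform(0, np.pi, 100),
                       rng.uniform(0, 2 * np.pi, 100)])

offset_pts = pts.copy()
offset_pts[:, 0] += rng.normal(0.0, sigma, size=len(pts))  # offset r only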

Color matplotlib quiver field according to magnitude and direction

I'm attempting to achieve the same behavior as this function in Matlab, whereby the color of each arrow corresponds to both its magnitude and direction, essentially drawing its color from a wheel. I saw this question, but it only seems to work for barbs. I also saw this answer, but quiver complains that the color array must be two-dimensional.
What is the best way to compute C for matplotlib.pyplot.quiver, taking into account both magnitude and direction?
Even though this is quite old now, I've come across the same problem. Based on matplotlib's quiver demo and my own answer to this post, I created the following example. The idea is to convert the angle of a vector to the color using the hue of the HSV color space. The absolute value of the vector is used as the saturation and the value.
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt
def vector_to_rgb(angle, absolute):
    """Get the rgb value for the given `angle` and the `absolute` value

    Parameters
    ----------
    angle : float
        The angle in radians
    absolute : float
        The absolute value of the gradient

    Returns
    -------
    array_like
        The rgb value as a tuple with values [0..1]
    """
    global max_abs

    # normalize angle
    angle = angle % (2 * np.pi)
    if angle < 0:
        angle += 2 * np.pi

    return matplotlib.colors.hsv_to_rgb((angle / 2 / np.pi,
                                         absolute / max_abs,
                                         absolute / max_abs))
X = np.arange(-10, 10, 1)
Y = np.arange(-10, 10, 1)
U, V = np.meshgrid(X, Y)
angles = np.arctan2(V, U)
lengths = np.sqrt(np.square(U) + np.square(V))
max_abs = np.max(lengths)
c = np.array(list(map(vector_to_rgb, angles.flatten(), lengths.flatten())))
fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V, color=c)
plt.show()
The color wheel is the following. The code for generating it is mentioned in the Edit.
Edit
I just noticed that the linked Matlab function "renders a vector field as a grid of unit-length arrows. The arrow direction indicates vector field direction, and the color indicates the magnitude". So my above example is not really what is in the question. Here are some modifications.
The left graph is the same as above. The right one does what the cited Matlab function does: a unit-length arrow plot with the color indicating the magnitude. The center one does not use the magnitude but only the direction in the color, which might be useful too. I hope other combinations are clear from this example.
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt

def vector_to_rgb(angle, absolute):
    """Get the rgb value for the given `angle` and the `absolute` value

    Parameters
    ----------
    angle : float
        The angle in radians
    absolute : float
        The absolute value of the gradient

    Returns
    -------
    array_like
        The rgb value as a tuple with values [0..1]
    """
    global max_abs

    # normalize angle
    angle = angle % (2 * np.pi)
    if angle < 0:
        angle += 2 * np.pi

    return matplotlib.colors.hsv_to_rgb((angle / 2 / np.pi,
                                         absolute / max_abs,
                                         absolute / max_abs))

X = np.arange(-10, 10, 1)
Y = np.arange(-10, 10, 1)
U, V = np.meshgrid(X, Y)

angles = np.arctan2(V, U)
lengths = np.sqrt(np.square(U) + np.square(V))
max_abs = np.max(lengths)

# hue is direction, saturation and value are magnitude
c1 = np.array(list(map(vector_to_rgb, angles.flatten(), lengths.flatten())))

ax = plt.subplot(131)
ax.set_title("Hue is direction,\nsaturation and value are magnitude")
q = ax.quiver(X, Y, U, V, color=c1)

# color is direction only
c2 = np.array(list(map(vector_to_rgb, angles.flatten(),
                       np.ones_like(lengths.flatten()) * max_abs)))

ax = plt.subplot(132)
ax.set_title("Color is direction only")
q = ax.quiver(X, Y, U, V, color=c2)

# color is magnitude only
c3 = np.array(list(map(vector_to_rgb, 2 * np.pi * lengths.flatten() / max_abs,
                       max_abs * np.ones_like(lengths.flatten()))))

# create unit-length vectors
U_ddash = np.ones_like(U)
V_ddash = np.zeros_like(V)
# now rotate them
U_dash = U_ddash * np.cos(angles) - V_ddash * np.sin(angles)
V_dash = U_ddash * np.sin(angles) + V_ddash * np.cos(angles)

ax = plt.subplot(133)
ax.set_title("Uniform length,\ncolor is magnitude only")
q = ax.quiver(X, Y, U_dash, V_dash, color=c3)

plt.show()
To plot the color wheel, use the following code. Note that it uses the max_abs value from above, which is the maximum value that the color hue and value can reach. The vector_to_rgb() function is also re-used here.
ax = plt.subplot(236, projection='polar')
n = 200
t = np.linspace(0, 2 * np.pi, n)
r = np.linspace(0, max_abs, n)
rg, tg = np.meshgrid(r, t)
c = np.array(list(map(vector_to_rgb, tg.T.flatten(), rg.T.flatten())))
cv = c.reshape((n, n, 3))
m = ax.pcolormesh(t, r, cv[:,:,1], color=c, shading='auto')
m.set_array(None)
ax.set_yticklabels([])
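If only one of the two quantities has to drive the color, note that quiver also accepts a scalar array C that is mapped through a colormap, which avoids building the RGB array by hand. A minimal sketch for direction-only coloring, assuming the X, Y, U, V, angles arrays from above (pass lengths with a sequential colormap instead for magnitude-only coloring):

fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V, angles, cmap='hsv')  # cyclic colormap for angles
q.set_clim(-np.pi, np.pi)  # fix the color limits to the full angle range
fig.colorbar(q, ax=ax)
plt.show()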
I don't know if you've since found that quiver with matplotlib 1.4.x has 3D capability. This capability is limited when attempting to colour the arrows, however.
A friend and I wrote the following script (in half an hour or so) to plot my experiment data using hex values from a spreadsheet, for my thesis. We're going to make this more automated once we're done with the semester, but the issue with passing a colour map to quiver is that it can't accept a vector form for some reason.
This link is to my git repository where the code I used, slightly neatened up by another friend, is hosted.
I hope I can save someone the time it took me.
