Here is what I am trying to do:
Cut an STL file (https://www.dropbox.com/s/pex20yqfgmxgt0w/wing_fish.stl?dl=0) at a given Z coordinate using PyVista (https://docs.pyvista.org/)
Extract the X, Y coordinates of the points at that Z section
Sort the points into upper and lower groups for further manipulation
Here is my code:
import pyvista as pv
import matplotlib.pylab as plt
import numpy as np
import math
mesh = pv.read('wing_fish.stl')
z_slice = [0, 0, 1] # normal to cut at
single_slice = mesh.slice(normal=z_slice, origin=[0, 0, 200]) # slicing
a = single_slice.points # choose only points
# p = pv.Plotter() #show section
# p.add_mesh(single_slice)
# p.show()
a = a[a[:,0].astype(float).argsort()] # sort all points by X coord
# X min of all points
x0 = a[0][0]
# Y min of all points
y0 = a[0][1]
# X tail 1 of 2
xn = a[-1][0]
# Y tail 1 of 2
yn = a[-1][1]
# X tail 2 of 2
xn2 = a[-2][0]
# Y tail 2 of 2
yn2 = a[-2][1]
def line_y(x, x0, y0, xn, yn):
    # return y coord at an arbitrary x coord on the LINE through (x0, y0) and (xn, yn)
    return ((x - x0)*(yn - y0))/(xn - x0) + y0

def line_c(x0, y0, xn, yn):
    # return x, y midpoint of the LINE
    xc = (x0 + xn)/2
    yc = (y0 + yn)/2
    return xc, yc

def chord(P1, P2):
    return math.sqrt((P2[1] - P1[1])**2 + (P2[0] - P1[0])**2)
xc_end, yc_end = line_c(xn, yn, xn2, yn2) # midpoint of the trailing edge
midLine = np.array([[x0,y0],[xc_end,yc_end]],dtype='float32')
c_temp_x_d = []
c_temp_y_d = []
c_temp_x_u = []
c_temp_y_u = []
isUp = None
isDown = None
for i in a:
    if i[1] == line_y(i[0], x0=x0, y0=y0, xn=xc_end, yn=yc_end):
        continue
    elif i[1] < line_y(i[0], x0=x0, y0=y0, xn=xc_end, yn=yc_end):
        c_temp_y_d.append(i[1])
        c_temp_x_d.append(i[0])
        isDown = True
    else:
        c_temp_y_u.append(i[1])
        c_temp_x_u.append(i[0])
        isUp = True
if len(c_temp_y_d) != 0 and len(c_temp_y_u) != 0:
    print(c_temp_y_d[-1])
plt.plot(c_temp_x_d, c_temp_y_d, label='suppose to be down points')
plt.plot(c_temp_x_u, c_temp_y_u, label='suppose to be upper points')
plt.plot(midLine[:,0], midLine[:,1], label='Chord')
plt.scatter(a[:,0],a[:,1], label='raw points')
plt.legend();plt.grid();plt.show()
What I have:
What I want:
I would highly appreciate any help and advice!
Thanks in advance!
You are discarding precious connectivity information that is already there in your STL mesh and in your slice!
I couldn't think of a more idiomatic solution within PyVista, but at worst you can take the cell (line) information from the slice and start walking your shape (that is topologically equivalent to a circle) from its left side to its right, and vice versa. Here's one way:
import numpy as np
import matplotlib.pyplot as plt
import pyvista as pv
mesh = pv.read('../wing_fish.stl')
z_slice = [0, 0, 1] # normal to cut at
single_slice = mesh.slice(normal=z_slice, origin=[0, 0, 200]) # slicing
# find points with smallest and largest x coordinate
points = single_slice.points
left_ind = points[:, 0].argmin()
right_ind = points[:, 0].argmax()
# sanity check for what we're about to do:
# 1. all cells are lines
assert single_slice.n_cells == single_slice.n_points
assert (single_slice.lines[::3] == 2).all()
# 2. all points appear exactly once as segment start and end
lines = single_slice.lines.reshape(-1, 3) # each row: [2, i_from, i_to]
assert len(set(lines[:, 1])) == lines.shape[0]
# create an auxiliary dict with from -> to index mappings
conn = dict(lines[:, 1:])
# and a function that walks this connectivity graph
def walk_connectivity(connectivity, start, end):
    this_ind = start
    path_inds = [this_ind]
    while True:
        next_ind = connectivity[this_ind]
        path_inds.append(next_ind)
        this_ind = next_ind
        if this_ind == end:
            # we're done
            return path_inds
# start walking at point left_ind, walk until right_ind
first_side_inds = walk_connectivity(conn, left_ind, right_ind)
# now walk forward for the other half curve
second_side_inds = walk_connectivity(conn, right_ind, left_ind)
# get the point coordinates for plotting
first_side_points = points[first_side_inds, :-1]
second_side_points = points[second_side_inds, :-1]
# plot the two sides
fig, ax = plt.subplots()
ax.plot(*first_side_points.T)
ax.plot(*second_side_points.T)
plt.show()
In order to avoid using an O(n^2) algorithm, I defined an auxiliary dict that maps line segment start indices to end indices. In order for this to work we need some sanity checks, namely that the cells are all simple line segments, and that each segment has the same orientation (i.e. each start point is unique, and each end point is unique). Once we have this it's easy to start from the left edge of your wing profile and walk each line segment until we find the right edge.
The nature of this approach implies that we can't know a priori whether the path from left to right goes on the upper or the lower path. This needs experimentation on your part; name the two paths in whatever way you see fit.
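For instance, a small sketch of one such naming choice, using the first_side_points and second_side_points arrays computed above: call the path with the larger mean y coordinate the upper surface.
# heuristic: the path with the larger mean y is the upper surface
if first_side_points[:, 1].mean() > second_side_points[:, 1].mean():
    upper_points, lower_points = first_side_points, second_side_points
else:
    upper_points, lower_points = second_side_points, first_side_points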
And of course there's always room for fine tuning. For instance, the above implementation creates two paths that both start and end with the left and right-side boundary points of the mesh. If you want the top and bottom curves to share no points, you'll have to adjust the algorithm accordingly. And if the end point is not found on the path then the current implementation will give you an infinite loop with a list growing beyond all available memory. Consider adding some checks in the implementation to avoid this.
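One possible guard, as a minimal sketch (other checks would work just as well), is to stop the walk once it has visited more points than the slice contains, which can only happen if the end point is unreachable:
def walk_connectivity_safe(connectivity, start, end):
    # same walk as above, but bail out instead of looping forever
    this_ind = start
    path_inds = [this_ind]
    while len(path_inds) <= len(connectivity):
        this_ind = connectivity[this_ind]
        path_inds.append(this_ind)
        if this_ind == end:
            return path_inds
    raise ValueError('end point not reachable from start point')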
Anyway, this is what we get from the above:
I have plotted the projectile motion given initial conditions.
What I'm stuck on is making tmax depend on the initial conditions, so that if you change the initial conditions tmax changes to a reasonable value. Secondly, how would I find the distance to impact, the time of flight, and the velocity at impact? These all depend on when y becomes zero, but I don't know how to use that information.
Here is my code. Thanks.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
c=0.65 #Constant friction term
m=0.1 #Object's mass
g=9.81 #Gravitational acceleration
theta=50 #Angle
V0=10 #Initial Velocity
Vx0=np.cos(np.deg2rad(theta))*V0 #Calculates initial speed along the x-axis
Vy0=np.sin(np.deg2rad(theta))*V0 #Calculates initial speed along the y-axis
t0 = 0.0 #Initial time
tmax=1 #Final time
steps=20 #Number of time step
tAF = np.linspace(t0, tmax, steps)# Creates a 1-D array of time values
y0AF = [0.0, Vx0, 0.0, Vy0] #Initial condition for x-position, V along x, y-position, V along y
def derivative(yAF, tF): #Function which calculates the derivatives
    Vx = yAF[1] #Element in array for the velocity along x axis
    Vy = yAF[3] #Element in array for the velocity along y axis
    return [Vx, -c*Vx/m, Vy, -g-c*Vy/m] #Function outputs [dx/dt, dVx/dt, dy/dt, dVy/dt]
yM = odeint(derivative, y0AF, tAF) #Array with the solution for the differential equation
plt.plot(yM[:,0],yM[:,2], '.') #Plots y over x
plt.xlabel('x-distance'), plt.ylabel('y-distance')
plt.show()
MAXy=np.amax(yM[:,2])
print('Maximum height reached is',MAXy)
Instead of trying to integrate the differential equations all in one use of odeint(), you can use multiple steps, changing the initial conditions to continue the integration, e.g. by replacing the line:
yM = odeint(derivative, y0AF, tAF)
... with something like:
y = np.array([y0AF])
tLargeStep = 0.3
i = 0
while 1:
    tAF = np.linspace(i*tLargeStep, (i+1)*tLargeStep, steps)
    yM = odeint(derivative, y[-1,:], tAF)
    y = np.concatenate((y, yM[1:,:]))
    if y[-1,2] < 0:
        break
    i += 1
plt.plot(y[:,0],y[:,2], '.') #Plots y over x
plt.xlabel('x-distance'), plt.ylabel('y-distance')
plt.show()
In this case, the integration will continue large step by large step until y is less than zero at the end of a large step.
Having obtained the data in y, you could use numpy.interp() (or simple linear interpolation between the last two samples) on those data to find the distance to impact, the time of flight and the velocity at impact.
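For example, here is a minimal sketch of that interpolation. It assumes you also accumulate a matching time array t alongside y in the loop above (e.g. by concatenating tAF[1:] at each large step), so that t[i] is the time of sample y[i]:
# y's columns are [x, Vx, y, Vy]; the last row is the first sample below ground
y1, y2 = y[-2, 2], y[-1, 2]
frac = y1 / (y1 - y2)                              # where y crosses zero within the last step
t_impact = t[-2] + frac*(t[-1] - t[-2])            # time of flight
x_impact = y[-2, 0] + frac*(y[-1, 0] - y[-2, 0])   # distance to impact
vx_i = y[-2, 1] + frac*(y[-1, 1] - y[-2, 1])
vy_i = y[-2, 3] + frac*(y[-1, 3] - y[-2, 3])
v_impact = np.hypot(vx_i, vy_i)                    # speed at impact
print(t_impact, x_impact, v_impact)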
Alternatively, if you use scipy.integrate.solve_ivp() instead of odeint(), solve_ivp() allows you to specify events to track, and to make some of those events terminal, i.e. to make the solver stop computing when the specified event occurs.
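For instance, here is a minimal sketch of that approach, reusing the constants and y0AF from the question (the 100 s upper time limit is an arbitrary bound that the terminal event cuts short):
from scipy.integrate import solve_ivp

def deriv(t, s):                          # solve_ivp expects f(t, y)
    xx, Vx, yy, Vy = s
    return [Vx, -c*Vx/m, Vy, -g - c*Vy/m]

def hit_ground(t, s):                     # event function: the y coordinate
    return s[2]
hit_ground.terminal = True                # stop integrating when it hits zero
hit_ground.direction = -1                 # only on the way down

sol = solve_ivp(deriv, (0, 100), y0AF, events=hit_ground, dense_output=True)
t_impact = sol.t_events[0][0]                         # time of flight
x_imp, Vx_imp, y_imp, Vy_imp = sol.sol(t_impact)      # state at impact
print(t_impact, x_imp, np.hypot(Vx_imp, Vy_imp))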
I have an irregular 3D object and want to know its surface. The object can be either convex or non-convex. I can get the surface of this object by applying a method such as marching cubes, surface contours, or isosurfaces.
All these methods give me a triangulated mesh, which basically contains edges and vertices.
My task is to generate random and lattice points inside the object.
How should I check whether my point is inside or outside?
Any suggestion?
Thanks a lot.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure, io
from skimage.draw import ellipsoid
import skimage as sk
import random
O = np.array([25, 25, 25])   # center of the first sphere (not defined in the original; example value assumed)
O2 = np.array([30, 30, 30])  # center of the second sphere (example value assumed as well)
I = np.zeros((50, 50, 50), dtype=float)
for i in range(50):
    for j in range(50):
        for k in range(50):
            dist = np.linalg.norm([i, j, k] - O)
            if dist < 8:
                I[i, j, k] = 0.8  # random.random()
            dist = np.linalg.norm([i, j, k] - O2)
            if dist < 16:
                I[i, j, k] = 1  # random.random()
verts, faces, normals, values = measure.marching_cubes_lewiner(I, 0.7)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
mesh = Poly3DCollection(verts[faces])
mesh.set_edgecolor('k')
ax.add_collection3d(mesh)
plt.show()
# now forget the above code and suppose I have only the verts and
# faces information. Now how to generate random points inside this Data?
Data = verts[faces]
???????
For random points inside the closed shape:
1. Select a linear density of samples
2. Make a bounding box enclosing the shape
3. Select an entry point on the box
4. Select an exit point, compute the direction cosines (wx, wy, wz). Find all segments inside the shape along the ray
5. Start the ray from the entry point
6. Get to the first segment and set it to pstart
7. Sample a length s from the exponential distribution with the selected linear density
8. Find the point pend = pstart + s (wx, wy, wz)
9. If it is in the first segment, store it, and make pstart = pend. Go to step 7.
10. If it is not, go to the start of the next segment, and set it to pstart. Go to step 7. If there is no segment left, you're done with one ray; go to step 3 and generate another ray.
11. Generate some predefined number of rays, collect all stored points, and you're done (a sketch of the per-ray loop, steps 6-10, follows below).
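Here is a minimal sketch of steps 6-10 for a single ray. It assumes you already have, from steps 3-4, the entry point, the direction cosines, and a list of (t_start, t_end) ray parameters where the ray is inside the mesh (how you compute those intervals is up to your own intersection code):
import numpy as np

def sample_ray(entry, direction, inside_segments, density, rng=None):
    # entry: (3,) entry point on the box; direction: (3,) direction cosines
    # inside_segments: list of (t_start, t_end) intervals where the ray is inside the shape
    if rng is None:
        rng = np.random.default_rng()
    points = []
    for t_start, t_end in inside_segments:
        t = t_start                               # step 6: pstart at the segment start
        while True:
            t += rng.exponential(1.0 / density)   # step 7: exponential spacing
            if t > t_end:                         # step 10: past this segment, move on
                break
            points.append(entry + t * direction)  # steps 8-9: store pend, pstart = pend
    return points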
I am sharing the code I have written. It might be useful to others interested in a similar kind of problem. This is not optimized code: as the grid spacing decreases, the computation time increases, and it also depends on the number of triangles in the mesh. Any suggestions for optimizing or improving the code are welcome. Thanks.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import numpy as np
#from mayavi import mlab
# verts : numpy array of vertices (triangulated mesh), assumed to be already available
# faces : numpy array of faces (triangulated mesh), assumed to be already available
# This function is taken from here:
# https://www.erikrotteveel.com/python/three-dimensional-ray-tracing-in-python/
def ray_intersect_triangle(p0, p1, triangle):
    # Tests if a ray starting at point p0, in the direction
    # p1 - p0, will intersect with the triangle.
    #
    # arguments:
    # p0, p1: numpy.ndarray, both with shape (3,) for x, y, z.
    # triangle: numpy.ndarray, shaped (3,3), with each row
    # representing a vertex and three columns for x, y, z.
    #
    # returns:
    # 0.0 if ray does not intersect triangle,
    # 1.0 if it will intersect the triangle,
    # 2.0 if starting point lies in the triangle.
    v0, v1, v2 = triangle
    u = v1 - v0
    v = v2 - v0
    normal = np.cross(u, v)
    b = np.inner(normal, p1 - p0)
    a = np.inner(normal, v0 - p0)
    # Here is the main difference with the code in the link.
    # Instead of returning if the ray is in the plane of the
    # triangle, we set rI, the parameter at which the ray
    # intersects the plane of the triangle, to zero so that
    # we can later check if the starting point of the ray
    # lies on the triangle. This is important for checking
    # if a point is inside a polygon or not.
    if (b == 0.0):
        # ray is parallel to the plane
        if a != 0.0:
            # ray is outside but parallel to the plane
            return 0
        else:
            # ray is parallel and lies in the plane
            rI = 0.0
    else:
        rI = a / b
    if rI < 0.0:
        return 0
    w = p0 + rI * (p1 - p0) - v0
    denom = np.inner(u, v) * np.inner(u, v) - \
        np.inner(u, u) * np.inner(v, v)
    si = (np.inner(u, v) * np.inner(w, v) -
          np.inner(v, v) * np.inner(w, u)) / denom
    if (si < 0.0) | (si > 1.0):
        return 0
    ti = (np.inner(u, v) * np.inner(w, u) -
          np.inner(u, u) * np.inner(w, v)) / denom
    if (ti < 0.0) | (si + ti > 1.0):
        return 0
    if (rI == 0.0):
        # point 0 lies ON the triangle. If checking for
        # point inside polygon, return 2 so that the loop
        # over triangles can stop, because it is on the
        # polygon, thus inside.
        return 2
    return 1
def bounding_box_of_mesh(triangle):
    return [np.min(triangle[:,0]), np.max(triangle[:,0]), np.min(triangle[:,1]), np.max(triangle[:,1]), np.min(triangle[:,2]), np.max(triangle[:,2])]

def boundingboxoftriangle(triangle, x, y, z):
    localbox = [np.min(triangle[:,0]), np.max(triangle[:,0]), np.min(triangle[:,1]), np.max(triangle[:,1]), np.min(triangle[:,2]), np.max(triangle[:,2])]
    #print('local', localbox)
    for i in range(1, len(x)):
        if (x[i-1] <= localbox[0] < x[i]):
            x_min = i-1
        if (x[i-1] < localbox[1] <= x[i]):
            x_max = i
    for i in range(1, len(y)):
        if (y[i-1] <= localbox[2] < y[i]):
            y_min = i-1
        if (y[i-1] < localbox[3] <= y[i]):
            y_max = i
    for i in range(1, len(z)):
        if (z[i-1] <= localbox[4] < z[i]):
            z_min = i-1
        if (z[i-1] < localbox[5] <= z[i]):
            z_max = i
    return [x_min, x_max, y_min, y_max, z_min, z_max]
spacing = 5  # grid spacing
boundary = bounding_box_of_mesh(verts)
print(boundary)
x = np.arange(boundary[0]-2*spacing, boundary[1]+2*spacing, spacing)
y = np.arange(boundary[2]-2*spacing, boundary[3]+2*spacing, spacing)
z = np.arange(boundary[4]-2*spacing, boundary[5]+2*spacing, spacing)
Grid = np.zeros((len(x), len(y), len(z)), dtype=int)
print(Grid.shape)
data = verts[faces]
xarr = []
yarr = []
zarr = []
# The actual number of grid points is very high, so checking whether every grid
# point is inside or outside is inefficient. So, I am looking only for those
# grid points which are near the mesh boundary. This reduces the time, and
# later on the internal grid points can be interpolated easily.
for i in range(len(data)):
    #print('\n', data[i])
    AABB = boundingboxoftriangle(data[i], x, y, z)  ## axis aligned bounding box
    #print(AABB)
    for gx in range(AABB[0], AABB[1]+1):
        if gx not in xarr:
            xarr.append(gx)
    for gy in range(AABB[2], AABB[3]+1):
        if gy not in yarr:
            yarr.append(gy)
    for gz in range(AABB[4], AABB[5]+1):
        if gz not in zarr:
            zarr.append(gz)
print(len(xarr), len(yarr), len(zarr))
center = np.array([np.mean(verts[:,0]), np.mean(verts[:,1]), np.mean(verts[:,2])])
print(center)
fw = open('Grid_value_output_spacing__.dat', 'w')
p1 = center  # np.array([0,0,0])
for i in range(len(xarr)):
    for j in range(len(yarr)):
        for k in range(len(zarr)):
            p0 = np.array([x[xarr[i]], y[yarr[j]], z[zarr[k]]])
            for go in range(len(data)):
                value = ray_intersect_triangle(p0, p1, data[go])
                if value > 0:
                    Grid[i, j, k] = value
                    break
            fw.write(str(xarr[i])+'\t'+str(yarr[j])+'\t'+str(zarr[k])+'\t'+str(x[xarr[i]])+'\t'+str(y[yarr[j]])+'\t'+str(z[zarr[k]])+'\t'+str(Grid[i,j,k])+'\n')
    print(i)
fw.close()
# If the grid value is greater than 0 then it is inside the triangulated mesh.
# I am writing the values of only the confusing grid points near the boundary.
# Grid points deeper inside the mesh can be interpolated easily with the above information.
# If the grid spacing is very small, then generating random points inside the
# mesh is equivalent to choosing random grid points.
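Following the comment above that a grid value greater than 0 marks an inside node, here is a minimal sketch (assuming the Grid, x, y, z, xarr, yarr, zarr and spacing variables from the code above) of turning those inside nodes into random interior points by jittering each picked node within one grid cell; near the boundary this is only approximate:
rng = np.random.default_rng()
inside = np.argwhere(Grid > 0)                     # (i, j, k) indices of nodes flagged inside
n_samples = 1000
picks = inside[rng.integers(0, len(inside), n_samples)]
random_points = np.column_stack([
    x[xarr][picks[:, 0]],
    y[yarr][picks[:, 1]],
    z[zarr][picks[:, 2]],
]) + rng.uniform(0.0, spacing, size=(n_samples, 3))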
Working in Python, I am doing some physics calculations over an NxM grid of values, where N goes from 1 to 3108 and M goes from 1 to 2304 (this corresponds to a large image). I need to calculate a value at each and every point in this space, which totals ~7 million calculations. My current approach is painfully slow, and I am wondering if there is a way to complete this task without it taking hours...
My first approach was just to use nested for loops, but this seemed like the least efficient way to solve my problem. I have tried using NumPy's nditer and iterating over each axis individually, but I've read that it doesn't actually speed up my computations. Rather than looping through each axis individually, I also tried making a 3-D array and looping through the outer axis as shown in Brian's answer here How can I, in python, iterate over multiple 2d lists at once, cleanly? . Here is the current state of my code:
import numpy as np
x,y = np.linspace(1,3108,num=3108),np.linspace(1,2304,num=2304) # x&y dimensions of image
X,Y = np.meshgrid(x,y,indexing='ij')
all_coords = np.dstack((X,Y)) # moves to 3-D
all_coords = all_coords.astype(int) # sets coords to int
For reference, all_coords looks like this:
array([[[1.000e+00, 1.000e+00],
[1.000e+00, 2.000e+00],
[1.000e+00, 3.000e+00],
...,
[1.000e+00, 2.302e+03],
[1.000e+00, 2.303e+03],
[1.000e+00, 2.304e+03]],
[[2.000e+00, 1.000e+00],
[2.000e+00, 2.000e+00],
[2.000e+00, 3.000e+00],
...,
[2.000e+00, 2.302e+03],
[2.000e+00, 2.303e+03],
[2.000e+00, 2.304e+03]],
and so on. Back to my code...
'''
- below is a function that does a calculation on the full grid using the distance between x0,y0 and each point on the grid.
- the function takes x0,y0 and returns the calculated values across the grid
'''
def do_calc(x0, y0):
    del_x, del_y = X-x0, Y-y0
    np.seterr(divide='ignore', invalid='ignore')
    dmx_ij = (del_x/((del_x**2)+(del_y**2))) # x component
    dmy_ij = (del_y/((del_x**2)+(del_y**2))) # y component
    return dmx_ij, dmy_ij

# now the actual loop
def do_loop():
    dmx, dmy = 0, 0
    for pair in all_coords:
        for xi, yi in pair:
            DM = do_calc(xi, yi)
            dmx, dmy = dmx+DM[0], dmy+DM[1]
    return dmx, dmy
As you might see, this code takes an incredibly long time to run... If there is any way to modify my code such that it doesn't take hours to complete, I would be extremely interested in knowing how to do that. Thanks in advance for the help.
Here is a method that gives a 10,000x speedup at N=310, M=230. As the method scales better than the original code I'd expect a factor of more than a million at the full problem size.
The method exploits the shift invariance of the problem. For example, del_x**2 is essentially the same up to shift at each call of do_calc, so we compute it only once.
If the output of do_calc is weighted before summation, the problem is no longer fully translation invariant and this method no longer works. The result, however, can then be expressed in terms of a linear convolution. At N=310, M=230 this still leaves us with a more than 1,000x speedup, and again this factor will be larger at the full problem size.
Code for the original problem:
import numpy as np
#N, M = 3108, 2304
N, M = 310, 230
### OP's code
x,y = np.linspace(1,N,num=N),np.linspace(1,M,num=M) # x&y dimensions of image
X,Y = np.meshgrid(x,y,indexing='ij')
all_coords = np.dstack((X,Y)) # moves to 3-D
all_coords = all_coords.astype(int) # sets coords to int
'''
- below is a function that does a calculation on the full grid using the distance between x0,y0 and each point on the grid.
- the function takes x0,y0 and returns the calculated values across the grid
'''
def do_calc(x0, y0):
    del_x, del_y = X-x0, Y-y0
    np.seterr(divide='ignore', invalid='ignore')
    dmx_ij = (del_x/((del_x**2)+(del_y**2))) # x component
    dmy_ij = (del_y/((del_x**2)+(del_y**2))) # y component
    return np.nan_to_num(dmx_ij), np.nan_to_num(dmy_ij)

# now the actual loop
def do_loop():
    dmx, dmy = 0, 0
    for pair in all_coords:
        for xi, yi in pair:
            DM = do_calc(xi, yi)
            dmx, dmy = dmx+DM[0], dmy+DM[1]
    return dmx, dmy
from time import time
t = [time()]
### pp's code
x, y = np.ogrid[-N+1:N-1:2j*N - 1j, -M+1:M-1:2j*M - 1J]
den = x*x + y*y
den[N-1, M-1] = 1
xx = x / den
yy = y / den
for zz in xx, yy:
    zz[N:] -= zz[:N-1]
    zz[:, M:] -= zz[:, :M-1]
XX = xx.cumsum(0)[N-1:].cumsum(1)[:, M-1:]
YY = yy.cumsum(0)[N-1:].cumsum(1)[:, M-1:]
t.append(time())
### call OP's code for reference
X_OP, Y_OP = do_loop()
t.append(time())
# make sure results are equal
assert np.allclose(XX, X_OP)
assert np.allclose(YY, Y_OP)
print('pp {}\nOP {}'.format(*np.diff(t)))
Sample run:
pp 0.015251636505126953
OP 149.1642508506775
Code for weighted problem:
import numpy as np
#N, M = 3108, 2304
N, M = 310, 230
values = np.random.random((N, M))
x,y = np.linspace(1,N,num=N),np.linspace(1,M,num=M) # x&y dimensions of image
X,Y = np.meshgrid(x,y,indexing='ij')
all_coords = np.dstack((X,Y)) # moves to 3-D
all_coords = all_coords.astype(int) # sets coords to int
'''
- below is a function that does a calculation on the full grid using the distance between x0,y0 and each point on the grid.
- the function takes x0,y0 and returns the calculated values across the grid
'''
def do_calc(x0, y0, v):
    del_x, del_y = X-x0, Y-y0
    np.seterr(divide='ignore', invalid='ignore')
    dmx_ij = (del_x/((del_x**2)+(del_y**2))) # x component
    dmy_ij = (del_y/((del_x**2)+(del_y**2))) # y component
    return v*np.nan_to_num(dmx_ij), v*np.nan_to_num(dmy_ij)

# now the actual loop
def do_loop():
    dmx, dmy = 0, 0
    for pair, vv in zip(all_coords, values):
        for (xi, yi), v in zip(pair, vv):
            DM = do_calc(xi, yi, v)
            dmx, dmy = dmx+DM[0], dmy+DM[1]
    return dmx, dmy
from time import time
from scipy import signal
t = [time()]
x, y = np.ogrid[-N+1:N-1:2j*N - 1j, -M+1:M-1:2j*M - 1J]
den = x*x + y*y
den[N-1, M-1] = 1
xx = x / den
yy = y / den
XX, YY = (signal.fftconvolve(zz, values, 'valid') for zz in (xx, yy))
t.append(time())
X_OP, Y_OP = do_loop()
t.append(time())
assert np.allclose(XX, X_OP)
assert np.allclose(YY, Y_OP)
print('pp {}\nOP {}'.format(*np.diff(t)))
Sample run:
pp 0.12683939933776855
OP 158.35225439071655
I have drawn one position (x, y, z) for each of N particles in an enclosed volume.
x[i] = random.uniform(a,b) ...
I also found the constant velocity(vx,vy,vz) of the N particles.
vx[i] = random.gauss(mean,sigma) ...
Now I want to find the position of the N(=100) particles over time. I used the Euler-Cromer method to do this.
n = 1000
delta_t = linspace(0,2,n-1)
v[0] = vx;...
r[0] = x;...
for i in range(n-1):
    v[i+1,:] = v[i,:]
    r[i+1,:] = r[i,:] + delta_t*v[i+1,:]
    t[i+1] = t[i] + delta_t
But I want to find the position over time for every particle. How can I do this? Also, how do I plot the particles' positions over time in 3D?
To find the position of the particles at a given time you can use the following code:
import numpy as np
# assign random positions in the box 0,0,0 to 1,1,1
x = np.random.random((100,3))
# assign random velocities in the range around 0
v = np.random.normal(size=(100,3))
# define function to project the position in time according to
# laws of motion. x(t) = x_0 + v_0 * t
def position(x_0, v_0, t):
    return x_0 + v_0*t
# get new position at time = 3.2
position(x, v, 3.2)
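To also address the 3D plotting part of the question, here is a minimal sketch (using the x, v arrays and position() defined above) that evaluates the positions at several times and draws one trajectory per particle with matplotlib:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, needed on older matplotlib versions

times = np.linspace(0, 2, 50)                                 # times at which to evaluate
trajectories = np.array([position(x, v, t) for t in times])   # shape (n_times, 100, 3)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for p in range(trajectories.shape[1]):                        # one line per particle
    ax.plot(trajectories[:, p, 0], trajectories[:, p, 1], trajectories[:, p, 2])
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()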