I am trying to simulate a particle flying at another particle while undergoing electrical repulsion (or attraction), i.e. Rutherford scattering. I have succeeded in simulating (a few) particles using for loops and Python lists. However, now I want to use numpy arrays instead. The model will use the following steps:
For all particles:
Calculate the radial distance to all other particles
Calculate the angle to all other particles
Calculate the net force in the x-direction and y-direction
Create a matrix with the net xForce and yForce for each particle
Create an acceleration matrix (also with x and y components) via a = F/mass
Update the speed matrix
Update the position matrix
My problem is that I do not know how to use numpy arrays when calculating the force components.
Here is my code, which is not runnable:
import math
import numpy as np

# I used this function to calculate the force while using for-loops.
def force(x1, y1, x2, y2):
    angle = math.atan((y2 - y1) / (x2 - x1))
    dr = ((x1 - x2)**2 + (y1 - y2)**2)**0.5
    force = charge1 * charge2 / dr**2
    xforce = math.cos(angle) * force
    yforce = math.sin(angle) * force
    # The direction of the force depends on the relative location
    if x1 > x2 and y1 < y2:
        xforce = xforce
        yforce = yforce
    elif x1 < x2 and y1 < y2:
        xforce = -1 * xforce
        yforce = -1 * yforce
    elif x1 > x2 and y1 > y2:
        xforce = xforce
        yforce = yforce
    else:
        xforce = -1 * xforce
        yforce = -1 * yforce
    return xforce, yforce
def update(array):
    # this for-loop defeats the entire purpose of using numpy arrays
    for particle in range(len(array[0])):
        # find the distance from this particle to all other particles
        # find all x-forces and y-forces acting on this particle
        xforce = ...  # sum of all x-forces from the other particles
        yforce = ...  # sum of all y-forces from the other particles
        force_arr[0, particle] = xforce
        force_arr[1, particle] = yforce
    return force_arr
# begin parameters
t = 0
N = 3
masses = np.ones(N)
charges = np.ones(N)
loc_arr = np.random.rand(2, N)
speed_arr = np.random.rand(2, N)
acc_arr = np.random.rand(2, N)
force = np.random.rand(2, N)
while t < 0.5:
    force_arr = update(loc_arr)
    acc_arr = force_arr / masses
    speed_arr += acc_arr
    loc_arr += speed_arr
    t += dt
# plot animation
One approach to model this problem with arrays may be:
define the point coordinates as an Nx2 array. (This will help with extensibility if you advance to 3-D points later.)
define the intermediate variables distance, angle, force as NxN arrays to represent the pairwise interactions
Numpy things to know about:
You can call most numeric functions on arrays if the arrays have the same shape (or conforming shapes, which is a nontrivial topic...)
meshgrid helps you generate the array indices necessary to shapeshift your Nx2 arrays to compute NxN results
and, a tangential note (ha ha): arctan2() computes a signed angle, so you can bypass the complicated "which quadrant" logic
For example, you can do something like this. Note that in get_dist and get_angle the arithmetic operations between points take place in the bottom-most dimension:
import numpy as np
# 2-D locations of particles
points = np.array([[1,0],[2,1],[2,2]])
N = len(points) # 3
def get_dist(p1, p2):
    r = p2 - p1
    return np.sqrt(np.sum(r*r, axis=2))

def get_angle(p1, p2):
    r = p2 - p1
    return np.arctan2(r[:,:,1], r[:,:,0])
ii = np.arange(N)
ix, iy = np.meshgrid(ii, ii)
dist = get_dist(points[ix], points[iy])
angle = get_angle(points[ix], points[iy])
# ... compute force
# ... apply the force, etc.
For the sample 3-point vector shown above:
In [246]: dist
Out[246]:
array([[0. , 1.41421356, 2.23606798],
[1.41421356, 0. , 1. ],
[2.23606798, 1. , 0. ]])
In [247]: angle / np.pi # divide by Pi to make the numbers recognizable
Out[247]:
array([[ 0. , -0.75 , -0.64758362],
[ 0.25 , 0. , -0.5 ],
[ 0.35241638, 0.5 , 0. ]])
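One way to finish the computation from here (a sketch only; it assumes unit charges and is not part of the original example) is to build the pairwise force magnitudes and project them onto x and y with the angle matrix:

charges = np.ones(N)                               # assumed unit charges
qq = np.outer(charges, charges)                    # pairwise q_i * q_j
with np.errstate(divide='ignore'):
    magnitude = qq / dist**2                       # Coulomb-style magnitude (constants ignored)
np.fill_diagonal(magnitude, 0.0)                   # no self-force
# angle[i, j] points from particle j towards particle i, so for like
# charges (repulsion) the force on particle i acts along +angle.
fx = np.sum(magnitude * np.cos(angle), axis=1)     # net x-force on each particle
fy = np.sum(magnitude * np.sin(angle), axis=1)     # net y-force on each particle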
Here is one attempt with only a loop over the time steps; it should work for any number of dimensions (I have tested it with 3 as well):
from matplotlib import pyplot as plt
import numpy as np
fig, ax = plt.subplots()
N = 4
ndim = 2
masses = np.ones(N)
charges = np.array([-1, 1, -1, 1]) * 2
# loc_arr = np.random.rand(N, ndim)
loc_arr = np.array(((-1,0), (1,0), (0,-1), (0,1)), dtype=float)
speed_arr = np.zeros((N, ndim))
# compute charge matrix, ie c1 * c2
charge_matrix = -1 * np.outer(charges, charges)
time = np.linspace(0, 0.5)
dt = np.ediff1d(time).mean()
for i, t in enumerate(time):
    # get (dx, dy) for every point
    delta = (loc_arr.T[..., np.newaxis] - loc_arr.T[:, np.newaxis]).T
    # calculate Euclidean distance
    distances = np.linalg.norm(delta, axis=-1)
    # and normalised unit vector
    unit_vector = (delta.T / distances).T
    unit_vector[np.isnan(unit_vector)] = 0  # replace NaN values with 0

    # calculate force
    force = charge_matrix / distances**2  # norm gives length of delta vector
    force[np.isinf(force)] = 0  # self-interaction (infinite) forces are 0

    # calculate acceleration in all dimensions
    acc = (unit_vector.T * force / masses).T.sum(axis=1)
    # v = a * dt
    speed_arr += acc * dt
    # increment position, xyz = v * dt
    loc_arr += speed_arr * dt

    # plotting
    if not i:
        color = 'k'
        zorder = 3
        ms = 3
        for j, pt in enumerate(loc_arr):
            ax.text(*pt + 0.1, s='{}q {}m'.format(charges[j], masses[j]))
    elif i == len(time) - 1:
        color = 'b'
        zorder = 3
        ms = 3
    else:
        color = 'r'
        zorder = 1
        ms = 1
    ax.plot(loc_arr[:,0], loc_arr[:,1], '.', color=color, ms=ms, zorder=zorder)
ax.set_aspect('equal')
The above example produces, where the black and blue points signify the start and end positions, respectively:
And when the charges are equal, charges = np.ones(N) * 2, the symmetry of the system is preserved and the charges repel:
And finally with some random initial velocities speed_arr = np.random.rand(N, 2):
EDIT
Made a small change to the code above to make sure it was correct: I was missing a -1 on the resultant force (i.e. the force between +/+ should be negative), and I was summing over the wrong axis, apologies for that. Now in the case where masses[0] = 5 the system evolves correctly:
The classic approach is to calculate the electric field for all particles in the system. Say you have 3 charged particles, all with positive charge:
particles = np.array([[1,0,0],[2,1,0],[2,2,0]]) # location of each particle
q = np.array([1,1,1]) # charge of each particle
The easiest way to compute the electric field at each particle's location is a for loop:
from scipy import constants as const  # const.epsilon_0 used below

def for_method(pos, q):
    """Computes electric field vectors for all particles using a for-loop."""
    Evect = np.zeros((len(pos), len(pos[0])))  # output electric field vectors
    # make this into a matrix, as matrix addition is faster
    # alternatively you can get rid of np.ones and just define this as a number
    k = 1 / (4 * np.pi * const.epsilon_0) * np.ones((len(pos), len(pos[0]))) * 1.602e-19
    for i, v0 in enumerate(pos):  # v0 is the reference particle | iterate over all particles
        for v, qc in zip(pos, q):  # loop over all particles and sum their field contributions | v is the particle being summed over
            if all(v0 == v):  # do not compute for the same particle
                continue
            else:
                r = v0 - v
                Evect[i] += r / np.linalg.norm(r) ** 3 * qc  # multiply by charge
    return Evect * k
# to find the electric field at each particle's location, call
for_method(particles, q)
This function returns an array of vectors with the same shape as the input particles array. To find the force on each particle, you simply multiply this vector by the q array of charges. From there on, you can easily find your acceleration and integrate the system using your favourite ODE solver.
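For instance, a minimal sketch of that last step might look like the following (my illustration, not part of the original method; it assumes unit masses and uses scipy's solve_ivp):

from scipy.integrate import solve_ivp

masses = np.ones(len(particles))                # assumed unit masses

def rhs(t, state):
    """state holds [positions.ravel(), velocities.ravel()]."""
    pos, vel = state.reshape(2, -1, 3)
    E = for_method(pos, q)                      # field at each particle
    acc = E * q[:, None] / masses[:, None]      # F = q E, a = F / m
    return np.concatenate([vel.ravel(), acc.ravel()])

state0 = np.concatenate([particles.ravel(), np.zeros(particles.size)])
sol = solve_ivp(rhs, (0.0, 1e-3), state0, max_step=1e-5)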
Performance Optimization & Accuracy
The for-loop method is the slowest possible approach. The field can be computed using linear algebra alone, which grants a significant speed boost. The following code is an efficient numpy matrix "one-liner" (well, almost a one-liner) for this problem:
from scipy.spatial import distance  # distance.cdist used below

def CPU_matrix_method(pos, q):
    """Classic vectorization of Coulomb's law using numpy arrays."""
    k = 1 / (4 * np.pi * const.epsilon_0) * np.ones((len(pos), 3)) * 1.602e-19  # electric constant
    dist = distance.cdist(pos, pos)  # compute pairwise distances
    return k * np.sum(((np.tile(pos, len(pos)).reshape((len(pos), len(pos), 3)) - np.tile(pos, (len(pos), 1, 1))) * q.reshape(len(q), 1)).T * np.power(dist, -3, where=dist != 0), axis=1).T
Note that this and the following code also return the electric field vector for each particle.
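Because the one-liner is dense, here is a step-by-step equivalent for readability (my own expansion under the same assumptions; it is not part of the original method):

import numpy as np
from scipy.spatial import distance
from scipy import constants as const

def cpu_matrix_method_expanded(pos, q):
    """Step-by-step equivalent of CPU_matrix_method (readability sketch)."""
    k = 1 / (4 * np.pi * const.epsilon_0) * 1.602e-19         # same constant as above
    r = pos[:, None, :] - pos[None, :, :]                      # r[i, j] = pos[i] - pos[j]
    dist = distance.cdist(pos, pos)                            # |r[i, j]|
    inv_d3 = np.divide(1.0, dist**3, out=np.zeros(dist.shape), where=dist != 0)
    # field at particle i: sum over j of q_j * r[i, j] / |r[i, j]|**3
    return k * np.einsum('ijk,ij,j->ik', r, inv_d3, q)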
You can get even higher performance if you offload this onto the GPU using the CuPy library. The following code is almost identical to CPU_matrix_method; I only expanded the one-liner a little so that you can see better what is going on:
import cupy as cp  # GPU arrays

def GPU_matrix_method(pos, q):
    """GPU Coulomb's law vectorization.
    Takes in numpy arrays, performs computations and returns a cupy array."""
    # compute distance matrix between each particle
    k_cp = 1 / (4 * cp.pi * const.epsilon_0) * cp.ones((len(pos), 3)) * 1.602e-19  # electric constant, runs faster as a matrix
    dist = cp.array(distance.cdist(pos, pos))  # could speed this up with cupyx.scipy.spatial.distance.cdist
    pos, q = cp.array(pos), cp.array(q)  # load inputs into GPU memory
    dist_mod = cp.power(dist, -3)  # compute inverse cube of distance
    dist_mod[dist_mod == cp.inf] = 0  # set all infinity entries to 0 (i.e. diagonal / same-particle pairs)
    # compute by magic
    return k_cp * cp.sum(((cp.tile(pos, len(pos)).reshape((len(pos), len(pos), 3)) - cp.tile(pos, (len(pos), 1, 1))) * q.reshape(len(q), 1)).T * dist_mod, axis=1).T
Regarding the accuracy of the mentioned algorithms: if you compute the 3 methods on the particles array, you get identical results:
[[-6.37828367e-10 -7.66608512e-10 0.00000000e+00]
[ 5.09048221e-10 -9.30757576e-10 0.00000000e+00]
[ 1.28780145e-10 1.69736609e-09 0.00000000e+00]]
Regarding the performance, I computed each algorithm on systems ranging from 2 to 5000 charged particles. Additionally, I also included a Numba-precompiled version of for_method to make the for-loop approach competitive:
We see that the for-loop performs terribly, needing over 400 seconds to compute a system of 5000 particles. Zooming in to the bottom part:
This shows that the matrix approach to this problem is orders of magnitude better. To be exact, the 5000-particle evaluation took 18.5 s for the Numba for-loop, 4 s for the CPU matrix (5 times faster than Numba), and 0.8 s for the GPU matrix* (23 times faster than Numba). The significant difference shows for larger arrays.
* GPU used was Nvidia K100.
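The timings came from a simple loop over increasing particle counts; a minimal version of such a harness (my own sketch, not the exact script used for the figures) could be:

import time
import numpy as np

def benchmark(method, n_particles, repeats=3):
    """Best-of-n wall-clock time of one field method on a random system."""
    pos = np.random.rand(n_particles, 3)
    q = np.random.choice([-1.0, 1.0], size=n_particles)
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()
        method(pos, q)
        best = min(best, time.perf_counter() - t0)
    return best

for n in (2, 100, 1000, 5000):
    print(n, benchmark(for_method, n), benchmark(CPU_matrix_method, n))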
THIS PART IS JUST BACKGROUND IF YOU NEED IT
I am developing a numerical solver for the second-order Kuramoto model. The functions I use to find the derivatives of theta and omega are given below.
# n-dimensional change in theta
def d_theta(omega):
    return omega

# n-dimensional change in omega
def d_omega(K, A, P, alpha, mask, n):
    def layer1(theta, omega):
        T = theta[:, None] - theta
        A[mask] = K[mask] * np.sin(T[mask])
        return -alpha*omega + P - A.sum(1)
    return layer1
These equations return vectors.
QUESTION 1
I know how to use odeint for two dimensions, (y, t). For my research I want to use a built-in Python function that works for higher dimensions.
QUESTION 2
I do not necessarily want to stop after a predetermined amount of time. I have other stopping conditions in the code below that will indicate whether the system of equations converges to the steady state. How do I incorporate these into a built-in Python solver?
WHAT I CURRENTLY HAVE
This is the code I am currently using to solve the system. I just implemented RK4 with constant time stepping in a loop.
# This function randomly samples initial values in the domain and returns whether the solution converged
# Inputs:
# f change in theta (d_theta)
# g change in omega (d_omega)
# tol when step size is lower than tolerance, the solution is said to converge
# h size of the time step
# max_iter maximum number of steps Runge-Kutta will perform before giving up
# max_laps maximum number of laps the solution can do before giving up
# fixed_t vector of fixed points of theta
# fixed_o vector of fixed points of omega
# n number of dimensions
# theta initial theta vector
# omega initial omega vector
# Outputs:
# converges   True if the nodes restabilize, False otherwise
def kuramoto_rk4_wss(f, g, tol_ss, tol_step, h, max_iter, max_laps, fixed_o, fixed_t, n):
    def layer1(theta, omega):
        lap = np.zeros(n, dtype=int)
        converges = False
        i = 0
        tau = 2 * np.pi
        while i < max_iter:  # perform RK4 with constant time step
            p_omega = omega
            p_theta = theta
            T1 = h*f(omega)
            O1 = h*g(theta, omega)
            T2 = h*f(omega + O1/2)
            O2 = h*g(theta + T1/2, omega + O1/2)
            T3 = h*f(omega + O2/2)
            O3 = h*g(theta + T2/2, omega + O2/2)
            T4 = h*f(omega + O3)
            O4 = h*g(theta + T3, omega + O3)
            theta = theta + (T1 + 2*T2 + 2*T3 + T4)/6  # take theta time step
            mask2 = np.array(np.where(np.logical_or(theta > tau, theta < 0)))  # find which nodes left [0, 2pi]
            lap[mask2] = lap[mask2] + 1  # increment the lap counter
            theta[mask2] = np.mod(theta[mask2], tau)  # take the modulus
            omega = omega + (O1 + 2*O2 + 2*O3 + O4)/6
            if max_laps in lap:  # if any generator rotates this many times it probably won't converge
                break
            elif np.any(omega > 12):  # if any of the generators is rotating this fast, it probably won't converge
                break
            elif (np.linalg.norm(omega) < tol_ss and        # assert the nodes are sufficiently close to the equilibrium
                  np.linalg.norm(omega - p_omega) < tol_step and  # assert change in omega is small
                  np.linalg.norm(theta - p_theta) < tol_step):    # assert change in theta is small
                converges = True
                break
            i = i + 1
        return converges
    return layer1
Thanks for your help!
You can wrap your existing functions into a function accepted by odeint (with the option tfirst=True) and by solve_ivp as:
from scipy.integrate import odeint, solve_ivp

def odesys(t, u):
    theta, omega = u[:n], u[n:]               # or theta, omega = u.reshape(2, -1)
    return [*f(omega), *g(theta, omega)]      # or np.concatenate([f(omega), g(theta, omega)])

u0 = [*theta0, *omega0]
t = np.linspace(t0, tf, timesteps + 1)
u = odeint(odesys, u0, t, tfirst=True)
# or
res = solve_ivp(odesys, [t0, tf], u0, t_eval=t)
The scipy methods pass numpy arrays and convert the return value into one as well, so you do not have to care about that in the ODE function. The variants in the comments use explicit numpy functions.
While solve_ivp does have event handling, using it for a systematic collection of events is rather cumbersome. It would be easier to advance the solution by some fixed step, do the normalization and termination detection, and then repeat.
If you later want to increase efficiency somewhat, use the stepper classes behind solve_ivp directly.
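A rough sketch of that advance/normalize/check cycle (my own illustration using the names from the question; the chunk size and tolerances are placeholders):

from scipy.integrate import solve_ivp
import numpy as np

def integrate_with_checks(odesys, u0, dt=0.1, max_chunks=1000,
                          tol_ss=1e-6, tol_step=1e-6):
    """Advance one chunk at a time, wrap theta, then test for convergence."""
    u = np.asarray(u0, dtype=float)
    n = u.size // 2
    for _ in range(max_chunks):
        omega_prev = u[n:].copy()
        sol = solve_ivp(odesys, (0.0, dt), u)
        u = sol.y[:, -1]
        u[:n] = np.mod(u[:n], 2 * np.pi)          # normalization: keep theta in [0, 2*pi)
        omega = u[n:]
        if (np.linalg.norm(omega) < tol_ss and
                np.linalg.norm(omega - omega_prev) < tol_step):
            return u, True                         # converged to the steady state
    return u, False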
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1D array and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2D, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see if there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2xRadius square
for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])    # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area.....dont ask
    for x in range(iRes):
        for y in range(iRes):
            if (x**2 + y**2) < (i * iRes)**2:
                if (x**2 + y**2) >= (i * iRes)**2:  # Checks for radius, and puts in cartesian space
                    radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                    radJ[x+iRes, -y+iRes] = (n-m) / a
                    radJ[-x+iRes, y+iRes] = (n-m) / a
                    radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (the offset from the middle pixel) and the array to rotate, respectively, I would suggest:
from scipy import interpolate
import numpy as np

y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01)  # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, yv in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + yv ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and compute a symmetric array ri_data_r that has the same length as your data array and contains the distance between the actual data and the middle of the array. The code does this automatically.
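Since interp1d accepts arrays, the two loops can also be collapsed entirely; a vectorised variant of the same idea (my addition, using the names defined above) is:

# Evaluate the interpolator once on the whole grid of radii
radius_grid = np.sqrt(X ** 2 + Y ** 2)
profilegrid_fast = interpol_index(radius_grid)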
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbour). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, and is therefore faster but also less precise and needs much more memory, because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the centre of the final image, where the tilts overlap the most: the first one would just keep adding values with each tilt, ending up outside the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (see the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please, read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp

def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of the time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = tilt.min() if global_min > tilt.min() else global_min
        global_max = tilt.max() if global_max < tilt.max() else global_max
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np

def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size-1)
    ind_y = np.clip(ind_y, 0, vector.size-1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
We have a class with three methods (Bdisk, Bhalo, and BX).
All of these functions accept arrays (e.g. shape (1000,)), not matrices (e.g. shape (2, 1000)).
I want to get the total of all these functions (total = Bdisk + Bhalo + BX); together they give the magnetic field in all three components (B_r, B_phi, B_z) for a thousand coordinate points (r, phi, z).
The code is here:
import numpy as np
import logging
import warnings
import gmf
signum = lambda x: (x < 0.) * -1. + (x >= 0) * 1.
pi = np.pi
#Class with analytical functions that describe the GMF according to the model of JF12
class GMF(object):
def __init__(self): # self:is automatically set to reference the newly created object that needs to be initialized
self.Rsun = -8.5 # position of the sun along the x axis in kpc
############################################################################
# Disk Parameters
############################################################################
self.bring, self.bring_unc = 0.1,0.1 # floats, field strength in ring at 3 kpc < r < 5 kpc
self.hdisk, self.hdisk_unc = 0.4, 0.03 # float, disk/halo transition height
self.wdisk, self.wdisk_unc = 0.27,0.08 # floats, transition width
self.b = np.array([0.1,3.,-0.9,-0.8,-2.0,-4.2,0.,2.7]) # (8,1)-dim np.arrays, field strength of spiral arms at 5 kpc
self.b_unc = np.array([1.8,0.6,0.8,0.3,0.1,0.5,1.8,1.8]) # uncertainty
self.rx = np.array([5.1,6.3,7.1,8.3,9.8,11.4,12.7,15.5])# (8,1)-dim np.array,dividing lines of spiral lines coordinates of neg. x-axes that intersect with arm
self.idisk = 11.5 * pi/180. # float, spiral arms pitch angle
#############################################################################
# Halo Parameters
#############################################################################
self.Bn, self.Bn_unc = 1.4,0.1 # floats, field strength northern halo
self.Bs, self.Bs_unc = -1.1,0.1 # floats, field strength southern halo
self.rn, self.rn_unc = 9.22,0.08 # floats, transition radius south, lower limit
self.rs, self.rs_unc = 16.7,0. # transition radius south, lower limit
self.whalo, self.whalo_unc = 0.2,0.12 # floats, transition width
self.z0, self.z0_unc = 5.3, 1.6 # floats, vertical scale height
##############################################################################
# Out of plane or "X" component Parameters
##############################################################################
self.BX0, self.BX_unc = 4.6,0.3 # floats, field strength at origin
self.ThetaX0, self.ThetaX0_unc = 49. * pi/180., pi/180. # elev. angle at z = 0, r > rXc
self.rXc, self.rXc_unc = 4.8, 0.2 # floats, radius where thetaX = thetaX0
self.rX, self.rX_unc = 2.9, 0.1 # floats, exponential scale length
# striated field
self.gamma, self.gamma_unc = 2.92,0.14 # striation and/or rel. elec. number dens. rescaling
return
##################################################################################
##################################################################################
# Transition function given by logistic function eq.5
##################################################################################
def L(self,z,h,w):
if np.isscalar(z):
z = np.array([z]) # scalar or numpy array with positions (height above disk, z; distance from center, r)
ones = np.ones(z.shape[0])
return 1./(ones + np.exp(-2. *(np.abs(z)- h)/w))
####################################################################################
# return distance from center for angle phi of logarithmic spiral
# r(phi) = rx * exp(b * phi) as np.array
####################################################################################
def r_log_spiral(self,phi):
if np.isscalar(phi): #Returns True if the type of num is a scalar type.
phi = np.array([phi])
ones = np.ones(phi.shape[0])
# self.rx.shape = 8
# phi.shape = p
# then result is given as (8,p)-dim array, each row stands for one rx
# vstack : Take a sequence of arrays and stack them vertically to make a single array
# tensordot(a, b, axes=2):Compute tensor dot product along specified axes for arrays >=1D.
result = np.tensordot(self.rx , np.exp((phi - 3.*pi*ones) / np.tan(pi/2. - self.idisk)),axes = 0)
result = np.vstack((result, np.tensordot(self.rx , np.exp((phi - pi*ones) / np.tan(pi/2. - self.idisk)),axes = 0) ))
result = np.vstack((result, np.tensordot(self.rx , np.exp((phi + pi*ones) / np.tan(pi/2. - self.idisk)),axes = 0) ))
return np.vstack((result, np.tensordot(self.rx , np.exp((phi + 3.*pi*ones) / np.tan(pi/2. - self.idisk)),axes = 0) ))
#############################################################################################
# Disk component in galactocentric cylindrical coordinates (r,phi,z)
#############################################################################################
def Bdisk(self,r,phi,z):
# Bdisk is purely azimuthal (toroidal) with the field strength b_ring
"""
r: N-dim np.array, distance from origin in GC cylindrical coordinates, is in kpc
z: N-dim np.array, height in kpc in GC cylindrical coordinates
phi:N-dim np.array, polar angle in GC cylindrical coordinates, in radian
Bdisk: (3,N)-dim np.array with (r,phi,z) components of disk field for each coordinate tuple
|Bdisk|: N-dim np.array, absolute value of Bdisk for each coordinate tuple
"""
if (not r.shape[0] == phi.shape[0]) and (not z.shape[0] == phi.shape[0]):
warnings.warn("List do not have equal shape! returning -1", RuntimeWarning)
return -1
# Return a new array of given shape and type, filled with zeros.
Bdisk = np.zeros((3,r.shape[0])) # Bdisk vector in r, phi, z
ones = np.ones(r.shape[0])
r_center = (r >= 3.) & (r < 5.1)
r_disk = (r >= 5.1) & (r <= 20.)
Bdisk[1,r_center] = self.bring
# Determine in which arm we are
# this is done for each coordinate individually
if np.sum(r_disk):
rls = self.r_log_spiral(phi[r_disk])
rls = np.abs(rls - r[r_disk])
arms = np.argmin(rls, axis = 0) % 8
# The magnetic spiral is defined at r = 5 kpc and falls off as 1/r; the field direction is given by:
Bdisk[0,r_disk] = np.sin(self.idisk)* self.b[arms] * (5. / r[r_disk])
Bdisk[1,r_disk] = np.cos(self.idisk)* self.b[arms] * (5. / r[r_disk])
Bdisk *= (ones - self.L(z,self.hdisk,self.wdisk)) # multiplied by L
return Bdisk, np.sqrt(np.sum(Bdisk**2.,axis = 0)) # the Bdisk, the normalization
# axis=0 : sum over index 0(row)
# axis=1 : sum over index 1(columns)
##############################################################################################
# Halo component
###############################################################################################
def Bhalo(self,r,z):
# Bhalo is purely azimuthal (toroidal), i.e. has only a phi component
if (not r.shape[0] == z.shape[0]):
warnings.warn("List do not have equal shape! returning -1", RuntimeWarning)
return -1
Bhalo = np.zeros((3,r.shape[0])) # Bhalo vector in r, phi, z rows: r, phi and z component
ones = np.ones(r.shape[0])
m = ( z != 0. )
# SEE equation 6.
Bhalo[1,m] = np.exp(-np.abs(z[m])/self.z0) * self.L(z[m], self.hdisk, self.wdisk) * \
( self.Bn * (ones[m] - self.L(r[m], self.rn, self.whalo)) * (z[m] > 0.) \
+ self.Bs * (ones[m] - self.L(r[m], self.rs, self.whalo)) * (z[m] < 0.) )
return Bhalo , np.sqrt(np.sum(Bhalo**2.,axis = 0))
##############################################################################################
# BX component (OUT OF THE PLANE)
###############################################################################################
def BX(self,r,z):
#BX is purely ASS and poloidal, i.e. phi component = 0
if (not r.shape[0] == z.shape[0]):
warnings.warn("List do not have equal shape! returning -1", RuntimeWarning)
return -1
BX= np.zeros((3,r.shape[0])) # BX vector in r, phi, z rows: r, phi and z component
m = np.sqrt(r**2. + z**2.) >= 1.
bx = lambda r_p: self.BX0 * np.exp(-r_p / self.rX) # eq.7
thetaX = lambda r,z,r_p: np.arctan(np.abs(z)/(r - r_p)) # eq.10
r_p = r[m] *self.rXc/(self.rXc + np.abs(z[m] ) / np.tan(self.ThetaX0)) # eq 9
m_r_b = r_p > self.rXc # region with constant elevation angle
m_r_l = r_p <= self.rXc # region with varying elevation angle
theta = np.zeros(z[m].shape[0])
b = np.zeros(z[m].shape[0])
r_p0 = (r[m])[m_r_b] - np.abs( (z[m])[m_r_b] ) / np.tan(self.ThetaX0) # eq.8
b[m_r_b] = bx(r_p0) * r_p0/ (r[m])[m_r_b] # the field strength in the constant elevation angle (b_x(r_p)r_p/r)
theta[m_r_b] = self.ThetaX0 * np.ones(theta.shape[0])[m_r_b]
b[m_r_l] = bx(r_p[m_r_l]) * (r_p[m_r_l]/(r[m])[m_r_l] )**2. # the field strength with varying elevation angle (b_x(r_p)(r_p/r)**2)
theta[m_r_l] = thetaX((r[m])[m_r_l] ,(z[m])[m_r_l] ,r_p[m_r_l])
mz = (z[m] == 0.)
theta[mz] = np.pi/2.
BX[0,m] = b * (np.cos(theta) * (z[m] >= 0) + np.cos(pi*np.ones(theta.shape[0]) - theta) * (z[m] < 0))
BX[2,m] = b * (np.sin(theta) * (z[m] >= 0) + np.sin(pi*np.ones(theta.shape[0]) - theta) * (z[m] < 0))
return BX, np.sqrt(np.sum(BX**2.,axis=0))
Then I create three arrays: one for r, one for phi, one for z. Each of these arrays has (e.g.) a thousand elements, like this:
import gmf
gmfm = gmf.GMF()
x = np.linspace(-20.,20.,100)
y = np.linspace(-20.,20.,100)
z = np.linspace(-1.,1.,x.shape[0])
xx,yy = np.meshgrid(x,y)
rr = np.sqrt(xx**2. + yy**2.)
theta = np.arctan2(yy,xx)
for i, r in enumerate(rr[:]):
    Bdisk, Babs_d = gmfm.Bdisk(r, theta[i], z)
    Bhalo, Babs_h = gmfm.Bhalo(r, z)
    BX, Babs_x = gmfm.BX(r, z)
    Btotal = Bdisk + Bhalo + BX
But when I add the three functions together (Btotal = Bdisk + Bhalo + BX) I get a 2D matrix with 3 rows and 100 columns.
My question is: how can I add these three functions together to get Btotal in shape (n,), e.g. shape (100,)?
Because, as I said at the beginning, the three functions accept arrays (e.g. shape (1000,)), so when we add the three functions together I expect to get the total in the same shape, shape (n,).
I do not know how to do it; could you please tell me how?
Thank you for your cooperation.
You need to correct the indentation, for example in the def Bdisk method.
More importantly in
for i, r in enumerate(rr[:]):
    Bdisk, Babs_d = gmfm.Bdisk(r, theta[i], z)
    Bhalo, Babs_h = gmfm.Bhalo(r, z)
    BX, Babs_x = gmfm.BX(r, z)
    Btotal = Bdisk + Bhalo + BX
Are you doing this addition on each iteration, or once at the end of the loop? You aren't accumulating any values over iterations; you are just throwing away the old ones, leaving you with the result of the final iteration.
As for adding the arrays - it appears that all your arrays are initialized like:
Bdisk = np.zeros((3,r.shape[0]))
If that's what the methods return, then
Bdisk + Bhalo + BX
will just sum the corresponding elements of each array, resulting in a Btotal with the same shape. If you don't like the shape of Btotal, then change how Bdisk is calculated, because it has the same shape.
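If what you are actually after is one number per coordinate (shape (100,)), you can sum the three (3, N) fields and then take the vector norm over the component axis, storing one row of results per loop iteration. A sketch using the names from your own loop (assuming the methods return (3, 100) arrays as written):

Btotal_abs = np.zeros_like(rr)                 # (100, 100) grid of field magnitudes
for i, r in enumerate(rr):
    Bdisk, _ = gmfm.Bdisk(r, theta[i], z)
    Bhalo, _ = gmfm.Bhalo(r, z)
    BX, _ = gmfm.BX(r, z)
    Btotal = Bdisk + Bhalo + BX                # shape (3, 100): r, phi, z components
    Btotal_abs[i] = np.sqrt(np.sum(Btotal**2, axis=0))   # shape (100,)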
I'm looking for a method to solve the 2D heat equation with Python. I have already implemented the finite difference method, but it is slow (100,000 steps take about 30 minutes). The idea is to create a code in which, at the end, one can write
for t in TIME:
    DeltaU = f(U)
    U = U + DeltaU*DeltaT
    save(U)
How can I do that?
In the first version of my code I used the 2D finite difference method; my grid is 5000x250 (x, y). Now I would like to decrease the computing time, and the idea is to find
DeltaU = f(u)
where U is the heat function. For the implementation I used this source http://www.timteatro.net/2010/10/29/performance-python-solving-the-2d-diffusion-equation-with-numpy/ for the 2D case, but the run time is too expensive for my needs. Are there methods to do this?
Maybe I should work with the matrix
A = 1/dx^2 * (  2 -1  0  0 ...  0
               -1  2 -1  0 ...  0
                0 -1  2 -1 ...  0
                ...
                0  0 ...   -1  2 )
but how do I build this for the 2D problem? And how do I insert the boundary conditions into A?
This is the code for the finite difference that I used:
import numpy as np

Lx = 5000  # physical length of the x vector in micron
Ly = 250   # physical length of the y vector in micron
Nx = 100   # number of mesh points along the x direction
Ny = 50    # number of mesh points along the y direction
a = 0.001  # diffusion coefficient
dx = 1/Nx
dy = 1/Ny
dt = (dx**2*dy**2)/(2*a*(dx**2 + dy**2))  # it is 0.04
x = np.linspace(0.1, Lx, Nx)[np.newaxis]  # vector to create the mesh
y = np.linspace(0.1, Ly, Ny)[np.newaxis]  # vector to create the mesh
I = np.sqrt(x*y.T)  # initial data for the heat equation
u = np.ones((Nx, Ny))  # u is the matrix for the heat function
steps = 100000
for m in range(0, steps):
    du = np.zeros((Nx, Ny))
    for i in range(1, Nx-1):
        for j in range(1, Ny-1):
            dux = (u[i+1, j] - 2*u[i, j] + u[i-1, j]) / dx**2
            duy = (u[i, j+1] - 2*u[i, j] + u[i, j-1]) / dy**2
            du[i, j] = dt*a*(dux + duy)
    # Boundary Conditions
    t1 = (u[:, 0] + u[:, 1])/2
    u[:, 0] = t1
    u[:, 1] = t1
    t2 = (u[0, :] + u[1, :])/2
    u[0, :] = t2
    u[1, :] = t2
    t3 = (u[-1, :] + u[-2, :])/2
    u[-1, :] = t3
    u[-2, :] = t3
    u[:, -1] = 1
    filename1 = 'data_{:08d}.txt'
    if m % 100 == 0:
        np.savetxt(filename1.format(m), u, delimiter='\t')
For 100,000 steps the run time is about 30 minutes. I would like to optimize this code (with the idea presented in the opening lines) to bring the run time down to about 5-10 minutes or less. How can I do it?
There are some simple but tremendous improvements possible.
Just by introducing Dxx, Dyy = 1/(dx*dx), 1/(dy*dy) the runtime drops by 25%. By using slices and avoiding the for-loops, the code then becomes 400 times faster.
import numpy as np

def f_for(u):
    for m in range(0, steps):
        du = np.zeros_like(u)
        for i in range(1, Nx-1):
            for j in range(1, Ny-1):
                dux = (u[i+1, j] - 2*u[i, j] + u[i-1, j]) / dx**2
                duy = (u[i, j+1] - 2*u[i, j] + u[i, j-1]) / dy**2
                du[i, j] = dt*a*(dux + duy)
    return du

def f_slice(u):
    du = np.zeros_like(u)
    Dxx, Dyy = 1/dx**2, 1/dy**2
    i = slice(1, Nx-1)
    iw = slice(0, Nx-2)
    ie = slice(2, Nx)
    j = slice(1, Ny-1)
    js = slice(0, Ny-2)
    jn = slice(2, Ny)
    for m in range(0, steps):
        dux = Dxx * (u[ie, j] - 2*u[i, j] + u[iw, j])
        duy = Dyy * (u[i, jn] - 2*u[i, j] + u[i, js])
        du[i, j] = dt*a*(dux + duy)
    return du
Nx = 100 # number of mesh points in the x-direction
Ny = 50 # number of mesh points in the y-direction
a = 0.001 # diffusion coefficent
dx = 1/Nx
dy = 1/Ny
dt = (dx**2*dy**2)/(2*a*(dx**2 + dy**2))
steps = 10000
U = np.ones((Nx, Ny))
%timeit f_for(U)
%timeit f_slice(U)
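Note that, like the code in the question, these functions only measure the cost of the stencil; they never apply du back to u. If you also want the field to actually evolve in time, the sliced update can be applied directly inside the time loop; a sketch of that (my addition, reusing the parameters defined above):

def step(u):
    """One explicit Euler step of the diffusion equation on the interior points."""
    Dxx, Dyy = 1/dx**2, 1/dy**2
    du = np.zeros_like(u)
    du[1:-1, 1:-1] = dt*a*(
        Dxx*(u[2:, 1:-1] - 2*u[1:-1, 1:-1] + u[:-2, 1:-1]) +
        Dyy*(u[1:-1, 2:] - 2*u[1:-1, 1:-1] + u[1:-1, :-2]))
    return u + du

U = np.ones((Nx, Ny))
for m in range(steps):
    U = step(U)  # the boundary conditions from the question would be applied here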
Have you considered parallelizing your code or using GPU acceleration?
It would help if you ran your code through the Python profiler (cProfile) so that you can figure out where your runtime bottleneck is. I'm assuming it's in solving the matrix equation you arrive at, which can easily be sped up by the methods listed above.
I might be wrong, but in your code, in the loop you create for the time steps ("for m in range(steps)"),
the very next line is
du = np.zeros(...)
Not an expert in Python, but this may result in allocating that matrix anew on every step, in this case 100k times.
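For example, the allocation can be hoisted out of the time loop and reset in place instead (just an illustration of the idea):

du = np.zeros((Nx, Ny))   # allocate once, outside the time loop
for m in range(steps):
    du[:] = 0.0           # cheap in-place reset instead of a fresh array
    # ... fill du and update u as before ...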
The Question:
What is the best way to calculate inverse distance weighted (IDW) interpolation in Python, for point locations?
Some Background:
Currently I'm using RPy2 to interface with R and its gstat module. Unfortunately, the gstat module conflicts with arcgisscripting which I got around by running RPy2 based analysis in a separate process. Even if this issue is resolved in a recent/future release, and efficiency can be improved, I'd still like to remove my dependency on installing R.
The gstat website does provide a stand alone executable, which is easier to package with my python script, but I still hope for a Python solution which doesn't require multiple writes to disk and launching external processes. The number of calls to the interpolation function, of separate sets of points and values, can approach 20,000 in the processing I'm performing.
I specifically need to interpolate for points, so using the IDW function in ArcGIS to generate rasters sounds even worse than using R, in terms of performance.....unless there is a way to efficiently mask out only the points I need. Even with this modification, I wouldn't expect performance to be all that great. I will look into this option as another alternative. UPDATE: The problem here is you are tied to the cell size you are using. If you reduce the cell-size to get better accuracy, processing takes a long time. You also need to follow up by extracting by points.....over all an ugly method if you want values for specific points.
I have looked at the scipy documentation, but it doesn't look like there is a straightforward way to calculate IDW.
I'm thinking of rolling my own implementation, possibly using some of the scipy functionality to locate the closest points and calculate distances.
Am I missing something obvious? Is there a python module I haven't seen that does exactly what I want? Is creating my own implementation with the aid of scipy a wise choice?
Changed 20 Oct: this class Invdisttree combines inverse-distance weighting and
scipy.spatial.KDTree.
Forget the original brute-force answer;
this is, imho, the method of choice for scattered-data interpolation.
""" invdisttree.py: inverse-distance-weighted interpolation using KDTree
fast, solid, local
"""
from __future__ import division
import numpy as np
from scipy.spatial import cKDTree as KDTree
# http://docs.scipy.org/doc/scipy/reference/spatial.html
__date__ = "2010-11-09 Nov" # weights, doc
#...............................................................................
class Invdisttree:
    """ inverse-distance-weighted interpolation using KDTree:
    invdisttree = Invdisttree( X, z )  -- data points, values
    interpol = invdisttree( q, nnear=3, eps=0, p=1, weights=None, stat=0 )
        interpolates z from the 3 points nearest each query point q;
    For example, interpol[ a query point q ]
    finds the 3 data points nearest q, at distances d1 d2 d3
    and returns the IDW average of the values z1 z2 z3
        (z1/d1 + z2/d2 + z3/d3)
        / (1/d1 + 1/d2 + 1/d3)
        = .55 z1 + .27 z2 + .18 z3  for distances 1 2 3
    q may be one point, or a batch of points.
    eps: approximate nearest, dist <= (1 + eps) * true nearest
    p: use 1 / distance**p
    weights: optional multipliers for 1 / distance**p, of the same shape as q
    stat: accumulate wsum, wn for average weights

    How many nearest neighbors should one take ?
    a) start with 8 11 14 .. 28 in 2d 3d 4d .. 10d; see Wendel's formula
    b) make 3 runs with nnear= e.g. 6 8 10, and look at the results --
        |interpol 6 - interpol 8| etc., or |f - interpol*| if you have f(q).
        I find that runtimes don't increase much at all with nnear -- ymmv.

    p=1, p=2 ?
        p=2 weights nearer points more, farther points less.
        In 2d, the circles around query points have areas ~ distance**2,
        so p=2 is inverse-area weighting. For example,
            (z1/area1 + z2/area2 + z3/area3)
            / (1/area1 + 1/area2 + 1/area3)
            = .74 z1 + .18 z2 + .08 z3  for distances 1 2 3
        Similarly, in 3d, p=3 is inverse-volume weighting.

    Scaling:
        if different X coordinates measure different things, Euclidean distance
        can be way off. For example, if X0 is in the range 0 to 1
        but X1 0 to 1000, the X1 distances will swamp X0;
        rescale the data, i.e. make X0.std() ~= X1.std() .
    A nice property of IDW is that it's scale-free around query points:
    if I have values z1 z2 z3 from 3 points at distances d1 d2 d3,
    the IDW average
        (z1/d1 + z2/d2 + z3/d3)
        / (1/d1 + 1/d2 + 1/d3)
    is the same for distances 1 2 3, or 10 20 30 -- only the ratios matter.
    In contrast, the commonly-used Gaussian kernel exp( - (distance/h)**2 )
    is exceedingly sensitive to distance and to h.
    """
    # anykernel( dj / av dj ) is also scale-free
    # error analysis, |f(x) - idw(x)| ? todo: regular grid, nnear ndim+1, 2*ndim

    def __init__(self, X, z, leafsize=10, stat=0):
        assert len(X) == len(z), "len(X) %d != len(z) %d" % (len(X), len(z))
        self.tree = KDTree(X, leafsize=leafsize)  # build the tree
        self.z = z
        self.stat = stat
        self.wn = 0
        self.wsum = None

    def __call__(self, q, nnear=6, eps=0, p=1, weights=None):
        # nnear nearest neighbours of each query point --
        q = np.asarray(q)
        qdim = q.ndim
        if qdim == 1:
            q = np.array([q])
        if self.wsum is None:
            self.wsum = np.zeros(nnear)

        self.distances, self.ix = self.tree.query(q, k=nnear, eps=eps)
        interpol = np.zeros((len(self.distances),) + np.shape(self.z[0]))
        jinterpol = 0
        for dist, ix in zip(self.distances, self.ix):
            if nnear == 1:
                wz = self.z[ix]
            elif dist[0] < 1e-10:
                wz = self.z[ix[0]]
            else:  # weight z s by 1/dist --
                w = 1 / dist**p
                if weights is not None:
                    w *= weights[ix]  # >= 0
                w /= np.sum(w)
                wz = np.dot(w, self.z[ix])
                if self.stat:
                    self.wn += 1
                    self.wsum += w
            interpol[jinterpol] = wz
            jinterpol += 1
        return interpol if qdim > 1 else interpol[0]
#...............................................................................
if __name__ == "__main__":
    import sys
    N = 10000
    Ndim = 2
    Nask = N  # N Nask 1e5: 24 sec 2d, 27 sec 3d on mac g4 ppc
    Nnear = 8  # 8 2d, 11 3d => 5 % chance one-sided -- Wendel, mathoverflow.com
    leafsize = 10
    eps = .1  # approximate nearest, dist <= (1 + eps) * true nearest
    p = 1  # weights ~ 1 / distance**p
    cycle = .25
    seed = 1

    exec("\n".join(sys.argv[1:]))  # python this.py N= ...
    np.random.seed(seed)
    np.set_printoptions(3, threshold=100, suppress=True)  # .3f

    print("\nInvdisttree:  N %d  Ndim %d  Nask %d  Nnear %d  leafsize %d  eps %.2g  p %.2g" % (
        N, Ndim, Nask, Nnear, leafsize, eps, p))

    def terrain(x):
        """ ~ rolling hills """
        return np.sin((2*np.pi / cycle) * np.mean(x, axis=-1))

    known = np.random.uniform(size=(N, Ndim)) ** .5  # 1/(p+1): density x^p
    z = terrain(known)
    ask = np.random.uniform(size=(Nask, Ndim))

    #...........................................................................
    invdisttree = Invdisttree(known, z, leafsize=leafsize, stat=1)
    interpol = invdisttree(ask, nnear=Nnear, eps=eps, p=p)

    print("average distances to nearest points: %s" %
          np.mean(invdisttree.distances, axis=0))
    print("average weights: %s" % (invdisttree.wsum / invdisttree.wn))
    # see Wikipedia Zipf's law
    err = np.abs(terrain(ask) - interpol)
    print("average |terrain() - interpolated|: %.2g" % np.mean(err))

    # print("interpolate a single point: %.2g" %
    #       invdisttree(known[0], nnear=Nnear, eps=eps))
Edit: @Denis is right, a linear Rbf (e.g. scipy.interpolate.Rbf with "function='linear'") isn't the same as IDW...
(Note, all of these will use excessive amounts of memory if you're using a large number of points!)
Here's a simple example of IDW:
def simple_idw(x, y, z, xi, yi):
    dist = distance_matrix(x, y, xi, yi)
    # In IDW, weights are 1 / distance
    weights = 1.0 / dist
    # Make weights sum to one
    weights /= weights.sum(axis=0)
    # Multiply the weights for each interpolated point by all observed Z-values
    zi = np.dot(weights.T, z)
    return zi
Whereas, here's what a linear Rbf would be:
def linear_rbf(x, y, z, xi, yi):
    dist = distance_matrix(x, y, xi, yi)
    # Mutual pairwise distances between observations
    internal_dist = distance_matrix(x, y, x, y)
    # Now solve for the weights such that the misfit at the observations is minimized
    weights = np.linalg.solve(internal_dist, z)
    # Multiply the weights for each interpolated point by the distances
    zi = np.dot(dist.T, weights)
    return zi
(Using the distance_matrix function here:)
def distance_matrix(x0, y0, x1, y1):
    obs = np.vstack((x0, y0)).T
    interp = np.vstack((x1, y1)).T
    # Make a distance matrix between pairwise observations
    # Note: from <http://stackoverflow.com/questions/1871536>
    # (Yay for ufuncs!)
    d0 = np.subtract.outer(obs[:, 0], interp[:, 0])
    d1 = np.subtract.outer(obs[:, 1], interp[:, 1])
    return np.hypot(d0, d1)
Putting it all together into a nice copy-paste example yields some quick comparison plots:
(three comparison plots: Homemade IDW, Scipy's Rbf with function=linear, Homemade linear Rbf; source: jkington at www.geology.wisc.edu)
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import Rbf

def main():
    # Setup: Generate data...
    n = 10
    nx, ny = 50, 50
    x, y, z = map(np.random.random, [n, n, n])
    xi = np.linspace(x.min(), x.max(), nx)
    yi = np.linspace(y.min(), y.max(), ny)
    xi, yi = np.meshgrid(xi, yi)
    xi, yi = xi.flatten(), yi.flatten()

    # Calculate IDW
    grid1 = simple_idw(x, y, z, xi, yi)
    grid1 = grid1.reshape((ny, nx))

    # Calculate scipy's RBF
    grid2 = scipy_idw(x, y, z, xi, yi)
    grid2 = grid2.reshape((ny, nx))

    grid3 = linear_rbf(x, y, z, xi, yi)
    print(grid3.shape)
    grid3 = grid3.reshape((ny, nx))

    # Comparisons...
    plot(x, y, z, grid1)
    plt.title('Homemade IDW')

    plot(x, y, z, grid2)
    plt.title("Scipy's Rbf with function=linear")

    plot(x, y, z, grid3)
    plt.title('Homemade linear Rbf')

    plt.show()

def simple_idw(x, y, z, xi, yi):
    dist = distance_matrix(x, y, xi, yi)
    # In IDW, weights are 1 / distance
    weights = 1.0 / dist
    # Make weights sum to one
    weights /= weights.sum(axis=0)
    # Multiply the weights for each interpolated point by all observed Z-values
    zi = np.dot(weights.T, z)
    return zi

def linear_rbf(x, y, z, xi, yi):
    dist = distance_matrix(x, y, xi, yi)
    # Mutual pairwise distances between observations
    internal_dist = distance_matrix(x, y, x, y)
    # Now solve for the weights such that the misfit at the observations is minimized
    weights = np.linalg.solve(internal_dist, z)
    # Multiply the weights for each interpolated point by the distances
    zi = np.dot(dist.T, weights)
    return zi

def scipy_idw(x, y, z, xi, yi):
    interp = Rbf(x, y, z, function='linear')
    return interp(xi, yi)

def distance_matrix(x0, y0, x1, y1):
    obs = np.vstack((x0, y0)).T
    interp = np.vstack((x1, y1)).T
    # Make a distance matrix between pairwise observations
    # Note: from <http://stackoverflow.com/questions/1871536>
    # (Yay for ufuncs!)
    d0 = np.subtract.outer(obs[:, 0], interp[:, 0])
    d1 = np.subtract.outer(obs[:, 1], interp[:, 1])
    return np.hypot(d0, d1)

def plot(x, y, z, grid):
    plt.figure()
    plt.imshow(grid, extent=(x.min(), x.max(), y.max(), y.min()))
    plt.scatter(x, y, c=z)
    plt.colorbar()

if __name__ == '__main__':
    main()
I also needed something fast, so I started with @joerington's solution and ended up at numba.
I always experiment between scipy, numpy and numba and choose the best one. For this problem I use numba: the extra temporary memory is negligible, and it gives super speed.
With numpy there is a trade-off between memory and speed. For example, on 16 GB of RAM, if you want to calculate the interpolation of 50,000 points onto another 50,000 points, it will go out of memory or be incredibly slow, no matter what.
So to save memory we need to use for-loops, so as to have minimal temporary memory allocation. But writing for-loops in numpy would mean losing the possible vectorization. For this we have numba. You can add a numba jit to a function with for-loops over numpy arrays, and it will effectively vectorize it on the hardware, plus additional parallelism on multi-core. It gives a better speed-up for the huge-array case, and you can also run it on the GPU without writing CUDA.
An extremely simple snippet would be to calculate the distance matrix; in the IDW case we need the inverse distance matrix. But even for methods other than IDW you can do something similar.
Also, regarding custom methods for calculating the hypotenuse, I have a few caution points here:
import numpy as np
import numba as nb

@nb.njit((nb.float64[:, :], nb.float64[:, :]), parallel=True)
def f2(d0, d1):
    print('Numba with parallel')
    res = np.empty((d0.shape[0], d1.shape[0]), dtype=d0.dtype)
    for i in nb.prange(d0.shape[0]):
        for j in range(d1.shape[0]):
            res[i, j] = np.sqrt((d0[i, 0] - d1[j, 0])**2 + (d0[i, 1] - d1[j, 1])**2)
    return res
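Building on that, the whole IDW evaluation can be jitted in the same style, so the N x M temporaries never materialize at all; a rough sketch (my own, assuming float64 inputs, not benchmarked here):

import numpy as np
import numba as nb

@nb.njit(parallel=True)
def idw_numba(obs, z, query, p=2.0):
    """Loop-based IDW: no N x M temporaries, parallel over query points."""
    out = np.empty(query.shape[0])
    for i in nb.prange(query.shape[0]):
        wsum = 0.0
        zsum = 0.0
        for j in range(obs.shape[0]):
            d2 = (obs[j, 0] - query[i, 0])**2 + (obs[j, 1] - query[i, 1])**2
            if d2 == 0.0:
                wsum = 1.0
                zsum = z[j] * 1.0   # exact hit: take the observed value
                break
            w = 1.0 / d2**(p / 2.0)
            wsum += w
            zsum += w * z[j]
        out[i] = zsum / wsum
    return out

# usage sketch: zi = idw_numba(np.column_stack((x, y)), z, np.column_stack((xi, yi)))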
Also recent numba becoming compatible with scikit, so that is +1
Refer:
Why np.hypot and np.subtract.outer very fast compared to vanilla broadcast ? Using Numba for speedup numpy in parallel for distance matrix calculation
Custom dtype in numpy for lattitude, longitude for faster distance matrix/krigging/IDW interpolation calculations