Related
I'd like to generate N random 3-dimensional vectors (uniformly) on the unit sphere but with the condition, that their sum is equal to 0. My attempt was to generate N/2 random unit vectors, while the other are just the same vectors with a minus sign. The problem is, as I'm trying to achieve as little correlation as possible, and my idea is obviously not ideal, since half of my vectors are perfectly anti-correlated with their corresponding pair.
Your problem does not really have a solution, but you can generate a set of vectors that are going to be slightly less visibly correlated than your original solution of negating them. To be precise, if you generate N / 2 vectors and negate them, then rotate the negated vectors about their sum by any angle, you can guarantee that the sum will be zero and the correlation will be a more complicated rotation than a negative identity matrix.
import numpy as np
from scipy.spatial.transform import Rotation
N = 10
v1 = np.random.normal(size=(N / 2, 3))
v1 /= np.linalg.norm(v1, axis=1, keepdims=True)
axis = v1.sum(0)
rot = Rotation.from_rotvec(np.random.uniform(2.0 * np.pi) * axis / np.linalg.norm(axis))
v2 = rot.apply(-v1)
result = np.concatenate((v1, v2), axis=0)
This assumes that N is even in all cases. The normal distribution is a fairly standard method to generate points uniformly on a sphere: https://mathworld.wolfram.com/SpherePointPicking.html.
If you had some leeway from the sum being exactly zero, you could align two random sets of N / 2 vectors so that their sums point opposite each other.
In this code, I tried to generate vectors selected from a sphere by converting a theta, phi to x, y, z.
import numpy as np
def vectorize(theta, phi):
x = np.cos(phi) * np.cos(theta)
y = np.cos(phi) * np.sin(theta)
z = np.sin(phi)
return np.array([x, y, z])
theta_range = np.arange(0, 2 * np.pi, 0.01)
phi_range = np.arange(-np.pi / 2, np.pi / 2, 0.01)
TH, PI = np.meshgrid(theta_range, phi_range)
whole_map = np.vstack((TH.flatten(), PI.flatten())).T
# Number of vectors:
N = 100
# Selecting N/2 Vectors first at random
v_selected = np.random.choice(range(whole_map.shape[0]), N // 2)
vectors = np.array([vectorize(whole_map[ind][0], whole_map[ind][1]) for ind in v_selected])
# Doubling up the number of vectors by adding the negate of each vector to the vector set
vectors = np.vstack((vectors, -vectors))
print(vectors.sum(axis=0))
# array([1.94289029e-16, 1.17961196e-16, 1.11022302e-16])
# Almost 0, but isn't zero because of floating number precision when converted to binary
Here is the scatter plot of the points generated on the sphere with radius=1:
I am working on finding the frequencies from a given dataset and I am struggling to understand how np.fft.fft() works. I thought I had a working script but ran into a weird issue that I cannot understand.
I have a dataset that is roughly sinusoidal and I wanted to understand what frequencies the signal is composed of. Once I took the FFT, I got this plot:
However, when I take the same dataset, slice it in half, and plot the same thing, I get this:
I do not understand why the frequency drops from 144kHz to 128kHz which technically should be the same dataset but with a smaller length.
I can confirm a few things:
Step size between data points 0.001
I have tried interpolation with little luck.
If I slice the second half of the dataset I get a different frequency as well.
If my dataset is indeed composed of both 128 and 144kHz, then why doesn't the 128 peak show up in the first plot?
What is even more confusing is that I am running a script with pure sine waves without issues:
T = 0.001
fs = 1 / T
def find_nearest_ind(data, value):
return (np.abs(data - value)).argmin()
x = np.arange(0, 30, T)
ff = 0.2
y = np.sin(2 * ff * np.pi * x)
x = x[:len(x) // 2]
y = y[:len(y) // 2]
n = len(y) # length of the signal
k = np.arange(n)
T = n / fs
frq = k / T * 1e6 / 1000 # two sides frequency range
frq = frq[:len(frq) // 2] # one side frequency range
Y = np.fft.fft(y) / n # dft and normalization
Y = Y[:n // 2]
frq = frq[:50]
Y = Y[:50]
fig, (ax1, ax2) = plt.subplots(2)
ax1.plot(x, y)
ax1.set_xlabel("Time (us)")
ax1.set_ylabel("Electric Field (V / mm)")
peak_ind = find_nearest_ind(abs(Y), np.max(abs(Y)))
ax2.plot(frq, abs(Y))
ax2.axvline(frq[peak_ind], color = 'black', linestyle = '--', label = F"Frequency = {round(frq[peak_ind], 3)}kHz")
plt.legend()
plt.xlabel('Freq(kHz)')
ax1.title.set_text('dV/dX vs. Time')
ax2.title.set_text('Frequencies')
fig.tight_layout()
plt.show()
Here is a breakdown of your code, with some suggestions for improvement, and extra explanations. Working through it carefully will show you what is going on. The results you are getting are completely expected. I will propose a common solution at the end.
First set up your units correctly. I assume that you are dealing with seconds, not microseconds. You can adjust later as long as you stay consistent.
Establish the period and frequency of the sampling. This means that the Nyquist frequency for the FFT will be 500Hz:
T = 0.001 # 1ms sampling period
fs = 1 / T # 1kHz sampling frequency
Make a time domain of 30e3 points. The half domain will contain 15000 points. That implies a frequency resolution of 500Hz / 15k = 0.03333Hz.
x = np.arange(0, 30, T) # time domain
n = x.size # number of points: 30000
Before doing anything else, we can define our time domain right here. I prefer a more intuitive approach than what you are using. That way you don't have to redefine T or introduce the auxiliary variable k. But as long as the results are the same, it does not really matter:
F = np.linspace(0, 1 - 1/n, n) / T # Notice F[1] = 0.03333, as predicted
Now define the signal. You picked ff = 0.2. Notice that 0.2Hz. 0.2 / 0.03333 = 6, so you would expect to see your peak in exactly bin index 6 (F[6] == 0.2). To better illustrate what is going on, let's take ff = 0.22. This will bleed the spectrum into neighboring bins.
ff = 0.22
y = np.sin(2 * np.pi * ff * x)
Now take the FFT:
Y = np.fft.fft(y) / n
maxbin = np.abs(Y).argmax() # 7
maxF = F[maxbin] # 0.23333333: This is the nearest bin
Since your frequency bins are 0.03Hz wide, the best resolution you can expect 0.015Hz. For your real data, which has much lower resolution, the error is much larger.
Now let's take a look at what happens when you halve the data size. Among other things, the frequency resolution becomes smaller. Now you have a maximum frequency of 500Hz spread over 7.5k samples, not 15k: the resolution drops to 0.066666Hz per bin:
n2 = n // 2 # 15000
F2 = np.linspace(0, 1 - 1 / n2, n2) / T # F[1] = 0.06666
Y2 = np.fft.fft(y[:n2]) / n2
Take a look what happens to the frequency estimate:
maxbin2 = np.abs(Y2).argmax() # 3
maxF2 = F2[maxbin2] # 0.2: This is the nearest bin
Hopefully, you can see how this applies to your original data. In the full FFT, you have a resolution of ~16.1 per bin with the full data, and ~32.2kHz with the half data. So your original result is within ~±8kHz of the right peak, while the second one is within ~±16kHz. The true frequency is therefore between 136kHz and 144kHz. Another way to look at it is to compare the bins that you showed me:
full: 128.7 144.8 160.9
half: 96.6 128.7 160.9
When you take out exactly half of the data, you drop every other frequency bin. If your peak was originally closest to 144.8kHz, and you drop that bin, it will end up in either 128.7 or 160.9.
Note: Based on the bin numbers you show, I suspect that your computation of frq is a little off. Notice the 1 - 1/n in my linspace expression. You need that to get the right frequency axis: the last bin is (1 - 1/n) / T, not 1 / T, no matter how you compute it.
So how to get around this problem? The simplest solution is to do a parabolic fit on the three points around your peak. That is usually a sufficiently good estimator of the true frequency in the data when you are looking for essentially perfect sinusoids.
def peakF(F, Y):
index = np.abs(Y).argmax()
# Compute offset on normalized domain [-1, 0, 1], not F[index-1:index+2]
y = np.abs(Y[index - 1:index + 2])
# This is the offset from zero, which is the scaled offset from F[index]
vertex = (y[0] - y[2]) / (0.5 * (y[0] + y[2]) - y[1])
# F[1] is the bin resolution
return F[index] + vertex * F[1]
In case you are wondering how I got the formula for the parabola: I solved the system with x = [-1, 0, 1] and y = Y[index - 1:index + 2]. The matrix equation is
[(-1)^2 -1 1] [a] Y[index - 1]
[ 0^2 0 1] # [b] = Y[index]
[ 1^2 1 1] [c] Y[index + 1]
Computing the offset using a normalized domain and scaling afterwards is almost always more numerically stable than using whatever huge numbers you have in F[index - 1:index + 2].
You can plug the results in the example into this function to see if it works:
>>> peakF(F, Y)
0.2261613409657391
>>> peakF(F2, Y2)
0.20401580936430794
As you can see, the parabolic fit gives an improvement, however slight. There is no replacement for just increasing frequency resolution through more samples though!
I am trying to implement two numerical solutions. A forward Euler and a second order Runge-Kutta for the unsteady 2D heat equation with periodic boundary conditions. I am using a 3 point central difference in both cases for the spatial discretization.
This is my code:
def analytical(npx,npy,t,alpha=0.1): #Function to create analytical solution data
X = numpy.linspace(0,1,npx,endpoint=False) #X spatial range
Y = numpy.linspace(0,1,npy,endpoint=False) #Y Spatial Range
uShape = (1,numpy.shape(X)[0],numpy.shape(Y)[0]) #Shape of data array
u = numpy.zeros(uShape) #Allocate data array
m = 2 #m and n = 2 makes the function periodic in 0->1
n = 2
for i,x in enumerate(X): #Looping through x and y to produce analytical solution
for j,y in enumerate(Y):
u[0,i,j] = numpy.sin(m*pi*x)*numpy.sin(n*pi*y)*numpy.exp(-(m*m+n*n)*pi*pi*alpha*t)
return u,X,Y
def numericalHeatComparisonFE(): #Numerical function for forward euler
arraysize = 10 #Size of simulation array in x and y
d = 0.1 #value of pde coefficient alpha*dt/dx/dx
alpha = 1 #thermal diffusivity
dx = 1/arraysize #getting spatial step
dt = float(d*dx**2/alpha) #getting time step
T,x,y = analytical(arraysize,arraysize,0,alpha=alpha) #get analytical solution
numerical = numpy.zeros((2,)+T.shape) #create numerical data array
ns = numerical.shape #shape of numerical array aliased
numerical[0,:,:,:] = T[:,:,:] # assign initial conditions to first element in numerical
error = [] #create empty error list for absolute error
courant = alpha*dt/dx/dx #technically not the courant number but the coefficient for PDE
for i in range(1,20):#looping through twenty times for testing - solving FE each step
T,x,y = analytical(arraysize,arraysize,i*dt,alpha=alpha)
for idx,idy in numpy.ndindex(ns[2:]):
dxx = numerical[0,0,idx-1,idy]+numerical[0,0,(idx+1)%ns[-2],idy]-2*numerical[0,0,idx,idy] #X direction diffusion
dyy = numerical[0,0,idx,idy-1]+numerical[0,0,idx,(idy+1)%ns[-1]]-2*numerical[0,0,idx,idy] #Y direction diffusion
numerical[1,0,idx,idy] = courant*(dxx+dyy)+numerical[0,0,idx,idy] #Update formula
error.append(numpy.amax(numpy.absolute(numerical[1,:,:,:]-T[:,:,:])))#Add max error to error list
numerical[0,:,:,:] = numerical[1,:,:,:] #Update initial condition
print(numpy.amax(error))
def numericalHeatComparisonRK2():
arraysize = 10 #Size of simulation array in x and y
d = 0.1 #value of pde coefficient alpha*dt/dx/dx
alpha = 1 #thermal diffusivity
dx = 1/arraysize #getting spatial step
dt = float(d*dx**2/alpha) #getting time step
T,x,y = analytical(arraysize,arraysize,0,alpha=alpha) #get analytical solution
numerical = numpy.zeros((3,)+T.shape) #create numerical data array
ns = numerical.shape #shape of numerical array aliased
numerical[0,:,:,:] = T[:,:,:] # assign initial conditions to first element in numerical
error = [] #create empty error list for absolute error
courant = alpha*dt/dx/dx #technically not the courant number but the coefficient for PDE
for i in range(1,20): #Test twenty time steps -RK2
T,x,y = analytical(arraysize,arraysize,i*dt,alpha=alpha)
for idx,idy in numpy.ndindex(ns[2:]): #Intermediate step looping through indices
#Intermediate
dxx = numerical[0,0,idx-1,idy]+numerical[0,0,(idx+1)%ns[-2],idy]-2*numerical[0,0,idx,idy]
dyy = numerical[0,0,idx,idy-1]+numerical[0,0,idx,(idy+1)%ns[-1]]-2*numerical[0,0,idx,idy]
numerical[1,0,idx,idy] = 0.5*courant*(dxx+dyy)+numerical[0,0,idx,idy]
for idx,idy in numpy.ndindex(ns[2:]): #Update step looping through indices
#RK Step
dxx = numerical[1,0,idx-1,idy]+numerical[1,0,(idx+1)%ns[-2],idy]-2*numerical[1,0,idx,idy]
dyy = numerical[1,0,idx,idy-1]+numerical[1,0,idx,(idy+1)%ns[-1]]-2*numerical[1,0,idx,idy]
numerical[2,0,idx,idy] = courant*(dxx+dyy)+numerical[0,0,idx,idy]
error.append(numpy.amax(numpy.absolute(numerical[2,:,:,:]-T[:,:,:]))) #Add maximum error to list
numerical[0,:,:,:] = numerical[2,:,:,:] #Update initial conditions
print(numpy.amax(error))
if __name__ == "__main__":
numericalHeatComparisonFE()
numericalHeatComparisonRK2()
when running the code, I expect that the maximum error for the RK2 should be less than that of the FE but I get
0.0021498590913591187
for the FE and
0.011325197051528346
for the RK2. I have searched the code pretty thoroughly and haven't found any glaring typos or errors. I feel it has to be something minor that I am missing but I can't seem to find it. If you happen to spot an error or know something I don't help or a comment would be appreciated.
Thanks!
I am trying to simulate a particle flying at another particle while undergoing electrical repulsion (or attraction), called Rutherford-scattering. I have succeeded in simulating (a few) particles using for loops and python lists. However, now I want to use numpy arrays instead. The model will use the following steps:
For all particles:
Calculate radial distance with all other particles
Calculate the angle with all other particles
Calculate netto force in x-direction and y-direction
Create matrix with netto xForce and yForce for each particle
Create accelaration (also x and y component) matrix by a = F/mass
Update speed matrix
Update position matrix
My problem is that I do not know how I can use numpy arrays in calculating the force components.
Here follows my code which is not runnable.
import numpy as np
# I used this function to calculate the force while using for-loops.
def force(x1, y1, x2, x2):
angle = math.atan((y2 - y1)/(x2 - x1))
dr = ((x1-x2)**2 + (y1-y2)**2)**0.5
force = charge2 * charge2 / dr**2
xforce = math.cos(angle) * force
yforce = math.sin(angle) * force
# The direction of force depends on relative location
if x1 > x2 and y1<y2:
xforce = xforce
yforce = yforce
elif x1< x2 and y1< y2:
xforce = -1 * xforce
yforce = -1 * yforce
elif x1 > x2 and y1 > y2:
xforce = xforce
yforce = yforce
else:
xforce = -1 * xforce
yforce = -1* yforce
return xforce, yforce
def update(array):
# this for loop defeats the entire use of numpy arrays
for particle in range(len(array[0])):
# find distance of all particles pov from 1 particle
# find all x-forces and y-forces on that particle
xforce = # sum of all x-forces from all particles
yforce = # sum of all y-forces from all particles
force_arr[0, particle] = xforce
force_arr[1, particle] = yforce
return force
# begin parameters
t = 0
N = 3
masses = np.ones(N)
charges = np.ones(N)
loc_arr = np.random.rand(2, N)
speed_arr = np.random.rand(2, N)
acc_arr = np.random.rand(2, N)
force = np.random.rand(2, N)
while t < 0.5:
force_arr = update(loc_arry)
acc_arr = force_arr / masses
speed_arr += acc_array
loc_arr += speed_arr
t += dt
# plot animation
One approach to model this problem with arrays may be:
define the point coordinates as a Nx2 array. (This will help with extensibility if you advance to 3-D points later)
define the intermediate variables distance, angle, force as NxN arrays to represent the pairwise interactions
Numpy things to know about:
You can call most numeric functions on arrays if the arrays have the same shape (or conforming shapes, which is a nontrivial topic...)
meshgrid helps you generate the array indices necessary to shapeshift your Nx2 arrays to compute NxN results
and a tangential note (ha ha) arctan2() computes a signed angle, so you can bypass the complex "which quadrant" logic
For example you can do something like this. Note in get_dist and get_angle the arithmetic operations between points take place in the bottom-most dimension:
import numpy as np
# 2-D locations of particles
points = np.array([[1,0],[2,1],[2,2]])
N = len(points) # 3
def get_dist(p1, p2):
r = p2 - p1
return np.sqrt(np.sum(r*r, axis=2))
def get_angle(p1, p2):
r = p2 - p1
return np.arctan2(r[:,:,1], r[:,:,0])
ii = np.arange(N)
ix, iy = np.meshgrid(ii, ii)
dist = get_dist(points[ix], points[iy])
angle = get_angle(points[ix], points[iy])
# ... compute force
# ... apply the force, etc.
For the sample 3-point vector shown above:
In [246]: dist
Out[246]:
array([[0. , 1.41421356, 2.23606798],
[1.41421356, 0. , 1. ],
[2.23606798, 1. , 0. ]])
In [247]: angle / np.pi # divide by Pi to make the numbers recognizable
Out[247]:
array([[ 0. , -0.75 , -0.64758362],
[ 0.25 , 0. , -0.5 ],
[ 0.35241638, 0.5 , 0. ]])
Here is one go with only a loop for each time step, and it should work for any number of dimensions, I have tested with 3 too:
from matplotlib import pyplot as plt
import numpy as np
fig, ax = plt.subplots()
N = 4
ndim = 2
masses = np.ones(N)
charges = np.array([-1, 1, -1, 1]) * 2
# loc_arr = np.random.rand(N, ndim)
loc_arr = np.array(((-1,0), (1,0), (0,-1), (0,1)), dtype=float)
speed_arr = np.zeros((N, ndim))
# compute charge matrix, ie c1 * c2
charge_matrix = -1 * np.outer(charges, charges)
time = np.linspace(0, 0.5)
dt = np.ediff1d(time).mean()
for i, t in enumerate(time):
# get (dx, dy) for every point
delta = (loc_arr.T[..., np.newaxis] - loc_arr.T[:, np.newaxis]).T
# calculate Euclidean distance
distances = np.linalg.norm(delta, axis=-1)
# and normalised unit vector
unit_vector = (delta.T / distances).T
unit_vector[np.isnan(unit_vector)] = 0 # replace NaN values with 0
# calculate force
force = charge_matrix / distances**2 # norm gives length of delta vector
force[np.isinf(force)] = 0 # NaN forces are 0
# calculate acceleration in all dimensions
acc = (unit_vector.T * force / masses).T.sum(axis=1)
# v = a * dt
speed_arr += acc * dt
# increment position, xyz = v * dt
loc_arr += speed_arr * dt
# plotting
if not i:
color = 'k'
zorder = 3
ms = 3
for i, pt in enumerate(loc_arr):
ax.text(*pt + 0.1, s='{}q {}m'.format(charges[i], masses[i]))
elif i == len(time)-1:
color = 'b'
zroder = 3
ms = 3
else:
color = 'r'
zorder = 1
ms = 1
ax.plot(loc_arr[:,0], loc_arr[:,1], '.', color=color, ms=ms, zorder=zorder)
ax.set_aspect('equal')
The above example produces, where the black and blue points signify the start and end positions, respectively:
And when charges are equal charges = np.ones(N) * 2 the system symmetry is preserved and the charges repel:
And finally with some random initial velocities speed_arr = np.random.rand(N, 2):
EDIT
Made a small change to the code above to make sure it was correct. (I was missing -1 on the resultant force, ie. force between +/+ should be negative, and I was summing down the wrong axis, apologies for that. Now in the cases where masses[0] = 5, the system evolves correctly:
The classic approach is to calculate electric field for all particles in the system. Say you have 3 charged particles all with positive charge:
particles = np.array([[1,0,0],[2,1,0],[2,2,0]]) # location of each particle
q = np.array([1,1,1]) # charge of each particle
The easiest way to compute the electric field at each particle`s location is for loop:
def for_method(pos,q):
"""Computes electric field vectors for all particles using for-loop."""
Evect = np.zeros( (len(pos),len(pos[0])) ) # define output electric field vector
k = 1 / (4 * np.pi * const.epsilon_0) * np.ones((len(pos),len(pos[0]))) * 1.602e-19 # make this into matrix as matrix addition is faster
# alternatively you can get rid of np.ones and just define this as a number
for i, v0 in enumerate(pos): # s_p - selected particle | iterate over all particles | v0 reference particle
for v, qc in zip(pos,q): # loop over all particles and calculate electric force sum | v particle being calculated for
if all((v0 == v)): # do not compute for the same particle
continue
else:
r = v0 - v #
Evect[i] += r / np.linalg.norm(r) ** 3 * qc #! multiply by charge
return Evect * k
# to find electric field at each particle`s location call
for_method(particles, q)
This function returns array of vectors with the same shape as input particles array. To find force on each, you simply multiply this vector with q array of charges. From there on, you can easily find your acceleration and integrate the system using your favourite ODE solver.
Performance Optimization & Accuracy
For method is the slowest possible approach. The field can be computed using solely linear algebra granting significant speed boost. Following code is very efficient Numpy matrix "one-liner" (almost one-liner) to this problem:
def CPU_matrix_method(pos,q):
"""Classic vectorization of for Coulomb law using numpy arrays."""
k = 1 / (4 * np.pi * const.epsilon_0) * np.ones((len(pos),3)) * 1.602e-19 # define electric constant
dist = distance.cdist(pos,pos) # compute distances
return k * np.sum( (( np.tile(pos,len(pos)).reshape((len(pos),len(pos),3)) - np.tile(pos,(len(pos),1,1))) * q.reshape(len(q),1)).T * np.power(dist,-3, where = dist != 0),axis = 1).T
Note that this and following code also return electric field vector for each particle.
You can get even higher performance if you offload this onto the GPU using Cupy library. Following code is almost identical to the CPU_matrix_method, I only expanded the one-liner a little so that you could see better what is going on:
def GPU_matrix_method(pos,q):
"""GPU Coulomb law vectorization.
Takes in numpy arrays, performs computations and returns cupy array"""
# compute distance matrix between each particle
k_cp = 1 / (4 * cp.pi * const.epsilon_0) * cp.ones((len(pos),3)) * 1.602e-19 # define electric constant, runs faster if this is matrix
dist = cp.array(distance.cdist(pos,pos)) # could speed this up with cupy cdist function! use this: cupyx.scipy.spatial.distance.cdist
pos, q = cp.array(pos), cp.array(q) # load inputs to GPU memory
dist_mod = cp.power(dist,-3) # compute inverse cube of distance
dist_mod[dist_mod == cp.inf] = 0 # set all infinity entries to 0 (i.e. diagonal elements/ same particle-particle pairs)
# compute by magic
return k_cp * cp.sum((( cp.tile(pos,len(pos)).reshape((len(pos),len(pos),3)) - cp.tile(pos,(len(pos),1,1))) * q.reshape(len(q),1)).T * dist_mod, axis = 1).T
Regarding the accuracy of the mentioned algorithms, if you compute the 3 methods on the particles array you get identical results:
[[-6.37828367e-10 -7.66608512e-10 0.00000000e+00]
[ 5.09048221e-10 -9.30757576e-10 0.00000000e+00]
[ 1.28780145e-10 1.69736609e-09 0.00000000e+00]]
Regarding the performance, I computed each algorithm on systems ranging from 2 to 5000 charged particles. Additionally I also included Numba precompiled version of the for_method to make the for-loop approach competitive:
We see that for-loop performs terribly needing over 400 seconds to compute for system with 5000 particles. Zooming in to the bottom part:
This shows that matrix approach to this problem is orders of magnitude better. To be exact the 5000 particle evaluation took 18.5s for Numba for-loop, 4s for CPU matrix(5 times faster than Numba), and 0.8s for GPU matrix* (23 times faster than Numba). The significant difference shows for larger arrays.
* GPU used was Nvidia K100.
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wasnts me to have the experience of producing one so I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but see if there is a built in way to rotate the array.
impB is an array that differs slightly from what I stated it was before. Its actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency if you prefer) in each radius. R is the scale factor for my radius as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes)) # Create array of 2xRadius square
for i in range(iRes):
n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))]) # Count number of things within this radius +1
m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))]) # Count number of things in this radius
a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi # A normalisation factor based on area.....dont ask
for x in range(iRes):
for y in range(iRes):
if (x**2 + y**2) < (i * iRes)**2:
if (x**2 + y**2) >= (i * iRes)**2: # Checks for radius, and puts in cartesian space
radJ[x+iRes,y+iRes] = (n-m) / a # Put in actual intensity bins
radJ[x+iRes,-y+iRes] = (n-m) / a
radJ[-x+iRes,y+iRes] = (n-m) / a
radJ[-x+iRes,-y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (difference to the middle pixel) and the array for rotation, respectively, I would suggest:
from scipy import interpolate
import numpy as np
y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2,len(y)/2,len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01) # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
for k, y in enumerate(Y[:, 0]):
current_radius = np.sqrt(x ** 2 + y ** 2)
profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take in your array and calculate an symmetric array ri_data_r that has the same length as your data array and contains the distance between the actual data and the middle of the array. The code is doing this automatically.
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of desired order (here we use order=0 Nearest-neighbor). This method is slower but more precise and needs less memory then the second method.
The second one does not use interpolation, therefore is faster but also less precise and needs way more memory because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between both solutions stemmed from the problem of handling the center of the final image where the tilts overlap the most, i.e. the first one would just add values with each tilt ending up out of the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (consult the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use the np.nan.
Please, read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp
def rotate_vector(vector, deg_angle):
# Credit goes to skimage.transform.radon
assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
center = vector.size // 2
square = np.zeros((vector.size, vector.size))
square[center,:] = vector
rad_angle = np.deg2rad(deg_angle)
cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
[-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
[0, 0, 1]])
# Approx. 80% of time is spent in this function
return warp(square, R, clip=False, output_shape=((vector.size, vector.size)))
def place_vectors(vectors, deg_angles):
matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
global_min, global_max = 0, 0
for i, deg_angle in enumerate(deg_angles):
tilt = rotate_vector(vectors[i], deg_angle)
global_min = tilt.min() if global_min > tilt.min() else global_min
global_max = tilt.max() if global_max < tilt.max() else global_max
matrix += tilt
matrix = np.clip(matrix, global_min, global_max)
return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np
def rotate_vector(vector, deg_angle):
assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
square = np.ones([vector.size, vector.size]) * np.nan
radius = vector.size // 2
r_values = np.linspace(-radius, radius, vector.size)
rad_angle = np.deg2rad(deg_angle)
ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(np.int)
ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(np.int)
ind_x = np.clip(ind_x, 0, vector.size-1)
ind_y = np.clip(ind_y, 0, vector.size-1)
square[ind_y, ind_x] = vector
return square
def place_vectors(vectors, deg_angles):
matrices = []
for deg_angle, vector in zip(deg_angles, vectors):
matrices.append(rotate_vector(vector, deg_angle))
matrix = np.nanmean(np.array(matrices), axis=0)
return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])