Evaluating a function stored in column-major order - python

I am trying to evaluate a function at discretized points and store the result in column-major order, like this:
import numpy as np

N = 3            # number of interior points per dimension
n = N * N
h = 1 / (N + 1)  # step size
h2 = h**2        # step size squared
deltaX = np.zeros(N)
deltaY = np.zeros(N)

def Function(x, y):
    output = -20. * np.pi * np.sin(2 * np.pi * x) * np.sin(4 * np.pi * y)
    return output

## Equally spaced delta:
for i in range(1, N + 1):
    deltaX[i - 1] = i * h
    deltaY[i - 1] = i * h

### Lexicographic order ###
### Evaluation of function at deltaX and deltaY
feval = np.zeros((n, 1))
How should I approach evaluating the discretization of this function?

Good news: your function properly uses numpy operations, so it is completely vectorized. That means you can evaluate it at every element of the input arrays in a single call.
The shapes of the inputs don't have to match exactly; they just have to broadcast together, which means that only non-singleton dimensions need to match.
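For example (purely illustrative), a shape (3,) row and a shape (3, 1) column broadcast to a (3, 3) grid:
import numpy as np
row = np.arange(3)           # shape (3,)
col = np.arange(3)[:, None]  # shape (3, 1)
print((row + col).shape)     # (3, 3): the singleton axis stretches to match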
So start by creating the appropriate input arrays. Numpy provides the tools to do this elegantly without looping:
N = 3
h = 1 / (N + 1)
delta_x = np.arange(1., N + 1.) * h
delta_y = np.linspace(h, N * h, N)[:, None]
I deliberately used two different ways to create the coordinate arrays, to serve as an example. In practice, you'd want to use one of the two methods.
The index [:, None] turns delta_y into a column vector: None introduces a new singleton axis. There are any number of other ways to do the same thing, like `delta_y = ....reshape(-1, 1)`.
Be sure to read the numpy docs for all the functions used here.
Now that you have a column in the y direction and a row in the x, you can call Function as just
val = Function(delta_x, delta_y)
The operation of arranging the 2D matrix val into a 1D array is called raveling. By default it uses row-major order, which is how numpy lays out arrays in memory; this is also called "C" order. The alternative is to read the array in column-major order, like Matlab does; this is called Fortran order, and it requires a copy of the data, since that's not how the elements are laid out in memory.
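For instance (illustrative only):
val = np.array([[1, 2],
                [3, 4]])
print(val.ravel())           # [1 2 3 4]  C order: along rows
print(val.ravel(order='F'))  # [1 3 2 4]  Fortran order: along columns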
One way to ravel in Fortran order:
feval = val.ravel(order='F')
An alternative is to transpose and use C order:
feval = val.T.ravel()
The last two lines can be combined, so you end up with 3 lines:
delta_x = h * np.arange(1., N + 1.)
delta_y = h * np.arange(1., N + 1.)[:, None]
feval = Function(delta_x, delta_y).ravel(order='F')
You could make it into a one-liner, but that's pushing it.
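As a quick sanity check (a sketch reusing the question's setup), the vectorized result should match a point-by-point evaluation with the y index varying fastest:
check = np.zeros(n)
for j in range(N):        # x index
    for i in range(N):    # y index
        check[i + j * N] = Function(delta_x[j], delta_y[i, 0])
assert np.allclose(feval, check)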

Related

Problems efficiently computing the roots of a multiple argument function

Hello my dear intelligent people on the internet,
I want to compute the roots of a (non-invertible) two-argument function in a given interval using scipy.optimize.fsolve:
kappa = 1.4

def Av_ma(Ma):
    Ma[np.where(Ma < 0)] = 0
    return Ma * ((2 + (kappa - 1) * (Ma ** 2)) / (kappa + 1)) ** ((kappa + 1) / (2 * (1 - kappa)))

def Av_ma_root(Ma, _Av_e_2):
    return Av_ma(Ma) - _Av_e_2
The actual function Av_ma has only one argument, but I want to compute the roots for values of _Av_e_2 between 0 and 1. As a result I want an array containing only the root sets for, say, 100 values of _Av_e_2.
This is how my code looks so far:
import numpy as np
from scipy import optimize

Av_e_2 = np.arange(0, 1 + 1e-12, 0.01)
x0 = np.arange(0.1, 1, 0.1)
startvec = np.repeat(x0, len(Av_e_2))
lAv_e_2 = np.tile(Av_e_2, np.shape(x0)[0])
pv2 = optimize.fsolve(Av_ma_root, startvec, args=lAv_e_2)
pv2 = np.reshape(pv2, (len(x0), len(Av_e_2)))
pv2 = np.round(pv2, 6)
pv2 = np.array([np.unique(p) for p in np.transpose(pv2)])
print(pv2)
I repeat the 'starting estimate' array to the length of Av_e_2; it's used to give fsolve the same starting estimates for every value of Av_e_2.
After that I tile the values of Av_e_2, so that fsolve gets every single value of Av_e_2 for each starting estimate.
After optimizing, I reshape, round, and throw out the non-unique values.
In short: my code doesn't work. It does not produce the solution I want; instead it prints this warning:
test_vec_opt.py:25: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
pv2 = np.array([np.unique(p) for p in np.transpose(pv2)])
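The deprecation warning points at that last np.array(...) call: np.unique can return a different number of roots per column, so the rows are ragged and can no longer be stacked into a regular ndarray. A minimal sketch of that final step that avoids the warning (keep a plain list, or pass dtype=object as the message itself suggests):
# the rows of unique roots may have different lengths, so keep them in a list
pv2 = [np.unique(p) for p in np.transpose(pv2)]
# or, if an ndarray is really required:
# pv2 = np.array([np.unique(p) for p in np.transpose(pv2)], dtype=object)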

How to apply the Crank-Nicolson method in python to a wave equation like Schrodinger's

I'm trying to do a particle-in-a-box simulation with no potential field. It took me some time to find out that simple explicit and implicit methods break unitary time evolution, so I resorted to Crank-Nicolson, which is supposed to be unitary. But when I try it I find that it still is not, and I'm not sure what I'm missing. The formulation I used is this:
(I - alpha*T) psi^(k+1) = (I + alpha*T) psi^k,   with alpha = (i * dt) / (2 * dx^2),
where T is the tridiagonal Toeplitz matrix for the second derivative w.r.t. x. The system therefore reduces to A psi^(k+1) = B psi^k, where the A and B matrices are:
A = I - alpha*T  (1 + 2*alpha on the diagonal, -alpha on the off-diagonals)
B = I + alpha*T  (1 - 2*alpha on the diagonal, +alpha on the off-diagonals)
I just solve this linear system for psi^(k+1) using the sparse module. The math makes sense, and I found the same numeric scheme in some papers, so that led me to believe my code is where the problem is.
Here's my code so far:
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz
from scipy.sparse.linalg import spsolve
from scipy import sparse
# Spatial discretisation
N = 100
x = np.linspace(0, 1, N)
dx = x[1] - x[0]
# Time discretisation
K = 10000
t = np.linspace(0, 10, K)
dt = t[1] - t[0]
alpha = (1j * dt) / (2 * (dx ** 2))
A = sparse.csc_matrix(toeplitz([1 + 2 * alpha, -alpha, *np.zeros(N-4)]), dtype=np.cfloat) # 2 less for both boundaries
B = sparse.csc_matrix(toeplitz([1 - 2 * alpha, alpha, *np.zeros(N-4)]), dtype=np.cfloat)
# Initial and boundary conditions (localized gaussian)
psi = np.exp((1j * 50 * x) - (200 * (x - .5) ** 2))
b = B.dot(psi[1:-1])
psi[0], psi[-1] = 0, 0
for index, step in enumerate(t):
    # Within the domain
    psi[1:-1] = spsolve(A, b)
    # Enforce boundaries
    # psi[0], psi[N - 1] = 0, 0
    b = B.dot(psi[1:-1])

# Square integration to show whether it's unitary
print(np.trapz(np.abs(psi) ** 2, dx=dx))
You are relying on the Toeplitz constructor to produce a symmetric matrix, so that the entries below the diagonal are the same as those above it. However, the documentation for scipy.linalg.toeplitz(c, r=None) does not say "transpose", but
"If r is not given, r == conjugate(c) is assumed."
so the resulting matrix is self-adjoint. Since alpha is purely imaginary here, this means that the entries above the diagonal have their sign switched.
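A tiny sketch (with a purely imaginary off-diagonal entry, like alpha) makes the effect visible:
from scipy.linalg import toeplitz
M = toeplitz([1 + 2j, -1j, 0])
# The first column is [1+2j, -1j, 0] as requested, but the first row is
# its conjugate: the entry above the diagonal comes out as +1j, not -1j.
print(M)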
It makes no sense to first construct a dense matrix and then extract a sparse representation from it. Construct it as a sparse tridiagonal matrix from the start, using scipy.sparse.diags:
A = sparse.diags([(N-3)*[-alpha], (N-2)*[1+2*alpha], (N-3)*[-alpha]], [-1, 0, 1], format="csc")
B = sparse.diags([(N-3)*[ alpha], (N-2)*[1-2*alpha], (N-3)*[ alpha]], [-1, 0, 1], format="csc")
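Putting both fixes together, a minimal sketch of the corrected time loop (same discretisation as in the question; the printed norm should now stay essentially constant):
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

N, K = 100, 10000
x = np.linspace(0, 1, N)
dx = x[1] - x[0]
dt = np.linspace(0, 10, K)[1]
alpha = (1j * dt) / (2 * dx**2)

A = sparse.diags([(N-3)*[-alpha], (N-2)*[1+2*alpha], (N-3)*[-alpha]], [-1, 0, 1], format="csc")
B = sparse.diags([(N-3)*[ alpha], (N-2)*[1-2*alpha], (N-3)*[ alpha]], [-1, 0, 1], format="csc")

psi = np.exp((1j * 50 * x) - (200 * (x - .5) ** 2))
psi[0], psi[-1] = 0, 0
for _ in range(K):
    psi[1:-1] = spsolve(A, B.dot(psi[1:-1]))
print(np.trapz(np.abs(psi) ** 2, dx=dx))  # stays ~constant: unitary evolution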

How to write a function that generates a vector recursively in Python?

How can I write a recursive function to generate a vector X of size (1,n) as follows, where X_i is the i-th entry:
X_1 = Z_1 * E_1
X_i = max{B_(1,i) * X_1, ... , B_((i-1),i) * X_(i-1), Z_i} * E_i, i = 2,...,n,
where
Z = np.random.normal(0, 1,size = n)
E = np.random.lognormal(0, 1, size = n)
B = np.random.uniform(0,1,(n,n))
I do not have any experience with recursive functions, which is why I cannot present any code of my own attempts at solving this.
If you're working with numpy, then use all the power of numpy, not just the random module ;)
And if you work with vectors, then forget about recursion and use numpy's vectorised operations. For example, np.max gives you the maximum over an axis, and the * operator (np.multiply) gives you element-wise multiplication. You also have np.prod for the product of array elements over a given axis... Those are just examples that might fit your problem well. For the full documentation, see https://docs.scipy.org/doc/numpy/
I got it; one does not need recursion, as @meowgoesthedog stated in the first comment.
import numpy as np

s = 1000  # sample size
n = 5
Z = np.random.normal(0, 1, size=(s, n))
B = np.random.uniform(0, 1, (n, n))
E = np.random.lognormal(0, 1, size=(s, n))
X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for k in range(s):
    for l in range(1, n):
        X[k, l] = max(np.max(X[k, :l] * B[:l, l]), Z[k, l]) * E[k, l]
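Since the recurrence only runs over the n entries, the loop over the s samples can be vectorized away as well (a sketch equivalent to the double loop above):
X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for l in range(1, n):
    # B[:l, l] broadcasts across all s samples at once
    X[:, l] = np.maximum(np.max(X[:, :l] * B[:l, l], axis=1), Z[:, l]) * E[:, l]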

Vectorizing python code with numpy

I have the following code snippet (for Hough circle transform):
for r in range(1, 11):
    for t in range(0, 360):
        trad = np.deg2rad(t)
        b = x - r * np.cos(trad)
        a = y - r * np.sin(trad)
        b = np.floor(b).astype('int')
        a = np.floor(a).astype('int')
        A[a, b, r-1] += 1
Where A is a 3D array of shape (height, width, 10), and
height and width represent the size of a given image.
My goal is to convert the snippet exclusively to numpy code.
My attempt is this:
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_cos_t = np.cos(arr_t)
arr_sin_t = np.sin(arr_t)
arr_rcos = arr_r[..., np.newaxis] * arr_cos_t[np.newaxis, ...]
arr_rsin = arr_r[..., np.newaxis] * arr_sin_t[np.newaxis, ...]
arr_a = (y - arr_rsin).flatten().astype('int')
arr_b = (x - arr_rcos).flatten().astype('int')
Where x and y are two scalar values.
I am having trouble converting the increment part: A[a, b, r] += 1. I thought of this: A[a, b, r] counts the number of occurrences of the triple (a, b, r), so a clue was to use a Cartesian product (but the arrays are too large).
Any tips or tricks I can use?
Thank you very much!
Edit: after filling A, I need (a, b, r) as argmax(A). The tuple (a, b, r) identifies a circle, and its value in A represents the confidence value, so I want the tuple with the highest value in A. This is part of the voting step of the Hough circle transform: finding the circle parameters when the radius is unknown.
Method #1
Here's one way leveraging broadcasting to get the counts and update A (this assumes the a and b values computed in the intermediate steps are non-negative) -
d0,d1,d2 = A.shape
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_b = np.floor(x - arr_r[:,None] * np.cos(arr_t)).astype('int')
arr_a = np.floor(y - arr_r[:,None] * np.sin(arr_t)).astype('int')
idx = (arr_a*d1*d2) + (arr_b * d2) + (arr_r-1)[:,None]
A.flat[:idx.max()+1] += np.bincount(idx.ravel())
# OR A.flat += np.bincount(idx.ravel(), minlength=A.size)
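Incidentally, the manual flat index above is exactly what np.ravel_multi_index computes (a sketch; the radius term is broadcast to the shape of arr_a first):
r_idx = np.broadcast_to((arr_r - 1)[:, None], arr_a.shape)
idx = np.ravel_multi_index((arr_a, arr_b, r_idx), A.shape)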
Method #2
Alternatively, we could avoid bincount and replace the last step of Method #1, like so -
idx = np.sort(idx.ravel())
grp_idx = np.flatnonzero(np.concatenate(([True], idx[1:] != idx[:-1], [True])))
A.flat[idx[grp_idx[:-1]]] += np.diff(grp_idx)
Improvement with numexpr
We could also leverage the numexpr module for faster sine and cosine computations, like so -
import numexpr as ne
arr_r2D = arr_r[:,None]
arr_b = ne.evaluate('floor(x - arr_r2D * cos(arr_t))').astype(int)
arr_a = ne.evaluate('floor(y - arr_r2D * sin(arr_t))').astype(int)
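Regarding the edit: once A is filled, the best (a, b, r) triple can be read off without any loops (a sketch; note the radius axis is offset by one, since radius r was stored at index r-1):
a_best, b_best, r_idx = np.unravel_index(np.argmax(A), A.shape)
best_circle = (a_best, b_best, r_idx + 1)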

Rotating 1D numpy array of radial intensities into 2D array of spatial intensities

I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1D array and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2D, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but see if there is a built in way to rotate the array.
impB is an array that differs slightly from what I stated before: it's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius. R is the scale factor for my radius, since I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2xRadius square

for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])    # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area.....dont ask
    for x in range(iRes):
        for y in range(iRes):
            if (x**2 + y**2) < ((i + 1) * iRes)**2:
                if (x**2 + y**2) >= (i * iRes)**2:  # Checks for radius, and puts in cartesian space
                    radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                    radJ[x+iRes, -y+iRes] = (n-m) / a
                    radJ[-x+iRes, y+iRes] = (n-m) / a
                    radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (difference to the middle pixel) and the array for rotation, respectively, I would suggest:
from scipy import interpolate
import numpy as np

y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01)  # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, y_val in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + y_val ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r that has the same length as your data array and contains the distance between each data point and the middle of the array. The code above does this automatically.
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbor). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, and is therefore faster, but it is also less precise and needs much more memory, because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one would just add values with each tilt, ending up outside the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (see the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp

def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = min(global_min, tilt.min())
        global_max = max(global_max, tilt.max())
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np

def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size-1)
    ind_y = np.clip(ind_y, 0, vector.size-1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
