I have a function in Python that is meant to operate on a scalar input, but multiplies matrices in the process. The exact code is shown below:
import numpy as np

def f(t, n):
    T = np.pi
    a_0 = 0.5
    n = np.arange(1, n + 1)
    # Calculate the Fourier series of f(t)
    a_n = np.sin(n*T) / (n * np.pi)
    b_n = (1 - np.cos(n * T)) / (n * np.pi)
    res = a_0 + np.sum(a_n * np.cos(n*t)) + np.sum(b_n * np.sin(n*t))
    return res
Now I want this to operate on a vector of inputs t, with the implementation staying vectorised (no for loops). I can see that one solution would be to build a matrix of dimensions len(t) x n, where the initial vector n is stacked vertically len(t) times, and then perform elementwise multiplication with t. But what would be the proper way to implement this function?
Here's one vectorized approach that accepts a vector of inputs t. It uses broadcasting, and the final sum-reduction is performed as a matrix multiplication with np.dot -
def f_vectorized(t, n):  # where t is an array
    t2D = t[:, None]
    T = np.pi
    a_0 = 0.5
    n = np.arange(1, n + 1)
    a_n = np.sin(n*T) / (n * np.pi)
    b_n = (1 - np.cos(n * T)) / (n * np.pi)
    nt2D = n * t2D
    return a_0 + np.cos(nt2D).dot(a_n) + np.sin(nt2D).dot(b_n)
Sample run -
In [142]: t
Out[142]: array([8, 1, 8, 0, 2, 7, 8, 8])
In [143]: n = 5
In [144]: f_vectorized(t,n)
Out[144]:
array([ 1.03254608, 0.94354963, 1.03254608, 0.5 , 0.95031599,
1.04127659, 1.03254608, 1.03254608])
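As a quick sanity check (a sketch, assuming the scalar f from the question is also defined), the vectorized version should agree elementwise with the scalar one:

t = np.array([8, 1, 8, 0, 2, 7, 8, 8])
expected = np.array([f(ti, 5) for ti in t])  # scalar f applied element by element
assert np.allclose(f_vectorized(t, 5), expected)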
Here is a formulaic "vectorisation". Note that only a handful of changes are necessary: the first line of the body and the second-to-last.
First line: asanyarray accepts array-like inputs (scalars, arrays, nested lists, etc.) and treats them all the same. The indexing adds one axis at the very end; that is the space for the Fourier coefficients. Conveniently, these are broadcast automatically, since they occupy the last dimension and missing axes are inserted on the left. This is why the code works almost unchanged.
Only the summations at the end have to be restricted to the Fourier axis, which is what the axis=-1 keyword arguments do.
def f(t, n):
    t = np.asanyarray(t)[..., None]
    T = np.pi
    a_0 = 0.5
    n = np.arange(1, n + 1)
    # Calculate the Fourier series of f(t)
    a_n = np.sin(n*T) / (n * np.pi)
    b_n = (1 - np.cos(n * T)) / (n * np.pi)
    res = a_0 + np.sum(a_n * np.cos(n*t), axis=-1) + np.sum(b_n * np.sin(n*t), axis=-1)
    return res
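A brief usage sketch (assuming numpy is imported as np), showing that the same function now handles scalars, lists, and arrays of any shape:

print(f(0.5, 5))                     # scalar input -> 0-d result
print(f([0.0, 0.5, 1.0], 5))         # list input -> 1-d array
print(f(np.zeros((2, 2)), 5).shape)  # (2, 2): the input shape is preserved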
I'm using numpy to compute the cross product of two arrays, but I'm running into the following error:
ValueError: non-broadcastable output operand with shape () doesn't match the broadcast shape (50,)
I have three variables which are of type numpy.ndarray and represent row vectors:
n = [0 0 1]
beta = [array with 50 elements, 0, array with 50 elements]
betap = [array with 50 elements, 0, array with 50 elements]
The first and third elements of beta represent the result of the following piecewise functions:
import numpy as np

t_start = (-2 * 0.01) / 299792458
t_end = (3 * 0.01) / 299792458
t = np.linspace(t_start, t_end)

x_of_t = np.piecewise(t,
    [np.logical_or(t < t_start, t > t_end), np.logical_and(t_start <= t, t <= t_end)],
    [((0.01 * 0.02) / (2 * np.pi * 2)),
     (lambda t: (0.01 * 0.02) / (2 * np.pi * 2) * np.cos((2 * np.pi)/0.01 * 299792458 * t))])

z_of_t = np.piecewise(t,
    [t < t_start, t > t_end, np.logical_and(t_start <= t, t <= t_end)],
    [(lambda t: (np.sqrt(0.75) * 299792458 * (t - t_start) + np.sqrt(0.75) * (1 - 0.02**2 / (4 * np.sqrt(0.75) * 2**2)) * 299792458 * t_start - 0.01 * 0.02**2 / (16 * np.pi * np.sqrt(0.75) * 2**2) * np.cos(2 * ((2 * np.pi)/0.01) * 299792458 * t_start))),
     (lambda t: np.sqrt(0.75) * 299792458 * (t - t_end) + np.sqrt(0.75) * (1 - 0.02**2 / (4 * np.sqrt(0.75) * 2**2)) * 299792458 * t_end - 0.01 * 0.02**2 / (16 * np.pi * np.sqrt(0.75) * 2**2) * np.cos(2 * ((2 * np.pi)/0.01) * 299792458 * t_end)),
     (lambda t: np.sqrt(0.75) * (1 - 0.02**2 / (4 * np.sqrt(0.75) * 2**2)) * 299792458 * t - 0.01 * 0.02**2 / (16 * np.pi * np.sqrt(0.75) * 2**2) * np.cos(2 * ((2 * np.pi)/0.01) * 299792458 * t))])
The first and third elements of betap represent the derivatives of these functions. I am calculating the derivatives using numpy's gradient function np.gradient(beta_x_of_t) so the array lengths will match (len = 50). Similarly, when checking the shape of n, beta, and betap, they all have the same shape (3,). Checking the shape of the first and third elements of beta and betap gives the following shape: (50,).
Given that the shapes are the same, I thought I should be able to perform the cross-product using np.cross(n-beta, betap), but I get the above error.
My goal here is twofold:
1. Understand what the error message means. I've looked over many other questions posted here, but I haven't seen anyone explain what it means for an output operand to be non-broadcastable.
2. Using that information, resolve the error and compute the cross product.
From the np.cross docs, 2 sample arrays:
In [51]: x = np.array([[1,2,3], [4,5,6]])
...: y = np.array([[4,5,6], [1,2,3]])
...:
In [52]: np.cross(x[0],y[0])
Out[52]: array([-3, 6, -3])
In [53]: np.cross(x,y)
Out[53]:
array([[-3, 6, -3],
[ 3, -6, 3]])
Thus, cross takes vector cross products row by row. It can take two (n,3) arrays and return an (n,3) result (or take (n,2) arrays for a 2-d vector cross); see the example below.
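For instance, with (n,2) inputs each row is treated as a 2-d vector, and only the scalar z-component of each cross product is returned (the values follow directly from the x1*y2 - x2*y1 formula; note that 2-d input to np.cross is deprecated in recent NumPy releases):

In [54]: np.cross(np.array([[1, 2], [3, 4]]), np.array([[5, 6], [7, 8]]))
Out[54]: array([-4, -4])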
And of course the cross product of an array with itself is 0:
In [59]: np.cross(x,x)
Out[59]:
array([[0, 0, 0],
[0, 0, 0]])
Let's make a list like your beta:
In [60]: beta = [np.arange(3),0,np.arange(3)]
In [61]: beta
Out[61]: [array([0, 1, 2]), 0, array([0, 1, 2])]
numpy doesn't like to make an array from such a list:
In [62]: np.array(beta)
<ipython-input-62-f4485c3a6e4c>:1: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
np.array(beta)
Out[62]: array([array([0, 1, 2]), 0, array([0, 1, 2])], dtype=object)
But let's try to perform cross on such a (3,) shaped array:
In [64]: barr = np.array(beta,object)
In [65]: barr
Out[65]: array([array([0, 1, 2]), 0, array([0, 1, 2])], dtype=object)
In [66]: np.cross(barr, barr)
Traceback (most recent call last):
File "<ipython-input-66-7218b4a081e0>", line 1, in <module>
np.cross(barr, barr)
File "<__array_function__ internals>", line 5, in cross
File "/usr/local/lib/python3.8/dist-packages/numpy/core/numeric.py", line 1655, in cross
cp0 -= tmp
ValueError: non-broadcastable output operand with shape () doesn't match the broadcast shape (3,)
That looks like your error.
np.cross is intended for use with (N,3) shaped numeric-dtype arrays. An object-dtype array like your beta is wrong, and I'm not even sure what you intend to happen here.
Most of your question is extraneous and has nothing to do with the error. Your question should have shown the full traceback and clearly stated the shape and dtype of the arguments.
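To actually compute the cross product, stack the components into an ordinary numeric (50,3) array instead of holding arrays inside an object-dtype array. A minimal sketch, with random stand-ins for your 50-element components:

import numpy as np

rng = np.random.default_rng(0)
beta_x, beta_z = rng.random(50), rng.random(50)    # stand-ins for your piecewise results
betap_x, betap_z = rng.random(50), rng.random(50)  # stand-ins for their gradients

n = np.array([0.0, 0.0, 1.0])
zeros = np.zeros(50)
beta = np.stack([beta_x, zeros, beta_z], axis=-1)     # shape (50, 3), numeric dtype
betap = np.stack([betap_x, zeros, betap_z], axis=-1)  # shape (50, 3)

result = np.cross(n - beta, betap)  # n broadcasts against (50, 3); result has shape (50, 3)
print(result.shape)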
Forgive my ignorance, but the function H is actually an infinite sum (including -n terms). I'm hoping to truncate it at larger values, ideally on the order of hundreds. My code seems to work, but I am not sure it is actually summing over the values of n described.
Code
import numpy as np
from scipy.integrate import trapz

tvals = [1, 2, 3, 4, 5]               # fixed values of t
xvals = [0, 0.2, 0.4, 0.6, 0.8, 1.0]  # fixed values of x
xi = np.linspace(0, 1, 100)

def H(xi, x, t):
    for n in range(-2, 2):
        return 0.5 * np.sin(np.pi * xi) * np.exp(-(x - 2 * n - xi)**2 / 4 * t) / np.sqrt(np.pi * t)
        # Everything to the right of the sin term is part of a sum
        # xi is the integral variable!

TrapzH = trapz(H(xi, xvals[1], tvals[0]), x=None, dx=0.1, axis=-1)
print(TrapzH)
I've tested this against a version with a single term (range(0, 1)), and the two seem to give different values, but I am still unsure.
Your function H does not iterate over the specified n range, since it exits on the first return encountered (at n = -2). Maybe you are looking for something like:
def H(xi, x, t):
    total = np.zeros(xi.shape)
    for n in range(-2, 2):  # note: this covers n = -2..1; use range(-2, 3) for a symmetric -2..2
        # As written, "/ 4 * t" parses as ((...)**2 / 4) * t; if the formula
        # divides by 4t, it should read "/ (4 * t)".
        total += np.exp(-(x - 2 * n - xi)**2 / 4 * t) / np.sqrt(np.pi * t)
    return 0.5 * np.sin(np.pi * xi) * total
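As an aside, the corrected loop can also be vectorized over n with broadcasting. A sketch (keeping the "/ 4 * t" term exactly as written above):

def H_vectorized(xi, x, t, n_max=2):
    n = np.arange(-n_max, n_max)[:, None]  # column of n values broadcasts against xi
    terms = np.exp(-(x - 2 * n - xi)**2 / 4 * t) / np.sqrt(np.pi * t)
    return 0.5 * np.sin(np.pi * xi) * terms.sum(axis=0)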
If I sample a non-central chi-square distribution using a Poisson distribution, I am unable to alter the size and can only input the mean, nc / 2 (I must set size = 1, or it returns the error shown below):
n = np.random.poisson(nc / 2, 1)  # generates a random variable from the Poisson
                                  # distribution with mean: non-centrality parameter / 2
x[t] = c * mp.nsum(lambda i: np.random.standard_normal() ** 2, [0, v + 2 * n])
If I attempt to increase the size to the number of simulations being run
n = np.random.poisson(nc / 2, simulations)
where simulations = 10000, I receive:
"ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()"
Running the code with 1 simulation produces one desired result, and every run produces another random path.
[Figure: graph created under 10,000 simulations with size = 1]
However, the graph must be composed of paths determined by each iteration of the simulation. Under a different condition, the non-central chi-square distribution is determined by the code:
x[t] = c * ((np.random.standard_normal(simulations) + nc ** 0.5) ** 2 + mp.nsum(
    lambda i: np.random.standard_normal(simulations) ** 2, [0, v - 1]))
which does produce the desired result
[Figure: graph produced by the line of code above]
How can I obtain a different path for x[t] in each of the 10,000 simulations, despite not being able to change the size of the Poisson draw (i.e. not have the same path repeated for every simulation)?
If required:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import stats
import mpmath as mp

T = 1
beta = 1.5
x0 = 0.05
q = 0
mu = x0 - q
alpha = - (2 - beta) * mu
sigma0 = 0.1
sigma = (2 - beta) * sigma0
b = - (1 - beta) / (2 * mu) * sigma ** 2
simulations = 10000
M = 50
dt = T / M

def srd_sampled_nxc2():
    x = np.zeros((M + 1, simulations))
    x[0] = x0
    for t in range(1, M + 1):
        v = 4 * b * alpha / sigma ** 2
        c = (sigma ** 2 * (1 - np.exp(-alpha * dt))) / (4 * alpha)
        nc = np.exp(-alpha * dt) / c * x[t - 1]  # the non-centrality parameter lambda
        if v > 1:
            x[t] = c * ((np.random.standard_normal(simulations) + nc ** 0.5) ** 2 + mp.nsum(
                lambda i: np.random.standard_normal(simulations) ** 2, [0, v - 1]))
        else:
            n = np.random.poisson(nc / 2, 1)
            x[t] = c * mp.nsum(lambda i: np.random.standard_normal() ** 2, [0, v + 2 * n])
    return x

x1 = srd_sampled_nxc2()
plt.figure(figsize=(10, 6))
plt.plot(x1[:, :10], lw=1)
plt.xlabel('time')
plt.ylabel('index')
plt.show()
I've realized that a beta greater than 1 creates a negative v and a very large nc. There was nothing to fill the array with, because no distribution could be created while v was negative. My understanding is that b must be made positive, which fixes the negative v and allows the program to run.
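For what it's worth, once v is positive, numpy can sample the non-central chi-square distribution directly, and it accepts a per-path array for the non-centrality parameter, which sidesteps the Poisson size restriction entirely. A sketch (assuming nc is the (simulations,)-shaped array computed from x[t - 1] and v > 0):

# One non-central chi-square draw per simulation path; nonc may be an array.
x[t] = c * np.random.noncentral_chisquare(df=v, nonc=nc, size=simulations)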
I will try to explain exactly what's going on and what my issue is.
This is a bit mathy and SO doesn't support LaTeX, so sadly I had to resort to images (not reproduced here). I hope that's okay.
At any rate, this is a linear system Ax = b where we know A and b, so we can find x, which is our approximation at the next time step. We continue doing this until time t_final.
This is the code:
import numpy as np

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l    # number of grid points
    dx = 1.0 / N  # space between grid points
    dx2 = dx * dx
    dt = dx        # time step
    t_final = 1

    approximate_f = np.zeros((N, 1), dtype=complex)
    approximate_g = np.zeros((N, 1), dtype=complex)

    # Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = -i * np.sin(tau * k * dx)

    # Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype=complex)
    # First row is special
    A[0, 0] = 1 - 3 * i * dt
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = (-dt / dx2) * i
    A[0, -1] = (-dt / dx2) * i
    # Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = (-dt / dx2) * i
    A[N - 1, -2] = (-dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i
    # Middle rows
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = (-dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = (-dt / dx2) * i
    # Bottom half
    A[N:, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]

    Ainv = np.linalg.inv(A)

    # Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis=0)
        x = np.dot(Ainv, b)  # solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    # Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype=complex)
    actual_g = np.zeros((N, 1), dtype=complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis=0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))
It doesn't work. At least not at the beginning: it shouldn't start this slow. It should be unconditionally stable and converge to the right answer.
What's going wrong here?
The L2 norm can be a useful metric to test convergence, but it isn't ideal for debugging, since it doesn't explain what the problem is. Although your solution should be unconditionally stable, backward Euler won't necessarily converge to the right answer. Just as forward Euler is notoriously unstable (anti-dissipative), backward Euler is notoriously dissipative. Plotting your solutions confirms this: the numerical solutions converge to zero. For a second-order approximation, Crank-Nicolson is a reasonable candidate. The code below implements the more general theta method, so that you can tune the implicitness of the scheme: theta=0.5 gives CN, theta=1 gives BE, and theta=0 gives FE.
A couple of other things that I tweaked:
- I selected a more appropriate time step of dt = (dx**2)/2 instead of dt = dx. The latter doesn't converge to the right solution using CN.
- It's a minor note, but since t_final isn't guaranteed to be a multiple of dt, you weren't comparing solutions at the same time step.
With regards to your comment about it being slow: as you increase the spatial resolution, your time resolution needs to increase too. Even in your case with dt = dx, you have to multiply a (1024 x 1024) matrix by a vector 1024 times. I didn't find this to take particularly long on my machine. I removed some unneeded concatenation to speed it up a bit, but changing the time step to dt = (dx**2)/2 will really bog things down, unfortunately. You could try compiling with Numba if you are concerned with speed.
All that said, I didn't find tremendous success with the consistency of CN. I had to set N = 2^6 to get anything reasonable at t_final = 1. Increasing t_final makes this worse, decreasing t_final makes it better. Depending on your needs, you could look into implementing TR-BDF2 or other linear multistep methods to improve this.
The code with a plot is below:
import numpy as np
import matplotlib.pyplot as plt

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

l = 6
N = 2 ** l
dx = 1.0 / N
dx2 = dx * dx
dt = dx2 / 2
t_final = 1.
x_arr = np.arange(0, 1, dx)

approximate_f = np.cos(tau * x_arr)
approximate_g = -i * np.sin(tau * x_arr)

H = np.zeros([2 * N, 2 * N], dtype=complex)
for k in range(N):
    H[k, k] = -3 * i * dt
    H[k, k + N] = (2 / dx2 + 1) * i * dt
    if k == 0:
        H[k, N + 1] = -i / dx2 * dt
        H[k, -1] = -i / dx2 * dt
    elif k == N - 1:
        H[N - 1, N] = -i / dx2 * dt
        H[N - 1, -2] = -i / dx2 * dt
    else:
        H[k, k + N - 1] = -i / dx2 * dt
        H[k, k + N + 1] = -i / dx2 * dt

### Bottom half
H[N:, :N] = H[:N, N:]
H[N:, N:] = H[:N, :N]

### Theta method. 0.5 -> Crank-Nicolson
theta = 0.5
A = np.eye(2 * N) + H * theta
B = np.eye(2 * N) - H * (1 - theta)

### Precompute for faster computations
mat = np.linalg.inv(A) @ B

t = 0
b = np.concatenate((approximate_f, approximate_g))
while t < t_final:
    t += dt
    b = mat @ b
approximate_f = b[:N]
approximate_g = b[N:]
approximate_solution = np.concatenate((approximate_f, approximate_g))

# Calculate the actual solution
actual_f = solution_f(t, np.arange(0, 1, dx))
actual_g = solution_g(t, np.arange(0, 1, dx))
actual_solution = np.concatenate((actual_f, actual_g))

plt.figure(figsize=(7, 5))
plt.plot(x_arr, actual_f.real, c="C0", label=r"$Re(f_\mathrm{true})$")
plt.plot(x_arr, actual_f.imag, c="C1", label=r"$Im(f_\mathrm{true})$")
plt.plot(x_arr, approximate_f.real, c="C0", ls="--", label=r"$Re(f_\mathrm{num})$")
plt.plot(x_arr, approximate_f.imag, c="C1", ls="--", label=r"$Im(f_\mathrm{num})$")
plt.legend(loc=3, fontsize=12)
plt.xlabel("x")
plt.savefig("num_approx.png", dpi=150)
I am not going to go through all of your math, but I'm going to offer a suggestion.
The use of a direct calculation for fxx and gxx seems like a good candidate for numerical instability. Intuitively, a first-order method should be expected to make second-order mistakes in the individual terms. Those second-order mistakes, after passing through that formula, wind up as constant-order mistakes in the second derivative. Plus, when your step size gets small, a quadratic formula makes even small roundoff mistakes turn into surprisingly large errors.
Instead, I would suggest that you start by turning this into a first-order system of 4 functions, f, fx, g, and gx, and then proceed with backward Euler on that system. Intuitively, with this approach, a first-order method creates second-order mistakes, which pass through a formula that turns them into first-order mistakes. Now you are converging as you should from the start, and you are also not as sensitive to propagation of roundoff errors.
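As a generic illustration of that suggestion (not the poster's exact equations): for a linear first-order system u' = M u, one backward-Euler step solves (I - dt*M) u_new = u_old. A minimal sketch:

import numpy as np

def backward_euler_step(M, u, dt):
    """One backward-Euler step for the linear system u' = M u."""
    I = np.eye(len(u), dtype=M.dtype)
    return np.linalg.solve(I - dt * M, u)  # solve rather than invert for accuracy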
I am trying to find the Fourier series representation, for n harmonics, of a discrete-time data set. The data is not originally periodic, so I performed a periodic extension on the data set; the result can be seen in the waveform below (image not reproduced here).
I have tried to replicate the solution in this question: Calculate the Fourier series with the trigonometry approach.
However, the results I received did not produce a proper output, as can be seen in the pictures below (not reproduced here). It seems that the computation simply outputs an offset version of the original signal.
The data set I am working with is a numpy array with 4060 elements.
How can I properly compute and graph the Fourier series decomposition of a discrete data set?
Here is the code I used. It is almost identical to that in the linked example, except for changes made to accommodate my own signal data.
import numpy as np
import matplotlib.pyplot as plt

# dat is a list with the original non-periodic data
# persig is essentially dat repeated over several periods
persig = np.asarray(persig)

# Define "x" range.
l = len(persig)
x = np.linspace(0, 1, l)
print(len(x))

# Define "T", i.e. the functions' period.
T = len(dat)
print(T)
L = T / 2

# "f(x)" function definition.
def f(x):
    return persig

# "a" coefficient calculation.
def a(n, L, accuracy=1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.cos((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# "b" coefficient calculation.
def b(n, L, accuracy=1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.sin((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# Fourier series.
def Sf(x, L, n=5):
    a0 = a(0, L)
    total = np.zeros(np.size(x))
    for i in np.arange(1, n + 1):
        total += (a(i, L) * np.cos((i * np.pi * x) / L)) + (b(i, L) * np.sin((i * np.pi * x) / L))
    return (a0 / 2) + total

plt.plot(x, f(x))
plt.plot(x, Sf(x, L))
plt.show()
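For evenly sampled data covering exactly one period, a common alternative (a sketch, not the linked trigonometric-integration approach) is to read the a and b coefficients straight off the FFT and sum the first n harmonics:

import numpy as np

def fourier_partial_sum(data, n_harmonics):
    """Partial trigonometric Fourier sum of one period of evenly sampled data."""
    N = len(data)
    F = np.fft.rfft(data)           # requires n_harmonics <= N // 2
    a = 2 * F.real / N              # cosine coefficients; a[0] is twice the mean
    b = -2 * F.imag / N             # sine coefficients
    k = np.arange(N)[:, None]       # sample index
    h = np.arange(1, n_harmonics + 1)
    angles = 2 * np.pi * k * h / N  # shape (N, n_harmonics)
    return a[0] / 2 + np.cos(angles) @ a[1:n_harmonics + 1] + np.sin(angles) @ b[1:n_harmonics + 1]

Plotting fourier_partial_sum(dat, 5) against the sample index, tiled over the periodic extension, should overlay the low-harmonic approximation on the original signal.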