Use Python SciPy to solve ODE

I am facing a problem when I use scipy.integrate.ode.
I want to use a spectral method (Fourier transform) to solve a PDE with dispersive and convection terms, such as

du/dt = A * d^3u/dx^3 + C * du/dx

Under the Fourier transform, this PDE converts to a set of ODEs in complex space (uk is a complex vector):

duk/dt = (A * coeff^3 + C * coeff) * uk
coeff = (2 * pi * i * k) / L

where k is the wavenumber (e.g. k = 0, 1, 2, 3, -4, -3, -2, -1), i^2 = -1, and L is the length of the domain.
When I use r = ode(uODE).set_integrator('zvode', method='adams'), Python warns:
c ZVODE-- At current T (=R1), MXSTEP (=I1) steps
taken on this call before reaching TOUT
In above message, I1 = 500
In above message, R1 = 0.2191432098050D+00
I suspect this is because the time step I chose is too large; however, I cannot decrease the time step, because every step is time-consuming for my real problem. Is there another way to resolve this?
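One knob worth trying before shrinking the time step: MXSTEP (=500 here) is ZVODE's cap on internal sub-steps per call, and it can be raised through the nsteps option of set_integrator. A minimal sketch under assumed values (A_, C_, L_, N, and the initial condition are mine, not from the question):

import numpy as np
from scipy.integrate import ode

A_, C_, L_, N = 1e-4, 1.0, 2 * np.pi, 64
k = np.fft.fftfreq(N, d=1.0 / N)          # wavenumbers 0, 1, ..., -2, -1
coeff = 2j * np.pi * k / L_

def uODE(t, uk):
    # right-hand side of the spectral system: duk/dt = (A*coeff^3 + C*coeff)*uk
    return (A_ * coeff**3 + C_ * coeff) * uk

r = ode(uODE).set_integrator('zvode', method='adams', nsteps=5000)  # default is 500
r.set_initial_value(np.fft.fft(np.sin(np.arange(N) * L_ / N)), 0.0)
r.integrate(1.0)

If the k^3 term makes the system stiff, method='bdf' is also worth a try.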

Did you consider solving the ODEs symbolically? With SymPy you can type
import sympy as sy
sy.init_printing() # use IPython for better results
from sympy.abc import A, C, c, x, t # variables
u = sy.Function('u')(x, t)
eq = sy.Eq(u.diff(t), c*u)
sl1 = sy.pde.pdsolve(eq, u)
print("The solution of:")
sy.pprint(eq)
print("was determined to be:")
sy.pprint(sl1)
print("")
print("Substituting the coefficient:")
k,L = sy.symbols("k L", real=True)
coeff = (2 * sy.pi * sy.I * k) / L
cc = (A * coeff**3 + C * coeff)
sl2 = sy.simplify(sl1.replace(c, cc))
sy.pprint(sl2)
gives the following output:
The solution of:

    ∂/∂t u(x, t) = c⋅u(x, t)

was determined to be:

    u(x, t) = F(x)⋅exp(c⋅t)

Substituting the coefficient:

    u(x, t) = F(x)⋅exp(-2⋅ⅈ⋅π⋅k⋅t⋅(4⋅π²⋅A⋅k² - C⋅L²) / L³)
Note that F(x) depends on your initial values of u(x,t=0), which you need to provide.
Use sl2.rhs.evalf() to substitute in numbers.
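Since each Fourier mode evolves independently, this closed-form solution can also be evaluated numerically with NumPy, with no time stepper at all. A hedged sketch (A_, C_, L_, N, and the initial condition are assumed example values):

import numpy as np

A_, C_, L_, N = 1e-4, 1.0, 2 * np.pi, 128
x = np.arange(N) * (L_ / N)
u0 = np.exp(-10 * (x - L_ / 2)**2)        # assumed initial condition u(x, t=0)
k = np.fft.fftfreq(N, d=1.0 / N)          # wavenumbers 0, 1, ..., -2, -1
coeff = 2j * np.pi * k / L_
c_k = A_ * coeff**3 + C_ * coeff          # the exponent's coefficient from sl2
u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(c_k * 0.5)).real   # u(x, t=0.5)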

Related

Different results from creating a function in different ways - only length-1 arrays can be converted to Python scalars

I have defined the following functions in Python:

from math import *
import numpy as np
import cmath

def BSM_CF(u, s0, T, r, sigma):
    realp = -0.5 * u**2 * sigma**2 * T
    imagp = u * (s0 + (r - 0.5 * sigma**2) * T)
    zc = complex(realp, imagp)
    return cmath.exp(zc)

def BSM_characteristic_function(v, x0, T, r, sigma):
    cf_value = np.exp(((x0 / T + r - 0.5 * sigma ** 2) * 1j * v -
                       0.5 * sigma ** 2 * v ** 2) * T)
    return cf_value
Parameters:
alpha = 1.5
K = 90
S0 = 100
T = 1
r = 0.05
sigma = 0.2
k = np.log(K / S0)
s0 = np.log(S0 / S0)
g = 1 # factor to increase accuracy
N = 2 ** 2
eta = 0.15
eps = (2*np.pi)/(N*eta)
b = 0.5 * N * eps - k
u = np.arange(1, N + 1, 1)
vo = eta * (u - 1)
v = vo - (alpha + 1) * 1j
BSMCF = BSM_characteristic_function(v, s0, T, r, sigma)
BSMCF_v2 = BSM_CF(0, s0, T, r, sigma)
print(BSMCF)
print(BSMCF_v2)
Both are meant to be the same function, but I get different results. How can I fix BSM_CF so that it returns the same result as BSM_characteristic_function? The idea is to get an array of 4 values, as in the function BSM_characteristic_function.
Your calls are not identical. You are passing v in the first call and 0 in the second call. If I pass 0 for both, the results are identical. If I pass v, it complains because you can't call complex() on a vector.
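For completeness, a hedged sketch of a vectorized variant (the name BSM_CF_vectorized is mine, not from the post): replacing complex()/cmath.exp() with NumPy arithmetic lets the function accept the array v directly:

import numpy as np

def BSM_CF_vectorized(u, s0, T, r, sigma):
    # works element-wise on arrays; for complex u the two parts below are not
    # literally the real and imaginary parts, but the combined exponent matches
    # the one in BSM_characteristic_function
    realp = -0.5 * u**2 * sigma**2 * T
    imagp = u * (s0 + (r - 0.5 * sigma**2) * T)
    return np.exp(realp + 1j * imagp)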
Numeric computation is not always identical to symbolic algebra. In the first formula you split the computation into explicit real and imaginary parts, which can introduce rounding errors in the complex part. I came across such mistakes quite often when I used Mathematica, which loves to turn a real formula into a complex one before doing the numeric computation.

Programming a PDE in Python with Brownian path

I'm currently trying to simulate a PDE that includes a Brownian path (in one of the terms, the change over a time step dt is weighted by a normally distributed variable with mean 0 and variance dt).
For this I used the fast Fourier transform to get a system of ODEs, which I can solve much more easily (at least that's what I thought). This led me to the following code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Defining some parameters a, b, c, which are included in the PDE
a = 10
b = 1.5
c = 20

# Creating the mesh
L = 100
N = 100
dx = L/N
x = np.arange(0, L, dx)
dt = 0.01
t = np.linspace(0, 1, 100)

# Frequency for the Fourier transformation
kappa = 2*np.pi*np.fft.fftfreq(N, d=dx)

# Initial condition for function u and its fast Fourier transformation
u0 = np.zeros_like(x)
u0[int((L/4-L/10)/dx):int((L/4+L/10)/dx)] = 2.5
u0[int((3*L/4-L/10)/dx):int((3*L/4+L/10)/dx)] = 2.5
u0hat = np.fft.fft(u0)
u0hat_ri = np.concatenate((u0hat.real, u0hat.imag))

# Define the function describing the transformation from the PDE to the system of ODEs
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    # Define the weighted change by the Brownian path B
    mean = [0]*len(uhat)
    diag = [0.1]*len(uhat)
    cov = np.diag(diag)
    B = np.random.multivariate_normal(mean, cov)
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat*B
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

# Solve the ODE with odeint
uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c))
uhat = uhat_ri[:, :N] + (1j)*uhat_ri[:, N:]
u = np.zeros_like(uhat)

# Inverse transform the solution
for k in range(len(t)):
    u[:, k] = np.fft.ifft(uhat[k, :])
u = u.real
This program works if I exclude the Brownian path B in func:

def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*(1j)*kappa*uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

But it takes a long time to execute when B is included, and it also tells me:
C:\Users\leo_h\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
EDIT/ANSWER:
I solved the problem by moving the Brownian-path update out of func. I guess it was just too much for odeint to cope with (or it generated a new Brownian path for each evaluation?): since odeint evaluates func many times per step under adaptive error control, a right-hand side that changes randomly on every call looks discontinuous to the solver.
mean = [0]*len(u0hat)
diag = [2]*len(u0hat)
cov = np.diag(diag)
B = np.random.multivariate_normal(mean, cov)

def func(uhat_ri, t, kappa, a, b, c, B):
    uhat = uhat_ri[:N] + (1j)*uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2))*uhat - c*(1j)*kappa*uhat + b*B*(1j)*kappa*uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c, B))
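If a genuine Brownian path is wanted rather than a single frozen sample B, a manual Euler-Maruyama loop is the usual alternative. A hedged sketch, reusing u0hat, kappa, a, b, c, dt, and N from above; the sqrt(dt) scaling gives each increment variance dt, as in the problem statement:

rng = np.random.default_rng(0)
uhat = u0hat.copy()
for _ in range(int(1.0 / dt)):
    drift = -a**2 * kappa**2 * uhat - c * 1j * kappa * uhat
    dW = np.sqrt(dt) * rng.standard_normal(N)   # Brownian increment
    uhat = uhat + drift * dt + b * 1j * kappa * uhat * dW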

How can I plot a 3D graph of a multivariate integral function, and find its global minima

I have a cost function f(r, Q), which is computed by the code below. The cost function f(r, Q) is a function of two variables, r and Q. I want to plot the values of the cost function for all values of r and Q in the ranges given below, and also find the global minimum of f(r, Q).
The ranges of r and Q are, respectively:
0 < r < 5000
5000 < Q < 15000
The plot should have r, Q, and f(r, Q) as its axes.
Code for the cost function:

from numpy import sqrt, pi, exp
from scipy import optimize
from scipy.integrate import quad
import numpy as np

mean, std = 295, 250
l = 7
m = 30
p = 15
w = 7
K = 100
c = 5
h = 0.001  # per unit per day

# defining the cumulative distribution function
def cdf(x):
    cdf_eqn = lambda t: (1 / (std * sqrt(2 * pi))) * exp(-(((t - mean) ** 2) / (2 * std ** 2)))
    cdf = quad(cdf_eqn, -np.inf, x)[0]
    return cdf

# defining the probability density function
def pdf(x):
    return (1 / (std * sqrt(2 * pi))) * exp(-(((x - mean) ** 2) / (2 * std ** 2)))

# getting the equation in place
def G(r, Q):
    return K + c * Q \
        + w * (quad(cdf, 0, Q)[0] + quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]) \
        + p * (mean * l - r + quad(cdf, 0, r)[0])

def CL(r, Q):
    return (Q - r + mean * l - quad(cdf, 0, Q)[0]
            - quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]
            + quad(cdf, 0, r)[0]) / mean

def I(r, Q):
    return h * (Q + r - mean * l - quad(cdf, 0, Q)[0]
                - quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]
                + quad(cdf, 0, r)[0]) / 2

def f(params):
    r, Q = params
    TC = G(r, Q) / CL(r, Q) + I(r, Q)
    return TC
How can I plot f(r, Q) as a 3D surface and also find the global minimum (or minima), along with the values of r and Q at that point?
Additionally, I already tried using scipy.optimize.minimize to minimise the cost function f(r, Q), but the problem I am facing is that it outputs a result almost identical to the initial guess passed to optimize.minimize. Here is the code for minimizing the function:
initial_guess = [2500., 10000.]
result = optimize.minimize(f, initial_guess, bounds=[(1, 5000), (5000, 15000)], tol=1e-3)
print(result)
Output:
fun: 2712.7698818644253
hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
jac: array([-0.01195986, -0.01273293])
message: b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
nfev: 6
nit: 1
status: 0
success: True
x: array([ 2500.01209628, 10000.0127784 ])
The output x: array([ 2500.01209628, 10000.0127784 ]) is almost the same as the initial guess provided, which makes me doubt it is the real answer. Am I doing anything wrong in minimizing, or is there another way to do it? That is why I want to plot the cost function and look around for myself.
It would be great if I could have an interactive plot to play around with.
My answer is concerned only with plotting, but at the end I'll comment on the issue of the minima.
For what you need, a 3D surface plot is, imho, overkill; I'll instead show the use of contourf and contour to give a good idea of what is going on with your function.
First, the code. Key points:
your code, as is, cannot be executed in a vector context, so I wrote an explicit loop to compute the values;
due to Matplotlib's design, the x axis of matrix data is associated with columns, and this has to be accounted for;
the results of contour and contourf must be saved because they are needed for the labels and the color bar, respectively;
no labels or legends because I don't know what you are doing.
That said, here is the code:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi, exp
from scipy.integrate import quad

mean, std = 295, 250
l, m, p = 7, 30, 15
w, K, c = 7, 100, 5
h = 0.001  # per unit per day

# defining the cumulative distribution function
def cdf(x):
    cdf_eqn = lambda t: (1 / (std * sqrt(2 * pi))) * exp(-(((t - mean) ** 2) / (2 * std ** 2)))
    cdf = quad(cdf_eqn, -np.inf, x)[0]
    return cdf

# defining the probability density function
def pdf(x):
    return (1 / (std * sqrt(2 * pi))) * exp(-(((x - mean) ** 2) / (2 * std ** 2)))

# getting the equation in place
def G(r, Q):
    return K + c * Q \
        + w * (quad(cdf, 0, Q)[0] + quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]) \
        + p * (mean * l - r + quad(cdf, 0, r)[0])

def CL(r, Q):
    return (Q - r + mean * l - quad(cdf, 0, Q)[0]
            - quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]
            + quad(cdf, 0, r)[0]) / mean

def I(r, Q):
    return h * (Q + r - mean * l - quad(cdf, 0, Q)[0]
                - quad(lambda x: cdf(r + Q - x) * cdf(x), 0, r)[0]
                + quad(cdf, 0, r)[0]) / 2

# pulling it all together
def f(r, Q):
    TC = G(r, Q) / CL(r, Q) + I(r, Q)
    return TC

nr, nQ = 6, 11
r = np.linspace(0, 5000, nr)
Q = np.linspace(5000, 15000, nQ)
z = np.zeros((nr, nQ))  # r ←→ y, Q ←→ x
for i, ir in enumerate(r):
    for j, jQ in enumerate(Q):
        z[i, j] = f(ir, jQ)
    print('%2d: ' % i, ','.join('%8.3f' % v for v in z[i]))

fig, ax = plt.subplots()
cf = plt.contourf(Q, r, z)
cc = plt.contour(Q, r, z, colors='k')
plt.clabel(cc)
plt.colorbar(cf, orientation='horizontal')
ax.set_aspect(1)
plt.show()
and here are the results of its execution:
$ python cost.py
0: 4093.654,3661.777,3363.220,3120.073,2939.119,2794.255,2675.692,2576.880,2493.283,2426.111,2359.601
1: 4072.865,3621.468,3315.193,3068.710,2887.306,2743.229,2626.065,2528.934,2447.123,2381.802,2316.991
2: 4073.852,3622.443,3316.163,3069.679,2888.275,2744.198,2627.035,2529.905,2448.095,2382.775,2317.965
3: 4015.328,3514.874,3191.722,2939.397,2758.876,2618.292,2505.746,2413.632,2336.870,2276.570,2216.304
4: 3881.198,3290.628,2947.273,2694.213,2522.845,2394.095,2293.867,2213.651,2148.026,2098.173,2047.140
5: 3616.675,2919.726,2581.890,2352.015,2208.814,2106.289,2029.319,1969.438,1921.555,1887.398,1849.850
$
I can add that the global minimum and global maximum are in the corners, while there are two sub-horizontal lines of local minima (lower line) and local maxima (upper line) in the approximate regions r ≈ 1000 and r ≈ 2000.
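If, after looking at the contours, you still want a number for the global minimum, a derivative-free global optimizer avoids the initial-guess trap of the local minimizer. A hedged sketch using scipy.optimize.differential_evolution with the f(r, Q) defined above (expect it to be slow, since every evaluation nests several quad calls):

from scipy.optimize import differential_evolution

res = differential_evolution(lambda params: f(params[0], params[1]),
                             bounds=[(1, 5000), (5000, 15000)],
                             seed=0, tol=1e-3)
print(res.x, res.fun)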

Solving PDE with implicit euler in python - incorrect output

I will try to explain exactly what's going on, and my issue.
This is a bit mathy and SO doesn't support LaTeX, so sadly I had to resort to images. I hope that's okay. I don't know why the image is inverted, sorry about that.
At any rate, this is a linear system Ax = b where we know A and b, so we can find x, which is our approximation at the next time step. We continue doing this until time t_final.
This is the code:

import numpy as np

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l     # number of grid points
    dx = 1.0 / N   # space between grid points
    dx2 = dx * dx
    dt = dx         # time step
    t_final = 1
    approximate_f = np.zeros((N, 1), dtype=complex)
    approximate_g = np.zeros((N, 1), dtype=complex)

    # Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = -i * np.sin(tau * k * dx)

    # Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype=complex)

    # First row is special
    A[0, 0] = 1 - 3 * i * dt
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = (-dt / dx2) * i
    A[0, -1] = (-dt / dx2) * i

    # Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = (-dt / dx2) * i
    A[N - 1, -2] = (-dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i

    # Middle rows
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = (-dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = (-dt / dx2) * i

    # Bottom half
    A[N:, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]

    Ainv = np.linalg.inv(A)

    # Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis=0)
        x = np.dot(Ainv, b)  # Solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    # Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype=complex)
    actual_g = np.zeros((N, 1), dtype=complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis=0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))
It doesn't work. At least not in the beginning; it shouldn't start off this slowly. The scheme should be unconditionally stable and converge to the right answer.
What's going wrong here?
The L2 norm can be a useful metric to test convergence, but it isn't ideal when debugging, as it doesn't explain what the problem is. Although your scheme should be unconditionally stable, backward Euler won't necessarily converge to the right answer. Just as forward Euler is notoriously unstable (anti-dissipative), backward Euler is notoriously dissipative. Plotting your solutions confirms this: the numerical solutions converge to zero. For a second-order approximation, Crank-Nicolson is a reasonable candidate. The code below contains the more general theta method so that you can tune the implicitness of the solution: theta=0.5 gives CN, theta=1 gives BE, and theta=0 gives FE.
A couple of other things that I tweaked:
I selected a more appropriate time step of dt = (dx**2)/2 instead of dt = dx. The latter doesn't converge to the right solution using CN.
It's a minor note, but since t_final isn't guaranteed to be a multiple of dt, you weren't comparing solutions at the same time step.
With regard to your comment about it being slow: as you increase the spatial resolution, your time resolution needs to increase too. Even in your case with dt = dx, you have to perform a (1024 x 1024) matrix-vector multiplication 1024 times. I didn't find this to take particularly long on my machine. I removed some unneeded concatenation to speed it up a bit, but changing the time step to dt = (dx**2)/2 will really bog things down, unfortunately. You could try compiling with Numba if you are concerned with speed.
All that said, I didn't find tremendous success with the consistency of CN. I had to set N = 2^6 to get anything reasonable at t_final = 1. Increasing t_final makes this worse; decreasing it makes it better. Depending on your needs, you could look into implementing TR-BDF2 or other linear multistep methods to improve on this.
The code with a plot is below:

import numpy as np
import matplotlib.pyplot as plt

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

l = 6
N = 2 ** l
dx = 1.0 / N
dx2 = dx * dx
dt = dx2 / 2
t_final = 1.
x_arr = np.arange(0, 1, dx)

approximate_f = np.cos(tau * x_arr)
approximate_g = -i * np.sin(tau * x_arr)

H = np.zeros([2 * N, 2 * N], dtype=complex)
for k in range(N):
    H[k, k] = -3 * i * dt
    H[k, k + N] = (2 / dx2 + 1) * i * dt
    if k == 0:
        H[k, N + 1] = -i / dx2 * dt
        H[k, -1] = -i / dx2 * dt
    elif k == N - 1:
        H[N - 1, N] = -i / dx2 * dt
        H[N - 1, -2] = -i / dx2 * dt
    else:
        H[k, k + N - 1] = -i / dx2 * dt
        H[k, k + N + 1] = -i / dx2 * dt

### Bottom half
H[N:, :N] = H[:N, N:]
H[N:, N:] = H[:N, :N]

### Theta method. 0.5 -> Crank-Nicolson
theta = 0.5
A = np.eye(2 * N) + H * theta
B = np.eye(2 * N) - H * (1 - theta)

### Precompute for faster computations
mat = np.linalg.inv(A) @ B

t = 0
b = np.concatenate((approximate_f, approximate_g))
while t < t_final:
    t += dt
    b = mat @ b
approximate_f = b[:N]
approximate_g = b[N:]
approximate_solution = np.concatenate((approximate_f, approximate_g))

# Calculate the actual solution
actual_f = solution_f(t, np.arange(0, 1, dx))
actual_g = solution_g(t, np.arange(0, 1, dx))
actual_solution = np.concatenate((actual_f, actual_g))

plt.figure(figsize=(7, 5))
plt.plot(x_arr, actual_f.real, c="C0", label=r"$Re(f_\mathrm{true})$")
plt.plot(x_arr, actual_f.imag, c="C1", label=r"$Im(f_\mathrm{true})$")
plt.plot(x_arr, approximate_f.real, c="C0", ls="--", label=r"$Re(f_\mathrm{num})$")
plt.plot(x_arr, approximate_f.imag, c="C1", ls="--", label=r"$Im(f_\mathrm{num})$")
plt.legend(loc=3, fontsize=12)
plt.xlabel("x")
plt.savefig("num_approx.png", dpi=150)
I am not going to go through all of your math, but I'll offer a suggestion.
The use of a direct calculation for fxx and gxx seems like a good candidate for numerical instability. Intuitively, a first-order method should be expected to make second-order mistakes in the individual terms. Second-order mistakes in the individual terms, after passing through that formula, wind up as constant-order mistakes in the second derivative. Plus, when your step size gets small, you will find that a quadratic formula makes even small roundoff mistakes turn into surprisingly large errors.
Instead, I would suggest that you start by turning this into a first-order system of four functions: f, fx, g, and gx. Then proceed with backward Euler on that system. Intuitively, with this approach, a first-order method creates second-order mistakes, which then pass through a formula that creates first-order mistakes of them. Now you are converging as you should from the start, and you are also less sensitive to the propagation of roundoff errors.

solving equations simultaneously

I have the following set of equations, and I want to solve them simultaneously for X and Y. I've been advised that I could use numpy to solve these as a system of linear equations. Is that the best option, or is there a better way?
a = (((f * X) + (f2 * X3)) / (1 + (f * X) + (f2 * X3))) * i
b = ((f2 * X3) / (1 + (f * X) + (f2 * X3))) * i
c = ((f * X) / (1 + (j * X) + (k * Y))) * i
d = ((k * Y) / (1 + (j * X) + (k * Y))) * i
f = 0.0001
i = 0.001
j = 0.0001
k = 0.001
e = 0 = X + a + b + c
g = 0.0001 = Y + d
h = i - a
As noted by Joe, this is actually a system of nonlinear equations. You are going to need more firepower than numpy alone provides.
Solving nonlinear equations is tricky, and the typical approach is to define an objective function

F(z) = sum( e[n]^2, n=1...13 )

where z is a vector containing a value for each of your variables a, b, c, d, e, f, g, h, i, X, Y, and e[n] is the amount by which the n-th equation is violated. For example:

e[3] = d - ((k * Y) / (1 + (j * X) + (k * Y))) * i

Once you have that objective function, you can apply a nonlinear solver to try to find a z for which F(z) = 0. That of course corresponds to a solution of your equations.
Commonly used solvers include:
The Solver in Microsoft Excel
The Python library scipy.optimize
Fitting routines in the Gnu Scientific Library
Matlab's optimization toolbox
Note that all of them will work far better if you first alter your set of equations to eliminate as many variables as practical before trying to run the solver (e.g. by substituting for k wherever it is found). The reduced dimensionality makes a big difference.
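As a concrete illustration of the objective-function idea with scipy.optimize, the sketch below uses least_squares on residuals built from two of the equations after substituting the known constants. It is only a template: f2 and X3 are never defined in the question, so the values here are placeholders you must replace.

import numpy as np
from scipy.optimize import least_squares

f_, i_, j_, k_ = 0.0001, 0.001, 0.0001, 0.001
f2, X3 = 0.0001, 1.0          # placeholders: not defined in the question

def residuals(z):
    X, Y = z
    d1 = 1 + f_ * X + f2 * X3
    d2 = 1 + j_ * X + k_ * Y
    a = ((f_ * X + f2 * X3) / d1) * i_
    b = (f2 * X3 / d1) * i_
    c = (f_ * X / d2) * i_
    d = (k_ * Y / d2) * i_
    return [X + a + b + c,     # equation e: 0 = X + a + b + c
            Y + d - 0.0001]    # equation g: 0.0001 = Y + d

sol = least_squares(residuals, x0=[1.0, 1.0])
print(sol.x)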
