In the image I wrote the diffusion creep equation with all its variables. I just need to understand how I can translate this into Python.
I was trying to write the first part of the equation, but I'm not sure how to include the last part.
Use the exp function from the math module:
import math

if __name__ == "__main__":
    foo = lambda A, d, m, n, E, P, V, R, T: 2 * pow(A, -1 / n) * pow(d, m / n) * \
        math.exp((E + P * V) / (n * R * T))
    print(foo(A=4.5, d=1, m=3, n=1, E=10.0 ** -15, P=1, V=6, R=8.314463, T=1623))
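If the one-liner becomes hard to read, the same formula can be written as a plain named function; this is just a rephrasing of the lambda above, using the same example values:

import math

def diffusion_creep(A, d, m, n, E, P, V, R, T):
    # Same expression as the lambda above, split out for readability
    return 2 * pow(A, -1 / n) * pow(d, m / n) * math.exp((E + P * V) / (n * R * T))

print(diffusion_creep(A=4.5, d=1, m=3, n=1, E=10.0 ** -15, P=1, V=6, R=8.314463, T=1623))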
I'm currently trying to simulate a PDE that includes a Brownian path (one of the terms says that, when going one time step dt further, the change is weighted by a normally distributed variable with mean 0 and variance dt).
For this I used the Fast Fourier Transform to get a system of ODEs, which I can solve much more easily (at least that's what I thought). This led me to the following code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
#Defining some parameters a, b, c, which are included in the PDE
a = 10
b = 1.5
c = 20
#Creating the mesh
L = 100
N = 100
dx = L/N
x = np.arange(0, L, dx)
dt = 0.01
t = np.linspace(0, 1, 100)
# Frequency for the Fourier Transformation
kappa = 2 * np.pi * np.fft.fftfreq(N, d=dx)
# Initial condition for function u and its Fast Fourier Transformation
u0 = np.zeros_like(x)
u0[int((L/4 - L/10)/dx):int((L/4 + L/10)/dx)] = 2.5
u0[int((3*L/4 - L/10)/dx):int((3*L/4 + L/10)/dx)] = 2.5
u0hat = np.fft.fft(u0)
u0hat_ri = np.concatenate((u0hat.real, u0hat.imag))
#Define the function describing the Transformation from the PDE to the system of ODEs
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j) * uhat_ri[N:]
    # Define the weighted change by the Brownian path B
    mean = [0] * len(uhat)
    diag = [0.1] * len(uhat)
    cov = np.diag(diag)
    B = np.random.multivariate_normal(mean, cov)
    d_uhat = -a**2 * (np.power(kappa, 2)) * uhat - c * (1j) * kappa * uhat + b * (1j) * kappa * uhat * B
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
#Solve the ODE with odeint
uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c))
uhat = uhat_ri[:, :N] + (1j) * uhat_ri[:, N:]
u = np.zeros_like(uhat)
#Inverse Transform the Solution
for k in range(len(t)):
    u[:, k] = np.fft.ifft(uhat[k, :])
u = u.real
This program works if I exclude the Brownian path B in func
def func(uhat_ri, t, kappa, a, b, c):
    uhat = uhat_ri[:N] + (1j) * uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2)) * uhat - c * (1j) * kappa * uhat + b * (1j) * kappa * uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri
But it takes a long time to execute when B is included, and it also tells me:
C:\Users\leo_h\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\odepack.py:247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
warnings.warn(warning_msg, ODEintWarning)
EDIT/ANSWER:
I solved the problem by moving the generation of the Brownian path out of func. I guess it was just too much for odeint to cope with (or it generated a new Brownian path for each t?):
mean = [0] * len(u0hat)
diag = [2] * len(u0hat)
cov = np.diag(diag)
B = np.random.multivariate_normal(mean, cov)

def func(uhat_ri, t, kappa, a, b, c, B):
    uhat = uhat_ri[:N] + (1j) * uhat_ri[N:]
    d_uhat = -a**2 * (np.power(kappa, 2)) * uhat - c * (1j) * kappa * uhat + b * B * (1j) * kappa * uhat
    d_uhat_ri = np.concatenate((d_uhat.real, d_uhat.imag))
    return d_uhat_ri

uhat_ri = odeint(func, u0hat_ri, t, args=(kappa, a, b, c, B))
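As an alternative to odeint, which assumes a smooth, deterministic right-hand side, one common way to let the noise change at every time step is a fixed-step Euler–Maruyama loop. The following is only a sketch reusing a, b, c, kappa, dt, N, t and u0hat from the code above, not the poster's exact setup:

# Euler-Maruyama sketch on the spectral system; reuses a, b, c, kappa, dt, N, t, u0hat
nsteps = len(t)
uhat = u0hat.copy()
snapshots = [np.fft.ifft(uhat).real]
for _ in range(nsteps):
    dB = np.sqrt(dt) * np.random.standard_normal(N)       # Brownian increment, variance dt
    drift = -a**2 * kappa**2 * uhat - c * 1j * kappa * uhat
    uhat = uhat + dt * drift + b * 1j * kappa * uhat * dB  # stochastic term uses dB, not dt
    snapshots.append(np.fft.ifft(uhat).real)
u_em = np.array(snapshots)                                 # each row is one time snapshot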
I'm doing the CS1301xII course through edX and I'm being asked to calculate Pokemon damage using one function to calculate the modifier, which I then need to use in another function for the damage calculation.
There are 9 parameters:
STAB, Type, Critical, Other, Random, Level, Attack, Defense, and Base.
My first function calculates a modifier used in the damage calculation. This is (STAB * Type * Critical * Other * Random) for reference.
def calculate_modifier(s, t, c, o, r):
    mod = s * t * c * o * r
My second function is to calculate overall damage. This is (((2 * Level + 10) / 250) * (Attack / Defense) * Base + 2) for reference.
def calculate_damage(l, a, d, b):
    dam = (((2 * l + 10) / 250) * (a / d) * b + 2)
How do I go about calling the calculate_modifier function within my calculate_damage function? Do I list all 9 of the parameters? Really struggling with how this should look.
The final calculate_damage formula should be dam * mod
I am on this course too, but I've done this question. This is what you should do:
dam = (((2 * l + 10) / 250) * (a / d) * b + 2) * calculate_modifier(STAB, Type, Critical, Other, Random)
If you try to calculate the modifier product directly inside the damage function (instead of calling calculate_modifier), the autograder disqualifies you.
You just return the mod value from the first function and pass it to the second; you don't have to pass all 9 parameters to the second function.
def calculate_modifier(s, t, c, o, r):
    mod = s * t * c * o * r
    return mod

def calculate_damage(l, a, d, b, mod):
    dam = (((2 * l + 10) / 250) * (a / d) * b + 2) * mod
    return dam

mod = calculate_modifier(s, t, c, o, r)
dam = calculate_damage(l, a, d, b, mod)
Or you can return a value from both functions and pass them to a third function to calculate the final value:
def calculate_modifier(s, t, c, o, r):
    mod = s * t * c * o * r
    return mod

def calculate_damage(l, a, d, b):
    dam = (((2 * l + 10) / 250) * (a / d) * b + 2)
    return dam

def calculate_total_damage(mod, dam):
    return dam * mod

mod = calculate_modifier(s, t, c, o, r)
dam = calculate_damage(l, a, d, b)
final = calculate_total_damage(mod, dam)
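For example, with the second variant and purely illustrative numbers (not taken from the course exercise):

mod = calculate_modifier(1.5, 2, 1, 1, 0.9)   # s, t, c, o, r (placeholder values)
dam = calculate_damage(20, 50, 40, 60)        # l, a, d, b (placeholder values)
print(calculate_total_damage(mod, dam))       # dam * mod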
I have this interesting problem where I want to calculate the sum over the element-wise product of three matrices, i.e. something of the form \sum_{i,j,k} p_{ijk} c_{ijk} f_{ijk}(x,y,z).
While calculating \mathbf{p}_{ijk} and c_{ijk} can be done a priori, my problem is with f_{ijk}(x,y,z). The elements of this matrix are multivariate polynomials which depend on the matrix indices, so numpy.vectorize cannot be applied trivially. My best bet at tackling the issue would be to treat (i,j,k) as additional variables, so that numpy.vectorize is then applied to a 6-dimensional instead of a 3-dimensional input. However, I am not sure if more efficient or alternative ways exist.
This is a simple way to implement that formula efficiently:
import numpy as np
np.random.seed(0)
l, m, n = 4, 5, 6
x, y, z = np.random.rand(3)
p = np.random.rand(l, m, n)
c = np.random.rand(l, m, n)
i, j, k = map(np.arange, (l, m, n))
xi = (x ** (l - i)) * (x ** l)
yj = (y ** (m - j)) * (y ** m)
zk = (z ** (n - k)) * (z ** n)
res = np.einsum('ijk,ijk,i,j,k->', p, c, xi, yj, zk)
print(res)
# 0.0007208482648476157
Or even slightly more compact:
import numpy as np
np.random.seed(0)
l, m, n = 4, 5, 6
x, y, z = np.random.rand(3)
p = np.random.rand(l, m, n)
c = np.random.rand(l, m, n)
t = map(lambda v, s: (v ** (s - np.arange(s))) * (v ** s), (x, y, z), (l, m, n))
res = np.einsum('ijk,ijk,i,j,k->', p, c, *t)
print(res)
# 0.0007208482648476157
Using np.einsum you minimize the need for intermediate arrays, so it should be faster than making f first (which you could get e.g. as f = np.einsum('i,j,k->ijk', xi, yj, zk)), multiplying p, c and f, and then summing the result.
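As a quick sanity check, the einsum contraction can be compared against building f explicitly first, using xi, yj, zk, p, c and res from the first snippet:

f = np.einsum('i,j,k->ijk', xi, yj, zk)   # explicit f with f[i, j, k] = xi[i] * yj[j] * zk[k]
res_explicit = np.sum(p * c * f)
print(np.allclose(res, res_explicit))     # True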
I'm reproducing Mathematica results using SymPy, and I'm new to the latter, so I might be doing things wrong. However, I noticed that some things that took a minute at most in Mathematica just take forever (read: did not finish after I started it an hour ago) in SymPy. That applies to both simplify() and solve(). Am I doing something wrong, or is that really the case?
I'll attach my solve() case:
import sympy as sp
from sympy import init_printing
init_printing()
p, r, c, p, y, Lambda = sp.symbols('p r c p y Lambda')
F = sp.Symbol('F')
eta1 = lambda p: 1/(1-sp.exp(-Lambda) * sp.exp(-Lambda)*(sp.exp(Lambda) - 1 - Lambda))
eta2 = lambda p: 1/(1-sp.exp(-Lambda)) * sp.exp(-Lambda)/(1-F) * (sp.exp(Lambda*(1- F)) - 1 - Lambda*(1-F))
eta = lambda p: 1 - eta1(p) + eta2(p)
etaOfR = sp.limit(eta(p), F, 1)
S = lambda p: eta(p)*y/p*(p-c)
SOfR = etaOfR*y/r*(r-c)
sp.solve(S(p)-SOfR, F)
The corresponding Mathematica code:
ClearAll[r, p, lambda, a, A, c, eta, f, y, constant1, constant2, eta, etaOfR]
constant1[lambda_] := Exp[-lambda]/(1 - Exp[-lambda]);
constant2[lambda_] := constant1[lambda]*(Exp[lambda] - 1 - lambda);
eta[lambda_, f_] := 1 - constant2[lambda] + constant1[lambda]*(Exp[lambda*(1 - f)] - 1 - lambda*(1 - f));
etaOfR[lambda_] := Limit[eta[lambda, f], f -> 1];
expression1[lambda_, f_] := y/p (p - c) eta[lambda, f] == y/r (r - c) etaOfR[lambda];
Solve[expression1[lambda, f], f] // FullSimplify
Output:
{{f -> (-(1 + lambda) p r + c (lambda p + r) + (c - p) r ProductLog[-E^(((-c lambda p + (c (-1 + lambda) + p) r)/((c - p) r)))])/(lambda (c - p) r)}}
The correct way to do it is:
from sympy import *
init_printing()
p, r, c, y, lam, f = symbols('p r c y lambda f')
constant1 = exp(-lam) / (1 - exp(-lam))
constant2 = constant1 * (exp(lam) - 1 - lam)
eta = 1 - constant2 + constant1 * (exp(lam * (1-f)) - 1 - lam * (1 - f))
etaOfR = limit(eta, f, 1)
expression1 = Eq(y / p * (p - c) * eta,
y / r * (r - c) * etaOfR)
solve(expression1, f)
You can also check the notebook here:
http://nbviewer.ipython.org/gist/jankoslavic/0ad7d5c2731d425dabb3
The result is equal to the one from Mathematica (see the last line) and SymPy's performance is comparable.
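Mathematica's ProductLog is called LambertW in SymPy, so the closed form returned by solve above should contain LambertW. Assuming the snippet above has been run, it can be inspected with:

sol = solve(expression1, f)
pprint(sol)   # expect an expression involving LambertW (SymPy's name for Mathematica's ProductLog)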
Now I face a problem when I use scipy.integrate.ode.
I want to use a spectral method (Fourier transform) to solve a PDE that includes dispersive and convection terms, such as
du/dt = A * d^3 u / dx^3 + C * du/dx
Then, via the Fourier transform, this PDE converts to a set of ODEs in complex space (uk is a complex vector):
duk/dt = (A * coeff^3 + C * coeff) * uk
coeff = (2 * pi * i * k) / L
k is the wavenumber (e.g. k = 0, 1, 2, 3, -4, -3, -2, -1),
i^2 = -1,
L is the length of the domain.
When I use r = ode(uODE).set_integrator('zvode', method='adams'), Python warns with something like:
c ZVODE-- At current T (=R1), MXSTEP (=I1) steps
taken on this call before reaching TOUT
In above message, I1 = 500
In above message, R1 = 0.2191432098050D+00
I feel it is because the time step I chose is too large; however, I cannot decrease the time step, as every step is time-consuming for my real problem. Is there any other way to resolve this?
Did you consider solving the ODEs symbolically? With SymPy you can type:
import sympy as sy
sy.init_printing() # use IPython for better results
from sympy.abc import A, C, c, x, t # variables
u = sy.Function('u')(x, t)
eq = sy.Eq(u.diff(t), c*u)
sl1 = sy.pde.pdsolve(eq, u)
print("The solution of:")
sy.pprint(eq)
print("was determined to be:")
sy.pprint(sl1)
print("")
print("Substituting the coefficient:")
k,L = sy.symbols("k L", real=True)
coeff = (2 * sy.pi * sy.I * k) / L
cc = (A * coeff**3 + C * coeff)
sl2 = sy.simplify(sl1.replace(c, cc))
sy.pprint(sl2)
gives the following output:
The solution of:
    ∂/∂t u(x, t) = c⋅u(x, t)
was determined to be:
    u(x, t) = F(x)⋅exp(c⋅t)

Substituting the coefficient:
    u(x, t) = F(x)⋅exp(-2⋅ⅈ⋅π⋅k⋅t⋅(4⋅π²⋅A⋅k² - C⋅L²)/L³)
Note that F(x) depends on your initial values of u(x,t=0), which you need to provide.
Use sl2.rhs.evalf(subs={...}) to substitute in numbers.
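Since each Fourier mode satisfies a linear ODE with a constant coefficient, the exponential solution above can also be evaluated numerically without any ODE time stepping, which sidesteps the MXSTEP issue entirely. A minimal NumPy sketch with placeholder values for A, C, L, N and the initial condition (assumptions, not taken from the question):

import numpy as np

# Placeholder parameters and initial condition (adjust to the real problem)
A, C = 1.0, 1.0
L, N = 2 * np.pi, 64
dx = L / N
x = np.arange(N) * dx
u0 = np.exp(-10 * (x - L / 2) ** 2)

# coeff = 2*pi*i*k/L for the integer wavenumbers k = 0, 1, ..., -1
coeff = 1j * 2 * np.pi * np.fft.fftfreq(N, d=dx)
lam = A * coeff**3 + C * coeff          # per-mode rate from duk/dt = lam * uk

u0hat = np.fft.fft(u0)

def u_at(t):
    # exact per-mode solution uk(t) = uk(0) * exp(lam * t), transformed back to real space
    return np.real(np.fft.ifft(u0hat * np.exp(lam * t)))

print(u_at(0.5)[:5])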