I am trying to optimize my code with numba.
I designed the code to contain a gl.py file which holds some arrays that will be used by main.py and by the functions called inside main().
The auxiliary.py looks like:
import numpy as np
from numba import jit, types
from cmath import sqrt, exp, sin
N_timesteps_imag = 100
N_epsilon_divs = 60
N_z_divs = 2000
K = N_z_divs # equal to the variable N_z_divs
delta_epsilon = 0.1
delta_z = 0.1
lambd = 1.5
z_max = (N_z_divs/2) * delta_z
epsilon_range = np.linspace(0.0, N_epsilon_divs*delta_epsilon, N_epsilon_divs+1)
z_range = np.linspace(-z_max, z_max, N_z_divs+1)
psi_ground = np.zeros((N_z_divs+1, N_epsilon_divs+1, N_timesteps_imag+1), dtype=types.complex128)
@jit(nopython=True)
def pop_psiground_t0():
    for c1 in range(1, psi_ground.shape[0]-1):
        for c2 in range(1, psi_ground.shape[1]-1):
            zed = (c1 - N_z_divs/2) * delta_z
            epsi = c2 * delta_epsilon
            psi_ground[c1, c2, 0] = sqrt(3) * epsi * exp(-sqrt(epsi**(2*lambd) + zed**2))

pop_psiground_t0()
The main.py looks like (MWE):
import numpy as np
import auxiliary
def main():
    print(auxiliary.psi_ground[1000, 40, 0])  # shall NOT be 0 + 0j !!!

if __name__ == '__main__':
    main()
Irrespective of what I put for the dtype keyword argument in the declaration of psi_ground inside auxiliary.py (numba.types.complex128, np.complex128, np.clongdouble), nothing works.
In particular, for np.complex128, I get the following error when running python3 main.py:
No implementation of function Function(<built-in function setitem>) found for signature:
>>> setitem(readonly array(complex128, 3d, C), Tuple(int64, int64, Literal[int](0)), complex128)
There are 16 candidate implementations:
- Of which 14 did not match due to:
Overload of function 'setitem': File: <numerous>: Line N/A.
With argument(s): '(readonly array(complex128, 3d, C), UniTuple(int64 x 3), complex128)':
No match.
- Of which 2 did not match due to:
Overload in function 'SetItemBuffer.generic': File: numba/core/typing/arraydecl.py: Line 171.
With argument(s): '(readonly array(complex128, 3d, C), UniTuple(int64 x 3), complex128)':
Rejected as the implementation raised a specific error:
TypeError: Cannot modify value of type readonly array(complex128, 3d, C)
raised from /home/velenos14/.local/lib/python3.8/site-packages/numba/core/typing/arraydecl.py:177
During: typing of setitem at /mnt/c/Users/iusti/Desktop/test_python/auxiliary.py (45)
File "auxiliary.py", line 45:
def pop_psiground_t0():
<source elided>
epsi = c2 * delta_epsilon
psi_ground[c1, c2, 0] = sqrt(3) * epsi * exp(-sqrt(epsi**(2*lambd) + zed**2))
How can I proceed with this? I tried to follow what is written here: numba TypingError with complex numpy array and native data types
And yes, I need the psi_ground array to be of complex type, with a lot of precision, even if initially it is populated by real numbers. Later, in main(), it will get re-populated with complex numbers. Thank you!
The line
psi_ground = np.zeros((N_z_divs+1, N_epsilon_divs+1, N_timesteps_imag+1), dtype=types.complex128)
must be defined inside the numba function: in nopython mode, global arrays are treated as compile-time constants and are therefore typed as readonly, and the error clearly states that numba is unable to change the values of the psi_ground array.
Below is the modified code:
import numpy as np
from numba import jit, types
from cmath import sqrt, exp, sin
N_timesteps_imag = 100
N_epsilon_divs = 60
N_z_divs = 2000
K = N_z_divs # equal to the variable N_z_divs
delta_epsilon = 0.1
delta_z = 0.1
lambd = 1.5
z_max = (N_z_divs/2) * delta_z
epsilon_range = np.linspace(0.0, N_epsilon_divs*delta_epsilon, N_epsilon_divs+1)
z_range = np.linspace(-z_max, z_max, N_z_divs+1)
@jit(nopython=True)
def pop_psiground_t0():
    psi_ground = np.zeros((N_z_divs+1, N_epsilon_divs+1, N_timesteps_imag+1), dtype=types.complex128)
    for c1 in range(1, psi_ground.shape[0]-1):
        for c2 in range(1, psi_ground.shape[1]-1):
            zed = (c1 - N_z_divs/2) * delta_z
            epsi = c2 * delta_epsilon
            psi_ground[c1, c2, 0] = sqrt(3) * epsi * exp(-sqrt(epsi**(2*lambd) + zed**2))

pop_psiground_t0()
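Note that with this change psi_ground is local to pop_psiground_t0(), so main.py can no longer see auxiliary.psi_ground. One way to keep the original access pattern (a minimal sketch of my own, not part of the answer above) is to return the array from the jitted function and rebind it at module level:

@jit(nopython=True)
def pop_psiground_t0():
    # allocate inside the jitted function so numba is allowed to write to it
    psi_ground = np.zeros((N_z_divs+1, N_epsilon_divs+1, N_timesteps_imag+1), dtype=np.complex128)
    for c1 in range(1, psi_ground.shape[0]-1):
        for c2 in range(1, psi_ground.shape[1]-1):
            zed = (c1 - N_z_divs/2) * delta_z
            epsi = c2 * delta_epsilon
            psi_ground[c1, c2, 0] = sqrt(3) * epsi * exp(-sqrt(epsi**(2*lambd) + zed**2))
    return psi_ground

# rebind at module level so main.py can still read auxiliary.psi_ground
psi_ground = pop_psiground_t0()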
Related
With the approaches suggested and the help I received from Stack Overflow users, I have found half of the solution, and now I need to complete it.
Using SymPy I produced my function parametrically, and it became 100 different items similar to 0.03149536*exp(-4.56*s)*sin(2.33*s) and 0.03446408*exp(-4.56*s)*sin(2.33*s). By using f = lambdify(s, f) I converted it to a NumPy function, and I need to integrate it over the different s values that I already have. The upper limit of the integral is a constant value and the lower limit must be supplied through a for loop.
When I try to do that, I get the error shown below. The code that I wrote follows; to make this a reproducible question I have used generated data.
TypeError: cannot determine truth value of Relational
from sympy import exp, sin, symbols, integrate, lambdify
import pandas as pd
import matplotlib.pyplot as plt
from scipy import integrate
import numpy as np
S = np.linspace(0,1000,100)
C = np.linspace(0,1,100)
s, t = symbols('s t')
lanr = -4.56
lani = -2.33
ID = S[-1]
result=[]
f = C * exp(lanr * s) * sin (lani * s)
f = lambdify(s,f)
#vff = np.vectorize(f)
for i in S:
    I = integrate.quad(f,s,(i,ID))
    result.append(I)
print(result)
EDIT
I tried to do the same without using SymPy, with just SciPy, and wrote the code below, but again I could not solve the problem.
from scipy.integrate import quad
import numpy as np
lanr = -6.55
lani = -4.22
def integrand(c,s):
    return c * np.exp(lanr * s) * np.sin (lani * s)

def self_integrate(c,s):
    return quad(integrand,s,1003,1200)

import pandas as pd
file = pd.read_csv('1-100.csv',sep="\s+",header=None)
s = np.linspace(0,1000,100)
c = np.linspace(0,1,100)
ID = s[-1]
for i in s:
    I = self_integrate(integrand,c,s)
    print(I)
and I got this TypeError: self_integrate() takes 2 positional arguments but 3 were given
Assuming you want to integrate over s, and use c as a fixed parameter (for a given quad call), define:
In [198]: lanr = 1
...: lani = 2
...: def integrand(s, c):
     ...:     return c * np.exp(lanr * s) * np.sin(lani * s)
...:
test it by itself:
In [199]: integrand(10,1.23)
Out[199]: 24734.0175253505
and test it in quad:
In [200]: quad(integrand, 0, 10, args=(1.23,))
Out[200]: (524.9015616747192, 3.381048596651226e-08)
doing the same for a range of c values:
In [201]: alist = []
...: for c in range(0,10):
     ...:     x, y = quad(integrand, 0, 10, args=(c,))
     ...:     alist.append(x)
...:
In [202]: alist
Out[202]:
[0.0,
426.74923713391905,
853.4984742678381,
1280.2477114017531,
1706.9969485356762,
2133.7461856695877,
2560.4954228035062,
2987.244659937424,
3413.9938970713524,
3840.743134205258]
From the quad docs:
quad(func, a, b, args=(),...)
func : {function, scipy.LowLevelCallable}
A Python function or method to integrate. If `func` takes many
arguments, it is integrated along the axis corresponding to the
first argument.
and an example:
>>> f = lambda x,a : a*x
>>> y, err = integrate.quad(f, 0, 1, args=(1,))
The docs are a bit long, but the basics should be straightforward.
I was tempted to say you were stuck on the sympy calling pattern, but the second argument for that is either the integration symbol, or a tuple.
>>> integrate(log(x), (x, 1, a))
a*log(a) - a + 1
So I'm puzzled as to why you were stuck on using
quad(integrand,s,1003,1200)
The s there, whether a sympy variable or a linspace array, does not make sense: the second and third arguments to quad must be the numeric integration limits.
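For completeness, here is a minimal sketch (my addition, reusing the lanr, lani, S, C and ID values from the question, and assuming each C[i] pairs with the lower limit S[i]) of how the original integrals could be evaluated with quad:

from scipy.integrate import quad
import numpy as np

lanr = -4.56
lani = -2.33
S = np.linspace(0, 1000, 100)
C = np.linspace(0, 1, 100)
ID = S[-1]          # fixed upper limit

def integrand(s, c):
    return c * np.exp(lanr * s) * np.sin(lani * s)

# integrate over s from each lower limit S[i] up to ID, with C[i] as the parameter
result = [quad(integrand, s_i, ID, args=(c_i,))[0] for s_i, c_i in zip(S, C)]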
I'm trying to solve the following equation: d²i/dt² + (R'(i)/L) di/dt + (1/(LC)) i(t) = (1/L) dE/dt as a set of coupled first-order differential equations:
di/dt = k
dk/dt = (1/L) dE/dt - (R'(i)/L) k - (1/(LC)) i(t)
Here is the code I'm using:
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from scipy.integrate import odeint
#Define model: x = [i , k]
def RLC(x , t):
    i = sp.Symbol('i')
    t = sp.Symbol('t')
    #Data:
    E = sp.ln(t + 1)
    dE_dt = E.diff(t)
    R1 = 1000      #1 kOhm
    R2 = 100       #100 Ohm
    R = R1 * i + R2 * i**3
    dR_di = R.diff(i)
    i = x[0]
    k = x[1]
    L = 10e-3      #10 mHy
    C = 1.56e-6    #1.56 uF
    #Model
    di_dt = k
    dk_dt = 1/L * dE_dt - dR_di/L * k - 1/(L*C) * i
    dx_dt = np.array([di_dt , dk_dt])
    return dx_dt
#init cond:
x0 = np.array([0 , 0])
#time points:
time = np.linspace(0, 30, 1000)
#solve ODE:
x = odeint(RLC, x0, time)
i = x[: , 0]
However, I get the following error: TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
So I don't know whether sympy and odeint just don't work well together, or maybe it is a problem because I redefined t as sp.Symbol?
When you differentiate a function, you get a function back. So you need to evaluate it at a point in order to get a number. To evaluate a sympy expression, you could use .subs() but I prefer .replace() which feels more powerful (at least for me).
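As a small illustration of that evaluation step (my own sketch, reusing the E from your code):

import sympy as sp

t_symbol = sp.Symbol('t')
E = sp.ln(t_symbol + 1)
dE_dt_expr = E.diff(t_symbol)            # still symbolic: 1/(t + 1)
value = dE_dt_expr.subs(t_symbol, 2.0)   # or dE_dt_expr.replace(t_symbol, 2.0)
print(float(value))                      # 0.333...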
You must try and make every single variable have its own name in order to avoid confusion. For example, you replace the float input t with a sympy Symbol from the very beginning, thus losing the value of t. The variables x and i are also repeated in the outer scope which is not good practice if they mean different things.
The following should avoid confusion and hopefully produce something that you were expecting:
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Define model: x = [i , k]
def RLC(x, t):
    # define constants first
    i = x[0]
    k = x[1]
    L = 10e-3       # 10 mHy
    C = 1.56e-6     # 1.56 uF
    R1 = 1000       # 1 kOhm
    R2 = 100        # 100 Ohm
    # define symbols (used to find derivatives)
    i_symbol = sp.Symbol('i')
    t_symbol = sp.Symbol('t')
    # Data (differentiate and evaluate)
    E = sp.ln(t_symbol + 1)
    dE_dt = E.diff(t_symbol).replace(t_symbol, t)
    R = R1 * i_symbol + R2 * i_symbol ** 3
    dR_di = R.diff(i_symbol).replace(i_symbol, i)
    # nothing should contain symbols from here onwards
    # variables can however contain sympy expressions
    # Model (convert sympy expressions to floats)
    di_dt = float(k)
    dk_dt = float(1 / L * dE_dt - dR_di / L * k - 1 / (L * C) * i)
    dx_dt = np.array([di_dt, dk_dt])
    return dx_dt
# init cond:
x0 = np.array([0, 0])
# time points:
time = np.linspace(0, 30, 1000)
# solve ODE:
solution = odeint(RLC, x0, time)
result = solution[:, 0]
print(result)
Just something to note: the value i = x[0] seemed to sit very close to 0 throughout each iteration. This means dR_di stayed basically at 1000 the whole time. I'm not familiar with odeint or your specific ODE, but hopefully this phenomenon is expected and isn't a problem.
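Not part of the original answer, but since matplotlib was imported above and never used, one quick way to visualize the current from the solution would be, for example:

plt.plot(time, result)   # current i(t) over the time grid defined above
plt.xlabel('t')
plt.ylabel('i(t)')
plt.show()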
I have data, collected in a CSV, of every output of the friction model. The model imagines the contact between two surfaces as one-dimensional bristles that react to being bent like springs. The force of friction is modeled as:
FL(V,Z) = sig0*Z + sig1*DZ/Dt + sig2*V
where V is the velocity of the surface, Z is the deflection of the bristles, and DZ/Dt is the rate of deflection, equal to:
DZ/Dt = V + abs(V)*Z/(Fc + (Fs-Fc)*exp(-(V^2/Vs^2)))
      = V + abs(V)*Z/G(V)
      = V + H(V)*Z
where Fc is the friction of the object in motion (a constant), Fs is the force required to get the object into motion (a constant > Fc), and Vs is the total speed required to transition between the domains (a constant I've experimentally derived). The velocity and position of the block are provided in the CSV, as well as the force of friction, all with respect to time. I have also created an easily integrable (trigonometric) approximation of the velocity as a function of time.
On to the problem: the code throws a fit with the way I'm trying to pass lists into the functions (I think).
The function that passes the parameters SEEMS to work (it is taken from a different file that simply plots the data); the problem appears when I try to numerically integrate DZ/Dt and fit the sig parameters to the imported friction data.
What I imported
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from scipy import optimize
import pylab as pp
from math import sin, pi, exp, fabs, pow
Parameters
Fc=2.7 #N
Fs=8.2 #N
Vs=.34 #mm/s
Initial conditions
ITime=Time[0]
Iz=[0,0,0]
Building the friction model
def velocity(time):
    V=-13/2*1/60*pi*sin(1/60*pi*time+pi)
    return V

def g(v,vs,fc,fs,sig0):
    G=(1/sig0)*(fc+(fs-fc)*exp(-pow(v,2)/pow(vs,2)))
    return G

def h(v,vg):
    H=fabs(v)/vg
    return H

def findz(z, time, sig):
    Vx=velocity(time)
    VG=g(Vx,Vs,Fc,Fs,sig)
    HVx=h(Vx,VG)
    dzdt=Vx+HVx*z
    return dzdt

def friction(time,sig,iz):
    dz=lambda z,time: findz(z,time,sig)
    z=odeint(dz,iz,time)
    return sig[0]*z+sig[1]*findz(z,time,sig[0])+sig[2]*velocity(Time)
This should return the difference between the constructed function and the data, and yield a list containing the optimized parameters:
def residual(sig):
    return Ff-friction(Time,sig,Iz)

SigG=[4,20,1]
SigVal=optimize.leastsq(residual,SigG)
print "parameter values are ",SigVal
This returns
line 56, in velocity
V=-13/2*1/60*pi*sin(1/60*pi*time+pi)
TypeError: can't multiply sequence by non-int of type 'float'
Is this to do with the fact that I am passing lists?
As I mentioned in my comment, velocity() is the cause of the error, most probably because it expects a single time value, whereas you pass a whole list/array (with multiple values) to velocity() when you call it in friction().
Using some chosen values, and after shortening your code and passing ITime instead of Time, the code runs correctly, but it is left to you to judge whether this is analytically what you wanted to achieve. Below is my code:
import numpy as np
from scipy import optimize
from scipy.integrate import odeint
from math import sin, pi, exp, fabs
# Parameters
Fc = 2.7 #N
Fs = 8.2 #N
Vs = .34 #mm/s
# define test values for Ff and Time
Ff = np.array([100, 50, 50])
Time = np.array([10, 20, 30])
# Initial_conditions
ITime = Time[0]
Iz = np.array([0, 0, 0])
# Building the friction model
V = lambda t: (-13 / 2) * ( 1 / (60 * pi * sin(1 / 60 * pi * t + pi)))
G = lambda v, vs, fc, fs, sig0: (1 / sig0) * (fc + (fs - fc) * exp(-v**2 / vs**2))
H = lambda v, vg: fabs(v) / vg
dzdt = lambda z, t, sig: V(t) + H(V(t), G(V(t), Vs, Fc, Fs, sig)) * z
def friction(t, sig, iz):
    dz = lambda z, t: dzdt(z, t, sig)
    z = odeint(dz, iz, t)
    return sig[0]*z + sig[1]*dzdt(z, t, sig[0]) + sig[2]*V(t)
# Should return the difference between the Constructed function and the data
# and yield a list containing the optimized parameters
def residual(sig):
    return Ff-friction(ITime, sig, Iz)[0]
SigG = np.array([4, 20, 1])
SigVal = optimize.leastsq(residual, SigG, full_output = False)
print("parameter values are ", SigVal )
Output:
parameter values are (array([ 4. , 3251.47271228, -2284.82881887]), 1)
I keep getting the error only length-1 arrays can be converted to Python scalars. Most people suggest that numpy is sometimes not compatible with other existing math functions, but I changed every math function to a np function.
The error states:
Traceback (most recent call last):
  File "/Users/jimmy/Documents/2.py", line 20, in <module>
    eu = mc_simulation(89,102,0.5,0.03,0.3,1000)
  File "/Users/jimmy/Documents/2.py", line 12, in mc_simulation
    ST = s0 * exp((r - 0.5 * sigma ** 2) * T + sigma * a * z)
TypeError: only length-1 arrays can be converted to Python scalars
My code:
from numpy import *
import numpy as np
from math import exp
def mc_simulation(s0, K, T, r, sigma, no_t):
    random.seed(1000)
    z = random.standard_normal(no_t)
    ST = s0 * exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = maximum(ST - K, 0)
    eu_call = exp(-r * T) * sum(payoff) / no_t
    return eu_call

eu = mc_simulation(89,102,0.5,0.03,0.3,1000)
You don't need math here. Use numpy.exp. Furthermore, consider getting into the habit of not using the * operator with imports.
import numpy as np
np.random.seed(1000)
def mc_simulation(s0, K, T, r, sigma, no_t):
    z = np.random.standard_normal(no_t)
    ST = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0)
    eu_call = np.exp(-r * T) * np.sum(payoff) / no_t
    return eu_call

print(mc_simulation(89,102,0.5,0.03,0.3,1000))
3.4054951916465099
To your comment of "why shouldn't I use the * operator": there are a ton of good discussions on why this can create trouble. But here is what the official documentation has to say on that: when you use from numpy import *:
This imports all names except those beginning with an underscore (_).
In most cases Python programmers do not use this facility since it
introduces an unknown set of names into the interpreter, possibly
hiding some things you have already defined.
Your own example illustrates that. If you were to use:
from numpy import *
from math import *
Both have an exp function that gets imported into the namespace as exp. Python might then have trouble knowing which exp you want to use and, as you saw here, they are quite different. The same applies if you have already defined an exp function yourself, or any other function that shares a name with any in those two packages.
In general, be wary of any tutorials you run across that use from x import * consistently.
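For instance, a small sketch of the collision in your own code (the exact wording of the error message varies with the NumPy/Python version):

import numpy as np
from math import exp

z = np.random.standard_normal(5)
np.exp(z)   # works: applies exp elementwise to the array
exp(z)      # TypeError: only length-1 arrays can be converted to Python scalars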
NumbaPro's @vectorize decorator seems like a neat way to utilize multicore processors for numeric computations. Unfortunately, the following fairly minimal example yields an error:
import numpy as np
from scipy.integrate import odeint
from numbapro import vectorize, float64, int64, jit
@vectorize([float64[:](float64[:], float64, float64, int64, float64, float64[:], float64)], target='parallel')
def heat_equation(x, t, a, p, h, dxdt, pi):
    for i in xrange(p-1):
        dxdt[i] = a * (x[i-1] - 2 * x[i] + x[i+1]) / h / h
    dxdt[0] = 2*pi*np.cos(2*pi*t)
    dxdt[p-1] = 0
    return dxdt
if __name__ == '__main__':
    p = 200
    h = 1. / (p-1)
    a = 0.125
    x = np.linspace(0, 1, p)
    y0 = np.zeros(p)
    dxdt = np.zeros(p, dtype=np.float64)
    pi = np.pi
    for i in xrange(p):
        y0[i] = 0
    timeVector = np.linspace(0, 10, 100)
    solVector = odeint(heat_equation, y0, timeVector, args=(a, p, h, dxdt, pi))
    print solVector[-1, p/2]
The code above works just fine using the @jit decorator, but trying @vectorize gives the following error:
ValueError: format number 1 of "array(float64, 1d, A)" is not recognized
Apparently, there is an issue with the decorator parameters, but the type signature looks correct to me. Are there some additional restrictions I'm not abiding by?
Edit: modified the code to avoid use of numpy.zeros and numpy.pi within the decorated function as per Bakuriu's helpful comment below and adjusted the error received accordingly.
Code decorated with @jit usually cannot be directly ported to @vectorize. @vectorize turns the decorated function into a NumPy ufunc core (http://docs.scipy.org/doc/numpy/reference/ufuncs.html). An additional restriction of @vectorize is that all arguments to the core function (the function being decorated) must be scalars. Use @guvectorize if you need array arguments. See http://numba.pydata.org/numba-doc/dev/ufuncs.html#generalized-ufuncs for examples.
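A minimal sketch of what the @guvectorize version might look like (my own illustration, not taken from the linked docs; it assumes you only want to express the stencil update as a generalized ufunc, with a and h passed as scalars and dxdt declared as the output array):

import numpy as np
from numba import guvectorize, float64

# gufunc signature: x is a 1-d array of length n, t, a, h are scalars,
# and dxdt is a 1-d output array of the same length n
@guvectorize([(float64[:], float64, float64, float64, float64[:])],
             '(n),(),(),()->(n)')
def heat_equation_gu(x, t, a, h, dxdt):
    p = x.shape[0]
    for i in range(1, p - 1):
        dxdt[i] = a * (x[i-1] - 2 * x[i] + x[i+1]) / (h * h)
    dxdt[0] = 2 * np.pi * np.cos(2 * np.pi * t)
    dxdt[p-1] = 0.0

# usage with odeint: bind the extra parameters in a small wrapper, e.g.
# solVector = odeint(lambda y, t: heat_equation_gu(y, t, a, h), y0, timeVector)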