I want to use a lambda as an input for a class function named Integrator. Inside the Integrator class, the object should be called based on the current state. I do not know how to introduce these objects to the Integrator class.
Could you please tell me how I can call a created object inside another class?
Here is the main simulation program:
# create LJ force object
lj_object = LennardJones(self.sigma, self.epsilon, self.compmethod, self.r_cut, self.box_len)
# create spring force object
sp_object = InterMolecularForce(self.oh_len, self.k_b, self.tet_eq, self.k_tet)
# create Integrator object
integrator_object = Integrator(O_mass, H_mass)
for i in range(grid.shape[0] - 1):
    timespan = (grid[i], grid[i+1])
    lj_force = lj_object(new_postate)
    sp_force = sp_object(new_postate)
    new_postate[i+1], new_velocity[i+1] = integrator_object(new_postate, new_velocity, lambda new_postate: lj_object(new_postate) + sp_object(new_postate), timespan)
return new_postate, new_velocity
The integrator is:
# calculate half step momenta
momenta_half_step = diag_mass * velocity + (force * (timespan[1] - timespan[0]) / 2)
position_full_step = posate + (timespan[1] - timespan[0]) * np.dot(inv(mass_matrix), momenta_half_step)
# calculate forces
lj_force = lj_object(position_full_step)
spring_force = sp_object(position_full_step)
force = lj_force + spring_force
momenta_full_step = momenta_half_step + (timespan[1] - timespan[0]) * force / 2
I have a strong feeling that I have missed the point here. But without attempting any of the complex applied mathematics your code is doing, here is a simple, bare-bones approach to defining a function with a lambda expression and passing it to a class to use.
class Operation:
    def __init__(self, operator):
        self.operator = operator

    def compute(self, *operands):
        return self.operator(operands[0], operands[1])
>>> addition=Operation(operator=lambda a,b: a+b)
>>> multiplication=Operation(operator=lambda a,b: a*b)
>>> addition.compute(3,5)
8
>>> multiplication.compute(3,5)
15
This is deliberately simplified and contrived: it does no type checking, it neither handles nor rejects operators that are not dyadic, and all in all it is not very useful. Its only purpose is to show how to call the function that was passed into the class, which seems to be the fundamental point of your question.
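If it helps, mapped onto the Integrator from the question, the same pattern might look roughly like the sketch below. This is only an illustration of where the passed-in callable gets called: the constructor just stores the masses, the masses are taken as 1, and the update is a bare velocity-Verlet step without the mass matrices from your integrator.
class Integrator:
    def __init__(self, O_mass, H_mass):
        self.O_mass = O_mass
        self.H_mass = H_mass

    def __call__(self, position, velocity, force_func, timespan):
        dt = timespan[1] - timespan[0]
        force = force_func(position)              # call the lambda that was passed in
        velocity_half = velocity + 0.5 * dt * force
        position_new = position + dt * velocity_half
        force_new = force_func(position_new)      # call it again at the updated positions
        velocity_new = velocity_half + 0.5 * dt * force_new
        return position_new, velocity_new
The call in the loop then stays exactly as in your snippet, with lambda new_postate: lj_object(new_postate) + sp_object(new_postate) supplied as force_func.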
I have some code in which I need the solar radiance. To calculate it I use the Planck function, which I have defined as:
def planck_W(self, x, t):
    return (2*self.h*self.c**2/x**5) / (np.exp(self.h*self.c / (self.k*x*t)) - 1)
Where x is the wavelength and t is the temperature, and the output is the radiance value.
However, I want to be able to also use values coming from a CSV with different solar spectra, which already provides radiance values for different wavelengths. The structure of the CSV is: name of the spectrum used, _xx (wavelength), _yy (radiance).
self.spectra_list = pd.read_csv('solar.csv', converters={'_xx': literal_eval, '_yy': literal_eval})
def planck_W(self):
    self.spectra_list = pd.read_csv('solar.csv', converters={'_xx': literal_eval, '_yy': literal_eval})
    return interp1d(np.array(self._xx)*1e-9,
                    np.array(self._yy),
                    kind='linear')
Later I need to use this curve for a calculation over a wavelength range given by another variable; it starts with:
n0 = simpson((planck_W(s.wavelength)...
and I get the error:
planck_W() takes 1 positional argument but 2 were given
I'm kinda new to programming and don't have much idea what I'm doing. How do I manage to make it take the CSV values?
def planck_W(self):
This is the function signature, which expects only self. But while calling the function, s.wavelength is supplied as well.
This causes the error takes 1 positional argument but 2 were given.
self is not an argument you supply explicitly in your case. Since you did not paste the whole code, I'm just guessing that the planck_W() function is part of a bigger picture which does indeed belong to a class.
If this is the case, your function call should look like this:
n0 = simpson(planck_W())
The wavelength value you put in your call is not used anywhere.
Otherwise, if you need that value, you have to add a wavelength parameter like this:
def planck_W(self, wavelength):
and then use it inside the function.
I hope this explains the situation.
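Putting the fix together, the corrected method could look roughly like the sketch below. The wrapper class name SolarSpectrum, the csv_path parameter, and taking the first row of the CSV are assumptions, since the question does not show how _xx and _yy become attributes.
from ast import literal_eval
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d

class SolarSpectrum:                      # hypothetical wrapper class
    def __init__(self, csv_path='solar.csv'):
        df = pd.read_csv(csv_path,
                         converters={'_xx': literal_eval, '_yy': literal_eval})
        # assumption: each row holds lists of wavelengths/radiances;
        # here we simply take the first spectrum in the file
        self._xx = df['_xx'].iloc[0]
        self._yy = df['_yy'].iloc[0]

    def planck_W(self, wavelength):
        # interpolate the tabulated spectrum and evaluate it at `wavelength`
        curve = interp1d(np.array(self._xx) * 1e-9,
                         np.array(self._yy),
                         kind='linear')
        return curve(wavelength)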
You should understand the difference between a function and a method.
Function
A function is a piece of code that may or may not take arguments/keywords and performs some actions.
The main reason for a function to exist is usually to avoid repeating the same code.
Example:
Calculate the sine of a given angle (in radians). See: https://en.wikipedia.org/wiki/Sine#Series_definition
angle = 3.1416

factorial3 = 1
for i in range(1, 4):
    factorial3 *= i

factorial5 = 1
for i in range(1, 6):
    factorial5 *= i

factorial7 = 1
for i in range(1, 8):
    factorial7 *= i

sine = angle - ((angle * angle * angle) / factorial3) + ((angle * angle * angle * angle * angle) / factorial5) - \
       ((angle * angle * angle * angle * angle * angle * angle) / factorial7)
We can see the repeating pattern here.
The factorial and power are used multiple times. Those can be functions:
def factorial(n):
    result = 1
    for i in range(1, n+1):
        result *= i
    return result

def power(base, pow):
    result = 1
    for _ in range(pow):
        result *= base
    return result
sine2 = angle - (power(angle, 3) / factorial(3)) + (power(angle, 5) / factorial(5)) - (power(angle, 7) / factorial(7))
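A quick sanity check of these helpers (expected values in the comments):
print(factorial(5))   # 120
print(power(2, 10))   # 1024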
Method
A method is a function of a class.
We can create an object from a class and ask questions of the object (using methods and attributes) and get results.
When you are calling a method of a class, you use namespacing like this:
object_from_class = TheClass()
object_from_class.the_method(*possible_args)
Now you are referring to the_method of object_from_class.
The question is: what if you want to refer to a method of the object from inside the object?
Let's say we have a class which does trigonometric operations. We can have sin, cos, tan, etc.
You can see that when we try to calculate tan we do not need a whole new method. We can use sin/cos. Now I need to refer to the other methods inside the object.
class Trigonometry:
    def sin(self, angle):
        return some_thing

    def cos(self, angle):
        return some_thing_else

    def tan(self, angle):
        return self.sin(angle) / self.cos(angle)
self.sin is the equivalent of object_from_class.the_method when you are inside the object.
Note: some languages, such as JavaScript, use this instead of self.
Now, in your case, you are calling a method of an object, def planck_W(self):, which takes no arguments (technically it takes one argument, but you should provide none), and you are passing some. That's why Python is complaining about the number of arguments.
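Here is a minimal, self-contained reproduction of that mismatch (the class and the numbers are made up for illustration):
class Demo:
    def planck_W(self):          # accepts only self
        return 42

d = Demo()
print(d.planck_W())              # fine: Python passes `d` as self
try:
    d.planck_W(500e-9)           # Python passes `d` AND 500e-9 -> 2 arguments
except TypeError as err:
    print(err)                   # -> takes 1 positional argument but 2 were given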
My question is if there's a way to take some values in a function that are not
integrated in odeint.
Exemple: if I have a derivative dy(x)/dt = A*x+ln(x) and before to get this equation I computed A throught of a intermediate equation like A = B*D . I would like to take the A's value during the process.
More detailed (example only):
def func(y, t):
    K = y[0]
    B = 3
    A = cos(t**2) + B
    dydt = A*t + ln(t)
    return [dydt]
Can I retrieve A's values from the function?
In response to Josh Karpel:
The code is like this:
def Reaction(state, t):
    # integrated state
    p = state[0]
    T = state[1]
    # f1 determines the enthalpy of the system
    enthalpy = f1(T, p)
    # f2 determines the specific volume of the system
    specific_volume = f2(T, p)
    # f3 determines the heat released by reactions
    heat_release = f3(T, p, t)
    # derivatives
    dpdt = f(T, p, enthalpy, specific_volume, heat_release)
    dTdt = f(T, p, enthalpy, specific_volume, heat_release)
    return [dpdt, dTdt]
The real code is bigger than that. But I would like to know if there is a way to store the values of f1 (enthalpy), f2 (specific volume), and f3 (heat release) as a vector or tuple during odeint's solution process, with the same size as p and T.
It's not entirely clear what you want, but it sounds like you need to pass another value to the function you're integrating over. There are two options I can think of; both are sketched after the list:
1. scipy.integrate.odeint takes an args argument which contains extra arguments to be passed to the integrand function, which could then have signature y(t, A).
2. You could use functools.partial to construct a new function which has the argument A for the integrand function y(t, A) already set.
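A minimal sketch of both options, using a made-up integrand dy/dt = A*t where A is the extra value (note that odeint calls the integrand as func(y, t, ...) by default):
import functools
import numpy as np
from scipy.integrate import odeint

def dydt(y, t, A):
    return A * t

t = np.linspace(0.0, 2.0, 50)
y0 = [0.0]

# option 1: pass A through odeint's args parameter
sol1 = odeint(dydt, y0, t, args=(3.0,))

# option 2: bake A in with functools.partial so the integrand only takes (y, t)
sol2 = odeint(functools.partial(dydt, A=3.0), y0, t)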
To preface this question, I understand that it could be done better. But this is a question in a class of mine and I must approach it this way. We cannot use any built-in functions or packages.
I need to write a function to approximate the numerical value of the second derivative of a given function using finite differences. The formula we are using is below.
2nd Derivative Formula: f''(x) ≈ (f(x+h) - 2f(x) + f(x-h)) / h^2 (I lost the login info to my old account so pardon my lack of points and not being able to include images).
My question is this:
I don't understand how to make the Python function accept the input function it is supposed to differentiate. If someone puts in the input 2nd_deriv(2x**2 + 4, 6), I don't understand how to evaluate 2x^2 + 4 at 6.
If this is unclear, let me know and I can try again to describe. Python is new to me so I am just getting my feet wet.
Thanks
You can pass the function like any other "variable":
def f(x):
    return 2*x*x + 4

def d2(fn, x0, h):
    return (fn(x0+h) - 2*fn(x0) + fn(x0-h))/(h*h)

print(d2(f, 6, 0.1))
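# prints approximately 4.0: the exact second derivative of 2*x*x + 4 is 4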
You can't pass a literal expression; you need a function (or a lambda).
def d2(f, x0, h=1e-5):
    # note: h must not be too small, or floating-point cancellation ruins the result
    func = f
    if isinstance(f, str):
        # quite insecure, use only with controlled input
        func = eval("lambda x: %s" % (f,))
    return (func(x0+h) - 2*func(x0) + func(x0-h))/(h*h)
Then, to use it:
def g(x):
    return 2*x**2 + 4

# using an explicit function, forcing the h value
print(d2(g, 6, 1e-4))
Or directly:
# using lambda and default value for h
print(d2(lambda x: 2*x**2 + 4, 6))
EDIT
Updated to take into account that f can be either a string or a function.
I have several expressions of an undefined function, some of which contain the corresponding (undefined) derivatives of that function. Both the function and its derivatives exist only as numerical data. I want to make functions out of my expressions and then call those functions with the corresponding numerical data to numerically compute the expression. Unfortunately I have run into a problem with lambdify.
Consider the following simplified example:
import sympy
import numpy
# define a parameter and an unknown function on said parameter
t = sympy.Symbol('t')
s = sympy.Function('s')(t)
# a "normal" expression
a = t*s**2
print(a)
#OUT: t*s(t)**2
# an expression which contains a derivative
b = a.diff(t)
print(b)
#OUT: 2*t*s(t)*Derivative(s(t), t) + s(t)**2
# generate an arbitrary numerical input
# for demo purposes lets assume that s(t):=sin(t)
t0 = 0
s0 = numpy.sin(t0)
sd0 = numpy.cos(t0)
# lambdify a
fa = sympy.lambdify([t, s], a)
va = fa(t0, s0)
print (va)
#OUT: 0
# try to lambdify b
fb = sympy.lambdify([t, s, s.diff(t)], b) # this fails with syntax error
vb = fb(t0, s0, sd0)
print (vb)
Error message:
File "<string>", line 1
lambda _Dummy_142,_Dummy_143,Derivative(s(t), t): (2*_Dummy_142*_Dummy_143*Derivative(_Dummy_143, _Dummy_142) + _Dummy_143**2)
^
SyntaxError: invalid syntax
Apparently the Derivative object is not resolved correctly; how can I work around that?
As an alternative to lambdify I'm also open to using theano or cython based solutions, but I have encountered similar problems with the corresponding printers.
Any help is appreciated.
As far as I can tell, the problem originates from an incorrect/unfortunate dummification process within the lambdify function. I have written my own dummification function that I apply to the parameters as well as the expression before passing them to lambdify.
def dummify_undefined_functions(expr):
    mapping = {}
    # replace all Derivative terms
    for der in expr.atoms(sympy.Derivative):
        f_name = der.expr.func.__name__
        var_names = [var.name for var in der.variables]
        name = "d%s_d%s" % (f_name, 'd'.join(var_names))
        mapping[der] = sympy.Symbol(name)
    # replace undefined functions
    from sympy.core.function import AppliedUndef
    for f in expr.atoms(AppliedUndef):
        f_name = f.func.__name__
        mapping[f] = sympy.Symbol(f_name)
    return expr.subs(mapping)
Use like this:
params = [dummify_undefined_functions(x) for x in [t, s, s.diff(t)]]
expr = dummify_undefined_functions(b)
fb = sympy.lambdify(params, expr)
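With the numerical values t0, s0, sd0 defined as in the question, the lambdified expression can then be evaluated as originally intended:
vb = fb(t0, s0, sd0)
print(vb)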
Obviously this is somewhat brittle:
- no guard against name collisions
- perhaps not the best possible naming scheme: df_dxdy for Derivative(f(x,y), x, y)
- it is assumed that all derivatives are of the form Derivative(s(t), t, ...) with s(t) being an UndefinedFunction and t a Symbol. I have no idea what will happen if any argument to Derivative is a more complex expression. I kind of think/hope that the (automatic) simplification process will reduce any more complex derivative into an expression consisting of 'basic' derivatives. But I certainly do not guard against it.
- largely untested (except for my specific use-cases)
Other than that it works quite well.
First off, rather than an UndefinedFunction, you could go ahead and use the implemented_function function to tie your numerical implementation of s(t) to a symbolic function.
Then, if you are constrained to discrete numerical data defining the function whose derivative occurs in the troublesome expression, much of the time, the numerical evaluation of the derivative may come from finite differences. As an alternative, sympy can automatically replace derivative terms with finite differences, and let the resulting expression be converted to a lambda. For example:
import sympy
import numpy
from sympy.utilities.lambdify import lambdify, implemented_function
from sympy import Function
# define a parameter and an unknown function on said parameter
t = sympy.Symbol('t')
s = implemented_function(Function('s'), numpy.cos)
print('A plain ol\' expression')
a = t*s(t)**2
print(a)
print('Derivative of above:')
b = a.diff(t)
print(b)
# try to lambdify b by first replacing with finite differences
dx = 0.1
bapprox = b.replace(lambda arg: arg.is_Derivative,
                    lambda arg: arg.as_finite_difference(points=dx))
print('Approximation of derivatives:')
print(bapprox)
fb = sympy.lambdify([t], bapprox)
t0 = 0.0
vb = fb(t0)
print(vb)
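# at t0 = 0 the Derivative term is multiplied by t and drops out, so this prints cos(0)**2 = 1.0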
A similar question was discussed here.
You just need to define your own function and define its derivative as another function:
import sympy as sy

def f_impl(x):
    return x**2

def df_impl(x):
    return 2*x

class df(sy.Function):
    nargs = 1
    is_real = True
    _imp_ = staticmethod(df_impl)

class f(sy.Function):
    nargs = 1
    is_real = True
    _imp_ = staticmethod(f_impl)

    def fdiff(self, argindex=1):
        return df(self.args[0])

t = sy.Symbol('t')
print(f(t).diff().subs({t: 0.1}))

expr = f(t) + f(t).diff()
expr_real = sy.lambdify(t, expr)
print(expr_real(0.1))
I want to use the newton function loaded as
from scipy.optimize import newton
in order to find the zeros of a function entered by the user. I wrote a script that first asks the user to specify a function together with its first derivative, and also the starting point of the algorithm. First of all, typing help(newton) I saw which parameters the function takes, along with the relevant explanation:
newton(func, x0, fprime=None, args=(), tol=1.48e-08, maxiter=50)
func : function
The function whose zero is wanted. It must be a function of a
single variable of the form f(x,a,b,c...), where a,b,c... are extra
arguments that can be passed in the `args` parameter.
In which way do I have to pass my function? If I use e.g. x**3 for func (and its first derivative), the response is NameError: name 'x' is not defined. On the internet I found that first I have to define my function and its first derivative and pass the names as parameters. So I made the following:
fie = raw_input('Enter function in terms of x (e.g. x**2 - 2*x). F= ')
dfie = raw_input('Enter first derivative of function above DF = ')
x0 = input('Enter starting point x0 = ')

def F(x, fie):
    y = eval(fie)
    return y

def DF(x, dfie):
    dy = eval(dfie)
    return dy

print newton(F, x0, DF)
But I get the output
102 for iter in range(maxiter):
103 myargs = (p0,) + args
--> 104 fder = fprime(*myargs)
105 if fder == 0:
106 msg = "derivative was zero."
TypeError: DF() takes exactly 2 arguments (1 given)
and the same issue occurs for F if I omit DF. Looking at the code in /usr/local/share/src/scipy/scipy/optimize/zeros.py I see that it evaluates the first derivative with fder = fprime(*myargs), so maybe I have to put something in args that makes it work. I have been thinking about it, but no solution comes to me.
First, be aware that using eval makes your program vulnerable to malicious users. If that concern does not apply, you can create F and DF like this:
F = eval('lambda x: ' + fie)
DF = eval('lambda x: ' + dfie)
Then both functions expect only a single argument, and you can leave the args argument empty.
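For example, with made-up inputs in place of raw_input (the function x**3 - 2 and its derivative 3*x**2 are just sample values):
from scipy.optimize import newton

fie = 'x**3 - 2'      # what the user would have typed for F
dfie = '3*x**2'       # and for DF
x0 = 1.0

F = eval('lambda x: ' + fie)
DF = eval('lambda x: ' + dfie)

print(newton(F, x0, fprime=DF))   # ~1.2599, the real cube root of 2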
EDIT. If you really want to stick to your code as closely as possible, this should also work, but it does not look very nice to me. newton will send the same args to both functions.
def F(x, fie, dfie):
    y = eval(fie)
    return y

def DF(x, fie, dfie):
    dy = eval(dfie)
    return dy

print newton(F, x0, DF, (fie, dfie))