I need to calculate the derivative of a trinomial (quadratic) function inside another function, passing the former as a parameter. This is my code so far:
def derivative(func):
    num = func(a, b, c)
    third = func(0, 0, c)
    first = func(a, 0, 0)
    return (num - third) / x + first / x

def make_quadratic(a, b, c):
    return lambda x: x * x * a + b * x + c
I suppose that by using the make_quadratic function I had to pass the 3 parameters (a, b, c) to func as well. What I am trying to do is remove c, divide by x, and then add ax so that the resulting derivative is 2ax + b. However, I cannot run the code: for some reason a, b, c are not defined, even though I do give them values when I call the function.
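For context, since the derivative of a*x**2 + b*x + c is 2*a*x + b, a closure that builds the derivative directly from the coefficients avoids the undefined names entirely (a minimal sketch, not the divide-and-add manipulation described above; make_derivative is an illustrative name):
def make_quadratic(a, b, c):
    # f(x) = a*x**2 + b*x + c
    return lambda x: a * x * x + b * x + c

def make_derivative(a, b):
    # f'(x) = 2*a*x + b
    return lambda x: 2 * a * x + b

f = make_quadratic(1, 2, 3)
fprime = make_derivative(1, 2)
print(f(5), fprime(5))  # 38 12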
Write yourself a class like the following one:
class TrigonomialFunction:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def calculate(self, x):
        # evaluate a*x**2 + b*x + c at the given x
        return self.a * x ** 2 + self.b * x + self.c

    def derivate(self):
        # d/dx (a*x**2 + b*x + c) = 2*a*x + b, expressed as a new trinomial
        return TrigonomialFunction(0, 2 * self.a, self.b)

    def __str__(self):
        return "f(x) = %s x**2 + %s * x + %s" % (self.a, self.b, self.c)
Which can then be used in the following way:
f = TrigonomialFunction(2, 3, -5)
print(f)                          # prints f(x) = 2 x**2 + 3 * x + -5
print(f.calculate(1))             # prints 0
print(f.calculate(-1))            # prints -6

derivateOfF = f.derivate()
print(derivateOfF)                # prints f(x) = 0 x**2 + 4 * x + 3, i.e. f(x) = 4*x + 3
print(derivateOfF.calculate(0))   # prints 3

derivateOfderivateOfF = derivateOfF.derivate()
print(derivateOfderivateOfF)      # guess ;)
You can calculate the derivative of any function using sympy.
For your function, you can use:
import sympy

def get_derivative(func, x):
    return sympy.diff(func, x)

def make_quadratic(a, b, c):
    x = sympy.symbols('x')
    func = a * x**2 + b * x + c
    return func, x

func, x = make_quadratic(1, 2, 3)
print(get_derivative(func, x))
Which returns 2*x + 2
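If you also need to evaluate the derivative numerically, sympy.lambdify turns the symbolic result back into a plain Python function; a short follow-up reusing the code above:
import sympy

func, x = make_quadratic(1, 2, 3)
dfunc = sympy.diff(func, x)            # 2*x + 2
f_numeric = sympy.lambdify(x, dfunc)   # plain Python function of x
print(f_numeric(5))                    # prints 12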
n = 7

def f(x):
    n = 8
    return x + 1

def g(x):
    n = 9
    def h():
        return x + 1
    return h

def f(f, x):
    return f(x + n)

f = f(g, n)
g = (lambda y: y())(f)
I'm trying to understand and keep track of all the changes in the bindings of these names, but cannot seem to grasp why the y parameter of the lambda gets bound to h when it is called.
The transition from step 14 to 15 in this Python Tutor link, to be precise.
Let's change the names of the functions so that this twisted code is easier to follow. We can name them sequentially, like fun_n, and reassign the names after each function definition. I added some comments too.
The first one would be:
n = 7

def fun_1(x):
    n = 8          # local to fun_1, never used
    return x + 1

f = fun_1
The second one:
def fun_2(x):
    n = 9          # local to fun_2, never used
    def h():
        return x + 1
    return h

g = fun_2
Now we are ready for some bad renaming
def fun_3(f, x):
    return f(x + n)

f = fun_3
And now...
f = f(g, n)
which is equivalent to fun_3(fun_2, n) with n = 7.
fun_3 will return fun_2(7 + n), again with n = 7, i.e. fun_2(14).
fun_2 will return a closure of h with x = 14, so it returns a function that evaluates 14 + 1. So f is now a function that always returns 15.
Finally,
g = (lambda y: y())(f)
creates a lambda that calls whatever is passed to it, and immediately invokes that lambda with f as the argument. It is equivalent to:
fun_4 = lambda y: y()
g = fun_4(f)
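For completeness, running the renamed version end to end confirms the trace (nothing new here, just the pieces above put together with two prints):
n = 7

def fun_1(x):      # originally the first f; never called in the end
    n = 8
    return x + 1

def fun_2(x):      # originally g
    n = 9
    def h():
        return x + 1
    return h

def fun_3(f, x):   # originally the second f
    return f(x + n)

f = fun_3(fun_2, n)       # h closed over x = 14
g = (lambda y: y())(f)    # calls f(), i.e. h()

print(f())  # 15
print(g)    # 15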
I don't think it is really useful as an exercise.
I am writing code to compare mathematical functions and say which one of them has the biggest value for a given x and y.
import math

def r(x, y):
    r = 3 * math.pow(x, 2) + math.pow(y, 2)
    return r

def b(x, y):
    b = 2 * math.pow(x, 2) + 5 * math.pow(y, 2)
    return b

def c(x, y):
    c = -100 * x + math.pow(y, 3)
    return c

def compare():
    x = float(input("Type the value of x: "))
    y = float(input("Type the value of y: "))
    r(x, y)
    b(x, y)
    c(x, y)
    if r > b and c:
        print("r is the biggest.")
        return
    elif b > r and c:
        print("b is the biggest.")
        return
    elif c > b and r:
        print("c is the biggest.")
        return

compare()
But I get the following error:
TypeError: '>' not supported between instances of 'function' and 'function'
You must assign the result of each function call to a variable. Try the following:
import math

def r(x, y):
    r = 3 * math.pow(x, 2) + math.pow(y, 2)
    return r

def b(x, y):
    b = 2 * math.pow(x, 2) + 5 * math.pow(y, 2)
    return b

def c(x, y):
    c = -100 * x + math.pow(y, 3)
    return c

def compare():
    x = float(input("Type the value of x: "))
    y = float(input("Type the value of y: "))
    R = r(x, y)
    B = b(x, y)
    C = c(x, y)
    if R > B and R > C:
        print("r is the biggest.")
        return
    elif B > R and B > C:
        print("b is the biggest.")
        return
    elif C > B and C > R:
        print("c is the biggest.")
        return

compare()
There are two bugs here:
You need to compare the actual results of the function, not the function object itself.
You can't compare one numerical value to the and of two other values; what you're actually testing in that case is a single comparison plus whether the other value is non-zero, as the short demonstration below shows.
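To see that concretely, R > B and C parses as (R > B) and C, so whenever R > B holds the whole expression is just C's truthiness (plain Python semantics, with made-up values):
R, B, C = 10.0, 5.0, 0.0
print(R > B and C)          # 0.0, which is falsy, even though R really is the largest
print((R > B) and (R > C))  # True: the two comparisons you actually want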
I'd use the max function to get the largest value rather than building a complicated chain of conditionals to do the same thing. That way if you add another function it's easy to just add it to the list:
import math

def r(x, y):
    return 3 * math.pow(x, 2) + math.pow(y, 2)

def b(x, y):
    return 2 * math.pow(x, 2) + 5 * math.pow(y, 2)

def c(x, y):
    return -100 * x + math.pow(y, 3)

def compare():
    functions = [r, b, c]  # maybe this could be a parameter?
    x = float(input("Type the value of x: "))
    y = float(input("Type the value of y: "))
    result, func_name = max((f(x, y), f.__name__) for f in functions)
    print(f"{func_name} is the biggest.")

compare()
Type the value of x: 1
Type the value of y: 2
b is the biggest.
Type the value of x: 0
Type the value of y: 0
r is the biggest.
Type the value of x: -1
Type the value of y: -2
c is the biggest.
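If you would rather get back the winning function object instead of its name, max also takes a key argument; a small variation on the same idea (compare_with_key is just an illustrative name, reusing r, b and c from above):
def compare_with_key(functions, x, y):
    # key compares the f(x, y) values, but max returns the function object itself
    best = max(functions, key=lambda f: f(x, y))
    print(f"{best.__name__} is the biggest.")

compare_with_key([r, b, c], 1, 2)  # b is the biggest.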
I want to find the zeros of a simple function for given parameters a, b, c. I have to use the Newton-Raphson method. The problem I get when I run the code is that the x variable is not defined.
from scipy import optimize

def Zeros(a, b, c, u):
    return optimize.newton(a*x**2 + b*x + c, u, 2*ax + b, args=(a, b, c))
a, b, c are constants of the function f and u is the starting point. So with this function I should be able to obtain a zero by specifying a, b, c and u. For instance:
print Zeros(1, -1, 0, 0.8)
But I obtain "global name 'x' is not defined".
Why does that happen?
Most programming languages work with variables (the names a, b, c, u in your code) and functions (Zeros, for instance).
When you call a function, Python expects every name that appears in the call to already be defined. In your case x does not exist, so a*x**2+b*x+c cannot be evaluated; optimize.newton needs to be given a function of x, not an already-evaluated expression.
The solution is to define functions of x, for both the function and its derivative, and pass those to optimize.newton:
from scipy import optimize

def Zeros(a, b, c, u):
    def f(x, a, b, c):
        return a*x**2 + b*x + c

    def fprime(x, a, b, c):
        return 2*a*x + b

    return optimize.newton(f, u, fprime=fprime, args=(a, b, c))

print(Zeros(1, -1, 0, 0.8))
Crude way of doing it to see what's going on!
Define a function:
def f(x):
    return x ** 6 / 6 - 3 * x ** 4 - 2 * x ** 3 / 3 + 27 * x ** 2 / 2 \
        + 18 * x - 30
Define the differential:
def d_f(x):
    return x ** 5 - 12 * x ** 3 - 2 * x ** 2 + 27 * x + 18
Newton-Raphson:
import pandas as pd

x = 1
d = {'x': [x], 'f(x)': [f(x)], "f'(x)": [d_f(x)]}

# iterate x_{n+1} = x_n - f(x_n) / f'(x_n) and log every step
for i in range(0, 40):
    x = x - f(x) / d_f(x)
    d['x'].append(x)
    d['f(x)'].append(f(x))
    d["f'(x)"].append(d_f(x))

df = pd.DataFrame(d, columns=['x', 'f(x)', "f'(x)"])
print(df)
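As an optional sanity check, the hand-rolled iteration can be compared with SciPy's own Newton implementation, reusing f and d_f from above:
from scipy import optimize

root = optimize.newton(f, 1, fprime=d_f)
print(root, f(root))  # f(root) should be ~0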
I'm trying to return multiple values that are obtained inside a scipy root finding function (scipy.optimize.root).
For example:
import scipy.optimize

B = 1

def testfun(x, B):
    B = x + 7
    return B**2 + 9/18 - x

y = scipy.optimize.root(testfun, 7, (B))
Is there any way to return the value of B without using globals?
I'm not aware of anything SciPy specific, but how about a simple closure:
from scipy import optimize

def testfun_factory():
    params = {}

    def testfun(x, B):
        # stash the intermediate value so it can be read after the solve
        params['B'] = x + 7
        return params['B']**2 + 9/18 - x

    return params, testfun

params, testfun = testfun_factory()
y = optimize.root(testfun, 7, args=(1,))
print(params['B'])
Alternatively, an instance of a class with __call__ could also be passed as the callable.
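For reference, a minimal sketch of that class-based alternative (the TestFun name is made up; it mirrors the closure version above):
from scipy import optimize

class TestFun:
    def __init__(self):
        self.B = None  # will hold B from the most recent evaluation

    def __call__(self, x, B):
        # same body as the original testfun; the B argument is unused here
        self.B = x + 7
        return self.B**2 + 9/18 - x

testfun = TestFun()
y = optimize.root(testfun, 7, args=(1,))
print(testfun.B)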
I want to create something like this programmatically:
a = (_vec, T.set_subtensor(_vec[0], _init[0]))[1]
b = (a, T.set_subtensor( a[1], a[0] * 2))[1]
c = (b, T.set_subtensor( b[2], b[1] * 2))[1]
vec_update = (c, T.set_subtensor(c[3], c[2] * 2))
test_vector = function([], outputs=vec_update)
subt = test_vector()
We got a = (_vec, T.set_subtensor(_vec[0], _init[0]))[1], so a is that whole statement. This is not doing anything yet. Then there is b = (a, T.set_subtensor(a[1], a[0] * 2))[1], which depends on a and is another statement itself. This goes on until vec_update. I know it looks ugly, but it just updates the elements of a vector like col[n] = col[n-1] * 2 for col[0] = 1, returning a vector looking like this:
[[ 1. 2. 4. ..., 32. 64. 128.]]
Now imagine I wanted to do this a thousand times. Therefore I am wondering whether I could generate such statements, since they follow an easy pattern.
These "concatenated" statements are not evaluated until
test_vector = function([], outputs=vec_update)
which is when they get compiled to CUDA code, and
subt = test_vector()
does execute everything.
You can use function nesting:
def nest(f, g):
    def h(x):
        return f(g(x), x)
    return h

expr = lambda ab: ab[0]
expr = nest(lambda x, ab: x + (ab[0] - ab[1]), expr)
expr = nest(lambda x, ab: x + (ab[0] - ab[1]), expr)
print(expr((1, 2)))  # prints -1
Regarding the example code, you could do something like (modifying nest to use no arguments):
def nest(f, g):
    def h():
        return f(g())
    return h

expr = lambda: T.set_subtensor(_vec[0], _init[0])
expr = nest(lambda x: T.set_subtensor(x[1], x[0] * 2), expr)
expr = nest(lambda x: T.set_subtensor(x[2], x[1] * 2), expr)
expr = nest(lambda x: T.set_subtensor(x[3], x[2] * 2), expr)
expr = nest(lambda x: T.set_subtensor(x[4], x[3] * 2), expr)

test_vector = function([], outputs=expr())
subt = test_vector()
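Since the updates all follow the same pattern, the chain can also be built in a loop instead of writing one nest call per index; a sketch, assuming the same Theano objects (T, function, _vec, _init) as in the question:
def nest(f, g):
    def h():
        return f(g())
    return h

n_steps = 8  # number of elements to fill; each nest adds one Python call frame

expr = lambda: T.set_subtensor(_vec[0], _init[0])
for i in range(1, n_steps):
    # bind i via a default argument so every step keeps its own index
    expr = nest(lambda x, i=i: T.set_subtensor(x[i], x[i - 1] * 2), expr)

test_vector = function([], outputs=expr())
subt = test_vector()
For very long chains you may hit Python's recursion limit when expr() unwinds, in which case building the symbolic expression in a plain loop (without nested closures) avoids the problem.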