ODE solver in Python

I am trying to solve an ODE (d^2x/dt^2 = -4(x^2+y^2)^(3/2)) using scipy odeint, but I can't get it to work. Here is my code:
import numpy as np
from scipy.integrate import odeint

def system(x, t, y):
    x1 = x[0]
    x2 = x[1]
    y1 = y
    dx1_dt = x2
    dx2_dt = -4*(x1**2 + y1**2)**(3/2)
    dx_dt = [dx1_dt, dx2_dt]
    return dx_dt

x_0 = [2, 3]
y_0 = [8, 6]
t = np.linspace(0, 1, 30)
x_solved = odeint(system, x_0, t, args=(y_0[0]))
I am getting this error:
odepack.error: Extra arguments must be in a tuple
But I am passing the extra arguments as a tuple: args=(y_0[0]). What am I doing wrong? Thank you!

A tuple with a single element must be written as (y_0[0],). Note the trailing comma.
(x) evaluates to x
(x,) evaluates to a tuple with one element
Parentheses are also commonly used just for grouping and readability:
is_true = (x and y) or (a or k)
Since parentheses already serve that grouping role, the comma is what distinguishes a single-element tuple from an ordinary parenthesized expression.
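
So the corrected call from the question would look like this (a minimal sketch, assuming the rest of the code stays the same):

x_solved = odeint(system, x_0, t, args=(y_0[0],))  # trailing comma makes a one-element tuple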

Related

float() argument must be a string or a number, not 'function'. What is wrong with this code?

I am trying to call one of two functions depending on the value of the variable 't', but "float() argument must be a string or a number, not 'function'" comes up. Please help.
import numpy as np
import sympy as sym
import matplotlib.pyplot as plt

n = 9
t = np.linspace(0, 10, n)
u = np.linspace(0.0, n)
v = np.linspace(0.0, n)
a = np.linspace(0.0, n)

def T_td(t, utd, vtd):
    t = sym.symbols('t')
    y = utd*sym.sin(5*t) + vtd*sym.cos(5*t)
    yp = y.diff(t)
    ypp = yp.diff(t)
    j = sym.lambdify(t, y)
    k = sym.lambdify(t, yp)
    l = sym.lambdify(t, ypp)
    return j, k, l

def td_T(t):
    t = sym.symbols('t')
    y = sym.sin(5*t) + sym.cos(5*t)
    yp = y.diff(t)
    ypp = yp.diff(t)
    j = sym.lambdify(t, y)
    k = sym.lambdify(t, yp)
    l = sym.lambdify(t, ypp)
    return j, k, l

def func(t, utd, vtd):
    if t < 5:
        u, v, a = td_T(t)
        utd = 0
        vtd = 0
    elif t == 5:
        u, v, a = td_T(t)
        utd = u
        vtd = v
    else:
        u, v, a = T_td(t, utd, vtd)
    return u, v, a, utd, vtd

#print(t)
for i in range(0, n, 1):
    u[i], v[i], a[i], u_td, v_td = func(t[i], 0, 0)
The first three values in the tuple returned by func() are of type <function _lambdifygenerated at 0x1282cd090>, while the target is a numpy array of floats. Hence the error.
Well, there are many probable errors in this code: the linspace arguments, the useless "t" arguments to td_T and T_td (since the first thing they do is overwrite t with a symbolic value), and the apparently unused u_td and v_td in the main loop.
But the one causing your error is the fact that u, v and a are numpy arrays of floats, and you are trying to force-feed them with functions.
The first three values returned by func are the values returned by td_T and T_td, which are all results of sym.lambdify. As its name suggests, sym.lambdify returns a function, not a float. You are supposed to call those functions with some argument. Since I have no idea what you are trying to do, I also have no idea what that argument should be, but there has to be one.
Otherwise, it is as if you were trying to do
u[i]=sin
v[i]=cos
a[i]=len
sin, cos or len are functions. sin(0), cos(0) and len([]) are numbers.
Likewise, the j, k, l that your td_T and T_td functions return are functions. j(0), k(1), l(2) would be numbers suitable for storing in u[i] and the like.
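
For example, if the intent is to evaluate the returned functions at each time t[i] (that is a guess on my part; the question does not say), the loop could be rewritten along these lines:

for i in range(n):
    u_f, v_f, a_f, u_td, v_td = func(t[i], 0, 0)
    # call the lambdified functions so that floats, not functions, are stored
    u[i], v[i], a[i] = u_f(t[i]), v_f(t[i]), a_f(t[i])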

Unexpected behaviour of sympy.lambdify with trigonometric functions

Given an expression, we can convert it into a function using sympy.lambdify. Similarly, given a function, we can convert it into an expression by evaluating it at the symbol x. We would naturally expect these two operations to be inverses of each other, and this expected behaviour does hold when I use polynomial expressions. For example,
import sympy as sym
x = sym.symbols('x')
expr = 5*x**2 + 2*x + 3
f = sym.lambdify([x],expr)
f_expr = f(x)
print(expr == f_expr)
gives True as its output.
On the other hand, the following code does not run
import sympy as sym
x = sym.symbols('x')
expr = sym.sin(x)
f = sym.lambdify([x],expr)
f_expr = f(x)
print(expr == f_expr)
and throws the error "TypeError: loop of ufunc does not support argument 0 of type Symbol which has no callable sin method". Could you please explain why this is happening? My guess would be that sym.sin(x) does not return an "expression" analogous to 5*x**2 + 2*x + 3. But I would like to understand it a bit better. Thanks in advance.
For a non-numeric object, the function generated by lambdify (which uses numpy by default) ends up trying to call x.sin(), and a sympy Symbol has no such method. Make sure the sin you apply comes from sympy, not numpy, to avoid that confusion.
You can try:
import sympy as sym
from sympy import sin
x = sym.symbols('x')
expr = sin(x)
# f = sym.lambdify(x,expr)
f = lambda x:sin(x)
f_expr = f(x)
print(expr == f_expr)
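
Another option, sketched below on the assumption that you still want to go through lambdify, is to tell lambdify to use sympy's own functions via its modules argument, so that calling the result on a Symbol stays symbolic:

import sympy as sym

x = sym.symbols('x')
expr = sym.sin(x)
# generate a function backed by sympy.sin instead of numpy.sin
f = sym.lambdify([x], expr, modules="sympy")
f_expr = f(x)
print(expr == f_expr)  # True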

passing in initial/boundary conditions for a function in scipy.optimize.root as the args argument

I am trying to solve a non-linear system. Here is the code for a toy problem.
import collections
import numpy as np
import scipy.optimize

def flat(x):
    '''Flattens a shallow list.
    ex: [[1,2,3],[4,5],[6]] ----> flattens to [1,2,3,4,5,6]
    numpy flatten does not work on lists.
    '''
    if isinstance(x, collections.Iterable):
        return [a for i in x for a in flat(i)]
    else:
        return [x]

def func(X):
    '''Sets up the matrix dynamic equation and the set of constraints.'''
    A = [[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]]
    A1 = [[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]]
    x = X[:-1]
    alpha = X[-1]
    x0 = [1,2,3,4]
    y = x - x0
    # x[0] = 0.5
    # x[3] = 0.3
    dyneqn = np.dot(A,y) + alpha * np.dot(A1,x)
    cons = (1/2.0)*np.dot(x.T, np.dot(A1,x)) + np.dot([-1,1,2,-3], x) + 0.5
    return flat([dyneqn, cons])

sol = scipy.optimize.root(func, [1,-1,2,0,-1])
sol.x
Problem Statement
The argument X of the objective function func has five unknowns that we are solving for. I want to set the first parameter, i.e. X[0] = 0.5, and the fourth parameter, i.e. X[3] = 0.3, and solve for the remaining 3 unknowns. Let us assume for simplicity that such a solution exists and my initial guess is somehow a good one.
Attempt:
I know I should probably pass these arguments to the args=() argument in scipy.optimize.root. I tried setting
args = (X[0]=0.5, X[3]=0.3)
init_guess = [0.5,-1,2,0.3,-1]
scipy.optimize.root(func,init_guess, args=args)
This is obviously wrong.
Question: How can I fix this?
Note: I added the flat function so that the code is self contained. It has nothing to do with this question.
Typically, with scipy functions like root, minimize, etc.,
root(func, x0, args=(a, b, c, ...))
requires a func that looks like:
def func(x0, a, b, c, ...):
    # do something with those arguments
    return value
x0 is the value that root varies; a, b, c are the args values that are passed unchanged to your function. Depending on the problem, x0 may be an array. The nature of the args is entirely up to you.
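
As a minimal, self-contained illustration of that calling convention (this is not the question's system, just a hedged example of extra parameters passed through args):

from scipy.optimize import root

def f(x, a, b):
    # a and b arrive unchanged from args; root only varies x
    return x**3 - a*x - b

sol = root(f, x0=[1.0], args=(2.0, 5.0))
print(sol.x)  # root of x**3 - 2*x - 5, roughly 2.0946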
From your example I reconstruct that you want to solve for the second and third component of some vector x as well as the parameter alpha. With the args keyword of scipy.optimize.root that would look something like
def func(x_solve, x0, x3):
    # x_solve.size should be 3
    x = np.empty(4)
    x[0], x[3] = x0, x3
    x[1:3] = x_solve[:2]
    alpha = x_solve[2]
    ...

scipy.optimize.root(func, [-1, 2, -1], args=(.5, .3))
As Azat and kazemakase pointed out, I'm also not sure if you actually want to use root, but the usage of scipy.optimize.minimize is pretty much the same.
Edit: It should be possible to have a flexible set of fixed variables by using a dictionary as an additional argument which specifies those:
def func(x_solve, fixed):
    x = x_solve[:-1]  # last value is alpha
    for idx in fixed.keys():  # overwrite fixed entries
        x[idx] = fixed[idx]
    alpha = x_solve[-1]
    ...  # rest of the computation as before

# fixed variables, key is the index
fixed_vars = {0: .5, 3: .3}

# find roots
scipy.optimize.root(func,
                    [.5, -1, 2, .3, -1],
                    args=(fixed_vars,))
That way, when the optimizer in root numerically evaluates the Jacobian it obtains zero for the fixed variables and should therefore leave those invariant. However, that might lead to complications in the convergence of the algorithm.
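
Putting that together with the toy problem, a minimal runnable sketch could look like the following (the matrices and fixed values come from the question; whether root is really the right tool here is a separate issue, as noted above):

import numpy as np
from scipy.optimize import root

A  = np.array([[0,1,0,1],[2,1,0,4],[1,4,1,3],[3,2,1,0]], dtype=float)
A1 = np.array([[1,0,1,-1],[0,-1,2,1],[1,2,0,1],[1,2,0,-2]], dtype=float)
x_ref = np.array([1,2,3,4], dtype=float)

def func(X, fixed):
    x = np.array(X[:-1], dtype=float)  # first four entries of the unknown vector
    for idx, val in fixed.items():     # overwrite the fixed entries
        x[idx] = val
    alpha = X[-1]
    y = x - x_ref
    dyneqn = A.dot(y) + alpha * A1.dot(x)
    cons = 0.5 * x.dot(A1.dot(x)) + np.dot([-1,1,2,-3], x) + 0.5
    return np.append(dyneqn, cons)

fixed_vars = {0: 0.5, 3: 0.3}
sol = root(func, [0.5, -1, 2, 0.3, -1], args=(fixed_vars,))
print(sol.x)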

np.linspace vs range in Bokeh

I'm a coding newcomer and I'm trying to work with Bokeh. Newcomer to StackOverflow too, so please tell me if I did something wrong here.
I'm playing with this example from the Bokeh website and I ran into a problem. When the x values are set, as in the example, using np.linspace, I'm able to use interact and play with the update function. But if I change x to a list, using range(), then I get this error: TypeError: can only concatenate list (not "float") to list. As I understand it, the problem lies in "x + phi", since x is a list and phi is a float.
I get that it's not possible to concatenate a list with a float, but why is it only when I use a numpy.ndarray that Python understands that I want to modify the function that controls the y values?
Here is the code (I'm using Jupyter Notebook):
x = np.linspace(0, 10, 1000)
y = np.sin(x)
p = figure(title="example", plot_height=300, plot_width=600, y_range=(-5, 5))
r = p.line(x, y)

def update(f, w=1, A=1, phi=0):
    if f == "sin": func = np.sin
    elif f == "cos": func = np.cos
    elif f == "tan": func = np.tan
    r.data_source.data["y"] = A * func(w * x + phi)
    push_notebook()

show(p, notebook_handle=True)
interact(update, f=["sin", "cos", "tan"], w=(0,100), A=(1,5), phi=(0, 20, 0.1))
Yes, please compare the numpy documentation with the documentation of lists: https://docs.python.org/3.6/tutorial/datastructures.html
You can also play with the following code:
from numpy import linspace
a = linspace(2, 3, num=5)
b = list(range(5))
print(type(a), a)
print(type(b), b)
print()
print("array + array:", a + a)
print("list + list:", b + b)
print(a + 3.14159)
print(b + 2.718) # will fail as in your example, because it is a list
My advice is not to mix lists and arrays unless there is a good reason to do so. I personally often cast function arguments to arrays if necessary:
def f(an_array):
    an_array = numpy.array(an_array)
    # continue knowing that it is an array now,
    # being aware that I make a copy of an_array at this point
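
Applied to the Bokeh snippet above, one small fix along those lines is to convert the range to an array before using it, so that w * x + phi broadcasts elementwise (a sketch; the variable names follow the question):

import numpy as np

x = np.asarray(range(0, 11))   # or simply keep np.linspace(0, 10, 1000)
y = np.sin(x)
# now A * func(w * x + phi) in update() works, because x is an ndarray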

Passing functions in python

from math import cos

def diff1(f, x):  # approximates first derivative
    h = 10**(-10)
    return (f(x+h) - f(x))/h

def newtonFunction(f, x):
    return x - f(x)/float(diff1(f, x))

y = cos
x0 = 3
epsilon = .001
print diff1(newtonFunction(y, x0), x0)
This is just a portion of the code, but I want to calculate diff1(f, x) where f is newtonFunction using the argument f passed to NewtonMinimum. diff1 already takes f and x as arguments, and I get an error saying newtonFunction expects two arguments.
I think what you're looking for is functools.partial.
The problem is that f is not newtonFunction; rather, it is the value returned by newtonFunction(y, x0). In this example that is a floating point number, hence the "'float' object is not callable" error.
If you want to pass a function as a parameter to another function, you need to use just its name:
diff1(newtonFunction, x0)
Note also that you will then have another problem: in diff1 you're calling f with only one parameter, but newtonFunction takes two parameters.
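
A hedged sketch of how those two hints could fit together, using functools.partial to fix newtonFunction's f argument to cos so that the result is a one-argument function that diff1 can call (this assumes that differentiating the Newton step is really what is intended):

from math import cos
from functools import partial

def diff1(f, x):
    # approximates the first derivative of f at x
    h = 10**(-10)
    return (f(x + h) - f(x)) / h

def newtonFunction(f, x):
    # one Newton iteration step for a root of f
    return x - f(x) / float(diff1(f, x))

y = cos
x0 = 3

# fix the first argument, leaving a function of x only
g = partial(newtonFunction, y)
print(diff1(g, x0))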
In diff1, you are missing a * in f(x+h) and f(x) and in newtonFunction. You are also leaving y as a built-in function, so I assumed you wanted the cos of x0. Here is your edited code:
from math import cos

def diff1(f, x):  # approximates first derivative
    h = 10**(-10)
    return (f*(x+h) - f*(x))/h

def newtonFunction(f, x):
    return x - f*(x)/float(diff1(f, x))

y = cos
x0 = 3
epsilon = .001
print diff1(newtonFunction(y(x0), x0), x0)
