Currently I'm attempting to solve 4 coupled ODEs to stabilize an inverted pendulum on a cart. I have no problem doing it with odeint from SciPy; however, I can't make it work with a manual implementation. Most likely this is due to some weird data formatting done in the 'model' function in the code.
I have tried multiple things to no avail, so I won't post all my error messages; they vary, starting with shapes not matching when adding up the intermediate steps in the RK4 method.
My current, working code is below (now using solve_ivp, see Edit 1). What I'm asking is whether someone can help me fix the function 'model' so that I can implement the RK4 solver (which I can do for other ODEs without any problem).
import numpy as np
from scipy.integrate import solve_ivp
from scipy import signal
g = 9.82
l = 0.281
mc = 6.28
alpha = 0.4
mp = 0.175
t_start = 0.
t_end = 12.
tol = 10**(-1)
# Define A and B and the poles we want
A = np.array([[0., 1., 0., 0.], [(mc+mp)*g/(l*mc), 0., 0., (-alpha)/(l*mc)], [0., 0., 0., 1.], [(g*mp)/mc, 0., 0., (-alpha)/mc]])
B = np.array([[0.], [1./(l*mc)], [0.], [1./mc]])
Poles = np.array([complex(-1.,2.), complex(-1.,-2.), complex(-2.,1.), complex(-2.,-1.)])
# Determine K
placed = signal.place_poles(A, B, Poles)
K = placed.gain_matrix
# print(placed.computed_poles) # To verify that the computed poles are correct
# Define the model
def model(t,x):
    x1, x2, x3, x4 = x
    u = -np.matmul(K,x)
    dx1dt = x2
    dx2dt = (np.cos(x1.astype(float))*(u-alpha*x4-mp*l*x2**2*np.sin(x1.astype(float)))+(mc+mp)*g*np.sin(x1.astype(float)))/(l*(mc+mp*(1-np.cos(x1.astype(float))**2)))
    dx3dt = x4
    dx4dt = (u-alpha*x4-mp*l*x2**2*np.sin(x1.astype(float))+mp*g*np.sin(x1.astype(float))*np.cos(x1.astype(float)))/(mc+mp*(1-np.cos(x1.astype(float))**2))
    return np.array([dx1dt, dx2dt, dx3dt, dx4dt])
# Solve the system
N = 10000 # Number of steps
t = np.linspace(t_start, t_end, N)
t_span = (t_start, t_end)
x0 = np.array([0.2, 0., 0., 0.])
sol = solve_ivp(model,t_span,x0, t_eval=t, method='RK45')
index = np.argmin(sol.y[2,:]) # Max displacement from the origin
print(f' The biggest deviation from the origin is: {abs(sol.y[2, index])} meters.')
#This doesn't work
def RK4(fcn, a, b, y0, N):
    h = (b-a)/N
    x = a + np.arange(N+1)*h
    y = np.zeros((x.size, y0.size))
    y[0,:] = y0
    for k in range(N):
        k1 = fcn(x[k], y[k,:])
        k2 = fcn(x[k] + h/2, y[k,:] + h*k1/2)
        k3 = fcn(x[k] + h/2, y[k,:] + h*k2/2)
        k4 = fcn(x[k] + h, y[k,:] + h*k3)
        y[k+1,:] = y[k,:] + h/6*(k1 + 2*(k2 + k3) + k4)
    return x, y
a,b = RK4(model, 0, 12, x0, 1000)
Which yields the following error:
runcell(0, 'C:/Users/Nikolai Lund Kühne/OneDrive - Aalborg Universitet/Uni/3. semester/P3 - Dynamiske Systemer/manualRK4.py')
The biggest deviation from the origin is: 0.48256054833140316 meters.
Traceback (most recent call last):
File "C:\Users\Nikolai Lund Kühne\OneDrive - Aalborg Universitet\Uni\3. semester\P3 - Dynamiske Systemer\manualRK4.py", line 57, in <module>
a,b = RK4(model, 0, 12, x0, 1000)
File "C:\Users\Nikolai Lund Kühne\OneDrive - Aalborg Universitet\Uni\3. semester\P3 - Dynamiske Systemer\manualRK4.py", line 53, in RK4
y[k+1,:] = y[k,:] + h/6*(k1 + 2*(k2 + k3) + k4)
ValueError: could not broadcast input array from shape (4,4,4) into shape (4)
Edit 2: Attempt to implement RK4 manually results in some weird errors.
Edit 1: Based on a comment the code is now implemented with solve_ivp.
I did not completely debug this, so some of what follows is speculation; you could also reduce the data step by step to a state where the expected behavior appears.
NumPy is "helping" in the style of Matlab here. place_poles constructs K as a 2D array in the shape of a row vector, [[K1, K2, K3, K4]]. The matrix-vector multiplication K @ x therefore has a one-dimensional result. Mathematically, one would expect either a scalar or a 1x1 matrix [[u1]]; following the Matlab philosophy it is neither, it is a 1-element array u = [u1]. Any further scalar operation that has u inside will also result in 1-element arrays. Putting the derivatives together then effectively produces a column vector, and further operations with arrays can broadcast that to a 4x4 matrix-shaped array (for example, the (4,)-shaped y[k,:] plus the (4,1)-shaped h*k1/2 broadcasts to shape (4,4)). I did not follow up on exactly how the 4x4x4-shaped tensor occurs, but it seems quite possible once such a state passes through the model again.
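A minimal sketch of the corresponding fix (untested against the original data, but consistent with the diagnosis above): reduce u to a plain scalar before it enters the derivatives, so that model always returns a flat shape-(4,) array. With that change the posted RK4 loop should run unchanged.

import numpy as np  # assumes K, g, l, mc, mp, alpha defined as in the question

def model(t, x):
    x1, x2, x3, x4 = x
    u = -(K @ x).item()  # K has shape (1, 4), so K @ x has shape (1,); .item() extracts the scalar
    s, c = np.sin(x1), np.cos(x1)
    dx1dt = x2
    dx2dt = (c*(u - alpha*x4 - mp*l*x2**2*s) + (mc + mp)*g*s) / (l*(mc + mp*(1 - c**2)))
    dx3dt = x4
    dx4dt = (u - alpha*x4 - mp*l*x2**2*s + mp*g*s*c) / (mc + mp*(1 - c**2))
    return np.array([dx1dt, dx2dt, dx3dt, dx4dt])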
I created a spline in Abaqus and now I would like to calculate the length of that spline.
The spline consists of 19 points [CoGX_Init, CoGY_Init, CoGZ_Init].
I want to determine the distance between each pair of consecutive points with the formula sqrt((x2-x1)**2 + (y2-y1)**2 + (z2-z1)**2) and then sum these distances to find the complete length of the spline.
This is my code;
N = np.zeros((1, len(CoGZ_Init)))
for j in range(0, len(CoGZ_Init)-1):
    x1 = CoGX_Init[j]
    x2 = CoGX_Init[j+1]
    y1 = CoGY_Init[j]
    y2 = CoGY_Init[j+1]
    z1 = CoGZ_Init[j]
    z2 = CoGZ_Init[j+1]
    N[j] = sqrt((x2-x1)**2+(y2-y1)**2+(z2-z1)**2)
print(sum[N])
When I run this, I receive the following error for the N[j] line:
index 1 is out of bounds for axis 0 with size 1.
Your array N is an array like [[0., 0., 0., ... 0.]] -- so you need to change
N = np.zeros((1, len(CoGZ_Init)))
to
N = np.zeros(len(CoGZ_Init))
OR change
N[j] = sqrt((x2-x1)**2+(y2-y1)**2+(z2-z1)**2)
to
N[0][j] = sqrt((x2-x1)**2+(y2-y1)**2+(z2-z1)**2)
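Two further remarks on the posted code: sum[N] uses square brackets and would raise a TypeError (it should be sum(N)), and there are only len(CoGZ_Init)-1 distances for len(CoGZ_Init) points, so the array keeps one unused zero entry. A vectorized sketch that sidesteps both issues, assuming the CoG*_Init sequences hold plain floats:

import numpy as np

def spline_length(xs, ys, zs):
    # stack the coordinates into one (n_points, 3) array
    pts = np.column_stack([xs, ys, zs]).astype(float)
    # differences between consecutive points: one row per segment (18 rows for 19 points)
    seg = np.diff(pts, axis=0)
    # Euclidean length of each segment, then the total
    return np.sqrt((seg**2).sum(axis=1)).sum()

# total = spline_length(CoGX_Init, CoGY_Init, CoGZ_Init)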
Normally I have been using GNU Octave to solve quadratic programming problems.
I solve problems like

minimize (1/2) x'Qx + c'x

subject to

A*x <= b
lb <= x <= ub

where lb and ub are lower and upper bounds, i.e. limits for x.
My Octave code for solving this is just one simple line:
U = quadprog(Q, c, A, b, [], [], lb, ub);
The square brackets [] are empty because I don't need the equality constraints Aeq*x = beq.
So my question is:
Is there an easy-to-use quadratic solver in Python for solving problems of the form

minimize (1/2) x'Qx + c'x

subject to

A*x <= b
lb <= x <= ub

or subject to

b_lb <= A*x <= b_ub
lb <= x <= ub
You can write your own solver based on scipy.optimize; here is a small example of how to code your own custom Python quadprog():
# python3
import numpy as np
from scipy import optimize
class quadprog(object):
    def __init__(self, H, f, A, b, x0, lb, ub):
        self.H = H
        self.f = f
        self.A = A
        self.b = b
        self.x0 = x0
        self.bnds = tuple([(lb, ub) for x in x0])
        # call solver
        self.result = self.solver()

    def objective_function(self, x):
        return 0.5*np.dot(np.dot(x.T, self.H), x) + np.dot(self.f.T, x)

    def solver(self):
        cons = ({'type': 'ineq', 'fun': lambda x: self.b - np.dot(self.A, x)})
        optimum = optimize.minimize(self.objective_function,
                                    x0=self.x0.T,
                                    bounds=self.bnds,
                                    constraints=cons,
                                    tol=10**-3)
        return optimum
Here is how to use this, using the same variables from the first example provided in matlab-quadprog:
# init vars
H = np.array([[ 1, -1],
[-1, 2]])
f = np.array([-2, -6]).T
A = np.array([[ 1, 1],
[-1, 2],
[ 2, 1]])
b = np.array([2, 2, 3]).T
x0 = np.array([1, 2])
lb = 0
ub = 2
# call custom quadprog (named qp so we don't shadow the class)
qp = quadprog(H, f, A, b, x0, lb, ub)
print(qp.result)
The output of this short snippet is:
fun: -8.222222222222083
jac: array([-2.66666675, -4. ])
message: 'Optimization terminated successfully.'
nfev: 8
nit: 2
njev: 2
status: 0
success: True
x: array([0.66666667, 1.33333333])
For more information on how to use scipy.optimize.minimize please refer to the docs.
If you need a general quadratic programming solver like quadprog, I would suggest the open-source software cvxopt as noted in one of the comments. This is robust and really state-of-the-art. The main contributor is a major expert in the field and the co-author of a classic book on Convex Optimization.
The function you want to use is cvxopt.solvers.qp. A simple wrapper to use it in Numpy like quadprog is the following. Note that bounds can be included as a special case of inequality constraints.
import numpy as np
from cvxopt import matrix, solvers
def quadprog(P, q, G=None, h=None, A=None, b=None, options=None):
    """
    Quadratic programming problem with both linear equalities and inequalities

        Minimize      0.5 * x @ P @ x + q @ x
        Subject to    G @ x <= h
        and           A @ x == b
    """
    P, q = matrix(P), matrix(q)
    if G is not None:
        G, h = matrix(G), matrix(h)
    if A is not None:
        A, b = matrix(A), matrix(b)
    sol = solvers.qp(P, q, G, h, A, b, options=options)
    return np.array(sol['x']).ravel()
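As noted above, bounds can be included as a special case of the inequality constraints. A small sketch of that conversion (the lb_vec/ub_vec names and values are mine, purely illustrative):

import numpy as np

n = 2                       # dimension of x, hypothetical
lb_vec = np.zeros(n)        # hypothetical lower bounds
ub_vec = 2.0 * np.ones(n)   # hypothetical upper bounds

# lb <= x <= ub is equivalent to the two stacked inequalities
#   I x <= ub   and   -I x <= -lb
G_bounds = np.vstack([np.eye(n), -np.eye(n)])
h_bounds = np.hstack([ub_vec, -lb_vec])
# Stack these rows under any existing G and h before calling quadprog().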
cvxopt used to be difficult to install, but is nowadays also included in the Anaconda distribution and can be installed (even on Windows) with conda install cvxopt.
If instead, you are interested in the more specific case of linear least-squares optimisation with bounds, which is a subset of the general quadratic programming, namely
Minimize || A @ x - b ||
subject to lb <= x <= ub
Then Scipy has the specific function scipy.optimize.lsq_linear(A, b, bounds).
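For illustration, a minimal sketch of its use (the A, b, and bounds here are made-up placeholders, not data from the question):

import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[1., 2.], [3., 4.], [5., 6.]])
b = np.array([1., 2., 3.])
# bounds=(lb, ub) applies 0 <= x_i <= 2 to every component
res = lsq_linear(A, b, bounds=(0., 2.))
print(res.x)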
Note that the accepted answer is a very inefficient approach and should not be recommended. It makes no use of the crucial fact that the function you want to optimize is quadratic but instead uses a generic nonlinear optimization program and does not even specify the analytic gradient.
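To make that critique concrete: for the quadratic objective 0.5*x'Hx + f'x the gradient is known in closed form, and passing it to minimize via jac usually improves both speed and accuracy. A hedged sketch (assuming H is symmetric):

import numpy as np
from scipy import optimize

def qp_objective(x, H, f):
    return 0.5 * x @ H @ x + f @ x

def qp_gradient(x, H, f):
    # gradient of 0.5*x'Hx + f'x; for non-symmetric H use 0.5*(H + H.T) @ x + f
    return H @ x + f

# usage: optimize.minimize(qp_objective, x0, args=(H, f), jac=qp_gradient, ...)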
You could use the solve_qp function from qpsolvers. It solves quadratic programs in the following form:
minimize_x 1/2 x' P x + q'x
subject to G x <= h
A x == b
lb <= x <= ub
The function wraps the many QP solvers available in Python (full list here) via its solver keyword argument. Make sure to try different solvers to find the one that fits your problem best.
Here is a snippet for solving a small problem:
from numpy import array, dot
from qpsolvers import solve_qp
M = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = dot(M.T, M) # this is a positive definite matrix
q = dot(array([3., 2., 3.]), M)
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.])
A = array([1., 1., 1.])
b = array([1.])
x = solve_qp(P, q, G, h, A, b, solver="osqp")
print(f"QP solution: x = {x}")
And if you are interested in linear least squares with linear or box (bounds) constraints, there is also a solve_ls function. Here is a short tutorial on solving such problems.
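A short sketch of solve_ls, assuming a calling convention analogous to solve_qp (check the qpsolvers documentation for your installed version; the data here is made up):

from numpy import array
from qpsolvers import solve_ls

R = array([[1., 2.], [3., 4.], [5., 6.]])  # minimize || R x - s ||
s = array([1., 2., 3.])
G = array([[1., 1.]])                      # subject to G x <= h
h = array([1.])
x = solve_ls(R, s, G, h, solver="osqp")
print(f"LS solution: x = {x}")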
Consider the complex mathematical function on the line [1, 15]:
f(x) = sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)
A polynomial of degree n, w_0 + w_1 x + w_2 x^2 + ... + w_n x^n, is uniquely defined by any n + 1 distinct points through which it passes.
This means that its coefficients w_0, ..., w_n can be determined from the following system of linear equations:

w_0 + w_1 x_1 + w_2 x_1^2 + ... + w_n x_1^n = f(x_1)
w_0 + w_1 x_2 + w_2 x_2^2 + ... + w_n x_2^n = f(x_2)
...
w_0 + w_1 x_{n+1} + w_2 x_{n+1}^2 + ... + w_n x_{n+1}^n = f(x_{n+1})

where x_1, ..., x_{n+1} are the points through which the polynomial passes, and f(x_1), ..., f(x_{n+1}) are the values it must take at those points.
I'm trying to form this system of linear equations (that is, specify the coefficient matrix A and the right-hand-side vector b) for the polynomial of third degree that must coincide with the function f at the points 1, 4, 10, and 15, and then solve the system using the scipy.linalg.solve function.
A = numpy.array([[1., 1., 1., 1.], [1., 4., 8., 64.], [1., 10., 100., 1000.], [1., 15., 225., 3375.]])
V = numpy.array([3.25, 1.74, 2.50, 0.63])
numpy.linalg.solve(A, V)
I got the wrong answer. So the question is: is the matrix correct?
No, your matrix is not correct.
The biggest mistake is in the second row of your matrix A: the third entry should be 4**2, which is 16, but you have 8. Less importantly, you have only two decimal places in your constants array V, but you really should use more precision than that; systems of linear equations are sometimes very sensitive to the provided values, so make them as precise as possible. Also, the rounding in your final three entries is off: you rounded down where you should have rounded up. If you really want two decimal places (which I do not recommend) the values should be
V = numpy.array([3.25, 1.75, 2.51, 0.64])
But better would be
V = numpy.array([3.252216865271419, 1.7468459495903677,
2.5054164070002463, 0.6352214195786656])
With those changes to A and V I get the result
array([ 4.36264154, -1.29552587, 0.19333685, -0.00823565])
I made two sympy plots, the first showing your original function and the second the approximating cubic polynomial.
They look close to me! When I calculate the function values at 1, 4, 10, and 15, the largest absolute error is for 15, namely -4.57042132584462e-6. That is somewhat larger than I would have expected but probably is good enough.
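For completeness, a small sketch of the corrected computation that evaluates f at full precision instead of typing rounded constants:

import numpy as np

def f(x):
    return np.sin(x / 5) * np.exp(x / 10) + 5 * np.exp(-x / 2)

points = np.array([1., 4., 10., 15.])
A = np.vander(points, increasing=True)  # rows are [1, x, x**2, x**3]
V = f(points)                           # full-precision right-hand side
w = np.linalg.solve(A, V)
print(w)  # approximately [ 4.36264154 -1.29552587  0.19333685 -0.00823565]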
Is this from a data science course? :)
Here is an almost generic solution I did:
%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt

def f(x):
    return np.sin(x / 5) * np.exp(x / 10) + 5 * np.exp(-x / 2)

# approximate at the given points (feel free to experiment: change/add/remove)
points = np.array([1, 4, 10, 15])
n = points.size

# fill the A-matrix: each row is xi^0, xi^1, xi^2, ..., xi^(n-1)
A = np.zeros((n, n))
for index in range(0, n):
    A[index] = np.power(np.full(n, points[index]), np.arange(0, n, 1))

# fill the b-vector, i.e. the function values at the given points
b = f(points)

# solve to get the approximation polynomial's coefficients
solve = np.linalg.solve(A, b)

# define the polynomial approximation of the function
def polynom(x):
    # y = sum over i of solve[i] * x^i
    tiles = np.tile(x, (n, 1))
    tiles[0] = np.ones(x.size)
    for index in range(1, n):
        tiles[index] = tiles[index]**index
    return solve.dot(tiles)

# plot the graphs of the original function and its approximation
x = np.linspace(1, 15, 100)
plt.plot(x, f(x))
plt.plot(x, polynom(x))

# print out the coefficients of the polynomial approximating our function
print(solve)
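The A-matrix construction and the polynomial evaluation can also lean on NumPy's own building blocks; a compact equivalent sketch:

import numpy as np

def f(x):
    return np.sin(x / 5) * np.exp(x / 10) + 5 * np.exp(-x / 2)

points = np.array([1., 4., 10., 15.])
A = np.vander(points, increasing=True)  # same matrix as the loop above builds
w = np.linalg.solve(A, f(points))

x = np.linspace(1, 15, 100)
y = np.polyval(w[::-1], x)  # np.polyval expects the highest-degree coefficient first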
I'm writing Python code using NumPy. In my code I use linalg.solve to solve a linear system of n equations in n variables. Of course the solutions can be either positive or negative, but I need them always to be positive, or at least equal to 0. To get there, I first want the software to solve my linear system of equations in this form
x=np.linalg.solve(A,b)
in which x is an array with n variables in a specific order (x1, x2, x3.....xn),
A is a n dimensional square matrix and b is a n-dimensional array.
Now I thought to do this:

- solve the system of equations;
- check whether every x is positive;
- if not, set every negative x to 0 (for example x2 = -2 ----> x2 = 0);
- for a generic xn = 0, eliminate the n-th row and the n-th column of the n-dimensional square matrix A (obtaining another square matrix A1) and eliminate the n-th element of b, obtaining b1;
- solve the system again with the matrix A1 and b1;
- re-iterate until every x is positive or zero;
- at last, build a final array of n elements holding the last iteration's solutions plus every variable that was set to zero, in the same order as if there had been no iterations (so if during the iterations x2 = 0, then xfinal = [x1, 0, x3, ....., xn]).

I think it'll work, but I don't know how to do it in Python. I hope I was clear; I can't really figure it out!
You have a minimization problem, i.e.
min ||Ax - b||
s.t. x_i >= 0 for all i in [0, n-1]
You can use the Optimize module from Scipy
import numpy as np
from scipy.optimize import minimize
A = np.array([[1., 2., 3.],[4., 5., 6.],[7., 8., 10.]], order='C')
b = np.array([6., 12., 21.])
n = len(b)
# Ax = b --> x = [1., -2., 3.]
fun = lambda x: np.linalg.norm(np.dot(A,x)-b)
# xo = np.linalg.solve(A,b)
# sol = minimize(fun, xo, method='SLSQP', constraints={'type': 'ineq', 'fun': lambda x: x})
sol = minimize(fun, np.zeros(n), method='L-BFGS-B', bounds=[(0., None) for x in range(n)])
x = sol['x'] # [2.79149722e-01, 1.02818379e-15, 1.88222298e+00]
With your method I get x = [ 0.27272727, 0., 1.90909091].
In the case you still want to use your algorithm, it is below
n = len(b)
x = np.linalg.solve(A,b)
pos = np.where(x>=0.)[0]
while len(pos) < n:
    Ap = A[pos][:,pos]
    bp = b[pos]
    xp = np.linalg.solve(Ap, bp)
    x = np.zeros(len(b))
    x[pos] = xp
    pos = np.where(x>=0.)[0]
But I don't recommend using it; you should use the minimize option.
Even faster and more reliable, also using minimization, is scipy.optimize.lsq_linear, which is specially dedicated to linear least-squares problems.
Note that the bounds are passed transposed compared to the minimize convention, and use np.inf instead of None for the unbounded side.
Working example:
from scipy.optimize import lsq_linear
n = A.shape[1]
res = lsq_linear(A, b, bounds=np.array([(0.,np.inf) for i in range(n)]).T, lsmr_tol='auto', verbose=1)
y = res.x
You provide a matrix A and a vector b that have the same number of rows (= the dimension of the range of A).
I have the following code to solve a non-negative least-squares problem, using scipy.optimize.nnls.
import numpy as np
from scipy.optimize import nnls
A = np.array([[60, 90, 120],
[30, 120, 90]])
b = np.array([67.5, 60])
x, rnorm = nnls(A,b)
print(x)
#[ 0. 0.17857143 0.42857143]
# Now need to have this array sum to 1.
What I want to do is apply a constraint on the solution x so that it sums to 1. How can I do that?
I don't think you can use nnls directly, as the Fortran code it calls doesn't allow extra constraints. However, the constraint that the solution sums to one can be introduced as a third equation, so your example system takes the form
60 x1 + 90 x2 + 120 x3 = 67.5
30 x1 + 120 x2 + 90 x3 = 60
x1 + x2 + x3 = 1
As this is now a square system of linear equations, the exact solution can be obtained from x = np.linalg.solve(A, b), giving x = [0.6875, 0.3750, -0.0625]. This requires x3 to be negative; therefore, there is no exact non-negative solution to this problem.
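A quick sketch of that exact solve with the augmented system:

import numpy as np

A = np.array([[60., 90., 120.],
              [30., 120., 90.],
              [ 1.,   1.,   1.]])
b = np.array([67.5, 60., 1.])
x = np.linalg.solve(A, b)
print(x)  # [ 0.6875  0.375  -0.0625] -- x3 < 0, so no exact non-negative solution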
For an approximate solution where x is constrained to be non-negative, this can be obtained using:
import numpy as np
from scipy.optimize import nnls
# Define the residual function (used below to check how well A x matches b)
def fn(x, A, b):
    return np.sum(A*x, 1) - b
#Define problem
A = np.array([[60., 90., 120.],
[30., 120., 90.],
[1., 1., 1. ]])
b = np.array([67.5, 60., 1.])
x, rnorm = nnls(A,b)
print(x,x.sum(),fn(x,A,b))
which gives x = [0.60003332, 0.34998889, 0.] with x.sum() = 0.95.
I think if you wanted a more general solution including sum constraints, you'd need to use minimize with explicit constraints/bounds in the following form:
import numpy as np
from scipy.optimize import minimize
from scipy.optimize import nnls
#Define problem
A = np.array([[60, 90, 120],
[30, 120, 90]])
b = np.array([67.5, 60])
#Use nnls to get initial guess
x0, rnorm = nnls(A,b)
#Define minimisation function
def fn(x, A, b):
    return np.linalg.norm(A.dot(x) - b)
#Define constraints and bounds
cons = {'type': 'eq', 'fun': lambda x: np.sum(x)-1}
bounds = [[0., None],[0., None],[0., None]]
#Call minimisation subject to these values
minout = minimize(fn, x0, args=(A, b), method='SLSQP',bounds=bounds,constraints=cons)
x = minout.x
print(x,x.sum(),fn(x,A,b))
which gives x = [0.674999366, 0.325000634, 0.] and x.sum() = 1. From minimize, the sum constraint is met exactly, but the fit to the original equations is looser: np.dot(A, x) = [69.75001902, 59.25005706] instead of b = [67.5, 60].