I was working on some linear-equation problems involving spline functions. Splines already exist in our beautiful Python libraries, but my professor asked me to find every coefficient of the spline function myself, so that I can fully understand its mathematical structure.
As a result, I came up with a set of simultaneous linear equations, where the number of equations depends on the user input.
So I allocated my variables dynamically: if the user declares 3 points, the program prints out 3 equations; if 4 points, 4 equations; and so on.
Now I have to solve these equations without knowing in advance how many there will be.
I searched for methods of solving linear equations with SymPy's solve, but none of them showed how to pass the equations in dynamically, depending on the user input.
All the articles say something like: oh, it's easy, you can write:
solve((eq1, eq2), dict=True)
but mine will be like:
solve((eq1, eq2, eq3, eq4, eq5, eq6, ...))
and the variables will also be like (a_0, a_1, a_2, a_3, ...).
I tried using 'eq{}'.format(i) to refer to all the eq variables I made, but it consistently failed for reasons I don't understand.
How can I pass all the equations and variables I made to the solve method dynamically?
I'm a newbie to Python, please help me.
If there are no symbolic variables other than the ones for which you want to solve, then my favorite way of getting the symbols is:
def neq(n):
    # return a list or tuple containing the n equations
    ...

from sympy import Tuple, solve
eqs = Tuple(*neq(n))
syms = eqs.free_symbols
sol = solve(eqs, syms)
You will get back a single dictionary (or a list of dictionaries) mapping each symbol to its solution.
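For concreteness, here is a minimal sketch of that pattern; the system built by neq below is a hypothetical stand-in, not the asker's spline equations:

from sympy import Tuple, Eq, symbols, solve

def neq(n):
    # hypothetical stand-in system: n unknowns a_0 .. a_{n-1},
    # with the i-th equation a_0 + ... + a_i = i + 1
    a = symbols(f'a_0:{n}')
    return [Eq(sum(a[:i + 1]), i + 1) for i in range(n)]

eqs = Tuple(*neq(4))
syms = eqs.free_symbols
sol = solve(eqs, syms)
print(sol)  # {a_0: 1, a_1: 1, a_2: 1, a_3: 1}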
Related
I have an assignment where I need to write a single function that runs Newton's method; I should then be able to plug other functions into it and it will solve them all. I wrote one that works for equations with one variable. I only need to solve for one variable from the system, but I don't know how to do that in code without solving for all four of them.
the function I wrote to run newtons method is this:
def fnewton(function, dx, x, n):
    # eval is used to evaluate whatever I put in the function place
    # when I call fnewton; this won't work without eval
    def f(x):
        return eval(function)

    def df(x):
        return eval(dx)

    # this is literally just Newton's method; putting n in the range
    # makes it count iterations
    for _ in range(n):
        x = x - f(x)/df(x)

    print('root is at')
    print(x)
    print('after this many iterations:')
    print(n)
my current system of equations function looks like this:
def c(x):
    T = x[0]
    y = x[1]
    nl = x[2]
    nv = x[3]
    RLN = .63*Antoine(An, Bn, Cn, T) - y*760
    RLH = (1 - .63)*Antoine(Ah, Bh, Ch, T) - (1 - y)*760
    N = .63*nl + y*nv - 50
    H = (1 - .63)*nl + (1 - y)*nv - 50
    return [RLN, RLH, N, H]
To use my function to solve this I've entered multiple variations of:
fnewton("c(x)","dcdx(x)", (2,2,2,2), 10)
Do I need to change the system of equations into one equation somehow? I don't know how to manipulate my code to make this work while still working for equations with only one variable.
To perform Newton's method in multiple dimensions, you must replace the simple derivative by a Jacobian matrix, that is, a matrix that holds the derivatives of all components of your multidimensional function with respect to all given variables. This is described here: https://en.wikipedia.org/wiki/Newton%27s_method#Systems_of_equations
(or, perhaps more helpful, here: https://web.mit.edu/18.06/www/Spring17/Multidimensional-Newton.pdf in Sec. 1.4)
Instead of f(x)/f'(x), you need to work with the inverse of the Jacobian matrix times the vector function f. So the formula is actually quite similar!
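As a rough sketch (not a drop-in replacement for the fnewton above, and assuming you can supply a function J that returns the Jacobian as a square array), the multidimensional update looks like this; note that it solves J(x) s = F(x) rather than inverting the Jacobian:

import numpy as np

def fnewton_system(F, J, x0, n=20, tol=1e-10):
    # Newton's method in several variables: x -> x - J(x)^(-1) F(x)
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        step = np.linalg.solve(J(x), F(x))  # solve instead of inverting
        x = x - step
        if np.linalg.norm(step) < tol:      # converged
            break
    return x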
I want to solve a second-order differential equation with variable coefficients using something like odeint. The problem is that odeint doesn't work if the initial conditions are complex (which is the case here).
Do you know a way to solve the aforementioned equation with something similar to odeint?
odeint does not accept complex variables. You could use: the newer solver, solve_ivp; the older ode class with the "zvode" integrator; or odeintw, a wrapper of odeint that I wrote that handles complex-valued and matrix-valued differential equations.
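For instance, solve_ivp accepts complex initial conditions directly with its Runge-Kutta methods. A minimal sketch, using the hypothetical equation z'' = -z written as a first-order system:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    # u = [z, z']; placeholder dynamics z'' = -z
    z, dz = u
    return [dz, -z]

sol = solve_ivp(rhs, (0, 10), [1 + 1j, 0j], t_eval=np.linspace(0, 10, 101))
z = sol.y[0]  # complex-valued solution z(t)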
You could always work with the real components (odeint convention)
def odesys(u, t):
    z = u[0] + 1j*u[1]
    dz = u[2] + 1j*u[3]
    d2z = f(t, z, dz)
    return [dz.real, dz.imag, d2z.real, d2z.imag]
where f stands for the explicit form of the second order ODE.
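Calling odeint then looks roughly like this, assuming odesys and f are defined as above, with hypothetical complex initial values z0 and dz0 for z and z':

import numpy as np
from scipy.integrate import odeint

z0, dz0 = 1 + 1j, 0j            # hypothetical complex initial conditions
t = np.linspace(0, 10, 101)
u0 = [z0.real, z0.imag, dz0.real, dz0.imag]
sol = odeint(odesys, u0, t)
z = sol[:, 0] + 1j*sol[:, 1]    # recover the complex solution z(t)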
If I remember correctly, one of the integrators ("zvode") that you can use in scipy.integrate.ode works directly with complex state variables.
EDIT: looks like this was already answered before here
It didn't show up in my searches because I didn't know the right nomenclature. I'll leave the question here for now in case someone arrives here because of the constraints.
I'm trying to optimize a function which is flat at almost all points (a "step function", but in a higher dimension).
The objective is to optimize a set of weights that must sum to one and that are the parameters of a function I need to minimize.
The problem is that, since the function is flat at most points, gradient techniques fail: they immediately converge on the starting guess.
My hypothesis is that this could be solved with (a) simulated annealing or (b) genetic algorithms. SciPy points me to basinhopping. However, I cannot find any way to apply the constraint (the weights must sum to 1) or the ranges (the weights must be between 0 and 1) using SciPy.
Actual question: How can I solve a minimization problem without gradients, and also use constraints and ranges for the input variables?
The following is a toy example (evidently this one could be solved using the gradient):
# import minimize
from scipy.optimize import minimize

# define a toy function to minimize
def my_small_func(g):
    x = g[0]
    y = g[1]
    return x**2 - 2*y + 1

# define the starting guess
start_guess = [.5, .5]

# define the acceptable ranges (for [g1, g2] respectively)
my_ranges = ((0, 1), (0, 1))

# define the constraint (they must always sum to 1)
def constraint(g):
    return g[0] + g[1] - 1

cons = {'type': 'eq', 'fun': constraint}

# minimize
minimize(my_small_func, x0=start_guess, method='SLSQP',
         bounds=my_ranges, constraints=cons)
I usually use R, so maybe this is a bad answer, but anyway, here goes.
You can solve optimization problems like this using a global optimizer, for example Differential Evolution. The linked method does not use gradients. As for constraints, I usually build them into the objective manually. That looks something like this:
# some dummy function to minimize
def objective_function(a, b):
    if a + b != 1:
        # constraint not met: return a very high value,
        # indicating a very bad fit
        return 1e90
    else:
        # do actual stuff of interest
        return fit_value
Then you simply feed this function to the differential evolution package and that should do the trick. Methods like differential evolution are made to solve, in particular, very high-dimensional problems. However, the constraint you mentioned can be a problem, as it will likely produce very many invalid parameter configurations. This is not necessarily a problem for the algorithm, but it means you need to do a lot of tweaking and should expect long waiting times. Depending on your problem, you could try optimizing weights/parameters in blocks: optimize the parameters given a set of weights, then optimize the weights given the previous set of parameters, and repeat many times. A sketch using SciPy's own implementation is given below.
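A minimal sketch with scipy.optimize.differential_evolution, using a soft penalty instead of the hard rejection above (the penalty weight 1e3 is an arbitrary choice for this sketch):

from scipy.optimize import differential_evolution

def my_small_func(g):
    return g[0]**2 - 2*g[1] + 1

def penalized(g):
    # soft penalty steering the search toward g[0] + g[1] == 1
    return my_small_func(g) + 1e3 * abs(g[0] + g[1] - 1)

result = differential_evolution(penalized, bounds=[(0, 1), (0, 1)], seed=0)
print(result.x, result.fun)  # should land near x=0, y=1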
Hope this helps :)
I'm using fsolve to solve a nonlinear equation. My problem is that the solutions change depending on the starting point, and I am not sure that the ones I found are the most reasonable.
This is the code
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq, newton

A = np.arange(0.05, 0.95, 0.01)
PHI = np.deg2rad(np.arange(0, 90, 1))

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

B = np.zeros((len(A), len(PHI)))
for i in range(len(A)):
    for j in range(len(PHI)):
        a = A[i]
        Phi = PHI[j]
        b = fsolve(f, 1)
        B[i, j] = b
I fixed x0 = 1 because it seems to give the most reasonable values. But sometimes I think the method doesn't converge, and the resulting values are too big.
What can I do to find the best solution?
Many thanks!
The eternal issue with turning nonlinear solvers loose is that you need a really good understanding of your function, your initial guess, the solver itself, and the problem you are trying to address.
I note that there are many (a,Phi) combinations where your function does not have real roots. You should do some math, directed by the actual problem you are trying to solve, and determine where the function should have roots. Not knowing the actual problem, I can't do that for you.
Also, as noted in a (since deleted) answer, this is cyclical in b, so using a bounded solver (such as scipy.optimize.minimize with method='L-BFGS-B') might help to keep things under control. Note that to find roots with a minimizer, you minimize the square of your function. If the found minimum is not close to zero (how close is for you to define, based on the problem), the real roots might be a complex-conjugate pair.
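A minimal sketch of that approach, assuming fixed (hypothetical) values of a and Phi and bounding b to one period:

import numpy as np
from scipy.optimize import minimize

a, Phi = 0.5, np.deg2rad(30)  # hypothetical fixed parameters

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

res = minimize(lambda b: f(b[0])**2, x0=[1.0],
               method='L-BFGS-B', bounds=[(0, 2*np.pi)])
print(res.x, res.fun)  # res.fun close to zero indicates a genuine root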
Good luck.
I am having trouble solving the optical Bloch equations, which form a first-order ODE system with complex values. I have found that scipy may solve such a system, but its webpage offers too little information and I can hardly understand it.
I have 8 coupled first-order ODEs, and I should generate a function like:
def derv(y):
    # compute the time derivative of the elements in y
    # and return the answers as an array
    ...
then do complex_ode(derv)
My questions are:
1. My y is not a list but a matrix; how can I produce output in the correct form to fit into complex_ode()?
2. complex_ode() needs a Jacobian, and I have no idea how to start constructing one. What type should it be?
3. Where should I put the initial conditions (as in the normal ode) and the time linspace?
this is scipy's complex_ode link:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.complex_ode.html
Could anyone provide me with more information so that I can learn a bit more?
I think we can at least point you in the right direction. The optical Bloch equation is a problem which is well understood in the scientific community (although not by me :-) ), so there are already solutions on the internet to this particular problem.
http://massey.dur.ac.uk/jdp/code.html
However, to address your needs: you spoke of using complex_ode, which I suppose is fine, but I think plain scipy.integrate.ode will work just as well, according to its documentation:
from scipy.integrate import ode

y0, t0 = [1.0j, 2.0], 0

def f(t, y, arg1):
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

def jac(t, y, arg1):
    return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
t1 = 10
dt = 1
while r.successful() and r.t < t1:
    r.integrate(r.t + dt)
    print(r.t, r.y)
You also have the added benefit of an older, more established, and better documented function. I am surprised you have 8 and not 9 coupled ODEs, but I'm sure you understand this better than I do.
Yes, you are correct: your function should be of the form ydot = f(t, y), which you call derv, but you're going to need to make sure your function takes at least two parameters, like derv(t, y). If your y is a matrix, no problem! Just "reshape" it in the derv(t, y) function like so:
Y = numpy.reshape(y, (num_rows, num_cols))
As long as num_rows*num_cols = 8, your number of ODEs, you should be fine. Then use the matrix in your computations. When you're all done, just be sure to return a flat vector and not a matrix, like:
out = numpy.reshape(Y, (8,))
The Jacobian is not required, but it will likely allow the computation to proceed much more quickly. If you do not know how to compute it, you may want to consult Wikipedia or a calculus textbook. It's pretty simple, but can be time consuming.
As far as initial conditions go, you should probably already know what those should be, whether they're complex or real valued. As long as you select values that are within reason, it shouldn't matter much.
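Putting those pieces together, a minimal sketch (with placeholder dynamics, not the actual Bloch equations, and a hypothetical 2x4 split of the 8 components) might look like:

import numpy as np
from scipy.integrate import complex_ode

num_rows, num_cols = 2, 4  # hypothetical split; num_rows*num_cols = 8

def derv(t, y):
    Y = np.reshape(y, (num_rows, num_cols))  # work with the matrix form
    dY = -1j * Y                             # placeholder dynamics
    return np.reshape(dY, (8,))              # hand back a flat vector

r = complex_ode(derv)
r.set_initial_value(np.ones(8, dtype=complex), 0.0)
while r.successful() and r.t < 1.0:
    r.integrate(r.t + 0.1)
print(r.t, r.y)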