I am trying to solve nonlinear equations in Python. I have tried using SymPy's solver, but it doesn't seem to work in a for loop. I am trying to solve for the variable x over a range of inputs [N].
I have attached my code below:
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
f_curve_coefficients = [-7.14285714e-02, 1.96333333e+01, 6.85130952e+03]
S = [0.2122, 0, 0]
a2 = f_curve_coefficients[0]
a1 = f_curve_coefficients[1]
a0 = f_curve_coefficients[2]
s2 = S[0]
s1 = S[1]
s0 = S[2]
answer=[]
x = symbols('x')
for N in range(0,2500,5):
    solve([a2*x**2+a1*N*x+a0*N**2-s2*x**2-s1*x-s0-0])
    answer.append(x)
print(answer)
There could be more efficient ways of solving this problem than using sympy; any help will be much appreciated.
Note: I am still new to Python after transitioning from Matlab. I could easily solve this problem in Matlab and could attach the code, but I am battling with it in Python.
Answering your question "There could be more efficient ways of solving this problem than using sympy":
you can use fsolve to find the roots of a nonlinear equation:
fsolve returns the roots of the (non-linear) equations defined by func(x) = 0 given a starting estimate
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html
Below is the code:
from scipy.optimize import fsolve
import numpy as np
def f(variables):
    (x, y) = variables
    first_eq = 2*x + y - 1
    second_eq = x**2 + y**2 - 1
    return [first_eq, second_eq]
roots = fsolve(f, (-1 , -1)) # fsolve(equations, X_0)
print(roots)
# [ 0.8 -0.6]
print(np.isclose(f(roots), [0.0, 0.0])) # func(root) should be almost 0.0.
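Applied to the equation in your question, a minimal sketch with your coefficients (the starting guess x0=1.0 is an assumption you may need to tune per N):
from scipy.optimize import fsolve

a2, a1, a0 = -7.14285714e-02, 1.96333333e+01, 6.85130952e+03
s2, s1, s0 = 0.2122, 0, 0

answer = []
for N in range(0, 2500, 5):
    # one scalar equation in x for each value of N
    root = fsolve(lambda x: a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0, x0=1.0)
    answer.append(root[0])
print(answer[:5])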
If you prefer sympy you can use nsolve.
>>> from sympy import symbols, nsolve, exp
>>> x, y = symbols('x y')
>>> nsolve([x + y**2 - 4, exp(x) + x*y - 3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]
The first argument is a list of equations, the second is list of variables and the third is an initial guess.
For more details, you can check out this similar question asked earlier on Stack Overflow about ways to solve nonlinear equations in Python:
How to solve a pair of nonlinear equations using Python?
According to this documentation, the output of solve is the solution. Nothing is assigned to x; that's still just the symbol.
x = symbols('x')
for N in range(0, 2500, 5):
    result = solve(a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0)
    answer.append(result)
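For completeness, a minimal runnable version with the constants from the question (the slice in the final print is just to keep the output short):
from sympy import symbols, solve

a2, a1, a0 = -7.14285714e-02, 1.96333333e+01, 6.85130952e+03
s2, s1, s0 = 0.2122, 0, 0

x = symbols('x')
answer = []
for N in range(0, 2500, 5):
    # solve returns a list with the roots of the quadratic in x for this N
    result = solve(a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0, x)
    answer.append(result)
print(answer[:3])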
Related
I would like to solve a linear equation system in numpy in order to check whether a point lines up with a vector or not.
The following equations are given for a vector2:
point[x] = vector1[x] + λ * vector2[x]
point[y] = vector1[y] + λ * vector2[y]
NumPy's linalg.solve() offers the option to solve two equations of the form:
ax + by = c
by defining the parameters a and b in a numpy.array().
But I can't seem to find a way to deal with equations with one fixed parameter, like:
m*x + b = 0
Am I missing something, or do I need a different approach?
Thanks in advance!
Hi, I'll give this question a try.
The documentation for numpy.linalg.solve says:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Note the assumptions made on the matrix!
Same lambda
If the lambda for the point[x] and point[y] equations should be the same, then just concatenate all the vectors.
x_new = np.concatenate([x,y])
vec1_new = np.concatenate([vec1_x,vec1_y])
...
This will likely overdetermine your system: you have too many equations and only one parameter to determine, so the well-determined assumption is violated. My approach would be to go with least squares.
numpy.linalg.lstsq provides a least-squares method as well, solving the equation y = mx + c. For your case this is y = point[x], x = vector2[x] and c = vector1[x].
This is copied from the numpy.linalg.lstsq example:
x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T  # stack rows [x; 1], then transpose to columns [x, 1]
m, c = np.linalg.lstsq(A, y, rcond=None)[0]
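Applied to the original question, a hedged sketch (the numeric values are made-up examples): the two scalar equations point = vector1 + lambda * vector2 become one least-squares problem in the single unknown lambda.
import numpy as np

point = np.array([3.0, 5.0])      # assumed example values
vector1 = np.array([1.0, 1.0])
vector2 = np.array([1.0, 2.0])

# point - vector1 = lambda * vector2: two equations, one unknown
A = vector2.reshape(-1, 1)
b = point - vector1
lam, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(lam)        # [2.]
print(residuals)  # near zero means the point lies on the line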
Different lambdas
If the lambdas are different, stack vector2[x] and vector2[y] horizontally and you have [lambda_1, lambda_2] to find. You will probably again have more equations than lambdas and will end up with a least-squares solution.
Note
Keep in mind that even if you construct your system from a straight line and a fixed lambda, you might still need a least-squares approach due to rounding and numeric differences.
You can solve your equation 2*x + 4 = 0 with sympy:
from sympy.abc import x
from sympy import Eq, solve
eq = Eq(2 * x + 4, 0)
print(solve(eq))
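This prints [-2], i.e. x = -2.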
I wrote some Python code that imports data which I then manipulate to get a nonsquare matrix A. I then used the following code to solve the matrix equation.
from scipy.optimize import lsq_linear
X = lsq_linear(A_normalized, Y, bounds=(0, np.inf), method='bvls')
As you can see I used this particular method because I require all the X coefficients to be positive. However, I realized that lsq_linear from scipy.optimize minimizes the L2 norm of AX - Y to solve the equation. I was wondering if anyone knows of an alternative to lsq_linear that solves the equation by minimizing the L1 norm instead. I have looked in the scipy documentation, but so far I haven't had any luck finding such an alternative myself.
(Note that I actually know what Y is and what I am trying to solve for is X).
Edit: After trying various suggestions from the comments section and after much frustration I finally managed to sort of make it work using cvxpy. However, there is a problem with it. First of all the elements of X are supposed to all be positive, but that is not the case. Moreover, when I multiply the matrix A_normalized with X they are not equal to Y. My code is below. Any suggestions on what I can do to fix it would be highly appreciated. (By the way my original use of lsq_linear in the code above gave me an X that satisfied A_normalized*X = Y.)
import cvxpy as cp
from cvxpy.atoms import norm
import numpy as np
Y = specificity
x = cp.Variable(22)
objective = cp.Minimize(norm((A_normalized @ x - Y), 1))
constraints = [0 <= x]
prob = cp.Problem(objective, constraints)
result = prob.solve()
print("Optimal value", result)
print("Optimal var")
X = x.value # A numpy ndarray.
print(X)
A_normalized @ X == Y
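One thing worth checking before trusting x.value is whether the solver actually reached an optimal status, and how large the residual is; unlike your earlier exact lsq_linear fit, an L1 fit is not required to reproduce Y exactly. A minimal check, reusing the names above:
print(prob.status)                         # should be 'optimal'
print(X.min())                             # tiny negatives are solver tolerance; clip if needed
X = np.clip(X, 0, None)
print(np.abs(A_normalized @ X - Y).max())  # the L1 fit need not make this zero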
So I have this complicated equation which I need to solve. I think the final x should be of order 1E22. The problem is that this code crashes my entire system. Is there a fix? I tried scipy.optimize.root, but it doesn't really solve anything at this order of magnitude (it returns the initial guess as the final answer, without any iteration).
from scipy.optimize import fsolve
import math
import mpmath
import scipy
import sympy
from sympy.solvers import solve
from sympy import Symbol
from sympy import sqrt,exp
x = Symbol('x',positive=True)
cs = 507.643E-12
esi = 1.05E-10
q = 1.6E-19
T = 300
k = 1.381E-23
ni = 1.45E16
print(solve(exp(x/((2*cs/(esi*q))**2)) - ((x/ni)**(esi*k*T)),x))
def func(N):
    return (math.exp(N/math.pow(2*cs/(esi*q),2)) - math.pow(N/ni,(esi*k*T)))
n_initial_guess = 1E21
n_solution = fsolve(func, n_initial_guess)
print ("The solution is n = %f" % n_solution)
print ("at which the value of the expression is %f" % func(n_solution))
print(scipy.optimize.root(func, 1E22,tol=1E-10))
Neither of the scipy functions works. The sympy function crashes my laptop. Would Matlab be ideal for this?
Numeric solution with SciPy
The problem that SciPy has with this equation is loss of significance. You are raising N to the tiny power esi*k*T, which makes it very near 1; in floating-point arithmetic, it becomes exactly 1. Similarly, the part coming from the exponential becomes 1. The two parts are then subtracted, leaving 0: the equation appears to be already solved. You could have seen this happening by printing func(1E21); it returns 0.
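You can see the effect directly (using the constants from the question):
import math
cs, esi, q, T, k, ni = 507.643E-12, 1.05E-10, 1.6E-19, 300, 1.381E-23, 1.45E16
print(math.exp(1E21 / (2*cs/(esi*q))**2))  # 1.0: the exponent rounds to 0
print(math.pow(1E21/ni, esi*k*T))          # 1.0: the tiny power kills all significance
print(math.exp(1E21 / (2*cs/(esi*q))**2) - math.pow(1E21/ni, esi*k*T))  # 0.0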
The way to deal with the loss of significance is to rewrite the equation, from the original form
exp(x/((2*cs/(esi*q))**2)) == (x/ni)**(esi*k*T)
by raising both sides to the power 1/(esi*k*T):
exp(x*esi*q**2/(k*T*(2*cs)**2)) == x/ni
So func becomes
def func(N):
    return np.exp(N*esi*q**2/(k*T*(2*cs)**2)) - (N/ni)
(It is advisable to use NumPy functions with SciPy solvers.) That said, the solvers, for example root(func, 1E10), will report being unable to converge to a solution.
Symbolic solution with SymPy
SymPy is for solving equations analytically. It does not care for a bunch of floating point numbers. Give it a symbolic equation instead:
from sympy import symbols, solve, exp

x, a, b, c = symbols('x, a, b, c', positive=True)
sol = solve(exp(x/a) - (x/b)**c, x)[0]
The solution is obtained as -a*c*LambertW(-b/(a*c)). Then it can be evaluated:
cs = 507.643E-12
esi = 1.05E-10
q = 1.6E-19
T = 300
k = 1.381E-23
ni = 1.45E16
print(sol.evalf(subs={a: (2*cs/(esi*q))**2, b: ni, c: esi*k*T}))
This prints -21301663061.0653 - 4649834682.69762*I, confirming what one would already expect from the failure of convergence with SciPy: there are no real solutions of the equation.
I'm reading an article about Bloom filters, https://en.wikipedia.org/wiki/Bloom_filter, in which an expression is derived for the optimal number of hash functions. I'd like to reproduce the computation for the simplified case that m = n, that is, I'd like to determine the minimum of the function
(1-exp(-x))**x
which, from the article, should occur at x = ln(2). I tried doing this with sympy as follows:
In [1]: from sympy import *
In [2]: x, y, z = symbols('x y z')
In [3]: init_printing(use_unicode=True)
In [8]: from sympy.solvers import solve
In [9]: solve(diff((1-exp(-x))**x,x), x)
However, I get a
NotImplementedError: multiple generators [x, exp(x), log(1 - exp(-x))]
No algorithms are implemented to solve equation x*exp(-x)/(1 - exp(-x)) + log(1 - exp(-x))
I would just like to double-check whether Sympy really cannot solve this problem? Perhaps I need to add additional constraints/assumptions on x?
When you run into this issue, where an equation can't be solved by manipulating symbols (solving analytically), it is possible that it can still be solved by trying different numbers and getting to (or very close to) the correct answer (solving numerically).
You can convert your sympy solution to a numpy-based function, and use scipy to solve numerically.
from sympy import symbols, diff, exp, lambdify
from scipy.optimize import fsolve

x = symbols('x')
func_np = lambdify(x, diff((1 - exp(-x))**x, x), modules=['numpy'])
solution = fsolve(func_np, 0.5)
This finds the root at 0.69314718, which is ln(2), as expected.
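Alternatively, since the underlying task is minimization, you could skip the derivative and minimize the function directly; a sketch with scipy.optimize.minimize_scalar (the bracketing bounds are an assumption):
import numpy as np
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda x: (1 - np.exp(-x))**x, bounds=(0.01, 5), method='bounded')
print(res.x)  # ~0.693147, i.e. ln(2)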
I'm trying to solve the equation y'' + (epsilon-x^2)y = 0 numerically using odeint. I know the solutions (the wavefunctions of a QHO), but the output from odeint has no apparent relation to it. I can solve ODEs with constant coefficients just fine, but as soon as I move to variable ones, I can't solve any of the ones I tried. Here's my code:
#!/usr/bin/python2
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as spi
x = np.linspace(-5, 5, int(1e4))
n = 0
epsilon = 2*n+1
def D(Y,x):
    return np.array([Y[1], (epsilon-x**2)*Y[0]])
Y0 = [0,1]
Y = spi.odeint(D,Y0,x)
# Y is an array with the first column being y(x) and the second y'(x) for all x
plt.plot(x,Y[:,0],label='num')
#plt.plot(x,Y[:,1],label='numderiv')
plt.legend()
plt.show()
And the plot:
[not enough rep:] https://drive.google.com/file/d/0B6840LH2NhNpdUVucUxzUGFpZUk/edit?usp=sharing
Look here for plots of solution: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hosc5.html
It looks like your equation is not correctly interpreted. You have the differential equation y'' + (epsilon-x^2)y = 0, but you forgot a minus sign in your vector form. In particular it should be
y[0]' = y[1]
y[1]' = -(epsilon-x^2)y[0]
So, adding the minus sign in front of the epsilon term:
def D(Y,x):
    return np.array([Y[1], -(epsilon-x**2)*Y[0]])
In fact the plot you have is consistent with the DE y'' - (epsilon-x^2)y = 0. Check it out: Wolfram Alpha
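For completeness, here is the question's script with only the sign fixed (same initial conditions as the original):
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as spi

x = np.linspace(-5, 5, int(1e4))
n = 0
epsilon = 2*n + 1

def D(Y, x):
    # y'' = -(epsilon - x**2) * y, written as a first-order system
    return np.array([Y[1], -(epsilon - x**2)*Y[0]])

Y0 = [0, 1]
Y = spi.odeint(D, Y0, x)
plt.plot(x, Y[:, 0], label='num')
plt.legend()
plt.show()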