How to find out if a SymPy variable is complex? - python

I am writing code that involves solving this equation:
X = solve(Theta_Mod_Eqn*Ramp_Equation/(x+PT) - C, x)
I am using the SymPy library. The equation has 7 roots; some are complex and some are real. I am unable to separate them, because SymPy numbers are not instances of Python's complex type, so the isinstance(i, complex) check below never filters anything out:
for i in X:
    if not isinstance(i, complex):
        if i > -0.01 and i < maxSheaveDisp:
            A = i
For one case, the root is
i = -0.000581431210287302 - 0.2540334478167*I
In [39]: i == complex
Out[39]: False
How to find out if the variable is complex?

The set of real numbers is a subset of the set of complex numbers. So, every real number is a complex number. For example, 3 is a complex number.
The correct question to ask is how to find out whether a root is real. For that, you can use i.is_real if i is a SymPy object:
for i in X:
    if i.is_real:
        if i > -0.01 and i < maxSheaveDisp:
            A = i
One can also compare im(i) to 0: if im(i) == 0. This works for Python floats too.
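For instance, a short sketch of that alternative (assuming X still holds the roots returned by solve in the question):
from sympy import im

# keep only the roots whose imaginary part is exactly zero
real_roots = [i for i in X if im(i) == 0]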

Related

When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?

Entailed by the fundamental theorem of algebra is the existence of n complex roots for the formula z^n=a where a is a real number, n is a positive integer, and z is a complex number. Some roots will also be real in addition to complex (i.e. a+bi where b=0).
One example where there are multiple real roots is z^2 = 1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate. The solution z = -1 is obtained by z = sqrt(1) = sqrt(-1 * -1) = I * I = -1, where I is the imaginary unit.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^{1/3}, which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know which root is systematically calculated in the contexts of Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly used algorithm that happens to return a specific root. I've imagined an equivalence class such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since typically the identity xᵃ = exp(a⋅log(x)) is used to define the general power, you'll get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
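A small check of that identity with NumPy (nothing beyond np.power, np.log and np.exp is assumed):
import numpy as np

z = np.power(-1 + 0j, 1/3)        # principal cube root of -1
w = np.exp(np.log(-1 + 0j) / 3)   # same root via exp(log(x)/3)
print(z, w, np.isclose(z, w))     # (0.5+0.866...j) (0.5+0.866...j) True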

How to solve an equation with one unknown variable [duplicate]

Let's say I have an equation:
2x + 6 = 12
With algebra we can see that x = 3. How can I make a program in Python that can solve for x? I'm new to programming, and I looked at eval() and exec(), but I can't figure out how to make them do what I want. I do not want to use external libraries (e.g. SAGE); I want to do this in just plain Python.
How about SymPy? Their solver looks like what you need. Have a look at their source code if you want to build the library yourself…
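For the equation in the question, a minimal SymPy sketch (using the ordinary sympy.solve API) looks like:
from sympy import Eq, solve, symbols

x = symbols('x')
print(solve(Eq(2*x + 6, 12), x))   # [3]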
There are two ways to approach this problem: numerically and symbolically.
To solve it numerically, you have to first encode it as a "runnable" function - stick a value in, get a value out. For example,
def my_function(x):
return 2*x + 6
It is quite possible to parse a string to automatically create such a function; say you parse 2x + 6 into a list, [6, 2] (where the list index corresponds to the power of x - so 6*x^0 + 2*x^1). Then:
def makePoly(arr):
    def fn(x):
        return sum(c*x**p for p, c in enumerate(arr))
    return fn

my_func = makePoly([6, 2])
my_func(3)  # returns 12
You then need another function which repeatedly plugs an x-value into your function, looks at the difference between the result and what it wants to find, and tweaks its x-value to (hopefully) minimize the difference.
def dx(fn, x, delta=0.001):
    return (fn(x+delta) - fn(x))/delta

def solve(fn, value, x=0.5, maxtries=1000, maxerr=0.00001):
    for tries in range(maxtries):
        err = fn(x) - value
        if abs(err) < maxerr:
            return x
        slope = dx(fn, x)
        x -= err/slope
    raise ValueError('no solution found')
There are lots of potential problems here - finding a good starting x-value, assuming that the equation actually has a solution (x^2 + 2 = 0, for example, has no real-valued answers), hitting the limits of computational accuracy, etc. But in this case, the error minimization function is suitable and we get a good result:
solve(my_func, 16) # returns (x =) 5.000000000000496
Note that this solution is not absolutely, exactly correct. If you need it to be perfect, or if you want to try solving families of equations analytically, you have to turn to a more complicated beast: a symbolic solver.
A symbolic solver, like Mathematica or Maple, is an expert system with a lot of built-in rules ("knowledge") about algebra, calculus, etc; it "knows" that the derivative of sin is cos, that the derivative of kx^p is kpx^(p-1), and so on. When you give it an equation, it tries to find a path, a set of rule-applications, from where it is (the equation) to where you want to be (the simplest possible form of the equation, which is hopefully the solution).
Your example equation is quite simple; a symbolic solution might look like:
=> LHS([6, 2]) RHS([16])
# rule: pull all coefficients into LHS
LHS, RHS = [lh - rh for lh, rh in zip_longest(LHS, RHS, fillvalue=0)], [0]
=> LHS([-10, 2]) RHS([0])
# rule: solve first-degree poly
if RHS == [0] and len(LHS) == 2:
    LHS, RHS = [0, 1], [-LHS[0]/LHS[1]]
=> LHS([0, 1]) RHS([5])
and there is your solution: x = 5.
I hope this gives the flavor of the idea; the details of implementation (finding a good, complete set of rules and deciding when each rule should be applied) can easily consume many man-years of effort.
Python may be good, but it isn't God...
There are a few different ways to solve equations. SymPy has already been mentioned, if you're looking for analytic solutions.
If you're happy to just have a numerical solution, Numpy has a few routines that can help. If you're just interested in solutions to polynomials, numpy.roots will work. Specifically for the case you mentioned:
>>> import numpy
>>> numpy.roots([2,-6])
array([3.0])
For more complicated expressions, have a look at scipy.optimize.fsolve.
Either way, you can't escape using a library.
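As a quick sketch of the SciPy route (fsolve lives in scipy.optimize; the starting guess of 0.0 is arbitrary):
from scipy.optimize import fsolve

# find x such that 2*x + 6 - 12 == 0
root = fsolve(lambda x: 2*x + 6 - 12, x0=0.0)
print(root)   # [3.]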
If you only want to solve the extremely limited set of equations mx + c = y for positive integer m, c, y, then this will do:
import re

def solve_linear_equation(equ):
    """
    Given an input string of the format "3x+2=6", solves for x.
    The format must be as shown - no whitespace, no decimal numbers,
    no negative numbers.
    """
    match = re.match(r"(\d+)x\+(\d+)=(\d+)", equ)
    m, c, y = match.groups()
    m, c, y = float(m), float(c), float(y)  # Convert from strings to numbers
    x = (y - c)/m
    print("x = %f" % x)
Some tests:
>>> solve_linear_equation("2x+4=12")
x = 4.000000
>>> solve_linear_equation("123x+456=789")
x = 2.707317
>>>
If you want to recognise and solve arbitrary equations, like sin(x) + e^(i*pi*x) = 1, then you will need to implement some kind of symbolic maths engine, similar to Maxima, Mathematica, MATLAB's solve() or the Symbolic Toolbox, etc. As a novice, you will find this beyond your ken.
Use a different tool. Something like Wolfram Alpha, Maple, R, Octave, Matlab or any other algebra software package.
As a beginner you should probably not attempt to solve such a non-trivial problem.

a.cross(b.cross(c)) == (a.dot(c))*b - (a.dot(b))*c coming out False [duplicate]

Is there a way to check if two expressions are mathematically equal? I expected
tan(x)*cos(x) == sin(x) to output True, but it outputs False. Is there a way to make such comparisons with SymPy? Another example is
(a+b)**2 == a**2 + 2*a*b + b**2 which surprisingly also outputs False.
I found some similar questions, but none covered this exact problem.
From the SymPy documentation
== represents exact structural equality testing. “Exact” here means that two expressions will compare equal with == only if they are exactly equal structurally. Here, (x+1)^2 and x^2+2x+1 are not the same symbolically. One is the power of an addition of two terms, and the other is the addition of three terms.
It turns out that when using SymPy as a library, having == test for exact symbolic equality is far more useful than having it represent symbolic equality, or having it test for mathematical equality. However, as a new user, you will probably care more about the latter two. We have already seen an alternative to representing equalities symbolically, Eq. To test if two things are equal, it is best to recall the basic fact that if a=b, then a−b=0. Thus, the best way to check if a=b is to take a−b and simplify it, and see if it goes to 0. We will learn later that the function to do this is called simplify. This method is not infallible—in fact, it can be theoretically proven that it is impossible to determine if two symbolic expressions are identically equal in general—but for most common expressions, it works quite well.
As a demo for your particular question, we can use the subtraction of equivalent expressions and compare to 0 like so
>>> from sympy import simplify
>>> from sympy.abc import x,y
>>> vers1 = (x+y)**2
>>> vers2 = x**2 + 2*x*y + y**2
>>> simplify(vers1-vers2) == 0
True
>>> simplify(vers1+vers2) == 0
False
Alternatively you can use the .equals method to compare expressions:
from sympy import *

x = symbols('x')
expr1 = tan(x) * cos(x)
expr2 = sin(x)
expr1.equals(expr2)   # True
The solution with simplify was too slow for me (I had to cross-check multiple variables), so I wrote the following function, which does some simple checks beforehand to reduce computation time and only falls back to a full symbolic check in the last step.
import numpy as np
import sympy as sp

def check_equal(Expr1, Expr2):
    if Expr1 is None or Expr2 is None:
        return False
    if Expr1.free_symbols != Expr2.free_symbols:
        return False
    free_symbols = Expr1.free_symbols
    your_values = np.random.random(len(free_symbols))
    Expr1_num = Expr1
    Expr2_num = Expr2
    # Substitute the same random value for each symbol in both expressions
    for symbol, number in zip(free_symbols, your_values):
        Expr1_num = Expr1_num.subs(symbol, sp.Float(number))
        Expr2_num = Expr2_num.subs(symbol, sp.Float(number))
    Expr1_num = float(Expr1_num)
    Expr2_num = float(Expr2_num)
    if not np.allclose(Expr1_num, Expr2_num):
        return False
    if Expr1.equals(Expr2):
        return True
    else:
        return False
As previously stated, (expr1 - expr2).simplify() or expr1.equals(expr2) will sometimes fail to recognize equality for expressions that are complex to simplify. To deal with this, a numerical evaluation of the expressions with random numbers may constitute a relatively safe "brute force" test. I've adapted the excellent solution by @Okapi575 to:
Test the numerical equality N-times with different random numbers each time for a more confident diagnostic
Warn the user when a pair of expressions only passes the numeric test but not the symbolic equality test.
Hope it can prove useful:
import sympy as sp
import numpy as np

def check_equal(Expr1, Expr2, n=10, positive=False, strictly_positive=False):
    # Determine over what range to generate random numbers
    sample_min = -1
    sample_max = 1
    if positive:
        sample_min = 0
        sample_max = 1
    if strictly_positive:
        sample_min = 1
        sample_max = 2
    # Regroup all free symbols from both expressions
    free_symbols = set(Expr1.free_symbols) | set(Expr2.free_symbols)
    # Numeric (brute force) equality testing n times
    for i in range(n):
        your_values = np.random.uniform(sample_min, sample_max, len(free_symbols))
        Expr1_num = Expr1
        Expr2_num = Expr2
        for symbol, number in zip(free_symbols, your_values):
            Expr1_num = Expr1_num.subs(symbol, sp.Float(number))
            Expr2_num = Expr2_num.subs(symbol, sp.Float(number))
        Expr1_num = float(Expr1_num)
        Expr2_num = float(Expr2_num)
        if not np.allclose(Expr1_num, Expr2_num):
            print("Fails numerical test")
            return False
    # If all goes well so far, check symbolic equality
    if Expr1.equals(Expr2):
        return True
    else:
        print("Passes the numerical test but not the symbolic test")
        # Still returns True though
        return True
EDIT: code updated (1) to compare expressions with differing numbers of free symbols (for example, after symbols got cancelled out during a simplification), and (2) to allow for the specification of a positive or strictly positive random number range.
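A quick usage sketch (assuming the check_equal defined above is in scope; the trig identity below is one that plain == would not recognize):
import sympy as sp

x = sp.symbols('x')
print(check_equal(sp.sin(x)*sp.cos(x), sp.sin(2*x)/2))   # True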
Check out the FAQ from the SymPy project itself:
https://github.com/sympy/sympy/wiki/Faq

Rounding errors in quadratic solver

I'm pretty new to Python and am trying to write some code to solve a given quadratic equation. I'm having some trouble with rounding errors in floats, I think because I am dividing two numbers that are very large with a very small difference. (Also, I'm assuming all inputs have real solutions for now.) I've put two different versions of the quadratic formula in the code to show my problem. It works fine for most inputs, but when I try a = .001, b = 1000, c = .001 I get two answers that differ significantly. Here is my code:
from math import sqrt
a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))
xp = (-b+sqrt(b**2-4*a*c))/(2*a)
xn = (-b-sqrt(b**2-4*a*c))/(2*a)
print("The solutions are: x = ",xn,", ",xp,sep = '')
xp = (2*c)/(-b-sqrt(b**2-4*a*c))
xn = (2*c)/(-b+sqrt(b**2-4*a*c))
print("The solutions are: x = ",xn,", ",xp,sep = '')
I'm no expert in the maths field, but I believe you should use NumPy (a Python module for maths): due to the internal number representation on computers, your calculations will not exactly match real-number math (floating-point arithmetic).
http://docs.python.org/2/tutorial/floatingpoint.html
Check this; it is almost exactly what you want:
http://www.annigeri.in/2012/02/python-class-for-quadratic-equations.html
To get more precise results with floating point, be careful not to subtract nearly equal quantities. For the quadratic x^2 + a*x + b = 0 you know that the roots x1 and x2 satisfy
b = x1 * x2
Compute the one with larger absolute value, and get the other one from this relation.
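A minimal sketch of that idea for the monic form x^2 + a*x + b = 0 (it assumes a real, non-negative discriminant):
from math import sqrt, copysign

def stable_roots(a, b):
    """Roots of x**2 + a*x + b = 0, avoiding subtraction of nearly equal quantities."""
    d = sqrt(a*a - 4*b)                # assumes a*a - 4*b >= 0
    x1 = (-a - copysign(d, a)) / 2.0   # the root with the larger absolute value
    x2 = b / x1 if x1 != 0 else 0.0    # the other root, via b = x1 * x2
    return x1, x2

For the coefficients from the question (divided through by a), stable_roots(1e6, 1.0) gives the large root near -1e6 and the small root near -1e-6 without the cancellation that the textbook formula suffers from.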
Solutions:
NumPy, as suggested by user dhunter, is usually the best solution for math in Python. The NumPy libraries are capable of doing quick and accurate math in a number of different fields.
Decimal data types were added in Python 2.4. If you do not want to download an external library and do not anticipate doing many long or complex calculations, the decimal data type may fit the bill.
Simply add:
from decimal import *
to the top of your code and then replace all instances of the word float with the word Decimal (note the uppercase "D".)
Ex: Decimal(1.1047262519) as opposed to float(1.1047262519)
Theory:
Float arithmetic is based on binary math and is therefore not always exactly what a user would expect. An excellent description of the float vs. decimal types is located here.
The previously-mentioned numpy module is not particularly relevant to the rounding error mentioned in the question. On the other hand, the decimal module can be used in a brute-force manner to get accurate computations. The following snippet from an ipython interpreter session illustrates its use (with default 28-digit accuracy), and also shows that the corresponding floating-point calculation only has 5 decimal places of accuracy.
In [180]: from decimal import Decimal
In [181]: a=Decimal('0.001'); b=Decimal('1000'); c=Decimal('0.001')
In [182]: (b*b - 4*a*c).sqrt()
Out[182]: Decimal('999.9999999979999999999980000')
In [183]: b-(b*b - 4*a*c).sqrt()
Out[183]: Decimal('2.0000000000020000E-9')
In [184]: a = .001; b = 1000; c = .001
In [185]: math.sqrt(b*b - 4*a*c)
Out[185]: 999.999999998
In [186]: b-math.sqrt(b*b - 4*a*c)
Out[186]: 1.999978849198669e-09
In [187]: 2*a*c/b
Out[187]: 1.9999999999999997e-09
A Taylor series for the square root offers an alternative method to use when 4*a*c is tiny compared to b**2. In this case, sqrt(b*b - 4*a*c) ≈ b - 2*a*c/b, whence b - sqrt(b*b - 4*a*c) ≈ 2*a*c/b. As the In [187] entry above shows, the Taylor series computation gives a 12-digits-accurate result while using floating point instead of Decimal. Using another Taylor series term would add a couple more digits of accuracy.
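To make the benefit concrete, a short floating-point sketch for the coefficients in the question (the small-magnitude root is where the naive formula loses digits):
from math import sqrt

a, b, c = 0.001, 1000.0, 0.001
x_small_naive  = (-b + sqrt(b*b - 4*a*c)) / (2*a)   # cancellation: ~ -9.99989e-07
x_small_taylor = -(2*a*c/b) / (2*a)                  # Taylor estimate, i.e. -c/b: -1e-06
print(x_small_naive, x_small_taylor)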
There are special cases that you should deal with:
a == 0 means a linear equation and one root: x = -c/b
b == 0 means two roots of the form x1, x2 = ±sqrt(-c/a)
c == 0 means two roots, but one of them is zero: x*(ax+b) = 0
If the discriminant is negative, you have two complex conjugate roots.
I'd recommend computing the square root of the discriminant this way (for non-zero b):
sqrt_disc = abs(b)*sqrt(1.0 - 4.0*a*c/(b*b))
I'd also recommend reading this:
https://math.stackexchange.com/questions/187242/quadratic-equation-error
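Putting the special cases above together, a sketch of a small solver (cmath.sqrt is used so that a negative discriminant yields the complex-conjugate pair):
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, covering the special cases listed above."""
    if a == 0:
        if b == 0:
            raise ValueError("not an equation in x")
        return (-c / b,)                      # linear case: a single root
    d = cmath.sqrt(b*b - 4*a*c)               # complex square root handles a negative discriminant
    return ((-b + d) / (2*a), (-b - d) / (2*a))

This sketch only covers the case analysis; for well-separated roots you would still combine it with the cancellation-avoiding trick from the earlier answers.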
