Basics of SymPy - Python

I am just starting to play with SymPy and I am a bit surprised by some of its behavior; for instance, these are not the results I would expect:
>>> import sympy as s
>>> (-1)**s.I == s.E**(-1* s.pi)
False
>>> s.I**s.I == s.exp(-s.pi/2)
False
Why are these returning False and is there a way to get it to convert from one way of writing a complex number to another?

From the FAQ:
Why does SymPy say that two equal expressions are unequal?
The equality operator (==) tests whether expressions have identical form, not whether they are mathematically equivalent.
To make equality testing useful in basic cases, SymPy tries to rewrite mathematically equivalent expressions to a canonical form when evaluating them. For example, SymPy evaluates both x+x and -(-2*x) to 2*x, and x*x to x**2.
The simplest example where the default transformations fail to generate a canonical form is for nonlinear polynomials, which can be represented in both factored and expanded form. Although clearly a(1+b) = a+ab mathematically, SymPy gives:
>>> bool(a*(1+b) == a + a*b)
False
Likewise, SymPy fails to detect that the difference is zero:
>>> bool(a*(1+b) - (a+a*b) == 0)
False
If you want to determine the mathematical equivalence of nontrivial expressions, you should apply a more advanced simplification routine to both sides of the equation. In the case of polynomials, expressions can be rewritten in a canonical form by expanding them fully. This is done using the .expand() method in SymPy:
>>> A, B = a*(1+b), a + a*b
>>> bool(A.expand() == B.expand())
True
>>> (A - B).expand()
0
If .expand() does not help, try simplify(), trigsimp(), etc., which attempt more advanced transformations. For example,
>>> trigsimp(cos(x)**2 + sin(x)**2) == 1
True
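Applying that advice to the two examples in the question, the equals() method (which falls back on numeric evaluation) confirms the equivalences. A sketch; exact behavior can vary between SymPy versions:
>>> import sympy as s
>>> ((-1)**s.I).equals(s.E**(-s.pi))
True
>>> (s.I**s.I).equals(s.exp(-s.pi/2))
True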

Because, structurally, they're not equal. Try this one, where both sides auto-evaluate to -1:
>>> s.E**(s.I*s.pi) == s.I*s.I
True


Sympy cannot evaluate an infinite sum involving gamma functions

I am using SymPy to evaluate some symbolic sums that involve manipulations of gamma functions, but I noticed that in this case it keeps the sum unevaluated.
import sympy as sp
a = sp.Symbol('a',real=True)
b = sp.Symbol('b',real=True)
d = sp.Symbol('d',real=True)
c = sp.Symbol('c',integer=True)
z = sp.Symbol('z',complex=True)
t = sp.Symbol('t',complex=True)
sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
I then need to lambdify this expression, and unfortunately this becomes impossible to do.
With the Matlab symbolic toolbox, however, I get the following answer:
Matlab
>> a=sym('a')
>> b=sym('b');
>> c=sym('c')
>> d=sym('d');
>> z=sym('z');
>> t=sym('t');
>> symsum((exp(-d)*(d^c)/factorial(c))/(z-c-a*t),c,0,inf)
ans =
(-d)^(z - a*t)*exp(-d)*(gamma(a*t - z) - igamma(a*t - z, -d))
The formula involves lower incomplete gamma functions, as expected.
Any idea why this happens? I thought SymPy was able to do this summation symbolically.
Running your code with SymPy 1.2 results in
d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, d*exp_polar(I*pi)) + t
By the way, summation already attempts to evaluate the sum (and succeeds in the case of SymPy 1.2); the subsequent simplification is cosmetic (and can sometimes be harmful).
The presence of exp_polar means that SymPy found it necessary to consider points on the Riemann surface of the logarithm function instead of regular complex numbers. (Related bit of docs.) The function lowergamma is branched, so we must distinguish "the value at -1, if we arrive at -1 from 1 going clockwise" from "the value at -1, if we arrive at -1 from 1 going counterclockwise". The former is exp_polar(-I*pi), the latter is exp_polar(I*pi).
All this is very interesting but not really helpful when you need concrete evaluation of the expression. We have to unpolarify this expression, and from what Matlab shows, simply replacing exp_polar with exp is a correct way to do so here.
rv = sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
rv = rv.subs(sp.exp_polar, sp.exp)
Result: d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, -d) + t
There is still something to think about here, with complex numbers and so on. Is d positive or negative? What does raising it to the power -a*t + z mean; which branch of the multivalued power function do we take? The same issues are present in the Matlab output, where -d is raised to a power.
I recommend testing this with floating point input (direct summation of series vs evaluation of the SymPy expression for it), and adding assumptions on the sign of d if possible.
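For concreteness, a minimal version of that cross-check. The sample point is arbitrary, chosen so that z - c - a*t never vanishes for nonnegative integer c, and behavior may vary slightly across SymPy versions:
import sympy as sp

a, d, z, t = sp.symbols('a d z t')
c = sp.Symbol('c', integer=True)

term = sp.exp(-d)*d**c/sp.gamma(c + 1)/(z - c - a*t)
closed = sp.summation(term, (c, 0, sp.oo)).subs(sp.exp_polar, sp.exp)

point = {a: sp.Rational(3, 10), d: sp.Rational(17, 10),
         z: sp.Rational(12, 5), t: sp.Rational(9, 10)}

# direct truncated summation: the terms decay like d**k/k!, so 60 terms is plenty
direct = sum(term.subs(point).subs(c, k) for k in range(60)).evalf()
symbolic = closed.subs(point).evalf()
print(direct, symbolic)  # should agree, possibly up to a tiny imaginary round-off part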

How to find out if the Sympy variable is complex?

I am writing code that involves solving this equation:
X = solve(Theta_Mod_Eqn*Ramp_Equation/(x+PT) - C, x)
I am using the SymPy library. Now, the equation has 7 roots; some are complex and some are real. I am unable to segregate them because isinstance(i, complex) is always returning True:
for i in X:
    if not isinstance(i, complex):
        if (i > -0.01 and i < maxSheaveDisp):
            A = i
For one case,
i = -0.000581431210287302 - 0.2540334478167*I
In [39]: i == complex
Out[39]: False
How to find out if the variable is complex?
The set of real numbers is a subset of the set of complex numbers. So, every real number is a complex number. For example, 3 is a complex number.
The correct question to ask is how to find out if a root is real. For that, you can use i.is_real if i is a SymPy symbol:
for i in X:
    if i.is_real:
        if (i > -0.01 and i < maxSheaveDisp):
            A = i
One can also compare im(i) to 0: if im(i) == 0. This works for Python floats too.
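As a self-contained illustration, using a hypothetical cubic whose roots are 1, I and -I:
import sympy as sp

x = sp.symbols('x')
roots = sp.solve(x**3 - x**2 + x - 1, x)        # [1, -I, I]
real_roots = [r for r in roots if r.is_real]    # [1]
# equivalent test via the imaginary part:
real_roots2 = [r for r in roots if sp.im(r) == 0]
print(real_roots, real_roots2)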

Tuples and Ternary and positional parameters

Given:
>>> a,b=2,3
>>> c,d=3,2
>>> def f(x,y): print(x,y)
I have an existing (as in cannot be changed) 2-positional-parameter function where I want the positional parameters to always be in ascending order, i.e., f(2,3), no matter which two arguments I use (f(a,b) is the same as f(c,d) in the example).
I know that I could do:
>>> f(*sorted([c,d]))
2 3
Or I could do:
>>> f(*((a,b) if a<b else (b,a)))
2 3
(Note the need for the tuple parentheses in this form, because , has lower precedence than the ternary...)
Or,
def my_f(a,b):
    return f(a,b) if a<b else f(b,a)
All these seem kinda kludgy. Is there another syntax that I am missing?
Edit
I missed an 'old school' Python two-member-tuple method: index a two-member tuple using the fact that True == 1 and False == 0:
>>> f(*((a,b),(b,a))[a>b])
2 3
Also:
>>> f(*{True:(a,b), False:(b,a)}[a<b])
2 3
Edit 2
The reason for this silly exercise: numpy.isclose has the following usage note:
For finite values, isclose uses the following equation to test whether
two floating point values are equivalent.
absolute(a - b) <= (atol + rtol * absolute(b))
The above equation is not symmetric in a and b, so that isclose(a, b)
might be different from isclose(b, a) in some rare cases.
I would prefer that not happen.
I am looking for the fastest way to make sure that arguments to numpy.isclose are in a consistent order. That is why I am shying away from f(*sorted([c,d]))
Implemented my solution in case anyone else is looking.
def sort(f):
    def wrapper(*args):
        return f(*sorted(args))
    return wrapper

@sort
def f(x, y):
    print(x, y)

f(3, 2)
# output: 2 3
Also, since @Tadhg McDonald-Jensen mentioned that you may not be able to change the function yourself, you could wrap the function as such:
my_func = sort(f)
You mention that your use-case is np.isclose. However, your approach isn't a good way to solve the real issue. But it's understandable given the poor argument naming of that function: it sort of implies that both arguments are interchangeable. If it were numpy.isclose(measured, expected, ...) (or something like it), it would be much clearer.
For example if you expect the value 10 and measure 10.51 and you allow for 5% deviation, then in order to get a useful result you must use np.isclose(10.51, 10, ...), otherwise you would get wrong results:
>>> import numpy as np
>>> measured = 10.51
>>> expected = 10
>>> err_rel = 0.05
>>> err_abs = 0.0
>>> np.isclose(measured, expected, err_rel, err_abs)
False
>>> np.isclose(expected, measured, err_rel, err_abs)
True
It's clear to see that the first one gives the correct result because the actually measured value is not within the tolerance of the expected value. That's because the relative uncertainty is an "attribute" of the expected value, not of the value you compare it with!
So solving this issue by "sorting" the parameters is just wrong. That's a bit like swapping the numerator and denominator of a division because the denominator contains zeros and dividing by zero could give NaN, Inf, a warning, or an exception... it definitely avoids the problem, but only by giving an incorrect result (the comparison isn't perfect: with division it will almost always give a wrong result; with isclose it's rare).
This was a somewhat artificial example designed to trigger that behaviour and most of the time it's not important if you use measured, expected or expected, measured but in the few cases where it does matter you can't solve it by swapping the arguments (except when you have no "expected" result, but that rarely happens - at least it shouldn't).
There was some discussion about this topic when math.isclose was added to the python library:
Symmetry (PEP 485)
[...]
Which approach is most appropriate depends on what question is being asked. If the question is: "are these two numbers close to each other?", there is no obvious ordering, and a symmetric test is most appropriate.
However, if the question is: "Is the computed value within x% of this known value?", then it is appropriate to scale the tolerance to the known value, and an asymmetric test is most appropriate.
[...]
This proposal [for math.isclose] uses a symmetric test.
So if your test falls into the first category and you like a symmetric test - then math.isclose could be a viable alternative (at least if you're dealing with scalars):
math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
[...]
rel_tol is the relative tolerance – it is the maximum allowed difference between a and b, relative to the larger absolute value of a or b. For example, to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-09, which assures that the two values are the same within about 9 decimal digits. rel_tol must be greater than zero.
[...]
Just in case this answer couldn't convince you and you still want to use a sorted approach, you should order by the absolute value of your inputs (i.e. *sorted([a, b], key=abs)). Otherwise you might get surprising results when comparing negative numbers:
>>> np.isclose(-10.51, -10, err_rel, err_abs) # -10.51 is smaller than -10!
False
>>> np.isclose(-10, -10.51, err_rel, err_abs)
True
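If you do go the sorting route, a small wrapper (a hypothetical helper, not part of NumPy) keeps call sites tidy:
import numpy as np

def isclose_by_abs(a, b, rtol=1e-09, atol=0.0):
    # order by magnitude so the tolerance is scaled by the larger value
    small, large = sorted([a, b], key=abs)
    return np.isclose(small, large, rtol, atol)

print(isclose_by_abs(-10.51, -10, rtol=0.05))   # True, matching isclose(-10, -10.51, ...)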
For only two elements in the tuple, the second one is the preferred idiom -- in my experience. It's fast, readable, etc.
No, there isn't really another syntax. There's also
(min(a,b), max(a,b))
... but this isn't particularly superior to the other methods; merely another way of expressing it.
Note after comment by dawg:
A class with custom comparison operators could return the same object for both min and max.
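A contrived sketch of that corner case (a hypothetical class where no instance is ever "less than" or "greater than" another):
class Always:
    def __lt__(self, other):
        return False
    def __gt__(self, other):
        return False

a, b = Always(), Always()
print(min(a, b) is max(a, b))   # True: both calls return the first argument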

a.cross(b.cross(c)) == (a.dot(c))*b - (a.dot(b))*c coming out False [duplicate]

Is there a way to check if two expressions are mathematically equal? I expected
tan(x)*cos(x) == sin(x) to output True, but it outputs False. Is there a way to make such comparisons with SymPy? Another example is
(a+b)**2 == a**2 + 2*a*b + b**2 which surprisingly also outputs False.
I found some similar questions, but none covered this exact problem.
From the SymPy documentation
== represents exact structural equality testing. “Exact” here means that two expressions will compare equal with == only if they are exactly equal structurally. Here, (x+1)^2 and x^2+2x+1 are not the same symbolically. One is the power of an addition of two terms, and the other is the addition of three terms.
It turns out that when using SymPy as a library, having == test for exact symbolic equality is far more useful than having it represent symbolic equality, or having it test for mathematical equality. However, as a new user, you will probably care more about the latter two. We have already seen an alternative to representing equalities symbolically, Eq. To test if two things are equal, it is best to recall the basic fact that if a=b, then a−b=0. Thus, the best way to check if a=b is to take a−b and simplify it, and see if it goes to 0. We will learn later that the function to do this is called simplify. This method is not infallible—in fact, it can be theoretically proven that it is impossible to determine if two symbolic expressions are identically equal in general—but for most common expressions, it works quite well.
As a demo for your particular question, we can use the subtraction of equivalent expressions and compare to 0 like so
>>> from sympy import simplify
>>> from sympy.abc import x,y
>>> vers1 = (x+y)**2
>>> vers2 = x**2 + 2*x*y + y**2
>>> simplify(vers1-vers2) == 0
True
>>> simplify(vers1+vers2) == 0
False
Alternatively you can use the .equals method to compare expressions:
from sympy import *
x = symbols('x')
expr1 = tan(x) * cos(x)
expr2 = sin(x)
expr1.equals(expr2)
True
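The same machinery settles the vector identity in the question title. A sketch with sympy.vector and fully symbolic components (CoordSys3D is the modern name; older releases call it CoordSysCartesian):
from sympy import symbols, simplify
from sympy.vector import CoordSys3D

N = CoordSys3D('N')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1:4 b1:4 c1:4')
a = a1*N.i + a2*N.j + a3*N.k
b = b1*N.i + b2*N.j + b3*N.k
c = c1*N.i + c2*N.j + c3*N.k

lhs = a.cross(b.cross(c))
rhs = a.dot(c)*b - a.dot(b)*c
print(simplify((lhs - rhs).to_matrix(N)))   # Matrix([[0], [0], [0]])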
The solution with simplify was too slow for me (I had to cross-check multiple variables), so I wrote the following function, which does some simple checks beforehand to reduce computation time and applies simplify only as the last step.
import numpy as np
import sympy as sp

def check_equal(Expr1, Expr2):
    if Expr1 is None or Expr2 is None:
        return False
    if Expr1.free_symbols != Expr2.free_symbols:
        return False
    vars = Expr1.free_symbols
    your_values = np.random.random(len(vars))
    Expr1_num = Expr1
    Expr2_num = Expr2
    # substitute the same random value for each symbol in both expressions
    for symbol, number in zip(vars, your_values):
        Expr1_num = Expr1_num.subs(symbol, sp.Float(number))
        Expr2_num = Expr2_num.subs(symbol, sp.Float(number))
    Expr1_num = float(Expr1_num)
    Expr2_num = float(Expr2_num)
    if not np.allclose(Expr1_num, Expr2_num):
        return False
    if Expr1.equals(Expr2):
        return True
    else:
        return False
As previously stated, (expr1 - expr2).simplify() or expr1.equals(expr2) will sometimes fail to recognize equality for expressions that are complex to simplify. To deal with this, a numerical evaluation of the expressions with random numbers may constitute a relatively safe "brute force" test. I've adapted the excellent solution by @Okapi575 to:
Test the numerical equality N-times with different random numbers each time for a more confident diagnostic
Warn the user when a pair of expressions only passes the numeric test but not the symbolic equality test.
Hope it can prove useful:
import sympy as sp
import numpy as np

def check_equal(Expr1, Expr2, n=10, positive=False, strictly_positive=False):
    # Determine over what range to generate random numbers
    sample_min = -1
    sample_max = 1
    if positive:
        sample_min = 0
        sample_max = 1
    if strictly_positive:
        sample_min = 1
        sample_max = 2
    # Regroup all free symbols from both expressions
    free_symbols = set(Expr1.free_symbols) | set(Expr2.free_symbols)
    # Numeric (brute force) equality testing n times
    for i in range(n):
        your_values = np.random.uniform(sample_min, sample_max, len(free_symbols))
        Expr1_num = Expr1
        Expr2_num = Expr2
        for symbol, number in zip(free_symbols, your_values):
            Expr1_num = Expr1_num.subs(symbol, sp.Float(number))
            Expr2_num = Expr2_num.subs(symbol, sp.Float(number))
        Expr1_num = float(Expr1_num)
        Expr2_num = float(Expr2_num)
        if not np.allclose(Expr1_num, Expr2_num):
            print("Fails numerical test")
            return False
    # If all goes well so far, check symbolic equality
    if Expr1.equals(Expr2):
        return True
    else:
        print("Passes the numerical test but not the symbolic test")
        # Still returns True though
        return True
EDIT: code updated (1) to compare expressions with differing numbers of free symbols (for example, after symbols got cancelled out during a simplification), and (2) to allow for the specification of a positive or strictly positive random number range.
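A quick sanity check of the helper above (hypothetical inputs; the numeric stage is probabilistic, but a genuine mismatch is overwhelmingly likely to be caught):
from sympy.abc import x

print(check_equal(sp.sin(2*x), 2*sp.sin(x)*sp.cos(x)))   # True
print(check_equal(sp.sin(2*x), 2*sp.sin(x)))             # prints "Fails numerical test", returns False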
Check out the FAQ from the SymPy project itself:
https://github.com/sympy/sympy/wiki/Faq

What's the purpose of the + (pos) unary operator in Python?

Generally speaking, what should the unary + do in Python?
I'm asking because, so far, I have never seen a situation like this:
+obj != obj
Where obj is a generic object implementing __pos__().
So I'm wondering: why do + and __pos__() exist? Can you provide a real-world example where the expression above evaluates to True?
Here's a "real-world" example from the decimal package:
>>> from decimal import Decimal
>>> obj = Decimal('3.1415926535897932384626433832795028841971')
>>> +obj != obj # The __pos__ function rounds back to normal precision
True
>>> obj
Decimal('3.1415926535897932384626433832795028841971')
>>> +obj
Decimal('3.141592653589793238462643383')
In Python 3.3 and above, collections.Counter uses the + operator to remove non-positive counts.
>>> from collections import Counter
>>> fruits = Counter({'apples': 0, 'pears': 4, 'oranges': -89})
>>> fruits
Counter({'pears': 4, 'apples': 0, 'oranges': -89})
>>> +fruits
Counter({'pears': 4})
So if you have negative or zero counts in a Counter, you have a situation where +obj != obj.
>>> obj = Counter({'a': 0})
>>> +obj != obj
True
I believe that Python operators were inspired by C, where the + operator was introduced for symmetry (and also for some useful hacks; see comments).
In weakly typed languages such as PHP or JavaScript, + tells the runtime to coerce the value of the variable into a number. For example, in JavaScript:
+"2" + 1
=> 3
"2" + 1
=> '21'
Python is strongly typed, so strings don't work as numbers and, as such, don't implement a unary plus operator.
It is certainly possible to implement an object for which +obj != obj:
>>> class Foo(object):
...     def __pos__(self):
...         return "bar"
...
>>> +Foo()
'bar'
>>> obj = Foo()
>>> +obj != obj
True
>>> +"a"
Traceback (most recent call last):
  ...
TypeError: bad operand type for unary +: 'str'
As for an example for which it actually makes sense, check out the
surreal numbers. They are a superset of the reals which includes
infinitesimal values (+ epsilon, - epsilon), where epsilon is
a positive value which is smaller than any other positive number, but
greater than 0; and infinite ones (+ infinity, - infinity).
You could define epsilon = +0, and -epsilon = -0.
While 1/0 is still undefined, 1/epsilon = 1/+0 is +infinity, and 1/-epsilon = -infinity. It is
nothing more than taking the limit of 1/x as x approaches 0 from the right (+) or from the left (-).
As 0 and +0 behave differently, it makes sense that 0 != +0.
A lot of examples here look more like bugs. This one is actually a feature, though:
The + operator implies a copy.
This is extremely useful when writing generic code for scalars and arrays.
For example:
def f(x, y):
    z = +x
    z += y
    return z
This function works on both scalars and NumPy arrays, without making extra copies, without changing the type of the object, and without requiring any external dependencies!
If you used numpy.positive or something like that, you would introduce a NumPy dependency, and you would force numbers to NumPy types, which can be undesired by the caller.
If you did z = x + y, your result would no longer necessarily be the same type as x. In many cases that's fine, but when it's not, it's not an option.
If you did z = --x, you would create an unnecessary copy, which is slow.
If you did z = 1 * x, you'd perform an unnecessary multiplication, which is also slow.
If you did copy.copy... I guess that'd work, but it's pretty cumbersome.
Unary + is a really great option for this.
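A quick check of the claim, using the f defined above (NumPy is only needed for the array case):
import numpy as np

a = np.array([1.0, 2.0])
print(f(a, 10.0), a)   # [11. 12.] [1. 2.]  (the original array is untouched)
print(f(1.5, 2.5))     # 4.0 (plain scalars work the same way)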
For symmetry, because unary minus is an operator, unary plus must be too. In most arithmetic situations, it doesn't do anything, but keep in mind that users can define arbitrary classes and use these operators for anything they want, even if it isn't strictly algebraic.
I know it's an old thread, but I wanted to extend the existing answers to provide a broader set of examples:
+ could assert positivity and throw an exception if the value isn't positive: very useful for detecting corner cases.
The object may be multivalued (think of ±sqrt(z) as a single object): for solving quadratic equations, for multibranched analytic functions, for anything where you can "collapse" a two-valued function into one branch with a sign. This includes the ±0 case mentioned by vlopez.
If you do lazy evaluation, this may create a function object that adds something to whatever it is later applied to; for instance, if you are parsing arithmetic incrementally.
As an identity function to pass as an argument to some functional.
For algebraic structures where sign accumulates: ladder operators and such. Sure, it could be done with other functions, but one could conceivably see something like y = +++---+++x. Even more, the operators don't have to commute. This constructs a free group of pluses and minuses, which could be useful, even in formal grammar implementations.
Wild usage: it could "mark" a step in the calculation as "active" in some sense, as in a reap/sow system: every plus remembers its value, and at the end you can gather the collected intermediates... because why not?
That, plus all the typecasting reasons mentioned by others.
And after all... it's nice to have one more operator in case you need it.
__pos__() exists in Python to give programmers possibilities similar to those in C++: overloading operators, in this case the unary operator +.
(Overloading operators means giving them a different meaning for different objects, e.g. binary + behaves differently for numbers and for strings: numbers are added while strings are concatenated.)
Objects may implement (among others) these numeric-type-emulating methods:
__pos__(self) # called for unary +
__neg__(self) # called for unary -
__invert__(self) # called for unary ~
So +object means the same as object.__pos__() — they are interchangeable.
However, +object is easier on the eye.
The creator of a particular object is free to implement these methods however they want, as other people have shown with their real-world examples.
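For example, a toy class wiring up all three hooks (purely illustrative):
class Signed:
    def __init__(self, v):
        self.v = v
    def __pos__(self):
        return Signed(+self.v)
    def __neg__(self):
        return Signed(-self.v)
    def __invert__(self):
        return Signed(~self.v)
    def __repr__(self):
        return "Signed(%r)" % self.v

print(+Signed(3), -Signed(3), ~Signed(3))   # Signed(3) Signed(-3) Signed(-4)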
And my contribution — as a joke: ++i != +i in C/C++.
Unary + is actually the fastest way to see if a value is numeric or not (and raise an exception if it isn't)! It's a single instruction in bytecode, where something like isinstance(i, int) actually looks up and calls the isinstance function!
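You can check this with the dis module (opcode names vary across CPython versions: older ones emit UNARY_POSITIVE, newer ones an intrinsic call):
import dis

dis.dis(lambda i: +i)                   # a single unary-plus opcode
dis.dis(lambda i: isinstance(i, int))   # a global lookup plus a function call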
