A statement in sympy is returning false when it shouldn't be - python

For context: I'm using sympy in python 2.7. Part of my project involves simplifying a mathematical expression, but I ran into a problem when using sympy:
from sympy import *
x = symbols("x")
(-x*exp(-x) + exp(-x)) == (1-x)*(exp(-x))
the code above returns me
False
Both my own maths and wolframalpha disagree with this - did I type something wrong or is this some shortcoming of sympy that I'm not yet aware of?

From the docs page:
http://docs.sympy.org/dev/gotchas.html
If you want to test for symbolic equality, one way is to subtract one expression from the other and run it through functions like expand(), simplify(), and trigsimp() and see if the equation reduces to 0.

If you want to create a symbolic equality, use Eq:
In [1]: Eq((-x*exp(-x) + exp(-x)), (1-x)*(exp(-x)))
Out[1]:
     -x    -x            -x
- x⋅ℯ   + ℯ   = (-x + 1)⋅ℯ
The == operator has to immediately return a boolean; that is standard Python behavior. Therefore == compares the raw structure of the two expressions without performing any mathematical transformation (apart from the minor automatic simplifications that happen when an expression is constructed).
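A minimal sketch of the subtract-and-simplify approach described above, using the expressions from the question:

```python
from sympy import symbols, exp, simplify

x = symbols("x")
a = -x*exp(-x) + exp(-x)
b = (1 - x)*exp(-x)

# Structural comparison: False, because the expression trees differ.
print(a == b)

# Mathematical comparison: subtract and simplify down to 0.
print(simplify(a - b) == 0)  # True

# equals() tests equality by evaluating the difference numerically.
print(a.equals(b))  # True
```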

Related

Add a z3 constraint, such that the value of a z3 variable equals to the return value of some function

I have a Python function that takes a real number and returns a string, e.g.
def fun(x):
    if x == 0.5:
        return "Hello"
    else:
        return "Bye"
I start a z3 solver:
from z3 import *
r = Real('r')
s = Solver()
Now I want to tell the solver to find a value of r such that fun(r) returns "Hello". I am facing the following problems:
If r = Real('r'), then I cannot call fun(r), because r is a z3 variable
I cannot say s.add(StringVal("Hello") == StringVal(fun(r))) as a constraint, because 1) r is a z3 variable, 2) the interpretation of r is not known yet. Hence, I have no model from which I could extract a value and pass it to the fun.
Is it possible to implement what I want?
There's no out-of-the-box way to do this in z3, unless you're willing to modify the definition of fun so z3 can understand it. Like this:
from z3 import *
def fun(x):
    return If(x == 0.5, StringVal("Hello"), StringVal("Bye"))
r = Real('r')
s = Solver()
s.add(fun(r) == StringVal("Hello"))
print(s.check())
print(s.model())
This prints:
sat
[r = 1/2]
So, we changed the function fun to use the z3 idioms that correspond to what you wanted to write. A few points:
Obviously, not every Python construct will be easy to translate in this way. While you can express almost any construct, the complexity of the hand-translation grows as you work on more complicated functions.
So far as I know, there's no automatic way to do this translation for you. However, take a look at this blog post for some ideas on how to do so via the disassembler.
PyExZ3 is a now-defunct symbolic simulator for Python, using z3 as the backend solver. While the project is no longer active, you might be able to resurrect the source-code or get some ideas from it.
You have used Real to model the parameter x. Z3's Real values are true mathematical reals. (Strictly speaking, they are algebraic reals, but that's beside the point for now.) Python, however, doesn't have those; instead it uses double-precision floats. But luckily, z3 understands floats, so for a more realistic encoding use the Float64 sort. See here for details.

Simplifying exponential representation of hyperbolic functions in sympy

I am trying to rewrite some exponential functions in an expression to cosh and sinh. The rewrite() function works to get from a hyperbolic function to its exponential representation. But it does not work to get back.
>>> import sympy
>>> x=sympy.Symbol('x')
>>> sympy.cosh(x).rewrite(sympy.exp)
exp(x)/2 + exp(-x)/2
>>> sympy.cosh(x).rewrite(sympy.exp).rewrite(sympy.cosh)
exp(x)/2 + exp(-x)/2
I would expect the result of the last command to be 'cosh(x)'. Can someone explain to me why it is not?
I tried to find some documentation on the rewrite() function but the only bit I found was the short section in http://docs.sympy.org/latest/tutorial/simplification.html that is not really helpful.
Applying .rewrite(sympy.cos) returns cosh(x), as you wanted. Apparently rewrite treats the hyperbolic cosine as a variant of the ordinary cosine.
Here is a reference on rewrite method.
Alternatively, simplify(expr) also transforms exp(x)/2 + exp(-x)/2 into cosh(x).
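A quick sketch verifying both routes back from the exponential form (assuming a recent sympy version):

```python
import sympy

x = sympy.Symbol('x')
expr = sympy.cosh(x).rewrite(sympy.exp)   # exp(x)/2 + exp(-x)/2

# Route 1: rewrite in terms of cos, which also covers the hyperbolic functions.
back1 = expr.rewrite(sympy.cos)

# Route 2: general-purpose simplification.
back2 = sympy.simplify(expr)

print(back1, back2)  # both are mathematically equal to cosh(x)
```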

Sympy comparison fail with sympified expressions

I recently encountered a problem with SymPy that I can't fix; I'm a beginner with the library and I've spent quite some time looking for a solution on the web without finding one, which is why I came here.
Here's the problem: I have an output from a JavaScript web app that gives me the expression (x**2)**(1/3) instead of x**(2/3). "No problem", I thought, "SymPy's going to parse that"... except not. If I compare, I get this result:
>>> sympify("(x**2)**(1/3)") == sympify("x**(2/3)")
False
I ran some tests, and I only get that with exponents in sympified expressions. Example:
>>> (x**(1/3))**2 == x**(2/3)
True
Even weirder, if I invert the two exponents, I get a correct result:
>>> sympify("(x**(1/3))**2") == sympify("x**(2/3)")
True
I tried using simplify and extend, but it's not working either. The only way I could get a good result is by using eval, but that requires to first create a symbol "x", which is something I cannot do, because I can't know exactly which symbols are going to be used in the expressions.
So here's the question: is this a bug in SymPy, or am I doing something wrong?
Please excuse me if the issue is well known and I couldn't find anything about it by myself.
As pointed out on the SymPy issue, this is intentional. These two expressions are not equal for general complex x. Take for instance x = -1: ((-1)**2)**(1/3) == 1**(1/3) == 1, whereas (-1)**(2/3) = -1/2 + sqrt(3)/2*I. See http://docs.sympy.org/0.7.3/tutorial/simplification.html#powers for more information. SymPy will only simplify such exponents if it knows the identity holds on the full domain.
In particular, this is true if x is nonnegative. (x**a)**b == x**(a*b) is also true if b is an integer, which is why your other tests worked.
If you want to force simplify this, there are two main options. One (the best option) is to just assume that x is nonnegative, or positive.
>>> x = symbols('x', positive=True)
>>> (x**2)**(S(1)/3)
x**(2/3)
If you want to replace your symbols with positive versions after parsing from a string, you can use posify:
>>> posify((x**2)**(S(1)/3))
(_x**(2/3), {_x: x})
Or you can use the locals argument to sympify to manually set {'x': Symbol('x', positive=True)}.
Another way, if this doesn't fully suit your needs, is to force simplify it with powdenest:
>>> x = symbols('x') # No assumptions on x
>>> (x**2)**(S(1)/3)
(x**2)**(1/3)
>>> powdenest((x**2)**(S(1)/3), force=True)
x**(2/3)
If you do this, you need to be careful, because you are applying identities that are not always true, so you can end up with mathematically incorrect results.
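A compact sketch of both options; Rational(1, 3) stands in for the S(1)/3 spelling used above:

```python
from sympy import Symbol, Rational, powdenest

# Option 1: a positive symbol lets the power collapse automatically.
x = Symbol('x', positive=True)
assert (x**2)**Rational(1, 3) == x**Rational(2, 3)

# Option 2: no assumptions, so the denesting must be forced explicitly.
y = Symbol('y')
e = (y**2)**Rational(1, 3)
assert e != y**Rational(2, 3)                         # not equal in general
assert powdenest(e, force=True) == y**Rational(2, 3)  # forced, so check domain!
```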
It's a bug in sympy.
There is no correct expansion for a hierarchy of exponentials (see https://github.com/sympy/sympy/blob/master/sympy/core/power.py#L445).

Exponentiation in Python - should I prefer ** operator instead of math.pow and math.sqrt? [duplicate]

This question already has answers here:
Which is faster in Python: x**.5 or math.sqrt(x)?
(15 answers)
Closed 9 years ago.
In my field it's very common to square some numbers, combine them, and take the square root of the result. This is done in the Pythagorean theorem and the RMS calculation, for example.
In numpy, I have done the following:
result = numpy.sqrt(numpy.sum(numpy.power(some_vector, 2)))
And in pure python something like this would be expected:
result = math.sqrt(math.pow(A, 2) + math.pow(B,2)) # example with two dimensions.
However, I have been using this pure python form, since I find it much more compact, import-independent, and seemingly equivalent:
result = (A**2 + B**2)**0.5 # two dimensions
result = (A**2 + B**2 + C**2 + D**2)**0.5
I have heard some people argue that the ** operator is sort of a hack, and that taking a square root by raising a number to the power 0.5 is not so readable. But what I'd like to ask is:
"Is there any COMPUTATIONAL reason to prefer the former two alternatives over the third one(s)?"
Thanks for reading!
math.sqrt is a thin wrapper around the C library's square-root function and is therefore different from the ** operator, which goes through Python's built-in pow machinery. Thus, using math.sqrt can give a (slightly) different answer than using the ** operator, and there is indeed a computational reason to prefer the numpy or math module implementation over the built-in. Specifically, the sqrt functions are probably implemented in the most efficient way possible, whereas ** handles arbitrary bases and exponents and is probably not optimized for the specific case of square root. On the other hand, the built-in pow function handles a few extra cases, like complex numbers, unbounded integer powers, and modular exponentiation.
See this Stack Overflow question for more information on the difference between ** and math.sqrt.
In terms of which is more "Pythonic", I think we need to discuss the very definition of that word. From the official Python glossary, a piece of code or an idea is Pythonic if it "closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages." In every other language I can think of, there is some math module with basic square root functions. However, there are languages that lack a power operator like **, e.g. C++. So ** is probably more Pythonic, but whether or not it's objectively better depends on the use case.
Even in base Python you can do the computation in generic form
result = sum(x**2 for x in some_vector) ** 0.5
x ** 2 is surely not a hack, and the computation performed is the same (I checked the CPython source code). I actually find it more readable (and readability counts).
Using x ** 0.5 to take the square root, however, doesn't do exactly the same computation as math.sqrt: the former is (probably) computed via exponential/logarithm routines, while the latter (probably) uses the dedicated square-root instruction of the floating-point unit.
I often use x ** 0.5 simply because I don't want to add math just for that. I'd expect however a specific instruction for the square root to work better (more accurately) than a multi-step operation with logarithms.
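A small check of the claims above: on CPython the two spellings usually agree bit-for-bit for ordinary non-negative floats, though the language does not guarantee it.

```python
import math

# Compare the operator and the library function on a few values.
for v in [0.25, 2.0, 10.0, 123.456]:
    via_op = v ** 0.5
    via_sqrt = math.sqrt(v)
    print(v, via_op, via_sqrt, via_op == via_sqrt)

# The hypotenuse example from the question, three ways; the 3-4-5
# triangle is exact in floating point, so all three agree.
A, B = 3.0, 4.0
assert (A**2 + B**2) ** 0.5 == math.sqrt(A**2 + B**2) == math.hypot(A, B)
```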

Python - solve polynomial for y

I'm taking in a function (e.g. y = x**2) and need to solve for x. I know I can painstakingly solve this manually, but I'm trying instead to find a method to use. I've browsed numpy, scipy and sympy, but can't seem to find what I'm looking for. Currently I'm making a lambda out of the function, so it'd be nice if I'm able to keep that format for the method, but it's not necessary.
Thanks!
If you are looking for numerical solutions (i.e. you're just interested in the numbers, not the symbolic closed-form solutions), there are a few options for you in the scipy.optimize module. For something simple, newton is a pretty good start for simple polynomials, and you can take it from there.
For symbolic solutions (which is to say to get y = x**2 -> x = +/- sqrt(y)) SymPy solver gives you roughly what you need. The whole SymPy package is directed at doing symbolic manipulation.
Here is an example using the Python interpreter to solve the equation that is mentioned in the question. You will need to make sure that SymPy package is installed, then:
>>> from sympy import *  # we are importing everything for ease of use
>>> x = Symbol("x")
>>> y = Symbol("y")  # create the two variables
>>> equation = Eq(x ** 2, y)  # create the equation
>>> solve(equation, x)
[-sqrt(y), sqrt(y)]
As you see the basics are fairly workable, even as an interactive algebra system. Not nearly as nice as Mathematica, but then again, it is free and you can incorporate it into your own programs. Make sure to read the Gotchas and Pitfalls section of the SymPy documentation on how to encode the appropriate equations.
If all this was to get a quick and dirty solutions to equations then there is always Wolfram Alpha.
Use Newton-Raphson via scipy.optimize.newton. It finds roots of an equation, i.e., values of x for which f(x) = 0. In the example, you can cast the problem as looking for a root of the function f(x) = x² - y. If you supply a lambda that computes y, you can provide a general solution thus:
from scipy.optimize import newton

def inverse(f, f_prime=None):
    def solve(y):
        return newton(lambda x: f(x) - y, 1, f_prime, (), 1e-10, int(1e6))
    return solve
Using this function is quite simple:
>>> sqrt = inverse(lambda x: x**2)
>>> sqrt(2)
1.4142135623730951
>>> import math
>>> math.sqrt(2)
1.4142135623730951
Depending on the input function, you may need to tune the parameters to newton(). The version above uses a starting guess of 1, a tolerance of 1e-10, and a maximum iteration count of 1e6.
For an additional speed-up, you can supply the derivative of the function in question:
>>> sqrt = inverse(lambda x: x**2, lambda x: 2*x)
In fact, without it, newton falls back to the secant method; it is Newton-Raphson proper that relies on knowing the derivative.
Check out SymPy, specifically the solver.
