I don't understand why SymPy won't return the expression below simplified (I'm not sure if it's a bug in my code or a feature of SymPy).
import sympy as sp
a = sp.Symbol('a',finite = True, real = True)
b = sp.Symbol('b',finite = True, real = True)
sp.assumptions.assume.global_assumptions.add(sp.Q.positive(b))
sp.assumptions.assume.global_assumptions.add(sp.Q.negative(a))
sp.simplify(sp.Max(a-b,a+b))
I would expect the output to be $a+b$, but Sympy still gives me $Max(a-b,a+b)$.
Thanks; as you can see I am a beginner in Sympy so any hints/help are appreciated.
Surely the result should be a + b...
You can do this by setting the assumptions on the symbol as in:
In [2]: a = Symbol('a', negative=True)
In [3]: b = Symbol('b', positive=True)
In [4]: Max(a - b, a + b)
Out[4]: a + b
You are trying to use the new assumptions system, but that system is still experimental and is not widely used within SymPy. The new assumptions are not used in core evaluation, so e.g. the Max function has no idea that you have declared global assumptions on a and b, unless those assumptions are declared on the symbols themselves as I show above.
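Putting the answer together, a minimal self-contained version (using the old assumptions system, declared on the symbols themselves):

```python
import sympy as sp

# declare the assumptions on the symbols themselves (old assumptions system)
a = sp.Symbol('a', negative=True)
b = sp.Symbol('b', positive=True)

# Max can now decide the comparison: (a + b) - (a - b) = 2*b > 0
print(sp.Max(a - b, a + b))  # a + b
```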
I was trying to understand the autograd mechanism in more depth. To test my understanding, I tried to write the following code which I expected to produce an error (i.e., Trying to backward through the graph a second time).
import torch

a = torch.tensor([1.0], requires_grad=True)  # `a` is not defined in the snippet; assumed to require grad
b = torch.Tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b + a
    c = b * a
    c.backward()
I expected it to report an error when c.backward() is called for the second time in the for loop, because the history of b has been freed; however, nothing happens.
But when I tried to change b + a to b * a as follows,
b = torch.Tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b * a
    c = b * a
    c.backward()
It did report the error I was expecting.
This looks pretty weird to me. I don't understand why no error is raised in the former case, and why changing + to * makes a difference.
The difference is that adding a constant does not change the gradient, but multiplying by a constant does. It seems autograd is aware of this and optimizes out 'b = b + a'.
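A small experiment supports this (a sketch; exact behavior may depend on the PyTorch version): backpropagating twice through a graph that contains only an addition succeeds, because addition's backward saves no tensors, while doing the same through a multiplication raises the "backward through the graph a second time" error:

```python
import torch

a = torch.tensor([2.0], requires_grad=True)
c = torch.tensor([3.0])  # does not require grad

# addition saves no tensors for backward, so a second backward
# through the (already freed) graph still works
s = a + c
s.backward()
s.backward()  # no error

# multiplication must save its other operand, which is freed
# after the first backward, so the second backward fails
p = a * c
p.backward()
try:
    p.backward()
    raised = False
except RuntimeError:
    raised = True
```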
Good afternoon,
I am having issues with my code. The code I am going to show is a simplified version of the actual one, but the idea is the same.
Var1 = Symbol('Var1')
Var2 = Symbol('Var2')
A = 20
B = 30
Var1 = solve(12+A*B+Var1, Var1)
Var2 = solve(Var1+A+B+Var2, Var2)
print(Var1,Var2)
The issue is that print(Var1) gives me back the numerical solution of the first equation, which is -612, but print(Var2) displays -Var1 - 50 instead of recognizing that Var1 became a number.
This is the library I import:
from sympy.solvers import solve
from sympy import Symbol
Any idea how to make it understand that Var1 became a number?
I did try to assign a new variable and then use it in the Var2 equation, but it gave me an error.
solve returns a list of (possibly multiple) solutions; sol[0] gives you the first (and in your case only) solution. You can then substitute that solution into your second equation:
from sympy.solvers import solve
from sympy import Symbol
Var1 = Symbol('Var1')
Var2 = Symbol('Var2')
A = 20
B = 30
sol1 = solve(12+A*B+Var1, Var1) # [-612]
eqn2 = (Var1+A+B+Var2).subs({'Var1': sol1[0]}) # Var2 - 562
sol2 = solve(eqn2, Var2) # [562]
print(sol1,sol2) # [-612] [562]
print(sol1[0],sol2[0]) # -612 562
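If you prefer, the substitution can be avoided entirely by solving both equations as a system (a sketch of the same problem):

```python
from sympy import symbols, solve

Var1, Var2 = symbols('Var1 Var2')
A, B = 20, 30

# solve both equations simultaneously; solve returns a dict here
sol = solve([12 + A*B + Var1, Var1 + A + B + Var2], [Var1, Var2])
print(sol)  # {Var1: -612, Var2: 562}
```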
The answer to 2^(-1/3) is three roots:
0.79370, -0.39685-0.68736i and -0.39685+0.68736i (approximately)
See the correct answer at Wolfram Alpha.
I know several languages that support complex numbers, but they all return only the first of the three results:
Python:
>>> complex(2,0)**(-1/3)
(0.7937005259840998-0j)
Octave:
>> (2+0i)^(-1/3)
ans = 0.79370
Julia:
julia> complex(2,0)^(-1/3)
0.7937005259840998 + 0.0im
What I'm looking for is something along the lines of:
>> 2^(-1/3)
[0.79370+0i, -0.39685-0.68736i, -0.39685+0.68736i]
Is there a programming language (with a REPL) that will correctly return all three roots, without having to resort to any special modules or libraries, that also has an open source implementation available?
As many comments explained, wanting a general purpose language to give by default the result from every branch of the complex root function is probably a tall order. But Julia allows specializing/overloading operators very naturally (as even the out-of-the-box implementation is often written in Julia). Specifically:
using Roots,Polynomials # Might need to Pkg.add("Roots") first
import Base: ^
^{T<:AbstractFloat}(b::T, r::Rational{Int64}) =
roots(poly([0])^r.den - b^abs(r.num)).^sign(r.num)
And now when trying to raise a float to a rational power:
julia> 2.0^(-1//3)
3-element Array{Complex{Float64},1}:
-0.39685-0.687365im
-0.39685+0.687365im
0.793701-0.0im
Note that specializing the definition of ^ to rational exponents solves the rounding problem mentioned in the comments.
Here is how to solve for all of the roots of b^(1/n) via the roots of the polynomial x^n - b, using Matlab's roots or Octave's roots:
b = 2;
n = -3; % for b^(1/n)
c = [1 zeros(1,abs(n)-1) -b];
r = roots(c).^sign(n);
which returns
r =
-0.396850262992050 - 0.687364818499301i
-0.396850262992050 + 0.687364818499301i
0.793700525984100 + 0.000000000000000i
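The same companion-polynomial trick carries over to, for example, NumPy (a sketch for b = 2, n = -3):

```python
import numpy as np

b, n = 2, -3
# roots of x**|n| - b are the |n| values of b**(1/|n|);
# the sign of n is handled by taking reciprocals
c = [1] + [0] * (abs(n) - 1) + [-b]
r = np.roots(c) ** np.sign(n)
print(r)
```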
Alternatively, using roots of unity (not sure how numerically robust this is):
b = 2;
n = -3;
n0 = abs(n);
r0 = b^(1/n0);
w = exp(2*pi*1i/n0);
r = (r0*w.^(0:n0-1).').^sign(n)
Or using Matlab's Symbolic Math toolbox:
b = 2;
n = -3;
c = [1 zeros(1,abs(n)-1) -b];
r = solve(poly2sym(c)).^sign(n)
which returns:
r =
2^(2/3)/2
2^(2/3)/(2*((3^(1/2)*1i)/2 - 1/2))
-2^(2/3)/(2*((3^(1/2)*1i)/2 + 1/2))
In certain cases you might also find nthroot helpful (Octave documentation).
I'm trying to compile an expression that contains an UndefinedFunction which has an implementation provided (alternatively: an expression which contains a Symbol which represents a call to an external numerical function).
Is there a way to do this? Either with autowrap or codegen or perhaps with some manual editing of the generated files?
The following naive example does not work:
import sympy as sp
import numpy as np
from sympy.abc import *
from sympy.utilities.lambdify import implemented_function
from sympy.utilities.autowrap import autowrap, ufuncify
def g_implementation(a):
    """represents some numerical function"""
    return a*5
# sympy wrapper around the above function
g = implemented_function('g', g_implementation)
# some random expression using the above function
e = (x+g(a))**2+100
# try to compile said expression
f = autowrap(e, backend='cython')
# fails with "undefined reference to `g'"
EDIT:
I have several large Sympy expressions
The expressions are generated automatically (via differentiation and such)
The expressions contain some "implemented UndefinedFunctions" that call some numerical functions (i.e. NOT Sympy expressions)
The final script/program that has to evaluate the expressions for some input will be called quite often. That means that evaluating the expression in Sympy (via evalf) is definitely not feasible. Even compiling just in time (lambdify, autowrap, ufuncify, numba.jit) produces too much of an overhead.
Basically I want to create a binary python extension for those expressions without having to implement them by hand in C, which I consider too error prone.
OS is Windows 7 64bit
You may want to take a look at this answer about serialization of SymPy lambdas (generated by lambdify).
It's not exactly what you asked for, but it may alleviate your problem with startup performance; the lambdified functions will then mostly cost execution time only.
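For what it's worth, lambdify does honor implemented_function, so the expression from the question can at least be evaluated that way (a sketch, without the compilation step):

```python
import sympy as sp
from sympy.utilities.lambdify import implemented_function

# g(a) = a*5, provided as a plain numerical implementation
g = implemented_function('g', lambda a: a * 5)

a, x = sp.symbols('a x')
e = (x + g(a))**2 + 100

f = sp.lambdify((x, a), e)
print(f(1, 2))  # (1 + 10)**2 + 100 = 221
```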
You may also take a look at Theano. It has nice integration with SymPy.
OK, this could do the trick, I hope; if not, let me know and I'll try again.
I compare a compiled version of an expression using Cython against the lambdified expression.
from sympy.utilities.autowrap import autowrap
from sympy import symbols, lambdify
def wraping(expression):
    return autowrap(expression, backend='cython')

def lamFunc(expression, x, y):
    return lambdify([x, y], expression)  # use the argument, not the global expr

x, y = symbols('x y')
expr = ((x - y)**25).expand()
print(expr)

binary_callable = wraping(expr)
print(binary_callable(1, 2))

lamb = lamFunc(expr, x, y)
print(lamb(1, 2))
which outputs:
x**25 - 25*x**24*y + 300*x**23*y**2 - 2300*x**22*y**3 + 12650*x**21*y**4 - 53130*x**20*y**5 + 177100*x**19*y**6 - 480700*x**18*y**7 + 1081575*x**17*y**8 - 2042975*x**16*y**9 + 3268760*x**15*y**10 - 4457400*x**14*y**11 + 5200300*x**13*y**12 - 5200300*x**12*y**13 + 4457400*x**11*y**14 - 3268760*x**10*y**15 + 2042975*x**9*y**16 - 1081575*x**8*y**17 + 480700*x**7*y**18 - 177100*x**6*y**19 + 53130*x**5*y**20 - 12650*x**4*y**21 + 2300*x**3*y**22 - 300*x**2*y**23 + 25*x*y**24 - y**25
-1.0
-1
If I time the execution, the autowrapped function is 10x faster (depending on the problem, I have also observed cases where the factor was as little as two):
%timeit binary_callable(12, 21)
100000 loops, best of 3: 2.87 µs per loop
%timeit lamb(12, 21)
100000 loops, best of 3: 28.7 µs per loop
So here wraping(expr) wraps your expression expr and returns a wrapped object binary_callable, which you can use at any time for numerical evaluation.
EDIT: I have done this on Linux/Ubuntu and Windows OS, both seem to work fine!
I have a little question about sympy.
I loaded the library with:
from sympy import *
At some point in my program I would like to evaluate a function.
x=Symbol('x', real=True)
sqrt(1-x).subs(x, 9).evalf()
>>> 2.82842712474619*I
SymPy answers with a complex value, but I would like an error as in basic Python:
sqrt(-1)
>>> ValueError: math domain error
Does anyone know how to do that using SymPy?
I may be wrong, but I don't think you can make it yell that way, because SymPy is a scientific library and is made to support imaginary numbers. But you can change it a bit:
x = Symbol('x', real=True)
v = sqrt(1 - x).subs(x, 9).evalf()
if not v.is_real:
    raise ValueError("math domain error")
or you can create a function:
def assert_real(v):
    if not v.is_real:
        raise ValueError("math domain error")
    return v
so you can do:
x = Symbol('x', real=True)
v = assert_real(sqrt(1-x).subs(x, 9).evalf())
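Restated as a self-contained snippet, a quick check that the helper behaves as intended:

```python
from sympy import Symbol, sqrt

def assert_real(v):
    if not v.is_real:
        raise ValueError("math domain error")
    return v

x = Symbol('x', real=True)

# real result passes through unchanged
print(assert_real(sqrt(1 - x).subs(x, 0).evalf()))  # 1.0

# complex result raises, like math.sqrt(-1) would
try:
    assert_real(sqrt(1 - x).subs(x, 9).evalf())
except ValueError as err:
    print(err)  # math domain error
```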