Specify algebra in SymPy? - python

Is there some way to tell SymPy that the plus operator for the type of objects you are working with is non-associative? I want to use SymPy to do symbolic computations involving gyrovectors, for which the + operator is replaced by the ⊕ operator, which is non-associative.
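One possible workaround, sketched below as an assumption rather than a confirmed SymPy feature for this use case, is to model ⊕ as an uninterpreted binary function (oplus is a hypothetical name). SymPy treats an undefined Function application as an opaque tree node, so it never reassociates or commutes it:
from sympy import symbols, Function

# Sketch: "oplus" is a hypothetical stand-in for the ⊕ operator.
# An undefined Function is never reassociated or commuted by SymPy.
oplus = Function('oplus')
a, b, c = symbols('a b c')

expr1 = oplus(oplus(a, b), c)  # (a ⊕ b) ⊕ c
expr2 = oplus(a, oplus(b, c))  # a ⊕ (b ⊕ c)
print(expr1 == expr2)          # False: the two groupings stay distinct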

Related

lambdify expressions with native sympy functions

I would like to lambdify sympy's exp, but I run into funny issues when trying to evaluate the function at a sympy.Symbol. This
import sympy
t = sympy.Symbol('t')
f = sympy.lambdify(t, t**2)
f(t) # no problem
works fine, but this
t = sympy.Symbol('t')
f = sympy.lambdify(t, sympy.exp(t))
f(t)
gives
AttributeError: 'Symbol' object has no attribute 'exp'
The same goes for all other native sympy functions I've tried (log, sin, etc.).
Any idea what's going on?
You should specify the modules you want to use via the modules argument of the lambdify function:
f = sympy.lambdify(t, sympy.exp(t), modules=["sympy"])
The main use of lambdify is to allow for a fast numerical evaluation of expressions.
This is achieved by replacing abstract and slow SymPy functions (like sympy.exp) with faster ones intended for numbers (like math.exp or numpy.exp).
These cannot handle SymPy symbols (like your t) as an argument, which is not what lambdify is intended for anyway.
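For instance, the default lambdified function works as intended as long as it is called with a number:
import sympy

t = sympy.Symbol('t')
f = sympy.lambdify(t, sympy.exp(t))  # default numeric backend (math/numpy)
print(f(1.0))  # 2.718281828459045 -- numeric arguments are fine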
If you call lambdify with dummify=False as an additional argument, you get a more meaningful error when calling f(t), namely:
TypeError: can't convert expression to float
The expression that cannot be converted here is your argument t.
If you want to use a lambdified function with symbols as an argument for some reason, you need to pass modules=["sympy"] as an additional argument to lambdify.
This argument specifies which module lambdify uses to replace SymPy functions (like sympy.exp) – in this case, it’s sympy again, so nothing actually happens.
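Putting it together, the failing example from the question works once the sympy module is specified:
import sympy

t = sympy.Symbol('t')
f = sympy.lambdify(t, sympy.exp(t), modules=["sympy"])
print(f(t))      # exp(t) -- symbolic arguments now work
print(f(t + 1))  # exp(t + 1)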

A statement in sympy is returning false when it shouldn't be

For context: I'm using sympy in python 2.7. Part of my project involves simplifying a mathematical expression, but I ran into a problem when using sympy:
from sympy import *
x = symbols("x")
(-x*exp(-x) + exp(-x)) == (1-x)*(exp(-x))
the code above returns me
False
Both my own maths and wolframalpha disagree with this - did I type something wrong or is this some shortcoming of sympy that I'm not yet aware of?
From the docs page:
http://docs.sympy.org/dev/gotchas.html
If you want to test for symbolic equality, one way is to subtract one expression from the other and run it through functions like expand(), simplify(), and trigsimp() and see if the equation reduces to 0.
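For example, the subtraction approach confirms that the two expressions in the question are equal:
from sympy import symbols, exp, simplify

x = symbols("x")
diff = (-x*exp(-x) + exp(-x)) - (1 - x)*exp(-x)
print(simplify(diff))  # 0, so the expressions are mathematically equal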
If you want to create a symbolic equality, use Eq:
In [1]: Eq((-x*exp(-x) + exp(-x)), (1-x)*(exp(-x)))
Out[1]: -x⋅ℯ^(-x) + ℯ^(-x) = (-x + 1)⋅ℯ^(-x)
The == operator has to return a boolean immediately; this is standard Python behaviour. Therefore the == operator compares the raw structure of the two expressions without performing any mathematical transformation (apart from the minor automatic simplifications that happen when the expressions are constructed).

Python symbolic integration

I am using symbolic integration to integrate a function that combines a circular (trigonometric) function and a power function.
from sympy import *
import math
import numpy as np
t = Symbol('t')
integrate(0.000671813*(7/2*(1.22222222+sin(2*math.pi*t-math.pi/2))-6)**0.33516,t)
However, when I run this, it gives me an odd result:
0.000671813*Integral((3.0*sin(6.28318530717959*t - 1.5707963267949) - 2.33333334)**0.33516, t)
Why does this result contain Integral()? I checked other functions online and none of their results contain Integral().
An unevaluated Integral answer means that SymPy was unable to compute the integral.
Essentially you are trying to integrate a function that looks like
(sin(t) + a)**0.33516
where a is a constant number.
In general such an integral cannot be expressed in elementary functions; see, for example, http://www.sosmath.com/calculus/integration/fant/fant.html, especially the sentence on Chebyshev's theorem.
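As a rough sketch of a workaround (assuming a numeric value over a concrete interval is acceptable), the unevaluated Integral can still be computed numerically with evalf():
from sympy import Symbol, sin, integrate, Integral

t = Symbol('t')
expr = (sin(t) + 2)**0.33516  # same shape as the question's integrand

print(integrate(expr, t))                 # stays an unevaluated Integral
print(Integral(expr, (t, 0, 1)).evalf())  # definite integral, evaluated numerically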

evaluating a sympy function at an arbitrary-precision floating point

Given a sympy symbolic function, for example
x=symbols("x")
h=sin(x)
when one calls
h.subs(x, mpf('1.0000000000000000000000000000000000000000000001'))
sympy returns a floating point number. The return value is not an mpf.
Is there a convenient way to get sympy to evaluate symbolic standard functions (like exponential and trig functions), using arbitrary precision?
To evaluate an expression, use expr.evalf(), which takes a precision as the first argument (the default is 15). You can substitute expressions using expr.evalf(prec, subs=subs_dict).
>>> sin(x).evalf(50, subs={x: Float('1.0')})
0.84147098480789650665250232163029899962256306079837
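To mirror the high-precision input from the question, the value can be passed as a SymPy Float with an explicit precision (a minimal sketch):
from sympy import symbols, sin, Float

x = symbols("x")
val = Float('1.0000000000000000000000000000000000000000000001', 50)
print(sin(x).evalf(50, subs={x: val}))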

Exponentiation in Python - should I prefer ** operator instead of math.pow and math.sqrt? [duplicate]

This question already has answers here:
Which is faster in Python: x**.5 or math.sqrt(x)?
In my field it's very common to square some numbers, operate on them together, and take the square root of the result. This is done in the Pythagorean theorem and in RMS calculations, for example.
In numpy, I have done the following:
result = numpy.sqrt(numpy.sum(numpy.power(some_vector, 2)))
And in pure python something like this would be expected:
result = math.sqrt(math.pow(A, 2) + math.pow(B, 2))  # example with two dimensions
However, I have been using this pure python form, since I find it much more compact, import-independent, and seemingly equivalent:
result = (A**2 + B**2)**0.5 # two dimensions
result = (A**2 + B**2 + C**2 + D**2)**0.5
I have heard some people argue that the ** operator is sort of a hack, and that taking a square root by raising a number to the power 0.5 is not very readable. But what I'd like to ask is:
"Is there any COMPUTATIONAL reason to prefer the former two alternatives over the third one(s)?"
Thanks for reading!
math.sqrt is a thin wrapper around the C library's square-root routine, whereas the ** operator goes through Python's built-in pow machinery. As a result, math.sqrt can give a (slightly) different answer than the ** operator, and there is indeed a computational reason to prefer the numpy or math module implementations over the built-in operator. Specifically, the sqrt functions are probably implemented as efficiently as possible, whereas ** has to handle arbitrary bases and exponents and is probably not optimized for the specific case of square roots. On the other hand, the built-in pow function handles a few extra cases, like "complex numbers, unbounded integer powers, and modular exponentiation".
See this Stack Overflow question for more information on the difference between ** and math.sqrt.
In terms of which is more "Pythonic", I think we need to look at the very definition of that word. The official Python glossary states that a piece of code or an idea is Pythonic if it "closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages." Every other language I can think of has some math module with a basic square-root function, but not all of them have a power operator like ** (C++, for example, lacks one). So ** is probably more Pythonic, but whether or not it's objectively better depends on the use case.
Even in base Python you can do the computation in generic form
result = sum(x**2 for x in some_vector) ** 0.5
x ** 2 is surely not a hack, and the computation performed is the same (I checked the CPython source code). I actually find it more readable (and readability counts).
Using x ** 0.5 to take the square root, however, does not do exactly the same computation as math.sqrt: the former is (probably) computed using logarithms, while the latter (probably) uses the dedicated square-root instruction of the math processor. I often use x ** 0.5 simply because I don't want to import math just for that, but I'd expect the dedicated square-root instruction to be more accurate than a multi-step computation with logarithms.
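A quick way to check the performance side on your own machine is a timeit micro-benchmark (a sketch; results vary by platform and interpreter). A variable is used so that CPython cannot constant-fold the expression at compile time:
import timeit

# Seconds per one million calls of each square-root spelling.
print(timeit.timeit("math.sqrt(x)", setup="import math; x = 2.5"))
print(timeit.timeit("x ** 0.5", setup="x = 2.5"))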
