Time-consuming substitution in sympy - python

I have a very long polynomial in SymPy into which I want to substitute numerical values. The substitution takes a long time, while MATLAB evaluates the same polynomial in less than one second.
How can I solve this problem in Python?
I just tried, for example: polynomial.subs([(k11, 9600000000.00000), (k12, 3066000000.0000), ...])

If you are looking for a numerical result, you probably want to look at lambdify, which converts the symbolic polynomial into a numerical function for much faster evaluation:
f = lambdify(coefficient_list, polynomial)
result = f(*coefficient_values)
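
A minimal, self-contained sketch of that approach (the symbols k11, k12 and the toy polynomial below are placeholders for the real, much longer expression):

from sympy import symbols, lambdify

# placeholder symbols and polynomial standing in for the actual long expression
k11, k12 = symbols('k11 k12')
polynomial = k11**2 + 3*k11*k12 + k12

# lambdify compiles the symbolic expression into a plain numerical Python function
f = lambdify((k11, k12), polynomial)
result = f(9600000000.0, 3066000000.0)  # evaluates numerically, much faster than repeated subs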


Why are the dot products in this expression not evaluating (after substituting Symbols for BaseVectors in CoordSys3D)?

The expression below looks messy, but all that needs to be known is that:
All of the variables are of type Symbol
Dot products were formed with sp.vector.dot
Absolute value performed with sp.Abs
The issue arises in the second_expression below, where I have simply used first_expression.subs() to substitute two Symbols with BaseVectors from the CoordSys3D framework. Namely, the z-hat and e_j symbols from above have been swapped for k_N and j_N respectively, as shown below.
Issue: the three dot products are not evaluating (and neither is the Abs). Why would it not evaluate, given that sympy.vector.dot is designed to dot two BaseVectors (the orthogonal k_N and j_N)?
The second expression should look like this when evaluated:
If anyone is willing to help me with this in any way, I would be extremely grateful.
EDIT: This problem has been condensed into a simpler one: SymPy substitute Symbol for Vector
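
For reference, a minimal sketch of the behaviour described (the symbol names a and b are illustrative, and the .doit() call is an assumption about the usual remedy rather than something confirmed in the thread): sympy.vector.dot on plain Symbols stays unevaluated, and forcing evaluation after the substitution with .doit() collapses the dot products of the orthonormal BaseVectors.

from sympy import symbols
from sympy.vector import CoordSys3D, dot

N = CoordSys3D('N')
a, b = symbols('a b')

expr = dot(a, b)                            # stays an unevaluated Dot because a and b are plain Symbols
substituted = expr.subs({a: N.k, b: N.j})   # still unevaluated after the substitution
print(substituted.doit())                   # forces evaluation: orthogonal basis vectors give 0
print(dot(N.k, N.k))                        # dotting BaseVectors directly evaluates to 1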

How can I accurately calculate a mathematical expression in Python?

(Sorry for my poor English:)
I know that I can use eval to calculate the result of a mathematical expression.
However, this is not accurate (e.g. 0.3 - 0.2 != 0.1). After searching, I found that I can use Fraction or some other methods to do the calculation.
But I didn't find out how to use these methods to directly calculate a string expression. For example, Fraction('239/3289') is correct, but I can't use Fraction('(239/3289+392)/(12+993)').
Is there a simple way to accurately calculate a mathematical expression?
edit: Actually I'm just wondering how to parse such a formula and return the result as a Fraction, instead of splitting the string and using Fraction(Fraction('239/3289') + 392, 12 + 993)... Does someone know how to do it?
Try math.isclose and cmath.isclose as suggested here
The argument to Fraction is not an arbitrary expression, it's just for creating one fraction. You can then use that like a number in a more general expression.
Fraction(Fraction('239/3289') + 392, 12 + 993)
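
If the goal from the edit is to parse the whole string and return an exact result, one possibility (a sketch assuming SymPy is available and that the expression reduces to a single rational number) is to let sympify parse it exactly and then convert to a Fraction:

from fractions import Fraction
from sympy import sympify

# SymPy's parser keeps the integer literals exact, so / produces Rationals instead of floats
expr = sympify('(239/3289+392)/(12+993)')
result = Fraction(expr.p, expr.q)  # convert the resulting Rational back to a fractions.Fraction
print(result)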

How to get real value in R [duplicate]

I'm just learning R, so please forgive what I'm sure is a very elementary question. How do I take the real part of a complex number?
If you read the help file for complex (?complex), you will see a number of functions for performing complex arithmetic. The details clearly state:
The functions Re, Im, Mod, Arg and Conj have their usual interpretation as returning the real part, imaginary part, modulus, argument and complex conjugate for complex values.
Therefore, use Re:
Re(1+2i)
# 1
For the imaginary part:
Im(1+2i)
# 2
?complex will list other useful functions.
Re(z)
and
Im(z)
would be the functions you're looking for.
See also http://www.johnmyleswhite.com/notebook/2009/12/18/using-complex-numbers-in-r/

How to estimate how big one number is from another?

I'm trying to find an efficient solution to this problem. I receive two numbers from an API call (we can call them n1, n2). Suppose n2 is bigger than n1. I want to know how much bigger n2 is. The difference between the two is not enough because I don't know how to evaluate the result of the subtraction. I don't see any other solution but to define a tolerance range that fits my domain of application and check if their difference falls within that range. Any idea?
Why don't you just divide?
Suppose n1 is 1 and n2 is 10.
If you divide n2 by n1, you will see that n2 is 10 times bigger than n1.
It's a common problem when using floating point that results can be almost equal, and you want to treat them as if they were equal.
You need a formula to measure the difference between the numbers relative to the size of the numbers, and it's remarkably tricky to find. One problem is avoiding division by zero. As always, first check the libraries: in this case, the math module includes math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
So let the mathematics experts handle it for you:
import math
if math.isclose(a,b):
# they agree to around nine decimal places
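
A small sketch combining both ideas (the helper name relative_difference is made up for illustration):

import math

def relative_difference(n1, n2):
    # gap between the numbers relative to the larger magnitude; guards against division by zero
    denom = max(abs(n1), abs(n2))
    return abs(n2 - n1) / denom if denom else 0.0

n1, n2 = 3.0, 12.0
print(n2 / n1)                       # 4.0: the ratio says n2 is 4 times n1
print(relative_difference(n1, n2))   # 0.75
print(math.isclose(0.3 - 0.2, 0.1))  # True: the two floats agree within the default rel_tol of 1e-09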

Simplify expression generated with sympy codegen

I am using sympy.utilities.codegen to generate some C code which I use for numerical computations. For example, a generated C function looks something like this:
double f(double x, double y, double z){
    return M_PI*sin(M_PI*x)*sin(M_PI*y) + sin(M_PI*y)*sin(M_PI*z);
}
In general I have larger functions with more expressions, which is problematic for my numerical computations. Since I work with CUDA, I have a reduced set of registers for my computations. What I want to do is split an expression into smaller ones and also do some substitutions so that expensive computations are calculated only once. Here is an example of what the above code should look like:
double f(double x, double y, double z){
    double sinx = sin(M_PI*x);
    double siny = sin(M_PI*y);
    double sinz = sin(M_PI*z);
    double result;
    result = M_PI*sinx*siny;
    result += siny*sinz;
    return result;
}
Obviously, for small functions like this the substitutions don't pay off, but in general this is the only way to get things working for me with larger functions. So my question would be: are there any simple built-in options to get this kind of behaviour? The most important part for me is splitting the calculation into small steps. I guess the substitution could be done with some string-replacement routines.
You most probably want to perform common subexpression elimination.
In your example siny is the only expression actually being reused:
>>> from sympy import symbols, sin, pi, cse
>>> x, y, z = symbols('x y z')
>>> expr = pi*sin(pi*x)*sin(pi*y) + sin(pi*y)*sin(pi*z)
>>> print(cse(expr))
([(x0, sin(pi*y))], [pi*x0*sin(pi*x) + x0*sin(pi*z)])
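
To turn that cse output into C source, one possibility (a sketch; the printed declarations are illustrative and not what codegen itself emits) is to run each piece through ccode:

from sympy import symbols, sin, pi, cse, ccode

x, y, z = symbols('x y z')
expr = pi*sin(pi*x)*sin(pi*y) + sin(pi*y)*sin(pi*z)

replacements, reduced = cse(expr)
# one temporary per common subexpression, then the final expression
for tmp, sub in replacements:
    print(f"double {tmp} = {ccode(sub)};")
print(f"double result = {ccode(reduced[0])};")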
Usually compilers should already do these transformations - at least if you ask them to ignore the non-associativity of floating-point arithmetic (by passing e.g. -ffast-math). I don't have any experience with nvcc, though.
If you run into limitations when working with codegen for generating CUDA code, please feel free to improve upon it and send pull requests to the SymPy project. Be sure to have the latest master branch checked out, though, since Jim Crist is currently refactoring the code printers: https://github.com/sympy/sympy/pull/7823
