I have a simple factorization problem in sympy that I cannot sort out. I've had great success with sympy working with quite complex integrals, but I'm flummoxed by something simple.
How do I get
phi**2 - 2*phi*phi_0 + phi_0**2 - 8
to factor to
(phi - phi_0)**2 - 8
?
I've already tried the factor function
factor(phi**2 - 2*phi*phi_0 + phi_0**2 - 8, phi - phi_0)
which yields the same old solution.
As I noted in the comment, such "partial factorizations" are not unique (for instance, x**2 + 5*x + 7 equals (x + 2)*(x + 3) + 1 and (x + 1)*(x + 4) + 3, and once you understand what's going on it's not hard to come up with examples of your own).
There are ways to do such things manually, but it's hard to know what to tell you because I don't know what generality you are looking for. For instance, probably the easiest way to do this particular example is
>>> print(A.subs(phi, x + phi_0).factor().subs(x, phi - phi_0))
(phi - phi_0)**2 - 8
That is, let x = phi - phi_0 (SymPy isn't smart enough to replace phi - phi_0 with x, but it is smart enough to replace phi with x + phi_0, which amounts to the same thing). This doesn't generalize as nicely if you want to factor in terms of a larger polynomial, or if you don't know what you are aiming for. But given the names of your variables, I'm guessing phi - phi_0 is all you care about.
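Putting it together, here is a minimal self-contained version of that substitution trick (A is just the expression from your question, and x is a helper symbol introduced only for the substitution):
from sympy import symbols
phi, phi_0, x = symbols('phi phi_0 x')
A = phi**2 - 2*phi*phi_0 + phi_0**2 - 8
# substitute phi = x + phi_0 (i.e. x = phi - phi_0), factor, then substitute back
print(A.subs(phi, x + phi_0).factor().subs(x, phi - phi_0))
# (phi - phi_0)**2 - 8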
Beyond this, I should point out that you can do more or less any kind of simplification you want by writing your own algorithms that dig into the expressions. Take a look at http://docs.sympy.org/latest/tutorial/manipulation.html to get started. Also take a look at all the methods of Expr; if you end up writing such a thing, there are quite a few useful helpers, such as the various as_* methods.
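For instance, here is a tiny illustration of inspecting the expression tree and one of the as_* helpers (as_independent, which splits off the part of an expression that does not depend on the given symbols; this is just one example of what is available):
from sympy import symbols
phi, phi_0 = symbols('phi phi_0')
A = phi**2 - 2*phi*phi_0 + phi_0**2 - 8
print(A.args)                        # the top-level terms of the Add
print(A.as_independent(phi, phi_0))  # (-8, phi**2 - 2*phi*phi_0 + phi_0**2)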
I am having some trouble writing this equation in Python and having it produce correct results:
y = 2.95710^-7 * x^4 - 2.34310^-5 * x^3 - 1.67*10^-4 * x^2 + 0.04938 * x - 1.083.
I have tried the following code:
y = 0.0000002957*x**4 - 0.00002343*x**3 - 0.000167*x**2 + 0.04938*x - 1.083
and also:
y = (2.957*10**-7)*x**4 - (2.343*10**-5)x**3 - (1.67*10**4)*x**2 + 0.04938*x - 1.083
Any advice would be helpful. I think the problem might be the scientific notation or the exponents and the way I am inputting them.
EDIT
In response to the questions: the equation spits out a different number than what I get on a calculator.
What you have can easily be converted to Python in a very direct way. The equivalent Python statement (after fixing a flaw in the original equation; see below) is:
y = 2.95710e-7 * x**4 - 2.34310e-5 * x**3 - 1.67e-4 * x**2 + 0.04938 * x - 1.083
This involves only a simple substitution for the ^ characters: replace ^ with e in the exponential floating point constants, and with ** where x is raised to an integer power.
For reference to be able to more easily compare the source with the result, here's the original equation with the one slight fix mentioned below:
y = 2.95710^-7 * x^4 - 2.34310^-5 * x^3 - 1.67^-4 * x^2 + 0.04938 * x - 1.083
UPDATE: Thanks to Pranav for pointing out a flaw in the source equation. The term 1.67*10^-4 should be changed to 1.67^-4 to match the other similar terms, and the equivalent fix made to the resulting Python equation. Those fixes were made and commented on above.
I always like to see what I'm working with in situations like this. I used matplotlib to plot this function, using an unchanged version (except for the function and the bounds on x) of this sample code, plotting x values between -500 and 500.
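For reference, here is a rough sketch of that kind of plotting code (a minimal matplotlib version written from memory, assuming the fixed equation above, not the exact sample code I adapted):
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-500, 500, 1000)
y = 2.95710e-7 * x**4 - 2.34310e-5 * x**3 - 1.67e-4 * x**2 + 0.04938 * x - 1.083
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()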
Thanks for answering everybody. The most helpful answer was using "e" instead of scientific notation (*10^x).
Say I have an equation:
a^x + b^x + c^x = n
Since I know a, b, c and n, is there a way to solve for x?
I have been struggling with this problem for a while now, and I can't seem to find a solution online.
My current method is to iterate over x until the left side is "close enough" to n. This is pretty slow, and it sits inside an already computationally expensive algorithm.
Example:
3^x + 5^x + 7^x = 83
How do I go about solving for x? (It is 2 in this case.)
I tried the equation in WolframAlpha and it seems to know how to solve it, but any other program fails to do so.
I should probably also mention that x is not an integer (mostly in the 0.01 to 0.05 range in my case).
You can use the scipy library. You can install it using the command pip install scipy.
Then, this code will work:
from scipy.optimize import root
def eqn(x):
    return 3**x + 5**x + 7**x - 83
myroot = root(eqn, 2)
print(myroot.x)
Here, root takes two arguments, root(fun, x0), where fun is the function for the equation and x0 is a rough estimate of the root value. For example, if you know that your root will fall in the range (0, 1), then you can enter 0 as the rough estimate.
Also make sure the equation entered in the code is rearranged so that the R.H.S. is equal to 0.
In our case, 3^x + 5^x + 7^x = 83 becomes 3^x + 5^x + 7^x - 83 = 0.
Reference Documentation
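The same idea generalizes to arbitrary a, b, c and n. This is only a sketch; the helper make_eqn and the starting guess x0=1.0 are my own choices, not part of the original answer:
from scipy.optimize import root
def make_eqn(a, b, c, n):
    # f(x) = a**x + b**x + c**x - n, rearranged so the R.H.S. is 0
    return lambda x: a**x + b**x + c**x - n
myroot = root(make_eqn(3, 5, 7, 83), x0=1.0)
print(myroot.x)  # should be close to 2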
If you want to stick to base Python, it is easy enough to implement Newton's method for this problem:
from math import log
def solve(a, b, c, n, guess, tol=1e-12):
    x = guess
    for i in range(100):
        # Newton step: f(x) = a**x + b**x + c**x - n, f'(x) = log(a)*a**x + log(b)*b**x + log(c)*c**x
        x_new = x - (a**x + b**x + c**x - n)/(log(a)*a**x + log(b)*b**x + log(c)*c**x)
        if abs(x - x_new) < tol: return x_new
        x = x_new
    return "Doesn't converge on a root"
Newton's method might fail to converge in some pathological cases, hence an escape valve for such cases. In practice it converges very rapidly.
For example:
>>> solve(3,5,7,83,1)
2.0
Despite all this, I think that Cute Panda's answer is superior. It is easy enough to do a straightforward implementation of such numerical algorithms, one that works adequately in most cases, but naive implementations such as the one given above tend to be vulnerable to excessive round-off error as well as other problems. scipy uses highly optimized routines which are implemented in a much more robust way.
I am using Sympy to evaluate some symbolic sums that involve manipulations of the gamma functions, but I noticed that in this case it does not evaluate the sum and leaves it unevaluated.
import sympy as sp
a = sp.Symbol('a',real=True)
b = sp.Symbol('b',real=True)
d = sp.Symbol('d',real=True)
c = sp.Symbol('c',integer=True)
z = sp.Symbol('z',complex=True)
t = sp.Symbol('t',complex=True)
sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
I then need to lambdify this expression, and unfortunately this becomes impossible to do.
With Matlab symbolic toolbox however I get the following answer:
Matlab
>> a=sym('a')
>> b=sym('b');
>> c=sym('c')
>> d=sym('d');
>> z=sym('z');
>> t=sym('t');
>> symsum((exp(-d)*(d^c)/factorial(c))/(z-c-a*t),c,0,inf)
ans =
(-d)^(z - a*t)*exp(-d)*(gamma(a*t - z) - igamma(a*t - z, -d))
The formula involves lower incomplete gamma functions, as expected.
Any idea why this behaviour occurs? I thought sympy was able to do this summation symbolically.
Running your code with SymPy 1.2 results in
d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, d*exp_polar(I*pi)) + t
By the way, summation already attempts to evaluate the sum (and succeeds in the case of SymPy 1.2); the subsequent simplification is cosmetic (and can sometimes be harmful).
The presence of exp_polar means that SymPy found it necessary to consider the points on the Riemann surface of the logarithmic function instead of regular complex numbers. (Related bit of docs). The function lowergamma is branched, so we must distinguish between "the value at -1, if we arrive at -1 from 1 going clockwise" and "the value at -1, if we arrive at -1 from 1 going counterclockwise". The former is exp_polar(-I*pi), the latter is exp_polar(I*pi).
All this is very interesting but not really helpful when you need concrete evaluation of the expression. We have to unpolarify this expression, and from what Matlab shows, simply replacing exp_polar with exp is a correct way to do so here.
rv = sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
rv = rv.subs(sp.exp_polar, sp.exp)
Result: d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, -d) + t
There is still something to think about here, with complex numbers and so on. Is d positive or negative? What does raising it to the power -a*t+z mean, what branch of multivalued power function do we take? The same issues are present in Matlab output, where -d is raised to a power.
I recommend testing this with floating point input (direct summation of series vs evaluation of the SymPy expression for it), and adding assumptions on the sign of d if possible.
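As a sketch of that kind of check (the sample values below are arbitrary, chosen only so that z - a*t is not a non-negative integer and d is positive; whether the two printed numbers actually agree also hinges on the branch questions discussed above):
import sympy as sp
a, d = sp.symbols('a d', real=True)
c = sp.Symbol('c', integer=True)
z, t = sp.symbols('z t', complex=True)
summand = (sp.exp(-d)*d**c/sp.gamma(c + 1))/(z - c - a*t)
rv = sp.simplify(t - sp.summation(summand, (c, 0, sp.oo))).subs(sp.exp_polar, sp.exp)
vals = {a: 0.5, d: 2.0, z: 3.0, t: 1.0}
# truncated direct sum of the series vs. the closed-form expression
direct = vals[t] - sum(summand.subs(vals).subs(c, k) for k in range(200))
print(sp.N(direct), sp.N(rv.subs(vals)))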
I'm currently trying to calculate a negative group delay of analog filters by using symbolic calculations in Python. The problem that I'm currently trying to resolve is to get rid of some very small imaginary coefficients.
For example, consider a fraction with the following numerator (the imaginary terms are the ones containing I):
(-1.705768*w^18 + 14.702976409432*w^16 + 1.06581410364015e-14*I*w^15 - 28.7694094371724*w^14 - 9.94759830064144e-14*I*w^13 + 59.0191623753299*w^12 + 5.6843418860808e-14*I*w^11 + 24.7015297857594*w^10 - 1.13686837721616e-13*I*w^9 - 549.093511217598*w^8 - 5.6843418860808e-14*I*w^7 + 1345.40434657845*w^6 + 2.27373675443232e-13*I*w^5 - 1594.14046181284*w^4 - 1.13686837721616e-13*I*w^3 + 980.58940367608*w^2 - 254.8428594382)
Is there any way to automatically round those small coefficients so that they become 0 (in general, any negligible values)? Or at least, can I somehow filter the imaginary values out? I've tried to use re(given_fraction), but it couldn't return anything useful. Also, the standard rounding function can't cope with symbolic expressions.
The rounding part was already addressed in Printing the output rounded to 3 decimals in SymPy, so I won't repeat that answer here, focusing instead on dropping the imaginary parts of the coefficients.
Method 1
You can simply do re(expr) where expr is your expression. But for this to work, w must be known to be a real variable; otherwise there is no way for SymPy to tell what the real part of (3+4*I)*w is. (SymPy symbols are assumed to be complex unless stated otherwise.) This will do the job:
w = symbols('w', real=True)
expr = # your formula
expr = re(expr)
Method 2
If for some reason you can't do the above... another, somewhat intrusive, way to drop the imaginary part of everything is to replace I with 0:
expr = expr.xreplace({I: 0})
This assumes the expression is already in the expanded form (as shown), so there is no (3+4*I)**2, for example; otherwise the result would be wrong.
Method 3
A more robust approach than 2, but specialized to polynomials:
expr = Poly([re(c) for c in Poly(expr, w).all_coeffs()], w).as_expr()
Here the expression is first turned into a polynomial in w (which is possible in your example, since it has a polynomial form). Then the real part of each coefficient is taken, and a polynomial is rebuilt from them. The final part as_expr() returns it back to expression form, if desired.
Either way, the output for your expression:
-1.705768*w**18 + 14.702976409432*w**16 - 28.7694094371724*w**14 + 59.0191623753299*w**12 + 24.7015297857594*w**10 - 549.093511217598*w**8 + 1345.40434657845*w**6 - 1594.14046181284*w**4 + 980.58940367608*w**2 - 254.8428594382
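As a small self-contained illustration of Method 3 (the toy polynomial and its tiny spurious imaginary coefficient below are made up, just to show the mechanics):
from sympy import symbols, Poly, re, I
w = symbols('w', real=True)
expr = 2.0*w**2 + 1.0e-14*I*w - 3.0  # toy expression with a negligible imaginary term
cleaned = Poly([re(c) for c in Poly(expr, w).all_coeffs()], w).as_expr()
print(cleaned)  # 2.0*w**2 - 3.0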
From
from sympy import *
t,r = symbols('t r', real=True, nonnegative=True)
c_x,c_y,a1,a2 = symbols('c_x c_y a1 a2', real=True)
integrate(-r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2,(t,0,1))
I obtain the piecewise solution
Piecewise((-a1*c_x*r*cos(a2)/2 - a1*c_y*r*sin(a2)/2 - a1*r**2/2 + a2*c_x*r*cos(a2)/2 + a2*c_y*r*sin(a2)/2 + a2*r**2/2, Eq(a1, a2)), (-a1*r**2/2 + a2*r**2/2 - c_x*r*sin(a1)/2 + c_x*r*sin(a2)/2 + c_y*r*cos(a1)/2 - c_y*r*cos(a2)/2, True))
which does not need to be piecewise, because if a1 = a2 both expressions are 0; therefore the second expression is actually a global, non-piecewise solution.
So my first question is: can I make sympy give me the non-piecewise solution? (by setting some option or anything else)
Regardless of the above-mentioned possibility: since I can accept that a1 is not equal to a2 (it is a limit case of no interest), is there a way to tell sympy about such an assumption? (Again, in order to obtain the non-piecewise solution.)
Thanks in advance from a sympy novice.
P.S. For the same problem, Maxima directly gives the non-piecewise solution.
There is a keyword conds whose default is "piecewise". It can also be set to "separate" or "none". However, as it is a definite integral, you can probably try the keyword manual=True as well.
If you set the keyword to conds='separate', it should return a distinct tuple with the convergence conditions. I tried it, but it only gives a single solution; I don't know yet why this behaviour is not as expected.
The conds='none' keyword should not return the convergence conditions, just the solution. This is, I think, what you are looking for.
Another option, which is only valid in the context of definite integrals, is the keyword manual=True. This mimics integrating by hand, conveniently "forgetting" to check for convergence conditions.
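For completeness, here is what those calls would look like on the integral from the question (just a sketch of the keyword syntax; I have not verified whether either option actually removes the a1 = a2 case split in a given SymPy version):
from sympy import symbols, cos, sin, integrate
t, r = symbols('t r', real=True, nonnegative=True)
c_x, c_y, a1, a2 = symbols('c_x c_y a1 a2', real=True)
integrand = -r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2
print(integrate(integrand, (t, 0, 1), conds='none'))  # drop convergence conditions
print(integrate(integrand, (t, 0, 1), manual=True))   # mimic integration by hand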