How to avoid useless piecewise solutions from sympy - python

From
from sympy import *
t,r = symbols('t r', real=True, nonnegative=True)
c_x,c_y,a1,a2 = symbols('c_x c_y a1 a2', real=True)
integrate(-r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2,(t,0,1))
I obtain the piecewise solution
Piecewise((-a1*c_x*r*cos(a2)/2 - a1*c_y*r*sin(a2)/2 - a1*r**2/2 + a2*c_x*r*cos(a2)/2 + a2*c_y*r*sin(a2)/2 + a2*r**2/2, Eq(a1, a2)), (-a1*r**2/2 + a2*r**2/2 - c_x*r*sin(a1)/2 + c_x*r*sin(a2)/2 + c_y*r*cos(a1)/2 - c_y*r*cos(a2)/2, True))
which does not need to be piecewise: if a1 = a2 both branches evaluate to 0, so the second expression is actually a valid global, non-piecewise solution.
So my first question is: can I make sympy give me the non-piecewise solution? (by setting some option or anything else)
Regardless of the above-mentioned possibility, since I can accept that a1 is not equal to a2 (it is a limit case of no interest), is there a way to tell sympy about this assumption? (again in order to obtain the non-piecewise solution)
Thanks in advance from a sympy novice.
P.S. For the same problem Maxima gives directly the non-piecewise solution.

There is a keyword conds, whose default is "piecewise". It can also be set to "separate" or "none". Since this is a definite integral, you can probably try the keyword manual=True as well.
If you set conds='separate', it should return a separate tuple with the convergence conditions. I tried it, but it only gives a single solution; I don't yet know why this behaviour is not as expected.
With conds='none' the convergence conditions are not returned, just the solution. This is, I think, what you are looking for.
Another option, valid only in the context of definite integrals, is the keyword manual=True. This mimics integrating by hand, conveniently "forgetting" to check for convergence conditions.
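A sketch of both options applied to the original integral (behaviour may vary by SymPy version):
from sympy import *
t, r = symbols('t r', real=True, nonnegative=True)
c_x, c_y, a1, a2 = symbols('c_x c_y a1 a2', real=True)
expr = -r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2
# Drop the convergence/case conditions entirely
integrate(expr, (t, 0, 1), conds='none')
# Or integrate "by hand", which also skips the case analysis
integrate(expr, (t, 0, 1), manual=True)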

Related

Is there a programmable method for calculating the exponent value of a power sum

Say I have an equation:
a^x + b^x + c^x = n
Since I know a, b, c and n, is there a way to solve for x?
I have been struggling with this problem for a while now, and I can't seem to find a solution online.
My current method is to iterate over x until the left side is "close enough" to n. This method is pretty slow, and it sits inside an already computationally expensive algorithm.
Example:
3^x + 5^x + 7^x = 83
How do I go about solving for x? (It is 2 in this case.)
I tried the equation in WolframAlpha and it seems to know how to solve it, but any other program fails to do so.
I should probably also mention that x is not an integer (mostly in the 0.01 to 0.05 range in my case); a sketch of the brute-force scan I described is shown below.
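For reference, a minimal sketch of that brute-force scan (the step size is an assumption, and it relies on a, b, c > 1 so the left-hand side is increasing in x):
def brute_force_scan(a, b, c, n, step=1e-4):
    # Step x forward until a**x + b**x + c**x first reaches n;
    # assumes a, b, c > 1 so the sum is increasing in x.
    x = 0.0
    while a**x + b**x + c**x < n:
        x += step
    return x
print(brute_force_scan(3, 5, 7, 83))  # roughly 2.0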
You can use the scipy library, which you can install with the command pip install scipy.
Then, this code will work:
from scipy.optimize import root

def eqn(x):
    return 3**x + 5**x + 7**x - 83

myroot = root(eqn, 2)
print(myroot.x)
Here, root takes two arguments, root(fun, x0), where fun is the function for the equation and x0 is a rough estimate of the root value. For example, if you know that your root will fall in the range (0, 1), you can pass 0 as the rough estimate.
Also make sure the equation is entered in the code so that the R.H.S. is equal to 0.
In our case 3^x + 5^x + 7^x = 83 becomes 3^x + 5^x + 7^x - 83 = 0
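As a quick sanity check (reusing eqn and myroot from the code above), you can plug the computed root back into the equation:
print(eqn(myroot.x[0]))  # should be very close to 0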
Reference Documentation
If you want to stick to base Python, it is easy enough to implement Newton's method for this problem:
from math import log

def solve(a, b, c, n, guess, tol=1e-12):
    x = guess
    for i in range(100):
        # Newton step: x_new = x - f(x)/f'(x)
        x_new = x - (a**x + b**x + c**x - n)/(log(a)*a**x + log(b)*b**x + log(c)*c**x)
        if abs(x - x_new) < tol:
            return x_new
        x = x_new
    return "Doesn't converge on a root"
Newton's method might fail to converge in some pathological cases, hence an escape valve for such cases. In practice it converges very rapidly.
For example:
>>> solve(3,5,7,83,1)
2.0
Despite all this, I think that Cute Panda's answer is superior. It is easy enough to write a straightforward implementation of such numerical algorithms, one that works adequately in most cases, but naive implementations such as the one given above tend to be vulnerable to excessive round-off error as well as other problems. scipy uses highly optimized routines which are implemented in a much more robust way.

Sympy cannot evaluate an infinite sum involving gamma functions

I am using Sympy to evaluate some symbolic sums that involve manipulations of the gamma function, but I noticed that in this case it does not evaluate the sum and leaves it unevaluated.
import sympy as sp
a = sp.Symbol('a',real=True)
b = sp.Symbol('b',real=True)
d = sp.Symbol('d',real=True)
c = sp.Symbol('c',integer=True)
z = sp.Symbol('z',complex=True)
t = sp.Symbol('t',complex=True)
sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
I then need to lambdify this expression, and unfortunately this becomes impossible to do.
With the Matlab Symbolic Toolbox, however, I get the following answer:
Matlab
>> a=sym('a')
>> b=sym('b');
>> c=sym('c')
>> d=sym('d');
>> z=sym('z');
>> t=sym('t');
>> symsum((exp(-d)*(d^c)/factorial(c))/(z-c-a*t),c,0,inf)
ans =
(-d)^(z - a*t)*exp(-d)*(gamma(a*t - z) - igamma(a*t - z, -d))
The formula involves lower incomplete gamma functions, as expected.
Any idea why this behaviour occurs? I thought sympy was able to do this summation symbolically.
Running your code with SymPy 1.2 results in
d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, d*exp_polar(I*pi)) + t
By the way, summation already attempts to evaluate the sum (and succeeds in the case of SymPy 1.2); the subsequent simplification is cosmetic (and can sometimes be harmful).
The presence of exp_polar means that SymPy found it necessary to consider the points on the Riemann surface of the logarithmic function instead of regular complex numbers. (Related bit of docs). The function lowergamma is branched, so we must distinguish between "the value at -1, if we arrive at -1 from 1 going clockwise" and "the value at -1, if we arrive at -1 from 1 going counterclockwise". The former is exp_polar(-I*pi), the latter is exp_polar(I*pi).
All this is very interesting but not really helpful when you need concrete evaluation of the expression. We have to unpolarify this expression, and from what Matlab shows, simply replacing exp_polar with exp is a correct way to do so here.
rv = sp.simplify(t-sp.summation((sp.exp(-d)*(d**c)/sp.gamma(c+1))/(z-c-a*t),(c,0,sp.oo)))
rv = rv.subs(sp.exp_polar, sp.exp)
Result: d**(-a*t + z)*exp(-I*pi*a*t - d + I*pi*z)*lowergamma(a*t - z, -d) + t
There is still something to think about here, with complex numbers and so on. Is d positive or negative? What does raising it to the power -a*t + z mean, and which branch of the multivalued power function do we take? The same issues are present in the Matlab output, where -d is raised to a power.
I recommend testing this with floating-point input (direct summation of the series vs. evaluation of the SymPy expression for it), and adding assumptions on the sign of d if possible.
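A minimal sketch of such a check (the test values are arbitrary assumptions, chosen so that the denominator z - c - a*t never vanishes for integer c):
import math
import sympy as sp
a, d, z, t = sp.symbols('a d z t')
# Closed form from above, with exp_polar already replaced by exp
rv = d**(-a*t + z)*sp.exp(-sp.I*sp.pi*a*t - d + sp.I*sp.pi*z)*sp.lowergamma(a*t - z, -d) + t
f = sp.lambdify((a, d, z, t), rv, modules='mpmath')
a_v, d_v, z_v, t_v = 0.5, 2.0, 10.3, 1.7
# Direct (truncated) summation of the original series
direct = t_v - sum(math.exp(-d_v)*d_v**k/math.factorial(k)/(z_v - k - a_v*t_v)
                   for k in range(100))
print(f(a_v, d_v, z_v, t_v), direct)  # real parts should agree up to rounding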

sympy factor simple relationship

I have a simple factorization problem in sympy that I cannot sort out. I've had great success with sympy working with quite complex integrals, but I'm flummoxed by something simple.
How do I get
phi**2 - 2*phi*phi_0 + phi_0**2 - 8
to factor to
(phi - phi_0)**2 - 8
?
I've already tried the factor function
factor(phi**2 - 2*phi*phi_0 + phi_0**2 - 8,phi-phi_0)
which just returns the same expression unchanged.
As I noted in the comment, such "partial factorizations" are not unique (for instance, x**2 + 5*x + 7 equals (x + 2)*(x + 3) + 1 and (x + 1)*(x + 4) + 3, and once you understand what's going on it's not hard to come up with examples of your own).
There are ways to do such things manually, but it's hard to know what to tell you because I don't know what generality you are looking for. For instance, probably the easiest way to do this particular example is
>>> phi, phi_0, x = symbols('phi phi_0 x')
>>> A = phi**2 - 2*phi*phi_0 + phi_0**2 - 8
>>> print(A.subs(phi, x + phi_0).factor().subs(x, phi - phi_0))
(phi - phi_0)**2 - 8
That is, let x = phi - phi_0 (SymPy isn't smart enough to replace phi - phi_0 with x, but it is smart enough to replace phi with x + phi_0, which is the same thing). This doesn't generalize as nicely if you want to factor in terms of a larger polynomial, or if you don't know what you are aiming for. But given the names of your variables, I'm guessing phi - phi_0 is all you care about.
Beyond this, I should point out that you can do more or less any kind of simplification you want by writing your own algorithms that dig into the expressions. Take a look at http://docs.sympy.org/latest/tutorial/manipulation.html to get started. Also take a look at all the methods of Expr. There are quite a few useful helper functions if you end up writing such a thing, such as the various as_* methods.
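For instance, here is a small sketch (tailored to this expression, not a general algorithm) using as_independent to split off the constant term so that only the variable-dependent part is factored:
>>> const, rest = A.as_independent(phi, phi_0)
>>> factor(rest) + const
(phi - phi_0)**2 - 8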

Awkward Substitution in Sympy

Is there a simpler way to do substitution in Sympy, similar to Sage or Mathematica?
In Mathematica there is a function called Eliminate[] which, given a set of equations, you can ask to eliminate certain variables.
In Sage you need to be more hands-on with it, but it's still more or less similar to Mathematica.
In Sympy, by comparison, it's more awkward to do substitution.
In the screenshot the red arrows show what I am talking about. The white arrow is the method I think would be more appropriate.
Edit 1: here is a link to the function in Mathematica: http://reference.wolfram.com/mathematica/ref/Eliminate.html
You can have equations (actually Equality object) in SymPy:
>>> eq1=Eq(x,y);eq2=Eq(x,5)
But you are right, subs doesn't guess everything for you. It looks like Sage assumes that if a variable is isolated on one side of an equation, that is the variable to be replaced. But there is no guarantee that you will always conveniently have the desired variable isolated. It's not hard to use solve to give you the desired variable isolated:
>>> solve(eq2,x,dict=1)
[{x:5}]
And then that can be substituted into the equation from which you want to eliminate that variable.
>>> eq1.subs(solve(eq2,x,dict=1)[0])
5=y
Use of the "exclude" keyword doesn't presently behave quite as I would expect; perhaps it should act in an elimination sense:
>>> solve((eq1,eq2), exclude=(x,))
{y:x}
Following up on the above comments and https://github.com/sympy/sympy/issues/14741, one way to do the above in Sympy would be:
from sympy import Eq, var
var('P, F, K, M, E0, E1, E2, E3, E4')
a = Eq(E1, (E0 + P - F)*K - M)
b = Eq(E2, (E1 + P - F)*K - M)
c = Eq(E3, (E2 + P - F)*K - M)
d = Eq(E4, (E3 + P - F)*K - M - F)
d.subs(*c.args).subs(*b.args).subs(*a.args)
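If you would rather not chain the subs calls by hand, an alternative sketch (reusing the symbols and equations defined above) is to let solve eliminate the intermediate variables in one step:
from sympy import solve
sol = solve([a, b, c, d], [E1, E2, E3, E4], dict=True)[0]
sol[E4]  # E4 expressed in terms of E0, P, F, K and M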

Multivariate Root Finding in Python

Using the Excel solver, it is easy to find a solution (optimal values for x and y) for this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them.)
Any suggestions?
Thanks!
>>> from scipy.optimize import fsolve
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0, 0, 1])
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why this works. Well, it's a dirty hack. It's just that fsolve and its relatives deal with systems of equations. What I did here, I defined a system of three equations (f(x) returns a three-element list) for three variables (x has three elements). Now fsolve uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values of two variables, say, x[1] and x[2], and find x[0] to satisfy the only non-trivial equation you have. You can see this explicitly by specifying a couple of different initial guesses for x0 and seeing different outputs, all of which satisfy f(x) = 0 up to a certain tolerance.
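A quick way to see this non-uniqueness, reusing f and vf from above (the starting points are arbitrary):
>>> for x0 in ([0, 0, 1], [1, 1, 1]):
...     xx = fsolve(vf, x0=x0)
...     print(xx, f(xx))
The printed roots should differ, while each keeps f(x) close to 0.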
