I have the following code in SymPy:
from sympy import *
x,y,G=symbols('x y G')
G=x**(3./2.) - y
g_inv=solve(G, x)
if len(g_inv)>1: g_inv=g_inv[-1]
dginvdy=diff(g_inv, y)
The problem is that this gives me
2*(y**2)**(1/3)/(3*y)
and not 2*y**(-1./3)/3 as I expected. I have tried simplify() and even cancel(), but no luck. Also, if I define the variables with real=True I can't invert it with solve for some reason. If I define only y as being real, I get
2*sign(y)/(3*Abs(y)**(1/3))
which is closer (?) but still not what I want. Defining y as positive also didn't do the trick.
This may seem like something silly but it tremendously complicates the calculations I do from then on.
Any ideas?
I think you need to use sympy.factor here rather than simplify:
In [2]: dginvdy
Out[2]: 2*(y**2)**(1/3)/(3*y)
In [3]: factor(dginvdy)
Out[3]: 2/(3*y**(1/3))
The sympy docs go into some detail about this.
I have found that my root-simplification headaches are often alleviated by defining my variables with the assumption positive=True, and indeed this method gets you to your desired answer here too. You'll need to get rid of your if statement and use g_inv = solve(G, x)[0], because solve(...) will now return only a single solution. This method can lead to some loss of generality, so you just need to know your problem.
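Putting that together, a minimal sketch (my own assembly of the above, using Rational(3, 2) instead of the float exponent so the powers stay exact):
from sympy import symbols, solve, diff, Rational

# positive=True lets SymPy simplify roots without sign/branch worries.
x, y = symbols('x y', positive=True)
G = x**Rational(3, 2) - y
g_inv = solve(G, x)[0]    # the single solution under these assumptions: y**(2/3)
dginvdy = diff(g_inv, y)  # 2/(3*y**(1/3)), the expected form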
The title says it all... why does sympy have the following behavior?
import sympy.physics.units as u
print(0*u.meter)
# >>> 0
And Pint has this behavior:
import pint
u = pint.UnitRegistry()
print(0*u.meter)
# >>> 0 meter
I think I prefer Pint's behavior, because it allows for dimensional consistency. 0 is a proper magnitude of some unit. For instance, 0 kelvin has a definite meaning... it's not just the absence of anything...
So I realize the contributors of sympy probably chose this implementation for some reason. Can you help me see the light?
The discussion of reasons for implementation belongs on GitHub (where I raised this issue), not here. A short answer is that units are a bolted-on structure; the core of SymPy is not really unit-aware.
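For instance (my own illustration, not from the original discussion), the collapse happens in generic Mul evaluation before units are ever considered, while nonzero coefficients keep the unit:
import sympy.physics.units as u

print(2*u.meter - u.meter)  # meter: ordinary symbolic arithmetic on the unit
print(u.meter - u.meter)    # 0: the unit cancels like any other symbol
print(0*u.meter)            # 0: the generic rule 0*anything == 0 fires first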
You can create a 0*meter expression by passing the evaluate=False parameter to Mul:
>>> Mul(0, u.meter, evaluate=False)
0*meter
However, it will become 0 if combined with something else.
>>> 3*Mul(0, u.meter, evaluate=False)
0
Wrapping in UnevaluatedExpr prevents the above, but causes more problems than it solves.
>>> 3*UnevaluatedExpr(Mul(0, u.meter, evaluate=False))
3*(0*meter)
So I am starting with an equation that sets a ratio of two symbols equal to a fraction, and I use it to solve for both x and y:
import sympy

x, y = sympy.symbols('x y')
mrs = y/x
ratio = sympy.Rational(2, 5)  # an exact 2/5; a plain 2/5 would be the float 0.4
x = sympy.solveset(sympy.Eq(mrs, ratio), x)
y = sympy.solveset(sympy.Eq(mrs, ratio), y)
In the end, solving for y returns:
{2*x/5}
Which is a FiniteSet
But solving for x returns:
{5*y/2} \ {0}
Which is a Complement
I don't get why solving for one variable gives me a FiniteSet while solving for the other doesn't. Also, is there a way to solve for the other variable so as to get a FiniteSet instead of a Complement?
What do you expect as a result? Could you solve this problem by hand and write the expected solution? And why would you want a FiniteSet as a solution?
I myself cannot come up with a better notation than sympy's, since x=0 needs to be excluded.
When you continue working with the solutions, sympy can easily work with both FiniteSet and Complement; mathematically those are not completely different structures. The difference is that sympy needs to represent these solutions internally and cannot use the same construction for everything, but rather uses small building blocks to assemble the solution. The result you get with type(x) is simply the last building block used.
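As a small illustration (mine, not part of the original answer), both result types are SymPy sets and behave the same once the free symbol is made concrete:
import sympy

x, y = sympy.symbols('x y')
xsol = sympy.solveset(sympy.Eq(y/x, sympy.Rational(2, 5)), x)
ysol = sympy.solveset(sympy.Eq(y/x, sympy.Rational(2, 5)), y)

print(type(xsol).__name__)  # Complement
print(type(ysol).__name__)  # FiniteSet
print(xsol.subs(y, 2))      # {5}: the complement evaluates away for concrete y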
EDIT: Some math here: x=0 does not solve the equation y/x=2/5 for any y, so it must be excluded from the solution set.
If you solve for y, then x=0 is already excluded, since y/0 is not well defined.
If you solve for x, then y=0 is a priori possible, since 0/x=0 for x!=0. Thus sympy needs to exclude x=0 manually, which it does by removing 0 from the set of solutions.
Now, since we know that x=0 can never be a solution of the equation, we can exclude it before even trying to solve. Therefore we do
x = sympy.symbols('x', real=True, nonzero=True)
right at the beginning of the example (before the definition of mrs). The rest can remain unchanged.
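In full, a sketch of that fix (my own assembly of the answer's suggestion; the solve for x should now come back as a plain FiniteSet):
import sympy

# Declaring x nonzero up front removes the need for the manual exclusion.
x = sympy.symbols('x', real=True, nonzero=True)
y = sympy.symbols('y')
mrs = y/x
ratio = sympy.Rational(2, 5)
print(sympy.solveset(sympy.Eq(mrs, ratio), x))  # {5*y/2}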
From
from sympy import *
t,r = symbols('t r', real=True, nonnegative=True)
c_x,c_y,a1,a2 = symbols('c_x c_y a1 a2', real=True)
integrate(-r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2,(t,0,1))
I obtain the piecewise solution
Piecewise((-a1*c_x*r*cos(a2)/2 - a1*c_y*r*sin(a2)/2 - a1*r**2/2 + a2*c_x*r*cos(a2)/2 + a2*c_y*r*sin(a2)/2 + a2*r**2/2, Eq(a1, a2)), (-a1*r**2/2 + a2*r**2/2 - c_x*r*sin(a1)/2 + c_x*r*sin(a2)/2 + c_y*r*cos(a1)/2 - c_y*r*cos(a2)/2, True))
which does not need to be piecewise, because if a1=a2 both expressions are 0; the second expression is therefore actually a global, non-piecewise solution.
So my first question is: can I make sympy give me the non-piecewise solution? (by setting some option or anything else)
Regardless of the above-mentioned possibility, since I can accept that a1 is not equal to a2 (it is a limit case of no interest), is there a way to tell sympy of such an assumption? (again, in order to obtain the non-piecewise solution)
Thanks in advance from a sympy novice.
P.S. For the same problem Maxima gives directly the non-piecewise solution.
There is a keyword conds whose default is "piecewise". It can also be set to "separate" or "none". However, as it is a definite integral, you can probably try the keyword manual=True as well.
If you set conds='separate', it should return a distinct tuple with the convergence conditions. I tried it; it only gave a single solution, and I don't know yet why this behaviour is not as expected.
The conds='none' keyword will not return the convergence conditions, just the solution. This is, I think, what you are looking for.
Another option, which is only valid in the context of definite integrals, is the keyword manual=True. This mimics integrating by hand, conveniently "forgetting" to check for convergence conditions.
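Applied to the integral above, a quick sketch (both keywords are documented options of integrate, though I have not checked every SymPy version):
from sympy import symbols, integrate, cos, sin

t, r = symbols('t r', real=True, nonnegative=True)
c_x, c_y, a1, a2 = symbols('c_x c_y a1 a2', real=True)
expr = -r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t)
                     + c_y*sin(-a1*t + a1 + a2*t) + r)/2

# conds='none' drops the convergence conditions entirely.
res = integrate(expr, (t, 0, 1), conds='none')

# manual=True mimics integrating by hand and also avoids the Piecewise.
res_manual = integrate(expr, (t, 0, 1), manual=True)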
Is there a simpler way to do substitution in SymPy, something similar to Sage or Mathematica?
Mathematica has a function called Eliminate[], which, given a set of equations, you can ask to eliminate certain variables.
In Sage you need to be more hands-on with it, but it's still more or less similar to Mathematica.
In SymPy, by comparison, substitution is more awkward.
In the screenshot the red arrows show what I am talking about; the white arrow is the method I think would be more appropriate.
edit 1: here is a link to the function in mathematica http://reference.wolfram.com/mathematica/ref/Eliminate.html
You can have equations (actually Equality objects) in SymPy:
>>> from sympy import Eq, symbols, solve
>>> x, y = symbols('x y')
>>> eq1 = Eq(x, y); eq2 = Eq(x, 5)
But you are right, subs doesn't guess everything for you. It looks like Sage assumes that if a variable is isolated on one side of an equation, that is the variable to be replaced. But there is no guarantee that you will always conveniently have the desired variable isolated. It's not hard to use solve to give you the desired variable isolated:
>>> solve(eq2, x, dict=1)
[{x: 5}]
And then that can be substituted into the equation from which you want to eliminate that variable.
>>> eq1.subs(solve(eq2, x, dict=1)[0])
5=y
Use of the "exclude" keyword doesn't presently behave quite as I would expect; perhaps it should act in an elimination sense:
>>> solve((eq1, eq2), exclude=(x,))
{y: x}
Following up on the above comments and https://github.com/sympy/sympy/issues/14741, one way to do the above in Sympy would be:
from sympy import Eq, var

var('P, F, K, M, E0, E1, E2, E3, E4')
a = Eq(E1, (E0 + P - F)*K - M)
b = Eq(E2, (E1 + P - F)*K - M)
c = Eq(E3, (E2 + P - F)*K - M)
d = Eq(E4, (E3 + P - F)*K - M - F)

# Each Eq's args are (lhs, rhs), so subs(*c.args) replaces E3 by its defining
# expression; chaining eliminates E3, E2, then E1, leaving E4 in terms of
# E0, P, F, K and M.
d.subs(*c.args).subs(*b.args).subs(*a.args)
I want to do in Python what this guy did in MATLAB.
I have installed Anaconda, so I have the numpy and sympy libraries. So far I have tried with nsolve, but that doesn't work. I should say I'm new to Python, and also that I know how to do it in MATLAB :P.
The equation:
-2*log(( 2.51/(331428*sqrt(x)) ) + ( 0.0002 /(3.71*0.26)) ) = 1/sqrt(x)
Normally, I would solve this iteratively: simply guess x on the left, solve for the x on the right, put the solution back in on the left, and solve again, repeating until the left x is close to the right one. I have an idea what the solution should be.
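In code, that manual iteration would look something like this (my own sketch; the constants are the ones already baked into the equation above):
from math import log10, sqrt

x = 0.02  # initial guess for the friction factor
for _ in range(100):
    lhs = -2*log10(2.51/(331428*sqrt(x)) + 0.0002/(3.71*0.26))
    x_new = 1/lhs**2  # invert 1/sqrt(x) = lhs to get the next guess
    if abs(x_new - x) < 1e-12:
        break
    x = x_new
print(x)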
So I could do that, but that's not very cool. I want to do it numerically, with a proper solver.
My 15€ Casio calculator can solve it as is, so I think it shouldn't be too complicated?
Thank you for your help,
edit: so I have tried the following:
from numpy import log, sqrt
from scipy.optimize import brentq

w = 10
d = 0.22
rho = 1.18
ni = 18.2e-6
Re = (w*d*rho)/ni
k = 0.2e-3
d = 0.26

def f(x, Re, k, d):
    return -2*log((2.51/(Re*sqrt(x))) + (k/(3.71*d)), 10)*sqrt(x) + 1

print(brentq(f, 0.0, 1.0, xtol=4.44e-12, maxiter=100, args=(),
             full_output=True, disp=True))
And I get this error:
r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp)
TypeError: f() takes exactly 4 arguments (1 given)
Is it because I'm also solving for constants?
edit2:
so I think I have to pass the constants via the args=() keyword, so I changed:
f,0.0,1.0,xtol=4.44e-12,maxiter=100,args=(Re,k,d),full_output=True,disp=True
but now I get this:
-2*log((2.51/(Re*sqrt(x)))+(k/(3.71*d)),10)*sqrt(x)+1
TypeError: return arrays must be of ArrayType
Anyway, when I put in a different equation, let's say 2*x*Re+(k*d)/(x+5), it works, so I guess I have to transform the equation.
So it dies here: log(x, 10).
edit4: the correct syntax is log10(x)... Now it works, but I get zero as a result.
This works fine. I've done a few things here. First, I've used a simpler definition of the function using the global variables you've defined anyway. I find this a little nicer than passing args= to the solvers; it also enables easier use of your own custom solvers if you ever need something like that. I've used the generic root function as an entry point rather than a particular algorithm; this is nice because you can simply pass a different method later. I've also fixed up your spacing as recommended by PEP 8 and fixed your erroneous rewriting of the equation. I find it more intuitive simply to write LHS - RHS rather than manipulate as you did. Also, notice that I've replaced all the integer literals with 1.0 or the like to avoid problems with integer division. 0.02 is regarded as a pretty standard starting point for the friction factor.
import numpy
from scipy.optimize import root

w = 10.0
d = 0.22
rho = 1.18
ni = 18.2e-6
Re = w*d*rho/ni
k = 0.2e-3

def f(x):
    return (-2*numpy.log10((2.51/(Re*numpy.sqrt(x))) + (k/(3.71*d)))
            - 1.0/numpy.sqrt(x))

print(root(f, 0.02))
I must also mention that fixed-point iteration is actually faster than even Newton's method for this problem. You can use the built-in fixed-point routine scipy.optimize.fixed_point by defining f2 as follows:
from scipy.optimize import fixed_point

def f2(x):
    LHS = -2*numpy.log10((2.51/(Re*numpy.sqrt(x))) + (k/(3.71*d)))
    return 1/LHS**2
Timings (starting further from the root to show speed of convergence):
%timeit root(f, 0.2)
1000 loops, best of 3: 428 µs per loop
%timeit fixed_point(f2, 0.2)
10000 loops, best of 3: 148 µs per loop
Your tags are a little off: you're tagging it as sympy, which is a library for symbolic computations, but you say that you want to solve it numerically. In case the latter is your actual intention, here are the relevant scipy docs:
http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#root-finding
SciPy's fixed_point is also to be preferred because root does not converge for guess values far from the solution, like the 0.2 in @chthonicdaemon's %timeit example.