I've got the following equations:
q1dd, b1, q2, q3, v1, q2dd, a1, a2, b2 = symbols(r'\ddot{q}_1 b1 q2 q3 v1 \ddot{q}_2 a1 a2 b2')
eq1 = -q1dd+b1*cos(q2)*sin(q3)*v1
eq2 = -q2dd+a1*sin(q2)+a2*cos(q2) + b2*cos(q3)*v1
display(eq1)
display(eq2)
Following SymPy's convention, each expression is written as -lhs + rhs and is implicitly equal to zero, so both equations equal zero.
I'd like to solve this system in SymPy:
sol1 = nonlinsolve([eq1,eq2],[v1,q3])
sol2 = solve([eq1,eq2],[v1,q3])
however, the result is extremely complicated, and neither trigsimp nor simplify changes the solution.
By hand I can simply divide eq1 by eq2 (both are zero), solve for tan(q3), and then solve eq1 for v1. This gives a very short solution.
My question is: am I doing something wrong (another solver, a different form of parametrization, different handling, ...), or is SymPy just not ready yet to solve these things as elegantly?
Your approach is not wrong. SymPy and other programs are not about to replace people with some knowledge of mathematics. In this case the nonlinear solvers miss the opportunity to simplify sin(q3)/cos(q3) to tan(q3), thereby reducing the number of appearances of q3 to one. If they are pushed to follow a particular strategy - e.g., "solve for v1 from the first equation, substitute into the second, simplify, and solve for q3" - the solution comes out without much fuss.
v1sol = solve(eq1, v1)[0]
q3sol = solve(simplify(eq2.subs(v1, v1sol)), q3)[0]
print([v1sol, q3sol])
This outputs
[\ddot{q}_1/(b1*sin(q3)*cos(q2)), -atan(\ddot{q}_1*b2/(b1*(-\ddot{q}_2 + a1*sin(q2) + a2*cos(q2))*cos(q2)))]
I am pretty new to the subject of linear programming and would appreciate any pointers.
I have a slightly complicated equation, but here is a simpler version of the problem:
x1 + x2 = 10
#subject to the following constraints:
0 <= x1 <= 5 and
3x1 <= x2 <= 20
Basically, x2 has to be greater than three times x1. So in this case the integer solutions are x1 = [0, 1, 2] and, correspondingly, x2 = [10, 9, 8].
There is a lot of material out there on minimizing or maximizing an objective function, but this problem is not one of those. What is this type of problem called, and what is the recommended way to solve it, preferably using some Python library that finds one or several feasible solutions?
Your problem could be stated as
min 0*x1+0*x2 ("zero coefficients")
subject to
x1+x2=10
3x1-x2<=0
x2<=20 (note that this constraint follows from x1,x2>=0 and their sum being 10)
This can easily be fed into a linear programming package such as pulp. I am more of an R user than a Python user, hence I cannot provide details. You could also solve it online without any programming.
EDIT: rereading your question, I see that your desired solutions are not continuous (e.g., it seems you are not looking for [2.5, 7.5] as a solution) but are restricted to integer values. The problem would then be called a "mixed integer program" rather than a "linear program". pulp, however, should be able to solve it if you declare the variables x1, x2 as integers.
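To make that concrete, here is a minimal sketch of the mixed-integer formulation in pulp (this assumes pulp and its bundled CBC solver are installed; the problem and variable names are illustrative):

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpInteger, LpStatus

prob = LpProblem("feasibility", LpMinimize)
x1 = LpVariable("x1", lowBound=0, upBound=5, cat=LpInteger)
x2 = LpVariable("x2", lowBound=0, upBound=20, cat=LpInteger)

prob += 0 * x1 + 0 * x2   # dummy objective ("zero coefficients")
prob += x1 + x2 == 10     # equality constraint
prob += 3 * x1 - x2 <= 0  # i.e. x2 >= 3*x1

prob.solve()
print(LpStatus[prob.status], x1.value(), x2.value())
```

Any of the three feasible pairs may come back, since the objective does not distinguish between them.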
Another point is whether you are after ALL integer solutions given the constraints. There have been some discussions about that here on Stack Overflow; however, I am unsure whether pulp can do that out of the box.
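If you do want all integer solutions, the search space here is tiny, so plain brute-force enumeration in Python is enough, with no solver library needed:

```python
# enumerate every integer pair satisfying all the constraints
solutions = [(x1, x2)
             for x1 in range(0, 6)       # 0 <= x1 <= 5
             for x2 in range(0, 21)      # 0 <= x2 <= 20
             if x1 + x2 == 10 and 3 * x1 <= x2]
print(solutions)  # [(0, 10), (1, 9), (2, 8)]
```

This only scales to small bounded problems; for large domains you would need a proper solution-enumeration feature of a MIP or constraint solver.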
From
from sympy import *
t,r = symbols('t r', real=True, nonnegative=True)
c_x,c_y,a1,a2 = symbols('c_x c_y a1 a2', real=True)
integrate(-r*(a1 - a2)*(c_x*cos(-a1*t + a1 + a2*t) + c_y*sin(-a1*t + a1 + a2*t) + r)/2,(t,0,1))
I obtain the piecewise solution
Piecewise((-a1*c_x*r*cos(a2)/2 - a1*c_y*r*sin(a2)/2 - a1*r**2/2 + a2*c_x*r*cos(a2)/2 + a2*c_y*r*sin(a2)/2 + a2*r**2/2, Eq(a1, a2)), (-a1*r**2/2 + a2*r**2/2 - c_x*r*sin(a1)/2 + c_x*r*sin(a2)/2 + c_y*r*cos(a1)/2 - c_y*r*cos(a2)/2, True))
which does not need to be piecewise, because if a1 = a2 both expressions are 0; therefore the second expression is actually a global, non-piecewise solution.
So my first question is: can I make sympy give me the non-piecewise solution? (by setting some option or anything else)
Regardless of the above, since I can accept that a1 is not equal to a2 (it is a limit case of no interest), is there a way to tell SymPy of such an assumption? (again, in order to obtain the non-piecewise solution)
Thanks in advance from a sympy novice.
P.S. For the same problem Maxima gives directly the non-piecewise solution.
There is a keyword conds, whose default is "piecewise". It can also be set to "separate" or "none". Since this is a definite integral, you can probably try the keyword manual=True as well.
With conds='separate', it should return the convergence conditions as a distinct tuple. I tried it, and it only gives a single solution; I don't know yet why this behaviour is not as expected.
The conds='none' keyword does not return the convergence conditions at all, just the solution. This is, I think, what you are looking for.
Another option, valid only in the context of definite integrals, is the keyword manual=True. This mimics integrating by hand, conveniently "forgetting" to check for convergence conditions.
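To make this concrete, here is a small sketch on a simpler definite integral (the symbol n and the integrand are illustrative, not taken from the question):

```python
from sympy import symbols, integrate, simplify

x, n = symbols('x n')

# default: the result is a Piecewise carrying convergence conditions on n
piecewise_result = integrate(x**n, (x, 0, 1))

# conds='none' drops the convergence conditions and keeps only the generic branch
plain_result = integrate(x**n, (x, 0, 1), conds='none')
print(plain_result)  # the generic antiderivative evaluated, without the Piecewise
```

The same keyword should apply to the integral in the question, yielding the second (global) branch directly.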
I've run into a strange situation where z3py produces two separate answers for what would logically be the same problem.
Version 1:
>>> import z3
>>> r, r2, q = z3.Reals('r r2 q')
>>> s = z3.Solver()
>>> s.add(r > 2, r2 == r, q == r2 ** z3.RealVal(0.5))
>>> s.check()
unknown
Version 2:
>>> import z3
>>> r, r2, q = z3.Reals('r r2 q')
>>> s = z3.Solver()
>>> s.add(r > 2, r2 == r, q * q == r2)
>>> s.check()
sat
How do I change what I'm doing in version 1 so that it produces an accurate result? These constraints are generated on the fly, and rewriting them on the fly could add significant complexity to the application. Also, in the case where the root is truly symbolic, it would simply not be possible for Python itself to solve the problem.
Edit: I've discovered that if I use the following setup for my Solver, it will solve successfully (although a bit slower):
z3.Then("simplify","solve-eqs","smt").solver()
It's not entirely clear to me what the implications are of specifying that rather than just the default solver, however.
The out-of-the-box performance of Z3 on non-linear real problems is not great, and it will definitely take some fiddling to make it find all the solutions you need. My first attempt is always to switch to the NLSAT solver (apply the qfnra-nlsat tactic, or make a solver from it). That solver is often much better on QF_NRA problems, but it doesn't support any theory combination, i.e., if you have other types of variables it will bail out.
Also, search Stack Overflow for "Z3" and "non-linear"; there have been a multitude of questions and answers on various aspects of this.
Is there a simpler way to do substitution in SymPy, similar to Sage or Mathematica?
Mathematica has a function called Eliminate[] which, given a set of equations, can eliminate certain variables.
In Sage you need to be more hands-on with it, but it's still more or less similar to Mathematica.
In SymPy, by comparison, substitution is more awkward.
In the screenshot, the red arrows show what I am talking about. The white arrow is the method I think would be more appropriate.
edit 1: here is a link to the function in mathematica http://reference.wolfram.com/mathematica/ref/Eliminate.html
You can have equations (actually Equality objects) in SymPy:
>>> eq1 = Eq(x, y); eq2 = Eq(x, 5)
But you are right, subs doesn't guess everything for you. It looks like Sage assumes that if a variable is isolated on one side of an equation, that is the variable to be replaced. But there is no guarantee that you will always conveniently have the desired variable isolated. It's not hard to use solve to give you the desired variable isolated:
>>> solve(eq2,x,dict=1)
[{x:5}]
And then that can be substituted into the equation from which you want to eliminate that variable.
>>> eq1.subs(solve(eq2,x,dict=1)[0])
5=y
Use of the "exclude" keyword doesn't presently behave quite as I would expect; perhaps it should act in an elimination sense:
>>> solve((eq1,eq2), exclude=(x,))
{y:x}
Following up on the comments above and https://github.com/sympy/sympy/issues/14741, one way to do this in SymPy would be:
from sympy import Eq, var
var('P, F, K, M, E0, E1, E2, E3, E4')
a = Eq(E1, (E0 + P - F)*K - M)
b = Eq(E2, (E1 + P - F)*K - M)
c = Eq(E3, (E2 + P - F)*K - M)
d = Eq(E4, (E3 + P - F)*K - M - F)
d.subs(*c.args).subs(*b.args).subs(*a.args)
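As a side note, for polynomial equations SymPy can also eliminate a variable via resultants. A toy sketch (the equations x - y = 0 and x - 5 = 0 are illustrative, mirroring the eq1/eq2 pair above):

```python
from sympy import symbols, resultant

x, y = symbols('x y')

# the resultant with respect to x vanishes exactly when the two
# polynomials share a root in x, so x is eliminated from the system
r = resultant(x - y, x - 5, x)
print(r)  # a polynomial in y alone, zero precisely when y = 5
```

This is closer in spirit to Mathematica's Eliminate[] than chained subs calls, though it only applies to polynomial systems.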
Using the Excel solver, it is easy to find a solution (optimal values for x and y) to this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them).
Any suggestions?
Thanks!
>>> from scipy.optimize import fsolve
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0, 0, 1])
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why this works. Well, it's a dirty hack. fsolve and its relatives deal with systems of equations. What I did here is define a system of three equations (vf(x) returns a three-element list) for three variables (x has three elements). fsolve then uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values for two of the variables, say x[1] and x[2], and find x[0] satisfying the only non-trivial equation you have. You can see this explicitly by trying a couple of different initial guesses x0 and observing different outputs, all of which satisfy f(x) = 0 up to a certain tolerance.
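In fact, because the system is underdetermined, a solver is not strictly necessary: you can fix one variable and solve the single linear equation for the other in closed form. A sketch (the choice y = 1.0 is arbitrary):

```python
# 14.80461*x - 4.9233*y + 10*0.4803 = 0  =>  x = (4.9233*y - 10*0.4803) / 14.80461
def solve_x(y):
    return (4.9233 * y - 10 * 0.4803) / 14.80461

y = 1.0
x = solve_x(y)
residual = 14.80461 * x - 4.9233 * y + 10 * 0.4803
print(x, residual)  # residual is ~0 up to floating-point rounding
```

Each choice of y gives a different valid (x, y) pair, which is the same non-uniqueness the fsolve hack exhibits with different initial guesses.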