I tried to solve the following inequality in sympy:
(10000 / x) - 1 < 0
So I issued the command:
solve_poly_inequality(Poly((10000 / x) - 1), '<')
However, I got:
[Interval.open(-oo, 1/10000)]
My manual computations, however, give either x < 0 or x > 10000.
What am I missing? Due to the -1, I cannot represent it as a rational function.
Thanks in advance!
You are using a low-level solving routine. I would recommend using the higher-level routines solve or solveset, e.g.
>>> solveset((10000 / x) - 1 < 0, x, S.Reals)
Union(Interval.open(-oo, 0), Interval.open(10000, oo))
The reason your attempt looks wrong (even though it is internally consistent) is that you did not specify the generator to use, so Poly picked 1/x as its variable (call it g) and solved the problem 10000*g - 1 < 0, which is true when g is less than 1/10000, as you found.
You can see this generator identification by writing
>>> Poly(10000/x - 1)
Poly(10000*1/x - 1, 1/x, domain='ZZ')
10000/x - 1 is not a polynomial in x but a polynomial in 1/x; as a function of x it is a rational function. If you try to force x as the generator with Poly(10000*1/x - 1, x, domain='ZZ'), you get
PolynomialError: 1/x contains an element of the generators set
because, by definition, 10000/x - 1 cannot be a polynomial in x, so you simply cannot do the computation that way. Note that the answer in the generator g = 1/x does translate back to your manual result: 1/x < 1/10000 holds exactly when x < 0 or x > 10000.
You can also try the following, or other solvers.
from sympy.solvers.inequalities import reduce_rational_inequalities
from sympy import Poly
from sympy.abc import x
reduce_rational_inequalities([[10000/x - 1 < 0]], x)
((-oo < x) & (x < 0)) | ((10000 < x) & (x < oo))
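Unlike the bare Poly route, reduce_rational_inequalities is told explicitly that x is the variable, so it handles the denominator correctly and agrees with the solveset result above.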
I am currently trying to find the maximum radius of a circle I can fit between the existing circles around it.
That is, I'm trying to find not only the maximum radius, but also the best-suited center point for it along a specific given straight line.
To find that maximum I'm trying to implement a generalized Lagrange multipliers solution using sympy.
If n is the number of constraints I have, then I was able to:
Create the n multiplier symbols.
Take the gradient of the Lagrangian with respect to x, y and the n multipliers.
Build the required inequalities (from the constraints) to obtain the full list of equalities and inequalities that needs to be solved.
The code:
from sympy import S
from sympy import *
import sympy as smp

# Lagrange multipliers
def sympy_distfun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return sqrt((x - cx)**2 + (y - cy)**2) - radius

def sympy_circlefun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return (x - cx)**2 + (y - cy)**2 - radius**2

def sympy_linefun(slope, b):
    x, y = smp.symbols('x y', real=True)
    return slope*x + b - y

def lagrange_multiplier(objective, constraints):
    x, y = smp.symbols('x y', real=True)
    a = list(smp.symbols('a0:%d' % len(constraints), real=True))
    cons = [constraints[i]*a[i] for i in range(len(a))]
    L = objective + (-1)*sum(cons)
    gradL = [smp.diff(L, var) for var in [x, y] + a]
    constraints = [con >= 0 for con in constraints]
    eqs = gradL + constraints
    vars = a + [x, y]
    solution = smp.solve(eqs[0], vars)
    #solution = smp.solveset(eqs, vars)
    print(solution)

line = sympy_linefun(0.66666, -4.3333)
dist = sympy_distfun(11, 3, 4)
circlefunc1 = sympy_circlefun(11, 3, 4)
circlefunc2 = sympy_circlefun(0, 0, 3)
lagrange_multiplier(dist, [line, circlefunc1, circlefunc2])
But, when using smp.solveset(eqs,vars) I encounter the error message:
ValueError: [-0.66666*a0 - a1*(2*x - 22) - 2*a2*x + (x - 11)/sqrt((x - 11)**2 + (y - 3)**2), a0 - a1*(2*y - 6) - 2*a2*y + (y - 3)/sqrt((x - 11)**2 + (y - 3)**2), -0.66666*x + y + 4.3333, -(x - 11)**2 - (y - 3)**2 + 16, -x**2 - y**2 + 9, 0.66666*x - y - 4.3333 >= 0, (x - 11)**2 + (y - 3)**2 - 16 >= 0, x**2 + y**2 - 9 >= 0] is not a valid SymPy expression
When using solution=smp.solve(eqs[0], vars) to try to solve just the first equation, sympy goes into a CPU-crushing frenzy and obviously fails to complete the calculation. I made sure to declare all variables as real, so I fail to see why it takes so long to solve.
I would like to understand what I'm missing when it comes to handling multiple inequalities with sympy, and if there is a faster, more optimized way to solve Lagrange multiplier problems, I'd love to give it a try.
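For reference, solveset accepts only a single equation, which is why passing it the whole list eqs raises the ValueError above; systems of equations go to solve (or nonlinsolve), and the inequality constraints can be enforced by filtering the candidate solutions afterwards. A minimal sketch of that pattern on a toy problem (the objective x**2 + y**2 and the constraint x + y - 1 >= 0 are made-up stand-ins, not the circle problem):

import sympy as smp

x, y, a = smp.symbols('x y a', real=True)
f = x**2 + y**2                 # toy objective (assumption)
g = x + y - 1                   # toy constraint, to be kept >= 0
L = f - a*g                     # Lagrangian with one multiplier
gradL = [smp.diff(L, v) for v in (x, y, a)]
# solve, unlike solveset, accepts a list of equations
candidates = smp.solve(gradL, [x, y, a], dict=True)
# enforce the inequality constraint by filtering afterwards
feasible = [s for s in candidates if g.subs(s) >= 0]
print(feasible)  # -> x = y = 1/2, a = 1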
I have a program where I want to minimize an absolute difference of two variables (an absolute error function). Say:
e_abs(x, y) = |Ax - By|; where e_abs(x, y) is my objective function that I want to minimize.
The function is subject to the following constraints:
x and y are integers;
x >= 0; y >= 0
x + y = C, where C is an arbitrary constant (also C >= 0)
I am using the mip library (https://www.python-mip.com/), where I have defined both my objective function and my constraints.
The problem is that mip does not have an "abs" method, so I had to work around that by splitting the main problem into two optimization sub-problems:
e(x, y) = Ax - By
Problem 1: minimize e(x, y); subject to e(x, y) >= 0
Problem 2: maximize e(x, y); subject to e(x, y) <= 0
After solving the two separate problems, I compare the two results and take min(abs(e)).
That should have worked, but mip does not seem to understand that the error can be negative, as I show below:
constr(0): -1.0941176470588232 X(0, 0) +6.199999999999998 X(1, 0) - error = -0.0
constr(1): error <= -0.0
constr(2): X(0, 0) + X(1, 0) = 1.0
Note: consider X(0, 0) as x and X(1, 0) as y in our example.
Again, the program returns OptimizationStatus.INFEASIBLE, even though the combination X(0, 0) = 1 and X(1, 0) = 0 clearly solves the problem.
Is it a formulation issue of my model? Or is it a bad behavior of the mip library?
You can (and should) reformulate. Because you are minimizing the absolute value of a function, you can introduce a dummy variable and 2 constraints on that variable, then minimize the dummy variable to keep the model linear (abs is a non-linear function).
So, introduce z such that:
z >= Ax - By
and
z >= -(Ax - By)
then your objective is to minimize z
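A minimal sketch of this reformulation in python-mip (the coefficients A, B and the constant C below are placeholders, not your data):

from mip import Model, INTEGER, CONTINUOUS, minimize

A, B, C = 6.2, 1.1, 1  # placeholder values (assumptions)

m = Model()
x = m.add_var(var_type=INTEGER, lb=0)
y = m.add_var(var_type=INTEGER, lb=0)
z = m.add_var(var_type=CONTINUOUS)  # dummy variable standing in for |Ax - By|

m += x + y == C
m += z >= A*x - B*y      # z bounds the error from above ...
m += z >= -(A*x - B*y)   # ... and from below, so z >= |Ax - By|
m.objective = minimize(z)
m.optimize()
print(x.x, y.x, z.x)

Because z is minimized while being bounded below by both Ax - By and its negation, at the optimum z equals |Ax - By| exactly.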
How can I set an equation equal to zero and then solve it (the purpose is to eliminate the denominator)?
y=(x**2-2)/3*x
In Matlab this works:
solution = solve(y==0, x)
but it does not in Python.
from sympy import *
x, y = symbols('x y')
y=(x**2-2)/3*x
# set the expression, y, equal to 0 and solve
result = solve(Eq(y, 0))
print(result)
Another solution:
from sympy import *
x, y = symbols('x y')
equation = Eq(y, (x**2-2)/3*x)
# Use sympy.subs() method
result = solve(equation.subs(y, 0))
print(result)
Edit (even simpler):
from sympy import *
x, y = symbols('x y')
y=(x**2-2)/3*x
# solve the expression y (by default set equal to 0)
result = solve(y)
print(result)
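All three variants print the same roots: 0, -sqrt(2) and sqrt(2).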
If you only want to eliminate the denominator, you can split the expression into numerator and denominator. If the equation already appears as a fraction and you want the numerator, then
>>> y=(x**2-2)/(3*x); y # note parentheses around denom, is that what you meant?
(x**2 - 2)/(3*x)
>>> numer(_)
x**2 - 2
But if the equation appears as a sum then you can put it over a denominator and perhaps factor to identify numerator factors that must be zero in order to solve the equation:
>>> y + x/(x**2+2)
x/(x**2 + 2) + (x**2 - 2)/(3*x)
>>> n, d = _.as_numer_denom(); (n, d)
(3*x**2 + (x**2 - 2)*(x**2 + 2), 3*x*(x**2 + 2))
>>> factor(n)
(x - 1)*(x + 1)*(x**2 + 4)
>>> solve(_)
[-1, 1, -2*I, 2*I]
You don't have to factor the numerator before attempting to solve, but I sometimes find it useful when working with a specific equation.
If you have an example of an equation that is solved quickly elsewhere but not in SymPy, please post it.
I want to solve this equation: e**(-2*x) - 2*e**(-4*x) + e**(-10*x) = 0.
However, when I try to solve it with scipy fsolve, it converges towards infinity instead of giving me the solution.
The reason why it goes to infinity is that the function tends to 0 as x tends to infinity.
Here is the sample code:
import numpy as np
from numpy import e
from scipy import optimize

def f(x, r): return -e ** (-r * x)
def h(r): return 2 * f(4, r) - f(2, r) - f(10, r)

x0 = np.array([1])
print(optimize.fsolve(h, x0))
With some other parameters it finds the solution. However, I want the code to work with different parameters, not just the ones in the example. I also want to avoid the zero solution.
Many thanks
If you let t = exp(-2x), the equation becomes a polynomial in t, so you can solve it with numpy.roots:
import numpy as np

# coefficients of -t**5 + 2*t**2 - t, from highest degree to lowest
roots = np.roots([-1, 0, 0, 2, -1, 0])
# map each root t back to x via t = exp(-2*x); t = 0 gives no finite x
solutions = [-np.log(r) / 2 for r in roots if r != 0]
This gives you 3 real and 2 complex roots for t.
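Note that the real root t = 1 maps back to x = -log(1)/2 = 0, the zero solution you want to avoid, so discard it along with t = 0.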
Let's suppose that I have two transcendental functions f(x, y) = 0 and g(a, b) = 0.
a and b depend on y, so if I could solve the first equation analytically for y, i.e. y = y(x), I could make the second function depend only on x and then solve it numerically.
I prefer to use Python, but Matlab is also fine if it can handle this.
Is there a way to solve transcendental equations for a variable analytically with Python/Matlab? A Taylor approximation is fine too, as long as I can choose the order of the approximation.
I tried running this through Sympy like so:
import sympy
j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)
eq.simplify()
which turned your equation into
Eq(m, j*sin(y)/(x*sqrt(1 - sin(y)**2/x**2)) + k*tan(y))
which, after a bit more poking, gives us
k * tan(y) + j * sin(y) / sqrt(x**2 - sin(y)**2) == m
We can find an expression for x(y) like
sympy.solve(eq, x)
which returns
[-sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2),
sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2)]
but an analytic solution for y(x) fails.
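When the analytic inversion fails like this, one fallback is to solve for y numerically at each x of interest with sympy.nsolve; a minimal sketch (the parameter values and the starting guess are arbitrary assumptions):

import sympy

j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)
# hypothetical parameter values, only to illustrate the numeric route
vals = {j: 1.0, k: 2.0, m: 1.5, x: 3.0}
y_at_x = sympy.nsolve(eq.subs(vals), y, 0.5)  # 0.5 is an arbitrary initial guess
print(y_at_x)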