I need to solve a system of inequalities in n variables, where one of the constraints is an orthogonality condition. The system is exactly the one encoded in the z3 snippet below.
In my particular problem, it has been proven that the system always has exactly one solution. Because of the orthogonality constraint, I don't think I can use linear programming. Does anyone have recommendations for efficient ways to solve this in Python? Wolfram Alpha is able to do it! (You can see an example here and here.) I know SymPy can't solve it, because its inequality solver only handles the univariate case. Please include the runtime with whatever answer you give!
Edit: Just after posting this, I found z3, and it can more or less do what I want. But I think it might be overkill and might not be as efficient as possible (I'm hoping for something linear in n := the number of variables to solve for).
from z3 import Solver, Real

def solve_inequalities(z):
    n = len(z)
    s = Solver()
    ys = [Real(f'y{i}') for i in range(n)]
    # The y_i must sum to zero, with all partial sums non-negative.
    s.add(sum(ys) == 0)
    for i in range(1, n):
        s.add(sum(ys[:i]) >= 0)
    # Orthogonality: (z - y) . y == 0.
    s.add(sum((z[i] - ys[i]) * ys[i] for i in range(n)) == 0)
    # The residuals z_i - y_i must be non-decreasing.
    for i in range(n - 1):
        s.add(z[i] - ys[i] <= z[i + 1] - ys[i + 1])
    s.check()
    return s.model()
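For reference, a sample call (the input here is made up; any already-sorted z is satisfiable, since y = 0 then meets every constraint):

z = [1, 2, 3, 4]
print(solve_inequalities(z))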
When I solve an equation using SymPy, it would be nice to learn what it actually did to find the solutions.
The following is a simple test case for answers. We know SymPy should be using the quadratic formula, or some equivalent approach.
>>> from sympy import *
>>> x = Symbol('x', real=True)
>>> solve(2*x**2 + 12*x + 12)
[-3 - sqrt(3), -3 + sqrt(3)]
Is there some way of filtering through a record of the execution stack, or something else, to extract which steps were useful to SymPy in solving the problem? I imagine SymPy performs some sort of tree-like search over steps, and a record of the stack calls would have to be pruned/simplified somehow.
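Edit: one crude, unofficial thing that occurred to me (this is plain Python tracing, not a SymPy feature) is to record which sympy functions get entered while solve() runs, via sys.settrace, and then look at the most frequent ones. It yields function names, not a readable derivation:

import sys
from collections import Counter
from sympy import Symbol, solve

calls = []

def tracer(frame, event, arg):
    # Record the name of every function entered from a sympy source file.
    if event == 'call' and 'sympy' in frame.f_code.co_filename:
        calls.append(frame.f_code.co_name)
    return tracer

x = Symbol('x', real=True)
sys.settrace(tracer)
solve(2*x**2 + 12*x + 12)
sys.settrace(None)

print(Counter(calls).most_common(10))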
I have a system of non-linear integer inequalities that I want to solve. In it I need to compute the absolute value of integers and also the maximum/minimum of two integers.
Here is a toy example:
from z3 import *

set_option(verbose=10)
x, y, z, z1 = Ints('x y z z1')

def abs(x):
    return If(x >= 0, x, -x)

def max(x, y):
    return If(x >= y, x, y)

def min(x, y):
    return If(x <= y, x, y)

s = Solver()
s.add(x**2 + y**2 >= 26)
s.add(min(abs(y), abs(x)) > 5)
s.add(3*x**2 + 25*y**2 >= 100)
s.add(x*y - z*z1 < 10)
s.add(max(abs(z), abs(z1)) <= 10)
s.add(min(abs(z), abs(z1)) > 1)
s.check()
print(s.model())
My real system is more complicated and takes much longer to run.
I don't really understand how Z3 works under the hood, but I am worried that the way I have defined abs, max, and min using Python functions may make it hard for Z3 to solve the system of inequalities. Is there a better way to write them that would let Z3 be more efficient?
The way you coded them is just fine. There's really no "better" way to code these operations.
Nonlinear problems are really difficult for SMT solvers. In fact, one way they solve these is to assume the values are "real" numbers, solve it, and then check to see if the model actually only consists of integers. Another trick is to reduce to bit-vectors: Assign larger and larger bit-sized vectors to variables and see if one can find a model. You can imagine that both of these techniques are good for "model finding" but are terrible at proving unsat. (For details see: How does Z3 handle non-linear integer arithmetic?)
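To give a feel for the bit-vector trick (done by hand here purely for intuition; Z3 applies it internally and this is not how you would normally use the solver), you can widen the bit-width until a model appears, bearing in mind that wraparound arithmetic can admit spurious models:

from z3 import BitVec, Solver, sat

for bits in (8, 16, 32):
    s = Solver()
    x = BitVec('x', bits)
    y = BitVec('y', bits)
    # Same flavor of nonlinear constraint; comparisons are signed by default.
    s.add(x * x + y * y == 25, x > 0, y > 0)
    if s.check() == sat:
        print(bits, s.model())
        break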
If your problem is truly non-linear, perhaps an SMT solver just isn't the best tool for you. An actual theorem prover that has support for arithmetic theories might be a better choice, though of course that's an entirely different discussion.
One thing you can try is to "simplify" the problem. For instance, you seem to always be using abs(y) and abs(x); perhaps you can drop the abs terms and simply assert x > 0 and y > 0. Note that this is not a sound reduction: you are explicitly telling the solver to ignore all negative x and y values, but it might be "good enough" for your problem if you only care about positive x and y anyhow. This would help the solver, as it reduces the search space and gets rid of the conditional expressions; keep in mind, though, that you're asking a different question, and hence your solution space is now different. (It might even become unsat with the new constraint.)
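A minimal sketch of that simplification on the x/y part of the toy example (constraint values copied from the question; the z/z1 constraints would stay as they are):

from z3 import Ints, Solver, If, sat

x, y = Ints('x y')

s = Solver()
s.add(x > 0, y > 0)          # replaces the abs() terms outright
s.add(x**2 + y**2 >= 26)
s.add(If(y <= x, y, x) > 5)  # min(x, y); abs is no longer needed
s.add(3*x**2 + 25*y**2 >= 100)
if s.check() == sat:
    print(s.model())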
Long story short: nonlinear arithmetic is difficult, and the way you're coding min/max/abs is just fine. See if you can "simplify" the problem by not using them, perhaps by posing a related, slightly easier problem to the solver. If that's not possible, I'm afraid you'll have to look beyond SMT solvers to handle your nonlinear system. (And none of that will be easy, of course, as the problem is inherently difficult. Again, read through How does Z3 handle non-linear integer arithmetic? for extra details.)
Currently I'm using PuLP to solve a maximization problem. It works fine, but I'd like to be able to get the N-best solutions instead of just one. Is there a way to do this in PuLP or any other free/Python solution? I toyed with the idea of just randomly picking some of the variables from the optimal solution and throwing them out and re-running, but this seems like a total hack.
If your problem is fast to solve, you can try to limit the objective from above, step by step. For example, if the objective value of the optimal solution is X, re-run the problem with an additional constraint:
problem += objective <= X - eps, ""
where the reduction step eps depends on your knowledge of the problem.
Of course, if you just pick some eps blindly and get a solution, you don't know whether that solution is the 2nd best, 10th best, or 1000th best... But you can do a systematic search (binary, grid) on the eps parameter (if the problem really is fast to solve). A sketch of the full loop follows.
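Here is a self-contained sketch of that loop on a toy PuLP model (the model and eps are my own; substitute your real problem):

import pulp

eps = 1
prob = pulp.LpProblem("nbest", pulp.LpMaximize)
x = pulp.LpVariable("x", 0, 5, cat="Integer")
y = pulp.LpVariable("y", 0, 5, cat="Integer")
objective = x + 2 * y
prob += objective   # set the objective
prob += x + y <= 7  # a constraint

for k in range(3):
    prob.solve()
    if pulp.LpStatus[prob.status] != "Optimal":
        break
    X = pulp.value(objective)
    print(k, X, pulp.value(x), pulp.value(y))
    # Require the next solve to find an objective at least eps worse.
    prob += objective <= X - eps, f"cutoff_{k}"

This prints the best, second-best, and third-best objective values (12, 11, 10 here), at the cost of one solve per solution.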
So I figured out how to get multiple solutions (by RTFM). In my code I essentially have:
number_unique = 1  # the number of variables that should be unique between runs

model += objective
model += constraint1
model += constraint2
model += constraint3

for i in range(1, 5):
    model.solve()
    selected_vars = []
    for p in vars:
        if p_vars[p].value() != 0:
            selected_vars.append(p)
    print_results()
    # Add a new constraint that the sum of all of the variables should
    # not total up to what I'm looking for (effectively forcing unique solutions)
    model += sum([p_vars[p] for p in selected_vars]) <= 10 - number_unique
This works great, but I've realized that I really do need to go the random route: I've got 10 different variables, and by only throwing out a couple of them, my solutions tend to contain the same heavily weighted vars in all the permutations (which is to be expected). A sketch of that random variant follows.
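This reuses the names from the snippet above (model, p_vars, selected_vars), so it is a fragment rather than a runnable script:

import random

# Randomly forbid a couple of the previously selected variables before
# re-solving, so the heavily weighted ones can't dominate every solution.
for p in random.sample(selected_vars, 2):
    model += p_vars[p] == 0, f"exclude_{p}"
model.solve()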
I'm using fsolve to solve a nonlinear equation. My problem is that, depending on the starting point, the solutions change, and I am not sure that the ones I found are the most reasonable.
This is the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq, newton

A = np.arange(0.05, 0.95, 0.01)
PHI = np.deg2rad(np.arange(0, 90, 1))

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6) * (np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6) * (np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

B = np.zeros((len(A), len(PHI)))
for i in range(len(A)):
    for j in range(len(PHI)):
        a = A[i]
        Phi = PHI[j]
        b = fsolve(f, 1)
        B[i, j] = b
I fixed x0 = 1 because it seems to give the most reasonable values. But sometimes the method doesn't seem to converge, and the resulting values are far too big.
What can I do to find the best solution?
Many thanks!
The eternal issue with turning non-linear solvers loose is having a really good understanding of your function, your initial guess, the solver itself, and the problem you are trying to address.
I note that there are many (a,Phi) combinations where your function does not have real roots. You should do some math, directed by the actual problem you are trying to solve, and determine where the function should have roots. Not knowing the actual problem, I can't do that for you.
Also, as noted in a (since deleted) answer, this is cyclical in b, so using a bounded solver (such as scipy.optimize.minimize with method='L-BFGS-B') might help to keep things under control. Note that to find roots with a minimizer, you minimize the square of your function. If the minimum found is not close to zero (how close is for you to define, based on the problem), the real roots might be a complex conjugate pair.
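A minimal sketch of that idea for one (a, Phi) pair (the values and the bound on b are my own; f is the function from the question, which has period 2*pi in b):

import numpy as np
from scipy.optimize import minimize

a, Phi = 0.5, np.deg2rad(30.0)

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6) * (np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6) * (np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

# Minimize f(b)**2 with b bounded to one period.
res = minimize(lambda b: f(b[0])**2, x0=[1.0],
               method='L-BFGS-B', bounds=[(0.0, 2*np.pi)])
print(res.x, res.fun)  # if res.fun is not near zero, there may be no real root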
Good luck.
I am having trouble solving the optical Bloch equation, which is a first-order ODE system with complex values. I have found that SciPy may solve such a system, but its webpage offers too little information and I can hardly understand it.
I have 8 coupled first order ODEs, and I should generate a function like:
def derv(y):
    # compute the time derivative of the elements in y
    # and return the answers as an array
    ...

and then call complex_ode(derv).
My questions are:

1. My y is not a list but a matrix; how can I give output in the correct form to fit into complex_ode()?
2. complex_ode() accepts a Jacobian; I have no idea how to start constructing one, or what type it should be.
3. Where should I put the initial conditions (as in the normal ode) and the time linspace?
This is SciPy's complex_ode documentation:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.complex_ode.html
Could anyone provide me with more information so that I can learn a bit more?
I think we can at least point you in the right direction. The optical Bloch equation is a problem that is well understood in the scientific community (although not by me :-)), so there are already solutions on the internet to this particular problem:
http://massey.dur.ac.uk/jdp/code.html
However, to address your needs: you spoke of using complex_ode, which I suppose is fine, but I think plain scipy.integrate.ode will work just as well, according to the documentation:
from scipy.integrate import ode

y0, t0 = [1.0j, 2.0], 0

def f(t, y, arg1):
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

def jac(t, y, arg1):
    return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

# 'zvode' is the complex-valued VODE integrator.
r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True)
r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0)
t1 = 10
dt = 1
while r.successful() and r.t < t1:
    r.integrate(r.t + dt)
    print(r.t, r.y)
You also have the added benefit of an older, more established, and better documented function. I am surprised you have 8 and not 9 coupled ODEs, but I'm sure you understand this better than I do. Yes, you are correct: your function should be of the form ydot = f(t, y), which you call derv, but you're going to need to make sure it takes at least two parameters, like derv(t, y). If your y is a matrix, no problem! Just "reshape" it in the derv(t, y) function like so:
Y = numpy.reshape(y, (num_rows, num_cols))
As long as num_rows * num_cols = 8 (your number of ODEs), you should be fine. Then use the matrix in your computations. When you're all done, just be sure to return a flat vector and not a matrix, like:
out = numpy.reshape(Y, (8,))
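Putting those pieces together, here is a runnable sketch with placeholder dynamics (the actual Bloch-equation right-hand side is yours to fill in; the 2x4 shape and the -1j*Y dynamics are made up):

import numpy as np
from scipy.integrate import ode

num_rows, num_cols = 2, 4  # num_rows * num_cols = 8 state variables

def derv(t, y):
    Y = np.reshape(y, (num_rows, num_cols))  # view the flat state as a matrix
    dY = -1j * Y                             # placeholder dynamics
    return np.reshape(dY, (num_rows * num_cols,))  # return a flat vector

r = ode(derv).set_integrator('zvode', method='bdf')
r.set_initial_value(np.ones(8, dtype=complex), 0.0)
r.integrate(0.1)
print(r.y)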
The Jacobian is not required, but it will likely allow the computation to proceed much more quickly. If you do not know how to compute it, you may want to consult Wikipedia or a calculus textbook; it's pretty simple, but can be time-consuming. As far as initial conditions go, you should probably already know what those should be, whether they're complex or real valued. As long as you select values that are within reason, it shouldn't matter much.