I have to find solutions to an integer programming problem:
I am using Mosek's Fusion API (Python). The constraints are easy to put in; I am more worried about the actual objective. The problem for me is: how can I tell Mosek that I want to sum over all i's, j's, or k's, and define what they are, what their bounds are, and so on?
This is a simplified version of a self-caching problem in the context of servers. Here i means a server and j means an object to cache, but in this version there is only one object, so j is not really important. k also means a server, so e.g. d(ik) means the distance from server i to server k.
But whatever I want to achieve, I don't know how to write this objective. For now I have something like this:
from mosek.fusion import Domain, Model, Expr, ObjectiveSense

alpha = 4   # alpha is the same for all i and j
demand = 1  # w is the same for all i and k
n = 6       # number of servers
distances_matrix = [[...], [...], ...]

with Model("lo1") as M:
    x = M.variable("x", n, Domain.integral(Domain.inRange(0, 1)))
    y = M.variable("y", n, Domain.integral(Domain.inRange(0, 1)))
    alpha_times_x = Expr.mul(alpha, x)
    demand_times_dist_times_y = Expr.mul(demand, distances_matrix, y)
    M.objective("obj", ObjectiveSense.Minimize, )
    M.solve()
    print(x.level())
    print(y.level())
Now of course demand_times_dist_times_y is wrong, because I want to take the distance from i to k from the matrix. And the x above is fine, since the xs are {x1, x2, x3, x4, x5, x6}, but the ys would have to be {y11, y12, y13, y14, y15, y16, y21, y22, ..., y66}, so I guess I defined them wrong.
So, for example, how can I define that i and k are in {1, 2, 3, 4, 5, 6} and create an Expr.sum over, say, k? And how would I define the two sums at the beginning of the objective?
I don't know if that answers the question, but if you have, say
x = M.variable("x", n, Domain.integral(Domain.inRange(0, 1)))
then sum_i x_i is obtained with
Expr.sum(x)
Similarly, if now alpha is a numerical array of length n then sum_i (alpha_i*x_i) is obtained with
Expr.sum( Expr.mulElm(alpha,x) )
or even
Expr.dot( alpha, x )
and so on. You never explicitly specify the summation index, you are summing all entries of whatever appears inside the Expr.sum and similar methods.
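To make this concrete: suppose the objective is sum_i alpha*x_i + sum_{i,k} w*d_ik*y_ik (the question does not spell it out, so take this form and the names below as illustrative). A minimal sketch, declaring y as an n-by-n variable so the distance matrix can be paired with it elementwise:

from mosek.fusion import Domain, Expr, Matrix, Model, ObjectiveSense

n = 6
alpha = 4.0
demand = 1.0
distances = [[0.0] * n for _ in range(n)]  # placeholder for the real d_ik values

with Model("caching") as M:
    x = M.variable("x", n, Domain.binary())        # x_i
    y = M.variable("y", [n, n], Domain.binary())   # y_ik as one n-by-n variable

    D = Matrix.dense(distances)
    placement = Expr.sum(Expr.mul(alpha, x))                  # sum_i alpha * x_i
    transfer = Expr.mul(demand, Expr.sum(Expr.mulElm(D, y)))  # w * sum_{i,k} d_ik * y_ik

    M.objective("obj", ObjectiveSense.Minimize, Expr.add(placement, transfer))
    M.solve()

Summing an n-by-n expression with Expr.sum collapses both indices at once, which is exactly the "sum over i and k" from the question.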
I'm working on a program to solve the Bloch (or, more precisely, the Bloch-McConnell) equations in Python. The equation to solve is

M(t) = exp(A*t) * (M0 + A+ * b) - A+ * b

where A is an NxN matrix, A+ its pseudoinverse, and M0 and b are vectors of size N.
The special thing is that I want to solve the equation for several offsets (and thus several matrices A) at the same time. The new dimensions are:
A: MxNxN
b: Nx1
M0: MxNx1
The conventional version of the program (using a loop over the 1st dimension of size M) works fine, but I'm stuck at one point in the 'parallel version'.
At the moment my code looks like this:
def bmcsim(A, b, M0, timestep):
    ex = myexpm(A * timestep)  # returns stacked array of size MxNxN
    M = np.zeros_like(M0)
    for n in range(ex.shape[0]):
        A_tmp = A[n, :, :]
        A_b = np.linalg.lstsq(A_tmp, b, rcond=None)[0]
        M[n, :, :] = np.abs(np.real(np.dot(ex[n, :, :], M0[n, :, :] + A_b) - A_b))
    return M
and I would like to get rid of that for n in range(ex.shape[0]) loop. Unfortunately, np.linalg.lstsq doesn't work for stacked arrays, does it? In myexpm I used np.apply_along_axis for another problem:
def myexpm(A):
    vals, vects = np.linalg.eig(A)
    tmp = np.einsum('ijk,ikl->ijl', vects, np.apply_along_axis(np.diag, -1, np.exp(vals)))
    return np.einsum('ijk,ikl->ijl', tmp, np.linalg.inv(vects))
However, that only works for 1D input data. Is there something similar that I can use with np.linalg.lstsq? I guess the np.dot in bmcsim will be replaced with np.einsum as in myexpm, or are there better ways?
Thanks for your help!
Update:
I just realized that I can replace np.linalg.lstsq(A, b) with the normal equations np.linalg.solve(A.T.dot(A), A.T.dot(b)), and managed to get rid of the loop this way:
def bmcsim2(A, b, M0, timestep):
    ex = myexpm(A * timestep)
    b_stack = np.repeat(b[np.newaxis, :, :], A.shape[0], axis=0)  # one copy of b per offset
    # Batched normal equations: tmp_left[i] = A[i].T @ A[i], tmp_right[i] = A[i].T @ b
    tmp_left = np.einsum('ikj,ikl->ijl', A, A)
    tmp_right = np.einsum('ikj,ikl->ijl', A, b_stack)
    A_b_stack = np.linalg.solve(tmp_left, tmp_right)
    return np.abs(np.real(np.einsum('ijk,ikl->ijl', ex, M0 + A_b_stack) - A_b_stack))
This is about 3 times faster, but still a bit complicated. I hope there is a better (shorter/easier) way, that's maybe even faster?!
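A further simplification, sketched under the assumption that every A in the stack is square and invertible (which the normal-equations trick already requires): np.linalg.solve broadcasts over leading stack dimensions on its own, so the A.T products can be dropped, and @ does batched matrix multiplication in place of the einsum calls.

import numpy as np

def bmcsim3(A, b, M0, timestep):
    ex = myexpm(A * timestep)                      # MxNxN stack
    # solve() broadcasts over the stack dimension when b is given as MxNx1
    b_stack = np.broadcast_to(b, (A.shape[0],) + b.shape)
    A_b = np.linalg.solve(A, b_stack)              # A_b[i] = inv(A[i]) @ b
    return np.abs(np.real(ex @ (M0 + A_b) - A_b))  # @ is batched matmul here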
I have two groups of lists, A and O. Both of them contain points with x, y, z coordinates. I want to calculate the distance between corresponding points from A and O. I used a for loop, but it only gives me one result, when it should give me one number per point pair. I'd really appreciate it if someone could have a look. It's the last step in my project.
Ax = [-232.34, -233.1, -232.44, -233.02, -232.47, -232.17, -232.6, -232.29, -231.65]
Ay = [-48.48, -49.48, -50.81, -51.42, -51.95, -52.25, -52.83, -53.63, -53.24]
Az = [-260.77, -253.6, -250.25, -248.88, -248.06, -247.59, -245.82, -243.98, -243.76]
Ox = [-302.07, -302.13, -303.13, -302.69, -303.03, -302.55, -302.6, -302.46, -302.59]
Oy = [-1.73, -3.37, -4.92, -4.85, -5.61, -5.2, -5.91, -6.41, -7.4]
Oz = [-280.1, -273.02, -269.74, -268.32, -267.45, -267.22, -266.01, -264.79, -264.96]
distance = []
for xa in A1:
    for ya in A2:
        for za in A3:
            for x1 in o1:
                for y1 in o2:
                    for z1 in o3:
                        distance += distance
                        distance = (((xa-x1)**2)+((ya-y1)**2)+((za-z1)**2))**(1/2)
print(distance)
Other people have provided fixes to your immediate problem. I would also recommend that you start using numpy and avoid all of those for loops. Numpy provides ways to vectorize your code, basically offload all of the looping that needs to be done to very efficient C++ implementations. For instance, you can replace your whole nested for-loop thing with the following vectorized implementation:
import numpy as np
# Convert your arrays to numpy arrays
Ax = np.asarray(Ax)
Ay = np.asarray(Ay)
Az = np.asarray(Az)
Ox = np.asarray(Ox)
Oy = np.asarray(Oy)
Oz = np.asarray(Oz)
# Find the distance in a single, vectorized operation
distance = np.sqrt(np.sum(((Ax-Ox)**2, (Ay-Oy)**2, (Az-Oz)**2), axis=0))
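As a side note, the same result can also be had by stacking the coordinates into point arrays and using np.linalg.norm (a variant, not something the solution above requires):

# Stack the coordinates into (n_points, 3) arrays, then take row-wise norms
A = np.stack((Ax, Ay, Az), axis=1)
O = np.stack((Ox, Oy, Oz), axis=1)
distance = np.linalg.norm(A - O, axis=1)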
Your first issue is this:
distance = (((xa-x1)**2)+((ya-y1)**2)+((za-z1)**2))**(1/2)
This replaces the list you defined with a single value, overwriting it on every pass through the loop. What you want is
distance.append((((xa-x1)**2)+((ya-y1)**2)+((za-z1)**2))**(1/2))
which will add this value to the end of the list.
Second thing: your workflow could be improved. Instead of using that many for loops, try this: you know that A1, A2, A3, o1, o2, and o3 all have the same length, so:

distance = []
for i in range(len(A1)):  # runs once per point pair
    xa, ya, za = A1[i], A2[i], A3[i]  # these values correspond to each other
    xb, yb, zb = o1[i], o2[i], o3[i]  # all are in the same position in their respective list
    distance.append((((xa-xb)**2)+((ya-yb)**2)+((za-zb)**2))**(1/2))
print(distance)
You need to be appending to distance not assigning it. You should be doing something like this inside of your for loops:
distance.append((((xa-x1)**2)+((ya-y1)**2)+((za-z1)**2))**(1/2))
By nesting all those loops, you're going to be executing each "subloop" on every iteration of the "parent loop", and so on, resulting in far more loops than necessary and some mixed-up data. As other answers have mentioned, you're also reassigning distance to the value of the last calculation of the innermost loop on every pass.
You can do all of this a lot more efficiently by zipping the data.
distance = []
for ptA, ptB in zip(zip(Ax, Ay, Az), zip(Ox, Oy, Oz)):
    distance.append(pow(sum(pow(a - b, 2) for a, b in zip(ptA, ptB)), 0.5))
Your nested loops are not merely inefficient, but incorrect. You are going through every combination of x, y, and z values for both sets of points.
Here's a list comprehension to accomplish the task:
distance = [((xa-x1)**2 + (ya-y1)**2 + (za-z1)**2)**(0.5)
for (xa, ya, za, x1, y1, z1) in zip(Ax, Ay, Az, Ox, Oy, Oz)]
The zip call produces groups of the corresponding coordinate values. These are then unpacked into individual values for a given pair of points. Then the distance is calculated and added to the resulting list. Here is the result:
[86.14803712215387, 85.25496701072612, 86.50334270997855, 86.02666679582558, 86.61455593605497, 86.90445212991106, 86.65519315078585, 87.10116761559514, 87.08173861378742]
Note that the (1/2) in your formula works in Python 3 but not in Python 2, where integer division turns it into 0. I've used 0.5, which works in both. Using math.sqrt() might be an even better idea.
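For completeness, the math.sqrt variant would look like this:

import math

distance = [math.sqrt((xa-x1)**2 + (ya-y1)**2 + (za-z1)**2)
            for (xa, ya, za, x1, y1, z1) in zip(Ax, Ay, Az, Ox, Oy, Oz)]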
I have an assignment for my Applied Computer Science class in which we are to test the strength and qualities of an algorithm that tries to find the k-th highest value in a list. This is to be tested both for varying k (0, 1, 2, ..., N) with fixed N, and for varying N (1, 2, ..., N) with k = 0. We're supposed to determine whether the algorithm ever makes more than pi*N comparisons, as well as log2(N/2 - k)*N comparisons.
To get some clarity on this I made two plots, both having the number of comparisons on the y-axis, and k and N respectively on the x-axis. Alongside the plots I want the functions y = pi*N and y = log2(N/2 - k)*N. The problem is that I get a ValueError for the second function, almost certainly because log2(N/2 - k) is undefined whenever k >= N/2. I would still like to plot it in Python, and my question is how to get around this.
The rest of the code isn't really relevant.
My question is: how can I plot this function while circumventing its undefined parts? I still want to illustrate the undefined part, so I don't want to make exceptions in which a simplification is made.
import math
import matplotlib.pyplot as plt

def plotHelper(x, yA, yS, title, trials):
    yPi = list()
    yLogN = list()
    for point in x:
        yPi.append(point * math.pi)
        yLogN.append(trials * math.log(trials/2 - point, 2))  # ValueError when trials/2 - point <= 0
    plt.figure()
    plt.title(title)
    plt.plot(x, yA)
    plt.plot(x, yS)
    plt.plot(x, yPi)
    plt.plot(x, yLogN)
    plt.grid(True)
    plt.legend(["Mean", "Standard Deviation", "y = pi * N", "y = log2(N/2 - k) * N"])
    plt.ylabel("Comparisons")
    plt.xlabel("k")
    plt.show()
def plot():
    choice = initiate()
    yA, yS, trials = trialFunc(choice)
    x = range(0, trials)
    f_of_k = "Comparisons as a function of desired k (N elements)"
    f_of_n = "Comparisons as a function of elements (k = 0)"
    if choice:
        title = f_of_k
    else:
        title = f_of_n
    plotHelper(x, yA, yS, title, trials)
If you cannot represent a value of a function as a number, you can represent it as a Not-a-Number, aka NaN. There's no ready-made literal for it, but you can easily produce the value (Python 3.5+ also provides math.nan):
NaN = float('NaN')
Now your function can go like this:
from math import sqrt, sin

def function_to_plot(n):  # A contrived example.
    if abs(n) <= 1:
        return NaN
    return sqrt(n * n - sin(n) ** 2)
After that, matplotlib just works, it knows how to skip points that are NaNs.
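Applied to the curve from the question, a small self-contained sketch (the value of N and the range of k are made up for illustration):

import math
import matplotlib.pyplot as plt

N = 100
ks = range(N)
# NaN marks the region k >= N/2 where log2(N/2 - k) is undefined;
# matplotlib leaves a visible gap there instead of raising an error
ys = [N * math.log2(N/2 - k) if N/2 - k > 0 else float('NaN') for k in ks]
plt.plot(list(ks), ys)
plt.show()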
If you just want to tabulate your function by hand, you can safely print NaNs using format for floats.
For a bit more detail, you can use float('+inf') and float('-inf') to represent infinities.
Also, just in case, Python works fine with complex numbers; import cmath and do things like assert cmath.sqrt(-2j) == (1-1j).
I'm relatively new to Python, and am encountering some issues in writing a piece of code that generates and then solves a system of differential equations.
My approach was to create a set of variables and coefficients, (x0, x1, ..., xn) and (c0, c1, ..., cn) respectively, in a list with the function var(). The equations are then constructed in EOM1(). The initial conditions, along with the set of equations, are all put together in EOM2() and solved using odeint.
Currently the code below runs, albeit not efficiently; the reason, I believe, is that odeint re-runs all the symbolic code on each iteration (that's something else I need to fix, but it isn't the main problem!).
import sympy as sy
from scipy.integrate import odeint

n = 2
cn0list = [0.01, 0.05]
xn0list = [0.01, 0.01]

def var():
    xnlist = []
    cnlist = []
    for i in range(n + 1):
        xnlist.append('x{0}'.format(i))
        cnlist.append('c{0}'.format(i))
    return xnlist, cnlist

def EOM1():
    drdtlist = []
    for i in range(n):
        cn1 = sy.Symbol(var()[1][i])
        xn0 = sy.Symbol(var()[0][i])
        xn1 = sy.Symbol(var()[0][i + 1])
        eom = cn1*xn0*(1.0 - xn1) - cn1*xn1 - xn1
        drdtlist.append(eom)
    xi = sy.Symbol(var()[0][0])
    xf = sy.Symbol(var()[0][n])
    drdtlist[n-1] = drdtlist[n-1].subs(xf, xi)
    return drdtlist

def EOM2(xn, t, cn):
    x0, x1 = xn
    c0, c1 = cn
    f = EOM1()
    output = []
    for part in f:
        output.append(part.evalf(subs={'x0': x0, 'x1': x1, 'c0': c0, 'c1': c1}))
    return output

abserr = 1.0e-6
relerr = 1.0e-4
stoptime = 10.0
numpoints = 20

t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)]
wsol = odeint(EOM2, xn0list, t, args=(cn0list,), atol=abserr, rtol=relerr)
My problem is that I had difficulty getting Python to treat the variables generated by Sympy appropriately. I got around this with the line
output.append(part.evalf(subs={'x0':x0, 'x1':x1, 'c0':c0, 'c1':c1}))
in EOM2(). Unfortunately, I do not know how to generalize this line to a list of variables from x0 to xn, and from c0 to cn. The same applies to the earlier lines in EOM2():
x0, x1 = xn
c0, c1 = cn
In other words, if I set n to an arbitrary number, is there a way for Python to interpret each element as it does the ones I manually entered above? I have tried the following,
output.append(part.evalf(subs={'x{0}'.format(j):var(n)[0][j], 'c{0}'.format(j):var(n)[1][j]}))
yet this yields the error that led me to use evalf in the first place,
TypeError: can't convert expression to float
Is there any way to do what I want: generate a set of n equations which are then solved with odeint?
Instead of using evalf you want to look into using sympy.lambdify to generate a callback for use with SciPy. You will need to create a function with the expected signature of odeint, e.g.:
y, params = sym.symbols('y:3'), sym.symbols('kf kb')
ydot = rhs(y, p=params)                   # rhs builds the symbolic right-hand side
f = sym.lambdify((y, t) + params, ydot)
yout = odeint(f, y0, tout, param_values)  # t, y0, tout, param_values defined elsewhere
We gave a tutorial on (among other things) how to use lambdify with odeint at the SciPy 2017 conference, the material is available here: http://www.sympy.org/scipy-2017-codegen-tutorial/
If you are open to use an external library to handle the function signatures of external solvers you may be interested in a library I've authored: pyodesys
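Applied to the system in the question, a minimal sketch could look like this (n = 2, the equations, and the wrap-around substitution x_n -> x_0 are taken from the question; the rest is illustrative):

import numpy as np
import sympy as sy
from scipy.integrate import odeint

n = 2
xs = sy.symbols('x0:{}'.format(n + 1))  # (x0, x1, x2)
cs = sy.symbols('c0:{}'.format(n))      # (c0, c1)

# Equations of motion from the question, with x_n wrapped around to x_0
eqs = [cs[i]*xs[i]*(1 - xs[i+1]) - cs[i]*xs[i+1] - xs[i+1] for i in range(n)]
eqs[-1] = eqs[-1].subs(xs[n], xs[0])

f = sy.lambdify(xs[:n] + cs, eqs)       # fast numeric callback f(x0, x1, c0, c1)

def rhs(x, t, c):
    return f(*x, *c)

wsol = odeint(rhs, [0.01, 0.01], np.linspace(0.0, 10.0, 20), args=([0.01, 0.05],))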
If I understand correctly, you want to make an arbitrary number of substitutions in a SymPy expression. This is how it can be done:
n = 10
syms = sy.symbols('x0:{}'.format(n)) # an array of n symbols
expr = sum(syms) # some expression with those symbols
floats = [1/(j+1) for j in range(n)] # numbers to put in
expr.subs({symbol: value for symbol, value in zip(syms, floats)})
The result of subs is a float in this case (no evalf needed).
Note that the function symbols can directly create any number of symbols for you, via the colon notation. No need for a loop.
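Applied to the question's EOM2, that looks roughly like the following sketch (reusing the question's n and EOM1; float() works here because all symbols get substituted away):

def EOM2(xn, t, cn):
    xsyms = sy.symbols('x0:{}'.format(n))
    csyms = sy.symbols('c0:{}'.format(n))
    values = dict(zip(xsyms, xn))
    values.update(zip(csyms, cn))
    return [float(part.subs(values)) for part in EOM1()]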
I'm studying dynamical systems, particularly the logistic family g(x) = cx(1-x), and I need to iterate this function an arbitrary number of times to understand its behavior. I have no problem iterating the function given a specific point x_0, but, again, I'd like to graph the entire function and its iterations, not just a single point. For plotting a single function, I have this code:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt

def logplot(c, n=10):
    dt = .001
    x = np.arange(0, 1.001, dt)
    y = c*x*(1-x)
    plt.plot(x, y)
    plt.axis([0, 1, 0, c*.25 + (1/10)*c*.25])
    plt.show()
I suppose I could tackle this by the lengthy/daunting method of explicitly creating a list of the range of each iteration using something like the following:
def log(c, x0):
    return c*x0*(1-x0)

def logiter(c, x0, n):
    i = 0
    y = []
    while i <= n:
        val = log(c, x0)
        y.append(val)
        x0 = val
        i += 1
    return y
But this seems really cumbersome and I was wondering if there were a better way. Thanks
Some different options
This is really a matter of style. Your solution works and is not very difficult to understand. If you want to go on on those lines, then I would just tweak it a bit:
def logiter(c, x0, n):
    y = []
    x = x0
    for i in range(n):
        x = c*x*(1-x)
        y.append(x)
    return np.array(y)
The changes:
a for loop is easier to read than a while loop
x0 is not used in the iteration (this adds one more variable, but it is mathematically easier to understand: x0 is a constant)
the function is written out, as it is a very simple one-liner (if it weren't, its name should be changed to something other than log, which is very easy to confuse with the logarithm)
the result is converted into a numpy array (just what I usually do if I need to plot something)
In my opinion the function is now legible enough.
You might also take an object-oriented approach and create a logistic function object:
class Logistics:
    def __init__(self, c, x0):
        self.x = x0
        self.c = c

    def next_iter(self):
        self.x = self.c * self.x * (1 - self.x)
        return self.x
Then you may use this:
def logiter(c, x0, n):
    l = Logistics(c, x0)
    return np.array([l.next_iter() for i in range(n)])
Or you may make it a generator:
def log_generator(c, x0):
    x = x0
    while True:
        x = c * x * (1-x)
        yield x

def logiter(c, x0, n):
    l = log_generator(c, x0)
    return np.array([next(l) for i in range(n)])
If you need performance and have large tables, then I suggest:
def logiter(c, x0, n):
    res = np.empty((n, len(x0)))
    res[0] = c * x0 * (1 - x0)
    for i in range(1, n):
        res[i] = c * res[i-1] * (1 - res[i-1])
    return res
This avoids the slowish conversion into np.array and some copying of stuff around. The memory is allocated only once, and the expensive conversion from a list into an array is avoided.
(BTW, if you returned an array with the initial x0 as the first row, the last version would look cleaner. As it stands, the first row has to be calculated separately if copying the vector around is to be avoided.)
Which one is best? I do not know. IMO, all are readable and justified, it is a matter of style. However, I speak only very broken and poor Pythonic, so there may be good reasons why still something else is better or why something of the above is not good!
Performance
About performance: With my machine I tried the following:
logiter(3.2, np.linspace(0, 1, 1000), 10000)
For the first three approaches the time is essentially the same, approximately 1.5 s. For the last approach (preallocated array) the run time is 0.2 s. However, if the conversion from a list into an array is removed, the first one runs in 0.16 s, so the time is really spent in the conversion procedure.
Visualization
I can think of two useful but quite different ways to visualize the function. You mention that you will have, say, 100 or 1000 different x0's to start with. You do not mention how many iterations you want to have, but maybe we will start with just 100. So, let us create an array with 100 different x0's and 100 iterations at c = 3.6.
data = logiter(3.6, np.linspace(0,1,100), 100)
In a way a standard method to visualize the function is draw 100 lines, each of which represents one starting value. That is easy:
import matplotlib.pyplot as plt
plt.plot(data)
plt.show()
This gives a plot in which all values seem to end up oscillating somewhere, but other than that we have only a mess of color. This approach may be more useful if you use a narrower range of values for x0:
data = logiter(3.6, np.linspace(0.8,0.81,100), 100)
you may color-code the starting values by e.g.:
color1 = np.array([1, 0, 0])
color2 = np.array([0, 0, 1])
for i, k in enumerate(np.linspace(0, 1, data.shape[1])):
    plt.plot(data[:, i], '.', color=(1-k)*color1 + k*color2)
This plots the first columns (corresponding to x0 = 0.80) in red and the last columns in blue and uses a gradual color change in between. (Please note that the more blue a dot is, the later it is drawn, and thus blues overlap reds.)
However, it is possible to take a quite different approach.
data = logiter(3.6, np.linspace(0,1,1000), 50)
plt.imshow(data.T, cmap=plt.cm.bwr, interpolation='nearest', origin='lower',extent=[1,21,0,1], vmin=0, vmax=1)
plt.axis('tight')
plt.colorbar()
The resulting image is my personal favourite. I won't spoil anyone's joy by explaining it too much, but IMO it shows many peculiarities of the behaviour very easily.
Here's what I was aiming for; an indirect approach to understanding (by visualization) the behavior of initial conditions of the function g(c, x) = cx(1-x):
def jam(c, n):
    x = np.linspace(0, 1, 100)
    y = c*x*(1-x)
    for i in range(n):
        plt.plot(x, y)
        y = c*y*(1-y)
    plt.show()
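For example (the parameter values are just for illustration), this overlays g and its first ten iterates in the chaotic regime:

jam(3.6, 10)  # plots g, g^2, ..., g^10 for c = 3.6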