I'm working on a program to solve the Bloch (or, more precisely, the Bloch-McConnell) equations in Python. So the equation to solve is:
M(t) = exp(A*t) * (M0 + A⁺*b) - A⁺*b
where A is an NxN matrix, A⁺ its pseudoinverse, and M0 and b are vectors of size N.
The special thing is that I want to solve the equation for several offsets (and thus several matrices A) at the same time. The new dimensions are:
A: MxNxN
b: Nx1
M0: MxNx1
The conventional version of the program (using a loop over the 1st dimension of size M) works fine, but I'm stuck at one point in the 'parallel version'.
At the moment my code looks like this:
def bmcsim(A, b, M0, timestep):
    ex = myexpm(A*timestep)  # returns stacked array of size MxNxN
    M = np.zeros_like(M0)
    for n in range(ex.shape[0]):
        A_tmp = A[n,:,:]
        A_b = np.linalg.lstsq(A_tmp, b, rcond=None)[0]
        M[n,:,:] = np.abs(np.real(np.dot(ex[n,:,:], M0[n,:,:] + A_b) - A_b))
    return M
and I would like to get rid of that for n in range(ex.shape[0]) loop. Unfortunately, np.linalg.lstsq doesn't work for stacked arrays, does it? In myexpm, np.apply_along_axis is used for another problem:
def myexpm(A):
    vals, vects = np.linalg.eig(A)
    tmp = np.einsum('ijk,ikl->ijl', vects, np.apply_along_axis(np.diag, -1, np.exp(vals)))
    return np.einsum('ijk,ikl->ijl', tmp, np.linalg.inv(vects))
However, that just works for 1D input data. Is there something similar that I can use with np.linalg.lstsq? The np.dot in bmcsim will be replaced with np.einsum like in myexpm I guess, or are there better ways?
Thanks for your help!
Update:
I just realized that I can replace np.linalg.lstsq(A,b) with np.linalg.solve(A.T.dot(A), A.T.dot(b)) and managed to get rid of the loop this way:
def bmcsim2(A, b, M0, timestep):
    ex = myexpm(A*timestep)
    b_stack = np.repeat(b[np.newaxis, :, :], A.shape[0], axis=0)  # A.shape[0] is M, the number of offsets
    tmp_left = np.einsum('kji,ikl->ijl', np.transpose(A), A)
    tmp_right = np.einsum('kji,ikl->ijl', np.transpose(A), b_stack)
    A_b_stack = np.linalg.solve(tmp_left, tmp_right)
    return np.abs(np.real(np.einsum('ijk,ikl->ijl', ex, M0 + A_b_stack) - A_b_stack))
This is about 3 times faster, but still a bit complicated. I hope there is a better (shorter/easier) way that's maybe even faster?!
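For reference, one further simplification might be possible: since each A slice here is square, np.linalg.solve broadcasts over stacked matrices directly (avoiding the normal equations AᵀA, which square the condition number), and the np.diag construction in myexpm can be replaced by broadcasting. A minimal sketch, untested beyond the shapes given above (bmcsim3 and myexpm_bc are my own names):
import numpy as np

def myexpm_bc(A):
    # np.linalg.eig broadcasts over the leading axis; scaling the eigenvector
    # columns by exp(vals) replaces the apply_along_axis/np.diag construction
    vals, vects = np.linalg.eig(A)
    return (vects * np.exp(vals)[:, None, :]) @ np.linalg.inv(vects)

def bmcsim3(A, b, M0, timestep):
    ex = myexpm_bc(A * timestep)
    # b[None] turns the (N, 1) right-hand side into an explicit (1, N, 1) stack,
    # so np.linalg.solve broadcasts it against the (M, N, N) stack of matrices
    A_b = np.linalg.solve(A, b[None])
    # @ applies slice-wise to 3D arrays, replacing the np.einsum calls
    return np.abs(np.real(ex @ (M0 + A_b) - A_b))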
Related
I have code from this paper and am trying to convert it to Python (NumPy especially). But in this code there is a declaration that confuses me.
XF=[X fit'];
The X comes from
function [Best_score, Best_pos, DTBO_curve] = DTBO(SearchAgents, Max_iterations, lowerbound, upperbound, dimension, fitness)
lowerbound = ones(1,dimension).*(lowerbound);
upperbound = ones(1,dimension).*(upperbound);
for i = 1:dimension
    X(:,i) = lowerbound(i) + rand(SearchAgents,1).*(upperbound(i) - lowerbound(i));
end
and fit comes from
for i = 1:SearchAgents
    L = X(i,:);
    fit(i) = fitness(L);
end
My questions are:
What does it mean?
How do I convert it to Python?
I only know that ' means transpose in MATLAB, but I don't know the rest. I can't run this code myself because I don't have a MATLAB license. Also, the paper's author didn't explain it, which made me more confused.
I tried to run that line directly in Python. Of course it doesn't work. I also tried a small modification, assuming maybe it is declaring an array:
XF = array([X, transpose(fit)])
And it still doesn't work because it cannot combine two arrays with different dimensions.
By the way, if you ask me, the final result of this code is the best candidate solution in the last iteration.
It would have been better if you had given the shapes of X and fit. From the code above, though, X is (SearchAgents, dimension) and fit has one value per agent, so [X fit'] appends fit as an extra column. I guess it is something like:
import numpy as np
N, M = 10, 20            # N = SearchAgents, M = dimension
X = np.random.rand(N, M)
fit = np.random.rand(N)  # one fitness value per agent
XF = np.concatenate([X, fit[:, None]], axis=1)
print(XF.shape)  # (10, 21)
You might also want to take a look at np.block:
Matlab’s “square bracket stacking”, [A, B, ...; p, q, ...], is equivalent to np.block([[A, B, ...], [p, q, ...]]).
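With the shapes assumed above, the same concatenation via np.block would be simply (a small sketch):
# horizontal concatenation, the analogue of MATLAB's [X fit']
XF = np.block([X, fit[:, None]])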
I'm trying to find approximate nonzero solutions of M@x = 0 using SVD in scipy, where M is a complex-valued 4x4 matrix.
First a toy example:
import numpy as np
import scipy.linalg

M = np.array([
    [1,  1, 1, 1],
    [1,  0, 1, 1],
    [1, -1, 0, 0],
    [0,  0, 0, 1e-10]
])
U, s, Vh = scipy.linalg.svd(M)
print(s)       # [2.57554368e+00 1.49380718e+00 3.67579714e-01 7.07106781e-11]
print(Vh[-1])  # [ 0.00000000e+00 2.77555756e-16 -7.07106781e-01 7.07106781e-01]
print(np.linalg.norm(M @ Vh[-1]))  # 7.07106781193738e-11
So in this case, the smallest (last) value in s is very small, and the corresponding last row Vh[-1] is the approximate solution to M@x=0, where M@Vh[-1] is also very small, roughly the same order as s[-1].
Now the real example which doesn't work the same way:
M = np.array([[ 1.68572560e-01-3.98053448e-02j,  5.61165939e-01-1.22638499e-01j,
                3.39625823e-02-1.16216469e+00j,  2.65140034e-06-4.10296457e-06j],
              [ 4.17991622e-01+1.33504182e-02j, -4.79190633e-01-2.08562169e-01j,
                4.87429517e-01+3.68070222e-01j, -3.63710538e-05+6.43912577e-06j],
              [-2.18353842e+06-4.20344071e+05j, -2.52806647e+06-2.08794519e+05j,
               -2.01808847e+06-1.96246695e+06j, -5.77147300e-01-3.12598394e+00j],
              [-3.03044160e+05-6.45842521e+04j, -6.85879183e+05+2.07045473e+05j,
                6.14194217e+04-1.28864668e+04j, -7.08794838e+00+9.70230041e+00j]])
U, s, Vh = scipy.linalg.svd(M)
print(s)       # [4.42615634e+06 5.70600901e+05 4.68468171e-01 5.21600592e-13]
print(Vh[-1])  # [-5.35883825e-05+0.00000000e+00j 3.74712739e-05-9.89288566e-06j 4.03111556e-06+7.59306578e-06j -8.20834667e-01+5.71165865e-01j]
print(np.linalg.norm(M @ Vh[-1]))  # 35.950705194666476
What's going on here? s[-1] is very small, so M@x should have a solution in principle, but Vh[-1] doesn't look like a solution. Is this an issue with M and Vh being complex numbers? A numerical stability/accuracy issue? Something else?
I'd really like to figure out what x would give M@x with roughly the same order of magnitude as s[-1], please let me know any way to solve this.
You forgot the conjugate transpose
The decomposition given by the SVD satisfies np.allclose(M, U @ np.diag(s) @ Vh). If s[-1] is small, it means that the last column of U @ np.diag(s) ~ M @ np.linalg.inv(Vh) ~ M @ Vh.T.conj() is small. So you can use
M @ Vh[-1].T.conj()  # [-7.77136331e-14-3.74441041e-13j,
                     #   4.67810503e-14+3.45797987e-13j,
                     #  -2.84217094e-14-1.06581410e-14j,
                     #   7.10542736e-15+3.10862447e-15j]
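As an aside, scipy packages this same SVD-based computation (including the conjugation) as scipy.linalg.null_space. A small check, assuming a loose rcond so that the tiny singular value is treated as zero:
import numpy as np
import scipy.linalg

# null_space returns an orthonormal basis of the (numerical) null space;
# rcond sets the relative threshold below which singular values count as zero
x = scipy.linalg.null_space(M, rcond=1e-10)[:, 0]
print(np.linalg.norm(M @ x))  # tiny, on the order of s[-1]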
I understand the basics of how vectorization works, but I'm struggling to see how to apply that knowledge to my use case. I have a working algorithm for some image processing. However, the particular algorithm that I'm working with doesn't process the entire image as there is a border to account for the "window" that gets shifted around the image.
I'm trying to use this to better understand NumPy's vectorization, but I can't figure out how to account for the window and the border. Below is what I have in vanilla Python (with the actual algorithm redacted; I'm only asking for help on how to vectorize). I looked into np.fromfunction and a few other options, but have had no luck. Any suggestions would be welcome at this point.
half_k = int(np.floor(k_size / 2))  # np.int is deprecated; plain int works
U = np.zeros(img_a.shape, dtype=np.float64)
V = np.zeros(img_b.shape, dtype=np.float64)
for y in range(half_k, img_a.shape[0] - half_k):
    for x in range(half_k, img_a.shape[1] - half_k):
        # init variables for window calc goes here
        for j in range(y - half_k, y + half_k + 1):
            for i in range(x - half_k, x + half_k + 1):
                pass  # stuff init-ed above gets added to here
        # final calc on things calculated in windows goes here
        U[y][x] = one_of_the_window_calculations
        V[y][x] = the_other_one
return U, V
I think you can first create an array of the indices of the patches with a function like this get_patch_idx:
def get_patch_idx(ind, array_shape, step):
    row_nums, col_nums = array_shape
    col_idx = ind - (ind//col_nums)*col_nums if ind % col_nums != 0 else col_nums
    row_idx = ind//col_nums if ind % col_nums != 0 else ind//col_nums
    if col_idx+step == col_nums or row_idx+step == row_nums or col_idx-step == -1 or row_idx-step == -1:
        raise ValueError
    upper = [(row_idx-1)*col_nums+col_idx-1, (row_idx-1)*col_nums+col_idx, (row_idx-1)*col_nums+col_idx+1]
    middle = [row_idx*col_nums+col_idx-1, row_idx*col_nums+col_idx, row_idx*col_nums+col_idx+1]
    lower = [(row_idx+1)*col_nums+col_idx-1, (row_idx+1)*col_nums+col_idx, (row_idx+1)*col_nums+col_idx+1]
    return [upper, middle, lower]
Assume you have a (10, 8) array, and half_k is 1:
test = np.linspace(1,80,80).reshape(10,8)*2
mask = np.linspace(0,79,80).reshape(10,8)[1:-1,1:-1].ravel().astype(int)  # int, not the deprecated np.int
in which the indices in mask are the allowed (interior) positions. Then you can create an array of indices of the patches:
patches_inds = np.array([get_patch_idx(ind,test.shape,1) for ind in mask])
with this patches_inds, patches of the original array test can be sliced with np.take
patches = np.take(test,patches_inds)
This bypasses the for loop efficiently.
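As an alternative not in the original answer: NumPy 1.20+ ships np.lib.stride_tricks.sliding_window_view, which builds the same interior patches without any index arithmetic. A sketch for the (10, 8) example with half_k = 1:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

test = np.linspace(1, 80, 80).reshape(10, 8) * 2
k = 3  # window size, i.e. 2*half_k + 1

# one (k, k) view per position where the window fits entirely inside the array
patches = sliding_window_view(test, (k, k))  # shape (8, 6, 3, 3)
patches = patches.reshape(-1, k, k)          # one patch per interior pixel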
I have an array of size (254, 80) which I need to use scipy's fsolve on. I have found that using fsolve on a vector is quicker than a for loop, but only for vectors up to about 100 values long. After this, the speed quickly drops off and becomes very slow, sometimes completely stopping.
I'm currently looping through one dimension of the array and using a vectorised fsolve on the smaller dimension, but it's still taking longer than I would expect/like.
Does anyone have a good workaround for this, or know of a similar function which will happily handle a vector of a larger size? Or perhaps I am doing something wrong...
Here's the current code:
for i in range(array.shape[0]):
    f = lambda y: a[i] - m[i]*y - md[i]*(( y**4 + 2*(y**2)*np.cos(Thetas[i,:]) )**0.25)
    ystar[i,:] = fsolve(f, y0[i])
(The rest of the variables are all a similar size)
Digging into this further, I have found that a function such as
f = lambda y: y*np.tanh(y) - a0/(m**2)
is faster to solve than
f = lambda y: (m**2)*y*np.tanh(y) - a0
where m and a0 are large 2D np arrays.
Can anyone explain why this is?
Thanks,
Rachael
Although no one answered, I found a workaround which avoided the fsolve function and used interpolation instead. Luckily the initial guess is good enough that only a few y values are needed. If the initial guess knowledge is poor then this method is probably not appropriate. Do note this still has some issues, but for my purposes it performs well...
ystar = np.empty((A, B))  # empty array for the solutions
num_ys = 20  # number of points to find where the solution is
y0_u = y0  # just so the calculated initial guess isn't overwritten
for i in range(Thetas.shape[1]):
    ys = np.linspace(-.05, .2, num_ys)[:,None]*np.ones((num_ys, Thetas.shape[0])) + y0_u
    vals = (np.squeeze(eta) - np.squeeze(m)*ys*np.sqrt(g*np.tanh(ys**2*depth)) - np.squeeze(md)*np.sqrt(g*np.tanh(depth*np.sqrt(ys**4+2*(ys**2)*kB*np.cos(Thetas[:,i]+phi_bi)+kB**2)))*(( ys**4+2*(ys**2)*kB*np.cos(Thetas[:,i]+phi_bi)+kB**2 )**0.25))
    idxs_important = -1*(np.clip(np.vstack(((np.sign(vals[:-1]*vals[1:])-1), np.zeros((1, Thetas[:,i].size)))), -1, 0) + np.clip(np.vstack((np.zeros((1, Thetas[:,i].size)), (np.sign(vals[:-1]*vals[1:])-1))), -1, 0))
    ys_chosen = idxs_important*ys
    ys_chosen[ys_chosen==0] = 10000
    sorted_ys_idx = np.argsort(ys_chosen.T, axis=1)
    sorted_ys = ((ys_chosen.T)[np.arange(np.shape(ys_chosen.T)[0])[:,np.newaxis], sorted_ys_idx]).T
    sorted_vals = (((vals*idxs_important).T)[np.arange(np.shape(vals.T)[0])[:,np.newaxis], sorted_ys_idx]).T
    # interpolation bit
    x_id = 0
    yposs = sorted_ys[:2,:]
    valposs = sorted_vals[:2,:]
    y = yposs[0,:] + (yposs[1,:] - yposs[0,:])*(x_id - valposs[0,:])/(valposs[1,:] - valposs[0,:])
    ystar[:,i] = np.squeeze(y)
    y0_u = ystar[:,i]
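Stripped of the problem-specific algebra, the core of this workaround is: evaluate the residual on a grid of candidate y values, find the first sign change per column, and linearly interpolate between the two bracketing points. A generic sketch of just that step (interp_root is a hypothetical helper of my own; it assumes every column actually has a sign change):
import numpy as np

def interp_root(f, grid):
    # grid has shape (num_ys, n_problems): one column of candidate y's per problem
    vals = f(grid)
    # True where f changes sign between consecutive grid points
    sign_flip = np.signbit(vals[:-1]) != np.signbit(vals[1:])
    first = sign_flip.argmax(axis=0)  # index of the first bracket per column
    cols = np.arange(grid.shape[1])
    y0, y1 = grid[first, cols], grid[first + 1, cols]
    v0, v1 = vals[first, cols], vals[first + 1, cols]
    # linear interpolation to the zero crossing inside each bracket
    return y0 - v0 * (y1 - y0) / (v1 - v0)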
I'm studying dynamical systems, particularly the logistic family g(x) = cx(1-x), and I need to iterate this function an arbitrary number of times to understand its behavior. I have no problem iterating the function given a specific point x_0, but again, I'd like to graph the entire function and its iterations, not just a single point. For plotting a single function, I have this code:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt

def logplot(c, n=10):
    dt = .001
    x = np.arange(0, 1.001, dt)
    y = c*x*(1-x)
    plt.plot(x, y)
    plt.axis([0, 1, 0, c*.25 + (1/10)*c*.25])
    plt.show()
I suppose I could tackle this by the lengthy/daunting method of explicitly creating a list of the range of each iteration using something like the following:
def log(c, x0):
    return c*x0*(1-x0)  # note: (1-x0), not (1-x), which would be undefined here

def logiter(c, x0, n):
    i = 0
    y = []
    while i <= n:
        val = log(c, x0)
        y.append(val)
        x0 = val
        i += 1
    return y
But this seems really cumbersome and I was wondering if there were a better way. Thanks
Some different options
This is really a matter of style. Your solution works and is not very difficult to understand. If you want to go on on those lines, then I would just tweak it a bit:
def logiter(c, x0, n):
    y = []
    x = x0
    for i in range(n):
        x = c*x*(1-x)
        y.append(x)
    return np.array(y)
The changes:
a for loop is easier to read than a while loop
x0 is not used in the iteration (this adds one more variable, but it is mathematically easier to understand; x0 is a constant)
the function is written out, as it is a very simple one-liner (if it weren't, its name should be changed to something other than log, which is very easy to confuse with logarithm)
the result is converted into a numpy array (just what I usually do, if I need to plot something)
In my opinion the function is now legible enough.
You might also take an object-oriented approach and create a logistic function object:
class Logistics():
    def __init__(self, c, x0):
        self.x = x0
        self.c = c
    def next_iter(self):
        self.x = self.c * self.x * (1 - self.x)
        return self.x
Then you may use this:
def logiter(c, x0, n):
    l = Logistics(c, x0)
    return np.array([l.next_iter() for i in range(n)])
Or you may make it a generator:
def log_generator(c, x0):
    x = x0
    while True:
        x = c * x * (1-x)
        yield x

def logiter(c, x0, n):
    l = log_generator(c, x0)
    return np.array([next(l) for i in range(n)])  # next(l), not l.next(), in Python 3
If you need performance and have large tables, then I suggest:
def logiter(c, x0, n):
    res = np.empty((n, len(x0)))
    res[0] = c * x0 * (1 - x0)
    for i in range(1, n):
        res[i] = c * res[i-1] * (1 - res[i-1])
    return res
This avoids the slowish conversion into np.array and some copying of stuff around. The memory is allocated only once, and the expensive conversion from a list into an array is avoided.
(BTW, if you returned an array with the initial x0 as the first row, the last version would look cleaner. Now the first one has to be calculated separately if copying the vector around is desired to be avoided.)
Which one is best? I do not know. IMO, all are readable and justified, it is a matter of style. However, I speak only very broken and poor Pythonic, so there may be good reasons why still something else is better or why something of the above is not good!
Performance
About performance: With my machine I tried the following:
logiter(3.2, np.linspace(0, 1, 1000), 10000)
For the first three approaches the time is essentially the same, approximately 1.5 s. For the last approach (preallocated array) the run time is 0.2 s. However, if the conversion from a list into an array is removed, the first one runs in 0.16 s, so the time is really spent in the conversion procedure.
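To reproduce this kind of measurement, something along these lines should work (a sketch using the preallocating logiter defined above; timings will of course vary by machine):
import timeit
import numpy as np

# time one call of logiter with 1000 starting values and 10000 iterations
t = timeit.timeit(lambda: logiter(3.2, np.linspace(0, 1, 1000), 10000), number=1)
print(f"{t:.2f} s")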
Visualization
I can think of two useful but quite different ways to visualize the function. You mention that you will have, say, 100 or 1000 different x0's to start with. You do not mention how many iterations you want to have, but maybe we will start with just 100. So, let us create an array with 100 different x0's and 100 iterations at c = 3.6.
data = logiter(3.6, np.linspace(0,1,100), 100)
In a way a standard method to visualize the function is to draw 100 lines, each of which represents one starting value. That is easy:
import matplotlib.pyplot as plt
plt.plot(data)
plt.show()
This gives:
Well, it seems that all values end up oscillating somewhere, but other than that we have only a mess of color. This approach may be more useful if you use a narrower range of values for x0:
data = logiter(3.6, np.linspace(0.8,0.81,100), 100)
you may color-code the starting values by e.g.:
color1 = np.array([1, 0, 0])
color2 = np.array([0, 0, 1])
for i, k in enumerate(np.linspace(0, 1, data.shape[1])):
    plt.plot(data[:,i], '.', color=(1-k)*color1 + k*color2)
This plots the first columns (corresponding to x0 = 0.80) in red and the last columns in blue and uses a gradual color change in between. (Please note that the more blue a dot is, the later it is drawn, and thus blues overlap reds.)
However, it is possible to take a quite different approach.
data = logiter(3.6, np.linspace(0,1,1000), 50)
plt.imshow(data.T, cmap=plt.cm.bwr, interpolation='nearest', origin='lower',extent=[1,21,0,1], vmin=0, vmax=1)
plt.axis('tight')
plt.colorbar()
gives:
This is my personal favourite. I won't spoil anyone's joy by explaining it too much, but IMO this shows many peculiarities of the behaviour very easily.
Here's what I was aiming for: an indirect approach to understanding (by visualization) the behavior of initial conditions of the function g(c, x) = cx(1-x):
def jam(c, n):
    x = np.linspace(0, 1, 100)
    y = c*x*(1-x)
    for i in range(n):
        plt.plot(x, y)
        y = c*y*(1-y)
    plt.show()
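For example, a single call overlays the first n iterates of g on one set of axes:
jam(3.6, 10)  # plots g, g(g(x)), ..., the first 10 iterates at c = 3.6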