Why is scipy.linalg.eigh much slower when using np.complex64?

I have a large Hermitian matrix for which I need to calculate the eigenvalues and eigenvectors. For this I use scipy.linalg.eigh. Above a certain matrix size, SciPy is much faster if np.complex128 is used as the data type instead of np.complex64.
If I use np.complex64, the CPU load drops after some time and only one core is used. I checked the SciPy source, and SciPy seems to use the LAPACK routines cheevr or zheevr. I use SciPy with OpenBLAS 0.3.20. Is this a bug in OpenBLAS?
If I run the following code on 32 cores, it takes about 13 minutes with np.complex128 but more than 2 hours with np.complex64.
import numpy as np
import scipy.linalg
import time
n = 27
shape = (n**3, n**3)
dtype = np.complex64
H = np.random.uniform(-1, 1, shape) + 1.j * np.random.uniform(-1, 1, shape)
H = np.asarray(H, dtype=dtype)
start = time.time()
values, vectors = scipy.linalg.eigh(H)
print(f"It took {time.time() - start} seconds.")

Related

Only the GPU to CPU transfer with CuPy is incredibly slow

If I have an array on the GPU, it is really slow (on the order of hundreds of seconds) to copy it back, even for an array of shape (20, 256, 256).
My code is the following:
import cupy as cp
from cupyx.scipy.ndimage import convolve
import numpy as np
# Fast...
xt = np.random.randint(0, 255, (20, 256, 256)).astype(np.float32)
xt_gpu = cp.asarray(xt)
# Also very fast...
result_gpu = convolve(xt_gpu, xt_gpu, mode='constant')
# Very very very very very slow....
result_cpu = cp.asnumpy(result_gpu)
I measured the times using cp.cuda.Event() with record and synchronize to avoid measuring any random times, but the result is still the same: the GPU->CPU transfer is incredibly slow. However, with PyTorch or TensorFlow this is not the case (in my experience, for similar data sizes/shapes)... What am I doing wrong?
I think you might be timing it wrong. I modified the code to synchronize between every GPU operation and it seems like the convolution takes the majority of the time with both transfer operations being very fast.
import cupy as cp
from cupyx.scipy.ndimage import convolve
import numpy as np
import time
# Fast...
xt = np.random.randint(0, 255, (20, 256, 256)).astype(np.float32)
t0 = time.time()
xt_gpu = cp.asarray(xt)
cp.cuda.stream.get_current_stream().synchronize()
print(time.time() - t0)
# Also very fast...
t0 = time.time()
result_gpu = convolve(xt_gpu, xt_gpu, mode='constant')
cp.cuda.stream.get_current_stream().synchronize()
print(time.time() - t0)
# Very very very very very slow....
t0 = time.time()
result_cpu = cp.asnumpy(result_gpu)
cp.cuda.stream.get_current_stream().synchronize()
print(time.time() - t0)
Output:
0.1380000114440918
4.032999753952026
0.0010001659393310547
To me it seems like you were not actually synchronizing between calls when you tested it. Without the synchronize calls, all operations are simply queued up and appear to finish instantly; only the transfer back to a NumPy array forces them to complete. The measured GPU->CPU transfer time is then actually the time for the convolution plus the transfer.
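For reference, here is a rough sketch of the event-based timing the question mentions (my own version, reusing xt_gpu and convolve from the snippet above); recording an event after each call and synchronizing on it gives essentially the same picture:
import cupy as cp

def time_on_gpu(fn):
    # Time one GPU operation with CUDA events; returns (result, milliseconds).
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    result = fn()
    end.record()
    end.synchronize()  # wait for all work recorded so far to finish
    return result, cp.cuda.get_elapsed_time(start, end)

result_gpu, conv_ms = time_on_gpu(lambda: convolve(xt_gpu, xt_gpu, mode='constant'))
result_cpu, copy_ms = time_on_gpu(lambda: cp.asnumpy(result_gpu))
print(conv_ms, copy_ms)  # the convolution dominates, the copy is fast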
I also ran into the same problem; I found that accessing float64 data is much faster than float32, so maybe you can try .astype(np.float64).

PyFFTW performance on multidimensional arrays

I have an nD array, say of dimensions (144, 522720), and I need to compute its FFT.
PyFFTW seems slower than NumPy and SciPy, which is not expected.
Am I doing something obviously wrong?
Below is my code
import numpy
import scipy
import pyfftw
import time

n1 = 144
n2 = 522720
loops = 2

pyfftw.config.NUM_THREADS = 4
pyfftw.config.PLANNER_EFFORT = 'FFTW_ESTIMATE'
# pyfftw.config.PLANNER_EFFORT = 'FFTW_MEASURE'

Q_1 = pyfftw.empty_aligned([n1, n2], dtype='float64')
Q_2 = pyfftw.empty_aligned([n1, n2], dtype='complex_')
Q_ref = pyfftw.empty_aligned([n1, n2], dtype='complex_')

# repeat a few times to see if the pyfftw planner helps
for i in range(0, loops):
    Q_1 = numpy.random.rand(n1, n2)

    s1 = time.time()
    Q_ref = numpy.fft.fft(Q_1, axis=0)
    print('NUMPY - elapsed time: ', time.time() - s1, 's.')

    s1 = time.time()
    Q_2 = scipy.fft.fft(Q_1, axis=0)
    print('SCIPY - elapsed time: ', time.time() - s1, 's.')
    print('Equal = ', numpy.allclose(Q_2, Q_ref))

    s1 = time.time()
    Q_2 = pyfftw.interfaces.numpy_fft.fft(Q_1, axis=0)
    print('PYFFTW NUMPY - elapsed time = ', time.time() - s1, 's.')
    print('Equal = ', numpy.allclose(Q_2, Q_ref))

    s1 = time.time()
    Q_2 = pyfftw.interfaces.scipy_fftpack.fft(Q_1, axis=0)
    print('PYFFTW SCIPY - elapsed time = ', time.time() - s1, 's.')
    print('Equal = ', numpy.allclose(Q_2, Q_ref))

    s1 = time.time()
    fft_object = pyfftw.builders.fft(Q_1, axis=0)
    Q_2 = fft_object()
    print('FFTW PURE Elapsed time = ', time.time() - s1, 's')
    print('Equal = ', numpy.allclose(Q_2, Q_ref))
Firstly, if you turn on the cache before your main loop, the interfaces work largely as expected:
pyfftw.interfaces.cache.enable()
pyfftw.interfaces.cache.set_keepalive_time(30)
It's interesting that, despite the wisdom that should be stored, the construction of the pyfftw objects is still rather slow when the cache is off. No matter, this is exactly the purpose of the cache. In your case you need to make the cache keep-alive time quite long because your loop is very long.
Secondly, it's not a fair comparison to include the construction time of the fft_object in the final test. If you move it outside the timer, then timing just the call to fft_object is a better measure.
Thirdly, it's also interesting to see that even with the cache turned on, the call to numpy_fft is slower than the call to scipy_fft. Since there is no obvious difference in the code path, I suspect that is a caching issue. This is the sort of issue that timeit seeks to mitigate. Here's my proposed timing code, which is more meaningful:
import numpy
import scipy
import pyfftw
import timeit
n1 = 144
n2 = 522720
pyfftw.config.NUM_THREADS = 4
pyfftw.config.PLANNER_EFFORT = 'FFTW_MEASURE'
Q_1 = pyfftw.empty_aligned([n1, n2], dtype='float64')
pyfftw.interfaces.cache.enable()
pyfftw.interfaces.cache.set_keepalive_time(30)
times = timeit.repeat(lambda: numpy.fft.fft(Q_1, axis=0), repeat=5, number=1)
print('NUMPY fastest time = ', min(times))
times = timeit.repeat(lambda: scipy.fft.fft(Q_1, axis=0), repeat=5, number=1)
print('SCIPY fastest time = ', min(times))
times = timeit.repeat(
    lambda: pyfftw.interfaces.numpy_fft.fft(Q_1, axis=0), repeat=5, number=1)
print('PYFFTW NUMPY fastest time = ', min(times))
times = timeit.repeat(
    lambda: pyfftw.interfaces.scipy_fftpack.fft(Q_1, axis=0), repeat=5, number=1)
print('PYFFTW SCIPY fastest time = ', min(times))
fft_object = pyfftw.builders.fft(Q_1, axis=0)
times = timeit.repeat(lambda: fft_object(Q_1), repeat=5, number=1)
print('FFTW PURE fastest time = ', min(times))
On my machine this gives an output like:
NUMPY fastest time = 0.6622681759763509
SCIPY fastest time = 0.6572431400418282
PYFFTW NUMPY fastest time = 0.4003451430471614
PYFFTW SCIPY fastest time = 0.40362057799939066
FFTW PURE fastest time = 0.324020683998242
You can do a bit better if you don't force it to copy the input array into a complex data type by changing Q_1 to be complex128:
NUMPY fastest time = 0.6483533839927986
SCIPY fastest time = 0.847397351055406
PYFFTW NUMPY fastest time = 0.3237176960101351
PYFFTW SCIPY fastest time = 0.3199474769644439
FFTW PURE fastest time = 0.2546963169006631
That interesting scipy slow-down is repeatable.
That said, if your input is real, you should be doing a real transform (for >50% speed-up with pyfftw) and manipulating the resultant complex output.
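As a rough sketch of what the real transform looks like here (the rfft call and the half-spectrum shape are my additions, not part of the original benchmark; only the non-negative frequencies are returned because the spectrum of real input is conjugate-symmetric):
import numpy
import pyfftw

n1, n2 = 144, 522720
Q_1 = pyfftw.empty_aligned([n1, n2], dtype='float64')
Q_1[:] = numpy.random.rand(n1, n2)

# Real-to-complex transform along axis 0.
rfft_object = pyfftw.builders.rfft(Q_1, axis=0)
Q_half = rfft_object()
print(Q_half.shape)  # (73, 522720), i.e. n1//2 + 1 rows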
What's interesting about this example is (I think) how important the cache is in the results (which I suggest is why switching to a real transform is so effective in speeding things up). You also see something dramatic when you change the array size to 524288 (the next power of two, which you might expect to speed things up, certainly not to slow things down dramatically). In this case everything slows down quite a bit, scipy particularly. It feels to me that scipy is more cache sensitive, which would explain the slow-down from changing the input to complex128 (522720 is quite a nice number for FFTing though, so perhaps we should expect a slowdown).
Finally, if accuracy is secondary to speed, you can always use 32-bit floats as the data type. If you combine that with doing a real transform, you get better than a factor-of-10 speed-up over the initial numpy best given above:
PYFFTW NUMPY fastest time = 0.09026529802940786
PYFFTW SCIPY fastest time = 0.1701313250232488
FFTW PURE fastest time = 0.06202622700948268
(numpy and scipy don't change much as I think they use 64-bit floats internally).
Edit: I forgot that SciPy's fftpack real FFTs have a weird output structure, which pyfftw replicates with some slowdown. This has been changed to something more sensible in the new FFT module.
The new FFT interface is implemented in pyFFTW and should be preferred. There was unfortunately a problem with the docs being rebuilt, so they were out of date for a long time and didn't show the new interface - hopefully that is fixed now.
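For illustration, a sketch of calling through the newer interface (my own example; it assumes a pyFFTW version that ships pyfftw.interfaces.scipy_fft and a SciPy new enough to have the scipy.fft backend mechanism):
import numpy
import scipy.fft
import pyfftw
import pyfftw.interfaces.scipy_fft

x = numpy.random.rand(144, 522720)

# Option 1: call the pyFFTW drop-in module directly.
X1 = pyfftw.interfaces.scipy_fft.fft(x, axis=0)

# Option 2: register pyFFTW as the scipy.fft backend for a block of code.
with scipy.fft.set_backend(pyfftw.interfaces.scipy_fft):
    X2 = scipy.fft.fft(x, axis=0)

print(numpy.allclose(X1, X2))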

Python efficient summation in large 2D array

My task is fairly simple: I have a large 2D matrix containing only zeros and ones. For each position in this matrix, I want to sum all pixels in a window around this position. The problem is that the matrix has shape (166667, 17668) and the window sizes range from (333, 333) to (5333, 5333). So far I have only tried this on a subset of the data. The code I arrived at:
out_arr = np.zeros(in_arr.shape)
in_arr = np.pad(in_arr, windowsize//2, mode='reflect')
for y in range(out_arr.shape[0]):
    for x in range(out_arr.shape[1]):
        out_arr[y, x] = np.sum(in_arr[y:y+windowsize, x:x+windowsize])
Obviously, this takes a long time. But in my case it was faster than a rolling-window approach using numpy.stride_tricks.as_strided, as described here. I tried compiling it with Cython, to no effect.
What would be your suggestions to speed this up, apart from parallelizing?
I have an Nvidia Titan X at hand. Is there a way to benefit from that (e.g. using CuPy)?
For windowed summation convolution is actually overkill since a simple O(n) solution exists:
import numpy as np
from scipy.signal import convolve

def winsum(in_arr, windowsize):
    in_arr = np.pad(in_arr, windowsize//2+1, mode='reflect')[:-1, :-1]
    in_arr[0] = 0
    in_arr[:, 0] = 0
    ps = in_arr.cumsum(0).cumsum(1)
    return ps[windowsize:, windowsize:] + ps[:-windowsize, :-windowsize] \
        - ps[windowsize:, :-windowsize] - ps[:-windowsize, windowsize:]
This is already fast, but you can save even more because ps, calculated once for the largest window size, can be reused for all smaller window sizes.
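A simplified sketch of that reuse (my own illustration; it skips the reflect padding used above and only covers positions where the window lies fully inside the array):
def prefix_sum(a):
    # 2D inclusive prefix sum with an extra zero row/column so every
    # window sum becomes four lookups.
    ps = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    ps[1:, 1:] = a.cumsum(0).cumsum(1)
    return ps

def window_sums(ps, w):
    # Sum over every w-by-w window that fits inside the original array.
    return ps[w:, w:] + ps[:-w, :-w] - ps[w:, :-w] - ps[:-w, w:]

data = np.random.random((1000, 1000))
ps = prefix_sum(data)            # computed once
sums_333 = window_sums(ps, 333)  # reused for several window sizes
sums_99 = window_sums(ps, 99)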
However, there is one potential drawback: the very large numbers that may arise from summing everything up like that. A numerically more sound version eliminates this problem by taking the differences first. Downside: the extra saving from sharing ps is no longer available.
def winsum_safe(in_arr, windowsize):
    in_arr = np.pad(in_arr, windowsize//2, mode='reflect')
    in_arr[windowsize:] -= in_arr[:-windowsize]
    in_arr[:, windowsize:] -= in_arr[:, :-windowsize]
    return in_arr.cumsum(0)[windowsize-1:].cumsum(1)[:, windowsize-1:]
For reference, here is the closest competitor, which is FFT-based convolution. You need an up-to-date version of scipy for this to work efficiently. On older versions, use fftconvolve instead of convolve.
def winsumc(in_arr, windowsize):
    in_arr = np.pad(in_arr, windowsize//2, mode='reflect')
    kernel = np.ones((windowsize, windowsize), in_arr.dtype)
    return convolve(in_arr, kernel, 'valid')
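On an older SciPy without the method-selecting convolve, the same idea can be written with fftconvolve (a small variant of the function above, not part of the original answer):
from scipy.signal import fftconvolve

def winsumc_fft(in_arr, windowsize):
    # Same as winsumc, but requesting the FFT path explicitly.
    in_arr = np.pad(in_arr, windowsize//2, mode='reflect')
    kernel = np.ones((windowsize, windowsize), in_arr.dtype)
    return fftconvolve(in_arr, kernel, 'valid')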
The next one is to simulate scipy's old - and excruciatingly slow - behavior.
def winsum_nofft(in_arr, windowsize):
    in_arr = np.pad(in_arr, windowsize//2, mode='reflect')
    kernel = np.ones((windowsize, windowsize), in_arr.dtype)
    return convolve(in_arr, kernel, 'valid', method='direct')
Testing and benchmarking:
data = np.random.random((1000, 1000))
assert np.allclose(winsum(data, 333), winsumc(data, 333))
assert np.allclose(winsum(data, 333), winsum_safe(data, 333))
kwds = dict(globals=globals(), number=10)
from timeit import timeit
from time import perf_counter
print('data 1000x1000, window 333x333')
print('cumsum: ', timeit('winsum(data, 333)', **kwds)*100, 'ms')
print('cumsum safe: ', timeit('winsum_safe(data, 333)', **kwds)*100, 'ms')
print('fftconv: ', timeit('winsumc(data, 333)', **kwds)*100, 'ms')
t = perf_counter()
res = winsum_nofft(data, 99) # 333 just takes too long
t = perf_counter() - t
assert np.allclose(winsum(data, 99), res)
print('data 1000x1000, window 99x99')
print('conv: ', t*1000, 'ms')
Sample output:
data 1000x1000, window 333x333
cumsum: 70.33260859316215 ms
cumsum safe: 59.98647050000727 ms
fftconv: 298.60571819590405 ms
data 1000x1000, window 99x99
conv: 135224.8261970235 ms
@Divakar pointed out in the comments that you can use conv2d, and he is right. Here is an example:
import numpy as np
from scipy import signal
data = np.random.rand(5,5) # your original data that you want to sum
kernel = np.ones((2,2)) # square matrix of your dimensions, filled with ones
output = signal.convolve2d(data,kernel,mode='same') # the convolution

Gram-Schmidt orthogonalization in pure Tensorflow: performance for iterative solution is much slower than numpy

I want to do Gram-Schmidt orthogonalization to fix big matrices which start to deviate slightly from orthogonality, in pure TensorFlow (to do it on the graph, within a larger computation, without breaking it). The solutions I've seen, like the one there, are used "externally" (doing multiple sess.run calls inside).
So I wrote a simple and I think very inefficient implementation myself:
def tf_gram_schmidt(vectors):
    # add batch dimension for matmul
    basis = tf.expand_dims(vectors[0,:]/tf.norm(vectors[0,:]),0)
    for i in range(1,vectors.get_shape()[0].value):
        v = vectors[i,:]
        # add batch dimension for matmul
        v = tf.expand_dims(v,0)
        w = v - tf.matmul(tf.matmul(v, tf.transpose(basis)), basis)
        # I assume that my matrix is close to orthogonal
        basis = tf.concat([basis, w/tf.norm(w)],axis=0)
    return basis
But when I compare it with the same iterative external code, it is 3 times slower (on the GPU!!!), though it has slightly better precision:
how much source differs from orthogonal matrix:
44.7176
tensorflow version:
0.034667
Time elapsed: 23365.9820557ms
numpy version with tensorflow and variable re-assign to the result of numpy code:
0.057589
Time elapsed: 8540.5600071ms
(UPD 4: I had a small mistake in my example, but it didn't change timings at all, as ort_discrepancy() is a lightweight function):
Minimal example:
import tensorflow as tf
import numpy as np
import time

# found this code somewhere on stackoverflow
def np_gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - np.sum( np.dot(v,b)*b for b in basis )
        if (w > 1e-10).any():
            basis.append(w/np.linalg.norm(w))
        else:
            basis.append(np.zeros(w.shape))
    return np.array(basis)

def tf_gram_schmidt(vectors):
    # add batch dimension for matmul
    basis = tf.expand_dims(vectors[0,:]/tf.norm(vectors[0,:]),0)
    for i in range(1,vectors.get_shape()[0].value):
        v = vectors[i,:]
        # add batch dimension for matmul
        v = tf.expand_dims(v,0)
        w = v - tf.matmul(tf.matmul(v, tf.transpose(basis)), basis)
        # I assume that my matrix is close to orthogonal
        basis = tf.concat([basis, w/tf.norm(w)],axis=0)
    return basis

# how much matrix differs from orthogonal
# computes ||W^T*W - I||_2
def ort_discrepancy(matrix):
    wwt = tf.matmul(matrix, matrix, transpose_a=True)
    rows = tf.shape(wwt)[0]
    cols = tf.shape(wwt)[1]
    return tf.norm((wwt - tf.eye(rows,cols)),ord='euclidean')

np.random.seed(0)

# white noise matrix
np_nearly_orthogonal = np.random.normal(size=(2000,2000))
# rows normalized to unit length
np_nearly_orthogonal = np.array([row/np.linalg.norm(row) for row in np_nearly_orthogonal])

tf_nearly_orthogonal = tf.Variable(np_nearly_orthogonal,dtype=tf.float32)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    print("how much source differs from orthogonal matrix:")
    print(ort_discrepancy(tf_nearly_orthogonal).eval())

    print("tensorflow version:")
    start = time.time()
    print(ort_discrepancy(tf_gram_schmidt(tf_nearly_orthogonal)).eval())
    end = time.time()
    print("Time elapsed: %sms"%(1000*(end-start)))

    print("numpy version with tensorflow and variable re-assign to the result of numpy code:")
    start = time.time()
    tf_nearly_orthogonal = tf.Variable(np_gram_schmidt(tf_nearly_orthogonal.eval()),dtype=tf.float32)
    sess.run(tf.variables_initializer([tf_nearly_orthogonal]))
    # check that variable was updated
    print(ort_discrepancy(tf_nearly_orthogonal).eval())
    end = time.time()
    print("Time elapsed: %sms"%(1000*(end-start)))
Is there a way to speed this up? I couldn't figure out how to do it for G-S, which requires appending to the basis (so no tf.map_fn parallelization can help).
UPD: I achieved a 2x difference by optimizing the tf.matmul call:
def tf_gram_schmidt(vectors):
    # add batch dimension for matmul
    basis = tf.expand_dims(vectors[0,:]/tf.norm(vectors[0,:]),0)
    for i in range(1,vectors.get_shape()[0].value):
        v = vectors[i,:]
        # add batch dimension for matmul
        v = tf.expand_dims(v,0)
        w = v - tf.matmul(tf.matmul(v, basis, transpose_b=True), basis)
        # I assume that my matrix is close to orthogonal
        basis = tf.concat([basis, w/tf.norm(w)],axis=0)
    return basis
how much source differs from orthogonal matrix:
44.7176
tensorflow version:
0.0335421
Time elapsed: 17004.458189ms
numpy version with tensorflow and variable re-assign to the result of numpy code:
0.057589
Time elapsed: 8082.20791817ms
EDIT2:
Just for fun, I tried to fully mimic the numpy solution, and got code with an extremely long running time:
def tf_gram_schmidt(vectors):
    # add batch dimension for matmul
    basis = tf.expand_dims(vectors[0,:]/tf.norm(vectors[0,:]),0)
    for i in range(1,vectors.get_shape()[0].value):
        v = vectors[i,:]
        # like in numpy example
        multiplied = tf.reduce_sum(tf.map_fn(lambda b: tf.scalar_mul(tf.tensordot(v,b,axes=[[0],[0]]),b), basis), axis=0)
        w = v - multiplied
        ## add batch dimension for matmul
        ##v = tf.expand_dims(v,0)
        ##w = v - tf.matmul(tf.matmul(v, basis, transpose_b=True), basis)
        # I assume that my matrix is close to orthogonal
        basis = tf.concat([basis, tf.expand_dims(w/tf.norm(w),0)],axis=0)
    return basis
(which seems to overfill GPU memory as well):
how much source differs from orthogonal matrix:
44.7176
tensorflow version:
2018-01-05 22:12:09.854505: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 14005 get requests, put_count=5105 evicted_count=1000 eviction_rate=0.195886 and unsatisfied allocation rate=0.714031
2018-01-05 22:12:09.854530: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 100 to 110
2018-01-05 22:12:13.090296: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 308520 get requests, put_count=314261 evicted_count=6000 eviction_rate=0.0190924 and unsatisfied allocation rate=0.00088487
2018-01-05 22:12:22.270822: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 1485113 get requests, put_count=1500399 evicted_count=16000 eviction_rate=0.0106638 and unsatisfied allocation rate=0.000490198
2018-01-05 22:12:37.833056: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 3484575 get requests, put_count=3509407 evicted_count=26000 eviction_rate=0.00740866 and unsatisfied allocation rate=0.000339209
2018-01-05 22:12:59.995184: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 6315546 get requests, put_count=6349923 evicted_count=36000 eviction_rate=0.00566936 and unsatisfied allocation rate=0.000259202
0.0290728
Time elapsed: 136108.97398ms
numpy version with tensorflow and variable re-assign to the result of numpy code:
0.057589
Time elapsed: 10618.8428402ms
UPD3: My GPU is a GTX 1050; it usually gives a 5-7x speedup compared to my CPU. So this result is very strange to me.
UPD5: OK, I found that the GPU is barely used for this code, while training a neural network with manually written backpropagation, which uses a lot of tf.matmul's and other matrix arithmetic, exploits it fully. Why is that?
UPD 6:
Following the given suggestion I have measured the time in a new way:
# Akshay's suggestion to measure performance correctly
orthogonalized = ort_discrepancy(tf_gram_schmidt(tf_nearly_orthogonal))

with tf.Session() as sess:
    sess.run(init)

    print("how much source differs from orthogonal matrix:")
    print(ort_discrepancy(tf_nearly_orthogonal).eval())

    print("tensorflow version:")
    start = time.time()
    tf_result = sess.run(orthogonalized)
    end = time.time()
    print(tf_result)
    print("Time elapsed: %sms"%(1000*(end-start)))

    print("numpy version with tensorflow and variable re-assign to the result of numpy code:")
    start = time.time()
    tf_nearly_orthogonal = tf.Variable(np_gram_schmidt(tf_nearly_orthogonal.eval()),dtype=tf.float32)
    sess.run(tf.variables_initializer([tf_nearly_orthogonal]))
    # check that variable was updated
    print(ort_discrepancy(tf_nearly_orthogonal).eval())
    end = time.time()
    print("Time elapsed: %sms"%(1000*(end-start)))
Now I can see a 4x speedup:
how much source differs from orthogonal matrix:
44.7176
tensorflow version:
0.018951
Time elapsed: 2594.85888481ms
numpy version with tensorflow and variable re-assign to the result of numpy code:
0.057589
Time elapsed: 8851.86600685ms
TensorFlow appears slow because your benchmark is measuring both the time it takes to construct the graph and the time it takes to execute it; a fairer comparison between TensorFlow and NumPy would exclude graph construction from the benchmark. In particular, your benchmark should probably look something like this:
print("tensorflow version:")
# This line constructs the graph but does not execute it.
orthogonalized = ort_discrepancy(tf_gram_schmidt(tf_nearly_orthogonal))
start = time.time()
tf_result = sess.run(orthogonalized)
end = time.time()

Scipy convolve2d with subsampling like Theano's conv2d?

I wish to perform 2D convolution on images of size 600 x 400 using a 10 x 10 filter. The filter is not separable. scipy.signal.convolve2d works well for me currently, but I am expecting much bigger images soon.
To counter that, I have two ideas
resizing images
subsampling (or striding)?
Focusing on the subsampling part, Theano has a function which does convolution the same way as scipy's convolve2d; see theano conv2d.
It also has a subsampling option. But installing Theano on Windows has been painful for me. How do I get subsampling to work with scipy.signal.convolve2d? Any other alternatives (which don't require me to install some heavyweight library)?
You could implement the subsampling by hand; I'll only sketch the 1d case for simplicity. Say you want to sample s = d * f on a regular subgrid with spacing k. Then your nth sample is s[nk] = sum_{i=0..10} f[i] d[nk-i]. The thing to observe here is that the indices of f and d always sum to a multiple of k. This suggests splitting the sum into sub-sums: s[nk] = sum_{j=0..k-1} sum_{i=0..10/k} f[j+ik] d[-j+(n-i)k]. So what you need to do is: subsample d and f on grids with spacing k at all offsets 0, ..., k-1; convolve all pairs of subsampled d and f whose offsets sum to 0 or k; and add the results.
Here's some code for the 1d case. It roughly implements the above; only the grids are placed slightly differently to make index management easier. The second function does it the stupid way, i.e. computes the full convolution and then decimates. It is there to test the first function against.
import numpy as np
from scipy import signal

def ss_conv(d1, d2, decimate):
    n = (len(d1) + len(d2) - 1) // decimate
    out = np.zeros((n,))
    for i in range(decimate):
        d1d = d1[i::decimate]
        d2d = d2[decimate-i-1::decimate]
        cv = signal.convolve(d1d, d2d, 'full')
        out[:len(cv)] += cv
    return out

def conv_ss(d1, d2, decimate):
    return signal.convolve(d1, d2, 'full')[decimate-1::decimate]
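A quick sanity check (my own addition, using an arbitrary signal length and the 10-tap filter size from the question):
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal(1000)
f = rng.standard_normal(10)

# The subsampled convolution must match the decimated full convolution.
for k in (2, 5):
    assert np.allclose(ss_conv(d, f, k), conv_ss(d, f, k))
print("subsampled convolution matches the decimated full convolution")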
Edit: 2d version:
import numpy as np
from scipy import signal

def ss_conv_2d(d1, d2, decy, decx):
    ny = (d1.shape[0] + d2.shape[0] - 1) // decy
    nx = (d1.shape[1] + d2.shape[1] - 1) // decx
    out = np.zeros((ny, nx))
    for i in range(decy):
        for j in range(decx):
            d1d = d1[i::decy, j::decx]
            d2d = d2[decy-i-1::decy, decx-j-1::decx]
            cv = signal.convolve2d(d1d, d2d, 'full')
            out[:cv.shape[0], :cv.shape[1]] += cv
    return out

def conv_ss_2d(d1, d2, decy, decx):
    return signal.convolve2d(d1, d2, 'full')[decy-1::decy, decx-1::decx]
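And a corresponding check in 2d (again my own addition, with the 600 x 400 image and 10 x 10 filter sizes from the question, keeping every 2nd output row and column):
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((600, 400))
filt = rng.standard_normal((10, 10))

sub = ss_conv_2d(image, filt, 2, 2)
ref = conv_ss_2d(image, filt, 2, 2)
assert np.allclose(sub, ref)
print(sub.shape)  # (304, 204)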
