All combinations of all elements of a 2D array - python

So I have matrix A
A = [[0, 0, 1, -1],
     [0, 0, 1, -1],
     [0, 0, 1, -1],
     [0, 0, 1, -1]]
And I want all the possible combinations of these elements, taking one element per row, so values can vary across both rows and columns. For this matrix I would expect 4^4 = 256 possibilities. I have tried:
combs = np.array(list(itertools.product(*A)))
It does create the desired output shape of (256, 4), but all the rows are equal; that is, I get the vector [0,0,1,-1] repeated 256 times.
Here is an example:
output = [[0,0,0,0],
          [0,0,0,1],
          [0,0,1,1],
          [0,1,1,1],
          [1,1,1,1],
          [-1,1,1,-1],
          [-1,-1,-1,-1],
          ....
          [0,-1,0,-1]]
Another example, if
A = [[1,2,3],
     [4,5,6],
     [7,8,9]]
The output should be all the possible combinations of arrays that the matrix can form
Combs = [[1,1,1],
         [1,1,2],
         [1,1,3],
         [1,1,...9],
         [2,1,1],
         [2,2,1],
         [1,2,1],
         ...]
Another example would be:
I have the vector layers
layers = [1,2,3,4,5]
And then I have the vector angle
angle = [0,90,45,-45]
Each layer can have one of these angles, so I create a matrix A
A = [[0,90,45,-45],
     [0,90,45,-45],
     [0,90,45,-45],
     [0,90,45,-45],
     [0,90,45,-45]]
Great, but now I want to know all the possible combinations of angles that the layers can have. For example, layer 1 can have an angle of 0º, layer 2 an angle of 90º, layer 3 an angle of 0º, layer 4 an angle of 45º and layer 5 an angle of 0º. This creates the array
Comb = [0,90,0,45,0]
So all the combinations would be in a matrix
Comb = [[0,0,0,0,0],
        [0,0,0,0,90],
        [0,0,0,90,90],
        [0,0,90,90,90],
        [0,90,90,90,90],
        [90,90,90,90,90],
        ...
        [0,45,45,45,45],
        [0,45,90,-45,90]]
How can I generalize this process for bigger matrices?
Am I doing something wrong?
Thank you!

It's OK to use np.array in conjunction with list(iterable), especially in your case where the iterable is itertools.product(*A). However, this can be optimised since you know the shape of your output array.
There are many ways to perform a Cartesian product, so I'll just put my list:
Methods of Cartesian Product
import itertools
import numpy as np

def numpy_product_itertools(arr):
    return np.array(list(itertools.product(*arr)))

def numpy_product_fromiter(arr):
    dt = np.dtype([('', np.intp)]*len(arr))  # or np.dtype(','.join('i'*len(arr)))
    indices = np.fromiter(itertools.product(*arr), dt)
    return indices.view(np.intp).reshape(-1, len(arr))

def numpy_product_meshgrid(arr):
    return np.stack(np.meshgrid(*arr), axis=-1).reshape(-1, len(arr))

def numpy_product_broadcast(arr):  # a little bit different type of output
    items = [np.array(item) for item in arr]
    idx = np.where(np.eye(len(arr)), Ellipsis, None)
    out = [x[tuple(i)] for x, i in zip(items, idx)]
    return list(np.broadcast(*out))
Example of usage
A = [[1,2,3], [4,5], [7]]
numpy_product_itertools(A)
numpy_product_fromiter(A)
numpy_product_meshgrid(A)
numpy_product_broadcast(A)
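For reference, the itertools-based version picks one element from each sub-list; with the A above it should return the six rows below. The other variants produce the same rows, though possibly in a different order (a sketch of the expected result, not captured output):
>>> numpy_product_itertools(A)
array([[1, 4, 7],
       [1, 5, 7],
       [2, 4, 7],
       [2, 5, 7],
       [3, 4, 7],
       [3, 5, 7]])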
Comparison of performance
import benchit
benchit.setparams(rep=1)
%matplotlib inline
sizes = [3,4,5,6,7]
N = sizes[-1]
arr = [np.arange(0,100,10).tolist()] * N
fns = [numpy_product_itertools, numpy_product_fromiter, numpy_product_meshgrid, numpy_product_broadcast]
in_ = {s: (arr[:s],) for s in sizes}
t = benchit.timings(fns, in_, multivar=True, input_name='Cartesian product of N arrays of length=10')
t.plot(logx=False, figsize=(12, 6), fontsize=14)
Note that numba beats the majority of these algorithms, although it is not included here.
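For completeness, a rough sketch of what such a numba version might look like, assuming all sub-lists have equal length and are passed as a single 2D integer array (my own illustration, not part of the benchmark above):
import numba as nb
import numpy as np

@nb.njit
def numba_product(arr2d):
    n, k = arr2d.shape                      # n arrays, k elements each
    out = np.empty((k ** n, n), dtype=arr2d.dtype)
    for row in range(k ** n):
        rem = row
        for col in range(n - 1, -1, -1):    # mixed-radix decomposition of the row index
            out[row, col] = arr2d[col, rem % k]
            rem //= k
    return out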


Subtracting Two dimensional arrays using numpy broadcasting

I'm new to numpy in general, so this is an easy question; however, I'm clueless as to how to solve it.
I'm trying to implement the K nearest neighbor algorithm for classification of a data set.
There are two arrays named new_points and points that respectively have the shape of (30,4)
and (120,4) (with 4 being the total number of properties of each element),
so I'm trying to calculate the distance between each new point and all old points using numpy broadcasting:
def calc_no_loop(new_points, points):
    return np.sum((new_points - points)**2, axis=1)
# doesn't work; here is the log:
ValueError: operands could not be broadcast together with shapes (30,4) (120,4)
However, as per the rules of broadcasting, two arrays of shapes (30,4) and (120,4) are incompatible,
so I would appreciate any insight on how to solve this (using .reshape perhaps - not sure).
Please note that I have already implemented the same function using one and two loops, but can't implement it without any loop:
def calc_two_loops(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            d[i, j] = np.sum((new_points[i] - points[j])**2)
    return d

def calc_one_loop(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    print(d)
    for i in range(m):
        d[i] = np.sum((new_points[i] - points)**2)
    return d
Let's create an example smaller in size:
nNew = 3; nOld = 5 # Number of new / old points
# New points
new_points = np.arange(100, 100 + nNew * 4).reshape(nNew, 4)
# Old points
points = np.arange(10, 10 + nOld * 8, 2).reshape(nOld, 4)
To compute the differences alone, run:
dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]
So far we have differences in each property of each point (every new point with every old point).
The shape of dfr is (3, 5, 4):
first dimension: the index of the new point,
second dimension: the index of the old point,
third dimension: the difference in each property.
Then, to sum squares of differences by points, run:
d = np.power(dfr, 2).sum(axis=2)
and this is your result.
For my sample data, the result is:
array([[31334, 25926, 21030, 16646, 12774],
[34230, 28566, 23414, 18774, 14646],
[37254, 31334, 25926, 21030, 16646]], dtype=int32)
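Putting the two steps together, the fixed calc_no_loop could look like this (a minimal sketch of the approach above; add np.sqrt if you want Euclidean rather than squared distances):
def calc_no_loop(new_points, points):
    # (30, 1, 4) - (1, 120, 4) broadcasts to (30, 120, 4); then sum over the property axis
    dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]
    return np.power(dfr, 2).sum(axis=2)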
So you have 30 new points and 120 old points; if I understand you correctly, you want a (120, 30)-shaped array of distances as the result.
You could do
import numpy as np
points = np.random.random(120*4).reshape(120,4)
new_points = np.random.random(30*4).reshape(30,4)
def calc_no_loop(new_points, points):
    res = np.zeros([len(points[:,0]), len(new_points[:,0])])
    for idx in range(len(points[:,0])):
        res[idx,:] = np.sum((points[idx,:] - new_points)**2, axis=1)
    return np.sqrt(res)
test = calc_no_loop(new_points,points)
print(np.shape(test))
print(test)
Which gives
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
But from your function name above I get the notion that you do not want a loop? Then you could do this instead:
def calc_no_loop(new_points, points):
    new_points1 = np.repeat(new_points[np.newaxis, ...], len(points), axis=0)
    points1 = np.repeat(points[:, np.newaxis, :], len(new_points), axis=1)
    return np.sqrt(np.sum((new_points1 - points1)**2, axis=2))
test = calc_no_loop(new_points,points)
print(np.shape(test))
print(test)
which has output:
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
i.e. the same result. Note that I added np.sqrt() to the result, which you may have omitted in your example above.

Splitting a 2D Array into smaller 2D arrays WITH a certain overlap using Numpy

Similar to the answer for simple 1D arrays posted here: https://stackoverflow.com/a/38163917/15473622
I would like to split a large 2D array (representing an elevation raster dataset), into smaller 2D arrays (tiles WITH a certain overlap).
Given the following array:
A = [[1,2,3,4,5,6,7,8,9,0],
[2,3,4,5,6,7,8,9,0,1],
[3,4,5,6,7,8,9,0,1,2],
[4,5,6,7,8,9,0,1,2,3],
[5,6,7,8,9,0,1,2,3,4],
[6,7,8,9,0,1,2,3,4,5],
[7,8,9,0,1,2,3,4,5,6]]
Where the input variables would be:
height = 3
width = 4
overlap = 1 *or* y-step = 2, x-step = 3
You would get the following tiles (the last rows and columns can be smaller if the edge arrays are not perfectly divisible). Note B[4] is completely surrounded by a 1 cell buffer and the edge tiles do not extend beyond the original array extents:
B[0] = [[1,2,3,4,5],     B[1] = [[4,5,6,7,8,9],     B[2] = [[8,9,0],
        [2,3,4,5,6],             [5,6,7,8,9,0],             [9,0,1],
        [3,4,5,6,7],             [6,7,8,9,0,1],             [0,1,2],
        [4,5,6,7,8]]             [7,8,9,0,1,2]]             [1,2,3]]

B[3] = [[3,4,5,6,7],     B[4] = [[6,7,8,9,0,1],     B[5] = [[0,1,2],
        [4,5,6,7,8],             [7,8,9,0,1,2],             [1,2,3],
        [5,6,7,8,9],             [8,9,0,1,2,3],             [2,3,4],
        [6,7,8,9,0],             [9,0,1,2,3,4],             [3,4,5],
        [7,8,9,0,1]]             [0,1,2,3,4,5]]             [4,5,6]]

B[6] = [[6,7,8,9,0],     B[7] = [[9,0,1,2,3,4],     B[8] = [[3,4,5],
        [7,8,9,0,1]]             [0,1,2,3,4,5]]             [4,5,6]]
The tile overlap is necessary for further terrain processing on the smaller tiles to eliminate any edge effects when merged back together later.
Is there an elegant way to do this? Many thanks in advance!
(And thank you Julien for correcting my original logic!)
Your example doesn't seem to fully correspond to your logic (it seems to imply the last column alone should also be selected)... so please clarify, or just adapt this simple one-liner to your exact need:
A = [[1,2,3,4,5,6,7,8,9,0],
[2,3,4,5,6,7,8,9,0,1],
[3,4,5,6,7,8,9,0,1,2],
[4,5,6,7,8,9,0,1,2,3],
[5,6,7,8,9,0,1,2,3,4],
[6,7,8,9,0,1,2,3,4,5]]
A = np.array(A)
height = 3
width = 4
overlap = 1
y_step = height-overlap
x_step = width-overlap
B = [A[y:y+height,x:x+width] for y in range(0, A.shape[0], y_step) for x in range(0, A.shape[1], x_step)]
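As a quick sanity check of what the one-liner produces (with the 6x10 A above it yields 12 tiles; the trailing tile in each row of tiles is only one column wide, which is what the remark about the last column refers to):
for i, tile in enumerate(B):
    print(i, tile.shape)
# (3, 4), (3, 4), (3, 4), (3, 1),
# (3, 4), (3, 4), (3, 4), (3, 1),
# (2, 4), (2, 4), (2, 4), (2, 1)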

Efficient way of computing the cross products between two sets of vectors numpy

I have two sets of 2000 3D vectors each, and I need to compute the cross product between each possible pair. I currently do it like this
for tx in tangents_x:
    for ty in tangents_y:
        cross = np.cross(tx, ty)
        ...  # do something with the cross variable
This works, but it's pretty slow. Is there a way to make it faster?
If I was interested in the element-wise product, I could just do the following
# Define initial vectors
tx = np.array([np.random.randn(3) for i in range(2000)])
ty = np.array([np.random.randn(3) for i in range(2000)])
# Store them into matrices
X = np.array([tx for i in range(2000)])
Y = np.array([ty for i in range(2000)]).T
# Compute the element-wise product
ew = X * Y
# Use the element_wise product as usual
for i, tx in enumerate(tangents_x):
    for j, ty in enumerate(tangents_y):
        ...  # use the element-wise product of tx and ty as ew[i,j]
How can I apply this to the cross product instead of the element-wise one? Or, do you see another alternative?
Thanks much :)
Like many numpy functions cross supports broadcasting, therefore you can simply do:
np.cross(tangents_x[:, None, :], tangents_y)
or - more verbose but maybe easier to read
np.cross(tangents_x[:, None, :], tangents_y[None, :, :])
This reshapes tangents_x and tangents_y to shapes 2000, 1, 3 and 1, 2000, 3. By the rules of broadcasting this will be interpreted like two arrays of shape 2000, 2000, 3 where tangents_x is repeated along axis 1 and tangents_y is repeated along axis 0.
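A quick check of the broadcasting approach on small random data (a sketch, not from the original answer):
tangents_x = np.random.randn(4, 3)
tangents_y = np.random.randn(5, 3)
crosses = np.cross(tangents_x[:, None, :], tangents_y[None, :, :])
assert crosses.shape == (4, 5, 3)
assert np.allclose(crosses[2, 3], np.cross(tangents_x[2], tangents_y[3]))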
Just write it out and compile it
import numpy as np
import numba as nb

@nb.njit(fastmath=True, parallel=True)
def calc_cros(vec_1, vec_2):
    res = np.empty((vec_1.shape[0], vec_2.shape[0], 3), dtype=vec_1.dtype)
    for i in nb.prange(vec_1.shape[0]):
        for j in range(vec_2.shape[0]):
            res[i,j,0] = vec_1[i,1] * vec_2[j,2] - vec_1[i,2] * vec_2[j,1]
            res[i,j,1] = vec_1[i,2] * vec_2[j,0] - vec_1[i,0] * vec_2[j,2]
            res[i,j,2] = vec_1[i,0] * vec_2[j,1] - vec_1[i,1] * vec_2[j,0]
    return res
Performance
import time

# create data
tx = np.random.rand(3000, 3)
ty = np.random.rand(3000, 3)

# don't measure compilation overhead
comb = calc_cros(tx, ty)

t1 = time.time()
comb = calc_cros(tx, ty)
print(time.time() - t1)
This gives 0.08s for the two (3000,3) matrices.
np.dot is almost always going to be faster. So you could convert one of the vectors into a matrix.
def skew(x):
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0]])
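The identity being used is that the cross product can be written as a matrix-vector product, np.cross(x, y) == skew(x) @ y; a quick sanity check (my own sketch, not from the original answer):
x, y = np.random.randn(3), np.random.randn(3)
assert np.allclose(np.cross(x, y), skew(x) @ y)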
On my machine this runs faster:
tx = np.array([np.random.randn(3) for i in range(100)])
ty = np.array([np.random.randn(3) for i in range(100)])

tt = time.clock()
for x in tx:
    for y in ty:
        cross = np.cross(x, y)
print(time.clock() - tt)
0.207 sec
tt = time.clock()
for x in tx:
    m = skew(x)
    for y in ty:
        cross = np.dot(m, y)
print(time.clock() - tt)
0.015 sec
This result may vary depending on the computer.
You could use np.meshgrid() to build the combination matrix and then decompose the cross product. The rest is fiddling around with the axes etc:
# build two lists of 5 3D vectors as example values:
a_list = np.random.randint(0, 10, (5, 3))
b_list = np.random.randint(0, 10, (5, 3))
# here the original approach using slow list comprehensions:
slow = np.array([[ np.cross(a, b) for a in a_list ] for b in b_list ])
# now the faster proposed version:
g = np.array([ np.meshgrid(a_list[:,i], b_list[:,i]) for i in range(3) ])
fast = np.array([ g[1,0] * g[2,1] - g[2,0] * g[1,1],
g[2,0] * g[0,1] - g[0,0] * g[2,1],
g[0,0] * g[1,1] - g[1,0] * g[0,1] ]).transpose(1, 2, 0)
I tested this with 10000×10000 elements (instead of the 5×5 in the example above) and it took 6.4 seconds with the fast version. The slow version already took 27 seconds for 500 elements.
For your 2000×2000 elements the fast version takes 0.23s on my computer. Fast enough for you?
Use a cartesian product to get all possible pairs
import itertools as it
all_pairs = it.product(tx, ty)
And then use map to loop over all pairs and compute the cross product (in Python 3, map returns a lazy iterator, so wrap it in list(...) if you need the results materialized):
map(lambda x: np.cross(x[0], x[1]), all_pairs)

2d convolution using python and numpy

I am trying to perform a 2d convolution in python using numpy
I have a 2d array as follows with kernel H_r for the rows and H_c for the columns
data = np.zeros((nr, nc), dtype=np.float32)
#fill array with some data here then convolve
for r in range(nr):
    data[r,:] = np.convolve(data[r,:], H_r, 'same')
for c in range(nc):
    data[:,c] = np.convolve(data[:,c], H_c, 'same')
data = data.astype(np.uint8)
It does not produce the output that I was expecting. Does this code look OK? I think the problem is with the casting from float32 to 8-bit. What's the best way to do this?
Thanks
Maybe it is not the most optimized solution, but this is an implementation I used before with the numpy library for Python:
def convolution2d(image, kernel, bias):
    m, n = kernel.shape
    if (m == n):
        y, x = image.shape
        y = y - m + 1
        x = x - m + 1
        new_image = np.zeros((y, x))
        for i in range(y):
            for j in range(x):
                new_image[i][j] = np.sum(image[i:i+m, j:j+m]*kernel) + bias
    return new_image
I hope this code helps others with the same question.
Regards.
Edit [Jan 2019]
@Tashus's comment below is correct, and @dudemeister's answer is thus probably more on the mark. The function he suggested is also more efficient, by avoiding a direct 2D convolution and the number of operations that would entail.
Possible Problem
I believe you are doing two 1D convolutions, the first along the rows and the second along the columns, and replacing the results of the first with the results of the second.
Notice that numpy.convolve with the 'same' argument returns an array of the same shape as the largest one provided, so when you do the first convolution you have already populated the entire data array.
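For instance, with a length-3 signal and a length-3 kernel (a small illustration of the behaviour described above):
>>> np.convolve([1, 2, 3], [0, 1, 0.5], 'same')
array([1. , 2.5, 4. ])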
One good way to visualize your arrays during these steps is to use Hinton diagrams, so you can check which elements already have a value.
Possible Solution
You can try adding the results of the two convolutions (use data[:,c] += ... instead of data[:,c] = in the second for loop), if your 2D convolution kernel is built by combining the one-dimensional H_r and H_c kernels in that additive way.
Another way to do that would be to use scipy.signal.convolve2d with a 2d convolution array, which is probably what you wanted to do in the first place.
Since you already have your kernel separated you should simply use the sepfir2d function from scipy:
from scipy.signal import sepfir2d
convolved = sepfir2d(data, H_r, H_c)
On the other hand, the code you have there looks all right ...
I checked out many implementations and found none for my purpose, which should be really simple. So here is a dead-simple implementation with a for loop:
def convolution2d(image, kernel, stride, padding):
    image = np.pad(image, [(padding, padding), (padding, padding)], mode='constant', constant_values=0)
    kernel_height, kernel_width = kernel.shape
    padded_height, padded_width = image.shape
    output_height = (padded_height - kernel_height) // stride + 1
    output_width = (padded_width - kernel_width) // stride + 1
    new_image = np.zeros((output_height, output_width)).astype(np.float32)
    for y in range(0, output_height):
        for x in range(0, output_width):
            new_image[y][x] = np.sum(image[y * stride:y * stride + kernel_height, x * stride:x * stride + kernel_width] * kernel).astype(np.float32)
    return new_image
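A hypothetical usage sketch (not from the original answer), just to illustrate the parameters:
image = np.random.rand(8, 8)
kernel = np.ones((3, 3)) / 9.0                           # simple box-blur kernel
out = convolution2d(image, kernel, stride=1, padding=1)
print(out.shape)                                         # (8, 8): (8 + 2*1 - 3) // 1 + 1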
It might not be the most optimized solution either, but it is approximately ten times faster than the one proposed by @omotto and it only uses basic numpy functions (such as reshape, expand_dims, tile...) and no 'for' loops:
def gen_idx_conv1d(in_size, ker_size):
    """
    Generates a list of indices. These indices correspond to the indices
    of a 1D input tensor on which we would like to apply a 1D convolution.

    For instance, with a 1D input array of size 5 and a kernel of size 3, the
    1D convolution product will successively look at elements of indices [0,1,2],
    [1,2,3] and [2,3,4] in the input array. In this case, the function gen_idx_conv1d(5,3)
    outputs the following array: array([0,1,2,1,2,3,2,3,4]).

    args:
        in_size: (type: int) size of the input 1d array.
        ker_size: (type: int) kernel size.
    return:
        idx_list: (type: np.array) list of the successive indices of the 1D input array
        accessed by the 1D convolution algorithm.
    example:
        >>> gen_idx_conv1d(in_size=5, ker_size=3)
        array([0, 1, 2, 1, 2, 3, 2, 3, 4])
    """
    f = lambda dim1, dim2, axis: np.reshape(np.tile(np.expand_dims(np.arange(dim1), axis), dim2), -1)
    out_size = in_size - ker_size + 1
    return f(ker_size, out_size, 0) + f(out_size, ker_size, 1)
def repeat_idx_2d(idx_list, nbof_rep, axis):
    """
    Repeats an array of indices (idx_list) a number of times (nbof_rep) "along" an axis
    (axis). This function helps to browse through a 2d array of size
    (len(idx_list), nbof_rep).

    args:
        idx_list: (type: np.array or list) a 1D array of indices.
        nbof_rep: (type: int) number of repetitions.
        axis: (type: int) axis "along" which the repetition will be applied.
    return:
        idx_list: (type: np.array) a 1D array of indices of size len(idx_list)*nbof_rep.
    example:
        >>> a = np.array([0, 1, 2])
        >>> repeat_idx_2d(a, 3, 0) # repeats array 'a' 3 times along 'axis' 0
        array([0, 0, 0, 1, 1, 1, 2, 2, 2])
        >>> repeat_idx_2d(a, 3, 1) # repeats array 'a' 3 times along 'axis' 1
        array([0, 1, 2, 0, 1, 2, 0, 1, 2])
        >>> b = np.reshape(np.arange(3*4), (3,4))
        >>> b[repeat_idx_2d(np.arange(3), 4, 0), repeat_idx_2d(np.arange(4), 3, 1)]
        array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
    """
    assert axis in [0, 1], "Axis should be equal to 0 or 1."
    tile_axis = (nbof_rep, 1) if axis else (1, nbof_rep)
    return np.reshape(np.tile(np.expand_dims(idx_list, 1), tile_axis), -1)
def conv2d(im, ker):
    """
    Performs a 'valid' 2D convolution on an image. The input image may be
    a 2D or a 3D array.

    The output image's first two dimensions will be reduced depending on the
    convolution size.

    The kernel may be a 2D or 3D array. If 2D, it will be applied on every
    channel of the input image. If 3D, its last dimension must match the
    image's.

    args:
        im: (type: np.array) image (2D or 3D).
        ker: (type: np.array) convolution kernel (2D or 3D).
    returns:
        im: (type: np.array) convolved image.
    example:
        >>> im = np.reshape(np.arange(10*10*3),(10,10,3))/(10*10*3) # 3D image
        >>> ker = np.array([[0,1,0],[-1,0,1],[0,-1,0]]) # 2D kernel
        >>> conv2d(im, ker) # 3D array of shape (8,8,3)
    """
    if len(im.shape) == 2:  # if the image is a 2D array, it is reshaped by expanding the last dimension
        im = np.expand_dims(im, -1)

    im_x, im_y, im_w = im.shape

    if len(ker.shape) == 2:  # if the kernel is a 2D array, it is reshaped so it will be applied to all of the image channels
        ker = np.tile(np.expand_dims(ker, -1), [1, 1, im_w])  # the same kernel will be applied to all of the channels

    assert ker.shape[-1] == im.shape[-1], "Kernel and image last dimension must match."

    ker_x = ker.shape[0]
    ker_y = ker.shape[1]

    # shape of the output image
    out_x = im_x - ker_x + 1
    out_y = im_y - ker_y + 1

    # reshapes the image to (out_x, ker_x, out_y, ker_y, im_w)
    idx_list_x = gen_idx_conv1d(im_x, ker_x)  # computes the indices of a 1D conv (cf. gen_idx_conv1d doc)
    idx_list_y = gen_idx_conv1d(im_y, ker_y)
    idx_reshaped_x = repeat_idx_2d(idx_list_x, len(idx_list_y), 0)  # repeats the previous indices to be used in 2D (cf. repeat_idx_2d doc)
    idx_reshaped_y = repeat_idx_2d(idx_list_y, len(idx_list_x), 1)
    im_reshaped = np.reshape(im[idx_reshaped_x, idx_reshaped_y, :], [out_x, ker_x, out_y, ker_y, im_w])

    # reshapes the 2D kernel
    ker = np.reshape(ker, [1, ker_x, 1, ker_y, im_w])

    # applies the kernel to the image and reduces the dimension back to the one of the original input image
    return np.squeeze(np.sum(im_reshaped * ker, axis=(1, 3)))
I tried to add a lot of comments to explain the method, but the global idea is to reshape the 3D input image into a 5D one of shape (output_image_height, kernel_height, output_image_width, kernel_width, output_image_channel) and then to apply the kernel directly using basic array multiplication. Of course, this method then uses more memory (during the execution the size of the image is multiplied by kernel_height*kernel_width), but it is faster.
To do this reshape step, I 'over-used' the indexing methods of numpy arrays, especially the possibility of giving a numpy array as indices into a numpy array.
This method could also be used to re-code the 2D convolution product in PyTorch or TensorFlow using the base math functions, but I have no doubt in saying that it will be slower than the existing nn.conv2d operator...
I really enjoyed coding this method by only using the numpy basic tools.
One of the most obvious approaches is to hard-code the kernel.
img = img.convert('L')
a = np.array(img)
out = np.zeros([a.shape[0]-2, a.shape[1]-2], dtype='float')
out += a[:-2, :-2]
out += a[1:-1, :-2]
out += a[2:, :-2]
out += a[:-2, 1:-1]
out += a[1:-1,1:-1]
out += a[2:, 1:-1]
out += a[:-2, 2:]
out += a[1:-1, 2:]
out += a[2:, 2:]
out /= 9.0
out = out.astype('uint8')
img = Image.fromarray(out)
This example does a 3x3 box blur, completely unrolled. You can multiply the values where you have a different value and divide them by a different amount. But if you honestly want the quickest and dirtiest method, this is it. I think it beats Guillaume Mougeot's method by a factor of about 5, and his method beats the others by a factor of 10.
It may lose a few steps if you're doing something like a Gaussian blur and need to multiply some stuff.
Try to first round and then cast to uint8:
data = data.round().astype(np.uint8)
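For example (a small sketch of why rounding first matters):
x = np.array([1.9, 250.7], dtype=np.float32)
x.astype(np.uint8)             # array([  1, 250], dtype=uint8) - plain casting truncates
x.round().astype(np.uint8)     # array([  2, 251], dtype=uint8)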
I wrote this convolve_stride, which uses numpy.lib.stride_tricks.as_strided. Moreover, it supports both strides and dilation. It is also compatible with tensors of order > 2.
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv_view(X, F_s, dr, std):
    X_s = np.array(X.shape)
    F_s = np.array(F_s)
    dr = np.array(dr)
    Fd_s = (F_s - 1) * dr + 1
    if np.any(Fd_s > X_s):
        raise ValueError('(Dilated) filter size must be smaller than X')
    std = np.array(std)
    X_ss = np.array(X.strides)
    Xn_s = (X_s - Fd_s) // std + 1
    Xv_s = np.append(Xn_s, F_s)
    Xv_ss = np.tile(X_ss, 2) * np.append(std, dr)
    return as_strided(X, Xv_s, Xv_ss, writeable=False)

def convolve_stride(X, F, dr=None, std=None):
    if dr is None:
        dr = np.ones(X.ndim, dtype=int)
    if std is None:
        std = np.ones(X.ndim, dtype=int)
    if not (X.ndim == F.ndim == len(dr) == len(std)):
        raise ValueError('X.ndim, F.ndim, len(dr), len(std) must be the same')
    Xv = conv_view(X, F.shape, dr, std)
    return np.tensordot(Xv, F, axes=X.ndim)
%timeit -n 100 -r 10 convolve_stride(A, F)
#31.2 ms ± 1.31 ms per loop (mean ± std. dev. of 10 runs, 100 loops each)
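The %timeit line above assumes A and F were defined earlier; a hypothetical setup (my assumption, not from the original answer) might be:
A = np.random.rand(512, 512)
F = np.random.rand(9, 9)
out = convolve_stride(A, F)                        # 'valid' result, shape (504, 504)
out_strided = convolve_stride(A, F, std=[2, 2])    # stride 2 in both dimensions, shape (252, 252)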
Super simple and fast convolution using only basic numpy:
import numpy as np
def conv2d(image, kernel):
    # apply kernel to image, return image of the same shape
    # assume both image and kernel are 2D arrays
    # kernel = np.flipud(np.fliplr(kernel))  # optionally flip the kernel
    k = kernel.shape[0]
    width = k//2
    # place the image inside a frame to compensate for the kernel overlap
    a = framed(image, width)
    b = np.zeros(image.shape)  # fill the output array with zeros; do not use np.empty()
    # shift the image around each pixel, multiply by the corresponding kernel value and accumulate the results
    for p, dp, r, dr in [(i, i + image.shape[0], j, j + image.shape[1]) for i in range(k) for j in range(k)]:
        b += a[p:dp, r:dr] * kernel[p, r]
    # or just write two nested for loops if you prefer
    # np.clip(b, 0, 255, out=b)  # optionally clip values exceeding the limits
    return b

def framed(image, width):
    a = np.zeros((image.shape[0]+2*width, image.shape[1]+2*width))
    a[width:-width, width:-width] = image
    # alternatively fill the frame with ones or copy border pixels
    return a
Run it:
Image.fromarray(conv2d(image, kernel).astype('uint8'))
Instead of sliding the kernel along the image and computing the transformation pixel by pixel, create a series of shifted versions of the image corresponding to each element in the kernel and apply the corresponding kernel value to each of the shifted image versions.
This is probably the fastest you can get using just basic numpy; the speed is already comparable to the C implementation of scipy convolve2d and better than fftconvolve. The idea is similar to @Tatarize's. This example works only for one color component; for RGB just repeat for each (or modify the algorithm accordingly).
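One way to check this implementation is to compare it against scipy (a sketch, assuming scipy is available; note the kernel flip, since the loop above computes a correlation unless the kernel is flipped first):
from scipy.signal import convolve2d
image = np.random.rand(16, 16)
kernel = np.random.rand(3, 3)
assert np.allclose(conv2d(image, kernel),
                   convolve2d(image, kernel[::-1, ::-1], mode='same', boundary='fill'))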
Typically, 'Convolution 2D' is a misnomer. Ideally, under the hood,
what's being done is a correlation of two matrices.
pad == 'same' returns an output with the same dimensions as the input.
It can also take asymmetric images. In order to perform correlation (convolution in deep-learning lingo) on a batch of 2D matrices, one can iterate over all the channels and calculate the correlation for each channel slice with the respective filter slice.
For example: if the image is (28,28,3) and the filter size is (5,5,3), then take each of the 3 slices from the image channels, perform the cross-correlation using the custom function below, and stack the resulting matrices in the respective dimension of the output (see the sketch after the function).
def get_cross_corr_2d(W, X, pad='valid'):
    if pad == 'same':
        pr = int((W.shape[0] - 1)/2)
        pc = int((W.shape[1] - 1)/2)
        conv_2d = np.zeros((X.shape[0], X.shape[1]))
        X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
        X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
        for r in range(conv_2d.shape[0]):
            for c in range(conv_2d.shape[1]):
                conv_2d[r,c] = np.sum(np.inner(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
        return conv_2d
    else:
        pr = W.shape[0] - 1
        pc = W.shape[1] - 1
        conv_2d = np.zeros((X.shape[0] - W.shape[0] + 2*pr + 1,
                            X.shape[1] - W.shape[1] + 2*pc + 1))
        X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
        X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
        for r in range(conv_2d.shape[0]):
            for c in range(conv_2d.shape[1]):
                conv_2d[r,c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
        return conv_2d
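For example, applying the function above channel by channel to a hypothetical (28,28,3) image and (5,5,3) filter, as described earlier (a sketch, not from the original answer):
image = np.random.rand(28, 28, 3)
filt = np.random.rand(5, 5, 3)
# correlate each channel slice with the matching filter slice, then stack along the last axis
per_channel = [get_cross_corr_2d(filt[:, :, k], image[:, :, k], pad='valid') for k in range(3)]
stacked = np.stack(per_channel, axis=-1)     # shape (32, 32, 3) with the padding used in the 'valid' branch above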
This code is incorrect:
for r in range(nr):
    data[r,:] = np.convolve(data[r,:], H_r, 'same')
for c in range(nc):
    data[:,c] = np.convolve(data[:,c], H_c, 'same')
See the Nussbaumer transformation from multidimensional convolution to one-dimensional.
