MATLAB to Python conversion: vectors, arrays, index elements

Good day to everyone! I'm currently converting a MATLAB project to Python 2.7. I am trying to convert the line
h = [ im(:,2:cols) zeros(rows,1) ] - [ zeros(rows,1) im(:,1:cols-1) ];
My attempt at converting it:
h = np.concatenate((im[1,range(2,cols)], np.zeros((rows, 1)))) -
np.concatenate((np.zeros((rows, 1)),im[1,range(2,cols - 1)] ))
IDLE returns various errors, such as
ValueError: all the input arrays must have same number of dimensions
I'm very new to Python and I would appreciate it if you would suggest other methods. Thank you so much! Here's the function I am trying to convert.
function [gradient, or] = canny(im, sigma, scaling, vert, horz)
xscaling = vert; yscaling = horz;
hsize = [6*sigma+1, 6*sigma+1]; % The filter size.
gaussian = fspecial('gaussian',hsize,sigma);
im = filter2(gaussian,im); % Smoothed image.
im = imresize(im, scaling, 'AntiAliasing',false);
[rows, cols] = size(im);
h = [ im(:,2:cols) zeros(rows,1) ] - [ zeros(rows,1) im(:,1:cols-1) ];
I would also like to ask about the equivalent of the ':' operator, which MATLAB uses mainly in indices and arrays. Is there an equivalent of the : operator in Python?
Here is the converted Python code I have started:
def canny(im=None, sigma=None, scaling=None, vert=None, horz=None):
    xscaling = vert
    yscaling = horz
    hsize = (6 * sigma + 1), (6 * sigma + 1)  # The filter size.
    gaussian = gauss2D(hsize, sigma)
    im = filter2(gaussian, im)  # Smoothed image.
    print("This is im")
    print(im)
    print("This is hsize")
    print(hsize)
    print("This is scaling")
    print(scaling)
    # scaling = 0.4
    # scaling = tuple(scaling)
    im = cv2.resize(im, None, fx=scaling, fy=scaling)
    rows, cols = np.shape(im)

Say your data is in a list of lists. Try this:
a = [[2, 9, 4], [7, 5, 3], [6, 1, 8]]
im = np.array(a, dtype=float)
rows = 3
cols = 3
h = (np.hstack([im[:, 1:cols], np.zeros((rows, 1))])
- np.hstack([np.zeros((rows, 1)), im[:, :cols-1]]))
The equivalent of MATLAB's horzcat (that is, [A B]) is np.hstack and the equivalent of vertcat ([A; B]) is np.vstack.
Array indexing in numpy is very close to MATLAB, except that indexes start at 0 in numpy, and the range p:q means "p to q-1".
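As a quick reference, here is a sketch of common MATLAB indexing idioms next to their numpy equivalents (using a hypothetical im array, just for illustration):

import numpy as np

im = np.arange(12, dtype=float).reshape(3, 4)
rows, cols = im.shape

a = im[:, 1:cols]    # MATLAB im(:,2:cols): all rows, columns 2..cols
b = im[:, :cols-1]   # MATLAB im(:,1:cols-1): all rows, columns 1..cols-1
c = im[2, :]         # MATLAB im(3,:): the third row (numpy counts from 0)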
Also, the storage order of arrays is row-major by default, and you can use column-major order if you want (see this). In MATLAB, arrays are stored in column-major order. To check in Python, type for instance np.isfortran(im). If it returns True, the array has the same order as MATLAB (Fortran order); otherwise it's row-major (C order). This matters when you want to optimize loops, or when you pass an array to a C or Fortran routine.
Ideally, try to put everything in an np.array as soon as possible, and don't use lists (they take much more space and processing is much slower). There are also some quirks: for instance, 1.0 / 0.0 throws an exception, but np.float64(1.0) / np.float64(0.0) returns inf, like in MATLAB.
Another example from the comments:
d1 = [ im(2:rows,2:cols) zeros(rows-1,1); zeros(1,cols) ] - ...
[ zeros(1,cols); zeros(rows-1,1) im(1:rows-1,1:cols-1) ];
d2 = [ zeros(1,cols); im(1:rows-1,2:cols) zeros(rows-1,1); ] - ...
[ zeros(rows-1,1) im(2:rows,1:cols-1); zeros(1,cols) ];
For this one, rather than np.vstack and np.hstack, you can use np.block.
im = np.ones((10, 15))
rows, cols = im.shape
d1 = (np.block([[im[1:rows, 1:cols], np.zeros((rows-1, 1))],
                [np.zeros((1, cols))]]) -
      np.block([[np.zeros((1, cols))],
                [np.zeros((rows-1, 1)), im[:rows-1, :cols-1]]]))
d2 = (np.block([[np.zeros((1, cols))],
                [im[:rows-1, 1:cols], np.zeros((rows-1, 1))]]) -
      np.block([[np.zeros((rows-1, 1)), im[1:rows, :cols-1]],
                [np.zeros((1, cols))]]))
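As a quick sanity check (a sketch using the example arrays above), both differences keep the shape of im:

print(d1.shape, d2.shape)  # (10, 15) (10, 15)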

With np.zeros((rows, 1)) you are generating a 2D array containing rows 1D arrays of one element each, while with im[1, 2:cols] you are getting a 1D array of cols-2 elements. You should replace np.zeros((rows, 1)) with np.zeros(rows).
Moreover, in the second np.concatenate, the subarray you take from im should have the same number of elements as in the first concatenate. Note that you are taking one element less: range(2, cols) vs range(2, cols-1).
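Putting both fixes together, here is a sketch of the original line with consistent shapes, keeping np.concatenate but working on the whole 2D array (MATLAB's 1-based im(:,2:cols) becomes im[:, 1:cols] in 0-based numpy):

h = (np.concatenate((im[:, 1:cols], np.zeros((rows, 1))), axis=1)
     - np.concatenate((np.zeros((rows, 1)), im[:, :cols-1]), axis=1))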

Related

Calculating the sum of elements around an element in a numpy array [duplicate]


What is the best script in Python that replicates MATLAB's imresize3?

I am translating code from MATLAB to Python but cannot perfectly replicate the results of MATLAB's imresize3. My input is a 101x101x101 array. The first four values ([0,0:3,0] in Python, or (1,1:4,1) in MATLAB) are: 0.3819 0.4033 0.4336 0.2767. The data input for both languages is identical.
sampleQDNormSmall = imresize3(sampleQDNorm,0.5);
This results in a 51x51x51 array where the first four values (1,1:4,1) for example are: 0.3443 0.2646 0.2700 0.2835
Now I've tried two different pieces of code in python to replicate these results:
from skimage.transform import resize
from skimage.transform import rescale
sampleQDNormSmall = resize(sampleQDNorm, (0.5*sampleQDNorm.shape[0], 0.5*sampleQDNorm.shape[1], 0.5*sampleQDNorm.shape[2]), order=3, anti_aliasing=True)
sampleQDNormSmall1 = rescale(sampleQDNorm, 0.5, order=3, anti_aliasing=True)
The first one gives a 51x51x51 array whose first four values [0,0:3,0] are: 0.3452 0.2669 0.2774 0.3099. This is very close, but not exactly the same numerical output.
The second one gives a 50x50x50 array whose first four values [0,0:3,0] are: 0.3422 0.2623 0.2810 0.3006. This is a different output array size, and it also doesn't reproduce the same numerical output as the MATLAB code or the other Python function.
I don't know enough about the optional arguments to know what might get me a better result. I know that for this type of array, MATLAB's default is cubic interpolation, which is why I am using order 3 in Python. The default for anti-aliasing in both is true. I have two bigger arrays that I am having the same issues with: an (873x873x873) array and a bool (873x873x873) array.
The MATLAB code I'm using is considered the "correct answer" for the work I am doing so I am trying to replicate the results as accurately as possible into python. Please let me know what I can try in python to reproduce the correct data.
sampleQDNorm is roughly random decimals between 0 and 1 for [0:100,0:100,0:100] and is padded with zeros on sides [:,:,101],[:,101,:],[101,:,:]
Getting the exact same result as MATLAB imresize3 is challenging.
One reason is that MATLAB enables an antialiasing filter by default, and I can't seem to find the equivalent Python implementation.
The closest existing Python alternatives are described in this post.
scipy.ndimage.zoom supports 3D resizing.
It could be that skimage.transform.resize gives a closer result, but none is identical to the MATLAB result.
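For reference, a minimal sketch of the scipy.ndimage.zoom route (assuming sampleQDNorm is the 101x101x101 array; order=3 selects cubic spline interpolation, which is close to, but not the same as, MATLAB's cubic kernel, and the output shape may differ by one voxel per axis due to rounding):

from scipy.ndimage import zoom

sampleQDNormSmall = zoom(sampleQDNorm, 0.5, order=3)  # roughly (51, 51, 51)
print(sampleQDNormSmall.shape)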
Reimplementing imresize3:
Looking at the MATLAB implementation of imresize3 (MATLAB source code), it is apparent that the MATLAB implementation "simply" resizes along each axis:
Resize (by half) along the vertical axis.
Resize the above result (by half) along the horizontal axis.
Resize the above result (by half) along the depth axis.
Here is a MATLAB code sample that demonstrates the implementation (using cubic interpolation):
I1 = imread('peppers.png');
I2 = imresize(imread('autumn.tif'), [size(I1, 1), size(I1, 2)]);
I3 = imresize(imread('football.jpg'), [size(I1, 1), size(I1, 2)]);
I4 = imresize(imread('cameraman.tif'), [size(I1, 1), size(I1, 2)]);
I = cat(3, I1, I2, I3, I4);
J = imresize3(I, 0.5, 'cubic', 'Antialiasing', false);
imwrite(I1, '/Tmp/I1.png');
imwrite(I2, '/Tmp/I2.png');
imwrite(I3, '/Tmp/I3.png');
imwrite(I4, '/Tmp/I4.png');
imwrite(J(:,:,1), '/Tmp/J1.png');
imwrite(J(:,:,2), '/Tmp/J2.png');
imwrite(J(:,:,3), '/Tmp/J3.png');
imwrite(J(:,:,4), '/Tmp/J4.png');
imwrite(J(:,:,5), '/Tmp/J5.png');
K = cubicResize3(I, 0.5);
max_abs_diff = max(abs(double(J(:)) - double(K(:))));
disp(['max_abs_diff = ', num2str(max_abs_diff)])
function B = cubicResize3(A, scale)
    order = [1 2 3];
    B = A;
    for k = 1:numel(order)
        dim = order(k);
        B = cubicResizeAlongDim(B, dim, scale);
    end
end

function out = cubicResizeAlongDim(in, dim, scale)
    % If dim is 3, permute the input matrix so that the third dimension
    % becomes the first dimension. This way, we can resize along the
    % third dimension as though we were resizing along the first dimension.
    isThirdDimResize = (dim == 3);
    if isThirdDimResize
        in = permute(in, [3 2 1]);
        dim = 1;
    end

    if dim == 1
        out_rows = round(size(in, 1)*scale);
        out_cols = size(in, 2);
    else % dim == 2
        out_rows = size(in, 1);
        out_cols = round(size(in,2)*scale);
    end

    out = zeros(out_rows, out_cols, size(in, 3), class(in)); % Allocate array for storing the output.

    for i = 1:size(in, 3)
        % Resize each color plane separately
        out(:, :, i) = imresize(in(:, :, i), [out_rows, out_cols], 'bicubic', 'Antialiasing', false);
    end

    % Permute back so that the original dimensions are restored if we were
    % resizing along the third dimension.
    if isThirdDimResize
        out = permute(out, [3 2 1]);
    end
end
The result is max_abs_diff = 0, meaning that cubicResize3 and imresize3 gave the same output.
Note:
The above implementation stores images in the Tmp folder, to be used as input for testing the Python implementation.
Here is a Python implementation using OpenCV:
import numpy as np
import cv2
#from scipy.ndimage import zoom
def cubic_resize_along_dim(inp, dim, scale):
    """Implementation is based on the MATLAB source code of the resizeAlongDim function."""
    # If dim is 2, permute the input matrix so that the third dimension
    # becomes the first dimension. This way, we can resize along the
    # third dimension as though we were resizing along the first dimension.
    is_third_dim_resize = (dim == 2)
    if is_third_dim_resize:
        inp = np.swapaxes(inp, 2, 0).copy()  # in = permute(in, [3 2 1])
        dim = 0

    if dim == 0:
        out_rows = int(np.round(inp.shape[0]*scale))  # out_rows = round(size(in, 1)*scale);
        out_cols = inp.shape[1]                       # out_cols = size(in, 2);
    else:  # dim == 1
        out_rows = inp.shape[0]                       # out_rows = size(in, 1);
        out_cols = int(np.round(inp.shape[1]*scale))  # out_cols = round(size(in,2)*scale);

    out = np.zeros((out_rows, out_cols, inp.shape[2]), inp.dtype)  # Allocate array for storing the output.

    for i in range(inp.shape[2]):
        # Resize each color plane separately
        out[:, :, i] = cv2.resize(inp[:, :, i], (out_cols, out_rows), interpolation=cv2.INTER_CUBIC)

    # Permute back so that the original dimensions are restored if we were
    # resizing along the third dimension.
    if is_third_dim_resize:
        out = np.swapaxes(out, 2, 0)  # out = permute(out, [3 2 1]);

    return out

def cubic_resize3(a, scale):
    b = a.copy()
    for k in range(3):
        b = cubic_resize_along_dim(b, k, scale)
    return b
# Build 3D input image (10 channels with resolution 512x384).
i1 = cv2.cvtColor(cv2.imread('/Tmp/I1.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i2 = cv2.cvtColor(cv2.imread('/Tmp/I2.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i3 = cv2.cvtColor(cv2.imread('/Tmp/I3.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i4 = cv2.imread('/Tmp/I4.png', cv2.IMREAD_UNCHANGED)
im = np.dstack((i1, i2, i3, i4)) # Stack arrays along the third axis
# Read and adjust MATLAB output (out_mat is used as reference for testing).
# out_mat is the result of J = imresize3(I, 0.5, 'cubic', 'Antialiasing', false);
j1 = cv2.imread('/Tmp/J1.png', cv2.IMREAD_UNCHANGED)
j2 = cv2.imread('/Tmp/J2.png', cv2.IMREAD_UNCHANGED)
j3 = cv2.imread('/Tmp/J3.png', cv2.IMREAD_UNCHANGED)
j4 = cv2.imread('/Tmp/J4.png', cv2.IMREAD_UNCHANGED)
j5 = cv2.imread('/Tmp/J5.png', cv2.IMREAD_UNCHANGED)
out_mat = np.dstack((j1, j2, j3, j4, j5)) # Stack arrays along the third axis
#out_py = zoom(im, 0.5, order=3, mode='reflect')
# Execute 3D resize in Python
out_py = cubic_resize3(im, 0.5)
abs_diff = np.absolute(out_mat.astype(np.int16) - out_py.astype(np.int16))
print(f'max_abs_diff = {abs_diff.max()}')
The Python implementation reads the input files stored by MATLAB (and converts from BGR to RGB when required).
The implementation compares the result of cubic_resize3 with the MATLAB output of imresize3.
The maximum difference is 12 (not zero).
Apparently cv2.resize and MATLAB's imresize give slightly different results.
Update:
Replacing:
out[:, :, i] = cv2.resize(inp[:, :, i], (out_cols, out_rows), interpolation=cv2.INTER_CUBIC)
with:
out[:, :, i] = transform.resize(inp[:, :, i], (out_rows, out_cols), order=3, mode='edge', anti_aliasing=False, preserve_range=True)
Reduces the maximum difference to 4.

Matrices help - index 3 is out of bounds for axis 1 with size 3 BUT I'm pretty sure I have a (3,n) matrix

I keep getting the error 'index 3 is out of bounds for axis 1 with size 3', but I'm sure that I'm using a (3,n) matrix rather than an (n,3) one. I'm not very familiar with matrices in Python, so I have been using a kind of hacky way of getting them into the shape I want so I can multiply or add them. Can anyone see where I've gone wrong or suggest some better practice?
I'm trying to perform a rotational transform on A, generated via:
A = array(random.rand(3, 9));
where A contains a set of x, y, z coordinates in every column, e.g.:
Matrix A:
[[0.70799333 0.77123425 0.07271538 0.52498025 0.84353825 0.78331767
0.06428417 0.25629863 0.6654734 0.77562903]
[0.34179928 0.83233168 0.3920859 0.19819796 0.22486337 0.09274312
0.49057914 0.69716143 0.613912 0.04940198]
[0.98522559 0.71273242 0.70784866 0.61589377 0.34007973 0.34492078
0.44491238 0.37423906 0.37427018 0.13558728]]
The translated matrix is calculated via A_translated = ret_R·(each column of A) + ret_t, where
ret_R:
[[ 0.1928724 0.90776212 0.372516 ]
[ 0.27931303 -0.41473028 0.8660156 ]
[ 0.94062983 -0.06298194 -0.33353981]]
and
ret_t:
[[0.93445859]
[0.59949888]
[0.77385835]]
My attempt was as follows
count = 0
num_rows, num_cols = A.shape
translated_A = pd.DataFrame(zeros((num_rows, num_cols)))
print('Translated A: \n', translated_A)
for i in range(0, num_cols):
    multiply = ret_R.A[:, i]  # works up until (not including) i = 3
    # IndexError: index 3 is out of bounds for axis 1 with size 3
    print('Multiply: \n', multiply)
    multiply2 = np.matrix(pd.DataFrame(multiply))
    matrix = multiply2 + ret_t  # works
    matrix2 = pd.DataFrame(matrix)  # np.matrix(pd.DataFrame(matrix)) # not working?
    print('Matrix:', matrix2)
    translated_A[i] = matrix2[0]
print(translated_A)
The line multiply = ret_R.A[:,i] only works up until (and not including) i = 3, which suggests that my A matrix is (n,3), but I'm sure it's (3,n). I kept switching between matrices and data frames because this seemed to work, but it doesn't work past i = 2.
I've realised that I should be using '@' to compute the matrix product properly rather than '.', and I had to transpose multiply2 to get a matrix in the form [ [] [] [] ]. I no longer have to keep switching between a data frame and a matrix.
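For completeness, a vectorized sketch of that fix: with ret_R as a (3,3) array and ret_t as a (3,1) column, the whole transform needs no loop, because ret_t broadcasts across the columns of the product:

translated_A = ret_R @ A + ret_t  # (3,3) @ (3,n) -> (3,n); ret_t is added to every column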

Why does numpy.random.dirichlet() not accept multidimensional arrays?

On the numpy page they give the example of
s = np.random.dirichlet((10, 5, 3), 20)
which is all fine and great; but what if you want to generate random samples from a 2D array of alphas?
alphas = np.random.randint(10, size=(20, 3))
If you try np.random.dirichlet(alphas), np.random.dirichlet([x for x in alphas]), or np.random.dirichlet((x for x in alphas)), it results in a
ValueError: object too deep for desired array. The only thing that seems to work is:
y = np.empty(alphas.shape)
for i in xrange(np.alen(alphas)):
    y[i] = np.random.dirichlet(alphas[i])
print y
...which is far from ideal for my code structure. Why is this the case, and can anyone think of a more "numpy-like" way of doing this?
Thanks in advance.
np.random.dirichlet is written to generate samples for a single Dirichlet distribution. That code is implemented in terms of the Gamma distribution, and that implementation can be used as the basis for a vectorized code to generate samples from different distributions. In the following, dirichlet_sample takes an array alphas with shape (n, k), where each row is an alpha vector for a Dirichlet distribution. It returns an array also with shape (n, k), each row being a sample of the corresponding distribution from alphas. When run as a script, it generates samples using dirichlet_sample and np.random.dirichlet to verify that they are generating the same samples (up to normal floating point differences).
import numpy as np
def dirichlet_sample(alphas):
    """
    Generate samples from an array of alpha distributions.
    """
    r = np.random.standard_gamma(alphas)
    return r / r.sum(-1, keepdims=True)

if __name__ == "__main__":
    alphas = 2 ** np.random.randint(0, 4, size=(6, 3))

    np.random.seed(1234)
    d1 = dirichlet_sample(alphas)
    print("dirichlet_sample:")
    print(d1)

    np.random.seed(1234)
    d2 = np.empty(alphas.shape)
    for k in range(len(alphas)):
        d2[k] = np.random.dirichlet(alphas[k])
    print("np.random.dirichlet:")
    print(d2)

    # Compare d1 and d2:
    err = np.abs(d1 - d2).max()
    print("max difference:", err)
Sample run:
dirichlet_sample:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
np.random.dirichlet:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
max difference: 5.55111512313e-17
I think you're looking for
y = np.array([np.random.dirichlet(x) for x in alphas])
for your list comprehension. Otherwise you're simply passing a Python list or tuple. I imagine the reason numpy.random.dirichlet does not accept your list of alpha values is that it's not set up to; it already accepts an array, which it expects to have a dimension of k, as per the documentation.

2d convolution using python and numpy

I am trying to perform a 2d convolution in Python using numpy.
I have a 2d array as follows, with kernel H_r for the rows and H_c for the columns:
data = np.zeros((nr, nc), dtype=np.float32)
# fill array with some data here, then convolve
for r in range(nr):
    data[r, :] = np.convolve(data[r, :], H_r, 'same')
for c in range(nc):
    data[:, c] = np.convolve(data[:, c], H_c, 'same')
data = data.astype(np.uint8)
It does not produce the output that I was expecting. Does this code look OK? I think the problem is with the casting from float32 to 8-bit. What's the best way to do this?
Thanks
Maybe it is not the most optimized solution, but this is an implementation I used before with the numpy library:
def convolution2d(image, kernel, bias):
    m, n = kernel.shape
    if (m == n):
        y, x = image.shape
        y = y - m + 1
        x = x - m + 1
        new_image = np.zeros((y, x))
        for i in range(y):
            for j in range(x):
                new_image[i][j] = np.sum(image[i:i+m, j:j+m] * kernel) + bias
        return new_image
I hope this code helps others with the same doubt.
Regards.
Edit [Jan 2019]
@Tashus's comment below is correct, and @dudemeister's answer is thus probably more on the mark. The function he suggested is also more efficient, since it avoids a direct 2D convolution and the number of operations that would entail.
Possible Problem
I believe you are doing two 1d convolutions, the first along each row and the second along each column, and replacing the results of the first with the results of the second.
Notice that numpy.convolve with the 'same' argument returns an array of the same shape as the largest one provided, so the first set of convolutions already populates the entire data array.
One good way to visualize your arrays during these steps is to use Hinton diagrams, so you can check which elements already have a value.
Possible Solution
You can try adding the results of the two convolutions instead (use data[:,c] += ... rather than data[:,c] = in the second for loop), if your 2D convolution matrix is built from the one-dimensional H_r and H_c kernels in that additive way (the original answer illustrated this with an image); a sketch of that change follows.
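A sketch of that change, reusing the loops from the question (H_r, H_c, nr, nc as defined there):

for r in range(nr):
    data[r, :] = np.convolve(data[r, :], H_r, 'same')
for c in range(nc):
    data[:, c] += np.convolve(data[:, c], H_c, 'same')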
Another way to do that would be to use scipy.signal.convolve2d with a 2d convolution array, which is probably what you wanted to do in the first place.
Since you already have your kernel separated, you can simply use the sepfir2d function from scipy:
from scipy.signal import sepfir2d
convolved = sepfir2d(data, H_r, H_c)
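Alternatively, if you want the explicit 2D kernel for the scipy.signal.convolve2d route mentioned above, a sketch under the assumption that your separable kernel is the outer product of the two 1D filters:

import numpy as np
from scipy.signal import convolve2d

kernel_2d = np.outer(H_c, H_r)  # column filter times row filter
convolved = convolve2d(data, kernel_2d, mode='same')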
On the other hand, the code you have there looks all right ...
I checked out many implementations and found none for my purpose, which should be really simple. So here is a dead-simple implementation with a for loop:
def convolution2d(image, kernel, stride, padding):
    image = np.pad(image, [(padding, padding), (padding, padding)], mode='constant', constant_values=0)
    kernel_height, kernel_width = kernel.shape
    padded_height, padded_width = image.shape
    output_height = (padded_height - kernel_height) // stride + 1
    output_width = (padded_width - kernel_width) // stride + 1
    new_image = np.zeros((output_height, output_width)).astype(np.float32)
    for y in range(0, output_height):
        for x in range(0, output_width):
            new_image[y][x] = np.sum(image[y * stride:y * stride + kernel_height,
                                           x * stride:x * stride + kernel_width] * kernel).astype(np.float32)
    return new_image
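A quick usage sketch with hypothetical values: with a 5x5 image, a 3x3 kernel, stride 1 and padding 1, the output keeps the 5x5 shape:

import numpy as np

image = np.arange(25, dtype=np.float32).reshape(5, 5)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0  # simple box kernel
out = convolution2d(image, kernel, stride=1, padding=1)
print(out.shape)  # (5, 5)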
It might not be the most optimized solution either, but it is approximately ten times faster than the one proposed by @omotto, and it only uses basic numpy functions (such as reshape, expand_dims, tile, ...) and no for loops:
def gen_idx_conv1d(in_size, ker_size):
    """
    Generates a list of indices. These indices correspond to the indices
    of a 1D input tensor on which we would like to apply a 1D convolution.

    For instance, with a 1D input array of size 5 and a kernel of size 3, the
    1D convolution product will successively look at elements of indices [0,1,2],
    [1,2,3] and [2,3,4] in the input array. In this case, the function idx_conv1d(5,3)
    outputs the following array: array([0,1,2,1,2,3,2,3,4]).

    args:
        in_size: (type: int) size of the input 1d array.
        ker_size: (type: int) kernel size.

    return:
        idx_list: (type: np.array) list of the successive indices of the 1D input array
        accessed by the 1D convolution algorithm.

    example:
        >>> gen_idx_conv1d(in_size=5, ker_size=3)
        array([0, 1, 2, 1, 2, 3, 2, 3, 4])
    """
    f = lambda dim1, dim2, axis: np.reshape(np.tile(np.expand_dims(np.arange(dim1), axis), dim2), -1)
    out_size = in_size - ker_size + 1
    return f(ker_size, out_size, 0) + f(out_size, ker_size, 1)

def repeat_idx_2d(idx_list, nbof_rep, axis):
    """
    Repeats an array of indices (idx_list) a number of times (nbof_rep) "along" an axis
    (axis). This function helps to browse through a 2d array of size
    (len(idx_list), nbof_rep).

    args:
        idx_list: (type: np.array or list) a 1D array of indices.
        nbof_rep: (type: int) number of repetitions.
        axis: (type: int) axis "along" which the repetition will be applied.

    return:
        idx_list: (type: np.array) a 1D array of indices of size len(idx_list)*nbof_rep.

    example:
        >>> a = np.array([0, 1, 2])
        >>> repeat_idx_2d(a, 3, 0) # repeats array 'a' 3 times along 'axis' 0
        array([0, 0, 0, 1, 1, 1, 2, 2, 2])
        >>> repeat_idx_2d(a, 3, 1) # repeats array 'a' 3 times along 'axis' 1
        array([0, 1, 2, 0, 1, 2, 0, 1, 2])
        >>> b = np.reshape(np.arange(3*4), (3,4))
        >>> b[repeat_idx_2d(np.arange(3), 4, 0), repeat_idx_2d(np.arange(4), 3, 1)]
        array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
    """
    assert axis in [0, 1], "Axis should be equal to 0 or 1."
    tile_axis = (nbof_rep, 1) if axis else (1, nbof_rep)
    return np.reshape(np.tile(np.expand_dims(idx_list, 1), tile_axis), -1)
def conv2d(im, ker):
    """
    Performs a 'valid' 2D convolution on an image. The input image may be
    a 2D or a 3D array.

    The output image's first two dimensions will be reduced depending on the
    convolution size.

    The kernel may be a 2D or 3D array. If 2D, it will be applied on every
    channel of the input image. If 3D, its last dimension must match the
    image's.

    args:
        im: (type: np.array) image (2D or 3D).
        ker: (type: np.array) convolution kernel (2D or 3D).

    returns:
        im: (type: np.array) convolved image.

    example:
        >>> im = np.reshape(np.arange(10*10*3),(10,10,3))/(10*10*3) # 3D image
        >>> ker = np.array([[0,1,0],[-1,0,1],[0,-1,0]]) # 2D kernel
        >>> conv2d(im, ker) # 3D array of shape (8,8,3)
    """
    if len(im.shape) == 2:  # if the image is a 2D array, expand the last dimension
        im = np.expand_dims(im, -1)
    im_x, im_y, im_w = im.shape

    if len(ker.shape) == 2:  # if the kernel is 2D, tile it so the same kernel is applied to all channels
        ker = np.tile(np.expand_dims(ker, -1), [1, 1, im_w])
    assert ker.shape[-1] == im.shape[-1], "Kernel and image last dimension must match."
    ker_x = ker.shape[0]
    ker_y = ker.shape[1]

    # shape of the output image
    out_x = im_x - ker_x + 1
    out_y = im_y - ker_y + 1

    # reshapes the image to (out_x, ker_x, out_y, ker_y, im_w)
    idx_list_x = gen_idx_conv1d(im_x, ker_x)  # computes the indices of a 1D conv (cf. gen_idx_conv1d doc)
    idx_list_y = gen_idx_conv1d(im_y, ker_y)
    idx_reshaped_x = repeat_idx_2d(idx_list_x, len(idx_list_y), 0)  # repeats the previous indices for 2D (cf. repeat_idx_2d doc)
    idx_reshaped_y = repeat_idx_2d(idx_list_y, len(idx_list_x), 1)
    im_reshaped = np.reshape(im[idx_reshaped_x, idx_reshaped_y, :], [out_x, ker_x, out_y, ker_y, im_w])

    # reshapes the 2D kernel
    ker = np.reshape(ker, [1, ker_x, 1, ker_y, im_w])

    # applies the kernel to the image and reduces the dimension back to that of the original input image
    return np.squeeze(np.sum(im_reshaped * ker, axis=(1, 3)))
I tried to add a lot of comments to explain the method, but the global idea is to reshape the 3D input image to a 5D one of shape (output_image_height, kernel_height, output_image_width, kernel_width, output_image_channel) and then apply the kernel directly using basic array multiplication. Of course, this method uses more memory (during execution the size of the image is multiplied by kernel_height*kernel_width), but it is faster.
To do this reshape step, I 'over-used' the indexing capabilities of numpy arrays, especially the possibility of passing a numpy array as indices into a numpy array.
This method could also be used to re-code the 2D convolution product in PyTorch or TensorFlow using the basic math functions, but I have no doubt it would be slower than the existing nn.conv2d operator...
I really enjoyed coding this method using only basic numpy tools.
One of the most obvious approaches is to hard-code the kernel.
import numpy as np
from PIL import Image

img = img.convert('L')  # assumes img is an existing PIL image; convert to grayscale
a = np.array(img)
out = np.zeros([a.shape[0]-2, a.shape[1]-2], dtype='float')
out += a[:-2, :-2]
out += a[1:-1, :-2]
out += a[2:, :-2]
out += a[:-2, 1:-1]
out += a[1:-1,1:-1]
out += a[2:, 1:-1]
out += a[:-2, 2:]
out += a[1:-1, 2:]
out += a[2:, 2:]
out /= 9.0
out = out.astype('uint8')
img = Image.fromarray(out)
This example does a 3x3 box blur, completely unrolled. You can multiply by different values where the kernel differs, and divide by a different amount. But if you honestly want the quickest and dirtiest method, this is it. I think it beats Guillaume Mougeot's method by a factor of about 5, and his method beats the others by a factor of 10.
It may take a few extra steps if you're doing something like a Gaussian blur and need to multiply by varying coefficients.
Try to first round and then cast to uint8:
data = data.round().astype(np.uint8)
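The difference matters because astype(np.uint8) truncates toward zero, while round() picks the nearest integer, e.g.:

import numpy as np

x = np.array([1.9, 2.4, 254.6])
print(x.astype(np.uint8))          # [  1   2 254] (truncated)
print(x.round().astype(np.uint8))  # [  2   2 255] (rounded)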
I wrote this convolve_stride, which uses numpy.lib.stride_tricks.as_strided. Moreover, it supports both strides and dilation. It is also compatible with tensors of order > 2.
import numpy as np
from numpy.lib.stride_tricks import as_strided

def conv_view(X, F_s, dr, std):
    X_s = np.array(X.shape)
    F_s = np.array(F_s)
    dr = np.array(dr)
    Fd_s = (F_s - 1) * dr + 1  # dilated filter size
    if np.any(Fd_s > X_s):
        raise ValueError('(Dilated) filter size must be smaller than X')
    std = np.array(std)
    X_ss = np.array(X.strides)
    Xn_s = (X_s - Fd_s) // std + 1  # output size along each axis
    Xv_s = np.append(Xn_s, F_s)
    Xv_ss = np.tile(X_ss, 2) * np.append(std, dr)
    return as_strided(X, Xv_s, Xv_ss, writeable=False)

def convolve_stride(X, F, dr=None, std=None):
    if dr is None:
        dr = np.ones(X.ndim, dtype=int)
    if std is None:
        std = np.ones(X.ndim, dtype=int)
    if not (X.ndim == F.ndim == len(dr) == len(std)):
        raise ValueError('X.ndim, F.ndim, len(dr), len(std) must be the same')
    Xv = conv_view(X, F.shape, dr, std)
    return np.tensordot(Xv, F, axes=X.ndim)
%timeit -n 100 -r 10 convolve_stride(A, F)
#31.2 ms ± 1.31 ms per loop (mean ± std. dev. of 10 runs, 100 loops each)
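A usage sketch with assumed shapes for the A and F used in the timing above (hypothetical sizes; note that, like most answers here, this computes a cross-correlation, i.e. the kernel is not flipped):

A = np.random.rand(500, 500)
F = np.random.rand(5, 5)
out = convolve_stride(A, F)  # defaults: stride 1, no dilation
print(out.shape)  # (496, 496) for these shapes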
Super simple and fast convolution using only basic numpy:
import numpy as np

def conv2d(image, kernel):
    # apply kernel to image, return image of the same shape
    # assume both image and kernel are 2D arrays
    # kernel = np.flipud(np.fliplr(kernel))  # optionally flip the kernel
    k = kernel.shape[0]
    width = k // 2
    # place the image inside a frame to compensate for the kernel overlap
    a = framed(image, width)
    b = np.zeros(image.shape)  # fill the output array with zeros; do not use np.empty()
    # shift the image around each pixel, multiply by the corresponding kernel value and accumulate the results
    for p, dp, r, dr in [(i, i + image.shape[0], j, j + image.shape[1]) for i in range(k) for j in range(k)]:
        b += a[p:dp, r:dr] * kernel[p, r]
    # or just write two nested for loops if you prefer
    # np.clip(b, 0, 255, out=b)  # optionally clip values exceeding the limits
    return b

def framed(image, width):
    a = np.zeros((image.shape[0] + 2*width, image.shape[1] + 2*width))
    a[width:-width, width:-width] = image
    # alternatively fill the frame with ones or copy border pixels
    return a
Run it:
Image.fromarray(conv2d(image, kernel).astype('uint8'))
Instead of sliding the kernel along the image and computing the transformation pixel by pixel, create a series of shifted versions of the image corresponding to each element in the kernel, multiply each shifted version by the corresponding kernel value, and accumulate the results.
This is probably the fastest you can get using just basic numpy; the speed is already comparable to the C implementation of scipy's convolve2d and better than fftconvolve. The idea is similar to @Tatarize's. This example works only for one color component; for RGB, just repeat for each (or modify the algorithm accordingly).
Typically, "2D convolution" is a misnomer: ideally, under the hood, what is being done is a correlation of two matrices.
pad == 'same' returns the output with the same dimensions as the input.
It can also take asymmetric images. In order to perform correlation (convolution in deep-learning lingo) on a batch of 2d matrices, one can iterate over all the channels and calculate the correlation for each channel slice with the respective filter slice.
For example: if the image is (28,28,3) and the filter size is (5,5,3), then take each of the 3 slices from the image channels, perform the cross-correlation using the custom function below, and stack the resulting matrices in the respective dimension of the output; see the sketch after the function.
def get_cross_corr_2d(W, X, pad='valid'):
    if pad == 'same':
        pr = int((W.shape[0] - 1)/2)
        pc = int((W.shape[1] - 1)/2)
        conv_2d = np.zeros((X.shape[0], X.shape[1]))
        X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
        X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
        for r in range(conv_2d.shape[0]):
            for c in range(conv_2d.shape[1]):
                # elementwise product; np.inner here would combine the wrong axes
                conv_2d[r, c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
        return conv_2d
    else:
        pr = W.shape[0] - 1
        pc = W.shape[1] - 1
        conv_2d = np.zeros((X.shape[0] - W.shape[0] + 2*pr + 1,
                            X.shape[1] - W.shape[1] + 2*pc + 1))
        X_pad = np.zeros((X.shape[0] + 2*pr, X.shape[1] + 2*pc))
        X_pad[pr:pr+X.shape[0], pc:pc+X.shape[1]] = X
        for r in range(conv_2d.shape[0]):
            for c in range(conv_2d.shape[1]):
                conv_2d[r, c] = np.sum(np.multiply(W, X_pad[r:r+W.shape[0], c:c+W.shape[1]]))
        return conv_2d
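A sketch of the per-channel batching described above (hypothetical shapes; each channel slice is correlated with its filter slice and the results are stacked):

image = np.random.rand(28, 28, 3)
filt = np.random.rand(5, 5, 3)
out = np.stack([get_cross_corr_2d(filt[:, :, k], image[:, :, k], pad='same')
                for k in range(3)], axis=-1)
print(out.shape)  # (28, 28, 3)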
This code is incorrect:

for r in range(nr):
    data[r, :] = np.convolve(data[r, :], H_r, 'same')
for c in range(nc):
    data[:, c] = np.convolve(data[:, c], H_c, 'same')

See the Nussbaumer transformation from multidimensional convolution to one-dimensional.
