To solve a problem that can only be handled element by element, I need to combine NumPy's tuple indexing with an explicit slice.
import numpy

def f(shape, n):
    """
    :param shape: any shape of an array
    :type shape: tuple
    :param n: length of the new first axis
    :type n: int
    """
    x = numpy.zeros( (n,) + shape )
    for i in numpy.ndindex(shape):  # i = (k, l, ...)
        x[:, k, l, ...] = numpy.random.random(n)  # <-- what I want, with k, l, ... taken from i
x[:, *i] results in a SyntaxError, and x[:, i] is interpreted as numpy.array([ x[:, k] for k in i ]). Unfortunately it's not possible to put the n-dimension last (x = numpy.zeros(shape + (n,)) with x[i] = numpy.random.random(n)) because of how x is used later.
EDIT: Here is an example, as requested in the comments.
>>> n, shape = 2, (3,4)
>>> x = np.arange(24).reshape((n,) + shape)
>>> x
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> i = (1, 2)
>>> x[ ??? ]  # '???' expressed by i with any length is the question
array([ 6, 18])
If I understand the question correctly, you have a multi-dimensional numpy array and want to index it by combining a : slice with some number of other indices from a tuple i.
The index to the numpy array is a tuple, so you can basically just combine those 'partial' indices into one tuple and use that as the index. A naive approach might look like this:
x[ (:,) + i ] = numpy.random.random(n) # does not work
but this will give a syntax error. Instead of :, you have to use the slice builtin.
x[ (slice(None),) + i ] = numpy.random.random(n)
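Putting it together with the f from the question, a minimal sketch (using the numpy module name, as in the rest of this answer):
import numpy

def f(shape, n):
    x = numpy.zeros((n,) + shape)
    for i in numpy.ndindex(shape):
        # prepend a full slice for the first axis, then index with a single tuple
        x[(slice(None),) + i] = numpy.random.random(n)
    return x

f((3, 4), 2).shape  # (2, 3, 4)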
I have two 2-D arrays and am trying to convolve along axis 1. np.convolve doesn't provide an axis argument. The answer here convolves one 2-D array with a 1-D array using np.apply_along_axis, but it cannot be applied directly to my use case. The question here doesn't have an answer.
MWE is as follows.
import numpy as np
a = np.random.randint(0, 5, (2, 5))
"""
a=
array([[4, 2, 0, 4, 3],
[2, 2, 2, 3, 1]])
"""
b = np.random.randint(0, 5, (2, 2))
"""
b=
array([[4, 3],
[4, 0]])
"""
# What I want
c = np.convolve(a, b, axis=1) # axis is not supported as an argument
"""
c=
array([[16, 20, 6, 16, 24, 9],
[ 8, 8, 8, 12, 4, 0]])
"""
I know I can do it using np.fft.fft, but it seems like an unnecessary step to get a simple thing done. Is there a simple way to do this? Thanks.
Why not just do a list comprehension with zip?
>>> np.array([np.convolve(x, y) for x, y in zip(a, b)])
array([[16, 20, 6, 16, 24, 9],
[ 8, 8, 8, 12, 4, 0]])
Or with scipy.signal.convolve2d:
>>> from scipy.signal import convolve2d
>>> convolve2d(a, b)[[0, 2]]
array([[16, 20, 6, 16, 24, 9],
[ 8, 8, 8, 12, 4, 0]])
One possibility is to go to the Fourier domain manually and back:
n = np.max([a.shape, b.shape]) + 1
np.abs(np.fft.ifft(np.fft.fft(a, n=n) * np.fft.fft(b, n=n))).astype(int)
# array([[16, 20, 6, 16, 24, 9],
# [ 8, 8, 8, 12, 4, 0]])
Would it be considered too ugly to loop over the orthogonal dimension? That would not add much overhead unless the main dimension is very short. Creating the output array ahead of time ensures that no memory needs to be copied about.
def convolvesecond(a, b):
N1, L1 = a.shape
N2, L2 = b.shape
if N1 != N2:
raise ValueError("Not compatible")
c = np.zeros((N1, L1 + L2 - 1), dtype=a.dtype)
for n in range(N1):
c[n,:] = np.convolve(a[n,:], b[n,:], 'full')
return c
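For example, with the a and b from the MWE above this reproduces the desired c:
a = np.array([[4, 2, 0, 4, 3],
              [2, 2, 2, 3, 1]])
b = np.array([[4, 3],
              [4, 0]])
convolvesecond(a, b)
# array([[16, 20,  6, 16, 24,  9],
#        [ 8,  8,  8, 12,  4,  0]])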
For the generic case (convolving along the k-th axis of a pair of multidimensional arrays), I would resort to a pair of helper functions I always keep on hand to convert multidimensional problems to the basic 2d case:
def semiflatten(x, d=0):
'''SEMIFLATTEN - Permute and reshape an array to convenient matrix form
y, s = SEMIFLATTEN(x, d) permutes and reshapes the arbitrary array X so
that input dimension D (default: 0) becomes the second dimension of the
output, and all other dimensions (if any) are combined into the first
dimension of the output. The output is always 2-D, even if the input is
only 1-D.
If D<0, dimensions are counted from the end.
Return value S can be used to invert the operation using SEMIUNFLATTEN.
This is useful to facilitate looping over arrays with unknown shape.'''
x = np.array(x)
shp = x.shape
ndims = x.ndim
if d<0:
d = ndims + d
perm = list(range(ndims))
perm.pop(d)
perm.append(d)
y = np.transpose(x, perm)
# Y has the original D-th axis last, preceded by the other axes, in order
rest = np.array(shp, int)[perm[:-1]]
y = np.reshape(y, [np.prod(rest), y.shape[-1]])
return y, (d, rest)
def semiunflatten(y, s):
'''SEMIUNFLATTEN - Reverse the operation of SEMIFLATTEN
x = SEMIUNFLATTEN(y, s), where Y, S are as returned from SEMIFLATTEN,
reverses the reshaping and permutation.'''
d, rest = s
x = np.reshape(y, np.append(rest, y.shape[-1]))
perm = list(range(x.ndim))
perm.pop()
perm.insert(d, x.ndim-1)
x = np.transpose(x, perm)
return x
(Note that transpose returns a view and reshape avoids copying whenever it can, so these functions are typically very fast.)
With those, the generic form can be written as:
def convolvealong(a, b, axis=-1):
a, S1 = semiflatten(a, axis)
b, S2 = semiflatten(b, axis)
c = convolvesecond(a, b)
return semiunflatten(c, S1)
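As a quick sanity check (a sketch reusing the a and b from above): convolving along axis 1 matches convolvesecond, and the axis argument lets you convolve along any other axis, e.g. axis 0 of the transposed arrays:
np.array_equal(convolvealong(a, b, axis=1), convolvesecond(a, b))        # True
np.array_equal(convolvealong(a.T, b.T, axis=0), convolvesecond(a, b).T)  # True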
Suppose I have an n-D array a, with arbitrary n:
a = np.arange(3*4*5*...).reshape((3, 4, 5, ...))
...and I have an array of indices (in exactly this shape, or transposed):
ix = np.array([
[0,1,1,...],
[0,0,1,...],
[1,0,1,...],
])
What is the fastest way of getting an array of the elements of a at the indices given by ix?
(e.g. the fastest way of obtaining:
def take_at(a, ix):
res = np.empty((ix.shape[0],*a.shape[ix.shape[1]:]))
for i in range(ix.shape[0]):
res[i,...] = a[(*ix[i],)]
return res
...using (if possible) only vectorized NumPy functions/operations?)
Simple copy-pastable example to test:
a = np.arange(3*4*5).reshape((3,4,5))
ix = np.array([
[0,1,1],
[0,0,1],
[1,0,1],
])
def take_at(a, ix):
res = np.empty((ix.shape[0],*a.shape[ix.shape[1]:]), dtype=a.dtype)
for i in range(ix.shape[0]):
res[i,...] = a[(*ix[i],)]
return res
take_at(a, ix)
You need to pack ix into a tuple:
take_at(a, ix)
Out[]: array([ 6, 1, 21])
a[tuple(ix.T)]
Out[]: array([ 6, 1, 21])
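This also covers the partial-indexing case that take_at supports, i.e. when ix only indexes the leading axes; a quick sketch with a hypothetical 4-D array a4:
a4 = np.arange(3*4*5*6).reshape(3, 4, 5, 6)
np.array_equal(a4[tuple(ix.T)], take_at(a4, ix))  # True, both have shape (3, 6)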
You can do:
a[ix[:,0],ix[:,1],ix[:,2]]
Output:
array([ 6, 1, 21])
I was reviewing operators with numpy arrays and I found something I did not expect and that I do not know how to interpret.
The operation I am performing is an array A to the power of an array B, clearly with A and B having the same shape. The behaviour I am expecting is 'element-wise power': each element of A to the power of the corresponding elements in B. However, it seems that something else is going on.
import numpy as np
values = list(range(1, 10))
array_1d = np.array(values)
print(array_1d ** (array_1d * 5))
So I'm raising [1, 2, 3, 4, 5, 6, 7, 8, 9] to the power of [5, 10, 15, 20, ...].
The expected outcome (in my head) is the equivalent of [v ** (v*5) for v in list(range(1,10))], that is:
[1,
1024,
14348907,
1099511627776,
298023223876953125,
221073919720733357899776,
378818692265664781682717625943,
1329227995784915872903807060280344576,
8727963568087712425891397479476727340041449].
However the output is:
array([ 1, 1024, 14348907, 0, 167814181, 1073741824, 613813847, 0, -1054898967], dtype=int32)
Does someone know the reason for this result? What's really happening here?
Thank you!
Note the dtype=int32: you're overflowing 32-bit integers and wrapping around:
1099511627776 % (2**32) == 0
298023223876953125 % (2**32) == 167814181
...
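A quick sketch to confirm the wrap-around (Python ints never overflow, so they serve as the reference; this assumes the two's-complement wrapping shown above):
v = np.arange(1, 10, dtype=np.int32)
wrapped = v ** (5 * v)                        # overflows and wraps modulo 2**32
exact = [x ** (5 * x) for x in range(1, 10)]  # plain Python ints, no overflow
all((e - int(w)) % 2**32 == 0 for e, w in zip(exact, wrapped))  # True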
The problem is the data type used; try:
v = np.arange(1, 10, dtype='int32')
print(v ** (5*v))
v = np.arange(1, 10, dtype='int64')
print(v ** (5*v))
v = np.arange(1, 10, dtype='float')
print(v ** (5*v))
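Note that int64 still overflows for the larger entries (6**30 and beyond exceed 2**63) and float only gives an approximation; if the exact values are needed, a sketch using an object array of Python ints:
v = np.arange(1, 10, dtype=object)  # Python ints: arbitrary precision, but slower
print(v ** (5*v))                   # exact results, no overflow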
This is very simple in Matlab, but I can't get it in Python. How do I get the following:
x=np.array([1,2,3])
y=np.array([4,5,6,7])
z=x.T*y
z=
[[4,5,6,7],
[8,10,12,14],
[12,15,18,21]]
As in:
          y:  [4] [5] [6] [7]
x:  [1]
    [2]
    [3]
In scientific Python that would be an outer product: np.outer(x, y).
See http://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html:
>>> import numpy
>>> x=numpy.array([1,2,3])
>>> y=numpy.array([4,5,6,7])
>>> numpy.outer(x,y)
array([[ 4, 5, 6, 7],
[ 8, 10, 12, 14],
[12, 15, 18, 21]])
In MATLAB, size(x) is (1,3). So x' is (3,1). Multiply that by y, which is (1,4), produces (3,4) shape.
In numpy, x.shape is (3,). x.T is the same. So to get the same outer product, you need to expand the dimensions of x and y. One way is with reshape.
z = x.reshape(3,1)* y.reshape(1,4)
numpy also lets you do this with newaxis indexing (None also works). Broadcasting automatically adds leading dimensions where needed, so this also does the job:
z = x[:,np.newaxis]*y
np.outer does exactly this (with a minor embellishment): a.ravel()[:, newaxis]*b.ravel()[newaxis,:].
There's another tool in numpy
z = np.einsum('i,j->ij',x,y)
It is based on an indexing notation that is popular in physics, and is especially useful in writing more complicated inner (dot) products.
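A quick sketch checking that these forms agree (assuming import numpy as np):
x = np.array([1, 2, 3])
y = np.array([4, 5, 6, 7])
z1 = np.outer(x, y)
z2 = x[:, np.newaxis] * y
z3 = np.einsum('i,j->ij', x, y)
np.array_equal(z1, z2) and np.array_equal(z2, z3)  # True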
Using list comprehension:
x = [1, 2, 3]
y = [4, 5, 6, 7]
z = [[i * j for j in y] for i in x]
I'm trying to slice and iterate over a multidimensional array at the same time. I have a solution that's functional, but it's kind of ugly, and I bet there's a slick way to do the iteration and slicing that I don't know about. Here's the code:
import numpy as np
x = np.arange(64).reshape(4,4,4)
y = [x[i:i+2,j:j+2,k:k+2] for i in range(0,4,2)
for j in range(0,4,2)
for k in range(0,4,2)]
y = np.array(y)
z = np.array([np.min(u) for u in y]).reshape(y.shape[1:])
Your last reshape doesn't work if y is still a plain list, since a list has no shape attribute. Without it you get:
>>> x = np.arange(64).reshape(4,4,4)
>>> y = [x[i:i+2,j:j+2,k:k+2] for i in range(0,4,2)
... for j in range(0,4,2)
... for k in range(0,4,2)]
>>> z = np.array([np.min(u) for u in y])
>>> z
array([ 0, 2, 8, 10, 32, 34, 40, 42])
But despite that, what you probably want is reshaping your array to 6 dimensions, which gets you the same result as above:
>>> xx = x.reshape(2, 2, 2, 2, 2, 2)
>>> zz = xx.min(axis=-1).min(axis=-2).min(axis=-3)
>>> zz
array([[[ 0, 2],
[ 8, 10]],
[[32, 34],
[40, 42]]])
>>> zz.ravel()
array([ 0, 2, 8, 10, 32, 34, 40, 42])
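The same trick generalizes to other block sizes with a single reshape and a tuple axis argument; a sketch assuming the block size divides every dimension evenly:
def block_min(x, block=2):
    n0, n1, n2 = (s // block for s in x.shape)
    xx = x.reshape(n0, block, n1, block, n2, block)
    return xx.min(axis=(1, 3, 5))

block_min(np.arange(64).reshape(4, 4, 4)).ravel()
# array([ 0,  2,  8, 10, 32, 34, 40, 42])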
It's hard to tell exactly what you want from that last step, but you can use stride_tricks to get a "slicker" way. It's rather tricky.
import numpy.lib.stride_tricks
# This returns a view with custom strides; x2[i,j,k] matches y[4*i+2*j+k]
x2 = numpy.lib.stride_tricks.as_strided(
    x, shape=(2,2,2,2,2,2),
    strides=(numpy.array([32,8,2,16,4,1])*x.dtype.itemsize))
z2 = x2.min(axis=-1).min(axis=-2).min(axis=-3)
Still, I can't say this is much more readable. (Or efficient, as each min call will make temporaries.)
Note: my answer differs from Jaime's because I tried to match the elements of your y. You can tell the difference if you replace the min with max.
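For reference, a quick sketch checking that the strided view lines up with the y from the question (flattening the first three axes recovers the list of blocks; the reshape copies here because the view is not contiguous):
numpy.array_equal(x2.reshape(8, 2, 2, 2), y)  # True, given the x and y defined in the question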