I have been trying to implement some modifications to speed up this pseudocode:
>>> A=np.array([1,1,1,2,2,2,3,3,3])
>>> B=np.array([np.power(A,n) for n in [3,4,5]])
>>> B
array([[  1,   1,   1,   8,   8,   8,  27,  27,  27],
       [  1,   1,   1,  16,  16,  16,  81,  81,  81],
       [  1,   1,   1,  32,  32,  32, 243, 243, 243]])
Where elements of A are often repeated 10-20 times and the shape of B needs to be retained because it is multiplied by another array of the same shape later.
My first idea was to use the following code:
uA = np.unique(A)
uB = np.array([np.power(uA, n) for n in [3, 4, 5]])
B = []
for num in range(uB.shape[0]):
    Temp = np.copy(A)
    for k, v in zip(uA, uB[num]):
        Temp[A == k] = v
    B.append(Temp)
B = np.array(B)
# Also, is there a better way to create the numpy array B?
This seems fairly terrible and there is likely a better way. Any ideas on how to speed this up would be much appreciated.
Thank you for your time.
Here is an update. I realized that my function was poorly coded. Thank you to everyone for the suggestions. I will try to phrase my questions better in the future so that they show everything required.
Normal='''
import numpy as np
import scipy.special
def func(value, n):
    if n == 0: return 1
    else: return np.power(value, n)/scipy.special.factorial(n, exact=False) + func(value, n-1)
A = np.random.randint(10, size=250)
A = np.unique(A)
B = np.array([func(A, n) for n in [6, 8, 10]])
'''
Me='''
import numpy as np
import scipy.special
def func(value, n):
    if n == 0: return 1
    else: return np.power(value, n)/scipy.special.factorial(n, exact=False) + func(value, n-1)
A = np.random.randint(10, size=250)
uA = np.unique(A)
uB = np.array([func(uA, n) for n in [6, 8, 10]])
B = []
for num in range(uB.shape[0]):
    Temp = np.copy(A)
    for k, v in zip(uA, uB[num]):
        Temp[A == k] = v
    B.append(Temp)
B = np.array(B)
'''
Alex='''
import numpy as np
import scipy.special
A = np.random.randint(10, size=250)
fact = scipy.special.factorial(np.arange(11), exact=False).reshape(-1, 1)
power = np.power(A, np.arange(11).reshape(-1, 1))
value = power/fact
six = np.sum(value[:7], axis=0)           # powers 0..6, matching func(A, 6)
eight = six + np.sum(value[7:9], axis=0)  # powers 0..8
ten = eight + np.sum(value[9:], axis=0)   # powers 0..10
B = np.vstack((six, eight, ten))
'''
Alex2='''
import numpy as np
import scipy.special
def find_count(the_list):
    count = list(the_list).count
    # iterate in the same (sorted) order that np.unique uses
    result = [count(item) for item in np.unique(the_list)]
    return result
A = np.random.randint(10, size=250)
A_unique = np.unique(A)
A_counts = np.array(find_count(A))
fact = scipy.special.factorial(np.arange(11), exact=False).reshape(-1, 1)
power = np.power(A_unique, np.arange(11).reshape(-1, 1))
value = power/fact
six = np.sum(value[:7], axis=0)           # powers 0..6, matching func(A, 6)
eight = six + np.sum(value[7:9], axis=0)  # powers 0..8
ten = eight + np.sum(value[9:], axis=0)   # powers 0..10
B_nodup = np.vstack((six, eight, ten))
B_list = [np.transpose(np.tile(B_nodup[:, i], (A_counts[i], 1))) for i in range(A_unique.shape[0])]
B = np.hstack(B_list)
'''
import timeit
print(timeit.timeit(Normal, number=10000))
print(timeit.timeit(Me, number=10000))
print(timeit.timeit(Alex, number=10000))
print(timeit.timeit(Alex2, number=10000))
Normal: 10.7544178963
Me: 23.2039361
Alex: 4.85648703575
Alex2: 4.18024992943
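As a side note: np.unique can also return the inverse mapping, which replaces the masking loop of the Me variant entirely. A minimal sketch on the original power example, assuming a reasonably recent NumPy (return_inverse):

import numpy as np

A = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
uA, inv = np.unique(A, return_inverse=True)            # inv[i] is the index of A[i] in uA
uB = np.power(uA, np.array([3, 4, 5]).reshape(-1, 1))  # compute only on unique values
B = uB[:, inv]                                         # expand back to the full shape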
You can broadcast np.power across A if you change its shape to that of a column vector.
>>> np.power(A.reshape(-1,1), [3,4,5]).T
array([[  1,   1,   1,   8,   8,   8,  27,  27,  27],
       [  1,   1,   1,  16,  16,  16,  81,  81,  81],
       [  1,   1,   1,  32,  32,  32, 243, 243, 243]])
Use a combination of numpy.tile() and numpy.hstack(), as follows:
A = np.array([1,2,3])
A_counts = np.array([3,3,3])
A_powers = np.array([[3],[4],[5]])
B_nodup = np.power(A, A_powers)
B_list = [ np.transpose( np.tile( B_nodup[:,i], (A_counts[i], 1) ) ) for i in range(A.shape[0]) ]
B = np.hstack( B_list )
The transpose and stack may be reversed; this may be faster:
B_list = [ np.tile( B_nodup[:,i], (A_counts[i], 1) ) for i in range(A.shape[0]) ]
B = np.transpose( np.vstack( B_list ) )
This is likely only worth doing if the function you are calculating is quite expensive, or it is duplicated many, many times (more than 10); doing a tile and stack to prevent calculating the power function an extra 10 times is likely not worth it. Please benchmark and let us know.
EDIT: Or, you could just use broadcasting to get rid of the list comprehension:
>>> A=np.array([1,1,1,2,2,2,3,3,3])
>>> B = np.power(A,[[3],[4],[5]])
>>> B
array([[  1,   1,   1,   8,   8,   8,  27,  27,  27],
       [  1,   1,   1,  16,  16,  16,  81,  81,  81],
       [  1,   1,   1,  32,  32,  32, 243, 243, 243]])
This is probably pretty fast, but doesn't actually do what you asked.
My go at it with 200k iterations; the first method is mine.
import numpy as np
import time

N = 200000

start = time.time()
for j in range(N):
    x = np.array([1,1,1,2,2,2,3,3,3])
    powers = np.array([3,4,5])
    result = np.zeros((powers.size, x.size)).astype(np.int32)
    for i in range(powers.size):
        result[i,:] = x**powers[i]
print(time.time()-start, "seconds")

start = time.time()
for j in range(N):
    A = np.array([1,1,1,2,2,2,3,3,3])
    B = np.power(A, [[3],[4],[5]])
print(time.time()-start, "seconds")

start = time.time()
for j in range(N):
    np.power(A.reshape(-1,1), [3,4,5]).T
print(time.time()-start, "seconds")

start = time.time()
for j in range(N):
    A = np.array([1,1,1,2,2,2,3,3,3])
    B = np.array([np.power(A, n) for n in [3,4,5]])
print(time.time()-start, "seconds")
Produces
8.88000011444 seconds
9.25099992752 seconds
3.95399999619 seconds
7.43799996376 seconds
larsmans' method is clearly the fastest.
(P.S. How do you link to an answer or user here without an explicit URL? @larsmans doesn't work.)
Related
I wish to create a variable array of numbers in numpy while skipping a chunk of numbers. For instance, if I have the variables:
m = 5
k = 3
num = 50
I want to create a linearly spaced numpy array starting at num and ending at num - k, then skip k numbers and continue the array generation. Then repeat this process m times. For example, the above would yield:
np.array([50, 49, 48, 47, 44, 43, 42, 41, 38, 37, 36, 35, 32, 31, 30, 29, 26, 25, 24, 23])
How can I accomplish this via Numpy?
You can try:
import numpy as np
m = 5
k = 3
num = 50
np.hstack([np.arange(num - 2*i*k, num - (2*i+1)*k - 1, -1) for i in range(m)])
It gives:
array([50, 49, 48, 47, 44, 43, 42, 41, 38, 37, 36, 35, 32, 31, 30, 29, 26,
25, 24, 23])
Edit:
@JanChristophTerasa posted an answer (now deleted) that avoided Python loops by masking some elements of an array obtained using np.arange(). Here is a solution inspired by that idea. It works much faster than the above one:
import numpy as np
m = 5
k = 3
num = 50
x = np.arange(num, num - 2*k*m, -1).reshape(-1, 2*k)
x[:, :k+1].ravel()
We can use a mask and np.tile:
def mask_and_tile(m=5, k=3, num=50):
    a = np.arange(num, num - 2 * m * k, -1)  # create numbers
    mask = np.ones(k * 2, dtype=bool)        # create mask
    mask[k+1:] = False                       # set appropriate elements to False
    mask = np.tile(mask, m)                  # repeat mask m times
    result = a[mask]                         # mask our numbers
    return result
Or we can use a mask and just toggle the appropriate element:
def mask(m=5, k=3, num=50):
    a = np.arange(num, num - 2 * m * k, -1)  # create numbers
    mask = np.ones_like(a, dtype=bool).reshape(-1, k)
    mask[1::2] = False
    mask[1::2, 0] = True
    result = a[mask.flatten()]
    return result
This will work fine:
import numpy as np
m = 5
k = 3
num = 50
h = 0
x = np.array([])
for i in range(m):
    x = np.append(x, range(num-h, num-h-k-1, -1))
    h += 2*k
print(x)
Output
[50. 49. 48. 47. 44. 43. 42. 41. 38. 37. 36. 35. 32. 31. 30. 29. 26. 25.
24. 23.]
One way of doing this is making a 2D grid and calculating each number based on its position in the grid, then flattening it to a 1D array.
import numpy as np
num=50
m=5
k=3
# coordinates in a grid of width k+1 and height m
y, x = np.mgrid[:m, :k+1]
# a=[[50-0, 50-1, 50-2, 50-3], [50-0-2*3*1, 50-1-2*3*1, ...], [50-0-2*3*2...]...]
a = num - x - 2 * k * y
print(a.ravel())
I have a numpy array:
A = np.array([8, 2, 33, 4, 3, 6])
What I want is to create another array B where each element is the max of two consecutive elements of A, so I get:
B = np.array([8, 33, 33, 4, 6])
Any ideas on how to implement?
Any ideas on how to implement this for more than 2 elements? (same thing, but over n consecutive elements)
Edit:
The answers gave me a way to solve this question, but for the n-size window case, is there a more efficient way that does not require loops?
Edit2:
Turns out that the question is equivalent to asking how to perform 1d max-pooling of a list with a window of size n.
Does anyone know how to implement this efficiently?
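For reference, newer NumPy (1.20+) ships sliding_window_view, which makes the n-window case a one-liner; a minimal sketch (note this API postdates many of the answers below):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

A = np.array([8, 2, 33, 4, 3, 6])
sliding_window_view(A, 3).max(axis=1)  # array([33, 33, 33,  6])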
One solution to the pairwise problem is using the np.maximum function and array slicing:
B = np.maximum(A[:-1], A[1:])
A loop-free solution is to use max on the windows created by skimage.util.view_as_windows:
list(map(max, view_as_windows(A, (2,))))
[8, 33, 33, 4, 6]
Copy/pastable example:
import numpy as np
from skimage.util import view_as_windows
A = np.array([8, 2, 33, 4, 3, 6])
list(map(max, view_as_windows(A, (2,))))
Here is an approach specifically tailored for larger windows. It is O(1) in window size and O(n) in data size.
I've done a pure numpy and a pythran implementation.
How do we achieve O(1) in window size? We use a "sawtooth" trick: if w is the window width, we group the data into blocks of w, and for each block we compute the cumulative maximum from left to right and from right to left. The elements of any in-between window distribute over two blocks, and the maxima of the two partial intersections are among the cumulative maxima we have computed earlier. So we need a total of 3 comparisons per data point.
Benchmarks via benchit (thanks @Divakar) for w=100; my functions are pp (numpy) and winmax (pythran). (Plot not reproduced here.)
For small window size w=5 the picture is more even. Interestingly, pythran still has a huge edge even for very small sizes. They must be doing something right to minimize call overhead.
Python code:
cummax = np.maximum.accumulate
def pp(a, w):
    N = a.size // w
    if a.size - w + 1 > N * w:
        out = np.empty(a.size - w + 1, a.dtype)
        out[:-1] = cummax(a[w*N-1::-1].reshape(N, w), axis=1).ravel()[:w-a.size-1:-1]
        out[-1] = a[w*N:].max()
    else:
        out = cummax(a[w*N-1::-1].reshape(N, w), axis=1).ravel()[:w-a.size-2:-1]
    out[1:N*w-w+1] = np.maximum(out[1:N*w-w+1],
                                cummax(a[w:w*N].reshape(N-1, w), axis=1).ravel())
    out[N*w-w+1:] = np.maximum(out[N*w-w+1:], cummax(a[N*w:]))
    return out
Pythran version; compile with pythran -O3 <filename.py>; this creates a compiled module which you can import:
import numpy as np

# pythran export winmax(float[:],int)
# pythran export winmax(int[:],int)

def winmax(data, winsz):
    N = data.size // winsz
    if N < 1:
        raise ValueError
    out = np.empty(data.size - winsz + 1, data.dtype)
    nxt = winsz
    for j in range(winsz, data.size):
        if j == nxt:
            nxt += winsz
            out[j+1-winsz] = data[j]
        else:
            out[j+1-winsz] = out[j-winsz] if out[j-winsz] > data[j] else data[j]
    running = data[-winsz:N*winsz].max()
    nxt -= winsz << (nxt > data.size)
    for j in range(data.size - winsz, 0, -1):
        if j == nxt:
            nxt -= winsz
            running = data[j-1]
        else:
            running = data[j] if data[j] > running else running
        out[j] = out[j] if out[j] > running else running
    out[0] = data[0] if data[0] > running else running
    return out
In this Q&A, we are basically asking for sliding max values. This has been explored before: Max in a sliding window in NumPy array. Since we are looking to be efficient, we can look further. One option is numba, and here are the two final variants I ended up with, which leverage the parallel directive to boost performance over a non-parallel version:
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def numba1(a, W):
    L = len(a) - W + 1
    out = np.empty(L, dtype=a.dtype)
    v = np.iinfo(a.dtype).min
    for i in prange(L):
        max1 = v
        for j in range(W):
            cur = a[i + j]
            if cur > max1:
                max1 = cur
        out[i] = max1
    return out
@njit(parallel=True)
def numba2(a, W):
    L = len(a) - W + 1
    out = np.empty(L, dtype=a.dtype)
    for i in prange(L):
        out[i] = a[i]  # seed with the first window element (out starts uninitialized)
        for j in range(1, W):
            cur = a[i + j]
            if cur > out[i]:
                out[i] = cur
    return out
From the earlier linked Q&A, the equivalent SciPy version would be -
from scipy.ndimage import maximum_filter1d

def scipy_max_filter1d(a, W):
    L = len(a) - W + 1
    hW = W // 2  # Half window size
    return maximum_filter1d(a, size=W)[hW:hW+L]
Benchmarking
Other posted working approaches for a generic window arg:
from skimage.util import view_as_windows

def rolling(a, window):
    shape = (a.size - window + 1, window)
    strides = (a.itemsize, a.itemsize)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

# @mathfux's soln
def npmax_strided(a, n):
    return np.max(rolling(a, n), axis=1)

# @Nicolas Gervais's soln
def mapmax_strided(a, W):
    return list(map(max, view_as_windows(a, W)))

cummax = np.maximum.accumulate
def pp(a, w):
    N = a.size // w
    if a.size - w + 1 > N * w:
        out = np.empty(a.size - w + 1, a.dtype)
        out[:-1] = cummax(a[w*N-1::-1].reshape(N, w), axis=1).ravel()[:w-a.size-1:-1]
        out[-1] = a[w*N:].max()
    else:
        out = cummax(a[w*N-1::-1].reshape(N, w), axis=1).ravel()[:w-a.size-2:-1]
    out[1:N*w-w+1] = np.maximum(out[1:N*w-w+1],
                                cummax(a[w:w*N].reshape(N-1, w), axis=1).ravel())
    out[N*w-w+1:] = np.maximum(out[N*w-w+1:], cummax(a[N*w:]))
    return out
Using the benchit package (a few benchmarking tools packaged together; disclaimer: I am its author) to benchmark the proposed solutions.
import benchit
funcs = [mapmax_strided, npmax_strided, numba1, numba2, scipy_max_filter1d, pp]
in_ = {(n,W):(np.random.randint(0,100,n),W) for n in 10**np.arange(2,6) for W in [2, 10, 20, 50, 100]}
t = benchit.timings(funcs, in_, multivar=True, input_name=['Array-length', 'Window-length'])
t.plot(logx=True, sp_ncols=1, save='timings.png')
So, the numba ones are great for window sizes lower than 10; around 10 there is no clear winner, and on larger window sizes pp wins, with the SciPy one in second place.
In case of n consecutive items, the extended solution reduces np.maximum over the n shifted slices (np.maximum itself is binary, so use its reduce method):
np.maximum.reduce([A[i:len(A)-n+i+1] for i in range(n)])
In order to avoid that, you can use stride tricks and convert A to an array of n-length windows:
def rolling(a, window):
    shape = (a.size - window + 1, window)
    strides = (a.itemsize, a.itemsize)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
For example:
>>> rolling(A, 3)
array([[ 8,  2, 33],
       [ 2, 33,  4],
       [33,  4,  3],
       [ 4,  3,  6]])
After it's done you can finish it off with np.max(rolling(A, n), axis=1).
Though, despite its elegance, neither this solution nor the first one is efficient, because we repeatedly apply maximum to adjacent blocks that differ by only two items.
A recursive solution, for all n:
import numpy as np
import sys

def recursive(a: np.ndarray, n: int, b=None, level=2):
    if n <= 0 or n > len(a):
        raise ValueError(f'len(a):{len(a)} n:{n}')
    if n == 1:
        return a
    if len(a) == n:
        return np.max(a)
    b = np.maximum(a[:-1], a[1:]) if b is None else np.maximum(a[level - 1:], b)
    if n == level:
        return b
    return recursive(a, n, b[:-1], level + 1)

test_data = np.array([8, 2, 33, 4, 3, 6])
for test_n in range(1, len(test_data) + 2):
    try:
        print(recursive(test_data, n=test_n))
    except ValueError as e:
        sys.stderr.write(str(e))
output
[ 8 2 33 4 3 6]
[ 8 33 33 4 6]
[33 33 33 6]
[33 33 33]
[33 33]
33
len(a):6 n:7
About the recursive function:
Observe the following data, and you will see how to write the recursive function.
"""
np.array([8, 2, 33, 4, 3, 6])
n=2: (8, 2), (2, 33), (33, 4), (4, 3), (3, 6) => [8, 33, 33, 4, 6] => B' = [8, 33, 33, 4]
n=3: (8, 2, 33), (2, 33, 4), (33, 4, 3), (4, 3, 6) => B' [33, 4, 3, 6] => np.maximum([8, 33, 33, 4], [33, 4, 3, 6]) => 33, 33, 33, 6
...
"""
Using Pandas:
import pandas as pd

A = pd.Series([8, 2, 33, 4, 3, 6])
res = pd.concat([A, A.shift(-1)], axis=1).max(axis=1, skipna=False).dropna()
>>> res
0     8.0
1    33.0
2    33.0
3     4.0
4     6.0
dtype: float64
Or using numpy:
np.vstack([A[1:],A[:-1]]).max(axis=0)
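pandas also has a built-in rolling window that covers the same pairwise case; a minimal sketch on the same Series:

import pandas as pd

A = pd.Series([8, 2, 33, 4, 3, 6])
A.rolling(2).max().dropna().to_numpy()  # array([ 8., 33., 33.,  4.,  6.])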
Edit: it's an image, so the suggested solution (How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function) isn't really working for me.
I have the following MATLAB code:
fun = @(block_struct) ...
      std2(block_struct.data) * ones(size(block_struct.data));
B = blockproc(im2double(Icorrected), [4 4], fun);
I want to remake my code, but this time in Python. I have installed scikit-image and I'm trying to work with it like this:
b = np.std(a, axis = 2)
The problem, of course, is that I'm not applying the std to a number of blocks, as above.
How can I do something like this? Start a loop and call the function for each X*X block? Then I wouldn't keep the size as it was.
Is there another, more efficient way?
If there is no overlap in the windows, you can reshape the data to suit your needs:
Find the mean of 3x3 windows of a 9x9 array.
>>> import numpy as np
>>> a = np.arange(81).reshape(9, 9)
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8],
       [ 9, 10, 11, 12, 13, 14, 15, 16, 17],
       [18, 19, 20, 21, 22, 23, 24, 25, 26],
       [27, 28, 29, 30, 31, 32, 33, 34, 35],
       [36, 37, 38, 39, 40, 41, 42, 43, 44],
       [45, 46, 47, 48, 49, 50, 51, 52, 53],
       [54, 55, 56, 57, 58, 59, 60, 61, 62],
       [63, 64, 65, 66, 67, 68, 69, 70, 71],
       [72, 73, 74, 75, 76, 77, 78, 79, 80]])
Find the new shape
>>> window_size = (3, 3)
>>> tuple(np.array(a.shape) // window_size) + window_size
(3, 3, 3, 3)
>>> b = a.reshape(3,3,3,3)
Find the mean along axes 1 and 3.
>>> b.mean(axis=(1, 3))
array([[ 10.,  13.,  16.],
       [ 37.,  40.,  43.],
       [ 64.,  67.,  70.]])
2x2 windows of a 4x4 array:
>>> a = np.arange(16).reshape((4,4))
>>> window_size = (2,2)
>>> tuple(np.array(a.shape) // window_size) + window_size
(2, 2, 2, 2)
>>> b = a.reshape(2,2,2,2)
>>> b.mean(axis=(1, 3))
array([[  2.5,   4.5],
       [ 10.5,  12.5]])
It won't work if the window size doesn't divide the array size evenly. In that case you need some overlap in the windows, and if you do want overlap, numpy.lib.stride_tricks.as_strided is the way to go - a generic N-D function can be found at Efficient Overlapping Windows with Numpy. A hand-rolled 2D sketch follows below.
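A minimal as_strided sketch for overlapping 2D windows (an illustration only, not the linked generic function):

import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(16).reshape(4, 4)
win = (2, 2)
shape = (a.shape[0] - win[0] + 1, a.shape[1] - win[1] + 1) + win
windows = as_strided(a, shape=shape, strides=a.strides * 2)  # a view, no copy
windows.mean(axis=(2, 3))  # mean of every overlapping 2x2 window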
Another option for 2D arrays is sklearn.feature_extraction.image.extract_patches_2d, and for ndarrays sklearn.feature_extraction.image.extract_patches. Both manipulate the array's strides to produce the patches/windows; a usage sketch follows.
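A small usage sketch of extract_patches_2d (the exact module layout may vary across scikit-learn versions):

import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

a = np.arange(16).reshape(4, 4).astype(float)
patches = extract_patches_2d(a, (2, 2))
print(patches.shape)  # (9, 2, 2): one patch per valid (overlapping) window position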
I did the following:
import numpy as np
from skimage import io

io.use_plugin('pil', 'imread')
a = io.imread(r'C:\Users\Dimitrios\Desktop\polimesa\arizona.jpg')
B = np.zeros((len(a)//2 + 1, len(a[0])//2 + 1))
x = []
for i in range(0, len(a), 2):
    for j in range(0, len(a[0]), 2):
        x.append(a[i][j])
        if i+1 < len(a):
            x.append(a[i+1][j])
        if j+1 < len(a[0]):
            x.append(a[i][j+1])
        if i+1 < len(a) and j+1 < len(a[0]):
            x.append(a[i+1][j+1])
        B[i//2][j//2] = np.std(x)
        x[:] = []
and I think it's correct: iterating over the image in steps of 2, taking each neighbouring pixel, adding them to a list and calculating the std.
Edit: later edited for 4x4 blocks.
We can implement blockproc() in Python the following way:
def blockproc(im, block_sz, func):
    h, w = im.shape
    m, n = block_sz
    for x in range(0, h, m):
        for y in range(0, w, n):
            block = im[x:x+m, y:y+n]
            block[:,:] = func(block)
    return im
Now, let's apply it to implement contrast enhancement with local histogram equalization, with the low-contrast moon image (of size 512x512) as input and choosing 64x64 blocks:
from skimage import data, exposure
img = data.moon()
img = img / img.max()
m, n = 64, 64
img_eq = blockproc(img.copy(), (m, n), exposure.equalize_hist)
Display the input and output images:
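A minimal sketch of that display step, assuming matplotlib (the figures themselves are not reproduced here):

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(img, cmap='gray')
axes[0].set_title('input')
axes[1].imshow(img_eq, cmap='gray')
axes[1].set_title('locally equalized')
plt.show()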
Note that the function modifies the image in place, hence a copy of the input image is passed.
How do I remove every nth element in an array?
import numpy as np
x = np.array([0,10,27,35,44,32,56,35,87,22,47,17])
n = 3 # remove every 3rd element
...something like the opposite of x[0::n]? I've tried this, but of course it doesn't work:
for i in np.arange(0, len(x), n):
    x = np.delete(x, i)
You're close... Pass the entire arange as the indices to delete, instead of attempting to delete each element in turn, e.g.:
import numpy as np
x = np.array([0,10,27,35,44,32,56,35,87,22,47,17])
x = np.delete(x, np.arange(0, x.size, 3))
# [10 27 44 32 35 87 47 17]
Here is another way, using reshaping, if the length of your array is a multiple of n:
import numpy as np
x = np.array([0,10,27,35,44,32,56,35,87,22,47,17])
x = x.reshape(-1,3)[:,1:].flatten()
# [10 27 44 32 35 87 47 17]
On my computer it runs almost twice as fast as the solution with np.delete (between 1.8x and 1.9x, to be honest).
You can also easily perform fancy operations, like m deletions each n values etc.; a small sketch follows.
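For example, a minimal sketch of one such fancy operation, dropping the first m elements of every block of n (hypothetical values, and assuming x.size is a multiple of n):

import numpy as np

x = np.arange(12)
m, n = 2, 4
x.reshape(-1, n)[:, m:].ravel()  # array([ 2,  3,  6,  7, 10, 11])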
Here's a super fast version for 2D arrays: Remove every m-th row and n-th column from a 2D array (assuming the shape of the array is a multiple of (n, m)):
array2d = np.arange(60).reshape(6, 10)
m, n = (3, 5)
remove = lambda x, q: x.reshape(x.shape[0], -1, q)[..., 1:].reshape(x.shape[0], -1).T
remove(remove(array2d, n), m)
returns:
array([[11, 12, 13, 14, 16, 17, 18, 19],
       [21, 22, 23, 24, 26, 27, 28, 29],
       [41, 42, 43, 44, 46, 47, 48, 49],
       [51, 52, 53, 54, 56, 57, 58, 59]])
To generalize for any shape, use padding or reduce the input array, depending on your situation; see the sketch below.
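A minimal trimming sketch (one assumed way to reduce the input; padding would work analogously):

# Trim array2d so each dimension is a multiple of the block size before removing.
h, w = array2d.shape
trimmed = array2d[:h - h % m, :w - w % n]
res = remove(remove(trimmed, n), m)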
Speed comparison:
from time import time

start = time()
for _ in range(100000):
    res = remove(remove(array2d, n), m)
print('remove:', time() - start)

start = time()
for _ in range(100000):
    tmp = np.delete(array2d, np.arange(0, array2d.shape[0], m), axis=0)
    res = np.delete(tmp, np.arange(0, array2d.shape[1], n), axis=1)
print('delete:', time() - start)

This prints:
remove: 0.3835930824279785
delete: 3.173515558242798
So, compared to numpy.delete the above method is significantly faster.
I'm trying to slice and iterate over a multidimensional array at the same time. I have a solution that's functional, but it's kind of ugly, and I bet there's a slick way to do the iteration and slicing that I don't know about. Here's the code:
import numpy as np
x = np.arange(64).reshape(4,4,4)
y = [x[i:i+2,j:j+2,k:k+2] for i in range(0,4,2)
for j in range(0,4,2)
for k in range(0,4,2)]
y = np.array(y)
z = np.array([np.min(u) for u in y]).reshape(y.shape[1:])
Your last reshape doesn't work if y is left as a plain list, since a list has no shape defined. Without it you get:
>>> x = np.arange(64).reshape(4,4,4)
>>> y = [x[i:i+2,j:j+2,k:k+2] for i in range(0,4,2)
... for j in range(0,4,2)
... for k in range(0,4,2)]
>>> z = np.array([np.min(u) for u in y])
>>> z
array([ 0, 2, 8, 10, 32, 34, 40, 42])
But despite that, what you probably want is reshaping your array to 6 dimensions, which gets you the same result as above:
>>> xx = x.reshape(2, 2, 2, 2, 2, 2)
>>> zz = xx.min(axis=-1).min(axis=-2).min(axis=-3)
>>> zz
array([[[ 0,  2],
        [ 8, 10]],

       [[32, 34],
        [40, 42]]])
>>> zz.ravel()
array([ 0, 2, 8, 10, 32, 34, 40, 42])
It's hard to tell exactly what you want in the last step, but you can use stride_tricks to get a "slicker" way. It's rather tricky.
import numpy
import numpy.lib.stride_tricks

# This returns a view with custom strides; x2[i,j,k] matches y[4*i+2*j+k]
x2 = numpy.lib.stride_tricks.as_strided(
    x, shape=(2, 2, 2, 2, 2, 2),
    strides=(numpy.array([32, 8, 2, 16, 4, 1]) * x.dtype.itemsize))
z2 = x2.min(axis=-1).min(axis=-2).min(axis=-3)
Still, I can't say this is much more readable. (Or efficient, as each min call will make temporaries.)
Note, my answer differs from Jaime's because I tried to match your elements of y. You can tell if you replace the min with max.