I want to modify the block elements of a 3D array without a for loop, because the loop is the bottleneck of my code.
To illustrate what I want, I drew a figure: within each 4x4 slice, the two off-diagonal 2x2 blocks are swapped.
The code with the for loop:
import numpy as np
# Create 3d array with 2x4x4 elements
a = np.arange(2*4*4).reshape(2,4,4)
b = np.zeros(np.shape(a))
# Change Block Elements
for it1 in range(2):
    b[it1] = np.block([[a[it1, 0:2, 0:2], a[it1, 2:4, 0:2]], [a[it1, 0:2, 2:4], a[it1, 2:4, 2:4]]])
First let's see if there's a way to do what you want for a 2D array using only indexing, reshape, and transpose operations. If there is, then there's a good chance that you can extend it to a larger number of dimensions.
x = np.arange(2 * 3 * 2 * 5).reshape(2 * 3, 2 * 5)
Clearly you can reshape this into an array that has the blocks along a separate dimension:
x.reshape(2, 3, 2, 5)
Then you can transpose the resulting blocks:
x.reshape(2, 3, 2, 5).transpose(2, 1, 0, 3)
So far, none of the data has been copied. To make the copy happen, reshape back into the original shape:
x.reshape(2, 3, 2, 5).transpose(2, 1, 0, 3).reshape(2 * 3, 2 * 5)
Adding a leading dimension is then just a matter of shifting the indices of the dimensions you want to swap:
b = a.reshape(a.shape[0], 2, a.shape[1] // 2, 2, a.shape[2] // 2).transpose(0, 3, 2, 1, 4).reshape(a.shape)
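As a quick sanity check (a minimal sketch, assuming the 2x4x4 array from the question), the reshape/transpose one-liner produces the same result as the original loop:
import numpy as np

a = np.arange(2 * 4 * 4).reshape(2, 4, 4)

# Loop-based reference from the question
expected = np.zeros(a.shape, dtype=a.dtype)
for it1 in range(2):
    expected[it1] = np.block([[a[it1, 0:2, 0:2], a[it1, 2:4, 0:2]],
                              [a[it1, 0:2, 2:4], a[it1, 2:4, 2:4]]])

# reshape/transpose one-liner
b = (a.reshape(a.shape[0], 2, a.shape[1] // 2, 2, a.shape[2] // 2)
      .transpose(0, 3, 2, 1, 4)
      .reshape(a.shape))

assert np.array_equal(b, expected)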
Here is a quick benchmark of the other implementations with your original array:
a = np.arange(2*4*4).reshape(2,4,4)
%%timeit
b = np.zeros(np.shape(a))
for it1 in range(2):
    b[it1] = np.block([[a[it1, 0:2, 0:2], a[it1, 2:4, 0:2]], [a[it1, 0:2, 2:4], a[it1, 2:4, 2:4]]])
27.7 µs ± 107 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
b = a.copy()
b[:,0:2,2:4], b[:,2:4,0:2] = b[:,2:4,0:2].copy(), b[:,0:2,2:4].copy()
2.22 µs ± 3.89 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit b = np.block([[a[:,0:2,0:2], a[:,2:4,0:2]],[a[:,0:2,2:4], a[:,2:4,2:4]]])
13.6 µs ± 217 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit b = a.reshape(a.shape[0], 2, a.shape[1] // 2, 2, a.shape[2] // 2).transpose(0, 3, 2, 1, 4).reshape(a.shape)
1.27 µs ± 14.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
For small arrays, the differences can sometimes be attributed to overhead. Here is a more meaningful comparison with arrays of shape 10x1000x1000, where each 1000x1000 slice is split into four 500x500 blocks:
a = np.arange(10*1000*1000).reshape(10, 1000, 1000)
%%timeit
b = np.zeros(np.shape(a))
for it1 in range(10):
    b[it1] = np.block([[a[it1, 0:500, 0:500], a[it1, 500:1000, 0:500]], [a[it1, 0:500, 500:1000], a[it1, 500:1000, 500:1000]]])
58 ms ± 904 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
b = a.copy()
b[:,0:500,500:1000], b[:,500:1000,0:500] = b[:,500:1000,0:500].copy(), b[:,0:500,500:1000].copy()
41.2 ms ± 688 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit b = np.block([[a[:,0:500,0:500], a[:,500:1000,0:500]],[a[:,0:500,500:1000], a[:,500:1000,500:1000]]])
27.5 ms ± 569 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit b = a.reshape(a.shape[0], 2, a.shape[1] // 2, 2, a.shape[2] // 2).transpose(0, 3, 2, 1, 4).reshape(a.shape)
20 ms ± 161 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
So it seems that using numpy's own reshaping and transposition mechanism is fastest on my computer. Also, notice that the overhead of np.block becomes less important than copying the temporary arrays as size gets bigger, so the other two implementations change places.
You can directly replace it1 with a slice over the whole first dimension:
b = np.block([[a[:,0:2,0:2], a[:,2:4,0:2]],[a[:,0:2,2:4], a[:,2:4,2:4]]])
Will it make it faster?
import numpy as np
a = np.arange(2*4*4).reshape(2,4,4)
b = a.copy()
b[:,0:2,2:4], b[:,2:4,0:2] = b[:,2:4,0:2].copy(), b[:,0:2,2:4].copy()
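As a quick check (a minimal sketch, reusing a and b from above), the in-place swap produces the same array as the np.block construction from the other answer:
c = np.block([[a[:, 0:2, 0:2], a[:, 2:4, 0:2]], [a[:, 0:2, 2:4], a[:, 2:4, 2:4]]])
assert np.array_equal(b, c)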
Comparison with np.block() alternative from another answer.
Option 1:
%timeit b = a.copy(); b[:,0:2,2:4], b[:,2:4,0:2] = b[:,2:4,0:2].copy(), b[:,0:2,2:4].copy()
Output:
5.44 µs ± 134 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
Option 2:
%timeit b = np.block([[a[:,0:2,0:2], a[:,2:4,0:2]],[a[:,0:2,2:4], a[:,2:4,2:4]]])
Output:
30.6 µs ± 1.75 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Suppose I have a 1D array a and, for each element, I would like to generate a range whose length is stored in ranges:
a = np.array([10,9,12])
ranges = np.array([2,4,3])
The desired output would be:
np.array([10,11,9,10,11,12,12,13,14])
I could of course use a for loop, but I prefer a fully vectorized approach. np.repeat lets one repeat each element of a a given number of times by setting repeats=, but I am not aware of a similar NumPy function that deals specifically with the problem above.
>>> np.hstack([np.arange(start, start+size) for start, size in zip(a, ranges)])
array([10, 11, 9, 10, 11, 12, 12, 13, 14])
With pandas it could be easier:
>>> import pandas as pd
>>> x = pd.Series(np.repeat(a, ranges))
>>> x + x.groupby(x).cumcount()
0 10
1 11
2 9
3 10
4 11
5 12
6 12
7 13
8 14
dtype: int64
>>>
If you want a numpy array:
>>> x.add(x.groupby(x).cumcount()).to_numpy()
array([10, 11, 9, 10, 11, 12, 12, 13, 14], dtype=int64)
>>>
Someone asked about timing, so I compared the times of the three solutions (so far) in a very simple manner, using the %timeit magic function in Jupyter notebook cells.
I set it up as follows:
N = 1
a = np.array([10,9,12])
a = np.tile(a, N)
ranges = np.array([2,4,3])
ranges = np.tile(ranges, N)
a.shape, ranges.shape
This way I could easily scale things up (albeit with repeated rather than random values).
Then I ran:
%timeit np.hstack([np.arange(start, start+size) for start, size in zip(a, ranges)])
,
%timeit x = pd.Series(np.repeat(a, ranges)); x.add(x.groupby(x).cumcount()).to_numpy()
and
%timeit np.array([i for j in range(len(a)) for i in range(a[j],a[j]+ranges[j])])
Results are as follows:
N = 1:
9.81 µs ± 481 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
568 µs ± 20.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
3.53 µs ± 81.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
N = 10:
63.4 µs ± 976 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
575 µs ± 15.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
25.1 µs ± 698 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
N = 100:
612 µs ± 12.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
608 µs ± 25.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
237 µs ± 9.62 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
N = 1000:
6.09 ms ± 52 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
852 µs ± 2.66 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
2.44 ms ± 43.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So the Pandas solution wins when things get to arrays of 1000 elements or more, but the Python double list comprehension does an excellent job until that point. np.hstack probably loses out because of extra memory allocation and copying, but that's a guess. Note also that the Pandas solution is nearly the same time for each array size.
Caveats still exists because there are repeated numbers, and all values are relatively small integers. This really shouldn't matter, but I'm not (yet) betting on it. (For example, Pandas groupby functionality may be fast because of the repeated numbers.)
Bonus: the OP stated in a comment that "The real life arrays are around 1000 elements, yet with ranges ranging from 100 to 1000. So becomes quite big – pr94".
So I adjusted my timing test to the following:
import numpy as np
import pandas as pd
N = 1000
a = np.random.randint(100, 1000, N)
# This is how I understand "ranges ranging from 100 to 1000"
ranges = np.random.randint(100, 1000, N)
%timeit np.hstack([np.arange(start, start+size) for start, size in zip(a, ranges)])
%timeit x = pd.Series(np.repeat(a, ranges)); x.add(x.groupby(x).cumcount()).to_numpy()
%timeit np.array([i for j in range(len(a)) for i in range(a[j],a[j]+ranges[j])])
Which comes out as:
hstack: 2.78 ms ± 38.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
pandas: 18.4 ms ± 663 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
double list comprehension: 64.1 ms ± 427 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Which shows that those caveats I mentioned, in some form at least, do seem to exist. But people should double check whether this testing code is actually the most relevant and appropriate, and whether it is correct.
This problem is probably going to be solved much faster with a Numba-compiled function:
import numpy as np
import numba as nb

@nb.jit
def expand_range(values, counts):
    # Pre-allocate the output, then fill it with explicit loops,
    # which Numba compiles to fast machine code.
    n = len(values)
    m = np.sum(counts)
    r = np.zeros((m,), dtype=values.dtype)
    k = 0
    for i in range(n):
        x = values[i]
        for j in range(counts[i]):
            r[k] = x + j
            k += 1
    return r
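A quick correctness check on the question's example (a minimal sketch, assuming a and ranges as defined in the question):
a = np.array([10, 9, 12])
ranges = np.array([2, 4, 3])
print(expand_range(a, ranges))
# [10 11  9 10 11 12 12 13 14]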
On the very small inputs:
%timeit expand_range(a, ranges)
# 1.16 µs ± 126 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit x = pd.Series(np.repeat(a, ranges)); x.add(x.groupby(x).cumcount()).to_numpy()
# 617 µs ± 4.32 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.hstack([np.arange(start, start+size) for start, size in zip(a, ranges)])
# 25 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.array([i for j in range(len(a)) for i in range(a[j],a[j]+ranges[j])])
# 13.5 µs ± 929 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
and on somewhat larger inputs:
b = np.random.randint(0, 1000, 1000)
b_ranges = np.random.randint(1, 10, 1000)
%timeit expand_range(b, b_ranges)
# 5.07 µs ± 98.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit x = pd.Series(np.repeat(a, ranges)); x.add(x.groupby(x).cumcount()).to_numpy()
# 617 µs ± 4.32 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.hstack([np.arange(start, start+size) for start, size in zip(a, ranges)])
# 25 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.array([i for j in range(len(a)) for i in range(a[j],a[j]+ranges[j])])
# 13.5 µs ± 929 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
These show that the Numba-based approach wins, with a speed gain of at least 100x over any of the other approaches proposed so far.
With numbers closer to what has been indicated in one of the comments by the OP:
b = np.random.randint(10, 1000, 1000)
b_ranges = np.random.randint(100, 1000, 1000)
%timeit expand_range(b, b_ranges)
# 1.5 ms ± 67.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit x = pd.Series(np.repeat(b, b_ranges)); x.add(x.groupby(x).cumcount()).to_numpy()
# 91.8 ms ± 6.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit np.hstack([np.arange(start, start+size) for start, size in zip(b, b_ranges)])
# 10.7 ms ± 402 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.array([i for j in range(len(b)) for i in range(b[j],b[j]+b_ranges[j])])
# 144 ms ± 4.54 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
which is still at least a respectable 7x over the others.
Let's say I have this numpy array:
[[3 2 1 5]
[3 2 1 5]
[3 2 1 5]
[3 2 1 5]]
How can I merge the values of the last column into the first column (or, generally, any column into any other column)? Expected output:
[[8 2 1]
[8 2 1]
[8 2 1]
[8 2 1]]
I've found this solution, but is there any better way to do it?
As noted in the comments, you need to create a view or a copy of the array in order to get a new array of a different size. Here is a short comparison of the performance of a view vs. a copy:
x = np.tile([1,3,2,4],(4,1))
def f(x):
    # calculation + view
    x[:,0] = x[:,0] + x[:,-1]
    return x[:,:-1]

def g(x):
    # calculation + copy
    x[:,0] = x[:,0] + x[:,-1]
    return np.delete(x, -1, 1)

def h(x):
    # calculation only
    x[:,0] = x[:,0] + x[:,-1]
%timeit f(x)
%timeit g(x)
%timeit h(x)
9.16 µs ± 1.1 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
35 µs ± 7.35 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
7.81 µs ± 1.42 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And if len(x) were 1,000,000:
6.13 ms ± 623 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
18 ms ± 2.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
5.83 ms ± 720 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So the linked solution is very economical: it applies the calculation and then returns an instant view.
I don't know if this is the best, but it's kind of clever.
In [66]: np.add.reduceat(arr[:,[0,3,1,2]], [0,2,3], axis=1)
Out[66]:
array([[8, 2, 1],
[8, 2, 1],
[8, 2, 1]])
reduceat applies add to groups of columns (axis 1). I first reordered the columns to put the ones to be added together.
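For reference, here is the same idea applied to the 4x4 array from the question (a minimal sketch, assuming arr is built with np.tile):
import numpy as np

arr = np.tile([3, 2, 1, 5], (4, 1))
# Reorder the columns so the ones to be summed (0 and 3) are adjacent,
# then sum the groups starting at indices 0, 2 and 3 along axis 1.
np.add.reduceat(arr[:, [0, 3, 1, 2]], [0, 2, 3], axis=1)
# array([[8, 2, 1],
#        [8, 2, 1],
#        [8, 2, 1],
#        [8, 2, 1]])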
I'm trying to convert a piece of MATLAB code to Python.
a = [1 2 3; 4 5 6]
b = sum(a < 5)
% output:
% ans =
%    2   1   1
It returns the number of elements in every column that satisfy the condition.
Is there an equivalent function in NumPy (Python) to do this?
It's the same:
a=np.array([[1, 2, 3],[4, 5, 6]])
b=np.sum(a<5,axis=0) # the only difference is that you need to explicitly set the dimension
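For completeness, a quick check of the result (a minimal sketch, reusing a and b from above):
print(b)
# [2 1 1]   (same as the MATLAB output)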
Although not made for this purpose, an alternate solution would be
a=np.array([[1, 2, 3],[4, 5, 6]])
np.count_nonzero(a<5, axis=0)
# array([2, 1, 1])
Performance
For small arrays, np.sum seems to be slightly faster
x = np.repeat([1, 2, 3], 100)
y = np.repeat([4, 5, 6], 100)
a=np.array([x,y])
%timeit np.sum(a<5, axis=0)
# 7.18 µs ± 669 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.count_nonzero(a<5, axis=0)
# 11.8 µs ± 386 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For very large arrays, np.count_nonzero seems to be slightly faster
x = np.repeat([1, 2, 3], 5000000)
y = np.repeat([4, 5, 6], 5000000)
a=np.array([x,y])
%timeit np.sum(a<5, axis=0)
# 126 ms ± 6.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.count_nonzero(a<5, axis=0)
# 100 ms ± 6.72 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I need to merge lists that share common elements. I have a function that works, but it becomes unbearably slow when the number of merges is large, so I wonder if there is a more efficient way.
Consolidation condition: sub-lists are merged when they contain a number in common. Thank you.
Simple Association:
[7,8,9] = [7,8]+[8,9] #The same number 8
Cascade contains:
[1,2,3] = [1,2,3]+[3,4] #The same number 3
[3,4,5,6] = [3,4]+[4,5,6] #The same number 4
[1,2,3,4,5,6] = [1,2,3]+[3,4,5,6] #The same number 3
Function:
a = [ [1,2,3],[4,5,6],[3,4],[7,8],[8,9],[6,12,13] ]
b = len(a)
for i in range(b):
    for j in range(b):
        x = list(set(a[i]+a[j]))
        y = len(a[j])+len(a[i])
        if i == j or a[i] == 0 or a[j] == 0:
            break
        elif len(x) < y:
            a[i] = x
            a[j] = [0]
print(a)
print([i for i in a if i != [0]])
result:
[[8, 9, 7], [1, 2, 3, 4, 5, 6, 10, 11]]
The above is just an example; in the actual calculation each sub-list has a length of only 2:
a = [[1,3],[5,6],[3,4],[7,8],[8,9],[12,13]]
I want to work with much more data; here is some simulated data:
a = np.random.rand(150,150)>0.99
a[np.tril_indices(a.shape[1], -1)] = 0
a[np.diag_indices(a.shape[1])] = 0
a = [list(x) for x in np.c_[np.where(a)]]
consolidate(a)
I think your algorithm is close to optimal, except that the inner loop can be shortened because the intersection operation is symmetric, i.e. if you check that (A, B) intersect, there is no need to check for (B, A).
This way you would go from n² to n·(n/2) pairwise checks (the asymptotic complexity is unchanged, but the constant is halved).
However, I would rewrite the piece of code more cleanly and I would also avoid modifying the input.
Note also that, since sets do not guarantee ordering, it is a good idea to sort the results before converting back to lists.
Here is my proposed code (EDITED to reduce the number of castings and sortings):
def consolidate(items):
    items = [set(item.copy()) for item in items]
    for i, x in enumerate(items):
        for j, y in enumerate(items[i + 1:]):
            if x & y:
                items[i + j + 1] = x | y
                items[i] = None
    return [sorted(x) for x in items if x]
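A quick usage check on the input from the question (a minimal sketch):
in_list = [[1,2,3], [4,5,6], [3,4], [7,8], [8,9], [6,12,13]]
print(consolidate(in_list))
# [[7, 8, 9], [1, 2, 3, 4, 5, 6, 12, 13]]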
Encapsulating your code in a function, I would get:
def consolidate_orig(a):
    a = [x.copy() for x in a]
    b = len(a)
    for i in range(b):
        for j in range(b):
            x = list(set(a[i]+a[j]))
            y = len(a[j])+len(a[i])
            if i == j or a[i] == 0 or a[j] == 0:
                break
            elif len(x) < y:
                a[i] = x
                a[j] = [0]
    return [i for i in a if i != [0]]
This would allow us to do some clean micro-benchmarking (for completeness I have also included @zipa's merge()):
EDIT:
@zipa's code is not properly encapsulated; here is an equivalent version with proper encapsulation:
def merge(iterable, base=None):
    if base is None:
        base = iterable
    merged = set([tuple(set(i).union(
        *[j for j in base if set(i).intersection(j)])) for i in iterable])
    if merged == iterable:
        return merged
    else:
        return merge(merged, base)
and updated timings:
in_list = [[1,2,3], [4,5,6], [3,4], [7,8], [8,9], [6,12,13]]
%timeit consolidate_orig(in_list)
# 17.9 µs ± 368 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit consolidate(in_list)
# 6.15 µs ± 30 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit merge(in_list)
# 53.6 µs ± 718 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
in_list = [[1, 3], [5, 6], [3, 4], [7, 8], [8, 9], [12, 13]]
%timeit consolidate_orig(in_list)
# 16.1 µs ± 159 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit consolidate(in_list)
# 5.87 µs ± 71.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit merge(in_list)
# 27 µs ± 701 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
This shows that, at least for this input, the proposed solution is consistently faster. Since it is not straightforward to generate large meaningful inputs, I'll leave it to you to check that this is more efficient than your approach for the larger inputs you have in mind.
EDIT
With larger, but probably meaningless inputs, the timings are still favorable for the proposed version:
in_list = [[1,2,3], [4,5,6], [3,4], [7,8], [8,9], [6,12,13]] * 300
%timeit consolidate_orig(in_list)
# 1.04 s ± 14.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit consolidate(in_list)
# 724 ms ± 7.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit merge(in_list)
# 1.04 s ± 7.94 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
in_list = [[1, 3], [5, 6], [3, 4], [7, 8], [8, 9], [12, 13]] * 300
%timeit consolidate_orig(in_list)
# 1.03 s ± 18 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit consolidate(in_list)
# 354 ms ± 3.43 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit merge(in_list)
# 967 ms ± 16.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
This approach should perform faster on larger nested lists:
def merge(iterable):
    merged = set([tuple(set(i).union(*[j for j in a if set(i).intersection(j)])) for i in iterable])
    if merged == iterable:
        return merged
    else:
        return merge(merged)

merge(a)
#set([(1, 2, 3, 4, 5, 6, 12, 13), (8, 9, 7)])
It recursively combines lists until all the combinations are exhausted.