Since collections.Counter is so slow, I am pursuing a faster method of summing mapped values in Python 2.7. It seems like a simple concept and I'm kind of disappointed in the built-in Counter method.
Basically, I need to be able to take arrays like this:
array([[ 0., 2.],
       [ 2., 2.],
       [ 3., 1.]])
array([[ 0., 3.],
       [ 1., 1.],
       [ 2., 5.]])
And then "add" them so they look like this:
array([[ 0., 5.],
       [ 1., 1.],
       [ 2., 7.],
       [ 3., 1.]])
If there isn't a good way to do this quickly and efficiently, I'm open to any other ideas that will allow me to do something similar to this, and I'm open to modules other than Numpy.
Thanks!
Edit: Ready for some speedtests?
Intel 64-bit Windows machine. All of the following values are in seconds, for 20,000 loops.
collections.Counter results:
2.131000, 2.125000, 2.125000
Divakar's union1d + masking results:
1.641000, 1.633000, 1.625000
Divakar's union1d + indexing results:
0.625000, 0.625000, 0.641000
Histogram results:
1.844000, 1.938000, 1.858000
Pandas results:
16.659000, 16.686000, 16.885000
Conclusions: union1d + indexing wins; the array size is too small for Pandas to be effective; and the histogram approach blew my mind with its simplicity, but I'm guessing it carries too much setup overhead. All of the responses I received were very good, though. This is what I used to get the numbers. Thanks again!
Edit: And it should be mentioned that using Counter1.update(Counter2.elements()) is terrible despite doing the same exact thing (65.671000 sec).
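For context, here's a minimal sketch of the kind of Counter baseline I timed (the dict(a) row-to-pair trick is my reconstruction of the setup, since the benchmark code was only linked above):

from collections import Counter
import numpy as np

a = np.array([[0., 2.], [2., 2.], [3., 1.]])
b = np.array([[0., 3.], [1., 1.], [2., 5.]])

ca = Counter(dict(a))        # first column -> second column
ca.update(Counter(dict(b)))  # update() with a mapping adds values key-wise
out = np.array(sorted(ca.items()))
# array([[ 0.,  5.],
#        [ 1.,  1.],
#        [ 2.,  7.],
#        [ 3.,  1.]])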
Later Edit: I've been thinking about this a lot, and I've come to realize that, with Numpy, it might be more effective to fill each array with zeros so that the first column isn't even needed, since we can just use the index; that would also make it much easier to add multiple arrays together, as well as to apply other functions. Additionally, Pandas makes more sense than Numpy since there would be no need to 0-fill, and it would definitely be more effective with large data sets (however, Numpy has the advantage of being compatible on more platforms, like GAE, if that matters at all). Lastly, the answer I checked was definitely the best answer for the exact question I asked--adding the two arrays in the way I showed--but I think what I needed was a change in perspective.
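To illustrate that change in perspective, here is a rough sketch of the 0-filled idea, assuming the keys are small nonnegative integers and the key range (4 here) is known in advance:

import numpy as np

n = 4  # one slot per possible key (assumed known up front)
dense_a = np.zeros(n)
dense_a[[0, 2, 3]] = [2., 2., 1.]  # keys 0, 2, 3 from the first array
dense_b = np.zeros(n)
dense_b[[0, 1, 2]] = [3., 1., 5.]  # keys 0, 1, 2 from the second array

total = dense_a + dense_b  # array([ 5.,  1.,  7.,  1.]); the index *is* the key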
Here's one approach with np.union1d and masking -
def app1(a,b):
    # union of the key columns gives the output keys (sorted)
    c0 = np.union1d(a[:,0],b[:,0])
    out = np.zeros((len(c0),2))
    out[:,0] = c0
    # add each input's values at the rows whose key appears in it
    mask1 = np.in1d(c0,a[:,0])
    out[mask1,1] = a[:,1]
    mask2 = np.in1d(c0,b[:,0])
    out[mask2,1] += b[:,1]
    return out
Sample run -
In [174]: a
Out[174]:
array([[  0.,   2.],
       [ 12.,   2.],
       [ 23.,   1.]])

In [175]: b
Out[175]:
array([[  0.,   3.],
       [  1.,   1.],
       [ 12.,   5.]])

In [176]: app1(a,b)
Out[176]:
array([[  0.,   5.],
       [  1.,   1.],
       [ 12.,   7.],
       [ 23.,   1.]])
Here's another with np.union1d and indexing -
def app2(a,b):
    # dense scratch array large enough for the biggest key
    n = np.maximum(a[:,0].max(), b[:,0].max())+1
    c0 = np.union1d(a[:,0],b[:,0])
    out0 = np.zeros((int(n), 2))
    out0[a[:,0].astype(int),1] = a[:,1]
    out0[b[:,0].astype(int),1] += b[:,1]
    # keep only the rows for keys that actually occur
    out = out0[c0.astype(int)]
    out[:,0] = c0
    return out
For the case where all indices are covered by the first column values in a and b -
def app2_specific(a,b):
    c0 = np.union1d(a[:,0],b[:,0])
    n = c0[-1]+1
    out0 = np.zeros((int(n), 2))
    out0[a[:,0].astype(int),1] = a[:,1]
    out0[b[:,0].astype(int),1] += b[:,1]
    out0[:,0] = c0
    return out0
Sample run -
In [234]: a
Out[234]:
array([[ 0.,  2.],
       [ 2.,  2.],
       [ 3.,  1.]])

In [235]: b
Out[235]:
array([[ 0.,  3.],
       [ 1.,  1.],
       [ 2.,  5.]])

In [236]: app2_specific(a,b)
Out[236]:
array([[ 0.,  5.],
       [ 1.,  1.],
       [ 2.,  7.],
       [ 3.,  1.]])
If you know the number of fields, use np.bincount.
c = np.vstack([a, b])
# bincount requires integer bins, so cast the key column
counts = np.bincount(c[:, 0].astype(int), weights=c[:, 1], minlength=numFields)
out = np.vstack([np.arange(numFields), counts]).T
This works if you're getting all your data at once. Make a list of your arrays and vstack them. If you're getting data chunks sequentially, you can use np.add.at to do the same thing.
out = np.zeros((numFields, 2))
out[:, 0] = np.arange(numFields)
np.add.at(out[:, 1], a[:, 0].astype(int), a[:, 1])
np.add.at(out[:, 1], b[:, 0].astype(int), b[:, 1])
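For instance, with the sample arrays from the question and numFields = 4 (an assumed value; it just has to cover every key):

import numpy as np

a = np.array([[0., 2.], [2., 2.], [3., 1.]])
b = np.array([[0., 3.], [1., 1.], [2., 5.]])
numFields = 4

c = np.vstack([a, b])
counts = np.bincount(c[:, 0].astype(int), weights=c[:, 1], minlength=numFields)
out = np.vstack([np.arange(numFields), counts]).T
# array([[ 0.,  5.],
#        [ 1.,  1.],
#        [ 2.,  7.],
#        [ 3.,  1.]])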
You can use a basic histogram; this will deal with gaps, too. You can filter out zero-count entries if need be.
import numpy as np

x = np.array([[ 0., 2.],
              [ 2., 2.],
              [ 3., 1.]])
y = np.array([[ 0., 3.],
              [ 1., 1.],
              [ 2., 5.],
              [ 5., 3.]])

c, w = np.vstack((x, y)).T
h, b = np.histogram(c, weights=w,
                    bins=np.arange(c.min(), c.max()+2))
r = np.vstack((b[:-1], h)).T
print(r)
# [[ 0.  5.]
#  [ 1.  1.]
#  [ 2.  7.]
#  [ 3.  1.]
#  [ 4.  0.]
#  [ 5.  3.]]
r_nonzero = r[r[:,1] != 0]
Pandas has functions doing exactly what you intend:
import pandas as pd
pda = pd.DataFrame(a).set_index(0)
pdb = pd.DataFrame(b).set_index(0)
result = pd.concat([pda, pdb], axis=1).fillna(0).sum(axis=1)
Edit: If you actually need the data back in numpy format, just do
array_res = result.reset_index(name=1).values
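As a usage sketch (my own, using the sample arrays from the question), the full round trip looks like this:

import numpy as np
import pandas as pd

a = np.array([[0., 2.], [2., 2.], [3., 1.]])
b = np.array([[0., 3.], [1., 1.], [2., 5.]])

pda = pd.DataFrame(a).set_index(0)
pdb = pd.DataFrame(b).set_index(0)
# concat aligns on the index (the key column); missing keys become NaN
result = pd.concat([pda, pdb], axis=1).fillna(0).sum(axis=1)
array_res = result.reset_index(name=1).values
# array([[ 0.,  5.],
#        [ 1.,  1.],
#        [ 2.,  7.],
#        [ 3.,  1.]])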
This is a quintessential grouping problem, which numpy_indexed (disclaimer: I am its author) was created to solve elegantly and efficiently:
import numpy_indexed as npi
C = np.concatenate([A, B], axis=0)
labels, sums = npi.group_by(C[:, 0]).sum(C[:, 1])
Note: it's cleaner to maintain your label arrays as a separate int array; floats are finicky when it comes to labeling things, with positive and negative zeros, and printed values not conveying their full binary state. Better to use ints for that.
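If you want the same two-column layout as in the question, a small usage sketch (mine, not part of the library API) is to stack the grouped output back together:

import numpy as np
import numpy_indexed as npi

A = np.array([[0., 2.], [2., 2.], [3., 1.]])
B = np.array([[0., 3.], [1., 1.], [2., 5.]])

C = np.concatenate([A, B], axis=0)
labels, sums = npi.group_by(C[:, 0]).sum(C[:, 1])
out = np.stack([labels, sums], axis=1)  # back to [[key, total], ...]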
I think I've misunderstood something with indexing in numpy.
I have a 3D-numpy array of shape (dim_x, dim_y, dim_z) and I want to find the maximum along the third axis (dim_z), and set its value to 1 and all the others to zero.
The problem is that I end up with several 1 in the same row, even if values are different.
Here is the code:
>>> test = np.random.rand(2,3,2)
>>> test
array([[[ 0.13110146,  0.07138861],
        [ 0.84444158,  0.35296986],
        [ 0.97414498,  0.63728852]],

       [[ 0.61301975,  0.02313646],
        [ 0.14251848,  0.91090492],
        [ 0.14217992,  0.41549218]]])
>>> result = np.zeros_like(test)
>>> result[:test.shape[0], np.arange(test.shape[1]), np.argmax(test, axis=2)]=1
>>> result
array([[[ 1.,  0.],
        [ 1.,  1.],
        [ 1.,  1.]],

       [[ 1.,  0.],
        [ 1.,  1.],
        [ 1.,  1.]]])
I was expecting to end up with:
array([[[ 1.,  0.],
        [ 1.,  0.],
        [ 1.,  0.]],

       [[ 1.,  0.],
        [ 0.,  1.],
        [ 0.,  1.]]])
Probably I'm missing something here. From what I've understood, 0:dim_x, np.arange(dim_y) yields dim_x by dim_y index pairs, and np.argmax(test, axis=2) has shape (dim_x, dim_y), so if the indexing is of the form [x, y, z], a pair [x, y] is not supposed to appear twice.
Could someone explain to me where I'm wrong? Thanks in advance.
What we are looking for
We get the argmax indices along the last axis -
idx = np.argmax(test, axis=2)
For the given sample data, we have idx :
array([[0, 0, 0],
       [0, 1, 1]])
Now, idx covers the first and second axes, while getting those argmax indices.
To assign the corresponding ones in the output, we need to create range arrays for the first two axes covering the lengths along those and aligned according to the shape of idx. Now, idx is a 2D array of shape (m,n), where m = test.shape[0] and n = test.shape[1].
Thus, the range arrays for assignment into first two axes of output must be -
X = np.arange(test.shape[0])[:,None]
Y = np.arange(test.shape[1])
Notice that the extension of the first range array to 2D is needed to have it aligned against the rows of idx, while Y aligns against the cols of idx -
In [239]: X
Out[239]:
array([[0],
       [1]])

In [240]: Y
Out[240]: array([0, 1, 2])
Schematically put -
idx :
          Y array
        --------->
      x x x |
      x x x |  X array
            v
The fault in original code
Your code was -
result[:test.shape[0], np.arange(test.shape[1]), ..
This is essentially :
result[:, np.arange(test.shape[1]), ...
So, you are selecting all elements along the first axis, instead of only the ones that correspond to the idx indices. In that process you were selecting many more elements than required for assignment, and hence you were seeing many more 1s than expected in the result array.
The correction
Thus, the only correction needed was indexing into the first axis with the range array and a working solution would be -
result[np.arange(test.shape[0])[:,None], np.arange(test.shape[1]), ...
The alternative(s)
Alternatively, using the range arrays created earlier with X and Y -
result[X,Y,idx] = 1
Another way to get X,Y would be with np.ogrid -
m,n = test.shape[:2]
X,Y = np.ogrid[:m,:n]
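Putting the pieces together, a minimal self-contained version of the corrected assignment (same setup as in the question):

import numpy as np

test = np.random.rand(2, 3, 2)
idx = np.argmax(test, axis=2)  # shape (2, 3)

m, n = test.shape[:2]
X, Y = np.ogrid[:m, :n]        # shapes (m, 1) and (1, n)

result = np.zeros_like(test)
result[X, Y, idx] = 1          # exactly one 1 per (row, column) pair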
I think there's a problem with mixing basic (slice) and advanced indexing. It's easier to see when selecting values from an array than with this assignment, but it can result in transposed axes. For a problem like this it is better to use advanced indexing all around, as provided by np.ix_:
In [24]: test = np.random.rand(2,3,2)
In [25]: idx = np.argmax(test, axis=2)
In [26]: idx
Out[26]:
array([[1, 0, 1],
       [0, 1, 1]], dtype=int32)
with basic and advanced:
In [31]: res1 = np.zeros_like(test)
In [32]: res1[:, np.arange(test.shape[1]), idx]=1
In [33]: res1
Out[33]:
array([[[ 1.,  1.],
        [ 1.,  1.],
        [ 0.,  1.]],

       [[ 1.,  1.],
        [ 1.,  1.],
        [ 0.,  1.]]])
with advanced:
In [35]: I, J = np.ix_(range(test.shape[0]), range(test.shape[1]))
In [36]: I
Out[36]:
array([[0],
       [1]])
In [37]: J
Out[37]: array([[0, 1, 2]])
In [38]: res2 = np.zeros_like(test)
In [40]: res2[I, J, idx]=1
In [41]: res2
Out[41]:
array([[[ 0.,  1.],
        [ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  0.],
        [ 0.,  1.],
        [ 0.,  1.]]])
On further thought, the use of the slice for the 1st dimension is just wrong, if the goal is to set or find the 6 argmax values:
In [54]: test
Out[54]:
array([[[ 0.15288242,  0.36013289],
        [ 0.90794601,  0.15265616],
        [ 0.34014976,  0.53804266]],

       [[ 0.97979479,  0.15898605],
        [ 0.04933804,  0.89804999],
        [ 0.10199319,  0.76170911]]])

In [55]: test[I, J, idx]
Out[55]:
array([[ 0.36013289,  0.90794601,  0.53804266],
       [ 0.97979479,  0.89804999,  0.76170911]])

In [56]: test[:, J, idx]
Out[56]:
array([[[ 0.36013289,  0.90794601,  0.53804266],
        [ 0.15288242,  0.15265616,  0.53804266]],

       [[ 0.15898605,  0.04933804,  0.76170911],
        [ 0.97979479,  0.89804999,  0.76170911]]])
With the slice it selects a (2,2,3) set of values from test (or res), not the intended (2,3). There are 2 extra rows.
Here is an easier way to do it:
>>> test == test.max(axis=2, keepdims=1)
array([[[ True, False],
        [ True, False],
        [ True, False]],

       [[ True, False],
        [False,  True],
        [False,  True]]], dtype=bool)
...and if you really want that as floating-point 1.0 and 0.0, then convert it:
>>> (test==test.max(axis=2, keepdims=1)).astype(float)
array([[[ 1.,  0.],
        [ 1.,  0.],
        [ 1.,  0.]],

       [[ 1.,  0.],
        [ 0.,  1.],
        [ 0.,  1.]]])
Here is a way to do it with only one winner per row-column combo (i.e. no ties, as discussed in comments):
rowmesh, colmesh = np.meshgrid(range(test.shape[0]), range(test.shape[1]), indexing='ij')
maxloc = np.argmax(test, axis=2)
# convert the (row, col, argmax) triples into flat indices into test
flatind = np.ravel_multi_index([rowmesh, colmesh, maxloc], test.shape)
result = np.zeros_like(test)
result.flat[flatind] = 1
UPDATE after reading hpaulj's answer:
rowmesh, colmesh = np.ix_(range(test.shape[0]), range(test.shape[1]))
is a more-efficient, more numpythonic, alternative to my meshgrid call (the rest of the code stays the same)
The issue of why your approach fails is hard to explain, but here's one place where intuition could start: your slicing approach says "all rows, times all columns, times a certain sequence of layers". How many elements is that slice in total? By contrast, how many elements do you actually want to set to 1? It can be instructive to look at the values you get when you view the corresponding test values of the slice you're trying to assign to:
>>> test[:, :, maxloc].shape
(2, 3, 2, 3)  # oops! it's because maxloc itself is 2x3
>>> test[:, :, maxloc]
array([[[[ 0.13110146,  0.13110146,  0.13110146],
         [ 0.13110146,  0.07138861,  0.07138861]],

        [[ 0.84444158,  0.84444158,  0.84444158],
         [ 0.84444158,  0.35296986,  0.35296986]],

        [[ 0.97414498,  0.97414498,  0.97414498],
         [ 0.97414498,  0.63728852,  0.63728852]]],


       [[[ 0.61301975,  0.61301975,  0.61301975],
         [ 0.61301975,  0.02313646,  0.02313646]],

        [[ 0.14251848,  0.14251848,  0.14251848],
         [ 0.14251848,  0.91090492,  0.91090492]],

        [[ 0.14217992,  0.14217992,  0.14217992],
         [ 0.14217992,  0.41549218,  0.41549218]]]])
# note the repetition: in maxloc you're sometimes repeatedly asking for layer 0, and sometimes repeatedly for layer 1
I have several .mat files, each of them containing a matrix. I need to import them into Python using h5py, because they have been saved with -v7.3.
For example:
myfile.mat includes matrix X with the size of (10, 20)
I use the following commands in Python:
import numpy as np, h5py
f = h5py.File('myfile.mat', 'r')
data = np.array(f['X'])
data.shape  # -> (20, 10)  Here is the problem!
The matrix X is transposed. How can I import X without it being transposed?
I think you have to live with transposing. MATLAB is F ordered, numpy is C ordered (by default). Somewhere along the line loadmat does that transposing; h5py does not, so you have to do some sort of transposing or reordering yourself.
And by the way, transpose is one of the cheapest operations on a numpy array.
Save a (2,3) array in Octave:
octave:27> x=[0,1,2;3,4,5]
octave:28> save 'x34_7.mat' '-7' x
octave:33> save 'x34_h5.mat' '-hdf5' x
octave:32> reshape(x,[1,6])
ans = 0 3 1 4 2 5
Load it. The shape is (2,3), but it is F ordered:
In [102]: x7 = loadmat('x34_7.mat')
In [103]: x7['x']
Out[103]:
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.]])
In [104]: _.flags
Out[104]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  ...
Look at the h5 version:
In [110]: f = h5py.File('x34_h5.mat','r')
In [111]: x5 = f['x']['value'][:]
In [112]: x5
Out[112]:
array([[ 0.,  3.],
       [ 1.,  4.],
       [ 2.,  5.]])
# C_contiguous
and the data in the x5 buffer is in the same order as in Octave:
In [134]: np.frombuffer(x5.data, float)
Out[134]: array([ 0., 3., 1., 4., 2., 5.])
So is the data from loadmat (though I have to transpose x7['x'] to look at it with frombuffer, to make it C contiguous):
In [139]: np.frombuffer(x7['x'].T.data, float)
Out[139]: array([ 0., 3., 1., 4., 2., 5.])
(Is there a better way of verifying that x5 and x7['x'] have the same content?)
This pattern holds with higher dimensions. In MATLAB it's the 1st dimension that varies most rapidly. Loaded by h5py, that dimension corresponds to the last. So an x(:,2,2,2) would correspond to x[1,1,1,:], and to x.T[:,1,1,1].
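So in practice, a minimal fix (my sketch, using the names from the question) is to transpose right after loading; for an N-dimensional array, .T reverses all axes, which matches the MATLAB-to-h5py reversal just described:

import h5py

with h5py.File('myfile.mat', 'r') as f:
    X = f['X'][:].T  # [:] reads the dataset into numpy; .T reverses all axes
# X.shape is now (10, 20), as in MATLAB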
Say I had the following:
x = np.array([[1.,2.,3.,4.],[2.,3.,4.,5.],[1.,3.,5.,6.]])
What would the syntax be in order to select, say, the first two columns of every row? (So [[1.,2.],[2.,3.],[1.,3.]].)
Ultimately I want to run a loop of the form:
for j in range(len(x)):
    a = x[1,2:j] * #something
Where x[1,2:j] refers to what I am trying to achieve in my question. Thanks in advance!
You can use np.hsplit() (split an array into multiple sub-arrays horizontally, i.e. column-wise) and then choose the first part:
>>> np.hsplit(x, 2)[0]
array([[ 1.,  2.],
       [ 2.,  3.],
       [ 1.,  3.]])
Or you can just use slicing:
>>> x[:, :2]
array([[ 1.,  2.],
       [ 2.,  3.],
       [ 1.,  3.]])
You can slice axis 1 of the array x:
>>> x[:, :2]
array([[ 1.,  2.],
       [ 2.,  3.],
       [ 1.,  3.]])
The : for axis 0 effectively means "every row". The :2 in axis 1 means "get the first two columns (0 and 1)".
Slicing in multiple dimensions works similarly to Python lists and other iterables,
start:stop:step
You can specify a slice for each dimension of the array, or use : to get everything along the axis.
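For example, a few illustrative slices of the array above (examples of my own):

import numpy as np

x = np.array([[1., 2., 3., 4.],
              [2., 3., 4., 5.],
              [1., 3., 5., 6.]])

x[:, :2]     # every row, first two columns
x[::2, :2]   # every other row, first two columns
x[:, 1:4:2]  # columns 1 and 3 of every row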
New to Python and Numpy, trying to create 3-dimensional arrays. My problem is that the order of the dimensions is off compared to Matlab. In fact the order doesn't make sense at all.
Creating a matrix:
x = np.zeros((2,3,4))
In my world this should result in 2 rows, 3 columns and 4 depth dimensions and it should be presented as:
[0 0 0    [0 0 0    [0 0 0    [0 0 0
 0 0 0]    0 0 0]    0 0 0]    0 0 0]
separating on each depth dimension.
Instead it is presented as
[0 0 0 0    [0 0 0 0
 0 0 0 0     0 0 0 0
 0 0 0 0]    0 0 0 0]
That is, 3 rows, 4 columns and 2 depth dimensions; the first dimension is the "depth". To further add to this problem, importing an image with OpenCV, the color dimension is the last dimension; that is, I see the color information as the depth dimension. This complicates things greatly if all I want to do is try something on a known, smaller 3-dimensional array.
Have I misunderstood something? If not, why the heck is numpy using such an unintuitive way of working with 3D arrays?
You have a truncated array representation. Let's look at a full example:
>>> a = np.zeros((2, 3, 4))
>>> a
array([[[ 0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]]])
Arrays in NumPy are printed as the word array followed by the structure, similar to nested Python lists. Let's create a similar list:
>>> l = [[[ 0., 0., 0., 0.],
          [ 0., 0., 0., 0.],
          [ 0., 0., 0., 0.]],
         [[ 0., 0., 0., 0.],
          [ 0., 0., 0., 0.],
          [ 0., 0., 0., 0.]]]
>>> l
[[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]],
 [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]]
The first level of this compound list l has exactly 2 elements, just as the first dimension of the array a (# of rows). Each of these elements is itself a list with 3 elements, which is equal to the second dimension of a (# of columns). Finally, the most nested lists have 4 elements each, same as the third dimension of a (depth/# of colors).
So you've got exactly the same structure (in terms of dimensions) as in Matlab, just printed in another way.
Some caveats:
Matlab stores data column by column ("Fortran order"), while NumPy by default stores them row by row ("C order"). This doesn't affect indexing, but may affect performance. For example, in Matlab an efficient loop will be over columns (e.g. for n = 1:10 a(:, n) end), while in NumPy it's preferable to iterate over rows (e.g. for n in range(10): a[n, :] -- note n in the first position, not the last); see the sketch after this list.
If you work with colored images in OpenCV, remember that:
2.1. It stores images in BGR format and not RGB, like most Python libraries do.
2.2. Most functions work on image coordinates (x, y), which are opposite to matrix coordinates (i, j).
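Here's a small sketch (my own illustration, not from the original answer) of the memory-order point in caveat 1; NumPy lets you request Fortran order explicitly and inspect it via flags:

import numpy as np

a = np.zeros((3, 4))             # C order: rows are contiguous in memory
f = np.zeros((3, 4), order='F')  # Fortran order: columns are contiguous

print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])  # True False
print(f.flags['C_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])  # False True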
You are right: you are creating a matrix with 2 rows, 3 columns and 4 depth. Numpy prints matrices differently from Matlab:
Numpy:
>>> import numpy as np
>>> np.zeros((2,3,2))
array([[[ 0.,  0.],
        [ 0.,  0.],
        [ 0.,  0.]],

       [[ 0.,  0.],
        [ 0.,  0.],
        [ 0.,  0.]]])
Matlab
>> zeros(2, 3, 2)
ans(:,:,1) =
     0     0     0
     0     0     0

ans(:,:,2) =
     0     0     0
     0     0     0
However you are calculating the same matrix. Take a look at Numpy for Matlab users; it will guide you in converting Matlab code to Numpy.
For example, if you are using OpenCV, you can build an image using numpy, taking into account that OpenCV uses BGR representation:
import cv2
import numpy as np

# use uint8 so OpenCV interprets values 0-255 as intensities
a = np.zeros((100, 100, 3), dtype=np.uint8)
a[:, :, 0] = 255  # channel 0 is B in OpenCV's BGR order
b = np.zeros((100, 100, 3), dtype=np.uint8)
b[:, :, 1] = 255  # green
c = np.zeros((100, 200, 3), dtype=np.uint8)
c[:, :, 2] = 255  # red

img = np.vstack((c, np.hstack((a, b))))
cv2.imshow('image', img)
cv2.waitKey(0)
If you take a look at matrix c you will see it is a 100x200x3 matrix, which is exactly what is shown in the image (in red, as we have set the R channel to 255 while the other two remain at 0).
No need to go into such deep technicalities and get yourself blasted. Let me explain it in the easiest way. We all studied "sets" during our school days in mathematics. Just consider a 3D numpy array as a formation of "sets".
x = np.zeros((2,3,4))
Simply Means:
2 Sets, 3 Rows per Set, 4 Columns
Example:
Input
x = np.zeros((2,3,4))
Output
Set # 1 ---- [[[ 0., 0., 0., 0.],   ---- Row 1
               [ 0., 0., 0., 0.],   ---- Row 2
               [ 0., 0., 0., 0.]],  ---- Row 3

Set # 2 ----  [[ 0., 0., 0., 0.],   ---- Row 1
               [ 0., 0., 0., 0.],   ---- Row 2
               [ 0., 0., 0., 0.]]]  ---- Row 3
Explanation:
See? we have 2 Sets, 3 Rows per Set, and 4 Columns.
Note: whenever you see a "set of numbers" enclosed in double brackets from both ends, consider it a "set". And 3D and 3D+ arrays are always built on these "sets".
As much as people like to say "order doesn't matter, it's just convention", this breaks down when crossing domain interfaces, i.e. transferring from C ordering to Fortran ordering or some other ordering scheme. There, precisely how your data is laid out and how shape is represented in numpy is very important.
By default, numpy uses C ordering, which means contiguous elements in memory are the elements stored in rows. You can also use FORTRAN ordering ("F"), which instead stores elements column by column, so that elements within a column are contiguous.
Numpy's shape additionally has its own order in which it displays the shape. In numpy, shape is largest stride first, i.e., in a 3d array the first entry is the least contiguous dimension: Z, or pages, or the 3rd dim, etc. So when executing:
np.zeros((2,3,4)).shape
you will get
(2,3,4)
which is actually (frames, rows, columns). Doing np.zeros((2,2,3,4)).shape instead would mean (metaframes, frames, rows, columns). This makes more sense when you think of creating multidimensional arrays in C-like languages. For C++, creating a non-contiguously defined 4D array results in an array [ of arrays [ of arrays [ of elements ]]]. This forces you to dereference the first array that holds all the other arrays (4th dimension), then do the same all the way down (3rd, 2nd, 1st), resulting in syntax like:
double element = array4d[w][z][y][x];
In Fortran, this indexing order is reversed (x is instead first: array4d[x][y][z][w]), most contiguous to least contiguous, and in Matlab, it gets all weird.
Matlab tried to preserve both the mathematical default ordering (row, column) and column-major internal storage for its libraries, while not following the C convention of dimensional ordering. In Matlab, you order this way:
double element = array4d[y][x][z][w];
which defies all convention and creates weird situations where you are sometimes indexing as if row ordered and sometimes column ordered (such as with matrix creation).
In reality, Matlab is the unintuitive one, not Numpy.
Read this article for better insight: numpy: Array shapes and reshaping arrays
Note: NumPy reports the shape of 3D arrays in the order layers, rows, columns.
I also got confused by this initially in NumPy. When you say:
x = np.zeros((2,3,4))
it is interpreted as:
Generate a 3d array with 2 matrices of 3 rows each, where each row contains 4 elements.
Numpy always starts assigning dimensions from the outermost and then moves inward.
Rule of thumb: a 2d array is a matrix.
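A quick sketch of that outermost-first reading (my own example): each successive index peels off one level:

import numpy as np

x = np.zeros((2, 3, 4))
print(x.shape)        # (2, 3, 4): 2 blocks, each 3 rows by 4 columns
print(x[0].shape)     # (3, 4): the first "matrix"
print(x[0, 1].shape)  # (4,): one row of that matrix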